The Cek1 and Hog1 Mitogen-Activated Protein Kinases Play Complementary Roles in Cell Wall Biogenesis and Chlamydospore Formation in the Fungal Pathogen Candida albicans
ABSTRACT The Hog1 mitogen-activated protein (MAP) kinase mediates an adaptive response to both osmotic and oxidative stress in the fungal pathogen Candida albicans. This protein also participates in two distinct morphogenetic processes, namely the yeast-to-hypha transition (as a repressor) and chlamydospore formation (as an inducer). We show here that repression of filamentous growth occurs both under serum limitation and under other partially inducing conditions, such as low temperature, low pH, or nitrogen starvation. To understand the relationship of the HOG pathway to other MAP kinase cascades that also play a role in morphological transitions, we have constructed and characterized a set of double mutants in which we deleted both the HOG1 gene and other signaling elements (the CST20, CLA4, and HST7 kinases, the CPH1 and EFG1 transcription factors, and the CPP1 protein phosphatase). We also show that Hog1 prevents the yeast-to-hypha switch independently of all the elements analyzed and that the inability of the hog1 mutants to form chlamydospores is suppressed when additional elements of the CEK1 pathway (CST20 or HST7) are altered. Finally, we report that Hog1 represses the activation of the Cek1 MAP kinase under basal conditions and that Cek1 activation correlates with resistance to certain cell wall inhibitors (such as Congo red), demonstrating a role for this pathway in cell wall biogenesis.
Polymorphism, that is, the ability to acquire different morphologies, has long been considered a major virulence factor in the human fungal pathogen Candida albicans. This fungus is present on the skin and mucosal surfaces of many organisms, including humans, mainly in a unicellular yeast-like form, while in infected tissues, different morphologies (yeast, mycelia, and even chlamydospores) have been observed (9,13). These morphologies have distinct abilities to adhere, proliferate, invade, or escape phagocytic cells and, therefore, contribute to different degrees to the pathogenesis of the infection. The transition from the yeast form to the filamentous form of growth is induced by certain chemicals (14,18,20,48), a temperature close to 37°C (30), and a neutral pH (49), while chlamydospore formation is induced in vitro under special conditions, such as a low concentration of glucose, darkness, low temperature (24 to 28°C), and microaerophilia.
The molecular mechanisms involved in the regulation of polymorphism in C. albicans are very complex. Genetic analysis has shown the implication of several genes and regulatory cascades in this process (31,37,54,56). These include, among others, the cyclic AMP (cAMP)-dependent protein kinase pathway and the mitogen-activated protein (MAP) kinase pathways. The cAMP pathway leads to an increase in intracellular cAMP (44) and controls the Efg1 transcription factor (16,51,52). C. albicans efg1 mutants are defective in both filamentation and chlamydospore formation (50,51) and have reduced virulence in certain models of experimental infection (33). Other pathways involved in filamentation are mediated by MAP kinases and include the Cek1-mediated pathway and the HOG pathway. The Cek1 pathway involves the Cst20 PAK-like protein, the Hst7 MAP kinase kinase (26), the Cek1 MAP kinase (11,55), and the Cph1 transcription factor (32). Mutants in these genes present defects in hyphal development to different degrees on certain media and have reduced virulence in animal models. Other elements that have been partially characterized include the CPP1 phosphatase (11) and the PAK-like kinase Cla4 (27,34). The HOG (high-osmolarity glycerol response) MAP kinase pathway has also been implicated in the morphological transition, as deletion of certain elements of the pathway results in enhanced hyphal growth on serum and altered colony morphologies on certain media (1,4). In addition, hog1 mutants are not able to form chlamydospores (2). In Saccharomyces cerevisiae, a similar situation occurs, and deletion of HOG1 allows efficient cross talk to the Kss1-mediated pathway and the Fus3-mediated mating pathway (40). In the present work, we demonstrate that the enhanced hyphal growth of C. albicans hog1 mutants is independent of the CEK1 pathway and the Efg1 transcription factor while, in contrast, we show that the role of Hog1 in chlamydospore development is dependent on this pathway.
We also propose that the resistance of certain mutants of the HOG pathway to chitin-interfering compounds is linked to hyperactivation of the Cek1 MAP kinase.
MATERIALS AND METHODS
Strains and growth conditions. Yeast strains are listed in Table 1. For clarity, and unless otherwise stated, a mutant in geneX (hog1, cst20, etc.) will always indicate the homozygous geneX/geneX Ura+ strain. Yeast strains were grown at 37°C (unless otherwise stated) in YPD medium (1% yeast extract, 2% glucose, 2% peptone) or SD minimal medium (2% glucose, 0.67% yeast nitrogen base without amino acids) with the appropriate auxotrophic requirements (50 µg/ml).
The ability of cells to undergo the yeast-to-hypha transition was tested using Lee's medium at different pHs (4.3 to 5.8 and 6.7) (30), SD adjusted to the pHs indicated, fetal bovine serum, or YPD medium plus 5% fetal bovine serum. To check the dimorphic transition, cells were inoculated into prewarmed liquid medium at 10⁵ cells per ml. Growth in liquid medium was estimated as the absorbance at 600 nm (A600). Uridine and histidine were routinely added to the liquid and solid media used for phenotypic assays to minimize differences between strains. Typically, overnight cultures were inoculated into fresh medium to an optical density of 0.1 (measured at 600 nm); when exponential-phase cells were required, experiments were performed once cultures reached an optical density of 1. A 24-h culture was routinely used when stationary-phase cells were required.
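The back-dilution used to start these cultures (overnight culture diluted into fresh medium to an optical density of 0.1) is simple proportional arithmetic, sketched below; the function name and the example OD values are illustrative, not taken from the paper.

```python
def backdilution_volume_ml(od_overnight, od_target, final_volume_ml):
    """Volume of overnight culture (ml) needed to reach od_target in final_volume_ml,
    assuming absorbance at 600 nm scales linearly with cell density."""
    if od_overnight <= od_target:
        raise ValueError("overnight culture must be denser than the target")
    return final_volume_ml * od_target / od_overnight

# e.g. diluting an overnight culture at A600 = 4.0 to A600 = 0.1 in 50 ml of fresh medium
print(backdilution_volume_ml(4.0, 0.1, 50.0))  # -> 1.25
```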
Sensitivity to different compounds (oxidative agents, NaCl, sorbitol, Congo red, or calcofluor white) was tested on solid YPD medium. Serially diluted (1/10) cell suspensions were spotted onto plates to examine the growth of the different strains. Plates were incubated overnight at 37°C unless otherwise indicated.
Chlamydospore formation was assayed essentially as indicated previously (50). The borders of more than 50 colonies were examined for each strain tested.
Construction of strains. All strains generated in the present study were obtained by disrupting the HOG1 gene in various single-mutation strains of C. albicans. HOG1 gene disruption was performed as previously reported (46), following the strategy of Fonzi and Irwin (17) and using the transformation method developed by Köhler et al. (24). Gene deletion was verified by Southern blotting. Genomic DNA was digested with EcoRI and HpaI, and the probe was obtained by PCR using the primers o-HOG1 ext (GAGTAGTAGTTTTGGATAAATGTA) and HE2r2 (GATTTGCTTCCTGTACTCAACGTT).
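As a quick sanity check on the probe primers listed above, their length and GC content can be computed; the `gc_content` helper is illustrative and not part of the authors' protocol (only the two primer sequences come from the text).

```python
# Primer sequences as given in the text.
PRIMERS = {
    "o-HOG1 ext": "GAGTAGTAGTTTTGGATAAATGTA",
    "HE2r2": "GATTTGCTTCCTGTACTCAACGTT",
}

def gc_content(seq):
    """GC content of a DNA sequence, in percent."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

for name, seq in PRIMERS.items():
    assert set(seq) <= set("ACGT"), f"unexpected base in {name}"
    print(f"{name}: {len(seq)} nt, {gc_content(seq):.1f}% GC")
```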
Protein extracts and immunoblot analysis. Overnight cultures were refreshed to an optical density of 0.1 (measured at 600 nm), and samples were collected when cultures reached an optical density of 1. Alternatively, stationary-phase cultures were refreshed in YPD or YPD plus Congo red, and samples were taken from the stationary-phase culture and after 1 and 2 h of growth under these conditions. Cell extracts were obtained as previously described (36). Equal amounts of protein were loaded in each lane, as assessed by absorbance measurement of the samples at 280 nm and Ponceau red staining of the membranes prior to blocking and detection. Blots were probed with an anti-phospho-p42/44 MAP kinase (Thr202/Tyr204) antibody (Cell Signaling Technology, Inc.), an anti-ScHog1 polyclonal antibody (Santa Cruz Biotechnology), and Ab-CaCek1 (developed in our laboratory) and developed according to the manufacturer's instructions using the Hybond ECL kit (Amersham Pharmacia Biotech).
β-1,3-Glucanase sensitivity assay. To measure the inhibition of growth caused by Zymolyase, cells from an exponentially growing culture were inoculated at an optical density at 600 nm (OD600) of 0.025 in YPD medium supplemented with different amounts of Zymolyase 100T (ICN Biomedicals, Inc.). The assay was performed in a 96-well plate in duplicate rows, and plates were incubated overnight at 37°C. Zymolyase was suspended in Tris-HCl (pH 7.5) with 5% glucose. Growth is depicted as the percentage of growth in YPD supplemented with Zymolyase relative to growth in YPD alone. Graphs represent the means of the results from at least three independent experiments.
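The growth readout described above reduces to a simple percentage averaged over replicates. A minimal sketch, with illustrative OD600 readings (the function names and values are not from the paper):

```python
def relative_growth(od_treated, od_untreated):
    """Growth in Zymolyase-supplemented YPD as a percentage of growth in YPD alone."""
    return 100.0 * od_treated / od_untreated

def mean_relative_growth(replicates):
    """Average the percentage over independent experiments,
    each given as an (od_treated, od_untreated) pair."""
    values = [relative_growth(t, u) for t, u in replicates]
    return sum(values) / len(values)

# three illustrative independent experiments at one Zymolyase dose
print(round(mean_relative_growth([(0.60, 1.20), (0.55, 1.10), (0.66, 1.10)]), 1))  # -> 53.3
```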
RESULTS
hog1 mutants are derepressed in the yeast-to-hypha transition. We have previously shown that hog1 mutant cells are derepressed in hyphal formation when cells are exposed to limiting concentrations of serum (1). This result indicated that the threshold level required to activate filamentation in hog1 mutant cells is lower than in the wild type. In the present work, we investigated whether this effect was exclusive to serum or could also be mimicked by other conditions known to promote morphological transitions in C. albicans, such as pH and temperature. When cells were grown in minimal medium at 37°C, both the wild type and hog1 mutants were able to induce hyphal growth at pH 6.7; when the pH was lowered to 4.5, only the hog1 mutant was able to form filaments (Fig. 1A). A similar behavior was observed when cells were grown in liquid Lee's medium (Fig. 1A). A pH below 5 prevented filamentation of the wild-type strain; in contrast, the hog1 mutant was able to undergo the morphological transition at any pH. Finally, the enhanced hyphal formation of the hog1 mutant was also evident when temperature was used as the inducer of filamentation. As shown in Fig. 1B, when cells were grown in 5% serum at low temperature (24 or 30°C), only the hog1 mutant displayed a filamentous phenotype, whereas wild-type cells displayed hypha-like structures only at 37°C. We conclude from these observations that the absence of the Hog1 MAP kinase leads to enhanced hyphal formation under several conditions (low serum concentration, low pH, and low temperature) and, therefore, that Hog1 plays a constitutive/basal role in repressing the morphological transition.
The repression of filamentation mediated by Hog1 is not dependent on the Cek1 MAP kinase. In S. cerevisiae, Hog1 prevents cross talk between the HOG and the pheromone response/invasive growth pathways (19,40). We explored the existence of a similar mechanism in C. albicans by analyzing (i) the phosphorylation state of the MAP kinases under different conditions and (ii) the ability to undergo the yeast-to-hypha transition in response to physiological stimuli. For the first purpose, antibodies that recognize the phosphorylated TEY motif of growth MAP kinases (Cek1 and Mkc1) (4) were used, and whole-cell extracts from cells grown under different conditions were analyzed. Immunodetection studies showed a constitutive basal activation of Cek1 when exponentially growing cells of the hog1 mutant (but not the wild type) were used (4,38). The levels of phospho-Cek1 were 2 to 4 times higher in hog1 cells than in wild-type cells (as determined by autoradiography), suggesting that the enhanced hyphal growth of hog1 mutants may be the result of constitutive activation of the CEK1-mediated pathway. We tested this assumption genetically by constructing double mutants combining hog1 with mutations in other signaling elements. For this purpose, a HOG1-hisG-URA3-hisG disruption cassette was used to delete the HOG1 gene in cla4, cst20, hst7, cpp1, cph1, efg1, and cph1 efg1 mutants. We checked the basal state of Cek1 phosphorylation in the mutant strains generated. Activation of Cek1 completely disappeared in the hst7 mutant; furthermore, this signal was also absent in hst7 hog1 mutants (Fig. 2A), indicating that the Hst7 MAP kinase kinase is required to phosphorylate the Cek1 MAP kinase (MAPK). In contrast, deletion of CST20, CPH1, and CPP1 had no evident effect on Cek1 phosphorylation. Single mutants (cla4, cst20, cph1, and cpp1) displayed Cek1 phosphorylation similar to that of the wild type (Fig. 2A), and deletion of the HOG1 gene in these backgrounds resulted in increased phospho-Cek1 levels similar to those of the hog1 single mutant. These immunodetection assays also revealed a significant and reproducible reduction in the amount of Cek1 protein in cla4 extracts; remarkably, the Cek1 protein level was restored in the cla4 hog1 double mutant. The increased activation of Cek1 is not exclusive to hog1 mutants, as it was recently reported in other mutants of the HOG pathway, such as the ssk1 mutant (45) and the pbs2 mutant (4). We conclude from these observations that the HOG pathway represses the activation of the CEK1-mediated pathway.
To determine whether the enhanced hyphal formation of the hog1 mutant correlated with Cek1 phosphorylation, we performed specific filamentation assays. The ability of these strains to form filaments was tested using a subinducing serum concentration (5%) and incubation at 30°C. These conditions were chosen because they allowed us to clearly discern between the behavior of hog1 and wild-type strains. Assays in liquid media revealed that all strains tested grew as yeast cells in YPD medium, but in 100% serum, they all formed filaments (Fig. 2B). This result contrasts with previously published data showing that cla4 mutants were unable to form filaments (28); in our laboratory, cla4 cells were able to form filaments when grown in 100% serum. However, under limiting serum concentrations, all of the mutant strains lacking the HOG1 gene were able to form true filaments (Fig. 2B), including those in which phosphorylation of Cek1 was not detected, such as the hst7 hog1 mutant. These data indicate that the hyperfilamentous phenotype is not due to activation of the CEK1-mediated pathway in C. albicans. The role of the Cph1 and Efg1 transcription factors, implicated in the morphological transition, was also analyzed in relation to the HOG1 gene. The double cph1 efg1 mutant was unable to form filaments under any laboratory conditions (although hyphal forms have been isolated in vivo from the throat of gnotobiotic piglets) (43); nevertheless, disruption of the HOG1 gene in this background resulted in the characteristic derepressed phenotype of hog1 mutants (Fig. 3). The cph1 hog1 and efg1 hog1 double mutants also displayed an enhanced ability to form true filaments. These data suggest that Hog1 is a dominant repressor of filamentation, probably acting through other transcription factors.
FIG. 2. MAP kinase activation and cell morphologies. (A) Ten milliliters of exponentially growing cell culture (OD600 of 1) was taken and processed for immunoblot assay. The same membrane was incubated sequentially with the antibodies (Ab) indicated. Ab p42-44 P, phospho-p42/44 MAP kinase; Ab Hog1, ScHog1 polyclonal antibody; Ab Cek1, Ab-CaCek1; Cek1*, phosphorylated Cek1. (B) Cell morphology of different mutants under subinducing conditions. Cells were inoculated at 10⁶ cells/ml in YPD plus 5% serum or in 100% serum and incubated at 30°C for 5 h before being photographed. Bars, 10 µm. wt, wild type.
EISMAN ET AL. EUKARYOT. CELL
Blockage of the CEK1-mediated pathway suppresses the defect in chlamydospore formation of hog1 mutants. Given that the CEK1-mediated pathway has been implicated in the dimorphic transition and that it engages in cross talk with the HOG pathway, we aimed to determine its role in chlamydospore formation. When the single cla4, cst20, hst7, cek1, cph1, and cpp1 mutants were analyzed, they were all found to form these structures at an abundance and degree of maturity similar to those of wild-type cells. The behavior of cpp1 mutants has also been reported recently (47). Interestingly, the analysis of double mutants implicated the Cek1 pathway in chlamydospore formation, since the hog1 cst20, hog1 hst7, and hog1 cpp1 double mutants were able to form such structures. In contrast, deletion of the HOG1 gene in a cla4 mutant generated a hog1 phenotype, that is, an inability to form chlamydospores (Fig. 4). This result indicates that the mechanism inhibiting the formation of chlamydospores in hog1 cells is CST20, HST7, and CPP1 dependent.
The epistatic relationship between Hog1 and Efg1 was also analyzed using this approach. Both the efg1 and hog1 mutations have been shown to block this process. The double efg1 hog1 mutant (as well as the cph1 efg1 hog1 triple mutant) was unable to form chlamydospores. Overexpression of the EFG1 gene under the control of the PCK1 promoter in the double efg1 hog1 mutant (as well as in a hog1 mutant) did not suppress the hog1 phenotype (Fig. 5). Furthermore, overexpression of the HOG1 gene under the control of the strong constitutive ACT1 promoter did not restore this capacity in the efg1 hog1 double mutant (not shown). Both results suggest that chlamydospore formation could be controlled by two independent pathways, one mediated by Efg1 and the other by Hog1.
The role of Hog1 in mediating resistance to osmotic and oxidative stresses is independent of Cek1. The HOG pathway is required for the adaptation of cells to oxidative and osmotic stresses in C. albicans (1,46). The role of CLA4 and other elements of the putative CEK1-mediated pathway in the response to osmotic and oxidative stress had not been reported previously. None of the cek1, hst7, cst20, cla4, cph1, efg1, or cph1 efg1 mutants displayed sensitivity to osmotic stress (Fig. 6) or to oxidants (data not shown) compared to wild-type cells. In addition, the single cla4, cst20, and hst7 mutations did not impair signaling to the other MAPKs (Hog1 and Mkc1) in response to NaCl or H2O2 (data not shown). Furthermore, combining these mutations in a hog1 background did not aggravate the susceptibility of the hog1 mutant to either osmotic (NaCl and sorbitol) or oxidative (H2O2 and menadione) stress. These results suggest that the role of the HOG pathway in the response to stress is at least partially independent of Cla4, Cst20, Hst7, Cpp1, Cph1, and Efg1.
Congo red resistance is dependent on Cek1 activation. The Cek1 MAP kinase is involved in the biogenesis of the cell wall, since mutants defective in this MAP kinase, and in other elements that mediate its activation, show sensitivity to certain cell wall assembly inhibitors such as Congo red and calcofluor white (45). As hog1 mutants also present cell wall alterations (1) and constitutively activate the Cek1 MAP kinase (4,45), we reasoned that both phenomena could be linked. This hypothesis was tested genetically by performing assays of sensitivity to Congo red and calcofluor white on solid media. As shown in Fig. 7, the cst20, cla4, hst7, cek1, cph1, and efg1 mutant strains showed impaired growth in the presence of these compounds, while a cpp1 mutant displayed a phenotype close to that of the wild-type strain. Deletion of HOG1 in these strains resulted in two different phenotypes (Fig. 7). An hst7 hog1 mutant showed an hst7 phenotype; therefore, the lack of the HOG1 gene did not improve growth in the presence of cell wall-disturbing agents, which clearly correlated with the absence of Cek1 activation. However, in cst20, cla4, and cph1 mutants, the absence of the HOG1 gene enhanced growth in the presence of Congo red and calcofluor white, consistent with the fact that these mutants displayed Cek1 phosphorylation levels similar to those of the hog1 mutant (Fig. 2A). The role of the Cph1 and Efg1 transcription factors was also analyzed. As mentioned above, the sensitivity of the cph1 mutant to cell wall-interfering agents is reversed to resistance when the HOG1 gene is lacking (Fig. 7). This effect does not occur in the case of efg1, since both efg1 and efg1 hog1 mutants display an increased sensitivity to Congo red and calcofluor white, suggesting a possible epistatic relationship between Hog1 and Efg1. Remarkably, the double cph1 efg1 mutant was resistant to these compounds, arguing for the implication of both transcription factors in the architecture of the cell wall. This result suggests a different mechanism for the two proteins in the biogenesis of the cell wall. Deletion of the HOG1 gene in a cph1 efg1 background did not significantly alter the resistant phenotype of the double cph1 efg1 mutant.
Recently, Cek1 activation has been shown to correlate with cellular growth and/or the transition from stationary to exponential phase (45). Congo red inhibits the growth of C. albicans in liquid cultures in a dose-dependent manner. We therefore sought to correlate the two phenomena (Cek1 activation and growth, measured as optical density) using a compound that has a different effect on wild-type and hog1 cells. Cells were allowed to enter stationary phase and were then diluted in media containing different amounts of Congo red. Samples were taken at 1 and 2 h and processed for Western blot analyses. As shown in Fig. 8, levels of activated Cek1 were inversely dependent on Congo red concentration, consistent with the inhibition of growth caused by this compound. In addition, Cek1 phosphorylation was always higher in the hog1 strain than in the wild-type strain (independent of the time of sample withdrawal) and appeared earlier in this mutant at the same concentration (see, for example, the lanes at 1 h). As shown in the growth curves, hog1 mutant cells suffered a less pronounced growth delay in the presence of Congo red than the wild-type strain (Fig. 8B).
Previous studies have revealed that mutants in the HOG pathway (in both C. albicans and S. cerevisiae) are sensitive to Zymolyase, a β-1,3-glucanase-enriched enzyme preparation (3,4,23). To characterize in more detail the relationship between cell wall composition/architecture and the Cek1- and Hog1-mediated MAPK pathways, we performed the following assay. Cells were grown overnight in YPD medium supplemented with different amounts of Zymolyase, and cell growth was quantified as the final OD reached. The cst20, hst7, cek1, and cph1 mutants were found to be more sensitive to Zymolyase than the wild type. Deletion of the HOG1 gene in hst7 and cph1 mutants slightly aggravated the Zymolyase-sensitive phenotype (Fig. 9); however, the cst20 hog1 double mutant displayed increased resistance to the glucanase. In agreement with the phenotype observed on Congo red and calcofluor white plates, a cpp1 mutant was not sensitive to β-1,3-glucanase. The efg1 and cph1 efg1 mutants showed similar sensitivities to Zymolyase, lower than that of cph1 mutants; deletion of HOG1 aggravated these phenotypes to a sensitivity similar to that of the hog1 mutant. This observation suggests that Hog1 plays a role in glucan assembly/regulation independent of Efg1 and Cph1. Deletion of CLA4 rendered cells drastically sensitive to cell wall-interfering compounds, and further deletion of the HOG1 gene slightly improved growth in the presence of these compounds (although still far from the resistance attained by the hog1 mutant), suggesting that Cla4 and Hog1 contribute independently to cell wall biogenesis (Fig. 7). This idea was reinforced when susceptibility to glucanase was tested: a cla4 mutant was as resistant as the wild-type strain, while the double cla4 hog1 mutant displayed the sensitive phenotype characteristic of hog1 mutants (Fig. 9).
DISCUSSION
The aim of the current work was to investigate the relationship between the HOG and the Cek1-mediated MAPK pathways. Both routes have been implicated in important cellular functions such as morphogenesis and cell wall construction. The data obtained in this work are summarized in the model shown in Fig. 10.
In S. cerevisiae, a genetic interaction between both routes has been described previously (15,40). In the absence of either the HOG1 or PBS2 gene, osmotically stressed cells display invasive growth on solid media, shmoo projection, and expression of mating-type-specific genes; these phenotypes depend on transmission of the signal through Sho1 to Ste20, Ste11, and Ste7-Kss1. We demonstrate that, in C. albicans, the mechanism of cross talk is different. In this organism, deletion of some of the predicted elements of the pathway (CST20, HST7, CEK1, and CPH1) generates mutants that show defects on certain solid media that induce morphological transitions, although they retain the ability to form filaments on serum. In addition, hog1 and pbs2 mutants display an enhanced ability to form filaments (1,4) independent of the stimulus tested (pH, temperature, or serum concentration) (Fig. 1). This occurs even in the absence of osmotic stress, suggesting that stress-triggered activation of Hog1 is not required for its effect on filamentation. However, genetic analysis of double mutants in the HOG and CEK1-mediated pathways shows that the derepressed behavior of hog1 cells is not mediated by the Cek1 pathway, since the hog1 hyperfilamentous phenotype is dominant when the Cek1 pathway is impaired (Fig. 2 and 3). A similar situation is observed when the HOG1 gene is deleted in concert with the EFG1 and CPH1 genes (Fig. 2 and 3). Deletion of EFG1 and CPH1 renders cells unable to form filaments under most laboratory conditions tested, although not in vivo (43). The triple cph1 efg1 hog1 deletion mutant was able to form filaments under subinducing conditions, similar to the hog1 mutant. These data indicate that HOG1 might exert its repressing effect through additional elements, not Efg1 or Cph1. Potential candidates include RBF1 (21) and TUP1, which have not been accommodated in any signaling pathway mediated by MAP kinases. Deletion of these genes leads to enhanced (RBF1) (22) or even constitutive (TUP1) (5,6) hyphal growth. The Tup1 protein is a strong candidate, as the Ssn6-Tup1 repressor has been implicated in S. cerevisiae in the induction of certain HOG1-dependent genes (35); Hog1 could signal environmental changes to Tup1 in C. albicans and consequently relieve the repression of certain filamentation-responsive genes.
FIG. 7. Growth in the presence of cell wall-disturbing compounds. Serial dilutions of cells were spotted on plates supplemented with calcofluor white or Congo red, and plates were incubated at 37°C for 24 h before photographs were taken. wt, wild type.
We have also shown that the HOG pathway is involved in the formation of chlamydospores, a process that occurs under defined environmental conditions, such as low temperature and low oxygen concentration on specific media. It can also occur, apparently, in vivo, as chlamydospore-like cells have been isolated from the gastrointestinal tract of cyclophosphamide-treated mice (9). It has been suggested that chlamydospores are resistant forms, since they display a thickened cell wall which could protect against environmental challenges. Moreover, most C. albicans clinical isolates are able to form chlamydospores, arguing for an important role of these structures in C. albicans biology. Both the EFG1 and HOG1 genes are essential for the formation of chlamydospores (2,50), implicating both a MAPK signal transduction pathway and the cAMP pathway in this process. We present data suggesting that the two proteins, Hog1 and Efg1, act independently, since overexpression of the EFG1 gene did not restore the ability to form chlamydospores in the hog1 mutant and, similarly, overexpression of the HOG1 gene did not restore the formation of chlamydospores in the efg1 mutant. The reasons for the inability of hog1 mutants to form chlamydospores are not yet known (2). One possible explanation could be oxidative stress: chlamydospore formation is favored under microaerophilia and in the absence of light, suggesting that reactive oxygen species impair this process. The absence of Hog1-dependent defense mechanisms in hog1 mutants could generate a higher concentration of reactive oxygen species and, therefore, an inability to form chlamydospores. An additional and alternative explanation could be a repressive role of the Cek1 pathway in chlamydospore formation, as this pathway is constitutively active in hog1 mutants (Fig. 2) and pbs2 mutants (4). This suggests that a coordinated balance between both pathways is necessary to generate such structures.
In a recent study, a number of different genes were reported to be required for chlamydospore formation, such as SUV3, SCH9, and ISW2, which are involved in mitochondrial function, glycogen accumulation, and chromatin remodeling, respectively (39). It is reasonable to assume that the expression of some of these genes may be dependent on HOG1 and/or CEK1. It must be stated, however, that the effect of the Cek1 pathway seems to be independent of oxidative stress, since Cek1 pathway mutants neither show altered sensitivity to oxidants nor increase the sensitivity of hog1 cells to these compounds (data not shown).
The results presented in this work also show that hog1 mutants display increased resistance to certain cell wall-inhibitory compounds, such as Congo red and calcofluor white, indicating a relationship with cell wall biogenesis. We propose that Cek1 activation is responsible for this effect, as evidenced by biochemical and genetic analyses. Failure to activate Cek1 (as occurs in hog1 hst7 cells) suppresses the resistance phenotype of hog1 mutants, while deletion of the CPP1 phosphatase gene or the CST20 PAK gene has minor effects, in accordance with the activation pattern determined by Western blot analyses. However, the stimuli (either extra- or intracellular) involved in Cek1 activation remain unclear. In S. cerevisiae, Kss1 (a Cek1 homologue) participates in the SVG (sterile vegetative growth) pathway, which is involved in cell wall biogenesis (12,29). Defects in protein glycosylation cause its constitutive, SHO1-dependent activation. Cek1 activation could be triggered in response to physiological situations that require active cell wall remodeling, such as exit from stationary phase and entry into the exponential phase of growth; this sensing mechanism is fully functional in hog1 mutants (Fig. 2 and 8), despite their derepressed Cek1 activation. The stimuli that lead to activation of Cek1 are not yet clear, although recent data (38) indicate that Cek1 is activated in response to Zymolyase, a β-glucanase-enriched enzymatic preparation. Furthermore, Zymolyase, as well as Congo red, also activates the cell integrity MAP kinase Mkc1, similar to what is observed for the Slt2 protein in S. cerevisiae (36).
Interestingly, the cst20 and cst20 hog1 mutants activate Cek1 similarly to the wild-type and hog1 strains, respectively, indicating that Cst20 is not the only mediator of Cek1 activation. In C. albicans, the PAK Cla4 protein is a putative transduction element that has been reported to be involved in morphogenesis and virulence in this fungus (28,41). Our results, as revealed by the pattern of MAPK activation, chlamydospore formation, cell wall resistance phenotypes, and filament formation, suggest either that Cla4 is not a member of the pathway mediated by Cek1 or that there is redundancy at this level. Other elements implicated in the transmission of the signal at the level of Cst20, such as Cdc42 (53) or Ste50 (42), could play a role in this process. Unfortunately, construction of the double hog1 cek1 mutant was not possible despite repeated genetic attempts (data not shown), suggesting either synthetic lethality or that the mutant is strongly counterselected under the normal experimental conditions of isolation. Since an hst7 hog1 mutant is viable and a BLAST analysis reveals no functional homologue of Hst7 in the C. albicans genome, one possible explanation for lethality could invoke a downstream mediator of Hst7. Cek2 is a candidate for such a role, since this MAP kinase has been shown to complement the mating deficiency of a fus3 kss1 mutant in S. cerevisiae, and a C. albicans cek1 cek2 mutant is also mating deficient (8). Whether Cek2 is functionally redundant with Cek1 in nonmating functions (such as chlamydospore formation or filamentation) remains, however, open to speculation, since it is also possible that other downstream mediators compensate for the absence of Cek1.
FIG. 9. Susceptibility to Zymolyase. The strains indicated, PAKs and MAP kinases (A) or phosphatases and transcription factors (B), were grown overnight at 37°C in the presence of different amounts of Zymolyase, starting at an OD of 0.025. Growth is depicted as the percentage of growth in YPD supplemented with Zymolyase compared to growth in YPD alone. wt, wild type.
In conclusion, the data obtained in this work indicate that the Hog1- and Cek1-mediated pathways play independent roles in processes such as filamentation and osmotic/oxidative stress resistance but complementary roles in cell wall biogenesis and chlamydospore formation in C. albicans. Further work will be aimed at defining the elements of the HOG pathway responsible for Cek1-mediated signaling.
FIG. 10. Proposed model of interaction between the pathways mediated by the Hog1 and Cek1 MAP kinases. Osmotic stress triggers Hog1 activation through both branches, enabling the cell to adapt to hyperosmotic conditions (black arrow). The Cek1 pathway is involved in the construction of the cell wall (gray arrow); the stimulus is not known and is depicted as a question mark. Regarding morphogenesis, the HOG pathway plays an inhibitory role in the yeast-to-hypha transition; this role is independent of, or dominant over, the CEK1 pathway (discontinuous black bar) and the transcription factor Efg1. Under specific conditions, such as low glucose concentration, darkness, low temperature (24°C to 28°C), and microaerophilia, Hog1 plays an inducing role in the formation of chlamydospores; this positive role is presumably played through Cst20, Ste11, Hst7, and Cek1 (discontinuous thick gray arrow). Under standard growth conditions, Hog1 controls the activation of Cek1 (light gray bar).
VOL. 5, 2006 ROLES OF Cek1 AND Hog1 MAPKs IN C. ALBICANS
Identification and Analysis of Human Sex-biased MicroRNAs
Sex differences are widely observed under various circumstances ranging from physiological processes to therapeutic responses, and a myriad of sex-biased genes have been identified. In recent years, transcriptomic datasets of microRNAs (miRNAs), an important class of non-coding RNAs, have become increasingly accessible. However, a comprehensive analysis of sex differences in miRNA expression has not been performed. Here, we identified the miRNAs differentially expressed between males and females by examining the transcriptomic datasets available in public databases and conducted a systematic analysis of their biological characteristics. Consequently, we identified 73 female-biased miRNAs (FmiRs) and 163 male-biased miRNAs (MmiRs) across four tissues including brain, colorectal mucosa, peripheral blood, and cord blood. Our results suggest that compared to FmiRs, MmiRs tend to be clustered in the human genome and exhibit a higher evolutionary rate, higher tissue specificity of expression, and a lower disease spectrum width. In addition, functional enrichment analysis of miRNAs shows that FmiR genes are significantly associated with metabolic and cell cycle processes, whereas MmiR genes tend to be enriched for functions like histone modification and circadian rhythm. In all, the identification and analysis of sex-biased miRNAs together could provide new insights into the biological differences between females and males and facilitate the exploration of sex-biased disease susceptibility and therapy.
Introduction
Sex difference is a prevalent phenomenon in physiology, disease susceptibility, and clinical therapy [1]. Men and women not only exhibit obvious anatomical differences but also, more importantly, numerous differences in disease susceptibility and therapeutic response. Epidemiological studies have identified differences between men and women in disease incidence and prevalence [2]. Men are more likely to suffer from occlusive coronary artery disease (CAD) [3], autism spectrum disorders (ASD) [4], and stroke [5,6], whereas women exhibit a higher incidence of non-obstructive CAD or microvascular dysfunction [3], rheumatic diseases [7], chronic radiation enteritis (CRE) [8], and post-traumatic stress disorder [9,10]. As for drug response, aspirin has not been definitively proven to prevent cardiovascular events in women, whereas men may gain greater benefit than women from angiotensin-converting enzyme inhibitors [11]. Moreover, molecular differences between male and female samples, including gene expression and somatic mutations, have also been reported from TCGA tumor datasets [12]. Accordingly, dissecting the differences between women and men is critical for precision medicine.
MicroRNAs (miRNAs) are small non-coding RNAs that play pivotal roles in a variety of cellular functions and biological processes such as cell proliferation and differentiation [13,14], growth and development [15], as well as metabolic homeostasis [16]. There is accumulating evidence for differential miRNA expression between women and men across a variety of tissues, and the sex-biased expression of miRNAs could have functional implication [17]. For example, expression of miR-29a and miR-29c, which are involved in neuronal cell maintenance [18], is significantly up-regulated in frontal cortex of female mice in comparison with male mice [19]. Similarly, miRNAs from the miR-200 family, which target the gonadotropin releasing hormone receptor pathway, are also found to be differentially expressed between female and male rats, and this sex-biased expression pattern may partially contribute to the sexual disparity in brain development [20]. Women are more vulnerable to lupus than men, and one possible explanation is the higher expression of miR-98, miR-188, miR-421, and miR-503 in CD4 + T cells of women in comparison with men [21].
Recently, the sex-biased expression of circulating miRNAs (i.e., miRNAs in blood) has attracted much attention for their potential confounding effect that compromises the accuracy of the miRNA biomarkers [22]. For example, miR-221 and let-7g are expressed more prominently in the plasma of women compared to men, and could be sex-specific biomarkers of metabolic syndrome [23]. A more comprehensive, cohort-based survey has identified 35 sex-biased miRNAs after excluding some confounding factors like age and body weight [24]. Nevertheless, whether and how miRNAs exhibit sex-biased expression in tissues other than blood remained largely unexplored. Since sexual dimorphism of miRNAs could influence many physiological and pathological processes, a large-scale identification and analysis of the sex differentially-expressed miRNAs across multiple tissues is required for further understanding their role in human biology and diseases.
With the rapid development of high-throughput sequencing technologies, more than 500 human miRNA transcriptomic datasets and small RNA sequencing datasets across diverse tissues have become available in the Gene Expression Omnibus (GEO) database [25]. Nevertheless, no systematic investigation of the sex-biased expression of miRNAs has been reported yet. In a previous study, we developed a computational framework to identify sex-biased genes (ISBG) based on public gene expression datasets [26]. In this work, we adopted the framework above to identify sex-biased miRNAs (namely ISBM) from public GEO datasets. We finally collected 8 high quality miRNA expression datasets with gender information across multiple tissues ( Figure 1) and performed a series of bioinformatics analyses on the sex-biased miRNAs identified. Our study provides some insights into biological differences between females and males and could serve as a starting point for gender-stratified personalized medicine.
Results and discussion
Identification and experimental validation of the sex-biased miRNAs
Identification of the sex-biased miRNAs
To identify the sex-biased miRNAs, we first downloaded the human miRNA expression datasets across various tissues from the GEO database. Although hundreds of miRNA expression datasets are available in the GEO database, few contain normal samples with unambiguous gender labels. We discarded the datasets that had no normal samples or did not contain both male and female samples, retaining only the datasets that included normal tissues and were annotated with clear gender information. Finally, we obtained 8 miRNA expression datasets suitable for further analysis (Figure 1A).
Subsequently, we used a non-parametric test (Wilcoxon's test) to identify the miRNAs differentially expressed between genders, with only normal samples considered. To retain enough miRNAs for further analysis, we took 0.05 as the P value threshold and did not apply a fold-change cutoff. Accordingly, we identified sex-biased miRNAs across four tissues, including brain, colorectal mucosa, peripheral blood, and cord blood (see Table 1 and Table S1 for details of these miRNAs). Notably, a few miRNAs exhibited conflicting gender bias among different tissues (Table S2). For example, hsa-miR-553 exhibits female bias in brain but male bias in peripheral blood. To avoid ambiguity, we excluded such miRNAs from the following analysis. Consequently, 73 female-biased miRNAs (highly expressed in females, FmiRs) and 163 male-biased miRNAs (highly expressed in males, MmiRs) were finally identified, which accounted for about 14.0% of the total miRNAs in the miRNA datasets analyzed (Figure 1B). Accordingly, 1453 miRNAs that showed no significant sex-biased expression in any of the tissues were grouped as the non-biased miRNAs (NmiRs).
Figure 1 The overview of the computational framework and the identified sex-biased miRNAs. A. The overall computational framework of this work. First, human miRNA expression datasets with gender-labeled normal samples were curated from the GEO database. Second, the sex-biased miRNAs across different tissues were screened using Wilcoxon's test comparing the expression levels between male and female samples (P < 0.05). Finally, various characteristics of these miRNAs were subsequently analyzed using different computational pipelines. B. Fraction of the sex-biased miRNAs in the dataset curated in the current study. C. The overlap of FmiRs and MmiRs between peripheral blood and the other two non-blood tissues, i.e., brain and colorectal mucosa, in this study. D. RT-PCR validation of sex-biased miRNA expression in whole blood samples. The expression levels of selected miRNAs were analyzed using t-test in independently collected whole blood samples of males and females as described in the experimental procedures. The RT-PCR assay was performed on the same batch of blood samples.
*indicates significant difference in miRNA expression between male and female samples (P < 0.05). N = 8-10. FmiR, female-biased miRNA; MmiR, male-biased miRNA; DSW, disease spectrum width, defined as the number of diseases associated with a given miRNA gene divided by the total number of diseases associated with any miRNA genes.
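The per-tissue screening step described above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' code: the function name `screen_sex_biased` and the toy expression values are assumptions, and SciPy's `mannwhitneyu` is used as the two-sample Wilcoxon rank-sum test.

```python
# Minimal sketch of the per-tissue screening: for each miRNA, compare
# female vs. male expression with a Wilcoxon rank-sum test (P < 0.05,
# no fold-change cutoff), then call the bias direction from the means.
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum test

def screen_sex_biased(expr, sex_labels, alpha=0.05):
    """expr: dict mapping miRNA name -> per-sample expression values.
    sex_labels: 'F'/'M' labels aligned with the sample axis.
    Returns {miRNA: 'F' or 'M'} for significantly biased miRNAs."""
    sex_labels = np.asarray(sex_labels)
    biased = {}
    for mirna, values in expr.items():
        values = np.asarray(values, dtype=float)
        female = values[sex_labels == "F"]
        male = values[sex_labels == "M"]
        _, p = mannwhitneyu(female, male, alternative="two-sided")
        if p < alpha:
            biased[mirna] = "F" if female.mean() > male.mean() else "M"
    return biased
```

Calls that pass in one tissue would then be merged across datasets, with miRNAs showing conflicting directions dropped, as described in the Methods.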
Comparison and experimental validation of the sex-biased miRNAs
A previous study identified some sex-related miRNAs in human blood [24], and some of these miRNAs were also found in our analysis. For example, hsa-miR-45 and hsa-miR-16 exhibit higher expression levels in males, in line with our results. Nevertheless, the sex-biased miRNAs in blood cannot fully recapitulate the sex-biased miRNAs in other tissues. In our dataset, only a limited overlap of sex-biased miRNAs between peripheral blood and the non-blood tissues is observed (Figure 1C). There are only 8 shared MmiRs and no shared FmiRs between peripheral blood and the two non-blood tissues, brain and colorectal mucosa, suggesting the importance of surveying tissues other than blood alone when analyzing sex-biased expression of miRNAs.
To experimentally validate the reliability of our computational analysis, we quantified the expression of three miRNAs in separate whole blood samples. These include hsa-miR-296-5p, hsa-miR-548m, and hsa-miR-1262, all of which show biased expression between women and men and are defined as FmiRs in our dataset. RT-PCR analysis indicates that all three of these miRNAs exhibited female-biased expression in whole blood (Figure 1D), which agrees with our computational analysis. Therefore, the RT-PCR assay, at least partially, validates our computational analysis for the identification of sex-biased miRNAs (namely ISBM).
Chromosomal distribution of sex-biased miRNA genes
It is known that miRNA genes are scattered among chromosomes [27]. We speculated that the sex-biased miRNA genes would show a non-random distribution on chromosomes. To test this hypothesis, we first compared the distribution of FmiR and MmiR genes on each chromosome. For 19 out of 23 chromosomes, the proportion of MmiR genes to the total number of miRNA genes on the same chromosome is higher than that of the FmiR genes (Figure 2A). Our previous study has demonstrated that the proportion of female-biased genes (FGs) is higher than that of male-biased genes (MGs) among most of the chromosomes [26]. It is therefore interesting to analyze whether there is a correlation between the chromosomal distribution of the sex-biased miRNAs and that of the sex-biased genes. For an intuitive comparison, we first plotted a Circos graph [28] to show the distribution of sex-biased miRNAs (and genes) per million bp on each chromosome. As shown in Figure 2B, the distributions of the sex-biased miRNAs and genes are not consistent for most chromosomes except the X chromosome. Indeed, no significant correlation is found between the distributions of FmiR genes and FGs (Spearman's correlation rho = -0.24, P = 0.27), nor between those of MmiR genes and MGs (rho = -0.28, P = 0.20), suggesting no obvious connection between sex-biased miRNA genes and coding genes in terms of chromosomal distribution. We also noted that MmiR genes outnumber FmiR genes in total, so the proportion of MmiR genes would be expected to be higher by chance. Nevertheless, for some chromosomes, such as chromosomes 2, 4, 5, 8, and 19, the proportion of MmiRs is more than four-fold higher than that of FmiRs (Figure 2A), which cannot be explained simply by the higher total number of MmiRs. In all, our analysis indicates a non-random distribution of miRNAs across chromosomes.
We further investigated the intra-chromosomal distribution of sex-biased miRNA genes by comparing the chromosomal distance between two miRNA genes from the same group. As shown in Figure 2C, the median intra-chromosomal distance between two MmiR genes is significantly smaller than that for FmiR genes (median distance 1.09E+7 bp vs. 3.67E+7 bp, Wilcoxon's test P = 4.71E-7) or NmiR genes (median distance 1.09E+7 bp vs. 3.46E+7 bp, Wilcoxon's test P = 3.43E-55), indicating that MmiR genes tend to be more clustered on the chromosomes. In contrast, FmiR genes show no obvious clustering tendency. To examine the potential bias for individual chromosomes, the average intra-chromosomal distances between miRNAs on each chromosome were also examined (Figure S1A). The intra-chromosomal distances between sex-biased miRNA genes do not agree well with those between other miRNA genes, indicating a non-random distribution of sex-biased miRNA genes. Finally, we compared the allocation of miRNA genes on the sex chromosomes and autosomes for each tissue (Figure 2D). Sex-biased miRNA genes are located on the autosomes much more frequently than on the sex chromosomes, implying that the sexual dimorphism of miRNA expression cannot be contributed entirely by the sex chromosomes but should also be associated with the differential regulation of autosomal miRNA genes between males and females.
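The clustering comparison can be illustrated with a short sketch: collect all pairwise distances between genes on the same chromosome for each group, then compare the two distance distributions with a rank-sum test. The coordinates below are toy values, not real miRNA gene positions.

```python
# Pairwise intra-chromosomal distances for a group of genes, compared
# between a "clustered" and a "dispersed" group with a rank-sum test.
from itertools import combinations
from scipy.stats import mannwhitneyu

def intra_chrom_distances(genes):
    """genes: iterable of (chromosome, start_position) tuples.
    Returns |distance| for every pair of genes sharing a chromosome."""
    by_chrom = {}
    for chrom, pos in genes:
        by_chrom.setdefault(chrom, []).append(pos)
    return [abs(a - b)
            for positions in by_chrom.values()
            for a, b in combinations(positions, 2)]

# Toy coordinates: one tightly clustered group, one dispersed group
clustered = [("chr1", 1e6), ("chr1", 2e6), ("chr1", 3e6),
             ("chr2", 5e6), ("chr2", 6e6)]
dispersed = [("chr1", 1e6), ("chr1", 5e7), ("chr1", 1.2e8),
             ("chr2", 2e6), ("chr2", 9e7)]
d_clustered = intra_chrom_distances(clustered)
d_dispersed = intra_chrom_distances(dispersed)
_, p = mannwhitneyu(d_clustered, d_dispersed, alternative="two-sided")
# the clustered group's distances are uniformly smaller, so p is small
```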
Evolutionary conservation of sex-biased miRNA genes
Evolutionarily conserved miRNAs are more likely to be associated with diseases [29]. Moreover, the conservation of genes is an important feature to consider when exploring gene functions [30]. To dissect the evolutionary conservation of sex-biased miRNAs, we first divided miRNA genes into 5 groups as described previously [31]. These include the human-specific group (G1), primate-specific group (G2), mammal-specific group (G3), vertebrate-specific group (G4), and the group of miRNA genes present in other more distal species (G5, the most conserved). miRNA genes from each group are listed in Table S3, and the distribution of FmiR, MmiR, and NmiR genes among the groups is shown in Figure 3A. In the fast-evolving groups, G1 and G2, there are proportionally more MmiR genes (49.6%, 56/113) than FmiR genes (30.9%, 17/55) (P = 0.031, OR = 0.46, Fisher's exact test). We further compared the number of species in which the miRNA families of FmiR genes and MmiR genes are present, and found that FmiR families are present in more species than MmiR families (Wilcoxon's test P = 0.050) and NmiR families (Figure 3B, Wilcoxon's test P = 0.017). Together, these results indicate that MmiR genes have a faster evolutionary rate, whereas FmiR genes tend to be more conserved, suggesting that FmiR genes are inclined to play roles in more fundamental biological processes.
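The Fisher's exact test on the conservation groups can be reproduced from the counts given above (17 of 55 FmiR genes vs. 56 of 113 MmiR genes in G1 + G2); a SciPy sketch:

```python
# 2x2 contingency table built from the counts reported in the text:
# rows are FmiR / MmiR genes, columns are in / not in G1+G2.
from scipy.stats import fisher_exact

table = [[17, 55 - 17],   # FmiR genes: 17 of 55 in fast-evolving groups
         [56, 113 - 56]]  # MmiR genes: 56 of 113 in fast-evolving groups
odds_ratio, p = fisher_exact(table, alternative="two-sided")
# odds_ratio comes out at about 0.46, consistent with the reported OR
```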
Figure 2 The chromosomal distribution of the sex-biased miRNA genes. A. Chromosomal distribution of the sex-biased miRNA genes. The y-axis shows the percentage of sex-biased miRNA genes relative to the total miRNA genes on the same chromosome. B. Detailed chromosomal distribution of the sex-biased miRNA genes and sex-biased genes; the number of sex-biased miRNA genes per million bp is plotted using the barplots embedded in the Circos graph. Red, green, purple, and blue bars represent the distributions of the FmiR genes, MmiR genes, FGs, and MGs, respectively. C. The boxplot comparing the intra-chromosomal distances between the sex-biased miRNA genes. ***indicates significant difference in intra-chromosomal distances when comparing the MmiR gene group with either of the other two groups (P < 0.001), according to Wilcoxon's test. D. The percentage of the sex-biased miRNA genes on autosomes and sex chromosomes in each tissue. FmiR, female-biased miRNA; MmiR, male-biased miRNA; FG, female-biased coding gene; MG, male-biased coding gene; NmiR, non-biased miRNA; n.s., non-significant.
Expression regulation and disease spectrum width of the sex-biased miRNAs
Gene expression is another characteristic implicating gene function, and miRNAs expressed in a highly tissue-specific manner tend to be involved in tissue identity and differentiation [32,33]. To understand the tissue expression specificity of sex-biased miRNAs, we computed the tissue specificity index based on the miRNA expression profiles across 40 tissues from Liang and colleagues [34]. We found that MmiRs have a significantly higher tissue expression specificity index than FmiRs (Figure 4A, Wilcoxon's test P = 0.050), indicating that MmiRs tend to be expressed in a tissue-specific manner. Besides, we noted that the tissue specificity of the sex-biased miRNAs is significantly higher than that of NmiRs (Figure 4A, Wilcoxon's test P = 0.0036). This result suggests that the sex-biased miRNAs could play critical roles in distinguishing cell types and tissue development status.
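The tissue specificity index used here (defined in the Figure 4 legend as the maximum expression across the surveyed tissues divided by the summed expression over all of them) is straightforward to compute; a minimal sketch with invented expression values:

```python
# Tissue specificity index: max expression across tissues divided by
# the total expression over all tissues. A value near 1.0 means the
# miRNA is confined to one tissue; 1/n means perfectly uniform.
def tissue_specificity(expression_by_tissue):
    values = list(expression_by_tissue.values())
    total = sum(values)
    if total == 0:
        return 0.0
    return max(values) / total

# Invented profiles: one tissue-restricted miRNA, one broadly expressed
specific = {"brain": 95.0, "blood": 3.0, "liver": 2.0}
broad = {"brain": 30.0, "blood": 35.0, "liver": 35.0}
assert tissue_specificity(specific) > tissue_specificity(broad)
```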
To a certain extent, gene expression is subject to the activity of transcription factors (TFs) [35]. For a better understanding of the regulation of sex-biased miRNAs, we analyzed the TFs regulating the sex-biased miRNA genes. Through extensive exploration of the comprehensive ChIP-seq atlas, we identified the TFs that preferentially bind the proximal genomic regions of sex-biased miRNA genes in each tissue. On average, each sex-biased miRNA gene has 15.7 TF binding sites in its vicinity, significantly more than a non-biased miRNA gene (9.1 TF binding sites per gene on average; P = 1.47E-19, OR = 1.73, Fisher's exact test). Moreover, some TFs are more likely to regulate sex-biased miRNA genes than non-biased miRNA genes. We identified 33 FmiR gene-regulating TFs and 56 MmiR gene-regulating TFs. FmiR gene-regulating TFs tend to be involved in functions like positive regulation of transcription, hemopoiesis, and cellular response to chemical stimulus, whereas MmiR gene-regulating TFs tend to be involved in functions like chromosome organization, response to organic substance, and histone modification (Figure 4B). These TFs are also significantly associated with diseases. For instance, both FmiR gene-regulating TFs and MmiR gene-regulating TFs are associated with Alzheimer's disease and bone mineral density. In addition, MmiR gene-regulating TFs are also associated with Cornelia de Lange syndrome and myeloid leukemia (Figure 4C). Finally, we checked whether these TFs themselves show sex-biased expression, based on the sex-biased genes identified in our previous study [26]. Our analysis indicates no significant overlap between sex-biased genes and TFs that regulate sex-biased miRNA genes (Fisher's exact test, P = 0.53). Among the 33 FmiR gene-regulating TFs and 56 MmiR gene-regulating TFs, only 4 and 9 TFs show sex-biased expression, respectively.
Nevertheless, we note that some of the sex-biased TFs are likely to be associated with diseases that have known sex-biased incidence. For instance, male-biased TF CDK9 can inhibit cell proliferation and induce apoptosis in human breast cancer [36], whereas the gene encoding the female-biased TF PBX1 has a genetic and functional association with bone mineral density, one of the major determinants of risk for osteoporosis [37].
In consideration of the extensive association between miRNAs and diseases, we next examined the relationship between sex-biased miRNAs and diseases using the disease spectrum width (DSW) [38]. Intuitively, DSW describes how many diseases are associated with a particular miRNA gene; a higher DSW indicates wider disease associations. We re-calculated DSW (Table S4) using the updated data of the miRNA gene-disease associations in HMDD [39], which contained 578 miRNA genes, 383 diseases, and 10,381 miRNA gene-disease associations. As shown in Figure 4D, the DSW of FmiR genes is significantly higher than that of MmiR genes (median: 0.013 vs. 0.012, Wilcoxon's test P = 0.049). Combining the findings on tissue specificity and DSW, miRNA genes that are associated with more diseases tend to have lower tissue specificity; this observation is consistent with a previous study [29] showing a negative correlation between the tissue specificity of a miRNA and the number of diseases it is associated with. We further compared the percentage of disease-associated miRNA genes among the FmiR, MmiR, and NmiR groups. We found that FmiR genes (66.7%, 44/66) do not have significantly more disease-associated miRNAs than MmiR genes (62.5%, 90/144) (P = 0.64, OR = 1.20, Fisher's exact test). However, the percentage of disease-associated miRNA genes among NmiRs (38.7%, 336/868) is significantly lower than among FmiR genes (P = 1.3E-5, OR = 0.32, Fisher's exact test) and MmiR genes (P = 1.4E-7, OR = 0.38, Fisher's exact test). These data suggest a more extensive involvement of the sex-biased miRNAs (miRNA genes) in human diseases.
Figure 3 The evolutionary characteristics of the sex-biased miRNA genes. A. The distribution of the FmiR, MmiR, and NmiR genes in different conservation groups. Note that miRNA genes in a more conserved group (e.g., the primate-specific group) do not include the miRNA genes present in less conserved groups (e.g., the human-specific group). B. Comparison of the number of species in which the corresponding miRNA gene family members are present. *indicates a significantly higher number of species in which the family members of FmiR genes are present, in comparison with MmiR and NmiR genes (P < 0.05), according to Wilcoxon's test. n.s., non-significant.
Figure 4 The comparison of tissue expression specificity and disease spectrum width of the sex-biased miRNAs. A. Comparison of tissue expression specificity between FmiRs, MmiRs, and NmiRs. The tissue expression specificity of one miRNA is defined as the ratio of its maximum expression level among the 40 tissues examined to its total expression across all 40 tissues. * P < 0.05; ** P < 0.01, Wilcoxon's test. B. The enriched functions of the FmiR-regulating TFs and MmiR-regulating TFs using the g:Profiler tool; only the top 20% significant terms (P < 0.05) are shown. Numbers in parentheses indicate the numbers of TFs associated with the respective functions. C. The associated diseases of the FmiR-regulating TFs and MmiR-regulating TFs using the DAVID tool (P < 0.05). Numbers in parentheses indicate the numbers of TFs associated with the respective diseases. D. Comparison of disease spectrum width between FmiR genes, MmiR genes, and NmiR genes. *indicates significantly lower disease spectrum width of the MmiR group, when compared to either of the other two groups (P < 0.05), according to Wilcoxon's test. n.s., non-significant.
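The DSW metric follows directly from its definition (the number of diseases linked to a given miRNA gene divided by the total number of diseases linked to any miRNA gene). A minimal sketch with invented associations standing in for the HMDD records:

```python
# Disease spectrum width (DSW) for one miRNA gene, given a list of
# (miRNA_gene, disease) association pairs.
def disease_spectrum_width(associations, gene):
    all_diseases = {disease for _, disease in associations}
    gene_diseases = {disease for g, disease in associations if g == gene}
    return len(gene_diseases) / len(all_diseases)

# Toy associations (illustrative, not actual HMDD content)
assoc = [("miR-21", "hepatocellular carcinoma"),
         ("miR-21", "colorectal cancer"),
         ("miR-21", "lupus"),
         ("miR-155", "lupus"),
         ("let-7", "lung cancer")]
dsw = disease_spectrum_width(assoc, "miR-21")  # 3 of 4 diseases -> 0.75
```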
Functional enrichment analysis of the sex-biased miRNA genes
To learn more about the functions and specific diseases related to the sex-biased miRNAs, we performed functional enrichment analysis of the sex-biased miRNA genes using the TAM tool [40,41] to screen the enriched function terms (P < 0.05). As depicted in Figure 5A, MmiR genes tend to be enriched for terms like histone modification, circadian rhythm, and cell reprogramming. Previous studies have reported that androgen interacts with histone-modifying enzymes [42] and that histone modifications are sexually dimorphic in the developing mouse brain [43]. In contrast, FmiR genes are enriched for functional terms related to apoptosis, lipid metabolism, glucose metabolism, cholesterol metabolism, the immune system, brain development, and a series of cell cycle-related processes (Figure 5A). These results suggest that FmiR genes tend to play roles in the fundamental processes maintaining physiological activities, which is also supported by the aforementioned biological characteristics of FmiR genes. Interestingly, among the top 25 enriched function terms of FmiR genes, 7 are related to the cell cycle and 4 are related to metabolism, suggesting that these miRNA genes may coordinate cell proliferation with metabolic processes. Indeed, some coding genes have been shown to link cell proliferation with metabolism [44], and it is possible that some sex-biased miRNAs could have similar regulatory roles in the cell as well. Moreover, it is well known that sex-biased gene (mRNA) expression, partially induced by sex hormones, significantly influences brain development [45].
Our results further suggest that FmiR genes could also be involved in brain development. Furthermore, we performed functional enrichment analysis on the sex-biased miRNA genes from each tissue; the top 50% significant functional terms are listed in Figure S1. As expected, when we focused on specific tissues, we found that the MmiR genes with higher tissue specificity tend to show more specialized functions. For example, in colorectal mucosa, one of the significantly enriched functions of MmiR genes is carbohydrate metabolism. Indeed, metabolic syndrome has been considered an important risk factor in colorectal neoplasms [46], which supports a plausible hypothesis that MmiRs from colorectal mucosa, through regulating carbohydrate metabolism, might participate in disease progression.
We further investigated the enriched diseases of the sex-biased miRNA genes. Notably, sex-biased miRNA genes are enriched in a myriad of cancers (Figure 5B). We found that FmiR genes are enriched in the miRNAs that are known to be down-regulated in hepatocellular carcinoma, and this observation is plausibly related to the enriched functional term of FmiR genes as tumor suppressors in the above analysis. FmiR genes are also partially enriched in disease terms like Parkinson's disease and Alzheimer's disease (Table S5), coinciding with their enriched functions in brain development and aging. A previous clinical survey has shown that males are more vulnerable to cutaneous tuberculosis than females [47]. Interestingly, our result also indicates that a variety of MmiR genes are significantly associated with lupus vulgaris, a skin disease (Table S6). We also noted that some high-risk diseases for females are associated with MmiR genes (Table S6), and some of these MmiRs are known disease suppressors [48]. For instance, two MmiRs, hsa-miR-424 and hsa-miR-451, are shown to inhibit endometriosis [48,49]. Thus, the lower expression of these MmiRs in females may contribute to the higher susceptibility of females to the associated diseases.
Preliminary analysis on The Cancer Genome Atlas (TCGA) datasets
For more comprehensive understanding of sex-biased miRNA expression, we collected miRNA (miRNA gene) expression profiles in tumor adjacent tissues from TCGA database, which covered 12 cancer types in 9 tissues [50]. We first compared the expression level of sex-biased miRNA genes and NmiR genes in the TCGA dataset. The expression levels across 9 tissues in TCGA dataset are summarized in Figure S2. Generally, the expression level of FmiR genes is significantly higher than that of MmiR and NmiR genes for most tissues, in line with the extensive associations between FmiRs and cancers ( Figure 5B). We also tested the miRNA expression specificity across different cancer types. While MmiR genes and FmiR genes show comparable cancer type specificity (median: 0.30 vs. 0.29, Wilcoxon's test P = 0.83), MmiR genes are more specific to particular cancer types than NmiR genes (median: 0.30 vs. 0.27, Wilcoxon's test P = 0.014). Taken together, these results indicate that FmiR genes are associated with wider spectrum of cancer types.
Overall, the sex-biased miRNAs found in tumor adjacent tissues exhibit noticeable distinctions in biological characteristics compared with those identified in normal tissues. The most prominent distinction is that the MmiRs identified in normal tissues seem to share some biological features with the FmiRs identified in tumor adjacent tissues. Indeed, we noticed that the MmiRs from normal tissues tend to show female-biased expression in tumor adjacent tissues (Figure S2K); that is, these MmiRs are more likely to show higher expression in females than in males in the TCGA dataset (paired t-test P = 0.025). These results indicate that the sex-biased expression of miRNAs is context-specific and could be changed or even reversed in disease conditions, including in tumor adjacent tissues without observable pathological alterations. In this study, we focused on sex-biased miRNAs in healthy samples, whereas tumor adjacent tissues are not good representatives of normal tissues in terms of gene expression pattern. Instead, a GTEx-like comprehensive panel of miRNA expression profiles in normal tissue samples would ultimately depict the whole picture of sex-biased miRNAs across different tissues in the human body [51], although such an expression panel is still prohibitively expensive and labor-intensive for now.
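The paired comparison mentioned above (each miRNA's mean expression in female vs. male samples, paired by miRNA) can be sketched as follows; the eight value pairs are invented numbers, not TCGA data:

```python
# Paired t-test across miRNAs: one female mean and one male mean per
# miRNA, paired by miRNA. Toy numbers chosen so females run higher.
import numpy as np
from scipy.stats import ttest_rel

female_means = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 6.3, 5.8, 5.1])
male_means   = np.array([4.9, 4.5, 5.8, 5.0, 4.7, 5.9, 5.4, 4.8])
stat, p = ttest_rel(female_means, male_means)
# every pair differs in the same direction, so stat > 0 and p is small
```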
Conclusions
Identifying the molecular signature of sex difference has profound importance for disease studies and personalized medicine. Our previous studies on the sex-biased coding genes revealed that, compared to male-biased coding genes, female-biased coding genes have a higher evolutionary rate, higher single-nucleotide polymorphism density, fewer homologous genes, and a younger phyletic age, and are highly involved in immune-related functions, whereas male-biased coding genes are more enriched in metabolic processes [26]. Evidence for the functional importance of small non-coding RNAs, such as miRNAs, in diverse biological processes has been accumulating. However, whether and how miRNAs are expressed in a sex-biased fashion remains largely unexplored, which hinders a full understanding of sexual discrepancy in physiology, disease incidence, and therapeutic response. We speculated that sex differences should also exist in miRNA expression, and that such sex-biased miRNA expression would carry functional implications as well. In the present study, we found 73 female-biased miRNAs and 163 male-biased miRNAs. Male-biased miRNAs exhibit a faster evolutionary rate and a higher tissue specificity, whereas female-biased miRNAs have a higher disease spectrum width and are likely to be related to various cancers and neurodegenerative diseases. Functional annotation shows that female-biased miRNA genes are associated with metabolic and cell cycle processes, whereas male-biased miRNA genes tend to be enriched in histone modification and circadian rhythm.
Nevertheless, due to the intrinsic characteristics of miRNAs and the limitations of current miRNA annotations, some analyses are difficult to perform. For example, we tried to investigate the SNP density in sex-biased miRNA genes and found that the miRNA gene loci currently available (based on pre-miRNAs) are too short to harbor a sizable number of SNPs for further analysis. Besides, as indicated by the analysis of TCGA data, caution should be taken regarding the biological characteristics of samples when compiling the dataset, as the sex-biased expression of miRNAs is context-dependent.
Furthermore, the miRNA expression datasets currently available clearly have their limitations. First, the sample size of our dataset is limited due to the lack of gender-labeled samples for most miRNA expression profiles in GEO. Second, the numbers of male and female samples can be imbalanced in a particular dataset, introducing additional bias. Integrative analysis of sex-biased miRNAs and sex-biased genes could provide novel insights, but the current tissue coverage of the miRNA datasets does not permit such analysis. In addition, many details such as age and ethnic group are missing or insufficient in the current heterogeneous GEO datasets. Therefore, more rigorous pipelines, in which the sex-biased miRNAs are corrected for confounding co-factors [12,24], cannot be applied to reduce the false positives in the current study. We thus expect that more comprehensive panels of healthy human miRNA expression profiles will become available in the future, enabling more reliable analysis and providing more insightful information for understanding physiology, disease, medicine, and clinical therapy in sexual dimorphism.
Identification of sex-biased miRNAs (ISBM)
We searched the human miRNA expression datasets in GEO [25]. Only datasets generated from human normal samples with gender information were retained for manual curation. The 8 GEO datasets selected include GSE15745, GSE34608, GSE41012, GSE41574, GSE48353, GSE67489, GSE70425, and GSE77668. We mapped probes to miRNA names, deleted null values, and merged redundant probes by averaging their expression values. Next, we classified the samples in each dataset into two groups according to gender information, the male group and the female group, to identify the sex-biased miRNAs. Sex-biased miRNAs were identified using Wilcoxon's test (P < 0.05) and classified by bias direction (i.e., male-biased or female-biased) according to fold change. The final list of sex-biased miRNAs was obtained by merging the results from each dataset and excluding the miRNAs showing conflicting bias directions across different tissues.
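The per-dataset test and the merging step described above can be sketched as follows. The data layout (a dict of per-miRNA expression vectors) is hypothetical, and SciPy's rank-sum test stands in for Wilcoxon's test; this is a minimal illustration, not the authors' exact pipeline.

```python
# Sketch of the ISBM (identification of sex-biased miRNAs) pipeline.
# Hypothetical layout: expr maps miRNA name -> expression values, one per
# sample, aligned with a list of 'M'/'F' labels.
import numpy as np
from scipy.stats import ranksums

def sex_biased_mirnas(expr, sexes, p_cutoff=0.05):
    """Return {miRNA: 'M' or 'F'}, the direction of significantly higher
    expression, using a two-sided rank-sum test as a stand-in for
    Wilcoxon's test."""
    sexes = np.asarray(sexes)
    biased = {}
    for mirna, values in expr.items():
        values = np.asarray(values, dtype=float)
        male, female = values[sexes == 'M'], values[sexes == 'F']
        stat, p = ranksums(male, female)
        if p < p_cutoff:
            # Fold-change direction decides male- vs female-biased
            biased[mirna] = 'M' if male.mean() > female.mean() else 'F'
    return biased

def merge_datasets(per_dataset_results):
    """Merge per-dataset calls; drop miRNAs with conflicting directions."""
    merged, conflicted = {}, set()
    for result in per_dataset_results:
        for mirna, direction in result.items():
            if mirna in merged and merged[mirna] != direction:
                conflicted.add(mirna)
            merged.setdefault(mirna, direction)
    return {m: d for m, d in merged.items() if m not in conflicted}
```

A miRNA reported as male-biased in one dataset and female-biased in another is excluded from the final list, mirroring the conflict filter above.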
Human blood samples
Blood samples from 9 healthy males and 9 healthy females were obtained from Ruike Donghua Translation Medical Research Center. The whole blood samples were EDTA-anticoagulated and stored at −80°C before use. The usage of patient-derived materials was approved by the Ethics Committees of the Staff Hospital of Jidong Oil-field of Chinese National Petroleum, Beijing Tiantan Hospital, and Capital Medical University. Written consent was obtained from all of the patients.
Bulge-loop real-time RT-PCR
Similar to our previous study [52], blood total RNA was isolated with Trizol reagent (Invitrogen). Complementary DNA was reverse-transcribed in RNase-free water using 0.2–0.5 μg of total RNA mixed with 1 μl (500 nM) of miRNA-specific bulge-loop RT primers. Real-time PCR was performed on a Real-Time qPCR System (Agilent Technologies, Stratagene Mx3000P). For quantitative assay of miRNAs in the blood, their relative expression levels were first normalized to that of small nuclear RNA U6 in each gender, and then normalized to the female data values using the 2^−ΔΔCt method. All the bulge-loop RT primers for both miRNAs and U6 were purchased from RiboBio Co. Ltd (Guangzhou, China) [53,54].
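The 2^−ΔΔCt normalization described above amounts to the following calculation; the Ct values in the test are made up for illustration.

```python
# Minimal sketch of 2^(-ΔΔCt) relative quantification: each miRNA Ct is
# first normalized to the U6 reference within its group (ΔCt), then the
# target group is expressed relative to the calibrator group (ΔΔCt).
def relative_expression(ct_mirna_target, ct_u6_target,
                        ct_mirna_calibrator, ct_u6_calibrator):
    delta_ct_target = ct_mirna_target - ct_u6_target
    delta_ct_calibrator = ct_mirna_calibrator - ct_u6_calibrator
    ddct = delta_ct_target - delta_ct_calibrator
    return 2.0 ** (-ddct)
```

In the study's setup, the female group serves as the calibrator, so a value above 1 indicates higher relative expression in males.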
Analysis of the chromosomal distribution of sex-biased miRNA genes
The sex-biased miRNA genes were mapped onto chromosomes. The number of miRNA genes on each chromosome was counted to calculate the proportion of FmiR genes or MmiR genes among the total number of miRNA genes on each chromosome. Meanwhile, the number of sex-biased miRNA genes per million bp per chromosome was computed to depict the distribution in more detail using a Circos graph [28]. The correlation between the proportions of sex-biased miRNA genes and those of sex-biased genes on each chromosome was evaluated using Spearman's correlation. We next calculated the intra-group distance of any miRNA gene pair on the same chromosome within the FmiR, MmiR, and NmiR groups to test whether some miRNA genes tend to be clustered on the chromosome.
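The per-chromosome proportion and correlation steps can be sketched as follows; the gene-to-chromosome mappings used in the test are hypothetical.

```python
# Sketch of the chromosomal distribution analysis: per-chromosome
# proportions of sex-biased miRNA genes, and their Spearman correlation
# with the proportions of sex-biased coding genes.
from collections import Counter
from scipy.stats import spearmanr

def per_chromosome_proportion(biased_genes, all_genes):
    """Both arguments map gene -> chromosome. Returns chromosome ->
    proportion of biased genes among all miRNA genes on that chromosome."""
    total = Counter(all_genes.values())
    biased = Counter(biased_genes.values())
    return {chrom: biased.get(chrom, 0) / n for chrom, n in total.items()}

def correlate(prop_mirna, prop_coding):
    """Spearman correlation over chromosomes present in both tables."""
    chroms = sorted(set(prop_mirna) & set(prop_coding))
    rho, p = spearmanr([prop_mirna[c] for c in chroms],
                       [prop_coding[c] for c in chroms])
    return rho, p
```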
Analyzing evolutionary characteristics and functional enrichment of sex-biased miRNA genes
Family numbers of miRNAs were downloaded from the miRBase database version 21 [55] as one of the characteristics of conservation. Based on the species in which the corresponding miRNA family members are present, we classified all miRNA genes into 5 groups according to the method described by Zhang and colleagues [31]. The numbers of species in which the family members of FmiR, MmiR, and NmiR genes are present were counted. The enrichment analysis of sex-biased miRNA genes was performed, considering both the functions and the disease associations of miRNA genes, using TAM 2.0 (http://www.scse.hebut.edu.cn/tam/) with a P value threshold of 0.05 and the analysis of up- and down-miRNAs in diseases enabled [40].
Expression regulation and DSW analysis
To assess the tissue expression specificity of miRNAs, we first accessed the miRNA expression profile described by Liang and colleagues [34], which covered 345 miRNAs and 40 normal tissues, such as brain, muscle, lymphoid, and respiratory systems. For each miRNA in our dataset, we calculated the tissue expression specificity based on its expression profile, if applicable. The tissue expression specificity of a miRNA is defined as the ratio of its maximum expression level among the 40 tissues examined to its total expression across all 40 tissues [26]. To investigate the regulation of sex-biased miRNA genes, we obtained transcription factor (TF) binding sites in each tissue from ChIP-Atlas (http://chipatlas.org/) and mapped them to the proximal regions (from 5000 bp upstream to 1000 bp downstream) of the miRNA gene loci. A sex-biased miRNA identified in a specific tissue is deemed to be regulated by a particular TF if the TF binds to the proximal region of the miRNA gene in the related tissues. We further screened the TFs preferentially regulating sex-biased miRNA genes, requiring at least 5 binding sites among the FmiRs or MmiRs. We performed the function and disease enrichment analyses of these TFs using the g:Profiler and DAVID tools, respectively [56,57].
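The tissue specificity score defined above (maximum over total expression) is a one-liner in practice; a small sketch:

```python
# Tissue specificity as defined in the text: the ratio of a miRNA's
# maximum expression level across tissues to its total expression over
# all tissues. Values near 1 indicate expression confined to one tissue;
# values near 1/N indicate uniform expression over N tissues.
import numpy as np

def tissue_specificity(expression_across_tissues):
    values = np.asarray(expression_across_tissues, dtype=float)
    total = values.sum()
    return float(values.max() / total) if total > 0 else float('nan')
```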
DSW was presented by Qiu et al. [38] as an important measure of the relationship between miRNAs and diseases. DSW was calculated as the number of diseases associated with a given miRNA gene divided by the total number of diseases associated with any human miRNA gene. Here, we used the updated v2.0 version of HMDD [39] miRNA-disease association dataset to re-calculate DSW for each miRNA in our datasets.
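The DSW definition above can be expressed directly; the association table in the test is a made-up stand-in for the HMDD v2.0 data.

```python
# Disease spectrum width (DSW) per the definition in the text: the number
# of diseases associated with a given miRNA divided by the total number
# of diseases associated with any miRNA in the association table.
def disease_spectrum_width(mirna, associations):
    """associations: dict miRNA -> set of disease names."""
    all_diseases = set().union(*associations.values())
    return len(associations.get(mirna, set())) / len(all_diseases)
```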
TCGA data collection and analysis
We downloaded miRNA-seq data of the non-tumor control tissues from the TCGA database (https://portal.gdc.cancer.gov/), discarding samples without clear gender information. We obtained data for 12 cancer types: bladder urothelial carcinoma (BLCA), cholangiocarcinoma (CHOL), esophageal carcinoma (ESCA), head-neck squamous cell carcinoma (HNSC), kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), stomach adenocarcinoma (STAD), and thyroid carcinoma (THCA). The expression profiles from TCGA are available at the miRNA gene level rather than the mature miRNA level. Therefore, for each cancer type, we identified the sex-biased miRNA genes using the aforementioned identification of sex-biased miRNAs (ISBM) pipeline. Then, the framework used for analyzing the abovementioned GEO dataset-derived sex-biased miRNAs was applied to the sex-biased miRNA genes obtained from the TCGA dataset, to analyze the chromosome distribution, conservation, tissue specificity, DSW, and functional enrichment of these sex-biased miRNA genes.
Overmodulation of Six-Phase Cascaded-CSI Using Optimal Harmonics Injection
This paper proposes a new and straightforward method to extend the modulation range of the six-phase cascaded current source inverter (CSI). The proposed technique employs vector space decomposition (VSD) to mitigate the inverter current harmonics and extend the linear modulation region by about 8%. For motor drive applications, increasing the fundamental output component can reflect higher torque production capability for the same drive size, given that thermal limits are not exceeded. The extension can be realized by injecting optimized xy harmonics while keeping the amplitude of the resulting phase currents under the maximum value. The method utilizes a look-up table of optimized values of the injected harmonics to extend the modulation range. The output filter capacitor effects are also studied in this paper, and a selection approach is introduced. Finally, the experimental results of a C-CSI laboratory prototype are presented and discussed to verify the feasibility of the proposed modulation technique.
I. INTRODUCTION
The current-source inverter (CSI) is one of the attractive candidates for medium-voltage high-power motor drive applications [1] due to its tolerance of short-circuit faults compared to voltage-source inverters (VSI). For instance, an offshore wind farm with high-voltage DC transmission (HVDC) based on the CSI topology has been discussed in [2], [3]. Other industrial applications such as aerospace applications [4], electric vehicle applications [5], [6], [7], [8], and industrial motor drives [9] are prominent candidates for using CSI technology as well. In addition to fault tolerance, another advantage of the CSI is the mitigation of the dv/dt problem that occurs at switching transitions. This helps to avoid the deterioration of motor bearings, failure of the wiring insulation, and high acoustic noise during operation [10]. The output voltages and currents of a CSI are motor-friendly, thanks to the capacitive filtering stage. The bulky dc-link electrolytic capacitor banks are also removed in CSIs, which can help to improve the reliability of the overall system [11]. Moreover, a CSI can control the output phase currents directly, without the need to control the output voltages to produce the reference currents [12]. (The associate editor coordinating the review of this manuscript and approving it for publication was Qinfen Lu.)
Multiphase drives have great potential for several industry applications [13], [14], such as electric ship propulsion [15], more electric aircraft [16], and high-power traction applications [17]. Six-phase systems are prevalent among multiphase systems for their inherent two three-phase structure. Distinctive features arise from the choice of displacement angle between the two three-phase winding sets in six-phase drives. The arrangement with a 30° phase shift is called the asymmetrical six-phase or dual three-phase machine; in contrast, a displacement of 60° results in a symmetrical six-phase system. Symmetrical machines outmatch asymmetrical ones in the post-fault torque range [18]. However, asymmetrical machines have a better distribution of the air-gap magnetomotive force (MMF) [19]; hence the latter is considered in this context. If the phases are connected in a star connection, every three-phase group set has a neutral point, and the neutrals can either be connected to form a single neutral point (1N) or isolated from each other to form two isolated neutral points (2N). For a CSI-based six-phase system, a cascaded CSI (C-CSI) topology is introduced in [20] and shown in Fig. 1. The cascaded connection of the two three-phase CSIs is implemented to simultaneously supply each group set of the load. Another configuration of the six-phase CSI can be achieved by connecting the two three-phase CSIs in parallel (P-CSI); however, only one three-phase CSI is allowed to operate at a time in a P-CSI [20]. The main advantage of the C-CSI topology is the capability to double the dc-link current utilization compared to the P-CSI, because both three-phase inverters can operate simultaneously. For modeling, a six-phase system can be considered a double three-phase system, an approach called the double-dq method. (VOLUME 11, 2023. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.)
In this method, the variables of each group set are transformed into a three-dimensional space: two axes represent an equivalent in-quadrature two-phase system, and the third represents a zero-sequence axis. The C-CSI modulation technique of [20] is based on the double-dq modeling method. However, the most widely used method to simplify the modeling of a multiphase system is the vector space decomposition (VSD) introduced in [21] and well established for VSIs.
The development of space vector modulation (SVM) techniques that exploit the features of such systems is one of the attractive research topics in multiphase inverters. Using VSD modeling, the SVM method can control the harmonics in the different subspaces [22], [23], [24]. Ordinarily, reducing harmonics is the main target of most techniques, to increase the system's efficiency and respect thermal limits [25]. Moreover, in two-level multiphase VSIs, the modulation range can be extended linearly using the additional degrees of freedom [26], [27]. The penalty of such a feature is increased harmonic content in the output voltage waveform, which implies increased losses and lower efficiency. This trade-off is nevertheless attractive, since the gains can outweigh the drawbacks when the harmonic content required to achieve the extension is optimized. This approach has been investigated for five-phase systems [28] and six-phase ones [29], [30], [31]. In these methods, an objective function is defined based on the harmonic content in the extra subspaces, and the optimization process aims to achieve the desired modulation index in the fundamental subplane with the minimum possible harmonic content in the other subplanes.
The SVM based on the VSD method has been discussed for the five-phase CSI [11], [32]. In both works, an extension of the dc-link current utilization is achieved either by keeping the same ratio between the large and medium vectors [32] or by injecting a third-harmonic component in the additional subspace [11]. C-CSI modulation has been discussed in [20] based on the double-dq method. However, to the best of the authors' knowledge, a realization of SVM based on VSD for the C-CSI, with extension of the dc-link utilization, has not been investigated. In this paper, the VSD modeling method is proposed and developed for the asymmetrical six-phase C-CSI to offer:
• A two-large, two-medium vector modulation scheme that controls the modulation index linearly from zero to the maximum. Minimized harmonic content is achieved by enforcing a zero-average Ampere-second balance per switching sampling period, exploiting the analogy between the VSI and CSI systems.
• Extension of the modulation index range by around 8% with minimal injected harmonics in the xy subspace, using a newly proposed approach based on operating points stored in a look-up table (LUT) for fast and easy implementation of the scheme.
The work presented in this paper is distinguished from the VSD-based methods for VSIs by how the modulation is devised in the extension region. In the proposed method, a backward approach is developed: the desired output after harmonic injection is shaped, and the ability of the C-CSI to produce such a reference is checked. The operating points are then stored in a LUT and recalled when needed to achieve a desired modulation index in the extension range. On the contrary, in the previous methods for VSIs, the possible inverter states are studied in the extension region, and the schemes are based on optimizing all the possible solutions based on the geometry of the selected vectors. The proposed method can also be generalized easily to other multiphase CSIs, according to the degrees of freedom available in each case. Another advantage of the proposed method is that the dwell-time calculation remains the same over the whole modulation range, unlike the previous extension works for VSIs. The proposal mimics the harmonic injection methods used for torque density improvement, such as in [33], [34], but uses SVM rather than tuning several proportional-resonant controllers as in VSI-based systems. This paper is organized as follows: Section II discusses the system model, and the inverter outputs are mapped into the equivalent subspaces using the VSD method. The details of the proposed SVM technique are illustrated in Section III, together with the optimization problem of minimizing the harmonic content while realizing the maximum linear modulation index; the effects of the filtering capacitors are also discussed in this section. The experimental results with discussions are presented in Section V.
A. OPERATION OF A SIX-PHASE C-CSI
The structure of a six-phase cascaded CSI is shown in Fig. 1. The C-CSI comprises two three-phase CSIs connected in series. The dc-link current I dc passes from one inverter to the other one, as shown in Fig. 1. This structure allows the modulation of the two inverters separately, which means full utilization of the dc-link current.
Two conditions must be satisfied to operate CSIs properly. The first condition is that I_dc must be continuous, without any interruption. The second condition is to produce a predefined output current waveform; this allows only two of the six switches to be turned on simultaneously in each three-phase inverter. It should be noted that the only valid neutral-point configuration applicable here is the 2N configuration. There are nine possible switching states for each three-phase CSI [33], [34]; thus, in total, there are 81 possible switching states for the six-phase C-CSI. The output currents produced by each possible switching state can be calculated using (1), where S_j, j ∈ {1, 2, . . . , 12}, is the state of the j-th inverter switch and i_a1 to i_c2 are the phase currents. In the dwell-time expressions of TABLE 1, T_s is the switching period and t_1 to t_4 are the dwell times of the selected active vectors I_1 to I_4 in each sector. The current components I_γg, γ ∈ {α, β, x, y}, are indexed by the axis onto which the component is synthesized, with g ∈ {1, 2, 3, 4} denoting the order of the selected vector within the sector of modulation; i_γr denotes the decoupled reference current components.
B. VECTOR SPACE DECOMPOSITION (VSD)
The VSD method decomposes the machine variables (voltage, current, and flux) into three two-dimensional subspaces. The three subspaces are orthogonal to each other; hence, decoupled variables are mapped to each subspace. The first subspace is called the αβ subspace. As in the Clarke transformation, the αβ subspace represents all the harmonics of order l = 12h ± 1, h = 1, 2, 3, . . ., which impact the electromechanical conversion process (i.e., torque-producing harmonics). The second subspace is called the xy subspace; harmonics of order l = 6h ± 1, h = 1, 3, 5, . . ., are mapped to it and considered loss components. The final subspace represents the triplen (zero-sequence) harmonics, l = 3h, h = 1, 3, 5, . . ., and is referred to as the 0+0− subspace. The xy and 0+0− subspaces do not contribute to the torque production process; their harmonics are pure losses. A transformation matrix is deduced mathematically in [21] to transfer the six-phase currents into the three subspaces. This transformation is based on the phase-shift angle between the two three-phase group sets and the angles between the phases in each group, and it is given in (2). The new variables are decoupled and can be controlled to achieve the desired performance, such as the desired power transfer and power factor for grid-tied applications or the torque-speed references for motor drive applications.
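A numerical sketch of a transformation of this kind is given below, using the standard amplitude-invariant VSD matrix for an asymmetrical six-phase system attributed to [21]. The phase ordering [a1, b1, c1, a2, b2, c2], the 30° displacement, and the 1/3 scaling are assumptions, since the matrix of (2) is not reproduced in this excerpt.

```python
# Sketch of an asymmetrical six-phase (30° displacement) VSD transform.
# Rows map the phase variables onto the αβ, xy, and 0+0− subspaces.
import numpy as np

s3 = np.sqrt(3) / 2
T_VSD = (1 / 3) * np.array([
    [1, -0.5, -0.5,  s3, -s3,  0],   # alpha
    [0,   s3,  -s3, 0.5, 0.5, -1],   # beta
    [1, -0.5, -0.5, -s3,  s3,  0],   # x
    [0,  -s3,   s3, 0.5, 0.5, -1],   # y
    [1,    1,    1,   0,   0,  0],   # 0+
    [0,    0,    0,   1,   1,  1],   # 0-
])

def fundamental_currents(theta, amplitude=1.0):
    """Balanced fundamental six-phase set: two three-phase sets, the
    second lagging the first by 30 degrees."""
    angles = np.array([0, -2 * np.pi / 3, 2 * np.pi / 3,
                       -np.pi / 6, -np.pi / 6 - 2 * np.pi / 3,
                       -np.pi / 6 + 2 * np.pi / 3])
    return amplitude * np.cos(theta + angles)

# A balanced fundamental set maps entirely onto the αβ subspace, with
# zero xy and zero-sequence components:
components = T_VSD @ fundamental_currents(0.0)
```

With this matrix, a 5th-harmonic set maps onto the xy subspace, consistent with the l = 6h ± 1 grouping described in the text.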
C. MAPPING OF THE CURRENT COMPONENTS
The VSD method can help to map the output current vectors into the three decoupled subspaces for all the possible switching states. Since the load is connected in the 2N configuration, the current components in the 0+0− subspace are all nullified. The current components are shown in Fig. 2. These output current components result from applying all possible switching states and can be determined using Eq. (1).
The mapped components are classified here based on their magnitude in the αβ subspace, as shown in TABLE 1. The group I_L has the largest αβ components and the smallest xy components. The components classified as I_M1 have equal magnitudes in the αβ and xy subspaces. It is worth noting that the groups I_L and I_M1 contain components that are out of phase in the xy subspace, which can be exploited in the modulation scheme. The vectors of the remaining groups have smaller magnitudes.
III. THE PROPOSED MODULATION TECHNIQUE
In this paper, the proposed approach introduces two regions of the modulation index. The first region (Region I) covers modulation indexes from zero up to the maximum linear value. In this region, the VSD-based SVM technique is employed to suppress the unwanted xy harmonics and obtain sinusoidal output currents. The second region (Region II) is the extension region, in which the proposed method is applied to reach the highest possible modulation index (about 1.08) while minimizing the harmonic components.
A. PROPOSED VSD-BASED SVM TECHNIQUE FOR C-CSI
The αβ subspace can be divided into twelve sectors, numbered I to XII in Fig. 2(a). When the reference vector lies in a given sector, it can be synthesized by selecting four active vectors and one null vector. In the proposed approach, a five-segment switching sequence cycle is considered. The method is based on the analogy between the VSI and the CSI: two vectors are chosen from the large group (I_L), and the other two are chosen from the I_M1 group, as in [31]. The large and medium vectors are selected such that each large vector is in the same direction as the corresponding medium vector in the αβ subspace, while they are out of phase in the xy subspace. An example for sector I is shown in Fig. 3.
The calculation of the dwell times is given in TABLE 1, in the same manner as for the six-phase VSI [35]. The calculation is based on synthesizing the selected vectors and the reference into their respective α, β, x, y components, as in (3). Since the aim of this paper is to apply VSD modeling to extend the maximum modulation index of the C-CSI, only vectors from the I_L and I_M1 groups are considered in the following discussions.
B. REGIONS OF OPERATION
Two regions of operation for the proposed scheme are defined and studied. This categorization is based on the designated level of harmonics allowed in the output currents of the C-CSI. The following subsections define the limits set on the output harmonics for each region and explain how the scheme is modulated accordingly:

1) REGION I

Undoubtedly, a lower harmonic content in the output current leads to higher efficiency and better thermal performance. However, harmonics can also contribute to the output power or torque production [36], [37]. In Region I, the goal is to change the modulation index linearly from zero to its maximum with a low content of xy-subplane harmonics. This is done by setting the references i_xr and i_yr to zero in the dwell-time calculations of TABLE 1.
The modulation index (m) is defined as the ratio between the amplitude of the fundamental component of the reference inverter currents, i_inv,f1,max, and the dc-link current I_dc. Hence, the six-phase reference current vector i_inv,r can be realized using Eq. (4). Based on the fundamentals of pulse-width modulation, as the modulation index increases, the null time decreases. Hence, the maximum modulation index in Region I, m^Reg.I_max, can be determined by deriving the equation for the null time (t_0). Since the dwell times of the inverter repeat periodically in every sector, studying one sector of modulation is enough to calculate t_0.
For instance, the following procedure obtains the null-time equation in sector I. To determine t_0, all the components on the right-hand side of TABLE 1 must first be defined. The components I_γg of the selected active vectors (I_61, I_37, I_7, I_55) in sector I should be substituted in (3).
Then, by using the VSD transformation given in (2), the reference currents can be mapped into the decoupled reference current components i_γr shown in (5). The 0+0− references are inherently satisfied by the 2N neutral connection applied in the C-CSI; consequently, there is no need to include the last rows of Eq. (5) in the following calculations. The reference output currents i_γr given in (5) should be substituted in TABLE 1 to determine the null-time equation in sector I, which is described in (6).
To achieve m^Reg.I_max, two variables in Eq. (6) must be determined: θ and t_0,sectorI. The angle that minimizes the null time is θ = 0°, which maximizes the cosine term in Eq. (6). At m^Reg.I_max, the null time t_0,sectorI reaches zero, which is the minimum realizable null time. By equating the resulting expression to zero, the deduced m^Reg.I_max is equal to one. This means the six-phase C-CSI can realize the maximum fundamental output currents (i_inv,f1,max) with zero-average xy harmonic currents. Fig. 4 shows the null time over two sectors at the maximum modulation index inside Region I. It is worth mentioning that applying a zero-average Ampere-second balance does not guarantee total elimination of the harmonic currents in the xy subplane; however, a minimized harmonic content can be achieved with such a technique [21].
2) REGION II (EXTENDED REGION)
In Region II, the target of the proposed SVM is to take advantage of the C-CSI system and achieve full dc-link utilization linearly. The proposal is to calculate and inject an appropriate harmonic content into the xy subspace to produce a higher fundamental component than in Region I. The dwell-time calculations of TABLE 1 can be used without changes. Unlike the triplen harmonics, the xy harmonics can flow without any hardware reconfiguration [38].
The harmonics mapped to the xy subspace are of order l = 6h ± 1, h = 1, 3, 5, . . ., as mentioned in the VSD section. The general form of the output current waveform after injecting the xy harmonics is shown in (8), where k is the per-unit (p.u.) value of the injected harmonics and ϕ is the phase-shifting angle of the waveforms.
To obtain the form of the xy current components in the reference, every phase current in the reference vector i_inv,r stated in (4) should be modified to include the injected currents as in (8). Then, by applying the VSD transformation and simplifying, the current references injected into the xy subspace are obtained as shown in (9). The next step is to determine the optimum values of the coefficients k. An optimization process is developed here to find these coefficients. The optimization aims to find the minimum harmonic content to be injected in order to extend the modulation range into Region II. The objective function (obj) is defined in (10) as the summation of the squared values of the coefficients. The optimization problem is to find the minimum of obj for each modulation index in Region II. The constraint, illustrated in (11), represents the feasibility of applying the dwell times: the times calculated by TABLE 1 must be greater than or equal to zero at the values of k selected by the optimization process. The optimization problem can be solved in MATLAB by deploying the fmincon() function with the MultiStart option.
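The shape of this optimization problem can be sketched as follows (the paper uses MATLAB's fmincon with MultiStart; SciPy's SLSQP is used here purely for illustration). The dwell_times() function below is a placeholder, since the TABLE 1 expressions are not reproduced in this excerpt; only the problem structure — minimize the sum of squared injection coefficients subject to nonnegative dwell times — follows the text.

```python
# Structural sketch of the harmonic-injection optimization: minimize
# sum(k^2) subject to all dwell times being nonnegative. dwell_times()
# is a PLACEHOLDER standing in for the TABLE 1 expressions; its toy
# model merely illustrates how injection relaxes the null-time limit.
import numpy as np
from scipy.optimize import minimize

def dwell_times(k, m):
    # Toy surrogate: four fixed active-vector times plus a null time
    # that turns negative for m > 1 unless harmonics are injected.
    t0 = 1.0 - m + 0.5 * np.sum(k)
    return np.array([0.25, 0.25, 0.25, 0.25, t0])

def optimal_injection(m, n_coeffs=4):
    """Minimum-norm injection coefficients for modulation index m."""
    obj = lambda k: np.sum(k ** 2)
    cons = {'type': 'ineq', 'fun': lambda k: dwell_times(k, m)}
    res = minimize(obj, x0=np.zeros(n_coeffs), constraints=cons,
                   method='SLSQP')
    return res.x if res.success else None
```

At m = 1 the constraint is already satisfied with zero injection, matching the Region I result; beyond m = 1 the optimizer returns the smallest coefficients that keep all dwell times nonnegative.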
Since the series in (9) is infinite, an algorithm is applied to determine a finite number of essential harmonics and obtain an applicable solution. The algorithm starts with h = 1 and attempts to solve the optimization for modulation indexes beyond the limit of Region I, increasing m until no feasible solution can be found. This approach determines the maximum modulation index in Region II, m^Reg.II_max, for a given h.
The algorithm stops at h = 3 and achieves m^Reg.II_max = 1.0773. Stopping at h = 3 means that the essential harmonics to be injected are the 5th, 7th, 17th, and 19th to ensure a feasible modulation. It is worth mentioning that considering cases with h > 3 would increase the accuracy of the implementation and reduce the harmonic content. However, a trade-off is made to stop at h = 3, because higher harmonics complicate the implementation; the selection of the filtering capacitors and the switching frequency are other motives to stop at h = 3. The optimum values of all the coefficients are shown in Fig. 6 as the modulation index changes from 1 to 1.0773 in Region II.
As shown in Fig. 6, the amplitudes of the optimal injected harmonics do not increase linearly with the modulation index in Region II. The SVM method can be easily implemented in Region II by storing the coefficients of the harmonics in the digital controller memory and recalling them when needed. Based on the deduced maximum modulation indexes in each region, the limits of the two regions can be illustrated geometrically in Fig. 7. The two arcs in Fig. 7 mark the ends of the two regions: Region I realizes references of magnitude up to √3 I_dc, and Region II ends at the outer circle of radius 1.0773 √3 I_dc. All the coefficients are calculated over the modulation range in Region II to check that the results of the optimization problem are feasible. Sector II is chosen for the feasibility check, since the dwell times are periodic and one sector is sufficient. The calculated dwell times are shown in Fig. 8. Since all the times are positive, it can be concluded that the C-CSI can realize the selected harmonics for injection.
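The LUT-based recall described above can be sketched as follows; the grid and coefficient values are illustrative placeholders, not the optimized results of Fig. 6.

```python
# Sketch of LUT-based Region II operation: optimized harmonic
# coefficients are stored against the modulation index and linearly
# interpolated at run time. Table values below are MADE UP for
# illustration only.
import numpy as np

M_GRID = np.array([1.00, 1.02, 1.04, 1.06, 1.0773])
K_TABLE = {                      # per-harmonic coefficient tables
    5:  np.array([0.000, 0.010, 0.022, 0.038, 0.060]),
    7:  np.array([0.000, 0.006, 0.014, 0.026, 0.045]),
    17: np.array([0.000, 0.002, 0.005, 0.009, 0.015]),
    19: np.array([0.000, 0.001, 0.003, 0.006, 0.010]),
}

def injected_coefficients(m):
    """Return {harmonic order: k} for a modulation index in Region II."""
    m = np.clip(m, M_GRID[0], M_GRID[-1])
    return {h: float(np.interp(m, M_GRID, col)) for h, col in K_TABLE.items()}
```

Because the coefficients of Fig. 6 vary nonlinearly with m, interpolating a stored table is cheaper at run time than re-solving the optimization on the controller.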
C. EFFECT OF FILTERING CAPACITORS
A critical phenomenon at the CSI output is the resonance between the filtering capacitors and the load inductance. In this section, the per-phase equivalent circuit is studied to determine the relationship between the inverter output current I_inv and the load current I_load after the filtering stage. The main reason is to gain a deeper understanding of the system so as to avoid the resonance effect; another reason is to avoid amplifying or phase-shifting the injected harmonics of the proposed extension method. The equivalent circuit of the filter capacitor and the load is shown in Fig. 9. For the R-L load of Fig. 9, the current transfer function between the inverter output and the load in the s-domain is

G(s) = I_load(s) / I_inv(s) = 1 / (L C_f s² + R C_f s + 1),   (12)

with the resonance frequency

f_res = 1 / (2π √(L C_f)).   (13)

For discussion and clarification, a system running with the parameters specified in TABLE 2 is used as an example. Given that the system under study is a medium-power load, selecting a CSI switching frequency in the 5-10 kHz range is suitable for such applications [5]. Using (13), three values of the filtering capacitor C_f are chosen such that the resonance frequency f_res falls at the 25th, 30th, or 35th harmonic.
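As a numerical illustration, the capacitor can be sized from the per-phase resonance f_res = 1/(2π√(L C_f)) and the current-divider magnitude then checked across frequency. The parameter values in the test are illustrative, not those of TABLE 2.

```python
# Sketch of filter-capacitor selection for a CSI with an R-L load and a
# parallel filter capacitor (per-phase circuit): place the resonance at
# a chosen harmonic of the fundamental, then evaluate the current
# transfer magnitude |G(jw)| = |1 / (1 - w^2*L*Cf + j*w*R*Cf)|.
import numpy as np

def capacitor_for_resonance(L, harmonic, f1):
    """C_f that puts f_res = 1/(2*pi*sqrt(L*C_f)) at harmonic * f1."""
    f_res = harmonic * f1
    return 1.0 / (L * (2 * np.pi * f_res) ** 2)

def transfer_magnitude(f, R, L, Cf):
    """|I_load / I_inv| at frequency f (Hz) for the per-phase circuit."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    return np.abs(1.0 / (1.0 - w**2 * L * Cf + 1j * w * R * Cf))
```

At low frequency the gain is close to unity, it peaks near the resonance, and it rolls off well below unity around the switching frequency, matching the three sections of the frequency span discussed in the text.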
The magnitude of G over a frequency range up to 50 kHz is presented in the Bode plot of Fig. 10, and the angle of G is shown in Fig. 11. It can be noticed from Fig. 10 that the resonant frequency lies at the selected harmonics and that the frequency span can be divided into three sections. The first stretches from the beginning of the frequency range up to the resonance. The second is the resonance bandwidth, delimited where |G| falls to about 70.7% of its maximum value. The third is the filtering section, where the high frequencies are attenuated, such as the band around the switching frequency. From Fig. 11, it can be deduced that the angles of the injected harmonics are barely changed, which makes the proposed SVM with harmonic injection well suited to the C-CSI topology.
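A minimal numeric sketch of this behavior, assuming the transfer function takes the form G(s) = 1/(1 + sRC_f + s²LC_f) for a current source feeding C_f in parallel with a series R-L load, and hypothetical component values (not the TABLE 2 parameters):

```python
import cmath
import math

# Hypothetical per-phase values; C is tuned so resonance lands near the
# 25th harmonic of 60 Hz (f_res = 1/(2*pi*sqrt(L*C)) ~ 1500 Hz).
R, L, C = 1.0, 5e-3, 2.25e-6
f1 = 60.0

def G(f: float) -> complex:
    """Load-to-inverter current transfer, assumed G(s) = 1/(1 + sRC + s^2*LC)."""
    s = 1j * 2.0 * math.pi * f
    return 1.0 / (1.0 + s * R * C + s * s * L * C)

for h in (5, 7, 25, 100):
    g = G(h * f1)
    print(f"h={h}: |G|={abs(g):.3f}, angle={math.degrees(cmath.phase(g)):.2f} deg")
```

With these numbers the low-order injected harmonics pass with near-unity gain and only a fraction of a degree of phase shift, the 25th harmonic sits on the resonance peak, and frequencies well above it are strongly attenuated, matching the three sections described above.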
Based on the previous discussion, the selection of the filtering capacitor must ensure that f_res is placed between the significant low-order harmonics and the switching-frequency harmonics. In addition, a higher f_res leads to a smaller filtering capacitor, as seen in (13). This can reduce the size of the capacitors and thus improve the system's overall lifetime. A simple way to compensate for the filter's effect on the magnitude and angle of the current is to employ an observer, as introduced in [39] for motor drive applications.
A. EXPERIMENTAL SETUP
A scaled-down prototype, illustrated in Fig. 12, is used for the experimental tests to verify the feasibility of the proposed modulation scheme. The parameters of the experimental setup are summarized in TABLE 3. The six-phase C-CSI is implemented with six SKM50GB12V half-bridge IGBT modules connected to SKHI 22 A/B H4 gate drivers from Semikron. Reverse blocking is achieved by connecting each half-bridge to a DSEI 2 × 31.06C diode module, with one diode clamping each IGBT to the positive and negative rails.
The firing signals are generated by a LAUNCHXL-F28379D digital signal processor. A programmable dc supply is used in current-control mode to provide a constant current. A six-phase load consisting of an R-L combination is used. The filtering capacitors are selected to place f_res at the 13th harmonic. The fundamental frequency of the system is 60 Hz, and the setup runs at different modulation indexes in both modulation regions mentioned earlier.
B. EXPERIMENTAL RESULTS
The tests performed in this section include: 1) running the inverter at m = 1; 3) a step change from m = 0.8 to m = 1.0773; and 4) a comparison between the injected and measured xy harmonic content in the inverter output currents. The experimental results of running the setup at m = 1 are shown in Fig. 13. This test shows the performance of the proposed scheme in Region I of the modulation. The load currents are shown in Fig. 13(a), where the phase currents (A1, B1, A2 and B2) appear on the 4-channel scope, measured using Hall-effect current sensors. The currents are sinusoidal, and the harmonic spectrum of i_A1, obtained with the Fast Fourier Transform function in MATLAB, is shown in Fig. 13(b). As expected, the spectrum shows low harmonic content, since the operation is in Region I of the modulation.
The inverter output current is illustrated in Fig. 13(c), which also shows the dc-link current fed from the supply to the inverter; it appears steady with small ripples. The spectrum of the inverter current is shown in Fig. 13(d) and is clear of low-order harmonics, since the xy harmonics are nullified in this region. The spectra of the load and inverter currents are almost identical for harmonics below the switching-frequency band; that band is diminished by the effect of the filtering capacitors. The load phase voltage is shown in Fig. 13(e) and its spectrum in Fig. 13(f). For R-L loads, the higher the harmonic under study, the higher the reactance X_L and therefore the higher the voltage harmonic component that appears. This is a drawback of using small filtering capacitors; however, this selection is necessary for operation in Region II. The capacitor current is shown in Fig. 13(g); it represents the harmonics filtered from the inverter output current, as expected.
The supply voltage V_dc and the inverter input voltage V_inv are shown in Fig. 13(h). From the inverter structure in Fig. 1, the inverter voltage is the sum of the input voltages of the two three-phase inverters. Each input voltage equals the line voltage between the two activated phases, as determined by the switching state applied to the inverter. The programmable supply applies a constant voltage at the level required to regulate the current and keep it at the set value. The current and voltage of the series diode used to implement the C-CSI are shown in Fig. 13(i); the diode's purpose is evident, as it protects the IGBT and its anti-parallel diode from reverse conduction due to the negative voltages that can appear because of the nature of the load. The current and voltage of the switch are shown in Fig. 13(j): the switch conducts pulses alternating between zero and the dc-link current, while the voltage depends on the loading condition and the capacitor selection. Higher instantaneous voltages are expected because of the selection of small capacitors, which should be noted when selecting the switch ratings. The results of the C-CSI running in Region II are illustrated in Fig. 14. The waveforms of the output currents have a flat-top shape because of the injection, as seen in Fig. 14(a). The cost of extending the modulation index appears in the harmonic spectra of the load and inverter currents in Fig. 14(b) and (d).
The 5th and 7th harmonics are slightly higher in the load currents, since the selected capacitor resonates with the load inductance at the 13th harmonic; the 17th and 19th harmonics, however, are diminished by that selection. The load voltage is shown in Fig. 14(e), and the harmonics appearing in the load current are amplified in the voltage spectrum in Fig. 14(f). Consequently, this phenomenon affects the voltages before and after the inductor, V_dc and V_inv, as shown in Fig. 14(g), and also the selection of the switches and diodes, since higher voltage peaks are expected in addition to the voltage increase caused by the extension itself, as shown in Fig. 14(i) and (j).
The harmonics of the currents in Region II are analyzed in MATLAB by processing the recorded data points of the waveforms. The total harmonic distortion (THD) of the measured currents is defined in (14):

THD = √(Σ_{l=2} I_l²) / I_1    (14)

where I_l is the harmonic current of order l and I_1 is the fundamental current component.
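The THD evaluation in (14) can be reproduced with a short FFT-based routine; the waveform below is synthetic (a fundamental plus a 10% 5th harmonic), not measured data:

```python
import numpy as np

def thd(signal, fs, f1, n_harm=40):
    """THD per (14): sqrt(sum of I_l^2 for l >= 2) divided by the fundamental I_1."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n          # single-sided amplitudes
    amp = lambda h: spec[int(round(h * f1 * n / fs))]     # amplitude of harmonic h
    harmonics = np.sqrt(sum(amp(h) ** 2 for h in range(2, n_harm + 1)))
    return harmonics / amp(1)

# Synthetic check: fundamental plus a 10% 5th harmonic should give THD = 0.1.
f1, n = 60.0, 1024
fs = n * f1                       # exactly one fundamental period is sampled
t = np.arange(n) / fs
i_a = np.sin(2 * np.pi * f1 * t) + 0.1 * np.sin(2 * np.pi * 5 * f1 * t)
print(thd(i_a, fs, f1))
```

Sampling an integer number of fundamental periods (coherent sampling) keeps the harmonic energy in single FFT bins; with arbitrary scope records, windowing or interpolation would be needed to avoid spectral leakage.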
V. CONCLUSION
This paper presents a new modulation scheme to control a six-phase C-CSI with minimized harmonic content. The proposed method also increases the modulation index range by about 8%, which is attractive for motor drive applications, as it can improve the torque density. The modulation index range is divided into two regions, and the dwell-time calculation is the same over the full range, which allows a simple implementation. The experimental results show that the method is effective in eliminating the low-order harmonics in Region I, where the degrees of freedom allow this feature. The extension of the modulation range is also verified experimentally, at the cost of an increase in the harmonic content of the asymmetry subspaces. The C-CSI topology combined with the proposed method produces output currents with unavoidable, yet minimized, harmonic content in the extension region and a near flat-top waveform. This feature is also desirable in motor drive applications to avoid iron saturation and exceeding the designed current stresses of the converter semiconductors. The simplicity of the method makes it easy to apply to other multiphase systems based on their respective asymmetry subspaces.
Electroreduction of divanillin to polyvanillin in an electrochemical flow reactor
The electrochemical conversion of biobased intermediates offers an attractive and sustainable process for the production of green chemicals. One promising synthesis route is the production of the fully vanillin-based polymer polyvanillin, which can be produced by electrochemical pinacolization of divanillin (5,5′-bisvanillyl). Divanillin can be easily generated enzymatically from vanillin, a renewable intermediate accessible from lignin on an industrial scale. This study systematically investigates the electrochemical production of polyvanillin in a divided plane parallel flow reactor in recirculation mode. Several analytic methods, such as online UV-VIS spectroscopy, size exclusion chromatography (SEC), 2D-NMR (HSQC, 13C/1H), TGA and DSC, were used to monitor the reaction progress and to characterize the reaction products under different galvanostatic reaction conditions, revealing new insights into the reaction mechanism and structural features of the polymer. Further, by using an electrochemical-engineering-based approach determining the limiting current densities, we readily achieved high current densities over 50 mA cm−2 for the polyvanillin synthesis and reached average molecular weights up to Mw = 4100 g mol−1 and Mn = 2700 g mol−1. The cathodic polymerization to polyvanillin offers an innovative approach for the electrochemical production of biobased polymers, demonstrated at flow-cell level. Supplementary Information: The online version contains supplementary material available at 10.1186/s13065-024-01133-2.
Introduction
The successive replacement of fossil-based polymers with polymers from renewable resources, so-called biobased polymers, is an effective way to reduce greenhouse gas emissions due to their carbon neutrality [1,2]. Vanillin (4-hydroxy-3-methoxybenzaldehyde), a biobased platform chemical derived from lignin on an industrial scale, has therefore recently garnered increased attention as a building block for biobased polymer synthesis [3]. Besides the traditional route to vanillin from lignosulfonates via thermo-catalytic depolymerization in the presence of copper-based catalysts and oxygen, operated by Borregaard (Norway) [4], many novel electrochemical strategies have lately been investigated to achieve a totally green process combining sustainable conversion technologies and renewable feedstock [5]. Different methods using either direct electrochemical oxidation [6,7] or indirect oxidation via electrochemically generated oxidizers [8-10] were applied to obtain vanillin, vanillic acid and 5-iodovanillin from lignin or lignosulfonates. Many polymer synthesis strategies have been reported that make use of the multifunctional aromatic character of vanillin [11]. A wide range of vanillin-based polymers is accessible, such as phenolic [12], epoxy [13] and cyanate resins [14], polyesters [15] or polycarbonates [14].
A promising alternative to conventional synthesis routes to vanillin-based polymers and polymer building blocks is the reductive electrochemical pinacolization of vanillin's carbonyl group, enabling a sustainable pathway for C-C bond formation, since electrochemistry fulfills several of the 12 principles of green chemistry [16]. Direct pinacolization of vanillin to hydrovanilloin was first described by I. A. Pearl in 1952 at Pb cathodes in diluted sodium hydroxide solution [17]. Due to its bisphenolic character, hydrovanilloin was recently used by Amarasekara et al. for the synthesis of several polymers, such as a hydrovanilloin-formaldehyde polymer [18], a hydrovanilloin-diglycidyl ether phenoxy resin [19] or a poly(hydrovanilloin-urethane) [20]. Besides pinacolization, which transfers one electron to the carbonyl group followed by hydrodimerization, the carbonyl group can be reduced in a two-electron pathway to the corresponding alcohol. Jow et al. investigated the reaction mechanism of the vanillin reduction at Hg cathodes, showing that pinacolization is favored over alcohol formation at higher pH values, lower current densities and higher substrate concentrations. The reaction outcome is influenced by the deprotonation of vanillin's phenolic group, which decreases the stability of the negatively charged intermediate species and thus favors the dimerization step over alcohol formation at higher pH values [21]. To reach adequate faradaic efficiencies for the hydrovanilloin production, cathode materials exhibiting a high overpotential for the competing hydrogen evolution reaction (HER) are required due to the negative onset potential of vanillin (≈ −0.6 V vs. RHE). As these materials are mostly toxic, such as Pb, Hg or Cd, our group investigated different non-toxic cathode materials, showing that Zn cathodes can be used for hydrovanilloin production with negligible amounts of vanillyl alcohol and high faradaic efficiencies in alkaline aqueous media [22,23].
Divanillin (5,5′-bisvanillyl) is an easily accessible compound from vanillin bearing two remote carbonyl groups; it is generated by enzymatic aryl-aryl coupling with either horseradish peroxidase and H2O2 [24] or laccase in O2-saturated solution [15]. Interestingly, the same reaction type of electrochemical pinacolization allows a molecular weight increase from divanillin to polyvanillin, which was first described by Amarasekara et al. in a feasibility study at Pb cathodes in a divided beaker cell in 2012 [25]. Recently, our group further investigated structural features of the formed polyvanillin by size exclusion chromatography (SEC) and 2D-NMR (HSQC, 13C/1H) in H-type batch cells at Zn, Pb and GC cathodes. Thereby, we showed that the molecular weight increase by electrochemical pinacolization, analogous to the vanillin reduction, competes with alcohol formation, which terminates the polymer chain. Further, we found stilbene-like double-bond systems in the aliphatic region of polyvanillin (Scheme 1). After complete carbonyl consumption, molecular weights of Mw = 3200 g mol−1 and Mn = 2400 g mol−1 versus pullulan standard were reached for Zn cathodes, with no significant influence of current density or divanillin concentration in the H-type batch cell [22].
Since electrosyntheses in H-type batch cells often show poor performance due to sluggish mass transport, wide electrode distances resulting in large ohmic losses and a non-uniform potential distribution [26], we herein report for the first time the polyvanillin synthesis by electrochemical divanillin pinacolization in a plane parallel flow reactor in recirculation mode. The transfer to an electrochemical flow reactor enables us to present deeper insights into the reaction mechanism by using online analytics in the electrolyte loop, offering access to highly time-resolved data. Further, defined flow conditions and uniform current distributions in the flow reactor allow a precise investigation of the impact of the current density on structural features of the produced polyvanillin samples, which we analyzed by SEC and 2D-NMR (HSQC, 13C/1H). Lastly, we show an approach using dimensionless numbers for reaching reasonably high current densities of > 50 mA cm−2 at higher divanillin concentration and calculate the corresponding key figures of merit, such as the space-time yield STY and the specific energy consumption Es [27]. The resulting polymer is then thermally characterized by TGA and DSC analysis.
Flow reactor setup and experiments
A divided plane parallel flow reactor in recirculation batch mode was used. The setup and the reactor are described in detail in a previous publication [23]. Briefly, the catholyte and anolyte chambers of the reactor were separated by a Nafion N324 membrane, and flat electrodes with an electrode area of 4 × 14 cm2 each were used. The distance between each electrode and the separator was 0.5 cm. Catholyte and anolyte were fed into the reactor parallel to the electrode surface. In-house 3D-printed (Photon S, Anycubic) inert plastic mesh turbulence promoters made of acrylate-based UV-curing resin (Value DLP Resin, PrimaCreator), covering the full reaction channel above each electrode (4 × 14 cm2), were inserted in both electrolyte chambers to enhance the mass transport within the reactor and to prevent the membrane from bulging.

Scheme 1: Electrochemical reduction of divanillin to polyvanillin

Both turbulence promoters had a mesh width of 5 × 5 mm2 in diagonal orientation with respect to the flow direction. The cathodic turbulence promoter was made of 6 layers with a web thickness of 1 mm each, resulting in an overall voidance of 0.7, whereas the anodic turbulence promoter was made of 4 layers with a web thickness of 1.1 mm each, resulting in an overall voidance of 0.72. For a further detailed investigation of the mass transport behavior obtained for the turbulence promoters see our previous publication (type D cathode side and type C anode side) [23]. Catholyte and anolyte were each circulated from a glass reservoir through the reactor by a gear pump (VGS 24 V OEM, Verder Deutschland GmbH & Co. KG). The mean linear flow velocities in the reactor were measured by flow meters (FCH-midi-PCDF, B.I.O.-Tech e.K.)
positioned before the reactor entries. A reversible hydrogen reference electrode (RHE, HydroFlex©, Gaskatel) was connected in flow-by mode to the catholyte chamber by a 1/16′′ PTFE tube. The PTFE tube was fixed within the turbulence promoter and its end was positioned as close as possible to the cathode to minimize the iR-drop. A minimum electrolyte flow was established and the withdrawn electrolyte was fed back to the catholyte reservoir. A Bio-Logic SAS SP-150 potentiostat coupled with a VMP3 10 A booster was used for all electrochemical measurements.
Before each measurement, a Zn sheet and a Ni sheet, serving as cathode and anode respectively, were polished with SiC papers of decreasing roughness (FEPA #P180/#P500/#P1000, Struers GmbH, Germany) and rinsed with ethanol and water. The Nafion N324 membrane was soaked in 1 M NaOH for at least 24 h before the experiment to ensure its Na+ form. The catholyte was freshly prepared by dissolving 7.55 g (45.30 g for the high-concentration experiment) of divanillin in 0.5 L of 1 M NaOH, resulting in a 50 mM (300 mM) divanillin solution. The anolyte consisted of 1 L of 1 M NaOH. The volume ratio of anolyte to catholyte was 2:1 to maintain a sufficiently high ionic conductivity, since Na+ ions migrate from the anolyte through the Nafion membrane to the catholyte during the electrolysis. Catholyte and anolyte were recirculated before starting the electrolysis at the targeted mean linear flow velocity of 20 cm s−1 until a steady-state flow behavior set in, which usually took 5 to 10 min.
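As a quick sanity check of the stated make-up (using the molar mass of divanillin, C16H14O6, ≈302.28 g mol−1), the weighed masses reproduce the target concentrations:

```python
# Divanillin (C16H14O6) has a molar mass of ~302.28 g/mol, so 7.55 g in
# 0.5 L should give the stated 50 mM, and 45.30 g the stated 300 mM.
M_divanillin = 302.28  # g/mol
for mass_g, target_mM in ((7.55, 50.0), (45.30, 300.0)):
    c_mM = mass_g / M_divanillin / 0.5 * 1000.0  # mol/L in 0.5 L, converted to mM
    print(f"{mass_g} g -> {c_mM:.1f} mM (target {target_mM} mM)")
```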
Linear sweep voltammograms (LSVs) of both the cathodic and the anodic reaction were performed at a potential sweep rate of 3.33 mV s−1. Anode and cathode side were swapped to record the anodic LSV in order to minimize the distance between the working and the reference electrode. The recorded potentials were corrected by the iR-drop of the electrolyte solution between the working and the reference electrode. The iR-drop was determined by galvanostatic impedance spectroscopy (GEIS) measured at a current of −9 mA cm−2 and an amplitude of 0.9 mA cm−2 between frequencies of 100 kHz and 10 Hz.
Bulk electrolysis was performed galvanostatically at current densities of 5, 9 and 18 mA cm−2 for the 50 mM divanillin solution and at 54 mA cm−2 for the 300 mM divanillin solution until a charge of 8 F mol−1 had passed. The experiment at 9 mA cm−2 was conducted three times to evaluate the overall experimental error. The reaction was monitored by an online UV-VIS setup. The resulting polymer was isolated after the electrolysis by acidifying the catholyte to pH ≈ 2 with 1 M HCl. An overview of the isolated yields is given in the supporting information (Additional file 1: Table S1). The precipitate was filtered, thoroughly washed with water and dried in a desiccator with silica gel under vacuum overnight. The isolated polymer was then analyzed by SEC and 2D-NMR (HSQC, 13C/1H). Further, aliquots of a few milliliters each were withdrawn from the catholyte throughout the electrolysis to monitor the polymerization. The sampling was divided into two experiments to minimize the amount of withdrawn catholyte and, therefore, the impact on the reaction. The withdrawn aliquots were isolated in the same way as the bulk catholyte, with the difference that a 12 mL syringe with an inserted filter paper was used for the filtration and washing step due to the small sample amount.
Investigation of Zn behavior after resting phase at open circuit potential
To assess the behavior of Zn cathodes after resting at open circuit potential (OCP), accompanying investigations were conducted in an undivided beaker cell. 50 mL of 1 M NaOH was filled into a glass beaker. A 2 × 2.5 cm2 Zn piece and a 1.5 cm2 Pt piece were used as working and counter electrode, respectively. A reversible hydrogen reference electrode (RHE, HydroFlex©, Gaskatel) was used as reference electrode. The Zn electrode was allowed to rest at OCP (≈ −450 mV vs. RHE) for 10 min. Afterwards, 3 cyclic voltammogram (CV) cycles between −0.450 V vs. RHE and −1.1 V vs. RHE were recorded with a potential sweep rate of 20 mV s−1. The procedure, comprising a 10 min OCP resting phase followed by 3 CV cycles, was repeated 3 times. Potentials were corrected after the experiment by the iR-drop between the working and the reference electrode, determined by potentio electrochemical impedance spectroscopy (PEIS). The iR-drop was 0.4 Ohm.
Rotating disc electrode (RDE) experiments
RDE experiments were conducted to determine the diffusion coefficient of divanillin. A Pb disc (Ø = 5 mm, Pine Research) was inserted in a PTFE tip holder (E6R1 Change Disk RRDE, Pine Research). The Pb surface was polished with a 0.05 µm diamond suspension (Buehler), rinsed with water and sonicated in ultra-pure water for at least 10 min. The RDE was then mounted in an electrode rotator (MSR Rotator, Pine Research). The electrochemical measurements were conducted in a standard three-electrode setup and a Bio-Logic SAS SP-150 potentiostat was used. A freshly prepared 20 mM divanillin solution in 1 M NaOH served as electrolyte, which was deoxygenated with argon gas for at least 30 min before the measurement. A platinated Pt sheet and a reversible hydrogen electrode (RHE, HydroFlex©, Gaskatel, Germany) were used as counter and reference electrode, respectively. The potential of the RDE was held at −0.50 V vs. RHE and then CVs between −0.50 V vs. RHE and −1.10 V vs. RHE were recorded at a potential sweep rate of 10 mV s−1 for different rotation rates (100, 400, 900, 1600, 2500 rpm). The recorded potentials were corrected after the measurement by the iR-drop, which was determined by a PEIS measurement at −0.80 V vs. RHE with an alternating potential of 10 mV and frequencies between 10 Hz and 200 kHz. The diffusion coefficient was obtained by applying the Levich equation to the limiting current densities j_lim extracted at a potential of −0.9 V vs. RHE:

j_lim = 0.62 n F c D^(2/3) ν^(−1/6) ω^(1/2)    (1)

where n is the number of transferred electrons, F is the Faraday constant (96485 As mol−1), c is the divanillin concentration, D is the diffusion coefficient, ν is the kinematic viscosity and ω is the angular velocity. A value of n = 2 was assumed, as the one-electron reduction of each carbonyl group to the pinacol is expected to be the main reaction pathway in 1 M NaOH at Pb cathodes at moderate negative potentials of −0.9 V vs. RHE [21,22]. The experiment was conducted two times and the mean value of the diffusion coefficient was calculated.
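The Levich evaluation amounts to a slope fit of j_lim versus √ω. The sketch below illustrates the procedure only: the "measured" currents are generated from the reported diffusion coefficient rather than taken from the experiment, and the kinematic viscosity of the electrolyte is an assumed value.

```python
import numpy as np

# Levich relation: j_lim = 0.62 * n * F * c * D**(2/3) * nu**(-1/6) * omega**(1/2)
n_e, F = 2, 96485.0          # electrons per molecule, Faraday constant (As/mol)
c = 20.0                     # mol/m^3 (20 mM divanillin)
nu = 1.0e-6                  # m^2/s, assumed kinematic viscosity of the electrolyte
D_true = 4.07e-10            # m^2/s (= 4.07e-6 cm^2/s, the reported value)

rpm = np.array([100, 400, 900, 1600, 2500], dtype=float)
omega = rpm * 2.0 * np.pi / 60.0
j_lim = 0.62 * n_e * F * c * D_true**(2/3) * nu**(-1/6) * np.sqrt(omega)

# Recover D from the slope of j_lim versus sqrt(omega):
slope = np.polyfit(np.sqrt(omega), j_lim, 1)[0]
D_fit = (slope / (0.62 * n_e * F * c * nu**(-1/6))) ** 1.5
print(D_fit)  # ~4.07e-10 m^2/s
```

With real data the fit intercept would be inspected as well: a non-zero intercept or curvature in the Levich plot indicates kinetic limitations rather than pure mass-transport control.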
Online UV-VIS setup
The electrochemical consumption of divanillin was monitored by an online UV-VIS setup. A similar setup was used to track the vanillin electroreduction to hydrovanilloin in the same reactor setup in a previous publication [23]. The online UV-VIS setup was implemented in a bypass of the catholyte loop. It consisted of an all-quartz glass flow-through cuvette (1 mm optical path length, Hellma GmbH & Co. KG), a cuvette holder (CUV-UV/VIS, Avantes), a deuterium tungsten halogen light source (DT-Mini-2-GS, Ocean Optics) and a UV-VIS spectrometer (USB 2000+, Ocean Optics). Divanillin exhibits an absorption peak at 355 nm, which corresponds to the absorption of its carbonyl group. Upon the electroreduction of divanillin to polyvanillin this peak vanishes (Fig. 1a). For comparison, the structurally similar compound vanillin exhibits an absorption peak of its carbonyl group at 348 nm, where neither the one-electron reduction product hydrovanilloin nor the two-electron reduction product vanillyl alcohol absorbs [23]. As the setup measures the undiluted catholyte directly in the bypass, the spectrometer quickly reaches its detection limit. Therefore, the flank of the peak was used for evaluation. The calibration of the UV-VIS system was conducted with divanillin solutions with concentrations between 0.5 mM and 50 mM (Fig. 1b). For higher concentrations the sensitivity of the system was not sufficient. A linear correlation was found in the semi-logarithmic plot of the divanillin concentration versus the wavelength at an absorption of 1.25 (Fig. 1c). The calibration was carried out three times from freshly prepared solutions. In the case of the 300 mM divanillin reduction experiment, online UV-VIS measurement was not feasible due to the high absorption of divanillin, resulting in low precision and sensitivity. Therefore, aliquots were withdrawn at distinct time intervals from the catholyte and, after a 1:6 dilution with water, the samples were measured offline at the UV-VIS setup, leading to a lower number of data points in the high-concentration experiment.
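The flank-wavelength calibration can be sketched as a semi-logarithmic fit. Only the 0.5-50 mM calibration range is from the text; the wavelength readings below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical calibration points: wavelength at which the absorbance flank
# crosses A = 1.25, shifting with log10 of the divanillin concentration.
c_mM = np.array([0.5, 2.0, 10.0, 50.0])
lam_nm = np.array([340.0, 352.0, 366.0, 380.0])   # made-up flank positions

a, b = np.polyfit(lam_nm, np.log10(c_mM), 1)      # log10(c) = a*lam + b

def concentration(lam: float) -> float:
    """Divanillin concentration (mM) from the flank wavelength at A = 1.25."""
    return 10 ** (a * lam + b)

print(concentration(366.0))  # close to the 10 mM calibration point
```

Inverting the semi-log fit this way turns each recorded spectrum into a single concentration value, which is what makes the high time resolution of the online monitoring possible.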
Size exclusion chromatography (SEC)
The molecular weight distributions (MWD) of the isolated polyvanillin samples were measured on an LC system (LC 1200, Agilent Technologies) coupled with a refractive index detector using three different columns (PSS MCX 10 µm as guard column; 10 µ, 100 Å and 10 µ, 1000 Å as analytical columns) according to the literature [28]. 2-3 mg of isolated polyvanillin were dissolved in 1 mL of 0.1 M NaOH solution and 20 µL of the solution were injected into the system. The mobile phase was 0.1 M NaOH, the flow rate 1 mL min−1 and the temperature 35 °C. Conventional pullulan standards (342 g mol−1 to 805,000 g mol−1, PSS Mainz, Germany) were used for calibration; molecular weights are therefore reported versus pullulan.

2D-NMR (HSQC, 13C/1H)

2D-NMR (HSQC, 13C/1H) spectra were recorded on a 500 MHz Bruker AVANCE spectrometer. 50 mg of isolated and vacuum-dried polyvanillin were dissolved in 600 µL pyridine-d5. The samples were fully dissolved, if not stated otherwise. Chemical shifts are given in ppm downfield from TMS (δ = 0.00). The spectra were referenced to the δC/δH cross-coupling signals of pyridine-d5 (δH in ppm/δC in ppm: 7.220/123.87; 7.580/135.91; 8.740/150.35). The generated 2D-NMR spectra were processed with MestReNova software.
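For reference, the averages reported from SEC follow from the molecular weight distribution as Mn = Σ(n_i·M_i)/Σn_i and Mw = Σ(n_i·M_i²)/Σ(n_i·M_i); a toy distribution (not measured data) illustrates the calculation:

```python
import numpy as np

# Hypothetical SEC molecular weight distribution: n_i chains of molar mass M_i.
M = np.array([300.0, 600.0, 1200.0, 2400.0, 4800.0])   # g/mol (vs. pullulan)
n = np.array([5.0, 10.0, 20.0, 10.0, 5.0])             # relative chain counts

Mn = (n * M).sum() / n.sum()             # number-average molecular weight
Mw = (n * M**2).sum() / (n * M).sum()    # weight-average molecular weight
print(Mn, Mw, Mw / Mn)                   # dispersity Mw/Mn is always >= 1
```

The weight average is dominated by the longest chains, which is why Mw responds more strongly than Mn when pinacolization extends the chains and alcohol formation terminates them.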
Thermogravimetric analysis (TGA)
TGA of selected samples was performed on a TG 209 F1 Iris (Netzsch). Approximately 8 mg of sample were heated from 25 to 950 °C in an alumina crucible at a heating rate of 10 K min−1 under a synthetic air atmosphere (flow rate 20 mL min−1).
Differential scanning calorimetry (DSC)
DSC analyses of selected samples were performed on a DSC 1 device (Mettler Toledo). Approximately 2 mg of sample were weighed into a high-pressure crucible with gold sealing. The sample was heated under an N2 atmosphere at a heating rate of 10 K min−1 from 25 to 300 °C, cooled down to 25 °C and heated up again to 300 °C.
Polarization curve
First, we recorded linear sweep voltammograms (LSVs) of the blank catholyte and anolyte solutions as well as with the addition of 50 mM divanillin to the catholyte to get familiar with the electrochemical system (Fig. 2). In the blank electrolytes the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) occur at the cathode and anode side, respectively. We observed an exponential increase of the current density with increasing overpotential, showing, as expected, no mass-transport limitation of either reaction in the investigated current density region. When divanillin is added to the catholyte solution, a broad reduction peak occurs with an onset potential of the divanillin reduction at ≈ −675 mV vs. RHE. At potentials more negative than −0.9 V vs. RHE, the polarization curve approaches the HER at current densities of ≈ 20 mA cm−2. We observed parasitic currents in the potential region between −500 mV and −650 mV vs. RHE, which are probably due to the electroreduction of an oxide layer on the Zn electrode that builds up at OCP during the start-up phase, until a steady-state hydrodynamic behavior sets in. The OCP of the Zn cathode in 1 M NaOH was ≈ −450 mV vs. RHE. The value of the parasitic currents varied between ≈ 1-2 mA cm−2 depending on the length of the start-up phase, which was usually longer with divanillin in the electrolyte due to initial foam formation until the system was free of gases. Therefore, smaller parasitic currents were observed in the LSV measurement in pure 1 M NaOH compared to the electrolyte with divanillin added. To support the suggestion of the formation of an oxide layer on the Zn electrodes at OCP, accompanying CV studies of Zn electrodes in an undivided beaker cell in 1 M NaOH were performed (Additional file 1: Fig. S1). Three CV cycles between −0.45 V vs. RHE and −1.1 V vs. RHE after a resting phase of 10 min at OCP were recorded, showing a cathodic peak at ≈ −500 mV in the first cycle, which vanishes in the two consecutive CV cycles. As reduction processes of the electrolyte other than the HER can be excluded in pure 1 M NaOH, this peak is attributed to the reduction of an oxide layer formed on the Zn electrode. The elimination of this peak after the first CV cycle is probably explained by the reduction of the Zn surface after cathodic polarization. The formation of this layer was confirmed by repeating the experiment three times, each showing a peak of ≈ 1-2 mA cm−2 at ≈ −500 mV in the first cycle that vanished in the consecutive cycles.
RDE studies and limiting current density determination
In the next step, we conducted RDE studies to determine the diffusion coefficient of divanillin in order to calculate the limiting current densities of the divanillin reduction in the flow reactor system. Exemplary CVs of the divanillin reduction at different rotation rates and the corresponding Levich plot in the limiting current region are shown in the supporting information (Additional file 1: Fig. S2). We observed an onset potential for the divanillin reduction at the Pb RDE of ≈ −675 mV vs. RHE, similar to that at the Zn cathode in the flow reactor. Comparably, the cathodic onset potentials of the structurally similar compound vanillin at Zn and Pb were also similar [22]. No parasitic currents occurred, as the OCP of Pb in 1 M NaOH is slightly more positive (≈ −200 mV), no start-up phase was needed and the potential was held at −0.5 V vs. RHE before the actual CVs. A mean value of 4.07 ± 0.09 × 10−6 cm2 s−1 for the diffusion coefficient of divanillin in 1 M NaOH was calculated from the two measurements. We previously determined a slightly higher diffusion coefficient for vanillin in 1 M NaOH of 6.85 × 10−6 cm2 s−1, which agrees with divanillin being a larger compound than vanillin [23].
Applying the dimensionless hydrodynamic characterization of the flow reactor from our previous study [23], the limiting current density j_lim within the flow reactor system can be calculated for a given mean linear flow velocity according to the following equations:

Sh = 1.83 Re⁰·³⁸ Sc⁰·³³ = k_m d_e / D_Divanillin (2)

j_lim = n F k_m c (3)

where Sh is the Sherwood number, Re is the Reynolds number, Sc is the Schmidt number, k_m is the mass transport coefficient, d_e is the hydrodynamic diameter, D_Divanillin is the diffusion coefficient of divanillin, c is the bulk concentration of divanillin in the catholyte, n is the number of transferred electrons and F is the Faraday constant. A limiting current density of ≈ 18 mA cm⁻² is calculated for a divanillin concentration of 50 mM and a mean linear flow velocity of 20 cm s⁻¹ (for the detailed calculation see Additional file 1). It should be mentioned that the limiting current density should decrease for the oligomers built up in the electroreduction of divanillin to polyvanillin, as the diffusion coefficient decreases with increasing molecular weight [29]; this is not covered here.
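A minimal numerical sketch of this calculation is given below. The hydrodynamic diameter d_e and the kinematic viscosity ν are placeholder assumptions (they are not stated in this excerpt), chosen so that the result lands near the reported ≈ 18 mA cm⁻²; only the structure of the Sh correlation is taken from the text.

```python
# Limiting current density from the correlation Sh = 1.83 * Re**0.38 * Sc**0.33.
F = 96485.0   # C mol^-1, Faraday constant
n = 1         # electrons per carbonyl group
D = 4.07e-6   # cm^2 s^-1, diffusion coefficient of divanillin (RDE study)
nu = 0.011    # cm^2 s^-1, assumed kinematic viscosity of 1 M NaOH
d_e = 0.3     # cm, hypothetical hydrodynamic diameter of the flow channel
v = 20.0      # cm s^-1, mean linear flow velocity
c = 50e-6     # mol cm^-3 (50 mM divanillin)

Re = v * d_e / nu                     # Reynolds number
Sc = nu / D                           # Schmidt number
Sh = 1.83 * Re ** 0.38 * Sc ** 0.33   # Sherwood number
k_m = Sh * D / d_e                    # mass transport coefficient, cm s^-1
j_lim = n * F * k_m * c               # limiting current density, A cm^-2
print(round(j_lim * 1000, 1), "mA cm^-2")  # close to the reported ~18 mA cm^-2
```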
General synthesis
We performed galvanostatic electrolysis of the divanillin solution in the flow reactor. Galvanostatic electrolysis was chosen over potentiostatic electrolysis, since it facilitates a future scale-up and no potential measurement of the working electrode is necessary [26]. As a drawback, the selectivity of the reaction is controlled only indirectly, as the potential of the working electrode adjusts to the applied current and the reaction conditions. However, as introduced by others [30-32], the dimensionless current density γ, defined as the fraction of the applied current density j in relation to the limiting current density at t = 0,

γ = j / j_lim,t=0 (4)

readily predicts the reaction outcome and can be used as a measure for the overpotential in galvanostatic electrolysis.
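For the conditions used here (j_lim ≈ 18 mA cm⁻² at 50 mM and 20 cm s⁻¹), γ maps directly onto the current densities applied in the experiments below:

```python
# Dimensionless current density gamma = j / j_lim(t=0); 18 mA cm^-2 is the
# calculated limiting current density for 50 mM divanillin at 20 cm s^-1.
j_lim0 = 18.0  # mA cm^-2

def gamma(j_mA):
    return j_mA / j_lim0

# The three current densities applied in this study:
print({j: round(gamma(j), 2) for j in (5.0, 9.0, 18.0)})  # {5.0: 0.28, 9.0: 0.5, 18.0: 1.0}
```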
As a first step of the galvanostatic electrolysis, we selected a moderate dimensionless current density of γ = 0.5 at a starting divanillin concentration of 50 mM. The concentration course of divanillin and the faradaic efficiency for its consumption are shown in Fig. 3a, assuming a one-electron reduction of its carbonyl groups to the corresponding pinacols. It should be noted that the concentration course of divanillin is measured via the UV absorption peak of divanillin's carbonyl groups. Consequently, carbonyl groups in terminal positions of oligomeric and polymeric reaction intermediates resulting from mono-pinacolized divanillin should also be captured by the online UV-VIS measurement. Since γ is less than 1, we expected a kinetically controlled phase at the beginning of the electrolysis, transitioning to mass transport control as divanillin's carbonyl groups are consumed. This would be expressed by a linear and an exponential decrease of the substrate concentration with time (or applied charge in galvanostatic electrolysis) in the kinetic and the mass transport phase, respectively. However, we observed a slower, non-linear decrease of the substrate concentration in the kinetically controlled region, which suggests a pronounced conditioning phase of the Zn cathode. This is also seen in the course of the faradaic efficiency, which rises steeply at the start of the electrolysis until a constant value sets in. We strongly suspect this is due to the electroreduction of the zinc oxide layer that builds up during the start-up phase, while the cathode remains at the OCP, as discussed above. At applied charges higher than ≈ 3 F mol⁻¹ the mass transport limitation sets in, resulting in an exponential decrease of the substrate concentration and the faradaic efficiency. At an applied charge of > 6 F mol⁻¹ full conversion of divanillin's carbonyl groups is achieved, with no further concentration decrease. The final concentration measured by the online UV-VIS setup is not zero, which we attribute to the overlapping UV-VIS absorption spectra of the produced polyvanillin and divanillin.
The molecular weight increase due to C-C coupling by pinacolization of the divanillin substrate, measured by SEC, is shown in Fig. 3b and c. The molecular weight distribution (MWD) of the resulting polyvanillin is bimodal, which agrees with the data obtained from the polyvanillin synthesis in the batch cell [22]: a small prepeak is observed at lower molecular weights of ≈ 1200 g mol⁻¹ and a larger main peak at higher molecular weights of ≈ 4100 g mol⁻¹. Final weight-averaged molecular weights of M_w = 3588 ± 344 g mol⁻¹ and number-averaged molecular weights of M_n = 2207 ± 139 g mol⁻¹ are achieved after 8 F mol⁻¹. The calculated polydispersity of 1.5 indicates a relatively narrow MWD. No further increase of the molecular weight is observed after an applied charge of 6 F mol⁻¹, which matches the concentration course of divanillin measured by online UV-VIS. Figure 3d shows the relationship between divanillin conversion and molecular weight. The plot shows an exponential increase of M_w and M_n with increasing divanillin conversion, whereby higher molecular weights are achieved only at the end of the electrolysis at high substrate conversions. This relationship suggests a step-growth polymerization producing mainly smaller molecules, such as dimers and trimers, at low divanillin conversions. As the electrolysis progresses, C-C coupling of molecules of different degrees of polymerization can occur, leading to a rapid increase of M_w and M_n. However, the polymerization finally reaches a plateau and no further increase is observed with ongoing electrolysis. To explain the latter, we recorded 2D-NMR (HSQC, ¹³C/¹H) spectra of a partly and a fully polymerized sample at applied charges of 2 and 8 F mol⁻¹, respectively (Additional file 1: Fig. S5). The assignment of the peaks to the corresponding structural features of polyvanillin is described in detail in our previous study, where a similar structure for polyvanillin was obtained in the batch cell [22]. As expected, we observed no remaining aldehyde groups after an applied charge of 8 F mol⁻¹, which agrees with the UV-VIS and the SEC data. If no aldehyde groups remain after 8 F mol⁻¹ and pinacolization were the only occurring reaction, very large molecular weights should be observed. However, two-electron reduction of the aldehyde group to the corresponding alcohol, instead of C-C coupling by pinacolization, terminates further polymer chain growth. Alcohol production should occur mainly at the end of the electrolysis, when the carbonyl concentration is low and radical dimerization is less likely due to low local radical concentrations. Moreover, with decreasing divanillin concentration the working electrode potential shifts toward more negative values in galvanostatic electrolysis; these more negative potentials also favor alcohol production over pinacolization [21]. In general, the significant increase of the molar mass from divanillin to polyvanillin can be seen from the broadening of the peaks in the ¹H and ¹³C spectra at an applied charge of 8 F mol⁻¹ compared to 2 F mol⁻¹, e.g. the methoxy groups at δ_H = 3.7 ppm/δ_C = 55.5 ppm or the aromatic ring systems at δ_H ≈ 7-8 ppm/δ_C = 105-130 ppm. Further, we could confirm for the flow cell electrolysis the stilbene-like double-bond systems in the aliphatic regions of polyvanillin already known from our batch cell experiments. Their exact formation mechanism is still a subject of further studies, although similar stilbene-like structural features were obtained from acetylated pinacol groups and epoxides in the presence of Zn [14,33,34].
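The late rise of M_w and M_n can be illustrated with the textbook Carothers relation for ideal step-growth polymerization (our illustration; the paper does not invoke it): the number-average degree of polymerization X̄n = 1/(1 − p) stays small until very high functional-group conversion p, which mirrors Fig. 3d.

```python
# Ideal step-growth (Carothers) relation: Xn = 1 / (1 - p), with p the
# functional-group conversion. This is an idealization: in the real system the
# competing 2e- reduction to the alcohol terminates chains and caps the growth.
def carothers_xn(p):
    return 1.0 / (1.0 - p)

for p in (0.50, 0.90, 0.99):
    print(p, round(carothers_xn(p), 1))
# Xn only explodes near full conversion: roughly 2, 10 and 100 repeat units.
```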
Impact of current density
As a next step, we were interested in the impact of the current density on the concentration courses and on structural features of the resulting polymer polyvanillin. Figure 4a and b show the concentration profiles of divanillin and the faradaic efficiency for its conversion versus the reaction time for three different applied dimensionless current densities, γ = 0.28, 0.5 and 1, corresponding to current densities of 5, 9 and 18 mA cm⁻², respectively. The applied charge of 8 F mol⁻¹ and the mean linear velocity of 20 cm s⁻¹ were held constant. For the sake of completeness, concentration profiles of divanillin and the faradaic efficiency plotted against the applied charge are shown in the supporting information (Additional file 1: Figs. S3 and S4). As expected for γ ≤ 1, we observed a faster conversion with increasing current density, since the electrolysis starts in a kinetically controlled phase. For the highest current density of 18 mA cm⁻² the reaction is mass transport limited almost from the beginning of the electrolysis, which agrees with the calculated limiting current density at the given reaction conditions. Accordingly, the semi-log plot of the divanillin concentration versus time shows linear behaviour (inset in Fig. 4a) and the faradaic efficiency decreases exponentially with time. However, even at 18 mA cm⁻² the start of the electrolysis is overlaid by the suggested conditioning phase of the cathode, albeit only slightly. The conditioning phase becomes less pronounced with increasing current density, which is explained by the decreasing fraction of the total applied current consumed by the cathode conditioning.
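The linear semi-log behaviour follows from the standard first-order decay of a recirculated batch electrolysis under full mass-transport control, c(t) = c₀ exp(−k_m A t / V_t). A minimal sketch, with the electrode area and catholyte volume as illustrative assumptions:

```python
import math

# Under complete mass-transport control the substrate in a recirculated batch
# system decays exponentially: c(t) = c0 * exp(-k_m * A * t / V_t).
k_m = 3.7e-3   # cm s^-1, mass transport coefficient (order of the Sh estimate)
A = 48.0       # cm^2, hypothetical electrode area
V_t = 250.0    # cm^3, hypothetical catholyte volume
c0 = 50.0      # mM, initial divanillin concentration

def c(t_s):
    """Substrate concentration (mM) after t_s seconds of electrolysis."""
    return c0 * math.exp(-k_m * A * t_s / V_t)

# ln(c0 / c) grows linearly in time -> a straight line on a semi-log plot.
```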
Figure 4c and d show the impact of the current density on the MWD and on the achieved M_w and M_n values for polyvanillin after full divanillin conversion at an applied charge of 8 F mol⁻¹. We observed a significant decrease of M_w and M_n with increasing current density. This agrees with the suggestion that at higher current densities alcohol formation increases due to more negative potentials at the cathode, leading to earlier polymer chain termination. This suggestion was confirmed by 2D-NMR (HSQC, ¹³C/¹H) analyses of the polyvanillin synthesized at 9 and 18 mA cm⁻²: a significant increase of the terminal alcohol peak was observed in the 2D-NMR spectra (Additional file 1: Fig. S6 and Table S2). Further, the amount of stilbene-like double-bond groups also increases with increasing current density, suggesting that more severe reaction conditions also promote the stilbene formation. Interestingly, with decreasing current density the prepeak at low molar masses in the MWD decreases. This suggests that the bimodal molar mass distribution originates from the alcohol formation competing with the pinacolizing carbonyl reduction. However, as the MWD of divanillin overlaps with the prepeak, we cannot make any statement about when the prepeak actually forms.
When comparing these results to polyvanillin synthesized in an H-type batch cell in our previous study, slightly higher molecular weights were achieved in the flow cell. Moreover, we observed no significant impact of the current density on M_w and M_n in the batch cell, e.g. M_w decreased only from 3171 g mol⁻¹ at 15 mA cm⁻² to 3016 g mol⁻¹ at 60 mA cm⁻² for a Zn cathode after an applied charge of 8 F mol⁻¹ at a divanillin concentration of 100 mM [22]. One possible explanation is the more inhomogeneous distribution of the reactant concentration and the current density along the electrode surface, compared to the homogeneous distribution in a symmetrically shaped plane parallel flow reactor. Batch cells often lack reproducibility, as the orientation of the working electrode is geometrically non-equivalent with regard to the counter electrode [26]. As a result, more severe potentials and current densities than expected from the average distribution can occur at certain electrode sites in the batch cell, depending on their orientation, leading to unwanted reaction outcomes. Alcohol formation may thus occur at these cathode sites, e.g. at the backside of the working electrode, and lower molecular weights are achieved. Moreover, the sensitivity to impact parameters such as the current density becomes less pronounced. The higher molecular weights of polyvanillin synthesized in the flow cell and the significant impact of the current density on the MWD agree with this suggestion.
Increasing the divanillin start concentration
The productivity of polyvanillin is capped by the limiting current density of the divanillin reduction. As seen from Eq. (3), the limiting current density can be increased by increasing either the mass transport coefficient or the bulk concentration of divanillin. Since high flow velocities of 20 cm s⁻¹ are already present within the reactor and a further increase would lead to high pressure drops over the flow reactor, we instead increased the starting concentration of divanillin by a factor of 6 to 300 mM. We held the dimensionless current density constant at γ = 0.5, expecting a similar reaction outcome. Figure 5a shows the conversion of divanillin and the faradaic efficiency course for its consumption for the experiment conducted at a high divanillin concentration of 300 mM and a current density of 54 mA cm⁻², together with the reference experiment at the same dimensionless current density of γ = 0.5, a low divanillin concentration of 50 mM and a current density of 9 mA cm⁻². The conversion as well as the corresponding faradaic efficiency courses of the 54 mA cm⁻² experiment lie within the error bars of the 9 mA cm⁻² experiment, confirming equal conversion behaviour of divanillin. No significant change of the catholyte viscosity at the end of the reaction was measured in the high-concentration experiment. Further, we observed similar resulting M_w and M_n values of polyvanillin after an applied charge of 8 F mol⁻¹ for the syntheses at 300 mM and 50 mM divanillin concentration (Fig. 5b). However, the MWDs of both samples differ slightly, with the high-concentration sample showing a slightly higher prepeak at ≈ 1200 g mol⁻¹ and a broader main peak at the same peak maximum of 4100 g mol⁻¹ (Additional file 1: Fig. S7). This difference may be explained by a higher local radical concentration in the vicinity of the cathode at higher substrate concentrations and current densities, favoring the dimerization step. Consequently, fractions of higher molecular weights can be expected in the MWD of polyvanillin synthesized at high concentrations. At the end of the electrolysis, more severe cathode potentials occur in the 54 mA cm⁻² experiment, as the potential is dominated by the HER in the mass transport controlled region of the divanillin reduction. As a result, the alcohol formation leading to a more pronounced prepeak should increase at more negative cathode potentials. However, more studies need to be conducted to confirm this suggestion. Moreover, increasing the starting concentration led to a significant increase of the isolated yields of polyvanillin after the electrolysis, from ≈ 52-57% for 50 mM to 94% for 300 mM (Additional file 1: Table S1). This increase is well explained by the residual solubility of polyvanillin and low-molecular-weight oligomers in the acidified catholyte. The high isolated yields confirm that polyvanillin is the major product of the electrolysis and that potential by-products are produced only in very small amounts. However, the filtrate remaining after the polyvanillin separation was not analyzed. Overall, we observed equal behaviour of the divanillin reduction at a constant dimensionless current density when increasing the starting concentration in terms of conversion rates and faradaic efficiencies, but the reaction outcome is slightly different, which may be due to the complex behaviour of the reductive electrochemical polyvanillin formation.

Fig. 5 Impact of initial divanillin concentration on a conversion and faradaic efficiency courses and b weight-averaged and number-averaged molar masses. Parameters: 9 mA cm⁻² for 50 mM divanillin and 54 mA cm⁻² for 300 mM divanillin, γ = 0.5, 20 cm s⁻¹
We calculated key figures of merit, namely the space-time yield STY and the specific energy consumption E_s, for the divanillin conversion to evaluate the productivity of the polyvanillin synthesis as follows:

STY = (c_0,Divanillin · V_t · X · M_Divanillin) / (V_R · t) (5)

E_s = (I · U_cell · t) / (c_0,Divanillin · V_t · X · M_Divanillin) (6)

where c_0,Divanillin is the starting concentration of divanillin, V_t is the volume of the catholyte in the glass reservoir, X is the conversion of divanillin, M_Divanillin is the molar mass of divanillin (302 g mol⁻¹), V_R is the volume of the catholyte within the reactor, t is the reaction time, I is the total current and U_cell is the voltage between cathode and anode. Table 1 summarizes the STY and E_s values of the low- and high-concentration experiments. The reaction courses of STY and E_s for both experiments at γ = 0.5 are shown in the supporting information (Additional file 1: Fig. S8). As expected, a six-fold increase of the STY from 0.072 kg l⁻¹ h⁻¹ to 0.471 kg l⁻¹ h⁻¹ in the kinetically controlled region was achieved when increasing the divanillin concentration from 50 to 300 mM, whereby E_s only increased by 27%, from 0.938 kWh kg⁻¹ to 1.191 kWh kg⁻¹. The moderate increase of E_s results from an increase of the mean cell voltage from 2.58 V to 3.63 V due to higher overpotentials for the HER and the OER at higher current densities. The overpotential for the actual divanillin reduction should remain equal at a constant dimensionless current density and, therefore, should not contribute to the increase of the cell voltage. At high conversions of e.g. 90% the STY decreases to 0.314 kg l⁻¹ h⁻¹ and E_s increases to 1.787 kWh kg⁻¹ for the 300 mM experiment, as more and more charge is consumed by the competing HER as soon as the divanillin reduction becomes mass transport limited. For comparison, maximum STYs of 1.13 kg l⁻¹ h⁻¹ for the vanillin reduction to the hydrodimer hydrovanilloin and 1.18 kg l⁻¹ h⁻¹ for L-cysteine hydrochloride synthesis were published for similar flow reactor systems in recirculation mode at low substrate conversions at the start of the electrolysis [23,35]. The E_s values under the same reaction conditions for the hydrovanilloin and L-cysteine hydrochloride syntheses were 0.46 and 1.6 kWh kg⁻¹, respectively. Moreover, Table 2 compares the performance of this study with previous studies investigating polyvanillin synthesis in divided batch cells. As expected, a significant improvement of the space-time yield and the specific energy consumption was obtained in the flow cell compared to the previous batch cell approaches. The improvements can be attributed to higher electrode-area-to-volume ratios, better mass transport and narrower distances between the electrodes, resulting in faster conversion and lower cell voltages, which impact these key figures of merit. Lastly, it should be noted that fivefold higher molecular weights were obtained by Amarasekara et al. [25] compared to our studies in batch [22] and flow. However, no calibration standards were stated and the SEC analysis was performed in DMF, which makes the results hardly comparable.
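The two figures of merit can be computed as in the sketch below; all numerical inputs are hypothetical placeholders (catholyte volumes, current and time are not given in this excerpt), so only the formula structure, not the reported values, is reproduced.

```python
# Space-time yield and specific energy consumption for the divanillin
# conversion, following Eqs. (5) and (6). All inputs are illustrative.
M_DIVANILLIN = 302.0  # g mol^-1

def space_time_yield(c0, V_t, X, V_R, t_h):
    """STY in kg L^-1 h^-1: c0 [mol L^-1], V_t/V_R [L], X [-], t_h [h]."""
    mass_kg = c0 * V_t * X * M_DIVANILLIN / 1000.0
    return mass_kg / (V_R * t_h)

def specific_energy(I, U_cell, t_h, c0, V_t, X):
    """E_s in kWh kg^-1: I [A], U_cell [V], t_h [h]."""
    energy_kWh = I * U_cell * t_h / 1000.0
    mass_kg = c0 * V_t * X * M_DIVANILLIN / 1000.0
    return energy_kWh / mass_kg

# Hypothetical run: 300 mM, 0.1 L reservoir, 90% conversion, 0.01 L reactor, 1 h
sty = space_time_yield(c0=0.3, V_t=0.1, X=0.9, V_R=0.01, t_h=1.0)
es = specific_energy(I=2.0, U_cell=3.63, t_h=1.0, c0=0.3, V_t=0.1, X=0.9)
```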
TGA and DSC analysis of polyvanillin
Finally, TGA and DSC analyses were conducted on polyvanillin produced at 54 mA cm⁻² (Fig. 6a and b). The TGA shows that the polymer exhibits good thermal stability, with a mass loss of 50% at 484 °C; the actual decomposition of the polymer occurs at an onset temperature of 322 °C and a decomposition temperature of 541 °C. The DSC analysis shows no glass transition temperature in the first heat-up phase. However, we observed complex behavior with three broad thermal transitions at peak temperatures of 109 °C, 146 °C and 253 °C, respectively. Cooling the melt led to a reorganization of molecules in a broad zone with a peak temperature of 190 °C, and a glass transition temperature of T_g = 109 °C was observable. In the second heat-up phase a glass transition temperature of T_g = 123 °C followed by a broad melting phase with a peak temperature of 212 °C was measured. These results indicate the thermal behavior of a thermoplastic material, with properties like a glass transition as well as melting and crystallization.
No glass transition temperature and a slightly lower 50% weight-loss temperature of 440 °C were found for polyvanillin by Amarasekara et al. in their batch cell feasibility study [25]. However, their ¹H-NMR analysis exhibited sharp peaks and the presence of still-unreacted terminal aldehyde groups, indicating incomplete polymerization. The transfer to a flow reactor with improved mass transport in the present study led to full polymerization of divanillin, which could explain the difference in the thermal characterization of polyvanillin.
Comparatively, Llevot et al. found for an entirely divanillin-based polyester (corresponding to polymer P7 in the publication) a glass transition temperature of T_g = 102 °C with a 50% weight loss at T ≈ 475 °C [15]. The phenolic groups of divanillin were methylated before the polymerization process. No melting transition was found between −70 and 200 °C. The absence of crystallinity in the divanillin-based polyester, in contrast to polyvanillin, could be due to the absence of phenolic groups, resulting in an amorphous polymer. Among commercially available polymers exhibiting one glass transition and one melting temperature, polyvanillin shows the most similar thermal behavior to polyphenylene sulfide (PPS), which exhibits a glass transition temperature of T_g = 90 °C, a melting temperature of ≈ 285 °C and a decomposition temperature of ≈ 530 °C [36].
Conclusion
We systematically studied the synthesis of polyvanillin by electrochemical pinacolization of divanillin at a Zn cathode in a divided plane parallel flow reactor in recirculation mode. Prior calculation of the limiting current density for the divanillin reduction, using the hydrodynamic characterization of the flow reactor and the diffusion coefficient of divanillin, enabled us to select suitable galvanostatic reaction parameters. We used the dimensionless current density γ as a figure of merit for the reaction outcome and to estimate kinetic and mass transport limitations. We showed a charge-resolved molecular weight increase from divanillin to polyvanillin, whereby full divanillin conversion was reached after an applied charge of ≈ 6 F mol⁻¹. Despite the negative onset potential of the divanillin reduction of ≈ −650 mV vs. RHE, faradaic efficiencies of 50-60% were reached in the kinetically controlled region due to the high HER overpotential at Zn cathodes. A plot of divanillin conversion against the molecular weight of the product revealed a step-growth polymerization for the polyvanillin synthesis, whereby the competing 2e⁻ reduction to the corresponding alcohol terminates the chain growth and caps the maximum final molecular weight of polyvanillin. This was confirmed by 2D-NMR (HSQC, ¹³C/¹H), which showed, besides the expected pinacol and terminal alcohol groups, stilbene-like double bonds in the aliphatic region of the resulting polymer, confirming the structure of polyvanillin synthesized in previous batch studies [22]. The molecular weight of polyvanillin after complete consumption of the carbonyl groups in divanillin decreased with increasing current density due to an increasing formation of terminal alcohol groups. Lastly, we were able to increase the productivity of the polyvanillin synthesis by a sixfold increase of the divanillin starting concentration. While maintaining a constant dimensionless current density, the electrolysis was conducted at a current density of 54 mA cm⁻² for a divanillin starting concentration of 300 mM, showing similar conversion behavior compared to the 50 mM experiment at 9 mA cm⁻². Promising space-time yields of up to 0.47 kg l⁻¹ h⁻¹ at specific energy consumptions of 1.19 kWh kg⁻¹ could be reached for the polyvanillin production, with averaged molecular weights of M_w = 3700 g mol⁻¹ and M_n = 2100 g mol⁻¹. The characterization of polyvanillin by TGA and DSC analysis revealed good thermal stability and the thermal behavior of a thermoplastic material, with T_g = 109-123 °C, T_melting = 190-212 °C and T_decom = 541 °C.
Fig. 1 a UV-VIS spectrum of divanillin (7.8 µM) and polyvanillin (equivalent mass to divanillin) in 1 M NaOH recorded on a UV-1650PC spectrometer (Shimadzu). b Exemplary calibration spectra of highly concentrated divanillin solutions (1 M NaOH) in the online UV-VIS setup. c Semi-logarithmic fit of the calibration line in the online UV-VIS setup. Fit function: wavelength [nm] = 380.37 nm + 25.31 nm L mmol⁻¹ · c_divanillin [mmol L⁻¹]. R² = 0.9975

Fig. 2 LSV of the HER at the Zn cathode in the blank catholyte (black), at the Zn cathode with addition of 50 mM divanillin to the catholyte (red) and of the OER at the Ni anode (blue). Potentials were corrected for the iR drop. Potential sweep rate: 3.33 mV s⁻¹. Mean linear flow rate: 20 cm s⁻¹
Fig. 3 Exemplary courses of a substrate concentration measured by online UV-VIS and faradaic efficiency (assuming 100% pinacolization) vs. charge, b molecular weight distributions measured by SEC vs. charge, c weight-averaged and number-averaged molar mass and polydispersity vs. charge, d weight-averaged and number-averaged molar mass vs. conversion measured by online UV-VIS. Parameters: 50 mM initial divanillin concentration, 9 mA cm⁻², 20 cm s⁻¹ and γ = 0.5. Error bars indicate standard deviations of 3 experiments up to 4 F mol⁻¹ and 4 experiments up to 8 F mol⁻¹
Table 1 Impact of the high-concentration, high-current-density experiment on the molar mass of the synthesized polyvanillin and on key figures of merit for performance evaluation. Parameters: 20 cm s⁻¹, 8 F mol⁻¹. Error bars indicate standard deviations of at least 3 experiments. a After conditioning phase of the electrode
Table 2 Comparison of the performance of the divanillin reduction in this study with the literature. a Current density cannot be stated, as the immersed area of the electrodes (2.5 × 9 cm²) in the electrolyte was not given. b Versus pullulan standard. c Average from 6 polymerization trials; no calibration standard stated (SEC performed with DMF as solvent). d Calculated using the isolated yield of polyvanillin after electrolysis. e Average cell voltage of 6.5 V. f Assuming a cell voltage of 12 V (power supply)
Application of Glomus deserticola as bio-fertilizer of Gasteraloe in saline growing medium and biocontrol of Fusarium sp
In this study the possibility of using a biostimulant based on Glomus deserticola to improve the growth and quality of Gasteraloe plants and to protect them against Fusarium sp. was evaluated. The objectives of the work were: i) to use Glomus deserticola to assess whether this arbuscular mycorrhizal fungus can increase the growth rate of Gasteraloe plants, which are generally slow in their growth cycle; ii) to assess whether the use of Glomus deserticola can increase plant resistance under saline substrate conditions; iii) to evaluate whether the use of Glomus deserticola provides greater protection of the plants from Fusarium sp., which often affects the roots of these succulents. The four experimental groups in cultivation were: i) a group without Glomus, irrigated with water, on a previously fertilized substrate; ii) a group without Glomus, irrigated with salt water (0.50 g of NaCl kg⁻¹ dry soil), on a previously fertilized substrate; iii) a group with Glomus deserticola, irrigated with water, on a previously fertilized substrate; iv) a group with Glomus deserticola, irrigated with salt water (0.50 g of NaCl kg⁻¹ dry soil), on a previously fertilized substrate. The trial showed a significant increase in the agronomic parameters analyzed in plants treated with Glomus deserticola for Gasteraloe cv. Magica and Gasteraloe aristata x platinum. The trial also highlighted how the use of mycorrhizae, in particular Glomus spp., can confer greater resistance against salt stress and greater protection against attacks of Fusarium sp. The application of mycorrhizae in the cultivation of succulent plants offers growers the possibility of obtaining a superior-quality product, greater resistance to biotic and abiotic stress, and an increase in the growth rate and in the mineral content of the tissues, aspects that translate into improved plant quality and, consequently, better marketability.

Key-words: Sustainable Applications; Succulent Plants; Biofertilizers; Rhizosphere; Microorganisms
Introduction
Gasteraloe plants, also known as x Gastrolea, are a particular type of succulent plant obtained from the hybridization between Gasteria and Aloe. Native to South Africa, Gasteraloe plants have thick succulent leaves with toothed margins. These plants usually produce tubular flowers that bloom on stems that can be up to 1 m long. Gasteraloe hybrids are stemless or almost stemless. Gonialoe and Aristaloe aristata are particularly used for these hybrids, as they are much more susceptible to hybridization with Gasteria than most other "aloes" [1]. Propagation takes place through offsets that grow from the base of the mother plant; the plants need light and must be protected from the afternoon sun. In the Mediterranean environment Gasteraloe usually grows as a perennial [2]. Arbuscular mycorrhizal fungi (AMF) are symbiotic soil fungi that can colonize the roots of most plants. The genus Glomus lives mainly in neutral and alkaline agricultural soils. The fungus-plant association usually increases water and nutrient uptake by the roots [3,4], improving the hydraulic conductivity of the roots [5] or modifying the root architecture [6]. Accordingly, the plant gains a variety of benefits that can lead to increased growth, improved water relations [7], increased nutrient uptake compared to non-mycorrhizal controls [8] and a change in root morphology [9]. The effect of AMF on the drought resistance of host plants has been studied [10,11], and it has been shown that mycorrhizal infection increases the ability of plants to extract water and nutrients [12,13]. The response of mycorrhizal plants to drought stress depends on the specific fungal species [14], the interaction between the plant species and the introduced fungi, and the level of drought stress. This association is interesting for those looking for drought-resistant plants that can be used for re-vegetation and soil conservation [15] in semi-arid areas where the availability of water for irrigation is limited.
In the Mediterranean area, the limited rainfall and the high evaporative demand of the atmosphere combine with anthropogenic disturbances, making desertification a serious problem that generates a progressive reduction of plant cover coupled with rapid soil erosion [16].
AMF (Glomus deserticola) are obligate symbiotic biotrophs that increase plants' resistance to drought and pathogens, increase the contact area of plants with the soil, increase the absorption area of the roots up to 47 times, improve the absorption of water and mineral elements, increase the accidental formation of roots, promote plant growth and plant growth [17,18,19]. So far, literature has shown that, there is no information on the influence of G. deserticola on the growth and defense of succulent plants In this experiment, the main objective was to: 1) Use Glomus deserticola to assess whether the use of this Arbuscular mycorrhizal fungi can increase the growth rate of Gasteraloe plants generally slow in their growth cycle; 2) Consider if the use of Glomus deserticola can lead to an increase in plant resistance under saline substrate conditions; 3) Evaluate how the use of Glomus deserticola allows greater protection of plants from Fusarium sp. which often affects the roots of these succulents. The plants were placed in ø 12 cm pots; 60 plants per thesis, divided into 3 replicas of 20 plants each. All plants were fertilized with a controlled release fertilizer (2 kg m -3 Osmocote Pro®, 6 months with 190 g/kg N, 39 g/kg P, 83 g/kg K) mixed with the growing medium before transplanting.
The plants were watered twice per week and grown for 8 months, using drip irrigation activated by a timer whose program was adjusted weekly according to climatic conditions and the leaching fraction. On October 15, 2020, plant height, number of leaves, vegetative and root weight, number and weight of new shoots, and number and height of inflorescences were recorded. Plant mortality following attacks of Fusarium sp. was also recorded, and N, P and K contents were analyzed (Kjeldahl UDK 169; Jenway 630501 6300 visible spectrophotometer).
Statistics
The experiment was carried out in a randomized complete block design. Collected data were analysed by one-way ANOVA, using the GLM univariate procedure, to assess significant (P ≤ 0.05, 0.01 and 0.001) differences among treatments. Mean values were then separated by the LSD multiple-range test (P = 0.05). Statistics and graphics were produced with CoStat (version 6.451) and Excel (Office 2010).
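The pipeline described above (one-way ANOVA followed by Fisher's LSD mean separation) can be sketched in plain Python. The study used CoStat; the sketch below reimplements the same logic for a balanced design, and the measurement values are illustrative, not the study's data.

```python
# One-way ANOVA + Fisher's LSD for a balanced design (pure stdlib sketch).
# Treatment names and values are hypothetical, not the study's measurements.
from statistics import mean

# plant height (cm) per treatment, 3 replicates each
treatments = {
    "control": [12.1, 11.8, 12.5],
    "GD":      [15.2, 15.9, 14.8],   # Glomus deserticola
    "GD+salt": [13.9, 14.2, 13.5],
}

groups = list(treatments.values())
k = len(groups)
n = len(groups[0])                    # replicates per treatment (balanced)
grand = mean(v for g in groups for v in g)

ss_between = sum(n * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
df_b, df_e = k - 1, k * n - k
f_stat = (ss_between / df_b) / (ss_within / df_e)

# standard table values: F(2, 6; 0.05) = 5.14, t(0.975; 6) = 2.447
significant = f_stat > 5.14
lsd = 2.447 * (2 * (ss_within / df_e) / n) ** 0.5
print(f"F = {f_stat:.1f}, significant: {significant}, LSD(0.05) = {lsd:.2f}")
```

Two treatment means then differ at P = 0.05 whenever their absolute difference exceeds the LSD value.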
Plant growth
The test showed a significant increase in the agronomic parameters analyzed in plants treated with Glomus deserticola, for both Gasteraloe cv. Magica and Gasteraloe aristata x platinum. The test also highlighted how the use of mycorrhizae, in particular of Glomus spp., can confer greater resistance to salt stress and greater protection against attacks of Fusarium sp.
In fact, all plants treated with Glomus deserticola (GD) showed a significant increase in height, number of leaves per plant, vegetative and root weight, and number and weight of new shoots and inflorescences. There was also an increase in the nitrogen, phosphorus and potassium content in the tissue of plants grown in substrates inoculated with mycorrhizae. The trial also demonstrated that the use of these microorganisms has a beneficial effect against pathogenic fungi, in particular Fusarium sp.
In Gasteraloe cv. Magica in particular (Table 1), the treatment with Glomus deserticola also resulted in a significant increase in the nitrogen, phosphorus and potassium content of the plant tissues. For both types of Gasteraloe in cultivation, the GD treatment performed best, highlighting how mycorrhizae can improve the absorption of minerals from the soil even under saline stress. In the treatment with Glomus deserticola there was also a significant reduction in the presence of Fusarium sp., an effect probably due to the biocontrol action of the mycorrhizae in the substrate.
Discussion
A wide range of relationships can be established between plant roots and fungi. In these relationships the plant shows no pathological symptoms due to the presence of the fungal organisms. The classification of mycorrhizae is based both on morphological aspects and on where the fungus is located. Mycorrhiza is mainly established on the lateral roots and branches. Mycorrhizal roots remain shorter and tend to have a larger diameter. Their external appearance varies depending on the type of fungus, the intensity of the infection and the way the root system of the plant grows [20].
The intensity of mycorrhizal infection varies from soil to soil. The amount of roots is higher in acid humus mor soils than in mull soils. The formation of mycorrhizal roots is favoured by conditions of nutrient deficiency, especially nitrogen, as well as intense photosynthetic activity. It seems therefore that the carbohydrate content of the roots is a factor of decisive importance and that any condition that favors the presence of an excess of carbohydrates stimulates mycorrhizal infection [21].
Mycorrhizal roots have a higher capacity to absorb mineral elements, especially nitrogen and phosphorus, than normal roots. This capacity, useful in poor soils, is favored by a greater absorbing surface area, also because mycelial filaments branch off from the fungal sheath and penetrate the surrounding soil. In addition, the fungus appears to carry out a very intense metabolic activity, and this activity contributes to the mobilization of nutrients [12].
In this test, plants treated with Glomus deserticola showed a significant increase in plant height and number of leaves, vegetative and root weight, number and weight of new shoots, and number and weight of inflorescences. The experiment also showed how the use of mycorrhizae can increase plants' resistance to saline stress: plants grown in a substrate with Glomus deserticola and irrigated with salt water grew more than control plants irrigated with the same water and salt. It was also evident that the use of microorganisms, in particular mycorrhizae, can have a biocontrol effect against plant pathogens; in this case there was a significant reduction of the mortality caused by Fusarium sp. In addition, the use of mycorrhizae determines a significant increase in root growth and consequently in the absorption of water and mineral nutrients, a mechanism that in turn increases the mineral content of plant tissues.
Arbuscular mycorrhizae are characterized by the formation of unique structures, arbuscules and vesicles, by fungi of the phylum Glomeromycota. In this symbiotic association, the fungus helps the plant to capture nutrients such as phosphorus, sulfur, nitrogen and micronutrients from the soil. It is believed that the development of symbiosis with arbuscular mycorrhizae played a crucial role in the initial colonization of the soil by plants and in the evolution of vascular plants [22]. This symbiosis is a highly evolved mutualistic relationship between fungi and plants. Arbuscular mycorrhizae are found in 80% of known vascular plant families. The enormous advances in research on mycorrhizal physiology and ecology in the last 40 years have led to a greater understanding of the multiple functions of arbuscular mycorrhizae in the ecosystem. This knowledge is applicable to human efforts in ecosystem management and restoration, and in agriculture [23].
Conclusion
The test has shown how the use of Glomus deserticola in growing media can improve the quality and growth of Gasteraloe plants, in particular by increasing the height and number of leaves, the vegetative and root weight, and the number of new shoots and inflorescences. In addition, plants treated with mycorrhizae showed a higher resistance to saline stress, a higher mineral content in the tissues and a higher resistance to attacks of Fusarium sp. The use of Glomus deserticola can therefore increase the growth rate of succulent plants such as Gasteraloe and provide greater protection against fungal pathogens.
The application of mycorrhizae in the cultivation of succulent plants offers growers the possibility of obtaining a superior-quality product, greater resistance to biotic and abiotic stress, and an increase in the growth rate and in the mineral content of the tissues, aspects that translate into improved plant quality and, consequently, easier commercialization.
Acknowledgments
The article is part of the "Microsuc" project: microorganisms for the growth and protection of cacti and succulent plants.
Disclosure of conflict of interest
The author declares no conflict of interest.
Improved limits on a hypothetical X(16.7) boson and a dark photon decaying into e+e- pairs
The improved results on a direct search for a new X(16.7 MeV) boson which could explain the anomalous excess of e+e- pairs observed in excited 8Be nucleus decays (the "beryllium anomaly") are reported. Due to its coupling to electrons, the X boson could be produced in the bremsstrahlung reaction e-Z → e-ZX by a high-energy beam of electrons incident on the active target of the NA64 experiment at the CERN SPS and observed through its subsequent decay into an e+e- pair. No evidence for such decays was found in the combined analysis of the data samples, with total statistics corresponding to 8.4 × 10^10 electrons on target collected in 2017 and 2018. This allows us to set new limits on the X-e- coupling in the range 1.2 × 10^-4 < ε_e < 6.8 × 10^-4, excluding part of the parameter space favored by the beryllium anomaly. We also set new bounds on the mixing strength of photons with dark photons (A') from the non-observation of the decay A' → e+e- of bremsstrahlung A' with a mass below 24 MeV.
Recently, the search for new light bosons weakly coupled to SM particles was additionally inspired by the observation in the ATOMKI experiment by Krasznahorkay et al. [1,2] of a ∼7σ excess of events in the invariant mass distributions of e+e- pairs produced in the nuclear transitions of excited 8Be* to its ground state via internal pair creation. It has been shown that this anomaly can be interpreted as the emission of a new protophobic gauge boson X with a mass of 16.7 MeV decaying into an e+e- pair [3,4]. This explanation of the anomaly was found to be consistent with the existing constraints assuming that the X has a non-universal coupling to quarks, a coupling to electrons in the range 2 × 10^-4 ≲ ε_e ≲ 1.4 × 10^-3, and a lifetime 10^-14 ≲ τ_X ≲ 10^-12 s. It is interesting that a new boson with such relatively large couplings to charged leptons could also resolve the so-called (g − 2)_μ anomaly, the discrepancy between the measured and predicted values of the muon anomalous magnetic moment. This has motivated worldwide efforts towards experimental searches, see, e.g., Refs. [5,6], and studies of the phenomenological aspects of light vector bosons weakly coupled to quarks and leptons, see, e.g., Refs. [7][8][9][10][11][12]. The latest experimental results from the ATOMKI group show a similar excess of events at approximately the same invariant mass in the nuclear transitions of another nucleus, 4He [13]. This further increases the importance of independent searches for a new particle X.
Another strong motivation to search for new light bosons decaying into e+e- pairs is provided by the dark matter puzzle. An interesting possibility is that, in addition to gravity, a new force between the dark sector and visible matter, carried by a new vector boson A' (dark photon), might exist [14,15]. Such an A' could have a mass m_A' ≲ 1 GeV, associated with a spontaneously broken gauged U(1)_D symmetry, and would couple to the Standard Model (SM) through kinetic mixing with the ordinary photon, −(ε/2) F_μν A'^μν, parameterized by the mixing strength ε ≪ 1 [16][17][18]; for a review see, e.g., Refs. [5,19,20]. A number of previous experiments, such as beam dump [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35], fixed target [36][37][38], collider [39][40][41] and rare particle decay searches [42][43][44][45][46][47][48][49][50][51][52][53], have already put stringent constraints on the mass m_A' and mixing ε of such dark photons, excluding, in particular, the parameter space region favored by the (g − 2)_μ anomaly. However, a large range of mixing strengths, 10^-4 ≲ ε ≲ 10^-3, corresponding to a short-lived A', remains unexplored. These values of ε could naturally be obtained from the loop effects of particles charged under both the dark and SM U(1) interactions, with a typical 1-loop value ε = e·g_D/16π^2 [18], where g_D is the coupling constant of the U(1)_D gauge interactions. The search for e+e- decays of new short-lived particles at the CERN SPS was performed by the NA64 experiment in 2017 [54]. We report here the improved results from the NA64 experiment obtained using the data collected in 2018, in a new run at the CERN SPS performed after optimization of the experiment's configuration and parameters.
The NA64 experiment employs the optimized electron beam from the H4 beam line of the CERN SPS. The beam delivers 5 × 10^6 e- per SPS spill of 4.8 s, produced by the primary 400 GeV proton beam with an intensity of a few × 10^12 protons on target. The NA64 setup designed for the searches for X bosons and A' is schematically shown in Fig. 1. The thin scintillation counters S1-S3 and V0 are used for the beam definition, while another one, S4, is used to detect the e+e- pairs. The detector is equipped with a magnetic spectrometer consisting of two MBPL magnets and a low-material-budget tracker. The tracker is a set of four upstream Micromegas (MM) chambers for the incoming e- angle selection, four GEM chambers and three straw tube planes allowing the reconstruction of the outgoing tracks [59,60]. To enhance the electron identification, the synchrotron radiation (SR) emitted by the electrons is used for their efficient tagging and for additional suppression of the initial hadron contamination in the beam from π/e- ∼ 10^-2 down to the level ≲ 10^-6 [58,61]. The use of SR detectors (SRD) is a key point for the hadron background suppression and for the improvement of the sensitivity compared to the previous electron beam-dump searches [25,26]. The dump is a compact electromagnetic (EM) calorimeter, WCAL, made as short as possible to maximize the sensitivity to short lifetimes while keeping the leakage of particles at a small level. The WCAL was designed to absorb not the full energy of the shower generated by the primary electrons, but the energy of the showers produced by the recoil electrons from the primary reaction (1), which is typically significantly lower. The WCAL is assembled from tungsten and plastic scintillator plates with wavelength-shifting fiber read-out. The first five layers of the WCAL are separated from the main part (WCAL preshower).
Immediately after the WCAL there are the veto counters W2 and V2, and, several meters downstream, the decay counter S4 and tracking detectors. These detectors are followed by another EM calorimeter (ECAL), which is a matrix of 6 × 6 shashlik-type lead-plastic scintillator sandwich modules [58]. The ECAL is 40 radiation lengths (X0) deep, with the first 4 X0 serving as a preshower subdetector. Downstream of the ECAL the detector is equipped with a high-efficiency counter VETO and a thick hadron calorimeter (HCAL) [58] used as a hadron veto and muon identifier.
The events are collected with a hardware trigger requiring an in-time energy deposition in S1-S3, no energy deposition in V0, and E_WCAL ≲ 0.7 × E_beam. The latter requirement was not used in the runs used for calibration (calibration beams).
In order to increase the sensitivity to short-lived X bosons (higher ε), the following optimization steps were performed for the 2018 run: (i) the beam energy was increased to 150 GeV; (ii) a thinner counter W2 was installed immediately after the last tungsten plate inside the WCAL box; (iii) more tracking detectors were installed between the WCAL and the ECAL. In addition, a vacuum pipe was installed immediately after the WCAL, and the distance between the WCAL and ECAL was increased. These changes would allow full track and vertex reconstruction to be performed, as an immediate additional check in case of signal observation, provided the e+e- pair energy is not too high.
To choose the selection criteria, to calculate efficiencies and to estimate backgrounds, a package for the detailed full simulation of the experiment, based on Geant4 [62,63], was developed. It contains a subpackage for the simulation of various types of dark matter particles based on the exact tree-level calculation of cross sections [65].
The method of the search for A' → e+e- (or X → e+e-) decays is described in [55,56,64,65]. If the A' exists, it could be produced via its coupling to electrons when high-energy electrons scatter off nuclei of the active WCAL dump target, followed by the decay into an e+e- pair:

e- Z → e- Z A'; A' → e+e-.   (1)

The reaction (1) typically occurs within the first few radiation lengths (X0) of the WCAL. The downstream part of the WCAL serves as a dump to absorb completely the EM shower tail. The bremsstrahlung A' would penetrate the rest of the dump and the veto counter without interactions and then decay in flight into an e+e- pair in the decay volume downstream of the WCAL. A fraction f of the primary beam energy, E1 = f·E0, is deposited in the WCAL by the recoil electron from the reaction (1). The remaining part of the primary electron energy, E2 = (1 − f)·E0, is transferred through the dump by the A' and deposited in the second downstream calorimeter (ECAL) via the A'(X) → e+e- decay in flight, as shown in Fig. 1.
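The interplay between the A' lifetime, its boost, and the dump geometry can be illustrated with a short numerical sketch: for a dump of length L_dump followed by a decay gap L_gap, the probability of a decay in the gap is P = exp(−L_dump/l) − exp(−(L_dump + L_gap)/l), with lab-frame decay length l = γcτ. All geometry and lifetime numbers below are assumptions for illustration, not NA64 parameters.

```python
# Decay-in-gap probability for a boosted short-lived particle.
# Geometry (L_dump, L_gap) and lifetime tau are illustrative assumptions.
import math

C = 3.0e8  # speed of light, m/s

def decay_prob(e_gev, m_gev, tau_s, l_dump_m, l_gap_m):
    gamma = e_gev / m_gev                 # ultrarelativistic boost factor
    l = gamma * C * tau_s                 # mean decay length in the lab frame
    return math.exp(-l_dump_m / l) - math.exp(-(l_dump_m + l_gap_m) / l)

# for tau ~ 1e-13 s and m = 16.7 MeV, a 100 GeV A' flies gamma*c*tau ≈ 0.18 m
p_low = decay_prob(50.0, 0.0167, 1e-13, 0.3, 3.0)
p_high = decay_prob(120.0, 0.0167, 1e-13, 0.3, 3.0)
print(p_high > p_low)   # a higher-energy A' more often survives the dump
```

This is why, for the same decay probability behind the dump, a shorter-lived (larger ε) A' must carry a larger fraction of the beam energy.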
The occurrence of A' → e+e- decays produced in e-Z interactions would appear as an excess of events with two EM-like showers in the detector, one shower in the WCAL and another one in the ECAL, with the total energy E_tot = E_WCAL + E_ECAL equal to the beam energy (E0), above that expected from the background sources.
The candidate events were selected with the following criteria: (i) small energy in the veto counter (W2 in 2018), well below one MIP (the most probable energy deposition of a minimum ionizing particle); the exact cut was slightly different for different periods and was optimized taking into account the energy resolution, the electronic noise and the pileup effects in the counter; (ii) the signal in the decay counter S4 is consistent with two MIPs; (iii) the sum of the energies deposited in the WCAL+ECAL is equal to the beam energy within the boundaries determined by the energy resolution of these detectors, with at least 25 GeV deposited in the ECAL; (iv) the shower in the WCAL starts to develop within the first few X0, which is ensured by the WCAL preshower energy cut; (v) the cell with the maximal energy deposition in the ECAL is (3,3), the cell on the axis of the beam bent by the magnets; (vi) the longitudinal and lateral shapes of the shower in the ECAL are consistent with a single EM shower. The longitudinal shape is checked by the cut on the energy deposition in the ECAL preshower. Checking the lateral shower shape does not decrease the efficiency for signal events because the distance between e- and e+ in the ECAL is significantly smaller than the ECAL cell size. Finally, the rejection of events with hadrons in the final state was based on the energy deposition in the VETO and HCAL.
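The energy-conservation requirement of criterion (iii) can be sketched as a simple cut function. The resolution value, the 3σ window, and the helper name are illustrative assumptions, not the cuts actually used by the experiment.

```python
# Toy sketch of selection criterion (iii): the WCAL+ECAL energy sum must be
# compatible with the beam energy, with at least 25 GeV in the ECAL.
# SIGMA_TOT and the 3-sigma window are assumed values, not NA64 parameters.
E_BEAM = 150.0        # GeV (2018 run)
SIGMA_TOT = 5.0       # assumed combined energy resolution, GeV
E_ECAL_MIN = 25.0     # GeV, as stated in criterion (iii)

def passes_energy_sum(e_wcal: float, e_ecal: float) -> bool:
    e_tot = e_wcal + e_ecal
    return abs(e_tot - E_BEAM) < 3 * SIGMA_TOT and e_ecal >= E_ECAL_MIN

# signal-like event: recoil electron in the WCAL, A' decay products in the ECAL
print(passes_energy_sum(40.0, 108.0))   # True: E_tot = 148 GeV, within window
print(passes_energy_sum(40.0, 20.0))    # False: too little energy in the ECAL
```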
As in the previous analyses [57,58], in order to check the efficiencies and the reliability of the MC simulations, we selected a clean sample of 10^5 μ+μ- events with E_WCAL < 0.6 × E_beam from the QED muon pair production in the dump (dimuons). This rare process is dominated by the reaction e-Z → e-Zγ; γ → μ+μ-, a photon conversion into a muon pair on a dump nucleus. We performed various comparisons between these events and the corresponding MC simulated sample and applied the estimated efficiency corrections to the MC events.
The counter W2 is very important for this analysis. It is made using the same technology as the tiles of the WCAL and is installed inside the WCAL box, as close as possible to the possible place of A' creation. We paid special attention to check that it works correctly and to make the MC simulation of this counter as close to the real data as possible. In the simulation we took into account the following effects:
• fluctuations of the number of photoelectrons from the photocathode;
• the pulse reconstruction threshold curve for the counter below 0.8 MIP;
• a small cross-talk between the WCAL and W2 signals;
• uncertainties of the W2 pulse reconstruction due to readout electronic noise and pileup effects.
The cross-talk between the neighboring WCAL and W2 signals includes contributions from the light cross-talk and the electronic cross-talk between the two channels. The average cross-talk value was assumed to be proportional to the energy deposition in the WCAL.
In Fig. 2 the comparison of the MC simulation with data is shown for muons selected in the hadron beam and for the electron beam with several different selections. There is some remaining disagreement for the electron calibration beam and for dimuons. However, the agreement for dimuons becomes better for smaller energy in the WCAL, i.e., for the conditions that we would have in signal events. For reliability, we also estimated the systematic error of the signal efficiency due to W2 by changing the W2 threshold (30% up and down) and comparing the signal efficiencies. The systematic error calculated this way is 10%. It was used in the final statistical analysis together with the other systematic errors. The energy deposition in W2 expected for detectable signal events (with A' decays after the last tungsten plate) is shown in Fig. 3. It is significantly smaller than for the electrons from the primary beam (Fig. 2, lower left plot). It is also smaller for larger values of ε, since the short-lived A' should have a higher energy for the same probability to decay after the WCAL tungsten plates, which means a lower energy of the recoil electron (shorter shower).
The main background in this search comes from K0_S → π0π0 decays of K0 mainly produced by hadrons misidentified as electrons [54]. The K0 can pass the veto counters without energy deposition and decay into π0π0. These π0 decay immediately into photons that can convert on the setup material into e+e- pairs upstream of the S4. The decay chain K0_S → π0π0; π0 → γe+e- is also possible. We estimated this background using both simulation and data. For this, we selected a sample of neutral events by changing the cut (ii) to E_S4 < 0.5 MIP. This sample has 3 events in the 2017 data. No events were found with the standard criteria in the 2018 data; for this reason, we relaxed criteria (iii) and (vi) for this sample. The distribution of neutral events is shown in Fig. 4. The MC sample of K0_S was simulated according to the distributions predicted for hadron interactions in the WCAL. With this sample we calculated the number of neutral and signal-like events passing the criteria. This gives the prediction of the number of background events: 0.06 for the 2017 data and 0.005 for the 2018 data (Table I). The smaller number of neutral events and the lower background in the 2018 data are expected because, due to the increased distance between the WCAL and ECAL, fewer K0_S events pass the criteria (v) and (vi). In addition, the background is decreased due to the vacuum pipe installed upstream of the S4.
The charge-exchange reaction π-p → (≥1)π0 + n + ..., which can occur in the last layers of the WCAL with decay photons escaping the dump without interactions and accompanied by poorly detected secondaries, is another source of fake signal. To evaluate this background we used the extrapolation of the charge-exchange cross sections, σ ∼ Z^(2/3), measured on different nuclei [66]. The beam pion flux suppression by the SRD tagging is taken into account in the estimation. The background from punchthrough π- can appear because of the small inefficiency of the veto counter, mainly due to pile-up. It was estimated using simulation and the data from the calibration runs with a hadron beam. The contribution from beam kaon decays in flight, K- → e-νπ+π- (K_e4), was estimated from a simulation with biased lifetime and found to be negligible. The background from dimuon production in the dump, e-Z → e-Zμ+μ-, with either π+π- or μ+μ- pairs misidentified as an EM event in the ECAL, was also found to be negligible.

Table I. Expected numbers of background events.
Source of background | 2017 data | 2018 data
K0_S → 2π0 | 0.06 ± 0.034 | 0.005 ± 0.003
πN → (≥ 1)π0 + n + ... | |
After determining and optimizing the selection criteria and estimating the background levels, we examined the signal box and found no candidates.
The combined 90% confidence level (C.L.) upper limits for the mixing strength were determined from the 90% C.L. upper limit for the expected number of signal events, N^90%_A', by using the modified frequentist approach for confidence levels (C.L.), taking the profile likelihood as a test statistic in the asymptotic approximation [67][68][69]. The total number of expected signal events in the signal box was the sum of expected events from the 2017 and 2018 runs:

N_A'(ε, m_A') = Σ_i n^i_EOT · P^i_tot · n^i_A'(ε, m_A'),   (2)

where n^i_EOT is the effective number of EOT in run i (5.4 × 10^10 and 3 × 10^10), P^i_tot is the signal efficiency in run i, and n^i_A'(ε, m_A') is the number of the A' → e+e- decays in the decay volume with energy E_A' > 25 GeV per EOT, calculated under the assumption that this decay mode is predominant, see, e.g., Eq. (3.7) in Ref. [56]. The value n^i_EOT takes into account the data acquisition system (DAQ) dead time. Each i-th entry in this sum was calculated by simulating signal events for the corresponding beam running conditions and processing them through the reconstruction program with the same selection criteria and efficiency corrections as for the data sample from run i. In the overall signal efficiency for each run, the acceptance loss due to pileup in the veto detectors was taken into account.
(Fig. 5 caption: the region of the parameter space favored by the 8Be* anomaly [3,4] is also shown (red area), together with the constraints on the mixing from the experiments E774 [26], E141 [23], BaBar [41], KLOE [46], HADES [48], PHENIX [49], NA48 [51], and bounds from the electron anomalous magnetic moment (g − 2)_e [72].)
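The summed signal-yield expression above can be illustrated with a short numerical sketch. The effective EOT values are those quoted in the text; the per-run efficiencies and per-EOT decay yields are placeholders, not the experiment's simulated values.

```python
# Sketch of the combined expected-signal sum over the two runs:
#   N_signal = sum_i n_EOT^i * P_tot^i * n_A'^i(eps, m_A')
# Efficiencies and per-EOT yields below are illustrative placeholders.
runs = [
    # (effective EOT, signal efficiency P_tot, A' decays per EOT)
    (5.4e10, 0.5, 1.0e-12),   # 2017 run
    (3.0e10, 0.6, 1.2e-12),   # 2018 run
]

n_signal = sum(n_eot * p_tot * n_decays for n_eot, p_tot, n_decays in runs)
print(f"expected signal events in the box: {n_signal:.4f}")
```

With zero observed candidates and negligible background, a 90% C.L. limit then excludes any (ε, m_A') point for which this sum exceeds the corresponding upper bound on the signal count.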
The A' yield from the dump was calculated as described in Ref. [65]. These calculations were cross-checked with the calculations of Refs. [70,71]. The 10% difference between the two calculations was accounted for as a systematic uncertainty in n_A'(ε, m_A'). The total systematic uncertainty on N_A', calculated by adding all errors in quadrature, did not exceed 25% for both runs. The combined 90% C.L. exclusion limits on the mixing strength ε as a function of the A' mass are shown in Fig. 5, together with the current constraints from other experiments. Our results exclude the X boson as an explanation for the 8Be* anomaly for the X-e- coupling ε_e ≲ 6.8 × 10^-4 and the mass value of 16.7 MeV, leaving some unexplored region at this mass as an exciting prospect for further searches.
We gratefully acknowledge the support of the CERN management and staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the HISKP, University of Bonn
Prevalence of malignant neoplasms in celiac disease patients - a nationwide United States population-based study
BACKGROUND Celiac disease (CeD) is an autoimmune disorder triggered by the immune response to gluten in genetically predisposed individuals. Recent research has unveiled a heightened risk of developing specific malignant neoplasms (MN) and various malignancies, including gastrointestinal, lymphomas, skin, and others, in individuals with CeD. AIM To investigate the prevalence of MN in hospitalized CeD patients in the United States. METHODS Using data from the National Inpatient Sample spanning two decades, from January 2000 to December 2019, we identified 529842 CeD patients, of which 78128 (14.75%) had MN. Propensity score matching, based on age, sex, race, and calendar year, was employed to compare CeD patients with the general non-CeD population at a 1:1 ratio. RESULTS Positive associations were observed for several malignancies, including small intestine, lymphoma, nonmelanoma skin, liver, melanoma skin, pancreas, myelodysplastic syndrome, biliary, stomach, and other neuroendocrine tumors (excluding small and large intestine malignant carcinoid), leukemia, uterus, and testis. Conversely, CeD patients exhibited a reduced risk of respiratory and secondary malignancies. Moreover, certain malignancies showed null associations with CeD, including head and neck, nervous system, esophagus, colorectal, anus, breast, malignant carcinoids, bone and connective tissues, myeloma, cervix, and ovary cancers. CONCLUSION Our study is unique in highlighting the detailed results of positive, negative, or null associations between different hematologic and solid malignancies and CeD. Furthermore, it offers insights into evolving trends in CeD hospital outcomes, shedding light on advancements in its management over the past two decades. These findings contribute valuable information to the understanding of CeD’s impact on health and healthcare utilization.
INTRODUCTION
Celiac disease (CeD) is an autoimmune, inflammatory condition developed in genetically predisposed individuals due to the immune response to the gluten component of wheat [1]. The most recent global prevalence of CeD is 1.4% based on serological markers and 0.7% confirmed through histological examination [2]. This disease is characterized by the presence of specific autoantibodies in the bloodstream and distinct pathological changes in the small intestine, including villous atrophy, crypt hypertrophy, and an increase in intraepithelial lymphocytes [3]. What sets CeD apart is that by avoiding gluten, its progression can be halted, and mucosal damage can even be reversed [4].
Individuals with CeD are at an elevated risk of developing other autoimmune conditions, such as autoimmune thyroiditis, type 1 diabetes mellitus, Addison's disease, and various other disorders [5]. Furthermore, there is a well-established heightened risk of malignancies among CeD patients. Recent research has shown that CeD increases the likelihood of developing specific cancers, including gastrointestinal, lymphomas, skin cancers, and others [6,7]. Studies have indicated that the risk of cancer is most pronounced within the first year after diagnosis and subsequently decreases, likely due to better adherence to a gluten-free diet. Therefore, early diagnosis and gluten avoidance may reduce the risk of such complications [8,9].
Geographic differences in the occurrence of malignancies among individuals with CeD have been observed [10]. However, there is a lack of population-based studies on the connection between CeD and cancer in the United States. In this study, we investigate the prevalence of malignant neoplasms (MN) in CeD patients admitted to hospitals in the United States. We analyze the significance of the association between MN and CeD, categorized by the type of cancer, using a national inpatient database comprising 529842 CeD patients. The second part of our research explores hospital outcomes, including detailed mortality data, length of hospital stays, and the cost of care in CeD patients both with and without MN.
Data source
We employed data from the National (Nationwide) Inpatient Sample (NIS) database spanning a two-decade period from January 2000 to December 2019. The NIS is an integral component of the Healthcare Cost and Utilization Project (HCUP), a collaborative initiative established through a Federal-State partnership and financially supported by the Agency for Healthcare Research and Quality (AHRQ). This database stands as the most extensive publicly accessible repository of inpatient care information, encompassing over seven million hospital admissions and representing a 20% stratified sample of all hospital discharges across the United States [11].
Within this dataset, a comprehensive array of patient demographic details, clinical information (including diagnoses and procedure codes), and data pertaining to hospital utilization and outcomes are included. The diagnostic coding system employed in the dataset adhered to the International Classification of Diseases, 9th Edition (ICD-9) until the third quarter of 2015, after which it transitioned to the ICD-10 system in September 2015 [12]. Importantly, HCUP databases align with the definition of limited datasets, and in accordance with the Health Insurance Portability and Accountability Act, no Institutional Review Board review is necessitated for limited datasets [13].
Study population
Patient and hospital characteristics, along with outcomes and resource utilization data, were retrieved from the NIS database using ICD codes (see Supplementary Table 1). Individuals who were admitted with either a primary or secondary diagnosis of CeD, as indicated by ICD-9 code 579.0 or ICD-10 code K90.0, were included in the case group. The case group was matched 1:1 to individuals from the non-CeD general population based on age, sex, race, and calendar year, using the nearest-neighbor propensity score method. Detailed information about the progression of cases and controls to compare the prevalence of MN is shown in Figure 1.
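The 1:1 nearest-neighbor propensity-score matching described above was performed with the "MatchIt" package in R; as an illustration only, the core matching logic can be sketched in Python. The data and propensity scores below are toy placeholders, not derived from the NIS:

```python
import random

# Toy sketch of 1:1 nearest-neighbor propensity-score matching,
# mimicking the logic that MatchIt's method = "nearest" applies.
# Propensity scores here are random placeholders, not fitted from covariates.
random.seed(0)
cases = [{"id": i, "ps": random.random()} for i in range(5)]            # CeD admissions
controls = [{"id": 100 + i, "ps": random.random()} for i in range(20)]  # non-CeD pool

matches = []
available = list(controls)
for case in sorted(cases, key=lambda c: c["ps"]):
    # pick the unused control whose propensity score is closest to the case's
    best = min(available, key=lambda ctl: abs(ctl["ps"] - case["ps"]))
    available.remove(best)          # matching without replacement -> 1:1 ratio
    matches.append((case["id"], best["id"]))

print(matches)  # list of (case id, matched control id) pairs
```

In the actual analysis, exact matching on calendar year (and, for the outcomes cohort, on MN type) is layered on top of this nearest-neighbor step.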
Additionally, we conducted a comparison of hospital outcomes between individuals with MN who had CeD and those without it. The group without CeD but with MN was carefully chosen from the general population, employing a 1:1 nearest-neighbor propensity score matching technique based on age, sex, race, and calendar year, with exact matching on the type of malignant neoplasm. Figure 2 provides a visual representation of the process outlining the development of cohorts with and without CeD in the context of MN.
Outcome measures
We compared the occurrence of MN in individuals with CeD, referred to as cases, against a matched group without CeD, referred to as controls. We examined the demographic characteristics of CeD patients, including age, sex, race, and socioeconomic status, in the context of the presence or absence of MN. The NIS dataset includes socioeconomic status information, categorized by dividing the median household income in the patient's zip code into quartiles for each year.
Furthermore, we compared hospital-related outcomes among individuals with CeD who had MN against matched individuals without CeD. The matching process was based on age, sex, race, year, and the specific profile of MN. We assessed various aspects of hospital outcomes, encompassing inpatient mortality, the length of hospital stays, and the overall charges incurred. To ensure accuracy, the total cost of care was adjusted using the Consumer Price Index from the United States Bureau of Labor Statistics [14].
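The Consumer Price Index adjustment of total charges amounts to rescaling each year's dollars into reference-year dollars. A minimal sketch follows; the index values below are made-up placeholders, not actual Bureau of Labor Statistics figures:

```python
# Hedged sketch of inflating hospital charges to a common reference year
# using the Consumer Price Index. The CPI values are hypothetical.
cpi = {2000: 100.0, 2010: 130.0, 2019: 160.0}  # placeholder index values
REFERENCE_YEAR = 2019

def adjust_charge(charge: float, year: int) -> float:
    """Express a charge incurred in `year` in reference-year dollars."""
    return charge * cpi[REFERENCE_YEAR] / cpi[year]

# With these placeholder indices, a $10,000 charge from 2000
# corresponds to $16,000 in 2019 dollars.
print(adjust_charge(10_000, 2000))
```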
Statistical analyses
Data processing was conducted in R (RStudio 1.4), and statistical analyses were performed using SAS (SAS Institute, Cary, NC, United States). We executed one-to-one propensity score matching with the "MatchIt" package in R, using nearest-neighbor and exact matching techniques. Nominal variables were presented using frequency distributions, while continuous variables were summarized with means and standard deviations.
To compare the prevalence of MN in patients with and without CeD, we utilized the χ2 test. Group comparisons used the Student t-test for continuous variables and the Rao-Scott χ2 test for categorical variables, accounting for the weighted sample in the analysis. We adhered to the year-specific AHRQ recommendations and adjusted the weights for years up to 2012 [15]. Age was categorized into five groups for group-level comparisons: < 18; 18-49; 49-59; 59-69; and ≥ 70 years. In instances where race and socioeconomic status data were missing, they were categorized as "other".
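The odds ratios with 95% confidence intervals reported in the Results come from 2 × 2 tables of exposure (CeD) by outcome (a given MN). A minimal sketch of the standard Wald computation is shown below; the counts are hypothetical and not taken from the study's tables:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95%CI for a 2x2 table:
       a = exposed with outcome, b = exposed without,
       c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/1000 cases vs 20/1000 controls with the outcome.
print(odds_ratio_ci(40, 960, 20, 980))
```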
For assessing temporal trends, we employed the Cochran-Armitage trend test for nominal variables and Poisson regression with a log link for continuous variables. Outliers and missing values in the length of stay (LOS) and total charges were excluded from the hospital outcomes analysis. Hypothesis testing was conducted with a two-tailed approach, and statistical significance was established at a P value < 0.05.
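The Cochran-Armitage trend test used for temporal trends can be sketched from its textbook formula. The counts below are hypothetical, and this sketch ignores the survey weighting applied in the actual analysis:

```python
import math

def cochran_armitage_z(cases, totals, scores=None):
    """Z statistic of the Cochran-Armitage test for trend across ordered
    groups (e.g., calendar years). cases[i]/totals[i] is the event rate
    in group i; scores default to 0, 1, 2, ..."""
    k = len(cases)
    scores = scores or list(range(k))
    N, R = sum(totals), sum(cases)
    p = R / N
    t_stat = sum(s * (r - n * p) for s, r, n in zip(scores, cases, totals))
    var = p * (1 - p) * (sum(n * s * s for s, n in zip(scores, totals))
                         - sum(n * s for s, n in zip(scores, totals)) ** 2 / N)
    return t_stat / math.sqrt(var)

# Hypothetical counts with a rising prevalence over four periods:
z = cochran_armitage_z(cases=[50, 70, 90, 120], totals=[1000, 1000, 1000, 1000])
print(round(z, 2))  # a large positive z indicates a significant upward trend
```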
Baseline characteristics of the study population
The baseline characteristics of CeD patients with and without MN are shown in Table 1. We identified 529842 CeD patients from January 2000 to December 2019 in the weighted NIS, of whom 78128 (14.75%) had MN. Among CeD patients, those with MN were older, with a mean age of 68 (± 16) years compared to 53 (± 22) years in CeD without MN (P < 0.0001); included more males (36% vs 28%) and correspondingly fewer females (64% vs 72%; P < 0.0001); and were more frequently Caucasian (84% vs 79%) and less frequently African American (1.90% vs 2.96%; P < 0.0001).

Conversely, individuals with CeD exhibited a notably reduced risk of developing respiratory malignancies (OR = 0.68; 95%CI: 0.63-0.73; P < 0.0001) and secondary malignancies (OR = 0.76; 95%CI: 0.73-0.81; P < 0.0001). Furthermore, our analysis did not reveal any significant association between CeD and the occurrence of malignancies affecting the head and neck, nervous system, esophagus, colorectum, anus, breast, malignant carcinoids, bone and connective tissues, myeloma, cervix, or ovary.
Hospital outcomes of MN with vs without CeD
Figure 3 and Table 3 provide a comparative analysis of hospital outcomes and resource utilization, including mortality rates, length of hospitalization, and the overall cost of care, contrasting patients with CeD who have MN with a matched group of patients without CeD but with MN. The matching is based on age, sex, race, and the specific type of malignant neoplasm. The mean length of stay and total cost of care were higher in the CeD cohort, by 0.21 days (99%CI: 0.05-0.38; P < 0.001) and $3172 (99%CI: $1467-$4878; P < 0.001), respectively. However, inpatient mortality was lower in CeD with MN than in non-CeD with MN (0.72; 99%CI: 0.61-0.86; P < 0.001). Supplementary Figure 2 illustrates the trend in the total cost of care for hospitalized CeD patients with MN, compared to non-CeD patients with MN.
DISCUSSION
Utilizing the NIS dataset, which offers a representative sample of the United States population, our investigation revealed that individuals with CeD exhibited a notably heightened incidence of at least sixteen distinct MN. The most significant increase was observed in cases of small intestinal adenocarcinomas, with an OR of 7.7, followed by lymphomas (OR = 2.06) and other malignancies affecting the GI organs (OR = 2.01). Conversely, a reduced risk was identified for respiratory MNs. This study aligns with some findings from previous research while also highlighting notable disparities.
The initial documentation of malignancy in individuals with CeD dates back to 1965, when a case of small intestinal adenocarcinoma was reported in France [16]. Subsequently, during the late 1960s, research investigating the connection between CeD and the development of malignancies began to emerge [17,18]. Over the ensuing decades, a series of research studies conducted in Europe provided substantial evidence supporting the association between CeD and MN [19-22]. Likewise, a study conducted in the United States observed a heightened risk of MNs in a limited cohort of CeD patients compared to the general American population [23]. Various investigations have highlighted that the risk of MN development is notably higher in the early stages following the diagnosis of CeD. However, as time progresses, the Standardized Incidence Ratio (SIR) for MNs tends to decrease and may even become statistically nonsignificant after the initial year of diagnosis or in subsequent years [24-26]. Patients with CeD experience elevated mortality rates in the initial years following diagnosis, potentially linked to malignancies [27-30]. Notably, adherence to a gluten-free diet emerges as a robust protective factor against the development of malignancies [8,31]. One study underscores that mortality rates are significantly higher among patients with delayed diagnoses or severe symptoms at the time of diagnosis [29].
The precise pathogenic mechanism underlying malignancy development in CeD remains an enigma [32]. Several factors, including but not limited to persistent inflammation, the release of proinflammatory cytokines, continual antigen stimulation, cytokine surges, heightened susceptibility to carcinogens, and nutritional deficiencies induced by the disease or the adoption of a gluten-free diet, have all been proposed as potential contributors to the onset of malignancies [23,33].
Among the malignancies exhibiting a positive association with CeD, our findings indicate that lymphomas have the highest prevalence and the second-highest OR within the CeD sample. The connection between CeD and the development of lymphomas and lymphoproliferative malignancies has been the subject of long-standing investigation in the scientific literature [34-36]. For instance, a study conducted by Green et al [23], which focused on the period from 1981 to 2000 and involved 381 patients with CeD at a referral center in New York, revealed a noteworthy 9-fold elevated risk of non-Hodgkin's lymphoma (NHL). NHL emerged as the most prevalent MN in CeD patients. Importantly, all NHL patients in the study reported strict adherence to a gluten-free diet for an average of 5 years [23]. This observation aligns with similar findings reported in other studies [37,38].
Two Swedish studies, one involving roughly 11000 CeD patients and the other with approximately 11650 CeD patients, reported SIRs of 6.3 (95%CI: 4.2-125) [31] and 6.6 (95%CI: 5.0-8.6) for NHL [38]. Furthermore, research by Elfström et al [39] suggested that the risk of lymphoproliferative malignancies was similar between CeD patients with only positive serology and those without documented inflammation, compared to the general population. Additionally, earlier investigations have identified relative risks (RR) ranging from 3 to 100 for various lymphoma subtypes in the context of CeD [8,31,38,40,41]. The prevalence of enteropathy-associated T-cell lymphoma (EATL), a rare tumor, has attracted research attention alongside the increasing incidence of CeD in recent decades [42-44]. Typically, EATL has been linked to refractory CeD, which is characterized by persistent or recurrent pathological manifestations of CeD despite stringent adherence to a gluten-free diet [45,46]. Notably, in regions of Northern Europe where CeD is more widespread, EATL type 1 (pleomorphic and anaplastic, typically CD56 negative, with gains in chromosomes 1q and 5q) prevails over EATL type 2 (medium-sized cancer cells, typically CD56 positive, with oncogene MYC gain), which is more commonly observed in Asian countries [47-50]. Although there is no clear causal mechanism explaining how CeD patients develop lymphoma, the main hypothesis is that, in this autoimmune disease, immune overactivation, persistent inflammation, villous atrophy, and repeated cycles of intestinal mucosal healing drive aberrant hyperproliferation of intraepithelial lymphocytes, which may undergo malignant transformation into lymphoma [51-53]. Numerous studies have reported the occurrence of both T- and B-cell lymphomas in individuals with CeD [39,54-56]. In terms of prognosis, research has indicated that CeD patients face a roughly 0.15% elevated risk of NHL-related mortality in the decade following diagnosis [57]. It is also notable that T-cell lymphomas tend to exhibit a poorer prognosis than B-cell lymphomas [58,59].
Our findings indicate that small bowel carcinomas (SBC) exhibit the highest OR among MNs, at 7.71 (95%CI: 5.0-11.9), demonstrating a positive correlation with CeD. SBC is a rare malignancy in the general population, and its association with CeD is firmly established. This connection was initially documented by Swinson et al [22], revealing an RR of 82.6 for SBC development in individuals with CeD. Over subsequent decades, several significant studies have reaffirmed this association. Elfström et al [60], in a prospective analysis encompassing more than 45000 CeD patients, reported an average hazard ratio (HR) for SBC development ranging from 2.22 to 4.67, stratified based on CeD Marsh classification or positive serology. In 2014, Ilus et al [61] conducted a retrospective investigation involving 32439 CeD patients in Finland, unveiling a positive link between CeD and SBC development; the study reported an SIR of 5 in females, 3.47 in males, and 4.29 in all combined cases. Another substantial retrospective Swedish study led by Emilsson et al [62], encompassing more than 48000 CeD patients, reported an HR of 3.05, underscoring the affirmative association between CeD and SBC. Additionally, a sequential progression from adenoma to carcinoma has been suggested as a potential pathway for SBC development in CeD [63]. Notably, the survival rates for SBC in CeD patients are comparatively higher than those in individuals without CeD [64]. As prostate cancer is the most common cancer in males in general [65], it is noteworthy that our study shows a positive association between prostate cancer and CeD with an HR of 1.14 (95%CI: 1.06-1.23). Surprisingly, other studies that addressed this association did not show any significantly heightened risk of prostate cancer in CeD [25,31,61,66].
To our knowledge, this study stands as one of the most extensive investigations underscoring the positive link between nonmelanoma skin cancers and CeD. In contrast, the association between melanoma and CeD has been examined in three studies, with one study, also conducted in the United States, reporting a notably elevated SIR [23]. Conversely, two studies from Sweden failed to establish any significant connection between these two conditions [31,67]; notably, the latter study, which involved 29028 patients, did not identify a significant association.
Our study reveals a positive correlation between pancreatic malignancies and CeD. However, it is essential to note the divergence in findings across various studies concerning this association. For instance, Elfström et al [60] reported a substantially higher HR of 10.7 within the first year of follow-up, which subsequently decreased to 1.4. Lebwohl et al [26], in another large Swedish study, documented analogous outcomes with distinct HRs. In contrast, a study utilizing a United States Veterans Affairs database reported an elevated RR for pancreatic cancer in individuals with CeD [68], while two separate European studies failed to identify any significant risk association [31,69].
Our results indicate an elevated incidence of, and a positive correlation between, thyroid malignancies and CeD. It is worth mentioning that the existing literature has yielded conflicting outcomes in this regard. Specifically, two studies conducted in the United States and Italy have reported a positive association [70,71], whereas two other population-based studies in Sweden have reported a lack of significant association [31,72].
It is noteworthy that our study's findings align with those of other research regarding the risk of lung cancer in CeD patients. Our study reveals a statistically significant negative correlation between respiratory malignancies and CeD. Similarly, the two largest studies, conducted in Finland [61] and Sweden [26], also report a negative association. Conversely, several other studies have failed to identify a heightened risk of lung malignancies in individuals with CeD [23-25,31,69,73,74]. This lack of association could potentially be attributed to a lower prevalence of smoking among individuals with CeD [75,76]. Regarding colorectal cancer, our study demonstrates no significant association with CeD. This finding is consistent with a study by Lebwohl et al [77], which also reported no elevated risk of colorectal cancer. However, results from the study by Ilus et al [61] indicated an increased risk of colon cancer but not rectal cancer.
In the context of breast cancer risk in CeD, most available studies have reported a significantly decreased risk, including large-scale investigations by Lebwohl et al [26], Ilus et al [61], and Ludvigsson et al [78], as well as other relatively smaller studies with similar findings [25,31,69,79]. Our study supports this trend, reporting no increased risk of breast cancer in individuals with CeD, which aligns with several previous studies on this association.
Strengths and limitations
Our study exhibits several notable strengths. First and foremost, it leverages an extensive database, including more than 108000 individuals diagnosed with CeD, which, when weighted, expands to encompass over 500000 individuals. This dataset stands as the most substantial cohort among comparable studies within the existing body of research, as far as our knowledge extends. Secondly, we investigated hospital outcomes, encompassing a variety of factors associated with mortality, LOS, and the burden on the healthcare system, spanning a substantial two-decade period. However, it is important to acknowledge several limitations in our study. Firstly, it is confined to inpatient populations, potentially limiting the generalizability of our findings to outpatient settings. Secondly, the NIS dataset lacks essential clinical details such as laboratory values, treatment modalities, and diagnostic procedures like histology and endoscopy findings that definitively confirm CeD. Thirdly, the NIS, as an administrative database, may be susceptible to selection bias and coding errors, which can occur without external validation. Lastly, the NIS does not track individual patients, meaning that a patient admitted multiple times may contribute multiple entries to the database. Furthermore, we lack information about the level of adherence to a gluten-free diet among the patients and the degree to which their CeD is controlled. These limitations should be considered when interpreting our results.
CONCLUSION
Our study is unique in detailing the positive, negative, or null associations between different hematologic and solid malignancies and CeD. It also sheds light on data on hospitalized CeD patients with and without MN in terms of mortality, LOS, and related costs, with trends shown over the last two decades, aspects that have been understudied in this disease.
FOOTNOTES
Author contributions: Haider MB and Green P designed the research study; Haider MB performed the data collection, analysis, and interpretation of results; Haider MB, Al Sbihi A, and Reddy SN wrote the manuscript; Green P supervised the project; and all authors have read and approved the final manuscript.
Institutional review board statement: Information from this research utilized de-identified data sourced from the National Inpatient Sample Database, which is a publicly accessible database encompassing all-payer inpatient care information in the United States. There is no necessity for an Institutional Review Board Approval Form or Document.
Informed consent statement:
The study utilized de-identified data from the National Inpatient Sample Database, which is a publicly accessible database containing information on all-payer inpatient care in the United States. Patient consent was not necessary for this analysis.
Conflict-of-interest statement:
All the authors report no relevant conflicts of interest for this article.
Data sharing statement: Data that support the findings of this study are publicly available at https://www.hcup-us.ahrq.gov/db/nation/nis/nisdbdocumentation.jsp.
STROBE statement:
The authors have read the STROBE Statement-checklist of items, and the manuscript was prepared and revised according to the STROBE Statement-checklist of items.
Figure 1 Flow diagram outlining the cases and controls for comparing the prevalence of malignant neoplasm in celiac disease and non-celiac disease patients. CeD: Celiac disease.
Figure 2 Flow diagram outlining the cases and controls for comparing the hospital outcomes of celiac disease with malignant neoplasm vs non-celiac disease with malignant neoplasm. CeD: Celiac disease; NIS: National Inpatient Sample; MN: Malignant neoplasm.
Table 1 Comparison of patient characteristics of celiac disease patients with and without malignant neoplasms, National Inpatient Sample 2000-2019. Columns: overall CeD patients; CeD without MN, weighted, n (%); CeD with MN, weighted, n (%); P value.
1 Two-sample Student t-test, 2-tailed, for comparing means of two continuous variables. 2 Rao-Scott χ2 test, 2-tailed, for the association of two categorical variables. 3 Rao-Scott χ2 test, 2-tailed, for a two-by-n table. Statistical significance illustrates that the two groups differ. CeD: Celiac disease; MN: Malignant neoplasm.
Table 2 compared the prevalence of MN in CeD (cases) and matched (age, sex
Table 3 Comparison of inpatient mortality, mean total charges, and length of stay in celiac disease patients with malignant neoplasms vs matched non-celiac disease with malignant neoplasm (matched by age-, sex-, race-, and malignant neoplasm profile), National Inpatient Sample 2000-2019
1 χ2 test, 2-tailed, for the association of two categorical variables. 2 Two-sample Student t-test, 2-tailed, for comparing means of two continuous variables. CeD: Celiac disease; MN: Malignant neoplasm; OR: Odds ratio; CI: Confidence interval; NA: Not available.
Inflammatory loops in the epithelial–immune microenvironment of the skin and skin appendages in chronic inflammatory diseases
The epithelial–immune microenvironment (EIME) of epithelial tissues has five common elements: (1) microbial flora, (2) barrier, (3) epithelial cells, (4) immune cells, and (5) peripheral nerve endings. The EIME provides both constant defense and situation-specific protective responses through three-layered mechanisms comprising barriers, innate immunity, and acquired immunity. The skin is one of the largest organs in the host defense system. The interactions between the five EIME elements of the skin protect against external dangers from the environment. Dysregulation of these interactions can result in the generation of inflammatory loops in chronic inflammatory skin diseases. Here, we propose an understanding of the EIME in chronic skin diseases, such as atopic dermatitis, psoriasis, systemic lupus erythematosus, alopecia areata, and acne vulgaris. We discuss the current treatment strategies targeting their inflammatory loops and propose possible therapeutic targets for the future.
Introduction
The epithelial-immune microenvironment (EIME) provides both constant defense and situation-specific protective responses in several organs, such as the skin, gut, and lungs, which are located at the interface between the environment and the organism (1). The host defense system can be classified into three layers: (constant and nonspecific) barriers, innate immunity, and acquired immunity (2). There are five common elements in the microenvironments of these organs: microbial flora, barriers, epithelial cells, immune cells, and peripheral nerve endings (Figure 1). The interaction between these five elements provides protection against dangers from the environment. Dysregulation of these interactions can result in the generation of inflammatory loops in chronic inflammatory diseases (1).
The skin is one of the largest organs in the host defense system (3). Here, we propose an understanding of the EIME in five chronic skin diseases: atopic dermatitis (AD), psoriasis, systemic lupus erythematosus (SLE), alopecia areata (AA), and acne vulgaris. We discuss current treatments targeting inflammatory loops and propose possible therapeutic strategies for the future.
Loops in atopic dermatitis
Atopic dermatitis (AD) is a common chronic inflammatory skin disease characterized by chronic pruritic eczematous skin lesions (1). AD is an atopic disorder characterized by elevated serum concentrations of immunoglobulin E (IgE) (4). AD has two age peaks in prevalence (infancy and the third decade of life) and can be spontaneously ameliorated (1). The onset of AD is often followed by the serial occurrence of allergic diseases that represent the atopic march (5). AD lesions affect predilection sites, including the cubital and popliteal fossae, that are predominantly colonized by Staphylococcus aureus (1). Topical therapies with moisturizers and corticosteroids are the first-line treatment (6). The blockade of interleukin (IL)-4 or IL-13 is highly effective, indicating that Th2-type inflammation is essential for its pathogenesis (1).
A relationship chart of the elements in the type 2 EIME of AD depicts double loops (Figure 2A) (1). This redundancy explains the only partial efficacy of IL-4/13 blockade therapies in AD, in contrast to the almost complete efficacy of IL-17 blockade in psoriasis. The first is a positive feedback loop between keratinocytes and immune cells. Keratinocytes produce epithelial type 2 mediators, including thymic stromal lymphopoietin (TSLP), IL-33, granulocyte-macrophage colony-stimulating factor (GM-CSF), and IL-25. In turn, type 2 cytokines, such as IL-4 and IL-13, produced by immune cells, activate keratinocytes via IL-4/13 receptors. The other is a positive feedback loop involving dysbiosis of the microbial flora and peripheral nerve sensing of pruritus. Impaired barrier formation in the skin results in S. aureus-predominant dysbiosis in AD, and S. aureus activates type 2 immune responses. Conversely, IL-4 and IL-13 directly dampen barrier formation via IL-4/13 receptors in keratinocytes, and several type 2 cytokines indirectly damage the skin barrier by activating sensory nerve endings via receptors for IL-4, IL-13, IL-31, IL-33, and TSLP, causing pruritus and subsequent scratching behavior (7). Additional activation of G-protein-coupled receptors (GPCRs) and ion channels in sensory nerve endings may be involved in the itch-scratch cycle in AD (7).
IL-31 from immune cells enhances the release of brain-derived natriuretic peptide (BNP) from dorsal root ganglion neurons (DRGs). BNP induces the activation of glycogen synthase kinase 3 (GSK3) and production of matrix metalloproteinase (MMP)9 in cultured human keratinocytes (8). These results suggest that the activation of sensory nerves directly affects keratinocyte activation and may impair the skin barrier, regardless of the induction of scratching behavior, in the EIME of AD. Basophils are involved in both chronic itch and itch flares in AD (9). In chronic AD skin lesions, keratinocytes produce TSLP that primes basophils to release IL-4, and activation of IL-4 receptors in sensory neurons drives chronic itch. In contrast, during allergen-stimulated AD itch flares, epithelial barrier disruption allows increased allergen infiltration. IgE-R+ basophils recruited to the skin release leukotriene C4 (LTC4) and drive itch sensations via LTC4 receptors in sensory nerve endings (10).

Figure 1 The epithelial-immune microenvironment (EIME) of the skin and skin appendages. There are five common elements in the microenvironment of epithelial tissues: microbial flora, barrier, epithelial cells, immune cells, and peripheral nerve endings. The interaction between these five elements provides protection against dangers from the environment.
Keratinocytes play a pivotal role in driving the inflammatory loop of type 2 inflammation in AD (1). Single-cell RNA sequencing of skin lesions from patients with AD who underwent long-term treatment with the IL-4Rα blocker dupilumab demonstrated that transcriptomic dysregulation in keratinocytes was completely normalized, whereas the AD signature in dendritic cells (DCs) and T lymphocytes persisted for up to a year after clinical remission (11). These results suggest that keratinocytes are the major target of dupilumab in AD, and that IL-4/13 signaling in keratinocytes is essential for the inflammatory loop of the type 2 EIME in AD, regardless of the persistent activation of DCs and T lymphocytes.
Loops in psoriasis
Psoriasis is a common chronic inflammatory disease characterized by both cutaneous and systemic manifestations (1, 12). It is clinically characterized by red scaly papules and plaques and can be associated with psoriatic arthritis. Its prevalence is estimated to be 1-3% worldwide. Psoriasis typically develops in genetically predisposed middle-aged individuals and is commonly associated with metabolic syndrome. Genetic predisposition is related to keratinocyte pro-inflammatory signaling and type 17 responses. The efficacy of selective biologics targeting tumor necrosis factor (TNF), IL-23, and IL-17 has demonstrated their pivotal roles in the pathogenesis of psoriasis (1).
Psoriasis simulates the protective machinery of the body opposing dermatophytes. The skin removes them together with the stratum corneum by accelerating its turnover and by neutrophil attacks mediated by the Th17 response, a reaction called 'psoriasiform dermatitis,' characterized by epithelial hyperplasia and neutrophil infiltration (1).
p38 mitogen-activated protein kinase (MAPK)-dominant activation of the TNF receptor-associated factor 6 (TRAF6) pathway in keratinocytes may be involved in triggering psoriasis (13). Many psoriasis-susceptibility genes, such as IL36RN and CARD14, are related to skin-specific p38 activation. In addition, psoriasis develops during middle age, and p38 pathway activation in the skin of aged individuals is more inducible than in that of young subjects (14). Furthermore, skin scrubbing elicits psoriatic lesions (Koebner's phenomenon), and physiological scrubbing stress immediately induces p38 activation in keratinocytes (15). Moreover, keratinocyte TRAF6 signaling is necessary for releasing proinflammatory cytokines and chemokines, such as IL-1, IL-6, C-X-C motif ligand (CXCL)1, and C-C motif ligand (CCL)20, and for the activation and propagation of the IL-23-IL-17 axis in psoriatic inflammation (16), while cutaneous p38 activation is sufficient to induce psoriatic inflammation (15).
In contrast to the type 2 EIME in AD, the type 17 EIME in psoriasis depicts a single-loop circuit (Figure 2B) (1). This is consistent with the efficacy of biologics targeting IL-17, IL-23, and TNF in this loop in the type 17 EIME in psoriasis. Transient receptor potential vanilloid 1 (TRPV1)+ sensory nerves sense Candida albicans and drive type 17 protective cutaneous immunity (17). By contrast, microbiota-induced S. aureus-specific TH17 cells accelerate sensory neuronal regeneration (18). However, the fungal and bacterial skin microbiota in lesional skin of patients with psoriasis are similar to those in non-lesional or healthy skin (1). Therefore, despite the bidirectional interaction between skin microbiota and sensory nerves in an acute protective response (17, 18), these two elements do not appear to contribute to the formation of a closed circuit between other elements in the EIME during chronic inflammation. Collectively, the contribution of skin microbiota and sensory nerves to the inflammatory loop in the type 17 EIME remains obscure in psoriasis.

FIGURE 2 | Inflammatory loops in the epithelial-immune microenvironment (EIME) of chronic inflammatory diseases. (A) Loops in atopic dermatitis. Two inflammatory loops drive the type 2 EIME in skin lesions in AD. One is the loop between epithelial and immune cells, which constructs TH2 interplay. The other is a loop involving S. aureus-dominant dysbiosis and abnormal sensory nerve endings that cause pruritus. (B) A loop in psoriasis. A single inflammatory loop between epithelial and immune cells in the interleukin (IL)-23-IL-17 axis drives the type 17 EIME in lesional skin in psoriasis. The skin microbiome remains unchanged, suggesting less involvement of microbial flora, whereas C. albicans colonization elicits a type 17 response by directly stimulating the sensory nerve endings. (C) Loops in systemic lupus erythematosus (SLE). An inflammatory loop between epithelial cells, microbial flora, and immune cells is drawn in the EIME of lesional skin in SLE. Another loop may be organized without microbial flora. Keratinocyte damage caused by sunlight or microbial flora can trigger these loops. The plasmacytoid dendritic cells (pDCs) promote these loops only during the initiation phase by releasing type I interferons (IFN-I). The constitutive activation of Toll-like receptor (TLR)7/9 drives type I IFN loops. Neutrophil extracellular traps (NETs) activate pDCs at an early stage and promote disease propagation. (D) A loop in alopecia areata (AA). A single inflammatory loop of interferon (IFN)-g and IL-15 is driven by the EIME of hair follicles (HFs) in the anagen phase. The outer root sheath (ORS) cells of HFs express abnormal or ectopic major histocompatibility complex (MHC) molecules and NKG2D ligands. IFN-g produced by NKG2D+ T cells and NK cells induces hair loss and promotes the expression of these molecules, and of IL-15, by the HF ORS cells. IL-15 activates IFN-g-producing cells. (E) Loops in acne vulgaris. The inflammatory loops in acne vulgaris involve sebocytes, infundibular cells, Cutibacterium acnes, and immune cells. An increase in androgen levels triggers these loops. Comedogenesis is a bottleneck in acne pathophysiology. AMPs, antimicrobial peptides; BNP, brain-derived natriuretic peptide; CCL, C-C motif ligand; CGRP, calcitonin gene-related peptide; CXCL, C-X-C motif ligand; DCs, dendritic cells; IFN, interferon; IL, interleukin; ILC, innate lymphoid cell; LCs, Langerhans cells; LTC4, leukotriene C4; NET, neutrophil extracellular trap; NK, natural killer; ORS, outer root sheath; PAMPs, pathogen-associated molecular patterns; pDCs, plasmacytoid dendritic cells; TH2, T helper type 2; TLR, Toll-like receptor.
Loops in systemic lupus erythematosus
SLE is a systemic syndrome that affects multiple organs, including the skin, kidneys, brain, and vasculature, with profound clinical heterogeneity (19-21). SLE has long been considered a systemic autoimmune disease. However, recent progress suggests that the initial trigger probably involves recognition of self or foreign molecules, especially nucleic acids, by innate sensors (22).

SLE can be spontaneously triggered by exposure to environmental stimuli such as ultraviolet light or infection (21). Dysregulation of apoptosis and nuclear debris clearance is a characteristic of SLE and contributes to multi-organ autoimmunity (23). Studies in mice and humans have shown definitive roles of neutrophils, plasmacytoid DCs (pDCs), Toll-like receptor (TLR) activation, and type I interferon (IFN) production in SLE, and increased IL-17 production may contribute to this process (20).
The skin of patients with SLE shows LC defects and reduced epidermal growth factor receptor (EGFR) phosphorylation, and topical EGFR ligands reduce photosensitivity (24). These results suggest that the Langerhans cell-keratinocyte axis protects against photosensitivity and that its defect triggers keratinocyte apoptosis and subsequent events in SLE.
SLE patients display an increased capacity to form neutrophil extracellular traps (NETosis). NETs harboring self- and foreign RNA and DNA antigens are poorly cleared and stimulate pDCs to produce type I IFN via TLR7 and TLR9 stimulation. This induces an innate immune response and the propagation of proinflammatory TH17 cells that are involved in disease expression and promote NETosis (20). Blockade of the type I IFN receptor by treatment with anifrolumab is effective in reducing disease activity in patients with SLE (25).

In patients with cutaneous lupus erythematosus (CLE), interfollicular keratinocytes exhibit a type I IFN-rich signature in pre-lesional skin (26). pDCs dominate the perifollicular region in non-lesional skin but not in lesional skin. In contrast, CD16+ DCs arise from non-classical monocytes, migrate into the non-lesional skin, and undergo IFN education for inflammation in CLE.

TLR7 gain-of-function mutations and single nucleotide polymorphisms (SNPs) in the TLR trafficking chaperone UNC93B1 are found in patients with SLE (27, 28). Epicutaneous application of TLR7 agonists for four weeks led to a significant increase in Ifna expression in the spleen and the development of SLE-like systemic autoimmunity (29). These results indicate that dysregulation of the EIME in the skin results in systemic autoimmunity.
Gut barrier defects associated with microbial dysbiosis have been observed in SLE patients and mouse models (30, 31). Additionally, the skin microbiota of patients with SLE is distinct from that of healthy individuals (32, 33). Notably, S. aureus skin colonization in epithelial cell-specific IkBz-deficient (Nfkbiz ΔK5) mice promotes SLE-like autoimmune inflammation via caspase-mediated keratinocyte apoptosis and the subsequent activation of neutrophils and the IL-23-IL-17 axis (34).
Among patients with SLE, 7.6% experience peripheral nervous system events, including peripheral neuropathy (35). SLE may have an early effect on peripheral nerve function in patients without clinical or electrophysiological neuropathy (36). However, little is known about the interactions with peripheral nerves in the EIME of the skin. Peripheral blood mononuclear cells (PBMCs) from SLE patients are highly susceptible to apoptosis induced by calcitonin gene-related peptide (CGRP), a neuropeptide produced by the central and peripheral nerves (37). CGRP from the peripheral nerves drives dermal DCs to produce IL-23 in the type 17 response to cutaneous C. albicans infection (17). Notably, increased serum levels of procalcitonin, an alternative transcription product of CGRP, are diagnostic markers of bacterial infection in patients with SLE (38).

Thus, an inflammatory loop of type I IFN between keratinocytes and immune cells emerges in the EIME of the skin in patients with SLE (Figure 2C). Increased susceptibility to keratinocyte cell death induces the release of danger-associated molecular patterns (DAMPs), neutrophil recruitment, and NETosis, which trigger the activation of the TLR7 and TLR9 pathways in pDCs and their production of type I IFN. The type I IFN-rich signature of the skin primes CD16+ DCs and propagates a type 17 immune response involving keratinocyte activation and NETosis. However, the contribution of dysbiosis and peripheral neuropathy, and the difference in the role of the type 17 immune response between psoriasis and SLE, remain obscure.
Loops in alopecia areata
Alopecia areata (AA) is a common, acquired, non-scarring hair loss that affects 2% of the global population and is intractable in severe and relapsing cases (39).
AA is an autoimmune disease resulting from a disruption of hair follicle immune privilege, a system that protects vital organs, including the central nervous system, testes, placenta, eyes, and hair follicles (HFs), from the potential harm of immune recognition (40). Immune privilege prevents natural killer (NK) cells from attacking HFs: specifically, the suppressed expression of major histocompatibility complex (MHC) class I molecules and NKG2D ligands in healthy HFs protects them from NK cell attack and subsequent hair loss. In contrast, HFs in patients with AA show abnormal expression of MHC class I and II molecules and NKG2D ligands. Indeed, a genome-wide association study (GWAS) in 1,024 patients with AA and 3,278 controls identified ULBP3, which encodes an NKG2D ligand, as a responsible gene (41). Histologically, late anagen HFs in patients with AA show perifollicular infiltration of mononuclear cells, including CD4+ or CD8+ NKG2D+ T cells and CD56+ NKG2D+ NK cells (42).
In AA, an inflammatory loop of IFN-g and IL-15 emerges between immune cells and HF epithelial cells in the EIME of the lesional scalp and is thought to be the driving force of the disease state (Figure 2D) (43, 44). IFN-g induces abnormal expression of MHC molecules and NKG2D ligands in the anagen hair bulb, leading to the collapse of HF immune privilege. IFN-g also acts on HF epithelial cells to enhance the expression of IL-15. The expression levels of IL-15 and IL-15 receptor a in the outer root sheath of HFs are higher in patients with AA and in animal models than in healthy controls (45). Furthermore, IL-15 signaling enhances CD8+ memory T cell survival and expansion, the maintenance of T and NK cells, and CD8+ T cell production of IFN-g (46, 47). Consistently, serum levels of IFN-g and IL-15 are higher in patients than in controls and correlate with disease activity (46, 47). In contrast, IL-15 prolongs the anagen phase, stimulates proliferation, and suppresses apoptosis in the hair matrix of human scalp hair follicles (48). Of note, the IFN-g pathway depends on Janus kinase (JAK)1/2 and the IL-15 pathway on JAK1/3 (43, 44). Therefore, peroral JAK inhibitors selective for these JAKs provide a clear example of treatment development via blockade of the inflammatory loop in the EIME (44, 49, 50).
The microbial flora of the lesional scalp may be less involved in AA pathogenesis because the scalp microbiome is more diverse in patients with AA than in healthy controls, but is not significantly different according to the severity of AA (51).
The AA scalp shows defective C-fiber sensory perception (52).However, the involvement of sensory nerves in the EIME of patients with AA remains largely unknown.
Loops in acne vulgaris
Acne vulgaris is a chronic inflammatory condition involving the pilosebaceous units of skin on the face, neck, chest, or back (53). Acne vulgaris affects approximately 85% of people aged 12-24 years, 18% of women, and 8% of men aged ≥25 years. Acne accounts for approximately 16% of the dermatological disease burden (54), and the global market size was estimated at USD 10.48 billion in 2022 (55). GWAS identified possible links to genes related to androgen metabolism, inflammatory processes, the transforming growth factor-b (TGF-b) pathway, and hair follicle development (56-58). A possible relationship between acne and diet, such as a high-glycemic-load diet or chocolate, has been suggested (59-61).
The development of acne involves the interplay of multiple factors, including (i) hormonal influences on sebum production and composition, (ii) follicular hyperkeratinization, and (iii) inflammation involving colonization with Cutibacterium acnes (62, 63).The specific relationship between these key factors remains to be defined, although an older study suggested that inflammation precedes hyperkeratinization (64).
Sebum production by the sebaceous glands is regulated by many factors and is primarily controlled by androgens, which are produced both outside (gonads and adrenal glands) and inside the pilosebaceous unit (62, 63). Androgen levels are elevated during the neonatal period and puberty and have a significant impact on triggering the development of acne. Acne is associated with alterations in sebum composition. Compared with that of healthy people, the sebum of patients with acne contains fewer essential free fatty acids (65) and increased levels of monounsaturated fatty acids (MUFAs) and lipoperoxides, which influence keratinocyte proliferation and differentiation (66-68). Notably, the topical application of fatty acids is sufficient to facilitate follicular hyperkeratosis in an animal model (69).
A comedo, a hyperkeratotic plug in the infundibulum, is a diagnostic clue to acne vulgaris and can differentiate it from other acneiform eruptions. The microcomedo is the precursor of all acne lesions and a bottleneck in acne formation (62). However, the inciting event for microcomedo formation remains obscure, although IL-1a may be involved (64).
C. acnes is a Gram-positive anaerobic/microaerophilic rod and a commensal organism of the pilosebaceous unit. The amount of C. acnes is similar between patients with acne and healthy controls and is not correlated with disease severity; however, the strain populations differ between patients with acne and healthy individuals (70, 71). C. acnes and its associated lipopolysaccharide (LPS) activate the TLR2/4 pathway and the nucleotide-binding domain, leucine-rich-containing family, pyrin domain-containing-3 (NLRP3) inflammasome and induce the release of proinflammatory mediators, such as IL-8, TNF, IL-1a, IL-1b, and GM-CSF, in human sebocytes and keratinocytes. In addition, C. acnes promotes TH17 and TH1 response pathways, which are activated in acne lesions, by inducing the secretion of IL-17A and IFN-g from CD4+ T cells (63). The type 17 immune response can affect keratinocyte proliferation and differentiation at the infundibulum and sebaceous duct in the pilosebaceous unit and promote the infiltration of neutrophils via the release of their chemoattractants.
The facial skin of patients with acne is highly innervated and the sebaceous glands express receptors for several neuropeptides.Their activation in human sebocytes modulates cytokine production, cell proliferation, cell differentiation, lipogenesis, and androgen metabolism.The expression levels of substance P in the peripheral nerves and neutral endopeptidase, which degrades substance P, in the sebaceous glands of the facial skin are higher in patients with acne than in healthy individuals (72).However, the interaction between peripheral nerves in the EIME of the skin in acne remains poorly investigated.
In acne, factors other than the five major components of the EIME, such as pre-adipocytes and triggering receptor expressed on myeloid cells 2 (TREM2)+ macrophages, have also been suggested (73, 74). Their interplay in the EIME is also expected to be critical for the pathogenesis and treatment of acne.
Thus, more than one inflammatory loop involving the pilosebaceous unit, including sebocytes, C. acnes, and immune cells, emerges in the EIME of the skin in patients with acne vulgaris (Figure 2E); these loops may be primarily triggered by the hormonal changes that influence sebum composition during puberty.
Discussion and concluding remarks
The epithelium senses external factors at the body surface in the earliest stages, determines the type of immune response, and constructs an optimal EIME that is best suited for defense. Epithelial stem cells memorize tissue invasion of the skin and respond rapidly to a second attack (75). This mechanism also induces allergic inflammation in the respiratory tract (76). If chronic inflammation is a pathological mimic of host defense, the epithelium could also determine the type of inflammation in chronic inflammatory diseases by constructing each EIME, such as the type 2 EIME in atopic dermatitis and the type 17 EIME in psoriasis. This perspective raises several questions: What determines whether immune responses that terminate in healthy skin also terminate in chronic inflammatory diseases? What mechanisms of the epithelium determine the type of immune response? Do EIMEs in other epithelial organs, such as the gut and lungs, share common or unique mechanisms that govern biological defense and chronic inflammation?
The EIME concept will facilitate the development of new therapeutic targets for chronic inflammatory diseases because it simplifies the model of each disease.In addition, drawing the EIMEs for multiple diseases will clarify the contradictions involved in each existing model.Targeting disease-specific interrelationships between immune cells and non-immune factors will lead to the development of new therapies in the future.
Analysis of the effectiveness of state regulation of the agro-industrial complex on the example of several countries
This article examines the effectiveness of state regulation of the agro-industrial complex in the Russian Federation. An assessment of the main legal problems in this area is given. Using the example of foreign countries, various approaches to reforming the agricultural industry are analyzed. The author examines reforms in the agro-industrial complex and assesses their effectiveness. The main programs and directions of the studied field of economics are described. The provisions of federal laws, the civil, tax, and land codes of several countries, and other regulatory legal acts were considered. Conclusions are formulated about the most optimal and necessary directions of state regulation. The author expresses an opinion about the need for certain actions in state policy on the regulation and support of agro-industrial production.
Introduction
The problems of legal regulation of the agro-industrial complex remain topical in any state at any stage of its development. The tendency to establish market relations in the economies of different countries does not reduce the need to regulate the agro-industrial sphere, but it does change the forms and methods of regulation and sets new priorities. The development of the agro-industrial complex, and consequently the level of national and food security, depends largely on the level of state support, which is expressed in the form of state assistance to producers, processors, and sellers of agricultural products.
The specific features of the agricultural sector make it uncompetitive in a thriving market economy. Therefore, one of the most important factors for preserving, developing, and improving the efficiency of agricultural production is not only state support of the industry but also the regulation and development of innovative activities in the agricultural industry. Through the forms and methods of legal regulation of the agro-industrial complex used by the state, a certain balance is established between the interests of society and the state and the market mechanisms of self-regulation. When studying agricultural legal relations, legal scholars pay special attention to the peculiarities of agriculture in a particular region and to the influence of legislation and law in general on the formation, development, and functioning of the agricultural market. The changes that the world economy is undergoing set the goal for agricultural enterprises of producing high-quality products that are affordable and in great demand not only in the domestic market but also in the world market. The analysis of the effectiveness of legal regulation and state support will provide an understanding of the importance of legal regulation of the agricultural industry and help the reader understand the priorities that need to be set to achieve the goals and objectives of the development of the agro-industrial complex in any state.
Methodology
In this article the author used several research methods. In addition to considering historical and theoretical data, empirical methods such as comparison, generalization, and the analysis of various information sources and the legislation of several countries were applied, along with statistical data from services and ministries in this area.
In reviewing information material, legislation, and statistics of various countries, and in studying the experience of state regulation of the agro-industrial complex in foreign countries, the author aims to help readers understand the need to develop value orientations and to set the right priorities in order to establish the most effective forms, methods, mechanisms, goals, and tools for regulating the agricultural industry and the economy as a whole.
The author also discusses the effectiveness of the reform of the agro-industrial sector in the Russian Federation, basing his judgments on an analysis of the experience of foreign countries in this area. The rating of countries by grain imports, covering the top ten countries, was also considered (Fig. 1). In the rating of Russian regions for the export of grain products of the agro-industrial complex, first place is taken by the Rostov region, second place by the city of Moscow, and the Krasnodar Territory closes the top three.
Discussion of results
It is worth starting with the fact that the growth rate of agricultural production and the development of agro-industry in the late eighties of the twentieth century decreased to 1-2%. Such a food problem, under the economic conditions of that time, could not be solved quickly and efficiently. This created an urgent need for serious reform [1].
In 1991, the modern agrarian reform began. The legal documents underlying this reform are the Land Code and the laws "On land reform" and "On peasant (farmer) farming". They served as the foundation for land reform and gave a start to serious transformations in the field of agriculture. But this reform was not comprehensive and did not solve many socioeconomic issues, which created the need for an agrarian reform that would comprehensively solve the tasks set. As a result, several normative legal acts were adopted, such as Resolutions of the Government of the Russian Federation and Decrees of the President of the Russian Federation: "On the regulation of land relations and the development of Agrarian Reform in Russia" (1993), "On the reform of Agricultural Enterprises" (1994), "On the implementation of the constitutional rights of citizens to land" (1996), and "On State regulation of Agro-industrial production" (1997). They determined the order, principles, and methods of transformation in the field of agricultural activity.
The following principles of agrarian reform are highlighted:
1) freedom of forms of ownership and management: any collective or individual can choose the form of ownership that best suits its interests, goals, and capabilities;
2) transformations are carried out on a voluntary basis: reform in the agro-industrial complex should not be carried out under pressure from above; voluntariness, based on personal interest in the ongoing transformations, must be ensured;
3) the activities carried out within the framework of the reform should be of a gradual nature: any changes must take place consistently, and the characteristics of each subject of these legal relations must be taken into account;
4) the peasant becomes a real owner under any form of management, through the allotment of land and the provision of a property share; when using these resources, the most acceptable forms of management should be adhered to, taking into account the economic conditions and features of the development of the agricultural industry in the country for the period;
5) all agricultural transformations should be open, which is achieved through broad public awareness and the availability of discussion of these issues [2].
The results of the land reform include a gradual transition from state ownership of land, which had existed in a single form, to a variety of forms of ownership. Producers of agricultural goods became the owners of 81.9% of the agricultural land area. Approximately 13.5 million hectares, or 6.9% of all agricultural land, remained for farms.
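As a quick sanity check of these figures, the total agricultural land area they imply can be back-calculated. The following sketch is my own arithmetic based only on the numbers quoted above; the implied total is not a figure from the source:

```python
# Back-of-the-envelope check of the land-reform figures quoted in the text.
farm_land_mln_ha = 13.5   # million hectares remaining for farms (from the text)
farm_land_pct = 6.9       # the same area as a % of all agricultural land (from the text)
producer_pct = 81.9       # share owned by producers of agricultural goods (from the text)

# Derived quantities (my back-calculation, not stated in the source):
implied_total = farm_land_mln_ha / (farm_land_pct / 100)   # total agricultural land
producer_owned = implied_total * (producer_pct / 100)      # area owned by producers

print(f"implied total agricultural land: {implied_total:.1f} mln ha")  # ~195.7 mln ha
print(f"producer-owned land: {producer_owned:.1f} mln ha")             # ~160.2 mln ha
```

The implied total of roughly 196 million hectares is of the right order of magnitude for Russia's agricultural land, which suggests the quoted percentages are internally consistent.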
The second significant result of the land reform is the transition to a multi-layered economy in the field of the agricultural industry, in which several different forms of ownership could be rationally combined. In addition, an important part of the agrarian reform is privatization and the developing market infrastructure of the agro-industrial complex, which contributed to the formation of the economic conditions for its functioning. It had previously been planned to develop a system with a balance of self-regulation of supply and demand, as well as of the relationships between sellers and buyers in the agricultural market. But these areas have suffered setbacks and have been missed.
Despite this, the fact remains that any economy in any society is a system that has a complete structure. Each element here is an important link that affects the quality and efficiency of the functioning of the economic system [3].
The goal of the state in regulating the agro-industrial complex should be to take into account national interests as the basis for determining the directions of economic policy. This is the major difference between state regulation and market mechanisms. Due to this provision, the state regulation of the economy is given special importance here [4].
Developed countries have a huge variety in the combination of the functions of the state and the subjects of economic relations. That is why analyzing the experience of foreign countries, it is worth considering the situation in the field of agro-industrial complex in the United States of America, in France, and in some countries of the Commonwealth of Independent States.
The main objectives of state regulation of the economy in foreign countries are identified as follows:
1. consolidation of the existing economic system and its adaptation to constantly changing conditions;
2. the alignment and stabilization of economic cycles;
3. improving the structure of the national economy;
4. regulation and maintenance of monetary circulation;
5. ensuring employment of the working-age population;
6. ensuring balance in the external economy;
7. maintaining and developing competition;
8. maintaining stability in the economic and social spheres;
9. improving the standard of living of the population.
One of the main tools used by the state to support the economy in developed foreign countries is considered to be antitrust policy, together with the level of concentration of state entrepreneurship and a balanced pricing policy. State support for agricultural producers in these countries is aimed not at stimulating production but at supporting the level of producer income, as well as at implementing structural, social, and regional policies that are not directly related to production but help to optimize and improve the quality of life of citizens [5].
The dominant position in modern developed countries is held by the approach under which the state's task is not to support economic growth at the expense of budget expenditures, but to ensure that the subjects of relations in this area have access to tools with which they can benefit from entrepreneurial actions. Ensuring the country's competitiveness, creating and improving the legal and economic environment, and supervising and supporting actors in achieving their competitive goals are manifestations of the role of the state in agricultural policy [6].
In the United States of America, agriculture and its efficiency are based on a number of basic conditions:
1. a formed agro-industrial complex;
2. organization and maintenance of the economy in this industry;
3. mass introduction of technological and managerial innovations, their modernization, and a high speed of diffusion;
4. state policy on the regulation and stabilization of production in the field of agriculture and the export of its products.
The complex of the agricultural industry was formed and developed with the increase and strengthening of intersectoral integration relations.
Another fairly effective method of regulating the agricultural industry by the state in the United States of America is the price policy, which is expressed in the establishment of guaranteed prices for specific products of agricultural producers. This achieves the goal of providing a fixed level of average income for farmers. State support is divided not only by areas of activity, but also by regions, which makes it possible to create equal conditions for farming and maintaining production for entrepreneurs and farmers.
By reducing the area under certain crops, it is possible to receive a direct state payment, which is also a good mechanism for regulating agricultural production in this economic sector [7].
The essence of state regulation of the sphere of agricultural production in the United States of America is the possibility for a farmer to receive income from two sources:
1. from the sale of products;
2. from direct government payments.
Thus, the state protects farmers from the risks associated with the sale of goods produced by the farmer on the market.
The reasons that dictate the need to introduce innovations and changes in the agro-industrial policy of the United States of America include the strict and inflexible requirements of the World Trade Organization for trade liberalization at the international level, the processes of globalization of the world economy, and other factors [8].
The changes that the above conditions require are achieved primarily through the redistribution of the budget: direct government payments to farmers are reduced, while public expenditure increases in the fields of science, education, information support, and measures for the protection of nature and the environment. There is also a reduction of customs duties, fees, and tariffs, whose high levels are a serious hindrance to the development of trade at the international level.
In Western Europe, France stands out as one of the most economically developed countries. A large territory and a large population give France the right to be on the list of the largest countries in Europe. In percentage terms, approximately 17% of industrial and 20% of agricultural production in Western Europe is accounted for by this state.
In France, there are principles under which trade, farming, and entrepreneurship are free. The market mechanisms of the economy stimulate the development of the country's economy as a whole, but the state plays a large role in organizing the freedom of pricing. This role is manifested in the preparation of plans and the programming of policies in the field of the agricultural industry. Special authorities were established in France to solve these tasks. Planning helps to focus the French economy on the modernization of technologies, the introduction of innovations, and the conduct of scientific research and development. Just as in the United States, much attention is paid to stimulating and maintaining competition in the economy. Government support is expressed in the promotion of and development assistance for small businesses. The state provides assistance in obtaining loans for various periods, offers tax incentives, and distributes commercial information [9].
The systematic control of the economy by the state in France is an example of high political influence on economic processes. Control and regulation of the activities of large organizations, enterprises and firms is carried out through the examination of compliance with antimonopoly legislation.
In general, any agrarian reform is a complex, multifaceted and time-consuming process, aimed primarily at streamlining all links in this industry. The reform of the agricultural industry is a responsible, socially and economically important matter. This largely explains the situation in some CIS countries, where agricultural reforms led to significant changes but either were not completed or did not correspond to the planned results [17].
Let us consider the example of agrarian reform in the Republic of Tajikistan. The privatization of state farms, the reorganization of collective farms, and the denationalization of property led to serious changes in the structure of farms, which in turn made it necessary to re-evaluate, reform and change the methods and structure of the agro-industrial complex.
In the process of reforming agricultural production in the Republic of Tajikistan, the structure of land and agricultural formations was radically changed [10]. The main goals of the agrarian reform in the Republic of Tajikistan were to improve the quality and efficiency of domestic production of agricultural products, increase its volume, establish an optimal price level to ensure competitiveness in international markets, fully provide the population with food, and fully provide industry with raw materials. There have been changes in the share of large producers of agricultural products: in the 1990s they accounted for about 45% of gross output, and by 2010 this figure had fallen to 2% [15].
At the same time, the reform did not establish a system of lending to farmers, and the state does not offer effective ways to protect producers against risks in the production and sale of farm products or against natural disasters. Thus national farming remains dependent on many factors. This leads to the conclusion that in the Republic of Tajikistan the agrarian reform at this stage has not brought significant improvements to the agricultural sector. Agricultural production in this state remains at a low level and continues to have the character of subsistence production. The state has not reached a level of development of the agro-industrial complex at which domestic production could fully satisfy the needs of the population. The reform is still ongoing and has an unfinished character. That is why the state needs to take all possible measures and use a variety of tools to strengthen the effectiveness of policy in the field of agricultural production, starting at least with the adoption of valid legal acts and the modernization of the material and technical base of agricultural producers. Legal relations in the field of property, contracts and business activities in the Republic of Tajikistan are regulated by the Civil Code.
In the Kyrgyz Republic, the ongoing agrarian reform was also aimed at ensuring a radical change in the entire structure of land and agricultural production. The processes associated with the collapse of the USSR, namely the denationalization and privatization of state property, began in 1991, and in 1993 the Kyrgyz Republic was one of the first among the countries of the Commonwealth of Independent States to introduce its own national currency [16]. Then, in 1998, private ownership of land was introduced, which lifted the moratorium on its purchase and sale. This stimulated the growth of the agricultural industry and required agricultural trade to reach a new level. Thus, in 1998, the Kyrgyz Republic was also one of the first countries of the Commonwealth of Independent States to join the World Trade Organization. The reform in this republic led to a radical transformation of the system that existed before the reforms were implemented. The forms of ownership of fixed production assets and land underwent significant changes. The transition to real market relations in the economy of this country, together with the processes of democratization and the abandonment of socialist production systems in the agricultural industry, led to the development of the agricultural production complex. The government expected that by starting with the reform and modernization of the agricultural sector after the collapse of the USSR, it would be easier to bring the economy as a whole to a new level. Legal relations in this area are also regulated by the Civil Code [11].
Let us consider policy in this industry in the Republic of Belarus. Based on the State Program for the Development of Agricultural Business in the Republic of Belarus for 2016-2020, experts predicted an increase in the growth rate of economic efficiency of the agro-industrial complex. It was planned to improve the quality of manufactured products and to ensure the competitiveness of agricultural output. Improving the efficiency of processing enterprises and organizations of the agro-industrial complex and producing marketable products for sale on both domestic and foreign markets is the main goal of the modern agricultural industry in the Republic of Belarus [12].
Modernized approaches to developing the agricultural industry now recognize the need for agricultural consulting services. These help to solve issues related to the effective organization of activities among entrepreneurs and farmers, leading to an increase in production volumes. Agroconsulting is defined as a set of works on the preparation of production processes in the agricultural sector, the selection of effective production technologies, and the training and advanced training of personnel in the agricultural sector, undertaken to achieve concrete results and performance indicators for the customer; its purpose is the creation of a modernized, high-quality, efficient, rational and systematic approach to doing business in the agro-industrial sector. This approach ensures the integration of all production stages.
Such relations in modern society, of course, must be protected by legal norms. Relations in the field of consulting are built on the basis of a contract. This is regulated by Article 39 of the Civil Code of the Republic of Belarus. According to the legislation of this state, a contract for the provision of consulting services is a type of contract for the provision of paid services. This method of regulating and stimulating the development of the agro-industrial complex is innovative and in demand under modern market economic conditions.
In the Russian Federation, the process of reforming the agricultural sector also began with the privatization of state land. This is a primary process in the transition to a new stage of economic development after the collapse of the USSR, so the reorganization of enterprises, organizations and the structure of the agro-industrial complex as a whole was inevitable and necessary [13].
But in the 1990s the pace of the reorganization process slowed, the implementation of agrarian reform was delayed, and crisis phenomena gradually emerged, largely due to the unsystematic approach to reforming the country's economy. The standard of living of the population was falling, long-unsolved problems were getting worse, and the quality of products and production volumes were noticeably declining. Relations in the agricultural industry were regulated by articles of the Civil Code in terms of property relations; in addition, a number of federal laws and other regulatory legal acts were in force. Non-compliance with the law, as well as numerous gaps and miscalculations in the legislation, complicated the situation and aggravated the crisis. The methods of legal regulation at that time were not effective enough, and producers of agricultural products did not know or master the legal requirements for conducting this activity. That is why the state faced two main tasks: 1. reform of state regulation of the agro-industrial complex; 2. establishment and stabilization of legal protection of the subjects of legal relations in the agro-industrial sphere.
The vast territory of the Russian state determines the production of products in various industries. The state needs to support each sector for the harmonious development of each region and the entire economy as a whole.
The main methods of state regulation of the agro-industrial complex in Russia are: 1. competent pricing policy; 2. regulation of anti-monopoly policy; 3. regulation of credit, budget and tax policy; 4. planning; 5. organization, development and implementation of programs in the agricultural industry; 6. ensuring balanced production in the various branches of the agricultural industry; 7. the method of regulatory regulation. The problem of legal methods of state regulation of the agricultural complex carried significant weight in the development of the Federal Law "On the Development of Agriculture" of 29.12.2006 N 264-FL, as well as in the process of delineating powers in the field of legal regulation of the agro-industrial sphere between the Russian Federation, its subjects and local self-government bodies.
These provisions also played a huge role in Russia's accession to the World Trade Organization. According to paragraph 3 of Article 5 of the above-mentioned Federal Law: "The state agrarian policy is based on the following principles: 1. availability and targeting of state support of agricultural producers, as well as organizations and individual entrepreneurs engaged in the primary and (or) subsequent (industrial) processing of agricultural products, scientific institutions, professional educational organizations, and educational organizations of higher education that, in the course of scientific, technical and (or) educational activities, carry out agricultural production and its primary and subsequent (industrial) processing in accordance with the list specified in part 1 of Article 3 of this Federal Law; 2. availability of information on the state agrarian policy; 3. unity of the market of agricultural products, raw materials and food and ensuring equal conditions of competition in this market; 4. consistency in the implementation of measures of the state agrarian policy and its sustainable development; 5. participation of unions (associations) of agricultural producers in the formation and implementation of the state agrarian policy".
An important impetus in the legal regulation is the development of state programs for the development of the agro-industrial complex of the Russian Federation. Among them are the Federal Scientific and Technical Program for the Development of Agriculture for 2017-2025, the State Program for the Development of Agriculture, and others [14].
Conclusions
The experience of foreign countries helps to conduct the most complete analysis of the effectiveness of state regulation of the agro-industrial complex. Comparing the situation in several countries helps to see the level of well-being of citizens and the quality of life of the population. We believe that these outcomes rest on the growth of labor activity and on increases in its efficiency, which occur under targeted state regulation of the agro-industrial complex and improved productivity. Such growth can be achieved by attracting foreign investment, as well as by modernizing production and introducing innovative technologies.
That is why we can say that the experience of foreign countries can be very interesting for Russian specialists to study for the further development of the agro-industrial complex and the market economy as a whole.
After analyzing the effectiveness of state regulation of the agricultural industry, we can formulate several conclusions.
Despite the stabilization of the reform process and the currently harmonious course of changes and transformations in the Russian Federation, we believe that several conditions must be observed to preserve and further revive the agricultural sector.
The first of them is that the transition from one model of the economy to another, as well as a change in the structure of agro-industrial production, requires a fairly long period of time. Such reforms should have a preparatory stage so that domestic production is not disrupted. The spontaneity of changes can lead to an imbalance, and an aggressive policy on the part of the state will negatively affect the development of the agro-industrial complex. Thus, it is necessary to modernize and reform the agricultural sector of the economy with restraint, gradually and over a long period of time.
Also when implementing state regulation of the agro-industrial complex, the state must take into account the peculiarities of each sector of this sphere. In this way, a high rate of industrial growth in each industry will be achieved. It is necessary to actively influence the development of agriculture by means of credit and tax regulation, by offering special credit conditions for entrepreneurs, as well as tax benefits.
It is also necessary to constantly monitor and evaluate the effectiveness of the ongoing changes, for a timely response and change of direction, if necessary.
For the majority of unprofitable farms, effective management programs should be selected, and production assets and labor should be used efficiently.
We believe that it is necessary to pay attention to the development of rental relations: foreign citizens should be attracted to agricultural business in the territory of the Russian Federation only through the rental of land.
Agricultural transformations and reforms in this sector are necessary not only for the development of these sectors of the economy, but also for ensuring the efficiency of production and the food security of the country.
Neo- and Paleo-Limnological Studies on Diatom and Cladoceran Communities of Subsidence Ponds Affected by Mine Waters (S. Poland)
Plankton assemblages can be altered to different degrees by mining. Here, we test how diatoms and cladocerans in ponds along a river in southern Poland respond to the cessation of long-term Pb-Zn mining. There are two groups of subsidence ponds in the river valley. One of them (DOWN) was contaminated over the period of mining, which ceased in 2009, whereas the other (UP) appeared after the mining had stopped. We used diatoms and cladocerans (complete organisms in plankton and their remains in sediments) to reveal the influence of environmental change on the structure and density of organisms. The water of the UP pond was more contaminated by major ions (SO4 2−, Cl−) and nutrients (NO3 −, PO4 3−) than the DOWN ponds. Inversely, concentrations of Zn, Cd, Cu and Pb were significantly higher in sediment cores of the DOWN ponds in comparison to those in the UP pond. Ponds active during mining had higher diversity of diatoms and cladocerans than the pond formed after the mining had stopped. CCA showed that diatom and cladoceran communities related most significantly to concentrations of Pb in sediment cores. Comparison of diatom and cladoceran communities in plankton and sediment suggests significant recovery of assemblages in recent years and a reduction of the harmful effect of mine-originating heavy metals. Some features of the ponds, such as the rate of water exchange by river flow and the presence of water plants, influenced plankton communities more than the content of dissolved heavy metals.
Introduction
The mining industry influences the aquatic environment. Among environmental effects, the draining of mines and mine tailings as well as the leaching of spoil heaps have been recognized as particularly harmful for aquatic organisms. The impact of such pollution can be most distinct in small catchments receiving large amounts of mine drainage, where dilution with natural waters is limited. Mine waters usually contain many compounds in potentially harmful amounts, inducing synergistic effects on organisms [1][2][3][4]. For example, heavy metals (Cu, Pb, Zn) can be less toxic for biota in water (e.g., Cladocera) at a high content of cations like Ca2+ and Mg2+ [5,6]. It was also found that in systems with elevated metal concentrations and acidic pH, species richness decreases and the number of taxa is low [4,[7][8][9].
The chemical quality of water bodies (e.g., lakes, rivers, dam reservoirs) receiving mining waters has been monitored in many aquatic systems, but their long-term ecological impacts are only rarely estimated. Most data show a negative effect of pollution (especially by heavy metals) on planktonic organisms [4,10]. However, some observations show that algae and zooplankton can adapt to prolonged heavy metal contamination, e.g., [11][12][13][14]. For example, in small fishponds in a partially reclaimed area impacted by the lead-zinc mine Matylda (southern Poland, Chrzanów area), the influence of heavy metals remains a minor factor, although small numbers of teratogenic forms of phyto- and zooplankton have been found [12,13].
Sediments from lakes are environmental libraries, abundant in information about the history of catchments and their ecosystems. Paleolimnological studies of these sediments polluted by mining, can help to reconstruct changes of environmental conditions. As a rule, sedimentary geochemistry and associated subfossil remains of biological communities (e.g., cladoceran crustacea, diatom algae) are used to assess the natural pre-disturbance variability, the impact of the disturbance and post-disturbance dynamics [4,15]. Based on ecological preferences of particular organisms they can be used to assess the impacts of pollution on biological communities [4,16]. Because of different habitat preferences, biological remains of taxa can be sources of information about differences between depositional subenvironments and their changes over time (e.g., [4,17]).
Diatoms and cladocerans are most often used as indicators in paleoenvironmental reconstructions because of the good preservation of their chitinous and siliceous cell walls and the well-established environmental preferences of a number of taxa [4,18]. Diatoms are a base element of the trophic food chain, with observed biomagnification of heavy metals [19]. They are good bioindicators of metal toxicity in fluvial and lentic systems [15,[19][20][21][22]. Diatoms and cladocerans have great potential in paleolimnological pollution studies because of their sensitivity to changes in water quality and their location at the base of food-webs. Heavily impacted aquatic environments can be dominated by metal-resistant diatom and cladoceran species or by species of broad ecological tolerance [4].
The aim of our study is to recognize changes in the species composition of diatoms and cladocerans in response to the cessation of Zn-Pb mining, recorded in the water and bottom sediments of subsidence ponds situated on the Chechło River floodplain (southern Poland). We compared these communities in subsidence ponds active during the period of mining and in a subsidence pond inundated after the mining cessation, assuming that the younger one would be less polluted with heavy metals. The first hypothesis assumes that diatom and cladoceran communities are not affected by the concentration of heavy metals in the water of subsidence ponds. The second hypothesis assumes that regeneration of the diatom and cladoceran communities is influenced by high heavy metal concentrations in the sediments of ponds in the river valley downstream of the mine water discharge. The present study may be a key to understanding factors controlling ecosystem recovery from long-term disturbance. We address this by comparing diatom and cladoceran species living in the water with their past communities, using remains preserved in the sediments of the subsidence ponds, and by correlating their composition with present physico-chemical water variables and records of metal contamination in sediments.
Study Area
The ponds are situated in the middle course of the Chechło River. This area was impacted by the discharge of mine waters from a Zn-Pb mine (Trzebionka) and by other industrial and municipal sewage from two towns, Trzebinia and Chrzanów [23,24]. Over the investigated period, the Zn-Pb mine was the dominant source of heavy metals in this river system, whereas pollution from the two towns continued despite some variability [23]. We distinguished two research areas about 1 km apart: a large subsidence pond that emerged after the closure of the mine (UP) and several subsidence depressions ponded during the peak of ore exploitation (DOWN) (Figure 1). Their areas range from 0.5 to ca. 5 ha and the average depth ranges between 1 and 2 m. Some (ca. 20-50%) of the ponds are overgrown with macrophytes.
Figure 1. Sampling area. UP: subsidence pond formed after the mine closure; water and plankton samples CH4, CH5, and sediment cores RI, RIV. DOWN: subsidence ponds formed during peak exploitation; water and plankton samples CH1, CH2, CH3 and sediment cores RVI, RVII, RXII.
Sampling and Measurements
Samples for water, diatom, and cladoceran analyses were taken from sites CH1, CH2, CH3 (DOWN ponds) and CH4, CH5 (UP pond) four times in 2016 (April, July, September and October) (Figure 1). Core samples (UP pond: RI, RIV; DOWN ponds: RVI, RVII, RXII) for heavy metal concentrations, diatoms, and cladocerans were taken once in 2016, close to the same sites where the water and plankton samples were collected (Figure 1). Cores were sampled using a multisampler piston corer with a diameter of 4.5 cm (Eijkelkamp, Giesbeek, Netherlands). The methodology for processing the collected cores is described in Pociecha et al. [25].
Physico-Chemical Water and Sediment Core Analyses
In the water samples, pH, conductivity, the anions Cl−, NO3 −, PO4 3−, the cations NH4 +, Mg2+, Ca2+, and the heavy metals Cd, Cu, Pb, and Zn were analyzed. pH and conductivity were measured in situ using a WTW handheld multimeter (Multi 340i/SET 2, Wissenschaftlich-Technische Werkstätten, Weilheim, Germany). For anion and cation analysis, water samples were filtered through a 0.45-μm pore-sized syringe filter. They were analyzed within 48 h of sampling by ion chromatography (DIONEX IC25 and ICS-1000, Dionex Corporation, Sunnyvale, CA, USA). Concentrations of Cd, Pb, Cu, and Zn in total and dissolved phases (after filtration through a 0.45-μm filter) were measured by atomic absorption spectroscopy (AAS), using a Varian Spectra AA-20 with a graphite furnace (Varian Techtron PTY Limited, Mulgrave, Victoria, Australia). A standard reference material for water, SPS-SW1 Batch 12 (National Institute of Standards and Technology, USA), was used to determine the accuracy of the metal analyses in the water samples. Water hardness was calculated as the sum of Ca and Mg ions. Sediment samples for heavy metal analysis were dried at 105 °C and sieved through a 0.063-mm sieve. Then 0.5-g subsamples were digested with 10 cm3 of 65% HNO3 and 2 cm3 of 30% H2O2 (both analytical grade) using a microwave digestion technique [14]. The Cd, Pb, Zn, and Cu concentrations were measured with an inductively coupled plasma mass spectrometer (Perkin Elmer ELAN 6100) in the certified Hydrogeochemical Laboratory (AGH University, Krakow, Poland) according to the standard certified analytical quality control procedure (PN-EN ISO 17294-1:2007).
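Hardness reported as the sum of Ca and Mg ions can be expressed in several unit conventions, and the paper does not state which one was used. A minimal sketch, assuming input concentrations in mg/dm3 and reporting the conventional CaCO3-equivalent total hardness (the numeric values in the usage line are illustrative only):

```python
# CaCO3-equivalent hardness from Ca2+ and Mg2+ concentrations.
# Equivalent weights: Ca 40.08/2, Mg 24.305/2, CaCO3 100.09/2.
CA_FACTOR = 50.045 / 20.04   # mg CaCO3 per mg Ca2+
MG_FACTOR = 50.045 / 12.15   # mg CaCO3 per mg Mg2+

def total_hardness_caco3(ca_mg_per_dm3: float, mg_mg_per_dm3: float) -> float:
    """Total hardness as mg CaCO3/dm3 from Ca2+ and Mg2+ in mg/dm3."""
    return ca_mg_per_dm3 * CA_FACTOR + mg_mg_per_dm3 * MG_FACTOR
```

For example, 40.08 mg/dm3 of Ca2+ alone corresponds to about 100 mg CaCO3/dm3 of hardness.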
Diatom-Water and Sediment Analyses
In the field, 10 L of water was collected with a 10-μm plankton net. The core samples for diatom analysis were taken close to the same sites where the plankton was collected. Samples of 1 cm3 were taken at 10-cm intervals immediately after retrieval. The samples for diatom analysis were boiled in concentrated H2O2, treated with 10% HCl and washed several times with distilled water in order to remove organic matter. The cleaned diatom material was air-dried on cover slips and mounted in Naphrax Mountant (Brunel Microscopes Ltd.). Observations of the diatoms were performed with a Nikon Eclipse 80i microscope equipped with oil immersion and differential interference contrast. The identification of diatoms was based mainly on Krammer and Lange-Bertalot [26][27][28][29] and specific taxonomic publications. Taxonomic identifications were made to the lowest possible level. Diatoms collected from the plankton and sediment cores were processed following a standard procedure: a minimum of 400 valves were counted from every subsample. Only taxa that exceeded 0.2% relative abundance were used for statistical analysis. Diatom data were expressed as relative abundance, reflecting changes in assemblage structure and indicating potential fluctuations in the environment. In order to reconstruct the environmental conditions in the plankton and during the deposition of the sediments studied, diatoms were grouped according to their environmental requirements. Here we use the term "sedimentary diatoms" for all taxa found in the cores.
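The counting protocol above (at least 400 valves per subsample, 0.2% relative-abundance cut-off) amounts to a simple normalize-and-filter step. The sketch below is only an illustration of that step, not the authors' actual workflow; the taxon names and counts are hypothetical:

```python
def relative_abundance(counts: dict, min_pct: float = 0.2) -> dict:
    """Convert raw valve counts per taxon to relative abundance (%) and
    drop taxa below the cut-off used in the paper (0.2%)."""
    total = sum(counts.values())
    rel = {taxon: 100.0 * n / total for taxon, n in counts.items()}
    return {taxon: pct for taxon, pct in rel.items() if pct >= min_pct}

# Hypothetical subsample of 1000 counted valves:
kept = relative_abundance({"Taxon A": 600, "Taxon B": 399, "Taxon C": 1})
# Taxon C (0.1%) falls below the 0.2% threshold and is excluded.
```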
Cladocera in Water and Sediment
Samples for living Cladocera were taken from the central point of each pond. For taxonomic identification and quantitative analyses, samples were collected using a 5-L Ruttner sampler. In the field, 10 L of water (2 replicate 5-L samples) was concentrated with a 50-μm plankton net. For identification and counting of zooplankton species, five replicate sub-samples were analyzed microscopically (×100 or ×200) in a chamber of 0.5 mL volume. Taxonomic analyses of Cladocera were conducted using the identification keys [30,31]. The density of individuals was calculated per liter. Subfossil sediment Cladocera were prepared according to Frey [32]. One cubic centimeter of fresh homogenized sediment was taken from particular depths of each core for cladoceran analysis. Laboratory methods were described in a previous publication [25]. Taxa were identified and counted at 200× or 400× magnification under a Nikon 50i microscope. All skeletal parts were counted: headshields, shells, postabdomens, postabdominal claws, ephippia, and filtering combs. The most abundant body part for each taxon was chosen to represent the number of individuals. The results of the qualitative and quantitative analyses are presented in diagrams, in which the absolute number of specimens was calculated per 1 cm3 of sediment. Identification of the species was based on Frey [33] and Szeroczyńska and Sarmaja-Korjonen [34].
Statistical Analyses
In order to find significant differences in the values of the studied physico-chemical variables in water between the UP (CH4-CH5) and DOWN (CH1-CH3) ponds, the Mann-Whitney test was used. Differences in metal concentrations in the sediments between separate groups (as defined by hierarchical cluster analysis) were also evaluated by the Mann-Whitney test. To determine the degree of sediment contamination by heavy metals, the index of geoaccumulation (Igeo) was calculated according to the Müller [35] equation: Igeo = log2(Cn / (1.5 × Bn)), where Cn is the mean concentration of an element in the bottom sediment and Bn is the geochemical background of the element in shale [36].
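The geoaccumulation index and its usual 0-6 class scale can be computed directly from the Müller equation. In the sketch below, the shale background of 95 μg/g for Zn is an assumption taken from commonly cited shale averages, not necessarily the value used in reference [36]:

```python
import math

def igeo(c_sample: float, b_background: float) -> float:
    """Müller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(c_sample / (1.5 * b_background))

def igeo_class(value: float) -> int:
    """Map Igeo to Müller's contamination classes 0-6
    (class 0: Igeo <= 0, uncontaminated; class 6: Igeo > 5, extreme)."""
    if value <= 0:
        return 0
    return min(6, math.ceil(value))

# Illustrative: the maximum Zn concentration reported below (23,081 ug/g)
# against an assumed shale background of 95 ug/g gives Igeo ~ 7.3, class 6.
zn_class = igeo_class(igeo(23081, 95))
```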
We used Spearman's correlation coefficient to investigate the relationships between the occurrence of cladocerans and diatoms and the content of heavy metals in the sediments and the physico-chemical characteristics of the waters (Statistica 13).
Cladocera and diatom communities were classified based on their similarities using the hierarchical clustering method (UPGMA). The clustering classification was obtained using the MVSP 3.1 program.
The significance of differences in the density of diatoms and cladocerans between ponds created during mine exploitation (DOWN) and those created after the mine was closed (UP) was evaluated using the Mann-Whitney U test (Statistica version 13.1, Dell).
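The Mann-Whitney U statistic used in these comparisons is obtained from rank sums of the two pooled samples. The authors worked in Statistica, so the pure-Python sketch below (tie-averaged ranks, statistic only, no p-value) is merely an illustration of how U is formed:

```python
def mann_whitney_u(a: list, b: list) -> float:
    """U statistic for two independent samples (rank-sum form, ties averaged)."""
    combined = a + b
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # extend j over any run of tied values and assign the average rank
        j = i
        while j + 1 < len(combined) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    r1 = sum(ranks[: len(a)])            # rank sum of sample a
    u1 = r1 - len(a) * (len(a) + 1) / 2  # U for sample a
    u2 = len(a) * len(b) - u1            # U for sample b
    return min(u1, u2)
```

In practice `scipy.stats.mannwhitneyu` computes the same statistic together with a p-value and tie correction.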
Canonical correspondence analysis (CCA) was used to analyze species and environmental data. We first performed detrended correspondence analysis (DCA) based on the length of the gradient expressed in standard deviation (SD) units. For the DCA and CCA analyses, the data were log-transformed (ln(x + 1)) and centered. In the CCA, forward selection was used to reduce the set of environmental variables. The analysis was performed on cladoceran and diatom data and sediment samples to identify changes in the water bodies and to show the relationships between the environmental variables and the distribution of the studied organisms. Statistical significance, as well as the significance of the canonical axes, was assessed using the Monte Carlo permutation test with 499 permutations (CANOCO for Windows 4.5).
Physico-Chemical Variates in Waters of Subsidence Ponds
The pond water had a circumneutral to slightly alkaline pH (6.7-7.9). Conductivity ranged between 412 and 892 μS/cm; the contents of major anions (mg/dm3) are given in Tables 1 and 2. Some parameters show differences between the sites at the DOWN ponds. The lowest medians of conductivity and of the ions SO4 2−, Cl−, and PO4 3− were found at site CH3 (Table 1). The highest variability of the concentrations of major ions (with the exception of hydrocarbonates), nutrients, and total hardness was found in the UP pond (site CH5) (Table 1).
Total heavy metal concentrations in water varied in the following ranges (in μg/dm3): Cd nd-4.6, Pb 1.0-20.3, Cu nd-5.0, and Zn 20.0-91.3, while metals in the dissolved phase varied: Cd nd-0.53, Pb 0.1-7.3, Zn nd-47.1. In the studied waters the concentration of total Cd was usually < 0.6 μg/dm3, total Pb < 5.5 μg/dm3, and total Zn < 45 μg/dm3 (70%, 70%, and 60% of cases, respectively), while dissolved Cd was < 0.13 μg/dm3, dissolved Pb < 2 μg/dm3, and dissolved Zn < 30 μg/dm3 (65%, 75%, and 80% of cases, respectively). The concentrations of total Zn were significantly higher in the water of the UP pond than in the DOWN ponds (Table 2). The highest metal concentrations in water appeared in different seasons and at different sites. Maximum concentrations of total and dissolved Cd and Pb in all waters were found in August (with the exception of total Pb in pond CH1 and dissolved Pb in pond CH4). The concentrations of Pb (total and dissolved) were then ca. 2-3 times higher in ponds CH2 and CH3 than in CH4 and CH5. The highest concentrations of total Cd and Zn were found at site CH5, of Pb at site CH2, and of Cu at site CH4 (Table 1).
Heavy Metals in Sediments of Subsidence Ponds
Metal concentrations in the sediment cores varied significantly (in μg/g): Cd 6.1-612.0, Pb 302.6-10,223, Cu 21.4-397, and Zn 506.7-23,081 (Figure 2). Metal concentrations in cores RVI, RVII, and RXII (0-10 cm strata) from the DOWN ponds were from a few to several dozen times higher than those in cores RI and RIV from the UP pond (Figure 2). Metal concentrations in the lower and/or middle strata (10-20 and 20-30 cm) of core RXII were similar to those in the UP pond (Figure 2). According to the geoaccumulation index, the 0-10 cm strata of sediment cores RVI, RVII, RXII, and of core RI (with the exception of Pb) were extremely contaminated by Zn, Cd, and Pb (Igeo > 5, class 6) (Table 3). In addition, core RXII at a depth of 10-20 cm was extremely contaminated by Cd. Other sediment strata in cores RXII, RVII, and RIV were moderately to heavily contaminated by Zn (classes II-V) and heavily contaminated by Cd and Pb (classes IV-V). Sediments were usually uncontaminated or weakly contaminated by Cu (classes 0-II).
Pond CH2 (core RXII; Table 3) had the highest diversity of Cladocera taxa in comparison with the other DOWN reservoirs. The total density of Cladocera remains in the sediments varied from 1 ind./cm³ in the UP pond (core RI) to over 100 ind./cm³ in core RXII from the DOWN ponds (Table 3). In the UP pond the Cladocera assemblage was rather poor and its density did not exceed 20 ind./cm³. Chydorus sphaericus was dominant and present in all studied ponds. Ch. sphaericus occurs in both pelagic and littoral zones, and its high density is characteristic of eutrophic and polluted waters. The highest density of this species was observed in core RXII (21 ind./cm³), in a pond of the mining area (DOWN), whereas a much smaller density, below 10 ind./cm³, was observed in cores of the pond formed after the mining period (UP) (Figure 3).
Relationship between Diatoms, Cladocera, and Environmental Variables in Subsidence Ponds
The highest Shannon (H') diversity values (up to 4) occurred for the plankton and for diatoms of six sediment samples from the UP and DOWN ponds. Values of the index for the Cladocera community differed from those obtained for diatoms. The highest H' diversity values of Cladocera communities (although below 1) were observed in two sediment cores from the water body formed after mining (UP), as well as in one plankton sample and one sediment core from a DOWN pond (Table 4). The analysis of similarities of the plankton community (diatoms and cladocerans) ordered the communities according to their similarity, reflecting differences in the dominance structure of the identified diatom and cladoceran species at the sampling sites. The three closely situated DOWN ponds (D-CH1, D-CH3, D-CH2) were characterized by stagnant water and were formed during mine exploitation. The two most distant sampling sites (U-CH4, U-CH5) were situated in a pond formed after the mine was closed, characterized by a flow-through of water (Figure 4). Statistically significant correlations were found between the abundance of particular diatom and cladoceran species and the physico-chemical data of the water (including heavy metals), as well as between the abundance of diatoms and cladocerans and heavy metal concentrations in the sediment cores of the UP and DOWN ponds (Tables 5-8).
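The H' values in Table 4 come from the standard Shannon diversity formula; a minimal sketch, assuming the natural-log form (the text does not state which logarithm base was used):

```python
import math

def shannon_h(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i), summed over taxa with
    non-zero counts; p_i is the relative abundance of taxon i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

With this convention, H' increases both with the number of taxa and with the evenness of their abundances: ten equally abundant taxa give H' = ln(10) ≈ 2.30, while a community dominated by a single taxon approaches 0.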
In the UP pond we did not find significant correlations between heavy metals in the water and planktonic diatoms overall. However, the highest significant Spearman rank correlations were found for Aulacoseira ambigua, Pseudostaurosira brevistriata, and Staurosirella pinnata (all with dissolved Zn), for Melosira varians (negative) and Navicula cryptocephala (both with dissolved Pb), and a significant negative correlation for Cyclotella meneghiniana (with dissolved Cd) (Table 5A). In the sediment of the UP pond, significant positive (Spearman rank order) correlations were found between Aulacoseira ambigua and Cu, and between Encyonema ventricosum and Zn and Cd. Other diatom species correlated negatively with the heavy metal content in the sediment (Table 6A).
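The Spearman rank correlations reported in Tables 5-8 are simply the Pearson correlation of rank-transformed data; a self-contained sketch with mid-rank tie handling (illustrative, not the statistics package actually used in the study):

```python
def rank(values):
    """Assign 1-based average (mid) ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den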
In the sediment samples of the DOWN ponds, diatoms such as Achnanthidium minutissimum (Cd, Cu), Asterionella formosa (Zn), Aulacoseira granulata (Cd), Encyonopsis cesatii (Pb), and Fragilaria cf. gracilis correlated positively with heavy metals (Table 8A). The highest correlation values with Pb concentrations were obtained for Encyonopsis cesatii (0.793), Sellaphora nigri, and Surirella brebisonii var. kuetzingii (0.756 and 0.839, respectively). The remaining diatoms correlated negatively with the heavy metal content (Table 8A). In the UP pond we found no correlation between heavy metals and the cladoceran community in the water. In the water, two species, Moina micrura and Daphnia pulex, correlated positively with other physico-chemical variables. In the sediments of the UP pond, only negative correlations were found between heavy metal concentrations and Cladocera species: Chydorus sphaericus correlated negatively with all heavy metals, while Alona affinis and A. quadrangularis correlated negatively with Pb. Moreover, different diatom species correlated positively with cladoceran species, probably reflecting trophic relationships (Tables 5B and 6B).
In the most polluted DOWN ponds, positive correlations were found between two Cladocera species (Daphnia pulex and Simocephalus vetulus) and dissolved Cd in the water, as well as negative correlations between four taxa (Alona sp., A. affinis, Graptoleberis testudinaria, Pleuroxus truncatus) and Pb in the sediments. Cladocera also correlated negatively with Zn, Cd, and Cu in the sediments. The correlations between diatom and cladoceran taxa were both negative and positive, which may indicate more complex trophic relationships in these ponds (Tables 7B and 8B).
Generally, the DOWN and UP ponds differed statistically with respect to plankton density and the physico-chemical parameters of the water (Table 2). The Mann-Whitney U test showed significant differences in plankton density between the UP and DOWN ponds (Z = 2.044452, p = 0.040), with density significantly higher in the UP pond. Moreover, the Mann-Whitney U test showed statistical differences in the cadmium concentration in the sediments (Z = 2.607971, p = 0.009) between the DOWN and UP ponds. Significant differences were also found in the number of diatom and cladoceran species (Z = 3.152921, p = 0.001).
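The Z and p values above have the form of the large-sample normal approximation to the Mann-Whitney U test; a sketch of that approximation (without the tie correction that the study's statistics software presumably applied):

```python
import math

def mann_whitney_z(a, b):
    """Mann-Whitney U with the large-sample normal approximation
    (no tie correction); returns (U, Z, two-sided p)."""
    n1, n2 = len(a), len(b)
    # U counts, over all pairs, how often a-values exceed b-values
    # (ties count 0.5).
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # Two-sided p from the standard normal CDF via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, z, p
```

Identical samples give U at its expected value (Z = 0, p = 1), while fully separated samples push U to its extreme and p well below 0.05.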
Canonical correspondence analysis (CCA) revealed an influence of physico-chemical variables and heavy metal concentrations in water and sediments on the distribution of the diatom and cladoceran communities. Statistically significant relationships were observed only in the sediment cores (Figure 5).
The CCA model for diatoms and cladocerans in the pond sediments indicated a statistically significant negative correlation with lead. The Monte Carlo permutation test showed statistical significance both for the first canonical axis (F = 2.591, p = 0.008) and for all canonical axes (F = 2.174, p = 0.006). In the CCA, the first axis explains 32.7% and the second axis 18.2% of the total variability of diatoms and cladocerans in the cores. The stepwise forward selection of environmental variables showed that the distribution of diatoms and cladocerans in the cores was related only to the Pb content in the sediments and to the age of the ponds. Figure 5 shows the group of organisms associated with a high Pb content in sediments and the group associated with the DOWN water bodies.
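The Monte Carlo permutation test used to assess the CCA axes follows a generic recipe: shuffle the samples, recompute the statistic, and compare with the observed value. A sketch of that logic, with a simple absolute-correlation statistic standing in for the CCA F-ratio (an illustrative simplification, not the actual ordination software):

```python
import random

def abs_corr(x, y):
    """Absolute Pearson correlation, used here as the test statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return abs(num / den)

def permutation_p(stat, taxa, env, n_perm=999, seed=1):
    """Shuffle the environmental variable across samples, recompute the
    statistic, and report the fraction of permutations at least as
    extreme as the observed value (with the conventional +1 correction)."""
    rng = random.Random(seed)
    observed = stat(taxa, env)
    hits = sum(
        stat(taxa, rng.sample(env, len(env))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)
```

With 999 permutations the smallest attainable p is 0.001, which is why permutation p-values such as 0.006 and 0.008 are reported to three decimals.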
Discussion
All the waters of the subsidence ponds on the Chechło River floodplain have higher conductivity and higher contents of SO₄²⁻, Cl⁻, and PO₄³⁻ ions than small unpolluted water bodies in southern Poland [37]. However, these characteristics were similar to those in water bodies in the vicinity of another abandoned lead and zinc mine in Upper Silesia, southern Poland [14]. The higher mean contents of the above parameters in the UP pond (sites CH5 and CH4) were associated with the direct inflow of the Chechło River, contaminated by municipal sewage from the towns of Trzebinia (~20,000 inhabitants) and Chrzanów (~40,000 inhabitants) in the upper section of the catchment [38]. Fluctuations of major ions and nutrients at site CH5, near the inflow of the Chechło River to the UP pond, were probably controlled mainly by the river discharge, because such changes were much smaller in the downstream part of that pond. The differences in the same parameters in the water between sites CH1-CH3 of the DOWN ponds are related to the variable rate of water exchange between the particular ponds and the Chechło River. The lowest concentrations of the above ions were found at site CH3, situated upstream from the inflow channel in the most distant part of the pond. Conversely, they were highest at site CH2, situated in proximity to the channel connecting the pond with the river. It should be emphasized, however, that even during small floods all ponds (CH1-CH3) are flooded with river water.
As with the major ions, the total Cd, Pb, Zn, and Cu concentrations in the studied waters were predominantly close to values from industrialized areas [39]; nevertheless, they were much lower than in aquatic systems polluted by active Zn and Pb mining [40,41]. The concentrations of dissolved Cd did not exceed the permissible values for priority substances, while dissolved Cu and Zn were not higher than the national permissible values for substances harmful to the aquatic environment [42]. Only the concentrations of dissolved Pb at sites CH1-CH3 and CH5 exceeded the average annual permissible values (AA-EQS, 1.2 μg/dm³) for priority substances; however, they were still below the maximum permissible values (MAC-EQS, 14 μg/dm³ [42]). The sporadically higher total Cd, Pb, and Zn concentrations in the UP pond (site CH5) could be related to runoff from the industrialized part of the catchment during heavier rainfalls. The higher or maximum Cd (total and dissolved) and Pb (total and dissolved) concentrations noted in late summer (August) could be related to the degradation of organic matter in the ponds. The largest maxima of total and dissolved Cd and Pb occurred in the DOWN ponds (CH2 and CH3) with the most contaminated sediments. A similar phenomenon was also observed in a fishpond of a nearby catchment and could reflect a release of these metals from the sediments [39].
In contrast to the water, the sediments were extremely contaminated by Cd, Pb, and Zn (according to the Igeo values [35]), reaching levels found in water bodies affected by active and closed Zn and Pb mines [14,40,41,43]. This confirms that sediments of water bodies in mining areas act as long-term sinks for heavy metals [44,45]. The lower sediment contamination of the UP pond compared to the DOWN ponds (with some exceptions in core RXII) is associated with the cessation of discharge after closure of the mine. The low Cd, Pb, and Zn concentrations in the bottom strata of cores RIV and RXII indicate the lack of fluvial sediment deposition during the mining era, before the ponding of the subsidence basins.
We studied changes in the planktonic and sedimentary diatoms over a temporal and spatial gradient of metal pollution in ponds affected by the operation of the ore mine, because diatoms and cladocerans are excellent indicators of environmental change [4,46,47]. Most diatoms found in the plankton are tychoplanktic, which can be related to the small size of the ponds, whose surface area does not exceed 4.5 ha and whose depth does not exceed 2 m [48]. However, the diatom assemblages in the Zn-, Pb-, Cu-, and Cd-polluted waters were generally resistant to the observed metal concentrations, as indicated by their large similarity to populations from non-contaminated waters.
The sampled sites from the UP and DOWN ponds were grouped on a dendrogram of similarities constructed for diatoms and Cladocera in the plankton samples, where CH4 and CH5 (UP) are clearly different from CH1, CH2, and CH3 (DOWN) (Figure 4). In the UP pond (sites CH5 and CH4), the content of nutrients and total hardness (Table 1) is higher than in the DOWN ponds because of the inflow of municipal sewage from nearby towns. Site CH1 was rich in Achnanthidium minutissimum, Gomphonema parvulum, Lemnicola hungarica, Nitzschia amphibia, and N. supralitorea, all of which belong to the mesosaprobic and indifferent-mesotraphentic diatom groups. Their abundance was highest at site CH2. The presence of some of these species, such as Lemnicola hungarica, Nitzschia amphibia, and N. supralitorea, indicates their adaptation to metal-contaminated waters. Achnanthidium minutissimum is well known from metal-contaminated waters [49], where this diatom clearly increases in population size [21,50]. Another diatom, Gomphonema parvulum, is also abundant under these conditions and, similarly to Lemnicola hungarica and Nitzschia amphibia, is considered a good indicator of strong water pollution. An example of above-average dissolved Cd and Zn content is site CH5 in the UP pond (Table 1), dominated by Gomphonema parvulum and Planothidium frequentissimum, both known as metal resistant [50]. Site CH3 (DOWN), with the highest average dissolved Pb content (Table 1), was dominated by Achnanthidium minutissimum, Cocconeis placentula var. placentula, and Gomphonema parvulum. Cocconeis placentula var. placentula also belongs to the diatoms well adapted to metal pollution. Important diatoms in the UP pond also included Melosira varians (Table 5), a metal-resistant diatom [50].
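Dendrograms of community similarity such as Figure 4 are typically built from a matrix of pairwise dissimilarities between abundance profiles; a sketch using Bray-Curtis dissimilarity (both the metric and the site profiles below are assumptions for illustration, not data from the study):

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors
    (0 = identical composition, 1 = no shared taxa)."""
    den = sum(x + y for x, y in zip(a, b))
    if den == 0:
        return 0.0
    return sum(abs(x - y) for x, y in zip(a, b)) / den

# Hypothetical counts of three taxa at three sites (illustrative only):
sites = {
    "D-CH1": [40, 30, 5],
    "D-CH2": [35, 28, 8],
    "U-CH5": [5, 10, 60],
}
# In such a matrix, the two DOWN sites resemble each other more than
# either resembles the UP site, which is what makes them cluster together.
```

Hierarchical clustering of this dissimilarity matrix then yields the kind of UP/DOWN grouping visible in the dendrogram.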
Generally, our results suggest that the diatoms common in the ponds are resistant to moderate metal contamination in neutral and alkaline waters (Table 1), even at sites CH5, CH2, and CH4 (Tables 1 and 2) with above-average metal content. No shift toward the domination of metal-resistant species was noted. This is supported by other works stressing that a high content of hardness-causing cations (e.g., Ca²⁺ and Mg²⁺) mitigates the toxicity of metals in mine water [5,6]. Also, the dominance of more sensitive species (e.g., Gomphonema utae, Meridion circulare var. circulare, Planothidium lanceolatum, and Staurosira venter) indicates good adaptation to metal-contaminated waters.
The cores from the UP pond (RI and RIV) were dominated by the mesosaprobous and meso-eutraphentic diatoms Gomphonema utae, Planothidium lanceolatum, and Staurosira venter. The diatom/metal correlations are significant for several taxa (Tables 6A, 7A and 8A), but the CCA analyses showed that Pb concentration had the highest (significant) importance for the distribution of the investigated biota. Other metals were correlated with Pb; however, their impact was not significant. The diatoms most positively correlated with increasing Pb content were Achnanthidium minutissimum, Nitzschia amphibia, Sellaphora nigri, and Surirella brebisonii var. kuetzingii.
The increase in the number of these diatoms in our metal-polluted sediments corresponds well with other findings. Achnanthidium minutissimum is generally considered an indicator of metal pollution and is often reported as predominant in lotic waters exposed to heavy pollution by metals [50]. However, the status of this species as an indicator of this type of pollution has long been discussed: diatoms attached to the substrate are more resistant, while the mobility of motile diatoms makes them more susceptible to toxic substances [20,51].
The presence of Sellaphora nigri (as Eolimna minima sensu auct. nonnull.), the most common benthic species in European freshwaters, is related to human-induced eutrophication, heavy metal pollution, and nutrient-rich environments [20]. Surirella brebisonii var. kuetzingii and Nitzschia amphibia are also known to prefer metal-contaminated waters [50] and are widely distributed diatoms in eutrophicated inland waters.
The taxa whose relative abundance decreased with raised Pb content included, e.g., Gomphonema utae, Staurosirella pinnata, Eunotia bilunaris, and Alona spp. (Figure 5). The high number of Achnanthidium minutissimum (formerly Achnanthes) associated with the decrease of Staurosira venter, Staurosirella leptostauron, and S. pinnata (formerly Fragilaria) fits the opinion of Hill et al. [52] that Fragilaria dominates at less metal-impacted sites, whereas Achnanthes dominates at more impacted sites. Moreover, the largest population of Staurosira venter (over 90%) was observed in core RXII, the least metal-polluted site (Figure 2). Several diatom species are known as metal-tolerant, pioneer, substrate-adherent species. Interestingly, teratological forms were almost completely absent, which is probably related to the neutral and alkaline reaction of the waters; many authors [50,53] regard their occurrence as an indicator of strong metal pollution.
The Cladocera showed an evident alteration after mine closure. Because of the short period of time since the end of exploitation and the poorly identifiable post-mining sediment strata, this change could be identified by comparing the sediment and planktonic organisms. Generally, planktonic Cladocera formed a more differentiated group at the family level (5 families and 13 taxa) than the sediment assemblage (3 families and 15 taxa). There was also a shift of dominant organisms, from Alona sp. and Chydorus sphaericus in the sediments to Daphnia pulex dominating the present-day planktonic taxa. Such a change was also observed in Lake Orta (Italy) by Jeppesen et al. [54], where during the period of toxic discharge the only dominant species were Chydorus sphaericus, scarce Bosmina, and rare Alona spp., whereas the lake's recovery was marked by the return of Daphnia pulex. The result obtained for the studied ponds is probably related to the fact that in the UP pond the river water flows through the center of the pond, whereas the DOWN ponds are supplied with river water by side channels and have stagnant water with abundant macrophytes. Leppänen [4,10], in studies on the effects of mining pollution on Bosmina longirostris and Chydorus sphaericus, underlined that those organisms tolerate mine water-impacted conditions. Some authors have also noted that Ch. sphaericus is tolerant of water pollution over a wide range of abiotic conditions [55], but long-term exposure of this species to Cu can reduce its rate of population growth [56,57]. We observed a strong negative correlation between heavy metals (Zn, Cd, Pb, Cu) and Ch. sphaericus in the subsidence pond formed after the mining cessation, whose sediments are less contaminated by heavy metals.
In the present study the Shannon (H') index showed that much more diverse Cladocera communities occurred in the plankton, and also in the sediment cores, of the subsidence pond formed after the mining cessation (UP) than in the older ponds (DOWN). These differences are confirmed by the dendrogram of similarities (constructed for both Cladocera and diatoms).
Changes in the abundance of some Cladocera correlated with water chemistry: SO₄²⁻ (Moina micrura), NO₃⁻ (Moina micrura, Daphnia pulex), and PO₄³⁻ (Daphnia pulex in the UP pond; Ceriodaphnia quadrangula in the DOWN ponds). A negative effect was noted only in the DOWN ponds, between NO₃⁻ and Alonella exigua and Ceriodaphnia quadrangula. The high density of Daphnia pulex in the water of the DOWN ponds appears to be only weakly impacted by heavy metals, reflecting its adaptation to long-lasting contamination.
Pb was the metal that most negatively impacted Cladocera in the UP and DOWN ponds. Pb was not tolerated by Alona, Chydorus, Graptoleberis, and Pleuroxus. García-García et al. [58] confirmed that a high Pb concentration in water had a negative impact on Diaphanosoma, Moina, and Alona, except during periods of raised water turbidity, which mitigates lead toxicity to cladocerans. In all subsidence ponds Alona and Chydorus were the dominant and most abundant genera in the studied sediments. The dominance of less sensitive species confirms the adaptation of the cladoceran communities to chronic metal contamination [25]. Trophic relationships between diatoms and cladocerans were observed in the sediment cores from the UP and DOWN ponds, which is related to the ability of cladocerans to colonize almost every type of freshwater.
Our research confirmed that heavy metal concentrations in the water of the subsidence ponds had no influence on the diatom and cladoceran communities, whereas the recovery of these communities is constrained by the high heavy metal concentrations in the sediments of the ponds in the river valley downstream of the mine-water discharge.
These results may be a key to understanding the drivers of recovery of water ecosystems after long-term disturbances of their functioning.
Conclusions
This work presents important information for assessing the impact of mine-water pollution on an aquatic ecosystem. In particular, it highlights the usefulness of diatoms and cladocerans as warning indicators of environmental change, supporting the use of multiple sediment proxies in paleolimnological pollution research. They provide information about the timing, direction, and magnitude of the impacts caused by pollution events.
The analysis of the plankton and of the remains of diatoms and cladocerans allowed us to reconstruct the pre-mining conditions in the subsidence ponds. It also showed the conditions of the ponds while the Zn-Pb mine was operating and after it had been closed. The occurrence of different ecological groups of diatoms and cladocerans (diversity in taxa and in density) in the subsidence ponds revealed the changes in water quality during mine operation and afterwards.
Neolimnological studies describe the present conditions of biotic communities, but paleolimnological information reveals past limnological conditions as an archive of environmental history.
Author Contributions: A.P. and D.C. were responsible for the research design. A.P., A.Z.W., E.S.-G., S.C., and D.C. performed the laboratory analyses, analyzed the data, and prepared and drafted the text and figures. A.C. performed the statistical analyses. All authors participated in discussions and editing. All authors have read and agreed to the published version of the manuscript.
A Road Map to Finding Microbiomes that Most Contribute to Plant and Soil Health
Microbiology studies have for the most part focused on the impacts of microorganisms as pathogens or on their use in the industrial production of valuable commodities. Today, however, the primary focus is on identifying their role in ecosystem health and ecology. The very significant reduction in the cost, and increase in the speed, of molecular tools and sequencing continues to significantly increase our ability to examine whole microbial communities and to identify their potential functions. In the agriculture sector, results from such studies have considerably improved our understanding of plant-microbe interactions and have provided revolutionary information on the role these microbiomes play in plant health. The plant microbiome can potentially help its host by providing nutrients, producing phytohormones, synthesizing vitamins, detoxifying toxic compounds, stimulating the plant's induced systemic resistance (ISR), and protecting it from a variety of biotic and abiotic stresses.
Introduction
The unravelling of the plant microbiome is changing agricultural practice and the concept of what a "healthy plant" is. The agro-microbial revolution focuses primarily on the optimal use of plants' existing microbial companions in order to improve plant performance and agricultural ecosystem functioning.
Agricultural production systems must be examined from an ecological perspective, with crop productivity being related to ecosystem services. Plant-associated microorganisms are fundamental for plant health and productivity, as they affect plant nutrition, metabolism, physiology, and performance. While the negative impacts of microorganisms on agroecosystem performance remain important, their beneficial impacts deserve closer attention. However, it must be emphasized that such benefits are going to be realized slowly. Here we provide three examples of how plant-microbe interactions have been utilized over millions of hectares and why it took decades for their utility to be realized. Suggestions that we can change agroecosystems overnight will only lead to disappointment in the research results.
The most well-understood and exploited trait in the plant-microbial interaction catalogue is nitrogen fixation by Rhizobium species. The ability of Rhizobia to make atmospheric nitrogen available to plants and significantly increase their yields has been known for over 150 years. It has been estimated that nitrogen fixation by legumes in natural ecosystems is in the range of 25-75 lb of nitrogen per acre per year, whereas in cropping systems it may be several hundred pounds per acre [1]. Commercially available Rhizobium inocula have a nominal cost in comparison to their yield and environmental benefits. The nodules formed on legumes remain the only plant-microbial interaction that breeders recognize as being critical to the success of any new cultivar they aim to generate. While plant-microbe interactions of similar significance likely occur with many other crop species, the lack of any obvious phenotypic indicator (i.e., the presence of nodules, analogous to that representing plant-Rhizobium interactions in legumes) likely could have resulted in the loss of the genetic traits in the plants required for a successful outcome of the interaction with a designated partner. Who knows how many such genetic functions have been deleted through millennia of breeding? Studies of the interactions between Rhizobia and their hosts have revealed an exchange of numerous chemical signals, a form of molecular dialogue. Hundreds of genes are likely involved in regulating the interactions of the two main groups of molecules required for a successful interaction: the nod gene-inducing flavonoids from the plant and the lipochito-oligosaccharide Nod factors from the Rhizobia. We have no idea of the traits that regulate microbial associations in corn, wheat, or rice in old or modern cultivars.
The finding that Rhizobia can also form endophytic associations with rice and are able to colonize all the internal tissues of the plant suggests that such interactions can have a major role in plant fitness and productivity. Surprisingly, the interactions proved to be strain- and variety-specific [2], indicating that the growth responses are indeed heritable traits. Large-scale field trials evaluating five rice varieties and seven Rhizobia strains over five seasons showed that bacterial treatments increased yield by up to 47% in farmers' fields, with an average increase of 19.5%. This study exemplifies the critical importance of selecting appropriate isolates and specific crop cultivars for optimizing yield benefits. We need to appreciate the growers of the Nile Delta who had the wisdom to recognize that intercropping rice with legumes contributed to yield increases in their rice crop. However, it was the selection of this site for study by the researchers that allowed them to discover that the build-up of the selected Rhizobia populations provided the critical benefit to the agroecosystem functions in this area.
Modern agriculture was designed to provide crop plants with optimal levels of all extraneous necessities for growth and yield. This approach, however, may also have compromised the microbial partners associated with the crop plants. In the development of sugarcane as a bioethanol stock in Brazil, cultivars with high yield potential were selected based on their ability to perform at sites where fertilizers were never used [3]. The best-yielding cultivars were subsequently found to be colonized by numerous species of endophytic bacteria that provided the plants with the nutrients and growth factors required for high yields, but without the need for large quantities of extraneous fertilizers. Sugarcane production in Brazil uses 50 kg N/ha vs. the 350 kg N/ha used in the USA [3]. It is not known whether this approach can be used to select other crop plants that provide acceptable yields in the absence of large fertilizer inputs, but the concept should be evaluated with other crops. The development of the Brazilian sugarcane industry also suggests that the most likely areas to find beneficial microorganisms within a field are sites with exceptionally high yields. Our efforts thus have focused on examining sites where production practices have created exceptional yields as a means to identify whether, and what, beneficial role microbiomes play in such instances.
Everything we do to grow plants, including the type of plants, impacts soil health and its microbiology. The development of disease-suppressive soils for take-all disease of wheat, pioneered by James Cook [4], is an example of how soil and plant health can be built through microbiology. Cook's research showed that continuous cropping of wheat for five or more years reduced or eliminated take-all disease. When a suppressive soil was added at low rates to a soil where disease was present, disease development was curtailed, even in the presence of the pathogen. Suppressiveness was shown to be caused by a build-up of fluorescent pseudomonad bacteria capable of synthesizing a diversity of fungicidal compounds (up to five antibiotics or antibiotic-like substances, including 2,4-diacetylphloroglucinol, DAPG) that inhibit the pathogenic fungi [4]. Crop rotations that reduced populations of this specific group of fluorescent pseudomonads eliminated suppressiveness, whereas those that increased them enhanced crop health. The focus for both soil and wheat health is now to design rotations in which these bacterial populations are increased or at least maintained at the suppressive level. Similar disease-suppressive systems have been found for other diseases and crops, and most are biologically based.
We identified a grower in Southern Ontario (Mr. Glenney, Haldimand, ON; G site) who developed a cropping system in which strips of corn and soybeans are planted in the exact same place in alternating years using a no-till production system and precision planters. While no obvious yield benefits were obtained for the first five years, by the sixth year his yields increased, and after twenty years they plateaued at about 300 bu/A in a region where the average yield is 150 bu/A. The grower wondered whether he had created a disease-suppressive soil for corn. We used this site as a model farm to identify whether the yield response may be associated with biological factors derived from the cultural practices being utilized. We planted the same corn variety at a conventional farm (H site) and at the G site over two years and tracked changes in plant and soil chemistry and biology [5]. Bacterial and fungal communities of the soil from roots (rhizosphere), washed roots, and sap from the stem were studied at three successive plant developmental stages (30, 60, and 90 days after planting) using terminal restriction fragment length polymorphism (TRFLP) analysis, a fast, reproducible, inexpensive, and robust molecular fingerprinting technique (Figure 1) that has been used to determine microbial diversity in a variety of environmental samples [6][7][8][9][10][11]. TRFLP-based molecular fingerprinting can not only distinguish microbial communities, but can also predict their composition based on the fragment sizes obtained. Our initial objectives were to determine when, where, and how to look using the most rapid and inexpensive technique (i.e., TRFLP). More thorough molecular techniques could then be carried out on the most relevant samples.
TRFLP analysis revealed significant temporal restructuring and dynamic alterations in the bacterial communities in all parts of the corn plant, but the greatest differences between the G- and H-site bacterial populations occurred in the stem sap by day 60 (Figure 2) and persisted until day 90. The plant sap microbiome therefore became our test of choice for assessing microbial diversity in subsequent large-scale field trials. Fungal diversity differed significantly only in the rhizosphere soils of the two sites, and this difference was present at all sampling times. It appears that bacterial endosphere communities are most affected by the crop's physiological development, while the fungi are most impacted in the rhizosphere by tillage practices. Confident that the microbiomes in corn sap from high- and low-yielding sites can be readily differentiated by 60 days after planting, we went on to test plants harvested randomly across several corn fields. The results were disappointing in that we found no statistically significant differences in any of the samples from any field. By chance, we received corn samples from fields that had been sampled at specific sites within a field based on normalized difference vegetation index (NDVI) images collected by a drone. Here we found highly significant differences between the sap microbiomes of plants from stressed sites and those from sites with highly vigorous plants (Figure 3a). We now have results collected from over 40 fields using NDVI images to sample high- and low-production sites. In most cases, significant differences were identified in their bacterial communities using TRFLP analysis. NDVI images indicate that almost all fields have poor, mediocre, and high-producing zones, where yields can vary from 75 to 350 bu/A. These zones might each account for about one third of the farm area.
The bacterial microbiomes within a specific zone of a field generally reflected yield expectations, but why they were so differentially distributed, and what they were doing within the plants, remains to be determined. The results do, however, explain our earlier finding: when you sample across a whole field you can expect to identify the average corn microbiome, but if you partition the field into productive and underperforming plants you will find significant differences in the communities that occupy the various niches of the plants. Such variances can be seen even in very small plots (Figure 3b). The good news for growers is that one third of their fields are already producing well. We are currently analyzing the interactions between various plant and soil physical, chemical, and biological factors related to plant and soil health and productivity. We have not done justice to fungal populations in our studies, but we recognize that they may have even greater impacts on crop productivity than the bacteria.
The most relevant sets of samples are being analyzed by high-throughput sequencing to examine the exact nature of the populations. Ultimately, our interest is to identify the physiological functions associated with the microbiomes that support plant vigour and yield. The major lesson here, however, is that when you sample across a large population of plants you get the microbiome of the average.
We have established a robust culture collection of microbes related to the more productive sites. It is foreseeable that in the future we will have unique biofertilizers formulated for broad-spectrum functions in crop productivity. We already know that microbes can supply plants with nutrients of all types and protect them from diseases and environmental stresses. How to introduce these into the plant ecosystem will be a major challenge, but first we need to know the conditions they require to thrive. Once we know this, we will be able to better manage agricultural practices that shift microbial populations from negative to positive potentials.
Abnormal Baseline Brain Activity in Non-Depressed Parkinson’s Disease and Depressed Parkinson’s Disease: A Resting-State Functional Magnetic Resonance Imaging Study
Depression is the most common psychiatric disorder observed in Parkinson's disease (PD) patients; however, the neural contribution to the high rate of depression in the PD group is still unclear. In this study, we used resting-state functional magnetic resonance imaging (fMRI) to investigate the underlying neural mechanisms of depression in PD patients. Twenty-one healthy individuals and thirty-three patients with idiopathic PD, seventeen of whom were diagnosed with major depressive disorder, were recruited. An analysis of the amplitude of low-frequency fluctuations (ALFF) was performed on the whole brain of all subjects. Our results showed that depressed PD patients had significantly decreased ALFF in the dorsolateral prefrontal cortex (DLPFC), the ventromedial prefrontal cortex (vMPFC), and the rostral anterior cingulate cortex (rACC) compared with non-depressed PD patients. A significant positive correlation was found between Hamilton Depression Rating Scale (HDRS) scores and ALFF in the DLPFC. The finding of altered ALFF in these brain regions implies that depression in PD patients may be associated with abnormal activity in the prefrontal-limbic network.
Introduction
For people with Parkinson's disease (PD), depression is the most common and disabling psychiatric symptom, and up to 50 percent of people with PD experience mild or moderate depressive symptoms [1,2]. Beyond its unpleasant mood characteristics, depression can worsen the symptoms of PD, including motor symptom deterioration [3,4], rapid disease progression [5], and cognitive attenuation [3,6]. Therefore, understanding and characterizing the underlying brain mechanisms of depression in PD patients using a neuroimaging approach is clearly important.
Over the last decades, evidence on the pathophysiology of depression in PD patients has accumulated from structural and functional neuroimaging studies [2,7,8,9,10]. High-resolution structural magnetic resonance imaging (MRI) showed that PD patients with depression displayed abnormalities in the size of several areas, including the orbitofrontal gyrus, the superior temporal pole, and the mediodorsal thalamus, compared with patients with PD alone [2,9]. Functional neuroimaging techniques have also been used to study depression in PD patients [2,7]. A previous PET study found decreased regional cerebral blood flow (rCBF) in the medial prefrontal cortex and the cingulate cortex in the depressed PD group compared with the non-depressed PD group [7]. More recently, Cardoso and colleagues, using functional magnetic resonance imaging (fMRI), observed decreased activity in the left mediodorsal thalamic nucleus and the left dorsomedial prefrontal cortex of depressed PD patients but not of non-depressed PD patients [2]. The abnormal brain regions found in these previous studies mainly lie in the prefrontal cortex and limbic system, implying that depression in PD patients may be associated with abnormal alterations in the prefrontal-limbic network.
Recently, resting-state fMRI has been widely used to investigate brain function under normal and pathological conditions because of several particular advantages, including high resolution, no use of radiation, and easy application [11,12,13,14]. During rest, low-frequency blood-oxygen-level fluctuations within a specific frequency range (0.01-0.08 Hz) are considered to reflect spontaneous neuronal activity [11,12,15]. The amplitude of low-frequency fluctuations (ALFF), a method developed by Zang et al., has been widely applied to explore abnormal brain activity associated with neuropsychiatric disorders, including mild cognitive impairment (MCI) [16], depression [17], Alzheimer's disease (AD) [18], schizophrenia [19], and medial temporal lobe epilepsy [20]. Compared with traditional task-related fMRI, resting-state fMRI can be performed on all manner of people and is especially suitable for those who are unable to cooperate with functional tasks [21]. To date, few resting-state fMRI studies have examined whether depressed PD patients exhibit abnormal resting-state activity.
In our study, we used ALFF to investigate alterations in resting-state brain activity in depressed PD patients compared with non-depressed PD patients. These abnormalities may be a trait marker and could be helpful for the future diagnosis of depression in PD patients. Based on previous studies, we hypothesized that abnormal ALFF would be found in certain areas of the prefrontal-limbic network in depressed PD patients compared with patients with PD alone. In addition, we also compared PD patients, both with and without depression, with normal controls (NCs).
Ethics Statement
The human fMRI experiment conducted in this study was approved by the Institutional Review Board of Beijing Normal University (BNU) Imaging Center for Brain Research, National Key Laboratory of Cognitive Neuroscience. All of the subjects gave written informed consent according to the guidelines set by the MRI Center of Beijing Normal University.
Participants
Twenty-one right-handed NCs and thirty-three right-handed patients with idiopathic Parkinson's disease, recruited from the Beijing Xuan Wu Hospital of China, participated in this study after giving written informed consent. The diagnosis of PD was based on medical history, physical and neurological examinations, response to levodopa or dopaminergic drugs, laboratory tests, and MRI scans to exclude other diseases. All subjects came in off medication for imaging and neuropsychological testing. Only PD patients with normal cognitive function, defined as a score of 27 or more on the Mini-Mental State Examination (MMSE) [22], were selected. Seventeen of the PD patients were diagnosed with major depressive disorder according to the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) criteria (American Psychiatric Association, 1994), and the remaining sixteen patients had PD alone. The 24-item Hamilton Depression Rating Scale (HDRS) was used to evaluate the severity of depression, and all depressed PD patients scored at least 8 points on the HDRS [23]. Additionally, the Unified Parkinson's Disease Rating Scale (UPDRS) [24] and Hoehn and Yahr (HY) staging [25] were recorded to describe the severity of PD. Detailed clinical data are shown in Table I.
fMRI Data Acquisition
All fMRI data were acquired on a 3-Tesla Siemens whole-body MRI scanner at Xuan Wu Hospital in Beijing, China. Foam padding and earplugs were used to limit head movement and reduce scanner noise for the subjects. During the scan, the subjects were instructed to rest and keep their eyes closed without thinking about anything in particular. The functional images were collected using an echo planar imaging (EPI) sequence. For each subject, 210 images were collected with the following imaging parameters: repetition time = 2000 ms; echo time = 40 ms; flip angle = 90°; slices = 28; matrix size = 64 × 64; voxel size = 4 × 4 × 5 mm³. A high-resolution, three-dimensional T1-weighted structural image was acquired for each subject with the following parameters: repetition time = 2100 ms; echo time = 3.25 ms; flip angle = 10°; slices = 176; matrix size = 224 × 256; voxel size = 1 × 1 × 1 mm³.
Data Processing
Image preprocessing was performed using Statistical Parametric Mapping (SPM8, http://www.fil.ion.ucl.ac.uk/spm). To allow for equilibration of the magnetic field, the first 10 volumes were discarded. The remaining 200 time points were slice-timing corrected to the middle axial slice, and all images were then realigned to the first image to account for head motion. A participant would be excluded if the translation or rotation parameters exceeded ±2 mm or 2° during the whole fMRI scan; in our study, no subjects were excluded. After slice-timing and head motion correction, all of the volumes were spatially normalized to the standard SPM8 Montreal Neurological Institute (MNI) template, re-sampled to 3 mm cubic voxels, and smoothed with a Gaussian kernel with a full width at half maximum of 5 mm.
ALFF Calculation
After preprocessing in SPM8, further data preprocessing and the ALFF analysis were performed with REST software (http://resting-fmri.sourceforge.net) [26]. First, the linear trend was removed, and every voxel was band-pass filtered (0.01 Hz < f < 0.08 Hz) to remove the effects of low-frequency drift and high-frequency noise. We then removed the influence of head motion using linear regression, but white matter and cerebrospinal fluid (CSF) signals were not regressed out. The ALFF calculation procedure was as follows: 1) the Fast Fourier Transform (FFT) was used to convert each voxel's time series from the time domain to the frequency domain; 2) the ALFF of every voxel was calculated by averaging the square root of the power spectrum across 0.01 Hz to 0.08 Hz; 3) the resulting ALFF values were converted into z-scores by subtracting the mean and dividing by the global standard deviation for standardization purposes.
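The three-step procedure above can be sketched in a few lines. This is an illustrative re-implementation on synthetic data, not the REST code itself; the array shape, TR, and test signal are assumptions made for the example.

```python
import numpy as np
from scipy.signal import detrend

def alff(ts, tr=2.0, f_lo=0.01, f_hi=0.08):
    """ALFF per voxel for a (n_voxels, n_timepoints) array, following the
    detrend -> FFT -> mean sqrt-power in band -> z-score recipe."""
    ts = detrend(ts, axis=-1)                           # remove linear trend
    n = ts.shape[-1]
    freqs = np.fft.rfftfreq(n, d=tr)                    # frequencies for TR-sampled data
    power = np.abs(np.fft.rfft(ts, axis=-1)) ** 2 / n   # 1) power spectrum per voxel
    band = (freqs >= f_lo) & (freqs <= f_hi)
    raw = np.sqrt(power[:, band]).mean(axis=-1)         # 2) mean sqrt of power in band
    return (raw - raw.mean()) / raw.std()               # 3) z-score across voxels

# Synthetic check: one voxel carries a 0.05 Hz oscillation (inside the band)
rng = np.random.default_rng(0)
t = np.arange(200) * 2.0                                # 200 volumes, TR = 2 s
ts = rng.normal(size=(10, 200))
ts[3] += 5 * np.sin(2 * np.pi * 0.05 * t)
z = alff(ts)
```

The voxel carrying the in-band oscillation should receive the largest z-score, which is the behaviour ALFF exploits when mapping spontaneous low-frequency activity.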
Statistical Analysis
A two-sample t-test was performed to explore the ALFF differences among depressed PD patients, non-depressed PD patients, and NCs. The between-group statistical threshold was set at p = 0.005 with a cluster size ≥ 432 mm³ (16 voxels), which corresponded to a corrected p < 0.05. This correction was determined by Monte Carlo simulations performed with REST software (http://resting-fmri.sourceforge.net) (whole-brain mask: 70831 voxels; number of simulations = 5000) [26].
Correlation between Clinical Data and ALFF
To examine the association of the ALFF abnormalities with the severity of depression in PD patients, we performed a partial correlation analysis (controlling for age and gender) between HDRS scores and ALFF values extracted from the clusters of voxels showing the most significant differences between depressed and non-depressed PD patients. Each cluster was the intersection of the corresponding region defined by the Anatomical Automatic Labeling atlas toolbox [27] and the within-group two-sample t-test map with a cut-off threshold of p = 0.005.
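A partial correlation of this kind can be computed by correlating the residuals of the two variables after regressing out the covariates. The sketch below is a generic illustration on simulated data (the variable names, sample size, and values are invented), not the pipeline used in the study.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation between x and y after regressing both on the covariates
    (with an intercept), i.e. a partial correlation controlling for covars."""
    X = np.column_stack([np.ones(len(x)), covars])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]   # residual of x
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # residual of y
    return float(np.corrcoef(rx, ry)[0, 1])

# Simulated example: age drives both measures, so the raw correlation is high
# but the partial correlation (controlling for age and gender) is near zero.
rng = np.random.default_rng(1)
n = 100
age = rng.normal(50, 10, n)
gender = rng.integers(0, 2, n).astype(float)
x = 2 * age + rng.normal(size=n)
y = 3 * age + rng.normal(size=n)
raw = float(np.corrcoef(x, y)[0, 1])
partial = partial_corr(x, y, np.column_stack([age, gender]))
```

This residual-regression formulation is equivalent to the textbook partial correlation and makes explicit what "controlling for age and gender" does to the data.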
Clinical and Demographic Testing of Samples
Regarding the clinical and demographic characteristics of the sample participants (Table I), there were no significant differences in gender (t = 0.495, p = 0.624), age (t = 1.668, p = 0.105), MMSE (t = 0.692, p = 0.495), HY (t = 0.394, p = 0.730), or UPDRS scores (t = 1.656, p = 0.110) between depressed and non-depressed PD patients. For HDRS (t = 8.965, p < 0.001), PD patients with depression scored significantly higher than those with PD alone. Comparing non-depressed PD patients with NCs, the differences in gender (t = 0.709, p = 0.483) and age (t = 1.414, p = 0.166) were also not significant.
Depressed PD Patients versus Non-depressed PD Patients
Compared with non-depressed PD patients, depressed PD patients exhibited decreased ALFF in the right dorsolateral prefrontal cortex (DLPFC), the ventromedial prefrontal cortex (vMPFC), the rostral anterior cingulate cortex (rACC), the superior frontal cortex, and the right middle temporal gyrus. The opposite pattern (depressed > non-depressed) was observed in the right cerebellum posterior lobe and the right cerebellum anterior lobe. Detailed information on the Montreal Neurological Institute (MNI) coordinates and clusters is provided in Fig. 1A and Table II.
Non-depressed PD Patients versus NCs
The group differences between non-depressed PD patients and NCs are shown in Fig. 1B and Table III. ALFF was higher in NCs than in patients with PD alone in the bilateral caudate, the left putamen, the supplementary motor area (SMA), the bilateral superior frontal gyrus, and the posterior cingulate gyrus. ALFF was significantly lower in NCs than in non-depressed PD patients in the left middle temporal gyrus, the right middle occipital gyrus, the bilateral superior occipital gyrus, the left inferior temporal gyrus, the left precuneus, and the right angular gyrus.
Depressed PD Patients versus NCs
As shown in Fig. 1C and Table IV, the depressed PD group demonstrated decreased ALFF in the bilateral caudate, the left putamen, the bilateral precuneus, the right superior frontal gyrus, the right middle frontal gyrus, the right medial frontal gyrus, the right superior temporal gyrus, and the right thalamus compared with NCs. Conversely, the right angular gyrus, the bilateral middle temporal gyrus, the left inferior frontal gyrus, the left precuneus, the left inferior parietal gyrus, and the right fusiform gyrus displayed increased ALFF in the depressed PD patients.
Correlations between ALFF Values and HDRS
We examined the relationships between HDRS scores and ALFF in the regions with significant group differences (depressed vs. non-depressed PD patients), including the DLPFC, rACC, and vMPFC. The only significant correlation between ALFF values and HDRS was found in the DLPFC (r = 0.698, p = 0.003). The other correlations were all below ±0.2 in magnitude (p > 0.05).
Discussion
The present fMRI study aimed to investigate alterations in resting-state brain activity in depressed PD patients. We found decreased ALFF in the DLPFC, the vMPFC, and the rACC in depressed PD patients compared with non-depressed PD patients. Conversely, increased ALFF (depressed > non-depressed PD patients) was observed in the cerebellum posterior cortex. In addition, when compared with NCs, both the depressed and the non-depressed PD patients showed altered activity mainly in the basal ganglia and the prefrontal cortex. Furthermore, a significant positive correlation was found between the HDRS score and ALFF within the DLPFC.
The DLPFC is a key hub in the prefrontal-limbic network, connecting to the orbitofrontal cortex, the thalamus, parts of the basal ganglia, the hippocampus, and primary and secondary association areas of the neocortex [28]. It plays an important role in cognitive, executive, and emotional processes, especially the down-regulation of negative emotional states [29,30,31]. Abnormal activity in the DLPFC may lead to cognitive and mental disorders and may partly contribute to the loss of interest or pleasure and the cognitive decline exhibited by patients with depression [32,33]. Our resting-state fMRI study found decreased ALFF in the DLPFC in depressed PD patients compared with patients with PD alone, and a positive correlation was also found between HDRS scores and ALFF values in the DLPFC. Consistent with our result, hypoactivity in the DLPFC in depression has been identified by many previous studies and has been regarded as a critical hallmark of depression [7,32,33,34,35,36,37]. For example, Bench et al. found a decreased rate of metabolism and decreased rCBF in the DLPFC in depression [32], and an increase in DLPFC activity remits depressive symptoms [37]. Similar results have also been found in depressed PD groups. A previous PET study reported a decreased rCBF level in the DLPFC of depressed PD patients compared with non-depressed PD patients [7], and stimulating the DLPFC with repetitive transcranial magnetic stimulation (rTMS) can be effective in remitting depressive symptoms in PD [33,35]. Together with these findings, we speculate that hypoactivation of the DLPFC may be an important factor in the genesis and development of depression in PD patients. The vMPFC also appears to be a critical area in PD-associated depression; abnormalities within the vMPFC in patients with major depressive disorder (MDD) have been documented in previous structural and functional studies [38,39,40].
In our study, we used ALFF to investigate the abnormalities of depressed PD patients and found decreased activity in the vMPFC in depressed PD patients compared with non-depressed PD patients. Similar to our finding, a previous PET study compared the regional blood flow of depressed and non-depressed PD patients and found a decreased rCBF level in the vMPFC [7]. The vMPFC is connected with the ACC, the hippocampus, and the amygdala [41] and plays a vital role in emotion generation and regulation [42,43]. The activity of the vMPFC is associated with the suppression of affective responses to negative emotional signals and might dampen amygdala activity [44]. Johnstone et al. found that, during an effortful affective reappraisal task, normal subjects showed an inverse relationship between vMPFC and amygdala activity, whereas depressed individuals did not [45]. Therefore, the decreased level of activity in the vMPFC in depressed PD patients may lead to an imbalance in its inhibitory influence on amygdala activity, contributing to the genesis of depression. Our data do not allow us to state that an altered relationship between the vMPFC and the amygdala is responsible for the observed decrease in vMPFC activity, but, based on previous reports, this hypothesis should be evaluated in the future.
In addition, decreased ALFF in the rACC was also observed in our study. The rACC is part of the brain's limbic system, strongly connected with the amygdala, the orbitofrontal cortex, and the hippocampus, and it has been reported to be associated with the processing and integration of affect-related information [46,47]. Lesions in the rACC can lead to a series of symptoms, including apathy, inattention, dysregulation of autonomic functions, akinetic mutism, and emotional instability, which overlap considerably with the quintessential symptoms of patients with MDD, implying that depression is related to abnormal activity in the rACC [48]. A recent resting-state fMRI study also demonstrated that the severity of depression in PD patients was correlated with ALFF values in the rACC, consistent with the result of our study [49]. However, in the absence of a normal control group, Skidmore and colleagues' study could not determine whether activity in the rACC was decreased or increased in depressed PD patients. Our study compensated for this limitation and showed that the rACC had decreased ALFF in depressed PD patients compared with the non-depressed PD group, giving a more complete fMRI picture of depression in PD patients.
The regions we found to have decreased ALFF in the depressed PD group, including the DLPFC, the vMPFC, and the rACC, are parts of the prefrontal-limbic network, which is important for affective processing [50]. Previous non-invasive brain imaging work has identified abnormal changes in the prefrontal-limbic network in patients with MDD [41]. In our study, we found that abnormal activity levels in the prefrontal-limbic network were also present in depressed PD patients, providing a new clue to the pathophysiology of depression in the PD group.
In contrast to the decreased activity in the prefrontal-limbic network, we observed increased ALFF in the right cerebellum posterior lobe in depressed PD patients compared with non-depressed PD patients. The traditional view of the cerebellum is that it is responsible only for the regulation of motor functions, but recent studies have found this area to be associated with emotional and cognitive processing as well [51,52]. Previous studies demonstrated that patients with depression show abnormal changes in the cerebellum [53,54,55,56]. Pillay et al. reported that patients with depression showed a volume reduction in the cerebellum [54]. Using fMRI, Liu et al. and Guo et al. found decreased regional homogeneity (ReHo) in depression patient groups compared with NCs [55,56]. Additionally, the reciprocal connections linking the cerebellum with brainstem areas involve neurotransmitters implicated in mood regulation, including serotonin, norepinephrine, and dopamine [57]. The degeneration of the dopaminergic pathway, a hallmark of PD [8], may lead to the genesis of increased activity in the cerebellum. Our study provides evidence for the involvement of cerebellar abnormality in depressed PD patients.
Additionally, by comparing ALFF maps between non-depressed PD patients and NCs, our study also investigated PD-related pathophysiology and found that the altered activity in PD was mainly focused on the basal ganglia (including the putamen and caudate) and the prefrontal cortex. Findings from previous studies suggest that the basal ganglia play an important role in cortico-subcortical circuits, including motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, and anterior cingulate circuits [58,59]. PD is a movement disorder characterized by the triad of bradykinesia, tremor at rest, and muscular rigidity [60,61], which mainly result from varying forms of abnormally patterned activity throughout the motor circuit [62]. Similar to our study, some recent research using ALFF also found abnormal activity in PD patients mainly in the prefrontal cortex and motor cortex, including the SMA, the mesial prefrontal cortex, and the middle frontal cortex [63,64]. Combining these previous findings with our current study further supports the view that PD is associated with abnormal changes in the motor circuit. It has recently been reported that in-scanner head motion can influence analysis results even when traditional realignment is performed [65,66]. In our study, to control the impact of head motion, we not only made every effort to reduce its occurrence in the scanner and excluded subjects whose translation or rotation parameters exceeded ±2 mm or 2° during the whole fMRI scan, but also removed the influence of head motion using linear regression with REST software (http://resting-fmri.sourceforge.net) [26] before calculating ALFF. In addition, following previous studies, the mean relative displacement was used to measure subjects' head motion in the scanner [65,66].
A two-sample t-test was then used to test differences in head motion between groups, and no significant differences were found (depressed PD patients vs. non-depressed PD patients: t = 0.435, p = 0.666; non-depressed PD patients vs. NCs: t = 0.361, p = 0.720; depressed PD patients vs. NCs: t = 0.795, p = 0.432). No significant correlations were found between ALFF and mean relative displacement (ACC: r = −0.114, p = 0.662; MPFC: r = −0.017, p = 0.947; DLPFC: r = 0.223, p = 0.39). These findings suggest that the significant group differences in our study are unlikely to be related to head motion. However, further work is needed to explore this issue.
In summary, our study used ALFF to examine resting-state alterations between depressed and non-depressed PD patients and found abnormal neural activity in several brain areas associated with the prefrontal-limbic network. Our study not only advances knowledge of depression in PD but also provides new insight into the neural mechanism underlying the high rate of depression in PD patients.
Evolved solar systems in Praesepe
We have obtained near-IR photometry for the 11 Praesepe white dwarfs to search for an excess indicative of a dusty debris disk. All the white dwarfs are in the DAZ temperature regime; however, we find no indication of a disk around any of them. We have, however, determined that the radial velocity variable white dwarf WD0837+185 could have an unresolved T8 dwarf companion that would not be seen as a near-IR excess.
INTRODUCTION
There are ≈10 dusty debris disks known to exist around white dwarfs. The disks are found around relatively cool white dwarfs (9,000 K < T < 22,000 K) whose atmospheres are polluted with heavy elements (the so-called DAZ white dwarfs). The disks provide the obvious reservoir from which the white dwarf accretes this material, which would otherwise sink from the atmosphere on a timescale of days. DAZs are identified through high-resolution, high-S/N optical spectra in which lines of Ca, Si, and Fe can be detected. [1] estimated that 20-25 per cent of single DA white dwarfs show Ca II K lines, indicating they are DAZ. This statistic has been called into question, however, by [2], who studied 478 DA white dwarfs with 10,000 K ≤ Teff ≤ 30,000 K and found 24 DAZs, 6 of which had been discovered by [1]. This puts the fraction of DA white dwarfs that are DAZ at 0.5 per cent. It was suggested by [2] that this discrepancy arose because the [1] sample was mainly of objects with Teff < 10,000 K, where the Ca absorption lines at lower abundances are easiest to detect, so their sample was biased towards the DAZs.
[3] studied a sample of 37 DAZ white dwarfs using IRTF and Spitzer, of which 7 had dusty debris disks. They tentatively estimate the fraction of DAZs that harbour disks at ≈20 per cent, although this is obviously based on a small sample.
Open star clusters are ideal places in which to search for white dwarfs with disks, as all cluster members have a known age. Therefore we can calculate the cooling age of any white dwarf and hence the mass of the progenitor star. We recently investigated the white dwarf members of the moderately rich nearby Praesepe open cluster, measuring their effective temperatures and gravities [4,5]. We identified that WD0837+218 has a radial velocity inconsistent with cluster membership, but have included it in the sample here for completeness. All eleven white dwarfs have temperatures within the DAZ regime, although some of the fits should be treated with scepticism as the temperature fitting can be unreliable. The above statistics suggest we should find 0-2 DAZs in our sample. At a distance of 177 (+10.3/−9.2) pc (as determined from Hipparcos measurements, [6]), Praesepe is one of the closest star clusters. It is slightly metal rich with respect to the Sun ([Fe/H] = +0.11, [7]). Indeed, as both the metallicity and the kinematics of Praesepe are similar to those of the Hyades, the former is often touted as a member of the Hyades moving group and is therefore assumed to have an age comparable to the latter, τ = 625±50 Myr (e.g. [8]). We note that this age for the Hyades was derived by comparing model isochrones generated from slightly metal-enhanced (Z = 0.024) stellar models, which included moderate convective overshooting, to the colors and magnitudes of a sample of cluster members selected using Hipparcos astrometric data [9].
ACQUISITION AND REDUCTION OF DATA
The data were acquired between 6/12/08 and 24/01/09 using the United Kingdom Infrared Telescope (UKIRT) and the UKIRT Fast Track Imager (UFTI).
The data were taken with total exposure times of 3000 s in the K band, 1200 s in the H band, and 600 s in the J band. All images were taken using a 5-point jitter pattern with 60 s exposures. A standard star from the UKIRT faint standards (Casali & Hawarden, 1992: UKIRT Newsletter, 4, 33) was also observed for each observation, with a total time of 15 s in the H and K bands (5 × 3 s exposures) and 40 s in the J band (5 × 8 s exposures).
The data were reduced using the STARLINK based ORAC-DR pipeline with the recipe JITTER_SELF_FLAT which performs the dark correction and creates and applies a sky flat field to the images before mosaicing them. The aperture for the standard star was set to 5 times the FWHM, and an aperture correction was derived using bright stars in the target field. This was performed using the IRAF DIGIPHOT package QPHOT. The photometry can be seen in Table 1.
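The aperture-correction step can be illustrated with a toy example: measure a synthetic Gaussian "star" through a small science aperture and through the 5 × FWHM aperture used for the standard star, and take the magnitude difference as the correction. All numbers here (FWHM, aperture radii, image size) are invented for the illustration and do not reproduce the QPHOT procedure.

```python
import numpy as np

def aperture_flux(img, x0, y0, r):
    """Sum the pixel values inside a circular aperture of radius r pixels."""
    yy, xx = np.indices(img.shape)
    inside = (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
    return float(img[inside].sum())

# Synthetic Gaussian star with FWHM = 4 px centred in a 101x101 frame
fwhm = 4.0
sigma = fwhm / 2.355                                # FWHM = 2.355 * sigma for a Gaussian
yy, xx = np.indices((101, 101))
star = np.exp(-((xx - 50.0) ** 2 + (yy - 50.0) ** 2) / (2 * sigma ** 2))

f_small = aperture_flux(star, 50, 50, r=fwhm)       # small science aperture
f_big = aperture_flux(star, 50, 50, r=5 * fwhm)     # 5 x FWHM, as for the standard
ap_corr = -2.5 * np.log10(f_small / f_big)          # magnitude correction to apply
```

In practice the correction is derived from bright, isolated stars in the target field and added to the small-aperture magnitudes of faint sources such as the white dwarfs, tying them to the standard-star photometric system.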
RESULTS
Using T eff and log g from [4], we generated WD models using TLUSTY and SYNSPEC that extend from 0.3 to 2.5 microns, covering the Sloan Digital Sky Survey u, g, r, i, z and the near-IR photometry range.
All the white dwarfs in this sample have SDSS photometry except for WD0836+199, which is too close to a nearby bright star, as discussed in [4]. The SDSS g-band magnitude for this star has been estimated using its T eff and log g. For each object, the model was normalized to the i band, as using g or r led to underpredictions of the JHK photometry due to several deep absorption lines in the g band and Hα in the r band.
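The i-band normalisation is a single scale factor applied to the model SED; a sketch with hypothetical fluxes (the significance check at the end is our illustration of how an excess would be flagged, not the paper's exact procedure):

```python
# Sketch of the normalisation step: scale the model SED by one factor so
# its synthetic i-band flux matches the observed SDSS i flux, then compare
# the scaled model with the near-IR photometry. All flux values below are
# hypothetical placeholders in arbitrary units.

def normalise_to_band(model_band_fluxes, model_i_flux, observed_i_flux):
    scale = observed_i_flux / model_i_flux
    return {band: f * scale for band, f in model_band_fluxes.items()}

model = {"J": 0.50, "H": 0.32, "K": 0.20}
scaled = normalise_to_band(model, model_i_flux=1.6, observed_i_flux=0.8)

def excess_sigma(observed_flux, model_flux, flux_err):
    """Significance of a photometric excess above the scaled model."""
    return (observed_flux - model_flux) / flux_err

k_sig = excess_sigma(observed_flux=0.13, model_flux=scaled["K"], flux_err=0.01)
```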
Some of these white dwarfs also have Z, Y , J, H and K from the UKIRT Infrared Sky Survey (UKIDSS) Galactic Cluster Survey (GCS) DR6 and these have also been included where relevant.
None of the Praesepe white dwarfs possesses a detectable debris disk except for WD0837+199, which shows a clear near-IR excess in the UKIDSS H and K bands (Figure 1, left panel) and also in Spitzer IRAC photometry at 4.5 and 8 microns [10]. However, our deeper UFTI images clearly show a nearby, red galaxy (Figure 1, right panel), implying this excess is unlikely to be due to the white dwarf. One other object of note is WD0837+185, a radial velocity variable. Our near-infrared photometry shows no excess, although Figure 2 shows that an unresolved T8 brown dwarf companion (M ≈ 25 M Jup from the RV curve) could be hidden by the white dwarf. However, we require further data before we can draw any firm conclusions.

[3] suggest that up to 25 per cent of externally polluted DAZ white dwarfs show an infrared excess indicative of a dust disk, and in the absence of prior knowledge of metallicity, [11] expect 1-3 per cent of all single white dwarfs with cooling ages less than ≈0.5 Gyr to possess dust disks. Hence, it is not necessarily surprising that we have not detected a dust disk in Praesepe. However, we are able to place constraints on the presence of substellar companions. These white dwarfs have evolved from late B-type main sequence stars (2.9-3.5 M ⊙ [4]), spectral types that are not observed in radial velocity searches for substellar and planetary companions, and we have determined that a T8 brown dwarf cannot be detected in the UKIDSS and UFTI observations of WD0837+185. Such a brown dwarf has a mass of 25 M Jup at 625 Myr. Hence, these limits can be combined with the results of radial velocity searches for substellar companions to place limits on the formation of brown dwarfs as binary companions to late B-stars.
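Why an unresolved T8 companion can hide in the photometry: the combined magnitude of the pair barely differs from that of the white dwarf alone. A sketch with hypothetical K-band magnitudes and photometric error:

```python
import math

# An unresolved white dwarf + brown dwarf pair: if the companion is faint
# enough, the brightening it causes is smaller than the photometric error.
# All magnitudes below are hypothetical illustration values.

def combined_mag(m1, m2):
    """Magnitude of two unresolved point sources."""
    flux = 10.0 ** (-0.4 * m1) + 10.0 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

wd_k, bd_k, phot_err = 17.5, 21.5, 0.05     # hypothetical K magnitudes
delta = combined_mag(wd_k, bd_k) - wd_k     # brightening from companion (< 0)
hidden = abs(delta) < phot_err              # companion lost in the errors
```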
Zebra rocks: compaction waves create ore deposits
Nature has a range of distinct mechanisms that cause initially heterogeneous systems to break their symmetry and form patterns. One of these patterns is zebra dolomite, which frequently hosts economically important base-metal mineralization. A consistent generic model for the genesis of these periodically banded rocks is still lacking. In this contribution, we present for the first time a fully consistent mathematical model for the genesis of the pattern by coupling the reactive fluid-solid system with hydromechanics. We show that visual banding develops at a given stress and host-rock permeability, indicating that the wavelength and occurrence of the pattern may be predictable for natural settings. This finding offers the exciting possibility of estimating the conditions of formation of known deposits as well as forecasting potential exploration targets.
Banded, striped or wave-like patterns are very common in natural systems and can develop spontaneously in chemical reactions. They are found on animal skins and shells 1 as well as in a variety of fluids and solids. The genesis of such patterns is linked to the formation of waves, in a system far from equilibrium. For example, a reaction-diffusion process develops waves, where a fast reaction precipitates a band of a mineral phase while diffusion depletes the surroundings in the respective reactants. The repetition of this process leads to the formation of several bands with a distinct spacing, an example of so-called self-organization. One of the fundamental works that was published on stationary waves or bands, goes back to Alan Turing 2 in 1952. He introduced a reaction-diffusion system in which a pattern (Turing pattern) develops as a result of instabilities in the underlying reaction.
The formation of stationary wave-like patterns is very common in geosystems 3 . Examples are compositional layering in igneous or metamorphic rocks, layering during sedimentation, chemical banding including the banded iron formation 4 , Liesegang rings on fracture surfaces 5 and in ore deposits 6 , as well as layering in fault and shear zones 7 . The studies of such nonlinear dynamic systems in geosciences gave rise to the concept of geochemical self-organisation 8,9 in which patterns develop spontaneously out of initially unordered systems. Recent advances include stress-induced effects producing not only compositional bands but also a layering in porosity 10,11 .
A striking example of waves in rocks is the zebra texture, a pattern that can be found in a variety of rock types ranging from claystones 12 , siderite- 13,14 and sphalerite mineralization 15 to hydrothermal dolomite formations [15][16][17][18][19][20][21][22][23] . In this work, we focus on the latter case, in which the texture of the periodic banded dolomites (zebra dolomites) consists of alternating dark and light bands (Fig. 1). The zebra texture in dolostones is frequently associated with base metal deposits of the Mississippi Valley-Type (MVT) 24 , whereby the banding predates the ore precipitation 25,26 .
What the underlying processes of pattern formation in zebra dolomites are is still debatable. Hypotheses on their genesis vary from the development of a fracture network 18 ; opening of bedding/cleavage planes 27 ; the development of a near-horizontal set of microfractures, along which dolomite precipitates 25 ; pre-existing sedimentary partings 21 ; sedimentary structures such as corals 25 ; displacive vein growth 19,28 ; to a form of geochemical self-organization 16 . In most of the areas where zebra dolomites are found, indications of an over-pressurized hydrothermal system [25][26][27] can be observed, and zebra layers frequently form parallel or at a low angle to the bedding/foliation 18,22,27 .
The purpose of this communication is to present a physically and chemically coherent model of zebra formation in a stressed sedimentary basin with evolving fluid pressure. Our model shows that the formation of rhythmically banded dolomites is the result of compaction instabilities that arise during a reaction-diffusion process in a system under applied stress, and that these instabilities can be mathematically described by periodic waves (cnoidal waves, see Fig. 2). Our 1D model merges all existing hypotheses and explains the typical structural features of the zebra texture in dolomites. Moreover, it is able to predict under what conditions they form.

Zebras in mineralized dolostones. The MVT is a mineralization type that is typically hosted in carbonate formations of sedimentary basins that often contain dolomite 29 . While the lithology is relatively consistent, the orogenic type in which these deposits form varies (collisional, Andean or transpressional) 30 . The tectonic regime is therefore unlikely to be a first-order control of MVT mineralization or of the zebra pattern formation. However, a spatial and temporal relation to orogenic foreland basin development has been reported for the mineralization 31 .
Field observations and the analysis of hand specimens show that a high variation in spacing and thickness of the bands exists on the outcrop scale. The distance between the centres of two light layers can vary between 2 mm and 10 cm (Fig. 1a). Laterally, the bands can extend as far as tens of meters, and they can merge, forming dislocation- or cross-bedding-like patterns. The bands also exist as isolated patches of layers confined by uniform dark dolomite (Fig. 1a). The dark matrix dolomite (Do Ia) is thought to originate from the replacement of the initial limestone 16,17,19,20,28 and can be regarded as the host rock of the zebra dolomite. This dark dolomite is still lighter than the dark dolomite bands (Do Ib) that are located in between the light bands ( Fig. 1b,d). On the hand-specimen scale, the light layers (Do II) display a median line (m) along which a vuggy porosity (v) is visible (Fig. 1c). In some areas, the centres of the light bands are filled with a late carbonate phase, which can be distinguished by the lighter colour of the material (Fig. 1c). The variability of the band spacing and thickness can be assessed by comparing Fig. 1c,d: the spacing in Fig. 1c is about 1 cm whereas it is only about 2-3 mm in Fig. 1d. On the micro scale, a closer look at a single layer under cross-polarized light (Fig. 1e) reveals that the crystal size differs by several orders of magnitude between the fine-grained (Do Ib) and the coarse-grained layer (Do II). The fine-grained dolomite (in the dark bands) contains a large amount of impurities, which are partly clustered at the grain boundaries, whereas the light layers are almost impurity free. In addition, the dolomite crystals in the coarse-grained light layers are elongated towards the central line, along which the void-filling carbonate cement is frequently observed 16,25,26,32 .
The samples analysed in this study were collected at the San Vicente mine (Peru) that represents one of the world's largest ore deposits of the MVT 31 . The mine is located in the Subandean fold-and-thrust belt of the eastern Andean cordillera located about 300 km east of Lima. The hydrothermal mineralization consists primarily of the sulphide ore minerals galena and sphalerite. The ore bodies are strata-bound and hosted in Triassic/Jurassic platform carbonates (Pucara Group) in the western flank of the NW-SE striking Pucara basin. Faults likely provided pathways for the dolomitizing as well as mineralizing fluids 33,34 . The strata that host the mineralization are over-thrusted by younger plutonic rocks (Tamra Granodiorite). The setting of the San Vicente mine shows several of the typical features of environments hosting MVT mineralization 31 and can therefore be regarded as a representative study area for the formation of banded dolomites and their relationship to mineralization.
Theoretical description of pattern formation. In this section, we describe the development of the bands in the zebra dolomites by a phase separation process based on the Cahn-Hilliard and Allen-Cahn reaction-diffusion equations coupled with hydromechanics, to mimic the pattern formation in a stressed sedimentary basin that undergoes diagenesis and builds up high fluid pressures. A reaction that can explain the virtually impurity-free dolomite in the light layers and the accumulation of impurities between these layers is the replacement of the primary dolomite by a secondary dolomite. While the dark dolomite (Do Ia & Do Ib) formed by the replacement of limestone will still contain impurities initially present in the rock, the replacement of the primary by the secondary dolomite (Do II) will segregate the impurities from the solid into the fluid (phase separation). Such a replacement reaction can be described as a process of coupled dissolution-precipitation 35 during which the impurity-rich dolomite is locally dissolved and replaced by impurity-free dolomite, thus leaving the impurities in the fluid. In the actual rock, this process takes place during grain growth, in which impurities are collected in grain boundaries across which the dissolution-precipitation takes place, and newly grown parts of grains become impurity free. In the natural samples, this process is indicated by the accumulation of impurities on grain boundaries in the dark layers.

(Displaced figure caption: With increasing depth, the overpressure rises and the permeability decreases. However, upwelling fluids that are confined within the host dolomite by an overlying impermeable layer (e.g. a shale cap) could also generate the overpressure. Such structures are considered to indicate good potential for MVT deposits 56 .)

In addition to that, grains located at the transition between dark and light
regions exhibit a sharp transition from an impurity-rich nucleus within the dark bands to virtually impurity-free crystals that grow towards the centre of the light layer (see Figure S4 in the supplementary material). The accumulation of impurities in the fluid can be regarded as similar to industrial zone refining, during which impurity-free crystalline materials are produced by a moving melting-freezing front 36 . That impurity-rich layers in rocks can be the result of a similar mechanism had already been suggested by Krug et al. 13 . Effective cleansing and impurity redistribution by dissolution-precipitation can be achieved in rocks during grain growth, as pointed out by Jessell et al. 37 . The basis of our model is a generic phase separation process that can be described as AB s → A s + B f . In the case of the zebra dolomites, AB s represents the initial dark impurity-rich dolomite (dolomite I) that is of replacive origin. The phase separation is driven by a fluid that accumulates impurities during dolomite-dolomite replacement (B f ) and leaves behind an impurity-depleted dolomite phase (A s ) after the reaction. The dark layers of the zebra texture are formed during the replacement of calcite by dolomite 19,20,25,26,28,32 , with an example shown in the supplementary material (Figure S4). The replacive origin of the primary impurity-rich dolomite (AB s ) is shown by the preservation of initial sedimentary features (ooids) in the dark bands 19 . The second dolomite generation of the light zebra bands (A s ) appears as a coarse-crystalline impurity-free phase, indicating an inverse correlation between final grain size and impurity density. Typical MVT fluids 19 that are acidic and out of equilibrium with respect to the carbonate host rock will enhance the recrystallization rates. This could explain why zebra dolomites are predominantly encountered within MVT districts. An Arrhenius law defines the mass production rate during grain coarsening.
The rate of the reaction (r) can be written in a generic form as r = K 0 exp(−(E + PV)/(RT)). In this expression, K 0 is a material-specific rate constant, E is the activation energy of the process, P is the pressure related to the volume change process, V the activation volume of the reaction (which can be grain size dependent), R is the ideal gas constant and T is the temperature. Grain coarsening, as well as the dissolution of minerals in rocks, is a stress- and grain size-dependent process 38 . At elevated pressures, the dissolution rate will be higher than the precipitation rate, and therefore we can consider the precipitation as the rate-controlling mechanism of a coupled dissolution-precipitation reaction. The formulation of the model is based on a mixed mass balance expression for the solid-fluid system, derived from initially defined partial densities for the solid (ρ s ) and the liquid (ρ f ) phase, respectively. All the derivations are detailed in the supplementary material. The model represents an extension of the classical compaction bands theory 39 to a viscous non-linear rheology, similar to Veveakis et al. 40,41 and Weinberg et al. 42 . We apply the equation of state to the expression of the mixed mass balance and assume isothermal conditions (dT = 0). We can further lower the complexity by reducing the problem to 1D and considering the steady-state limit (d/dt = 0). We then derive equation 3. In this expression, P denotes the normalised over-pressure (with y 0 being the reference length scale) in a coordinate system moving with respect to the direction of compaction, Pe is the Peclet number and m is the stress exponent. The remaining identities λ, μ and α in equations 4-6 include the hydromechanical parameters (Table S1 in the supplementary material), whose values are stated in the caption of Fig. 3 and Table S2 (supplementary). Additional information on the development of equation 3 can be found in the supplementary materials.
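The rate law described above is the standard pressure-dependent Arrhenius form, r = K0 · exp(−(E + PV)/(RT)). A minimal numerical sketch, with placeholder parameter values rather than the calibrated values of Table S2:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def reaction_rate(k0, e_act, pressure, act_volume, temperature):
    """Pressure-dependent Arrhenius rate, r = K0 * exp(-(E + P*V)/(R*T)).

    All parameter values used below are illustrative placeholders.
    """
    return k0 * math.exp(-(e_act + pressure * act_volume) / (R * temperature))

# With a positive activation volume, raising the pressure term P*V in the
# exponent lowers the rate returned by this expression:
r_low_p = reaction_rate(1.0, 8.0e4, 1.0e7, 2.0e-5, 450.0)
r_high_p = reaction_rate(1.0, 8.0e4, 5.0e7, 2.0e-5, 450.0)
```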
The solution of expression 3 can include non-linear periodic waves given by an elliptic function. The power-law exponent m in equation 3 dictates whether the solution is the Jacobi (odd numbers) or the Weierstrass function (even numbers) 11 . How the two are related as elliptic functions was pointed out by Abramowitz et al. 42 . The wave peaks appear as equidistantly spaced stress singularities (elevated fluid pressure) where dissolution takes place, thus correlating with high-permeability channels 40 (for more details see Alevizos et al. 43 ). The genesis of these hydromechanical instabilities represents the response of the solid-fluid system to compaction; the instabilities need not appear simultaneously, but are closely related to the solution of the wave equation. As discussed in the supplementary material, the solution depends only very weakly on the value of Pe for the selected range of values of the parameters listed in Table S2 (supplementary). It can therefore be concluded that the solution depends mainly on the parameters that include permeability and mean stress.
The dependence of the solution to equation 3 on stress (depth) and permeability is shown in Figs 2 and 3. The number of wave peaks grows with increasing depth and/or decreasing permeability. This consequently means that the amount of compaction instabilities (the number of light bands) or the amount of waves that occur in a fluid saturated rock, as described by equation 3, is a function of permeability and vertical stress.
Scaling to field observations. The value of λ in equation 3 is critical, as it includes the hydromechanical parameters, fluid viscosity and permeability, as well as the reference values for strain, stress and space. We further note that λ has the highest impact on the solution of equation 3. In order to invert for the spacing between the light layers, we fitted λ to the number of wave peaks (NB) for constant values of μ and α. By inserting appropriate values, it can be shown that the values of the latter two dimensionless parameters are relatively low and have only little influence on the result. We obtained a square-root dependency for λ = NB C, where C is a constant weakly depending on the value of α (see Figure S3 in the supplementary material). For typical values, we can accept C = 0.27. This scaling for the number of bands leads to a second scaling for their spacing (h), related to the compaction length 40 found in the classical compaction bands theory 39 . With this value, it is possible to obtain a relationship for the spacing (h) and directly compare the prediction of our model to field data. The detailed description of the inversion routine can be accessed in the supplementary material section. We applied fixed values of the reference strain rate (ε 0 ), the reference stress (σ 0 ) and of the material parameters such as density, viscosity, dissolution rate and activation energy (see Table S2 in the supplementary material). It is important to note that the experimental determination of the energy parameter can reach uncertainties of 200-300% 44 . As this value is included in the dimensionless parameter α, we performed a sensitivity analysis to assess possible influences (Figure S3 in the supplementary material). For a combination of appropriate parameters, we obtain Fig. 3, where the light grey shaded area indicates realistic values of stress and permeability for buried dolomites.
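The inversion described above can be sketched arithmetically. Here we take the fitted relation λ = NB · C with C ≈ 0.27 at face value; the outcrop length and band count are hypothetical, and the full inversion uses the parameter set of Table S2:

```python
# Sketch of the band-count scaling: recover lambda from an observed band
# count NB via lambda = NB * C (C ≈ 0.27, taken at face value from the
# fit), and convert a band count over an outcrop length into a mean
# spacing. All field values below are hypothetical.

C = 0.27

def lam_from_band_count(nb):
    return nb * C

def mean_spacing_m(outcrop_length_m, nb):
    """Mean centre-to-centre spacing h of the light bands."""
    return outcrop_length_m / nb

lam = lam_from_band_count(20)                       # 20 light bands observed
h = mean_spacing_m(outcrop_length_m=0.20, nb=20)    # 0.2 m section -> 1 cm
```

A 20 cm section with 20 bands gives h = 1 cm, comparable to the spacing seen in Fig. 1c.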
The upper and lower dark grey shaded areas indicate the regions in which no banding will be observed, either because equation 3 remains stable without periodic waves in the solution (stable area) or the distance between the layers is too narrow (critical spacing) to produce macroscopically visible bands as the spacing would be on the order of the grain size (~100 μm).
We can now quantify the relationship between permeability, overpressure and the density of bands. The overpressure in San Vicente is related to a burial depth of about 3 km which is the maximum burial depth of the strata hosting the zebra dolomites 17 . The yield stress can be defined as the point at which the material permanently deforms by 0.2% 45 . This state should be reached at relatively low stresses for dolomites and therefore the overpressure (σ'-σ' y ) can be assumed to show only moderate fluctuation around the maximum pressure. The remaining critical parameter is the permeability, which is related to the porosity of the rock 46 . A variation of this parameter can explain the difference in band spacing and thickness (Fig. 1a). Additionally, the spatial localization of the zebra texture, which is observable in Fig. 1b,d, can be explained by local permeability contrasts.
The development of the compaction bands out of an initially heterogeneous rock is the primary stage of the pattern formation. The compaction bands have a higher permeability compared to the surrounding host rock and fluid flow as well as dissolution/precipitation processes will be focused inside these channels due to elevated fluid pressure. We hypothesise that, due to the development of the compaction instabilities, a local recrystallization takes place. During this process, the pre-existing impurities in the dolomite are washed out and accumulate outside of the channels (Fig. 1b). This is in good agreement with the findings of 25,27 who interpreted the light bands as dissolution or recrystallization features that develop during focused fluid flow. However, our model does not require pulsed expulsion of fluids 27 or the development of en-echelon fractures 22,25,27 . The grain growth process will affect the whole rock volume but will be favoured in areas of low impurity densities 47 , and therefore inside the compaction bands. Grain boundary migration in systems that are comprised of a layered distribution of second-phase material can produce structures which are very similar to the zebra dolomites 48 . We argue that a fracture can also develop in the central part of the impurity depleted coarsening layers (Fig. 1c), because the fluid pressure is at its highest in the central part of the compaction band and the breaking strength of the material decreases with increasing grain size 49 . This can quantitatively be described by the Hall-Petch effect 50,51 , that gives a relationship between yield stress and crystal size. A developing crack is accompanied by a stress drop in the solid around the fracture. As grain growth is sensitive to stress, the crystals will tend to elongate towards this crack (Fig. 1e) 52 . Even without material failure, dissolution will occur in the central part of the coarse layer in response to the elevated fluid pressure. 
Such dissolution features were reported for several zebra dolomite occurrences 16,25,26,32 and crack-like structures filled with a late carbonate phase are shown in Fig. 1c.
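The grain-size dependence of the breaking strength invoked above (the Hall-Petch relation, σ_y = σ_0 + k_y/√d) can be sketched as follows; the σ_0 and k_y values are illustrative, not calibrated for dolomite:

```python
import math

# Hall-Petch sketch: the yield stress falls as grains coarsen,
#   sigma_y = sigma_0 + k_y / sqrt(d),
# so the coarse-grained light bands fracture first. Parameter values
# below are illustrative placeholders.

def hall_petch_mpa(sigma0_mpa, ky_mpa_sqrt_m, grain_size_m):
    return sigma0_mpa + ky_mpa_sqrt_m / math.sqrt(grain_size_m)

fine = hall_petch_mpa(50.0, 0.1, 100e-6)   # ~100 micron grains (dark bands)
coarse = hall_petch_mpa(50.0, 0.1, 4e-3)   # ~4 mm grains (light bands)
```

With these placeholder values the coarse-grained layer yields at a noticeably lower stress than the fine-grained matrix, which is the direction of the effect the model relies on.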
Towards an integrative model of zebra dolomite formation.
We presented a generic model of zebra dolomite formation (Fig. 4) based on the compaction band theory 11,40 coupled with a reaction-diffusion model (see also Alevizos et al. 43 for more details). In detail, our model is based on local dissolution-precipitation dynamics and, in contrast to other theories 19,20,28 , does not require displacive vein growth for the bands' equidistant spacing to occur. Our model also does not rely on initial sedimentary partings 21 or the development of fracture networks 15 (see supplementary material for further discussion).

(Displaced Fig. 4 caption, partial: … is related to the localisation of high-porosity and high-pressure channels. Recrystallization is then focused in equidistant channels (Do II) whereas the second phase is washed out and accumulates between the channels (3.1). The subsequent grain growth is focused in the areas of low impurity densities. The high pressure then leads to fracturing in the high-pressure channels of the compaction bands as the yield stress is successively lowered during grain coarsening (Hall-Petch relation 50,51 ). The highest pressures occur in the centre of the compaction instabilities, leading to fracturing and subsequent dissolution along the median line of the coarse-grained layers (3.2). Grain growth continues, with the grains now elongating towards the central line where stress is depleted due to the fracturing (3.3). The resulting texture is periodically layered (4). If a mineralizing fluid percolates into this structure, the sulphide (Su) will precipitate along the vuggy median line (5).)
We were able to show that the spacing of bands is a function of permeability and/or stress variations. Our approach is capable of integrating the findings of other works 16,[18][19][20]22,[25][26][27][28]46 and it successfully explains all the specific features of the pattern. The layered distribution of impurities is caused by the focused recrystallization inside the compaction bands (Fig. 4.3) and the grain size variation is a direct result of the recrystallization 48 . The elongated shape of the crystals of the light layers is caused by the dissolution and/or fracturing along the central part of the layer (Fig. 4.3.2) as a function of high fluid pressures and large grain sizes. A vuggy porosity along the central line will remain and a late carbonate phase will precipitate in the median line (Fig. 4.3.3). The texture is important for ore mineralization. If a mineralizing fluid percolates into the structure, the sulphides will start precipitating along the median line (Fig. 4.5), which is often observed in the samples.
In line with Turing 2 , we propose a general process of pattern formation based on a reaction-diffusion equation extended by hydromechanics. The basis of our model is the Cahn-Hilliard equation, which models phase separation: domains develop that each contain a large amount of one of the two phases, a process that is often accompanied by pattern formation 53 . In our scenario, the accumulated phase is the impurities in the dark dolomite layers. In contrast to previously published theories on zebra dolomite formation, we put forward a mathematical description of the reactive solid-fluid system coupled with hydromechanics. The predictions of our model can be scaled to field observations, and we propose that it can represent a new tool for field geologists to estimate rheological parameters such as permeability and stress. Although for now it is solely a mathematical description, we maintain that our approach to pattern formation in dolomites represents one of nature's general processes of producing periodic wave-like patterns in natural systems. Furthermore, the inversion routine applied in this communication demonstrates a relationship between the spacing of geological structures such as bands or layers and the fluid pressure and permeability during pattern formation. Access to these parameters is of high interest for scientific research as well as for mineral or hydrocarbon exploration and extraction. In addition, the mathematical model predicts that the zebra layers form perpendicular to the main stress direction, so it is also possible to determine the orientation of the stress ellipsoid during pattern formation. Thus, the model presented in this study represents the first step towards the development of a new tool for geologists that will help to assess the paleo-fluid pressure, paleo-stress and paleo-permeability of geological formations hosting banded or layered structures.
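The phase-separation mechanism invoked above can be illustrated with a minimal 1-D Cahn-Hilliard integration; the grid size, mobility, gradient coefficient and time step are illustrative numerical choices, not the calibrated model of this paper:

```python
import random

# Minimal explicit 1-D Cahn-Hilliard sketch: an initially noisy field c
# (impurity concentration) spontaneously segregates into impurity-rich
# (c -> +1) and impurity-poor (c -> -1) domains. Parameters are
# illustrative; periodic boundaries.

random.seed(0)
N, dx, dt, M, gamma = 64, 1.0, 0.01, 1.0, 1.0
c = [0.02 * (random.random() - 0.5) for _ in range(N)]  # small noise

def laplacian(f):
    """Periodic second difference."""
    return [(f[i - 1] - 2.0 * f[i] + f[(i + 1) % N]) / dx**2
            for i in range(N)]

for _ in range(5000):
    # Chemical potential mu = c^3 - c - gamma * lap(c); then c_t = M * lap(mu).
    mu = [ci**3 - ci - gamma * li for ci, li in zip(c, laplacian(c))]
    c = [ci + dt * M * li for ci, li in zip(c, laplacian(mu))]
```

After integration the field has separated into near-saturated domains while its mean (total impurity content) is conserved, which is the essential behaviour the zebra model builds on.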
Future exploration of mineral and energy resources will target deposits at greater depth. With increasing depth, feedback mechanisms between mechanical compaction and chemical reaction rates will become more important. A complete understanding of the exact mechanisms that are active in such deeper environments will become crucial for the successful prediction of deposit locations and their exploration. In addition, the extraction of resources from deeper hosts may trigger feedback mechanisms that to date are not fully understood. Striking examples of such feedback mechanisms are the production-induced compaction in the Ekofisk oil field 54 and the success of gas extraction in the Cooper Basin, Australia 55 .
Identification of PECAM-1 in Solid Tumor Cells and Its Potential Involvement in Tumor Cell Adhesion to Endothelium*
PECAM-1 (CD31/EndoCAM) is an adhesion molecule in the immunoglobulin supergene family that is expressed on endothelial cells, platelets, and some hematopoietic lineage cells. In this paper, using several polyclonal and monoclonal antibodies against PECAM-1, we identified PECAM-1 molecules on human, rat, and murine solid tumor cell lines. Immunocytochemical labeling and flow cytometric analysis using either polyclonal, monoclonal, or Fab portions of the antibodies against PECAM-1 detected a distinct distribution on the tumor cell surface. Immunoblotting revealed proteins ranging from 120 to 130 kDa in tumor cells derived from different species. Immunoprecipitation and subcellular fractionation studies indicated that PECAM-1 is constitutively expressed on the surface of human tumor cells (i.e. colon adenocarcinoma). The specificity of a major polyclonal anti-PECAM-1 used in the current study (i.e. SEW-3) was confirmed by the preabsorption studies. PECAM-1 molecules on tumor cells appear to bear terminal carbohydrate moieties (i.e. sialic acid residues) different from those on platelets, since neuraminidase treatment of tumor cells, unlike platelets, did not result in a mobility shift. Polymerase chain reaction (PCR) analysis of genomic DNA derived from tumor cell lines of different species revealed the presence of the PECAM-1 gene in the genome. The mRNAs of PECAM-1 in tumor cells were detected by reverse transcription-PCR followed by Southern hybridization. Screening of more than 20 human, rat, and murine solid tumor cell lines indicated that PECAM-1 is widely expressed, although the level of expression varies considerably among different cell lines. The expression of the PECAM-1 message in tumor cells was confirmed by Northern blotting. DNA sequencing of the PCR fragment revealed that human tumor cell PECAM-1 matches 100% to the human endothelial cell counterpart.
Finally, it was demonstrated that tumor cell PECAM-1 is involved in mediating tumor cell adhesion to endothelium, as evidenced by the ability of anti-PECAM-1 antibodies to decrease the adhesion of unstimulated tumor cells to microvascular endothelial cells. * This work was supported by National Institutes of Health Grants CA 47115 and CA 29997 (to K. V. H.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
‡ To whom correspondence should be addressed: Dept. of Radiation Oncology, Wayne State University, 431 Chemistry, Detroit, MI 48202. Tel.: 313-577-1018; Fax: 313-577-0798.

Tumor cell interactions with platelets, endothelial cells, and subendothelial matrix are considered essential intermediate steps for the completion of the "metastatic cascade" (Weiss et al., 1988; Honn et al., 1992; Liotta et al., 1986). Tumor cells employ cell surface glycoprotein receptors to achieve these cell-cell and cell-matrix interactions. The organ specificity of tumor metastasis suggests that distinct tumor cell types utilize different repertoires of adhesion receptors (Nicholson, 1988; Pauli et al., 1990; Honn and Tang, 1992). Cell-cell adhesion is a complicated biological process in which various mechanisms and factors are involved. In addition to integrins, which are heterodimeric membrane glycoproteins primarily involved in cell-substrate adhesion, three other major families of proteins are implicated in mediating cell-cell interactions: the immunoglobulin (Ig) superfamily (William and Barclay, 1988), cadherins (Takeichi, 1990), and selectins. Down-regulated cadherins (e.g. E-cadherin) have been implicated in the loss of cell-cell contact and correlated with the invasiveness and metastatic potential of tumor cells (Schipper et al., 1991). Selectins expressed on vascular endothelial cells have been shown to mediate tumor cell adhesion by recognizing carbohydrate ligands (SLe^x or SLe^a antigens) on the tumor cell surface (Majuri et al., 1992).
The Ig supergene family comprises cell surface adhesion molecules that possess immunoglobulin-like folds in the extracellular domain. PECAM-1 (for platelet endothelial cell adhesion molecule-1; also called CD31 and EndoCAM) is a newly characterized adhesion molecule that belongs to the Ig superfamily and is expressed on platelets, granulocytes, monocytes, lymphocytes, macrophages, and endothelial cells, as well as certain tumor cell lines such as lymphoma and leukemic cells (Muller et al., 1989; Knapp et al., 1989; Newman et al., 1990; Stockinger et al., 1990; Simmons et al., 1990; Albelda et al., 1991; Zehnder et al., 1992). Various experiments have demonstrated that PECAM-1 is a 120-130-kDa transmembrane glycoprotein that carries a significant amount (about 40% of the molecular mass) of carbohydrate moieties (Newman et al., 1990). Molecular cloning and nucleotide sequencing of human endothelial cell PECAM-1 have revealed an open reading frame of 2,114 bp that encodes a 738-amino acid protein with six extracellular C-2 type Ig-like domains (Newman et al., 1990; Albelda et al., 1991), although some minor differences in the molecular structure have been noticed by other groups (Stockinger et al., 1990; Simmons et al., 1990; Zehnder et al., 1992). Functional studies have indicated that PECAM-1 plays an important role in establishing a contiguous endothelial cell monolayer (Albelda et al., 1990). Transfection of a full-length PECAM-1 cDNA into cells (i.e. L cells) that do not endogenously express the molecule induced a Ca2+-dependent homotypic cell-cell aggregation and heterotypic cell-cell adhesion (Albelda et al., 1991), suggesting that PECAM-1 may be involved in a variety of intercellular adhesion processes.
Based on the realization that cell-cell adhesion is an essential intermediate step in tumorigenesis and cancer metastasis, that most Ig family members are involved in tumor cell-host cell interactions, and that PECAM-1 is functional in mediating cell-cell adhesion, we hypothesized that some tumor cells may express PECAM-1 that is involved in tumor cell-tumor cell, as well as tumor cell-platelet-endothelial cell interactions. In the current paper we present biological, biochemical, and molecular evidence that PECAM-1 is expressed in cultured human and rodent solid tumor cells. Furthermore, we will show that PECAM-1 functions in supporting nonstimulated tumor cell adhesion to vascular endothelium in vitro.
MATERIALS AND METHODS
Antibodies—Five antibodies against PECAM-1 were used in the present studies. Polyclonal anti-PECAM-1 antibodies SEW-3 and SEW-16 (IgG) were generated in rabbit using affinity-purified human platelet PECAM-1 as the immunogen. SEW-3 was derived from the same batch of preparation as described previously (Albelda et al., 1991). SEW-16 was raised by once-weekly subdermal injections of 100-µg doses of immunoaffinity-purified PECAM-1 antigen (Newman et al., 1992). The Fab fragment of anti-PECAM-1 was derived by cleaving pAb SEW-3 with papain according to the product manual (Pierce Chemical Co.). Monoclonal anti-PECAM-1, mAb 1.3 (IgG1), was produced by immunizing mice with purified human platelet PECAM-1 protein (Albelda et al., 1991; Newman et al., 1992). Another mAb against PECAM-1, BBA-7, raised using human umbilical vein endothelial cells as the immunogen, was purified by affinity chromatography on protein A-Sepharose and shown to be specific for PECAM-1 (R&D Systems, Minneapolis, MN).
Nonimmune rabbit IgG or nonimmune rabbit serum (Cooper Biochemical, Malvern, PA) and mineral oil-elicited mouse ascites produced from the MOPC tumor cell line (IgG1, κ chain, Sigma) were used as negative (antibody) controls in immunofluorescence studies, adhesion studies, and immunoblotting. Goat whole serum (Sigma) was used as the Fc receptor blocking agent. The secondary antibodies used in the experiments were fluorescein isothiocyanate-conjugated goat anti-mouse or anti-rabbit IgG (ICN Immunologicals, Lisle, IL).
Cell Culture—Mouse microvascular endothelial cells, CD3, were isolated and characterized as described previously (Chopra et al., 1990). Large vessel endothelial cells, RAEC, were derived from rat (Sprague-Dawley) aortic rings (Diglio et al., 1989). These endothelial cells were routinely maintained in Dulbecco's minimal essential medium supplemented with 10% fetal bovine serum (FBS, Life Technologies, Inc.) and various antibiotics (50 µg/ml gentamicin, 100 µg/ml penicillin G, and 2.5 µg/ml amphotericin B). Cells were cultured in a humidified atmosphere with 5% CO2, and the culture media were changed every 48 h. Endothelial cells were passaged with a mixture of EDTA (0.1%) and trypsin (0.05%). All cells used in these experiments were free of mycoplasma infection.
B16 amelanotic melanoma (B16a), rat W256 carcinosarcoma (W256), and Lewis lung carcinoma (3LL) cell lines were obtained from the Division of Cancer Treatment, National Institutes of Health (Frederick, MD) and adapted for cell culture as described previously (Chopra et al., 1988, 1990; Grossi et al., 1988, 1989; Tang et al., 1993b). B16a and 3LL cells were passaged with 2 mM EDTA in syngeneic (C57BL/6J) male mice and cultured in either MEM (Life Technologies, Inc.) supplemented with 5% FBS (for B16a cells) or Dulbecco's modified Eagle's medium supplemented with 10% FBS (for 3LL cells) and antibiotics (see above). 3LL cells were cultured in a humidified atmosphere with 5% CO2. W256 cells were grown in MEM supplemented with 5% FBS and antibiotics and passaged with 2 mM EDTA.
HEL (human erythroleukemia), clone A and DLD-1 (human colon carcinoma; Grossi et al., 1988; Tibbetts et al., 1977), MS751 (human cervical epidermoid carcinoma; metastasis to lymph node), TCCSUP (human primary bladder transitional-cell carcinoma, grade IV; Nayak et al., 1977), ACHN (human renal carcinoma; originally derived from the malignant pleural effusion of a patient with widely metastatic renal adenocarcinoma), SK-HEP-1 (human liver carcinoma; Fogh et al., 1977), and SW900 (human lung squamous carcinoma; Fogh et al., 1977) cells were obtained from the American Type Culture Collection. A series of human melanoma cell lines, WM35, WM115, WM164, WM226-4, WM793, WM983-A, and WM983-B, was kindly provided by Dr. M. Herlyn (The Wistar Institute of Anatomy and Biology). These cell lines have not been extensively characterized in the literature. Human prostate adenocarcinoma Du145 (Stone et al., 1978) and PPC-1 (Brothman et al., 1989), human head and neck squamous carcinoma (SSC-UM), human breast carcinoma (MCF-7; Soule et al., 1973), rat prostate adenocarcinoma (AT-3), and B16F1 and B16F10 murine melanoma cell lines were kindly provided by Drs. Institute, TX), respectively. HEL, MCF-7, and AT-3 cells were cultured in RPMI medium plus 10% FBS. SW900 cells were grown in L-15 medium with 10% FBS, and human melanoma cells of the WM series were cultured in MCDB/L-15 (4:1) supplemented with 2% FBS and 5 µg/ml insulin. All of the remaining tumor cell lines were cultured in either MEM or Dulbecco's modified Eagle's medium containing 10% FBS and passaged with 2 mM EDTA. A summary of the cell lines used in the present study is presented in Table I.

Chemicals and Reagents—Protease inhibitors PMSF, leupeptin, antipain, aprotinin, and chymostatin, and protein standard markers, were obtained from Sigma. The immunoblotting detection kit (ECL system) was bought from Amersham Corp. The peroxidase-anti-peroxidase staining kit was purchased from Biogenex (San Ramon, CA).
The protein kinase C activator TPA and the eicosanoid 12(S)-HETE (12(S)-hydroxyeicosatetraenoic acid) were purchased from Sigma and Cayman Chemical (Ann Arbor, MI), respectively. The RNA ladder (0.24-9.5 kb) and prestained protein standard SDS-7B (26.5-180 kDa) were obtained from Life Technologies, Inc. and Sigma, respectively.
Indirect Immunofluorescence—Cultured B16a, W256, 3LL, clone A, and B16F10 cells were dissociated from the tissue culture flasks with 2 mM EDTA, washed once with MEM, and then fixed with 2% paraformaldehyde in PBS containing 1 mM CaCl2, 1 mM MgCl2, and 5% sucrose for 20 min at room temperature. CD3 endothelial cells were used as the positive control cell line. Immunofluorescent labeling was performed essentially as described previously (Tang et al., 1993a, 1993b). Briefly, for intracellular labeling, cells were permeabilized with HEPES-Triton buffer (20 mM HEPES, pH 7.6, 300 mM sucrose, 50 mM NaCl, 3 mM CaCl2, and 0.5% Triton X-100) for 3 min at room temperature. For surface labeling, cells were not permeabilized. All of the coverslips were incubated with 20% goat whole serum in 4% BSA-containing PBS for 20 min at 37 °C to block nonspecific Fc-binding sites. The primary antibody reaction was performed by incubating coverslips with polyclonal (SEW-3; 30 µg/ml), monoclonal (mAb 1.3; 8 µg/ml), or Fab fragment (30 µg/ml) of anti-PECAM-1, or equivalent antibody controls, for 60 min at 37 °C, followed by washing (4×, PBS). Afterward, the cells were labeled with goat anti-rabbit or goat anti-mouse IgG-fluorescein isothiocyanate (1:200), depending on the primary antibodies used. Coverslips were mounted with glycerol and PBS (9:1) containing 0.1% N-propyl gallate. Phase contrast and immunofluorescence pictures were taken with a Nikon Optiphot microscope. Transmitted light micrographs were taken with a Leitz Orthoplan microscope.
Immunocytochemistry—Subconfluent endothelial cells and tumor cells were cultured for 18 h before being used for peroxidase-anti-peroxidase staining. Cells were fixed and permeabilized as described for immunofluorescence. After washing, the coverslips were incubated with 3% H2O2 at room temperature for 5 min to eliminate endogenous peroxidase activity. Following primary antibody (SEW-3 or mAb 1.3) incubation, coverslips were sequentially incubated with anti-mouse or anti-rabbit IgG (corresponding to the primary antibodies used) and the peroxidase-anti-peroxidase complex for 60 min at 37 °C each. The staining results were revealed by incubating cells with chromogen (AEC) for 15 min at 37 °C.
Platelet Preparation—Human and mouse blood was collected with 3.8% sodium citrate and 4.5% dextrose in 0.9% physiological saline. Platelet-rich plasma was obtained by centrifuging the collected blood at 600 × g for 15 min. This procedure was repeated twice, and the platelet-rich plasma was combined. An appropriate amount of platelet wash buffer (1.6% 0.1 M EDTA in platelet wash) was added to the pooled platelet-rich plasma to prevent platelet aggregation and the release reaction, and the above mixture was centrifuged at 2,000 × g for 15 min to obtain platelets. After washing, the platelet pellet was extracted as described below.
Cell Extraction and Subcellular Fractionation—Endothelial cells (i.e. CD3 and RAEC) and tumor cells (B16a, W256, 3LL, clone A, DLD-1, HEL, and B16F10) were first washed free of media with PBS containing 5 mM PMSF and 1% aprotinin and were then scraped off the culture flasks. The cell pellets, as well as the platelet pellet, were lysed and extracted with TNC lysis buffer (0.01 M Tris-acetate, pH 8.0, 0.5% Nonidet P-40, and 0.5 mM Ca2+) containing a mixture of protease inhibitors (5 mM PMSF, 1 mM leupeptin, 1% aprotinin, and 1 µg/ml each of pepstatin and chymostatin) on ice for 45 min. The whole cell lysates were centrifuged at 14,000 × g for 30 min, and the supernatants were aliquoted and frozen at −70 °C until use. To prepare the membrane fraction (i.e. subcellular fractionation), platelets, clone A, or DLD-1 cells (either dissociated with EDTA or directly scraped off using a rubber policeman) were lysed in ice-cold hypotonic buffer (1 mM NaHCO3, 5 mM MgCl2, 50 mM Tris-HCl, pH 7.5, 0.5 mM EGTA, 1 mM PMSF, 0.2 mM leupeptin, 1 µM aprotinin, and 0.5 µM pepstatin A). The cell lysates were centrifuged at 500 × g for 5 min to remove nuclei and unbroken cells. The supernatant was further centrifuged at 100,000 × g for 90 min at 4 °C. The resulting pellet, i.e. the membrane fraction (Liu et al., 1991), was washed once with the hypotonic buffer, lysed in TNC lysis buffer, and measured for protein concentration (Bradford, 1976). In some experiments, clone A cells and DLD-1 cells treated with the protein kinase C activators TPA (0.1 µM, 15 min) or 12(S)-HETE (0.1 µM, 15 min; Tang et al., 1993a, 1993b; Liu et al., 1991; Grossi et al., 1989) were used for membrane preparation as described above.
Immunoblotting, Immunoprecipitation, and SDS-PAGE—The procedures for Western blotting, immunoprecipitation, and gel running were detailed previously (Tang et al., 1993a, 1993b). Briefly, protein samples (either whole cell lysates or membrane fractions) were dissolved in sample buffer (0.1 M Tris, pH 6.8, 2% SDS, and 40% glycerol) in the presence or absence of 10% 2-mercaptoethanol. In some experiments the whole cell lysates from human platelets, B16a, 3LL, or clone A cells were treated with 1.0 unit/ml of neuraminidase from Clostridium perfringens (Sigma) at pH 5.0 for 30 min, 1, 2, or 4 h. The treatment was terminated by dissolving samples in the sample buffer. In other experiments, aimed at testing the specificity of pAb SEW-3 for PECAM-1, this Ab was preabsorbed with 0, 10, 50, or 100 µg of purified platelet membrane before being used for Western blotting. Samples were boiled for 5 min and analyzed on 7.5% denaturing polyacrylamide gels. Gels were stained with either Coomassie or silver nitrate, or transferred to nitrocellulose membrane, and proteins were detected using the ECL (enhanced chemiluminescence) Western blotting detection system (Tang et al., 1993a, 1993b). The primary antibodies used were either pAbs (i.e. SEW-3 and SEW-16; 40 µg/ml) or mAb BBA-7 (20 µg/ml). The secondary antibody used was either goat anti-rabbit or anti-mouse IgG coupled to horseradish peroxidase. Immunodetection was performed basically according to the product directions (Amersham). Autoradiography was conducted with Hyperfilm-ECL (Amersham). For immunoprecipitation, HEL cells or clone A cells (dissociated with 2 mM EDTA) were surface iodinated as described previously (Tang et al., 1993b). The antibodies used were either SEW-3 (10 µg/ml) or mAb 1.3 (5 µg/ml). Immunoprecipitates were dissolved in the sample buffer and separated on 7.5% SDS-PAGE under reducing conditions. Gels were stained, dried, and exposed at −80 °C using an intensifying screen.
Preabsorption Studies—To confirm the specificity of SEW-3, the major polyclonal antibody used throughout the current study, we preabsorbed this antiserum with PECAM-1 and then employed the preabsorbed antibody in the immunoblotting as well as immunostaining of tumor cells. L cells, which do not express endogenous PECAM-1, were transfected with the complete PECAM-1 cDNA sequence or the vector alone (Albelda et al., 1991). One mg of SEW-3 IgG was incubated with 5.6 × 10^7 L cells transfected with either PECAM-1 or vector for 1 h at room temperature with mixing. The cells were then removed by centrifugation. This absorption step was repeated two more times. The final supernatant was filtered through a 0.2-µm filter and spun at 100,000 × g at 4 °C for 1 h. The antibody was then tested by ELISA, using purified PECAM-1 plated in microtiter wells, to determine the activity remaining. Dilutions of 1:10, 1:50, 1:250, and 1:1250 were used for the PECAM-1-absorbed antibody, and the control-absorbed SEW-3 and normal rabbit IgG were used at 100, 20, 4, and 0.8 µg/ml in the ELISA experiments. The absorbed and control-absorbed SEW-3 IgG were then used in immunofluorescent labeling and immunoblotting of tumor cells, as described above.
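The two ELISA series above are matched five-fold serial dilutions (1:10 → 1:1250 for the absorbed antibody; 100 → 0.8 µg/ml for the controls). A minimal Python sketch of that arithmetic (the function name is ours, purely for illustration):

```python
def serial_series(start, factor, steps):
    """Successive values of a serial dilution: start, start*factor, ..."""
    vals = [start]
    for _ in range(steps - 1):
        vals.append(vals[-1] * factor)
    return vals

# Dilution factors for the PECAM-1-absorbed antibody: 1:10, 1:50, 1:250, 1:1250
dilution_factors = serial_series(10, 5, 4)

# Matched concentrations for control-absorbed SEW-3 / normal rabbit IgG (ug/ml)
control_ug_per_ml = serial_series(100, 0.2, 4)  # 100, 20, 4, 0.8
```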
Tumor Cell Adhesion to Endothelium—An in vitro cell adhesion assay was run to determine the potential functions of tumor cell PECAM-1 molecules. B16a or 3LL cells metabolically labeled with 0.1 mCi/ml of [32P]orthophosphate (37 °C for 5 h in phosphate-free MEM) were dissociated (with 2 mM EDTA) and washed twice with MEM. Tumor cell adhesion to confluent CD3 cells in 24-well culture plates (Falcon) was then performed according to the following three protocols: (a) tumor cells were first incubated with 40 µg/ml of polyclonal anti-PECAM-1 in MEM (containing 4% BSA) for 30 min at 15 °C and then added (100,000 cells/well) to the CD3 monolayer; (b) the CD3 monolayer was first treated with Ab (the same amount as in a) for 30 min at 15 °C and then untreated tumor cells were added; and (c) tumor cells were suspended in the Ab solution and immediately added onto the EC monolayer. A time course was run as described in (c) for 10, 30, and 60 min following addition of tumor cells. Dose studies were performed using different concentrations of polyclonal anti-PECAM-1 (40, 20, and 10 µg/ml) or an equivalent amount of nonimmune rabbit whole serum. In some experiments, tumor cell adhesion to CD3 monolayers was performed by preincubating either tumor cells or endothelial cells with anti-PECAM-1, followed by washing to remove residual antibodies. Adhesion was terminated by aspirating media and nonadhered cells. The culture wells were rinsed with PBS (4×), and the contents were harvested with a mixture of 0.1 N NaOH and 1% SDS. The number of adherent tumor cells was determined by counting the radioactivity. Triplicates for each experimental condition were performed, and the experiment was repeated three times with comparable results.
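Because adhesion is read out as radioactivity, percent adhesion per well reduces to adherent cpm over input cpm, averaged over the triplicates. A small sketch under that assumption (the cpm values are hypothetical, not data from this study):

```python
from statistics import mean, stdev

def percent_adhesion(adherent_cpm, input_cpm):
    """Mean and sample SD of percent adhesion across replicate wells."""
    pct = [100.0 * a / input_cpm for a in adherent_cpm]
    return mean(pct), stdev(pct)

# Hypothetical triplicate wells: adherent counts vs. 10,000 cpm of cells added
m, s = percent_adhesion([4200, 3900, 4500], 10_000)  # m = 42.0, s = 3.0
```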
Polymerase Chain Reaction (PCR) Analysis of Genomic DNA and Reverse Transcription-Polymerase Chain Reaction (RT-PCR)—Genomic DNA of cultured solid tumor cells was isolated using the SDS/proteinase K method (Sambrook et al., 1987). PCR of genomic DNA was performed using 0.25 µg of DNA as the template in 50 µl of reaction buffer prepared from a 10× PCR buffer stock (200 mM Tris-HCl, pH 8.3, 750 mM KCl, 1 mg/ml BSA, and 25 mM MgCl2), with 0.25 µg/µl primers and 0.1 unit/µl Taq polymerase. A PCR reaction without the template was used as the experimental (i.e. negative) control. To prevent carry-over (i.e. product) contamination, 0.5 unit/µl of the restriction enzyme AvaII, which cuts three times within the PCR fragment specified by the nested pair of primers (see below), was used to treat the reaction buffer (37 °C, 60 min) before adding template and polymerase. The cycling conditions were the same as used for RT-PCR (see below). Total RNA was extracted from whole cell lysates using the guanidinium thiocyanate-CsCl method (Chang et al., 1992; Tang et al., 1993b). RT-PCR was performed basically as described previously (Tang et al., 1993a, 1993b). Briefly, 1 µg of total RNA was reverse transcribed in a 20-µl transcription buffer made up of 50 mM Tris, pH 8.3, 75 mM KCl, 3 mM MgCl2, 10 mM dithiothreitol, 0.5 mM dNTPs, 20 units of RNase inhibitor (RNasin; Promega), 1 µM of antisense primer (see primer B, described below), and 200 units of M-MLV reverse transcriptase (Life Technologies, Inc.). RT reactions without RNA template were used as the negative control. To prevent DNA contamination, 1 unit/µl of DNase was used to pretreat the mixture prior to the addition of primers and reverse transcriptase (37 °C, 60 min). The mixture was heated at 95 °C for 10 min (to deactivate the DNase). Then primers and reverse transcriptase were added, and the mixture was incubated at 42 °C for 1 h, followed by heating at 100 °C for 10 min and immediate cooling on ice.
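Assembling such reactions from 10× or molar stocks is simply C1V1 = C2V2; a minimal helper (ours, for illustration — stock concentrations other than those quoted above are assumptions):

```python
def stock_volume_ul(final_conc, stock_conc, final_volume_ul):
    """Volume of stock needed so that final_volume_ul reaches final_conc
    (C1*V1 = C2*V2; concentrations in any shared unit)."""
    return final_conc * final_volume_ul / stock_conc

# 10x PCR buffer diluted to 1x in a 50-ul reaction -> 5 ul of stock
buffer_ul = stock_volume_ul(1, 10, 50)

# 3 mM MgCl2 final in the 20-ul RT reaction from an assumed 25 mM stock -> 2.4 ul
mgcl2_ul = stock_volume_ul(3, 25, 20)
```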
In another set of experiments, the complete RT mixture was treated with 0.5 unit/µl RNase A and then used for PCR in order to confirm the absence of DNA contamination. Two µl of the above cDNA was used as the template in PCR. Nested PCR was performed to detect the PECAM-1 message. Since the genomic sequence of PECAM-1 is unknown, we designed two pairs of primers, presented as follows (also see Fig. 11), on the basis of the published human endothelial cell PECAM-1 cDNA sequence (Albelda et al., 1991): primer A, 5'-CAA AGA CAA CCC CAC TGA AG-3' (sense); primer B, 5'-CAC TCC GAT GAT AAC CAC TG-3' (antisense); primer C, 5'-CTG AGG GTG AAG GTG ATA GC-3' (nested sense); primer D, 5'-AGT ATT TTG CTT CTG GGG AC-3' (nested antisense).
Primers A and B cover the nucleotide sequence region from 1533 to 1975 (443 bp). The nested (or internal) pair of primers (i.e. primers C and D) encompasses nucleotides 1615-1910 (296 bp). The whole region amplified includes a segment of the transmembrane domain and a section of the adjacent extracellular Ig domain (Newman et al., 1990). Two µl of the reverse transcription mixture was amplified in a total of 100 µl of PCR buffer (20 mM Tris, pH 8.3, 50 mM KCl, 2.5 mM MgCl2, 0.25 mM dNTPs, and 0.1 mg/ml BSA) containing 1 unit of AmpliTaq DNA polymerase (Perkin Elmer Cetus). The first round of PCR was run in the presence of primers A and B using a GeneAmp PCR System 9600 (Perkin Elmer Cetus) at 94 °C × 1 min, 51 °C × 1 min, and 72 °C × 2 min for 30 cycles. Two µl of the first-round PCR product was used for the second round of PCR with primers C and D at 94 °C × 30 s, 50 °C × 30 s, and 72 °C × 1 min for 30 cycles. All PCR buffers used in RT-PCR were treated with AvaII (37 °C, 60 min) to prevent carry-over contamination. A PCR reaction without cDNA template was used as the negative control.
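The quoted product sizes follow directly from the inclusive cDNA coordinates, and the annealing temperatures used (51 and 50 °C) sit below a rough Wallace-rule Tm for these 20-mers. A sketch checking both (the Wallace rule is a textbook approximation, not a method from this paper):

```python
def amplicon_length(start, end):
    """Product length in bp for inclusive nucleotide coordinates."""
    return end - start + 1

def wallace_tm(primer):
    """Rough melting temperature: 2*(A+T) + 4*(G+C) degrees C."""
    p = primer.replace(" ", "").upper()
    return 2 * sum(p.count(b) for b in "AT") + 4 * sum(p.count(b) for b in "GC")

outer = amplicon_length(1533, 1975)  # primers A/B product: 443 bp
inner = amplicon_length(1615, 1910)  # nested primers C/D product: 296 bp
tm_a = wallace_tm("CAA AGA CAA CCC CAC TGA AG")  # primer A: ~60 C
```

The nested coordinates (1615-1910) lie strictly inside the outer amplicon (1533-1975), as a nested design requires.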
Southern Hybridization and Northern Blotting—Twenty µl of PCR-amplified product was separated on a 1.5% agarose gel and transferred to GeneScreen Plus membrane (Du Pont-New England Nuclear) using a PosiBlot pressure blotter (Stratagene, La Jolla, CA). DNA was then UV-cross-linked to the membrane. A radioactive cDNA probe was prepared with [32P]dCTP using the Prime-It random primer labeling kit (Stratagene) and a 1.8-kb cDNA insert encoding PECAM-1 (this fragment was originally cloned in the plasmid vector pTZ18R and released with EcoRI and HindIII). The membrane was prehybridized in a solution containing 50% formamide, 5× Denhardt's, 10% dextran sulfate, 5× SSPE, 0.1% SDS, and 100 µg/ml salmon sperm DNA (42 °C, 4 h). Hybridization was performed with the addition of the labeled PECAM-1 probe (42 °C, overnight). The membrane was washed under high stringency conditions (0.1× SSC and 0.1% SDS for 15 min at room temperature and several washes at 65 °C) and exposed to Kodak X-Omat x-ray film at −80 °C using an enhancing screen. For Northern blotting, total cellular RNA was obtained from cultured HEL, 3LL, B16F1, B16F10, clone A, and W256 cells as described above for RT-PCR. Poly(A+) RNA was isolated using the PolyATtract mRNA isolation system (Promega), and 4 µg of denatured (glyoxal/dimethyl sulfoxide, 50 °C, 1 h) mRNA was loaded onto a 1.0% agarose gel. A 0.24-9.5-kb RNA ladder was used as the molecular mass standard. The gel was transferred, and the membrane was prehybridized and hybridized as described above. After hybridization, the membrane was deprobed and then reprobed with a radiolabeled β-actin cDNA probe.

DNA Sequencing—The PCR-amplified fragment migrating at the predicted size (as defined by the primers) on agarose gels was cut from the gel and purified by electroelution. The sequence of the amplified double-stranded cDNA fragment was determined by the Sanger dideoxynucleotide termination method using the AmpliTaq sequencing kit (Perkin Elmer Cetus) with some modifications.
Purified fragment (template) and primers (C or D) were denatured at 100 °C for 5 min and annealed on dry ice for 2 min. The extending DNA chain was labeled with 35S-dATP. Samples were loaded onto a 6% polyacrylamide sequencing gel. Autoradiography and data processing were performed as described.
RESULTS
Immunological Identification of PECAM-1 on Human, Rat, and Murine Solid Tumor Cells—Cultured tumor cells from different species were surface labeled with either pAb (Fig. 1), mAb (data not shown), or Fab fragment (Fig. 2) against PECAM-1. Immunodetection was conducted using immunofluorescence, peroxidase-anti-peroxidase staining, and flow cytometry. As shown in Fig. 1, tumor cells, like endothelial cells (Fig. 1a), expressed PECAM-1 molecules on their cell surface. The distribution pattern of the positive label varied among different tumor cell lines. B16a cells (Fig. 1b) demonstrated homogeneous surface labeling, although heterogeneity existed among individual cells (i.e. some tumor cells expressed a much lower amount of PECAM-1 than others). B16F10 melanoma cells exhibited a similar staining pattern (Fig. 1e). In contrast, larger aggregates of positive label were detected on 3LL cells (Fig. 1, c and h). On the other hand, clone A cells (human colon carcinoma) appeared to be enriched for PECAM-1 molecules at the cell periphery in subconfluent cultures (Fig. 1d) and at cell borders in confluent cultures (data not shown). Peroxidase-anti-peroxidase staining revealed brownish granules on the cell surface of B16a cells (Fig. 1g) and also in the perinuclear region of 3LL cells (Fig. 1h). When cells were permeabilized with Triton-HEPES buffer, immunostaining with SEW-3 detected an intracellular pool of PECAM-1 molecules (data not shown). Staining with the Fab fragment of SEW-3 also detected cell surface labeling on B16a, 3LL, W256, clone A, and B16F10 cells (Fig. 2, a-e, respectively), excluding the possibility of nonspecific binding of intact antibody to Fc receptors. In addition, staining of 3LL cells with control antibodies, i.e. nonimmune rabbit IgG (Fig. 1, f and …).

Biochemical Identification and Partial Characterization of PECAM-1 on Tumor Cells—Several pAbs and mAbs were used to detect PECAM-1 in solid tumor cell lines.
pAb SEW-3 detected a 130-kDa protein, together with multiple lower bands, in human platelet lysates (Fig. 3, A and B). The lower bands represent either degraded species or nonspecific staining, since in other preparations of human platelets only the 130-kDa protein was detected (e.g. see Fig. 3, C and D). … cells (data not shown). Rat aortic endothelial cells (RAEC) revealed a protein band of identical size to that of the W256 cells, i.e. 125 kDa (Fig. 3A). Rb IgG stained only some low molecular mass nonspecific protein bands (Fig. 3A). Two bands right below the 128-kDa band in clone A cells are probably degradation products (see Fig. 3D and Fig. 5, C and D, for comparison). When used to stain murine cells, SEW-3 detected a 130-kDa protein in CD3 microvascular endothelial cells and 3LL tumor cells and a ~125-kDa protein in two melanoma cell lines, B16a and B16F10 (Fig. 3B). 3LL cells also demonstrated two lower molecular mass bands, which are probably degradation products. Again, Rb IgG detected only some low molecular mass nonspecific bands (data not shown). The reactivity of SEW-3 to PECAM-1 was blocked, in a dose-dependent manner, by preincubating this Ab with purified platelet membrane (Fig. 3D), thus providing indirect evidence for the specificity of SEW-3. Another pAb, SEW-16, also detected a ~128-kDa protein in clone A and DLD-1 cells (Fig. 3E). Interestingly, this pAb does not recognize PECAM-1 in rodent tumor cells (data not shown). A mAb, BBA-7, detected similar protein bands in clone A and DLD-1 cells (Fig. 3F).
Human colon adenocarcinoma cells (i.e. clone A and DLD-1) were further studied by immunoprecipitation and subcellular fractionation (Fig. 5). It appears that PECAM-1 is constitutively expressed on the surface of clone A cells, since immunoprecipitation of radioiodinated clone A cells with both SEW-3 (Fig. 5A) and mAb 1.3 (Fig. 5B) resulted in the expected ~128-kDa protein band. Immunoblotting using membrane fractions readily detected PECAM-1 in both clone A and DLD-1 cells (Fig. 5C). In addition, it appears that the method of harvesting tumor cells (i.e. scraping versus EDTA dissociation) does not affect the detectability of PECAM-1 by SEW-3 (Fig. 5C). Treatment of clone A cells with TPA or 12(S)-HETE did not significantly alter the level of PECAM-1 associated with the plasma membrane (Fig. 5D), although these two agents have been shown to increase the surface expression of the integrin receptor αvβ3 (Tang et al., 1993a, 1993b).
PECAM-1 is a heavily glycosylated molecule in which about 40% of the molecular mass is composed of carbohydrates. It is primarily N-glycosylated and has been demonstrated to possess terminal sialic acid residues (Newman et al., 1990). A preliminary characterization of tumor cell PECAM-1 molecules was performed using neuraminidase treatment. As presented in Fig. 6, a 30-min treatment of human platelets with 1.0 unit/ml of neuraminidase resulted in a molecular mass reduction of approximately 5 kDa. In contrast, this decrease in molecular mass following neuraminidase treatment was not observed with any of the tumor cell lines tested (i.e. B16a, W256, and 3LL; Fig. 6). This differential sensitivity of PECAM-1 to C. perfringens neuraminidase in platelets and tumor cells was observed following treatment of samples for up to 4 h (data not shown). Preliminary experiments with clone A cells revealed similar insensitivity of PECAM-1 to neuraminidase treatment (data not shown).
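For scale, a back-of-the-envelope check (our arithmetic, not the authors') of how small the removable sialic acid contribution is relative to the total apparent mass and to the ~40% overall carbohydrate content cited above:

```python
def mass_fraction(shift_kda, total_kda):
    """Fraction of apparent molecular mass accounted for by a gel shift."""
    return shift_kda / total_kda

sialic_fraction = mass_fraction(5, 130)  # ~0.04 of platelet PECAM-1 mass
carbohydrate_kda = 0.40 * 130            # ~52 kDa of carbohydrate overall
```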
PCR of Genomic DNA, RT-PCR, Southern Hybridization. and Northern Blotting-PCR analysis of genomic DNA revealed the PECAM-1 genes in the genomes of several human, rat, and murine tumor cells (Fig. 7). The size (296 bp) of PECAM-1 fragments (which were confirmed by hybridization) in all cell lines examined was precisely the same as predicted from the size covered by the nested pair of primers which are based on the cDNA sequence, suggesting that this fragment represents a gene encoding segment (i.e. no intron is included in this fragment). All cell lines demonstrated the same size of PECAM-1 fragment (Fig. 7), suggesting the lanes 3 and 3 ' ) , and W256 cells (lone 4 and 4 ' ) using SEW-3 preincubated with either control L cells (lanes 1-4) or L cells transfected with PECAM -1 (lanes 1'-4'). The detected PECAM-1 proteins migrating at 130 kDa (for human platelet, lone I ) , -128 kDa (for clone A and DLD-1 cells, lanes 2 and 3 ) , and -125 kDa (for W256 cells, lane 4 ) were indicated by three arrowheads. c-e, immunofluorescent labeling of clone A (c and e) and 3LL cells ( d ) with SEW-3 preabsorbed with either control L cells (c and d) or L cells expressing transfected PECAM-1 (e). X 600. genomic composition of PECAM-1 may be similar. PECAM-1 mRNA was examined by RT-PCR using PECAM-specific primers and DNA hybridization using PECAM-1 cDNA probes. The results of these experiments are depicted in Fig. 8. Interspecies and intraspecies differences in the amount, pattern, and sequence homology of PECAM-1 molecules were noted. For example, the PECAM-1 message of the predicted size (0.3 kb) was observed in two human tumor cell lines, i.e. HEL cells (Fig. 8, lane 3) and clone A cells (Fig. 8, lane 6 ) . But HEL cells also expressed two larger forms of the message as confirmed by hybridization. On the other hand, murine endothelial cells (CD3; Fig. 8, lane 1 ), murine fibroblasts (Fig. 8, lane 2) and B16F10 melanoma cells (Fig. 
8, lane 7) expressed abundant PECAM-1 message, while B16a cells (lane 4) and 3LL cells (lane 5) expressed lower levels of PECAM-1 mRNA, which were barely visible on RT-PCR by ethidium bromide staining but whose presence was confirmed by hybridization. This low amount may also result from the possibility that the PECAM-1 sequence of B16a and 3LL is divergent from both human and other murine cell counterparts. Interestingly, B16a and 3LL also had an upper band that comigrated with one of the upper bands in HEL cells, but it did not hybridize to PECAM-1 probes, suggesting that they might be coamplified by-products. In contrast, all rat cell lines, including rat aortic endothelial cells (lane 8), rat carcinosarcoma cells (W256, lane 9), and AT 3.0 rat prostate carcinosarcoma cells (lane 10), expressed PECAM-1 mRNAs which were not amplified well by the human sequence-based PECAM-1 primers used in RT-PCR. However, the presence of the PECAM-1 message was corroborated by hybridization.
[Displaced figure legends: ...immunoprecipitation of PECAM-1 from surface-iodinated clone A cells using mAb 1.3; HEL cells were used as the positive control cell line, and the expected PECAM-1 band is indicated by arrows. C, cultured clone A or DLD-1 cells were either scraped using a rubber policeman or dissociated using 2 mM EDTA and then lysed in the hypotonic buffer for membrane preparation (see "Materials and Methods"); an equal amount of membrane protein was loaded onto 7.5% SDS-PAGE and the membrane was blotted using SEW-3. D, clone A cells were treated with either TPA or 12(S)-HETE (0.1 µM), scraped off using a rubber policeman, and used for membrane preparation; equal amounts of membrane proteins were separated on 7.5% SDS-PAGE and the blotted membrane was stained using SEW-3, with HP included as a control. FIG. 6, neuraminidase does not affect the mobility of murine and rat tumor cell PECAM-1 molecules: Nonidet P-40-extracted protein samples prepared from human platelets or from tumor cells were treated with 1.0 unit/ml neuraminidase derived from C. perfringens (lanes marked +) or with buffer alone (lanes marked -) for 30 min on ice and then analyzed by immunoblotting using polyclonal anti-PECAM; molecular masses in kDa are shown on the left. Note: the lower bands in the 3LL lane are degradation products; the molecular mass standards are EcoRV-restricted λ-phage fragments (see Fig. 8 for values); the black spots below the PECAM-1 fragment in the hybridization panel are nonspecific binding.]
To further examine the expression of PECAM-1 in tumor cells, about 20 human tumor cell lines derived from different histological and pathological origins were screened for the expression of PECAM-1 message utilizing the RT-PCR technique combined with Southern hybridization. The results of the screening (Fig. 9) indicated that all cell lines tested contained PECAM-1 message, although some tumor cell lines (e.g. MS751 human cervical carcinoma; lane 16) expressed little PECAM-1 mRNA. Again, heterogeneity was observed among these PECAM-1 messages, even among tumor cells of the same histological and pathological source. A typical example was the WM series of melanoma cell lines (Fig. 9, lanes 1-7), some of which demonstrated an upper band. Surprisingly, none of these upper bands hybridized to the PECAM-1 cDNA probe. In contrast, the upper band in some other tumor cell lines (e.g. Du 145; lane 9) was positive by Southern blot.
The presence of PECAM-1 message in tumor cells was further confirmed by Northern blotting analysis of human (HEL and clone A), rat (W256), and murine (3LL, B16F1, and B16F10) tumor cells (Fig. 10). The results revealed three bands, i.e. 3.7, 3.4, and 3.0 kb, for HEL cells, as reported by others (Zehnder et al., 1992). Hybridization revealed a single message of ~4.1 kb in clone A, W256, B16F10, and, to a lesser extent, in B16F1 cells (Fig. 10). 3LL cells demonstrated an mRNA band of 3.3 kb (Fig. 10). The loading of samples was confirmed by rehybridization to a β-actin cDNA probe.
Human Tumor Cell PECAM-1 Sequence Matches 100% to Human Endothelial Cell PECAM-1 Sequence-For final analysis, we obtained a partial cDNA sequence of PECAM-1 from a human tumor cell line, i.e. clone A colon carcinoma cells. Clone A cells were chosen because these cells are highly invasive, easily cultured, and express an abundant amount of PECAM-1 protein (see Figs. 1-3). When the DNA sequence obtained was compared with the published endothelial cell PECAM-1 sequence (Albelda et al., 1991), the tumor cell sequence was found to be identical (Fig. 11), therefore providing conclusive evidence that solid tumor cells express PECAM-1.
PECAM-1 Is Involved in Tumor Cell Adhesion to Vascular Endothelium-A homologous in vitro cell adhesion assay was performed to examine the function of tumor cell PECAM-1 molecules. Radiolabeled tumor cells (i.e. B16a and 3LL) were coincubated with confluent murine microvascular endothelial cells (CD3) in the presence of anti-PECAM (SEW-3, mAb 1.3, or Fab Ab) or non-immune rabbit IgG or MOPC ascites (as the Ab control). Fig. 12a demonstrates that all three Abs could inhibit B16a cell adhesion to endothelium, although in general the pAb SEW-3 demonstrated the strongest inhibitory effect. An inhibition of approximately 40% was obtained with all of the antibodies. The adhesion-blocking effect was observed 10 min following addition of the antibody and persisted throughout the experimental period (up to 60 min). The inhibition by the pAb of 3LL cell adhesion to CD3 appeared to be greater than the inhibition with B16a (Fig. 12b). Dose studies indicated that SEW-3 exhibited a dose-dependent inhibition of B16a adhesion to endothelium (Fig. 12c). Incubation with antibody for a shorter time period (i.e. 10 min) appeared to give rise to a greater inhibition than observed with a longer time period (i.e. 20 min; Fig. 12c). When either tumor cells or endothelial cells were individually treated with antibodies (after which the Abs were washed off) and then used in the adhesion assay (see "Materials and Methods" for details), inhibition of adhesion was also observed (data not shown). Collectively, these data suggest that tumor cells express functional surface PECAM-1 molecules which are involved in tumor cell adhesion to endothelium.
DISCUSSION
PECAM-1 is a member of the Ig family of adhesion molecules. A large array of Ig family adhesion molecules have been implicated in tumorigenesis and cancer metastasis. For example, both N-CAM and Ng-CAM have been detected in neuroblastoma and phaeochromocytoma and found to be related to tumor cell invasion (Brackenbury, 1985). Vascular cell adhesion molecule-1 expressed on cytokine-activated endothelial cells has been demonstrated to mediate tumor cell adhesion to the vascular monolayer via binding to VLA-4 (Martin-Padura et al.; Taichman et al., 1991). Members of the carcinoembryonic antigen gene family are expressed on solid tumor cell lines such as colon carcinomas and breast cancers and mediate either Ca2+-dependent (Turbide et al., 1991) or Ca2+-independent homotypic tumor cell aggregation or heterotypic cell-cell adhesion. Another Ig family member, ICAM-1 (intercellular cell adhesion molecule-1), which is normally expressed on activated endothelium, has been observed to be expressed on solid tumor cells, and its expression is correlated with metastatic potential (Johnson et al., 1989). Based on the above observations, we hypothesized that some solid tumor cells may express PECAM-1 and that PECAM-1 may be involved in tumor cell-endothelial cell interactions. Hence we undertook the studies presented in this paper.
Tumor cell PECAM-1 molecules were shown to be expressed on the cell surface, as indicated by immunocytochemical surface labeling, flow cytometry, and immunoprecipitation of surface-iodinated cells, as well as by subcellular fractionation studies using diverse antibodies. Tumor cells from different histological origins appear to exhibit different topological distribution patterns of PECAM-1 on the surface, since some tumor cells demonstrate homogeneous labeling (e.g. B16a melanoma), while others (e.g. 3LL lung carcinoma) demonstrate larger surface "granules" (or aggregates), and still others (e.g. clone A colon carcinoma) appear to be enriched for PECAM-1 molecules at the cell periphery (i.e. cell-cell contact zones; see Fig. 1). The heterogeneity in PECAM-1 expression is also observed within a specific tumor cell type, i.e. the level of expression is not homogeneous among all cells in a population. Labeling of permeabilized tumor cells also reveals an intracellular pool.
Western blotting using two pAbs and a mAb (Fig. 4) demonstrated that tumor cell PECAM-1 migrates in the range of 120-130 kDa, a molecular mass similar to that of PECAM-1 expressed in platelets and endothelial cells (Muller et al., 1989; Albelda et al., 1990; Newman et al., 1990; Albelda et al., 1991; this study). The specificity of the major polyclonal antibody used in the current study, i.e. SEW-3, was confirmed indirectly by preabsorbing this antibody with purified platelet membrane as well as directly by preincubating the antibody with PECAM-1-transfected cells. The molecular weight of tumor cell (i.e. clone A) PECAM-1 is not affected by reduction (data not shown), suggesting that this protein, like endothelial cell PECAM-1, is made up of a single polypeptide. The Western blotting results were confirmed by immunoprecipitation and subcellular fractionation studies. PECAM-1 appears to be constitutively expressed on the surface of some tumor cells, e.g. clone A cells. Several lines of experimental data support this conclusion. First, immunofluorescence, peroxidase-antiperoxidase staining, and flow cytometric analysis all detected the surface labeling. Second, immunoprecipitation with surface-radiolabeled cells resulted in the protein band. Third, immunoblotting using the membrane fraction revealed the PECAM-1 band. Finally, non-treated clone A cells and clone A cells treated with TPA or 12(S)-HETE do not demonstrate a difference in the amount of membrane-associated PECAM-1. TPA and 12(S)-HETE, by activating protein kinase C, have previously been shown to increase the surface expression of integrin αvβ3 in endothelial cells (Tang et al., 1993a, 1993b). Therefore, the observations in the present study suggest that PECAM-1 is constitutively expressed on the surface of tumor cells.
[Displaced legend for Fig. 12: tumor cells were resuspended in different concentrations of various antibody solutions and immediately added to a confluent microvascular endothelial cell monolayer (CD3 cells); adhesion was performed for different periods of time. Shown is the percentage of tumor cell adhesion to CD3 relative to the control (i.e. control antibody-treated cells); each condition was performed in quadruplicate and standard deviations were between 1-5% (not shown). a, B16a adhesion to endothelial cells was inhibited by antibodies (pAb, mAb, and Fab) against PECAM-1 (time course); b, pAb to PECAM-1 inhibited adhesion of both B16a and 3LL to endothelium (time course); c, dose study of pAb inhibition of B16a adhesion to endothelial cells, with two time points (10 and 20 min) of antibody treatment shown.]
The form of PECAM-1 present on some tumor cells may have different biochemical properties than its platelet counterpart. Neuraminidase treatment does not result in any alterations in the molecular mass of tumor cell PECAM-1, although an approximate 5 kDa decrease is observed with the platelet PECAM-1 following neuraminidase treatment. Two possibilities arise from this observation. Tumor cell PECAM-1 molecules may not be significantly sialylated in the termini of their carbohydrate chains. Alternatively, tumor cells may possess aberrant or abnormal terminal sialyl residues which are not cleaved by neuraminidase from C. perfringens. This possibility is especially tempting and more plausible when considering that tumor cells are reported to possess aberrant glycosylation of surface glycoproteins.
The presence of PECAM-1 on solid tumor cells is confirmed by detection of the PECAM-1 gene in tumor cell genomes using the PCR technique. The presence of PECAM-1 message in tumor cells was investigated by RT-PCR followed by hybridization of PCR-amplified fragments. Both PCR analysis of genomic DNA and RT-PCR of cellular RNA revealed a PCR fragment of the same size, which was confirmed to be a PECAM-1 fragment by subsequent hybridization. These observations suggest that this segment of the PECAM-1 molecule as defined by the nested pair of primers does not encompass an intron. Several lines of experimental data exclude the possibility that the PECAM-1 fragment detected by RT-PCR is due to contaminating genomic DNA. First, the RT reaction buffer was treated with DNase prior to the initiation of the RT reaction (see "Materials and Methods"). Second, in another set of experiments, the RT buffer was pretreated with RNase prior to RT, and the subsequent PCR reaction did not reveal any product.2 Third, the PCR and hybridization patterns of genomic PCR and RT-PCR are significantly different. For instance, in RT-PCR, B16a and 3LL cells and all rat cell lines do not demonstrate a well-defined PECAM-1 band (see Fig. 8). However, these cell lines demonstrate a strong PECAM-1 band in genomic PCR (Fig. 7). Another example is that hybridization of some RT-PCR-derived fragments gives two or more bands (e.g. in the Du 145 cell line, see Fig. 9), while hybridization of genomic PCR-derived fragments all results in a single predicted band (Fig. 7).
Previously, Simmons et al. (1990) detected PECAM-1 transcripts in a metastatic colon carcinoma, and the authors concluded that the transcripts might come from tumor-infiltrated macrophages. These authors did not detect PECAM-1 mRNA in other solid tumor cells. From our own experiments, we suspect that the negative results obtained by these authors are due to the insensitivity of Northern blotting using total cellular RNA. Using purified mRNA for Northern blotting, we detected the PECAM-1 message in solid tumor cells. This provides substantive corroboration for the RT-PCR data. Three messages are detected in HEL cells. This observation is consistent with our RT-PCR data revealing the presence of multiple PECAM-1 mRNAs and with the Northern blotting results of others (Zehnder et al., 1992). In clone A, W256, 3LL, and B16F10 cells, only a single species of message is observed. The size of the PECAM-1 mRNA in 3LL cells appears to be smaller than that in other cells and identical to the second species of mRNA in HEL cells (i.e. 3.3 kb). This difference in the size of the PECAM-1 message in different tumor cell lines may represent cell type-specific alternative splicing. The molecular biology experiments we performed allow us to conclude: (a) PECAM-1 appears to be expressed on many human solid tumor cells, although the level of expression varies greatly among different cell lines; (b) the human tumor cell PECAM-1 sequence is identical or highly homologous to that of endothelial cell PECAM-1; and (c) rodent solid tumor cells also express PECAM-1.
The significance of the detection of PECAM-1 on solid tumor cells is reinforced by the fact that it is widely expressed and involved in mediating tumor cell adhesion to vascular endothelium, one of the most important steps leading to organ preference of metastasis (Pauli et al., 1990; Honn and Tang, 1992). Adhesion assays either in the presence or absence (but with pretreatment of tumor cells with anti-PECAM-1) of Abs have consistently demonstrated that PECAM-1 is functional in mediating tumor cell adhesion to unstimulated endothelium. PECAM-1 has been shown to be enriched at the cell-cell borders of confluent endothelial cells in culture and of the vessel wall in situ (Muller et al., 1989; Albelda et al., 1990; Newman et al., 1990). Of interest, morphological studies on tumor cell-endothelial cell adhesion frequently indicate that adhesion preferentially occurs at the apposition zone between neighboring endothelial cells (Pauli and Lee, 1988). Thus, it is tempting to speculate that PECAM-1, among other adhesion molecules, may mediate early phase (i.e. "docking"; see Honn and Tang, 1992) tumor cell adhesion to unstimulated endothelium, when activation-dependent adhesion molecules such as ICAM-1, E-selectin, P-selectin, and vascular cell adhesion molecule-1 are not available. Interestingly, Lee et al. (1992) recently reported that adhesion of melanoma cells to cultured human microvascular endothelial cells is independent of vascular cell adhesion molecule-1, E-selectin, and ICAM-1. It was hypothesized that some novel microvessel-related adhesion proteins are involved. Our results suggest that PECAM-1 may be one of these "novel" adhesion molecules.
2 D. G. Tang and K. V. Honn, unpublished observations.
It is worthwhile to point out that we did not examine whether all available tumor cell lines express PECAM-1 protein, although RT-PCR results indicated that most of these cells express readily detectable message. In situ hybridization experiments are underway to determine the expression of PECAM-1 mRNA in tumor cells in vivo.
|
2018-04-03T01:45:54.525Z
|
1993-10-25T00:00:00.000
|
{
"year": 1993,
"sha1": "66a73f0cadcd9f97d6b617159af9455f25cdcd96",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)41609-2",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0d2700e93b53012a296913c6311ec84650b8c3c2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
252568107
|
pes2o/s2orc
|
v3-fos-license
|
Embedding Hindsight Reasoning in Separation Logic
Proving linearizability of concurrent data structures remains a key challenge for verification. We present temporal interpolation as a new proof principle to conduct such proofs using hindsight arguments within concurrent separation logic. Temporal reasoning offers an easy-to-use alternative to prophecy variables and has the advantage of structuring proofs into easy-to-discharge hypotheses. To hindsight theory, our work brings the formal rigor and proof machinery of concurrent program logics. We substantiate the usefulness of our development by verifying the linearizability of the Logical Ordering (LO-)tree and RDCSS. Both of these involve complex proof arguments due to future-dependent linearization points. The LO-tree additionally features complex structure overlays. Our proof of the LO-tree is the first formal proof of this data structure. Interestingly, our formalization revealed a previously unknown bug and showed an existing informal proof to be erroneous.
INTRODUCTION
We are concerned with automatically proving linearizability, the standard correctness criterion for concurrent data structures [Herlihy and Wing 1990]. A concurrent data structure is linearizable subject to a sequential specification of its methods, if each method takes effect in a single atomic step of its concurrent execution, the method's linearization point, and satisfies the sequential specification in this step.
Concurrent separation logics [Bell et al. 2010;Delbianco et al. 2017;Elmas et al. 2010;Fu et al. 2010;Gotsman et al. 2013;Gu et al. 2018;Hemed et al. 2015;Sergey et al. 2015;Vafeiadis and Parkinson 2007] provide a powerful toolbox of deductive reasoning techniques to verify complex concurrent data structures. However, the proof construction heavily relies on the proof author's creativity and expertise in wielding the available tools effectively. For instance, in order to construct the inductive invariant of the data structure, the proof author may have to devise proof-specific resource algebras to express ghost state that captures the key aspects of the computation history. This hinders proof automation due to the vast complexity of the proof space that needs to be explored. Similarly, the proofs may make use of prophecy variables [Abadi and Lamport 1991] to predict future-dependent linearization points [Jung et al. 2020;Liang and Feng 2013;Vafeiadis 2008]. Constructing such proofs involves backward reasoning, which is difficult to automate [Bouajjani et al. 2017]. It stands to reason that there is a need for guiding principles that help to structure the proof and that provide effective strategies for automated tools to prune the search space.
Hindsight theory [Feldman et al. 2018, 2020; Lev-Ari et al. 2015; O'Hearn et al. 2010] provides such a guiding principle, which we refer to as temporal interpolation. One proves lemmas of the form: if there existed a past state that satisfied property p and the current state satisfies q, then there must have existed an intermediate state that satisfied r. Such lemmas can then be applied, e.g., to prove the existence of a future-dependent linearization point in hindsight. Hindsight is 20/20: the arguments only involve forward reasoning, which is easier to automate than, say, prophecy-based arguments.
One limitation of the existing hindsight theory is that it has only explored the general idea of temporal interpolation very narrowly. Concretely, it has been used only to prove hindsight lemmas about concurrent traversals of data structures. These are variations of statements of the form "if the current node x of the traversal was reachable from the root at some point in the past (p), and y is the successor of x in the present state (q), then y was reachable from the root at some point in the past (r)". We show that temporal interpolation applies more broadly in other contexts as well.
Another limitation is that the proof and application of these hindsight lemmas has so far been confined to meta-level linearizability arguments. As a consequence, existing hindsight proofs can lack the rigor enforced by a program logic. We show that this has resulted in at least one incorrect hindsight-based proof in the past [Feldman et al. 2020].
Contributions. Building on [Meyer et al. 2022], we present a concurrent separation logic that integrates temporal interpolation as a general proof rule. The logic offers the best of both worlds: it enables the intuitive reasoning of hindsight theory within the rigorous framework of a formal proof system. As in [Meyer et al. 2022], the logic's semantic foundation is based on computations rather than states, which it exposes at the syntactic level in the form of a lightweight temporal operator. This operator provides a uniform mechanism for tracking history information. This reduces the need for introducing proof-specific auxiliary ghost state and helps to prune the space of possible proofs to consider for automatic proof construction. At the same time, the logic offers all advantages of separation logic, including the ability to reason locally about state mutation and concurrency via the frame rule, and to introduce ghost state if and when needed.
The key innovation over [Meyer et al. 2022] is a new proof rule that enables general hindsight reasoning via temporal interpolation. The proof rule postulates and then applies hypotheses h(p, q, r) that state the correctness of the temporal interpolation. These hypotheses are collected by the main proof and then discharged in subproofs. This approach provides a proof-structuring mechanism: the subproofs can use a coarse-grained abstraction of the program behavior, which often simplifies the overall proof argument and aids automation. The nature of temporal interpolation as a proof-structuring mechanism is made formally precise in our soundness proof by showing that the proof rule can be eliminated from the logic.
To demonstrate the usefulness of our development, we have integrated temporal interpolation into plankton [Meyer et al. 2022], an automated verifier for concurrent search structures based on separation logic. As a case study, we have used the extended tool [Meyer et al. 2023] to automatically verify the logical-ordering (LO-)tree [Drachsler et al. 2014]. The proof exercises the full power of our logic by combining a linearizability argument based on temporal interpolation with local reasoning in separation logic. To our knowledge, there has been no formal proof of the LO-tree prior to this work (either automated or mechanized). In fact, our efforts identified one previously unreported bug in the original implementation of the data structure. Another bug was identified by Feldman et al. [2020], who presented an informal hindsight-based proof. While the fix proposed by Feldman et al. [2020] addresses the original bug, we show that it introduces a new linearizability violation. This underscores the benefit of supporting hindsight proofs in a formal logic.
Limitations. Our focus is on automating linearizability proofs for concurrency library implementations. In particular, our program logic was not designed for modular verification of library clients against the proved linearizability specifications. Moreover, plankton is not yet fully automated: the user provides an invariant describing the properties of each node comprising the data structure in the shared heap. Finally, the implementation of temporal interpolation in plankton is currently geared towards reasoning about pure future-dependent linearization points (i.e., those that do not modify the abstract state of the data structure). We leave the handling of impure cases in the implementation as future work. We note, though, that these cases are not prevalent in the context of concurrent search structures.
OVERVIEW
We illustrate our approach using the idealized distributed counter shown in Figure 1. A counter object c has an abstract state that tracks an integer value v and supports two methods: inc(c) atomically increments v by 1, and read(c) returns v. The counter is distributed in the sense that v is the sum of two integer values stored in separate memory locations c.l and c.r. The implementation of inc non-deterministically chooses one of the two locations and then atomically increments it using a fetch-and-add (FAA) instruction. The implementation of read non-atomically reads the values of the two memory locations and then returns their sum.
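As a concrete illustration, the counter of Figure 1 can be modeled in a few lines of Python. This is our own sketch, not the paper's code: the class name is ours, a lock stands in for the atomic FAA instruction, and the nondeterministic choice of cell is made explicit via a parameter.

```python
import threading

class DistributedCounter:
    """Model of the idealized distributed counter.

    The abstract value v is the sum of two cells, l and r. A lock models
    the atomicity of fetch-and-add (FAA); read() performs two separate,
    non-atomic loads, mirroring the read method of Figure 1.
    """

    def __init__(self):
        self.l = 0
        self.r = 0
        self._faa_lock = threading.Lock()

    def inc(self, pick_left):
        # inc(c): choose one of the two cells, then FAA(cell, 1).
        # The nondeterministic choice is exposed as `pick_left`.
        with self._faa_lock:
            if pick_left:
                self.l += 1
            else:
                self.r += 1

    def read(self):
        # read(c): two non-atomic loads, then return their sum.
        x = self.l
        y = self.r
        return x + y
```

Run sequentially, read returns exactly the number of preceding increments; under concurrent interleavings only linearizability is guaranteed, which is the point of the proof that follows.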
Our goal is to prove that the distributed counter is linearizable with respect to its sequential specification, which is given in Figure 1 as Hoare annotations expressed in separation logic. The specification uses the predicate counter(c, v) to define the abstract state of the counter in terms of the underlying memory representation. Here, a points-to predicate a ↦ v expresses ownership of the memory location at address a and, moreover, that this location stores value v. The operator * is separating conjunction, which expresses that its two conjuncts hold over disjoint memory regions. In the following, we assume an intuitionistic semantics of these predicates, i.e. p * true = p.
To prove linearizability, we need to show that each method transforms its precondition to its postcondition in a single atomic step. Due to interferences by concurrent inc methods, the counter value may change throughout the execution of a method. Hence, the value v in the precondition of the specification does not refer to the counter's initial abstract state when the method is invoked, but rather to its abstract state at the linearization point. This semantics of the Hoare annotations corresponds to that of logically atomic triples [da Rocha Pinto et al. 2014]. Note that the variable v in the postcondition of read is bound to the method's return value.
The linearization point of inc is when FAA is executed, and the desired Hoare specification follows immediately from the specification of FAA. So we focus on the more interesting case of read. The read method does not change the value of the counter. Hence, it suffices to show that the returned value x + y is equal to the counter value at the linearization point. The challenge is that the linearization point depends on the future interferences of concurrent inc operations. In fact, it may lie in a concurrently executing inc. For example, consider the scenario where at the point when read executes Line 9, we have c.l = c.r = 0, and before it proceeds to Line 10, two concurrent incs increment first c.l and then c.r to 1. That is, when read executes Line 9 we have v = 0 and when it executes Line 10 we have v = 2, yet the return value is 1. Nevertheless, this execution of read is linearizable because there is a time point in between when v = 1, namely right after the linearization point of the first concurrently executing inc. Note that if the second inc incremented c.l instead of c.r, then the return value of read would be 0 and its linearization point would already be when it reads c.l. This is why the linearization point of read is future-dependent.
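The problematic schedule described above can be replayed deterministically. The sketch below (our own, with the counter's two cells inlined as plain variables) shows that read returns a value that matches neither of its two loads' counter values, but does match the counter right after the first concurrent inc:

```python
# Replay the schedule: read loads l, two incs run, then read loads r.
l, r = 0, 0

x = l                    # read, Line 9: x = 0; counter value v = l + r = 0
observed = [l + r]

l += 1                   # first concurrent inc: FAA(c.l, 1); now v = 1
observed.append(l + r)

r += 1                   # second concurrent inc: FAA(c.r, 1); now v = 2
observed.append(l + r)

y = r                    # read, Line 10: y = 1; counter value v = 2

ret = x + y
assert ret == 1
# The return value differs from the counter value at both loads ...
assert observed[0] == 0 and observed[-1] == 2
# ... but equals the counter value right after the first inc, which is
# therefore the linearization point of read, found in hindsight.
assert ret in observed
```

Swapping the second increment to `l += 1` makes the replay return 0 instead, illustrating why the linearization point depends on the future schedule.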
Intuitively, the linearizability of read follows from the fact that the two memory locations increase monotonically by increments of 1. So if the counter has value v at some point and value v′ > v at some later point, then for each value v′′ with v ≤ v′′ ≤ v′ there is an intermediate state between the two points where the value of the counter is v′′. We demonstrate how to formalize this intuitive argument in our program logic. The logic enables temporal reasoning about computations using past predicates, ⟐p, which express that the state predicate p held true at some prior state of the computation. Our goal is to derive that ⟐(counter(c, x + y)) is true after Line 10. This implies the existence of a linearization point for read.
The proof proceeds in two parts. The first part proves the goal above but assumes the validity of an auxiliary hypothesis that is derived during the proof. This hypothesis captures the intuitive reasoning used above to conclude the existence of an unobserved intermediate state due to interferences by other threads. The second part of the proof discharges this hypothesis.
An outline of the first part of the proof is shown in Figure 2. Throughout the proof, variables that do not occur in the program code, such as a and b, are implicitly existentially quantified. The program logic follows a thread-modular approach that mostly uses sequential Hoare-style reasoning. The soundness of this reasoning is guaranteed by ensuring that each two consecutive atomic commands are separated by an interference-free intermediate assertion. That is, concurrently executing threads will not affect the truth value of this assertion. In the following, we elide the details of the mechanism used to check interference freedom, as it is orthogonal to our core contributions. The details of this mechanism are presented in §3. The proof starts by unfolding the definition of counter(c, v) in the precondition, yielding the assertion on Line 21. After reading c.l, we know that x is bound to the old value of c.l. We also record the state of the counter before the read command in a past predicate ⟐(c.l ↦ x * c.r ↦ b), yielding the assertion on Line 23. This assertion is not interference-free because concurrent inc threads may change the values of c.r and c.l. We therefore weaken the assertion by introducing fresh variables a′ and b′ for these values. We leave a′ unconstrained but preserve b ≤ b′, capturing that concurrent threads can only increase c.r. Since b ≤ b′ only concerns logical variables, we can push this fact into the past predicate. The resulting interference-free assertion is shown on Line 24. We proceed similarly for the read of c.r, resulting in the assertion on Line 26. Again, this assertion is not interference-free because concurrent threads may change the value of c.r. We want to weaken this assertion to the interference-free assertion on Line 31, which implies our desired goal. Observe that Line 31 follows from Line 30 using equality reasoning. So it remains to connect lines 26 and 30.
First, observe that the predicate counter(c, v′) is obtained from c.l ↦ a′ * c.r ↦ b′ by choosing v′ = a′ + b′. To derive ⟐(counter(c, x + b′)), the proof conjectures the validity of the hypothesis on Line 27. This hypothesis is a Hoare triple of the shape {p} I* {q → r ∨ ⟐r}. Here, → is logical implication and r ∨ ⟐r states that r holds now or held at some point in the past. The variable I stands for a set of interferences that the overall proof infers as an auxiliary output of its derivation. The set I consists of pairs (g, com) where com is any atomic command in the program that affects the thread-local or shared program state, and g is the intermediate assertion preceding com in the proof. In our example, the derived interferences all come from the inc method: they are the FAA commands FAA(c.l, 1) and FAA(c.r, 1), each paired with the intermediate assertion preceding it in the proof of inc. Each interference can be viewed as a guarded command that first assumes g and then executes com. From these guarded commands, we build the new program I* which nondeterministically executes the interferences in I an arbitrary number of times. That is, I* can be viewed as abstracting the overall program. Thus, the hypothesis {p} I* {q → r ∨ ⟐r} states that if execution starts from a state that satisfies p and after any number of program steps it reaches a state that satisfies q, then r must have been true in some intermediate state. The temporal interpolation rule allows us to derive from such a hypothesis that if the program is in a state s that satisfies ⟐p ∧ q, then also r ∨ ⟐r holds in s. We use temporal interpolation to derive Line 30 from Line 26 using the hypothesis on Line 27.
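The hypothesis has the flavor of a small model-checking obligation: every interference sequence that starts in a p-state and reaches a q-state must pass through an r-state. For the counter, this is the discrete intermediate value property of a sum that grows by exactly 1 per step. The brute-force check below is our own sketch, with concrete numbers standing in for the logical variables:

```python
from itertools import product

def check_hypothesis(x, b, b_final, depth=6):
    """Explore all interleavings of up to `depth` interference steps.

    p: the initial state satisfies c.l = x and c.r = b.
    Interferences: FAA(c.l, 1) or FAA(c.r, 1), i.e. unit increments.
    q: a state where c.r = b_final is observed.
    r: some state on the way satisfied l + r = x + b_final, i.e. the
       counter held the value x + b_final at least once (the past part).
    """
    for choices in product("lr", repeat=depth):
        l, r = x, b
        sums_seen = {l + r}
        for cell in choices:
            if cell == "l":
                l += 1
            else:
                r += 1
            sums_seen.add(l + r)
            if r == b_final:
                # q holds now, so r must hold now or have held before.
                assert x + b_final in sums_seen, (choices, sums_seen)
    return True
```

For instance, `check_hypothesis(0, 0, 3)` and `check_hypothesis(2, 1, 4)` both pass: since the sum increases by exactly 1 per interference, the set of sums seen along any path is a contiguous range, and x + b_final always falls inside it by the time c.r reaches b_final.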
The second part of the proof is then to establish the validity of the hypothesis. This part can also be carried out in the logic, using the same thread-modular and local reasoning principles. Effectively, the proof boils down to finding an invariant inv that is implied by , implies → _ ⟐ , and is preserved by each of the interferences. In our example, the following invariant does the trick: Intuitively, the first disjunct of the invariant holds up to the linearization point and afterwards, the second disjunct holds. Note that inv contains a past operator and is therefore a computation predicate, not a state predicate.
We contrast the above proof with one based on prophecy reasoning in the style of [Jung et al. 2020]. Without temporal interpolation, the proof has to witness the linearization point of a read thread at the exact moment when the relevant inc thread sets the counter to + ′ for the value ′ that will later be read by the thread. However, ′ depends on how many other inc threads will still increment r between these two points. One can introduce a prophecy variable that predicts the number of such increments between the points when the thread reads l and r. To establish the linearizability argument, the prophecy variables and linearization obligations for the unboundedly many read threads need to be shared with all inc threads that may execute concurrently. This involves a complex helping protocol construction that governs the transfer of resources between threads. This construction is reflected in the proof in the form of a more complex invariant capturing the shared state of the data structure.
PRELIMINARIES
We study concurrency libraries, i.e., a single program executed by a potentially unbounded number of threads. We give a formal account of concurrency libraries and introduce a Hoare-style proof system for verifying them. Our formalism is based on [Meyer et al. 2022].
Programming Model
Along the lines of abstract separation logic [Calcagno et al. 2007;Dinsdale-Young et al. 2013;Jung et al. 2018], the actual sets of states and commands are a parameter to our development. States and Computations. We draw states from a separation algebra, a partial commutative monoid (Σ, * , emp) with a set of units emp so that (i) each state s ∈ Σ has a unit 1 ∈ emp with s * 1 = s, and (ii) 1 * 1 ′ is undefined for any two distinct units 1, 1 ′ ∈ emp. Definedness of s * s ′ is denoted s # s ′ .
We work over a separation algebra with a certain structure. We expect states from (Σ, * , emp) to be composed from a global and a local state. The global resp. local states are again drawn from separation algebras (Σ G , * G , emp G ) resp. (Σ L , * L , emp L ). We require that (i) states in Σ are multiplied elementwise, (g 1 , l 1 ) * (g 2 , l 2 ) ≜ (g 1 * G g 2 , l 1 * L l 2 ) provided the resulting state is in Σ and undefined otherwise, (ii) states Σ can be decomposed, (g 1 * G g 2 , l 1 * L l 2 ) ∈ Σ implies (g 1 , l 1 ) ∈ Σ, and (iii) units emp are also composed, emp ≜ emp G × emp L . It is readily checked that this is a separation algebra.
The temporal interpolation principle we propose reasons over knowledge obtained at different points in time during a computation. To formulate it, we lift the given separation algebra (Σ, * , emp) to a separation algebra over computations (Σ + , * , Σ * .emp). A computation is a non-empty sequence of states. We write . for the concatenation of two computations. The multiplication of two computations .s, .t ∈ Σ + is defined, written .s # .t, if the two histories agree and s # t. In this case, the multiplication yields .s * .t ≜ .(s * t). The two computations share the same history, which is preserved by the multiplication. In the current state, we use the composition given by the separation algebra. This construction works in general, not just for our product separation algebra.
Predicates. For clarity of the exposition, we refrain from introducing an assertion language that needs to be interpreted but work on the semantic level. Given a separation algebra (Γ, * , emp), a predicate is a set of elements from Γ. The predicates form a Boolean algebra (P(Γ), ∪, ∩, ⊆, , ∅, Γ) with disjunction, conjunction, implication, negation, false, and true. We moreover have the standard connectives separating conjunction * and separating implication − * . In our setting, we have the separation algebra of states (Σ, * , emp) and state predicates p, q, o ⊆ Σ. We moreover have the separation algebra of computations (Σ + , * , emp + ) and computation predicates a, b, c ⊆ Σ + . For our temporal interpolation principle developed in §4, it suffices to consider simple computation predicates that reason about single states of the computation. These computation predicates are derived from state predicates.
Definition 3.2. From state predicates p ⊆ Σ we construct (i) the now predicate _p ≜ Σ * .p and (ii) the past predicate ⟐p ≜ Σ * .p.Σ + and (iii) the weak past predicate _ ⟐p ≜ _p ∪ ⟐p. Now predicates lift state predicates to hold in the last (the current) state of a computation. Past predicates lift state predicates to hold at some time in the past of the computation. The precise moment when the state predicate was true is not known, which means framing is not relevant for past predicates, and leads us to define the multiplication of computations as an intersection in the past. Intuitionism carries over from state to computation predicates. Lemma 3.3. If p is intuitionistic, so is _p. Predicate ⟐p is intuitionistic.
The predicates are compatible with the separation logic operators.
Commands. We assume a potentially infinite set of commands (COM, ⟦−⟧). The actual set is a parameter and not relevant for our development. Commands com ∈ COM transform a pre state into a post state which, due to non-determinism, need not be unique. This state transformer is given by the interpretation ⟦com⟧ : Σ → P(Σ) of com. We lift the transformer to computations by appending the post state: ⟦ ⟦com⟧ ⟧( .s) ≜ { .s.s ′ | s ′ ∈ ⟦com⟧(s) }. The transformer extends to predicates in the usual way [Dijkstra 1976]: ⟦ ⟦com⟧ ⟧(a) ≜ ∈a ⟦ ⟦com⟧ ⟧( ). We expect to have a neutral command skip ∈ COM that is interpreted as the identity. To model failing commands, we follow [Calcagno et al. 2007] and assume their post state to be abort, a dedicated top value in the lattice of predicates.
For the frame rule to be sound, we require a locality condition (LocCom) on commands. Note that (LocCom) requires the computation predicate c to perform a stuttering step when being framed on the right-hand side of the latter inclusion. We call a computation predicate c ⊆ Σ + frameable if .s ∈ c implies .s.s ∈ c for all .s ∈ Σ + . Fortunately, all computation predicates that are constructed by union, intersection, and separating conjunction from now and past predicates are frameable. Unless otherwise stated, we will assume that all predicates we encounter are frameable. Concurrency libraries. Concurrency libraries consist of an unbounded number of threads that all execute the same program st. Different functions would be modeled by an initial non-deterministic choice among the function bodies, which is supported in our while language together with sequential composition and repetition. A configuration cf = ( , pc) of the library comprises a global computation ∈ Σ + G and a program counter pc. The program counter maps thread identifiers ∈ N to pairs pc( ) = ( , st) containing thread-local information: a computation ∈ Σ + L and a program fragment st the execution of which remains. The transition rules among configurations are as expected: a step of thread changes the shared and the thread-local information according to the transformer of the executed command and leaves all other threads unchanged.
Towards a Hoare-style proof system, we call a configuration ( , pc) initial wrt. computation predicate a and program st if every thread still has to execute all of st and its computation satisfies a; accepting configurations wrt. a computation predicate b are defined analogously. Reachability is defined as usual. We refer to the initial, accepting, and reachable configurations by Init a,st , Acc b , and Reach(cf), respectively.
The correctness condition we would like to prove for concurrency libraries is whether all configurations reachable from a-st-initial configurations are b-accepting, Reach(Init a,st ) ⊆ Acc b . In this case, we say that the Hoare triple { a } st { b } is valid.
Program Logic
We use a proof system to establish the validity of Hoare triples; it is given in Figure 3 below (ignore the marked parts for now). The proof system is thread-modular [Berdine et al. 2008; Jones 1983] in nature and thus verifies a single thread in isolation. To account for the actions of other threads, which may affect the isolated thread, we ensure interference freedom [Owicki and Gries 1976] of the overall proof.
Technically, the proof system establishes judgements P, I ⊩ { a } st { b } with the following components: (i) a Hoare triple { a } st { b } for the isolated thread, (ii) a set P of intermediary assertions used during the proof of the Hoare triple, and (iii) a set I of interferences that the isolated thread is subject to. Recording the intermediary assertions allows us to separate the interference freedom check from the derivation of the Hoare triple [Dinsdale-Young et al. 2013, Section 7.3]. We denote the interference freedom of P under I by I P. The resulting proof system is sound.
In our development, we will use the set of computations ⟦ ⟦st⟧ ⟧ I (a) defined by extending each computation in a by every sequence of states encountered when executing program st to completion while admitting interferences from I. The formal definition is the straightforward lift of ⟦ ⟦−⟧ ⟧ to sequences of commands and interferences. A consequence of the soundness result is the following.
Lemma 3.6. If there is a set P with a ∈ P, I P, and P, I ⊩ { a } st { b }, then ⟦ ⟦st⟧ ⟧ I (a) ⊆ b. Interference Freedom. The isolated thread is influenced by the actions of other, interfering threads. We capture those actions as interferences (c, com), meaning that com may be executed by an interfering thread from a configuration satisfying c. Observe that the global portion of c imposes restrictions on when the interference may happen while the local portion of c supplies the local computation the interfering thread needs for its execution. From the point of view of the isolated thread, only the global portion of its computation changes. The interference freedom check wrt. a set I of interferences then proceeds as follows. It takes a computation predicate a and tests whether ⟦ ⟦(c, com)⟧ ⟧(a) ⊆ a for all (c, com) ∈ I. If this is the case, the interference does not invalidate a and the predicate is interference-free. The interference freedom check extends naturally to a set of predicates P. We write I * b for the set of interferences (a * b, com) with (a, com) ∈ I. We use the same notation for sets of predicates and write P * b for the set of predicates a * b with a ∈ P. We also remark that past information is always interference-free, because interferences append states and this does not change the past of the computation.
TEMPORAL INTERPOLATION
Temporal interpolation is a reasoning principle to derive information about intermediary states that have not been observed in the program proof. Coming back to the example of a distributed counter, if the counter value has been v1 in the past and is now v2 > v1, then we wish to derive that there has been a moment in which the counter value has been v with v2 ≥ v ≥ v1. Temporal interpolation will allow us to do so, although an assertion with counter value v is not interference-free and hence will not be observable in the program proof. We can actually guarantee that the moment in which the counter was v is in between the past and the current state, but defer the timing aspect for now. Another example of temporal interpolation is reachability in concurrent data structures, as studied by the hindsight principle which inspired this work [Feldman et al. 2018, 2020; O'Hearn et al. 2010]. If a node n1 has been reachable in the past, and the node now points to n2, then there has been a moment in which the node was reachable and pointed to n2. Also this moment will not be interference-free and hence cannot be recorded in the program proof (the set of predicates P).
To derive the intermediary information, temporal interpolation proves inclusions of the form _ ⟐p ∩ _q ⊆ _ ⟐o. (1) The inclusion indeed formulates an interpolation property for the set of computations: if state predicate p has been true in the past of the computation and we now have q, then there has been a moment in which o was true, and typically o will be p ∩ q. Unfortunately, the inclusion will rarely hold in this generality. The first problem is that the set of computations leading from p to q is too liberal. Rather than considering all sequences of states, we should only consider the ones generated by the program at hand. The second problem is that even if we restrict the computations, we still need to prove the inclusion. Our technical contribution is to embed the above inclusion into a proof system in which it can justifiably be used.
To restrict the set of computations leading from p to q, we introduce a new predicate that reflects the influence of the program on the course of the computation. The observation behind the definition of the predicate is that the set of interferences I which we collect during the proof gives us precise information about the program behavior. An interference (a, com) ∈ I not only says that a command com is executable, it also records in predicate a the conditions under which the command will be executed. Notably, these conditions refer to the shared as well as the local state, meaning the interference captures the thread-local behavior as well. The new predicate thus employs the set of interferences as an abstraction of the overall program behavior.
We turn an interference (a, com) into an atomic block the execution of which is guarded by an assumption. Recall that atomic blocks are not part of our programming constructs, but the above expression will be treated as a single command with the expected semantics. The reason we need a single command is that 2com(a, com) should abstract command com in the program, and that command leads to a single state change. Also note that a is a predicate from the assertion language that we deliberately use within an assumption. To be closer to programming practice, one can weaken a to information about the current state that can be checked over the program variables. We use a * true rather than a to make sure the command satisfies (LocCom). We also call 2com(a, com) a self-interference. Function 2stmt(−) lifts the construction to a set of interferences. The resulting program repeatedly executes all self-interferences in random order.
The new predicate Gov(I) describes the set of I-governed computations, the computations in which every state change is due to an interference or a self-interference: Gov(I) ≜ ⟦ ⟦2stmt(I)⟧ ⟧ I (Σ) .
We view here Σ as a set of computations that consist of a single state. With this definition, we intend to replace Inclusion (1) by Gov(I) ∩ _ ⟐p ∩ _q ⊆ _ ⟐o. (2) This inclusion may or may not hold depending on the set of interferences. To prove the inclusion for the set of interferences at hand, we define Hoare triples that take a set of interferences as a parameter. We justify the need for this parameterization in a moment. A so-called hypothesis h has the form { a } 2stmt(X) { b }. Variable X will be evaluated by a set of interferences. The hypothesis is said to hold for I, denoted by I ✓ h, if we can prove the Hoare triple with X replaced by I: there is a set of predicates P with a ⊆ a ′ ∈ P so that P, I ⊩ { a ′ } 2stmt(I) { b } is derivable and I P. We elaborate on the weakening of a to a ′ further below. For a set of hypotheses H, we write I ✓ H to mean I ✓ h for all h ∈ H. The hypotheses we are interested in have the shape { _p } 2stmt(X) { _q → _ ⟐o }. Since the shape is fixed, we write the hypothesis as h(p, q, o). It states that from a computation ending in p, every execution of the interferences and the self-interferences that leads to a state from _q satisfies _ ⟐o. This is precisely the information that has been missing to justify Inclusion (2).
We incorporate temporal interpolation into the separation logic presented in §3 by means of the new proof rule temporal-interpolation given in Figure 3. It draws a conclusion as in Equation (1) at the expense of recording a hypothesis h(p, q, o). There are a few things worth noting. The rule does not expect the predicate Gov(I) to be present in the premise. The soundness result will show that any program proof can be strengthened to maintain the set of governed computations, and we can therefore leave this set implicit. We draw the conclusion after a skip command, which turns the weak past predicate _ ⟐o from the hypothesis into a proper past predicate ⟐o. This is needed to harmonize the implicit treatment of Gov(I) with framing. However, one can easily avoid the skip by applying the rule to the preceding command. The state predicates p and q should be intuitionistic. This is also related to framing. Rule temporal-interpolation-unordered is a variant in which we do not know whether p or q has been observed first and we rely on both hypotheses. Figure 3 shows the program logic of Meyer et al. [2022] with our extension of hypotheses and temporal interpolation. We denote the former by ⊩ and the latter by ⊩ ti (the subscript is short for "temporal interpolation").
The hypotheses spawned by temporal-interpolation have to be discharged against the full set of interferences collected for the overall program. This is the reason why we work with hypotheses as parameterized Hoare triples rather than ordinary Hoare triples: at the moment we interpolate, we do not yet know the full set of interferences. Instead, we may only have a fraction of the program (and hence the interferences) at hand. It is also the reason why the separation logic judgements given in Figure 3 maintain a set H of hypotheses, and the rules are modified to join these sets. We are not allowed to forget a hypothesis while building up the correctness judgement for the overall program.
We elaborate on why we weaken a to a ′ in the definition of I ✓ h. The purpose of temporal-interpolation is to derive _ ⟐o from _ ⟐p ∩ _q. Typically, p occurs within a weak past predicate, because it is not interference-free. This means no interference-free set of predicates P can prove the hypothesis { _p } 2stmt(X) { _q → _ ⟐o }. A way out would be to prove the hypothesis for a weaker predicate p ⊆ p ′ and replace the predicate _ ⟐p in the main proof by _ ⟐p ′ . Unfortunately, the predicates p that require temporal interpolation not only fail the interference freedom test, it also seems to be impossible to weaken them to interference-free state predicates. All we can do is weaken them by introducing past information. Consider the example of a distributed counter given in §2. There, p is the predicate .l ↦ * .r ↦ ∧ ≤ ′ . We weaken it to the invariant inv defined as ∃ ′′ ′′ . .l ↦ ′′ * .r ↦ ′′ ∧ ≤ ′′ ∧ ′′ + ′′ < + ′ ∨ _ ⟐(counter( , + ′ )). Although we have _p ⊆ inv, the invariant does not have the shape _p ′ . This means the invariant does not lead to a hypothesis h(p ′ , q, o) as required for temporal interpolation. By weakening the condition of when h(p, q, o) holds, we bridge the gap between _p and inv.
Hypotheses require an ordinary program proof, using a method of choice. Yet, their shape suggests an invariance-based proof strategy: since program 2stmt(I) repeats self-interferences 2com(a, com), it suffices to find a predicate that is stable under these commands, contains the precondition, and entails the postcondition. Call inv ⊆ Σ + an inductive invariant for I if ⟦ ⟦2com(a, com)⟧ ⟧(inv) ⊆ inv for all (a, com) ∈ I and I inv. We say that inv proves h(p, q, o), if _p ⊆ inv and inv ∩ _q ⊆ _ ⟐o.
Lemma 4.2 (Strategy). Let inv be an inductive invariant for I proving h(p, q, o). Then I ✓ h(p, q, o).
Soundness
We show that every proof in the new program logic of Figure 3 gives rise to a proof in the program logic of §3, provided the hypotheses hold for the overall set of interferences. Successful interference freedom checks will also carry over. This means we can take full advantage of temporal interpolation, trusting that a traditional program proof will exist which discharges all hypotheses along the way. Temporal interpolation can therefore be understood as a way of structuring and shortening traditional program proofs that involve temporal reasoning. Technically, soundness shows that any derivation in the new program logic can be strengthened by an intersection with Gov(I). This allows us to replace temporal-interpolation by consequence-ti, relying on Lemma 4.1. The difficulty in proving the theorem is the interplay between the intersection we intend to add and the frame rule. Therefore, our first step is to eliminate the frame rule and show that whenever a correctness statement can be derived, it can be derived without frame-ti. Let ⊩ ti,nf denote the restriction of ⊩ ti that avoids frame-ti.
At the heart of the lemma is the fact that the frame rule commutes with the remaining rules of the program logic. This allows us to organize proofs in such a way that the frame rule is applied right after com-ti. A combination of com-ti and frame-ti, in turn, can be captured by com-ti alone. The difficult case is temporal-interpolation, for the proof of which we rely on the following identity.
With the previous result, the derivation that makes use of temporal interpolation can be assumed to be frame-ti-free. We now show that also temporal-interpolation can be eliminated, provided we strengthen the correctness statement by the governed computations.
Lemma 4.6. If P, The previous lemmas allow us to prove Theorem 4.3. For interference freedom, note that the governed computations are interference-free, I Gov(I), and we have I P by the assumption. The intersection of two interference-free predicates is interference-free.
TEMPORAL INTERPOLATION FOR LINEARIZABILITY
We present an extension of our program logic from §4 to verify linearizability. The approach is akin to atomic triples [da Rocha Pinto et al. 2014], except that we do not aim to support compositional reasoning about clients against atomic specifications of libraries. Instead, we only focus on verifying library implementations. We use update tokens that encode a method's obligation to execute a linearization point. Once the method executes a command that resembles the linearization point, the update token is traded into a receipt token certifying successful linearization. This also prevents the method from having further linearization points, since tokens are not duplicable and thus no more tokens can be traded. Here, we focus on concurrent search structures (CSS); however, the approach applies more generally. Sequential specifications Ψ of concurrent search structure methods op and key are Hoare triples over the logical contents of the structure. Here, C and C ′ are the logical contents of the structure before and after the operation takes effect. The predicate CSS(C) ties the physical state of the structure to C. How the method call op( ) changes the contents is prescribed by the relation UP(C, C ′ , , ).
The linearizability obligation is denoted by OBL Ψ and the receipt token by RCT Ψ, , and we drop Ψ if it is clear from the context. Receipts are parameterized in the result value of the operation to reconcile the actual return value with the one prescribed by Ψ. For concurrent search structures, the sequential specifications of the methods contains(k), insert(k), and delete(k) are as expected and we denote their obligations by CTN k , INS k , and DEL k (their receipts are just RCT ).
To deal with the tokens in a proof, we lift the proof system ⊩ ti from §4 to a new proof system ⊩ lin ti which inherits all the rules of ⊩ ti except for Rule com-ti. Rule com-ti is replaced by the three new rules from Figure 4. The rules extract the tokens, invoke ⊩ ti , and then add the tokens back. However, in the process, they potentially transform the tokens if a linearization point is registered. That is, the updates of tokens are handled by ⊩ lin ti rather than ⊩ ti . To do this, we lift the program semantics ⟦ ⟦com⟧ ⟧ in a trivial way: the ghost component of the state is simply ignored. However, for temporal interpolation to remain sound, we need to capture the effect of ghost state updates in the interferences. So, we decorate commands com × (OBL ⇝ RCT ). Then, decorating an interference (a, com) decorates the command and adds the required token to the premise, (a, com) × (OBL ⇝ RCT ) = (a * OBL, com × (OBL ⇝ RCT )). With this, we are ready for the proof rules of ⊩ lin ti . Rule com-lin-void deals with commands that do not alter the logical contents of the structure. Consequently, they maintain the current obligation/receipt token. Rule com-lin-impure trades an obligation for a receipt if the executed command is the linearization point, that is, if it updates the logical contents of the structure according to the sequential specification. If a command changes the logical contents but does not satisfy the specification or has no obligation token, the proof fails. Rule com-lin-pure also trades an obligation for a receipt. However, the rule does so in hindsight. That is, there is no need to perform the trade at the very moment the sequential specification is satisfied, it can be done later if a past predicate can certify the existence of the linearization point. It is this rule that sets our approach apart from atomic triples [da Rocha Pinto et al. 2014]. 
We allow for this retrospective linearization only if the linearization point is pure, i.e., does not alter the logical contents of the structure. The reason is this: such pure linearization points can be used by arbitrarily many threads to linearize whereas impure linearization points require a one-to-one correspondence to threads. The approach can be extended to support impure linearization points. We discuss this in Appendix F and demonstrate it in a proof for the RDCSS data structure [?].
CASE STUDY: THE LO-TREE
We substantiate the usefulness of the developed program logic by verifying the linearizability of a challenging concurrent data structure: the logical-ordering (LO-)tree [Drachsler et al. 2014]. We identify and fix bugs in the original implementation from Drachsler et al. [2014] as well as in the correction attempt by Feldman et al. [2020].
The LO-Tree in a Nutshell
Overview. The LO-tree [Drachsler et al. 2014] is a self-balancing binary search tree implementing a set data type. Self-balancing refers to the tree periodically restructuring itself to maintain a low height in order to speed up accesses. The restructuring mechanism in the LO-tree is standard tree rotations. For an example rotation, consider Figure 5. There, node 13 experiences a right rotation: its left child 7 takes the position of node 13 and node 13 becomes the right child of 7. The formerly right subtree of 7 becomes the left subtree of 13. The resulting tree is a binary search tree again. In a concurrent setting, rotations pose a major challenge. To avoid performance bottlenecks, one wishes to traverse the tree without synchronization, e.g., without acquiring locks that prevent rotations from happening. Without synchronization, however, one cannot prevent traversals from going astray in the presence of rotations. In Figure 5, if a tree traversal searching for node 5 arrives at node 13 and node 13 experiences the right rotation before the traversal continues, then the traversal will never reach node 5 but end up at node 9. For the implementation to be linearizable, it must detect this and be able to find node 5 despite the rotation.
The LO-tree solves the problem by organizing the nodes in a doubly-linked list, the eponymous logical ordering. In fact, it is this list which dictates the contents of the LO-tree. The tree structure is merely an overlay to that list which helps to speed up accesses. In Figure 5, the logical ordering contains all nodes in ascending order while the tree overlay does not yet contain node 17. Hence, the previous tree traversal, which arrives at node 9 on its way to node 5, can follow the logical ordering backward to find 5. Similarly, a tree traversal searching for 17 arrives at node 13 and then follows the logical ordering forward to find it. Implementation. We link the above ideas to the implementation of the LO-tree in Figure 6 (ignore the proof outline annotations for now). The nodes of the tree are represented by the struct type Node. Each node stores an integer key and a Boolean mark as well as several pointers and locks. The mark field is used to indicate that the node is being or has been removed from the tree. For the doubly-linked logical-ordering list, each node stores a forward succ and a backward pred pointer. To synchronize mutations of the list, there is a lock listLock. For the tree overlay, each node stores pointers left and right to its children and a pointer parent to its parent. Tree mutations are synchronized with a lock treeLock. There are two sentinel nodes min resp. max storing values −∞ resp. ∞. The initial logical ordering consists of these two nodes. The root of the tree is max.
The user-facing API of the LO-tree consists of the three methods of a concurrent search structure: contains, insert, and delete. The methods return a Boolean indicating success of the operation. Methods insert and delete use fine-grained locking to synchronize mutators. Both methods rely on the helper method locate(k) which finds (and locks) the position in the logical ordering to which value k belongs. This position can be thought of as the interval between two successive nodes and , . succ = , so that k is logically ordered between the two or in , k ∈ ( . key, . key]. To arrive at this location, a straightforward binary tree traversal is used, as implemented by traverse(k). Since the traversal may yield or depending on the tree structure, the remaining node is determined using pred/succ of the logical ordering. To account for the tree traversal going astray due to rotations, locate validates the found position. More precisely, it checks for k ∈ ( . key, . key] and ensures that is unmarked, i.e., still part of the logical ordering. The validation happens after locking listLock of so that the position cannot be invalidated by concurrent mutators. Insertions of value k proceed as follows. They first locate the position , in the logical ordering where k should be inserted. The returned position also reveals whether k is already present in the logical ordering. If so, the insertion fails and returns false. Otherwise, a new node x is inserted in between and . The new node's pred and succ are pointed to and , respectively. Then, x is inserted into the logical ordering. It is first inserted into the forward ordering by pointing . succ to x. Only after this, it is inserted into the backward ordering by pointing . pred to x. This order deviates from the original version [Drachsler et al. 2014] for reasons we explain in §6.2. Finally, x is inserted into the tree by a call to performTreeInsertion( , x). This call expects the node that is the parent of x.
The parent is determined before x is inserted into the logical ordering by prepareTreeInsertion( , ), which alters neither the logical ordering nor the tree but may acquire locks. We do not go into the details of the tree modifications as they are orthogonal to our linearizability proof. Finally, true is returned by insert.
Deletions of value k are similar to insertions. They locate the position , where k resides. If . key ≠ k, then k is not present and the deletion fails, returning false. Otherwise, it acquires 's listLock and reads 's successor . To remove , it is marked by setting . mark = true, unlinked from the backward logical ordering by setting . pred = , and then unlinked from the forward logical ordering by setting . succ = . Afterwards, is removed from the tree using performTreeDeletion( ), which expects that prepareTreeDeletion( ) has been called before was marked. Similar to insertions, prepareTreeDeletion alters neither the logical ordering nor the tree but may acquire locks. Again, we elide performTreeDeletion and prepareTreeDeletion as they are unimportant for our discussion.
Unlike the above mutations, the contains(k) method is wait-free; in particular, it does not acquire locks. It traverses the tree, follows pred pointers, and finally follows succ pointers to check whether there is an unmarked node containing k. In addition to the original version [Drachsler et al. 2014], we need to follow pred pointers at least until the first unmarked node to guarantee that k is indeed found, see §6.2.
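The list-traversal phases of contains can be sketched sequentially as follows (hypothetical names; the real method additionally runs under interference and starts from a tree traversal, which we replace here by an arbitrary start node):

```python
import math

class Node:
    def __init__(self, key, mark=False):
        self.key, self.mark = key, mark
        self.pred, self.succ = None, None

def link(nodes):
    for a, b in zip(nodes, nodes[1:]):
        a.succ, b.pred = b, a

def contains(start, k):
    x = start
    while k < x.key:   # follow pred while k is smaller
        x = x.pred
    while x.mark:      # the fix: back up to an unmarked node (Line 55)
        x = x.pred
    while x.key < k:   # follow succ towards k
        x = x.succ
    return x.key == k

# A snapshot mid-deletion: node 7 is marked and already unlinked from
# the forward ordering, but a stale tree traversal may still land on it.
mn, n4, n7, mx = Node(-math.inf), Node(4), Node(7), Node(math.inf)
link([mn, n4, mx])          # forward ordering no longer contains 7
n7.mark = True
n7.pred, n7.succ = n4, mx   # 7 still points into the list
```

Starting at the marked node, contains(n7, 7) reports False only after backing up to the unmarked node 4 and re-following succ pointers, rather than trusting the stale node it started from.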
Bugs and their Fixes
The original version of the LO-tree [Drachsler et al. 2014] has two bugs which we fixed in Figure 6. See Appendix A for more details.
Bug 1: Duplicate Values. A subtle quirk of the LO-tree is the fact that an insertion of value k may be unaware of a concurrent deletion of k because the tree traversal of the insertion experienced a rotation but still ended up in the right position for the insertion (the validation in locate succeeds). Successful validation requires that the deletion already removed k from the logical ordering. So, the insertion can proceed and insert k into the logical ordering and into the tree. If the deletion has not yet removed the old marked version of k, then the tree contains two nodes with value k that disagree on the mark bit. Hence, rotations influence the result of contains(k): it is not linearizable.
Our implementation from Figure 6 fixes the above problem by adding Line 55: the logical ordering is followed backward (pred fields) at least until an unmarked node is encountered. This ensures that the final result is not confused by concurrent deletions. Other than that, contains proceeds as originally devised by Drachsler et al. [2014]. Interestingly, adding Line 55 renders the mark bit check on Line 59 superfluous.
Bug 2: Insertion Order. Feldman et al. [2020] identified another bug in the insert method. In the original version [Drachsler et al. 2014], new nodes are inserted first into the backward logical ordering and then into the forward one (compared to Figure 6, Lines 89 and 91 are reversed). To see why this is problematic, assume an insertion of a new node n with value k between nodes p and s that has already linked s.pred to n while p.succ is still pointing to s. Then, contains(k) will find n only if the tree traversal takes it to nodes that appear after n in the logical order. For earlier nodes, contains will only follow succ fields, which cannot yet reach n. It is easy to see that this violates linearizability. We fixed this bug by changing the order in which n is linked into the logical ordering, cf. Lines 89 and 91. Feldman et al. [2020] apply the same fix. However, they also change insert to link new nodes first into the tree overlay and then into the logical ordering (without modifying contains). This violates linearizability: if a new node with value k is inserted into the tree but not yet into the logical ordering, contains will find k if and only if it is not affected by concurrent rotations.

(Node declaration from Figure 6:)

    struct Node { int key; bool mark; Lock treeLock, listLock;
                  Node* left, right, parent, pred, succ; }
    val min = new Node { key = -∞; mark = false; }
    val max = new Node { key = ∞; mark = false; }
    min.pred, min.succ := max, max
    max.pred, max.succ := min, min
Local Reasoning Principle
Local Reasoning. While our program logic from §5 tells us how to establish linearizability, it leaves us with a hard task: showing that a command does or does not alter the contents of the structure. The contents is defined inductively over the data structure graph. To localize the reasoning about this inductive quantity, we build on the keyset framework [Krishna et al. 2020a, 2021; Shasha and Goodman 1988].
Suppose the global data structure graph consists of a set of nodes N. We will define a predicate Inv(C, K, N, M) that describes the resources and properties of a subregion M ⊆ N in the graph.
Here, C will be the logical contents of the subregion, which is the union of the logical contents C(x) of all nodes x ∈ M. The set K is the keyset of the region M, which consists of all those keys that could be in M. We require the invariant to guarantee C ⊆ K. The keyset will be defined inductively over the graph structure as we explain below. We then define the invariant CSS(C) of the entire structure over the full node set N as CSS(C) ≜ ∃K. Inv(C, K, N, N). To enable local reasoning, we aim for a definition of Inv that is compositional: Inv(C1 ⊎ C2, K1 ⊎ K2, N, M ⊎ M′) is equivalent to Inv(C1, K1, N, M) ∗ Inv(C2, K2, N, M′). That is, the predicate allows us to decompose the graph arbitrarily into disjoint subregions M and M′ and compose them back together. In particular, separating conjunction will guarantee that the keysets (and hence the logical contents) of disjoint subregions will also be disjoint. For proofs, this means that we can focus our reasoning on appropriate fragments Inv(C, K, N, M) with a small set M. When reasoning about updates, we can focus on the fragment M that contains only those nodes whose fields or keysets change. As we will see, three nodes will suffice to handle the LO-tree. Also, Inv enables a local-to-global lifting of the specification UP of our search structure methods. For example, if we have identified a fragment of the form Inv(C, K, N, { x }) with k ∈ K, then k ∈ C = C(x) iff k is in the logical contents of the entire structure.

Flows. To obtain a definition of Inv with the desired properties, we build on the flow framework [Krishna et al. 2018, 2020b], which enables local reasoning about inductively-defined quantities of graphs. We sketch the main ideas for our specific application of the flow framework to keysets.
Each node is augmented with a ghost quantity called inset. Intuitively, the inset of a node x is the set IS(x) of all keys k such that a thread searching for k will traverse x. That x is traversed means that the search eventually considers x; the search may or may not continue from there. The keyset KS(x) of x is the subset of IS(x) for which the traversal will terminate at x. For the LO-tree, the inset of the root node of the logical ordering is IS(min) = [−∞, ∞]; for the remaining nodes it is obtained as a solution to the following recursive equation, where a node passes on to its successor only the keys greater than its own key:

    IS(x) = ⋃ { IS(y) ∩ (y.key, ∞] | y.succ = x }

The inset propagates via succ links only, because it is the list of succ links that makes up the logical contents of the LO-tree, as alluded to in §6.1. With this, we formally express the logical contents of node x by C(x), the singleton { x.key } if x is unmarked and ∅ otherwise. To express insets in a separation algebra, the flow framework adds an additional ghost resource component. The technical details are not relevant for our discussion. In our proofs, we use the separation algebras proposed by Meyer et al. [2022] and refer the interested reader there. What is important here is that the above definitions guarantee that the keysets of subregions are always disjoint.
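To make the propagation rule concrete, here is a small sequential sketch (hypothetical names, not part of the formalism) that computes insets and keysets along a succ path and represents each as a left-open interval (lo, hi]:

```python
import math

def insets(keys):
    # keys: the sorted succ path from -inf to +inf; the inset propagates
    # via succ, each node passing on only the keys greater than its key
    ins = {keys[0]: (-math.inf, math.inf)}   # IS(min) = [-inf, inf]
    for a, b in zip(keys, keys[1:]):
        lo, hi = ins[a]
        ins[b] = (max(lo, a), hi)            # receives IS(a) ∩ (a, inf]
    return ins

def keysets(keys):
    # KS(x) is the part of IS(x) not passed on: IS(x) restricted to
    # keys at most key(x), i.e. the interval (lo, key(x)]
    ins = insets(keys)
    return {x: (ins[x][0], x) for x in keys}

ks = keysets([-math.inf, 5, 8, math.inf])
# KS(8) = (5, 8]: a search for 6, 7, or 8 terminates at node 8
```

Note that the resulting keysets (−∞, 5], (5, 8], (8, ∞] are pairwise disjoint and cover the key space, which is exactly the disjointness property that the separation algebra enforces for subregions.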
The Structural Invariant
We use standard separation logic assertions to represent the semantic predicates used so far. In particular, we use boxed assertions to denote that an assertion is interpreted in the shared rather than the local state [Vafeiadis 2008; Vafeiadis and Parkinson 2007]. Moreover, we use fractional permissions [Boyland 2003] for points-to predicates x.f ↦→^{1/n} v to allow reads but prevent interfering updates to lock-protected resources. We also use persistent points-to predicates x.f ↦→^□ v [Vindum and Birkedal 2021] to easily share knowledge about immutable fields. We define a predicate N(x) for the shared resources of a node x. For simplicity, we assume that proofs are implicitly existentially closed. This enables the naming convention where a use of f(x) in the outer proof context refers to the value of field f as defined within N(x). Field in is the ghost field storing the node's inflow (cf. §6.3). We use fractional permissions for the fields listLock, succ, and mark. The listLock protects succ, which is why N(x) has a full permission for succ only if listLock is unlocked. Otherwise, there is half a permission; the other half is transferred to the local state of the locking thread. The setup for mark is similar. As noted above, the lock protects the resources Guarded(x) whose ownership is transferred from the shared state to the local state of the thread acquiring the lock. To make this precise, we define Locked(x) ≜ x.listLock ↦→^{1/2} 1 ∗ Guarded(x) and obtain the expected behavior of locks. For the Hoare triple of lock, note that its precondition does not require x.listLock to be unlocked, llock(x) = 0. This is established by lock as it blocks until x.listLock can be acquired. The postcondition realizes the ownership transfer: Locked(x) contains the protected resources Guarded(x) in the local state while maintaining the node's shared resources N(x).
With the resources of individual nodes set up, we are ready to state the invariant of the LO-tree: The invariant follows the form and satisfies the properties laid out in §5 and §6.3. Its main part is the node-x-local invariant NInv, which restricts the resources held by the overall invariant Inv.
The properties are as follows. (I1) The contents of a node are governed by its keyset. Moreover, the invariant is closed under following pointer fields of x. Observe that we require the overall invariant containing the full node set N to be closed, not the fragment comprising M. (I2) Nodes min resp. max are unmarked and store values −∞ resp. ∞. (I3) Unmarked nodes have a non-empty inset which contains all values greater than or equal to the node's own value. Moreover, nodes receive inset from at most one node, meaning that the succ list between min and max is a path. The abstract predicate indegree-one(x) can be expressed using flows. (I4) Nodes are sorted in the sense that a node's predecessor (successor) stores a lesser (greater) key. It is worth pointing out that (I4) is indeed a node-x-local property, because N(x) holds the required resources. We may simply write Inv(C, M) instead of Inv(C, K, N, M) if K and N are clear from the context.
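Parts of these node-local properties can be phrased as an executable check on the succ path; the following sketch (hypothetical names, no flow ghost state) validates the sentinel keys of (I2), the path shape underlying (I3), and the sortedness of (I4):

```python
import math

class Node:
    def __init__(self, key):
        self.key = key
        self.mark = False
        self.pred = None
        self.succ = None

def path(keys):
    # build a doubly-linked succ path for the given key sequence
    nodes = [Node(k) for k in keys]
    for a, b in zip(nodes, nodes[1:]):
        a.succ, b.pred = b, a
    return nodes

def check_invariants(nodes):
    # (I2): the sentinels are unmarked and store -inf resp. +inf
    ok = nodes[0].key == -math.inf and nodes[-1].key == math.inf
    ok &= not nodes[0].mark and not nodes[-1].mark
    for a, b in zip(nodes, nodes[1:]):
        ok &= a.key < b.key                 # (I4): sorted along succ
        ok &= a.succ is b and b.pred is a   # (I3): a path, indegree one
    return ok
```

The inset-related half of (I3) is deliberately omitted here; it lives in the ghost state and is what the flow framework tracks.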
Proof Outline
The proof outline can be found in Figure 6. While the proof for insert and delete requires mostly standard reasoning, it reveals the interference that other threads are subjected to. The hindsight reasoning for method contains is performed relative to this interference. Using our proof system, we give a proof template for the LO-tree: we do not make any assumptions about the operations manipulating the tree overlay other than them being memory-safe.
6.5.1 Locating Nodes. Recall from §6.1 that insert and delete use the helper locate to find the position p, s to which a given key k belongs. Node p is the result of a tree traversal, Line 67. Since we elide the mechanics of the tree overlay, we only know that the resulting pointer is non-nil; this little information suffices. Next, p is locked, Line 69. This provides us with the protected resources Guarded(p). They guarantee that p.succ and p.mark cannot change due to interference. Reading p.succ, Line 70, binds s to succ(p). Hence, the validation of position p, s on Line 72 results in the interference-free knowledge that p is unmarked, s is the successor of p, and that k indeed belongs in between p and s, k ∈ (key(p), key(s)]. This together with the obtained resources forms the predicate LocInv(N, p, s), formally defined in Figure 6, and is the postcondition of locate on Line 75. Later, we will use the fact that LocInv(N, p, s) implies k ∈ KS(s). To see this, invoke invariant (I3) for the unmarked p. We get [key(p), ∞] ⊆ IS(p). The keys (key(p), ∞] distribute via p.succ as inset to s according to §6.3. Hence, k ∈ KS(s) = (key(p), key(s)].
6.5.2 Insertions. An insertion of key k first calls locate to find the position p, s to which k belongs. The position reveals whether k is already contained, because k ∈ KS(s) as inferred above. If key(s) = k, then k ∈ C(s) and thus k ∈ C. That is, if the conditional in Line 79 succeeds, the specification of an unsuccessful insertion is met. We trade the obligation INS k for the receipt RCT false.
Otherwise, k is inserted into the structure. To do that, a new node n containing k is allocated in Line 86. Its pred and succ fields are set to p and s, respectively. It remains to link n into the logical ordering, as depicted in Figure 7. (The figure also shows the unlinking, Lines 110 and 112, of a deleted node; arrows indicate pred resp. succ pointers, intervals on succ links denote the insets, intervals on nodes denote their keysets; mark bits and acquired locks are not depicted.) First, Line 89 redirects p.succ to n. This is the linearization point: n receives the inset (key(p), ∞] from p so that we get C(n) = { k }. Hence, the update turns C into C ∪ { k }, so that INS k can be traded for RCT true. Next, Line 91 redirects s.pred to n. This command has no effect on the logical contents C, which is why we need no INS k to proceed. It is readily checked that the update maintains the node-local invariants of the nodes p, n, s.
Our proof outline does not consider the methods for inserting the new node into the tree overlay. We simply assume that prepareTreeInsertion in Line 85 produces an interference-free predicate TreeIns that is maintained by the updates of the logical ordering in Lines 89 and 91 and consumed by the later performTreeInsertion in Line 94.
6.5.3 Deletions.
Deletions are similar to insertions (see Figure 7). We omit the details.
6.5.4 Contains. The proof in Figure 6 uses implicitly existentially quantified symbolic variables v, u, t to share knowledge between now and past predicates. We cannot use program variables for this purpose because their values change during computation, meaning they may be valuated differently in now and past predicates. To further avoid confusion between now and past states, we write ‵e to replace in expression e all symbolic variables like v.mark with ‵v.mark. We think of ‵v as the old version of v and use it under past operators. For example, under a past predicate ⟐ ‵Inv(C, M), we would use IS(v) resp. ‵IS(v) to clearly refer to the inset of v in the current resp. past state. The proof of contains(k) has these five stages: (1) The tree traversal, Line 51, finds a starting node y for traversing the logical ordering. The only guarantee for y is that it is non-nil, Line 52.
(2) The logical ordering is traversed by following pred fields as long as k is less than the key in the traversed node, Line 53. The resulting node y is non-nil by (I1). Moreover, we obtain the interference-free fact k ≥ y.key, Line 54.
(3) The traversal continues to follow pred pointers until an unmarked node is reached, Line 55. By invariant (I1), the resulting node y is non-nil. That y is unmarked means that its inset is at least [key(y), ∞] by invariant (I3). Moreover, k ≥ y.key from the previous stage is preserved due to (I4). Together, this implies that k is in y's inset. This fact is not interference-free because y is not locked. To preserve it, we turn it into a past predicate, Line 56.
(4) The traversal follows succ pointers as long as k is greater than the key in the traversed node, Line 57. Using temporal interpolation (details below), we conclude that the reached node y also had k in its inset at some point. Note that this together with k ≤ y.key from Line 58 means k ∈ KS(y) in some past state. So k was in the structure at this past state iff k = key(y).
(5) Using temporal interpolation (details below), we derive from the past contents and the current key field of y whether or not k has been logically contained, Line 60. This past state is, in fact, the linearization point. We retrospectively linearize, Line 61, before returning.
We turn to the details of the temporal interpolation that goes into stages (4) and (5). Temporal Interpolation in Stage (4). The proof outline for the loop from Line 57 is given in Figure 8. The temporal interpolation needed here is this: that v had flow in the past, that its succ field currently points to u, and that its key field currently is less than k together mean that all three facts were true simultaneously at some point. Intuitively, this is the case because v has a non-empty inset whenever v.succ is changed and because key(v) is never changed. Technically, we show the hypothesis h(p, q, p ∩ q) on Line 123, where p captures the past flow fact and q the current succ and key facts. The symbolic variables v resp. u are bound to the traversed nodes by the outer proof context; we use v/u instead of the program variables because they are logically pure and thus do not change their valuation. To prove the hypothesis, we establish P, I ⊩ { a } 2stmt(I) { _q → _⟐(p ∩ q) } for some set P of predicates with _p ⊆ a ∈ P (cf. §4). We cannot simply use a = _p because p is not interference-free. Instead, we use a ≜ _q → _⟐(p ∩ q). It is easy to see that a is weaker than _p, i.e., _p ⊆ a. Note that a is the invariant that the hypothesis proof strategy from Lemma 4.2 asks for. Next, we show that a is interference-free, i.e., ⟦(c, com)⟧(a) ⊆ a for all interferences (c, com) ∈ I of the LO-tree. For an interference (c, com) to invalidate a, it must change the truth of q in the current state. If the truth of q is changed to false, then a is vacuously true. Otherwise, the interference changes succ(v) to u (key(v) is not changed by any interference). This means com stems from Line 89 in insert or Line 112 in delete. In both cases we know from the proof (Figures 6 and 7) that v has a non-empty inset after the interfering update. Concretely, this means ⟦(c, com)⟧(a) ⊆ _p. Because we already established _p ⊆ a, we obtain the interference-freedom of a, as required.
It remains to show that a is invariant under the self-interferences 2stmt(I). To see this, observe that a concerns only the global state, not the local state. Hence, the self-interferences invalidate a iff the interferences of other threads do so. Since the latter is not the case, nothing needs to be shown.
With the hypothesis proved, we obtain ⟐(p ∩ q) from Rule temporal-interpolation. The rule is applied to a command, which we make explicit in the form of skip on Line 126. One can avoid this skip by applying the rule together with the previous command. Finally, we invoke invariant (I3) under the past predicate to obtain [‵key(v), ∞] ⊆ ‵IS(v). By definition, this means that ‵succ(v) = u receives ‵IS(v) ∩ (‵key(v), ∞]. Because ‵key(v) < k, this means k ∈ ‵IS(u). Altogether, we arrive at the desired assertion on Line 127, namely ⟐(‵Inv(C′, M ∪ { u }) ∗ k ∈ ‵IS(u)).
Temporal Interpolation in Stage (5). We proceed in two steps. First, we prove that h(p, q, p ∩ q) holds for arbitrary p and q ≜ key(v) = t. As before, we use Lemma 4.2 with invariant a ≜ _q → _⟐(p ∩ q).
Since key(v) is immutable, a is immediately stable under (self-)interferences. This justifies to move facts about the key freely between now and past states.
Towards the assertion on Line 60, assume key(v) = k. We move this fact into the past predicate from Line 58 using the above argument. The result is ⟐(‵Inv(C′, M) ∗ k ∈ ‵IS(v) ∗ k = ‵key(v)). This means that k was contained in the structure in the past: ⟐(‵Inv(C′, M) ∗ k ∈ ‵C(v) ⊆ C′). This conclusion uses the fact that k ∈ ‵IS(v) ∗ k = ‵key(v) implies k ∈ ‵KS(v). The case for key(v) ≠ k is similar. Overall, rewriting both cases into one yields the desired assertion, Line 60. Finally, this allows us to retrospectively linearize, as the past predicate witnesses a past state where k was resp. was not in the structure, as reflected by the return value. This concludes the linearizability proof.
Proof Automation
We substantiate our claims that temporal interpolation and the resulting proof system for linearizability aid automated proof construction. To this end, we adapted the plankton tool [Meyer et al. 2022]. plankton is a verifier for non-blocking data structures that constructs proofs in the program logic from §3 extended by rules for linearizability akin to those from §5. To be more precise, plankton takes as input the implementation under scrutiny together with a candidate node invariant, like NInv(N, M, x) from §6.4. It then performs an exhaustive proof search. We extended plankton to use our new proof rules from Figures 3 and 4, in particular Rule temporal-interpolation. Our implementation [Meyer et al. 2023] applies temporal interpolation only for hypotheses of the form h(p, q, p ∩ q) and only if it is able to discharge the hypothesis using Lemma 4.2 with invariant _q → _⟐(p ∩ q). This eager approach ensures that we do not pollute the proof search with temporal interpolations that are doomed to fail because their hypotheses do not hold. Note that this is possible despite a potentially incomplete interference set, as plankton restarts proof construction whenever a new interference is discovered. Altogether, our implementation establishes linearizability results along Theorem 4.3.
We used our tool to verify automatically the LO-tree from Figure 6. Similarly to the presented proof, we did not use the actual implementation of the helper functions modifying the tree overlay. Instead, we used most general stubs, functions that change the tree overlay arbitrarily (leaving the logical ordering list unchanged). The node invariant we specified is the one from §6.4. With this, plankton is able to fully automatically construct a linearizability proof for the LO-tree within twenty minutes (see Table 1). We stress that this includes fully automatic applications of temporal interpolation, which are strictly necessary to prove the LO-tree linearizable.
We also compared our new version of plankton against the original version from Meyer et al. [2022]. See Table 1 for the results: temporal interpolation incurs a slowdown of factor 3.15 in the worst case and factor 2 on average. We believe that this slowdown is justified by the reasoning power brought by temporal interpolation. We consider a more extensive evaluation of our implementation future work. As of now, plankton's proof construction is limited by orthogonal concerns (e.g., imprecise joins, the handling of updates with non-local effects) that still limit its applicability.
RELATED WORK
The hindsight principle [Feldman et al. 2018, 2020; Lev-Ari et al. 2015; O'Hearn et al. 2010] and our temporal interpolation have relatives in classical program verification [Manna and Pnueli 1995; Schneider 1997]. So-called causality formulas, in our notation written as _p → ⟐q, express that q is a prerequisite for seeing p. Temporal interpolation is more general in that it may take past information into account in order to infer the existence of an intermediary state. Yet, the past invariance proof principle by Manna and Pnueli [1995, §4.1] inspired an application of Rule temporal-interpolation in the RDCSS proof (Appendix F) to derive a contradiction in a case distinction. The careful identification of verification conditions by Manna and Pnueli [1995] has also led us to the definition of hypotheses that can be proven in isolation. What sets our work apart is that we incorporate temporal interpolation into a modern program logic with powerful reasoning techniques [Calcagno et al. 2007; Jung et al. 2018], in particular flows [Krishna et al. 2018, 2020b].
There are first tools that automate linearizability proofs based on hindsight reasoning. The poling tool [Zhu et al. 2015] implements the hindsight lemma in the formulation of O'Hearn et al. [2010]. The plankton tool [Meyer et al. 2022] automates a restricted form of hindsight reasoning that can be expressed via state-independent variables shared between a past and the current state. However, it did not support general temporal interpolation prior to our extension. Without this extension, the tool would have been unable to verify the LO-tree and other structures that require more complex hindsight reasoning.
We are not the first to study program logics defined over computations instead of states. History-based local rely-guarantee [Fu et al. 2010; Gotsman et al. 2013] has an elaborate assertion language whose temporal operators are carefully harmonized with the rules of the program logic. Our approach builds on the logic proposed by Meyer et al. [2022], from which it inherits the notion of past predicates over computations. We introduce temporal interpolation by means of a new proof rule. The soundness result shows that the proof rule can be eliminated, and hence is really a mechanism for structuring complex proofs. This means that, in principle, all of our proofs can also be expressed in the logic of Meyer et al. [2022]. Doing so, however, requires one to repeat the soundness arguments within each program proof anew. In particular, this (i) requires reasoning about the governed computations explicitly and (ii) thwarts the use of the frame rule. Realistically, this would make the proofs intractable, even manual ones. The conclusions Meyer et al. [2022] can draw directly about the past of the computation are all based on immutability arguments, and compared to what we propose here this is a very weak form of hindsight reasoning. Notably, the version of plankton presented by Meyer et al. [2022] cannot handle the example from §2 nor the LO-tree from §6. Comparing to other computation-based separation logics, we note that the formalization of computations matters: definitions based on interleaving products [Bell et al. 2010] or the union of sets of events [Delbianco et al. 2017; Sergey et al. 2015] seem to be less suited for temporal interpolation.
Prophecies were introduced to separation logic by Vafeiadis [2008] and formalized by Zhang et al. [2012] as structural prophecies that foresee the actions of one thread, a restriction overcome by Jung et al. [2020]. Temporal interpolation conducts full subproofs in the presence of interferences. However, it is in the nature of Owicki-Gries, and has been observed early on [Owicki and Gries 1976], that interferences may require auxiliary variables to increase precision. What seems to make prophecies more difficult to use is the need to reason about the computation backward, against the control flow [Bouajjani et al. 2017]. This is shared with simulation- and refinement-based proofs [Liang and Feng 2013; Turon et al. 2013], where backward reasoning is known to be complete [Schellhorn et al. 2012].
Our proofs use standard techniques like boxed assertions [Vafeiadis 2008;Vafeiadis and Parkinson 2007], fractional permissions [Boyland 2003], and persistent points-to predicates [Vindum and Birkedal 2021]. Combining these techniques is no contribution of ours. In fact, they were already combined in the original plankton tool from Meyer et al. [2022], although the use of fractional permissions and persistent points-to predicates has not been discussed there (probably due to their focus on lock-free implementations).
ACKNOWLEDGMENTS
This work is funded in parts by the National Science Foundation under grant 1815633 and by an Amazon Research Award. The third author is supported by a Junior Fellowship from the Simons Foundation (855328, SW).
DATA-AVAILABILITY STATEMENT
Our extended version of plankton and the dataset are publicly available [Meyer et al. 2023].
A BUGS IN THE LO-TREE AND THEIR FIXES
The original version of the LO-tree [Drachsler et al. 2014] contains two bugs which we fixed in Figure 6. The first bug concerns contains: concurrent insertions and deletions of a value k in the original implementation may lead to duplicates of k in the tree that do not agree on their mark field, making contains produce non-linearizable results. The second bug concerns insert: the original sequence in which new nodes are linked into the logical ordering leads contains to miss values and thus produce non-linearizable results. This bug has been reported by Feldman et al. [2020].
Bug 1: Duplicate Values.
A subtle quirk of the LO-tree is the fact that an insertion of value k may be unaware of a concurrent deletion of k because the tree traversal of the insertion experienced a rotation but still ended up in the right position for the insertion (the validation in locate succeeds). Successful validation requires that the deletion already removed k from the logical ordering. As a consequence, the insertion can proceed and insert k into the logical ordering and into the tree. If the deletion has not yet removed the old marked version of k, then the tree contains two nodes with value k that disagree on the mark bit. Hence, the result of contains(k) is influenced by rotations. We make the malicious scenario precise. To that end, consider Figure 9. In the first (leftmost) state, node 77 is logically contained in the data structure. Moreover, there is an insert(77) underway whose tree traversal is currently at node 42. The second state is the result of a left rotation on node 42. Next, a delete(77) starts. Its tree traversal finds node 77, and subsequently marks and unlinks it from the logical ordering. Before the deletion removes node 77 from the tree (Line 116), the insertion continues. Ominously, the insertion manages to proceed: locate is able to validate that 77 should be inserted between node 42 and its now-successor 99. The insertion will insert 77 into the logical ordering and into the tree as a child of node 42, which is situated in the left subtree of the marked node 77 whose deletion is stalled. This is the last (rightmost) state in Figure 9. Note that this scenario is not prevented by the treeLocks acquired by prepareTreeDeletion and prepareTreeInsertion (in Figure 9, the treeLocks held by delete resp. insert according to Drachsler et al. [2014] are marked in the figure). In the last state, the original implementation of contains(77), which coincides with the one from Figure 6 without Line 55, produces non-linearizable results.
The problem is this: without rotations, traverse(77) will return the marked node 77. Hence, contains(77) will not follow the logical ordering and returns false because the found node is marked. This is not linearizable, as the following execution shows. The first contains(77) of a thread is executed in the first state of Figure 9, certifying that 77 is indeed in the data structure. Then, we perform the insertion and deletion of 77 concurrently as described above. After the insertion is finished, the same thread starts contains(77), which returns false, again as described above. There are three possible linearizations of that execution:

contains(77) = true; insert(77) = true; contains(77) = false; delete(77) = true
contains(77) = true; insert(77) = true; delete(77) = true; contains(77) = false
contains(77) = true; delete(77) = true; insert(77) = true; contains(77) = false

It is easy to see that all linearizations violate the sequential specification of a set data type, meaning that the implementation is not linearizable. The above linearizations also reveal that we can alleviate the problem by making the second contains return true, so that the last linearization complies with the sequential specification of a set data type. Our implementation from Figure 6 achieves this by adding Line 55: after the tree traversal, the logical ordering is followed (pred fields) until an unmarked node is encountered. This ensures that the final result is not confused by concurrent deletions. Once at an unmarked node, contains proceeds as devised by Drachsler et al. [2014].

Bug 2: Insertion Order. Feldman et al. [2020] identified another bug in the insert method. In the original version by Drachsler et al. [2014], new nodes are inserted first into the backward logical ordering and then into the forward one (compared to Figure 6, Lines 89 and 91 are reversed). To see why this is problematic, assume an insertion of a new node n with value k between nodes p and s that has already linked s.pred to n while p.succ is still pointing to s.
Then, contains(k) will find n only if the tree traversal takes it to nodes that appear after n in the logical order. For earlier nodes, contains will only follow succ fields, which cannot yet reach n. It is easy to see that this violates linearizability.
We fixed this bug by changing the order in which n is linked into the logical ordering, cf. Lines 89 and 91. Feldman et al. [2020] apply the same fix. However, they also change insert to link new nodes first into the tree overlay and then into the logical ordering (without modifying contains). This violates linearizability: if a new node with value k is inserted into the tree but not yet into the logical ordering, method contains will find k if and only if it is not affected by concurrent rotations. This gives rise to a linearizability violation similar to the one above.
To see the linearizability violation, consider Figure 10. In the first (leftmost) state, an insert(101) has already linked value 101 into the tree overlay but not yet into the logical ordering. Note that this is the order proposed by Feldman et al. [2020], which differs from ours in Figure 6. (The lock is the one held by insert(101).) In the second state, there is a contains(101) underway that has reached node 42. For the last state, a rotation was executed. (The locks are the ones held by the rotation.) As a consequence of this rotation, the contains(101) currently at node 42 will no longer be able to find 101. To obtain a linearizability violation, consider the following execution. The first contains(101) returns true, as it can find 101 via the tree overlay. The second contains(101) is the one depicted in Figure 10, which cannot find 101 due to the rotation it experienced, and returns false. The last contains(101) starts after the insertion has completed and hence returns true. It is easy to see that all linearizations violate the sequential specification of a set data type, meaning that the implementation is not linearizable. Overall, this means that inserting into the tree overlay before inserting into the logical ordering as done by Feldman et al. [2020] is incorrect. It is worth pointing out that the linearizability violation is independent of the order in which new nodes are inserted into the logical ordering (succ first vs. pred first).
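The claim that every linearization of the Bug 1 execution violates the sequential set specification can be checked mechanically. The following Python sketch is our own illustration (not part of the verified development): it replays each linearization against the standard sequential semantics of a set, starting from the state { 77 } certified by the first contains.

```python
def run(history, state):
    """Replay a linearization against the sequential set semantics.
    Returns True iff every operation's return value matches the spec."""
    for op, key, expected in history:
        if op == "contains":
            actual = key in state
        elif op == "insert":
            actual = key not in state      # insert succeeds iff key absent
            state = state | {key}
        elif op == "delete":
            actual = key in state          # delete succeeds iff key present
            state = state - {key}
        if actual != expected:
            return False
    return True

# The three linearizations of the Bug 1 execution; 77 is initially present.
linearizations = [
    [("contains", 77, True), ("insert", 77, True),
     ("contains", 77, False), ("delete", 77, True)],
    [("contains", 77, True), ("insert", 77, True),
     ("delete", 77, True), ("contains", 77, False)],
    [("contains", 77, True), ("delete", 77, True),
     ("insert", 77, True), ("contains", 77, False)],
]
assert all(not run(h, {77}) for h in linearizations)

# With the fix, the second contains returns true, and the last
# linearization becomes a valid sequential history.
assert run([("contains", 77, True), ("delete", 77, True),
            ("insert", 77, True), ("contains", 77, True)], {77})
```

The first two linearizations fail at insert(77) = true (77 is still present), the third at the final contains(77) = false (77 has been re-inserted).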
B A CONTROL-FLOW-SENSITIVE GENERALIZATION
Temporal interpolation derives information between _ ⟐p and _q from an abstraction of the program of interest, namely 2stmt(I). This abstraction is control-flow insensitive, and there are situations in which it is too rough. Particularly problematic seem to be local computations and mutually exclusive accesses. As for the mutual exclusion, consider a data structure in which a node's mark field is protected by the node's lock and may be set from false to true and from true to false. Imagine we find _ ⟐p ∩ _q, where p expresses, amongst other things, that we hold the lock, .lock ↦ 1, and q says that the node is unmarked, .mark ↦ 0. The goal is to derive _ ⟐(p ∩ q). The control-flow insensitive temporal interpolation will not allow us to do so. Predicate p may refer to other fields of the node that are not protected by the lock, and so the predicate will not be stable beyond the moment in the past where we find it. Moreover, we will also fail to conclude that the mark field was 0 in that moment. The reason is that the self-interferences in 2stmt(I) may arbitrarily release the lock held in p, and then the mark field may experience arbitrary changes on the way to q. With the control-flow sensitive version of temporal interpolation that we develop below, Rule temporal-interpolation-cf, we will be able to conclude that predicate q held true already in the moment we found p, and we thus have _ ⟐(p ∩ q). The reasoning is as follows. With the control flow at hand, we know that the thread of interest has not released the lock on the way from p to q. This means no interference can modify the mark field. We also know that the thread of interest has not modified the mark field. Together, the mark field was not changed on the way from p to q.
To incorporate control-flow information into our program abstraction, the idea is to modify the past predicate ⟐p to a so-called history predicate ⟨p⟩st. The history predicate is meant to say that there has been a moment in the computation in which p was true, and from that moment on the thread has executed a sequence of commands from program st. This latter information is what makes temporal interpolation control-flow sensitive. To formalize the semantics of the predicate, we need to adapt the separation algebra.
Given a separation algebra (Σ, *, emp) and a (potentially infinite) set of commands COM, we define the separation algebra of histories as HST.Σ with HST ≜ (Σ⁺.COM)*.Σ*. Histories interleave non-empty sequences of states with commands. The intention behind this definition will become clear in a moment. The multiplication of histories is similar to the one for computations: we share the past and use the multiplication from the given separation algebra in the current state. It is defined, hst1.s1 # hst2.s2, if hst1 = hst2 and s1 # s2, and in this case yields hst1.s1 * hst2.s2 ≜ hst1.(s1 * s2). The set of units is emphst ≜ HST.emp.
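The multiplication of histories can be pictured with a small Python toy model (the representation is entirely ours, for illustration): states are heaplets modeled as dicts with disjoint union as *, and an element hst.s is a pair of a shared past and a current state.

```python
def compatible(s1, s2):
    """Heaplets are compatible (s1 # s2) iff their domains are disjoint."""
    return s1.keys().isdisjoint(s2.keys())

def mult(h1, h2):
    """hst1.s1 * hst2.s2 is defined iff hst1 = hst2 and s1 # s2;
    the shared past is kept, the current states are multiplied."""
    (hst1, s1), (hst2, s2) = h1, h2
    if hst1 != hst2 or not compatible(s1, s2):
        return None  # multiplication undefined
    return (hst1, {**s1, **s2})

# A past interleaving state sequences with a command, as in (Sigma+.COM)*.Sigma*.
past = ((("x=1",), "com", ("x=2",)),)
assert mult((past, {"y": 7}), (past, {"z": 8})) == (past, {"y": 7, "z": 8})
assert mult((past, {"y": 7}), ((), {"y": 1})) is None    # different pasts
assert mult((past, {"y": 7}), (past, {"y": 1})) is None  # overlapping heaplets
```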
The idea of a history hst = σ1.com1.σ2 . . . comn.σn+1 is to record the commands executed by the thread of interest. This means the state change from last(σi) to first(σi+1) is due to an execution of command comi. We lift the semantics of commands com ∈ COM to histories accordingly. The state changes within the non-empty sequences of states are due to interferences from other threads. We do not record the command used in the interference, with the idea that the history predicate is meant to track thread-local information. The definition is as expected.
The history predicate takes as input a state predicate p ⊆ Σ and a program st over COM. Here, we understand st as a regular language and write com1 . . . comn ∈ st for membership, meaning the program is run to completion, resp. a finite automaton for the language accepts the sequence of commands. We also write st1 ⊆ st2 for the corresponding language inclusion. As a special case, we may have the empty sequence of commands and p holding in the current state. This means the history predicate has a weak understanding of the past, similar to _ ⟐p.
Lemma B.2. The history predicate has the following properties. (i) It is monotonic in both components: p1 ⊆ p2 and st1 ⊆ st2 imply ⟨p1⟩st1 ⊆ ⟨p2⟩st2. (ii) The interplay with commands is as expected: ⟦ ⟦com⟧ ⟧(⟨p⟩st) ⊆ ⟨p⟩st; com. (iii) It is interference-free: I ⟨p⟩st. (iv) Since its purpose is to track the execution of commands, it is not, and should not be, frameable. (v) If p is intuitionistic, so is ⟨p⟩st.
Neither the separation algebra of histories nor the history predicate require the state changes in a history to respect the semantics of commands and interferences. The reason we have not made this requirement, again, is that we do not know the interferences until we have built up a proof for the overall program. Fortunately, the set of governed computations provides the missing information. The intersection ⟨p⟩st ∩ Gov(I) will keep from ⟨p⟩st only the histories in which the state changes are due to the commands and interferences. As before, our proofs will keep the intersection with Gov(I) implicit, which means we can think about ⟨p⟩st as having the expected semantics without having to add the notational overhead.
Recall that the program 2stmt(I) has commands 2com(a, com) = atomic{ assume(a * true); com }, which are now recorded in the history. The program of interest, in turn, has plain commands com. For the intersection with Gov(I) to be meaningful, the projection operation ↓ strips the atomic block and the assumption from 2com(a, com), which has the effect of recording the block as com.
We also lift the now and past predicates _p and ⟐p to histories. The definition is as expected, and the Lemmas 3.3, 3.4, and 4.5 continue to hold.
The analogue of Inclusion (2) that we would like to use for temporal interpolation relies on a hypothesis hcf(p, st, q, o) that, if true for the set I, justifies the inclusion. The hypothesis has the same pre- and postcondition as h(p, q, o), but replaces the program 2stmt(I) by enrich(st, I). This enriched program uses the control flow as recorded in st, but enriches the commands by information about the states in which they are executed. Technically, function enrich turns every command com into a choice over 2com(a, com) with (a, com) ∈ I, and preserves the remaining programming constructs. It is worth noting that the non-deterministic choice in enrich(com, I) can be avoided if we uniquely label each command in the program of interest (and therefore record a single interference for it). The analogue of Lemma 4.1 that will guarantee soundness of the control-flow sensitive temporal interpolation rule is this.
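The enrichment can be pictured as a straightforward program transformation. In the following Python sketch (the representation of programs and interferences is our own), a program is a small AST over commands, and enrich replaces every command by a choice over its interference-annotated variants:

```python
# Programs: ("com", c) | ("seq", s1, s2) | ("choice", s1, s2) | ("loop", s)
def enrich(st, I):
    """Turn every command c into a choice over 2com(a, c) for all (a, c) in I,
    preserving the remaining programming constructs."""
    kind = st[0]
    if kind == "com":
        variants = [("2com", a, st[1]) for (a, c) in I if c == st[1]]
        prog = variants[0]
        for v in variants[1:]:
            prog = ("choice", prog, v)
        return prog
    if kind in ("seq", "choice"):
        return (kind, enrich(st[1], I), enrich(st[2], I))
    return ("loop", enrich(st[1], I))

# Two interferences annotate x:=1, one annotates y:=2.
I = [("p1", "x:=1"), ("p2", "x:=1"), ("q", "y:=2")]
st = ("seq", ("com", "x:=1"), ("com", "y:=2"))
assert enrich(st, I) == (
    "seq",
    ("choice", ("2com", "p1", "x:=1"), ("2com", "p2", "x:=1")),
    ("2com", "q", "y:=2"),
)
```

With unique labels per command, each `variants` list is a singleton and the choice disappears, as remarked above.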
The control-flow sensitive version of temporal interpolation is as follows. The rule expects a history predicate ⟨p⟩st together with _q and allows us to conclude ⟐o after a skip step, provided the hypothesis hcf(p, st, q, o) can be shown to hold for the final set of interferences. The rule is used together with the program logic in Figure 3. Soundness follows very closely the argumentation for temporal-interpolation in Lemma 4.6. Temporal interpolation only ensures that the predicate o has been true some time in the past. For linearizability proofs, it is important to know that this happens while the method executes. The proof of the hypothesis { _p } st(I) { _q → _ ⟐o } with st(I) being 2stmt(I) or enrich(st, I) already guarantees this: the premise _p in particular contains the computation consisting of a single state in p, so if the program takes us to _q → _ ⟐o, then o can only have been true in between the two moments in time. To encode this knowledge into the logical reasoning, a simple way is to work with ghost flags that are raised by ghost commands upon method start or in the moment p became true. There is a detail: as the past operator ⟐p has no constraints except for the one state p, we need to make explicit that in all moments before p was true, the flag was down. This would be done with a predicate of the form r + . We prefer to keep the mechanism of flags implicit, taking for granted that temporal interpolation ensures the existence of appropriate moments.
Temporal interpolation resembles the rule of conjunction, and an interesting question is whether this analogy may lead to a lighter formulation of our proof principle. To make the analogy explicit, we would mimic temporal interpolation by executing the following steps: (i) conduct a proof in which { a ∩ _p } st { b ∩ _q } holds, (ii) conduct a proof { _p } st { _q → ⟐o }, and finally (iii) conjoin this proof with the original one, resulting in { a ∩ _p } st { b ∩ _q ∩ ⟐o }. Unfortunately, the simpler formulation of temporal interpolation has problems the solutions to which lead to the development we have presented. First, we will not see p in the proof conducted in (i), because it is typically not interference-free. So we will have to record its occurrence in a past predicate _ ⟐p. But then we need a mechanism to identify the program between p and q. History predicates offer such a mechanism. Another aspect is that in many cases _q → ⟐o can already be derived for a coarse abstraction of the program. Self-interferences form such a coarse abstraction that allows for concise subproofs. Finally, the subproof conducted in (ii) does not have available the knowledge derived in the outer proof. We may add the predicate a to the precondition _p, but this means repeating the outer proof. With the enrichment enrich(st, I), we add all knowledge from the outer proof, including intermediary assertions, and can focus on the implication to be derived.
C DETAILS OF SECTION 3
The transition rules among configurations are as follows: The initial, accepting, and reachable configurations are defined by: The proof system due to Meyer et al. [2022] consists of the following rules:

Proof (of Lemma 4.2). We use the fact that inv ∩ _q ⊆ _ ⟐o implies inv ⊆ _q → _ ⟐o. □

Proof (of Lemma 4.5). ⊆ Consider σ.s ∈ b * c ∩ ⟐o. Then σ = σ1.t.σ2 with t ∈ o. Moreover, s = s1 * s2 with σ.s1 ∈ b and σ.s2 ∈ c.
We thus have σ.s1 = σ1.t.σ2.s1 ∈ ⟐o ∩ b. Moreover, σ.s = σ.s1 * σ.s2 ∈ (⟐o ∩ b) * c. □

Proof (of Lemma 4.4). The implication from right to left is by definition. For the implication from left to right, we proceed by Noetherian induction on the height of the derivation tree for P, I, H ⊩ ti { a } st { b }. The height of the derivation tree is the maximal number of consecutive rule applications leading to the correctness statement.
Base case
The task is to find a derivation tree that does not use frame-ti. The observation is that com-ti can deal with the framed predicate a * c right away, by the locality of commands. This derivation is frame-ti-free.
Case temporal-interpolation. For intuitionistic p, q, and some o, we apply temporal-interpolation followed by consequence-ti: The equality a * c ∩ ⟐o = { a ∩ ⟐o } * c is Lemma 3.3. It is also used in the postcondition. For the interferences, { (a, skip) } * c = { (a * c, skip) }. We indeed strengthen the precondition. The second inclusion uses that _ ⟐p and _q are intuitionistic by Lemma 3.3. The derivation is frame-ti-free.
Case temporal-interpolation-unordered. Similar to the previous case.
Induction step. We assume that for every correctness statement derived with a tree of height at most n, we have a derivation without frame-ti. We consider a correctness statement that is derived with a tree of height n + 1 in which the last rule is frame-ti. This means we have frame-ti and the premise has a derivation of height n.
To eliminate this application of frame-ti, we consider the rule application that led to the premise.
Case frame-ti. Then for some d we have P = P′ * d, and the derivation of height n + 1 thus has the following shape. We frame d * c with a single application of frame-ti. Since separating conjunction is associative, this is the desired correctness statement. The difference, however, is that now the derivation tree has height only n. Thus, the induction hypothesis applies and yields a frame-ti-free derivation.
Case loop-ti. Then the derivation tree of height n + 1 ends with loop-ti. Note that P from above is { a } ∪ P′ and b is a. We construct a different end of the derivation tree in which we first apply frame-ti and then loop-ti. Since { a * c } ∪ (P′ * c) = ({ a } ∪ P′) * c, the result is the desired correctness statement.
The application of frame-ti in the rewritten proof occurs within a derivation tree of height n. By the induction hypothesis, we get P′ * c, I * c, H ⊩ ti,nf { a * c } st { a * c }, meaning we can derive the intermediary correctness statement without frame-ti. Adding another application of loop-ti keeps the derivation frame-ti-free.
Case consequence-ti. Then the derivation tree of height n + 1 ends with consequence-ti, where a ⊆ a′ and P′ ⊆ P, I′ ⊆ I, H′ ⊆ H, and b′ ⊆ b. We construct a different end of the derivation tree with first frame-ti and then consequence-ti. To be able to apply consequence-ti, we note that a ⊆ a′ entails a * c ⊆ a′ * c, P′ ⊆ P entails P′ * c ⊆ P * c, and similar for the other inclusions.
The application of frame-ti in the rewritten proof occurs within a derivation tree of height n. By the induction hypothesis, we get a frame-ti-free derivation of the framed premise. Adding another application of consequence-ti keeps the derivation frame-ti-free.
Case seq-ti. Then the derivation tree of height n + 1 ends with seq-ti. So P from above is { d } ∪ P1 ∪ P2 and similar for the other components. We construct a different end of the derivation tree in which we first apply frame-ti to the correctness statements from the two branches and then seq-ti.

Case com-ti. Consider I with { (a, com) } ⊆ I. We have ⟦ ⟦com⟧ ⟧(a ∩ Gov(I)) ⊆ ⟦ ⟦com⟧ ⟧(a) ∩ Gov(I) ⊆ b ∩ Gov(I). The latter inclusion is by the assumption ⟦ ⟦com⟧ ⟧(a) ⊆ b. To see the former, consider σ.s.s′ ∈ ⟦ ⟦com⟧ ⟧(a ∩ Gov(I)). Then σ.s ∈ Gov(I) by definition. Moreover, we have σ.s ∈ a. Hence, the state change s.s′ is covered by the interference { (a, com) }. Since { (a, com) } ⊆ I, we get σ.s.s′ ∈ Gov(I) as required.
The inclusion allows us to derive the corresponding correctness statement with Rule com.

Case temporal-interpolation. Let I be a set of interferences with { (a, skip) } ⊆ I so that I ✓ h(p, q, o). To obtain an ordinary derivation, we first apply com and get ⟦ ⟦skip⟧ ⟧(a ∩ _ ⟐o ∩ Gov(I)) ⊆ a ∩ ⟐o ∩ Gov(I). To see the inclusion in the premise, we have ⟦ ⟦skip⟧ ⟧(a) ⊆ a, because a is frameable. Then ⟦ ⟦skip⟧ ⟧(_ ⟐o) ⊆ ⟐o, since skip adds an extra step. We thus get ⟦ ⟦skip⟧ ⟧(a ∩ _ ⟐o) ⊆ a ∩ ⟐o. We can add the governed computations with the same argument as in the previous case.
We now apply consequence. We have generalized the set of interferences and strengthened the precondition. As for the latter, note that we have I ✓ h(p, q, o) by the assumption. Hence, Lemma 4.1 applies and yields a ∩ _ ⟐p ∩ _q ∩ Gov(I) ⊆ a ∩ _ ⟐o ∩ Gov(I).
Case temporal-interpolation-unordered. The argumentation is similar to the previous case, but one has to show that I ✓ h(p, q, o) and I ✓ h(q, p, o) justify _ ⟐p ∩ _ ⟐q ∩ Gov(I) ⊆ _ ⟐o. To this end, consider a computation σ ∈ _ ⟐p ∩ _ ⟐q ∩ Gov(I). There has been a moment in which p was true and a moment in which q was true. Say p was earlier. Then σ = σ1.s.σ2.t.σ3 with s ∈ p and t ∈ q. We thus have σ1.s.σ2.t ∈ _ ⟐p ∩ _q ∩ Gov(I). Lemma 4.1 applies and yields σ1.s.σ2.t ∈ _ ⟐o. The weak past predicate does not change if we append σ3, and so also σ ∈ _ ⟐o.
Induction step. We assume that for every correctness statement derived with a tree of height at most n, potentially using temporal interpolation but not using frame-ti, and for all larger sets of interferences that satisfy the hypotheses, we can give an ordinary derivation in which the pre- and postcondition are strengthened by an intersection with the corresponding set of governed computations. We consider a correctness statement that is derived with a tree of height n + 1 and perform an analysis along the last rule that has been applied.
Case loop-ti. Then the derivation tree of height n + 1 ends with loop-ti. So P from above is { a } ∪ P′. Consider I with I′ ⊆ I and I ✓ H.
Case consequence-ti. Then the derivation tree of height n + 1 ends with consequence-ti, where a ⊆ a′ and P′ ⊆ P, I″ ⊆ I′, H′ ⊆ H, and b′ ⊆ b. Consider I with I′ ⊆ I and I ✓ H.
Since the tree for P′, I″, H′ ⊩ ti,nf { a′ } st { b′ } has height n, and since I″ ⊆ I with I ✓ H′, the induction hypothesis yields an ordinary derivation. Since we have P′ ⊆ P, we get P′ ∩ Gov(I) ⊆ P ∩ Gov(I), and similarly a ∩ Gov(I) ⊆ a′ ∩ Gov(I) and b′ ∩ Gov(I) ⊆ b ∩ Gov(I). This justifies an application of Rule consequence. The resulting correctness statement is as desired.
Case seq-ti. Then the derivation tree of height n + 1 ends with seq-ti. So P from above is { d } ∪ P1 ∪ P2 and similar for the other components.
Consider I with I 1 ∪ I 2 ⊆ I so that I ✓ (H 1 ∪ H 2 ).
Since I1 ⊆ I with I ✓ H1 and similar for the second correctness statement, and since the derivation trees for these statements have height at most n, the induction hypothesis applies and yields ordinary derivations. We use these correctness statements given by the hypothesis as a premise for sequential composition. Since { d ∩ Gov(I) } ∪ (P1 ∩ Gov(I)) ∪ (P2 ∩ Gov(I)) = ({ d } ∪ P1 ∪ P2) ∩ Gov(I), the latter correctness statement is as desired.
Case choice-ti. Similar to the previous case.

If a ∈ P, then a ∩ Gov(I) ∈ P ∩ Gov(I) by the definition of P ∩ Gov(I).
For interference freedom, we have I P by the assumption. Moreover, I Gov(I).
If we intersect two interference-free predicates, we obtain an interference-free predicate. So the last point I (P ∩ Gov(I)) follows. □

Proof (of Lemma B.3). Consider hst ∈ ⟨p⟩st ∩ Gov(I). We turn it into a history to which I ✓ hcf (p, st, q, o) applies and allows us to conclude _q → _ ⟐o.
Hence, hst1.hst2′ ∈ _q → _ ⟐o. Since the now and the weak past predicate only refer to the states in the computation, which coincide for hst1.hst2′ and hst1.hst2, we can conclude hst1.hst2 ∈ _q → _ ⟐o, as desired. □

Proof (of Lemma 4.4). To eliminate frame-ti, we proceed by Noetherian induction on the height of the derivation tree for P, I, H ⊩ ti { a } st { b }. The height of the derivation tree is the maximal number of consecutive rule applications leading to the correctness statement. We give here the base case of temporal-interpolation followed by frame-ti. Consider intuitionistic predicates p, q, and a predicate o. For frame-ti-free derivations, we apply temporal-interpolation followed by consequence-ti: The equality a * c ∩ ⟐o = { a ∩ ⟐o } * c is Lemma 3.3. It is also used in the postcondition. For the interferences, we have { (a * c, skip) } = { (a, skip) } * c. We indeed strengthen the precondition. The second inclusion uses that _ ⟐p and _q are intuitionistic by Lemma 3.3. □

Proof (of Lemma 4.6). We again proceed by Noetherian induction on the height of frame-ti-free derivations and consider the difficult base case of temporal-interpolation. Let I be a set of interferences with { (a, skip) } ⊆ I so that I ✓ h(p, q, o). To obtain an ordinary derivation, we first apply com and get ⟦ ⟦skip⟧ ⟧(a ∩ _ ⟐o ∩ Gov(I)) ⊆ a ∩ ⟐o ∩ Gov(I). To see the inclusion in the premise, we have ⟦ ⟦skip⟧ ⟧(a) ⊆ a, because skip is the identity and a is frameable. Then ⟦ ⟦skip⟧ ⟧(_ ⟐o) ⊆ ⟐o, since skip adds an extra step. For the governed computations, consider σ.s.s ∈ ⟦ ⟦skip⟧ ⟧(a ∩ Gov(I)). Then σ.s ∈ Gov(I), meaning the state changes in σ.s are governed by the interferences. Moreover, we have σ.s ∈ a. Hence, the state change from s to s is covered by the interference { (a, skip) }. Since { (a, skip) } ⊆ I, we get σ.s.s ∈ Gov(I) as required. We now apply consequence. We have generalized the set of interferences and strengthened the precondition.
As for the former, we have { (a, skip) } ⊆ I by the assumption. This implies { (a ∩ _ ⟐o ∩ Gov(I), skip) } ⊆ I. As for the latter, note that we have I ✓ h(p, q, o) by the assumption. Hence, Lemma 4.1 applies and yields a ∩ _ ⟐p ∩ _q ∩ Gov(I) ⊆ a ∩ _ ⟐o ∩ Gov(I). □
E META-THEORY FOR PROVING LINEARIZABILITY
Linearizability assumes to be given a sequential specification of an object. A sequential specification is a language over operation calls and returns in which (i) every operation call is decorated by the actual parameters, (ii) the return immediately follows the call, and (iii) the return is decorated by the return values for the actual parameters. Let OP be the set of all operations for accessing the object and for simplicity assume that every operation op accepts a single parameter a and returns a single value v from a domain D. With this, a sequential specification is a subset of the words over such calls and returns. Search structures store sets of keys C ⊆ N and their operations modify these sets. We can therefore give the sequential specification as a set of predicates UP op (C, C′, a, v), one per operation, that specify this modification relative to a given actual parameter and return value. For example, an insertion of key a with return value v would be captured by the corresponding predicate UP insert. With the predicates at hand, we define an automaton whose trace language is the sequential specification of the search structure. The states of the automaton are all possible search structure contents P(N), and we have a set of labeled edges per operation. This set is defined to contain all transitions from contents C to contents C′ labeled op(a) = v with UP op (C, C′, a, v). We use S(C) for the trace language of this automaton when starting in C, and write S for S(∅).
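For a set-of-keys search structure, the automaton can be sketched in Python as follows. The concrete UP predicates below are the standard sequential set semantics, spelled out by us for illustration; the text above leaves them abstract.

```python
# UP_op(C, C2, a, v): may the contents change from C to C2 with actual
# parameter a and return value v?
UP = {
    "insert":   lambda C, C2, a, v: v == (a not in C) and C2 == C | {a},
    "delete":   lambda C, C2, a, v: v == (a in C) and C2 == C - {a},
    "contains": lambda C, C2, a, v: v == (a in C) and C2 == C,
}

def in_spec(trace, C=frozenset()):
    """Run the automaton: states are contents C, and there is an edge
    labeled op(a) = v from C to C2 iff UP_op(C, C2, a, v) holds."""
    for op, a, v in trace:
        for C2 in {C, C | {a}, C - {a}}:  # the only possible successors
            if UP[op](C, C2, a, v):
                C = C2
                break
        else:
            return False  # no matching edge: the trace is not in S(C)
    return True

assert in_spec([("insert", 7, True), ("contains", 7, True), ("delete", 7, True)])
assert not in_spec([("contains", 7, True)])  # 7 was never inserted
assert not in_spec([("insert", 7, False)])   # inserting an absent key returns true
```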
A concurrent implementation of the search structure is a program of the following form. Every operation is represented by a piece of code st op, and a thread executing the implementation may exercise the operations in arbitrary order. The semantics is as defined in Appendix C. The implementation is executed by an arbitrary number of threads, each represented by an id, which modify a global state from Σ G and a local state from Σ L. For linearizability, we need a small addition. We assume the execution of the first command in st op, say by thread t, makes visible the letter call op(t), the execution of the operation's return command yields v = ret op(t) with v the return value, and the execution of commands inside the operation makes visible the thread id t. The transition system from Appendix C then yields a language over the alphabet Γ of thread ids and calls and returns decorated with thread ids. We write I (Init a,st) for the trace language starting in a configuration from Init a,st. We simply write I if the initial global and local heaps are empty. We focus on traces in which all operations execute to completion. The words in I interleave the operations executed by different threads. Linearizability admits the following rewriting of such an interleaving: two adjacent letters may be swapped, provided they stem from different threads and it is not the case that the first is a return and the second a call. This means we may arbitrarily order overlapping operations, but may not change the order of consecutive operations (the real-time order). To make the link to sequential specifications, we define the partial function ↓ that drops thread ids as letters and from calls and returns. The function is only defined if the word is sequential, meaning it decomposes into infixes of commands by one thread leading from an invocation to the corresponding return.
Definition E.1. [Herlihy and Wing 1990] A concurrent implementation st is linearizable wrt. sequential specification S, if for every w ∈ I there is w′ with w ⇝* w′ so that w′↓ ∈ S.
Towards a proof principle for linearizability, we now tie words over Γ to runs of the automaton underlying the sequential specification. We consider words C1.γ1.C2.γ2 . . . that interleave search structure contents and thread ids resp. decorated calls and returns. We call an infix C.t.C′ of such a word a command of thread t. We call an infix call op(t) . w . v = ret op(t) an operation of thread t, if w does not contain any calls or returns by t. We call such a word a computation, if the projection to every thread yields a sequence of operations of that thread. Note that a computation does not have to stem from st but the term applies more broadly. Our proof principle is this.
Definition E.2. Operation call op(t) . w . v = ret op(t) adheres to the sequential specification, if (1) it contains a command C.t.C′ of thread t, the linearization point, with UP op (C, C′, a, v) for the operation's actual parameter a, and (2) for all other commands C.t.C′ of t we have C = C′. We say that a computation adheres to the sequential specification, if this holds for every operation in it. We use w|Γ for the projection of w to Γ.
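Definition E.2 can be read as a simple check on the command sequence of an operation. The following Python sketch is our own illustration: commands are abstracted to pairs of contents before and after the step, and the UP predicates are the standard set specification.

```python
UP = {
    "insert": lambda C, C2, a, v: v == (a not in C) and C2 == C | {a},
    "delete": lambda C, C2, a, v: v == (a in C) and C2 == C - {a},
}

def adheres(commands, op, a, v):
    """Condition (1): one command is a linearization point satisfying UP_op;
    Condition (2): every other command leaves the contents unchanged."""
    lin_points = 0
    for C, C2 in commands:
        if lin_points == 0 and UP[op](C, C2, a, v):
            lin_points += 1          # the (unique) linearization point
        elif C != C2:
            return False             # impure command outside the LP
    return lin_points == 1

# insert(5) = true: one contents change, framed by pure commands.
assert adheres([(set(), set()), (set(), {5}), ({5}, {5})], "insert", 5, True)
# A second contents modification violates Condition (2).
assert not adheres([(set(), {5}), ({5}, set())], "insert", 5, True)
```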
If the proof principle holds, the computation actually is a run of the automaton underlying the sequential specification. To see this, note that the commands in (2) do not alter the contents of the data structure, and so the sequential specification can stay in the same state. A linearization point may result in a contents modification, in which case the operation's predicate in the sequential specification is guaranteed to hold. Since the predicate defines the edges of the automaton underlying the sequential specification, the contents modification can be tracked in the automaton.

Theorem E.3. If computation w adheres to the sequential specification, then w|Γ is linearizable.
Proof. Let w be a computation that adheres to the sequential specification. We show that there is a computation w′ so that (1) w′ adheres to the sequential specification, (2) w|Γ ⇝* w′|Γ, (3) w′|Γ↓ is defined, and (4) the last contents in w and w′ is the same. This is enough to establish linearizability of w|Γ. Since w′ adheres to the sequential specification by (1), we have that w′|Γ↓ ∈ S. This follows from the paragraph before the theorem, arguing that the automaton underlying the sequential specification has w′ as a run. We moreover have w|Γ ⇝* w′|Γ by (2) and w′|Γ↓ defined by (3). We proceed by induction on the number of linearization points in the computation. In the base case of a single linearization point, there is nothing to do. Assume the claim holds for computations with n linearization points. Let computation w have n + 1 linearization points. Then w has the shape w1.c.w2.C.t.C′.w3.r.w4 so that C.t.C′ is the last linearization point and c and r are the call and return of the corresponding operation. We know that w4 does not contain calls, otherwise C.t.C′ would not be the last linearization point. Moreover, w4 will not contain t-commands. This allows us to move all commands from w4 before r, resulting in w1.c.w2.C.t.C′.w3.w4.r. Relation ⇝ allows us to move the commands of other threads out of w2 and w3. We do so from left to right in order to preserve the fact that we have a computation and the order of linearization points potentially present in w2. The result is w′. Here, w2′ and w3′ contain the commands from w2 resp. w3 that belong to threads different from t, and w2″ and w3″ contain the t-commands. In w2′ we maintain the memory contents we had in w2. In w3′, w4′, and w2″, we change the memory contents to C. Note that w′ is a computation and w|Γ ⇝* w′|Γ.
We argue that w′ adheres to the specification, by showing that C is the last contents in w2′. Let C″.t′.C‴ be the last linearization point in w2 before rewriting. Since w adheres to the specification, the subsequent commands will not modify the contents and C‴ = C has to hold. Since we move the commands out of w2 from left to right, C″.t′.C will also be the last linearization point in w2′. Since w′ is a computation that adheres to the specification and w′op is an operation, also w′pre is a computation that adheres to the specification. Since it only has n linearization points, the induction hypothesis applies to w′pre and yields a computation v with properties (1) to (4). We append the last operation and obtain v′ ≜ v.w′op. Then v′ is again a computation. We show that v′ has properties (1) to (4). To see (1), that v′ adheres to the sequential specification, note that v adheres to the sequential specification by (1) from the induction hypothesis. Moreover, the last contents in w′pre is C, and by (4) from the hypothesis this is also the last contents in v. Since C is also the first contents in w′op, and since w′op adheres to the specification, we have that v′ adheres to the specification. To see (3), note that v|Γ↓ is defined by (3) from the hypothesis, and w′op is a sequential operation, hence v′|Γ↓ is defined. For (4), the last contents in w and v′ is C′.
It remains to show (2), namely w|Γ ⇝* v′|Γ. We showed above w|Γ ⇝* w′|Γ with w′ = w′pre.w′op. By (2) from the hypothesis, we have w′pre|Γ ⇝* v|Γ. The rewriting relation is stable under contexts. We can thus also execute this rewriting with w′op appended, yielding w′|Γ ⇝* v′|Γ. □

To apply the proof principle, we associate with every global state g reachable when executing the concurrent implementation of the search structure its contents C(g) ⊆ N. It is defined as the unique C ⊆ N for which g |= CSS(C). Recall that CSS(C) ≜ ∃N. Inv(C, N): the contents predicate is derived from the invariant, §6.3. Since the invariant is guaranteed to be maintained, C(g) is guaranteed to be defined. With this definition, we can understand the words w ∈ I as interleavings w = C(g1).γ1.C(g2).γ2 . . . It is readily checked that these interleavings form computations in the above sense. We say that st adheres to the sequential specification, if this holds for all w ∈ I when seen as computations.
Corollary E.4. If st adheres to the sequential specification S, then st is linearizable wrt. S.

The rules in Figure 4 implement the check that the execution of every operation adheres to the sequential specification, and thus Corollary E.4 applies. To be precise, Rule com-lin-void checks that a command does not alter the search structure content, as required by Condition (2) in Definition E.2. Rule com-lin-impure explicitly checks that contents modification, actual parameter, and return value together respect the predicate specifying the operation. This is one requirement of Condition (1), but Definition E.2 requires more: there should be at most one linearization point. Uniqueness is guaranteed by the fact that the rule expects an OBL token in the precondition, produces a RCT token in the postcondition, and a RCT token cannot be transformed into an OBL token nor can an OBL token be produced by commands. We have argued here about the modification of the search structure contents on the level of rules. Definition E.2 refers to computations, instead. The close correspondence between rules and program semantics is made precise in the program logic's soundness proof (proof of Theorem 3.5), which can be found in [?].
F IMPURE FUTURE-DEPENDENT LINEARIZATION POINTS
The rule given in §5 for proving linearizability with retrospective reasoning is restricted to pure future-dependent linearization points. However, the approach can be generalized to handle impure future-dependent linearization points, i.e., those that modify the abstract state of the data structure.
In the presence of impure future-dependent linearization points, the abstract state of the data structure at any given point in time of the concurrent execution may depend on future thread interferences. Rather than tracking a single abstract state in the proof, the idea is to track a set of abstract states, one for each possible future. This set of abstract states can be defined purely in terms of the computation history. This idea is inspired by the original proof of the Herlihy/Wing queue [Herlihy and Wing 1990]. A similar idea has also been explored in [?]. Each of the tracked abstract states carries its own obligation/fulfillment token for each active operation. When a thread changes the physical representation of the data structure, the change may affect the abstract state for some but not all possible futures. For the affected abstract states, the proof obligation is to show that the change is consistent with the sequential specification and that the associated obligation token can be traded in for the fulfillment token. A modification of the data structure may also eliminate some of the possible abstract states, but it must not eliminate all of them.

Fig. 11. RDCSS data structure specification:
  { Rstate(r, v) }  get(r)  { w. w = v ∧ Rstate(r, v) }
  { ℓ ↦ m * Rstate(r, v) }  rdcss(r, ℓ, m₁, n₁, n₂)  { w. w = v ∧ ℓ ↦ m * Rstate(r, (m = m₁ ∧ v = n₁) ? n₂ : v) }
At the return point of an operation the proof obligation is to show that the thread has indeed linearized for all possible abstract states at that point. This step can then make use of retrospective reasoning using temporal interpolation, similar to the rule com-lin-pure. This more general construction necessitates a helping protocol that governs the transfer of linearizability obligations between threads to handle cases where an impure linearization point of an operation lies in another thread. These proofs are therefore more difficult to automate than those involving only pure future-dependent linearization points.
We consider the automation of proofs involving impure future-dependent linearization points future work. However, to provide evidence that our logic is equipped to express such proofs, we here discuss a second case study: verifying the RDCSS data structure [?]. This case study involves impure future-dependent linearization points and helping. However, the data structure's abstract state is always uniquely determined by the computation history. So there is still no need to track sets of abstract states in the proof of this data structure.
F.1 High-level Overview of RDCSS
RDCSS, which stands for restricted double compare single swap, is a data structure that implements a form of multi-word compare-and-swap operation. The data structure governs a memory location r and its logical value v by an abstract predicate Rstate(r, v). It provides two operations, rdcss and get, whose sequential specification is shown in Figure 11. The operation get(r) simply returns the current logical value of r. The operation rdcss(r, ℓ, m₁, n₁, n₂) takes a reference to a second memory location ℓ and, only if the current value of ℓ is m₁ and the current value of r is n₁, does it update r to the new value n₂. Otherwise, it leaves r unchanged. In all cases, the operation returns the old value of r.
An implementation of the data structure is shown in Figure 12. The key challenge for the implementation is that the rdcss operation must read r and ℓ in a single logically atomic step, even though two physical steps are required to read both locations. So other threads may interfere and change the value of either location between the two reads. In particular, the location ℓ is extraneous to the data structure and, hence, the client may concurrently update its value while an rdcss operation is in progress. The data structure solves this challenge by maintaining two state modes. If the structure is in inactive mode, indicated by storing the value I(v) in r, then no rdcss operation is in progress and the logical value is v. In particular, a get operation can simply read out v from the inactive state and return. If an rdcss(r, ℓ, m₁, n₁, n₂) operation starts, it first checks whether the structure is in inactive mode and whether its value is n₁, and if so switches the structure into active mode by replacing I(n₁) in r with A(d), where d is a fresh location allocated on Line 153. The location d stores a descriptor value D(ℓ, m₁, n₁, n₂) that remembers the actual arguments of this rdcss operation. The check and update are performed using a single atomic compare and exchange operation (CmpX) on Line 154. The CmpX returns the old value of r before the attempted update. If the update succeeded, the operation is completed by calling complete(r, d) on Line 158. The complete method then reads the value m of ℓ (Line 138), and sets the state back to inactive, I(n′), for the new or old value n′ = (m = m₁ ? n₂ : n₁) (Line 140). The correctness of the implementation hinges on the fact that the active state value A(d) acts like a lock that gives the active rdcss operation exclusive access to the abstract state Rstate(r, n₁). Excluding other rdcss operations from accessing the abstract state guarantees that at Line 138, r still has the old logical value n₁ that it had on Line 154.
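For intuition, the inactive/active state machine described above can be modelled by a single-threaded Python sketch. This is an illustration only: atomicity, concurrent interleavings, and the proof-relevant ghost state are elided, and the encoding of I(·), A(·), and descriptors as tuples is ours.

```python
# Hypothetical single-threaded model of the RDCSS state transitions
# (atomicity, helping under real concurrency, and retries are only mimicked).
# r is a one-element list modeling the governed location; cells maps the
# names of extraneous locations (the "ell" arguments) to their values.

def rdcss(r, cells, l, m1, n1, n2):
    old = r[0]
    if old == ("I", n1):                      # CmpX I(n1) -> A(d) succeeds
        d = ("D", l, m1, n1, n2)              # descriptor records the arguments
        r[0] = ("A", d)
        complete(r, cells, d)                 # finish our own operation
        return n1                             # rdcss returns the old value of r
    if old[0] == "I":                         # inactive but value != n1: pure case
        return old[1]
    complete(r, cells, old[1])                # active: help the other operation...
    return rdcss(r, cells, l, m1, n1, n2)     # ...then retry

def complete(r, cells, d):
    _, l, m1, n1, n2 = d
    m = cells[l]                              # read ell (Line 138 in the paper)
    new = n2 if m == m1 else n1               # new or old value n'
    if r[0] == ("A", d):                      # CmpX A(d) -> I(n') (Line 140)
        r[0] = ("I", new)

def get(r, cells):
    if r[0][0] == "A":                        # help an active rdcss first
        complete(r, cells, r[0][1])
    return r[0][1]
```

For example, with r = [("I", 1)] and cells = {"l": 7}, the call rdcss(r, cells, "l", 7, 1, 2) returns the old value 1 and leaves the logical value at 2.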
Line 138 must be the linearization point because it is the only point where one can guarantee that the logical value of r is n₁ and, at the same time, the value of ℓ is m. Concurrent get operations are then still prevented from reading the old value n₁ between the linearization point and the point when the physical state of the data structure is updated to store the new value n′ on Line 140.
A complication in the algorithm is that concurrent operations are not simply blocked while an rdcss operation is active. Instead, the implementation provides a fast path: a concurrent operation encountering an active state A(d) will try to help complete the active rdcss operation using the information provided in the descriptor d. Consequently, there can be an unbounded number of threads that concurrently read the value of ℓ on Line 138 and then compete for setting r back to the inactive state on Line 140. Thus, only the thread that will "win this race" and execute the CmpX first should linearize the active rdcss at Line 138. This makes the linearization point of rdcss future-dependent.
Jung et al. [2020] provided a fully-mechanized proof of RDCSS, correcting a technical issue in an earlier pencil-and-paper proof by Vafeiadis [2008]. Both proofs share the same basic idea: one introduces a prophecy variable for each active rdcss operation. The prophecy predicts the sequence in which the helping threads will execute the CmpX on Line 140. By case analysis on the prophecy's value at Line 138, a helping thread can then determine whether it will be the first thread to execute the CmpX and should therefore linearize the active rdcss.
We here provide an alternative proof that uses temporal interpolation instead of prophecy reasoning. However, we note that our proof draws on ideas from [Jung et al. 2020] to encode the ownership transfer of the linearization obligation and receipt resources between the active and helping threads via the shared data structure invariant.
The need for prophecy variables arises because the linearizability reasoning outlined in §5 demands that impure operations are committed at the actual linearization point, i.e., Line 138 for rdcss. If we take a closer look at the specification of the operation, we observe that it consists of two parts. The first part is pure and states that m and v are the values of ℓ and r at the linearization point, which are then related to m₁ and n₁. The second part is impure in the case where the logical value of r is updated to the value n₂. Without prophecies, the pure part can still be established at the linearization point. However, the impure part can only be established at the point when the winning thread updates the physical state on Line 140. Establishing the two parts at different points in time is permissible if we can show that no other operation can have been linearized between the two points. In a sense, we can think of rdcss as having a linearization interval rather than a linearization point. All concurrent operations on the data structure logically perceive this interval as a single point, which we identify with the beginning of the interval at Line 138.
To capture this argument formally, we extend the program logic from §5 for deriving linearizability judgments of the form P, I, H ⊩_lin^ti { a } com { b }. We need to augment the program state with auxiliary ghost state for the relevant bookkeeping. First, we introduce a resource Clock(r, t) for t ∈ ℕ that counts the number of get and rdcss operations that have already linearized. We will use this resource to express that no operations have linearized over some period of time. The underlying separation algebra is that of partial maps from references to clock values with disjoint union as composition. The resource is initialized to Clock(r, 0) when the instance is created, and the clock is incremented each time a linearizability obligation resource OBL is fulfilled.
Next, we change the separation algebra of obligation and receipt resources Σ_lin to allow a thread to linearize other threads. In particular, we track the two types of resources in different components of the ghost state and endow each with its own separation algebra. First, we introduce a separation algebra Σ_OBL of multisets of OBL_o values with separating conjunction defined as multiset union. The intuition for the multiset structure is that many operations with the same parameter values may be executing concurrently, so we need to track exactly how many such obligations are available at any time. In assertions, we will write OBL_o to represent the singleton multiset containing OBL_o.
For the receipt resources we give a two-layered construction. First, we introduce a separation algebra Σ_RCT of values of the form RCT_{o,v} and •RCT_{o,v}, where each •RCT_{o,v} value is the unit of the value RCT_{o,v}, and separating conjunction is undefined in all other cases. The intuition is that once we have obtained a fulfillment resource RCT_{o,v}, we can snapshot it as •RCT_{o,v} to keep a persistent record of its existence even after RCT_{o,v} has been consumed by the postcondition of its associated operation. The second step is to lift this separation algebra to partial maps ℕ ⇀ Σ_RCT in the expected way. That is, partial maps h and h′ only compose if for every t ∈ ℕ either h(t) or h′(t) is undefined, or h(t) and h′(t) compose. In assertions, we write t ↦ RCT_{o,v} for the singleton map {t ↦ RCT_{o,v}}, and similarly for t ↦ •RCT_{o,v}.
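The composition structure of this ghost state can be sketched in code. This is an illustration only, under our own value encodings: obligations as a Counter (multiset), receipt values as tuples tagged "rct" or "snap", and the lifted map composition returning None where composition is undefined.

```python
# Hypothetical sketch of the ghost-state separation algebras (encodings ours):
# obligations form multisets (composition = multiset union); receipts RCT and
# persistent snapshots (the "bullet" values) compose only in that a snapshot
# acts as the unit of its RCT value.
from collections import Counter

def compose_obl(a, b):
    """Sigma_OBL: multisets of obligation values, multiset union as composition."""
    return a + b                                # Counter addition = multiset union

def compose_rct(x, y):
    """Sigma_RCT: ('rct', o, v) or ('snap', o, v); the snapshot is the unit."""
    if x[0] == "snap" and x[1:] == y[1:]:
        return y                                # snap . rct = rct, snap . snap = snap
    if y[0] == "snap" and x[1:] == y[1:]:
        return x
    return None                                 # undefined in all other cases

def compose_maps(h1, h2):
    """Lift Sigma_RCT pointwise to partial maps N -> Sigma_RCT."""
    out = dict(h1)
    for t, v in h2.items():
        if t not in out:
            out[t] = v
        else:
            c = compose_rct(out[t], v)
            if c is None:
                return None                     # composition undefined
            out[t] = c
    return out
```

In particular, two full receipts for the same time never compose, which is exactly what makes the "at most one retrieval" arguments below work.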
We add a Σ_OBL and a Σ_RCT component to both the global and local state. Finally, to encode the helping mechanism, our data structure invariant will be a computation predicate rather than a state predicate. However, recall that in our linearizability proof rules, the predicate describing the abstract state of the data structure occurs below past operators in some of the rules. It must therefore be a state predicate. To circumvent this issue, we introduce an auxiliary ghost resource that tracks the abstract predicates Rstate(r, v) for all the existing RDCSS instances. The underlying separation algebra is that of partial maps from references to values with disjoint union as composition. The abstract predicate Rstate(r, v) in assertions thus represents the singleton map {r ↦ v}. Our actual data structure invariant will then be of the form Rstate•(r, v) = Rstate(r, v) * Inv(r, v), where Inv(r, v) is a computation predicate that ties v to the physical state of r.
The derivation rules for the judgments P, I, H ⊩_lin^ti { a } com { b } are appropriately updated to work with the new ghost state. For example, the rule for a pure linearization point (instantiated for get) now looks like this:

com-lin-pure
  a ⊆ ⟐(w = v ∧ Rstate(r, v))
  p = OBL_{get(r)} * Clock(r, t)
  q = t ↦ RCT_{get(r),w} * Clock(r, t + 1)
  ------------------------------------------------------------
  a * q, {(a * p, skip + (p ⇝ q))}, ∅ ⊩_lin^ti { a * p } skip { a * q }

More interesting is the rule we will use for handling the linearization interval of the rdcss operation:

com-lin-mixed
  P, I, H ⊩_ti { a } com { b }
  a ⊆ ⟐(ℓ ↦ m * Rstate(r, v) * Clock(r, t)) * Inv(r, v)
  b ⊆ Inv(r, v′) ∧ v′ = (m = m₁ ∧ v = n₁ ? n₂ : v)
  p = Rstate(r, v) * OBL_{rdcss(r,ℓ,m₁,n₁,n₂)} * Clock(r, t)
  q = Rstate(r, v′) * t ↦ RCT_{rdcss(r,ℓ,m₁,n₁,n₂),v} * Clock(r, t + 1)
  ------------------------------------------------------------
  P, I, H ⊩_lin^ti { a * p } com { b * q }

The premise of the rule states that we can show that com changes the physical state of the data structure such that its logical value is changed from v to v′ while preserving the invariant. Moreover, the new value v′ satisfies the postcondition of rdcss. This captures the impure part of the specification. The additional precondition ⟐(ℓ ↦ m * Rstate(r, v) * Clock(r, t)) then ensures that there was some past state at logical time t when the value of ℓ was m and the logical value of r was v. This captures the pure part of the specification. Because the abstract state transition also happens at logical time t, the specification is logically satisfied at a single point in time.
Data structure invariant. The invariant Inv(r, v) of the RDCSS data structure that we use for our proof is shown in Figure 13. The disjunction Inactive(r, v) ∨ Active(r, v) keeps track of the resources associated with the inactive and active state modes of the data structure and ties the logical value v to the physical state. The invariant also keeps track of the clock resource Clock(r, t). Throughout the rest of this section, we just write Clock(t) instead of Clock(r, t) since we will always reason about a single fixed r. The final conjunct Proto(r, t) stores some resources for each past operation that has already linearized before time t. In particular, it is used to encode the helping protocol. That is, it governs the transfer of the fulfillment resource for a completed rdcss operation from the helping thread that linearized the operation back to the thread that performed the operation.
The predicate Inactive(r, v) simply stores the resource r ↦ I(v), indicating that r is in inactive mode. Likewise, Active(r, v) stores r ↦ A(d) to indicate that r is in active mode. The predicate additionally contains a fraction of the descriptor location d. The invariant ties the logical value v to the value n₁ that is physically stored in d (i.e., the value last stored in r before r became active). It also contains a fraction of the permission on ℓ to ensure that helping threads can always safely dereference ℓ. The final conjunct OBL_{rdcss(r,ℓ,m₁,n₁,n₂)} is the linearization obligation of the active rdcss operation. The winning thread will convert this resource into the linearization receipt when it linearizes the operation and then transfer it to Proto(r, t). Likewise, the permissions to d and ℓ are transferred to Proto(r, t) at this point. They need to remain in the invariant forever, even after the active operation has been completed, because helping threads may still read these locations afterwards.
The constraint that the fractional permission on d exceeds 1/2 plays two important roles in the proof. First, the correctness of the implementation relies on the fact that descriptors are never reused after an operation has completed; otherwise, there is an ABA problem. The implementation assumes a garbage-collected semantics. This allows the invariant to retain the permissions for descriptors that will no longer be accessed. The invariant ensures that more than half of d's permission remains in Proto(t) after d has been used by a past rdcss operation. One can then conclude that d cannot have been reused by the currently active operation, as this would also require more than half of the permission in Active(r, v), exceeding the maximal full permission amount.
Similarly, the constraints on the permission amounts on t are used to govern the ownership transfer of the linearizability receipt for the associated rdcss operation. The thread executing the active rdcss operation retains 1/4 of the permission on t in its local state throughout its own execution of complete. By the time that the call to complete has returned, some thread must have linearized the active operation, which will have increased the clock value t. At the point when the clock is incremented, the predicate Proto(t) in the invariant forces the helping thread to relinquish ownership of the receipt t ↦ RCT_{rdcss(r,ℓ,m₁,n₁,n₂),n₁} and transfer it to the invariant. The active thread will then use the knowledge that the clock must have been incremented to retrieve the receipt from the invariant by trading it in for its 1/4 permission on t. If another thread had already retrieved the receipt, then the invariant would already own more than 3/4 of the permission on t, contradicting the fact that the active thread still owns 1/4.
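The permission-arithmetic side of these two arguments can be replayed as a toy check. This is pure fraction arithmetic under our own naming (can_coexist is not the paper's notation); the separation-logic content is elided.

```python
# Toy check of the fractional-permission accounting arguments above:
# permissions on a single location compose only if they sum to at most 1.
from fractions import Fraction as F

FULL = F(1)

def can_coexist(*shares):
    """True iff the given permission shares on one location can coexist."""
    return sum(shares, F(0)) <= FULL

# Receipt transfer: the active thread's 1/4 and the invariant's 3/4 fit exactly,
# but a second retrieval of the receipt (another 1/4 against more than 3/4) cannot.
assert can_coexist(F(1, 4), F(3, 4))
assert not can_coexist(F(1, 4), F(3, 4), F(1, 4))

# Descriptor reuse: "more than 1/2" in Proto plus "more than 1/2" in Active
# would exceed the full permission, so reuse is impossible.
assert not can_coexist(F(3, 5), F(3, 5))
```

The point is simply that ownership shares over a single cell form a bounded resource, which is what turns the counting arguments above into contradictions.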
In the remainder of the section, we discuss the proof in some more detail.
Proof of rdcss. We start with the proof of the rdcss operation, whose outline is shown in Figure 14. The precondition corresponds to the precondition of the sequential specification in Figure 11, except that we have replaced the abstract predicate Rstate(r, v) by the full invariant Rstate•(r, v) and also added the linearization obligation OBL_{rdcss(r,ℓ,m₁,n₁,n₂)}. After the allocation of the descriptor d, the thread gains the full permission d ↦ D(ℓ, m₁, n₁, n₂) in its local state, leading to the interference-free assertion on Line 169. Next, the thread tries to change the state of r to active using the CmpX. The proof then proceeds by case analysis on the returned old value of r.
If the old value of r was I(n₁), the CmpX succeeded and we end up on Line 173. Here, we know that the new value of r must now be A(d). To show that the invariant is maintained, we can move resources of the invariant into the local state provided we conversely transfer 1/4 of d ↦ D(ℓ, m₁, n₁, n₂) back into the global state so that the invariant is maintained. This finally yields the assertion on Line 179, which implies the postcondition and completes this case.
The next case is when the return value of the CmpX is I(n) for some n ≠ n₁. This yields the assertion on Line 182. This case corresponds to the pure case of the operation's specification. Hence, the proof can directly linearize the operation at this point and complete this case.
The final case is when the return value of the CmpX is A(d′). That is, a concurrent rdcss operation is already active. To satisfy the precondition of the call to complete, the proof proceeds as follows. First, the postcondition of CmpX gives us r ↦ A(d′) for some descriptor d′. Hence, we obtain Active(r, v) ∧ r ↦ A(d′) from the invariant. We can now transfer some fraction of the permission on d′ into the local state that leaves enough in the global state to maintain the invariant. We also transfer some fraction of the permission on ℓ into the local state. Finally, we derive the weak past predicate ⟐(r ↦ A(d′) * Clock(t)), which establishes the connection between the value of r at the current clock time t and the descriptor value d′. The resulting interference-free assertion on Line 188 implies the precondition of complete. The postcondition of complete yields the assertion on Line 190, which implies the precondition of the recursive call.
Proof of complete and get. The proofs of complete and get follow similar reasoning. Their outlines are shown in Figure 15 and Figure 16. We omit a detailed description but provide the key reasoning steps for complete inline. The proof of get closely follows that of rdcss but is simpler.
Changes in European wind energy generation potential within a 1.5 °C warmer world
Global climate model simulations from the 'Half a degree Additional warming, Prognosis and Projected Impacts' (HAPPI) project were used to assess how wind power generation over Europe would change in a future world where global temperatures reach 1.5 °C above pre-industrial levels. Comparing recent historical (2006-2015) and future 1.5 °C forcing experiments highlights that the climate models demonstrate a northward shift in the Atlantic jet, leading to a significant (p < 0.01) increase in surface winds over the UK and Northern Europe and a significant (p < 0.05) reduction over Southern Europe. We use a wind turbine power model to transform daily near-surface (10 m) wind speeds into daily wind power output, accounting for sub-daily variability, the height of the turbine, and power losses due to transmission and distribution of electricity. To reduce regional model biases we use bias-corrected 10 m wind speeds. We see an increase in power generation potential over much of Europe, with the greatest increase in load factor over the UK of around four percentage points. Increases in variability are seen over much of central and northern Europe, with the largest seasonal change in summer. Focusing on the UK, we find that wind energy production during spring and autumn under 1.5 °C forcing would become as productive as it is currently during the peak winter season. Similarly, summer winds would increase, driving up wind generation to resemble levels currently seen in spring and autumn. We conclude that the potential for wind energy in Northern Europe may be greater than has been previously assumed, with likely increases even in a 1.5 °C warmer world. While there is the potential for Southern Europe to see a reduction in their wind resource, these decreases are likely to be negligible.
Introduction
In December 2015, the Conference of the Parties (COP) to the United Nations Framework Convention on Climate Change (UNFCCC) convened a meeting in Paris, France, and invited the Intergovernmental Panel on Climate Change (IPCC) to provide a Special Report 'on the impacts of global warming of 1.5 °C above pre-industrial levels and related greenhouse gas emission pathways.' The resulting IPCC report is due to be released in autumn 2018 (www.ipcc.ch/report/sr15/). The IPCC determined that current climate datasets (such as the Coupled Model Intercomparison Project Phase 5, CMIP5) are not wholly suited to the task of assessing regional impacts under a 1.5 °C warming scenario (Mitchell et al 2016), while CMIP6 was not going to be available in time to be used for this assessment. Therefore the 'Half a degree Additional warming, Prognosis and Projected Impacts' (HAPPI) project was formed and called on climate modelling groups around the world to undertake a series of experiments specifically designed to quantify the relative risks associated with 1.5 °C and 2 °C of warming (Mitchell et al 2017, www.happimip.org/). In this study we specifically use the HAPPI dataset to address the question 'How would a future 1.5 °C warmer world affect wind energy generation across Europe?'

In a move to a low-carbon economy, wind power is a crucial component of electricity generation. Wind power now comprises a significant share of the world's electricity supply, with a total global installed capacity of 487 GW at the end of 2016, and around 154 GW installed in Europe (GWEC 2017). Wind energy now accounts for 18% of the total installed power generation capacity in Europe (Wind Europe 2017) and is set to increase further in line with the European Commission's '2030 Energy Strategy', which currently includes a renewable energy target of at least 27%.
The output from wind turbines is related nonlinearly to the local, intermittent and highly variable nature of wind. This makes it challenging to match demand and supply. Therefore, near-term weather forecasts are routinely employed to help optimise this balance (Foley et al 2012). Climate models may also provide potential utility on longer monthly and seasonal timescales, but this potential is currently underutilised and is an area of active research (White et al 2017). Various studies have used post-processed output from climate models at their native resolutions, and applied dynamical and statistical downscaling to simulate possible changes in wind resource. Changes found in the annual mean, for standard future forcing scenarios from the previous two CMIP exercises (CMIP3 and CMIP5), include increases in Northern Europe (Barthelmie 2010, Hueging et al 2013) and decreases over Southern Europe (Carvalho et al 2017). However, the scope of these changes and their seasonal details vary between climate models (Reyers et al 2016, Tobin et al 2015).
A key factor influencing future changes in wind energy generation is the change in the large-scale wind patterns. Climate models generally project a northward shift of the peak North Atlantic westerly winds, by about 1 degree latitude at the end of the 21st century under the 'business as usual' representative concentration pathway 8.5 (RCP8.5) scenario (Christensen et al 2013, Collins et al 2013). However, this hides seasonal differences, as the poleward shift of the Atlantic jet is less pronounced in winter (Barnes and Polvani 2013). Assessing the downstream extension of the westerly wind maximum has been shown to be a better description of the changes over mainland Europe (e.g. Haarsma et al 2013), although it should be noted that there is considerable uncertainty about dynamical changes (Shepherd 2014). It is important to note that projected changes in the frequency of phenomena affecting variability of wind power generation, such as blocking and extratropical cyclones, are generally small when averaged over different climate models (Ohba et al 2016), and more uncertain than changes in the mean state (Masato et al 2013, Zappa et al 2013). However, there is agreement within the literature that climate models project a substantial decrease in winter storm frequency in the Mediterranean (Christensen et al 2013, Zappa et al 2013).
To make robust predictions about wind generation under a future 1.5 °C warmer world it is important that any regional model biases are corrected, and that we have large sample sizes to enable extreme conditions to be represented. This makes the HAPPI dataset ideal for this study. In this paper our primary aim is to identify regions across Europe which will likely see increases and decreases in wind generation potential within a 1.5 °C warming world.
Methodology
This study uses output from atmosphere-only global climate models run as part of the 'Half a degree Additional warming, Prognosis and Projected Impacts' (HAPPI) project (Mitchell et al 2017, www.happimip.org). Ten different modelling centres took part in HAPPI, each running the three 'Tier 1' experiments: (i) a climate run for a recent decade, 2006-2015; (ii) 1.5 °C warmer than pre-industrial (1861-1880 conditions), relevant for the 2106-2115 period; and (iii) similar to the previous experiment but for 2.0 °C warmer than pre-industrial. Each experiment required 50- to 100-member ensembles, each spanning 10 years. In this paper we will only use the historical and future 1.5 °C experiments (experiments (i) and (ii) described above). More details on the design of the HAPPI experiments are covered in the supplementary material and Mitchell et al 2017.
In this study we only use those models where daily mean 10 m wind speed has been locally bias-corrected using the 'Inter-Sectoral Impact Model Intercomparison Project' ISIMIP2b calibration methodology (Lange 2016). The bias correction was performed on a regular 0.5° × 0.5° grid using a first-order conservative remapping scheme over all land points (see www.isimip.org/gettingstarted/isimip2bbias-correction/). The transfer functions for the bias-corrections were computed from longer runs of 25 years or more. The four available bias-corrected models are here referred to as CAM4-2degree, ECHAM-3-LR, MIROC5, and NorESM1-HAPPI. Only the first 10 ensemble members for each of the four models were bias-corrected, providing us with 100 years of daily data for each model and each experiment. To ensure we only show multi-model mean changes between experiments where there is reasonable agreement between the four models, in this paper we only define a change (between 1.5 °C and the historical experiment) for those regions where three or four models agree on the sign of that change.
Then, for each model grid point, the multi-model mean change is calculated by averaging only those values that are in agreement (e.g. Kaye et al 2012). For example, if the change in load factor in a single grid point between the historical and 1.5 °C experiments for the four individual models is 2, −1, 3, and 4, then we only average the positive values (here giving a value of 3). The −1 is ignored as an outlier.
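The sign-agreement rule can be sketched as follows. This is a minimal illustration; the function name and the convention of returning None for masked grid points are ours, not the paper's code.

```python
# A minimal sketch of the sign-agreement rule: average only the model changes
# that share the majority sign, and mask grid points where fewer than three
# of the four models agree on the sign.

def agreed_mean(changes, min_agree=3):
    pos = [c for c in changes if c > 0]
    neg = [c for c in changes if c < 0]
    majority = pos if len(pos) >= len(neg) else neg
    if len(majority) < min_agree:
        return None                      # insufficient agreement: no change shown
    return sum(majority) / len(majority)

# The paper's example: changes 2, -1, 3, 4 -> the -1 outlier is ignored
print(agreed_mean([2, -1, 3, 4]))        # -> 3.0
```

With a 2-2 sign split, e.g. [2, −1, 3, −4], the grid point is masked and no change is defined.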
The wind turbine power curves are derived using the methodology in Macleod et al (2017), which was validated with up to 11 years of data from 282 turbines located across varying terrain. Daily mean surface winds are transformed to a wind turbine load factor using a defined 'power curve' (see figure S1 in the supplementary material, available at stacks.iop.org/ERL/13/054032/mmedia). Here we define the cut-in speed at 4 m s−1 (where the wind turbine blades start to turn) and a cut-out speed of 25 m s−1 (above which the blades are prevented from spinning for safety). The power generation is capped at the rated power (the speed at which maximum load factor is reached), here 12.5 m s−1. Load factor is measured as a percentage and is defined as the actual power generated relative to the maximum. This methodology accounts for the sub-daily temporal variability in wind speed using a Rayleigh distribution, the increase in wind speed from near-surface (10 m) to turbine height (here defined as 60 m), and power losses due to the transmission and distribution of electricity. After taking these corrections into account, the resulting power curve (shown by the orange line in figure S1) is then used to transform daily 10 m wind speeds from the model simulations into daily load factor. It should be noted that Macleod et al (2017) found that the calculation of load factor is relatively insensitive to the atmospheric temperature (compared to wind speeds) and so it is appropriate to use a fixed temperature in the calculation of the power curve, which here was set at 10 °C.
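To illustrate the transformation, here is a hedged sketch. The cut-in, rated, and cut-out speeds are as stated above, but the exact curve shape, the 10 m-to-hub-height extrapolation, and the loss factor come from Macleod et al (2017) and are not reproduced here; the cubic ramp, the 1.3 hub-height factor, and the 10% loss below are illustrative assumptions only.

```python
# Illustrative sketch of a daily load-factor transformation (curve shape,
# hub-height scaling, and loss factor are assumptions, not the paper's values).
import math

CUT_IN, RATED, CUT_OUT = 4.0, 12.5, 25.0    # m/s, as defined in the text
LOSS = 0.90                                 # assumed transmission/distribution factor

def turbine_power(v):
    """Instantaneous load factor (0..1) for hub-height wind speed v."""
    if v < CUT_IN or v > CUT_OUT:
        return 0.0
    if v >= RATED:
        return 1.0                          # capped at rated power
    return ((v - CUT_IN) / (RATED - CUT_IN)) ** 3   # assumed cubic ramp

def daily_load_factor(mean10m, hub_scale=1.3, dv=0.1):
    """Integrate the curve over a Rayleigh distribution of sub-daily speeds.

    mean10m is the daily-mean 10 m wind; hub_scale is an assumed 10 m -> 60 m
    extrapolation factor. Returns a percentage, net of assumed losses."""
    mean_hub = hub_scale * mean10m
    if mean_hub <= 0:
        return 0.0
    sigma = mean_hub / math.sqrt(math.pi / 2)   # Rayleigh scale from its mean
    lf, v = 0.0, dv / 2
    while v < CUT_OUT + 5 * sigma:
        pdf = (v / sigma**2) * math.exp(-v**2 / (2 * sigma**2))
        lf += turbine_power(v) * pdf * dv       # numerical expectation
        v += dv
    return 100 * LOSS * lf
```

The qualitative behaviour matches the description: zero output below cut-in and above cut-out, saturation at rated speed, and a smooth daily load factor that grows with the daily-mean wind because the Rayleigh spread lets part of each day exceed cut-in even when the mean does not.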
Our aim is to identify regions across Europe which will likely see increases and decreases in wind generation potential within a 1.5 °C warming world. In order to constrain the problem, we use the same power curve throughout the study, thus making the assumption that any changes in surface roughness and sub-daily wind distribution, within the timeframe of reaching a 1.5 °C warming world, will be small relative to the changes driven by large-scale winds. Predicting future efficiency gains from improvements in turbine technology and optimisations in hub-height is also highly uncertain. Using the same power curve throughout this study therefore allows us to focus solely on the relative changes in wind energy generation driven by shifts in large-scale wind patterns. Due to these various uncertainties and the lack of sub-daily climate model data output, the assumptions made here are reasonable for a climate scenario-based study and are in line with those found in other studies (e.g. Karnauskas et al 2017).
Results
Four atmosphere-only climate models are used to assess the change in wind energy generation between recent historical climate conditions (2006-2015) and a future where global mean surface temperatures reach 1.5 °C above pre-industrial levels. We start by assessing how the models compare in their representation of large-scale wind change. Figure 1 shows the change in the median zonal wind speeds at 850 hPa (u850) between the future 1.5 °C and historical experiments. The models all simulate a northward shift in the region of the Atlantic jet (figure 1), resulting in a significant increase (p < 0.01) in wind speeds around 54° N (over the UK, Germany and Poland) and a significant (p < 0.05) decrease around 42° N (over north Africa and Spain) (see figure S2). The regional peak magnitudes of wind speed change agree well between the CAM4-2degree, ECHAM-3-LR and MIROC5 models at around 4 m s−1, while the NorESM1-HAPPI model appears to be more sensitive to the additional global greenhouse gas forcing, with changes exceeding 6 m s−1 seen over Germany, and over and downwind of Scotland.
We now assess the seasonal impact these changes have on the generation of wind energy and its variability (figure 2). The seasonal mean wind load factor for each of the four models is calculated as described in section 2. As the 10 m wind speed fields are bias-corrected, the mean historical maps for each model are all very similar to one another; therefore (after transformation using the power curve shown in figure S1) it is appropriate to simply calculate and display the multi-model mean (figures 2(a)-(d), labelled 'P'). However, the change in wind power generation (calculated in percentage points) under 1.5 °C varies somewhat between the four models, as expected from figure 1. From now on the multi-model mean changes (between the 1.5 °C and historical experiments) are shown only for those regions where three or four models agree on the sign of the change (see section 2 for details).
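The three-of-four sign-agreement masking described here can be sketched as follows; the array layout and toy values are illustrative, not the HAPPI data.

```python
import numpy as np

def masked_multimodel_mean(delta, min_agree=3):
    """Multi-model mean change, masked where models disagree on the sign.

    delta : array of shape (n_models, ny, nx) holding each model's change
            in load factor (percentage points).
    Cells where fewer than `min_agree` models share the sign of the
    multi-model mean are set to NaN (left unshaded on the map).
    """
    mean = delta.mean(axis=0)
    agree_pos = (delta > 0).sum(axis=0)
    agree_neg = (delta < 0).sum(axis=0)
    n_agree = np.where(mean >= 0, agree_pos, agree_neg)
    return np.where(n_agree >= min_agree, mean, np.nan)

# Toy example: four models, 1x2 grid. All four agree at the first point;
# the models split 2-2 at the second, so it is masked.
delta = np.array([[[2.0, 1.0]], [[3.0, -1.0]], [[1.5, 0.5]], [[2.5, -0.5]]])
result = masked_multimodel_mean(delta)
print(result)  # [[2.25  nan]]
```

Masking rather than averaging through disagreement keeps the maps honest about where the four-model ensemble carries a robust signal.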
The multi-model change under 1.5 °C is shown in figures 2(e)-(h) (labelled ΔP). Under the historical experiment, the largest load factor is generally seen in winter (figure 2(a), December-February 'DJF'), with values exceeding 20% over the UK and similar values along the Portuguese coastline, and slightly lower values across mainland Europe, especially over Poland and Belarus. These spatial patterns are similar across the four seasons, although with lower values in spring (figure 2(b), March-May 'MAM') and autumn (figure 2(d), September-November 'SON'), and the lowest values in summer (figure 2(c), June-August 'JJA'), at around 10% over the UK. This seasonality is in agreement with Heide et al (2010). The seasonal spatial changes (ΔP) across the four seasons are also very similar to one another, with the largest increases found over the UK, where the load factor rises by up to four percentage points in all four seasons (figures 2(e)-(h)). Changes over mainland Europe are less pronounced, with even a small reduction (around 1 percentage point) in load factor over parts of Spain. This again is in agreement with figure 1.
We also assess the day-to-day standard deviation (σ) of load factor within the historical experiment in figures 2(i)-(l), and its change under a 1.5 °C future in figures 2(m)-(p) (Δσ). The regions and seasons with the largest values of σ appear to be broadly similar to those with the largest values of P. However, the seasons with the largest change under a 1.5 °C future (Δσ) are summer and autumn (JJA and SON), with the highest values across the UK and central and eastern Europe.
In figure 3 we assess the change in potential viability of wind farms across Europe. Here we use a load factor threshold of 10%, which is suitable for the specific power curve used in this study (figure S1), to more clearly highlight the spatial differences between Northern and Southern Europe. (Note that while this threshold of 'viability' might be higher if we calculated our load factors using a different power curve, the spatial distribution should be fairly robust.) Over the UK, the coastal zone of Portugal and the northern parts of central Europe (France, Belgium, the Netherlands, Germany, Denmark, Poland and Belarus), the mean load factor exceeds 10% during both the historical and 1.5 °C experiments (blue shading). Conversely, most of Southern Europe falls below this threshold in both experiments (white/unshaded).
What is potentially more interesting are those regions where we see a switch in the exceedance between experiments. Here we see large areas where wind farms could become more viable in the future, over Germany, Poland and Lithuania (purple shading). However, there are only a few regions showing the opposite situation, where the historical load factor exceeds this threshold but drops below it in the 1.5 °C future experiment (black shading). This is expected from figure 2: in general most regions, especially in Northern Europe, see an increase in load factor, with few showing any decrease.
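The four map categories can be reproduced with a small classification routine; the integer codes and colour names follow the text, while the sample load factors are invented.

```python
import numpy as np

def viability_classes(lf_hist, lf_future, threshold=0.10):
    """Classify grid cells by exceedance of a viability threshold.

    Returns integer codes matching the four map shadings:
      0 -> below threshold in both experiments (white)
      1 -> viable in both experiments (blue)
      2 -> becomes viable under 1.5 degC (purple)
      3 -> loses viability under 1.5 degC (black)
    """
    hist = np.asarray(lf_hist) >= threshold
    future = np.asarray(lf_future) >= threshold
    codes = np.zeros(np.shape(hist), dtype=int)
    codes[hist & future] = 1
    codes[~hist & future] = 2
    codes[hist & ~future] = 3
    return codes

lf_hist = np.array([0.05, 0.20, 0.08, 0.12])
lf_future = np.array([0.06, 0.24, 0.11, 0.09])
print(viability_classes(lf_hist, lf_future))  # [0 1 2 3]
```

The purple category (code 2) is the one the text highlights over Germany, Poland and Lithuania; the black category (code 3) covers only a few points.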
We now focus on Central England within the UK (3.5° W-0° E, 51.5° N-53.5° N), where we see the largest changes in load factor under a 1.5 °C warming world (see figure 2). In figure 4 we show the distributions of the probability of exceedance of daily mean load factor to illustrate how the frequencies of larger values (>20%) change between experiments and between seasons. Note that the maximum load factor is 45%, limited by the power curve used in this study (figure S1). In agreement with figure 2, historical values of load factor (black lines) are generally larger in DJF and smaller in JJA, with MAM and SON sharing similar distributions. The smallest shift between experiments is seen in DJF, while the largest shift is in JJA. The 1.5 °C distributions in MAM and SON resemble the historical DJF distribution, i.e. under 1.5 °C forcing we could see 9 months of the year where the wind resource resembles that currently seen in the peak winter months. Similarly, the future 1.5 °C distribution in JJA resembles the historical distributions in MAM and SON.
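Probability-of-exceedance curves of the kind shown in figure 4 can be sketched as follows; the synthetic gamma-distributed daily load factors stand in for the model output and are capped at the 45% limit of the power curve.

```python
import numpy as np

def exceedance_probability(daily_lf, thresholds):
    """Probability that the daily mean load factor exceeds each threshold."""
    daily_lf = np.asarray(daily_lf)
    return np.array([(daily_lf > t).mean() for t in thresholds])

# Toy daily load factors for one season (fractions, capped at 0.45)
rng = np.random.default_rng(0)
daily_lf = np.clip(rng.gamma(shape=2.0, scale=0.08, size=9000), 0.0, 0.45)
thresholds = np.linspace(0.0, 0.45, 10)
curve = exceedance_probability(daily_lf, thresholds)
# curve starts at 1 (every day exceeds zero) and decreases monotonically
# to 0 at the power-curve ceiling of 0.45
```

Comparing such a curve between the historical and 1.5 °C experiments, season by season, is exactly the comparison drawn by the black and blue lines of figure 4.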
Discussion
The results in this paper are in broad agreement with Pryor et al (2005), who find an increase in the annual wind energy resource over Northern Europe, though a large fraction of their uncertainty (within the regional climate model simulations used) originates from the inter-model differences in the global climate model boundary conditions. Our findings also agree with the review of climate change impacts on wind energy by Pryor and Barthelmie (2010), who find that the potential for wind energy in Northern Europe is not at risk from climate change. Our results suggest that with 1.5 °C of warming, load factor potential in Northern Europe will only increase. Any decreases seen here in Southern Europe are small and unlikely to affect the potential for wind power generation. This is in agreement with Carvalho et al (2017).
Due to large inter-model uncertainties, many studies have emphasised the need to consider multiple climate models when assessing future changes in wind energy (e.g. Reyers et al 2016, Tobin et al 2015). Despite this, there are still notable disagreements between studies. For example, over the USA Johnson and Erhardt (2016) found the opposite sign of impact compared to Karnauskas et al (2017). Additionally, while we find a large change in future wind resource over the UK, Karnauskas et al (2017) showed little change; and while Carvalho et al (2017) find the largest increases in generation around the Baltic Sea, we find the largest increases over the west of Northern Europe. Also in contrast to our findings, dynamically or statistically downscaled climate models have shown decreases in wind energy potential over Western Europe (southern UK, Germany and France) (Hueging et al 2013, Reyers et al 2016). The lack of regional agreement between these climate-model-based studies demonstrates their sensitivity to the inter-model spread, thus requiring a large radiative forcing (e.g. RCP8.5) to produce a clear signal of change (Reyers et al 2016).
(Figure 3 caption: shading over land shows the four combinations of exceedance of the 10% load factor threshold, which is suitable for the power curve used in this study. Blue: the annual mean load factor exceeds the threshold in both the historical and 1.5 °C experiments. White/unshaded: below the threshold in both experiments. Purple: below 10% in the historical experiment but exceeding it under 1.5 °C. Black (only a few points, over France): currently viable but becoming unviable in a 1.5 °C warming world. Ocean points are masked (grey shading) as they were not bias-corrected.)
In response to this, the HAPPI project was designed to reduce inter-model spread and produce a clearer signal of change between experiments. This was achieved by: (i) fixing the levels of greenhouse gas forcing; (ii) using sea surface temperatures (SSTs) based on observations, thus reducing the occurrence of atmospheric artefacts driven by the SST biases characteristic of coupled atmosphere-ocean models; and (iii) using a relatively large number of ensemble runs for each model assessed. As a result, our analysis demonstrates that notable changes in wind power potential could occur even under this weaker forcing. However, we recognise that further investigation using a larger number of models, together with dynamical downscaling, would be important to better understand the likelihood of the changes presented in this paper.
Whilst our analysis focuses mainly on potential changes in the underlying wind resource, other factors that will influence load factor have not been taken into account in this study, such as improvements in wind turbine technology and site placement. For example, the average load factor in the UK increased from 26% to 32% between 2005 and 2015, whilst projections based on planned long-term installations suggest that the UK average load factor may approach 40% by 2025 (Drew et al 2015, Staffell and Pfenninger 2016). For Europe as a whole, planned developments of the wind fleet are estimated to have load factors one-third higher than today, and any increases in underlying wind speeds will raise this further.
The limited spatial resolution of climate models restricts their ability to accurately simulate wind speeds, particularly in regions of complex topography. Here, models are unable to represent the detailed topography and are therefore likely to underestimate the potential load factor by missing speed-up and blockage effects. The true potential of wind power will be realised in practice by optimised placement of turbines at points within a region that on average reach higher wind speeds than their surroundings. Indeed, Staffell and Pfenninger (2016) find that basing load factor estimates on 'reanalysis' data (which, like climate models, has limited spatial resolution) leads to errors across Europe, with underestimation in the mountainous regions of Southern Europe and Scandinavia relative to Northern Europe. They recommend that, in order to obtain accurate values of load factor from reanalysis data, results should be bias-corrected against actual load factor data. This suggestion would also apply to load factor calculated using climate models. Though the winds from the climate models are bias-corrected, no bias correction of load factor has been attempted in the current study, as the main focus of the paper is on relative changes; any calibration factor would be applied to present-day and future load factor estimates equally, and so the relative change would be unaffected.
(Figure 4 caption: black lines show the frequency distribution for the historical experiment, blue lines the future 1.5 °C experiment; see also figure 2. The daily load factor data are computed from daily mean 10 m wind data from four climate models, each run for 100 years.)
Limited spatial resolution also means that sub-grid-scale processes and turbulent effects occurring below the grid scale are unrepresented. Given this, any sub-grid-scale nonlinear changes in wind speed under a 1.5 °C scenario are not captured by the models; the results are therefore valid under the assumption that these are small relative to changes at the large scale. Of course, this limitation is faced by all results derived from climate model projections.
Conclusion
As of April 2018, 195 UNFCCC members have signed the agreement to keep global mean temperature rise this century well below 2 °C above pre-industrial levels, and to pursue efforts to limit the temperature increase even further to 1.5 °C. To this end, an Intergovernmental Panel on Climate Change (IPCC) 'Special Report on Global Warming of 1.5 °C' will be published in autumn 2018 with the aim to help strengthen the global response to the threat of climate change, produce sustainable development strategies and outline efforts to eliminate poverty.
In this study, we focused on how the potential for future wind generation of electricity over Europe would change within a 1.5 °C warming world, relative to current climate conditions. We used large ensembles from atmosphere-only global climate models from the HAPPI project. We derived daily wind power output by adopting the methodology of MacLeod et al (2017), using the output of the four models in which near-surface (10 m) wind speeds had been bias-corrected. This method takes into account the distributions of sub-daily variability, the height of the turbine, and power losses due to transmission and distribution of electricity.
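The exact formulation of MacLeod et al (2017) is not reproduced here, but a minimal sketch of two of the ingredients named above, hub-height extrapolation and transmission/distribution losses, might look like this. The 80 m hub height, the 1/7 power-law exponent and the 10% loss fraction are illustrative assumptions, not values taken from the paper.

```python
def hub_height_wind(v10, hub_height=80.0, ref_height=10.0, alpha=1 / 7):
    """Extrapolate 10 m wind speed to hub height with a power-law profile."""
    return v10 * (hub_height / ref_height) ** alpha

def delivered_load_factor(raw_lf, loss_fraction=0.10):
    """Apply a flat transmission-and-distribution loss to a raw load factor."""
    return raw_lf * (1.0 - loss_fraction)

v_hub = hub_height_wind(8.0)      # 8 m/s at 10 m -> roughly 10.8 m/s at 80 m
lf = delivered_load_factor(0.30)  # 0.30 raw -> 0.27 delivered after 10% losses
```

The hub-height wind would then be passed through the power curve before the loss factor is applied.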
We found an increase in load factor over much of Europe, with the UK seeing the greatest increase, of around four percentage points. However, we also found that the UK would experience the greatest increase in variability, especially during the summer months (June-August). Germany and Poland could also see regions with notable increases in variability.
Lastly, we assessed the change in the distribution of daily load factor between current climate conditions and a future 1.5 °C warmer world, and the differences between seasons, for Central England, UK. We found that wind energy resources during spring and autumn could become as productive as they currently are during the peak winter season, i.e. under 1.5 °C forcing, 9 months of the year could see wind speeds resembling those currently seen in the peak winter season. Similarly, during the summer months, when wind speeds are generally low, winds under a 1.5 °C warming world could increase to resemble those currently seen in spring and autumn. While this study only assessed the changes in wind resources over land (where the data were bias-corrected; see section 2), there is no indication (figure 1) that the regional changes would be significantly different offshore. It should be noted that we only assessed broad-scale changes in winds, highlighting regions with the potential for increases and decreases in future wind generation. As an area of further work, one could consider localised changes in surface roughness, e.g. through changes in vegetation.
We conclude that the potential for wind energy in Northern Europe may be greater than has previously been assumed (TCEP 2017), with likely increases even in a 1.5 °C warmer world presenting an opportunity for climate mitigation. While there is the potential for Southern Europe to see a reduction in its wind resource, any changes are likely to be negligible.
Chiral arylsulfinylamides as reagents for visible light-mediated asymmetric alkene aminoarylations
Two- or one-electron-mediated difunctionalizations of internal alkenes represent straightforward approaches to assemble molecular complexity by the simultaneous formation of two contiguous Csp3 stereocentres. Although racemic versions have been extensively explored, asymmetric variants, especially those involving open-shell C-centred radical species, are much more limited both in number and scope. Here we describe enantioenriched arylsulfinylamides as all-in-one reagents for the efficient asymmetric, intermolecular aminoarylation of alkenes. Under mild photoredox conditions, nitrogen addition of the arylsulfinylamide onto the double bond, followed by 1,4-translocation of the aromatic ring, produce, in a single operation, the corresponding aminoarylation adducts in enantiomerically enriched form. The sulfinyl group acts here as a traceless chiral auxiliary, as it is eliminated in situ under the mild reaction conditions. Optically pure β,β-diarylethylamines, aryl-α,β-ethylenediamines and α-aryl-β-aminoalcohols, prominent motifs in pharmaceuticals, bioactive natural products and ligands for transition metals, are thereby accessible with excellent levels of regio-, relative and absolute stereocontrol.
Nature's secondary metabolites, as well as de novo-designed small-molecule probes, are substantially populated with nitrogen atoms. HIV inhibitors 1, ion-channel modulators 2, opioids 3,4 and endogenous neurotransmitters 5 (Fig. 1a) are representative examples of relevant bioactive compounds showcasing N-containing motifs, many of which feature amines substituted with an aromatic group in the β-position. Access to these prominent chemical blueprints in enantiomerically pure form is crucial, not only for accurate target engagement studies, but also for the optimization of their pharmacological profiles. A representative example is R-(+)-dinapsoline, a selective and efficient D1 dopamine agonist 5, which has been found to be 161-fold more potent than its S-(−)-enantiomer.
combined with excellent regio-, diastereo- and enantioselectivity, highlight both the generality and synthetic utility of these transformations in the assembly of relevant blueprints populating pharmaceuticals, bioactive natural products and ligands for transition-metal catalysis.
Reaction optimization
Enantiopure (S_S)-N-(p-tolylsulfinyl)butyramide 1a and trans-anethole were chosen as model substrates for our initial investigations. Reactions under blue light-emitting diode irradiation in the presence of different photocatalysts were performed, combining these two starting materials in a 1:1.2 ratio (experimental details are provided in Supplementary Table 1). Extensive screening revealed that, using 1 mol% of (Ir[dF(CF3)ppy]2(dtbpy))PF6 and 0.3 equiv. of potassium benzoate as the base in an isopropanol/trifluoroethanol/water mixture at ambient temperature, the desired β,β-diarylethylamine 2.1 could be produced in 53% yield with excellent diastereoselectivity (>20:1 d.r.) and a promising 89:11 enantiomeric ratio (e.r.). Adjusting the stoichiometry between 1a and the olefin to a 1:2 ratio and decreasing the reaction temperature to −20 °C furnished 2.1 in an improved 83% yield with almost perfect levels of both relative and absolute stereocontrol (>20:1 d.r.; >99:1 e.r.). Furthermore, the efficiency of stereochemical information transfer was maintained when the reaction was scaled up tenfold, affording 2.1 in 58% yield (>20:1 d.r. and 98:2 e.r.; experimental details are provided in Supplementary Fig. 4). Additional experiments in the presence of radical inhibitors or excluding the photocatalyst, the light or the base resulted in the recovery of both unreacted starting materials (experimental details are provided in Supplementary Table 2).
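The reported e.r. values can be related to the equivalent enantiomeric excess with a one-line conversion:

```python
def er_to_ee(major, minor):
    """Convert an enantiomeric ratio (major:minor) to percent ee."""
    return 100.0 * (major - minor) / (major + minor)

print(er_to_ee(89, 11))  # 78.0
print(er_to_ee(98, 2))   # 96.0
```

So the initial 89:11 e.r. corresponds to 78% ee, while the 98:2 e.r. obtained on scale-up corresponds to 96% ee.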
Reaction scope
With the optimal conditions in hand, we set out to explore the compatibility of different N-atom donors and aryl migrating groups within the all-in-one arylsulfinylamide reagents. To this end, modifications on both the N-atom donor and the aryl migrating group were investigated. Alkyl amide derivatives featuring a diverse set of substitution have been showcased as non-cleavable C,N-tethered reagents to orchestrate asymmetric annulations with alkenes. Additionally, a handful of examples featuring three-component reactions have been reported (Fig. 1b). In 2017, the addition of N-fluoro-N-alkylsulfonamide (NFSA)-derived radicals and (hetero)arylboronic acids across the π-system in the presence of a chiral BOX-ligated copper catalyst to yield β,β-diarylethylamines with excellent levels of absolute stereocontrol was demonstrated 38. More recently, an asymmetric Minisci reaction involving quinoline derivatives and O-acyl hydroxylmethylamine with N-vinylacetamide as a radical acceptor was reported 39. Notwithstanding the undisputable synthetic utility of these transformations, limitations regarding both the types of N donors and olefinic partners justify the quest for alternative, more flexible strategies in this context.
Inspired by these results, we hypothesized that the addition of a nitrogen atom bound to a chiral arylsulfoxide group onto the terminal position of a 1,2-disubstituted alkene could control the absolute stereochemistry in the formation of the newly created Csp3-N bond as well as at the neighbouring Csp3-Csp2 centre generated upon a radical Truce-Smiles rearrangement of the corresponding aryl moiety. In this Article we describe enantioenriched arylsulfinylamides as multifunctional all-in-one reagents able to forge, regio- and stereoselectively, two contiguous Csp3 stereocentres (Table 1). Furthermore, successful incorporation of aromatic and heteroaromatic substituted amides and even the tert-butyl carbamate derivative (2.7-2.9) emphasizes the functional-group compatibility of the method. Moreover, the carbamate derivative 2.9 could also be transformed under the standard conditions, providing access to the corresponding free amine upon acid hydrolysis, with complete retention of the stereochemical information (Supplementary Information, compound 2.39).
The scope with respect to the migrating aromatic groups was investigated next. Transposition of a simple phenyl group proceeded smoothly under standard conditions to give 2.10 in high yield. Interestingly, substrates bearing both electron-withdrawing and electron-donating groups in the para-position of the arene proved to be suitable precursors, furnishing the corresponding β,β-diarylethylamines (2.11-2.14) in good yields with outstanding levels of stereocontrol. The meta-methoxy and meta-bromo derivatives also delivered the desired products (m-OMe, 2.15; m-Br, 2.16), although with slightly lower stereoinduction. In contrast, more sterically hindered substrates bearing ortho-substituted aromatic rings (o-Me, 2.17; o-Br, 2.18) were obtained with excellent enantioselectivities. To our delight, the ortho-bromo adduct 2.18 was quantitatively converted into the corresponding indoline in the presence of a Pd catalyst with retention of configuration, highlighting the synthetic
(Table 1 footnote: unless otherwise noted, reactions were carried out under the standard conditions. Full conversion of the starting material was observed, and yields are reported after purification by column chromatography on silica gel. All compounds were obtained with >20:1 d.r. The d.r. and e.r. values were determined by 1H NMR of the crude reaction mixture and by chiral stationary-phase HPLC of the isolated products, respectively. n-Pr, n-propyl; PMP, p-methoxyphenyl; Ph, phenyl; TBS, t-butyldimethylsilyl; t-Bu, t-butyl.)
potential of the obtained aminoarylation products (Supplementary Information, compound 2.40). Moreover, heteroaryl migration also took place under the standard conditions, furnishing the corresponding thiophene derivatives 2.19 and 2.20 in good yields, with excellent levels of both relative and absolute stereocontrol.
X-ray crystallographic analysis of compounds 2.3 and 2.11 confirmed the syn addition of the N atom and the arene across the π-system. Adduct 2.11, stemming from an (S_S)-arylsulfinylamide precursor containing a Br atom, enabled us to assign the absolute configuration of the major diastereoisomer produced in this reaction as (1S,2R). It is important to note that the substitution pattern in the aromatic ring affects the priority of the groups at the new asymmetric carbon atom. As a result, a (1R,2R) configuration can be assigned to most of the obtained compounds. The reaction proved to be stereospecific: when the (R)-enantiomer of the arylsulfinylamide, (R_S)-1a′, was used as a precursor, the opposite enantiomer of the β,β-diarylethylamine product, (1S,2S)-2.1′, could be obtained in similar yield and e.r. (experimental details are provided in Supplementary Fig. 28).
The compatibility of the reaction between (S_S)-N-(p-tolylsulfinyl)butyramide 1a and different styrene partners was also explored (Table 2). Although simple styrenes (R3 = H) were not competent substrates, phenethyl, cyclohexyl, 4-tetrahydropyranyl and carbinyl acetate groups at the terminal position of the double bond were effectively accommodated in the aminoarylation process. The corresponding β,β-diarylethylamines (2.21-2.25) were obtained in moderate to good yields with high enantioselectivity. Moreover, a chiral para-methoxystyrene derived from (R)-citronellal provided the corresponding aminoarylation adduct 2.26 in moderate yield but with excellent levels of regio- and both relative and absolute stereocontrol.
To further expand the scope of this multicomponent radical cascade, different electron-rich olefins were surveyed. To our delight, aromatic vinyl amides turned out to be suitable partners, providing efficient access to aryl-α,β-ethylenediamines. These motifs are not only present in biologically active compounds 2,45, but have also been used prominently as bidentate ligands in transition-metal complexes 46,47.
Mechanistic investigations
Having demonstrated the synthetic utility of this methodology, we focused our investigations on the underlying reaction mechanism. First, Stern-Volmer fluorescence quenching studies were performed to shed light on the potential species activated by the photocatalyst at the outset of the reaction 48. The experiments were conducted using [Ir[(dFCF3)ppy]2(dtbpy)]PF6 excited with light (430 nm) in the presence of the different reactants. In the case of trans-anethole, a decrease in fluorescence intensity was observed as a function of olefin concentration (Fig. 2, top left). In sharp contrast, (E)-N-(prop-1-en-1-yl)benzamide did not quench the excited photocatalyst, even at high concentrations (Supplementary Figs. 7-12). Cyclic voltammetry of this vinylamide (E1/2 = +1.45 V versus saturated calomel electrode (SCE) in MeCN) confirmed the mismatched redox potential with respect to that of the photocatalyst (E1/2 = +1.26 V versus SCE in MeCN) (Supplementary Fig. 25) 40. Interestingly, fluorescence quenching was not observed at low concentrations of arylsulfinylamide 1a and potassium benzoate.
Increasing the concentration of either arylsulfinylamide 1a or both 1a and the base (Supplementary Figs. 13-18) led to oxidation of the reagent 49.
A more soluble tetrabutylammonium-conjugated sulfinylamide salt 3 proved to be an efficient quencher of the iridium photocatalyst (Fig. 2, top right), in line with the reduction potential measured by cyclic voltammetry (E1/2 = +0.57 V versus SCE in MeCN) (Supplementary Fig. 26).
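Stern-Volmer quenching data of the kind described above are typically analysed by fitting I0/I against quencher concentration, the slope giving the Stern-Volmer constant K_SV. The sketch below uses synthetic intensities (the paper's raw data are in the Supplementary Figures), so the K_SV value is purely illustrative.

```python
import numpy as np

# Stern-Volmer relation: I0/I = 1 + K_SV * [Q], so a linear fit of I0/I
# against quencher concentration gives K_SV as the slope.
def stern_volmer_ksv(concentrations, intensities, i0):
    ratios = i0 / np.asarray(intensities)
    slope, intercept = np.polyfit(concentrations, ratios, 1)
    return slope, intercept

# Toy emission data consistent with K_SV = 50 M^-1 (not the paper's values)
conc = np.array([0.0, 0.005, 0.010, 0.020])   # quencher concentration, M
i0 = 1000.0
intensity = i0 / (1.0 + 50.0 * conc)          # synthetic quenched signal
ksv, b = stern_volmer_ksv(conc, intensity, i0)
print(round(ksv, 1), round(b, 2))  # 50.0 1.0
```

An intercept near 1 and a linear plot are consistent with a single dynamic quenching pathway; for trans-anethole such a decrease with concentration is what the text reports.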
These results indicate that different mechanisms might be operating at the outset of the reaction, depending on the olefinic partner. In the case of electron-rich styrenes, the formation of a radical cation via single-electron oxidation can be confidently proposed as the initial step of the photocatalytic cycle. In contrast, single-electron oxidation of the deprotonated arylsulfinylamide by the excited Ir photocatalyst to form an N-centred radical seems to be a more likely first step in the case of poorly oxidizable olefins.
To gain additional insights into the stereochemical outcome of these transformations, several control experiments as well as density functional theory (DFT) calculations were performed using anethole derivatives as benchmark substrates. First, the standard reaction conditions were applied in three independent experiments featuring cis-, trans- and a 1:1 mixture of cis- and trans-anethole. The formation of the corresponding products was analysed by 1H NMR (experimental details are provided in Supplementary Fig. 29). In all three cases, the aminoarylation adduct 2.1 was obtained in comparable yields, with almost identical d.r. and e.r. values. Next, and this time in the absence of arylsulfinylamide 1a, cis- and trans-anethole were separately subjected to the standard reaction conditions and their potential isomerization 50 was monitored by 1H NMR. A plot of concentration versus time revealed that, after only 10 min, both isomers converge to an ~1.7:1 cis-to-trans ratio (Supplementary Figs. 5 and 6). Such a photostationary state, reached much faster than the aminoarylation reaction itself, suggests that both isomers will be present at the outset of the reaction, regardless of the initial alkene geometry. Following olefin oxidation to the corresponding radical cation II, addition of the arylsulfinylamide I proceeds at the β-carbon atom, so that the absolute configuration of the first stereogenic centre is defined by that of the chiral sulfinyl moiety. DFT calculations revealed a low-energy transition state TS I-III (S_S,R) (ΔG‡ = +4.8 kcal mol−1) for this step, which delivers the benzylic radical III in a net exothermic process (ΔG = −25.2 kcal mol−1). Intermediate III undergoes a 1,4-aryl shift. No radical Meisenheimer intermediate could be located along the reaction energy profile 51. Instead, a spirocyclic transition state TS III-IV was found to precede the exothermic formation of the SO-centred radical IV (ΔG = −38.5 kcal mol−1). TS III-IV can be considered an early transition state in which the new C-C bond between the benzylic radical and the migrating aromatic group is only marginally formed (d(Cbn-Csp2) = 2.11 Å in TS III-IV versus 1.52 Å in IV), and the S(O)-C bond is barely elongated (d(S-Csp2) = 1.82 Å in TS III-IV versus 1.80 Å in III). Formation of the minor diastereoisomer can be traced back to the generation of intermediate III′ before the aryl transposition. Conformational analysis of the two intermediates suggests that the aryl translocation preferentially takes place through a trajectory in which the steric interactions between the PMP group and the methyl substituent of the anethole are minimized (ΔΔG(III/III′) = +5.8 kcal mol−1). DFT calculations support the notion of the aryl migration being the rate-determining step (TS III-IV, ΔG‡ = +12.2 kcal mol−1). As a result, and regardless of any potential kinetic preference for the formation and/or subsequent reactivity of either a Z- or an E-anethole-derived radical cation, the fast interconversion of III′ into III by rotation along the Cα-Cβ bond supports the syn relative configuration observed in the aminoarylation products (additional details are provided in Supplementary Fig. 30). The photocatalytic cycle is closed thereafter by oxidation of IV to V by Ir(II) to recover the Ir(III) catalyst. The precise fate of the sulfur-based chiral linker is challenging to assess. However, having detected the bisulfite (HSO3−) anion using commercially available colorimetric test strips, we can confirm that sulfur(IV) species account at least in part for the SO lost 52 (additional details are provided in Supplementary Fig. 27). Additionally, Fig. 2 shows TS I-III (S_S,S), the alternative transition state for the enantiodetermining step, in which I adds to the alkene radical cation II. TS I-III (S_S,S) is 1.3 kcal mol−1 higher in energy than TS I-III (S_S,R), which rationalizes the absolute configuration observed in the aminoarylation products.
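A free-energy gap between two competing transition states can be translated into a predicted product ratio with the Boltzmann/Eyring relation; the sketch below applies it to the reported 1.3 kcal mol−1 gap, assuming Curtin-Hammett-type control at the optimised reaction temperature of −20 °C.

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def er_from_ddg(ddg_kcal, temp_k):
    """Enantiomer ratio implied by a free-energy gap between competing TSs."""
    ratio = math.exp(ddg_kcal / (R * temp_k))
    major = 100.0 * ratio / (1.0 + ratio)
    return ratio, major

ratio, major = er_from_ddg(1.3, 253.15)  # -20 degC
print(f"{ratio:.1f}:1 -> {major:.0f}:{100 - major:.0f} e.r.")  # 13.3:1 -> 93:7 e.r.
```

A 1.3 kcal mol−1 gap alone therefore predicts roughly a 93:7 ratio, so it rationalizes the sense of the selectivity rather than the full >99:1 e.r.; computed barriers of this size typically carry an uncertainty of around 1 kcal mol−1.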
Conclusion
Here we have described an asymmetric intermolecular aminoarylation of alkenes. A photoredox-mediated radical cascade capitalizes on a chiral all-in-one arylsulfinylamide reagent featuring a traceless chiral auxiliary to forge two vicinal Csp3–Csp2 and Csp3–N bonds across the π-system in a stereocontrolled manner. Mechanistic investigations revealed the likelihood of multiple reaction pathways operating in these transformations. In the case of electron-rich styrenes, the formation of a radical cation via single-electron oxidation can be confidently proposed at the outset of the reaction. In contrast, the single-electron oxidation of the deprotonated arylsulfinylamide by the excited Ir photocatalyst to form an N-centred radical seems favoured in the case of poorly oxidizable olefins. The C–N bond formation is stereocontrolled by the chirality of the sulfoxide, whereas the subsequent transposition of the aromatic ring with concomitant elimination of the sulfinyl tether proceeds in a highly diastereoselective manner governed by steric factors. The β,β-diarylethylamines, aryl-α,β-ethylenediamines and α-aryl-β-aminoalcohols, ubiquitous motifs in bioactive molecules as well as in bidentate transition-metal ligands, are obtained herein with very high levels of regio- and both relative and absolute stereocontrol, thus highlighting the synthetic utility of this methodology.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41557-023-01414-8.
Fig. 2 | Mechanistic studies and proposed reaction mechanism. Results of Stern−Volmer experiments using trans-anethole (top left) and arylsulfinylamide 3 (top right) as quenchers (I, intensity; e.u., arbitrary energy units). The proposed reaction mechanism is shown, featuring two different initiation pathways: formation of a radical cation for electron-rich olefins (grey) and formation of an amidyl radical in the case of vinylamide acceptors (pink). DFT calculations were performed on trans-anethole as a benchmark substrate. Optimized transition states, relevant structural parameters, starting materials, products, reaction intermediates and transition states were computed at the M062X/6-31+G(d,p) level (IEFPCM, integral equation formalism with the polarizable continuum model), with the solvent 2-propanol at −20 °C (R = PMP). Energies are given in units of kcal mol−1. For further details, see Supplementary section 'DFT calculations'. TS I-III (S S ,R) visualizes the enantiodetermining step, in which I adds to the alkene radical cation II, in line with the absolute configuration observed in the aminoarylation products. The alternative TS I-III (S S ,S) is 1.3 kcal mol−1 higher in energy than TS I-III (S S ,R). Conformations were calculated for intermediates III and III′. The conformer yielding the minor isomer is disfavoured by steric factors: the PMP group adopts an unfavourable syn-periplanar disposition with respect to the methyl group, unlike the case of the major diastereomer experimentally obtained, in which these groups exhibit a less sterically demanding anti-periplanar geometry. PMP, p-methoxyphenyl; p-Tol, p-methylphenyl.
Table 2 | Scope of the alkene partner for the intermolecular aminoarylation with arylsulfinylamide (S S )-1
Unless otherwise noted, reactions were carried out under the standard conditions. Full conversion of the starting material was observed, and yields are reported after purification by column chromatography on silica gel. All compounds were obtained with >20:1 d.r. The d.r. and e.r. values were determined by 1H NMR of the crude reaction mixture and by HPLC on a chiral stationary phase of the isolated products, respectively. a 5 mol% of [Ir[(dFCF3)ppy]2(dtbpy)]PF6 at 0 °C. n-Bu, n-butyl.
A Modified Transversal Two-Suture Microsurgical Intussusception Vasoepididymostomy for the Treatment of Epididymal Obstructive Azoospermia
Introduction: We have developed a modified vasoepididymostomy procedure, namely "fenestrated" transversal two-suture microsurgical intussusception vasoepididymostomy. This study aimed to investigate the therapeutic efficacy and outcome of this fenestrated vasoepididymostomy for epididymal obstructive azoospermia (OA). Methods: Microsurgical two-suture transversal intussusception vasoepididymostomy was performed using our modified fenestration technique in 64 patients with OA due to epididymal obstruction at our hospital. Fenestration means making an opening in the epididymal tubule wall. The edges of the epididymal tubule "window" were stitched transversally (two stitches) using two double-armed 9–0 atraumatic sutures. The epididymal tubule was anastomosed to the lumen of the vas deferens. The patency rate and pregnancy rate were assessed. Results: Of the 64 OA patients, 45 received bilateral microsurgical two-suture transversal intussusception vasoepididymostomy, while 19 underwent unilateral microsurgical two-suture transversal intussusception vasoepididymostomy. All of the patients were followed up after the operation. The follow-up period ranged from 4 to 54 months. Among the 45 cases of bilateral surgery, the patency rate was 88.89% (40/45), and the natural pregnancy rate was 28.89% (13/45). After patency was confirmed postoperatively, 3 cases had recurrent OA, of which 2 regained sperm in the ejaculate after oral antibiotics and scrotal self-massage. As for the 19 cases of unilateral microsurgery, the patency rate was 68.42% (13/19), and the natural pregnancy rate was 21.05% (4/19). Conclusion: The fenestrated transversal two-suture microsurgical intussusception vasoepididymostomy can achieve a good patency rate in OA patients without increasing the difficulty or duration of the procedure.
Introduction
Azoospermia is defined as the complete absence of sperm in the semen and affects 10-15% of infertile men [1]. Approximately 40% of azoospermia patients have obstructive azoospermia (OA) [2,3], which is primarily caused by obstruction of the vas deferens-epididymis connection or of the epididymal tubule [4]. Microsurgical treatment is an optimal option for the treatment of OA, which involves anastomosis of the seminal tract to bypass the obstruction [5]. Although various assisted reproductive techniques are increasingly adopted, microsurgical treatment remains the first choice for some OA cases. Accumulating evidence has suggested that microsurgery is a more cost-effective option for OA as compared with assisted reproductive techniques [6][7][8][9][10]. However, this microsurgical procedure depends heavily on surgical skill and is the most challenging of all urological microsurgeries [11]. Modification of surgical procedures to improve the success rate is helpful for their clinical promotion.
Between January 2009 and January 2020, we performed a modified vasoepididymostomy procedure, namely "fenestrated" transversal two-suture microsurgical intussusception vasoepididymostomy, and followed the patients up postoperatively. The purpose of this study was to investigate the therapeutic efficacy and outcome of this procedure for OA.
Study Subjects
From January 2009 to January 2020, 64 patients with OA underwent the fenestrated transversal two-suture microsurgical intussusception vasoepididymostomy in our hospital. The inclusion criteria were as follows: (1) no sperm could be detected in at least two routine semen analyses and one centrifugal microscopy analysis (1,500 g × 15 min); (2) serum sex hormone levels were normal; and (3) seminal plasma biochemical analysis suggested OA due to epididymal obstruction, with fructose level within the normal range. This study was approved by the Institutional Review Board of Shenzhen People's Hospital (No. LL-KY-2021641), and written informed consent was waived by the IRB due to the retrospective nature of this study.
Surgical Procedures
All patients underwent combined spinal-epidural anesthesia, and a longitudinal incision was made in the middle of the scrotum. After the testis was freed, the vas deferens was transversally cut to expose the vas deferens lumen. A venous indwelling needle (0.7 × 19 mm) was inserted into the lumen of the vas to inject saline toward the distal end of the seminal tract. No resistance to saline injection indicated that the vas deferens was patent.
Under the microscope (magnification 18-20×), the distal epididymal tubule was dissected free, the epididymal tubule wall was gently lifted by the microscopic forceps, and part of the epididymal tubule wall was cut off by the microscissor to create a "window" (the diameter of the "window" should be approximately equal to the inner diameter of the vas deferens). The sampling of epididymal tubule fluid was used to search for sperm under a high-power optical microscope. If no sperm was found, the epididymis was explored from the tail to the head. If sperm was seen, intussusception anastomosis could be performed regardless of sperm motility.
The distal vas deferens was carefully mobilized to ensure that it could be anastomosed without tension (Fig. 1a). The procedure is similar to the currently used transverse two-suture intussusception vasoepididymostomy technique, but with modifications. First, instead of a transversely linear incision in the loop of the epididymal tubule, a round tubulotomy was created between two transverse sutures using microsurgical curved scissors (Fig. 1b). The diameter of the round tubulotomy was matched to the diameter of the vasal lumen. Second, the sutures placed in the wall of the vas deferens were full thickness, which allowed a deeper invagination of the epididymal tubule into the vasal lumen (Fig. 1c), and then the outer muscular layer of the vas deferens was fixed to the epididymal tunic (Fig. 1d).
Postoperative Follow-Up

At 1, 3, 6, 9, and 12 months after surgery, patients were followed up with semen analysis to check for the presence or absence of sperm. Meanwhile, the spouse's pregnancy status was investigated.
Patients Demographic and Clinical Characteristics
A total of 64 azoospermia patients underwent microsurgical transversal two-suture intussusception vasoepididymostomy; the mean age was 31 years (range: 21-42 years). The course of disease ranged from 0.5 to 14 years. Among them, 45 received bilateral microsurgical transversal two-suture intussusception vasoepididymostomy, while 19 underwent unilateral vasoepididymostomy. Nine patients had previously achieved natural conception with their spouse/sexual partner, 21 had a history of reproductive tract infection, and one had a history of inguinal hernia surgery.
Outcomes of Bilateral Microsurgical Two-Suture Transversal Intussusception Vasoepididymostomy
The outcomes of bilateral microsurgical two-suture transversal intussusception vasoepididymostomy were summarized in Table 1. Among 45 patients who underwent bilateral surgery, 40 cases continuously showed motile sperm in the postoperative semen analysis. The patency rate was 88.89%, and 13 cases (28.89%) had their spouse's successful conception. Regarding the timing of patency, 10 cases showed motile sperm in the semen analysis 1 month after the operation, 2 cases at 2 months after the operation, 7 cases at 3 months after the operation, and the other 2 cases at 6 months and 7 months after the operation, respectively.
Outcomes of Unilateral Microsurgical Two-Suture Transversal Intussusception Vasoepididymostomy
The outcomes of unilateral microsurgical two-suture transversal intussusception vasoepididymostomy were summarized in Table 2. Of the 19 patients undergoing unilateral surgery, 13 cases (68.42%) showed motile sperm in the postoperative semen analysis, 1 case (7.69%) had a recurrence of obstruction, and 4 cases (21.05%) had their spouse's successful conception. Regarding the timing of patency, 4 cases showed motile sperm in the semen analysis 1 month after the operation and the other 2 cases at 6 months and 18 months after the operation, respectively.
Discussion
The earliest vas deferens and epididymis anastomosis was attempted by Martin et al. [12] in 1903. In 1918, Lespinasse [13] completed the first formal vasoepididymostomy. In 1978, Silber [14] developed the end-to-end microanastomosis of a single epididymal tubule and the vas deferens under microscopic surgery. In 1980, Wagenknecht et al. [15] performed the microsurgical end-to-side anastomosis of the vas deferens and epididymal duct for the first time, and this end-to-side anastomosis technique was further promoted and popularized by Thomas [16] in 1987. In 1991, Stefanovic et al. [17] introduced the single-needle intussusception technique based on the end-to-side anastomosis technique in rats, which was clinically applied by Berger [18] in 1997 and achieved a patency rate of 92%. In 2000, Marmar [19] proposed the transversal two-suture microsurgical intussusception technique.
In the 2000s, Chan et al. [20] and coworkers reported a novel longitudinal two-needle intussusception vasoepididymostomy, which greatly simplifies the surgical procedure and achieves a higher patency rate as compared with the conventional three-suture triangulation, end-to-end, and end-to-side anastomosis methods [21]. The advantages of this method are as follows: first, the method of longitudinal cutting and longitudinal four-suture placement makes the anastomosis larger, with better patency. Second, this suture method makes the anastomosis form a good impermeable layer, which greatly reduces the occurrence of semen granulomas [22]. At present, the patency rate following vasoepididymostomy ranges from 31% to 92%, the postoperative pregnancy rate ranges from 10% to 50%, and the recurrence rate of obstruction within 1 year after surgery is 4% [23][24][25][26].
The fenestration technique described in this study uses the "fenestrated" transversal two-suture method in the end-to-side intussusception anastomosis. "Fenestration" means the removal of a round-shaped piece of the epididymal tubule wall to create a "window" through which semen can pass freely. Afterward, the vas deferens lumen is sutured symmetrically from the inside to the outside, and the suture penetrates through the full thickness of the vas deferens wall. The suture is knotted, and the epididymal tubule is intussuscepted into the lumen of the vas deferens. Our results showed that in the bilateral surgery, the postoperative patency rate was 88.89% (40/45), the pregnancy rate was 28.89% (13/45), and the postoperative recurrence rate was 6.7% (3/45). As for the unilateral surgery, the patency rate was 68.42% (13/19), the pregnancy rate was 21.05% (4/19), and the postoperative recurrence rate of obstruction was 7.69% (1/13). The patency rate and natural conception rate of the fenestrated vasoepididymostomy in this study are comparable with previous reports [23][24][25][26].
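The percentages quoted above follow directly from the underlying counts; as a quick illustrative check (ours, not part of the study), note that the pregnancy rates are computed over all operated patients (45 and 19), while the recurrence rate is computed over the 13 unilateral cases that achieved patency:

```python
def pct(numerator, denominator):
    """Rate as a percentage, rounded to two decimals as reported."""
    return round(100 * numerator / denominator, 2)

# Bilateral surgery, n = 45
assert pct(40, 45) == 88.89   # patency
assert pct(13, 45) == 28.89   # natural pregnancy

# Unilateral surgery, n = 19
assert pct(13, 19) == 68.42   # patency
assert pct(4, 19) == 21.05    # natural pregnancy
assert pct(1, 13) == 7.69     # recurrence among the 13 patent cases
print("all reported rates reproduced")
```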
Compared with the conventional transversal and longitudinal two-suture anastomosis, the fenestrated anastomosis has the following advantages: first, in the longitudinal two-needle anastomosis, it is difficult to control the force when making the longitudinal incision in the epididymal tubules, resulting in residual epididymal tubule outer membrane at the incision that hinders smooth flow through the anastomosis [24]. The fenestrated anastomosis method can avoid the residual epididymal tubule outer membrane, yielding better patency at the anastomosis. Second, the "round window" provides a larger cross-sectional area than a longitudinal or transverse incision, and thus better patency. Third, the round lumen of the vas deferens is better matched with the round anastomosis of the epididymal tubule. Fourth, the suture penetrates through the full thickness of the vas deferens wall, allowing the epididymal tubule wall to be intussuscepted deeply into the lumen of the vas deferens, effectively preventing leakage and reducing the occurrence of semen granuloma. Fifth, in the longitudinal two-suture anastomosis, two parallel stitches are sutured on the epididymal tubule, and then an incision is made between the two sutures, leaving limited surgical space. The diameter of the needle is 70 μm [16], and the width of two parallel needles is 140 μm, equivalent to the diameter of some epididymal tubules. Therefore, it inevitably requires selecting a thicker epididymal tubule for longitudinal two-suture anastomosis. Our fenestrated incision is transversally sutured; thereby, thin epididymal tubules can also be used. In addition, in the longitudinal two-suture anastomosis, premature leakage of semen is a concern, as it disturbs exposure of the surgical field [24]. On the contrary, in the fenestration technique, the outflow of semen makes it easier to see the edge of the window and facilitates needle insertion and suturing.
There are still some limitations to this study. First, this study was limited by its retrospective nature. Second, the sample size was small, especially for the patients receiving unilateral surgery. Furthermore, the new technique was not compared with the reference technique of intussusception vasoepididymostomy with two sutures and a longitudinal incision without fenestration. In the future, a comparative study is necessary to demonstrate the superiority of this new technique over the reference technique.
Conclusions
In summary, our fenestrated transversal two-suture microsurgical intussusception vasoepididymostomy can achieve a good patency rate in OA patients without increasing the difficulty or duration of the operation, and is worthy of clinical promotion for the treatment of epididymal OA.
Statement of Ethics
This study was approved by the Institutional Review Board of Shenzhen People's Hospital (No. LL-KY-2021641). Written informed consent was waived by the IRB due to the retrospective nature of this study.
THE LEGALITY OF THE ANTI-MIGRANT ACTIONS OF THE ITALIAN AND OF THE HUNGARIAN GOVERNMENTS: IT IS MORE THAN JUST LAW. THE NECESSITY TO REFORM EXISTING RULES
This paper discusses the legal basis of the anti-migratory individual actions of certain states of the European Union, specifically Italy and Hungary, which have recently created a challenge to the enforcement of International and European Union legal rules on asylum. On the one side, there are legal rules stemming from International Law, the case-law of the European Court of Human Rights, and EU Law (i.e. the Dublin Regulation) which impose specific duties on those countries where migrants and asylum-seekers first arrive. On the other side, there are countries (i.e. Italy, Hungary) that are or have been particularly exposed to the inflow of refugees and asylum-seekers. These countries, in recent years, have taken individual initiatives against what their Governments have perceived as a massive inflow of migrants. These initiatives have spurred a debate and have also contributed to EU initiatives and plans related to the reallocation of migrants. This paper, after introducing the International and EU legal rules on the treatment of migrants and asylum-seekers, studies the legal basis for certain individual states' initiatives against massive migration, and the possible consequences of a conflict between the EU/International authorities and those states following restrictive policies against migration. Finally, the paper suggests that the existing international and EU rules on asylum should be reviewed. This would also take into account the constraints that a massive inflow of migrants can create for individual states and would prevent conflicts between anti-migration national Governments and EU/International authorities.
INTRODUCTION
Immigration is a constant phenomenon affecting European countries. In particular, the recent crises and wars in Northern Africa and the Middle East have created the presuppositions for massive flows of persons attempting to reach the European Continent and settle there, eventually (but not necessarily) applying for asylum. These inflows of migrants have spurred a debate regarding the opportunities and constraints that huge arrivals create for those countries receiving them. 1 2 3 4 5 6 In particular, specific countries lie at the outer borders of the European Union and, according to the existing EU legislation (Dublin Regulation, see the next section), are primarily responsible for receiving migrating persons, identifying them, and examining their eventual asylum requests. EU Law establishes that, in most cases, the state of first arrival (basically those EU states with a border with Third Countries or with a sea border) is responsible for the process of offering support to migrants, selecting those with the right to obtain asylum or any form of protection and, eventually, expelling those without a title to stay. The last years have witnessed a reaction from those states whose Governments have felt that their country was too exposed to the constraints created by the massive inflows of migrants and by the Dublin system. For example, both Denmark and Germany, during the peak of the inflow of migrants in 2015, temporarily re-established controls at their borders with, respectively, Germany and Austria. The Hungarian Government decided to build a wall to prevent the arrival of migrants coming through Serbia and, at a certain point, ordered the Army to patrol the border.
7 8 9 10 One further example is represented by the behavior of the Italian authorities during the period 2018-2019. The Italian Government of that time repeatedly refused or delayed entry into Italian ports to NGO boats carrying refugees allegedly rescued in the Mediterranean Sea. 11 12 13 14 This type of initiative has been at the origin of controversies and of continuous requests for a reform of the Dublin Regulation rules. The European Union has occasionally reacted to such requests. A decision was made in 2015 aiming at relocating 150,000 refugees from Iraq, Syria, and Afghanistan based on quotas (some countries, including Hungary, refused to comply with their quota). 15 Moreover, an informal mechanism of reallocation of migrants from Italy to some EU countries has been established.
In particular, the actions taken by the Hungarian Government of Viktor Orban in 2015 and those of the Italian Government in 2018-2019 have captured the attention of the media. The initiatives of the two Governments have generated an extensive discussion, with varying views regarding their political convenience, humanitarian implications, and technical legality. Our paper will concentrate on the latter and also on their political legitimacy, studying and discussing separately whether the decisions and actions taken by the Hungarian and Italian authorities have a legal basis.
We believe that an investigation into the legitimacy of these types of individual actions is worthwhile. So far, the literature has studied the problems related to migration and asylum mainly focusing on the difficulties experienced by migrants. The attention of researchers has been directed mostly to the hurdles, tragedies, and problems experienced by those persons fleeing their countries and Continents and coming into Europe. 16 17 18 19 20 21 This appears meritorious, in view of the legal as well as ethical/humanitarian connotations of the matter. On the other side, it is clear that a massive inflow of migrants may, in extreme cases, create serious hurdles for the receiving countries in terms of serious organizational and financial challenges as well as threats to public safety. 22 This particular situation has generated an increase in anti-migratory feelings among Europeans, leading to consequent actions aiming at preventing or acting against massive migration. Indeed, the literature has often relegated the analysis of the eventual legitimacy of actions like those of the Italian and Hungarian authorities to a very secondary role. The dangers to public safety brought by massive migration have often been presented as "perceived," with little investigation into whether or not the perception is actually (un)grounded. The literature has so far not really expounded on the legitimacy of actions (like those of the Hungarian and the Italian authorities) aiming at putting a limit on the number of migrants and asylum-seekers. 23 24 These actions are, eventually, often briskly dismissed as illegal or immoral, without a proper analysis of their legality and morality.
25 This appears as a limitation because of the importance of these particular actions, in terms of their (il)legitimacy and also in view of their impact on the lives of both migrants and the citizens of receiving states. Our paper aims at contributing to filling this gap. This paper is organized in the following way. The following section will present EU and International Law (including the case-law of the European Court of Human Rights) in the area of the treatment of migrants and refugees. The second section will provide details of the actions taken by, respectively, Hungary and Italy and comment critically on their consonance with the rules discussed in the previous section. The third section will expand the discussion to encompass also the consistency between these EU/International rules and some specific Principles of Law. Conclusion and references will follow.
THE MAIN LEGAL RULES ON MIGRATION AND ASYLUM: INTERNATIONAL AND EU LAW
This section will discuss the state of the art of legislation on Migration and Asylum. The first sub-section (1.1.) will review the main International Law rules and the case-law of the European Court of Human Rights on the duties of a state towards migrants and asylum-seekers, whereas the second (1.2.) will study the most relevant legal instruments of the EU Asylum and Migration Policy. Neither sub-section aims at offering a complete historical overview of the relevant legislation, nor are the following pages devised with the intention to summarize legal statutes. Nonetheless, the following pages will highlight the main legal principles applicable at the moment of the actions taken by the Hungarian and Italian Governments (these principles are still valid even at the moment of writing).
MIGRATION, ASYLUM, AND INTERNATIONAL LAW
International Law does not confer to any individual any right to migrate to a foreign state in order to enjoy better living standards. 26 Hence, purely economic migrants do not have any right to enter any EU country, unless, of course, there is a specific country willing to accept them.
Nonetheless, international law recognizes the right of a person who is the object of persecution to find asylum in another country. The first relevant legislative instrument is Article 14 of the Universal Declaration of Human Rights, which states that "Everyone has the right to seek and to enjoy in other countries asylum from persecution." Further documents are the United Nations Convention Relating to the Status of Refugees of 1951 (Geneva Convention) and the Protocol Relating to the Status of Refugees of 1967. An agency of the United Nations, the United Nations High Commissioner for Refugees (UNHCR), is in charge of monitoring compliance with the Convention.
The Convention of Geneva recognizes certain rights to persons who have a reasonable fear of being persecuted in their country of origin or of residence on the basis of race, religion, nationality, membership of a particular social group, or political opinion. All states party to the Convention have the obligation to consider these persons as refugees and provide them with protection. In particular, the authorities of the state where the person seeking protection arrives must examine their demand, provide free access to Courts, provide accommodation and meals, provide administrative assistance, offer refugees the possibility of assimilation and naturalization, cooperate with the UNHCR, and offer reception also to the family of the refugee. The states where refugees arrive also have particular duties to refrain from, in particular, sending the refugee back to the country they have fled from (principle of non-refoulement). Refugees have, for themselves and their families, the right to access education, medical assistance, the job market, and self-employment. On the other side, the refugee has to respect the law of the country offering him/her protection. Refugees should not be penalized for having entered illegally the country where they apply for asylum. The Convention should not benefit persons having committed serious crimes or non-political crimes; in this type of case the principle of non-refoulement may not apply.
The Convention has been recently analyzed by scholars. 27 28 29 In our view, it maintains a certain balance between the duty imposed on states versus persons persecuted and the right for these states to protect their community against persons who, while eventually fulfilling the requirements for obtaining asylum, are considered as a danger to the security of a country or have committed particularly serious crimes according to a final judgment. As said, in these particular cases, the principle of non-refoulement does not apply. These particular provisions recognize the right of national authorities to avoid exposing their citizens to dangers even when the person representing a danger would otherwise be entitled to receive the status of refugee. Another provision (article 9) does not restrict a state "in time of war or other grave and exceptional circumstances, from taking provisional measures which it considers to be essential to the national security in the case of a particular person, pending a determination by the contracting state that that person is in fact a refugee and that the continuance of such measures is necessary in his case in the interests of national security." Nonetheless, the Convention does not specify whether or not a state is entitled to impose a limit to the number of refugees it can actually take or to the number of demands it can assess. Specifically, the Convention prohibits collective expulsions based on the nationality of the applicants The problem with the acceptability of a limit to the number of arrivals/refugees acquires relevance when the arrivals exceed the financial and managerial capacity of a receiving state. The receiving state may not have sufficient resources in order to monitor the behavior of asylum-seekers, conduct investigations in case of reported crimes, guarantee fair trials, and the eventual execution of custodial sentences. 
These are reasons why certain states have expressed concerns in case of massive arrivals of migrants demanding asylum. 30 These concerns are even stronger because it is perceived that a significant number of asylum requests are purely instrumental and aim at securing the applicant a temporary presence in Europe, possibly (and in extreme cases) as a cover for terrorist activities. 31 A further legislative instrument that can be invoked to prevent repatriation is the Convention on Human Rights and Fundamental Freedoms of the Council of Europe, in particular Article 3. 32 Article 3 states that "No one shall be subjected to torture or to inhuman or degrading treatment or punishment." The case law is oriented towards giving priority to the asylum seeker's right to avoid the risk of ill-treatment over the right of a state to deport a person deemed to be a threat to security, thus ruling out the deportation even of a refugee who has committed a crime. In Application no. 19017/16, the European Court of Human Rights (ECHR) established that even where national security is threatened, the state receiving a person at risk of violation of his human rights cannot order deportation in the absence of a genuine adversarial procedure carried out before an independent judicial authority. In Application no. 1365/07, the ECHR blocked the expulsion of a suspected terrorist from Sweden, as he would risk, even at a reasonably speculative level, ill-treatment in his country on account of the reason for his expulsion from Sweden. 33 A similar orientation is expressed also in Applications no. 8139/09 and 25803/94.

30 Bove, V., Elia, L., & Ferraresi, M. (2019). Immigration, fear, and public spending on security. VOX, CEPR Policy Portal, https://voxeu.org/article/immigration-fear-and-public-spending-security, accessed on 1 June 2021.
The ECHR has clearly established a principle according to which states party to the Convention must follow a proper judicial procedure before establishing that a person represents a danger to public security. Moreover, under Article 3 of the Convention, these states are in any case not allowed to deport such a person back to a country where he/she risks "inhuman or degrading treatment or punishment." This doctrine is certainly the expression of a noble idea, namely that no human being should risk ill-treatment. On the other side, the law of the Convention and the doctrine of the ECHR may also create serious constraints when states receive large numbers of migrants. A significant number of obligations is placed on receiving states, which are expected to offer accommodation and legal and administrative assistance to any person (and their family) demanding asylum on the basis of the Convention. Moreover, receiving states are expected to examine the application and reconstruct the story of the applicant. In the presence of massive inflows, a state needs to invest a significant amount of financial and managerial resources in order to comply with these obligations. Guaranteeing accommodation, access to education, and free legal and administrative assistance is costly. The receiving state is certainly allowed to place a migrant in detention if there is a need. Nonetheless, this also entails costs and practical difficulties, in view of the need to provide legal assistance, translation, and appropriate space in correctional facilities. Moreover, reconstructing the story of a person (who may arrive without a passport and a clear identity) in order to verify the effective risk of ill-treatment may become very difficult. The picture becomes even more complicated because receiving states are not allowed to deport a person who is considered a danger to security if that person risks ill-treatment in his home country.
The practical consequence is that, according to the ECHR case law, receiving states may (at least theoretically) be put under serious pressure and may even be required to tolerate the presence of persons considered to be a danger to their communities. The ECHR acknowledges these constraints; however, it still holds that the protection of the migrant against probable ill-treatment takes priority. Because of this, even though the Geneva Convention does not protect convicted criminals against deportation, we maintain that the current international legislation on the protection of persons at risk of persecution is migrant-centered.
THE EU ASYLUM AND MIGRATION POLICY
The European Union has, over time, developed specific legal rules on the matters of migration and asylum, following the evolution of international conventions and the law and case law of the Council of Europe. The EU has created the Common European Asylum System (CEAS) and has established uniform standards, even if the member states retain a certain degree of autonomy. The various rules aim at 1) guaranteeing asylum and protection to persons needing it according to the legal instruments reviewed in the previous section and 2) establishing criteria for the identification of the state responsible for reception, examination of any request for protection, and eventual expulsion of persons not fulfilling the criteria at point one above.
An important piece of EU legislation on migration is the Dublin Regulation. 34 35 36 37 The Dublin Regulation, in its latest version from 2013, represents the most important legal instrument on the criteria for the identification of the EU member state responsible for the reception, examination of the request for protection, and eventual expulsion of a migrant. This latter point is of major relevance because the actions of those governments we are studying are presented by their proponents as a reaction against the inefficiency and unfairness of the Dublin Regulation's mechanisms. One of the most important criteria for the identification of the responsible state indicates that the responsible entity is the first state in which the migrant arrived. Technically, this is a residual criterion, applicable when the person arriving in EU territory does not, for example, have family members in, or a visa issued by, a member state different from the one where the migrant first entered EU territory. Nevertheless, the criterion of the "First Country" is the one most applied in practice and is the target of the reaction of those states (like Hungary and Italy) that lie at the external border of the EU (including maritime borders) and are logically (among) the responsible states in many cases. Based on the Dublin Regulation, the responsible state is legally required to respect the principle of non-refoulement, inform the migrant (in a language known or supposed to be known by the latter) of their right to present a request for protection, offer accommodation, examine the request, and guarantee a procedure of appeal in case of rejection. Clearly, responsible states may face difficulties in respecting these rules when the number of arriving persons exceeds certain limits.
This holds even though a special mechanism, the European Asylum Support Office (EASO), has been devised in order to offer assistance to the responsible states in case of disproportionate inflows.
The principle of the First Country has been under discussion since the emergence of the refugee crisis of 2015. Indeed, a massive inflow of migrants may impose a heavy financial and organizational burden on the state receiving large numbers of persons. For this reason, the European Commission has announced a plan to shift towards a system of quotas, in order to redistribute migrants across EU states based on criteria like the population and the GDP of the member states. 34 The underpinning idea is that it would be fair to relieve those EU member states located at the external borders of the EU of the undeniable hurdles related to the practical difficulties described above. Moreover, the picture becomes even more problematic taking into account the prohibition on deporting anybody, including a criminal, to a country where he risks ill-treatment (which covers a considerable number of refugees' home countries). However, at the moment of writing, these initiatives have not been transformed into a reform of the Dublin Regulation, and the principle of the First Country still fully retains its legal validity. Any such reform would require the approval of a majority of states, which is far from realistic at present. For now, the redistribution of migrants is happening on the basis of informal and voluntary mechanisms of solidarity.

34 Armstrong, A.
The current legal rules may seem to be the origin of a situation of unfairness, where specific countries (Spain, Greece, Italy, Malta, Croatia, and Hungary) are automatically exposed to major constraints (and the subsequent concrete risks) solely on the basis of their geographical location. In particular, a country such as Malta is in practice not able to comply with the Dublin obligations, given its limited size and small population. Nevertheless, under the current legal rules these countries are required to offer assistance to migrants irrespective of the burdens that this may place on their finances and of the risks for the security of their citizens. A reform of the Dublin rules with a redistribution of migrants could become an expression of solidarity across the EU. Nevertheless, those states not at the frontline (the majority of EU states, whose votes would be necessary to change the system) actually have an interest in retaining the status quo. The various national governments hesitate to share the financial and organizational burdens of receiving migrants, also in order to avoid difficulties at the moment of re-election.
THE LEGALITY OF THE REACTION OF INDIVIDUAL STATES: THE CASES OF HUNGARY AND ITALY
This section will discuss the legality of two types of behavior followed by two member states of the European Union: Hungary and Italy. Their national governments have, rightly or wrongly, decided that the inflow of migrants has become disproportionate, financially and organizationally unmanageable, and dangerous for the security of their citizens. The first subsection (2.1.) will study the Hungarian case; the second (2.2.) will focus on the Italian case. The current literature tends to describe Orban and Salvini in quite negative terms, presenting them as illiberal and populist. 38 39 40 Nonetheless, there is room for a more critical discussion of the legality of their measures. This section attempts to do so.
THE HUNGARIAN CASE
During the second half of 2015, the Government of Hungary led by Prime Minister Orban shifted towards an anti-migrant policy. This policy has a precise ideological underpinning, namely the overt refusal of those legal and political traditions according to which there cannot be a limit to the duty to offer reception and examination of requests for protection. The Hungarian authorities, despite the pressure put on them by various EU bodies, other member states, and NGOs, have decided to block or seriously limit the arrival of migrants from neighboring countries. The latter are mostly persons coming from Africa or the Middle East.
The legality of the Hungarian Government's actions can be studied by deconstructing their various manifestations. The simple construction of a fence at the border with Serbia (which is an external EU border) is, in itself, a perfectly legal act, as a sovereign state has the full right to protect its external border. Indeed, paradoxically, Hungary has a major responsibility towards the whole Schengen area, as the Hungarian border is also the border of the Schengen area. There is no rule of international or EU law technically and explicitly preventing a sovereign state from building a fence within its own territory. Hence, insofar as the construction of the fence is discussed per se, Hungary is not infringing any legal commitment, leaving aside all political and symbolic aspects.
However, the construction of the fence with Serbia may turn out to be legally questionable if it results in a de facto refoulement of asylum-seekers. The Hungarian border patrols have been instructed to oppose physical resistance to the attempts of groups of migrants to enter the country. This is still not illegal in itself, in view of the right of the authorities of a sovereign state to refuse, and enforce the refusal of, entry to foreigners not entitled to stay in the country. Nonetheless, some of these foreigners may be entitled to receive international protection, based on the rules discussed in the first section of this paper. This may pose a legal problem: on the basis of the Geneva Convention, Hungary should not expose these persons to the risk of suffering inhuman treatment. Those migrants whose entry to Hungary was denied by the local authorities were already in Serbian territory, after having crossed Serbian and, previously, Greek territory. Therefore, the refusal of entry by the Hungarian authorities would be justified if the countries crossed by the migrants (in particular Serbia, considering that the migrants were already in Serbian territory when trying to enter Hungary) were already able to offer sufficient standards of international protection to the migrants (safe country). Unfortunately, at the time the fence was built, there was no official EU list of safe (and, by exclusion, unsafe) countries. Therefore, one cannot automatically conclude that Hungary violated the principle of non-refoulement when it refused entry to migrants already in Serbian territory. Serbia is a full UN member, therefore bound to respect all duties arising from the UN Charter, and is also a member of the Council of Europe and a signatory of the Geneva Convention. Moreover, it is run by a regularly elected government and is not in a state of war.
Because of these points, the Hungarian authorities cannot automatically be accused of violating the principle of non-refoulement when they refuse entry into their country to migrants already in Serbian territory. The effectiveness of democracy and the fair treatment of refugees in Serbia are criticized by some NGOs. 41 Nonetheless, these NGOs are private organizations and are not entitled to draw up a list of safe/unsafe countries binding on a sovereign state such as Hungary. On the other side, the UNHCR argues that rejections to supposedly safe countries are also legally problematic, since a specific individual might be at risk on a personal basis.
In view of the discussion presented in this paper, it appears that, irrespective of any political views, the action of building a fence is not in itself illegal. However, the action of blocking the entry of migrants coming from Serbian territory is legally questionable.
THE ITALIAN CASE
The governmental authorities of Italy have delayed or blocked the entry of private boats carrying migrants whom these boats had allegedly rescued while the migrants were attempting to cross the Mediterranean Sea. These actions were taken several times during the period 2018-2019, before a reshuffling of the governmental coalition occurred. These initiatives are inspired by the ideas that the number of migrants coming to Italy has become disproportionate, that the state cannot manage the procedures of reception, examination of requests, and eventual expulsion, and that the massive inflow of migrants entails concrete risks for the security and welfare of citizens because of terrorism and other types of crime.

41 [http://www.bgcentar.org.rs/bgcentar/eng-lat/wp-content/uploads/2021/03/Human-Rightsin-Serbia-2020-za-web.pdf] accessed on 1/7/2021
The Italian authorities, at the time of the facts, were led by a triumvirate composed of Giuseppe Conte (Prime Minister), Luigi Di Maio, and Matteo Salvini (both of the latter Deputy Prime Ministers), although Salvini directly gave the orders to refuse entry to the boats. The Italian authorities invoked the following legal grounds in order to justify the refusal to allow entry to the NGO boats: 1) Italy was not the closest safe harbor at the moment when the migrants were rescued; the boats purposely avoided heading towards countries like Tunisia, Algeria, Malta, or Libya itself. 2) The authorities of Malta intentionally avoided answering the requests for assistance even when the rescuing boats were crossing their own waters. 3) Italy is not the responsible state under the Dublin Regulation when the boats are registered in an EU member state other than Italy. According to the Italian authorities, the boats themselves would represent an extension of that member state's territory abroad; hence, for example, if the boat carrying the migrants is registered in the Netherlands, the latter would be the responsible state.
The points highlighted above are going to be discussed separately. We discuss the arguments themselves, not the veracity of the concrete facts alleged by the Italian authorities at points 1) and 2). As for point 1), a definition of the concept of "Safe Harbor" would be helpful. According to a certain interpretation, a Safe Harbor is a place offering shelter from adverse weather and attacks. 42 According to this interpretation, any place closer to Italy at the moment when the migrants were rescued and able to offer protection against adverse weather and attacks would represent a Safe Harbor. Nevertheless, it is clear that the boats could hardly return migrants to Libya, where a war is ongoing. Still, the boats should head towards countries other than Italy when those are closer. Nonetheless, there is a further point to consider. At the moment when the Italian authorities were denying the boats entry into the Italian harbors, the boats were actually closer to Italian land. This means that, at that moment, the Italian harbors were the closest Safe Harbor. Forcing the boats away could expose the migrants to risks if the condition of the boat itself, the health of the passengers, or the meteorological conditions were adverse. Overall, considering all points, it seems that the denial or delaying of entry into Italian harbors is legally problematic. The boats were regularly provided with food and medicines, and passengers in critical health conditions were immediately allowed to access Italian land territory. However, permission to remain off the coast and the supply of food and medicines are hardly equivalent to the provision of a Safe Harbor.

42 [https://www.collinsdictionary.com/dictionary/english/safe-harbour], accessed on 1/7/2021
The Italian authorities argue that 1) some rescuing boats were purposely directed towards Italy even when another, closer harbor was available, or 2) in some cases the authorities of Malta and Tunisia (unlawfully) denied entry and shelter to the rescuing boats. However, while it is clear that these alleged behaviors may entail legal responsibilities for their authors, this does not seem to give the Italian authorities the right to deny shelter to the boats and their passengers. Instead, the Italian authorities could take action against those other national authorities that refused entry and/or against those persons who may have purposely directed their boat to Italy even when a closer Safe Harbor was available.
We turn now to the second point, namely the argument that Italy is not the responsible state when the rescuing boats are registered in another EU state. It all depends on whether or not a boat is part of the territory of the state where it is registered. This is a complex issue, whose solution goes beyond the scope of this paper. Nevertheless, the denial of access to a Safe Harbor is hardly justifiable on the basis of the argument that Italy is not the responsible state in terms of the Dublin Regulation whenever the rescuing boat carries the flag of another EU state. Italy would still be obliged to offer a Safe Haven, and the other EU state would then have to accept taking over the migrants and treating their case. Any refusal by that state could in itself be grounds for action by the Italian or EU authorities. Italian prosecutors could in turn act against the carrier of the migrants, allegedly for solicitation of illegal immigration. However, this is a different issue, distinct from the obligation of giving immediate shelter (Safe Haven) to persons in immediate danger.
DISCUSSING THE CASE STUDIES
The case studies above have shown that the legality of the actions taken by the Hungarian and Italian authorities may, at least in the case of Italy, be questioned in formal terms. Nonetheless, there are several remarks to be made regarding the compatibility of the current international legislation with legal principles like public security and order, proportionality, and the reasonableness of legal obligations. The main problem is that these principles, while recognized, are sometimes hard to invoke in practice.
As highlighted in the first section of this paper, the current international rules tend to be migrant-centered. The whole set of legal standards is inspired by an underlying aim: that migrants are offered all possible guarantees that any request for protection they submit is examined, their personal story is reconstructed, they are offered adequate information and legal assistance, and any rejection of their request may be appealed. According to the UNHCR, even the designation of a "Safe Country of Origin" should be avoided, because it would undermine the rights of those individual persons who, while coming from a technically safe country, are subject to individual persecution. Hence, according to this view, the automatic rejection of persons passing through Serbia could represent an infringement of the Geneva Convention. This would lead, in practice, to an (ad absurdum) conclusion: that no application should be rejected or given summary treatment, not even when the applicant originally comes from, or has been crossing through, a supposedly safe country.
Our point here is that the migrant-centered perspectives of the Geneva Convention and the UNHCR are themselves questionable. The Geneva Convention's perspective implicitly assumes that the states receiving the migrants have sufficient financial and managerial resources to offer reception, accommodation, and legal assistance in a language that the migrants know or are supposed to know. The case law of the ECHR puts the safety of the refugee in any case ahead of the constraints that illegal and criminal behaviors on their side may create for receiving communities. These rules are problematic for various reasons. First, states may encounter serious financial and managerial burdens when offering accommodation and means of subsistence to refugees and, possibly, even free legal assistance/translation and space in correctional facilities in case of conviction. A massive inflow of migrants, and/or specific migrants, may represent a concrete risk for the financial stability, welfare, and security of the citizens of those countries receiving significant numbers of foreigners, who are often unfamiliar with the recipient country's culture and language and hardly employable in remunerative working activities. These persons may logically tend to resort to crime, even petty crime, with a certain frequency, leaving aside the terrorism-related risk. Irrespective of crime, sudden massive inflows of migrants may "aggravate long-standing structural problems and bottlenecks in local infrastructures, such as housing, transportation, and education… Similarly, although this is not usually the case, in some circumstances, large numbers of low-skilled migrants arriving in a particular area may have a negative impact on the local labor market prospects of low-skilled residents already present." 43

These various arguments are not really ignored by the EU authorities themselves, as they represent part of the underlying rationale for Decision EU/2015/1601 on the relocation of certain migrants (see below). Nonetheless, the main perspective of EU and international law is, as said, migrant-centric.
We regard the migrant-centered perspective as a possible limitation of the Geneva Convention and, even more, of the case law of the ECHR. In our view, these legal sources may, at least in extreme cases, fail to strike a balance between the legitimate need to offer protection to persons in need and the objective difficulties and limitations that any country may encounter when the inflows exceed its capacity to offer reception and comply with the other consequent obligations.
The points raised above acquire more relevance if one considers that experience has shown that the current international legislation leaves room for abuse. Besides genuine asylum-seekers, there is also a certain number of persons who are actually economic migrants, moving from their country to Europe not in order to escape persecution but in order to find better living standards. These persons are not protected by the Geneva Convention, but they still attempt to present requests for protection, claiming to be political refugees or persons deserving humanitarian protection.
These limitations in international law and case law may also acquire legal relevance. States are entrusted with the task of guaranteeing order, stability, and security to their citizens (see also Article 12 of the International Covenant on Civil and Political Rights). 44 Hence, a challenge arises. What would happen should a state be able to demonstrate that a massive inflow of migrants really creates a hardly manageable financial burden and a risk for the security of its citizens? In such a case, presented here as hypothetical, the national authority may have grounds for declaring that the migrant-centered connotation of those international provisions imposes a disproportionate burden on the recipient state. This burden could in principle go beyond the capacity of the state to comply with its international obligations and, at the same time, fulfill its duty of promoting the welfare and security of its own citizens.

43 Scarpetta, S.: How OECD countries can address the migration backlash, OECD Paper Series, https://core.ac.uk/download/pdf/80784892.pdf, accessed on 1 June 2021.

44 Article 12: 1. Everyone lawfully within the territory of a State shall, within that territory, have the right to liberty of movement and freedom to choose his residence. 2. Everyone shall be free to leave any country, including his own. 3. The above-mentioned rights shall not be subject to any restrictions except those which are provided by law, are necessary to protect national security, public order (ordre public), public health or morals or the rights and freedoms of others, and are consistent with the other rights recognized in the present Covenant. 4. No one shall be arbitrarily deprived of the right to enter his own country.
Obviously, any national authority issuing this type of declaration (and acting on its basis) would have the onus of proving that the inflow of migrants is really disproportionate and prone to create real and hardly manageable dangers for its citizens. Indeed, should this be proved, the national authority could have a legal basis for arguing that those international provisions which basically create unconditional burdens for the state of arrival, irrespective of the state's reasonable capacity to manage the arrivals and guarantee security to its citizens, are themselves legally questionable. These arguments could find a greater audience should a national authority become able to demonstrate that a significant number of migrants is actually attempting to abuse the legal provisions and stay in Europe as long as possible without fulfilling the requirements for asylum. These problems may acquire a certain relevance since some countries (see below) may reasonably end up demonstrating that their actual capacity to face the duties of a responsible state is limited. Reception and accommodation can be costly; suitable spaces are not necessarily easy to find; there is only a limited number of legal experts, translators, and professionals qualified to investigate and, where needed, incriminate/defend; and there is only a limited amount of space in correctional facilities. Inflows of huge numbers of migrants may in principle force some states to deploy considerable resources with great effort. The risks for security cannot be denied considering that, based on common sense, persons without financial resources and liable to pay debts to migrant smugglers can easily turn to crime.
The arguments discussed above may in principle find legal recognition, even if this would not be easy, mostly because concepts like "public security" and "public order" are themselves ill-defined. The balance would be hard to strike. The case law of the ECHR indeed prevents signatory states from escaping their obligations by alleging reasons of public order and security. Nonetheless, the competence of a national state to decide on national security, while vaguely defined, is well embedded in the Treaty on the Functioning of the European Union (Art. 72, see below for a deeper discussion), and some national judicial authorities may even consider it part of the constitutional order of the state. A competent national authority (e.g., a Constitutional Court) may even rule that a state is technically incapable of surrendering its duty to guarantee public order and security. This could, in principle, create a serious conflict between international and supreme national authorities, leaving aside the possibility that any state or group of states might withdraw from the Geneva Convention (as the Convention itself allows) and, in extreme cases, even from the Convention of the Council of Europe. So far, we are talking at a speculative level, but we cannot exclude concrete actions either.
We now add further material to the discussion of the questionability of the existing international and EU rules reviewed in the first section of this work. Some countries may reasonably end up demonstrating that their actual capacity to face the duties incumbent on a responsible state is limited. The most remarkable example is Malta. Malta is a small country with only half a million residents and lies at the external border of the European Union. Malta, reasonably, cannot be expected to be capable of meeting the legal requirements of a responsible state. Indeed, Malta is located in an area crossed by many boats carrying migrants and, in many cases, is likely to be classifiable as the state responsible for 1) offering a Safe Harbor and 2) treating migrants' asylum requests. In view of the numbers of migrants crossing the Mediterranean Sea, it appears self-evident that this small country cannot offer reception, accommodation, and legal assistance to all those who may actually turn up asking for it. The Maltese authorities cannot be expected to monitor the behavior of all migrants/refugees and follow the legal procedures necessary to investigate, incriminate, and sanction migrants in case of necessity. Nonetheless, Malta is still legally compelled to offer assistance to all migrants entering its territory, including its territorial waters, offering a Safe Harbor in case of need. We conclude that the country is theoretically required to meet obligations that it is not capable of meeting.
A state facing unreasonable burdens, like Malta in our example, may object that legal rules cannot impose unrealistic and unreasonable obligations. This is a principle of law already discussed by Fuller, 45 who mentions the actual possibility of compliance as one of the essential characteristics of legal rules. In addition, legal rules should respect the principle of proportionality, which they violate when, in order to achieve a certain aim, they end up imposing disproportionate burdens on their addressees. This principle, together with Fuller's, could help explain the refusals (alleged by the Italian authorities) of the Maltese authorities to comply with their international obligations. To relieve Malta of a serious burden, other EU countries have spontaneously been taking charge of the bulk of those persons for whom Malta should be the responsible state; this solution, however, has not been formalized. In case of a major emergency, Malta would probably put a limit on entries, thus infringing the laws studied in the previous section. However, Malta could invoke the unreasonableness and disproportionality of the burden as a legal justification for its behaviour. In practice, this has allegedly already been done by the Maltese authorities precisely because of Malta's limited capacity to receive migrants (with the other EU member states, in particular Italy, de facto taking the migrants for whom Malta should be responsible). Nonetheless, the solidity of the argument has not been tested in court. 45 Fuller, L.L.: The Morality of Law, rev. edn., Yale University Press, 1969.
As for proportionality, the propensity of a court to declare that specific EU rules violate the principle of proportionality has been tested on various occasions. 46 We identify two cases indicating the Court's attitude towards excusing the violation of disproportionate rules. The first is Case 231/83, where the French Government had set a minimum resale price, thus cancelling the competitive advantage of imported oil and infringing the rules on the free circulation of goods. The French authorities claimed, among other arguments, that removing the minimum price would probably have provoked riots and hardly manageable violent actions by privately organized groups, thus threatening public security. The Court of Justice replied that this justification could not be accepted, as the French authorities themselves had not demonstrated that the riots and demonstrations would have been unmanageable. Another case, more recent and directly related to migration, is Case C-643/15, in which the Governments of Hungary and Slovakia requested the annulment of the obligatory relocation 47 of 150,000 migrants within the EU based on quotas. The two member states claimed, among other arguments, that the related obligations would cause disproportionate burdens. The Court of Justice replied that there was no evidence of any lack of proportionality of the Decision, considering its urgency, its una tantum (one-off) character, and the ambiguity of the numbers. The two cases discussed in this paragraph show that the Court rejected the existence of a threat to public security and of a disproportionate burden in the specific cases, but did not dismiss the validity of such arguments in principle. 48 The case law on Article 72 TFEU does not offer a final answer either, not even after the Court's most recent decision. 49 There is a grey area: the Court has manifested a certain openness to considering a potential threat to public order as a factor relevant to a state's decisions. It has shown a certain reluctance to accept such an argument in concrete cases, but it has not ruled out its potential validity. 48 Article 72 TFEU states that "this Title shall not affect the exercise of the responsibilities incumbent upon Member States with regard to the maintenance of law and order and the safeguarding of internal security."
Besides the Court of Justice, as said, some national constitutional courts could question the conformity between 1) the unconditional and migrant-centred obligations of international and EU law and 2) national constitutional provisions. Such a pronouncement could break important international equilibria.
The analysis of the last paragraphs re-opens the question of the legality of the actions of the Hungarian and Italian authorities. Obviously, these authorities would bear the onus of proving that the constraints to which they claim to have been exposed were really disproportionate and unmanageable in practice. Should they succeed in this task, they could then invoke defects in the very international and EU laws they were expected to respect and, eventually, try to rely on Article 72 TFEU. It is important to recall that the Court has been strict in its scrutiny both of the actual danger to public order and of the violation of proportionality. Nonetheless, the avenue has not been closed.
Besides purely legal arguments, another important direction is worthy of consideration. EU and international authorities have an imperfect capacity to enforce decisions that national authorities refuse, for political reasons, to obey. Migrant-centrism is politically very unpopular and is attacked by various political parties interested in maximizing their consensus. A significant part of the electorate has developed, rightly or wrongly, the perception that massive migration generates too many problems and that existing international standards should be reviewed. On this basis, some national authorities, as in the case of Orban and of the Italian Government, have resorted to extreme forms of preventing or limiting migration, as studied in the previous section. The analysis of the current section has shown that the legality of these actions is, at least in the case of Italy, questionable. Nevertheless (and irrespective of the possibility that the actions of these Governments may finally turn out to be legally valid), there is objectively little that international judicial authorities can do to force these Governments to change their course of action. This is because 1) the European Union does not dispose of real coercive means to enforce the pronouncements of the Court of Justice and 2) the pressure of NGOs and other organizations has not proved effective. Despite all the pressure from these various entities, neither Orban nor the Italian Government in office at the time of the denials changed its course of action (the successive reshufflings of the Italian Government coalitions were, officially, the result of internal dissent within the coalitions, not an imposition from other authorities). 49 [https://europeanlawblog.eu/2020/04/14/coming-to-terms-with-the-refugee-relocation-mechanism/], accessed on 1/7/2021.
This evidence indicates that some Governments may wilfully elude their international legal obligations without any EU or international authority being able, in practice, to coerce them. Their representatives are likely to benefit from an increase in popularity due to the firmness demonstrated towards excessive migration and, consequently, are likely to persist in their action. The combined effect may be a chaotic situation in which migrants are blocked at the terrestrial or maritime borders of the EU.
A solution appears difficult also in view of the fact that both supporters and opponents of national anti-migratory policies present ethically valid arguments. On the one side, the former claim that the rejection of persons likely to need help against war and persecution is unethical and illegal. On the other side, the latter highlight that their countries cannot take responsibility for numbers like those seen in recent years without seriously compromising the reasonable management of resources and the safety of citizens (including against violent and petty crimes). There is general agreement on the necessity of reforming the Dublin Regulation, yet no concrete action has been taken. Moreover, the EU could and should take coordinated action in order to 1) pressure the countries from which migrants depart into effectively controlling their coasts so as to prevent massive departures, 2) ensure that those countries concretely take migrants back when asylum and protection have been denied in the EU, and 3) take or authorize reasonable and proportionate initiatives aimed at guaranteeing the safety (including against violent and petty crimes) of EU citizens in case of massive arrivals. The European Union has demonstrated a certain inertia on all these fronts, despite talks and proposals. This appears extremely unfortunate, as EU border states have now taken individual actions which, legal or illegal as they may be, are de facto suspending the effectiveness of the international rules on asylum and protection. Should the EU concretely take actions like those described at points 1-3 above, member states would have less ground for arguing that they are forced to take individual initiatives questionable in terms of international and EU law.
Further steps appear necessary to update the existing international provisions. The example of Malta reminds us that no state has infinite resources for guaranteeing reception, accommodation, and examination of demands for protection while, at the same time, guaranteeing the safety of its citizens and public order in case of massive arrivals. This is a crucial point: the difficulties that massive inflows of migrants can in principle create for receiving states should not be denied, ignored, or understated by any international body. We are not stating here whether or not the Hungarian and Italian authorities are facing disproportionate difficulties. We simply propose that this type of possibility should be seriously considered by international and EU rules and that appropriate instruments should be devised. Otherwise, as said already, some states could, rightly or wrongly, 1) conclude that the migratory pressure put on them is unreasonable and represents a threat to the security of their citizens and 2) take actions aimed at blocking or reducing the inflows, eventually also going against their formal obligations. This is already happening, as the Hungarian and Italian cases have shown, and it can certainly jeopardize the credibility and effectiveness of the international provisions aimed at protecting persons really deserving protection and asylum.
Overall, there is a need to reform the existing rules, which were devised with the migrant and his or her needs at the centre of the whole system. Certainly, it makes sense to respect those persons who may have imperative reasons, possibly the need to save their lives, for leaving their home countries and applying for asylum in the EU. On the other side, the experience of the last decade has demonstrated that receiving states can also have a limited capacity to guarantee the rights of a massive number of migrants while, at the same time, guaranteeing their own social stability, public order, and the safety of their citizens. International and EU rules should take both sides of the problem into due and proportionate consideration.
CONCLUSION
This paper has analyzed the legality of the individual actions taken by the Italian and Hungarian authorities and has established that, while a clear-cut threshold between legality and illegality is hard to draw, there are elements suggesting that at least the Italian Government has violated the existing international and EU rules. Nonetheless, these rules may themselves be legally problematic when they end up imposing unconditional burdens on responsible states, which may result in disproportionate constraints and a threat to public order and to the safety (including against violent and petty crime) of their citizens. Moving from this argument, some countries have decided that the pressure had become unmanageable and have taken individual actions to deter or limit the inflow. Our paper also suggests that, apart from political pressure, there are few instruments that international and EU authorities can use in practice to coerce recalcitrant states into respecting their international obligations. Moreover, national Governments implementing anti-migratory policies often enjoy a certain support from their constituents, a factor able to reinforce their firmness against massive migration.
We propose that the current legal rules on asylum and protection be updated to 1) consider the constraints that a major inflow of migrants may in principle create for receiving states, at least in some extreme cases, and 2) prevent and react to abuses. In addition, the EU (and eventually also the United Nations) should act so that the home Governments of those migrants without a title to stay in the EU promptly take these persons back. We believe that a reform of the existing rules would weaken the political, and eventually also the legal, basis for individual actions like those of Hungary and Italy. In this way, the existing EU and international rules would become more balanced and, hence, politically stronger; the justification of individual actions like those of the Hungarian and Italian authorities would be more difficult, and there would be less reason for such actions.
Future research may focus on the development of methodologies to determine whether a country is exposed to unmanageable inflows of migrants, taking into account factors like the number of arrivals, the recipient state's managerial and financial resources, the crime rate, and the percentage of crimes committed by migrants. Such methodologies would help assess the legitimacy of the probable future complaints of Governments presenting their country as overburdened.
Strategy to avoid local recurrence in patients with locally advanced rectal cancer
Background To clarify the short- and long-term outcomes of radical surgery after neoadjuvant chemoradiotherapy (NCRT) with TS-1 and irinotecan, which enhances radiosensitivity, in patients with locally advanced rectal cancer. Methods The study group comprised 105 patients with locally advanced rectal cancer who received NCRT followed by radical surgery. NCRT consisted of pelvic radiotherapy (45 Gy in 25 fractions over a period of 5 weeks), S-1 (80 mg/m2) given concurrently for 25 days, and irinotecan (60 mg/m2), given once a week as a continuous intravenous infusion. Radical surgery was performed 8 weeks after treatment. Results A pathological complete response was confirmed in 23.8% of patients. The 5-year recurrence-free survival rate was 79.3%, and the 5-year overall survival rate was 87.1%. Multivariate analysis showed that the following 4 variables were independent predictors of recurrence-free survival: sex (male: p = 0.0172), pre-treatment tumor diameter (< 40 mm: p = 0.0223), histopathological treatment response (grade 0,1: p = 0.0169), and ypN (ypN1: p = 0.1995; ypN2: p = 0.0007). Only ypN was an independent predictor of overall survival (ypN1: p = 0.0009; ypN2: p = 0.0012). Conclusions Our treatment strategy combining TS-1 with irinotecan to increase radiosensitivity achieved a high response rate.
Background
Improved outcomes and the control of local recurrence have long been important goals in the treatment of locally advanced lower rectal cancer. Preoperative radiotherapy has been demonstrated to be associated with better compliance and a lower incidence of complications than postoperative radiotherapy [1]. Many prospective clinical studies have shown that preoperative radiotherapy combined with chemotherapy can significantly decrease the rate of local recurrence [2]. At present, total mesorectal excision (TME) combined with preoperative chemoradiotherapy has become the main standard treatment in Western countries [3]. However, whether preoperative radiotherapy should be given according to a short-course or long-course schedule remains controversial. In addition, to prevent defecatory disturbances and late complications caused by radiotherapy, clinical trials have studied whether neoadjuvant chemotherapy can be given instead of radiotherapy [4].
In Japan, a retrospective study demonstrated that TME plus lateral lymph-node dissection (LLD) can decrease the rate of local recurrence in patients with lower rectal cancer, and the procedure has therefore become standard treatment. A randomized, controlled trial was performed to determine whether TME alone is noninferior to TME plus LLD as standard treatment [5]. The results showed that TME plus LLD was more invasive and associated with a higher incidence of complications than TME alone. However, the noninferiority of TME alone was not demonstrated on long-term follow-up, and the local recurrence rate was significantly lower in the TME plus LLD group. TME plus LLD has thus retained its position as standard treatment [6]. In our study, TME was performed without LLD in patients whose lateral lymph nodes were not swollen before surgery, because the lateral region had been included in the radiation field at the time of NCRT. In the TME trial, preoperative treatment was standard therapy and was shown to be fully effective. In patients with swollen lateral lymph nodes before surgery, LLD should be performed in addition to preoperative therapy to achieve local control.
Prophylactic LLD is considered adequately effective in patients who receive NCRT. In patients with swollen lateral lymph nodes before surgery, we confirmed the treatment response, carefully considered the advantages and disadvantages of LLD, and performed LLD to achieve local control.
S-1 is a combined preparation of tegafur, oteracil, and gimeracil; gimeracil can enhance radiosensitivity, and irinotecan enhances the effectiveness of the 5-fluorouracil converted from tegafur. To further decrease local recurrence and increase the survival rate, we previously designed a regimen that combined LLD with neoadjuvant chemoradiotherapy (NCRT) including S-1, irinotecan, and radiotherapy. We conducted phase 1 and 2 studies and showed that this regimen was safe and effective, with a very low rate of local recurrence [7-9]. However, LLD was associated with an increased bleeding volume, adverse effects on urination and sexual function, and increased difficulty in laparoscopic surgery. To minimize surgical damage and prevent postoperative complications, we designed a new regimen for NCRT: LLD was omitted, and the irradiated region was extended laterally. Therapeutic LLD was performed only in patients with enlarged lateral lymph nodes on preoperative imaging studies, combining dissection and chemoradiotherapy to eradicate local recurrence.
The aim of our study was to evaluate the effectiveness and the safety of this new regimen for NCRT and to clarify the long-term outcomes, particularly with respect to whether the risk of local recurrence was decreased.
Subjects
The study group comprised 105 patients with advanced lower rectal cancer who were treated in our hospital from January 2011 through December 2015 (Fig. 1). Risk factors for recurrence, 5-year disease-free survival (DFS) rates, and 5-year overall survival (OS) rates were studied. Our protocol was approved by the institutional ethics committee of Kitasato University Hospital (Kanagawa, Japan) on June 19, 2017 (B17-063). Written informed consent was obtained from all patients. Eligible patients had to have a histopathologically confirmed diagnosis of previously untreated rectal adenocarcinoma and an Eastern Cooperative Oncology Group performance status of 0 to 3. Pathologic staging was determined according to the 7th edition of the TNM staging system (Union for International Cancer Control). Other inclusion criteria were an age of 20 to 82 years at the time of enrollment and no severe dysfunction of major organs (including the bone marrow, heart, lung, liver, and kidneys). The preoperative diagnosis was based on the results of barium enema examination, colonoscopic examination including histopathological examinations, computed tomography (CT), and magnetic resonance imaging (MRI), and the absence of distant metastasis was confirmed. The height of tumors was not included in the inclusion criteria. Lymph-node metastasis was defined as lymph nodes with a short-axis diameter of 7 mm or greater on MRI.
[Fig. 1 caption: As for the clinical target volume (CTV), the superior border did not go beyond the L5-S1 interspace, the lateral border did not go beyond the outer edge of the lesser pelvic cavity, and the posterior border did not go beyond the pelvic surface of the sacrum (a). TS-1 (80 mg/m2) was given orally after breakfast and dinner on days 1 to 5, days 8 to 12, days 22 to 26, and days 29 to 33. Irinotecan (60 mg/m2/day) was given as a continuous intravenous infusion over a 90-min period on days 1, 8, 22, and 29 (b).]
The site of the tumor was determined by measuring the distance between the lower edge of the tumor and the anal verge before treatment. The pros and cons of anal sphincter preservation were evaluated on the basis of tumor location.
Radiotherapy and chemotherapy
Radiotherapy was administered in a dose of 1.8 Gy once daily, 5 days per week. A total of 25 fractions were administered (total dose, 45 Gy), using a three-field (bilateral and posterior) technique or a four-field (anteroposterior and bilateral) technique. All 3 or 4 fields were irradiated at each session of radiotherapy. Irradiated lymph nodes included the mesorectal (pararectal), internal iliac, and obturator lymph nodes. As for the clinical target volume (CTV), the superior border did not go beyond the L5-S1 interspace, the lateral border did not go beyond the outer edge of the lesser pelvic cavity, and the posterior border did not go beyond the pelvic surface of the sacrum (Fig. 1a). TS-1 (80 mg/m2) was given orally after breakfast and dinner on days 1 to 5, days 8 to 12, days 22 to 26, and days 29 to 33. Irinotecan (60 mg/m2/day) was given as a continuous intravenous infusion over a 90-min period on days 1, 8, 22, and 29 (Fig. 1b). The histologic response of the primary tumor was evaluated according to the grade of the response; the histologic response of the lymph nodes was not evaluated.
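The fractionation and dosing calendar above can be restated as simple arithmetic. The following sketch is a reader aid only, not part of the study protocol; variable names are our own.

```python
# Illustrative restatement of the NCRT schedule described in the text.
# Not part of the study protocol; names are our own.

DOSE_PER_FRACTION_GY = 1.8
FRACTIONS = 25
total_dose_gy = round(DOSE_PER_FRACTION_GY * FRACTIONS, 1)  # 45.0 Gy

# TS-1 (80 mg/m2) oral dosing days: days 1-5, 8-12, 22-26, and 29-33
s1_days = [d for start in (1, 8, 22, 29) for d in range(start, start + 5)]

# Irinotecan (60 mg/m2/day) 90-min infusion days
irinotecan_days = [1, 8, 22, 29]

print(total_dose_gy)      # 45.0
print(len(s1_days))       # 20 oral dosing days
print(irinotecan_days)    # [1, 8, 22, 29]
```

Note how the schedule leaves a rest week (days 13 to 21) between the first and second pairs of dosing weeks, as implied by the day numbers in the text.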
Treatment schedule and criteria for changes in treatment regimens
In our protocol, treatment was temporarily discontinued if grade 3 or higher diarrhea or vomiting developed. Blood and urine tests were performed every week to investigate hematologic and renal toxicity, and the dose was reduced or treatment postponed on the basis of the results. Toxicity was evaluated according to the second edition of the Common Terminology Criteria for Adverse Events (CTCAE), issued by the National Cancer Institute. If toxicity requiring dose reduction occurred within a course of treatment, the dose of irinotecan was decreased by 1 level (20%), and treatment could be resumed. If toxicity requiring dose reduction occurred after the dose had already been decreased by 1 level, chemotherapy was discontinued without further dose reduction.
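The dose-modification rule above is a simple two-step policy: a first toxicity requiring reduction lowers the irinotecan dose by one level (20%), and a second such toxicity leads to discontinuation of chemotherapy. A minimal sketch, with function and variable names of our own invention:

```python
# Sketch of the irinotecan dose-modification rule described in the text.
# Names are our own illustration, not protocol text.

IRINOTECAN_FULL_DOSE = 60.0  # mg/m2/day

def next_irinotecan_dose(current_dose, already_reduced):
    """Return (new_dose, action) after a toxicity requiring dose reduction."""
    if already_reduced:
        # No further reduction is allowed: stop chemotherapy.
        return None, "discontinue chemotherapy"
    return round(current_dose * 0.8, 1), "reduce one level"

dose, action = next_irinotecan_dose(IRINOTECAN_FULL_DOSE, already_reduced=False)
print(dose, action)     # 48.0 reduce one level
dose2, action2 = next_irinotecan_dose(dose, already_reduced=True)
print(dose2, action2)   # None discontinue chemotherapy
```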
Surgery
Open surgery was performed during the first half of the study, and laparoscopic surgery during the second half. In both groups, the autonomic nerves were preserved bilaterally, and TME was performed. Prophylactic lateral lymph-node dissection was not done. In patients who had enlarged lateral lymph nodes with a short-axis diameter of 7 mm or greater on imaging studies before treatment, the ipsilateral middle rectal, internal iliac, and obturator lymph nodes (lateral lymph nodes) were dissected. The distal portion of the rectum was transected while securing a distance of at least 2 cm from the lower edge of the tumor; if this distance could not be secured, abdominoperineal resection was performed. A temporary ileostomy was created in all patients, and the stoma was closed after confirming the absence of anastomotic leakage and stricture 3 to 6 months after surgery.
Evaluation of pathologic specimens
The antitumor effectiveness of NCRT was evaluated histopathologically using serial sections of the resected specimen obtained at 5-mm intervals. The degrees of cancer cell degeneration, necrosis, and fusion were assessed. The evaluation criteria were based on the histopathological classification proposed by Dworak et al. [10] and the Japanese Classification of Colorectal Carcinoma [11]. Grade 1a and grade 1b were similarly classified as Grade 1. The histopathological response of the tumor was evaluated according to the Tumor Regression Grade (TRG) as follows: Grade 0 (no response), no discernible treatment-induced degeneration or necrosis of the cancer cells; Grade 1 (mild response), degeneration, necrosis, or fusion of less than about two-thirds of the cancer cells; Grade 2 (substantial response), marked degeneration, fusion, or disappearance of at least two-thirds of the cancer cells; Grade 3 (complete response), necrosis of all cancer cells or fusion or disappearance of all cancer cells and replacement by granuloma-like or fibrous tissue.
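The four-grade TRG scheme above can be expressed as a mapping from the fraction of cancer cells showing treatment-induced change to a grade. The numeric cut-offs below paraphrase "about two-thirds" and are an assumption for illustration, not a validated scoring tool.

```python
# Illustrative mapping of the Tumor Regression Grade (TRG) criteria onto
# the fraction of cancer cells showing degeneration, necrosis, or fusion.
# Thresholds paraphrase "about two-thirds"; for illustration only.

def tumor_regression_grade(affected_fraction):
    """Return TRG 0-3 for a fraction (0.0-1.0) of affected cancer cells."""
    if not 0.0 <= affected_fraction <= 1.0:
        raise ValueError("fraction must lie within [0, 1]")
    if affected_fraction == 0.0:
        return 0  # no response
    if affected_fraction < 2 / 3:
        return 1  # mild response (Grade 1a/1b merged, as in the text)
    if affected_fraction < 1.0:
        return 2  # substantial response
    return 3      # complete response

print([tumor_regression_grade(f) for f in (0.0, 0.3, 0.8, 1.0)])  # [0, 1, 2, 3]
```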
Follow-up survey
During preoperative chemotherapy and radiotherapy, medical examinations were performed in the Department of Radiology on the days of radiotherapy. Blood samples were obtained before treatment, and irinotecan was administered once per week a total of 4 times. Blood samples were obtained in the outpatient clinic every 2 to 3 weeks after the completion of preoperative treatment. The following variables were assessed in the outpatient clinic: medical history; the results of physical examinations; performance status; carcinoembryonic antigen (CEA) and cancer antigen (CA) 19-9 levels; blood cell counts and serum chemistry; the results of CT of the chest, abdomen, and pelvis; the results of barium enema examinations; and the results of colonoscopic examinations. After the completion of 5 courses of NCRT, barium enema examination, CT of the chest, the abdomen, and the pelvis, rectal MRI, and biopsy of colonoscopic specimens were performed within 8 weeks before surgery to evaluate clinical response. For follow-up after surgery, the patients were evaluated every 3 months during the first 2 years, every 6 months during years 3 to 5, and every 12 months subsequently. Recurrence was diagnosed on the basis of the results of CT, positron emission tomography, MRI, and colonoscopic examinations, including the results of biopsy and cytologic examinations if possible.
Postoperative chemotherapy
Patients with ypN1 or ypN2 disease were given 6 courses of adjuvant chemotherapy with FOLFOX.
Statistical analysis
Descriptive statistics and distributions were calculated for demographic variables. The effects of demographic variables on the 5-year DFS and 5-year OS were investigated as long-term outcomes. DFS was defined as the interval from the date of starting treatment to the date of recurrence. OS was defined as the interval from the date of enrollment to the date of death from any cause. The following 12 demographic variables were studied: sex, age, tumor location, clinical tumor stage, tumor diameter before treatment, CEA level at initial examination, CA19-9 level, whether or not NCRT was completed, surgical procedures, TRG, ypN, and ypCR.
The 5-year DFS and 5-year OS were calculated according to each demographic variable by the Kaplan-Meier method, and a log-rank test was used to evaluate the influence of each variable. Variables with P values of less than 0.1 in the log-rank test were designated as candidate explanatory variables. A Cox proportional-hazards model with stepwise forward selection was used to select variables, with P values of less than 0.1 on the Wald test as the criterion. The hazard ratio and the 95% confidence interval of each explanatory variable in the final model were calculated. Statistical analysis was performed with the use of SPSS version 8.0J (SPSS, Chicago, IL, USA).
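To make the Kaplan-Meier step concrete, here is a minimal product-limit estimator in pure Python. The study used SPSS; the toy data below are invented for demonstration only.

```python
# Minimal Kaplan-Meier sketch illustrating how DFS/OS curves are
# estimated from event and censoring times. Toy data only.

def kaplan_meier(times, events):
    """Return [(time, survival)] at each event time.

    times  : follow-up in months
    events : True if the event (recurrence/death) occurred, False if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        removed = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1 - deaths / n_at_risk  # product-limit update
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Invented example: 6 patients, months of follow-up, event indicator
times = [12, 20, 20, 35, 50, 60]
events = [True, True, False, True, False, False]
print(kaplan_meier(times, events))
```

Censored observations (event = False) leave the curve unchanged but shrink the number at risk, which is what distinguishes the Kaplan-Meier estimate from a naive event fraction.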
Toxicity of NCRT
The demographic characteristics of the patients in our study are shown in Table 1. The completion rate of our regimen was 85.7% (90 patients). Treatment was discontinued in 5 patients, and dose reduction was performed in 10 patients. No patient died of NCRT-related causes. Grade 3 adverse events occurred in 11 patients (10%): 6 patients (6%) had diarrhea, and 5 patients (5%) had neutropenia.
Prognostic factors
The median follow-up period was 52 months (range, 16 to 88). The 5-year DFS rate was 79.3%, and the 5-year OS rate was 87.1%. Recurrence occurred in 20 patients (19.0%). The site of initial recurrence was the lung in 8 patients, the liver in 7 patients, the para-aortic lymph node region in 4 patients, and bone metastasis in 1 patient. No patient had local recurrence.
Univariate analysis was performed to evaluate the influence of each prognostic factor on DFS and OS and to identify candidate prognostic factors (log-rank test, p < 0.1), which were then entered into multivariate analysis (Cox proportional-hazards model; Wald test, p < 0.1). Candidate prognostic factors for DFS were sex, tumor diameter before treatment (≥40 mm vs. <40 mm), clinical tumor stage, TRG (Fig. 2c), ypN (ypN1 vs. ypN0 and ypN2 vs. ypN0) (Fig. 2), and ypCR. Candidate prognostic factors for OS were sex, tumor diameter before treatment, and ypN (Fig. 3, Table 2). Multivariate analysis showed that sex, tumor diameter before treatment, TRG, and ypN were prognostic factors for DFS, whereas only ypN was a prognostic factor for OS (Table 3).
Discussion
In the present study, we examined prognostic factors and the effectiveness of a new regimen consisting of NCRT with S-1 plus irinotecan, TME, and therapeutic LLD in patients with locally advanced rectal cancer and found that NCRT was associated with a low incidence of grade 3 or 4 adverse events. Treatment could be safely performed and had a high completion rate. The ypCR response rate was relatively high (23.8%). It is noteworthy that no patient had local recurrence. The major route of initial recurrence was hematogenous spread to the lung and liver metastasis. The 5-year DFS rate was 79.3%, and the 5-year OS rate was 87.1%, indicating relatively good long-term outcomes. Our regimen was safe with a high response rate and completely inhibited local recurrence, achieving good outcomes. Sex, tumor diameter, TRG, and ypN(+) were identified as prognostic factors, and the long-term outcomes of patients with ypN2 disease were extremely poor.
The safety of CRT depends on factors such as the patient's general condition, the extent and dose of radiotherapy, and the type and dose of chemotherapy. It is generally known that, as compared with surgery alone, the incidence of adverse events is increased by neoadjuvant radiotherapy (NRT) and further increased by adding chemotherapy, leading to decreased treatment completion rates [2, 3, 12-14]. Given that FOLFOX is standard adjuvant chemotherapy for colon cancer, many studies have combined 5-fluorouracil-based chemotherapy with oxaliplatin to enhance the effectiveness of NCRT for rectal cancer. However, preoperative FOLFOX combined with radiotherapy was associated with a significantly increased incidence of adverse events, including death, and a decreased treatment completion rate in patients with rectal cancer [15-18]. Preoperative FOLFOX combined with radiotherapy must therefore be followed up carefully in further studies and cannot be recommended at present. FOLFOX monotherapy, however, is expected to be as effective for rectal cancer as for colon cancer and can therefore be used for preoperative chemotherapy and chemoselection. By contrast, the completion rate of our NCRT regimen was high (85.7%), the incidence of grade 3 or higher adverse events was only 10%, and no patient died. Radiotherapy combined with 5-fluorouracil and irinotecan was therefore considered safe and effective. However, this chemotherapeutic regimen combined with expanded-field radiation was associated with an increased incidence of adverse events, so careful setting of the radiation fields is essential for this regimen.
To further decrease local recurrence and increase the survival rate, we designed a regimen that combined LLD with neoadjuvant chemoradiotherapy (NCRT) consisting of S-1, irinotecan, and radiotherapy. A notable feature of our study was that hematological toxicity was the most common adverse event; in other studies, hematological toxicity was less common and many patients had dermal toxicity. There was no local recurrence, although many patients had hematogenous metastasis, and future measures may be needed to address this. Nonetheless, the DFS and OS rates were good [2,3]. Our TNT protocol was associated with a higher treatment completion rate and a lower incidence of toxicity than those reported in other large controlled studies.
Because the local recurrence rate was decreased by combining chemotherapy with preoperative radiotherapy as compared with radiotherapy alone, the use of 5-fluorouracil-based NCRT has been recommended [1,2,5]. However, even when NCRT was combined with various drugs, including oxaliplatin, irinotecan, and molecular-targeted agents, the local recurrence rate could not be reduced to 0% [16,17]. We combined NCRT with TME and additionally performed therapeutic LLD when lymph-node metastasis was diagnosed on MRI. LLD was performed in 14 patients who had enlarged lateral lymph nodes with a diameter of at least 7 mm on MRI. We believe that this led to the 0% rate of local recurrence, including lateral lymph-node recurrence. Akiyoshi et al. [18] also performed LLD in patients suspected to have lateral lymph-node metastasis on diagnostic imaging studies and reported that the 3-year DFS rate was good even among patients who were positive for lateral lymph-node metastasis. This was an epoch-making study because the presence or absence of lateral lymph-node metastasis, a conventional prognostic factor, did not remain a prognostic factor in patients who received NCRT. Akiyoshi et al. reported that lateral lymph-node metastasis was found at the time of lateral lymph-node dissection in 40% of lateral lymph nodes that had a longest diameter of 7 mm or greater [18]. On the basis of that report, lateral lymph-node dissection was indicated for swollen lateral lymph nodes that had a diameter of 7 mm or greater on pretreatment MRI. Our results and those of Akiyoshi et al. showed that NCRT combined with therapeutic LLD can inhibit local recurrence. On the other hand, our study showed that ypN2 disease was a very strong predictor of hematogenous metastasis and suggested that LLD can inhibit local recurrence and may facilitate selection of the optimal postoperative treatment regimen. We believe that further clinical studies are warranted to prospectively confirm the effectiveness of the strategy of combining therapeutic LLD with a preoperative treatment such as NCRT.
Fig. 3 Multivariate analysis showed that sex, tumor diameter before treatment, TRG, and ypN were prognostic factors for DFS, whereas only ypN was a prognostic factor for OS
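The node-size criterion used to indicate therapeutic LLD can be written as a one-line decision rule. An illustrative Python sketch (the function name is an assumption; the 7 mm threshold is from the text):

```python
def lld_indicated(longest_diameter_mm):
    """Return True when therapeutic lateral lymph-node dissection (LLD)
    is indicated: an enlarged lateral lymph node with a longest diameter
    of at least 7 mm on pretreatment MRI, per the criterion in the text."""
    return longest_diameter_mm >= 7.0

print(lld_indicated(8.2))  # True: node meets the 7 mm threshold
print(lld_indicated(5.0))  # False: below threshold, TME alone
```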
In general, the reported ypCR rate is 17 to 19.2% [13,19]. In our study, however, the ypCR rate in patients who received NCRT was relatively good (23.8%). Given that the ypCR rate is higher after CRT than after preoperative radiotherapy alone, studies have been conducted to examine the rate of ypCR after combining 5-fluorouracil with oxaliplatin, irinotecan, and molecular-targeted agents. Because patients with ypCR had good outcomes, a watch-and-see approach has recently been attempted. Patients who had a CR after CRT were divided into two groups: a watch-and-see group and a TME group. The local recurrence rate, DFS rate, and OS rate were compared between the groups, and one study reported that the differences were not significant [20]. Local resection of lesions that shrank after CRT or chemotherapy has also been attempted. To avoid curative surgery, which negatively affects patients' quality of life, it is essential to perform effective multidisciplinary treatment, accurately diagnose pathological CR, and administer reliable salvage therapy. At present, however, diagnostic imaging studies are not practical for predicting ypCR because the sensitivity and specificity of fluorodeoxyglucose positron emission tomography are low [21].
Table 2 Univariate prognostic analysis in 105 patients with rectal cancer who underwent NCRT (CRT, chemoradiotherapy; CR, complete response)
Multivariate analysis of prognostic factors showed that DFS significantly correlated with sex, tumor diameter, ypN(+), and a TRG of 0 or 1, whereas OS correlated only with ypN(+). Fokas et al. reported, on the basis of the CAO/ARO/AIO-94 trial, that ypN(+) and TRG significantly correlated with DFS and that ypN(+) and lymphatic invasion were significantly related to local recurrence [12]. Tumor diameter, ypN(+), and TRG thus reflect treatment resistance, much as they reflect tumor proliferation. For treatment-resistant disease, specific treatment regimens should be developed on the basis of the detailed genetic characteristics of tumor cells; in other words, new developments in precision medicine may play a key role in improving outcomes. In general, males have higher rates of complications, particularly suture failure, than females, and suture failure has long been known to be a prognostic factor for rectal cancer. Therefore, the development of more precise surgical techniques might contribute to an improvement in the outcomes of surgery [22,23].
Our study demonstrated that a new NCRT regimen combining S-1 with irinotecan, which enhances the radiosensitivity of locally advanced rectal cancer, was a safe preoperative treatment with a high completion rate. Our results also showed that TME combined with therapeutic LLD can completely suppress local recurrence and improve DFS and OS. Further studies of biomarkers that can be used to predict ypCR are needed to improve outcomes, and new treatment regimens should be developed for treatment-resistant patients.
Conclusions
Our treatment strategy combining S-1 with irinotecan to increase radiosensitivity had a high response rate.
Relationship between resistant hypertension and arterial stiffness assessed by brachial-ankle pulse wave velocity in the older patient
Background Resistant hypertension (RH) is a common clinical condition associated with increased cardiovascular mortality and morbidity in older patients. Several factors and conditions interfering with blood pressure (BP) control, such as excess sodium intake, obesity, diabetes, older age, kidney disease, and certain identifiable causes of hypertension, are common in patients resistant to antihypertensive treatment. Arterial stiffness, measured by brachial-ankle pulse wave velocity (baPWV), is increasingly recognized as an important prognostic index and potential therapeutic target in hypertensive patients. The aim of this study was to determine whether there is an association between RH and arterial stiffness. Methods This study included 1,620 patients aged ≥65 years who were referred or self-referred to the outpatient hypertension unit located at a single cardiovascular center. They were separated into normotensive, controlled-BP, and resistant hypertension groups. Home BP, blood laboratory parameters, echocardiographic studies and baPWV were all measured. Results The likelihood of diabetes mellitus was significantly greater in the RH group than in the group with controlled BP (odds ratio 2.114, 95% confidence interval [CI] 1.194–3.744, P=0.010). Systolic BP correlated significantly with the presence of RH (odds ratio 1.032, 95% CI 1.012–1.053, P=0.001), as did baPWV (odds ratio 1.084, 95% CI 1.016–1.156, P=0.015). The other factors were negatively correlated with the presence of RH. Conclusion In patients aged ≥65 years, those with RH had greater vascular stiffness than those with well-controlled hypertension. baPWV increased with arterial stiffness and correlated with BP levels. Strict BP control is necessary to prevent severe functional and structural vascular changes in the course of hypertensive disease.
Introduction
Resistant hypertension (RH) is defined as blood pressure (BP) that remains above goal, despite concurrent use of three antihypertensive agents from different classes. Ideally, one of these three agents should be a diuretic, and all agents should be prescribed at optimal doses. 1 RH is a common clinical condition and is associated with increased cardiovascular mortality and morbidity.
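The definition above can be expressed as a simple screening rule. An illustrative Python sketch (the function name and the 140/90 mmHg goal thresholds are assumptions for illustration, not values prescribed by this article):

```python
def is_resistant_hypertension(sbp, dbp, n_drug_classes, includes_diuretic,
                              goal_sbp=140, goal_dbp=90):
    """Screening rule for resistant hypertension: BP above goal despite
    concurrent use of three or more antihypertensive agents from different
    classes, one of which should be a diuretic. Goal thresholds are
    illustrative defaults."""
    above_goal = sbp >= goal_sbp or dbp >= goal_dbp
    return above_goal and n_drug_classes >= 3 and includes_diuretic

print(is_resistant_hypertension(152, 88, 3, True))  # True: above goal on 3 drugs
print(is_resistant_hypertension(128, 78, 3, True))  # False: BP is controlled
```

This rule deliberately omits dose optimality, which in practice requires chart review rather than a threshold check.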
The prevalence of RH is difficult to estimate because there have been few relevant prospective studies. However, a recent study reported a 12.8% prevalence of RH among the antihypertensive drug-treated population in the USA. 2 Many RH-related issues remain unclear because of difficulties in selecting patients with true isolated RH. In selected RH patients, renal denervation has been shown to control BP by suppressing sympathetic nervous system overactivity. Treatment of RH should focus on pathophysiological mechanisms that prevent good hypertensive control. As a result of concurrent conditions such as diabetes, chronic kidney disease, sleep apnea, and coronary artery disease, this population remains difficult to study with regard to etiology and treatment efficacy.
Arterial stiffening (AS) independently predicts cardiovascular events in patients with hypertension 3-5 and in those with diabetes mellitus. 6 Elevated AS is associated with complex coronary artery disease, 7 as well as numerous cardiovascular risk factors, including age, hypertension, diabetes mellitus, and end-stage renal disease. 8,9 Brachial-ankle pulse wave velocity (baPWV) is the gold standard method for measuring AS. 10 AS and RH share similar associated characteristics, including older age, isolated systolic hypertension, chronic kidney disease, diabetes, left ventricular hypertrophy, female sex, obesity, and excessive dietary salt intake. 11 To the authors' knowledge, only one previous study has investigated the role of AS in RH. 12 Thus, our study may be the first to determine an association between RH and AS by evaluation of baPWV.
Participants and study design
This observational study was conducted between April 2011 and December 2013. Patients were either referred or self-referred to the outpatient hypertension unit at a single cardiovascular center. The study was approved by the local ethics committee. A total of 1,336 consecutive patients were diagnosed with essential hypertension, and 284 normotensive patients were also entered into the study. Patients were selected according to the following inclusion criteria: age ≥65 years; hypertension, defined as at least three measurements of office systolic BP ≥140 mmHg and/or diastolic BP ≥90 mmHg in a sitting position, or previously diagnosed and receiving antihypertensive medication; and normal sinus rhythm. Individuals were excluded if they had any of the following: secondary causes of hypertension, established kidney failure (estimated glomerular filtration rate <15 mL/min/1.73 m 2 ), diagnosed atrial fibrillation, physical or mental impairment, or inability to perform home BP measurement.
Each subject then underwent a comprehensive patient history and physical examination. All subjects were monitored for home BP, body mass index, and waist and hip circumference. Blood samples were obtained for all participants following an overnight fast and prior to taking any medications. baPWV measurements and echocardiography studies were performed within 1 week of the initial evaluation.
Patients were divided into three groups. The diagnostic definition for RH has been detailed previously. Control subjects were selected from those without hypertension or any other exclusion criteria (group 1, n=284). Those with normal BP levels on treatment with less than three drugs were defined as the hypertension group (group 2, n=1,194). Patients who fulfilled the criteria for RH were selected as the RH group (group 3, n=142).
BP measurement method
A clinically validated automatic electronic device (M10-IT; Omron, Tokyo, Japan) was used for all home BP measurements. Patients were instructed on the home BP measurement technique in a 20-minute training session with their nurse. At the end of the session, patients tested the home BP measurement technique through three consecutive self-measurements taken in the presence of the nurse. The patients monitored their BP at home over a 4-day period, taking three morning measurements (every 2 minutes between 6 am and 9 am) and three evening measurements (between 6 pm and 9 pm). The home BP readings were recorded and stored in the device. Mean home BP was calculated by discarding values obtained on the first day as well as the first measurement obtained each morning and evening. The BP measurement protocol was repeated every 3 months over a 1-year period.
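The averaging protocol above (drop the first day, then drop the first reading of every remaining morning and evening session) can be sketched directly. An illustrative Python sketch (the data layout and function name are assumptions, not from the device's software):

```python
def mean_home_bp(days):
    """Mean home BP per the study protocol.
    days: list of (morning, evening) tuples, one per monitoring day;
    each session is a list of three (sbp, dbp) readings.
    Values from the first day, and the first reading of every remaining
    morning and evening session, are discarded before averaging."""
    kept = []
    for morning, evening in days[1:]:   # discard the entire first day
        kept.extend(morning[1:])        # discard first morning reading
        kept.extend(evening[1:])        # discard first evening reading
    n = len(kept)
    mean_sbp = sum(s for s, d in kept) / n
    mean_dbp = sum(d for s, d in kept) / n
    return mean_sbp, mean_dbp
```

For example, if every retained reading over the 4-day period is 130/80 mmHg, the function returns (130.0, 80.0) regardless of what the discarded first readings were.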
Strategy for reaching goal BP
The main objective of hypertension treatment was to attain and maintain the desired BP goal. Treatment started with one drug, and a second drug was added before the maximum recommended dose of the initial drug was reached. If goal BP was not achieved with two drugs, both were titrated up to their maximum recommended doses. If the BP goal was still not achieved, a third drug, including a diuretic, was added, selected specifically to avoid combined angiotensin-converting enzyme inhibitor and angiotensin receptor blocker use. The third drug was titrated up to the maximum recommended dose to achieve the BP goal (home daytime mean BP <135/85 mmHg).
Definitions
Diabetes mellitus was defined as a previously diagnosed condition, a prescribed diet, use of antidiabetic medication, or a fasting venous blood glucose level of ≥126 mg/dL on two occasions. Dyslipidemia was defined as a previously diagnosed condition, use of lipid-lowering agents, elevated plasma total cholesterol (≥200 mg/dL) and/or triglycerides (≥150 mg/dL), or a low high-density lipoprotein level (<40 mg/dL).
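The dyslipidemia definition reduces to a disjunction of criteria. An illustrative Python sketch (function and argument names are assumptions; thresholds in mg/dL follow the definition in the text):

```python
def has_dyslipidemia(total_chol, triglycerides, hdl,
                     diagnosed=False, on_lipid_lowering=False):
    """Dyslipidemia per the study definition: prior diagnosis, use of
    lipid-lowering agents, total cholesterol >=200 mg/dL and/or
    triglycerides >=150 mg/dL, or HDL <40 mg/dL."""
    return (diagnosed or on_lipid_lowering
            or total_chol >= 200 or triglycerides >= 150 or hdl < 40)

print(has_dyslipidemia(185, 120, 55))  # False: all lipid values within limits
print(has_dyslipidemia(185, 120, 38))  # True: low HDL alone suffices
```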
Assessment of aortic stiffness
baPWV was measured using a volume-plethysmographic apparatus (Form/ABI; Colin Co Ltd., Komaki, Aichi, Japan). The methodology details have been previously described. 13 Briefly, this device simultaneously measures bilaterally formed brachial and tibial arterial pressure waves, the lead I electrocardiogram, and a phonocardiogram. With the patient in the supine position, occlusion cuffs were connected to both plethysmographic and oscillometric sensors that were placed around both arms and ankles. All cuffs were then inflated until the brachial and tibial arteries were completely occluded and deflated. Arterial pressure waveforms were digitized at 1,200 Hz for brachial arterial pressure waves and at 240 Hz for tibial arterial pressure waves. Time differences between brachial and ankle arterial pressure waves (ΔT) were examined according to wave front velocity theory. Distances between the brachium and ankle (D) were calculated based on anthropometric data for the Japanese population. Finally, the baPWV was calculated as D/ΔT, thereby not only reflecting aortic stiffness but also leg muscular artery stiffness. Thus, the baPWV is a global AS measure reflecting both elastic and muscular arterial properties.
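The final computation is a simple quotient, baPWV = D/ΔT. An illustrative Python sketch; note that the device derives D from height-based anthropometric regression equations, which are proprietary, so here the path length is supplied directly (an assumption):

```python
def bapwv(distance_cm, delta_t_s):
    """Brachial-ankle pulse wave velocity (cm/s) = D / ΔT.
    distance_cm: brachium-to-ankle path length D, here given directly
    (the apparatus estimates it from anthropometric data).
    delta_t_s: time difference ΔT between the brachial and tibial
    arterial pressure wave fronts, in seconds."""
    return distance_cm / delta_t_s

# e.g. an 84 cm path difference traversed with a 0.06 s delay:
print(bapwv(84.0, 0.06))  # 1400.0... cm/s
```

A reading above the study's later-reported cut-off of 1,803 cm/s would be taken as elevated stiffness.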
Echocardiographic studies
Patients were then taken to the echocardiography laboratory and imaged in the left lateral decubitus position using a Philips iE33 ultrasound system (Philips Healthcare Systems, Eindhoven, the Netherlands) equipped with a multifrequency transducer. A complete echocardiographic study was performed using standard views and techniques. M-mode echocardiograms were obtained by two-dimensional guided echocardiography using a transducer with frequency range of 3-5 MHz. The mean of two M-mode measurements obtained by two different investigators was used. Left ventricular mass was subsequently calculated using Devereux's method. 14 The left ventricular mass index was calculated as the left ventricular mass divided by body surface area.
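Devereux's method, cited above, estimates LV mass from M-mode diameters. A sketch of the widely used corrected Devereux formula, LVM (g) = 0.8 × 1.04 × [(IVSd + LVIDd + PWTd)³ − LVIDd³] + 0.6, given here from general echocardiography practice rather than from this paper (the function names are assumptions):

```python
def lv_mass_devereux(ivsd_cm, lvidd_cm, pwtd_cm):
    """Left ventricular mass (g) by the corrected Devereux formula:
    0.8 * 1.04 * [(IVSd + LVIDd + PWTd)^3 - LVIDd^3] + 0.6
    IVSd: interventricular septum thickness, LVIDd: LV internal
    diameter, PWTd: posterior wall thickness, all end-diastolic, in cm."""
    return 0.8 * 1.04 * ((ivsd_cm + lvidd_cm + pwtd_cm) ** 3
                         - lvidd_cm ** 3) + 0.6

def lv_mass_index(lv_mass_g, bsa_m2):
    """LV mass index (g/m^2) = LV mass / body surface area."""
    return lv_mass_g / bsa_m2
```

For example, with IVSd = 1.0 cm, LVIDd = 4.8 cm and PWTd = 1.0 cm, `lv_mass_devereux` gives roughly 170 g, which a 1.8 m² BSA indexes to about 95 g/m².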
Statistical analysis
Numeric data are presented as the mean and standard deviation and categoric data are presented as frequencies and percentages. One-way analysis of variance and chi-square tests were used for comparisons between the three groups. A multivariate logistic regression analysis was carried out for assessing odds ratios for factors related to the three groups.
A receiver operating characteristic (ROC) curve was then used to assess the relationship between baPWV and RH. The cut-off value was determined by maximizing the sum of sensitivity and specificity. The statistical analysis was performed using Statistical Package for the Social Sciences version 20.0 software (IBM Corp., Armonk, NY, USA). A P-value <0.05 was considered to be statistically significant.
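The cut-off selection described here (maximizing the sum of sensitivity and specificity, i.e. a Youden-style criterion) can be sketched in a few lines. An illustrative Python sketch, not the SPSS procedure the authors used:

```python
def best_cutoff(values, labels):
    """Choose the cut-off maximizing sensitivity + specificity.
    values: continuous test results (e.g. baPWV in cm/s);
    labels: 1 = resistant hypertension, 0 = not.
    A subject tests positive when value >= cut-off."""
    best = None
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if best is None or sens + spec > best[1]:
            best = (c, sens + spec)
    return best[0]

# A perfectly separable toy sample picks the boundary value:
print(best_cutoff([1500, 1600, 1700, 1900, 2000, 2100],
                  [0, 0, 0, 1, 1, 1]))  # 1900
```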
Results
Among the 1,620 patients enrolled in this study, 284 were defined as normotensive (group 1); 1,194 patients on treatment with less than three drugs were defined as the hypertension group (group 2); and 142 patients were defined as the RH group (group 3). The baseline characteristics of the study population and of each patient group are summarized in Table 1. Group 1 (normotensive individuals) contained a higher proportion of men, and showed a lower mean body mass index as well as lower rates of diabetes mellitus, smoking, dyslipidemia, cerebral vascular accident, metabolic syndrome, left atrial enlargement, and left ventricular hypertrophy than the other groups. Group 1 individuals also had lower systolic BP, diastolic BP, and baPWV than the other groups. No differences in age, body surface area, or proportion of individuals with chronic obstructive pulmonary disease were seen between the three groups. Group 3 subjects had higher systolic BP, diastolic BP, and baPWV than the other two groups. The laboratory results are summarized in Table 2. Group 1 had a lower left ventricular end diastolic diameter, lower left ventricular end systolic diameter, lower interventricular septum diameter, lower left ventricular mass index, and a higher aortic root diameter than the other groups. No differences in total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, serum creatinine, estimated glomerular filtration rate, glycated hemoglobin, uric acid, left ventricular ejection fraction, posterior wall diameter, and left atrium diameter were noted between the groups. However, hemoglobin levels were significantly lower in group 3 than in group 1.
Major antihypertensive and antihyperlipidemic drug categories are summarized in Table 3. The frequently prescribed antihypertensive agents were angiotensin receptor blockers and calcium channel blockers. In group 3, 57% of patients used diuretics, and statins were used by 22.7% of patients in group 2 and 29.6% of those in group 3 (not statistically significant).
We carried out multivariate logistic regression analysis in order to identify factors related to successful BP control. Comparison of baPWV according to patient group is shown in Figure 1. A significant increase in baPWV was observed in the hypertensive groups (groups 2 and 3), and the highest baPWV was in group 3. Using the ROC curve, we determined the optimal cut-off value of baPWV for predicting the presence of RH (Figure 2). The cut-off value of baPWV, set at 1,803 cm per second, had a sensitivity of 63.4%, a specificity of 67.2%, and an area under the ROC curve of 0.687 in predicting RH.
Discussion
According to the Framingham Heart Study, approximately 60% of the population has hypertension by the age of 60 years, and about 65% of men and about 75% of women have the disease by 70 years. The elderly are also more likely to suffer from the complications of high BP and are more likely to have uncontrolled hypertension. Compared with younger patients with similar BP, elderly hypertensive patients have lower cardiac output, higher peripheral resistance, wider pulse pressure, lower intravascular volume, and lower renal blood flow. These age-related pathophysiological changes must be considered when treating hypertension in the elderly. Most elderly hypertensive patients with RH have multiple comorbidities, and need multiple drugs to control their BP. A decade ago, in a meta-analysis of more than 15,000 patients aged 62-76 years, Staessen et al 15 showed that treating isolated systolic hypertension substantially reduced morbidity and mortality rates. Another large-scale meta-analysis demonstrated the relevance of BP to cardiovascular mortality in the population aged 40-89 years, but the contribution of high BP to cardiovascular mortality decreases with advancing age. 16 Further, a 2011 meta-analysis of randomized controlled trials in hypertensive patients aged 75 years and older concluded that treatment reduced cardiovascular morbidity and mortality rates and the incidence of heart failure, even though the total mortality rate was not affected. 17 Opinion on treating the very elderly (≥80 years) was divided until the results of the Hypertension in the Very Elderly Trial 18 were published in 2008. This study documented major benefits of treatment in the very elderly age group as well.
To our knowledge, this is the first study showing a direct relationship between RH and AS by assessment of baPWV. We observed increased AS in patients with RH when compared with subjects who had controlled hypertension. This study showed that RH patients had greater numbers of risk factors than BP controlled patients, including diabetes mellitus, dyslipidemia, metabolic syndrome and baPWV. Multivariate logistic regression analysis showed that diabetes mellitus and baPWV were significantly related to the presence of RH. Other factors correlated negatively with the presence of RH. In two large studies, AS predicted future development of hypertension in normotensive subjects.
The first of these trials was the Atherosclerosis Risk in Communities study in middle-aged subjects (aged 45-64 years), in which 6,992 normotensive subjects were followed over 6 years. 19 AS was assessed by carotid artery diameter using high-resolution B-mode ultrasound and was found to significantly predict future hypertension. Each standard deviation increase in AS correlated with a 15% greater risk of future hypertension, independent of established risk factors and BP levels. However, this trial was criticized because of its nonadjusted analysis. Since most determinants of AS are also risk factors for hypertension, it is important to verify the predictive value of AS with regards to future hypertension remaining after adjustment for these risk factors. The second trial assessed 2,571 normotensive subjects (aged 35-93 years) who were followed up for 4 years. AS was measured using aortic strain and distensibility parameters. 20 Aortic stiffness was determined by M-mode echocardiography using the polynomial regression analysis technique, 21 which calculated aortic systolic and diastolic diameters using standard equations for aortic strain, distensibility, and stiffness index (β). Aortic stiffness in normotensive individuals then predicted future hypertension after correcting for other risk factors by multiple linear regression modeling. This association was noted in both young and old subjects of both sexes.
AS occurs as a result of structural changes in connective tissue proteins within the endothelial and smooth muscle cells of the tunica media in the arterial wall, which are potentially related to the risk of development and progression of atherosclerosis. 22 The data from this study demonstrating increased baPWV also reflected stiffening as a result of structural changes in the arterial wall. AS related to hypertension is an insidious and progressive process, and is associated with numerous adverse hemodynamic effects and conditions associated with endothelial dysfunction. 6,23,24 It also sets up a vicious cycle whereby subtle early damage accelerates the rise in systolic pressure, causing further degeneration of aortic function. [25][26][27] This results in a mid-life rise in systolic pressure, subsequently progressing to both isolated systolic hypertension and resistant systolic hypertension. [28][29][30] Poorly controlled hypertension undoubtedly can lead to progressive vascular damage. This effect also sets up a further vicious cycle whereby increasing vascular stiffness leads to increased BP, thus contributing to further AS. This sets the stage for progressive worsening of hypertension and an increasing need for more BP medications. 29,31 Accordingly, the current data support the hypothesis that progressive rigidity in the large arteries is characterized by progression from early to severe stages of hypertension that are difficult to control. 29,31,32 Recognition of this progression is clinically important, as it may allow vascular stiffness indexing to facilitate early identification of patients at risk of RH.
More recently, Daugherty et al 33 confirmed that there was a high rate of cardiovascular events (ie, death, myocardial infarction, heart failure, stroke, chronic kidney disease) in RH patients. Among 205,750 patients with incident hypertension, 1.9% developed RH at a median of 1.5 years from the initial treatment. These RH patients were older, more often male, and more frequently diabetic than patients who did not have RH. Cardiovascular event rates were significantly higher in RH patients as compared with non-RH patients (18.0% versus 13.5%, respectively; hazard ratio 1.47 [CI 1.33-1.62]; P<0.001) after adjusting for patient and clinical characteristics.
Our study also demonstrated that diabetes mellitus is an independent risk factor for RH. RH represents an uncontrolled BP subset that is strongly associated with organ involvement, particularly at the cardiac, renal, and vascular levels. 35 The relationship between RH and cardiovascular disease/target organ damage may be bidirectional. RH may directly cause both development and worsening of target organ damage through persistent elevation of BP. Similarly, cardiovascular target organ damage may worsen the resistance to treatment, rendering hypertension even more difficult to control. 35,36 The prevalence and incidence of RH is comparatively high in patients with renal disease, microvascular disease, left ventricular hypertrophy, aortic stiffness, or cerebrovascular disease, and in those with secondary hypertension. The findings of Dernellis and Panaretou, 20 as well as those of Liao et al 19 together with earlier studies, provide support for the bidirectional interaction of AS and hypertension. Numerous lifestyle and pharmacological interventions are effective for reducing AS. Furthermore, its early diagnosis with noninvasive techniques before development of RH or cardiovascular complications may identify individuals at risk at a time when lifestyle intervention may be useful.
Conclusion
We have demonstrated that, in older patients, those with RH have greater vascular stiffness than hypertensive patients with well-controlled BP. Thus, increases in baPWV, reflecting AS, correlate directly with BP levels. It appears reasonable that strict BP control, associated with reduced AS, should be pursued to prevent severe functional and structural vascular changes during the course of hypertensive disease. We also propose that noninvasive modalities evaluating vascular stiffness (ie, baPWV) should be used in clinical practice to stratify cardiovascular risk.
Disclosure
The authors report no conflicts of interest in this work.
Two new records of the genus Icerya Signoret, 1875 (Hemiptera, Coccomorpha, Monophlebidae) from Oman
Two species of the family Monophlebidae (Hemiptera: Coccomorpha) are recorded for the first time from Oman. Icerya purchasi and Icerya seychellarum occur in northern Oman in Al-Jabel Al Akhdher, while Icerya aegyptiaca occurs in southern Oman in Dhofar. Icerya purchasi and I. seychellarum caused considerable damage on Punica granatum, Juglans regia, Ziziphus spina, Ficus carica, Acacia sp. and Nerium oleander. The populations of I. purchasi and I. seychellarum were considered to be exotic pests rather than aggressive native pests. These new records will be useful in the future, as they provide a baseline for researchers interested in these species or in their relationships with other species, whether of the same genus or of other genera, that may be reported in the future.
Introduction
The genus Icerya Signoret, 1875 of the tribe Iceryini Cockerell, 1899 belongs to the family Monophlebidae Morrison, 1928 (Hemiptera: Coccomorpha) (Unruh & Gullan, 2008b). Thirty-five species worldwide are included in this genus; they are commonly known as fluted scales because of the fluted appearance of the ovisac (Moghaddam et al., 2015). Most species of the family Monophlebidae are relatively polyphagous (Ben-Dov, 2005). Some iceryine species, when introduced to new areas without their adapted natural enemies, can proliferate and become serious plant pests (Kondo et al., 2016); for example, Icerya aegyptiaca (Douglas) in the Ryukyu Islands (Japan) (Uesato et al., 2011). Icerya purchasi Maskell has been introduced into other parts of the world through global trade, arriving in California (USA) on Acacia plants around 1868 or 1869 (Kollár et al., 2016). Icerya aegyptiaca is common and widely distributed in the Afrotropical, Australasian, Oriental and Palaearctic regions, and was probably imported across the southeastern borders (Watson & Malumphy, 2004). Unruh & Gullan (2008a, 2008b) suggested that I. aegyptiaca is native to either the Australasian or the Indo-Malayan biogeographic region and that I. purchasi is native to Australia or New Zealand. Rodolia cardinalis (Mulsant) (Coleoptera: Coccinellidae) has successfully reduced I. purchasi populations in many countries (Caltagirone & Doutt, 1989). Trials were also conducted to colonize the parasitic fly Cryptochetum iceryae Williston (Diptera: Cryptochetidae) on the cottony cushion scale. Both of these natural enemies showed high efficiency in the control of the cottony cushion scale owing to their short generation time (4-6 weeks) and host specificity, attacking only the cottony cushion scale (Grafton-Cardwell & Flint, 2003).
The list of species presented for Oman in this paper is necessarily incomplete, because most specimens were collected from only a few locations in the south and north of Oman, and the collection methods used were limited. However, the goal of this paper is to further the faunistic study of the Monophlebidae of Oman. Here we review all the available material of Monophlebidae, reporting three species, two of which are new records for Oman.
Material and methods
Specimens of I. purchasi were collected from private citizens' farms and from wild plants in three villages in Al-Jabel Al Akhdar (Wadi Beni Habeeb, Sayq and Al-Manakher) on Punica granatum, Juglans regia, Ziziphus spina, Ficus carica, Acacia sp. and Nerium oleander. Specimens of I. seychellarum were also collected from private citizens' farms at one location in Al-Jabel Al Akhdar (Wadi Beni Habeeb) on Juglans regia, Psidium guajava and Punica granatum. Al-Jabel Al Akhdar is situated in the Ad Dakhiliyah Governorate and is part of the Al Hajar Mountains range, reaching elevations of 2,950 m. The typical climate of Al-Jabel Al Akhdar is warm and moderately dry with mild winters, an average annual temperature of 5-15°C and average annual precipitation of 10-48 mm.
Specimens of Icerya aegyptiaca were collected in Dhofar (Ein Hamran) on Prosopis juliflora and from a private house garden on Boswellia sacra. Ein Hamran is situated in the Dhofar Governorate and is part of the Dhofar Mountains. Its typical climate is warm with mild winters, an average annual temperature of 19-27°C and average annual precipitation of 1-25 mm.
The identification of adult females was based on the identification guide to species of the scale insect tribe Iceryini (Coccoidea: Monophlebidae) by Unruh & Gullan (2008b). The damage level on the host plants was evaluated from counts of adult female individuals.
Results
In this study, three species of the family Monophlebidae are recorded from Oman, of which Icerya purchasi and Icerya seychellarum are new records for the fauna of Oman.
Biological notes:
The cottony cushion scale can be easily distinguished from other scale insects. The mature females (hermaphrodites) have bright orange-red, yellow, or brown bodies (Ebeling, 1959). The body is partially or entirely covered with yellowish or white wax. The most conspicuous feature is the fluted egg sac, which is frequently about 2.5 times longer than the body. The egg sac contains around 1000 red eggs (Gossard, 1901).
Males are winged, with a dark red body and dark-colored antennae, and are rare. Dark whorls of setae extend from each antennal segment except the first (Ebeling, 1959). Interestingly, the female is always a hermaphrodite, with both testes and ovaries. If self-fertilization occurs, only hermaphrodites are produced. However, Ebeling (1959) reported that when a hermaphrodite mates with a male, both males and hermaphrodites are produced.
Depending on temperature, eggs need from a few days to two months to hatch. The newly hatched nymphs are bright red with dark antennae and brown legs. The antennae are six-segmented. This is the primary dispersal stage: nymphs can reach new locations by wind or by crawling to nearby plants. After three molts the adult begins to deposit eggs and secretes the conspicuous egg sac. As the egg sac is formed, the scale's abdomen becomes more tilted until the scale appears to be standing on its head (Kollár et al., 2016).
Distribution in Oman:
The species is so far known only from Dhofar (Ein Hamran).
Icerya seychellarum
Biological notes: Adult female usually with 11-segmented antennae. The dorsal body with transverse rows of white to yellowish waxy secretion and marginal tufts; glassy filaments projecting from margins and medial dorsum; ovisac projecting from posterior end of body, covered dorsally by a series of long, cylindrical waxy tassels.
Distribution in Oman:
The species is so far known only from Al-Jabel Al Akhdar (Wadi Beni Habeeb).
General Distribution: The species occurs in tropical and subtropical areas and is recorded from South-East Asia, Africa, Southern Europe and Australia (García Morales et al., 2016).
Discussion
The cottony cushion scale I. purchasi and the Egyptian icerya I. aegyptiaca are the most important Icerya species in terms of invasiveness (Liu & Shi, 2020). Icerya purchasi is reported from 78 families and 190 genera of plants, I. aegyptiaca from 59 families and 113 genera, and I. seychellarum from 58 families and 128 genera (García Morales et al., 2016). They are pests of several ornamentals and crops in Oman, such as Punica granatum, Juglans regia, Ziziphus spina, Ficus carica, Acacia sp., Nerium oleander and Boswellia sacra. The host range of I. purchasi is wide, and it is considered more aggressive than I. seychellarum and I. aegyptiaca. Icerya purchasi represents one of the most important examples of successful biological control through the release of the predator Rodolia cardinalis (Mulsant) (Coleoptera: Coccinellidae) (Lo Verde et al., 2020). A predacious ladybird beetle, Rodolia argodi, was observed feeding on I. aegyptiaca in the Dhofar area, and a single specimen of R. argodi was collected from vegetation in Al-Jabel Al Akhdar, suggesting predation there as well. The populations of I. purchasi and I. seychellarum are considered exotic, introduced on infested plants rather than being aggressive native pests, as there were no previous records or observations in Oman. Icerya purchasi and I. seychellarum are therefore reported here as first records for Oman; both infest economically important crops and plants.
An extremely metal-deficient globular cluster in the Andromeda Galaxy
Globular clusters (GCs) are dense, gravitationally bound systems of thousands to millions of stars. They are preferentially associated with the oldest components of galaxies, and measurements of their composition can therefore provide insight into the build-up of the chemical elements in galaxies in the early Universe. We report a massive GC in the Andromeda Galaxy (M31) that is extremely depleted in heavy elements. Its iron abundance is about 800 times lower than that of the Sun, and about three times lower than in the most iron-poor GCs previously known. It is also strongly depleted in magnesium. These measurements challenge the notion of a metallicity floor for GCs and theoretical expectations that massive GCs could not have formed at such low metallicities.
Globular clusters (GCs) are roughly spherical agglomerations of thousands to millions of stars, bound by their mutual gravity, and have central densities that can exceed 10^6 solar masses per cubic parsec (M⊙ pc⁻³) (1). GCs formed early in the history of the Universe and therefore record the early stages of galaxy formation and evolution. The nearest neighboring spiral galaxy, the Andromeda Galaxy, also known as Messier 31 (M31), has a system of GCs that align spatially and kinematically with stars in the outer parts of the galaxy. The GCs in the outer parts of M31 appear to belong to at least two kinematically distinct subsystems that were accreted separately (2).
The GC systems in most galaxies are dominated by clusters with low abundances of elements heavier than hydrogen and helium ("metals") relative to the composition of the Sun.
However, there appears to be a deficit of GCs at the very lowest metal abundances ("metallicities"), with essentially no GCs known below [Fe/H] = −2.5 (4), where square brackets denote the abundance ratios of the elements, relative to the solar photospheric composition, on a logarithmic scale. The number of iron atoms per hydrogen atom in the most metal-poor GCs is thus about 300 times lower than in the Sun. The notion of a metallicity floor for GCs at [Fe/H] = −2.5 is supported by observations of GCs in several external galaxies (5), and various explanations have been suggested. The correlation between mass and metallicity for galaxies in the early Universe might set a minimum metallicity for formation of GCs that are sufficiently massive to survive until the present day, or the formation of massive GCs could be suppressed at low metallicities due to inefficient gas cooling (3,(5)(6)(7).
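Since [Fe/H] is a base-10 logarithm of the Fe/H number ratio relative to the Sun, the linear depletion factors quoted in the text follow directly from the bracket values; a minimal sketch (function name is illustrative, not from the paper):

```python
def linear_depletion(fe_h):
    """Factor by which the Fe/H number ratio lies below the solar value,
    for a logarithmic abundance [Fe/H] = log10((Fe/H) / (Fe/H)_sun)."""
    return 10.0 ** (-fe_h)

print(round(linear_depletion(-2.5)))   # proposed floor: ~316, i.e. "about 300 times lower"
print(round(linear_depletion(-2.91)))  # EXT8: ~813, i.e. "about 800 times lower"
```

The 0.5 dex gap between the floor and EXT8 corresponds to a further factor of about three, matching the abstract's "about three times lower than in the most iron-poor GCs previously known".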
A metallicity floor for GCs would thus have implications for cluster-and star formation and for the build-up of metals in galaxies in the early Universe.
Because the metallicity distributions of both GCs and individual stars decline steeply towards low metallicities and are poorly constrained, it remains unclear how statistically significant the proposed metallicity floor is. In M31, three clusters with metallicities that may fall in the range −2.8 < [Fe/H] < −2.5 are known (8), but the uncertainties are large (0.3-0.4 dex) and the metallicities may lie well above the floor. Similarly, three GCs in the Sombrero galaxy may have metallicities below [Fe/H] = −2.5 (9), but the uncertainties on the spectroscopic measurements are large and the red colors of these clusters suggest higher metallicities.
We investigate the globular cluster RBC EXT8 (hereafter EXT8) in M31, located at right ascension 00h 53m 14.53s, declination +41° 33′ 24.5″ (J2000 equinox) according to the Revised Bologna Catalogue (10). From a kinematic analysis (2), EXT8 belongs to the smoothly distributed component of the M31 halo and lies at a projected distance of 27 kpc from the galaxy center.

Figure 1 shows a color-magnitude diagram for GCs in M31 (11). With an apparent magnitude in the g-band of g = 15.87, EXT8 is among the brighter GCs, and its integrated-light color with respect to the u-band (u − g = 1.11) is less red than that of most other GCs, suggesting a low metallicity. Previous low-resolution spectroscopy yielded an age ≥ 8 Gyr. The Hβ line can be used as an age indicator in the spectra of GCs (17); while the blue color of EXT8 could, in principle, be caused by a younger age, Figure 2 shows no discernible difference in the strengths or shapes of the Hβ lines in the two spectra, indicating that EXT8 is similarly old and must therefore be a metal-poor GC.

Figure 3 shows two metallicity-sensitive features. Figure 3A shows the Fe I feature near 4957 Å (actually a blend of several Fe I lines, of which the two strongest are marked), which is much weaker in the spectrum of EXT8 than in M15. Figure 3B shows two of the three lines of the Mg I triplet (Fraunhofer's b feature) at 5167 Å and 5173 Å. The third line, at 5184 Å, falls in the gap between the two detectors of UVES, but is included in the HIRES spectrum. The Mg I lines, as well as other lines visible in this region of the spectra, are much weaker in the EXT8 spectrum.
To quantify these results, we analyzed the EXT8 spectrum using a spectral fitting technique used in previous studies of extragalactic GCs (16,18). Figure 3 shows the best-fitting model spectrum for M15 (16), with an iron abundance of [Fe/H] = −2.39 ± 0.02. This model spectrum is based on a color-magnitude diagram (CMD) of M15 (19). We do not have spatially resolved data to empirically build a CMD for EXT8, so we substituted stellar models (20) with a metal fraction chosen to self-consistently match that derived from the spectral modeling. We found an iron abundance of [Fe/H] = −2.91 ± 0.04 for EXT8 from model fitting of the wavelength range 4400-6200 Å (19). These model spectra are also shown in Figure 3. We tested the assumptions required for the input CMD and found that they do not substantially affect this measurement (19). We conclude that EXT8 is about 0.5 dex more metal-poor than the most iron-poor GCs previously known.

Within the standard paradigm of hierarchical galaxy assembly, metal-poor GCs are expected to have formed in the early Universe in low-mass galaxies that merged to form larger galaxies (6,7,34). The correlation between the mass and metallicity of galaxies therefore imprints a maximum mass for a GC that could form with a given metallicity.

Figure 1: Color-magnitude diagram for globular clusters in M31 (11). No correction for dust reddening has been applied. EXT8 is marked with a large square and has one of the bluest u − g colors among the GCs in M31. Typical one-sigma error bars are shown on the right.

Author contributions: JPB secured the observing time for this project, all authors contributed to the planning of the observations, and the inclusion of EXT8 as a target was suggested by AJR. AW conducted the observations and SSL carried out the data reduction and analysis and drafted the paper. All authors assisted in the interpretation of the results and writing of the paper.
Data and materials availability: The average measured abundances are listed in Table S2 and individual measurements are in Tables
Excision and Primary Anastomosis for Short Bulbar Strictures: Is It Safe to Change from the Transecting towards the Nontransecting Technique?
Objective To explore whether it is safe to change from transecting excision and primary anastomosis (tEPA) towards nontransecting excision and primary anastomosis (ntEPA) in the treatment of short bulbar urethral strictures and to evaluate whether surgical outcomes are not negatively affected after introduction of ntEPA. Materials and Methods Two hundred patients with short bulbar strictures were treated by tEPA (n=112) or ntEPA (n=88) between 2001 and 2017 in a single institution. Failure rate and other surgical outcomes (complications, operation time, hospital stay, catheterization time, and extravasation at first cystography) were calculated for both groups. Potentially predictive factors for failure (including ntEPA) were analyzed using Cox regression analysis. Results Median follow-up was 76 months for the entire cohort, and 118 and 32 months for tEPA and ntEPA, respectively (p<0.001). Nineteen (9.5%) patients suffered a failure, 13 (11.6%) with tEPA and 6 (6.8%) with ntEPA (p=0.333). The high-grade (grade ≥3) complication rate was low (1%) and not higher with ntEPA. Median operation time, hospital stay, and catheterization time with tEPA and ntEPA were, respectively, 98 and 87 minutes, 3 and 2 days, and 14 and 9 days. None of these outcomes were negatively affected by the use of ntEPA. Diabetes and previous urethroplasty were significant predictors for failure (hazard ratios 0.165 and 0.355, respectively), whereas ntEPA was not. Conclusions Introduction of ntEPA did not negatively affect short-term failure rate, high-grade complication rate, operation time, catheterization time, and hospital stay in the treatment of short bulbar strictures. Diabetes and previous urethroplasty are predictive factors for failure.
Introduction
The International Consultations on Urologic Diseases (ICUD) recommends urethroplasty by excision and primary anastomosis (EPA) for short and isolated bulbar urethral strictures as it provides an excellent success rate (93.8%) with a low complication rate [1]. After EPA, the diseased segment is entirely removed and replaced by the patient's own healthy urethra without the need for urethral substitution material (grafts or flaps), which is probably the reason for the high success rate. During the "classic" transecting EPA (tEPA), the corpus spongiosum containing the urethra is transected full thickness at the level of the stricture [2]. As EPA only requires excision of the narrowed urethra and the surrounding spongiofibrosis, a full-thickness transection is usually not necessary. To avoid this and to preserve the dual blood supply of the urethra, Jordan et al. introduced the concept of vessel-sparing or nontransecting EPA (ntEPA) [3], later slightly modified by Andrich et al. [4]. This nontransecting variant is an attempt to reduce the surgical trauma of tEPA, and several centers have introduced this technique in their reconstructive repertoire [4][5][6]. A prerequisite for using ntEPA is that the outcomes are at least not inferior to those of the standard technique of tEPA. Case series of ntEPA have a promising short-term success rate of 94.5-100% [3,[5][6][7], which is in line with the composite success rate of 93.8% for tEPA reported by the ICUD [1]. However, indirect comparison of series is hazardous as patient and stricture characteristics, follow-up schedules, and reporting of outcomes might vary among series. Therefore, the primary objective of this study is to evaluate whether the change in practice from tEPA to ntEPA yielded surgical outcomes that are not inferior for the patient. To the best of our knowledge, this is the first paper to report this.
Study Population.
A database was collected of all male patients (n=852) who underwent urethroplasty at Ghent University Hospital starting from 2001 (start of the electronic medical file). Since 2008, this collection was done prospectively. Patients who underwent EPA, either by the transecting or the nontransecting technique, for isolated short bulbar strictures (ranging from the penoscrotal angle up to the urogenital diaphragm) were selected from this database until December 2017. Exclusion criteria were EPA performed for posterior or penile strictures, EPA with concomitant urethroplasty at another part of the urethra, EPA in transgender patients, and EPA in patients on clean intermittent catheterization. All patients underwent preoperative evaluation including history taking (with emphasis on stricture etiology and previous urethral interventions), clinical examination, uroflowmetry, and urethrography. According to our in-house algorithm for treating urethral strictures, EPA is the preferred technique for short (≤3 cm) bulbar strictures [8]. After attendance at a masterclass on urethroplasty we became familiar with the technique and, being convinced of the theoretical advantages of ntEPA, we performed our first cases in November 2011. Starting from January 2012, ntEPA became the standard technique.
Surgical Technique.
A detailed description of the operative techniques is beyond the scope of this article as it has been published previously [6,9]. In brief, the patient is placed in the social lithotomy position, a midline perineal incision is made, and the bulbospongiosus muscle is incised at the midline and dissected away from the corpus spongiosum containing the bulbar urethra. The bulbar urethra is circumferentially detached from the corporal bodies and mobilized from the penoscrotal angle up to the urogenital diaphragm. With tEPA, the perineal body ("centrum tendineum") is incised for further mobilization of the ventral bulbar urethra. With this technique, the spongious tissue is transected full thickness at the level of the stricture which is marked after introduction of a metal sound through the meatus. The narrowed urethra and surrounding spongiofibrosis are fully excised, the healthy urethral ends are spatulated, and a tension-free anastomosis is made by 8 resorbable sutures 4.0. For ntEPA, the modification described by Andrich et al. was used [4]. The urethra is incised dorsally at the level of the stricture. Again, the stricture and surrounding fibrosis are excised but with preservation of the ventral spongiosum. The urethral edges are also spatulated and connected end-to-end with 8 resorbable sutures 4.0. In case of any difficulties to ensure a complete resection of the fibrosis or if the fibrosis encompasses the entire thickness of the spongious tissue, conversion to tEPA is done. The spongious tissue is closed with resorbable sutures 4.0 over the urethral anastomosis. This second layer ("spongioplasty") is circumferential with tEPA and at the dorsolateral side with ntEPA. For both techniques, a 20Fr silicone catheter is left in place as well as a perineal suction drain.
Follow-Up and Evaluation.
The suction drain is removed after 24-48 hours. The patient is discharged when his clinical condition allows for it, which is usually after 2 days. The catheter is removed 1 to 2 weeks later on an ambulatory basis if voiding cystourethrography confirms absence of extravasation [10]. In case of extravasation, the examination is repeated after one week. Follow-up including history taking and uroflowmetry was advised every 3 months during the 1st year, and annually thereafter. Surgical complications (≤90 days) were scored according to the Clavien-Dindo classification. Patients were asked to come for an earlier visit if they experienced obstructive urinary symptoms or a urinary tract infection. In case of suspicion of recurrence (clinical symptoms or maximum urinary flow <15 ml/s), retrograde urethrography or urethroscopy was performed. Referred patients were sent back to and followed by their local urologist. A functional definition of failure was used, namely obstructive symptoms with the need for additional urethral instrumentation (including simple dilation) [11]. Other surgical outcomes analyzed were operation time, hospital stay, catheterization time, and extravasation at first cystography. Functional outcomes (incontinence, erectile function, and genital sensitivity) are not within the scope of this study, as these parameters were not systematically questioned and, where they were, different questionnaires were used over the years. The study was approved by the local ethics committee (EC UZG 2008/234). All operations were done by 2 surgeons (W.O., N.L.).
Statistical Analysis.
A first analysis was done per surgical technique (tEPA versus ntEPA). As mentioned above, ntEPA became the standard technique in 2012. However, in case of difficulties or severe spongiofibrosis, conversion to tEPA was possible. As these are presumably more complex cases, a selection bias between the surgical groups since 2012 is imminent. In order to minimize this, a second analysis was done using the intention-to-treat (ITT) principle, in which all conversions to tEPA since 2012 remained classified as ntEPA cases (further called "ITT-ntEPA"). Statistical tests were done using IBM SPSS software version 25.0. All tests were 2-sided and a p value <0.05 indicates statistical significance. Next to descriptive statistics, categorical variables were compared using Fisher's exact test. Continuous variables were analyzed for parametric distribution using the Shapiro-Wilk test and, as all variables had a nonparametric distribution, the Mann-Whitney U test was used for comparison. Failure-free survival (FFS) was calculated using Kaplan-Meier survival analysis with log-rank statistics. To evaluate whether ntEPA was an independent predictor for failure, uni- and multivariate Cox regression analysis with calculation of the hazard ratio (HR) was performed (Table 2). The null hypothesis cannot be rejected.
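The comparison of crude failure rates reported in the abstract (13 of 112 with tEPA versus 6 of 88 with ntEPA) can be reproduced with a standard two-sided Fisher's exact test; a minimal sketch in Python using SciPy (counts taken from the abstract; the exact p-value may differ slightly from the SPSS output):

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = technique, columns = (failures, successes)
table = [[13, 112 - 13],   # tEPA: 13 failures among 112 patients
         [6,  88 - 6]]     # ntEPA: 6 failures among 88 patients

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p is well above 0.05
```

With these counts the sample odds ratio is (13 × 82)/(99 × 6) ≈ 1.8, and the test does not reject the null hypothesis of equal failure rates, consistent with the reported p=0.333.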
Intention-to-Treat (ITT) Analysis.
Patient and stricture characteristics did not significantly differ between these 2 cohorts (Table 3). As mentioned above, all patients in the ITT-tEPA cohort (n=101) underwent tEPA. However, conversion towards tEPA was performed in 11 of 99 (11.1%) patients of the ITT-ntEPA cohort. Table 4 provides information about the characteristics of the patients converted to tEPA and those treated by ntEPA. In the ITT-ntEPA cohort, patients finally treated with tEPA had a median stricture length of 2 cm compared to 1.25 cm for ntEPA (p=0.019), whereas other preoperative characteristics were comparable. Median operation time for ITT-tEPA and ITT-ntEPA was, respectively, 95 and 88 minutes (p<0.009). In the ITT-ntEPA cohort, patients finally treated by tEPA had a median operation time of 155 minutes compared to 87 minutes with ntEPA (p=0.01). Median hospital stay was 3 and 2 days for, respectively, ITT-tEPA and ITT-ntEPA (p<0.001). In the ITT-ntEPA cohort, patients finally treated by tEPA and ntEPA both had a median hospital stay of 2 days (p=0.088).

Table 4: Characteristics and surgical outcomes of patients treated by tEPA and ntEPA in the intention-to-treat ntEPA cohort (IQR: interquartile range; FFS: failure-free survival; ITT-tEPA: intention-to-treat transecting excision and primary anastomosis; ITT-ntEPA: intention-to-treat nontransecting excision and primary anastomosis; NA: not available).
Discussion
The success rate of 88.4% for tEPA in this series might appear somewhat lower compared to the 93.8% composite success rate for tEPA reported in the ICUD review [1]. However, the median follow-up of 115 months in this paper is substantially longer than in the papers included in that review [1]. Andrich et al. reported an 87% success rate after 10-year follow-up [12]. Although that series reported durable long-term results, with most of the recurrences occurring within the first years after surgery [12], this could not be confirmed by the present series, as 53.8% of failures with tEPA were found even after the 5th postoperative year. In two other series where time-related events are available, a steady decline in the success rate of tEPA was observed as well [13,14]. As for substitution urethroplasty, this indicates that EPA also needs prolonged follow-up, as late recurrences are possible. Some of our late failures were detected incidentally in asymptomatic patients in whom access to the bladder was needed (e.g., urethral catheterization during surgery, or cystoscopy because of hematuria). It has indeed been described that a stricture only becomes symptomatic once the urethral diameter is less than 10Fr. It is likely that a strict follow-up schedule with standard cystoscopy would have detected these failures earlier [11]. Some of the late failures might also be attributed to progression of the stricture disease, as almost 20% of patients had already undergone previous urethroplasty. The shorter follow-up with ntEPA in this series is explained by the change in practice since 2012, when it became the standard technique. The 93.2% success rate with ntEPA is in line with previous reports [3][4][5][6][7]. Estimated 1- and 3-year FFS could not demonstrate inferiority of ntEPA versus tEPA, nor could uni- and multivariate Cox regression analysis identify ntEPA as an independent predictive factor for failure.
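The failure-free survival figures discussed above come from the Kaplan-Meier product-limit estimator, which accounts for patients censored at different follow-up lengths. A minimal, self-contained sketch (the data below are illustrative, not the study's):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of failure-free survival.

    times  : follow-up time for each patient (e.g., months)
    events : 1 if a failure was observed at that time, 0 if censored
    Returns a list of (time, survival) points at each failure time.
    """
    curve = []
    survival = 1.0
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        failures = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        survival *= 1.0 - failures / at_risk
        curve.append((t, survival))
    return curve

# four hypothetical patients: failures at 12, 36 and 48 months, one censored at 24
print(kaplan_meier([12, 24, 36, 48], [1, 0, 1, 1]))
# -> [(12, 0.75), (36, 0.375), (48, 0.0)]
```

The key property, relevant to the unequal follow-up of the tEPA and ntEPA cohorts, is that a censored patient contributes to the at-risk count only up to the time of censoring, so survival estimates at later times are not biased by the shorter ntEPA follow-up.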
With ntEPA, 2 failures were detected between the 2nd and 5th postoperative year, also underlining the need for prolonged follow-up to evaluate whether this noninferiority persists in the long term (>5 years of follow-up). With ntEPA, the operation time was on average 11 minutes shorter: no ventral dissection deeper than the perineal body is needed, which saves time. Furthermore, full transection of the corpus spongiosum with tEPA leads to substantial bleeding from the bulbar arteries, with a need for additional hemostasis (and time to achieve it). On the other hand, we perceive that the anastomosis itself is somewhat more difficult and more time-consuming with ntEPA. However, other factors might bias operation time. By the standard introduction of ntEPA in 2012, both surgeons already had extensive experience with urethral anatomy and urethroplasty, which probably facilitated the introduction of ntEPA. In the earlier years, when tEPA was performed uniformly, this experience was smaller and surgery could have taken more time. Furthermore, since 2012 an important selection bias is present at the expense of tEPA: the more complex cases are still treated with tEPA, and this complexity might account for a longer operation time. Nevertheless, even with ITT analysis, operation time remained in favor of ntEPA. At the least, this indicates that a shift in practice towards ntEPA does not negatively affect operation time in surgeons already proficient with tEPA. The more complex nature of the tEPA cases since 2012 might also be apparent from the longer stricture length and the longer catheterization time compared to the contemporary ntEPA cases. This selection bias might be the reason why strictures treated by ntEPA were shorter in the per-surgery analysis but no longer in the ITT analysis, and might in part explain the longer catheterization time with tEPA.
However, the longer catheterization time is undoubtedly also related to a change in our practice for catheter stay since 2010, when it was decided to remove the catheter after 1 week in simple cases (whereas this was 2 weeks before) [10]. With ntEPA, a one-day shorter hospital stay was observed. Although this might indicate a quicker recovery with ntEPA, this cannot be assumed as such. In recent years, budgetary reasons have urged us to discharge patients as early as possible, which probably contributed to the shorter hospital stay since 2012. The observation that the tEPA cases since 2012 had an equal hospital stay despite probably more complex strictures supports the latter hypothesis. The complication rate in this series is low and confirms the findings of other colleagues [5,14]. High-grade (≥grade 3) complications were not more frequent with ntEPA. With ITT analysis (but not per-surgery analysis), low-grade complications were somewhat more frequent with ntEPA. This is likely due to the mainly retrospective data collection, with its risk of underreporting of low-grade complications with tEPA, versus the prospective data collection with ntEPA. Nevertheless, this observation must raise concern and warrants further evaluation.
Despite the introduction of ntEPA, we needed to convert to tEPA in approximately 10% of cases. A more distal location of the stricture within the bulbar urethra was not a reason for conversion to tEPA in this series. The main reason for conversion was extensive ("full-thickness") spongiofibrosis, in which it was no longer worthwhile to spare the ventral spongious tissue. Such conversion to tEPA is not jeopardized by initially attempting ntEPA, as all initial surgical steps are the same. From this series, it is clear that tEPA must remain in the repertoire of the urethral surgeon. Furthermore, tEPA remains indispensable in the delayed treatment of pelvic fracture-related injuries [15,16] and iatrogenic posterior urethral injuries [17]. However, the applicability of ntEPA for posterior strictures is currently being explored as well [6,18].
The aim of ntEPA is to reduce the surgical trauma with preservation of the dual blood supply of the urethra. This might offer an advantage for subsequent urethral interventions, e.g., redo-urethroplasty with free graft in which a well-vascularized graft bed is essential or implantation of an artificial urinary sphincter with less risk of cuff erosion [3]. Furthermore, ntEPA might offer a benefit regarding the reported vascular deficiency of the glans penis and the risk of erectile dysfunction with tEPA [19,20]. The present dataset lacks information to evaluate these potential advantages. Nevertheless, despite the theoretical benefit, at BioMed Research International 7 least a transient decline in erectile function in 6-21.9% of cases has already been reported with ntEPA as well [4][5][6][7].
Diabetes and previous urethroplasty were identified as independent predictors for failure. With both techniques of EPA and the associated need for extensive mobilization of the bulbar urethra, the "third" vascular supply of the urethra (small arterial connections between the corporal bodies and the corpus spongiosum) is sacrificed. Diabetes, with its associated microangiopathy, further increases the risk of ischemia at the urethral ends, which is a reason for failure of the anastomosis [21]. In addition, diabetes might be a contributing factor in the development of ischemic strictures, which might explain some late failures. Diabetes as a risk factor for failure was identified in another series as well [21]. A previous failed urethroplasty usually reflects a more complex urethral pathology with a higher risk of failure [22]. EPA for a failure after previous urethroplasty is possible in case of a previous EPA in which the urethral mobilization was insufficient for a tension-free anastomosis (technical error). EPA is also possible for a short recurrence after graft urethroplasty, usually at one of the ends of the graft [23]. Other series have also identified previous urethroplasty as a negative predictive factor [14,21], whilst others have not [23].
This study has several limitations. Until 2008, data collection was retrospective, with its inherent risk of bias. Although a follow-up schedule is proposed to the patients and the referring urologists, it is not systematically followed. This might also explain delayed detection of failure or underreporting of (minor) complications. A functional definition of failure was used, but an anatomical definition is currently advised as it is more accurate and objective [11]. Validated patient-reported outcome measures, as suggested by Jackson et al. [24], were not systematically used, as a Dutch validation only became available in 2017 [25]. This prohibits any meaningful further evaluation. The follow-up of ntEPA is relatively short. Another important limitation is that this paper is an evaluation of essentially two noncontemporary cohorts. Changes in practice, increasing surgical experience, selection of more challenging cases, etc. might have a major impact on outcome parameters. Therefore, any direct comparison of these two cohorts must be avoided. To overcome all the above-mentioned shortcomings, it is necessary to conduct a prospective randomized study comparing tEPA with ntEPA, evaluating surgical and functional outcomes using a strict protocol. Because important surgical parameters were not negatively affected in this series after the introduction of ntEPA, it appears justified to start such a trial. In this perspective, and to elucidate the definitive role of ntEPA, our center has initiated the VeSpAR trial: a prospective randomized controlled trial comparing Vessel-Sparing Anastomotic Repair and transecting anastomotic repair in isolated short bulbar urethral strictures (ClinicalTrials.gov NCT03572348).
Conclusion
Introduction of ntEPA for short bulbar strictures by experienced urethral surgeons does not negatively affect short-term failure rate, high-grade complication rate, operation time, hospital stay, or catheterization time. Late recurrences are possible with both types of EPA, underlining the need for continued follow-up in these patients. tEPA must remain in the surgical repertoire for challenging cases. Diabetes and previous urethroplasty are independent predictors of failure of EPA.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Compulsive Bowel Emptying and Rectal Prolapse in Eating Disorders. Presentation of Two Cases
Eating Disorders are a heterogeneous group of complex psychiatric disorders that affect physical and psychological functioning, thus compromising life itself. They are often characterized by extreme preoccupation with food, caloric intake and expenditure, as well as bodily weight and shape. Additionally, individuals present several forms of recurrent compulsive behavior, such as frequent weighing, body checking, and eating rituals. In many cases food consumption is considered a "failure" and its presence in the body "harmful" and even "toxic", leading the individual to adopt a wide variety of purging behaviors in order to achieve a state of mental and physical "cleanliness".
Introduction
Anorexia Nervosa (AN) and Bulimia Nervosa (BN) are complex and debilitating psychiatric disorders that share common core features, such as an intense fear of gaining weight. Moreover, self-evaluation is unduly influenced by body shape and weight [1]. The effects of AN and BN can be devastating, as they can seriously endanger the patient's physical health, are associated with a high risk of morbidity and mortality [2,3], significantly affect the psychosocial functioning of sufferers, enormously burden the patients' families, and often lead to a considerable drain on health system resources [4].
The treatment of patients suffering from Eating Disorders (ED) is quite challenging. Apart from defective insight and weak motivation for change, patients present a variety of medical complications. ED, and especially AN, eventually affect almost all systems of the human body, causing serious cardiopulmonary, endocrine, metabolic, gastrointestinal, musculoskeletal, neurological, dermatological, ophthalmic, and oral complications [5,6]. Severe and chronic starvation, purging behaviors, and drug abuse (usually laxatives and diuretics), along with self-induced vomiting, were described as possible causes for this co-occurrence. A first documentation of the association between AN and rectal prolapse came a few years later, in a series of three patients, with the authors implying that this co-occurrence might be more common than originally thought [11]. Finally, Mitchell and Norris, presenting the case of a 16-year-old female AN patient who developed rectal prolapse, concluded that rectal prolapse could be described as an infrequent secondary complication of ED [12]. Most of the authors have suggested that the manifestation of rectal prolapse in ED patients is probably related to malnutrition, frequent purging, and prolonged constipation.
The objective of this article is the presentation of two cases of ED complicated by a rare form of compulsive behavior related to the "emptiness" of the bowel. This behavior comprised repeated and/or prolonged voluntary tensing of the abdominal and pelvic muscles, as well as insertion of the finger into the rectum to "check" whether the bowel had been completely emptied of its content. The purpose of this compulsive behavior was to alleviate the intense anxiety caused by obsessive thoughts that the individual would get "fat" and/or "dirty/intoxicated" if the bowel was not completely vacated of stools. In both cases the compulsive behavior facilitated the manifestation of rectal prolapse, which reinforced the vicious circle of the OCD symptomatology.
The demographic and personal data of both patients have been altered in order to avoid identification.
Case 1
Miss L, a 24-year-old medical student, has been suffering from AN, restrictive type, since the age of 16. At the age of 18 she developed a constant fear of not being able to empty her bowel. The starting point of this fear, according to her, was her chronic constipation. The constipation had been attributed, by her GP, to the low-calorie, low-fiber diet that she was following due to the AN symptomatology. According to Miss L, after her admission to medical school she started having intrusive thoughts that the stools inside her bowel would remain there indefinitely and "rot" inside her body. The thoughts caused intense disgust at how "dirty" her body was and how food was "contaminating" her bowel. She started spending an escalating amount of time in the toilet trying to empty her bowel of its "disgusting" and "dirty" content. She ritualistically contracted her muscles as hard as possible and pressed her belly with her hands in order to completely empty her bowel of stools. She remained in the toilet until she was "totally" certain that she had gotten rid of all the "disgusting dirt" in her body.
At the age of 23, rectal prolapse was observed. The surgeon she visited explained that the prolapse was probably caused by the toilet ritual and her low body weight. He also informed her that, before he could operate on her, she would have to try to restore her diet and weight to normal. Miss L did not follow any of the surgeon's suggestions and continued to restrict her diet and spend more and more time trying to empty her bowel and manually reduce the prolapse. She abandoned her medical studies due to her inability to keep up with the university's requirements. Gradually she lost all her social activities and remained at home with her divorced mother, who was suffering from alcoholism.
At the time of the first examination, the patient had a BMI of 14.3. She reported a variety of compulsive behaviors beyond compulsive bowel emptying: some were related to feeling "clean", while the rest had to do with the preparation of food and eating. She was also suffering from depressive mood and insomnia. Due to the complexity and severity of the psychiatric symptomatology, Miss L agreed to receive inpatient treatment. During her first hospitalization the patient spent around 2-4 hours every day in the toilet, following the ritual presented in the previous paragraphs. She was treated with paroxetine 60 mg and behavioral therapy for OCD and followed the multidisciplinary inpatient program for AN. Although during her hospitalization she managed to restore her weight and nutrition and reduce the time spent on compulsive bowel emptying rituals to 30-60 minutes per day, the feeling of disgust toward her bowel and its content did not subside at all.
Two months after discharge, Miss L's condition deteriorated rapidly and she had to be readmitted. Again her condition improved during hospitalization, but not as dramatically as before. After discharge she stopped eating and gave up any kind of social activity. She actually spent most of the day lying in her bed, transfixed in a state between sleep and arousal. The only time during the day that she got out of bed was to go to the toilet. She discontinued all medication during the first month after the second discharge. Four months later she was feeling depressed and hopeless and expressed an intense desire to abandon all efforts to live and to let herself die of starvation.
At this point, after three relapses and consecutive readmissions, the patient remains in residential care. Her condition, including her mood and compulsive behaviors, is very slowly improving, with the exception of the compulsive bowel emptying symptomatology. Although the antidepressant medication contributed to the improvement of her mood, it did not seem to improve the compulsive symptomatology, so it was gradually tapered down to 20 mg per day.
Case 2
Miss K, a 22-year-old art student, has been suffering from BN since the age of 18. The bulimic symptomatology started when she left her family and birth city to study art at a university in another big city in Greece. Two to three months after BN onset she sought psychological help and started psychodynamic psychotherapy at a frequency of two sessions per week. Two years later her condition was not only gradually deteriorating, but she had also developed compulsive behavior focused on checking rituals concerning her diet, caloric intake, and bowel function. She simultaneously reported to her therapist that she had started having "disgusting images" of excrement, followed by fear concerning the possibility of getting fat due to food remaining in her gastrointestinal system. At that point she developed the obsessive belief that, in order to be internally clean and healthy, she had to "totally" empty her bowel of its content. Following that belief, every time she visited the toilet she inserted her finger into her rectum to check whether it was completely empty of stools. Also, when she was staying at her apartment, she continually contracted her abdominal and pelvic muscles in order to "push" the bowel content toward the rectum.
After one year, rectal prolapse was observed. Miss K was scared by the prolapse. In a state of panic she decided to terminate psychotherapy, discontinue her studies, and return to her family. Her parents initially supported her attempt to seek surgical help. The surgeon who treated her insisted that she apply for psychiatric treatment in a mental health service for ED prior to the operation. He explained to her that continuation of the compulsive bowel emptying rituals would probably result in re-emergence of the rectal prolapse even after successful surgery. Miss K agreed to seek expert help, as she was experiencing the prolapse as a catastrophic event for her personal and social life.
At the time of the first examination, Miss K's BMI was 18.9. She reported daily morning and evening bulimic episodes and an extremely restrictive diet between the episodes. She started treatment with Cognitive Behavioral Therapy (CBT) once a week for BN and behavioral therapy every two weeks for the OCD symptomatology. She was also prescribed sertraline, up to 200 mg, for the OCD symptomatology.
After 6 months of treatment, the patient had achieved a considerable but not sufficient reduction in the number of bulimic episodes (usually one every two days) and had slightly improved her non-bulimic diet. Her BMI remains at the same level as at the beginning of treatment. Miss K has also reported a significant improvement in her mood but only a slight improvement in the feelings of disgust and fear concerning her bowel and the associated compulsive ritual. So far the patient has refused to keep any kind of diet diary and has avoided performing any kind of behavioral experiment concerning the compulsive bowel emptying symptomatology. The treating team has unanimously observed that the patient is constantly and actively asking for help from others while at the same time remaining quite passive and uncommitted to the therapeutic effort.
Discussion
To our knowledge, there is only one other report in the literature of similar compulsive bowel emptying resulting in rectal prolapse. Guerdjikova et al. described a young woman with bulimia nervosa and irritable bowel syndrome who used rectal purging (excessive finger evacuation to induce defecation) as a method of counteracting the effects of her binge eating and subsequently underwent two corrective surgeries for rectal prolapse [13].
From a theoretical point of view, this bowel-related compulsive behavior can be viewed as a type of body checking ritual characteristic of ED, and especially of AN. According to CBT, frequent checking of body parts is one of the perpetuating mechanisms of ED [14]. Patients suffering from ED usually view food as a "desired threat": something that is feared because it can lead to obesity, yet sought for its soothing properties. This ambivalent relation leads to an intense focus on food and on everything in the body related to it. The intensified focus further increases the awareness of threats (hyper-vigilance), leading to more anxiety and negative affect. Compulsive body checking can be regarded as the behavioral manifestation of this hyper-vigilant body monitoring [15].
In fact, people with ED frequently present with inflexible behaviors concerning food and body related issues, often develop rigid rituals in their daily routine, and experience difficulties in adopting alternative ways of coping with problems. The prevalence of OCD symptoms in ED is significantly higher than in the general population [16,17]. Follow-up studies showed that, although there is a decrease in the extent and severity of OCD symptoms after weight restoration, obsessive-compulsive traits may persist for some time after recovery [18]. The association between ED and OCD has been proposed to be mediated by similar underlying neurocognitive processes, such as difficulties with set-shifting and central coherence [19,20]. Body checking rituals resemble behaviors observed in OCD, such as compulsive checking, cleaning, and ritualized compulsions. Interestingly, a case study of a female patient with a long-standing history of OCD symptomatology related to dirt and germs reported that the patient's fear of developing bowel cancer led her to manually evacuate faeces from her rectum five times a day, thus leading to the manifestation of rectal prolapse [21].
Finally, it should be noted that both patients described that, beyond the typical obsessive thought-fear/anxiety reaction, they were also experiencing an intense feeling of disgust at the possibility that their bowel had not been completely emptied of its content. Both patients disclosed to their therapists that it was this feeling of disgust that pushed them to insert the finger into the rectum to check for stools. It has been argued that disgust is the gatekeeper of the gastrointestinal tract, preventing, through avoidance behaviors, the spread of illness and disease by ensuring that distasteful, infectious, or potentially toxic items are not orally incorporated by the individual [22]. Although there are reports in the literature on the association of disgust and ED, most of them have focused on food activation stimuli. Ellison et al. showed that when patients suffering from AN were exposed to disorder-relevant cues, such as pictures showing high-caloric drinks, increased activation was observed in the left amygdala, the insula, and the anterior cingulate cortex [23]. This activation pattern was quite similar to what is observed during disgust induction.
Conclusions
Although compulsive and/or purging behavior is one of the main characteristics of ED, the clinician is often faced with extreme behaviors that put in jeopardy the patient's physical health and the therapeutic alliance. In both cases presented here, the sufferers faced severe medical complications, social isolation, and professional disability as a result of their never-ending worry about the "emptiness" of their bowel. Sadly, in both cases, with the exception of inpatient behavioral therapy, which also failed to generalize to outpatient conditions, none of the therapeutic interventions proved to be significantly effective.
Temperature Dependence of Rubber Hyper-Elasticity Based on Different Constitutive Models and Their Prediction Ability
Based on experiments with an electronic universal testing machine equipped with a temperature chamber, this paper investigated the effects of temperature and filler on the hyper-elastic behavior of reinforced rubbers and revealed how the stress-strain response of natural and filled rubber varies with temperature. The experimental results showed that the hyper-elastic behavior of the filled rubber was temperature-dependent over a wide range. Comparing the adaptability of different models to the stress-strain variation with temperature, the Yeoh model proved to reasonably characterize the experimental data at different temperatures. Based on the Yeoh model, an explicit temperature-dependent constitutive model was developed to describe the stress-strain response of the filled rubber over a relatively large temperature range. The predictions of this constitutive model fit the mechanical test data well, indicating that the model is suitable, to a certain degree, for characterizing the large deformation behavior of filled rubbers at different temperatures. The proposed model can be used to obtain the material parameters and has been successfully applied to finite element analysis (FEA), suggesting high application value. Notably, the model has a simple form and can be conveniently applied in performance tests in actual production or in finite element analysis.
Introduction
Rubber elastomers composed of long chains, macromolecules, and mesh-crosslinked structures have been commonly used in the automotive, aeronautical, and electronic industries. As a commonly used reinforcing agent for rubber products, carbon black has endowed natural rubber with better mechanical properties and thermo-elasticity. For carbon black-filled rubbers, temperature has a great influence on its hyper-elastic behavior. Due to the wide range of temperatures in a variety of applications [1,2], it is of great necessity to consider the impact of temperature on the hyper-elastic properties of carbon black-filled rubbers.
Filled rubber is usually used for damping and shock-absorbing components in the automotive and aerospace industries. Since the self-heating of elastic components in high-temperature environments or under cyclic loading raises their temperature, the mechanical response of elastic components is severely affected by this thermo-mechanical coupling [3]. Although the mechanical responses of filled and unfilled rubber have been characterized at room temperature and at high temperatures [4-8], the effects of temperature on the mechanical response of rubber materials over a large deformation range, such as a 150% strain, have rarely been studied. From the point of view of tire applications and industrial formulations, it is necessary to test the hyper-elastic mechanical properties of carbon black-filled rubber specimens at different temperatures over a wide deformation range (150% strain) [9]. Meanwhile, both the filler-rubber matrix and filler-filler interactions at different temperatures also have a significant effect on the thermal behavior of rubber components [10].
The hyper-elastic mechanical behavior of rubber materials depends not only on the temperature but also on the filler content. It is therefore necessary to develop a temperature-dependent model, built on an existing hyper-elastic constitutive model, to predict the hyper-elastic behavior of rubber. Such a model should clearly describe the temperature characteristics of rubber components with different carbon black contents [11]. Several thermodynamic models have been proposed to evaluate the effect of temperature on the mechanical properties of filled rubber [3,12,13]. However, the effect of temperature on the mechanical behavior of rubber materials over a larger deformation range (150% strain) has rarely been studied. Likewise, since the temperature dependence of different constitutive models has rarely been examined, it remains unclear how accurately each constitutive model describes the hyper-elastic behavior of rubber at different temperatures under a constant elongation strain. Such a comparison helps select a constitutive model that better characterizes the hyper-elastic behavior of rubber specimens at various temperatures.
This paper aims to systematically investigate the influence of temperature on the hyper-elastic mechanical behavior of filled rubbers. Section 2 introduces the hyper-elastic constitutive models commonly used for carbon black-filled rubbers. Section 3 describes the materials and the experimental setup. Section 4 presents the hyper-elastic stress-strain curves of unfilled and filled rubbers with various CB contents at different temperatures. Section 5 reveals the relationship between the Yeoh model parameters and the ambient temperature and extends the Yeoh model to an explicit temperature-dependent model; the evaluation results show that this model accurately captures the effect of temperature on the hyper-elastic behavior of tire rubber, and the resulting explicit temperature-dependent Yeoh constitutive model has been applied in the FEA. Finally, Section 6 concludes the paper.
Constitutive Models
Current rubber hyper-elastic constitutive models can be divided into two main categories. The first comprises molecular network models, based on statistical thermodynamics, which examine the conformational entropy change of the molecular network; these models can predict the hyper-elastic mechanical behavior at large strains with few model parameters [14]. The second comprises phenomenological models, based on continuum mechanics, which simulate the elastic response of unfilled and filled rubbers at large strains [15,16].
In the following sections, we introduce three constitutive models in detail: the Yeoh model, the Ogden model, and the Arruda-Boyce model.
Yeoh Model
The most general strain energy density function, in the form of a series in the deformation tensor invariants, was first proposed by Rivlin [17]. Its reduced polynomial form can be expressed as:

W = Σ_{i=1}^{N} Ci0 (I1 − 3)^i (1)

After analyzing experimental data for filled rubber, Yeoh proposed a simplified polynomial strain energy function. The incompressible Yeoh model assumes that the strain energy function is a general polynomial in the first principal stretch invariant only. Setting N = 3 in the reduced polynomial model, Equation (1), gives the Yeoh constitutive model:

W = C10(I1 − 3) + C20(I1 − 3)^2 + C30(I1 − 3)^3

The Yeoh constitutive model produces the typical S-shaped stress-strain relationship curve, which is in line with the highly nonlinear mechanical behavior of hyper-elastic rubber materials. It is generally believed that the Yeoh model has better accuracy over the larger deformation range of rubber, and the model is therefore widely used in practical engineering analysis [18,19].
The relationship between the nominal stress f and the stretch ratio λ can be expressed as follows:

f = 2(λ − λ^{−2})[C10 + 2C20(I1 − 3) + 3C30(I1 − 3)^2]

where C10, C20, and C30 are the parameters of the material model, which can be determined by uniaxial tensile tests. C10 mainly reflects the small-strain behavior and represents the initial shear modulus: C10 = μ/2, i.e., half of the initial shear modulus at small strain. C20 captures the softening phenomenon at medium deformation and is generally negative. C30, in turn, captures the re-hardening of the material in the large deformation range.
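As a minimal numerical sketch (not part of the paper), the uniaxial Yeoh nominal stress above can be evaluated directly; the coefficient values used here are arbitrary placeholders, not fitted material data:

```python
# Uniaxial nominal stress of the incompressible Yeoh model.
# In uniaxial tension lambda1 = lam and lambda2 = lambda3 = lam**-0.5,
# so the first invariant is I1 = lam**2 + 2/lam.
def yeoh_nominal_stress(lam, C10, C20, C30):
    I1 = lam**2 + 2.0 / lam
    dW_dI1 = C10 + 2.0 * C20 * (I1 - 3.0) + 3.0 * C30 * (I1 - 3.0) ** 2
    return 2.0 * (lam - lam**-2) * dW_dI1

# Placeholder coefficients (MPa): C20 < 0 gives the mid-strain softening,
# C30 > 0 the re-hardening at large strain.
f = yeoh_nominal_stress(2.5, C10=0.5, C20=-0.05, C30=0.01)
```

At λ = 1 the kinematic factor (λ − λ^{−2}) vanishes, so the model correctly predicts zero stress in the undeformed state.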
Ogden Model
Ogden [20] expressed the strain energy density function directly in terms of the principal stretch ratios λi, which are used as independent variables:

W = Σ_{i=1}^{N} (μi/αi)(λ1^{αi} + λ2^{αi} + λ3^{αi} − 3)

where μi and αi are arbitrary constants (they can be non-integers). The analytical accuracy of the Ogden model gradually increases with the polynomial order N, giving the Ogden model relatively large flexibility; however, the value of N is generally not greater than 4. To satisfy stability, Σ_{i=1}^{N} μi αi should be greater than 0. Because lower-order models have more or less limited computational accuracy, while the large number of constitutive parameters of higher-order models is very difficult to fit accurately, fourth-order or higher-order strain energy density functions are considered to be of little practical value. It is therefore not recommended to choose a higher-order model for calculation, and the third-order Ogden constitutive model is commonly used in engineering. For incompressible uniaxial tension, the relationship between the nominal stress f and the stretch ratio λ can be expressed as:

f = Σ_{i=1}^{N} μi (λ^{αi−1} − λ^{−αi/2−1})

where μi and αi are material constants.
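A corresponding sketch for the third-order Ogden uniaxial nominal stress, again with arbitrary placeholder constants (chosen so that Σ μi αi > 0, consistent with the stability condition above):

```python
# Uniaxial nominal stress of the incompressible Ogden model with
# principal stretches (lam, lam**-0.5, lam**-0.5):
#   f = sum_i mu_i * (lam**(alpha_i - 1) - lam**(-alpha_i/2 - 1))
def ogden_nominal_stress(lam, mu, alpha):
    return sum(m * (lam ** (a - 1.0) - lam ** (-a / 2.0 - 1.0))
               for m, a in zip(mu, alpha))

# Third-order example; mu in MPa, constants are illustrative only.
f = ogden_nominal_stress(2.0, mu=[0.6, 0.02, -0.01], alpha=[1.3, 5.0, -2.0])
```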
Arruda-Boyce Model
The model based on Langevin statistical theory proposed by Arruda and Boyce (1993) is a non-Gaussian chain network model [14]. The Arruda-Boyce model is also known as the eight-chain model; its strain energy function can be expanded as a Taylor series (retaining only the first five terms):

W = CR Σ_{i=1}^{5} (Ci/N^{i−1})(I1^i − 3^i)

where C1 = 1/2, C2 = 1/20, C3 = 11/1050, C4 = 19/7000, and C5 = 519/673750. CR = nkθ is the rubber modulus, and I1 is the first strain invariant.
The relationship between the nominal stress f and the stretch ratio λ can be expressed as:

f = 2CR(λ − λ^{−2}) Σ_{i=1}^{5} (iCi/N^{i−1}) I1^{i−1}

where CR and N are material constants. In the physical sense, the parameter N is independent of temperature. For filled rubbers, however, N becomes temperature-dependent due to the steric hindrance that carbon black particles exert on the polymer chains inside the rubber material [12,14,21]. Over a large deformation range, the eight-chain model provides accurate calculations even when only a small amount of material behavior is known, because it has only two parameters.
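The five-term series form of the eight-chain uniaxial stress can likewise be sketched numerically; CR and N below are placeholder values, not identified parameters:

```python
# Uniaxial nominal stress of the five-term Arruda-Boyce (eight-chain) model:
#   f = 2*CR*(lam - lam**-2) * sum_{i=1..5} i*Ci/N**(i-1) * I1**(i-1)
COEFFS = [1 / 2, 1 / 20, 11 / 1050, 19 / 7000, 519 / 673750]

def arruda_boyce_nominal_stress(lam, CR, N):
    I1 = lam**2 + 2.0 / lam
    series = sum((i + 1) * c / N**i * I1**i for i, c in enumerate(COEFFS))
    return 2.0 * CR * (lam - lam**-2) * series

f = arruda_boyce_nominal_stress(2.0, CR=0.4, N=26.5)
```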
Experimental Materials
Four rubber specimens with different carbon black contents were used for the experiments. The rubber matrix was natural rubber, and the filler was carbon black N220. The four rubber formulations differed only in the amount of carbon black. The carbon black mass fractions in the four rubbers, i.e., C00, C20, C40, and C60, were 0 phr, 20 phr, 40 phr, and 60 phr, respectively. The rubber types and formulations used in the test are shown in Table 1. The natural rubber was obtained from Shandong Haoshun Chemical Co., Ltd. in Jinan, Shandong, China. The carbon black N220 was obtained from Tianjin Zhengning New Material Co., Ltd. (Tianjin, China). The other ingredients (stearic acid, zinc oxide, sulfur, accelerator NS, and antioxidant 4020) were all commercially available industrial-grade products.
Sample Preparation
The rubber compounding was divided into two stages. First, the natural rubber was pressed for 3 min on a double-roller opener (Model: S(X)-160A, Shanghai First Rubber Machinery Co., Ltd., Shanghai, China). The pressed natural rubber was put into a Haake torque rheometer (HAAKE, Germany), and the zinc oxide, stearic acid, antioxidant 4020, and carbon black were then added sequentially and mixed for 10 min. The second mixing stage was conducted on a two-roller kneader (Shanghai Rubber & Plastic Machinery Co., Shanghai, China). The sulfur and accelerator NS were added to the blended mixture and blended well. The mixture was then pressed into 3-5 mm sheets and left to stand for about 5 h before vulcanization. The vulcanization conditions were 150 °C, 10 MPa, and the equivalent vulcanization time (Tc90). According to ISO 37-2017, dumbbell type 2 specimens with a thickness of 2 mm were prepared. The double eccentric wheel fixture RA-4-1, a special tensile fixture for rubber, was used for the uniaxial tensile testing. The force and displacement accuracy of the universal electronic tensile tester (Taiwan High-Tech Testing Instruments Co., Taiwan, China) was 0.5, the temperature control accuracy was ±1 K, and the gauge length of the displacement sensor was 20 mm. To ensure the specimen reached the required test temperature, the chamber temperature was allowed to stabilize for 10 min after reaching the test temperature before starting the experiment. To eliminate the Mullins effect (the stress softening effect of rubber materials) [3], the rubber specimens were loaded and unloaded at a rate of 100 mm/min for 10 cycles; the purpose of this step is to reproduce the working condition of the tire more accurately. The modulation strain was 150%, and the modulation temperature was set to 288 K.
After modulation, the rubber specimens were left to stand for more than 24 h to fully recover the elastic deformation and exhibit stable properties. The experiments were repeated at least five times under each condition, and the average value was taken as the final experimental result.
Test Results
The deformation of rubber components in engineering applications is generally less than 100%. Under some extreme operating conditions, such as a tire rolling over a raised road surface, the stretching of the rubber spring may exceed 100% strain. Therefore, a 150% strain was used to characterize the mechanical properties of the rubber material in the uniaxial tensile test. Figure 1 shows the relationship between the nominal stress and nominal strain for the C00, C20, C40, and C60 rubber specimens in Table 1. The experimental temperatures were 293 K, 313 K, 333 K, 353 K, 363 K, and 383 K. Figure 1(a2-d2) locally magnifies the stress-strain differences between the four rubber specimens at different temperatures. From the stress-strain curves in Figure 1, it can be observed that the hyper-elastic mechanical behavior of the rubber material shows a pronounced temperature dependence over the 150% deformation range. The carbon black-filled rubber specimens C20, C40, and C60 first became "softer" with increasing temperature and, beyond a certain temperature, gradually became "harder" as the temperature rose further. This turning temperature differed between rubber samples. The temperature dependence of the hyper-elastic mechanical behavior of rubber materials can therefore be considered the result of two mechanisms: a "positive effect" that hardens the rubber sample and a "negative effect" that softens it. For the natural rubber C00, unfilled with carbon black, the stress-strain curve always increased with increasing temperature; there was no change from soft to hard, and the curve does not show a clear turning temperature.
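Because the Yeoh nominal stress is linear in (C10, C20, C30), the coefficients at each test temperature can be identified from such stress-strain curves by ordinary linear least squares. The sketch below illustrates this on synthetic data generated from known placeholder coefficients (it is not the paper's fitting procedure or data):

```python
import numpy as np

def fit_yeoh(lam, f):
    """Identify (C10, C20, C30) from uniaxial data by linear least squares."""
    lam, f = np.asarray(lam), np.asarray(f)
    I1 = lam**2 + 2.0 / lam
    g = 2.0 * (lam - lam**-2.0)            # kinematic prefactor
    A = np.column_stack([g, 2 * g * (I1 - 3), 3 * g * (I1 - 3) ** 2])
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coeffs

# Synthetic "measurements" from known placeholder coefficients (MPa):
lam = np.linspace(1.05, 2.5, 30)
I1 = lam**2 + 2 / lam
f = 2 * (lam - lam**-2) * (0.5 + 2 * (-0.05) * (I1 - 3) + 3 * 0.01 * (I1 - 3) ** 2)
C10, C20, C30 = fit_yeoh(lam, f)           # recovers (0.5, -0.05, 0.01)
```

Repeating such a fit at each temperature yields the temperature trajectories of C10, C20, and C30, on which an explicit temperature-dependent Yeoh model can then be built.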
The force and displacement accuracy of the universal electronic tensile tester (Taiwan High-Tech Testing Instruments Co., Taiwan, China) was 0.5, the temperature control accuracy was ±1 K, and the gauge length of the displacement sensor was 20 mm. To ensure that the specimen reached the required test temperature, the chamber temperature was allowed to stabilize for 10 min after reaching the test temperature before starting the experiment. To eliminate the Mullins effect (the stress-softening effect of rubber materials) [3], the rubber specimens were loaded and unloaded at a rate of 100 mm/min for 10 cycles; the purpose of this preconditioning is to reproduce the working condition of the tire more accurately. The modulation strain was 150%, and the modulation temperature was set to 288 K. After modulation, the rubber specimens were left to stand for more than 24 h to fully recover the elastic deformation and exhibit stable properties. The experiments were repeated at least five times under each condition, and the average value was taken as the final experimental result.
Test Results
The deformation of rubber components in engineering applications is generally less than 100%. In some extreme operating conditions, such as the rolling of a tire over a raised road surface, the stretching of the rubber spring may exceed 100% strain. Therefore, 150% strain was used to characterize the mechanical properties of the rubber material in the uniaxial tensile test. Figure 1 shows the relationship between the nominal stress and nominal strain for the C00, C20, C40, and C60 rubber specimens in Table 1 at experimental temperatures of 293 K, 313 K, 333 K, 353 K, 363 K, and 383 K. Figure 1(a2-d2) locally magnifies the stress-strain differences between the four rubber specimens at different temperatures. From the stress-strain curves in Figure 1, it can be observed that the hyper-elastic mechanical behavior of the rubber material shows a pronounced temperature dependence over the 150% deformation range. The carbon-black-filled rubber specimens C20, C40, and C60 first became "soft" with increasing temperature and then, once a certain temperature was reached, gradually became "hard" as the temperature rose further; this turning temperature differed between the rubber samples. The temperature dependence of the hyper-elastic mechanical behavior of rubber materials can therefore be considered the result of two mechanisms: a "positive effect" that hardens the rubber sample and a "negative effect" that softens it. For the natural rubber C00, unfilled with carbon black, the stress-strain curve always increased with increasing temperature; there was no change from soft to hard, and the curve shows no clear turning temperature.
Discussion
From the uniaxial tensile experimental data at different temperatures, a preliminary study of the temperature dependence of the Yeoh model, the Ogden model, and the Arruda-Boyce model was carried out on the C20 rubber specimen. To evaluate the fitting ability of the three hyper-elastic constitutive models quickly, the residual sum of squares (RSS) was calculated: the smaller the RSS, the closer the coefficient of determination R² is to 1 and the better the fit.
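The RSS and R² used here follow the standard definitions. The following sketch (the helper name `fit_quality` is ours, not the paper's) shows how they would be computed for any of the three models' predictions:

```python
import numpy as np

def fit_quality(p_exp, p_fit):
    """Residual sum of squares and coefficient of determination
    for a constitutive-model fit to N experimental points."""
    p_exp = np.asarray(p_exp, dtype=float)
    p_fit = np.asarray(p_fit, dtype=float)
    rss = np.sum((p_exp - p_fit) ** 2)          # sum of squared residuals
    tss = np.sum((p_exp - p_exp.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - rss / tss
    return rss, r2
```

A perfect fit gives RSS = 0 and R² = 1; larger residuals lower R² toward (or below) zero.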
RSS = Σᵢ (Pᵢ − P̂ᵢ)², R² = 1 − Σᵢ (Pᵢ − P̂ᵢ)² / Σᵢ (Pᵢ − P̄)²

where Pᵢ is the experimental value; P̄ is the average of the test values; P̂ᵢ is the model fit value; and N is the number of experimental data points involved in the fit. The smaller the RSS, the larger the R², indicating a better overall fit of the model. From Figures 2 and 3, it can be seen that the Arruda-Boyce model reproduces the general "S-shaped" stress-strain characteristics of the hyper-elastic behavior of carbon-black-filled rubber at different temperatures. However, there are still obvious deviations between the fitted results and the experimental data: the Arruda-Boyce model could not reflect the nonlinear characteristics of the hyper-elastic mechanical behavior of carbon-black-filled rubber at small and medium deformations under the 150% strain, showing a significant error with respect to the experimental data. This is consistent with the conclusion summarized in the theoretical presentation above. The Ogden model (N = 3) and the Yeoh model also reproduce the "S-shaped" stress-strain curve at different temperatures, which indicates that the fitted curves of these two models can reasonably describe the experimental data under the 150% strain.

Table 2 lists the parameters of the Ogden constitutive model (N = 3) fitted to the experimental data of the C20 rubber specimen at different temperatures, and Figure 4 shows the trend of these parameters with temperature. Although the Ogden model (N = 3) fit the experimental data in Figures 2 and 3 well, Figure 4 and Table 2 show that its parameters vary with temperature without any regularity; that is, the parameters exhibit no systematic temperature dependence. Moreover, because of the excessive number of parameters in the Ogden model (N = 3), the model did not converge easily when fitted to the experimental data, resulting in longer computation times and low applicability.
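As a concrete example of such a fit: for uniaxial tension of an incompressible Yeoh material with stretch λ = 1 + ε and first invariant I₁ = λ² + 2/λ, the nominal stress is P = 2(λ − λ⁻²)[C10 + 2·C20(I₁ − 3) + 3·C30(I₁ − 3)²], which is linear in the three parameters, so they can be identified with an ordinary linear least-squares solve. A sketch (function name and data layout are our assumptions, not the authors'):

```python
import numpy as np

def fit_yeoh_uniaxial(strain, stress):
    """Least-squares fit of Yeoh parameters C10, C20, C30 to uniaxial
    nominal stress-strain data, assuming incompressibility.  The Yeoh
    nominal stress is linear in the parameters, so np.linalg.lstsq
    solves the fit directly."""
    lam = 1.0 + np.asarray(strain, dtype=float)
    i1 = lam**2 + 2.0 / lam
    kin = 2.0 * (lam - lam**-2)  # kinematic prefactor of the stress
    # design matrix: columns multiply C10, C20, C30 respectively
    A = np.column_stack([kin,
                         2.0 * kin * (i1 - 3.0),
                         3.0 * kin * (i1 - 3.0) ** 2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(stress, dtype=float),
                                 rcond=None)
    return coeffs  # [C10, C20, C30]
```

Repeating this fit at each test temperature yields the parameter-versus-temperature data examined in Tables 2 and 3.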
From Table 3 and Figure 5, it can be seen that the material parameters C10, C20, and C30 vary approximately as a quadratic function of temperature, which indicates that the material parameters are correlated with temperature. The material parameter C10 represents the initial shear modulus at small strains: as the temperature gradually increased, the rubber softened and the shear modulus decreased; after the turning temperature was reached, the ability of the carbon-black-filled rubber to resist strain increased and the shear modulus gradually rose again, consistent with the trend of the experimental data in Figure 1(b1). The material parameter C20 describes the softening of the filled rubber at medium deformation: the larger C20, the more obvious the softening, so C20 was largest at the turning temperature, where the filled rubber was softest. The material parameter C30 describes the re-hardening of the material at large deformation; after the turning temperature was reached, C30 became larger due to the hardening of the filled rubber. Therefore, there is a significant dependence between the Yeoh model parameters and temperature, and this temperature dependence can be expressed by numerical fitting with a quadratic function.

In summary, the Arruda-Boyce model cannot reflect the nonlinear characteristics of the hyper-elastic mechanical behavior of carbon-black-filled rubbers at small and medium deformations well, showing a significant error with respect to the experimental data. Although the Ogden constitutive model (N = 3) fit the experimental data well, its parameters varied irregularly with temperature, and because of its excessive number of parameters the model did not converge easily, resulting in longer computation times and low applicability. On this basis, the Yeoh model was selected: its parameters show a significant, regular dependence on temperature that can be expressed by numerical fitting with a quadratic function.
Using the Yeoh constitutive model, the relationship between the material parameters and temperature can be expressed as:

C10(T) = A0 + A1·T + A2·T², C20(T) = B0 + B1·T + B2·T², C30(T) = C0 + C1·T + C2·T² (9)

where A0, A1, A2, B0, B1, B2, C0, C1, C2 are the temperature-dependent parameters of the Yeoh constitutive model, which can be determined by fitting. For different volume fractions of carbon-black-filled rubbers, the temperature-dependent parameter values can be obtained by fitting the parameters of the Yeoh constitutive model at different temperatures with Equation (9). The details are shown in Tables 4-6 and Figure 6. By combining Equations (2) and (9), the Yeoh constitutive model with the explicit temperature parameter can be obtained.
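The quadratic fits of Equation (9) can be sketched with `numpy.polyfit` (the helper name is ours; inputs are the per-temperature Yeoh parameters obtained from the uniaxial fits):

```python
import numpy as np

def fit_temperature_dependence(temps, c10, c20, c30):
    """Fit Equation (9): model each Yeoh parameter as a quadratic
    function of temperature T, e.g. C10(T) = A0 + A1*T + A2*T**2.
    Returns three coefficient triples, constant term first."""
    out = []
    for values in (c10, c20, c30):
        # np.polyfit returns highest power first; reverse to (A0, A1, A2)
        out.append(np.polyfit(temps, values, deg=2)[::-1])
    return out
```

The returned triples correspond to (A0, A1, A2), (B0, B1, B2), and (C0, C1, C2) in Equation (9).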
Based on Equation (10) and the parameters in Tables 4-6, the stress-strain curves for the four carbon-black-filled rubbers at different temperatures were plotted, which can be used to predict the behavior of the Yeoh constitutive model with explicit temperature parameters. Figure 7 shows the prediction curves of the model (a1-d1) with local zoom-in plots (a2-d2). From Figure 7, the predicted results of the Yeoh constitutive model with the explicit temperature parameter are in good agreement with the experimental results. This indicates that the model can accurately describe the nonlinear hyper-elastic mechanical behavior of carbon-black-filled rubber at different temperatures under the 150% strain. To visualize the effect of temperature on the "softness" and "hardness" of the rubber, the correlation between the stress and the temperature at constant elongation strains of 0.2, 0.6, 1, and 1.4 was investigated. From Figure 8, it can be seen that the stress in the rubber specimens C20, C40, and C60 at a constant elongation first decreased with increasing temperature and then, after reaching the turning temperature, increased again: the carbon-black-filled rubber samples first became "soft" and then gradually became "hard" as the temperature rose. The turning temperature varied with the carbon black content, and the stress transition temperature increased with the increase in the volume fraction of the carbon black. However, for the unfilled rubber specimen C00, the stress tended to increase roughly linearly with increasing temperature at all constant elongation strains. These results agree with the conclusions drawn from the measured stress-strain curves above.
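A sketch of how the explicit-temperature model of Equations (9)-(10) produces predictions like Figures 7 and 8, including locating the turning temperature as the stress minimum at a fixed elongation (the coefficient values in the test below are illustrative, not the fitted values from Tables 4-6):

```python
import numpy as np

def yeoh_stress_T(strain, T, coeffs):
    """Nominal uniaxial stress from the Yeoh model with explicit
    temperature parameters: each C_i0 is a quadratic in T.
    `coeffs` holds the (A0, A1, A2)-style triples for C10, C20, C30."""
    lam = 1.0 + np.asarray(strain, dtype=float)
    i1 = lam**2 + 2.0 / lam
    c10, c20, c30 = (a + b * T + c * T**2 for a, b, c in coeffs)
    return 2.0 * (lam - lam**-2) * (
        c10 + 2.0 * c20 * (i1 - 3.0) + 3.0 * c30 * (i1 - 3.0) ** 2
    )

def turning_temperature(temps, coeffs, strain=1.0):
    """Temperature at which the stress at a fixed elongation is
    minimal -- the 'soft-then-hard' turning point seen in Figure 8."""
    temps = np.asarray(temps, dtype=float)
    stresses = np.array([yeoh_stress_T(strain, T, coeffs) for T in temps])
    return temps[np.argmin(stresses)]
```

Sweeping `strain` at a fixed T reproduces a Figure 7-style curve; sweeping T at a fixed strain reproduces a Figure 8-style curve.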
There are two reasons for the temperature dependence of the hyper-elasticity of carbon-black-filled rubbers. First, as the temperature gradually increases, the movement between molecules becomes more intense and the intermolecular potential energy is reduced, which results in a thermal softening effect of the filled rubber. Second, as the temperature gradually increases, the conformational entropy of the long-chain molecular system of the rubber changes and the thermo-elasticity of the rubber is enhanced, which makes the filled rubber show a thermal hardening effect. The thermal softening effect plays the major role at lower temperatures; once the test temperature exceeds the turning temperature, the thermal hardening effect gradually takes over. Therefore, the phenomenon of "softening first, then hardening" of the filled rubber occurs with increasing temperature. Meanwhile, with the addition of carbon black, the thermal softening effect of the filled rubber increases with the carbon black volume fraction, the thermal hardening effect gradually decreases, and the turning temperature becomes higher and higher.
Application of the Yeoh Model with Explicit Temperature Parameters in FEA
From Equation (10) and Tables 4-6, the temperature-dependent characterization parameters of rubber specimens with the four carbon black mass fractions can be obtained from the model parameters at different temperatures. Then, uniaxial tensile simulations were performed on the C60 rubber specimen at 293 K using ABAQUS/CAE, and the simulation results were compared with the experimental data. A dumbbell-shaped model with the same properties as the experimental specimen was built; the uniaxial tensile model was then obtained by adding material properties, building components, setting analysis steps, and dividing meshes, as shown in Figure 9. The tensile specimen model used the C3D8RH element. In order to resemble the experimental process as closely as possible, the simulation coupled the specimen ends with reference points A and B; reference point A was completely fixed, and a displacement along the Y-axis was applied to reference point B. In this way, the tensile process of the universal electronic tensile tester was simulated.
Figure 9. Uniaxial tensile specimen model and meshing.
It can be seen from Figure 10 that the stress distribution in the middle part of the dumbbell model is relatively uniform, and the stress concentration region of the specimen is the circular arc where the specimen narrows from the wide end. The stress-strain curve for the simulation was therefore determined from the uniformly deforming region in the middle part of the dumbbell model. From Figure 11, the simulated uniaxial tensile data are consistent with the trend of the experimental data. This indicates that the Yeoh constitutive model with explicit temperature parameters can predict the uniaxial tensile test data over the tested temperature range under the 150% strain. Because the model parameters can be obtained from simple uniaxial tensile tests, the model can be applied to actual working conditions while guaranteeing the required accuracy. Therefore, the Yeoh constitutive model with explicit temperature parameters has high engineering applicability.
If the temperature of the rubber specimen is known, the corresponding model parameters can be calculated immediately from the Yeoh constitutive model with explicit temperature parameters. Therefore, the model can be quickly applied in finite element analysis.
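A minimal sketch of this workflow, assuming the standard Abaqus `*Hyperelastic, yeoh` keyword whose data line carries C10, C20, C30, D1, D2, D3 in that order (the helper name, material name, and coefficient values are ours, not the paper's):

```python
def yeoh_material_card(name, T, coeffs):
    """Evaluate the temperature-explicit Yeoh parameters at a known
    temperature T (Equation (9)) and emit an Abaqus material block.
    Incompressibility is assumed, so D1 = D2 = D3 = 0."""
    c10, c20, c30 = (a + b * T + c * T**2 for a, b, c in coeffs)
    return (
        f"*Material, name={name}\n"
        "*Hyperelastic, yeoh\n"
        f"{c10:.6g}, {c20:.6g}, {c30:.6g}, 0., 0., 0.\n"
    )
```

The emitted block can be pasted into (or written programmatically to) the ABAQUS input deck for the simulation temperature of interest.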
The Yeoh constitutive model with explicit temperature parameters thus provides a more convenient and accurate basis for hyper-elastic finite element analyses. However, the simulation results still show some deviations from the experimental data, which indicates that there is still room for improvement in the model.
Conclusions
Based on the Yeoh constitutive model and continuum mechanics theory, the Yeoh constitutive model with explicit temperature parameters was constructed. From the uniaxial tensile experimental data of the rubber samples, the following conclusions can be drawn: (1) The hyper-elastic mechanical behavior of the carbon-black-filled rubber specimens was strongly correlated with temperature over a large deformation range (150% strain). The turning temperature was correlated with the carbon black mass fraction and gradually increased with it; the turning temperatures were 333 K, 353 K, and 363 K. For the rubber unfilled with carbon black, the stress-strain curve always increased with increasing temperature. (2) The Yeoh model, Ogden model, and Arruda-Boyce model were investigated to characterize the hyper-elastic mechanical behavior of the rubbers at different temperatures.
Although the Ogden constitutive model fits the stress-strain curves at different temperatures well, there was no regularity in the variation of its model parameters with temperature. The Yeoh constitutive model fits the stress-strain curves at different temperatures well and can exhibit temperature-dependent hyper-elastic mechanical behavior over a wide range of deformations. Based on the Yeoh constitutive model and continuum mechanics theory, the Yeoh constitutive model with explicit temperature parameters was constructed, which can better describe the constitutive mechanical behavior of rubber at different temperatures. (3) The finite element analysis of uniaxial stretching was performed using the Yeoh constitutive model with explicit temperature parameters. The simulation results were in good agreement with the experimental data (Figure 11. The stress-strain curve of the FEA results and the experimental data of the C60 rubber specimen).
Melanomas with activating RAF1 fusions: clinical, histopathologic, and molecular profiles
A subset of melanomas is characterized by fusions involving genes that encode kinases. Melanomas with RAF1 fusions have been rarely reported, mostly in clinical literature. To investigate this distinctive group of melanomas, we searched for melanomas with activating structural variants in RAF1, utilizing our case archive of clinical samples with comprehensive genomic profiling (CGP) by a hybrid capture-based DNA sequencing platform. Clinical data, pathology reports, and histopathology were reviewed for each case. RAF1 breakpoints, fusion partners, and co-occurring genetic alterations were characterized. From a cohort of 7119 melanomas, 40 cases (0.6%) featured fusions that created activating structural variants in RAF1. Cases with activating RAF1 fusions had median age of 62 years, were 58% male, and consisted of 9 primary tumors and 31 metastases. Thirty-nine cases were cutaneous primary, while one case was mucosal (anal) primary. Primary cutaneous melanomas showed variable architectures, including wedge-shaped and nodular growth patterns. Cytomorphology was predominantly epithelioid, with only one case, a desmoplastic melanoma, consisting predominantly of spindle cells. RAF1 5′ rearrangement partners were predominantly intrachromosomal (n = 18), and recurrent partners included MAP4 (n = 3), CTNNA1 (n = 2), LRCH3 (n = 2), GOLGA4 (n = 2), CTDSPL (n = 2), and PRKAR2A (n = 2), all 5′ of the region encoding the kinase domain. RAF1 breakpoints occurred in intron 7 (n = 32), intron 9 (n = 4), intron 5 (n = 2), and intron 6 (n = 2). Ninety-eight percent (n = 39) were wild type for BRAF, NRAS, and NF1 genomic alterations (triple wild type). Activating RAF1 fusions were present in 2.1% of triple wild-type melanomas overall (39/1882). In melanomas with activating RAF1 fusions, frequently mutated genes included TERTp (62%), CDKN2A (60%), TP53 (13%), ARID2 (10%), and PTEN (10%). 
Activating RAF1 fusions characterize a significant subset of triple wild-type melanoma (2.1%) with frequent accompanying mutations in TERTp and CDKN2A. CGP of melanomas may improve tumor classification and inform potential therapeutic options, such as consideration of specific kinase inhibitors.
Introduction
The majority of melanomas harbor point mutations of BRAF, NRAS, KIT, or NF1 that drive tumor growth [1,2]. Kinase rearrangements, although less common, represent the oncogenic drivers in emerging subgroups of melanoma, often through activation of MAP kinase pathways. Rearrangements in several genes, including BRAF, RET, ROS1, ALK, NTRK1, and NTRK3, have been characterized in subsets of melanoma [2][3][4][5]. Surprisingly, despite the central role of RAF1 in the MAP kinase pathway, there are only isolated reports of RAF1 (CRAF) fusions in melanomas, and the histopathologic characterizations of these tumors have been limited [6][7][8].
The literature on melanoma with rearrangements in kinase genes other than RAF1 is extensive. In particular, many of these rearrangements are associated with melanomas that demonstrate characteristic spitzoid cytomorphology, leading to their classification as Spitz melanomas [9][10][11]. As specified in the most recent World Health Organization (WHO) classification [12], the term "Spitz melanoma" refers specifically to melanoma with both histologic changes reminiscent of Spitz nevi and a known oncogenic fusion driver, particularly of kinase-encoding genes. In contrast, "spitzoid melanoma" refers to melanoma with some morphologic resemblance to Spitz nevus, but without a known fusion driver.
Retrospective studies of spitzoid neoplasms have found activating fusions involving BRAF, RET, ROS1, ALK, or NTRK1 in 39% of melanomas with spitzoid morphology and just over 50% of Spitz nevi and atypical Spitz tumors [11,13]. In the largest reported series to date, patients with fusion-positive Spitz melanomas showed a broad age distribution, ranging from 6 to 73 years old, with a median age of 31 years [11]. While Spitz nevi and atypical Spitz tumors with NTRK3 [14] and NTRK1 [15,16] fusions, as well as pigmented spindle cell nevi of Reed with NTRK3 fusions [17], have shown distinctive clinical and histopathologic profiles, Spitz melanomas with these alterations generally do not show similarly distinguishing characteristics. Small series have characterized adult cutaneous melanomas with NTRK fusions, correlated with large epithelioid and amelanotic cytomorphology, and BRAF fusions, which have been variably correlated with spitzoid morphology [3,4,18,19]. A recent study described pediatric Spitz melanomas with MAP3K8 fusions or truncations which tended to show expansile growth, hypercellularity, deep mitoses, and ulceration [20].
Following the identification in our archive of an activating RAF1-fusion melanoma that responded to therapy with a MEK inhibitor [21], we performed a search of our archive of 276,645 clinical samples to identify melanoma cases with RAF1 fusions that created known or likely activating structural variants in RAF1, defined as loss of the autoinhibitory domain but retention of the kinase domain. In this study, we present the first series of activating RAF1-fusion melanomas with clinical-pathologic correlation, detailed descriptions of several new fusion variants, and a thorough characterization of accompanying mutations.
Cohort and genomic analyses
Comprehensive genomic profiling (CGP) was performed in a Clinical Laboratory Improvement Amendments certified, College of American Pathologists-accredited laboratory (Foundation Medicine, Inc., Cambridge, MA, USA).
Approval for this study, including a waiver of informed consent and a HIPAA waiver of authorization, was obtained from the Western Institutional Review Board (Protocol No. 20152817). For quality assurance, the presence of diagnostic tumor tissue was confirmed on routine hematoxylin and eosin (H&E)-stained slides before DNA extraction. In brief, ≥60 ng of DNA was extracted from 40 μm sections of 7119 melanoma specimens in formalin-fixed, paraffin-embedded tissue blocks. The samples were assayed by CGP using adaptor ligation, and hybrid capture was performed for all coding exons from 287 (version 1) to 315 (version 2) cancer-related genes plus select introns from 19 (version 1) to 28 (version 2) genes frequently rearranged in cancer (Supplementary Table 1). Sequences were analyzed for all classes of genomic alterations, including short variant alterations (base substitutions, insertions, and deletions), copy number alterations (focal amplifications and homozygous deletions), and select gene fusions or rearrangements, by methods previously described [22][23][24]. Tumor mutational burden (TMB, mutations/Mb) was determined on 0.8-1.1 Mbp of sequenced DNA [24]. Microsatellite instability was determined on up to 114 loci [25].
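As described, TMB reduces to a count of eligible somatic mutations normalized by the megabases of DNA sequenced. A minimal sketch; the counts are illustrative and the assay's actual mutation-filtering rules are not reproduced:

```python
def tumor_mutational_burden(eligible_mutations, megabases_sequenced):
    """Tumor mutational burden in mutations per megabase.

    `eligible_mutations` is the count of somatic coding mutations left
    after filtering; the assay's filtering criteria are not modeled here.
    """
    if megabases_sequenced <= 0:
        raise ValueError("megabases_sequenced must be positive")
    return eligible_mutations / megabases_sequenced

# Illustrative numbers only: 11 eligible mutations over 1.1 Mb is about
# 10 mutations/Mb, comparable to the median TMB reported for this cohort.
tmb = tumor_mutational_burden(11, 1.1)
assert abs(tmb - 10.0) < 1e-6
```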
Mutational signatures
Mutational signatures were evaluated for all samples containing at least 20 nondriver somatic missense alterations. Signatures were given by analysis of the trinucleotide context and profiled using the Sanger COSMIC signatures of mutational processes in human cancer [26]. A positive signature was determined if a sample had at least a 40% fit to a mutational process [26]. The COSMIC UV signature is dominated by C > T transition mutations in a CC or TT dinucleotide setting [27].
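The UV-signature criterion just described (dominance of C>T transitions in a dipyrimidine context) can be approximated by simple counting. This is a crude proxy: real COSMIC signature fitting decomposes the full trinucleotide-context spectrum rather than computing a single fraction, and the mutation tuples below are invented for illustration.

```python
def uv_signature_fraction(mutations):
    """Fraction of substitutions that are C>T changes at a dipyrimidine
    site (5' neighbor C or T) -- a rough proxy for the COSMIC UV
    signature, not a substitute for full signature decomposition.

    mutations: iterable of (ref, alt, five_prime_base) tuples.
    """
    muts = list(mutations)
    if not muts:
        return 0.0
    uv_like = sum(
        1 for ref, alt, five_prime in muts
        if ref == "C" and alt == "T" and five_prime in ("C", "T")
    )
    return uv_like / len(muts)

# Three of these four illustrative substitutions are UV-like.
sample = [("C", "T", "T"), ("C", "T", "C"), ("G", "A", "A"), ("C", "T", "T")]
assert uv_signature_fraction(sample) == 0.75
```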
Clinical-pathological analysis of melanoma cohort harboring activating RAF1 fusions

The cohort of melanomas harboring activating RAF1 fusions comprised 40 cases, each from a different patient. Assays with CGP (Foundation Medicine, Cambridge, MA, USA) were performed during clinical care at other institutions. Clinicopathological data, including patient age, gender, tumor site, tumor diameter, and stage, were extracted from the accompanying pathology reports.
H&E stained sections from each of the 40 cases were assessed retrospectively by two board-certified dermatopathologists (JYT and MCM). Histologic parameters assessed on primary tumors included tumor silhouette (dome shape, plaque-like growth, nodular growth, etc.), symmetry, shape of the tumor base (wedge-shaped, flat, bulbous, etc.), presence of epidermal involvement (and intraepidermal growth patterns), ulceration, maturation, deep nested growth, fascicular growth, associated dermal fibrosis, Breslow depth, mitotic rate, grade of solar elastosis [28], and tumor-infiltrating lymphocytes. Cytologic features, assessed on all cases, included predominant cytomorphology (epithelioid, spindled, mixed epithelioid and spindled), cytoplasmic color and abundance, and nuclear features of chromatin quality, nucleolar prominence, and degree of pleomorphism.
The H&E slides were independently diagnostic of melanoma for primary tumors and for pigmented metastases. In contrast, H&E slides were not independently diagnostic for cases of metastatic melanoma that showed nonpigmented malignant epithelioid proliferations. Those cases required diagnostic corroboration with accompanying pathology reports for relevant historical and immunohistochemical details (e.g., documented Melan-A and S100 positivity to confirm the diagnosis of melanoma vs. other malignant epithelioid tumors).
Quantitative data were analyzed using the Fisher exact test owing to the categorical quality of the data and the size of the cohort. For TMB comparison between two groups, the nonparametric Mann-Whitney U test was used. A twotailed P value of <0.05 was considered to be statistically significant.
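The Fisher exact test named above can be computed for a 2×2 table directly from the hypergeometric distribution. A stdlib-only sketch (two-tailed by summing all tables with the same margins that are no more probable than the observed one, which is one common convention; the Mann-Whitney U test used for TMB is not reproduced here):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-tailed Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1 = a + b
    col1 = a + c
    n = a + b + c + d
    denom = comb(n, row1)

    def p_table(x):
        # Probability of the table whose top-left cell is x.
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

# Example with invented counts: a mildly unbalanced 2x2 split.
assert abs(fisher_exact_2x2(3, 1, 1, 3) - 34 / 70) < 1e-9
```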
Clinical-pathologic features
From an internal series of 7119 melanomas that had undergone prior hybrid capture-based DNA sequencing, 40 cases (0.6%), each from a different patient, featured gene rearrangements that created known or likely activating structural variants in RAF1, defined as loss of the autoinhibitory domain but retention of the kinase domain.
Among patients with activating RAF1-rearranged melanomas, the ages ranged from 34 to 86 years, with a median of 62 years. There were 23 males and 17 females. All patients had clinically advanced disease. Clinical staging ranged from at least stage 2A to stage 4, with the majority of cases documented at stage 4 (n = 25 of 40; 63%) and most of the remaining cases at either stage 3A or 3B (n = 10 of 40; 25%). Sequencing was performed on the original primary tumor in 8 primary cutaneous melanomas and on 31 metastatic disease samples. Of the metastatic samples, sites included regional lymph nodes (n = 8), in-transit metastasis (n = 1), and distant lymph nodes (n = 3). Additional distant metastatic sites included skin (subcutaneous (n = 4) and dermal (n = 2)), soft tissue (n = 3), brain (n = 2), lung (n = 2), and one each involving liver, omentum, small intestine, adrenal, bone, and spleen. Thirty-nine cases were consistent with either primary cutaneous melanoma or metastatic melanoma from a skin primary, while one case was a primary melanoma of anal mucosa.
Primary cutaneous tumors occurred on the extremities and trunk and showed a mean tumor diameter of 16 mm (range 3-40 mm), mean thickness of 6.6 mm (range 2.3-17 mm), and mean mitotic rate of 5.5 per mm² (range 1-14 per mm²) (Table 1). Melanomas were classified as nodular (3), superficial spreading (2), unclassified (2), and desmoplastic (1). The two melanomas with an unclassified subtype comprised one specimen of broadly ulcerated and deeply invasive melanoma without assessable lesional edges (case 1) and another specimen of residual melanoma deep to scar (case 4).
Histopathologic examination of the eight primary cutaneous melanomas revealed heterogeneous features (Fig. 1). Three cases showed domed surfaces with wedge-shaped bases (Fig. 1a-g), two showed nodular growth in the dermis and subcutis, one showed nodular and diffusely infiltrative growth in the dermis and subcutis, one appeared plaque-like with an exophytic component (Fig. 1c), and the desmoplastic melanoma appeared as a haphazard spindle cell proliferation in the dermis and subcutis with fibrosis and lymphoid aggregates. Four cases were plainly asymmetric, while four were subtly asymmetric. Four cases showed epidermal involvement, with two demonstrating melanoma in situ extending beyond the dermal proliferation (corresponding to the radial growth phase) and the remaining two showing focal involvement overlying the dermal component. Of those with epidermal involvement, all four showed some degree of epidermal hyperplasia, three displayed pagetoid growth, and a single case showed confluent growth with epidermal effacement.
Regarding the dermal component of the primary cutaneous melanomas, significant maturation with depth was not seen in any case. The presence of deep nested growth was seen in four cases, including all three with wedge-shaped arrangement (Fig. 1b-h). Three cases were associated with densely eosinophilic and slightly thickened collagen fibers, while one case showed a minute focus of fibrosis with increased fibroblasts and pale myxoid stroma. All cases contained mitotic figures within their deep aspects. Solar elastosis tended to be scant; by the WHO scoring system [28], two had grade 0 solar elastosis, four had grade 1, one had grade 2, and one could not be determined owing to lack of tumor-free and evaluable dermis in the H&E sections. Tumor-infiltrating lymphocytes were absent or sparse in three cases, nonbrisk in four cases, and brisk in one case.
Among the eight cutaneous melanomas, melanocytic cytology was predominantly epithelioid in seven cases, and spindled in the desmoplastic melanoma case. Among the seven predominantly epithelioid cases, one showed focally spindled growth in the vertical growth phase (Fig. 1f). Their cytoplasm tended to be amphophilic (four cases) to palely eosinophilic (three cases), while one case had densely eosinophilic cytoplasm. Cytoplasmic quantity in the epithelioid cases was moderate to abundant in five and relatively scant in two. Only two showed cytoplasmic pigmentation (Fig. 1f), which was focal in both cases. Nuclear size was medium to large, and chromatin was heterogeneous (admixed dense and pale) in all cases. Nucleoli were prominent in three cases, small and distinct in two cases, and indistinct in three cases. Nuclear pleomorphism was judged to be severe in four cases and mild to moderate in the other four. No cases showed convincing Spitz-nevus-like cytomorphology (i.e., voluminous, homogeneous cytoplasm with sharp borders) or pulverocytic cytology characterized by pale, finely pigmented cytoplasm.
Three additional cases with RAF1 rearrangements that did not result in loss of the regulatory domain of RAF1 were identified, including two RAF1 intergenic rearrangements (exon 2-6 inversion and exon 6-7 duplication) and one OPRM1-RAF1 rearrangement with RAF1 breakpoint at exon 2. The exon inversion case was of vulvar origin with two copy loss of CDKN2A, while the remaining two cases were cutaneous in origin and had pathogenic BRAF mutations.
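The activating/non-activating distinction applied in this series — a fusion counts as activating when it removes RAF1's N-terminal autoinhibitory domain but retains the kinase domain — can be expressed as a rule of thumb. A sketch only: the intron boundaries encoded below are assumptions chosen to match the breakpoints reported here, not a validated clinical classifier.

```python
# Rule-of-thumb sketch: in this series, activating breakpoints fell in
# introns 5, 6, 7, and 9 (removing autoinhibitory sequence, keeping the
# kinase domain), while a breakpoint as far 5' as exon/intron 2 did not
# qualify.  The range below is an assumption fitted to those observations.

ACTIVATING_INTRONS = range(5, 10)  # assumed: 3' of autoinhibitory exons, 5' of kinase exons

def is_likely_activating(breakpoint_intron):
    """Heuristic only: breakpoint removes autoinhibition, keeps the kinase domain."""
    return breakpoint_intron in ACTIVATING_INTRONS

# Breakpoint intron -> number of cases in this series (n = 40).
breakpoint_counts = {7: 32, 9: 4, 5: 2, 6: 2}
assert sum(breakpoint_counts.values()) == 40
assert all(is_likely_activating(i) for i in breakpoint_counts)
assert not is_likely_activating(2)  # retains the autoinhibitory domain
```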
Discussion
To our knowledge, this study represents the first series of activating RAF1-fusion melanomas with clinicopathologic correlation and detailed characterization of genetic alterations. Activating RAF1 fusions represent a significant subset of triple wild-type melanoma (2.1% of all triple wild-type melanoma). Recurrent fusion partners and recurrent RAF1 breakpoints were present. Frequent accompanying mutations in TERTp and CDKN2A were identified, typical for skin primary melanoma, and TMB was not significantly different from primary skin and anus melanoma cases without activating RAF1 fusions.
Reports of activating RAF1 fusions in melanocytic neoplasms have been rare. In one study seeking therapeutically targetable gene fusions in multiple cancer types, FISH for BRAF and RAF1 performed on 131 melanomas identified one BRAF rearrangement and one RAF1 rearrangement [6]. A more recent study of kinase fusions across large numbers of various malignancies found four cases of RAF1 fusion out of 397 melanomas [29]. A whole-genome study of 183 melanomas found RAF1 fusions in two melanomas (with partners CDH3 and GOLGA4): one triple wild type and one with an NF1 comutation [30]. In a study of 21 large to giant congenital nevi, one case was found to have a SOX5-RAF1 fusion [31]. This congenital nevus, like that of an ALK-fused nevus in the same study, lacked comutations of NRAS and BRAF. A striking case report also showed a fusion of SASS6-RAF1 in a giant congenital nevus that gave rise to melanoma with rhabdomyosarcomatous differentiation [32]. While a balanced translocation was found in the background congenital nevus, an unbalanced translocation was noted in the rhabdomyosarcomatous component.
Of note, activating RAF1 fusions are also found in a small proportion of thyroid carcinomas, prostatic adenocarcinomas, and pilocytic astrocytomas [29,33,34]. Atefi et al. described RAF1 missense point mutation R391W in one melanoma cell line that lacked common driver mutations and showed resistance to vemurafenib despite MAPK signaling [7]. One report described a melanoma with GOLGA4 fused to exons 8-17 of RAF1, retaining the RAF1 kinase domains, and with accompanying mutations in CTNNB1 and CDKN2A [8]. Importantly, this patient's melanoma showed a marked clinical response to therapeutic MEK inhibition, as indicated by serial PET scans.
Prior histopathologic descriptions of activating RAF1-fusion melanomas are limited. We observed various architectural patterns, including wedge-shaped growth with associated epidermal hyperplasia and deep nested melanocytes in three cases, reminiscent of some Spitz tumors with ALK and NTRK1 fusions [10]. In contrast to Spitz tumors with fusions of ALK, NTRK1, and ROS1, which are usually compound with a prominent epidermal component, epidermal involvement in the RAF1-fused melanomas was typically limited or absent [9,10,16,35,36]. Characteristic spitzoid cytomorphology was not observed. Rather, we noted a somewhat heterogeneous group of tumors with predominantly epithelioid melanocytic cytomorphology with amphophilic to eosinophilic cytoplasm and medium to large nuclei with heterogeneous chromatin and often prominent nucleoli. In some cases, the large epithelioid cytology resembled that reported in NTRK-fusion melanomas, as well as in some Spitz tumors with ALK translocation [4,13,36]. In our cases of primary cutaneous melanoma with activating RAF1 fusions, a distinctive growth pattern, such as the fascicular pattern of ALK-fused Spitz tumors, was not seen. Nevertheless, typical histopathologic features of melanoma, including asymmetry, lack of maturation, cytologic atypia, and dermal mitotic activity, were readily identifiable in the cases evaluated in our study.
Triple wild-type melanomas (BRAF, RAS, and NF1 wild type) typically lack a UV signature [37]. In our series, however, activating RAF1-fusion melanoma was almost entirely triple wild type, cutaneous, and UV driven. Given the frequent concurrent mutations in TERTp, CDKN2A, and TP53, and the relatively low TMB (median = 10.2 mut/Mb), these tumors may fit best into the low cumulative sun damage group, as described in the current WHO classification [28]. Concordant with this classification, primary cases in our cohort generally lacked significant solar elastosis.
The RAF1 fusions we identified appear to be pathogenic, given that the most significant regulatory mechanism for RAF1 is the direct association of the N-terminal autoinhibitory domains to the kinase domain. Loss of this domain but retention of the kinase domains, as seen in each of our cases (Fig. 3a), would cause autonomous, unregulated activation of kinase activity [38].
Limitations of this study include its retrospective nature and the distinct population of patients highly enriched for aggressive tumors, mostly metastatic to distant sites. Typically, extensive genomic testing is performed on advanced malignancies from patients whose oncologists are seeking targeted therapies. Thus, this patient population may not be representative of the general population of patients with melanoma. We note that the median age of 62 years for RAF1-fusion melanomas in this study is significantly older than that reported in prior melanocytic tumors with gene fusions, which tend toward pediatric and young adult patients. While an actual age difference may exist, this age discrepancy may be attributable to disparate study cohorts: our cohort was selected from melanomas with proven aggressive behavior, while other fusion-positive melanocytic tumor cohorts often were selected for Spitz morphology, often in diagnostically challenging cases, and therefore enriched for young patients. Finally, while our review of histologic slides from all cases enabled us to confirm histopathologic diagnoses, particularly for cases of primary cutaneous melanoma and pigmented metastatic lesions where H&E slides were independently diagnostic, some cases required corroboration with details from the accompanying pathology reports (e.g., for metastatic melanomas with histologic slides showing malignant epithelioid proliferations, we relied on corresponding pathology reports for confirmatory immunohistochemical details).
CGP of melanomas may provide insights into pathogenesis, as well as potential therapeutic options. Additional studies will be needed to correlate the finding of activating RAF1 fusions in melanoma with prognostic data and treatment outcomes. Prognostic data will also enable the comparison of RAF1-fused melanomas by comutations, such as TERTp, which has been shown to be an important prognostic marker for spitzoid melanocytic neoplasms [39]. Furthermore, the spectrum of melanocytic lesions with activating RAF1 rearrangements may be wider than the highly selected, aggressive tumors examined in this study. Overall, our findings provide a compelling rationale for consideration of CGP of melanomas, which may offer insights into melanoma biology and potentially inform therapeutic options, including specific kinase inhibitors.
Compliance with ethical standards
Conflict of interest EAW, NS, MM, RS, DCP, ESS, BMA, JMV, JAE, JSR, and JYT are employees of Foundation Medicine, Inc., a wholly owned subsidiary of Roche Holdings, Inc. and Roche Finance Ltd, and these employees have equity interest in an affiliate of these Roche entities. MCM declares no conflict of interest.
Ethical approval IRB approval status: reviewed and approved by Western IRB; Protocol No. 20152817.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/.
Spin Networks, Turaev-Viro Theory and the Loop Representation
We investigate the Ponzano-Regge and Turaev-Viro topological field theories using spin networks and their $q$-deformed analogues. I propose a new description of the state space for the Turaev-Viro theory in terms of skein space, to which $q$-spin networks belong, and give a similar description of the Ponzano-Regge state space using spin networks. I give a definition of the inner product on the skein space and show that this corresponds to the topological inner product, defined as the manifold invariant for the union of two 3-manifolds. Finally, we look at the relation with the loop representation of quantum general relativity, due to Rovelli and Smolin, and suggest that the above inner product may define an inner product on the loop state space.
State space for Regge-Ponzano theory.
The topological field theory of Turaev and Viro [1992] is defined by taking a triangulation of a 3-manifold M, on which a state or colouring is given by labelling each edge of the triangulation by a representation j_i of the quantum group U_q(sl(2)). The partition function Z_TV(M) for the manifold M is given by a finite sum over states on the interior of the manifold of the product of q-6j symbols {6j^(t)}_q corresponding to tetrahedra t, weighted by specified functions w_v(j_i) for each vertex, w_e(j_i) for each internal edge, and f(c_j) for the colouring c_j on the boundary,

Z_TV(M) = Σ_states Π_t {6j^(t)}_q Π_v w_v(j_i) Π_e w_e(j_i) f(c_j),   (2.1)

for states which satisfy certain admissibility conditions, as we shall discuss below. For a closed 3-manifold, this partition function is a topological invariant, i.e. it depends only on the topology of the manifold M and not on the triangulation used in the definition. For a 3-manifold with boundary, the partition function is a function of the colouring on the boundary, but still depends only on the topology of the interior of the manifold.
In the q → 1 limit, this reduces to the much earlier theory of Ponzano and Regge [1968], in which the j_i label representations of the classical group SU(2). The Regge-Ponzano partition function Z_RP(M) has a similar form to (2.1), but with the {6j^(t)}_q replaced by classical 6j symbols {6j^(t)}. Specifically, for a manifold M with colouring c_j on its boundary F,

Z_RP(M, c_j) = Σ_states Π_t {6j^(t)} Π_v w_v(j_i) Π_e w_e(j_i).   (2.2)

In this case, the state sum is infinite, but it may be regularised by truncating the sum at a large value L of j_i and then taking the limit as L → ∞. In this section, we shall assume that this may be done, and interpret the Regge-Ponzano theory as a formal topological field theory. To define fully a 3-dimensional topological field theory [Atiyah 1990], the vector space of states V(F) assigned to a 2-dimensional surface F must also be specified. We define the vector space V_RP(F) for Regge-Ponzano theory by analogy with the Turaev-Viro definition.
This is essentially equivalent to that given by Ooguri [1992], but written in a way that allows us to give a simple definition of the inner product. The vector space V_RP(F) associated to a surface F, triangulated by ∆, is defined as a quotient of the space C(F, ∆) of linear combinations of colourings of the triangulated surface,

V_RP(F) = C(F, ∆) / {φ : P[φ] = 0},   (2.3)

where P is the projection operator corresponding to the cylinder F × [0, 1] over the surface, given by

P[c_i] = Σ_{c_j} Z_RP(F × [0, 1]; c_i, c_j) c_j.   (2.4)

It is natural to assign the vector |M⟩ ∈ V_RP(F) associated to a 3-manifold M with boundary ∂M = F as

|M⟩ = Σ_{c_i} Z_RP(M, c_i) |c_i⟩,   (2.5)

where c_i is a colouring of the boundary, and Z_RP(M, c_i) is the partition function given by a state sum of the form (2.2) over all colourings of the interior which extend the colouring c_i on the boundary.
An inner product may be defined on the space of colourings C(F, ∆) by taking the set of different colourings on F to form an orthonormal basis,

⟨c_i | c_j⟩ = δ_ij.   (2.6)

On the quotient space V_RP(F), we define the inner product to be

⟨[φ], [η]⟩ = ⟨φ, P[η]⟩,   (2.7)

so that, for φ ∈ {φ_∆ : P[φ_∆] = 0}, as P is self-adjoint under (2.6), we have

⟨φ, P[η]⟩ = ⟨P[φ], η⟩ = 0,   (2.8)

and so the inner product is well-defined on V_RP(F).
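The well-definedness argument above can be illustrated with a toy finite-dimensional analogue: for any self-adjoint projector P, the pairing ⟨φ, Pη⟩ kills the kernel of P and so descends to the quotient. The matrices below are arbitrary stand-ins, not the Regge-Ponzano cylinder operator itself.

```python
# Toy illustration: with an orthonormal basis of "colourings" and a
# self-adjoint projector P (P = P^2 = P^T), the pairing <phi, P eta>
# vanishes whenever phi lies in ker P, so it is well defined on the
# quotient space C / ker P.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A self-adjoint projector onto the first coordinate of R^3.
P = [[1, 0, 0],
     [0, 0, 0],
     [0, 0, 0]]

phi_in_kernel = [0, 2, -5]   # P phi = 0
eta = [3, 1, 4]

assert dot(phi_in_kernel, mat_vec(P, eta)) == 0   # pairing kills ker P
assert dot([1, 1, 0], mat_vec(P, eta)) == 3       # nonzero off the kernel
```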
The above definitions imply that the inner product between the two vectors corresponding to two 3-manifolds M_1, M_2, which meet in a common surface F, is given by the invariant of the manifold obtained by gluing M_1 and M_2 along F:

⟨M_1 | M_2⟩ = Z_RP(M_1 ∪_F M_2).   (2.9)

This is the definition of the inner product required for a topological field theory [Atiyah 1990] to be consistent, and so this definition carries a lot of information about the elements of the theory. Witten [1989a] defined a set of topological field theories in which the partition function is given by the Chern-Simons path integral

Z(M) = ∫ DA exp( (ik/4π) ∫_M Tr(A ∧ dA + (2/3) A ∧ A ∧ A) ),   (2.10)

with the connection A taking values in the Lie algebras of various gauge groups. He argued [Witten 1989b,c] that Chern-Simons theories with certain choices of gauge group are equivalent to 2+1-dimensional gravity with or without cosmological constant. It follows that the Chern-Simons theory with gauge group ISO(3) is equivalent to 3-dimensional Euclidean gravity without cosmological constant. This theory is related to the Regge-Ponzano model as follows.
The state space V_CSW(F) of the ISO(3) theory consists of functionals ψ(ω) on the moduli space of flat SO(3) connections on a 2-surface F. The wave function for a particular manifold M is given by the functional ψ_M(ω) ≡ ⟨ω_i | ψ_M⟩ defined by the path integral (2.10), with the boundary condition that ω_i is the flat connection on the boundary ∂M = F.
There is a natural inner product on this space given by

⟨ψ_1 | ψ_2⟩ = ∫ Dω ψ̄_1(ω) ψ_2(ω).   (2.11)

It follows that the inner product of two Chern-Simons wave functions corresponding to two handlebodies M_1 and M_2 is given by the partition function for the closed, oriented manifold M formed by gluing together the two handlebodies,

⟨ψ_{M_1} | ψ_{M_2}⟩ = Z_CSW(M_1 ∪_F M_2).   (2.12)

Ooguri [1992] related the two theories by constructing the trivalent graph W, dual to the triangulation ∆ on the 2-dimensional boundary ∂M. Each edge C_i of this graph is labelled by the representation j_i on the edge of the triangulation that it crosses. This graph is interpreted as a Wilson line network by assigning, to edge C_i, the Wilson line

U_{j_i}[ω, C_i] = P exp( ∫_{C_i} ω^a t^a_{j_i} ),   (2.13)

where t_{j_i} are the SO(3) generators in the j_i representation. The product of these Wilson lines, contracted by 3j symbols at trivalent vertices, gives a gauge-invariant function ψ_{∆,c}(ω) of the triangulation ∆, the colouring c and the flat connection ω. This may be interpreted as a change of basis function,

ψ_M(ω) = Σ_c Z_RP(M, c) ψ_{∆,c}(ω),   (2.14)

where the sum is over all colourings c of the (fixed) triangulation ∆ on F.
Ooguri showed that this correspondence is 1-1 and independent of the triangulation chosen, and so the state spaces of the two theories, V_RP(F) and V_CSW(F), are isomorphic. He also claimed that the two inner products given by (2.9) and (2.12) are equivalent, and so the Regge-Ponzano theory and the ISO(3) Chern-Simons theory are equivalent as physical theories (subject to the proviso that neither theory is mathematically well-defined). This result suggests that the Regge-Ponzano theory may be interpreted as providing a model for quantum gravity in three dimensions, and that we should look for a similar result for the well-defined Turaev-Viro theory.
Spin networks and the inner product.
Spin networks were introduced by Penrose [1971] as a discrete model underlying spacetime. A spin network is an SU(2)-invariant tensor represented by a trivalent graph, a network with three edges meeting at each vertex. Each edge is labelled by a spin-j representation of the angular momentum covering group SU(2), where j ∈ {0, 1/2, 1, . . .}. For the spin network to be non-zero, the triangle inequalities must hold at each vertex,

|j_1 − j_2| ≤ j_3 ≤ j_1 + j_2,   (3.1)

and also the sum of the spins must be integral,

j_1 + j_2 + j_3 ∈ Z,   (3.2)

where j_1, j_2, j_3 are the spins assigned to the three edges meeting at that vertex. Penrose showed how to evaluate a spin network by using the binor calculus [Penrose 1972]. In this approach, a strand network is associated with each spin network by replacing each edge, labelled by spin j, by a linear combination of 2j strands, and summing over all ways of joining these strands at each vertex. Thus a vertex is replaced in the strand picture by a sum of strand diagrams (3.3), in which each bar represents an anti-symmetriser, given by the linear combination of the different ways the 2j strands may cross, with a coefficient of (−1) for each crossing (3.4).
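The admissibility conditions at a trivalent vertex stated above can be checked mechanically; a small sketch using exact rational spins:

```python
from fractions import Fraction

def admissible(j1, j2, j3):
    """Penrose admissibility at a trivalent vertex: triangle inequalities
    and integral total spin (j1 + j2 + j3 in Z)."""
    spins = [Fraction(j) for j in (j1, j2, j3)]
    if any(s < 0 or (2 * s).denominator != 1 for s in spins):
        return False  # spins must lie in {0, 1/2, 1, 3/2, ...}
    a, b, c = spins
    triangle = abs(a - b) <= c <= a + b
    integral = (a + b + c).denominator == 1
    return triangle and integral

assert admissible(Fraction(1, 2), Fraction(1, 2), 1)
assert not admissible(Fraction(1, 2), Fraction(1, 2), Fraction(1, 2))  # half-integral sum
assert not admissible(1, 1, 3)  # violates the triangle inequality
```

A state-sum implementation would apply such a check at every vertex of the dual graph before evaluating the corresponding 6j symbols.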
An oriented abstract spin network (where the orientation may be given by the embedding of the spin network in the plane) is evaluated by decomposing each strand network into a number of closed loops using the two basic binor identities [Penrose 1971, 1972]. These identities form the basis of a topologically invariant diagrammatic calculus [Kauffman 1990], based on the SU(2) invariant tensor ε AB . The binor calculus provides a way of calculating the norm of an abstract oriented spin network, which only depends on the network as an abstract graph together with its orientation. As we shall see in the next section, Kauffman [1991] showed that the binor identity is a special case of his bracket identity, which can be used to define a spin network based on the quantum group sl(2) q .
We can now use spin networks to look at the relation of the Regge-Ponzano model to the loop representation of quantum general relativity [Rovelli and Smolin 1989], in a similar way to the work of Rovelli [1993]. Rovelli related Ooguri's Wilson line network to the loop representation as follows. Each Wilson line U j i [A, C i ] in the j i representation is replaced by 2j i Wilson lines in the spin-1/2 representation, and each trivalent intersection is replaced by the sum over all ways of joining the strands. The strand network corresponding to a coloured triangulation (∆, c) is thus an ensemble of multiple loops, E ∆,c = {α 1 , α 2 , . . .}, where each multiple loop α i has the property that 2j single loops cross a link of the triangulation with colour j.
Here, we interpret the dual graph to a coloured triangulation in the Regge-Ponzano model as a spin network. This was first suggested by Hasslacher and Perry [1981] and Moussouris [1983] for the case of a spin network on S 2 . This spin network is converted to a strand network by replacing each trivalent vertex by a sum over strands according to (3.3). This strand network is then evaluated using the binor relations (3.5),(3.6). The key difference here from Rovelli's approach is the presence of the extra negative signs in the anti-symmetrisers (3.4). This rule, together with the binor relations, ensures that the evaluation of the spin network is topologically invariant. We shall see in the next section that this is because the binor relations (3.5),(3.6) are a special case of the Kauffman relations. Now, the state space of the Regge-Ponzano model V RP (F ) is given by the space of linear combinations of colourings of a triangulation. We map a particular coloured triangulation to its dual spin network, by mapping a coloured edge of the triangulation to an edge of the network crossing it with the same colouring, as in (3.8). It is clear that this mapping is 1-1. A spin network is evaluated as a linear combination of strand networks which obey the binor identities and so are isotopy invariant. This suggests that we can re-write the state space V RP (F ) as the space of linear combinations of isotopy classes of links, quotiented by the binor relations. As we shall see in the next section, this is an example of a skein space.
A basis element in the Regge-Ponzano state space V RP (F ) given by a particular colouring of a triangulation |∆, c⟩ maps to a spin network state |s⟩. A spin network state corresponds to a linear combination of strand network states |α⟩ as above, where n is the number of crossings in that state. We now map a strand network state to an equivalence class of a multiple loop under the binor relations (3.5),(3.6). So, a spin network state |s⟩ ∈ V RP (F ) maps to a state in the space V loop (F ) of complex linear combinations of characteristic functionals on isotopy classes of loops, quotiented by the binor relations.
We now want to consider the question of how to define an inner product on the state space. Rovelli proposed that mappings of the above type could be used to define the inner product between two loop states using Regge-Ponzano theory. From the above formulation, we propose the following interpretation. As a coloured triangulation maps 1-1 onto a spin network, this suggests that, following (2.6), we should take spin network states with different colourings to be orthonormal, (3.10). An arbitrary loop state |γ⟩ ∈ V loop (F ) may be written as a linear combination of spin network states |s j ⟩ as in (3.11). For the particular state corresponding to the vector for a 3-manifold, |γ M ⟩, the coefficient is λ j = Z(M, s j ). We shall discuss in section 5 the comparison between this approach and that of Rovelli and Smolin [1990].
Turaev-Viro theory and skein space.
I now want to extend the above work to begin to investigate the relation between the topological field theory of Turaev and Viro [1992] and the loop representation of quantum general relativity in 3 dimensions. There are two pieces of evidence which lead us to suspect such a relation. Firstly, Turaev-Viro theory is the generalisation of the Regge-Ponzano theory in which the representations of the classical group SU (2) are replaced by those of a quantum group U q (sl(2)) at q an r-th root of unity, so that the partition function is a finite sum. Thus, we may expect that the Turaev-Viro theory would be related to the loop representation. Secondly, we know that the Turaev-Viro invariant is the square of the Chern-Simons-Witten invariant for gauge group SU (2), which is equivalent to the SO(4) Chern-Simons theory that is related to 3-dimensional Euclidean gravity with a positive cosmological constant. This formal relation was implicit in Witten's work and was first written explicitly in [Ooguri and Sasakura 1991] and [Archer and Williams 1991], as in (4.1), for connections A, B taking values in the Lie algebra of SU (2), connection ω taking values in the Lie algebra of SO(3), triad e and cosmological constant Λ k , for level k of the Chern-Simons theory, which is related to the level r of the Turaev-Viro theory by r = k + 2. Much work has been undertaken by mathematicians to provide a more mathematically rigorous proof of the Chern-Simons-Witten topological field theory with gauge group SU (2) which does not rely on the use of path integrals. Reshetikhin and Turaev [1991] defined a topological invariant based on representations of the quantum group U q (sl(2)) which satisfies the same formal properties and so may be identified with the Chern-Simons-Witten invariant. In a series of papers, Lickorish [1991, 1993] showed that this Witten-Reshetikhin-Turaev (W-R-T) invariant could be reproduced using skein theory.
Independently, both Turaev [1992a,b] and Walker [1992] showed that the Turaev-Viro invariant is the square of the W-R-T invariant, and so justified equation (4.1). More insight into the relation between these two theories was provided by the work of Justin Roberts [Roberts 1993, 1994] extending the skein-theoretic approach, and here we shall extend this approach. In a physical model, we are interested in finding an interpretation of the vector space of states and the inner product between states defined by a topological field theory. From the relation (4.1) between the partition functions and the formalism of a topological field theory [Atiyah 1990], it is expected that the state space for the Turaev-Viro theory is isomorphic to the endomorphisms of the state space for the Witten theory, (4.4). An outline proof of this theorem was given in [Turaev 1992b]. We show how this relation is recovered in the skein-theoretic approach. The skein space SM of an oriented 3-manifold M is the vector space of formal linear sums, over C , of isotopy classes of framed links L in M , quotiented by the Kauffman relations (4.5), (4.6) [Kauffman 1991], where the deleted component in (4.6) is a closed contractible loop, and the diagrams in (4.5) represent pieces of links inside a 3-ball, outside of which the links are identical. Applying (4.5) to a link reduces that link to a linear combination of loops with no crossings inside any local 3-ball, and each contractible loop may be replaced by a coefficient by (4.6). Thus, the skein space is the space of linear combinations of closed loops with no local crossings which are both non-contractible and disjoint.
We can see that the Penrose binor relations (3.5),(3.6) are a special case of the Kauffman skein relations, given by taking A = −1. It is convenient to use a two-dimensional projection to describe the skein space, but we can see that the above definition is inherently three-dimensional. In particular, the skein space SA of a solid torus D 2 × S 1 is described by its projection on to a 2-dimensional annulus A and is given by the polynomial algebra over α n , where α n is the basis element consisting of n loops around the hole. We can also use this description as a projection to define the skein space SF of a 2-surface F as the skein space of the cylinder F × [0, 1] over the surface.
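The specialisation A = −1 can be checked at the level of the loop coefficient. In the Kauffman bracket calculus, the value assigned to a closed contractible loop is δ = −A² − A⁻² (a standard fact, assumed here rather than taken from the text above), which gives the Penrose loop value −2 at A = −1 and −2cos(π/r) at A = exp(iπ/2r). A small numerical sketch (ours):

```python
import cmath
import math

def loop_value(A):
    """Kauffman coefficient for a closed contractible loop: delta = -A^2 - A^(-2)."""
    return -A**2 - A**(-2)

# Binor specialisation A = -1 recovers the Penrose loop value -2.
print(loop_value(-1))                                   # -2.0

# At A = exp(i*pi/2r) the loop value is -2*cos(pi/r); e.g. r = 4:
r = 4
A = cmath.exp(1j * math.pi / (2 * r))
print(abs(loop_value(A) - (-2 * math.cos(math.pi / r))) < 1e-12)   # True
```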
A description of the state space of a surface in the W-R-T theory using skein theory was given by Roberts [1994] as follows. Consider the surface F as dividing S 3 into a handlebody H and its dual H ′ . This implies [Blanchet et al 1993] that the space H is finite-dimensional and may be identified as the state space of the W-R-T theory (4.12). In the limit A → −1, which corresponds to q = A 2 → 1, the q-deformed anti-symmetrisers reduce to the anti-symmetrisers (3.4), and so a q-spin network reduces to a classical spin network as q → 1.
A q-spin network is evaluated by applying the Kauffman relations (4.5), (4.6) to each crossing and contractible loop, and so is naturally an element of the skein space SM . A triad T (i, j, k) is defined by (4.15) and is said to be admissible if (4.16) holds, and inadmissible if these relations are not satisfied. Note that networks are labelled by integers, whereas triangles in the Turaev-Viro picture were labelled by half-integers.
A q-spin network is admissible if and only if all its triads are admissible. As we shall see, we want to impose the condition that we only consider admissible networks. So, we define N as the subspace of the skein space SM of any 3-manifold M generated by inserting an inadmissible triad anywhere into the manifold, i.e. the subspace generated by isotopy classes of q-spin networks in M for which some triad is inadmissible, and quotient out by this subspace. Roberts [1994] showed that this quotient space of the skein space of a handlebody, which we call the reduced skein space, is equivalent to the quotient space (4.10). So, the state space for the Witten theory V W RT (F ) ∼ = H has now been written as the quotient space of the skein space of the handlebody by the subspace generated by inadmissible triads. We can regard this as the space of admissible q-spin networks on the handlebody. Comparing this relation with the expected relation (4.4) between the state spaces of the two theories, I conjecture that the space F is isomorphic to the state space of the Turaev-Viro theory V T V (F ), and further that, in analogy with Roberts' result for the Witten state space, this space F is equivalent to the reduced skein space defined by (4.20), where N is the subspace of SF generated by inserting any inadmissible triad into F . This relation may be expected from general principles. Recall that the Turaev-Viro theory [Turaev and Viro 1992] is specified by the partition function (2.1), which is given by a state sum over admissible colourings of a triangulated 3-manifold. The labels (i, j, k) assigned to the three edges around any triangle must satisfy (4.21) and the admissibility condition (4.22), where the theory is defined at q an r-th root of unity.
The state space V T V (F ) of the Turaev-Viro theory is defined as the space of linear combinations of admissible colourings C(F ) on the triangulated surface F , quotiented by the kernel of the isomorphism Φ (F ×I) corresponding to the cylinder F × [0, 1] over the surface, (4.23). An admissible colouring of a triangulation of the surface, |∆, c⟩ ∈ V T V (F ), may be mapped to a q-spin network, as follows. Take the trivalent graph on the surface dual to the triangulation, as in (3.8), and label an edge of the graph by twice the half-integer label on the edge of the triangulation which it crosses. This integer-labelled trivalent graph may now be interpreted as a q-spin network. Furthermore, a colouring of a triangulation which is admissible according to (4.21), (4.22) maps to a network in which the colouring of each triad is admissible by (4.16). Thus, the map (4.24) from the Turaev-Viro state space V T V (F ) to the reduced skein space F is surjective and we expect it to be an isomorphism.
To gain more physical insight into the nature of the reduced skein space F , we may attempt to find a basis for this space. A basis is not given simply by the set of all admissible colourings of a q-spin network dual to a simple triangulation of the surface, as there is a subtlety due to the fact that this space is the skein space quotiented by the space generated by inadmissible triads.
Let us consider the simple example in which F is the surface of the torus S 1 × S 1 and r = 4, so that the admissible colourings of an edge are j ∈ {0, 1, 2}. The Witten state space for this surface is the reduced skein space H of the solid torus, which has the basis α j , j ∈ {0, 1, 2}, given by j loops around the hole (since the inadmissibility of triads of higher order implies that the skein element for any higher number of loops must be zero), (4.25), and so is 3-dimensional. By the relation (4.4), the Turaev-Viro state space V T V (F ) should be 9-dimensional. The simplest (degenerate) triangulation of the surface consists of two triangles with their edges identified, and so the dual q-spin network has two vertices and three edges, labelled by j 1 , j 2 , j 3 , such that (j 1 , j 2 , j 3 ) form an admissible triad. In this case, the admissible colourings of a triad are {(0, 0, 0), (0, 1, 1), (0, 2, 2), (1, 1, 2)} and so, allowing for permutations, there are 10 admissible colourings of this network. However, these colourings are not independent in the reduced skein space F because of relations generated by inadmissible colourings. In particular, we find that there is exactly one relation which may be generated by the inadmissible colouring (2, 2, 2) of this network, and so the space F is indeed 9-dimensional. Similarly, we find that, for the case r = 5, there are four relations amongst the 20 admissible colourings of this network and so F is 16-dimensional. These examples lend support to our conjecture that F may be identified as the Turaev-Viro state space V T V (F ).
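The counts in this example can be checked by direct enumeration. A standard form of the admissibility condition (4.16) for integer labels at level r — assumed here, since the displayed equation did not survive extraction — is: the triad (i, j, k) satisfies the triangle inequalities, i + j + k is even, and i + j + k ≤ 2r − 4. With that assumption, a short enumeration reproduces the counts quoted above: 10 ordered admissible colourings of the two-vertex network at r = 4, and 20 at r = 5.

```python
from itertools import product

def admissible(i, j, k, r):
    """Admissibility of a triad of integer labels at level r (standard form,
    assumed): triangle inequalities, even total, and total at most 2r - 4."""
    return (abs(i - j) <= k <= i + j
            and (i + j + k) % 2 == 0
            and i + j + k <= 2 * r - 4)

def count_theta_colourings(r):
    """Ordered colourings (j1, j2, j3) of the two-vertex, three-edge network,
    whose single triad type must be admissible; labels range over 0..r-2."""
    labels = range(r - 1)
    return sum(admissible(i, j, k, r) for i, j, k in product(labels, repeat=3))

print(count_theta_colourings(4))   # 10, as in the text
print(count_theta_colourings(5))   # 20, as in the text
print(admissible(2, 2, 2, 4))      # False: the inadmissible triad used above
```

That the enumeration matches the paper's counts at both r = 4 and r = 5 is a consistency check on the assumed form of (4.16).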
Comparing with the definition (2.3) for the Regge-Ponzano theory, this suggests that we could define an inner product on the reduced skein space by taking different colourings of a q-spin network to be orthonormal. This picture of the state space should also enable us to relate a q-spin network on the boundary to the Kauffman-Lins [1990] formulation of the Turaev-Viro theory in the interior of the 3-manifold.
Discussion.
In this paper, we have explored several aspects of the relation between spin networks, simplicial state-sum models and the loop representation for quantum general relativity. We conclude by summarising what we have learned and discussing the possible implications.
We have given descriptions of the state space of a surface for the Ponzano-Regge theory (2.3) as the space of spin networks on that surface, and for the Turaev-Viro theory as the space of admissible q-spin networks on the surface (4.24). The latter space is called the reduced skein space of the surface. The advantage of working with skein space is that isotopy invariance is automatically encoded in the formalism. Indeed, the Kauffman skein relations underlie the Jones polynomial isotopy invariant (see [Kauffman 1991]). The other advantage is that the reduced skein space is isomorphic to the endomorphisms of the state space for the Witten theory based on SU (2), as expected from the fact that the Turaev-Viro theory is the square of that theory.
We also considered the definition of the inner product for the Regge-Ponzano theory. In any topological field theory, a manifold corresponds to a vector in the state space of its boundary surface. The inner product defined on the state space must be such that the inner product of the two vectors corresponding to two manifolds which meet in a common surface is given by the invariant of the manifold given by their union. This guarantees that the invariant of a closed manifold does not depend on how the manifold is cut into two pieces (see [Atiyah 1990]), and so this inner product carries a great deal of information about the elements of the theory. We saw that, for the Regge-Ponzano theory, we could recover this topological inner product from the inner product defined by taking different colourings of a particular spin network on the surface to be orthonormal.
Building on the work of Ooguri [1992] and Rovelli [1993], we considered the relation between the Regge-Ponzano theory and the loop representation for quantum general relativity. As emphasized by Ashtekar [1991], the problem of finding an inner product on the space of states is one of the key problems in non-perturbative canonical gravity. Rovelli and Smolin [1990] defined the state space for quantum general relativity as a quotient of the space of functionals on multiple loops. Such loop states were related by the SU (2) spinor relations, from which the binor relations differ in the sign of the terms. They showed that states given by functionals on link isotopy classes of simple, non-intersecting loops satisfied the constraint equations, and so could be identified as physical states. They defined an inner product by taking characteristic functionals on isotopy classes of simple loops to be orthonormal (see Smolin [1992]).
Here, we took the loop state space to be the space of complex linear combinations of isotopy classes of loops, and wrote an arbitrary loop state as a linear combination of spin network states. We defined the inner product by taking different colourings of a spin network to be orthonormal. The advantages of this approach are that isotopy invariance is automatically encoded by the binor relations which underlie spin networks, and that the inner product is consistent with the topological inner product, as explained above.
However, it is not clear whether suitable self-adjoint operators may be defined under this inner product, and so this interpretation must be regarded as provisional.
The above considerations, together with the work of Ooguri [1992], suggest that the Regge-Ponzano theory may be regarded as a discrete model for quantum gravity in three dimensions. Unfortunately, this theory is not mathematically well-defined, as the state sum (2.2) is infinite and it must be regularised. Ooguri followed the regularisation of Ponzano and Regge, but it is not clear how rigorous this is. We have ignored these problems here and treated the Regge-Ponzano theory as a formal topological field theory. It is actually a generalisation of such a theory in the sense that the state space V RP (F ) is infinite-dimensional. Rovelli [1993] suggested that quantum general relativity in four dimensions may be a generalised topological field theory of this kind.
However, at least in three dimensions, these problems with regularisation may be completely overcome by considering instead the well-defined Turaev-Viro theory. The state sum (2.1) is finite, as there are only a finite set of edge labels (4.21), and so of possible colourings, for the theory defined at q an r-th root of unity. As q → 1 and r → ∞, the Turaev-Viro theory reduces to the Ponzano-Regge theory, and so it may be regarded as a naturally regularised version of that theory. The Turaev-Viro state space V T V (F ) is finite-dimensional, and we have conjectured that it is isomorphic to the reduced skein space of the surface F . As r → ∞, the reduced skein space goes to the space of linear combinations of isotopy classes of links quotiented by the binor relations, which we identified as the Regge-Ponzano state space.
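The r → ∞ limit can be made concrete at the level of quantum integers. With q = exp(iπ/r), the quantum integer is [n] = sin(nπ/r)/sin(π/r) (the standard formula, assumed here), and [n] → n as r → ∞; this is the sense in which the quantum-group data of the Turaev-Viro theory degenerate to the classical SU(2) data of the Ponzano-Regge theory. A quick numerical sketch (ours):

```python
import math

def quantum_integer(n, r):
    """[n] = sin(n*pi/r) / sin(pi/r) for q = exp(i*pi/r)."""
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

# [3] approaches the classical value 3 as the level r grows.
for r in (10, 100, 10000):
    print(r, quantum_integer(3, r))
```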
This suggests that we may interpret the Turaev-Viro theory as a finite discrete model for quantum gravity. As with the Ponzano-Regge theory, we may define the inner product by taking different colourings of a q-spin network to be orthonormal. We would then need to check that this definition was equivalent to the topological inner product. If so, it would be interesting to consider whether the Turaev-Viro theory could be related to the loop representation.
A COVID-19 vaccination precipitating symptomatic calcific tendinitis: A case report
Introduction Shoulder pathology may be symptomatic or asymptomatic depending on the patient. We report the first case of a COVID-19 vaccination administration precipitating symptomatic calcific tendinitis from pre-existing, asymptomatic calcific tendinitis. Case presentation A 50-year-old Thai male began experiencing left shoulder pain about 3 hours following a COVID-19 vaccination. He waited at home for the pain to improve, and when it did not improve in about 3 days he decided to see a doctor at the orthopedics clinic. He was sent for ultrasonography of his shoulder, which revealed calcific tendinitis of the subscapularis tendon. Discussion A SIRVA is normally considered if post-vaccination shoulder pain has not improved within a few days following a vaccination in a patient without shoulder pain prior to the vaccination. In our patient, a COVID-19 vaccination precipitated asymptomatic calcific tendinitis to symptomatic calcific tendinitis. Conclusion Previously asymptomatic shoulder pathologies can be precipitated to symptomatic by a COVID-19 vaccination.
Introduction
A patient may experience side effects after a vaccine administration, either locally and/or systemically. Local symptoms are mainly things such as pain, redness, warmth, swelling, itching and/or bruising, while systemic symptoms involve things such as skin rash, gastrointestinal side effects (nausea, vomiting and/or diarrhea), headache, fever, malaise, chills, and joint or muscle pain [1,2]. If these clinical symptoms continue longer than a few days, a shoulder injury related to vaccine administration (SIRVA) should be considered [3][4][5][6][7]. A SIRVA is usually the result of an incorrect vaccine injection technique, involving one (or more) of three errors: using the wrong landmark, an incorrect needle direction, or an incorrect depth of needle penetration. Prior to this case, only a few SIRVA cases following a COVID-19 vaccination had been reported [8][9][10][11], and most of them were traced to an incorrect vaccine administration technique. In this study, we report a case of symptomatic calcific tendinitis precipitated by a COVID-19 vaccination, following the SCARE guideline [12].
Case presentation
A 50-year-old Thai male without underlying disease, abnormal family history or genetic information, or pre-existing shoulder pain received a 1st dose of the Oxford-AstraZeneca COVID-19 vaccine on 15 June 2021, and a 2nd dose of the same vaccine on 07 September 2021 in the southern part of Thailand. The 2nd dose was given by a practitioner nurse using a 1.5-inch, 25-gauge needle at an injection site based on the landmark of 3 finger breadths below the midlateral border of the acromial process, with the needle direction perpendicular to the skin at the injection site. Three hours after receiving the second dose, he began to feel moderate shoulder pain when moving the injection shoulder in any direction. The pain persisted, and at 3 days post-injection he finally decided to see a doctor. At the orthopedic clinic, a physical examination showed tenderness at the deltoid area and moderate pain in all directions of shoulder motion. Ultrasonography of the left shoulder showed swelling of the supraspinatus tendon (Fig. 1) and calcific tendinitis of the left subscapularis tendon (Fig. 2). He was treated with oral prednisolone (30 mg/day) for 10 days and his pain gradually improved over the next few weeks.
Discussion
A SIRVA is normally considered if post-vaccination shoulder pain has not improved within a few days following a vaccination in a patient without shoulder pain prior to the vaccination [13,14]. In our patient, a COVID-19 vaccination precipitated asymptomatic calcific tendinitis to symptomatic calcific tendinitis.
Prior to the mass vaccinations beginning in early 2021 to deal with the COVID-19 pandemic, SIRVAs were most common following an influenza vaccination [3][4][5][6][13][14][15]. Since the initiation of mass vaccination programs worldwide, some cases of SIRVA have been reported, all to date blamed on an incorrect landmark and/or needle direction of injection [9,11]. In Thailand the mass vaccination program began in April 2021, and since then we have had three cases of SIRVA in our institution, the largest tertiary care facility in southern Thailand. All of our SIRVAs were traced to incorrect vaccine administration techniques, two from an incorrect needle direction [9,10], and one following an incorrect location due to use of an incorrect vaccination landmark [8].
Calcific tendinitis can be diagnosed by patient history, physical examination, and/or imaging studies. Not all patients with calcific tendinitis have the clinical symptom of shoulder pain, and incidences of asymptomatic calcific tendinitis have been reported from 2.7% to 20% [16]. In our case, the injection technique was found to be correct, so other causes were considered, and following ultrasonography we found a linear calcification near the footprint of the subscapularis tendon, which immediately led to the probable diagnosis that the COVID-19 vaccination had precipitated the patient's formerly asymptomatic calcific tendinitis to symptomatic calcific tendinitis. The main symptoms of this patient were also compatible with the diagnosis of calcific tendinitis: a sudden pain and limited range of motion of the left shoulder after a COVID-19 vaccine injection [16]. This case is quite similar to an earlier case report of combined subacromial-subdeltoid bursitis and supraspinatus tear after a COVID-19 vaccination, which was determined to have precipitated an existing but asymptomatic rotator cuff tear to a symptomatic rotator cuff tear [17].
Conclusions
Previously asymptomatic shoulder pathologies can be precipitated to symptomatic by a COVID-19 vaccination.
Ethical approval
The present study was waived by the Prince of Songkla University Institutional Review Board, Faculty of Medicine, Songklanagarind Hospital, Prince of Songkla University (IRB number REC 64-595-11-1).
Sources of funding
No funding was involved regarding this case report.
Author contribution
Chaiwat Chuaychoosakoon -Preparation of case report, Literature review, Writing the paper.
Prapakorn Klabklay -Preparation of case report, Literature review, Writing the paper.
Pathawin Kanyakool -Preparation of case report, Literature review, Writing the paper. Pattira Boonsri -Preparation of case report, Literature review, Writing the paper.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Registration of research studies
This case report is not first in man.
Declaration of competing interest
No conflicts of interest.
Limiting Measure of Lee--Yang Zeros for the Cayley Tree
This paper is devoted to an in-depth study of the limiting measure of Lee-Yang zeros for the Ising Model on the Cayley Tree. We build on previous works of Müller-Hartmann-Zittartz (1974 and 1977), Barata-Marchetti (1997), and Barata-Goldbaum (2001), to determine the support of the limiting measure, prove that the limiting measure is not absolutely continuous with respect to Lebesgue measure, and determine the pointwise dimension of the measure at Lebesgue a.e. point on the unit circle and every temperature. The latter is related to the critical exponents for the phase transitions in the model as one crosses the unit circle at Lebesgue a.e. point, providing a global version of the "phase transition of continuous order" discovered by Müller-Hartmann-Zittartz. The key techniques are from dynamical systems because there is an explicit formula for the Lee-Yang zeros of the finite Cayley Tree of level n in terms of the n-th iterate of an expanding Blaschke Product. A subtlety arises because the conjugacies between Blaschke Products at different parameter values are not absolutely continuous.
Introduction
We study the limiting measure of Lee-Yang zeros for the infinite Cayley tree, a finite approximation of which is shown in Figure 1 below. Consideration of the Lee-Yang zeros for the Ising Model on the Cayley Tree dates back to works of Müller-Hartmann and Zittartz [20,19], Barata and Marchetti, Barata and Goldbaum [1], and others. The hierarchical structure of the Cayley Tree results in the following renormalization procedure for studying the Lee-Yang zeros, which played a key role in each of the aforementioned papers: Proposition 1.1. For any k ≥ 2, any t ∈ [0, 1) and any z ∈ T := {z ∈ C : |z| = 1} consider the following Blaschke Product: B_{z,t,k}(w) := z ((w + t)/(1 + wt))^k.
(1)
The Lee-Yang zeros for the n-th rooted Cayley Tree with branching number k ≥ 2 are the solutions z to B^n_{z,t,k}(z) = −1, (2) and the Lee-Yang zeros for the n-th full Cayley Tree with branching number k ≥ 2 are the solutions z to B_{z,t,k+1} ∘ B^{n−1}_{z,t,k}(z) = −1. (3)
Here, z := exp(−2h/T ) and t := exp(−2J/T ), where h is the externally applied magnetic field, T > 0 is the temperature, and J > 0 is the coupling constant between neighboring atoms. The superscript "n" denotes iteration of the function n times. When the exponent k is clear from the context we will drop it from the notation, writing B z,t,k ≡ B z,t . Remark that many classical treatments of the Ising Model on the Cayley Tree consider only the thermodynamical properties associated to vertices "deep" in the lattice (e.g. the root vertex); see [3,Ch. 4] and the references therein. The term "Bethe Lattice" is customarily used to describe such considerations. Instead, we treat all vertices equally, studying the "bulk" behavior of the lattice, and thus we follow the standard convention of referring to our work as being on the Cayley Tree.
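Before proceeding, one can check numerically the key property that makes (1) useful: for t ∈ [0, 1) and |z| = 1, the Möbius factor (w + t)/(1 + wt) preserves the unit circle, so B_{z,t,k} maps T to itself and the iterations in (2) and (3) stay on the circle. A short sketch (our illustration, with arbitrary sample parameters):

```python
import cmath

def blaschke(w, z, t, k):
    """B_{z,t,k}(w) = z * ((w + t) / (1 + w*t))**k, as in equation (1)."""
    return z * ((w + t) / (1 + w * t)) ** k

z = cmath.exp(0.7j)      # a point on the unit circle (magnetic field variable)
t, k = 0.3, 2            # temperature variable in [0, 1), branching number
w = cmath.exp(2.1j)
for _ in range(5):       # iterate: |B^n(w)| stays equal to 1
    w = blaschke(w, z, t, k)
    print(abs(w))        # each printed value is 1 up to rounding
```

The circle-preservation follows because, for real t and |w| = 1, |1 + wt| = |w̄ + t| = |w + t|.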
Before stating our results, we will give a brief background on Lee-Yang zeros, including a description of what many people believe should hold for the classical Z d lattice (where d ≥ 2), as well as a description of the previous results of Müller-Hartmann and Zittartz, Barata and Marchetti, and Barata and Goldbaum. The reader who already knows this background can skip ahead to Section 1.6. Proposition 1.1 allows us to use powerful techniques from dynamical systems to prove results about the Lee-Yang zeros for the Cayley Tree, whose analogs are completely unknown for classical lattices like Z d . Therefore, our work lies at the boundary between dynamical systems and statistical physics. For this reason, we have attempted to provide considerable background in both areas.
1.1. Lee-Yang Zeros. The Ising Model describes magnetic materials. The matter at a certain scale is described using a graph Γ = (V, E) with vertex set V and edge set E. Here, V represents atoms and E represents the magnetic bonds between them. Assign a spin to each vertex using a spin configuration σ : V → {±1}. The total energy of the configuration σ is given by
$H(\sigma) := -J \sum_{\{u,v\} \in E} \sigma(u)\sigma(v) - h \sum_{v \in V} \sigma(v),$
where J > 0 is the coupling constant that describes the interaction between neighboring spins, and h is the externally applied magnetic field. The Boltzmann-Gibbs Principle gives that the probability P(σ) of a configuration σ is proportional to W(σ) := exp(−H(σ)/T) for temperature T > 0. (We set the Boltzmann constant k_B = 1.) Explicitly, P(σ) = W(σ)/Z, where Z is the normalizing factor defined as
$Z \equiv Z(J, h, T) := \sum_{\sigma} W(\sigma),$
which is summed over all possible spin configurations σ. (Remark that we will always impose free boundary conditions on Γ.) This normalizing factor Z is known as the partition function. It is a fundamental quantity to study in statistical mechanics and most aggregate thermodynamic quantities of a physical system can be derived from it.
It is useful to make a change of variables to z = exp(−2h/T), which represents the magnetic field variable, and t = exp(−2J/T), which represents the temperature variable. In these new variables, Z(z, t) becomes a polynomial, if we multiply by $\sqrt{z}^{\,|V|} \sqrt{t}^{\,|E|}$ to clear the denominators.
For fixed t ∈ [0, 1], the behavior of Z(z, t) can be fully understood by studying its complex zeros in the variable z. In 1952, T. D. Lee and C. N. Yang [15] characterized these zeros, now known as Lee-Yang zeros, in their famous theorem.
Lee-Yang Theorem. For t ∈ [0, 1], the complex zeros in z of the partition function Z(z, t) for the Ising model on any graph lie on the unit circle T = {|z| = 1}.
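The theorem is easy to test numerically on any small graph. Below is a minimal sketch (our own code, not from the paper) that brute-forces Z(z, t) for the 4-cycle and checks that all four zeros have modulus one; the root finder is a plain Durand-Kerner iteration so that only the standard library is needed.

```python
import itertools

def cycle_partition_coeffs(t, n=4):
    # Brute-force Z(z) = sum_sigma z^(#down) t^(#unequal edges) for the n-cycle.
    # Any finite graph would do; the 4-cycle keeps the enumeration tiny.
    edges = [(i, (i + 1) % n) for i in range(n)]
    coeffs = [0.0] * (n + 1)
    for spins in itertools.product([1, -1], repeat=n):
        down = sum(1 for s in spins if s == -1)
        unequal = sum(1 for u, v in edges if spins[u] != spins[v])
        coeffs[down] += t ** unequal
    return coeffs

def poly_roots(coeffs, iters=200):
    # Durand-Kerner iteration: simultaneously refine the n candidate roots of
    # p(z) = sum_j coeffs[j] z^j (assumes coeffs[-1] != 0), stdlib only.
    n = len(coeffs) - 1
    monic = [c / coeffs[-1] for c in coeffs]
    roots = [(0.4 + 0.9j) ** i for i in range(1, n + 1)]
    for _ in range(iters):
        for i in range(n):
            p = sum(monic[j] * roots[i] ** j for j in range(n + 1))
            q = 1.0 + 0.0j
            for j in range(n):
                if j != i:
                    q *= roots[i] - roots[j]
            roots[i] = roots[i] - p / q
    return roots
```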
Because of the Lee-Yang Theorem, throughout the paper we will refer to z and φ := Arg(z) interchangeably.
1.2. Limiting Measure of Lee-Yang Zeros µ_t. One typically describes a magnetic material at different scales using a sequence of connected graphs Γ_n = (V_n, E_n), each thought of as a finer approximation of the material than the previous. Let us call such a sequence of graphs a "lattice". The standard example is the Z^d lattice where, for each n ≥ 0, one defines Γ_n to be the graph whose vertices consist of the integer points in [1, n]^d and whose edges connect vertices at distance one in R^d. The physical properties of the magnetic material are described by limits of suitably normalized thermodynamical quantities associated to each of the finite graphs Γ_n. Many of these can be described in terms of the limiting measure of Lee-Yang zeros associated to the lattice {Γ_n}, which we will now describe. For each n ≥ 0 let Z_n(z, t) denote the partition function associated to Γ_n and let z_1(t), . . . , z_{|V_n|}(t) denote the Lee-Yang zeros at temperature t ∈ [0, 1]. For classical lattices (Z^d, etc.), it is a consequence of the van-Hove Theorem [29] and the Lee-Yang Theorem that for each t ∈ [0, 1] the sequence of measures
$\mu_{t,n} := \frac{1}{|V_n|} \sum_{j=1}^{|V_n|} \delta_{z_j(t)}$
weakly converges to a limiting measure µ_t that is supported on the unit circle T. One has corresponding expressions for the limiting free energy and magnetization in terms of µ_t.

1.3. Conjectural Description of µ_t for the Z^d lattice (where d ≥ 2). A famous unsolved problem from statistical physics is to understand the limiting measures of Lee-Yang zeros µ_t for the Z^d lattice and how they depend on t. It is believed that for every t ∈ [0, 1) the measure µ_t is absolutely continuous with respect to Lebesgue measure dφ on the circle, and thus has density ρ_t(φ) := dµ_t/dφ. Let t_c > 0 denote the critical temperature of the Z^d Ising model. It is believed that:
(A) For t < t_c, ρ_t(φ) is a continuous function of φ and positive on all of T.
(B) For t_c ≤ t ≤ 1, there is a closed arc T \ (−κ(t), κ(t)), symmetric about z = −1, such that ρ_t(φ) is a continuous function of φ and positive on T \ [−κ(t), κ(t)] and zero otherwise. Moreover, κ : [t_c, 1] → [0, π] is a continuous function with κ(t_c) = 0, κ(t) > 0 for t > t_c, and κ(1) = π.
In fact, for sufficiently small t > 0, it has been proved by Biskup, Borgs, Chayes, Kleinwaks, and Kotecký [5] that the limiting measure of Lee-Yang zeros for the Z^d lattice is absolutely continuous and even has C² density ρ_t(φ). Meanwhile, at high temperatures, quantum field theory gives a prediction of the universal exponents of the densities ρ_t near the end-points of T \ [−κ(t), κ(t)]; see Fisher [8] and Cardy [6]. For example, for d = 2 the exponent is −1/6, while for d > 6 it is 1/2. A more detailed discussion of this conjectural behavior for the limiting measures of Lee-Yang zeros for the Z^d lattice, including a discussion of what has been proved, is presented in Section 1 of [22] and also in the first two sections of [21].
1.4. Description of µ t for the Diamond Hierarchical Lattice. Besides the one-dimensional lattice Z 1 there are very few lattices for which a global description of the limiting measure of Lee-Yang zeros has been rigorously proved. One exception is the Diamond Hierarchical Lattice (DHL), which was recently studied in [22]. Below the critical temperature of the DHL, the limiting measure of Lee-Yang zeros matches nicely with the conjectural picture for the Z d lattice in that it is absolutely continuous and even has C ∞ density. On the other hand, the sequence of graphs Γ n comprising the DHL has vertices whose valence tend to infinity, causing the limiting measure of Lee-Yang zeros µ t to have support equal to the entire circle T for every t ∈ [0, 1], which is not physical; see [24].
1.5. Work of Müller-Hartmann-Zittartz, Barata-Marchetti, and Barata-Goldbaum. Let $\Gamma^k_n$ denote the n-th-level rooted Cayley Tree with branching number k and $\tilde{\Gamma}^k_n$ the unrooted (full) Cayley Tree of level n with branching number k. An illustration for k = 2 is given in Figure 1. We will denote the corresponding lattices by $\Gamma^k := \{\Gamma^k_n\}_{n=0}^{\infty}$ and $\tilde{\Gamma}^k := \{\tilde{\Gamma}^k_n\}_{n=0}^{\infty}$. In Proposition 5.3 we will see that the limiting measure of Lee-Yang zeros is the same for $\Gamma^k$ and $\tilde{\Gamma}^k$ and after that point we will ignore the distinction between them.
The critical temperature for the Ising Model on the Cayley Tree with branching number k is
$t_c = \frac{k-1}{k+1}.$
In [20], Müller-Hartmann and Zittartz used the hierarchical structure of the Cayley Tree to write an explicit expression (see [20, Eq. 4]) for the limiting free energy. They then used this expression to see a curious type of phase transition: for fixed 0 < t < t_c and varying z ∈ (0, ∞) there exists a real-analytic F_reg(z, t) such that the singular part F_sing(z, t) := F(z, t) − F_reg(z, t) vanishes with exponent σ(t) := log k / log γ at z = 1, so that σ(t) is called the critical exponent of F(z, t). The phase transition is called "continuous order" because the exponent σ(t) increases continuously from 1 to ∞ as t increases from 0 to t_c. Meanwhile, for fixed t_c < t ≤ 1, F(z, t) varies analytically for all z ∈ (0, ∞).
In [19], Müller-Hartmann provided a different explanation for this phenomenon by computing the pointwise dimension d_{µ_t}(0) of the Lee-Yang measure µ_t at φ = Arg(z) = 0, finding a formula (7) for it. He then used the electrostatic representation (5) for F(z, t) and a clever argument to reprove (7) and actually obtain further details of the singularity.
A global study of the Lee-Yang zeros for the Cayley Tree is done by Barata and Marchetti [2]. While they allow the coupling constants to be chosen as 0 or J, at random, we will simply describe their results in the deterministic setting. They proved for the binary Cayley tree Γ²_n that
(1) For t < t_c = 1/3, the Lee-Yang zeros of Γ²_n become dense on T as n → ∞.
(2) For t_c ≤ t ≤ 1, the Lee-Yang zeros of Γ²_n accumulate on the arc T \ (−κ(t), κ(t)), where κ : [t_c, 1] → [0, π] is a continuous function such that κ(t_c) = 0, κ(t) > 0 for t > t_c, and κ(1) = π.
We refer the reader to Equation 8 for the explicit formula of κ (for arbitrary branching number) and to Figure 2 for a plot of the curve formed by {(φ, t) : φ = κ(t)}. We refer the reader to [2, Thm. 1.2] for the explicit formula of K. (It will not be used in the present paper.)

Figure 2. The κ curve for k = 2. The support of µ_t is shown for 0 ≤ t ≤ t_c (solid horizontal line) and for t_c < t < 1 (dashed horizontal line).
The work of Barata-Goldbaum [1] re-investigates the issues studied by Barata-Marchetti with the couplings between neighboring vertices chosen periodically and aperiodically.
1.6. Main Results. Each of the following theorems holds for either the rooted Cayley Tree or the full one, so we will not distinguish between them. Note also that for any lattice, the limiting measures of Lee-Yang zeros at t = 0 and t = 1 are Lebesgue measure on T and a Dirac mass at z = −1, respectively. Indeed, for any connected graph Γ = (V, E) the Lee-Yang zeros at t = 0 are the |V|-th roots of −1 and the Lee-Yang zeros at t = 1 are all equal to z = −1. For this reason, our results focus on 0 < t < 1.
In Theorem A we see for the Cayley Tree that Supp(µ_t) matches perfectly with the conjectural picture for the Z^d lattice:

Theorem A. Consider the Cayley Tree with branching number k ≥ 2 and let µ_t denote the limiting measure of Lee-Yang zeros as n → ∞ at temperature t ∈ [0, 1]. Then
(i) For each 0 ≤ t ≤ t_c, Supp(µ_t) = T.
(ii) For each t_c < t ≤ 1, Supp(µ_t) = {e^{iφ} : κ(t) ≤ |φ| ≤ π}.
(iii) For each t_c < t ≤ 1 and any n ≥ 0 there are no Lee-Yang zeros for Γ^k_n in (−κ(t), κ(t)).

For the remainder of the paper we will use the notation S_t := Supp(µ_t), which by Theorem A is S_t = T for 0 ≤ t ≤ t_c and S_t = {e^{iφ} : κ(t) ≤ |φ| ≤ π} for t_c < t ≤ 1. From the formula of κ given in (8), it is evident that the set of points (±κ(t), t) forms a continuous curve that wraps once around the cylinder T × [0, 1]. We refer to this as the κ curve. It is shown for branching number k = 2 in Figure 2.
A key aspect of the proof of Theorem A is the following:

Proposition 1.2. Suppose (φ, t) lies below the κ curve. Then
(i) B_{z,t,k} has a fixed point w_D in D and a symmetric fixed point $w_{\hat{\mathbb{C}} \setminus D} = 1/\overline{w_D} \in \hat{\mathbb{C}} \setminus D$, and
(ii) B_{z,t,k} is an expanding map of T, i.e. there exist c > 0 and λ > 1 such that for all w ∈ T and n > 0 we have $|(B^n_{z,t})'(w)| \geq c\lambda^n$.

In Theorem B we see that the limiting measure µ_t for the Cayley Tree is much wilder than what is conjectured (and rigorously proved at small temperature [5]) for the Z^d lattice:

Theorem B. Fix any branching number k ≥ 2 and any 0 < t < 1. For any compact interval X_t ⊆ interior(S_t), the restriction of µ_t to X_t has Hausdorff dimension less than one. In particular, for any φ ∈ S_t and any neighborhood of φ, µ_t is not absolutely continuous with respect to the Lebesgue measure.
(Recall that the Hausdorff dimension of a measure is defined to be the smallest Hausdorff dimension of a full measure set.) The pointwise dimension d_{µ_t}(φ) for µ_t at φ ∈ S_t is defined by
$d_{\mu_t}(\varphi) := \lim_{\delta \to 0} \frac{\log \mu_t([\varphi - \delta, \varphi + \delta])}{\log 2\delta},$
supposing that the limit exists.
Theorem C. Fix any branching number k ≥ 2. For any 0 < t < 1, there is a Lebesgue full measure set S⁺_t ⊂ S_t, such that for any φ ∈ S⁺_t, we have
$d_{\mu_t}(\varphi) = \frac{\log k}{\chi_{\varphi,t}}, \qquad \text{where} \qquad \chi_{\varphi,t} = \log\left(\frac{k(1-t^2)}{|1+t\,w_D|^2}\right),$
with w_D the unique fixed point of B_{z,t,k} in D.
Meanwhile, there is a dense set S⁻_t ⊂ S_t, such that for any φ ∈ S⁻_t we have d_{µ_t}(φ) < 1.

Figure 3. Plot of the "almost everywhere critical exponent" σ(φ, 2/3) for branching number k = 2.
(We will see in the proof of Theorem C that χ φ,t is the Lyapunov exponent for the unique absolutely continuous invariant measure ν z,t for B z,t,k .) Theorem C and an adaptation of Müller-Hartmann's analysis of the electrostatic representation (5) for F (z, t) allows us to prove the following global (Lebesgue almost everywhere) version of the "phase transition of continuous order" described by Müller-Hartmann and Zittartz.
For φ ∈ S + t we will see that the pointwise dimension d µt (φ) serves as the critical exponent for the free energy, thus it will be more natural to use the notation σ(φ, t) ≡ d µt (φ).
Theorem D. Fix any branching number k ≥ 2, any 0 < t < 1, and any φ in the Lebesgue full measure set S⁺_t ⊂ S_t. Then, the free energy F(z, t) has radial critical exponent σ(φ, t) at the point z = e^{iφ}. More precisely, there is a real analytic function g : (0, ∞) → R (depending on k, t, and φ) so that
$\lim_{r \to 1} \frac{\log\left(F(re^{i\varphi}, t) - g(r)\right)}{\log |r - 1|} = \sigma(\varphi, t).$
Meanwhile, for φ in the dense set S⁻_t there is a real analytic g : (0, ∞) → R so that
$\lim_{r \to 1} \frac{\log\left(F(re^{i\varphi}, t) - g(r)\right)}{\log |r - 1|} < 1.$
The "almost-everywhere" critical exponent σ(φ, t) from Theorem D is illustrated in Figure 3.
1.7. Main technical difficulty. The main technical difficulty in proving Theorems B and C is that Proposition 1.1 expresses the Lee-Yang zeros for Γ_n as solutions to B^n_{z,t}(z) = −1, i.e. the variable z occurs both as a parameter and as the dynamical variable. To address this issue we fix t ∈ (0, 1) and work with the skew product
$B(\varphi, \theta) := (\varphi, B_{\varphi,t}(\theta)),$
where we have set φ = arg(z) and θ = arg(w). (We will typically abuse notation and ignore subtleties about branches of arg, except when truly necessary.) Suppose we parameterize the diagonal ∆ = {(φ, θ) : θ = φ} by the variable φ. Then, Proposition 1.1 gives that the Lee-Yang zeros for Γ_n are

Figure 4. Illustration of how to determine Lee-Yang zeros for Γ^k_n using the skew product B(φ, θ) = (φ, B_{φ,t}(θ)). Here, we use branching number k = 2, level n = 4, and temperature t = 1/2. For these choices, the Lee-Yang zeros are the values of φ at which the black curves (depicting B^{−4}{θ = π}) intersect the diagonal ∆, in red. Also shown is a vertical fiber T_{π/2} := {π/2} × T, in blue.
the intersection points $B^{-n}(\{\theta = \pi\}) \cap \Delta$. This is illustrated in Figure 4.
For each φ_0 with (φ_0, t) below the κ curve, the fiber map B_{φ_0,t} is an expanding map of the circle, by Proposition 1.2. Therefore, if we assign Dirac mass to each of the preimages B^{−n}({θ = π}) ∩ T_{φ_0} and normalize, the result converges to the measure of maximal entropy (MME) η_{φ_0,t} of B_{φ_0,t}, as n → ∞. Because expanding maps of the circle are one of the simplest types of dynamical systems, a tremendous amount is known about η_{φ_0,t}. The issue in proving Theorems B and C is to relate these properties of η_{φ_0,t} to the properties of the Lee-Yang measure µ_t at points on the diagonal ∆ that are near to (φ_0, φ_0). This is done as follows: Let X_t ⊆ interior(S_t) be an interval containing φ_0. Then, Proposition 1.2 implies that the restriction B : X_t × T → X_t × T is partially hyperbolic, with the vertical direction expanding. As such, it has a unique central foliation F^c, that can be thought of as "horizontal". Using standard dynamical techniques, one can construct a holonomy invariant transverse measure η on F^c that describes the limit as n → ∞ of the (normalized) preimages B^{−n}({θ = π}). If we restrict η to T_{φ_0} we obtain the MME η_{φ_0,t} for B_{φ_0,t} and if we restrict η to the diagonal ∆ we obtain the Lee-Yang measure µ_t. Therefore, they are related by holonomy along F^c.
While the holonomies along F^c are not absolutely continuous, they are Hölder continuous, with Hölder exponent arbitrarily close to one, so long as the two transversals are chosen suitably close. This allows us to control the images under holonomy of sets whose Hausdorff dimension is less than one, which is the key idea in proving Theorems B and C.
This idea goes back to conversations the last author had with Victor Kleptsyn at the conference in honor of Dennis Sullivan's 70th birthday; Kleptsyn had used the same idea in work with Ilyashenko and Saltykov on intermingled basins of attraction [10]. Moreover, a key tool used in proving Theorem C is the "Special Ergodic Theorem" proved by Kleptsyn, Ryzhov, and Minkov [13], which allows one to conclude that the set of initial conditions whose ergodic averages deviate by more than ε > 0 from the space average has Hausdorff dimension less than one. It is a generalization of a preliminary version that appeared in [10].
1.8. Connection with dynamics of Blaschke Products and expanding maps of the circle. The proof of each of the theorems above relies upon techniques from real and complex dynamics to study the iterates of the Blaschke product B_{z,t}. The dynamics of Blaschke Products and, more generally, of C² expanding maps of the circle is a classical topic in the dynamical systems community [27,28,23] and their study remains an active area of research in dynamics; see [7,9,17,11] for a sample. Remark also that Blaschke Products arise in a far more subtle way than here, when studying the Lee-Yang zeros for the Diamond Hierarchical Lattice [22].
1.9. Plan for the paper. Because the paper is written for readers from both mathematical physics and dynamical systems, we have made an effort to provide ample details throughout. Section 2 is devoted to proving Proposition 1.2, which is the key statement needed to begin applying dynamical systems techniques to the problem. This is followed by a proof of Theorem A in Section 3. In Section 4 we summarize several powerful results from real dynamics that will be applied to the expanding Blaschke Products and explain their consequences for B_{z,t}. Section 5 is devoted to studying the skew-product B(φ, θ) = (φ, B_{φ,t}(θ)) as a partially hyperbolic mapping and relating the limiting measure of Lee-Yang zeros to the measure of maximal entropy for the Blaschke Products under holonomy along the central foliation. We prove Theorem B in Section 6 and then prove Theorem C in Section 7. Section 8 is devoted to Theorem D. Proofs of the formula for Lee-Yang zeros on the Cayley Tree in terms of iterates of B_{z,t} (Proposition 1.1) can be found in the literature; however, we provide a derivation in Appendix A, so that our paper can be read independently.
For t ∈ [0, 1), the map B_{z,t} is an example of a Blaschke Product, which is a map of the form
$B(w) = e^{i\phi} \prod_{i=1}^{d} \frac{w - a_i}{1 - \overline{a_i}\, w},$
where φ is real and the a_i's are in D. Blaschke Products satisfy:
(1) Each of D, T, and Ĉ \ D is totally invariant under B. That is, if S is any of the three sets above, then w ∈ S if and only if B(w) ∈ S.
(2) Blaschke Products are symmetric across T. That is,
$B(1/\bar{w}) = 1/\overline{B(w)}$
for all w ∈ Ĉ.
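Both properties are easy to confirm numerically for a generic Blaschke Product. The sketch below (the coefficients a_i are illustrative choices of ours, not from the paper) checks total invariance of D and T and the symmetry B(1/w̄) = 1/conj(B(w)).

```python
import cmath

def blaschke_product(w, a_list, phi=0.3):
    # General Blaschke Product: e^{i phi} * prod_i (w - a_i)/(1 - conj(a_i) w).
    # phi and a_list are hypothetical example values, not quantities from the text.
    out = cmath.exp(1j * phi)
    for a in a_list:
        out *= (w - a) / (1 - a.conjugate() * w)
    return out
```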
Lemma 2.1. If the map B_{z,t} has a fixed point w_D in D, then w_D must be attracting, and the orbit of each point in D must converge to w_D. In particular, w_D must be the only fixed point in D. The analogous result holds for a fixed point w_{Ĉ\D} in Ĉ \ D.
Proof. An easy adaptation of the classical Schwarz Lemma states that if f : D → D is holomorphic and is not an automorphism of D, then any fixed point of f in D is attracting, and the orbit of every point of D converges to it. In our setting, B_{z,t} is a degree k ≥ 2 rational map for which D is totally invariant, so its restriction to D cannot be an automorphism. The result for w_D follows. Meanwhile, the result for w_{Ĉ\D} follows by symmetry of the Blaschke Product across T.
Let P_{z,t}(w) := z(w + t)^k − w(1 + wt)^k denote the numerator of B_{z,t}(w) − w, so that the fixed points of B_{z,t} are the roots of P_{z,t}. Then a point w• is a fixed point of B_{z,t} of multiplicity greater than 1 if and only if P_{z,t}(w•) = 0 and P'_{z,t}(w•) = 0. Solving these two equations, we find that w• must satisfy
$t w_\bullet^2 - \big((k-1) - (k+1)t^2\big) w_\bullet + t = 0.$
Solving yields the two solutions
$w_\bullet = \frac{(k-1) - (k+1)t^2 \pm \sqrt{\big((k-1) - (k+1)t^2\big)^2 - 4t^2}}{2t}$
stated in Theorem A (see (8)). One can then use the fixed point condition $w_\bullet = z\left(\frac{w_\bullet + t}{1 + w_\bullet t}\right)^k$ to determine the parameters (φ, t) at which B_{z,t} has a fixed point of multiplicity greater than one.
Proof of Proposition 1.2, Part (i): By symmetry of the Blaschke Product B_{z,t} across T, if B_{z,t} has a fixed point in D then it also has one in Ĉ \ D and the two fixed points have the same argument. By Lemma 2.1, both such fixed points would be attracting and they would be the unique fixed points of B_{z,t} in D and Ĉ \ D, respectively.
Starting with initial parameter (φ, t) = (0, 0) yields fixed points at 0 and ∞. Let us assume there is some (φ 0 , t 0 ) below the κ curve such that all fixed points lie on the circle. Since B z,t is a rational function with complex coefficients, the fixed points vary continuously with its parameters. We can pick a continuous path from (0, 0) to (φ 0 , t 0 ) that lies below the κ curve. However, the only way all fixed points can be on the circle is if the fixed points off the circle collided on the circle, since the fixed points off the circle have the same argument. This implies the path intersects the κ curve, a contradiction.
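For k = 2 and z = 1 the fixed points of B_{z,t} can be computed in closed form, which makes the dichotomy in this argument easy to observe numerically: below t_c = (k − 1)/(k + 1) = 1/3 there is a fixed-point pair off the circle, while at φ = 0 above t_c all fixed points sit on T. The sketch below (our own helper, valid only under those conventions) checks both regimes.

```python
import cmath

def fixed_points_at_phi0(t):
    # Fixed points of B_{1,t,2}(w) = ((w + t)/(1 + w t))^2, i.e. roots of
    # t^2 w^3 + (2t - 1) w^2 + (1 - 2t) w - t^2 = 0.  Since w = 1 is always
    # fixed, factor it out and solve t^2 w^2 + (t^2 + 2t - 1) w + t^2 = 0.
    # (Closed form for branching number k = 2 and phi = 0 only.)
    b = t * t + 2 * t - 1
    disc = cmath.sqrt(b * b - 4 * t ** 4)
    return [1.0 + 0.0j, (-b + disc) / (2 * t * t), (-b - disc) / (2 * t * t)]
```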
Proposition 2.3. Suppose (φ, t) lies below the κ curve. Then the Julia set of B_{z,t} is T.

Proof. We proceed by proving that the Fatou set of B_{z,t} is Ĉ \ T. We will first show that for any open neighborhood U ⊂ Ĉ \ T, the family of iterates $B^1_{z,t}|_U, B^2_{z,t}|_U, \ldots$ is normal. That is, for any infinite subsequence of the family, there exists a further infinite sub-subsequence that converges to a holomorphic map on U.
Suppose U ⊂ D, and let w_D be the fixed point of B_{z,t} that is in D. By Lemma 2.1, it follows that B^n_{z,t}(w) → w_D for any w ∈ D. Thus, the restriction of B_{z,t} to any such U ⊂ D produces a normal family. Using the symmetry of Blaschke Products, the analogous result follows for U ⊂ Ĉ \ D.
Suppose there exists a point w ∈ T and an open neighborhood U around w on which the iterates of B_{z,t} form a normal family. As already proved, the iterates converge to w_D on U ∩ D and to w_{Ĉ\D} on U \ D̄. Thus, the family produced by U converges to a discontinuous function, which is therefore not holomorphic, a contradiction. Thus, no point in T is in the Fatou set, so T is the Julia set.

Criterion for Expansion. For a rational map f : Ĉ → Ĉ of degree d ≥ 2, the following two conditions are equivalent:
(i) f is expanding on its Julia set J: there exist c > 0 and λ > 1 such that $|(f^n)'(w)| \geq c\lambda^n$ for all w ∈ J and all n ≥ 1.
(ii) The forward orbit of each critical point of f converges towards some attracting periodic orbit.

The Julia set of B_{z,t} is T, by Proposition 2.3, so it suffices to check that (ii) holds. By Part (i) of Proposition 1.2, B_{z,t} has fixed points w_D ∈ D and w_{Ĉ\D} ∈ Ĉ \ D. The only critical points of B_{z,t} are at w = −t and w = −1/t, which are in D and Ĉ \ D, respectively, since t ∈ [0, 1). By Lemma 2.1, the orbits of these critical points converge to w_D and w_{Ĉ\D}, respectively.
Proof of Theorem A
Throughout the rest of the paper we will focus on the dynamics of B_{z,t} : T → T. Let us write the mapping in angular form in terms of θ = Arg(w) and φ = Arg(z). While it will be sufficient to consider φ ∈ [−π, π], it will be helpful to allow θ ∈ R, so we consider the angular form of B_{z,t} as a "lift" to R:
$\tilde{B}_{\varphi,t,k}(\theta) := \varphi + k\, A_t(\theta), \qquad A_t(\theta) := \int_0^\theta \frac{1 - t^2}{1 + 2t\cos s + t^2}\, ds,$
where A_t is the lift of θ ↦ arg((e^{iθ} + t)/(1 + te^{iθ})) normalized so that A_t(0) = 0. Note that for any φ, t, k and θ the lift satisfies
$\tilde{B}_{\varphi,t,k}(\theta + 2\pi) = \tilde{B}_{\varphi,t,k}(\theta) + 2\pi k,$
reflecting the fact that B_{z,t,k} : T → T is a degree k mapping of the circle.
As usual, we will drop the parameter k when it is clear from the context, writing B φ,t ≡ B φ,t,k .
Remark 3.1. We can restate Proposition 1.1 as follows: the Lee-Yang zeros for the n-th rooted Cayley Tree are the angles φ for which $\tilde{B}^n_{\varphi,t,k}(\varphi) \equiv \pi \pmod{2\pi}$, and similarly for the full Cayley Tree with the outermost map replaced by $\tilde{B}_{\varphi,t,k+1}$.

Remark 3.2. For every n > 0, the modulus of the derivative of $\tilde{B}^n_{\varphi,t}(\theta)$ with respect to θ coincides with that of $B^n_{z,t}(w)$ with respect to w. In particular, if (φ, t) is below the κ curve, there exist c > 0 and λ > 1 such that $(\tilde{B}^n_{\varphi,t})'(\theta) \geq c\lambda^n$ for all θ ∈ R and n > 0.
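Remark 3.2 is easy to check numerically: the closed-form angular derivative k(1 − t²)/(1 + 2t cos θ + t²) (the k-analogue of the derivative bound used below for branching number k + 1) agrees with a finite-difference estimate of the modulus of the complex derivative along T. A minimal sketch, with helper names of our own choosing:

```python
import cmath
import math

def blaschke(z, t, k, w):
    # B_{z,t,k}(w) = z * ((w + t)/(1 + w t))^k
    return z * ((w + t) / (1 + w * t)) ** k

def angular_derivative(t, k, theta):
    # Closed form for |B'| on T: k (1 - t^2) / (1 + 2 t cos(theta) + t^2).
    return k * (1 - t * t) / (1 + 2 * t * math.cos(theta) + t * t)

def finite_difference(z, t, k, theta, h=1e-6):
    # |B'(e^{i theta})| via a symmetric difference quotient along T; the
    # result is independent of the unimodular parameter z.
    w1, w2 = cmath.exp(1j * (theta - h)), cmath.exp(1j * (theta + h))
    return abs(blaschke(z, t, k, w2) - blaschke(z, t, k, w1)) / abs(w2 - w1)
```

Note that the minimum of the angular derivative over θ occurs at θ = 0, where it equals k(1 − t)/(1 + t); this equals exactly 1 at t = t_c = (k − 1)/(k + 1), which is where uniform expansion first fails.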
Lemma 3.3. For every n ≥ 1, $\tilde{B}^n_{\varphi,t}(\theta)$ is increasing in both φ and θ.

Proof. Note $\tilde{B}_{\varphi,t}(\theta)$ increases in φ trivially. Moreover,
$\tilde{B}'_{\varphi,t}(\theta) = \frac{k(1 - t^2)}{1 + 2t\cos\theta + t^2} > 0,$
so $\tilde{B}_{\varphi,t}(\theta)$ increases in θ as well. By induction, the assertion can be proved for n > 1.
3.1. Proof of Parts (i) and (ii). We will focus on the proof for the rooted Cayley Tree and then, at the end of the subsection, explain how to adapt it for the unrooted version.
Since (φ_1, t) is below the κ curve, Part (ii) of Proposition 1.2 (see also Remark 3.2) gives constants c > 0 and λ > 1 such that $(\tilde{B}^n_{\varphi_1,t})'(\theta) \geq c\lambda^n$ for all θ ∈ R and n > 0. In particular, there exists some N > 0 such that $\tilde{B}^N_{\varphi_1,t}(\varphi_2) - \tilde{B}^N_{\varphi_1,t}(\varphi_1) \geq 2\pi$. Then, for every n ≥ N,
$\tilde{B}^n_{\varphi_2,t}(\varphi_2) - \tilde{B}^n_{\varphi_1,t}(\varphi_1) \;\geq\; \tilde{B}^n_{\varphi_1,t}(\varphi_2) - \tilde{B}^n_{\varphi_1,t}(\varphi_1) \;\geq\; 2\pi k^{n-N},$
with the first two inequalities given by Lemma 3.3 and the last coming from the fact that B_{z,t} has degree k; see Equation (12). Therefore, µ_t((φ_1, φ_2)) > 0.
In the case of the unrooted Cayley Tree, the outermost map is $\tilde{B}_{\varphi,t,k+1}$, so the quantity to control is $\tilde{B}_{\varphi_2,t,k+1} \circ \tilde{B}^{n-1}_{\varphi_2,t,k}(\varphi_2) - \tilde{B}_{\varphi_1,t,k+1} \circ \tilde{B}^{n-1}_{\varphi_1,t,k}(\varphi_1)$. As for the rooted tree, we need to show that this numerator grows exponentially at rate k. Again, using Lemma 3.3, it suffices to prove it for the smaller quantity $\tilde{B}^{n-1}_{\varphi_1,t,k}(\varphi_2) - \tilde{B}^{n-1}_{\varphi_1,t,k}(\varphi_1)$. However, this follows from Equation (14) and the fact that there is a uniform constant A such that
$\frac{d\tilde{B}_{\varphi,t,k+1}}{d\theta} = \frac{(k+1)(1 - t^2)}{1 + 2t\cos\theta + t^2} > A > 0.$
Proof of Part (iii) of Theorem A.
For the rooted Cayley Tree, Part (iii) of Theorem A was proved by Barata-Marchetti [2]. We include their proof here for completeness and we also explain the adaptation needed for the full Cayley Tree.
Let us start with the rooted tree. Since the Lee-Yang zeros are symmetric under φ → −φ, it suffices to prove that for any t > t_c there are no Lee-Yang zeros for Γ_n at any angle φ_0 ∈ [0, κ(t)). For this, it suffices to show that for any such t, φ_0, and n we have that $\tilde{B}^n_{\varphi_0,t}(\varphi_0) < \pi$. The proof is illustrated by Figure 5. Here, the graph of $\tilde{B}_{\varphi_0,t}(\theta)$ is depicted by the blue curve and the diagonal is depicted by the red line. By the definition of κ, the parameter φ_0 = κ(t) corresponds to the case when the blue curve lies tangent to the red line at θ• < π, where θ• = Arg(w•) with w• given in (8). However, since 0 < φ_0 < κ(t) and $\tilde{B}_{\varphi_0,t}(\theta)$ is increasing and concave up for θ ∈ (0, π), there must exist some θ* with θ* < θ• < π such that the iterates (black staircase) of φ_0 under $\tilde{B}_{\varphi_0,t}$ converge to θ*, as shown in the figure.
Figure 5. Blue depicts the graph of $\tilde{B}_{\varphi_0,t}(\theta)$ and the diagonal is depicted in red.
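The staircase convergence is easy to reproduce numerically. The sketch below iterates w ↦ B_{z,t,2}(w) from the diagonal point w = z = e^{iφ₀} for a parameter in the gap region (t = 0.6 > t_c and φ₀ = 0.1, which a rough tangency estimate places inside [0, κ(t)); these sample values are ours): the orbit settles on a fixed point at angle θ* < π on the circle, so B^n(z) never reaches −1.

```python
import cmath

def blaschke(z, t, k, w):
    # B_{z,t,k}(w) = z * ((w + t)/(1 + w t))^k
    return z * ((w + t) / (1 + w * t)) ** k

def diagonal_orbit(phi, t, k=2, n=300):
    # Iterate w -> B_{z,t,k}(w) starting from the diagonal point w = z = e^{i phi},
    # mimicking the black staircase in Figure 5.
    z = cmath.exp(1j * phi)
    w = z
    for _ in range(n):
        w = blaschke(z, t, k, w)
    return w
```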
Basic Real Dynamics for B z,t
Throughout this section we recall some basic facts about expanding circle maps, i.e. maps f : S¹ → S¹ for which there are constants C > 0, λ > 1 so that for all x ∈ S¹ and for all integers n ≥ 1, we have
$|(f^n)'(x)| \geq C\lambda^n.$
We will call λ the expansion constant of f. Every expanding circle map is a covering map of degree d with |d| > 1. The term absolute continuity will refer to absolute continuity with respect to the Lebesgue measure on S¹. Most of the theorems stated in this section were originally proved in more general settings, but we state them in the more narrow context that will be used here.

Let M_f denote the set of f-invariant Borel probability measures. For a Hölder function φ : S¹ → R, the Variational Principle expresses the topological pressure as
$P(\varphi) = \sup_{\rho \in M_f} \left( h_\rho(f) + \int_{S^1} \varphi \, d\rho \right),$
where h_ρ(f) denotes the metric entropy of ρ under f. The measure achieving this supremum (if it exists) is called an equilibrium state, which a priori need not be unique. However, in the case when f is expanding and φ is Hölder, the equilibrium state is unique:

Uniqueness of Equilibrium States (Ruelle [30]). Let f be a C² expanding circle map and φ : S¹ → R be a Hölder function. Then there exists a unique equilibrium state for φ.
Moreover, we have the following inequality:
Ruelle's Inequality. [25] Let f be a C² expanding circle map. Suppose ρ ∈ M_f is ergodic. Then
$h_\rho(f) \leq \chi_\rho(f),$
where χ_ρ(f) is the Lyapunov exponent of (f, ρ), i.e.
$\chi_\rho(f) := \int_{S^1} \log |f'(x)| \, d\rho(x).$
As an immediate consequence, we have

Corollary 4.1. $\sup_{\rho \in M_f} \left( h_\rho(f) - \chi_\rho(f) \right) = 0$, and for ergodic ρ the equality h_ρ(f) = χ_ρ(f) holds if and only if ρ is the unique equilibrium state for the potential −log |f'|.

Next, recall that the Hausdorff dimension of a Borel probability measure ρ on a compact manifold M is defined as
$HD(\rho) := \inf_{Y : \rho(Y) = 1} HD(Y).$
Ledrappier-Young Formula. [32] Let f be a C² expanding circle map, and let ρ be an f-invariant ergodic probability measure. Then
$HD(\rho) = \frac{h_\rho(f)}{\chi_\rho(f)}.$
Set φ = 0 in Ruelle's Theorem. Then the unique measure η achieving the supremum sup_{ρ∈M_f} h_ρ(f) is called the measure of maximal entropy (MME). The measure η can be constructed using a pullback argument: For any y ∈ S¹,
$\frac{1}{d^n} \sum_{x \in f^{-n}(y)} \delta_x \longrightarrow \eta \quad \text{weakly, as } n \to \infty.$
The measure η satisfies f_*η = d · η. By the Variational Principle, we have h_η(f) = log d, the topological entropy of f. Meanwhile, f has an absolutely continuous invariant measure (ACIM) ν, which actually has C¹ density with respect to Lebesgue measure, see [26,14].

Proposition 4.1. Let f be a C² expanding circle map and let ρ ∈ M_f be ergodic. If ρ is not absolutely continuous, then HD(ρ) < 1.

Proof. The ACIM ν satisfies HD(ν) = 1. Therefore the Ledrappier-Young Formula gives that h_ν(f) = χ_ν(f). On the other hand, Ruelle's Inequality gives that for any ergodic ρ ∈ M_f,
$h_\rho(f) \leq \chi_\rho(f),$
where equality is attained if and only if ρ is the equilibrium state for φ = −log |f'(x)|, by Corollary 4.1. Therefore, the absolutely continuous ν is the unique equilibrium state for φ. Since ρ is not absolutely continuous, ρ ≠ ν, and hence ρ must satisfy h_ρ(f) < χ_ρ(f). Applying the Ledrappier-Young Formula again, we have HD(ρ) < 1.
We will need a special version of Birkhoff's Ergodic Theorem for the ACIM ν. For any continuous function φ ∈ C(S¹), let $\varphi_n := \frac{1}{n}\sum_{k=0}^{n-1} \varphi \circ f^k$ and $\overline{\varphi} := \int_{S^1} \varphi \, d\nu$. Since ν is ergodic, for ν-a.e. x (and hence Lebesgue-a.e. x), we have $\lim_{n\to\infty} \varphi_n(x) = \overline{\varphi}$. In the proof of Theorem D we will need to give up some control on the sequence {φ_n(x)} in order to have more control on the exceptional set.
Special Ergodic Theorem (Kleptsyn-Ryzhov-Minkov [13]). Let f be an expanding circle map and let φ ∈ C(S¹). For any ε > 0, the set
$K_\varepsilon := \left\{ x \in S^1 : \limsup_{n \to \infty} \left| \varphi_n(x) - \overline{\varphi} \right| \geq \varepsilon \right\}$
has Hausdorff dimension less than one.

Proof. We verify that (f, ν) satisfies the hypotheses from Theorem 1 of [13]. Since ν is absolutely continuous, it is the global SBR measure. Meanwhile, the work of Kifer [12] and Young [31] implies that ν satisfies a Large Deviations Principle (LDP). For example, while Theorem 3 from [31] is stated for the equilibrium state of an Axiom A attractor of a C² diffeomorphism, one can check that each line of the proof adapts directly to expanding maps of the circle.
Shub-Sullivan Theorem. [28] Two C r , r ≥ 2, expanding maps of the circle which are absolutely continuously conjugate are C r conjugate. If two Blaschke products are absolutely continuously conjugate on the unit circle T, then they are conjugate by a Möbius transformation of the Riemann Sphere.
By Shub's Theorem, any two C² expanding maps of the circle of the same degree are topologically conjugate. If they are also C² close, then we have more control over the Hölder exponent of the conjugacy.

Proposition 4.3. Let f be a C² expanding circle map. Then, for any ε > 0, there exists δ > 0 such that if g is another C² expanding circle map with ||f − g||_{C²} < δ, then f and g are Hölder conjugate by h with exponent 1 − ε and multiplicative constant independent of g.
The proof of Proposition 4.3 requires the following Denjoy-style distortion estimate; a proof can be found in [28].

Distortion Lemma. Let f be a C² expanding circle map. There exists K ≥ 1 such that for any interval I ⊂ S¹ and any n ≥ 0 for which $|\tilde{f}^n(I)| \leq 4\pi$ (with $\tilde{f}$ a lift of f), we have
$\frac{1}{K} \leq \frac{|(f^n)'(x)|}{|(f^n)'(y)|} \leq K \qquad \text{for all } x, y \in I.$

Given a covering map f : S¹ → S¹, let $\tilde{f}$ : R → R be any lift of f. Let p be a periodic point of f with period n. Define $\lambda_{avg}(p) := |(f^n)'(p)|^{1/n}$. Remark that for all periodic points p of an expanding circle map f with expansion constant λ, we have λ_{avg}(p) ≥ λ.
Proof of Proposition 4.3. For any α > 1, it is straightforward to see that there exists δ > 0 such that whenever ||f − g||_{C²} < δ, we have, for all periodic points p of f,
$\frac{1}{\alpha} \lambda_{avg}(p) \leq \lambda_{avg}(h(p)) \leq \alpha \lambda_{avg}(p),$
where h is the C⁰ conjugacy close to the identity given by Shub's Theorem. We will show that h is in fact Hölder continuous with exponent 1 − log α/log λ, where λ is the expansion constant of f.

Let I ⊂ S¹ be an interval. Then there exists an integer N ≥ 0 so that f^N is the first iterate so that $|\tilde{f}^N(I)| > 4\pi$, where $\tilde{f}$ is any choice of lift of f. Then, since h is a conjugacy between f and g, we have that g^N is also the first iterate so that $|\tilde{g}^N(h(I))| > 4\pi$, with $\tilde{h}$ and $\tilde{g}$ suitable lifts of h and g. By the Intermediate Value Theorem, $\tilde{f}^N$ has a fixed point p ∈ I, corresponding to a periodic point of f with period dividing N. Then h(p) ∈ h(I) is a periodic point of g of the same period. Then, using the Distortion Lemma, we have
$|\tilde{f}^N(I)| \asymp \lambda_{avg}(p)^N |I|, \qquad |\tilde{g}^N(h(I))| \asymp \lambda_{avg}(h(p))^N |h(I)|.$
(The asymptotic notation $\asymp$ means the ratio of the left and right sides is bounded from above and below by positive constants, independent of N (or, equivalently, independent of |I|, since N depends on |I|).) Since $4\pi \leq |\tilde{f}^N(I)| \leq 4\pi K$ and $4\pi \leq |\tilde{g}^N(h(I))| \leq 4\pi K$, where K is the constant from the Distortion Lemma, we have $|\tilde{f}^N(I)| \asymp |\tilde{g}^N(h(I))| \asymp 1$ and thus
$\lambda_{avg}(p)^N |I| \asymp \lambda_{avg}(h(p))^N |h(I)|.$
Then
$|h(I)| \asymp \left(\frac{\lambda_{avg}(p)}{\lambda_{avg}(h(p))}\right)^N |I| \leq \alpha^N |I| \lesssim |I|^{1 - \frac{\log\alpha}{\log\lambda}},$
where the last step uses that $|I| \asymp \lambda_{avg}(p)^{-N}$ and $\lambda_{avg}(p) \geq \lambda$. This shows that h is Hölder with exponent 1 − log α/log λ. Lastly, the reader can check that all the multiplicative constants that are implicit in the $\asymp$ notation (including those coming from the Distortion Lemma) can be made uniform in g.
Lemma 4.5. Fix k ≥ 2 and 0 < t < 1, and suppose (φ, t) lies below the κ curve. Then the measure of maximal entropy η_{z,t} for B_{z,t,k} is not absolutely continuous, and HD(η_{z,t}) < 1.

Proof. To simplify notation, we write B ≡ B_{k,z,t} and η ≡ η_{k,z,t}. Assume for contradiction that η is absolutely continuous, so η = m(θ)dθ, where m(θ) is L¹ and dθ denotes the Lebesgue measure on T. By absolute continuity, HD(η) = 1, so the Ledrappier-Young Formula gives that h_η(B) = χ_η(B). Then, by Ruelle's Inequality and Ruelle's Theorem, η must be the unique equilibrium state for the potential −log B'. It follows from [26,14] that the density function m(θ) of η is C¹. Define h : T → T by
$h(\theta) := 2\pi \int_0^\theta m(s)\, ds;$
we will show that m(θ) ≠ 0 for all θ. If m(θ_0) = 0 for some θ_0, then invariance of m implies that m vanishes on the backward orbit of θ_0, which is dense in T because B is expanding; since m is continuous, m(θ) ≡ 0, a contradiction. This shows that m(θ) ≠ 0 for all θ. Hence h is in fact a C² diffeomorphism. On the other hand, the MME η satisfies B_*η = k · η, so we have
$h(B(\theta)) = k\, h(\theta) \mod 2\pi.$
Therefore, B and θ ↦ k · θ are conjugate by the C² mapping h.
Then, by the Shub-Sullivan Theorem, h can be extended to a Möbius transformation of the Riemann Sphere Ĉ so that it conjugates B_{k,z,t} to z ↦ z^k on Ĉ. However, both of the critical points of z ↦ z^k are fixed points, while, for 0 < t < 1, the critical point −t of B_{k,z,t} is not a fixed point. Therefore η_{z,t} cannot be absolutely continuous, and then by Proposition 4.1, HD(η_{z,t}) < 1.
Remark 4.6. Some of the key statements above can also be obtained from the perspective of complex dynamics. In particular, Lemma 4.5 is a direct application of Zdunik's Theorem [33] and, in the case of Blaschke Products B_{z,t}, Proposition 4.3 about Hölder regularity of the conjugacy follows from the theory of holomorphic motions [4]. However, we found that presenting them using real dynamics was more concrete for the purposes here.

Proposition 4.7. For z ∈ interior(S_t), the Lyapunov exponent of B_{z,t,k} with respect to the ACIM ν_{z,t} is given by
$\chi_{z,t} = \log\left(\frac{k(1 - t^2)}{|1 + t\, w_D|^2}\right),$
where w_D is the unique attracting fixed point of B_{z,t} in D.
Proof. Let ψ be the disc automorphism taking w D to the origin. Then the map B̃ z,t := ψ ∘ B z,t ∘ ψ⁻¹ is a Blaschke product expanding on T with its attracting fixed point at the origin, hence its ACIM ν̃ z,t is the normalized Lebesgue measure dθ [23]. The only critical point of B z,t in the unit disc is w = t, which is mapped to (t − w D )/(1 − w D t) under ψ. By Jensen's Formula we obtain an expression for the Lyapunov exponent. Using that B z,t (w D ) = w D , that t is real, and that |z| = 1, this formula simplifies to the stated one.
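The mechanism behind Proposition 4.7 (Jensen's Formula applied to B′ on the circle, using that the ACIM of an expanding Blaschke product fixing the origin is normalized Lebesgue measure) can be checked numerically. The sketch below uses an assumed concrete degree-2 Blaschke product B(w) = w(w − a)/(1 − aw) with a = 0.5 (all parameter choices are illustrative, not taken from the paper): the Birkhoff time average of log |B′| along a circle orbit is compared with the Lebesgue space average, which Jensen's Formula evaluates in closed form as log(1 + √(1 − a²)).

```python
import cmath
import math
import random

a = 0.5  # illustrative parameter, 0 < a < 1

def B(w):
    # degree-2 Blaschke product with attracting fixed point at the origin
    return w * (w - a) / (1 - a * w)

def dB(w):
    # complex derivative of B
    return (2 * w - a - a * w * w) / (1 - a * w) ** 2

# time average of log|B'| along an orbit on the unit circle
random.seed(1)
w = cmath.exp(1j * random.uniform(0, 2 * math.pi))
n_iter = 200_000
acc = 0.0
for _ in range(n_iter):
    acc += math.log(abs(dB(w)))
    w = B(w)
    w /= abs(w)  # renormalize to counteract round-off drift
time_avg = acc / n_iter

# space average over normalized Lebesgue measure (the ACIM for this map)
m = 20_000
space_avg = sum(math.log(abs(dB(cmath.exp(1j * 2 * math.pi * (q + 0.5) / m))))
                for q in range(m)) / m

# closed form via Jensen's Formula: log|B'(0)| + log(1/|c|), with c the
# critical point of B inside the disc, simplifies to log(1 + sqrt(1 - a^2))
jensen = math.log(1 + math.sqrt(1 - a * a))
print(time_avg, space_avg, jensen)
```

The three printed numbers agree (the time average up to the usual ergodic-average fluctuation), illustrating that the Lyapunov exponent of the ACIM is computable from boundary data of B′ alone.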
To simplify notation, let T φ := {φ} × T, and write x := (φ, θ) when there is no ambiguity. 5.1. Partial Hyperbolicity. Let X t ⊆ interior(S t ) be any compact interval. Since B is a skew product over the identity, with the restriction to each vertical fiber being an expanding map of the circle, it is a straightforward application of techniques from smooth dynamics to prove that B is partially hyperbolic on X t × T. In this context, this means that: (1) There exist a vertical tangent conefield K v (x) and a horizontal central linefield L c (x) ⊂ T x (X t × T), depending continuously on x and invariant under DB. (2) Vertical tangent vectors v ∈ K v (x) get exponentially stretched under DB n . We remark that the central linefield on X t × T is given by an explicit formula. Proof. It follows from the Peano Existence Theorem that continuous linefields are integrable, so we can find a central curve through any point x ∈ X t × T by integrating the central linefield L c . Denote by F c the collection of all such curves. Let us show that F c is a central foliation.
This proves that the linefield L c is uniquely integrable, so the family F c of all integral curves forms the central foliation.
5.2. Transverse invariant measure. Let τ 1 and τ 2 be two local transversals to F c . We say that τ 1 and τ 2 correspond under F c -holonomy if for any x 1 ∈ τ 1 there exists a unique x 2 ∈ τ 2 so that x 1 and x 2 belong to the same leaf of F c , and vice versa. In this case, the holonomy transformation g τ1,τ2 : τ 1 → τ 2 is defined by the property that (φ, θ) ∈ τ 1 and g τ1,τ2 (φ, θ) ∈ τ 2 belong to the same leaf. In the case that one or both of the transversals is a vertical fiber T φ , we will abuse notation and write g φ1,φ2 := g Tφ1,Tφ2 .
A transverse invariant measure η for F c is a family of measures {η τ : τ transversal to F c } such that, for any τ 1 , τ 2 which correspond under F c -holonomy, we have η τ2 = (g τ1,τ2 ) * (η τ1 ). Such measures can be specified by a single measure η τ on a global transversal τ .
The two main results of this section are the following.

Proposition 5.2. There is a transverse invariant measure η for F c such that (i) on each vertical fiber T φ , η is the MME η φ,t for B φ,t , and (ii) on the diagonal ∆ t := {(δ, δ) : δ ∈ X t } ⊂ X t × T, η is the limiting measure of the Lee-Yang zeros µ t for the rooted Cayley Tree.

Proposition 5.3. The limiting measure of the Lee-Yang zeros for the full Cayley Tree coincides with that for the rooted Cayley Tree.

The proof of both propositions will follow from the next three lemmas. The Lee-Yang zeros for the n-th level rooted Cayley Tree are obtained by pulling back the horizontal line {θ = π} by B n and then intersecting with the diagonal ∆ t . Therefore the following lemma is of particular interest.
Lemma 5.4. The leaves of the central foliation F c have negative slope. In particular, the diagonal ∆ t is transversal to F c .
Proof. One can prove by induction that DB n v = [1, *] T for v = [1, 0] T , where * denotes a positive term. So we see that the vector v = [1, 0] T is vertically stretched exponentially by DB n , while its horizontal length is fixed. Then, for some integer n > 0, DB n v is in the vertical cone. This implies v ∉ L c . So the leaves of F c must have negative slopes. Hence ∆ t is transversal to F c .
Denote the cardinality of a set S by #S.
Lemma 5.5. Let I φ ⊂ T φ be an arbitrary interval and let τ be a local transversal to F c so that I φ and τ correspond under F c -holonomy. Then, for any smooth curve γ that can be represented as the graph of a smooth function γ(φ) : X t → T, we have lim n→∞ k −n #{B −n (γ) ∩ τ } = η φ (I φ ), where η φ is the MME for B φ,t .
Proof. Let B̃ : X t × R → X t × R be a lift of B and let γ 0 be a lift of γ to X t × R. Meanwhile, let γ̃ denote the union of all lifts of γ, i.e. γ̃ is the union of the vertical translations of γ 0 by every integer multiple of 2π.
Let π 2 : X t × T → T and π̃ 2 : X t × R → R denote the projections onto the second coordinate. Suppose ξ ⊂ X t × R is any smooth curve, and let ℓ(ξ) := |π̃ 2 (ξ)| denote the Euclidean length of its projection to the second coordinate R. For any integer n > 0, the quantity ℓ(B̃ n (τ ))/2π is the number of times the curve π 2 (B n (τ )) wraps around the circle T, which differs from the number of times B̃ n (τ ) intersects γ̃ by at most 1 + M/2π, where M = ℓ(γ 0 ).
On the other hand, since B maps each leaf of F c to another leaf of F c , there exists a constant A > 0 bounding, for all n > 0, the difference of the lengths of the two projected images. Thus k −n (ℓ(B̃ n (I φ )) − ℓ(B̃ n (τ ))) → 0 as n → ∞, and therefore the two intersection counts have the same limit. Finally, notice that k −n #{B −n (γ) ∩ I φ } converges, as n → ∞, to the measure of I φ under the measure of maximal entropy η φ for B φ,t .

Suppose τ 1 and τ 2 correspond under F c -holonomy. Then, for any φ ∈ X t we can find I φ ⊂ T φ so that I φ corresponds to both τ 1 and τ 2 under F c -holonomy. Therefore, the following is a direct consequence of Lemma 5.5.
Lemma 5.6. For any two transversals τ 1 and τ 2 that correspond under F c -holonomy, the limits furnished by Lemma 5.5 coincide.

Proof of Proposition 5.2. For each τ transversal to F c , Lemma 5.5 defines a premeasure on the algebra generated by intervals on τ . By the Carathéodory Extension Theorem, this premeasure can be extended to a Borel measure. Lemma 5.6 gives that the measure is invariant under the holonomy transformations. It follows from the construction above that η is the MME η φ,t on each vertical fiber T φ . By taking γ(φ) ≡ π and τ = ∆ t , it follows that on the diagonal ∆ t , η is the limiting measure of the Lee-Yang zeros for the rooted Cayley Tree.
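The preimage-counting characterization of the MME used in Lemma 5.5 is easy to see numerically in the model case B(θ) = kθ mod 2π, whose MME is normalized Lebesgue measure. The toy check below (k = 2, with an arbitrarily chosen point and interval; all numbers are illustrative assumptions) verifies that k^{−n} #{B^{−n}(point) ∩ I} converges to the normalized length of I.

```python
import math

k = 2          # degree of the expanding map B(theta) = k*theta mod 2*pi
n = 16         # pullback depth
y = 1.0        # the point whose preimages we count (plays the role of gamma)
a, b = 0.7, 2.6  # an arbitrary interval I = [a, b] inside [0, 2*pi)

# the k^n preimages of y under B^n are (y + 2*pi*j)/k^n, j = 0, ..., k^n - 1
count = sum(1 for j in range(k ** n)
            if a <= (y + 2 * math.pi * j) / k ** n <= b)

ratio = count / k ** n            # k^{-n} * #(B^{-n}(y) ∩ I)
target = (b - a) / (2 * math.pi)  # normalized Lebesgue measure of I (the MME here)
print(ratio, target)
```

The two numbers agree to within one preimage spacing, i.e. up to an error of order k^{−n}.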
Proof of Proposition 5.3: Fix 0 ≤ t < 1 in order to show that µ̃ t = µ t , where µ̃ t denotes the limiting measure of Lee-Yang zeros for the full Cayley Tree and temperature t, and µ t that for the rooted Cayley Tree. Let us now emphasize the branching number in the notation for the skew product by writing B k (φ, θ) = (φ, B φ,t,k (θ)). Then B k+1 −1 ({θ = π}) is a disjoint union of k + 1 smooth curves γ 1 , . . ., γ k+1 , each of which can be represented as a graph.

Figure 6. Configuration of transversals I δ and ∆ δ from Lemma 5.8.
As we mentioned in the introduction, from now on we will ignore the distinction between the full Cayley Tree and the rooted Cayley Tree, so the term "Cayley Tree" will refer to either of them.
5.3. Regularity of the holonomy along F c . We finish this section with the next two results, which will be used in the proofs of Theorems B and C.
Proof. Since B maps leaves of F c to leaves of F c , the holonomy transformation g φ 1 ,φ 2 provides a conjugacy between B φ 1 and B φ 2 . The result then follows from Proposition 4.3.
Proof of Theorem B
The following lemma is well-known and its proof is a direct application of the definitions.

Lemma 6.1. Let f : X → Y be an α-Hölder continuous function between metric spaces. Then HD(f (A)) ≤ (1/α) HD(A) for any A ⊂ X.

Proof of Theorem B. We must show that the restriction of µ t to any compact interval X t ⊂ interior(S t ) has Hausdorff dimension less than one.
Let Y t be some compact interval such that X t ⊂ interior(Y t ) and Y t ⊂ interior(S t ). We will show that for any φ ∈ Y t there exists δ > 0 so that the restriction of µ t to (φ, φ + δ) has Hausdorff dimension less than one. Such intervals form an open cover of X t , so the desired result will follow by compactness.
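Since the proof of Lemma 6.1 is omitted as a direct application of the definitions, here is the standard computation for completeness (a sketch, written for an α-Hölder map with constant C on A):

```latex
% Sketch of Lemma 6.1, directly from the definitions.
% Assume d_Y(f(x), f(y)) \le C\, d_X(x, y)^{\alpha} for x, y \in A.
Let $\{U_i\}$ be a cover of $A$ with $\operatorname{diam}(U_i) \le r$.
Then $\{f(U_i \cap A)\}$ covers $f(A)$ and
$\operatorname{diam} f(U_i \cap A) \le C (\operatorname{diam} U_i)^{\alpha}$.
Hence, for any $s > \mathrm{HD}(A)$,
\[
  \sum_i \big(\operatorname{diam} f(U_i \cap A)\big)^{s/\alpha}
  \;\le\; C^{s/\alpha} \sum_i (\operatorname{diam} U_i)^{s},
\]
so $\mathcal{H}^{s/\alpha}(f(A)) \le C^{s/\alpha}\, \mathcal{H}^{s}(A) = 0$,
giving $\mathrm{HD}(f(A)) \le s/\alpha$. Letting $s \downarrow \mathrm{HD}(A)$
yields $\mathrm{HD}(f(A)) \le \tfrac{1}{\alpha}\, \mathrm{HD}(A)$.
```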
Proof of Theorem C
Recall that for each φ ∈ interior(S t ), ν φ,t denotes the ACIM for the expanding circle map B φ,t . Let χ φ,t ≡ χ ν φ,t (B φ,t ) denote the Lyapunov exponent of ν φ,t . The following proposition is a direct application of the Special Ergodic Theorem to the observable log |B′ φ,t |.
The proof of Theorem C will take place in two steps. Step 1: Use Proposition 7.1 and properties of the F c -holonomy to find Lebesgue full-measure subsets of the diagonal on which we have good control of the Lyapunov exponents. Step 2: Use Proposition 7.2 to prove Theorem C, by a technique similar to the proof of the Ledrappier-Young formula, and then take a countable intersection of Lebesgue full-measure sets.
Let n be the first iterate so that 2π ≤ |B̃ n φ,t (Ĩ δ )| ≤ 2πk, where B̃ φ,t : R → R is a lift of B φ,t and Ĩ δ is the lift of I δ . By the Intermediate Value Theorem, there exists ξ δ ∈ Ĩ δ with (15) 2π ≤ |(B̃ n φ,t )′(ξ δ )| · |Ĩ δ | ≤ 2πk. On the other hand, by Distortion Control (Lemma 4.4), there is a constant M bounding the distortion of B̃ n φ,t on Ĩ δ for all I δ . Taking logarithms in the above inequalities and combining with (15) yields (16). Next, since the MME satisfies (B φ,t ) * η φ,t = k · η φ,t , for all I δ we have log η φ,t (I δ ) = log C δ − n log k, where C δ = |B̃ n φ,t (Ĩ δ )|. Under the transverse invariant measure η (as in Proposition 5.2), the corresponding identity holds. Hence, combining with (16), we obtain (17). Dividing both numerator and denominator by −n, and noticing that n → ∞ as δ → 0, we obtain (18) by taking the limit inferior and the limit superior respectively in the first and second inequalities above. Since the Hölder exponent of g Ĩ δ ,∆̃ δ converges to 1 when δ → 0, in the limit δ → 0 we can replace log |Ĩ δ | in (18) by log |∆̃ δ |. Meanwhile, by choosing the scale sufficiently small, we obtain the upper bound (20) from Proposition 4.1 and the Ledrappier-Young Formula. Since η φ,t is supported on the whole circle, the set of points θ ∈ T φ satisfying Inequality (20) is dense. Hence, using that the Lyapunov exponent (with respect to the MME), χ η φ,t (B φ,t ), is continuous in φ [16], and that the holonomy transformations are homeomorphisms onto their images, we conclude that there is a dense subset of the diagonal with the desired property. The result then follows using Inequalities (17) and taking δ → 0.
Proof of Theorem D
This section is an adaptation of the clever techniques in Müller-Hartmann's paper [19] to prove Theorem D. Because of the "electrostatic representation" (5) for the free energy, we will present the result as a basic fact about the logarithmic potential of a measure. While the Lee-Yang measure µ t is supported on the unit circle, we find it convenient to perform a suitable Möbius transformation to move the support of the measure from T to R, studying the logarithmic potential f µ (x) := ∫ log |x − s| dµ(s), where µ is a probability measure on R.
Proposition 8.1 (Müller-Hartmann [19]). Let µ be a probability measure on R whose mass near the origin scales with exponent σ. Then there exists a real-analytic function f reg (y) such that, for every ε > 0, |f µ (iy) − f reg (y)| is bounded between positive multiples of |y|^{σ+ε} and |y|^{σ−ε} for all y ≠ 0 sufficiently close to 0. In other words, f µ has critical exponent σ when crossing R vertically.
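This scaling can be seen numerically. The sketch below is illustrative only (the choice of measure, the exponent σ = 1/2, and all numerical parameters are assumptions, not taken from the paper): it uses dµ = (σ/2)|s|^{σ−1} ds on [−1, 1], for which µ([−s, s]) = s^σ, and estimates the exponent of |f µ (iy) − f µ (0)| from a log-log slope, using that the regular part contributes only at higher order here.

```python
import math

sigma = 0.5    # assumed scaling exponent of mu near the origin
n = 400_000    # midpoint-rule resolution

def delta_f(y):
    # f_mu(iy) - f_mu(0) = \int_0^1 (1/2) log(1 + y^2 / u^(2/sigma)) du,
    # after the substitution u = s^sigma in dmu = (sigma/2)|s|^(sigma-1) ds
    h = 1.0 / n
    return sum(0.5 * math.log(1.0 + y * y / ((j + 0.5) * h) ** (2.0 / sigma))
               for j in range(n)) * h

y1, y2 = 1e-2, 1e-3
slope = ((math.log(delta_f(y1)) - math.log(delta_f(y2)))
         / (math.log(y1) - math.log(y2)))
print(slope)   # close to sigma: the potential has critical exponent sigma
```

The fitted slope recovers σ, matching the statement that f µ has critical exponent σ when crossing R vertically.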
We will need the following integration by parts formula, and include a proof here for lack of a convenient reference. The second equality uses Fubini's Theorem, which is allowed since f ∈ L 1 (Leb([a, b])) implies that f (t)1 [a,t] (s) ∈ L 1 (Leb([a, b]) × µ), and the third equality uses that 1 [a,t] (s) = 1 iff a ≤ s ≤ t ≤ b iff 1 [s,b] (t) = 1.
Proof of Proposition 8.1: It suffices to show that there exists a real-analytic f reg (y) such that for any ε > 0 there are constants C 1 , C 2 > 0 so that for all y ≠ 0 sufficiently close to 0 the logarithmic potential f µ satisfies C 1 |y|^{σ+ε} ≤ |f µ (iy) − f reg (y)| ≤ C 2 |y|^{σ−ε}.
Let us start with the rooted tree Γ n . For each n, let r denote the root vertex of Γ n and consider the conditional partition functions (25) Z + n ≡ Z + n (z, t) := Σ {σ : σ(r)=+1} W n (σ) and Z − n ≡ Z − n (z, t) := Σ {σ : σ(r)=−1} W n (σ), where W n (σ) = e^{−H n (σ)/T} and H n (σ) is the Hamiltonian given in Equation (4). By definition, the full partition function is Z n = Z + n + Z − n . We will first produce a recursion on Z + n and Z − n , from which the statement of Proposition 1.1 will quickly follow. As the initial condition of the recursion, note that Γ 0 consists of just the root vertex r, so that Z + 0 = z^{−1/2} and Z − 0 = z^{1/2}. Now consider an arbitrary n ≥ 0. The tree Γ n+1 is formed by taking two copies of Γ n and attaching each of their root vertices by an edge to the root vertex of Γ n+1 . Let us call the two sub-trees the "left tree" Γ L n and the "right tree" Γ R n , and denote their root vertices by r L and r R , respectively.
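The construction above yields a recursion for Z ± n by summing over the spins of the new root's two neighbours. Since Equation (4) is not reproduced here, the sketch below assumes the standard Ising Hamiltonian H n (σ) = −J Σ_edges σ_i σ_j − h Σ_i σ_i with z = e^{−2h/T} (an assumption, chosen to be consistent with the stated initial condition Z + 0 = z^{−1/2}, Z − 0 = z^{1/2}); the resulting recursion Z^s_{n+1} = e^{hs/T} (Σ_{s′} e^{J s s′/T} Z^{s′}_n)² is then checked against brute-force spin enumeration on Γ 2 .

```python
import math
from itertools import product

# illustrative parameters (not taken from the paper)
J, h, T = 0.7, 0.3, 1.0

def recursion(n):
    # conditional partition functions Z+_n, Z-_n on the rooted binary tree
    zp, zm = math.exp(h / T), math.exp(-h / T)  # Gamma_0: single root vertex
    for _ in range(n):
        # new root of spin s attaches by one edge to each of the two sub-roots
        zp, zm = (
            math.exp(h / T) * (math.exp(J / T) * zp + math.exp(-J / T) * zm) ** 2,
            math.exp(-h / T) * (math.exp(-J / T) * zp + math.exp(J / T) * zm) ** 2,
        )
    return zp, zm

def brute_force_gamma2():
    # Gamma_2: vertices 0..6, root 0; edges of the rooted binary tree of depth 2
    edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
    zp = zm = 0.0
    for spins in product((1, -1), repeat=7):
        energy = -J * sum(spins[i] * spins[j] for i, j in edges) - h * sum(spins)
        w = math.exp(-energy / T)
        if spins[0] == 1:
            zp += w
        else:
            zm += w
    return zp, zm

print(recursion(2), brute_force_gamma2())
```

The recursion and the 2⁷-term brute-force sum agree, confirming that attaching two copies of Γ n to a fresh root reproduces the full conditional partition functions.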
A priori data-driven robustness guarantees on strategic deviations from generalised Nash equilibria
In this paper we focus on noncooperative games with uncertain constraints coupling the agents' decisions. We consider a setting where bounded deviations of agents' decisions from the equilibrium are possible, and uncertain constraints are inferred from data. Building upon recent advances in the so called scenario approach, we propose a randomised algorithm that returns a nominal equilibrium such that a pre-specified bound on the probability of violation for yet unseen constraints is satisfied for an entire region of admissible deviations surrounding it, thus supporting neighbourhoods of equilibria with probabilistic feasibility certificates. For the case in which the game admits a potential function, whose minimum coincides with the social welfare optimum of the population, the proposed algorithmic scheme opens the road to achieve a trade-off between the guaranteed feasibility levels of the region surrounding the nominal equilibrium, and its system-level efficiency. Detailed numerical simulations corroborate our theoretical results.
Introduction
The study of noncooperative games plays a significant role in a panoply of applications ranging from smart grids [44] to communication [46] and social networks [2]. In these setups, agents can be modelled as self-interested entities that interact with each other and make decisions based on possibly misaligned individual criteria, while being subject to constraints (local or global) that restrict the domain of their choices. Even though a variety of systems can be analysed by means of deterministic game-theoretic tools [24,38,46], in many applications the decision making procedure is affected by uncertainty. A number of results in the literature have explicitly addressed uncertainty in a noncooperative setting. Specifically, [7] follows a randomized approach for the special case of stochastic zero-sum games. Most results rely on specific assumptions on the probability distribution [15,47] and/or the geometry of the uncertainty set [3,27,36].
To circumvent these limitations, recent developments adopt a data-driven perspective, focusing on the connection of game theory with the so called scenario approach [9]. This is based on the idea that an optimisation problem with constraints parametrised by an uncertain parameter (with fixed but possibly unknown support set and probability distribution) can be approximated by drawing samples of that parameter and solving the problem subject to the constraints generated by those samples only; this approximation is known as the scenario program. Standard results in the scenario approach [8,11,13] provide certificates on the probability that a new yet unseen constraint will violate the randomised solution obtained by the scenario program.
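As a minimal illustration of this mechanism (a standard toy convex program, not the game setting of this paper; the uniform distribution and all numbers are illustrative assumptions), the sketch below solves a one-dimensional scenario program and then estimates its probability of violation on fresh, unseen samples.

```python
import random

random.seed(0)
K = 199                                       # number of drawn scenarios
samples = [random.random() for _ in range(K)]  # delta_k ~ Uniform(0, 1), i.i.d.

# scenario program: minimize x subject to x >= delta_k for every drawn sample
x_star = max(samples)

# empirical probability of violation V(x_star) = P(delta > x_star),
# estimated on fresh samples never used in the optimisation
fresh = [random.random() for _ in range(100_000)]
violation = sum(d > x_star for d in fresh) / len(fresh)
print(x_star, violation)
```

For this toy problem the expected violation is known in closed form, E[V(x*)] = 1/(K + 1) = 0.005, which is the kind of certificate the scenario approach generalises to arbitrary convex programs.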
While the aforementioned results apply to uncertain convex optimisation problems, the works [12] and [14] paved the way towards the provision of data-driven robustness guarantees for solutions of more general nonconvex problems. In [22,23,37], these theoretical advancements were leveraged for the first time in a game-theoretic context, for the formulation of distribution-free probabilistic feasibility guarantees for randomised Nash equilibria. These works provide guarantees for one specific equilibrium point (often assumed to be unique); this was extended in [39,40] by providing a posteriori feasibility guarantees for the entire domain. Besides the game-theoretic context, alternative methodologies for set-oriented probabilistic feasibility guarantees have been proposed in the seminal works [5,16], which a priori characterise probabilistic feasibility regions constructed out of sampled constraints using statistical learning theoretic results. More recently, the so called probabilistic scaling [4,33] has been proposed to obtain a posteriori guarantees on the probability that a polytope generated out of samples is a subset of some chance-constrained feasibility region. Following an approach similar to [39], the works [17,18] deliver tighter guarantees by focusing on variational-inequality (VI) solution sets.
The results above follow a standard approach in the game-theoretic literature, where a strict behavioural assumption, the so called rationality, is imposed on the players' decision making. Namely, the players are viewed as rational agents wishing to maximize their profit (expressed by some given cost function). However, studies have shown that this is unrealistic in practice [10,29,41,48] and that agents usually exhibit a boundedly rational behaviour [42], i.e., their decisions can deviate from rationality due to individual biases, behavioural inertia, restricted computational power/time, etc. The consequences of this become relevant in engineering applications, as the human role in technical systems evolves beyond mere users and consumers to active agents, operators, decision-makers and enablers of efficient, resilient and sustainable infrastructures [32].
To bridge this gap between real-world applications and the cognate literature, here we study games with uncertain constraints, where deviations from a nominal equilibrium are explicitly considered. We follow a randomised approach to approximate the coupling constraints by means of data. In this more general setting, where deviations are considered, providing guarantees for a single solution is devoid of any meaning: indeed, repetition of the game might lead to a different solution in a neighbourhood around the nominal equilibrium, irrespective of the employed dataset. Technically speaking, this renders the identification of the data samples that support the solution (cf. sample compression [34]) a challenging task. Focusing on the class of generalised Nash/Wardrop equilibrium seeking problems (GNEs/GWEs) [19], we contribute to the provision of data-driven robustness guarantees for the collection of possible deviations from the equilibrium as follows: (1) Adopting a scenario-theoretic paradigm, we establish a methodology for the provision of a posteriori probabilistic feasibility guarantees for a region around the randomised equilibrium of the game under study. This result (Theorem 1) complements [39], [40], [17], [18], which instead focus on the entire feasibility region. Focusing on a circumscribed region around a GNE/GWE allows offering tighter probabilistic bounds, while the results of [39], [40], [17], [18] can be obtained as a limiting case of Theorem 1. (2) We design a data-driven algorithm that returns a GNE/GWE and show that all points in a predefined admissible region surrounding it enjoy a priori probabilistic feasibility guarantees. This result (Theorems 2 and 3), unlike Theorem 1, offers an a priori statement valid for a region that is tunable by the user, modelling possible deviations from a nominal equilibrium that a designer wishes to take into account when incentivising a certain operation profile.
A distinctive feature of this result is that it provides a priori guarantees for a set rather than single points [37], [22], [23]. These guarantees depend on a prespecified quantity, which in turn can affect the location of the nominal equilibrium and the size of the region for which these probabilistic guarantees hold. As such, this region is tunable, unlike [25] where a priori guarantees for a set of solutions are provided, but this set is not controlled by the user and could be arbitrarily narrow. Moreover, the results of [25] do not focus on games and follow a fundamentally different approach. Furthermore, when the game under study admits a potential function (whose minimum coincides with some social welfare optimum), our methodology provides a new perspective for trading off the probabilistic feasibility of the region surrounding the nominal equilibrium and its system-level efficiency.
(3) We propose an equilibrium seeking algorithm as the machinery to obtain a region surrounding a GNE/GWE over which the aforementioned feasibility guarantees hold. The algorithm relies on a primal-dual scheme and is inspired by seminal developments in [20]. However, the mapping that characterizes the algorithm updates differs from those typically encountered in the literature (e.g., see [20, Ch. 12]). This requires showing that the ad-hoc mapping enjoys certain continuity and cocoercivity properties, thus extending the proof-line of [20] (see Lemmas 2 & 3, and the proof of Theorem 2), a task which is interesting per se.
Our contributions compared to the cognate literature are summarized in Table 1. The rest of the paper is organized as follows. In Section 2 we provide fundamentals of game theory and the scenario approach. In Section 3.1 we show how the feasibility guarantees for a region around the game solution can be a posteriori quantified. In Section 3.2 we propose a data-driven algorithm and prove its convergence to an equilibrium such that the considered neighbourhood of strategic deviations can satisfy prespecified probabilistic feasibility requirements. An illustrative example in Section 4 corroborates our theoretical analysis. Section 5 concludes the paper and presents future research directions. To streamline the presentation of our results, some proofs are deferred to the Appendix.
Preliminaries
Notation: All vectors are column vectors unless otherwise indicated. R n + is the nonnegative orthant in R n . For an n × n matrix A, we write A ≻ 0 (A ⪰ 0) when x ⊺ Ax > 0 for any x ≠ 0 (respectively, x ⊺ Ax ≥ 0 for any x ∈ R n ). We denote by 0 q×r the q × r null matrix, by I r the r × r identity matrix, and by 1 r the vector of r ones; dimensions can be omitted when clear from the context. e q is the unit vector whose q-th element is 1 and all other elements are 0, ∥ • ∥ p is the p-norm operator, and (•) r denotes the r-th component of its vector argument. B p (x, ρ) = {y ∈ R d : ∥y − x∥ p < ρ} is the open p-normed ball centred at x with radius ρ; when p is omitted, any choice of norm is valid. For a set S, |S| denotes its cardinality, while 2 S denotes its power set, i.e., the collection of all subsets of S. Finally, given D ≻ 0, proj K,D [x] := arg min y∈K (y − x) ⊺ D(y − x) is the skewed projection of x onto the set K.
Games with uncertain constraints
We consider a population of agents with index set N = {1, . . ., N }. The decision vector x i of each agent i ∈ N takes values in a set X i ⊆ R n , and x = (x 1 , . . ., x N ) ∈ R nN is the global decision vector that is formed by concatenating the decisions of the entire population. The vector x −i ∈ R n(N −1) comprises all agents' decisions except for those of agent i. In our setup, the cost incurred by agent i ∈ N is expressed by a real-valued function J i (x i , x −i ) that depends on local decisions as well as on the decisions of other agents j ∈ N \ {i}. In the following, with a slight abuse of notation, we can exchange x for (x i , x −i ) to single out agent i's decision from the global decision vector. Furthermore, we consider uncertain constraints coupling the agents' decisions. These can be expressed in the form g(x, δ) ≤ 0, where g : R nN × ∆ → R depends on some uncertain parameter δ taking values in a support set ∆ according to a probability measure P.
Feasible collective decisions under this setup can be found by letting every agent i ∈ N solve the following optimization program (2), in which the decisions x −i of all other agents are given and the constraint set is the projection of the coupling constraint on X i for fixed x −i and uncertain realization δ ∈ ∆. The collection of coupled optimization programs in (2) for all i ∈ N constitutes an uncertain noncooperative game; we denote it as G.
Note that (2) follows a worst-case paradigm, taking into account all possible coupling constraints that can be realised by variations of the uncertain parameter δ ∈ ∆. This can render the solutions of G rather conservative. Furthermore, it is in general not possible to compute a solution for G without an accurate knowledge of, and/or additional assumptions on, the support set ∆ and the probability distribution P. To circumvent these limitations, we follow a data-driven paradigm and approximate G by means of a finite number of samples drawn from ∆, namely the K-multisample δ K = (δ 1 , . . ., δ K ) ∈ ∆ K . In the sequel, we hold on to the standing assumption that these samples are independent and identically distributed (i.i.d.). Apart from this, no other knowledge of the support set ∆ and the probability distribution P of the uncertain parameter is required. Then, for a given multi-sample δ K , (2) can be rewritten as (3). Instead of considering all possible uncertainty realizations δ ∈ ∆ as in (2), we let the data encoded in δ K lead agents to their decision by solving (3). We refer to the collection of coupled optimization programs in (3) as the scenario game G K (the subscript K implies dependence on the drawn multi-sample δ K ). Under standard assumptions, a solution to the scenario game G K exists and the problem is, in contrast to G, tractable using state-of-the-art equilibrium seeking algorithms.
Variational inequalities and game equilibria
Notably, under certain assumptions detailed next, solutions to the game G K can be retrieved as solutions to a variational inequality (VI) for specific choices of the mapping F : X → R nN [19, Thm 3.9]: find x * ∈ Π K such that F (x * ) ⊺ (x − x * ) ≥ 0 for all x ∈ Π K , where Π K := X ∩ C δ 1 ∩ · · · ∩ C δ K denotes the problem domain. A classic game solution concept, which encounters wide application in the literature, is the Nash equilibrium (NE) [35]. At a NE, no agent can decrease their cost by unilaterally changing their decision. Formally, this can be stated as follows.
For our analysis, we rely on the following conditions: Assumption 1 For all i ∈ N , J i (x i , x −i ) is convex and continuously differentiable in x i for any fixed x −i .
Assumption 2 (1) For any multi-sample δ K ∈ ∆ K , the domain Π K is non-empty. (2) The set X = X 1 × · · · × X N is compact, polytopic and convex.
Note that convexity of the cost function with respect to the agent's local decision is crucial for the design of tailored algorithms with theoretical convergence guarantees for Nash equilibrium seeking. Under these assumptions, we can determine a GNE as in Definition 1 by solving (4) with F (x) := [∇ x i J i (x i , x −i )] i∈N . A class of problems of common interest can be modelled by the so called aggregative games [1,28,30], where the cost incurred by agents depends on some aggregate measure (typically the average) of the decisions of the entire population. Such a cost can be expressed in (3) by the function J i (x i , σ(x)), where the aggregate σ : R nN → R n is defined as the mapping x → (1/N ) Σ N i=1 x i . A solution frequently linked to this class of games is the Wardrop equilibrium (WE), a concept akin to the NE but specifically defined in the context of transportation networks [6]. The variational WEs of G K can be expressed by using F (x) := [∇ x i J i (x i , σ(x))] i∈N ; notice that in this case the second argument of J i is fixed and set to σ(x), consistently with the notion of WE where agents neglect the impact of their decision on others.
Assumptions 1 and 3 are standard in the game-theoretic literature [20,46]. Assumption 2 is relatively mild; the affine form of the constraints is exploited in the proposed algorithm (see Section 3) for the convergence to an equilibrium bearing the desired robustness properties.
We point out that in general only a subset of solutions to G K can be retrieved through (4): these are referred to as variational equilibria, and enjoy favourable properties over nonvariational ones, as with the former the coupling constraints' burden is equally split among agents [26,31].
The following lemma, adapted from [20, Thm. 2.3.3], formalises the connection between the solutions to VI K and the GNEs (or GWEs) of G K .
Lemma 1 Under Assumptions 1, 2 and 3, VI K has a unique solution that is also an equilibrium of G K .
For the considered class of VIs, several algorithms from the literature can be employed to obtain a variational equilibrium of G K ; see, e.g., [19,38]. We remark that, even if not explicitly shown for ease of notation, any solution x * to G K is itself a function of the drawn multi-sample δ K ∈ ∆ K . Probabilistic feasibility guarantees for the unique solution of VI K can then be provided both in an a priori and a posteriori fashion by resorting to the results in [22,23,37]. However, these results are tailored to the provision of probabilistic feasibility guarantees for a single point (namely the solution of a VI): any strategic deviation from the equilibrium is not covered by such guarantees. We cover this issue in Section 3. First, we provide some background on the scenario approach.
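To make the VI machinery concrete, here is a minimal projected pseudo-gradient iteration for a toy two-player quadratic game. All cost functions, parameters, and the step size are illustrative assumptions, and this is not the primal-dual algorithm proposed in this paper (which additionally handles sampled coupling constraints): with J i (x i , x −i ) = ½x i ² + a x i x −i − b i x i and local sets X i = [0, 1], the pseudo-gradient is F(x) = Mx − b with M = [[1, a], [a, 1]] ≻ 0 for |a| < 1, so the VI is strongly monotone and has a unique solution, here the interior point M⁻¹b.

```python
# projected pseudo-gradient iteration: x <- proj_X(x - gamma * F(x))
a = 0.5
b = (1.0, 0.75)
gamma = 0.3  # step size, small enough for convergence of this contraction

def F(x):
    # pseudo-gradient of the two quadratic costs: F(x) = M x - b
    return (x[0] + a * x[1] - b[0], a * x[0] + x[1] - b[1])

def proj(v):
    # Euclidean projection onto X = [0, 1] x [0, 1]
    return tuple(min(1.0, max(0.0, vi)) for vi in v)

x = (0.0, 0.0)
for _ in range(2000):
    f = F(x)
    x = proj((x[0] - gamma * f[0], x[1] - gamma * f[1]))
print(x)   # converges to the Nash equilibrium M^{-1} b = (5/6, 1/3)
```

Because F is strongly monotone, the iteration is a contraction and the fixed point of the projected update is precisely the unique VI solution, i.e. the variational NE.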
Basic concepts in the scenario approach
A fundamental notion in the scenario approach is the probability of violation of an uncertain constraint.
Definition 2 The probability of violation V : R nN → [0, 1] is defined as V (x) := P{δ ∈ ∆ : g(x, δ) > 0}.

A data-driven decision-making process can be formally characterized by a mapping (the algorithm) that takes as input the data encoded by the samples and returns a set of decisions.
Definition 3 An algorithm is a function A : ∆ l → R nN × 2 R nN that takes as input an l-multisample and returns the pair (x * , S * l (x * )), namely, a point x * and a solution set S * l parametrized by x * .

In the setting we consider, we have x * ∈ S * l (x * ); however, this ought not to be the case in general. In the following, we interpret the above definition as context-dependent, in that the size l of the input multisample is admitted to vary, all else remaining fixed for a given algorithm A.
A key notion, strongly linked to that of algorithm, is the minimal compression set [34]. This concept springs from the observation that typically only a subset of the sampled data is relevant to a decision or set of decisions, and all other samples are redundant.
Definition 4 (Compression set) Consider an algorithm A as in Definition 3. A subset of samples is a compression set for A(δ K ) if feeding the algorithm with this subset alone returns the same pair as feeding it with the entire multi-sample δ K .

As multiple subsets of samples can exist that fulfil this property, the ones with minimal cardinality are called minimal compression sets.
If we feed the algorithm with the set of samples corresponding to a compression, then the same decision will be returned as when we feed the algorithm with the entire multi-sample. As established in [34], the compression set is related to the notion of support samples [11] and that of essential constraints [8]. Under certain nondegeneracy assumptions these concepts coincide.
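The greedy procedure used later in the proof of Theorem 1 (progressively removing samples and keeping only those whose removal changes the solution) can be sketched on the toy scenario program min x s.t. x ≥ δ k , whose solution is max k δ k ; the solver and the data below are illustrative assumptions.

```python
def solve(samples):
    # toy scenario program: minimize x subject to x >= delta_k for all k
    return max(samples)

def greedy_compression(samples):
    # progressively remove samples whose removal leaves the solution unchanged
    x_star = solve(samples)
    kept = list(samples)
    for s in list(kept):
        trial = [d for d in kept if d != s]
        if trial and solve(trial) == x_star:
            kept = trial
    return kept

samples = [0.31, 0.74, 0.12, 0.98, 0.55]
print(greedy_compression(samples))   # [0.98]: only the binding sample remains
```

Feeding the algorithm the returned subset alone reproduces the original solution, which is exactly the defining property of a (possibly non-minimal) compression set.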
3 Probabilistic feasibility of sets around equilibria
A first a posteriori result
Returning to the scenario game G K in (3), we now consider a more general setup where agents are allowed to deviate from x * following, e.g., unmodelled changes in their cost functions; while we suppose that these deviations are feasible with respect to the local constraints, we want to study their feasibility as regards the coupling constraints obtained through sampling. Specifically, the region in which agents' strategies can deviate from the nominal equilibrium is assumed to lie within a predefined open ball B(x * , ρ), where ρ > 0 is a fixed radius that denotes the maximum possible distance of agents' deviations from x * ; the latter is assumed to be unique as per Lemma 1. As such, the region of interest is S * K := B(x * , ρ) ∩ Π K . This is depicted in Figure 1 using the ∞-norm (any other norm could have been used instead): an algorithm A (see Sec. 2.3) takes as input a multi-sample δ K and returns the region S * K around the solution x * ∈ R 2 of a game with two players whose decisions are defined as scalar quantities. For this pictorial example, Π K is shaped exclusively by sampled coupling constraints. Any compression set as per Definition 4 for A must be associated with the solid blue constraints (these form a compression for x * ), and with the dashed red constraint that intersects B(x * , ρ), as its removal would change S * K .
We can quantify the number of samples that form a compression set for the algorithm that returns S*K in an a posteriori fashion, as established in Theorem 1. To this end, for a fixed confidence β ∈ (0, 1), let the violation level be defined as a function ϵ : {0, . . ., K} → [0, 1].

Fig. 1. The region S*K (in green) obtained as the intersection of the set of deviations B∞(x*, ρ) around the equilibrium x* (red dot) with the domain ΠK. The samples producing the constraints in blue are in the compression set of x*, while those associated with the red constraint are in the compression set of S*K; discarding these does not change x*.
Proof: Let (x*, S*K) be the solution returned by A for some given δK, according to Definition 3. We aim at determining a compression set for A(δK), and use its cardinality to reach the theorem's conclusion by means of Theorem 2 in [40]. This set is the union of: (i) the samples that form a compression set for x*, i.e., solving the problem using only these would result in the same equilibrium obtained by using all samples; and (ii) any other sample (not in the compression set of x*) whose removal can still lead to a change of the region S*K.
Case (i): Determining a (possibly non-minimal) compression set for x* can be achieved, as suggested in [14], by progressively removing samples until a subset that leaves the solution unchanged is determined. We denote its cardinality by s*. With reference to Fig. 1, this set would be associated with the blue constraints active at x*.
Case (ii): We need to count the samples whose removal does not change x* but yields a larger region S*K (red constraint in Fig. 1). Their number can be upper bounded by the M facets of ΠK that intersect S*K. Hence, the number of samples that form a compression set for A(δK) is bounded by s* + M. Existence of a compression set I with a bound on its cardinality is sufficient for the application of Theorem 2 in [40]. The fact that for the minimal compression set |I*| ≤ |I| ≤ s* + M always holds then leads to the statement of this theorem. ■
It is important to stress that the application of Theorem 1 is agnostic to the choice of the equilibrium seeking algorithm. To use the result of Theorem 1, one needs to quantify (an upper bound on) the number of samples s* that form a compression set for x* and (an upper bound on) the number M of coupling constraints that correspond to facets of S*K. While s* ≤ nN under Assumptions 1-3, an upper bound for M can in general only be obtained a posteriori, i.e., once δK is sampled. In the next section we show how to obtain a priori bounds for the same quantity.
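As a sketch of the a posteriori step, the number of facets of ΠK whose hyperplanes meet a Euclidean ball B(x*, ρ) is easy to count once δK is drawn: with unit-norm rows as in (7b), the hyperplane of facet ℓ intersects the ball if and only if the slack bℓ − aℓ⊺x* is at most ρ. The constraint data below are illustrative:

```python
import numpy as np

def facets_cut_by_ball(A, b, x_star, rho):
    """For the normalised H-representation A x <= b (unit row norms),
    facet l's hyperplane a_l^T x = b_l intersects the Euclidean ball
    B(x*, rho) iff the slack b_l - a_l^T x* is at most rho.
    Returns the indices of such facets; their count upper-bounds M."""
    slack = b - A @ x_star          # x* feasible => slack >= 0
    return np.flatnonzero(slack <= rho)

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # unit-norm rows
b = np.array([1.5, 3.0, 0.0])
x_star = np.array([1.0, 1.0])
idx = facets_cut_by_ball(A, b, x_star, rho=0.8)
print(idx)   # only facet 0 has slack 0.5 <= 0.8; the others have slack 2.0 and 1.0
```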
3.2 A priori probabilistic certificates
Consider the scenario game GK and suppose that bounded deviations from the solution are allowed. We model such deviations as a ball of radius ρ around the equilibrium, as in Section 3.1. In contrast to the a posteriori nature of the result therein, our goal here is to achieve an a priori bound. Namely, we aim at establishing the main statement of Theorem 1 with a prespecified violation level that does not depend on the given multi-sample δK. In other words, we seek a statement, holding with known confidence, of the form V(ΠK ∩ B(x*, ρ)) < ε, with ε ∈ (0, 1) fixed a priori.
To achieve this we build upon the previous conclusions, which expose a link between the probability of constraint violation and the number M of facets of ΠK (each originated from some uncertainty sample) that B(x*, ρ) intersects. In particular, a monotonic relationship follows from (6): the smaller M, the better, i.e., less conservative, the theoretical feasibility guarantees on constraint violation for the strategies belonging to the feasible region S*K surrounding the equilibrium. Also, a smaller value of M can result in a larger region for which the guarantees of Theorem 1 hold, due to a smaller portion of B(x*, ρ) being cut off by intersection with ΠK. This motivates us to study the role of M as a modulating parameter for the robustness of the feasibility certificates offered for the region S*K, as well as for the extent of deviation from the nominal equilibrium covered by such certificates.
3.2.1 GNE-seeking algorithm with a priori robustness guarantees
We consider an iterative scheme to determine a solution of VIK in (4). In particular, since the problem involves coupling constraints, we build our Algorithm 1 upon a primal-dual scheme, where constraint satisfaction is achieved by the use of Lagrange multipliers; similar developments hold for both GNE and GWE problems. To this end, we define the augmented vector y := (x, µ) ∈ R^(nN+m) by stacking the global decision vector x and the Lagrange multipliers µ = (µℓ), ℓ = 1, . . ., m, with µ ∈ M ⊆ R^m_+.

Algorithm 1 A priori robust GNE seeking algorithm
Require: y(0) ∈ X × M, step size τ > 0, radius ρ, integer M, tolerance ξ > 0
1: κ ← 0
2: repeat
3:   y(κ+1) ← proj_{X×M,D}( y(κ) − D^(−1) T(y(κ), ρ, M) )
4:   κ ← κ + 1
5: until ∥y(κ+1) − y(κ)∥ ≤ ξ
6: y* ← y(κ+1)
7: return y* and ΠK ∩ B(x*, ρ)

The set M denotes the domain of µ; in the sequel we impose some structure on M once some necessary theoretical ingredients are introduced. As deterministic constraints do not play a role in the evaluation of the robustness guarantees, suppose for ease of exposition that ΠK only comprises uncertain coupling constraints. Let
ΠK = {x ∈ R^(nN) : Ax ≤ b},    (7a)
∥aℓ∥ = 1, for all ℓ = 1, . . ., m,    (7b)
where aℓ⊺ denotes the ℓ-th row of A. Eq. (7) is the irredundant H-representation of the polytopic feasibility region ΠK defined in (4), where the rows of matrix A are unit vectors. Property (7b) is key to the second statement in Lemma 2. It entails no loss of generality, since for any A, b forming an equivalent H-representation of ΠK, (7) can be obtained by normalising each row of A and the corresponding component of b by the row-vector norm. Thus, the pair (A, b) encodes the set of randomised coupling constraints that constitute facets of ΠK.
The main step of Algorithm 1 (line 3) is a projected gradient descent (ascent) update for x (µ) through the mapping T : R^(nN+m+1) × N → R^(nN+m) given by
T(y, ρ, M) := [ F(x) + A⊺µ ; b − Ax − Q(µ, M)ρ ],    (8)
which follows from the primal-dual conditions of the game solution; see [19, Sec. 4.2], [20, Sec. 1.4.1]. F is the pseudogradient mapping defined as in Section 2.2, A, b are as in (7), and ρ := cρ1m, where c is a constant scaling factor (see Sec. 3.2.2) and M is a nonnegative integer. In the second block-row of (8), the m − M least relevant (based on the multipliers' value) coupling constraints are tightened by an amount cρ through the mapping Q : R^m_+ × N → {0, 1}^(m×m). Finally, the asymmetric projection matrix D ≻ 0 includes the step-size parameter τ > 0 and is defined as
D := [ (1/τ) InN , 0 ; −2A , (1/τ) Im ].    (9)
Note that the constraint tightening performed in the second block-row of T is equivalent to preventing B(x*, ρ) from intersecting these constraints. In other words, Q ensures that the number of facets of ΠK intersecting B(x*, ρ) is at most M, which in turn enables an a priori estimate of the number of samples that form a compression for S*K and hence of V(ΠK ∩ B(x*, ρ)); this is formalised by Theorems 2 and 3. Since m − M coupling constraints are tightened, smaller values of M can result in a more robust and possibly larger region S*K; however, they can also move the location of the nominal equilibrium x* to a somewhat less efficient point towards the interior of ΠK. As we will demonstrate numerically in the sequel, this is the case with potential games [21].
3.2.2 Constraint tightening via mapping Q
We define the mapping Q as
Q(µ, M) := P(µ)⊺ R(M) P(µ),    (10)
where
• P : R^m → {0, 1}^(m×m) returns a permutation matrix such that P(µ)µ is the vector composed of the elements of µ arranged in decreasing order.
• R : N → {0, 1}^(m×m) takes as input the number of coupling constraints M ≤ m that we allow B(x*, ρ) to intersect and returns as output the matrix
R(M) := blkdiag(0M×M, Im−M).    (11)
Compatibly with the definition of ρ, we then have
Q(µ, M)ρ = P(µ)⊺R(M)P(µ)ρ = P(µ)⊺R(M)ρ,    (12)
where the last equality holds since all components of ρ are equal.
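A direct implementation of P, R and Q as described above; the block-diagonal form of R(M), with zeros in the M non-tightened slots, is an assumption consistent with its described role:

```python
import numpy as np

def P(mu):
    """Permutation matrix sorting mu in decreasing order: P(mu) @ mu is sorted."""
    order = np.argsort(-mu, kind="stable")
    Pm = np.zeros((mu.size, mu.size))
    Pm[np.arange(mu.size), order] = 1.0
    return Pm

def R(M, m):
    """Diagonal selector: zeros for the M non-tightened slots, ones for the rest."""
    return np.diag(np.r_[np.zeros(M), np.ones(m - M)])

def Q(mu, M):
    return P(mu).T @ R(M, mu.size) @ P(mu)

mu = np.array([0.3, 0.0, 0.9, 0.1])
rho_bar = 0.5 * np.ones(4)       # rho_bar = c*rho*1_m, equal components
print(Q(mu, 1) @ rho_bar)        # only the largest-multiplier constraint (index 2) is spared
```

Applied to the constant vector ρ, Q zeroes out the entries corresponding to the M largest multipliers and leaves cρ on the m − M constraints to be tightened, as in (12).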
3.3 Convergence analysis and main result
Due to Q, the mapping T is discontinuous on X × R^m. To circumvent this, we restrict the multipliers to a set M on which we impose some structure granting continuity of T on X × M. To this end, let Z := [ζ, +∞) ∪ {0}, for some small ζ > 0; i.e., Z ⊂ R contains all nonnegative scalars which take a value no smaller than ζ when nonzero.
Assumption 4 Let Λ be an arbitrarily large compact set. M admits the form
M := { µ ∈ R^m_+ : [P(µ)µ]k − [P(µ)µ]k+1 ∈ Z for all k = 1, . . ., m − 1, and [P(µ)µ]m ∈ Z } ∩ Λ.    (13)
Recalling that P(µ)µ rearranges the multipliers in descending order, the set M contains all vectors where the difference between every pair of strictly positive components (and the distance of the smallest of these from zero) is no less than ζ. We note that (13) is the union of q = m! + m + 1 disjoint convex subsets of R^m_+, each of which we denote as Mj, i.e., M = ∪_{j=1}^q Mj; Figure 2 illustrates this set for m = 3. It is therefore possible to compute the projection in line 3 of Algorithm 1 by, e.g., projecting on Mj, for j = 1, . . ., q, and then setting y(κ+1) to be the solution among these that results in the minimum distance from y(κ) − D^(−1)T(y(κ), ρ, M). Still, the projection on M can be computationally intensive if q is large.
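Membership in M can be checked componentwise after sorting. The sketch below (intersection with the large compact set Λ omitted) encodes the requirement that consecutive sorted gaps, and the smallest positive component, are either zero or at least ζ:

```python
import numpy as np

def in_M(mu, zeta, tol=1e-12):
    """Check the structure imposed on the multiplier domain M: after sorting
    mu in decreasing order, every consecutive gap must be (numerically) zero
    or at least zeta, and the smallest nonzero component at least zeta."""
    s = np.sort(mu)[::-1]
    gaps = np.append(s[:-1] - s[1:], s[-1])   # consecutive gaps, plus distance to 0
    return bool(np.all((gaps <= tol) | (gaps >= zeta)) and np.all(mu >= 0))

zeta = 0.2
print(in_M(np.array([0.8, 0.4, 0.4]), zeta))   # True: gaps are 0.4, 0, 0.4
print(in_M(np.array([0.8, 0.7, 0.4]), zeta))   # False: gap 0.1 < zeta
print(in_M(np.array([0.1, 0.0, 0.0]), zeta))   # False: smallest positive < zeta
```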
Imposing on M the structure of (13) endows T with the desired nonexpansiveness properties that are exploited in the proof of Lemma 3. In the numerical implementation of the algorithm, ensuring µ ∈ M can possibly introduce small perturbations in the multipliers, compared to standard formulations where µ ∈ R^m_+, which in turn could produce a slight violation of the constraints (this can be controlled through the magnitude of ζ). We note that M is compact by construction due to the intersection with the compact set Λ in (13), which can, however, be arbitrarily large, thus not impacting the result numerically. Compactness is used in the proof of Theorem 2; Remark 1 discusses cases where this requirement can be lifted.
Lemma 3 Consider Assumptions 1, 2, 3 and 4.
(1) The mapping T in (8) is continuous on X × M.
(2) Let D be as in (9) and set τ > 0 such that (15) holds. Then, for any j = 1, . . ., q, Algorithm 1 converges to a solution of VI(X × Mj, T) when the gradient step in line 3 is projected on the corresponding subdomain, for any y(0) ∈ X × Mj.
Continuity of the mapping is essential for the theoretical convergence of Algorithm 1. The second part of Lemma 3 provides an admissible range of values for τ such that Algorithm 1 converges to a solution of VI(X × Mj, T) if at each iteration the projection in line 3 is performed on the (convex) subdomain Mj ⊂ M, j ∈ {1, . . ., q}.
The step size τ is chosen such that conditions standard in NE seeking are satisfied and oscillations among multiple equilibria are avoided. Still, we are interested in establishing convergence on the entire domain M, so at each iteration the projected solution might belong to a different subdomain. This does not trivially follow from the second part of Lemma 3; therefore, by Lemmas 2 and 3 we establish an additional condition on τ such that Algorithm 1 retrieves a solution of VI(X × M, T).
Theorem 2 Consider Assumptions 1, 2, 3 and 4. Fix 0 ≤ M ≤ m and assume the domain ΠK is nonempty for any of the (m choose M) combinations of constraints tightened as in (12). Let D ≻ 0 be defined as in (9), where τ satisfies (15) and (16). Then Algorithm 1 converges to a solution of VI(X × M, T) for any initial condition y(0) ∈ X × M.
Note that as µ(κ) → µ*, we have Q(µ(κ), M) → Q(µ*, M) =: Q*. Then, the solution returned by Algorithm 1 is the equilibrium of a variant of GK with m − M tightened constraints (this follows from (12) with Q(µ) replaced by Q*).
Remark 1 (Relaxing compactness) Theorem 2 still holds when Λ = R^m in the definition of M in (13) if, for all multi-samples, (i) A is full row-rank, or (ii) all elements of A are positive.
(i) To show this, consider the mapping T and the matrix D in (9). The multipliers' update involves projecting (weighted according to D) on M. Since X is compact, there exists a subsequence {κi}i∈N such that lim i→∞ κi = ∞ and lim i→∞ x(κi) = x̄, for some x̄ ∈ X. It suffices to show that the sequence of multipliers {µ(κi)} remains bounded (all arguments in the proof of Theorem 2 from (34) onwards remain unaltered). For the sake of contradiction, assume that there exists at least one element of µ(κi) that tends to infinity across the considered subsequence. Let then µ(κi) = (µ∞(κi), µF(κi)), where, based on our contradiction hypothesis, lim i→∞ ∥µ∞(κi)∥ = ∞, while µF(κi) collects the components that remain finite. (Taking the first elements of µ(κi) to be the ones that tend to infinity is only to simplify notation and is without loss of generality.) Let (A∞, b∞) and (AF, bF) denote the corresponding partition of A and b, respectively, where A∞, b∞ are non-empty by hypothesis. To have ∥µ∞(κi)∥ → ∞, we need the terms that are integrated in the multipliers' update, i.e., the last two terms in (17), to be positive for all i (in fact, across a subsequence), which, since τ > 0, is equivalent to (18), where (•)∞ denotes the elements of its argument corresponding to µ∞(κi). As such, we have (19). However, lim i→∞ x(κi) = x̄ ∈ X and (Q(µ(κi), M)ρ)∞ ≤ cρ for all i, while by Lemma 3, F is continuous over the domain of multipliers satisfying (13). Moreover, µF contains the components of µ(κi) that remain finite. Therefore, the limit as i → ∞ of the right-hand side of (19) is finite. Due to the assumed full row-rank structure of A∞, this implies lim sup i→∞ ∥µ∞(κi)∥ < ∞, establishing a contradiction and showing that the subsequence {µ(κi)} remains bounded.
(ii) If all elements of A are positive, and since aℓ⊺aℓ = 1 for all ℓ = 1, . . ., m, all arguments of case (i) remain the same, with the only difference that finiteness of the right-hand side of (19) directly implies boundedness of ∥µ∞(κi)∥.
The next result accompanies the region S*K = ΠK ∩ B(x*, ρ) of strategic deviations from the equilibrium x* with a priori probabilistic feasibility guarantees that can be tuned by means of M. It should be noted that Theorem 2 establishes that there exists a choice of τ that guarantees convergence of Algorithm 1. The admissible range of values for τ is explicit via (15), (16), but difficult to quantify due to R. Numerical evidence suggests that selecting a small enough value is sufficient for convergence.
Theorem 3 Consider Assumptions 1, 2, 3 and 4. Let x* and S*K = ΠK ∩ B(x*, ρ) be returned by Algorithm 1; fix ϵ ∈ (0, 1) and M. We then have that
P^K { V(ΠK ∩ B(x*, ρ)) ≤ ϵ } ≥ 1 − Σ_{k=0}^{nN+M−1} (K choose k) ϵ^k (1 − ϵ)^{K−k}.    (20)
By Definition 2, Theorem 3 guarantees that for any point in S*K, the probability of constraint violation is bounded by ϵ, with confidence at least equal to the right-hand side of (20). The dependence of this term on M gives us an additional degree of freedom in trading the robustness of the solution for its associated probabilistic confidence. The choice of M can also have an effect on the size of S*K, as well as on the location of x*, thus resulting in a trade-off between performance and robustness.
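The violation probability V(·) appearing in the statement can also be estimated empirically on fresh, unseen samples. A minimal sketch with a single scalar uncertain constraint; the distribution, seed and numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_violation(x, b_fresh, a):
    """Empirical probability that x violates a fresh uncertain constraint
    a^T x <= b(delta), estimated over unseen samples of b."""
    return float(np.mean(a @ x > b_fresh))

a = np.array([1.0, 0.0])
K = 500
b_train = rng.uniform(1.0, 2.0, size=K)       # K sampled constraint bounds
x_star = np.array([b_train.min(), 0.0])       # point pushed onto the binding sampled facet

b_fresh = rng.uniform(1.0, 2.0, size=100_000) # fresh samples, unseen by the "algorithm"
viol = empirical_violation(x_star, b_fresh, a)
print(viol < 0.05)   # with K = 500 samples the empirical violation is small
```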
For the case in which the coupling constraints concern exclusively the aggregate variable, it can be shown that the upper limit of the summation on the right-hand side of (20) can be replaced by n + M − 1, where n is the dimension of the aggregate vector. This allows one to state (20) with the much higher confidence 1 − Σ_{k=0}^{n+M−1} (K choose k) ϵ^k (1 − ϵ)^{K−k}; for details, we refer the reader to [40], where the notion of support rank [45] is exploited.
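The binomial tail governing the confidence can be evaluated directly. The sketch below compares the generic compression bound (upper limit nN + M − 1) with the aggregate-structure bound (upper limit n + M − 1) on illustrative values of K, ϵ, n, N and M:

```python
from math import comb

def scenario_confidence(K, d, eps):
    """Confidence 1 - sum_{k=0}^{d-1} C(K,k) eps^k (1-eps)^(K-k) with which a
    violation level eps holds when the compression size is at most d."""
    return 1.0 - sum(comb(K, k) * eps**k * (1.0 - eps) ** (K - k) for k in range(d))

K, eps, n, N, M = 1000, 0.05, 2, 50, 3
beta_general   = scenario_confidence(K, n * N + M, eps)  # upper limit nN + M - 1
beta_aggregate = scenario_confidence(K, n + M, eps)      # upper limit n + M - 1
print(beta_aggregate > beta_general)   # aggregate structure gives higher confidence
```

With these numbers the generic bound is essentially vacuous (the compression size nN + M is large relative to K·ϵ), while the aggregate bound yields confidence close to one, illustrating the gain discussed above.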
4 Numerical example
Consider a game with N agents whose decisions are subject to deterministic local constraints and uncertain coupling constraints on the aggregate decision, as in (21), where C ≻ αIn for some α > 0, and d ∈ R^n. Note that a structure similar to our numerical example has been considered in applications of aggregative games such as electric vehicle charging and traffic management under uncertainty [38], [22], [23]. We impose no knowledge of ∆ and P; we rely instead on a scenario-based approximation of the game, whereby each sample δk ∈ δK gives rise to the lower and upper bounds bδk, b̄δk. Eq. (21) is an aggregative game in the form of (3). In this instance, we assume each agent's action has a negligible effect on the aggregate, and accordingly consider a GWE-seeking problem. Following the definition of FWE (Sec. 2.2), we obtain (22). We employ Algorithm 1 to seek a WE x* such that, by fixing M, a prespecified theoretical violation level is guaranteed for the set ΠK ∩ B(x*, ρ). Due to uniqueness of σ*, all sets B(•, ρ), parametrised by any x* solving (21), are projected on the unique ball B(σ*, ρ/N) in the aggregate space. Also note that, by definition of σ, at most n non-redundant samples will contribute to define the domain Πσ_K. For the derivation of the robustness guarantees, we can thus restrict our attention to S*K = Πσ_K ∩ B(σ*, ρ/N) ⊆ R^n. As remarked at the end of Section 3.3, we can apply (20) with the upper limit of the summation involved replaced by n + M − 1. For the case n = 2, N = 50, and different choices of M, Figure 3 depicts the projected iterations {σ(x(κ))}, κ = 1, 2, . . ., generated by Algorithm 1 on the space Πσ_K. It can be observed how the region S*K changes as the value of M is modified.
Footnote 5: We note that this case slightly transcends the conditions in Theorem 2, as F does not comply with Assumption 3-(1). Convergence of Algorithm 1 (following from the nonexpansiveness of T on each subdomain Mj) can still be ensured here due to the affine structure of F; cf. [20, Sec. 12.5.1].
It is worth noting that in this case F(x) is integrable; this can be inferred from [20, Thm. 1.3.1], since the Jacobian of the game is symmetric, i.e., ∇x F(x) = ∇x F(x)⊺. Therefore, a GWE x* can also be obtained by solving the minimisation problem min_{x ∈ X ∩ ΠK} E(x), where E(x) := σ(x)⊺Cσ(x) + d⊺σ(x). In other words, this game admits a potential function E, whose minimizers correspond to GWEs. E can be interpreted as the total cost incurred by the population of agents, and its minimization leads to the optimum social welfare. The contour lines of E are depicted in Figure 3: since x* minimises E(•), σ* lies on the contour associated with the minimum value of E within the feasible domain. Lower values of M result in larger regions for which guarantees are provided. Figure 4 shows how the sequence {E(x(κ))}, κ = 1, 2, . . ., converges to the minimum potential within the possibly tightened feasibility region. It can be observed how, in this case, the efficiency of the equilibrium decreases as smaller values of M are chosen. The three panels of Figure 4 show the trade-off between system-level efficiency and the guaranteed robustness levels.
The lower the value of M, the lower the empirical constraint violation, corresponding to a better confidence bound on the right-hand side of (20).
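Since minimizers of E correspond to GWEs here, the equilibrium aggregate can be sketched by projected gradient descent on E. The values of C and d, the box bounds and the averaging convention for σ are illustrative assumptions, not the paper's exact instance:

```python
import numpy as np

N, n = 50, 2
C = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite (illustrative)
d = np.array([-4.0, -2.0])

def sigma(x):                            # aggregate: average of the agents' decisions
    return x.reshape(N, n).mean(axis=0)

def E(x):                                # potential E(x) = sigma^T C sigma + d^T sigma
    s = sigma(x)
    return s @ C @ s + d @ s

def grad_E(x):                           # chain rule through the average
    return np.tile((2.0 * C @ sigma(x) + d) / N, N)

x = np.zeros(N * n)
for _ in range(5000):
    x = np.clip(x - 0.5 * grad_E(x), 0.0, 3.5)   # projected gradient on the boxes X_i
print(np.round(sigma(x), 2))   # ≈ [0.86 0.57], the minimiser of E within the box
```

Here the box constraints are inactive at the minimiser, so the aggregate solves 2Cσ + d = 0, i.e., σ* = (6/7, 4/7); tightened coupling constraints would push the minimiser further inside the feasible region, as discussed for smaller M.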
5 Concluding remarks
This work proposes a data-driven equilibrium-seeking algorithm such that probabilistic feasibility guarantees are provided for a region surrounding a game equilibrium. These guarantees are a priori, and the region accompanied by such a probabilistic certificate is tunable. For games that admit a potential function, the proposed scheme is shown to achieve a trade-off between cost and the level of probabilistic feasibility guarantees. In fact, we conjecture that our scheme returns the most efficient equilibrium such that the predefined guarantees are achieved; proving this conjecture is left for future work. Moreover, current work investigates a distributed implementation of the proposed equilibrium seeking algorithm.
Proof of Lemma 2
Let µ, z be arbitrary vectors in M and, as in the proof of Lemma 3, define ⃗µ, ⃗z as the vectors composed by rearranging the elements of µ, z in decreasing order. According to this arrangement, let Iµ = {i1, i2, . . ., im} be the ordered set of indices of µ, i.e., ik : µik = ⃗µk, k = 1, . . ., m; as a result, i1 and im will be the indices of the largest and smallest components of µ, respectively. Applying a similar definition to z, we denote the corresponding set Iz := {j1, j2, . . ., jm}. Then, the first M indices in Iµ and Iz, denoted as Lµ and Lz, respectively, are relative to the constraints not tightened by the application of Q(•, M). In other words, for all ℓ ∈ Lµ, (Q(µ, M)ρ)ℓ = 0, and similarly for z. Vice versa, the complementary sets Lcµ and Lcz collect the indices of the tightened constraints. The sets (Lµ ∩ Lz), (Lµ ∩ Lcz), (Lcµ ∩ Lz) and (Lcµ ∩ Lcz) are pairwise disjoint and exhaust the set {1, . . ., m}; hence we can write the quantity of interest as U = U1 + U2, as in (23). Now, notice that for any i ∈ Lcµ ∩ Lz ⊆ Lcµ and j ∈ Lµ ∩ Lcz ⊆ Lµ, we have by definition of Lµ and Lcµ that µi ≤ µj (which by (13) only holds with equality if µi = µj = 0). With analogous reasoning, we have zi ≥ zj for any i ∈ Lcµ ∩ Lz ⊆ Lz and j ∈ Lµ ∩ Lcz ⊆ Lcz. Let h1 be the cardinality of the set Lcµ ∩ Lz, and h2 that of Lµ ∩ Lcz. Then, since Lµ, Lz ⊆ {1, . . ., m} and |Lµ| = |Lz| = M, we obtain h1 = h2 =: h and 0 ≤ h ≤ M, which implies U1 ≤ 0 and U2 ≤ 0 in (23). We can observe that U1 < 0 and U2 < 0 if Lµ ∩ Lcz and Lcµ ∩ Lz are nonempty and the corresponding components of µ and z are nonzero. In such a case h ≥ 1 and we can write U1 ≤ −hζ, where the inequality follows from (13) and the above discussion. A similar reasoning holds for U2. Lastly, note that if µ ̸= z and h ≥ 1, then at least one of U1 ≤ −hζ and U2 ≤ −hζ will hold. By (23), we can thus conclude U ≤ −hζcρ for any µ, z ∈ M, µ ̸= z. ■
Proof of Lemma 3
Part (1): To prove that the mapping T is continuous on its domain, we first notice that T is by construction continuous on X × M whenever the operator Q(•, M) is continuous on M (as the parameter M is fixed). Therefore, it is sufficient to show that for any µ, z ∈ M and any η > 0, there exists δ > 0 such that (25) holds, where ρ = cρ1m ̸= 0. To this end, consider any µ, z ∈ M such that ∥µ − z∥ < ζ/2, with ζ as defined in (13). Let ⃗µ and ⃗z denote the vectors µ and z sorted in decreasing order; thus, ⃗µℓ is the ℓ-th largest element of µ (and similarly for z). For any given ℓ, let i : µi = ⃗µℓ, j : zj = ⃗zℓ, and let l be the smallest index ℓ ∈ {1, . . ., m} for which i ̸= j. In words, l is the smallest index for which the ℓ-th largest elements of µ and z do not appear at the same row of their respective vectors. We then let I be the set of indices for which the ordering of the elements of µ and z agrees, i.e., for all k ∈ I, there exists ℓ < l such that i = j = k, with i : µi = ⃗µℓ and j : zj = ⃗zℓ.
We prove our statement by contradiction. Suppose there exist i, j ∉ I such that i : µi = ⃗µℓ and j : zj = ⃗zℓ for some ℓ > l, where µi < µj and zi > zj. First, we note that such an instance exists by hypothesis, as otherwise the only possible case is i = j, which contradicts i, j ∉ I and implies Q(µ, M) = Q(z, M). Since z ∈ M, it further holds that zj < zi − ζ, which leads to (26). We bound (26) from below by noting that zj > µj − ζ/2, which holds since ∥µ − z∥ < ζ/2. This yields µj < µi, which contradicts our hypothesis. Hence the elements of any pair of vectors µ, z ∈ M such that ∥µ − z∥ < ζ/2 must follow the same ordering. By definition of P(•), this implies P(µ) = P(z) and, in turn, ∥Q(µ, M) − Q(z, M)∥ = 0. This validates (25) with δ = ζ/2 and any η > 0, establishing the continuity of Q(•, M) on M and concluding the proof of the first part.
Part (2): We show that the mapping T fulfils certain nonexpansiveness properties required for the convergence of Algorithm 1, for compatible choices of τ. In particular, we provide a sufficient condition under which the iteration
y(κ+1) = proj_{X×Mj,D} ( y(κ) − D^(−1) T(y(κ), ρ, M) ),    (27)
converges to a solution of VI(X × Mj, T), where j ∈ {1, . . ., q} is fixed, for any y(0) ∈ X × Mj. Notice that in (27) the skew projection is performed on the convex subdomain X × Mj; any fixed point of (27) is a solution of VI(X × Mj, T). Define U := X × M and Uj := X × Mj, and let T̃D(w) := T(w, ρ, M) − Dw, for all w ∈ U. To ease notation, we drop the dependence of T and T̃D on ρ, M, as they remain fixed throughout the proof. According to [20, Thm. 12.5.2] (see also [49, Sec. 4.3]), to ensure convergence of (27) to a solution of VI(X × Mj, T) it is sufficient to show that T̃D = T − D is β-cocoercive on Uj, i.e.,
(T̃D(v) − T̃D(w))⊺(v − w) ≥ β∥T̃D(v) − T̃D(w)∥²,    (28)
for some β > 1/2 and all v, w ∈ Uj, j ∈ {1, . . ., q}. In fact, we will go a step further and demonstrate here that T̃D is cocoercive on the whole of U with β > 1/2. Due to the saddle-problem structure of the mapping in (8), we adopt the procedure in [20, Prop. 12.5.4] and define D as in (9) (see also [38]). It then follows from the above definitions that T̃D(w), for any w ∈ U, reduces to (29), which can be easily seen by rewriting (8). Then, for any wa, wb ∈ U, we can expand (28) by using (29), obtaining (30), for all ya, yb ∈ X × M, where the last equality follows from the definition of Uj and by expanding the norm. Matrix W can be written as W = [W11, W12; W21, W22], where W11 ∈ R^(nN×nN), W12 ∈ R^(nN×m), W22 ∈ R^(m×m). Expanding the inner product in (30) with respect to the matrix blocks W11, W12, W21, W22 we obtain (31), where for the last inequality we used, in order, (i) strong monotonicity of F; (ii)-(iii) the nonpositivity of the terms involving Q, which follows from the same arguments used in the proof of Lemma 2; and (iv) (pτ + qτ)⊺(pτ + qτ) ≤ 2(pτ⊺pτ + qτ⊺qτ). Expanding the term containing pτ, qτ in (31) we get (32), where (a) is obtained by applying the Cauchy-Schwarz inequality, and in (b) we use the Lipschitz continuity of F and the triangle inequality. Notice that the bound on the last term in (32) holds for any choice of τ ∈ (0, max{1/∥A∥, 1}). Recall that, by invoking [20, Thm. 12.5.2], our objective is to show that (28) holds for some τ > 0 and β > 1/2. Then, by inspecting (32) and using (33), it is sufficient to guarantee that the corresponding quadratic conditions in τ are satisfied. Solving these quadratic expressions with respect to τ results in the admissible range of values in (15) (these also satisfy τ ∈ (0, max{1/∥A∥, 1}), required for (33) to hold). Therefore, for any τ satisfying this condition, T̃D is cocoercive with β > 1/2 on the entire domain U, which in turn implies that cocoercivity of T̃D holds on each subdomain Uj, j = 1, . . ., q, with the same modulus. By [20, Thm. 12.5.2], this is sufficient to guarantee the convergence of (27) to a solution of VI(X × Mj, T), thus concluding the proof. ■
Proof of Theorem 2
Fix any τ satisfying the conditions of Lemma 3 and (16).
Due to our contradiction hypothesis (recall that {κi}i∈N is a subsequence), the sequence of iterates generated by Algorithm 1 would be leaving M1 towards M2 infinitely often. Denote then by κ̄ > κ the smallest index of the subsequence such that µ(κ̄) ∈ M1 but µ(κ̄+1) ∈ M2, i.e., after the κ̄-th iterate the original sequence would jump to M2 (for the first time after κ). For this jump to occur, the unprojected solution for the Lagrange multipliers must be "closer" to M2 than to any other subdomain of M. To see this more formally, let Dµ^(−1) denote the lower block-row of
D^(−1) = [ τ InN , 0 ; 2τ²A , τ Im ],
corresponding to the Lagrange multiplier update in line 3 of Algorithm 1. By definition of M, such a jump requires the Euclidean distance between the unprojected gradient step at κ̄ + 1 and µ(κ̄) to satisfy (35). Figure 5 illustrates this construction: (35) describes the minimum distance for a jump to occur. This is when the ellipsoidal contour levels according to which the projection is performed (the skew projection defined by the matrix D) have their major axis aligned between subdomains as in Figure 5 (solid red ellipses). For this two-dimensional example, this distance would then be half the width of the white stripe, i.e., ζ/√2.
We rather impose ζ/2 (which is smaller) in (35), to account for the case where one of the subdomains is the origin (M3). However,
∥[µ(κ̄) − Dµ^(−1)T(y(κ̄), ρ, M)] − µ(κ̄)∥ = τ∥−2τA(F(x(κ̄)) + A⊺µ(κ̄)) + Ax(κ̄) − b + Q(µ(κ̄), M)ρ∥ = τ∥−2τA(F(x(κ̄)) − F(x̄1) + A⊺(µ(κ̄) − µ̄1)) + · · · ∥ ≤ · · ·,    (36)
where the first equality follows from the definition of Dµ^(−1) and T, and the second one by adding and subtracting F(x̄1), A⊺µ̄1 and Ax̄1. The first inequality is due to the triangle inequality, while the last one follows from the previous one by upper-bounding (i) the first two terms using the definition of R; (ii) ∥Q(µ(κ̄), M)ρ∥ by cρ√(m − M), based on its definition; and (iii) the last three terms using ∥F(x(κ̄)) − F(x̄1)∥ ≤ LF∥x(κ̄) − x̄1∥ by Assumption 3, and ∥x(κ̄) − x̄1∥ ≤ δ, ∥µ(κ̄) − µ̄1∥ ≤ δ. By (36), and choosing τ as in (16), we have (37), where K is a constant emanating from the coefficient of δ in (36) when substituting for τ the upper bound in (16). Note that κ̄ is a function of δ, as it depends on κ, which in turn depends on δ. Since δ is arbitrary, taking lim sup δ→0 in (37) and lim inf δ→0 in (35) establishes a contradiction. Then µ̄2, µ̄1 ∈ M1, i.e., all cluster points must be in the same subdomain of M. As Lemma 3 establishes cocoercivity of T on each subdomain X × Mj, j = 1, . . ., q, it must be that µ̄2 = µ̄1, i.e., Ω is a singleton, implying that Algorithm 1 converges. ■
Proof of Theorem 3
The elements of the minimal compression set I of Algorithm 1 can belong to one or both of the following sets:
(1) The subset I1 of samples that form a minimal compression for x*. Note that since Algorithm 1 converges to the point (x*, µ*) for a fixed choice of M, Q(µ*, M) is a fixed quantity. Then Algorithm 1 converges to a solution of: Find x* ∈ Π̃K such that F(x*)⊺(x − x*) ≥ 0 for all x ∈ Π̃K, (40) where Π̃K denotes the polytope obtained from ΠK by tightening at most M coupling constraints, as dictated by (12) with Q(µ*, M). The constraints in (40) are equivalent to F(x*)⊺x ≥ F(x*)⊺x* for all x ∈ Π̃K. Then, x* is the minimiser of min_{x∈Π̃K} F(x*)⊺x, which is unique due to Lemma 1. Since the cost function is linear in x and Π̃K is convex by Assumption 2, we obtain a scenario program as in [11], where the last equality is due to (7b); depending on the choice of norm, c = 1 if B(•, ρ) is expressed by a p-norm with p ≤ 2, and c = √n otherwise. Conversely, at most M constraints can intersect B(x*, ρ) upon convergence of the algorithm. Let L(M) ⊆ {1, . . ., m} contain the indices of the M largest multipliers. Then, ℓ ∈ L(M) ⇔ (Q(µ, M)ρ)ℓ = 0, and the second block-row of T in (8) expresses the tightening of the remaining m − M constraints, preventing B(x*, ρ) from intersecting them.
Fig. 2 .
Fig. 2. Domain M of the Lagrange multipliers associated with the coupling constraints, for ζ = 0.2 and m = 3. This results in q = 10 convex subsets, including the origin and a portion of the axes.
Fig. 3.
Fig. 3. Iterates generated by Algorithm 1 (blue diamonds) for different choices of M. In this numerical instance, N = 50, ρ = 10, and Xi := {xi ∈ R^n : xi ∈ [x̲i, x̄i]}, with x̲i = (0, 0), x̄i = (3.5, 3.5). The randomly generated coupling constraints form the rectangular feasibility region Πσ_K (delineated by the solid black line). The red-shaded region represents the intersection between the latter and the ball B1(σ*, ρ/N) around the aggregate equilibrium σ* (red diamond marker). In this instance, its volume increases as larger values of M are chosen. The value associated with the contour lines of the potential function E decreases from top-right to bottom-left, showing that σ* is the unique minimiser in the admissible region (shaded in green) after constraint tightening is performed by the algorithm (see Sec. 3.2.2).
Fig. 4.
Fig. 4. Potential function E(x(κ)) evaluated along the iterations of Algorithm 1. Lower values of M yield better confidence on the theoretical robustness certificates for the considered region (see Thm. 3), which results in a lower empirical probability of constraint violation. On the other hand, the system-level efficiency of the equilibrium increases for higher values of M.
Fig. 5 .
Fig. 5. Domain M of the Lagrange multipliers associated with the coupling constraints, for the case m = 2. Notice that the minimum distance ζ between any two subdomains of M involves the origin as one of these subdomains.
Recruitment methods in Alzheimer's disease research: general practice versus population based screening by mail
Background: In Alzheimer's disease (AD) research, patients are usually recruited from clinical practice, memory clinics or nursing homes. Lack of standardised inclusion and diagnostic criteria is a major concern in current AD studies. The aim of the study was to explore whether patient characteristics differ between study samples recruited from general practice and from a population based screening by mail within the same geographic areas in rural Northern Norway.
Methods: An interventional study in nine municipalities with 70000 inhabitants was designed. Patients were recruited from general practice or by population based screening of cognitive function by mail. We sent a questionnaire to 11807 individuals ≥ 65 years of age, of whom 3767 responded. Among these, 438 individuals whose answers raised a suspicion of cognitive impairment were invited to an extended cognitive and clinical examination. Descriptive statistics, chi-square, independent sample t-test and analyses of covariance adjusted for possible confounders were used.
Results: The final study samples included 100 patients recruited by screening and 87 from general practice. Screening through mail recruited younger and more self-reliant male patients with a higher MMSE sum score, whereas older women with more severe cognitive impairment were recruited from general practice. Adjustment for age did not alter the statistically significant differences in cognitive function, self-reliance and gender distribution between patients recruited by screening and from general practice.
Conclusions: Different recruitment procedures for individuals with cognitive impairment provided study samples with different demographic characteristics. Initial cognitive screening by mail, preceding extended cognitive testing and clinical examination, may be a suitable recruitment strategy in studies of early stage AD.
Trial registration: ClinicalTrial.gov Identifier: NCT00443014
Due to the increasing lifespan and the decreasing ratio of working to retired populations, the social and economic burden of neurodegenerative diseases is growing [3] and may threaten future welfare and health care. As a consequence, the European Science and Research Commission has declared that prevention, early identification, and postponement of AD onset should have high priority. In order to remove or reduce modifiable AD risk factors, increasing attention to cognitive impairment is needed within the medical communities, including more reliable early AD screening tools.
Characteristics of AD study participants depend on study design, inclusion and diagnostic criteria, recruitment method and age distribution [4,5]. In clinical trials, AD patients are usually recruited from memory clinics, hospitals and nursing homes, which makes the studies prone to selection bias [6]. The heterogeneity of diagnostic criteria and diagnostic tools reinforces these methodological challenges [7][8][9][10]. As a consequence, few validated screening questionnaires are available, and study comparisons may be hampered by lack of standardization [11][12][13]. In a recent population based study, Palmer et al. showed that mild cognitive impairment criteria failed to identify individuals with global cognitive deficits at high risk of AD progression [14].
The impact of different recruitment methods on sample characteristics is insufficiently examined [15], whereas studies comparing sample characteristics of individuals recruited by different methods from the same population are lacking.
The aim of this paper is to compare clinical and demographic characteristics in AD individuals recruited from general practice or by population based screening questionnaires in the same geographical area.
Participants
The Dementia Study in Rural Northern Norway planned to recruit 200 patients with recently diagnosed AD in primary health care in nine rural municipalities, from January 2006 to December 2007. AD patients were recruited from general practice (n = 87) and through population based screening (n = 100) based on the same diagnostic criteria. Both groups underwent similar cognitive, physical and laboratory examinations. Inclusion criteria were individuals aged ≥ 65 years with a MMSE sum score ≥ 10 and ≤ 30 points. Exclusion criteria were delirium, behavioral disturbances interfering with cognitive and clinical testing, reluctance to participate, inability to understand the purpose of the study, or disapproval of participation by relatives/caregivers.
Recruitment by population based screening
Due to a low inclusion rate by general practitioners (GPs) during the first year, the recruitment method was extended to comprise a population based screening for cognitive impairment. An invitation letter enclosing a questionnaire modified from the Cambridge Examination for Mental Disorders of the Elderly and Strawbridge et al [11,12] was sent to 11807 individuals ≥ 65 years of age in the participating municipalities. The first question was on participation (Do you want to participate in the Dementia study?). Four questions covered memory impairment (Has your memory deteriorated?), visuospatial skills (Do you forget where objects were left?), speech difficulties (Do you have difficulty finding the appropriate words?) and activities of daily living (Do you have difficulties in managing daily activities which earlier represented no problem?). An algorithm was designed to identify individuals at increased risk of having cognitive impairment (Appendix 1). Based on this, 438 individuals were invited to an extended cognitive and clinical examination. A physician with a background in geriatric medicine examined the invited individuals, and supervised and completed the interdisciplinary diagnostic procedures. Among the respondents without self-reported cognitive impairment, a randomly selected reference group was established and underwent cognitive testing.
Examination of individuals with memory problems
Prior to study start, GPs and co-workers were trained to identify and diagnose AD based on the Norwegian guidelines [16]. A semi-structured interview of the participants focused on the onset and the course of cognitive impairment, emphasising memory and visuo-spatial disturbances, word-finding difficulties, and changes in executive functions including activities of daily living (ADL). A family member or a caregiver was encouraged to add medical data according to the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [17]. Cognitive function was tested with the Mini-Mental State Examination (MMSE) [18] and the Clock drawing test [19], and depression was assessed with the Montgomery-Åsberg Depression Rating Scale (MADRS) [20]. Neurological examination, blood tests and cerebral computed tomography (CT) were performed to exclude causes of dementia other than AD.
AD diagnosis
The diagnosis of dementia was made by GPs and discussed with at least one specialist in geriatric medicine according to the ICD-10 criteria [21], AD according to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV-TR), and probable AD according to the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria [22]. Disagreement or uncertainty about the diagnostic subtypes regarding 12 patients was solved by consulting a third specialist in geriatric medicine (MV).
Approvals
The present study is registered as an International Standard Randomised Controlled Trial within ClinicalTrials.gov and approved by the following bodies; The Regional Committee for Medical Research Ethics in Northern Norway, The Privacy Ombudsman for Research, The Directory of Health and Social Welfare and The Norwegian Medicine Agency including the EudraCT database (no 2004-002613-37). The Norwegian Medicine Agency concluded that the study was conducted according to the principles of Good Clinical Practice. Each participant gave a written informed consent co-signed by a spouse, a close relative or a guardian. The national authorities listed above have approved the consent formula.
This manuscript intends to comply with the CONSORT statement and the Uniform Requirements for Manuscripts Submitted to Biomedical Journals.
Statistics
Descriptive statistics, chi-square, independent-sample t-test, analyses of covariance adjusted for possible confounders, and multivariable analysis were used with a two-sided significance level of 5%. SPSS version 15 was applied for both data management and analysis.
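As a minimal sketch of the simpler analyses named above, the following Python snippet runs an independent-sample t-test on MMSE sum scores and a chi-square test on a gender-by-recruitment contingency table. All numbers are hypothetical stand-ins, not the study's raw data; the age-adjusted analyses of covariance would additionally require a regression package such as statsmodels.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in data: MMSE sum scores for patients recruited
# by postal screening vs. from general practice.
mmse_screening = np.array([26, 25, 27, 24, 26, 25, 28, 23, 26, 27])
mmse_gp = np.array([21, 19, 22, 20, 18, 23, 21, 20, 19, 22])

# Independent-sample t-test on MMSE sum score (two-sided).
t_stat, p_mmse = stats.ttest_ind(mmse_screening, mmse_gp)

# Chi-square test on gender by recruitment method; illustrative counts
# loosely echoing the reported 54% vs. 23% male proportions.
gender_table = np.array([[54, 46],   # screening: men, women
                         [20, 67]])  # general practice: men, women
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_table)
```

With group differences this large, both tests return p-values well below the 5% significance threshold used in the study.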
Results
At the end of the inclusion period, 87 patients were recruited from GPs and 100 from population based postal screening. Figure 1 describes in detail the outcome of the screening method. In total, 3767 (31.3%) individuals responded to the questionnaire, of whom 438 persons met the criteria of self-reported cognitive impairment. The cognitive impairment (CI) group consisted of 292 individuals, but only 229 individuals underwent cognitive and clinical examinations due to withdrawals.
Seventy of these had no dementia, 31 had mild cognitive impairment [23] and 15 had cognitive impairment not due to AD. One hundred and thirteen individuals had probable AD constituting 49% of the clinically examined group (n = 229). Thirteen patients with probable AD withdrew before inclusion. Of those examined, but not included, 53% were women.
Among 791 individuals who answered "Yes" to question one (participation) and "No" to the remaining five screening questions, 500 individuals were randomly selected as references for the cases. The final reference group (Refgroup) comprised 199 individuals who underwent cognitive testing. A highly significant difference (p < .001, age adjusted) was found for MMSE sum score between individuals with and without self-reported cognitive impairment (CI-group versus Ref-group, Figure 1).
A comparison between the two recruitment methods (screening versus general practice) (Table 1) revealed that AD patients recruited by screening more often were male (p < .001), younger (p = .006), needed less community support (p < .001), and had a significantly higher MMSE sum score (p < .001) as compared to those recruited in general practice, also when adjusting for education and co-morbidity (p < .001). In a multivariable model, the estimates remained unchanged. Overall, men were younger than women (p = .001). Compared to men, women more frequently lived alone (p < .001, age adjusted) and more frequently needed support from community care (p = .04, age adjusted).
Discussion
The main finding of this comparative study was that AD patient characteristics differed according to recruitment method. Younger and more often male patients with a higher MMSE score were recruited by mail as compared to those recruited in general practice. The estimates of baseline characteristics remained significantly different when adjusting for education, coronary heart disease, hypertension, stroke and diabetes. However, the overall gender difference in MMSE score turned non-significant when adjusting for age. Of those recruited by population based screening, 54% were men, in contrast to only 23% of those from general practice. This is in accordance with the findings of Fitzpatrick et al, who reported that men younger than 85 years were more willing to attend clinical trials dealing with cognitive function. According to Norwegian Statistics 2008, the male proportions of the general population above 70 and 80 years were 42% and 35%, respectively [24].
Men recruited by screening were more frequently living in a relationship in their own home without community support as compared to men recruited from general practice. In contrast, female participants were older, less self-reliant and more likely to live alone. All these factors are probably highly inter-correlated, partly as a result of different life expectancies for men and women. Living alone may promote inactivity, isolation and stimulus deprivation, all contributing to dementia progression [25].
Our study confirms that different recruitment methods in AD research provide samples with different baseline characteristics. Similar findings have been reported by Izal et al, who emphasise that this could influence the results significantly [5]. A possible explanation is that a mailed questionnaire may alert those with a concern about their own memory problems or early stage dementia, whereas routine practice usually diagnoses patients with more obvious cognitive impairment and loss of compliance and self-care. According to the algorithm of the screening program, 438 persons were invited to a subsequent cognitive and clinical examination, of whom 113 (26%) were diagnosed with probable AD. In a large population based screening program conducted by Crews et al, 44.3% of the participants were recommended a follow-up. Of these, 24% were referred for objective memory impairment. A number of the patients reported that their GPs had never adequately assessed their memory complaints [26].
As for the present screening program, we found that a selection based on the answers to the postal questionnaire resulted in samples with highly significant differences in cognitive abilities (p < .001, age adjusted). Among 229 individuals undergoing clinical examination, 70 (30%) had no dementia but had a family member or close relative with such a diagnosis, which probably contributed to a higher level of concern regarding cognitive symptoms. This group (n = 70) had a similar MMSE sum score as the Ref-group. Study limitations include the relatively low number of participants in both groups. If screening questions are imprecise, information bias may threaten the internal validity of a study like this. We used questions based on the Cambridge Cognitive Examination, a widely accepted and reliable screening tool [11]. Our results indicate that the questions are capable of identifying individuals with MMSE scores corresponding to early AD. In our opinion, it is likely that the results are valid for western populations with similar demographics and co-morbidities.
It is known that GPs hesitate to diagnose mild cognitive impairment or early stage of dementia [27]. Mild to moderate cognitive impairment in the elderly, including early stage AD, seems to be disregarded by both relatives and health professionals, even though this stage of cognitive impairment has the best response to intervention. In this study, only one of five GPs in the nine municipalities joined the pre-study educational program aimed to improve AD diagnostics. Our findings are in accordance with those of Vernooij-Dassen et al who reported that GPs tend to postpone a comprehensive examination of patients who complain of memory problems [28]. Lack of therapeutic and diagnostic skills may contribute to this attitude [27]. Carters et al described such insufficient examination as a consequence of time constraints [29], whereas Wilcock et al reported that a number of GPs considered studies aiming to diagnose and manage AD irrelevant to their practice [30]. According to Norwegian recommendations, early stage dementia and MCI should be diagnosed in memory clinics and not in primary health care [31]. However, due to shortage of memory clinics, this strategy will not benefit the majority of elderly with early stage dementia. Under these circumstances GPs have to be supported and trained to detect and diagnose cognitive impairment.
Conclusions
Few AD studies have compared patient characteristics of different recruitment methods. In our study patients recruited by screening were younger, more frequently men and had a higher MMSE sum score as compared to those recruited from general practice. Screening of cognitive function by questionnaires enables recruitment of early stage AD, whereas general practice recruits patients with more advanced disease. Future dementia studies on prevention and treatment should take recruitment methods into consideration, in particular when focusing on early stage AD. According to our experience, the applied screening questionnaire, preceding a clinical examination, would be a suitable strategy to identify and diagnose AD in primary health care.
Stabilization of PIM Kinases in Hypoxia Is Mediated by the Deubiquitinase USP28
Proviral integration sites for Moloney murine leukemia virus (PIM) kinases are upregulated at the protein level in response to hypoxia and have multiple protumorigenic functions, promoting cell growth, survival, and angiogenesis. However, the mechanism responsible for the induction of PIM in hypoxia remains unknown. Here, we examined factors affecting PIM kinase stability in normoxia and hypoxia. We found that PIM kinases were upregulated in hypoxia at the protein level but not at the mRNA level, confirming that PIMs were upregulated in hypoxia in a hypoxia inducible factor 1-independent manner. PIM kinases were less ubiquitinated in hypoxia than in normoxia, indicating that hypoxia reduced their proteasomal degradation. We identified the deubiquitinase ubiquitin-specific protease 28 (USP28) as a key regulator of PIM1 and PIM2 stability. The overexpression of USP28 increased PIM protein stability and total levels in both normoxia and hypoxia, and USP28-knockdown significantly increased the ubiquitination of PIM1 and PIM2. Interestingly, coimmunoprecipitation assays showed an increased interaction between PIM1/2 and USP28 in response to hypoxia, which correlated with reduced ubiquitination and increased protein stability. In a xenograft model, USP28-knockdown tumors grew more slowly than control tumors and showed significantly lower levels of PIM1 in vivo. In conclusion, USP28 blocked the ubiquitination and increased the stability of PIM1/2, particularly in hypoxia. These data provide the first insight into proteins responsible for controlling PIM protein degradation and identify USP28 as an important upstream regulator of this hypoxia-induced, protumorigenic signaling pathway.
Introduction
Hypoxia is common in cancers. As the tumor proliferates, it rapidly outgrows its blood supply, leading to areas of low oxygen tension. Both healthy and tumor cells compensate for low oxygen tension by enacting a transcriptional program driven by the hypoxia-inducible factor 1 (HIF-1) transcription factor [1]. However, tumor cells have additional adaptive responses to hypoxia that allow them to survive in this harsh microenvironment [2]. Identifying such factors, particularly if they are actionable targets, may provide potential therapeutic options to oppose the well-established oncogenic effects of hypoxia in patients with solid tumors.
The Proviral Integration site for Moloney murine leukemia virus (PIM) proteins are serine/threonine kinases that are involved in cytokine signaling [3]. They are upregulated in multiple cancer types, most commonly in hematopoietic cancers and prostate cancer [4][5][6][7]. PIM kinases are best known for their role in helping cells to evade apoptosis through their direct phosphorylation of BAD [8][9][10]. However, they also promote tumor cell survival through other mechanisms, such as decreasing lethal levels of reactive oxygen species and regulating mitochondrial dynamics [11,12]. Here, we show that knockdown of USP28 blunts the upregulation of PIM1 in xenograft tumor models and results in reduced tumor growth.
Tissue Culture
HCT116 (human colon cancer) and 293T (transformed human embryonic kidney) cells were purchased from the American Type Culture Collection (Manassas, VA, USA). PC3-LN4 (prostate cancer) cells were a gift from Dr. Andrew Kraft. This cell line was created from the serial orthotopic transplantation of parental PC3 cells and subsequent harvest from lymph node metastases [41]. Parental and genetically modified HCT116 and PC3-LN4 cells were grown in RPMI with 10% fetal bovine serum (FBS), and 293T cells were grown in DMEM with 10% FBS. All cells were cultured at 37 °C in 5% CO2, routinely screened for mycoplasma, authenticated by short tandem repeat DNA profiling by the University of Arizona Genetics Core Facility, and used for fewer than 50 passages. For experiments involving hypoxia (1% O2), cells were cultured in a hypoxic environment (1% O2, 5% CO2, and 94% N2) using an InVivo2 400 hypoxia workstation (Baker Ruskinn, Sanford, ME, USA).
PC3-LN4-knockdown cells were created by transducing cells with a control virus or viruses encoding short hairpin RNAs (shRNAs) against USP28. Cells were selected under puromycin treatment.
Quantitative Polymerase Chain Reaction (qPCR)
RNA was extracted using the Quick-RNA Miniprep Kit (Zymo, Irvine, CA, USA) and reverse-transcribed using the qScript cDNA Synthesis Kit (Quantabio, Beverly, MA, USA). qPCR was performed using a CFX96 Lightcycler (Bio-Rad, Hercules, CA, USA) and qPCRBIO qPCR Master Mix (PCR Biosystems, Wayne, PA, USA). Primers were purchased from Qiagen (Germantown, MD, USA) or Integrated DNA Technologies (Coralville, IA, USA). Primer sequences are listed in Table 1. The expression of target genes relative to beta-actin was quantified using the 2^-ΔΔCT method.
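The 2^-ΔΔCT (Livak) quantification can be written out explicitly. This is a generic sketch of the method with illustrative Ct values, not values from this study; the function name is our own.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method: each Ct is
    first normalized to the reference gene (beta-actin here), then the
    sample condition is compared to the control condition."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# A one-cycle rise in target Ct relative to the reference gene halves
# the estimated expression:
# fold_change_ddct(25.0, 15.0, 24.0, 15.0) -> 0.5
```

Identical ΔCt values in sample and control give a fold change of exactly 1, i.e. no relative change in expression.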
Western Blotting
Proteins were extracted from cells or tumor tissues using RIPA buffer (150 mM NaCl, 1% NP-40, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate, and 50 mM Tris, pH 7.4) with protease inhibitors, and an equal amount of each lysate was loaded onto a 10% sodium dodecyl sulfate-polyacrylamide gel. Lysates were electrophoretically transferred to polyvinylidene fluoride or nitrocellulose membranes, which were then blocked with 5% milk or 1% casein, respectively. Membranes were washed with Tris-buffered saline with Tween (TBST) and incubated with primary antibodies at 4 °C overnight. Membranes were then washed with TBST and incubated with horseradish peroxidase-conjugated or fluorescently labeled secondary antibodies for 1 h at room temperature and imaged using ECL or a LiCor imager, respectively.
Immunoprecipitation and Protein Degradation Assays
To assess the ubiquitination of PIM isoforms under different conditions, 293T cells or PC3-LN4 USP28-knockdown cells were transfected with HA-tagged PIM1, PIM2 (HA-PIM1 or HA-PIM2), or GFP-PIM3 overnight. Then, as appropriate, cells were treated with PR-619 or placed in hypoxia. Cells were treated with the proteasome inhibitor MG-132 for 2 or 4 h before harvest to block the degradation of ubiquitinated PIM isoforms. To determine whether USP28 bound to PIM kinases, 293T cells were co-transfected with Myc-tagged USP28 (USP28-Myc) and HA-tagged PIM1 or PIM2 (HA-PIM1 or HA-PIM2) overnight. Then, cells were treated as stated and placed in hypoxia for the stated times. Cells were harvested in an IP lysis buffer (20 mM Tris HCl, pH 8; 137 mM NaCl; 10% glycerol; 1% Nonidet P-40; and 2 mM EDTA) with protease inhibitors and centrifuged at 15,000 rpm for 10 min. Lysates were incubated overnight at 4 °C with HA magnetic beads (Pierce Biotechnology, Waltham, MA, USA) or GFP magnetic beads (Chromotek, Islandia, NY, USA) and subjected to western blotting as described above.
To assess changes in protein degradation, 293T cells were co-transfected with HA-PIM1 and either Flag-USP28 or a control vector. The following day, cells were treated with 10 µM CHX in hypoxia or normoxia to block new translation and harvested at the stated time points. Western blotting was performed as described above.
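Half-lives from a cycloheximide-chase series like this one are typically estimated by fitting first-order (log-linear) decay to the densitometry time course. The sketch below assumes simple exponential decay and uses synthetic data, not the paper's measurements; the function name is our own.

```python
import math
import numpy as np

def half_life_from_chase(times_h, densitometry):
    """Estimate a protein half-life from a cycloheximide-chase series by
    least-squares fitting ln(signal) = ln(S0) - k*t, so that
    t_1/2 = ln(2) / k. Inputs are time points (hours) and band
    intensities in arbitrary densitometry units."""
    t = np.asarray(times_h, dtype=float)
    y = np.log(np.asarray(densitometry, dtype=float))
    slope, _intercept = np.polyfit(t, y, 1)
    return math.log(2) / -slope

# e.g. points sampled from 100 * 0.5 ** (t / 2) recover a ~2 h half-life.
```

On noisy real densitometry the same fit gives the half-life estimates reported in the Results (e.g. 1.26 h vs. 7.3 h for PIM2 with and without USP28 overexpression).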
In Vivo Experiments
All animal studies were approved by the Institutional Animal Care and Use Committee of the University of Arizona. Male NOD/SCID mice at 6-8 weeks of age were used. Five million control or shUSP28-1 PC3-LN4 cells in PBS were injected subcutaneously into the rear flanks of eight mice each. Tumor volume was measured over time by caliper and calculated using the equation V = (tumor width)² × tumor length/2. Mice were administered sunitinib (100 mg/kg; Adooq Bioscience, Irvine, CA, USA) or vehicle daily once the tumors reached ~100 mm³ (n = 4 mice and 8 tumors/group). Mice were sacrificed when the tumor volume reached ~2000 mm³. After sacrifice, tumors were harvested for downstream experiments. Immunohistochemical staining was performed to assess PIM1 levels.
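The caliper calculation is the widely used modified-ellipsoid approximation, V = width² × length / 2, with width the shorter of the two measurements. A minimal sketch (function name and example values are ours):

```python
def tumor_volume_mm3(width_mm, length_mm):
    """Modified-ellipsoid caliper estimate of tumor volume:
    V = width^2 * length / 2, where width is the shorter of the two
    caliper measurements (all dimensions in mm, volume in mm^3)."""
    return (width_mm ** 2) * length_mm / 2.0

# e.g. a 10 mm x 20 mm tumor: 10**2 * 20 / 2 = 1000 mm^3
```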
Statistical Analysis
Western blot densitometry was performed using Image J v1.51u (National Institutes of Health, Bethesda, MD, USA). Statistical analysis was performed using Microsoft Excel, version 2108 (Microsoft, Redmond, WA, USA). A p value < 0.05 was considered statistically significant.
PIM Kinases Are Upregulated in Hypoxia at the Protein Level
We have previously observed that PIM kinases are increased in hypoxia in prostate, breast, and colon cancer cells [14]. However, the mechanism underlying this increase in PIM protein levels is unknown. Many proteins that are upregulated in hypoxia are target genes of the hypoxia-inducible transcription factor HIF-1. To assess whether the PIM isoforms are transcriptionally upregulated in hypoxia, we examined the protein and RNA levels in cells cultured in hypoxia (1.0% O2) for 4 or 8 h. In both the HCT116 and PC3-LN4 cell lines, we observed robust increases in PIM1, PIM2, and PIM3 protein levels after 4 and 8 h in hypoxia compared to the levels in normoxia (Figure 1A). Notably, although classic HIF-1 target genes, such as hexokinase 2 (HK2), were increased in hypoxia, there were no significant differences in the mRNA levels of PIM1, PIM2, or PIM3 (Figure 1B) in hypoxia and normoxia, indicating that the increase in PIM kinase levels in hypoxia occurs at the post-translational level and is independent of HIF-1 transcriptional activation. The total levels of the majority of cellular proteins are regulated through degradation by the 26S proteasome, which occurs after polyubiquitination. To determine whether hypoxia altered the rate of PIM ubiquitination, we transfected HA-PIM1, HA-PIM2, or GFP-PIM3 into 293T cells that were cultured in normoxia or 1.0% O2 prior to treatment with MG-132 (10 µM), a proteasome inhibitor, to block proteasomal degradation and preserve the ubiquitinated form of PIM. Lysates were collected over time, the tagged proteins were immunoprecipitated, and ubiquitination was assessed by immunoblotting.
The rates and total amounts of ubiquitination of all PIM isoforms were significantly reduced in hypoxia compared to those in normoxia after 2 and 4 h of MG-132 treatment (Figure 1C), suggesting that the ubiquitination of PIM kinases is impaired in hypoxia compared to normoxia, which favors protein stability. Because PIM1 and PIM3 frequently act on similar substrates and have high homology [42,43], we expect that PIM1 and PIM3 are regulated similarly. Therefore, we focused on the regulation of PIM1 and PIM2 for further experiments.
PIM Kinases Are Regulated by DUBs
Decreased ubiquitination can result from either the decreased activation of E3 ubiquitin ligases or increased activation of DUBs. To determine whether PIM levels are sensitive to deubiquitination, we treated PC3-LN4 cells with PR-619, a pan-DUB inhibitor, or HBX 41108, a USP7 inhibitor that has shown broader spectrum activity at low concentrations [44]. The treatment with PR-619 significantly decreased PIM1 and PIM2 protein levels, whereas HBX 41108 had no effect on PIM levels ( Figure 2A). These data indicated that PIM1/2 stability was acutely controlled by deubiquitination. As expected, the treatment with MG-132 increased PIM1 and PIM2 levels, confirming that PIM kinases were degraded by the 26S proteasome. Notably, MG-132 treatment blocked the reduction of PIM levels observed with PR-619. Together, these data indicated that deubiquitination plays a key role in controlling the proteasomal degradation of PIM kinases ( Figure 2A).
Next, we assessed whether the ubiquitination of PIM1/2 was also sensitive to DUB inhibition using the previously described ubiquitination assay. To this end, cells were pretreated with DMSO or PR-619 for 30 min prior to the addition of MG-132, and lysates were collected at 2 and 4 h. Immunoblotting for ubiquitin after immunoprecipitation revealed that PIM1 and PIM2 were more highly ubiquitinated in cells treated with PR-619, providing further evidence that a DUB is responsible for regulating the ubiquitination and degradation of PIM kinases ( Figure 2B). Based on the literature, we identified four DUBs that have been associated with hypoxia: USP13, USP28, USP46, and CYLD [32]. To determine whether any of these candidates affected PIM1/2 protein levels, we transfected each into PC3-LN4 cells and monitored PIM1/2 expression by western blotting. While the ectopic overexpression of several DUBs increased PIM1/2 levels, USP28 caused the greatest increase ( Figure 2C). Importantly, USP28 is not inhibited by HBX 41108 [44], which explains why PR-619 decreased PIM levels but HBX 41108 did not. Therefore, we explored the potential of PIM1 and PIM2 as substrates of USP28.
USP28 Increases PIM Stability
Because DUBs stabilize their target proteins, we assessed the effect of USP28 overexpression on PIM kinase stability. To this end, 293T cells transfected with a control vector or USP28 were treated with CHX, lysates were collected at the indicated time points, and the half-lives of PIM1 and PIM2 were assessed by western blotting and densitometry. In normoxia, the half-life of PIM2 in cells transfected with the control vector was 1.26 h, whereas the half-life of PIM2 in cells transfected with USP28 was 7.3 h, indicating that the overexpression of USP28 significantly increased PIM2 stability (p = 0.01). We observed similar results with PIM1 (0.98 h vs. 1.95 h) (Figure 3A). We repeated this experiment in 1% O2 and observed that USP28 overexpression led to even greater stabilization of PIM1 and PIM2 (vector vs. USP28: PIM1, 2.00 h vs. 5.37 h, p = 0.01; PIM2, 1.73 h vs. 7.45 h, p = 0.007) (Figure 3B). Next, we created USP28-knockdown cells (shUSP28) by transducing PC3-LN4 cells with two different shRNAs against USP28. Control cells were transduced with the empty vector. The knockdown of USP28 was sufficient to block the induction of PIM1/2 in response to hypoxia, suggesting that USP28 is required to regulate PIM1/2 expression in hypoxia (Figure 3C). It is of note that total levels of USP28 were not altered by hypoxia, suggesting an increase in activity toward PIM instead of general USP28 upregulation (Figure 3C). Because shRNA #1 displayed a stronger knockdown of USP28, we used this shRNA for further experiments. We next assessed the effect of USP28 knockdown on PIM1/2 ubiquitination.
Control or shUSP28 PC3-LN4 cells were transfected with HA-PIM1 or HA-PIM2, treated with MG-132, and harvested at 2 or 4 h, after which PIM isoforms were immunoprecipitated. PIM1 and PIM2 ubiquitination was significantly increased in cells lacking USP28 ( Figure 3D). Taken together, these results indicated that USP28 is sufficient to regulate the stability of PIM kinases, regardless of oxygen tension, and necessary for the induction of PIM kinases in response to hypoxia.
USP28 Interacts with PIM Kinases Preferentially in Hypoxia
Because we observed no increase in USP28 levels in hypoxia, we hypothesized that hypoxia might increase the affinity of USP28 for PIM kinases. To examine this, we performed co-immunoprecipitation to determine whether these proteins preferentially interact in hypoxia. 293T cells were transfected with HA-PIM1/2 and USP28-Myc and incubated in normoxia or hypoxia for 1 or 6 h prior to harvest. HA-PIM1/2 were immunoprecipitated, and USP28 interaction was monitored by blotting for Myc. Interestingly, USP28 was only bound to PIM1 and PIM2 in hypoxia, and this binding occurred as early as 1 h ( Figure 4A,B). Hence, the induction of PIM kinases in hypoxia can be attributed to increased interaction with USP28 and subsequent deubiquitination. We also examined the effect of protein kinase B (Akt) inhibition on this interaction, as the E3 ubiquitin ligase most commonly associated with USP28-FBW7-is regulated by glycogen synthase kinase 3β (GSK-3β) through Akt [45]. The inhibition of Akt activity did not affect the binding of USP28 and PIM2 in normoxia or hypoxia ( Figure 4C).
(Figure 4 legend: HA-PIM1/2 were immunoprecipitated, and immunoprecipitated and input lysates were used for western blotting. Cells transfected with USP28-Myc alone were used as a negative control. (C) 293T cells were co-transfected with HA-PIM2 and USP28-Myc, treated with vehicle or an Akt inhibitor (AZD5383), and cultured in normoxia or hypoxia for 6 h. HA-PIM2 was immunoprecipitated, and immunoprecipitated and input lysates were used for western blotting.)
USP28 Regulates PIM Protein Levels In Vivo
Finally, we performed in vivo tumorigenesis assays to confirm the relevance of this signaling axis in tumors and further investigate the role of USP28 in tumor growth. Five million control or shUSP28 PC3-LN4 cells were injected subcutaneously into the flanks of immunocompromised mice. We previously observed that treatment with sunitinib (an inhibitor of vascular endothelial growth factor [VEGF] signaling) results in hypoxia and significantly increases PIM1 levels [14]. Therefore, we treated both cohorts with a vehicle or sunitinib once tumors were established. In the vehicle-treated mice, control tumors grew more rapidly than shUSP28 tumors, indicating that USP28 promotes tumor growth in this prostate cancer model, potentially by inducing PIM1/2 expression. Although we did not observe a significant difference in the tumor volume, shUSP28 tumors tended to be smaller than control tumors, and sunitinib was able to further decrease the size of these tumors ( Figure 5A). This effect mimics previous findings from our group showing that a combined inhibition of PIM and VEGF signaling produces an enhanced antitumor activity [14]. At the end of the study, tumors were harvested to assess PIM1 levels in each cohort. The western blotting analysis of four individual tumors showed a significant reduction in PIM1 in tumors lacking USP28 ( Figure 5B). The immunohistochemical staining of PIM1 confirmed this result, showing a significant decrease in PIM1 in shUSP28 tumors compared to that in control tumors ( Figure 5C). Moreover, we observed a dramatic increase in PIM1 in sunitinib-treated tumors compared to that in vehicle-treated tumors, whereas there was only a modest increase in PIM1 levels following sunitinib treatment in the shUSP28 tumors that was equivalent to the levels in untreated controls, suggesting that the hypoxic induction of PIM1 is highly sensitive to the loss of USP28 ( Figure 5C).
Discussion
PIM kinases play important protumorigenic roles in multiple cancer types [5]. They are particularly important in prostate cancer, where they are commonly upregulated [4]. This upregulation is of particular interest, because the prostate gland is highly hypoxic [46]. Although our group and others previously observed that PIM1 levels are increased in hypoxia, the mechanism underlying this phenomenon has never been described. This increase in PIM kinases in hypoxia allows cancer cells to survive hypoxic stress [11], including reactive oxygen species, which are increased in hypoxia [47]. Being able to respond to hypoxia is vital for tumor cells, since tumors rapidly outgrow their blood supply as they proliferate. This leads to decreased oxygen throughout the tumor, and the tumor must respond by promoting angiogenesis or new blood vessel growth, which provides both oxygen and nutrients to the tumor. Our previous work showed that PIM kinases can promote tumor angiogenesis [14]. Here, we identified an association between PIM kinases and the DUB USP28. USP28 increased PIM1 and PIM2 protein stability and interacted with PIM1/2 preferentially in hypoxia ( Figure 5D). Our results showed that USP28 was necessary for the increase in PIM observed in hypoxia both in vitro and in vivo.
Unlike most protein kinases, PIM kinases do not contain any regulatory domains and are constitutively active upon translation. Therefore, characterizing the mechanisms that control PIM levels is critically important for understanding how these kinases are dysregulated in cancer. Previous studies have largely focused on the transcriptional regulation of PIM, namely via JAK/STAT signaling. In contrast, we found that hypoxia did not change the transcript levels of PIM1, PIM2, or PIM3, suggesting that hypoxia impacts PIM at the post-translational level. This is somewhat uncommon in hypoxia, as a vast majority of hypoxia-induced proteins can be attributed to HIF-1 transcriptional upregulation, including factors that promote angiogenesis, such as VEGF and angiopoietin-like 4 [48,49], or relieve the deleterious effects of hypoxia, such as HK2 and heme oxygenase 1 [50,51]. Although we observed HIF-1 target genes upregulated at the transcriptional level in hypoxia, we did not observe any significant increase in PIM kinase transcript levels ( Figure 1).
Instead, we showed that hypoxia altered the ubiquitination and proteasomal degradation of PIM1/2 and that this effect was dependent upon the activity of DUBs. A screen of DUBs that are associated with hypoxia led us to identify USP28 as a key factor in controlling PIM protein stability. The overexpression of USP28 increased PIM1/2 stability, whereas the knockdown of USP28 decreased PIM1/2 levels and increased their ubiquitination. The tight regulation of the deubiquitination process is an important mechanism by which hypoxic cells can regulate their protein complement without the high cost of new translation [32]. Because of their low oxygen tension, hypoxic cells are unable to undergo oxidative phosphorylation; this is particularly true of hypoxic cancer cells. Therefore, being able to rescue specific factors from proteasomal degradation can save cells from having to expend the energy to translate proteins anew. This process has been best studied in the regulation of HIF-α subunits themselves. In addition to the loss of ubiquitination due to the inactivation of prolyl hydroxylases, some DUBs, including USP28, have been shown to deubiquitinate HIF-α subunits, increasing their protein concentration and stimulating the subsequent transcriptional response [32]. However, most DUBs that have been shown to be increased in hypoxia are increased at the transcriptional level downstream of HIF-1 activation [32]. Conversely, USP28 activity has been shown to be differentially regulated in hypoxia by SUMOylation [52], suggesting that USP28 might be particularly active in hypoxia.
Mechanistically, USP28 preferentially bound to PIM1/2 in hypoxia, suggesting that it is recruited to PIM kinases particularly under low oxygen tension. USP28 is usually recruited to its substrates through interaction with an E3 ubiquitin ligase [38], most commonly FBW7. However, we did not observe any interaction between FBW7 and PIM kinases ( Figure S1). Further, many FBW7 targets are phosphorylated by GSK-3β, which has been shown to be a direct target of Akt [53]. However, Akt inhibition, which led to active GSK-3β (i.e., no S9 phosphorylation), did not affect the interaction of USP28 with PIM2 ( Figure 4C), suggesting that a different E3 ubiquitin ligase is responsible for ubiquitinating PIM kinases and recruiting USP28. Previous studies have described the recruitment of USP28 by the E3 ubiquitin ligases kelch-like family member 2 (KLHL2) [54] and ring finger and CHY zinc finger domain containing 1 (RCHY1) [55], but little is known about how these factors are affected by tumor hypoxia. KLHL2 has mainly been studied in hypertension [56], another disease in which PIM kinases play key roles and their overexpression is associated with poor prognosis [57]. This is intriguing, as we have previously shown PIM kinases to be necessary for the induction of new blood vessel formation in prostate cancer [14]. There is no literature on RCHY1 in hypoxia, but previous studies have shown that it may be involved in prostate carcinogenesis. For instance, RCHY1 interacts with the androgen receptor to promote target gene expression [58] and promotes the degradation of p53 [59]. These ligases and others may regulate hypoxia-inducible proteins in prostate cancer through USP28. Identifying the E3 ligase associated with the USP28-PIM axis will help clarify the underlying biology of prostate cancer.
In conclusion, we identified the DUB USP28 as a novel regulator of PIM stability in hypoxia. This hypoxia-induced pathway plays vital roles in tumor progression, so identifying factors regulating this pathway is important for understanding the underlying tumor biology.
|
2022-03-19T15:12:11.545Z
|
2022-03-01T00:00:00.000
|
{
"year": 2022,
"sha1": "b6e96d5468beb180b94b73bd4de3c0fcb554601d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/11/6/1006/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e272476a6c86ada7ea67384e668abd2215529d11",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9751414
|
pes2o/s2orc
|
v3-fos-license
|
True Durability: HIV Virologic Suppression in an Urban Clinic and Implications for Timing of Intensive Adherence Efforts and Viral Load Monitoring
Although the majority of HIV-infected patients who begin potent antiretroviral therapy should expect long-term virologic suppression, the realities in practice are less certain. Durability of viral suppression was examined to define the best timing of targeted adherence strategies and intensive viral load monitoring in an urban clinic population with multiple challenges to ART adherence. We examined the risk of viral rebound for patients who achieved two consecutive viral loads lower than the lower limit of quantification (LLOQ) within 390 days. For 791 patients with two viral loads below the LLOQ, viral rebound >LLOQ from the first viral load was 36.9 % (95 % CI 32.2–41.6) in the first year, 26.9 % (95 % CI 21.7–32.1) in the year following one year of viral suppression, and 24.6 % (95 % CI 18.4–30.9) in the year following 2 years of viral suppression. However, for patients with CD4 ≥300 cells/µl who had 3–6 years of virologic suppression, the risk of viral rebound was very low. At the population level, the risk of viral rebound in a complex urban clinic population is surprisingly high even out to 3 years. Intensified monitoring and adherence efforts should target this high risk period. Thereafter, confidence in truly durable virologic suppression is improved.
Introduction
HIV-1 RNA viral load (VL) monitoring is currently recommended every 3-4 months for patients on antiretroviral therapy (ART). Among those patients with suppressed viral load for greater than 2 years, monitoring at 6 months intervals is considered reasonable [1]. As these guidelines are based predominantly on clinical trials and on expert opinion, our objective was to examine the risk of viral rebound over time in a large urban HIV Clinic, and better define the durability of virologic suppression and its implications for viral load monitoring and for targeted adherence strategies.
Methods
The objective of the analysis was to describe the risk of rebound among patients with virologic suppression. We used the HIV Clinical Case Registry to describe the population of patients with HIV infection who had at least one outpatient visit to the Washington DC Veterans Affairs Medical Center from January 1, 2005 to December 31, 2011. We evaluated every paired HIV-1 viral load (VL) and CD4 count performed by the Infectious Diseases Laboratory during the period of observation. Time to rebound was computed using consecutive sequences of observations for subjects whose initial two viral loads were below the lower limit of quantification (LLOQ) and were measured within 390 days of each other. Although the median frequency of viral load monitoring for the clinic between 1999 and 2011 was 113 days (IQR 96-138), we aimed to be inclusive of patients with less frequent monitoring, up to a maximum of approximately 13 months between measurements.
Two analyses were performed. In Analysis A, viral rebound was defined as a viral load greater than the LLOQ. In Analysis B, viral rebound was defined as a viral load greater than 200 copies/ml. Subjects were classified as censored either if they reached the end of the study while remaining virally suppressed, or if at some point a gap of 390 days between tests occurred. Only the first period of virologic suppression for each patient was included in these analyses.
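The inclusion and censoring rules above can be sketched as a small routine. This is a hypothetical illustration, not the authors' actual SAS/R code: the function name is invented, the LLOQ value of 48 copies/ml is an assumption for the example, and only Analysis A (rebound > LLOQ) is shown; the 390-day gap rule and the "first episode only" rule are taken from the text.

```python
from datetime import date

LLOQ = 48           # copies/ml; assumed value for illustration only
MAX_GAP_DAYS = 390  # maximum allowed interval between measurements

def first_suppression_episode(measurements, threshold=LLOQ):
    """measurements: list of (date, viral_load) pairs sorted by date.

    Returns (days_from_first_suppressed_load, rebounded) for the first
    episode opened by two consecutive loads below the threshold measured
    within 390 days of each other, or None if no such episode exists.
    Follow-up is censored at a >390-day monitoring gap or at the end of
    observation.
    """
    for i in range(len(measurements) - 1):
        (d0, v0), (d1, v1) = measurements[i], measurements[i + 1]
        if v0 < threshold and v1 < threshold and (d1 - d0).days <= MAX_GAP_DAYS:
            prev = d1
            for d, v in measurements[i + 2:]:
                if (d - prev).days > MAX_GAP_DAYS:
                    return (prev - d0).days, False   # censored: monitoring gap
                if v > threshold:
                    return (d - d0).days, True       # viral rebound (Analysis A)
                prev = d
            return (prev - d0).days, False           # censored: end of study
    return None
```

A subject suppressed on 2005-01-01 and 2005-04-01 who rebounds on 2005-08-01 yields `(212, True)`; the same subject with no further measurement until 2007 is censored at 90 days because of the monitoring gap.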
Kaplan-Meier and life table curves were generated to describe time to viral rebound in the cohort. Analyses were done on the cohort as a whole and also as stratified by CD4 groups <300 cells/µl and ≥300 cells/µl at the time of inclusion. Homogeneity of survival curves in the latter case was tested via the log rank test. Cox's proportional hazards model was also used to quantify the degree and direction of relative risk between CD4 groups. The proportional hazards assumption was assessed visually using log-log survival plots. Analysis was performed using the lifetest, phreg, and freq procedures (SAS 9.3, Cary, NC), and data management was performed using R 3.0.1.
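The Kaplan-Meier estimate used here can be illustrated with a short pure-Python sketch on made-up data. This is not the authors' SAS `lifetest` analysis; it uses the standard ties convention, in which subjects censored at time t remain at risk for events at t.

```python
def kaplan_meier(times, events):
    """times: follow-up durations (e.g. days); events: True = viral
    rebound observed, False = censored.  Returns a list of
    (event_time, survival_probability) pairs: the estimated probability
    of remaining virally suppressed beyond each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        rebounds = censored = 0
        # group all subjects sharing this follow-up time
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                rebounds += 1
            else:
                censored += 1
            i += 1
        if rebounds:
            survival *= 1 - rebounds / n_at_risk
            curve.append((t, survival))
        n_at_risk -= rebounds + censored
    return curve

# Four subjects: rebounds at 100 and 200 days, censoring at 200 and 300 days.
curve = kaplan_meier([100, 200, 200, 300], [True, True, False, False])
```

The cumulative risk of rebound within the first year is then 1 − S(365), and the conditional risk in the year after k years of suppression is 1 − S((k+1)·365)/S(k·365), which is essentially how year-by-year risks are read off a survival or life-table curve.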
Results
From January 2005 to December 2011, 1544 patients had at least one outpatient visit. Among these patients, 97 % were male, 75 % were black or African American, and the median age was 50 years. Reported risks for exposure to HIV included sex with a male (30 %), sex with a female (50 %) and injection drug use (20 %). Approximately 30 % of patients were co-infected with Hepatitis C, and other co-morbid illness was common, including drug and alcohol dependence and mental health disorders in approximately 50 % of patients. Approximately 75 % of patients received antiretroviral therapy (ART) during this period. Among those on ART, 30 % received a nonnucleoside reverse transcriptase inhibitor (NNRTI) based regimen (95 % efavirenz-based), 30 % received a protease inhibitor based regimen (85 % boosted with ritonavir and 15 % unboosted), and 3 % received an integrase inhibitor or other regimen. The remaining 37 % of patients either switched drug classes during this period or received a regimen consisting of three or more drug classes.

After three years of virologic suppression, the risk of viral rebound dropped significantly. For patients who achieved three to five years of virologic suppression, the risks of failure were 12.8 % (95 % CI 7.3-18.2) overall and 9.7 % for CD4 ≥300 in the year following three years of virologic suppression, 11.3 % overall and 7.7 % for CD4 ≥300 in the year following four years of virologic suppression, and 7.8 % overall and 6.5 % for CD4 ≥300 in the year following five years of virologic suppression. For 79 patients in both CD4 strata who achieved 5.7 years of virologic suppression, none had viral rebound at a median of 10 months of follow-up.
Risk of Viral Rebound by Year for Analysis B, >200 copies/ml
The risk of viral rebound by year, defined as >200 viral copies/ml (Table 3 and Fig. 2), in the first three years was high. When stratified by CD4 cell count, patients with ≥300 cells/µl were at lower risk of viral rebound than those with <300 cells/µl only in the first year following virologic suppression.
Discussion
The primary risk of inadequate viral load monitoring is undetected viral rebound with potential immunologic decline, immune activation and progressive selection of resistance mutations that limit antiretroviral options. Although we are informed by data from clinical trials, we conducted this study to better understand the risk of viral rebound relative to time with virologic suppression in a complex outpatient clinic environment. Our findings have relevance for HIV clinic practices and further inform recommendations for the appropriate frequency of viral load monitoring and the appropriate timing of intensive adherence strategies. Our prior examinations of CD4 cell counts and viral loads from 1999 to 2011 demonstrated considerable improvement in median CD4 cell count and in the percentage of patients with virologic suppression [2], as also demonstrated elsewhere [3]. Now, in the era of potent antiretroviral therapy and the capacity for genotypic resistance testing to guide therapy, the occurrence of viral rebound may reflect our challenges with retention in care and adherence to antiretrovirals [4]. It is therefore particularly disappointing that the risk of viral rebound remains high out to three years. The clinic from which these data are derived provides both HIV care and primary care, has a "medical home" approach with a nurse practitioner-physician team for each patient, social workers and a clinical pharmacist on site, as well as an HIV psychologist. Though this model improves outcomes in the engagement-in-care continuum [5,6], we, like others, have demonstrated this high early risk for viral rebound [4,7-11], indicating that further refinement of approach is warranted. These findings support not only the suggested higher frequency of early viral load monitoring, but also highlight the period of time when additional strategies are needed to keep patients in care and on treatment.
When stratified by CD4 cell count, patients with CD4 <300 had nearly double the risk of viral rebound. Higher rates of viral rebound among patients with a low CD4 cell count in the first three years following virologic suppression are not unexpected. Patients with a low CD4 cell count (<300 cells/µl) may represent a population with late HIV diagnosis, very low nadir CD4, and immune restoration failure due to an inability to reconstitute depleted T cell populations despite virologic suppression. Some also have a low CD4 due to a co-morbidity such as Hepatitis C and cirrhosis despite virologic suppression. However, those with a CD4 cell count <300 are over-represented by those who are under-treated for HIV due to the failure to engage in care and attend visits to the clinic (even with two viral load measurements in 390 days, engagement in care cannot be assumed), and those who come to their visits but fail to take prescribed antiretroviral therapy.
On the other hand, after 3 years of sustained virologic suppression, the risk of rebound is quite low and our confidence in a twice-yearly monitoring strategy improves. This risk declines even further by 6 years, an observation also seen by Lima et al. [4]. In our analysis, viral rebound was not seen after 5.7 years of virologic suppression among 119 patients at risk for a median of 10 months. For those patients with demonstrated consistent adherence and engagement in care for five to six years, even further reduction in monitoring may be rational [12].
We examined the risk of viral rebound with rebound defined both as >LLOQ and as >200 copies/ml. Although the >200 definition is intended to allow for clinically insignificant "viral blips" and follows the antiretroviral guideline definition [1], recent literature suggests an increased risk of early viral rebound even with very low replication of HIV [13,14]. A more stringent definition of viral rebound was therefore also examined. Although we did not compare the risk of viral rebound at "not detected" compared to <LLOQ, our data demonstrated that once viral suppression was achieved for five to six years, annual failure risk was similar regardless of rebound definition and CD4 stratification. Benzie and Lima have demonstrated durability regardless of adherence or previous treatment failures once around six years of suppression are achieved [14-16]; thus, true durability at the HIV population level may be best defined after five to six years of virologic suppression.
For the individual patient, clinician decisions as to the frequency of viral load monitoring should be informed by psychosocial and neurobehavioral factors [1,17] and by self-reported adherence, pharmacy refill data or adherence monitoring [18-20]. But the findings of this analysis remind us to have caution; we should not assume durable virologic suppression after one year or even two years of virologic suppression, but carefully assess the likelihood of viral rebound. Yet, six or more years of undetectable viral loads for the "right" patient might even allow an annual viral load monitoring strategy. Viral load monitoring is costly and particularly prohibitive in resource-limited countries. We previously demonstrated that frequent CD4 monitoring among patients with CD4 ≥300 cells/µl and virologic suppression was not necessary [21]. A less intensive strategy for viral load monitoring after truly durable virologic suppression has further significant economic implications in both resource-rich and resource-poor nations and warrants further prospective evaluation.
Our study had several limitations. This was a retrospective evaluation from a single, urban medical center caring for predominantly African-American men. The risk of viral rebound by antiretroviral regimen was not examined. This analysis intended to address risk for the population overall, for all patients who achieved initial virologic suppression. We examined data beginning in 2005, when efavirenz and simpler once-daily regimens were more widely in use, to reflect the "current" era of ART. Although the higher barrier to mutation of the newer once-daily integrase strand transfer inhibitors may reduce the risk of viral mutations associated with nonadherence compared to once-daily NNRTI regimens, the findings here still provide relevant guidance to a rational approach to viral load monitoring and to the timing of strategies to improve adherence and reduce the risk of viral rebound.
Conclusions
In conclusion, HIV-infected patients in our urban clinic had high rates of viral rebound in the first three years following virologic suppression, highlighting the time when targeted efforts to assure antiretroviral adherence may be particularly meaningful. On the other hand, these data demonstrated that once virologic suppression is achieved and sustained for three years, the risk of rebound declines substantially, supporting guidance for reduced monitoring, particularly for patients with CD4 cell counts ≥300 cells/µl. As we enter eighteen years post approval of potent antiretroviral regimens, better defining the relative risks for viral rebound allows better and more focused use of resources and improvement of our capacity to achieve truly durable virologic suppression in all patients initiating antiretroviral therapy.
|
2017-06-01T03:55:56.395Z
|
2014-11-05T00:00:00.000
|
{
"year": 2014,
"sha1": "ae093fb895e41a3dd0c396a21823cb73a98b94f7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10461-014-0917-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "51c973514eb40aa4aa47ca113e7e536e035f42c9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15207729
|
pes2o/s2orc
|
v3-fos-license
|
Shoulder Osteoarthritis
Osteoarthritis (OA) is the most frequent cause of disability in the USA, affecting up to 32.8% of patients over the age of sixty. Treatment of shoulder OA is often controversial and includes both nonoperative and surgical modalities. Nonoperative modalities should be utilized before operative treatment is considered, particularly for patients with mild-to-moderate OA or when pain and functional limitations are modest despite more advanced radiographic changes. If conservative options fail, surgical treatment should be considered. Although different surgical procedures are available, as in other joints affected by severe OA, the most effective treatment is joint arthroplasty. The aim of this work is to give an overview of the currently available treatments of shoulder OA.
Background
Osteoarthritis (OA) is the most frequent cause of disability in the USA [1]. Although not as prevalent as OA of the hip or knee, OA of the shoulder has been demonstrated, in cadaver and radiographic studies, to affect up to 32.8% of patients over the age of sixty years [2,3] and to be equally debilitating [4]. Patients perceive that the impact of shoulder OA is comparable with that of chronic medical conditions such as congestive heart failure, diabetes, and acute myocardial infarction [5]. The prevalence of shoulder OA increases with age, and women appear to be more susceptible than men [6].
OA of the shoulder is the consequence of destruction of the articular surface of the humeral head and glenoid and results in pain and loss of function. It can be primary or secondary. Primary OA is diagnosed when no predisposing factors that could lead to joint malfunction are present. Secondary OA may occur as a result of chronic dislocations and recurrent instability, trauma, surgery, avascular necrosis, inflammatory arthropathy, and massive rotator cuff tears [7,8] (Figure 1).
Treatment of shoulder OA is often controversial and is typically based on the patient's age, severity of symptoms, level of activity, radiographic findings, and medical comorbidities [9].
Nonoperative treatment options include activity modification, physical therapy, anti-inflammatory drugs (NSAIDs), and intra-articular injections. If conservative options fail, surgical treatment should be considered. Although different surgical procedures are available, as in other joints affected by severe OA, the most effective treatment is joint arthroplasty [10].
The aim of this work is to give an overview of the currently available treatments of shoulder OA.
Nonsurgical Treatments
Nonoperative modalities should be utilized before operative treatment is considered, particularly for patients with mild-to-moderate OA or when pain and functional limitations are modest despite more advanced radiographic changes [11].
Although nonsurgical management of shoulder OA will not ultimately alter the progression of disease, it can be effective in reducing pain and improving the range of motion [9]. Lifestyle modifications and occupational changes are often an initial step in this process.
Nearly all patients with shoulder OA can benefit from physical therapy. Ideally, therapy should be initiated before the development of atrophy or contracture, and it should be tailored to the specific needs of the patient [8]. Typical programs include gentle range of motion and isometric strengthening of the rotator cuff and scapulothoracic musculature [12].
Intra-articular injections are commonly used for patients with OA in other joints and may provide pain relief in patients with shoulder OA [13]. Because of the lack of evidence supporting their efficacy, however, no more than three corticosteroid injections in a single joint are recommended unless there are special circumstances [11]. Some evidence exists supporting viscosupplementation for shoulder OA. Silverstein et al. [14] reported that glenohumeral viscosupplementation resulted in a significant improvement in shoulder pain and function outcome scores 6 months following injection.
Medical management of shoulder OA includes salicylates, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs), all of which can be effective in relieving pain and inflammation. In particular, randomized trials indicate that NSAIDs are more effective than both paracetamol and placebo for pain relief in OA [15,16]. It is important, however, to be aware of the increased risk of gastrointestinal and cardiovascular side effects when considering NSAID prescription for this indication [16].
Surgical Treatments
The primary reason to consider surgery for OA is pain that does not respond to nonsurgical measures. Improved function is typically a secondary goal of surgery and is less predictably achieved than pain relief [17]. The choice of treatment then depends on both patient and disease features.
Patient features include age, occupation, activity level, and the expectations for functional recovery. Disease features include the lesion size and the extent of chondral involvement.
Arthroscopic Treatment.
Arthroscopy has become increasingly accepted as an option in the management of shoulder OA (Figure 2), thanks to the few complications and low morbidity associated with this procedure [18,19]. It may be useful both as a diagnostic tool for characterizing lesions and as a therapeutic tool for debridement. Capsular release followed by manipulation may also be an important part of the procedure and can improve postoperative motion [20,21]. In general, arthroscopic debridement is most likely to benefit patients with mild OA. Although arthroscopic intervention is not likely to halt arthritic progression, it may provide a period of improved pain and function, thereby delaying a larger operation [9]. By stabilizing cartilage lesions, eliminating mechanical symptoms, and releasing capsular contractures, satisfactory outcomes are obtained, as reported by several authors [20,22,23]. Weinstein et al. [23] described good results from arthroscopic debridement in patients with mild or minimal arthritic change and less favorable results in patients with more advanced changes. Cameron et al. [20] evaluated arthroscopic debridement in patients with grade IV osteochondral lesions, finding an overall 88% rate of postoperative improvement. More recently, Van Thiel et al. [22] described a significant decrease in pain in 55 of 71 patients, mean age 47 years old (range 18-77), after arthroscopic shoulder debridement at a mean of 27 months postoperatively.
Humeral Head Resurfacing Arthroplasty.
Shoulder resurfacing arthroplasty has gained popularity as an alternative to conventional shoulder arthroplasty for the treatment of OA (Figure 3). In contrast to conventional shoulder arthroplasty, which involves removal of the entire humeral head followed by placement of an intramedullary stem into the proximal aspect of the humerus, shoulder resurfacing consists of reaming the proximal portion of the humeral head and fitting a metal-alloy cap over the remainder of the head [24] (Figure 4). This cap may or may not be mated against a glenoid component [25,26].
Potential advantages of humeral resurfacing are decreased bone resection, shorter operative times, a lower prevalence of humeral periprosthetic fractures, and the potential for straightforward revision to a conventional total shoulder replacement [27,28]. In addition, it may be straightforward to restore normal offset, inclination, and version of the glenohumeral joint because no osteotomy of the neck is performed and the head-neck angle remains intact [24]. Although many studies have demonstrated that the success rates of shoulder surface replacement arthroplasty are comparable with those of conventional stemmed prostheses at short- and mid-term followup [25,28,29], there is a lack of evidence regarding long-term outcomes and no comparative studies exist. As bone stock is preserved, resurfacing arthroplasty is particularly indicated in young patients who may require revision to a total shoulder arthroplasty with a stemmed prosthesis during their lifetime. Moreover, periprosthetic fractures, which are a concern in this more active population, are less likely to occur than they are with total shoulder replacement because the stem does not pass through the surgical neck [24].
Hemiarthroplasty.
Both total shoulder arthroplasty and hemiarthroplasty (Figure 5) may achieve good short-term and mid-term results [30][31][32][33]. However, while total shoulder arthroplasty may provide superior and more reproducible pain relief, this must be balanced against the technical difficulties of inserting a glenoid prosthesis and the long-term durability of glenoid prostheses in terms of loosening and wear [34][35][36]. Alternatively, despite good early and mid-term results with hemiarthroplasty, glenoid arthrosis and the need for revision to total shoulder arthroplasty have been demonstrated after longer-term followup [37,38]. The condition of the glenoid is critical in determining whether humeral head replacement alone will be successful. In particular, patients with concentric glenoid wear and primary OA seem to have better outcomes than those with eccentric glenoid wear and secondary OA [39]. The results of hemiarthroplasty in young individuals appear to deteriorate with time, and there remains a high rate of patient dissatisfaction and revision surgery [40,41].
Sperling et al. [41] found that, in spite of long-term improvements in pain relief and function after hemiarthroplasty, there was a 60% rate of unsatisfactory results in patients under 50 years. Several other studies have confirmed that long-term functional results appear to be compromised by progressive glenoid wear, especially in individuals with preexisting asymmetric glenoid erosion [42]. Thus, primary hemiarthroplasty may be indicated particularly in carefully selected patients with a congruent and minimally arthritic glenoid.
Anatomic Total Shoulder Arthroplasty.
Total shoulder arthroplasty (Figure 6) with replacement of the glenoid by a prosthetic polyethylene component is currently the gold standard for the management of advanced and bipolar shoulder OA [31]. Several authors have reported that the functional results of total shoulder arthroplasty are better than those of hemiarthroplasty alone in the treatment of shoulder OA [36,43]. Even in patients under the age of 50 years, survival rates of 97% and 84% at 10 and 20 years have been reported [41]. In a trial of forty-seven patients with primary OA who had been randomized to treatment with total shoulder arthroplasty or hemiarthroplasty and followed for an average of thirty-five months, Gartsman et al. [36] reported significantly greater pain relief and shoulder motion after total shoulder arthroplasty.
In a multicenter nonrandomized study of nearly 700 arthroplasties performed for the treatment of primary arthritis, total shoulder arthroplasty resulted in higher adjusted Constant scores (96% versus 86% after hemiarthroplasty) and improved motion (forward elevation, 145° versus 1° after hemiarthroplasty, and external rotation, 42° versus 36° after hemiarthroplasty) [43]. Finally, a 2005 meta-analysis of 112 patients demonstrated that total shoulder arthroplasty resulted in higher functional outcome scores, greater pain relief, and increased shoulder motion at two years postoperatively [44]. However, these benefits come with the risk of glenoid loosening [45]. Particularly in younger, more active patients, long-term survival of the glenoid component is a concern because the outcomes of glenoid revision are not as robust as the outcomes of primary total shoulder arthroplasty [46]. In a recent review of 33 previously published studies, Bohsali et al. [47] found that glenoid component loosening accounted for 39% of all complications after total shoulder arthroplasty. Sperling et al. [41] similarly reported high rates of loosening and declining prosthesis survival after 5 to 8 years, specifically in younger individuals. Soft-tissue failure and prosthetic instability may explain, in part, the high rate of glenoid loosening [48]. In addition, the risk of glenoid failure seems to be associated with the use of reaming to optimize the seating and positioning of the glenoid component. The reaming of the glenoid surface weakens the support from subchondral bone, exposing the component to excessive compressive and eccentric forces. Preserving subchondral bone may then be important for the long-term longevity of the glenoid component [49].
Given the risk of glenoid loosening, careful patient selection for total shoulder arthroplasty is paramount. It is a durable and effective option in appropriately selected and counseled individuals who have had failure with all palliative and reconstructive treatment modalities [8].
Reverse Total Shoulder Arthroplasty.
While anatomic total shoulder arthroplasty can be considered a very effective treatment for shoulder OA in the presence of an intact rotator cuff, when shoulder OA is associated with a massive rotator cuff rupture (i.e., cuff tear arthropathy, CTA [50]), the results are suboptimal. The rotator cuff is an active stabilizer that is indispensable for the proper functioning of the glenohumeral joint [51]. With a massive rupture, the center of rotation of the joint migrates upward and joint stresses become off-center, which may explain the glenoid loosening observed with total shoulder prostheses [52]. To avoid this problem, it is possible to leave the glenoid in place and to carry out only a hemiarthroplasty, but the results are often somewhat disappointing and the improvement in shoulder function and range of motion is limited [53,54]. Moreover, the progressive upward displacement of the humeral head causes wear of the coracoacromial arch, and the patient is at risk of a deteriorating functional result over time [55].
Reverse total prostheses (Figure 7) such as those developed by Grammont et al. [56] appear to provide good functional results in CTA [57,58].
The congruent joint surfaces of the reverse ball-and-socket design provide inherent stability, while moving the joint center of rotation medially and distally to increase deltoid function and the range of motion [59,60]. Key aspects of the modern reverse total shoulder arthroplasty include (1) a large glenosphere component with no neck, which allows medialization of the center of rotation and reduced torque on the glenoid component; (2) a humeral implant with a nonanatomic valgus angle, which moves the center of joint rotation distally, thus maximizing the length and tension of the deltoid to increase its ability to abduct the humerus, in addition to providing increased stability; and (3) a greater range of shoulder motion [61]. Distal displacement of the center of joint rotation increases the lever arm of the deltoid and also recruits portions of the anterior and posterior heads of the deltoid to act as abductors of the arm, permitting elevation above shoulder height. In addition, reestablishment of the subacromial space permits greater potential abduction [61,62].
Reverse total shoulder arthroplasty has been shown to be effective in treating CTA, with numerous studies demonstrating improvements in shoulder motion and patient outcome [56][57][58][59]. However, most reports have presented only midterm followup results, and despite these encouraging midterm results, complications have been reported. In one long-term analysis, Molé and Favard reported the radiographic appearance of deterioration after approximately five to six years, with clinical deterioration appearing after approximately eight years [63]. In a retrospective review of eighty reverse total shoulder arthroplasties, with a mean duration of followup of forty-four months and a mean patient age of 72.8 years, Sirveaux et al. [57] reported an increase in the mean Constant score from 22.6 points preoperatively to 65.6 points postoperatively, with 96% of the patients having little or no pain and an increase in mean active forward flexion from 73° to 138°. However, at the time of followup, 4% of the implants had failed and been revised, 6% were noted to have radiographic signs of loosening, and 9% demonstrated unscrewing of the glenosphere component. In contrast, Guery et al. [51] showed that the global survivorship of the Grammont reverse total shoulder prosthesis, with revision or loosening as the end point, is good even eight years after implantation. Moreover, Cuff et al. [64] recently reported durable clinical and radiographic results and a survival rate of 94% at 5 years of followup. In addition, no mechanical baseplate failures or glenoid-sided screw loosening were noted. Thus, although the results of reverse total shoulder arthroplasty are promising with regard to postoperative range of motion, pain relief, and improvements in clinical outcome, long-term studies are necessary to confirm the encouraging survivorship data reported in recent works.
Conclusions
Shoulder OA can be a major source of pain and disability. The management of this condition, particularly in young active patients, is a challenge, and the optimal treatment has yet to be completely established. If nonoperative treatment fails, several surgical techniques are currently available. Shoulder arthroplasty produces excellent and reliable functional improvements, but further studies will be necessary to confirm the long-term effectiveness of this procedure.
IMPACT OF HEAT STRESS ON GROWTH PERFORMANCE AND CARCASS TRAITS IN SOME BROILER CHICKENS
Environmental heat stress is one of the most challenging conditions worldwide and has an adverse impact on the poultry industry. Broiler chicken strains are sensitive to heat stress primarily because they lack sweat glands. The current study aimed to investigate the effects of heat stress exposure on growth performance and bio-physiological characteristics of Cobb, Hubbard and Arbor Acres broiler hybrids under the summer environmental conditions of Egypt. A total of three hundred one-day-old chicks (one hundred birds from each hybrid) were brooded under the same conditions of water, diet, breeding system, vaccines and medication throughout life up to slaughter age. The three strains were randomly divided into twelve groups (three strains "Cobb, Hubbard and Arbor Acres" × two treatments "control group and heat-exposed group" × two replicates × twenty-five chicks). The environmental temperature and relative humidity during housing were 32±2°C and 50±5% for the control group and 40±2°C and 20±5% for the heat-stressed group. Body weight, body weight gain, edible carcass parts (carcass, thigh, drum, breast muscles and giblets weight) and inedible carcass parts (blood, feathers, head and legs weight) were recorded for the heat-stressed and control groups. Lymphatic organs such as the spleen, thymus and bursa were also weighed. The results showed that the Cobb strain had the best growth performance and carcass characteristics under heat stress, while the Arbor Acres strain was the one whose rectal temperature was least affected by heat exposure. The Arbor Acres strain had the highest viability in both the control and treated groups.
The control group and the Hubbard strain showed increased bursa weight compared to the heat-exposed group and the other strains. It was concluded that the Cobb strain has the best performance under heat stress compared to the other broiler chicken strains.
INTRODUCTION
The poultry industry is a major source of protein for humans, and poultry production has therefore increased during the last two decades, but high ambient temperatures negatively affect production traits. Pourreza and Edriss (1992) raised broilers at 20 or 30°C (control and high temperature, respectively) until 45 days of age; compared with the normal temperature, the high temperature decreased slaughter and carcass weights and increased dressing percentage. Broiler chickens are sensitive to heat stress (Yousaf et al 2019, Yousaf et al 2018). Poultry lack sweat glands for releasing heat; if panting fails to decrease a high internal body temperature, birds become inactive and exhausted, and mortality occurs because of circulatory, respiratory and electrolyte imbalance (Swayne, 2017). The impact of high environmental temperature on the performance of different poultry species, including broilers, has been explored (Dozier et al 2007), and high environmental temperatures have been found to have pernicious impacts on performance. Humidity and temperature play a key role and are among the foremost environmental factors during poultry housing (Lourens et al 2005). Broilers exposed to higher ambient temperatures show increased body temperature (Reddy, 2000) and consequently release corticosterone into the blood circulation to support metabolism (Arce-Menocal et al 2009). This hormone may impair humoral and cell-mediated immunity because the changes in plasma concentrations of adrenocorticotropic hormone and corticosteroids affect the lymphoid organs, decreasing the mass of the spleen, thymus and bursa (Havenstein et al 2003). The current study aimed to investigate the effects of heat stress adaptation on growth performance and carcass characteristics.
Statement of the Experiment
This experiment was carried out at the Poultry Breeding Farm, Poultry Production Department, Faculty of Agriculture, Ain Shams University. A total of three hundred one-day-old chicks from three broiler strains (100 Cobb, 100 Hubbard and 100 Arbor Acres) were used. All groups were isolated from each other by plastic sheet barriers.
Broiler chicks were randomly divided into the following groups: a normal control group (C) exposed to an environmental temperature of 32±2°C on the day of hatch, decreased by 1°C every three days until the 16th day of age, when the temperature stabilized at 24°C until slaughter age; relative humidity ranged from 40 to 60% throughout the birds' life. The heat-stressed group followed the same temperature program but was exposed to 40°C for 3 hours daily from 1 to 7 days of age, and again to 40°C for 3 hours directly before slaughter at 35 days of age.
Housing and Management
Before receiving the experimental chicks, the broiler house was prepared for rearing at a density of about 7 birds/m². All openings of the rearing house (semi-open housing system) were closed to maintain temperature, and a thermometer was installed in the center of the room to monitor temperature.
Feed and water were supplied ad libitum. Birds were fed a commercial corn-soybean meal starter diet containing 23% crude protein and 3000 kcal/kg ME from day 1 to 17 days of age, replaced thereafter by a grower diet containing 21% crude protein and 3050 kcal/kg, which was in turn replaced by a diet containing 19% crude protein and 3100 kcal/kg.
Body weight and body weight gain
Body weight was recorded individually to the nearest 0.1 g, and body weight gain was calculated weekly throughout the experimental period, at day 1 and then weekly from 1 to 5 weeks of age.
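The weekly gain computation described above is simple arithmetic: the gain for a week is the difference between consecutive weekly weights. A minimal sketch (the weight values below are illustrative, not the study's data):

```python
def weekly_gains(weights):
    """Weekly body weight gain (g): difference between consecutive weekly weights."""
    return [round(w2 - w1, 1) for w1, w2 in zip(weights, weights[1:])]

# Hypothetical mean weights (g) at day 1 and weeks 1-5; not the study's data.
weights = [46.1, 160.0, 420.5, 850.0, 1400.0, 2000.0]
gains = weekly_gains(weights)
cumulative_gain = round(weights[-1] - weights[0], 1)  # equals the sum of weekly gains
```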
Slaughter test and carcass traits
A total of 60 birds (20 from each strain, 10 from each treatment) were taken randomly at the end of the growth experiment, at 35 days of age. After bleeding was complete, birds were scalded in 60°C water for 30 seconds and feathers were removed with a defeathering machine. After removal of the head, viscera, shanks and edible organs (gizzard, heart and liver), the rest of the body was weighed to determine the dressed weight, which includes the front parts with wings, the hind parts and the neck. The dressed birds were divided into right and left sides. The right side of each carcass was halved into a front quarter (breast and wings) and a hindquarter (thigh and drumstick). The edible and inedible parts were also calculated, and lymphatic organs (spleen, thymus and bursa) were removed and weighed. The right side of each carcass was then split into its cuts, and the breast, drumstick and thigh were weighed and recorded.
Rectal Temperature (RT)
Rectal temperature was measured during the thermal challenge period in ten random birds per treatment (5 per replicate) from 1 to 7 days of age during heat stress exposure. Rectal temperature (±0.1°C) was measured by inserting the probe of an electronic thermometer 3 cm into the colon.
Mortality Rate
The number of dead birds during the rearing period was recorded individually to evaluate the viability of the three broiler strains under control and heat exposure conditions.
Statistical Analysis
The general linear model of SAS® was used for two-way analysis of variance (SAS 9.1.3, 2003). A 2×3 factorial design (two temperature treatments × three strains) was used to examine the effects of temperature on productive performance and physiological parameters and to evaluate the acclimation of the broiler strains in response to thermal conditioning. When significant differences among means were found, means were separated using Duncan's multiple range test at P<0.05 (Duncan, 1955).
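The two-way factorial analysis used here can be illustrated with a minimal pure-Python implementation for a balanced design (the study used SAS PROC GLM; this sketch uses made-up data and is not the authors' code). It computes the sums of squares for the two main effects (treatment, strain), their interaction, and error, and returns the F ratios:

```python
def two_way_anova(cells):
    """Balanced two-way ANOVA. cells[(treatment, strain)] = list of n replicates.
    Returns F ratios for the two main effects and the interaction."""
    a_levels = sorted({a for a, _ in cells})          # treatment levels
    b_levels = sorted({b for _, b in cells})          # strain levels
    n = len(next(iter(cells.values())))               # replicates per cell
    a, b = len(a_levels), len(b_levels)
    all_obs = [y for obs in cells.values() for y in obs]
    grand = sum(all_obs) / len(all_obs)

    def mean(xs):
        return sum(xs) / len(xs)

    ss_total = sum((y - grand) ** 2 for y in all_obs)
    ss_a = b * n * sum((mean([y for bl in b_levels for y in cells[(al, bl)]]) - grand) ** 2
                       for al in a_levels)
    ss_b = a * n * sum((mean([y for al in a_levels for y in cells[(al, bl)]]) - grand) ** 2
                       for bl in b_levels)
    ss_cells = n * sum((mean(obs) - grand) ** 2 for obs in cells.values())
    ss_ab = ss_cells - ss_a - ss_b                    # interaction sum of squares
    ss_err = ss_total - ss_cells
    ms_err = ss_err / (a * b * (n - 1))               # error mean square
    return {"treatment":   (ss_a / (a - 1)) / ms_err,
            "strain":      (ss_b / (b - 1)) / ms_err,
            "interaction": (ss_ab / ((a - 1) * (b - 1))) / ms_err}

# Made-up final body weights (g): 2 treatments x 3 strains x 2 replicates.
data = {("control", "Cobb"): [2000, 2050], ("control", "Hubbard"): [1900, 1950],
        ("control", "ArborAcres"): [1950, 1980], ("heat", "Cobb"): [1900, 1940],
        ("heat", "Hubbard"): [1700, 1750], ("heat", "ArborAcres"): [1780, 1820]}
f_ratios = two_way_anova(data)
```

Each F ratio is compared against the F distribution with the corresponding degrees of freedom to decide significance at P<0.05; Duncan's multiple range test would then separate the strain means when an effect is significant.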
Body weight and body weight gain
Body weights of the broiler strains are presented in Table (1). Our study showed highly significant differences between strains in body weight. The Cobb hybrid recorded the highest weight at hatch (46.07 g) compared to the Arbor Acres and Hubbard hybrids (41.39 g and 36.85 g, respectively).
The Cobb strain showed the best body weight at most ages in both the control and heat-exposed groups. Cobb birds in the control group recorded significantly different body weights compared to the other strains. Notably, the Cobb strain preserved its performance under heat stress, whereas the Arbor Acres and Hubbard strains clearly lost body weight under heat stress exposure.
There was no significant interaction between treatment and strain at any age in this study. On the other hand, Altan et al (2000) examined the metabolic and physiological responses of the Hubbard and Cobb strains exposed to an ambient temperature of 38±1°C for 2 hours at 14 and 15 days of age. They found that exposure to high temperature at an early age resulted in weight loss in the Cobb strain that was not compensated by 35 days of age, while there was no weight loss in the Hubbard strain.
Data presented in Table (2) show that the Cobb strain recorded the highest cumulative live body weight gain over the five weeks of age compared to the Arbor Acres and Hubbard strains. Groups exposed to heat stress recorded lower body weight gains than the control groups of the same strains under otherwise identical environmental conditions. The treatment × strain interaction showed no significant differences at any week of age. Al-Batshan (2002) evaluated the effects of ambient temperature (33 ± 0.5°C) and genotype (Hubbard and IsAJ57) on broiler performance in a factorial arrangement and found that high ambient temperature significantly decreased body weight gain, and that the reduction under hot conditions was more pronounced for Hubbard chicks than for the IsAJ57 strain.
Carcass measurements
The effects of strain and heat exposure on the inedible parts of the carcass are summarized in Table (3). Heat stress caused a statistically significant increase in feather weight (122.6 g) compared to unexposed birds (98.34 g). With respect to strain, the Hubbard strain recorded a non-significant decrease in feather weight (103.46 g) compared to Cobb (113.25 g) and Arbor Acres (114.69 g). Blood weight showed no statistically significant differences; the Hubbard strain recorded the highest value (73.33 g) compared to Cobb (65.0 g) and Arbor Acres (66.47 g).
In this study, Table (4) shows that gizzard weight showed no significant differences due to strain, heat exposure or their interaction. Heart weight differed non-significantly among strains (Hubbard, 8.485 g; Cobb, 9.18 g; Arbor Acres, 9.285 g). Heat exposure caused a non-significant decrease in heart weight (8.35 g) compared to the control groups (9.1 g).
Edible parts were weighed at five weeks of age for the three broiler strains under control and heat-exposed conditions. The heat-stressed strains recorded slightly different, non-significant dressing weights compared to the control strains. The Cobb strain recorded the lowest dressing weight under heat stress but the highest dressing weight under control environmental conditions. The weights of the different muscles (major and minor breast muscles, thigh and drumstick) showed slight, non-significant differences with respect to both strain and heat exposure. However, the Cobb strain showed lower major and minor breast muscle weights (P<0.05) than the Hubbard and Arbor Acres strains, as well as a higher (P<0.05) thigh muscle weight.
(Omran, Galal, Mahrous and Badri, AUJASCI, Arab Univ. J. Agric. Sci., 22(2), 2082)
Data summarized in Table (5) indicate that there were no significant differences in bursa weight due to treatment, strain or their interaction. The control group showed a higher bursa weight (3.68 g) than the heat-exposed groups (3.29 g), and the Hubbard strain recorded a higher non-significant bursa weight (3.69 g) than Cobb (3.4 g) and Arbor Acres (3.37 g). Jovanir Inês Müller Fernandes et al (2013) showed that carcass yield was not significantly affected by environmental temperature or heat stress during the last week of the birds' life; these results are probably associated with the small variation between the temperatures used in the first week of life, which did not allow acclimatization of the birds and thus did not influence the physiological responses to the heat stress applied in the last week of life. Gholam-Reza Zaboli et al (2016) found that the relative weights of breast (without skin), legs, liver and gizzard did not differ among the experimental groups, while thermal manipulation caused a decrease in the relative weight of the heart. Table (5) also shows a higher spleen weight (P<0.05) for the Arbor Acres strain compared to the Cobb strain.
Rectal Temperature
Data summarized in Figs. (1 & 2) show the rectal temperatures of the three strains exposed to heat stress and their control groups. At one day of age, the Arbor Acres strain showed a narrow gap between the rectal temperature of the control group (40.85°C) and that of the heat-exposed group (41.19°C), while both the Hubbard and Cobb strains showed a wide gap between control and heat-exposed groups. The Arbor Acres strain was therefore the least affected by heat exposure. The same pattern was observed at one week of age, confirming that the Arbor Acres strain tolerated heat stress best when comparing the rectal temperature ranges of control and treated groups. Abioja et al (2014) noted that rectal temperature is used as a major indicator of heat stress in poultry (Abioja et al 2012): heat stress leads to elevated body temperature as the environmental temperature rises beyond the comfort zone of the birds (Kumar et al 2011). Hiroshi Tanizawa et al (2014) reported that, before an acute heat challenge, rectal temperatures in control and treated chicks were 41.6°C and 41.7°C, respectively; the temperatures of both groups were elevated by acute heat stress, but that of the heat-experienced chicks was significantly lower than that of the controls (control, 43.09°C; treatment, 42.87°C).
Mortality rate
In our study, Fig. (3) shows that the Arbor Acres strain had the highest viability in both the control and treated groups, with a mortality rate of 2% in each, while the control group of the Hubbard strain showed the lowest viability (8% mortality), possibly owing to its higher body weight gain than the other strains during 3 to 4 weeks of age coinciding with exposure to high environmental temperature.
Serious bacterial infections in febrile young children: Lack of value of biomarkers
Background. Serious infections in children are difficult to determine from symptoms and signs alone. Fever is a marker both of insignificant viral infection and of more serious bacterial sepsis. Therefore, seeking markers of invasive disease, as well as culture positivity for organisms, has been a goal of paediatricians for many years. In addition, the avoidance of unnecessary antibiotics is important in this time of emerging multiresistant micro-organisms. Objective. To ascertain whether acute-phase reactant tests predict positive culture results. Methods. A prospective, cross-sectional study over a 1-year period included all documented febrile children under the age of 5 years (with an axillary temperature ≥38°C) who presented to Steve Biko Academic Hospital, Pretoria, with signs and symptoms of pneumonia, meningitis and/or generalised sepsis. Every child had clinical signs, chest radiograph findings, urine culture, blood testing (full blood count, C-reactive protein, procalcitonin) and blood culture results recorded. Results. A total of 63 patients were enrolled, all of whom had an axillary temperature ≥38°C. C-reactive protein, procalcitonin and white cell count did not predict the presence of positive blood culture or cerebrospinal fluid culture results, nor infiltrates on chest radiographs. No statistically significant correlations were found between the duration of hospital stay and the degree of fever (p=0.123), white cell count (p=0.611), C-reactive protein (p=0.863) or procalcitonin (p=0.392). Conclusion. Biomarkers do not seem to predict severity of infection, source of infection, or duration of hospitalisation in children presenting to hospital with fever. The sample size is, however, too small to confirm this viewpoint definitively. This study suggests that clinical suspicion of serious infection and appropriate action are as valuable as extensive testing.
RESEARCH
Young children often attend primary care institutions and emergency departments with acute infections. Most of these children have selflimiting conditions; however, a small proportion have serious or even life-threatening infections. This may be a source of anxiety for parents and may present a challenge to attending clinicians. [1] There is reasonably good evidence for the diagnostic value of clinical features for certain conditions, namely pneumonia, and to a lesser degree, meningitis. However, little is known about the clinical features predictive of serious outcome for febrile children presenting with nonspecific symptoms and signs with no clear focus of infection. [1] Various clinical tools have been developed in order to score febrile children as a means to predict the severity of illness, but these scores have been found to be nonspecific and of limited use in clinical practice, especially when used as positive predictors of serious bacterial infection (SBI). [2] Fever in young children may represent both insignificant viral infections and SBI which is often not associated with a distinguishable source of infection. [3] Seeking markers of invasive disease, as well as culture positivity for organisms, has been a goal of paediatricians for many years.
The evidence thus far for multiple-site testing, as well as for multiple testing methods to detect SBIs in febrile young children, is unclear. In addition to selecting antibiotics for appropriate infections, the avoidance of unnecessary antibiotic use should also be considered important in this time of emerging multiresistant micro-organisms. This is an important aspect of antibiotic stewardship. [4]

Methods

A prospective, cross-sectional study over a 1-year period (1 December 2013 - 30 November 2014) of children presenting to Steve Biko Academic Hospital (SBAH), Pretoria, was conducted.
The study sampling strategy included paediatric patients requiring admission who met inclusion criteria and presented to casualty, the outpatients department or directly to the wards.
Approval was obtained from the Department of Paediatrics and Superintendent of SBAH, as well as the MMed and Ethics committees of the University of Pretoria. Informed consent was obtained from the parents/guardians of each participant.
SBAH is a large academic tertiary hospital located in Gauteng. It has two paediatric wards, with a maximum capacity of 56 beds, and one 7-bed paediatric ICU. All patients require referral from a clinic, medical practitioner or hospital. Patients not requiring tertiary care are down-referred to Tshwane District Hospital. The number of under-5 patients admitted per year is approximately 1 400 -1 500. Three-quarters (~1 000) are admitted for subspecialist care and tend to be afebrile on admission; an eighth (~200) are neonates (excluded from the study), while 50% of the remaining 200 -300 patients have low-grade fevers <38°C on admission (having been given prior antipyretics). Therefore the total number of children under 5 years of age expected for this cohort was 100 -150 participants.
This study included all documented febrile patients (axillary temperature ≥38°C) who presented to SBAH between the ages of 1 month and 5 years, with signs and symptoms of pneumonia, meningitis and/or generalised sepsis. Exclusion criteria for this study were children over the age of 5 years, neonates (less than 1 month of age), admission temperature <38°C, and children with exclusion criteria for a lumbar puncture (LP), namely focal neurological signs, papilloedema, rapidly deteriorating consciousness or Glasgow Coma Scale (GCS) <8, signs of raised intracranial pressure (falling pulse, rising blood pressure, dilating or poorly reacting pupils), continuous seizure activity, bleeding diathesis and neural-tube defects. [3] Data collected included all the clinical, laboratory, radiological and microbiology findings. Data were collected on the day of admission and updated as laboratory and microbiology results became available. In order to attempt to overcome the difference in clinical experience between admitting clinicians, specific signs and symptoms were listed in the data collection table that clinicians were instructed to document as present or absent on the day of admission. Patient age, gender, admission axillary temperature (°C) and urine dipstick findings were also included in the captured data.
A 3-day cut-off value for the duration of hospitalisation was chosen for the purpose of this study as the median duration of hospitalisation for children at SBAH is 3 days. We expect those patients with severe illness to have a longer-than-median hospitalisation time.
A urine dipstick test was deemed positive in this study in the presence of leucocytes and/or nitrites. White cell count (WCC) was deemed to be increased if the WCC value was >17 × 10⁹/L in children aged 1 - 12 months, and if >15 × 10⁹/L in children aged 13 months - 5 years. [5] Leucopenia was defined as WCC <4 × 10⁹/L. A C-reactive protein (CRP) value was deemed positive/predictive of bacterial infection if CRP >40 mg/L [6] and procalcitonin (PCT) was positive if >0.2 µg/L. [7] Cerebrospinal fluid (CSF) findings were considered positive based either on biochemistry suggestive of meningitis, or on a positive CSF culture or PCR. [8] Chest radiographs (CXRs) were interpreted by the lead investigator using the World Health Organization (WHO) CXR interpretation methodology. [9] A CXR was deemed positive for hyperinflation in the presence of >8 visible posterior ribs. [9] This study also noted some combinations of both pneumonic changes and hyperinflation.
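The biomarker cut-offs above are rule-based and can be expressed as a small helper. This is an illustrative sketch of the study's positivity criteria only; the function name and return structure are assumptions, not part of the study:

```python
def classify_biomarkers(age_months, wcc, crp, pct):
    """Apply the study's biomarker positivity cut-offs.

    wcc in x10^9/L, crp in mg/L, pct in ug/L.
    """
    # WCC: increased if >17 (1-12 months) or >15 (13 months - 5 years);
    # leucopenia if <4. Either abnormality counts as a "positive" WCC.
    wcc_high = wcc > (17 if age_months <= 12 else 15)
    wcc_low = wcc < 4
    return {
        "wcc_positive": wcc_high or wcc_low,
        "crp_positive": crp > 40,    # mg/L
        "pct_positive": pct > 0.2,   # ug/L
    }

# An 8-month-old with WCC 18.2, CRP 55, PCT 0.1:
print(classify_biomarkers(8, 18.2, 55, 0.1))
# {'wcc_positive': True, 'crp_positive': True, 'pct_positive': False}
```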
Statistical analysis of data
The primary data analysis focused on the proportion of children under the age of 5 years who were admitted to hospital and remained hospitalised for a period of 3 days or longer. Secondary data analysis was the agreement between the clinical picture and individual biomarkers, as well as among the individual biomarkers themselves.
All parametric data were analysed using a t-test, and nonparametric data were analysed by means of the Wilcoxon rank-sum test, using Stata-13 (StataCorp, USA).
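As an illustration of the non-parametric comparison described above, the large-sample Wilcoxon rank-sum z-statistic can be computed in a few lines. This is a pure-Python sketch for demonstration (the study itself used Stata-13); the sample values are made up, and the variance omits a tie correction:

```python
import math

def rank_sum_z(a, b):
    """Large-sample z-statistic for the Wilcoxon rank-sum test (no tie correction)."""
    combined = sorted((v, grp) for grp, vals in (("a", a), ("b", b)) for v in vals)
    # Assign average ranks to tied values.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2          # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for k, r in ranks.items() if combined[k][1] == "a")
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    return (w - mean) / math.sqrt(var)

# Hypothetical admission temperatures, stay >= 3 days vs < 3 days:
z = rank_sum_z([38.1, 38.4, 38.9, 39.2, 38.6], [38.0, 38.3, 38.5, 38.2, 38.7])
print(round(z, 2))  # 1.15
```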
The study was adequately powered.
Results
Sixty-three patients were enrolled (age range 1 month - 4.5 years), and 33 (52.4%) were male. All children whose HIV status was unknown or who showed signs of being clinically immunocompromised were tested for HIV infection. Eight (12.7%) patients were confirmed to be HIV-positive; in 5 (62.5%) a significant organism was identified and in 2 (25.0%) multiple significant organisms were cultured. Two (3.2%) of the 63 patients were assessed as having malnutrition. Both patients were moderately acutely malnourished, and no organisms were cultured in either patient.
Thirty-seven (58.7%) patients were hospitalised for ≥3 days. The median temperature for all patients on admission (irrespective of duration of hospitalisation) was 38.4°C (range 38 - 40°C). There was no statistically significant association between temperature on admission and duration of stay (p=0.123).
An organism was cultured from the blood in 13 (25.5%) of the 51 patients in whom a blood culture was performed. All cultures were performed on admission. Commensal flora were isolated from 11 (84.6%) of the 13 positive blood cultures (10 coagulase-negative Staphylococcus spp., 1 Micrococcus sp.). The two significant blood cultures isolated extended-spectrum beta-lactamase-producing Klebsiella pneumoniae and Salmonella Group D.
Of the 54 children who had CXRs, 30 (55.6%) had pneumonic changes. Nasopharyngeal aspirates (NPAs) were collected from 10 (33.3%) of the 30 children with pneumonic changes on CXR, 3 (30.0%) of which were positive for respiratory syncytial virus (RSV) and 1 (10.0%) of which was positive for Pneumocystis jiroveci (on immunofluorescence). Ten (33.3%) of the 30 children with pneumonic changes on CXR had induced sputum specimens collected, one of which (in an HIV-positive child) was positive for acid-alcohol-fast bacilli (AFB) and one positive for Candida albicans (HIV-negative child). Twenty (37.0%) of the 54 children who had a CXR had positive findings of hyperinflation. NPAs were conducted in 5 (25.0%) of the 20 children who had hyperinflated CXRs, and 2 of these were positive, 1 for RSV and 1 for parainfluenza virus type 3.
Seven (70.0%) of the 10 children who had an elevated WCC and a CXR performed had positive CXR findings. Both patients with a decreased WCC had radiographic changes. The proportion of all CXRs with positive findings (n=42), with and without elevated or decreased WCC, was compared with those with normal CXRs (n=12). There was no correlation between WCC on admission and CXR findings of pneumonia or hyperinflation (p=0.145 and p=0.669, respectively).
A positive WCC (elevated or decreased WCC) in combination with positive CSF findings was found in 1 of the 3 patients who had a positive WCC and an LP. These positive CSF findings were based on biochemistry suggestive of meningitis, even though they had a negative CSF culture and PCR. Twenty-two LPs were conducted in this cohort.
There was no correlation between WCC on admission and duration of hospital stay (p=0.471). CRP was measured in 59 (93.7%) of the 63 patients enrolled in the study. A positive CRP (defined as a CRP >40 mg/L) [6] was found in 25 (42.4%) patients. Median (range) CRP was 31.0 (<1 to 336) mg/L. There was no correlation between CRP on admission and duration of stay (p=0.863). A positive CRP in combination with positive CXR findings was found in 15 (71.4%) of the 21 patients who had a positive CRP and had had a CXR. There was no correlation between CRP on admission and CXR findings of hyperinflation (p=0.087) or pneumonia (p=0.368).
A positive CRP in combination with a significant positive blood culture result was found in 2 (9.5%) of the 21 patients who had a positive CRP and in whom a blood culture was performed. A positive CRP in combination with positive CSF findings was found in 6 (60.0%) of 10 patients who had a positive CRP and in whom an LP was performed. Five out of the 6 positive CSF findings were based on biochemistry suggestive of meningitis. One was positive based on CSF PCR for enterovirus. PCT was measured in 25 (39.7%) of the 63 enrolled patients. The clinical profile of these patients varied greatly, and 11 (44.0%) were critically ill, requiring paediatric intensive care unit (PICU) admission. PCT was found to be positive (>0.2 µg/L) [7] in 21 (84.0%) of the 25 patients, 10 (47.6%) of whom were admitted to ICU. Five (50.0%) of these 10 died. Median (range) PCT was 5.7 (0 - 728) µg/L. Sixteen (76.2%) of the 21 children with a positive PCT remained hospitalised for 5 days or longer; however, statistically there was no correlation between PCT on admission and duration of stay (p=0.392).
A positive PCT in combination with positive CXR findings was found in 16 (80.0%) of the 20 patients who had a positive PCT and in whom a CXR was performed. A positive PCT in combination with a significant positive blood culture result occurred in 1 (4.8%) of the 21 patients who had a positive PCT and in whom a blood culture was performed. A positive PCT in combination with positive CSF findings occurred in 4 (50.0%) of the 8 patients who had a positive PCT and in whom an LP was performed. Three (75.0%) of the 4 were positive based on biochemistry suggestive of meningitis; 1 (25.0%) was positive based on CSF PCR for enterovirus.
All of the children enrolled in the cohort had urine dipsticks evaluated at admission, and 12 (19%) had positive urine dipsticks for leucocytes or nitrites. All of the urine specimens from children with positive urine dipsticks were culture-negative. Two (16.7%) of the children with positive urine dipsticks had viral pathogens isolated on stool specimens (positive on stool ELISA): adenovirus (n=1) and adenovirus and rotavirus (n=1). An organism was cultured from the urine in 8 (15.7%) of the children with negative urine dipsticks.
Of the 63 total enrolled patients, 11 (17.5%) required PICU admission and 5 (7.9%) patients died. Four of the five deaths occurred after the first 3 days of hospitalisation, with the median time to death being 10 days (range 2 - 20 days).
Discussion
Studies of the value of biomarkers for determining SBI in febrile children have yielded conflicting results. Some studies suggest that individual tests perform better than others, or than clinical judgement of bacterial v. viral infection, while other studies do not. [3,10-13] The insensitivity of routine microbiological methodologies in identifying bacterial infections, particularly bacteraemia, is well described, and molecular diagnostic techniques may in fact be superior to acute-phase reactants in detecting SBI in children. [14] In a study performed to understand the epidemiology of childhood bacterial diseases, including invasive pneumococcal disease (IPD), screening criteria were used to identify children less than 5 years of age who had signs and symptoms of SBI. [14] The study concluded that PCR and antigen testing increased the sensitivity of detection and provided a more precise estimate of the burden of invasive bacterial disease than bacterial culture. [14] That study does not, however, argue against the use of CRP, PCT and other inflammatory biomarkers.
CXRs and their correlating biomarkers have been shown to have good test sensitivity for pulmonary disease. [9] In our study, all patients with a decreased WCC, 80% of those with a positive PCT, 71.4% with a positive CRP and 70% with an elevated WCC had positive CXR changes. However, these biomarkers cannot be used to distinguish pneumonic changes from hyperinflation.
The results from the analysis of data collected from febrile young children presenting to SBAH reveal that fever, or degree of fever, does not predict severity of infection, source of infection or duration of hospitalisation. Degree of fever does not predict biomarkers of bacterial infection (WCC, CRP and PCT). Elevated biomarkers are not related to duration of hospitalisation, nor do they predict a positive blood culture. It is important to keep in mind, however, that the sensitivity of blood cultures is known to be low, and this is further impacted by the inordinately high (11/51; 21.6%) culture contamination rate at the study facility. The patient's clinical picture may be more valuable than CRP/WCC when deciding on the choice of antibiotics, whether or not an organism is cultured. CRP/WCC cannot be used to predict SBIs in febrile young children and therefore cannot be used to decide on the choice of antibiotics.
A low WCC is just as significant a marker of sepsis in children as a high WCC. The literature suggests that the risk of bacteraemia increases from 0.5% if the WCC is <15 × 10⁹/L to more than 18% if the WCC is >30 × 10⁹/L. [15] Findings from the literature [15] and from our study reveal that more children with a WCC abnormality secondary to sepsis present with a high WCC: twelve (85.7%) of the 14 patients with WCC abnormalities had an increased WCC.
There was a wide range of CRP findings, and CRP was specifically unhelpful in predicting infection severity when using cut-off values suggested by the literature. [6] PCT is an expensive biomarker and is therefore not usually performed on febrile children presenting to casualty; it is usually reserved for patients hospitalised in the PICU. Fourteen (56%) of the 25 patients in whom a PCT was performed were stable with various illnesses, while 11 (44%) were critically ill, requiring ICU admission. The limited number of PCTs done, because of cost restraints, as well as the wide variability of the positive PCT values obtained (>0.2 µg/L; range 3 - 728 µg/L), makes the data difficult to interpret. As 50% of patients with positive CSF results also had a high PCT, PCT may have a predictive role with regard to meningitis; however, the cohort is too small to confirm this finding.
The sterility of the blood culture techniques used in this study is questionable. Eleven (21.6%) of the 51 blood cultures taken grew a commensal organism. It can therefore be concluded that one-fifth of blood culture specimens were not taken using correct sterile procedures, which makes it difficult to distinguish true disease-causing pathogens from commensal organisms. This problem with specimen collection may also partly explain the significant number of negative cultures.
The results of this study question the validity of urine dipsticks, or the method of reading the test. All urine dipsticks are performed in the ward and read by nursing or medical staff, which may call into question the reliability of the staff's performance and interpretation of the test. Formal urine sample tests are superior to urine dipsticks. [16] This study revealed that a negative urine dipstick does not rule out a urinary tract infection (UTI) and a positive urine dipstick does not confirm a UTI.
Blood tests such as WCC, CRP and PCT may be of more value in assessing response to treatment, rather than predicting the severity of sepsis. Relying on them to withhold or start antibiotic therapy is not prudent. The choice of whether to use antibiotic therapy or when to start treatment should rather be based on clinical judgement, as is invariably the case in the clinical environment in busy hospitals in South Africa, where empiric antibiotic use is most often implemented before laboratory test results have become available.
Routine CXRs are not justified unless a bacterial pneumonia is clinically suspected. In this study CXRs were not used judiciously: they were conducted in cases of generalised sepsis, upper respiratory tract infections, viral pneumonias and acute gastroenteritis, without supportive clinical signs of bacterial pneumonia.
If there is any concern for bacterial meningitis, then an LP is still mandatory. CSF Gram stain, bacterial antigen and culture are extremely useful markers to aid in diagnosis, whereas there is no consensus regarding CSF cell count and biochemistry in differentiating between viral and bacterial meningitis. The empiric use of antibiotics in this case should also be based on clinical judgment and not biomarker testing, until the results of the Gram stain, bacterial antigen and culture are available.
This study was not without limitations. The sample was small and possibly poorly representative, with no control group and inconsistent sampling (owing to clinical discretion and cost constraints); the cohort also reflects only a fraction of the large number of febrile children presenting to hospital with undocumented or low-grade fever.
Conclusion
Fever or degree of fever does not predict severity of infection, nor source of infection, nor duration of hospitalisation. Elevated biomarkers (WCC, CRP, and PCT) are not related to duration of hospital stay; they do not predict positive culture results nor identification of significant organisms. Thus, WCC, CRP, and PCT were not shown to be effective in predicting SBIs in febrile children under 5 years of age. This study suggests that clinical suspicion of serious infection and appropriate action are as valuable as extensive testing.
Chabahar and Its Impacts on Regional Convergence
Unfortunately, during the last half century our country has been confronted with political and security problems that have deprived Afghanistan of a good standing in the region. Insecurity, political problems and the existence of terrorist groups have presented a black image of Afghanistan to the international community. However, with the overthrow of the Taliban regime, the influx of billions of dollars in aid from the international community, the formation of the interim and transitional governments, Hamid Karzai's presidency and, especially, the period of the National Unity Government (NUG), with major projects such as the Salma Dam, CASA-1000, the Lajward Road (also called the Lapis Lazuli Route) and the Chabahar Port, this black-painted image of Afghanistan has fortunately turned into a beautiful and remarkable one for the international community. Through such large projects, the Afghan government has shown its neighbors very well that Afghanistan is not just a consumer state and a monopoly of one or two neighboring countries; it has also made clear that there are many routes open to the country through which it can facilitate international trade at very low cost. The Chabahar Port showed that our beloved Afghanistan can act in the same way as other countries in the field of development and contribute to paving the way for comprehensive development. In conclusion, it can be claimed that the Chabahar Port is a much safer and more cost-efficient route for the country's trade, and the Afghan government can open up numerous opportunities such as the Chabahar Port by providing overall security and having an effective economic plan.

Key terms: economic convergence, transit, trade, good business governance
Introduction
As it is clear, Afghanistan is a landlocked country that has been fighting wars for almost half a century. The lack of security has deprived our beloved country of almost all its infrastructure and has stifled both the building of new infrastructure and the rebuilding of what exists. As mentioned above, Afghanistan is a landlocked country where none of its waterways, with the exception of the Amu River (Amu Darya), is suitable for the transport of commercial goods. The higher cost of overland transportation and transit therefore raises the prices of the commodities shipped by land, which reduces the competitiveness of our products in international markets.
Fortunately, with the relative security provided, efforts have been made to revitalize the infrastructure, and the NUG has focused its attention on foreign trade and has always strived to free Afghanistan from the monopoly of one or two countries. The Chabahar port is an ideal opportunity to meet Afghans' needs: it can connect Afghanistan to the world's open waters, greatly alleviate the problems of Afghanistan being landlocked, and make it easier for Afghan businessmen to use facilities such as 250 acres of land, cold-storage facilities, a National Bank of Afghanistan agency and direct flights from Kabul to Chabahar, transporting commercial goods along a short and secure route with the proud sign of 'Made in Afghanistan.'
Convergence in the World
After World War II, America's position as the sole economic and political power in the world gave the US access to the markets of other countries; at the same time, post-war Europe needed to be restored. The US intervened in the restoration and rebuilding of Europe, facilitating the creation of the World Bank and the International Monetary Fund (IMF) in 1944 and the signing of the General Agreement on Tariffs and Trade (GATT) in 1947 to facilitate trade between countries while ensuring that the interests of all the groups involved were maintained. The efforts of 1947, which led to the creation of the Economic Commission for Europe, were the first experience in establishing regional organizations. The process of establishing regional organizations is still ongoing, and the following important regional organizations can be mentioned:

1. European Economic Community
2. European Free Trade Association
3. Canada-United States Free Trade Agreement
4. North American Free Trade Agreement
5. ASEAN Free Trade Area
6. South Asian Association for Regional Cooperation
7. Asia-Pacific Economic Cooperation
8. Economic Cooperation Organization
A Brief Introduction to Chabahar
Chabahar has an area of 9739 square kilometers, located at 60°37′ east longitude and 25°17′ north latitude, in the path of the Indian Ocean's summer monsoon winds, which give it pleasant weather year-round with an average temperature of 23°C. Chabahar Port is part of Sistan and Baluchistan, bounded on the north by Iranshahr, on the east by the Pakistani border, on the south by the Oman Sea and on the west by Jask and Kohnuj. The port lies east of the Strait of Hormuz on the Oman Sea, north of the Indian Ocean, on the main shipping routes to Africa, Asia and Europe. It is Afghanistan's shortest and least costly transit route to world markets and can be a good alternative to Karachi Port.
Afghanistan from the point of view of Economic Integration and International Trade (Afghanistan's need for such agreements)
It is clear that Afghanistan has used the Karachi port for many years in its foreign trade; Pakistan, which has pursued a hostile policy towards Afghanistan in all aspects of its governance, has left Afghan foreign trade exposed to this hostility, and trade has repeatedly suffered harm from these policies. In addition to the long distance between Karachi Port and Kabul and the hostile policies of Pakistan, our beloved country Afghanistan has a number of other restrictions on foreign trade, according to a World Bank investigation. The following sections shed light on them.
A: Afghanistan being landlocked
Landlocked countries are more vulnerable in foreign trade than non-landlocked ones and trade less: research has shown that the foreign trade of landlocked countries amounts to as little as 30% of that of countries which are not landlocked. Unfortunately, Afghanistan's waterways, except for the Amu River, are not conducive to the transfer of commercial goods.
B: Geographic Location of the Afghan Mountains:
Another challenge to Afghan foreign trade, and to Afghanistan as a transit country for other countries' trade, is the geographical location of the Afghan mountains, which run east to west through the middle of the country, creating obstacles for the transit of commercial goods.
C: Shipping costs
The economic development of countries has made the importance of freight transport very clear, as the world has witnessed economic convergence towards independent and tariff-free trade. Practical experience shows, however, that in foreign trade freight costs are higher than tariffs and form a bigger barrier than tariffs. These costs depend on the mode of transportation, distance, transshipment from ship to ship or from ship to lorry (transferring cargo from ship to lorry costs seven times more than transferring it from one ship to another), administrative barriers, and the number of ports and borders the commodities pass through.
The above limitations do not imply that Afghanistan is not a viable country for trade and transit. For Afghanistan to become a developed country, it must draw on the experiences of other countries with regional organizations and sign regional agreements that guarantee the common interests of all the countries involved, especially neighboring countries such as Iran, India, Tajikistan, Turkmenistan, Pakistan and China. The outcome of regional agreements and the development of infrastructure can eliminate the above limitations and increase the country's share of foreign trade.
The Benefits of Joining Afghanistan in Chabahar Port
In general, the establishment of regional organizations, in addition to enhancing the political power of the countries in a region, brings economies of scale, division of labor and specialization at the regional level, and expands product markets in the region. As for the Chabahar Port agreement, we can claim that it is a stable and vital measure by the Afghan government in good governance of commerce, providing good business opportunities for Afghanistan; currently goods are transferred to other countries through the Karachi and Bandar Abbas ports, and Chabahar can be a good alternative to both. Chabahar is 90 km and 700 km closer to Afghanistan than the Bandar Abbas and Karachi ports respectively, lowering the cost of transporting each container of commercial goods by between $500 and $1,000.
In addition, Afghanistan's access to the Chabahar port offers other opportunities for Afghanistan, which can be summarized as follows:

Chabahar as an alternative to Karachi Port

According to the information obtained, about three-fourths of Afghan trade is carried out through the Karachi Port, and the political relations between the two states have had a negative impact on Afghanistan's foreign trade. The Karachi Port route still passes through border areas under the control of armed insurgent groups that can pose a significant threat to Afghan businessmen, while the 700-kilometer-longer route to Karachi increases the shipment cost of commercial goods and reduces the competitiveness of Afghan products in international markets. The transfer of goods through the Chabahar port can thus be a shorter and safer route than Karachi, with lower costs.
Sufficient facilities for our businessmen

According to the agreement signed between the government of the Islamic Republic of Afghanistan and the government of the Islamic Republic of Iran, 250 acres of land have been provided to the Afghan government for fifty years, presenting a unique opportunity for our national traders to build infrastructure, factories and cold-storage facilities and to export their commercial goods from this port, with further facilities, to other parts of the world. Under the two governments' agreement, the Afghan government has also been allowed to establish an Afghan National Bank agency in Chabahar Port, which can facilitate the necessary cash flow for Afghan businessmen. Direct flights are also allowed from Kabul to Chabahar Port and from Chabahar to Dubai, which could help in replacing the country's imports and boosting exports.
Exporting domestic products with the country's name and emblem

Experience has shown that Pakistan has fraudulently exported many of its neighboring countries' products to world markets under its own name and badge, obtaining monetary benefit and reputation from this source; for instance, Afghan carpets have been exported to countries around the world under a Pakistani name and badge. The treaty and principles now in place make it possible to export commodities with our country's own name and emblem from the Chabahar port.
Afghanistan's Potential for Foreign Trade

Afghanistan is a country with many potential foreign trade opportunities, such as iron ore, coal, precious stones and oil. By turning these potential opportunities into actual ones and exporting these resources to the international markets through the Chabahar Port, we can strengthen our country's economy.
Tripartite Chabahar Port Agreement and Its Need
The common ground that the three countries (Afghanistan, India and Iran) have included in the trilateral agreement is their geographical location and their common interests, which can be a good basis for trading with each other. Afghanistan is a landlocked country and needs a cost-efficient, reliable and easy water port for its foreign trade; Iran is thinking of new markets for oil sales after severe Western sanctions; and India wants sufficient oil through the port for its production and, on the other hand, can use the Chabahar port to export its products to Middle Eastern markets with lower transportation costs.
Conclusion
Chabahar is a golden opportunity for Afghanistan that can shape the fate of Afghans and contribute to sustainable development for Afghanistan. By using its available potential and making the best use of the Chabahar port, Afghanistan can find itself a unique economic and political position in the world, and particularly in the region.
As the interests of the countries of the region are intertwined in the Chabahar port, this sharing of interests could be another opportunity for the Afghan government: it can use it to promote the permanent stability of the country and to make the countries of the region understand that a prosperous and secure Afghanistan can be very useful at the regional level, rather than a dependent country and a safe haven for terrorists. Moreover, with this sharing of economic interests, a regional consensus can be established to ensure stability in the country. With the stabilization of the country, government attention to infrastructure construction, and extensive use of the Chabahar port, we can tackle the problems of foreign trade on the one hand and, on the other, enable Afghanistan to act as the crossroads of Asia and take firm steps towards sustainable development through trade and transit.
Treatment of post-operative orchialgia with therapeutic unilateral penile and spermatic cord block
Sir, Chronic testicular pain is common and well recognized, but its pathophysiology is poorly understood. Non-invasive treatment techniques include drugs such as non-steroidal anti-inflammatory drugs (NSAIDs), tricyclic antidepressants, gabapentin, carbamazepine and α-adrenergic antagonists. Minimally invasive techniques include Transcutaneous Electrical Nerve Stimulation (TENS) analgesia and pulsed radiofrequency of nerves. Transrectal periprostatic administration of lignocaine and methylprednisolone has been reported in the literature for the treatment of chronic orchialgia. [1] Spermatic cord block anaesthesia has also been used successfully for this purpose. [2,3] We describe a case of post-surgical orchialgia treated successfully with a therapeutic penile block using 0.5% bupivacaine and methylprednisolone 40 mg/mL.
A 22-year-old male was referred to the pain clinic with a complaint of burning pain in the right testis and scrotum that was not responding to NSAIDs such as ibuprofen. He had been operated on for a spermatocoele on the right side. From the first post-operative day, he had a continuous burning sensation in the right testis and scrotum, not radiating to the left side or to the lower abdomen. Because of the continuous pain he was unable to do any work and lost his wages, and his sexual desire was also adversely affected. The pain score on a Visual Analogue Scale (VAS) of 0-10 was 9 on the affected side. His general and systemic examination revealed no abnormality. On local examination, the right testis and scrotum were tender; the surgical scar was healthy. There was an irregular swelling of 4 mm × 4 mm on the right testis and no evidence of fluid in the scrotum. All investigations were within normal limits.
Along with NSAIDs, he was treated with oral tramadol 50 mg BD, carbamazepine 100 mg BD and gabapentin 400 mg BD for 4 days without success. It was then decided to give interventional treatment with a therapeutic block. The drugs used were 1 cc of 0.5% bupivacaine and 1 cc of methylprednisolone (40 mg/mL) taken in a single 2 cc disposable syringe. Under all aseptic precautions, 1.5 cc of the combination was injected at the right pubic tubercle. The spermatic cord was rolled between the fingers near the base of the scrotum and 0.5 mL of the drugs was injected around it. Immediately after the injection, the pain score on VAS was 0. Prophylactic oral antibiotic ciprofloxacin 500 mg BD was advised for 5 days. The patient was comfortable for the next 15 days, after which he presented again with the same complaints and a VAS pain score of 9. The therapeutic block described above was repeated, and immediately after the injection the VAS pain score was again 0. This time the relief lasted for 2 months, after which he presented with a VAS pain score of 3; this was treated with NSAIDs for 7 days, with complete relief.
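As a rough check on the doses delivered by the 2 cc mixture described above, the arithmetic can be sketched as follows (a minimal illustration; the 5 mg/mL figure for 0.5% bupivacaine is standard percent-to-concentration arithmetic, not a value stated in the letter):

```python
# 1 mL of 0.5% bupivacaine (0.5 g/100 mL = 5 mg/mL) mixed with
# 1 mL of methylprednisolone 40 mg/mL dilutes each drug 1:2.
bupi_mg_per_ml = 5.0 / 2    # 2.5 mg/mL bupivacaine in the mixture
mp_mg_per_ml = 40.0 / 2     # 20 mg/mL methylprednisolone in the mixture

for site, vol_ml in [("pubic tubercle", 1.5), ("spermatic cord", 0.5)]:
    print(f"{site}: {bupi_mg_per_ml * vol_ml:.2f} mg bupivacaine, "
          f"{mp_mg_per_ml * vol_ml:.0f} mg methylprednisolone")
# pubic tubercle: 3.75 mg bupivacaine, 30 mg methylprednisolone
# spermatic cord: 1.25 mg bupivacaine, 10 mg methylprednisolone
```

Both doses are well below conventional maxima, consistent with the block being repeatable at follow-up.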
The cause of post-operative orchialgia is injury to the nerves, which can be explained by the phenomenon of neural plasticity: disease or injury may result in changes at all levels of the nervous system, producing amplified pain messages. Another explanation of post-injury/post-surgical chronic pain syndromes is the development of sprouting between axons, which can occur either at the level of the dorsal root ganglion or at the dorsal horn. Light-touch stimuli are then re-routed into the pain pathway and felt as pain by the patient.
In the absence of any findings that require surgical treatment, conservative treatment is advised. [4] After initial measures like NSAIDs, tricyclic antidepressants and gabapentin fail, minimally invasive techniques are the next step.
Spermatic cord block anaesthesia using a local anaesthetic and a steroid like methylprednisolone can be used to treat this condition. [2,3] Methylprednisolone acts by directly blocking the "C" fibres. [5] It also reduces the neurilemmal oedema. With repeated injections, the intensity of aberrant signals is brought down significantly and the condition becomes manageable with simple analgesics.
Sunil G Patil, Subodh S Kamtikar
Department of Anaesthesia, Bidar Institute of Medical Sciences, Bidar, Karnataka, India
|
2018-04-03T00:55:46.918Z
|
2012-05-01T00:00:00.000
|
{
"year": 2012,
"sha1": "447b29147983bc5fc05636138b8c54b4cf19703d",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0019-5049.98800",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "95f989d40e471e9aba781d65f64909cbd326899d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
271158463
|
pes2o/s2orc
|
v3-fos-license
|
Research progress on antitumor effects of sea buckthorn, a traditional Chinese medicine homologous to food and medicine
Sea buckthorn (Hippophae Fructus), a species homologous to both medicine and food, is widely used by Mongolians and Tibetans for its anti-tumor, antioxidant and liver-protecting properties. In this review, the notable anti-tumor effect of sea buckthorn was first identified through network pharmacology, and active components such as isorhamnetin, quercetin, gallic acid and protocatechuic acid were found to have significant anti-tumor effects. The research progress and application prospects of sea buckthorn and its active components with respect to tumor types, mechanisms of action, liver protection, radioprotection and toxicology are reviewed, providing a theoretical basis for the development of sea buckthorn products in anti-tumor research and clinical application.
accompanied by abdominal pain due to food accumulation; coughs with excessive phlegm production along with chest congestion causing heartache; menstrual disorders caused by blood stasis accumulation; as well as injuries resulting in hematoma formation leading to pus accumulation alongside swelling. Recent pharmacological research has demonstrated that sea buckthorn possesses noteworthy therapeutic properties in the management of cardiovascular diseases, anti-tumor activity, anti-oxidation, and liver protection. Importantly, the medicinal benefits associated with sea buckthorn extend beyond its fruit alone: its leaves, oil, and seeds also have medicinal value. Cancer, also referred to as malignant tumors, is characterized by aberrant mutations in normal cells that undergo uncontrolled and excessive proliferation, eventually leading to metastasis. According to the latest report in 2020, there were approximately 19.29 million new cases of malignant tumors worldwide, with a staggering 9.96 million deaths attributed to this disease. Furthermore, it is projected that by 2040 there will be an estimated 28.4 million new cancer cases globally (2). As of July 2019, China's tumor registry encompassed a population of around 438 million individuals, accounting for approximately 31.5% of the country's total populace. Over the past four decades, China has witnessed a significant surge in the burden of cancer, highlighting the urgent need to address this ailment as one of the most critical public health challenges of the twenty-first century (3). Currently, the management of malignant tumors primarily encompasses surgical resection, chemoradiotherapy, photothermal therapy, gene therapy, immunotherapy, and other modalities (4). The treatment of cancer is contingent upon its stage of progression; early detection leads to improved therapeutic outcomes and prolonged survival. In the initial phases of cancer development, lesions can be surgically excised to achieve maximal radical intervention. However, a majority of patients are diagnosed during intermediate or advanced stages, when treatment becomes challenging.
Due to the global prevalence of diet-related chronic diseases, the concept of Food is Medicine was proposed by Downer et al. (5), highlighting its potential for managing and treating patients with chronic illnesses. Chinese medicine has long embraced the belief that "medicine and food have the same origin," as evident in ancient texts like the Huangdi Neijing, which states that food consumed on an empty stomach serves as nourishment, while for patients it is medicinal. Recognizing this synergy, the National Health and Medical Commission has identified a total of 110 traditional Chinese medicines that possess both nutritional and medicinal properties, including sea buckthorn, with ongoing efforts to expand this list further. Traditional Chinese medicine (TCM) plays a pivotal role in cancer prevention and treatment (6), primarily serving as adjuvant therapy. By modulating the internal environment of the body, it can effectively impede tumor growth and reduce metastasis. Additionally, TCM has the potential to regulate immunity and alleviate patients' discomfort and adverse reactions during radiotherapy and chemotherapy (7). In this review, we comprehensively examine the anti-tumor mechanisms of key active compounds found in sea buckthorn. Furthermore, we investigate the protective effects of sea buckthorn on liver function and against radiation-induced damage. Considering its dual role as both medicine and food, we also explore the toxicity profile and applications of sea buckthorn. Please refer to Figure 1 for a visual representation of our research flowchart.
Screening of sea buckthorn related pathways
We employed bioinformatics methods, utilizing the TCMSP 1 and DAVID 2 online databases, Cytoscape 3.9.1 software, and the bioinformatics online platform 3, to conduct enrichment analysis of the active components of sea buckthorn and their targets (Figure 2; Table 1). The findings reveal that a majority of genes are enriched in cancer-related pathways, including Small cell lung cancer and Colorectal cancer. Consequently, our focus in this review is directed towards investigating the effects of sea buckthorn on tumors (Figure 3).
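The enrichment statistic behind DAVID-style pathway analysis is a one-sided hypergeometric (Fisher-type) test on the overlap between the target list and each pathway. A minimal sketch, using only the standard library; the gene counts below are illustrative assumptions, not values from this study:

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k): probability that at least k of n target genes fall in a
    pathway containing K genes, drawn from a background of N genes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Illustrative only: 20,000 background genes, a 90-gene pathway,
# 150 predicted sea buckthorn targets, 12 of them in the pathway.
p = enrichment_p(20_000, 90, 150, 12)
print(f"p = {p:.2e}")  # a very small p-value, i.e. strong enrichment
```

In practice the raw p-values would be corrected for multiple testing across all pathways (e.g. Benjamini-Hochberg), which the tools named above handle internally.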
Flavonoid
Flavonoids are generally considered the primary active ingredients of sea buckthorn. Determination of total flavonoid content in different parts of different sea buckthorn varieties shows that the richest part accounts for about 76% of the total, the fruit for about 14%, and the seed, with the lowest content, for about 10% (8). Currently, over 50 flavonoids have been identified from sea buckthorn fruit, including quercetin, isorhamnetin, kaempferol, and other flavonoid aglycones; these combine with glucose, rhamnose-rutin, and other sugar groups to form flavonol glycosides. Among these compounds, isorhamnetin derivatives account for 65% of total flavonols, while quercetin derivatives make up 25%. Isorhamnetin is a natural small-molecule flavonoid also known as 3,5,7-trihydroxy-2-(4-hydroxy-3-methoxyphenyl)benzopyran-4-one (9). Relevant studies have revealed notable variations in the types and compositions of flavonoids among different subspecies, varieties, and origins of sea buckthorn. Quercetin is commonly present in the flowers, leaves, and fruits of numerous plants, primarily as glycosides. It exhibits pharmacological properties such as antioxidation, anti-inflammatory effects, hypoglycemic activity, anticancer potential, and prevention and treatment capabilities for cardiovascular and cerebrovascular diseases (10). The flavonoid yield from sea buckthorn also depends on the extraction method: ultrasonic and microwave extraction each yield about 22 mg/g, while aqueous two-phase extraction yields only 12 mg/g. The microwave method takes less time, but the ultrasonic and aqueous two-phase methods offer higher safety (11).
Polyphenols
Sea buckthorn is rich in over 30 polyphenolic compounds; the total polyphenol content of sea buckthorn leaves is about 3 to 4 times that of the fruit (12). The predominant polyphenols are gallic acid and protocatechuic acid, along with p-hydroxybenzoic acid, vanillic acid, and salicylic acid, among others. Notably, gallic acid exhibits antibacterial, antiviral, and antitumor properties (13). The abundance of polyphenolic compounds in sea buckthorn contributes significantly to its cardioprotective role. Protocatechuic acid (3,4-dihydroxybenzoic acid) is the primary metabolite of anthocyanins and possesses antioxidant, antibacterial, anti-inflammatory, and anti-tumor effects (14).
Fatty acids
The oil content of sea buckthorn in Central Asia reaches an impressive 22.57%, while in China it ranges from 2.38 to 12.07%. Sea buckthorn fruit oil is rich in fatty acids, with significant variations among varieties and origins; the fatty acid content of sea buckthorn fruit is about 5%, and that of the seed is about 70% (15). The predominant fatty acids in sea buckthorn are unsaturated, including palmitoleic acid, palmitic acid, oleic acid, linoleic acid, and linolenic acid. Notably, the content of palmitoleic acid can be as high as 32 to 53%. Relevant research has demonstrated that palmitoleic acid exhibits potential for preventing, controlling, and improving chronic metabolic diseases and inflammation (16). In the extraction of sea buckthorn fruit oil, organic solvent extraction achieves an oil yield of up to 22-28%, but its safety is low; pressing is simple to perform but yields less than 1%; enzymatic, supercritical CO2, and ultrasound-assisted enzymatic extraction yield between 2 and 6% (17).

(Figure captions: "Screening of related targets and pathways of sea buckthorn"; "Main active components, structural formulas and types of cancer treated by sea buckthorn.")
Other
Sea buckthorn is enriched with bioactive compounds including triterpenoids, steroids, alkaloids, and β-carotene. Furthermore, its pharmacological potential against tumor growth has been substantiated through pertinent research studies.
Isorhamnetin
The anti-tumor potential of isorhamnetin has garnered significant attention in recent years, demonstrating a comprehensive range of anti-tumor activities, including the inhibition of cell proliferation and migration and the induction of cell apoptosis (Table 2). Notably, treatment with isorhamnetin severely disrupted the morphology of AGS-1 and HGC-27 cells. Furthermore, joint staining analysis using Caspase-3 and Annexin V revealed that the activation of apoptosis induced by isorhamnetin primarily relied on Caspase-3 activation. Importantly, subsequent CCK-8, transwell, and wound-healing assays confirmed that isorhamnetin also effectively inhibited gastric cancer cell proliferation and migration (18). In HT-29 colon cancer cells, isorhamnetin's chemoprotective properties against colon cancer are attributed to its anti-inflammatory activity as well as its inhibition of Src-mediated carcinogenesis, leading to the subsequent loss of nuclear beta-catenin that relies on CSK expression (19). Furthermore, studies conducted on GBC-SD and NOZ cell lines demonstrated that isorhamnetin effectively suppressed cell proliferation and metastasis in gallbladder cancer by deactivating the PI3K/AKT signaling cascade. Additionally, it induced apoptosis while blocking G2/M phase progression in GBC cells (20).
Isorhamnetin was found to decrease the phosphorylation levels of MEK and ERK in the Ras/MAPK pathway of PANC-1 cells, leading to a significant inhibition of cell growth through S phase block.
Additionally, wound healing experiments demonstrated that isorhamnetin significantly reduced the migration ability of PANC-1 cells (21). Furthermore, isorhamnetin exhibited inhibitory effects on breast cancer cell proliferation by down-regulating MMP2 and MMP9 protein expression levels. Notably, overexpression of ESR1 promoted breast cancer cell proliferation, migration, and invasion; however, these results were reversed upon knocking down ESR1. The observed inhibitory effect of isorhamnetin on breast cancer cells was attributed to its ability to suppress ESR1 gene expression (22). In the intervention of prostate cancer cells, isorhamnetin exhibits its potential by promoting apoptosis through downregulating the expression of the anti-apoptotic protein Bcl-2 and upregulating the levels of the pro-apoptotic proteins Bax and cytochrome C. Additionally, it plays a crucial role in suppressing metastasis by enhancing E-cadherin expression while reducing vimentin and N-cadherin expression, as well as MMP2 and MMP9 activities. Furthermore, evaluation of the PI3K/AKT/mTOR pathway confirms that isorhamnetin effectively inhibits this signaling cascade, thereby exerting anticancer effects (23). Moreover, isorhamnetin induces G2/M phase arrest via binding to Cdk1 and inhibiting its activity through both endogenous and exogenous pathways. It also upregulates Fas, FasL, and Bax protein levels while downregulating anti-apoptotic Bcl-2 expression to induce apoptosis in bladder cancer cells, ultimately restraining their proliferation (24). In a breast cancer study, isorhamnetin was found to exert its effects through the inhibition of the Akt/mTOR and MEK/ERK signaling pathways, thereby promoting apoptosis and inhibiting cell proliferation (25). In the investigation conducted by Luo et al. (26), it was demonstrated that isorhamnetin effectively blocks the Akt/ERK1/2 signaling pathway, leading to the inhibition of epithelial-mesenchymal transition (EMT) and subsequent suppression of lung cancer cell metastasis. Additionally, Ye et al. (27) reported that isorhamnetin facilitates cell apoptosis by inducing endoplasmic reticulum stress via both endogenous mitochondrial apoptotic pathways and exogenous death receptors. Furthermore, this compound exhibits an ability to regulate MMP2 and MMP9 protein levels, thus impeding cell metastasis. The expression levels of Bax and Caspase-3 were upregulated, while the expression level of Bcl-2 was downregulated, upon isorhamnetin intervention in the mouse skin melanoma cell line B16F10. These findings provide evidence for the pro-apoptotic ability of isorhamnetin through the inhibition of the PI3K/Akt and NF-κB signaling pathways, with its inhibitory effect being associated with PFKFB4 (28). Furthermore, Juan Wei et al.'s study on cervical cancer cells demonstrated that isorhamnetin effectively hindered cell cycle progression at the G2/M phase by suppressing protein expression of cyclin B1, cell division cycle 25C (Cdc25C), and Cdc2 (29).
Quercetin
Quercetin, a flavonoid compound, exhibits anti-tumor, anti-inflammatory, and analgesic properties and exerts protective effects on the cardiovascular and cerebrovascular systems (Table 3). Pertinent evidence demonstrates that treatment with quercetin in HT-29 cells results in growth inhibition, alterations in cell morphology, and induction of apoptosis (30). In the liver cancer cells SMMC7721 and HepG2, quercetin activates autophagy by inhibiting the AKT/mTOR pathway while activating the MAPK signaling pathway, leading to the suppression of cell proliferation and initiation of apoptosis (31). Quercetin exhibits its anti-proliferative effects on pancreatic cancer cells by down-regulating c-Myc expression and suppressing EMT levels through the reduction of TGF-β1; it also effectively hinders cell migration and invasion (32). In an investigation involving AGS cells, quercetin induced apoptosis via activation of the MAPK signaling pathway and modulation of TRPM7 channel activity (33). Notably, quercetin significantly impedes the proliferation of human esophageal cancer Eca109 cells in a time- and dose-dependent manner while concurrently inducing their apoptosis (34). Ren et al. (35) demonstrated that quercetin inhibits the proliferation of ovarian cancer cells, impedes cell cycle progression from G0/G1 to G2/M phase, and induces apoptosis in vitro. Ward et al. (36) discovered that quercetin effectively triggers apoptosis and secondary necrosis in three distinct types of prostate cancer cells; further investigations revealed that this anti-prostate-cancer efficacy is mediated through regulation of the ROS, Akt, and NF-κB pathways. Lee et al. (37) demonstrated that quercetin can activate AMPK through the generation of ROS in breast cancer cells, leading to the inhibition of COX-2 expression and thereby exerting antiproliferative and pro-apoptotic effects. Subsequently, re-treatment resulted in cell cycle arrest at the sub-G1 phase, upregulation of apoptosis-related genes, and downregulation of the survival gene VEGF. Moreover, quercetin was found to enhance the expression levels of LC3-II and Beclin 1 while inhibiting p62 expression. It also increased the SIRT1 protein level and the pAMPK/AMPK ratio, ultimately inducing mitochondria-dependent apoptosis and autophagy while suppressing cell viability (38). Finally, in HeLa cervical cancer cells, quercetin intervention downregulated genes implicated in the G2/M phase of the cell cycle (CCNB1, CCNB2, and CDK2), relevant genes within the MAPK, PI3K, and WNT pathways, genes involved in cellular migration (MMP14, MMP9, and MTA1), and anti-apoptotic proteins, while pro-apoptotic protein expression was upregulated. It can therefore be deduced that quercetin effectively impedes cell cycle progression, specifically at the G2/M phase, while inhibiting migration and proliferation and inducing apoptosis by suppressing MAPK-, PI3K-, and WNT-associated signaling pathways (39).
Gallic acid
Gallic acid typically appears as white or yellowish needle-like crystals, soluble in water and ethanol. It possesses a diverse range of physiological activities, including antioxidant, antibacterial, and anti-tumor properties (Table 4). Gallic acid has been found to inhibit the proliferation of TE-1 cells derived from human esophageal cancer by impeding their migration and colony-forming ability while promoting apoptosis. This effect is accompanied by an elevation in ROS levels and up-regulation of the pro-apoptotic proteins Caspase-3, Caspase-9, and Bax, while the expression of the anti-apoptotic protein Bcl-2, along with cyclin D1 and cyclin D3, is down-regulated (40). Furthermore, gallic acid demonstrates inhibitory effects on HCT-116 and HT29 cells by suppressing SRC and EGFR phosphorylation, leading to reduced proliferation of colon cancer cells and the induction of apoptosis (41). Gallic acid exerted a significant inhibitory effect on the migration of AGS cells, potentially mediated in part through modulation of the Ras/PI3K/AKT signaling pathway (42). Gallic acid was found to induce apoptosis in MIA PaCa-2 cells via activation of the mitochondrial signaling pathway, involving the Bcl-2 and Bax proteins: treatment down-regulated Bcl-2 protein expression while up-regulating Bax (43). In studies of ovarian cancer cells, gallic acid arrested cell cycle progression at the S/G2 phase by reducing the levels of the cell-cycle-related proteins CDC2, p-Cdc2, and cyclin B, and activated an intrinsic, Caspase-3-mediated apoptotic pathway through upregulation of p53 (44). Lin et al. (45) showed that gallic acid exerts its apoptotic and anti-proliferative effects by inhibiting the PI3K/AKT/EGFR pathway while activating the MAPK signaling pathway. This process is accompanied by a reduction in MMP levels and an increase in ROS production, suggesting that apoptosis may be mediated through the mitochondrial apoptotic pathway and that gallic acid induces oxidative stress within cells. In bladder cancer studies, gallic acid has been shown to modulate cell proliferation via the PI3K/AKT and MAPK/ERK pathways, as well as to inhibit bladder cancer cell invasion and migration through regulation of p-AKT/MMP2 signaling (46). Bing Zhao and Mengcai Hu (47) demonstrated in their study on cervical cancer cells that gallic acid inhibits the expression of ADAM17, EGFR, p-AKT, and p-ERK, thereby effectively impeding the progression of cervical cancer. In a separate investigation focusing on non-small-cell lung cancer, gallic acid dose-dependently suppressed cell proliferation and induced up-regulation of p53 expression through inhibition of the PI3K/AKT pathway, thereby modulating the expression of cell-cycle-related proteins as well as endogenous apoptotic proteins (48). Regarding gallic acid's impact on the migratory capacity of nasopharyngeal carcinoma cells, it primarily diminishes the expression of two crucial transcription factors, AP-1 and ETS-1, at the MMP1 promoter by inhibiting the p38 MAPK signaling pathway; upregulation of TIMP-1 expression further impedes MMP1 expression, thereby restraining tumor invasion (49). In a study conducted by Kaur et al. (50), gallic acid reduced prostate cancer cell activity and induced apoptosis; this effect was not observed in normal PWR-1E cells. The researchers subsequently performed xenotransplantation experiments in animal models to validate gallic acid's anticancer effects in vivo. Gallic acid exhibits anti-tumor effects on brain gliomas by inhibiting the expression of ADAM17, p-AKT, and p-ERK, thereby suppressing the PI3K/Akt and Ras/MAPK signaling pathways to mitigate tumor cell aggressiveness (51). In osteosarcoma cells, gallic acid downregulates lncRNA H19 expression, disrupting Wnt/β-catenin signaling and impeding osteosarcoma development (52). A study of gallic acid's promotion of apoptosis in oral cancer cells found that it activates CK II, leading to BIK-BAX/BAK-mediated, endoplasmic-reticulum-associated, ROS-dependent apoptosis (53).
Protocatechuic acid
Protocatechuic acid, a gray-to-brown solid crystalline powder commonly found in Chinese herbs and foods, has been extensively studied for its potential anti-tumor effects. Notably, it has demonstrated the ability to induce apoptosis in tumor cells and inhibit cell proliferation across various tissues (Table 5) (60). In a study conducted by Punvittayagul et al. (54), protocatechuic acid exhibited anticancer properties in rats with diethylnitrosamine-induced hepatocarcinoma by effectively suppressing inflammation and proliferation and promoting apoptosis. Furthermore, protocatechuic acid was found to impede HO-1-mediated activation of p21, thereby inhibiting colorectal cancer cell viability and inducing apoptosis (55). In studies pertaining to esophageal cancer, protocatechuic acid inhibited tumorigenesis and inflammatory signaling, thereby suppressing the development of N-nitrosomethylbenzylamine-induced esophageal cancer (56). Motamedi et al. (57) demonstrated that protocatechuic acid effectively impedes colony formation in AGS cells by restraining cell proliferation and promoting apoptosis, primarily by upregulating p53 expression and downregulating Bcl-2 expression. Furthermore, the combination of protocatechuic acid with 5-fluorouracil enhances its anti-tumor efficacy. Additionally, protocatechuic acid inhibits MMP2 expression via the RhoB/PKCε and Ras/Akt cascade pathways, leading to suppression of tumor cell migration and invasion (58). Notably, for mouse breast cancer 4T1 cells, the anti-metastatic effect does not appear to be associated with MMP2 (59).
Liver and radiation protection
The liver is the primary organ responsible for drug metabolism and is susceptible to drug-induced damage. The mechanism underlying drug-induced liver injury primarily involves the direct toxic effects of drugs and their intermediates on the liver, as well as specific reactions elicited by the body towards these drugs. According to relevant surveys, approximately 15% of anti-tumor medications are associated with drug-induced liver injury (61). Consequently, in clinical practice, hepatoprotective agents are often co-administered with anti-tumor drugs to mitigate potential hepatic harm. Furthermore, chemoradiotherapy represents a crucial therapeutic approach for malignant tumors; however, it not only eradicates tumor cells but also inflicts damage upon healthy tissue cells.
Isorhamnetin
By downregulating the TGF-β1/Smad3 and TGF-β1/p38 MAPK pathways, isorhamnetin can decrease HSC activation and ECM formation, confirming that it protects mice against CCl4-induced liver fibrosis (62). In a mouse model of concanavalin A-induced acute hepatitis, isorhamnetin improved the pathological injury of liver tissue, lowered serum liver enzyme and pro-inflammatory factor levels, and down-regulated the levels of Bax, cleaved Caspase-3, cleaved Caspase-9, Beclin-1, and p-P38/P38, while up-regulating PPAR-α; its hepatoprotective effect was thus achieved by inhibiting autophagy and apoptosis through the P38/PPAR-α signaling pathway (63). The combination of arachidonic acid and iron can induce mitochondrial malfunction and cell death, which isorhamnetin can prevent. After the AMPK upstream kinase CaMKK2 was knocked down, AMPK phosphorylation decreased, suggesting that isorhamnetin reduces mitochondrial apoptosis and oxidative stress primarily through AMPK; it is therefore considered a potential agent for the prevention of liver disease (64). Because isorhamnetin promotes ATM activation and the recruitment of the DNA repair factor 53BP1 in irradiated cells, it can prevent the development of radiation-induced gastrointestinal syndrome in mice (65).
Quercetin
Quercetin can lessen the acute liver damage brought on by CCl4, a defense that may result from quercetin's high antioxidant capacity (66). Through a mechanism mostly associated with the reduction of Notch1 expression, quercetin can also limit M1 macrophage recruitment, polarization, and the production of inflammatory markers, thereby reducing liver inflammation and fibrosis (67). Quercetin has been shown to improve liver-function-related parameters, ameliorate hepatic pathological tissue, suppress oxidative stress and apoptosis by lowering P53 and TNF-α, and prevent liver toxicity in the dobiculoxin-induced liver injury rat model (68). In a related investigation on radiation-induced brain damage, quercetin inclusion complexes have been shown to influence the gut microbiota by modulating the microbiota-gut-brain axis, reducing intestinal permeability and inflammation in model mice and thereby improving the radiation-induced damage to the brain overall (69). Radiation therapy can cause side effects in cancer patients, including oral mucositis; by increasing BMI-1, quercetin can enhance wound healing by lowering the release of inflammatory agents and reactive oxygen species (70).
Gallic acid
The degree of liver tissue injury in the CCl4-induced Wistar rat liver injury model slightly improved following the addition of gallic acid; its hepatoprotective effects were attained by downregulating pro-inflammatory indicators, scavenging free radicals, suppressing malondialdehyde levels, and activating antioxidant enzymes (71). Additionally, gallic acid can lessen the liver damage brought on by anti-tuberculosis medications, mostly through inhibition of NF-κB to reduce liver toxicity and activation of Nrf2 and its downstream pathway to reduce drug-triggered cytotoxicity (72). After x-ray irradiation followed by intragastric administration of gallic acid, examination of mouse liver tissues indicated that prophylactic gallic acid could boost the activity of antioxidant enzymes in irradiated liver tissue, diminish oxidative and DNA damage in liver cells, and protect the mouse liver from radiation (73). Furthermore, because salivary acinar cells are highly susceptible to radiation and can become dysfunctional during radiotherapy, gallic acid can regulate TLK1/1B to counteract genotoxicity, increasing cell survival and aiding DNA repair to reduce radiation toxicity (74).
Protocatechuic acid
In an oxidative stress model in which human hepatocellular carcinoma HepG2 cells were challenged with hydrogen peroxide, protocatechuic acid was found to safeguard hepatocytes from the hydrogen peroxide-induced loss of cell viability, eliminate the ROS generated, and diminish Caspase-3/7 activity. It appears that protocatechuic acid can protect hepatocytes from apoptosis induced by reactive oxygen species under oxidative stress (75). The protective properties of protocatechuic acid in the liver are evident in its ability to relieve oxidative stress, improve tissue morphology, reduce inflammatory factor expression, and lower mTOR, LC3, and Caspase-3 levels, thereby inhibiting autophagy and apoptosis (76). The hepatorenal toxicity of methotrexate, a chemotherapy drug, limits its clinical application. Co-administration of methotrexate with protocatechuic acid decreased the levels of TNF-α, IL-1β, and Caspase-3 in rats, suggesting that protocatechuic acid provides hepatorenal protection through its antioxidant, anti-inflammatory, and anti-apoptotic properties (77).
Toxicity study
The use of sea buckthorn can be traced back to the mid-8th century, yet despite this long history, limited research has been conducted on its potential toxicity. Yuan et al. (78) conducted chromosome aberration and teratogenicity experiments on mouse spermatogonia to investigate the genotoxicity and teratogenicity of sea buckthorn fruit oil; such studies are crucial for establishing the safety profile of this medicinal substance. Even at a high dose (10 mL/kg body mass) of sea buckthorn fruit oil, neither experiment showed any adverse reactions associated with the oil, substantiating that sea buckthorn fruit oil has no genotoxic or teratogenic effects. Tang et al. (79) administered sea buckthorn seed extract orally to mice and conducted acute oral toxicity, genetic toxicity, and 30-day feeding experiments.
The results demonstrated no abnormalities in any respect. Furthermore, Ruan et al. (80) performed acute toxicity tests on rats using sea buckthorn liquid: the maximum dose (causing death in all mice) of 19.2 g/kg is equivalent to 800 times the clinical dose in humans, and the minimum dose (mortality rate 1/10) of 11.7 g/kg corresponds to 488 times the clinical dose, indicating minimal toxicity associated with sea buckthorn consumption. Based on these experiments, it can be concluded that sea buckthorn, as a medicine-food homologous material, exhibits low toxicity and high safety when used clinically in both medicinal and food products.
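The two reported dose multiples can be cross-checked with a quick back-calculation (a minimal sketch; the function and variable names are ours, not from the cited studies):

```python
def implied_clinical_dose(animal_dose_g_per_kg: float, multiple: float) -> float:
    """Back-calculate the human clinical dose (g/kg) implied by a
    reported animal dose and its stated multiple of clinical use."""
    return animal_dose_g_per_kg / multiple

# 19.2 g/kg stated as 800x clinical use; 11.7 g/kg stated as 488x.
high = implied_clinical_dose(19.2, 800)  # 0.024 g/kg
low = implied_clinical_dose(11.7, 488)   # ~0.024 g/kg
```

Both ratios imply essentially the same clinical dose (about 0.024 g/kg), so the two reported multiples are internally consistent.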
Patent application
Patent application data for sea buckthorn can be accessed through the Betan database. The application process commenced in 1985, and as of 11 April 2023, a total of 10,918 patents had been published. Applications peaked in 2018, with a maximum of 1,268 patent applications in a single year; in recent years, however, the number of applications has declined (Figure 4). The majority of these patents are concentrated in China, signifying a significant level of innovation activity and intense competitive pressure within the Chinese sea buckthorn industry (Figure 5).
Food applications
Sea buckthorn, being of both medicinal and food origin, has been widely used in food development owing to its remarkable antioxidant properties, immune-regulatory capabilities, and gastrointestinal-protective functions. Although sea buckthorn itself tastes sour, its flavor is transformed during fermentation, becoming sweeter.
Liu et al. (81) optimized the fermentation process of sea buckthorn juice and then investigated its inhibitory effects on various fungi as well as its protective effects against oxidative stress induced by H2O2. The findings demonstrated that fermented sea buckthorn juice exhibits potent antioxidant and antibacterial activities. Studies have also shown that sea buckthorn juice exerts a stronger antitumor effect in breast cancer and prostate cancer than in gastric cancer and colorectal cancer (82).
Considering the declining masticatory function of the elderly population, the introduction of sea buckthorn jelly has significantly broadened the market for age-friendly food products. By using various gelling agents, the firmness of the jelly can be adjusted to suit individuals with different chewing abilities, mitigating the risks of choking incidents and nutritional imbalance (83). Studies have shown that jelly containing 11% sea buckthorn juice receives a higher sensory score, but, because of the content of other additives, 9% is recommended as the optimal level of sea buckthorn juice in the jelly recipe (84). Sea buckthorn leaves boast a remarkable content of polyphenols and flavonoids, which exhibit potent antibacterial, anti-inflammatory, and antioxidant properties. Compositional analysis of sea buckthorn leaf tea found ellagic acid to be the most abundant constituent (59.12 mg/g), making it a suitable quality-control index; ellagic acid has therapeutic effects in liver, lung, esophageal, and other cancers. Crushing the leaves facilitates the release of these constituents (85). Investigation of the chemical constituents and extracts of sea buckthorn leaf tea revealed significant antioxidant and α-glucosidase inhibitory activities, although heat treatment can reduce the antioxidant activity (86). At 0.1 mg/mL, sea buckthorn leaf tea extract shows DPPH radical scavenging activity of approximately 94% and ABTS radical scavenging activity of 70-90%, far higher than its α-glucosidase inhibitory activity; a 4 mg/mL leaf tea extract exhibited a moderate level of α-glucosidase inhibitory activity compared with 0.97 mg/mL (87).
The production of sea buckthorn wine has effectively addressed the poor storage and transport tolerance of sea buckthorn. Studies have demonstrated that sea buckthorn wine possesses potent free radical scavenging abilities, which gradually decline as it ages; this antioxidant capacity is closely linked to the vitamin C content of the wine (88). After fermentation, the antioxidant activity of sea buckthorn juice increased significantly, with the free radical scavenging rate rising to more than 90%, and the levels of phenolic and flavonoid active substances also increased significantly in the early stage of fermentation (89).
In addition, sea buckthorn yogurt has been found to effectively temper the sour taste of sea buckthorn and intensify its fruit flavor, contributing to the regulation of intestinal flora balance and boosting immunity (90). One study found that the vitamin C content of sea buckthorn yogurt was positively correlated with the amount of sea buckthorn juice added; however, if too much juice was added, the overall acidity of the yogurt increased, fermentation was inhibited, and the protein content was reduced. It is therefore recommended that the added sea buckthorn juice not exceed 15% (91).
Other
After processing into various products, sea buckthorn generates a by-product known as sea buckthorn residue, which is currently used mainly as animal feed or disposed of directly. To improve resource utilization, researchers have analyzed this residue further. A study of the residual fruit of sea buckthorn revealed that it retains some antioxidant properties; UPLC-Q/TOF analysis identified numerous compounds with free radical scavenging capabilities, and in vitro cell experiments demonstrated its potential to inhibit tumor cell proliferation (92). In addition, Chenyu Su et al. enhanced the triterpene acid content of sea buckthorn fruit residue to optimize the overall mass fraction of triterpene acids; a subsequent α-glucosidase inhibition experiment demonstrated that these triterpene acids attenuated postprandial blood glucose levels in diabetic patients more effectively than acarbose (93). Such research on sea buckthorn fruit residue not only improves its recycling rate but also provides a basis for its further utilization.
Summary and prospect
Sea buckthorn, a material sharing the dual identity of medicine and food, possesses significant medicinal value and holds immense potential for generating substantial economic and social benefits. This paper comprehensively summarizes the active constituents, anticancer properties, toxicity profile, and clinical applications of sea buckthorn. The ultimate objective is to advocate the concept of "medicine and food homology" while providing robust theoretical support for the sustainable development of sea buckthorn.
A comprehensive search was conducted in PubMed and CNKI using the keywords "Hippophae rhamnoides L.," "Hippophae Fructus," "sea buckthorn," "cancer," "tumor," and "neoplasm" to explore recent advances in the anti-tumor applications of sea buckthorn. The search yielded only two relevant reviews (94, 95). Of these, the study by Zheng Yu et al. provided only limited description of the antitumor effects of sea buckthorn, with a reference list consisting primarily of Chinese literature, which diminishes its overall significance. Conversely, the article by Beata Olas et al. presented a more comprehensive review of both in vivo and in vitro anti-tumor effects of sea buckthorn, including an insightful discussion of its potential as a radiation-protective agent. It is worth noting, however, that both reviews were published some years ago (2016 and 2018). In the current review, we have extensively referenced studies published after 2018 and meticulously summarized pertinent research on the anti-tumor effects of sea buckthorn. Furthermore, from the perspective of the "homology of medicine and food," we have also examined its application in food, supplementing the literature cited in the previous reviews. Additionally, to enhance clarity and coherence, bioinformatics methods were employed to investigate the principal active components of sea buckthorn, elucidate their anticancer properties and mechanisms, and present our findings visually through informative charts.
Although the antitumor effects of sea buckthorn have been summarized here, this review still has several limitations. Given the variations in active substance composition and content across different regions and varieties of sea buckthorn, as well as the corresponding differences in therapeutic effect, future research on sea buckthorn must address these factors.
FIGURE 2
(Panel fragments: inhibit cell proliferation and induce apoptosis; P53 and Bad up-regulated while Cyclin D1, Bcl-xl, TNF-α and IL-1β down-regulated; reduced cell viability; increased ROS level and decreased RSH level; HO-1 down-regulated and p21 up-regulated; decreased expression of COX-2, iNOS, p-NF-κB, sEH and PTX3; inhibited cell proliferation and promoted apoptosis with increased P53 and decreased Bcl-2 expression; inhibited migration and invasion with decreased MMP-2 and increased TIMP-2 mRNA expression; increased expression of MMP, RhoB and PKCε with down-regulated Ras and p-Akt.)
FIGURE 4
FIGURE 4 Number of sea buckthorn related patents published in recent years.
TABLE 1
Active ingredients of sea buckthorn and their ID.
TABLE 2
Types and mechanisms of cancer treatment with isorhamnetin.
Endometrial cancer: inhibit cell proliferation and metastasis; cell cycle arrest in G2/M phase; promote cell apoptosis; raise the ROS level.
TABLE 3
Types and mechanisms of cancer treatment with quercetin. (Row fragment: Myc and inhibit cell proliferation; decrease the level of TGF-β1 and inhibit epithelial-mesenchymal transition, thereby inhibiting cell migration and invasion; induce apoptosis.)
Breast cancer: inhibit cell proliferation with cell cycle arrest in sub-G1 phase; up-regulate p53 and p21; down-regulate VEGF expression.
TABLE 4
Types and mechanisms of cancer treatment with gallic acid.
TABLE 5
Types and mechanisms of cancer treatment with protocatechuic acid.
Occupational Exposures in an Equestrian Centre to Respirable Dust and Respirable Crystalline Silica
Sand-based products are regularly used as footing material on indoor equestrian arenas, creating a potential occupational exposure risk for respirable crystalline silica (RCS) for equestrian workers training and exercising horses in these environments. The objective of this study was to evaluate an equestrian worker’s personal RCS and respirable dust (RD) exposure. Sixteen personal full-shift RD measurements were collected from an equestrian worker and analysed for RD, quartz and cristobalite. Geometric mean exposures of 0.12 mg m−3 and 0.02 mg m−3 were calculated for RD and RCS concentrations, respectively. RCS exposures of between 0.01 to 0.09 mg m−3 were measured on days when the indoor arena surface was not watered, compared to lower exposures (<LOD-0.03 mg m−3) on days when the indoor arena was watered (p < 0.01); however, manual watering is time intensive and less likely to be implemented in practice. This small-scale study provides new data on RCS and RD exposures among equestrian workers. RCS exposures are within the range considered to be associated with increased risk for lung cancer. The use of dust control solutions such as water suppression should be promoted for equestrian work in horse riding arenas. Equestrian workers need to receive occupational health training on the health risks associated with RCS exposure.
Introduction
The use of sand, animal feed and bedding materials can create dusty work environments for equestrian workers that tend to have a higher risk for respiratory conditions such as organic dust toxic syndrome, and bronchitis symptoms, particularly if their work is indoors [1][2][3]. Potential exposure risks from airborne pollutants including inhalable and respirable organic dusts, microorganisms, endotoxins and β-Glucans have been evaluated among equestrian workers [4][5][6]; however, less is known about exposures to inorganic dusts, such as respirable crystalline silica (RCS). RCS is a natural component of sand and associated with a range of respiratory diseases, in particular, silicosis and lung cancer [7]. Sand is regularly used in the equestrian sector as a surface or footing material in indoor and outdoor arenas, on sand gallops for training race horses, on longeing arenas and on horse walkers [8].
To the authors' knowledge, there has been just one published study, which reported on horse trainers' RCS exposures, as part of a lung cancer case report [9]. In this study, it was reported that the trainer had worked for 23 years in the sector, training 7-12 horses per day on longeing arenas covered with recycled sands. Although limited by the collection of just three exposure measurements (one area and two personal samples), 8 h time weighted average (TWA) exposure estimates for the personal sample exceeded the threshold limit value (TLV) of 0.025 mg m −3 for RCS set by the ACGIH [10]. This previous study highlighted a potential increased exposure risk for RCS, which could lead to the development of occupational lung cancer within this worker group. The study also suggests that there is a lack of awareness among equestrian workers of the risks of RCS exposures and limited use of exposure controls in this sector [9].
Choice, maintenance and age of the footing material play a significant role in the generation of particulate concentrations in indoor arenas. Maintaining a clean (manure free) well mixed moist footing material are some of the control measures recommended to reduce airborne particulates in riding arenas [11][12][13]. A recent study compared the release of airborne particulate concentrations (PM10) with footing moisture content and density for three different footing materials, sand, sand fibre mix, and sand wood chips. Particulate concentrations (PM10) from sand-fibre-based footing materials were over five times greater than concentrations for either sand or the sand wood mix. The density and moisture content of the sand fibre footing were identified as important factors influencing particulate generation. Regular watering and grooming of the footing materials to prevent separation and shifting of the materials was recommended to reduce the release of airborne particles [11].
Further exposure data is needed to characterise exposures to RCS within this occupational group, to highlight exposure risks and to promote the use of exposure controls within the sector. The objective of this study was to characterise respirable dust (RD) and RCS exposures among equestrian workers working in an Irish equestrian centre.
Materials and Methods
One small-to-medium-sized Irish equestrian centre, managed and operated by a self-employed worker, was recruited to participate in the study over the summer period of 2018. The centre stabled, on average, 30 horses for training and a further 15 horses for riding lessons, had one indoor arena (approximate area of 6000 m 2 ) and two outdoor arenas (each with an approximate area of 10,000 m 2 ), all surfaced using a silica sand and shredded carpet mix (sand mixed with polypropylene, polyester and polyurethane fibres shredded into pieces <30 mm in length). The indoor arena was housed under the same roof as the tacking and grooming area; it had entrance sliding doors on two side walls, which were closed during the surveys. There was no mechanical ventilation. Horses were brought into the arena via the tacking and grooming area. The building also contained a small fully enclosed room under the same roof, which functioned as a canteen. The room had windows, which opened onto the arena area; however, they were rarely opened. To reduce dust levels in the indoor arena, the surface was occasionally dampened using a water hose mounted on a ladder (approximately 3 m high), which was moved around the arena for a maximum of one hour. However, this water suppression regime was rarely performed, and only if there were no competing work tasks to be performed at the centre. A convenience sampling approach was followed when collecting personal exposure data, and measurements were collected while the worker performed their normal daily duties. The worker was sampled at standing height (1.5 m), apart from when they were grading/raking, when they were at sitting height.
Typical work duties included:
• cleaning horse stables;
• longeing horses (the horse, attached to a lunge line, moves around the trainer);
• loose/free jumping 2-4 young horses per day (jumping a horse without a rider to practise the horse's jumping skills);
• delivering riding lessons both indoors and outdoors;
• grooming, tacking and untacking horses, always performed in the indoor arena;
• grading/raking the surface of both indoor and outdoor arenas.
Surface grading is required to maintain a good workable footing material, which over time becomes compacted due to horse traffic. In this study, the surface was graded using a rake attached to an open-top tractor, which was driven around the arena.
Working with the horses when longeing, free jumping or during riding lessons, and raking the arena surface, led to increased dispersion of the footing material and visible clouds of dust in the arena. Personal breathing zone samples were collected using a personal sampling pump (Sidekick; SKC Ltd., Dorset, UK) with a Higgins-Dewell cyclone (Casella, Bedford, UK) and 25 mm, 5 µm pore size PVC filters. Pumps were pre-calibrated at a flow rate of 2.2 L per minute (L min −1 ) using a primary airflow meter (DryCal ® DC Lite; BIOS International, Butler, NJ, USA). The researcher collected contextual information to support all samples collected, including time spent on potentially high-risk exposure work tasks (tasks which generated visible clouds of dust in the work area). RD samples were collected and analysed gravimetrically according to HSE MDHS 14/4 [14]; the limit of detection (LOD) for RD was 0.05 mg. Both quartz and cristobalite were quantified in each sample by X-ray diffraction using a Bruker D2 Phaser X-ray diffractometer with Bruker DIFFRAC.DQUANT 1 software, following HSE MDHS 101/2 [15]. Sample analysis was performed at the Institute of Occupational Medicine, Edinburgh, which is accredited for XRD analysis by the United Kingdom Accreditation Service (UKAS).
Summary statistics were calculated using SPSS version 26 [16]. As the concentration data were log-normally distributed, a paired t-test was used to compare RD and RCS exposures on days when the indoor arena was watered with those on days when it was not watered. The results were compared with the Irish occupational exposure limit value for RCS, 0.1 mg m −3 [17], and the recommended comparison guideline for low-toxicity respirable dust, 1.0 mg m −3 [18]. Where sampling times exceeded 8 h, exposure concentrations were adjusted to an 8 h reference period [19]. Estimates of relative risk for lung cancer were calculated using the RCS exposure data and log-linear response curves derived by Steenland et al. [20].
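The adjustment of a measured concentration to an 8 h reference period can be sketched as follows (a minimal illustration, not the authors' code; the function name is ours, and it assumes no exposure outside the sampled period):

```python
def twa_8h(concentration_mg_m3: float, sample_minutes: float) -> float:
    """Adjust a full-shift concentration to an 8 h (480 min) TWA.

    Assumes zero exposure outside the sampled period, so the total
    measured dose (concentration x time) is averaged over 480 min.
    Samples of 8 h or less are left unadjusted, as in the paper.
    """
    if sample_minutes <= 480:
        return concentration_mg_m3
    return concentration_mg_m3 * sample_minutes / 480.0

# Example: a 540 min sample measuring 0.08 mg m-3 corresponds to an
# 8 h TWA of 0.09 mg m-3.
adjusted = twa_8h(0.08, 540)
```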
Results
A total of 16 personal exposure measurements were collected from the one equestrian worker over the period of June-August 2018. The worker was sampled for the full work shift, including short break periods, which were always spent in the arena canteen; lunch breaks (30 min) were spent off-site and were not included in the measurement. Sampling times ranged from 480-540 min, and the worker spent between 75% and 85% of their time working in or near the indoor arena during the measurement period.
Exposure Results
Individual personal RD and RCS concentrations (mg m −3 ) are presented in Table 1. Table 1 also provides a summary of the work activities undertaken during each of the measurement periods, whether water suppression was applied in the indoor arena and also, outdoor weather conditions. The outdoor weather conditions were dry on 14 of the 16 days surveyed; there were light rain showers on days 8 and 9. On four of the measurement days, the surface of the indoor arena was sprayed with water in the morning, and on another four days, it was sprayed in the morning and afternoon.
Cristobalite was not detected in any sample, so the RCS results reflect quartz exposure. Two of the sixteen personal samples had non-detectable levels of RD and RCS (samples 15 and 16). Sample geometric means (GM) and geometric standard deviations (GSD) were calculated by substituting <LOD values with half the analytical LOD, this being 0.025 mg for RD and 0.005 mg for RCS [21]. GMs (GSDs) of 0.124 mg m −3 (2.15) and 0.025 mg m −3 (2.43) were calculated for RD and RCS concentrations, respectively (range: <0.05 to 0.30 mg m −3 for RD and <0.01 to 0.08 mg m −3 for RCS).
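The GM/GSD calculation with half-LOD substitution for non-detects can be sketched as follows (a minimal sketch with hypothetical concentration values; `None` marks a non-detect, and the function name is ours):

```python
import math

def gm_gsd(values, lod):
    """Geometric mean and geometric standard deviation of a set of
    concentrations; non-detects (None) are substituted with half the
    limit of detection, as in the paper."""
    xs = [0.5 * lod if v is None else v for v in values]
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / (n - 1)  # sample variance
    return math.exp(mu), math.exp(math.sqrt(var))

# Hypothetical RCS concentrations (mg m-3) with one non-detect,
# assuming an analytical LOD of 0.01 mg m-3 -- illustrative only.
gm, gsd = gm_gsd([0.03, None, 0.05, 0.08], lod=0.01)
```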
There was a strong positive correlation between RD and RCS concentrations (p < 0.05). There was a significant difference (p < 0.01) between concentrations of both RCS and RD on days when the indoor arena was watered and on days when no watering was performed.
Table 1 notes: OD = outdoor; + = both indoors/outdoors; * sample result < limit of detection (LOD); ** morning only; *** morning and afternoon; GM = geometric mean; GSD = geometric standard deviation; concentrations < LOD substituted with 1/2 LOD.
Discussion
Personal exposure data for both RD and RCS collected from an Irish equestrian worker are presented. To the authors' knowledge, this is only the second study of RCS exposures among equestrian workers and, although based on only 16 samples, it provides a larger dataset of the personal exposures experienced by these workers. The worker wore the sampling train for the full work shift, including short break periods, which were always spent in the arena canteen. This data is required to promote awareness of the health risks from exposure to RCS within this occupational group. RD concentrations over the sampling period and when adjusted to an 8 h reference period (8 h TWA) (GM (GSD): 0.12 mg m −3 (2.15)) are comparable to values previously reported for horse barns [4] and below 1.0 mg m −3 , a recommended comparison guideline for RD [18]. In this study, RCS concentrations were significantly higher (p < 0.01) on days when watering was not performed on the surface of the indoor arena, although concentrations were less than 0.1 mg m −3 (GM (GSD), n = 8: 0.041 mg m −3 (2.08)). Previous research among US industrial sand workers suggests that exposures as low as 0.05 mg m −3 can present an increased lung cancer risk, which is further increased among smokers [20]. Yoon et al. [9] estimate a lifetime excess risk for lung cancer for an equestrian worker at age 74 of 0.077%-0.090% due to exposure to 0.02-0.086 mg m −3 RCS with a 15-year lag time. The relative risk for lung cancer mortality associated with 40 years of exposure (with a 15-year lag period), at the concentrations measured in this current study (0.01-0.09 mg m −3 ), is estimated at between 1.004 and 1.038 (0.4% and 3.8% increase).
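The reported relative-risk range can be interpolated for intermediate concentrations by fitting a log-linear curve through the two endpoints given in the text. This is a sketch fitted to the paper's own numbers, not Steenland et al.'s published coefficients (which are not reproduced here); the function and variable names are ours:

```python
import math

# Endpoints reported in the text: RR ~1.004 at 0.01 mg m-3 and
# ~1.038 at 0.09 mg m-3 (40 years of exposure, 15-year lag).
c1, rr1 = 0.01, 1.004
c2, rr2 = 0.09, 1.038

# Fit ln(RR) = a + b * concentration through the two points.
b = (math.log(rr2) - math.log(rr1)) / (c2 - c1)
a = math.log(rr1) - b * c1

def rr(conc_mg_m3: float) -> float:
    """Interpolated relative risk at a given RCS concentration."""
    return math.exp(a + b * conc_mg_m3)
```

On this curve, an exposure midway through the measured range (0.05 mg m −3 , close to the GM for non-watered days) falls at roughly a 2% excess risk.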
Given that occupational cancer is the leading cause of work-related deaths in the EU [22], further research is required to characterise RCS exposures and other potential airborne hazards created by the use of different footing materials and additives, including, for example, recycled textiles and synthetic carpets [23], in equestrian arenas. As exposure to mixed organic dusts capable of inducing inflammatory reactions in the respiratory system is common in many agricultural settings, including horse stables [6,24], future research should also include endotoxin and β(1→3)-glucan measurements. Further work should also consider in vitro particle preparations using inflammatory cells to determine the activity of the RCS and the impact of organic constituents of the aerosol; previous work on coal mine dust suggests that other components of the aerosol may be capable of masking the reactivity of the quartz surface [25]. Studies on equine health have shown that maintaining optimal moisture content of the footing material can help manage indoor particulate concentrations in riding arenas [11]. Keeping the surface of the footing material moist and/or using alternative footing materials, such as sand with wood chips or wax-coated sand, has the potential to reduce worker exposures to RCS but requires further evaluation.
Conclusions
The present study, although limited by sample numbers, provides a significant step forward in characterising equestrian workers' exposure to RD and RCS. Measured RCS concentrations approached the Irish OEL of 0.1 mg m −3 for RCS, which could suggest an increased lung cancer risk for this occupational group; however, since the study represents just one worker from this industry, further exposure studies are required.
Exposure interventions, such as watering the indoor arena, reduced air concentrations of RD and RCS. However, automated watering systems would be recommended, as competing work tasks limit the time that the equestrian worker can spend on manual watering regimes. Further studies are required to promote awareness within the sector of the exposure risks associated with footing materials used on indoor equestrian arenas and to evaluate the impact of increased knowledge and understanding of those risks.
ERBB3 is a marker of a ganglioneuroblastoma/ganglioneuroma-like expression profile in neuroblastic tumours
Background Neuroblastoma (NB) tumours are commonly divided into three cytogenetic subgroups. However, by unsupervised principal components analysis of gene expression profiles we recently identified four distinct subgroups, r1-r4. In the current study we characterized these different subgroups in more detail, with a specific focus on the fourth divergent tumour subgroup (r4). Methods Expression microarray data from four international studies corresponding to 148 neuroblastic tumour cases were subject to division into four expression subgroups using a previously described 6-gene signature. Differentially expressed genes between groups were identified using Significance Analysis of Microarray (SAM). Next, gene expression network modelling was performed to map signalling pathways and cellular processes representing each subgroup. Findings were validated at the protein level by immunohistochemistry and immunoblot analyses. Results We identified several significantly up-regulated genes in the r4 subgroup of which the tyrosine kinase receptor ERBB3 was most prominent (fold change: 132–240). By gene set enrichment analysis (GSEA) the constructed gene network of ERBB3 (n = 38 network partners) was significantly enriched in the r4 subgroup in all four independent data sets. ERBB3 was also positively correlated to the ErbB family members EGFR and ERBB2 in all data sets, and a concurrent overexpression was seen in the r4 subgroup. Further studies of histopathology categories using a fifth data set of 110 neuroblastic tumours, showed a striking similarity between the expression profile of r4 to ganglioneuroblastoma (GNB) and ganglioneuroma (GN) tumours. In contrast, the NB histopathological subtype was dominated by mitotic regulating genes, characterizing unfavourable NB subgroups in particular. The high ErbB3 expression in GN tumour types was verified at the protein level, and showed mainly expression in the mature ganglion cells. 
Conclusions This study demonstrates the importance of performing unsupervised clustering and subtype discovery of data sets prior to analysis, to avoid a mixture of tumour subtypes, which may otherwise give distorted results and lead to incorrect conclusions. The current study identifies ERBB3 as a clear-cut marker of a GNB/GN-like expression profile, and we suggest a 7-gene expression signature (including ERBB3) as a complement to histopathology analysis of neuroblastic tumours. Further studies of ErbB3 and other ErbB family members and their role in neuroblastic differentiation and pathogenesis are warranted.
Background
Peripheral neuroblastic tumours (NTs) are derived from developing neuronal cells of the sympathetic nervous system and are the most frequent extracranial solid tumours of childhood. NTs are composed of variable proportions of neuroblasts (neuronal lineage) and Schwannian cells (glial lineage), and are classified into histopathological categories according to the presence or absence of Schwannian stromal development, the differentiation grade of the neuroblasts, and their cellular turnover index. According to the International Neuroblastoma Pathology Classification (INPC; Shimada system) [1], the three subtype categories and their subtypes are: 1) neuroblastoma (NB), Schwannian stroma-poor; 2) ganglioneuroblastoma (GNB), intermixed (Schwannian stroma-rich) or nodular (composite Schwannian stroma-rich/stroma-dominant and stroma-poor); 3) ganglioneuroma (GN), Schwannian stroma-dominant. Neuroblastomas exhibit extreme clinical and biological heterogeneity, and patients are assigned to risk groups based on several criteria, including stage [2,3], age [4], histological category and grade of tumour differentiation (histopathology) [5], the status of the MYCN oncogene [6], chromosome 11q status [7], and DNA ploidy [8] as the most highly statistically significant and clinically relevant factors [9]. One-half of NB patients have metastatic disease at diagnosis (INSS stage 4 or INRGSS stage M). All metastatic tumours with MYCN amplification (MNA) are aggressive and considered high-risk tumours [9], whereas children with metastatic disease without MNA (approximately 65%) have variable clinical behaviours depending on age at diagnosis, histopathology, and other genetic factors.
Based upon cytogenetic profiles, previous studies have categorized NB tumours into three major subtypes [10,11]: subtype 1, representing favourable tumours with near-triploidy and high expression of the neurotrophic receptor TrkA (NTRK1), mostly encompassing non-metastatic NB stages 1 and 2; subtype 2A, representing unfavourable NB stages 3 and 4 with 11q deletion (Del11q) and 17q gain (Gain17q) but without MNA; and subtype 2B, representing unfavourable widespread NB stages 3 and 4 with MNA, often together with 1p deletion (Del1p) and Gain17q. Several gene sets have been shown to discriminate the molecular subgroups and risk groups by mRNA and microRNA expression profiling in neuroblastic tumours [12-21]. A recent expression analysis by our research group identified the three cytogenetically defined subtypes (1, 2A, and 2B) by unsupervised clustering, but further indicated the existence of a fourth divergent subgroup [12]. Moreover, we identified a 6-gene signature, comprising ALK, BIRC5, CCND1, MYCN, NTRK1, and PHOX2B, that successfully discriminates these four subgroups [12]. The fourth (r4) subgroup encompassed tumours characterized by Del11q and high expression of genes involved in the development of the nervous system, but showed low expression of ALK. Approximately 7-9% of sporadic NB cases harbour inherent ALK mutations [22,23], and ALK overexpression, both in its mutated and wild-type form, has been demonstrated to define a poor prognosis in NB patients [24]. In relation to this, our previous findings suggest that the Type 2A (r2) and Type 2B (r3) subgroups, which both display high ALK expression, are driven by the ALK pathway. In contrast, the r4 subgroup, displaying low expression of all six signature genes, is suggested to be driven by an alternative oncogenic pathway.
In the present study we aimed to further investigate the expression profiles of the four subgroups, and r4 in particular. By differential expression analysis and reverse engineering we found ERBB3 and its network members to be significantly overrepresented within the r4 tumour subgroup. Moreover, two other ErbB family members, ERBB2 and EGFR, showed concurrently higher expression. In contrast, the unfavourable neuroblastoma subgroups (r2 and r3) were specifically characterized by G2/M cell cycle transition and mitotic regulating genes. By expression analysis of histopathology categories (i.e. NBs, GNBs, and GNs) we found the r4 subgroup to show an expression profile identical to the GNB/GN types, and overexpression of ErbB3 was also confirmed at the protein level in GN tumours. We conclude that the ERBB-profile (high expression of EGFR, ERBB2, and ERBB3) defines a ganglion-rich neuroblastic tumour subset.
Differential expression in r-subgroups
To explore subgroup-specific characteristics we performed a differential expression analysis by SAM. Thirty-seven tumour cases from three studies were pre-processed as two separate data sets (data set 1, n = 14, and data set 2, n = 23; Table 1), and both data sets were divided into four r-subgroups based on rules according to the previously described 6-gene signature (6-GeneSig, Additional file 1) [12]. Six pair-wise SAM comparisons between r-subgroups were performed on each data set separately, and the 1000 most significant genes (by descending SAM d-score) with a fold change above 2 were extracted to create SAM intersect gene lists representing both data sets (Additional file 2). The r2 versus r1 comparison showed 122 differentially expressed genes present in the lists from both data sets, and the r3 versus r1 comparison showed 496 overlapping genes (Figure 1A). The r4 subgroup showed the highest proportion of significantly differentially expressed genes compared to all other subgroups in both data sets (number of overlapping genes ranging between 503 and 669, Figure 1A).
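The intersect procedure above (take the 1000 top-ranked genes by SAM d-score with a fold change above 2 from each data set, then keep the genes shared by both) can be sketched in a few lines. The authors ran SAM in R; this is only an illustrative Python sketch with hypothetical toy values, not their code.

```python
def top_genes(results, n=1000, min_fold=2.0):
    """Rank genes by descending SAM d-score, keeping only genes whose
    fold change is at least `min_fold`. `results` maps gene symbol ->
    (d_score, fold_change); names and values here are illustrative."""
    passing = [(gene, d) for gene, (d, fold) in results.items() if fold >= min_fold]
    passing.sort(key=lambda item: item[1], reverse=True)
    return {gene for gene, _ in passing[:n]}

def sam_intersect(results_a, results_b, n=1000, min_fold=2.0):
    """Genes among the top `n` in BOTH data sets (the 'SAM intersect' list)."""
    return top_genes(results_a, n, min_fold) & top_genes(results_b, n, min_fold)

# Toy example with three genes, values are (d-score, fold change):
ds1 = {"ERBB3": (8.1, 240.0), "S100B": (6.5, 12.0), "MYCN": (1.0, 1.5)}
ds2 = {"ERBB3": (7.1, 90.0), "S100B": (0.5, 1.2), "MYCN": (2.0, 3.0)}
print(sam_intersect(ds1, ds2))  # -> {'ERBB3'}
```

Only ERBB3 passes the fold-change filter in both toy data sets, so it alone survives the intersection.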
The r1 subgroup (corresponding to the cytogenetically defined subgroup Type 1) was found to mainly involve genes related to nervous system development and catecholamine metabolic processes. In the MNA-specific subgroup r3 (corresponding to Type 2B), KIF15 was the most significantly up-regulated gene (fold = 15), while CUX2 showed the highest expression fold change (fold = 17). The MYCN gene was found in the 74th position of up-regulated genes (fold = 9), and NTRK1 was identified as the most significantly down-regulated gene in r3 compared to r1 (fold = 80, Additional file 2). Also, LMO3 and PHGDH were found to be specifically up-regulated in the r3 subgroup compared to the other subgroups. High expression of ALK was found in both the r2 (2-fold) and r3 (5-fold) subgroups compared to the favourable r1 subgroup. Moreover, r2 and r3 also showed up-regulation of several G2/M cell cycle transition and mitotic checkpoint related genes (e.g. AURKA, BRCA1, BUB1B, CCNA2, CCNB1, KIF15, MCM2, MCM3, and MCM5), which in contrast showed significant down-regulation in the r4 subgroup. In line with this, a Gene Ontology (GO) search identified "cell cycle" as the most significant process accumulated in the SAM intersect gene lists of the r2 and r3 subgroups (Figure 1B, Additional file 3). The apparent overrepresentation of cell cycle-related genes in subgroups r2 and r3 encouraged us to investigate enrichment of other cell cycle key players and networks in our SAM gene lists.
Differential expression in subgroup r4
Among the ten most significantly up-regulated genes in the r4 subgroup of data sets 1 and 2, the following eleven genes were found: ABCA8, APOD, ASPA, CDH19, ERBB3, FXYD1, ITIH5, MAL, PLP1, S100B, and ST6GALNAC2. According to the GO search, these genes are mainly involved in nervous system development, multicellular organismal development, and response to wounding (Figure 1B, Additional file 3). ERBB3 was the top up-regulated gene in r4 versus r3, with a 240-fold expression change. ERBB3 encodes a transmembrane tyrosine kinase receptor that has previously been associated with cancer in a large number of studies (>500 publications). ErbB3 is activated through dimerization with one of its structurally related family members: EGFR, ErbB2, or ErbB4. ErbB family members are often co-expressed, and we therefore found it relevant to investigate their expression-level relationships in our four neuroblastic data sets. We found a significant positive correlation of ERBB3 with the EGFR and ERBB2 family members, and a negative correlation with all genes of the 6-GeneSig, in all four data sets (p < 0.05, Additional file 4). Also, EGFR and ERBB2 showed a significant up-regulation in the r4 subgroups of most data sets (p < 0.05, Additional file 2). ERBB3 shows several similarities to ALK, the familial NB gene [25], and thus makes a good candidate gene with a potential role in the development of r4 tumour types.
Among the down-regulated genes in the r4 subgroup, CACNA2D3 was the most significant in comparison to the r1 subgroup (50-fold change). This gene was also the 25th most down-regulated gene in the r3 subgroup compared to r1 (Additional file 2). Since both the r3 and r4 subgroups have previously been found to show unfavourable outcome and poor survival [12], and the CACNA2D3 gene is located in the 3p21.1 locus commonly deleted in many NB tumours, this encouraged us to further screen the SAM intersect gene lists for other conceivable and previously reported tumour suppressor (TS) candidate genes. Out of 33 previously reported TS candidate genes, 15 were present in the SAM intersect gene lists from data sets 1 and 2 (Additional file 5).

Gene network construction and gene set enrichment analysis (GSEA)
Network modelling reveals the regulatory relationships among genes and can provide a systematic understanding of the molecular mechanisms underlying biological processes. A variety of algorithms have been developed; in the current study we chose the ARACNE algorithm [26] for reconstruction of seven networks (ALK, BIRC5, CCND1, ERBB3, MYCN, NTRK1, PHOX2B) from the Wang data set (n = 102), since this method has a documented high performance [27]. In addition, 4850 pre-existing curated gene sets (c2) from the Molecular Signatures Database (MSigDB) were selected (Additional file 6). We subsequently analysed the lists of differentially expressed genes for enrichment of these 4857 gene networks. The SAM intersect lists of genes up-regulated in the r4 group comprised 17 out of 38 partners (~45%) of the ARACNE_ERBB3 network (Figure 1C), which was verified as significant by GSEA (p < 0.001, Additional file 7). A relatively large fraction (between 20% and 58%) of the ARACNE_BIRC5 network partners (n = 45, Additional file 6) were found among the up-regulated genes of the r2 and r3 tumour subgroups, which was also significant by GSEA (p < 0.001, Additional file 7).
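GSEA proper scores enrichment along a ranked gene list; a simpler way to see why finding 17 of 38 ERBB3-network partners among 1000 up-regulated genes (out of roughly 8000 measured) is far from chance is a hypergeometric over-representation test. The stdlib-only Python sketch below is a simplification for intuition, not the GSEA statistic used in the paper.

```python
from math import comb

def overrep_pvalue(universe_size, network_size, list_size, overlap):
    """Hypergeometric upper tail P(X >= overlap): the probability of drawing
    at least `overlap` network members when `list_size` genes are sampled
    without replacement from a universe containing `network_size` members."""
    total = comb(universe_size, list_size)
    tail = sum(
        comb(network_size, k) * comb(universe_size - network_size, list_size - k)
        for k in range(overlap, min(network_size, list_size) + 1)
    )
    return tail / total

# 17 of the 38 ERBB3-network genes among 1000 up-regulated genes out of
# ~8000 measured genes (figures taken from the text); the expected overlap
# by chance alone is only about 1000 * 38 / 8000 ~ 4.75 genes.
print(overrep_pvalue(8000, 38, 1000, 17))  # well below 0.001
```

Observing 17 network members where fewer than 5 are expected by chance gives a p-value far below the 0.001 threshold reported for the GSEA result.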
A GO search of the BIRC5 network partners suggested a role in mitosis (GO terms: cell cycle, nucleosome assembly, chromatin assembly, protein-DNA complex assembly, nucleosome organization, mitotic cell cycle, cell cycle phase, DNA packaging, M phase, and cell cycle process; data not shown). Other cell-cycle- or mitosis-related gene sets enriched in the r2 and r3 subgroups included the ZHOU_CELL_CYCLE_GENES_IN_IR_RESPONSE_24HR, WHITFIELD_CELL_CYCLE_LITERATURE, REACTOME_CELL_CYCLE_MITOTIC, and REACTOME_CELL_CYCLE_CHECKPOINTS curated gene sets (Additional file 7).
Verification of gene network modelling and differential expression analysis
The differential expression profiles of the r-subgroups were verified by replicating the study using the Wang data set (n = 67 cases, Table 1). The outcome of SAM was consistent with the previous findings, showing the ERBB3 gene to be significantly up-regulated and its gene-network partners to be significantly overrepresented in the r4 subgroup (Figure 1C, Figure 2). Also, several other previously identified r4-specific genes (APOD, CDH19, FXYD1, and S100B) were found among the 1000 most significantly up-regulated genes. In concordance with the previous analysis, we found the expression of CUX2 (fold = 5), LMO3 (fold = 2.7), and PHGDH (fold = 1.9) to be significantly higher in the MNA subgroup (r3) compared to the favourable subset (r1). In addition, cell cycle-related genes dominated the r2 and r3 subgroups, which was confirmed as significant by GSEA of the BIRC5 network and other cell cycle networks (p < 0.001, Additional file 7).
To confirm the robustness of the ARACNE-constructed gene networks, we selected the r3 versus r1 comparisons in data sets 1 and 2 to investigate the expected overrepresentation of MYCN- and NTRK1-network partners. Fourteen out of 40 genes (35%) of the ARACNE_MYCN network were found in the up-regulated gene lists, while eight out of 40 (20%) were found in the down-regulated gene lists, demonstrating an accumulation of the ARACNE_MYCN network in the r3 subgroup (Figure 1C, Additional files 7 and 8). An accumulation of the ARACNE_NTRK1 network was found in the opposite direction: out of the 62 genes composing the network, 28 (~45%) were among the 1000 most down-regulated genes in r3, which was significant by GSEA (Additional files 7 and 8). According to significance by SAM, NTRK1 was the most down-regulated gene in the r3 versus r1 comparison in both data sets (fold change >70, Figure 1C, Additional file 2). From these results we conclude that our study design is sound, and that the gene networks constructed by ARACNE are reliable and highly representative.
In addition, we checked the enrichment of network partners of the 6-GeneSig genes (ALK, BIRC5, CCND1, MYCN, NTRK1, and PHOX2B) and found the network representations to be in concordance with the 6-GeneSig expression levels in the r-subgroups (Additional file 7). The credibility of the ARACNE-constructed networks was also tested by literature verification: seven out of 38 transcriptional connections of the ERBB3 network, as well as 11 out of 40 connections of the MYCN network, were verified to have a functional relationship (data not shown). This demonstrates the robustness of the computationally inferred network analysis.
Differential expression analysis of histopathology groups (data set 4)
To further explore ERBB3 expression among other neuroblastic tumour types we utilized the R2 database (hgserver1.amc.nl) and found indications of high ERBB3 expression in GNB and GN tumours. To investigate this finding in more detail, we performed a differential expression analysis of the histopathology subtypes in the Versteeg data set (n = 110, Table 1). As expected, the ERBB3 gene and its network partners were significantly enriched in GNB and GN tumours compared to NB (Figure 1C, Additional file 7). The highest enrichment of the ERBB3 network was found in GN tumours, with 18 up-regulated genes out of 38 (p < 0.001, Figure 2). In contrast, cell cycle-related genes and gene networks, including the ARACNE_BIRC5 network, significantly dominated the NB types (Additional files 2 and 7).
Subgroup-specific expression profiles
ErbB family member genes (ERBB-genes: EGFR, ERBB2, and ERBB3) and 15 previously reported tumour suppressor candidate genes (TS-genes) were next studied by heat maps in all four data sets (Figure 3). Most TS candidate genes were down-regulated in the MNA-specific r3 subgroup only. However, the CTNNBIP1 and KIF1B transcripts were down-regulated in both the r3 and r4 subgroups, and the TFAP2B transcript was specifically down-regulated in the r4 subgroup alone (Figure 3, Additional file 2). Overall, the expression profiles of the 6-GeneSig genes, ERBB-genes, and TS-genes (25 genes in total) among the r-subgroups were very similar between data sets. Moreover, the expression profiles of the GNB/GN tumours were identical to those of the previously detected r4 subgroups of NB (Figure 3). These results strongly indicate that the same cellular pathways are active in r4 and GNB/GN tumour types; hence the ERBB-gene profile most likely represents a more differentiated subset of tumours.
Verification of ErbB3 at protein level (data set 5)
To validate the biological significance of the ERBB3 enrichment in the expression profiles of GN tumours, ErbB3 protein expression was investigated by immunohistochemistry (IHC) and western blot (WB) analysis. IHC was performed on formalin-fixed and paraffin-embedded (FFPE) tissue slides from four GN and four NB tumours using antibodies specific for Sox10 ([N-20], Santa Cruz Biotechnology) and ErbB3 ([RTJ2], Abcam), respectively. The IHC showed ErbB3 to be expressed mainly in mature ganglion cells, whereas Sox10 was expressed in both ganglion and Schwannian cells (Figure 4A-B). A high fraction of satellite cells, as well as Schwannian cells, were also Sox10-immunopositive (data not shown).
Immunoblot analysis was performed on five GN and four NB tumours in total (data set 5, Table 1). Of the five investigated GN cases, four corresponded to the GN cases examined by IHC. In addition, the WB analysis included one NB encompassed in the microarray analysis (case 6, corresponding to NBS1 in data set 2). The same antibody as for IHC ([RTJ2], Abcam), directed against the cytoplasmic region of ErbB3, was chosen in order to detect several isoforms of the protein as well as post-translationally modified and unmodified forms. Overall, ErbB3 expression levels were high and clearly enriched in the GN subset compared to the NB subset, which showed no detectable levels of ErbB3. Moreover, case 6/NBS1, which previously displayed no or very low expression of ERBB3 by microarray analysis (data set 2, Figure 3), showed no detectable ErbB3 at the protein level by immunoblot analysis. Only one of the NB tumours (case 9) showed a strong ErbB3 signal. However, this case was a localized INSS stage 3 tumour with favourable biology, later histopathologically reclassified as a GNB. Moreover, only the lower molecular weight band was visible, indicating that the protein might be in its inactive unphosphorylated form, or indicating other post-translational modifications or isoforms of ErbB3 (Figure 4C).
Histopathology classification
Based on our results we included ERBB3 in the 6-GeneSig, thus creating a new 7-GeneSig. The 7-GeneSig was refined to discriminate five subclasses: "NB-r1", "NB-r2", "NB-r3", "GNB-r4", and "GN-r4" (Additional file 9). To test the robustness of this 7-GeneSig subgroup classification, cases from all data sets were reclassified into three histopathology prediction classes, "NB" (NB-r1, NB-r2, NB-r3), "GNB" (GNB-r4), and "GN" (GN-r4), and the reliability of the assignments was investigated. Out of 110 neuroblastic tumours of the Versteeg data set, 82 cases could be successfully assigned according to the 7-GeneSig rules (Additional file 9). All NB histopathology types (64 out of 64) were correctly assigned according to the 7-GeneSig, and the inter-rater reliability of the assignments was highly significant (Kappa measure of agreement, p = 7.489E-17, Table 2). Five out of eight GNB tumour types, as well as nine out of ten GN tumour types, were correctly assigned; one GN was predicted as "GNB" according to the 7-GeneSig (Table 2). In addition, we performed a reassignment test on data set 2, comprising one GN, four GNB, and 25 NB tumour types, which was also significant (inter-rater reliability, p = 0.003, data not shown). r4 cases from data sets 1, 2 and 3, previously classified as NB, were all assigned to the "GNB" or "GN" categories by the 7-GeneSig. Also, all NB cases of data set 4 fell into the NB r1-r3 categories (data not shown). In conclusion, the histopathology classification and subgroup assignment by the 7-GeneSig appear reliable and highly predictive.
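The inter-rater reliability quoted above is a Kappa measure of agreement, i.e. Cohen's kappa, which corrects the raw agreement between 7-GeneSig assignments and histopathology labels for the agreement expected by chance. A minimal Python sketch of the statistic itself (the paper reports the associated p-value, which is not computed here):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two categorical labelings of the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement from the marginal category frequencies of each rater.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Perfect agreement on a toy labeling gives kappa = 1:
print(cohens_kappa(["NB", "GNB", "GN", "NB"],
                   ["NB", "GNB", "GN", "NB"]))  # 1.0
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate systematic disagreement.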
Discussion
Neuroblastic tumours (NTs) represent a spectrum of disease, from undifferentiated and aggressive NB to the differentiated and largely quiescent GN tumours. NB tumours are commonly categorized into three main types based on numerical and structural genomic alterations, as well as expression of the neurotrophin receptor TrkA [10]. In a recent study using Principal Components Analysis (PCA), however, our data indicated the existence of four molecular tumour groups, r1-r4 [12]. In the current study we aimed to further characterize these four molecular subgroups, investigating the divergent r4 group in particular. While the r2 (Type 2A) and r3 (Type 2B) tumour subgroups were dominated by cell cycle-related genes and networks, these were completely absent in the r4 subgroups (data sets 1-3) and the GNB and GN subtypes (data set 4). The vast majority of the cell cycle-related genes were linked to the G2/M transition and spindle assembly checkpoint (e.g. BIRC5, BRCA1, BUB1B, CCNA2, CCNB1, FANCI, HMMR, KIF15, and MCM2), many of which were found to belong to the ARACNE-modelled BIRC5 network. Overexpression of genes involved in mitotic regulation is typical for rapidly proliferating tumours and would also be expected in the aggressive NB subtypes compared to the more differentiated, quiescent GNB and GN tumours. The BIRC5 protein is found to stabilize the microtubules in the chromosomal passenger complex, and knockdown of BIRC5 causes apoptosis in NB via mitotic catastrophe [28]. Also, a previous publication shows that genomic aberrations in G1-regulating genes in NB tumours lead to S and G2/M phase progression [20]. Interestingly, the forkhead box (FOX) gene FOXM1, encoding a protein phosphorylated in M phase, was significantly up-regulated in the r2 and r3 subgroups. FOXM1 activates the expression of several cell cycle genes, e.g. AURKB, CCNB1, CCND1, and MYC, and is involved in cell proliferation and malignancy [29].
Several cell cycle and DNA repair genes, including BIRC5, are suggested to act downstream of N-myc [21,30,31]. In addition, most of the studied tumour suppressor (TS) candidates were specifically down-regulated in the r3 subgroup, which is probably explained by their acting downstream of N-myc. Several of the TS candidate genes are also located in the 1p36 chromosomal region (e.g. CHD5 and KIF1B [32-34]), and Del1p is a well-known prognostic marker highly correlated with MYCN amplification in NB [35]. One such N-myc-regulated and 1p36-localized TS candidate is CDC42, encoding a small GTPase. This protein has a function in cell polarization and growth cone development in NB cell differentiation, similar to Rac1 and Cux-2, and is suggested to inhibit neuritogenesis in NB [36]. In concordance with this, we found CDC42 to be the 14th most significantly down-regulated gene in the MNA subgroup (r3) compared to subgroup r2.
The main focus of the study was to define the underlying regulatory networks of the r4 subgroup. In contrast to the other three well-known subgroups of NB, the r4 tumours showed high expression of embryonic development and nervous system signalling genes. One of the most prominent genes from the differential expression analysis was ERBB3, encoding a member of the epidermal growth factor receptor (EGFR) family of receptor tyrosine kinases (RTKs). The ARACNE-modelled ERBB3 network was significantly enriched in the differentially expressed gene lists of the r4 subgroups (data sets 1-3), and this enrichment was also found in the GNB and GN histopathology categories of data set 4. Two members of the ERBB3 network, S100B and SOX10, were among the ten most significantly up-regulated genes in the r4 subgroups. The S100 calcium binding protein B (S100B) has long been reported as a prognostic biomarker of malignant melanoma [37], and a paired down-regulation of ERBB3 and S100B observed in malignant peripheral nerve sheath tumours confirms their functional relationship [38]. Interestingly, the S100 beta protein, mapping to chromosome 21, has been proposed to be responsible for the lack of NB in Down syndrome patients by producing growth inhibition and differentiation of neural cells [39]. The SRY box 10 transcription factor (Sox10) is a key regulator of the developing nervous system, and has been shown to control expression of ErbB3 in neural crest cells [40,41]. A paired overexpression of ErbB3 and Sox10 has been observed in pilocytic astrocytoma (PA), a common glioma of childhood, which verifies their network connection found in the current study [42]. Also, Sox10 and S100 are routinely employed in the pathological diagnosis of neural crest-derived tumours [43], and Sox10 serves as an embryonic glial-lineage marker in NTs [44]. By immunohistochemistry assessment, we found Sox10 to be expressed in both Schwannian cells and ganglion cells, whereas ErbB3 was found mainly in the mature ganglion cells. We could also verify the GN-specific expression of ErbB3 by immunoblot analysis. ErbB3 is activated through ligand binding of neuregulin (NRG), leading to heterodimerization of ErbB3 with other ErbB members and subsequent phosphorylation. Activated ErbB3 regulates proliferation through downstream signalling of the phosphoinositol 3-kinase/AKT survival/mitogenic pathways [25]. In the current study we found a significant correlation of ERBB3 with its family members EGFR and ERBB2 in all four independent data sets. EGFR and ERBB2 were also both significantly up-regulated in all r4 subgroups as well as in the GNB and GN tumours. Amplification of ERBB3 and/or overexpression of its protein has been reported in numerous cancers, including prostate, bladder, and breast cancer. Moreover, loss of ErbB3 function has been shown to eliminate the transforming capability of ErbB2 (also known as HER-2) in breast tumours [45]. Although the role of ErbB3 is emerging, its clinical relevance in different tumours remains controversial.

Table 2 legend: Histology prediction was performed on the Versteeg data set (n = 110). Group prediction was based upon the standard deviation (sd) of expression of ALK, BIRC5, CCND1, MYCN, NTRK1, PHOX2B, and ERBB3 according to the rules in Additional file 9. Out of 110 tumours, 82 were successfully assigned; for 28 tumour cases no group could be determined. The groups were assigned as follows: "NB" (r1-r3), "GNB" (r4-GNB), and "GN" (r4-GN). The inter-rater reliability (Kappa) was used to measure the agreement between category assignments (p = 7.489E-17).
There are a few studies of ErbB/HER receptor expression in neuroblastoma, showing that the role of ErbB/HER family members in neuroblastic tumour biology is interrelated and complex, but that their expression levels may represent a prognostic factor for patient outcome [46-48].
The heat map of 25 genes, including the 6-GeneSig genes, ERBB-genes, and TS-genes, showed a very specific expression pattern among the different r-subgroups and histopathology categories. The similarity of expression profiles between the four data sets was striking. The correspondence of the r4 subgroups to the GNB and GN histopathology subtypes was obvious, and ERBB3 appeared as a clear-cut marker for a GNB/GN-like expression profile. To demonstrate this further, a new 7-GeneSig (including ERBB3) was constructed and used in a histopathology reassignment classification test. The 7-GeneSig correctly assigned 100% of NB tumours, 62.5% of GNB tumours, and 90% of GN tumours to the correct histopathology category (Kappa measure of agreement, p = 7.489E-17; data set 4).
Also, all r4 tumour types from data sets 1-3 were categorized as GNB or GN tumours according to the 7-GeneSig. From this we conclude that the NB tumours previously assigned to the r4 subgroup by the 6-GeneSig most likely represent more differentiated NTs and are seemingly GNB/GN tumour types. Our study brings out the complexity of classifying neuroblastic tumours. The previously described unfavourable characteristics and poor outcome of the r4 tumour group are puzzling [12], but can be explained by the fact that prognostic subsets of GNBs exist [49]. Historically, GNBs have been the most difficult of the NTs to define in a consistent and uniform fashion, because the number and degree of differentiation of the neuroblastic cells tend to vary between cases as well as between different microscopic fields within the same tumour [1]. Moreover, the data sets used in the current study are probably not truly population-based, and the r4 subgroups found probably consist of different proportions of favourable/unfavourable (F/UF) subsets. In addition, some tumours may previously have been misclassified as NB, or the tumour tissue analysed by microarray may not be the same as the tissue that underwent histopathology assessment. Furthermore, it is not clear whether differentiation markers are superior to other prognostic factors in defining outcome. Unfavourable markers such as MNA and clinical stage may also be present in or among differentiated cells, and mark a poor prognosis by themselves.
ErbB3 also has an important role in the differentiation of neural crest cell (NCC) lineages during embryonic development [50]. Although ErbB receptors are also found to mediate proliferation and survival [47,48], the ERBB-profile found in this study is likely to reflect the phenotype or differentiation stage of developing neuronal progenitors. Upon induction of differentiation, neuronal progenitors may follow a variety of stages of NCC lineages. For example, neuroblasts in culture are shown to represent an immature bilineage stage able to progress towards neuronal and glial fates [44]. Schwannian cells are the principal glia of the peripheral nervous system, whereas neuroblasts differentiate from neural stem cells and exhibit variable degrees of differentiation up to ganglion cells. In this context, the ERBB-profile seems to be a marker of ganglionic-neuronal differentiation. A recent immunohistochemistry study of ErbB2 in neuroblastic tumours supports this conclusion [51]. However, it remains uncertain whether the r4 subgroups of data sets 1 and 3 are indeed GN or GNB, or whether the ERBB expression profile simply marks gradually differentiated NB tumours (encompassing increased levels of mature ganglion cells). Nevertheless, the results from all data sets are consistent with regard to the expression profile of the 25 genes selected for the heat map, strengthening the robustness of the suggested 7-gene signature. Accordingly, we propose ErbB3 as an excellent marker of neuronal differentiation, and suggest mRNA expression profiling by the 7-gene signature as a complement to histopathological assessment. However, the exact cut-off expression levels for classification need to be worked out in more detail, and classification must be based on international standard cases and assays.
Conclusions
In summary, by differential expression analysis and network modelling we have identified genes and gene networks constituting molecular and histological subgroups of neuroblastic tumours. The primary aim of our study was to identify genes characterizing the previously unknown r4 subgroup. Our results pinpointed ERBB3 and its network as among the most significantly up-regulated within this group. By studying the expression profiles in a broader range of neuroblastic tumour types, we found the r4 subgroup to be highly similar to GNB/GN tumour types. The ERBB-dominated profile found in r4 and GNB/GN tumours was clearly divergent from the cell-cycle-dominated profile mainly representing NB tumour subgroups (specifically the unfavourable NB subgroups). Our findings indicate that the previously identified r4 subgroup most likely constitutes GNB/GN tumours, or NB tumours with a high content of mature ganglion cells. This study also demonstrates the importance of performing unsupervised subtype clustering prior to downstream analyses. Predefined subgroups and supervised clustering studies may give distorted results if they are based on pools of mixed tumour histopathology subgroups. In conclusion, we have identified ERBB3 as a marker of a GNB/GN-like expression profile, and we suggest a 7-gene expression signature as a complement to histopathological assessment of neuroblastic tumours. Further studies of ErbB3 and other members of the ErbB family and their role in neuroblastic differentiation and pathogenesis are warranted.
Pre-processing microarray data
Data from five published neuroblastoma expression microarray studies run on three different Affymetrix platforms (HGU133A, HGU95Av2, and HGU133plus2) were used in this study (Table 1). Raw data files were obtained from ArrayExpress (www.ebi.ac.uk/microarray-as/ae/) and Gene Expression Omnibus (www.ncbi.nlm.nih.gov/geo/), or directly from collaborators. Expression data files were normalized by gcRMA using Bioconductor (library BioC 2.4) in R 2.9.2 [52] in four separate groups: 1) the De Preter [53] data set run on the HGU133A Affymetrix platform (17 samples, pre-amplified); 2) the McArdle [54] and Wilzén [55] data sets run on the HGU133A Affymetrix platform (30 samples, not pre-amplified); 3) the Wang [56] data set run on the HGU95Av2 platform (102 samples, not pre-amplified); and 4) the Versteeg [57] data set run on the HGU133plus2 platform (110 samples). For each probe-set, the maximum expression value over all samples was determined, and probe-sets showing very low or no detectable expression (log2 expression <5) were filtered out. Next, the mean log2 expression level for each gene symbol was calculated to generate "mean-per-gene" data files: 7439 genes in data set 1, 8106 genes in data set 2, 7542 genes in data set 3, and 15614 genes in data set 4.
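The filtering and mean-per-gene collapsing step can be sketched as follows. The authors worked in R/Bioconductor; this is only an illustrative stdlib Python sketch with made-up probe identifiers and values.

```python
from collections import defaultdict

def mean_per_gene(probe_values, probe_to_gene, min_log2=5.0):
    """Collapse probe-set log2 expression values to one mean profile per
    gene symbol, after discarding probe-sets whose maximum expression over
    all samples stays below `min_log2` (the filter described in the text)."""
    per_gene = defaultdict(list)
    for probe, values in probe_values.items():
        if max(values) < min_log2:
            continue  # very low / undetectable probe-set: filtered out
        gene = probe_to_gene.get(probe)
        if gene:
            per_gene[gene].append(values)
    # Element-wise mean across a gene's retained probe-sets, per sample.
    return {
        gene: [sum(col) / len(col) for col in zip(*rows)]
        for gene, rows in per_gene.items()
    }

# Two probe-sets mapping to ERBB3 and one undetectable probe-set (max < 5):
probes = {"p1": [6.0, 8.0], "p2": [8.0, 6.0], "p3": [2.0, 3.0]}
mapping = {"p1": "ERBB3", "p2": "ERBB3", "p3": "MYCN"}
print(mean_per_gene(probes, mapping))  # {'ERBB3': [7.0, 7.0]}
```

The undetectable probe-set (p3) is dropped before averaging, so MYCN does not appear in the resulting mean-per-gene table.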
Differential expression analysis
NB samples from the De Preter and McArdle/Wilzén data sets were divided into four r-subgroups by a 6-gene signature (further referred to as the "6-GeneSig") according to Abel et al., 2011 [12] (Additional file 1). From these two data sets, 14 (pre-amplified, De Preter) and 23 (non-pre-amplified, McArdle/Wilzén) cases, respectively, were successfully assigned to one of the four r-groups (Table 1). Differential gene expression analysis was performed by a two-group unpaired Significance Analysis of Microarrays (SAM) test (i.e. six comparisons) [58]. Gene lists comprising the 1000 most significantly differentially expressed genes (sorted by the d-statistic) with a fold change above 2 were exported from each comparison, from each direction (up or down), and from each data set separately (resulting in 12 SAM gene lists per data set). Next, the SAM gene lists from the two data sets were compared, and 12 intersection gene lists (SAM intersect) were created. To minimize the variance, a combined fold change (FC_comb) for each gene in the SAM intersect gene list was calculated as an inverse-variance-weighted mean:

FC_comb = (Σ_i FC_i / SE_i²) / (Σ_i 1 / SE_i²),

where FC_i is the fold change in data set i and SE_i is the standard error of the mean log2 expression values in data set i. A combined p-value (P_comb) for each gene in the SAM intersect gene list was calculated by a sample-size-weighted Stouffer combination:

P_comb = Φ( Σ_i √N_i · Φ⁻¹(P_i) / √(Σ_i N_i) ),

where N_i is the total number of samples of the two groups compared by the d-statistic in SAM, and P_i is the corresponding p-value for data set i. Φ is the cumulative distribution function of the standard normal distribution and Φ⁻¹ is its inverse function.
Based on an approximation of 8000 multiple tests (i.e. 8000 genes), a nominal p-value <6.25E-06 was found to correspond to an adjusted p-value <0.05 (according to Bonferroni correction) and was subsequently used as a cut-off for significance in SAM.
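The combination and the Bonferroni cut-off can be sketched as follows (a stdlib-only sketch assuming a sample-size-weighted Stouffer combination for P comb and an inverse-variance-weighted mean for FC comb, which are the standard forms matching the symbols defined in the text; the p-values and sample sizes below are illustrative):

```python
import math
from statistics import NormalDist

_std = NormalDist()  # standard normal: Phi = cdf, Phi^-1 = inv_cdf

def combined_fc(fold_changes, standard_errors):
    """Inverse-variance-weighted mean of per-data-set fold changes (assumed form)."""
    num = sum(fc / se ** 2 for fc, se in zip(fold_changes, standard_errors))
    den = sum(1.0 / se ** 2 for se in standard_errors)
    return num / den

def combined_p(p_values, n_samples):
    """Sample-size-weighted Stouffer combination of one-sided p-values
    (assumed form): each p maps to a z-score, weighted by sqrt(N_i)."""
    z = [_std.inv_cdf(1.0 - p) for p in p_values]
    w = [math.sqrt(n) for n in n_samples]
    # sum of squared weights equals the total sample count
    z_comb = sum(wi * zi for wi, zi in zip(w, z)) / math.sqrt(sum(n_samples))
    return 1.0 - _std.cdf(z_comb)

# Bonferroni correction: family-wise 0.05 over ~8000 tests.
BONFERRONI_CUTOFF = 0.05 / 8000  # 6.25e-06
```

A gene significant in both data sets gets a combined p-value smaller than either input, which is then compared against the 6.25E-06 cut-off.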
Gene network modelling
A large gene regulatory network was constructed from an independent data set (Wang) of 102 expression profiles [56]. Mutual information values were estimated with the ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) algorithm using a p-value cut-off of 1E-7 [26]. The data processing inequality (DPI) was applied with a tolerance of 0.15. Gene networks of seven selected genes were extracted from the global network together with their immediate gene neighbours. The gene networks of nearest neighbours were visualized in Cytoscape 2.8.2.
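The DPI pruning step can be illustrated with a toy network (a sketch in the spirit of ARACNE, not the reference implementation; the published algorithm also includes kernel-based mutual-information estimation and a significance threshold, and the tolerance formulation below is one common reading):

```python
from itertools import combinations

def dpi_prune(mi, tolerance=0.15):
    """Data Processing Inequality pruning sketch: in every fully connected
    gene triangle, the weakest edge is treated as an indirect interaction
    and removed when its mutual information (MI) falls below
    (1 - tolerance) times the smaller of the other two edges."""
    edges = {tuple(sorted(pair)): v for pair, v in mi.items()}
    genes = sorted({g for pair in edges for g in pair})

    def get(a, b):
        return edges.get(tuple(sorted((a, b))))

    doomed = set()
    for i, j, k in combinations(genes, 3):
        ij, ik, jk = get(i, j), get(i, k), get(j, k)
        if None in (ij, ik, jk):
            continue  # not a closed triangle, DPI does not apply
        triple = sorted([(ij, (i, j)), (ik, (i, k)), (jk, (j, k))])
        weakest_mi, weakest_edge = triple[0]
        second_mi = triple[1][0]
        if weakest_mi < (1.0 - tolerance) * second_mi:
            doomed.add(tuple(sorted(weakest_edge)))
    return {e: v for e, v in edges.items() if e not in doomed}
```

For example, if A-B and B-C carry high MI but A-C is weak, the A-C edge is interpreted as an indirect A-B-C interaction and pruned.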
Human tissue samples used for protein expression validation
Tumours histopathologically classified as GN and NB (data set 5, Table 1) were used for immunohistochemistry (4 NB and 4 GN) and immunoblot analysis (4 NB and 5 GN). Tissue from patients was obtained during surgery and stored at −80°C. Ethical approval was obtained from the Karolinska University Hospital Research Ethics Committee (Approval no. 2009/1369-31/1 and 03-736). Informed consent for using tumour samples in scientific research was provided by parents/guardians. In accordance with the approval from the Ethics Committee, the informed consent was either written or verbal. When verbal or written assent was not obtained, the decision was documented in the medical record.
Immunohistochemistry
Formalin-fixed and paraffin-embedded (FFPE) tissue slides were deparaffinized in xylol and rehydrated in graded alcohols. For antigen retrieval, slides were boiled in a sodium citrate buffer (pH 6.0) for 10 min, in a microwave oven. After blocking in 1% bovine serum albumin (BSA) for 20 min, the tissue sections were incubated with primary antibody overnight, Sox10 ([N-20], Santa Cruz Biotechnology) and ErbB-3 ([RTJ2], Abcam) respectively, diluted 1:50 in 1% PBSA. Thereafter slides were rinsed in PBS and endogenous peroxidases were blocked in 0.3% H 2 O 2 for 10 min. As a secondary antibody, anti-mouse-horseradish peroxidase (HRP) and anti-goat-horseradish peroxidase were used (Invitrogen, Paisley, UK). All slides were counterstained with haematoxylin. To control for non-specific binding, antibody specific blocking peptides and isotype-matched controls were used. For colocalization studies of Erb3 and Sox10, tumor tissue sections were simultaneously stained with primary antibodies and for fluorescence visualization, anti-goat Alexa Fluor 594 and anti-mouse Alexa Fluor 488 were used, respectively.
Statistical analyses
The expression relationship of ERBB3 to the discriminative 6-GeneSig (ALK, BIRC5, CCND1, MYCN, NTRK1, and PHOX2B) and the ErbB family members EGFR, ERBB2, and ERBB4 were investigated by a Pearson correlation test. The statistical significance of expression levels of ERBB genes (i.e. EGFR, ERBB2, ERBB3, and ERBB4) were investigated by Welch t-test. Inter-rater reliability of group assignments was tested by the Kappa statistic on crosstabs in SPSS (version 20.0).
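For reference, the Pearson coefficient and the Welch statistic can be computed from first principles (a stdlib-only sketch; in practice these tests were run in standard statistics software, and the Welch p-value would come from the t distribution with the Welch-Satterthwaite degrees of freedom):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def welch_t(x, y):
    """Welch t statistic and Welch-Satterthwaite degrees of freedom for two
    samples with possibly unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```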
Additional files
Additional file 1: The 6-GeneSig subgroup classification rules. Rules based on standard deviations (SD) of expression values for all samples in each data set. In order for samples to be successfully assigned into one of four r-groups, 5 out of 6 expression rules must be met. Shaded cells indicate rules with no exception for classification into that specific subgroup.
Additional file 3: GO results. The Biological Networks Gene Ontology tool (BiNGO) in Cytoscape was utilized to map the predominant functional themes of the SAM gene lists. The 10 most significant Gene Ontology (GO) terms from each SAM comparison are presented. Gene lists are divided into three data sets: data sets 1 & 2 (DePreterMcArdleWilzén), data set 3 (Wang), and data set 4 (Versteeg), and into two differential expression directions, "up" or "down". GO-ID: Gene Ontology identification number, p-val: p-value, corr p-val: corrected p-value, Description: description of the gene ontology theme. The "DePreterMcArdleWilzén_12_down" list was too short (22 genes) to enable the GO term search.

Comparisons in data sets 1-3 are according to r-subgroups (r1-r4), and data set 4 (Versteeg) comparisons are according to histopathology groups: Ganglioneuroma (GN), Ganglioneuroblastoma (GNB), and Neuroblastoma (NB). Enrichments were run with 1000 permutations, permutation type = gene set, and gene list sorting mode = real (scoring both extremes) in descending order. Results per data set and comparison in each sheet are presented as follows: NAME = name of the gene set, SIZE = size of the gene set, ES = enrichment score, NES = normalized enrichment score, NOM p-val = nominal p-value, FDR q-val = false discovery rate, FWER p-val = family-wise error rate, RANK AT MAX = the position in the ranked list at which the maximum enrichment score occurred, LEADING EDGE = displays the three statistics used to define the leading edge subset. In addition, the r3 versus r1 comparisons in data sets 1-3 were investigated and presented as gene list sorting mode = abs.
Broadband near-IR absorbing Au-dithiolene complexes bearing redox-active oligothiophene ligands †
A series of three homoleptic, monoanionic gold dithiolene complexes of oligothiophene ligands which coordinate via a central thiophene-3,4-dithiolate chelate are presented. The oligomer chains are three, five and seven thiophenes long and the complexes display hybrid optoelectronic properties featuring characteristics of both the oligothiophene chains and the delocalised metal dithiolene centre. The properties of the complexes have been characterised using a variety of spectroscopic and electrochemical methods complemented by computational studies. Solid state spectroelectrochemistry has revealed that upon oxidation these complexes display intense and broad absorption across the visible spectrum. In attempting to produce nickel analogues of these materials a single crystal of a photo-oxidised nickel dithiolene complex has also been isolated.
Introduction
Binding two or more electronically delocalised substructures together by fusion across a shared aromatic bond presents an alternative and powerful method by which to facilitate electronic communication between redox-active sub-units.
Bis(dithiolene) complexes have a non-innocent [C 4 S 4 M] chelate and have fascinated coordination and materials chemists alike for over 50 years. [1][2][3][4][5] These complexes are often intensely coloured with a characteristic near-IR (NIR) absorption 6,7 and display numerous reversible electron transfer processes. 8 They are known to exhibit a range of fascinating and useful material properties including ferromagnetism, 9 metallic conductivity and superconductivity, [9][10][11][12] ambipolar charge transport, [13][14][15] non-linear optical activity 16,17 and catalytic water splitting. [18][19][20] Conjugated polymers [21][22][23][24] and oligomers [25][26][27] of thiophene are archetypical organic p-type semiconductors that have been widely studied for use in optoelectronic devices. Combining the rich optoelectronic properties of the metal-containing noninnocent dithiolene core with the straightforward processing and easily tunable electronic properties of a conjugated polymer or oligomer presents an appealing approach in the search for new functional materials. A number of polymerisable complexes have been realised and subjected to electropolymerisation [28][29][30][31][32][33][34] or electrodeposition 35,36 to give low band gap metallopolymers, while discrete thiophene-containing dithiolene ligands have recently provided some novel photochromic complexes. 37 Of particular relevance to this work are the studies by Belo, Almeida and co-workers on complexes of thiophene-dithiolate ligands [38][39][40][41][42] which have given rise to a number of complexes and salts thereof with interesting structural, electronic and magnetic properties. [43][44][45][46][47][48][49][50][51][52] The ligands thiophene-2,3-dithiolate (23TDT) and thiophene-3,4-dithiolate (34TDT) consist of an aromatic, electron-rich thiophene ring, fused to the electronically delocalised, typically anionic, metal dithiolene chelate. 
In both 23TDT and 34TDT further functionalisation of the thiophene moiety can be used to modify and tune the materials' electronic or mechanical properties. Mono-53 and di-alkyl 54 substituted derivatives of 23TDT have been prepared with a dialkyl substituted complex behaving as a single molecule conductor, while robust synthetic routes have been established for the synthesis of semiconducting π-extended 23TDT ligands bearing aromatic rather than alkyl substituents. 55,56 Our interest in this area involves extending the conjugation length of the 34TDT ligand. Metallopolymers obtained by the electropolymerisation of complexes of the symmetrical 34TDT ligand 2,5-di(thien-2-yl)thiophene-3,4-dithiolate displayed broad absorption which is desirable for harvesting solar energy. 33 An analogue of this ligand was synthesised with methyl groups capping the terminal α-positions of the oligothiophene chains to inhibit polymerisation; this material provided electrochemically stable complexes. 34 Our family of ligands has grown to include methyl endcapped terthiophene (3T), quinquethiophene (5T) and septithiophene (7T) chains which we have previously exploited as precursors in the synthesis of new tetrathiafulvalenes 57-59 and spirocyclic germanium complexes 60,61 for applications in organic field effect transistor (OFET) and organic solar cell (OSC) devices, respectively. These ligands also bear hexyl groups to improve solubility. Here, we continue these studies by reporting the synthesis of monoanionic homoleptic Au complexes using 3T, 5T and 7T as 34TDT ligands.
The complexes have been characterised by electrochemical and spectroscopic methods and the optoelectronic properties of the neutral complexes have been probed using electronic absorption spectroelectrochemistry. Alongside this we attempted the synthesis of a complementary Ni complex which proved to be poorly stable, although we did obtain a single crystal of the complex [Ni(3T) 2 ] 2− in a partially oxidised state, the structure of which is also presented.
The final complexes were obtained (Scheme 1) by treating a THF solution of the desired ligand precursor 12-14 with freshly made NaOMe prior to addition of K[AuCl 4 ] followed immediately by a tetra(alkyl)ammonium bromide to provide the complexes in moderate to high yields. Purification was achieved by recrystallisation and verified by elemental analysis. Although the elemental analysis results for the largest complex [Au(7T) 2 ] − showed some deviation from the predicted values, the results of optical and vibrational spectroscopy and its electrochemical properties are consistent with the other members of the series therefore we will include it within our discussion.
Electronic absorption spectroscopy
The absorption spectra of the complexes ( Fig. 1 and Table 1) were recorded in CH 2 Cl 2 solution and are dominated by intense bands in the visible region associated with π → π* transitions over the oligothiophene chains. As may be expected, the π → π* transitions become red-shifted as the conjugation length of the oligothiophene ligand increases. Common features include two high energy absorptions at ∼255 nm and ∼300 nm which are significantly more intense in complex [Au(5T) 2 ] − . [Au(7T) 2 ] − displays substructure featuring aspects of both of the smaller analogues, including a peak at ∼345 nm which coincides with the lowest energy π → π* band for [Au(3T) 2 ] − ; the peak maximum for [Au(5T) 2 ] − correlates closely with a shoulder in the spectrum of [Au(7T) 2 ] − indicating that some spectral characteristics of the smaller chains are maintained as the chain length increases.
The magnitude of the difference in extinction coefficients of the π → π* transitions between the free ligand and the complex decreases systematically as the chain length increases. It more than doubles in the case of 3T but increases by only a third in the case of 7T. Calculation of the energy of these π → π* transitions using the onset wavelengths reveals that they have only decreased by 0.04-0.07 eV with respect to their precursor ligands.
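The onset-wavelength-to-energy conversion used here follows directly from E = hc/λ, with hc ≈ 1239.84 eV nm; a quick sketch (the wavelengths below are illustrative, not the measured onsets):

```python
# Convert an absorption onset wavelength (nm) to a transition energy (eV).
HC_EV_NM = 1239.84  # hc in eV * nm

def onset_energy_ev(wavelength_nm):
    """Transition energy in eV from the onset wavelength in nm."""
    return HC_EV_NM / wavelength_nm

# In this wavelength range a red-shift of the onset by ~15 nm changes the
# energy by only a few hundredths of an eV, the order of magnitude of the
# 0.04-0.07 eV shifts reported (example onsets are hypothetical):
delta = onset_energy_ev(520.0) - onset_energy_ev(535.0)
```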
A characteristic low energy absorption which stretches towards the NIR region is also observed for each of the complexes and is associated with d → d transitions over the large Au atom (Fig. 1, inset). [64][65][66] Increasing the ligand size from [Au(3T) 2 ] − to [Au(5T) 2 ] − results in a notable red-shift of this band; however, it does not shift further when the chain length increases to [Au(7T) 2 ] − . The fact that this band shifts to longer wavelengths as the oligothiophene chain increases in length serves to indicate that there is indeed some electronic communication with the delocalised dithiolene core.
Combined, these results indicate that while the Au dithiolene core itself does have interaction with the delocalised π-system of the ligands, it can facilitate only limited electronic communication between the π-systems of the two ligands, the extent of which decreases with increasing chain length.
Vibrational spectroscopy
Fourier transform infra-red (FTIR) spectroscopy of the ligands and complexes was performed using KBr pellets ( Fig. 2 and Table S1 †). There is good complementarity in the fingerprint regions of both the ligands and the complexes.
The ligands 3T-7T display peaks at ∼1650 and ∼1700 cm −1 due to the central carbonyl of the thieno-[3,4-d]-1,3-dithiol-2-one moiety. This correlates reasonably well with reported values, albeit at slightly higher wavenumber, and can be assigned confidently due to the absence of this peak in the FTIR spectra of the complexes. 67,68 Aromatic overtones of the thiophene rings dominate the spectrum from 1650-1400 cm −1 (ref. 69), although a peak in this region at ∼1560 cm −1 is only observed in the metal complexes (hidden in [Au(7T) 2 ] − ), while a weak to medium intensity peak at ∼1377 cm −1 arises from deformations of the terminal -CH 3 groups. 69,70 A sharp peak at ∼1285 cm −1 is only present in the complexes. By comparison with Au complexes of 1,2-benzenedithiolate, the absence of any intense bands around ∼1085 cm −1 indicates that the ligands have no significant radical character. 64,66 The remaining notable characteristics are bands of moderate to strong intensity between 850-785 cm −1 assigned to aromatic C-H stretches of the remaining β-hydrogens on the thiophene backbone. 69,70

Cyclic voltammetry

Cyclic voltammetry was used to examine the electrochemical properties of the complexes (Fig. 3 and Table 2). All of the complexes are significantly more easily oxidised than their ligand precursors 57,58 and demonstrate a number of reversible or quasi-reversible processes. The complexity of the waveform increases and the reversibility decreases with increasing chain length.
The oxidative behaviour of aromatic Au dithiolene complexes is predominantly ligand based while the reductive behaviour is more metal centred, 64,66 which can allow great control to be exerted over the complexes' oxidation potentials. 65 In this series of compounds all major oxidative processes shift to lower potential as the oligothiophene chain lengthens, due to the increase in the number of π-electrons and extent of delocalisation, which results in stronger electron donating properties. Using the first oxidation potential as an example, the half-wave potential decreases steadily: E 1/2 of +0.59 V for [Au(3T) 2 ] − , +0.40 V for [Au(5T) 2 ] − and +0.29 V for [Au(7T) 2 ] − . On the other hand, reduction processes shift to more positive potentials in a similar fashion. The trend in reduction potentials, coupled with the low onset of oxidation in all cases, further indicates that some electronic coupling between the ligand π-system and the metal dithiolene core does indeed occur.
Electrochemical HOMO-LUMO gaps were calculated from the difference in the onsets of the first oxidation and reduction peaks and agree well with the d → d transition energies calculated from the absorption spectra.
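Both quantities used in the voltammetric analysis are simple differences of measured potentials; a minimal sketch with illustrative (not measured) values:

```python
def half_wave(e_pa, e_pc):
    """Half-wave potential of a (quasi-)reversible couple: the midpoint of
    the anodic and cathodic peak potentials (V)."""
    return (e_pa + e_pc) / 2.0

def electrochemical_gap(onset_ox_v, onset_red_v):
    """HOMO-LUMO gap (eV) estimated as the difference between the onsets of
    the first oxidation and the first reduction, both vs. the same reference."""
    return onset_ox_v - onset_red_v

# Illustrative numbers only:
e_half = half_wave(0.62, 0.56)              # midpoint of a redox couple
gap = electrochemical_gap(0.45, -1.10)      # onset difference in eV
```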
Some specific comments about each complex follow:

[Au(3T) 2 ] −

This complex displays greatly improved electrochemical reversibility when compared to the analogous non-capped complex, which only had a single irreversible oxidation at +0.57 V. 34 It has closely overlapping first and second oxidations with peak profiles and separations that indicate fairly good reversibility. In keeping with our previous observations we have assigned this to sequential radical cation formation on each terthiophene, which is localised primarily over the ligands. A third quasi-reversible oxidation occurs at E 1/2 of +1.16 V, of which the narrow profile of the reverse peak may indicate some adsorption of the triply charged complex to the electrode surface. The complex undergoes a single quasi-reversible reduction at −1.26 V.
[Au(5T) 2 ] −

In complex [Au(5T) 2 ] − , the first oxidation peaks are similar to those observed for complex [Au(3T) 2 ] − . The next region of the voltammogram shows no clearly defined peaks, although there is evidence for a further oxidation occurring at approximately +0.83 V followed by a fourth, more easily discernible quasi-reversible peak centred at E 1/2 = +1.08 V. This sequence of events is likely due to dication formation on each of the quinquethiophene chains. The complex undergoes a single quasi-reversible reduction at E 1/2 = −1.00 V and an irreversible reduction at −1.48 V.
[Au(7T) 2 ] −

Complex [Au(7T) 2 ] − displays a voltammogram with a shape that is largely complementary to that of [Au(5T) 2 ] − . A first reversible oxidation is centred at +0.28 V with another oxidation at +0.97 V. There appears to be at least one quasi-reversible oxidation in the broad waveform between these two oxidations. An irreversible shoulder can be identified at +1.04 V prior to the final quasi-reversible formation of a second dication at +1.27 V. The complex undergoes two irreversible reductions at −1.00 and −1.53 V. The magnitude of the current response of these reductions is very large and no reversibility is apparent.
Computational studies
DFT methods were used to gain a further insight into the electronic properties of [Au(3T) 2 ] − and [Au(5T) 2 ] − . The orbital contours for [Au(3T) 2 ] − and [Au(5T) 2 ] − are shown in Fig. 4 and 5 respectively. As it was desirable to see how the hexyl chains influence the geometry adopted by the ligands upon complexation, they were included in the calculation. Similar observations can be made for both complexes:

(1) In both cases steric hindrance results in significant twisting of the terminal thiophene rings of the oligothiophene chains.

(2) The frontier orbitals of the HOMO manifold are delocalised over both the oligothiophene chains and the metal dithiolene core. In [Au(5T) 2 ] − it is not until the HOMO−4 that purely oligothiophene based contours begin to be observed, while in [Au(3T) 2 ] − even the HOMO−5 contains some influence from the coordinating sulfurs.
(3) The LUMO is localised almost exclusively over the golddithiolene centre for both complexes and is close in energy to the higher HOMO orbitals which correlates well with the low energy d → d band observed in the absorption spectra.
(4) The LUMO+1 and higher lie at significantly more positive energies than the LUMO and are delocalised over the conjugated π systems of the ligands with small contributions from the coordinating dithiolene sulfur atoms. This indicates that these orbitals make a much more substantial contribution to the shorter wavelength π → π* transitions.

(5) The HOMO in [Au(3T) 2 ] − is quite similar to the HOMO−2 of [Au(5T) 2 ] − . Noting that the energy differences between the HOMO, the HOMO−1 and the HOMO−2 are relatively small, it appears that the longer conjugation length of the 5T ligand leads to it having a greater influence on the frontier HOMOs than the less extended 3T ligand.
Paying particular attention to the mixed oligothiophene/dithiolene nature of the HOMO orbitals, these results clearly indicate the presence of constructive electronic interplay between the ligands and the gold-dithiolene core. Overall, the calculated electronic structure of the gold TDT core agrees fairly well with existing studies on both gold dithiolenes 64,65,71 and other metal TDT complexes. 56

Spectroelectrochemistry

Neutral Au 34TDT complexes have not as yet been isolated. Upon oxidation of the monoanionic complexes with iodine they yield poorly defined and insoluble polycrystalline precipitates which show some paramagnetic character. 39,42 UV-vis spectroelectrochemical (SEC) measurements present one route by which to begin to observe their spectral properties. SEC allows changes in a molecule's spectroscopic properties upon oxidation or reduction to be monitored.
Thin films of the complexes were deposited on ITO slides by drop casting. The slides were then suspended in an electrolytic solution and used as the working electrode of a standard three-electrode electrochemical cell held within a UV/vis spectrometer. A potential was applied to the film and its absorption spectrum measured at intervals of +0.10 V in a step-wise fashion. Upon changing potential the film was given approximately 60 seconds to equilibrate prior to recording the spectrum. The results of these experiments are presented in Fig. 6. Spectroelectrochemical studies on thin films of the ligands 5T and 7T can be found in our previous publications. 57,58 The smallest ligand 3T is soluble in the electrolyte solution therefore thin film studies were not possible.
Prior to oxidation, the thin film absorption spectra of all of the complexes are dominated by ligand based π → π* transitions with the weak d → d transition of the Au dithiolene core also evident. Upon oxidation a large increase in absorption across the entire visible spectrum is observed, featuring characteristics of both the conjugated ligand and the delocalised Au dithiolene centre. This is a somewhat gradual process for [Au(3T) 2 ] − and [Au(7T) 2 ] − but is very sharp for [Au(5T) 2 ] − and remains remarkably stable at higher potentials.
The solid state SEC of the bis(terthiophene) Au dithiolene complex [Au(3T) 2 ] − is shown in Fig. 6a. Upon increasing the potential from 0.00 V to +0.40 V the intensity of the d → d absorption drops slightly, then increases rapidly as at least one electron is lost, with strong peaks emerging at 515 and 734 nm. The first of these peaks is due to the terthiophene based radical cation while the second occurs over the now neutral [AuC 4 S 4 ] 0 dithiolene centre. The intensity of the absorbance across the spectrum begins to stabilise at potentials >+0.70 V.
The SEC plot for [Au(5T) 2 ] − (Fig. 6b) shows that upon oxidation at +0.30 V a large and sudden spectral response occurs: the π → π* absorption drops slightly in intensity and is red-shifted from 429 to 459 nm, typical behaviour observed in conjugated thiophene oligomers and polymers. 72 Concomitantly, the intensity of absorption across the rest of the spectrum is raised greatly, with peaks at 602 nm and 1036 nm pushing deep into the NIR. The very limited change in the spectrum at higher potentials may indicate that it has become insulating. The complex [Au(7T) 2 ] − displays characteristics similar to those of the 5T analogue. At potentials above +0.30 V broad bands are generated with peaks at 679 and >1100 nm, respectively (Fig. 6c).
X-Ray crystallography
As previously mentioned, we attempted to synthesise nickel analogues of the gold complexes by using NiCl 2 ·6H 2 O in place of K[AuCl 4 ] but found them poorly stable and therefore difficult to isolate. However, we were able to isolate a small amount of the terthiophene nickel complex by diffusion of cyclohexane into a tetrahydrofuran solution and were surprised by our findings.
A single crystal of (Et 4 N) 2 [Ni(3T) 2 ] grew as an orange plate. X-ray crystallography revealed that the complex displayed varying amounts of oxidation of the coordinating sulfur atoms. We can only attribute this partial oxidation to the complexes having reacted with dissolved molecular oxygen. Similar spontaneous aerobic oxidation of dithiolenes has only been reported on a small number of occasions. [73][74][75] This behaviour also leads us to postulate that oxidation of our Ni complexes could be an underlying factor of their poor stability. The asymmetric unit of the nickel complex contains two half complexes with the Ni lying on an inversion centre and two tetraethylammonium cations (Fig. 7). The coordinated S atoms are partially oxidised and have two oxygen sites modelled for each S which are refined with a common occupancy for each pair (for Ni(1) 0.19(3) and 0.50 (2), for Ni(1a) 0.24 (2) and 0.42 (2)).
Conclusions
In conclusion, six new homoleptic bis(thiophene-3,4-dithiolate) complexes featuring nickel or gold metal centres bound between end-capped oligothiophene ligands with chain lengths of three, five and seven thiophenes were synthesised. The Ni complexes proved to be poorly stable due to facile autooxidation as confirmed by the isolation of a crystal of a partially oxidised complex, the molecular structure of which was confirmed by X-ray crystallography. The monoanionic formally Au(III) complexes show hybrid optoelectronic properties resulting from interplay between the dithiolene core and the oligothiophene ligands including low oxidation potentials and a low energy metal centred absorption band, the position of which is sensitive to the oligothiophene chain length. This hybrid electronic behaviour has been confirmed using spectroscopic and electrochemical studies and corroborated with computational analysis of the two smaller complexes.
Spectroelectrochemical results for thin films of the Au(III) complexes in the solid state have shown that in all cases oxidation, ostensibly to the neutral Au(IV) complex, results in a large and sudden increase in the intensity of absorption across the entire visible window and into the NIR, which persists at higher potentials. Although their isolation remains challenging, the interesting optoelectronic and magnetic properties of neutral gold 34TDT complexes with extended conjugation means that they remain intriguing targets for further development. In particular, their broad and strong absorption across the visible and NIR regions of the spectrum make them excellent candidate materials for broadband light sensing and harvesting applications. 76
Conflicts of interest
There are no conflicts to declare.
Lipoic acid increases glutathione peroxidase, Na+, K+-ATPase and acetylcholinesterase activities in rat hippocampus after pilocarpine-induced seizures
In the present study we investigated the effects of lipoic acid (LA) on acetylcholinesterase (AChE), glutathione peroxidase (GPx) and Na+, K+-ATPase activities in rat hippocampus during seizures. Wistar rats were treated with 0.9% saline (i.p., control group), lipoic acid (20 mg/kg, i.p., LA group), pilocarpine (400 mg/kg, i.p., P400 group), or the association of pilocarpine (400 mg/kg, i.p.) plus LA (20 mg/kg, i.p., administered 30 min before P400; LA plus P400 group). After the treatments all groups were observed for 1 h. In the P400 group, there was a significant increase in GPx activity as well as a decrease in AChE and Na+, K+-ATPase activities after seizures. In turn, LA plus P400 abolished the appearance of seizures and reversed the decrease in AChE and Na+, K+-ATPase activities produced by seizures, when compared to the P400 seizing group. The results from the present study demonstrate that preadministration of LA abolished seizure episodes induced by pilocarpine in rats, probably by increasing AChE and Na+, K+-ATPase activities in rat hippocampus.
Oxidative stress is attractive as a possible mechanism for pilocarpine-induced seizures for many reasons. The brain processes large amounts of O2 in a relatively small mass, and has a high content of substrates available for oxidation in conjunction with low antioxidant activities, making it extremely susceptible to oxidative damage 1,2. In addition, certain regions of the central nervous system (CNS), such as the hippocampus, may be particularly sensitive to oxidative stress because of their low endogenous levels of antioxidants 3. Such a depressed defense system may be adequate under normal circumstances. However, in pro-oxidative conditions, such as seizures, these low antioxidant defenses can predispose the brain to oxidative stress.

The mechanism behind seizure-induced oxidative stress is not well understood, but several explanations have been proposed. These include excitotoxicity associated with excessive neurotransmitter release and oxidative stress leading to free radical damage 2,4. Recently, several studies have examined the role of oxidative stress in pilocarpine-induced seizures, whose underlying mechanisms are not yet fully established 3.

Na+, K+-ATPase is a crucial enzyme responsible for maintaining the ionic gradient necessary for neuronal excitability. It is present at high concentrations in brain cellular membranes, consuming about 40-50% of the ATP generated in this tissue 5. It has been demonstrated that this enzyme is susceptible to free radical attack 6. Besides, there are some reports showing that Na+, K+-ATPase activity is decreased in various chronic neurodegenerative disorders [6][7][8].

On the other hand, there is considerable evidence showing that oxidative stress is an important event in various common acute and chronic neurodegenerative pathologies 9. This is understandable since the CNS is potentially sensitive to oxidative damage due to its great oxygen consumption, high lipid content and poor antioxidant defenses 10. We have recently shown that pretreatment with lipoic acid (LA) induces alterations in antioxidant enzymatic activities in rat hippocampus, suggesting a direct effect of this antioxidant on these enzymatic activities 11.

In addition, cholinergic transmission is mainly terminated by ACh hydrolysis by the enzyme acetylcholinesterase (AChE) 12,13. This enzyme substantially contributes to synaptic transmission during seizures; thus, it is important to describe the effects of LA on this enzymatic activity. In the present study we investigated the effects of LA on AChE, glutathione peroxidase and Na+, K+-ATPase activities in rat hippocampus after pilocarpine-induced seizures.
METHOD
Adult male Wistar rats (250-280 g) maintained in a temperature-controlled room (26 ± 1 °C) with a 12-h light/dark cycle and food and water ad libitum were used. All experiments were performed according to the Guide for the Care and Use of Laboratory Animals of the US Department of Health and Human Services, Washington, DC 14. The research project was approved by the Ethics Committee of the Federal University of Piaui, Brazil (Protocol Number 038/09). The following substances were used: pilocarpine hydrochloride and alpha-lipoic acid (Sigma Chemical, USA). All doses are expressed in milligrams per kilogram and were administered in a volume of 10 ml/kg injected intraperitoneally (i.p.). In a set of experiments, the animals were divided into four groups and treated with LA (20 mg/kg, i.p., n=36) or 0.9% saline (i.p., n=36); 30 min later, they received pilocarpine hydrochloride (400 mg/kg, i.p.), and in this 30-min interval rats were observed for the occurrence of any change in behavior. These treatments represent the LA plus P400 and P400 groups, respectively. The other two groups received 0.9% saline (i.p., n=36, control group) or lipoic acid alone (20 mg/kg, i.p., n=36, LA group). After the treatments, the animals were placed in 30 cm x 30 cm chambers to record the latency to first seizure (any one of the behavioral indices typically observed after pilocarpine administration: wild running, clonus, tonus, clonic-tonic seizures) 15 and the number of animals that died after pilocarpine administration. Previous work has shown that the numbers of convulsions and deaths occurring within 1 h post-injection always follow the same pattern, so we observed the animals for 1 h, as pilocarpine-induced convulsions and deaths occur within 1 h after pilocarpine injection. The survivors were killed by decapitation and their brains dissected on ice to remove the hippocampus for determination of AChE, glutathione peroxidase and Na+, K+-ATPase activities. The pilocarpine group was constituted by those rats that presented seizures for over 30 min and did not die within 1 h.
The drug dosages were determined from dose-response studies, including pilocarpine (data not shown), and from the doses currently used in animal studies in the literature 16,17. The doses used are not equivalent to those used in humans because rats have different metabolic rates.
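Since all doses are given in mg/kg and injected in a fixed volume of 10 ml/kg, the absolute drug amount and injection volume scale linearly with body weight. A minimal sketch (the function name is ours, not from the paper):

```python
def injection_for_rat(body_weight_g, dose_mg_per_kg, volume_ml_per_kg=10.0):
    """Return (drug amount in mg, injection volume in ml) for one i.p. injection."""
    kg = body_weight_g / 1000.0
    return dose_mg_per_kg * kg, volume_ml_per_kg * kg

# a 250 g rat receiving 20 mg/kg lipoic acid in 10 ml/kg
amount_mg, volume_ml = injection_for_rat(250, 20)  # -> 5.0 mg in 2.5 ml
```

For pilocarpine at 400 mg/kg, the same rat would receive 100 mg in the same 2.5 ml injection volume.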
GPx was measured by the method described by Sinet et al. 18, using t-butyl hydroperoxide as substrate. The protein concentration was measured according to the method described by Lowry et al. 19. The results are expressed as mU per mg of protein (mU/mg protein).
Na+, K+-ATPase activity was determined by the method described by Wyse et al. 20. Released inorganic phosphate (Pi) was measured by the method of Chan et al. 21. The specific activity of the enzyme is expressed as nmol Pi released per min per mg of protein (nmol Pi/min/mg protein).
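The unit conversion behind the reported specific activity (nmol Pi released per min per mg protein) is a straightforward division; a sketch with illustrative numbers, not values from this study:

```python
def specific_activity(pi_nmol, minutes, protein_mg):
    """Na+,K+-ATPase specific activity in nmol Pi/min/mg protein."""
    return pi_nmol / minutes / protein_mg

# e.g. 600 nmol Pi released in 30 min by a sample containing 2 mg protein
sa = specific_activity(600, 30, 2)  # -> 10.0 nmol Pi/min/mg protein
```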
AChE activity was determined according to Ellman et al. Results of latency to first seizure and neurochemical alterations were compared using ANOVA with the Student-Newman-Keuls post hoc test, because these results show a parametric distribution. The numbers of animals that seized and that survived were calculated as percentages (percentage seizures and percentage survival, respectively) and compared with a nonparametric test (χ²). In all situations, statistical significance was set at p ≤ 0.05. The statistical analyses were performed with GraphPad Prism, version 3.00 for Windows (GraphPad Software, San Diego, CA, USA).
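The two statistical comparisons described above can be sketched with SciPy; the group values below are illustrative, not the study's raw data, and SciPy does not ship a Student-Newman-Keuls post hoc test (`scipy.stats.tukey_hsd` is a common substitute):

```python
from scipy import stats

# illustrative latencies to first seizure (min) for three groups
control = [34.0, 35.5, 36.1, 34.8]
p400 = [34.2, 35.9, 35.1, 35.6]
la_p400 = [78.0, 80.1, 79.5, 79.0]

# parametric comparison: one-way ANOVA across the groups
f_stat, p_anova = stats.f_oneway(control, p400, la_p400)

# nonparametric chi-square on seizure counts (seized vs. not seized)
table = [[22, 14],   # P400 group: 22 of 36 seized (~60%)
         [9, 27]]    # LA + P400 group: fewer animals seized (illustrative)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
```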
RESULTS
Animals treated with pilocarpine showed generalized tonic-clonic convulsions (60%) with status epilepticus (SE), and 60% survived the seizures. Pilocarpine induced the first seizure at 35±0.70 min. Animals were observed for 1 h after pilocarpine injection and manifested alterations in behavior such as peripheral cholinergic signs (100%), tremors (50%), staring spells, facial automatisms, wet dog shakes, rearing, and motor seizures (25%), which developed progressively within 1-2 h into long-lasting SE (25%) (Table). When administered at 20 mg/kg before pilocarpine, LA reduced by 35% the percentage of animals that seized (p<0.0001), increased the latency to the first seizure by 126% (79.15±1.05 min; p<0.0001), and increased the survival percentage by 40% (p<0.0001) compared with the pilocarpine-treated group (Table). No animal that received injections of isotonic saline (control) or LA alone showed seizure activity (Table).
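The reported percentage changes follow from the group means; for example, the 126% increase in latency can be recomputed directly from the means quoted above:

```python
def percent_change(baseline, value):
    """Percentage change of value relative to baseline."""
    return (value - baseline) / baseline * 100.0

latency_p400 = 35.0      # min, mean latency after pilocarpine alone
latency_la_p400 = 79.15  # min, with lipoic acid pretreatment

increase = percent_change(latency_p400, latency_la_p400)  # ~126%
```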
Fig 1 shows the effects of LA on glutathione peroxidase (GPx) and Na+, K+-ATPase activities in the hippocampus during seizures induced by pilocarpine. Post hoc comparison of means indicated a significant (52%) increase in hippocampal GPx activity during seizures (p<0.0003) compared with the control group. Pretreatment with LA also produced a significant increase in hippocampal GPx activity (20%; p<0.0001) compared with the P400 group. In addition, pretreatment with LA 30 min before administration of pilocarpine produced a significant increase of 81% in GPx activity (p<0.0228) compared with the corresponding values for the control group (Fig 1).
Na+, K+-ATPase activity in the hippocampus during seizures showed a significant (17%) decrease in the P400 group compared with the corresponding values for the control group (p<0.0001). However, post hoc comparison of means indicated that hippocampal Na+, K+-ATPase activity in rats pretreated with LA was not markedly altered during the acute phase of seizures (p=0.1334) compared with the control group (Fig 1).
Post hoc comparison of means indicated a significant (23%) increase in hippocampal Na+, K+-ATPase activity of rats pretreated with LA (p<0.0001) compared with the P400 group (Fig 1). Rats that received LA alone showed no alterations in GPx (p=0.8913) or Na+, K+-ATPase activities (p=0.7039) compared with the control group (Fig 1).
Fig 2 shows the effects of LA on AChE activity in the hippocampus during seizures induced by pilocarpine. Hippocampal AChE activity in the pilocarpine group was markedly decreased (63%) (p<0.0001) compared with the corresponding values for the control group. However, post hoc comparison of means indicated a significant (197%) increase in hippocampal AChE activity of rats pretreated with LA 30 min before administration of pilocarpine (LA plus P400 group) (p<0.0001) compared with the P400 group. In addition, no change in AChE activity was observed in the LA plus P400 group (p=0.0534) compared with the corresponding values for the control group (Fig 2). Moreover, AChE activity in the hippocampus of adult rats that received lipoic acid alone (LA group) was not markedly altered (p=0.9823) compared with the control group, but was significantly increased (169%) (p<0.0001) compared with the P400 group (Fig 2).
DISCUSSION
The CNS contains several antioxidant enzymes, including superoxide dismutase (SOD) and GPx, which are expressed in higher quantities than catalase 23. This spectrum of enzymatic defense suggests that the brain may efficiently metabolize superoxide but may have difficulty eliminating the hydrogen peroxide produced by this reaction 24. In the present study we examined whether pretreatment with LA can reverse the alterations in AChE, Na+, K+-ATPase, and GPx activities in the rat hippocampus caused by seizures. Generation of reactive oxygen species (ROS) is currently viewed as one of the processes through which epileptic activity exerts its deleterious effects on the brain 22. In the absence of an efficient defense mechanism, these ROS cause peroxidation of membrane polyunsaturated fatty acids 25. The brain is particularly susceptible to peroxidation due to the simultaneous presence of high levels of polyunsaturated fatty acids and iron 24, which is the target of free radical damage.
Previous studies conducted in our laboratory have shown that during seizures there are no alterations in hippocampal superoxide dismutase and catalase activities 11. Furthermore, other antioxidant systems, such as glutathione peroxidase, may be responsible for inhibiting the neurotoxicity induced by the acute phase of seizure activity. It has been demonstrated that pretreatment with LA during the acute phase of seizures induced by pilocarpine increases SOD and catalase activities 11, as well as GPx, in the rat hippocampus. The increase in antioxidant enzyme activities after pretreatment with LA is most readily explained as a necessary consequence of inhibiting the formation of free radicals during the convulsive process [26][27][28].
The LA plus P400 and P400 groups showed an increase in GPx activity. These data suggest that H2O2, which is generated during superoxide dismutation, could be sufficiently removed by GPx during seizures and after pretreatment with lipoic acid. Previous studies showed an increase in hippocampal GPx activity after seizures 26. In addition, during the convulsive process, changes in neuronal activity are accompanied by alterations in the cerebral metabolic rate 29. Considering the increased metabolic demand observed during epileptic activity, we suggest that GPx activity is modified by seizures. This finding might indicate that pretreatment with LA produces an increase in this enzymatic activity, and its compensatory mechanisms against the oxidative stress observed during seizures can explain the anticonvulsant actions of LA. The seizures induced by pilocarpine are prevented by LA, suggesting a role of free radicals in controlling seizure installation and propagation. In fact, we found that pretreatment with LA is able to inhibit pilocarpine-induced seizures. The present data thus provide evidence that free radical formation plays a relevant role in the propagation and/or maintenance of convulsive activity. As free radical formation is reduced, the increase in antioxidant enzyme activities produced by LA yields a significant decrease in susceptibility to seizures induced by pilocarpine. LA administration to convulsive animals has been shown to protect the hippocampus against oxidative stress. LA has been observed to act as an antioxidant toward hydroxyl radicals and to inhibit the oxidation of lipids and proteins 4,9. Results of animal studies have demonstrated that LA can reduce damage to neurons caused by the free radicals produced in neurodegenerative diseases.
The underlying mechanisms of brain dysfunction in seizures are poorly understood. In this regard, it has been demonstrated that elevated free radical concentrations can be highly toxic and that nitric oxide metabolites produced by the oxidative stress pathway, such as nitrite and nitrate, might contribute to this toxicity 30. It is also known that the hydroxyl radical has a synergistic effect on seizures elicited by pilocarpine.
Considering that Na+, K+-ATPase is decreased by free radical formation 6 and lipid peroxidation 31, and that the -SH groups of cell proteins are highly susceptible to oxidative stress 32, we also investigated the effects of LA on the inhibitory action of seizures on this enzyme activity. We verified that seizures significantly inhibited this enzymatic activity. On the other hand, we have shown in the present work that LA increases this enzymatic activity during seizures in the rat hippocampus 11. These observations may explain, at least in part, the neuroprotective effects of LA against the oxidative stress caused by seizures. Although the exact mechanism through which seizures inhibit Na+, K+-ATPase activity is still unknown, the present findings suggest the involvement of ROS, probably through oxidation of the SH groups of the enzyme and/or peroxidation of the membrane lipids in which the enzyme is embedded. In this context, it should be noted that LA acts directly as a thiol-reducing agent, as well as a scavenger of free radicals and lipid peroxidation products 33. In turn, LA may interact with cell membranes, trapping ROS and interrupting the chain of oxidative reactions that damage cells. Furthermore, studies in the literature show that antioxidant compounds can effectively slow the progression of neurodegenerative diseases [34][35][36].
Finally, we also evaluated the effect of LA on AChE activity in the rat hippocampus. Our results show that this enzyme activity was decreased in seized rats. To confirm these findings, we verified the effect of a single injection of LA on AChE activity. The results show that administration of LA alone did not alter this enzyme activity in the hippocampus of rats killed 1 h after pilocarpine administration. Moreover, a single injection of LA 30 min before administration of pilocarpine produced an increase in AChE activity. This increase in AChE may be due to a compensatory mechanism, with LA administration leading to up-regulation of AChE activity. The AChE activity measurements could be further supported by Western blot analysis, which did not show higher protein contents of AChE (data not shown).
Although it is difficult to extrapolate our animal model data to the human condition [37][38][39], it is tempting to speculate that the neurological symptoms observed in seizures may be related to high tissue concentrations of free radicals, which have an adverse effect on brain function through oxidative stress and inhibition of Na+, K+-ATPase and AChE activities. However, whether these or other abnormalities are the main factors responsible for the brain damage in seizures remains to be elucidated. Furthermore, future studies should be carried out to provide additional information to clarify the mechanisms of action of lipoic acid during the establishment of seizures.
ACKNOWLEDGMENTS - We would like to thank Stenio Gardel Maia for technical assistance.
Fig 1. Effect of lipoic acid in adult rats prior to pilocarpine-induced seizures on glutathione peroxidase (GPx) and Na+, K+-ATPase activities in the hippocampus of adult rats.

Fig 2. Effect of lipoic acid in adult rats prior to pilocarpine-induced seizures on acetylcholinesterase (AChE) activity in the hippocampus of adult rats.

Table. Effect of pretreatment with lipoic acid on pilocarpine-induced seizures and lethality in adult rats.
Variability of Serum Proteins in Chinese and Dutch Human Milk during Lactation
To better understand the variability of the type and level of serum proteins in human milk, the milk serum proteome of Chinese mothers during lactation was investigated using proteomic techniques and compared to the milk serum proteome of Dutch mothers. This showed that total milk serum protein concentrations in Chinese human milk decreased over a 20-week lactation period, although with variation between mothers in the rate of decrease. Variation was also found in the composition of serum proteins in both colostrum and mature milk, although immune-active proteins, enzymes, and transport proteins were the most abundant for all mothers. These three protein groups account for many of the 15 most abundant proteins, with these 15 proteins covering more than 95% of the total protein concentrations, in both the Chinese and Dutch milk serum proteome. The Dutch and Chinese milk serum proteome were also compared based on 166 common milk serum proteins, which showed that 22% of the 166 serum proteins differed in level. These differences were observed mainly in colostrum and concern several highly abundant proteins. This study also showed that protease inhibitors, which are highly correlated to immune-active proteins, are present in variable amounts in human milk and could be relevant during digestion.
Introduction
Human milk is the best source of nutrition for babies, enhances children's immune system and influences the microbiota [1][2][3]. Health benefits have been linked to the presence and concentration of human milk components like oligosaccharides and proteins [4,5]. There are two distinct groups of proteins in human milk; caseins and milk serum proteins [6]. Human milk in early lactation consists of approximately 30% caseins and 70% serum proteins, with a 50:50 ratio typically found after a six month lactation period [6].
Serum proteins in human milk have been categorized according to their main and highly diverse biological functions [7,8]. It was found that immune-related proteins, transport proteins, and enzymes were present in the largest quantities, and their concentrations generally decrease over lactation [7,8]. Immune-active proteins not only protect infants against pathogenic microorganisms, but also confer passive immunity to the neonate until its own immune system has been fully developed [9][10][11]. Serum proteins in human milk also include an array of blood coagulation proteins, membrane proteins, signaling proteins, and protease inhibitors [9][10][11]. Protease inhibitors play a key role in the blood coagulation cascade and complement pathway [12][13][14], and might protect proteins against degradation by proteases in the mammary gland and even in the infant's gastrointestinal tract [12][13][14][15][16][17][18].
There is a wide range of proteins (e.g., αS1-, β-, and κ-casein, lactoferrin, immunoglobulins, serum albumin, and α-lactalbumin) in relatively high concentrations in human milk [19]. Most milk proteins are synthesized in the mammary gland, except for immunoglobulins and serum albumin [19]. Serum albumin can enter milk via the paracellular pathway, and immunoglobulins are transported from blood through mammary epithelial cells by a receptor-mediated mechanism [19]. Caseins are transport proteins that form micelles, and these micelles are capable of binding, and thereby transporting, minerals. Caseins can easily be digested in the infant's gastrointestinal tract [15][16][17][18], being a valuable source of amino acids and minerals, which can easily be absorbed. Milk serum proteins such as lactoferrin, immunoglobulins, serum albumin, and α-lactalbumin cover 90% of the milk serum proteome in abundance [20]. The milk serum protein α-lactalbumin is required for the synthesis of lactose, supplies infants with large amounts of tryptophan, and facilitates the absorption of essential minerals [21]. Several other milk serum proteins, like lactoferrin and immunoglobulins, protect infants against pathogens and decrease the risk of having acute or chronic diseases [21,22]. Lactoferrin, a globular glycoprotein of the transferrin family, ends up in the infant's feces, and was shown to influence the microbiota composition of neonates [22]. Human milk is also a rich source of antibodies or immunoglobulins, which are able to recognize and bind to unique epitopes of pathogens, preventing their colonization [23][24][25]. Serum albumin is a protein mainly involved in the transportation of hormones, fatty acids, and other milk components [21].
Individual differences in milk serum proteins between mothers have been reported, where it was found that there was a large overlap in identified proteins in human milk among mothers, whereas there were also major quantitative changes, both between mothers and over time [7]. Given the various potential benefits of milk serum proteins, it would be of interest to obtain insights in the variability of serum proteins in human milk from mothers from other geographical and ethnic origin.
Therefore, the main objective of this study was to investigate the milk serum proteome of seven Chinese mothers and to investigate the variability in type and level of serum proteins in Chinese human milk over a 20-week lactation period using liquid chromatography and mass spectrometry (LC-MS/MS). Additionally, the type and level of serum proteins in Chinese human milk were compared to those in colostrum and mature milk from Dutch mothers.
Study Setup and Sample Collection
Chinese participants were recruited in the Hohhot region, China, between August 2014 and November 2015 by the Yili Innovation Center (Hohhot, China). Yili organized the collection of the human milk, including sampling using a human milk pump. For every time point, a volume of 10 mL was collected in polypropylene bottles. Milk bottles were shaken gently, aliquoted directly into 2 mL Eppendorf tubes, and stored at −20 °C. Milk samples from seven healthy mothers who delivered term (38-42 weeks) infants were assessed in weeks 1, 2, 4, 8, 12, and 20 postpartum. Human milk collection was approved by the Chinese Ethics Committee of Registering Clinical Trials (ChiECRCT-20150017). Written informed consent was obtained from all mothers. Milk collection and analysis of the milk of four Dutch mothers over a 24-week lactation period was described previously and was a collaboration with the Dutch Human Milk Bank (Amsterdam, The Netherlands) [7]. Healthy women who delivered singleton term infants (38-42 weeks) were eligible for that study. The data from these analyses were re-used and made compatible with the Chinese data within this research to facilitate direct comparison, as explained further in Section 2.4 (Data Analysis).
Milk Serum Preparation and Concentrations
Human milk samples (5 mL) were fractionated, as described previously [10]. Briefly, the milk fat was removed by centrifugation (10 min, 1500 g, 4 • C) and the obtained skim milk was transferred to ultracentrifuge tubes. After ultracentrifugation (90 min, 100,000 g, 4 • C), the top layer represented the remaining milk fat still present, the middle layer was milk serum (with some free soluble caseins), and the bottom layer consisted of micellar casein. The free soluble caseins are part of the milk serum proteome. A comparative study previously showed that ultracentrifugation is the most effective method to separate caseins from serum proteins [26], although it is not possible to rule out low amounts of serum proteins in the casein pellet [6]. Milk serum concentrations were measured in duplicate using the bicinchoninic acid (BCA) protein assay kit (Thermo Scientific Pierce, Massachusetts, U.S.), to ensure that the same amount of protein (10 µg) was used for further sample preparation. Bovine serum albumin was used as standard for making a BCA calibration curve.
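The BCA readout is converted to protein concentration through the BSA standard curve; a minimal sketch with illustrative standards and absorbances (the concentrations and readings below are assumptions, not values from this study):

```python
import numpy as np

# BSA standards: concentration (ug/mL) vs. absorbance at 562 nm (illustrative)
conc = np.array([0.0, 125.0, 250.0, 500.0, 1000.0, 2000.0])
a562 = np.array([0.05, 0.14, 0.23, 0.41, 0.77, 1.49])

# linear calibration: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(conc, a562, 1)

def protein_conc(absorbance):
    """Estimate protein concentration (ug/mL) by inverting the linear fit."""
    return (absorbance - intercept) / slope
```

Duplicate measurements of each sample would then be averaged before interpolating on the curve.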
Sample Preparation, Dimethyl Labeling, Protein Digestion, and Peptide Analysis
Milk serum samples were prepared for protein analysis using filter-aided sample preparation and dimethyl labeling, as described previously [27]. Milk serum (20 µL) was mixed with a buffer containing sodium dodecyl sulfate (SDS) for protein denaturation and dithiothreitol (DTT) to reduce the disulfide bridges in proteins, after which the samples were loaded on a Pall 3 K omega filter (10-20 kDa cutoff, OD003C34, Pall, Washington, U.S.) for protein digestion. The lysis buffer contained 0.1 M Tris/HCl pH 8.0 + 4% SDS + 0.1 M DTT to get a 1 µg/µL protein solution. Next, 180 µL of 0.05 M iodoacetamide/urea (0.1 M Tris/HCl pH 8 + 8 M urea) was used for protein alkylation. Samples were washed three times with 100 µL of 8 M urea, using centrifugation, followed by 110 µL of 50 mM ammonium bicarbonate (ABC). Then 0.5 µg trypsin in 100 µL ABC was added, followed by overnight incubation at room temperature while mildly shaking, and centrifuged to separate peptides from undigested material. The trypsin digested samples were then labeled, using distinct combinations of isotopic isomers of formaldehyde and cyanoborohydride, leading to a unique stable isotope composition of labeled peptide doublets with different masses [27]. After dimethyl labeling, the prepared samples were analyzed using LC-MS/MS, as described before [7]. For LC-MS/MS, a Prontosil 300-3-C18Hmagic C18AQ 200 Å analytical column was used, and the full scan FTMS spectra were measured in positive mode between m/z 380 and 1400 on a Thermo LTQ-Orbitrap XL. CID fragmented MS/MS scans of the four most abundant doubly-and triply-charged peaks in the FTMS scan were recorded in data-dependent mode in the linear trap (MS/MS threshold = 5.000).
Data Analysis
The MS/MS spectra obtained were processed by the software package Maxquant 1.3.0.5 with the Andromeda search engine, as described previously [28]. Protein identification and quantification was done according to the literature [7]. Maxquant created a decoy database consisting of reversed sequences to calculate the false discovery rate (FDR). The FDR was set to 0.01 at the peptide and protein levels. The minimum required peptide length was six amino acids, and proteins were identified based on a minimum of two distinct peptides. The intensity-based absolute quantification (iBAQ) values were selected, representing the total peak intensity as determined by Maxquant for each protein, and their values were corrected for the number of measurable peptides [7]. The iBAQ values have been reported to have a good correlation with known absolute protein amounts over at least four orders of magnitude [29]. For data normalization, iBAQ values for each protein were transformed into BCA-equivalent milk serum protein concentrations, by dividing the iBAQ value of each protein in a sample by the summed iBAQ values of all proteins within that sample; these were then multiplied by the corresponding milk serum protein concentration based on the BCA assay. To facilitate direct comparison between Chinese and Dutch data within this research, BCA equivalent values at time points 12 and 20 weeks postpartum were compared to weeks 16 and 24, respectively. Biological functions were assigned to all the serum proteins using the online UniprotKB database, as done previously [7]. To assign a specific function to multifunctional proteins, DAVID Bioinformatics Resource 6.7 was used additionally for further protein biological function classification and clarification [30].
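The iBAQ-to-BCA normalization described above amounts to scaling each protein's share of the total iBAQ signal by the BCA-measured total concentration; a sketch with invented iBAQ values:

```python
def bca_equivalents(ibaq, total_bca_g_per_l):
    """Convert per-protein iBAQ intensities into BCA-equivalent
    concentrations (g/L) that sum to the BCA total for the sample."""
    total_ibaq = sum(ibaq.values())
    return {p: v / total_ibaq * total_bca_g_per_l for p, v in ibaq.items()}

# illustrative iBAQ values for three proteins in one milk serum sample
sample = {"lactoferrin": 4.0e9, "alpha-lactalbumin": 5.0e9, "serum albumin": 1.0e9}
conc = bca_equivalents(sample, total_bca_g_per_l=20.0)
# conc sums to 20.0 g/L by construction
```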
Statistical Analysis
Statistical analysis was performed based upon previously described methods [7], with modifications. For the BCA equivalent values of each protein in Chinese and Dutch human milk over lactation, a regression line was fitted using R (Lucent Technologies, New York, NY, U.S.A.), summarizing the profile over time for each protein into an intercept and slope. The calculated intercepts are the protein BCA equivalent values at week 1, while the calculated slopes indicate the decrease or increase in BCA equivalent values per week. To determine the significant different milk serum proteins over the course of lactation per country, a comparison was made based on the calculated slope. Only BCA equivalent values of the common serum proteins found in both Chinese and Dutch human milk were used for comparison. The common serum proteins in Chinese and Dutch human milk were then evaluated based on the calculated intercept and slope using a two-tailed t-test, with a significance level set at α = 0.05. Next, these common milk serum proteins were compared in Chinese and Dutch human milk using a two-tailed t-test in Perseus [31], separately for each lactation week, with correction for multiple testing based on permutation-based FDR. The BCA equivalent values of serum proteins in Chinese and Dutch human milk were also summed per function and were then compared using a two-tailed t-test. To quantify the relation between biological function groups, Pearson correlation coefficients were calculated for summed BCA equivalent values and visualized in correlation matrix plots. Pearson correlation coefficients of >0.5 were considered good. All the serum proteins in Chinese and Dutch human milk were plotted in a graph in order to visualize the differences in serum proteins over the course of lactation.
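Fitting a per-protein regression line over lactation, as described above, reduces each protein's profile to an intercept (the week-1 level) and a slope (the change per week); a sketch with illustrative data, using `scipy.stats.linregress` in place of the authors' R code:

```python
from scipy import stats

weeks = [1, 2, 4, 8, 12, 20]
# illustrative BCA-equivalent values (g/L) for one declining protein
values = [3.9, 3.7, 3.4, 2.8, 2.3, 1.1]

fit = stats.linregress(weeks, values)
# fit.intercept approximates the colostrum (week-1) level;
# fit.slope < 0 indicates a decline over lactation
```

Intercepts and slopes fitted this way for each common protein can then be compared between the Chinese and Dutch groups with a two-tailed t-test.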
Results
The objective of this study was to investigate the variability in the type and level of serum proteins in Chinese human milk over a 20-week lactation period. For this, the milk serum proteome of seven mothers over the course of lactation was investigated using LC-MS/MS.
Level and Type of Milk Serum Proteins in Chinese Human Milk
The total milk serum protein concentrations in Chinese human milk of the seven mothers over the course of lactation are presented in Figure 1. Concentrations ranging from 12 to 25 g/L decreased significantly (α < 0.05) over a 20-week lactation period, although with large individual variations ( Figure 1).
Serum proteins in human milk were grouped based on their main biological functions (Supplementary Supporting information, data file). Not only the total protein concentrations, but also the protein composition differed among mothers and over lactation, as measured after protein digestion and subsequent LC-MS/MS analysis (Figure 2). The figure shows that immune-active proteins, transport proteins, and enzymes were the most abundant for all mothers (Figure 2). The percentage of total protein attributable to these main biological functions, however, varied widely among mothers (Figure 2). Although the BCA equivalent values were always higher in colostrum than in mature milk, the rate of decline for the three main groups varied among mothers (Figure 2).

To facilitate the comparison between Chinese and Dutch human milk, data were averaged among mothers, as shown in Figure 3. The average total BCA equivalent values in Chinese human milk for enzymes, immune-active proteins, and transport proteins ranged over 4.5-10.0 g/L, 2.9-7.8 g/L, and 2.9-5.0 g/L, respectively (Figure 3).
Comparison of the Chinese and Dutch Milk Serum Proteomes
The type and level of serum proteins in Chinese human milk were also compared to those in Dutch human milk. The raw data on Dutch human milk were reprocessed to be compatible with the Chinese data. The total BCA milk serum protein concentrations in Dutch human milk per mother and over the course of lactation are available as supplementary information ( Figure S1). The total BCA equivalent values in Dutch human milk decreased over a 24-week lactation period from 21.6 to 13.6 g/L ( Figure S2). Enzymes, immune-active proteins, and transport proteins were also the most abundant in Dutch human milk over the course of lactation ( Figure S2). The BCA equivalent values for the groups enzymes, immune-active proteins, and transport proteins in Dutch human milk ranged over 4.5-9.0 g/L, 3.8-5.6 g/L, and 4.8-6.8 g/L, respectively. Although different patterns in Chinese and Dutch human milk can be observed, the difference was not significant between the same group of biological functions (data not shown), except for cell and signaling, where levels were higher in Chinese human milk.
The relations between the levels of different biological function groups of serum proteins within the Chinese and within the Dutch human milk populations were visualized in a correlation matrix plot (Figure 4).
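The correlation matrix behind Figure 4 can be reproduced in outline with NumPy; the per-time-point sums below are invented, and a Pearson coefficient above 0.5 is the threshold the authors considered good:

```python
import numpy as np

# illustrative summed BCA-equivalent values (g/L) per time point
immune    = [7.8, 6.5, 5.2, 4.1, 3.5, 2.9]
enzymes   = [10.0, 8.9, 7.4, 6.2, 5.3, 4.5]
transport = [5.0, 4.6, 4.1, 3.6, 3.2, 2.9]

# rows are variables; r[i, j] is the Pearson coefficient between groups i and j
r = np.corrcoef([immune, enzymes, transport])
good = r[0, 1] > 0.5  # immune-active vs. enzymes
```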
Comparison of the Chinese and Dutch Milk Serum Proteomes
The type and level of serum proteins in Chinese human milk were also compared to those in Dutch human milk. The raw data on Dutch human milk were reprocessed to be compatible with the Chinese data. The total BCA milk serum protein concentrations in Dutch human milk per mother and over the course of lactation are available as supplementary information ( Figure S1). The total BCA equivalent values in Dutch human milk decreased over a 24-week lactation period from 21.6 to 13.6 g/L ( Figure S2). Enzymes, immune-active proteins, and transport proteins were also the most abundant in Dutch human milk over the course of lactation ( Figure S2). The BCA equivalent values for the groups enzymes, immune-active proteins, and transport proteins in Dutch human milk ranged over 4.5-9.0 g/L, 3.8-5.6 g/L, and 4.8-6.8 g/L, respectively. Although different patterns in Chinese and Dutch human milk can be observed, the difference was not significant between the same group of biological functions (data not shown), except for cell and signaling, where levels were higher in Chinese human milk.
The relations between the levels of different biological function groups of serum proteins within the Chinese and within the Dutch human milk populations were visualized in a correlation matrix plot (Figure 4).
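The correlation analysis described above can be sketched in a few lines of Python; the group levels below are invented stand-ins for per-sample BCA-equivalent values (g/L), not the study's data:

```python
import numpy as np

# Hypothetical BCA-equivalent levels (g/L) of three functional groups,
# one value per milk sample (invented for illustration).
enzymes   = np.array([4.5, 5.0, 6.2, 7.1, 8.0, 9.0])
immune    = np.array([3.8, 4.0, 4.6, 5.0, 5.3, 5.6])
transport = np.array([6.8, 6.0, 5.9, 5.4, 5.0, 4.8])

levels = np.column_stack([enzymes, immune, transport])
corr = np.corrcoef(levels, rowvar=False)  # 3x3 Pearson correlation matrix
```

Plotting `corr` as a heat map yields a matrix plot of the kind shown in Figure 4.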
Individual Milk Serum Proteins
Totals of 469 and 200 serum proteins were measured in Chinese and Dutch human milk, respectively. The milk serum proteomes of different Chinese and Dutch mothers were compared based on 166 common milk serum proteins. The overall 15 most abundant milk serum proteins can be found in Table 1.
In Dutch human milk, α1-antichymotrypsin belongs to the top 15 serum proteins instead of the transport protein fatty acid-binding protein (Table 1). Within the group enzymes, the highly abundant α-lactalbumin and bile salt-activated lipase are mainly responsible for the changes in this group in human milk over the course of lactation (Table 1). Many immune-active proteins, like lactoferrin, osteopontin, different types of immunoglobulins, polymeric immunoglobulin receptor, and clusterin, belong to the most abundant serum proteins in human milk (Table 1). The changes within the group of transport proteins over the course of lactation can mainly be explained by the caseins (Table 1). The caseins in Table 1 probably refer to the free, non-micellar casein, as the micellar casein should have been removed during the sample preparation. With the majority of the caseins in milk being part of the micellar fraction, the caseins in Table 1 therefore do not reflect the levels of total casein. The differences in protein patterns between Chinese and Dutch human milk were examined by comparison of both the intercept (representing colostrum) and slope (representing the decline over lactation) of curves fitted for the 166 common milk serum proteins. The p-values for these differences after using a two-tailed t-test are shown in Figure 5.
Figure 5. For each serum protein in Chinese and Dutch human milk over the course of lactation, a regression line was fitted, summarizing the profile for each protein into an intercept (representing week 1) and slope (representing rate of change over lactation). These profiles were used for comparison between Chinese and Dutch human milk, and the p-values for differences between them were plotted. (A) Significantly different proteins in Chinese and Dutch human milk over the course of lactation, based on difference in slope; (B) significantly different proteins in Chinese and Dutch human milk at week 1, based on intercept; and (C) no significant difference.
The levels of two serum proteins (elongation factor 2 and myristoylated alanine-rich c-kinase substrate) varied in the Chinese and Dutch human milk over the course of lactation, as shown by the significantly different slope (Figure 5, area A). Furthermore, the levels of 35 serum proteins varied in intercept (Figure 5, area B), including several proteins from the top 15 (Table 1), as shown in green. The complete list of significantly different serum proteins in Chinese and Dutch human milk is shown in Table 2, grouped according to their biological function. The levels of the 166 common milk serum proteins in the Chinese and Dutch populations that increased or decreased over the course of lactation can be found as supporting information (Table S1). The levels of 17 (10%) and 21 (12%) of the 166 common milk serum proteins changed over the course of lactation in Chinese and Dutch human milk, respectively. In addition, the 166 common serum proteins were compared between Chinese and Dutch human milk for each week separately (Table S2). This showed that 16 of 17 proteins that significantly differed in week 1 also differed significantly in one or more of the other weeks.
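The per-protein comparison described above (fit a regression line per protein, then compare intercepts and slopes between populations) can be illustrated as follows. The abundance values and group sizes are invented for illustration, and a p-value would additionally require the t distribution (e.g. from scipy.stats):

```python
import numpy as np

weeks = np.array([1, 2, 4, 8, 12, 20])  # sampling weeks used in the study

# Hypothetical abundances of one protein, one row per mother per population.
chinese = np.array([[2.0, 1.9, 1.7, 1.4, 1.2, 0.9],
                    [2.2, 2.0, 1.8, 1.5, 1.3, 1.0]])
dutch   = np.array([[1.6, 1.5, 1.4, 1.2, 1.1, 0.9],
                    [1.7, 1.6, 1.4, 1.3, 1.1, 1.0]])

def profile(y):
    """Reduce one lactation curve to (intercept, slope) via least squares."""
    slope, intercept = np.polyfit(weeks, y, 1)
    return intercept, slope

cn = np.array([profile(y) for y in chinese])  # columns: intercept, slope
nl = np.array([profile(y) for y in dutch])

def welch_t(a, b):
    """Two-sample Welch t statistic (p-value would need scipy.stats)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t_intercept = welch_t(cn[:, 0], nl[:, 0])  # week-1 level difference
t_slope = welch_t(cn[:, 1], nl[:, 1])      # rate-of-change difference
```

With these invented curves, the Chinese profiles start higher (positive `t_intercept`) and decline faster (negative `t_slope`), the two kinds of difference separated in Figure 5 as areas B and A.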
The Level and Type of Serum Proteins in Chinese Human Milk
The total protein concentrations decrease significantly over a 20-week lactation period in each mother, although with individual variations (Figure 1). These milk serum protein concentrations match those observed in earlier studies, ranging from 12 to 25 g/L [7,[32][33][34], although other studies report lower values from 7 to 16 g/L over the course of lactation [3,24,35,36]. These differences may be explained by the BCA method, which generally overestimates the total protein in human milk by about 25-40% [37,38]. The serum protein levels in this study should thus be regarded as semi-quantitative, although this did not influence the comparisons reported here, as they are all based on the BCA method. Although the protein content seems high for milk serum, it should be taken into account that the samples with the highest protein content are actually those in early lactation. These samples are known to have higher protein and relatively lower casein contents [6], leading to higher milk serum protein contents. In addition, part of the casein remained in the sample after sample preparation and therefore also counted towards the BCA protein content.
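As a rough illustration of the quoted 25-40% overestimation, a BCA reading can be bracketed by dividing out the assumed bias (illustrative arithmetic only, not a validated calibration):

```python
# Bracket the true protein level implied by a BCA reading, assuming the
# reported 25-40% overestimation (illustrative, not a calibration).
def true_range(bca_g_per_l, over_low=0.25, over_high=0.40):
    return bca_g_per_l / (1 + over_high), bca_g_per_l / (1 + over_low)

lo, hi = true_range(21.6)  # colostrum-like BCA reading (g/L)
```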
As described previously [5], human milk becomes fully mature between 4 and 6 weeks postpartum, with the amounts of bioactive components decreasing relative to the nutrients. In early life, infants have an immature intestinal immune system, making them more vulnerable to infection by opportunistic pathogens [5]. The high levels of immune-related milk serum proteins in colostrum ( Figure 3) may provide protection to the infant in this sensitive stage of development.
It was also observed that a large variability exists in the milk serum protein composition of colostrum among Chinese mothers (Figure 2). The results of this study, comprising milk from seven mothers, show that immune-active proteins, enzymes, and transport proteins are highly abundant in Chinese human milk (Figure 3), which can also be observed from the individual data of mothers (Figure 2). Earlier studies had already shown that immune-active proteins, enzymes, and transport proteins were present in the largest quantities over the course of lactation [7,9,11].
The 15 Most Abundant Milk Serum Proteins
The large quantities of immune-active proteins are especially driven by the abundance of lactoferrin, immunoglobulins, polymeric immunoglobulin receptor, clusterin, osteopontin and β2-microglobulin (Table 1), which may protect infants against pathogenic microorganisms, and confer passive immunity to the neonate until its own immune system has been developed [9][10][11]. As shown in Table 1, transport proteins, like free soluble caseins, serum albumin, and fatty acid-binding protein, were present in large quantities during lactation. Free soluble caseins could not be removed from the milk, unlike the micellar casein that can be pelleted by ultracentrifugation, a phenomenon that has also been reported by others [7,19,24]. Free soluble and micellar caseins belong to the most abundant proteins in human milk, and these proteins mainly supply infants with amino acids and minerals needed for their growth [23][24][25]. It can also be observed from Table 1 that enzymes are the largest group of proteins across lactation. The large quantities of enzymes in human milk can be explained by the presence of α-lactalbumin, which is known to be the most abundant milk serum protein (Table 1). This protein is required for the synthesis of lactose, the main macronutrient in milk [5,21]. It should be noted that α-lactalbumin does not have enzymatic activity on its own. Besides α-lactalbumin, bile salt-activated lipase belongs to the 15 most important enzymes in Chinese and Dutch human milk during lactation (Table 1). Bile salt-activated lipase supports the digestion of fats in the immature infant digestive tract, and facilitates the absorption of cholesterol, vitamin A, and triacylglycerols [7]. The protease inhibitor α1-antichymotrypsin is also among the 15 most abundant human milk serum proteins, and, like other protease inhibitors and proteases, might play a key role in the digestion of human milk [12][13][14].
Overall, the 15 most abundant proteins identified in this study dominated the milk serum composition, together covering more than 95% of both the Chinese and Dutch milk serum proteomes.
Proteases and Protease Inhibitors
Proteases may play a key role in the digestion of human milk. Although trypsin was the most abundant protease in Chinese and Dutch human milk, many other proteases (e.g., cytosol aminopeptidase, elastase, kallikrein, plasmin, cathepsins) were found, albeit to a lesser extent (Supplementary Information, data file). As described by others, proteases might be present in human milk to hydrolyze proteins in the mammary gland to regulate casein micelle size [14,15]. Protein digestion in human milk by proteases targets specific proteins (e.g., caseins, polymeric immunoglobulin receptor, osteopontin) that do not have an extensive tertiary structure and are thus more accessible to proteolytic cleavage [16,18]. These proteins were, in this study, part of the overall 15 most abundant proteins in Chinese and Dutch human milk during lactation (Table 1). In particular, the caseins are well digested [16][17][18], which indicates that proteases and bile salt-activated lipase in human milk aid overall in the digestion of two of its main macronutrients, fats and proteins [19].
Besides proteases, human milk also contains protease inhibitors. The ratio between protease inhibitors and proteases in colostrum is circa 10:1. The most abundant protease inhibitors were α1-antichymotrypsin, α1-antitrypsin, cystatin C, and phosphatidylethanolamine-binding protein (Supplementary Information, data file). As described by others, α1-antichymotrypsin binds to chymotrypsin and other chymotrypsin-like serine proteases in human milk, while α1-antitrypsin inhibits proteases, such as trypsin, elastase, plasmin, and thrombin, and irreversibly deactivates trypsin in vitro [12][13][14][15]. A correlation was found between protease inhibitors and immune-active proteins in Chinese and Dutch human milk (Figure 4). Previous literature focused specifically on the relation between serine protease inhibitors and immunoglobulins [7], a relation that in our data also showed stronger correlations than those between all protease inhibitors and all immune-active proteins (Figure S3). A correlation higher than 0.7 was also found in both Chinese and Dutch milk between proteases and protease inhibitors specifically (data not shown). A previous study presented an overview of the proteolytic system network in human milk [15], which consists of several proteases, protease inhibitors, and blood coagulation proteins, indicating that these protein groups share a common biochemical pathway; this may explain their correlations.
Where some of the major proteins are partially digested by milk proteases in human milk, most immune-active proteins are less sensitive to digestion by these proteases, due to their compactly folded globular structure [16]. For these immune-active proteins to have an immune-activating role in the small intestine, they must be protected against intestinal digestion, because they are sensitive to chymotrypsin and trypsin [17,18]. That might be the reason why protease inhibitors present in human milk seem to target intestinal enzymes, specifically blocking trypsin, chymotrypsin, and other proteases [17,18], especially through the relatively abundant α1-antichymotrypsin and α1-antitrypsin. Overall, protease inhibitors may thus ensure that specific proteins stay intact in the infant's digestive tract. This may also explain previous findings that several immune-active proteins (e.g., lactoferrin, lysozyme, immunoglobulins) and protease inhibitors (e.g., α1-antichymotrypsin, α1-antitrypsin) can be found intact in the stool of breastfed infants [17,18]. The intact proteins in the infant's stool may also be related to the simultaneous decrease in the content of immune-active proteins and protease inhibitors over lactation. Protection is less necessary later in lactation due to the development of the infant's immune system and digestive tract over time, while digestion becomes important for the release of nutrients later in lactation.
Comparison of High-and Low-Abundance Serum Proteins in Chinese and Dutch Human Milk
It appears that the milk serum proteomes of Chinese and Dutch mothers are similar (Figure 3 and Figure S2). The main purpose of this study was to evaluate the common serum proteins in Chinese and Dutch human milk over the course of lactation. Totals of 469 and 200 serum proteins were found in Chinese and Dutch human milk, respectively. Although a lower number of serum proteins was identified in Dutch human milk, there was still an overlap of 166 serum proteins with Chinese human milk, which represents more than 95% of the milk serum proteome in terms of concentration. The reason for the higher number of serum proteins found in Chinese human milk might be the larger sample size (48 versus 24 human milk samples), which generally leads to more identified proteins [28].
In total, 22% (37 out of 166) of the common serum proteins in human milk differed between Chinese and Dutch mothers either at week 1 or over the course of lactation. The levels of 35 of the 166 (circa 21%) common serum proteins varied between Chinese and Dutch mothers in week 1 ( Figure 5, area B). This, together with the results presented in Table 2 and Table S2, indicates that the differences between Chinese and Dutch human milk serum proteins were mainly in their level throughout lactation, and not in their changes over lactation, as the levels of only 2 of the 166 (circa 1%) common serum proteins identified in this study (myristoylated alanine-rich c-kinase substrate and elongation factor 2) differed over the course of lactation ( Figure 5, area A, showing difference in slope). Overall, the main differences in the milk serum proteomes between Chinese and Dutch human milk were observed in the level of individual proteins, and not in rate of changes over lactation.
Conclusions
The milk serum proteomes of Chinese and Dutch mothers were similar in terms of the relative abundance of different functional groups as well as the most abundant proteins. Some quantitative differences were found, especially in absolute levels rather than in rates of change over lactation. Human milk contains enzymes that can assist the digestion of milk proteins and lipids in the immature infant's digestive tract. Protease inhibitors, which are highly correlated to the immune-active proteins, are present in variable amounts in human milk; they could be relevant during digestion and might be involved in controlling protein breakdown in the infant's intestinal tract.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/11/3/499/s1, Figure S1: Total BCA serum protein concentrations (g/L) in Dutch human milk per mother over a 24-week lactation period. Raw data from Dutch human milk were re-used [7]; Figure S2: BCA equivalent values (g/L) of serum proteins in human milk of 4 Dutch mothers categorized per biological function over a 24-week lactation period. Raw data from Dutch human milk were re-used [7]; Figure S3: Correlations between the functional groups consisting of protease inhibitors (including serine and non-serine protease inhibitors) and immune-active proteins (including immunoglobulins and non-immunoglobulins) in Chinese human milk, using BCA equivalent values (g/L) over a 20-week lactation period; Table S1: Significantly different serum proteins in Chinese and Dutch human milk over the course of lactation, based on the BCA equivalent values (g/L) over lactation (slope); Table S2: Serum proteins that were significantly different in at least one of the lactation weeks. Numbers are the p-value for the difference between the Chinese human milk serum proteins and Dutch human milk serum proteins. To facilitate direct comparison between Chinese and Dutch data within this research, the time points 12 and 20 weeks postpartum were compared to week 16 and 24, respectively; Supporting Information, data file: Serum proteins in human milk of Chinese mothers over a 20-week lactation period. The columns described in the next tab are the individual proteins, their functions and their iBAQ values averaged for all mothers at weeks 1, 2, 4, 8, 12, and 20 postpartum.
|
2019-03-11T17:23:11.771Z
|
2019-02-27T00:00:00.000
|
{
"year": 2019,
"sha1": "9f7d6abcdc48a883e0110082f80c19353cf24528",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/nu11030499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f7d6abcdc48a883e0110082f80c19353cf24528",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
118933970
|
pes2o/s2orc
|
v3-fos-license
|
Energy and centrality dependences of charged multiplicity density in relativistic nuclear collisions
Using a hadron and string cascade model, JPCIAE, the energy and centrality dependences of charged particle pseudorapidity density in relativistic nuclear collisions were studied. Within the framework of this model, both the relativistic $p+\bar p$ experimental data and the PHOBOS and PHENIX $Au+Au$ data at $\sqrt s_{nn}$=130 GeV could be reproduced fairly well without retuning the model parameters. The predictions for full RHIC energy $Au+Au$ collisions and for $Pb+Pb$ collisions at the ALICE energy were given. Participant nucleon distributions were calculated based on different methods. It was found that the number of participant nucleons, $<N_{part}>$, is not a well defined variable both experimentally and theoretically. Therefore, it is inappropriate to use charged particle pseudorapidity density per participant pair as a function of $<N_{part}>$ for distinguishing various theoretical models.
The main focus of the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) is to explore the phase transition related to the quark deconfinement and the chiral symmetry restoration. The first available experimental data were the energy dependence of charged particle pseudorapidity density in Au + Au collisions at √ s nn =56 and 130 GeV from the PHOBOS collaboration [1]. After that, the PHENIX collaboration published their data of centrality dependence of the charged particle pseudorapidity density in Au + Au collisions at √ s nn =130 GeV [2]. It has been predicted that the rare high charged multiplicity could indicate the onset of the Quark-Gluon-Plasma (QGP) phase, since the extra entropy in the QGP phase could manifest itself as a huge number of produced particles in the final state [3,4,5]. On the other hand, in [6] the centrality dependence of charged multiplicity has been proposed to provide information on the relative importance of soft versus hard processes in particle production and therefore provide a means of distinguishing various theoretical models for particle production.
The pQCD calculation with the assumption of gluon saturation [7] (referred to as EKRT model later) was first used to study the centrality dependence of the charged particle pseudorapidity density at RHIC. In [6] the HIJING model met with success in describing both the energy and centrality dependence of the charged particle pseudorapidity density. The conventional eikonal approach and the high density QCD (referred to as KN model later) [8] were also used to investigate the centrality dependence, and both methods surprisingly obtained almost identical centrality dependence. Recently, authors in [9] reported their results from the Dual Parton Model. It was found that the experimental observation, the charged particle pseudorapidity density per participant pair slightly increasing with < N part >, was reproduced by [6,8,9], but contradicted the results of [7].
In this letter a hadron and string cascade model, JPCIAE [10], was employed to study this issue further. Within the framework of this model the experimentally measured energy dependence of the charged particle mid-pseudorapidity density per participant pair both in relativistic p +p and Au + Au collisions at RHIC was reproduced fairly well without retuning the model parameters. The predictions for the full RHIC energy Au + Au collisions and for P b + P b collisions at the ALICE energy were also given. In studying centrality dependence the focus was put on the calculations of < N part >, its definition and uncertainty. Both the PHENIX [2] and the PHOBOS [11] observations that the charged particle mid-pseudorapidity density per participant pair slightly increases with < N part > could be reproduced fairly well by JPCIAE. However, this study indicated that it is not suitable to use the charged particle mid-pseudorapidity density per participant pair as a function of < N part > to constrain theoretical models for particle production, because < N part > is not a well defined physical variable both experimentally and theoretically.
The JPCIAE model was developed based on PYTHIA [12]. In the JPCIAE model the nucleons in a colliding nucleus are distributed randomly in the sphere of the nucleus with a radius of 1.12A^{1/3} fm. The moduli of the nucleon position vectors are sampled from the Woods-Saxon distribution and their solid angles are sampled uniformly in 4π. Each nucleon is given a beam momentum in z direction and zero initial momentum in x and y directions.
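The initialization described above can be sketched as a rejection sampler. Only the radius prescription R = 1.12·A^{1/3} fm comes from the text; the surface-diffuseness value `a` and the cutoff radius are assumptions of this sketch, not JPCIAE parameters:

```python
import math, random

def sample_nucleon_positions(A, R=None, a=0.54, seed=1):
    """Sample A nucleon positions with radii drawn from a Woods-Saxon
    density rho(r) ~ 1 / (1 + exp((r - R)/a)) and isotropic angles,
    via rejection sampling on the radial weight r^2 * rho(r)."""
    rng = random.Random(seed)
    R = R if R is not None else 1.12 * A ** (1.0 / 3.0)  # fm, as in the text
    rmax = R + 10 * a  # assumed cutoff; density is negligible beyond it
    positions = []
    while len(positions) < A:
        r = rng.uniform(0, rmax)
        w = r * r / (1 + math.exp((r - R) / a))  # r^2 * rho(r), w <= rmax^2
        if rng.uniform(0, rmax * rmax) < w:      # accept with prob w / rmax^2
            cos_t = rng.uniform(-1, 1)           # isotropic solid angle
            phi = rng.uniform(0, 2 * math.pi)
            sin_t = math.sqrt(1 - cos_t * cos_t)
            positions.append((r * sin_t * math.cos(phi),
                              r * sin_t * math.sin(phi),
                              r * cos_t))
    return positions

gold = sample_nucleon_positions(197)  # Au nucleus
```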
After the construction of the initial particle list, the collision time of each colliding pair is calculated under the requirement that the least approaching distance of the colliding pair along their Newton straight-line trajectory should be smaller than √(σ_tot/π), where σ_tot refers to the total cross section. The nucleon-nucleon collision with the least collision time is then selected from the initial collision list to perform the first collision. After the first collision, both the particle list and the collision list are updated, and now the collision list may consist of not only nucleon-nucleon collisions, but also collisions between produced particles and the nucleons and between produced particles themselves. The next collision is selected from the new collision list. The processes proceed until the collision list is empty.
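The geometric selection rule, closest-approach distance below √(σ_tot/π) combined with picking the least collision time, can be illustrated as follows (a toy sketch, not the JPCIAE code; the pairs are hypothetical (time, distance) tuples):

```python
import math

def earliest_collision(pairs, sigma_tot):
    """From candidate pairs given as (collision_time, closest_approach)
    tuples, keep those whose closest-approach distance satisfies
    d < sqrt(sigma_tot / pi) -- the geometric interpretation of the total
    cross section -- and return the one with the least collision time.
    sigma_tot and distances share consistent units (e.g. fm^2 and fm)."""
    d_max = math.sqrt(sigma_tot / math.pi)
    allowed = [p for p in pairs if p[1] < d_max]
    return min(allowed, key=lambda p: p[0]) if allowed else None

# 4 fm^2 = 40 mb gives d_max ~ 1.13 fm; the pair at d = 1.5 fm is rejected.
nxt = earliest_collision([(0.5, 1.5), (1.2, 0.8), (0.9, 1.0)], 4.0)
```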
For each collision pair, if its CMS energy is larger than a given cut, we assume that strings are formed after the collision and PYTHIA is used to deal with particle production.
Otherwise, the collision is treated as a two-body collision [13,14,15]. The cut (=4 GeV in the program) was chosen by observing that JPCIAE correctly reproduces charged multiplicity distributions in AA collisions.
It should be noted here that the JPCIAE model is not a simple superposition of nucleon-nucleon collisions since the rescatterings among participant and spectator nucleons and produced particles are taken into account. We refer to [10] for more details of the JPCIAE model.
[Fig. 1 (a) caption fragment: HIJING model (dashed curve with jet quenching) [6] and EKRT model (dotted curve) [7].] The data of both p + p̄ and A + A collisions at relativistic energies were reproduced fairly well by the JPCIAE model without retuning model parameters. Fig. 1 (b) is the same as (a), but the vertical coordinate here is the charged particle pseudorapidity density itself.
The JPCIAE model predictions for full RHIC energy Au + Au collisions and for P b + P b collisions at the ALICE energy in both panels may supply a benchmark for QGP formation since the QGP phase is not included in the JPCIAE model.
Since number of participant nucleons, < N part >, plays a crucial role in the presentation of PHOBOS or PHENIX centrality dependence data we first make a study on < N part >.
In fixed target experiments the number of participant nucleons from the projectile nucleus with atomic number A, for instance, is estimated from N_part^A = A (1 − E_ZDC / E_beam^kin), where E_ZDC refers to the energy deposited in the Zero Degree Calorimeter, dominated by projectile spectator nucleons, and E_beam^kin is the kinetic energy of the beam [16]. However, in the collider experiments, in order to obtain < N part > one has to relate the measurables to Monte Carlo simulations. In PHENIX, for instance, simulations of the response of the Beam-Beam Counter and the ZDC were used to calculate < N part > via a Glauber model [2]. In PHOBOS < N part > is derived by relating HIJING simulations to the signals in the paddle counter [11]. On the theory side, < N part > could first be calculated geometrically (referred to as method a later) [17], giving the number of participant nucleons from the projectile nucleus, for instance. Second, in the Glauber model N_part is calculated as (referred to as method b later) [7] N_part^A(b) = ∫ d²s T_A(s) {1 − [1 − σ_in T_B(s − b)/B]^B}, where σ_in ≈ 40 mb is the inelastic nn cross section at RHIC and T_A refers to the nuclear thickness function of nucleus A. The third method is to count the participant or the spectator nucleons in the simulation of nuclear collisions and then to average over simulated events.
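The counting-based third method can be illustrated with a minimal wounded-nucleon counter in the Monte Carlo Glauber spirit (a toy sketch with hypothetical transverse positions, not any of the cited codes):

```python
import math

def count_participants(posA, posB, b, sigma_in):
    """Count wounded nucleons: a nucleon participates if at least one
    nucleon of the other nucleus passes within d = sqrt(sigma_in/pi) in
    the transverse plane. posA/posB are (x, y) transverse coordinates in
    fm; b is the impact parameter along x; sigma_in in fm^2."""
    d2 = sigma_in / math.pi
    shifted = [(x + b, y) for (x, y) in posB]
    hitA, hitB = set(), set()
    for i, (xa, ya) in enumerate(posA):
        for j, (xb, yb) in enumerate(shifted):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < d2:
                hitA.add(i)
                hitB.add(j)
    return len(hitA) + len(hitB)
```

With full nuclei sampled event by event, averaging this count over events gives the Monte Carlo estimate of < N part > at a given centrality.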
However, there is great variety among simulation programs and even among the definitions of the participants and spectators. FRITIOF 7.02 (referred to as method d later) [18] was popularly employed in the past. In FRITIOF the wounded nucleons, i.e., nucleons which suffer at least one inelastic collision, are counted and identified as < N part >. It should be pointed out here that in FRITIOF leading nucleons undergo multiple scatterings and get excited (forming strings) during collisions, but produced particles from the string fragmentation do not have rescattering. Unlike JPCIAE, FRITIOF is not a transport model; there are no space-time coordinates associated with each particle. In JPCIAE simulations we have devised three counting methods. First, the leading nucleons involved in at least one inelastic nucleon-nucleon collision with string excitation are counted and identified as < N part >. This is called method c. It should be mentioned that the JPCIAE results in Fig. 1 were calculated by < N part > from method c. Second, in method e, spectator nucleons are counted at the final state of JPCIAE simulations without rescattering (i.e., only nucleon-nucleon collisions are included), and < N part > is then calculated through < N part > = A + B − < N spec >, where A and B refer to the atomic numbers of the target and projectile nuclei. Third, method f is the same as method e but the JPCIAE simulations are with rescattering. The difference among methods c, e, and f is that those nucleons which only experience two-body nucleon-nucleon collisions (without string formation) are included into < N part > in method e, while in method f even those nucleons which suffer collisions with other produced particles are also included into < N part >. In other words, in methods e and f the nucleons that are knocked out of the colliding nuclei by the produced particles are included into < N part >.
In emulsion chamber experiments, such nucleons (protons) are usually called 'grey tracks'. One sees from Fig. 2 (a) that, except for method f, < N part > from the different methods are close to each other for most central collisions (the 10%−15% difference should contribute to the systematic error of the experimentally extracted < N part >), but the discrepancies among them increase with decreasing centrality in general. It is surprising that the results of the geometry method are closest to the results of PHENIX or PHOBOS. In Fig. 2 (b) the ratios of < N part > from methods a, b, c, d, e, and f to the corresponding results of PHENIX are given.
The charged particle pseudorapidity density at mid-pseudorapidity in Au + Au collisions at √s_nn = 130 GeV as a function of the percentage of the geometrical cross section is given in Fig. 3. In panel (a) of Fig. 4 we compare the PHENIX data of the charged particle mid-pseudorapidity density per participant pair (full circles with shaded area of systematic errors) [2] with the JPCIAE model (full triangles, < N part > from method c) and with the HIJING model (dotted curve), KN model (solid curve), and EKRT model (dashed curve). One can see that, except for EKRT, all the other three models predict an increase of (dn_ch/dη|_{η=0})/(0.5 < N part >) as a function of < N part >, though the theoretical results seem to underestimate the PHENIX data. Fig. 4 (b) compares the PHENIX data to the JPCIAE results with < N part > calculated by method a (thin dotted curve), b (thin dashed curve), c (thin solid curve), d (thick dotted curve), and e (thick dashed curve), respectively. One sees from this panel that, starting from a single result of charged particle mid-pseudorapidity density from the JPCIAE model but using < N part > from different methods, it is possible to lead (dn_ch/dη|_{η=0})/(0.5 < N part >) to either increase or decrease with the increase of < N part >. Although < N part > from method a is closest to the PHENIX results (cf. Fig. 2 (b)), the (dn_ch/dη|_{η=0})/(0.5 < N part >) from JPCIAE actually has a centrality dependence opposite to the PHENIX result (cf. Fig. 4 (b)), because the dn_ch/dη|_{η=0} from JPCIAE is higher than the PHENIX result for peripheral collisions (cf. Fig. 3). On the other hand, even though the discrepancy between < N part > from method c and PHENIX slightly increases with decreasing centrality (cf. Fig. 2 (b)), the (dn_ch/dη|_{η=0})/(0.5 < N part >) from JPCIAE with method c is close to the PHENIX data.
If dn_ch/dη|_{η=0} from the EKRT model were normalized not by < N part > from method b, as was done in [7], but by < N part > from method d, the results of (dn_ch/dη|_{η=0})/(0.5 < N part >) might have a somewhat similar centrality dependence as the PHENIX data. Therefore one learns here that it is hard to use (dn_ch/dη|_{η=0})/(0.5 < N part >) as a function of < N part > to distinguish various theoretical models for particle production, since < N part > is not a well defined physical variable.
In summary, we used a hadron and string cascade model, JPCIAE, to investigate the energy and centrality dependences of charged particle pseudorapidity density at mid-pseudorapidity in relativistic p + p̄ and A + A collisions. Both the relativistic p + p̄ experimental data and the PHOBOS and PHENIX data of Au + Au collisions at RHIC could be reproduced fairly well within the framework of the JPCIAE model without retuning any parameter. The JPCIAE model predictions for full RHIC energy Au + Au collisions and for Pb + Pb collisions at the ALICE energy are also given. This study shows that, since < N part > is not a well defined physical variable both experimentally and theoretically, it may be hard to use the charged particle pseudorapidity density per participant pair at mid-pseudorapidity as a function of < N part > to distinguish various theoretical models for particle production.
All-inside suture device is superior to meniscal arrows in meniscal repair: a prospective randomized multicenter clinical trial with 2-year follow-up
Purpose Multiple techniques and implants are available for all-inside meniscal repair, but knowledge about their failure rates and functional outcomes is still incomplete. The hypothesis was that there might be differences between meniscal arrows and suture devices regarding reoperation rates and functional outcome. The aim of this study was therefore to compare clinical results following repair with Biofix® arrows or FasT-Fix® suture devices. Methods In this RCT, 46 patients were treated with either Biofix® (n = 21) or FasT-Fix® (n = 25). The main outcome was reoperation within 2 years. Knee function and activity level were evaluated by KOOS and the Tegner activity scale. Results Twelve out of 46 (26 %) patients were reoperated within 2 years, nine out of 21 (43 %) in the Biofix®-group versus three out of 25 (12 %) in the FasT-Fix®-group (p = 0.018). The relative risk of reoperation was 3.6 times higher for Biofix® compared to FasT-Fix® (95 % confidence interval 1.1–11.5). Both treatment groups had a significant increase in all KOOS subscales, but there were no major differences between the groups. The subgroup of reoperated patients differed from the other patients with a higher Tegner score preoperatively (median 5 vs. 4) (p = 0.037) and at 3-month follow-up (median 4 vs. 3) (p = 0.010). Conclusions These results indicate that the FasT-Fix® suture is superior to Biofix® arrows, with a significantly lower failure rate. Functional outcome did not depend on repair technique. The higher activity scores preoperatively and at 3-month follow-up in the reoperated patients indicate that activity level may influence the risk of reoperation. Level of evidence I.
of patients, or have included a range of different all-inside devices [9], and several studies have short observational times only [1,2,9]. The first-generation device, the Biofix® meniscal arrow, has been reported with good results, with 91 % healing within 4 months [2] and success rates above 90 % at 2-3-year follow-up [8,12,18]. Similarly, the second-generation all-inside FasT-Fix® devices are reported with good functional results at 2 years [10] and up to 90 % healing success at 18 months [13]. In laboratory studies, however, the Biofix® meniscal arrow has been shown to have lower pullout strength than the FasT-Fix® suture [4,27], which has biomechanical properties comparable to inside-out vertical mattress sutures [23].
Thus, the background knowledge for which all-inside technique to choose has been limited and inconclusive. Still, the use of suture devices has increased at the expense of meniscal arrows, based on the assumption that suture devices would provide higher healing rates. In our opinion, there was a need for randomized controlled trials (RCTs) comparing different all-inside techniques. The aim of this RCT was therefore to compare the survival rates and the functional results within 2 years following all-inside meniscal repair using either the Biofix® meniscal arrow or the FasT-Fix® suture. Our working hypothesis was that there might be differences between biodegradable meniscal arrows and suture devices regarding reoperation rates and functional outcome.
Materials and methods
During 2006-2010, 46 patients with vertical longitudinal meniscal tears eligible for arthroscopic all-inside meniscal repair, enrolled at Martina Hansens Hospital (39 patients) and Trondheim University Hospital (seven patients), were included in this prospective randomized double-blinded study. The patients were block randomized (blocks of ten) to arthroscopic meniscal repair with either Biofix® or FasT-Fix® all-inside devices using the "envelope method", and they were blinded to the treatment choice. The post-operative rehabilitation program was identical in the two treatment groups.
Blinded observers performed post-operative follow-ups after 6 weeks, 3 and 6 months, and 1 and 2 years. Symptoms (e.g. pain, stiffness, locking) and clinical findings (e.g. range of motion, swelling) and potential complications were recorded. All patients that dropped out at 2-year follow-up were interviewed by telephone to verify the reoperation status. The flowchart of patients is shown in Fig. 1.
The main endpoint of the study was reoperation within 2 years as a consequence of complaints due to rerupture or impaired primary healing. Reoperations were recorded when patients had recurrent symptoms of meniscal lesions (e.g. pain, clicking, locking) with clinical indication for reoperation, and the reoperation led to total or partial resection of the incident meniscus tear.
Secondary endpoints were knee function and activity level measured by the Knee Injury and Osteoarthritis Outcome Score (KOOS) [21] and the Tegner activity scale [24]. KOOS is validated for patients with meniscal tears and osteoarthritis and consists of 42 questions in five categories: pain (Pain), other symptoms (Symptoms), activities of daily life (ADL), sport and recreation (Sport) and quality of life (QOL) [21,22]. Scores in each subscale are transformed to a 0-100 scale, where zero represents extreme knee problems and 100 represents no knee problems [21]. A difference of 8-10 points is regarded as a clinically relevant difference [22]. The Tegner activity scale is graded from 1 through 10, according to the patient's self-assessed level of activity. Ten points refer to pivoting sports (soccer) at international level, five points refer to heavy activities like chopping wood, and one point refers to easy house cleaning.
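The 0-100 transformation described above can be written out explicitly. This sketch assumes the standard KOOS scoring rule (each item scored 0-4, with the subscale score computed as 100 minus the rescaled item mean), which the text does not spell out.

```python
# Sketch of the standard KOOS subscale transformation (assumption: each item
# is scored 0-4, with 0 = no problems on that item). The raw item mean is
# rescaled so that 100 = no knee problems and 0 = extreme knee problems.
def koos_subscale(item_scores):
    """Transform raw KOOS items (each 0-4) to a 0-100 subscale score."""
    mean_raw = sum(item_scores) / len(item_scores)
    return 100.0 - (mean_raw * 100.0) / 4.0

print(koos_subscale([0, 0, 0, 0]))  # no problems -> 100.0
print(koos_subscale([4, 4, 4, 4]))  # extreme problems -> 0.0
```

With this convention, the clinically relevant difference of 8-10 points corresponds to a shift of roughly one-third of a response category per item.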
This study was approved by the Regional Ethical Committee for South Eastern Norway (Registration Number 1.2005.2304) and has been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All patients gave their informed consent prior to their inclusion in the study.
Subjects and interventions
Inclusion criteria were patients aged 18-40 years with an MRI-verified vertical, longitudinal meniscal tear, 10-40 mm long, located in the peripheral or middle third of the meniscus, with a preserved central bucket handle eligible for reduction and repair with an all-inside technique. The patients had no conflicting comorbidity, drug addiction or psychiatric conditions affecting surgery or the post-operative rehabilitation regime. Exclusion criteria were focal cartilage lesions or osteoarthritis grade 3-4 according to the revised International Cartilage Repair System (ICRS) classification [5] in an area larger than 1 cm² in the affected knee, and ligament tears other than tears of the anterior cruciate ligament (ACL) and/or grade 1 tears of the medial collateral ligament (MCL).
The surgery was performed according to standard procedures for knee arthroscopy. When the indications for repair were confirmed, the randomization envelopes were opened. The meniscal tears were debrided with a diamond rasp, and using the instruments adapted to each device, the implants were inserted on both surfaces of the menisci, seeking adequate adaptation and compression of the tear surfaces.
Post-operatively, the patients were instructed to use crutches with partial weight bearing (within 20 kg load) the first 6 weeks post-operatively with unlimited joint range of motion. Flexion up to 90° with concomitant weight bearing was allowed from 6 to 12 weeks post-operatively. After 12 weeks, the patients were allowed to return to sport activities. Patients with ACL tears had concomitant ACL reconstruction or were stabilized in a brace until ACL reconstruction was performed.
Statistical analysis
The frequency of reoperation after meniscal repair with all-inside devices is reported ranging from 57 to 91 % [2,13,26]. In this study, a difference in reoperation rate greater than 10 % was considered a clinically important difference. With a power of 0.90, a level of significance of 0.05, and a suggested dropout rate of 20 % at 2 years, the plan was to include 120 patients (60 in each group). However, we chose to interpret our own data at the inclusion of 46 patients, since new information from other studies revealed favourable results using suture techniques compared to meniscal arrows [7,10]. Based on the results of our own preliminary data, we found it unethical to continue the recruitment of patients. Thus, the total number of patients is 46. Statistical analyses were performed with IBM SPSS Statistics® (v.21). The value p < 0.05 was considered statistically significant and p < 0.01 was considered highly significant. Comparisons between groups were performed using the Chi-square test for categorical data and using the Student t test or the Mann-Whitney U test for continuous data, depending on whether normality could be assumed. Due to low sample sizes, KOOS and Tegner scores are presented with median and range values, although tests showed normality. Logistic regression analysis was performed with reoperation status as the dependent variable and gender, method for meniscal repair and comorbidity in the same knee as independent variables. Time to reoperation was estimated by the Kaplan-Meier method, and differences between the treatment groups were compared using the log-rank test. Both intention-to-treat and per-protocol analyses were performed.
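The Kaplan-Meier estimate underlying the survival curves can be computed by hand. The sketch below uses invented event times (the trial's individual times to reoperation are not reported here); an event of 1 means reoperation and 0 means censoring.

```python
# Minimal Kaplan-Meier estimator (illustrative; the times/events below are
# invented, not the trial data). events: 1 = reoperation, 0 = censored.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time, with S(t) the product of (1 - d/n)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]
        d = sum(at_t)               # events at time t
        if d > 0:
            survival *= 1.0 - d / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(at_t)      # events and censored leave the risk set
        i += len(at_t)
    return curve

print(kaplan_meier([0.4, 1.1, 1.8, 2.0], [1, 1, 1, 0]))
```

The log-rank test then compares the observed event counts in each group against those expected under a shared survival curve.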
Results
The baseline data for the two intervention groups were similar (Table 1). Forty-six patients (26 men, 20 women) with median age 25.7 years (range 18.7-40.0) were reviewed 2 years after meniscal repair with either Biofix ® arrows or FasT-Fix ® sutures. Twelve out of 46 patients (26 %) underwent reoperation within 2 years; nine out of 21 (43 %) patients in the Biofix ® -group and three out of 25 (12 %) patients in the FasT-Fix ® -group (p = 0.018). The relative risk of reoperation was 3.6 times higher for patients in the Biofix ® -group compared to the FasT-Fix ® -group (95 % confidence interval 1.1-11.5). The survival curves for the two repair techniques are shown in Fig. 2.
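The reported relative risk and its confidence interval can be reproduced from the 2×2 counts above using the standard log-RR normal approximation:

```python
# Relative risk and 95% CI recomputed from the reported counts
# (9/21 reoperations with Biofix vs 3/25 with FasT-Fix).
import math

a, n1 = 9, 21   # reoperations / total, Biofix group
b, n2 = 3, 25   # reoperations / total, FasT-Fix group

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")  # matches the reported 3.6 (1.1-11.5)
```

The wide interval reflects the small event counts, which is why the trial's primary comparison rests on the chi-square test rather than the precision of the RR estimate.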
The median time from operation to reoperation was 1.1 years (range 0.4-1.8) in the Biofix®-group. For both treatment groups, there was a clinically relevant and statistically highly significant (p < 0.01) increase in all KOOS subscales from baseline to 2-year follow-up (Fig. 3). Comparing the KOOS profiles of the two treatment groups at 2-year follow-up with the profile of a slightly younger reference population [16] revealed that the patients' KOOS at 2 years was similar to that of the reference population [16] (Fig. 4). Between the treatment groups, there were no major differences in any KOOS subscale or Tegner score at any follow-up (Table 2).
Comparing the patients who were reoperated within 2 years with those who were not revealed that the reoperated patients had a higher Tegner activity score preoperatively (median 5 vs. 4; range 3-9 vs. 1-9) (p = 0.037) and at 3-month follow-up (median 4 vs. 3; range 2-9 in both groups) (p = 0.010). Except for the higher Tegner score, these patients did not differ from the other patients regarding baseline data or the other variables at follow-up.
Among the 46 patients included in this RCT, one patient in the FasT-Fix ® -group suffered from a purulent arthritis caused by Staphylococcus aureus diagnosed 2 weeks postoperatively. He was treated successfully with surgical lavage where the implants were left in place and with intravenous and oral antibiotics. Another patient was diagnosed with deep vein thrombosis and pulmonary embolus post-operatively and was treated with thrombolytic medication for 6 months. None of these patients underwent further reoperations. There were no other major complications, and there were no complications related to the implants.
Discussion
The main finding of this prospective randomized study was a 3.6 times higher risk of reoperation within 2 years following meniscal repair with Biofix® meniscal arrows compared to repair with FasT-Fix® meniscal sutures. The reoperation rates are consistent with earlier studies [7,10,13] and support today's clinical practice, in which biodegradable arrows are used less frequently.
Logistic regression analysis demonstrated that neither comorbidity in the same knee, nor age, nor gender influenced the risk of reoperation. Only the choice of meniscal repair device was found to have an impact on the outcome. The post-operative regimes and rehabilitation were identical in the two groups, and the surgical techniques were similar except for the procedures directly related to the implant. Hence, the difference in risk of reoperation seems to depend on the implant only. Biomechanical laboratory studies have shown that Biofix® meniscal arrows have lower pullout strength and less flexibility over the repair than FasT-Fix® meniscal sutures in cadaver knees [4,27]. In addition, Biofix® is biodegradable and FasT-Fix® is not. It is possible that the combination of lower pullout strength, less flexibility and biodegradation of the implants leads to lower stability and thereby a higher risk of failure in the Biofix®-group than in the FasT-Fix®-group. It is also possible that the FasT-Fix® devices tend to keep the reduced meniscal fragments anatomically in place over a longer period of time than the Biofix® arrows, which may reduce or delay the re-onset of symptoms even if the meniscal tear has not healed.
Another finding was that patients 2 years after meniscal repair, using either method, had knee function assessed by KOOS similar to that of a reference population [16]. The study was underpowered to explore possible between-group differences in patients' function assessed by KOOS. Thus, no statistically significant differences were revealed within 2 years post-operatively, except for the KOOS subscale Pain at 6 months, where patients operated with Biofix® scored better than patients operated with FasT-Fix® (p = 0.041) (Table 2). The mean difference was, however, only 10 points, which is recognized as a borderline value for what is considered clinically relevant [22]. This result should therefore be interpreted with care. However, we also found that Biofix®-patients had a significantly higher Tegner score at 3 and 6 months compared to the FasT-Fix®-patients (p < 0.01 and p = 0.046, respectively) (Table 2). Neither should this be emphasized heavily, but one explanation may be that the FasT-Fix® sutures give rise to irritation in the joint capsule due to traction of the anchors, and thereby pain and a reduced activity level, whereas the Biofix® arrows do not, since they are mainly located inside the menisci.
The post-operative rehabilitation program was equal for all included patients except for the five who went through simultaneous ACL reconstruction and the two who later decided not to go through reconstruction. These seven patients, however, were equally distributed in the two treatment groups. In the literature, there is no consensus concerning rehabilitation regimes following meniscal repair [3], but during the last decades, the trend is towards less restrictive regimes. The patients in this study were instructed to use crutches with partial weight bearing the first 6 weeks post-operatively but had no restrictions in range of motion without weight bearing. A recent study shows, however, that avoiding weight bearing after repair with FasT-Fix ® meniscal suture is unnecessary [25], and a laboratory study concludes that even high flexion is safe, but only when performed in closed-chain exercises [14].
The patients in this study were not allowed to return to sports activities until 12 weeks post-operatively. In spite of this, the group of patients that later went through reoperation had a median Tegner activity score of 4 at 3 months, compared to a median score of 3 in the rest of the patients. A Tegner score of 4 corresponds to activities like moderately heavy labour, cycling, cross-country skiing and jogging [24]. These activities are not recommended for the first 12 weeks post-operatively. It is therefore reasonable to assume that some patients returned to knee-demanding activities earlier than prescribed and that this may have contributed to failure of healing of the meniscal repairs. However, these numbers should be interpreted with care as well, since in both groups the scores showed a wide range (2-9). Moreover, the subgroup of patients later being reoperated had a higher median Tegner activity score at baseline compared to those who were not reoperated, again, however, with wide ranges (3-9 and 1-9, respectively). It might be that individuals with a higher physical activity level have higher demands for knee function. They may therefore have a lower threshold for re-consulting the surgeon if meniscal symptoms recur, which may result in a reoperation. Patients with lower knee-demanding activity levels may tolerate recurrence of meniscal symptoms better and therefore avoid reoperations. Nevertheless, these findings support the idea of a more restrictive return to sports activities, and we suggest that these arguments should be emphasized even more strongly for more active individuals.
One strength of the current study was the design. The trial was prospective, randomized and double blinded; neither the patients nor the follow-up examiners knew which device had been used. The time to final follow-up was longer than in previous studies [2,13,26], and the main endpoint, reoperation, is absolute and assumed to be more accurate than a diagnosis of rerupture assessed by, e.g., MRI, which has been shown to have low reliability [6,20]. For the main endpoint, reoperation within 2 years, there were no missing data.
Knee function and activity level, measured by KOOS and the Tegner activity scale, were the secondary outcomes of this study. KOOS is a widely used self-assessment questionnaire. The score is valid and reliable for patients with different knee injuries, including meniscal tears and ACL tears [22]. In this study, the Tegner activity scale was also presented as a self-assessment questionnaire, for which it is not validated. This excludes researcher bias but, on the other hand, opens the possibility that patients may score themselves at improperly higher or lower activity levels, depending on characteristics like personality, self-image and ambition level.
The study has the following limitations: the number of patients was relatively low, the inclusion period was quite long, there were several participating surgeons, and the distribution of patients from the two hospitals was rather skewed. The low number of patients was the result of stopping inclusion for ethical reasons, with considerably fewer patients participating than originally planned. However, the difference in failure rate between the two groups turned out to be much higher than estimated in the power analysis performed prior to starting the study. Thus, the power for revealing a statistically significant and clinically relevant difference in the main outcome between the treatment groups turned out to be large enough. Whether the secondary outcomes (knee function and activity level) would have shown statistically significant differences given a larger number of patients remains unknown.
None of the patients had MRI examinations at 2-year follow-up. MRI might have given some additional information on the status of meniscal healing in those patients not being reoperated. On the other hand, there are pitfalls in using MRI as a criterion for meniscal healing, since the overall sensitivity and specificity for diagnosing a meniscal tear on conventional MRI are reported to be as low as 79 and 88 % [6]. Following meniscal repair, MRI diagnostics is even more challenging, and false positive tears may still be reported after healing of the meniscus [6,20].
Despite its limitations, this study verifies that meniscal repair with an all-inside suture device is superior to meniscal arrows regarding the reoperation rate during the first 2 years post-operatively. The 3.6 times higher risk of reoperation following repair with arrows will presumably have an important impact on the choice of methods for meniscal repair in clinical practice. Longer-term follow-up studies are necessary to determine the influence of the different methods on degenerative changes over time.
Saving meniscal tissue is important [17], and all-inside meniscal repair has benefits compared to outside-in and inside-out techniques [1]. Therefore, all-inside techniques are often used in clinical practice, and the technique and device seem to be important. The results of this study may contribute primarily to better-informed choices among the wide range of devices available today and secondarily to the design of new devices in the future. Thus, this study has high clinical relevance.
Conclusions
This study showed a significantly lower rate of reoperations following meniscal repair with the FasT-Fix® all-inside suture compared to Biofix® all-inside meniscal arrows. Functional outcome was not dependent upon repair technique, but the higher Tegner scores preoperatively and at 3 months post-operatively in the group of patients being reoperated at a later stage imply that activity level may influence the risk of reoperation. This study strongly advocates the use of a suture device instead of meniscal arrows in all-inside meniscal repair. It also suggests a restrictive activity level within the first 3 months of rehabilitation.
Activatable cell–biomaterial interfacing with photo-caged peptides
We report an effective strategy to design activatable cell–material interfacing systems enabling photo-modulated cellular entry of cargoes and cell adhesion towards surfaces.
Introduction
The recent explosive growth of research in the field of nanotechnology has provided a wide range of novel materials and strategies for biomedicine, including important advances in bioimaging, drug delivery, photothermal/photodynamic therapy, and gene transfection. [1][2][3][4][5][6][7] However, further translation beyond basic research is heavily hampered by the inefficient performance of nanomaterials in biological environments. Amongst the main hurdles, the hydrophobic nature of the lipid bilayer of the plasma membrane renders it impermeable to most polar, hydrophilic molecules including peptides, proteins, oligonucleotides, drugs and nanomaterials that lack specific membrane receptors or transport mechanisms. Cell-penetrating peptides (CPPs) such as the HIV transactivator of transcription (TAT) peptide and the arginine oligomer protein-transduction domain have been widely used as transport vector tools for the cellular import of a variety of cargos (e.g., nanomaterials and biomolecules) through the cell membrane. 8,9 In addition, new cell-penetrating transporters have been developed such as synthetic peptides, 10,11 helical poly(arginine) mimics, 12 antibiotics, 13 poly(disulfide)s 14,15 and guanidinium-containing synthetic polymers. 16,17 In spite of their potential, the inherent non-specificity of CPPs has restricted their application in targeted delivery systems. An alternative strategy to selectively enhance cell-material interfacing lies in the design of smart systems that are triggered by external stimuli, or by specific features of the target cell or local tissue physiology. For example, the development of acid-activated CPPs takes advantage of the acidic tumour extracellular environment 18,19 for tumour-targeted drug delivery. [20][21][22] Among these, pH-induced charge reversal and pH-sensitive transmembrane insertion of a low-pH insertion peptide have also been explored as novel strategies to increase the efficacy of biomaterial administration.
19 Similarly, the design of hydrogen peroxide (H2O2)- and matrix metalloproteinase-2 (MMP2)-responsive cellular delivery systems has been reported. 23,24 Key attributes of such smart delivery systems are their ability to control the cellular entry of biomolecules/nanomaterials and their release on demand, to minimize side effects, and to improve the therapeutic efficacy of pharmaceuticals.
In this work, we developed a stimuli-responsive cell-material interfacing system enabling the spatial and temporal control of cellular delivery and cell attachment via photo-activation. Photo-stimulation of biomaterials is advantageous since it can be manipulated precisely, enabling fine control over irradiated volumes. [25][26][27][28][29][30][31][32][33] In the presence of photo-stimulation, it is possible to rapidly increase the concentration of the active form of molecules with strict control over the illuminated area, time, and dosage. [34][35][36][37][38] In our system, we synthesized a series of photo-caged peptide ligands consisting of a cell penetrating sequence, a blocking sequence, and a photo-cleavable linker (Scheme 1), and conjugated them to various nanomaterials or surfaces. The cell-material interaction is effectively suppressed when there is no light activation, resulting in minimal cargo uptake. Upon photo-irradiation, the linker is cleaved to remove the blocking peptide, resulting in activation of the cell penetrating peptide and increased cellular uptake. This phenomenon was demonstrated with a wide range of cargos including small and large molecules, organic and inorganic nanoparticles, and self-assembled soft materials.
Results and discussion
As shown in Fig. 1a, we synthesised a photo-labile ligand consisting of hepta-arginine (R7), hepta-glutamic acid (E7) and a photo-linker. The positive R7 serves as the cell penetrating sequence capable of delivering cargos to cells either by direct membrane translocation or by promoting endocytosis. 39 The proximal glutamic acid-rich sequence (E7) blocks the oligoarginine by electrostatic attraction, forming a U-shaped antifouling zwitterionic ligand. 40 The cell penetrating peptide (R7) and the blocking sequence (E7) are connected with a photo-cleavable linker, which can be cleaved at the position of the o-nitrophenyl group to yield two peptide chains terminated with ketone and amide groups (Fig. 1a). We monitored the photo-cleavage reaction with UV-vis spectroscopy over a period of 15 min (Fig. 1b). Without UV light irradiation, there are two absorption peaks at approximately 305 nm and 350 nm. Upon UV irradiation, the absorbance at 305 nm decreased and the peak intensity at 350 nm increased, while a shoulder peak emerged at 370-400 nm. Since arginine and glutamic acid do not have an absorption peak between 300 nm and 400 nm, these spectral changes originate from the photo-cleavage of the o-nitrophenyl group. The isosbestic point at 317 nm indicates no side reactions or decomposition in the process of photo-irradiation. The reaction reached a photostationary state after 10 min, as the absorbance at 390 nm approached a plateau (Fig. 1c). Because the electrostatic interactions between hepta-glutamic acid (E7) and hepta-arginine (R7) are strong due to multiple charge pairs, it is important to determine whether the negative blocking peptide will electrostatically bind to the cell penetrating peptide even when the covalent spacer is cleaved. To test this phenomenon, we developed a FRET system using a fluorescein isothiocyanate-labelled, thiolated peptide (HS-R7E7-FITC) and citrate-coated gold nanoparticles (∼40 nm) (Fig. S1† and 1d).
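The approach to a photostationary state can often be described by first-order kinetics. As a sketch of how the plateau at 390 nm could be quantified, the following fits synthetic absorbance readings (invented values, not the paper's data) to A(t) = A∞(1 − e^(−kt)) with a crude grid search over the rate constant k.

```python
# Illustrative first-order model for the approach to a photostationary state:
# A(t) = A_inf * (1 - exp(-k * t)). The "readings" below are synthetic points
# generated from k = 0.4 per min; they are not the paper's measurements.
import math

def model(t, a_inf, k):
    return a_inf * (1.0 - math.exp(-k * t))

t_obs = [0, 2, 4, 6, 8, 10, 15]                 # minutes of irradiation
a_obs = [model(t, 0.50, 0.4) for t in t_obs]     # synthetic A390 readings

# Crude grid search for k, with a_inf fixed at the observed plateau value.
best_k = min(
    (sum((model(t, 0.50, k) - a) ** 2 for t, a in zip(t_obs, a_obs)), k)
    for k in (i / 100 for i in range(1, 201))
)[1]
print(f"fitted k = {best_k:.2f} per min")
```

In practice a nonlinear least-squares fit (fitting A∞ and k jointly) would replace the grid search, but the model form is the same.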
The FITC-appended peptide was covalently bound to AuNPs via the Au-thiol bond, and the close proximity between FITC and AuNPs enabled efficient energy transfer (ET) from FITC to AuNPs, resulting in quenched FITC fluorescence. Upon exposure to UV light, FITC fluorescence increased with irradiation time (Fig. 1e), suggesting that the FITC-conjugated blocking peptide (E7) was released from the hepta-arginine sequence following cleavage of the nitrophenyl group.

Scheme 1 (a) Schematic of light-activated cellular uptake of biomolecules and nanomaterials. (b) Light-responsive delivery systems are functionalised with a synthetic ligand consisting of a negative blocking sequence, a light-cleavable sequence, and a cell penetrating peptide (CPP).
Since the blocking peptide could be removed with 365 nm UV light, we anticipated that the bioactivity of the CPP (R7) could be enhanced through controlled light irradiation. To test this idea, we synthesised a fluorescein isothiocyanate-labelled photo-activatable peptide (FITC-R7E7) with FITC attached to the N-terminus (Fig. S1† and 2a). Peptide solutions at different concentrations (100, 50, 20 and 10 mM) were either exposed to light or left unexposed and incubated with MDA-MB-231 (MDA) cells. After 24 h, we evaluated the level of internalized fluorophores by measuring the fluorescence intensity of cell samples. We found that photo-irradiation enhanced the cellular uptake by 1.1- to 3-fold for FITC-R7E7 concentrations between 10 and 100 mM, respectively (Fig. 2b). Fluorescence activated cell sorting (FACS) confirmed that cells incubated with the activated peptide showed a higher fluorescence intensity than those incubated with the native construct (Fig. 2c). In addition, confocal microscopy showed the subcellular localization of the fluorophore, by staining the cell membrane with WGA (wheat germ agglutinin) and the nucleus with DAPI (4′,6-diamidino-2-phenylindole, dihydrochloride). MDA cells incubated with the non-activated peptides exhibited a weak and scattered green fluorescence signal (Fig. 2d-f, arrowheads), whereas cells incubated with light-activated FITC-R7E7 (Fig. 2g-i) exhibited high penetration of the fluorophore, with a preferential perinuclear localization (co-localization with DAPI; see orthogonal views in Fig. 2h and i). In a control experiment, we demonstrated that light irradiation did not cause any obvious fluorescence changes in the FITC-labelled peptide (Fig. S2†). Similarly, we demonstrated that light irradiation enhanced cellular internalization of the fluorophore in HeLa cells (Fig. S3†).
These results show that although neutral peptide-bound FITC can be taken up by the cells to some degree, light-triggered cleavage of the blocking sequence significantly increases the quantity that translocates into MDA cells.
We further applied this strategy to control the intracellular delivery of large biomolecules. We used avidin, a di/tetrameric biotin-binding protein, 66-69 kDa in size, as a model macromolecule to test the possibility of light-activated protein delivery. To this end, we conjugated FITC-labelled avidin with 4 equivalents of the biotinylated photo-labile ligand (biotin-R7E7) via biotin-avidin affinity and incubated it with MDA cells (Fig. S1†). After 24 h, cell fluorescence increased by up to 200% for FITC-avidin concentrations of 2.5 and 1.25 mM (Fig. S4†), highlighting the feasibility of this strategy for delivering full-length proteins.
We subsequently applied the light-activated peptide to control the intracellular entry of inorganic nanoparticles. As an example, we coupled the cleavable peptide ligand to semiconducting CdSe/ZnS QDs. Compared to organic fluorophores, inorganic QDs possess a number of superior properties such as bright emission, high stability, reduced photobleaching and ease of surface functionalization. 41 For example, the biotinylated photo-cleavable ligand was attached to streptavidin-coated QDs (5 nm in diameter, λem = 605 nm) via streptavidin-biotin affinity interactions (Fig. 3a and b). As shown in Fig. 3c, light irradiation increased the cellular uptake of QDs to 1015% and 573% when using 40 and 20 nM QDs, respectively. This increase in fluorescence is not attributed to enhanced QD emission after light irradiation (Fig. S5†). It is noteworthy that the light-triggered increase in QD uptake was strikingly higher than that of FITC-avidin as shown in Fig. S4,† which is ascribed to the high number of biotin-binding sites on QDs. According to the vendor, there are 5-10 covalently bound streptavidins per QD, permitting the attachment of 20-40 biotin-R7E7 molecules on a single QD. QD uptake by MDA cells was examined with confocal microscopy (Fig. 3d-i), where non-activated QDs were poorly taken up (Fig. 3d-f, arrowheads), and strong uptake was observed for cells incubated with the light-activated QD-peptide (Fig. 3g-i). To confirm the functionality of our system in a biologically relevant environment, we incubated the QD-R7E7 complexes with MDA cells and irradiated them in situ. The results were consistent with our previous observations, where FACS indicated a strong fluorescence signal per cell (Fig. S6†), indicating increased uptake of the activated QDs.
There was, however, a notable difference in the uptake pattern between activated QD/biotin-R7E7 and activated FITC-R7E7: cells showed a punctate and mainly cytosolic distribution of QDs, in contrast with the preferentially nuclear distribution of FITC. The reasons behind this are unclear, but it is reasonable to hypothesize that differences in cargo size and CPP concentration could affect whether the QDs are primarily taken up via endocytosis or direct membrane translocation. 39 Our light-cleavable ligands have broad applicability for improving intracellular uptake of a variety of nanomaterials, which can be bound to the peptide using well-established conjugation chemistry (Table 1). For example, carboxylated polystyrene particles modified with an aminated ligand (H2N-R7E7, Fig. S1 †) displayed enhanced cellular uptake after light activation (Fig. S7 †). Branched Au nanostars with a localised surface plasmon resonance peak at 820 nm, in the infrared range (the so-called "biological window"), were internalized with greater efficiency following light irradiation (Fig. S8 †); the Au nanostars were conjugated via thiolated ligands (HS-R7E7). Similarly, we demonstrated light-enhanced delivery of lipid vesicles (~200 nm in size) by conjugating HS-R7E7 via the thiol-maleimide reaction (Fig. S9 †). In all cases, light-triggered cleavage of the nitrophenyl group and subsequent removal of the blocking peptide greatly promoted the cellular uptake of the nanomaterials.
Light-activated cellular uptake strategies enable targeted delivery of nanomaterials to diseased tissue, which could minimize off-target side effects and potentially decrease the dosage administered to a patient. In this work, we prepared phospholipid-coated poly(lactic-co-glycolic acid) (PLGA) nanoparticles as a drug carrier for camptothecin, a cytotoxic alkaloid employed in cancer therapy whose poor water solubility limits its efficacy. 42 PLGA particles are attractive polymer-based delivery systems because of their biocompatibility, efficient encapsulation of hydrophobic materials, and tunable drug release properties. We synthesized PLGA particles via emulsion/solvent evaporation methods, using lecithin and DSPE-PEG-NHS as surfactants. We then conjugated a light-activatable peptide ligand to the PLGA particles by reacting an aminated peptide (H2N-R7E7) with DSPE-PEG-NHS. As shown in Fig. 4a, c and d, exposing drug-loaded polymer particles to 365 nm UV light resulted in enhanced MDA cell death, as demonstrated by the statistically significant decrease in alamarBlue® reduction activity. A similar trend was observed for HeLa cells (Fig. 4b, e and f); the difference in the magnitude of the effect likely arises from differential entry propensities and drug sensitivities of the two cell lines.
We further demonstrated the potential for activating cell-nanomaterial interactions with two-photon excitation, in which a high-power pulsed laser with a very short pulse is applied in confocal microscopy, offering the potential for deep-tissue manipulation. The high photon density in the focused region enhances the probability that a chemical group/molecule absorbs two photons quasi-simultaneously. To this end, we incorporated an o-nitrobenzyl ether moiety into the peptide chain between the blocking sequence and the cell-penetrating sequence (Fig. 5a and S1 †) and attached this peptide to the surface of streptavidin-coated CdSe/ZnS QDs via biotin-streptavidin affinity. Compared to the amide bond (Fig. 1a), the ester bond is more sensitive to light irradiation and gives a higher quantum yield of photocleavage. It was previously demonstrated that the irreversible photocleavage of an o-nitrobenzyl ether moiety into nitroso- and acid-terminated by-products via two-photon irradiation can be used to dynamically control the properties of 3D hydrogels with photocleavable crosslinkers. 43 Here, we incubated the peptide-modified QDs (20 nM) with HeLa cells encapsulated in a 3D matrix of 8-arm polyethylene glycol (PEG) hydrogels (Fig. 5b). We prepared the hydrogels with 8% w/v 8-arm PEG acrylate, 5 mM RGD peptide (CGGRGDSP), and 6 mM PEG dithiol crosslinker, where the RGD peptides were used to promote cell viability and adhesion to the hydrogel network. To precisely locate viable cells within the 3D hydrogel, HeLa cells expressing green fluorescent protein (GFP) were used. As shown in Fig. 5c-h, two-photon excitation (740 nm) was applied within precisely defined 3-dimensional regions of the hydrogel, resulting in increased intracellular red fluorescence and indicating a significant enhancement of QD uptake into HeLa cells.
The application of CPPs is usually hampered by their poor selectivity towards target cells. We demonstrated enhanced uptake of photo-activated biomaterial-conjugated CPPs, revealing their great potential for controlling delivery of small molecule drugs, biomolecules and nanomaterials. Uptake can be controlled both spatially and temporally, potentially reducing side effects of toxic drugs and increasing the effectiveness of therapies. Moreover, stabilising and neutralising positively charged CPP sequences with anionic peptides until controlled photo-activation occurs could enhance the circulation time in the blood stream. By using precisely controlled and deep tissue penetrating two-photon excitation, we believe that this system will be applicable to in vivo studies providing 3D resolution for biomaterial delivery.
Switchable biological surfaces have important applications in cell-based diagnostics and tissue engineering. 44,45 The photo-triggered conversion from a neutral zwitterionic ligand to a highly positive peptide can be exploited to construct a photoresponsive surface with tuneable cell-binding capacity. Hydrophilic zwitterionic peptides have been established as effective antifouling agents, 46 whereas a positive surface (e.g., polylysine) often favours cell attachment. Here, we conjugated HS-R7E7 to an Au surface (Fig. 6a), and significant increases in cell attachment to the light-activated surfaces were observed compared to non-activated surfaces (Fig. 6b-k). Moreover, cells plated on the photo-activated Au surface could spread and remain attached, as shown by their spread morphology and the relative increase in cell density after 24 h, whereas those on non-activated surfaces could not, as indicated by their rounded shape and low cell density. Therefore, our strategy can be used to controllably switch the surface properties of a material towards supporting cell attachment and growth, which could be exploited to prepare patterned biological surfaces with photomasks that regulate cell patterning and migration. 44,47,48
Conclusions
In conclusion, we report a general strategy to develop photo-switchable cell-biomaterial interfaces with a new class of photo-labile ligands conjugated onto different biomolecules (e.g., FITC and avidin) and nanomaterials (e.g., quantum dots, polystyrene particles, Au nanostars, and liposomes). We demonstrated enhanced cellular uptake of cargoes by light activation of up to 9-fold, as shown for QDs. We also applied our system to demonstrate controllable cancer drug delivery and showed the possibility to engineer smart biointerfaces with tuneable cell attachment/growth properties controlled by light irradiation. The photo-reaction is highly cytocompatible (Fig. S10 †), facilitating a wide range of new technologies to regulate cell-material interactions, control cell attachment, migration and culture in 3D scaffolds, and pave the way towards novel and exciting translatable applications. In particular, incorporating NIR-responsive crosslinkers (Fig. 5) 33,43 and employing spatiotemporally controlled two-photon excitation will address the issues of low penetration depth and the absorption of UV light in biological tissues.

Materials

obtained from IRIS Biotech GmbH. A Qdot 605 ITK Streptavidin Conjugate Kit was obtained from Life Technologies (U.K.). Carboxylate-modified polystyrene nanoparticles (Latex beads, 20 nm, λex ~470 nm, λem ~505 nm) were obtained from Sigma-Aldrich (U.K.). Lipids were purchased from Avanti Polar Lipids (Alabaster, AL). DSPE-PEG-NHS was purchased from Nanocs. 8-arm PEG (40,000 MW) was purchased from Jenkem (U.S.A.). All the other chemicals were purchased from Sigma-Aldrich (U.K.) and used without further purification.
Solid phase peptide synthesis (SPPS)
Peptides were synthesized manually using standard fluorenylmethoxycarbonyl (Fmoc) chemistry protocols. The Fmoc protecting group was removed with 20% piperidine/DMF. Couplings used 4 molar equivalents of the Fmoc-protected amino acid, 3.95 molar equivalents of HBTU, and 6 molar equivalents of DIEA in DMF. The coupling solution was added to the resin, and the coupling reaction was allowed to proceed for two to three hours. Kaiser tests were performed after each Fmoc deprotection and coupling step to monitor the presence of free amines. Peptides were cleaved in a mixture of trifluoroacetic acid/triisopropylsilane/H2O (95 : 2.5 : 2.5) for four hours. The peptides were purified using reversed-phase preparative high performance liquid chromatography (HPLC; Shimadzu) in an acetonitrile/water gradient under acidic conditions on a Phenomenex C18 Gemini NX column (5 µm particle size, 110 Å pore size, 150 × 21.2 mm). The purified peptide mass was verified by matrix-assisted laser desorption/ionization mass spectrometry (MALDI; Waters).
Light irradiation
The light-irradiation experiments were conducted using a UV lamp (UVLMS-38 EL Series 3 UV™, 365 nm, 1.3 mW cm⁻²) with a distance of 10 cm between the sample and the lamp. To test the light-responsiveness of the peptides, 0.1 mM peptide was dissolved in phosphate buffer (10 mM, pH 7.0). The solution was exposed to UV light and UV-vis spectra were recorded every 30 seconds.
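For planning exposure times, the lamp's stated irradiance can be converted into a radiant dose. The helper below is ours (not part of the published protocol) and simply multiplies irradiance by time:

```python
# Hypothetical helper (not from the paper): convert lamp irradiance and
# exposure time into a radiant dose, to compare irradiation conditions.
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2: float, seconds: float) -> float:
    """Dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s)."""
    return irradiance_mw_per_cm2 * seconds

# Lamp in this work: 1.3 mW/cm^2; spectra were recorded every 30 s.
print(uv_dose_mj_per_cm2(1.3, 30))       # dose per 30 s recording interval
print(uv_dose_mj_per_cm2(1.3, 15 * 60))  # dose for a 15 min exposure
```

This makes it straightforward to compare, say, a 5 min and a 15 min irradiation on a common mJ/cm² scale.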
Synthesis of Au nanostars
Gold nanostars were prepared using a seeded HEPES/hydroxylamine reduction approach similar to that reported by Maiorano et al., with minor adjustments. Briefly, a solution consisting of 38.5 mL of Milli-Q water, 18.75 mL of 100 mM HEPES buffer at pH 9.6, 750 µL of fresh 40 mM hydroxylamine and 750 µL of the as-prepared citrate gold nanoparticle seeds was prepared in a 100 mL conical flask. A sufficiently large magnetic stirrer bar was used to achieve effective mixing at a stirring rate of 1450 rpm. To this rapidly mixing solution, 22.5 mL of 1 mM HAuCl4·3H2O was added at a rate of 2 drops per second. Upon complete addition of the HAuCl4·3H2O, the stirring speed was reduced to 400 rpm and stirring was continued for a further 15 minutes; subsequently, 80 µL of a 1 wt% aqueous solution of Tween-20 surfactant was added. The particles were centrifuged at 4000 rpm for 30 minutes for 3 cycles, with brief sonication to assist resuspension, such that the final concentration of Tween-20 was approximately 0.0001 wt%. The particles were then passed through a 0.2 µm PES filter membrane. The peak maximum of the longitudinal plasmon absorption was at 820 nm, and the final concentration of gold was [Au] = 2.6 mM.
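As a sanity check on this recipe, the final reagent concentrations in the growth mixture follow from simple dilution. Note one assumption of ours: the hydroxylamine and seed additions are taken as 750 µL and the Tween-20 addition as 80 µL (the "mL" in the extracted protocol text is most likely a garbled micro sign, since 750 mL would not fit a 100 mL flask).

```python
# Sanity check (ours, not from the paper): final concentrations in the
# nanostar growth mixture, by simple dilution C_final = C_stock * V / V_total.
volumes_ml = {
    "water": 38.5,
    "HEPES (100 mM)": 18.75,
    "hydroxylamine (40 mM)": 0.75,   # assumed 750 uL, see note above
    "seeds": 0.75,                   # assumed 750 uL
    "HAuCl4 (1 mM)": 22.5,
}
total_ml = sum(volumes_ml.values())

def final_mM(stock_mM: float, added_ml: float) -> float:
    return stock_mM * added_ml / total_ml

print(round(total_ml, 2))              # 81.25 mL growth volume
print(round(final_mM(100, 18.75), 2))  # HEPES ~23.08 mM
print(round(final_mM(40, 0.75), 3))    # hydroxylamine ~0.369 mM
print(round(final_mM(1, 22.5), 3))     # Au(III) ~0.277 mM
```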
Synthesis of 13 nm citrate-capped gold nanoparticle seeds

180 mL of Milli-Q water in a 250 mL round-bottom flask (2-neck) was brought to reflux in an oil bath and stirred gently. A solution of 79 mg of HAuCl4·3H2O in 5 mL of Milli-Q water was then added, and the stirring speed was increased as high as possible. Within 5 minutes, a 10 mL solution of 2 wt% trisodium citrate dihydrate in Milli-Q water (pre-heated to 70 °C) was injected rapidly. Reflux was continued for a further 5 minutes with fast stirring before the flask was removed from the oil bath. The particles were then allowed to cool to room temperature and stored at 4 °C prior to use.
Surface functionalization of Au nanostars
To functionalize Au nanostars, 10 µL of 1.0 mM HS-PEG2K and 50 µL of 0.5 mM HS-R7E7 were added to 2 mL of Au nanostar solution (5.5 mM). The mixture was incubated at room temperature overnight. After centrifugation at 10,000 rpm for 10 minutes, the supernatant was discarded, and the pellet was redispersed in phosphate buffer.
Surface functionalization of quantum dots (QDs)
Peptide conjugation on the CdSe/ZnS quantum dots was achieved via biotin-streptavidin chemistry. Briefly, 10 µL of commercial streptavidin-coated QDs (4 µM) was incubated with 1.0 µL of 10 mM biotinylated photo-cleavable peptide (biotin-R7E7) for 1.0 hour. The solution was centrifuged at 32,490 rpm for 2 hours in an ultracentrifuge to remove excess unbound peptide. The supernatant was discarded, and the QDs were re-dispersed in phosphate buffer.
Surface modification of fluorescent polystyrene (PS) nanoparticles
Typically, 80 µL of carboxylic acid-modified polystyrene particles (20 nm, 25 mg mL⁻¹) was dispersed in 2 mL of MES buffer (pH 5.7, 20 mM). Then, 20 mg of EDC and 20 mg of NHS were added to activate the carboxyl groups. The mixture was incubated for 2 hours at room temperature before the addition of NH2-R7E7 (2 mg in 50 mM phosphate buffer, pH 8.0). After incubation overnight, the PS particles were centrifuged and the supernatant was discarded to remove excess reagents. The PS particles were washed with PBS again before the cell experiments.
Surface modification of Au plate
The Au surface was rinsed with ethanol three times and dried in air. Then, 1 mg mL⁻¹ HS-R7E7 solution was applied to the Au surface and incubated for 5 hours. The Au surface was then washed with water and ethanol several times. The Au surface was further incubated with a HS-PEG1000 solution (1 mM) to block the surface.
Characterization

UV-Vis spectra were recorded with a Lambda 25 spectrometer (Perkin Elmer). Fluorescence spectra were recorded with a Fluorolog fluorometer (Horiba). Fluorescence measurements for the cell experiments were conducted with a SpectraMax M5 plate reader.
Cell culture
All cell culture reagents were from Thermo Fisher Scientific (Loughborough, UK) unless otherwise stated. MDA-MB-231 cells were purchased from ATCC (Teddington, UK), HeLa cells from DSMZ (Braunschweig, Germany), and GFP-HeLa cells from Cell Biolabs, Inc. (San Diego, CA, USA). Cell lines were cultured under standard conditions (37 °C and 5% CO2) in DMEM supplemented with 10 v/v% fetal bovine serum and 1 v/v% penicillin-streptomycin and split using trypsin-EDTA upon confluence. For uptake experiments (FITC, avidin, QDs, PS, Au nanostars, liposomes and camptothecin-loaded PLGA nanoparticles), cells were plated in 96-well plates at 10,000 cells per well and left to attach overnight. The next day, the medium was replaced by the corresponding constructs diluted in PBS : medium (1 : 1) and again incubated overnight. After 24 hours, the medium was discarded and cells were gently washed with PBS 3 times before fluorescence intensity was measured or the alamarBlue® test was performed (PLGA nanoparticles).
For live activation of QD-R7E7 complexes, MDA-MB-231 cells were incubated with 20 nM QDs and irradiated for 0, 5 and 15 minutes. 24 hours later, epifluorescence images were taken, and cells were trypsinized and analyzed by FACS.
For Au attachment experiments, cells were detached using 0.05 w/v% trypsin-EDTA, counted and plated at 100,000 cells per mL in DMEM without supplements to avoid interference from charged molecules in the serum. 30 minutes after plating, the medium was removed and the samples were washed with PBS. Cell attachment was evaluated after staining with calcein AM, and 4-6 pictures per sample were taken randomly. The substrates were then fed with complete medium and incubated overnight. After 24 hours, the samples were again stained with calcein AM, and the same number of images was acquired.
Viability of the cultures after incubation with the different nanomaterials was evaluated using either LIVE/DEAD staining or alamarBlue®, both following the manufacturer's instructions.
Fluorescence-activated cell sorting (FACS)
For FACS experiments, cells were plated in a 12-well plate at a density equivalent to the uptake experiments and left to attach overnight. The next day, they were incubated with the different compounds or nanomaterials for 24 hours and detached with 0.05 w/v% trypsin-EDTA. Non-treated cells were used as a control. Fluorescence was measured using a Fortessa cytometer (BD, Oxford, UK).
Cell encapsulation in PEG hydrogels
15 mm diameter glass coverslips were thiol-functionalized with 3-mercaptopropyl trimethoxysilane in acetone for 10 minutes, rinsed in acetone, heated at 80 °C for 10 minutes, cooled and stored at −20 °C. Cell-laden hydrogels were prepared with 8% w/v 8-arm PEG acrylate, 5 mM RGD peptide (CGGRGDSP), 6 mM PEG dithiol crosslinker (MW 1000), and 1 × 10⁶ cells per mL in DMEM with 20 mM HEPES; the mixture was pipetted into silicone moulds (6 mm diameter, 500 µm thickness) on top of thiol-functionalized glass coverslips and covered with Rain-X-treated glass coverslips. After 12 minutes, gelation was complete, and individual gels attached to 15 mm coverslips were transferred to 24-well plates with cell culture medium and cultured for 2 days before the two-photon photoactivation experiments.
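The gel composition above implies a near-stoichiometric thiol:acrylate balance. The estimate below is ours, not from the paper, and assumes the 40 kDa 8-arm PEG acrylate (stated in the Materials) is fully functionalized and that each RGD peptide (CGGRGDSP) contributes one cysteine thiol:

```python
# Back-of-envelope gel stoichiometry (an estimate, not from the paper):
# thiol vs. acrylate group concentrations in the 8-arm PEG hydrogel.
peg_g_per_L = 8 * 10          # 8% w/v -> 80 g/L
peg_mw = 40_000               # g/mol, 8-arm PEG acrylate (per Materials)
arms = 8
acrylate_mM = peg_g_per_L / peg_mw * 1000 * arms  # 2 mM macromer x 8 arms
thiol_mM = 6 * 2 + 5          # PEG dithiol (2 SH each) + RGD (1 SH each)
print(round(acrylate_mM, 1))             # 16.0 mM acrylate
print(thiol_mM)                          # 17 mM thiol
print(round(thiol_mM / acrylate_mM, 2))  # ~1.06, near-stoichiometric
```

A slight thiol excess is a common design choice for thiol-Michael gels, since it helps consume the acrylate groups during gelation.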
Two-photon photoactivation
Cell-laden hydrogels were incubated with 20 nM QD-peptide in the culture medium for 3 hours, and several 3-dimensional regions of interest at the center of the cell-laden gels (300 × 300 µm in x-y, 200 µm in z with 5 µm z-spacing) were selectively exposed to multiphoton pulsed laser light (740 nm, 40× 0.8 NA water immersion objective, 16 mW average power at the objective) to photo-cleave the QD-bound peptide, using an upright multiphoton confocal microscope (Scientifica, Uckfield, UK). Hydrogels were incubated overnight to allow for cellular uptake of the photo-activated QD-peptide.
Microscopy
For conventional fluorescence microscopy, cells were imaged live using an IX51 epifluorescence inverted microscope (Olympus, Southend-on-Sea, UK). For confocal microscopy, cells were plated on glass-bottom microslide chambers (Ibidi, Glasgow, UK) and treated as above. All washes were performed with PBS. After incubation, cells or cell-laden hydrogels were washed and fixed for 15 minutes in 4 w/v% paraformaldehyde, washed again in PBS and stained with WGA (where indicated, conjugated to AlexaFluor 488 for FITC-R7E7 and PS-R7E7 or AlexaFluor 594 for QD-biotin-R7E7) for 15 minutes at room temperature. Samples were then washed, incubated with DAPI for 5 minutes, washed and mounted with Vectashield (Vector Laboratories, Peterborough, UK). Cell-laden hydrogels were mounted with FluorSave (Calbiochem, USA). Imaging was performed using an SP5 MP/FLIM inverted confocal microscope (Leica Microsystems, Milton Keynes, UK). For hydrogel experiments, GFP-positive cellular uptake of QDs was compared at the gel center (photoactivation region) versus the gel edge (receiving no activation).
Statistics
Results are presented as mean ± standard deviation unless specified otherwise. Statistical analysis was performed using SPSS 22 and Prism software. Distributions were assumed normal, and differences were analyzed using Student's t-test. Differences were considered statistically significant when p < 0.05 (*) and very significant when p < 0.01 (**).
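For reference, the two-sample comparison described above amounts to a pooled Student's t-test against a critical value at α = 0.05. The sketch below uses fabricated numbers (not the study's data) purely to illustrate the calculation:

```python
# Illustrative sketch (fabricated numbers, not the study's data): a pooled
# two-sample Student's t-test with the alpha = 0.05 threshold used here.
import math

def students_t(a, b):
    """Return (t statistic, degrees of freedom) for a pooled two-sample test."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

control = [100, 95, 102, 98, 101]
treated = [130, 128, 135, 126, 131]
t, df = students_t(control, treated)
t_crit = 2.306  # two-sided critical value for df = 8, alpha = 0.05
print(df, abs(t) > t_crit)  # 8 True
```

In practice a statistics package (as SPSS or Prism was used here) returns the exact p-value rather than a threshold comparison.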
Conflicts of interest
There are no conflicts to declare.
Effect of Plant Growth Promoting Bacteria on Seed Germination, Seedling Vigor and Growth of Lagenaria siceraria (Molina) Standl
Bottle gourd [Lagenaria siceraria (Molina) Standl] is a vital cucurbitaceous crop grown for its fleshy fruits in tropical and subtropical regions (Desai et al., 2016). It probably originated in Africa. Young bottle gourd fruits are edible, while the mature hollowed shells are used as bottles, musical instruments, and floats by fishermen. Bottle gourd fruit pulp is employed in the treatment of insanity, epilepsy, and other nervous diseases. The bottle gourd plant is an annual herb and a vigorous running vine. Stems are succulent and angular; the leaf is simple, suborbicular, non-lobed or slightly lobed; flowers are white, monoecious, and axillary; fruits are berries, varying in size and shape, woody when mature. The area under bottle gourd in India was 186 thousand hectares and production was 3,052 thousand tonnes during 2018-2019 (Agricultural Statistics, 2019).

International Journal of Current Microbiology and Applied Sciences, ISSN: 2319-7706, Volume 9, Number 8 (2020). Journal homepage: http://www.ijcmas.com

The mechanisms by which PGPRs promote plant growth aren't fully understood, but several mechanisms have been suggested: enhanced stress resistance and symbiotic N2 fixation (Salantur et al., 2006); solubilization of phosphate and mineralization of organic phosphate or other nutrients (Cattelan et al., 1999; Jeon et al., 2003); increasing the availability of primary nutrients to the host plant; and antagonism against phytopathogenic microorganisms through the production of siderophores, the synthesis of antibiotics, enzymes or fungicidal compounds, and competition with detrimental microorganisms (Ashrafi et al., 2011; Wu et al., 2005; Lucy et al., 2004; Ahmad et al., 2005; Egamberdiyeva et al., 2007). Dursun et al. (2019) treated seeds of tomato cultivars with three different concentrations (1, 3, and 5 g L⁻¹, plus an un-inoculated control) of two bacterial fertilizers, Azotobacter spp. (1 × 10⁹ CFU) and a mixture of Bacillus subtilis and Bacillus megaterium (1 × 10⁹ CFU). The effects of these treatments on plant growth parameters were significant, and bacterial fertilization increased yield and other parameters in all tomato treatments. Kumar et al. (2017) studied the effect of Enterobacter, M. arborescens and Serratia marcescens on yield and nutrient uptake of wheat and reported that a consortium of two or three isolates significantly increased plant height (13.91% and 34.32%), straw yield (78.58% and 26.23%) and grain yield (79.83% and 24.05%) in pot and field experiments, respectively; nutrient uptake by wheat under field conditions was also enhanced, N by 50.64% and P by 56.49%. Singh et al. (2017) reported that Pseudomonas (RS1) and Bacillus (R7) were most dominant in the rhizosphere of bitter melon and that Bacillus (R7) may be used as a biofertilizer for bitter melon growth. However, scant information is available regarding the use of bio-priming in different crops, and it needs to be investigated. Keeping the above facts in view, the present investigation was undertaken to evaluate the effect of plant growth-promoting bacteria on seed germination, seedling vigour and growth of bottle gourd [Lagenaria siceraria (Molina) Standl.].
Materials and Methods
The present investigation, entitled "Effect of plant growth-promoting bacteria on seed germination, seedling vigour and growth of Lagenaria siceraria (Molina) Standl.", was conducted during the Kharif seasons of 2019 and 2020 at the University College of Agriculture farm, Talwandi Sabo, Bathinda (Punjab), and the experiment was laid out in a Randomized Block Design with three replications. Three standard PGPR strains, Serratia marcescens MTCC 10241, Bacillus subtilis MTCC 7611 and Bacillus subtilis MTCC 814, were obtained from the Microbial Type Culture Collection (MTCC), Chandigarh, India.
Sub culturing of bacterial culture
The bacterial cultures were sub-cultured by growing in nutrient broth on a shaker at 120 rpm for 24 h. Growth was monitored as optical density at 600 nm using a spectrophotometer.
Sterilization and inoculation of seeds
The seeds of bottle gourd were surface sterilized with NaOCl for 2 minutes and then washed thoroughly with sterile distilled water three times. The surface-sterilized seeds were soaked in the bacterial suspension at the specified concentration and left overnight. Seeds soaked in sterile distilled water served as the control.
Preparation of pots and treatment details
The soil was sterilized with formalin @ g/m3 two weeks before transplanting. Poly-bags with a capacity of 10 kg were filled with the formalin-sterilized coarse sandy loam soil, and treated seeds of the bottle gourd variety Punjab Bahar were sown. The poly-bags were arranged according to the recommended spacing in the open field. There were seven treatments in total; these included T1: Control (water soaked),
Details of observations recorded
Seed germination percentage was calculated using the formula: Seed germination % = (germinated seeds / total seeds sown) × 100. Seedling vigour was calculated as: vigour index = seedling length × germination %. Vine length was also recorded.
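The two formulas above translate directly into small helpers; the numbers below are illustrative only, not the study's data:

```python
# The germination and vigour formulas from the text, as helpers.
def germination_percent(germinated: int, sown: int) -> float:
    """Seed germination % = germinated seeds / total seeds sown x 100."""
    return germinated / sown * 100

def vigour_index(seedling_length_cm: float, germination_pct: float) -> float:
    """Seedling vigour index = seedling length x germination %."""
    return seedling_length_cm * germination_pct

g = germination_percent(45, 50)   # e.g. 45 of 50 seeds germinated
print(g)                          # 90.0
print(vigour_index(12.5, g))      # 1125.0 for a 12.5 cm mean seedling
```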
Statistical analysis
All data were analysed for the different characters with the help of OPSTAT (Sheoran et al., 1998). The critical difference at the 5% level of significance was calculated to compare the means of the different treatments.
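OPSTAT computes the critical difference internally; for an RBD the standard formula is CD = t(0.05, error df) × sqrt(2 × MSE / r). A minimal sketch with illustrative values (not the study's ANOVA output):

```python
# How the critical difference (CD) at the 5% level is typically computed
# for a Randomized Block Design. MSE value below is illustrative.
import math

def critical_difference(mse: float, replications: int, t_crit: float) -> float:
    """CD = t(0.05, error df) * sqrt(2 * MSE / r)."""
    return t_crit * math.sqrt(2 * mse / replications)

# This trial: 7 treatments, 3 replications -> error df = (7-1)*(3-1) = 12;
# the two-sided t(0.05, 12) critical value is 2.179.
cd = critical_difference(mse=1.8, replications=3, t_crit=2.179)
print(round(cd, 2))  # any two treatment means differing by more than this
                     # are declared significantly different at 5%
```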
Results and Discussion
The data in
Seed germination percentage and seedling vigor
Germination percentage and seedling length are the major factors deciding seedling vigour. In the present study, this character showed significant variation among all treatments, which might be due to the activity of the PGPR consortium (Serratia marcescens MTCC 10241 + Bacillus subtilis MTCC 7611 + Bacillus subtilis MTCC 814). Similar results were reported by Prathibha and Siddalingeshwara (2013), who found that seed treatment with PGPRs such as Pseudomonas fluorescens and Bacillus subtilis significantly increased seed germination, vigour index, and nutritional quality.
Vine length and the number of primary branches per plant
These are growth characters that should be enhanced to obtain optimum yield from the crop. In this experiment, the mixture of all three bacteria (Serratia marcescens MTCC 10241 + Bacillus subtilis MTCC 7611 + Bacillus subtilis MTCC 814) with 50% of the recommended dose of fertilizer significantly increased the vine length and the number of primary branches per plant of bottle gourd. Correspondingly, Kumar et al. (2015) reported that seed coating with Bacillus subtilis OTPB1 and Trichoderma harzianum OTPB3 in vegetables (brinjal, beans, bitter gourd, bottle gourd, cabbage, chilli, carrot, cauliflower, pumpkin, ridge gourd), a fruit crop (papaya, in plastic trays in a glasshouse), tuber crops (potato), ginger and turmeric produced a significant increase in growth parameters under greenhouse conditions and in growth and yield parameters under field conditions.
The number of nodes on the main axis and number of leaves per plant
The results revealed that the T3 treatment (RDF 50% + Serratia marcescens MTCC 10241 + Bacillus subtilis MTCC 7611 + Bacillus subtilis MTCC 814) significantly increased the number of nodes on the main axis in the bottle gourd crop during both years. Similarly, Kidoglu et al. (2008) found that PGPRs (Pseudomonas putida, Enterobacter cloacae, Serratia marcescens, Pseudomonas fluorescens and Bacillus spp.) significantly increased the growth of cucumber, tomato and pepper. The maximum number of leaves per plant was observed in treatment T3 (RDF 50% + Serratia marcescens MTCC 10241 + Bacillus subtilis MTCC 7611 + Bacillus subtilis MTCC 814) in both years. Similarly, Yıldırım et al. (2015) applied Bacillus pumilis and Alcaligenes piechaudii strains as seed and/or drench treatments and found an increased number of leaves in cucumber.
Leaf area and the days to first fruit set
In the pooled data, the maximum leaf area per plant was observed in treatment T3 (RDF 50% + Serratia marcescens MTCC 10241 + Bacillus subtilis MTCC 7611 + Bacillus subtilis MTCC 814) in the bottle gourd crop in both years. Likewise, Mia (2010) evaluated the effect of PGPR on banana plantlets under nitrogen-free hydroponic conditions and found an increase in growth attributes such as root hairs, leaf area, chlorophyll content and total biomass. In the pooled data, treatment T3 was also significantly superior in reducing the number of days to first fruit set in the bottle gourd crop during both years of the investigation. Similarly, Karakurt et al. (2011) found that B. subtilis OSU-142, B. megaterium M-3, B. cepacia OSU-7, and P. putida BA-8 have great potential to increase fruit set, plant growth, and fruit quality. The present study supports the view that the use of PGPRs may enhance the seed germination, seedling vigour, and growth of the bottle gourd crop under field conditions. Effective PGPR strains may be recommended to farmers, which may reduce the use of chemical fertilizers in cucurbit crops.
Deductible imputation in administrative medical claims datasets
Abstract Objective To validate imputation methods used to infer plan‐level deductibles and determine which enrollees are in high‐deductible health plans (HDHPs) in administrative claims datasets. Data Sources and Study Setting 2017 medical and pharmaceutical claims from OptumLabs Data Warehouse for US individuals <65 continuously enrolled in an employer‐sponsored plan. Data include enrollee and plan characteristics, deductible spending, plan spending, and actual plan‐level deductibles. Study Design We impute plan deductibles using four methods: (1) parametric prediction using individual‐level spending; (2) parametric prediction with imputation and plan characteristics; (3) highest plan‐specific mode of individual annual deductible spending; and (4) deductible spending at the 80th percentile among individuals meeting their deductible. We compare deductibles’ levels and categories for imputed versus actual deductibles. Data Collection/Extraction Methods Not applicable. Principal Findings All methods had a positive predictive value (PPV) for determining high‐ versus low‐deductible plans of ≥87%; negative predictive values (NPV) were lower. The method imputing plan‐specific deductible spending modes was most accurate and least computationally intensive (PPV: 95%; NPV: 91%). This method also best correlated with actual deductible levels; 69% of imputed deductibles were within $250 of the true deductible. Conclusions In the absence of plan structure data, imputing plan‐specific modes of individual annual deductible spending best correlates with true deductibles and best predicts enrollees in HDHPs.
What is known about this topic
• High-deductible health plans are an increasingly common type of benefit structure that may impact health care access, health and consumer finances.
• Research has been hindered by a lack of plan-level information on deductibles in administrative medical claims datasets.
What this study adds
• Using each plan's highest mode of annual individual deductible spending is a reasonably accurate way to identify high-versus low-deductible health plans, and more accurate than more computationally intensive methods.
• When imputing deductibles for a categorical distribution, limiting the sample to plans with ≥50 enrollees increases accuracy.
• All imputation methods are imperfect at predicting deductibles. Claims dataset vendors should include plan structure variables, including deductibles, in data releases so researchers do not have to rely on imputation.
| INTRODUCTION
High-deductible health plans (HDHPs) are among the most common types of health plan for the 155 million Americans who receive their health insurance through an employer and the 20 million who purchase insurance on the individual commercial market. 1,2,13-18 An impediment to research on HDHPs is the lack of data about plan structure. Administrative claims data, ideal for research, often come in two types. The first type includes detailed information about plan structure but often has poor external validity, as it is typically sourced from a single health insurer or a small subset of enrollees. 19 The second type has improved external validity by pooling across insurers but does not usually include the plan-structure variables necessary to distinguish between HDHPs and plans with lower deductibles, or to interpret what binary "HDHP" variables represent. 20 Several research groups have used claim-level deductible spending to impute deductibles, but different methods have been used and none has been validated against a full distribution of true deductible levels. 13,21 This paper aims to fill that gap by comparing several methods of deductible imputation used in previous literature or suggested by experts. We use an administrative claims dataset of the first type (sourced from a single insurer with plan-level deductibles) to validate imputation methods.
Researchers studying health insurance design, or who want to adjust for deductible spending in their analyses, can use the results of this paper to operationalize imputation of plan deductibles. We have included replication code for this purpose. By validating a method for imputing deductibles with common data elements, we hope to both improve the quality of research on health insurance and expand the scope of data that can be used for understanding the effects of HDHPs.
| METHODS
Our goal is to impute plan-level deductibles when data do not include them and then, using these imputed values, to assign plans to categories of deductibles and binary high-/low-deductible status for one plan-year of claims.
| Data
We use 2017 de-identified administrative claims data from the OptumLabs Data Warehouse. 22 We pull medical and pharmaceutical claims for enrollees under age 65 in plans with at least 10 enrollees in employer-based insurance in the USA. We limit the sample to those who are continuously enrolled for 12 months in a single plan for which out-of-pocket spending resets on January 1, to ensure we capture spending for a full plan year.
Similar to most administrative claims databases, the data contain claim-level spending variables, including out-of-pocket spending (deductible, coinsurance, and copayment) and the amount paid by the health insurance plan. We use in-network claims and sum across these fields to find the total amount paid per claim. The data also contain other variables often included in claims datasets: enrollee demographic characteristics (self-reported gender, birthdate), plan characteristics (e.g., network structure), coverage level (individual/family), specific plan identifiers, and an anniversary date (cost-sharing reset date). It is necessary to be able to identify claim-level spending, coverage level, specific plans, and anniversary date for our imputation methods. Unlike many multipayer claims datasets, the OptumLabs data contain variables denoting administratively set annual deductibles that are consistent within plans. We leverage our ability to see both administratively set deductibles and enrollee spending in the same dataset to validate imputation methods.
| Deductible construction
We derive a claim-level dataset linking individual enrollee medical encounters to their insurance plans. We exclude out-of-network claims, which may not be subject to the general deductible. For plans with separate medical and pharmaceutical deductibles, we use the medical deductible in place of a general deductible and measure only medical spending. For plans with a general (medical + pharmaceutical) deductible, we combine medical and pharmaceutical claims. Within the full employer market, 85% of plans use a general deductible; for plans with a separate pharmaceutical deductible, the average pharmaceutical deductible is $150. 23 We top-coded the data at the 99th percentile of deductible spending and bottom-coded at $0 to remove unreasonable values. We define HDHPs as plans with a deductible of ≥$1350, reflecting the Internal Revenue Service minimum deductible limit for HDHPs with a health savings account in 2017.
A general challenge in estimating deductibles is that most plans have separate deductible amounts for individual and family coverage, and the way in which individual medical spending contributes to the family deductible varies. Claims datasets may not link family members or include information about the structure of family deductibles, though they often include variables that denote whether a person is enrolled as part of a family. Because of these additional complications with estimating family deductibles, we impute deductibles for enrollees with individual-level coverage only. While we believe this is the most straightforward approach, we include in Supporting Information Appendix 1 additional considerations for researchers who wish to estimate family-level deductibles.
| Imputation
We test four methods for deductible imputation: three are based on methods previously used in peer-reviewed literature or in-progress work and one has been recommended by researchers familiar with claims datasets (Table 1).For all methods, we impute $0 as the deductible for plans with positive total spending but no deductible spending (2% of plans).
| Parametric prediction with spending (regress on spending method)
This method predicts deductibles using plan-specific variations in the observed relationship between individual deductible and total spending amounts. 24 To implement it, we regress each enrollee's annual deductible spending on their total annual spending (plan plus out-of-pocket), common demographic covariates (gender and age), and fixed effects for each plan (details are in Supporting Information, Appendix 1). Using the coefficients from the best-fit regression model, we predict deductibles for each plan at a fixed amount of total spending, which we set at $10,000 to exceed most deductibles. The coding and construction of variables for this method are simple, though the processing time to generate predictions can be extensive in datasets with many plans.
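As a rough sketch of this idea (not the authors' exact model, which also includes age and gender covariates and a best-fit functional form chosen per their Appendix 1), one can fit a plan-specific regression of deductible spending on total spending and evaluate it at $10,000:

```python
import numpy as np

def predict_plan_deductibles(total, deduct, plan_ids, predict_at=10_000.0):
    """For each plan, fit a line of annual deductible spending on total
    annual spending and predict the deductible at a fixed total-spending
    level chosen to exceed most deductibles."""
    predictions = {}
    for plan in set(plan_ids):
        t = np.array([tv for tv, p in zip(total, plan_ids) if p == plan], float)
        d = np.array([dv for dv, p in zip(deduct, plan_ids) if p == plan], float)
        slope, intercept = np.polyfit(t, d, 1)  # plan-specific OLS line
        # bottom-code predictions at $0, as in the paper's construction
        predictions[plan] = max(0.0, slope * predict_at + intercept)
    return predictions
```

Note that a purely linear fit will underpredict when enrollees' deductible spending flattens at the true deductible, which is consistent with the underprediction the paper reports for regression-based methods at high deductible levels.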
| Parametric prediction with imputation and plan characteristics (regress on imputed deductibles method)
This method is inspired by an imputation method used in multiple papers published by the same research group; those analyses were done with a different dataset that included actual plan deductibles, and our paper does not comment on the validity of those methods for their specific data and usage. We implement this method in two stages: (1) impute deductibles for a subset of plans where they are easily identified and (2) use regression to predict deductibles for the remaining plans. For the first stage, we sum deductible spending to the individual-year level and impute deductibles based on modal spending values. 11 In this stage, we are able to impute deductibles for 69% of plans. For the second stage, we create a set of covariates describing observed deductible spending and plan characteristics and collapse the data from the individual to the plan level. Using the subset of plans with an imputed deductible, we regress the imputed deductible amounts on the set of covariates and use the generated coefficients to predict deductibles for plans unable to be imputed in the first stage.
The method, including detailed imputation rules for the first stage, covariates used in the second stage, and the regression specification, is more fully described in Supporting Information, Appendix 2.
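The second stage can be sketched as follows. Here `plan_stats` holds illustrative plan-level covariates and `stage1` holds first-stage imputations with `None` for plans that could not be imputed; both names, and the covariate choice, are placeholders, since the paper's actual first-stage rules and covariate set are detailed in its Appendix 2:

```python
import numpy as np

def two_stage_impute(plan_stats, stage1):
    """Stage 2 of the two-stage method: regress stage-1 imputed deductibles
    on plan-level covariates, then predict deductibles for plans that
    stage 1 could not impute.
    plan_stats: {plan: list of covariate values}
    stage1: {plan: imputed deductible, or None if not imputable}"""
    done = {p: d for p, d in stage1.items() if d is not None}
    X = np.array([[1.0, *plan_stats[p]] for p in done])  # intercept + covariates
    y = np.array([done[p] for p in done])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    out = dict(done)
    for p, d in stage1.items():
        if d is None:
            # predict from the fitted coefficients, bottom-coded at $0
            out[p] = float(max(0.0, np.array([1.0, *plan_stats[p]]) @ beta))
    return out
```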
| Modal deductible spending (mode method)
The logic of this method and the following one is that enrollees who meet their deductible will have observable deductible spending clustered at the administratively set deductible level, and these clusters can be seen as modal lumps in each plan's overall deductible-spending distribution. To implement this method, the simplest of the four tested, we identify the highest modal nonzero deductible-spending amount among enrollees in a plan and apply it to all enrollees in that plan.
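A minimal sketch of this rule, with the $0 imputation for plans lacking deductible spending folded in (the tie-breaking toward the higher amount is our reading of "highest modal"):

```python
from collections import Counter

def impute_mode_deductible(plan_spending):
    """Impute a plan's deductible as the highest modal nonzero annual
    deductible-spending amount among that plan's enrollees."""
    nonzero = [s for s in plan_spending if s > 0]
    if not nonzero:
        return 0.0  # plans with spending but no deductible spending get $0
    counts = Counter(nonzero)
    top = max(counts.values())
    # among equally common amounts, take the highest one
    return max(amount for amount, n in counts.items() if n == top)
```

For example, in a plan where several enrollees each show exactly $1500 of annual deductible spending, that cluster dominates the scattered below-deductible amounts and $1500 is imputed.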
| 80th percentile of deductible spending (80th percentile method)
This method is based on the method used in Rabideau et al. 13 We begin with enrollee-month-level data, where deductible spending is summed across each month. First, by enrollee, we track month-over-month deductible spending and identify enrollees whose total spending is increasing for multiple months without commensurate increases in their deductible spending; we consider these instances of an enrollee meeting their annual deductible limit and flag them.
Then, keeping only these enrollees assumed to have met their deductibles, we collapse the data to the plan level and, for each plan, impute the 80th percentile of annual deductible spending as the plan deductible.
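A simplified sketch of the procedure follows. The flag rule used here (a month with positive total spending but no new deductible spending, after deductible spending has begun) is our loose paraphrase of the month-over-month rule, not the exact specification:

```python
import numpy as np

def impute_80th_percentile(monthly, pct=80):
    """monthly maps (plan_id, enrollee_id) -> list of monthly
    (total_spending, deductible_spending) tuples for one plan year.
    Returns {plan_id: imputed deductible} as the pct-th percentile of
    annual deductible spending among enrollees flagged as having met
    their deductible."""
    met_by_plan = {}
    for (plan, _enrollee), months in monthly.items():
        annual_deductible = sum(d for _, d in months)
        started = met = False
        for total, deduct in months:
            if deduct > 0:
                started = True          # deductible spending has begun
            elif started and total > 0:
                met = True              # spending continues, deductible does not
        if met:
            met_by_plan.setdefault(plan, []).append(annual_deductible)
    return {p: float(np.percentile(v, pct)) for p, v in met_by_plan.items()}
```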
| Analysis
Our analytic dataset includes, at the plan level, actual deductibles for each plan and imputed deductibles from each method described above. We descriptively compare distributions of actual versus imputed deductibles for each method with both scatterplots and statistics. We compute the sensitivity, specificity, and positive/negative predictive value (PPV/NPV) of each method for classifying enrollees into high- versus low-deductible plans. Sensitivity and specificity measure the proportion of high- and low-deductible plans, respectively, that will be identified through each imputation method. PPV and NPV measure the proportion of imputed deductibles correctly classified; PPV can be interpreted as the probability that a plan classified as an HDHP through imputation is actually an HDHP. This study was approved by the Johns Hopkins University Institutional Review Board. All analyses were done in Stata version 17.0 MP; exact code is in the supplemental content.
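Given lists of actual and imputed plan deductibles, these statistics reduce to a confusion matrix at the $1350 HDHP threshold; a minimal sketch (assuming at least one plan in each actual and predicted class, since the ratios are otherwise undefined):

```python
def hdhp_classification_stats(actual, imputed, threshold=1350.0):
    """Sensitivity, specificity, PPV, and NPV for classifying plans as
    HDHP (deductible >= threshold) from imputed deductibles."""
    pairs = list(zip(actual, imputed))
    tp = sum(a >= threshold and i >= threshold for a, i in pairs)
    fn = sum(a >= threshold and i < threshold for a, i in pairs)
    tn = sum(a < threshold and i < threshold for a, i in pairs)
    fp = sum(a < threshold and i >= threshold for a, i in pairs)
    return {
        "sensitivity": tp / (tp + fn),  # share of true HDHPs identified
        "specificity": tn / (tn + fp),  # share of true low-deductible plans identified
        "ppv": tp / (tp + fp),          # P(actually HDHP | classified HDHP)
        "npv": tn / (tn + fn),          # P(actually low | classified low)
    }
```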
| RESULTS
Our analytic dataset includes 2,055,822 individuals in 17,425 unique plans. The actual deductible in our data ranges from $0 to $5500, with a mean of $1847 and a median of $1500 (Supporting Information, Appendix Table 1). The most common category of deductible is $500-999, though 59% of plans have a deductible higher than $1500 (Table 2). Compared with a large national sample of employer plans, our data have a similar number of plans with deductibles above $3500 and at the modal deductible level ($500-$999), but fewer plans with an individual deductible <$500 (Supporting Information, Appendix Table 2).
Means of predicted deductibles range from $1452 to $2286 (Supporting Information, Appendix Table 1). All methods had a predicted minimum of $0, which was bottom-coded in the regression-based methods. Predicted plan maximums varied from $2830 to $12,682. Histograms show that the distribution of deductibles varies by imputation method (Supporting Information, Appendix Figures 1-5).
The regression-based methods show a smoother distribution than the true deductibles, reflecting that imputation using these methods is less likely to produce round numbers.
The sensitivity of each method for correctly classifying HDHP versus non-HDHP ranged from 0.79 to 0.93, with the regress on spending method performing worst and the mode method performing best (Table 2). Specificity across all methods was moderate to high (0.83-0.92). PPV was high for all methods (0.87-0.95). However, NPV was higher for the mode method (0.91) and 80th percentile method (0.86) than for the regression-based methods, implying that HDHPs can be misclassified as low-deductible plans using those methods.
Stratifying by the number of enrollees in a plan shows that limiting to larger plans improves precision along most measures (Supporting Information, Appendix Table 3).Varying the percentile threshold in the 80th percentile method does not substantially change results (Supporting Information, Appendix Table 4).
Scatterplots of true against predicted deductibles show that all methods have high concordance with actual deductibles at low deductible levels but that the regression-based methods underpredict at higher levels of the true deductible (Figure 1). Graphically, the mode and 80th percentile methods appear to adhere most closely to actual deductibles, though they overpredict at most deductible levels. Figure 1 shows that predicted deductibles are positively correlated with actual deductibles for all methods, which is also evident in the formal correlations (0.66-0.72; Supporting Information, Appendix Table 1). A positive correlation implies that all imputation methods properly order deductible size in plans relative to each other, even when the actual levels of the deductibles are incorrectly imputed.
The overall sensitivity across the categorical distribution of deductible levels is low to moderate for all methods (Supporting Information, Appendix Table 5) and, consistent with Figure 1, imputations perform better at lower deductible levels. The regress on spending method has the lowest sensitivity both for predicting the correct deductible category and for predicting the continuous level of a deductible within $250 (0.25 and 0.23, respectively; Supporting Information, Appendix Table 5). The mode method performs best; 72% of plans are correctly classified by category and 69% of plans have an imputed deductible within $250 of the actual deductible. For this method, limiting imputation to groups with more than 50 enrollees improved sensitivity to 85% of plans correctly classified by category and reduced the average difference between the imputed and actual deductible from $700 to $496 (Supporting Information, Appendix Table 6).
| DISCUSSION
We found that imputing the nonzero mode for each plan, which requires little coding or computation time, performed as well as or better than more complex methods of deductible imputation. This method performed well, even in small groups, for classifying low- and high-deductible plans; it performed well at classifying more granular deductible levels for groups with more than 50 enrollees. Researchers who previously used more computationally intense methods may want to switch, for both simplicity and accuracy.
Additionally, all imputation methods demonstrated a high correlation with true deductibles, indicating that they can correctly order plans in terms of deductible levels.This is particularly useful for researchers who are interested in understanding the relative differences in cost-sharing structures across various health plans.
It is important to note that no imputation method perfectly matched the exact deductibles, nor is any able to capture the nuances of cost sharing in each plan. The method of imputing nonzero modes, while performing better than other methods, still only predicted a deductible within $250 of the actual deductible 69% of the time. Our findings suggest that, while imputation methods can provide a reasonably good approximation of general deductibles, there is still room for improvement, particularly when it comes to predicting deductibles at higher levels. A solution to these limitations is to include plan structure variables in data releases, which would allow researchers to directly observe the effects of cost-sharing structures, including deductibles, on health and spending outcomes.

Note (Table 2): Categorical distributions describe the percent of plans in each deductible category for each method. For the predictive statistics, an HDHP is defined as a deductible ≥$1350, per the 2017 Internal Revenue Service minimum health savings account eligible deductible level. Sensitivity is the ratio of plans correctly classified as HDHPs with each imputation method to the total number of HDHPs, defined with actual deductible levels. Specificity is the ratio of plans correctly classified as low deductible (<$1350) with each imputation method to the total number of low-deductible plans. PPV is defined as the ratio of imputed deductibles for each method that are correctly classified as HDHPs to the total number predicted to be HDHPs. NPV is defined as the ratio of imputed deductibles correctly classified as low-deductible plans to the total number predicted to be low-deductible plans. The regress on imputed deductibles method and mode method identified deductibles only in plans with positive total spending; 52 plans had no spending and thus were excluded from these imputation methods. The 80th percentile method used only plans in which we could determine that at least one enrollee had met their deductible.
Our study has several limitations. We use data from a single health insurer that, while large, is not representative of all health insurance plans, and its variables may not translate to other datasets. We made decisions about which variables to use based on external generalizability and tractability for each imputation method, though we acknowledge that our methods would be difficult to apply to datasets that do not include basic variables denoting groups or coverage levels. Our methods are not validated for family deductibles, which may be structured differently than individual deductibles. Finally, we used published literature as well as our own experiences to choose the methods in our imputations; it is possible we left out a valid method or a valid variation on one of the above methods. We hope our results will help to standardize methods used in this type of research so that studies can be better compared and evidence more easily synthesized.
ACKNOWLEDGMENTS
This project was partially supported by grant number R01DA044201 from the National Institute on Drug Abuse (NIDA). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIDA.

FIGURE 1 Scatterplots of imputed and true deductibles. Each panel in this figure shows imputed deductibles, binned in deciles, for each imputation method plotted against the true deductible. Points on the identity line signify a 1:1 match between predicted and actual deductibles. Points under each identity line signify under-prediction and points above the line signify over-prediction relative to the true deductible.
Note (Table 1): Coding intensity is based on creating code in Stata 17.0 MP. Exact code for replication is included as a supplement to this publication. Approximate processing time is based on the time to run the full code associated with each method and generate an imputed deductible for each plan in our sample (~2 million individuals in ~17,400 plans). Code was run on a server operating a 64-bit system with 8 gigabytes of installed random-access memory (RAM); times may vary depending on server specifications and hardware.
TABLE 2 Categorical deductible distributions and predictive statistics.
Utility of Real-Time and Retrospective Continuous Glucose Monitoring in Patients with Type 2 Diabetes Mellitus: A Meta-Analysis of Randomized Controlled Trials
In the present study, we aimed to investigate the effects of continuous glucose monitoring (CGM) on blood glucose levels, body weight, blood pressure, and hypoglycaemia in patients with type 2 diabetes mellitus using a meta-analysis of randomized controlled trials (RCTs). A literature search was performed using MEDLINE, Cochrane Controlled Trials Registry, and ClinicalTrials.gov. RCTs using CGM in patients with type 2 diabetes mellitus were then selected. Statistical analysis included calculation of the standardized mean difference (SMD) or risk ratio and 95% confidence intervals (CIs) using a random effects model. After the literature search, seven RCTs (669 patients) satisfied the eligibility criteria established herein and were included in the meta-analysis. Compared with the self-monitoring blood glucose group, the CGM group exhibited significantly lower HbA1c levels (SMD, −0.35; 95% CI, −0.59 to −0.10; P = 0.006) and shorter time spent with hypoglycaemia (SMD, −0.42; 95% CI, −0.70 to −0.13; P = 0.004). Conversely, no differences in body weight and blood pressure were observed between the groups. CGM in patients with type 2 diabetes mellitus could reduce HbA1c levels and time spent with hypoglycaemia. However, because few RCTs were included in this present study and heterogeneity was also noted, care should be taken when interpreting the results.
Introduction
The number of patients suffering from type 2 diabetes mellitus is increasing worldwide, with estimates suggesting that approximately 300 million individuals could develop the disease by 2050 [1,2]. Previous studies have revealed that strict blood glucose control is extremely important for preventing microangiopathy and macrovascular disorders [3,4]. Primary treatment for type 2 diabetes mellitus includes diet/exercise therapy, whereas pharmacotherapy is administered only when diet therapy/exercise therapy is insufficient. However, in many cases, favourable blood glucose control cannot be achieved through the aforementioned therapeutic interventions alone [5,6].
Self-monitoring blood glucose (SMBG) has been proven to be useful for long-term glycaemic control in patients with type 2 diabetes mellitus [7]. However, this method places considerable burden on the patient given that performing finger pricking several times per day is not only troublesome but also painful [8]. Furthermore, understanding detailed blood sugar fluctuations, such as elevated blood glucose after meals or asymptomatic hypoglycaemia, may be difficult [9].
Continuous glucose monitoring (CGM) allows for continuous measurement of interstitial glucose levels in subcutaneous tissues and evaluation of the detailed blood glucose profile of the patient. CGM includes retrospective CGM (r-CGM), which is used for retrospective examination of lifestyle problems and pharmacotherapy adjustment after understanding the blood glucose profile over several days, and real-time CGM (RT-CGM), which confirms the blood glucose profile in real-time. Studies have shown that utilization of such CGM approaches promotes favourable blood glucose control by changing patient behaviours or pharmacotherapy adjustment [10,11].
A 2013 meta-analysis that examined the influence of CGM on blood glucose levels in patients with type 2 diabetes mellitus indicated significant improvements in HbA1c levels [12]. However, the aforementioned study included only a few randomized controlled trials (RCTs) and did not examine whether CGM intervention had a direct hypoglycaemic reduction effect or an influence on weight. In the present study, therefore, we aimed to investigate the effects of CGM on blood glucose levels, body weight, blood pressure, and hypoglycaemia in patients with type 2 diabetes mellitus using a meta-analysis of RCTs.
Study Selection. A literature search was performed on 1st February 2018 using MEDLINE (from 1960), Cochrane Controlled Trials Registry (from 1960), and ClinicalTrials.gov. The search strategy was "(type 2 diabet * or T2DM or NIDDM or non-insulin dependent diabet * ) AND [continuous glucose and (monitor * or sensing or sensor * )] or [continuous subcutaneous glucose and (monitor * or sensing or sensor * )] or CGM or CGMS or real-time CGM or RT-CGM or flash glucose monitor * or FGM or sensor-augmented insulin pump or SAP AND (randomized controlled trial or controlled clinical trial or randomized or randomised or placebo or randomly)." The present study included RCTs that evaluated the effect of CGM on blood glucose levels, body weight, hypoglycaemic frequency, and other parameters in type 2 diabetes. Moreover, we included RCTs that compared CGM and SMBG regardless of diet/exercise therapy, oral hypoglycaemic agent use, and injectable formulation administration. The exclusion criteria were as follows: non-RCT studies, those involving animal experiments, those that targeted patients with gestational diabetes, those with insufficient data for analysis, and duplicate literature. Two authors (SI and RK) independently assessed whether each document satisfied the eligibility criteria established herein. In case of disagreements between interpretations by the two authors, a third reviewer (KM) was consulted.
Data Extraction and Quality Assessment.
We created a data extraction form listing the characteristics of the studies included in the present study (i.e., key author's name, publication year, study location, sample size, patients' baseline information, basic treatment, and treatment duration). Continuous variables were expressed as mean values, standard deviations, standard errors, or 95% confidence intervals (CIs), whereas binary variables were expressed as percentages (%). Studies comparing one SMBG group with two or more intervention groups were treated as two or more studies sharing an SMBG group. Two authors (SI and RK) independently evaluated the quality of the research included in the present study. Cochrane's risk of bias tool was used for evaluating quality [12]. Six domains (random sequence generation, allocation concealment, blinding of personnel and participants, blinding of outcome assessors, incomplete data, and selective reporting) were each rated as low, moderate, or high risk of bias.
Statistical Analysis.
Given that continuous variables in each study were expressed using different units, analysis was performed using standardized mean differences (SMDs) and 95% CIs. Binary variables were analyzed using the risk ratio (RR) and 95% CIs. When only the standard error or p values were described, the standard deviation was calculated with reference to the method of Altman and Bland [13]. When no description of the standard deviation was present, it was calculated from 95% CIs, t values, or p values [14]. A random effects model was used for analysis; I² was used for evaluating statistical heterogeneity (I² ≥ 50% was regarded as heterogeneous) [15]. When the number of RCTs included in an analysis was ≥10, a funnel plot was created for evaluating publication bias [14]. Furthermore, previous studies have reported that baseline HbA1c levels and age may affect the influence of CGM on HbA1c levels [16,17]. Therefore, when heterogeneity was noted, a meta-regression analysis was conducted to examine whether baseline HbA1c levels, age, and frequency of CGM sensor use affected the impact of CGM on HbA1c levels. RevMan version 5.3 (Cochrane Collaboration, https://tech.cochrane.org/revman/download, July 2017) and STATA version 12.1 (Stata Corporation LP, College Station, TX) were used for the analysis.
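As a concrete illustration of these quantities (a sketch, not the RevMan/STATA implementations actually used), the pooled-SD standardized mean difference, the Altman-Bland SE-to-SD conversion, and I² from Cochran's Q can be written as:

```python
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference: the difference in group means
    divided by the pooled standard deviation, used when studies report
    an outcome in different units."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def sd_from_se(se, n):
    """Recover a standard deviation from a reported standard error
    (SD = SE * sqrt(n)), per the Altman and Bland method cited above."""
    return se * math.sqrt(n)

def i_squared(q, df):
    """I² heterogeneity statistic from Cochran's Q; values >= 50%
    were regarded as heterogeneous in this study."""
    return max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
```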
3.2. Assessment of Potential Bias. Among RCTs included herein, proportions of appropriate assessments for each domain were as follows: random sequence generation, 85.7% (6/7); allocation concealment, 85.7% (6/7); blinding of participants and personnel, 0% (0/7); blinding of outcome assessors, 14.2% (1/7); incomplete data, 71.4% (5/7); and selective reporting, 100% (7/7). The quality of the included RCTs varied considerably, with none of the included studies having a low risk of bias. Generally, the overall risk of bias was high, with most of the bias originating from blinding of participants, personnel, and outcome assessors. As there were <10 RCTs, a funnel plot was not created.
3.6. Blood Pressure. Two trials regarding systolic blood pressure were included in the meta-analysis [19,21], with 77 and 75 pooled subjects in the CGM and SMBG groups, respectively. An I² value of 75% (P = 0.05) confirmed heterogeneity. No difference in systolic blood pressure was observed between the CGM and SMBG groups (SMD, −0.26; 95% CI, −0.94 to 0.42; P = 0.46; Figure 6). When RT-CGM and r-CGM were viewed separately, the comparison between the RT-CGM and SMBG groups resulted in an SMD of 0.06 (95% CI, −0.33 to 0.45; P = 0.76), whereas the comparison between the r-CGM and control or SMBG group resulted in an SMD of −0.63 (95% CI, −1.19 to −0.08; P = 0.03). The same two trials were used for studying diastolic blood pressure in the meta-analysis [19,21]. No difference in diastolic blood pressure was observed between the CGM and SMBG groups (SMD, −0.03; 95% CI, −0.35 to 0.29; P = 0.87; Figure 7). When RT-CGM and r-CGM were viewed separately, the comparison between the RT-CGM and SMBG groups resulted in an SMD of 0.01 (95% CI, −0.38 to 0.40; P = 0.96), whereas the comparison between the r-CGM and SMBG groups resulted in an SMD of −0.10 (95% CI, −0.64 to 0.45; P = 0.730) (Table 2). Accordingly, although three trials [20,23,24] evaluated the aforementioned scales, a meta-analysis was not performed because of the different scales used for each study. Two trials utilizing the DTSQ, DQoL, and CGM Satisfaction Scale revealed that treatment satisfaction was higher in the CGM group than in the SMBG group [20,24]. However, in the remaining trial utilizing the DTSQ [23], no difference in the degree of treatment satisfaction was observed between the CGM and SMBG groups. Two trials utilizing the DDS found no significant differences in scores between the CGM and SMBG groups [20].
Discussion
In this study, we examined the influence of CGM on blood glucose levels, weight, blood pressure, and frequency of hypoglycaemia in patients with type 2 diabetes mellitus using a meta-analysis of RCTs. Our results revealed that HbA1c levels and time spent with hypoglycaemia were significantly lower in the CGM group than in the SMBG group. Conversely, no difference in body weight or blood pressure was observed between the CGM and SMBG groups. One 2013 meta-analysis involving four RCTs that collectively examined the effects of RT-CGM and r-CGM in patients with type 2 diabetes mellitus indicated that the CGM treatment group had significantly lower HbA1c levels than the SMBG group [11]. Similarly, the present study revealed that the CGM group had significantly lower HbA1c levels than the SMBG group. However, when RT-CGM and r-CGM were viewed separately, we found that although the RT-CGM group had predominantly lower HbA1c levels than the SMBG group, no significant difference in HbA1c levels was found between the r-CGM and SMBG groups. According to a systematic review of patients with type 1 diabetes, RT-CGM has a greater blood glucose-ameliorating effect than r-CGM [25]. The use of RT-CGM helps patients not only adjust diabetes medication dosage but also understand changes in blood glucose levels on a monitor and be conscious of lifestyle factors, such as meals and exercise, thereby ameliorating blood glucose levels [18,21,26]. Conversely, r-CGM increases physical activity and blood glucose amelioration and inhibits the onset of complications [21]. Nevertheless, further studies are needed to determine whether RT-CGM improves HbA1c in patients with type 2 diabetes mellitus to a greater extent than r-CGM.

Figure 4: Forest plot presenting the meta-analysis based on standardized mean differences (SMDs) for the effect of CGM versus SMBG on time spent with hypoglycaemia (<70 mg/dL) (heterogeneity: tau² = 0.00; chi² = 0.31, df = 2, P = 0.86; I² = 0%). SMDs in the individual studies are presented as squares with 95% confidence intervals (CIs) presented as extending lines; the pooled SMD with its 95% CI is presented as a diamond.

Figure 6: Forest plot presenting the meta-analysis based on standardized mean differences (SMDs) for the effect of CGM versus SMBG on systolic blood pressure, in the same format.

CGM: continuous glucose monitoring; SMBG: self-monitoring blood glucose; RT-CGM: real-time continuous glucose monitoring; r-CGM: retrospective continuous glucose monitoring.
We found no difference in body weight change between the CGM and SMBG groups. However, although the study by Beck et al. [20] showed that the RT-CGM group tended to gain more body weight than the SMBG group, the other three trials [18,19,23] showed no change or even a decrease in body weight. The daily amount of insulin administered in Beck et al.'s study increased from baseline, whereas it remained unchanged or decreased in the other three trials. Moreover, in Beck et al.'s study, patients in the RT-CGM group improved their blood glucose levels but may have gained weight through increased snacking in response to hypoglycaemia or through increased insulin doses used to correct blood glucose levels. Accordingly, blood glucose management using CGM in patients with type 2 diabetes mellitus necessitates paying close attention to the insulin dose and to changes in weight [26].
With regard to the influence on hypoglycaemia, we showed that the CGM group spent less time with hypoglycaemia than the SMBG group. A previous study examining the utility of CGM for type 1 diabetes observed a shortening in the time spent with hypoglycaemia following CGM intervention. In general, CGM intervention exhibits a greater hypoglycaemia-reducing effect among patients with high hypoglycaemic frequency at baseline, such as those with type 1 diabetes [17]. Among the studies included in the present meta-analysis, the time spent with hypoglycaemia per day at baseline ranged from 3 to 60 min, which may be considered relatively short [22][23][24]. Nevertheless, CGM intervention shortened the time spent with hypoglycaemia, suggesting its practicality for this purpose in patients with type 2 diabetes mellitus. However, given that RCTs comparing the RT-CGM and SMBG groups were not included in this analysis, further investigation is necessary.
One included study that examined the effect on blood pressure showed no reduction in systolic or diastolic blood pressure in the CGM group compared with the SMBG group. In another included study, Allen et al. found that the r-CGM group exhibited lower blood pressure during the collection period than the SMBG group. However, as indicated in a previous study [11], because that trial included counselling on exercise therapy based on r-CGM data, the independent impact of r-CGM might not have been observed. Moreover, most of the patients in the included trials were receiving antihypertensive medication for blood pressure management; this baseline blood pressure management appears to be the reason why an intervention effect of CGM was not observed. Finally, assessing the influence of CGM on blood pressure was difficult given the few studies included.
Although a meta-analysis of treatment satisfaction after CGM intervention was not conducted, the present study included trials [20,24] that indicated increased treatment satisfaction and another [23] in which no change was noted. The shortening of time spent with hypoglycaemia may explain this difference. In a previous study of patients with type 1 diabetes, a decrease in hypoglycaemic frequency was closely related to patient satisfaction [27]. We made similar observations: a shortening of time spent with hypoglycaemia with CGM in two trials was accompanied by increased treatment satisfaction, whereas limited shortening in one trial was accompanied by unchanged satisfaction. Hence, based on the included trials in patients with type 2 diabetes mellitus, shortening the time spent with hypoglycaemia through CGM intervention may lead to increased treatment satisfaction.
Large-scale clinical trials have shown that strict blood glucose management contributes to reducing the risk of vascular complications in patients with type 2 diabetes mellitus [3,4]. However, avoiding the risk of hypoglycaemia and maintaining patient QOL are also extremely important for glucose management. The present meta-analysis showed that the CGM group exhibited a significantly greater degree of HbA1c reduction (a decrease of approximately 1% from the baseline value) and shorter time spent with hypoglycaemia than the SMBG group. A ≥0.5% improvement in HbA1c levels, or a ≥10% improvement from baseline values, contributes to the inhibition of future cardiovascular events and has been indicated as clinically significant amelioration [28][29][30]. Given that hypoglycaemia and blood glucose fluctuations, which are believed to be related to various poor outcomes, can be underestimated in patients with type 2 diabetes mellitus [31], understanding detailed blood glucose profiles through CGM may be useful. In recent years, the increase in healthcare costs has been noted as a problem. Reportedly, CGM intervention is useful in terms of cost effectiveness in patients with type 1 diabetes [32] and in those with type 2 diabetes [33,34], although the number of reports is limited for the latter. Further investigations are needed on the effects of CGM intervention in patients with type 2 diabetes in alleviating complications, reducing the incidence of cardiovascular disease, and improving QOL and cost effectiveness.
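The clinical-significance thresholds cited above (an absolute HbA1c drop of ≥0.5 percentage points, or a relative drop of ≥10% from baseline) can be expressed as a simple check. This is an illustrative sketch of that criterion, not a formula taken from the cited papers:

```python
def clinically_significant_hba1c(baseline: float, follow_up: float) -> bool:
    """True when the HbA1c reduction meets either threshold cited in the text:
    >= 0.5 percentage points absolute, or >= 10% of the baseline value."""
    drop = baseline - follow_up
    return drop >= 0.5 or (baseline > 0 and drop / baseline >= 0.10)

# The roughly 1% absolute reduction reported for the CGM group easily qualifies:
print(clinically_significant_hba1c(8.0, 7.0))  # True
```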
The present study had several limitations. First, given the small number of RCTs included, the present study might have had insufficient power to detect differences between groups. Second, although previous studies on RT-CGM interventions indicated that the frequency of CGM sensor use influences its effect on HbA1c levels [35], this could not be examined here because of a lack of sufficient data. Third, we cannot exclude the possibility that some literature was missed while searching the databases, which could have influenced the results of the present study. Fourth, the observation periods and evaluation items of the included RCTs varied greatly; therefore, close attention must be paid to the interpretation and generalization of the results. Finally, the quality of the RCTs included in the present study was generally low, and given the presence of heterogeneity, there is some concern regarding the validity of the results.
The present study examined the effects of CGM on blood glucose levels, body weight, blood pressure, and hypoglycaemia in patients with type 2 diabetes mellitus using a meta-analysis of RCTs. The results revealed that the CGM group had significantly lower HbA1c levels and shorter time spent with hypoglycaemia than the SMBG group. On the other hand, no difference in body weight or blood pressure was observed between the CGM and SMBG groups. As previously mentioned, given the few RCTs included as well as the presence of heterogeneity, care is needed when interpreting the results of the present study. Accordingly, further studies addressing the limitations presented herein may be necessary.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
2019-02-05T23:06:27.162Z
|
2019-01-15T00:00:00.000
|
{
"year": 2019,
"sha1": "ec6974d9f1f2473ae4bfc7498d00d9555753e7f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/4684815",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec6974d9f1f2473ae4bfc7498d00d9555753e7f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
267713197
|
pes2o/s2orc
|
v3-fos-license
|
Mass balance of palm waste energy potential in palm oil processing in South West Aceh, Indonesia
The process of land clearing, tillage, and planting in plantations has environmental impacts. The use of fertilizers, both organic and inorganic, contributes to potential emissions during land preparation (262 kg/cycle), tillage (236 kg/cycle), and planting (165 kg/cycle). Land preparation has the highest emissions due to increased fuel consumption. Planting oil palm seedlings aged 1, 2, and 3 years requires significant water (5,160,063.496 tons/cycle, 5,222,991.444 tons/cycle, and 5,411,774.030 tons/cycle, respectively). Outputs in years 4-7 include groundwater use (5,710,654.467 tons/cycle), 12,750 tons of fresh fruit bunches (FFB) per cycle, 12,878 tons of palm fronds and leaves per cycle, and 19.62% evapotranspiration. In years 8-10, FFB production reaches 24 tons/cycle, with 12,878.79 tons/cycle of fronds and leaves, and 19.63% evapotranspiration. In years 11-14, FFB production is 26 tons/cycle, with 6.435 tons/cycle of fronds and leaves. The water requirement remains at 80.37%. For oil palm aged >19 years, FFB production decreases to 18 tons/cycle, with fronds and leaves remaining the same. Electrical waste energy (E) generated by 2050 totals 7,343,834.558 GW, increasing from 2016-2032 and plateauing from 2033-2050. Factory energy needs (Ep) at 20% power plant efficiency are 1,468,766.912 GW, while waste-derived energy (P) is 167.667 GW.
Introduction
The research approach used is descriptive quantitative research, describing the state of the object under study according to the facts in the field (existing facts): concrete, observable and measurable. It covers all field studies on the utilization of the palm oil wastes shell, fiber, empty fruit bunch and liquid waste (palm oil mill effluent, POME). For POME, the study is based on the methane gas (CH4) produced per 1 m³. This approach refers to the biomass calorific values [1] [2] [3] [4] [5] as well as the conversion value of each waste [6] [7].
The production of fresh fruit bunches (FFB) provides the raw material for processing into crude palm oil (CPO) and palm kernel oil (PKO). The side streams, solid waste and liquid palm oil mill effluent (POME), are produced in large amounts and can disturb the environment [8] [9], accumulating as unutilized palm waste [10], while mills seek to increase the yield of crude palm oil [11]. Under actual (existing) conditions, the process of producing palm oil is very energy intensive, and the water required in processing generates solid and liquid waste. The waste produced consists of solid waste (oil palm empty fruit bunches, fiber and shells) and POME liquid waste [12] [13]. Palm oil process waste (biomass) generated by palm oil mills can be used to support industrial sustainability, as an alternative energy source that makes waste utilization efficient and economically valuable. The energy conversion of oil palm fresh fruit bunches in the process of producing palm oil (CPO) itself requires energy: electrical energy, water and other process materials.
The utilization of biomass as a means to facilitate sustainable industrial development is of paramount significance. This is prompted by the correlation between the cost of fossil fuel energy and its natural abundance. Biomass refers to organic matter produced during photosynthesis, which can exist in the form of usable goods or residual waste materials. Biomass, together with its byproducts, possesses the potential to serve as an alternative source of energy. Additionally, exploring alternative approaches to waste management can contribute to the promotion of sustainability by using waste as a renewable resource [14] [15]. This technology therefore needs to be utilized for future energy interests.
The agro-industry of sustainable oil palm plantations must apply the concept of utilizing the waste generated (closed-system production) and apply cycles during the production process. Oil palm empty fruit bunch and fiber waste can be returned to plantations as mulch to maintain soil fertility [6], as part of a cycle that reduces the use of inorganic fertilizers. Waste can act as fuel for mill boilers and provide electricity for remote mills [16], be converted into valuable products [17] [18] [19], and underpin regional, national and international palm oil industry development models for palm oil waste bioenergy [18] [20] [21].
The processing of palm fruit bunches begins with harvesting the palm fruit in the plantation, weighing the fruit and entering the shelter, and ends at the processing plant, where lorries pull the fruit to begin processing at the processing stations. The process takes place in the zone of each FFB processing station: sterilization, stripping/thresher, digester, pressing, continuous settling tank (CST), sludge tank, sludge separator tank, back to the CST, then oil purifier, vacuum dryer, and finally the crude palm oil (CPO) and palm kernel oil (PKO) storage tanks. The palm kernel oil (PKO) process starts with pressing, depericarper, nut cracker, kernel dryer, and PKO kernel tank. In general, the wastes generated from these two processes are reused in the field or burned in power plant boilers [22] [23], while the waste water treatment (WWT) process converts palm oil mill liquid waste (POME) into biogas that produces methane gas (CH4) for electrical energy in an integrated and planned manner [17] [24] [25]. Optimal utilization of residual/waste materials is very beneficial for all waste utilization actors. Therefore, it is necessary to analyze the material and raw material mass flow (input and output) to establish the potential utilization of the waste generated in all processes that occur in the palm oil mill.
This study formulates a model of FFB processing in palm oil mills and of the raw materials involved, to estimate energy production and the use of inorganic and organic fertilizers in the field. The energy contained in the waste is expected to generate electricity via boilers, replacing petroleum fuels (diesel and gasoline) in palm oil mill processing. Primary and secondary data were obtained from palm oil mill companies with processing plants, from related government agencies, and from private plantation companies and oil palm plantation farmers. The mass flow analysis is limited to data available in 2019 at plantation companies that own plantations and FFB processing mills.
The population in this study comprises all palm oil mills and companies in the South Barsela region; the target population is 16 palm oil mills and companies that have factories and plantations, at which observations were made. Data collection techniques include primary data from direct field observations and secondary data from the records of the companies studied. Primary company data were obtained through company management. The research variables include the number of oil palm plantations, the number of palm oil mills, processing capacity and FFB production, the amounts of fiber, shell, empty fruit bunch and liquid waste (palm oil mill effluent), and the specifications of the factory power plants used.
Quantitative Analysis and Mass and Energy Balance of Palm Oil Waste Energy Potential
Palm oil processing is carried out at mills with capacities of 23, 30 and 60 tons per hour across 16 palm oil mills in the South West Aceh (Barsela) region. According to [26], each ton of FFB processed will produce 23% TKKS, 21% oil and 22% mud, the remainder being water (12%), endocarp (6%), kernel (5%) and fiber (22%). Processing at 30 and 60 tons per hour in factories located in the Barsela region produces solid and liquid waste with the potential to be developed in this region as alternative electrical energy [27] [28] [7], sourced from the waste generated by the palm oil process. This energy sustainability will affect the demand for electricity sourced from waste, as well as the continuity of the electrical energy supply in Barsela.
The waste energy produced can represent surplus energy, which can be evaluated using the following equation:

Ep = η × E

Here, Ep represents the energy consumption associated with the palm oil mill (POM) process, or the quantity of electrical energy supplied to the PLN grid system; η denotes the overall efficiency of the power generation process, currently 20%; and E represents the energy content of the palm oil waste, including TKKS, fiber and shell. The electrical energy requirements of the palm oil mill are met by utilizing oil palm empty fruit bunch (EFB) waste, assuming a temporal progression of power production.
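The relation implied by these variable definitions is confirmed by the figures in the Conclusions, where Ep = 0.20 × E (7,343,834.558 GW × 0.2 = 1,468,766.912 GW). A minimal sketch of that calculation (the function name is ours):

```python
def mill_energy_requirement(e_waste: float, efficiency: float = 0.20) -> float:
    """Ep = eta * E: electrical energy obtainable from the palm-waste energy
    content E at the overall power-plant efficiency eta (20% in the text).
    Units follow the input (GW in the Conclusions)."""
    return efficiency * e_waste

# Total waste energy projected to 2050 (from the Conclusions):
print(round(mill_energy_requirement(7_343_834.558), 3))  # 1468766.912
```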
The energy content of the by-products is calculated from the heating value of each component: the total energy content is obtained by multiplying the mass of each product by the heating value of its component and summing over the by-products.
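A sketch of that summation follows. The heating values below are typical literature figures assumed for illustration only; they are not given in this paper. The hourly masses are those quoted for Table 1 later in the text:

```python
# Assumed (typical literature) heating values, MJ per kg -- illustrative only,
# not values from this paper.
HEATING_VALUE_MJ_PER_KG = {"EFB": 18.8, "fiber": 19.1, "shell": 20.1}

def total_energy_mj(masses_kg: dict) -> float:
    """Total energy content = sum over by-products of mass x heating value."""
    return sum(m * HEATING_VALUE_MJ_PER_KG[k] for k, m in masses_kg.items())

# Hourly solid-waste masses quoted in the text (EFB/TKKS, fiber, shell):
masses = {"EFB": 16_801.0, "fiber": 10_556.37, "shell": 3_653.47}
energy_mj_per_hour = total_energy_mj(masses)
```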
Mass and Energy Balance
The determination of the mass and energy balance is predicated on the principle of conservation of mass and energy: the total amount of mass and energy in a closed system remains constant over time, with no creation or destruction, although conversion between forms is possible. The mass and energy balance here involves no chemical reactions; it follows the FFB through the series of processing steps: sterilization, stripping/thresher, digester, pressing, continuous settling tank (CST), sludge tank, sludge separator tank, return to the CST, then oil purifier, vacuum dryer, and finally the crude palm oil (CPO) and palm kernel oil (PKO) storage tanks. The palm kernel oil (PKO) process comprises pressing, depericarper, nut cracker, kernel dryer, and PKO kernel tank.
The basic mass balance is computed for 1 ton of FFB, from which the balance for a 30-ton FFB process can be calculated. The mass balance uses the following equation:

m_input = m_output (4)

where m_input is the mass input (kg) and m_output is the mass output (kg). The assumptions employed in the computation of the energy balance are as follows: the energy input to the process consists of power and fuel, while the energy output takes the form of carbon emissions; the flow at both inlet and outlet is one-dimensional; and kinetic and potential energy are disregarded.
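Equation (4) amounts to a closure check at each station: every kilogram of FFB entering must appear in some output stream. A minimal sketch, using a hypothetical output split loosely based on the per-ton shares from [26] quoted earlier (the "water_losses" term is ours, added to close the balance):

```python
def mass_balance_ok(m_in_kg: float, outputs_kg: dict, tol: float = 1e-6) -> bool:
    """Check the steady-state balance m_input = m_output (equation 4)."""
    return abs(m_in_kg - sum(outputs_kg.values())) <= tol

# Hypothetical split of 1,000 kg FFB (shares loosely based on [26]):
outputs = {"EFB": 230.0, "oil": 210.0, "sludge": 220.0, "fiber": 220.0,
           "shell": 60.0, "kernel": 50.0, "water_losses": 10.0}
print(mass_balance_ok(1000.0, outputs))  # True -- the balance closes
```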
A comprehensive mass and energy balance analysis is performed for all processes involved in the production of crude palm oil (CPO), palm kernel oil (PKO), solid waste, and liquid waste. The analysis covers sterilization, stripping/thresher, digester, pressing, continuous settling tank (CST), sludge tank, sludge separator tank, oil purifier, vacuum dryer, and storage tank. For PKO, the mass and energy balance passes through several stages: pressing, depericarper, nut cracker, kernel dryer, and kernel tank [32]. For inputs to the oil palm life cycle, attention focuses on the use of fertilizers, herbicides, pesticides and the water needs of the oil palm plants during maintenance up to the harvest period, which can produce carbon emissions (CO2eq) throughout the process until the replanting period [33] [34].
Energy Balance in Each Process Station
The energy balance at each process station is calculated from the total fuel consumed during the process and the resulting carbon emissions (CO2eq) [16] [35] [36] [37], together with the potential energy generated from each category of solid and liquid waste. For liquid waste, this can be estimated from the potential methane gas produced by each waste category, converted into kWh.
Potential Methane (CH4) generated from Wastewater
The energy potential of the wastewater is determined for anaerobic wastewater treatment. The effluent from the treatment process has a biochemical oxygen demand (BOD) ranging from around 25,000 to 29,000 mg L⁻¹ (average 27,000 mg L⁻¹) [38] [39]. According to empirical findings, a BOD of 27,000 mg L⁻¹ (equivalent to 27 kg m⁻³) must be eliminated to ensure the safe release of the wastewater. According to [40], the highest achievable methane (CH4) production is 0.6 kg of CH4 per kg of BOD eliminated. The methane gas generated from the wastewater is estimated using equation (5):
Based on equation (5), the potential methane gas produced per year can be estimated for the observed biogas. The energy potential is then derived by assessing the methane gas potential produced from the palm oil mill effluent (POME). Equation (6) is used to determine the electrical energy output of the potential biogas, considering the calorific value of methane gas:

E = (V_CH4 × 1.17) × LHv × 0.35 (6)

The potential energy values obtained from the calculations are expressed as electrical potential per ton of biomass (kWh/ton biomass).
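Combining the conversion factors listed in the paper's note for equation (6) (1.17 from m³ to Nm³, LHv = 35.8 MJ Nm⁻³, and 0.35 biogas-to-electricity efficiency), the electrical output of a given methane volume can be sketched as follows; the MJ-to-kWh conversion step is our addition:

```python
MJ_PER_KWH = 3.6             # 1 kWh = 3.6 MJ (standard conversion, our addition)
M3_TO_NM3 = 1.17             # conversion value from m^3 to Nm^3 (from the text)
LHV_CH4_MJ_PER_NM3 = 35.8    # lower heating value of methane (from the text)
BIOGAS_TO_ELEC_EFF = 0.35    # biogas-to-electricity efficiency (from the text)

def electricity_kwh(v_ch4_m3: float) -> float:
    """Equation (6) as reconstructed: E = (V_CH4 x 1.17) x LHv x 0.35,
    then converted from MJ to kWh."""
    energy_mj = v_ch4_m3 * M3_TO_NM3 * LHV_CH4_MJ_PER_NM3 * BIOGAS_TO_ELEC_EFF
    return energy_mj / MJ_PER_KWH

print(round(electricity_kwh(100.0), 1))  # 407.2 kWh from 100 m^3 of CH4
```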
Mass balance model of the Palm Oil Processing Process
The processing of palm fruit bunches begins with harvesting the palm fruit in the plantation. The FFBs are weighed, enter the storage area, and are then pulled by lorries to the processing area to begin processing at the processing stations. The process takes place in the zone of each FFB processing station: sterilization, stripping/thresher, digester, pressing, continuous settling tank (CST), sludge tank, and sludge separator tank; back to the CST, then oil purifier, vacuum dryer, and finally the crude palm oil (CPO) and palm kernel oil (PKO) storage tanks.
The palm kernel oil (PKO) process starts with pressing, depericarper, nut cracker, kernel dryer, and PKO kernel tank. In general, the wastes generated from these two processes are reused in the field or burned in power plant boilers [22] [23], while the waste water treatment (WWT) process converts POME into biogas, producing methane gas (CH4) that can be utilized for electrical energy in an integrated and planned manner [17] [24] [25]. Optimal utilization of residual or waste materials is very beneficial for all actors in waste utilization. Therefore, it is necessary to analyze the material and raw material mass flow (input and output) to establish the potential utilization of the waste generated in all processes that occur in the palm oil mill.
The processing of palm oil (CPO) at a capacity of 30 tons per hour applies the principle of mass flow: inputs produce outputs. The material inputs used are inseparable from the process at each station needed to produce palm oil with good yields. Processing FFB into palm oil must pass through stages that consume water and electricity, and these processes are interconnected between stations, because the output of one station becomes the input of another. This process runs 20 hours per day, 25 days per month and 300 days a year, processing 30 tons of FFB per hour and producing solid and liquid waste [41]. Cumulatively, the processing uses electricity to drive turbines or boilers fuelled by diesel or petroleum, which can pollute the air, contribute to the greenhouse effect (global warming potential), and form CO2eq carbon emissions (CO2, NOx and CH4) [35] [42] [36] in the air around processing plants in plantation companies. The mass and energy balance model used to produce crude palm oil (CPO) covers the processing steps of boiling (sterilizer), stripping (thresher), pulp removal (digester), pressing, clarification or initial purification (continuous settling tank), sludge tank, sludge separator tank, oil purifier, vacuum dryer, and palm oil storage tank (CPO tank).
Potential Palm Waste in the Boundary Cradle to Grave System
The FFB processing that produces palm oil at each process station, at 30 and 60 tons per hour, yields solids, liquids and gases that are lost to the process. Stations such as stripping, depericarper and hydrocyclone produce wastes with energy potential: TKKS from stripping, fiber from the depericarper, and shell from the hydrocyclone, each with its own compound composition for the process that takes place.
Processing 30 tons of FFB per hour generates waste from the stripping, depericarper and hydrocyclone stages: TKKS, fiber and shell respectively, amounting to 8,092.27 kg of TKKS, 5,348.17 kg of fiber and 1,939.76 kg of shell [32]. More details can be seen in Table 1. Source: [32]. Table 1 shows the dominance of oil palm empty bunches at 16,801 kg h⁻¹, fiber at 10,556.37 kg h⁻¹ and shell at 3,653.47 kg h⁻¹; as percentages, TKKS (28%), fiber (17.61%) and shell (6.06%). [14] conducted research on palm oil solid waste with ratios of 30% and 70% for this solid waste, while other researchers converted TKKS into biohydrogen [43] [44] [45], biogas [27] [46] [47] and bioethanol [48]. Source: [32]. The liquid waste generated, in the form of water and sludge, comes from the sterilization station, sludge tank, sludge separator tank, oil purifier, and hydrocyclone. Each station produces different liquid waste, with different compositions of solids and water: the water-containing liquid waste comes from the sterilization station and hydrocyclone, while sludge is produced from the sludge tank, sludge separator tank, and oil purifier. The water-containing fraction amounts to 28,337.73 kg h⁻¹ and the sludge fraction to 13,340.097 kg h⁻¹, giving a total liquid waste of 69.46% (41,677.83 kg h⁻¹). This liquid waste can potentially be developed as another energy source [17] [7], for biohydrogen [49] [32] [50], and for water recovery [51]. Source: [32]. For the larger capacity, the liquid waste generated in the form of water and sludge likewise comes from the sterilization station, sludge tank, sludge separator tank, oil purifier, and hydrocyclone, with the water-containing liquid waste produced by the sterilization station and hydrocyclone, while sludge is produced from
sludge tank stations, sludge separator tanks, and oil purifiers. Here, the water-containing fraction amounts to 11,065.63 kg h⁻¹ and the sludge fraction to 5,948 kg h⁻¹, giving a total liquid waste of 65.81% (16,814.23 kg h⁻¹). This liquid waste can potentially be developed as another energy source [17] [7], for biohydrogen [49] [32] [50], and for water recovery [51]. Source: [32]. The wastewater potential can be converted into electricity. This is done by utilizing the liquid waste in POME ponds and ultimately capturing the methane for conversion into electrical energy [18]. Biogas electricity has long been recognized in the industrial world, processed through a series of steps to produce renewable energy for industry and/or government, industrial users and stakeholders [52]. The total liquid waste produced by the process at each station, water plus solids, amounts to 28,337 kg h⁻¹ (67.99%). According to others [49] [53], this liquid waste can potentially be developed as electricity (kWh) and as a heat energy source (MJ).
Conclusions
The mass balance of inputs and outputs for land clearing, tillage and planting in plantation areas shows the environmental impact of input use in oil palm plantations. The use of fertilizers, organic and inorganic, contributes to the potential emissions of land preparation (262 kg cycle⁻¹), tillage (236 kg cycle⁻¹) and planting (165 kg cycle⁻¹). The largest potential emission arises during land preparation because more diesel and gasoline are used to clear the land. The total electrical waste energy (E) generated up to 2050 amounts to 7,343,834.558 GW. The energy potential increases over 2016-2032 to 7,343,805.845 GW and remains stagnant from year 17 onwards (2033-2050). The electrical energy required by the factory process (Ep), at 20% power plant efficiency, is 1,468,766.912 GW, while the electricity generated from waste (P) is 167.667 GW.
Note for equation (6): 1.17 = conversion value from m³ to Nm³; LHv = lower heating value of methane (35.8 MJ Nm⁻³); 0.35 = conversion efficiency from biogas to electricity.
Figure 1. Balance sheet of 30 tons per hour FFB processing at the mill
Figure 2. Balance sheet of 60 tons per hour FFB processing at the mill
Table 1. Potential solid waste in the 30 ton per hour palm oil process
Table 2. Potential solid waste in the 60 ton per hour palm oil process
|
2024-02-17T16:11:41.080Z
|
2024-02-01T00:00:00.000
|
{
"year": 2024,
"sha1": "f497578f3eb3fc5b9bde11d8eadbaadb92ca0237",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1297/1/012076/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f122214b17cbaa5647273722a52a338b6b331a61",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
}
|
195656084
|
pes2o/s2orc
|
v3-fos-license
|
Motor Neuron Susceptibility in ALS/FTD
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterized by the death of both upper and lower motor neurons (MNs) in the brain, brainstem and spinal cord. The neurodegenerative mechanisms leading to MN loss in ALS are not fully understood. Importantly, the reasons why MNs are specifically targeted in this disorder are unclear, when the proteins associated genetically or pathologically with ALS are expressed ubiquitously. Furthermore, MNs themselves are not affected equally; specific MNs subpopulations are more susceptible than others in both animal models and human patients. Corticospinal MNs and lower somatic MNs, which innervate voluntary muscles, degenerate more readily than specific subgroups of lower MNs, which remain resistant to degeneration, reflecting the clinical manifestations of ALS. In this review, we discuss the possible factors intrinsic to MNs that render them uniquely susceptible to neurodegeneration in ALS. We also speculate why some MN subpopulations are more vulnerable than others, focusing on both their molecular and physiological properties. Finally, we review the anatomical network and neuronal microenvironment as determinants of MN subtype vulnerability and hence the progression of ALS.
INTRODUCTION
Amyotrophic lateral sclerosis (ALS) is a late-onset, progressive and fatal neurodegenerative disease which primarily affects motor neurons (MNs) of the motor cortex of the brain, brainstem motor nuclei and anterior horn of the spinal cord (Kiernan et al., 2011;Renton et al., 2014;Al Sultan et al., 2016;Taylor et al., 2016). ALS commonly begins in late-adulthood, when patients first experience focal symptoms, such as weakness in the limb or bulbar muscles, as well as widespread fasciculations. The disease then usually progresses in an organized way to adjacent areas of the central nervous system (CNS), and consequently symptoms appear in other regions of the body. Several clinical subsets of ALS can be distinguished by the anatomical location first affected (Renton et al., 2014;Taylor et al., 2016). This includes bulbar onset, where symptoms first appear in the muscles controlling speech, mastication and swallowing; and limb onset, where symptoms present initially in the upper (arm or hand) or lower limbs (leg or foot). Bulbar onset patients face a much worse prognosis than those with spinal onset ALS, where the average survival time following diagnosis is less than 2 years. However, in patients with the much rarer respiratory onset form (3-5%), the prognosis is even worse as the survival time following diagnosis is only 1.4 years (Swinnen and Robberecht, 2014). At disease end stage, only support and palliation are available, and patients usually die from respiratory failure, typically 3-5 years after diagnosis (Taylor et al., 2016). There are currently few effective treatments. Hence there is an urgent need to understand the underlying causes and risk factors for ALS to discover new therapeutic targets.
Neurons have complex and extended morphologies compared to other cell types, and within the CNS, neurons can vary greatly in their properties. MNs are unique cells amongst neurons because they are large, even by neuronal standards, with very long axons, up to 1 m in length in an adult human. MNs can be distinguished into two main categories according to their location in the CNS: upper MNs (UMNs) located in the cortex, and lower MNs (LMNs) located in the brainstem and spinal cord. The spinal MNs comprise both visceral MNs of the thoracic and sacral regions, which control autonomic functions, and somatic MNs, which regulate the contraction of skeletal muscles and thus control movement. The diversity of MNs reflects the variety of targets they innervate, including a wide range of muscle fiber types. UMNs and LMNs differ in the location of their cell bodies, the neurotransmitters released, their targeting and symptoms resulting from their injury.
It is unknown why MNs are specifically targeted in ALS and remarkably, MNs are not equally affected (Rochat et al., 2016;Nijssen et al., 2017). Whilst both UMNs and LMNs are involved, some LMN subtypes are relatively resistant to neurodegeneration. Spinal cord and hypoglossal MNs are amongst the first to degenerate, hence the ability to speak, breathe and move is lost early in the disease course. As ALS progresses, specific MN subtypes then preferentially deteriorate. However, some MNs are spared until disease end stage, such as oculomotor neurons and Onuf's nuclei MNs, and as a result, patients retain normal visual, sexual and bladder function throughout the disease course. The resistant MNs differ significantly from the vulnerable MNs anatomically and functionally, and they possess distinct transcriptomes and metabolic and developmental profiles. Surprisingly, there are also differences in vulnerability amongst spinal MNs, because those that are part of the faster motor units degenerate before those in the slower motor units (Frey et al., 2000;Pun et al., 2006;Hegedus et al., 2007;Hadzipasic et al., 2014;Sharma et al., 2016;Spiller et al., 2016a), thus adding further complexity to the question of MN vulnerability.
ALS shares clinical and pathological features with frontotemporal dementia (FTD), a type of dementia that involves impaired judgment and executive skills. In FTD, the loss of cortical MNs is accompanied by loss of neurons in the frontal and temporal cortices, which correlates clinically with the symptoms of FTD (Neumann et al., 2006;Burrell et al., 2016). The relationship between ALS and FTD has been confirmed through genetic studies, and these two conditions are now considered to be at opposite ends of the same disease continuum (Taylor et al., 2016;Shahheydari et al., 2017). Hence, while ALS was historically judged as a disorder affecting the motor system only, it is now recognized that non-motor features are present (Fang et al., 2017). A wealth of evidence also demonstrates that ALS is a heterogeneous disorder. The clinical symptoms, including the proportion of UMN and LMN signs, age of onset, disease duration, and association with other conditions, are major features contributing to its highly variable phenotypes. As well as the development of FTD (Strong and Yang, 2011), ALS can also involve cognitive impairment in up to 50% of patients (Tsermentseli et al., 2012), the autonomic nervous system (Piccione et al., 2015), supranuclear gaze systems (van der Graaff et al., 2009;Donaghy et al., 2011), and extrapyramidal motor signs (Pradat et al., 2002). Sensory, olfactory and visual dysfunction have also been described in some patients (Bede et al., 2016). In addition, there are also other conditions affecting MNs that share similarities, but also striking differences, with ALS. In particular, primary lateral sclerosis (PLS) affects UMNs but progresses much more slowly than ALS. It also has a significantly lower mortality rate (Tartaglia et al., 2007), consistent with the relative resistance of LMNs in ALS.
One of the main pathological characteristics of ALS is the presence of insoluble protein inclusions in the soma of MNs. TAR DNA binding protein-43 (TDP-43) is the major component of these inclusions (Arai et al., 2006;Neumann et al., 2006) in almost all (∼97%) ALS patients and ∼50% FTD patients (Arai et al., 2006;Neumann et al., 2006;Mackenzie et al., 2007;Scotter et al., 2015;Le et al., 2016). Loss of TDP-43 from the nucleus is evident in MNs from ALS/FTD patient tissues, concomitant with the formation of TDP-43 inclusions in the cytoplasm of both MNs and glia. Neuropathological studies have also revealed that the clinical course of ALS reflects the presence of TDP-43 pathology, from its deposition at an initial site of onset, to its spread to contiguous regions of the CNS. Mutations in TDP-43 are also present in 5% of familial forms of ALS (Sreedharan et al., 2008). In the genetic types of ALS, it remains unclear why MNs are specifically affected when the mutant proteins are ubiquitously expressed. Males are affected more by ALS than females, and ethnic populations show differences in the incidence rates of ALS, further highlighting the contribution of genetics to ALS.
Whilst our understanding of the etiology of ALS has increased significantly in recent years, major gaps in our knowledge remain. In this review, we address several unanswered questions regarding the unique susceptibility of specific types of MNs in ALS: Why does neurodegeneration spread throughout specific neural networks? How can ubiquitously expressed genes be selectively toxic to MNs? Why are some MN subtypes more vulnerable to degeneration than others? We also discuss the role of the neuronal network and the specific cellular microenvironment in driving cell-to-cell disease progression, plus the importance of genetics in influencing susceptibility of specific neuronal subpopulations. Finally, we discuss the role of aging as a potential risk factor for the susceptibility of specific MN subtypes. A thorough comprehension of why specific cell types degenerate is imperative to our understanding of ALS because it provides important clues as to what initiates neurodegeneration, and how this knowledge may be harnessed therapeutically.
ANATOMY OF THE MOTOR SYSTEM
In the CNS, the motor cortex, basal ganglia, cerebellum, and parts of the brainstem are directly involved in the planning and initiation of movement. In contrast, the precise timing and pattern of movement is generated by MNs located in the spinal cord (Figure 1; Kiehn, 2016). The corticospinal (anterior and lateral) tract is the largest descending tract in humans. The lateral corticospinal tract originates in the primary motor cortex, which lies in the precentral gyrus, and sends fibers to the muscles of the extremities. Innervation is contralateral, so that the left motor cortex controls the voluntary movement of the right extremities and vice versa (Javed and Lui, 2018). MN outputs are not confined to the peripheral muscles, however, but also include excitatory terminals to a group of interneurons, Renshaw cells, and to other MNs.
Glutamate (cortex, spinal cord) and acetylcholine (spinal cord) modulate excitatory input within neurons, whereas GABA and glycine facilitate inhibitory neurotransmission (Ramírez-Jarquín et al., 2014). At the neuromuscular junction (NMJ), only acetylcholine acts at the synapse but interestingly, synaptic transmission between MNs in the spinal cord involves both acetylcholine and glutamate (Bhumbra and Beato, 2018). Renshaw cells are excited through both acetylcholine and glutamate receptors and spinal MNs co-release glutamate to excite Renshaw cells and other MNs, but not to excite muscles (Nishimaru et al., 2005;Bories et al., 2007;Bhumbra and Beato, 2018). Hence, different synaptic transmission systems are present at different postsynaptic targets of MNs (Bhumbra and Beato, 2018).

FIGURE 1 | Organization of the human corticospinal tract. MN groups vulnerable and resistant to degeneration in ALS are shown in red and blue, respectively.
However, MNs are not homogeneous throughout the CNS because they exhibit distinct morphologies and patterns of connectivity, which underlie their different physiological functions. Hence, within a single region, MNs that perform closely related functions can be further subdivided, both anatomically and physiologically. The identities of specific MN subtypes and their target projections are controlled by selective cell-type expression of transcription factors, notably members of the Hox, LIM, Nkx6, and ETS families (Stifani, 2014). This provides the fundamental mechanism for spinal MN diversification and connectivity to specific peripheral muscle targets. Thus, to generate movement, MNs integrate information from sensory structures and transform it into precisely timed and graded activation of muscles.
A MN located in the spinal cord innervates up to several hundred fibers within one muscle, which together form the motor unit. Trains of action potentials within the axon cause the release of acetylcholine at the NMJ, which activates nicotinic receptors on the muscle fibers the MN innervates. This initiates a cascade of signaling events in the muscle fiber that leads to its contraction. A motor pool consists of all the individual MNs that innervate a single muscle. A muscle unit (one muscle and its motor pool) is composed of three different types of functional motor units consisting of alpha (α), beta (β), and gamma (γ) MNs, which are classified according to the contractile activity of the muscle fiber innervated. We will now discuss in more detail the anatomy of those structures involved in movement.
A distinct group of MNs in the sacral spinal cord, termed 'Onuf's' neurons, innervate the striated muscles of the external urethral and external anal sphincters via the pudendal nerve, and the ischiocavernosus and bulbocavernosus muscles in males (Sato et al., 1978;Nagashima et al., 1979;Kuzuhara et al., 1980;Roppolo et al., 1985). These MNs are histologically similar to limb α-MNs (Mannen et al., 1977) and they are located anteromedial to the anterolateral nucleus, extending between the distal part of the S1 segment and the proximal part of S3.
α-motor units can be subdivided according to their contractile properties into fast-twitch (F) and slow-twitch (S) fatigue-resistant types (Table 3). In addition, fast-twitch α-motor units can be further categorized into fast-twitch fatigable [FF] and fast-twitch fatigue-resistant [FR] types, based on the length of time they sustain contraction. The basis of this classification is the duration of the twitch contraction time. F- and S-MNs also exhibit different afterhyperpolarization (AHP) durations. AHP is the phenomenon by which the membrane potential undershoots the resting potential following an action potential. S-MNs have a longer AHP than F-MNs, indicating that S-MNs have a longer "waiting period" before they can be stimulated by an action potential. Thus, they cannot fire at the same frequency as F-MNs (Eccles et al., 1957), so the larger FF-MNs take longer to reach an activation threshold. Similarly, other electrical properties differ between S- and F-MNs (Table 3), including their input resistance (a measure of resistance over the plasma membrane) and rheobase (a measure of the current needed to generate an action potential). S-MNs have a higher input resistance than F-MNs, underlying Henneman's size principle, which postulates that S-motor units are the first to be recruited during movement, followed by FR and then FF units (Henneman, 1957;Mendell, 2005). Hence, a slow movement generating a small force will recruit only S-MNs, whereas a quick and strong movement will recruit F-MNs as well as S-MNs.
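Henneman's size principle described above can be caricatured with a minimal sketch. All numerical values below (input resistances, the drive levels, the voltage threshold) are hypothetical illustration, not physiological measurements; the point is only that units with higher input resistance reach threshold at lower synaptic drive, so recruitment proceeds S, then FR, then FF:

```python
# Minimal sketch of Henneman's size principle (illustrative values only):
# motor units with higher input resistance (S type) reach firing threshold
# first, so the recruitment order for increasing drive is S -> FR -> FF.

# (unit type, input resistance in megaohms; values are hypothetical)
motor_units = [("FF", 0.5), ("S", 1.6), ("FR", 0.9)]

def recruited(units, drive_na):
    """Return the unit types activated by a given synaptic drive (nA).

    A unit fires when drive * input_resistance exceeds a fixed voltage
    threshold (a simplified Ohm's-law view of the membrane).
    """
    threshold_mv = 10.0
    return [t for t, r in units if drive_na * r >= threshold_mv]

# Sorting by descending input resistance gives the recruitment order.
order = [t for t, _ in sorted(motor_units, key=lambda u: u[1], reverse=True)]
print(order)                         # S recruited first, FF last
print(recruited(motor_units, 7.0))   # weak drive: S units only
print(recruited(motor_units, 25.0))  # strong drive: all unit types
```

This also mirrors the rheobase relationship in Table 3: at a fixed threshold, a higher input resistance implies a lower current needed to fire.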
In addition, at least eleven types of interneurons are involved in the control of movement, as part of central pattern generators in the spinal cord. Interneurons arise from five progenitor cells and, according to the expression of distinct transcription factors, they mature into different lineages. This includes excitatory V2a, V3, MN and Hb9 neurons and inhibitory V0C/G, V0D, V0V, V1, V2b, Ia and Renshaw cells (belonging to the V1 interneuron subclass), which display specific locations and projections within the spinal cord (Ramírez-Jarquín et al., 2014).
The Brainstem
Cranial nerve nuclei are populations of neurons in the brainstem that are associated with one or more cranial nerves. They provide afferent and efferent (sensory, motor, and autonomic) innervation to the structures of the head and neck (Sonne and Lopez-Ojeda, 2018). The more posterior and lateral nuclei tend to be sensory, and the more anterior nuclei are usually motor nuclei. Trigeminal MNs innervate the muscles of mastication, whereas facial MNs supply the superficial muscles of the face, and ambiguous MNs supply the muscles of the soft palate, pharynx, and larynx. The oculomotor (III), trochlear (IV) and abducens (VI) nuclei are somatic efferents innervating the extraocular muscles within the orbit. The oculomotor nucleus contains MNs that innervate four of the six extraocular muscles (superior, medial and inferior recti, inferior oblique), plus the levator palpebrae superioris muscle. These muscles display a unique composition of six fiber types, distinct from other skeletal muscles, and possess marked fatigue resistance (Table 4). Oculomotor units are amongst the smallest of the motor units and, in contrast to skeletal muscle motor units, have higher maximum MN discharge rates. Furthermore, α-MNs in oculomotor units have higher resting membrane potentials (∼−61 mV) than spinal cord α-MNs (∼−70 mV), and they also discharge at higher frequencies (∼100 Hz during steady state and ∼600 Hz during saccadic eye movements, compared to ∼100 Hz for spinal cord α-MNs) (Table 4) (Robinson, 1970;Fuchs et al., 1988;Torres-Torrelo et al., 2012). Oculomotor neurons are almost continually active at high frequencies when maintaining eye position (Fuchs et al., 1988;De La Cruz et al., 1989), and this level of activity places high metabolic demand on these cells (Robinson, 1970;Porter and Baker, 1996;Brockington et al., 2013).
The primary motor cortex (M1) modulates precise voluntary movement through long-range projections to the spinal cord. Approximately 30-50% of corticospinal projections originate from M1 MNs, and they begin modulating their firing rate several hundred ms before movement of the limb is initiated (Georgopoulos et al., 1982;Porter and Lemon, 1993). In most mammals, the axons of cortical MNs terminate at spinal interneurons, but they also make direct connections to MNs (Lemon, 2008;Rathelot and Strick, 2009). This constitutes the final efferent pathway to the muscle to generate or suppress movement (Ramírez-Jarquín and Tapia, 2018).
In ALS, degenerating LMNs are located in the ventral quadrant of the spinal cord (Charcot, 1874;Frey et al., 2000;Pun et al., 2006). In the brain, UMNs in the primary cortex are also amongst the first to degenerate in ALS, and similarly, in the brainstem, the hypoglossal MNs that innervate the muscles of the tongue involved in swallowing and breathing are also targeted early in disease course. In the brainstem, ALS can also affect trigeminal MNs, the facial MNs and ambiguous MNs. However, other MN subgroups within this region are relatively resistant to degeneration, including MNs of the oculomotor (III), trochlear (IV) and abducens (VI) nuclei, innervating the extraocular muscles (Mannen et al., 1977;Schrøder and Reske-Nielsen, 1984). Hence, eye movements remain relatively preserved throughout disease course (Kanning et al., 2010) and as a consequence, eye tracking devices are often used to aid communication in the later stages of ALS (Caligari et al., 2013). Whilst it has been reported that oculomotor neurons may be affected at disease end stage, this was recently attributed to dysfunction of the dorsolateral prefrontal cortex, the frontal eye field and the supplementary eye field, confirming the relative resistance of pure oculomotor functions in ALS (Shaunak et al., 1995;Proudfoot et al., 2015). Widespread loss of GABAergic interneurons has also been described in ALS, in both the cortex (Stephens et al., 2001;Maekawa et al., 2004) and the spinal cord (Stephens et al., 2006;Hossaini et al., 2011). MRI studies of ALS patients have revealed that very specific neuronal networks are vulnerable to degeneration in ALS (Bede et al., 2016). However, whilst TDP-43 pathology is the signature pathological hallmark of almost all ALS cases, it can arise in areas of the CNS that are not particularly vulnerable to degeneration (Geser et al., 2008). Significant TDP-43 pathology is present in the substantia nigra and basal ganglia, which are not affected in ALS, as well as in the motor gyrus, midbrain and spinal cord.
Curiously, pathological forms of TDP-43 are also detectable in the occipital lobe, amygdala, orbital gyrus and hippocampus (Geser et al., 2008). Hence, whilst major degeneration of corticobulbar, LMN, pyramidal and frontotemporal networks underlies the widespread clinical symptoms of ALS, it remains unclear how other circuits, such as the visual, sensory, autonomic and auditory systems, remain relatively protected in ALS. These unaffected networks, however, have not been well studied in ALS patients.
Genetics of ALS
Most ALS cases occur without a clearly identified cause and are therefore referred to as sporadic ALS (SALS). In contrast, a positive family history is present in ∼10% of all patients (familial ALS; FALS) (van Blitterswijk et al., 2012;Nguyen et al., 2018) and these genetic mutations cause ALS in a mostly autosomal-dominant manner (Supplementary Table 1 and Figure 2). However, several recently discovered mutations have been described in patients diagnosed with SALS (Renton et al., 2014;Al Sultan et al., 2016;Taylor et al., 2016). The patterns of selective MN degeneration and vulnerability are similar between FALS and SALS (Comley et al., 2015), implying that shared molecular mechanisms exist between the two conditions. The first gene found to harbor mutations causing FALS encodes Cu/Zn superoxide dismutase (SOD1), an enzyme that detoxifies superoxide radicals (Rosen et al., 1993). Mutations in SOD1 account for 12-23.5% of FALS cases, representing 1-2.5% of all ALS, and 186 ALS mutations have now been described. Since then, mutations in approximately 26 genes have been identified (Supplementary Table 1 and Figure 2) using genome-wide or exome-wide association studies combined with segregation analysis. Hexanucleotide repeat expansions (GGGGCC) within the first intron of the chromosome 9 open reading frame 72 (C9orf72) gene are the most common cause of FALS and FTD (∼30-50% of FALS, ∼10% of SALS, ∼25% of familial FTD and ∼5% of apparently sporadic ALS and FTD) (DeJesus-Hernandez et al., 2011b;Renton et al., 2011;Majounie et al., 2012;Devenney et al., 2014) (Supplementary Table 1 and Figure 2), in both Europe and North America (DeJesus-Hernandez et al., 2011b;Renton et al., 2011). However, this mutation is much rarer in Asian and Middle Eastern populations (Majounie et al., 2012;Woollacott and Mead, 2014).
Healthy individuals possess ≤11 GGGGCC repeats in C9orf72 (Rutherford et al., 2012;Harms et al., 2013;van der Zee et al., 2013), whereas hundreds to thousands of repeats are present in ALS/FTD patients (Beck et al., 2013;Harms et al., 2013;van Blitterswijk et al., 2013;Suh et al., 2015). After C9orf72, mutations in SOD1 (20% of FALS), TARDBP encoding TDP-43 (5% of FALS, >50% of FTD) (Rutherford et al., 2008;Sreedharan et al., 2008;Borroni et al., 2010;Kirby et al., 2010), FUS encoding fused in sarcoma (5% of FALS) (Belzil et al., 2009;Blair et al., 2009;Chiò et al., 2009;Kwiatkowski et al., 2009;Neumann et al., 2009;Vance et al., 2009), and CCNF encoding cyclin F (0.6-3.3% of FALS-FTD) are more frequent than the remaining 20 genes mutated in the much rarer forms of FALS (Supplementary Table 1). The physiological functions and properties of the proteins encoded by these genes can be grouped according to their involvement in protein quality control, cytoskeletal dynamics, RNA homeostasis and the DNA damage response. However, it is possible that genetic inheritance could sometimes be missed, due to incomplete penetrance or an oligogenic mode of inheritance, whereby more than one mutated gene is necessary to fully present disease (Nguyen et al., 2018). Consistent with this notion, the frequency of ALS patients carrying two or more mutations in ALS-associated genes is in excess of what would be expected by chance (van Blitterswijk et al., 2012;Veldink, 2017;Zou et al., 2017;Nguyen et al., 2018).
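The repeat-count ranges just described can be encoded as a trivial sketch. Note that the text only defines the two extremes (≤11 repeats normal; hundreds to thousands pathogenic); the cutoff of 30 used below to flag an expansion is an illustrative assumption, and counts between the extremes are labeled as uncertain:

```python
# Sketch of the C9orf72 GGGGCC repeat-count ranges described in the text.
# <= 11 repeats is within the normal range; ALS/FTD expansions span
# hundreds to thousands of repeats. The cutoff of 30 for "expanded" is
# an assumed illustrative threshold, not stated in the text.

def classify_c9orf72(repeats: int) -> str:
    if repeats <= 11:
        return "normal"
    if repeats >= 30:  # assumed illustrative cutoff
        return "expanded"
    return "intermediate/uncertain"

print(classify_c9orf72(2))    # normal
print(classify_c9orf72(700))  # expanded
```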
The B6SJL-TgN(SOD1-G93A)1Gur mouse (Gurney et al., 1994) carries 25 ± 1.5 copies of the transgene within chromosome 12 and as a result, it expresses very high levels of human mutant SOD1 G93A (Alexander et al., 2004). Whilst these significant levels of overexpression are criticized as a major limitation (Alexander et al., 2004), these animals remain the most widely used mouse model for therapeutic studies in ALS (Gurney et al., 1994). These SOD1 G93A mice become paralyzed in the hindlimbs as a result of MN loss from the spinal cord, resulting in death by 5 months of age. Another variant of this model, B6SJL-TgN(SOD1-G93A)dl1Gur, possesses fewer copies of the transgene; 8 ± 1.5 (Gurney, 1997;Alexander et al., 2004). This "low-copy" mouse, hereafter referred to as "G93A-slow" (s-SOD1 G93A), develops a slower disease course in comparison, where paralysis begins at 6-8.5 months of age (Alexander et al., 2004;Muller et al., 2008;Acevedo-Arozena et al., 2011). In addition, several other "low-copy" mouse lines have subsequently been generated, with even fewer copies of the human SOD1 G93A transgene. These models also exhibit greater life spans compared to the higher copy lines (Alexander et al., 2004) (Table 6). Similarly, four lines of mice expressing another SOD1 mutant, SOD1 G37R, at different levels (5-14 times) have been produced, with variable phenotypes (Wong et al., 1995). Multiple mouse models based on transgenic expression of wild type or mutant TDP-43 have also been generated (Philips and Rothstein, 2015) (Table 5). Overexpressing human TDP-43 with a defective nuclear localization signal (NLS) in mice, in the absence of an ALS mutation, results in cytoplasmic expression of hTDP-43 and nuclear TDP-43 clearance. This results in a severe motor phenotype and reduced survival in the resulting 'rNLS8' mice compared to littermate controls (Walker et al., 2015). Several mouse models also exist based on transgenic expression of mutant FUS (Table 5).
These mice display progressive, age- and mutation-dependent degeneration that also models aspects of ALS. Furthermore, several newer models based on the C9orf72 repeat expansion have also been produced, although the phenotypes are more reminiscent of FTD than ALS (Batra and Lee, 2017).
Misfolded Protein Expression Level Influences Susceptibility
The expression of specific proteins can vary between MN subpopulations and this may be linked to their vulnerability to degenerate. Evidence for this hypothesis comes from the existing mouse models of ALS. Whilst mutant SOD1 G93A is expressed in all MNs in these mice (Jaarsma et al., 2008), its propensity to induce neurodegeneration and disease is proportional to its expression level (Table 6) (Gurney et al., 1994;Bruijn et al., 1997;Alexander et al., 2004). At lower levels of expression, pathology is restricted to MNs in the spinal cord and brainstem only, whereas higher expression levels also induce severe abnormalities in the brain. Fewer copies of the SOD1 G37R transgene correlate with delayed disease progression and a significant increase in lifespan compared to animals with higher copy numbers (Table 6) (Zwiegers et al., 2014). Similarly, in TDP-43 models, higher levels of overexpression are associated with a worse phenotype (Philips and Rothstein, 2015). Moreover, disease is evident in both wild-type and mutant TDP-43 models, indicating that the expression level of TDP-43, rather than the presence of a mutation per se, induces neurodegeneration. Hence, the effect of the TDP-43 mutation can be difficult to segregate from the effects of overexpression in these models (Philips and Rothstein, 2015). Retaining both the physiological expression level and the normal nuclear localization of TDP-43 has been linked to maintaining cellular homeostasis (Swarup et al., 2011;Philips and Rothstein, 2015). These studies together highlight the role of differing protein expression levels in the development and progression of ALS. However, further work is required to determine whether the expression levels of mutant ALS-associated proteins differ among MN subtypes, and whether this can differentially sensitize specific MNs to neurodegeneration and stress in ALS.
Selectivity in MN Degeneration in Mouse Models of ALS
Rodent disease models are also useful in studies examining the selective vulnerability of specific MNs within an individual motor pool in ALS. Similar to human ALS, in mouse models based on mutant SOD1 G93A, TDP-43 A315T and FUS P525L, α-MNs selectively degenerate, while γ-MNs and MNs in the Onuf's nucleus are spared (Mannen et al., 1977;Lalancette-Hebert et al., 2016). Also, as in ALS patients, the oculomotor MNs are spared in SOD1 G93A (Niessen et al., 2006) and SOD1 G86R (Nimchinsky et al., 2000) mice, whereas spinal cord MNs, trigeminal, facial and hypoglossal MNs are targeted (Niessen et al., 2006). In rNLS8 mice, MNs in the hypoglossal nucleus and the spinal cord are also involved, whereas those in the oculomotor, trigeminal, and facial nuclei are spared, despite widespread neuronal expression of cytoplasmic hTDP-43 (Spiller et al., 2016a). MNs in the trigeminal motor, facial and hypoglossal nuclei are also significantly atrophied in TDP-43 knockout mice, whereas MNs in the oculomotor nuclei are preserved (Iguchi et al., 2013). In addition, in another TDP-43 model, Prp-TDP43 A315T mice, degeneration of specific neuronal populations occurs (Wegorzewska et al., 2009). Cytoplasmic ubiquitinated proteins accumulate in neurons of cortical layer V and in large neurons of the ventral horn and scattered interneurons, despite expression of the Prp-TDP-43 A315T transgene in all neurons and glia (Wegorzewska et al., 2009). In a knock-in TDP-43 mouse model bearing a G298S mutation, MN loss was restricted to large-diameter α-MNs (Ebstein et al., 2019). Furthermore, in FUS P525L and FUS R521C mouse models, no significant MN loss was detected in oculomotor neurons, whereas spinal cord MNs were progressively lost during disease course. In mutant SOD1 G93A mice, FF α-MNs are more susceptible to degeneration than FR α-MNs, resulting in the FF muscles becoming paralyzed before FR muscles (Hegedus et al., 2007).
Furthermore, tonic S-units only disconnect from the muscle at disease end stage, meaning that S α-MNs are the least vulnerable within motor pools in SOD1 G93A, SOD1 G85R (Frey et al., 2000;Pun et al., 2006;Hegedus et al., 2007;Hadzipasic et al., 2014), TDP-43 rNLS8 (Spiller et al., 2016a), FUS R521C and FUS P525L transgenic models. These findings together therefore provide strong evidence that there is a gradient of vulnerability amongst spinal MNs, whereby the faster, less excitable motor units are affected before the slower, more excitable types, at least in mouse models. Interestingly, selective denervation of MN subtypes occurs at the NMJ. Less denervation of the relatively resistant slow-twitch soleus muscle (Frey et al., 2000), compared to the vulnerable fast-twitch tibialis anterior muscle, occurs in TDP-43 M337V, TDP-43 G298S, FUS P525L, FUS R521C and TDP-43 rNLS8 mouse models (Spiller et al., 2016a;Ebstein et al., 2019). In both the low- and high-copy s-SOD1 G93A and SOD1 G93A mice, the onset of interneuron degeneration also precedes the onset of behavioral motor manifestations and most MN degeneration (Chang and Martin, 2009;Jiang et al., 2009;Pullen and Athanasiou, 2009). Subtle changes to inhibitory synaptic inputs to MNs may therefore modulate MN excitability, leading to degeneration and motor symptoms in ALS/FTD.
NETWORK-DRIVEN MN VULNERABILITY
Genetic mutations are present throughout life in ALS patients (summarized in Supplementary Table 1), but as only specific cellular populations are affected, this implies that the vulnerability of MN subtypes in ALS is not caused wholly by genetic factors. Hence, environmental or extrinsic factors, such as the neuronal circuitry or the microenvironment surrounding MNs, may explain the selective vulnerability of MNs in ALS/FTD.
Site-Specific Onset and Spread of Neurodegeneration in ALS
The pattern of neurodegeneration in ALS/FTD is not random; it targets specific large-scale distributed networks in the brain and spinal cord. Motor manifestations begin in one region of the body in ∼98% of patients (Ravits et al., 2007) accompanied by unilateral, focal damage to MNs in the motor cortex or spinal cord, that innervate the corresponding peripheral body regions. It has been previously suggested that ALS targets specific evolutionarily linked, interdependent functions, and as the disease progresses these deficits combine into failure of specific networks (Eisen et al., 2014). More recently, several clinical studies have revealed that neurodegeneration and TDP-43 pathology spread to continuous anatomical regions during disease course (Ravits et al., 2007;Brettschneider et al., 2013;Walhout et al., 2018), and symptoms arise in the contralateral regions following a unilateral limb onset (Walhout et al., 2018). This also implies that neuronal circuitry might drive disease progression to specific MN populations in ALS/FTD. The spread of misfolded proteins from cell-to-cell, particularly TDP-43, provides a molecular explanation for the specific network and anatomical vulnerability observed in ALS. However, it must be noted that whilst contiguous spread is observed for most patients, this is not the case for all (Ravits and La Spada, 2009).
Increasing evidence suggests that ALS begins in the cortical regions of the brain, which is referred to as the "dying-forward hypothesis." Features of cortical hyperexcitability, heralded by a reduction in short-interval intracortical inhibition, have been detected during the early phases of ALS in transcranial magnetic stimulation studies (Thomsen et al., 2014;Menon et al., 2015). This can precede the clinical onset of bulbar/spinal motor dysfunction by ∼3-6 months (Vucic et al., 2008;Bakulin et al., 2016). The dying-forward hypothesis is consistent with Charcot, who first postulated that ALS begins in the cortex (Charcot, 1874). Clinical observations that MNs without monosynaptic connections to cortical MNs, such as the oculomotor, abducens, and Onuf's nuclei, are spared in ALS, and that pure LMN forms of ALS are rare, also support this hypothesis. Further evidence is provided by the observation that MNs receiving direct, monosynaptic cortical input also predominantly develop TDP-43 pathology, while subcortical MNs do not (Eisen et al., 2017). Similarly, TDP-43 pathology develops in patients only in structures under the control of corticofugal projections (Menon et al., 2015;Eisen et al., 2017). TDP-43 pathology may then propagate through corticofugal axons to the spinal cord and regions of the brain (Braak et al., 2013;Eisen et al., 2017) in a time-dependent and region-specific manner, consistent with the dying-forward hypothesis (Figure 3). This sequential pattern of TDP-43 dissemination is consistent with the hypothesis that TDP-43 pathology is propagated synaptically from cell to cell (Brundin et al., 2010;Maniecka and Polymenidou, 2015), in a similar way to the pathogenic prion protein, a concept known as the "prion-like mechanism" (Lee and Kim, 2015;Ayers and Cashman, 2018). In this model, misfolded proteins act as template seeds to trigger aggregation of their natively folded counterparts. This results in the propagation of protein misfolding, leading to its orderly spread through the CNS (Soto, 2012;Maniecka and Polymenidou, 2015).

FIGURE 3 | Schematic diagram representing the typical spread of neurodegeneration following an initial onset in motor neurons in ALS patients (n = 76 patients). Shading represents TDP-43 pathology.
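The connectome-constrained, prion-like spread described above can be caricatured as breadth-first propagation over a graph of anatomically connected regions. The region names and connections below are a hypothetical simplification for illustration, not a validated human connectome; the sketch only shows why pathology reaches contiguous, connected regions while unconnected circuits are never seeded:

```python
from collections import deque

# Illustrative adjacency list of anatomically connected CNS regions
# (hypothetical simplification; not a validated connectome).
connections = {
    "motor cortex": ["brainstem", "spinal cord"],
    "brainstem": ["motor cortex", "spinal cord"],
    "spinal cord": ["brainstem"],
    "occipital lobe": [],  # no modeled corticofugal link -> never seeded
}

def spread_order(seed):
    """Breadth-first spread of pathology from a seed region.

    Misfolded protein can only seed regions connected to an
    already-affected region, mimicking contiguous spread.
    """
    affected, queue = [seed], deque([seed])
    while queue:
        region = queue.popleft()
        for neighbour in connections.get(region, []):
            if neighbour not in affected:
                affected.append(neighbour)
                queue.append(neighbour)
    return affected

print(spread_order("motor cortex"))
# dying-forward caricature: cortex seeds brainstem and spinal cord,
# while unconnected regions (occipital lobe) are never reached
```

The same traversal started from a peripheral node would model the competing dying-back direction; the graph, not the seed, is what constrains which regions can ever be affected.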
However, the question of where disease begins remains controversial, because many researchers still favor the "dying-back" hypothesis, in which ALS begins within the muscle cells or at the NMJ. This hypothesis proposes that pathology spreads from LMNs to UMNs (Chou and Norris, 1993; Fischer et al., 2004; Pun et al., 2006; Turner et al., 2018), or else that both UMNs and LMNs are involved simultaneously (Turner et al., 2018). Whilst most of the evidence for the dying-back mechanism comes from animal models, studies of muscle biopsies from early-stage ALS patients and long-term survivors have demonstrated significant morphological abnormalities and major denervation/re-innervation at the NMJ, implying that this region is targeted early in disease (Millecamps et al., 2010; reviewed in Arbour et al., 2017).
There is evidence to support the prion-like model in ALS. The spread of neurodegeneration through adjacent anatomical regions of the CNS resembles the orderly spread of protein misfolding in prion disease. The in vitro cell-to-cell transmission of misfolded SOD1, TDP-43 and C9orf72 di-peptide repeat proteins has been demonstrated (Grad et al., 2011, 2014; Münch et al., 2011; Nonaka et al., 2013; Feiler et al., 2015; Porta et al., 2018). Similarly, the addition of cerebrospinal fluid from ALS/FTD patients (Ding et al., 2015), detergent-insoluble fractions of ALS-disease brains (Nonaka et al., 2013) or insoluble phosphorylated TDP-43 from post-mortem brain and spinal cord tissue (Smethurst et al., 2016) to human cell lines results in misfolding of TDP-43. However, so far, only misfolded SOD1 and TDP-43 transmissibility has been demonstrated in vivo (Ayers et al., 2014, 2016; Porta et al., 2018). A recent study demonstrated that injection of brain-derived extracts from FTD patients into mice promoted the spatio-temporal transmission of TDP-43 pathology via the neuroanatomical connectome, suggesting that TDP-43 travels via axonal transport through connected regions of the CNS (Porta et al., 2018). Similarly, axonal transport is implicated in the spread of mutant SOD1 in mice (Ayers et al., 2016). Overexpression of misfolded TDP-43 or SOD1 facilitated the seeding ability of each inoculum, consistent with results obtained in vitro (Nonaka et al., 2013; Feiler et al., 2015; Smethurst et al., 2016).
Whilst these animal studies demonstrate that ALS spreads within MNs that are connected synaptically, a small proportion of patients do not display this contiguous spreading of pathology. This implies the existence of alternative mechanisms of disease progression (Fujimura-Kiyono et al., 2011; Gargiulo-Monachelli et al., 2012), such as the transfer of misfolded proteins in nanotubules or exosomes (Nonaka et al., 2013; Sundaramoorthy et al., 2013; Grad et al., 2014; Ding et al., 2015; Feiler et al., 2015; Westergard et al., 2016). Interestingly, it has been suggested that the vulnerability of specific MN populations is associated with the spread of neurodegeneration in ALS (Fu et al., 2018).
Role of Glial Cells in Driving Disease Progression
There is increasing evidence for a role of neighboring non-neuronal cells in ALS. Under normal conditions, glial cells provide nutritional and trophic support to MNs, but in ALS they appear to exacerbate neurodegeneration in a non-cell-autonomous fashion. These cells include microglia, astrocytes, oligodendrocytes and Schwann cells. Limiting the expression of mutant SOD1 to MNs alone does not lead to neurodegeneration in mice (Pramatarova et al., 2001; Lino et al., 2002), and chimeric mouse studies have established that the presence of mutant SOD1 G93A in glial cells induces neurodegeneration and MN loss (Papadeas et al., 2011). Both microglia and astrocytes appear to enhance disease progression by inducing neuroinflammation, whereas oligodendrocytes drive disease initiation. Non-neuronal cells may also be involved in the spread of pathological proteins in ALS (Thomas et al., 2017; Porta et al., 2018). However, whilst misfolded proteins released by MNs can be taken up by glial cells, they may be less toxic to these cells than to MNs (Benkler et al., 2018).
Microglia
Microglia are the main immune cells of the CNS (Fujita and Kitamura, 1975; Hickey and Kimura, 1988; Lawson et al., 1990). In ALS patients, activated microglia increase in CNS regions that are susceptible to neurodegeneration (Kawamata et al., 1992), and in SOD1 G93A mice, enhanced microglial reactivity precedes nerve denervation at the NMJ (Saxena et al., 2009). Microglia exist in both resting and activated states [reviewed in Perry and Holmes (2014)] and in ALS, activated microglia display two distinct phenotypes. The neuroprotective M2 phenotype promotes tissue repair and supports MN survival by releasing neuroprotective factors, whereas the toxic M1 phenotype produces cytokines, enhances inflammation, and induces cell death (Liao et al., 2012). Studies in mutant SOD1 mice reveal that the numbers of microglia increase during disease progression, but they vary between the neuroprotective M2 and toxic M1 phenotypes (Liao et al., 2012; Chiu et al., 2013). In lumbar spinal cords of pre-symptomatic SOD1 G93A mice, the anti-inflammatory M2 microglia predominate (Gravel et al., 2016), whereas at disease onset and during progression, the pro-inflammatory M1 type is more common (Beers et al., 2011). Microglia-specific ablation of mutant SOD1 G37R in mice does not affect disease initiation, but it significantly slows disease progression (Boillée et al., 2006b), indicating that microglia enhance the progression, but not the onset, of disease in transgenic mutant SOD1 mice. However, contradictory findings were obtained in the TDP-43 rNLS8 model, where microglia were neuroprotective rather than neurotoxic (Spiller et al., 2018). Interestingly, knockdown of C9orf72 in mice alters microglial function and induces age-related neuroinflammation, but not neurodegeneration (Lall and Baloh, 2017).
Further investigations are required to examine the role of microglia in other ALS disease models, and to determine whether MN subtypes display different susceptibilities to microglia-mediated protection and/or toxicity in ALS.
Astrocytes
Astrocytes perform multiple homeostatic functions in the CNS; they regulate the plasticity of synapses and the synthesis of neurotransmitters (Ullian et al., 2004; Volterra and Meldolesi, 2005; Sloan and Barres, 2014), they maintain the blood-brain barrier, and they provide neurotrophic support to MNs by releasing glial-derived neurotrophic factor (GDNF) and transforming growth factor β1 (TGF-β1), amongst others. Like microglia, astrocytes can exist in two states, resting or activated, and during the neurodegenerative process activated astrocytes lose their neuroprotective functions and become neurotoxic (Ilieva et al., 2009; Valori et al., 2014; Das and Svendsen, 2015). Also like microglia, astrocytes are implicated in the progression rather than the onset of ALS. Deletion of mutant SOD1 from astrocytes slowed disease progression, but not disease onset, in SOD1 G93A mice (Wang L. et al., 2011), whereas deletion of mutant SOD1 from MNs did delay onset (Boillée et al., 2006a; Wang L. et al., 2009). Furthermore, gene expression changes in MNs, astrocytes and oligodendrocytes start just before disease onset in SOD1 G37R mice, but these alterations are first observed in MNs (Sun et al., 2015). Recently, two different subsets of reactive astrocytes, A1 and A2, were described in the adult CNS (Liddelow et al., 2017; Clarke et al., 2018; Miller, 2018), and the A1 reactive astrocytes were associated with the death of both neurons and oligodendrocytes (Liddelow et al., 2017).
There is increasing evidence that astrocytes mediate MN degeneration via the release of neurotoxic factors. Soluble toxic compounds produced by astrocytes expressing mutant SOD1 trigger the selective loss of spinal MNs (Nagai et al., 2007), but not spinal GABAergic neurons, consistent with the specific vulnerability of these cells in ALS (Nagai et al., 2007). Astrocytes in the ventral spinal cord can be distinguished from astrocytes in the dorsal spinal cord by expression of semaphorin A3 (Sema3A), which is implicated in the specific vulnerability of FF-MNs in ALS (see section "Neuroprotective and Neurotoxic Factor Expression in MN Subpopulations" below). Furthermore, astrocytes are also implicated in MN loss and disease progression by mediating AMPA receptor-induced excitotoxicity via EAAT2/GLT-1, as discussed below (section "Neuronal Excitability"). Expression of mutant TDP-43 M337V in rat astrocytes led to down-regulation of neurotrophic genes, up-regulation of neurotoxic genes and progressive MN degeneration (Tong et al., 2013; Huang et al., 2014). Conditioned medium from primary astrocyte cultures of SOD1 G86R and TDP-43 A315T mice also induces MN death through activation of sodium channels and nitro-oxidative stress (Rojas et al., 2014). Furthermore, astrocytes expressing mutant FUS R521G trigger MN death by secreting pro-inflammatory tumor necrosis factor (TNF)-α (Kia et al., 2018). SOD1 G93A aggregates in astrocytes appear in late disease stages, selectively in regions with extensive neuronal degeneration and prominent astrogliosis (Jaarsma et al., 2008). This raises the possibility that astroglial aggregate formation is triggered by MN degeneration, implying that disease may spread from neurons to glia (Jaarsma et al., 2008; Sun et al., 2015).
Together these studies suggest the involvement of astrocytes in the selective degeneration of MNs in ALS. Under normal conditions, astrocytes may be able to cope with the expression of low levels of misfolded proteins, but, during cell stress or in the context of MN degeneration, they become more vulnerable, and release factors toxic to MNs, thus producing a vicious cycle. However, the relative resistance of neuronal populations surrounded by reactive astrocytes indicates that the vulnerability of MNs is also determined by cell-autonomous components, such as their genetic background and transcriptional/translational profiles (Boillée et al., 2006a;Sun et al., 2015).
Oligodendrocytes and Schwann Cells
The two glial cell types responsible for myelination of axons have also been investigated in the context of ALS. Oligodendrocytes myelinate axons in the CNS, whereas Schwann cells are responsible for myelination in the peripheral nervous system (PNS). Whilst they perform similar functions, there are also important differences between these two cell types. A Schwann cell forms a single myelin sheath around a single axon, whereas an oligodendrocyte myelinates many different axons. Furthermore, there are differences in the protein composition of CNS and PNS myelin.
In ALS, TDP-43 pathology has been detected in oligodendrocytes in the motor cortex and spinal cord of both SALS and FALS patients (Arai et al., 2006; Mackenzie et al., 2007; Tan et al., 2007; Zhang et al., 2008; Seilhean et al., 2009; Murray et al., 2011; Philips et al., 2013). In addition, FUS forms cytoplasmic aggregates in oligodendrocytes from ALS patients bearing FUS R521C or FUS P525L mutations (Mackenzie et al., 2011). Degeneration of oligodendrocytes and their precursors was also linked with axon demyelination in both SALS and FALS patients (Kang et al., 2013). In SOD1 G93A mice, oligodendrocyte loss in the spinal cord occurs before symptoms appear and, importantly, before MN loss, implying that oligodendrocytes are associated with disease onset. This oligodendrocyte loss increases with disease progression, resulting in MNs with only partially myelinated axons in SOD1 G93A mice and SOD1 G93A rats (Niebroj-Dobosz et al., 2007; Kang et al., 2013; Philips et al., 2013). Whilst the proliferation of oligodendrocyte precursors may compensate for this loss, newly synthetized oligodendrocytes fail to mature and remain dysfunctional in SOD1 G93A mice (Magnus et al., 2008; Philips et al., 2013). Recently, SOD1 G85R was shown to transfer from MNs to nearby oligodendrocytes (Thomas et al., 2017). The selective removal of mutant SOD1 from NG2+ oligodendrocyte progenitors, but not mature oligodendrocytes, in SOD1 G37R mice leads to delayed disease onset and prolonged survival (Kang et al., 2013), further suggesting that mutant SOD1-induced oligodendrocyte defects are detrimental to MNs in ALS.
Schwann cells are required for the long-term maintenance of synapses at the NMJ (Reynolds and Woolf, 1992; Son and Thompson, 1995; Reddy et al., 2003). Early studies demonstrated that myelin is altered along peripheral nerves in ALS patients, implying that Schwann cells are involved in disease (Perrie et al., 1993). However, unlike for the other glial cell types, more recent studies on the role of Schwann cells in ALS have reached conflicting conclusions. Knockdown of SOD1 G37R within Schwann cells significantly accelerates disease progression, concomitant with a specific reduction in insulin-like growth factor I (IGF-I), which is protective to MNs (see section "Neuroprotective and Neurotoxic Factor Expression in MN Subpopulations" below) (Lobsiger et al., 2009). This surprising finding, implying that SOD1 G37R is protective in Schwann cells, could be linked to the dismutase activity of SOD1. Whereas SOD1 G37R retains its enzymatic activity, SOD1 G85R does not, and similar experiments performed in SOD1 G85R mice yielded the opposite findings: Schwann cell-specific knockdown of SOD1 G85R delayed disease onset and extended survival (Wang et al., 2012). Furthermore, TGF-β1 produced by Schwann cells promotes synaptogenesis by increasing nerve-muscle contacts (Feng and Ko, 2008), in contrast to TGF-β1 expression in astrocytes, which accelerates disease progression in SOD1 mice (Endo et al., 2015). Hence, the role of Schwann cells in ALS remains unclear.
INTRINSIC FACTORS SPECIFIC TO MN SUBPOPULATIONS
Multiple cellular pathways are now implicated in the etiology of ALS, but it remains unclear how dysfunction of these diverse processes can result in the same disease phenotype. Furthermore, the same genetic mutation can result in ALS, FTD or both conditions, implying that specific disease modifiers exist. Studies using in vivo and in vitro models of FALS suggest that the intrinsic properties of MNs are crucial for degeneration and/or protection (Boillée et al., 2006a). Importantly, resistant MN subtypes appear to display gene expression profiles distinct from those of susceptible MNs. Microarray analysis and laser capture microdissection of MNs isolated from the oculomotor/trochlear nuclei, the hypoglossal nucleus and the lateral column of the cervical spinal cord in SOD1 G93A rats (Hedlund et al., 2010), or from human brain and spinal cords (Brockington et al., 2013), have revealed marked differences between these subpopulations. Importantly, many of the differentially expressed genes encode proteins that function in pathways implicated in ALS pathogenesis, such as ER function, calcium regulation, mitochondrial function, ubiquitination, apoptosis, nitrogen metabolism, transport and cellular growth. Interestingly, oculomotor neurons possess a specific protein signature that is relatively conserved between humans and rodents, implying that this contributes to the relative resistance of these MNs in ALS/FTD (Hedlund et al., 2010; Comley et al., 2015). Several of these proteins are known to be protective against MN neurodegeneration, such as the insulin-like growth factors (IGFs) and their receptors (see section "Neuroprotective and Neurotoxic Factor Expression in MN Subpopulations" below). Similarly, other genes highly expressed in vulnerable MNs are implicated in their susceptibility to degeneration, such as semaphorin A3 (Sema3A) and matrix metalloproteinase 9 (MMP-9) (see section "Neuroprotective and Neurotoxic Factor Expression in MN Subpopulations" below).
Recently, a comprehensive bioinformatics meta-analysis of ALS modifier genes was performed across 72 published studies (Yanagi et al., 2019). A total of 946 modifier genes were identified and of these, 43 genes were identified as modifiers in more than one ALS gene/model. These included TDP-43, SOD1, ATXN2 and MMP9. Intrinsic factors in MNs might therefore underlie their relative vulnerability or resistance to neurodegeneration in ALS. The two pioneering studies linking gene expression differences to MN vulnerability in ALS (Hedlund et al., 2010; Brockington et al., 2013) have led to several subsequent reports in which the roles of specific genes were examined further (summarized in Table 7, and discussed in the sections below). However, it is also possible that the differences in gene expression reflect the diverse embryological origins or milieu of resistant and susceptible MN groups, or simply the structural and functional differences between oculomotor units and the motor units of other skeletal muscles. To date, no studies have extensively characterized the specific transcriptional profiles of resistant vs. susceptible MNs in TDP-43, C9orf72, FUS or other models of ALS, similar to those performed in SOD1 G93A mice and ALS patients (Hedlund et al., 2010; Brockington et al., 2013). In addition to alterations in gene expression profiles, it is also possible that the resistant MNs in ALS display functional or morphological properties that differ from those of MNs more susceptible to degeneration. A recent study demonstrated that cultures obtained from surviving MNs of SOD1 G93A mice displayed more dendritic branching and axonal outgrowth, as well as increased actin-based growth cones, implying that they have greater regenerative capacity (Osking et al., 2019).
RNA Homeostasis
Abnormal RNA homeostasis is increasingly implicated in the pathophysiology of ALS/FTD, consistent with the functions of TDP-43 and FUS in regulating RNA splicing and transport (Polymenidou et al., 2011; Tank et al., 2018). In the transgenic SOD1 G93A rat, differences in the expression of genes involved in transcription, RNA metabolism, RNA binding and splicing, and the regulation of translation were evident between neuronal populations located in the oculomotor/trochlear nucleus, the hypoglossal nucleus and the lateral column of the cervical spinal cord (Hedlund et al., 2010). These results therefore suggest that RNA homeostatic processes are involved in the differential vulnerability of specific subtypes of MNs in ALS. However, further studies in this area are required to investigate this possibility, particularly in relation to TDP-43 and FUS.
Neuroprotective and Neurotoxic Factor Expression in MN Subpopulations
Differential expression of pro-survival or toxic factors is also implicated in the specific vulnerability of MN subtypes. The IGFs are proteins with high homology to insulin that form part of the IGF "axis," which promotes cell proliferation and inhibits apoptosis. In the normal rat, IGF-I is highly expressed in oculomotor neurons, where it is protective against glutamate-induced toxicity (Hedlund et al., 2010; Allodi et al., 2016). This may be due to activation of the PI3K/Akt and p44/42 MAPK pathways, which both inhibit apoptosis (Siddle et al., 2001; Sakowski et al., 2009). In addition, its associated receptor, the IGF-I receptor (IGF-IR), is also highly expressed in oculomotor neurons and on the extraocular muscle endplate (Allodi et al., 2016). IGF-IR is important for the survival of neurons following hypoxic/ischemic injury (Vincent and Feldman, 2002; Liu et al., 2011) through upregulation of neuronal cellular inhibitor of apoptosis-1 (cIAP-1) and X-linked inhibitor of apoptosis (XIAP). Delivery of IGF-II to the muscle of mutant SOD1 G93A mice using AAV9 extended lifespan by 10%, prevented the loss of MNs and induced motor axon regeneration (Allodi et al., 2016). These findings indicate that differential expression of IGF-II and IGF-IR in oculomotor neurons might contribute to their relative resistance to degeneration in ALS/FTD.
Conversely, aberrant expression of axon repulsion factors near the NMJ may contribute to neurodegeneration in ALS. Sema3A and its receptor neuropilin 1 (Nrp1) are involved in axon guidance during neural development (Huber et al., 2005;Moret et al., 2007). Sema3A is specifically upregulated in terminal Schwann cells near NMJs of vulnerable FF muscle fibers in mutant SOD1 G93A mice (De Winter et al., 2006). Nrp1 is upregulated in axon terminals of the NMJ in this model and administration of an antibody against the Sema3A-binding domain of Nrp1 delayed the decline of motor functions while prolonging the lifespan of SOD1 G93A mice (Venkova et al., 2014). Furthermore, Sema3A is upregulated in the motor cortex of ALS patients (Körner et al., 2016;Birger et al., 2018), but not in the spinal cord. Sema3A induces death of sensory, sympathetic, retinal and cortical neurons (Shirvan et al., 2002;Ben-Zvi et al., 2008;Jiang et al., 2010;Wehner et al., 2016), but not spinal neurons (Molofsky et al., 2014;Birger et al., 2018). Similarly, Sema3A induces apoptosis of human cortical neurons but promotes survival of spinal MNs (Birger et al., 2018). Furthermore, loss of Sema3A-expressing astrocytes in the ventral spinal cord leads to selective degeneration of α-MNs, but not γ-MNs (Hochstim et al., 2008;Molofsky et al., 2014). These data indicate that whilst Sema3A and Nrp1 contribute to the loss of MNs in ALS, some neuronal subpopulations are more susceptible than others. There is also evidence that other axon guidance proteins are associated with the susceptibility of MNs in ALS. Increased expression of ephrin A1 has been demonstrated in the vulnerable spinal MNs of ALS patients (Jiang et al., 2005). EPHA4, which is a disease modifier in zebrafish, rodent models and human ALS, encodes an Eph receptor tyrosine kinase, which is involved in axonal repulsion during development and in synapse formation, plasticity and memory in adults (Van Hoecke et al., 2012). 
The more vulnerable MNs express higher levels of EPHA4, and neuromuscular re-innervation is inhibited by Epha4. In ALS patients, EPHA4 expression also inversely correlates with disease onset and survival (Van Hoecke et al., 2012).
Matrix metalloproteinase 9 (MMP-9) has recently been identified as another determinant of selective neuronal vulnerability in SOD1 G93A mice (Kaplan et al., 2014). MMP-9 was strongly expressed by vulnerable FR spinal MNs, but not by oculomotor neurons, Onuf's nucleus MNs or S α-MNs, and it enhanced ER stress and mediated muscle denervation in this model (Kaplan et al., 2014). Delivery of MMP-9 into FF-MNs, but not into oculomotor neurons, accelerates denervation in SOD1 G93A mice (Kaplan et al., 2014). Similarly, another study demonstrated that reduction of MMP-9 expression attenuated neuromuscular defects in rNLS8 mice expressing cytoplasmic hTDP-43ΔNLS in neurons (Spiller et al., 2019). Edaravone, a free radical scavenger that inhibits MMP-9 expression, was recently approved for the treatment of ALS in Japan, South Korea, the United States and Canada (Yoshino and Kimura, 2006; Ito et al., 2008; Yagi et al., 2009). Further molecular investigation of the differences and similarities between different motor units in ALS should yield additional insights into their vulnerability to neurodegeneration.
Polymorphisms in specific genes have also been linked to MN vulnerability. In SALS patients, variants in the gene encoding UNC13A are associated with greater susceptibility to disease and shorter survival (Diekstra et al., 2012). UNC13A functions in vesicle maturation during exocytosis and regulates the release of neurotransmitters, including glutamate. Mutations in EPHA4 are also associated with longer survival (Van Hoecke et al., 2012), implying that Epha4 modulates the vulnerability of MNs in ALS. Furthermore, repeat expansions in the gene encoding ataxin-2 (ATXN2), which cause spinocerebellar ataxia type 2 (SCA2), are more frequent in ALS patients than in healthy controls (Ross et al., 2011). This implies that ATXN2 repeat expansions are also related to MN vulnerability to neurodegeneration in ALS.
Neuronal Excitability
The excitability properties of MNs are also implicated in the selective degeneration of specific MN subtypes in ALS. Alterations in MN excitability have been reported during the asymptomatic disease stage in the SOD1 G93A (Saxena et al., 2013), s-SOD1 G93A (Pambo-Pambo et al., 2009) and SOD1 G85R (Bories et al., 2007) mouse models, in iPSC-derived MNs (Vucic et al., 2008; Wainger et al., 2014) and in SALS and FALS patients (Vucic and Kiernan, 2010; Devlin et al., 2015). Specific isoforms of the sodium-potassium pump (Na+/K+ ATPase), which generates the Na+/K+ gradients that drive the action potential, are associated with the specific vulnerability of MN subtypes. Misfolded mutant SOD1 forms a complex with the α3 isoform of the Na+/K+ ATPase, and this impairs its ATPase activity. Altered levels of this isoform were also observed in spinal cords of SALS and non-SOD1 FALS patients. Importantly, α3 is the major isoform in vulnerable FF-MNs, whereas both α1 and α3 predominate in FR-MNs, and S-MNs express only α2. Furthermore, viral-mediated expression of a mutant Na+/K+ ATPase-α3 that cannot bind to mutant SOD1 restored Na+/K+ ATPase-α3 activity, delayed disease manifestations and increased lifespan in two different mutant SOD1 mouse models (SOD1 G93A and SOD1 G37R). This indicates that modulating the activity of the α3 isoform of the Na+/K+ ATPase, and therefore the excitability status of MNs, is important in neurodegeneration in ALS.
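The dependence of MN excitability on these pump-maintained gradients can be made concrete with the textbook Nernst relation, which gives the equilibrium potential set by each ion gradient. The concentration values below are typical mammalian textbook figures, not measurements from the studies cited above:

```latex
E_{\mathrm{ion}} \;=\; \frac{RT}{zF}\,\ln\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}
\;\approx\; 61.5\,\mathrm{mV}\cdot\log_{10}\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}
\qquad (z = 1,\; T = 310\,\mathrm{K})
```

With typical concentrations ([K+]out ≈ 5 mM, [K+]in ≈ 140 mM; [Na+]out ≈ 145 mM, [Na+]in ≈ 12 mM), this gives E_K ≈ -89 mV and E_Na ≈ +67 mV. If Na+/K+ ATPase activity falls, as when misfolded SOD1 sequesters the α3 isoform, these gradients run down, both equilibrium potentials drift toward zero and the resting membrane depolarizes, one plausible route to the altered excitability described above.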
However, increased MN excitability can also be neuroprotective in ALS. Enhancing MN excitability by delivering AMPA receptor agonists to mutant SOD1 G93A mice reversed misfolded mutant protein accumulation, delayed pathology and extended survival, whereas reducing MN excitability with the antagonist CNQX accelerated disease and induced early denervation, even in the more resistant S-MNs (Saxena et al., 2013). However, MN subpopulations can be differentially affected by changes in excitability. Disease-resistant S-MNs exhibit hyperexcitability in ALS patients (de Carvalho and Swash, 2017) and early in disease in mutant SOD1 G93A mice, whereas disease-vulnerable FF-MNs are not hyperexcitable, again highlighting increased excitability as a protective property in ALS (Leroy et al., 2014). Also, the vulnerable masticatory trigeminal MNs of SOD1 G93A mice exhibit a heterogeneous discharge pattern, unlike oculomotor neurons (Venugopal et al., 2015). However, MNs in FALS and SALS patients are hyperexcitable early in the disease course, but later become hypo-excitable (Vucic et al., 2008; Menon et al., 2015), indicating that modulation of neuronal excitability is a factor influencing the development of ALS.
Excitotoxicity
Excitotoxicity is the process by which neurons degenerate following excessive stimulation by neurotransmitters such as glutamate, due to overactivation of NMDA or AMPA receptors. This can result from pathologically high levels of glutamate, or from excitotoxins such as NMDA and kainic acid, which allow high levels of Ca2+ to enter the cell. One line of evidence supporting a role for excitotoxicity in ALS is that riluzole, one of only two drugs available for ALS patients, has anti-excitotoxic properties (Bensimon et al., 1994; Lacomblez et al., 1996). Riluzole inhibits the release of glutamate by inactivating voltage-dependent Na+ channels on glutamatergic nerve terminals (Doble, 1996). Previous studies have suggested that MNs that are less susceptible to excitotoxicity are less prone to degenerate (Hedlund et al., 2010; Brockington et al., 2013).
Ca2+ enters neurons through ligand-gated or voltage-gated channels, such as the voltage-gated L-type Ca2+ channel (Cav1.3), which mediates the generation of persistent inward currents (Xu and Lipscombe, 2001). Cav1.3 is differentially expressed in MN subtypes, with higher expression in the spinal cord than in the oculomotor and hypoglossal nuclei (Shoenfeld et al., 2014). This Ca2+ inward current increases early in the disease course in MNs of SOD1 G93A mice, which is associated with an increase in Cav1.3 expression.
In addition, the presence of atypical AMPA receptors in MNs, compared to other neurons, might render them more permeable to Ca2+. Functional AMPA receptors normally form a tetrameric structure composed, in various combinations, of the four subunits GluR1, GluR2, GluR3, and GluR4. The Ca2+ conductance of these receptors differs markedly depending on whether GluR2 is a component of the receptor. However, AMPA receptors in MNs contain proportionately fewer GluR2 subunits than those in other neuronal types (Kawahara et al., 2003; Sun et al., 2005), which may render them more permeable to Ca2+ and thus more vulnerable to excitotoxic injury than other cells. Consistent with this notion, more GluR1 and GluR2 subunits are present in oculomotor neurons than in spinal MNs in humans (Brockington et al., 2013), and treatment of slice preparations from the rat lumbar spinal cord and midbrain with AMPA/kainate results in more Ca2+ influx in spinal cord MNs than in oculomotor neurons (Brockington et al., 2013). MNs in culture or in vivo are selectively vulnerable to glutamate receptor agonists, particularly those that stimulate AMPA receptors and induce excitotoxicity (Carriedo et al., 1996; Urushitani et al., 1998; Fryer et al., 1999; Van and Robberecht, 2000), whereas NMDA does not damage spinal cord MNs (Curtis and Malik, 1985; Pisharodi and Nauta, 1985; Hugon et al., 1989; Urca and Urca, 1990; Nakamura et al., 1994; Ikonomidou et al., 1996; Kruman et al., 1999). Moreover, ALS-vulnerable α-spinal cord MNs display greater AMPA receptor current density than other spinal neurons (Vandenberghe et al., 2000). Furthermore, when this density is reduced pharmacologically to levels similar to those of other spinal neurons, these MNs are no longer vulnerable to activation of AMPA receptors.
Similarly, when mutant SOD1 G93A mice are crossed with mice overexpressing the GluR2 subunit in cholinergic neurons, the resulting progeny possess AMPA receptors with reduced permeability to Ca2+ and survive longer than SOD1 G93A mice (Tateno et al., 2004), highlighting the importance of AMPA receptors and GluR2 in ALS.
Editing of mRNA controls the ability of the GluA2 subunit to regulate the Ca2+ permeability of AMPA receptors. RNA editing is a post-transcriptional modification (Gln/Q to Arg/R) of the GluA2 mRNA, and the AMPA receptor is Ca2+-impermeable if it contains the edited GluA2(R) subunit. Conversely, the receptor is Ca2+-permeable if it lacks GluA2 or if it contains the unedited GluA2(Q) subunit. Interestingly, spinal MNs in human ALS patients display less GluR2 Q/R site editing (Kawahara et al., 2004; Aizawa et al., 2010). GluR2 pre-mRNA is edited by the enzyme adenosine deaminase acting on RNA 2 (ADAR2) (Kortenbruck et al., 2001), and reduced ADAR2 activity correlates with TDP-43 pathology in human MNs (Aizawa et al., 2010). Furthermore, when ADAR2 was conditionally knocked down in MNs in mice, a decline in motor function and selective loss of MNs in the spinal cord and cranial motor nerve nuclei were observed. In contrast, MNs in the oculomotor nucleus were retained, despite a significant decrease in GluR2 Q/R site editing. Notably, cytoplasmic mislocalization of TDP-43 was present in the ADAR2-depleted MNs and TDP-43 was also localized at the synapse, further highlighting a link between ADAR2, GluR2 and TDP-43 (Wang et al., 2008; Feiguin et al., 2009; Polymenidou et al., 2011; Gulino et al., 2015).
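The Q/R editing step itself is simple enough to sketch: ADAR2 deaminates the adenosine in the glutamine codon CAG to inosine, which the ribosome decodes as guanosine, yielding the arginine codon CGG. The following minimal illustration makes this concrete; the single-codon focus and the two-entry codon table are simplifications for this example, not part of any cited study.

```python
# A-to-I editing at the GluA2 Q/R site: ADAR2 deaminates the adenosine in
# the Gln codon CAG to inosine, which is decoded as guanosine (CGG = Arg).
CODON_TO_AA = {"CAG": "Q", "CGG": "R"}  # only the two codons relevant here

def edit_qr_site(codon: str) -> str:
    """Replace the central adenosine with G (inosine is read as guanosine)."""
    if codon[1] != "A":
        return codon              # already edited (or not an editable site)
    return codon[0] + "G" + codon[2]

unedited = "CAG"                  # GluA2(Q): receptor is Ca2+-permeable
edited = edit_qr_site(unedited)   # "CGG" -> GluA2(R): Ca2+-impermeable

print(f"{CODON_TO_AA[unedited]} -> {CODON_TO_AA[edited]}")  # Q -> R
```

Reduced ADAR2 activity in ALS MNs corresponds to a larger fraction of transcripts escaping this conversion, and hence to more Ca2+-permeable GluA2(Q)-containing receptors.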
Motor neurons may be vulnerable to excitotoxicity because they possess a lower capacity than other neurons to buffer Ca2+ upon stimulation (Van Den Bosch et al., 2006). Several electrophysiological studies have demonstrated that susceptible MNs in ALS have a limited capacity to buffer Ca2+ compared to resistant MNs (Keller, 1998, 1999; Palecek et al., 1999; Vanselow and Keller, 2000). Ca2+-binding proteins, such as calbindin-D28K and parvalbumin, protect neurons from Ca2+-mediated cell death by enhancing Ca2+ removal after stimulation (Chard et al., 1993). In human autopsy specimens, both proteins are absent from the MN populations lost early in ALS (cortical, spinal and lower cranial MNs), whereas MNs targeted later in the disease course (Onuf's nucleus, oculomotor, trochlear, and abducens MNs) express markedly more of each (Alexianu et al., 1994). Similarly, in pre-symptomatic SOD1 G93A mice, lower levels of the Ca2+-binding ER chaperone calreticulin (CRT) were detected in vulnerable FF-MNs innervating the tibialis anterior muscle, compared to resistant MNs innervating the soleus (Bernard-Marissal et al., 2012). Knockdown of CRT in vitro was sufficient to trigger MN death via the Fas/NO pathway (Bernard-Marissal et al., 2012). Furthermore, reduced CRT levels and activation of Fas both trigger ER stress and cell death specifically in vulnerable SOD1 G93A-expressing MNs (Bernard-Marissal et al., 2012). These studies suggest that expression of Ca2+-binding proteins may confer resistance to excitotoxic stimuli (Alexianu et al., 1994; Obál et al., 2006). Consistent with this, overexpression of parvalbumin in high-copy SOD1 G93A mice was beneficial (Laslo et al., 2000), although these findings have been challenged (Beers et al., 2001). Also, the loss or reduction of parvalbumin and calbindin-D28k immunoreactivity in large MNs at early stages in SOD1-transgenic mice suggests that these Ca2+-binding proteins contribute to the selective vulnerability of MNs (Sasaki et al., 2006).
Conversely, parvalbumin levels are significantly lower in oculomotor neurons from SOD1G93A mice than in spinal cord MNs (Comley et al., 2015). Hence, these conflicting data argue against the involvement of Ca2+-binding proteins in oculomotor neuron resistance to degeneration. However, together these studies suggest that neuronal excitability and excitotoxicity are determinants of the selective vulnerability of spinal cord neurons, and of the relative resistance of oculomotor neurons, in ALS.
Endoplasmic Reticulum Stress
The ER is responsible for the folding and quality control of virtually all proteins that transit through the secretory pathway; hence it is a fundamental aspect of proteostasis. Unfolded or misfolded proteins are retained in the ER, which activates the unfolded protein response (UPR). This aims to improve the cellular protein folding capacity by inhibiting translation, upregulating ER chaperones, such as immunoglobulin binding protein (BiP) and protein disulfide isomerase (PDI), and stimulating protein degradation (Walter and Ron, 2011; Rozas et al., 2017; Shahheydari et al., 2017). Numerous ALS-related proteins chronically activate the UPR, including ALS-associated mutant forms of SOD1 (Nishitoh et al., 2008), TDP-43, C9orf72 (Dafinca et al., 2016), vesicle-associated membrane protein-associated protein B (VAPB) (Suzuki et al., 2009) and FUS (Farg et al., 2012). ER stress has also been detected in sporadic ALS patients (Ilieva et al., 2007; Atkin et al., 2008). Furthermore, ER stress is linked to excitability in ALS. Mutant SOD1 induces a transcriptional signature characteristic of ER stress, which also disrupts MN excitability. Similarly, modulating the excitability properties of human iPSC-derived MNs alters the UPR. Conversely, treatment of MNs with salubrinal, an inhibitor of ER stress that prevents eIF2α dephosphorylation (Boyce et al., 2005), reduced the excitability of MNs. Similar results were obtained in MNs from patients carrying C9orf72 repeat expansions or VCP mutations (Dafinca et al., 2016; Hall et al., 2017). Moreover, pharmacological reduction of neuronal excitability in SOD1G93A mice specifically reduced BiP accumulation in ipsilateral FALS α-MNs (Saxena et al., 2013). Hence, together these findings indicate that induction of the UPR and the electrical activity of MNs are closely related in ALS.
An in vivo longitudinal analysis of MNs revealed that ER stress influences disease manifestations in the SOD1G93A and SOD1G85R mouse models of FALS (Saxena et al., 2009). Activation of the UPR is detrimental to mutant SOD1G93A mice, leading to a failure to reinnervate NMJs. Conversely, treatment with salubrinal attenuated axon pathology and extended survival in mutant SOD1G93A mice (Saxena et al., 2009). Initiation of the UPR was detected specifically in FF-MNs in asymptomatic SOD1G93A mice, but not in S-MNs (Saxena et al., 2009). Hence these findings indicate that the more vulnerable MNs develop ER stress first, thus linking the UPR to MN susceptibility in ALS. FF-MNs may be more vulnerable to ER stress because they have much lower levels of the BiP co-chaperone SIL1 compared to S-MNs (Filézac de L'Etang et al., 2015). SIL1 is protective against ER stress and reduces the formation of mutant SOD1 inclusions in vitro. Conversely, SIL1 depletion leads to disturbed ER and nuclear envelope morphology, defective mitochondrial function, and ER stress, thus linking SIL1 to neurodegeneration (Roos et al., 2016). Furthermore, AAV-mediated overexpression of SIL1 in MNs of SOD1G93A mice preserves FF-MN axons and prolongs survival by 25-30% compared to littermates (Filézac de L'Etang et al., 2015). In addition, SIL1 levels are reduced in MNs of mutant TDP-43A315T mice, and are increased in the surviving MNs of SALS patients, also implying that SIL1 is protective in ALS (Filézac de L'Etang et al., 2015).
Consistent with these studies, ER stress is present specifically in anterior horn MNs in knock-in mice expressing BiP artificially retained in the ER. Furthermore, this was accompanied by the accumulation of ubiquitinated proteins and wild type SOD1 (Mimura et al., 2008; Jin et al., 2014), reminiscent of SALS (Bosco et al., 2010). Significant changes in the mRNAs of ER stress genes were also detected in the cerebellum by transcriptome analysis (Prudencio et al., 2015). Together, these studies link SIL1 and BiP to neurodegeneration in both neuronal subpopulations in ALS/FTD. PDI is also upregulated in SOD1 mice and in human SALS spinal cord tissues (Ilieva et al., 2007; Atkin et al., 2008; Sasaki, 2010; Walker et al., 2010; Chen et al., 2015; Sun et al., 2015). Overexpression of wild type PDI, or of the related family member ERp57, is protective in vitro in neuronal cells expressing mutant SOD1 (Walker et al., 2010; Jeon et al., 2014; Parakh et al., 2018a). Interestingly, mutations in PDI and ERp57 have been identified in ALS patients, and their expression in zebrafish induces motor defects (Woehlbier et al., 2016). Furthermore, the levels of PDI in MNs are lower than in astrocytes and oligodendrocytes in SOD1G37R mice (Sun et al., 2015). This implies that MNs are intrinsically more vulnerable to unfolded protein accumulation than other cell types, which may also contribute to their susceptibility in ALS.
It should also be noted, however, that the ER in neurons (and therefore in MNs) is not as well characterized as in other cell types. In fact, most studies examining UPR mechanisms have involved non-neuronal cells. Neurons possess an extensive ER that is distributed continuously throughout the axonal, dendritic and somatic compartments, implying that neurons make unique demands on the ER compared to other cell types (Ramírez and Couve, 2011). Hence, our current soma-centric view of the ER does not consider its role in neuronal processes and how this might relate to their specific functions. This is particularly true for large neurons, such as MNs with their extended axons. The finding that the most susceptible MNs develop ER stress first implies that the ER in MNs may confer unique susceptibility on these cells compared to other MNs and non-neuronal cells. However, this idea requires experimental validation.
Mitochondria and Energy Metabolism
Neurons utilize most of their energy at the synapse, which consumes more than a third of the overall cellular ATP (Harris et al., 2012;Niven, 2016). The properties and types of ion channels expressed in a MN influence the energy required to generate an action potential, and the Na + /K + pump is estimated to account for 20-40% of the brain's energy consumption (Purves et al., 2001). The size and shape of a MN also affects its electrical properties, and the distance over which signals must spread. MNs have particularly high energetic demands, even compared to other neurons. They also have large numbers of NMJs as well as high intracellular Ca 2+ flux as discussed above.
More than 90% of ATP generation in the CNS occurs via mitochondrial oxidative phosphorylation (Hyder et al., 2013;Vandoorne et al., 2018). Reductions in energy metabolism have been reported in ALS (Vandoorne et al., 2018) and mitochondrial abnormalities, such as swelling and morphological changes, are among the earliest signs of pathology in SOD1 G93A and SOD1 G37R mice (Wong et al., 1995;Kong and Xu, 1998), FUS R521C rats (Huang et al., 2012;So et al., 2018) and wild type TDP-43 mice (Shan et al., 2010;Xu et al., 2010). Moreover, mitochondrial abnormalities are also present in MNs of ALS patient tissues (Fujita et al., 1996;Sasaki and Iwata, 1996;Swerdlow et al., 1998;Dhaliwal and Grewal, 2000;Sasaki et al., 2007). Furthermore, mutant SOD1 specifically associates with mitochondria and interferes with their function (Liu et al., 2004;Pasinelli et al., 2004;Ferri et al., 2006;Sotelo-Silveira et al., 2009;Vande Velde et al., 2011). Decreased activity of mitochondrial respiratory chain complexes was also present in spinal cord sections (Borthwick et al., 1999) and homogenates (Wiedemann et al., 2002) from ALS patients. Consistent with these findings, genes involved in mitochondrial function were upregulated in rat oculomotor neurons compared to hypoglossal and cervical spinal cord MNs. However, it should be noted that the higher firing rate of the former might confer some resistance to energy imbalance (Hedlund et al., 2010;Brockington et al., 2013).
In vulnerable MNs lacking the Ca2+-binding proteins calbindin and parvalbumin, Ca2+ is largely taken up by mitochondria (Lautenschläger et al., 2013). As a result, extensive mitochondrial transport to the dendritic space is required to maintain Ca2+ homeostasis. The normal distribution of mitochondria is also perturbed in ALS patient MNs: they are depleted in distal dendrites and axons, and instead accumulate in the soma and proximal axon hillock (Sasaki et al., 2007). Disturbed mitochondrial dynamics were also described in MNs in mutant SOD1G93A (De Vos et al., 2007; Sotelo-Silveira et al., 2009; Bilsland et al., 2010; Magrané et al., 2014) and TDP-43A315T (Magrané et al., 2014) mice. In addition, iPSC-derived SOD1A4V MNs exhibit disturbances in mitochondrial morphology and motility within the axon. Similarly, expression of mutant TDP-43 in spinal cord primary neurons leads to an abnormal distribution of mitochondria. Dysfunctional Ca2+ uptake by mitochondria may therefore result in elevated intracellular Ca2+ levels, thus contributing to neurodegeneration.
Compared to FF-MNs, S-MNs have smaller somas and axons, less dendritic branching, and fewer neuromuscular terminals (Kanning et al., 2010). This gives them a higher input resistance, so less energy is required to initiate an action potential. Moreover, S-MNs contain more mitochondria than FF-MNs (Kanning et al., 2010). These two properties may therefore render FF-MNs more vulnerable to energy depletion than S-MNs. Indeed, a computational analysis estimated that the energy requirements of FF-MNs are considerably larger than those of S-MNs for a similar discharge (Le Masson et al., 2014), rendering the former more sensitive to ATP imbalance. Furthermore, the muscle fiber types associated with FF- and S-MNs differ in their major energy source: slow-twitch muscles use mainly oxidative metabolism, whereas fast-twitch fibers use glycolysis. Hence, the heightened vulnerability of MN subpopulations may relate to their bioenergetic and morphological characteristics. Both the direct interaction of misfolded ALS mutant proteins with mitochondria and the secondary overload of ion uptake could account for failure of mitochondrial metabolism, leading to reduced ATP availability (Israelson et al., 2010).
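The relationship between soma size, input resistance and the current needed to fire can be made concrete with a back-of-the-envelope calculation: for a fixed specific membrane resistance, input resistance scales inversely with membrane surface area, so a larger FF-MN needs proportionally more injected current to reach threshold. All parameter values below are hypothetical round numbers chosen only to illustrate the scaling, not measurements from any of the cited studies:

```python
# Illustrative rheobase estimate from a passive, isopotential cell model.
# Input resistance R_in ≈ R_m / A (specific membrane resistance divided by
# surface area), and the threshold current is I_rheo ≈ V_th / R_in.
# All parameter values are assumed round numbers for illustration only.

R_M = 10_000.0   # specific membrane resistance, ohm*cm^2 (assumed)
V_TH = 0.015     # depolarization needed to reach threshold, volts (assumed)

def rheobase_nA(surface_area_cm2: float) -> float:
    """Threshold current (nA) for a passive isopotential cell of given area."""
    r_in = R_M / surface_area_cm2   # input resistance in ohms
    return (V_TH / r_in) * 1e9      # amperes -> nanoamperes

small_s_mn = rheobase_nA(2e-4)   # smaller S-MN: less current needed to fire
large_ff_mn = rheobase_nA(6e-4)  # 3x larger FF-MN: 3x more current needed
print(f"S-MN ≈ {small_s_mn:.1f} nA, FF-MN ≈ {large_ff_mn:.1f} nA")
```

The threefold difference in threshold current for a threefold difference in membrane area captures, in the simplest possible form, why larger MNs are recruited later and carry higher energetic costs per action potential.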
Motor Neuron Size
Motor neurons vary widely in size, and this can impact their physiological functions. There is also increasing evidence that vulnerability to degeneration is related to MN size. The somas of the disease-vulnerable FF-MNs are larger than those of the resistant S-MN types, and they possess larger motor units. Moreover, the size of a MN correlates inversely with its excitability and firing rate, and determines its discharge behavior and recruitment order during movement (Henneman, 1957); larger MNs are also more vulnerable to degeneration in ALS (Le Masson et al., 2014). The somas of MNs from male SOD1G93A mice are larger than those of wild type male mice (Shoenfeld et al., 2014). Furthermore, a recent study demonstrated that not only are the larger MN subtypes more vulnerable to neurodegeneration in SOD1G93A mice, but MNs also increase in size during disease in multiple regions of the spinal cord. Interestingly, in silico modeling predicted that the excitability properties of these cells were also altered (Dukkipati et al., 2018). Hence, MN size may alter during disease progression, and this plasticity may impact the vulnerability of MN subtypes.
Oxidative Stress
Oxidative stress arises when reactive oxygen species (ROS) or reactive nitrogen species (RNS) accumulate within cells. This can lead to oxidative modifications and altered functional states of proteins, nucleic acids and lipids. Oxidative stress is linked to neurodegeneration in ALS (Carrí et al., 2003), and oxidation products, such as malondialdehyde, hydroxynonenal, and oxidized proteins, DNA or membrane phospholipids, are elevated in SALS and FALS patients (Shaw et al., 1995; Beal et al., 1997; Ferrante et al., 1997; Bogdanov et al., 2000; Shibata et al., 2001) and in mouse models of ALS (Gurney et al., 1994; Andrus et al., 1998; Bogdanov et al., 1998; Hall et al., 1998; Liu et al., 1998, 1999; Rizzardini et al., 2003). Mitochondrial damage in ALS has also been attributed to intracellular oxidative stress (Fujita et al., 1996). The normal physiological function of SOD1 is the detoxification of superoxide radicals, although loss of SOD1 function is no longer favored as a disease mechanism in ALS (Saccon et al., 2013). However, mutations in SOD1 increase neuronal vulnerability to oxidative stress (Franco et al., 2013; Tsang et al., 2014). Moreover, in response to elevated ROS, SOD1 relocates from the cytoplasm to the nucleus, where it regulates the expression of oxidative resistance and repair genes (Tsang et al., 2014). Some neurons exhibit differential vulnerability to oxidative damage: cerebellar granule and hippocampal CA1 neurons are more sensitive to oxidative stress than cerebral cortical and hippocampal CA3 neurons (Wang X. et al., 2009; Wang and Michaelis, 2010). Hence, it is possible that similar differences in vulnerability to oxidative stress exist between MN populations. However, this possibility needs to be confirmed experimentally.
Protein Transport
Efficient intracellular trafficking is required to maintain the structure and function of MNs, particularly because MNs have very long axons that connect the soma with distant synaptic sites [reviewed in De Vos and Hafezparast (2017)]. Disorganization of the neuronal cytoskeleton and inhibition of axonal, ER-Golgi, endosomal and nucleocytoplasmic transport, are now widely reported features of ALS [reviewed in Parakh et al. (2018b) and Burk and Pasterkamp (2019)]. Importantly, defects in trafficking could reduce the supply of components necessary for synaptic and/or somal function, and prevent clearance of waste products from the synapse, together contributing to neurodegeneration in ALS.
The existence of mutations in genes encoding cytoskeletal proteins or the cellular transport machinery highlights the involvement of these processes in ALS/FTD. These include tubulin α4A (Smith et al., 2014a;Perrone et al., 2017), a major component of microtubules, neurofilament heavy chain (Figlewicz et al., 1994), a type of intermediate filament, and profilin-1 Dillen et al., 2013;Smith et al., 2014b), which is involved in actin polymerization. Similarly, dynactin-1, involved in axonal transport (Puls et al., 2003;Münch et al., 2004;Münch et al., 2005;Liu et al., 2017) and SCFD1 (Sec1 family domain containing 1), involved in ER to Golgi transport (van Rheenen et al., 2016), are also mutated in a small proportion of patients, further implying that protein transport is impaired in ALS/FTD.
Axonal transport defects may be an important factor underlying the selective vulnerability of MNs or MN subtypes in ALS/FTD. Abnormal accumulations of phosphorylated neurofilaments, mitochondria and lysosomes in the proximal axon of large MNs, as well as axonal spheroids, are present in SALS and FALS patients (Hirano et al., 1984; Corbo and Hays, 1992; Okada et al., 1995; Rouleau et al., 1996; Sasaki and Iwata, 1996). Mutant SOD1 slows both anterograde (Williamson and Cleveland, 1999) and retrograde (Chen et al., 2007; Perlson et al., 2009) axonal transport. Cytoskeletal and motor proteins are differentially expressed in spinal MNs compared to oculomotor neurons. These include peripherin (Hedlund et al., 2010; Comley et al., 2015), which is also found in ubiquitinated inclusions in the spinal cord of FALS (Robertson et al., 2003) and SALS patients (He and Hays, 2004). Overexpression of peripherin leads to defective axonal transport (Millecamps et al., 2006) and late-onset MN degeneration (Beaulieu et al., 1999), implying that differential expression of peripherin contributes to neurodegeneration.
Axonal transport requires the efficient regulation of both the dynein and kinesin molecular motors (Melkov et al., 2016), which mediate transport in the retrograde and anterograde directions, respectively. Dynein is differentially expressed in vulnerable and resistant MNs: higher levels are present in spinal and hypoglossal MNs than in oculomotor neurons (Ilieva et al., 2008). However, dynein levels were significantly decreased in motor nuclei in SOD1G93A mice compared to wild type mice, although its expression in MNs was equivalent (Comley et al., 2015). Similar patterns were observed in ALS patients (Comley et al., 2015). Disruption of dynein inhibits axonal transport and results in abnormal redistribution of mitochondria (Varadi et al., 2004) and late-onset degeneration in mice (LaMonte et al., 2002). Several FALS-linked SOD1 mutants co-localize with dynein/dynactin in vitro and in SOD1G93A mice (Ligon et al., 2005; Zhang et al., 2007; Shi et al., 2010), which perturbs axonal transport and synaptic mitochondrial content (De Vos et al., 2007). The lower expression of dynein in oculomotor neurons might therefore confer resistance to axonal transport defects in ALS. However, it is also possible that this simply reflects less need for retrograde transport in oculomotor neurons, due to their smaller cell bodies, shorter axons and lower energy requirements compared to spinal and hypoglossal MNs. Nevertheless, inefficient axonal transport of mitochondria may lead to a loss of energy at the synapse in vulnerable MN subpopulations, which require more energy to function than other cells, resulting in disturbed synaptic activity.
Kinesin-dependent axonal transport is also disrupted in ALS. Oxidized forms of wild type SOD1 immunopurified from SALS tissues inhibited kinesin-based fast axonal transport (Bosco et al., 2010). However, no interaction between members of the kinesin family (KIF5A, 5B or 5C) and SOD1 was detected in SOD1G93A mice. High expression of KIF proteins is also associated with neurodegeneration. KIF5C was abundantly expressed in vulnerable spinal MNs in SOD1G93A mice (Kanai et al., 2000), but a marked reduction in KIF3Aβ levels was detected in the motor cortex of SALS patients (Pantelidou et al., 2007). Furthermore, reduced kinesin-associated protein 3 (KIFAP3) expression was linked to an increase in the survival of ALS patients (Landers et al., 2009) and to changes in the axonal transport of choline acetyltransferase (ChAT). KIF5C is expressed more highly in rat spinal MNs than in oculomotor and hypoglossal MNs (Hedlund et al., 2010). However, further work is necessary to determine if this is related to ALS, and to examine whether KIFs are differentially expressed in neuronal subtypes.
Defects in the secretory pathway are also linked to ALS. Depletion of TDP-43 inhibits endosomal trafficking and results in a lack of neurotrophic signaling and in neurodegeneration (Schwenk et al., 2016). Similarly, inhibition of the first part of the classical secretory pathway, ER-Golgi transport, is also induced by mutant SOD1, TDP-43 and FUS (Soo et al., 2015). This mechanism has been described as a possible trigger for ER stress (Soo et al., 2015), which, as detailed above, is linked to neuronal susceptibility. Both endosomal and ER-Golgi transport are also linked to transport within the axon. However, it remains to be determined if these other forms of trafficking are directly associated with selective neuronal susceptibility in ALS.
Defective nucleocytoplasmic transport is emerging as an important cellular mechanism in the initiation or progression of ALS. Nuclear pore pathology is present in the brain of SALS and C9orf72 patients (Zhang K. et al., 2015; Chou et al., 2018). C9orf72 repeat expansions impair protein trafficking from the cytoplasm to the nucleus and reduce the proportion of nuclear TDP-43 in patient-derived MNs (Zhang K. et al., 2015), thereby mimicking the nuclear depletion of TDP-43 in ALS patients (Neumann et al., 2006). Proteins involved in nucleocytoplasmic transport are abnormally localized in aggregates in the cortex of C9orf72 ALS patients, in patient-derived MNs and in the brain of C9orf72 mouse models (Zhang K. et al., 2015; Zhang et al., 2016). Similarly, TDP-43 pathology disrupts nuclear pore complexes and lamina morphology in cell lines and patient-derived MNs. Furthermore, insoluble TDP-43 aggregates also contain components of the nucleocytoplasmic transport machinery (Chou et al., 2018). Both protein import and RNA export were impaired by mutant TDP-43 in mouse primary neurons (Chou et al., 2018). A recent meta-analysis of ALS modifier genes identified several genes encoding proteins involved in nucleocytoplasmic shuttling (Yanagi et al., 2019). In fact, the most enriched gene ontology term in this study was "protein import into the nucleus," and it included KPNB1, encoding importin subunit beta-1, which was identified as a genetic modifier in three separate ALS models. Interestingly, the gene encoding lamin B1, which is involved in nuclear stability, was upregulated in oculomotor neurons compared to hypoglossal and spinal cord MNs (Hedlund et al., 2010). Furthermore, lamin B1 is also known to possess protective cellular functions, such as controlling the cellular response to oxidative stress (Malhas et al., 2009), DNA repair (Butin-Israeli et al., 2015) and RNA synthesis (Tang et al., 2008).
It is therefore tempting to speculate that high expression of lamin B1 confers resistance on specific MN populations. However, further work is necessary to examine this possibility.
AGING
Although genetic mutations are present throughout life, ALS most commonly develops in mid-adulthood (50-60 years), implying that the normal aging process renders MNs vulnerable to degeneration. However, there is considerable variability in disease progression amongst mutation carriers, even within the same families. Hence, there is no simple correlation between genetics and disease phenotype, suggesting that environmental factors and the normal aging process are relevant to understanding neuronal vulnerability in ALS/FTD.

FIGURE 4 | Reported differences between the vulnerable (ventral spinal cord MNs) and resistant (oculomotor) motor neurons in ALS. The surface area and axonal conduction velocities referred to here were obtained from studies in cats (Westbury, 1982). The α-MNs innervate highly contracting extrafusal fibers, whereas γ-MNs innervate intrafusal fibers that contract much less; oculomotor neurons innervate the extraocular muscles in the orbit. α-MNs are larger than γ-MNs and oculomotor neurons and possess more dendritic trees. α-MNs are further subdivided based on their size and function. The proteins listed at the bottom of the figure are those enriched in each MN population.
Aging results in the accumulation of detrimental biological changes over time. The reduction of muscle mass and strength (sarcopenia) is one of the major causes of disability in older persons (Enoka et al., 2003;Lauretani et al., 2003;Delmonico et al., 2009;Clark and Manini, 2012), which affects gait speed, balance, and the command of fine motor skills (Fried et al., 2004;Sorond et al., 2015). The deterioration of motor functions with advancing age therefore increases the risk of injury and age-associated diseases such as ALS/FTD (Spiller et al., 2016b;Niccoli et al., 2017).
Aging-associated muscle weakness also results from impairment of the activity of the MNs contacting skeletal muscles (Fiatarone and Evans, 1993; Manini et al., 2013). High-resolution structural MRI reveals prominent atrophy in the primary motor cortex, as early as middle life in humans (Salat et al., 2004). Age-related decreases in white matter mass and myelinated nerve fiber length also correlate with reductions in the size of the motor cortex (Marner et al., 2003). However, loss of neurons during normal human aging is restricted to specific regions of the CNS, and the number of cells lost is only slight, contrary to previous convictions that significant loss of neurons occurs in the human cortex (Pannese, 2011).
Instead, age-related changes observed in aged rhesus monkeys and mice appear to involve loss of dendrites and axons, and demyelination, resulting in significant loss of synapses without loss of the neuronal soma (Pannese, 2011). Similarly, there are fewer cholinergic and glutamatergic synaptic inputs directly abutting α-MNs in aged animals, indicating that aging causes α-MNs to shed synaptic inputs. Thus, both impairment of axon function and substantial loss of synaptic inputs may contribute to age-related dysfunction of α-MNs, without loss of the soma (Maxwell et al., 2018). As a consequence, motor units are gradually lost over the first six decades of life, and this accelerates thereafter (Deschenes, 2011). These studies together indicate that neuronal atrophy and axonal impairment, with reduced neuromuscular activity in the absence of MN loss, occur with normal aging.
A major component of aging-related muscle weakness is a breakdown in communication between the brain and the NMJ. This is related to increased neural noise, which reduces the accuracy of neural transmission (Manini et al., 2013). Activation of the motor unit can therefore become erratic, and together with diminished glutamate uptake into MNs, this leads to an inability to exert muscle force and motor control (Manini et al., 2013). Furthermore, susceptibility of neurons to cellular stress, due to impairment of proteostasis and/or increased oxidative or metabolic stress during normal aging, may render MNs vulnerable to degeneration. Hence, genetic and environmental factors may combine to determine whether a MN can withstand an age-related disease such as ALS or not (Mattson and Magnus, 2006).

FIGURE 5 | Diagram showing a hypothetical cascade of cellular events leading to neurodegeneration and neuronal death in motor neurons in ALS/FTD. This schematic diagram summarizes the key features occurring in vulnerable MNs. Resistant MNs are protected by the expression of genes controlling cellular mechanisms that are defective in ALS/FTD (RNA dysfunction, ER stress, mitochondrial defects, protein transport dysfunction, dysregulation of neuronal excitability and excitotoxicity). These processes can be exacerbated by age, environmental factors and genetic mutations.
Age-Related Proteostasis Disturbance
During the aging process, a decline in the normal cellular ability to maintain proteostasis is observed and, as a result, damaged proteins accumulate (Kikis et al., 2010). Thus, the normal aging process may combine with ALS-associated insults, such as the presence of misfolded proteins or environmental factors, to induce neurodegeneration in already weakened MNs. MN populations that are more susceptible in ALS may therefore be less able to tolerate disturbances in proteostasis than the more resistant populations (Neumann et al., 2006; Kikis et al., 2010).
Mitochondria play a crucial role in neuronal aging. Normal features observed in the aging brain include the accumulation of mutations in mitochondrial DNA, the production of ROS, mitochondrial metabolic abnormalities and altered Ca2+ storage (Sun et al., 2016). Remarkably, mitochondria in different regions of the CNS are not equally affected during aging. The sensitivity of the mitochondrial permeability transition pore to Ca2+ in the cortex and hippocampus is greater than that of the striatum and the cerebellum in aged rats (LaFrance et al., 2005; Brown et al., 2006). The cellular location of mitochondria is also relevant to the aging process. Synaptic mitochondria are more prone to oxidative stress-induced damage than mitochondria located in the soma (Brown et al., 2006; Reddy and Beal, 2008). In addition, synaptic mitochondria display a limited capacity to accumulate Ca2+, unlike those located in the soma (Brown et al., 2006). Furthermore, marked differences have been described between mitochondria located in the spinal cord and those found in the distal axons of MNs from aged rats: in the axon termini at the NMJ, mitochondrial swelling, fusion and an abundance of megamitochondria (giant mitochondria) have been reported during aging (García et al., 2013). These studies therefore imply that mitochondria become dysfunctional in aged MNs, which might sensitize vulnerable MN populations to ALS/FTD. Mitochondria located at the synapse may be particularly vulnerable to these age-related processes.
Age-Related DNA Damage
The mammalian genome is under constant attack from both endogenous and exogenous sources. This can result in DNA damage, mutations and impaired cellular viability if not repaired correctly (Madabhushi et al., 2014). There is a significant increase in DNA damage during aging due to reduced capacity of DNA repair. Moreover, erroneous repair of DNA lesions can result in further mutations in the aged brain (Vijg and Suh, 2013). DNA damage is increasingly implicated in neurodegenerative disorders, including ALS, where it is induced by the C9orf72 repeat expansion (Farg et al., 2017;Walker et al., 2017). Interestingly, there is also evidence that both FUS and TDP-43 function in the DNA damage response, in either prevention of damage or repair of R loop-associated DNA damage (Hill et al., 2016). In addition, impairment of the DNA damage response due to the presence of ALS/FTD-associated FUS mutations induces neurodegeneration (Higelin et al., 2016;Naumann et al., 2018). It is therefore possible that the normal aging process results in an impaired ability to repair DNA in MNs. This may be an important source of cellular stress that precipitates neurodegeneration in cells already exposed to pathological events throughout life. However, recent work suggests that mutant SOD1 G93A does not impact on DNA strand integrity, implying that DNA damage is not present in all forms of ALS (Penndorf et al., 2017).
CONCLUSION
Motor neurons are unique cells compared to other neurons. They are large cells, with extraordinarily long axons, and very high energetic requirements, which may render them uniquely susceptible to degeneration in ALS. Remarkably, however, not all MNs are equally affected, and there are marked differences in vulnerabilities between MN subtypes, even within the same motor unit. The resistant MNs possess distinct morphological and functional characteristics, as well as different gene expression profiles, compared to the more vulnerable groups (Figure 4). Importantly, the oculomotor neurons continue to function, even in the late stages of ALS when the vulnerable spinal and other MNs are significantly depleted. These oculomotor neurons are anatomically and functionally very different from all other motor units: they are much smaller, and their function involves sensing rather than movement, hence different circuits are involved. In contrast, spinal MNs are more prone to hyperexcitation and they express high levels of AMPA receptors, they are more prone to develop ER stress, and they do not buffer Ca 2+ as well as the more resistant MN types. These properties may confer unique sensitivity to neurodegeneration in ALS. Interestingly, even within spinal MNs, there are distinct differences in vulnerability, because FF-MNs degenerate first, followed by FR-MNs, and the more resistant S-MNs degenerate later. Similarly, these cells also display differences in excitability and ER stress.
A hypothetical model is presented in Figure 5, summarizing the possible molecular mechanisms involved in MN vulnerability in ALS. The regulation of synaptic plasticity and neuronal excitability may underlie susceptibility in ALS involving nuclear-cytoplasmic defects, ER stress, transport dysfunction and mitochondrial alterations. From an initial site of onset, neurodegeneration begins in susceptible MN groups, and then spreads contiguously throughout the neuroanatomy, in a defined pattern, to the surrounding cells. This therefore highlights the role of impaired neurotransmission in triggering and propagating neurodegeneration in ALS. Glial cells are involved in both the onset and progression of ALS.
The susceptibility of specific MN groups, however, is further complicated by the heterogeneous nature of ALS, even within the same families, and the different patterns of motor involvement. Stratification of ALS patients into distinct subtypes and investigation of MN susceptibilities may in the future reveal more insight into why specific groups of MNs degenerate first in ALS. However, the blurred boundaries between some neurodegenerative disorders, including ALS and FTD, and the presence of C9orf72 mutations in several other neurodegenerative conditions as well as ALS, are another confounding factor. Understanding the fundamental mechanisms dictating MN vulnerability in ALS is central to our understanding of this devastating disorder. Hence, studies in this area may lead to novel therapeutic insights.
AUTHOR CONTRIBUTIONS
MV wrote the "Site-Specific Onset and Spread of Neurodegeneration in ALS" section. MJ wrote the "Role of Glial Cells in Driving Disease Progression" section. SS wrote the "Aging" section. AR conceived and prepared the figures, and wrote the "Introduction," "Anatomy of the Motor System," "Genetic Mutations and Risk Factors in ALS," and "Intrinsic Factors Specific to MN Subpopulations" sections. JA conceived the article, wrote the "Conclusion" section, contributed text in numerous sections, and edited the manuscript throughout for content and style consistency.
Mesoporous silica nanoparticles for drug and gene delivery
Mesoporous silica nanoparticles (MSNs) are attracting increasing interest for potential biomedical applications. With tailored mesoporous structure, huge surface area and pore volume, selective surface functionality, as well as morphology control, MSNs exhibit high loading capacity for therapeutic agents and controlled release properties when modified with stimuli-responsive groups, polymers or proteins. In this review article, the applications of MSNs in pharmaceutics to improve drug bioavailability, reduce drug toxicity, and achieve cell-targeted delivery are summarized. In particular, the exciting progress in the development of MSN-based delivery systems for poorly soluble drugs, anticancer agents, and therapeutic genes is highlighted.
Introduction
In recent years, there has been rapid growth in the area of biomedicine, particularly in exploring new drug/gene delivery systems. More recently, nanotechnology has emerged as a promising approach and has motivated researchers to develop nanostructured materials. Among the various nanostructured materials, mesoporous silica nanoparticles (MSNs) have become a new generation of inorganic platforms for biomedical applications.
MSNs with uniform pore size and a long-range ordered mesoporous structure were first introduced by Mobil corporation scientists in 1992 1 . In general, supramolecular assemblies of surfactants are necessary in the synthesis of MSNs. Usually, the surfactant will self-aggregate into micelles at a concentration higher than the critical micelle concentration (CMC). Then, the silica precursors can condense at the surface of the micelles forming an inorganic-organic hybrid material. Finally, the template surfactant can be removed either by calcination or by solvent extraction to generate pores (Fig. 1). The resulting silica-based mesoporous matrices may offer the following unique structural and biomedical properties: 1) Ordered porous structure. MSNs have a long-range ordered porous structure without interconnection between individual porous channels, which allows fine control of the drug loading and release kinetics (Fig. 2).
2) Large pore volume and surface area. The pore volume and surface area of MSNs are usually above 1 cm 3 /g and 700 m 2 /g, respectively, showing high potential for molecule loading and dissolution enhancement.
3) Tunable particle size. The particle size of MSNs can be controlled from 50 to 300 nm, which is suitable for facile endocytosis by living cells. 4) Two functional surfaces. MSNs have two functional surfaces, namely cylindrical pore surface and exterior particle surface. These silanol-contained surfaces can be selectively functionalized to achieve better control over drug loading and release 2 . Moreover, the external surface can be conjugated with targeting ligands for efficient cell-specific drug delivery.
5) Good biocompatibility. Silica is "Generally Recognized As Safe" by the United States Food and Drug Administration (FDA). Recently, silica nanoparticles in the form of Cornell dots (C dots) received FDA approval for a stage I human clinical trial for targeted molecular imaging 3,4 . It was reported that MSNs exhibited a three-stage degradation behavior in simulated body fluid 5 , suggesting that MSNs might degrade after administration, which is favorable for cargo release. Several in vivo biodistribution studies of MSNs have been reported recently 6,7 . Liu et al. 6 evaluated the systemic toxicity of MSNs after intravenous injection of single and repeated doses to mice. The results of clinical features, pathological examinations, mortalities, and blood biochemical indexes indicated low in vivo toxicity of MSNs. It was also reported that MSNs were mainly excreted through feces and urine following different administration routes 7 .
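As a rough illustration of how the pore-volume figures in property 2 translate into drug loading capacity, the sketch below (a back-of-the-envelope estimate, not taken from the reviewed studies; the 1.3 g/cm³ amorphous drug density is an assumed value) computes an upper bound on loading assuming the mesopores are completely filled:

```python
def max_drug_loading(pore_volume_cm3_per_g: float, drug_density_g_per_cm3: float) -> float:
    """Upper bound on drug loading (mass fraction of the drug-silica composite),
    assuming the mesopores are completely filled with amorphous drug."""
    drug_mass_per_g_silica = pore_volume_cm3_per_g * drug_density_g_per_cm3
    return drug_mass_per_g_silica / (1.0 + drug_mass_per_g_silica)

# A typical MSN pore volume (~1 cm^3/g) and an assumed amorphous drug
# density of 1.3 g/cm^3 give a loading ceiling of roughly 57% w/w.
print(f"{max_drug_loading(1.0, 1.3):.1%}")  # → 56.5%
```

Real loadings are lower, since pore filling is rarely complete and some drug adsorbs on the external surface instead.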
These unique features make MSNs excellent candidates for controlled drug/gene delivery systems. Since the first report using MCM-41 type MSNs as a drug delivery system by Vallet-Regi et al. 8 in 2001, research on the biomedical applications of MSNs has steadily increased, with an exponential rise in the last decade. Various mesoporous materials with different porous structures and functionalities have been developed for controlled and targeted drug/gene delivery. Here, we give an overview of the recent research progress and future development of MSNs in biomedical applications, focused particularly on the practical applications of MSNs as delivery systems for poorly soluble drugs, anticancer agents, and therapeutic genes. Based on this review, we have also included our perspectives on the further applications of MSNs.
Mesoporous silica-based system for poorly soluble drugs
With the increasing number of innovative new drugs in development, almost 70% of new drug candidates exhibit low aqueous solubility, ultimately resulting in poor absorption 9 . In an attempt to overcome this solubility obstacle and to improve oral bioavailability, a growing number of drug delivery technologies have been developed. Presently, nanotechnology is attracting increasing attention as it can be applied in two ways 10 : processing the drug itself into nano-sized particles or preparing drug-containing nanoparticles from various materials. With outstanding features including a huge surface area and an ordered porous interior, mesoporous silica can be used as an excellent drug delivery carrier for improving the solubility of poorly water-soluble drugs [11][12][13][14] and subsequently enhancing their oral bioavailability [15][16][17] .
When water-insoluble drug molecules are confined in mesoporous silica, the spatial confinement within the mesopores can suppress crystallization and keep the drug amorphous 18 . Compared with the crystalline form, the amorphous drug has a lower lattice energy to overcome, resulting in an improved dissolution rate and enhanced bioavailability 15,19 . Moreover, the huge hydrophilic surface area of mesoporous silica facilitates the wetting and dispersion of the stored drug, resulting in fast dissolution 20 . In one example, the poorly water-soluble drug clotrimazole was loaded into MSU-H type mesoporous silica using supercritical CO 2 21 . The experimental and theoretical results indicated that clotrimazole was not crystalline and that the drug molecules were homogeneously distributed in the mesopores. He et al. 22 also reported that the solubility of paclitaxel was significantly enhanced after being loaded into MSNs. The 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) assay revealed that paclitaxel-loaded mesoporous silica nanoparticles exhibited obvious cytotoxicity in HepG2 cells compared with free paclitaxel. SBA-15 mesoporous silica was successfully used to accelerate the dissolution rate of furosemide, a representative class IV drug according to the Biopharmaceutical Classification System (BCS) 23 . About 71% of the drug was released from the SBA-15-based preparation within 2 h of dissolution, whereas only 49% was released from the commercial product Lasix. In addition, when the dissolution medium was changed from pH 3.0 to pH 6.8, the drug was rapidly and completely released from the inclusion preparation, whereas only 83% of the drug was released from the commercial product over the whole test.
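Dissolution comparisons like the furosemide example above are commonly quantified with the FDA f2 similarity factor between the two profiles. A minimal sketch (the example profiles are illustrative, not data from the cited study):

```python
import math

def f2_similarity(ref, test):
    """FDA f2 similarity factor between two dissolution profiles (% dissolved
    at the same time points); f2 >= 50 is conventionally read as 'similar'."""
    n = len(ref)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))

# Illustrative profiles loosely patterned on the comparison above
silica_prep = [45, 60, 71, 85, 95]   # % released, silica-based preparation
tablet = [25, 38, 49, 65, 80]        # % released, conventional tablet
print(round(f2_similarity(silica_prep, tablet), 1))
```

Identical profiles give f2 = 100; large divergences like the one above fall well below the 50 cut-off, i.e. the profiles are dissimilar.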
Several factors can influence drug release rates from MSNs. Pore size plays an important role, since drug release is mainly controlled by diffusion 24 . Jia et al. 25 prepared paclitaxel-loaded MSNs with pore sizes ranging from 3 to 10 nm. The in vitro drug release test showed that the release rate decreased as the pore size changed from 10 to 3 nm, likely because paclitaxel loaded in relatively small pores has less opportunity to escape from the pores and diffuse into the release medium. The effect of pore size on the drug release rate was further verified in a celecoxib-loaded mesoporous silica system: the release rate of celecoxib from the mesopores increased with increasing pore size (3.7-16.0 nm) 26 . In addition, surface chemistry is another factor that can influence the drug release rate. Ahmadi et al. 27 loaded ibuprofen into amino-modified SBA-15. Compared with unmodified SBA-15, the release rate from amino-modified SBA-15 was much slower, owing to the interaction between the carboxyl groups of ibuprofen and the amino groups of the amino-modified SBA-15. A hollow structure was also reported to retard drug release from mesoporous silica nanoparticles 28 . Furthermore, during degradation, the highly ordered hexagonal mesoporous structure degrades into a disordered network in which the walls are partly destroyed 5 , which might affect the release of drug cargo loaded in MSNs.
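Since release from mesopores is mainly diffusion-controlled, release data are often checked against the Higuchi model, Q(t) = k·√t: a straight line of cumulative release versus √t supports diffusion control. A minimal fitting sketch (the data are synthetic, generated for illustration):

```python
import math

def higuchi_k(times_h, released_pct):
    """Least-squares Higuchi constant k for Q(t) = k * sqrt(t), fitted through
    the origin; a common first check that release is diffusion-controlled."""
    numerator = sum(math.sqrt(t) * q for t, q in zip(times_h, released_pct))
    denominator = sum(times_h)
    return numerator / denominator

# Synthetic diffusion-controlled profile: Q = 20 * sqrt(t) (% released)
t = [0.25, 1.0, 2.0, 4.0, 9.0]
q = [20.0 * math.sqrt(x) for x in t]
print(round(higuchi_k(t, q), 6))  # → 20.0
```

In practice one would also compare the residuals against alternative models (zero-order, first-order, Korsmeyer-Peppas) before concluding the mechanism.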
To obtain a suitable release rate and high bioavailability of poorly soluble drugs from mesoporous silica, mesoporous silica has been combined with other materials into different kinds of formulations. Chen et al. 29 constructed a liquisolid formulation in which liquid polyethylene glycol 400 (PEG400) and the model drug carbamazepine were absorbed into mesoporous silica to achieve improved adsorption capacity and high drug loading. The obtained liquisolid system was mixed with starch slurry, then granulated, and filled into gelatin capsules. The in vivo study demonstrated that the bioavailability of the liquisolid capsules was improved to 182.7% relative to the commercial carbamazepine tablets. Hu and co-workers 30 encapsulated felodipine-loaded MSNs with chitosan and acacia through a layer-by-layer self-assembly method. The release rate of felodipine decreased with the increase in the number of chitosan/acacia bilayers coated on the MSNs. The production of immediate-release carbamazepine pellets based on mesoporous silica SBA-15 using an extrusion/spheronization method was reported by Wang et al. 31 . The dissolution results showed that the incorporation of drug-loaded SBA-15 into pellets did not change the in vitro release behavior. Moreover, the oral bioavailability of the pellets was 1.57-fold higher than that of fast-release commercial tablets in dogs (P<0.05). In another study, MSNs were formulated into hydrogel beads with a polysaccharide matrix, resulting in a sustained drug release profile maintained for 24 h 32 .
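Relative bioavailability figures such as the 182.7% and 1.57-fold values above are ratios of plasma AUCs between the test and reference formulations. A minimal sketch using the linear trapezoidal rule (all concentration values are invented for illustration):

```python
def auc_trapezoid(times, conc):
    """Area under the plasma concentration-time curve (linear trapezoidal rule)."""
    return sum((times[i + 1] - times[i]) * (conc[i] + conc[i + 1]) / 2.0
               for i in range(len(times) - 1))

def relative_bioavailability(times, conc_test, conc_ref):
    """F_rel = AUC_test / AUC_ref, assuming equal doses of the same drug."""
    return auc_trapezoid(times, conc_test) / auc_trapezoid(times, conc_ref)

# Invented profiles: the test formulation roughly doubles exposure
t = [0, 0.5, 1, 2, 4, 8]             # sampling times, h
ref = [0, 1.2, 1.8, 1.5, 0.8, 0.2]   # µg/mL, reference tablet
test = [0, 2.3, 3.5, 3.1, 1.7, 0.4]  # µg/mL, silica-based formulation
print(f"{relative_bioavailability(t, test, ref):.0%}")
```

A full pharmacokinetic analysis would also extrapolate AUC to infinity and normalize for dose, which this sketch omits.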
Mesoporous silica-based system for cancer therapy
Recently, the combination of nanotechnology with drug delivery in the field of cancer therapy has been a research hotspot. The defective vascular architecture and impaired lymphatic drainage/recovery system of tumors allow small nanocarriers and macromolecules to extravasate the endothelial barrier and accumulate in tumor tissues 33 . Owing to this so-called enhanced permeability and retention (EPR) effect, the passive targeting of nanocarriers can be partially achieved 34 . Though organic nanocarriers such as nanocapsules 35 , liposomes 36 , polymeric micelles 37 , and nanoparticles 38 can easily encapsulate anticancer drugs, their physicochemical instability and unexpected drug leakage have severely impeded their application. In contrast, inorganic silicate (SiO 2 ) carriers have several merits, such as excellent biochemical and physicochemical stability, biocompatibility, and degradability 39 . Among the recent breakthroughs that brought new exciting possibilities to this area, MSNs have commonly been suggested as effective carriers for anticancer drugs because of their excellent drug delivery and endocytotic behaviors 40,41 . In this part, we review the applications of MSNs in cancer therapy.
Pathways for the cellular internalization of MSNs
Since the cell membrane is the biggest barrier for intracellular anticancer drug delivery, it is important to thoroughly investigate the cellular internalization and intracellular trafficking of MSNs as drug carriers.
Generally, the uptake pathways can be divided into two groups: phagocytosis and pinocytosis (macropinocytosis and endocytosis) 42 . Phagocytosis usually occurs in specialized cells (professional phagocytes) such as monocytes, neutrophils, macrophages, and dendritic cells, for particles with a minimum size of 1 μm 43 . Small nanoparticles (<200-300 nm) are usually taken up by cells via endocytic pathways, which involve various routes such as clathrin-mediated, caveolae-mediated, or clathrin- and caveolae-independent mechanisms, depending on the cell type, particle size, particle shape, particle surface charge, and even culture conditions 44 .
Since most endocytic pathways are energy dependent, use of an inhibitor or a method of energy depletion can directly identify an endocytic pathway. It was reported that incubating KB cells with MSNs at 4°C significantly impeded cellular uptake, and internalization also markedly decreased in the presence of sodium azide 45 . These findings demonstrated that the uptake of MSNs by KB cells was an energy-dependent endocytic process. To further investigate the role of specific endocytic pathways involved in the cellular internalization of MSNs, KB cells were pre-incubated with a series of metabolic inhibitors, including chlorpromazine (inhibits the formation of clathrin vesicles), nystatin (binds sterols and disrupts the formation of caveolae), and cytochalasin D (inhibits clathrin- and caveolae-independent endocytosis). On this basis, the authors proposed that the uptake of MSNs into KB cells occurred predominantly via clathrin-mediated endocytosis and required energy. Similar results were found in A549 46,47 , PANC-1 48 , and 3T3-L1 cells 49 . Other researchers 50,51 also reported that MSNs were taken up by Hela cells through caveolae-mediated endocytosis.
Intracellular trafficking of MSNs
After penetrating the cell membrane barrier, MSNs need to reach the cytoplasm to release therapeutic drugs. Biological transmission electron microscopy (Bio-TEM) is usually adopted to observe the intracellular distribution of MSNs after endocytosis [52][53][54] . It was found that MSNs were transported to large vesicular endosomes after internalization, which then fused with lysosomes. The membranes of the endosomes/lysosomes were eventually disrupted, suggesting that the nanoparticles could escape from the endosomes/lysosomes. In addition, a large number of nanoparticles were observed in the cytoplasm maintaining their spherical morphology. No particles were found in the nucleus.
The trafficking of MSNs inside cells can also be studied by confocal fluorescence microscopy using stained cells and fluorescently labeled MSNs. Lu et al. 55 used acridine orange (AO), which specifically stains acidic organelles (endosomes and lysosomes) red but stains other cellular regions green. The green fluorescence of the labeled MSNs overlapped mostly with the red fluorescence of AO, giving yellow fluorescence, which indicated that MSNs were mainly internalized into the acidic organelles. Lin et al. 56 stained the endosomes with a red endosome marker (FM 4-64) and observed Hela cells after incubation with green fluorescent FITC-cytochrome c-labeled MSNs using a confocal fluorescence microscope (Fig. 3). Interestingly, after 24 h of incubation, no yellow spots were observed, indicating that there was no overlap between the red endosomes and the green MSNs. This suggested that MSNs could escape from endosomal entrapment. Recently, Tang and co-workers 57 showed that differently shaped MSNs-PEG were internalized into cells and partially located in the acidic organelles, and the green fluorescence observed inside the cytoplasm also suggested the nanoparticles could successfully escape from the endosomes/lysosomes.
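Colocalization observations like the yellow-overlap readouts above are often quantified with Pearson's correlation coefficient between the two fluorescence channels. A minimal sketch over flattened pixel intensities (the arrays are toy values, not real image data):

```python
import math

def pearson_colocalization(green, red):
    """Pearson correlation between two fluorescence channels (flattened pixel
    intensities); values near +1 indicate strong colocalization, e.g. labeled
    MSNs overlapping AO-stained acidic organelles."""
    n = len(green)
    mg, mr = sum(green) / n, sum(red) / n
    cov = sum((g - mg) * (r - mr) for g, r in zip(green, red))
    var_g = sum((g - mg) ** 2 for g in green)
    var_r = sum((r - mr) ** 2 for r in red)
    return cov / math.sqrt(var_g * var_r)

# Toy 8-pixel channels: the red channel tracks the green one closely
green = [10, 80, 75, 12, 90, 8, 70, 15]
red = [12, 85, 70, 10, 95, 9, 72, 18]
print(round(pearson_colocalization(green, red), 2))
```

On real images the channels would be NumPy arrays from a background-subtracted micrograph; Manders coefficients are a common intensity-threshold-based alternative.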
MSNs as anticancer drug delivery vehicles
With porous interiors and large surface areas, MSNs can be used as reservoirs to store different molecules with high loading capacity and tunable release mechanisms. As a promising drug delivery system, the pore size of MSNs can be tailored to selectively load either hydrophobic or hydrophilic anticancer agents, and their size and shape can be controlled to maximize cellular internalization. The cytotoxic effect of camptothecin (CPT)-loaded MSNs on several cancer cell lines was evaluated 55 , and clear growth inhibition was found in three pancreatic cancer cell lines (Capan-1, PANC-1, AsPc-1), one stomach cancer cell line (MKN45) and one colon cancer cell line (SW480). Tao et al. 58 reported that, when loaded into MSNs, transplatin, an inactive isomer of cisplatin, exhibited enhanced cytotoxicity similar to that of cisplatin on Jurkat cells after 24 h of exposure. This work indicated that even less potent anticancer drugs could become biomedically effective after proper combination with MSNs.
Active targeting therapy using MSNs
Over the last decade, the development of MSNs as anticancer drug delivery systems has been mainly based on the premise that the tailored nanoparticles can store a high volume of chemotherapeutics in their pores and accumulate in tumor tissues, achieving passive targeting via the EPR effect. To enhance the uptake of MSNs in targeted cells, MSNs have been conjugated with various targeting ligands that have specific affinity for receptors over-expressed on the surface of cancer cells, including folic acid 59 . Successful specific drug delivery to cancer cells has been reported by Sarkar and coworkers 59 . Quercetin-encapsulated MSNs (Q-MSNs) modified with folic acid exhibited increased cellular uptake and higher cytotoxicity in breast cancer cells. In another study, significant improvement of tumor suppression in vivo was also achieved by folic acid-modified MSNs 60 . Ma et al. 72 synthesized hyaluronic acid-conjugated MSNs (MSNs-HA) by a facile amidation reaction. The cellular uptake study showed that MSNs-HA were more effectively endocytosed by CD44-positive cancer cells (Hela cells) through a receptor-mediated endocytosis mechanism. In contrast, no selective endocytosis of MSNs-HA was found in CD44-negative cells, such as L929 and MCF-7 cells. The model drug CPT loaded in the nanoparticles exhibited enhanced cytotoxicity to Hela cells.
Environment-responsive therapy using MSNs
Although vast effort has been devoted to active targeting therapy using MSNs, the delivery efficacy still needs to be strengthened. During the blood circulation and penetration into tumor matrix, anticancer drugs may leak from mesopores of MSNs, leading to insufficient drug concentration at the tumor site. To overcome this obstacle, "smart" MSNs-modified with environment-responsive gatekeepers were designed. Since the microenvironment of tumor tissue differs from that of normal tissue (e.g., acidic pH [4.5-6.5], high concentration of glutathione [2-10 mmol/L] and high temperature [40-42°C] 75 ), environment-specific drug release at a tumor site is envisioned upon removal of gatekeepers.
According to the microenvironment of cancer cells, the "smart" environment-responsive gatekeepers of MSNs can be divided into pH-responsive gatekeepers [76][77][78] , redox-responsive gatekeepers [79][80][81][82] , temperature-responsive gatekeepers [83][84][85] and enzyme-responsive gatekeepers 80,86,87 . Cheng et al. 76 designed poly(ethylene glycol)-folic acid-functionalized polydopamine-modified MSNs (MSNs@PDA-PEG-FA) for controlled delivery of doxorubicin (Dox). As illustrated in Fig. 4, when MSNs@PDA-PEG-FA were dispersed under acidic conditions, the PDA film was destroyed and the loaded doxorubicin was released rapidly. The in vivo experiments indicated that this system exhibited superior antitumor effects. Li and co-workers 79 developed a glutathione-responsive MSN system. The gatekeeper (an RGD-containing peptide) was conjugated to the surface of the MSNs by disulfide bonds, which could be cleaved by the high concentration of glutathione at the tumor site, leading to a burst release of doxorubicin.
To improve the controlled release of antitumor drugs, MSNs have been designed to be sensitive to multiple stimuli. Zhao and colleagues 80 developed a redox- and enzyme-responsive doxorubicin delivery system based on MSNs. The in vitro experiments demonstrated that the release of doxorubicin was dependent upon glutathione and hyaluronidase. Moreover, the anticancer effects of doxorubicin were enhanced in HCT-116 cells compared with free doxorubicin.
MSN systems based on photodynamic and photothermal therapy have also shown great potential in cancer treatment, exerting a therapeutic effect following irradiation with a near-infrared (NIR) laser. Compared with microenvironment-responsive systems, NIR-responsive systems can achieve remote spatiotemporal control and on-demand drug release. Qian et al. 88 synthesized mesoporous-silica-coated zinc phthalocyanine nanoparticles. Zinc phthalocyanine, a photosensitizer, can convert NIR light to visible light and then release reactive singlet oxygen to kill cancer cells. It was demonstrated that the photosensitizers loaded into mesoporous silica were protected from degradation in the biological environment and could continuously produce singlet oxygen under NIR irradiation. Yang and colleagues 89 developed mesoporous silica-encapsulated gold nanorods (GNRs@mSiO 2 ) as a doxorubicin delivery system as well as a photothermal conversion system. The results showed that the combined treatment had a higher therapeutic efficacy for cancer therapy compared with either chemotherapy or photothermal treatment alone.
Overcoming multidrug resistance
Multidrug resistance (MDR) is a major obstacle in cancer chemotherapy and severely impedes the efficacy of anticancer drugs. Drug resistance at tumor tissues is complicated, and usually involves multiple dynamic mechanisms. MDR can commonly be divided into two categories, pump and non-pump resistance. Pump resistance mainly refers to the expression of drug efflux pumps, such as P-glycoprotein (P-gp) and multidrug resistance protein (MRP1), which expel many anticancer agents to decrease the intracellular drug concentration. The main non-pump resistance refers to the activation of cellular antiapoptotic defense pathway, such as drug-induced expression of B-cell lymphoma-2 (BCL-2) protein, leading to a decrease in drug sensitivity. Moreover, these two resistance mechanisms can mutually interact. Several design strategies based on the unique properties of MSNs have been utilized to overcome drug resistance. First, nano-scaled MSNs can facilitate cellular uptake, increase intracellular accumulation, and improve drug efficacy. The energy-dependent endocytosis of MSNs can bypass the drug efflux pumps 40,90,91 . Recently, Shi and co-workers 91 confirmed the enhanced cellular uptake and nuclear accumulation of DOX-loaded MSNs in MCF-7/ADR cells, which may have resulted from bypassing the drug efflux mechanism and/or down-regulation of P-gp by MSNs. The IC 50 of Dox-loaded MSNs against MCF-7/ADR cells was 8-fold lower than that of free DOX, which demonstrated that MSNs increased the suppression of cell proliferation by DOX in ADR cells.
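IC50 comparisons such as the 8-fold shift reported above require estimating IC50 from a dose-response curve; full analyses fit a four-parameter logistic, but a quick estimate interpolates log-linearly between the doses bracketing 50% viability. A hypothetical sketch (the dose-viability data below are invented for illustration):

```python
import math

def ic50_loglinear(doses, viability_pct):
    """Estimate IC50 by log-linear interpolation between the two doses that
    bracket 50% viability (doses ascending, viability descending)."""
    pairs = list(zip(doses, viability_pct))
    for (d1, v1), (d2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 50 >= v2:
            frac = (v1 - 50) / (v1 - v2)
            return 10 ** (math.log10(d1) + frac * (math.log10(d2) - math.log10(d1)))
    raise ValueError("50% viability is not bracketed by the data")

# Invented MTT-style data (dose in µmol/L vs. % viable cells)
free_drug = ic50_loglinear([0.1, 1, 10, 100], [95, 80, 55, 25])
loaded = ic50_loglinear([0.1, 1, 10, 100], [85, 52, 20, 5])
print(f"IC50 fold change: {free_drug / loaded:.1f}")
```

Interpolation on the log-dose axis matches the usual semi-log presentation of dose-response curves; a fitted Hill equation would additionally give the curve slope and confidence intervals.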
Another advantage of MSNs is the ability to co-deliver different agents, such as antitumor drugs and MDR reversal agents. Jia et al. 92 fabricated MSNs for co-delivery of paclitaxel (PTX) and tetrandrine (TET) to overcome the MDR of MCF-7/ADR cells. As shown in Fig. 5, TET could inhibit P-gp efflux to enhance the antitumor activity of PTX. Many researchers have also used MSNs to deliver chemotherapeutic agents together with nucleic acids. Nucleic acids provide the opportunity to silence the genes responsible for drug resistance, such as the drug efflux transporter gene P-gp 93,94 and the antiapoptotic protein gene BCL2 95 , thereby restoring the intracellular drug concentration required for effective apoptosis and cytotoxicity. In another study 94 , MSNs were functionalized to effectively deliver the anticancer drug DOX as well as P-gp siRNA to MDR cells (KB-V1 cells). It was found that the dual delivery system significantly increased the intracellular and intranuclear drug concentrations compared with free DOX or DOX delivered alone by MSNs.
In addition, MSNs have been designed as stimulus-responsive drug delivery systems to control drug release and increase the accumulation of antitumor agents in nuclei of cancer cells. Wang and coworkers 96 prepared sericin-coated MSNs with pH and protease-responsive properties, which could deliver doxorubicin into perinuclear lysosomes of cancer cells, leading to burst release of doxorubicin into cell nuclei. These doxorubicin-loaded MSNs inhibited the growth of MCF-7/ADR tumor by 70%, showing that this system could effectively overcome MDR in vivo.
It is currently thought that an ideal nuclear-targeted nanoparticle drug delivery system can effectively overcome MDR. Recently, MSNs were modified with the trans-activator of transcription (TAT) peptide to construct a nuclear-targeted anticancer drug delivery system [97][98][99] . This novel TAT peptide-modified MSN (MSNs-TAT) system facilitated intranuclear localization in multidrug-resistant MCF-7/ADR cancer cells and released the drug directly into the nucleoplasm. As illustrated in Fig. 6, the authors also constructed an MSN-based vasculature-to-cell-membrane-to-nucleus sequential drug delivery strategy exploiting RGD and TAT dual peptides as targeting ligands 99 . RGD/TAT peptide-modified MSNs (MSNs-RGD/TAT) first bound to the tumor vasculature and then to the cell membrane. Finally, the TAT served as a nuclear targeting ligand for enhanced nuclear uptake. This sequential targeting system remarkably enhanced the therapeutic efficacy in vivo.
Mesoporous silica-based system for gene delivery
Besides conventional drug delivery, mesoporous silica can also be applied as a carrier for gene transfection. It is well known that carriers play an important role in gene delivery, since naked nucleic acids show little penetration of cell membranes 100 . There are two main gene delivery systems, namely viral and non-viral systems. The more effective viral systems face significant safety concerns, such as immunogenicity, gene recombination 101 , and nonspecificity 102 . The non-viral systems, including cationic compounds 103 , recombinant proteins 104 , polymeric 105,106 and inorganic nanoparticles 107 , have been widely studied in recent years. However, cationic materials are often associated with high toxicity, and recombinant proteins show a low cost-performance ratio 108 . Though liposomes have attracted much attention and can provide efficient gene transfection, their main drawback is instability. Inorganic nanoparticles possess several advantages over the others, such as simple preparation and surface functionalization, good biocompatibility, and excellent physicochemical stability. Among the various materials, MSNs are particularly attractive due to their unique properties. Therefore, MSNs are considered a promising vehicle for gene delivery to increase cell uptake and transfection efficiency.
Gene delivery by positive charge-functionalized MSNs
Untreated MSNs often possess a negative charge due to ionization of the surface silanol groups, which reduces binding to negatively charged nucleic acids such as DNA. Therefore, silica nanoparticles are usually modified to express a net positive charge by methods including amination modification, metal cation co-delivery, and cationic polymer functionalization. Use of these modified MSNs promotes gene loading through enhanced electrostatic interactions with nucleic acids.
Amination modification is a simple and common approach to enhance the gene loading capacity of MSNs; 3-aminopropyltriethoxysilane (APTES) [109][110][111][112] and aminopropyltrimethoxysilane (APTMS) 113,114 have been commonly used to modify MSNs. Yang et al. 111 also analyzed and reported a positive correlation between the adsorbed amount of plasmid DNA (pDNA) and the degree of amination.
Metal cations, which can enhance the interactions between DNA and the silica surface, have also been used to facilitate MSN-mediated gene delivery. Solberg and Landry 115 investigated the effect of metal counter-ions on gene adsorption, and found Mg2+ had a higher affinity for DNA than Na+ or Ca2+. However, DNA seemed to bind less strongly with MSNs through metal cations than in the presence of amino groups.

Figure 6 Schematic diagram of vasculature-to-cell membrane-to-nucleus sequential targeting drug delivery based on RGD and TAT peptide co-conjugated MSNs for effective cancer therapy. Reproduced with permission from Pan et al. 100 . Copyright (2014) WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
PEI coating is another efficient method to promote gene transfection by MSNs because of the "proton sponge effect". This approach is thought to facilitate the formulation's escape from endosomes or lysosomes [126][127][128][129] . Xia et al. 116 reported that cationic PEI-coated MSNs exhibited high binding affinity to both DNA and siRNA, as well as a surprisingly high transfection efficiency of up to 70% of cells. The advantages of using PEI for MSN modification were also reported by other groups 93,[118][119][120] . Furthermore, PEI can be conjugated with other molecules before attachment to MSNs to control the gene release 121 .
PLL polymers are commonly used for gene transfer since they can carry large DNA and penetrate cell membranes easily 130,131 with low immunogenicity. Moreover, PLL can be degraded by enzymes to achieve a controlled release behavior 132,133 . Zhu et al. 122 combined PLL with MSNs to form an enzyme-triggered system which could control the release of drug and gene simultaneously.
Poly-L-arginine, composed of a natural amino acid, may be more biocompatible and less toxic than synthetic polycationic polymers such as PAMAM and PEI. Kar et al. 124 proposed a facile synthesis of poly-L-arginine-grafted MSNs, and found that the transfection efficiency reached up to 60% with plasmid DNA.
In conclusion, the positive charges of these modified materials may lead to strong electrostatic interactions with the negatively charged cell membrane, resulting in enhanced particle wrapping and cellular uptake as well as toxicity to cells. Therefore, it is critical to control the amount of cationic polymer used in order to balance the transfection efficiency and toxicity of the modified MSN system for gene delivery.
Gene delivery by pore-enlarged MSNs
To date, MSNs with small pores (<3 nm) 94,116,140,141 , such as MCM-41 (pore size about 2-3 nm), have been studied as potential gene delivery vectors. However, limited by the small pore size, genes or plasmids are primarily adsorbed on the outer surface of MSNs rather than loaded in the pores, leading to burst leakage of genes. In addition, genes located on the outer surface of MSNs cannot be protected from nucleases or lysosomes. Therefore, nanoparticles with large pores have been synthesized to facilitate internal gene storage and protection 100,142 .
The production of MSNs with expanded pores is mainly achieved by temperature control 115,123,142,143 or by pore-enlarging agents 100 . Kim et al. 100 synthesized MSNs with ultra-large pores (~23 nm) simply by using the swelling agent 1,3,5-trimethylbenzene (TMB). The resulting MSNs efficiently protected plasmids from nuclease degradation and exhibited higher transfection efficiency than MSNs with small pores (2.1 nm). Meka et al. 144 fabricated MSNs with large pores (9 nm) using ethanol as a cosolvent and a fluorocarbon-hydrocarbon as the template. After conjugation with hydrophobic octadecyl groups, this type of MSN showed high loading capacity and delivered siRNA efficiently into cancer cells, inhibiting cancer cell proliferation.
Gene delivery by multifunctional MSNs
As briefly mentioned above, nanocarriers provide a great potential for delivering drug-nucleic acid combinations to overcome MDR in cancer treatment 145 . As such, there is an increasing focus on the development of multifunctional delivery systems based on MSNs and other multiple components, including drugs, genes, specific targeting and imaging agents.
Besides modification with cationic materials to enhance biomolecule loading and cell uptake, MSNs have been functionalized with various targeting agents to achieve better applications. Park et al. 118 coupled MSNs with mannosylated polyethylenimine to target macrophage cells bearing mannose receptors as well as to enhance plasmid DNA expression. Peptides, such as luteinising-hormone releasing hormone (LHRH) 146 and SP94 138 , have been used to form multifunctional delivery systems. Ashley et al. 147 developed a new type of nanocarrier (the "protocell") based on mesoporous silica particles and liposomes, modified with a targeting peptide (SP94), a fusogenic peptide (H5WYG), and PEG. These nanocarriers can hold multiple cargos, such as doxorubicin, 5-fluorouracil, cisplatin, and siRNA, forming "cocktails". This system showed significant advantages in stability, targeting specificity and delivery efficiency of multiple components, as well as in dosage reduction. Magnetic nanoparticles have also been widely used to deliver vehicles effectively to target organs or tissues, and even permit magnetic resonance imaging. PLL-functionalized magnetic silica nanospheres with large mesopores (13-24 nm) were synthesized by Gu and co-workers 148 . This platform showed strong adsorption capacity for DNA and efficient cellular delivery of miRNA. Yiu et al. 149 prepared PEI-Fe3O4-MCM-48 particles, which showed 4-fold higher transfection efficiency compared with the commercial reagent Polymag™. Zhang et al. 119 synthesized a multifunctional fluorescent-magnetic polyethyleneimine-functionalized mesoporous silica platform, which allowed fluorescent tracking and magnetically guided siRNA delivery simultaneously.
Conclusions and perspectives
During the last decade, MSNs have exhibited many attractive features that can be synergistically exploited in the development of drug/gene delivery systems. It has been demonstrated that MSNs can improve the dissolution rate and bioavailability of water-insoluble drugs based on the following features: 1) the noncrystalline state of drug entrapped in the mesopores; 2) high dispersibility with a large surface area; 3) wettability enhancement by the hydrophilic surface of MSNs. Moreover, several factors can influence the drug release rate from MSNs, including pore size, surface chemistry and hollow structure.
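The link between large surface area and faster dissolution can be illustrated with the classical Noyes-Whitney relation from pharmaceutics textbooks (standard background, not a result of this review); all numerical values below are arbitrary example values, not data from the cited studies:

```python
# Noyes-Whitney dissolution model: dC/dt = D*A/(V*h) * (Cs - C).
# Illustrates why drug confined in MSN mesopores (large surface area A)
# dissolves faster than a compact crystal of the same drug.
def noyes_whitney_rate(D, A, V, h, Cs, C):
    """Dissolution rate dC/dt.

    D  -- diffusion coefficient of the drug
    A  -- surface area of the dissolving solid
    V  -- volume of the dissolution medium
    h  -- thickness of the diffusion boundary layer
    Cs -- saturation solubility; C -- current bulk concentration
    """
    return D * A / (V * h) * (Cs - C)

# Same drug, same medium; only the exposed surface area differs (50x).
slow = noyes_whitney_rate(D=1e-6, A=1.0, V=1.0, h=1e-3, Cs=1.0, C=0.0)
fast = noyes_whitney_rate(D=1e-6, A=50.0, V=1.0, h=1e-3, Cs=1.0, C=0.0)
print(fast / slow)  # 50x faster, proportional to surface area
```

In this idealized model the initial dissolution rate scales linearly with surface area, which is consistent with the dispersibility argument above.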
Especially for cancer therapy, MSNs have shown obvious advantages for delivery of chemotherapeutic agents over other nanocarriers, such as excellent drug loading capacity and endocytotic behavior. The external surfaces of MSNs can be further modified with various tumor-recognition molecules and stimuli responsive molecules to enhance the therapeutic effect of antitumor agents. Moreover, the energy-independent endocytosis and co-delivery ability of MSNs can overcome the MDR in cancer cells.
As for gene delivery, MSNs possessing large pores have been designed to encapsulate abundant genes and protect them from nucleases. Through cationic modification, MSNs are able to complex with genes and deliver them successfully into various cells. In addition, multifunctional systems based on MSNs show great potential in controlled drug/gene delivery.
Despite the recent extensive research into the development of MSN-based carriers for drug/gene delivery, critical issues need to be addressed to facilitate their further development. In particular, the biocompatibility, degradability and pharmacokinetics of these materials should be systematically investigated. The in vivo therapeutic benefits of MSN-based systems should be rigorously and extensively demonstrated. Essential information regarding circulation properties in blood, clearance time in the body, possible immunogenicity and accumulation in tissues should be obtained before the clinical translation of MSNs. Given the satisfactory resolution of these issues, MSN-based formulations may make exciting breakthroughs in the treatment of many important diseases and disorders.
Will legal international rhino horn trade save wild rhino populations?
Wild vertebrate populations all over the globe are in decline, with poaching being the second-most-important cause. The high poaching rate of rhinoceros may drive these species into extinction within the coming decades. Some stakeholders argue for lifting the ban on international rhino horn trade to potentially benefit rhino conservation, as current interventions appear to be insufficient. We reviewed scientific and grey literature to scrutinize the validity of the reasoning behind the potential benefit of legal horn trade for wild rhino populations. We identified four mechanisms through which legal trade would impact wild rhino populations, of which only the increased revenue for rhino farmers could potentially benefit rhino conservation. Conversely, the global demand for rhino horn is likely to increase to a level that cannot be met solely by legal supply. Moreover, corruption is omnipresent in countries along the trade routes, which has the potential to negatively affect rhino conservation. Finally, programmes aimed at reducing rhino horn demand will be counteracted by trade legalization, which removes the stigma on consuming rhino horn. Combining these insights and comparing them with criteria for sustainable wildlife farming, we conclude that legalizing rhino horn trade will likely negatively impact the remaining wild rhino populations. To preserve rhino species, we suggest prioritizing the reduction of corruption within rhino horn trade, increasing the rhino population within well-protected 'safe havens' and implementing educational programmes and law enforcement targeted at rhino horn consumers.
Introduction
The majority of wild vertebrate populations are in severe decline and one-third of all mammal and bird species are currently under threat from unsustainable subsistence hunting, poaching and wildlife trade (IPBES, 2019; Rivalan et al., 2007; Scheffers et al., 2019). Large-scale poaching operations are taking place all over the world, heavily impacting the remaining numbers of rhinoceros, elephants, vultures, pangolins and numerous other animal species (Conrad, 2012; Fischer, 2004; Rademeyer, 2016). Their horns, tusks, claws, scales, bones and other body parts are smuggled in large quantities mainly to Southeast and East Asia, where they are processed into products that function as status symbols and traditional medicines (Milliken and Shaw, 2012). Illegal trafficking of animal products, e.g., rhino horn, is often undertaken by international crime groups, which can be either opportunistically formed collectives or structured, organised networks that may have ties to or be involved with the conservation, tourism and/or trophy hunting industries (Ayling, 2013; Rademeyer, 2016, 2012; Van Uhm, 2012). Rhino horns especially are extremely valuable on the black market, being sold for between US $ 30,000 and 65,000 per kg in Vietnam, making them worth more than gold, heroin or cocaine (Rademeyer, 2016; Van Uhm, 2012). The poachers may be locals living near nature reserves, who can earn between US $ 500 and 20,000 per poached rhino, depending on the role they fulfil (Rademeyer, 2016). However, there seems to be a trend towards more professionally outfitted and trained poachers (Van Uhm, 2016). Rhino horns are also harvested via 'pseudo-hunting', using rhino trophy hunting as a cover-up for the illegal killing and trafficking of rhino horns to Southeast Asian markets (Ayling, 2013; Rademeyer, 2016; Van Uhm, 2018a).
The poaching rate of the two African rhinoceros species (the white rhino Ceratotherium simum and black rhino Diceros bicornis) increased significantly since 2007 (Fig. 1), which has generated substantial global concern (African Wildlife Foundation, 2014; Biggs et al., 2013; Milliken and Shaw, 2012; Rubino and Pienaar, 2017). It has been estimated that African rhinos could already become extinct in the wild around the year 2036 (Haas and Ferreira, 2016). In 2010 it was estimated that South Africa was home to 95% (~19,000) of all remaining white rhinos and 40% (~1900) of all black rhinos (Emslie et al., 2016; Rubino and Pienaar, 2017). The survival of the South African rhino population could therefore likely determine the fate of both African rhinoceros species.
The rhino conservation sector, especially in southern Africa, has responded to this alarming extinction risk in a number of ways. First, intensive patrols with anti-poaching rangers are being undertaken, fences have been built or improved around protected areas, scouting drones have been deployed, horns of living rhinos have been equipped with RFID chips and information technology has been included at various levels to stop poaching (Cambron et al., 2015; Conway-Smith, 2013; Penny et al., 2019; SANParks, 2015; Wildlife ACT, 2014). Second, education and awareness campaigns have been set up to decrease the illegal demand for rhino horn (African Wildlife Foundation, 2014; Greenfield and Veríssimo, 2019; Save the Rhino, 2013; Veríssimo and Wan, 2019; WildAct Vietnam, 2019). Third, synthetic horns have been proposed to replace real ones and with that disturb the illegal market (Save the Rhino, 2016a). Fourth, cargo is being checked more intensively for animal body parts and negotiations with Asian governments are taking place to further enforce the ban on domestic sales of rhino horn in an effort to control the illegal trade (Save the Rhino, 2015, 2013). Fifth, horns of living rhinos have been dyed, poisoned or removed to devalue rhino horn (Ferreira et al., 2014; Rubino and Pienaar, 2017; Save the Rhino, 2016b). All these efforts have not been able to stop rhino poaching from taking place, but have possibly assisted in the decrease of rhino poaching events recorded in South Africa from 2014 to 2019 (Fig. 1). However, reduced rates of successfully tracking down rhino, because of their dwindling numbers, may also be invoked as an explanation for the decrease of poaching incidents. Furthermore, some state that the overall decrease of rhino poaching incidents is largely a result of the decrease in poaching in Kruger National Park, where protection was improved in response to the high poaching rate (Rademeyer, 2016).
As a response, rhino poaching incidents have increased in other areas (Rademeyer, 2016), notably in Hluhluwe-Imfolozi Park and private game reserves. Unfortunately, the current poaching rate is still so high that it poses a serious threat to the survival of both African rhino species (Haas and Ferreira, 2016).
With the aim to reduce the rapid population decline of vulnerable species, international commercial trade bans of animal products have been implemented through CITES since 1975 (Ayling, 2013). International rhino horn trade has been banned since 1977, which was followed by a decrease in rhino poaching rate at first (Ayling, 2013). However, the increase in the population size of white rhino between 1977 and 2007 was likely not attributed to this trade ban, but to an increase in private ownership and trophy hunting (Leader-Williams et al., 2005) and the protection in the South African National Parks. Furthermore, the population size of black rhino has decreased substantially since the implementation of the trade ban, from approximately 65,000 individuals in 1970 to 2400 individuals in 1995 (Leader-Williams et al., 2005). It is unlikely that the ban directly led to the increase of black rhino poaching, as this was likely caused by rapid economic and population growth in Southeast Asia. Moreover, the poaching rate of both African rhino species increased dramatically since 2007 despite the trade ban (Fig. 1).
Given the failure of an international trade ban to fully stop rhino poaching, a substantial number of scientists, policy makers, conservationists and rhino owners have argued to lift the current ban on international rhino horn trade as a potential solution for the ongoing rhino poaching crisis (Biggs et al., 2013;Rubino and Pienaar, 2020;Taylor et al., 2017). This was based on the reasoning of "use it, or lose it", as substantiated by the Principles and Guidelines for the Sustainable Use of Biodiversity by the Convention on Biological Diversity (SCBD, 2004). Rhino horn, which is comprised only of keratin, can be harvested with no ill effect to the animal's health (Biggs et al., 2013;Rubino and Pienaar, 2017). However, others are strongly opposed to lifting this ban for both ethical reasons and concerns about a further increase in rhino poaching (Cheung et al., 2018b;Prins and Okita-Ouma, 2013;Save the Rhino, 2018). This topic has been discussed during several CITES meetings, which led to votes in 2016 and 2019 that twice rejected proposals to lift the ban (CITES, 2019; Save the Rhino, 2018). Furthermore, scientists have been studying the potential effects of a rhino horn trade ban lift for approximately two decades now (e.g., Ayling, 2013;Biggs et al., 2013;Cheung et al., 2018b;Conrad, 2012;Fischer, 2004;Rivalan et al., 2007;Taylor et al., 2017). Overall, this debate has become polarized, which has led to an apparent deadlock in the discussion (Committee of Inquiry, 2016; Taylor et al., 2017).
The potential conservation benefit of legalizing an animal product market can be divided into two aspects: 1) a legal competing market could offset poaching, and 2) a legal market could provide the financial viability to keep, protect and breed animal populations (see Appendix). Past cases show that the legal commercialization of animal products can go both ways regarding the conservation of a species, with a (potentially) positive effect in the case of bison meat, crocodilian skins and trophy hunting, but with a (potentially) negative effect for elephant ivory and lion bones (see Appendix). There are thus situational- and/or context-dependent mechanisms that determine how an animal population responds to a legal animal product trade (Tensen, 2016). It is important to gauge how the rhino populations could respond to a legalization of international rhino horn trade.
Here we present an integrative review on the pros and cons of legalizing international rhino horn trade for the sustained preservation of rhinos in the wild by drawing insight, plausible reasoning, modelling results and empirical data from scientific and grey literature of multiple disciplines (Snyder, 2019). In this review, we discuss four mechanisms (in no specific order) that change or come into play if international rhino horn trade would be legalized and how these mechanisms will potentially impact wild rhino populations (Fig. 2). We identified the following mechanisms as the most frequently occurring ones in scientific literature, in grey literature, and in the arguments of conservationists, policy makers and private rhino owners: 1) financial viability for private rhino owners, 2) rhino horn demand, 3) laundering of rhino horns, and 4) behaviour of rhino horn consumers. These four mechanisms were selected by the authors after thoroughly familiarizing themselves with the topic through past work experience and reading top results from literature search engines about wildlife trade and farming, but without strong a priori hypotheses about how each of the mechanisms would influence the study's conclusion. The authors varied in their initial ideas about whether or not rhino horn trade could benefit rhino conservation, thereby limiting a potential researcher bias in the selection of the mechanisms. However, we do not suggest that the selected four mechanisms provide a complete description about what will happen if rhino horn trade is legalized, but we do posit these mechanisms to be of major importance. We collected and studied the literature ad hoc to get a thorough understanding about the mechanisms and how these would influence rhino populations in the case of a horn trade legalization. We did this by first reading the top results from literature search engines while searching for keywords related to these mechanisms and wildlife trade and farming. 
Upon noticing contradictions in views or knowledge gaps, we continued our search by using more specific keywords. These latter search results were often read with the purpose to retrieve an answer on specific questions, in order to get a complete overview of the effects of the mechanisms.
After discussing the four aforementioned mechanisms, we combine our insights into a conclusion where we evaluate each mechanism and whether it will have a positive, negative, or still unknown effect on the future wild rhino population size. We weigh the relative importance of these mechanisms and their potential effect on the wild rhino population through plausible reasoning to come to an overall recommendation about legalizing international rhino horn trade. We conclude by giving suggestions for future research and for a policy agenda that would benefit rhino conservation the most according to our study. In our study we focus primarily on the two African rhino species and often in the setting of South Africa (as South Africa harbours the majority of all rhinos on Earth at present), even though we acknowledge the importance of other countries with rhino populations and the situation of the more rare Asian rhino species. Nevertheless, since illegal wildlife trade is an interlinked and global system, we posit that our review provides a valid overview for the situation of all rhino species by primarily considering the world's largest rhino population as a case study.
Financial viability of private rhino ownership
The majority of South African rhinos (both black and white) currently live in either government-owned national parks or privately owned game reserves and farms (Child et al., 2012; Knight et al., 2015). In national parks, large amounts of money are often spent on wildlife protection, paid for by revenues from tourism as well as by affluent external donors and the state (Annecke and Masubelele, 2016). Privately owned game reserves and farms, on the other hand, need to be financially viable as a business model. For private rhino owners, the revenue from keeping rhinos on their lands traditionally comes from tourism, trophy hunting and live animal sales.

Fig. 2. Conceptual diagram of the international legal rhino horn trade scenario with farmed and wild rhino populations, legal and illegal markets, and four identified mechanisms (as discussed in the four main sections of this study): a) financial viability for private rhino owners, b) rhino horn demand, c) laundering of rhino horns, and d) behaviour of rhino horn consumers. Green arrows represent a potential positive effect (a higher/larger source leads to a higher/larger destination), red arrows a potential negative effect (a higher/larger source leads to a lower/smaller destination) and green/red arrows both a potential positive and negative effect. An improved financial viability for private rhino owners has been hypothesized to benefit both farmed and wild rhino populations; rhino horn demand has been hypothesized to increase with a legal market; laundering has been hypothesized to allow for an increase in illegal horn trade with a legalized market; and it has been hypothesized that programmes aimed at changing the behaviour of rhino horn consumers will be less effective with the existence of a stigma-removing legal rhino horn market. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
When subsidies to commercial agricultural farmers stopped in South Africa in the early 1990s, a large number of farmers reverted to game farming, as South African law allowed for private ownership of wildlife (Child et al., 2012; Taylor et al., 2015). Private wildlife ownership is currently only allowed in South Africa, Namibia and Zimbabwe (Muir-Leresche and Nelson, 2000), where private wildlife owners have to abide by the national nature protection laws. Populations of large game animals have increased in southern Africa through this form of farming (Child et al., 2012). As 80% of the land in South Africa is privately owned (Cousins et al., 2008), it is thought that private ownership of rhino on these lands can play a critical role in the recovery and long-term conservation of the species (Rubino and Pienaar, 2017). It is estimated that 33% of the total rhino population in South Africa is now privately owned (Rademeyer, 2016; Rubino and Pienaar, 2017).
For private rhino owners, the increasing security costs of protecting their rhino from poaching pose a major problem (Rubino and Pienaar, 2020). Income from the traditional sources (tourism, trophy hunting and/or live sales) is in many cases not sufficient to cover the increased costs for protection and at the same time create a financially sustainable enterprise (Minnaar and Herbig, 2018;Rubino and Pienaar, 2017). It is estimated that in 2016, 70 of the approximate 400 private rhino owners in South Africa have removed rhinos from their land due to financial difficulties and the personal security risks posed by poachers, amounting to a loss of about 200,000 ha of land available for rhino conservation (CITES, 2016).
The problem sketched above has fuelled the plea for a lift on the trade ban and legalization of the market, with private rhino owners being prominent advocates (Private Rhino Owners Association, 2017; Rubino and Pienaar, 2020). Lifting the trade ban could enable private rhino owners to exploit an extra way of gaining revenue from keeping rhinos by selling sustainably harvested horns (Rubino et al., 2018). This increased revenue could in turn be used to pay for extra anti-poaching measures by private rhino owners. An additional advantage that is to be expected when legalizing the trade is that the viability of rhino farming will get an impulse, leading to more entrepreneurs and land-owners being interested in keeping rhinos. This will increase the population of captive rhinos, which benefits the global population of this threatened species. Although the conservation value of a captive population of rhinos is less than that of a healthy wild population (Redford et al., 2011), a captive population could be an important buffer in case rhinos become extinct in the wild.
Another frequently used argument is that tax raised from legally traded horns could flow back to the protection of wild rhino populations and could be invested in livelihood development for communities surrounding these parks, from which poachers are currently often recruited (Di Minin et al., 2015; Rademeyer, 2012). Di Minin et al. (2015) concluded in a modelling study that this reinvestment of profit from legal sales would actually be a prerequisite for a positive effect of legalizing the market on rhino conservation. Given that the black market price for rhino horn is currently between US $ 30,000 and 65,000 per kg and rhino horn farming is profitable from approximately US $ 11,500 per kg onwards (Rademeyer, 2016; Rubino et al., 2018), there is ample room for legal sales to yield substantial financial resources to potentially protect rhinos in such a way that poaching becomes less profitable (Di Minin et al., 2015). However, it is unlikely that most of the tax raised through rhino horn sales will be reinvested in wild rhino conservation, since health care, housing and education of previously disenfranchised people are politically more urgent for many African governments. Capitalist governments have independent processes of harvesting and distributing wealth, meaning that sectors taxed for a certain amount are not compensated with an equal amount of governmental funding. Furthermore, it is questionable whether private rhino owners are major stakeholders in wild rhino conservation, because they only have an indirect financial incentive to bargain with the government for a reinvestment of taxes in the protection of wild rhinos. Fewer poached rhino horns could of course leave more consumers for farmed rhino horns, but the significance of this phenomenon will fade once there are substantially more farmed rhinos than wild rhinos.
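The price figures above leave a wide margin between legal farming costs and black-market prices; a back-of-the-envelope calculation using only the figures cited in the text (rough estimates from the cited literature, not authoritative data) makes the room for legal sales concrete:

```python
# Rough margin on legally farmed rhino horn, using figures cited in
# the text: black-market price of US$ 30,000-65,000/kg (Rademeyer,
# 2016) and a farming break-even of ~US$ 11,500/kg (Rubino et al., 2018).
black_market_price_low = 30_000    # US$/kg
black_market_price_high = 65_000   # US$/kg
farming_break_even = 11_500        # US$/kg

margin_low = black_market_price_low - farming_break_even
margin_high = black_market_price_high - farming_break_even
print(f"Potential legal margin: US$ {margin_low:,}-{margin_high:,} per kg")
```

Even at the lower bound, the margin of roughly US$ 18,500 per kg shows why legal sales could, in principle, fund substantial anti-poaching measures.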
Legalizing the rhino horn trade would thus have two main advantages through the mechanism of increased revenue for rhino owners. First, the owners will have an incentive for sustaining a viable captive population of rhinos. Second, there will be money available for the protection of both private (through sustainably harvested horn sales) and wild rhinos (through taxes), which in turn can discourage poaching. However, it is unclear if a substantial amount of the raised taxes will be reinvested in the protection of wild rhinos.
Demand for rhino horn
The debate about whether or not to legalize international rhino horn trade often focuses on what will happen to the market demand (viz., in terms of quantity of rhino horn at current prices or at potentially lower or higher prices), i.e., will the overall demand (legal, viz., supplied mainly by farms, and illegal, viz., supplied by poachers, combined) increase, and how will the current illegal market respond to a legal market? To answer these questions adequately, it should first be known how large the current demand for illegal rhino horn is. Some estimated the overall demand for rhino horn by looking solely at the current illegal supply, concluding that demand for rhino horn can be met with 5000 captive white rhinos through regular non-lethal harvesting of their horns in South Africa alone (Biggs et al., 2013; Milliken et al., 2009). However, there are many concerns about this estimation. First, the current illegal demand is already far greater than the current illegal supply (USAID Vietnam, 2018; USAID Wildlife Asia, 2018). The United States Agency for International Development concluded, after interviewing 1400 Vietnamese people who are financially able to buy rhino horn (from five different cities that sustain a black market in rhino horn), that in Vietnam 10% of people find it acceptable to buy or own rhino horn, of which 10% are currently wealthy enough to afford it (USAID Vietnam, 2018). This suggests that there is a demand for rhino horn from about a million people in Vietnam alone. In China, whose population is some 14 times larger, the USAID surveyed 1800 people (from six different cities that have a rhino horn black market) and concluded that 16% have purchased rhino horn in the past, of which 8% in the past 12 months (USAID Wildlife Asia, 2018). China and Vietnam combined are thus home to millions of potential rhino horn consumers.
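The "about a million people" figure can be reconstructed from the survey fractions cited above; the population figure used below is an approximation added here for illustration, not a number from the cited survey:

```python
# Rough reconstruction of the Vietnamese demand estimate discussed in
# the text (USAID Vietnam, 2018). The population figure is an assumed
# round number for illustration; the two 10% fractions are from the text.
vietnam_population = 97_000_000   # approx. 2018 population (assumption)
finds_acceptable = 0.10           # fraction finding rhino horn acceptable
can_afford = 0.10                 # of those, fraction wealthy enough to buy

potential_buyers = vietnam_population * finds_acceptable * can_afford
print(f"~{potential_buyers:,.0f} potential rhino horn buyers in Vietnam")
```

Multiplying the two survey fractions by the national population indeed yields on the order of one million potential buyers, consistent with the estimate in the text.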
Second, Kotze (2014) argued that rhino horn farming will produce too few horns to meet the demand in the near future, considering the horn growth rate of only 6 cm per year on average (Pienaar et al., 1991) and the low reproduction rate of one calf per 3-5 years (Patton et al., 1999; Swaisgood et al., 2006). Third, Prins and Okita-Ouma (2013) argued that Biggs et al. (2013) overlooked the demand for the other four rhino species; the suggested yield of legal supply is often based on rhino farming in southern Africa and overlooks Asian rhino species, which are not currently farmed and are desired for their horns nonetheless. To conclude, current illegal demand is likely far greater than current illegal supply, with the current estimation of potential buyers far exceeding the amount a legal supply could realistically meet in the (near) future (USAID Vietnam, 2018).
Current demand for rhino horn will likely not stay the same with a legalization of rhino horn trade and it should thus be estimated how the overall demand will change. First of all, future overall demand is likely to increase with economic and population growth in Asia, regardless of rhino horn trade legalization (Tensen, 2016;Vigne et al., 2007). Furthermore, if the trade ban is lifted new forces will start to influence the demand for rhino horn as well (Fischer, 2004). An important new force is the removal of the stigma that comes with buying illegal products. Although Biggs et al. (2013) assumed that with a legal rhino horn trade "the demand does not escalate to dangerous levels as the stigma associated with the illegality of the product is removed", plenty of other studies argued that the demand will likely increase significantly because of the removal of the stigma (e.g., Collins et al., 2013;Fischer, 2004;Prins and Okita-Ouma, 2013), at least for law-abiding consumers (Fischer, 2004;USAID Vietnam, 2018;USAID Wildlife Asia, 2018). Another market force that could result in an increased demand after legalization is the reawakening of old markets, particularly markets that were active in the 1970s and 1980s in Taiwan, Japan, Singapore and Yemen (Prins and Okita-Ouma, 2013), which could thus reverse the decreased demand in these old markets (Graham-Rowe, 2011). In addition to traditional consumer countries, there are also new (e.g., African) countries that sell Traditional Chinese Medicines in their drug stores and where people start to believe that wildlife products (including rhino horn) can cure diseases (Cyranoski, 2018). These new local markets are often overlooked in the estimation of demand. Moreover, a substantial increase in demand (both legal or illegal) could further promote the tragic positive feedback loop between demand and the rhino extinction rate, which is coined the Anthropogenic Allee Effect (Challender and MacMillan, 2014;Hall et al., 2008). 
The Anthropogenic Allee Effect indicates that when the abundance of an animal species decreases, the demand for its products will increase due to its rarity (Hall et al., 2008). Accounting for all the aforementioned market forces, the overall demand for rhino horn is expected to grow significantly with a legalized market, although the recent COVID-19 pandemic may affect people's attitudes towards using products of wild animals in unforeseen ways (Lam et al., 2020).
Ideally, with a legal market that would be supplied mainly by rhino farmers, the illegal demand for poached horns would disappear or at least become substantially smaller. Unfortunately, how the illegal demand for rhino horn will respond exactly is uncertain (Fischer, 2004). From an economic perspective, illegal traders and farmers can compete with each other in multiple ways that could either benefit or devastate rhino conservation (Damania and Bulte, 2007). From a social perspective, people that fear heavy penalties for consuming illegal products will likely shift from the illegal to the legal market when effective law enforcement is in place. The same applies to people that care about animal welfare or conservation. These three deterrents have been mentioned by 71–76% of the 242 interviewed Vietnamese illegal rhino horn consumers (USAID Vietnam, 2018), so it can be assumed that a substantial portion of the current illegal consumers will consider switching to a legal market. On the other hand, some people have a preference for illegally harvested ('wild') horns, e.g., those that prefer to buy larger horns as a status symbol and those that believe that the suffering of the animal enhances the 'potency' of the medicine (Cheung et al., 2018a;Hanley et al., 2018;Tensen, 2016). It is thus likely that an illegal market will always persist parallel to a legal market and this should not be neglected in the debate around the legalization.
Given the high likelihood of a substantially increasing demand with trade legalization, it is important to consider effective market forces that can regulate this increase so that it does not work to the detriment of wild rhino populations. Price is one such force that many studies have proposed for influencing market demand (e.g., Milner-Gulland, 1993). However, the effect of price on the overall demand as well as on the illegal demand is ambiguous and may yield counterintuitive results. For the overall demand, lower prices on the one hand make rhino horn affordable to more buyers, which could lead to an increase in overall demand (USAID Vietnam, 2018). On the other hand, a lower price could also weaken the Anthropogenic Allee Effect, i.e., a lower price makes rhino horn less attractive to people who are after luxurious or rare products. For the illegal demand, Biggs et al. (2013) argued that lower prices in the legal market will likely diminish it. While it is true that a lower legal price can motivate people to move from the illegal to the legal market, this is not always the case. As with marijuana, a legal market is more likely to reduce the illegal market when its price can compete with the illegal market (Morris, 2018). Wildlife product markets are very different from perfectly competitive markets, suggesting that lowering the price may not be a good strategy, as it is hard to ensure that the price in the legal market is always lower than the illegal price. For example, farmed tiger bones are 50–300% more expensive than bones from poached tigers (EIA, 2013). Also, illegal elephant tusks were sold for only a third of the price of legal tusks (Fischer, 2004). As for rhino farming, the minimum price for rhino horn to be profitable is approximately US $ 11,500 per kg (Rubino et al., 2018). If crime networks are able to supply horns at a lower price, it is still likely that consumers will buy illegal products.
However, 63% of the 242 Vietnamese illegal rhino horn consumers would be willing to pay more if the product is scientifically tested by a trusted supplier and 72% would still buy rhino horn with a 10% increase in price (USAID Vietnam, 2018). So a substantial portion of the current illegal consumers is likely to move to the legal market, even if legal prices cannot fully compete with illegal prices. On the other hand, consumers often overstate their willingness to pay a premium (Katt and Meixner, 2020). Furthermore, these results also show that price is only a minor concern to current rhino horn users (USAID Vietnam, 2018; USAID Wildlife Asia, 2018). This is backed up by the notion that demand for rhino horn is inelastic to price changes (Crookes and Blignaut, 2015;Milner-Gulland, 1993). For instance, the demand for rhino horn rose substantially in Yemen despite a 40% increase in price within four years (Vigne et al., 2007) and modelling studies have suggested that reducing the price of rhino horn will not curb rhino poaching (Crookes, 2017). These results suggest that the overall demand for rhino horn is insensitive to an increase or decrease in price.
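The inelasticity claim above can be made concrete with the standard arc (midpoint) elasticity of demand. In the sketch below, the 40% price increase comes from the Yemen example in the text, but the quantity figures are hypothetical placeholders, since the source does not report them.

```python
# Arc price-elasticity of demand, as a hedged illustration of why the
# figures cited above point to price-inelastic (here even upward-moving)
# demand. The 40% price rise mirrors the Yemen example; the quantity
# changes are hypothetical placeholders.

def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity: % change in quantity / % change in price."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Price rises 40% (e.g., 100 -> 140) while quantity demanded still grows
# (hypothetically 100 -> 110): elasticity is positive, i.e., demand does
# not fall with price at all in this range.
e = arc_elasticity(q0=100, q1=110, p0=100, p1=140)
assert e > 0  # demand rose despite the price increase

# An ordinary inelastic good for comparison: price +40%, quantity -10%,
# so the elasticity is negative but smaller than 1 in magnitude.
e_inelastic = arc_elasticity(q0=100, q1=90, p0=100, p1=140)
assert -1 < e_inelastic < 0
```

An elasticity between −1 and 0 (or, as in the Yemen case, a positive one) means price changes move total demand little, which is why the text concludes that price manipulation alone is unlikely to curb rhino horn consumption.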
The improbability of price being able to control the demand urged researchers to look into social instead of economic forces. These social forces turned out to be more effective than price in a modelling study of the rhino horn case (Crookes and Blignaut, 2015). First, the consumption motives of rhino horn buyers in Southeast and East Asia must be understood before demand can be adequately addressed. According to results of interviews with 242 Vietnamese illegal rhino horn buyers, the two main drivers of purchase are that rhino horns "are worth their price no matter how expensive" and "indicate wealth, power and social status" (USAID Vietnam, 2018). The status and cultural pride of the elite increase when the prices of 'must-have' status symbol products are high. Strategies to reduce the demand of these Vietnamese consumers therefore include, for example, heavy penalties and a focus on animal cruelty (USAID Vietnam, 2018). In a similar study, 140 Chinese illegal wildlife product buyers primarily mentioned that rhino horn "brings good health" and "cures illness" (USAID Wildlife Asia, 2018). In addition, an underestimated driver for buying rhino horn in China is the art and antiques market (Gao et al., 2016). Therefore, eliminating concerns about modern medical practices and increasing public awareness about animal conservation are key to reducing wildlife consumption in China. Understanding and anticipating the underlying consumption motives of rhino horn buyers thus seems more helpful in reducing demand than price changes.
In short, demand for rhino horn is currently much larger than supply and is expected to increase with economic and population growth in Asia (Tensen, 2016;USAID Vietnam, 2018;USAID Wildlife Asia, 2018;Vigne et al., 2007). With legalization of the market, demand is likely to increase further (in current, old and new markets) when the stigma around buying rhino horn is removed and potentially also due to the Anthropogenic Allee Effect (Challender and MacMillan, 2014;Hall et al., 2008). It will most likely be impossible to satisfy the demand with legal horns alone, due to a preference of some consumer groups for illegal ('wild') horns and the potentially lower price of illegal horns (Cheung et al., 2018a;Hanley et al., 2018). The demand for illegal horn could however be reduced through a simultaneous increase in law enforcement combined with severe penalties for buying illegal horn (Tensen, 2016).
Laundering of rhino horns
The issues and debates about the demand for rhino horn suggest that legal and illegal markets are likely to co-exist after trade legalization, not only for consumers but also for suppliers (Fischer, 2004). Illegal rhino horn traders are likely to remain in business after trade legalization and could start laundering their products into the legal market (Collins et al., 2013;Fischer, 2004). This is the case with legal ivory trade as well, where 'ghost ivory' (post-1947 ivory being sold as pre-1947 ivory) and 'look-alikes' (e.g., elephant ivory fraudulently mislabelled as mammoth ivory) are being sold to unsuspecting and uneducated buyers (CITES, 2019;Collins et al., 2017). Under such conditions, a legal market can actually give an incentive to illegal suppliers by lowering the chances of being caught in an illegal exchange, as corruption reduces the rhino horn confiscation rate (Fischer, 2004;Van Uhm, 2018a). For example, corruption amongst government officials, e.g., via threats and commission payments (Rademeyer, 2012), can allow for the entering of illegal products into legal markets (Bennett, 2015). A similar situation was found for the legal trade of ivory, in which eight of the twelve African countries that are home to the majority of elephant populations belong to the top 40% of the world's most corrupt countries (Transparency International, 2013;UNEP et al., 2013).
Widespread corruption exists and extends to all nodes in a trade chain (Bennett, 2015). Examples of wildlife trade related corruption exist in justice, economic and political systems (Wyatt et al., 2018), where acts of corruption on an individual level include bribes, patronage, diplomatic cover and permit abuse (Corruption Tracker, 2011;Nshuli, 2013;Walker, 2009;Wyatt et al., 2018). For example, a number of rhinos in Africa during the 1970s and 1980s were actually poached by the very people who were employed to guard them (Fischer, 2004). More recently, in South Africa's Kruger National Park, police officers and rangers were directly involved in poaching (Anderson and Jooste, 2014). Similar situations were discovered in other rhino poaching hotspots in Africa as well (Smallhorne, 2013). For example, in Kenya, the stronghold of the eastern black rhino (containing 87% of the subspecies' population), internal government corruption worsened the problem of population decline (Anderson and Jooste, 2014).
Due to the aforementioned effects of corruption and laundering, legalizing rhino horn trade would at least require a highly regulated trading system if rhinos are to be preserved. A Central Selling Organization, the system with the largest control, was proposed by Biggs et al. (2013). To reduce the effects of corruption, they suggested shortening the market chain between suppliers and buyers (Biggs et al., 2013). However, an illegal supply can in reality always be present, as corruption within the Central Selling Organization could still allow laundered poached horns to end up on the legal market (Bennett, 2015;Fischer, 2004). This was the case in the highly controlled diamond trade, where an estimated 5–10% of the world's legal diamond market consisted of 'blood diamonds' (Baker, 2015). Considering the huge demand for rhino horn and the small rhino population (USAID Vietnam, 2018), a potential 5% of illegal horns would already be problematic for the survival of the species. Biggs et al. (2013) also proposed DNA profiling to track the legality of individual horns. However, this will not only inhibit the potential use of synthetic horns, but also that of buffalo horn and wood that circulate as 'rhino horn' and currently comprise a substantial proportion of the market (Collins et al., 2013; Save the Rhino, 2016a). The demand for genuine rhino horn could therefore increase, together with the negative consequences (Collins et al., 2013). Furthermore, DNA profiling will likely increase the price of legal rhino horns.
In short, corruption is unfortunately a large problem worldwide, also along the rhino horn trade route in Africa and Asia (Emslie and Brooks, 1999;Wyatt et al., 2018). The illegal supply of rhino horn is therefore likely to increase when legalizing international rhino horn trade due to laundering and corruption (Van Uhm, 2018a), even with a highly regulated trading system (Bennett, 2015;Collins et al., 2013;Fischer, 2004).
Long-term behavioural change of rhino horn consumers
It is generally thought that the ultimate solution to stop rhino poaching lies in a change of the consumers' behaviour (Litchfield, 2013). The demand can be drastically reduced, if not eliminated, by creating a uniform morality that it is wrong to purchase products that have such a clear negative effect on the survival of a threatened species and by providing alternatives to fulfil the need for the product (Litchfield, 2013). This can only be accomplished by a global change in consumer behaviour. Despite the efforts of non-governmental organisations and conservation incentives (Biggs et al., 2013;Holden et al., 2019;St John et al., 2010), this has not been achieved yet, as illustrated by the high poaching rates and large demand for rhino horn and other wildlife products (Save the Rhino, 2019; USAID Vietnam, 2018; USAID Wildlife Asia, 2018).
Environmental awareness programmes are believed to increase knowledge and concern (Sampei and Aoyagi-Usui, 2009), but a value-action gap seems to remain in the general public (Kollmuss and Agyeman, 2002). Furthermore, the outcomes of programmes to reduce consumer demand for wildlife products are known for only about 37% of the programmes, and the ecological impact has been reported for only 9% (Veríssimo and Wan, 2019). An extra complication in the rhino poaching crisis is the scale of the problem. While local awareness programmes can have strong positive effects on local environmental problems, e.g., overexploitation by subsistence hunting (Campos-Silva et al., 2017) or human-wildlife conflicts (King et al., 2017), the illegal rhino horn trade represents an international conservation crisis that involves many stakeholders other than the local consumers (Milliken and Shaw, 2012;Sutherland et al., 2014). Especially with the current rise in popularity of Traditional Chinese Medicine, as promoted by the Chinese government (Cyranoski, 2018;Master, 2019) and supported by the World Health Organisation (Matthews-King, 2019;WHO, 2013), the market for perceived medicinal uses of rhino (and other wildlife) products is increasing (Master, 2019;Tang et al., 2018). Furthermore, half of all planned purchases of rhino horn products in Vietnam were motivated by the advice of a traditional medical doctor (USAID Vietnam, 2018). Current rhino horn buyers in Vietnam indicated that although they were aware of the extinction risk for rhinos, they do not feel responsible for the killing themselves as they "are one of many consumers", "do not kill the animals themselves" or "do not buy products regularly nor in high quantities" (USAID Vietnam, 2018).
These beliefs, in combination with the commercial and governmental lobby for the use of Traditional Chinese Medicine, make it difficult to campaign for the exact opposite. However, the incrimination of pangolins as a possible origin of the COVID-19 pandemic may put the traditional misuse of wild animals in a new, unfavourable light (Lam et al., 2020).
In addition to environmental awareness programmes, law enforcement on the consumer side of the trade could also change the behaviour of potential buyers of rhino horn (Olmedo et al., 2018). Buyers of rhino horn in both Vietnam and China indicated that the top deterrents for future purchases are the link of rhino products to organised crime and the personal risk of violating the law (USAID Vietnam, 2018; USAID Wildlife Asia, 2018). When prioritized by the governments of consumer countries, more severe penalties for rhino horn owners could be implemented and effective law enforcement established. This has the potential to change the behaviour of consumers in both the short and long term (Olmedo et al., 2018).
Legalizing the market can be considered the complete opposite of campaigning to reduce consumer demand. By making it legal to sell and buy rhino horn products, the stigma around these products is removed and a signal that it is acceptable and useful to buy rhino horn is implicitly given (Biggs et al., 2017b). This may hamper critical thinking by consumers about their own behaviour and limit the impact of education programmes on the matter. As a comparison, legalization has increased the demand for other products in the past (see Appendix). For marijuana the total consumption rose after legalization due to an increase in new users and extended consumption by regular users (Pacula, 2010). Furthermore, after legalizing crocodilian skin trade, the demand remained robust for the high-end products (alligator and crocodile skins) and increased dramatically for the lower-cost (caiman) products (MacGregor, 2002).
In the long term, involving consumers and informing them about the consequences of their choices is an essential aspect of saving the rhino as a species (Biggs et al., 2017a). This can perhaps best be achieved by emphasizing the lack of efficacy of rhino-based Traditional Chinese Medicine by engaging professional medical doctors in China, who have essentially the same ethical standards as their counterparts in the U.S.A. (Nie et al., 2015). In the long run, such demand-reduction programmes may be more cost-effective and better able to tackle the complexity of the trade than increasing anti-poaching enforcement, independent of the initial price of wildlife products or ecological parameters (Challender and MacMillan, 2014;Holden et al., 2019). However, for demand-reduction programmes to become truly effective, conservationists need to adopt more rigorous impact evaluation strategies ('t Sas-Rolfes et al., 2019;Olmedo et al., 2018;Veríssimo and Wan, 2019). Nevertheless, given the current critical situation of rhino populations, more short-term measures, e.g., law enforcement (Olmedo et al., 2018), should be implemented as well to ensure the survival of the rhino species. In either scenario, legalizing the rhino horn market will likely hamper any demand-reduction strategy.
Discussion
Evaluating the effects of the four mechanisms separately on the rhino population in the situation of a legalized trade (Fig. 3), we can summarize that 1) an improved financial viability of private rhino ownership will likely have a positive effect on the captive rhino population in countries that allow private wildlife ownership, i.e., South Africa, Namibia and Zimbabwe (Muir-Leresche and Nelson, 2000). However, it is questionable if this will lead to a substantial conservation benefit for wild rhino populations. 2) It will most likely be impossible to satisfy the demand with legal horns alone in the near future (Tensen, 2016;USAID Vietnam, 2018;USAID Wildlife Asia, 2018). Therefore, legal and illegal trade circuits would probably exist in parallel due to a preference of some consumer groups for illegal ('wild') horns and the potentially lower price of illegal horns (Cheung et al., 2018a;Hanley et al., 2018;Rubino et al., 2018). 3) Corruption is widespread and likely to remain present in all nodes of the trade chain and can stimulate the illegal trafficking of poached rhino horns through laundering channels into the legal market (Van Uhm, 2018a), thereby keeping the poaching incentive alive. Furthermore, even in a tightly controlled market system corruption will most likely still allow for an influx of poached horns from African countries, as is the case with blood diamonds and ivory (Baker, 2015;Bennett, 2015;Fischer, 2004;Wasser et al., 2015). 4) Behavioural change of rhino horn consumers has often been suggested as the ultimate solution to stop rhino poaching, but legalizing the rhino horn market could likely negatively affect efforts taken in this direction (Biggs et al., 2017b). By legalizing the rhino horn market the stigma around buying illegal products of poached and threatened animals will be removed, which could cause an overall increasing interest in rhino horn in the future (as happened for crocodilian skins and marijuana) (MacGregor, 2002;Morris, 2018;Prins and Okita-Ouma, 2013).

Fig. 3. The mapped conservation benefit (y-axis, from red to green) and certainty (x-axis, from grey to transparent) of the four discussed mechanisms (financial viability for private rhino owners, rhino horn demand, laundering of rhino horns, and behaviour of rhino horn consumers) on rhino populations: a) business as usual scenario for wild rhinos, b) legal trade scenario for farmed rhinos, and c) legal trade scenario for wild rhinos. The symbols are identical to Fig. 2. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Cases of trade legalization from the past show that the legal commercialization of animal products can go both ways regarding species' conservation; with a (potentially) positive effect in the case of bison meat, crocodilian skins and trophy hunting, but with a (potentially) negative effect for elephant ivory and lion bones (see Appendix). To determine how the rhino populations will respond to the legalization of international rhino horn trade, it needs to be evaluated what makes a legal animal product market sustainable to benefit species conservation (SCBD, 2004;Tensen, 2016). Tensen (2016) determined that wildlife farming (to supply legal products) can benefit species conservation only if five different criteria are met. First, consumers should show no preference for products originating from wild-caught animals. This likely does not apply to all buyers of rhino horn, as larger horns from poached rhinos function better as status symbols and horns from rhinos that suffered are believed by some to increase their medicinal potency (Cheung et al., 2018a;Hanley et al., 2018). Second, a substantial part of the demand should be met and the demand should not increase due to a legalized market. This probably does not apply to rhino horns either, because demand is unlikely to be met by rhino farming in the near future (USAID Vietnam, 2018;USAID Wildlife Asia, 2018). Third, legal products should be more cost-efficient in order to combat the black market prices. This criterion likely does not apply to rhino horn as rhino horn farming was estimated to only be profitable without subsidies when horn is sold at a minimum price of US $ 11,500 per kg (Rubino et al., 2018). In contrast, poached rhino horn would probably still be profitable at a much lower price if the risks of rhino poaching do not increase substantially compared to the current situation (Conrad, 2012). Fourth, wildlife farming should not rely on wild populations for restocking. 
This would likely hold true for rhino farming if captive populations are well protected, because more than 30% of all South African rhinos are already privately owned and, due to the aridification of farming grounds, more area is expected to become available for rhino farming in the near future (Rademeyer, 2016;Rubino and Pienaar, 2017). Fifth, laundering of illegal products into the commercial trade should be absent. This will likely not be the case for rhino horn farming, given the enormous value of the product, the trade network that is already involved and the corruption that is present in many African and Asian countries (Collins et al., 2013;Fischer, 2004;Wyatt et al., 2018). In short, the case of rhino horn farming complies with only one of the five criteria that are needed for wildlife farming to benefit species conservation. According to Tensen (2016), even a minor violation of any of the criteria will result in a negative outcome of wildlife farming for species conservation; but even if a minor violation could be compensated for by the other criteria, violating four out of five criteria makes it highly unlikely that commercially farming rhinos would benefit rhino conservation. Similar to this prediction for the rhino horn trade, a modelling study deemed sustainable harvesting of elephant ivory to be impossible (Lusseau and Lee, 2016), and an assessment framework study deemed pangolin farming to be unable to yield a conservation benefit (Phelps et al., 2014).
Providing recommendations about scenarios that have never happened before (viz., legalizing international rhino horn trade) is challenging, as this is inherently coupled with a lack of empirical data. As a consequence, all our conclusions could only be drawn with a certain level of certainty (Fig. 3). In order to increase the certainty of inferences that can be made about potential effects of legal horn trade on wild rhino populations, we suggest focusing future research on three topics. 1) Quantify and describe better the current demand for rhino horn and the potential demand for legal rhino horn. Although recent studies have taken important steps in this direction (e.g., USAID Vietnam, 2018; USAID Wildlife Asia, 2018), there is potential to better clarify the number of (potential) consumers, the amount of rhino horn they (want to) consume per time unit, the amount of money they are realistically willing to pay per unit of legal and illegal rhino horn, their reasons for purchasing rhino horn, and under which circumstances they are willing to switch to a legal market. This can be achieved through surveys and undercover intelligence in Southeast Asia. This information is critical for drawing more certain conclusions about whether or not an illegal rhino horn market could exist in parallel to a legal market. 2) Rhino horn demand-reduction programmes should adopt more rigorous impact evaluation strategies. As demand-reduction is often a long-term process, studies should ideally be designed in such a way that demand-reducing strategies can be effective over multiple years and that the impact of the strategies is quantitatively evaluated multiple times during this period. Because these programmes are arguably the only solution to stop the demand for rhino horn entirely, and because they are likely to take a long time to take effect, efforts taken in this direction should be properly chosen, evaluated and ultimately optimized.
3) Economic and political avenues should be explored to substantiate how a legal market of farmed rhino horns could benefit wild rhino populations in national parks and private game reserves. The financial benefit for rhino conservation related to farmed rhinos is clear, but it is not yet clear through which mechanisms this could benefit wild rhino populations. As both a healthy captive and a healthy wild population of rhinos could be important in preserving these species during the Anthropocene, the benefit of legal horn trade should be made clear across the entire gradient from captive to wild rhinos.
Conclusion
A legal rhino horn trade will most likely not be able to satisfy demand in the near future and will likely even lead to an increase in demand (Fig. 3c). Omnipresent corruption in countries along the rhino horn trade routes will, together with demand for illegal ('wild') horns, facilitate the co-existence of legal and illegal markets. In addition, legalization will remove the stigma associated with the consumption of illegal products and will therefore counteract long-term behavioural change programmes targeted at consumers, which are arguably the ultimate solution to wildlife crime. Only one of our four considered mechanisms (an increased revenue for private rhino owners) will likely have a positive impact on rhino conservation, but primarily for the captive rhino populations in countries that allow private wildlife ownership. However, this one minor positive impact for rhino conservation will most likely not be able to offset the other negative impacts of trade legalization (Fig. 3c). Based on this review, we therefore recommend not to legalize an international trade in rhino horn. Instead, we suggest focusing efforts on creating well-protected 'safe havens' for the remaining wild rhino populations to bridge the current period of high demand (short-term approach) and on programmes aimed at reducing rhino horn demand (long-term approach). We acknowledge that this strategy is not perfect, because rhinos are still poached in well-protected reserves and behavioural change programmes still need to improve and prove their effectiveness, which is why our proposed strategy requires substantial (international) effort.
One could argue that rhinos should be preserved as a species, instead of prioritizing rhinos in the wild. Focusing on preserving rhinos in the wild through a legal trade ban has the likely consequence that far fewer captive rhinos will be kept. Even though healthy wild animal populations are generally thought to have a higher conservation value than captive populations (Redford et al., 2011), if in spite of all efforts rhinos do become extinct in the wild, then having fewer captive rhinos might complicate future reintroduction efforts. This is a risk that should not be underestimated, which makes our suggested short-term approach of creating well-protected 'safe havens' for wild rhino populations all the more relevant (e.g., Welgevonden Game Reserve, 2020). Regardless of one's opinion on whether or not to legalize an international rhino horn market, both anti- and pro-trade strategies to save the rhino from extinction are likely only possible after corruption has been reduced, more rhinos have been bred and illegal demand has been reduced (Committee of Inquiry, 2016). The debate about legalizing the rhino horn market should thus not prevent stakeholders from working together to achieve these goals (Sandbrook et al., 2019).
Structural, Electronic, and Mechanical Properties of Zr2SeB and Zr2SeN from First-Principle Investigations
MAX phases have exhibited diverse physical properties, inspiring their promising applications in several important research fields. The introduction of a chalcogen atom into the MAX phase has further facilitated the modulation of their physical properties and extended the diversity of the MAX family. The physical characteristics of the novel chalcogen-containing MAX 211 phases Zr2SeB and Zr2SeN have been systematically investigated. The investigation covers the stability, electronic structure, and mechanical properties of these systems, employing first-principles density functional theory. By replacing C with B/N in the chalcogen-containing MAX phase, it is shown that the corresponding mechanical properties can be appropriately tuned, which may offer a way to design novel MAX phase materials with enriched properties. The dynamical and mechanical stability of the systems under investigation has been thoroughly assessed through the analysis of phonon dispersions and the elastic stability criteria. The predicted results reveal a strong interaction between zirconium and boron or nitrogen within the structures of Zr2SeB and Zr2SeN. The calculated band structures and electronic densities of states for Zr2SeB and Zr2SeN demonstrate their metallic nature and anisotropic conductivity. The theoretically estimated Pugh and Poisson ratios imply that these phases are brittle.
Introduction
The family of materials denoted as the MAX phases is a subject of great interest within the scientific community, with a general formula Mn+1AXn, where n can take on values of 1, 2, or 3. In this expression, M denotes an early transition metal, A typically represents an A-group element, and X is predominantly C or N. The origins of this family of materials can be traced back to the pioneering research of Nowotny et al. in the 1960s [1][2][3][4]. It was not until the 1990s that interest in MAX phases was reignited by Barsoum et al. [5,6], who revealed their exceptional properties. These materials exhibit metal-like characteristics such as high electrical and thermal conductivity, as well as machinability and mechanical strength. Additionally, they possess exceptional mechanical properties at high temperatures and exhibit notable corrosion and reaction resistance, similar to ceramics [7]. All these unique properties can be attributed to their nano-layered structures, in which element A, in the form of a single atomic layer, is situated between Mn+1Xn sheets. Furthermore, the M-A bond plays a pivotal role in determining the chemical and physical characteristics. This distinctive set of properties has led to the identification of over 150 MAX phases that can be utilized in various applications [8,9].
Given the growing demand for MAX phase materials, researchers have been exploring ways to enhance their structural versatility and performance flexibility. This has led to the development of novel MAX phase compounds or the recombination of M, A, and/or X elements in existing structures [8]. Recently, boron (an element with an atomic number of five) was introduced as an additional X element, resulting in an expanded range of MAX phases. The remarkable physical and chemical properties of boron and corresponding compounds make them highly desirable for high-temperature applications, thereby creating a pressing need for boride MAX phases [10]. As such, the boride MAX phase, much like its conventional MAX phase counterparts, has garnered significant research attention [11][12][13][14][15][16][17]. Khazaei [11] systematically investigated the structure and properties of the Sc 2 AlB, Ti 2 AlB, Cr 2 AlB, Zr 2 AlB, and Nb 2 AlB MAX phase borides. The MAX phase borides Ti 2 AlB, Ti 2 GaB, and Ti 2 InB have also been investigated via theoretical approaches [13]. In addition, the chalcogen-containing MAX phases with more robust mechanical properties received greater attention than the corresponding aluminum-containing MAX phases [18]. Currently, the MAX phases with sulfur elements at A sites are limited to Ti 2 SC, Zr 2 SC, Hf 2 SC, Nb 2 SC, and M 2 SB (M = Zr, Hf, Nb) [19]. The experimental realization of Se occupying the A site in a Zr 2 SeC MAX phase has expanded the family of nano-laminated ternary carbides [20]. Recently, the DFT method was utilized to investigate the physical characteristics of novel chalcogen-containing MAX phases, Hf 2 SeC and Zr 2 SeC, for high-temperature applications [21,22]. 
This study aims to provide an in-depth investigation of the effect of replacing carbon with boron or nitrogen as the X element on the crystal lattice constants, electronic structure, and several physical properties of Se-containing MAX phases, which would enrich the materials' properties and extend their potential applications. In addition, to obtain a complete picture of the chalcogen-containing MAX phases, the electronic properties of Zr2SeC were calculated for comprehensive comparison. The predicted results of replacing the C atom with a B/N atom in the ternary 211 MAX-phase nano-laminates hold the potential to enhance the properties of MAX phase materials and broaden their applications in various fields.
Computational Details
Throughout this work, density functional theory (DFT) [23] calculations were performed using the Cambridge Serial Total Energy Package (CASTEP) [24]. The electronic exchange-correlation interaction was described using the generalized gradient approximation [25] with the Perdew-Burke-Ernzerhof (GGA-PBE) functional [26]. The core and valence electrons were treated with norm-conserving pseudopotentials and a plane-wave basis set with a kinetic-energy cutoff of 520 eV. The valence electronic configurations were Zr: 4s2 4p6 4d2 5s2, Se: 4s2 4p4, N: 2s2 2p3, and B: 2s2 2p1. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization scheme was chosen to optimize the geometric and cell structures. Throughout the self-consistent field (SCF) calculations, the total-energy difference [27], the Hellmann-Feynman forces on each atom, the atomic displacements, and the stresses were converged to less than 1.0 × 10−7 eV/atom, 0.002 eV/Å, 1 × 10−3 Å, and 0.05 GPa, respectively. The atomic models of the Zr2SeB and Zr2SeN cells were first constructed and geometrically optimized before the investigation of the electronic structure and the respective physical properties. As depicted in Figure 1, Zr2SeX (with B or N located at the X site) has a hexagonal crystal structure with space group P63/mmc and eight atoms (four Zr atoms, two X atoms, and two Se atoms) in the unit cell, identical to that of the reported Zr2SeC.
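For reference, the convergence thresholds above can be collected into a single machine-readable record for scripting a relaxation workflow; a minimal sketch (the dictionary keys and the `converged` helper are our own illustrative naming, not actual CASTEP input keywords):

```python
# Convergence thresholds of the SCF/BFGS optimisation quoted in the text.
# Key names are illustrative, not CASTEP keywords.
CONVERGENCE = {
    "total_energy_eV_per_atom": 1.0e-7,   # total-energy difference
    "max_force_eV_per_A": 0.002,          # Hellmann-Feynman forces
    "max_displacement_A": 1.0e-3,         # atomic displacements
    "max_stress_GPa": 0.05,               # residual stress
    "cutoff_energy_eV": 520,              # plane-wave kinetic-energy cutoff
}

def converged(energy, force, displacement, stress):
    """Check whether a relaxation step meets all four thresholds."""
    return (energy < CONVERGENCE["total_energy_eV_per_atom"]
            and force < CONVERGENCE["max_force_eV_per_A"]
            and displacement < CONVERGENCE["max_displacement_A"]
            and stress < CONVERGENCE["max_stress_GPa"])

print(converged(5e-8, 1e-3, 5e-4, 0.01))  # → True
```

All four criteria must be satisfied simultaneously before the geometry is accepted, which is what the single conjunction in `converged` expresses.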
Based on the input parameters utilized in this investigation, the optimized lattice constants of the Zr2SeC cell exhibit strong agreement with similar structures (Table 1), alongside corresponding experimental and theoretical values. Moreover, our results for Zr2SeB and Zr2SeN are in good agreement with other reported data [28]. Comparison of the calculated results shows that the a and c lattice constants of the Zr 211 MAX compounds with Se at the A site are larger than those with S, which can be ascribed to the larger atomic size of Se. Specifically, the calculated values of a and c for Zr2SeC show a 0.63% and 0.72% increase, respectively, relative to prior theoretical results. The dynamical stabilities of both Zr2SeB and Zr2SeN were revealed by theoretical calculations of the phonon dispersions [29], as presented in Figure 2. A unit cell of Zr2SeB or Zr2SeN has eight atoms and thus twenty-four phonon branches (three acoustic and twenty-one optical branches). They are labeled according to their symmetries at the Γ point: the TA and LA modes are the in-plane transverse and longitudinal acoustic modes, whose vibration planes lie along the ab plane, while the vibration plane of the third acoustic branch (ZA) is perpendicular to the ab plane. The slopes indicate the group velocities: the slope of LA is the largest, while that of ZA is smaller than those of LA and TA. In addition, the irreducible representation of the optical modes can be classified as Γ_optical = 2E1g + 4E2g + A1g + 4E1u + 2A2u + 4E2u + 2B2g + 2B1u. The E1g, E2g, and A1g modes are Raman active, and the E1u and A2u modes are IR active. Notably, the phonon frequencies of the Zr2SeB structure are considerably higher than those of the corresponding Zr2SeN structure, which can be ascribed to the lighter mass of boron relative to nitrogen.
Electronic Properties

On the basis of the symmetry of the hexagonal crystal system, a high-symmetry path G-A-L-K-H through the Brillouin zone was adopted to investigate the electronic band structures of Zr2SeB and Zr2SeN (Figure 3). Owing to their comparable structures, their band structures show a certain degree of similarity. The electronic energy bands of both configurations, as shown in Figure 3, cross the Fermi level (EF), indicating a metal-like conductivity similar to that of Zr2SeC. The band structure (Figure 3) also shows that the electronic conduction is naturally anisotropic: along the K-L and H-K directions, which involve the c-direction, the energy dispersion is small, whereas the dispersion along the in-plane G-M and L-A directions is larger. It follows that Zr2SeB and Zr2SeN exhibit higher conductivity within the basal plane than along the c-direction, a characteristic analogous to most traditional MAX phases reported in the literature [18].

The density of states (DOS) for Zr2SeB and Zr2SeN is depicted in Figure 4. The high degree of similarity between the electronic bands of the two materials means that their DOS diagrams are correspondingly similar. In agreement with the band-structure analysis, Zr2SeB and Zr2SeN are electronic conductors, with several orbital states (such as B p and Zr d) occupying the Fermi level. The contributions of the different states of Zr, Se, N, and B to the total DOS are confirmed by the partial DOS (PDOS) diagrams. For instance, the Zr 4d state provides the dominant contribution around the Fermi level, corresponding to the electronic structure reported for Zr2SeC [20,21]. Neither N nor Se contributes to the DOS at the Fermi level, which is consistent with prior experimental and theoretical results [18,21]. Moreover, obvious hybridization can be observed in the PDOS diagrams of both Zr2SeB and Zr2SeN. In Figure 4a, for Zr2SeB, the B 2p state splits owing to strong hybridization with the Zr 4d state. Similarly, in Zr2SeN, strong hybridization between the N 2p state and the Zr 4d state is observed in the energy range from −7.5 to −4.5 eV. Furthermore, in both compounds the Se p state hybridizes non-negligibly with the Zr d states.
Mechanical Properties

Mechanical characteristics are significant for evaluating a material's performance. The elastic constants predict these traits and behaviors, which represent important macroscopic properties. Herein, the stress-strain method was applied to calculate the elastic constants of Zr2SeB and Zr2SeN, and the corresponding values are presented in Table 2. As the MAX phases have hexagonal crystal symmetry, they possess six elastic constants Cij (C11, C12, C13, C33, C44 = C55, C66), of which C66 is dependent (C66 = (C11 − C12)/2). The mechanical stability of a material under load is a critical factor in practical applications, and the stability conditions [30] for hexagonal systems dictate that C11 > |C12|, (C11 + C12)C33 > 2C13², C44 > 0, and C66 > 0. Consequently, the mechanical stability of Zr2SeB and Zr2SeN is fulfilled by the four aforementioned conditions. From the obtained elastic constants, other important parameters can also be calculated, such as the bulk modulus, B; the shear modulus, G; Young's modulus, E; Poisson's ratio, σ; and the Debye temperature, θD, using the relevant equations implemented in the software [31-33].
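The four stability criteria for hexagonal crystals are simple inequalities on the Cij; a minimal sketch of the check in Python (the numerical constants in the example call are illustrative placeholders, not the computed values for Zr2SeB or Zr2SeN):

```python
def hexagonal_stable(c11, c12, c13, c33, c44):
    """Born mechanical-stability criteria for a hexagonal crystal.

    C66 is not independent: C66 = (C11 - C12) / 2.
    The crystal is mechanically stable only if all four conditions hold.
    """
    c66 = (c11 - c12) / 2.0
    return (
        c11 > abs(c12)
        and (c11 + c12) * c33 > 2.0 * c13 ** 2
        and c44 > 0.0
        and c66 > 0.0
    )

# Placeholder elastic constants (GPa) for illustration only -- not the
# calculated values for Zr2SeB or Zr2SeN.
print(hexagonal_stable(c11=300.0, c12=80.0, c13=120.0, c33=290.0, c44=130.0))  # → True
```

Note that C66 > 0 is equivalent to C11 > C12, so in practice the first and last conditions together enforce C11 > |C12| with a positive in-plane shear constant.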
Within the Anderson model, the Debye temperature is obtained from the averaged sound velocity:

θD = (h/k)[(3n/4π)(NAρ/M)]^(1/3) vm, (1)

vm = [(1/3)(2/vt³ + 1/vl³)]^(−1/3), (2)

vl = [(3B + 4G)/(3ρ)]^(1/2), vt = (G/ρ)^(1/2). (3)

Here, vl and vt represent the longitudinal and transverse sound velocities, respectively; ρ is the density of the cell, vm is the averaged sound velocity, h is Planck's constant, k is Boltzmann's constant, n is the number of atoms in the formula unit, NA is Avogadro's number, and M is the molecular weight.
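The Anderson-model relations above reduce to a few lines of arithmetic; a sketch in Python (the moduli, density, molar mass, and atom count in the example are illustrative placeholders, not the values computed in this work):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K
NA = 6.02214076e23   # Avogadro number, 1/mol

def debye_temperature(B, G, rho, M, n):
    """Anderson-model Debye temperature.

    B, G : bulk and shear moduli in Pa
    rho  : density in kg/m^3
    M    : molar mass of the formula unit in kg/mol
    n    : number of atoms per formula unit
    """
    v_l = math.sqrt((B + 4.0 * G / 3.0) / rho)   # longitudinal sound velocity
    v_t = math.sqrt(G / rho)                     # transverse sound velocity
    v_m = ((2.0 / v_t**3 + 1.0 / v_l**3) / 3.0) ** (-1.0 / 3.0)
    return (H / KB) * (3.0 * n * NA * rho / (4.0 * math.pi * M)) ** (1.0 / 3.0) * v_m

# Illustrative placeholder inputs, not the values computed in this work.
theta = debye_temperature(B=160e9, G=110e9, rho=6.0e3, M=0.272, n=4)
print(f"theta_D ~ {theta:.0f} K")
```

Because v_t enters v_m with twice the weight of v_l and is always the smaller of the two, θD is dominated by the shear modulus, which is why stiffer-in-shear phases such as Zr2SeB and Zr2SeC show the higher Debye temperatures quoted later in the text.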
Elastic constants provide crucial insights into bonding behaviors across different crystallographic planes. Specifically, the Zr2SeX (X = C, B, N) compounds are more compressible along the c-axis than along the a-axis. This observation is supported by the lattice parameters of the MAX phases, which indicate a preferential compression along the c-axis rather than the a-axis. This trend is a common feature of MAX phases and is reflected in their elastic anisotropy.
To analyze and estimate the brittleness or toughness of a material, Poisson's ratio (σ), as a critical parameter, is usually examined. Traditionally, a value of 0.26 is the threshold for evaluating whether a material is brittle or ductile. As shown in Table 2, the Zr2SeN and Zr2SeB MAX phases are relatively brittle compared with Zr2SeC and Zr2SB. Furthermore, Pugh's ratio is a valuable tool for predicting ductile or brittle failure modes by examining the ratio of the bulk to shear moduli: a critical value of 1.75 is used to classify materials, with B/G greater than 1.75 indicating a ductile character. The Zr2SeN and Zr2SeB MAX phases are classified as brittle, similar to the previously reported Zr2SeC.
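The two empirical criteria can be combined into a small classifier; a minimal sketch (the thresholds 1.75 and 0.26 are those quoted above; the example moduli are placeholders, not the computed values of this work):

```python
def failure_mode(B, G, poisson):
    """Classify brittle vs. ductile behaviour from Pugh's and Poisson's ratios.

    Pugh's ratio B/G > 1.75 and Poisson's ratio > 0.26 both indicate
    ductility; values below those thresholds indicate brittleness.
    """
    pugh = B / G
    votes_ductile = (pugh > 1.75) + (poisson > 0.26)
    if votes_ductile == 2:
        return "ductile"
    if votes_ductile == 0:
        return "brittle"
    return "borderline"  # the two criteria disagree

# Illustrative placeholder moduli (GPa), not the computed values of this work.
print(failure_mode(B=160.0, G=110.0, poisson=0.22))  # B/G ~ 1.45 → brittle
```

Keeping the two indicators separate (rather than collapsing them into one) makes the "borderline" case explicit, which is useful because the two criteria are not guaranteed to agree for every material.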
It can be inferred that Zr2SeB possesses higher B and G values than Zr2SeN (Table 2). This observation indicates that Zr2SeB requires greater pressure than Zr2SeN for bulk and plastic deformation. Furthermore, the E value of Zr2SeB surpasses that of Zr2SeN, suggesting that Zr2SeB is harder than Zr2SeN. Moreover, C44, an important indicator of material hardness, exhibits a stronger correlation with hardness than the other elastic moduli; consequently, Zr2SeB, with a higher C44 than Zr2SeN, is expected to possess enhanced hardness. In contrast, the hardness of Zr2SeN is lower than that of Zr2SeC and Zr2SeB. These findings offer new clues for tuning the X-composition of substituted MAX phase materials, potentially leading to improved performance in various applications.
The Debye temperature, θD, helps to predict the applicability of a material at high temperatures. Using the Anderson model, the θD values of Zr2SeN and Zr2SeB are 499 K and 498 K, respectively, which are lower than that of Zr2SeC (512 K, calculated using Anderson's model; 679 K, calculated via the quasi-harmonic Debye model [21]). The θD of Zr2SeB (498 K), calculated via Anderson's model, is lower than that of Zr2SB (540 K, obtained using the quasi-harmonic Debye model [18]). Comparable results are seen in other 211 MAX-phase carbides, such as the reported Zr2SC, Hf2SB/C, and Nb2SC/B [18,34]. The Debye temperature is a required input for high-temperature applications of Zr2SeN and Zr2SeB, such as thermal barrier coatings (TBCs).
Mechanical anisotropy is one of the non-negligible factors closely related to the potential applications of functional materials. For example, in practical applications a material may develop micro-cracks or deform differently in different directions, as limited by its intrinsic mechanical properties. Thus, the mechanical anisotropies of Zr2SeB and Zr2SeN were investigated, and the corresponding data were recorded in 2D and 3D forms. As shown in Figures 5 and 6, the Young's modulus and shear modulus of Zr2SeB and Zr2SeN are direction-dependent. In terms of the elastic moduli, a spherical curved surface in 3D and circular plots in 2D indicate isotropic mechanical behavior of a solid, whereas deviations from spherical/circular symmetry indicate that the mechanical properties are anisotropic; the degree of anisotropy is measured by the amount of deviation from a perfect sphere/circle. Figure 5 shows the directional dependence of E for Zr2SeB and Zr2SeN: E is isotropic in the xy plane, where its plot is uniformly circular, but anisotropic in the xz and yz planes. Similarly, Figure 6 shows that G for Zr2SeB and Zr2SeN does not change with direction in the xy plane, where the 2D plot is uniformly circular, but varies with direction in both the xz and yz planes. These features essentially reflect the symmetry of hexagonal crystals and are consistent with the findings of M. A. Hadi et al. [18]. By mechanical analysis, the Zr2SeN and Zr2SeB phases can thus be tentatively identified as elastically anisotropic.
The three shear anisotropy coefficients, which depend on the Cij of the hexagonal crystal [35] and quantify the degree of elastic anisotropy, can be obtained as follows [36]:

A1 = 4C44/(C11 + C33 − 2C13), (13)

which is associated with the {100} shear planes in the ⟨011⟩ and ⟨010⟩ directions;

A2 = 4C55/(C22 + C33 − 2C23), (14)

which is related to the {010} shear planes in the ⟨101⟩ and ⟨001⟩ directions; and

A3 = 4C66/(C11 + C22 − 2C12), (15)

which denotes the shear anisotropy in the {001} shear planes in the ⟨110⟩ and ⟨010⟩ directions. As is well known, in anisotropic crystals Ai (i = 1, 2, 3) takes values other than unity, whereas all Ai have unit values in isotropic systems [37]. Moreover, the deviation of Ai from unity (∆Ai) determines the degree of elastic anisotropy in shear. Accordingly, Zr2SeN and Zr2SeB are elastically anisotropic in shear (Table 3). Moreover, for the specific evaluation of the elastic anisotropy of a hexagonal crystal, another anisotropy factor can be obtained from the Cij: kc/ka = (C11 + C12 − 2C13)/(C33 − C13) [38]. In this formula, ka and kc represent the linear compressibility coefficients along the a and c axes, respectively. All the values of kc/ka (Table 3) differ from unity, demonstrating the anisotropy of Zr2SeN and Zr2SeB under linear compression along the a and c directions. Hill's theory proposed that the difference between BV and BR is proportional to the level of elastic anisotropy of a crystal.
The same relationship holds for the difference between GV and GR. The percentage anisotropy factors AB and AG can then be calculated as follows:

AB = (BV − BR)/(BV + BR) × 100%, AG = (GV − GR)/(GV + GR) × 100%.

Considering compressibility and shear, these two coefficients are zero for fully isotropic crystals. We can conclude that the Zr2SeB and Zr2SeN phases are anisotropic, as confirmed by the results in Table 3. The present knowledge of the anisotropy of Zr2SeN and Zr2SeB is relevant to their mechanical stability during specific physical processes, including plastic deformation and the generation of microscale cracks.
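All of these anisotropy indicators derive from the five independent Cij; a sketch in Python, assuming Ravindran-type shear anisotropy factors consistent with the shear planes named above and the standard hexagonal Voigt/Reuss closed forms for B and G (the paper cites these expressions only indirectly via the software, so the exact forms below are our assumption; the example constants are placeholders, not the computed values of this work):

```python
def hexagonal_anisotropy(c11, c12, c13, c33, c44):
    """Anisotropy indicators for a hexagonal crystal from its elastic constants.

    Shear factors A1-A3 follow the standard forms reduced by hexagonal
    symmetry (C22 = C11, C23 = C13, C55 = C44, C66 = (C11 - C12)/2);
    A_B and A_G are the percentage anisotropies built from the Voigt (V)
    and Reuss (R) bounds on B and G.
    """
    c66 = (c11 - c12) / 2.0

    # Shear anisotropy factors (unity for an isotropic crystal).
    a1 = 4.0 * c44 / (c11 + c33 - 2.0 * c13)   # {100} planes, <011> vs <010>
    a2 = a1                                    # {010} planes; equals A1 by symmetry
    a3 = 2.0 * c66 / (c11 - c12)               # {001} planes; identically unity here

    # Linear-compressibility ratio along c vs a.
    kc_over_ka = (c11 + c12 - 2.0 * c13) / (c33 - c13)

    # Voigt and Reuss bounds of B and G for hexagonal crystals
    # (standard closed forms, stated here as an assumption).
    M = c11 + c12 + 2.0 * c33 - 4.0 * c13
    Csq = (c11 + c12) * c33 - 2.0 * c13 ** 2
    b_v = (2.0 * (c11 + c12) + c33 + 4.0 * c13) / 9.0
    b_r = Csq / M
    g_v = (M + 12.0 * c44 + 12.0 * c66) / 30.0
    g_r = 2.5 * (Csq * c44 * c66) / (3.0 * b_v * c44 * c66 + Csq * (c44 + c66))

    a_b = 100.0 * (b_v - b_r) / (b_v + b_r)   # percent
    a_g = 100.0 * (g_v - g_r) / (g_v + g_r)   # percent
    return a1, a2, a3, kc_over_ka, a_b, a_g

# Placeholder elastic constants (GPa), not the computed values of this work.
print(hexagonal_anisotropy(300.0, 80.0, 120.0, 290.0, 130.0))
```

As the code makes explicit, hexagonal symmetry forces A2 = A1 and A3 = 1, so the shear anisotropy of these phases is carried entirely by the deviation of A1 from unity, while AB and AG summarize the spread between the Voigt and Reuss bounds.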
Conclusions
In summary, the electronic structure and several mechanical properties of two chalcogen-containing ternary MAX phases, Zr2SeB and Zr2SeN, were investigated systematically via DFT calculations. The lattice parameters of Zr2SeB and Zr2SeN are consistent with those of Zr2SeC and Zr2SB, and these MAX phases exhibit dynamical and mechanical stability. Through analyses of the band structure and the density of states, the electronic character of Zr2SeB and Zr2SeN is identified as metallic, consistent with conventional MAX phases. Furthermore, we investigated the mechanical properties of Zr2SeB and Zr2SeN and compared them with those of Zr2SB and Zr2SeC obtained in previous studies. The calculations reveal the intrinsic anisotropy of the bonding strength of Zr2SeB and Zr2SeN along the a- and c-axes, and their mechanical properties, such as elastic anisotropy, brittleness, hardness, and mechanical stability, are comparable with those of other previously reported chalcogenide MAX phases. It is worth noting that the mechanical properties and electronic structure of MAX phases can be markedly modulated by rationally designing the X element. The construction of the two MAX-phase materials in this work and the corresponding calculations thus provide an effective strategy for selecting and optimizing MAX phases towards broader applications.
Guideline on management of the acute asthma attack in children by Italian Society of Pediatrics
Background Acute asthma attack is a frequent condition in children. It is one of the most common reasons for emergency department (ED) visits and hospitalization. Appropriate care is fundamental, considering both the high prevalence of asthma in children and its life-threatening risks. The Italian Society of Pediatrics recently issued a guideline on the management of acute asthma attack in children over age 2, in ambulatory and emergency department settings. Methods The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) methodology was adopted. A literature search was performed using the Cochrane Library and Medline/PubMed databases, retrieving studies in English or Italian that included children over age 2 years. Results Inhaled β2 agonists are the first-line drugs for acute asthma attack in children. Ipratropium bromide should be added in moderate/severe attacks. Early use of systemic steroids is associated with a reduced risk of ED visits and hospitalization. High doses of inhaled steroids should not replace systemic steroids. Aminophylline use should be avoided in mild/moderate attacks; weak evidence supports its use in life-threatening attacks. Epinephrine should not be used in the treatment of acute asthma because of its lower cost/benefit ratio compared with β2 agonists. Intravenous magnesium sulphate could be used in children with severe attacks and/or a forced expiratory volume in 1 s (FEV1) lower than 60% predicted, unresponsive to initial inhaled therapy. Heliox could be administered in life-threatening attacks. Leukotriene receptor antagonists are not recommended. Conclusions This guideline is expected to be a useful resource for managing acute asthma attacks in children over age 2.
Background
Acute asthma attack is a frequent condition in children. It is one of the most common reasons for emergency department (ED) visits and hospitalization [1]. It can be triggered by viral infections, atypical bacteria (i.e. Mycoplasma pneumoniae) infections, allergens and/or air pollutants, including tobacco smoke, medications, physical exercise, and stress and emotions [1]. Acute asthma attack can occur as a first episode in undiagnosed children or in children with a previous asthma diagnosis and an uncontrolled disease despite therapy [2]. Indeed, despite advances in therapy, asthma remains a disease that is not optimally controlled in many children [2]. Asthma attacks can be particularly recurrent or life-threatening and increasingly expensive in unresponsive children [2].
The multidisciplinary Italian Society of Pediatrics (ISP) panel recently issued a new guideline on the management of acute asthma attack in children over age 2, in ambulatory and ED settings, using the GRADE methodology [3]. The guideline aims to deliver up-to-date scientific evidence and recommendations to pediatricians, general practitioners, emergency medicine physicians, and nurses.
Methods
This Guideline was issued by the ISP, jointly with the Italian Society of Pediatric Respiratory Diseases, the Italian Society of Pediatric Immunology and Allergology, and the Italian Society of Pediatric Emergency Medicine. The document was developed by a multidisciplinary panel of clinicians and experts in evidence-based medicine who were identified with the help of the participating scientific societies. Specifically, the panel included experts in the fields of general pediatrics, emergency medicine, epidemiology, nursing practice, pharmacology, research methodology, and a member of the parents' association FEDERASMA. No panel member declared any conflict of interest.
The panel met in two occasions, and many of the consultations involved in the guideline development and draft processes took place interactively by e-mail or phone. The panel members first defined the objectives of the guideline, the essential clinical questions, and the appropriate inclusion and exclusion criteria for the studies from which evidence would be derived. They also identified the information sources and biomedical databases that would be consulted, and the search terms that would be used in constructing the search strategy.
The objective of the guideline was to optimize the management of acute asthma attack in children over age 2, in ambulatory and emergency department settings. This guideline was not intended for children aged 2 years or younger, with acquired or congenital immunodeficiency, major pre-existing, chronic heart or lung disease, and should not be used to treat children admitted to hospital ward or to intensive care unit (ICU).
The quality of evidence and strength of recommendations were rated using the Grading of Recommendation Assessment, Development, and Evaluation (GRADE) approach [3].
Literature search
Literature search was performed using the Cochrane Library and Medline/PubMed databases, using appropriated key words and retrieving studies published between January 2009 and December 2016, including children aged more than 2 years. The results of this search were then evaluated and selected based on both methodology and relevance. An updated literature search was performed before preparing the final draft; this search identified no additional relevant publications.
Study selection, levels of evidence, and strength of recommendations
The selection of studies, data extraction, and quality assessment were performed by specially trained personnel, following the GRADE methodology [3]. Briefly, evidence was evaluated according to six categories: 1) risk of bias, 2) inconsistency, 3) indirectness, 4) imprecision, 5) publication bias, and 6) other criteria. The quality of the studies can be up- or down-graded due to magnitude factors, limitations in any of the aforementioned categories, or other factors [3]. Finally, 4 levels of quality of evidence were indicated (high, moderate, low, very low). Subsequently, the balance between benefit and harm, patients' values and preferences, cost and resources, and the feasibility and acceptability of the intervention were assessed, and recommendations were formulated considering 4 grades of strength (Positive-strong; Positive-weak; Negative-strong; Negative-weak) [3]. A strong recommendation was worded as "we recommend" or "it should…" and a weak recommendation as "we suggest" or "it could…" The full text of the guideline and all related documents are available at the website of the ISP (www.sip.it).
Clinical and objective assessment
History should be collected very carefully, since it is an extremely important tool to predict the severity of exacerbations and the risk of hospitalization. Symptoms are poorly related to the severity of airway obstruction. Therefore, objective evaluations (i.e. pulse oximetry, peak expiratory flow, FEV1, blood gas measurement) should be considered [4][5][6][7][8][9][10][11][12][13][14]. However, the value of pulmonary function parameters in the assessment of patients with respiratory distress is modest [4][5][6][7][8][9][10][11][12][13][14][15]. Only three high quality studies are available [11][12][13]. One is a systematic review of 60 studies showing that none of the available scores is validated in clinical practice [11]. Another is an observational prospective study including 101 children aged > 6 years, demonstrating that the Clinical Asthma Score was not related to spirometry results [12]. More recently, Eggink and collaborators performed a prospective, high quality study that reviewed and validated clinical scores for dyspnoea severity in children, concluding that the commonly used dyspnoea scores have insufficient validity and reliability to allow clinical use without caution [13].
Levels of severity of the acute asthma attack and indications for hospitalization are summarized in Tables 1 and 2. It should be underlined that low oxygen saturation, especially after initial bronchodilator treatment, allows the identification of patients with more severe asthma [2,16,17]. Respiratory physiology studies showed that in a mild acute asthma attack, PaCO2 values are usually normal. Increasing values of PaCO2, in the presence of respiratory distress, may be an ominous sign of impending respiratory failure [2,16,17].
Recommendation
Level of severity should be assessed considering both clinical and objective evaluations, including pulse oximetry, peak expiratory flow, or FEV1. Blood gas measurement should be reserved only for the more severe attacks.
Table 1 Management of acute asthma attack in children. Note: PEF is expressed as percentage of personal best. Not all parameters have to be abnormal; a single abnormality may be sufficient to classify a patient into a severity class. The severity category may change when more information is available or over time.
Positive strong recommendation
Treatments
Oxygen
Numerous studies have confirmed that hypoxia is almost always present during an acute asthma attack, its degree depending on the severity of the episode [2,[14][15][16][17]. Therefore, monitoring the blood oxygenation level, mainly through pulse oximetry, is fundamental in order to select children who require oxygen therapy. Oxygen saturation should be obtained while the patient is breathing room air; however, it is not necessary to cease oxygen therapy to measure pulse oximetry if it has already been started. Clinical judgment should be applied in any circumstance [2,16,17].
Recommendation
Humidified oxygen therapy using a tight-fitting face mask or nasal cannula should be administered to children with a severe acute asthma attack and/or SpO2 < 92%. Flow rates and oxygen concentration may be delivered through a specific Venturi mask and should be sufficient to achieve saturations of ≥ 95%.
Positive strong recommendation
Inhaled short-acting ß2 agonists
Inhaled short-acting ß2 agonists are the first line treatment for acute asthma attack in children. Salbutamol is a useful medication that can be used in children of all ages, inhalation being the traditional route of administration [18]. In a systematic review dating back to 2003 and including only one pediatric study, salbutamol given continuously via nebulizer was not associated with a better outcome than frequent intermittent administration [19,20]. In a 2013 Cochrane review including 1897 children and 729 adults in 39 trials, a metered-dose inhaler (MDI) with spacer was considered the preferred option for delivering ß2 agonists in children with a mild to moderate asthma attack [21].
The salbutamol dose to be administered through an MDI with spacer should be individualized according to asthma attack severity: 200-400 μg/dose (2-4 puffs/dose) could be sufficient in mild attacks. Children with severe asthma should receive frequent doses of nebulised bronchodilators (2.5 to 5 mg of salbutamol), driven by oxygen, given the risk of oxygen desaturation while using air-driven compressors. Once improving on two- to four-hourly salbutamol, patients should be switched to an MDI with spacer [16,17,22].
Recommendation
Salbutamol is the first line treatment for acute asthma attack in children. In severe attack it should be administered frequently, up to 3 times every 20-30 min within the first hour.
Positive strong recommendation
Recommendation
An MDI with spacer should be used to deliver ß2 agonists in children with a mild to moderate asthma attack. Children with severe asthma should receive frequent doses of nebulised bronchodilators (2.5 to 5 mg of salbutamol), driven by oxygen.
Positive strong recommendation
Intravenous short-acting ß2 agonists
Literature data regarding the use of iv short-acting ß2 agonists are poor. A 2012 Cochrane review, including 2 pediatric studies (one in the ICU), found no consistent evidence favoring the use of iv short-acting ß2 agonists in patients with acute asthma [23].
Some authors suggest the use of iv salbutamol in addition to inhaled ß2 agonists in children with a severe asthma attack unresponsive to initial therapy [23]. The recommended dose is a single bolus of 15 μg/kg (dilution: 200 μg/mL for a central iv line; 10-20 μg/mL for a peripheral iv line) over 10 min, followed by continuous infusion of 0.2 μg/kg/min. Higher doses (1-2 μg/kg/min, up to 5 μg/kg/min) can be administered in unresponsive children [2,16]. Intravenous salbutamol should be given in the ICU with continuous ECG and twice-daily electrolyte and lactate monitoring [17].
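As an arithmetic illustration only (not a clinical tool), the weight-based regimen described above (bolus of 15 μg/kg over 10 min, then 0.2 μg/kg/min, at the stated dilutions) can be sketched as follows; the function name and parameter defaults are ours, and the numbers are taken verbatim from the text:

```python
def iv_salbutamol_plan(weight_kg, dilution_ug_per_ml=200):
    """Illustrative arithmetic for the iv salbutamol regimen cited
    in the guideline text: a single 15 ug/kg bolus over 10 min,
    then continuous infusion at 0.2 ug/kg/min.
    NOT a clinical dosing tool."""
    bolus_ug = 15 * weight_kg                  # single bolus over 10 min
    bolus_ml = bolus_ug / dilution_ug_per_ml   # volume at the chosen dilution
    infusion_ug_per_min = 0.2 * weight_kg      # starting continuous infusion
    return bolus_ug, round(bolus_ml, 2), infusion_ug_per_min

# Example: a 20 kg child with a central line (200 ug/mL dilution)
print(iv_salbutamol_plan(20))  # (300, 1.5, 4.0)
```

A peripheral line would use the more dilute 10-20 μg/mL preparation, increasing only the administered volume, not the dose.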
Recommendation
Salbutamol could be administered intravenously (iv) in children with asthma attack not responding to initial therapy.
Positive weak recommendation
Recommendation
Children receiving iv salbutamol should be admitted to an intensive care unit with continuous ECG and twice-daily electrolyte and lactate monitoring.
Positive strong recommendation
Ipratropium bromide
Ipratropium bromide induces a slower bronchodilator response than ß2 agonists, but the combination of the two medications produces a synergic effect. In a severe attack the recommended nebulized dose is 125-250 μg/dose (in children < 4 years of age) to 250-500 μg/dose (in children ≥ 4 years of age), in combination with nebulized salbutamol. It should be administered frequently, up to 3 times every 20-30 min, within the first hour; the ipratropium dose should then be tapered to 4- to 6-hourly or discontinued [17]. Once ipratropium bromide is discontinued, the salbutamol dose should be tapered to one- to two-hourly according to clinical response.
Table 2 (indications for hospitalization) includes: severe asthma itself, irrespective of worsening; history of previous severe life-threatening asthma episodes, or previous admission to ICU.
A 2012 Cochrane review [24] including four trials on 173 children found that treatment failure was more likely with anticholinergics alone than when anticholinergics were combined with short-acting ß2 agonists (OR 2.65; 95% CI 1.2 to 5.88). The authors concluded that inhaled anticholinergic drugs are not appropriate for use as a single agent in children with acute asthma exacerbations. In a subsequent 2013 Cochrane review [25], including 15 studies with 2497 children, the addition of an anticholinergic to a short-acting ß2 agonist significantly reduced the risk of hospitalization (RR: 0.73; 95% CI: 0.63 to 0.85). Fewer children treated with anticholinergics plus short-acting ß2 agonists reported nausea and tremors compared to short-acting ß2 agonists alone; no significant group difference was observed for vomiting. The authors concluded that inhaled anticholinergics given in addition to ß2 agonists are effective in reducing hospitalizations in children arriving at the ED with a moderate to severe asthma exacerbation [25]. Only one study yielded a different result; however, it should be noticed that an MDI plus spacer was used [26]. This was a prospective, single-blinded, randomized, controlled, equivalence trial in a tertiary pediatric ED, including 347 children, showing that the addition of ipratropium bromide was not significantly associated with a reduction in admission rates [26]. In a 2014 Cochrane review, including 4 studies on 472 children admitted to pediatric wards, no evidence of benefit for length of hospital stay or other markers of response to therapy was noted when nebulised anticholinergics were added to short-acting ß2 agonists [27].
Recommendation
Nebulized inhaled ipratropium bromide, given in addition to short-acting β2-agonists, should be administered in children with a moderate to severe asthma attack.
Positive strong recommendation
Steroids
Systemic steroids (SS) have been reported to be effective in the treatment of acute asthma attack in children, with no difference between the oral and intravenous/intramuscular routes of administration [28]. Therefore oral steroids are preferable in the absence of vomiting. Dexamethasone, prednisone, and prednisolone are equally effective, even if dexamethasone is associated with a higher risk of vomiting [28]. A recent open randomized trial [29] and one meta-analysis including 6 pediatric studies [30] demonstrated no difference in efficacy between prednisone and dexamethasone in children with acute asthma attack. However, this meta-analysis concluded that "emergency physicians should consider single or 2-dose dexamethasone regimens over 5-day prednisone/prednisolone regimens for the treatment of acute asthma exacerbations", due to easier administration and fewer side effects with dexamethasone [30]. A recent meta-analysis including 18 studies with a total of 2438 participants assessed the efficacy and safety of any dose or duration of oral steroids versus any other dose or duration of oral steroids for adults and children with an asthma exacerbation [31]. Literature data were not sufficient to discriminate whether shorter or lower-dose regimens are less effective than longer or higher-dose regimens, or indeed whether more adverse events are associated with the latter. Thus, the authors underline that regimen characteristics including palatability, duration, and costs should be considered in order to improve adherence in individual patients [31]. Another recent meta-analysis, including 10 RCTs in children, concluded that dexamethasone is likely to have fewer adverse effects than other corticosteroids, with similar efficacy in reducing hospitalizations and revisits [32].
Considering the time needed to induce gene expression and protein synthesis, the majority of the pharmacological effects of steroids are not immediate, but become evident some hours after intake. However, glucocorticoids can also have rapid effects on inflammation which are not mediated by changes in gene expression [33]. Therefore their efficacy is optimized by early use. Accordingly, an inverse association between time of administration and risk of hospitalization has been reported in a systematic review [34]. Steroid intake within the first hour from admission to the ED was associated with significantly reduced time spent in the ED and a lower hospitalization rate [33].
The optimal duration of steroid therapy is unclear; some experts suggest prolonging therapy for 3 to 5 days, with no need to taper the dose at the end, particularly when using molecules with a short or intermediate half-life [34]. In a recent review, single or recurrent systemic short-term (< 2 weeks) steroid courses in children with asthma exacerbations did not raise any concern about short-term adverse effects [35].
However, it is important to underline the long-term risks caused by recurrent administration of oral steroids in children with asthma. Literature data report that children who require more than four courses of oral corticosteroids as treatment for underlying disease, including asthma, are at increased risk of fracture [36]. Furthermore, the CAMP study demonstrated that multiple oral corticosteroid bursts over a period of years can produce a dose-dependent reduction in bone mineral accretion and increased risk of osteopenia in children with asthma [37].
Recommendations
Systemic steroids (SS) should be used in the moderate to severe acute asthma attack in order to reduce the hospitalization rate and the risk of recurrence. The oral course should be preferred in children able to retain drugs orally.
Inhaled corticosteroids (ICS)
Two RCTs, whose results have been reported in three manuscripts [39][40][41], showed that the addition of high-dose ICS to standard asthma attack therapy, including SS, was not associated with clinical improvement after one and 2 h. However, in one study it was associated with a decreased admission rate in children with severe acute asthma [40].
Two randomized clinical trials compared the effectiveness of high-dose ICS vs. SS [42,43]; the results showed that ICS and SS have the same efficacy in improving clinical symptoms. However, one study [38] showed that in the group treated with high doses of budesonide (800 μg/20 min) there was an increase in the percentage of children discharged from hospital after 2 h compared to the group treated with prednisolone (2 mg/kg).
A systematic review including eight studies published between 1995 and 2006 [44] showed no differences in the treatment with high-dose ICS or SS regarding admission rates, ED visits and rescue medications.
Two Cochrane reviews [45,46], including both adult and pediatric studies, concluded that there is insufficient evidence that ICS treatment results in clinically important changes in pulmonary function or clinical scores when used in addition to SS in acute asthma [45,46]. There is likewise insufficient evidence that ICS therapy can be used in place of SS therapy when treating acute asthma [45,46]. A 2012 Cochrane review [47] evaluated the effectiveness of ICS treatment after discharge from the ED and concluded that ICS provides no additional benefit to standard therapy with SS in the post-discharge treatment of children with acute asthma. In conclusion, there is some evidence that high doses of ICS can be as effective as SS in the post-discharge treatment of children with acute asthma. However, it should be noticed that the settings where the trials were performed (including specifically dedicated nurses and/or doctors) are difficult to replicate in everyday practice in the ED or ambulatory settings. In such situations, prudently, SS should be preferred. In addition, the higher cost of ICS should be considered.
Recommendation
- High doses of ICS should not be used instead of SS in an asthma attack.
Negative strong recommendation
- Children treated with ICS can continue their usual doses of ICS during the asthma attack.
Positive strong recommendation
Aminophylline
Several studies are available comparing the efficacy of aminophylline in different clinical settings (i.e. aminophylline compared to placebo when added to inhaled ß2 agonists, or compared to iv salbutamol in more severe attacks) [49]. A recent review summarized the results of 12 RCTs, involving 586 children, comparing aminophylline with placebo or usual treatment [49]. Improvement in clinical severity scores was found in 3 RCTs but not confirmed in the other six, while 2 RCTs showed improved lung function scores and two did not [49]. One trial showed that iv aminophylline reduced ICU admission rates, but no trial showed any benefit of aminophylline on length of hospital or ICU stay [49]. Seven of these 12 trials were included in a 2005 Cochrane review [50], which concluded that intravenous aminophylline improved lung function within 6 h of treatment, but did not appear to reduce symptoms or length of hospital stay, and there was insufficient evidence to evaluate its impact on ICU admission rates [50]. In conclusion, in the setting of a moderate asthma attack, adding aminophylline to inhaled ß2 agonists and steroids does not offer substantial benefits [49,50].
In the setting of severe asthma attacks, literature data comparing iv salbutamol with iv aminophylline are poor [49,51], and no substantial difference in efficacy emerges between the two drugs. In particular, iv aminophylline and salbutamol (or terbutaline) have been compared head-to-head in 4 RCTs including 202 children [49]. In three trials no difference in clinical severity scores was reported between iv salbutamol and iv aminophylline. Moreover, no difference was observed in the one study reporting ICU admission rates or in the two RCTs reporting length of hospital stay [49]. No study reported lung function outcomes. These paediatric studies were included in a subgroup analysis in a Cochrane review [51], which concluded that there was no consistent evidence to help decide between iv aminophylline and iv salbutamol as the therapy of choice. In a recent study, a single iv dose of magnesium sulphate, added to an inhaled ß2 agonist and SS, was more useful and safe than iv aminophylline in 100 children with severe acute asthma [52]. In summary, the administration of iv aminophylline can be considered in addition to usual care in patients with impending respiratory failure and in those who have shown a good response to the drug in the past [2,16,17]. Serum level measurements are needed, especially in patients already being treated with oral aminophylline [2,16,17]. Few studies are available regarding the use of low-dose aminophylline, and further data are needed on this issue [53].
Recommendation
Aminophylline should not be used in mild to moderate acute asthma.
Negative strong recommendation
Recommendation
Iv salbutamol or iv aminophylline could be used in severe acute asthma in children not responding to inhaled ß2 agonists and oral corticosteroids. There are no significant differences between the two treatments.
Positive weak recommendation
Epinephrine
Epinephrine does not offer any advantage compared to ß2 agonists in the treatment of acute asthma and is associated with a greater risk of side effects, especially in hypoxemic patients. Epinephrine could be used if ß2 agonists are not available [2,16,17].
Recommendation
Epinephrine should not be used in the treatment of acute asthma because of its lower cost/benefit ratio compared to ß2 agonists.
Magnesium sulphate
Pediatric experience is still limited and relates to the use of a single iv dose of 25-40 mg/kg. In a recent RCT of moderate quality [54] in 143 children with severe asthma, the intravenous administration of magnesium sulphate during the first hour was associated with a significant decrease in the number of patients requiring mechanical ventilation. In a pharmacokinetic study [55] in 19 children with severe asthma, a bolus of magnesium sulphate (50-75 mg/kg), followed by continuous infusion (40 mg/kg/h) for 4 h, was safe and maintained appropriate serum Mg levels.
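As an arithmetic illustration only (not a clinical tool), the two MgSO4 regimens cited above can be sketched as follows; the function name is ours, and the per-kg figures are those quoted in the text (single dose 25-40 mg/kg; pharmacokinetic-study bolus 50-75 mg/kg then 40 mg/kg/h for 4 h):

```python
def mgso4_doses(weight_kg):
    """Illustrative arithmetic for the MgSO4 regimens cited in the
    text: a single iv dose of 25-40 mg/kg, or a 50-75 mg/kg bolus
    followed by a 40 mg/kg/h infusion for 4 h [55].
    NOT a clinical dosing tool."""
    single_dose_mg = (25 * weight_kg, 40 * weight_kg)   # dose range, mg
    bolus_mg = (50 * weight_kg, 75 * weight_kg)         # bolus range, mg
    infusion_total_mg = 40 * weight_kg * 4              # 40 mg/kg/h over 4 h
    return single_dose_mg, bolus_mg, infusion_total_mg

# Example: a 20 kg child
print(mgso4_doses(20))  # ((500, 800), (1000, 1500), 3200)
```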
There are conflicting data about the use of nebulized MgSO4 in addition to β 2 agonists in asthma exacerbations [56,57].
One RCT including 508 children with severe acute asthma [58] compared the effect of nebulized magnesium sulphate with placebo. In the treated group there was a statistically significant improvement in asthma score after 60 and 240 min; however, the clinical relevance of this finding is uncertain. Adverse events, none serious, were observed in 19% of patients in the Mg group and in 20% of the controls. The study concluded that there might be a role for nebulised MgSO4 in children with a severe exacerbation whose SaO2 in air remains below 92% after the first nebulised treatment, and in those with a shorter duration of symptoms [58]. A role for nebulised MgSO4 has similarly been considered by other authors [59], but further studies are needed in this regard.
A recent RCT evaluated the effect of nebulized MgSO4 on FEV1 and PEF in children with acetylcholine-induced asthma [60]. Nebulized MgSO4 showed a broad bronchodilator effect, but the rise in FEV1 and PEF was not superior to salbutamol, and there is no evidence that the combination of salbutamol and magnesium sulphate displays a synergistic effect. No significant adverse event risk was reported.
A recent meta-analysis [61] including 5 studies (182 children) demonstrated that treatment with iv MgSO4 reduced the odds of admission to hospital by 68%. Adverse events have not been reported consistently with magnesium sulphate therapy.
Recommendation
MgSO4 could be used intravenously in children with severe asthma not responding to the initial treatment. MgSO4 could be also used if FEV1 is less than 60% predicted, after the first hour.
Positive weak recommendation
Recommendation
Nebulized MgSO4 should not be used in mild, moderate or severe asthma, since the available evidence is poor.
Negative strong recommendation
Heliox
A gas mixture of helium and oxygen (heliox) can decrease the work of breathing and improve ventilation in patients with airway obstruction. The use of this mixture is not indicated in mild to moderate asthma; it can be used as an alternative to oxygen in severe asthma not responding to initial treatment [62].
According to the results of a systematic review of 5 pediatric RCTs (1996-2010) including 143 children, there are insufficient data to support the routine use of heliox in acute asthma. In particular, no benefit has been demonstrated in terms of rate or length of hospitalization, or in the percentage of children requiring intubation [63]. However, heliox is a safe therapy, and some data suggest that it may benefit patients with severely impaired lung function. A systematic review and meta-analysis [64], including 3 pediatric studies and 113 children, showed that heliox used as a vehicle to deliver ß2 agonists (compared with oxygen) was associated with improvement of acute asthma, especially in the most severe attacks, and with a reduced need for hospitalization [64].
Notably, a non-rebreathing high-flow system is needed to administer heliox, since a high flow is required to deliver appropriately sized aerosol particles.
Recommendation
A helium-oxygen mixture (70%: 30%) could be used in severe asthma unresponsive to standard therapy.
Positive weak recommendation
Leukotriene modifiers
A Cochrane review is available including 1470 adults and 470 children (aged 2-12) treated for acute asthma in the ED and randomized to receive montelukast or placebo in addition to standard therapy [65]. No statistically significant difference was found in the risk of hospitalization with the use of oral montelukast in addition to standard therapy [65]. These results were recently confirmed by Wang and colleagues in one trial comparing montelukast versus placebo in 117 children aged 2 to 5 years, demonstrating no difference in PEF or lung function improvement [66].
Recommendation
Leukotriene modifiers in addition to standard therapy should not be used.
Conclusions
This guideline is an updated tool for the management of acute asthma attack in children over age 2. The review of the literature supports the use of salbutamol as the most appropriate ß2 agonist. Adding ipratropium bromide is an effective aid in moderate and severe attacks. Oral corticosteroids should be used in moderate-to-severe acute asthma attacks to prevent hospitalizations and symptom relapse; adding steroids to moderate and severe attacks is more effective if done at an early stage. Intravenous steroids should be reserved for selected children unable to take oral medications. High doses of inhaled steroids should not replace systemic steroids. Aminophylline use is not recommended in mild to moderate acute asthma attacks; weak evidence supports its use in life-threatening attacks. Epinephrine should not be used in the treatment of acute asthma because of its lower cost/benefit ratio compared to ß2 agonists. The use of iv MgSO4 could be considered only in children with a severe asthma attack who are unresponsive to initial treatment and/or who have FEV1 less than 60% predicted after 1 h of standard therapy. A helium-oxygen mixture (70%:30%) can be used in severe asthma attacks unresponsive to standard therapy. Leukotriene modifiers are not currently recommended.
Comparison of an Immunochromatographic Rapid Strip Test, ELISA and PCR in the Diagnosis of Hepatitis C in HIV Patients in Hospital Settings in Cameroon
Cameroon belongs to the group of countries highly endemic for hepatitis C virus. Coinfection with hepatitis C and HIV is also common due to the shared route of transmission of both viruses. In hospital settings in Cameroon, diagnosis prior to treatment of hepatitis C is based solely on the results obtained with an immunochromatographic rapid strip test (97%). This study was aimed at determining the validity of the results obtained when an immunochromatographic rapid strip test is used to diagnose hepatitis C virus infection in HIV-positive patients, in comparison with more sensitive and specific methods such as ELISA and PCR. In a cross-sectional study in two parts, 700 participants were enrolled: 350 HIV-positive patients and a control group of 350 individuals not infected with HIV. All participants were screened for anti-HCV antibodies using the ACON HCV strip test, an assay used in 57·1% of Cameroon hospitals. Of the 350 HIV-positive patients, 25 (7·1%) were found to be positive with the rapid strip test, of whom 3 (12%) were positive with an ELISA, and all 3 (100%) positive with the ELISA were also positive with PCR. Evaluation of the rate of false positives with the rapid strip test, using ELISA as the gold standard, gave a rate of 6·3%. Meanwhile in the control group, after screening with the rapid strip test, 39 (11·1%) were positive, of whom 6 (15·4%) were positive with the ELISA, and 3 (50%) of the 6 positive with the ELISA were positive with PCR. Evaluation of the rate of false positives with the rapid strip test in the control group, using ELISA as the gold standard, gave a rate of 9·6%. False positive results with this immunochromatographic rapid strip test for the diagnosis of hepatitis C virus infection are therefore common, reinforcing the need for a confirmatory test prior to treatment in hospital settings in Cameroon.
Introduction
Coinfection with HIV and hepatitis C virus is a major public health problem in both developed and developing countries. A good proportion, 4-5 million of the 40 million HIV patients, are coinfected with hepatitis C virus [1,2], probably because both viruses are acquired through similar parenteral routes. Hepatitis C virus infection is the major cause of liver disease (chronic hepatitis, liver cirrhosis, and hepatocellular carcinoma), and its natural history is accelerated in HIV [3].
In hospital settings in Cameroon, diagnosis of hepatitis C is often made using immunochromatographic rapid strip tests. These assays work on the common principle of antibody present in the test serum/plasma reacting with a protein-coated particle (protein A) and migrating upward on a membrane chromatographically by capillary action to react with recombinant HCV antigen present on the membrane, thereby generating a coloured line in the test region. Much reliance is placed on these immunoassays because they are cheap and easily affordable, require minimal technical expertise to operate, and give results very rapidly, especially in a setting where diagnosis has to be made before treatment commences on a daily basis. Treatment is with pegylated interferon and ribavirin combination therapy. Side effects of these regimens are very common, including hematologic side effects (in most cases anemia) [4] and the possibility of drug interactions with anti-HIV drugs.
The immunologic reaction of patients infected with hepatitis C virus is diverse, especially during the seroconversion phase, giving varying results with these immunochromatographic rapid strip tests. The colour intensity of the test line depends on the concentration of antibody to hepatitis C produced by the patient. Therefore, in the presence of immunodeficiency such as HIV, where adequate antibody production may be a problem, this may affect the quality of the results produced by these rapid test strips. This cross-sectional study in two parts therefore seeks to determine the validity of the results obtained with an immunochromatographic rapid strip test when used to diagnose hepatitis C in HIV patients, in comparison with more accurate diagnostic methods (ELISA and PCR) in today's health care practice, and also to determine the risk factors involved in the transmission of hepatitis C virus.
Hospital Survey
Hospitals were randomly selected in the country and a questionnaire was administered to them in order to determine how hepatitis C virus infection is being diagnosed.
Study Area
This study was done in Bamenda using two facilities: Mbingo Baptist Hospital (MBH) and Nkwen Baptist Health Center. Bamenda (coordinates 5°56′N 10°10′E) is the capital of the North West Region of Cameroon, a country situated on the west wing of Central Africa. The region was chosen because of its great cultural diversity and the relatively high prevalence of HIV (6.9%), higher than in any other region in the country [5].
Study Population
This study was approved by the Cameroon Baptist Convention Institutional Review Board (CBC IRB). Blood samples were collected from 700 randomly selected participants, after obtaining their informed consent, between 20 June and 01 August 2011. These participants included 350 HIV-positive patients and a control group (HIV-negative) of 350 participants. Among the 350 HIV-positive patients were 207 (73·4%) women and 93 (26·6%) men (median age 37 years), while 245 (70%) women and 105 (30%) men made up the control group (median age 35 years). Inclusion was limited to individuals above 18 years because of the sexually sensitive nature of the questionnaire they had to complete to determine risk factors for the transmission of hepatitis C virus in the region. The questionnaire was duly explained to the participants in the local Pidgin language, after which 5 ml of blood was collected into two separate tubes (an EDTA tube and a dry tube) for every participant. The tubes were centrifuged; the serum from the dry tubes was transferred into Eppendorf tubes and frozen at -20°C until further analysis, while the plasma samples were used for the screening process. Questionnaires and samples were identified only by a study number.
Screening for HIV
Screening for HIV was done on the control group to confirm their HIV-negative status. This was done in accordance with the Cameroon algorithm for HIV screening approved by the WHO. The first line test was Determine™ HIV (Abbott Laboratories, Abbott Park, IL, USA) and the second line test was Hexagon (Human Diagnostic, Germany).
Screening for HCV
Initial screening for anti-HCV antibodies was done using an immunochromatographic rapid strip test, ACON HCV rapid test (ACON Laboratories, Inc) on the plasma of the samples collected. The manufacturer's instructions were closely followed.
Positive samples with the rapid strip test were taken to the virology laboratory of Centre Pasteur in Yaounde for confirmation.
Confirmation of Anti-HCV Antibodies
The samples that were positive for anti-HCV antibodies with the rapid strip tests were confirmed for anti-HCV antibodies using a commercial third-generation ELISA, MONOLISA anti-HCV plus version 2 (Bio-Rad, Marne La Coquette, France), paying close attention to the manufacturer's instructions. The results of the assay were expressed as a ratio (R) of the optical density (OD) of the sample to the calculated cut-off absorbance, as recommended by the manufacturer. Samples were considered positive with a ratio (R) ≥ 6·0.
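The ratio computation described above can be sketched as follows. This is a minimal illustration: the function name and the OD values in the usage note are hypothetical, and the positive cutoff of R ≥ 6·0 simply follows the protocol as stated in this text.

```python
def elisa_result(sample_od, cutoff_od, positive_ratio=6.0):
    """Classify a confirmatory ELISA reading.

    The result is expressed as the ratio R of the sample's optical
    density (OD) to the calculated cut-off absorbance; a sample is
    considered positive when R is at or above the stated cutoff.
    Returns (classification, R).
    """
    r = sample_od / cutoff_od
    return ("positive" if r >= positive_ratio else "negative"), r
```

For example, `elisa_result(1.8, 0.25)` gives a ratio of 7.2 and a "positive" classification, while `elisa_result(0.3, 0.25)` gives 1.2 and "negative".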
HCV RNA Detection
The samples that were positive with the ELISA were further processed to detect the presence of HCV RNA. Viral RNA was extracted from 140 µl of serum with a QIAamp® Viral RNA Mini Kit (Qiagen, Courtaboeuf, France) according to the manufacturer's instructions. The extracted RNA was used as a template and amplified using an in-house RT-PCR with primers to the NS5B region (Pr3 and Pr2). The amplified products were analysed by electrophoresis in a 1.5% agarose gel.
Statistical Analysis
Statistical analyses were performed with MINITAB 15 (English version). Differences between proportions were determined using the chi-square (χ²) or Fisher's exact test. P values < 0·05 were considered statistically significant, corresponding to a 95% confidence level.
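The chi-square test for comparing two proportions can be illustrated with a short standard-library sketch (for a 2×2 table with one degree of freedom, the p-value is erfc(sqrt(χ²/2)), so no statistics package is required; the optional Yates continuity correction is what many packages apply by default to 2×2 tables):

```python
import math

def chi2_2x2(a, b, c, d, yates=True):
    """Chi-square test for the 2x2 table [[a, b], [c, d]].

    Computes expected counts from the row and column totals, sums
    (|obs - exp| - correction)^2 / exp over the four cells, and converts
    the statistic to a p-value using the df = 1 survival function
    erfc(sqrt(x / 2)). Returns (chi2, p).
    """
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        diff = abs(obs - exp)
        if yates:
            diff = max(diff - 0.5, 0.0)  # Yates continuity correction
        chi2 += diff * diff / exp
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p
```

As a usage sketch, the false-positive comparison reported in the Results (22 of 350 rapid-test false positives among HIV-positive patients versus 33 of 350 in the control group, counts reconstructed here from the rapid-test and ELISA figures) yields a chi-square near 2 with P well above 0·05, in line with the non-significant difference reported; the exact statistic depends on whether the continuity correction is applied.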
Limitations
ELISA and PCR could not be performed on samples from all 700 participants because of cost; they were performed only on samples that were positive with the immunochromatographic rapid strip test. In this study, only the rate of false positives was evaluated. It is therefore assumed that all samples negative with the rapid strip test would also be negative with ELISA and PCR.
Results
Most (97%) of the 70 hospitals considered in the survey are currently using immunochromatographic rapid strip tests as the sole diagnostic criterion for infection with hepatitis C virus. Two (3%) hospitals use an ELISA to diagnose hepatitis C virus infection, but none of the hospitals that were included in the survey use PCR to confirm the infection. The most common type of rapid strip test is the ACON® HCV strip test, which was currently in use by 40 (57·1%) of the 70 hospitals in the survey (Table 1).
Based on the records obtained from the HIV-positive patients, 280 (80%) of the 350 patients were on antiretroviral therapy. The most common HIV type among the 350 HIV-positive patients was HIV-1, with 275 (78%) infected, followed by HIV-1&2 with 48 (14%) and HIV-2 with 27 (8%) (Figure 1). The individuals of the control group were confirmed not to be infected with HIV. Upon screening all participants for anti-HCV antibodies with the immunochromatographic rapid strip test, 25 (7·1%) of the 350 HIV-positive patients were found to be positive for anti-HCV antibodies, and 39 (11·1%) of the 350 individuals of the control group were also found to be positive with the rapid strip test (Table 2).
Upon confirmation with the ELISA, 3 (12%) of the 25 samples from the HIV-positive group that were positive with the rapid strip test were found to be positive with the ELISA, while 6 (15·4%) of the 39 samples from the control group that were positive with the rapid strip test were found to be positive with the ELISA as well. From the results obtained with the ELISA, it follows that 3 (0·9%) of the 350 HIV-positive patients and 6 (1·7%) of the 350 individuals who made up the control group were positive for anti-HCV antibodies (Table 2). The overall seroprevalence of anti-HCV in the study population was therefore 1·3% (95% CI, 0·46-2·14).
HCV RNA could be detected in all 3 (100%) samples that were positive with ELISA from the group of HIV-positive patients, and in only 3 (50%) of the 6 samples that were positive with ELISA from the control group (Table 2). The electrophoretic patterns of the RNA are shown in Figure 2.
Hepatitis C virus infection was found to be more common in individuals above 50 years of age (χ² = 13·569, P = 0·0002) than in individuals below 50 years (Table 3). No significant difference was observed in hepatitis C virus infection between males and females (χ² = 1·187, P = 0·2760) (Table 4), or between HIV-positive patients and the control group (χ² = 0·987, P = 0·3204). In order to evaluate the rate of false positives for the rapid strip test, the results obtained with ELISA were used as the gold standard. Upon evaluation, it was observed that the rate of false positives in the control group (9·4%) was higher than the rate (6·3%) in the HIV-positive group. This difference was, however, not statistically significant (χ² = 2·040, P = 0·1532) (Table 5).
Assessment of risk factors for transmission showed that only age (above 50 years) was significant. Other factors, such as sex, marital status, circumcision, past intravenous infusion, blood transfusion, traditional scarification, history of STDs, history of surgery, education, and the HIV status of the individual, were not found to be major risk factors for the transmission of hepatitis C virus infection in the study population (Table 6).
Discussion
From the survey of hospitals in Cameroon, it was observed that most hospitals (97%) diagnose hepatitis C using a rapid strip test, only a few (3%) use an ELISA, and none (0·0%) use PCR. The most common rapid strip test is the ACON® HCV strip test, which was in use by 57·1% of hospitals to diagnose hepatitis C virus infection.
This study therefore shows the discrepancy that exists when the diagnosis of hepatitis C virus is made solely on the results obtained from immunochromatographic rapid strip tests. Of the 25 samples from the HIV-positive group, only 3 (12%) were positive with the ELISA, and a similar scenario was observed in the control group, where of the 39 samples that were positive with the same rapid strip test, only 6 (15·4%) were positive with the ELISA. The Monolisa anti-HCV ELISA used in this study is a third-generation ELISA and has been shown to be very sensitive (100%) and specific (98%), and even has the capability of reducing the window period for detection of the virus by 72 days [6]. It is therefore very likely that a positive result with ELISA will also be positive with PCR, as shown by the observation that all 3 (100%) samples that were positive with the ELISA from the HIV-positive group were also positive with PCR, and 3 (50%) of the 6 samples from the control group that were positive with the ELISA were also positive with PCR. Evaluation of the rate of false positives with the rapid strip test in the HIV-positive group, when compared with ELISA, gave a rate of 6·3%; this was lower than the rate (9·4%) obtained in the control group, although the difference was not statistically significant (χ² = 2·040, P = 0·1532). Only the rate of false positives was evaluated in this study, and not the rate of false negatives, because this study was designed to determine the validity of the results that are given out as positive in the hospitals in Cameroon, where treatment begins almost immediately with therapy that has many side effects and is also very costly for the patients. However, false negative results can be produced in the case of HIV patients, who may have difficulties in raising antibodies against the virus. A false negative result obtained in this case would be better than a false positive result.
On the contrary, in blood banking, a false negative result can be disastrous because infected blood will be transfused, thereby transmitting the virus to an uninfected individual. The observation that in the control group HCV RNA could be detected in only 3 (50%) of the 6 samples that were positive with the ELISA can signify either a false positive or the phenomenon known as spontaneous viral clearance, which is in accordance with the natural history of infection with hepatitis C virus, whereby 10-60% of individuals who have been infected with the virus have the ability to clear the virus from their system even without treatment [7]. This phenomenon further reinforces the importance of investigating further before beginning treatment. False positives with ELISA cannot be completely ruled out in Africa, where studies have shown that Africans produce antibodies that react non-specifically with ELISAs, which has even prompted the development of newer generations of ELISA [8][9][10]. A more cost-effective algorithm for testing for hepatitis C virus had been proposed by Njouom et al. (2006), which, in addition to the initial screening test, incorporated a second rapid strip test to complement the first. Njouom et al. (2006) had evaluated a suitable candidate and found Hexagon (Human Diagnostic, Germany) to be the best option; according to the authors, a positive result with the Hexagon HCV test was a more likely indicator of the presence of viraemia [11]. But this does not undermine the importance of ELISA as the most appropriate confirmatory test for the detection of anti-HCV antibodies. We therefore propose an algorithm for screening hepatitis C in hospital settings in Cameroon incorporating an ELISA, as shown in Figure 3.
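The kind of tiered algorithm proposed here (rapid strip first, ELISA to confirm a positive strip, PCR reserved for unresolved cases) can be sketched as a simple decision function. This is an illustration only: the function name, return labels, and the use of the text's R ≥ 6·0 ELISA cutoff are the author of this sketch's assumptions, not part of the published Figure 3.

```python
def hcv_screening(rapid_positive, elisa_ratio=None, symptoms_persist=False):
    """Illustrative tiered HCV screening decision.

    A positive rapid strip is referred for ELISA confirmation; a
    confirmed ELISA (ratio at or above the cutoff) is referred for PCR
    to assess viraemia; a negative rapid strip with persisting symptoms
    is also referred for PCR rather than dismissed.
    """
    if rapid_positive:
        if elisa_ratio is None:
            return "confirm with ELISA"
        if elisa_ratio >= 6.0:
            return "anti-HCV positive: assess viraemia with PCR"
        return "likely false positive: monitor"
    if symptoms_persist:
        return "investigate further with PCR"
    return "negative"
```

The design point is that no single positive rapid-strip result triggers treatment on its own, mirroring the paper's argument against treating on strip results alone.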
The overall HCV seroprevalence of 1·3% (95% CI, 0·46-2·14) with ELISA reported in this study is far lower than the seroprevalence of 11% reported in the Littoral Region [12] and the 21% in the East Region [13] of Cameroon. But the seroprevalence of 0·9% with ELISA found among the HIV-positive patients is similar to the 0·6% reported in HIV-positive patients in the East Region [13], though lower than the 1·7% found in the control group of this study. This suggests that although the region has the highest seroprevalence of HIV (6·9%) [5] and other sexually transmitted diseases, no significant difference was observed in hepatitis C virus infection between the HIV-positive patients and the control group (χ² = 0·987, P = 0·3204). This further calls into question the view of hepatitis C virus as a sexually transmitted infection.
The major risk factor for infection with hepatitis C virus in this study was the age of the individual. HCV was found to be common among individuals over 50 years (OR = 9·33, P = 0·0021). This is very similar to what was obtained in Ebolowa in the Southern Region of Cameroon [14]. It may appear that as the elderly pass away the prevalence of the infection in the population will drop, but that is often not the case, as observations from two independent studies on the same population in different time frames have shown otherwise. For example, a study that was done on the Pygmy and Bantu population in the East Region of Cameroon reported a seroprevalence of 18·6% in 1995 [15], and another independent study on the same population in 2007 reported a seroprevalence of 21·0% [13]. The increasing prevalence with age supports the hypothesis of Pépin et al. (2010), which suggests that these individuals were present in a time frame when large immunization campaigns were organized against tropical diseases that were more prevalent at the time, such as African trypanosomiasis and treponemal diseases like syphilis, and most importantly before the implementation of blood screening for the virus prior to transfusion by the WHO. Other risk factors that were not found to be significant in this study were medical interventions in the form of surgery (OR = 4·25, P = 0·0882), traditional scarification (OR = 0·55, P = 0·4192) and blood transfusion (OR = 2·07, P = 0·3096). This is very contrary to what was observed in the Southern Region of Cameroon [14]. From the study, it was observed that being HIV-positive is not a predisposing factor to infection with hepatitis C virus (OR = 0·50, P = 0·5050).
Conclusions
A positive result for anti-HCV antibodies obtained with an immunochromatographic rapid strip test does not warrant that treatment should begin; false positive results are common. The presence of the disease should therefore be investigated further using a more sensitive and specific assay prior to treatment. Although PCR assays are too expensive to be incorporated into hospital settings in Cameroon, an ELISA, which is less expensive and more affordable, can be implemented to give more valid results. A negative result, too, does not exclude the presence of the infection; if symptoms persist, the infection should be investigated further with a PCR assay. It is important that diagnosis be done together with the patient's medical history, since the major risk factor for infection with the virus in the North West Region of Cameroon is the age of the individual. Despite the relatively high prevalence of HIV in the North West Region, the seroprevalence of coinfection with HCV among HIV patients is low (0·9%), owing to the different demographic nature of the two diseases; being HIV-positive is therefore not a major risk factor for infection with hepatitis C virus.
Analysis and Prediction of COVID-19 Cases in Pune Using Machine Learning Techniques
COVID-19 has caused a major global outbreak. It has brought about economic breakdown, significant overload of the health sector, disruption of the education system, loss of lives, and so forth. To ensure that the world recovers in terms of employment, finance and health, controlling the outbreak has become a top priority. In this paper, the ARIMA time series forecasting model is utilized to predict and forecast the spread of COVID-19 infection over the following week. We have taken the data of COVID-19 patients of the Pune district in India. The data is visualized in a simple-to-consume and interactive format. Overall, this paper can help experts and authorities control the COVID-19 outbreak by improving their situational awareness.
Introduction
The COVID-19 pandemic has had a humongous impact on human livelihood. Coronavirus originated in China's Wuhan province and immediately spread across the globe. The World Health Organization (WHO) announced it as a worldwide pandemic after considering its spread rate and the infection's nature and behavior [6]. The data on COVID-19 is for the most part available at the national or state level. District-level government bodies, like municipal corporations in India, are often dependent upon this data to access prediction tools for the rise or fall of new COVID-19 cases. Taking this situation into account, making such a prediction tool available to district-level government bodies will in itself have a greater effect by contributing to the hierarchy of state, country and world.
This paper utilizes the ARIMA time series forecasting model to perform predictions based on 3-day rolling data given as reference. ARIMA stands for autoregressive integrated moving average; it is a statistical analysis technique which utilizes time series data to better understand a collection of data or forecast future patterns. The dataset is provided by www.cessi.com and contains the daily confirmed, recovered and deceased coronavirus cases in the Pune district [7]. The data is taken from January 1, 2021 to December 31, 2021. Data visualization is likewise done by importing various libraries, such as Matplotlib, Seaborn and Plotly, to analyze the pattern of COVID-19 patients.
To help boost the vaccination drive across the Pune district, we have added a lookup of the vaccination centers close to the user, implemented in Python. It takes users straight to the Co-WIN portal to book their vaccine. This paper gives a good overview of the COVID-19 outbreak in the Pune district. Near-correct predictions will help the administrative offices, medical offices and residents be prepared for the coming 7 days.
Related Work
Several researchers have contributed to the areas of predicting and forecasting the COVID-19 pandemic. In [1], the authors used linear and polynomial ML models to predict and forecast the COVID-19 pandemic in India. They assessed the models using R-squared scores and error values. They split data from March 12 to October 31, 2020 into 75% for training and 25% for testing. The paper predicts the number of confirmed, recovered, and death cases of COVID-19. The authors implemented the Tableau time series forecasting approach for forecasting the future trend of these cases.
In [2], the authors took data from March 4 to May 15. They used regression analysis (exponential and polynomial), the autoregressive integrated moving average (ARIMA) model, exponential smoothing and Holt-Winters models to examine the development of COVID-19.
In [3], the authors applied the recently developed eigenvalue decomposition of the Hankel matrix (EVDHM) alongside the ARIMA model to develop a forecasting model for nonstationary time series.
In [4], a data dashboard is built with the help of data taken from dependable sources to depict it in an interactive and simple-to-consume design, with features like a chatbot, case forecasts with AI and projection assistance, and data in various formats updated daily.
In [5], the paper compares different machine learning algorithms to predict the number of positive cases in India. The effect of lockdown is taken into account while developing the ML algorithm. The paper also puts forward key measures and suggestions about systems and strategies to policymakers, considering the impact of the lockdown. The models are based on China's data and validated on India's sample. The developed ML model works in real time and gives near-correct predictions of positive cases.
Data Collection
The data is collected from [6] in CSV format. The columns of this dataset contain confirmed cases, recovered cases and death cases of COVID-19 patients on a daily basis from January 1, 2021 to December 31, 2021. Exploratory data analysis (EDA) is conducted using Jupyter Notebook to get an understanding of the data.
Data Pre-Processing
The imported data is filtered through data cleaning, duplicate removal and data formatting. The data is further partitioned into two sets, a training set and a testing set, in the proportion of 80:20.
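Because the observations are ordered in time, an 80:20 partition for time series forecasting must be chronological rather than shuffled. A minimal sketch (the function name is illustrative):

```python
def chronological_split(series, train_frac=0.8):
    """Split an ordered daily-cases series into train/test sets without
    shuffling: the first train_frac of the timeline is used for fitting
    and the remainder is held out for evaluating forecasts."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]
```

For one year of daily values (365 observations), this yields 292 training and 73 testing observations, with every test day strictly later than every training day.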
ARIMA time series model
ARIMA is a generalized form of the autoregressive moving average (ARMA) model. It combines autoregressive and moving average models, together with differencing, to build a composite forecasting model.
The AR model utilizes the dependence between an observation and several lagged observations, while the MA model uses the dependence between an observation and the residual errors obtained by applying a moving average to the lagged observations. ARIMA uses the order factors p, d, and q: p is the order of the AR term, q is the order of the MA term, and d is the order of differencing.
Mathematical Model
In a pure AR model, Y_t depends only upon its own lags; that is, Y_t is a function of the 'lags of Y_t':

Y_t = α_1 + β_1 · Y_(t-1) + ε_t

where Y_(t-1) is the lag-1 of the series, α_1 is the intercept term and β_1 is the coefficient of lag 1; β_1 and α_1 are estimated by the model.

In a pure MA model, Y_t depends only upon the lagged forecast errors:

Y_t = α + ε_t + φ_1 · ε_(t-1) + ... + φ_q · ε_(t-q)

where the errors ε_t, ε_(t-1), ... are the residuals from the corresponding autoregressive equations. In the ARIMA model, the time series is differenced at least once to make it stationary. After combining the AR and MA terms, the equation becomes:

Y_t = α + β_1 · Y_(t-1) + ... + β_p · Y_(t-p) + ε_t + φ_1 · ε_(t-1) + ... + φ_q · ε_(t-q)
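As an illustration of the AR term above, the p = 1, d = 0, q = 0 special case can be estimated by ordinary least squares in a few lines. This is a toy sketch for exposition only; the actual system described in this paper would use a library ARIMA/SARIMAX implementation (e.g. statsmodels) rather than hand-rolled estimation.

```python
def fit_ar1(y):
    """Estimate alpha and beta of the AR(1) model
    Y_t = alpha + beta * Y_(t-1) + e_t by ordinary least squares,
    regressing each value on its immediate predecessor."""
    x = y[:-1]   # lagged series Y_(t-1)
    z = y[1:]    # current values Y_t
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    beta = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = mz - beta * mx
    return alpha, beta

def forecast_ar1(last, alpha, beta, steps=7):
    """Iterate the fitted recursion to forecast the next `steps` values,
    feeding each forecast back in as the new lag."""
    out = []
    for _ in range(steps):
        last = alpha + beta * last
        out.append(last)
    return out
```

On a noise-free series generated by Y_t = 2 + 0.5·Y_(t-1), the fit recovers alpha = 2 and beta = 0.5, and the 7-step forecast converges toward the process mean alpha / (1 - beta) = 4.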
Data Visualization
We used Python libraries for data visualization. Bar and scatter plots are implemented with the use of Matplotlib, Seaborn, and Plotly.
Error Analysis
Mean absolute percentage error (MAPE) is a standard statistic that measures the prediction accuracy of a forecast. R-squared is a statistical measure that represents the proportion of the variance in the dependent variable that is explained by the model. Using auto-ARIMA, the best-fit model found for this proposed system is SARIMAX.
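Both error measures are straightforward to compute. A minimal sketch, assuming the actual series contains no zeros (since MAPE divides by the actual values):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent:
    100 * mean(|actual - predicted| / |actual|)."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot, where SS_res is
    the residual sum of squares and SS_tot is the total sum of squares
    about the mean of the actual values."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

For example, actual values [100, 200, 300] against predictions [110, 190, 300] give a MAPE of 5% and an R-squared of 0.99.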
Results
The website displays a homepage [Figure 3] which shows information about COVID-19, and an explore page that analyzes and shows 10 days of COVID-19 statistics for the Pune district [Figure 7] with 92% accuracy. The website additionally permits users to look up vaccination center availability by pin code [Figure 8] and download the data whenever required [Figure 9].
Conclusion and Future Scope
The paper shows the analysis of the gathered COVID-19 data of the Pune district. Using the ARIMA forecasting model, forecasts of the number of active cases over the 7 days following the present day are made successfully. The model obtained a prediction accuracy of 92%.
The research also aids in booking vaccination slots. This paper will help authorities contain the COVID-19 outbreak by improving their alertness. We wish to expand our project to other districts in Maharashtra state. Live forecasting will be our main focus in the future. More effective algorithms can be developed for better prediction and understanding of the virus's spread with further availability of data. We hope this paper contributes to improving the district government's response to the COVID-19 pandemic and puts forward some references for future research.
Figure 4: Monthly trend in confirmed, recovered and deceased COVID-19 cases using bar plot
Bringing Ethics into the Classroom: Making a Case for Frameworks, Multiple Perspectives and Narrative Sharing
This article argues for the need to discuss the topic of ethics in the classroom and presents five frameworks of ethics that have been applied to education. A case analysis used in workshops with educators in the field of Special Education is described, and the benefits of sharing narratives are discussed. The authors offer suggestions, grounded in education literature, for addressing ethics explicitly and for developing a critically reflective perspective toward ethical decision-making.
Introduce the Problem
Many of the concerns confronting teachers in U.S. public schools today, indeed around the globe, require ethical decision making. Teachers may experience tensions between personal beliefs, professional codes of conduct, and moral values when facing ethical issues. In a review analyzing 22 articles from Teaching and Teacher Education, Bullough (2011) found that teachers understood and responded to ethical dilemmas differently and showed different levels of ethical sensitivity. Some made ethical determinations about what was the right thing to do based on their own personal ethics and life experiences, others gave priority to social and institutional norms, and yet still others held a more malleable and thoughtful view attending to a wide range of moral prerogatives (Bullough, 2011). Clearly, teachers need tools to support consideration of multiple perspectives surrounding an issue, tools to facilitate their thinking (Cartledge, Tillman, & Talbert-Johnson, 2001).
Researchers continue to argue that the teacher education field should approach professional ethics in ways similar to other licensed professions, such as psychology, medicine, and law, with direct teaching on the topic, and explicit statements regarding the rights and privileges of clients, patients, and practitioners (Barret, Casey, Visser, & Headley, 2012). Unlike these professions, where there is more focus on direct instruction on the topic and specialized coursework for teaching ethical decision making, teachers usually engage in an experiential process in which they have to learn to make ethical decisions about instructional practices on their own (Huling & Resta, 2001; Moir, 2009). For example, preservice teachers in special education learn about the special education law and how it needs to be implemented, but many of them do not engage in discussions of specific ethical concerns that emerge from time to time and challenge their decision making. For example, they need to learn how to evaluate parents' or students' rights in relation to a school policy. In managing student problem behavior, many teachers do not develop awareness about how sometimes their inflexibility and lack of understanding of individual student needs can escalate problem behaviors. They do not learn how to balance flexibility of thinking with consistency in decision making. By engaging in conversations that are structured around various ethical dilemmas, they can learn to use tools to facilitate their thinking and evaluate their own decision making.
Our purpose in writing this article is clear and simple, while our topic, ethics, is in many instances murky and elusive. We aim to explicitly examine implicit notions of ethics, and to highlight existing conceptual frameworks of ethical conduct that have been applied to school settings. We present a fictitious case study we have used in workshops with educators to elicit critical dialogue in a non-threatening context. Our goal in the workshops is for teachers and administrators, informed by frameworks of ethics, to hone a critical perspective about their own decision making. We describe how case analysis sparks the sharing of personal narratives and explain the process of narrative self-construction. Finally, we offer suggestions for incorporating ethics in professional development for in-service and pre-service teachers.
To begin we briefly describe the current status of ethics education and argue for the need to discuss the topic of ethics in the classroom.
Ethics-What and Why
Etymologically, "ethics" is derived from the Greek word "ethos", which means "character" or "conduct". Ethics is not limited to the actions or behaviors of an individual but includes practices of a profession, an organization, a government agency or a corporation. Philosophers and moral ethicists have historically fallen into two camps when considering the topic of ethics in relation to the individual and society. One group, in the tradition of Hobbes, Locke and Rawls, places the individual, independent of society, as more important, while the other group, adherents to the likes of Aristotle, Rousseau, Hegel, Marx and Dewey, consider the society as preeminent to the individual (Graham, 2011; Stefkovich & O'Brien, 2004; Sullivan, 1986; Warnick & Silverman, 2011). In a school setting, ethics includes both an individual's actions and the school community's choice to act or govern (Sullivan, 1986).
When deliberating the definition of the term "ethics", individual teachers, parents, and administrators each have a unique view according to their own lived experiences and positioning (Davies & Harre, 2001). Some rely on words such as "right", "moral", "values"; others have been more inclined towards "policy", "code of conduct", and "professionalism". But all generally seem to agree that educators need to be ethical and that educators should have access to training in understanding the ethical issues involved in decision making. In a review of teacher curricula from 156 colleges and universities, Glanzer and Ream (2007) found that only 9% of teacher education programs offered ethics courses as program requirements or electives, compared with 71% of business programs, 60% of nursing programs, and 51% of social work programs. Although Glanzer and Ream warned about generalizing this conclusion beyond the specific sample in their study, their findings suggest ethics is not well emphasized in education programs as compared to many other professional schools, such as social work, counseling, law, and medicine.
Teachers, as many other professionals, often face dilemmas that are complicated and lead to ethical challenges. Although educators (teachers and administrators) in general are supposed to behave ethically, many of them have not had the benefit of specific coursework or training in their teacher preparation programs that would lead to ethical preparedness. Yet ethics, whether named or unnamed, underpin every aspect of school life, from decisions about discipline to teacher-talk in the staff room. In our view, dialogue about ethics must be pervasive, not reserved for special occasions or framed as a "virtue of the month." All teachers must learn about and commit to instructional and behavioral practices that foster an ethical school culture that embraces and promotes the core values of respect and responsibility, integrity and honesty, and care for self and others.
In the following section, we summarize five conceptual frameworks that have been presented in the literature as tools for understanding the intersection of ethics and education.
Conceptual Frameworks for Ethics in Education
Five frameworks of ethics have been identified in education: (1) the ethic of care, (2) the ethic of justice, (3) the ethic of critique, (4) the ethic of profession and (5) the ethic of community (Furman, 2004).
(1) The ethic of care is based on the tenet that people are relational, interdependent beings. As infants, humans begin life thoroughly dependent upon the care of others for survival; human babies do not emerge independent and self-sufficient. The ethic of care rejects the idea that the goal of healthy child development is to become independent. Rather, advocates of the ethic of care view people as both relational and capable of autonomy throughout the lifespan (Held, 2006).
(2) The ethic of justice includes both an individual's choice to act justly and the school community's choice to act or govern justly (Sullivan, 1986). The ethic of justice provides a framework for people to solve problems by first establishing what is just and fair for the individual and for the school community.
(3) The ethic of critique has to do with questioning, asking why things are the way they are. One might ask: is bureaucracy, hierarchy, or complacency thwarting progress in a school? To embrace the ethic of critique requires a willingness to reflect upon social justice, upon issues of access, inclusion, and distribution of resources (Giroux, 2003). The ethic of critique illuminates flaws, but typically stops short of offering solutions.
Using the best interest of the child as their touchstone, Shapiro and Stefkovich (2001) merged the three aforementioned ethical models of critique, justice and care to create a new ethical paradigm, (4) the ethic of the profession. The paradigm of the profession focuses on moral aspects and questions specific to schools, much like other professions have done, e.g., medical ethics, legal ethics, and business ethics (Stefkovich & O'Brien, 2004; Warnick & Silverman, 2011). In this model, educators are meant to consider their professional principles, codes and standards, such as Council for Exceptional Children (CEC) standards and Common Core standards, and to position the "best interests of the student" as paramount (Shapiro & Stefkovich, 2001).

Furman (2004) noticed that most of the published work in the ethics literature focusing on educational leadership paid little attention to "the communal processes that are necessary to achieve the moral purposes of schooling in the twenty-first century" (Furman, 2004, p. 220). Missing in this literature, in her opinion, was discussion of community, not in the sense of a kind of micro-society, but rather community as the hub of iterative communal processes. Through the process of sharing information, insights, and experiences, the individuals and groups within the community benefit from one another. Furman built a framework with (5) the ethic of community at its center and incorporated the three most widely accepted ethics paradigms (critique, justice, and care), along with Shapiro and Stefkovich's ethic of the profession. Table 1 summarizes these frameworks, their foci, and the main ideas they promote, with specific guidance for teachers. These frameworks provide multiple ethical lenses through which educators can view the issues they encounter, understand different perspectives and thus take more thoughtful actions.
Case Study and Discussion
We have used the following case study during conference presentations to elicit constructive dialogue around the topic of ethics situated in the context of schooling. In these sessions, rich discussions have emanated from this scenario. The storyline highlights the differing perspectives of a special education teacher and a school principal about the consequences of a student's behavior.
Ms. Green is a caring special education teacher who just joined Lincoln Elementary School, located in a suburb in a southwestern state. She has been assigned to teach a self-contained classroom for second and third grade children identified as having emotional and behavioral disorders (EBD). EBD is one of the special education categories characterized by one or more of the following characteristics: an inability to build or maintain satisfactory interpersonal relationships; an inability to learn; consistent or chronic inappropriate types of behavior or feelings under normal conditions; a pervasive mood of unhappiness or depression; physical symptoms or behaviors associated with personal or school problems. A continuum of educational services is necessary to appropriately meet the needs of students who are identified with EBD. Some students with EBD can be served in regular education classrooms with additional supports; others may need one-on-one, personalized programming in a self-contained setting for all or part of their school day.
In Ms. Green's previous job, she taught students with EBD from first through fourth grades. The principal of Lincoln Elementary, Mr. Driscoll, has a very strict view about misbehavior. He has no tolerance for disruptive behavior in classrooms, the cafeteria, the playground, or in the hallways. He believes his primary responsibility as principal is to create a school climate that provides physical safety for all children. Any student who is perceived to be a threat to safety is likely to be removed from the school and sent to an alternative setting (depending on the severity of the disruptive behavior). Student behaviors that might lead to immediate suspension or expulsion from Lincoln Elementary include: seriously disrupting classroom instruction; endangering other students, teachers, or school officials; or damaging property. A student who has received a prior suspension, but continues with disruptive behavior, is moved to an alternative setting to avoid further risk. Mr. Driscoll's view of discipline is based on the use of punitive approaches, sometimes without any provision for positive supports.
Ms. Green's view differs significantly from her principal's. She attended a teacher preparation program at a recognized university that focused on positive behavior supports and differentiated instruction. She considers one-on-one support to be an important component of specialized intervention. She attributes her success with many of her students with EBD to her ability to provide them with individualized attention and interventions when needed.
Sophie, a third grader who recently moved to the area from another state, has only been in Ms. Green's class for three weeks. Sophie gets frustrated with math when she does not understand the directions to complete a task; she shows her frustration by throwing objects. Sophie's mother is concerned about her daughter and repeatedly calls Ms. Green to talk about her child's progress. Sophie's mother is an involved single parent who is worried about her daughter's adjustment in her new school. She had called the principal's office earlier to complain about the location of the bus stop, and requested that the bus stop closer to her residence. Sophie had attended a Title I school where she qualified for tutoring after school. Lincoln Elementary School has no funds to provide after-school tutoring. Ms. Green has noticed that Sophie is a little anxious and gets frustrated easily in math. But if Ms. Green spends an extra 15 minutes with her, either during recess or after school, Sophie seems to grasp the concepts, and she has been making adequate progress. Ms. Green gives Sophie one-on-one attention and helps her understand what was covered during the day and what she needs to do for homework.
One week when Ms. Green was out sick, a substitute teacher took over the class. Not knowing Sophie, the substitute did not fully understand how to deal with her. She did not provide additional supports to Sophie in math. Sophie got frustrated and started throwing her books and papers on the floor. The substitute wrote her up and sent her to the principal's office. Sophie got suspended! The principal, who had been getting upset with what he construed to be complaints from Sophie's mom, decided he needed to transfer Sophie to an alternative school within the same district. After getting over the flu, Ms. Green came back the following week and learned that Sophie was going to be moved to the alternative school. Ms. Green tried to explain to the principal that she was willing to work with Sophie on a one-on-one basis. The principal did not agree with this option and told Ms. Green that she had other students to focus on. What do you think Ms. Green should have done? How could she have better handled this situation?
This case has spurred rich discussion specific to the events described in the text, and perhaps more importantly it has served as a springboard for participants to share and reflect on their own personal narratives. Sophie's story has spawned a range of topics concerning (a) policy issues (zero tolerance for disruptive behavior); (b) the principal's attitude and actions (removal of the student from school to an alternative setting); (c) the teacher's job protection (Ms. Green's job security); and (d) concern for the student (what is beneficial for Sophie).
Listening to the varied perspectives of fellow attendees, participants apprehend their own capacity to view a subject through different lenses and from different angles. For example, in one session a participant offered the opinion that Ms. Green needed to follow the school policy and the student, Sophie, needed to have a negative consequence for her disruptive behavior. A second attendee concurred with this view. Not surprisingly, the two participants who articulated this view both worked in administrative roles, and were thus familiar with the responsibility of adhering to established school/district policies. However, a third attendee questioned whether an official policy was actually in place at the school/district, as the case study had not specified. A number of participants interjected the opinion that, whether the principal was following policy or acting on his own accord, Ms. Green might put herself in danger of losing her job by arguing with the principal, as he was her boss. Still another participant countered that if Ms. Green succumbed to the principal's decision without arguing for what she believed to be best for the child, she would have to carry the burden of feeling she had failed to support the rights of the student. Each of the diverse views expressed by workshop participants generated lively and productive dialogue, illustrative of the complexity of decision-making. Through discursive collaboration, participants created alternative storylines for the characters.
The stakes are low when people examine cases in which they are not personally involved, so the case study approach provides a good starting point to prompt reflection. In our experience, this case analysis typically segues into the sharing of individual narratives with little or no prompting. In one session, a participant who was a practicing teacher shared a story in which she described herself in a situation very similar to Ms. Green's. In her case, the student was a boy with whom she had established a positive rapport, something not many other teachers had been able to do. The principal wanted the boy removed from this teacher's full inclusion classroom and sent to an alternative school. The teacher recounted how the principal and the school psychologist told her what she should say in the upcoming child-study-team meeting with the boy's mother. The teacher related how she surprised the principal and other school staff, as well as herself, when she asserted her view that the boy was making progress in her class, that she wanted to continue working with him, and that she recommended he not be removed to an alternative setting.
Admittedly, conference workshops occur in a time and space removed from the school workplace with its competing demands, allegiances, and power hierarchies. Participants may in fact readily share their personal narratives in conference workshops because they are less vulnerable to the censure of superiors and colleagues than they might consider themselves to be in a professional development session conducted at their school or in their district.
Through telling and sharing stories, humans make sense of themselves and the world around them (Mishler, 2006; Seidman, 2013). Sharing stories in a safe setting, away from institutional hierarchy, affords the opportunity to delve into complex issues in a low-threat context. People can position themselves and their actions in their own terms. Telling stories allows one to narratively self-construct. Wortham and Gadsden (2006) specified four ways in which narrative positioning occurs and how these principles relate to a narrator's act of self-construction. First, at the most basic level, a narrator, by telling an autobiographical story, positions herself as having experienced some event or sequence of events in the past. In this regard, the narrator is in essence saying "this happened and this is what I did, saw, felt…" to the listener (Wortham & Gadsden, 2006, p. 320). The second tool a narrator employs is "voicing". It is beyond the scope of this article to discuss Bakhtin's construct of voicing in detail (as cited in Ribeiro, 2006). Suffice it to say that the teller of the tale positions herself and the other actors involved in the context of the story by describing them as certain kinds of people, types that are socially and culturally recognizable to the listener. (The concept is akin to referencing or bringing in the "voices" of others who are outside the immediate story.) Thirdly, the narrator evaluates the actions of the people described in the narrative, including the narrator herself, and thereby elaborates her positioning of all involved in the context of the event's unfolding. The fourth way the narrator positions herself is in the moment of the telling itself. The narrator, by choosing to tell a particular story in a particular way to a particular audience, positions herself as well as her interlocutors (the presenters and the other workshop participants). For example, the teacher who related the personal story about standing up for what she considered to be the best interests of the child positions and constructs herself as a virtuous person by the act of telling. She also positions the interlocutors as people who would appreciate and not scoff at such a tale (Davies & Harré, 2001; Schiffrin, 1996; Wortham, 2006; Wortham & Gadsden, 2006).
We value case analysis and narrative sharing in no small part because we see these activities as participatory and non-didactic. Yet, we are not without opinions. So in the next section we offer a number of suggestions, malleable to each reader's situated context.
Suggestions
The first step toward making ethical choices in school settings requires awareness of the relationship between actions and ethics. Since ethical issues are complex and individuals bring their own values and prior experiences to each encounter, people must also recognize the need to consider issues and dilemmas from multiple perspectives. Such awareness results from engaging in open, honest, and explicit discussion.
Understanding Ethical Dilemmas
The case study highlighted the tension between the ethic of care and the ethic of the profession. Ms. Green was caught between her desire to care for the student and her wish to follow the established school rules and policies for misbehavior. In order to develop a common understanding about ethical dilemmas, Shapira-Lishchinsky (2011) suggests building explicit ethical knowledge among teachers, establishing shared ethical guidelines, and embedding dialogue as a normal course of action in the school culture. When teachers view themselves as powerless, without adequate tools for making tough decisions, and without administrative support, they find their ability to adhere to their ethical and moral obligations compromised (Shapira-Lishchinsky, 2011). Teachers need strong administrative support and opportunities for professional development and capacity building (Johns, McGrath, & Mathur, 2008).
Education is more than imparting knowledge of subject matter; education also influences, among other things, the development of ethical decision making. The adults in schools play powerful roles in children's development; these adults (the teachers, administrators, and staff) are the "experts," the guides who can lead students beyond their current state of understanding and mastery to the next, more advanced level, a destination students can reach with assistance (Vygotsky, 1990). Thus, it is incumbent upon the adults in schools to model ethical practices and to help students construct a moral compass guided by fairness, honesty, integrity, civility, compassion, constancy, and responsibility (Campbell, 2008). If students like Sophie receive only punishment (punitive approaches) and confrontation from the principal for breaches of discipline, they are more likely to develop patterns of resistance and oppositional behavior. If Ms. Green, a teacher, cannot articulate her side of an argument, or the principal does not pay heed to what the teacher has to say, Sophie's educational goals (which include her social and emotional development) will continue to be compromised.
In addition to understanding the principles behind behavioral interventions and strategies, educators need to assume responsibility for selecting a specific intervention for implementation. For example, the principal considers himself to be accountable when his decisions are guided by the ethics of profession and justice. He believes he is responsible for providing a safe physical environment (ethic of the profession) to all his students and therefore adheres to the policy of zero tolerance for behavioral infractions (ethic of justice). On the other hand, Ms. Green believes in using a continuum of positive supports that includes individualized behavioral interventions (ethic of care) and reinforcement for positive behavior before using punishment. Both the principal and the teacher have reason to believe they are ethical in their own way! Their scenario illustrates the importance of understanding the different frameworks of ethics that guide teachers' and administrators' decision making and its short- and long-term effects.
Talk about Ethical Decision Making
Some teachers and administrators shy away from discussions of ethics due to a commonly held belief that morality is about values, and not about facts. Debates about opinion and fact, about subjectivity versus objectivity, are not unique to our contemporary context; such philosophical questions go far back in time (in Western philosophy such questions have been deliberated in the written record since Plato's dialogues; see T. West & S. West, 1998). Even though people do not share all the same values, they do share quite a number of basic values: that it is wrong to kill, steal, cheat, or injure another, to name just a few. Sadly, the things people disagree about tend to get all the attention. If there is agreement that education should focus on the development of the whole child, and consensus that every child has a right to an education, there should also be agreement that all students, including those with EBD, need access to a positive climate for learning and an opportunity to learn.
So, what does ethical decision making look like when teachers are dealing with problem behaviors? When teachers who use ethical decision making notice a problem behavior, they do not immediately think about removing the problem by removing the student from the instructional context. Teachers grounded in ethics invest in prevention (proactively keeping problems from occurring) and pre-correction (anticipating and correcting in advance, prior to the occurrence of the problem behavior); they evaluate whether the student needs some signals to engage in positive behavior, or additional reminders, prompts, and cues. Perhaps a student who is not confident in moving to the next step in learning needs reinforcement for successive approximations and additional prompting. Some students may need to learn self-regulation and anger management skills so they can recognize their own triggers and respond to the early signs of tensions and pressures in socially appropriate ways. Instead of receiving criticisms and ultimatums, many students with challenging behaviors benefit from positive attention and feedback. In some rare instances when disruptive behavior warrants punishment, the ethical teacher uses reprimands or response cost (e.g., loss of privilege or points) carefully and judiciously and in conjunction with positive reinforcement (see Table 2).
Table 2. Steps and sample strategies (partial)

Step 1: Plan for prevention. Sample strategies: use a tiered approach; use pre-correction (anticipate what needs to be corrected ahead of time); invest in planning, arranging the classroom, and scheduling; establish rules and routines; establish learning structures (cooperative learning, small groups); establish contingencies.

Step 2: Focus on antecedent control.

Ethics help teachers identify possible courses of action and assess the value of pursuing these actions. Ethical teachers deliberate on their decisions; they evaluate, review, and reflect on their practices and policies, and refine and improve them accordingly. As they engage in ethical decision making, they learn to view the problem from others' perspectives and invite others to give constructive feedback. Professional development opportunities can be structured to assist in building educators' capacity for making ethical decisions.
Engage in Ongoing Professional Development
The purpose of professional development is to support educators in their practice. Therefore, the professional learning opportunities offered (or required) for teachers should respond to their needs and suit their contexts. In a collaborative learning environment, teachers can generate narrative cases about the challenges they face related to the ethical dimension of teaching and collectively brainstorm alternatives (Strike & Soltis, 2004). Participants could be given a list of values and asked to prioritize them by importance to the healthy functioning of the school. They could then be given a list of behaviors and asked to select which behaviors are tied to the values they have chosen. Such an activity would provide teachers an opportunity to have input (and instill ownership) on assorted school issues such as confidentiality, protection and proper use of property and assets, policies on discrimination and harassment, and the use of technology and the Internet. Educators, like students, need engagement as well as opportunities for reflection so they can cultivate their own professional learning communities and thus thoughtfully collaborate with colleagues to build a strong ethical culture in their schools.
Establish Mentorship Opportunities
Atjonen (2012) conducted a study with 201 pre-service teachers who were asked to describe, from an ethical viewpoint, both positive and negative mentoring experiences during their student teaching. Results indicated that teacher candidates viewed their mentorship as ethically successful when they had a mentor who gave them feedback, was student-centered, was fair and just, gave timely advice, gave enough support and listened carefully, was both flexible and demanding, and was a positive person. In an ethically unsuccessful mentorship, they viewed their mentor as someone who was authoritative, refused to give feedback, treated student teachers disrespectfully, was hard and critical, interrupted lessons with insufficient reason, discussed confidential issues with outsiders, and neglected certain basic supervisory tasks. The findings of this study highlight the importance of positive and ethical mentoring of clinical practice. The study indicates the need for mentor teachers to be trained in ethical practices of supervision. The implications of such training go beyond student teaching. All teachers, veteran and new, can benefit from ethical mentoring practices. All can be reflective teachers who continue to develop their decision making skills, professional knowledge, and performance.
Include Ethics in the Classroom
School culture has a powerful effect on the behavior of the members of the school community, including teachers. In a cooperative school context, teachers can have agentive roles, identifying situations for which they need to expand ethical understanding in order to bolster their sense of preparedness. They can generate a personal skills inventory and self-determine: Do they find themselves ethically unprepared when they are interacting with students from cultural backgrounds different from their own? Do they find themselves challenged when they are collaborating with their peers? Do they have difficulty engaging with parents, or managing student behavior? To what extent do they demonstrate the capacity to act confidently and sensitively? Which aspects of their knowledge and practice need to be strengthened? Similar analysis is needed for student behaviors. Students too can actively participate in generating their own needs assessment for themselves and their school. Where do students see issues of ethics? Are they concerned about cheating, bullying, divisive cliques/gangs, or pressures to skip school, to name a few possibilities?
If ethics were referenced frequently throughout the school (in the classroom, in the staff lounge, in the principal's office, in conversations among teachers, students, and staff), then everyone would have a shared understanding of the ethics valued in the school and be better prepared not only to abide by these ethics, but also to handle problems when they arise.
We support the use of a case analysis framework as a means for generating dialogue about ethics in decision making (Strike & Soltis, 2004). The case analysis approach does not produce absolute answers; rather, it provides a springboard for discourse about alternative possibilities (see Clandinin & Huber, 2010 for discussion of the affordances of narrative inquiry, which is typically a more personalized approach to inquiry than case analysis).
Case analysis can help increase students' sensitivity to issues and objectivity in ethical decision making (Richert, 2012; Strike & Soltis, 2004; Warnick & Silverman, 2011). Instead of delivering a right or wrong answer, this approach provides a framework for analyzing a situation, understanding the context, considering alternative perspectives surrounding the issue, and building a consensus along ethical dimensions. The process of case analysis is presented in Table 3.
Review Rules and Regulations
Schools should make the review of regulations an annual practice. It is important to reevaluate the relevance, adequacy, and appropriateness of policies, because schools and societies are constantly evolving. New guidelines and procedures may be needed to promote the ethical use of media (including, but not limited to, texting, Internet use, and social media) on and off campus (Campbell, 2008). Guidelines must be developed and communicated to teachers, students, and other stakeholders regarding school use of networking tools. Everyone should be involved in discussions about privacy, respect, and protection in online environments. Many public schools use learning management systems (LMS); schools can also use blogs to guide interactive discussions about the ethical treatment of oneself and others, as well as to send alerts when issues emerge.
Schools should initiate a community dialogue, and invite parents, students, and educators to come together to discuss community and school needs, to exchange ideas, and to share suggestions. If a need emerges for additional policies to respond to new problems in the school, the leadership team should work with community members, apprise them of the situation, and take their input when crafting new policies. The school leadership should highlight the goals of any new policies and make clear links to existing policies and procedures, and to the mission of the school and the district.
Conclusion
Perhaps we should have prefaced this discussion about ethics in schools by clarifying just what the purpose of public education is. Noddings (2003) wondered whether the purpose of schooling has become purely economic: "… to improve the financial condition of individuals and to advance the prosperity of the nation. Hence students should do well on standardized tests, get into good colleges, obtain well-paying jobs, and buy lots of things. Surely there is more to education than this?" (Noddings, 2003, p. 4). It is our view that the purpose of education is to contribute to the person's journey toward responsible selfhood. Schools have an integral role to play in each student's journey to selfhood, including those students with EBD. In order to accomplish such noble goals, in addition to being "physically safe", schools must be "emotionally safe" places where caring peer relationships are fostered and supported (e.g., through buddy programs, mentoring, conflict resolution classes); where an ethical culture of community (which includes critique, justice, care, and professionalism) prevails; and where students participate in ethical discussions and activities that empower and encourage them to take responsibility for their behavior. Schools need to create environments conducive to developing values of caring and justice rather than only focusing on catching and punishing transgressions or adding surveillance cameras, more security guards, better metal detectors, more locks, shorter lunch periods, and more rules.
This article highlighted the need to establish ethical decision making in schools and recommended creating a climate that rouses and inspires the moral and ethical dimensions of living, learning, and teaching. Reasons to explicitly discuss ethics in the classroom, as well as challenges to doing so, have been noted. The ethics of critique, justice, care, profession, and community can be tapped as a wellspring for school administrators and teachers who endeavor to create rich ethical environments that nourish the development of the hungry minds and bodies of children throughout their K-12 schooling. Structured professional development opportunities can be organized to help teachers develop an understanding of the various frameworks and sharpen their own decision making. Educators need tools to generate and evaluate decisions as they face familiar and unfamiliar ethical dilemmas in their future professional lives.
Table 1. Ethical frameworks in teacher education
The Impact of Institutional Investors on Firms' Accounting Flexibility: Evidence from Jordan
In this paper, the impact of institutional investors on the firm's accounting flexibility in generating discretionary accruals is examined. For this purpose, a balanced panel cross-sectional regression model for all 70 Jordanian manufacturing companies listed on the Amman Stock Exchange (ASE) over eleven years, from 2000 to 2010, was utilized. In the regression model, discretionary working capital accruals (DWCA), a proxy for earnings manipulation, was set as the dependent variable. The independent variables were: the percentage of institutional ownership of common stock in the firm, as a proxy for institutional investors (IIP); managerial ownership (MAO); firm size (SIZE); leverage ratio (LEV); and return on sales ratio (ROS). The econometric model was estimated. The results of the various analyses and tests carried out in this study confirm the monitoring role of institutional investors and their role in alleviating the practice of earnings management.
Introduction
This paper seeks to investigate the effect of institutional ownership on earnings manipulation activities in Jordanian industrial firms. To achieve this aim, the possibility of an association between institutional investors and the accounting flexibility available to firms to generate discretionary accruals was investigated. The importance of this study lies in its attempt to cover a lack of studies on the impact of one component of corporate governance on earnings management practices in developing countries such as Jordan.
Many previous studies concluded that earnings management is a familiar practice in companies (e.g., Miglo, 2007; Bissessur, 2008). It is a well-known fact that managers use their authority in choosing accounting methods in order to maximize their own benefits. Most of the studies that addressed earnings management focused on the goals that drive managers to manage earnings and on the methods of earnings management, and most of the time ignored the company-specific characteristics that affect a company's ability to manage earnings, assuming constant ability across companies. Few studies have investigated the effect of company-specific characteristics on companies' ability to practice earnings manipulation (e.g., Francis et al., 1999; Klein, 2002; Chung et al., 2002).
Regardless of whether they do so or not, institutional owners have the ability, potential, and motivation to monitor managers in order to reduce managers' ability to pursue their own benefits at the shareholders' expense. This study examines the impact of institutional ownership on a company's ability to generate discretionary accruals in the presence of a set of control variables whose significant impact on earnings management practices previous studies have confirmed, namely: managerial ownership, size, leverage, and profitability.
Using balanced panel data for all 70 Jordanian manufacturing firms listed on the ASE between the years 2000 and 2010, the study estimated discretionary working capital accruals, a proxy for the firm's available accounting flexibility, using the Jones (1991) model. The model of the study was estimated using the regression-based pooled ordinary least squares (OLS) framework, and a set of tests and analyses was conducted, namely: descriptive analysis, a univariate test of mean differences, correlation analysis, and regression analysis.
The findings of this study provide strong evidence of the impact of institutional ownership on the practice of earnings management. Institutional investors were found to be capable of reducing managers' tendency toward the exercise of earnings management. This result corresponds with the fact that institutional ownership plays a surveillance role over managers, and in order to reduce the agency problem, institutional investors have to exercise this role efficiently and effectively.
Related Literature
In their study, Lin and Manowan (2011) investigate the effect of institutional investors on earnings manipulation. They differentiate between two scenarios of earnings management, income decreasing and income increasing; for income-decreasing earnings management, they did not find any statistically significant impact of institutional ownership, an inevitable result of differences in institutional investors' time horizons and nature. To overcome this hurdle, the researchers classified institutional investors according to their nature into transient investors (investors who own diversified, high-turnover portfolios) and dedicated investors (investors who own concentrated, low-turnover portfolios). On this basis, the study concluded that there is a direct, statistically significant correlation between transient investors and earnings management, and an inverse, but not statistically significant, relationship between dedicated investors and earnings manipulation. The study also concluded that, because of differences in institutional investors' nature, we cannot treat them as a homogeneous group. Rebai (2011) investigated the impact of institutional investors on earnings management for 123 American firms. He concluded that, while transient investors (investment funds) inspire managers to spend less on R&D, bank holding companies and long-term institutional investors (pension funds) are passive. Mitra (2002) investigates the ability of institutional investors to limit the practice of earnings management by companies. The study examines the relation between institutional investors and the flexibility available to firms in generating discretionary working capital accruals. The study concluded that institutional investors significantly reduce managers' flexibility in generating discretionary working capital accruals, and that concentrated institutional ownership reduces the tendency of management toward earnings management. Moreover, while the study found that institutional investors do not have the ability to reduce earnings management practices in S&P 500 firms, they have a significant ability to reduce earnings manipulation activities in other companies. Bushee (1998) studied the role of institutional investors in mitigating the managerial tendency to abandon long-term investments in order to achieve current earnings targets. The study concluded that firms are not expected to manage earnings in the presence of a high percentage of institutional investors, which points to the monitoring role played by institutional investors compared to individual investors. The study also concludes that a high percentage of transient institutional investors increases the possibility of earnings management through reduced spending on R&D in order to increase profits. On this basis, the study indicated that the presence of a high percentage of transient institutional investors in a firm leads to myopic investment behavior, resulting in the sacrifice of long-term investments in order to meet current profit targets.
Data
Variables utilized in the study, their definitions, and measures are presented in Table 1. The study employed econometric analysis using balanced panel data for all 70 Jordanian manufacturing companies listed on the ASE for the period 2000-2010, resulting in 770 firm-year observations. The data for the firms in the sample were derived from the ASE. For the econometric analysis, the study adopted discretionary working capital accruals (DWCA) as a proxy for the accounting flexibility in generating accruals. The independent variable of interest is institutional investor ownership (IIP), measured by dividing institutional ownership of common stock by total common stock. Based on previous studies, four variables that affect the availability of accounting flexibility in generating discretionary accruals were adopted as control variables, namely: managerial ownership (MAO), firm size (SIZE), leverage (LEV), and profitability (ROS).
Dependent Variable
In the regression model, discretionary working capital accruals (DWCA), a proxy for earnings manipulation, was set as the dependent variable, measured by subtracting non-discretionary current accruals from total current accruals.
Following Jones (1991), total working capital accruals are modeled as a function of the change in sales net of the change in accounts receivable, plus gross property, plant and equipment. The following model was therefore estimated:

TWCA(i,t) = α + β1 [ΔS(i,t) − ΔAR(i,t)] + β2 PPE(i,t) + ε(i,t)    (1)

where TWCA is total working capital accruals, i indexes the firm, t indexes time, Δ denotes the annual change, S is annual sales, AR is accounts receivable, and PPE is gross property, plant and equipment.

To reduce the potential for heteroscedasticity, the variables in equation (1) have been scaled by total assets, and the following equation has been estimated:

TWCA(i,t)/TA(i,t−1) = α [1/TA(i,t−1)] + β1 [ΔS(i,t) − ΔAR(i,t)]/TA(i,t−1) + β2 PPE(i,t)/TA(i,t−1) + ε(i,t)    (2)

where TA(i,t−1) is total assets at the beginning of year t.

Equation (2) is estimated separately for each sample company, and DWCA is computed from the residuals of these regressions: using the coefficients estimated in equation (2), the non-discretionary component of working capital accruals is removed. The remaining accruals, the discretionary working capital accruals (DWCA), are attributed to earnings management. The following equation has therefore been estimated:

DWCA(i,t) = TWCA(i,t)/TA(i,t−1) − { α̂ [1/TA(i,t−1)] + β̂1 [ΔS(i,t) − ΔAR(i,t)]/TA(i,t−1) + β̂2 PPE(i,t)/TA(i,t−1) }    (3)

where α̂ and β̂ are the coefficients estimated in equation (2).
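The two-step procedure in equations (2)-(3) can be sketched in code. The snippet below is an illustrative pure-Python implementation, not the authors' code: it fits the scaled Jones regression for one firm by ordinary least squares (via the normal equations) and returns the residuals as that firm's DWCA series. All data values used with it are hypothetical.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small square system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    # beta = (X'X)^(-1) X'y via the normal equations.
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

def discretionary_accruals(twca, dS, dAR, ppe, ta_lag):
    # Equation (2): scale everything by lagged total assets, fit a firm-level OLS.
    y = [t / a for t, a in zip(twca, ta_lag)]
    X = [[1.0 / a, (s - ar) / a, p / a]
         for s, ar, p, a in zip(dS, dAR, ppe, ta_lag)]
    beta = ols(X, y)
    # Equation (3): DWCA = actual scaled accruals minus the fitted
    # non-discretionary component, i.e. the OLS residual.
    return [yi - sum(b * xij for b, xij in zip(beta, xi))
            for yi, xi in zip(y, X)]
```

With noise-free synthetic data the residuals are numerically zero; in real data the residuals form the discretionary component attributed to earnings management.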
Independent Variable of Interest
The independent variable of interest is institutional investor ownership (IIP). Previous studies on the impact of institutional investors on companies reached mixed results. Bushee, B.J. (1998) concluded that firms with a high percentage of institutional investors tend to spend more on R&D. However, if the firm's institutional investors engage in momentum trading, the likelihood that the firm reduces its R&D spending to increase profitability rises. Bange and Bondt (1998) concluded that increasing company profits by reducing R&D spending is less likely to occur with a high percentage of IIP. Because institutional investors spend more on information search, Shiller and Pound (1989) and Lev (1988) concluded that there is an inverse relationship between managers' aggressive practice of earnings management and the percentage of institutional investors in the company.
In terms of institutional investors' influence on company performance, mixed results have been documented: Smith, M. (1996) concluded a direct correlation between firm performance and institutional investors, while Duggal and Millar (1999), Facio and Lasfer (2000), and Mizuno, M. (2010) did not support this result, concluding that there is no statistical evidence that IIP influences company performance.
Control Variable
Four control variables with a significant impact on earnings management practices were adopted. These variables are:
Managerial Ownership
Managerial ownership (MAO) is defined as the percentage of equity shares owned by managers. Previous studies have confirmed the relationship between managerial ownership and earnings management (hence, the availability of accounting flexibility). Warfield et al. (1995) found that as managerial ownership increases, the possibility of earnings management decreases, because the higher the managerial ownership percentage, the greater the conformity and harmony of interests between managers and shareholders, and the greater the managers' reliance on the long-term value of the firm rather than short-term profit.
Managerial ownership was adopted in the study model to take into consideration managers' tendency to generate accruals in the presence of institutional ownership, and it is expected to have a negative impact on the firm's accounting flexibility.
Firm Size
In their study, Kim, Y. et al. (2003) concluded that the impact of firm size on earnings management differs: small firms engage in earnings management more than large firms do in order to avoid disclosing losses, while larger firms are more aggressive than small firms in managing earnings to avoid reporting decreases in earnings.
Size (SIZE), defined as the logarithm of the firm's total assets, is expected to be negatively associated with the firm's accounting flexibility.
Leverage
Leverage (LEV), measured by dividing total liabilities by total assets, measures the risk to the company's ability to fulfill its obligations. The higher the debt ratio, the closer the firm is to violating its debt obligations. Defond and Jiambalvo (1994) found that as a firm approaches debt-covenant violation, the likelihood of engaging in earnings management increases in order to avoid or delay the violation. Duke and Hunt (1990) also concluded a direct correlation between the debt ratio and the inability to fulfill debt obligations. Therefore, a direct relationship is expected between leverage and the firm's available accounting flexibility to generate discretionary accruals.
Profitability
McNichols (2000) documented that including a profitability variable in the multiple regression model increases its explanatory power in explaining changes in discretionary accruals, indicating that more profitable firms are directly correlated with positive discretionary accruals.
Profitability (ROS), defined as earnings before interest and tax divided by annual sales, is adopted to control for the effect of inventory stockpiling arising from unusual business operations.
Estimation Model
To test the potential impact of IIP on DWCA, the study employed a cross-sectional regression technique. The linear regression model can be written as follows:

DWCA(i,t) = α + β1 IIP(i,t) + β2 MAO(i,t) + β3 SIZE(i,t) + β4 LEV(i,t) + β5 ROS(i,t) + ε(i,t)    (4)

where DWCA is the discretionary working capital accruals, a proxy for the firm's accounting flexibility in generating discretionary accruals, for the i-th cross-sectional firm in the t-th time period, with i = 1, 2, 3, …, 70 and t = 1, 2, 3, …, 11; α is a constant; the β's are the unknown parameters of the firm characteristics included in the model; IIP is institutional investor ownership, defined as the ratio of institutional ownership of common stock to total common stock; MAO is managerial ownership, defined as the percentage of common stock owned by managers; SIZE is firm size, defined as the logarithm of the firm's total assets; LEV is the leverage ratio, defined as the ratio of total liabilities to total assets; ROS is the profitability measure, defined as income before interest and tax divided by annual net sales; and ε is the error term.
The next step is to split the firms based on the sample median of the institutional investor variable and conduct a univariate test of mean difference to evaluate the potential impact of institutional ownership on the extent of earnings management practices.
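The median split and mean-difference comparison can be sketched as below. This is an illustrative pure-Python snippet, not the study's actual test code: a Welch-type t statistic is used here as one common choice, and hypothetical data stand in for the ASE sample.

```python
from statistics import mean, variance
from math import sqrt

def median_split_ttest(iip, dwca):
    """Split firms at the sample median of IIP and compare mean DWCA
    across the low-IIP and high-IIP groups (Welch t statistic)."""
    med = sorted(iip)[len(iip) // 2]          # upper median, for simplicity
    lo = [d for i, d in zip(iip, dwca) if i < med]
    hi = [d for i, d in zip(iip, dwca) if i >= med]
    # Welch standard error: sample variances divided by group sizes.
    se = sqrt(variance(lo) / len(lo) + variance(hi) / len(hi))
    return mean(lo), mean(hi), (mean(lo) - mean(hi)) / se
```

A large positive t statistic here corresponds to the study's finding that low-IIP firms have higher mean DWCA than high-IIP firms.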
Based on previous studies, the effect of institutional ownership on accounting flexibility is expected to be adverse. Since a firm's accounting flexibility to generate discretionary accruals is directly and significantly correlated with the extent of earnings management, the study predicts an inverse correlation between institutional ownership and the firm's accounting flexibility in generating discretionary accruals.
Descriptive Analysis
Table 2 presents the results of the descriptive analysis for the variables employed in the study. The results show that the mean and median of DWCA are 4.28% and 3.3367%, respectively, with a distribution ranging from a minimum of 2.91% to a maximum of 58.28%.
The average institutional investor ownership in Jordanian industrial companies is 19.95%, while the average managerial ownership is 9.53%. Because institutional investors own roughly twice as much as managers (19.95% versus 9.53%), they have a better chance of influencing the tendency to exercise earnings management.
The results also show that the range of institutional ownership is wide, with a minimum of 0.14% and a maximum of 68.30%, which increases the reliability of the statistical tests. The distribution of managerial ownership is also wide, with a minimum of 0.00% and a maximum of 49.28%.
Univariate Test of Mean Difference
Discretionary working capital accruals were split based on the median of the institutional investor variable. Table 3 shows the descriptive analysis of the two groups. As expected, for firms with IIP less than the sample median, the DWCA mean and median were 4% and 3.03%, respectively, with an IIP mean and median of 13.36% and 13.97%, respectively; on the other hand, for firms with IIP greater than the sample median, the DWCA mean and median were 3.12% and 2.85%, respectively, with an IIP mean and median of 39.6% and 40.78%, respectively.
Univariate tests showed that the difference in the mean of the IIP variable between firms with IIP below the sample median and firms with IIP above the sample median is significant (t-value = -144.22, p-value = 0.000).
The tests also showed that the mean DWCA is greater for firms with IIP below the sample median than for firms with IIP above it (t-value = 269.01; p-value = 0.000).
The univariate test results confirm the effectiveness of institutional investors' monitoring role in reducing managers' tendency to practice earnings management, thereby reducing the accounting flexibility available to managers to generate discretionary working capital accruals.
Correlation Analysis
Table 4 presents the correlation matrix of the variables utilized in the regression model. The results show that IIP is significantly negatively correlated with DWCA, indicating that the greater the IIP, the lower the availability of accounting flexibility. This result confirms the univariate test and descriptive analysis results mentioned earlier.
IIP is also significantly negatively correlated with MAO, which corresponds to the view that institutions are reluctant to invest in firms that are dominated by managers. Moreover, IIP is found to be directly correlated with SIZE and ROS, supporting the role of institutional ownership in improving firm performance. SIZE is found to be negatively correlated with DWCA, meaning that the accounting flexibility available to manage earnings is lower in large firms.
The correlation results also show that DWCA is significantly positively correlated with LEV, indicating that managers' practice of earnings management increases the firm's ability to raise funds. Further, the results did not provide any significant evidence supporting a relation between DWCA and ROS.
Regression Analysis
The regression analysis is presented in Table 5. Two regression models were analyzed. In Model 1, the control variables were excluded, and IIP was found to be inversely associated with DWCA at the 0.01 significance level. Moreover, the results show that IIP alone explained 5% of the variation in DWCA (R2 = 0.051). In Model 2, the impact of institutional investors (IIP) on firms' accounting flexibility to generate discretionary working capital accruals (DWCA) was examined in the presence of the control variables. Even after entering all control variables into the multiple regression model, the regression coefficient of IIP remained statistically significant with a negative sign (coeff.: -4.087, p-value: 0.009). All control variables were statistically significant and had the expected signs, with the exception of ROS. Overall, the regression results support an inverse correlation between institutional investors and firms' accounting flexibility in generating discretionary working capital accruals.
Conclusion
This paper aimed to investigate the potential impact of institutional investors on firms' ability to practice earnings management, proxied by the availability of accounting flexibility to generate discretionary working capital accruals, over eleven years (2000 to 2010) for all 70 Jordanian manufacturing companies listed on the ASE.
Based on the results of the various analyses and tests carried out in this study, the study found statistically significant evidence that institutional ownership has an important monitoring role over Jordanian manufacturing companies, leading managers to reduce their tendency toward the exercise of earnings management and thus lessening accounting flexibility.
Table 1 .
Variables definition
Table 2 .
Descriptive analysis results
Table 3 .
Distribution of discretionary working capital accruals based on the median of the institutional investor's variable
Table 4 .
Correlation matrix. Definitions of the variables are presented in Table 1; the first line is the correlation coefficient, the second line is the p-value.
Lung function and paper dust exposure among workers in a soft tissue paper mill
Purpose To study respiratory effects of exposure to soft paper dust, a relationship that is rarely studied. Methods Soft tissue paper mill workers at a Swedish paper mill were investigated using a questionnaire and lung function and atopy screening. Spirometry without bronchodilation was performed with a dry wedge spirometer, and forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) were obtained and expressed as percent predicted. Exposure to soft paper dust was assessed from historical stationary and personal measurements of total dust, in addition to historical information about the work, department, and production. The impact of high exposure to soft paper dust (> 5 mg/m3) vs. lower exposure (≤ 5 mg/m3), as well as cumulative exposure, was analyzed using multiple linear regression models. Multivariate models were adjusted for smoking, atopy, gender, and body mass index. Results One hundred ninety-eight current workers (124 male and 74 female) were included. There were significant associations between both cumulative exposure and years of high exposure to soft paper dust and impaired lung function. Each year of high exposure to soft paper dust was associated with a 0.87% decrease in FEV1 [95% confidence interval (CI) − 1.39 to − 0.35] and decreased FVC (− 0.54%, 95% CI − 1.00 to − 0.08) compared to the lower exposed workers. Conclusions The present study shows that occupational exposure to soft paper dust (years exceeding 5 mg/m3 total dust) is associated with lung function impairment and increased prevalence of obstructive lung function impairment.
Introduction
Occupational exposure to dust, both organic and inorganic, is clearly associated with lung function impairments and clinical outcomes like chronic obstructive pulmonary disease (COPD) and interstitial lung disease (Blanc et al. 2019). The pulp and paper industry is an important industrial sector in Sweden, and a large sector is the production of soft paper (FAO 2017). Growth in demand for soft paper in particular, i.e., toilet paper, paper towels, and napkins, has been especially strong in Asia (CEPI 2017). Soft paper mills still have high exposure to dust, and in previous decades, dust levels have frequently exceeded 10 mg/m3. Soft paper dust is an organic dust with a varying proportion of inorganic material depending on the use of additives (Sahle et al. 1990). In animal models, it has been shown that fibers from cellulose are biopersistent, and it has also been shown that exposure to cellulose dust is associated with fibrotic and granulomatous reactions (Muhle et al. 1997; Tatrai et al. 1996). Hence, it seems reasonable that exposure to soft paper dust should be associated with impaired lung function.
We have in previous studies shown that high occupational exposure (> 5 mg/m3) to soft paper dust is associated with impaired lung function, mainly decreased forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) (Ericsson et al. 1988; Järvholm et al. 1988). A German study, also on soft paper mill workers with occupational exposure (> 5 mg/m3) to soft paper dust, observed a dose-response relationship for cumulative exposure to soft paper dust and decreased FVC (Kraus et al. 2004). By contrast, in two studies with lower exposure levels (≤ 5 mg/m3), there was no association between exposure to soft paper dust and lung function impairment (Heederik et al. 1987; Thorén et al. 1989b). There is also conflicting data about whether exposure to paper dust increases the risk for asthma and COPD (Thorén et al. 1989a; Torén et al. 1991, 1994, 1996). Hence, there is an obvious need for further studies investigating the relation between exposure to soft tissue paper dust and respiratory health effects, especially lung function outcomes.
In this study, we have examined workers in a large soft tissue paper mill in Sweden with the aim to elucidate the extent to which exposure to soft paper dust is associated with respiratory health effects.
Materials and methods
The study was performed at a mill where soft tissue paper production started on a small scale in 1948, and increased considerably around 1960. Today, the mill is one of the largest soft paper plants in Sweden. In 2006, all employees currently working at the mill (n = 205) were invited to participate in a clinical investigation at the mill site. Six of the invited persons did not participate. Hence, the initial study population included 199 workers.
All invited workers received an extensive questionnaire with questions about occupational history, smoking habits, and respiratory symptoms and asthma. Height and weight were measured with workers wearing light clothing and no shoes. Spirometry without bronchodilation was performed with a dry wedge spirometer (Vitalograph, Buckingham, UK) and according to American Thoracic Society (ATS)/ European Respiratory Society (ERS) standards (Miller et al. 2005). Forced vital capacity and FEV 1 were measured with individuals in a sitting position and wearing a nose clip, and predicted normal values were based on the GLI-equations (Quanjer et al. 2012). Blood samples were analyzed for specific immunoglobulin E class using Phadiatop analysis (Pharmacia & Upjohn Diagnostics, Uppsala, Sweden).
Smoking was classified as never-smoking, former smoking, and current smoking, based on the subjects' answers to the questionnaire. Pack-years were calculated among current and former smokers. Asthma was defined as an affirmative answer to "Have you ever had asthma diagnosed by a physician?" and onset after 15 years of age (Torén et al. 1993). Cough with phlegm (chronic bronchitis) was defined as an affirmative answer to "Have you had long-standing cough with phlegm?" and "If so, did any period last at least 3 months?" and "If so, have you had such periods at least 2 years in a row?" (Holm et al. 2014). Wheezing was defined as an affirmative answer to the question "Have you experienced wheeze or whistling in your chest at any time since 15 years of age?" Atopy was defined as a positive Phadiatop result (class 1) (Matricardi et al. 1990). Body mass index (BMI) was defined as measured weight/height 2 .
Exposure assessments
For the purpose of this study, we developed a specific job exposure matrix (JEM) for soft paper dust exposure. Exposure to soft paper dust was assessed from historical stationary and personal measurements of total dust, in addition to historical information about the work, the department where the subject worked, and the kind of production, allowing us to assign each worker an estimated mean dust level (mg/m3) for every year of exposure. Further, the cumulative exposure, in mg/m3-years, was calculated for each worker as (mg/m3) × years of exposure. Due to variations in exposure across time and duties, most workers were classified into more than one exposure category over the study period. The cumulative number of years in different exposure categories is shown in Table 1. Cumulative mg/m3-years for all workers were divided into quartiles, and workers in the highest quartile (> 72 mg/m3-years) were defined as high exposed. The remaining workers were classified as lower exposed. High exposed years were defined as years having been exposed to soft paper dust exceeding 5 mg/m3.
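The per-worker exposure metrics described above reduce to a simple calculation. The sketch below is an illustrative reconstruction, not the study's code; `yearly_levels` would come from the JEM's estimated mean dust level for each employment year.

```python
def exposure_metrics(yearly_levels, high_cutoff=5.0):
    """yearly_levels: estimated mean total-dust level (mg/m3) for each
    year of employment, taken from the job exposure matrix.
    Returns (cumulative exposure in mg/m3-years, number of high exposed years)."""
    cumulative = sum(yearly_levels)                        # mg/m3 x years
    high_years = sum(1 for lev in yearly_levels if lev > high_cutoff)
    return cumulative, high_years
```

For example, a worker with yearly levels of 6, 4, 7 and 2 mg/m3 accumulates 19 mg/m3-years, two of them high exposed.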
Statistical analyses
In the univariate analyses, we dichotomized the subjects into high exposed and lower exposed to soft paper dust. Univariate inferential analyses were performed using Chi-square test and Student's t test. Where there were fewer than ten subjects in any stratum, Fisher's exact test was used for univariate analyses. Univariate analysis results were considered significant if p < 0.05. Lung function outcomes (dependent variable) and the association between the different independent variables (gender, BMI, pack-years, current smoking, atopy, and soft paper dust exposure) were examined in multiple linear regression models, and also stratified into never-smoking and ever-smoking. The associations between high exposure (highest quartile of cumulative dust exposure) and AL GOLD , AL LLN , asthma, chronic bronchitis, and wheezing were analyzed using logistic regression models. Dust exposure was measured in terms of high exposed years as well as the cumulative exposure measure, mg/m 3 -years. All variables were kept in the models even if most of them were without formal statistical significance. The models were adjusted for former and current smoking and also stratified into never-smoking and ever-smoking. In all regression models, we used 95% confidence intervals (CIs) and p values to determine significance. All analyses were performed using SAS version 9.4 (SAS, Cary, NC, USA).
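For intuition, the crude (unadjusted) odds ratio behind such a dichotomized comparison can be computed directly from a 2×2 table. The sketch below is illustrative only; the ORs reported in this study come from logistic regression models adjusted for smoking and atopy, not from this formula.

```python
from math import exp, log, sqrt

def crude_odds_ratio(a, b, c, d):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns the crude OR and its Woolf (log-normal) 95% confidence interval."""
    orr = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # SE of log(OR)
    ci = (exp(log(orr) - 1.96 * se), exp(log(orr) + 1.96 * se))
    return orr, ci
```

Using the AL GOLD counts reported in Table 2 (14 of 51 high exposed vs. 10 of 147 lower exposed workers) gives a crude OR of about 5.2, of the same order as the adjusted OR of 4.6 reported in Table 4.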
Results
One person was excluded due to inadequate spirometry technique; hence, the final study population comprised 198 workers with complete data regarding lung function and smoking habits. Basic data of the study population are shown in Table 2. In the univariate analyses, FEV 1 was significantly lower among the high exposed compared to the lower exposed workers, 91.4% vs. 97.8% predicted. Further, the prevalence of both AL GOLD and AL LLN was higher (p < 0.05) among the high exposed workers, 27.5% vs. 6.8%, and 19.6% vs. 6.1%, respectively.
In adjusted multiple linear regression models, lung function decreased for every high exposed year (Table 3). For each year of exposure to high levels of soft paper dust, there was a 0.87% predicted decrease in FEV1 (95% CI − 1.39 to − 0.35). A similar, but lesser, effect was seen for FVC (− 0.54% predicted, 95% CI − 1.00 to − 0.08). Among never-smokers, the exposure effect was significant only with regard to FVC (− 1.33% predicted, 95% CI − 2.50 to − 0.16) (Table 3). Cumulative exposure to soft paper dust expressed as mg/m3-years was associated with decreased FEV1 and decreased FVC (Table 3). This was seen among all workers and among ever-smokers. Among never-smokers the estimates also indicated decreased FEV1 and FVC, but without formal statistical significance.

[Table 2: Personal data (age, gender, employment time) and data on respiratory health, smoking habits, pulmonary function, and dust exposure in soft tissue paper mill workers. AL: airflow limitation; AL GOLD: AL according to Global Initiative for Obstructive Lung Disease criteria; AL LLN: AL with an FEV1/FVC ratio below the lower limit of normal. AL GOLD: all (N = 198) 12.1% (n = 24); lower exposed (N = 147) 6.8% (n = 10); high exposed 27.5% (n = 14); p = 0.003, high vs. low exposed, Fisher's exact test. AL LLN: all 9.6% (n = 19); lower exposed 6.1% (n = 9); high exposed 19.6% (n = 10); p = 0.01. Restrictive spirometric pattern: all 1.5% (n = 3); lower exposed 1.4% (n = 2); high exposed 2.0% (n = 1); p = 1.00.]
In the logistic regression models, high exposure to soft paper dust was associated with an increased odds ratio (OR) both for AL GOLD (OR 4.6, 95% CI 1.8-12.0) and for AL LLN (OR 3.4, 95% CI 1.2-9.3) (Table 4). There were no significant associations with asthma, wheezing, or chronic bronchitis (Table 4).
Discussion
The main finding from this study is that high exposure to soft paper dust (> 5.0 mg/m 3 ) was associated with decreased pulmonary function. Previous studies have indicated restrictive lung function impairment associated with paper dust exposure; by contrast, the results from this study indicated obstructive impairment, as FVC was less affected than FEV 1 , and the prevalence of AL was increased among high exposed workers.
Our previous study, showing a restrictive impairment of lung function, was conducted at paper mills with dust levels often exceeding 10 mg/m3; hence, probably higher than the exposure, present and past, at the mill in this study. In the present mill, the levels in the 1980s were between 5 and 10 mg/m3 total dust, but exposure levels were later reduced to around 1-2 mg/m3 (Thorén et al. 1989b).

[Table 3: Multivariate linear regression analyses of lung function, in percentage of predicted values, among subjects currently employed (n = 198) at a soft tissue paper mill in Sweden. All models are adjusted for gender, atopy, body mass index, current smoking and pack-years; the models for never-smokers do not include smoking variables. High exposed years (> 5 mg/m3 total dust vs. ≤ 5 mg/m3 total dust): all (n = 198), FEV1 − 0.87 (95% CI − 1.39 to − 0.35, p = 0.001), FVC − 0.54 (95% CI − 1.00 to − 0.08, p = 0.02); never-smokers (n = 79), FEV1 − 1.16 (95% CI − 2.47 to 0.14, p = 0.08), FVC − 1.33 (95% CI − 2.50 to − 0.16, p = 0.03).]

[Table 4: Logistic regression models of adult-onset asthma, wheeze, chronic bronchitis, and lung function parameters among subjects (n = 198) currently employed at a soft tissue paper mill in Sweden; high exposed workers compared to lower exposed workers, adjusted for former and current smoking, and for atopy.]

By analyzing the association between lung function and high exposed years, we consider both the intensity and the duration of exposure (De Vocht et al. 2015). Our findings in the present study indicate that working for at least 1 year at dust levels exceeding 5.0 mg/m3 is associated with significant lung function impairment. Such high exposure levels have not been present in the mill in the last two decades; hence, the affected workers have had quite a long exposure to soft paper dust. However, low exposed workers with a similar duration of exposure did not show any signs of lung function impairment. Among never-smokers, FVC was significantly decreased (− 1.33% predicted), but FEV1 was not significantly affected. Chronic airflow limitation (CAL) is commonly defined as an FEV1/FVC ratio of < 0.7 (Vogelmeier et al. 2017). This has been seriously challenged, however, because the fixed ratio FEV1/FVC < 0.7 does not take into account the age-related changes in lung function. Thus, it has been argued that employing a definition based on FEV1/FVC < 0.7 leads to an overestimation of airflow limitation in the older population (Pellegrino et al. 2005). An alternative approach that has been proposed is to use the LLN as a cut-off. The LLN is calculated using the distribution in reference material; the use of LLN has been proposed by the ATS/ERS (Pellegrino et al. 2005). However, as we only had access to spirometry without bronchodilation, we analyzed AL as a proxy for CAL. We observed, as expected, that the prevalence of AL GOLD was higher (12.1%) than the prevalence of AL LLN (9.6%).
We decided to present both AL GOLD and AL LLN because a joint American Thoracic Society/European Respiratory Society (ATS/ERS) statement called for investigations comparing the fixed cut-off (FEV1/FVC < 0.7) and the LLN-based definition (FEV1/FVC < LLN) of airflow limitation in predicting adverse health outcomes (Celli et al. 2015). Our results indicated that both definitions predicted an adverse outcome.
Whether soft paper dust exposure increases obstructive lung disease risk is unclear. Among soft tissue paper workers, we have previously described increased mortality due to obstructive lung disease as well as a non-significantly increased incidence rate of asthma (Thorén et al. 1989a; Torén et al. 1994). In addition, soft paper workers seem to have an increased prevalence of rhinitis and irritative symptoms of the upper airways, even those exposed to levels below 5 mg/m3 (Thorén et al. 1989b; Hellgren et al. 2001; Kraus et al. 2002; Holm et al. 2011). Among more highly exposed workers, an increased prevalence of cough has been reported (Torén et al. 1994; Kraus et al. 2002). A suspected case of occupational asthma due to cellulose has also been described (Knight et al. 2018). Our results provide further evidence that high exposure to soft paper dust has irritating effects on the airways, impairs lung function, and increases the risk for AL, regardless of how this is defined. In a longer perspective, exposure to soft paper dust may also increase the risk for COPD (Järvholm 2000).
We also intended to define a group of workers with a restrictive spirometric pattern, but the prevalence was too low to perform any meaningful analyses.
The present study has a number of methodological limitations that have to be considered. The main limitation is the cross-sectional design, which implies that workers with long-standing respiratory ailments may have left the mill. We have previously shown that subjects with asthma or respiratory symptoms have an increased frequency of job change (Torén et al. 2009). This turnover of workers will cause an underestimation of the risk associated with paper dust exposure, due to healthy worker selection bias (Östlin 1989). The reference group in the present study was not unexposed; rather, they were low-exposed workers, which may also have resulted in underestimation.
Our analysis adjusted for current smoking and cumulative dose of tobacco (pack-years). There is, however, a strong relation between tobacco smoking and both decreased FEV₁ and an increased prevalence of AL; hence, residual confounding by smoking cannot be excluded.
Another major limitation is the limited statistical power. The study was restricted to the workforce of one mill, which limited the number of study subjects. Still, there were significant associations between exposure and the lung function parameters.
Conclusions
The present study shows that occupational exposure to soft paper dust (years at levels exceeding 5 mg/m³ total dust) is associated with lung function impairment and an increased prevalence of airflow limitation.
Burn assessment: A critical review on care, advances in burn healing and pre-clinical animal studies
Burn, a severe skin injury due to electricity, radiation, chemicals, or friction, may lead to the death of affected skin cells. Burns are a painful and crucial problem that causes disability and, in some cases, mortality of burn-injured patients. First-degree, second-degree, and third-degree are the three categories of burns. First-degree burns (superficial burns) create minor skin damage, as they affect only the uppermost layer of skin, and domestic care is sufficient for treatment. Second-degree burns have injuries beyond the upper layer of skin, and third-degree burns reach every layer of skin, including nerve injuries, and require critical care in treatment. Burn injuries are not limited to local effects; they may also provoke systemic responses and cause serious problems. Microbial infection is the most severe challenge associated with second- and third-degree burn injuries. The ultimate goal of treating burn injuries is re-epithelialization with minimal tissue scarring. Selection of the appropriate treatment is based on the severity of the burn injury. The most prevalent and effective treatments are topical agents containing mafenide acetate, silver sulfadiazine, silver nitrate, etc. Skin substitutes, negative pressure wound therapy, and skin grafting are advanced treatments for burn injuries. Burn treatment is also associated with complications such as infection, dehydration, low body temperature, and emotional problems. Animal studies for burn models are performed using rabbits, rats, and pigs; this may be an effective way to identify new forms of burn treatment, including the assessment of newly developed formulations.
INTRODUCTION
Burns are identified as the death of the affected skin cells following brutal skin injury. Burns are a severe and crucial health problem that causes disability and death [1]. Statistical investigation has shown an estimated 7 million (70 lakh) burn incidents in hospitals in India annually, ranking second among injuries after road accidents. In 2010, the Indian Government initiated the National Programme for Prevention of Burn Injuries (NPPBI). This programme targets reducing the burn mortality rate, managing burn injuries, and founding a central burn registry; however, its impact is not yet apparent [2,3]. One million people in India endure mild to severe burns each year [4]. In 2016, around 7 million burn injuries were reported in India [2,3]. According to the WHO factsheet of March 2018, 195,000 deaths occur yearly due to burns. More severe burns need instant emergency therapeutic care to prevent serious health issues and death. First-degree, second-degree, and third-degree are the three main kinds of burns; the degree expresses the severity of injury to the skin. First-degree burns are the mildest and third-degree burns the most severe. A burn with every symptom and sign of a third-degree burn that also reaches the bones is known as a fourth-degree burn. Burns have diverse etiologies, such as scalding from hot and boiling liquids, chemical and electrical burns, fires (flames from candles, matchsticks, and lighters), and immoderate exposure to the sun. First-degree burns produce the least skin damage. They are also known as "superficial burns", as only the uppermost skin layer is affected, and are generally handled with domestic care. Second-degree burn injuries are more severe, as the damage extends beyond the upper layer of the skin. Infection can be avoided by keeping the wound clean and bandaged, which may speed the healing process. Third-degree burns create the most severe damage, reaching through every layer of the skin. Third-degree burns are commonly assumed to be the most painful; however, because these burns damage nerve cells, they may cause little or no pain [5].
SKIN
The body's most significant organ is the skin [6], which protects the body and prevents foreign substances from entering it (Figure 1). It has been roughly calculated that up to 1000-2000 million epidermal cells are shed, and most are replaced, daily [6,7]. The epidermis, dermis, and hypodermis are the three layers of the skin [8,9]. The function of the epidermis is to restrict the entry of dangerous microorganisms and to maintain the water content of the skin, and thus body hydration. It is divided into five layers: basal, lucidum, spinosum, corneum, and granulosum [8,10]. The dermis, lying between the epidermis and hypodermis, is comprised mainly of blood vessels, collagen protein, hair roots, sweat glands, nerve cells, mesenchymal stem cells (MSCs), and lymphatic vessels [8,11,12]; it provides structural robustness to the skin. The hypodermis, the third layer of skin, comprises macrophages, adipocytes, vasculature, fibroblasts, and nerves. Support and repair of the dermal and epidermal layers serve the hypodermis layer's purpose [8,13].
THE DIFFERENCE BETWEEN BURNS AND WOUNDS
Burns have many generalized effects on the body. In wounds, the skin has local damage and generalized effects are not seen [14]; wounds are limited to the epidermal and dermal parts of the skin [15]. Burns, on the other hand, can affect much larger surfaces, i.e., >20-30% of TBSA (total body surface area), which indicates significant burn injury. Burn injury caused by electrical energy, radiation, chemicals, or abrasion results in damage clinically the same as thermal injury [16].
Thermal injuries:
1) Flame injuries: These are a usual form of burn. They generally occur in women aged 16-35 and are associated with prolonged cooking in loose-fitting clothing [17][18][19]. Flame injuries may be of any depth, partial or full thickness.
2) Scalds: Injuries due to hot liquids and steam. Scalds create superficial burns and may involve a large skin area [19].
3) Contact burns: These occur when the skin comes in contact with a very hot object, or a less hot object for an extended period. Common causes of contact burns include irons, oven doors, radiators, the glass fronts of gas fires, and vitro-ceramic cooking stations. Contact burns frequently result in deep cutaneous or full-thickness burns and can cause severe harm [19].
Electric injuries:
Here, electric current proceeds from one point to another through the body, generating "entry" and "exit" points, and can potentially harm the tissues along this path [20]. Electric injuries are divided into high voltage (more than 1000 V) and low voltage (below 1000 V); their severity depends on current contact time, current type (AC vs. DC), and voltage [21]. Low-voltage burns create minor deep-thickness burns, while high-voltage burns create immense deep tissue injury and may cause limb loss. Flash burns occur when high-voltage current does not enter the human body, but the high-temperature energy creates apparent burns on exposed body parts such as the hands, upper limbs, face, and neck; burned clothing may cause deeper burns. The cardiac cycle is affected by electrical burns, which may cause arrhythmia, so treatment should include cardiac monitoring.
Chemical injuries:
These injuries may occur because many different chemicals (Table 1) are used daily. The severity of a caustic burn depends on the amount of the agent, depth of penetration, exposure duration, concentration of the agent, and mechanism of injury [22][23][24].
PATHOPHYSIOLOGICAL CONDITION AND HEALING PROCESS OF BURNS
A burn injury creates local responses as well as systemic responses. Local reactions to burn damage are identified by increased capillary permeability and hydrostatic pressure [14,25,26]. Figure 2 shows the zones of coagulation, stasis, and hyperaemia as distinct areas of the burn injury; this division is based on the severity of the wound. The necrotic area of the burn, where tissue is irreversibly destroyed at the point of maximum damage, is named the zone of coagulation. The area surrounding it is known as the zone of stasis, with a milder degree of injury and lower tissue perfusion; it is also associated with vascular damage and inflammation. The peripheral zone of hyperaemia (distinguished by inflammatory vasodilation due to high blood flow) initiates the healing process and generally avoids further necrosis [21,23,27]. The systemic response comes into the picture once the burn injury reaches 30% of TBSA (Figure 3). Cardiovascular response: capillary permeability is augmented, with loss of fluids and intravascular proteins into the interstitial compartment; release of tumour necrosis factor lessens myocardial contractility, and fluid is also lost from the burn wound. Respiratory response: bronchoconstriction occurs owing to the release of inflammatory mediators, and in intense burns respiratory distress syndrome may develop. Metabolic response: the basal metabolic rate (BMR) increases up to three times the standard rate. Immunological response: the body's defence mechanism is reduced [19,20]. The ultimate goal in burn damage is re-epithelialization with the most negligible scarring. Burn wound healing includes four stages (Table 2), corresponding to the biological phases of hemostasis, inflammation, proliferation, and remodelling (Figure 4) [8,28]. The hemostasis and inflammation phases first respond to the injury and prevent
blood loss at a specific site by clotting. Here, fibrin is formed through several events, such as platelet aggregation, activation of the immune system, blood clotting, and complement system activation [6,12,28-30]. The inflammation phase prevents infection in wounds. At the site of injury, inflammatory cells are attracted by cytokines, tumour necrosis factor (TNF-α), platelet-derived growth factor (PDGF), and transforming growth factor-beta (TGF-β) [28,29]. TGF-β and other growth factors are also released by macrophages, which regulate the relocation of fibroblasts and epithelial cells into the wound [30]. The proliferation phase then begins.
Triggering of keratinocytes and fibroblasts by cytokines and growth factors marks the start of the proliferative phase [31]. Keratinocytes move towards the wound, leading to restoration of the vascular network [32,33], and the re-epithelialization process starts [34]. The dermis layer is restored in the proliferative phase: accumulated fibroblasts and myofibroblasts produce extracellular matrix (ECM) proteins, namely fibronectin, and growth factors such as TGF-β [12,35,36]. The final steps induce the production of granulation tissue containing fibroblasts, granulocytes, and macrophages. The last step of this phase is collagen formation, initiated by fibroblasts.
A burn scar contains more fibroblasts, collagen, and elastin; here, myofibroblasts initiate contraction of the wound [34,37]. Ending the response to the injury depends crucially on the death of keratinocytes and inflammatory cells such as macrophages and T cells [12,36], which gives the burn wound its final appearance.
TREATMENT AVAILABLE FOR BURN SCARS
The healing of burn scars depends on the depth of the wound. Superficial burns are generally treated easily without scarring, while severe burns need special care for faster healing [34,40]. Surgical and non-surgical treatments are available for burn injuries and are selected based on the severity of the injury (Table 3).
Table 3. Types of burn, symptoms, and available treatments:
1st degree. Symptoms: redness, minor inflammation, pain, dry and peeling skin. Treatment: apply cold water to the burn wound for three to five minutes or longer; give analgesics for pain relief; apply a gel or cream containing herbal or other appropriate ingredients along with local anaesthetics to calm the skin; protect the affected area with antibiotic ointment and loose gauze.
2nd degree. Symptoms: redness, splotchy skin, pain, swelling, blisters, scarring. Treatment: apply a thin layer of antibiotic ointment to the burn for healing; protecting the burn with sterile, non-stick cotton gauze may prevent infection and assist the skin's recovery.
3rd degree. Symptoms: burned areas may be charred black or white; leathery skin, destroyed nerves, numbness, difficulty breathing, carbon monoxide poisoning. Treatment: compression garments, skin grafting, surgery, physical therapy.
Pharmacological treatments of burns
Analgesics, commonly referred to as painkillers, are essential in treating burns. Potential side effects and drug interactions must be avoided to treat pain in burn patients effectively. Acetaminophen and oral NSAIDs are mild analgesics with a ceiling effect in their dose-response relationship; this constraint makes them inappropriate for managing typical, severe burn pain. Minor burns can be successfully treated with acetaminophen and oral NSAIDs, typically in an outpatient setting. A reliable and efficient method of delivering adaptable analgesia to burn injury patients is patient-controlled analgesia (PCA) with intravenous opioids. Patients with burn injuries frequently experience anxiety, which may be strongly related to pain; anxiety can worsen pain by increasing background pain and the expectation of procedural discomfort. When treating burn pain, anxiolytic medications are frequently used with opioids. Benzodiazepines, given as an opioid adjunct, have been demonstrated to lessen procedural pain in patients with high levels of anxiety and background discomfort [42]. Systemic antibiotics are also essential in the treatment of burn injuries. Using antibiotics to treat underlying infections can lower morbidity and prevent fatality. Once an infection has been identified, antibiotics should be given per the recommendations of the French Society for Burn Injuries. Topical antimicrobial therapy is frequently used as a complement to surgical treatment or systemic antibiotics to cure or prevent infections. Medicaments should not promote drug resistance and should match the type of bacteria inducing the infection. Antibiotic use in burn patients is more challenging than in patients with other illnesses, because the burn victim's pharmacokinetic characteristics are altered and there are significant individual differences. The key alteration is scanty tissue concentration of antibiotics, which causes treatment failure and drug resistance; as a result, drug resistance lowers the efficacy of standard doses of antibiotics. Furthermore, the effectiveness of medications like ciprofloxacin, or penicillins resistant to the enzyme penicillinase, has decreased against Staphylococcus aureus. There is variability in the potency of new products that have become commercially available. Therefore, antibiotic choice, delivery, and treatment duration are more challenging to establish in burn patients, and practitioners should examine all risk factors for morbidity and fatality when considering their use [43].

Table 2. Phases of burn wound healing (phase, description, and key responsible factors):
Hemostasis [6,12]: Platelet clustering, activation of the defence mechanism, blood clotting, and complement-system-induced fibrin. Fibrin: platelet aggregation, activation of the immune system, blood clotting.
Inflammation [14]: Neutrophils and macrophages remove foreign material and control fibroblasts and epithelial cells in the wound. Neutrophils: dilatation of blood vessels; monocytes: leakage of fluid from blood vessels into the surrounding tissue; macrophages: swelling.
Proliferation [31]: Re-constitution of the epithelial layer; restoration of the vascular network; fibroblast-induced collagen formation. Keratinocytes: closing of the wound; fibroblasts: restoration of the vascular network.
Remodelling [12,14]: Collagen and elastin are produced; the response to injury ends with the formation of myofibroblasts. Collagen: tight cross-links between collagen fibres (maturation of the wound); elastin: induces cell activities such as cell migration, matrix synthesis, and protease formation [38]; fibroblasts/myofibroblasts: contraction of the wound (breakdown of the fibrin clot, creation of new ECM and collagen structure) [39].
One of the most effective ways to control microbes in an infected wound is the judicious use of topical medications. Topical agents reduce sepsis and mortality rates; each topical agent has advantages and disadvantages. The effectiveness of topical agents is identified by in vitro bacterial growth and reduced colony counts in vivo [44]. Some of the topical antimicrobial medications used in burn treatment are listed below.
Silver nitrate
It is a non-toxic 0.5% solution and is very effective against P. aeruginosa, S. aureus, and E. coli. A mesh dressing is placed on the wound, and the silver nitrate solution is introduced. Silver nitrate has limited penetration capacity, like silver sulfadiazine, because silver ions bind very quickly to the body's natural chemical substances. During treatment with silver nitrate, monitoring of serum electrolytes is necessary. The 0.5% silver nitrate solution is light-sensitive; after drying, it turns black on contact with chloride-containing compounds and tissues. Adverse reactions such as high fever have also been reported if the silver nitrate solution becomes dry [45,46].
Silver sulfadiazine (SSD)
SSD, a 1% water-soluble cream, is an amalgam of silver and sulfadiazine. The silver ion attaches to the organism's DNA and releases sulfonamide, which further interferes with the organism's metabolism [44]. The advantage of this drug is its ability to reduce pain. Its penetrating power is confined to the epidermal layer. Unlike mafenide acetate, it is not associated with pulmonary fluid overload or acid-base disturbances [46]. A possible side effect is a reduction in granulocytes, although this is contested [45,46].
Mafenide acetate
It is presented as an 8.5% water-soluble cream base and a 5% aqueous solution. It is highly effective against a vast spectrum of microorganisms, predominantly all strains of Clostridium and P. aeruginosa. The cream is applied at least twice daily and should be re-applied if removed from the wound. Mafenide acetate has the greatest capability to penetrate burn eschar and prevents the growth of microbial colonies in the necrotic area. Besides the cream formulation, the 5% mafenide acetate solution is also helpful for burn wounds; the highest antimicrobial effect may be achieved by keeping the dressing wet with the 5% solution. Dressings may be changed every 8 hours. The solution has some adverse effects: it is a carbonic anhydrase inhibitor and may cause acid-base imbalance when applied to a vast burn surface area, and it may also cause respiratory acidosis.
Povidone-iodine
The topical use of povidone-iodine is painful. Some recent studies state that the absorption of iodine in burn wounds is very high, leading to iodine toxicity, kidney failure, and excess acid formation. It can damage fibroblasts through a cytotoxic effect [45][46][47][48]. It is also used as a topical disinfectant.
Gentamicin sulfate
A cream formulation containing 0.1% gentamicin sulfate is available on the market. It is similar to other aminoglycoside antibiotics, such as neomycin and kanamycin. Rapid development of resistance, owing to its excessive use as a topical antimicrobial medication, is its major drawback [44].
Polymyxin/Bacitracin
Polymyxin and bacitracin are used as ointment formulations. Since these topical treatments are non-toxic, many medical professionals recommend them for covering skin grafts; at the same time, they are not effective in controlling established infection.
Nitrofurantoin
Nitrofurantoin is effective against all Gram-negative bacteria except P. aeruginosa. Nitrofurantoin is up to 75% effective, compared with the 21% effectiveness of polymyxin/bacitracin [44].
Mupirocin
It is produced by fermentation of P. fluorescens. Its antimicrobial activity derives from inhibition of protein synthesis in the bacterial cell [49]. It is beneficial in treating methicillin-resistant infections of burn wounds [50].
Nystatin
Nystatin, derived from Streptomyces noursei, is an antifungal medicine generally used in an oral dosage form. However, topical use of nystatin powder at a dosage of 6,000,000 U/g has successfully treated burn sites with fungal infection. Nystatin powder may be given in combination with mafenide acetate (5% aqueous solution) and SSD (1% cream) to prevent the growth of fungi and yeast at the necrotic site [51,52].
Skin grafting
When a burn injury completely destroys all layers of skin and the primary healing mechanism cannot heal the wound, a surgical procedure should be adopted [53]. Skin grafting is suggested for full- and partial-thickness burn injuries: removal of the necrotic portion is followed by autologous skin grafting, which transfers healthy skin from a donor site on the patient to the injured site. The skin grafting technique has its limitations; unfortunately, burns covering more than about 50% of TBSA cannot be treated this way [54,55]. Over time, this issue can be overcome by repeated skin transplantation from the donor sites, but healing of the donor site is impaired and skin disorders may arise [56,57]. The technique can also result in severe scarring [58,59].
Split-thickness Skin Graft (STSG)
Unlike a full-thickness skin graft, an STSG contains the epidermis and only part of the dermis. It is classified according to thickness (thin: 0.01 to 0.03 cm; medium: 0.03 to 0.04 cm; thick: 0.04 to 0.07 cm), which can be varied from patient to patient as required. Because harvesting is limited to part of the dermis, the donor site retains dermis from which a new epidermis layer can form [60,61].
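The thickness bands quoted above amount to a simple lookup. The sketch below is purely illustrative (the function name and the half-open boundary choices are assumptions, since the quoted ranges share their endpoints at 0.03 and 0.04 cm):

```python
def stsg_category(thickness_cm: float) -> str:
    """Classify a split-thickness skin graft (STSG) by harvest thickness.

    Bands follow the text: thin 0.01-0.03 cm, medium 0.03-0.04 cm,
    thick 0.04-0.07 cm. Shared endpoints are resolved half-open
    (an assumption, since the quoted ranges overlap at the boundaries).
    """
    if 0.01 <= thickness_cm < 0.03:
        return "thin"
    if 0.03 <= thickness_cm < 0.04:
        return "medium"
    if 0.04 <= thickness_cm <= 0.07:
        return "thick"
    raise ValueError("thickness outside the quoted STSG range (0.01-0.07 cm)")
```

For example, a 0.02 cm graft falls in the "thin" band and a 0.05 cm graft in the "thick" band.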
Skin substitutes
Skin substitutes are generally used when the burn wound is vast but the availability of donor skin is restricted. They are classed as biological, synthetic, or combined replacements. Biological skin alternatives have constituents similar to natural skin [59]. Synthetic materials provide greater structural integrity but are less bioactive than biological substitutes; artificial substitutes carry no risk of disease transmission. Currently, only a limited number of synthetic skin substitutes are available on the market [62]. Different skin substitutes are classified in Table 4.
Wound Dressing
Wound dressings are beneficial for wound coverage and re-epithelialization. They also shield the wound, preventing contamination and further skin damage, and act as a support that enhances healing of the injured area. Various types of dressing are available on the market. Dressing selection depends on factors such as the state of the wound bed, burn depth, wound site, moisture retention and drainage amount, the required frequency of dressing changes, and cost. The ideal dressing protects against contaminants and physical harm, enables gas exchange and moisture retention, and ensures comfort for accelerated operative recovery [71,72]. Significant classes of wound dressings [33] are listed in Table 5 with their marketed names and manufacturers.
Negative pressure wound therapy (NPWT)
Here, the application of a vacuum to the burn wound area via a foam dressing stimulates tissue granulation, raises blood perfusion, and decreases the rate of bacterial colonization [6,80]. NPWT is currently used in burn wound care [60]. In humans, NPWT reduces injury progression [81], and an optimized NPWT environment enhances the healing process [60]. Research has also indicated that NPWT enhances wound healing when split-skin grafts are combined with Matriderm® or Pelnac® skin substitutes [59,82,83]. NPWT improves grafting conditions, decreases the chance of infection in burn injury, and reduces sepsis progression and bacterial proliferation [84,85].
COMPLICATIONS DUE TO BURNING INJURY
Burn patients lose their skin, a primary barrier, so they are at greater risk of complications. The following complications generally occur in burn patients.
Infection: The most severe challenge in burn care management is infection, the principal cause of burn injury fatalities [86][87][88]. Only a few antibiotics are helpful in burn injury, and bacterial growth is mainly responsible for the infection. Bacteria are generally of two types, Gram-positive and Gram-negative. Pseudomonas aeruginosa, Acinetobacter baumannii, and Enterobacteriaceae are common Gram-negative bacteria causing infection in burn injuries; piperacillin-tazobactam, carbapenems, and cephalosporins, respectively, are used against these particular Gram-negative bacteria. Penicillin treatment is indicated for Gram-positive bacteria such as Staphylococcus aureus, Streptococcus, and Enterococcus [59]. Infection can lead to sepsis, which causes hypotension and reduced organ perfusion, slows the skin healing process, and can lead to multi-organ failure [89][90][91]. Besides infection, the following complications may occur [5]. Dehydration: burns cause fluid loss, so blood volume may become insufficient to supply the entire body. Low body temperature: the skin takes part in regulating body temperature, but after burn injury the body can lose heat faster than under normal conditions, which may lead to hypothermia. Contractures: as a scar forms, it can tighten the skin, hindering movement of bones or joints. Emotional problems: damage to the face or other visible areas may lead to emotional issues.
ANIMAL MODELS USED IN BURN RESEARCH
Burn injury is not associated with a single pathophysiological condition but involves multiple organs, causing structural and functional abnormalities. In vitro experiments can neither predict this complexity nor address the pathophysiology; hence, in vivo study is essential. In vivo analysis can be performed to mimic the post-burn pathological mechanisms and to evaluate novel therapeutic approaches. Wound healing experiments are generally performed on animal species such as mice, rats, rabbits, and pigs (Table 6).
Mouse
Mice are the animals most frequently used in experimental work on the burn healing process, owing to their superior immune system, low morbidity rate, and reduced healing time. Mouse models of burn injury also have drawbacks: mice do not develop the same wound-healing conditions as humans. Wound healing is rapid in mice because of wound contraction, whereas in humans re-epithelialization predominates, which is somewhat slower [92][93][94]. Mouse skin is covered with hair that is denser than human hair. Generally, healthy mice 6 to 8 weeks old are used for the mouse burn model. The mice are anaesthetized by administering ketamine/xylazine through the intraperitoneal route, or with other anaesthetics. In some cases, 1 mL of saline solution is also given, as it acts as a cushion for the spinal cord [95].
Rat
Rat skin has a composition similar to human skin, with the same layers (epidermis and dermis), but it is looser and hence more elastic than human skin. Rats are also used frequently because of their low cost. The wound healing mechanism of the rat is the same as the mouse's, so it does not reproduce human healing conditions. Rats have a lower chance of systemic sepsis [94][95][96]. The rat burn model is identical to the mouse burn model; rats have a higher tolerance, as they can withstand burns up to 60% TBSA.
Guinea pig
Guinea pigs have numerous potential advantages, including a high pain tolerance, light weight, and ease of handling. Their anatomy resembles that of humans, and their skin is sensitive enough to be burned easily. According to some researchers, the metabolic reaction to acute burn injury in guinea pigs is remarkably indistinguishable from humans' post-burn metabolic response [97]. In small rodents like rats and mice, the stage of the hair development cycle affects the burn depth; this is not the case with guinea pigs. In addition, the guinea pig's epidermis and dermis are around the same thickness as human skin and show the least thickness variability compared with rat and rabbit skin [98].
Pig
Similarities in skin anatomy and physiology make the pig an appropriate model for studying human skin. The epidermis and dermis layers of the pig are thicker than in humans; in mice and rats, the layers are thinner and cannot mimic human healing conditions, while in pigs they are closer. Pigs have many more similarities, such as hair density, orientation and distribution of blood vessels, epidermal enzyme patterns, and the lipid film of the skin. The pig's healing process also has the stages of inflammation, proliferation, re-epithelialization, and remodelling seen in humans [92,94,99].
Rabbit
The high cost of the pig model is addressed by the rabbit model, which also has similar metabolic relevance to humans. The rabbit model creates circumstances to evaluate the systemic effects of burns and allows the investigation of dynamic changes [100]. The hypermetabolic response is also easily assessed in rabbits.

Examples of burn studies in animal models (Table 6):
Mouse: Interferon-gamma has a crucial influence on the recovery process of the burn wound; it inhibits collagen (a key factor for re-epithelialization) and fibroblast synthesis, so burn wounds take longer to heal. Local examination after inducing severe burns in interferon-gamma-deficient or -sufficient mice suggested that inhibiting interferon-gamma speeds up healing after burn injury [101]. Other authors characterized the expression of gelsolin, an actin-filament protein present in the brain, in mice subjected to burn injury. Gelsolin affects cell motility, adhesiveness, and apoptosis; it activates monocytes and astroglial cells, thereby playing an essential role in the subsequent apoptosis of neurons induced by swelling following burn injury [102]. Another study concluded that intestinal immunity was improved, with significantly enhanced IgA levels, in burn-injured mice supplemented with enteral nutrition and glutamine compared to conventional enteral nutrition alone [103]. Infection, a severe and common burn complication, is directly associated with burn size; sepsis is more frequent when TBSA exceeds 30%. Treatment with simvastatin in burn-injured mice helped reduce the raised level of interleukin-6 and thereby mortality [104].
Rat
An experiment evaluated the effects of carbachol on enteral recovery from burn shock in rats, in terms of intestinal absorption rate and blood flow to the intestinal mucosa; the findings showed that carbachol increases the intestinal water absorption rate as mucosal blood flow improves [105]. Another study observed morphological changes in muscle fibres distant from the site of a thermal burn injury in rats; a noticeable morphological difference was found in muscle at the site of a thermal burn covering 45% of the total body surface area [106]. A further study determined how ulinastatin affects fluid vasopermeability and the inflammatory response: ulinastatin helps prevent systemic inflammatory reactions and fluid leakage into tissue after a catastrophic burn [107]. An investigation of the modulating effects of anaesthesia, analgesia, and euthanasia techniques on the inflammation profile in the rat burn model concluded that understanding such effects is necessary for any study examining the pathophysiology of inflammation in animal burn models [108].
Pig
Various methods have been developed to evaluate the healing process of wounds, comprising tensiometry, immunohistochemistry, electron microscopy, granulation-tissue depth analysis, and digital photography analysis. The results suggest that hypertrophic burn scars in the domestic pig appear similar to hypertrophic scars in humans, and the model developed in that study helps evaluate and contrast various burn-injury treatments [109]. Another analysis classified different methods of burn treatment in a standardized pig model through histopathological assessment of scalds and contact burns [110]. The pig has also been used to illustrate experimental burn wounds: a stainless-steel round bar, preheated to 50-110 °C, was applied using a push-pull force technique, and saline dressing with lidocaine HCl gel was used to heal the wounds; the authors thereby demonstrated the pig as an animal model for testing a therapeutic formulation on burn wounds [111].
Guinea-pig
This study advanced the understanding of the burn-wound healing process in the guinea pig. A burn was produced on depilated dorsal skin using a round aluminium template preheated to 75 °C and applied to the skin for 5 seconds, allowing the mechanisms of healing, i.e., epithelialization, contraction, and scar formation, to be studied [112].
Rabbit
Therapeutic herbal remedies have long been used to treat wounds, ulcers, and burns. One study explored an alcoholic extract of the yarrow plant for treating experimentally induced burn wounds in New Zealand white rabbits and concluded that the extract had the potential to enhance burn-wound healing and lessen the microbial load of the wound [113]. In another study, alkali burn injury was produced in New Zealand white rabbits and treated with a cross-linked, chemically altered hyaluronan derivative; the treated group showed a higher rate of wound closure than the control group, indicating an improved healing process [114]. A further study evaluated the efficacy of quince seed mucilage, a traditional preparation used in treating burns and skin wounds: a wound was created in Iranian male rabbits, a cream containing quince seed mucilage (5%, 10%, or 20%) was administered twice daily, and the 10-20% mucilage-containing formulations showed excellent prospects for healing the wound injury [115].
SUMMARY AND CONCLUSION
The skin comprises a major part of the human body. It serves as a barrier and safeguards the body from the environment. A burn injury is characterized by a zone of coagulation, a zone of stasis, and a zone of hyperaemia. Systemic responses come into the picture when a burn injury extends over more than 30% TBSA, altering cardiovascular and immunological responses. Haemostasis and inflammation are the first events of the burn healing process, in which the immune system is activated and oedema forms. Revascularisation is part of the proliferative phase (the second phase of burn healing), which leads to re-epithelialization. Topical agents are one of the most preferable and effective ways to treat a burn wound; reducing the chance of sepsis and mortality is a key benefit of topical burn treatments. Silvadene, Sulfamylon, Betadine, and Furacin are some marketed topical products helpful in treating burn injury. More advanced treatments, such as skin and split-skin grafting, are used to treat full-thickness burn injuries. Karo skin, GraftJacket®, OrCel®, and MySkin™ are examples of skin substitutes. The effectiveness of topical agents has been evaluated in in vivo studies using different animals, such as rats, mice, rabbits, and pigs, and novel therapeutic formulations can also be evaluated using these animal models. Such animal-model research paves the way for developing newer medicines and dosage forms to improve burn treatments.
Figure 2. Structure of skin and its layers
Figure 3. Different zones of burn injury
Figure 4. Burn wound healing process, step by step
Table 1. Different chemical substances and their mechanisms of injury
Table 5. Different types of skin wound dressing
Table 6. Animal models used in burn/wound injury
A Study on the Evaluation of the Effect of Exercise on the Treatment of Chronic Diseases Based on a Digital Human Movement Model
The aim of this study was to summarise the therapeutic effects of exercise interventions for common chronic diseases and to analyse the scope of application and problems of various exercise treatment protocols, with a view to guiding clinical practice and providing references for subsequent research. To this end, this paper describes how to extract feature parameters for gait analysis based on a digital human motion model. In-depth descriptions of the extraction algorithms for spatiotemporal features, centre-of-mass movement features, joint mobility, and joint contact forces are presented, and the reliability of the knee contact force extraction algorithm is analysed in particular. To analyse the effect of exercise on the treatment of chronic diseases, 50 elderly patients with chronic diseases collected from the community were selected and given healthy exercise for 1 year. After the exercise, the patients' blood pressure, lipid, and blood glucose attainment rates and their knowledge of chronic diseases and nonpharmacological treatments were significantly higher, with statistically significant differences compared to the preexercise period (P < 0.05).
Introduction
China's health data show that there are more than 300 million patients with chronic diseases (NCDs) in China, including about 260 million with hypertension, 100 million with diabetes, 170 million with hyperlipidemia, 110 million with fatty liver, and more than 100 million with overweight or obesity [1]. Health promotion interventions that focus solely on disease treatment or rely solely on the medical and health sector are not effective in addressing the health problems of the Chinese population. Evidence on the value of exercise for the prevention and treatment of chronic diseases is growing, and it is possible that, in the near future, sport will become part of medicine. The linkage between sports and medicine will be the most active and cost-effective strategy to prevent and control chronic diseases and maintain the health of the entire population [2]. Chronic diseases have also become the main cause of death among Chinese residents: the death rate due to chronic diseases is 85%, and the disease burden they cause accounts for 70% of the total disease burden [3]. Johnston et al. [4] studied the impact of cardiovascular disease on economic growth and found that, for every 1% increase in mortality, economic growth in high-income countries fell by 0.1%. Chronic diseases significantly increase healthcare spending, with patients with chronic diseases spending 47.3% more than the average level of healthcare spending.
Exercise has gradually been shown to prevent and treat chronic diseases [5]. Nonpharmacological treatments include moderate exercise.
Exercise is one of the most cost-effective ways to change poor behaviour and lifestyle habits and to prevent and manage chronic diseases. To achieve better results in the prevention and treatment of chronic diseases through exercise, medical exercise professionals need to develop scientific exercise prescriptions that prescribe specific exercise content and volume for the individual patient's physical condition, so as to achieve a scientific and planned preventive fitness or rehabilitation treatment [6].
Exercise prescription refers to a systematic and individualised exercise programme formulated in the form of a prescription given to patients, athletes, and gym-goers by physicians, rehabilitation therapists, and sports instructors according to age, gender, physical health status, and exercise experience, as well as cardiorespiratory functional status and functional level of exercise organs [7].
The exercise prescription should specify frequency, intensity, duration, exercise form, and progression. Recommended aerobic exercise: 3 to 5 d per week at 50% to 80% of exercise tolerance for 20 to 60 min; forms of exercise include walking, treadmill, cycling, rowing, stair climbing, arm and leg ergometers, and other appropriate continuous or interval training. Recommended resistance exercise: 2 to 3 d per week, 10 to 15 repetitions per muscle group to moderate fatigue; forms of exercise include elastic stretch bands, weights, dumbbells, barbells, pulleys, and weight machines. Each session includes warm-up, relaxation, and flexibility exercises. It is recommended to gradually increase the intensity and duration of activity over time and to avoid strenuous physical activity [8].
To this end, this paper describes how to extract feature parameters for gait analysis based on a digital human motion model. The extraction of spatiotemporal features of gait was achieved based on 3D coordinate-sequence data of skeletal key points, and the subsegmental centre-of-mass weighting method was used to calculate the centre-of-mass movement feature parameters. To analyse the effect of exercise on the treatment of chronic diseases, a case study shows that healthy exercise in the prevention and treatment of chronic diseases can significantly improve patients' clinical indicators and enhance the treatment of patients' chronic diseases.
Related Work
With advances in medical care, the 5-year survival rate for effectively treated cancers has increased significantly, and cancer is now classified as a chronic disease [9,10]. The results in [11,12] showed that, in the same exercise training population, middle-aged hypertensive patients (41-60 years) had longer-lasting reductions in systolic blood pressure than younger or older patients, and women had better blood pressure reduction than men. Regular exercise reduces the risk of hypertension and improves fitness and health. Regular (≥3 days per week) moderate-intensity exercise sustained for 30-45 min or more can reduce systolic blood pressure by 5-17 mm Hg and diastolic blood pressure by 2-10 mm Hg [13]. The decrease in blood pressure in hypertensive patients is rapid and pronounced within 24 h after a single exercise session, with a more pronounced trend towards lower systolic blood pressure as the duration of training increases [14]. The results of the US DPP project showed that a lifestyle intervention group that burned 700 kcal/week and lost at least 7% of body weight had a 58% reduction in the incidence of diabetes in those aged 48-66 years and 71% in those aged ≥60 years after 2.8 years [15]; after 10 years, there was still a 34% reduction in incidence [16]. Combined exercise and diet interventions were effective in controlling the condition of patients with gestational diabetes combined with hyperemesis, balancing indicators of ischaemic and hypoxic damage, and reducing levels of vascular neovascularisation factors, thereby improving maternal and child outcomes [17].
Spatiotemporal Characteristics Required for Gait Assessment.
The spatiotemporal parameters used for gait analysis usually include Stride (ST), Step Length (SL) [18], Step Width (SW) [19], Gait Speed (GS) [20], Stride Frequency (SF), Toe Out Angle (TOA), and Stance (Support) Phase Time Ratio (SPTR), as shown in Figure 1 [21]. The stride length is the distance between the centres of two successive placements of the same foot, expressed in cm.
The longitudinal straight-line distance between the points where the right and left heels or toes land successively during walking is called the step length and is expressed in cm [22]. A step forward with the left foot gives the left step length, and a step forward with the right foot gives the right step length. Step length is significantly related to height: the shorter the stature, the shorter the step, with a normal person's step length being approximately 50-80 cm. The foot angle is the angle between the centre line running through the sole of one foot (the long axis of the foot, the line from the midpoint of the heel to the second toe) and the forward direction; in a normal person it is approximately 6.75°.
3D Coordinate Sequence of Skeletal Key Points.
In this paper, we extracted stride length, step length, step width, gait speed, stride frequency, foot deflection angle, and support phase time ratio from the motion trajectory data captured by the Kinect sensor as the spatiotemporal features of gait; the experimental process is shown in Figure 2. The extraction algorithm can be summarised in the following steps. First, the marker coordinates captured by Kinect are preprocessed to remove coarse errors. Then, the centre position of marker points 15 and 16 is calculated to obtain the left plantar coordinates, and the centre position of marker points 19 and 20 to obtain the right plantar coordinates. Next, the Euclidean distance between successive unilateral plantar coordinates is used to calculate the average stride length; the mean step width is calculated from the horizontal distance between the left and right plantar coordinates, and the mean step length from the distance between the landing of one foot and the landing of the opposite foot. Finally, the leg lengths of all subjects are measured, i.e., the Euclidean distances between segments 13-15 and segments 17-19 are calculated, and their mean values are used to normalise step width, step length, and stride length. After obtaining the stride length, gait speed can be calculated as stride speed = stride length/walking period. The direction of the vector through markers 15 (19) and 16 (20) is used as the direction of the foot deflection angle. In addition, the stride frequency is obtained by counting the number of steps taken per unit of time, and the support phase time is taken as the time during which the sole of the foot is in contact with the ground, giving the weight of the support phase time within the complete gait cycle.
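The distance-based steps above can be sketched with a few lines of numpy. This is a minimal illustration only: the footfall positions are invented values standing in for segmented Kinect plantar coordinates, and the marker pairing and time base are assumptions, not the paper's actual data.

```python
import numpy as np

# Hypothetical 3D footfall positions (metres): rows are successive
# ground contacts of one foot; axis 0 = mediolateral (x),
# axis 2 = walking direction (z). Values are illustrative only.
left_contacts = np.array([[0.00, 0.0, 0.0],
                          [0.05, 0.0, 1.2],
                          [0.04, 0.0, 2.4]])
right_contacts = np.array([[0.18, 0.0, 0.6],
                           [0.17, 0.0, 1.8]])

def foot_centre(heel, toe):
    """Plantar coordinate as the mid-point of two foot markers."""
    return (np.asarray(heel) + np.asarray(toe)) / 2.0

def stride_lengths(contacts):
    """Euclidean distance between successive contacts of the same foot."""
    return np.linalg.norm(np.diff(contacts, axis=0), axis=1)

def step_width(left, right):
    """Mean mediolateral (x) distance between left and right contacts."""
    return abs(left[:, 0].mean() - right[:, 0].mean())

strides = stride_lengths(left_contacts)            # ~1.2 m per stride
width = step_width(left_contacts, right_contacts)  # ~0.145 m
speed = strides.sum() / 2.0                        # if the two strides took 2 s
```

In the paper's pipeline these quantities would additionally be normalised by the measured leg length before comparison across subjects.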
Centre-of-Mass Movement Characteristics for Gait Analysis.
Balance is the ability to maintain the body's stability, i.e., to keep the body's centre of gravity within the plane of support. Clinically, balance is the ability of the body to automatically adjust and maintain posture, in any position, when moving or when subjected to external forces [23]; it plays an important role in maintaining normal human walking. The centre of gravity is the centre of the body's weight and is the point of action of the combined gravitational forces on the head, torso, and upper limbs. When the body is standing still, the centre of gravity is located between the sacrum and the umbilicus; when the body is in motion, the centre of gravity changes. The line of gravity is a vertical line drawn through the body's centre of gravity towards the ground; for the body to maintain balance, the centre of gravity must fall within the plane of support. The line of gravity is an auxiliary line for analysing the body's dynamics and can be used to check dynamic patterns and instability. The support surface is the area that supports the body's weight: when standing, it is the area under the feet and the area contained between them; when sitting, lying, or moving, it includes the contact surfaces between the body and the ground and the entire area between those contact surfaces. The larger the support surface, the more stable the body's centre of gravity and the more balanced the body [24].
The centre-of-mass position of each segment was then calculated from the collected marker positions at the segment endpoints. Finally, the position of the centre of gravity of the body is calculated from the centres of mass of all segments as
x_tcm = (Σ_k m_k x_k)/M, y_tcm = (Σ_k m_k y_k)/M, z_tcm = (Σ_k m_k z_k)/M,
where x_tcm, y_tcm, and z_tcm are the coordinates of the body's centre of gravity; x_k, y_k, and z_k are the centre-of-mass coordinates of the kth segment; m_k is the mass of the kth segment; and M is the total mass of the 15 body segments. The ranges of centre-of-mass movement (up-down and left-right) obtained in this paper are shown in Figure 3, from which the mean ranges of left-right movement (L/R distance of COM, L/R COM) and up-down movement (U/D distance of COM, U/D COM) of the subject can be calculated.
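The mass-weighted average described here is straightforward to sketch. The segment masses and centre-of-mass positions below are made-up stand-ins for a subset of the 15 segments; in practice m_k would come from anthropometric tables.

```python
import numpy as np

# Hypothetical per-segment data: masses m_k (kg) and centre-of-mass
# coordinates (x_k, y_k, z_k) in metres. Illustrative values only.
masses = np.array([30.0, 10.0, 10.0, 8.0, 8.0])     # m_k
seg_com = np.array([[ 0.0, 1.10, 0.0],              # trunk
                    [ 0.1, 0.60, 0.0],              # left thigh
                    [-0.1, 0.60, 0.0],              # right thigh
                    [ 0.1, 0.25, 0.0],              # left shank
                    [-0.1, 0.25, 0.0]])             # right shank

def body_com(masses, seg_com):
    """Whole-body centre of gravity: (sum_k m_k * p_k) / M."""
    M = masses.sum()
    return (masses[:, None] * seg_com).sum(axis=0) / M

com = body_com(masses, seg_com)   # symmetric segments cancel in x and z
```

Tracking `com` frame by frame then yields the left-right and up-down ranges of centre-of-mass movement used as gait features.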
Joint Mobility in Gait Analysis.
The maintenance of a normal gait involves the trunk, pelvis, hip, knee, and ankle joints, the lower-limb muscles, and the upper limbs, and is a complex, coordinated movement. Walking is also a three-dimensional spatial activity, so gait should be analysed from three perspectives. In the sagittal view, the gait is observed from the side and eight frames are taken for analysis according to the EFG division of gait, analysing the angles of the hip, knee, and ankle joints at different stages. The upper part of the trunk should be upright, with a slight forward lean as speed increases, and the upper limb should swing opposite to the lower-limb movement, which reduces trunk swing and maintains balance. The shoulder joint swings freely through 32°, flexing 8° and extending 24°. In the coronal plane, the gait is observed from the front or from behind, and the upward and downward movement of the centre of gravity is analysed as shown in Figure 4. The distances involved are denoted L1, L2, and L3; L1 is calculated as in equation (3), and the remaining distances are calculated similarly. Finally, the knee flexion angle α is calculated using the triangle method in motion space, as shown in equation (4).
The hip flexion and extension angle is measured using the spatial vector method, as shown in Figure 5: the hip-to-knee vector is first formed from the corresponding marker positions, and the hip joint flexion angle β is then calculated from this vector as shown in the corresponding equation.
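A joint angle computed from three marker positions can be sketched as follows. This uses the standard angle-between-vectors form (equivalent to the law-of-cosines "triangle method"); the hip, knee, and ankle coordinates are invented for illustration and are not the paper's data.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (degrees) at p_joint between the segments to p_prox and
    p_dist, from the dot product of the two segment vectors."""
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical hip, knee, and ankle marker positions in metres.
hip, knee, ankle = [0.0, 0.90, 0.0], [0.0, 0.45, 0.05], [0.0, 0.0, 0.0]
included = joint_angle(hip, knee, ankle)   # included angle at the knee
flexion = 180.0 - included                 # flexion relative to a straight leg
```

The same function applied to trunk, hip, and knee markers gives a hip flexion/extension angle in the spirit of the spatial vector method.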
Joint Contact Forces for Gait Analysis.
This paper proposes an RGB-D data-driven musculoskeletal model to predict knee contact forces during walking; Microsoft's RGB-D camera Kinect is a promising alternative tool owing to its low cost, ease of operation, and low spatial constraints. Validation of its accuracy revealed that the deviation between the results of the Qualisys motion-capture data-driven musculoskeletal model and those of the RGB-D data-driven model ranged from 57.8 to 90.7 N, confirming the ability of the RGB-D data-driven musculoskeletal model to accurately predict knee contact forces during walking.
The Pearson correlation coefficients for the two measures ranged from 0.943 to 0.988, showing excellent waveform similarity between them.
The RGB-D data-driven musculoskeletal model is, therefore, able to accurately predict knee contact force and could serve as a practical alternative in clinical settings, as shown in Figure 6.
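The validation statistic used here is easy to reproduce. In the sketch below, the two waveforms are invented stand-ins for the marker-based reference and the RGB-D model output; only the Pearson formula itself is taken as given.

```python
import numpy as np

# Two hypothetical knee-contact-force waveforms (N) over one gait
# cycle: a marker-based reference and an RGB-D-driven estimate.
reference  = np.array([400., 900., 1400., 1100., 600., 300.])
rgbd_model = np.array([430., 950., 1330., 1050., 640., 330.])

def pearson_r(a, b):
    """Pearson correlation coefficient between two waveforms."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

r = pearson_r(reference, rgbd_model)               # waveform similarity
mean_dev = np.abs(reference - rgbd_model).mean()   # mean deviation in N
```

A high `r` indicates similar waveform shape, while `mean_dev` captures the absolute force offset; the paper reports both kinds of agreement.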
Case Studies
Fifty elderly patients with chronic diseases, collected in the community from February 2017 to December 2018, were selected and given healthy exercise for 1 year. All patients met the diagnostic criteria. Among them, 26 were male and 24 female, aged 60 to 87 years (mean 74.3 ± 0.5 years); there were 30 cases of hypertension and 20 of diabetes mellitus [25].
After the health exercise, the elderly patients with chronic diseases had significantly higher rates of attainment of blood pressure, blood lipid, and blood glucose standards and knowledge of chronic diseases and nonpharmacological treatments, with statistically significant differences compared with those before the exercise (P < 0.05), as shown in Table 1.
The change in patients' chronic disease behaviour was significantly greater after the health campaign than before, with a statistically significant difference (P < 0.05), as shown in Table 2.
System Effects
Healthy exercise is a comprehensive intervention to control the health risk factors of individuals and groups. Healthy exercise was carried out for elderly patients with chronic diseases in the community, and special health exercise groups were established to comprehensively assess patients' conditions, develop scientific health programmes, and provide targeted, personalised guidance. In this study, the rates at which elderly patients with chronic diseases met blood pressure, blood lipid, and blood glucose standards, and their knowledge of chronic diseases and nonpharmacological treatments, increased significantly after the health campaign, with a statistically significant difference compared to before the campaign (P < 0.05). After the campaign, the change in patients' chronic disease behaviour was also significantly greater than before (P < 0.05), as shown in Figure 7. Health campaigns can thus achieve good results in the management of chronic diseases, improving all clinical indicators and reinforcing the behavioural change profile of patients. For chronic respiratory diseases (e.g., chronic bronchitis, bronchial asthma, and chronic obstructive pulmonary disease), this integrated sports-medicine model has been found to reduce patients' healthcare costs by 60% compared with the same period [26]. Weight loss through exercise and other activity that increases energy expenditure can reduce initial body weight by 9-10%, and exercise can prevent weight regain. Exercise did not exacerbate the clinical symptoms or the inflammatory response of children with mild-to-moderate asthma. Extending the duration of exercise combined with increasing its frequency was the most effective way to reduce body weight, as shown in Figure 8.
Long-term adherence to exercise can be an effective means of preventing osteoporosis in older diabetic patients [27]. Exercise-cognitive interventions can effectively improve cognitive function, enhance daily living skills, and alleviate adverse mood in patients with cognitive impairment [28]. Exercise has a preventive and therapeutic effect on psychological disorders and a positive effect on developing good psychological quality, promoting mental health, improving mood and state of mind, improving sleep quality, and promoting dopamine secretion, as shown in Figure 9.
We recommend that all patients with chronic diseases be assessed by clinical professionals before exercising, that exercise plans suited to individual needs be developed taking the patient's physical condition into account, and that scientific, reasonable, individualised exercise prescriptions be established to bring practical health benefits to the nation, so that the general public can actively participate in the "Healthy China" movement, which is the trend of the times.
Conclusions
This paper describes how to extract feature parameters for gait analysis based on a digital human motion model. Algorithms for extracting spatiotemporal and centre-of-mass movement parameters, joint mobility, and joint contact forces are described in depth, and the reliability of the knee contact force extraction algorithm is analysed in particular. Rationalised exercise prescription can reduce the incidence of malignant tumours and improve the survival and physical performance of patients with malignant tumours; reduce the prevalence of cardiovascular disease and the risk of death; lower blood pressure, improve blood glucose, and reduce the use of hypoglycaemic drugs; promote pulmonary rehabilitation; control body weight; and prevent and treat psychological disorders.
Data Availability
The datasets used during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
Robust Linear MIMO in the Downlink: A Worst-Case Optimization with Ellipsoidal Uncertainty Regions
This paper addresses the joint robust power control and beamforming design of a linear multiuser multiple-input multiple-output (MIMO) antenna system in the downlink, where users are subject to individual signal-to-interference-plus-noise ratio (SINR) requirements and the uncertainty of the channel state information at the transmitter (CSIT) is characterized by an ellipsoidal region. The objective is to minimize the overall transmit power while guaranteeing the users' SINR constraints for every channel instantiation by designing joint transmit-receive beamforming vectors robust to the channel uncertainty. This paper first investigates a multiuser MISO system (i.e., MIMO with single-antenna receivers) and, by imposing the constraints on an SINR lower bound, obtains a robust solution in a way similar to that with perfect CSI. We then present a reformulation of the robust optimization problem using the S-Procedure, which enables us to obtain the globally optimal robust power control with fixed transmit beamforming. Further, we propose to find the optimal robust MISO beamforming via convex optimization and rank relaxation. A convergent iterative algorithm is presented to extend the robust solution to multiuser MIMO systems with both perfect and imperfect channel state information at the receiver (CSIR) to guarantee the worst-case SINR. Simulation results illustrate that the proposed joint robust power and beamforming optimization significantly outperforms the optimal robust power allocation with zero-forcing (ZF) beamformers and, more importantly, enlarges the feasibility region of a multiuser MIMO system.
INTRODUCTION
The rapid growth of wireless communications services has brought severe challenges to the design of reliable and efficient communications systems. In future-generation wireless systems, ubiquitous delivery of high-speed, high-quality services over the air is anticipated, whereas the physical susceptibility of a wireless channel, such as fading, continues to be a critical concern [1]. In response to this, multiantenna technologies, widely known as multiple-input multiple-output (MIMO) antenna systems, have emerged as an attractive means to provide diversity in the spatial domain without the need for bandwidth expansion or increased transmit power. The amount of diversity benefit MIMO offers is directly linked to its enormous achievable capacity. It has been confirmed that not only can MIMO provide a substantial capacity gain to a single-user system (e.g., [2][3][4][5][6][7][8]), but such an advantage is even more apparent in multiuser systems [9][10][11][12][13][14][15][16][17]. With perfect channel state information (CSI), it is known that the channel capacity can be achieved using dirty-paper coding (DPC) in the MIMO downlink [9,10]. However, this nonlinear optimal strategy is not suitable for practical implementation, and beamforming alternatives have attracted much interest for their low complexity in realizing the capacity enhancement [13][14][15][16][17][18][19].
While it is well known that CSI at the transmitter (CSIT) is not so important for achieving the capacity of an ergodic single-user MIMO channel at high signal-to-noise ratio (SNR) [2,3,5], the same is, however, not true for multiuser channels [11,12]. In a multiuser downlink system, for instance, the availability of CSIT is essential for organizing the users in a controlled way so that interference levels are kept minimal while the overall capacity is enhanced by allowing users to transmit at the same time. Under the assumption of perfect CSIT and CSI at the receiver (CSIR), much has been understood so far. Unfortunately, this is never the case in practice, and channel error or uncertainty appears for many reasons. First, CSIT may be acquired through a quantized feedback channel from the receiver, so there will be quantization errors in the CSIT.
Channel estimation fidelity is also limited by the SNR at the estimation samples. In addition, a channel varies in time due to Doppler spread, which causes increasing errors in the estimated CSIT or CSIR as time goes on. There is no doubt that with channel uncertainty the achievable system capacity will go down (e.g., [20][21][22][23]), but more worryingly, for a system where users are required to achieve a certain quality-of-service (QoS) such as signal-to-interference-plus-noise ratio (SINR), the users' requirements are likely to be violated by a design based on the imperfect, and hence incorrect, CSIT/CSIR. Motivated by this, some recent studies have aimed at beamforming designs robust to CSI uncertainty [14,[20][21][22][23][24][25][26][27][28][29][30][31][32][33].
In general, there are two ways to obtain a robust solution.One popular way is to examine the worst-case scenario and design the system under the worst-case channel condition [14,24].Ideally, if the problem is indeed feasible and such design is obtained, it will ensure the users' requirements to be met for all possible channel error conditions.Alternatively, robustness could be obtained by a stochastic approach which takes a statistical viewpoint of the design problem and provides the needed robustness in the probabilistic sense [25].Both the worst-case and the stochastic approaches have pros and cons against each other.Nevertheless, to get absolute robustness (i.e., performance guaranteed with probability one), worst-case designs are necessary and for this reason, this paper will investigate the worst-case approach to the robust beamforming design of a multiuser MIMO antenna system in the downlink, in the presence of both CSIT and CSIR uncertainties.Some very recent robust techniques are reviewed as follows.The robust transceiver design for a single-user multicarrier MIMO system with various channel uncertainties was presented in [26].In [27], a robust maximin approach was devised for a single-user MIMO system based on convex optimization.In [28], the robust transmit strategy to maximize the compound capacity, defined as the capacity of the worst-case realization within the uncertainty set, in single and multiuser rank-one Ricean MIMO channels was analyzed (see also [29]).It was also shown that beamforming is optimal for both single-user and multiuser settings.Robust adaptive beamforming using second-order cone program (SOCP) was proposed in [30] to deal with an arbitrary unknown signal steering vector mismatch based on the worst-case performance.For a multiuser MISO system (i.e., multiuser MIMO with single-antenna receivers) with individual QoS constraints, the robust beamforming vectors under the worst-case criteria were determined in [14,31] given an imperfect 
channel covariance matrix. The works in [32, 33] considered errors in the CSI matrices and studied the optimal power allocation with fixed beamforming vectors, again in a downlink multiuser MISO system. Most recently, some conservative design approaches that yield convex restrictions of the original robust design problem with imperfect CSIT and perfect CSIR were proposed in [34, 35]. This contemporary list of references indicates that, despite the need for a universal robust solution, how to ensure the worst-case QoS constraints of a multiuser MIMO system in the presence of CSI uncertainty is largely unknown. This paper aims to devise a robust multiuser MIMO power and beamforming solution which jointly optimizes the power allocation and the transmit and receive beamforming vectors of the users, to minimize the overall transmit power in the downlink while guaranteeing the users' individual SINR constraints for every possible channel error condition (i.e., a worst-case approach), in the presence of imperfect CSIT and perfect/imperfect CSIR, with the uncertainty modeled by an ellipsoid. The motivation behind an ellipsoidal model is that it bounds the CSI errors, making such a worst-case design possible. In practice, CSI is measured by minimizing the mean-square error (MSE), and the CSI errors tend to be Gaussian; such ellipsoidal bounding is thus appropriate and achievable with a small, controllable outage probability. Previous works based on spherical or ellipsoidal CSI uncertainty regions can be found in [23, 27, 28].
The technical difficulty of the design lies in the fact that the users' worst-case SINRs are hardly derivable without knowing the beamforming solution; yet, a loose bound on the SINR for robustness may incur a huge transmit-power penalty and, worse, a higher likelihood of the system becoming infeasible. In particular, this paper makes the following contributions.
(i) The optimal robust power allocation with fixed beamforming vectors (power-only optimal solution) is found via convex optimization.
(ii) A reformulation of the robust design using the S-procedure [36] is presented for a multiuser MISO antenna system, which makes it possible to obtain the globally optimal robust solution via convex optimization and rank relaxation [37] with high probability (but not with probability one). More importantly, the proposed scheme results in a larger feasibility region than power-only optimization, where feasibility is declared if and only if there exist a power vector and transmit and receive beamforming vectors such that the worst-case SINR requirements are satisfied. This demonstrates that a joint optimization of the power allocation and the beamforming vectors is vital. (iii) A convergent iterative algorithm is proposed to extend the robust multiuser MISO solution to a multiuser MIMO antenna system, both with perfect and imperfect CSIR. Although not optimal, this algorithm guarantees the worst-case SINR at the mobile users. Simulation results will show that a significant reduction in transmit power is possible by using the proposed algorithm as compared to power-only optimization methods.
The remainder of this paper is structured as follows. Section 2 introduces the system model for multiuser MIMO with channel uncertainty and then formulates the robust optimization problem. In Section 3, we look at the robust design of a multiuser MISO antenna system, first using an SINR bounding approach and then the S-procedure, and discuss how the optimal robust solution can be obtained using convex optimization and rank relaxation. Section 4 extends our results to a multiuser MIMO system and presents an iterative algorithm to jointly optimize the power allocation and the transmit and receive beamforming vectors of the users. Simulation results are presented in Section 5 and, finally, we conclude the paper in Section 6.
Throughout this paper, a complex scalar is represented by a lowercase letter and |·| denotes its modulus. E[·] denotes the mean of a random variable. Vectors and matrices are represented by bold lowercase and uppercase letters, respectively, and ‖·‖ is the Frobenius norm. The superscript † is used to denote the Hermitian transpose of a vector or matrix. A ⊗ B denotes the Kronecker product of matrices A and B. X ⪰ 0 means that matrix X is positive semidefinite. eig(X) returns the vector containing the eigenvalues of a square matrix X, while trace(A) denotes the trace of A. vec(A) is the column vector obtained by stacking all the columns of A. Finally, x ∼ CN(m, V) denotes a vector of complex Gaussian entries with mean vector m and covariance matrix V.
Multiuser MIMO in the downlink
Consider an M-user MIMO antenna system where n_T antennas are located at the base station and n_R^(m) antennas are located at the mth mobile station. Communication takes place in the downlink, that is, from the base station to the mobile receivers. As in [15-19], the system model is written as

ŝ_m = r_m^† ( H_m Σ_{n=1}^{M} t_n s_n + η_m ),

where (i) s_m is the digital symbol intended for user m (complex scalar) with E[|s_m|^2] = 1; (ii) ŝ_m is the estimated symbol at mobile user m (complex scalar); (iii) t_m is the transmit beamforming vector for user m (n_T × 1 complex vector); (iv) r_m is the receive beamforming vector for user m (n_R^(m) × 1 complex vector) with ‖r_m‖ = 1; (v) H_m is the MIMO channel from the transmitter to user m (n_R^(m) × n_T complex matrix); (vi) η_m is the noise vector ∼ CN(0, N_0 I) at user m (n_R^(m) × 1 complex vector).
The time index in the above model is omitted for convenience.
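As a concrete illustration, the downlink model above can be sketched numerically. All sizes, seeds, and numerical values below are hypothetical, and the received-symbol expression is the standard superposition model implied by the notation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_T, n_R = 3, 4, 2   # hypothetical sizes: users, TX antennas, RX antennas per user
N0 = 0.01               # noise power

# Random instances of the quantities defined above (illustration only).
H = [rng.standard_normal((n_R, n_T)) + 1j * rng.standard_normal((n_R, n_T))
     for _ in range(M)]
t = [rng.standard_normal((n_T, 1)) + 1j * rng.standard_normal((n_T, 1))
     for _ in range(M)]
r = []
for m in range(M):
    v = rng.standard_normal((n_R, 1)) + 1j * rng.standard_normal((n_R, 1))
    r.append(v / np.linalg.norm(v))            # ||r_m|| = 1 as required
s = rng.choice([-1.0, 1.0], size=M)            # unit-energy symbols, E|s_m|^2 = 1

x = sum(t[n] * s[n] for n in range(M))         # superposed transmit signal
s_hat = np.empty(M, dtype=complex)
for m in range(M):
    eta = np.sqrt(N0 / 2) * (rng.standard_normal((n_R, 1))
                             + 1j * rng.standard_normal((n_R, 1)))
    y = H[m] @ x + eta                         # signal received by user m
    s_hat[m] = (r[m].conj().T @ y).item()      # receive-beamformed estimate
```

Each user observes the superposition of all beamformed streams plus noise, which is the source of the inter-user interference appearing in the SINR below.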
The SINR at the mth user can be expressed as

Γ_m = |r_m^† H_m t_m|^2 / ( Σ_{n≠m} |r_m^† H_m t_n|^2 + N_0 ),

and the amount of power transmitted to this user is given by ‖t_m‖^2. The total transmit power of the base station is therefore P = Σ_{m=1}^{M} ‖t_m‖^2. With perfect CSIT and CSIR, one would like to minimize the transmission cost of maintaining the users' QoS. Mathematically, this may be achieved by minimizing the overall transmit power subject to the users' individual SINR constraints {γ_m}, that is, minimizing Σ_m ‖t_m‖^2 subject to Γ_m ≥ γ_m for all m. This problem has been extensively studied (e.g., [13-17]), although the globally optimal solution for a MIMO antenna system is still unknown. With MIMO, spatial multiplexing (i.e., transmitting parallel substreams per user in the spatial domain) can be used to increase both the per-user and system capacity, but this is not considered here for simplicity. This restriction is also motivated by the fact that in many situations, single-stream transmission in multiuser MIMO is nearly optimal [28, 38-41].
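The SINR and total-power expressions translate directly into code. The helper names and the toy single-user check below are our own illustration, not part of the paper:

```python
import numpy as np

def sinr(H, t, r, m, N0):
    """Gamma_m = |r_m^H H_m t_m|^2 / (sum_{n != m} |r_m^H H_m t_n|^2 + N0),
    valid when ||r_m|| = 1 so the noise term is just N0."""
    sig = abs((r[m].conj().T @ H[m] @ t[m]).item()) ** 2
    interf = sum(abs((r[m].conj().T @ H[m] @ t[n]).item()) ** 2
                 for n in range(len(t)) if n != m)
    return sig / (interf + N0)

def total_power(t):
    """P = sum_m ||t_m||^2, the objective being minimized."""
    return sum(np.linalg.norm(tm) ** 2 for tm in t)

# Toy check: one user, identity channel, matched unit beamformers.
H = [np.eye(2, dtype=complex)]
t = [np.array([[1.0], [0.0]], dtype=complex)]
r = [np.array([[1.0], [0.0]], dtype=complex)]
g = sinr(H, t, r, 0, N0=0.5)   # no interference: 1 / 0.5 = 2.0
P = total_power(t)             # ||t_0||^2 = 1.0
```

With a single user there is no interference term, so the SINR reduces to signal power over noise power.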
The definition of CSI and the ellipsoidal uncertainty region
In this paper, CSIT and CSIR are estimated in two training periods. During the first one, CSIT, defined as the information about the channel matrices {H_m}, may be estimated directly at the base station in the uplink. In particular, we model the imperfection of CSIT as an additive noisy matrix, H_m = Ĥ_T^(m) + ΔH_m, where H_m is the actual channel matrix, Ĥ_T^(m) denotes the CSIT estimate known to the base station, and ΔH_m represents the CSIT uncertainty, bounded by an ellipsoidal region whose shape matrix U_T^(m) (⪰ 0) determines its orientation and whose parameter ξ_T^(m) controls its size. (In practice, depending upon how the CSI is estimated (e.g., the length of the training sequence and the training power), the minimum MSE (MMSE) of the channel estimate will shed light on the required size of the region.) In this paper, we will assume that U_T^(m) is of full rank, so that the region has the geometric meaning of an ellipsoid. It is noted in [35] that such a model may also be useful to characterize the quantization error in CSIT. In the rest of the paper, knowledge of both {Ĥ_T^(m)} and {U_T^(m)} is assumed at the base station, based on which the robust transmit beamforming vectors {t_m} ∀m are designed.
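One way to make the ellipsoidal region concrete is a quadratic-form membership test. Since the exact parameterization is garbled in this extraction, the convention below ({dh : dh^H U dh ≤ ξ^2} with U full rank) is an assumption; it is one standard choice equivalent up to a change of variables:

```python
import numpy as np

def in_uncertainty_region(dh, U, xi):
    """Membership test for an ellipsoidal CSI-error region of the assumed
    form {dh : dh^H U dh <= xi^2}.  U full rank (positive definite) makes
    the set a genuine bounded ellipsoid."""
    return np.real(dh.conj().T @ U @ dh).item() <= xi ** 2

# With U = I the region degenerates to a sphere of radius xi.
U = np.eye(3, dtype=complex)
inside = in_uncertainty_region(np.array([[0.05], [0.0], [0.0]], complex), U, 0.1)
outside = in_uncertainty_region(np.array([[0.2], [0.0], [0.0]], complex), U, 0.1)
```

If U is rank deficient, the set is unbounded along the null directions, which is why the full-rank assumption matters for a worst-case design.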
At the mth mobile station, we define CSIR as the local information about the effective channels after multiuser transmit beamforming. During the second training period, it can be estimated once the transmit beamforming design is completed. We find this CSIR definition necessary because the receive beamforming vector should be designed in accordance with the transmitted channels to maintain the required SINR. The matrix (7) can be estimated locally from the reception of the beamformed training sequences transmitted from the base station. The CSIR uncertainty can be modeled in the same way as for CSIT (8), so that the CSIR consists of an estimate Ĥ_BF^(m) and the CSIR error ΔH_BF^(m), which is bounded by the region with parameters U_R^(m) (⪰ 0) and ξ_R^(m). It is assumed that mobile user m has knowledge of Ĥ_BF^(m) and U_R^(m), which is used for the design of the receive beamforming vector r_m.
The generality of this model embraces the following special cases: (a) no CSIT and perfect CSIR: ξ_T^(m) → ∞ and ξ_R^(m) → 0; (b) perfect CSIT and perfect CSIR: ξ_T^(m), ξ_R^(m) → 0; (c) imperfect CSIT and perfect CSIR: ξ_T^(m) > 0 and ξ_R^(m) → 0; (d) imperfect CSIT and imperfect CSIR: ξ_T^(m), ξ_R^(m) > 0. The foci of this paper are cases (c) and (d), where the CSI errors are considered; in particular, for the MISO systems discussed in Section 3, case (c) is studied. One final point on the uncertainty model worth mentioning is that, as a worst-case approach is adopted in this paper, the explicit statistical distribution of the CSI error within the region is not important and is therefore not exploited, as is usual in worst-case optimization (as opposed to stochastic optimization, which takes the distribution of the error into account). It is, however, known that for MMSE channel estimation, ΔH tends to be Gaussian distributed, which we will assume in the simulation results section. The above ellipsoidal model, which has already been used in [23, 27, 28, 35], can be viewed as a deterministic modeling, or simplification, of more sophisticated stochastic CSI uncertainty models.
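The later simulations size the region so that a Gaussian CSI error falls inside it with 99% probability. A Monte Carlo sketch of that calibration follows; the dimension and standard deviation are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 3, 0.05, 200_000   # hypothetical error dimension and std

# For MMSE estimation the CSI error is (approximately) Gaussian; draw
# dh ~ CN(0, sigma^2 I) and find the radius xi containing 99% of the
# realizations, i.e., a sphere with a 1% outage probability.
dh = (rng.standard_normal((trials, n)) +
      1j * rng.standard_normal((trials, n))) * (sigma / np.sqrt(2))
norms = np.linalg.norm(dh, axis=1)
xi = np.quantile(norms, 0.99)
coverage = np.mean(norms <= xi)
```

Equivalently, since ‖ΔH‖^2 is chi-square distributed for Gaussian errors, ξ could be read off a chi-square quantile; the sampling approach avoids that closed form.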
The robust optimization problem
This paper adopts a worst-case methodology, whose solution is robust to every possible CSI error condition for given {Ĥ_T^(m), U_T^(m)} and {Ĥ_BF^(m), U_R^(m)}. In particular, our aim is to minimize the overall transmit power for ensuring the users' SINR constraints by jointly optimizing the power allocation and the transmit-receive beamforming vectors of the users, with the aid of CSIT and CSIR; this is problem (10). Note that min Γ_m corresponds to the worst-case SINR of user m given the CSI error regions. By ensuring min Γ_m ≥ γ_m, QoS assurance can be guaranteed for every possible CSI error condition.
ROBUST MULTIUSER MISO
In this section, we consider a MISO system where each receiver has only one antenna, and address the problem (10) with imperfect CSIT but perfect CSIR.
The optimization
The technical difficulty of solving (10) is obvious, and even for the multiuser MISO setting, no optimal robust solution has been known so far. In this section, to gain more insight and a deeper understanding of (10), we look at a multiuser MISO antenna system where each mobile user has a single receive antenna (so that r_m becomes a scalar). To distinguish the channel dimension from the MIMO case, we will use a lowercase h to denote the respective channel vectors.
The subscript T will be omitted for notational convenience as long as imperfect CSIR is not considered.
A simple observation shows that for MISO, the constraints in (10) can be rewritten as in (11)-(13). Problem (11) is actually a robust second-order cone programming (SOCP) problem in {t_m}, and its constraints can be equivalently expressed in the worst-case form (14). According to [42], the SOCP constraints in (14) are not known to be tractable. A possible remedy is to derive a lower bound on the worst-case constraint for any Δh_m ∈ U_T^(m). For the special case U_T^(m) = I, this is possible, and we describe it in the next subsection.
Design by lower bounding the SINR
To get around the difficulty of solving (11) with unknown {Δh_m}, a simpler robust solution based on lower bounding the constraints is possible when U_T^(m) = I for all m. Using [26, Lemma 7.1], a lower bound for f_m(Δh_m), denoted by f_m, can be found. The worst-case SINR can then be guaranteed by imposing f_m ≥ 0. As a consequence, (11) can be suboptimally solved via problem (18). This problem is of the same form as that with perfect CSIT, and there are algorithms (e.g., [17]) available to achieve the optimum. As will be shown later in the simulation results, however, the main drawback of this method is that the bound on f_m(Δh_m) is too loose, which results in a severe power penalty and, even worse, diminishes the feasible region considerably. In the following subsection, we show that the optimal solution of (11) can in fact be found without relying on SINR bounds.
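Because the analytical worst case of a quadratic constraint function is hard to obtain, a crude numerical check is to sample the uncertainty ball. Sampling only upper-bounds the true minimum, but it is handy for gauging how loose an analytical lower bound is; the function name and test cases below are our own:

```python
import numpy as np

rng = np.random.default_rng(2)

def sampled_worst_case(Q, q, c, xi, samples=20_000):
    """Monte Carlo estimate of min over ||dh|| <= xi of the quadratic
    f(dh) = dh^H Q dh + 2 Re(q^H dh) + c.  The returned value upper-bounds
    the true minimum (only sampled points are checked)."""
    n = Q.shape[0]
    worst = float(np.real(c))            # dh = 0 always lies in the ball
    for _ in range(samples):
        d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        # Uniform in the complex n-ball: real dimension 2n, radius ~ u^(1/2n).
        d *= xi * rng.random() ** (1 / (2 * n)) / np.linalg.norm(d)
        val = float(np.real(d.conj() @ Q @ d) + 2 * np.real(q.conj() @ d) + c)
        worst = min(worst, val)
    return worst

# f(dh) = ||dh||^2 has minimum 0 (at dh = 0) on any ball:
w1 = sampled_worst_case(np.eye(2), np.zeros(2), 0.0, xi=1.0)
# f(dh) = 1 - ||dh||^2 has minimum 0 on the unit ball (on the boundary):
w2 = sampled_worst_case(-np.eye(2), np.zeros(2), 1.0, xi=1.0)
```

Comparing such a sampled estimate against an analytical lower bound makes the looseness of the bound visible before any power penalty is paid.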
S-procedure and convex optimization for power-only control
The inferior performance of the design described in Section 3.2 arises because the bound is very loose and rarely tight. In general, one can obtain robustness by power-only optimization with fixed transmit beamforming vectors in (11). In [32], the optimal power allocation is found under several types of channel uncertainty, including the ellipsoidal region considered in this paper. The main difference is that the work in [32] assumed that the transmitter and the receiver share the same uncertainty region with a common channel estimate, which is hardly justifiable in practice. Secondly, the model in [32] also prevents their solution from dealing with the case of imperfect CSIT and perfect CSIR, as we do here. Now, we assume a fixed set of transmit beamforming vectors and find the optimal power control. The main result is based on the S-procedure and given in Theorem 1 as follows.
Theorem 1. The optimal power control for the original beamforming problem (11) with fixed transmit beamforming vectors is given by the solution to the semidefinite program (SDP) (19), where w_m denotes a fixed unit-norm transmit beamforming vector and the transmit beamforming vectors are given by t_m = √(p_m) w_m.
Proof. Note from (11)-(13) that Q_m may, in general, be indefinite, so f_m(Δh_m) may not be convex. However, according to the S-lemma [36, 43], the constraint in (11), namely (20), is equivalent to (21). With this equivalent constraint, we no longer need to derive the analytical form of the worst-case SINR or of the worst-case f_m(Δh_m): as long as (21) is met, the constraint is guaranteed. An interesting and useful fact about (21) is that Δh_m is not involved, whereas the uncertainty structure is handled by the parameters U_T^(m) and ξ_T^(m).
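The S-lemma equivalence can be checked numerically: a multiplier s ≥ 0 making the associated block matrix positive semidefinite certifies nonnegativity of the quadratic over the whole ellipsoid. The grid search below is a toy verifier of that certificate, not the SDP used in the paper:

```python
import numpy as np

def s_lemma_certificate(Q, q, c, U, xi, s_grid):
    """By the S-lemma, f(dh) = dh^H Q dh + 2 Re(q^H dh) + c >= 0 for all dh
    with dh^H U dh <= xi^2 iff some s >= 0 makes the block matrix
    [[Q + s U, q], [q^H, c - s xi^2]] positive semidefinite.  Scan a grid
    of multipliers; return a certifying s, or None (inconclusive)."""
    for s in s_grid:
        top = np.hstack([Q + s * U, q.reshape(-1, 1)])
        bot = np.hstack([q.conj().reshape(1, -1), np.array([[c - s * xi ** 2]])])
        Mb = np.vstack([top, bot])
        if np.linalg.eigvalsh((Mb + Mb.conj().T) / 2).min() >= -1e-9:
            return s
    return None

I2, z2 = np.eye(2), np.zeros(2)
# f = ||dh||^2 >= 0 everywhere: s = 0 already certifies it.
s_a = s_lemma_certificate(I2, z2, 0.0, I2, 1.0, s_grid=[0.0, 1.0])
# f = 2 - ||dh||^2 >= 1 on the unit ball: any s in [1, 2] certifies it.
s_b = s_lemma_certificate(-I2, z2, 2.0, I2, 1.0, s_grid=[0.0, 1.5])
```

In the actual design, s is an optimization variable of the SDP rather than a grid point, but the certificate being checked is the same.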
Joint power control and transmit beamforming design
There are two main drawbacks of the power-only optimization in Section 3.3.1. Firstly, there is a power penalty caused by not optimizing the power allocation and the beamforming vectors jointly.
It will be shown in the simulation section that for MISO systems the gap is negligible, but for MIMO systems the gap can be very significant (as large as 8 dB), and the degradation grows with the number of users and the channel error bound ξ. Secondly, and worst of all, the feasibility region of the joint power and beamforming design problem (22) tends to encompass that of (19), so restricting the design to power-only optimization has a detrimental implication for the likelihood of outage.
Although it is difficult to find an equivalent convex problem if the power allocation and the transmit beamforming vectors of a MISO system are to be optimized jointly, in the following we show that it is possible to bound problem (11) by a convex counterpart after rank relaxation. (We observe from the numerical results that the rank relaxation appears to be exact with high probability, allowing the globally optimal robust solution to be found via convex optimization, although analytical evidence is unavailable.) The main result is summarized in Theorem 2 below.
Theorem 2. The original robust problem (11) can be relaxed into the SDP problem (22), with the quantities therein defined in (23). Problem (22) is convex and hence can be solved optimally.
Proof. The proof of the equivalent constraint in (22) is the same as that in Theorem 1. Using (21) and introducing the transmit covariance matrices T_m = t_m t_m^†, we arrive at (24). Apparently, (24) (and hence (11)) is the same as (22) except that the rank-1 constraints are missing in (22). Due to this rank relaxation, in general, (22) gives a lower bound for problem (24). As a result, the original problem (11) is lower bounded by (22).
The advantage of (22) is substantial because it is an SDP problem and hence can be solved optimally and efficiently. Moreover, we observe from the simulation results that in most cases (22) gives rank-1 solutions if all {U_T^(m)} are of full rank (i.e., the U_T^(m) are indeed ellipsoids), which means that the relaxation is exact and the optimal robust solution to (11) can thus be found from solving (22). If the SDP does not offer a rank-1 solution, then a countermeasure is needed (see Section 3.3.4).
Interpretation of (22) versus (11) with perfect CSIT
At first, (22) may look quite different from (11) with perfect CSIT (or when Δh_m = 0), and the original SINR constraints are not explicit in (22). However, the two problems can be well linked with each other through their duals. In Appendix A, we show that the dual of (22) can be written as the maximization (25) over {λ_m, V_m, v_m}.
On the other hand, the dual of (11) is given in [14] as (26). Comparing (25) with (26), we can see that they are similar. In particular, the matrix (27) in (25) can be interpreted as the equivalent channel covariance matrix h_m^† h_m in (26). Nevertheless, (25) tends to require a larger objective value (i.e., Σ_m λ_m γ_m N_0) to respond to the channel uncertainty parameters (i.e., U_T^(m) and ξ_T^(m)), and this can be seen from the fact that the constraint on λ_m in (25) is stricter than that in (26).
Feasibility, rank-1 solutions, and a countermeasure
Thus far, little is understood about the feasibility of linear multiuser MIMO antenna systems with imperfect, and even perfect, CSIT. Despite the contributions in Section 3.2, the exact feasibility issue of a multiuser MIMO antenna system with imperfect CSIT is still not known. What we can say, however, is that if (22) is infeasible, the original problem (11) cannot be feasible, since (22) is a relaxed version. The existence of the proposed robust solution relies on whether the problem is feasible for a particular channel realization and error condition. If the problem happens to be infeasible, then an outage will be declared; in practice, it may mean that the users' requirements will have to be degraded or the transmission will have to be postponed until the channels improve to a better state. In addition, even if (22) is feasible, it may return a solution with rank higher than one, and whether an all-rank-1 solution always exists for (22) is not known. In this paper, if (22) gives higher-rank solutions, the following countermeasure, which optimizes only the power allocation of the users for a given set of fixed beamforming vectors as in Section 3.3.1, will be in place.
In this case, w_m may be chosen as, for instance, the zero-forcing (ZF) beamforming vectors [16] or the principal eigenvector of the optimal T_m obtained from the SDP. The latter appears to be more useful because ZF vectors may not always exist. In the cases where an all-rank-1 solution to (22) is not available, the power-only optimization with the dominant eigenvector as the beamforming vector thus produces a contingent robust solution to (11). To illustrate how this works, a numerical example is given in Figure 1.
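The rank-1 extraction mentioned here (exact beamformer when T_m is rank one, principal-eigenvector direction otherwise) can be sketched as follows; the function name is ours:

```python
import numpy as np

def beamformer_from_covariance(T):
    """Recover t_m from an optimized transmit covariance T_m.  If T_m is
    (numerically) rank one, sqrt(lambda_max) times the principal eigenvector
    is exact up to an irrelevant phase; otherwise the same vector serves as
    the fixed direction w_m for the power-only fallback."""
    w, V = np.linalg.eigh((T + T.conj().T) / 2)   # eigenvalues in ascending order
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]

# Round trip on a genuinely rank-one covariance:
t0 = 2.0 * np.array([1.0, 1.0j]) / np.sqrt(2)
T0 = np.outer(t0, t0.conj())
t_rec = beamformer_from_covariance(T0)
```

The global phase of the recovered vector is arbitrary, which is harmless since the SINR and the transmit power depend only on t_m t_m^†.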
EXTENSION TO MULTIUSER MIMO
In this section, we extend our results to a multiuser MIMO antenna system in the downlink, where a joint optimization of the transmit and receive beamforming vectors is required. Although a lower bounding approach similar to Section 3.2 may be possible, the SINR bounds would be too loose to be useful. As such, we focus on how the SDP reformulation in Section 3.3 can be extended to cope with the MIMO optimization. It is, however, well known that a joint optimization of the transmit and receive beamforming vectors of a multiuser system is not convex; even with perfect CSIT/CSIR the optimal solution is not known, let alone with imperfect CSI. In the following, we first look at the case with imperfect CSIT and perfect CSIR, as for the multiuser MISO case in Section 3. The case with imperfect CSIR will be addressed in Section 4.3.
In the case of imperfect CSIT and perfect CSIR, the worst-case SINR is the minimum of Γ_m over ΔH_m ∈ U_T^(m) [15] and is very difficult to evaluate. In the following, a suboptimal approach that guarantees the worst-case SINR is presented. The base station assumes that the mobile user has the same knowledge of the CSI: the transmit beamforming vectors {t_m} (together with the power allocation) and virtual receive beamforming vectors {r_m} are optimized jointly at the base station based on the CSIT (i.e., {Ĥ_T^(m)} and {U_T^(m)}). After that, the actual receive beamforming vectors are optimized locally at the mobile receivers based on the perfect CSIR. Note that the virtual {r_m} are only auxiliary variables to facilitate the design of {t_m}.
To obtain a robust solution of {t_m} to (10), an iterative optimization algorithm is proposed, which optimizes one set of variables at a time while keeping the others fixed, and iterates from one optimization to another to converge to a jointly optimized state, with the aid of CSIT (see Section 4.1). Then the corresponding solution of r_m is learnt locally at the mth mobile receiver, based on the perfect CSIR. Because the mobile user actually has perfect CSIR, such a design results in a lower bound on the achievable worst-case SINR.
Similar to the MISO case, the constraints in (10) for MIMO systems can be simplified into the worst-case form (30).
Transmit beamforming
For a given set of virtual receive beamforming vectors, or equivalently the receive covariance matrices {R_m ≜ r_m r_m^†}, we consider how the transmit covariances {T_m} can be optimized by first rewriting (30) as (31), where Q_m is defined in (13). This constraint can further be re-expressed using the vec(·) operation and the Kronecker product as (32), and ΔH_m ∈ U_T^(m) can be rewritten as (33). Using the S-lemma with known {R_m}, (10) can then be reformulated using rank relaxation as a convex SDP whose solution gives the optimal {T_m} for a given {R_m}. The dimension of the matrix variable grows with n_T Σ_{m=1}^M n_R^(m); according to the analysis in [44, Chapter 6], the associated complexity of solving the SDP is O((n_T Σ_{m=1}^M n_R^(m))^{6.5}) per accuracy digit. It should be noted that, as discussed earlier in Section 3.3.3, a rank-1 solution {t_m} may not be obtained, but this can be dealt with in a similar way.
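The vec/Kronecker rewriting used above rests on the identity vec(A X B) = (B^T ⊗ A) vec(X), with vec(·) stacking columns. A quick numerical check (the random sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# vec(A X B) = (B^T kron A) vec(X); column-major (Fortran-order) stacking
# matches the usual vec(.) convention.
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

vec = lambda M: M.reshape(-1, 1, order="F")
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
```

This is what turns a quadratic constraint in the matrix ΔH_m into a quadratic constraint in the vector vec(ΔH_m), the form the S-lemma needs.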
Virtual receive beamforming
The optimization of the virtual receive beamforming vectors {r_m} is also based on the CSIT. For a given user m, we propose to optimize the virtual receiver r_m so as to maximize the worst-case F_m(ΔH_m). In particular, r_m is chosen as the solution of problem (35), in which F_m(·) of (32) is evaluated. It will be shown later in Section 4.1.3 that this optimization criterion enables the construction of a convergent iterative algorithm for the joint optimization of the transmit and receive beamforming vectors.
Once again, we find the S-lemma and rank relaxation very useful for transforming the problem into an SDP that is easy to solve; hence, (35) becomes the SDP (36). As the optimization of {T_m} requires only the knowledge of {R_m}, rather than {r_m}, whether or not (36) returns a rank-1 solution is unimportant, since a rank-1 solution always exists [45] and only needs to be extracted after the iterative algorithm in the next subsection converges.
The iterative algorithm
The above results can be combined iteratively to reach a jointly optimized state from which {t_m} can be found. The proposed algorithm is outlined as follows. Note that we will use the notation a^[n] to denote the optimizing variable a at the nth iterate.
The convergence of the above algorithm will be analyzed in the next subsection. At convergence, we will have a steady-state joint solution {T_m, R_m}. If the {T_m} are all of rank one, the robust transmit beamforming vectors {t_m} can be readily obtained from the Cholesky decomposition of {T_m}. Otherwise, the technique described in Section 3.3.3 is needed to get a suboptimal solution for {t_m} for the given {T_m}. However, due to the rank relaxation in the optimization of {t_m}, it is possible that (10) is feasible but the above algorithm does not return a feasible rank-1 solution. How the actual receive beamforming vectors {r_m} are obtained is addressed below.
Convergence analysis
Given a feasible initial point to start the iteration, we can prove that the proposed algorithm is convergent. Nevertheless, it is worth mentioning that, as the problem is nonconvex, the proposed algorithm may converge only to a local optimum, and the effect of the choice of the initial receive covariance matrices is still unknown. In the following, we start the proof by denoting the total transmit power at the nth iteration as P^[n] and considering the nth and (n+1)th iterates.
Proof. At step (2) of the nth iteration, for a given set of {R_m^[n]}, the optimal {T_m^[n]} are obtained; this gives a joint feasible solution with sum-power P^[n]. The subsequent update of the virtual receivers to {R_m^[n+1]} maximizes the worst-case F_m, which means that the worst-case SINR requirements are (over-)satisfied with the same sum-power P^[n] by the feasible solution ({T_m^[n]}, {R_m^[n+1]}). At step (2) of the (n+1)th iteration, we have another feasible solution ({T_m^[n+1]}, {R_m^[n+1]}) with sum-power P^[n+1]. By definition, as {T_m^[n+1]} is obtained by minimizing the total transmit power with known {R_m^[n+1]}, we always have P^[n] ≥ P^[n+1]. As a result, the total power is monotonically nonincreasing (and obviously lower bounded by 0), and hence the proposed algorithm converges, which completes the proof.
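The structure of this convergence argument (re-optimize one block with the other fixed, so the objective never increases and, being bounded below, converges) can be captured by a generic skeleton. The toy objective below stands in for the two SDP sub-solvers and is purely illustrative:

```python
def alternating_minimize(step_x, step_y, x0, y0, objective, iters=50, tol=1e-9):
    """Skeleton of the alternating optimization analyzed above: each step
    exactly re-optimizes one block with the other fixed, so the objective
    (total transmit power in the paper) is monotonically nonincreasing."""
    x, y = x0, y0
    history = [objective(x, y)]
    for _ in range(iters):
        x = step_x(y)                       # e.g., transmit covariances {T_m}
        y = step_y(x)                       # e.g., virtual receivers {R_m}
        history.append(objective(x, y))
        if history[-2] - history[-1] < tol:
            break
    return x, y, history

# Toy instance with exact coordinate updates for (x - y)^2 + 0.1 x^2:
obj = lambda x, y: (x - y) ** 2 + 0.1 * x ** 2
_, _, hist = alternating_minimize(lambda y: y / 1.1, lambda x: x, 5.0, 5.0, obj)
```

Monotone convergence of the objective does not imply convergence to the global optimum; as noted above, the nonconvex joint problem may stall at a local optimum depending on the initialization.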
Optimization at the mth mobile receiver, r_m
With the effective channels {H_m t_n} learnt perfectly at the mobile receiver, the corresponding optimal receive beamforming vectors {r_m} are well known to follow the MMSE criterion [46], with σ_m a constant chosen to ensure ‖r_m‖ = 1. As mentioned before, this receiver design further increases the received SINR, so the actual resulting SINRs are higher than the requirements {γ_m}, which are made achievable even with imperfect CSIT.
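The MMSE receiver referred to here has the standard form r_m ∝ (Σ_n H_m t_n t_n^† H_m^† + N_0 I)^{-1} H_m t_m; since the explicit expression is lost in this extraction, the sketch below assumes that standard form from the cited literature:

```python
import numpy as np

def mmse_receiver(H_m, t_all, m, N0):
    """Unit-norm MMSE receive beamformer (assumed standard form):
    r_m proportional to (sum_n H_m t_n t_n^H H_m^H + N0 I)^{-1} H_m t_m."""
    n_R = H_m.shape[0]
    C = N0 * np.eye(n_R, dtype=complex)      # noise plus all-stream covariance
    for t_n in t_all:
        g = H_m @ t_n
        C = C + g @ g.conj().T
    r = np.linalg.solve(C, H_m @ t_all[m])
    return r / np.linalg.norm(r)             # sigma_m enforces ||r_m|| = 1

# Sanity check: orthogonal effective channels -> user 0's receiver aligns with e_1.
H = np.eye(2, dtype=complex)
t = [np.array([[1.0], [0.0]], complex), np.array([[0.0], [1.0]], complex)]
r0 = mmse_receiver(H, t, 0, N0=0.1)
```

The unit-norm scaling plays the role of σ_m; it does not change the SINR, which is invariant to the scale of r_m.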
Extension to the case with imperfect CSIR
Here, a more general case is considered where neither CSIT nor CSIR is perfect. In this case, we use Ĥ_BF^(m) and Ĥ_T^(m) to distinguish the estimated CSIR from the CSIT; the details of the CSIT and CSIR uncertainty models are given in Section 2.2. At mobile user m, the actual receive beamforming vector r_m should be optimized based on the knowledge of the estimated CSIR Ĥ_BF^(m) and the uncertainty region U_R^(m). To be specific, r_m is chosen as the solution of the worst-case problem (43), where F_m(ΔH_BF^(m), r_m) is defined in (44), with a_m being an all-zero vector except for its mth element being unity. Similar to the optimization described in Section 4.1.2, (43) becomes the SDP (46). When (46) returns a higher-rank solution for R_m, the optimal rank-1 solution can be extracted as follows. (i) With the higher-rank R_m, (46) is indeed a nonconvex quadratic problem in vec(ΔH_BF^(m)), and the technique in [45] can be used to determine the optimal vec(ΔH_BF^(m)), that is, the worst-case CSI error matrix, denoted by ΔW_BF^(m). (ii) Then, the optimal receive beamforming vector has an MMSE form, with σ_m chosen to ensure ‖r_m‖ = 1. Note that this MMSE receiver can be used to decode the signal because it not only maximizes the worst-case SINR but also minimizes the worst-case MSE, which facilitates signal demodulation and decoding.
Setup and assumptions
Simulations are conducted to assess the performance of the proposed algorithm in Rayleigh flat-fading channels, with entries following CN(0, 1). Unless explicitly stated, we consider that the users have the same target SINR and channel error bounds, that is, γ_m = γ and ξ_T^(m) = ξ_R^(m) = ξ for all m. Further, for the cases of multiuser MIMO, the users are assumed to have an equal number of antennas, that is, n_R^(m) = n_R for all m. The notation M-user (n_T, n_R) will be used to denote an M-user MIMO system with n_T transmit antennas and n_R receive antennas per mobile user. In the simulations, we assume that the CSI error is Gaussian distributed over the bounded uncertainty region, with the probability that the Gaussian CSI error falls within the region set to 99% for any given bounds ξ_T^(m), ξ_R^(m). The average total transmit SNR, defined as E[P]/N_0, will be regarded as the performance measure. The service probability, defined as the probability that a given method yields a feasible solution, will be used to measure the robustness of a method against CSI errors. Several benchmarks are compared with the algorithm proposed in Section 4. They are as follows.
(i) The "nonrobust" design, which optimizes the users' beamforming vectors based on the estimated CSIT and CSIR ({Ĥ_T^(m)} and {Ĥ_BF^(m)}). For multiuser MISO, the optimal solution in [14] is applied, while the iterative method in [17] is used for MIMO. The channel uncertainty regions U_T^(m) and U_R^(m) are ignored, so this method is expected to have a high probability of outage.
(ii) The optimal power allocation (19) with fixed beamforming vectors, which chooses {t_m} to be the ZF beamforming vectors in [16] and then optimizes the power allocation based on (19).
(iii) The robust solution based on the lower bound, which is obtained by solving (18). Since (18) is of the same form as the problem with perfect CSIT, the method in [14] can be used to find its optimal solution. Note also that we have not derived an SINR lower bound for MIMO systems; therefore, results for this method are provided only for multiuser MISO systems.
To enable a fair comparison of SNR, our first discussion below is based on the channel and error realizations for which all of the methods (both the benchmarks and the proposed one) are feasible. In particular, this implies that for the MISO cases, (22) always returns an all-rank-1 solution and thus the proposed method is also the globally optimal robust solution. The feasibility of the various algorithms and their probability of outage are then evaluated and compared at the end of this section.
Results
Results in Figures 2-4 are provided for a 3-user (3,1) system with the CSI uncertainty parameters U_T^(m) = I, ξ_T^(m) = ξ, for all m. In Figures 2 and 3, the users' target SINRs are set to 5 (dB), while 10 (dB) is considered in Figure 4. Results in the first two figures examine the performance of the various schemes with a small channel uncertainty bound, up to ξ = 0.1. Results in Figure 2 show the output SINRs of user 1 for a particular channel realization, averaged over the channel uncertainty. As we can see, the proposed algorithm (which is optimal for MISO) and the optimal power-only allocation achieve slightly greater SINR than the target, which is expected because the optimization is done so that the target is still achieved under the worst error conditions. In addition, the results also illustrate that the lower bounding approach achieves much higher SINR than required, and this loose bound leads to a huge power penalty for ensuring the required QoS. In particular, results in Figure 3 show that the SNR penalty of the SINR bounding approach (18) grows with the channel error bound, and there is an SNR gap as large as 10 (dB) at ξ = 0.1, as compared to the proposed algorithm and the optimal power-only allocation. Moreover, the results indicate that the optimal joint power and beamforming solution performs similarly to the optimal power-only allocation with fixed beamforming vectors. However, we will soon observe that this is only the case for systems with a small number of users, and when the channel is feasible for both solutions. From the transmit SNR point of view, the nonrobust design is always the best, but a close observation of the data in Figure 2 reveals that its output SINR is always smaller than the target, meaning that the solution is actually not feasible. This problem becomes much more apparent when γ = 10 (dB) is considered in Figure 4, and the gap between the output and target SINRs grows as ξ increases.
In Figure 5, the feasibility regions of the optimal solution, the optimal power-only allocation (19) with ZF beamforming vectors, and the method using (18) are compared. (Note that as these three problems are all convex, they can be solved optimally and their feasibility can also be easily checked using standard numerical algorithms for convex optimization, such as the interior-point method.) In this figure, γ = 5 (dB) and ξ = 0.05 are assumed. The vertices indicate the minimum transmit power (or SNR) needed for each scheme.
As we can see, the region for the lower bounding approach is the smallest, while the region for the optimal solution is the largest and embraces those of the other two schemes. This demonstrates that although previous results have shown that the optimal solution and the optimal power-only allocation perform similarly, not optimizing the beamforming vectors and the power allocation jointly has a detrimental implication for feasibility. This point will be further elucidated later in Table 1.
Results have so far shown that for multiuser MISO systems, the proposed algorithm performs similarly to the power-only optimization with ZF beamforming vectors. This conclusion is, however, not true for a MIMO system or when the channel uncertainty is more severe, for example, ξ as large as 0.3. These results are shown in Figure 6 for 3-user (3,2) and 4-user (4,3) systems with γ = 10 (dB). As we can see, larger gaps in SNR are observed, and they grow considerably with the channel error bound ξ and the number of users. In particular, a gap of 8 (dB) is observed for a 4-user system when ξ = 0.3, while a gap of 7 (dB) appears for a 3-user system with the same level of CSIT uncertainty. Note that the results in this figure apply to the cases with both perfect and imperfect CSIR, since the transmit SNR depends only on the transmit beamforming vectors, which are obtained based on CSIT.
The performances of the various algorithms when they are all feasible are now well addressed. However, it is also important to know how they actually perform for general random channels and error conditions, particularly in terms of their service probability (i.e., the probability that a given method gives a feasible solution with the users' SINR constraints satisfied). Here, we examine this by providing the service (or non-outage) probabilities for the various algorithms in Tables 1 and 2. Results in the tables illustrate that the proposed algorithm decreases the probability of outage by orders of magnitude when compared to the nonrobust design. Besides, there is a remarkable increase in the service probability by using the proposed algorithm over the optimal power-only allocation. On the other hand, however, if ξ is too large, the problem itself is more likely to become infeasible (and there exists no robust solution), leading to an unacceptably low service probability. In addition, we can see that for a given channel error bound ξ, multiuser MIMO has a much higher service probability than multiuser MISO, even if imperfect CSIR is considered for the MIMO cases. In Table 3, the tightness of the relaxation approach is examined, and the probability that the proposed algorithm (22) does not give an all-rank-1 solution, designated as P, is shown. In the simulations, ξ = 0.1 and random SINR requirements are considered. The results are obtained by averaging over 10^5 independent channel realizations and uncertainty matrices {U_T^(m)}. It is observed that with full-rank matrices {U_T^(m)} (i.e., the CSI error regions are ellipsoids), a rank-1 solution exists with high probability, while with non-full-rank {U_T^(m)} (i.e., non-ellipsoids), the proposed algorithm always outputs a higher-rank solution, and in some cases the problem (22) is even infeasible.
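Checking whether a relaxed SDP solution is all-rank-1, as tallied in Table 3, reduces to an eigenvalue test on each user's covariance matrix. Below is a minimal numpy sketch; the tolerance and the factorization convention Q_m = p_m w_m w_m^H are illustrative assumptions.

```python
import numpy as np

def is_rank_one(Q, tol=1e-6):
    """Test whether a Hermitian PSD matrix is numerically rank-1:
    every eigenvalue except the largest must be negligible relative to it."""
    w = np.linalg.eigvalsh(Q)  # ascending order, real for Hermitian input
    return w[-1] > 0 and bool(np.all(w[:-1] <= tol * w[-1]))

def all_rank_one(Qs, tol=1e-6):
    """True if every relaxed covariance admits Q_m = p_m * w_m w_m^H,
    i.e., beamforming vectors can be recovered from the SDP solution."""
    return all(is_rank_one(Q, tol) for Q in Qs)
```

When the test passes, the dominant eigenvector of each Q_m (scaled by the square root of its largest eigenvalue) recovers the beamforming vector; when it fails, the relaxation is not tight and only a higher-rank solution is available.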
CONCLUSION
This paper has studied the worst-case robust beamforming design of a downlink multiuser MIMO antenna system where the CSIT and CSIR are known but imperfect, and the channel uncertainty regions, which are modeled by ellipsoids, are also known. With the aim of minimizing the overall transmit power while ensuring the users' target SINRs for all possible channel uncertainty conditions within the ellipsoids, this paper has presented techniques to jointly optimize the power allocation and the transmit-receive beamforming vectors for the users, based on the imperfect CSIT, the perfect or imperfect CSIR, and the channel uncertainty regions. Using the S-procedure and rank relaxation, it is possible to obtain the globally optimal joint robust power and beamforming solution for a multiuser MISO system, while a convergent iterative algorithm has been proposed to obtain a suboptimal robust solution for a multiuser MIMO system. Simulation results have demonstrated that the proposed algorithm yields a significant power reduction for ensuring robustness compared with an SINR bounding approach and with the optimal power-only allocation with fixed ZF beamforming vectors. The proposed algorithm has also been shown to have the largest feasibility region and to yield the lowest outage probability compared to other robust and nonrobust schemes.
Figure 1: A numerical example showing how the countermeasure works.
Figure 2: The user's output SINR averaged over channel uncertainty for a given channel realization, for various channel error bounds, for a 3-user (3,1) system with γ = 5 (dB).
Figure 3: The total transmit SNR versus the channel error bound for a 3-user (3,1) system.
Figure 4: The user's output SINR averaged over channel uncertainty for a given channel realization, for various channel error bounds, for a 3-user (3,1) system with γ = 10 (dB).
Figure 5: The feasibility regions of a 2-user (2,1) system for various robust algorithms for a particular channel realization.
Table 2: Service probabilities for multiuser MIMO systems where both imperfect CSIT and imperfect CSIR are considered such that ξ(m)
Table 3: Probability, P, that the proposed algorithm (22) does not give an all-rank-1 solution. In this table, * means that (22) does not even have a feasible solution.
Transfer of the IL-37b gene elicits anti-tumor responses in mice bearing 4T1 breast cancer.
AIM
IL-37b has shown anti-cancer activities in addition to its anti-inflammatory properties. In this study, we investigated the effects of IL-37b on breast carcinoma growth in mice and determined the involvement of T cell activation in these effects.
METHODS
IL-37b gene was transferred into mouse breast carcinoma cell line 4T1 (4T1-IL37b cells), the expression of secretory IL-37b by the cells was detected, and the effects of IL-37b expression on the cell proliferation in vitro was evaluated. After injection of 4T1 cells or 4T1-IL37b cells into immunocompetent BALB/c mice, immunodeficient BALB/c nude mice and NOD-SCID mice, the tumor growth and survival rate were measured. The proliferation of T cells in vitro was also detected.
RESULTS
IL-37b was detected in the supernatants of 4T1-IL37b cells with a concentration of 12.02 ± 0.875 ng/mL. IL-37b expression did not affect 4T1 cell proliferation in vitro. BALB/c mice inoculated with 4T1-IL37b cells showed significant retardation of tumor growth. BALB/c mice inoculated with both 4T1 cells and mitomycin C-treated 4T1-IL37b cells also showed significant retardation of tumor growth. But the anti-cancer activity of IL-37b was abrogated in BALB/c nude mice and NOD-SCID mice inoculated with 4T1-IL37b cells. Recombinant IL-37b slightly promoted CD4(+) T cell proliferation without affecting CD8(+) T cell proliferation.
CONCLUSION
IL-37b exerts anti-4T1 breast carcinoma effects in vivo by modulating the tumor microenvironment and influencing T cell activation.
Introduction
IL-37 (formerly IL-1F7), a cytokine in the IL-1 family, is expressed in a variety of normal tissues and tumors in humans, but a mouse homolog has not been identified. Among the various splicing forms of IL-37, IL-37b has been the most extensively studied [1] . Although the receptor and signaling pathway of IL-37 have not been clearly defined, it is known to function as a non-specific inhibitor of inflammation. Transgenic expression of IL-37b suppresses the production of proinflammatory mediators, such as IL-6 and IL-1β, in RAW macrophages, peripheral blood mononuclear cells (PBMCs), and dendritic cells (DCs) [2] . The use of siRNA to reduce the synthesis of the IL-37 protein in PBMCs leads to the increased production of several pro-inflammatory mediators, including IL-1α, IL-1β, IL-6, IL-12, G-CSF, GM-CSF, and TNF-α [3] . LPS challenge of IL-37b transgenic mice has not been found to induce significant circulating or organ levels of inflammatory cytokines (IL-1β, IL-6, IL-17, IFN-γ, etc), and the activation of DCs and macrophages is also suppressed [3] .
In a dextran sodium sulfate (DSS)-induced colitis model, the severity of intestinal inflammation has been shown to be significantly lower in IL-37b transgenic mice compared with wild-type controls [4] . The effects of IL-37b are at least partially mediated by Smad3, because two-dimensional gel electrophoresis and mass spectrometry have shown that IL-37 binds to Smad3, and anti-Smad3 siRNAs can reverse the suppression of inflammatory responses in transgenic mice [3,5] . In addition to its anti-inflammatory properties, IL-37b demonstrates anti-tumor activities. Gao et al have found that intratumoral injection of an IL-37b-expressing adenovirus results in the dramatic growth suppression of MCA205 mouse fibrosarcoma. The anti-tumor activity of IL-37b has been shown to be abrogated in nude and SCID mice and in IL-12-, IFN-γ-, and Fas ligand-deficient mice [6] . These results suggest that IL-37 could play an important role in boosting anti-tumor adaptive immunity. An additional study has demonstrated that the high expression of IL-37 in primary hepatocellular carcinoma (HCC) tissues, as shown by immunohistochemistry, is associated with better overall survival and is positively associated with the density of tumor-infiltrating CD57 + natural killer (NK) cells [7] .
Despite the two aforementioned reports, the mechanisms by which IL-37b exerts anti-tumor effects have not been completely resolved, and in particular, the role of T cells has not been clearly defined. Thus, we evaluated the anti-tumor effects of IL-37b against breast carcinoma and explored the involvement of T cells in these effects by in vitro and in vivo experiments.
Materials and methods
Animal and tumor cell lines
Immunocompetent BALB/c mice that were 6-8 weeks old were purchased from the Institute of Hematology, Chinese Academy of Medical Sciences (Tianjin, China). Six- to eight-week-old BALB/c nude mice and NOD-SCID mice were purchased from Vital River Co, Ltd (Beijing, China). All mice were maintained in specific pathogen-free barrier facilities at the Institute of Hematology, Chinese Academy of Medical Sciences. All animal experiments were conducted according to the guidelines of the Animal Care and Use Committee of the Institute of Hematology, Chinese Academy of Medical Sciences. 4T1 is a murine breast carcinoma cell line, and it was maintained in RPMI-1640 medium supplemented with 10% heat-inactivated fetal calf serum (FCS), 2 mmol/L L-glutamine, 50 mmol/L β-mercaptoethanol (β-ME), 100 IU/mL penicillin, and 100 μg/mL streptomycin. Cells were incubated at 37 °C in a humidified atmosphere of 5% CO2/air.
Preparation of recombinant adenovirus
For the construction of a recombinant adenoviral vector expressing IL-37b (Ad-IL-37b), IL-37b cDNA, including the Kozak sequence (CCACC) and ATG at the 5' end, which was cloned by PCR, was inserted into a pDC315-eGFP vector under the control of the mouse cytomegalovirus (CMV) promoter. pDC315-X and pBHGlox E1, 3Cre [8,9] were co-transfected into HEK293 cells using LipoFiter TM transfection reagent purchased from Hanbio Co, Ltd (Shanghai, China) to generate recombinant adenoviruses. Ad-IL37b and Ad-eGFP (control virus) were propagated in HEK293 cells, and virus titers were measured with plaque assays by Hanbio Co, Ltd. The stock solutions of Ad-IL37b and Ad-eGFP were both 1×10 10 plaque formation units (PFUs)/mL and were stored at -80 °C.
Transduction of tumor cells
4T1 cells cultured to 80% confluence were incubated with Ad-IL37b or Ad-eGFP at a multiplicity of infection (MOI) of 100 in RPMI-1640 medium (without FCS) for 2 h. Then, the media containing viruses was replaced by complete RPMI-1640 (with 10% FCS), and the 4T1 cells continued to be cultured for 48 to 72 h. Transduction efficiency was determined by fluorescence microscopy and flow cytometry. The 4T1 cells transduced with Ad-IL37b or Ad-eGFP were termed 4T1-IL37b and 4T1-eGFP, respectively.
IL-37b expression in 4T1 cells
After infection with Ad-IL37b, RPMI-1640 medium with 10% FCS was used for the continued culturing of the 4T1 cells in 1 mL media in each well of 12-well plates. The cells and supernatants were harvested at 48 h and 72 h later, respectively. Total RNA was extracted with TRIzol reagent, and real-time PCR was performed to determine the mRNA expression of IL-37b. The primers used to detect IL-37b were as follows: forward 5'-GGGAGTTTTGTCTCTACTGTGAC-3' and reverse 5'-CCCACCTGAGCCCTATAAAAG-3'. IL-37b activity in the supernatants was measured using a sandwich enzyme-linked immunosorbent assay (ELISA) kit developed by R&D Systems (Minneapolis, MN, USA).
Treatment of 4T1-IL37b or 4T1-eGFP with mitomycin-C
The treatment of 4T1 cells with mitomycin-C has been described elsewhere [10] . Briefly, at 48 h after adenovirus transduction, 4T1-IL37b or 4T1-eGFP cells were treated with 10 µg/mL mitomycin-C (Sigma, London, UK) for 2 h at 37 °C. Then, the media containing mitomycin-C was replaced with complete media. The amounts of IL-37b in the supernatants were detected by ELISA at 24 h after the mitomycin-C treatment. In addition, 1×10^5 mitomycin-C-treated 4T1-eGFP or 4T1-IL37b (M-4T1-eGFP or M-4T1-IL37b) cells were injected into the intramammary gland fat pad in the right flank of the BALB/c mice to confirm the complete arrest of cell growth by mitomycin-C.
Cell proliferation assay
To compare the in vitro proliferation of 4T1 sublines transduced with or without adenovirus, 1×10^4 cells from each cell line were seeded in each well of 24-well plates in 500 μL of culture medium, and the cells were enumerated in triplicate.
The T cell proliferation assay was similar to the procedure previously described by our research group [11] . Briefly, CD4 + or CD8 + T cells were isolated from lymph node cells with a Dynabeads FlowComp™ Mouse CD4 or CD8 Kit (Invitrogen, Carlsbad, CA, USA). CD4 + or CD8 + T cells labeled with carboxyfluorescein succinimidyl ester (CFSE, Invitrogen) were stimulated by an anti-CD3 antibody (BD PharMingen, San Jose, CA, USA), without or with the addition of 200 ng/mL of recombinant IL-37b (R&D Systems). Cell size, which was examined by FSC, T cell proliferation, which was evaluated by CFSE, and expression of CD25 and CD69 were detected by flow cytometry.

Tumourigenesis studies
A total of 1×10^5 4T1, 4T1-eGFP, or 4T1-IL37b cells were injected into the intramammary gland fat pad of BALB/c or NOD-SCID mice and simultaneously into the right flank of BALB/c nude mice. In some experiments, 1×10^5 4T1 cells were co-injected with 1×10^5 mitomycin-C-treated 4T1-eGFP or 4T1-IL37b cells into the intramammary gland fat pad in the right flank of BALB/c mice. Tumor sizes were measured in millimeters with a caliper at various time points. The longest surface length (a) and its perpendicular width (b) were measured, and tumor volume was reported as 0.5×a×b².
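The tumor-volume formula just described (0.5 × a × b²) can be written as a small helper; the sample readings below are illustrative, not data from the study.

```python
def tumor_volume_mm3(a_mm, b_mm):
    """Tumor volume from caliper readings, reported as 0.5 * a * b^2,
    where a is the longest surface length and b its perpendicular width (mm)."""
    if b_mm > a_mm:
        a_mm, b_mm = b_mm, a_mm  # by definition, a is the longest dimension
    return 0.5 * a_mm * b_mm ** 2

# e.g., a 10 mm x 6 mm tumor -> 0.5 * 10 * 6**2 = 180 mm^3
```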
Statistical analysis
All statistical tests were performed using GraphPad Prism software. Unpaired Student's t-tests were used to statistically evaluate differences in sample mean values between two groups. For comparisons of more than two groups, we used one-way ANOVA with Tukey's post hoc test. The survival rates were analyzed using the log-rank (Mantel-Cox) test. Differences were considered significant at P<0.05. The data are presented as the mean±SEM.
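For reference, the unpaired Student's t statistic used for two-group comparisons can be sketched with a minimal pooled-variance implementation; the actual analyses were done in GraphPad Prism, and the tumor-volume readings below are made up.

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Two-sample unpaired Student's t statistic with pooled variance
    (degrees of freedom: len(a) + len(b) - 2)."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Illustrative tumor volumes (mm^3) for two groups of three mice:
t = students_t([120, 150, 135], [180, 210, 195])  # ≈ -4.90
```

The resulting statistic is compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain the P value.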
IL-37b production by 4T1 cells transduced with Ad-IL37b
Ad-IL37b infection of 4T1 cells at an MOI of 100 resulted in 78% positive expression of eGFP without apparent cytotoxicity (data not shown). IL-37b mRNA expression in the 4T1-IL37b cells was confirmed by real-time PCR (data not shown). The 4T1-IL37b cells secreted detectable amounts of IL-37b (approximately 12.02 ± 0.875 ng/mL) into the supernatant (Figure 1A). Similar amounts of IL-37b (approximately 12.08 ± 0.965 ng/mL) were detected in the supernatant of the mitomycin-C-treated 4T1-IL37b cells (data not shown).
To examine whether there was a difference in the proliferation of the three 4T1 sublines (4T1, 4T1-IL37b, and 4T1-eGFP), we performed in vitro cell proliferation assays. There were no significant differences in cell proliferation among 4T1, 4T1-IL37b, and 4T1-eGFP ( Figure 1B).
Tumor growth of 4T1-IL37b cells in BALB/c mice
At 48 h after adenovirus transduction, 1×10^5 4T1, 4T1-eGFP, or 4T1-IL37b cells were injected into the mammary fat pads in the right flank of each mouse. 4T1 and 4T1-eGFP showed similar degrees of tumor growth. 4T1-IL37b showed a significant retardation of tumor growth compared with 4T1 or 4T1-eGFP (Figure 2A). The tumor volume and weight of the 4T1-IL37b group were significantly smaller than those of the 4T1-eGFP group (Figure 2C-2E). However, the survival rate was not significantly different among the three groups (data not shown).
Effect of mitomycin-C-treated 4T1-IL37b cells on tumor growth of 4T1 cells in BALB/c mice
To further demonstrate whether IL-37b suppresses tumor growth by affecting the tumor microenvironment or by directly affecting the tumor cells, we co-injected 1×10^5 mitomycin-C-treated 4T1-IL37b cells (M-4T1-IL37b) with 1×10^5 4T1 cells into the mammary fat pads of BALB/c mice and observed tumor growth. The mice that were injected with M-4T1-eGFP or M-4T1-IL37b alone did not develop tumors, indicating that the growth of the cells was completely arrested by mitomycin-C (data not shown). Co-injection with M-4T1-IL37b significantly suppressed 4T1 tumor growth (Figure 2B), but it did not significantly improve survival (data not shown).
Effect of recombinant IL-37b on T cell proliferation in vitro
To evaluate whether the direct activation of T cells is involved in the anti-tumor effect of IL-37b, an in vitro T cell proliferation assay was conducted by adding recombinant IL-37b to a culture of T cells stimulated with anti-CD3. After treatment with recombinant IL-37b for 72 h, the proliferation of CD4 + T cells was found to be promoted, as shown by FSC and CFSE. Slight increases in the expression levels of CD25 and CD69 on the CD4 + T cells were also observed following the treatment with recombinant IL-37b (Figure 4). However, the efficacy of the recombinant IL-37b in promoting isolated CD4 + T cell proliferation varied under the different experimental conditions.

Tumor growth of 4T1-IL37b in BALB/c nude and NOD-SCID mice
To confirm the involvement of T cells in vivo, T cell-deficient BALB/c nude mice were used to test the tumor growth of 4T1, 4T1-eGFP, and 4T1-IL37b cells. The tumor growth of these three 4T1 sublines showed no significant difference (Figure 5A). Moreover, tumor growth was observed in the NOD-SCID mice, which are deficient not only in functional T and B cells but also in innate immunity, exhibiting impaired functioning of NK cells, macrophages, and antigen-presenting cells (APCs). Soon after the tumors grew to a measurable size, at approximately d 12, the mice died rapidly. Therefore, we compared the tumor size at one time point on d 12.
The tumor sizes of the three 4T1 sublines showed no significant difference ( Figure 5B).
Discussion
IL-37 is expressed in a variety of normal tissues, inflammatory diseases and tumors [3,[12][13][14][15] . Interestingly, IL-37 protein expression is upregulated in human breast carcinoma tissues [16] . This finding suggests that it is involved in both inflammatory responses and tumor progression. In the IL-1 family, IL-18 is a potent pro-inflammatory cytokine that induces the production of IFN-γ. A series of studies have demonstrated that IL-18 exhibits anti-tumor activities [17][18][19][20] , which is logical because of its pro-inflammatory properties. Although IL-37 shows antiinflammatory properties, it also exhibits anti-tumor activity, of which the underlying mechanism is seemingly more complex. To date, only two published studies have demonstrated the anti-tumor activity of IL-37 in fibrosarcoma and hepatocellular carcinoma, but the underlying mechanism remains to be fully elucidated [6,7] . In the present study, we used mouse 4T1 breast carcinoma models to study the anti-breast cancer effect of IL-37b and to explore whether T cells are involved in its anti-tumor mechanisms. First, an IL-37b-expressing adenoviral vector was constructed and transduced into the 4T1 cell line. IL-37b was successfully expressed inside of the 4T1 cells and secreted into the culture supernatants. Because IL-37b also functions inside of cells as a signaling molecule, it is possible that it directly affects 4T1 cell proliferation. However, this possibility was ruled out because we found that transduced IL-37b expression did not affect cell proliferation. IL-37b-expressing 4T1 cells were injected into the intramammary gland fat pads of immunocompetent BALB/c mice, resulting in significantly slower tumor growth compared with the 4T1 or 4T1-eGFP cells, confirming the anti-tumor activity of IL-37b.
To further clarify the anti-tumor effects of IL-37b in the tumor microenvironment, we co-injected mitomycin-C-treated 4T1-IL37b cells with 4T1 cells into immunocompetent BALB/c mice and observed tumor growth. Because the growth of mitomycin-C-treated 4T1-IL37b cells was completely arrested, these cells did not develop tumors, but they secreted IL-37b into the tumor microenvironment. Co-injection with mitomycin-C-treated 4T1-IL37b cells significantly suppressed 4T1 tumor growth in the BALB/c mice. Considering the abovementioned finding that IL-37b did not affect 4T1 cell proliferation, these results collectively indicate that IL-37b exerts anti-tumor effects through influencing the tumor microenvironment rather than directly affecting 4T1 cells.
Although IL-37b slowed 4T1 tumor growth in the tumor microenvironment, the survival rate was not significantly improved in the mouse models of either 4T1-IL37b or of the co-injection of 4T1 with mitomycin-C-treated 4T1-IL37b (data not shown). 4T1 mammary tumors are highly metastatic. Because the survival of mouse models is directly related to tumor metastasis, we postulate that IL-37b suppresses tumor growth but has no or little effect on tumor metastasis. However, in this study, we did not evaluate metastasis in some important organs, such as the lung, heart, liver and kidney, following the deaths of the mice.
The observation of the anti-tumor efficacy of IL-37b in the BALB/c mice prompted us to test whether it could directly affect the function of T cells, which have a central role in anti-tumor activity. In fact, IL-37b binds to the IL-18Rα chain and belongs to the IL-18 subfamily [21] ; thus, we postulated that the function of IL-37b could be similar to that of IL-18 and involve the stimulation of T cell proliferation [22] . Our results showed that IL-37b directly stimulated CD4 + T cell activation and proliferation in vitro. However, the efficacy of IL-37b stimulation was fairly weak and varied occasionally, which may have been related to the number of prepared isolated CD4 + T cells and the concentration of the stimulating anti-CD3 antibody (the titer decreased as storage time was extended). Overall, when the CD4 + T cell concentration was lower and anti-CD3 stimulation was weaker, the stimulatory effect of IL-37b was stronger, suggesting that the strong activation of T cells with IL-2 (a major T cell growth factor produced by T cells themselves upon anti-CD3 stimulation) in cultures may have overridden the effects of IL-37b. Strikingly, IL-37b did not affect CD8 + T cell proliferation. Increasing evidence has demonstrated that CD4 + T cells contribute to tumor eradication, even in the absence of CD8 + T cells [23] . Because the direct effect of IL-37b on CD4 + T cells was weak, the direct activation of T cells might play a minor role in its suppression of tumor growth in the tumor microenvironment. However, given that its anti-tumor effect has been shown to be T cell-dependent in our study and in others' [6] , the indirect activation of T cells might play a major role in its anti-tumor effect in the tumor microenvironment. It has been demonstrated that the anti-tumor activity of IL-37b is also IL-12-, IFN-γ-, and Fas ligand-dependent [6] . IL-12 shows potent anti-tumor activity by acting as a major orchestrator of the Th1-type immune response against cancer [24] .
IFN-γ is mainly produced by activated Th1 and CD8 + T cells and exerts anti-tumor effects by affecting the STAT1 signaling pathway [25] . Through inducing cell death, the Fas ligand helps to remove tumor cells. It has been reported that the T-cell receptor complex is essential for Fas signal transduction [26] . Therefore, IL-37b might indirectly activate T cells, as mediated by IL-12, IFN-γ, and the Fas ligand, to suppress tumor growth. Because no effect of IL-37b on tumor growth was noted in the nude mice, which preserve normal NK cell function, this implies that NK cells were not (or were not directly) involved in the anti-tumor activity of IL-37b, which is not consistent with a previous study by Zhao et al describing its anti-tumor effect against hepatocellular carcinoma [7] . We suggest that this may be a consequence of the different type of cancer studied by this group. Finally, we observed the tumor growth of 4T1-IL37b in NOD-SCID mice, which were deficient not only in functional T and B cells but also in innate immune components, such as NK cells, macrophages, and APCs. The tumor growth of the IL-37b-expressing 4T1 cells showed no difference compared with that of the 4T1 and 4T1-eGFP cells. Most of the NOD-SCID mice died after day 14 post-tumor cell inoculation, and the survival rate was apparently lower than that of the BALB/c nude mice. This finding indicates that innate immunity plays a key role in controlling tumor growth and metastasis, despite the fact that IL-37b seems to mainly exert anti-tumor functions via T cells.
In conclusion, our results indicate that IL-37b is capable of exerting anti-breast tumor activity by modulating the tumor microenvironment and that T cells are essential to its antitumor mechanisms. However, the indirect, but not the direct activation of T cells might play a major role in the suppression of tumor growth by IL-37b in the tumor environment. The molecular details of its effects on T cells need further investigation. This research highlights the potential usage of IL-37b gene/protein therapy in the future treatment of breast cancer in the clinic and provides insights for further research in this field.
Exactly solvable model for transmission line with artificial dispersion
The problem of the emergence of wave dispersion due to the heterogeneity of a transmission line (TL) is considered. An exactly solvable model helps to better understand the physical process of a signal passing through a non-uniform section of the line and to compare the exact solution with solutions obtained using various approximate methods. Based on a transition to new variables, the developed approach makes it possible to construct exact analytical solutions of the telegraph equations with a continuous, coordinate-dependent distribution of parameters. The flexibility of the discussed model is due to the presence of a number of free parameters, including two geometric factors characterizing the lengths of the inhomogeneities in the inductance L and the capacitance C. In the new variables, the spatiotemporal structure of the solutions is described using sine waves and elementary functions, and the dispersion is determined by formulas of the waveguide type. The dispersive waveguide-like structure is characterized by a refractive index N and a cut-off frequency Ω. Exact expressions for the complex reflection and transmission coefficients are derived. These expressions describe the phase shifts of the reflected and transmitted waves. The following interesting cases are analyzed: the passage of waves without phase change, the reflectionless passage of waves, and the passage of signals through a sequence of non-uniform sections. The developed mathematical formalism can be useful for the analysis of a wider range of problems.
I. INTRODUCTION
In recent years, studies of the interaction of waves with inhomogeneous media have been actively carried out (see [1-8] and references therein). The notion of a "barrier" is frequently used in various fields of science for which a wave description can be applied, for example, in plasma physics (wave barriers), in solid-state physics (quantum-mechanical barriers), in optics, etc. A segment of a non-uniform transmission line can also represent a barrier with respect to propagating waves. Exactly solvable models are interesting for science, technology, and education, since they can help in purposeful scientific search and in understanding the phenomena under investigation. A key role in many phenomena is played by a variety of resonant effects.
The study of such resonant effects is of considerable interest for various practical applications.
Exactly solvable models can help in their search, for example, when analyzing the possibility of realizing reflectionless passage of waves through wave barriers. Such a model is presented in this article.
We consider the classical "telegraph equations" [9-12] describing the voltage U and current I in a transmission line (TL). For example, for a homogeneous stripline formed by two metal strips and a dielectric layer between them, the inductance and capacitance per unit length are constants L₀ and C₀; in a non-uniform line they become L(z) = L₀ P_L(z) and C(z) = C₀ P_C(z), where P_L and P_C are some dimensionless real smooth functions. The telegraph equations for the lossless TL with these parameters can be written (in the SI system) as [14-17]

∂U/∂z = −L(z) ∂I/∂t,  ∂I/∂z = −C(z) ∂U/∂t. (1)

Effective methods for integrating equations of mathematical physics include methods based on knowledge of their continuous, point, nonlocal, or potential symmetries [18-21]. It is known that the number of dependencies of the coefficients that admit an analytical solution of such equations is limited [2, 9, 22, 24-28]. Introducing a generating function, we can reduce the system (1) to one equation (3). Next we will use the method of phase coordinates [2, 23]: the use of a new variable permits one to eliminate the right-hand side from Eq. (3), leading to Eq. (5). As can be seen from equation (5), there is a certain analogy between the microwave phenomena studied here and optical phenomena in inhomogeneous media with spatial modulation, as well as with the phenomenon of Alfvén wave propagation through inhomogeneous plasma [2, 4]. Consequently, the mathematical formalism that will be developed in this work, and the results obtained, may be useful for the analysis of a wider range of problems.
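As a quick sanity check on the lossless telegraph equations: for a uniform section with constant L₀ and C₀ they support non-dispersive waves with phase velocity v = 1/√(L₀C₀) and characteristic impedance Z₀ = √(L₀/C₀). The parameter values below are illustrative, not taken from the paper.

```python
import math

# Per-unit-length parameters of a uniform lossless section (illustrative values):
L0 = 0.25e-6   # inductance, H/m
C0 = 100e-12   # capacitance, F/m

v = 1.0 / math.sqrt(L0 * C0)   # phase velocity, m/s  (2e8 m/s here)
Z0 = math.sqrt(L0 / C0)        # characteristic impedance, ohm  (50 ohm here)
```

Dispersion appears only once the line becomes non-uniform, which is exactly the effect the model below captures with its waveguide-type formulas.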
The main purposes of this article are to describe the appearance of dispersion in a nonuniform transmission line using a precisely solvable model and to derive the rigorous expressions for the complex reflection and transmission coefficients.These expressions describe a change in the amplitudes of transmitted and reflected waves, as well as the formation of their phase shifts.This will allow us to analyze important particular cases of signal propagation, as well as highlight possible new properties of long transmission lines that may have practical application.The analysis will also touch on the possibility of constructing a transmission line in which several non-uniform segments are combined.
The article is organized as follows: Sec. II discusses a precisely solvable model for a non-uniform TL with distributed parameters; Sec. III studies the propagation and tunneling of current and voltage waves and analyzes a number of important special cases; finally, Sec. IV contains conclusions.
II. DISPERSION OF NON-UNIFORM TRANSMISSION LINE WITH CONTINUOUSLY DISTRIBUTED PARAMETERS (EXACTLY SOLVABLE MODEL)
Till now the distributions were arbitrary regular functions. The number of coefficient dependencies that allow an analytical solution of (5) is finite, and one needs to choose a dependence in (4) that allows the variable z to be expressed explicitly through the variable θ. We consider the model of a TL with the distributed capacitance and inductance given by (6). The characteristic lengths l1 and l2 in (6) are the free parameters of the discussed model of non-uniform TL; positive and negative values of these quantities correspond to growth or decrease of the capacitance and inductance along the TL. This choice provides a fairly wide range of possibilities for modeling various inhomogeneities. As can be seen from Fig. 1, the change in the parameters on the non-uniform section of the transmission line can occur, with respect to the homogeneous section, both continuously and abruptly, and the inductance or capacitance can independently either increase, decrease or remain constant. Substitution of the function P(z) from (6) into Eq. (4) yields the link (7) between the variables z and θ. Finally, expressing the corresponding factor in Eq. (5) through the variable θ permits one to rewrite this equation in the form (8), which contains a new spatial scale l3 that can be either positive or negative. A method for the solution of equation (8) is considered below in detail.
Inspection of Eq. (8) shows that the unknown function obeys a wave equation with a coordinate-dependent speed of wave propagation. Equation (8) can be solved by introducing a new variable η and a new function F, as in (9). Assuming that the time dependence of the function is harmonic and using (9), we obtain from Eq. (8) the simple equation (10) with constant coefficients governing the behavior of the function F. Equation (10) can be viewed as the standard equation describing the propagation of a wave with wave number q in a dispersive, waveguide-like structure characterized by a refractive index N and a cut-off frequency Ω. The latter frequency separates the frequency region where the tunneling regime is observed from the region with the wave regime of propagation. Note that this cut-off frequency is determined by the parameters of the spatial distributions (6) of the capacitance and inductance.
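As a rough numerical illustration of this waveguide-type dispersion, the sketch below evaluates a relation of the form q = (N/c)·sqrt(ω² − Ω²); the particular values of N and Ω (which in the exact model are fixed by l1, l2 and the line parameters) are assumed here purely for illustration.

```python
import numpy as np

# Waveguide-type dispersion sketch: q = (N/c) * sqrt(omega^2 - Omega^2).
# N (effective refractive index) and Omega (cut-off frequency) stand in for
# the combinations of line parameters entering Eq. (10); values are assumed.
c = 3e8                                  # speed of light, m/s
N, Omega = 1.5, 2 * np.pi * 1e9          # illustrative index and cut-off

def wavenumber(omega):
    """Real q above cut-off (travelling wave), imaginary q below (tunneling)."""
    return (N / c) * np.sqrt(complex(omega**2 - Omega**2))

q_wave = wavenumber(1.5 * Omega)   # above cut-off: purely real wave number
q_tun = wavenumber(0.5 * Omega)    # below cut-off: purely imaginary wave number

# Below cut-off the field decays as exp(-|q| z); e.g. over one metre:
decay = np.exp(-abs(q_tun) * 1.0)
```

Above the cut-off the wave number is real and the wave propagates; below it the wave number becomes imaginary, so the field decays exponentially with distance into the barrier, which is the tunneling regime described in the text.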
Equation (10) describes the propagation of a pulse of arbitrary shape in the variables (η, t); its monochromatic solution for the generating function contains the constants A and Q, which have to be determined from the boundary conditions at the ends of the TL. The explicit expressions for the variables η and θ through the variable z are obtained from (7) and (9). The exact generating function (11) will be used below for computation of the reflection and transmission of current and voltage waves in the non-uniform TL with artificial dispersion for a variety of physically meaningful cases stipulated by the interplay of positive and negative values of the free parameters l1 and l2. Moreover, this approach provides a platform for comparison of the eikonal and anti-eikonal approximations with the exact solutions [2,29].
III. PROPAGATION AND TUNNELING OF CURRENT AND VOLTAGE WAVES IN NON-UNIFORM TRANSMISSION LINE
Voltage U and current I in the TL can be found by substituting the generating function (11) into the equalities (2). We consider the reflection and transmission of a monochromatic wave by a segment of non-uniform TL of length d, described by the distributions (6) and installed into a uniform line characterized by the constant capacitance C1 and inductance L1 per unit length (see Fig. 1). Note that the voltage U1 and current I1 describing the incident wave in the uniform region z < 0 can be written by means of a generating function Ψ1 through the presentation (2). Taking into account the reflected wave, we write the fields in this region in the form (13)-(15). The problem of propagation of the current wave through the non-uniform section is analogous to the passage of a wave through a gradient barrier (see Fig. 2). From (13)-(15) we find the equation (16) governing the complex reflection coefficient R, in which Z1 and Z0 are the impedances of the uniform and non-uniform parts of the TL; for the transmitted wave we write (17). The unknown parameter Q in Eq. (16) is determined from the continuity condition (18) at the end of the segment. Substitution of Q from (18) into (16) brings the value of the quantity B given in (19), and substitution of the expression for B from (19) into Eq. (16) yields the complex reflection coefficient (20) of the non-uniform segment of the TL; the quantities entering (20) are defined in (14) and (18). The transmission coefficient is given by (21). Note that in the limiting case when the distributions of the capacitance and inductance coincide, expression (20) reduces to the standard formula (22) describing the reflection coefficient of a uniform segment installed into a TL. Introducing the dimensionless impedance ratio Z into (20), the final result for the reflection coefficient takes the form (23), which depends on the ratio of the characteristic lengths; for graphical presentation, a dimensionless combination such as the ratio of l3 to d can be chosen. We will not write out explicitly the expression for the transmission coefficient T because of its bulkiness, but it can be obtained analytically on a PC and analyzed in numerical computations.
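The limiting case of a uniform inserted segment can be checked numerically. The sketch below uses the textbook slab (Fabry-Perot) formulas for a segment of impedance Z0 and phase thickness δ = k0·d embedded in a line of impedance Z1; the sign conventions may differ from the paper's Eq. (22), and the numerical values are illustrative.

```python
import numpy as np

def segment_RT(Z0, Z1, delta):
    """Reflection and transmission of a uniform segment (impedance Z0,
    phase thickness delta = k0*d) inserted into a line of impedance Z1."""
    r = (Z0 - Z1) / (Z0 + Z1)              # single-interface reflection
    e2 = np.exp(2j * delta)
    denom = 1 - r**2 * e2                  # sum of multiple internal reflections
    R = r * (1 - e2) / denom
    T = (1 - r**2) * np.exp(1j * delta) / denom
    return R, T

R, T = segment_RT(Z0=2.0, Z1=1.0, delta=0.7)

# Half-wave resonance: delta = n*pi gives reflectionless passage, |T| = 1.
R_res, T_res = segment_RT(2.0, 1.0, np.pi)
```

For a lossless line, energy conservation |R|² + |T|² = 1 holds for any δ, and at δ = nπ the segment becomes fully transparent, a simple analogue of the reflectionless resonances discussed below.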
We see that in the general case both the reflection and transmission coefficients are dispersive, i.e. ω-dependent, quantities. Further we describe important special cases.

I. If Im[R] = 0 for a wave with some frequency ω, then the wave is reflected without a change of its phase. The characteristic features of the function Im[R] as a function of ω and the parameters l2, l3 are illustrated in Fig. 4.

II. If Im[T] = 0, the phase of the transmitted wave does not change at all. The function Im[T] behaves analogously (see Fig. 5), although the characteristic parameter values can differ from those for Im[R] and |T|.

III. In the case when |T(ωr)| = 1 (or, which is the same, R(ωr) = 0), the phenomenon of reflectionless passage takes place for the wave with such a resonant frequency ωr. In the general case, when the system parameters are changed, the amplitude of the oscillations of |T| with frequency can alternately increase and decrease. The phenomenon of reflectionless propagation does not exist for all values of the system parameters; but if there is a resonant frequency ωr for a certain set of system parameters, then there are infinitely many such resonant frequencies ωri. If some signal is composed of waves with resonant frequencies ωri only, then such a signal passes through the non-uniform TL without changing its form (no distortion).
Note that to transmit information without distortion, one can use amplitude modulation of signals at these selected frequencies. If some signal represents a very narrow wave packet composed of waves near the resonant frequency ωr, then such a signal passes through the non-uniform TL with minimal change of its form. However, in the general case (for an arbitrary wave packet, for example), the form of the propagating signal can change significantly (and similarly for the reflected signal).
Related resonant and tunneling effects are known in many wave systems [31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]. So, for example, reflectionless tunneling can occur in a homogeneous layer of a waveguide bounded on the sides by curved boundaries; in this case, the interference of partial waves can lead to the complete suppression of the reflected radio waves. As a result, waves with a wavelength exceeding the width of the slit by several times tunnel without reflection through the narrowing of the waveguide [30,32]. Coatings (for example, made of metamaterials) can also be applied for amplification of evanescent electromagnetic waves (tunneling) [32,38]. For waveguides there exists a lowest frequency, the cutoff frequency [35], similar to expression (10); wave modes below it cannot propagate. To reduce the cutoff frequency, baffles are sometimes used [34]. Various types (modes) of waves can be excited in waveguides; sometimes it is necessary to suppress unwanted mode types or to create waveguides with a bandpass filtering function [36,37]. Excitation of various mode types is used in antenna feeds. When connecting a waveguide to an antenna, full power transfer and the absence of signal distortion are usually required.
Can approximate methods describe the solution obtained in the article? The flexibility of the presented exactly solvable model is due not only to the possibility of choosing arbitrary material parameters of the line (L0, C0, L1, C1), but also to the presence of the geometric factor, i.e. the free parameters characterizing the lengths of the inhomogeneities in the non-uniform segment (l1, l2, d). As a result, the changes cover all possible cases: the properties of the system can change both in a jump and continuously; at the same time, they can change both quickly and slowly, and can both increase and decrease. It is well known that the eikonal approach in wave physics can be applied in the cases of slow variation of the parameters of the wave field or medium over a wavelength [48]. For the considered model, this approximation covers only a very particular case, when the scales l1 and l2 greatly exceed the wavelength; under these conditions, the eikonal approximation can allow one to determine approximately the intensity of the transmitted wave. The anti-eikonal limit can be applicable for determination of the wave intensity in the opposite case, when the system parameters vary strongly over the distance of one wavelength; this case can also include a jump of the system parameters. However, in the general case, none of the approximations mentioned above tracks the exact changes in the wave phase. And since the realization of reflectionless passage is determined by the exact values of the phases (namely, the phases change so that at the point of incidence all reflected waves cancel each other out in total), none of the approximations captures the resonance cases.
A non-uniform segment of the TL acts in relation to the incident wave as a wave "barrier".
Each such barrier can be described by a complex transmission coefficient Ti and by complex reflection coefficients Ri for a wave incident from the left and R'i for a wave incident from the right. Let several such non-uniform segments be included in the homogeneous TL, separated by homogeneous segments of length Di. Let us start with a combined system of two such barriers. In the general case the amplitudes of the waves repeatedly reflected between these separate barriers can be summed exactly as a geometric progression (see Fig. 7), and the complex transmission coefficient of the combined barrier can be found. Similarly, summing up the return flows, we obtain the complex reflection coefficients of the combined barrier. Thus, such non-uniform lines exhibit:
- waveguide-type frequency dispersion, characterized by a controlled cutoff frequency;
- the possibility of matching (adjustment) of sections of the line regardless of their geometric characteristics;
- transmitted-wave phase control.
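The geometric-progression summation for two barriers can be sketched as follows. Purely for illustration, each symmetric lossless barrier is modeled as a uniform impedance segment, and phi denotes the one-way phase advance over the separating homogeneous segment of length D1; the composition formulas themselves are the standard exact summation of multiply reflected waves.

```python
import numpy as np

def segment_RT(Z0, Z1, delta):
    """Reflection/transmission of a uniform segment used as a model barrier."""
    r = (Z0 - Z1) / (Z0 + Z1)
    e2 = np.exp(2j * delta)
    R = r * (1 - e2) / (1 - r**2 * e2)
    T = (1 - r**2) * np.exp(1j * delta) / (1 - r**2 * e2)
    return R, T

def compose(R1, T1, R2, T2, phi):
    """Sum the multiply reflected partial waves as a geometric series
    (symmetric barriers, so left and right reflection coefficients coincide)."""
    p = np.exp(1j * phi)
    denom = 1 - R1 * R2 * p**2
    T12 = T1 * T2 * p / denom
    R12 = R1 + T1**2 * R2 * p**2 / denom
    return R12, T12

R1, T1 = segment_RT(2.0, 1.0, 0.7)
R2, T2 = segment_RT(3.0, 1.0, 1.3)
R12, T12 = compose(R1, T1, R2, T2, phi=1.1)
```

Since every element is lossless, the composed coefficients still conserve energy, |R12|² + |T12|² = 1, which provides a quick consistency check of the summation.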
An important feature of such lines is the fact that the refractive index can be arbitrary (more than one, less than one, and even imaginary).For a given spectral range, the line parameters can be set in such a way that non-local dispersion effects will appear in the required frequency band.In this case, the wave dynamics in the line is described by exact analytical solutions constructed without any assumptions about the smallness of the changes in the parameters of the line or fields.
Fig. 1. The equivalent circuit of sections dz of the transmission line with a non-uniform section [0, d] is shown in the lower figure; some possible dependencies of the dimensionless ratio of parameters are shown in the upper graphic. The definition of the ratio d/li can be seen for straight line 8. The change in the parameters on the non-uniform section of the transmission line can occur, with respect to the homogeneous section, both continuously (straight lines 4 and 5) and abruptly (straight lines 1, 2, 3, 6, 7, 8); the parameters can either increase (straight lines 3, 5, 8), decrease (straight lines 1, 4, 6), or remain constant (straight lines 2 and 7).

Fig. 2. Tunneling of a current wave (normalized to the amplitude of the incident current wave) through a gradient barrier corresponding to the inhomogeneous section of the transmission line shown in Fig. 1. Shown are the wave incident on the section [0, d], the reflected wave and the transmitted wave.

Fig. 3. The dependence of the dimensionless cut-off frequency on the dimensionless length parameters. Full reflection |R| = 1 occurs at N = 0, i.e. when the wave frequency ω is equal to the cut-off frequency. If the propagating wave has a frequency below this value, the segment of non-uniform TL represents an opacity region (the wave can tunnel with damping over a short distance, but most of the wave is reflected). Wave solutions are observed at frequencies lying above the surface depicted here.

Fig. 4. Dependence of the imaginary part of the reflection coefficient R from expression (23) on the dimensionless parameters l2 and l3. The behavior of the function Im[R] is characterized by the following features: a) there are sections (bands of opacity), continuous in the frequency ω and the parameter l3, where Im[R] = 0, as well as continuous lines connecting values of ω and l3 where Im[R] = 0, i.e. the phase of such reflected waves remains constant; b) with an increase in Z, the beginning of such a band shifts slightly toward somewhat higher frequencies; c) the "frequency of oscillations" of Im[R] with respect to ω is greater for a negative parameter l2 than for a positive one; d) an increase in the parameter for a fixed Z leads to an increase in this oscillation frequency; e) in the general case, for each region, a continuous change of the parameters leads alternately to an increase of the oscillation amplitude to unity and to a decrease of the amplitude to almost zero.

Fig. 5. Dependence of the imaginary part of the transmission coefficient T from expression (21) on the dimensionless parameters l2 and l3 at the fixed parameter Z = 0.1. The "frequency of oscillations" of Im[T] with respect to ω is greater to the right of the opacity band (negative parameter l2) than to the left of it (positive l2); an increase in the parameter for a fixed Z increases this oscillation frequency; in the general case a continuous change of the parameters leads alternately to an increase and a decrease of the oscillation amplitude, and the characteristic values for this function can differ from the analogous values for Im[R] and |T|.

Fig. 6. Dependence of the absolute value of the transmission coefficient |T| (computed modulus of expression (21)) on the dimensionless parameters l2 and l3 at the fixed parameter Z = 5.0.

Fig. 7. Tunneling of a wave through a pair of barriers separated by a homogeneous segment of length D1, each barrier characterized by its own reflection and transmission coefficients.
Thermal behavior and combustion performance of Al/MoO3 nanothermites with addition of poly (vinylidene fluoride) using electrospraying
To investigate the effect of the addition of poly (vinylidene fluoride) (PVDF) on nanothermites, Al/MoO3/PVDF energetic nanocomposites were prepared by the electrospraying method. As a control group, Al/MoO3 was also prepared. Both samples were tested by FE-SEM, XRD and TG-DSC. The TG-DSC results showed that the Al/MoO3/PVDF energetic nanocomposites released more than 934.0 J g−1 with two obvious exothermic peaks; compared with the 800.7 J g−1 released by the control group, the addition of PVDF changed the thermal performance to some extent. Mo2C was found among the residual products after the reaction by XRD. The activation energy (Ea) was analyzed using the Kissinger method from DSC runs at different heating rates; the addition of PVDF reduced the Ea of the thermites. To explore the combustion performance, a preliminary experiment was designed: the Al/MoO3/PVDF energetic nanocomposites were easier to ignite and burned longer, which is significant for solid propulsion and applications requiring extended combustion time.
Introduction
With the development of nanotechnology, nanothermites, high-energy materials containing a metal-oxide oxidizer and a metal fuel, offer short diffusion distances, large contact areas and good uniformity, and have aroused widespread concern [1][2][3][4]. They have a lower ignition temperature, better reactivity and a faster propagation speed, significantly better than traditional thermites. Carrying out a powerful redox reaction in a short time, they can be applied in ammunition primers [5], nano-welding [6], gas generators [7] and explosive propellants [8,9].
Preparing a more homogeneous structure and reducing the diffusion distance between fuel and oxidizer help improve the thermal performance of thermites, which has attracted many scholars. For example, Kim [10] and Zachariah realized directional assembly and close contact between fuel and oxidizer using an electrostatically enhanced method; this approach intensified the interaction and improved the reactivity of energetic nanocomposites. Wang [11] and his co-authors chose sol-gel technology to prepare Al/Fe2O3 nanocomposites. The results showed that Fe2O3 particles fabricated by the sol-gel method could successfully encapsulate nano-Al, avoiding the generation of an oxide film; however, this method had many influencing factors, some of which were difficult to control. Besides, Ke Xiang and co-authors [12] used the magnetron sputtering method to prepare CuO/Al core/shell structured nanothermites with a stable combustion process and flame propagation speed; good thermal performance and excellent energy retention were proved. Further, Song [13] and his co-authors added potassium perchlorate (KClO4) to the Al/MnO2 nanothermite system by the electrospray method, which effectively reduced the activation energy of the thermite system. Wang and his co-authors successfully prepared Al/CuO and Si/CuO nanothermites with core-shell structure by self-assembly and electrophoretic deposition methods, which showed excellent thermal performance [14][15][16].
Zhou [17] filled micron- and nanometer-sized passivated aluminum particles with PVDF to make composite materials, which significantly improved the thermal conductivity and relative dielectric constant. This is attributed to free-electron transport of the aluminum particles embedded in the PVDF matrix, forming a uniform and dense microstructure. Fluorine, the most abundant halogen, is the most electronegative and least polarizable element known at present, and the nature of its chemical bond determines the physical properties of a fluoropolymer. The C-F bond is the strongest single bond in organic substances (450 kJ mol−1) and more than 100 kJ mol−1 stronger than a C-Cl bond, so its reactivity is reduced by more than 100 times. The addition of a fluoropolymer can also bring other excellent properties, such as moisture resistance and super-hydrophobicity [18]. In the search for new fluoroplastic materials throughout the 1950s, researchers studied and prepared many fluoropolymers; fluorinated ethylene propylene (FEP) was TFE's first copolymer [19]. The following year, polyvinyl fluoride (PVF) and PVDF were created, the former containing one fluorine atom per monomer unit and the latter two. Nevertheless, in terms of solubility, most fluoropolymers are insoluble, but PVDF has great solubility in the polar organic solvent dimethylformamide (DMF) [20], so it is most commonly used. Song [21] prepared Al/MnO2/PVDF using electrostatic spraying; the activation energy of the thermite system was significantly reduced. Li [22] synthesized Al/PVDF/CuO composites using solvent synthesis and explored their properties.
Molybdenum trioxide (MoO3) has unique electrochemical, catalytic and environmental characteristics, excellent electrochromic behavior and electrocatalytic function, and is widely used in battery material research [23]. Compared with CuO, Fe2O3 and MnO2, the enthalpy of the Al/MoO3 thermite reaction is the highest when MoO3 is used as the metallic oxidizer, according to the stoichiometric ratio [9]. What is more, Al/MoO3 composites provide remarkable ignition characteristics. Wolenski and co-authors [24] engineered the composition and morphology of particles in the Al-NP/MoO3 thermite system; the results showed that this method could promote an enhanced response and adjust the combustion behavior.
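As a back-of-the-envelope check of the high reaction enthalpy of the Al/MoO3 pair, the sketch below evaluates the specific heat release of 2Al + MoO3 → Al2O3 + Mo from standard formation enthalpies (tabulated values assumed from common thermochemical tables, not taken from this paper).

```python
# Standard enthalpies of formation at 298 K (kJ/mol), from common tables
dHf_Al2O3 = -1675.7
dHf_MoO3 = -745.1
M_Al, M_MoO3 = 26.98, 143.95   # molar masses, g/mol

# 2 Al + MoO3 -> Al2O3 + Mo  (the elements Al and Mo have dHf = 0)
dH_rxn = dHf_Al2O3 - dHf_MoO3      # kJ per mole of MoO3, about -930.6
m_rxn = 2 * M_Al + M_MoO3          # grams of reactants per mole of MoO3
q_theo = -dH_rxn / m_rxn           # theoretical specific heat release, kJ/g
print(f"theoretical heat release ~ {q_theo:.2f} kJ/g")
```

The result, roughly 4.7 kJ g−1, is well above the experimental heat releases reported below (0.8-0.9 kJ g−1), as expected for a real powder mixture with an inert oxide shell and incomplete reaction.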
In this work, as a binder, PVDF was added to the Al/MoO 3 nanothermites system to explore its impact on combustion and thermal performance. Firstly, the Al/MoO 3 nanothermites were fabricated by ultrasound method. Then, based on the role of PVDF binder, PVDF was uniformly dispersed into Al/MoO 3 by electrospray method to prepare Al/MoO 3 /PVDF energetic nanocomposites. At the same time, Al/MoO 3 also was prepared as the control group. Next, the Al/MoO 3 nanothermites and Al/MoO 3 /PVDF energetic nanocomposites were characterized and tested by Field Emission Scanning Electron Microscope (FE-SEM), x-ray diffraction (XRD) and Thermogravimetric Analysis and Differential Scanning Calorimetry (TG-DSC). To calculate and discuss their activation energy, DSC was implemented at different heating rates. In the end, the preliminary combustion test was designed to observe the actual combustion performance of the thermites.
Materials
All chemicals were analytical-grade reagents and were used directly without any treatment or purification. Nano-Al (∼100 nm) was obtained from Aladdin Industrial Corporation (Shanghai, China). Nano-MoO3 particles (∼50 nm) were purchased from Nano Chemical Technology Co., Ltd (Guangzhou, China). The energetic additive, PVDF, was supplied by Sinopharm Chemical Reagent Co., Ltd; its molecular mass was 476×10³ g mol−1. The absolute ethanol and DMF were obtained from Nanjing Chemical Reagent Co., Ltd. Deionized water, absolute ethanol and DMF were chosen as solvent and dispersant.
Precursor preparation
At first, the ultrasonic mixing method was applied to synthesize the Al/MoO3 nanothermites. According to the stoichiometric ratio, 62 mg of MoO3 and 38 mg of nano-Al (considering the effect of the aluminum oxide layer) were magnetically stirred in cyclohexane for about 30 min. At the same time, 43 mg of PVDF was dissolved in 1 ml of DMF. Then, the PVDF/DMF solution was poured into the Al/MoO3 suspension under ultrasonic conditions; the mixed suspension appeared black, without precipitation. The 43 mg of PVDF accounts for 30 wt% of the total solid mass of 143 mg. Al/MoO3 thermite without the PVDF additive was also prepared as the experimental control group.
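The 38/62 mass split can be cross-checked against the ideal stoichiometry of 2Al + MoO3 → Al2O3 + Mo. The sketch below also back-computes the active-Al content implied by the oxide-layer correction; this implied fraction is an inference from the stated masses, not a value reported in the paper.

```python
# Molar masses (g/mol)
M_Al, M_Mo, M_O = 26.98, 95.95, 16.00
M_MoO3 = M_Mo + 3 * M_O                    # about 143.95

# Thermite reaction: 2 Al + MoO3 -> Al2O3 + Mo
m_Al_per_mol = 2 * M_Al                    # g of Al per mole of MoO3
w_Al = m_Al_per_mol / (m_Al_per_mol + M_MoO3)   # ideal Al mass fraction ~0.27

# The 38/62 split accounts for the inert alumina shell on the nano-Al:
# grams of active Al needed to match 62 mg of MoO3, vs. 38 mg weighed in.
active_Al_needed = 62e-3 * m_Al_per_mol / M_MoO3
active_content = active_Al_needed / 38e-3   # implied active-Al mass fraction
print(f"ideal Al fraction = {w_Al:.3f}, implied active-Al content = {active_content:.2f}")
```

The ideal split for pure metals would be roughly 27/73; weighing in 38 mg of nano-Al for 62 mg of MoO3 corresponds to an assumed active-Al content of about 61%, consistent with a substantial oxide shell on ∼100 nm particles.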
Electrospray experiment
As shown in figure 1, the precursor solution was contained in a syringe with a flat needle whose tip diameter was 0.42 mm. A syringe pump was used to apply a flow rate of 4.0 ml h−1, and a voltage of 13.5 kV was applied between the nozzle and the receiving plate (a square aluminum foil with a side length of 30 cm) to form the Taylor cone. The distance between the nozzle and the receiving plate was about 10 cm. The relative humidity of the experimental environment was 75%.
Under the action of the strong electrostatic field, the precursor liquid advanced through the nozzle, and the nanoparticle-laden charged droplets were accelerated by the electric field and formed a Taylor cone. Because the electrostatic force was greater than the molecular cohesion of the liquid, the sprayed liquid broke into a large number of small droplets [25,26]. Simultaneously, the solvent evaporated quickly, leaving concentrated solid nanoparticles that diffused to the receiving plate to form uniform, highly aggregated thermite composites. Finally, a scraper was used to collect the deposits on the receiving plate, and they were stored in an anti-static bottle.
Characterization and thermal analysis
The MoO 3 sample phase structures and chemical reaction composition were characterized by using XRD analysis (Bruker, D8 Advance, Germany) with CuK α radiation (λ = 0.1542 nm). The morphology, particle size and mixing quality of the materials and mixture were observed by FE-SEM analysis (HITACHI High-Technologies corporation, S-4800 II, Japan).
TG-DSC (NETZSCH STA 449F3, Germany) analysis was applied to investigate the thermal behavior of both the individual components and the thermite samples. All the experiments were conducted under an argon atmosphere at a heating rate of 15 K min−1 over the temperature range from 40 °C to 1000 °C. To further calculate the activation energy, a sample with a mass of about 5 mg in a corundum crucible was heated in the DSC at heating rates of 10, 15, 20 and 25 K min−1.
Theoretical model
As one of the most famous isoconversional methods, the reliable Kissinger method was used to obtain the Ea of Al/MoO3. The Kissinger model equation can be written as follows [27,28]:

ln(β/Tp²) = ln(AR/Ea) − Ea/(R·Tp)

where β is the linear heating rate (K min−1), Tp the absolute peak temperature (K), R the gas constant (J mol−1 K−1), A the pre-exponential factor (s−1) and Ea the activation energy (kJ mol−1). Under the hypothesis that the rate of reaction reaches its maximum at the peak temperature, the plot of ln(β/Tp²) versus 1/Tp should be a straight line, and the activation energy Ea follows from its slope, −Ea/R.
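A minimal sketch of the Kissinger evaluation: synthetic (β, Tp) pairs are generated from the relation above for an assumed activation energy, and a linear fit of ln(β/Tp²) versus 1/Tp recovers it from the slope. The numbers are illustrative, not the paper's data.

```python
import numpy as np

R = 8.314             # gas constant, J mol^-1 K^-1
Ea_true = 150e3       # assumed activation energy, J mol^-1 (illustrative)
A = 1e9               # assumed pre-exponential factor, s^-1

# Synthetic DSC peak temperatures (K); the matching heating rates follow
# from the Kissinger relation beta = Tp^2 * (A*R/Ea) * exp(-Ea/(R*Tp)).
Tp = np.array([780.0, 795.0, 810.0, 825.0])
beta = Tp**2 * (A * R / Ea_true) * np.exp(-Ea_true / (R * Tp))

# Kissinger plot: ln(beta/Tp^2) vs 1/Tp is a line with slope -Ea/R.
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea_fit = -slope * R
print(f"recovered Ea = {Ea_fit / 1e3:.1f} kJ/mol")
```

In practice, Tp is read off the DSC exothermic peak at each heating rate, and the same linear fit is applied to the measured pairs.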
Preliminary combustion tests
The potential energy capacity of thermite can be characterized by thermal analysis, while the combustion performance is an indicator that reflects the actual properties, which is essential for practical applications.
Heating-wire ignition experiments tested the burning characteristics of the samples. The heating wire, of 0.1 mm diameter, was heated by a DC power supply to ignite 8 mg samples. A high-speed camera (FASTCAM-AZ) was used to record the combustion processes with a sampling rate of 20,000 frames per second and a frame size of approximately 1024 × 512 pixels; the aperture value was set to 6.4. The schematic diagram of the experimental device is shown in figure 2.
Results of nano-MoO 3
To explore the properties of the samples, XRD was used to test the phase structure, and SEM was introduced to observe the morphology.
XRD analysis
The XRD pattern of the sample is shown in figure 3. The diffraction peaks from 5° to 90° can be attributed to MoO3 (ICSD No. 76-1003, MDI Jade 6.0); the diffraction pattern of the sample has six broad peaks.

SEM analysis

Figure 4 shows the FE-SEM image of the MoO3. Figure 4(a) is the overall FE-SEM image of the nano-MoO3; for more accurate observation, the region in the red box in figure 4(a) is enlarged, as shown in figure 4(b). The pictures show that the nano-MoO3 particles have a round shape with a diameter of 50-80 nm. The surface is smooth and flat with little agglomeration, although some large blocks have minor effects on the subsequent thermal behavior.
Results of nanocomposites and nanothermites XRD analysis
The XRD patterns of the synthesized nanocomposites and of PVDF are shown in figure 5. The red line indicates the XRD of Al/MoO3/PVDF, the blue line represents the control group Al/MoO3, and the black line is the XRD of PVDF. The peaks appearing in the red line can be well indexed to MoO3 (ICSD No. 76-1003, MDI Jade 6.0) and Al (ICSD No. 04-0787, MDI Jade 6.0). The diffraction peaks are sharp and intense, indicating high crystallinity, and no impurity peaks are observed, confirming the high purity of the products.
No characteristic diffraction peaks of PVDF are observed, which could be caused by its low loading content and weak crystallization; on the other hand, this implies good dispersion of the tiny PVDF clusters on the Al/MoO3 surface. Generally, PVDF undergoes a process of rapid evaporation and recrystallization. However, given the preparation method of the precursor, under the action of a high-voltage electric field PVDF is difficult to recrystallize after evaporation and dispersion, especially for polymers with large molecular masses. Therefore, it is hard to find obvious characteristic peaks in the XRD pattern.
The blue line is not much different from the red line, showing similar characteristics.
SEM analysis
To better understand the morphological characteristics of the composites, FE-SEM was used to observe their features. It is worth noting that, although no obvious characteristic peaks of PVDF are seen in the XRD diffraction pattern, PVDF can be clearly captured by FE-SEM. Figure 7 shows the overlaid distributions of the elements Al, Mo and F, indicating that these elements are evenly distributed and confirming the uniformity of the distribution of the components.
Combining the results of figures 6 and 7, it can be found that Al is directly attached to MoO3, and its dispersion is relatively uniform with few agglomerations. As is well known, agglomeration is inevitable [29], but the electrostatic spray method can effectively reduce it. In figures 6(c) and (d), it can be observed that PVDF, used as an adhesive, tightly glues Al and MoO3 together; the bonding effect of PVDF can be clearly seen in figure 6(d). Besides, when the fluoropolymer is heated to a specific temperature, its decomposition reaction releases heat; at this point PVDF becomes a reactant in the redox reaction and participates in it [30].
Thermal analysis and kinetics calculation
TG-DSC analysis
Pure PVDF, Al/MoO 3 and Al/MoO 3 /PVDF were tested by TG-DSC to investigate the effect of PVDF addition on the nanothermite system. The results are shown in figure 8 and the main details are listed in table 1. Figure 8(a) shows the TG-DSC results of pure PVDF. There are three peaks in the curve. A small endothermic peak (peak A) corresponds to the melting of PVDF at 170°C [31]. Peaks B and C, two large exothermic peaks at 480°C and 690°C accompanied by rapid mass loss, represent the decomposition of PVDF, which releases a large amount of heat (1161.12 J g −1 ). After the decomposition reaction, the mass of PVDF no longer decreases, leaving a residue of 30.46%.
The TG-DSC of the Al/MoO 3 nanothermite is shown in figure 8(b). Before 400°C, the mass of the sample decreases slightly by 3% due to both physisorbed and structural H 2 O and ethanol, accompanied by a small endothermic peak D [32]. As the temperature increases, the main exothermic peak of the Al/MoO 3 thermite starts at 553°C. The two exothermic peaks E and G correspond to the same thermite reaction process, separated by an endothermic peak F that represents the melting of Al at 660°C. Exothermic peak E reflects a solid-solid phase reaction, while exothermic peak G indicates a liquid-solid phase reaction between molten Al and solid MoO 3 , which might be caused by the larger blocks of MoO 3 reacting with melted Al. The onset temperature of the process is 519°C and the endpoint is 824°C, with a heat release of 800.72 J g −1 .
It can be seen that region I corresponds to the decomposition of PVDF. In the exothermic zone J, the primary thermite reaction occurs between Al and MoO 3 , with an exothermic heat of approximately 771.3 J g −1 . At the same time, there is no noticeable mass change in the TG curve, implying that Al and MoO 3 react completely and no residual MoO 3 remains. Besides, the contact between Al and MoO 3 becomes closer and the reaction more complete once Al is melted. Due to the addition of PVDF, the exothermic peak of this stage is at 680°C, much earlier than that in figure 8(b). What is more, the total heat release of the Al/MoO 3 /PVDF energetic nanocomposite is 934.0 J g −1 , which improves on the heat release of Al/MoO 3 (800.7 J g −1 ) to a certain degree.
Analysis of reaction products
The residual products after the TG-DSC tests are collected and characterized by XRD to analyse the reaction. The results are shown in figure 9.
In the control group, the reactants are Al and MoO 3 , and the residual products after the reaction are detected as Mo and Al 2 O 3 . No reactants are detected in the residue, indicating that the reaction was complete and only the thermite reaction occurred. In the experimental group with PVDF added, the residual products after the reaction are Mo, Al 2 O 3 and Mo 2 C. It is speculated that part of the Mo produced by the thermite reaction reacted with PVDF to produce Mo 2 C.
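Assuming the classic thermite stoichiometry, the product pattern in figure 9 is consistent with the following scheme; the carburization step is the speculation noted above, not a confirmed mechanism:

```latex
\begin{align*}
2\,\mathrm{Al} + \mathrm{MoO_3} &\rightarrow \mathrm{Al_2O_3} + \mathrm{Mo} \\
2\,\mathrm{Mo} + \mathrm{C}\,(\text{from PVDF decomposition}) &\rightarrow \mathrm{Mo_2C}
\end{align*}
```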
Kinetics analysis
Activation energy, reflecting the degree of difficulty of a chemical reaction, represents the minimum energy required for a chemical reaction to occur; reducing the activation energy of the Al/MoO 3 reaction can therefore promote it. According to the Kissinger method, plots of ln(β/T p 2 ) versus 1/T p are constructed, and the result is shown in figure 11. The fitted line for exothermic peak A of Al/MoO 3 is y = −35444x + 26, with a correlation coefficient R of −0.94019. The activation energy Ea calculated from the slope of this linear fit is 294.6 kJ mol −1 . Using the same approach, the fitted lines for exothermic peak B of Al/MoO 3 and for Al/MoO 3 /PVDF are y = −70928x + 56 and y = −33981x + 24, with correlation coefficients of 0.99058 and 0.94698, respectively. Prominently, the addition of PVDF allows the thermite to react fully: the reaction of MoO 3 with melted Al occurs much earlier, and the activation energy is significantly reduced.
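The activation energies can be recovered directly from the reported slopes. In the Kissinger relation, ln(β/T p 2 ) = −Ea/(R·T p ) + const, so Ea = −slope × R. A minimal sketch, using only the slopes quoted in the text (peak labels follow figure 11):

```python
# Kissinger method: the slope of ln(beta/Tp^2) vs 1/Tp equals -Ea/R,
# so Ea = -slope * R. Slopes below are the fitted lines reported in
# the text: y = -35444x + 26, y = -70928x + 56, y = -33981x + 24.

R = 8.314  # universal gas constant, J mol^-1 K^-1

def ea_from_slope(slope):
    """Return the Kissinger activation energy (kJ mol^-1) for a
    given slope of the ln(beta/Tp^2) vs 1/Tp fit."""
    return -slope * R / 1000.0

for label, slope in [("Al/MoO3 peak A", -35444),
                     ("Al/MoO3 peak B", -70928),
                     ("Al/MoO3/PVDF",   -33981)]:
    print(f"{label}: Ea = {ea_from_slope(slope):.1f} kJ/mol")
```

The first slope reproduces the 294.6 kJ mol −1 quoted in the text, and the Al/MoO 3 /PVDF value comes out lower than that of Al/MoO 3 , consistent with the stated reduction in activation energy.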
Result of preliminary combustion tests
The results of the preliminary combustion tests are shown in figure 12. In figure 12, the appearance of red light is set as the starting time (0 ms). It can be seen that during ignition and combustion, the flame energy release is concentrated and only a small amount of spark splashes.
Obviously, the figure shows that, compared to Al/MoO 3 /PVDF, the reaction of Al/MoO 3 is more violent, burning faster and shining brightly, reaching maximum intensity at about 400 μs. However, the excitation current required for Al/MoO 3 /PVDF ignition is 0.678 A, which is much smaller than the 0.838 A required for Al/MoO 3 ignition, implying that the addition of PVDF makes the Al/MoO 3 nanothermite easier to ignite. In terms of burning time, the release of Al/MoO 3 is too fast, being almost complete after 10 ms. In contrast, the release of Al/MoO 3 /PVDF is more durable: it still maintains its most violent burning state at 20 ms and gradually begins to extinguish after 35 ms.
PVDF with a mass fraction of 30% reduces the Al/MoO 3 thermite content, so the heat output is attenuated to a certain extent. Still, PVDF is of great significance in improving the activity of the thermite, lowering the reaction conditions and lengthening the reaction time.
Conclusions
In this work, the impact of PVDF on the Al/MoO 3 nanothermite system was investigated. Firstly, the MoO 3 was characterized by SEM and XRD. Then the Al/MoO 3 /PVDF nanocomposites were assembled via the electrospraying method, and the Al/MoO 3 nanothermite was also prepared in the same way as a control group. According to the SEM and mapping results, agglomeration of the components could be effectively reduced through the electrospraying method, and all the nanoparticles were evenly distributed.
TG-DSC results show that the Al/MoO 3 /PVDF energetic nanocomposites have two obvious exothermic areas in the range of room temperature to 1000°C, releasing 934.0 J g −1 of heat in total, while the control group, Al/MoO 3 , has one heat release area in the same range with about 800.7 J g −1 released. The addition of PVDF thus slightly improves the heat release. Besides, the activation-energy calculation, carried out from thermal analysis experiments at different heating rates using the Kissinger method, shows that the activation energy of the Al/MoO 3 /PVDF nanocomposites is significantly reduced. The results reveal that, as a high-energy additive, PVDF can dramatically improve the reaction's activity and help ignite the thermite at comparatively low energy.
A preliminary combustion test was conducted and recorded by high-speed photography. The excitation current at which Al/MoO 3 /PVDF is ignited is 0.678 A, lower than the 0.838 A of Al/MoO 3 . Meanwhile, the Al/MoO 3 /PVDF nanocomposites burn for a longer duration. Obviously, the addition of PVDF makes ignition easier and increases the reaction time, which corresponds with the TG-DSC results.
Association between hospital and ICU structural factors and patient outcomes in China: a secondary analysis of the National Clinical Improvement System Data in 2019
Background Hospital and ICU structural factors are key factors affecting the quality of care as well as ICU patient outcomes. However, the data from China are scarce. This study was designed to investigate how differences in patient outcomes are associated with differences in hospital and ICU structure variables in China throughout 2019. Methods This was a multicenter observational study. Data from a total of 2820 hospitals were collected using the National Clinical Improvement System Data that reports ICU information in China. Data collection consisted of a) information on the hospital and ICU structural factors, including the hospital type, number of beds, staffing, among others, and b) ICU patient outcomes, including the mortality rate as well as the incidence of ventilator-associated pneumonia (VAP), catheter-related bloodstream infections (CRBSIs), and catheter-associated urinary tract infections (CAUTIs). Generalized linear mixed models were used to analyse the association between hospital and ICU structural factors and patient outcomes. Results The median ICU patient mortality was 8.02% (3.78%, 14.35%), and the incidences of VAP, CRBSI, and CAUTI were 5.58 (1.55, 11.67) per 1000 ventilator days, 0.63 (0, 2.01) per 1000 catheter days, and 1.42 (0.37, 3.40) per 1000 catheter days, respectively. Mortality was significantly lower in public hospitals (β = − 0.018 (− 0.031, − 0.005), p = 0.006), hospitals with an ICU-to-hospital bed percentage of more than 2% (β = − 0.027 (− 0.034, − 0.019), p < 0.001) and higher in hospitals with a bed-to-nurse ratio of more than 0.5:1 (β = 0.009 (0.001, 0.017), p = 0.027). The incidence of VAP was lower in public hospitals (β = − 0.036 (− 0.054, − 0.018), p < 0.001).
The incidence of CRBSIs was lower in public hospitals (β = − 0.008 (− 0.014, − 0.002), p = 0.011) and higher in secondary hospitals (β = 0.005 (0.001, 0.009), p = 0.010), while the incidence of CAUTIs was higher in secondary hospitals (β = 0.010 (0.002, 0.018), p = 0.015). Conclusion This study highlights the association between specific ICU structural factors and patient outcomes. Modifying structural factors is a potential opportunity that could improve patient outcomes in ICUs. Supplementary Information The online version contains supplementary material available at 10.1186/s13054-022-03892-7.
Background
Healthcare delivery is challenging and complex in the intensive care unit. Several factors, including ICU structure, ICU organization and the care process, can influence ICU performance [1][2][3]. Despite the extensive literature addressing the quality of care in ICUs, the impact of such factors remains controversial, and a diligent assessment of care components is required.
It is well known that ICU structural factors vary within different countries and regions [4,5]. However, most related studies have been conducted in Western countries [6,7]. Little evidence regarding the structural factors of ICUs in China is available [8]. China still faces challenges in providing optimal and equitable management strategies for ICU patients across the nation because of its broad geography and unbalanced economic development. A previous study described critical care resources in Guangdong Province [9]. Another study evaluated the practices, outcomes, and costs related to mechanical ventilation within ICUs in Beijing [10]. Nonetheless, those studies were limited to the regions in which the surveys were administered and feature small sample sizes; it is also not clear whether those resources were associated with ICU care provision, treatment patterns, and patient outcomes. Finally, increasing demands and rising costs pose significant challenges to the delivery of high-quality and affordable critical care to a growing population of patients. Optimizing ICU organization is a potential opportunity to improve patient outcomes and the use of resources.
Therefore, the aim of this study was to investigate hospital and ICU structural factors and patient outcomes in China. Moreover, we aimed to identify the association between these variables and patient outcomes, with a focus on potential structural factors, including ICU structural resources and staffing levels. We hypothesized that patients admitted to hospitals that were government-funded, tertiary, well-equipped, and better-staffed would have a decreased risk of ICU mortality and occurrence of VAP, CRBSIs and CAUTIs after adjusting for region, disease severity, and other confounders.
Design
This was a nationwide, observational database study in 2019. The data source was the National Clinical Improvement System (https:// ncisdc. medid ata. cn/ login. jsp), collected by the China-National Critical Care Quality Control Centre (China-NCCQC), which is the official national department that regulates ICU quality control in China. The Ministry of Health of China approved the establishment of the China-NCCQC at Peking Union Medical College Hospital in 2012. The Quality Improvement of Critical Care Program, led by China-NCCQC, was initiated in 2015. This study is part of the above program. Permission to use the data was obtained from the China-NCCQC.
Study population and settings
The China-NCCQC collected the relevant data regarding quality control indicators through the database of the National Clinical Improvement System. Hospitals in China are classified in a 3-tier system (primary, secondary or tertiary hospital) that recognizes a hospital's ability to provide medical care and medical education and to conduct medical research. Tertiary hospitals, similar to a tertiary referral hospital in the West, are usually comprehensive, referral, general hospitals responsible for providing specialist health services; they perform a larger role with regard to medical education and scientific research and serve as medical hubs providing care to multiple regions. Secondary hospitals, similar to a regional hospital or district hospital in the West, tend to be affiliated with a medium-sized city, county, or district and are responsible for providing comprehensive health services and medical education and conducting research on a regional basis. In contrast, primary hospitals are primary health care institutions whose main function is to provide primary prevention directly to the population; however, they rarely admit and treat critically ill patients. Therefore, primary hospitals were not included in the scope of the study.
The enrolled tertiary and secondary hospitals voluntarily participated and were selected by the China-NCCQC. The selection criteria were as follows. (1) The ICU had to have more than five beds. (2) The ICU had to have the ability to diagnose and treat the relevant medical diseases that were evaluated as quality control items (such as ventilator-associated pneumonia (VAP), catheter-related bloodstream infections (CRBSIs), and catheter-associated urinary tract infections (CAUTIs)). Hospitals without ICUs were excluded from the study. The 31 provinces/municipalities/autonomous regions of mainland China were included in this survey (data from Hong Kong, Taiwan, and Macao were not included). There were 12,436 registered hospitals (including 2749 tertiary hospitals and 9687 secondary hospitals) across the country in 2019 [11], and a total of 2820 hospitals (including 1383 tertiary hospitals and 1437 secondary hospitals) in China were involved in the current analysis.
Variables and measurements Hospital and ICU structure factors
In this study, the structural factors of the hospital and ICU were evaluated according to the National Clinical Quality Control Indicators for Critical Care Medicine (2015 Edition) released by the China-NCCQC [12]. The structural indicators that were monitored included hospital characteristics and ICU characteristics in 2019. The hospital characteristics included the region (Eastern China, Central China, Western China, North-eastern China), location (metropolitan cities, other cities and rural areas), type (secondary, tertiary), ownership (private, public), and ICU-hospital bed percentages (calculated by the number of total ICU beds divided by the number of beds in the hospitals). The ICU characteristics included the physician-to-bed ratio (calculated by the total number of ICU physicians divided by the total number of ICU beds), bed-to-nurse ratio (calculated by the total number of ICU beds divided by the total number of full-time equivalent registered nurses working in the ICU), single rooms, and extracorporeal membrane oxygenation (ECMO) equipment. The proportion of ICU patients with APACHE II scores ≥ 15 (%) and the 6-h compliance rate with the surviving sepsis campaign guidelines (1. Completion of repeated measurement of lactate levels in patients with initial hyperlactatemia, 2. completion of resuscitation with vasopressors in patients with mean arterial pressure [MAP] ≤ 65 mmHg after fluid resuscitation, 3. completion of central venous pressure [CVP] and central venous oxygen saturation [ScvO2] measured in patients with lactate ≥ 4 mmol/L) and the microbiology detection rate before antibiotic use (defined as (no. of patients with microbiology detection before antibiotics)/(no. of patients who received antibiotics during the same period)) were also collected as controlling factors.
ICU patient outcomes
The ICU patient outcomes included the ICU mortality rate and the incidence of VAP, CRBSIs and CAUTIs in 2019. The ICU mortality rate (%) was defined as the number of patients who died in the ICU/the number of patients admitted to the ICU during the same period. The VAP incidence rate per 1000 ventilator days was defined as the number of patients with VAP/the number of patients with mechanical ventilation during the same period. The CRBSI incidence rate per 1000 catheter days was defined as the number of patients with CRBSIs/the number of patients with a central venous catheter during the same period. The CAUTI incidence rate per 1000 catheter days was defined as the number of patients with CAUTIs/the number of patients with a urinary catheter during the same period [13]. The definitions of these outcome indicators are described in Table S1 in Additional file 1.
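The outcome indicators above can be sketched as simple rate computations. A minimal illustration, with hypothetical inputs; the device-day denominators are an assumption implied by the "per 1000 ... days" units in which the rates are reported:

```python
# Hedged sketch of the outcome indicators defined in the text.
# Variable names are illustrative, not from the study database.

def icu_mortality(deaths, admissions):
    """ICU mortality (%): deaths in the ICU divided by the number of
    patients admitted to the ICU during the same period."""
    return 100.0 * deaths / admissions

def rate_per_1000_days(events, device_days):
    """Device-associated infection rate per 1000 device-days; the
    same formula applies to VAP (ventilator days), CRBSIs and
    CAUTIs (catheter days)."""
    return 1000.0 * events / device_days

# Illustrative figures chosen to reproduce the reported medians:
print(icu_mortality(802, 10000))        # 8.02 (%)
print(rate_per_1000_days(558, 100000))  # 5.58 (per 1000 ventilator days)
```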
Data collection
The data were collected between January 1, 2019, and December 31, 2019, and were entered into a webbased data entry system by a local, trained independent research coordinator. Range checks were used to check for inconsistent or out-of-range data, prompting the user to correct or review data entries outside the predefined range. The system also used predefined logical checks to identify any errors or illogical data entries. A data quality meeting was held monthly to review all hospital enrolment records and registry data.
Ethical considerations
The current study is reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology guidelines. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The trial protocol was approved by the Central Institutional Review Board at Peking Union Medical College Hospital (NO. SK1828), and individual consent for this analysis was waived. No identifying or protected health information was included in the analysed dataset.
Data analysis
All statistical analyses were performed in SAS 9.4 (SAS Institute Inc., Cary, NC, USA). Normally distributed data are expressed as the mean and standard deviation and were compared using Student's t test. Nonnormally distributed data are presented as the median and interquartile range (IQR) and were analysed using the nonparametric Mann-Whitney U test. To identify the adjusted effects of the structural variables on patient outcomes, a multivariate analysis was conducted using generalized linear mixed models with two random intercepts to account for the effects of region and location. The model took into account the fact that patients from the same region or city may share unmeasured characteristics: patient outcomes would be more alike among patients from the same region than among patients from different regions. The mixed-effects model analyses were run separately for each patient outcome.
Covariates that were considered to be important impact factors in the patient outcomes based on the prior literature and the univariate analysis were taken as candidates for inclusion in the models. In addition, patient variables (the proportion of ICU patients with an APACHE II score ≥ 15 (%)) and processing factors (the 6-h SSC bundle compliance rate (%) and the microbiology detection rate before antibiotic use (%)) were also included in the model. The results are expressed as the p value and beta (β) with the 95% confidence interval (CI). A missing value analysis was conducted. The percentage of missing values across the variables varied between 0 and 12.02%. In total, 2387 observations were complete (84.65%). Listwise deletion was used to handle missing data. All statistical tests were two-tailed, and p < 0.05 was considered to be statistically significant.
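The listwise-deletion step described above can be sketched as follows: a hospital record enters the models only if every analysed variable is present. A minimal illustration with hypothetical field names (the study itself used SAS):

```python
# Hedged sketch of listwise (complete-case) deletion. Field names
# are illustrative, not the study's actual variable names.

def listwise_delete(records, fields):
    """Keep only records with no missing (None) value in `fields`."""
    return [r for r in records if all(r.get(f) is not None for f in fields)]

records = [
    {"mortality": 8.0, "ownership": "public",  "beds": 1200},
    {"mortality": 6.5, "ownership": None,      "beds": 800},   # dropped
    {"mortality": 9.1, "ownership": "private", "beds": None},  # dropped
]
complete = listwise_delete(records, ["mortality", "ownership", "beds"])
print(len(complete))  # 1 complete case out of 3
```

As the Discussion notes, this approach can bias parameter estimates when data are not missing completely at random, which is why the authors flag it as a limitation.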
Hospital structure characteristics
A total of 2820 hospitals from 31 provinces were included in the data analysis. All hospital structural characteristics were analysed and are presented in Table 1. The number of tertiary hospitals was 1383 (49.04%), compared to 1437 (50.96%) secondary hospitals. A larger proportion of the hospitals were from Western China (1083, 38.40%), were public (2574, 91.28%), and were located in nonmetropolitan cities or rural areas (2288, 81.13%). The median ICU-hospital bed percentage was 1.77% (1.29%, 2.42%). There were significant differences in all hospital characteristics between the tertiary and secondary hospitals, except for the ICU-hospital bed percentage.
ICU structure characteristics
All ICU structural characteristics are presented in Table 2. The median physician-to-bed ratio was 0.60 (0.44, 0.78), while the median bed-to-nurse ratio was 0.55 (0.44, 0.71). A large proportion (59.57%) of the ICUs had more than one private patient room. Respondents reported 450 ICUs equipped with ECMO (15.96%). The median proportion of patients with an APACHE II score ≥ 15 24 h after admission was 58.91% (35.12%, 76.30%). The median proportion of the 6 h SSC bundle compliance rate (%) was 80.00% (50.00%, 100%), while the median proportion of the microbiology detection rate before antibiotic use (%) was 91.67% (71.47%, 100%). Moreover, there were significant differences in all of these characteristics between the tertiary and secondary hospitals.
ICU patient outcomes
The health outcomes of ICU patients were analysed and are presented in Table 3. Overall, the median ICU patient mortality was 8.02% (3.78%, 14.35%). In addition, the incidences of VAP, CRBSIs and CAUTIs were 5.58 (1.55, 11.67) per 1000 ventilator days, 0.63 (0, 2.01) per 1000 catheter days and 1.42 (0.37, 3.40) per 1000 catheter days, respectively. There were significant differences in patient outcomes between the tertiary and secondary hospitals except for the incidence of VAP.
Association between hospital and ICU structural factors and ICU patient outcomes
The multivariate analysis showed that mortality was significantly lower in public hospitals (β = − 0.018 (− 0.031, − 0.005), p = 0.006) and in hospitals with an ICU-to-hospital bed percentage of more than 2% (β = − 0.027 (− 0.034, − 0.019), p < 0.001), and higher in hospitals with a bed-to-nurse ratio of more than 0.5:1 (β = 0.009 (0.001, 0.017), p = 0.027). The incidence of VAP was lower in public hospitals (β = − 0.036 (− 0.054, − 0.018), p < 0.001). The incidence of CRBSIs was lower in public hospitals (β = − 0.008 (− 0.014, − 0.002), p = 0.011) and higher in secondary hospitals (β = 0.005 (0.001, 0.009), p = 0.010), while the incidence of CAUTIs was higher in secondary hospitals (β = 0.010 (0.002, 0.018), p = 0.015).
Discussion
Gaps and variations in patient care and outcomes for ICU patients exist within and across countries worldwide, particularly between developed and developing regions. In this study, we report detailed information on the structural factors for a large sample of Chinese ICUs. We found that the hospital and ICU structure and patient outcomes varied substantially among the participating hospitals. Some of the structural factors were associated with ICU patient outcomes. The population of China has aged rapidly, and the rate will continue to accelerate in the decades to come. Meanwhile, the number of hospitals and hospital volume is gradually increasing. An ageing society could also induce an increase in ICU admissions and ICU demand. Studies have shown that the organization, structure, and delivery of critical care in China are different from those in Asia, Europe and North America [13][14][15][16]. Critical care medicine in mainland China is still in a phase of development. China still has a large gap with developed countries in the number of ICU beds and capacity, clinician staffing, critical care technicians (such as respiratory therapists), ICU equipment, and so on. For example, the rapid expansion of hospital beds was disproportionate to the severe shortage of ICU beds. The ratio of ICUs to hospital beds recommended by the Guidelines for the Construction and Management of Critical Care Medicine in China was 2-8% [17]. Although the proportion is relatively lower than that in many developed countries, a large number of hospitals did not meet this recommendation [18]. Under the condition of limited and unevenly distributed ICU resources, exploring the impact of organizational and structural factors on ICU patient outcomes in Chinese ICUs could provide a valuable reference for the further improvement of critical care quality. 
To our knowledge, this is the first national report on hospital-and unit-level differences in medical care and outcomes for ICU patients in China, and it reveals the gaps and challenges that China is facing. These findings establish the fundamental and current status for the care and outcomes of ICU patients and serve as a basis to guide efforts for quality improvements in intensive care and in the allocation of resources.
Our results showed that hospital ownership was significantly associated with ICU patient outcomes. The mortality rate and incidence of VAP and CRBSIs in ICU patients admitted to private hospitals were higher than those in patients admitted to public hospitals. There are some possible explanations for this finding. Hospitals in China are classified into public hospitals and private hospitals according to ownership and economic type. Public hospitals are non-profit and receive financial subsidies from the state, so their medical prices are strictly limited. Private hospitals, which are believed to be indispensable supplements to public hospitals to enhance healthcare quality and efficiency across the nation and to meet the rapidly increasing demand for diversified health care, are generally profit-making hospitals under the government's supervision and are responsible for their profits and losses, with independent decisions made on medical prices. Since 1980, private hospitals have begun to appear in China's medical industry. The large-scale development of private hospitals in China occurred after 2001. The number of private hospitals exceeded that of public ones in 2015. In 2019, the number of private hospitals in China reached 22,424, while the number of public hospitals had declined to 11,930 [11]. Despite the growth in the number of facilities, private hospitals still face several challenges in the Chinese social and medical context. In a relatively short period of development, private hospitals are more likely to be smaller and specialized hospitals, and the most substantial issues facing private hospitals are the recruitment of high-quality physicians and the lack of public insurance coverage. While public hospitals typically have more beds, staff, and tertiary care capacity, they usually allocate sufficient medical talent, equipment and resources to meet the needs of patients [19]. 
A recent study in Beijing, China showed that the technical efficiency, pure technical efficiency, and scale efficiency of public hospitals were higher than those of private hospitals [20]. In addition, public hospitals usually have a better reputation than private hospitals, and reputation also influences the performance and efficiency of hospitals. As a result, differences exist in the outcomes of ICU patients in public and private hospitals. With the rapid development of private hospitals in China, measures need to be taken to further improve the quality of care and outcomes of ICU patients in private hospitals. In this study, ICU patients admitted to tertiary hospitals in China had a lower CAUTI incidence than patients admitted to secondary hospitals. Being treated in a secondary hospital was associated with a higher CAUTI incidence in the multivariate analysis. The difference in the clinical care, diagnostic protocols, assessment, and treatment of hospital infection might account for the observed discrepancy in the CAUTI incidence among ICU patients. Recent studies have also observed higher mortality rates in secondary hospitals than in tertiary hospitals [21,22]. These complications also reflect the combined effect of patient case-mix and quality of care. In China, tertiary hospitals are usually comprehensive, referral-based, general hospitals responsible for providing specialist health services and performing a larger role with regard to medical education and scientific research. They also serve as medical hubs providing care to multiple regions, while secondary hospitals are responsible for providing comprehensive health services, medical education and conducting research on a regional basis. 
For one thing, patients with acute and critical illnesses tend to be treated in tertiary hospitals rather than secondary hospitals, resulting in a higher proportion of patients with acute and critical illnesses in tertiary care hospitals, and for another, tertiary hospitals are usually more adequately staffed and equipped compared to secondary hospitals, which may lead to a lower rate of complications in ICU patients from tertiary hospitals. Intensive training and technical support for ICU staff in secondary hospitals should be implemented to narrow the gaps and variations in the care and outcomes of ICU patients.
We found that an ICU-to-hospital bed percentage of more than 2% was independently associated with a lower mortality rate in ICU patients, which is consistent with a contemporary study [23]. Previous studies indicated that larger hospitals and hospitals with high ICU occupancy were more likely to increase their number of ICU beds compared to other hospitals. Small hospitals and hospitals with relatively low ICU occupancy were less likely to add ICU beds in the subsequent year [24]. When the ICU occupancy rate is high, lower ICU bed ratios could possibly result in delayed ICU admissions, which in turn may affect patient outcomes. It is noteworthy that the result should be interpreted with caution. A combination of factors, including ICU bed occupancy rate, acuity of patients, and capacity of other departments (e.g., operation rooms), should be considered when making the decision whether to expand ICUs in a hospital.
A large number of studies have reported that a higher number of nursing staff was associated with a lower in-hospital mortality rate [7,22,25,26]. Our study showed a similar trend. Patients from an ICU with a bed-to-nurse ratio of more than 0.5 (two nurses per bed) had significantly higher mortality. The bed-to-nurse ratio is a widely used indicator of nurse staffing in ICUs and general wards, and larger bed-to-nurse ratios indicate worse staffing. ICU patients are highly dependent on nursing care due to the nature of their illnesses, the need for continuous invasive monitoring, and the need for multiple organ system support. Variables that mediated the relationship between nurse staffing and the patient outcome of death were inferred to be insufficient physician collaboration, excessive workload, increased medical errors, and missed nursing care [3,27]. Moreover, one key role that ICU nurses perform is patient monitoring. ICU nurses are at the patient's bedside around the clock and are paramount for the early identification of problems. Failure of such monitoring may cause life-threatening complications such as pneumothorax or unexpected extubation, which require prompt recognition and treatment [28]. A shortage of nursing staff could be associated with insufficient supervision and might inhibit the early recognition of any changes in the status of the patients [25]. This finding could provide useful information for nurse managers and policymakers to determine whether staffing levels are adequate and safe, not just whether there is a relationship between staffing and outcomes. This finding should be considered in light of the lower nurse-to-patient ratios in China and other low- and middle-income countries. It should also be noted that the average bed-to-nurse ratio used in this study would not be equal to the nurse-to-patient ratio of the units, where the numbers of nurses and patients are in constant dynamic change.
Furthermore, the quality of the nursing staff (e.g., educational background and advanced training) could be a confounder of ICU patient mortality. Future research using more advanced study designs and analytical approaches is needed to examine the dynamic impact of nurse staffing on ICU patient outcomes.
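The staffing indicator discussed above can be sketched in a few lines. This is an illustrative calculation only (the function names and example values are ours, not the study's): the bed-to-nurse ratio is beds divided by nurses, and a ratio above 0.5 means fewer than two nurses per bed.

```python
# Illustrative sketch (not the study's code): computing the average
# bed-to-nurse ratio for an ICU and flagging the threshold used in the
# analysis (ratio > 0.5, i.e. fewer than two nurses per bed).

def bed_to_nurse_ratio(icu_beds: int, nurses: int) -> float:
    """Beds per nurse; higher values indicate worse staffing."""
    if nurses <= 0:
        raise ValueError("nurse count must be positive")
    return icu_beds / nurses

def understaffed(icu_beds: int, nurses: int, threshold: float = 0.5) -> bool:
    """True when there are fewer than two nurses per ICU bed."""
    return bed_to_nurse_ratio(icu_beds, nurses) > threshold

# 10 beds covered by 25 nurses -> ratio 0.4, adequately staffed
print(bed_to_nurse_ratio(10, 25))  # 0.4
print(understaffed(10, 25))        # False
# 10 beds covered by 16 nurses -> ratio 0.625, flagged
print(understaffed(10, 16))        # True
```

Note that this is a unit-level average; as the text cautions, it is not the same as a nurse-to-patient ratio, which changes continuously with census and shift patterns.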
There are some limitations to this study. First, the use of secondary data meant that essential variables, such as patient-level characteristics and uncontrolled confounders (e.g., the education level of the staff, the ICU bed occupancy rate), were unavailable for understanding the predictors of patient outcomes. To reduce these methodological problems, we applied multivariate analysis using generalized linear mixed models adjusted for patient disease severity with the APACHE II score, which has been shown to be an important predictor of patient outcomes [29,30]. Second, since only one year of data was available, the relationships between structural factors and health outcomes could not be analysed continuously and dynamically. Third, this was an observational study and therefore prone to selection bias; causal relationships cannot be drawn given the cross-sectional study design. Fourth, listwise deletion was used to handle missing data, which may bias the parameter estimates [31]. Despite these limitations, the results of this study are meaningful in that they underscore the importance of nonpatient factors, including hospital and ICU structural factors, in reducing adverse patient outcomes. In addition, we improved the generalizability of the findings by using national administrative data, unlike most previous studies, which relied on data from only certain regions or hospitals. The results may have important implications for critical care development in China and other countries with similar medical environments.
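The listwise-deletion limitation mentioned above can be made concrete with a small sketch (the records and field names are hypothetical, not from the study's dataset): any record with a missing analysed variable is dropped entirely, which can bias estimates when missingness is not completely at random.

```python
# Illustrative sketch of listwise deletion (not the study's actual code):
# a record is excluded if any analysed variable is missing (None here).

def listwise_delete(records, fields):
    """Keep only records with non-missing values for all given fields."""
    return [r for r in records if all(r.get(f) is not None for f in fields)]

records = [
    {"apache_ii": 18, "mortality": 0},
    {"apache_ii": None, "mortality": 1},  # dropped: missing severity score
    {"apache_ii": 25, "mortality": 1},
]
complete = listwise_delete(records, ["apache_ii", "mortality"])
print(len(complete))  # 2
```

If, for example, sicker patients are more likely to lack a severity score, dropping them systematically understates severity in the analysed sample, which is the bias the limitation refers to.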
Conclusion
In conclusion, specific structural factors, including hospital ownership, hospital type, ICU-hospital bed percentage, and bed-to-nurse ratio, were associated with ICU patient outcomes. These observations can assist in policies and interventions to bridge the current quality gap in the delivery of critical care in China as well as other developing countries.
|
2022-01-21T14:47:15.968Z
|
2022-01-21T00:00:00.000
|
{
"year": 2022,
"sha1": "61f9b9777e9517884a7fac9ef54a98ff1ab29475",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-022-03892-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cce42b3885cf7e7543b61350c4ffc4c906e89b33",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
52107492
|
pes2o/s2orc
|
v3-fos-license
|
The impact of four harvesting techniques on the cell viability and osteogenic behaviour of cells in autogenous bone grafts : A critical appraisal of an experimental study
Copyright: © 2013. The Authors. Licensee: AOSIS OpenJournals. This work is licensed under the Creative Commons Attribution License. The investigators tested the null hypothesis that there would be no differences between the different bone harvesting techniques with regard to cell viability, cell activity and osteogenic potential of grafted cells. Bone grafts were harvested from the mandibles of 12 miniature pigs using four different harvesting techniques: bone milling, bone scraping, bone drilling (bone slurry) and piezosurgery. Cell viability was determined according to an immunoassay of released signalling molecules and gene expression that affect bone formation and resorption. The osteogenic activity of conditioned graft-sampled media was assessed in a bioassay using isolated bone cells. Cells in autogenous bone grafts obtained by using a bone mill and a bone scraper showed a higher viability and a stronger osteogenic potential than those from piezosurgery and bone drilling (slurry). This study contributed towards the understanding of the impact of harvesting techniques on the viability and osteogenic behaviour of grafted cells.
Introduction
Autogenous bone grafts are considered to be the 'gold standard' for bone grafting in alveolar bone augmentation procedures and treatment of bone defects because of the presence of viable cells and growth factors that enhance bone growth. There is a paucity of information on why and how bone grafts obtained by different harvesting techniques behave differently during the process of wound healing and graft consolidation. The purpose of this in vitro experimental study was to establish the effect of different bone harvesting techniques on the biological viability, activity (release of growth factors and other bioactive molecules) and osteogenic potential of transplanted cells that contribute to the process of graft consolidation and subsequent bone formation. Knowledge of cellular behaviour and activity may assist clinicians in selecting the most appropriate harvesting technique for obtaining autogenous bone grafts with the best osteogenic potential, thus ultimately contributing towards a successful therapeutic outcome.
Appraisal of study methodology and validity of the results
Bone grafts were obtained from 12 sedated miniature pigs. The Ethical Committee for Animal Research, State of Bern, Switzerland approved the protocol. To minimise selection bias, increase the strength (precision) of the treatment effect, reduce the number of animals used and optimise efficient use of resources, bone grafts were obtained from the lateral portion of the mandible during each harvesting technique. Each graft site was subdivided into four sections for collecting cells using four different techniques:
• Corticocancellous block grafts were harvested with a 6 mm trephine and ground to particulate bone chips using a bone mill.
• Bone chips were harvested with a sharp bone scraper.
• Bone particles were collected with a bone trap filter from the suction tip after drilling of cortical bone with a 2.2 mm round bur under saline conditions (bone slurry).
• Bone particles were harvested with a piezosurgery device under saline conditions.
Harvested grafts were all treated equally to minimise contamination and transportation for in vitro experiments.
For each grafted sample, bone particle size was determined using light microscopy and the surface variation was assessed using scanning electron microscopy. The number of viable cells in autogenous bone particles was determined by bioassay. Three independent experiments were performed for each harvesting method and all samples were measured in duplicate. Data (± SE) were normalised to bone mill samples. Cell activity was determined by measuring the growth factors and gene expression affecting bone formation and resorption, namely bone morphogenic protein-2 (BMP2), transforming growth factor b1 (TGFb1), vascular endothelial growth factor (VEGF), osteoprotegerin (OPG) and receptor activator of nuclear factor kappa B ligand (RANKL), using immunoassay techniques. Conditioned media prepared with various harvested autogenous bone samples were incubated with primary bone cells to determine the osteogenic potential and the translation of cell viability and gene expression into a paracrine function to form new bone.
Appropriate statistical analyses were conducted and mean values (± SE) were reported. The data were analysed for statistical significance using analysis of variance tests. Sufficient information was included in the scientific publication to ensure that the methods could be repeated. Figures were provided instead of raw data for cell activity and osteogenic potential, making it difficult to review and analyse the results. This study was conducted according to high standards of scientific rigour with regard to experimental design, methodology and transparency of reporting. Therefore there is no reason to believe that there is a threat to the internal validity of the results. This study satisfied all the internal validity assessment criteria. The results therefore likely yielded an accurate, transparent and unbiased assessment of the treatment effect.
The study was supported by a grant from the International Team for Implantology Foundation. The authors declared no conflicts of interest.
Results
Bone chips harvested by bone milling (1.551 mm ± 0.137 mm) and bone scraping (1.805 mm ± 0.154 mm) were larger than the graft particles obtained from piezosurgery (1.352 mm ± 0.070 mm) and bone drilling (bone slurry) (0.215 mm ± 0.010 mm). Scanning electron microscopy showed collagen fibres on the surface of bone chips obtained by bone milling. The number and activity of viable cells in autogenous bone chips obtained by bone milling and bone scraping were significantly greater than in cells obtained from bone slurry and piezosurgery.
Bioactive molecules and gene expression of growth factors related to bone formation (BMP2, VEGF and TGFb1) were significantly higher in autogenous bone chips harvested by bone mill and bone scraper than in the other grafting modalities. In contrast, the bioactive molecule and gene expression for RANKL, a biomolecule associated with bone resorption, was significantly lower in autogenous bone chips obtained with a bone scraper (indicating less resorption) and significantly higher in bone slurry particles obtained from bone drilling, indicating a higher resorption rate. The relative proliferation rates and osteogenic potential were significantly greater with the conditioned-media samples of autogenous bone chips from bone milling and bone scraping. Samples obtained by means of piezosurgery and bone slurry showed significantly less osteogenic potential.
Discussion
Although controversial in its applicability, experimentation using animal models is traditionally used to investigate physiologic processes and the efficacy and safety of therapeutic procedures or agents. The debate, however, continues as to whether the insights yielded by animal studies can be applied to humans. For scientific, economic and ethical reasons, miniature pigs are commonly used as a large animal model in experimental dental studies. The similarities between the oral maxillofacial region of miniature pigs and that of humans with regard to anatomy, development, physiology, pathophysiology and disease occurrence also render these animals appropriate models for research.1 Despite the scientific rigour of the current study, extrapolating the results to humans should be approached with caution. The results did, however, provide valuable knowledge that will enhance our understanding of the effect of harvesting techniques on the viability and osteogenic behaviour of grafted cells. There is no compelling reason why the knowledge gained from this study cannot be used to assist clinicians when deciding on the harvesting technique to be used and the type of instruments or equipment to be purchased. The knowledge yielded by this study may contribute towards successful therapeutic outcomes in bone grafting procedures and pave the way for further preclinical and clinical research.
Conclusion and clinical implications
This study provided a model to better understand the biologic trends in bone behaviour and the osteogenic potential of autogenous bone obtained by four different harvesting methods. The authors concluded that cells in autogenous bone grafts obtained using a bone mill or a bone scraper showed a higher viability and a stronger osteogenic potential than cells obtained by means of piezosurgery or bone drilling (slurry). Overheating during mechanical harvesting, such as occurs in bone slurry preparations, or vibrations generated during piezosurgery harvesting could potentially affect cell viability. Osteogenesis also occurred more rapidly in bone chips than in bone sludge.
The investigators based their assumptions on the observation that growth factors released from grafted cells may contribute towards graft consolidation. If this assumption is true, autogenous bone harvested by bone milling or bone scraping might be more favourable than grafts harvested by means of piezosurgery or bone drilling.
Limitations
Animal characteristics (strain, weight, sex and age) were not provided. These are important variables, as they could potentially affect the precision of the results. The investigators did not report whether a power analysis was performed to determine the sample size. This is important for detecting a biologically important effect if present, or to prevent animals from being used unnecessarily. The investigators did not report whether assessors were blinded to minimise bias in outcome assessment. Measurements that are conducted blind are more likely to produce accurate (precise) results or estimates of treatment efficacy. The authors did not report the p-values and statistical significance.
Cell viability and gene expression in in vitro experiments remain surrogate measures for predicting the biologic process of graft consolidation and the associated therapeutic success of bone grafting procedures. Further research in humans is required to lend credibility to these results.
Research gaps
Further preclinical research is required on the cellular composition of bone graft samples and its effect on gene expression. Biomechanical stresses induced by the different harvesting techniques may result in a change in cell population and thus influence cellular response and gene expression. The effect of mechanical harvesting and piezosurgery on the biological processes of bone resorption also requires further investigation.
Well-designed clinical trials are needed to determine which autogenous bone harvesting techniques are the most effective for specific clinical indications with reference to treatment time, cost of resources, clinical outcome of graft consolidation and complications associated with bone grafts and harvest sites.
|
2018-08-28T19:40:09.896Z
|
2013-03-26T00:00:00.000
|
{
"year": 2013,
"sha1": "e7edfa06bdde60caf2d56c538a03010dd3983f11",
"oa_license": "CCBY",
"oa_url": "https://ojid.org/index.php/ojid/article/download/6/13",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e7edfa06bdde60caf2d56c538a03010dd3983f11",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
42348172
|
pes2o/s2orc
|
v3-fos-license
|
Apoptosis and surfactant protein-C expression inhibition induced by lipopolysaccharide in AEC II cell may associate with NF-κ B pathway
Lipopolysaccharide (LPS), a Gram-negative bacterial outer membrane component, is one of the major causes of septic shock. Herein we investigate LPS-induced apoptosis of rat alveolar epithelial type II cells (AEC II) and the effects of LPS on surfactant protein-C (SP-C) expression in AEC II, along with the possible molecular mechanisms. LPS exposure impaired cell viability and significantly increased apoptosis of AEC II in a concentration-dependent manner, as evidenced by increased caspase-3 expression and caspase-3 activity. Simultaneously, our results also indicated that LPS inhibited SP-C expression in AEC II. Mechanistic studies revealed that LPS treatment significantly increased the expression of NF-κB p50, NF-κB p65 and IKKβ proteins as well as induced IκB-α phosphorylation. Moreover, pretreatment with the IKK inhibitor IKK-16 or the NF-κB inhibitor PDTC ameliorated LPS-caused alterations in cleaved caspase-3 expression, caspase-3 activity and SP-C expression. Taken together, these results demonstrate that LPS can induce apoptosis of AEC II and decrease SP-C expression partly through activating the NF-κB pathway.
INTRODUCTION
Gram-negative septicemia, a complication of acute pulmonary infection, can lead to organ dysfunction or hypoperfusion abnormalities (Cazzola et al., 2004). Lipopolysaccharide (LPS), a Gram-negative bacterial outer membrane component, has been reported as one of the major causes of septic shock (Raetz et al., 1991). Accumulating evidence indicates a consistent association between sepsis-associated acute respiratory distress syndrome (ARDS) and abnormal apoptosis of pulmonary alveolar type II epithelial cells (Gill et al., 2015). In this regard, many current studies have been conducted to investigate whether modulating apoptosis could be a therapeutic target in the management of sepsis-induced ARDS. Recent findings indicated that therapies attenuating alveolar type II epithelial cell apoptosis may have positive impacts on the pathophysiological regulation of septic shock and acute lung injury, as well as on the clinical course and outcome of patients with ARDS (Chuang et al., 2011).
Furthermore, LPS also induces acute pulmonary inflammation, causing rapid changes in the composition of the surfactant pool in the human lung (Rooney, 2001). Pulmonary alveolar type II epithelial cells, located in the corners of the alveoli, are highly specialized for synthesizing, secreting and reutilizing surfactants (Rooney, 2001). The critical function of pulmonary surfactants is to reduce surface tension at the alveolar air-liquid interface, thereby preventing alveolar collapse upon expiration and allowing for normal breathing (Clements and King, 1976). The pulmonary surfactant proteins (SPs) are secreted by alveolar type II cells; they reduce the surface tension of the alveoli and allow expansion of the lung during inspiration (Nkadi et al., 2009). There are four surfactant-specific proteins, SP-A through SP-D. Among them, SP-A and SP-D participate in host defense in the lung, whereas SP-B and SP-C contribute to the surface tension-lowering activity (Avery, 2000). It has been confirmed that SP-B and SP-C protein deficiencies are associated with the pathogenesis of neonatal respiratory distress syndrome (RDS) (Yin et al., 2012; Danlois et al., 2000). SP-C, a small lipopeptide of 4.5 kDa with 35 residues, produced exclusively in the lungs by AEC II cells, is believed to promote and stabilize membrane-interface contacts and to facilitate lipid exchange between lipid layers (Glasser et al., 2001; Lukovic et al., 2012). In contrast to SP-B, SP-C is not absolutely essential for lung ventilation and survival; however, SP-C-deficient mice ultimately develop chronic respiratory failure (Glasser et al., 2008; Lawson et al., 2005). SP-C is lipid membrane-associated and thus probably performs its surface activity in a concerted manner. It improves surfactant activity, in particular interfacial adsorption, film stability and respreading ability (Cruz et al., 2000; Serrano and Perez-Gil, 2006; Wang et al., 1996), and these roles have been shown to be particularly relevant during extensive lung expansion and relaxation in periods of high ventilatory demand (Almlén et al., 2008). Therefore, this study was designed to evaluate the effects of LPS on apoptosis induction and SP-C expression inhibition, and the possible mechanisms, using primary cultured rat AEC II cells as the experimental model.
Rat alveolar epithelial type II cell isolation and cell culture
AEC II cells were isolated from male Sprague-Dawley rats (150-200 g) (Guangdong Medical Laboratory Animal Center, Foshan, China) as described elsewhere (Hu et al., 2012). Rats were anesthetized with chloral hydrate and injected with heparin to prevent the formation of thrombi in the lung. Lungs were surgically removed and lavaged several times to remove most alveolar leukocytes. The lungs were perfused with phosphate buffer saline (PBS) 5 times at 37°C. The lungs were digested by instilling 10 mL elastase (3 U/mL in PBS) at 37°C and incubating for 15 min; this process was repeated twice. The cell suspension was mixed with 100 mg/mL DNAse I (Thermofisher Scientific, San Jose, CA, USA) and incubated for 5 min at 37°C with gentle rotation to minimize cell clumping. The elastase reaction was stopped with fetal bovine serum (FBS) (Hyclone, Logan, Australia). The cells were incubated in two rat IgG-coated polystyrene bacteriological 100 mm petri dishes (1.5 mg rat IgG/dish) sequentially at 37°C, 1 hr each. The unattached cells were centrifuged at 250 g for 5 min and resuspended in 10 mL Dulbecco's Modified Eagle Medium: Nutrient Mixture F-12 (DMEM/F12) (Gibco Brl/Invitrogen Co., Carlsbad, CA, USA) containing 10% FBS and 1% antibiotic (100 U/mL penicillin and 100 μg/mL streptomycin) (Gibco Brl/Invitrogen Co.) at a concentration of 10^6 cells/mL. To remove the remaining macrophages, the cells were incubated with rat IgG (40 mg/mL) at room temperature for 15 min with gentle rotation. Non-adherent cells were centrifuged and the cell pellet was resuspended in DMEM/F12 medium with 10% FBS and 1% antibiotic (100 U/mL penicillin and 100 μg/mL streptomycin). Cells were then cultured in DMEM/F12 supplemented with 10% FBS and 1% antibiotic (100 U/mL penicillin and 100 μg/mL streptomycin) at 37°C in a humid atmosphere containing 5% CO2. The medium was changed every 3 days to remove the non-adherent cells.
Cell viability assays by Methylthiazolyldiphenyltetrazolium bromide (MTT)
Cell viability was measured by the MTT assay. Briefly, the cells were plated in a 96-well plate (4 × 10 3 cells/ well). After 24 hr, the cells were treated with DMSO or different concentrations of LPS (Guangzhou Hewei Chemical Co., LTD, Guangdong, China), IKK-16 as a selective inhibitor of IκB kinase, which is more sensitive to IKKβ than IKKα (Selleck Chemicals, Shanghai, China) and pyrrolidine dithiocarbamate (PDTC), a potent inhibitor of nuclear factor kappa B (NF-kappa B) activation (Selleck Chemicals). After different time points of treatment, 100 μL of 5 mg/mL MTT (Sigma-Aldrich, St. Louis, MO, USA) was added to each well for 4 hr, the medium was replaced with 200 μL of Dimethyl Sulphoxide (DMSO), and the cells were incubated at room temperature in the dark for 6 hr. The optical density (OD) value was measured using a spectrophotometric microtiter plate reader at 570 nm. The effect was expressed as percentage relative to the controls.
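The final normalization step described above ("the effect was expressed as percentage relative to the controls") is a simple calculation. Below is a hedged sketch of it; the OD570 values are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of the MTT viability calculation: the mean OD570 of
# treated wells is expressed as a percentage of the mean control OD570.

def percent_viability(od_treated, od_controls):
    """Mean treated OD570 as a percentage of the mean control OD570."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(od_treated) / mean(od_controls)

control = [0.80, 0.82, 0.78]  # hypothetical control (vehicle) wells
lps_80 = [0.40, 0.42, 0.38]   # hypothetical 80 ug/mL LPS wells
print(round(percent_viability(lps_80, control), 1))  # 50.0
```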
Cell apoptosis analysis by Annexin V-Fluoresceinisothiocyanate/Propidium Iodide (Annexin V-FITC/PI) staining
Apoptotic cells were detected using flow cytometry with Annexin V-FITC/PI dual staining according to the manufacturer's instruction of the Invitrogen V13241 Dead Cell Apoptosis Kit (Invitrogen). After treatment with different concentrations of LPS, IKK-16 and PDTC, the cells were harvested by trypsinization, rinsed twice with PBS, and suspended in 500 μL of binding buffer. The suspended cells were incubated at 4°C with 5 μL Annexin V-FITC solution for 15 min, and incubated for another 5 min at 4°C after adding 10 μL of PI solution. Flow cytometric analysis of apoptotic cells was performed with a flow cytometer (Beckman-Coulter, Inc., Brea, IN, USA). The flow cytometer was used to detect the emitted green fluorescence of Annexin V (FL1) and the red fluorescence of PI (FL2), and for each sample 10,000 events were recorded. The amounts of early apoptosis, late apoptosis, and necrosis were determined as the percentages of AnnexinV+/PI−, AnnexinV+/PI+, and AnnexinV−/PI+ cells, respectively.
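The quadrant logic at the end of this paragraph can be sketched as a small classifier. The gate threshold and event values below are purely illustrative (real gates are set on the cytometer's fluorescence scales against unstained controls); only the quadrant-to-label mapping comes from the text.

```python
# Illustrative sketch of Annexin V-FITC/PI quadrant classification:
# AnnexinV+/PI- = early apoptosis, AnnexinV+/PI+ = late apoptosis,
# AnnexinV-/PI+ = necrosis, AnnexinV-/PI- = viable.

def classify_event(annexin_v: float, pi: float, gate: float = 100.0) -> str:
    a_pos, p_pos = annexin_v > gate, pi > gate
    if a_pos and not p_pos:
        return "early apoptosis"
    if a_pos and p_pos:
        return "late apoptosis"
    if p_pos:
        return "necrosis"
    return "viable"

# Four hypothetical (FL1, FL2) events, one per quadrant
events = [(250, 20), (300, 400), (30, 350), (10, 15)]
print([classify_event(a, p) for a, p in events])
# ['early apoptosis', 'late apoptosis', 'necrosis', 'viable']
```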
Cell apoptosis analysis by 4′,6-diamidino-2-phenylindole (DAPI) staining
After 24 hr of treatment with different concentrations of LPS, IKK-16 and PDTC, cells were fixed with pre-chilled methanol for 2 min, and then stained with 5 mg/mL of DAPI (Beyotime Institute of Biotechnology, Jiangsu, China) for 10 min. Nuclei were examined and photographed using fluorescence microscopy.
Caspase-3 activity assay
Cells were treated with various concentrations of LPS and then harvested and lysed in cell lysis buffer [20 mM Tris-HCl (pH 7.5), 150 mM NaCl and 1% Triton X-100] after 24 hr. And the caspase-3 activity was detected using a kit from Beyotime Institute of Biotechnology according to the instruction by the manufacturer.
Immunofluorescence
Immunofluorescence staining was used to determine the induction of SP-C activity in AEC II treated with different concentrations of LPS, IKK-16 and PDTC treatment. At 24 hr after treatment with different concentrations of LPS, IKK-16 and PDTC treatment, AEC II cells were fixed with 4% paraformaldehyde and permeabilized by 80% cold methanol. After washing with PBS, cover slips were then incubated in PBS with 3% bovine serum albumin for 10 min at room temperature. Primary antibodies against the active form of caspase-3 (BD Systems Ltd., Abingdon, UK) and Tom 20 (Cell Signaling Technology, Inc., Beverly, MA, USA) in PBS plus 0.1% Tween 20 were then added and incubated for 1 hr at room temperature. After three washes with PBS, the cells were incubated with a fluorescence-conjugated secondary antibody in the dark for 1 hr. For nuclear staining, the cells were subsequently stained with 0.5 mg/mL DAPI dye (Sigma-Aldrich) for 5 min before examination under a fluorescence microscope. Images of mitochondria were collected using a Leica confocal microscope.
Western blot analysis
AEC II cells were seeded at a density of 2 × 10^6 cells in a 25 cm^2 flask for 24 hr. After incubation, cells were pretreated with various doses of LPS, IKK-16 and PDTC for 12 hr. Cells were collected and lysed on ice, cell lysates were clarified via centrifugation, and the supernatants were collected and stored at -70°C until use.
Protein concentrations were measured using the Bradford method. An equal amount of protein was loaded and separated using 10% polyacrylamide gel electrophoresis and transferred onto a polyvinylidene fluoride membrane. The nonspecific site was blocked with 5% nonfat dried milk in 50 mM Tris-buffered saline containing 0.1% Tween-20 (TBST) for 1 hr at room temperature, and then the membrane was incubated with the specific primary antibody (1: 500) at 4°C overnight. Primary antibodies against IKKα, IKKβ, IκBα, p-IκBα, NF-κB p50, NF-κB p65, Caspase-3, and SP-C were purchased from Cell Signalling Technology. Following three washes with TBST, the blots were incubated with the secondary horseradish peroxidase-conjugated goat anti-rat IgG antibody (Beyotime Institute of Biotechnology) (1: 1000) for 1 hr at room temperature. Subsequently, the blots were washed again for three times with TBST and then visualized using an enhanced chemiluminescence (ECL) kit according to the manufacturer's instructions. The band densities were quantified from three different observations using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
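The final densitometry step ("band densities were quantified ... using ImageJ") amounts to expressing each band's density relative to a reference lane. The sketch below is hypothetical code, not the authors' pipeline; the two density values reused here (0.29 for control, 0.70 for 80 μg/mL LPS) are the caspase-3 figures reported in the Results.

```python
# Illustrative sketch: express ImageJ band densities relative to the
# untreated control lane (fold change over control).

def normalize_to_control(densities, control_key="0 ug/mL"):
    """Return each lane's density as a fold change over the control lane."""
    control = densities[control_key]
    return {k: round(v / control, 2) for k, v in densities.items()}

caspase3 = {"0 ug/mL": 0.29, "80 ug/mL": 0.70}  # values from the Results
print(normalize_to_control(caspase3))
# {'0 ug/mL': 1.0, '80 ug/mL': 2.41}
```

In practice each lane would also be divided by a loading-control band (e.g. a housekeeping protein) before this normalization; that step is omitted here for brevity.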
Statistical analysis
Results were expressed as means ± standard deviations (SD) calculated from three independent experiments. A Student's t-test was used to compare the changes of all the measurable variables in this study. P < 0.05 was considered a significant difference.
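For reference, a minimal two-sample Student's t statistic of the kind described (equal-variance, pooled) can be computed as below. The data values are invented for illustration; the study's actual comparisons were run on its own measurements.

```python
# Minimal sketch of a pooled two-sample Student's t statistic, matching the
# analysis described in the text; the input lists are illustrative only.
import math
from statistics import mean, variance

def students_t(a, b):
    """Pooled (equal-variance) two-sample t statistic."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

control = [0.80, 0.82, 0.78]
treated = [0.40, 0.42, 0.38]
print(round(students_t(control, treated), 2))  # 24.49
```

The resulting statistic would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom to obtain the p-value (the p < 0.05 criterion used in the study); in practice a library routine such as `scipy.stats.ttest_ind` does both steps.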
Effects of different concentrations of LPS, IKK-16 and PDTC on AEC II cell growth
To examine the biological effects of different concentrations of LPS, AEC II cells were treated with varying doses of LPS (0, 20, 40, 80, 160 μg/mL) for 24, 48 and 72 hr, and cell viability was assayed by the MTT method. LPS decreased cell viability with increasing time and dose (Fig. 1). Based on this, we used 80 μg/mL LPS in subsequent experiments. We then settled on 10 μM IKK-16 or PDTC for subsequent experiments after a preliminary trial in which cells were pretreated with IKK-16 or PDTC (0 or 10 μM) for 20 min prior to LPS (80 μg/mL) exposure for 24, 48 and 72 hr; in this trial, IKK-16 and PDTC attenuated the decline in cell viability caused by LPS insult (Fig. 1).
Effect of LPS on apoptotic death of AEC II cells
We also investigated whether LPS can induce apoptosis of AEC II cells. The ratio of cells with apoptotic nuclear morphology (fragmented nuclei and condensed chromatin) to total cells counted was significantly increased at 24 hr post-treatment with LPS, compared with DMSO-only treatment (Fig. 2A). As depicted in Fig. 2B, LPS treatment resulted in a significant, dose-dependent increase in the percentage of Annexin V-positive cells. Data from the Annexin V assay were consistent with DAPI staining. Caspase-3 is a key effector in the process of apoptotic cell death. Fig. 2C shows that the expression of caspase-3 was markedly increased in LPS-treated cells compared with controls. As shown in Fig. 2D, LPS significantly increased the expression of caspase-3 in a concentration-dependent manner. Caspase-3 expression in cells incubated with 80 μg/mL LPS for 24 hr was 0.70 ± 0.04, while the negative control value was 0.29 ± 0.01 (Fig. 2D). In the caspase-3 activity assay, LPS showed dose-dependent increases in caspase-3 activity of 6.17%, 14.08% and 41.18% at concentrations of 20, 40 and 80 μg/mL, respectively, compared with the control (Fig. 2E).
Effect of LPS on SP-C expression
Western blotting analysis was carried out to determine the effects of LPS on SP-C protein production in AEC II cells. SP-C protein could be detected in untreated AEC II cells. Exposure of AEC II cells to 20, 40 and 80 μg/mL LPS for 12 hr decreased SP-C protein synthesis (Fig. 3A), causing significant decreases of 29.6%, 42.9% and 54.4% in the levels of SP-C protein, respectively (Fig. 3B).
LPS-induced apoptotic death of AEC II cells is mediated by IKK-16 and PDTC
To determine the role of the NF-κB pathway in LPS-induced apoptosis, IKK-16 (an IKK inhibitor) and PDTC (an NF-κB inhibitor) were administered to inhibit IKK and NF-κB activity. As shown in Figs. 5A and 5B, apoptosis in cells treated with PDTC or IKK-16 alone did not differ significantly from that of non-treated cells, while IKK-16 and PDTC effectively attenuated LPS-induced apoptosis. Meanwhile, PDTC and IKK-16 also attenuated LPS-induced caspase-3 expression in AEC II cells (Figs. 5C and 5D). Overall, these data suggest that activation of the NF-κB pathway by LPS contributed to apoptosis in AEC II cells.
LPS-inhibited SP-C expression of AEC II cells is mediated by IKK-16 and PDTC
LPS at a concentration of 80 μg/mL significantly decreased the expression of SP-C. Pretreatment of AEC II cells with PDTC or IKK-16 (1 and 5 μM) reversed the LPS-decreased SP-C expression (Figs. 6A and 6B). Moreover, immunofluorescence labeling of SP-C in AEC II cells with or without inhibitor pretreatment (5 μM PDTC or IKK-16) was consistent with these results, revealing that SP-C expression increased to some extent in cells pretreated with PDTC or IKK-16 compared with cells treated with LPS alone (Fig. 6C).
DISCUSSION
In the present study, the effect of LPS on apoptosis and SP-C expression in AEC II cells was investigated. AEC II cells treated with LPS showed cell growth inhibition and apoptosis, as assessed by DNA condensation, early/late-stage apoptosis, caspase-3 expression and caspase-3 activity. AEC II cells serve important functions, including the synthesis and secretion of pulmonary surfactants (Wu et al., 2015). SP-C is one of the important pulmonary surfactants that reduce the surface tension at the alveolar air-liquid interface and provide the alveolar stability necessary for normal ventilation (Mason et al., 2000). We found that the expression of SP-C was abnormally decreased in AEC II cells after exposure to LPS, confirming damage to the normal function of the lung.
NF-κB is a key transcriptional factor, which plays a critical role in the regulation of cell survival genes (Pan et al., 2014). NF-κB (a heterodimer of p65 and p50) is located in the cytoplasm as an inactive complex bound to IκB-α, which is phosphorylated and subsequently degraded, then dissociates to produce activated NF-κB (Baeuerle and Baltimore, 1996). In the present study, it was found that the expression of NF-κB p65 and p50 was induced by LPS in a concentration-dependent manner, and that the phosphorylation of IκB-α, which is required for p65 activation, was increased in cells treated with LPS.
[Displaced figure legend: (A) AEC II cells were pretreated with IKK-16, PDTC or control medium for 1 hr and then treated with or without 80 μg/mL LPS for an additional 24 hr; nuclear morphology was analysed by fluorescence microscopy after DAPI staining (1: 0 μg/mL LPS; 2: 80 μg/mL LPS; 3: 80 μg/mL LPS + 1 μM IKK-16; 4: 80 μg/mL LPS + 5 μM IKK-16; 5: 80 μg/mL LPS + 1 μM PDTC; 6: 80 μg/mL LPS + 5 μM PDTC); representative images are shown (scale bar, 50 μm). (B) Cells pretreated with 5 μM IKK-16, 5 μM PDTC or control medium for 1 hr and then treated with or without 80 μg/mL LPS for an additional 24 hr were analysed by flow cytometry; data from three independent experiments are presented as mean ± S.D. (*p < 0.05). (C, D) Caspase-3 expression in cells treated with or without 80 μg/mL LPS, in the presence or absence of the indicated concentrations of IKK-16 or PDTC, was examined by Western blotting and quantified with ImageJ; data from three independent experiments are presented as mean ± S.D. (*p < 0.05).]
Moreover, the phosphorylation of IκB-α-bound NF-κB is considered to be mediated by IKK at two conserved serines in the N-terminal domain of IκB-α (Baeuerle and Baltimore, 1996). In our research, we found that the expression of IKKβ protein, rather than IKKα, was significantly increased when AEC II cells were treated with different concentrations of LPS. Meanwhile, the NF-κB inhibitor PDTC, which blocked NF-κB activity, and the IKK inhibitor IKK-16, which suppressed IKKβ phosphorylation, effectively attenuated LPS-induced cell apoptosis and reversed the LPS-induced decrease in SP-C expression. Thus, we propose that LPS-induced septic ARDS is partly attributable to LPS's ability to promote alveolar epithelial cell apoptosis and decrease the expression of SP-C through activation of IKK/NF-κB signaling in the lung.
Bacillus cereus Spores Release Alanine that Synergizes with Inosine to Promote Germination
Background The first step of the bacterial lifecycle is the germination of bacterial spores into their vegetative form, which requires the presence of specific nutrients. In contrast to closely related Bacillus anthracis spores, Bacillus cereus spores germinate in the presence of a single germinant, inosine, yet with a significant lag period. Methods and Findings We found that the initial lag period of inosine-treated germination of B. cereus spores disappeared in the presence of supernatants derived from already germinated spores. The lag period also dissipated when inosine was supplemented with the co-germinator alanine. In fact, HPLC-based analysis revealed the presence of amino acids in the supernatant of germinated B. cereus spores. The released amino acids included alanine in concentrations sufficient to promote rapid germination of inosine-treated spores. The alanine racemase inhibitor D-cycloserine enhanced germination of B. cereus spores, presumably by increasing the L-alanine concentration in the supernatant. Moreover, we found that B. cereus spores lacking the germination receptors gerI and gerQ did not germinate and release amino acids in the presence of inosine. These mutant spores, however, germinated efficiently when inosine was supplemented with alanine. Finally, removal of released amino acids in a washout experiment abrogated inosine-mediated germination of B. cereus spores. Conclusions We found that the single germinant inosine is able to trigger a two-tier mechanism for inosine-mediated germination of B. cereus spores: Inosine mediates the release of alanine, an essential step to complete the germination process. Therefore, B. cereus spores appear to have developed a unique quorum-sensing feedback mechanism to monitor spore density and to coordinate germination.
Introduction
B. cereus and B. anthracis form dormant spores that survive harsh environmental conditions. Upon encountering a suitable environment, these spores germinate into their vegetative form [1,2]. Binding of specific germinants including amino acids, nucleosides, and other small molecules to their cognate membrane receptors (Ger proteins) is believed to initiate the germination process [3,4]. Ger receptors are essential for germination and are encoded as tricistronic operons [5]. Following the activation of these receptors, B. cereus spores release dipicolinic acid (DPA), calcium, and amino acids [4,6]. Subsequently, the spore core becomes hydrated, and the spore cortex and spore-specific proteins are hydrolyzed [7,8,9]. Amino acids are released into the extracellular milieu following germination from an internal pool and from protein degradation [6,10,11]. Approximately 30 min after addition of germinants, the newly germinated cells start to divide [4,12].
While B. cereus and B. anthracis spores recognize nucleosides and amino acids as germinants, the species respond differently to these germinants [13,14,15]. While B. cereus 569 spores are able to germinate in the presence of a single germinant (inosine), B. anthracis spores require either a combination of inosine and an amino acid, or two different amino acids, in order to germinate [15,16]. The GerI and GerQ receptors have been linked to inosine-mediated germination of B. cereus 569 spores [13,17]. B. cereus spores lacking the GerQ receptor are unable to germinate in the presence of inosine alone, while those lacking the GerI receptor show reduced germination rates in the presence of inosine [13]. However, gerI- and gerQ-deficient strains germinate efficiently in the presence of a combination of inosine and alanine [13,14]. Thus, the presence of the second germinant alanine appears to compensate for the deficiency of gerI- and gerQ-negative spores. While two Ger receptors have been linked to inosine-mediated germination in B. cereus [3,13,14], only one Ger receptor (GerH) has been linked to inosine in B. anthracis [15,18]. GerI and GerH share high sequence homology (96%, 92%, and 89% identity for the A-, B-, and C-subunits, respectively), while the germination receptor GerQ is only minimally related to the GerI and GerH receptors [3,15].
We have previously demonstrated that B. cereus spores germinate with a time lag and non-linear kinetics when inosine is used as the sole germinant [17]. This lag phase is greatly reduced when inosine is supplemented with alanine. We and others have shown that numerous nucleoside analogs are able to germinate B. cereus spores when supplemented with alanine [3,17,19,20]. Inosine, on the other hand, is the only nucleoside that efficiently germinates B. cereus spores when used in the absence of a co-germinant [17]. In contrast to B. cereus spores, germination of B. anthracis spores requires the presence of two germinants such as inosine and alanine [16]. These germinants bind to B. anthracis spores with strong cooperativity [16].
In this study, we analyzed inosine-mediated germination of B. cereus 569 spores. We demonstrated that B. cereus release amino acids and specifically alanine when germination is triggered by inosine as the sole germinant. Amino acid release following inosine exposure required the presence of both GerI and GerQ receptors. We provide evidence that alanine release is essential for germination of B. cereus spores treated with a single germinant. We also found that alanine release enhances the germination kinetics of inosine-treated spores, and speculate that alanine release serves as a positive feedback loop to bring about spore germination.
Results
Conditioned supernatant from germinated B. cereus spores increases the inosine-mediated germination rate of these spores
We recently described that B. cereus 569 spores germinate with a time lag when inosine is used as the sole germinant [17]. This lag phase is significantly reduced, and germination rates increase considerably, when inosine is supplemented with alanine (Fig. 1). Following the lag phase, B. cereus spores treated with inosine germinate with non-linear kinetics. We hypothesized that cofactors released from germinating spores during the lag phase enhance germination kinetics. To test this, we treated B. cereus 569 spores with 0.2 mM inosine, and collected supernatants 30 min post-inosine exposure. The conditioned supernatants derived from germinated spores were then added to fresh B. cereus spores. As shown in Fig. 1, conditioned supernatants collected from germinated spores significantly accelerated germination of fresh B. cereus spores. The lag phase was greatly shortened in the presence of conditioned supernatants, and the resulting germination kinetics resembled those obtained when 0.2 mM inosine was supplemented with 20 mM alanine (Fig. 1). Heat-treated (90°C for 15 min) or micro-filtrated (5 kDa MWCO) conditioned supernatants showed similar acceleration of the germination rate as untreated conditioned supernatants (data not shown). Together, these findings indicate that B. cereus spores release low-molecular-weight, heat-stable germination cofactors that promote inosine-mediated germination.
The potency of the conditioned supernatants is dependent on inosine and spore concentrations
To determine conditions that promote germination, B. cereus spores were germinated at different spore densities, or in the presence of increasing inosine concentrations. Conditioned supernatants collected 30 min post-germination were added to fresh spores and T1/2 values were determined. T1/2 values represent the time point at which the optical density has reached 50% of its final value. As expected, germination T1/2 times decreased with increasing inosine concentrations (Fig. 2A). Similarly, the potency of conditioned supernatants increased when they were harvested from spores germinated at increasing spore concentrations, as indicated by decreased T1/2 values (Fig. 2B).
We also tested whether germination of B. cereus spores by inosine alone required a specific spore density. Towards this, we diluted 10 ml of spores in increasing volumes of germination buffer containing 0.2 mM inosine. Following continuous shaking at 37°C, germination was determined by microscopy using a modified Wirtz-Conklin stain [21]. This protocol stains resting and germinated spores green and red, respectively. Strikingly, germination of B. cereus spores was impaired at high dilutions, as less than 3% of spores germinated when diluted to ODs ranging from 0.0025 to 0.02 (Fig. 3A and 3B). On the other hand, B. cereus spores germinated efficiently (>87%) at high concentrations (OD of 0.1 and 1). As expected, B. cereus spores germinated efficiently when inosine was supplemented with 40 mM alanine, regardless of spore density (Fig. 3A). These results indicate that inosine-mediated germination of B. cereus spores requires a minimal spore density.
Dipicolinic acid (DPA) release cannot account for germination acceleration
A release of DPA has been linked to increased germination efficiencies, presumably through the activation of cortex-lytic enzymes [22][23][24][25]. While B. cereus spores germinate in the presence of 60 mM extracellular calcium-DPA [23,24,25], the final DPA concentration in the conditioned medium of germinated B. cereus spores was only 0.18 mM [26]. To test whether released DPA and/or calcium could account for the enhanced germination kinetics observed in the presence of conditioned supernatants, we exposed spores to 0.2 mM inosine supplemented with Ca-DPA (Fig. 4). As a control, we germinated spores in the presence of inosine alone. The presence of 0.18 mM Ca-DPA did not accelerate inosine-mediated germination (Fig. 4), suggesting that DPA is not a co-germinant in this process.
D-cycloserine improves the efficiency of conditioned media to germinate B. cereus spores
We also determined the effect of increasing incubation times on the potency of the harvested supernatants on inosine-mediated germination of B. cereus spores. As expected, the germination efficiency of harvested supernatants increased with incubation time: germination was most efficient using supernatant collected 30 min post-inosine exposure (Fig. 5A). No increase in germination rate was observed when conditioned media were collected within 5 min of inosine exposure. Taken together, the potency of conditioned media increased with inosine and spore concentrations, as well as with longer incubation times.
We subsequently tested whether altering levels of the co-germinant alanine changes the germination kinetics of inosine-treated B. cereus spores. Bacterial spores contain two alanine isomers: L-alanine and D-alanine. L-alanine has been shown to promote germination of multiple bacterial spores [16,17,27,28], while D-alanine has been described to block germination [29]. B. cereus spores express the endogenous enzyme alanine racemase on their surface. Alanine racemase is able to convert the activating L-alanine into the inhibitory D-alanine [30]. Inhibition of alanine racemase has been shown to increase L-alanine-mediated germination rates. The presence of D-cycloserine significantly increased the germination rates of inosine-treated B. cereus spores compared to spores exposed to inosine only (Fig. 5A). The increased germination kinetics further implicate L-alanine in the germination of inosine-treated B. cereus spores.
Germinated B. cereus spores release amino acids
Since addition of alanine mimics the effect of conditioned media on inosine-treated spores (Fig. 1), we determined the concentration of released amino acids in the conditioned B. cereus supernatants using 7-amino-4-methylcoumarin (7-AMC) labeling. 7-AMC is a fluorescent dye that has been used to label amino acids and peptides [31,32]. The concentration of amino acids in B. cereus conditioned supernatants was approximately 80 mM as determined by 7-AMC labeling. As expected, no amino acids were detected in the supernatant of B. anthracis spores treated with inosine only.
Following HPLC separation and mass spectrometry of 7-AMC-labeled supernatants, we detected alanine, glycine, leucine, threonine, and serine as the major compounds in the supernatant of germinated B. cereus spores (Table 1). The final concentration of each amino acid ranged from 5 to 20 mM. An amino acid standard mixture containing alanine, glycine, leucine, threonine, and serine showed the same elution profile as the compounds identified in the conditioned supernatant from B. cereus spores.
To determine whether the released amino acids could act as co-germinants with inosine to accelerate spore germination, we treated spores with inosine and each of the amino acids identified above. Consistent with our findings, only alanine (data not shown) was able to synergize with inosine to increase the germination rate. Germination acceleration was identical at L-alanine concentrations between 8 mM and 20 mM.
To determine the kinetics of alanine release, we collected supernatants from spores germinated at different time points after inosine addition. These supernatants were derivatized with isobutyl groups to enhance fragmentation for quantitative analysis by tandem mass spectrometry. As expected, the alanine concentration increased continuously during the germination process (Fig. 5B). In fact, the alanine concentration increased almost 100-fold during the first 25 min of germination (Fig. 5B). Furthermore, the kinetics of alanine accumulation in the supernatant of germinated spores follows the same trend as the increase in germination rate as a function of incubation time (Fig. 5A). As expected, D-cycloserine did not increase the total amount of alanine in the supernatant, and similar amount of alanine was released in the presence and absence of D-cycloserine (Fig. 5B). These findings suggest that the enhanced germination in the presence of D-cycloserine is not due to increased levels of total alanine, but rather due to increased levels of L-alanine.
In contrast to B. cereus spores, the concentrations of inosine and alanine did not change in the supernatants from B. anthracis spores following germination with inosine and alanine (data not shown). These results further support the notion that amino acid release is restricted to germinating B. cereus, and does not occur in B. anthracis spores.
Concentration of free alanine in B. cereus spores
An aliquot of B. cereus spores was resuspended under conditions identical to those used for the quantification of alanine in the supernatant (see above). Enough free alanine was obtained from the ungerminated spores to yield a final concentration of 4.8 mM.
ΔgerI and ΔgerQ B. cereus spores fail to release amino acids
The GerI and GerQ receptors of B. cereus are required for efficient germination in the presence of inosine [13,14]. We found that B. cereus spores containing a deletion in the GerQ receptor gene (ΔgerQ spores) did not germinate in the presence of inosine as the sole germinant (Fig. 6A). However, ΔgerQ spores germinated efficiently when inosine was supplemented with alanine or with conditioned media from germinated wild-type B. cereus spores. In fact, the germination kinetics of ΔgerQ spores obtained with conditioned media were similar to those acquired with inosine and alanine (Fig. 6A). Our results are consistent with findings showing that ΔgerQ spores germinate normally when inosine is supplemented with alanine [13,14]. These results indicate that the responsiveness to primary (inosine) and secondary (alanine) germinants is not compromised in ΔgerQ spores, and that these spores germinate normally in the presence of both germinants.
As predicted, conditioned supernatants harvested from inosine-treated ΔgerQ B. cereus 569 spores had no significant effect on the germination rate of wild-type or ΔgerQ spores (Fig. 6B). Similar results were observed with conditioned media isolated from inosine-treated ΔgerI B. cereus 569 spores (data not shown). Subsequently, we tested whether ΔgerQ and ΔgerI spores are defective in their ability to secrete germination cofactors by using the 7-AMC-labeling procedure described above. As expected, inosine-treated ΔgerQ and ΔgerI spores did not release any amino acids (data not shown). It is reasonable to assume that the failure to release amino acids is responsible for the results obtained with ΔgerQ and ΔgerI spores treated with inosine only.
Conditioned supernatants from B. cereus spores do not increase the germination rate of B. anthracis spores
In contrast to B. cereus spores, germination of B. anthracis spores requires at least two different germinants and does not occur in the presence of inosine only [16]. Intriguingly, conditioned media obtained from germinating B. cereus spores failed to germinate inosine-treated B. anthracis spores. Accordingly, 20 mM alanine, the alanine concentration found in the supernatant of germinated B. cereus spores, was sufficient to germinate B. cereus spores when used with inosine, but was insufficient to germinate B. anthracis spores (data not shown). B. anthracis spores, however, germinated efficiently when the alanine concentration was increased to 100 mM. Taken together, our findings suggest that inosine triggers the release of amino acids, most notably alanine, from B. cereus spores. This step appears to be required for completion of the germination process.
This positive feedback loop appears to be mediated by the GerI/GerQ receptors.
Taken together, we have demonstrated that B. cereus spores, in contrast to B. anthracis spores, are able to germinate in the presence of a single external germinant. We have shown that the single germinant inosine is able to trigger a feedback loop that results in the release of amino acids, presumably alanine. This amino acid release appears to be the second step required to complete the germination process.
Discussion
Here we present multiple findings supporting the theory that alanine is released during B. cereus germination and is required for germination of these spores in the presence of inosine as the sole germinant: 1) We found that inosine-treated B. cereus spores release alanine in concentrations sufficient to positively affect germination. The concentrations of DPA, calcium, and other amino acids released from germinated spores, on the other hand, were too low to affect germination kinetics. 2) Blocking alanine racemase with D-cycloserine enhanced germination kinetics, consistent with L-alanine-mediated germination [33,34,35]. 3) Amino acid release was required for germination, as spores defective in amino acid release did not germinate in the presence of inosine as the sole germinant. Taken together, our findings suggest that alanine is the major co-germinant released by B. cereus spores stimulated with inosine only.
We have demonstrated that the lag phase of germination observed in inosine-treated B. cereus spores is greatly reduced when inosine is supplemented with alanine or conditioned media. Our data suggest that this lag phase corresponds to the time it takes the inosine-activated spores to release amino acids/alanine in quantities sufficient to aid in spore germination. We have shown that B. cereus germination is significantly enhanced in the presence of D-cycloserine, which increases the concentration of active L-alanine. These findings mimic earlier studies demonstrating the enhancing effect of D-cycloserine on germination of B. thuringiensis spores in the presence of inosine [36].
Our findings are consistent with studies linking an increase in levels of endogenous amino acids with enhanced germination kinetics of inosine-treated B. cereus spores [37]. Moreover, increased spore density has been shown to enhance germination rates of different Bacillus species [38], supporting the notion that released germinants aid in the germination process. B. cereus spores lacking GerI or GerQ receptors failed to germinate in the presence of inosine only. We found that gerI- and gerQ-deficient spores did not release amino acids, indicating that the defect was in the release of co-germinants. Moreover, gerI- and gerQ-deficient spores germinated normally when inosine was supplemented with alanine or with preconditioned supernatants derived from germinated B. cereus spores. Both receptors have been linked to inosine binding; however, the ability of gerI- and gerQ-deficient spores to germinate efficiently in the presence of inosine and alanine indicates that recognition of these germinants is not impaired in these spores [14]. Intriguingly, B. anthracis does not release amino acids upon germination with inosine and alanine. Thus, inosine-mediated amino acid release seems to be a unique property of B. cereus 569 spores. We have shown that gerI- and gerQ-negative B. cereus spores fail to release amino acids and to germinate in the presence of inosine. Since the B-subunits of germination receptors are related to bacterial amino acid exporter proteins [4], it is possible that the GerI and GerQ receptors are directly involved in amino acid transport. It is also conceivable that these receptors stimulate amino acid transporters indirectly in inosine-treated spores. Our findings suggest that the mixture of exogenous inosine and released alanine activates secondary germination receptors that are presumably essential for the completion of the germination process.
Because B. anthracis spores do not release amino acids, they appear to require two germinants to bring about successful germination [15,18,39,40]. Having to simultaneously detect structurally different compounds might prevent B. anthracis spores, an obligate pathogen, from germinating outside a suitable host. Like B. anthracis, B. cereus efficiently germinates in the presence of two germinants. However, in addition to the "two-germinant mode", B. cereus has also developed a mechanism that allows it to germinate in the presence of a single germinant, provided that the spores have reached a certain density. It is conceivable that the alanine release provides B. cereus with a feedback loop to finish the germination process. This feedback loop requires a critical density of B. cereus spores for optimal germination, and might allow B. cereus to monitor spore density and to coordinate germination. Our findings suggest that B. cereus spores not only sense the environment for nutrients, but also for spore density.
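The density-dependent feedback loop proposed above can be illustrated with a toy numerical model. This is our illustrative sketch, not an analysis from the study: the rate constants, the linear dependence of the germination rate on alanine, and the per-spore alanine release are all arbitrary assumptions.

```python
def fraction_germinated(spore_density, inosine=0.2, t_end=30.0, dt=0.1,
                        k_inosine=0.005, k_alanine=0.05, release=20.0):
    """Toy Euler integration of a positive-feedback germination model.

    Resting spores germinate at a basal, inosine-driven rate; each
    germinated spore releases alanine, which raises the per-spore
    germination rate in proportion to spore density.
    All parameter values are arbitrary illustrative assumptions.
    """
    g = 0.0                                         # fraction germinated
    for _ in range(int(t_end / dt)):
        alanine = release * g * spore_density       # released co-germinant
        rate = k_inosine * inosine + k_alanine * alanine
        g += (1.0 - g) * rate * dt                  # only resting spores germinate
    return g

dense = fraction_germinated(spore_density=1.0)      # OD ~ 1: feedback ignites
dilute = fraction_germinated(spore_density=0.01)    # highly diluted spores
```

With these constants the dense suspension germinates almost completely within the simulated window, after an initial lag, while the dilute suspension barely starts, mirroring the lag phase and density threshold reported above.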
Because B. anthracis spores do not release amino acids in the presence of inosine, germination of these spores is independent of their density. In fact, B. anthracis spores may actually use an opposite strategy. Conditioned media obtained from B. anthracis spores germinated with inosine and alanine inhibited germination of fresh B. anthracis spores. In this case, exogenous L-alanine was converted to D-alanine by the alanine racemase enzyme, thus resulting in an inhibitory conditioned supernatant [33,34,35]. The differences in strategies of B. cereus and B. anthracis spores might have evolved to take advantage of different environmental niches. B. cereus, unlike B. anthracis, is not an obligate pathogen. While B. anthracis germination outside the host would be detrimental for the pathogen, B. cereus might require less stringent conditions. Taken together, B. cereus spores appear to have developed a unique quorum-sensing mechanism to coordinate their germination processes.
Materials and Methods
Spore germination was monitored on a Biomate 5 spectrophotometer at 580 nm (ThermoElectron Corporation, Waltham, MA). DPA release was monitored using published procedures [41]. Fluorescence spectroscopy was performed on an LS-50B fluorescence spectrophotometer (Perkin Elmer Life, Boston, MA). Supernatant fractionation was performed on an Agilent 1200 HPLC system fitted with a UV-visible detector set at 340 nm (Agilent Technologies, Santa Clara, CA). Molecular weights were determined on a Thermo Finnigan LCQ ion trap mass spectrometer (ThermoFisher Scientific, Waltham, MA).
Spore preparation
B. cereus and B. anthracis cells were plated on DIFCO sporulating media (DSM) (Difco Laboratories, Detroit, MI) agar at high dilutions to yield single-cell clones [42]. Single B. cereus and B. anthracis colonies were replated and incubated for 72 h at 37°C. The resulting bacterial lawns were scraped from the plates and resuspended in deionized water. Spores were purified by centrifugation through a 20%-50% HistoDenz gradient. Purified spores were washed 5 times with deionized water and stored at 4°C. Spores were more than 95% pure as determined by phase-contrast microscopy.
Analysis of inosine-mediated germination
Changes in light diffraction during spore germination were monitored at 580 nm. Spores were heat-activated at 70°C for 30 min, and resuspended in germination buffer (50 mM Tris-HCl pH 7.5, 10 mM NaCl) to an OD580 of 1. The spore suspension was monitored for auto-germination at OD580 for 1 h. Germination experiments were carried out with spores that did not auto-germinate, in a Biomate 5 spectrophotometer in a total volume of 1 ml. Experiments were performed in triplicate with at least two different spore preparations. Spore germination was evaluated based on the decrease in OD580 at room temperature. Relative OD580 values were expressed as the actual OD580 divided by the OD580 obtained at the beginning of germination, and were plotted against time. All measurements showed standard deviations of less than 10%.
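The normalization just described can be sketched in a few lines; the OD580 readings below are hypothetical, not data from the study.

```python
def relative_od(readings):
    """Express each OD580 reading as a fraction of the initial reading,
    as described for the germination curves above."""
    initial = readings[0]
    return [od / initial for od in readings]

# Hypothetical time course: OD falls as phase-bright spores germinate.
curve = relative_od([1.00, 0.98, 0.90, 0.75, 0.62, 0.58])
```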
Germination with conditioned supernatant
Purified spores were resuspended in 2 ml germination buffer to an OD 580 of 1, and germination was initiated by addition of 0.2 mM inosine. Conditioned supernatants were collected following centrifugation (5,000 RPM) of germinated spores 30 min after addition of inosine. To determine heat lability and particle size of released factors, aliquots of the resulting conditioned supernatant were boiled for 30 min or filtered through a 5 kDa MWCO filter. Conditioned supernatant was then used to resuspend fresh spore aliquots. As controls, fresh spore aliquots were resuspended in 0.2 mM inosine with or without 20 mM L-alanine, and germination was monitored as described above.
To determine the effect of the inosine concentration on the germination kinetics of B. cereus spores, supernatants were collected from spores treated with increasing inosine concentrations (0.1, 0.2, 0.4, 0.6, 0.8, and 1.0 mM final concentration). Fresh spores (OD580 = 1) were germinated in the resulting conditioned supernatants. As controls, fresh spores were also germinated in 0.1, 0.2, 0.4, 0.6, 0.8, and 1.0 mM inosine in the absence of conditioned media. Germination curves were fitted using the four-parameter logistic function of SigmaPlot v.9 software to calculate the mid-time point of the germination curve (T1/2).
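The study used SigmaPlot's four-parameter logistic (4PL); a rough Python equivalent of the model is sketched below with hypothetical parameter values. For this parameterization the mid-time point T1/2 coincides with the parameter c, which the bisection recovers numerically from the curve itself.

```python
def logistic4(t, a, d, c, b):
    """Four-parameter logistic: a = starting OD, d = final OD,
    c = mid-time of the transition, b = slope factor."""
    return d + (a - d) / (1.0 + (t / c) ** b)

def t_half(a, d, c, b, t_max=120.0, tol=1e-6):
    """Locate the time at which the curve crosses the midpoint of a and d
    by bisection; for the 4PL this equals the parameter c."""
    target = (a + d) / 2.0
    lo, hi = 1e-9, t_max
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if logistic4(mid, a, d, c, b) > target:  # still above midpoint: move right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical fitted parameters for a germination curve (relative OD580).
mid_time = t_half(a=1.0, d=0.55, c=12.0, b=4.0)
```

In practice the parameters a, d, c, and b would come from a least-squares fit to the measured curve (e.g. with `scipy.optimize.curve_fit`); only the model function itself is shown here.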
The effect of spore concentration on supernatants was tested by resuspending spores to a final OD 580 of 0.01, 0.05, 0.1, 0.15, 0.2, 0.5, and 1. Spores were treated with 0.2 mM inosine, allowed to germinate for 30 min, and supernatants were collected from each sample and tested for their effect on germination kinetics of B. cereus spores, as described above.
To determine the effect of alanine racemase inhibition on the kinetics of inosine-mediated germination of B. cereus, spores were resuspended in 2 ml germination buffer to an OD580 of 1, and supplemented with the racemase inhibitor D-cycloserine (0 or 1 mM). D-cycloserine inhibits alanine racemase, which catalyzes the conversion of active L-alanine into inhibitory D-alanine. D-cycloserine has been shown to potentiate L-alanine-mediated germination, presumably by increasing the concentration of L-alanine available for germination. Germination was started by addition of 0.2 mM inosine and monitored 1, 2, 5, 10, 15, and 30 min post-inosine addition.
Supernatant washout experiment
To dilute out any released germinants in the supernatant of germinated spores, 10 ml of the spore suspension (OD580 of 1) was added to increasing volumes of germination buffer (up to 4 l), pre-warmed to 37°C, containing 0.2 mM inosine and 0 or 0.04 mM alanine. As a positive control, spore suspension aliquots (200 ml) were treated with 0.2 mM inosine in the presence and absence of 0.04 mM L-alanine. Spore suspensions were incubated on a shaker at 37°C for 1 h, and then rapidly cooled to 4°C on ice. Spores and bacteria were collected from the small volumes by centrifugation at 10,000 × g. Spores and bacteria were collected from volumes above 10 ml by filtering the suspension at 4°C through a 0.2 μm PES membrane. The residue was collected from the membrane by resuspension in 2 ml germination buffer and pelleting by centrifugation at 10,000 × g. B. cereus pellets were smeared across a glass slide, air-dried, and heat-fixed over a flame. Cells were stained using the Wirtz-Conklin staining technique, as described previously [21]. Briefly, heat-fixed spore/bacterial smears were immersed in boiling malachite green stain (5 g/100 ml water) for 1 min. Following destaining in distilled water, smears were counterstained with safranin-O (0.5 g/100 ml water) for 1 min. Smears were subsequently destained in distilled water, and mounted. B. cereus pellets were visualized using a Zeiss Axiophot microscope.
Germination with dipicolinic acid (DPA)
Purified B. cereus 569 spores were resuspended to an OD 580 of 1 in germination buffer and germinated with 0.2 mM inosine. After 30 min, cells were centrifuged, and the concentration of released DPA in the supernatants was determined using standard protocols [43]. A solution of Ca-DPA was prepared at the same concentration (0.18 mM) present in the conditioned supernatants. Resulting solutions were supplemented with 0.2 mM inosine, and germination was monitored as above.
Labeling of compounds released by germinating spores
Wild-type B. cereus 569, ΔgerI B. cereus 569, ΔgerQ B. cereus 569, and B. anthracis Sterne strain spores were resuspended (OD580 = 1) in 200 ml trimethylammonium bicarbonate buffer (TMB, pH 8.5). Wild-type B. cereus spores were treated with 0.2 mM inosine (in TMB) alone or with 0.2 mM inosine supplemented with 0.04 mM alanine. Germination was determined as described above. After 30 min, germinated spores were pelleted by centrifugation and cell-free supernatants were collected. As a negative control, conditioned supernatant aliquots were treated with water. As positive controls, conditioned supernatants were spiked with an amino acid standard solution containing 25 mM each of L-alanine, L-arginine, L-aspartic acid, L-cysteine, L-glutamic acid, glycine, L-histidine, L-isoleucine, L-leucine, L-lysine, L-methionine, L-phenylalanine, L-proline, L-serine, L-threonine, L-tyrosine, and L-valine. A 500 ml sample of each supernatant was treated with 500 ml DMSO supplemented with 1 mM each of N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDAC), N-hydroxysulfosuccinimide (NHSS), and 7-amino-4-methylcoumarin (7-AMC). All reactions were incubated overnight at room temperature. After incubation, excess reagents were quenched with 1 ml glacial acetic acid for 2 h. All samples were dried under reduced pressure, re-dissolved in 100 ml of water, heated at 90°C for 30 min, and filtered through a 0.2 μm filter. Adduct fluorescence was determined on an LS-50B fluorescence spectrophotometer with excitation at 351 nm and emission at 430 nm.
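Converting adduct fluorescence into a concentration typically relies on a calibration curve. The sketch below (our illustration: the standard concentrations and fluorescence readings are hypothetical, not data from the study) fits the line by ordinary least squares and inverts it for an unknown sample.

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = m*x + k (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)
    return m, my - m * mx

# Hypothetical 7-AMC calibration points: (concentration, fluorescence units).
standards = [(0.0, 2.0), (25.0, 52.0), (50.0, 102.0), (100.0, 202.0)]
m, k = fit_line([c for c, _ in standards], [f for _, f in standards])

# Concentration of an unknown supernatant from its fluorescence reading.
unknown = (162.0 - k) / m
```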
Identification of released compounds
To label amino acids in the supernatant of germinated spores we used 7-amino-4-methylcoumarin (7-AMC). Released 7-AMC adducts were separated by HPLC over a C18 reverse phase column. The mobile phase consisted of a gradient from 5% to 100% acetonitrile (MeCN in water) in 30 min. Released 7-AMC adducts were detected with a UV-visible detector set with a 340 nm cut-off filter. The identities of the amino acids present in the 7-AMC treated samples were assigned by co-elution with the similarly treated amino acid standard solution. 7-AMC adduct concentrations were determined by fluorescence spectroscopy. The identity of each 7-AMC adduct was confirmed by LCQ ion trap mass spectrometry.
Kinetics of amino acid release
B. cereus spores were resuspended to an OD 580 of 1 in 2 ml germination buffer supplemented with 0 or 1 mM D-cycloserine. Germination was started by addition of 0.2 mM inosine, and aliquots were collected at 0, 5, 10, 15, 20 and 30 min post-inosine addition. Aliquots were filter-sterilized and analyzed as described below.
A deuterium-labeled amino acid standard (²H₄-Ala) was purchased from Cambridge Isotope Laboratories (Andover, MA). Molecular biology grade isobutanol and acetyl chloride were purchased from Acros (Geel, Belgium). HPLC grade Omnisolv water and acetonitrile were purchased from EMD Chemicals Inc. (Gibbstown, NJ). An ACQUITY ultra performance liquid chromatography (UPLC) system with a BEH C18 column (1.7 µm particle diameter, 2.1 × 50 mm) and sample organizer was used for analyte introduction. A Quattro Premier XE tandem mass spectrometer from Waters-Micromass was utilized for analyte detection.
Samples and calibration solutions were prepared for multiple reaction monitoring (MRM) quantitation of alanine following a procedure similar to that reported by Zhang et al. [44]. Briefly, an aliquot of sample was mixed with deuterium-labeled alanine and dried by vacuum centrifugation. Anhydrous isobutanolic 3 M HCl (200 µl) was added to the sample and allowed to react at room temperature for 50 min to form the isobutyl ester derivative. The reaction mixture was removed by vacuum centrifugation, and the sample was reconstituted in 200 µl of EMD water to give a final internal standard concentration of 500 nM immediately before analysis.
The sample was injected into the UPLC and run with initial solvent conditions of 20% acetonitrile and 80% water. The initial solvent mixture was maintained for 0.5 min. The solvent mixture was changed to 60% acetonitrile and 40% water in 3 min. The solvent mixture was then changed to 90% acetonitrile and 10% water in 1 min. After a 1 min hold, the solvent conditions were brought back to the original settings in 0.5 min and held for 1 min to equilibrate the column. The analyte and internal standard MRM transitions of 146.9→44 and 150.9→48 were monitored to calculate response factors based on peak area for quantification and confirmation. The data were processed by employing TargetLynx and MassLynx NT Software (Version 4.1, Micromass, Manchester, UK). Concentration was determined by using a calibration curve and back-calculating to reflect the original solution concentration in germination buffer.
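The back-calculation step described above reduces to fitting analyte/internal-standard peak-area ratios against known concentrations and inverting the fit. A minimal sketch (the calibration points below are made up for illustration; the actual quantitation was done in TargetLynx):

```python
import numpy as np

# Hypothetical calibration points: known alanine concentrations (nM)
# and measured analyte / internal-standard peak-area ratios.
cal_conc = np.array([100.0, 250.0, 500.0, 1000.0])
cal_ratio = np.array([0.21, 0.52, 1.01, 2.05])

# Linear calibration: ratio = slope * conc + intercept
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def back_calculate(ratio, dilution_factor=1.0):
    """Back-calculate alanine concentration (nM) in germination buffer
    from a measured area ratio, correcting for any sample dilution."""
    return dilution_factor * (ratio - intercept) / slope
```

Any dilution introduced during derivatization and reconstitution is folded back in through `dilution_factor`, so the result reflects the original buffer concentration.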
Determination of alanine concentration in the spore core
B. cereus spores were decoated following established procedures [45]. The decoated spores were lysed by sonication in 70% acetonitrile/water. The resulting suspension was filter-sterilized and submitted for mass spectrometry analysis as described above.
Gene Expression Landscape of SDH-Deficient Gastrointestinal Stromal Tumors
Background: About 20–40% of gastrointestinal stromal tumors (GISTs) lacking KIT/PDGFRA mutations show defects in succinate dehydrogenase (SDH) complex. This study uncovers the gene expression profile (GEP) of SDH-deficient GIST in order to identify new signaling pathways or molecular events actionable for a tailored therapy. Methods: We analyzed 36 GIST tumor samples, either from formalin-fixed, paraffin-embedded by microarray or from fresh frozen tissue by RNA-seq, retrospectively collected among KIT-mutant and SDH-deficient GISTs. Pathway analysis was performed to highlight enriched and depleted transcriptional signatures. Tumor microenvironment and immune profile were also evaluated. Results: SDH-deficient GISTs showed a distinct GEP with respect to KIT-mutant GISTs. In particular, SDH-deficient GISTs were characterized by an increased expression of neural markers and by the activation of fibroblast growth factor receptor signaling and several biological pathways related to invasion and tumor progression. Among them, hypoxia and epithelial-to-mesenchymal transition emerged as features shared with SDH-deficient pheochromocytoma/paraganglioma. In addition, the study of immune landscape revealed the depletion of tumor microenvironment and inflammation gene signatures. Conclusions: This study provides an update of GEP in SDH-deficient GISTs, highlighting differences and similarities compared to KIT-mutant GISTs and to other neoplasm carrying the SDH loss of function. Our findings add a piece of knowledge in SDH-deficient GISTs, shedding light on their putative histology and on the dysregulated biological processes as targets of new therapeutic strategies.
Introduction
Succinate dehydrogenase deficient (SDH-deficient) gastrointestinal stromal tumors (GISTs), as defined by the loss of expression of the subunit B of the succinate dehydrogenase complex, account for approximately 20% to 40% of all KIT/PDGFRA wild-type (WT) GISTs and 5% of all GISTs [1]. The SDH deficiency is mainly due to mutations in one of the four SDH mitochondrial complex subunits, SDHA, SDHB, SDHC, and SDHD [1][2][3]. Most SDHx mutations in GIST disease are germline; in particular, germline mutations in SDHB, SDHC, and SDHD occur in about 20-30% of SDH-deficient GISTs and may be referred to as Carney-Stratakis syndrome [2]. Rarely, an epigenetic mechanism may occur, such as the recurrent aberrant DNA methylation of SDHC seen in GISTs associated with the Carney triad, which is a rare condition characterized by synchronous or metachronous occurrence of GISTs, paragangliomas, and pulmonary chondromas [4][5][6]. Several studies reported that SDH-deficient GISTs are exclusively located in the stomach with mainly multifocal primary localization, frequently present lymph node involvement, and generally affect the younger population. Moreover, SDH-deficient GISTs present an indolent course even when multiple metastases are present [7].
Currently, data on the molecular background of the SDH-deficient GIST show that this disease may be considered a unique entity among the GISTs [8]. No specific medical therapy is available for recurrent or metastatic SDH-deficient GISTs; the standard GIST treatment flow chart is still recommended, even though antiangiogenic drugs seem to be the most effective in terms of disease control.
The aim of this work is to uncover the gene expression profile underlying tumor development and invasion in SDH-deficient GIST in order to identify new signaling pathways or molecular events actionable for a tailored therapy.
Materials and Methods
Thirty-six GIST tumor samples, from formalin-fixed, paraffin-embedded (FFPE) or fresh frozen tissue, were retrospectively collected and analyzed.
Fresh frozen tissue specimens (25 samples) were collected during the surgical operation, snap-frozen in liquid nitrogen, and stored at −80 °C until RNA extraction. Formalin-fixed, paraffin-embedded tissue blocks (13 samples) were obtained by fixing the surgical specimens in 10% NBF (formalin solution, neutral buffered) for no less than 6 h, then dehydrated and included in paraffin. Expert pathologists reviewed all samples, and the molecular alteration was detected by the routine GIST diagnostic panel by Sanger sequencing. Moreover, the SDH-mutant samples were tested by immunohistochemistry (IHC) in order to prove the negativity of SDHB staining.
The cohort consists of two distinct molecular subgroups of GISTs: KIT-mutant (29 cases) and SDH-deficient (7 cases). All KIT-mutant tumors were characterized by the presence of KIT mutation detected by Sanger sequencing. The SDH-deficient status was assessed by both immunohistochemistry of the SDHB subunit and Sanger sequencing of all the subunits of the SDH complex. Patients' data are reported in Table 1, and additional details and clinical data are shown in Supplementary Table S1.
Total RNA was extracted from tumor specimens with RNeasy Mini Kit (Qiagen, Milan, Italy) and then processed to be analyzed either on HGU133Plus 2.0 Affymetrix microarrays or by whole-transcriptome RNA sequencing on Illumina platform.
Briefly, for microarray samples, quality-controlled RNA was labelled following the Affymetrix manufacturer's recommendations and then hybridized to HGU133Plus 2.0 arrays. Gene expression data were normalized and quantified as log2 signal by the robust multichip average (RMA) algorithm (package oligo, R-bioconductor).
For the whole-transcriptome samples, the cDNA libraries were synthesized starting from 250 ng total RNA with TruSeq RNA Exome (Illumina, San Diego, CA, USA) according to the manufacturer's protocol. Sequencing by synthesis was performed on Nextseq500 sequencer (Illumina) at 75 bp in paired-end mode. An average of 49.5 million reads per sample were obtained, reaching an average coverage of~45×. Read pairs were mapped on reference human genome hg38 with STAR (https://github.com/alexdobin/STAR accessed 15 October 2020), duplicates removed, and sorting and indexing were performed with samtools (http://www.htslib.org/ accessed 15 October 2020). Gene expression was quantified and normalized in two different ways: (1) as count per million (CPM) by adopting the python package HTseq-count to get the raw count (https://htseq.readthedocs.io/ accessed 15 October 2020), followed by the R-bioconductor package edgeR to compute the normalization factors (https://bioconductor.org/packages/release/bioc/html/edgeR.html accessed 15 October 2020); (2) as transcript per million (TPM) using the program kallisto (https://pachterlab.github.io/kallisto accessed 15 October 2020). The two normalization methods are conceptually different and suited to perform different types of downstream analysis; generally CPM are employed to compare between samples while TPM are best suited to compare between genes [9]. Here, we adopted CPM to perform the principal component analysis (PCA) and the evaluation of differential expression (DE), and the TPM values were considered to estimate the tumor microenvironment composition and to quantify the gene signatures. This study was approved by the institutional review board of IRCCS-Azienda Ospedaliero-Universitaria Policlinico S.Orsola-Malpighi, Bologna, Italy (approval number 113/2008/U/Tess). Each patient provided written informed consent.
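The distinction between the two normalizations can be sketched as follows. This is a minimal illustration of the CPM and TPM definitions only, not the edgeR or kallisto implementations, which additionally model library-size factors and effective transcript lengths:

```python
import numpy as np

def cpm(counts):
    """Counts per million: rescale each sample (column) so its total
    count is 1e6, making columns comparable across samples."""
    return counts / counts.sum(axis=0) * 1e6

def tpm(counts, gene_lengths_kb):
    """Transcripts per million: divide by gene length first (reads per
    kilobase), then rescale each sample's column to 1e6, making gene
    values comparable within a sample."""
    rpk = counts / gene_lengths_kb[:, None]
    return rpk / rpk.sum(axis=0) * 1e6
```

Because TPM removes the gene-length bias before rescaling, it was used here for the signature and microenvironment quantifications, while CPM was used for PCA and differential expression, as described above.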
The R package prcomp (https://cran.r-project.org/package=nsprcomp accessed 15 October 2020) was adopted to perform the PCA, and the three-dimensional projections, corresponding to the first three components, were plotted with the function plot3d of the rgl package (https://cran.r-project.org/package=rgl accessed 15 October 2020). The DE analysis of SDH-deficient versus KIT-mutant GISTs was conducted using the R-bioconductor limma package (https://www.bioconductor.org/packages/release/bioc/html/limma.html accessed 15 October 2020), sequentially adopting the functions lmFit (to produce a fitted model) and eBayes (to compute a moderated t-statistic and log2 fold change). Significantly modulated genes (over- or underexpressed) were defined on the basis of q-value < 0.05 (adjustment method Benjamini-Hochberg). The methods described for PCA and DE were applied to both microarray and RNA-seq data series. Over-representation analysis was performed separately for the over- and underexpressed gene lists to determine whether genes associated with a specific pathway are present more often than expected. We adopted the web tool Enrichr (https://maayanlab.cloud/Enrichr/ accessed 15 October 2020), focusing on MSigDB Hallmark 2020 to evaluate pathways and Human Gene Atlas to evaluate cell type. As input, we entered the two lists of up- and down-regulated genes obtained by the consensus between microarray and RNA-seq DE results. We included significantly modulated genes (q-value < 0.05) in microarray data having the same fold change sign and p-value < 0.01 in RNA-seq analysis (and vice versa). We also performed gene set enrichment analysis (https://www.gsea-msigdb.org/gsea/index.jsp accessed 15 October 2020) adopting the full expression matrix (without any filter) for both microarray and RNA-seq data.
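The Benjamini-Hochberg adjustment used to call significance (q-value < 0.05) can be re-implemented in a few lines. This is a sketch of the standard procedure, not limma's internal code:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    # raw BH values: p_(i) * n / i for ascending-sorted p-values
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(ranked, 0.0, 1.0)
    return q
```

Genes with q < 0.05 would then be reported as significantly modulated, as in the analysis above.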
We ran Gene Set Enrichment Analysis (GSEA) by selecting the curated gene sets carrying "canonical pathways" from the Molecular Signatures Database (MSigDB) and adopting the following parameters: number of permutations = 100; enrichment statistic = "classic"; metric for ranking genes = "Diff_of_Classes"; normalization mode = "meandiv". A tumor microenvironment study was performed using CIBERSORT, and immuno-related gene signatures were evaluated as previously described [10].
Gene Expression Profile of SDH-Deficient GIST
Gene expression analysis was performed separately for fresh frozen tissue samples analyzed with microarray and FFPE samples analyzed with RNA-seq.
As a first step, PCA was adopted to perform an unsupervised analysis with the aim of decomposing the high-dimensional variability of the transcriptome data into three-dimensional components. The 3D projections in both PCA analyses showed that SDH-deficient GISTs separate distinctly from KIT-mutant GISTs, providing evidence of an expression profile typical of this molecular subgroup and profoundly different from that of KIT-mutant GISTs, supporting the hypothesis that the two GIST molecular groups may derive from two distinct cell types or oncogenic programs (Figure 1A,B). The analysis of DE was performed for both sample series to discover the set of genes that are significantly overexpressed or down-regulated in SDH-deficient GISTs. For fresh frozen samples, analyzed by microarray, we found 833 and 928 genes that were respectively up- and down-regulated (adjusted p-value < 0.05); for the FFPE samples, analyzed by RNA-seq, 577 genes were overexpressed and 889 genes were underexpressed (adjusted p-value < 0.05) (Supplementary Tables S2 and S3).
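The PCA projection onto the first three components can be sketched with a plain SVD. This is an illustration of the method, not the R prcomp/plot3d workflow used in the study:

```python
import numpy as np

def pca_scores(expr, n_components=3):
    """Project samples (rows) onto the leading principal components.

    expr: samples x genes matrix of normalized expression values.
    Returns a samples x n_components score matrix.
    """
    X = expr - expr.mean(axis=0)  # center each gene across samples
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # scores = X @ V; singular values come out in descending order,
    # so component variances are non-increasing
    return U[:, :n_components] * S[:n_components]
```

Plotting the three score columns against each other would reproduce the kind of 3D separation between molecular subgroups described above.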
Then, the data were intersected to identify the over/underexpressed genes commonly modulated in the two series of samples, highlighting 405 overexpressed and 331 underexpressed genes in SDH-deficient GISTs (Supplementary Table S4). These two sets of genes were adopted to perform the over-representation analysis with the Enrichr web tool, as described in the method section. The significantly over-represented pathways (adjusted p-value < 0.05) for up- and down-regulated genes are reported in Table 2. Among the upregulated pathways in the SDH-deficient group, we found hedgehog signaling, hypoxia, glycolysis, and epithelial-to-mesenchymal transition (EMT). Conversely, the set of underexpressed genes returned terms related to the immune system, such as interferon gamma/alpha response, IL/STAT signaling, TNF-alpha signaling, and complement; moreover, fat metabolism and KRAS signaling were also highlighted. Interestingly, the cell-type over-representation analysis of down-regulated genes also showed significant terms related to the hematopoietic lineage, such as CD33+ myeloid, CD14+ monocytes, and CD56+ natural killer (NK) cells, while the list of up-regulated genes produced a set of over-represented cell types mainly attributable to neuronal and brain tissues, such as fetal brain, pineal gland, prefrontal cortex, and superior cervical ganglion (Table 2, and complete results in Supplementary Table S5).
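The consensus rule used to intersect the two platforms (significant on one platform, same fold-change sign and nominally significant on the other) can be expressed compactly. A minimal sketch with hypothetical per-gene statistics:

```python
def consensus_genes(array_res, seq_res):
    """Consensus DE list between microarray and RNA-seq results.

    Each input maps gene -> (log2FC, p-value, q-value). A gene passes
    if it is significant (q < 0.05) on one platform and supported on
    the other (same fold-change sign, p < 0.01), in either direction.
    """
    keep = set()
    for g in set(array_res) & set(seq_res):
        fc_a, p_a, q_a = array_res[g]
        fc_s, p_s, q_s = seq_res[g]
        if fc_a * fc_s <= 0:
            continue  # discordant fold-change direction
        if (q_a < 0.05 and p_s < 0.01) or (q_s < 0.05 and p_a < 0.01):
            keep.add(g)
    return keep
```

Splitting the surviving genes by fold-change sign would yield the up- and down-regulated lists fed to the over-representation analysis.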
To further investigate the presence of enriched and depleted pathways in SDH-deficient GISTs, the whole expression matrices (from both microarray and RNA-seq analysis) were adopted to run the GSEA tool. GSEA offered a wide and complex picture of the gene expression profile in our GIST series, and the full results are reported in Supplementary Tables S6 and S7. In order to highlight the strongest and clearest signals, we decided to intersect the results given by RNA-seq and microarray data. We found three shared significant pathways in SDH-deficient GISTs, corresponding to the fibroblast growth factor receptor (FGFR) signaling, the glycosaminoglycan (GAG) degradation, including a group of lysosomal enzymes involved in the GAG breakdown, and VXPX CARGO TARGETING TO CILIUM (the process of driving membrane proteins containing the motif valine-X-proline-X in the C-terminal tail towards the ciliary membrane) (Table 3). On the other hand, several commonly depleted pathways were highlighted by the GSEA analysis. Interestingly, we found recurring immune system terms such as the cascade of the Toll-like receptor complex, the IL3 pathway, the high-affinity IgE receptor signaling, and the granulocyte macrophage colony-stimulating factor.
Summing up, the gene set enrichment and the over-representation analysis showed a worthwhile scenario to be further explored. In particular, SDH-deficient GISTs appeared as a group of tumors with a presumed stand-alone histological background, marked by the activation of several gene signatures known to be related to invasion and tumor progression, and characterized by the depletion of immune competence. Given that, and based on the involvement in key oncogenic mechanisms, we decided to focus our investigation on neural-like signatures, FGFR signaling, hypoxia, EMT, and immune-related signatures.
Overexpression of Neural Markers
Among the top up-regulated elements in SDH-deficient GISTs, our analysis highlighted a relevant number of genes that are suggestive of neural commitment. Among them, we confirmed the overexpression of previously described (also validated by qPCR and IHC experiments) genes [11] such as the transcriptional regulator LHX2 (known to be associated with neural crest differentiation), the neurofilament light polypeptide NEFL, the synaptic cell adhesion molecule belonging to the nephrin-like family KIRREL3, the N-cadherin CDH2, and the neural progenitor-specific gene IGF1R. Moreover, the present analysis showed other important neural markers such as the glutamate receptor GRIA1, the integrin-α8 ITGA8, and the neuronal cell adhesion molecule NRCAM. This scenario, corroborating the hypothesis given by the PCA analysis, suggested that SDH-deficient GISTs might derive from a different cell type (with respect to the more common GIST molecular subtype) and, in particular, from cells committed to neural differentiation. We know that GISTs originate from mesenchymal cells, namely, interstitial cells of Cajal (ICC), located within the gastrointestinal tract and involved in the crosstalk between smooth muscle and the nervous system. Recently, ICC were isolated from mice, and the transcriptome profile was deeply evaluated, identifying an important set of ICC markers [12]. Taking into account this set of genes (including ANO1, KIT, PRCKCQ, THBS4, ELOVL6, GJA1, ADGRDA, EDN3, HPRT1, and ETV1), we asked whether some difference exists in SDH-deficient GISTs with respect to KIT-mutant GISTs. We found that the majority of ICC markers were highly and equally expressed in both GIST subgroups; however, THBS4 and ELOVL6 were more expressed, while EDN3 and GJA1 were underexpressed in SDH-deficient GISTs (Supplementary Figure S1). The role of Endothelin-3 (EDN3) in neural crest proliferation and differentiation was widely studied by Nagy et al. [13].
Interestingly, in this study, the presence of EDN3 in the hindgut explant cultures was clearly associated with the inhibition of neuronal differentiation. On the contrary, an increased expression of thrombospondin-4 (THBS4) was demonstrated to induce neuronal differentiation in CSPG4-expressing neural progenitor cells [14]. It is known that ICCs derive from mesenchymal stem cells that retain KIT expression during smooth muscle differentiation, probably due to induction from the nearby neural crest cells in the primitive gut that will give rise to the enteric nervous system [15,16]. ICCs are therefore cells that exhibit a relevant expression plasticity, which is probably reflected in their malignant counterparts. Taken together, all these connections support the hypothesis that SDH-deficient GISTs could originate from a different type of ICC polarized towards a cell type with more pronounced neural features.
Fibroblast Growth Factor Receptor 2 Binding and Activation
The pathway enrichment analysis showed a significant up-regulation of signaling related to fibroblast growth factor (FGF) activation and the corresponding receptor (FGFR) cascade. Notably, no FGFR genes were differentially modulated in SDH-deficient with respect to KIT-mutant GISTs. However, FGFR1 and FGFR2 showed a high expression level in all samples, while FGFR3 and FGFR4 abundance was close to zero. In contrast, we found a relatively large set of FGF ligands that are significantly highly expressed in the SDH-deficient group, such as FGF4, FGF2, FGF7, and FGF10 (Figure 2). Interestingly, the cell adhesion molecules NRCAM and NCAM2 were also strongly up-regulated in SDH-deficient GISTs. NCAM family members are known to interact with FGFRs and to induce a specific FGFR-mediated cellular response. In particular, the NCAM-FGFR interaction promotes FGFR stabilization and recycling of the receptors to the cell surface, indicating that FGFRs are activated by NCAMs in a very different way with respect to FGFs [17]. The concomitant presence of an increased level of both FGFs and NCAMs suggested that in SDH-deficient GISTs there are two different conditions possibly leading to FGFR activation.
Comparison with SDH-Deficient Pheochromocytoma and Paraganglioma
Our analysis showed several interesting signatures overexpressed in SDH-deficient GISTs. To evaluate whether the same expression profile and gene signatures were specific characteristics of this rare GIST subgroup, or whether they were peculiar to neoplasms displaying the loss of function of the SDH complex, we comparatively analyzed our microarray GIST series and a set of SDH-deficient pheochromocytomas and paragangliomas analyzed with the same Affymetrix HG-U133 Plus 2.0 protocol (ArrayExpress accession E-MTAB-733). This dataset was published by Loriot et al. [18] within a research paper in which they identified EMT activation specifically associated with SDHB-mutant metastatic pheochromocytoma and paraganglioma, concluding that this process may be involved in the acquisition of invasiveness.
Firstly, we compared the whole expression profiles in an unsupervised manner, adopting PCA as previously described, putting together SDHB-mutant pheochromocytoma/paraganglioma and our microarray GIST samples (both SDH-deficient and KIT-mutant). The projections of the first three components show the GIST and pheochromocytoma/paraganglioma groups separately (Figure 3A). This result clearly suggests that the global gene expression specifically characterizes the two cancer types, probably due to the different histological derivation driving the transcriptional profile. However, it is possible to hypothesize that specific signatures, which represent weaker signals with respect to the cell of origin, are due to the similarity of the genetic profile. So, we focused on the EMT pathway that, interestingly, also emerged as enriched in our SDH-deficient GISTs.
The acquisition of mesenchymal characters from epithelial cells, referred to as EMT, is normally associated to the embryonic development or to the tissue regeneration in adults. Moreover, EMT may occur during the tumor progression, inducing metastatic activity and increasing malignancy [19].
Several genes belonging to EMT signature are overexpressed in our SDH-deficient GISTs with respect to KIT-mutant GISTs. Among them, we found the cadherins CDH2 and CDH6, the cytokine receptor CRLF1, the secretory protein SCG2, the amyloid precursor APLP1, the enolase ENO2, and the secreted protein MGP.
Taking into account this set of genes, a cluster analysis was performed. SDH-deficient GISTs and pheochromocytoma/paraganglioma clustered distinctly apart from KIT-mutant GISTs (Figure 3B), suggesting that the EMT expression pattern represents a shared feature of SDH-deficient tumors and is clearly different from that of KIT-mutant GISTs. In addition to the previously cited EMT genes, we also found the overexpression of the basic helix-loop-helix transcription factor TWIST1, which is known to be associated with the EMT process and to play an important role in embryonic development, suggesting the existence of a different grade of differentiation shifted towards an earlier stage.
Notably, SDH-deficient GISTs showed the up-regulation of hedgehog signaling, another pathway strongly related to cell differentiation and cancer invasion. Similarly to the EMT pathway, we found that hedgehog signaling genes produced clusters that separate KIT-mutant GISTs and SDH-deficient (GISTs and pheochromocytoma/paraganglioma together), as shown in Figure 3C.
Finally, following the same procedure, we also performed the hierarchical clustering for hypoxia pathways. The cluster analysis showed that SDH-deficient GISTs and SDHB-mutant pheochromocytoma/paraganglioma shared the expression of hypoxia genes, which is particularly evident for FOXO3, VLDLR, and ENO2 (Figure 3D).
The hypoxia condition was associated with the overexpression of genes encoding for glutamate receptors [20], and in our data, as a matter of fact, we found the glutamate receptor GRIA1 as one of the most up-regulated genes in SDH-deficient GISTs in both RNA-seq and microarray datasets.
SDH-Deficient GIST Immune Profiling
While the transcriptome profile of SDH-deficient GISTs has proved to be enriched in varied gene signatures supporting the histological origin, the oncogenic mechanism, and the tumor behavior, looking at the depleted signals, the leitmotiv appeared to be related to the immune landscape. Based on this observation, we applied CIBERSORT to comparatively evaluate the tumor microenvironment composition in the two GIST molecular subgroups. As for the DE analysis, CIBERSORT was run separately for microarray and RNA-seq data. The absolute and relative quantification of 22 hematopoietic populations is reported in Supplementary Table S8; the absolute values were also adopted to build the heatmaps shown in Figure 4A,B. The analysis highlighted the M2 macrophages and the resting memory CD4+ T-cells as the most abundant cell types in both GIST groups. These observations are in agreement with what was previously described by several authors [10,21]. Overall, neither relative nor absolute abundance of tumor microenvironment subpopulations allowed SDH-deficient GISTs to be clustered separately from KIT-mutant GISTs; however, the t-test analysis at the single-subpopulation level depicted some noteworthy evidence. In particular, SDH-deficient GISTs in the microarray series showed a significantly lower abundance of M1 macrophages (p-value = 0.03) and NK cells (p-value < 0.01), and similar trends were found in RNA-seq data (Figure 4C-F). Moreover, SDH-deficient GISTs in RNA-seq samples showed a statistically significant lower level of CD8+ T-cells (p-value < 0.01) and dendritic cells (p-value = 0.03), which was also confirmed in the microarray series without reaching significance (Figure 4G-J).
These results did not provide a definitively strong signal, probably due to the small and unbalanced sample number; however, they unequivocally offered a picture that overlaps with the gene set enrichment and over-representation analyses described above, defining SDH-deficient GISTs as tumors with a cold tumor microenvironment.
We also specifically evaluated immune-related gene signatures previously analyzed in GIST [10] and first described as predictors of immunotherapy response [22,23]: the expanded IFN-γ-induced immune signature (EIIS) and the T-cell-inflamed signature (TIS). We found that the EIIS score is lower in SDH-deficient GISTs (Figure 5). Even if we were not able to cluster SDH-deficient and KIT-mutant GISTs based on the EIIS signal of single genes (Supplementary Figures S2 and S3), we can observe a lower average EIIS score in SDH-deficient GISTs (Figure 5A,B), likely driven by a few EIIS genes (including CXCL10, STAT1, and HLA-E) that are significantly down-regulated in this GIST group. Following the same procedure adopted by our group [10], we also evaluated the TIS score in our GIST series, comparing the results with the TIS score distribution across tumor types collected in The Cancer Genome Atlas (TCGA) database. Interestingly, we found that SDH-deficient GISTs showed TIS scores closer to glioblastoma multiforme and kidney renal papillary cell carcinoma, while KIT-mutant GISTs placed near breast cancer and pancreatic adenocarcinoma. For the sake of clarity, we also included in this analysis the GISTs of our previous series [10], excluding the KIT-mutant GISTs and keeping the PDGFRA-mutant GISTs. In strong agreement with our previous findings [24], PDGFRA-mutant GISTs are confirmed as the most immunogenic GIST molecular subgroup, showing a TIS score very similar to that of tumor types known to benefit from immunotherapy (such as lung cancer) (Figure 6). On the contrary, the TIS data obtained for SDH-deficient GISTs, paired with a lower EIIS expression and with the tumor immune microenvironment depletion, suggested that this GIST subgroup should be considered a noninflamed tumor for which immunotherapeutic approaches are far from being taken into consideration.
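At their core, panel scores such as EIIS reduce to averaging the normalized expression of a fixed gene list per sample. A minimal sketch of that reduction (the published EIIS/TIS scores use specific gene weights and normalizations not reproduced here):

```python
import numpy as np

def signature_score(expr, gene_index, signature):
    """Mean log2 expression of a signature's genes, per sample.

    expr: genes x samples matrix; gene_index: list of row labels.
    Signature genes missing from the matrix are simply skipped.
    """
    rows = [gene_index.index(g) for g in signature if g in gene_index]
    return expr[rows, :].mean(axis=0)
```

Comparing the resulting per-sample scores between molecular subgroups (e.g. SDH-deficient versus KIT-mutant) gives the kind of group-level contrast reported above.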
Discussion
In this study, we compared the gene expression profile between two different molecular groups of GIST, SDH-deficient and KIT-mutant, using a retrospective collection of RNA-seq and microarray data.
We identified distinct transcriptional profiles of SDH-deficient with respect to KIT-mutant GISTs, confirming what was previously described [25]. Moreover, we found interesting signaling pathways in SDH-deficient GISTs that may provide useful information on pathogenesis and on potential therapeutic targets. Differential expression analysis, followed by over-representation and gene set enrichment analysis, revealed among the up-regulated pathways in the SDH-deficient group those of FGFR signaling, hypoxia, and EMT. Moreover, the SDH-deficient group showed a gene signature mainly characterized by overexpression of neural markers. Conversely, among the underexpressed pathways are the interferon gamma/alpha response, KRAS and mTORC1 signaling, fatty acid metabolism, and complement. Interestingly, the immune-related signatures seem to be under-represented with respect to other GIST subgroups such as the PDGFRA-mutant GISTs [21,24].
Our data confirmed the expression of markers related to neural development. As previously described, we found a high expression of the genes LHX2, NEFL, KIRREL3, CDH2, and IGF1R. Furthermore, the present analysis showed other important neural markers such as the glutamate receptor GRIA1, the integrin-α8 ITGA8, and the neuronal cell adhesion molecule NRCAM. These results lead us to assume that the SDH-deficient GIST group originates from cells committed to the neural lineage. Recently, Young et al. have deeply evaluated the ICC transcriptome profile, identifying an important set of ICC markers [11,12]. Considering the hypothesis of a different molecular origin, we further investigated by analyzing the expression of ICC markers in the two groups of samples. We found that the majority of ICC markers were highly and equally expressed in both GIST subgroups; however, we found a modulated expression of some ICC markers known to be involved in neuronal differentiation, such as THBS4 and EDN3 [13,14].
In our series, two members of the transmembrane receptor tyrosine kinase family, FGFR1 and FGFR2, were highly expressed in all samples, while FGFR3 and FGFR4 showed a lower level of expression. Conversely, we found a differential expression profile between the two groups for the FGF ligands. In particular, FGF4, FGF2, FGF7, and FGF10 showed significantly higher expression in the SDH-deficient group.
The cell surface receptor FGFR2 belongs to the human immunoglobulin superfamily. Its tyrosine kinase activity, triggered by extracellular ligand interaction and subsequent autophosphorylation, is involved in relevant biological processes including cell differentiation and mitogenesis, migration, and apoptosis. The extracellular portion of the receptor, carrying the Ig domains, may interact with the secreted FGFs or with other membrane proteins, including the neural cell adhesion molecules, NCAMs (also up-regulated in our SDH-deficient GIST series).
A large number of studies have indicated that the deregulation of FGF signaling leads to many types of cancer (including hematological malignancies, breast cancer, and sarcomas), in which the genetic driver could be FGFR translocations, amplifications, or point mutations leading to FGFR activation, or an increased level of autocrine or paracrine ligand stimulation [26].
The involvement of FGF/FGFR signaling in GIST pathogenesis has been established in different molecular subgroups. It has been shown that in SDH-deficient GISTs, methylation of an FGF insulator region is responsible for the induction of FGF4 expression [27,28]. FGF3-FGF4 locus topology is profoundly altered in SDH-deficient GISTs, with CTCF insulator loss allowing aberrant expression of FGFR ligand genes [29]. We also recently confirmed that overexpression of the FGF4 oncogene is related to the epigenetic status of FGF4 in GIST [30].
Interestingly, up-regulation of genes encoding lysosomal enzymes, such as SGSH, HGSNAT, HEXA, HEXB, NAGLU, and ARSB, which act in the degradation of the main GAG groups, was observed in the SDH-deficient GIST group. GAGs are a family of complex polysaccharides known to play a crucial role in cell biology, interacting with different growth factors and other transient components of the extracellular matrix [31]. These molecules have been widely reported as modulators of the tumorigenic process by controlling signaling loops leading to unregulated cell growth, cancer progression, angiogenesis, and metastasis [32]. Particularly interesting is the fact that specific groups of GAGs, such as the heparan sulfates, are able to trigger cell proliferation mechanisms through fibroblast growth factor (FGF1 and FGF2), vascular endothelial growth factor (VEGF), and transforming growth factor-β signaling [32,33].
Furthermore, we compared our GIST series with a set of SDH-deficient pheochromocytomas/paragangliomas in order to understand whether the molecular signature of SDH-deficient GISTs was peculiar to GIST or related to the loss of the succinate dehydrogenase complex.
In the SDH-deficient GIST series, several genes belonging to EMT signature are overexpressed. Among them, we may list the cadherins CDH2 and CDH6, the cytokine receptor CRLF1, the secretory protein SCG2, the amyloid precursor APLP1, the enolase ENO2, and the secreted protein MGP.
Loriot et al. performed transcriptional profiling to better understand the participation of EMT in the metastatic evolution of pheochromocytoma/paraganglioma. They identified the pathways that distinguish SDHB-metastatic tumors from all other types of pheochromocytoma/paraganglioma and suggested that activation of the EMT process might be associated with the particularly invasive phenotype of this group of tumors [18]. Our cluster analysis shows a separation of SDH-deficient GISTs together with pheochromocytoma/paraganglioma from KIT-mutant GISTs, suggesting that this pattern is common to tumors sharing a deficiency of the succinate dehydrogenase complex.
Several studies have previously shown that EMT features can be affected by genetic aberrations in the Krebs cycle enzymes, proving that metabolic rewiring can be linked to cell plasticity and oncogenic transformation [34]. In particular, inhibition of the expression of SDH genes was associated with EMT activation in breast cancer [35], showing that SDH loss-of-function can be a causative factor for EMT in tumors. Indeed, many authors have described some types of sarcomas (such as synovial sarcoma, Ewing sarcoma, and uterine carcinosarcoma) as presenting an intermediate behavior between the mesenchymal and epithelial stages, named the "metastable" phenotype [36]. This scenario is supported in our SDH-deficient series by the up-regulation of the EMT marker N-cadherin (CDH2); moreover, the expression of specific markers, like TWIST1, corroborates the hypothesis that these tumors are blocked at an early stage of differentiation, correlating with the not rare clinical evidence of metastatic presentation.
Similarly, SDH-deficient GISTs showed up-regulation of hedgehog signaling, which is involved in the regulation of cell differentiation and proliferation. Additionally, for the genes associated with this pathway, we found evidence of similarity with SDHB-mutant pheochromocytoma/paraganglioma. Lastly, we found hypoxia signaling up-regulated in SDH-deficient GISTs. As with the EMT and hedgehog pathways, we showed that genes related to hypoxia signaling produced clusters combining SDH-deficient GISTs with pheochromocytoma/paraganglioma and separating them from KIT-mutant GISTs. Hypoxia is a metabolic condition in which tissues show a low oxygen level, leading to a failure to maintain cellular functions. Hypoxia is known to be directly implicated in the neoplastic transformation of cells, which change their pattern and characteristics in response to the lack of oxygen in the microenvironment. However, hypoxia-induced phenotypes are observed in some tumors even in the absence of hypoxia; in these cases, the condition is referred to as pseudo-hypoxia. Several malignant features are associated with the hypoxic/pseudo-hypoxic condition, including stem cell-like traits, metabolic alterations, and EMT [37], as well as angiogenesis, invasion, and metastasis [38]. This characteristic has been widely described in the subgroup of pheochromocytomas and paragangliomas carrying mutations in SDH genes and VHL, often classified as a pseudo-hypoxic cluster. In these tumors, together with SDH-deficient GISTs, dysregulation of the tricarboxylic acid cycle leads to the pseudo-hypoxia status [39], also promoting the anaerobic process of glycolysis [40], which is indeed up-regulated in our SDH-deficient GIST series.
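The cluster analysis referred to here, in which SDH-deficient GISTs group with pheochromocytoma/paraganglioma on pathway-gene expression, is the standard hierarchical-clustering workflow. A minimal sketch with simulated data (the matrix below is synthetic, not the study's expression values):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy expression matrix (samples x genes) for a hypoxia-like gene set:
# four samples with high signature expression (stand-ins for SDH-deficient
# GISTs plus paragangliomas) and two with low expression (KIT-mutant GISTs).
rng = np.random.default_rng(0)
high = rng.normal(8.0, 0.3, size=(4, 5))
low = rng.normal(2.0, 0.3, size=(2, 5))
X = np.vstack([high, low])

# Ward-linkage hierarchical clustering, cut into two clusters.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

On real data the matrix would hold normalized log expression of the pathway genes, and the resulting dendrogram would be inspected rather than cut at a fixed number of clusters.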
Taken together, these observations indicate that the expression of EMT, hedgehog, and hypoxia pathways is strongly linked to the SDH complex deficiency, a feature shared with pheochromocytoma/paraganglioma, which could explain and support clinically relevant differences with other GIST subgroups, such as for the metastatic behavior.
In this study, the immunological state of both groups of GISTs was evaluated. The results showed a significant absence of immune infiltrate in SDH-deficient patients, who indeed display a low abundance of tumor-infiltrating CD8+ cells, M1 macrophages, NK cells, and dendritic cells. Moreover, the EIIS signature in SDH-deficient GISTs is lower than in KIT-mutant GISTs. By comparing the TIS score in our GIST series with the TIS score distribution across several tumor types (collected in the TCGA database), we can see that the SDH-deficient TIS score is closest to that of glioblastoma multiforme and kidney renal papillary cell carcinoma, emerging as the lowest with respect to the other GIST molecular subgroups (both KIT-mutant and PDGFRA-mutant), which showed a TIS score more similar to that of hot tumors (such as melanoma and lung cancer).
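Signature scores of this kind are typically computed per sample from the expression of a fixed gene panel. The helper below is a simplified stand-in (mean log2 expression), not the actual TIS algorithm, and the gene names and counts are illustrative only.

```python
import numpy as np

def signature_score(expr, signature, gene_index):
    """Per-sample score as the mean log2(count + 1) of signature genes.

    expr: genes x samples matrix; gene_index: gene name -> row index.
    A simplified stand-in for panel-based scores such as TIS.
    """
    rows = [gene_index[g] for g in signature if g in gene_index]
    return np.log2(np.asarray(expr, dtype=float)[rows] + 1).mean(axis=0)

# Toy data: columns are samples, an "inflamed" one and a "cold" one.
genes = {"CD8A": 0, "GZMB": 1, "STAT1": 2}
expr = [[500, 3], [300, 1], [400, 2]]
scores = signature_score(expr, ["CD8A", "GZMB", "STAT1"], genes)
```

The higher score for the first sample mirrors the kind of comparison made between "hot" and "cold" tumor groups.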
These findings lead us to assume that SDH-deficient GISTs are non-inflamed cancers with a poor tumor microenvironment, clearly distinct from other GIST groups for which several studies have speculated about the possible efficacy of immunotherapeutic approaches [10,21,41].
Conclusions
This study delves into the expression landscape of SDH-deficient GISTs and highlights similarities and differences in gene expression patterns with respect to the most common KIT-mutant GISTs and to other neoplasms carrying an analogous molecular background leading to SDH loss of function. These findings could help the scientific community of oncologists, pathologists, and biologists to better understand both the histology and the dysregulated biological processes as putative targets of new therapeutic strategies.
|
2021-03-18T05:12:50.094Z
|
2021-03-01T00:00:00.000
|
{
"year": 2021,
"sha1": "10a16fd0aa4f438a15fca3db6b5e4d6e04500033",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/5/1057/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "10a16fd0aa4f438a15fca3db6b5e4d6e04500033",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
271329582
|
pes2o/s2orc
|
v3-fos-license
|
Filling ability of ready-to-use or powder-liquid calcium silicate-based sealers after ultrasonic agitation
Abstract This study evaluated the effect of ultrasonic agitation on the filling capacity of the ready-to-use calcium silicate-based sealer Bio-C Sealer (BCS, Angelus, Paraná, Brazil) or the powder-liquid BioRoot RCS (BR, Septodont, Saint-Maur-des-Fossés, France) in curved artificial canals by micro-computed tomography (micro-CT). Additionally, flow (mm) and flow area (mm2) were evaluated for both materials. Acrylic resin main canals (60° curvature and 5 mm radius, with 3 lateral canals in the cervical, middle, and apical thirds) were prepared up to size 40/.05 (Prodesign Logic, Brazil). The agitation method used an ultrasonic tip (US, Irrisonic, Helse, Brazil): BCS, BCS/US, BR, and BR/US. All specimens were filled using the single-cone technique. The samples were scanned by micro-CT (8.74 µm) after obturation. The percentages of filling material and voids were calculated. Flow was evaluated based on the ISO 6876/2012 standards (mm) and area (mm2). The data were statistically analyzed using ANOVA and Tukey tests (α = 0.05). BR/US showed a lower percentage of filling material in the lateral canals than BCS and BCS/US (p<0.05). BR/US resulted in a higher percentage of voids than BR in the lateral apical third (p<0.05). BCS showed higher flow than BR (p<0.05). BCS and BR presented proper filling capacity in the simulated curved canals regardless of the use of ultrasonic agitation. However, BR/US showed more voids in the apical third. BCS demonstrates higher filling ability.
Introduction
Proper filling of the root canal system is a key factor in achieving higher rates of endodontic treatment success (1). Voids in the filling provide a greater chance for reinfection (2)(3)(4). Therefore, different obturation techniques have been proposed to optimize root canal filling (3). The use of an endodontic sealer is essential to fill the spaces between gutta-percha and root canal walls, as well as areas of anatomical complexity, such as curvatures and lateral canals (1)(2)(3). Endodontic sealers must have adequate flow, aiming to fill irregularities in the root canal system (1). According to the ISO 6876/2012 standards (5), a flow of more than 17 mm is required for the sealer. In addition, the area occupied by the material, expressed in mm2, can be used as a complementary flow analysis (6).
Calcium silicate-based sealers are commercially available in ready-to-use or powder-liquid forms (7). The ready-to-use presentation demands the moisture of the root canals for setting, while the powder-liquid sealer initiates the hydration reaction in the presence of water during the manipulation of the material prior to insertion in the root canal (8). Bio-C Sealer (Angelus, Londrina, Paraná, Brazil) is a premixed, ready-to-use endodontic sealer based on calcium silicates. Adequate physicochemical and biological properties have been reported for this material (9,10), including filling capacity in flattened root canals (2,3), as well as in the apical third of curved canals of lower molars (11). BioRoot RCS (Septodont, Saint-Maur-des-Fossés, France) is a powder-liquid sealer, and the liquid is water-based with calcium chloride and polycarboxylate (12). Proper biological and physicochemical properties have been described for BioRoot RCS (13,14). However, a higher percentage of voids than AH Plus (15) and GuttaFlow BioSeal (16) has been reported for BioRoot RCS.
Bioceramic sealers are often associated with the single-cone filling technique (2,3). However, this technique requires a sealer with adequate flow (3) for proper filling of root canals with anatomical complexities, such as curvatures and lateral canals (11). Ultrasonic agitation of endodontic sealer before insertion of the gutta-percha cone has been proposed as a resource to optimize the filling of root canals with anatomical complexities (17,18). On the other hand, it has also been reported that ultrasonic agitation does not improve the penetration of bioceramic materials into intratubular dentin (19,20). To date, there are no data in the literature on the effect of ultrasonic agitation on the filling capacity of calcium silicate-based sealers in ready-to-use or powder-liquid presentation.

Key Words: Endodontics, physical properties, root canal filling, ultrasonic, x-ray microtomography
Therefore, the aim of this study was to evaluate, using micro-computed tomography (micro-CT), the influence of ultrasonic agitation on the filling capacity of ready-to-use Bio-C Sealer or powder-liquid BioRoot RCS in simulated curved canals, in addition to the flow of these materials using the conventional ISO methodology and a complementary analysis. The null hypotheses were that ultrasonic agitation would not influence the filling capacity of the different sealers and that there would be no difference in flow between the two materials.
Sample size calculation
The sample size for this study was calculated with G*Power software (3.1.7 for Windows, Heinrich Heine Universität, Düsseldorf, Germany). A one-way ANOVA test was used with an alpha-type error of 0.05 and a beta power of 0.99. The effect size of 1.27 was determined based on a previous study that used a similar methodology (21). A total of 5 specimens per group was indicated as the ideal size required; thus, n=6 was used to compensate for possible losses during the implementation of the methodology.
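G*Power's one-way ANOVA power computation is based on the noncentral F distribution. The sketch below reproduces that calculation under the assumption (not stated explicitly in the text) that the reported effect size of 1.27 is Cohen's f and that four groups were compared.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n_per_group, k_groups, alpha=0.05):
    """Power of a balanced one-way ANOVA for Cohen's f effect size."""
    n_total = n_per_group * k_groups
    dfn, dfd = k_groups - 1, n_total - k_groups
    crit = f_dist.ppf(1 - alpha, dfn, dfd)          # critical F under H0
    lam = (f_effect ** 2) * n_total                 # noncentrality parameter
    return 1.0 - ncf.cdf(crit, dfn, dfd, lam)       # P(reject H0 | H1)

# Smallest n per group reaching 99% power for f = 1.27, 4 groups:
n = 2
while anova_power(1.27, n, 4) < 0.99:
    n += 1
```

A small n per group suffices here because the assumed effect size is very large; the study then added one extra specimen per group as a safety margin.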
Preparation of the curved artificial canals
Acrylic resin models with a curved main canal and three simulated lateral canals in the cervical, middle, and apical thirds (n=24) were used (IM do Brasil Ltda, São Paulo, SP, Brazil). The curved main canal had a standard length of 24 mm, a 60° angle of curvature, and a 5 mm radius, and the center of the curvature was 5 mm from the end of the canal. The simulated lateral canals were positioned 2, 4, and 6 mm from the apical foramen, representing the apical, middle, and cervical simulated lateral canals, respectively (Figure 1). The working length (WL) was determined using a #10 K-file (Dentsply Maillefer, Ballaigues, Switzerland) 1 mm short of the simulated apical foramen. All curved main canals were prepared with the ProDesign Logic rotary system (Easy Equipamentos Odontológicos, Belo Horizonte, Minas Gerais, Brazil) operated by an electric motor (VDW Silver, VDW GmbH, Munich, Germany). The 25/.01 instrument was used at a speed of 350 rpm and a torque of 1 N.cm. Then, the 25/.05, 35/.05, and 40/.05 instruments were used at a speed of 600 rpm and a torque of 3 N.cm. All instruments were applied with in-and-out movements up to the WL. The simulated curved canals were irrigated with 2.5 mL of distilled water after each instrument, using a 5 mL syringe and a NaviTip 27-G needle (Ultradent Products, South Jordan, UT) 2 mm short of the WL (22). Subsequently, all canals were carefully dried using 2 tips of #40 absorbent paper (Dentsply Maillefer) in order not to cause excessive drying, according to the protocol described by Pinto et al. (11).
Obturation of the curved artificial canals
After preparation, the curved artificial canals were divided into 4 experimental groups (n=6) for obturation using the single-cone technique and one of the sealers under the different experimental conditions. All information about the sealers, composition, manufacturers, proportions, and experimental groups is shown in Box 1. For the canals filled with Bio-C Sealer, the sealer was injected into the simulated canals approximately 4 mm short of the WL, using the syringe and plastic needles provided by the manufacturer. BioRoot RCS was manipulated according to the manufacturer's specifications and inserted into the canal using a #40 K-file (Dentsply Maillefer) pre-curved at the WL, and a #40 lentulo spiral (Dentsply Maillefer) operated clockwise with a low-speed motor (Micromotor N270) and contra-angle (Dabi-Atlante, Ribeirão Preto, São Paulo, Brazil) 2 mm short of the WL. For the groups with agitation, this was performed using an Irrisonic ultrasonic tip (Helse Ultrasonic, Santa Rosa de Viterbo, São Paulo, SP, Brazil). The tip was activated for 40 seconds, 20 seconds in the buccal-lingual direction and 20 seconds in the mesio-distal direction of the simulated curved canals, 2 mm short of the WL, after insertion of Bio-C Sealer or BioRoot RCS. A Newtron® Booster ultrasonic device (Acteon North America, New Jersey, USA) was used at a frequency of 50 Hz and a power of 10% to activate the Irrisonic tip, following the manufacturer's recommendations. After agitation, gutta-percha master points of size 40, taper 0.05 (Tanari industry Ltda., São Paulo, Brazil), previously selected based on tip diameter and taper using a profilometer (Profile Projector Nikon model 6C-2), were inserted into each simulated canal up to the WL. For all experimental groups, gutta-percha excess was cut at the cervical level with a heated plugger (Paiva #2; Golgran, São Caetano do Sul, São Paulo, Brazil). All specimens were stored in an oven at 37 °C and 95% humidity for 72 hours for the final setting of the sealers.
Micro-CT Analysis
The artificial canals were scanned using micro-CT (SkyScan 1176; Bruker, Kontich, Belgium) after obturation, using parameters defined after a pilot test: isotropic voxel of 8.74 µm, copper and aluminum filter, exposure time of 1900 ms, rotation step 0.5, rotation angle 180°, frame 4, 80 kV, and 310 µA. The images obtained were reconstructed using NRecon software (NRecon v.1.6.3, Bruker) and quantitatively analyzed with CTAn software (CTAn v1.15.4.0, Bruker). The percentage of the volume of the filling material (sealer and gutta-percha) and the percentage of voids were quantified for the curved artificial main canal and for the simulated lateral canals in the cervical, middle, and apical thirds. The volume of interest (VOI) was selected along the whole extension of the main canal and for each of the lateral canals. An interpolated region of interest was defined to exclude the acrylic and artifacts.
After that, the grayscale range needed to recognize each object of study was determined with a density histogram using adaptive thresholding. The threshold level for both materials in the simulated canals was 90-255. To obtain the percentage of the volume of the filling material, the "percentage bone volume (BV/TV)" shown in the 3D analysis of the CTAn software was considered (Figure 2), and the percentage of voids was determined using the following formula: [Percentage of voids = 100 - percentage of the volume of the filling material]. Three-dimensional models were created with CTVox software (v.3.2, Bruker). It is important to highlight that a single previously trained and calibrated operator executed all analyses.
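The void computation above reduces to a one-line complement of the BV/TV output. A minimal helper (illustrative only; the 98.6% value is a made-up example, not a result from the study):

```python
def void_percentage(bv_tv_percent):
    """Voids (%) = 100 - percentage volume of filling material (BV/TV)."""
    if not 0.0 <= bv_tv_percent <= 100.0:
        raise ValueError("BV/TV must be a percentage in [0, 100]")
    return 100.0 - bv_tv_percent

voids = void_percentage(98.6)  # e.g. a canal 98.6% filled -> 1.4% voids
```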
Flow test following ISO 6876/2012 standards and additional analysis
The flow test was performed based on the ISO 6876 standards (5). After manipulation, 0.05 ± 0.005 mL of each material was placed in the center of a glass plate using a graduated syringe (n=10). At 180 ± 5 seconds after the initial manipulation, another glass plate (20 g) and a metal weight (100 g) were placed over the sealer and kept in place for 10 minutes. After that, the maximum and minimum diameters of the material on the glass plate were measured with a digital caliper (Mitutoyo, Suzano, São Paulo, Brazil). When a difference of less than 1 mm between the diameters was observed, the mean value was recorded. A second analysis was performed by photographing the set (glass plate and sealer) next to a millimeter ruler. The images obtained were evaluated using ImageJ software (National Institutes of Health, Bethesda, USA) to obtain the flow area of the material expressed in mm2, as proposed by Tanomaru-Filho et al. (6).
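The ImageJ area measurement amounts to counting the pixels classified as sealer and scaling by the pixel size calibrated against the millimeter ruler. A sketch with a synthetic mask (the mask shape and 0.1 mm/pixel scale are illustrative assumptions, not the study's calibration):

```python
import numpy as np

def flow_area_mm2(mask, mm_per_pixel):
    """Flow area (mm^2) from a binary sealer mask and the ruler-derived
    mm-per-pixel scale of the photograph."""
    return int(np.count_nonzero(mask)) * mm_per_pixel ** 2

# Toy 100x100 image in which a 40x40-pixel square is sealer, at 0.1 mm/pixel:
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
area = flow_area_mm2(mask, 0.1)  # 1600 pixels * 0.01 mm^2 per pixel
```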
A schematic representation of the filling capacity and flow methodologies can be seen in Figure 3.
Statistical analysis
All data were analyzed using GraphPad Prism 7.00 statistical software (GraphPad Software, La Jolla, CA, USA). The normal distribution of the data was confirmed by the Shapiro-Wilk test. Comparisons between groups were performed using ANOVA and Tukey tests. The significance level was 5% for all analyses.
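The same pipeline (normality check, then ANOVA; Tukey's post hoc test would follow for multi-group comparisons) can be sketched with SciPy. The flow values below are simulated for illustration, not the study's measurements.

```python
import numpy as np
from scipy.stats import shapiro, f_oneway

rng = np.random.default_rng(1)
# Hypothetical flow measurements (mm) for two sealers, n=10 each:
bcs = rng.normal(25.0, 0.5, 10)
br = rng.normal(19.0, 0.5, 10)

# Shapiro-Wilk normality check on each group, then one-way ANOVA.
shapiro_p = [shapiro(g).pvalue for g in (bcs, br)]
stat, p = f_oneway(bcs, br)
```

With only two groups the ANOVA is equivalent to a t-test; with the four obturation groups, a Tukey HSD step would identify which pairs differ.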
Filling Capacity
Bio-C Sealer and BioRoot RCS showed a similar percentage of filling material in the curved main canal, independent of the use of ultrasonic agitation (p>0.05). BioRoot RCS with agitation presented a higher percentage of voids in the lateral canals compared to Bio-C Sealer without or with agitation (p<0.05). Ultrasonic agitation of BioRoot RCS resulted in a higher percentage of voids compared to BioRoot RCS without agitation in the lateral canal of the apical third (p<0.05). Ultrasonic agitation did not influence the filling ability of Bio-C Sealer in the lateral canals (p>0.05) (Table 1, Figure 4).
Flow
The results of the flow test using ISO 6876 and the additional analysis are described in Table 2. Bio-C Sealer showed greater flow in both analyses (mm and mm2) compared to BioRoot RCS (p<0.05).
Discussion
This study evaluated the effect of ultrasonic agitation on the filling ability of ready-to-use Bio-C Sealer or powder-liquid BioRoot RCS. Based on the current findings, significant differences were detected in the percentage of filling material and voids between the sealers evaluated, leading to the rejection of the first null hypothesis.
The results of the present study revealed lower filling of the simulated lateral canals when BioRoot RCS was ultrasonically agitated, compared with Bio-C Sealer regardless of its agitation protocol. Lower flow (22), a higher percentage of voids (15), and pores (23) have been reported for BioRoot RCS when compared to the AH Plus sealer. On the other hand, Bio-C Sealer demonstrates greater flow than AH Plus and the ready-to-use calcium silicate sealer TotalFill BC Sealer (FKG Dentaire SA, La Chaux-de-Fonds, Switzerland) (9). Furthermore, adequate results for the filling capacity of flattened root canals have been observed for Bio-C Sealer (2,3). Thus, we can suggest that excellent flow associated with filling capacity can explain the lower percentage of voids observed for Bio-C Sealer in the lateral canals of this study.
Interestingly, our results demonstrated a higher percentage of voids when BioRoot RCS was agitated compared to not agitated in the lateral canals of the apical third. In line with this finding, ultrasonic agitation of endodontic sealer was not related to greater filling in extracted human teeth (17). Furthermore, physical changes have been described for BioRoot RCS after heat application, such as a reduced setting time and lower flow (24). Thus, the increase in temperature caused by ultrasonic agitation may have negatively influenced the filling capacity of BioRoot RCS in the lateral canals of the apical third in this study. In addition, we can speculate that the heat resulting from ultrasonic vibration may have affected the setting time and flow of this material, since a previous study reported adequate physicochemical properties for bioceramic materials in powder-liquid presentation (7).
Adequate filling capacity (close to 100%) was observed in the simulated curved main canals of this study for both materials, regardless of ultrasonic agitation. This result may be related to the properties of calcium silicate-based sealers (7,10), as well as to the use of simulated circular canals in acrylic resin blocks (21,25). It is important to highlight that the use of simulated canals may not completely represent clinical conditions (14,21,25), which is a limitation of the present investigation. Therefore, future research should focus on the use of extracted human teeth with root canals presenting anatomical complexities to further explore the effects of ultrasonic agitation on the filling capacity of bioceramic sealers. The results of the present investigation can be used as a starting point for future comparisons.
In the present study, sealer flow was evaluated following the ISO 6876/2012 guidelines (linear measurement expressed in mm) (5) and through a complementary analysis considering the material flow area (mm2) (6). The additional flow analysis in mm2 was used to complement the conventional ISO standard, considering that the latter does not evaluate the whole area occupied by endodontic sealers (6). Therefore, the flow results in mm2 from this study can provide a better understanding of the flow capacity of bioceramic sealers in canals with anatomical complexities. Our results revealed that both sealers meet the ISO 6876 standards (≥ 17 mm), as previously reported (9,10,23). However, higher flow values were observed for Bio-C Sealer compared to BioRoot RCS in both analyses (mm and mm2), leading to the rejection of our second null hypothesis. High values were also reported for Bio-C Sealer in both flow analyses, being higher than AH Plus and TotalFill BC Sealer (9). These results may be correlated with the findings of the present study regarding the adequate filling capacity of Bio-C Sealer after obturation of simulated curved and lateral canals, regardless of the use of ultrasonic agitation.
The present investigation used different methodological approaches to allow an integrative analysis of the filling and flow capacity of bioceramic endodontic sealers in areas of anatomical complexity, such as curvatures and lateral canals, which represent a greater difficulty for adequate preparation and filling (1). Therefore, the current findings can provide greater support for the clinician in deciding whether or not to use the ultrasonic agitation protocol for bioceramic sealers in powder-liquid or ready-to-use form, especially in cases of complex root anatomies.
Figure 1. Box 1.
Figure 1. Representative image of the acrylic resin model with the standard size of the simulated curved principal canal and the lateral canals in the cervical, middle, and apical thirds.
Figure 2.
Figure 2. Representative image of the quantitative assessment of the percentage of filling material using the CTAn software.
Figure 3.
Figure 3. Schematic figure representing the methodology. (A) Preparation and obturation of the simulated curved canals using the single-cone technique and Bio-C Sealer or BioRoot RCS without or with ultrasonic agitation, and scanning with micro-CT (8.74 µm) to evaluate the percentage of voids. (B) Flow assessment according to ISO 6876:2012 (mm) and complementary analysis (mm2).
Figure 4.
Figure 4. Three-dimensional reconstructions of micro-CT showing the filling of the simulated curved canals after obturation with Bio-C Sealer or BioRoot RCS without or with ultrasonic agitation.
Table 1.
Mean and standard deviation of the percentage of filling material and voids for Bio-C Sealer or BioRoot RCS without and with ultrasonic agitation in simulated curved canals.
Bio-C Sealer without agitation | Bio-C Sealer with agitation | BioRoot RCS without agitation | BioRoot RCS with agitation
Different superscript lowercase letters in the same line indicate a statistical difference between the groups (p<0.05).
Table 2.
Mean and standard deviation of the flow in mm and mm2 of Bio-C Sealer or BioRoot RCS. Different superscript lowercase letters in the same line indicate a statistical difference between the groups (p<0.05).
|
2024-07-24T05:12:43.420Z
|
2024-07-22T00:00:00.000
|
{
"year": 2024,
"sha1": "93d62810f5f18ae0747c6d9625d6f261070bf981",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "93d62810f5f18ae0747c6d9625d6f261070bf981",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
25861269
|
pes2o/s2orc
|
v3-fos-license
|
Heterogeneous Sp1 mRNAs in human HepG2 cells include a product of homotypic trans-splicing.
Sp1 is one of the well documented transcription factors, but the whole structure of human Sp1 has not been determined yet. In the present study, we isolated several cDNAs representing two forms of human Sp1 mRNA with different 5'-terminal structures in HepG2 cells. Isolation of a genomic clone established that one of the cDNAs represents the mRNA having consecutive alignment of exons, which allowed deducing the complete amino acid sequence for human Sp1. Another cDNA clone had a surprising structure that possessed an alignment of exons 3-2-3. Both reverse transcriptase-polymerase chain reaction and RNase protection assays confirmed accumulation of the two forms of Sp1 mRNA in HepG2 cells. Because Southern blot analysis suggested that exon 3 is of a single copy in the genome, the cDNA clone having the duplicated sequences for exon 3 appeared to reflect the trans-splicing between pre-mRNAs of human Sp1.
Transcription factor Sp1 was initially identified as a protein that bound to multiple GGGCGG sequences (GC boxes) in the SV40 early promoter (1). Subsequent studies have shown that Sp1 also interacts with GC boxes in the promoters of cellular and other viral genes and activates expression of those genes (2,3). Although Sp1 had been regarded as a ubiquitous transcription factor that regulates transcription from TATA-less promoters of housekeeping genes, recent studies have suggested that Sp1 may also be involved in specific gene activation through modulation of its abundance and its phosphorylation and glycosylation states in response to a variety of signals (4-8). Likewise, our preliminary studies concerning gene expression responsive to insulin suggested that synthesis and/or degradation of Sp1 protein might be regulated by insulin. Accordingly, we started a structural study of human Sp1 mRNA to seek an explanation for this apparent insulin effect in its mRNA structure. Despite a large number of reports concerning the function of Sp1, the complete structure of human Sp1 protein has not been established yet. The reported cDNA clones for human Sp1 still lack the N-terminal and the upstream noncoding regions (9,10), although a DNA-binding domain having three zinc finger motifs and four transcriptional activation domains, termed domains A, B, C, and D (11), have been identified in the partial Sp1 structure. We report here the accumulation of two forms of human Sp1 mRNA in HepG2 cells and the evidence that one form of the products was generated by homotypic trans-splicing. The complete structure of human Sp1 protein was also deduced from the cDNA sequence.
EXPERIMENTAL PROCEDURES
5′ Rapid Amplification of cDNA Ends and Reverse Transcriptase-Polymerase Chain Reaction—Total RNA was extracted from HepG2 cells using ISOGEN (Nippongene, Toyama, Japan) according to the instructions of the manufacturer. The single-strand cDNA for 5′ RACE was prepared by in vitro synthesis of cDNA with avian myeloblastosis virus reverse transcriptase XL (Takara Shuzo, Tokyo, Japan) using total RNA (5 μg) and the primer RT (5′-TCTGTTCCTTTG-3′) and digestion of the template RNA with RNase H. When nucleotide positions were numbered relative to the transcription start site that was identified in this study, the primer RT corresponded to positions 2197 to 2186. The same nucleotide numbering is adopted throughout this paper. 5′ RACE was carried out using a 5′ Full RACE Core Set (Takara Shuzo). The first PCR was performed using the single-strand cDNAs concatenated by T4 RNA ligase and primers S1 (5′-GCTGGCAGATCATCTCTTCC-3′, positions 2144-2163) and A1 (5′-ACCCTGTGAAAGTTGTGTGG-3′, positions 2136-2117) through a 25-cycle amplification (94°C for 30 s, 52°C for 30 s, and 72°C for 4 min). Then, a nested PCR was applied to the first PCR products under the same conditions using primers S2 (5′-GGATCCTCTGGGGCTACCCCTAC-3′, positions 2164-2183) and A2 (5′-GAATTCTGTGAGGTCAAGCTCACCTG-3′, positions 2116-2096). Each primer contained both the sequence for a proper segment in the Sp1 gene and a sequence (underlined) for creation of a restriction site. Each product of the nested PCR was cloned into a pUC vector for DNA sequencing.
For detection of the Sp1 mRNA with the exon 3-2-3 alignment by RT-PCR, a 25-cycle amplification (94°C for 30 s, 52°C for 30 s, and 72°C for 50 s) was applied to the single-strand cDNA that was used for 5′ RACE and appropriate pairs of the following primers: primer T1 …

For detection of the Sp1 mRNA with the exon 2-3-2 alignment by RT-PCR, a single-strand cDNA was prepared by reverse transcription from poly(A)+ RNA (1 μg) of HepG2 cells using the primer R8 (5′-TGCCCGCAGGTGAGAGGTCTTG-3′); this primer was synthesized referring to the cDNA sequence downstream of exon 3. The first PCR was carried out using the primer X2 (5′-GTTCGCTTGCCTCGTCAGCG-3′, positions 81-100), primer T3, and primer 2R-1 (5′-AAGGCACCACCACCATTACC-3′, positions 1635-1616). The nested PCR was accomplished through a 25-cycle amplification (94°C for 30 s and 72°C for 2 min) using the following primers: primer R14 (5′-TTCATC…

Isolation of a Genomic Clone—Human genomic DNA was prepared from HepG2 cells according to a standard protocol (12) and completely digested with XbaI. Then a size-fractionated pool of the DNA fragments was ligated with the phage vector λDASH II (Stratagene, La Jolla, CA) to construct a human genomic library. This library was screened by a plaque hybridization technique using the 120-bp DNA fragment of human Sp1 cDNA (positions 56-175) as a probe.
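As a small aid to the primer bookkeeping above, an antisense primer annealing to a sense-strand segment is simply the reverse complement of that segment. The helper below is a generic sketch (applied here to the S2 primer sequence from the text; the check against a restriction site is illustrative):

```python
# Reverse-complement utility for checking sense/antisense primer pairs.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (A/C/G/T only)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# The BamHI site engineered into primer S2 is palindromic, so it is its
# own reverse complement:
print(reverse_complement("GGATCC"))  # GGATCC

# An antisense primer targeting the S2 sense-strand segment:
sense_segment = "GGATCCTCTGGGGCTACCCCTAC"
antisense_primer = reverse_complement(sense_segment)
print(antisense_primer)  # GTAGGGGTAGCCCCAGAGGATCC
```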
RNase Protection Assay—To construct template plasmids for in vitro synthesis of riboprobes, DNA fragments were amplified by PCR with two sets of primers. The primers XA1 (5′-AATAAGCTTGTTCGCTTGCCTCGTCAGCG-3′, positions 81-100) and XA2 (5′-TTATCTAGAAAGGCACCACCACCATTACC-3′, positions 1636-1616) were used with the template of the 0.41-kb 5′ RACE product, and primers BA1 (5′-AATAAGCTTTCACACCCATTGCCTCAG-3′, positions 3332-3349) and BA2 (5′-TAATCTAGAATTGCCCCCATTATTGCC-3′, positions 1615-1598) were used with the 1.6-kb 5′ RACE product. These primers contained additional sequences (underlined) for creation of an XbaI or HindIII site at each end. Each amplified DNA fragment was inserted between the XbaI and HindIII sites of pBluescript KS. The resulting plasmids were linearized by digestion with HindIII, and antisense riboprobes were synthesized from these T7 promoter-containing plasmids in the presence of [α-32P]CTP using a T7 RNA Synthesis Kit (Nippongene). The riboprobe for detection of the β-actin mRNA was also synthesized using a β-actin human antisense control template (Nippongene). RNase protection assays were performed with an RPA II kit (Ambion, Inc., Austin, TX) according to the manufacturer's instructions. In brief, riboprobes (5 × 10⁵ cpm each) were incubated at 42°C for 16 h with RNA samples as indicated in the figure legends. Then they were digested for 30 min at 37°C with 200 μl of a mixture of RNase A (2.5 units/ml) and RNase T1 (100 units/ml). The protected products were analyzed on a 6% polyacrylamide gel containing 8 M urea.
Genomic Southern Blot Analysis—Genomic DNA (2 μg) from HepG2 cells was digested completely with a restriction enzyme (BamHI, EcoRI, PstI, or XbaI), electrophoresed on a 0.7% agarose gel, and transferred onto a Hybond-N membrane (Amersham Pharmacia Biotech). The DNA on the membrane was allowed to hybridize with the 32P-labeled DNA fragment (positions 3225-3502) that corresponded to exon 3 of the human Sp1 gene and was washed under stringent conditions (0.2× SSPE plus 0.1% SDS, 15 min at 65°C, two times).
RESULTS

Two Forms of Human Sp1 cDNA with Different 5′-Terminal Regions—To obtain a human Sp1 cDNA clone containing the 5′-terminal region, we employed a 5′ RACE method using total RNA from HepG2 cells. The primers were designed to anneal specifically to a sequence in the 5′-terminal region of human Sp1 on the basis of the data registered in GenBank (accession number J03133). As shown in Fig. 1A, three kinds of DNA fragments with respective sizes of 0.34, 0.41, and 1.6 kb were mainly amplified. Sequence analysis revealed that all the products indeed possessed in common a known sequence of the Sp1 gene; however, these products were classified into two types based on the sequence upstream of this common sequence. The products of 0.41 and 0.34 kb had a new and identical sequence immediately upstream of the known sequence, although the 0.41-kb product contained a further upstream sequence of 71 bp (Fig. 1B). By contrast, the 1.6-kb product had an unexpected structure, in which another established sequence from the downstream region of the Sp1 gene was also linked upstream of the common sequence (Fig. 1B).
Because we obtained different cDNA clones for the 5′-terminal region of Sp1 mRNA, we analyzed the Sp1 gene in the human genome to elucidate the mechanism of the generation of these differences. In genomic Southern analysis with XbaI digestion, a single band of 14 kb was detected using the probe obtained from the 0.41-kb product (data not shown); thus we constructed a genomic library from the XbaI digest to obtain the genomic clone containing the 5′ region of the human Sp1 gene. Screening of this library with the same probe yielded a single positive clone, which was named Sp1E1 (Fig. 1C). Characterization and sequence analysis revealed that this genomic clone contained both the common Sp1 sequence and the new sequence that was found in the present 5′ RACE products (accession number AB039286). This result indicated that the 0.41- and 0.34-kb products contained an upstream exon of the human Sp1 gene.
To confirm whether the 5′-terminal end of the 0.41-kb product represented the 5′ terminus of Sp1 mRNA, we next performed primer extension analysis with poly(A)+ RNA isolated from HepG2 cells. This experiment showed only a single band, representing the product extended 56 bp from the 5′ end of the 0.41-kb product (Fig. 2B). There is no consensus sequence for a splice acceptor site in the genomic sequence preceding the one corresponding to the 5′-end region of the 0.41-kb product. We assume, therefore, that the position 56 bp upstream from the 5′ end of the 0.41-kb product is the transcription start site of the human Sp1 gene (Fig. 2A). The genomic region up to 266 bp upstream of this putative transcription start site did not contain any TATA box-like sequence, but it did contain four possible GC boxes at positions starting from −231, −182, −139, and −9, respectively, suggesting possible auto-regulation of Sp1 function (Fig. 2A). Although the 0.41- and 0.34-kb products did not contain the expected 5′ terminus of Sp1 mRNA, the newly determined sequence in these products had a stop codon in the frame for the Sp1 protein (Fig. 2A). Therefore, the first methionine codon in this open reading frame appeared to be the initiation codon, and the complete amino acid sequence of human Sp1 protein was thus deduced from the DNA sequences of the 0.34- and 0.41-kb products. The deduced amino acid sequence of human Sp1 is composed of 785 amino acid residues, and the calculated molecular mass is 80,691 Da. The amino acid sequence of the N-terminal region showed high homology with those of mouse and rat Sp1 proteins (Fig. 3).
Comparison of the sequences of the genomic and cDNA clones revealed the exon-intron boundaries in the 5′-terminal region of the Sp1 gene (Fig. 1, B and C). The Sp1E1 clone contained the first three exons of the Sp1 gene; the sizes of these exons were 178, 155, and 1513 bp, respectively. It was also shown that the 1.6-kb product had a 3′-terminal portion of exon 3 immediately upstream of exon 2; that is, it had the exon 3-2-3 alignment (Fig. 1B). The upstream exon 3 encoded exactly the same amino acid sequence as the downstream exon 3 in the same frame, except for the codon at the junction between exons 3 and 2 (GAC), whereas the codon between exons 3 and 4 was GGT; these encode Asp and Gly, respectively.
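The junction codons above (GAC at the exon 3-2 boundary, GGT between exons 3 and 4) can be checked against the standard genetic code; the minimal table below covers only the two codons needed here:

```python
# Translate the two junction codons with a minimal slice of the standard
# genetic code (only the codons discussed in the text are included).
CODON_TABLE = {"GAC": "Asp", "GGT": "Gly"}

def translate_codon(codon: str) -> str:
    return CODON_TABLE[codon]

# Codon spanning the exon 3-2 junction vs. the codon at the exon 3-4 junction:
print(translate_codon("GAC"))  # Asp
print(translate_codon("GGT"))  # Gly
```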
Establishment of the Presence of the Sp1 mRNA with the Exon 3-2-3 Alignment in HepG2 Cells—To confirm whether the Sp1 mRNA represented by the 1.6-kb product is naturally produced, RT-PCR analysis was performed with total RNA from HepG2 cells and several sets of primers (Fig. 4A). If the mRNA corresponding to the 1.6-kb product is indeed present in HepG2 cells, a 500-bp product is expected to be amplified in the RT-PCR with primers T1 and T6 followed by the nested PCR with primers T2 and T5. As a positive control, a nested PCR was also performed using primers T3 and T6 followed by primers T4 and T5 to amplify a 264-bp fragment. These expected segments were all amplified in these PCRs (Fig. 4B, lanes 1 and 2), whereas no amplification was observed in the negative control PCR with primers T6 and T5 (lane 3). We also confirmed that these amplified products had the expected sequences. To provide further direct evidence for the occurrence of the two forms of Sp1 mRNA and to estimate their accumulation levels, we next carried out RNase protection assays with two antisense riboprobes (Fig. 5A). Riboprobe 1, which was synthesized using the 1.6-kb product as the template, has the sequence complementary to the exon 3-2 boundary region, covering 171 nt in exon 3 and 77 nt in exon 2. Thus, three sizes of protected fragments were expected when riboprobe 1 was used: hybridization of this probe to the mRNA corresponding to the 1.6-kb product should produce a 248-nt fragment from the exon 3-2 junction-spanning region and a 171-nt fragment from the 3′-terminal region of exon 3 immediately preceding exon 4, and hybridization to the other mRNA gives rise to a 77-nt fragment from the 5′-terminal region of exon 2 following exon 1. Such bands were clearly observed with total RNA from HepG2 cells in a dose-dependent manner (Fig. 5B, lanes 2-4), whereas no band was observed with the yeast RNA that served as a negative control (lane 5).
The result with riboprobe 2 also confirmed the presence of the two Sp1 mRNAs. Riboprobe 2 had the sequence complementary to the exon 1-2 boundary region, comprising 98 nt in exon 1 and 97 nt in exon 2, and gave rise to two bands whose signal intensities were also dependent on the amount of total RNA used (lanes 7-9). The fragment of 195 nt corresponded to the fully protected product from the normal form of the transcript, whereas the fragment of 97 nt corresponded to the expected part of exon 2 in the mRNA related to the 1.6-kb product. Together, these results directly demonstrate the presence of two forms of Sp1 mRNA with different 5′-terminal structures in HepG2 cells. Furthermore, the signal intensity of each protected fragment also suggested a significant level of accumulation of either form of Sp1 mRNA in HepG2 cells.
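The expected fragment sizes follow from simple arithmetic on the probe layouts stated above; a quick check:

```python
# Expected RNase-protection fragment sizes from the probe layouts above.
# Riboprobe 1 spans the exon 3-2 junction: 171 nt in exon 3 + 77 nt in exon 2.
exon3_part, exon2_part = 171, 77
full_protection_probe1 = exon3_part + exon2_part
print(full_protection_probe1)  # 248 nt, from the trans-spliced (exon 3-2) mRNA

# Riboprobe 2 spans the exon 1-2 junction: 98 nt in exon 1 + 97 nt in exon 2.
exon1_part, exon2_part2 = 98, 97
full_protection_probe2 = exon1_part + exon2_part2
print(full_protection_probe2)  # 195 nt, from the cis-spliced (exon 1-2) mRNA
```

Partial protection of either probe yields only the single-exon portions (171 or 77 nt for probe 1; 97 nt for probe 2), which is exactly the band pattern described in the text.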
The Sp1 mRNA with the Exon 3-2-3 Alignment Is Produced by trans-Splicing—Because genomic rearrangement can cause exon duplication in mRNA (13), we examined whether or not exon 3 of the Sp1 gene is duplicated in the genome of HepG2 cells. Genomic Southern blot analysis was performed with genomic DNA digested with various restriction enzymes, using a DNA fragment from exon 3 as a probe. As shown in Fig. 6, a single band was detected in each lane (lanes 4-7). In addition, the signal intensities of these bands were almost the same as that of the control band for a single copy (lane 3). This estimation was further validated by a parallel Southern analysis for an established single-copy gene, the p53 gene, applied to the same DNA digests (data not shown). These results suggested that exon 3 exists as a single copy in the genome. Thus, not genomic duplication but an RNA editing mechanism, i.e., formation of circular RNA or trans-splicing, appeared to give rise to the Sp1 mRNA with the exon 3-2-3 alignment. Because circular RNAs lack poly(A) tails per se, we next performed RNase protection assays with poly(A)+-rich RNA and poly(A)−-rich RNA (Fig. 7). The distribution of the Sp1 mRNA with the exon 3-2-3 alignment between these two fractions was similar to that of the cis-spliced Sp1 mRNA. In addition, the β-actin mRNA, which was used as a marker of fractionation, was distributed similarly. Therefore, we concluded that the Sp1 mRNA with the exon 3-2-3 alignment was produced by trans-splicing between two Sp1 pre-mRNAs.
We also investigated the structure upstream of the 3-2-3 alignment of the trans-spliced Sp1 mRNA by RT-PCR. To examine whether exons 1 and 2 are located upstream of the 3-2-3 alignment, first PCRs were carried out with primers X2 and 2R-1 or primers T3 and 2R-1 (Fig. 8A). Then the nested PCRs were done using the primers R15, R16, or R17, and R14, which were designed to anneal specifically to exon 1, exon 2, or exon 3, and the exon 3-2 junction sequence, respectively (Fig. 8A). When the first PCR product with the T3 and 2R-1 primers was used as a template, amplified products were observed in the nested PCRs (Fig. 8B, lanes 2). DNA sequencing of these products established their expected structure. In contrast, no product was observed in the nested PCR when the first PCR product with the X2 and 2R-1 primers was used as a template (lanes 1). The specificity of the primers used was verified in the negative control PCRs using an EcoRI-XbaI fragment (Fig. 1C) of a genomic DNA clone (lanes G) or the cDNA clone containing the exon 1-2-3-4 alignment (lanes C) as a template, and in the positive control PCR using the recombinant clone having the exon 1-2-3-2-3 alignment as a template (lanes R). Taken together, the trans-spliced mRNA appeared to have exon 2 but not exon 1 in its 5′ region.
FIG. 5. RNase protection assay for the transcripts of Sp1. A, the riboprobes for RNase protection assays. The target sequences of the riboprobes are shown beneath the structure of the 5′ RACE products. Both riboprobes, which were synthesized in vitro from pBluescript derivatives, contained a 40-bp vector sequence as well. B, the riboprobes were incubated with 10 μg (lanes 2 and 7), 50 μg (lanes 3 and 8), or 100 μg (lanes 4 and 9) of total RNA from HepG2 cells or 100 μg (lanes 5 and 10) of yeast RNA. The undigested probes were also loaded (lanes 1 and 6). The positions of the products are indicated by arrowheads, and the sizes of the products are also shown. Nonspecific bands are marked by asterisks, and a thin arrow shows an unidentified fragment.

FIG. 6 (caption fragment). Genomic DNA digests (lanes 4-7) and a linearized plasmid containing exon 3 equivalent to ten, three, or one copy per haploid human genome (lanes 1-3) were used for hybridization. DNA size markers are indicated on the left.

FIG. 7 (caption fragment). The undigested riboprobe was also loaded (lane 1). The result for the same samples using a β-actin probe is shown below. Three protected fragments are indicated by arrowheads.
DISCUSSION
Here, we cloned human Sp1 cDNAs that represent two forms of mRNA with different structures in the 5′-terminal region. The results of RT-PCR and RNase protection assays confirmed that the two Sp1 mRNAs are indeed present and accumulated in HepG2 cells. One of them is generated through the well studied cis-splicing process, and the other, with the exon 2-3-2-3 alignment, is generated by trans-splicing. Consistently, we detected heterogeneous RNA species in Northern analysis of RNA from HepG2 cells using an Sp1 cDNA fragment (positions 2765-3295) as a probe; the main band was approximately 8.2 kb, whereas the other two were minor but still distinct, representing smaller mRNAs (data not shown). The main band probably corresponds to the main bands previously reported (4,9,14) for human Sp1. The other distinct bands we observed correspond to smaller RNAs whose occurrence may depend on cell lines and/or tissues, because the smaller species also seem to be detected in MKN-28 cells (4) but not in HeLa cells (9). Although we determined the structure of the 5′-terminal region of human Sp1 mRNA in this study, the sequence of the 3′ noncoding region remains undetermined. We therefore defer the identification of those multiple bands in the Northern analysis until the whole structure of the Sp1 mRNA has been established.
trans-Splicing is an RNA editing mechanism that produces mature mRNA from separate pre-mRNAs. In trypanosomes, nematodes, and some other lower organisms, a spliced leader RNA, which is similar to the spliceosomal U small nuclear RNAs, is ligated to the 5′ ends of diverse nuclear mRNAs (15-17). Another type of trans-splicing has been discovered in plant mitochondria and chloroplasts; in this trans-splicing, formation of group II intron-like structures by base pairing between complementary segments of introns in separate pre-mRNAs seems to be essential (17-19). In mammalian cells, trans-splicing was first demonstrated in vivo and in vitro using artificial RNA substrates (20,21), and using spliced leader RNA and actin-1 pre-mRNA from Caenorhabditis elegans (22). Subsequently, a few examples of trans-splicing as a natural event have also been found in mammalian cells (23-25). These trans-splicings occur between different pre-mRNAs. On the other hand, very recent studies unveiled trans-splicing between identical pre-mRNAs, namely homotypic trans-splicing, in mammalian cells in the expression of the rat carnitine octanoyltransferase gene (26), the rat SA gene (27), and the rat voltage-gated sodium channel gene (28). Our present finding with the human Sp1 gene adds another distinct example of homotypic trans-splicing in mammalian cells, suggesting that this type of trans-splicing might be a rather general mechanism for the regulation of phenotype expression in mammalian cells.
Based on our findings in this study, we propose a model of the homotypic trans-splicing (Fig. 9). In this model, we present the trans-spliced Sp1 mRNA as lacking exon 1. The reason that we did not detect a trans-spliced Sp1 mRNA having exon 1 is obscure. However, the presence of alternative transcription start sites in intron 1 is one candidate explanation, because we observed multiple products in the primer extension analysis with the exon 2-specific primer (data not shown). It has been proposed that the trans-splicing process in mammals proceeds in spliceosome complexes and through partial base pairing between two precursor mRNAs (29). By surveying the complementarity between a segment upstream of exon 2 and one downstream of exon 3, we found two sets of complementary sequences (Fig. 9). We also found an exonic splicing enhancer (ESE)-like sequence (GAGGAGGAGGG, positions 1680-1690) in exon 2 of the Sp1 gene (Fig. 9). The ESEs, which are known to be involved in weak splice site selection in alternative splicing in cooperation with serine/arginine-rich splicing factors (SR proteins), are usually purine-rich sequences in the exons downstream of a regulated 3′ splice site (30-32). Recently, it has also been demonstrated in an in vitro assay system that ESEs and SR proteins are important for trans-splicing (33,34). Furthermore, two putative ESE elements were also reported in the carnitine octanoyltransferase gene (26). Thus, the above-mentioned complementary sequences and the ESE-like sequence are possibly involved in Sp1 mRNA maturation as well.
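ESEs are typically purine-rich, so a naive scan for purine-rich windows, applied to the ESE-like element noted above (GAGGAGGAGGG), illustrates the idea. The 11-nt window length and the purine-fraction cutoff are arbitrary illustrative choices, not values from the splicing literature:

```python
# Naive purine-richness scan for ESE-like elements. The 11-nt window and
# the 0.9 purine-fraction cutoff are illustrative parameters only.
PURINES = set("AG")

def purine_fraction(seq: str) -> float:
    return sum(base in PURINES for base in seq) / len(seq)

def purine_rich_windows(seq: str, window: int = 11, cutoff: float = 0.9):
    """Yield (start, subsequence) for windows whose purine fraction >= cutoff."""
    for i in range(len(seq) - window + 1):
        sub = seq[i:i + window]
        if purine_fraction(sub) >= cutoff:
            yield i, sub

# The ESE-like sequence reported in exon 2 of the Sp1 gene:
ese_like = "GAGGAGGAGGG"
print(purine_fraction(ese_like))  # 1.0 -- every base is a purine
print(list(purine_rich_windows(ese_like)))
```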
FIG. 8. Detection of the Sp1 mRNA with the exon 2-3-2 alignment by RT-PCR. A, positions and directions of primers used in this analysis are indicated by arrows on an assumed trans-spliced Sp1 mRNA structure. B, the nested PCR products were analyzed on a 1.0% agarose gel. The primer sets used for the nested amplification are shown on the top. The templates for the nested amplification were as follows: lanes 1, the first PCR product with X2 and 2R-1; lanes 2, the first PCR product with T3 and 2R-1; lanes G, 20 ng of the human Sp1 genomic clone; lanes C, 20 ng of the cDNA clone with the exon 1-2-3-4 alignment; lanes R, a recombinant clone with the exon 1-2-3-2-3 alignment.

The biological significance of trans-splicing for Sp1 remains elusive. Because exon 3 in the Sp1 mRNA mainly encodes the transcriptional activation domains A and B (11), the product of the Sp1 mRNA with duplicated exon 3, if translated, would have doubled transcriptional activation domains. Although we suggested that the trans-spliced form of human Sp1 mRNA lacked the first exon, translation of this form remains possible because of the presence of the second methionine in exon 2 (Fig. 3), which might serve as a translational start site. Such a product may show a stronger ability for transactivation, because synergistic activation among activation domains, named superactivation, has been demonstrated: an added transactivation domain elevated the ability of a truncated Sp1 having domains for DNA binding and transactivation (35,36). In this case, the trans-splicing would result in a positive regulation. In other cases, this trans-splicing might contribute to a negative regulation by producing nonfunctional mRNA and thereby reducing the amount of functional Sp1 mRNA. Although the result of the RNase protection assays suggests that the Sp1 mRNA with the exon 2-3-2-3 alignment has a poly(A) tail, this latter possibility also remains.
It has already been proposed that trans-splicing is a novel mechanism for regulating cellular events, because the trans-spliced rat SA mRNA is tissue-specific and trans-splicing of the pre-mRNA of the rat voltage-gated sodium channel is regulated by nerve growth factor (27,28).
Finally, our study also established the complete amino acid sequence of human Sp1. A partial DNA sequence of 313 bp that covers only the N-terminal coding region of Sp1 recently appeared in the database (accession number AJ272134), and this sequence perfectly matches our sequence. Although the newly identified amino acid segment is not large, this information is valuable because a region close to the N terminus of human Sp1 is critical for susceptibility to proteasome-dependent degradation (37). In addition, three different sizes of mRNAs of mouse Sp1 were observed during spermatogenesis, one of which encoded an Sp1 lacking the 7 N-terminal amino acid residues (38). Therefore, the complete structure of human Sp1 identified in this study will also be useful for further investigation of the regulation of Sp1 function.
|
2018-04-03T02:15:17.888Z
|
2000-12-01T00:00:00.000
|
{
"year": 2000,
"sha1": "437689a3adfc10d4605fed069ef7988777a6ae96",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/48/38067.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "98bbf9d7d96b78730a537cb722ab3d840ccc25e8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
23750877
|
pes2o/s2orc
|
v3-fos-license
|
On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task
Deep convolutional neural networks are powerful tools for learning visual representations from images. However, designing efficient deep architectures to analyse volumetric medical images remains challenging. This work investigates efficient and flexible elements of modern convolutional networks such as dilated convolution and residual connection. With these essential building blocks, we propose a high-resolution, compact convolutional network for volumetric image segmentation. To illustrate its efficiency of learning 3D representation from large-scale image data, the proposed network is validated with the challenging task of parcellating 155 neuroanatomical structures from brain MR images. Our experiments show that the proposed network architecture compares favourably with state-of-the-art volumetric segmentation networks while being an order of magnitude more compact. We consider the brain parcellation task as a pretext task for volumetric image segmentation; our trained network potentially provides a good starting point for transfer learning. Additionally, we show the feasibility of voxel-level uncertainty estimation using a sampling approximation through dropout.
Introduction
Convolutional neural networks (CNNs) have been shown to be powerful tools for learning visual representations from images. They often consist of multiple layers of non-linear functions with a large number of trainable parameters. Hierarchical features can be obtained by training the CNNs discriminatively.
In the medical image computing domain, recent years have seen a growing number of applications using CNNs. Although there have been recent advances in tailoring CNNs to analyse volumetric images, most of the work to date studies image representations in 2D. While volumetric representations are more informative, the number of voxels scales cubically with the size of the region of interest. This raises the challenge of learning more complex visual patterns and imposes a higher computational burden than in the 2D case. While developing compact and effective 3D network architectures is of significant interest, designing 3D CNNs remains a challenging problem.
The goal of this paper is to design a high-resolution and compact network architecture for the segmentation of fine structures in volumetric images. For this purpose, we study the simple and flexible elements of modern convolutional networks, such as dilated convolution and residual connection. Most of the existing network architectures follow a fully convolutional downsample-upsample pathway [11,4,15,3,16,13]. Low-level features with high spatial resolutions are first downsampled for higher-level feature abstraction; then the feature maps are upsampled to achieve high-resolution segmentation. In contrast to these, we propose a novel 3D architecture that incorporates high spatial resolution feature maps throughout the layers, and can be trained with a wide range of receptive fields. We validate our network with the challenging task of automated brain parcellation into 155 structures from T1-weighted MR images. We show that the proposed network, with twenty times fewer parameters, achieves competitive segmentation performance compared with state-of-the-art architectures.
A well-designed network could be trained with a large-scale dataset and enables transfer learning to other image recognition tasks [9]. In the field of computer vision, the well-known AlexNet and VGG net were trained on the ImageNet dataset. They provide general-purpose image representations that can be adapted for a wide range of computer vision problems. Given the large amount of data and the complex visual patterns of the brain parcellation problem, we consider it as a pretext task. Our trained network is the first step towards a general-purpose volumetric image representation. It potentially provides an initial model for transfer learning of other volumetric image segmentation tasks.
The uncertainty of the segmentation is also important for indicating the confidence and reliability of an algorithm [5,18,19]. High uncertainty in labelling can be a sign of an unreliable classification. In this work, we demonstrate the feasibility of voxel-level uncertainty estimation using Monte Carlo samples of the proposed network with dropout at test time. Compared to the existing volumetric segmentation networks, our compact network has fewer parameter interactions, and thus potentially achieves better uncertainty estimates with fewer samples.
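Test-time Monte Carlo dropout can be sketched with a toy predictor: repeatedly applying random dropout to a fixed set of feature activations and summarizing the mean and spread of the resulting outputs. The model, features, and weights below are purely illustrative, not the paper's network:

```python
import random
import statistics

# Toy Monte Carlo dropout: a "prediction" is a weighted sum of features,
# and test-time dropout randomly zeroes features (scaling survivors by
# 1/keep_prob, i.e. inverted dropout). All numbers are illustrative.
def predict_with_dropout(features, weights, keep_prob, rng):
    total = 0.0
    for f, w in zip(features, weights):
        if rng.random() < keep_prob:
            total += f * w / keep_prob
    return total

rng = random.Random(0)
features = [0.5, 1.2, -0.3, 0.8]
weights = [1.0, 0.5, 2.0, -1.0]
samples = [predict_with_dropout(features, weights, 0.5, rng) for _ in range(200)]

mean = statistics.fmean(samples)      # point estimate
stdev = statistics.pstdev(samples)    # spread = uncertainty proxy per voxel
print(round(mean, 3), round(stdev, 3))
```

In the segmentation setting, the same idea is applied per voxel: the sample mean gives the prediction and the sample spread flags voxels where the labelling is unreliable.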
On the elements of 3D convolutional networks
Convolutions and dilated convolutions. To maintain a relatively low number of parameters, we choose to use small 3D convolutional kernels with only 3³ parameters for all convolutions. This is about the smallest kernel that can represent 3D features in all directions with respect to the central voxel. Although a convolutional kernel with 5 × 5 × 5 voxels gives the same receptive field as stacking two layers of 3 × 3 × 3-voxel convolutions, the latter has approximately 57% fewer parameters. Using smaller kernels implicitly imposes more regularisation on the network while achieving the same receptive field.
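The parameter comparison above is simple arithmetic (single input/output channel, biases ignored): one 5×5×5 kernel has 125 weights, while two stacked 3×3×3 kernels have 54 weights for the same 5×5×5 receptive field.

```python
# Parameter count: one 5x5x5 kernel vs. two stacked 3x3x3 kernels
# (single input/output channel, no bias terms).
params_5 = 5 ** 3
params_two_3 = 2 * 3 ** 3
print(params_5, params_two_3)  # 125 54

# Receptive field of two stacked 3x3x3 convs: 1 + 2 + 2 = 5 per dimension.
rf = 1 + 2 + 2
print(rf)  # 5

reduction = 1 - params_two_3 / params_5
print(f"{reduction:.0%}")  # 57%
```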
To further enlarge the receptive field to capture large image contexts, most of the existing volumetric segmentation networks downsample the intermediate feature maps. This significantly reduces the spatial resolution. For example, 3D U-net [3] heavily employs 2×2×2-voxel max pooling with strides of two voxels in each dimension. Each max pooling reduces the feature responses of the previous layer to only 1/8 of its spatial resolution. Upsampling layers, such as deconvolutions, are often used subsequently to partially recover the high resolution of the input. However, adding deconvolution layers also introduces additional computational costs.
Recently, Chen et al. [2] used dilated convolutions with upsampled kernels for semantic image segmentation. The advantages of dilated convolutions are that the features can be computed with a high spatial resolution, and the size of the receptive field can be enlarged arbitrarily. Dilated convolutions can be used to produce accurate dense predictions and detailed segmentation maps along object boundaries.
In contrast to the downsample-upsample pathway, we propose to adopt dilated convolutions for volumetric image segmentation. More specifically, the convolutional kernels are upsampled with a dilation factor r. For M channels of input feature maps I, the output feature channel O generated with dilated convolutions is:

O_{x,y,z} = Σ_{m=1}^{M} Σ_{i,j,k ∈ {−1,0,1}} w^{(m)}_{i,j,k} I^{(m)}_{x+ir, y+jr, z+kr}   (1)

where the index tuple (x, y, z) runs through every spatial location in the volumes and the kernels w consist of 3³ × M trainable parameters. The dilated convolution in Eq. (1) has the same number of trainable parameters as the standard 3 × 3 × 3 convolution. It preserves the spatial resolution and provides a (2r + 1)³-voxel receptive field. Setting r to 1 reduces the dilated convolution to the standard 3 × 3 × 3 convolution. In practice, we implement 3D dilated convolutions with a split-and-merge strategy [2] to benefit from the existing GPU convolution routines.

Residual connections. Residual connections were first introduced and later refined by He et al. [7,8] for the effective training of deep networks.
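As an illustration (not the paper's implementation), a naive single-channel numpy version of the Eq. (1) computation; the function name and looping strategy are mine:

```python
import numpy as np

def dilated_conv3d(volume, kernel, r=1):
    """Naive single-channel 3D dilated convolution.

    `kernel` is 3x3x3; output voxel (x, y, z) sums kernel-weighted input
    voxels at offsets i*r, j*r, k*r for i, j, k in {-1, 0, 1}. Border
    voxels whose support window leaves the volume are left at zero.
    """
    d = volume.shape[0]
    out = np.zeros_like(volume, dtype=float)
    for x in range(r, d - r):
        for y in range(r, d - r):
            for z in range(r, d - r):
                acc = 0.0
                for i in (-1, 0, 1):
                    for j in (-1, 0, 1):
                        for k in (-1, 0, 1):
                            acc += kernel[i + 1, j + 1, k + 1] * \
                                   volume[x + i * r, y + j * r, z + k * r]
                out[x, y, z] = acc
    return out
```

With r = 1 this reduces to a standard 3 × 3 × 3 convolution; increasing r widens the receptive field to (2r + 1)³ voxels without adding parameters.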
The key idea of residual connection is to create identity mapping connections to bypass the parameterised layers in a network. The input of a residual block is directly merged to the output by addition. The residual connections have been shown to make information propagation smooth and improve the training speed [7].
More specifically, let the input to the p-th layer of a residual block be x_p; the output of the block, x_{p+1}, has the form:

x_{p+1} = x_p + F(x_p, w_p)

where F(x_p, w_p) denotes the path with non-linear functions in the block (shown in Fig. 1) and w_p are its parameters. If we stack the residual blocks, the last layer output x_l can be expressed as:

x_l = x_p + Σ_{i=p}^{l−1} F(x_i, w_i)

The residual connections enable direct information propagation from any residual block to any other, in both the forward pass and back-propagation.
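The additive form above is easy to check numerically; a toy scalar sketch (function names mine, arbitrary residual functions F):

```python
def stack_residual(x, fs):
    """Apply residual blocks x_{p+1} = x_p + F(x_p) in sequence.

    `fs` is a list of callables, one residual path F per block;
    returns the final output x_l.
    """
    for f in fs:
        x = x + f(x)
    return x
```

When every residual path outputs zero, the identity connections propagate the input unchanged to the last layer, which is the smooth-information-flow property noted above.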
Effective receptive field. One interpretation of residual networks is that they behave like ensembles of relatively shallow networks. The unravelled view of the residual connections proposed by Veit et al. [20] suggests that a network with n residual blocks has a collection of 2^n unique paths.

Without residual connections, the receptive field of a network is generally considered fixed. However, when training with n residual blocks, the networks utilise 2^n different paths and therefore features can be learned with a large range of different receptive fields. For example, the proposed network with 9 residual blocks (see Section 3) has a maximum receptive field of 87 × 87 × 87 voxels. Following the unravelled view of the residual network, it consists of 2^9 unique paths. Fig. 2 shows the distribution of the receptive fields of these paths, which range from 3 × 3 × 3 to 87 × 87 × 87 voxels, following a binomial distribution. This differs from the existing 3D networks. For example, the Deepmedic [11] model operates on two paths, with fixed receptive fields of 17 × 17 × 17 and 42 × 42 × 42 voxels, respectively. 3D U-net [3] has a relatively large receptive field of about 88 × 88 × 88 voxels. However, there are only eight unique paths and receptive fields.
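The path-count argument can be made concrete with a small enumeration. The per-block receptive-field growths below are my assumptions, inferred from the architecture described in Section 3 (an initial 3 × 3 × 3 convolution followed by three dilation-1, three dilation-2, and three dilation-4 residual blocks of two convolutions each); they reproduce the 3-to-87 range quoted above:

```python
from itertools import product

# Assumed growth in receptive field (voxels per side) contributed by each
# residual block when its convolutional path is taken; the initial
# (non-residual) convolution gives the base receptive field of 3.
BLOCK_GROWTH = [4, 4, 4, 8, 8, 8, 16, 16, 16]
BASE_RF = 3

def receptive_field_histogram(growth=BLOCK_GROWTH, base=BASE_RF):
    """Receptive field per unravelled path: each block is either taken
    (adding its growth) or skipped via the identity connection."""
    hist = {}
    for mask in product((0, 1), repeat=len(growth)):
        rf = base + sum(g for g, m in zip(growth, mask) if m)
        hist[rf] = hist.get(rf, 0) + 1
    return hist

hist = receptive_field_histogram()  # 2**9 = 512 paths, fields 3..87
```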
Intuitively, given that the receptive field of a deep convolutional network is relatively large, the segmentation maps will suffer from distortions due to the border effects of convolution. That is, the segmentation results near the border of the output volume are less accurate due to the lack of a supporting input window. We conduct experiments and demonstrate that the proposed networks generate only a small distortion near the borders (see Section 4). This suggests that training the network with residual connections reduces the effective receptive field. The width of the distorted border is much smaller than the maximum receptive field. This phenomenon was also recently analysed by Luo et al. [14]. In practice, at test time we pad each input volume with a border of zeros and discard the same amount of border in the segmentation output.
Loss function. The last layer of the network is a softmax function that gives scores over all labels for each voxel. Typically, the end-to-end training procedure minimises the cross-entropy loss over an N-voxel image volume {x_n}_{n=1}^{N} and the training data of a C-class segmentation map {y_n}_{n=1}^{N}, where y_n ∈ {1, ..., C}:

L(x) = −(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} δ(y_n = c) log F_c(x_n)   (2)

where δ corresponds to the Dirac delta function and F_c(x_n) is the softmax classification score of x_n over the c-th class. However, when the training data are severely unbalanced (which is typical in medical image segmentation problems), this formulation leads to a strongly biased estimation towards the majority class. Instead of directly re-weighting each voxel by class frequencies, Milletari et al. [16] propose a solution by maximising the mean Dice coefficient directly, i.e.

D(x) = (1/C) Σ_{c=1}^{C} [ 2 Σ_{n=1}^{N} δ(y_n = c) F_c(x_n) ] / [ Σ_{n=1}^{N} δ(y_n = c)² + Σ_{n=1}^{N} F_c(x_n)² ]   (3)

We employ this formulation to handle the issue of training data imbalance.
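A minimal numpy sketch of the mean soft Dice coefficient described above (function name and flat tensor layout are mine; real implementations operate on probability volumes):

```python
import numpy as np

def mean_soft_dice(probs, labels, eps=1e-8):
    """Mean soft Dice coefficient over classes.

    probs:  (N, C) softmax scores per voxel.
    labels: (N,) integer class labels in [0, C).
    Maximising this (or minimising 1 - dice) implicitly re-weights
    voxels, countering class imbalance.
    """
    n, c = probs.shape
    onehot = np.zeros((n, c))
    onehot[np.arange(n), labels] = 1.0
    num = 2.0 * (onehot * probs).sum(axis=0)
    den = (onehot ** 2).sum(axis=0) + (probs ** 2).sum(axis=0) + eps
    return float((num / den).mean())
```

Perfect one-hot predictions give a Dice of 1; any softer or wrong prediction scores lower, regardless of how rare each class is.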
Uncertainty estimation using dropout. Gal and Ghahramani demonstrated that a deep network trained with dropout can be cast as a Bayesian approximation of a Gaussian process [5]. Given a set of training data and their labels {X, Y}, training a network F(·, W) with dropout has the effect of approximating the posterior distribution p(W|{X, Y}) by minimising the Kullback-Leibler divergence term KL(q(W) || p(W|{X, Y})), where q(W) is an approximating distribution over the weight matrices W with their elements randomly set to zero according to Bernoulli random variables. After training the network, the predictive distribution of test data x̂ can be expressed as:

q(ŷ|x̂) = ∫ F(x̂, W) q(W) dW

The prediction can be approximated using Monte Carlo samples of the trained network:

E[ŷ] ≈ (1/M) Σ_{m=1}^{M} F(x̂, W_m)

where {W_m} is a set of M samples from q(W). The uncertainty of the prediction can be estimated using the sample variance of the M samples.
With this theoretical insight, we are able to estimate the uncertainty of the segmentation map at the voxel level. We extend the segmentation network with a 1 × 1 × 1 convolutional layer before the last convolutional layer. The extended network is trained with a dropout ratio of 0.5 applied to the newly inserted layer. At test time, we sample the network N times using dropout. The final segmentation is obtained by majority voting. The percentage of samples which disagrees with the voting results is calculated at each voxel as the uncertainty estimates.
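The voting-and-disagreement scheme just described can be sketched as follows (a toy numpy version of mine; the stochastic dropout forward passes are abstracted as precomputed label samples):

```python
import numpy as np

def vote_and_uncertainty(samples):
    """Majority vote and disagreement over Monte Carlo dropout samples.

    samples: (M, N) integer label maps from M stochastic forward
    passes over N voxels. Returns, per voxel, the majority label and
    the fraction of samples disagreeing with it (the uncertainty).
    """
    m, n = samples.shape
    votes = np.array([np.bincount(samples[:, v]).argmax() for v in range(n)])
    disagreement = (samples != votes[None, :]).mean(axis=0)
    return votes, disagreement
```

Voxels where every sample agrees get uncertainty 0; contested voxels, typically near structure boundaries, get higher values.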
3 The network architecture and its implementation
The proposed architecture
Our network consists of 20 layers of convolutions. In the first seven convolutional layers, we adopt 3 × 3 × 3-voxel convolutions. These layers are designed to capture low-level image features such as edges and corners. In the subsequent convolutional layers, the kernels are dilated by a factor of two or four. These deeper layers with dilated kernels encode mid- and high-level image features. Residual connections are employed to group every two convolutional layers. Within each residual block, each convolutional layer is associated with an element-wise rectified linear unit (ReLU) layer and a batch normalisation layer [10]. The ReLU, batch normalisation, and convolutional layers are arranged in the pre-activation order [8].
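The 87-voxel maximum receptive field quoted earlier follows from this layer plan. The exact schedule below is my assumption, consistent with the description (seven dilation-1 convolutions, six dilated by two, six dilated by four, and a final 1 × 1 × 1 classification layer):

```python
# Assumed 20-layer plan: (kernel side, dilation) per convolutional layer.
LAYERS = ([(3, 1)] * 7      # low-level features
          + [(3, 2)] * 6    # mid-level features, dilation 2
          + [(3, 4)] * 6    # high-level features, dilation 4
          + [(1, 1)])       # final 1x1x1 classification layer

def receptive_field(layers):
    """Receptive field (voxels per side) of stacked dilated convolutions,
    assuming stride 1 throughout: each layer adds (k - 1) * dilation."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf
```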
The network can be trained end-to-end. In the training stage, the inputs to our network are 96 × 96 × 96-voxel images. The final softmax layer gives classification scores over the class labels for each of the 96 × 96 × 96 voxels. The architecture is illustrated in Fig. 3.
Implementation details
In the training stage, the pre-processing step involved input data standardisation and augmentation at both image and subvolume level. At image level, we adopted the histogram-based scale standardisation method [17] to normalise the intensity histograms. As data augmentation at image level, randomisation was introduced in the normalisation process by randomly choosing a threshold of foreground between the volume minimum and mean intensity (at test time, the mean intensity of the test volume was used as the threshold). Each image was further normalised to have zero mean and unit standard deviation. Augmentations on the randomly sampled 96 × 96 × 96-voxel subvolumes were employed on the fly. These included rotation with a random angle in the range of [−10°, 10°] for each of the three orthogonal planes, and spatial rescaling with a random scaling factor in the range of [0.9, 1.1].
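A minimal numpy sketch of the standardisation step, under the assumption (one plausible reading of the above) that the zero-mean/unit-std statistics are computed over the thresholded foreground; the function name and details are mine:

```python
import numpy as np

def standardise(volume, rng=None):
    """Zero-mean/unit-std standardisation with a randomised foreground
    threshold (between the volume minimum and mean) as a train-time
    augmentation; pass rng=None (test time) to use the mean intensity."""
    lo, mean = float(volume.min()), float(volume.mean())
    thr = mean if rng is None else rng.uniform(lo, mean)
    fg = volume > thr
    mu, sd = volume[fg].mean(), volume[fg].std() + 1e-8
    return (volume - mu) / sd
```

At training time one would pass `rng=np.random.default_rng()` so each epoch sees a slightly different normalisation of the same image.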
All the parameters in the convolutional layers were initialised according to He et al. [6]. The scaling and shifting parameters in the batch normalisation layers were initialised to 1 and 0 respectively. The networks were trained with two Nvidia K80 GPUs. At each training iteration, each GPU processed one input volume; the average gradients computed over these two training volumes were used as the gradient update. To make a fair comparison, we employed the Adam optimisation method [12] for all the methods with fixed hyper-parameters. The learning rate lr was set to 0.01, the step size hyper-parameter β1 was 0.9 and β2 was 0.999 in all cases, except V-net, for which we chose the largest lr with which the training algorithm converges (lr = 0.0001). The models were trained until we observed a plateau in performance on the validation set. We do not employ an additional spatial smoothing function (such as a conditional random field) as a post-processing step. Instead of aiming for better segmentation results by adding post-processing steps, we focused on the dense segmentation maps generated by the networks. As we consider brain parcellation as a pretext task, networks without explicit spatial smoothing are potentially more reusable. We implemented all the methods (including re-implementations of the Deepmedic [11], V-net [16], and 3D U-net [3] architectures) with TensorFlow.
Experiments and results
Data. To demonstrate the feasibility of learning complex 3D image representations from large-scale data, the proposed network learns a highly granular segmentation of 543 T1-weighted MR images of healthy controls from the ADNI dataset. The average number of voxels of each volume is about 182 × 244 × 246. The average voxel size is approximately 1.18 mm × 1.05 mm × 1.05 mm. All volumes are bias-corrected and reoriented to a standard Right-Anterior-Superior orientation. The bronze standard parcellation of 155 brain structures and 5 non-brain outer tissues is obtained using the GIF framework [1]. Fig. 5(left) shows the label distribution of the dataset. We randomly choose 443, 50, and 50 volumes for training, test, and validation respectively.
Overall evaluation. In this section, we compare the proposed high-resolution compact network architecture (illustrated in Fig. 3; denoted as HC-default) with three variants: (1) the HC-default configuration without the residual connections, trained with the cross-entropy loss function (NoRes-entropy); (2) the HC-default configuration without residual connections, trained with the Dice loss function (NoRes-dice); and (3) the HC-default configuration trained with an additional dropout layer, making predictions by majority voting over 10 Monte Carlo samples (HC-dropout). For the dropout variant, the dropout layer employed before the last convolutional layer consists of 80 kernels. Additionally, three state-of-the-art volumetric segmentation networks are evaluated: 3D U-net [3], V-net [16], and Deepmedic [11]. The last layer of each network architecture is replaced with a 160-way softmax classifier.
We observe that training these networks with the cross entropy loss function (Eq. 2) leads to poor segmentation results. Since the cross-entropy loss function treats all training voxels equally, the network may have difficulties in learning representations related to the minority classes. Training with the Dice loss function alleviates this issue by implicitly re-weighting the voxels. Thus we train all networks using the Dice loss function for a fair comparison.
We use the mean Dice Coefficient Similarity (DCS) as the performance measure. Table 1 and Fig. 5(right) compare the performance on the test set. In terms of our network variants, the results show that the use of the Dice loss function largely improves the segmentation performance. This suggests that the Dice loss function can handle the severely unbalanced segmentation problem well. The results also suggest that introducing the residual connections improved the segmentation performance measured in mean DCS. This indicates that the residual connections are important elements of the proposed network. Adopting the dropout method further improves the mean DCS by about 2%.
With a relatively small number of parameters, our HC-default and HC-dropout outperform the competing methods in terms of mean DCS. This suggests that our network is more effective for the brain parcellation problem. Note that V-net has a similar architecture to 3D U-net and has more parameters, but does not employ the batch normalisation technique. The lower DCS produced by V-net suggests that batch normalisation is important for training the networks for brain parcellation.
In Fig. 6, we show that the dropout variant achieves better segmentation results for all the key structures. Fig. 4 presents an example of the segmentation results of the proposed network and 3D U-net-Dice.
Receptive field and border effects. We further compare the segmentation performance of a trained network when discarding the borders in each dimension of the segmentation map. That is, given a d × d × d-voxel input, at border size 1 we only preserve the (d − 2)³-voxel output volume centred within the predicted map.

The effect of the number of samples in uncertainty estimation. This section investigates the relationship between the number of Monte Carlo samples and the segmentation performance of the proposed network. Fig. 8(a) suggests that using 10 samples is enough to achieve good segmentation. Further increasing the number of samples has relatively small effects on the DCS. Fig. 8(b) plots the voxel-wise segmentation accuracy computed using only the voxels with an uncertainty less than a threshold. The voxel-wise accuracy is high when the threshold is small. This indicates that the uncertainty estimation reflects the confidence of the network. Fig. 9 shows an uncertainty map generated by the proposed network. The uncertainties near the boundaries of different structures are relatively higher than in the other regions.
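The border-discarding evaluation described above is straightforward to quantify; a small helper of mine computing the volume fraction retained at border size b:

```python
def retained_fraction(d: int, b: int) -> float:
    """Fraction of a d^3-voxel output kept after discarding a border of
    b voxels on each side, i.e. the centred (d - 2b)^3 subvolume."""
    inner = d - 2 * b
    if inner <= 0:
        return 0.0
    return (inner / d) ** 3
```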
Currently, our method takes about 60 seconds to predict a typical volume with 192 × 256 × 256 voxels. To achieve better segmentation results and measure uncertainty, 10 Monte Carlo samples of our dropout model are required. The entire process takes slightly more than 10 minutes in total. However, during the Monte Carlo sampling at test time, only the dropout layer and the final prediction layer are randomised. To further reduce the computational time, a future implementation could reuse the features extracted from the layers before dropout, resulting in only a marginal increase in runtime when compared to a single prediction.
Conclusion
In this paper, we propose a high-resolution, 3D convolutional network architecture that incorporates large volumetric context using dilated convolutions and residual connections. Our network is conceptually simpler and more compact than the state-of-the-art volumetric segmentation networks. We validate the proposed network using the challenging task of brain parcellation in MR images. We show that the segmentation performance of our network compares favourably with the competing methods. Additionally, we demonstrate that Monte Carlo sampling of the dropout technique can be used to generate voxel-level uncertainty estimation for our brain parcellation network. Moreover, we consider the brain parcellation task as a pretext task for volumetric image segmentation. Our trained network potentially provides a good starting point for transfer learning of other segmentation tasks.
In the future, we will extensively test the generalisation ability of the network to brain MR scans obtained with various scanning protocols from different data centres. Furthermore, we note that the uncertainty estimations are not probabilities. We will investigate the calibration of the uncertainty scores to provide reliable probability estimations.
Life as we know it
This paper presents a heuristic proof (and simulations of a primordial soup) suggesting that life—or biological self-organization—is an inevitable and emergent property of any (ergodic) random dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if the coupling among an ensemble of dynamical systems is mediated by short-range forces, then the states of remote systems must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states in a statistical sense. The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket. Crucially, this is the same quantity that is optimized in Bayesian inference. Therefore, the internal states (and their blanket) will appear to engage in active Bayesian inference. In other words, they will appear to model—and act on—their world to preserve their functional and structural integrity, leading to homoeostasis and a simple form of autopoiesis.
Introduction
How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?
Erwin Schrödinger [1, p. 2]

The emergence of life-or biological self-organization-is an intriguing issue that has been addressed in many guises in the biological and physical sciences [1-5]. This paper suggests that biological self-organization is not as remarkable as one might think-and is (almost) inevitable, given local interactions between the states of coupled dynamical systems. In brief, the events that 'take place within the spatial boundary of a living organism' [1] may arise from the very existence of a boundary or blanket, which itself is inevitable in a physically lawful world. The treatment offered in this paper is rather abstract and restricts itself to some basic observations about how coupled dynamical systems organize themselves over time. We will only consider behaviour over the timescale of the dynamics themselves-and try to interpret this behaviour in relation to the sorts of processes that unfold over seconds to hours, e.g. cellular processes. Clearly, a full account of the emergence of life would have to address multiple (evolutionary, developmental and functional) timescales and the emergence of DNA, ribosomes and the complex cellular networks common to most forms of life. This paper focuses on a simple but fundamental aspect of self-organization-using abstract representations of dynamical processes-that may provide a metaphor for behaviour with different timescales and biological substrates.
Most treatments of self-organization in theoretical biology have addressed the peculiar resistance of biological systems to the dispersive effects of fluctuations in their environment by appealing to statistical thermodynamics and information theory [1,3,5-10]. Recent formulations try to explain adaptive behaviour in terms of minimizing an upper (free energy) bound on the surprise (negative log-likelihood) of sensory samples [11,12]. This minimization usefully connects the imperative for biological systems to maintain their sensory states within physiological bounds, with an intuitive understanding of adaptive behaviour in terms of active inference about the causes of those states [13]. Under ergodic assumptions, the long-term average of surprise is entropy. This means that minimizing free energy-through selectively sampling sensory input-places an upper bound on the entropy or dispersion of sensory states. This enables biological systems to resist the second law of thermodynamics-or, more exactly, the fluctuation theorem that applies to open systems far from equilibrium [14,15]. However, because negative surprise is also Bayesian model evidence, systems that minimize free energy also maximize a lower bound on the evidence for an implicit model of how their sensory samples were generated. In statistics and machine learning, this is known as approximate Bayesian inference and provides a normative theory for the Bayesian brain hypothesis [16-20]. In short, biological systems act on the world to place an upper bound on the dispersion of their sensed states, while using those sensations to infer external states of the world. This inference makes the free energy bound a better approximation to the surprise that action is trying to minimize [21].
The resulting active inference is closely related to formulations in embodied cognition and artificial intelligence; for example, the use of predictive information [22][23][24] and earlier homeokinetic formulations [25].
The ensuing (variational) free energy principle has been applied widely in neurobiology and has been generalized to other biological systems at a more theoretical level [11]. The motivation for minimizing free energy has hitherto used the following sort of argument: systems that do not minimize free energy cannot exist, because the entropy of their sensory states would not be bounded and would increase indefinitely-by the fluctuation theorem [15]. Therefore, biological systems must minimize free energy. This paper resolves the somewhat tautological aspect of this argument by turning it around to suggest: any system that exists will appear to minimize free energy and therefore engage in active inference. Furthermore, this apparently inferential or mindful behaviour is (almost) inevitable. This may sound like a rather definitive assertion but is surprisingly easy to verify. In what follows, we will consider a heuristic proof based on random dynamical systems and then see that biological self-organization emerges naturally, using a synthetic primordial soup. This proof of principle rests on four attributes of-or tests for-self-organization that may themselves have interesting implications.
Heuristic proof
We start with the following lemma: any ergodic random dynamical system that possesses a Markov blanket will appear to actively maintain its structural and dynamical integrity. We will associate this behaviour with the self-organization of living organisms. There are two key concepts here-ergodicity and a Markov blanket. Here, ergodicity means that the time average of any measurable function of the system converges (almost surely) over a sufficient amount of time [26,27]. This means that one can interpret the average amount of time a state is occupied as the probability of the system being in that state when observed at random. We will refer to this probability measure as the ergodic density.
A Markov blanket is a set of states that separates two other sets in a statistical sense. The term Markov blanket was introduced in the context of Bayesian networks or graphs [28] and refers to the children of a set (the set of states that are influenced), its parents (the set of states that influence it) and the parents of its children. The notion of influence or dependency is central to a Markov blanket and its existence implies that any state is-or is not-coupled to another. For example, the system could comprise an ensemble of subsystems, each occupying its own position in a Euclidean space. If the coupling among subsystems is mediated by short-range forces, then distant subsystems cannot influence each other. The existence of a Markov blanket implies that its states (e.g. motion in Euclidean space) do not affect their coupling or independence. In other words, the interdependencies among states comprising the Markov blanket change slowly with respect to the states per se. For example, the surface of a cell may constitute a Markov blanket separating intracellular and extracellular states. On the other hand, a candle flame cannot possess a Markov blanket, because any pattern of molecular interactions is destroyed almost instantaneously by the flux of gas molecules from its surface.
The existence of a Markov blanket induces a partition of states into internal states and external states that are hidden (insulated) from the internal (insular) states by the Markov blanket. In other words, the external states can only be seen vicariously by the internal states, through the Markov blanket. Furthermore, the Markov blanket can itself be partitioned into two sets that are, and are not, children of external states. We will refer to these as surface or sensory states and active states, respectively. Put simply, the existence of a Markov blanket S × A implies a partition of states into external, sensory, active and internal states: external states cause sensory states that influence-but are not influenced by-internal states, while internal states cause active states that influence-but are not influenced by-external states (table 1). Crucially, the dependencies induced by Markov blankets create a circular causality that is reminiscent of the action-perception cycle (figure 1). The circular causality here means that external states cause changes in internal states, via sensory states, while the internal states couple back to the external states through active states-such that internal and external states cause each other in a reciprocal fashion. This circular causality may be a fundamental and ubiquitous causal architecture for self-organization.

Table 1. Definitions of the tuple (Ω, Ψ, S, A, Λ, p, q) underlying active inference.
- sample space Ω: a non-empty set from which random fluctuations or outcomes ω ∈ Ω are drawn
- external states Ψ: Ψ × A × Ω → R: states of the world that cause sensory states and depend on action
- sensory states S: Ψ × A × Ω → R: the agent's sensations, constituting a probabilistic mapping from action and external states
- action states A: S × Λ × Ω → R: an agent's action, which depends on its sensory and internal states
- internal states Λ: Λ × S × Ω → R: the states of the agent that cause action and depend on sensory states
- ergodic density p(ψ, s, a, λ|m): a probability density function over external ψ ∈ Ψ, sensory s ∈ S, active a ∈ A and internal states λ ∈ Λ for a system denoted by m
- variational density q(ψ|λ): an arbitrary probability density function over external states that is parametrized by internal states

rsif.royalsocietypublishing.org J R Soc Interface 10: 20130475

Equipped with this partition, we can now consider the behaviour of any random dynamical system m described by some stochastic differential equations:

ẋ(t) = f(x) + ω = ( f_ψ(ψ, s, a), f_s(ψ, s, a), f_a(s, a, λ), f_λ(s, a, λ) ) + ω   (2.1)

Here, f(x) is the flow of system states, which is subject to random fluctuations denoted by ω. The second equality formalizes the dependencies implied by the Markov blanket. Because the system is ergodic it will, after a sufficient amount of time, converge to an invariant set of states called a pullback or random global attractor. The attractor is random because it is itself a random set [29,30]. The associated ergodic density p(x|m) is the solution to the Fokker-Planck equation (a.k.a. the Kolmogorov forward equation) [31] describing the evolution of the probability density over states:

ṗ(x|m) = ∇·(Γ∇p) − ∇·(f p)   (2.2)

Here, the diffusion tensor Γ is half the covariance (amplitude) of the random fluctuations. Equation (2.2) shows that the ergodic density depends upon the flow, which can always be expressed in terms of curl- and divergence-free components.
This is the Helmholtz decomposition (a.k.a. the fundamental theorem of vector calculus) and can be formulated in terms of an antisymmetric matrix R(x) = −R(x)ᵀ and a scalar potential G(x) we will call Gibbs energy [32]:

f = −(Γ + R)·∇G   (2.3)

Using this standard form [33], it is straightforward to show that p(x|m) = exp(−G(x)) is the equilibrium solution to the Fokker-Planck equation [12]:

ṗ(x|m) = 0 ⇔ p(x|m) = exp(−G(x))   (2.4)

This means that we can express the flow in terms of the ergodic density:

f = (Γ + R)·∇ ln p(x|m)
f_λ(s, a, λ) = (Γ + R)·∇_λ ln p(ψ, s, a, λ|m)
f_a(s, a, λ) = (Γ + R)·∇_a ln p(ψ, s, a, λ|m)   (2.5)

Although we have just followed a sequence of standard results, there is something quite remarkable and curious about this flow: the flow of internal and active states is essentially a (circuitous) gradient ascent on the (log) ergodic density. The gradient ascent is circuitous because it contains divergence-free (solenoidal) components that circulate on the isocontours of the ergodic density-like walking up a winding mountain path. This ascent will make it look as if internal (and active) states are flowing towards regions of state space that are most frequently occupied, despite the fact that their flow is not a function of external states. In other words, their flow does not depend upon external states (see the left-hand side of equation (2.5)) and yet it ascends gradients that depend on the external states (see the right-hand side of equation (2.5)). In short, the internal and active states behave as if they know where they are in the space of external states-states that are hidden behind the Markov blanket.
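That p(x|m) = exp(−G(x)) is an equilibrium (stationary) solution can be verified by direct substitution; a short check, assuming for simplicity that Γ and R are constant:

```latex
\begin{aligned}
p = e^{-G} \;\Rightarrow\; \nabla p &= -p\,\nabla G,\\
\Gamma\nabla p - f p &= -\Gamma p\,\nabla G + (\Gamma + R)\,p\,\nabla G
  = p\,R\,\nabla G,\\
\nabla\cdot\bigl(p\,R\,\nabla G\bigr)
  &= -p\,(\nabla G)^{\!\top} R\,\nabla G
   + p\sum_{i,j} R_{ij}\,\partial_i\partial_j G = 0.
\end{aligned}
```

Both terms vanish by the antisymmetry of R: a quadratic form in an antisymmetric matrix is zero, and contracting R with the symmetric Hessian of G is also zero. Hence ṗ = ∇·(Γ∇p − f p) = 0.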
We can finesse this apparent paradox by noting that the flow is the expected motion through any point, averaged over time. By the ergodic theorem, this is also the flow averaged over the external states, which does not depend on the external state at any particular time. More formally, for any point w ∈ W = S × A × Λ in the space of the internal states and their Markov blanket, equations (2.1) and (2.5) tell us that the flow through this point is the average flow under the posterior density over the external states:

f_λ(w) = E_ψ[ (Γ + R)·∇_λ ln p(ψ, w|m) | w ] = (Γ + R)·∇_λ ln p(w|m)
f_a(w) = E_ψ[ (Γ + R)·∇_a ln p(ψ, w|m) | w ] = (Γ + R)·∇_a ln p(w|m)   (2.6)

Here, the time average at a point is taken with an indicator function that returns a value of one when the trajectory passes through the point in question and zero otherwise-and the first expectation is taken over time. In the last equality, we have used the fact that the integral of a derivative of a density is the derivative of its integral-and both are zero: ∫ p(ψ|w) ∇_λ ln p(ψ|w) dψ = ∇_λ ∫ p(ψ|w) dψ = 0.
Equation (2.6) is quite revealing-it shows that the flow of internal and active states performs a circuitous gradient ascent on the marginal ergodic density over internal states and their Markov blanket. Crucially, this marginal density depends on the posterior density over external states. This means that the internal states will appear to respond to sensory fluctuations based on posterior beliefs about underlying fluctuations in external states. We can formalize this notion by associating these beliefs with a probability density over external states q(ψ|λ) that is encoded (parametrized) by internal states.

Lemma 2.1 (Free energy). For any Gibbs energy G(ψ, s, a, λ) = −ln p(ψ, s, a, λ|m), there is a free energy F(s, a, λ) that describes the flow of internal and active states:

F(s, a, λ) = −∫ q(ψ|λ) ln [ p(ψ, s, a, λ|m) / q(ψ|λ) ] dψ = E_q[G(ψ, s, a, λ)] − H[q(ψ|λ)]   (2.7)

Here, free energy is a functional of an arbitrary (variational) density q(ψ|λ) that is parametrized by internal states. The last equality shows that free energy can be expressed as the expected Gibbs energy minus the entropy of the variational density.
Proof. Using Bayes' rule, we can rearrange the expression for free energy in terms of a Kullback-Leibler divergence [34]:

F(s, a, λ) = −ln p(s, a, λ|m) + D_KL[ q(ψ|λ) || p(ψ|s, a, λ) ]
⇒ f_λ(s, a, λ) = (Γ + R)·∇_λ ln p(s, a, λ|m) − (Γ + R)·∇_λ D_KL
and f_a(s, a, λ) = (Γ + R)·∇_a ln p(s, a, λ|m) − (Γ + R)·∇_a D_KL   (2.8)

However, equation (2.6) requires the gradients of the divergence to be zero, which means the divergence must be minimized with respect to internal states. This means that the variational and posterior densities must be equal: q(ψ|λ) = p(ψ|s, a, λ). In other words, the flow of internal and active states minimizes free energy, rendering the variational density equivalent to the posterior density over external states.
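For a discrete toy model (the distributions below are mine, purely for illustration), one can check numerically that the free energy equals the negative log evidence exactly when the variational density matches the posterior, and exceeds it otherwise:

```python
import math

def free_energy(joint, q):
    """F = E_q[-ln p(psi, w)] - H[q] = sum_psi q ln(q / p(psi, w)),
    for a discrete joint over psi at a fixed blanket state w."""
    return sum(qi * (math.log(qi) - math.log(pi))
               for qi, pi in zip(q, joint) if qi > 0)

# Toy joint over 3 external states at a fixed blanket state w.
joint = [0.1, 0.3, 0.2]                 # p(psi, w); evidence p(w) = 0.6
evidence = sum(joint)
posterior = [p / evidence for p in joint]

F_post = free_energy(joint, posterior)  # equals -ln p(w)
F_other = free_energy(joint, [1 / 3, 1 / 3, 1 / 3])  # strictly larger
```

This is the bound F = −ln p(w|m) + KL[q || posterior] in miniature: the KL term vanishes only at q = posterior.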
rsif.royalsocietypublishing.org J R Soc Interface 10: 20130475

Remarks 2.2. Put simply, this proof says that if one interprets internal states as parametrizing a variational density encoding Bayesian beliefs about external states, then the dynamics of internal and active states can be described as a gradient descent on a variational free energy function of internal states and their Markov blanket. Variational free energy was introduced by Feynman [35] to solve difficult integration problems in path integral formulations of quantum physics. This is also the free energy bound that is used extensively in approximate Bayesian inference (e.g. variational Bayes) [34,36,37]. The expression for free energy in equation (2.8) discloses its Bayesian interpretation: the first term is the negative log evidence or marginal likelihood of the internal states and their Markov blanket. The second term is a relative entropy or Kullback-Leibler divergence [38] between the variational density and the posterior density over external states. Because (by Gibbs inequality) this divergence cannot be less than zero, the internal flow will appear to have minimized the divergence between the variational and posterior density. In other words, the internal states will appear to have solved the problem of Bayesian inference by encoding posterior beliefs about hidden (external) states, under a generative model provided by the Gibbs energy. This is known as approximate Bayesian inference, with exact Bayesian inference when the forms of the variational and posterior densities are identical. In short, the internal states will appear to engage in some form of Bayesian inference: but what about action? Because the divergence in equation (2.8) can never be less than zero, free energy is an upper bound on the negative log evidence. Now, because the system is ergodic we have

F(s, a, λ) ≥ −ln p(s, a, λ|m)  ⇒  E_t[F(s, a, λ)] ≥ E_t[−ln p(s, a, λ|m)] = H[p(s, a, λ|m)].    (2.9)

This means that action will (on average) appear to minimize free energy and thereby place an upper bound on the entropy of the internal states and their Markov blanket. If we associate these states v = {s, a, λ} with biological systems, then action places an upper bound on their dispersion (entropy) and will appear to conserve their structural and dynamical integrity. Together with the Bayesian modelling perspective, this is exactly consistent with the good regulator theorem (every good regulator is a model of its environment) and related treatments of self-organization [2,5,12,39,40]. Furthermore, we have shown elsewhere [11,41] that free energy minimization is consistent with information-theoretic formulations of sensory processing and behaviour [23,42,43]. Equation (2.7) also shows that minimizing free energy entails maximizing the entropy of the variational density (the final term in the last equality), in accord with the maximum entropy principle [44]. Finally, because we have cast this treatment in terms of random dynamical systems, there is an easy connection to dynamical formulations that predominate in the neurosciences [40,45-47].
The above arguments can be summarized with the following attributes of biological self-organization:

- biological systems are ergodic [26]: in the sense that the average of any measure of their states converges over a sufficient period of time. This includes the occupancy of state space and guarantees the existence of an invariant ergodic density over functional and structural states;

- they are equipped with a Markov blanket [28]: the existence of a Markov blanket necessarily implies a partition of states into internal states, their Markov blanket (sensory and active states) and external or hidden states. Internal states and their Markov blanket (biological states) constitute a biological system that responds to hidden states in the environment;

- they exhibit active inference [11]: the partition of states implied by the Markov blanket endows internal states with the apparent capacity to represent hidden states probabilistically, so that they appear to infer the hidden causes of their sensory states (by minimizing a free energy bound on log Bayesian evidence). By the circular causality induced by the Markov blanket, sensory states depend on active states, rendering inference active or embodied; and

- they are autopoietic [4]: because active states change, but are not changed by, hidden states (figure 1), they will appear to place an upper (free energy) bound on the dispersion (entropy) of biological states. This homoeostasis is informed by internal states, which means that active states will appear to maintain the structural and functional integrity of biological states.
When expressed like this, these criteria appear perfectly sensible, but are they useful in the setting of real biophysical systems? The premise of this paper is that these criteria apply to (almost) all ergodic systems encountered in the real world. The argument here is that biological behaviour rests on the existence of a Markov blanket, and that a Markov blanket is (almost) inevitable in coupled dynamical systems with short-range interactions. In other words, if the coupling between dynamical systems can be neglected when they are separated by large distances, the intervening systems will necessarily form a Markov blanket. For example, if we consider short-range electrochemical and nuclear forces, then a cell membrane forms a Markov blanket for internal intracellular states (figure 1). If this argument is correct, then it should be possible to show the emergence of biological self-organization in any arbitrary ensemble of coupled subsystems with short-range interactions. The final section uses simulations to provide a proof of principle, using the four criteria above to identify and verify the emergence of lifelike behaviour.
Proof of principle
In this section, we simulate a primordial soup to illustrate the emergence of biological self-organization. This soup comprises an ensemble of dynamical subsystems, each with its own structural and functional states, that are coupled through short-range interactions. These simulations are similar to (hundreds of) simulations used to characterize pattern formation in dissipative systems; for example, Turing instabilities [48]: the theory of dissipative structures considers far-from-equilibrium systems, such as turbulence and convection in fluid dynamics (e.g. Bénard cells), percolation, and reaction-diffusion systems such as the Belousov-Zhabotinsky reaction [49]. Self-assembly is another important example from chemistry that has biological connotations (e.g. for pre-biotic formation of proteins). The simulations here are distinguished by solving stochastic differential equations for both structural and functional states. In other words, we consider states from classical mechanics that determine physical motion, and functional states that could describe electrochemical states. Importantly, the functional states of any system affect the functional and structural states of another. The agenda here is not to explore the repertoire of patterns and self-organization these ensembles exhibit, but rather to take an arbitrary example and show that, buried within it, there is a clear and discernible anatomy that satisfies the criteria for life.
The primordial soup
To simulate a primordial soup, we use an ensemble of elemental subsystems with (heuristically speaking) Newtonian and electrochemical dynamics {p̃, q̃} ∈ X. Here, p̃(t) = (p, p′, p″, ...) are generalized coordinates of motion describing position, velocity, acceleration, and so on, of the subsystems, while q̃(t) corresponds to electrochemical states (such as concentrations or electromagnetic states). One can think of these generalized states as describing the physical and electrochemical state of large macromolecules. Crucially, these states are coupled within and between the subsystems comprising an ensemble. The electrochemical dynamics were chosen to have a Lorenz attractor, for the ith system with its own rate parameter κ^(i). Changes in electrochemical states are coupled through the local average q̄^(i) of the states of subsystems that lie within a distance of one. This means that A can be regarded as an (unweighted) adjacency matrix that encodes the dependencies among the functional (electrochemical) states of the ensemble. The local average enters the equations of motion both linearly and nonlinearly to provide an opportunity for generalized synchronization [50]. The nonlinear coupling effectively renders the Rayleigh parameter of the flow, 32 + q̄₁^(j), state-dependent. The Lorenz form for these dynamics is a somewhat arbitrary choice but provides a ubiquitous model of electrodynamics, lasers and chemical reactions [51]. The rate parameter κ^(i) = (1/32)(1 − exp(−4·U)) was specific to each subsystem, where U ∈ (0, 1) was selected from a uniform distribution. This introduces heterogeneity in the rate of electrochemical dynamics, with a large number of fast subsystems, with a rate constant of nearly one, and a small number of slower subsystems. To augment this heterogeneity, we randomly selected a third of the subsystems and prevented them from (electrochemically) influencing others, by setting the appropriate column of the adjacency matrix to zero.
We refer to these as functionally closed systems.
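This set-up can be sketched in a few lines: sampling the heterogeneous rate constants κ^(i) = (1/32)(1 − exp(−4U)), building a distance-one adjacency matrix, and closing a random third of the columns. The Lorenz flow below uses conventional σ = 10 and β = 8/3 parameters with the Rayleigh parameter 32 mentioned in the text; these parameters and the omitted local-average coupling are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128                                   # number of subsystems

# Heterogeneous rates: kappa = (1/32)(1 - exp(-4U)), U ~ Uniform(0, 1)
U = rng.uniform(size=n)
kappa = (1.0 / 32.0) * (1.0 - np.exp(-4.0 * U))

# Random positions and an (unweighted) adjacency: coupled within distance one
pos = rng.normal(size=(n, 2))
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
A = ((d < 1.0) & ~np.eye(n, dtype=bool)).astype(float)

# Close a random third of the subsystems: zero their columns so they cannot
# (electrochemically) influence any other subsystem
closed = rng.choice(n, size=n // 3, replace=False)
A[:, closed] = 0.0

def lorenz_flow(q, rayleigh=32.0):
    """Uncoupled Lorenz flow; the paper's coupled variant makes the Rayleigh
    parameter depend on the local average state (32 + mean of q1)."""
    q1, q2, q3 = q
    return np.array([10.0 * (q2 - q1),
                     rayleigh * q1 - q2 - q1 * q3,
                     q1 * q2 - (8.0 / 3.0) * q3])
```

Scaling each subsystem's flow by its own κ^(i) then reproduces the intended mixture of fast and slow electrochemical dynamics.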
In a similar way, the classical (Newtonian) motion of each subsystem depends upon the functional status of its neighbours. This motion rests on forces w^(i) exerted by other subsystems that comprise a strong repulsive force (with an inverse square law) and a weaker attractive force that depends on their electrochemical states. This force was chosen so that systems with coherent (third) states are attracted to each other but repel otherwise. The remaining two terms in the expression for acceleration (second equality) model viscosity that depends upon velocity and an exogenous force that attracts all locations to the origin, as if they were moving in a simple (quadratic) potential energy well. This ensures the synthetic soup falls to the bottom of the well and enables local interactions.
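The force law described here can be sketched as follows. The coefficients (`repel`, `attract`) and the sign-based coherence gate on the third electrochemical state are illustrative choices, since the text does not fix them numerically:

```python
import numpy as np

def forces(pos, q3, repel=1.0, attract=0.5):
    """Pairwise forces: an inverse-square repulsion plus a weaker attraction
    that acts only when the (third) electrochemical states are coherent
    (here, of the same sign). Coefficients are illustrative assumptions."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            dist = np.linalg.norm(r) + 1e-9
            u = r / dist                         # unit vector away from j
            f[i] += repel * u / dist**2          # short-range repulsion
            # attraction toward j when third states are coherent
            f[i] -= attract * u * max(0.0, np.sign(q3[i] * q3[j]))
    return f
```

The total acceleration would then be w^(i) minus a velocity-dependent viscosity term and a restoring force toward the origin (the quadratic potential well).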
Note that the ensemble system is dissipative at two levels: first, the classical motion includes dissipative friction or viscosity. Second, the functional dynamics are dissipative in the sense that they are not divergence-free. We will now assess the criteria for biological self-organization within this coupled random dynamical ensemble.
Ergodicity
In the examples used below, 128 subsystems were integrated using Euler's (forward) method with step sizes of 1/512 s and initial conditions sampled from the normal distribution. Random fluctuations were sampled from the unit normal distribution. By adjusting the parameters in the above equations of motion, one can produce a repertoire of plausible and interesting behaviours (the code for these simulations and the figures in this paper are available as part of the SPM academic freeware). These behaviours range from gas-like behaviour (where subsystems occasionally get close enough to interact) to a cauldron of activity, when subsystems are forced together at the bottom of the potential well. In this regime, subsystems get sufficiently close for the inverse square law to blow them apart-reminiscent of subatomic particle collisions in nuclear physics. With particular parameter values, these sporadic and critical events can render the dynamics non-ergodic, with unpredictable high amplitude fluctuations that do not settle down. In other regimes, a more crystalline structure emerges with muted interactions and low structural (configurational) entropy.
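The integration scheme can be sketched as a forward Euler (Euler-Maruyama) update with the stated 1/512 s step and unit-variance fluctuations. The √dt noise scaling and the decaying test flow below are conventional choices, not taken verbatim from the paper:

```python
import numpy as np

def euler_maruyama(flow, x0, T=4.0, dt=1.0 / 512.0, seed=0):
    """Forward Euler integration of dx = flow(x) dt + dW, with 1/512 s
    steps and unit normal fluctuations, as in the simulations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        noise = rng.normal(size=x.shape)          # unit normal fluctuations
        x = x + flow(x) * dt + noise * np.sqrt(dt)
        path.append(x.copy())
    return np.array(path)

# e.g. an Ornstein-Uhlenbeck process as a stand-in flow
path = euler_maruyama(lambda x: -x, x0=[1.0], T=2.0)
```

In the full simulation, `flow` would combine the electrochemical (Lorenz) dynamics and the Newtonian forces described above for all 128 subsystems.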
However, for most values of the parameters, ergodic behaviour emerges as the ensemble approaches its random global attractor (usually after about 1000 s): generally, subsystems repel each other initially (much like illustrations of the big bang) and then fall back towards the centre, finding each other as they coalesce. Local interactions then mediate a reorganization, in which subsystems are passed around (sometimes to the periphery) until neighbours gently jostle with each other. In terms of the dynamics, transient synchronization can be seen as waves of dynamical bursting (due to the nonlinear coupling in equation (3.2)). In brief, the motion and electrochemical dynamics look very much like a restless soup (not unlike solar flares on the surface of the sun, figure 2)-but does it have any self-organization beyond this?
The Markov blanket
Because the structural and functional dependencies share the same adjacency matrix, which depends upon position, one can use the adjacency matrix to identify the principal Markov blanket by appealing to spectral graph theory: the Markov blanket of any subset of states encoded by a binary vector with elements x_i ∈ {0, 1} is given by [B·x] ∈ {0, 1}, where the Markov blanket matrix B = A + Aᵀ + AᵀA encodes children, parents and parents of children. This follows because the ith column of the adjacency matrix encodes the directed connections from the ith state to all its children. The principal eigenvector of the (symmetric) Markov blanket matrix will, by the Perron-Frobenius theorem, contain positive values. These values reflect the degree to which each state belongs to the cluster that is most interconnected (cf. spectral clustering). In what follows, the internal states were defined as belonging to subsystems with the k = 8 largest values. Having defined the internal states, the Markov blanket can be recovered from the Markov blanket matrix using [B·x] and divided into sensory and active states, depending upon whether they are influenced by the hidden states or not.
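This construction is straightforward to reproduce. Given an adjacency matrix A, with column i listing the children of state i (the text's convention), B = A + Aᵀ + AᵀA recovers the Markov blanket of any subset, and the principal eigenvector of B picks out the most interconnected (internal) states. A minimal sketch:

```python
import numpy as np

def blanket_matrix(A):
    """B = A + A^T + A^T A encodes children, parents and parents of children,
    with A[j, i] != 0 meaning state i is a parent of state j."""
    A = (np.asarray(A) != 0).astype(float)
    return A + A.T + A.T @ A

def markov_blanket(A, internal):
    """States coupled to the internal set through B, excluding the set."""
    B = blanket_matrix(A)
    x = np.zeros(len(B))
    x[list(internal)] = 1.0
    return set(np.flatnonzero((B @ x > 0) & (x == 0)).tolist())

def principal_internal_states(A, k=8):
    """The k states with the largest entries of the principal eigenvector of
    B (nonnegative by Perron-Frobenius): the most interconnected cluster."""
    B = blanket_matrix(A)
    _, V = np.linalg.eigh(B)
    v = np.abs(V[:, -1])
    return set(np.argsort(v)[-k:].tolist())

# Chain 0 -> 1 -> 2 -> 3, stored as A[child, parent] = 1
A = np.zeros((4, 4))
A[1, 0] = A[2, 1] = A[3, 2] = 1.0
```

For the chain, the blanket of {1} is its parent and child {0, 2}, and the blanket of the terminal state {3} is just its parent {2}; note the direction convention matters, since transposing A changes B.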
Given the internal states and their Markov blanket, we can now follow their assembly and visualize any structural or functional characteristics. Figure 3 shows the adjacency matrix used to identify the Markov blanket. This adjacency matrix has non-zero entries if two subsystems were coupled over the last 256 s of a 2048 s simulation. In other words, it accommodates the fact that the adjacency matrix is itself an ergodic process, due to the random fluctuations. Figure 3b shows the location of subsystems with internal states (blue) and their Markov blanket, in terms of sensory (magenta) and active (red) locations. A clear structure can be seen here, where the internal subsystems are (unsurprisingly) close together and enshrouded by the Markov blanket. Interestingly, the active subsystems support the sensory subsystems that are exposed to hidden environmental states. This is reminiscent of a biological cell with a cytoskeleton that supports some sensory epithelia or receptors within its membrane. Figure 3c highlights functionally closed subsystems (filled circles) that have been rusticated to the periphery of the system. Recall that these subsystems cannot influence or engage other subsystems and are therefore expelled to the outer limits of the soup. Heuristically, they cannot invade the system and establish a reciprocal and synchronous exchange with other subsystems. Interestingly, no simulation ever produced a functionally closed internal state. Figure 3d shows the slow subsystems that are distributed between internal and external states, which may say something interesting about the generalized synchrony that underlies self-organization.
Active inference
If the internal states encode a probability density over the hidden or external states, then it should be possible to predict external states from internal states. In other words, if internal events represent external events, they should exhibit a significant statistical dependency. To establish this dependency, we examined the functional (electrochemical) status of internal subsystems to see whether they could predict structural events (movement) in the external milieu. This is not unlike the approach taken in brain mapping that searches for statistical dependencies between, say, motion in the visual field and neuronal activity [52].

Figure 2 caption: The (electrochemical) dynamics of the internal (blue) and external (cyan) states are shown for 512 s. One can see initial (chaotic) transients that resolve fairly quickly, with itinerant behaviour as they approach their attracting set. (c) The position of internal (blue) and external (cyan) subsystems over the entire simulation period illustrates critical events (circled) that occur every few hundred seconds, especially at the beginning of the simulation. These events generally reflect a pair of particles (subsystems) being expelled from the ensemble to the periphery, when they become sufficiently close to engage short-range repulsive forces. These simulations integrated the stochastic differential equations in the main text using a forward Euler method with 1/512 s time steps and random fluctuations of unit variance.
To test for statistical dependencies, the principal patterns of activity among the internal (functional) states were summarized using singular value decomposition and temporal embedding ( figure 4). A classical canonical variates analysis was then used to assess the significance of a simple linear mapping between expression of these patterns and the movement of each external subsystem. Figure 4a illustrates these internal dynamics, while figure 4c shows the Newtonian motion of the external subsystem that was best predicted. The agreement between the actual (dotted line) and predicted (solid line) motion is self-evident, particularly around the negative excursion at 300 s. The internal dynamics that predict this event appear to emerge in their fluctuations before the event itself (figure 4)-as would be anticipated if internal events are modelling external events. Interestingly, the subsystem best predicted was the furthest away from the internal states (magenta circle in figure 4d ).
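This style of analysis can be sketched with synthetic data. Below, singular value decomposition summarizes the internal dynamics and a simple least-squares R² stands in for the canonical variates analysis and χ² statistics of the paper, with a time-flipped copy of the external trace as the null (the flip destroys the coupling while preserving within-series correlations). The data-generating model is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 512

# Synthetic stand-ins: a shared latent trajectory partly drives both the
# internal (functional) states and one external motion trace.
latent = np.cumsum(rng.normal(size=T))              # slow random walk
internal = latent[:, None] + 0.5 * rng.normal(size=(T, 8))
external = latent + 0.5 * rng.normal(size=T)

# Summarize internal dynamics with the leading singular vectors
Usv, s, Vt = np.linalg.svd(internal - internal.mean(0), full_matrices=False)
X = Usv[:, :3]                                      # principal internal patterns

def r2(X, y):
    """Variance explained by a least-squares map from patterns X to y."""
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(X, yc, rcond=None)
    resid = yc - X @ beta
    return 1.0 - resid.var() / yc.var()

score = r2(X, external)          # internal patterns predict external motion
null = r2(X, external[::-1])     # time-flipped null destroys the coupling
```

When the coupling is real, `score` should comfortably exceed the time-flipped `null`, mirroring the comparison of true and null χ² distributions in the paper.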
This example illustrates how internal states infer or register distant events in a way that is not dissimilar to the perception of auditory events through sound waves, or the way that fish sense movement in their environment. Figure 4d also shows the subsystems whose motion could be predicted reliably. This predictability is most significant at the periphery of the ensemble, where the ensemble has the greatest latitude for movement. These movements are coupled to the internal states, via the Markov blanket, through generalized synchrony. Generalized synchrony refers to the synchronization of chaotic dynamics, usually in skew-product (master-slave) systems [53,54]. However, in our set-up there is no master-slave relationship but a circular causality induced by the Markov blanket. Generalized synchrony was famously observed by Huygens in his studies of pendulum clocks, which synchronized themselves through the imperceptible motion of the beams from which they were suspended [55]. This nicely illustrates the 'action at a distance' caused by chaotically synchronized waves of motion. Circular causality begs the question of whether internal states predict external causes of their sensory states or actively cause them through action. Exactly the same sorts of questions apply to perception [56,57]: for example, are visually evoked neuronal responses caused by external events or by our (saccadic eye) movements?

Figure 3 caption: Emergence of the Markov blanket. (a) The adjacency matrix that indicates a conditional dependency (spatial proximity) on at least one occasion over the last 256 s of the simulation. The adjacency matrix has been reordered to show the partition of hidden (cyan), sensory (magenta), active (red) and internal (blue) subsystems, whose positions are shown in (b), using the same format as in the previous figure. Note the absence of direct connections (edges) between external or hidden and internal subsystem states. The circled area illustrates coupling between active and hidden states that is not reciprocated (there are no edges between hidden and active states). The spatial self-organization in the upper left panel is self-evident; the internal states have arranged themselves in a small loop structure with a little cilium, protected by the active states that support the surface or sensory states. When viewed as a movie, the entire ensemble pulsates in a chaotic but structured fashion, with the most marked motion in the periphery. (c,d) Highlight those subsystems that cannot influence others (closed subsystems (c)) and those that have slower dynamics (slow subsystems (d)). The remarkable thing here is that all the closed subsystems have been rusticated to the periphery, where they provide a locus for vigorous dynamics and motion. Contrast this with the deployment of slow subsystems that are found throughout the hidden, sensory, active and internal partition.
Autopoiesis and structural integrity
The previous section applied a simple sort of brain mapping to establish the statistical dependencies between external and internal states, and their functional correlates. The final simulations also appeal to procedures in the biological sciences, in particular neuropsychology, to examine the effects of lesions. To test for autopoietic maintenance of structural and functional integrity, the sensory, active and internal subsystems were selectively lesioned by rendering them functionally closed; in other words, by preventing them from influencing their neighbours. This is a relatively mild lesion, in the sense that they remain physically coupled with intact dynamics that respond to neighbouring elements. Because active states depend only on sensory and internal states, one would expect to see a loss of structural integrity not only with lesions to action but also with lesions to sensory and internal states that are an integral part of active inference. Figure 5 illustrates the effects of these interventions by following the evolution of the internal states and their Markov blanket over 512 s. Figure 5a shows the conservation of structural (and implicitly functional) integrity in terms of spatial configuration over time. Contrast this with the remaining three panels, which show structural disintegration as the integrity of the Markov blanket is lost and internal elements are extruded into the environment.
Conclusion
Clearly, there are many issues that need to be qualified and unpacked under this formulation. Perhaps the most prescient is its focus on boundaries or Markov blankets. This contrasts with other treatments that consider the capacity of living organisms to reproduce by passing genetic material to their offspring [1]. In this context, it is not difficult to imagine extending the simulations above to include slow (e.g. diurnal) exogenous fluctuations that cause formally similar Markov blankets to dissipate and reform in a cyclical fashion.

Figure 4 caption: The location of the external subsystem that was best predicted is shown by the magenta circle in (d). Remarkably, this is the subsystem that is furthest away from the internal states and is one of the subsystems that participates in the exchanges with a closed subsystem in the previous figure. (c) Also shows the significance with which the motion of the remaining external states could be predicted (with the intensity of the cyan being proportional to the χ² statistic above). Interestingly, the motion that is predicted with the greatest significance is restricted to the periphery of the ensemble, where the external subsystems have the greatest latitude for movement. To ensure this inferential coupling was not a chance phenomenon, we repeated the analysis after flipping the external states in time. This destroys any statistical coupling between the internal and external states but preserves the correlation structure of fluctuations within either subset. The distribution of the ensuing χ² statistics (over 82 external elements) is shown in (b) for the true (black) and null (white) analyses. Crucially, five of the subsystems in the true analysis exceeded the largest statistic in the null analysis. The largest value of the null distribution provides protection against false positives at a level of 1/82. The probability of obtaining five χ² values above this threshold by chance is vanishingly small (p = 0.00052).
The key question would be whether the internal states of a system in one cycle induce, or code for, the formation of a similar system in the next. The central role of Markov blankets speaks to an important question: is there a unique Markov blanket for any given system? Our simulations focused on the principal Markov blanket, as defined by spectral graph theory. However, a system can have a multitude of partitions and Markov blankets. This means that there are many partitions that, at some spatial and temporal scale, could show lifelike behaviour. For example, the Markov blanket of an animal encloses the Markov blankets of its organs, which enclose Markov blankets of cells, which enclose Markov blankets of nuclei and so on. Formally, every Markov blanket induces active (Bayesian) inference and there are probably an uncountable number of Markov blankets in the universe. Does this mean there is lifelike behaviour everywhere, or is there something special about the Markov blankets of systems we consider to be alive?
Although speculative, the answer probably lies in the statistics of the Markov blanket. The Markov blanket comprises a subset of states, which have a marginal ergodic density. The entropy of this marginal density reflects the dispersion or invariance properties of the Markov blanket, suggesting that there is a unique Markov blanket that has the smallest entropy. One might conjecture that minimum entropy Markov blankets characterize biological systems. This conjecture is sensible in the sense that the physical configuration and dynamical states that constitute the Markov blanket of an organism, or organelle, change slowly in relation to the external and internal states it separates. Indeed, the physical configuration must be relatively constant to avoid destroying anti-edges (the absence of an edge or coupling) in the adjacency matrix that defines the Markov blanket. This perspective suggests that there may be ways of characterizing the statistics (e.g. entropy) of Markov blankets that could quantify how lifelike they appear. Note from equation (2.9) that systems (will appear to) place an upper bound on the entropy of the Markov blanket (and internal states). This means that the marginal ergodic entropy measures the success of this apparent endeavour. However, minimum entropy is clearly not the whole story, in the sense that biological systems act on their environment, unlike a petrified stone with low entropy. In the language of random attractors, the (internal and Markov blanket) states of a system have an attracting set that is space filling but has a small measure or entropy, where the measure or volume upper bounds the entropy [11]. Put simply, biological systems move around in their state space but revisit a limited number of states. This space filling aspect of attracting sets may rest on the divergence-free or solenoidal flow (equation (2.3)) that we have largely ignored in this paper but may hold the key for characterizing life forms.

Figure 5 caption: In all simulations, a subset of states was lesioned by simply rendering their subsystems closed; in other words, although the Newtonian interactions were preserved, they were unable to affect the functional states of neighbouring subsystems. (b) The effect of this relatively subtle lesion on active states, which are rapidly expelled from the interior of the ensemble, allowing sensory states to invade and disrupt the internal states. A similar phenomenon is seen when the sensory states were lesioned (c), as they drift out into the external system. There is a catastrophic loss of structural integrity when the internal states themselves cannot affect each other, with a rapid migration of internal states through and beyond their Markov blanket (d). These simulations illustrate the effective death of biological self-organization, a well-known phenomenon in dynamical systems theory known as oscillator death: see [58]. In our setting, they are a testament to autopoiesis or self-creation, in the sense that self-organized dynamics are necessary to maintain structural or configurational integrity.
Clearly, the simulations in this paper are a long way off accounting for the emergence of biological structures such as complex cells. The examples presented above are provided as proof of principle and are as simple as possible. An interesting challenge now will be to simulate the emergence of multicellular structures using more realistic models with a greater (and empirically grounded) heterogeneity and formal structure. Having said this, there is a remarkable similarity between the structures that emerge from our simulations and the structure of viruses. Furthermore, the appearance of little cilia (figure 3) is very reminiscent of primary cilia, which typically serve as sensory organelles and play a key role in evolutionary theory [59].
A related issue is the nature of the dynamical (molecular or cellular) constituents of the ensembles considered above. Nothing in this treatment suggests a special role for carbon-based life or, more generally, the necessary conditions for life to emerge. The contribution of this work is to note that if systems are ergodic and possess a Markov blanket, they will, almost surely, show lifelike behaviour. However, this does not address the conditions that are necessary for the emergence of ergodic Markov blankets. There may be useful constraints implied by the existence of a Markov blanket (whose constituency has to change more slowly than the states of its constituents). For example, the spatial range of electrochemical forces, temperature and molecular chemistry may determine whether the physical motion of molecules (that determines the integrity of the Markov blanket) is large or small in relation to fluctuations in electrochemical states (that do not). However, these questions are beyond the scope of this paper and may be better addressed in computational chemistry and theoretical biology. This touches on another key issue, namely that of evolution. In this treatment, we have assumed biological systems are ergodic. Clearly, this is a simplification, in that real systems are only locally ergodic. The implication here is that self-organized systems cannot endure indefinitely and are only ergodic over a particular (somatic) timescale, which raises the question of evolutionary timescales: is evolution itself the slow and delicate unwinding of a trajectory through a vast state space, as the universe settles on its global random attractor? The intimation here is that adaptation and evolution may be as inevitable as the simple sort of self-organization considered in this paper. In other words, the very existence of biological systems necessarily implies they will adapt and evolve.
This is meant in the sense that any system with a random dynamical attractor will appear to minimize its variational free energy and can be interpreted as engaging in active inference-acting upon its external milieu to maintain an internal homoeostasis. However, the ensuing homoeostasis is as illusory as the free energy minimization upon which it rests. Does the same apply to adaptation and evolution?
Adaptation on a somatic timescale has been interpreted as optimizing the parameters of a generative model (encoded by slowly changing internal states, like synaptic connection strengths in the brain) such that they minimize free energy. It is fairly easy to show that this leads to Hebbian or associative plasticity of the sort that underlies learning and memory [21]. Similarly, at even longer timescales, evolution can be cast in terms of free energy minimization, by analogy with Bayesian model selection based on variational free energy [60]. Indeed, free energy functionals have been invoked to describe natural selection [61]. However, if the minimization of free energy is just a corollary of descent onto a global random attractor, does this mean that adaptation and evolution are just ways of describing the same thing? The answer to this may not be straightforward, especially if we consider the following possibility: if self-organization has an inferential aspect, what would happen if systems believed their attracting sets had low entropy? If one pursues this in a neuroscience setting, one arrives at a compelling explanation for the way we adaptively sample our environments, to minimize uncertainty about the causes of sensory inputs [62]. In short, this paper has only considered inference as an emergent property of self-organization, not the nature of the implicit (prior) beliefs that underlie inference.
The Effect of Transformational Leadership and Organizational Commitment on the Turnover Intentions of Elementary School Administrative Staff in Bukittinggi City
Data from the Education and Culture Office of Bukittinggi City show a negative phenomenon in recent years: a tendency toward a shortage of elementary school staff in the city. This is thought to be related to principals' transformational leadership, which influences individuals' turnover intention through their organizational commitment; research is needed to test this. This study aims to reveal the contribution of principals' transformational leadership and of organizational commitment to turnover intentions among elementary school administrative staff in Bukittinggi City. The hypotheses put forward in this study are: (1) transformational leadership affects turnover intentions; (2) transformational leadership affects turnover intentions through organizational commitment; and (3) organizational commitment affects turnover intentions. The population in this study was all 155 elementary school administrative staff in the city of Bukittinggi. The research sample consisted of 98 people, taken using the Slovin technique, and the research instruments used were rating-scale and Likert-scale questionnaires that had been tested for validity and reliability. The research data were analyzed using correlation and regression techniques.
The results of the data analysis show that: (1) transformational leadership can lower the turnover intentions of elementary school administrative staff in Bukittinggi City, especially through organizational commitment; (2) transformational leadership, with limited significance, is also able to directly lower those turnover intentions; and (3) organizational commitment can directly lower them. These findings imply that the principal's transformational leadership is one of the dominant factors that can reduce the turnover intention of elementary school administrative staff.
INTRODUCTION
A shortage of educators and education staff has a negative impact, directly or indirectly, on the quality of services and of education. This is understandable because such a shortage creates crucial problems in planning and policy making for the education system (Luthy 1989) (Kamau, Muathe, and Wainaina 2021). The teacher shortage hurts students, the teachers themselves, and the education system as a whole. A shortage of education or administrative staff likewise harms the running of the education system, among other things by reducing teachers' effectiveness in teaching, and it carries many other impacts that threaten to reduce students' learning. The government, especially the related agencies, should therefore immediately try to overcome these problems (García and Weiss 2019).
One reason for the reduction in educators and education staff is the large number leaving or moving from the educational organization where they work, a phenomenon better known as turnover. Abelson divides turnover into two types: first, voluntary movement that can actually be avoided (avoidable voluntary turnover), which can occur due to a policy of mutation or rotation in the organization where employees work, so that they inevitably have to move; and second, voluntary but unavoidable job change (unavoidable voluntary turnover) (Abelson 1987).
Transfers of individuals from the organization where they work are usually predicted using the turnover intention variable. This is because intention to move is easy to measure as a representative of actual transfer data, which is difficult to collect (Verhees 2012). Turnover intention variables are valid proxies for actual employee turnover (Lee 2019). The intention to move is limited to the intention of an employee who wants to leave the organization, but its impact is the same as a real decision to leave. The desire or intention to leave can also negatively affect the organization, gradually eroding the employee's commitment to organizational goals and forcing the organization to spend more money to find new people. The intention of an individual or employee to move can be divided into two kinds of factors. First, situational factors, caused by pressure from the organization itself, such as a mutation or a transfer process to equalize staffing; these are commonly referred to as external factors. Second, individual factors, which arise from the employee's own desire to move (Princess and Hasanati 2022).
The school, as the smallest educational organization in the provision of education, requires stability and strong bonds between members of the organization who collaborate in achieving its goals, so that broader aims of educational development, such as improving the quality and equity of education, can be realized properly. The Bukittinggi City Education Office always tries to realize these educational development goals but faces real obstacles in the field, among them weak and unstable educational organizations such as schools in the city of Bukittinggi. Schools in Bukittinggi City, especially at the primary level, have persistently experienced shortages of educators and educational staff. This was revealed in interviews with the Head of the Education Personnel and Education Quality Improvement (PKPMP) Department of the Education and Culture Office of Bukittinggi City: elementary schools in Bukittinggi City face problems not much different from those of other cities, such as shortages of education staff and of teachers, but in Bukittinggi City there tends to be a marked shortage of administrative staff. Figure 1 above shows a significant difference between the number of administrative staff leaving elementary schools in Bukittinggi City and the number entering. This indication clearly reinforces the shortage of administrative staff in the city. The shortage tends to be caused by individual factors: staff leave because of their own desire to move to a better place. The Head of PKPMP of the Education and Culture Office of Bukittinggi City, Masri, S.Pd., M.Si., further revealed that many administrative staff at elementary schools ask to leave for various reasons.
One reason is a transfer to another SKPD (regional work unit), following a request from that SKPD which began with the individual request of the administrative employee, who openly stated the wish to leave the school and was then approved by the leadership, followed by a transfer. Leaders who realize that there is still a shortage of ASN (civil servant) staff in various SKPDs of the Bukittinggi City Government feel no need to deliberate long over such decisions, which they also see as part of the solution to their own staffing problems at the time, so they can easily transfer school administrative employees. In other cases, contract or honorary administrative staff leave because they want to find a better place or a better job than before. Interviews with several administrative employees who have moved or left reveal almost the same thing. Administrative staff are an important component in educational organizations and a dominant factor in improving the quality of education (Rusdinal 1989) (Rusdinal, Harma, and Afriasyah 2019). Their role is significant for schools because they help improve the quality of educational services and the quality of education in these schools (Wahyuni, Sri, 2016) (Hutomo et al. 2022). School administrative staff are an important element for schools because optimizing their role, especially in elementary schools, is very effective in helping schools develop their education management and provide educational services (Wandani, Asriani, and Agustina 2022). Administrative staff support school principals in developing the quality of educational services aimed at students and parents as recipients of those services.
This is largely determined by the role of the administrative staff, including their support of the school principal with accurate and complete information (Komariah, Achmad Kurniady, and Rusdinal 2019).
The high rate of transfer or departure of administrative employees within the Bukittinggi City Education Office can adversely affect the provision of educational services to the community and processes related to improving the quality of education. A high rate of intention to move will have a direct negative impact on an educational organization, including organizational imbalance (Sudita 2015); employee departures cause a decrease in performance, affect the performance of the organization itself, and thereby hamper educational productivity.
Based on the author's field observations and the existing literature, the phenomena observed above can occur due to several factors. Among them is an indication of low organizational commitment of the individual employee. Employees who are not committed to the organization have low loyalty, especially contract employees (Susilo and Satrya 2019). Low employee commitment to the school and a weak sense of belonging to the school and its environment also lead to an intention to move (Setiyanto and Selvi 2017). It is therefore necessary to increase loyalty by creating comfortable situations and conditions within the organization. One route is the role of the leader, whose various skills and leadership styles can make this easier.
Other factors include leadership, which can directly or indirectly influence employees' intention to move (Kerdngern and Thanitbenjasith 2017), especially transformational leadership, which has proved very strong in influencing the intention to leave (Park and Pierce 2020). Transformational leaders can encourage their employees to become skilled and innovative, so that employees are satisfied and highly committed to the organization and their intention to move is lower (Abouraia and Othman 2017) (Tri Utami and Havidz Aima 2021). A leadership style that inspires and empowers individuals, groups and organizations by transforming organizational paradigms and values towards independence can, both directly and indirectly, reduce the desire to move; transformational leadership is proven to lower the intention to move (Oupen and Yudana 2020). Oupen showed that at an elementary school in Buleleng, Bali, the principal's leadership style influences the organizational commitment of the teachers: the principal creates a conducive relationship through communication and leadership skills, minimizing the teachers' intention to move. A better leadership style can also reduce the boredom of administrative employees and minimize their intention to move (Nur Dwiyanto 2017).
Based on the description above, the authors were interested in conducting research on "The Influence of Transformational Leadership and Organizational Commitment on the Turnover Intentions of Elementary School Administrative Staff in Bukittinggi City".
RESEARCH METHODS
This study uses a quantitative approach with a correlational design to see how much influence the independent variables (X) have on the dependent variable (Y). A quantitative approach is research whose analysis focuses on numerical data processed using statistical methods. The analyses used are descriptive and inferential.
Descriptive analysis is intended to describe each independent variable, namely transformational leadership and organizational commitment, and the dependent variable, namely intention to leave, while inferential analysis, through correlation and regression techniques, is used to reveal the effect of transformational leadership and organizational commitment as independent variables on intention to leave as the dependent variable.
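The correlation-and-regression step described above can be sketched as follows on entirely synthetic data; the variable names, coefficients and sample values are hypothetical illustrations, not the study's actual data:

```python
import numpy as np

# Hypothetical illustration (not the study's data): scores for
# transformational leadership (x1), organizational commitment (x2),
# and turnover intention (y) on a Likert-type scale.
rng = np.random.default_rng(0)
n = 98
x1 = rng.normal(3.5, 0.6, n)
x2 = 0.5 * x1 + rng.normal(1.5, 0.4, n)          # commitment partly driven by leadership
y = 5.0 - 0.4 * x1 - 0.6 * x2 + rng.normal(0, 0.3, n)  # intention falls with both

# Pearson correlations between each predictor and the outcome
r_x1y = np.corrcoef(x1, y)[0, 1]
r_x2y = np.corrcoef(x2, y)[0, 1]

# Multiple regression y ~ x1 + x2 via least squares
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Both predictors relate negatively to turnover intention in this setup
assert r_x1y < 0 and r_x2y < 0
assert beta[1] < 0 and beta[2] < 0
```

The negative correlations and regression coefficients mirror the direction of the hypothesized effects (higher leadership and commitment, lower turnover intention); the actual study uses PLS-SEM rather than ordinary least squares.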
The population is the entire research subject; in this study it consists of the ASN (civil servant and contract) administrative staff of elementary schools within the Education and Culture Office of Bukittinggi City, grouped per sub-district cluster. The study limits respondents to administrative staff at public and private elementary schools in Bukittinggi City, giving a population of 155 people.
Sampling was carried out using the Slovin technique with a 5 percent margin of error, yielding a sample of 137 respondents from the total population. The data collection instrument was a questionnaire prepared using a Likert scale; each question has five alternative answers chosen on that scale.
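As a check on the sampling step, here is a minimal sketch of Slovin's formula, n = N / (1 + N·e²). The ceiling rounding is an assumption, and note that the value it yields for N = 155 and e = 0.05 differs from the sample sizes reported in the text, so the exact parameters used by the study may differ:

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up (assumed rounding)."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# With a population of 155 and a 5% margin of error:
print(slovin(155, 0.05))  # -> 112
```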
Data collection used observation and interview techniques, and data analysis was carried out in several stages, namely descriptive analysis and Partial Least Squares Structural Equation Model (SEM) analysis with the SmartPLS application.
Transformational Leadership influences Intention to Move
Park and Pierce (2020) revealed that transformational leadership strongly influences workers' intention to leave. Transformational leaders improve employees' abilities so that they become more competent in their fields, and the intention to move is thereby reduced.
International Journal Of Humanities Education And Social Sciences (IJHESS), E-ISSN: 2808-1765, Volume 2, Number 4, February 2023, Page 1396-1405, https://ijhess.com/index.php/ijhess/
Based on the test results, which can be seen in Table 4.13, the p value for the direct relationship between the transformational leadership variable and the intention to move is significant at the 5 percent alpha level and has a negative direction. This shows that transformational leadership has a negative effect, of limited significance, on the intention to move of administrative employees in elementary schools in Bukittinggi City. Within this limited significance, the result can be interpreted to mean that the higher the principal's level of transformational leadership, the lower the staff's intention to move; this is in line with research conducted by Lystia Tri Utami and M. Havidz Aima (2021). In practice, not all elementary school principals in the city of Bukittinggi use transformational leadership; some leaders are less assertive in making decisions, which gives rise to doubts among subordinates about their leaders. This condition is, of course, a factor that leads employees to think about looking for another job elsewhere or moving to another agency.
Transformational Leadership influences Intention to Move through Organizational Commitment
A transformational style inspires and empowers individuals, groups and organizations by transforming organizational paradigms and values towards independence, both directly and indirectly. Abouraia and Othman (2017) prove that transformational leadership can make employees more satisfied, dedicated, creative and up to date, and can provide strong ideas for achieving organizational goals, ultimately leading to lower turnover rates.
From the results of the hypothesis testing, the p value for the relationship between the transformational leadership variable and the intention to move through the organizational commitment variable was 0.001, against a critical value of 1.960 at the 95% confidence level, with a coefficient of 3.380. This shows a negative influence of transformational leadership on the intention to move through organizational commitment; the second hypothesis is therefore accepted, and there is a correlation between transformational leadership and the intention to move through organizational commitment. These results support previous research by Oupen and Yudana (2020), which stated that the principal's leadership style can minimize teachers' intention to move. This leadership style supports members in becoming smarter and more up to date and in providing strong ideas, so that organizational goals are met. Transformational leaders foster more mature preparation and systems that increase employees' job satisfaction, organizational commitment and trust in the organization, so that loyalty increases and turnover rates ultimately fall (Abouraia and Othman 2017) (Khan 2015).
Organizational Commitment influences Intention to Leave
Susilo and Satrya (2019) state that, besides job satisfaction, a person's organizational commitment also influences the level of intention to leave; employees who are not committed to the organization and have low loyalty are dominantly contract employees.
From the results of testing the hypothesis, organizational commitment was found to have a negative effect on the intention to leave or move. This can be seen from the p value of 0.000 against the significance level (α) of 0.05, with a negative path coefficient of 6.260. The third hypothesis is therefore accepted.
The research results thus show an influence of organizational commitment on the intention to leave among administrative staff in elementary schools in Bukittinggi. One cause is the commitment to stay even when the workload is heavy, especially among employees who already feel they belong to the organization or work environment. This is similar to research by Setiyanto and Selvi (2017), which explains that high employee commitment to the school and a strong sense of belonging to the school and its environment keep the intention to move low, and it is in line with Lystia Tri Utami and M. Havidz Aima (2021), where employees remain committed to the school because they think their lives would be disrupted in the future if they left. In addition, these education staff, who tend to be predominantly women, realize that if they leave or move from the school they will have to adapt again in order to find the conditions they expect. Azem, S. and Akhtar, N. (2014) found that the intention to move is more likely to be influenced by continuance commitment, such as the perceived economic value of staying in the company, than by affective and normative commitment; rewards with economic value can encourage employees to stay.
CONCLUSION
Based on the statistical data analysis and the discussion of the empirical test of the model relating the transformational leadership variable to the intention to move of elementary school administrative employees in Bukittinggi City through the organizational commitment variable, the following conclusions can be drawn: (1) transformational leadership can lower the intention to move of elementary school administrative staff in Bukittinggi City, especially through organizational commitment; (2) organizational commitment can directly lower that intention to move; and (3) transformational leadership, with limited significance, is also able to directly lower it.
Absorption and Fixed Points for Semigroups of Quantum Channels
In the present work we review and refine some results about fixed points of semigroups of quantum channels. Noncommutative potential theory enables us to show that the set of fixed points of a recurrent semigroup is a W*-algebra; aside from the intrinsic interest of this result, it brings an improvement in the study of fixed points by means of absorption operators (a noncommutative generalization of absorption probabilities): under the assumption of an absorbing recurrent space (hence allowing a non-trivial transient space) we can provide a description of the fixed points set and a probabilistic characterization, in terms of absorption operators, of when it is a W*-algebra. Moreover, we exhibit an example of a recurrent semigroup which does not admit a decomposition of the Hilbert space into orthogonal minimal invariant domains (in contrast to the case of classical Markov chains and positive recurrent semigroups of quantum channels).
Introduction
Quantum channels are mathematical objects used to describe the most general evolution an open quantum system can undergo, and semigroups of quantum channels (which in continuous time are known as quantum Markov semigroups) have been used for decades to model the time evolution of open quantum systems under reasonable assumptions and approximations. From a mathematical point of view they represent an interesting noncommutative generalization of transition kernels and classical Markov semigroups. Many fundamental concepts and tools from the theory of classical Markov processes have been carried over to the setting of semigroups of quantum channels and have provided valuable contributions to their study.
The notion of positive recurrence for semigroups of quantum channels traces back to the 1970s and is a fundamental tool, for instance, for studying the long-time behaviour of quantum systems; taking inspiration from the theory of classical Markov semigroups, transience and the distinction between positive and null recurrence were further analyzed in [8,17] (in finite-dimensional quantum systems, as in the case of Markov chains on a finite state space, null recurrence does not show up and the situation is far less complicated). However, there are still some open issues in the theory of recurrence for semigroups of quantum channels; following on from [3], in this work we present some recent results that improve the understanding of null recurrence in the noncommutative setting.
Another topic which has drawn attention in recent years is the study of the fixed points of the evolution: they are relevant for the asymptotics of the evolution of quantum systems ([10]), and whether or not they form a W*-algebra has implications, for instance, for the relationship between conserved quantities and symmetries of the semigroup (see [2] for the general problem of when fixed points are an algebra and [12] for a discussion of Noether-type results in the context of quantum channels). Fixed points are well understood in the case of positive recurrent semigroups ([4,9,14]), while less is known for general semigroups ([1,3,11]). In the present work we review and improve some results of [3] which, under mild assumptions, characterize the fixed points sets in terms of absorption operators, a noncommutative generalization of absorption probabilities also introduced in [3].
Let us briefly present the setting. Let h be a separable Hilbert space; we denote by L^1(h) the trace class operators and by B(h) the bounded linear operators; positive trace class operators with unit trace are called states and play the role of noncommutative probability densities. We recall that the topological dual of L^1(h) is isometric to B(h) via the pairing
⟨x, ρ⟩ = tr(xρ),   x ∈ B(h), ρ ∈ L^1(h).   (1)
A quantum channel Φ : B(h) → B(h) is a completely positive unital w*-continuous linear map, while the predual map Φ_* (defined via equation (1), i.e. tr(Φ_*(ρ)x) = tr(ρΦ(x))) is a completely positive trace preserving map acting on L^1(h). A semigroup of quantum channels is a collection of quantum channels (P_t)_{t∈T} indexed by a semigroup T such that P_0 = Id and P_s P_t = P_{t+s} for any s, t ∈ T.
The collection of predual maps (P_{*t}) forms a semigroup too. We will mainly consider two cases: T = N, where the semigroup consists of the powers of a single quantum channel, P := (Φ^n)_{n∈N}; and T = [0, +∞), where we require the map t ↦ P_t to be w*-pointwise continuous (such semigroups are known in the literature as quantum Markov semigroups).
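As a concrete finite-dimensional illustration (not taken from the paper), the following sketch builds a qubit dephasing channel from Kraus operators and checks unitality of Φ, trace preservation of Φ_*, and a simple fixed-point set; the particular channel and parameter value are hypothetical choices:

```python
import numpy as np

# Dephasing qubit channel: Phi_*(rho) = (1-p) rho + p Z rho Z
p = 0.3
I = np.eye(2)
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * Z]

def channel_dual(A):
    """Heisenberg picture: Phi(A) = sum_i K_i^dagger A K_i."""
    return sum(K.conj().T @ A @ K for K in kraus)

def predual(rho):
    """Schroedinger picture: Phi_*(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Phi is unital and Phi_* is trace preserving
assert np.allclose(channel_dual(I), I)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
assert np.isclose(np.trace(predual(rho)), np.trace(rho))

# Z (and any diagonal operator) is a fixed point, while the off-diagonal
# part of a generic operator is contracted: here F(P) is the set of
# diagonal matrices, a commutative W*-algebra.
assert np.allclose(channel_dual(Z), Z)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
assert not np.allclose(channel_dual(sigma_x), sigma_x)
```

In this toy example the fixed points form the commutant of {Z}, echoing the paper's theme that fixed-point sets of well-behaved semigroups are W*-algebras.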
In Section 2 we briefly recall some fundamental concepts about reducibility of quantum channels. After that, we recap the basic notions of recurrence and transience for semigroups of quantum channels.
In Section 3 we show that the fixed points set of a recurrent semigroup of quantum channels is a W*-algebra. Moreover, assuming that the recurrent space is absorbing, we provide a description of the fixed points set of the semigroup in terms of absorption operators and we present some relevant consequences (these results had already been proved in [3] under stronger assumptions). We recall that the set of fixed points (sometimes called harmonic operators) of a semigroup of quantum channels is
F(P) := {x ∈ B(h) : P_t(x) = x for every t ∈ T}.
It is well known that, in the case of classical Markov chains, recurrent states can be partitioned into communication classes; however, in Section 4 we show that recurrent semigroups in general do not admit a decomposition of h into orthogonal minimal invariant spaces (hence the result [5, Proposition 7.1] for positive recurrent semigroups does not extend to recurrent ones).
For a more detailed treatment of the topics and the results presented in this work, we refer to [13].
2 Preliminaries on semigroups of quantum channels
Reducibility. The concept of reducibility for quantum channels was introduced in [7] and since then has been intensively studied and exploited. We recall that for every positive operator x, supp(x) := ker(x)^⊥.
Definition 1 (Enclosure). A closed subspace V of h is an enclosure (sometimes also called an invariant domain) for a quantum channel Φ if, for any state ρ, supp(ρ) ⊆ V implies supp(Φ_*(ρ)) ⊆ V. V is an enclosure for a semigroup P when it is an enclosure for every channel of the semigroup P.
An enclosure V is said to be minimal if the only enclosures contained in V are the trivial ones, i.e. {0} and V.
The following are equivalent (see [6,Section 3]): • V is an enclosure for Φ, • p V is a reducing or subharmonic projection for Φ, i.e. Φ( When V is an enclosure, we can define the restricted quantum channel Φ V and the corresponding predual map: Thank to equation (2), the restrictions of a semigroup of quantum channels P V still mantain the semigroup property. Where it does not create confusion, we will often identify B(V) Recurrence and transience. Recurrence and transience are strictly connected to reducibility properties and, of course, the ergodic behaviour of the semigroup; for the convenience of the reader, we briefly recall some related definitions and relevant properties following [8,17] (see [11] for the discrete time case). In analogy with the theory of classical Markov chains, the following space is called the positive recurrent space: It is well known that R + is an enclosure ([17, Proposition 3]). For every positive x ∈ B(h), we call form-potential of x the following positive symmetric closed quadratic form: m stays for the counting measure in the discrete time case and for the Lebesgue measure in the continuous time setting. Loosely speaking, we can say that for any orthogonal projection p, U(p)[v] represents the average time spent in supp(p) by a quantum systems that starts in the state |v v| and evolves according to P; if U(p) is bounded, no matter what is the initial state |v v|, the system spends on average a finite amount of time in supp(p). We define R := T ⊥ is an enclosure and T ⊥ R + (see [17,Proposition 7 and Corollary 2]), hence it is meaningful to call R the recurrent space and to define R 0 := R ∩ R ⊥ + the null recurrent space; as in the classical case, it turns out that the null recurrent space is an enclosure too ([3, Theorem 9]). We say that the semigroup P is positive recurrent (resp. null recurrent/recurrent/transient) if h = R + (resp. 
R 0 , R, T ); in general, a semigroup P does not need to be of one of the above types, but [17, Theorem 9] shows that it can always be decomposed into its transient and recurrent restrictions, P T and P R , and the recurrent restriction splits again into the positive and null recurrent restrictions, P R + and P R 0 (since R, R + and R 0 are enclosures, the corresponding restrictions are defined as in equation (2)).
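As a concrete finite-dimensional illustration of these notions (the channel and the value of γ are our own choice, not taken from the references), consider the amplitude-damping channel on C²: the projection onto span{|0⟩} is subharmonic, so span{|0⟩} is an enclosure.

```python
import numpy as np

# Amplitude-damping channel on C^2 (illustrative; gamma = 0.3).
# Kraus operators K0, K1; the Heisenberg-picture channel is
# Phi(x) = K0^* x K0 + K1^* x K1, a unital completely positive map.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

def Phi(x):
    """Heisenberg-picture action of the channel."""
    return K0.conj().T @ x @ K0 + K1.conj().T @ x @ K1

# Projection onto V = span{|0>}.
pV = np.diag([1.0, 0.0])

# V is an enclosure iff pV is subharmonic: Phi(pV) >= pV, i.e. all
# eigenvalues of Phi(pV) - pV are nonnegative.
eigs = np.linalg.eigvalsh(Phi(pV) - pV)
print(eigs)
assert np.all(eigs >= -1e-12)
```

Here Φ(p V ) = diag(1, γ), so Φ(p V ) − p V = diag(0, γ) ≥ 0 and the criterion above is met.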
Absorption operators to describe fixed points
Recurrent semigroups. In the first part of this section, unless otherwise specified, we assume that the semigroup P is recurrent, i.e. h = R; notice that, in the case of a generic semigroup P, everything that we prove holds true for the recurrent restriction P R . In the case of recurrent semigroups, F (P) is tightly related to the communication properties of the semigroup: every projection in F (P) is by definition the projection onto an enclosure, and vice versa.
Theorem 2 (Theorem 9, [3]). If V, Z are increasing enclosures included in R, i.e. such that V ⊆ Z ⊆ R, then Z ∩ V ⊥ is an enclosure.
Hence projections corresponding to enclosures are harmonic projections. In general, F (P) is only a selfadjoint w * -closed linear space; however, the following result proves that the fixed points set of a recurrent semigroup is a W * -algebra, hence it is completely determined by its projections.
We remark that this fact was already known in case of positive recurrent semigroups, i.e. when R = R + (see [2,Theorem 2.3]).
Proof. If F (P) is not an algebra, [2, Lemma 2.2] shows that there exists x ∈ F (P) such that x * x ∉ F (P). The Kadison-Schwarz inequality implies that P t (x * x) ≥ P t (x) * P t (x) = x * x for every t, (4) which means that x * x is subharmonic and (P t (x * x)) is a bounded positive monotone increasing net; we call y its least upper bound. y is a fixed point, since ∀s ∈ T P s (y) = lim t→+∞ P s (P t (x * x)) = lim t→+∞ P t+s (x * x) = y.
Notice that y − x * x ≥ 0. Moreover y − x * x ≠ 0: since x * x ∉ F (P), there must be a time t > 0 such that the inequality in equation (4) is not an equality. By [8, Theorem 4], there exists a non-null positive operator z such that U(z) ≤ y − x * x ; hence U(z) is bounded, {0} ≠ supp(z) ⊆ T , and the transient space is non-trivial, contradicting the recurrence of P.
Corollary 4. For every enclosure V, p V ∈ F (P), and F (P) is the norm-closure of the linear span of the projections corresponding to enclosures.
Proof. As we already pointed out, Theorem 2 shows that projections corresponding to enclosures are harmonic projections and, since F (P) is a W * -algebra, it is the norm-closure of the linear span of its projections.
Another immediate consequence is a nice diagonal structure of the elements in F (P).
Proof. By Corollary 4, it is enough to prove the statement for the projections corresponding to the enclosures, hence the result follows from [3, Corollary 17].
Absorbing recurrent space. We now turn our attention to the fixed points set of semigroups with non-trivial transient part. For every enclosure V, one can consider the corresponding absorption operator, defined as A(V) := w * - lim t→+∞ P t (p V ). Absorption operators are a noncommutative generalization of absorption probabilities: for every unit vector v, ⟨v, A(V)v⟩ can be interpreted as the asymptotic probability of finding in V a quantum system that starts in the state |v v| and evolves according to P. Absorption operators have a convenient block structure which is related to recurrence: Theorem 14 in [3] shows that, for every enclosure V ⊆ R, A(V) = p V + p T A(V)p T . (5) Moreover absorption operators are fixed points of P ([3, Proposition 4]) and, since the set of fixed points is norm closed, the norm-closure of the linear span of the absorption operators is contained in F (P). A natural question is to find under which conditions the reverse inclusion is true and so fixed points are completely described by absorption operators. We think it is an interesting problem because it creates a bridge between the fixed points set and the absorption dynamics and communication properties of the semigroup. Although we are still not able to characterize the situation in full generality, we can improve [3, Theorem 22] and show that fixed points are completely characterized by absorption operators when the recurrent space is absorbing, that is A(R) = 1. This means that asymptotically the evolution gets absorbed in the recurrent space, which, therefore, contains all relevant information regarding asymptotic quantities; it is not a very restrictive hypothesis: it holds true if h is finite dimensional, and stronger restrictions were often considered in ergodic theory ([10,11]). Furthermore, as pointed out already in [10], there is a wide class of quantum Markov semigroups for which checking its validity reduces to an analogous problem for a classical Markov chain.
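As a toy illustration (channel and parameter chosen by us, not taken from the paper), for the amplitude-damping channel on C² the recurrent space is R = span{|0⟩}, and iterating Φ on p R exhibits A(R) = 1, i.e. the recurrent space is absorbing:

```python
import numpy as np

# Illustrative amplitude-damping channel on C^2 (gamma = 0.3) in the
# Heisenberg picture; the state |0> is invariant, R = span{|0>} is the
# recurrent space and span{|1>} is transient.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

def Phi(x):
    return K0.conj().T @ x @ K0 + K1.conj().T @ x @ K1

# Absorption operator A(R) = lim_n Phi^n(p_R), approximated by iteration.
x = np.diag([1.0, 0.0])          # p_R
for _ in range(200):
    x = Phi(x)

print(np.round(x, 6))
assert np.allclose(x, np.eye(2), atol=1e-8)   # A(R) = 1: R is absorbing
```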
Proof (sketch). Together with Proposition 3, Theorem 2.4 in [2] shows that there exists a conditional expectation E : B(h) → F (P) (it cannot be w * -continuous unless R = R + by [10, Theorem 2.1]) which can be approximated in the w * -pointwise topology by the net of time averages of the semigroup. Notice that, by definition of absorption operator, E(p V ) = A(V) for every enclosure V. In order to prove the diagonal structure of F (P) with respect to R + and R 0 , it suffices to show that it holds for absorption operators corresponding to recurrent enclosures; this follows from the form of the enclosures V ⊂ R ([3, Corollary 17]) and from equation (5). Equation (2) implies that for every x ∈ F (P), p R + xp R + and p R 0 xp R 0 are fixed points of the corresponding restrictions.
Notice that in the case of recurrent semigroups, by equation (5), absorption operators coincide with harmonic projections and Theorem 6 becomes Corollaries 4 and 5. There are some interesting consequences of Theorem 6 (hence we always assume A(R) = 1), whose proofs we skip because they are very similar to the ones of [3, Proposition 23 and Proposition 29], with suitable trivial adaptations.
1. Let V ⊂ R be an enclosure; A(V) = p V + p T A(V)p T (see equation (5)) and p T A(V)p T is the unique y ∈ B(T ) that solves equation (6), where L is the infinitesimal generator of P in continuous time or is equal to Φ − Id in discrete time.
Equation (6) is the analogue of the characterization of absorption probabilities as the solution of a linear system: let (X n ) n≥0 be a Markov chain on a countable state space E with transition matrix (p xy ) x,y∈E . In the classical setting, enclosures correspond to closed sets: a subset C ⊂ E is said to be closed if for every x ∈ C, n ∈ N, P(X n ∉ C|X 0 = x) = 0. If recurrent states are absorbing, for every closed set C ⊂ R, the corresponding absorption probability a is the unique solution of a(x) = Σ y∈E p xy a(y) for every transient x, with a ≡ 1 on C and a ≡ 0 on R \ C. (7) 2. Every enclosure V is of the form V = (V ∩ R) ⊕ W with W ⊆ T . (8) As we mentioned in the previous section, enclosures play a fundamental role in the study of semigroups of quantum channels and in applications it is useful to find them; equation (8) ensures that, at least when A(R) = 1, we can restrict our search to projections which commute with p R + , p R 0 and p T . It would be extremely interesting to know whether or not a subharmonic projection always commutes with p R + , p R 0 and p T .
The last part of the statement tells us that every enclosure is composed of a non-null recurrent enclosure V ∩ R, plus a linear space which gets completely absorbed into V ∩ R.
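The classical characterization of absorption probabilities as the solution of a linear system can be illustrated with a gambler's-ruin chain (our own example, not from the paper), whose absorbing states play the role of the closed set C.

```python
import numpy as np

# Absorption probabilities of a Markov chain solve a linear system.
# Illustrative example: gambler's ruin on {0,...,4} with absorbing
# barriers 0 and 4 and a fair coin (p = 1/2).
p = 0.5
n = 5
P = np.zeros((n, n))
P[0, 0] = P[n - 1, n - 1] = 1.0          # closed (absorbing) states
for x in range(1, n - 1):
    P[x, x - 1] = 1 - p
    P[x, x + 1] = p

# Absorption probability a(x) into C = {4}: on transient states a = P a,
# with boundary conditions a(0) = 0, a(4) = 1. Equivalently, solve
# (I - Q) a_T = b, with Q the transient block and b the one-step
# probabilities of hitting C.
trans = list(range(1, n - 1))
Q = P[np.ix_(trans, trans)]
b = P[trans, n - 1]
a_T = np.linalg.solve(np.eye(len(trans)) - Q, b)

print(a_T)                # fair game: absorption into {4} from x is x/4
assert np.allclose(a_T, [0.25, 0.5, 0.75])
```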
Reducibility of recurrent semigroups
Whether or not a recurrent semigroup admits a decomposition of the Hilbert space h into orthogonal minimal enclosures (DOME) is a natural question, because the analogous statement is true for classical Markov chains and it was shown in many papers (for instance, see [5, Proposition 7.1]) for positive recurrent semigroups. In this section we will exhibit an example of a recurrent semigroup which does not admit a DOME; but first, we need to reformulate the problem thanks to Corollary 4.
Lemma 7.
A recurrent semigroup P admits a DOME if and only if F (P) is an atomic W * -algebra.
We recall that a W * -algebra is atomic if for every projection p, there exists a minimal projection q ≤ p and this implies that there exists a denumerable family of orthogonal minimal projections (q α ) α∈A such that α∈A q α = 1.
We remark that any DOME of R must be compatible with R + and R 0 .
Proof. Corollary 5 implies that every (sub)harmonic projection p V for P R commutes with p R + , p R 0 and, since R + and R 0 are enclosures and the intersection of two enclosures is again an enclosure, V ∩ R + and V ∩ R 0 are enclosures. Therefore a DOME (V α ) of R induces DOMEs (V α ∩ R + ) and (V α ∩ R 0 ) for R + and R 0 , respectively (since V α is minimal, each V α is entirely contained either in R + or in R 0 ). We already know that there always exists a DOME of R + , hence the existence of a DOME of R and the existence of a DOME of R 0 are equivalent problems.
Example 9 (Noncommutative symmetric random walk on Z). Let us consider the group G with two generators a, b satisfying a 2 = b 2 = e (e is the identity element) and the corresponding left and right representations defined on h = ℓ 2 (G). For every g ∈ G, λ(g) and ρ(g) are the unitary operators acting in the following way on the canonical basis C = {δ g : g ∈ G}: λ(g)δ h = δ gh and ρ(g)δ h = δ hg −1 . Notice that λ(g) * = λ(g −1 ) and ρ(g) * = ρ(g −1 ) for every g ∈ G. We denote by L(G) and R(G) the von Neumann algebras generated by the operators λ(g) and ρ(g), respectively; we recall that R(G) = L(G) ′ ([16, Section V.7]). We define the quantum channel Φ(x) = 1/2 (λ(a)xλ(a) + λ(b)xλ(b)) and we consider the semigroup P := (Φ n ) n∈N .
Invariant commutative subalgebra and null recurrence. Consider the commutative W*-subalgebra ∆ of operators which are diagonal in the canonical basis C and its predual ∆ * . Since G is countable, we can relabel its elements with integer numbers, and this provides isomorphisms between ℓ α (G) and ℓ α (Z) for α ∈ {1, 2, ∞}. λ(a), λ(b) act on C in the following way (for any g ∈ G, the label g stands for δ g ): λ(g ′ )δ g = δ g ′ g for g ′ ∈ {a, b}. It is easy to see that Φ preserves ∆ and ∆ * , and its restriction corresponds, via the isomorphisms above, to the transition matrix of a symmetric random walk on Z: Φ(|δ g δ g |) = 1 2 (|δ ag δ ag | + |δ bg δ bg |). The symmetric random walk on Z is null recurrent and this implies that also P is null recurrent.
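The null recurrence of the symmetric random walk rests on the divergence of the series of return probabilities, Σ k p k 0,0 = +∞; a quick numerical check (our own illustration):

```python
import math

# Return probabilities of the symmetric random walk on Z:
# p^{2k}_{0,0} = C(2k, k) / 4^k ~ 1/sqrt(pi*k); their sum diverges,
# which is the classical statement of (null) recurrence.
def p_return(k):
    """Probability of being back at 0 after 2k steps."""
    return math.comb(2 * k, k) / 4 ** k

partials = [sum(p_return(k) for k in range(1, K + 1)) for K in (10, 100, 1000)]
print(partials)               # grows roughly like 2*sqrt(K/pi)
assert partials[0] < partials[1] < partials[2]
assert partials[2] > 10       # no uniform bound: the series diverges
```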
Proposition 10. P is null recurrent.
Proof. 1. Consider a non-null positive operator x; then there must exist some g ∈ G such that ⟨δ g , xδ g ⟩ ≥ c > 0 for some positive constant c. By the symmetry of the semigroup, we can assume g = e.
Then U(x)[δ e ] = Σ k≥0 ⟨δ e , Φ k (x)δ e ⟩ ≥ c Σ k≥0 p k 0,0 = +∞, where p k 0,0 is the probability that a symmetric random walk on Z that starts in 0 comes back to 0 in k steps. U(x) is unbounded, hence T = {0}. 2. Suppose there exists an invariant state ρ; the restriction of ρ to ∆ is represented by a state ρ̃ ∈ ∆ * , which must be invariant for the symmetric random walk on Z, but the null recurrent walk admits no invariant probability, a contradiction; hence R + = {0}.
F (P) has no minimal projections. When F (P) is an algebra (this is the case by Proposition 3) and the semigroup is generated by a single quantum channel Φ, Proposition 1 in [4] provides a characterization of F (P) in terms of the Kraus operators of Φ: here it yields F (P) = {λ(a), λ(b)} ′ = L(G) ′ = R(G). We will show something stronger than the fact that R(G) is not atomic: namely, we will prove that it has no minimal projections. We denote by Z(G) := R(G) ∩ L(G) the center of R(G); notice that (ρ(ab) + ρ(ba))/2 ∈ Z(G): it clearly belongs to R(G) and it commutes with ρ(a) (by symmetry it commutes with ρ(b) too): ρ(a)(ρ(ab) + ρ(ba)) = ρ(b) + ρ(aba) = (ρ(ab) + ρ(ba))ρ(a).
Let us focus on the action of ρ(ab) and ρ(ba) = ρ(ab) * on C : Hence there exists a unitary operator U : ℓ 2 (Z) ⊗ C 2 → ℓ 2 (G) such that U * ρ(ab)U is S ⊗ 1 C 2 , where S is the right shift operator. Consider the Fourier transform between the one dimensional torus T and Z: By the Fourier transform properties, it is easy to see that F −1 SF is the multiplication operator M e ix corresponding to the function e ix . We have the following chain of equalities Therefore the W*-algebras generated by M cos(x) and (ρ(ab) + ρ(ba))/2 are isomorphic.
Proposition 11. R(G) has no minimal projection.
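The unitary equivalence of (ρ(ab) + ρ(ba))/2 with M cos(x) obtained above suggests why no minimal projection can exist: M cos(x) has purely continuous spectrum [−1, 1]. Finite truncations of (S + S*)/2 give a numerical shadow of this (our own illustration, not part of the paper's argument):

```python
import numpy as np

# Finite n x n truncation of (S + S*)/2: tridiagonal, 1/2 on the
# off-diagonals. Its eigenvalues are cos(k*pi/(n+1)), k = 1..n, which
# fill [-1, 1] densely as n grows -- mirroring the purely continuous
# spectrum of the multiplication operator M_cos(x).
n = 200
T = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)
eigs = np.sort(np.linalg.eigvalsh(T))
expected = np.sort(np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
assert np.allclose(eigs, expected, atol=1e-8)
print(eigs.min(), eigs.max())   # endpoints approach -1 and 1
```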
Remark 13. The structure of the decoherence-free subalgebra N (P) of a semigroup, and whether it is atomic or not, has been investigated in [4,9,15], especially in relation to environmental decoherence. In the present example, even N (P) has no minimal projection. Indeed, by [4, Proposition 3], we have that N (P) = {λ(ab), λ(ba)} ′ .
Although Example 9 shows that in general F (P R ) is not atomic, [2, Theorem 2.4] together with Proposition 3 shows that it is injective. A natural question is whether it is possible to find interesting features of the semigroup ensuring the atomicity of F (P R ) and, hence, the decomposition of R into orthogonal minimal enclosures.
Adaptive Fuzzy Model for Determining Quality Assessment Services in the Supply Chain
The problem that is being addressed in this paper is to improve the services provided by a company and to achieve better communication between companies in the supply chain. Therefore, a qualitative assessment of service is required. A service is characterized by a group of parameters whose values are often imprecisely estimated, as is their importance for the evaluation system. This is often the result of the assessor's uncertainty, variability of conditions, etc. Therefore, in the context of AM4SCM (Adaptive Model for Supply Chain Management), a mathematical model for evaluating the quality of services has been developed (FAM4QS, Fuzzy Aggregation Method for Quality Service), which is based on fuzzy arithmetic. Selecting different values for the degrees of the fuzzy power mean used for the evaluation of parameters or groups of parameters of the system and the service contributes to a better assessment, owing to the varying nature of the parameters. The model was simulated on 17 supply chains on the territory of the Republic of Serbia. Service quality assessment is carried out based on data about the user requirements of supply chain participants, using the so-called fuzzy aggregation function.
INTRODUCTION
The concept of supply chain changes over time and is gaining in importance. During the first decade of this century, according to [1], supply chain management and control were the strategic focus of the leading manufacturing companies. This is caused by rapid changes of the environment in which companies operate, the globalization of markets and very high customer requirements, where high quality products and services are becoming a priority. The aim for today's supply chain is to model the chain in a way that will provide profitable outputs for all parts of the supply chain and all its participants. Looking at one of the supply chain definitions, according to [2], it is a set of three or more organizations that are directly connected with one or more flows of products, services, finances and information from a source to the end user. In contemporary supply chains, it is very often necessary to coordinate activities and flows to an extent that goes beyond the current limits. Supply chain management has a high impact on the quality of products and services, which according to [3] increases the importance of the relationship between procurement, suppliers and quality. With the increasing importance of these relationships, the aim is to optimize the supply chain, which according to [4] means successfully controlling the different elements within the chain, including the participants, their mutual contacts and relationships, and the way certain internal activities are organized.
In addition to cost optimization, the aim of supply chain management is to improve the flow of information between the suppliers, companies and distributors. As one of the important aims of supply chain management, which has lately been emphasized, is to increase the quality of service and flexibility in order to achieve the satisfaction of the end users. This is confirmed by Christopher in his book [5], "the whole purpose of supply chain management and logistics is to provide customers with level and quality of service that they required and to do so less cost to the total supply chain".
When it comes to the supply chain, the flow of information in real time is one of the global problems. Much research is focused on solving this problem and ensuring the flow of information in real time in the supply chain, so that participants are more satisfied and do business better. Problems often arise due to poor connectivity of subsystems that are independently developed and then used as global integrators of all company processes. Within a subsystem, solutions for individual functions are given only as a set of fixed partial solutions without generalization. It is often possible to find a system whose structure was not specially designed; instead, the solution was sought in the merger (purchase) of subsystems whose partial solutions arose as needs appeared over time. The subjects of this study are a model, a method and tools for supply chain management that use, to the greatest extent possible, the concepts of responsibility for the flow of information and of increasing the quality of service in real time in the supply chain.
According to Cheng [6], due to its complexity and uncertainty, quality control of the supply chain poses great challenges to practitioners and researchers. The problem considered in this article is therefore to improve the service provided by the company and to achieve better communication between the companies in the supply chain, in order to accelerate their business and deliver more profit, as well as to achieve greater cooperation with customers and to maintain good business relations with them.
In addition, the investigated problem is the qualitative assessment of a service characterized by a group of parameters whose values, as well as their importance for the evaluation system, are often imprecisely estimated. This imprecision is often the result of the assessor's uncertainty, variability of conditions, etc. Since imprecise data will be employed, the goal of this study is to introduce an acceptable methodology and assessors (functions) for evaluating the quality of service; the assessor should be able to deal with imprecise data. This paper is structured as follows. Section 2 presents the literature review and the need for research. Section 3 presents the description of the model, while section 4 presents the verification and simulation of the model with a discussion. Section 5 derives conclusions and directions for future research.
LITERATURE REVIEW
This is a time when companies cannot rely only on their own inventive and productive abilities [7]. So, nowadays the center of gravity is no longer competition among individual organizations; it has shifted toward competition among supply chains [8,9].
In addition to facing global competition, companies are faced with customers who change their requirements very quickly, and they are also dealing with technological changes that reduce critical reaction times when it comes to competitiveness [10][11][12].
Therefore, special attention should be paid in the first place to the supply chain, the supply partners, and the improvement and acceleration of products and services [7]. These competencies are of particular importance for firms that have identified market changes; such firms should turn towards an integrated supply chain in order to respond to these changes positively and effectively [13,14]. Adequate management of the modern supply chain requires quality inputs, which is further reflected in its full flow. Since the purpose of the supply chain is satisfying user needs and requirements, the essential aim of the modern supply chain is the integration of all activities and processes that bring greater value to the end user. Supply chain integration (SCI) has a positive impact on the performance of companies [15][16][17] and helps firms to reconfigure their resources and capabilities internally and externally [18]. Supply chain integration may be more crucial in the early stages; when that process is completed, a company can focus on SCM practice and competitive capability [19]. Also, supplier integration has a strong and positive impact on schedule attainment and customer satisfaction [20].
According to Nagurney [21], quality is one of the most essential factors for the success of supply chains, but quality of service, according to [22], is still one of the major problems for consumers. Consequently, in order to ensure the continuous improvement of quality of service that leads to customer satisfaction, the cited study investigated the effect of external knowledge and of the knowledge chain on quality of service. Companies should use the chain of knowledge to collect external knowledge from customers, suppliers and competitors, as well as to transform that knowledge in order to improve their quality of service.
The need for research on and improvement of systems for solving users' problems arises from the current situation which companies are faced with, due to the continual increase in the number of users who need IT (information technology) services. According to [23], one of the two approaches for the improvement of business performance is integrated information technology. Pieces of information and their quality have a high impact on the whole supply chain, because, according to [24], poor information quality may lead to organizational losses such as losing customers, missing opportunities, and making incorrect decisions. The significant role and impact of information sharing in supply chains have been extensively studied [25][26][27][28]. Apart from better information sharing, the connectivity among partner firms that enables information integration is crucial for firms to realize customer service performance gains [29].
The main objective of the research is the development of models that can respond to as many user requests as possible. It is a system that will help companies converge towards continuous quality improvement in the delivery of their IT services. The system that has emerged from this study, with the given specification, is a part of the model and is subject to changes and upgrades, which means that it will improve over time. In order to stay competitive, it is important to constantly improve the quality of services and software, as well as to respond to the latest needs faster than is being done now, i.e. to be more agile.
THE MODEL AND METHOD
To assess the parameters of a service, it is advisable to take the arithmetic mean for phenomena with a normal distribution; if that is not the case, then it is often better to take a different assessor, namely aggregation functions, in particular power means [30,31]. The choice of the degree r of the power mean makes the selected aggregation function more or less disjunctive or conjunctive (a higher r gives a more disjunctive form, a lower r a more conjunctive form). In the paper [32], the quality of services was improved by using aggregation functions in the LSP method.
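The conjunctive/disjunctive behaviour of the weighted power mean as r varies can be seen in a short sketch (the scores and weights below are illustrative, not data from the paper):

```python
import numpy as np

# Weighted power mean E(r) = (sum_i w_i * e_i^r)^(1/r).
def power_mean(e, w, r):
    e, w = np.asarray(e, float), np.asarray(w, float)
    return (w @ e ** r) ** (1.0 / r)

e = [0.9, 0.8, 0.3]    # parameter scores in (0, 1]
w = [0.5, 0.3, 0.2]    # weights summing to 1

conj = power_mean(e, w, -5.0)   # conjunctive: the bad score dominates
arith = power_mean(e, w, 1.0)   # plain weighted arithmetic mean
disj = power_mean(e, w, 5.0)    # disjunctive: good scores dominate

print(conj, arith, disj)        # increasing in r
assert conj < arith < disj
```

A lower r penalizes a single bad parameter (conjunctive behaviour), while a higher r rewards a single good one (disjunctive behaviour).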
Taking the power mean of fuzzy arguments instead of crisp ones, because of the inaccuracy of the data being handled, which behave like triangular fuzzy numbers, the evaluation of the overall system is itself a fuzzy number, i.e. an interval of values with different values of the membership function. Defuzzification then provides a better value in comparison with the conventional method. To avoid a harsh conclusion, i.e. a single number as the answer of the quality assessment system, the response is instead an interval of values that is an α-cut of the fuzzy number obtained as output, where α corresponds to the desired degree of the aggregation function [33].
In this chapter, the SSSI method (six-step service improvement method, which uses LSP) is presented. Its main feature is that the power mean with weight coefficients is used for the quality of service, in which the degree changes, if necessary, depending on whether more characteristics of the conjunctive or of the disjunctive form are required. The parameters that appear in this formula, which are characteristics of the system, as well as the weight coefficients, are a matter of judgment of the team of experts. The results are presented below.
The algorithm of the SSSI method for assessing the quality of the software consists of the following steps [32]: 1. Select a group from the category of services (same rank) in the catalog of services; 2. Use the LSP method. The formula for calculating the estimate from the scores e i of the individual criteria is given by [34]: E = (w 1 e 1^r + w 2 e 2^r + ... + w n e n^r)^(1/r), (1) where w i are the weight coefficients and r is a value based on the expectation of the combined impact, taking into account the priority level of the group; r takes values from −∞ (full conjunction) to +∞ (full disjunction); 3. Identification of the criteria comparisons; 4. Computation of the preference (priority) for each service of the selected rank; 5. Analysis of the results and selection of the best ranked in the group, using the UCL (upper control limit) and LCL (lower control limit) [35]; 6. If possible, draw an understandable conclusion and a recommendation to improve the service on the basis of the knowledge gained in the previous steps; if not, continue with another cycle. The adaptive model for supply chain management is a complex system that connects the functional and interfunctional business processes and allows participants in the supply chain to manage processes in real time (see Fig. 1). It consists of: -Model for supply chain management (BSCMS) -Model for managing user requirements (Service Desk) -Model for assessing the quality of services provided (FAM4QS).
The hierarchical structure of the adaptive model for supply chain management (AM4SCM) is shown in Fig. 1, with seven levels of activity and feedback interfaces that enable continuous improvement of AM4SCM. At LEVEL 2, a general model for supply chain management is described by use cases whose activities cover the vast majority of the main features for business; they are presented in the following diagram. At LEVEL 3 the model is adjusted to company requirements by choosing from the previous processes, if they exist, or otherwise creating them. LEVEL 4, or the process level, consists of four steps. The first step includes defining the partners and defining the data and documents needed for the operation, after which the rules on the exchange of information and their availability are set out. At LEVEL 5, the selected processes are implemented and adapted by the company. LEVEL 6 is the connection with the Service Desk system, which is shown in Fig. 2.
Figure 2 Service desk
LEVEL 7 represents the FAM4QS method, which differs from the previous one [34] in that the parameter estimates and the weight coefficients given by the team of experts are presented as triangular fuzzy numbers, due to their imprecision. As a result, the assessment of the system has the form of a fuzzy number, or of the interval given by its α-cut. If desired, the response can be defuzzified by the center-of-gravity method. For a better understanding of the method, which deals with imprecise data, some concepts from the theory of fuzzy sets and properties associated with them should be considered.
The theory of fuzzy sets generalizes traditional theory, so that instead of the characteristic function (which takes a value of 1 for the given element x if x ∈ A, and a value of 0 if x ∉ A) we observe the so-called membership function μA of this set, which determines the grade of membership of the element x to the set A that is no longer just 0 and 1 but it can take any value from the interval [0, 1], i.e. μ A (x) ∈ [0, 1].
In this study, special fuzzy sets will be used -fuzzy numbers and, among them, the so-called triangular fuzzy numbers A = (l, m, r), where l is called the left boundary of the triangular fuzzy number, m is the value which belongs to the core of the fuzzy number (its membership value is 1), and r is the right boundary of the triangular fuzzy number. Depending on the nature of the data, i.e. of our estimates (whether accurate or not), the aforementioned formula (see (1)) is modified for imprecisely estimated e i or imprecisely estimated weights w i , yielding formula (2), in which crisp values are replaced by triangular fuzzy numbers.
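A minimal sketch of the membership function of a triangular fuzzy number A = (l, m, r) just described (an illustrative helper of ours, not code from the paper):

```python
# Membership function of a triangular fuzzy number A = (l, m, r):
# it rises linearly from 0 at l to 1 at the core m, and falls back to
# 0 at r. Assumes l < m < r.
def mu(x, l, m, r):
    if l < x <= m:
        return (x - l) / (m - l)
    if m < x < r:
        return (r - x) / (r - m)
    return 0.0

# A = (0.6, 0.7, 0.8): full membership only at the core 0.7.
assert mu(0.7, 0.6, 0.7, 0.8) == 1.0
assert abs(mu(0.65, 0.6, 0.7, 0.8) - 0.5) < 1e-9
assert mu(0.9, 0.6, 0.7, 0.8) == 0.0
```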
The interval value obtained in the previous step, widened by, for example, ±5% (or ±10%) on the left and right boundary, provides a selection criterion for whether a service belongs to the highest (A) or the lowest (C) rank. Those services that have a core (peak), i.e. an α-cut for α = 1, greater than the right border UCL have the highest rank, and the services whose core is less than the left border LCL have the lowest rank. Services whose core lies between the left boundary LCL and the right boundary UCL are mid-level services (B). The diagram of activities for FAM4QS is shown in Fig. 3. Note: the number r is a real number different from zero and does not have to be the same as the values r j (from the formula for the assessment e j , which is analogous to formula (2)). By changing the value of r (respectively r j ), the characteristics of the disjunctive or conjunctive form for the evaluation of services (parameters) are obtained. By increasing r (r → +∞), disjunctivity grows and conjunctivity decreases; by reducing r (r → −∞), disjunctivity declines and conjunctivity grows. For the assessment of the relevant parameters, the choice of r j and r depends on whether a more disjunctive or a more conjunctive form is required. A characteristic of the conjunctive form is that a bad score of at least one parameter gives a bad score to the whole service, and only good scores of all parameters provide a good assessment of the entire service; for the disjunctive form, a bad score for the entire service results only when all the parameters are evaluated as poor, and the service is rated good if at least one parameter is evaluated as good. The values for r can be found in [36].
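The A/B/C ranking rule just described can be sketched as follows; the control-limit values and cores below match the form of those in the case study of the next section, but the helper itself is our own illustration:

```python
# A/B/C ranking of a service from the core of its fuzzy score, compared
# with the control limits LCL and UCL widened as described above.
def rank(core, lcl, ucl):
    if core > ucl:
        return "A"      # highest rank
    if core < lcl:
        return "C"      # lowest rank
    return "B"          # mid-level service

LCL, UCL = 0.693, 0.811

assert rank(0.824, LCL, UCL) == "A"
assert rank(0.6955, LCL, UCL) == "B"
assert rank(0.6705, LCL, UCL) == "C"
```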
VERIFICATION AND SIMULATION OF THE MODEL
The FAM4QS application has been applied to a system of 17 supply chains in Serbia using the fuzzy method. The first step of FAM4QS implementation is to organize services of the same group. Below, services are grouped according to certain criteria set by the definition. The service grouping, as a first step of identification, was done based on the identified service class group attributes [32]: 1. Technology group is represented by technical attributes that better describe the influence of the applied technology tools on service development and operations. 2. Complexity group represents the observed level of complexity in creating the solution. More tiers in the solution implementation in most cases represent more complexity in service operation. 3. Development process group represents the possibility to lever the influence on the service by the applied development process. Some development processes created a very stable service, but had a problem with a low level of flexibility towards change. 4. Development team group -team experiences, skills, team cohesion, in-house and outsourcing options that affect the ability for quality maintenance of a specific service. 5. Business support domain group relates to the end user profile, number, location, and the type of application that is being used (for example, OLTP, reports, etc.). 6. In this case study we identified the following value domains for the above group attributes: a. For the technology-dependent group attribute TD i , the study identifies two-tier, three-tier and four-tier client-server architecture, Web platform on Open Source, and Web platform on a proprietary (Oracle) platform. Based on these group attribute definitions, each instance of service class S i from the catalog was assigned its values. The parameters to be used for the complete evaluation of services are shown in Tab. 1. Tab. 2 shows the grouping of services of the same rank.
For better coordination of the observed quality of service, it is suggested to define a measurement period that is as long as possible (one year), with all the data collected during this time. The estimates were made using the following criteria for services according to the user requirements, shown as crisp values.
Estimates for the average time of solving the problem are derived according to the following criteria, see Tab. 5. The average time (h) of resolving customer requests for the services (17) over a period of 12 months is shown in Tab. 6.
Figure 4 Graphical representation of service results
In Fig. 5 it is shown that the best service (5) had no problems in the period from the second to the eighth month, and even in the tenth month it functioned smoothly. Service 14, the worst, shows higher oscillations at the beginning compared to the later period. In Fig. 6 it can be observed that service 5 had almost no serious problems in its functioning until the end of the year. Figure 5 Comparative analysis of the best and the worst rated service by number of received user requests by months. Figure 6 Comparative analysis of the best and the worst rated services according to the average time for solving user requirements. Using the core of the resulting fuzzy number (Eq. (4)), each service is ranked against the adopted criteria. So, for example, for SCM 4 the core is (0.653 + 0.688)/2 = 0.6705 < 0.693, so it has rank C; for SCM 8 the core is (0.677 + 0.714)/2 = 0.6955 > 0.693, so it has rank B; for SCM 10 the core is (0.778 + 0.825)/2 = 0.8015 < 0.811, so it has rank B; for SCM 17 the core is (0.797 + 0.851)/2 = 0.824 > 0.811, so it has rank A. From the calculation using FAM4QS, as can be seen in Fig. 5, it can be concluded that the best result for the number of user requests and the average time of solving them, according to the adopted criteria, was achieved by chain 5 (SCM 5), and the worst by chain 14. If chain 14 (SCM 14) is analyzed as the worst performer regarding supply, the reasons are the following: -Analysis and specification of requirements were done badly and are incomplete. -Unavailability of business users for developers.
-Insufficient confidence within the team that develops the application (the programmers), and occasional absences from the team.
-Lack of interaction between the requirement specifications and the end users (the users' influence on the requirements specification is negligible).
-Lack of system dynamics (a low rate of change of the system, or poor system updates). To increase the success rate and reduce the negative trend in SCM 14 and in chains with similar characteristics, the following steps are suggested: -A detailed analysis of requirements and greater flexibility of the model or system (easy and quick adaptation to new requirement specifications).
-Improvements in communication between business users and developers (larger number of direct meetings, more frequent communication by e-mail, Skype, telephone ...).
-Raising the quality of human relations and the working environment, and greater control over absenteeism.
-Establishment of a direct link between end users and service providers. -Increasing the speed and level of system updates.
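The core-based ranking rule used in the worked examples above (the core taken as the midpoint of the defuzzified interval, compared against the adopted thresholds) can be sketched as follows. This is a minimal illustration: the thresholds 0.811 and 0.693 are taken from the SCM examples, and the function names are ours, not part of FAM4QS.

```python
def core(lower, upper):
    """Core of the defuzzified interval: the midpoint of its bounds."""
    return (lower + upper) / 2.0

def rank(lower, upper, a_threshold=0.811, b_threshold=0.693):
    """Assign a quality rank (A/B/C) by comparing the core to thresholds."""
    c = core(lower, upper)
    if c > a_threshold:
        return "A"
    if c > b_threshold:
        return "B"
    return "C"

# The four worked examples from the case study:
for scm, (lo, hi) in {4: (0.653, 0.688), 8: (0.677, 0.714),
                      10: (0.778, 0.825), 17: (0.797, 0.851)}.items():
    print(f"SCM {scm}: core={core(lo, hi):.4f}, rank={rank(lo, hi)}")
```

Running this reproduces the ranks C, B, B and A assigned to SCM 4, 8, 10 and 17 in the text.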
CONCLUSION
Within AM4SCM, a mathematical model was defined for evaluating the quality of the provided service, which solves the problem of pre-existing models with imprecise parameter estimates. Assessments by a team of experts were used for the weight coefficients and the other parameters relevant to the system. In the six-step method for improving service quality that was upgraded here, the arithmetic mean of the experts' estimates is usually taken as the assessment of a weight element.
If the phenomenon being evaluated has a normal distribution, then the mean of the assessments is a good parameter estimate; if not, a different estimate is often better. The distribution of the expert team's assessments was therefore treated as a fuzzy number, and for ease of calculation as an asymmetric triangular fuzzy number. This made it possible to obtain, instead of a single number, a fuzzy number for the whole system, i.e. an interval of values with different values of the membership function. Defuzzification then provided a better value than the standard procedure. To avoid a rigid conclusion, i.e. answering the system quality assessment with a single number, an interval value is taken as the response, namely the alpha-cut of the output fuzzy number, where alpha is the desired grade of membership.
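For an asymmetric triangular fuzzy number (a, b, c), with support [a, c] and core b, the alpha-cut described above is computed directly from the two slopes of the membership function. This is a generic sketch, not code from the paper:

```python
def alpha_cut(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c), 0 <= alpha <= 1.

    Returns the interval of values whose membership grade is >= alpha.
    At alpha = 1 the cut collapses to the core b; at alpha = 0 it is
    the whole support [a, c].
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    lower = a + alpha * (b - a)   # walk up the left slope
    upper = c - alpha * (c - b)   # walk down the right slope
    return lower, upper

# e.g. an asymmetric estimate centred on 0.7:
print(alpha_cut(0.6, 0.7, 0.9, 0.5))  # interval at membership grade 0.5
```

The returned interval, rather than a single defuzzified number, is what gives the "interval value" response described in the conclusion.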
The fuzzy aggregation functions used for assessing the quality of supply chains are degree-based aggregations, where different degree values were taken, conditioned by the different nature of the parameters. These differences yield more or less disjunctive or conjunctive forms of the selected aggregation functions. By applying our method to 17 selected homogeneous supply chains, the analysis of the best and the worst chain led to the conclusion that it is necessary to analyze the requirements and increase the flexibility of the model, improve communication between business users and developers, raise the quality of interpersonal relationships, exercise control over absenteeism, establish a direct connection between end users and service providers, and increase the speed and level of system updates.
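A degree-based aggregation of the kind described, where the chosen degree pushes the operator toward a more conjunctive (min-like) or disjunctive (max-like) form, can be illustrated with a weighted power mean. The degree values below are illustrative only, not those used in FAM4QS:

```python
def weighted_power_mean(values, weights, p):
    """Weighted power mean of degree p (weights must sum to 1).

    Large positive p approaches max (disjunctive behaviour);
    large negative p approaches min (conjunctive behaviour);
    p = 1 is the ordinary weighted arithmetic mean.
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    if p == 0:  # limiting case: weighted geometric mean
        prod = 1.0
        for v, w in zip(values, weights):
            prod *= v ** w
        return prod
    return sum(w * v ** p for v, w in zip(values, weights)) ** (1.0 / p)

scores, w = [0.4, 0.9], [0.5, 0.5]
for p in (-10, 0, 1, 10):
    print(p, round(weighted_power_mean(scores, w, p), 4))
```

Sweeping p shows how the same two scores aggregate anywhere between nearly the worse score and nearly the better one, which is the lever the method uses to match the nature of each parameter.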
Working with large systems facilitates and accelerates the process of finding new methods, such as combining FAM4QS with neural networks.
Like the traditional method, this method can also be supported by software packages, so that the user automatically receives, on the basis of the given criteria, an assessment of service quality to facilitate further decision-making. The software we developed is written in C# and allows commercial use of FAM4QS. It will be developed further, i.e. for large systems, where FAM4QS will be combined with neural networks.
Cross-correlation of WMAP7 and the WISE Full Data Release
We measured the cross-correlation of the Wilkinson Microwave Anisotropy Probe (WMAP) 7 year temperature map and the full sky data release of the Wide-field Infrared Survey Explorer (WISE) galaxy map. Using careful mapmaking and masking techniques we find a positive cross-correlation signal. The results are fully consistent with a Lambda-CDM Universe, although not statistically significant. Our findings are robust against changing the galactic latitude cut from |b|>10 to |b|>20 and no color dependence was detected when we used WMAP Q, V or W maps. We confirm higher significance correlations found in the preliminary data release. The change in significance is consistent with cosmic variance.
INTRODUCTION
Cosmological supernovae measurements and Cosmic Microwave Background (CMB) fluctuations support cosmological models in which the cosmic energy density is dominated by Dark Energy (DE) at the present epoch (Jarosik et al. 2011;Riess et al. 1998).
In such theories the current accelerating expansion and the decay of gravitational potentials are predicted. Therefore, the presence of DE is manifested in both geometrical and dynamical forms.
Dark Energy comes to dominate the energy density at late times, z < 2, and so the primordial fluctuations in the CMB alone do not provide a sensitive probe. However, DE may leave a signal in the secondary anisotropies that are imprinted on the microwave background radiation. The Integrated Sachs-Wolfe effect (ISW) (Sachs & Wolfe 1967;Rees & Sciama 1968) is an example of a secondary anisotropy: CMB photons passing through a changing gravitational potential become slightly hotter or colder. In a flat and matter-dominated Universe the potential is constant on large scales thus gravitational blueshifts and redshifts cancel along the photon path. However, in a Universe dominated by DE there is a net energy difference between entering and leaving a potential well due to the decay. Thus, the detection of the linear ISW effect provides direct evidence for dark energy in the ΛCDM model. Furthermore, alternative gravity models provide predictions for the ISW effect and may be directly tested with ISW observations (Giannantonio et al. 2010).
The ISW signal may be detected through cross-correlation of large-scale structure surveys with the CMB temperature maps. The correlation is weak, generally less than a 1 µK signal is expected, orders of magnitude below the primary fluctuations. Furthermore, the ISW effect is strongest on large angular scales where cosmic variance is also large, making the measurement even more cumbersome.
Several measurements have been performed to uncover the ISW signal: positive cross-correlations were measured using galaxy data from the Sloan Digital Sky Survey (SDSS) and WMAP (Fosalba et al. 2003; Padmanabhan et al. 2005; Granett et al. 2008, 2009; Pápai et al. 2011). Other successful attempts were Fosalba & Gaztañaga (2004) based on APM galaxies, Nolta et al. (2004) and Raccanelli et al. (2008) using radio data, and Boughn & Crittenden (2004a,b) in which the hard X-ray background was investigated. Besides, Afshordi et al. (2004), Rassat et al. (2007) and Francis & Peacock (2010) used infrared galaxy samples to characterize the ISW signal. The typical ISW significance in the papers above is around 2-3σ. Comprehensive studies using combinations of data sets were carried out by Ho et al. (2008) and Giannantonio et al. (2008, 2012). The Wide-field Infrared Survey Explorer (WISE) all-sky survey is an attractive dataset for ISW studies. The survey effectively probes low redshift z < 0.3 with a high source density. Using the preliminary data release (PDR) covering 10,000 square degrees, Goto et al. (2012) cross-correlated a WISE galaxy sample with the Cosmic Microwave Background, finding a 3σ detection, although with three times the amplitude expected in ΛCDM. In this paper, we reexamine this finding using the full-sky data release (FDR) of the WISE survey and the WMAP 7-year dataset.
The structure of this paper is as follows. In Section 2 we describe the data we used in particular. Section 3 describes our methods including the theoretical expectations, simulations and measurements. Finally, in Section 4 the statistical significances are presented and systematic effects are discussed.
CMB DATA AND GALAXY MAP
We used the best achievable versions of the CMB data products and focused on the reliability of our new galaxy sample. In this section we describe our map-making and masking procedures.
CMB map
The 7-year WMAP temperature data were downloaded from the LAMBDA website 1 (Jarosik et al. 2011). WMAP data are affected by noise and by contamination both from point sources and from the Milky Way. Of these, the Q, V and W maps have the least galactic contamination. We used the foreground-reduced version of these maps, and the CMB Extended Temperature Mask was chosen to avoid contamination. Using HEALPIX (Górski et al. 2005), NSIDE=128 repixelized versions of the maps were created. Galactic foregrounds and known point sources are thus reliably excluded, and 71% of the sky is unmasked.
WISE galaxy map
The density map of the galaxies was prepared using the full data release of the WISE project (Wright et al. 2010). The WISE satellite surveyed the sky at four wavelengths: 3.4, 4.6, 12 and 22 µm. We used the different bands to separate stars from galaxies using color-color plots. Following Goto et al. (2012) we select sources to a flux limit of W1 < 15.2 mag to obtain a uniform dataset.
According to Goto et al. (2012) the majority of stars near the galactic plane have a W3.4 − W4.6 ≈ 0.2 mag color. Moreover, it was found that a W4.6 − W12 > 2.9 mag selection reduced the stellar contamination. We confirm these findings in the FDR and followed the same procedure for star-galaxy separation.
Our galaxy sample exhibits stripe-like overdensities on the map in several locations. While Goto et al. (2012) applied handmade cutouts in their mask to exclude regions with unusually high number counts, we understand that the stripe-like features originate from the observational strategy of WISE and the position of the Moon. We realized that the moon-contamination flag may be used to properly mask these regions. We excluded pixels in which the 'moonlev' flag is higher than 3 in at least one of the bands, meaning that the fraction of image frames affected by moon contamination is higher than 30%. Masking regions based upon the moon-contamination flag effectively removes the stripe pattern. We cannot address any further residual effects of moon contamination outside our mask area. We have found that with a more conservative magnitude limit of W1 < 14.9 the overdensities along stripes are reduced in width but do not disappear. Sources in WISE may also be contaminated by artefacts, including the halos of bright stars, ghost images and diffraction spikes, and special data-quality flags exist to handle these problems. If the dataset is filtered using these flags we possibly lose real galaxies; however, if we do not use flags to create a conservative catalog, then stars or galaxies with unreliable parameters can appear in the data. Goto et al. (2012) used additional pixels in their mask to exclude regions where the abundance of these unreliable objects is high, but did not filter the whole galaxy sample. We investigated both cases and found this choice to be important, especially on large scales.
We find a gradient in the galaxy density with Galactic latitude with fewer galaxies near the Galactic plane. An empirical correction was developed by Goto et al. (2012) in which a mean correction is computed in galactic latitude bins to artificially 'flatten' the distribution at |b| < 20.
We attribute the gradient to stars near the Galactic plane masking background galaxies. The problem is made more severe by the broad point spread function of WISE (6-12 arcsec). We use the Tycho2 star catalogue, which reaches a depth of V < 13 mag (Høg et al. 2000), to measure the survey area lost around each star. We calibrated a mean relation between V magnitude and star halo radius R for WISE, R = 9.52 − 0.74V, with R in arcminutes. Any detected source within this radius of a Tycho star is removed. We then construct a map of the lost area by summing the area attributed to stars in each Healpix pixel. This map is then used to normalise the galaxy counts. Figure 2 shows the result of this correction.
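The star-halo correction can be sketched as follows. The relation R = 9.52 − 0.74 V (R in arcminutes) is the one calibrated in the text; the catalogue layout and the small-angle separation treatment are simplifying assumptions of this sketch:

```python
import math

def halo_radius_arcmin(v_mag):
    """Mean WISE halo radius around a Tycho2 star of magnitude V.

    Uses the calibrated relation R = 9.52 - 0.74 V (arcminutes),
    clipped at zero for faint stars where the relation goes negative.
    """
    return max(9.52 - 0.74 * v_mag, 0.0)

def angular_sep_arcmin(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcminutes (inputs in degrees)."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return 60.0 * math.hypot(dra, ddec)

def mask_sources(sources, stars):
    """Drop sources that fall inside any star halo.

    sources: list of (ra, dec); stars: list of (ra, dec, V).
    """
    kept = []
    for ra, dec in sources:
        if all(angular_sep_arcmin(ra, dec, sra, sdec) > halo_radius_arcmin(v)
               for sra, sdec, v in stars):
            kept.append((ra, dec))
    return kept

stars = [(150.0, 2.0, 8.0)]              # V = 8 -> R = 3.6 arcmin
sources = [(150.0, 2.01), (150.0, 2.2)]  # 0.6' and 12' from the star
print(mask_sources(sources, stars))      # only the distant source survives
```

A production version would additionally accumulate the lost area per Healpix pixel to renormalise the counts, as described in the text.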
However, the density gradient does not disappear entirely, so only a modest correction is possible with our method. Interestingly, we find that the gradient is higher when flagged sources are not included in the sample. The exact source of the gradient remains unexplained, but we show below that this effect does not compromise our measurements; our findings are robust.
As described, regions nearby the plane of the Milky Way are potentially contaminated, and we consider the most appropriate solution to use |b| > 20 regions. To perform tests with only the preliminary sky coverage area a mask was created using the area covered by the preliminary survey.
Redshift distribution
In order to calculate a theoretical ISW expectation, redshift information is needed. Since WISE is a photometric survey without spectroscopy, the selected galaxies were cross-identified with sources from the GAMA (Galaxy and Mass Assembly, Driver et al. (2011)) sample, which has spectroscopic redshifts for ∼200,000 galaxies. Using the overlapping part of the two surveys we found a pair for 82% of the galaxies with a 3" matching radius. We estimated an accidental matching rate of 0.1% for this analysis using random points with the WISE density. The matched sample has a median redshift z ≈ 0.15. The obtained approximate redshift distribution provides a basis to calculate a theoretical cross-power spectrum in this redshift range.
RESULTS
In this section we discuss the results using the CMB and galaxy datasets. We also elaborate on the most important theoretical and simulated considerations related to ISW detection and further analysis.
WISE-WMAP cross-correlation
We calculate the cross power spectrum using a fast quadratic estimator, SpICE (Spatially Inhomogeneous Correlation Estimator; Szapudi et al. 2005). The individual band powers are binned logarithmically with boundaries l = 6, 8, 11, 16, 22, 31, 44, 61 and 87; that is, the first band covers l = 6, 7, and so on. With this choice we avoid the lowest l range, where cosmic variance is largest, and it is easier to compare the results to Goto et al. (2012), who used the same bins. Our measurement is shown in Figure 3.
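The logarithmic band assignment with the boundaries quoted above can be sketched as follows (band 0 covering l = 6-7, band 1 covering l = 8-10, and so on); the function name is ours:

```python
BOUNDS = [6, 8, 11, 16, 22, 31, 44, 61, 87]

def band_index(l):
    """Return the band-power bin for multipole l, or None if outside."""
    if l < BOUNDS[0] or l >= BOUNDS[-1]:
        return None
    for i in range(len(BOUNDS) - 1):
        if BOUNDS[i] <= l < BOUNDS[i + 1]:
            return i
    return None

print([band_index(l) for l in (6, 7, 8, 10, 11, 86, 87)])
```

Band powers are then averages of the estimated C_l over all multipoles sharing a bin index.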
Theory
We derive the expected correlations and galaxy bias using WMAP7 best-fit ΛCDM cosmological parameters (Jarosik et al. 2011). Following Francis & Peacock (2010), a linear bias relation is assumed to couple galaxy and matter overdensities, δg = b δm. The two-dimensional projection of the 3D galaxy auto-correlation is given by C_l^gg = (2/π) b_g^2 ∫ k^2 P(k) [∫ φ(r) j_l(kr) r^2 dr]^2 dk (1), where r = r(z) is the comoving coordinate, φ(r) is the selection function (proportional to dN/dz dz/dV) with the normalization ∫ φ(r) r^2 dr = 1, and j_l is a spherical Bessel function. An independent determination of b_g is not possible in linear theory, because σ8 acts to renormalize the power spectrum, C_l ∝ (b σ8)^2. Thus we fit only for b_g and keep σ8 fixed. CosmoPy and CAMB were used to generate nonlinear matter power spectra with Halofit (Smith et al. 2003) at the median redshift of our galaxy sample. We measure the galaxy-galaxy power spectrum with SpICE. The measurement is affected by Poissonian shot noise of the form 1/N, where N is the mean galaxy count per steradian. In practice its impact is small, less than 10% at the maximum l we used and less significant at the larger scales where we expect to measure the ISW. Nevertheless, we subtracted the noise, and the amplitude of the theoretical model curve was fitted over the interval 6 < l < 100, i.e. angular scales down to ∼2 • . Our result is b_g = 1.04 ± 0.05 (with σ8 held fixed).
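The amplitude fit used here can be illustrated with a one-parameter least-squares estimate: for measured band powers C_d with errors σ and a fixed-shape template C_th, the best-fit amplitude is A = Σ(C_d C_th/σ²) / Σ(C_th²/σ²), with uncertainty [Σ(C_th²/σ²)]^(-1/2). The numbers below are synthetic, for illustration only:

```python
def fit_amplitude(c_data, c_theory, sigma):
    """Least-squares amplitude of a fixed template given diagonal errors."""
    num = sum(d * t / s**2 for d, t, s in zip(c_data, c_theory, sigma))
    den = sum(t * t / s**2 for t, s in zip(c_theory, sigma))
    amp = num / den
    amp_err = den ** -0.5   # 1-sigma uncertainty on the amplitude
    return amp, amp_err

# synthetic check: data that are exactly 1.1 x the template, uniform errors
template = [10.0, 6.0, 3.0, 1.5]
data = [1.1 * t for t in template]
print(fit_amplitude(data, template, [0.5] * 4))
```

A full treatment would replace the diagonal 1/σ² weights with the inverse of the band-power covariance matrix estimated from the simulations.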
To bring the b_g parameter into use, consider now the expression for the theoretical ISW signal. The cross-spectrum of a galaxy map and the CMB is given by C_l^gT = (2/π) 3 Ω_m (H_0/c)^2 T_CMB b_g ∫ P(k) [∫ φ(r) j_l(kr) r^2 dr] [∫ (d/dz)[D_1(z)(1+z)] j_l(kr(z)) dz] dk (2), where D_1(z) is the linear growth factor; the numerical result of this expression depends on the cosmology (Cooray 2002).
Simulations
We simulated 1000 random CMB skies with ΛCDM cosmological parameters using Healpix synfast to cross-correlate with our WISE galaxy density map. The power spectrum was calculated and binned into the 8 spectral bins given above. The covariance matrix estimated from these measurements is shown in Figure 4. Neighboring bins are anti-correlated, typically by 10%. The diagonal elements were used to calculate the error bars shown in Figure 3.
SIGNIFICANCE TESTS
Again we follow Francis & Peacock (2010), now to determine the significance of our ISW detection. Consistency of our results was investigated with three hypotheses: a null detection of the ISW, the regular ΛCDM model prediction, and finally a best-fit theoretical curve. Our statistics are based on the amplitude fit; we set the amplitude of the ΛCDM theoretical curve to 1.0, and to 0.0 in the zero-ISW case.
Statistical tools
We evaluate a χ² statistic for each hypothesis: χ² = Σ_ij d_i (C⁻¹)_ij d_j (3), where C is the covariance matrix and d_i = (C_i^gT,data − C_i^gT,hypo). C^gT,hypo can be given by Equation 2 assuming various models, or it is zero in the null-ISW case. The index i labels the bins we use in the cross-spectrum. Moving one step forward, we define the likelihood of a hypothesis: L = (2π)^(−N/2) det(C)^(−1/2) exp(−χ²/2) (4), where N is the number of data points and χ² is constructed from the d vector and the C matrix as above. Our tool to describe the detailed statistical properties of our tests is ∆χ² = −2 ln(L1/L2), where the ratio of the likelihoods of two different hypotheses is taken. In general, ∆χ² > 3 is strong evidence for a significant difference. Table 2. Significance properties of our results. The χ² values in the table are less than theoretically expected, indicating that the error bars are possibly overestimated. The same was reported by Francis & Peacock and Rassat et al. in their work. We performed Monte Carlo runs with 1000 and 3000 trials, but the covariance was robust. Moreover, the analytic estimate of the covariances and errors (Cabré et al. 2007)
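The χ² and likelihood-ratio machinery above can be sketched numerically (NumPy is assumed; the toy data vector and covariance below are illustrative, not the paper's values):

```python
import numpy as np

def chi2(c_data, c_hypo, cov):
    """chi^2 = d^T C^-1 d with d = C_gT_data - C_gT_hypo."""
    d = np.asarray(c_data) - np.asarray(c_hypo)
    return float(d @ np.linalg.solve(np.asarray(cov), d))

def log_likelihood(c_data, c_hypo, cov):
    """Gaussian log-likelihood: -0.5*(N ln 2pi + ln det C + chi^2)."""
    cov = np.asarray(cov)
    n = len(c_data)
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet
                   + chi2(c_data, c_hypo, cov))

# toy example: two band powers, diagonal covariance
data = [0.8, 0.5]
cov = [[0.25, 0.0], [0.0, 0.25]]
null_hypo = [0.0, 0.0]   # no ISW
model_hypo = [0.7, 0.4]  # a model prediction
delta_chi2 = -2 * (log_likelihood(data, null_hypo, cov)
                   - log_likelihood(data, model_hypo, cov))
print(round(delta_chi2, 3))
```

With a shared covariance the likelihood ratio reduces to the difference of the two χ² values, which is why ∆χ² is the quoted test statistic.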
Systematic effects
Although we have taken care in source selection and masking, we must address any residual systematic effects that may exist. Galactic dust or emission detected by WMAP could contribute to a correlation with WISE due to dust attenuation or the gradient in source density measured with Galactic latitude. However, we performed all the significance tests using the Q, V and W foreground-reduced CMB maps, and the maximum relative difference in a given spectral bin was 1.2%. This fact led us to the conclusion that there is no significant color dependence, and effects from Galactic dust or emission must be minor. We also checked the results using the WMAP team's Temperature Mask or Extended Temperature Mask, but no meaningful difference was found.
On the other hand, we investigated several systematic effects related to our galaxy sample. Firstly, we analyzed different galactic latitude cuts. Our finding is robust, even if not significant; the results are summarized in Table 2.
Next we considered the differences in the detection significance due to the initial mapmaking. Applying our tests to a map without flagged objects, we found that the amplitude varies between 0.5 and 1.0 regardless of the star-mask correction technique.
We repeated our analysis with a W1 < 14.9 mag limit to test the effects of faint sources. With this sample, originating from a slightly different redshift distribution, we found an increase in the ISW signal, but the error bars were also larger, so the significance remained ∼1.0σ.
We also extended our cross-correlation analysis to the 2 ≤ l ≤ 5 multipoles, where a very weak positive cross-correlation was measured with very large error bars. The results increased the significance only slightly; however, they were sensitive to the galactic cut.
In light of the robustness of our results against different galactic cuts, we argue that the stellar contamination is low, or at least uniform, in our galaxy sample and does not affect the measurements. The upper limit is 18% from the WISE-GAMA matching, but a similar deficit of optical pairs was also reported using SDSS and WISE (Yan et al. 2012). These facts all indicate that the different selection criteria of the infrared and optical bands, rather than confusion with stars, are responsible for the missing counterparts. In summary, the stellar contamination is probably smaller than the unpaired fraction of our galaxy sample.
DISCUSSION AND CONCLUSIONS
Our principal aim was to produce the final ISW measurement using WISE; we compare our results with Goto et al. (2012), who used the PDR. We repeated our measurements with the WISE preliminary data and largely reproduced the individual C_l^gT band powers found by Goto et al. (2012), although we measured a lower significance, except when using alternative binning. We do not expect perfect agreement, given that the analysis was performed from the ground up. With the same bins, our best-fit amplitude for the PDR was 2.5 ± 1.2, i.e. a 2.1σ detection. This result is consistent with our 1.9σ finding on the preliminary part of the sky but using the new data. Using the full sky we measure an ISW significance of ∼1.0σ. The change is fully consistent with cosmic variance, as illustrated by the light gray error bars in Figure 3.
With our enhanced mapmaking and better understanding of the WISE data we suppressed artefacts. Our mask is based entirely on the properties of the WISE object flags, and many systematics were revealed. However, the signal decreased despite the improvements in our analysis methods.
While some recent studies especially Goto et al. (2012) raised the possibility that the ISW correlations might be higher than ΛCDM predictions, we conclude that the signal we found is consistent with ΛCDM and previous measurements (Rassat et al. 2007;Francis & Peacock 2010). Our analysis highlighted that higher ISW amplitude measurements on certain parts of the sky can be due to cosmic variance.
Taking Advantage of Waste Heat Resource from Vinasses for Anaerobic Co-digestion of Waste Activated Sludge under the Thermophilic Condition: Energy Balance and Kinetic Analysis
Vinasses are not only an easily biodegradable substrate but also a heat energy resource. In this study, the energy balance and kinetic model of anaerobic co-digestion of waste activated sludge (WAS) with vinasses have been investigated in semicontinuous reactor experiments at 55 °C. Herein, the maximum energy balance value, the ratio of energy to mass, and the kinetic constants μmax and K of anaerobic digestion of WAS were −33.44 kJ·day–1, −5.72 kJ·VS–1·day–1, and 0.0894 day–1 and 0.7294, respectively, at an organic loading rate (OLR) of 1.17 VS·L–3·day–1; when the mixture ratio of WAS to vinasses was 2:1 (dry VS) for co-digestion, the maximum energy balance value, the maximum ratio of energy to mass, and the kinetic constants μmax and K of anaerobic co-digestion of WAS and vinasses were +39.73 kJ·day–1, 8.1 kJ·VS–1·day–1, and 0.2619 day–1 and 1.9583, respectively, at an OLR of 1.73 VS·L–3·day–1. The positive energy balance was obtained for two reasons: one is for making the best use of the high-temperature heat energy resource of vinasses and the other is for enhancing the amount of biogas yield. The bottleneck of the negative energy balance of thermophilic digestion of WAS can be broken by anaerobic co-digestion of WAS and vinasses. The results indicate a promising future in the application of anaerobic thermophilic co-digestion of WAS and vinasses. Methane production from digestion and co-digestion was also predicted by the Chen–Hashimoto kinetic model.
INTRODUCTION
In the last few years, the number of municipal wastewater treatment plants in China has significantly increased, resulting in the production of large quantities of waste activated sludge (WAS) that must undergo stabilization. It has been reported that approximately 6.03 million tons of WAS (dry weight) per year are produced in China, 1 increasing public concern about the risks to the environment and human health caused by pathogens, heavy metals, or persistent organic pollutants present in WAS. 2 Anaerobic digestion has been and continues to be one of the most widely used processes for WAS stabilization, since it produces methane, which can be used as a renewable energy resource. 2 However, the conventional anaerobic digestion processes used in most municipal treatment plants in China still suffer from unreliable performance, with low treatment efficiency, high costs, and a negative energy balance 3 due to the poor hydrolysis caused by rigid cell walls and substantially secreted extracellular biopolymers. 4 For example, leading up to 2010, a total of 50 WWTPs (wastewater treatment plants) in China were designed with an anaerobic digestion system, yet around 80% of them were poorly operated, with low volumetric biogas production rates. 5 Moreover, the existing anaerobic digesters operated at wastewater treatment plants are oversized and underloaded. 6 Co-digestion of WAS with other kinds of wastes has been proposed extensively 4,6 to solve the above-mentioned problems and promote bio-energy recovery, because co-digestion has unique benefits over traditional anaerobic digestion. It balances the carbon to nitrogen (C/N) ratio and nutrients, 7 increases pH buffering capacity, 8 decreases ammonia toxicity and the accumulation of VFAs, 9 dilutes potentially toxic matter, and increases the biogas yield. 10 Temperature, an important factor, directly affects the dynamic state of the microorganisms.
Anaerobic digestion can take place at a mesophilic range of temperatures (30−38°C) or at a thermophilic range (50−57°C), and each of these biological processes has its own merits and demerits. 11 Traditionally, mesophilic (37°C) anaerobic digestion is more widely used than thermophilic digestion (55°C) due to better process stability and lower energy demand. 7 Nevertheless, several studies have reported the attractive advantages of thermophilic processes: operation at reduced hydraulic retention times with higher organic matter removal and higher methane yields, 12,13 while ensuring complete hygienization. 14 Moreover, several studies have shown that the thermophilic temperature range should be preferred for the co-digestion process because of its superior performance compared to the mesophilic process. 15,16 However, the main problem in thermophilic anaerobic digestion of WAS is the high heating requirement for sustaining the process compared with mesophilic digestion. 17 In other words, the thermophilic reactor needs a somewhat higher heat input to maintain the thermophilic temperature range; hence, if an external waste heat resource can be utilized to maintain the reactor temperature, a better energy balance can be achieved in anaerobic thermophilic digestion of WAS, realizing a waste-to-energy strategy.
Ethanol production for biofuel, industrial use, pharmaceutical use, and alcoholic beverages has increased in recent years in China, especially for biofuel: bioethanol-blended petrol accounted for 20% of total petrol consumption, and according to the Mid- and Long-term Development Plan for Renewable Energy, the consumption of biodiesel in China will reach 2.0 million tons in 2020. 18 In general, ethanol production generates between 9 and 14 L of wastewater, known as vinasses. Vinasses have a pH between 3.5 and 5, a dark brown color, and a high chemical oxygen demand (COD), ranging between 50 and 150 g·L−1, and are discharged at temperatures from 70 to 80°C. 19 Vinasses have been used for irrigation and fertilization due to their high nutrient and organic matter content; though many different technologies exist for treating vinasses, they must initially be treated by anaerobic processes because of their high organic loads. When vinasses are treated by anaerobic digestion, usually at 55°C, the high-temperature vinasses require cooling before they are fed into the anaerobic digester, which wastes energy. Nanyang Tianguan Group Co., Ltd. (Henan province, China) has not only a capacity to produce 30 × 10⁴ m³ of bioethanol per year but also a capacity to treat 10 × 10⁴ m³ of municipal sewage wastewater per day, carried out under a build−operate−transfer (BOT) model at the same location. Hence, there are large amounts of WAS and vinasses (at high temperature) at the same company, and both need to be treated anaerobically. At present, vinasses are cooled to about 55°C before being fed into the thermophilic upflow anaerobic sludge bed (UASB) reactor, meaning that this heat energy has been wasted for many years.
Therefore, we can take advantage of the heat energy of vinasses to sustain anaerobic thermophilic digestion of WAS; at the same time, vinasses can serve as a co-substrate for anaerobic thermophilic co-digestion with WAS. In this way the waste heat energy is utilized, and the advantages of anaerobic thermophilic digestion can be realized accordingly.
Generally speaking, the energy balance is a critical issue for assessing the feasibility of anaerobic digestion of WAS. If the net energy balance (energy output minus energy input) is positive, the technique has advantages in practical application; otherwise, the technique has drawbacks in practical application. 20 However, to our knowledge, there are no studies evaluating the energy balance obtained by taking advantage of the waste heat resource from vinasses for anaerobic thermophilic co-digestion of WAS. This research was conducted to address this limitation.
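As a back-of-the-envelope sketch of this energy bookkeeping (all numbers below are illustrative, not from the study): the recoverable sensible heat of hot vinasses is Q = m·c_p·ΔT, and the net balance is the recovered energy (here the CHP heat plus the waste-heat credit) minus the digester heating demand:

```python
def sensible_heat_kj(mass_kg, cp_kj_per_kg_k, t_in_c, t_out_c):
    """Sensible heat released cooling a stream from t_in to t_out (kJ)."""
    return mass_kg * cp_kj_per_kg_k * (t_in_c - t_out_c)

def net_energy_balance_kj(chp_heat_kj, vinasse_heat_kj, heating_demand_kj):
    """Positive value: the process yields more energy than it consumes."""
    return chp_heat_kj + vinasse_heat_kj - heating_demand_kj

# cooling 10 kg of vinasses from 75 C down to the 55 C digester temperature;
# c_p of water (~4.18 kJ/kg/K) is taken as an approximation
credit = sensible_heat_kj(10.0, 4.18, 75.0, 55.0)
print(credit)
print(net_energy_balance_kj(500.0, credit, 900.0))
```

With the waste-heat credit included, a heating demand that would otherwise drive the balance negative can be offset, which is the mechanism the study exploits.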
The present study investigated the performance of anaerobic thermophilic digestion of WAS and of co-digestion of WAS and vinasses. Moreover, the energy balance was assessed under the assumption that the waste heat of the vinasses is recovered and that the biogas is utilized in a combined heat and power (CHP) unit; in addition, a kinetic evaluation was carried out using the Chen−Hashimoto methane production model. 21
MATERIALS AND METHODS
2.1. Materials. The WAS used in this experiment was taken from the returned residual sludge of the Nanyang Tianguan Group Co., Ltd. (China) municipal sewage treatment plant, which treats 20 × 10⁴ tons of municipal sewage daily by the activated sludge process. The residual sludge was allowed to settle naturally by gravity for 48 h, and the sludge, with the supernatant removed, was stored at 4°C until use. The vinasses were also obtained from Nanyang Tianguan Group Co., Ltd. After solid−liquid separation of the distiller's grains, the wastewater was collected and allowed to stand for 24 h; the sediment was then discarded and the supernatant reserved for later use. The seed sludge used as an inoculum for the reactors was collected from an anaerobic thermophilic (55 ± 1°C) digester treating food wastewater. The elemental characteristics of the materials are shown in Table 1.
2.2. Experimental Methodology. Three laboratory-scale digesters (AD1, AD2, and AD3), each with a total volume of 6 L and a working volume of 5 L, were operated at a controlled temperature of 55 ± 1°C in a water bath. Each digester was fitted with a stainless steel stirrer, which was powered by a motor and stirred continuously at 80 rpm, equipped with a thermometer and a gas collection system. Some operational parameters of the semicontinuous system are provided in Table 2.
The feeding mode and digestion performance results are shown in Table 2. At the beginning of the experiment, 5 L of seed sludge was added to each of the three anaerobic digestion reactors AD1, AD2, and AD3, and anaerobic digestion was then maintained at 55 ± 1°C with the stirring speed set at 80 rpm. From the second day, the substrates were added using a peristaltic pump: AD1, AD2, and AD3 were fed with 300 mL of WAS, 400 mL of a WAS/vinasses mixture (2:1, dry VS), and 500 mL of a WAS/vinasses mixture (1:1, dry VS), respectively. Feeding followed effluent discharge; that is, 300, 400, and 500 mL of the reactor contents in AD1, AD2, and AD3, respectively, were replaced with fresh substrate at each feeding.
The anaerobic digestion experiment underwent four feeding modes: daily feeding (first mode), feeding every 2 days (second mode), every 3 days (third mode), and every 4 days (fourth mode). After one mode was completed, the next was started; each feeding mode therefore had its own corresponding SRT and OLR. In each feeding mode, three SRTs were run continuously until the system reached a stable state; during the fourth SRT, the properties of the anaerobic sludge were determined six consecutive times and the average value was taken. Gas production was measured on-line, and the pH values of all of the anaerobically digested sludge were determined. When the measurements for one feeding mode were completed, the next feeding mode was started. Reactors were operated in triplicate for each condition, and the results were calculated as the average of the three replicate reactors.
2.3. Analytical Techniques. The following parameters were measured for each process: biogas production (wet-tip gas meter), pH (pH-3C acidity meter), and volatile fatty acids (VFAs, HP 6890/FID Chromatographer). Total solid (TS) and volatile solid (VS) were measured according to the methods for monitoring and analysis of water and wastewater. 22 Analyses of all of the above-mentioned parameters were performed in triplicate.
2.4. Statistical Analysis. Analysis of the variance was used to evaluate the effect on the investigated parameters, and the test data were tested for significant differences using a 95% least significant difference.
RESULTS AND DISCUSSION
3.1. Biogas Production at Different SRTs. The accumulated biogas yield and daily biogas yield in AD1, AD2, and AD3 under different conditions are summarized in Table 2. A similar trend was observed in each digester: the accumulated biogas yield increased as the SRT was increased. The accumulated biogas production increased from 1.2 to 2.2 L as the SRT was increased from 16.7 to 66.8 days in AD1, from 4.7 to 8.2 L as the SRT was increased from 12.5 to 50 days in AD2, and from 5.6 to 10.6 L as the SRT was increased from 10 to 40 days in AD3. The C/N ratio of WAS is relatively low, only 5.6, while that of the vinasses is relatively high, 17.8 (Table 1). This explains why gas production from WAS alone is low and why gas production increases after the addition of vinasses, consistent with the results of other studies. 6,7,10 With respect to biogas production, therefore, anaerobic co-digestion of WAS and vinasses is superior to anaerobic digestion of WAS alone.
However, two apparent phenomena were noted during the digestion. First, in AD1 under the once-a-day feeding mode, if the feeding volume exceeded 300 mL, i.e., the OLR exceeded 1.17 g VS·L⁻¹·day⁻¹, acidification (pH < 6.5) was observed, the digestion process failed, and biogas production eventually ceased; this was caused by an OLR beyond the maximum the system could endure. Second, the same acidification phenomenon (organic overload) was observed in AD3 (SRT of 10 days, OLR of 2.30 g VS·L⁻¹·day⁻¹). To keep the process steady for continuous biogas production and to allow collection of complete data, neutralization measures were taken in AD3; otherwise, the digestion process would have failed due to acidification. 7,16 During acidification, the pH of the effluent falls below 6.5 and biogas production eventually ceases; in this study, the acidification phenomenon and a pH below 6.5 were indeed observed simultaneously.
For this reason, although the anaerobic thermophilic codigestion of WAS and vinasses produces more daily biogas and accumulated biogas than those in the anaerobic thermophilic digestion of WAS alone, organic overload should be avoided. In practice, based on 300 mL of WAS for digestion alone, the optimum mixed ratio of WAS to vinasses to be fed into AD2 should be selected for the co-digestion of WAS and vinasses.
Besides, the VFAs in the three digesters exhibited a similar trend, except for the data in AD3 at an SRT of 10 days: in a given digester, the VFA concentration increased as the SRT increased (i.e., as the OLR decreased). For example, in AD1, at SRTs of 16.7, 33.4, 50.1, and 66.8 days, the VFA concentrations were 24.4, 34.8, 185.2, and 214.7 mg·L⁻¹, respectively. However, in AD3, the highest VFA concentration of 1653.7 mg·L⁻¹ and a pH below 6.5 appeared at an SRT of 10 days and an OLR of 2.30 g VS·L⁻¹·day⁻¹. Xu et al. 23 suggested that excessive VFA accumulation caused by high organic loads strongly inhibits anaerobic digestion; in their work, methanogenic activity was completely inhibited at VFA concentrations of 5.8−6.9 g·L⁻¹ in the anaerobic thermophilic digestion of kitchen waste. The VFA concentration that led to a pH drop in this study is slightly lower than that value because the substrates used in the thermophilic digestions differ. Severe VFA inhibition of methanogen activity is caused by the pH drop in the reactor, which may inactivate acid-sensitive glycolytic enzymes. 24,25 To maintain stable operation of the co-digestion system, an appropriate OLR and SRT must be chosen. Hence, AD2 was selected as the optimum configuration for anaerobic thermophilic co-digestion of WAS and vinasses in practical application.
3.2. Calculation of Energy Balance in Anaerobic Digestion/Co-digestion. In the treatment of WAS by anaerobic digestion, whether a positive energy balance can be obtained is key to sustainable operation in municipal sewage treatment plants. During anaerobic digestion, energy is consumed mainly by heating the sludge, pumping the sludge, mixing the sludge, and heat loss through the walls and piping of the digester. The energy produced by anaerobic digestion of the sludge is the heat energy derived from methane combustion; conventionally, the biogas is burned in a cogeneration internal combustion engine, here called a combined heat and power (CHP) unit, which produces electricity, and the waste heat from the CHP process is the main heat source for the digestion. Previous results showed that most of the heat requirement in thermophilic sludge digestion is for heating the inflow sludge, with heat loss from the digester accounting for only 2−8% of the heat requirement. The energy requirements for pumping and mixing were estimated to be 1.8 × 10³ kJ·m⁻³ and 3.0 × 10² kJ·(m³·d)⁻¹, respectively. 26,27 To calculate the energy balance in sludge digestion, the technique used in the CHP system must be considered: about 35% of the biogas energy is converted to electrical power, heat losses are about 10%, and the remaining 55% of the heat can be utilized. The specific heat of WAS and vinasses was taken as 4.18 kJ·(kg·°C)⁻¹, the calorific value of methane as 35.8 kJ·L⁻¹, 26,27 and the specific density of WAS and vinasses as 1 g·mL⁻¹.
For the calculations, the average outdoor temperature in Nanyang City of 16°C, based on many years of meteorological records, was used. Hence, the initial temperature of the WAS was assumed to be 16°C, the initial temperature of the vinasses 75°C, and the anaerobic thermophilic digestion temperature 55°C.
According to the volumes to be digested in the three digesters, the mixed sludge temperature can be calculated as

T_mixed,sludge = (T1·V1 + T2·V2)/(V1 + V2) (1)

Here, T_mixed,sludge is the temperature of the mixed sludge before feeding into the reactor; T1 is the temperature of the WAS before mixing; V1 is the volume of the WAS before mixing, in mL; T2 is the temperature of the vinasses before mixing; and V2 is the volume of the vinasses before mixing, in mL. The temperatures of the mixed sludge before feeding into the three anaerobic digesters are 16°C for AD1, 30.8°C for AD2, and 39.6°C for AD3. Because the temperature in the three anaerobic digesters is 55°C, the temperature difference between the mixture and the digestion temperature (55°C) must be compensated by the energy derived from CH₄ combustion. If that energy is not sufficient for compensation, the necessary heat must come from elsewhere.
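The weighted average in eq 1 can be checked numerically against the feed volumes and temperatures stated above (WAS at 16°C, vinasses at 75°C); the values reproduce the per-digester mixed-feed temperatures quoted in the text.

```python
# Mixed-feed temperature before digestion: T_mixed = (T1*V1 + T2*V2) / (V1 + V2),
# using the per-digester feed volumes and temperatures stated in the text.

def mixed_temperature(t_was, v_was, t_vin, v_vin):
    """Volume-weighted average temperature of the WAS/vinasses feed."""
    return (t_was * v_was + t_vin * v_vin) / (v_was + v_vin)

T_WAS, T_VIN = 16.0, 75.0  # degrees C
print(mixed_temperature(T_WAS, 300, T_VIN, 0))    # AD1: 16.0
print(mixed_temperature(T_WAS, 300, T_VIN, 100))  # AD2: 30.75 (~30.8 in the text)
print(mixed_temperature(T_WAS, 300, T_VIN, 200))  # AD3: 39.6
```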
Consequently, the energy input in the form of heat and electricity for this compensation is calculated using the following equations 28,29

E_input,heat = ρ·Q·γ·(t2 − t1)·(1 + k)/φ (2)

E_input,electricity = θ·Q + ω·V_p (3)

Here, E_input,heat is the heat requirement for compensation, kJ·day⁻¹; E_input,electricity is the electricity requirement for compensation, kJ·day⁻¹; ρ is the specific density of WAS and vinasses, taken as 1 g·mL⁻¹; Q is the sludge flow fed to the digester, m³·day⁻¹; γ is the specific heat of WAS and vinasses, 4.18 kJ·(kg·°C)⁻¹; t1 is the temperature of the mixed sludge (i.e., T_mixed,sludge), °C; t2 is the temperature of anaerobic digestion, 55°C; φ is the relative amount of heat recovered, 85%; k is the relative heat loss from the piping and walls of the digester, 8%; V_p is the volume of the digester, 5 L; θ is the electrical energy consumption for pumping, 1.8 × 10³ kJ·m⁻³; and ω is the electrical energy consumption rate for stirring, 3.0 × 10² kJ·(m³·d)⁻¹.
Biogas is an energy-rich product of anaerobic digestion because the chemical energy of the methane can be converted to heat and electricity in a combined heat and power (CHP) unit. Assuming a CHP unit is used, according to a previous study, about 35% of the chemical energy of the methane can be converted to electrical energy, 55% to heat, and the remaining 10% is lost. The output energy can therefore be calculated using the following equations

E_output,heat = 0.55·H·V·C (4)

E_output,electricity = 0.35·H·V·C (5)

Here, E_output,heat is the heat produced from the methane generated by a process, kJ; E_output,electricity is the electricity produced from the methane generated by a process, kJ; H is the calorific value of methane, 35.8 kJ·L⁻¹; V is the biogas yield of the process, in L; and C is the methane content, in %.
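As a numerical illustration of the CHP split described above (35% electricity, 55% heat, 10% loss; H = 35.8 kJ·L⁻¹), the sketch below computes the output energy for a biogas volume and methane content; these two inputs are hypothetical example values, not measurements from this study.

```python
# CHP output split for the methane energy in a biogas stream:
# E_heat = 0.55*H*V*C and E_electricity = 0.35*H*V*C (10% is lost).
H = 35.8  # calorific value of methane, kJ per L of CH4

def chp_output(v_biogas, ch4_fraction):
    """Return (heat, electricity) in kJ for a biogas volume in L."""
    e_methane = H * v_biogas * ch4_fraction  # chemical energy of the CH4
    return 0.55 * e_methane, 0.35 * e_methane

heat, power = chp_output(8.2, 0.60)  # hypothetical: 8.2 L biogas at 60% CH4
print(round(heat, 1), round(power, 1))  # 96.9 61.6
```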
The calculated heat and electricity requirements for compensating the mixed sludge fed to the digester (E_input,heat and E_input,electricity), the heat and electricity production from the methane produced by each process (E_output,heat and E_output,electricity), and the resulting net energy balance are shown in Table 3.
As shown in Table 3, as SRT increased in AD1, AD2, and AD3, the cumulative methane yield also increased, while the methane content changed little among the three digesters. By calculation, E_input,heat and E_input,electricity in each digester increased as SRT increased. Moving from AD1 through AD2 to AD3, the temperature of the mixed sludge before anaerobic digestion increased gradually, so the compensation heat required for thermophilic digestion decreased gradually. E_output,heat and E_output,electricity in each digester decreased gradually as SRT increased: although the methane yield increased with SRT, the methane production rate decreased, and heat loss and energy consumption increased accordingly. The net energy therefore decreased as SRT increased in each digester.
In AD1, a negative energy balance was observed throughout the experiments: with 300 mL of WAS for anaerobic thermophilic digestion alone, the net energy values were negative at all SRTs, ranging from −33.44 to −62.42 kJ·day⁻¹. This indicates that anaerobic thermophilic digestion of WAS alone is not practical from an energy standpoint.
In AD2, 300 mL of WAS and 100 mL of vinasses were mixed for anaerobic thermophilic co-digestion. The cumulative methane yield increased compared with AD1, and the net energy was positive at shorter SRTs and became negative as SRT increased: the positive values were +39.73 and +15.51 kJ·day⁻¹ at SRTs of 12.5 and 25 days, respectively. This illustrates that a positive energy balance can be achieved in anaerobic thermophilic co-digestion of WAS and vinasses, breaking through the bottleneck of a negative net energy balance. At an SRT of 37.5 days, the net energy became negative: although the cumulative methane yield increased as the SRT was prolonged, the rate of methane production decreased rapidly, so energy consumption increased rapidly.
In AD3, 300 mL of WAS and 200 mL of vinasses were mixed for anaerobic thermophilic co-digestion, and the cumulative methane yield was higher than that in AD2. The net energy changed from positive to negative as SRT increased, with positive values of +64.96 and +41.51 kJ·day⁻¹ at SRTs of 10−20 days. However, AD3 suffered from the overload-induced acidification mentioned above, and alkali had to be added to the digester on schedule to maintain regular operation; these results are therefore reported mainly for comparison of AD3 with AD1 and AD2. When the SRT increased to 30 and 40 days, the net energy values were +14.35 and −2.88 kJ·day⁻¹, respectively. Comparing the net energy balances of co-digestion in AD2 and AD3, AD2 at an SRT of 12.5 days was chosen as the optimum.
There are two main reasons why the net energy balance is positive in the anaerobic thermophilic co-digestion of WAS and vinasses but not in the anaerobic thermophilic digestion of WAS alone: first, the high-temperature heat resource of the vinasses is fully utilized and compensates the heat requirement of the mixture; second, co-digestion of WAS and vinasses improves the digestion efficiency and yields more gas.
3.3. Kinetic Evaluation of Anaerobic Thermophilic Digestion/Co-digestion. Anaerobic digestion processes are generally described using first-order kinetic models, and several kinetic models can be used to describe the performance of anaerobic digestion; the model used in the present study was proposed by Chen and Hashimoto. Its main assumptions are as follows: (a) the specific growth rate of microorganisms, μ, follows Contois's equation; (b) continuous or semicontinuous completely mixed flow systems are considered; (c) microorganisms in the influent are negligible; (d) the yield coefficient is constant; (e) cellular lysis is not taken into account; (f) the effluent concentration is directly proportional to the influent concentration; and (g) methane production is directly proportional to biodegradable substrate assimilation.
The kinetic equation governing this anaerobic digestion model is

B = B0·[1 − K/(μmax·Θ − 1 + K)] (6)

Here, Θ is the sludge retention time (SRT), in days; K is a dimensionless kinetic parameter related to the rate and stability of the anaerobic digestion; B is the volume of methane produced under normal conditions of pressure and temperature per gram of substrate (VS) added to the digester, L CH₄ STP/g VS added; B0 is the corresponding volume at infinite retention time, L CH₄ STP/g VS added; and μmax is the maximum specific microbial growth rate, in days⁻¹. Thus, once B0 has been determined, the graph of Θ versus B/(B0 − B) produces a straight line with an intercept of 1/μmax and a slope of K/μmax.
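The linearization stated above (Θ plotted against B/(B0 − B) gives a line with intercept 1/μmax and slope K/μmax) can be checked numerically; the parameter values below are arbitrary illustrations, not this study's fitted constants.

```python
# Chen-Hashimoto model: B(theta) = B0 * (1 - K / (mu_max*theta - 1 + K)).
# For each theta, recover it from the linear form
# theta = (K/mu_max) * B/(B0 - B) + 1/mu_max.
B0, K, mu_max = 1.0, 1.5, 0.25  # illustrative parameters only

def methane_yield(theta):
    return B0 * (1 - K / (mu_max * theta - 1 + K))

for theta in (10, 20, 40):
    b = methane_yield(theta)
    recovered = (K / mu_max) * b / (B0 - b) + 1 / mu_max
    print(theta, round(recovered, 6))  # the two numbers on each line match
```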
To obtain the parameter B0, the following equation is easily derived from eq 6 under the condition μmax·Θ ≫ |1 − K|:

B = B0 − (B0·K/μmax)·(1/Θ) (7)

Thus, the plot of B versus 1/Θ should be a straight line with B → B0 as Θ → ∞. Since the plots of B versus 1/Θ were found to be linear over the above-mentioned ranges of SRT, linear regressions were used to determine the intercept B0. To use eqs 6 and 7, the data in Table 1 were converted to B, expressed as L CH₄/g VS added. Figure 1 shows the linear fits of B against 1/Θ in AD1, AD2, and AD3; the measured values and the fitted lines (predicted values) correlate well, and the regression factors (R) and intercepts B0 are given in Table 4.
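The regression used to obtain B0 can be sketched as follows; the (Θ, B) pairs below are synthetic points generated from eq 7 with made-up parameters, not the study's measurements.

```python
# Ordinary least squares fit of B against 1/theta; by eq 7 the intercept
# estimates B0 (the methane yield at infinite retention time).

def fit_line(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

B0_true, K, mu_max = 1.39, 1.9, 0.26  # synthetic parameters
thetas = [12.5, 25.0, 37.5, 50.0]
Bs = [B0_true - (B0_true * K / mu_max) / t for t in thetas]

intercept, slope = fit_line([1 / t for t in thetas], Bs)
print(round(intercept, 4))  # 1.39, recovering B0; the slope is negative
```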
The correlation coefficients (R) for all digestion trials, ranging from −0.8895 to −0.9948, are given in Figure 1 and Table 4. These results indicate that the Chen−Hashimoto kinetic model fitted the cumulative methane yield well in this study. The maximum methane production potentials (B0) in AD1, AD2, and AD3 are 0.4285, 1.3899, and 1.0895 L·g⁻¹ VS, respectively. The B0 values of the co-digestion systems are clearly much greater than that of sludge digestion alone.
From the values of B and B0 in each digester, B/(B0 − B) can easily be calculated, and the SRT (Θ) can be plotted against B/(B0 − B). These plots for AD1, AD2, and AD3 are shown in Figure 2 and Table 5, together with the deviations between the measured and predicted values.
The correlation coefficients (R) for all digestion trials, ranging from 0.9752 to 0.9863, are given in Figure 2 and Table 5. These results indicate that the Chen−Hashimoto kinetic model also fitted the relationship between SRT and B/(B0 − B) well.
According to the results in Table 5 and eq 6, the kinetic parameters are μmax = 1/(linear intercept) and K = (linear slope)/(linear intercept). The values of the kinetic parameters (μmax and K) for the three digestions are shown in Table 6.
The values of the two kinetic parameters (μmax and K), with their 95% confidence limits, for the three digestions considered are shown in Table 6. The kinetic constants μmax and K in AD1 were 0.0894 day⁻¹ and 0.7294, respectively, whereas in AD2 they were 2.9 and 2.7 times higher, and in AD3 3.2 and 2.4 times higher, than in AD1. These results show that adding vinasses enhanced both the maximum specific growth rate μmax and the kinetic constant K; accordingly, anaerobic thermophilic co-digestion of WAS and vinasses had advantages over anaerobic thermophilic digestion of WAS alone, such as higher methane yield efficiency and positive energy production.
The value of the kinetic constant K in AD2 was greater than that in AD3, while the maximum specific growth rate μ max in AD2 was less than that in AD3. The main reason for this difference was the overloading in AD3 that led to digestion acidification, which affected the methane yield efficiency.
The maximum specific growth rate μmax in AD3 was the highest of the three digesters, indicating that AD3 had the highest organic loading compared with AD1 and AD2. In practice, however, a lower proportion of added vinasses is preferable for anaerobic co-digestion because it makes management more convenient, and acidification is the most important factor to be considered overall. The performance of AD2 is therefore optimal, and AD2 should be selected for anaerobic thermophilic co-digestion with respect to process cost and management. The Chen−Hashimoto methane production model fitted the measured values well, with deviations between measured and predicted values of less than 10%; such low deviations suggest that the proposed model predicts the behavior of the reactors accurately. 30
CONCLUSIONS
In this study, methane production from the anaerobic thermophilic digestion of WAS and the anaerobic thermophilic co-digestion of WAS and vinasses was investigated. The results suggest that anaerobic thermophilic co-digestion of WAS and vinasses could be a viable alternative in the future, because the co-digestion process not only promotes methane production but also takes advantage of the waste heat resource to achieve a positive energy balance. The main findings of this study are as follows:
1. The net energy balance of the anaerobic thermophilic digestion of WAS alone ranged from −33.44 to −62.42 kJ·day⁻¹ at SRTs from 16.7 to 66.8 days.
2. In the anaerobic thermophilic co-digestion of WAS and vinasses in AD2, a WAS/vinasses mixture of 2:1 (dry VS) produced positive energy balance values, overcoming the bottleneck of the negative energy balance of the thermophilic digestion of WAS alone.
3. In practice, the optimal process for anaerobic thermophilic co-digestion of WAS and vinasses was that of AD2, with an SRT of 12.5 days, a WAS/vinasses ratio of 2:1 (dry VS), and a total feed volume of 400 mL.
4. The net energy balance of co-digestion is positive for two reasons: co-digestion of WAS and vinasses improves the methane yield, and the hot energy resource of the vinasses is fully used.
5. Anaerobic thermophilic digestion was evaluated using the Chen−Hashimoto model. The kinetic constants μmax and K were 0.0894 day⁻¹ and 0.7294 for the anaerobic thermophilic digestion of WAS, versus 0.2569 day⁻¹ and 1.9583 for the anaerobic thermophilic co-digestion of WAS and vinasses. Anaerobic thermophilic co-digestion of WAS and vinasses thus has obvious advantages and an energy-saving effect.
|
2021-05-29T05:18:59.871Z
|
2021-04-29T00:00:00.000
|
{
"year": 2021,
"sha1": "688f1c207994b59a3f48d01e08169519154bba8d",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.0c05980",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "688f1c207994b59a3f48d01e08169519154bba8d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
242503348
|
pes2o/s2orc
|
v3-fos-license
|
SEGMENTATION OF CANCER CELL FROM AN IMAGE
Segmentation of an image is the first step in extracting the required details from an image. It is the process of separating an image into unique regions, with each region containing pixels with similar attributes. In this paper, an automatic segmentation algorithm is implemented to detect cancer cells in an image and label them in the original image.
I. Introduction
Detection of cancer cells in an image such as a Computed Tomography (CT) image, Magnetic Resonance Image (MRI), digital mammogram, etc. [IX] plays a vital role in medical imaging. To diagnose cancer cells in patients, it is important to know the physical size of the cancer cells.
Measurements made by humans from an image may vary from person to person (depending on the operator who computes the measurements). Various image processing techniques, such as thresholding and morphological operations, can be used to detect cancer cells automatically in the above-mentioned images [VIII]. The first step towards automatic estimation of the size of cancer cells is the segmentation of the cancer cells.
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. Segmentation is accomplished by scanning the image pixel by pixel; each pixel is then labeled depending on whether its gray level is greater or less than the threshold value. Image segmentation can be classified into two basic types, local segmentation and global segmentation [XII], and most segmentation algorithms follow one of two basic approaches: region-based or edge-based.
Image segmentation algorithms are widely used in medical applications such as quantification of tissue volumes, diagnosis, localization of pathology, treatment planning, computer-integrated surgery, etc. [VI].
An algorithm is given in the MATLAB documentation [I] to detect cells using edge detection and morphology. In this paper, we extend this algorithm to segment cancer cells from an image. It is implemented in MATLAB [III].
II. Segmentation Techniques
An image is defined as a 2D function I(x, y), where x and y are spatial coordinates; the amplitude of I at any pair of coordinates (x, y) is the intensity of the image at that point.
Thresholding methods are the simplest methods for image segmentation. These methods divide the image pixels according to their intensity level. There are three types of thresholding: global thresholding, variable thresholding, and multiple thresholding [II]. In this paper, we use global thresholding, which is defined as follows: a single threshold value T, constant for the whole image, is chosen, and the output image q(x, y) is obtained from the original image I(x, y) as q(x, y) = 1 if I(x, y) > T, and q(x, y) = 0 otherwise. There are various methods in the literature for finding the threshold value, such as Otsu's method [IV] and the Sobel operator [IX]. In this paper, we use the Sobel operator to compute the threshold value for segmentation.
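A minimal sketch of global thresholding on a toy grayscale image (plain Python rather than the paper's MATLAB implementation, with a hand-picked T in place of the Sobel-based estimate):

```python
def global_threshold(image, T):
    """Binary output q(x, y): 1 where I(x, y) > T, else 0."""
    return [[1 if pixel > T else 0 for pixel in row] for row in image]

gray = [
    [10,  12, 200, 210],
    [11, 190, 220,  13],
    [ 9,  14,  15,  12],
]
mask = global_threshold(gray, T=100)
print(mask)  # [[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
```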
Sobel Operator:
The Sobel operator is mainly used for edge detection; technically, it is a discrete differentiation operator that approximates the gradient of the image intensity function [VII]. In other words, it is a typical edge detection operator based on the first derivative.
Morphological dilation makes objects more visible and fills in small holes in objects. The dilation of a set (binary image) A by a structuring element B is defined by A ⊕ B = {a + b | a ∈ A, b ∈ B}. Morphological erosion removes islands and small objects so that only substantive objects remain. The erosion of a set A by a structuring element B is defined by A ⊖ B = {z | ∀b ∈ B, z + b ∈ A}.
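The two set definitions above can be implemented directly by representing a binary image as a set of (row, column) foreground coordinates (an illustrative plain-Python sketch, not the paper's MATLAB code):

```python
def dilate(A, B):
    """Dilation: A (+) B = {a + b | a in A, b in B}."""
    return {(ar + br, ac + bc) for (ar, ac) in A for (br, bc) in B}

def erode(A, B):
    """Erosion: A (-) B = {z | for every b in B, z + b is in A}.
    Iterating z over A is sufficient here because B contains the origin."""
    return {(zr, zc) for (zr, zc) in A
            if all((zr + br, zc + bc) in A for (br, bc) in B)}

A = {(r, c) for r in range(3) for c in range(3)}  # 3x3 foreground square
B = {(-1, 0), (0, 0), (1, 0)}                     # 3-pixel vertical element

print(len(dilate(A, B)))    # 15: the square grows one pixel up and down
print(sorted(erode(A, B)))  # [(1, 0), (1, 1), (1, 2)]: only the middle row survives
```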
III. Method to Segment Cancer Cell Using Edge Detection
The step-by-step procedure to detect cancer cells in an image is as follows:
Step 1: Convert the RGB color image to a grayscale image.
Step 2: The contrast between the object to be segmented and the background is high. Variations in contrast can be identified by operators that compute the gradient of an image. Calculate the gradient image and apply a threshold to create a binary mask containing the segmented cell; the threshold value is calculated using the Sobel operator.
Step 3: The binary gradient mask computed above may contain lines of high contrast that do not indicate the exact position of the object boundary, and there may be gaps in the lines surrounding the object. Morphological dilation with appropriate structuring elements is applied to connect these gaps in the gradient image.
Step 4: After dilation, the gradient mask outlines the cell more completely, but there may still be holes in the interior of the cell. The "imfill" function in MATLAB is used to fill these holes.
Step 5: The resulting image contains the segmented cells, but there may also be noise around the border of the image. The "imclearborder" function in MATLAB is used to remove objects that are connected to the image border.
Step 6: Morphological erosion with a diamond-shaped structuring element is applied to smooth the segmented image.
Step 7: The "labeloverlay" function in MATLAB is used to visualize the segmented object in the original image.
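Steps 4 and 5 can be sketched in plain Python on a tiny synthetic mask; `fill_holes` and `clear_border` below play the roles of MATLAB's `imfill` and `imclearborder` (an illustrative sketch, not the paper's implementation):

```python
from collections import deque

def _flood_from_border(pixels, rows, cols):
    """Subset of `pixels` reachable (4-connectivity) from the image border."""
    q = deque(p for p in pixels
              if p[0] in (0, rows - 1) or p[1] in (0, cols - 1))
    seen = set(q)
    while q:
        r, c = q.popleft()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if n in pixels and n not in seen:
                seen.add(n)
                q.append(n)
    return seen

def fill_holes(fg, rows, cols):
    """imfill-like: background not reachable from the border is a hole."""
    bg = {(r, c) for r in range(rows) for c in range(cols)} - fg
    return fg | (bg - _flood_from_border(bg, rows, cols))

def clear_border(fg, rows, cols):
    """imclearborder-like: drop foreground components touching the border."""
    return fg - _flood_from_border(fg, rows, cols)

# 5x6 mask: a ring (cell outline enclosing a hole) plus a blob on the border.
ring = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2), (3, 3)}
border_blob = {(0, 5), (1, 5)}
mask = ring | border_blob

filled = fill_holes(mask, 5, 6)       # adds the interior hole pixel (2, 2)
cleaned = clear_border(filled, 5, 6)  # removes the blob touching the border
print((2, 2) in filled, border_blob & cleaned)  # True set()
```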
IV. Results
The aforementioned algorithm is implemented to detect the cancer cells in the image given in Fig. 1. The grayscale version of the input image is shown in Fig. 2. The boundaries of the cancer cells, with disconnected lines, are obtained and displayed in Fig. 3. Fig. 4 is obtained by applying morphological dilation to this image to connect all the disconnected boundaries. A hole-filling algorithm is used to fill the holes in the dilated image, and the result is shown in Fig. 5. Morphological erosion is carried out to remove noise from the image. The segmented cancer cells are shown in Fig. 6, and these cells are labeled in the original image in Fig. 7.
V. Conclusion
Various techniques for image segmentation were described. A combination of global thresholding and morphological operations was applied in this paper to segment cancer cells from an image. Finally, the segmented cancer cells were marked in the original image. Furthermore, this approach can be applied to segment other objects in medical images, such as kidney stones and bones.
Opinion
The Past and Future of Experimental Speciation

Nathan J. White,1,@ Rhonda R. Snook,2,@ and Isobel Eyres1,*

Speciation is the result of evolutionary processes that generate barriers to gene flow between populations, facilitating reproductive isolation. Speciation is typically studied via theoretical models and snapshot tests in natural populations. Experimental speciation enables real-time direct tests of speciation theory and has been long touted as a critical complement to other approaches. We argue that, despite its promise to elucidate the evolution of reproductive isolation, experimental speciation has been underutilised and lags behind other contributions to speciation research. We review recent experiments and outline a framework for how experimental speciation can be implemented to address current outstanding questions that are otherwise challenging to answer. Greater uptake of this approach is necessary to rapidly advance understanding of speciation.
Forward and Reverse Approaches to Study Speciation
The progression and outcome of speciation (see Glossary) depend on interactions between evolutionary forces [1] that act with varying importance over space and time to either facilitate or impede the evolution of reproductive isolation (RI) [2]. RI may arise through the action of genetic drift and/or divergent natural selection, may depend on gene flow via continuous migration or secondary contact, is impacted by population size and structure, and influenced by genomic properties such as mutation and recombination rates [3]. Understanding the relative contributions of these processes to the evolution of RI is the focus of speciation research. A classic and highly successful approach to studying speciation involves identifying a phenotypically divergent trait and testing its association with the level of RI between extant populations [4][5][6]. The increasing application of high-throughput genomic data to address speciation genomics questions (Box 1) is used to reconstruct population history (e.g., demography) and infer the evolutionary processes leading to speciation, often over a long timescale [1,3,7]. This approach is analogous to the use of forward genetics to study the function of a gene, but applied to the study of RI. Here, the study of speciation begins with a phenotype (RI) and proceeds to identify the potential evolutionary processes that caused RI to build up between diverged populations. Many studies support the success of this approach [1,[4][5][6][7]. However, this forward method of studying speciation is actually backward looking, reflecting a static snapshot of the processes that contributed to divergence. Realistically, signals of early barriers to gene flow are likely erased or overwritten as speciation progresses. 
Thus, such studies are challenged to deduce the action of multiple evolutionary processes impacting phenotypic and genomic factors that influence speciation, either sequentially or simultaneously, either in the same or different directions, inferred over long evolutionary histories.
Laboratory experimental evolution (EE) experiments can address these challenges by manipulating evolutionary processes thought to generate RI over many generations and then testing the outcome on the evolution of RI. Experimental speciation (ES) is analogous to the use of reverse genetics to study gene function. It begins with the putative evolutionary processes and proceeds to identify the conditions leading to and maintaining RI. This approach is experimental and therefore directly identifies the evolutionary processes and circumstances for the evolution of RI.
Experimental speciation is an excellent complement to snapshot studies of natural populations because it can disentangle recurring problems that confound studies of natural populations.
Experimental speciation made early significant contributions to understanding evolutionary processes mediating the evolution of reproductive isolation.
Over the past decade, speciation genomics has provided better predictions on how barrier loci spread in the genome and how speciation-with-gene-flow can occur.
These developments remain difficult to test in natural populations and have not been widely adopted in experimental speciation research.
Future integration of genomic tools in an experimental speciation framework will provide a step-change to understanding these outstanding speciation questions.
ES complements snapshot studies (Table 1) but is also a stand-alone powerful approach because it reveals speciation processes in real time. ES has been implemented for several decades, and when its influential contribution was last reviewed, 10 years ago by Fry [8], the technique seemed poised to exponentially accelerate understanding of the evolution of RI. Fry also outlined neglected speciation questions that ES was well suited to answer. Since Fry's review, speciation theory has advanced to incorporate more sophisticated ideas on genomic conditions and constraints impacting the evolution of RI. Snapshot studies have widely adopted a genetic approach to identifying signatures of RI. However, these conventional studies are vexed with inference problems, limiting understanding of speciation [9][10][11]. ES provides a potent method to test speciation theory by controlling and/or testing genomic factors and environmental conditions thought to influence speciation, factors that forward speciation approaches cannot disentangle (see 'A Selection of New Challenges That Experimental Speciation Can Address').
Here, we review ES studies over the past decade to examine progress on Fry's original neglected speciation questions. We identify areas of speciation research that have progressed since that review, such as speciation-with-gene-flow models and genomic conditions impacting speciation, but to which ES has not been applied. We provide a framework for using ES combined with genomics to enable rapid advances in understanding speciation.
Another Decade of Experimental Speciation
Fry's review suggested ES could address: the relative efficacy of selection and drift in generating RI; the relative rates of evolution of different types of reproductive barriers; the feasibility of sympatric and parapatric speciation; and the feasibility of reinforcement [8]. We summarise the limited progress on these topics in the past decade, identify new areas in which ES has been used, and argue that since Fry's review, two fundamental shifts in speciation theory and approach have occurred that have been ignored in an ES framework.
Glossary
Allopatry: geographic isolation resulting in two or more populations' ranges being nonoverlapping.
Barrier loci: genomic loci that experience lower effective migration rate than actual migration occurring between populations.
Cascading reinforcement: process in which reinforcement between two species indirectly strengthens RI between conspecific populations.
Coupling: co-occurrence of different barriers to gene flow, producing a stronger overall barrier effect.
Destroy all the hybrids: a moniker for a series of artificial selection experiments in which hybrids between divergent lineages were removed to select for RI.
Dimensionality: number of traits or loci impacted by selection.
Dobzhansky-Muller incompatibility (DMI): epistatic interactions between alleles that have become independently fixed in different populations and that have a deleterious effect on fitness when brought together in the same individual. DMIs are thought to be an important cause of barriers to gene flow between species.
Evolve and resequence (E&R): process of sequencing population genomes before and after EE for purposes of comparison.
Experimental evolution (EE): study of evolutionary processes under highly controlled experimental conditions.
Experimental speciation (ES): EE which directly tests for RI between diverging populations.
Extrinsic isolation: RI dependent upon environmental effects.
Founder flush: process whereby a small founder population rapidly grows to carrying capacity, typically under relaxed selection.
Gene flow: movement of alleles from one population to another.
Genetic drift: changes in allele frequency due to stochastic effects in finite populations.
Genomic architecture: genetic structure of the genome underlying traits.
Hill-Robertson effect: interference between selection at linked loci.
Hybrid speciation: hybridisation between two species that produces offspring which are reproductively isolated from the parent species.
Intrinsic isolation: RI independent of environmental effects.
Box 1. Speciation Genomics
The reduced cost of genomics has expanded the ability to address outstanding questions in speciation [1,4,6,25]. Of interest is how barrier loci are distributed across genomes and how they evolve during population divergence. Predicted genomic patterns are based on whether speciation proceeds between geographically separated populations without gene flow, or with gene flow occurring either during initial divergence or following secondary contact. In allopatry, divergence is not substantially constrained by the extent of genetic linkage and recombination relative to the strength of either selection or drift producing RI. In contrast, during speciation-with-gene-flow, selection for divergence is opposed by the processes of both gene flow and recombination that erode associations between genes under selection [82]. The genic view of speciation-with-gene-flow posits that speciation is initiated by selection acting against gene flow at specific targets of selection, and speciation genomics is interested in how barriers to gene flow initiate and facilitate (through the build-up of linkage disequilibrium) RI, including subsequent genomic divergence that is dependent on genomic architecture [25,26,83,84]. Patterns of divergence are predicted to be different depending on whether gene flow is primary or secondary [85].
Speciation genomics has begun to address these issues by identifying barrier loci evolving in response to selection or drift, their effect sizes, genomic distribution, and associations, and how this builds up as RI increases, along with inferring demographic history and gene flow [86][87][88][89]. However, there are well-reviewed confounding factors influencing genome heterogeneity that are unrelated to speciation (e.g., population history, gene flow over time, and variation in the strength and timing of selection [7,9,10]), and disentangling these factors remains challenging in studies of natural populations. Models of the rate, direction and magnitude of gene flow through time tend to rely on summary measures or comparing limited sets of hypothesised scenarios. Additionally, the impact of selection on divergence can sometimes be clearly identified [5,[90][91][92], but it is frequently challenging to characterise selection pressures, increasingly so the further selection is traced back through history. Thus, understanding the role of ecological differentiation, isolation, and genomic differentiation in response to specific evolutionary processes is difficult to reconstruct [93]. Alongside the development of models which can coestimate demography and selection, the ability to directly observe these processes during experiments designed to track such interactions will provide powerful data to apply to natural systems where direct observation during the evolution of RI is unavailable.
Trends in Ecology & Evolution
Glossary (continued)
Linkage disequilibrium: nonrandom association between alleles at different loci (whether physically linked or not).
Local adaptation: adaptation in response to selection that varies between environments.
Matching traits: mechanism of assortative mating in which individuals find mates based on communal traits or alleles.
Multifarious selection: selection on multiple environmental axes.
Multiple-effect trait: trait that contributes to more than one component of RI.
Preference/trait: mechanism of assortative mating in which both signalling trait and preference for it must diverge between populations.
Reinforcement: adaptive strengthening of prezygotic RI due to selection against hybrids (when hybrids have non-zero fitness), in a zone of secondary contact.
Secondary contact: reintroduction of two or more populations' ranges after a period of geographic isolation.
Snowball effect: greater than linear increase in RI with time because genetic incompatibilities between populations lead to reduced gene flow, further divergence and ever-greater numbers of incompatibilities.
Soft sweeps: reduction in the genomic variation of a region due to linkage with a previously neutral allele which becomes beneficial and increases in frequency.
Speciation: origin of distinct, reproductively isolated species.

The Relative Efficacy of Selection and Drift
To maintain differences between populations, barriers to gene flow must emerge and generate RI. Barriers can act at the prezygotic (premating and postmating, prezygotic) and/or postzygotic stage, and can be influenced by extrinsic isolation and/or intrinsic isolation. Initial ES studies found relatively strong support for divergent natural selection generating RI in allopatry, even on arbitrary traits with no clear link to an isolating mechanism [8]. However, under sympatric conditions, disruptive selection did not generally lead to RI, likely because many of the divergently selected traits had little relevance to fitness [8]. Since Fry's review, few ES studies have altered conditions for local adaptation and then tested for the evolution of RI. Most studies tested the role of sexual selection and sexual conflict in generating RI [12,13]. Fry found equivocal support for sexual selection generating RI [8]. Subsequent work on sexual selection and speciation continues to fail to find significant RI [14][15][16][17], even when manipulating genetic variation and population size to increase the likelihood of response [14] and assessing different RI barriers [15]. One species, Drosophila melanogaster, has been tested independently in two laboratories but only one study found RI [18,19]. Theory suggests that different components of sexual selection may interfere with the evolution of RI [20] and one ES study supports this interpretation. In Drosophila pseudoobscura, experimental sexual selection drove divergence in female choice for divergent male courtship traits [21], which should generate assortative mating. However, males from the high sexual selection lines always outcompeted males from the enforced monogamy lines [16].
Overall, surprisingly, experimental sexual selection by itself does not seem to generate RI.
ES studies have tested the impact of either natural or sexual selection, but the evolution of RI may require both, and so their relative contribution should be studied [22,23]. No ES study has done this, although one study manipulated natural selection and then tested for RI that could have arisen via sexual selection [24]. Strong prezygotic RI was observed but it was independent of local adaptation. Additionally, no ES study has manipulated multiple axes of natural selection to test patterns of speciation under strong unidimensional versus multifarious selection, despite this being a long-standing speciation question [25,26] (see 'How Can Selection Overcome Gene Flow?').
Genetic drift may generate RI but Fry found little ES evidence [8]. In the past 10 years, two further studies have manipulated population size to assess the contribution of drift. One study created 1000 bottlenecked, inbred 'founder' populations of Drosophila yakuba, and although weak RI was occasionally produced, extinction was overwhelmingly the most common outcome [27]. Furthermore, when population size constraints were lifted (founder flush), RI was diminished, suggesting that inbreeding effects, not drift alone, were responsible [27]. Another study used a bottleneck treatment combined with divergent selection, but found it did not affect RI [24]. Overall, ES studies indicate that drift is not a strong evolutionary force promoting speciation.
While generally studied separately, selection and drift interact in complex ways. Strong selection reduces effective population size, which can increase the role of drift. In turn, genetic drift may restrict genetic diversity, diminishing the effect of selection. Since Fry's review, one ES study has addressed the joint influence of selection and drift. Using an experimental niche shift to produce asymmetric strengths of selection and drift between ancestral and derived populations of the flour beetle, Tribolium castaneum, both premating and postzygotic RI evolved [28]. Due to strong selection and therefore reduced population size during the niche shift, RI likely arose via fixation of deleterious alleles as a consequence of drift. However, only one line of each of the ancestral and derived populations was generated and we found no other similar studies, limiting understanding of joint selection and drift effects.
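The selection-drift interplay described above can be made concrete with a toy Wright-Fisher simulation (an illustration, not from any of the studies reviewed; all parameter values are assumptions): in a small population a beneficial allele is frequently lost to drift, while in a large population (2Ns >> 1) selection nearly always prevails.

```python
import numpy as np

def fix_prob(N, s, p0=0.1, trials=1000, seed=42):
    """Fraction of Wright-Fisher runs in which an allele with relative
    fitness 1+s, starting at frequency p0, reaches fixation in a
    population of N diploids (2N gene copies)."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:
            # Selection shifts the expected frequency deterministically...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ...and binomial sampling of 2N copies adds drift.
            p = rng.binomial(2 * N, p_sel) / (2 * N)
        fixed += int(p == 1.0)
    return fixed / trials

small = fix_prob(N=10, s=0.05)    # drift dominates: allele usually lost
large = fix_prob(N=500, s=0.05)   # 2Ns = 50: selection nearly always wins
```

The qualitative point is the one the text makes: reducing population size (as strong selection does) shifts outcomes from deterministic responses to selection towards stochastic loss or fixation.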
Evolution of Different Types of Reproductive Barriers
Previous ES studies focused on premating barriers using patterns of assortative mating to measure RI [29]. Although this remains true for ES studies post-Fry [16,24,27,[30][31][32][33][34], some have included postmating, prezygotic [18,32,35], and postzygotic [24,[36][37][38] forms of RI. However, more ES studies comparing the speed of evolution, the traits targeted, and relative magnitude of extrinsic and intrinsic RI are necessary to understand mechanisms by which RI evolves. Fry [8] suggested that ES has been underutilised to test the origin of Dobzhansky-Muller incompatibilities (DMIs) [39,40]. Some recent ES studies, where postzygotic RI has been identified, have used analyses such as microarray-based mapping to identify candidate DMIs [41][42][43]. However, characterising DMIs and distinguishing these from signatures of extrinsic postzygotic RI (e.g., low hybrid fitness in a given environment) requires additional experiments, including exploring the consequences of DMIs segregating within a population via synthetic engineering [44].
Feasibility of Speciation-with-Gene-Flow
Testing for speciation under sympatric and parapatric conditions was frequent in earlier ES studies [8,29], and strongly contributed to understanding the importance of multiple-effect traits [45] in overcoming gene flow [8,29]. While early ES efforts showed conditions for speciation-with-gene-flow, Fry noted models of speciation-with-gene-flow as a neglected area [8]. Over the past decade, a fundamental shift in speciation research is the acceptance that gene flow frequently occurs at some point before the completion of RI [2,39], but ES studies incorporating varying levels of gene flow have not been published in the intervening years. Gene flow in the context of hybrid speciation has been tested recently using ES, expanding upon similar work in yeast species [46]. The number of hybridising Drosophila species, and their genetic divergence, affected RI between parental and hybrid lineages. Higher RI occurred when hybrids were derived from three, rather than two, species, and when parental species had intermediate levels of divergence [34].
Feasibility of Reinforcement
Gene flow during cases of secondary contact after initial divergence in allopatry can generate reinforcement. While initially controversial, evidence for reinforcement has accumulated [47][48][49]. Previous ES reinforcement studies were "destroy all the hybrids" experiments [8] which removed all gene flow between populations and thus tested for increasing isolation between already reproductively isolated species. Post-Fry, Matute addressed this criticism and manipulated amounts of migration and hybridisation (and therefore effective gene flow) between sister species of Drosophila [32,33]. He found premating and postmating prezygotic isolation increased but only when the numbers of migrants were low and selection against hybrids strong. Reinforcement between nascent species could also have indirect effects that generate RI between conspecific populations, known as cascading reinforcement. Using ES, conditions for cascading reinforcement were demonstrated in Drosophila (using a "destroy all the hybrids" approach [35]). Although these ES studies demonstrate that reinforcement can occur, the mechanism by which reinforcement is generated has yet to be explored; linkage of genes for local adaptation with those for assortative mating [50], or via multiple-effect traits conferring local adaptation and assortative mating through pleiotropy [51]. No study has examined the genomics of ES reinforcement, which could test how linkage disequilibrium is generated.
Coevolution
Antagonistic coevolution between species (e.g., hosts and parasites) can potentially drive RI [52] but Fry did not mention any ES study examining this process. Subsequently the use of EE for testing coevolution has been emphasised, but outside of the speciation context [53]. We identified one ES study that found higher postmating RI between T. castaneum populations that had coevolved with the parasite Nosema whitei than between the nonparasitised controls [38]. Another ES study tested populations of D. melanogaster adapting to different diets in the presence of commensal organisms that may generate RI, and found premating isolation evolved in as little as one generation [30,31]. RI was attributed to the mere presence of different microbiota and did not vary significantly over time, thus it is difficult to conclude these effects were evolutionary, rather than plastic. Attempts to replicate these results have been mixed [54,55]. Overall, despite coevolution being a potential powerful driver of speciation, ES studies have not tested this.
That Was Then, This Is Now
ES continues to be underutilised even after Fry's promotion of its use. We provide ideas for future research drawing on his suggestions. Perhaps more importantly, since Fry's review, two major developments in speciation research have occurred for which ES is highly suited but for which ES has lagged behind. First, speciation-with-gene-flow is now thought to be a dominant mode of speciation, but ES studies have manipulated gene flow in only very specific conditions: hybrid speciation and reinforcement. Second, Fry's review [8] was published on the cusp of the genomic revolution. Subsequent EE studies addressing other evolutionary problems have adopted genome sequencing, including evolve and resequence (E&R) [56,57], which allows tracking of genetic changes during evolution, revolutionising EE studies [58]. However, we found surprisingly few new ES studies testing for RI and none incorporated tests of speciation theory using genomics. Given the importance of gene flow during speciation, ES design should include this, as expanded upon in Boxes 2 and 3, and genomic approaches must be used to test fundamental and increasingly sophisticated speciation genomics theory (Box 1). This combination will dramatically increase the ability to directly test how RI is either initiated between individuals within a population or intensified between partially reproductively isolated populations and help fulfil the promise of ES as a powerful approach to understanding speciation. To facilitate this aim, we highlight how ES combined with genomics can address speciation research developments in the past 10 years. Our list, below, is not exhaustive but is designed to inspire and stimulate ES speciation research.

Box 2. Importance of Gene Flow in Experimental Speciation Genomics
As barrier loci can only be detected when populations are or have recently been exchanging genes [1], the degree of gene flow between diverging populations in an experimental speciation study using E&R is crucial for genomic analysis (Figure I). Without gene flow (divergence in allopatry), soft sweeps are predicted to produce large blocks of genomic differentiation around differentially selected alleles. This makes barrier loci hard to pinpoint, a problem which is likely to be particularly pronounced since experimental speciation studies must often use much stronger selection than would be found in nature to generate reproductive isolation within the experimental timeframe. Furthermore, experimental populations are more susceptible to the effects of drift due to their typically small population size. Without gene flow, large genomic regions may drift to differentiation. As such, gene flow is necessary to detect barrier loci, as it homogenises background genomes, counteracts the effects of selective sweeps and drift, and allows regions of differentiation to be identified. However, too much gene flow will swamp selection and obstruct population divergence. Guidelines on the design of E&R studies focus heavily on detecting signatures of selection in allopatric populations [56,57,94]. When designing future E&R speciation experiments, it will be important to consider these in the context of gene flow, distinguishing the detection of regions under selection from that of barrier regions.

Box 3. Blueprint for Experimental Speciation Design
Experimental speciation (ES), in combination with genomics, provides the ability to jointly infer phenotypic responses to, and genomic signals of, selection, and should be a high priority for speciation research. We present a blueprint for the design of future ES studies investigating the impact of a process or condition on the evolution of RI in the face of gene flow (Figure I). We particularly focus on gene flow and selection manipulations, and the use of E&R. In this design, the pair of populations serves as the unit of replication; all measures of divergence (e.g., RI, FST) describe the paired metapopulation. This differs from designs in which experimental lines radiate from a single ancestral population, which typically involve no gene flow. Demography and migration rate, and the strength of natural and/or sexual selection, can be controlled or manipulated. Subsequent consequences on the initiation or elevation of RI can be estimated directly and assessed across different types of reproductive barriers. The time course nature of ES allows phenotypes that contribute to local adaptation [95], assortative mating [55], or hybrid viability [24,36,37] to be assayed from the outset. By using E&R, effective gene flow and consequences for genomic architecture can be determined.
By archiving populations throughout the experiment, a researcher can build a valuable cache of DNA data that can be analysed post-E&R with evolutionary hindsight. Having identified candidate barrier loci, the trajectory of allele frequencies of these selected loci can then be examined in detail across the course of the experiment by targeted sequencing of archived populations at selected time points. This can pinpoint how and when changes relating to RI arise and spread in populations. E&R is a potent way to identify genetic signatures of RI but the power to detect these signatures is affected by demography (population size and number of founding haplotypes), strength of selection, and number of replicate populations (as is the success of ES generally) [56,57,94,96]. While these constraints need to be kept in mind, so should the limitations of detecting signatures of selection in non-ES speciation studies [9][10][11].
Furthermore, if individuals can be "resurrected" (e.g., yeast, rotifers, and Daphnia), a suite of genomic, metabolomic, transcriptomic, or fitness-related assays could be performed post-EE at time points of interest. Replication within each treatment tests for parallel evolution and identifies strong (consistent) candidate barrier loci arising due to selection. Replicates responding similarly allows distinguishing a selective response from other evolutionary processes such as mutation and drift, the latter of which are predicted to affect replicates differently.
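Box 2's tension between divergent selection and homogenising gene flow can be illustrated with a minimal sketch: a deterministic one-locus, two-deme model (an illustrative assumption, not part of the blueprint) in which divergence persists when migration is weak relative to selection but collapses when migration is strong.

```python
def divergence(m, s=0.1, generations=2000):
    """Deterministic one-locus, two-deme model: allele A has relative
    fitness 1+s in deme 1 and 1-s in deme 2; each generation a fraction
    m of every deme is replaced by migrants from the other deme."""
    p1, p2 = 0.5, 0.5
    for _ in range(generations):
        # Divergent (haploid) selection within each deme.
        p1 = p1 * (1 + s) / (p1 * (1 + s) + (1 - p1))
        p2 = p2 * (1 - s) / (p2 * (1 - s) + (1 - p2))
        # Symmetric migration homogenises the demes.
        p1, p2 = (1 - m) * p1 + m * p2, (1 - m) * p2 + m * p1
    return abs(p1 - p2)

weak = divergence(m=0.01)    # m << s: demes remain differentiated
strong = divergence(m=0.45)  # near-panmixia: gene flow swamps selection
```

This is the design logic of the blueprint in miniature: an experimenter chooses m relative to the imposed selection strength so that divergence is neither trivially guaranteed (no gene flow) nor impossible (swamping).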
A Selection of New Challenges That Experimental Speciation Can Address
What Genomic Conditions Promote Speciation?
Variation in mutation rate, recombination rate, and gene density are all predicted to impact progression towards RI [1,3,11]. These genome properties can only be assessed post hoc in natural populations, making it difficult to disentangle current genome properties as causes or consequences of the speciation process. For instance, suppressed recombination among genes inside chromosomal inversions can generate the linkage disequilibrium required for promoting divergence and speciation. In many species, inversions have been found containing genes important for speciation. However, in natural populations it is difficult to infer whether an ancestral inversion containing barrier loci facilitated speciation or arose after several loci were already in linkage disequilibrium. Furthermore, these properties can shape the genomic landscape independently of the evolution of RI, complicating the identification of barrier loci [1,7]. In an ES context, these genomic features can be characterised prior to applying EE and their behaviour tracked across time via E&R. Moreover, manipulating genomic properties of starting populations is possible, allowing direct tests of their effects on the evolution of RI in the absence of confounding differences.
Taking recombination rate as an example, low recombination increases linkage around a barrier locus. Clusters of barrier loci are more likely to evolve in low recombination regions, potentially but not necessarily producing coupling [59]. Reduced recombination regions could therefore evolve because they enhance clustering [60]. For example, inversions that reduce recombination between barrier loci are expected to be promoted by divergent selection in the face of gene flow [61]. Conversely, high recombination can counteract the Hill-Robertson effect, increasing the likelihood of bringing together otherwise competing beneficial alleles in a single individual. So high recombination might speed up local adaptation and divergence during speciation, but could also slow the build-up of RI by uncoupling barrier loci in the genome. The overall effect of recombination rate on RI could be examined by experimentally evolving populations with different patterns of genome-wide recombination rates, known to vary between populations [62,63], using genetic mapping to show the differences between populations. If a facultatively sexually reproducing organism is used, then manipulations in recombination rate could be achieved by varying the proportion of time during selection spent in the asexual and sexual phases [64]. Alternatively, artificially created inversions via CRISPR/Cas9 [65] might be propagated within a population to explore their effects. We use recombination rate as an example, but these approaches could be applied similarly to genomic features such as mutation rate, gene density, or genetic diversity.
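The linkage argument above can be made concrete with a minimal calculation (an illustrative sketch under textbook assumptions, not an analysis from any cited study): in a randomly mating population, linkage disequilibrium (LD) between two loci decays geometrically with the recombination rate r, so associations between barrier loci persist far longer where recombination is suppressed, for example inside an inversion where r is near zero.

```python
# Illustrative sketch: geometric decay of linkage disequilibrium (LD)
# between two loci under random mating, D_t = D_0 * (1 - r)^t.
# Low r (e.g., inside an inversion) preserves allelic associations
# between barrier loci; free recombination (r = 0.5) erases them fast.

def ld_after(d0, r, generations):
    """LD remaining after `generations` rounds of random mating."""
    return d0 * (1.0 - r) ** generations

if __name__ == "__main__":
    d0 = 0.25  # maximal initial LD when both allele frequencies are 0.5
    for r in (0.0, 0.001, 0.01, 0.5):
        print(f"r = {r:<5}  D after 100 generations = {ld_after(d0, r, 100):.4f}")
```

Under these assumptions, an inversion (r ≈ 0) retains essentially all of its initial LD after 100 generations, unlinked loci (r = 0.5) retain effectively none, and intermediate rates interpolate between the two, which is the intuition behind clustering of barrier loci in low-recombination regions.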
How Does Gene Flow Impact Speciation? Gene flow is thought to be involved in most cases of speciation at some point before completion of RI [39]. However, its role in both opposing and facilitating speciation is theoretically complex. Gene flow has consequences for speciation similar to those of recombination. Gene flow opposes divergence under selection, but also makes recombination possible between gene combinations in diverging populations. The latter can promote local adaptation and potentially rescue diverging populations with small founding sizes [66]. Gene flow also impacts the landscape of genomic divergence. In the presence of gene flow and recombination, the strength of selection and linkage are expected to influence the establishment of barrier loci, and are predicted to lead to a clustered genetic architecture [67]. In natural populations, correctly inferring gene flow is challenging given uncertainty about demographic history. For instance, modern-day genomic patterns may be due to past gene flow, varying recombination rates, and/or bottlenecks [10].
In contrast, using ES allows gene flow to be either controlled or manipulated throughout an experiment and this can be confirmed directly via sequencing. Gene flow can be manipulated, singly or in combination with other factors of interest, to test conditions under which speciation-with-gene-flow is feasible. Moreover, the phenotypic and genomic patterns produced are directly determined and can then be applied to understanding these patterns in natural systems.
Experiments manipulating the amount of gene flow, with and without recombination, can be done by varying the proportion of migrants between diverging populations at the start of each generation. This would allow testing predictions about how gene flow might oppose RI but facilitate local adaptation, and about the predicted clustering of loci within the genome. For instance, Fry emphasised that speciation-with-gene-flow is feasible under certain conditions (e.g., the finite stepping-stone [68,69] or Bush's sympatric speciation [70] models) that have not yet been tested. This basic setup could be expanded to include how sexual selection impacts speciation-with-gene-flow, testing how it may either enhance or impede the evolution of RI depending on factors such as geography and mechanisms of assortative mating [71].
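A migrant-manipulation design like the one above can be prototyped in silico before committing to a long experiment. The following deterministic sketch (our illustrative model, not one taken from the cited studies) tracks a single biallelic locus under divergent selection in two demes that exchange a fraction m of migrants each generation; equilibrium allele-frequency divergence collapses as migration increasingly overwhelms selection.

```python
# Illustrative sketch: deterministic one-locus migration-selection model.
# Allele A is favoured in deme 1 (fitness 1+s vs 1) and disfavoured in
# deme 2 (fitness 1-s vs 1); each generation a fraction m of each deme
# is replaced by migrants from the other deme.

def select(p, s):
    """Allele frequency of A after one round of haploid selection."""
    w_a = p * (1 + s)
    return w_a / (w_a + (1 - p))

def divergence(s, m, generations=2000):
    """Near-equilibrium |p1 - p2| for two demes exchanging migrants."""
    p1 = p2 = 0.5
    for _ in range(generations):
        q1, q2 = select(p1, s), select(p2, -s)
        p1 = (1 - m) * q1 + m * q2  # migration after selection
        p2 = (1 - m) * q2 + m * q1
    return abs(p1 - p2)

if __name__ == "__main__":
    for m in (0.0, 0.01, 0.05, 0.2):
        print(f"m = {m:<4}  divergence at equilibrium = {divergence(0.05, m):.3f}")
```

Varying the schedule of m across generations, or coupling several such loci with recombination between them, extends this sketch toward the fuller experimental designs discussed in the text.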
ES is probably best placed to examine the role of gene flow early in speciation. However, it could also be used to test two hypotheses for more divergent populations: reinforcement and the Genome Wide Congealing hypothesis [72]. ES has demonstrated reinforcement but how linkage disequilibrium is generated to promote reinforcement remains unresolved. Sequencing starting populations, identifying markers for barrier loci, and then employing targeted sequencing of the markers on archived ES samples allows reconstruction of the genomic architecture of populations as reinforcement occurs, testing mechanisms of linkage. This approach also addresses the importance of tight linkage between loci and the likelihood of speciation depending on the basis of assortative mating [73]. Speciation-with-gene-flow is theorised to be more feasible when assortment results from matching traits, whereas assortative mating arising from preference/trait mechanisms requires maintenance of linkage disequilibrium between a larger set of loci, thereby decreasing its likelihood in the face of gene flow.
The Genome Wide Congealing hypothesis posits a tipping point of linkage disequilibrium and adaptive divergence. Crossing this threshold marks a transition from a number of weakly selected barrier loci accumulating between diverging populations, to RI at specific genes, to RI across the whole genome [72]. Whether this threshold exists, and at what point during speciation this theoretical tipping point is reached, depends on how many loci are targets of selection, how strong selection on each locus is, and the genome-wide recombination landscape. ES could empirically test the impact of these factors by taking divergently adapted but not very isolated populations and then manipulating conditions and/or genome properties to test for a tipping point from weak to strong RI.
How Can Selection Overcome Gene Flow? Fry reviewed ES studies testing whether selection on multiple-effect traits could overcome gene flow to generate RI [8,74,75]. However, many other facets of selection remain unexplored which, while being relatively minor in allopatry, can have major consequences in the presence of gene flow. One example is the dimensionality of selection: how are the components of RI affected by whether a finite quantity of selection is spread over many, or concentrated onto few, traits and/or loci? To what extent is speciation promoted when selection is strong on a single trait compared to multifarious selection? Strong divergent selection, concentrated on a single trait, may overcome gene flow more successfully, leading to greater and more rapid local adaptation, but with lower effects overall on RI and genomic differentiation. In contrast, multifarious selection may accelerate the build-up of RI [8,25,26] by impacting linked barrier loci, impacting multiple-effect traits, or producing a snowball effect [40] of DMIs [76][77][78][79]. If selection is spread too thinly across many dimensions, however, then it may fail to overcome gene flow [80]. The amount of gene flow is also critical in determining whether uni- versus multidimensional selection facilitates complete speciation [25]. No ES study has addressed this speciation theory. While not suggested in an ES framework, Figure 3 of Nosil et al. [25] provides an excellent guide for ES researchers for testing the contribution of uni- and multidimensional selection to the evolution of local adaptation and RI.
Concluding Remarks
Despite early ES success, the approach has lain relatively fallow in addressing unresolved speciation questions (see Outstanding Questions; note that these are general issues in speciation research which remain general because conventional speciation studies are challenged to answer them). This is particularly true when incorporating genomic techniques. It is the combination of ES with high-throughput genomics that can provide a step-change in understanding the origin of RI by directly testing competing hypotheses on processes suggested to impact speciation. Such tests are challenging in natural populations. While ES is typically used to reveal the evolution of early RI, its use on partially reproductively isolated taxa can test how existing patterns of RI and the underlying genomic architecture impact progression to more complete speciation. ES combined with E&R can both disentangle and test confounding demographic and genetic processes, and elucidate the conditions under which speciation is impeded or accelerated. As it is these signals that get erased or overwritten during the speciation process in natural populations, such experimental insights can be used to help interpret patterns of divergence in natural populations whose selection and demographic history are unknown. In this way, ES, while perhaps oversimplifying real-world conditions, is a powerful tool complementing forward (static) speciation studies. As ES studies accumulate, questions about the role of certain types of genes, and of other types of phenotypic variation such as gene expression, in speciation can be addressed. All experiments risk failure, but given how time-consuming ES is, researchers may be hesitant to adopt this approach for fear that RI will not be generated. Rare events can still be important [81], so modelling approaches enabling the testing of many more variables over many more replicates than is feasible experimentally would be a helpful complement to ES.
Furthermore, an additional benefit of taking on the ES challenge is that, even if RI does not evolve, the approach can address other fundamental questions (e.g., how gene flow and recombination impact the genomic architecture of local adaptation), themselves outstanding evolutionary problems. Unlike our update of ES in the past decade, we anticipate that the next decadal ES review will attest to the power of this approach and its application in interpreting divergence in natural populations.
Outstanding Questions
How does the interaction between different sources of selection impact on the build-up of RI? What role does the dimensionality of selection play in generating divergence? Does the interaction between sexual and natural selection impact on the outcome of speciation? How do drift and selection interact to affect the speciation process?
What demographic and environmental effects might aid selection in overcoming gene flow? Can small amounts of gene flow accelerate speciation, through reinforcement or by augmenting genetic variation (i.e., hybrid speciation)? Can coevolution drive speciation-with-geneflow? How might sexual selection impact speciation-with-gene-flow?
What genomic conditions facilitate or impede speciation? How do areas of suppressed recombination, such as inversions, contribute to RI? How does recombination affect the build-up of RI in the presence of gene flow? Are barrier regions typically gene rich or gene poor? Is there a tipping point of linkage disequilibrium and adaptive divergence moving from RI at specific barrier loci to RI across the whole genome, and if so, how does this change depending on how many loci are targets of selection, how strong selection on each locus is, and the landscape of recombination across the genome?
Are there general patterns of speciation we can test with experimental speciation? Are there optimal levels of standing genetic variation for speciation to occur? Do we see the same results when comparing different populations, subspecies or species?
Exploring the role of m6A writer RBM15 in cancer: a systematic review
In the contemporary epoch, cancer stands as the predominant cause of premature global mortality, necessitating a focused exploration of molecular markers and advanced therapeutic strategies. N6-methyladenosine (m6A), the most prevalent mRNA modification, undergoes dynamic regulation by enzymes referred to as methyltransferases (writers), demethylases (erasers), and effector proteins (readers). Despite lacking methylation activity, RNA-binding motif protein 15 (RBM15), a member of the m6A writer family, assumes a crucial role in recruiting the methyltransferase complex (MTC) and binding to mRNA. Although the impact of m6A modifications on cancer has garnered widespread attention, RBM15 has been relatively overlooked. This review briefly outlines the structure and operational mechanism of RBM15 and delineates its unique role in various cancers, shedding light on its molecular basis and providing a groundwork for potential tumor-targeted therapies.
Introduction
Cancer is the primary cause of premature mortality worldwide in the 21st century, posing a substantial barrier to improving human life expectancy (1). Calculations and data analysis from the Global Cancer Observatory (GCO) database underscore the substantial burden cancer will impose on both low- and middle-income nations and the global population over the next five decades (2). Therefore, the prioritization of efficient molecular markers to elucidate tumorigenic and progression mechanisms, along with the exploration of advanced early diagnostic and therapeutic approaches, is of utmost importance (3). Recently, m6A modification has emerged as a key player in cancer development, propelling ongoing epigenetic investigations into its association with cancer (4)(5)(6).
RBM15, also recognized as OTT or OTT1, is a member of the split-end (SPEN) family of proteins involved in cell fate determination (28). Initially identified as an ectopic gene in pediatric acute megakaryocytic leukemia (29, 30), RBM15, despite lacking methylation activity, plays a critical role in recruiting MTCs and facilitating their binding to target mRNAs as a member of the m6A writers (31). Beyond its implications in hematopoiesis and cardio-splenic development in mice (32,33), RBM15 is implicated in various biological functions, including alternative splicing, nuclear export, and X chromosome inactivation (19,34,35). Recent studies have revealed the involvement of RBM15 in cellular biological behaviors such as proliferation, invasion, migration, and apoptosis in various cancers, including acute megakaryocytic leukemia (36), colorectal cancer (37), ovarian cancer (38), laryngeal squamous cell carcinoma (25), and osteosarcoma (39). High expression levels of RBM15 are typically correlated with a poor prognosis in patients with malignancies (40). In this review, we focus on the individual functions of RBM15 and its related mechanisms, highlighting the current status of RBM15 regulatory mechanisms in tumors and providing researchers with new ideas for tumor therapy.
Structural features of RBM15
RBM15, an RNA-binding protein, is situated within the 1p13.2 region of human chromosome 1 and is characterized by 19 exons and 18 introns (41). RBM15 exhibits a typical SPEN family protein-like structure, comprising three highly conserved N-terminal RNA recognition motifs (RRM) and a Spen paralogue and orthologue C-terminal (SPOC) domain (42,43) (Figure 1A). The RRM fold includes four antiparallel β-strands and two α-helices (44) (Figure 1B). The RRM domain is highly prevalent in eukaryotes and plays an essential role in post-transcriptional splicing, translation, nuclear export, and mRNA stabilization (45,46). The SPOC domain, characterized by seven β-strands and four α-helices (Figure 1C), has been demonstrated to influence several facets of mammalian gene expression, encompassing transcription, RNA modification, RNA export, and X-chromosome inactivation (47,48). A recent discovery revealed that the SPOC domain serves as a phosphoserine-binding module, with conserved motifs on its surface specifically recognizing the C-terminal domain (CTD) phosphorylation tag of RNA polymerase II (RNA Pol II) (42). The RBM15 SPOC domain was shown to predominantly regulate m6A modification and enhance mRNA stability by binding to the m6A reader (48). Furthermore, the SPOC domain engages with a range of factors, including the histone lysine methyltransferase SETD1B, nuclear RNA export factor 1 (NXF1), and DEAD-box protein 5 (DBP5), thereby participating in RNA transcription, export, and RNA-related metabolism (35,49,50). In conclusion, the RRM and SPOC domains of RBM15 can mediate a variety of protein interactions and play a key bridging role in the regulation of gene expression.
Mechanisms underlying the function of RBM15
RBM15 serves as a writer in m6A
In the context previously discussed, RBM15 assumes a pivotal role in the constitution of the MTC, guiding the METTL3/METTL14 complex to specific mRNA target sites (51) (Figure 2). Furthermore, RBM15 exhibits a selective binding affinity for U-rich sequences on mRNAs, directing them to distinct localization sites and thereby promoting methylation at m6A adenosine ribonucleotide consensus motifs (19,51).
Moreover, WTAP emerges as a robust candidate for interaction with RBM15, serving as an essential link in MTC recruitment by RBM15 (52). The interaction dynamics between the METTL3-METTL14 complex and RBM15 are intricately dependent on WTAP levels (19). It is noteworthy that the depletion of WTAP significantly diminishes or interrupts the binding interaction between RBM15 and the METTL3-METTL14 complex (19,(53)(54)(55). Subsequent experiments have elucidated that WTAP harbors a phosphorylated LSETD motif, emphasizing the potential dependence of the critical link between the RBM15 SPOC domain and WTAP on this phosphorylated motif (42).
RBM15, acting as a crucial mediator of m6A modifications, plays a multifaceted role in various biological processes. It collaborates with its analog RBM15b to recruit WTAP and METTL3-METTL14 to the m6A regions of the long non-coding RNA X-inactive specific transcript (XIST), elevating XIST methylation levels and facilitating XIST-mediated gene silencing (19,47,(56)(57)(58) (Figure 2). Interestingly, in cases of reduced RBM15 expression, RBM15b demonstrates functional compensation due to substantial sequence and structural domain similarity (19). Noteworthy XIST methylation reduction is observed only when both RBM15 and RBM15b are concurrently knocked down (19). Moreover, RBM15 orchestrates the m6A-mediated regulation of BAF155, contributing to the normal development of the mammalian cerebral cortex (54). RBM15 also regulates the expression of CLDN4 in mice, exerting influence on insulin sensitivity and promoting insulin resistance in gestational diabetic mice (59).
The intricate interplay between RBM15 and m6A readers plays a crucial role in orchestrating the m6A regulatory mechanism. For instance, RBM15 can interact with IGF2BP1 to facilitate the post-transcriptional activation of YES proto-oncogene 1 (YES1), thus contributing to the regulation of hepatocellular carcinoma progression (60). Additionally, RBM15 may interact with insulin-like growth factor 2 mRNA-binding protein 3 (IGF2BP3) and collaborate in the m6A modification of TMBIM6, consequently promoting the malignant progression of laryngeal squamous cell carcinoma (25). In conclusion, RBM15 participates in the composition of MTCs in a WTAP-dependent manner and recruits MTCs to specific sites to promote m6A methylation, which plays a key regulatory role in a variety of biological functions.
RBM15 controls alternative splicing
Alternative splicing, facilitated by spliceosome complexes binding to RNA Pol II transcripts, constitutes a pivotal factor contributing to the intricate complexity of the transcriptome in multicellular eukaryotes (61,62). Both RBM15 and RBM15b are situated within nuclear speckles, recognized as crucial depots for numerous splicing factors (52). RBM15 exerts its influence by binding to specific intronic sites in pre-mRNA and modulating alternative splicing through the recruitment of splicing factors (34) (Figure 2). A notable instance involves RBM15 orchestrating alternative splicing by enlisting the splicing factor SF3B1 to a distinct splice site in the c-Mpl intron, leading to the upregulation of the c-Mpl truncation isoform upon RBM15 knockdown (34).
Additionally, RBM15 may govern alternative splicing via chromatin modifications. Its interactions with Hdac3 and the histone methyltransferase SETD1B exemplify this (63), influencing c-Mpl RNA and chromatin interactions and thereby regulating H4 acetylation and H3K4me3 marks (64) (Figure 2). Noteworthy is the observation that inhibiting histone deacetylase or histone methyltransferase levels significantly heightens the abundance of truncated isoforms of c-Mpl (64,65). Upstream in the regulatory cascade, protein arginine methyltransferase 1 (PRMT1) methylates RBM15, instigating its ubiquitination and subsequent degradation by subunit 4 of the CCR4-NOT transcriptional complex (CNOT4) (34). This intricate process potentially serves as a pathogenetic mechanism in hematopoietic malignancies.
Alternative splicing orchestrated by RBM15 is essential for megakaryocyte differentiation. For instance, the depletion of RBM15 disrupts the selective splicing of GATA1, leading to the generation of truncated GATA1 isoforms (34). These truncated isoforms hinder the differentiation of progenitors into mature megakaryocytes, a crucial process in the pathogenesis of leukemia (34). Additionally, the transcription factor TAL1 holds a critical role in the differentiation of megakaryocyte-erythroid progenitors. The strong interaction between the SF3B1 K700E mutant and RBM15 can result in the dysregulation of alternative RNA splicing of TAL1, ultimately impeding erythropoiesis (66). In summary, RBM15 regulates alternative splicing by recruiting splicing factors and interacting with histone-modifying enzymes, thereby governing the splicing process and participating in diverse functions (34,64).
RBM15 promotes the nuclear export of mRNA
RBM15, a member of the SPEN family, possesses a distinctive SPOC domain responsible for governing nuclear export through interactions with diverse proteins (28) (Figure 2). For example, EB2 interacts with RBM15/b and the SPEN SPOC domain, thus facilitating the nuclear export of viral mRNA (28). The RNA transport element (RTE) enhances RNA binding to the mRNA export receptor NXF1, with this process mediated by the interaction with RBM15 (35). Acting as a bridge, RBM15 connects RTE-containing RNA to NXF1, consequently amplifying the nuclear export of RNA (35). Moreover, DBP5 plays a crucial role in providing the basic direction of nuclear export by specifically recognizing NXF1 through RBM15, allowing NXF1 to traverse the nuclear pore complex and enter the cytoplasm (35, 49). In summary, RBM15 actively engages in and facilitates cellular nuclear export. Nevertheless, when RBM15/b is subjected to knockdown, the nuclear export of mRNA appears to persist, raising questions about whether RBM15 solely aids export factors in enhancing the stability of their interactions (35, 49).
Role of RBM15 in cancer
Recent evidence underscores the pivotal role of RBM15 in cancer, where it predominantly functions as a methyltransferase, enhancing the stability of target mRNAs through m6A modification and thereby contributing to the initiation and progression of diverse cancers. In addition, RBM15 also regulates cancer through signaling pathways or other modifications. In this review, we present a succinct summary of recent discoveries elucidating the expression of RBM15 in different tumors and its corresponding molecular regulatory mechanisms (Table 1 and Figure 3).
Acute myeloid leukemia
Acute myeloid leukemia (AML) stands out as the preeminent malignancy affecting hematopoietic stem cells, marked by a notably unfavorable prognosis (78). Recent research has highlighted a substantial correlation between RBM15 expression levels and survival in AML patients, associating elevated RBM15 expression with shorter survival (67). Notably, a critical aspect of RBM15 function was uncovered: RBM15 is recruited by RBFOX2 and establishes an interaction with the m6A reader YTH domain-containing protein 1 (YTHDC1) (67). This interaction facilitates the recruitment of polycomb repressive complex 2 (PRC2) to the binding site of RBFOX2, leading to chromatin silencing and transcriptional repression (67). Of note, the expression levels of RBM15 and RBFOX2 are positively correlated in cancer patients. Furthermore, down-regulation of RBFOX2 significantly impedes the survival and proliferation of AML cells and induces myeloid differentiation (67).
Acute megakaryoblastic leukemia
Figure 2. Mechanisms of RBM15 in m6A modification, alternative splicing, and RNA export. RBM15 comprises three RRMs and a SPOC domain. It collaborates with other m6A writers, including METTL3, METTL14, WTAP, VIRMA, and ZC3H13, to form a methyltransferase complex (MTC) that promotes methylation. For example, RBM15 promotes X-chromosome inactivation by enhancing XIST methylation levels. Additionally, RBM15 recruits the splicing factor SF3B1 to participate in alternative splicing, and RBM15 can target the histone H3K4me3 methyltransferase SETD1B to RNA via the SPOC domain, thereby regulating selective splicing via histone-modifying enzymes. Furthermore, RBM15 promotes the nuclear export of mRNA by binding to the nuclear export factor NXF1.

Acute megakaryocytic leukemia (AMKL) is a subtype of acute myeloid leukemia primarily characterized by the presence of platelet-producing megakaryocytes within the bone marrow (79). This disease is prevalent in children and is associated with an unfavorable prognosis (79)(80)(81). The gene fusion product involving RBM15 and megakaryocytic leukemia 1 (MKL1), termed the RBM15-MKL1 fusion protein (also referred to as OTT-MAL),
was initially identified in a pediatric patient with acute megakaryoblastic leukemia harboring the t(1;22)(p13;q13) translocation (82). The etiology of this malady is notably intricate, and investigations have revealed that the RBM15-MKL1 fusion protein interacts with the Setd1b histone H3-Lys4 methyltransferase (also recognized as KMT2G). This interaction is contingent on an intact RBM15 SPOC domain and enhances its leukemic activity in megakaryocytes (65).
Furthermore, RBM15 plays multiple roles in the hematopoietic system and may also contribute to disease pathogenesis (83,84). As an illustration, RBM15 interacts with the SMRT/HDAC1-associated repressor protein (SHARP) and associates with the recombination signal-binding protein RBP-Jk. This interaction activates Notch-regulated gene expression, thereby inhibiting myeloid differentiation in hematopoietic cells (83). Additionally, RBM15 plays a role in hematopoietic stem cells (HSC) and contributes to megakaryocyte development by regulating the downstream target c-myc (85,86). Furthermore, it has been shown that antisense RBM15 (AS-RBM15) finely modulates megakaryocyte differentiation by elevating the translation level of the RBM15 protein (87). Up-regulation of AS-RBM15 expression promotes terminal differentiation of megakaryocytes, while down-regulation has the opposite effect. Nevertheless, the connection of these findings to the pathogenesis of AMKL requires further investigation.
Chronic granulocytic leukemia
Chronic granulocytic leukemia (CML) is one of the most malignant diseases of the hematopoietic system, presenting a grave threat to patients (88). Researchers revealed that the average expression level of RBM15 was notably higher in acute-phase CML cells than in cells from the chronic and accelerated phases of the disease (68). Diminishing RBM15 levels demonstrated the capacity to impede the growth and proliferation of CML cells, arrest the cell cycle, and induce apoptosis (68). Furthermore, evidence suggested that RBM15 might, in part or entirely, facilitate the malignant progression of CML through the Notch signaling pathway mediated by RBP-Jk. This pathway is postulated to wield a crucial influence in the etiology of hematopoietic malignancies (68, 83).
Kaposi's sarcoma
Kaposi's sarcoma (KS) is a multicentric tumor arising from the endothelial cells of lymphatic vessels (89). The virulence genes ORF57 and ORF59 are vital contributors to the growth and proliferation of KS (90), and their expression levels are closely associated with RBM15 (69). Research has revealed that RBM15 plays a key role in enhancing the production of ORF57 nuclear transcripts. Inhibition of RBM15 hampers the production of ORF57 mRNA, resulting in a decrease in the overall RNA level of ORF57 (91). In addition, RBM15 and ORF57 interact with the 5' MRE of ORF59 to increase the stability of ORF59 mRNA and promote its nuclear export (90), preventing the over-accumulation of ORF59 mRNA in the nucleus and maintaining the balance between the nuclear and cytoplasmic levels of ORF59 mRNA (69). However, further experiments are needed to confirm that RBM15 regulates the expression of ORF57 and ORF59, thereby promoting the growth and proliferation of KS.
Hepatocellular carcinoma
Hepatocellular carcinoma (HCC) is the predominant primary liver malignancy, distinguished by elevated malignancy, morbidity, and mortality rates (92). RBM15 exhibits high expression levels in HCC, indicative of an unfavorable prognosis (60,93). The depletion of RBM15 significantly impedes the growth of HCC cells (60). Notably, a crucial aspect of RBM15 function has been uncovered, demonstrating its pivotal role in the post-transcriptional activation of YES1 through interaction with the m6A reader IGF2BP1 (60). This intricate interplay subsequently activates the mitogen-activated protein kinase (MAPK) pathway, thereby fostering the progression of HCC (60).
Colorectal cancer
Colorectal cancer (CRC) is the third most common malignancy and the second most lethal cancer in the world, and its incidence is increasing at an alarming rate (94). Research indicates that the expression of RBM15 in CRC tissues is significantly higher than that in nearby non-tumor tissues, and elevated expression of RBM15 is closely associated with poor prognosis, while inhibition of RBM15 expression significantly suppresses the proliferation and invasion of CRC cells (70). Recently, more and more studies have focused on the mechanistic investigation of RBM15 in CRC. For example, RBM15 increased the methylation level of MyD88 through m6A modification, which promoted the proliferation and invasion of CRC cells (37). In addition, RBM15 regulates the expression and enhances the stability of KLF1 mRNA by interacting with the m6A reader IGF2BP3, which activates the transcription of the downstream target SIN3A and ultimately promotes the proliferation, invasion, and migration of CRC cells (70).
Pancreatic cancer
Figure 3. Molecular regulatory mechanisms of RBM15 in various types of cancer. In addition to conventional targeted regulation, RBM15 collaborates with m6A readers, such as IGF2BP1, IGF2BP3, and YTHDC1, to govern the malignant progression of cancer. Furthermore, RBM15 directs cancer progression by modulating various pathways, including the TGF-β/Smad2, Notch, MAPK, and AKT/mTOR pathways.

Pancreatic adenocarcinoma (PAAD) is a prominent and exceptionally malignant neoplasm of the digestive system,
characterized by a 5-year survival rate of approximately 10% following diagnosis (95). RBM15 is expressed across various pancreatic cancer cell lines, frequently coexisting with a propensity for T-lymphocyte aggregation (41). Studies have shown that elevated RBM15 expression is a significant contributor to an unfavorable prognosis; conversely, suppression of RBM15 demonstrates the potential to inhibit cancer cell proliferation, invasion, and metastasis to varying extents (41,71). Intriguingly, elevated blood glucose levels may enhance RBM15 expression in pancreatic cancer. It is yet to be determined whether this phenomenon is associated with the increased energy demands of pancreatic cancer malignancy (41).
Cervical cancer
Cervical cancer is the fourth most prevalent gynecological cancer, and more than 99% of cases are attributed to human papillomavirus (HPV) (96). HPV-E6, one of the eight protein-coding genes associated with cervical carcinogenesis (97), has been shown to maintain high expression of RBM15 in cervical cancer cells by preventing its autophagic degradation (72). Recent studies have focused on the specific regulatory mechanisms of RBM15 in cervical cancer. For example, RBM15 interacts with the downstream target c-myc, enhancing its m6A modification, a process crucial in cervical cancer development (72). Intriguingly, knockdown of HPV-E6 resulted in a reduction of c-myc mRNA expression and m6A modification levels in cervical cancer cells, which was subsequently reversed by overexpression of RBM15 (72). In addition, RBM15-mediated m6A modification facilitated the expression of the oncogene Otubain 2 (OTUB2) in cervical cancer cells, which further activated AKT/mTOR signaling, thereby promoting the proliferation, migration, and invasion of cervical cancer cells (73). Furthermore, it has been hypothesized that RBM15 might promote the proliferation, invasion, and migration of cervical cancer cells through its interaction with the JAK-STAT pathway; nevertheless, additional validation is required (74).
Ovarian cancer
Ovarian cancer (OC) is the third most common gynecologic malignancy worldwide but accounts for the highest mortality rate among these cancers (98). Studies have shown that the expression of RBM15 is higher in OC tissues than in normal tissues, and that high expression of RBM15 is closely associated with the propensity of OC to metastasize (38,75). Interestingly, RBM15 is overexpressed in paclitaxel (PTX)-resistant cells, and depletion of RBM15 restores the sensitivity of PTX-resistant cells (75). Activation of the TGF-β/Smad pathway was shown to act on the RBM15 promoter and directly inhibit RBM15 expression in PTX-resistant ovarian cancer cells (75). The subsequent low expression of RBM15 resulted in a reduction in the m6A level of MDR1, a recognized major target for overcoming OC resistance (75,99). In summary, RBM15, serving as a tissue biomarker for OC and PTX resistance, may open new therapeutic avenues for the treatment of PTX-resistant OC in the future.
Most evidence has unveiled an oncogenic role of RBM15 in OC; however, a contradictory view has also been put forward: ubiquitin-like modification activating enzyme 6 antisense RNA 1 (UBA6-AS1) can direct m6A modification of UBA6 mRNA by enlisting the aid of RBM15, after which the m6A reader IGF2BP1 bolsters the stability of UBA6 mRNA (100). UBA6-AS1 curbed UBA6 self-degradation through m6A modification mediated by RBM15, thereby effectively impeding ovarian cancer proliferation, invasion, and metastasis (100). This suggests that RBM15 may be associated with a favorable prognosis in ovarian cancer; however, further experiments are warranted to validate this association. While controversy surrounds RBM15's role in ovarian cancer, these findings raise the possibility that RBM15 acts as a tumor suppressor or a tumor-suppressor cofactor.
Clear cell renal cell carcinoma
Clear cell renal cell carcinoma (ccRCC) is a prevalent adenocarcinoma originating from renal tubular epithelial cells, frequently associated with an unfavorable prognosis (68). Most investigations have revealed heightened expression of RBM15 in both ccRCC cells and tissues, correlating with augmented proliferation, invasion, and metastasis of ccRCC cells (76,101). CREB-binding protein (CBP) and EP300 are crucial transcriptional co-regulators implicated in cancer progression (102). It has been shown that they can facilitate enrichment at the RBM15 promoter, inducing histone 3 acetylation and the subsequent upregulation of RBM15 expression. Furthermore, RBM15 enhances the stability of CXCL11 mRNA through m6A modification, consequently promoting macrophage recruitment and M2 polarization (76). Noteworthy is the observation that knockdown of RBM15 led to a significant reduction in the m6A level of CXCL11 mRNA, thereby restraining the malignant behavior of ccRCC cells (76).
Lung cancer
Lung cancer stands out as the most prevalent global malignancy and the leading cause of cancer-related mortality among men (103). Among its subtypes, lung adenocarcinoma (LUAD) constitutes approximately half of the total incidence (104). LUAD cells exhibit significantly heightened levels of RBM15 expression, and this upregulation strongly correlates with diminished overall survival (105,106). Notably, RBM15's potential to promote the malignant behavior of lung cancer is linked to its antagonistic relationship with SETD2, a recognized favorable prognostic indicator for LUAD (107). A recent study further proposed that knockdown of RBM15 effectively reduces the levels of TGF-β and Smad2 and promotes ferroptosis by regulating genes related to the iron concentration process, thus inhibiting proliferation, migration, invasion, and tumor growth (77). The proposition of targeting RBM15 opens new avenues for future directions in lung cancer treatment.
Laryngeal squamous cell carcinoma
Laryngeal squamous cell carcinoma (LSCC), a highly malignant tumor of the respiratory tract, holds the unenviable position of being the second most common head and neck cancer, with a particularly dismal prognosis (108). LSCC tissues conspicuously exhibit elevated levels of RBM15 expression, significantly associated with a poor prognosis (20). Interestingly, knockdown of RBM15 significantly impedes the invasion and migration capabilities of LSCC cells (25). Recently, a remarkable discovery demonstrated that RBM15 plays a critical role in promoting the methylation of TMBIM6, consequently facilitating the malignant progression of LSCC (25). In addition, the m6A reader IGF2BP3 recognizes the m6A tag and exerts its influence by fortifying the stability of TMBIM6 mRNA. Notably, when the expression of RBM15 and IGF2BP3 was knocked down, a significant decrease in the expression level of TMBIM6 mRNA was observed (25).
Osteosarcoma
Osteosarcoma is an exceedingly uncommon primary malignancy of the skeletal system, primarily afflicting adolescents between the ages of 10 and 25 years (109). In osteosarcoma cells, RBM15 exhibits elevated expression levels, a phenomenon notably linked to an unfavorable prognosis (110). Mechanistic studies on how RBM15 regulates osteosarcoma are scarce, but a recent study suggested that RBM15 directly interacts with Circ-CTNNB1, thereby increasing the level of m6A modification of genes associated with aerobic glycolysis and ultimately facilitating the glycolytic process (39). This augmentation of aerobic glycolysis unequivocally provides a survival advantage to osteosarcoma cells (39). Yet, the precise regulatory mechanism underpinning this phenomenon remains to be comprehensively elucidated.
Conclusions and perspectives
Cancer is characterized by numerous hallmark behaviors such as uncontrolled proliferation, evasion of cell death, angiogenesis, invasion, metastasis, metabolic dysregulation, and immune evasion (111). The most prevalent RNA modification in eukaryotes is m6A, which can determine the fate of the modified RNA (112). Methyltransferases have garnered substantial research interest due to their ability to catalyze RNA modifications, their involvement in tumor initiation and progression, and their potential as therapeutic targets in cancer (12). For instance, METTL3 may govern colorectal cancer metastasis by modulating the METTL3/miR-1246/SPRED2 axis (113). Additionally, METTL14 regulates USP48, enhancing SIRT6 stability via m6A modification, thereby restraining the malignancy of HCC (114).
RBM15, belonging to the SPEN family and distinguished by its specific SPOC domain, assumes a key role in recruiting the MTC to specific sites, thereby facilitating m6A methylation (42,51). Additionally, RBM15 extends its impact beyond alternative mRNA splicing and nuclear transport, encompassing a spectrum of biological functions mediated by m6A methylation. These functions include Xist-mediated chromosome inactivation and mediation of the degradation of the chromatin remodeling factor BAF155 (19,34,35,54). In addition, RBM15 is an oncogene in most cancers, and downregulation of RBM15 can effectively inhibit cancer progression (37,39,81). Based on emerging evidence of their roles in cancer and their molecular mechanisms, m6A regulators have attracted increasing attention from researchers as therapeutic targets.
Certainly, RBM15 also assumes a significant role in non-neoplastic diseases. For instance, RBM15 can accelerate the progression of diabetic nephropathy by modulating cell proliferation, inflammation, and oxidative stress through activation of the AGE-RAGE pathway (82). In addition, RBM15 triggers abnormal immune responses and lymphopenia, thereby exacerbating inflammatory reactions in COVID-19 through the regulation of multiple downstream target genes (83). Consequently, RBM15 holds promise as a prospective target for the treatment of malignancies and a wide range of diseases. Manipulation of RBM15 levels, whether through direct or indirect means, is expected to improve patient prognosis in the future.
In this article, we have provided a comprehensive review of RBM15 expression in various cancer types, exploring its impact on prognosis and the underlying molecular mechanisms. However, RBM15-targeted therapies are still in their infancy, and there are significant gaps in the understanding of RBM15's upstream regulation and downstream targets. It remains unclear whether its effects in the various tumor models occur through the m6A pathway, which is crucial for clinical translational applications as well as the development of disease therapies. It should be emphasized that although RBM15 has a specific SPOC domain, the relevance of its structure to disease and the specific upstream and downstream regulatory mechanisms are still rarely addressed. In the future, more attention could be paid to how SPOC domains are removed or lose their functions, to understanding how the SPOC domain regulates the synthesis and fate of mRNAs at the molecular level, and potentially to discovering targets for new therapeutic approaches. In addition, RBM15B is an analog of RBM15 that, together with RBM15, plays a key recruiting role in the process of m6A methylation. However, the synergistic effects of RBM15B with RBM15 in disease regulation, as well as their potential to jointly induce cancer and promote tumor cell growth, have rarely been analyzed in detail. In the future, additional transgenic mouse models may be needed to test the specific roles of these two analogs and their synergistic effects in vivo.
In conclusion, RBM15 is involved in a wide range of biological processes, and its importance in cancer regulation is increasing. Therefore, it is imperative to elucidate the complex roles of RBM15 in cancer and to exploit its potential in targeted tumor therapy to bridge the gap between research findings and clinical translation, ultimately improving the prognosis of cancer patients.
Table 1. Molecular regulatory mechanisms of RBM15 in cancer.
Belt Uniform Sowing Pattern Boosts Yield of Different Winter Wheat Cultivars in Southwest China
The relationship between sowing patterns and yield performance is a valuable topic for food security. In this study, a novel belt uniform (BU) sowing pattern was reported, and a field experiment with four winter wheat cultivars was carried out over three consecutive growing seasons to compare the dry matter accumulation, harvest index (HI), grain yield and yield components under BU and line and dense (LD) sowing patterns [BU sowing with narrow (15 cm) spacing; BU sowing with wide (20 cm) spacing; LD sowing with wide (33.3 cm) row spacing; LD sowing with narrow (16.6 cm) row spacing]. The four cultivars produced a higher mean grain yield (GY), above-ground biomass (AGB) and spike number (SN) per m 2 under the BU sowing patterns than the LD sowing patterns in all three growing seasons. However, yield stability under the BU sowing patterns did not increase with the improved grain yield. The HI did not change with sowing patterns, and the contribution of above-ground biomass to grain yield (84%) was more than 5-fold higher than that of HI (16%). Principal component and correlation analyses indicated that the grain yield was positively correlated with the above-ground biomass and SN, while the HI and 1000-grain weight were not correlated with grain yield. We concluded that (1) the novel BU sowing patterns achieved a higher yield potential in winter wheat but did not further improve yield stability; (2) increasing the dry matter accumulation without changing the HI drove improvements in the SN and grain number per spike, thus increasing grain yield.
Introduction
Wheat is one of the most important cereal crops grown worldwide [1] and the third most important staple crop in China. Yield improvement in winter wheat has been realized by improving the harvest index (HI) [2-4], dry matter accumulation and radiation use efficiency [5-9]. Increasing the HI has been the main driver of yield improvements through wheat breeding [5,7,10]. Although the theoretical limit of the HI is 0.62 for wheat [11], HI values have not progressed since the 1990s, when they reached ~0.50-0.55 [5]. Thus, it is difficult to further improve yield by raising the HI, because this has reached a realistic limit of ~0.5 [5,7]. Future yield improvement should rely on achieving greater above-ground biomass (AGB), while at least maintaining present HI levels [7]. The relative contributions of the AGB and HI to the grain yield are not well known.
Most studies have concentrated on the role of fertilizer schemes and planting density in yield performance [12,13], but previous studies showed that new sowing patterns such as raised-bed sowing [14] and furrow sowing [15] could increase the yields of winter wheat [16,17] and other crops [18,19]. Li et al. [15,20] reported a wide-and-narrow furrow planting pattern which could increase the winter wheat grain yield in the North China Plain. Recently, a new sowing method, the wide-precision planting pattern (sowing width of 6-8 cm), was reported; the authors proposed that this new method, combined with deficit irrigation, could maximize winter wheat production in the North China Plain [20]. Thus, the development of new cultivation measures contributes hugely to increasing winter wheat yields. Moreover, previous studies have shown yield improvements associated with increases in radiation use efficiency [6], biomass [8,9] and spike number (SN) per square meter [20,21].
South China is one of the three main winter wheat production areas in China [10]. In this study, we report a novel sowing pattern; the yield, yield components and biomass accumulation were compared among four sowing patterns, and the relative contributions of the HI and AGB to yield performance were identified. The following hypotheses were tested: (1) the novel sowing pattern could improve the winter wheat grain yield, and this improvement would be associated with an increase in AGB rather than HI; (2) the yield improvement would be associated with increases in the spike number m −2 (SN), grain number per spike (GN) or 1000-grain weight (TGW).
Materials and Growth Conditions
A field experiment was conducted in Guiyang, Guizhou province, China. The mean temperatures were 10.2, 9.5 and 11.0 °C, whereas the precipitation was 377, 316 and 367 mm for the 2017-2018, 2018-2019 and 2019-2020 growing seasons, respectively (Figure 1). The soil in the field was classified as yellow soil according to the Chinese soil classification system. The soil organic matter and soil available P, N and K were 6.60 g kg −1 , 4.85 mg kg −1 , 31.50 mg kg −1 , and 141.69 mg kg −1 in 2017, respectively.
Four sowing patterns were used in this study (Figure 2): Line and dense (LD) sowing with wide (33.3 cm) row spacing (LDW, the conventional sowing pattern); Line and dense (LD) sowing with narrow (16.6 cm) row spacing (LDN); Belt uniform (BU) sowing with narrow (15 cm) row spacing (BUN); and Belt uniform (BU) sowing with wide (20 cm) row spacing (BUW). Four wheat cultivars (Guinong 19 (G19); Guinong 30 (G30); Guizi 1 (G1); and Guizi 4 (G4)) were used in three consecutive winter wheat growing seasons. G19 and G30 were common cultivars; G1 and G4 were high-anthocyanin (about 50 mg kg −1 ) winter wheat cultivars. A completely random experimental design was used with three replicates per cultivar per sowing pattern; the area of each plot was 4 m 2 (2 m × 2 m). The plant density was 225 plants per m 2 . Before sowing, 160 kg N ha −1 , 35 kg P ha −1 and 35 kg K ha −1 were applied according to local practice. Pesticides were used when necessary and the weeds were controlled by hand. The sowing and harvest dates were November 5 in 2017 and May 11 in 2018 for the 2017-2018 growing season, November 19 in 2018 and May 18 in 2019 for the 2018-2019 growing season, and November 23 in 2019 and May 23 in 2020 for the 2019-2020 growing season.
Harvest and Measurement of Parameters
When the winter wheat reached maturity, two rows in the center were harvested. The shoots were cut about 1 cm above the soil surface, placed in nylon bags, tagged and transported to the lab for further measurement.
The spikes were cut from the shoots, and the leaf and stem were combined and dried at 80 • C for 48 h; then, the dry weights of the spike and the leaf + stem were evaluated. The dry weight was obtained by adding together the dry weights of the spike, leaf and stem. The number of spikes was counted for each sample in each plot; then, a subsample (ten representative spikes) was chosen to determine the grain numbers for each spike. The remaining spikes were threshed to obtain the grains and those were combined with the grains of the sub-samples to determine the grain yield for each plot. The grain yield (g m −2 ) was obtained by dividing the grain yield for each sample for each plot by the harvested area. The harvest index (HI,%) = (grain yield × 100)/AGB.
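The trait definitions above reduce to simple per-plot arithmetic; a minimal sketch (the function and the example numbers are illustrative, not data from the paper):

```python
def yield_traits(spike_dw_g, leaf_stem_dw_g, grain_g, harvested_area_m2):
    """Compute AGB (g m-2), grain yield (g m-2) and harvest index (%)
    from per-plot dry weights, following the definitions in the text."""
    agb = (spike_dw_g + leaf_stem_dw_g) / harvested_area_m2  # spike + leaf + stem
    gy = grain_g / harvested_area_m2
    hi = gy * 100.0 / agb  # HI (%) = (grain yield x 100) / AGB
    return agb, gy, hi

# Hypothetical plot: 1.2 m2 harvested, 950 g spikes, 1050 g leaf + stem, 760 g grain
agb, gy, hi = yield_traits(950, 1050, 760, 1.2)  # hi is about 38 %
```

Note that the harvested area cancels out of the HI, so the HI depends only on the grain-to-biomass ratio of the sample.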
Statistics Analysis
A two-way analysis of variance (ANOVA) for each year was conducted using GenStat 19.0 (VSN International Ltd., Rothamsted, UK) to analyze the effects of sowing patterns and cultivars and their interaction on the grain yield, yield components and biomass accumulation. Principal component analysis (PCA) was also conducted using GenStat 19.0, based on grain yield, yield formation traits and yield components. All figures were generated using Origin 2020 (OriginLab, Northampton, MA, USA); the elements of yield formation are presented using a histogram, the relationships are shown using scatter plots, and the stability of each cultivar under the different sowing patterns is presented using a bubble chart.
The data for AGB, HI and grain yield were combined and used to evaluate the contribution of AGB and HI to winter wheat grain yield as follows: A 1 = B 1 × S X1 /S GY ; A 2 = B 2 × S X2 /S GY ; C AGB = A 1 /(A 1 + A 2 ) × 100%; C HI = A 2 /(A 1 + A 2 ) × 100%, where A 1 and A 2 represent the standardized coefficients of AGB and HI; B 1 and B 2 represent the coefficients of AGB and HI in the partial regression equation; and S X1 , S X2 and S GY represent the standard deviations of AGB, HI and grain yield, respectively. The contributions of AGB and HI to grain yield are represented by C AGB and C HI .
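The contribution calculation can be sketched numerically: fit the two-predictor partial regression by least squares, standardize the coefficients by the trait and yield standard deviations, and normalize. The data below are synthetic and purely illustrative, not the paper's measurements:

```python
import numpy as np

def contributions(agb, hi, gy):
    """Standardized partial-regression coefficients A_i = B_i * S_Xi / S_GY
    and the percentage contributions of AGB and HI to grain yield.
    Absolute values are taken when normalizing, in case a coefficient is negative."""
    X = np.column_stack([np.ones_like(agb), agb, hi])
    b0, b1, b2 = np.linalg.lstsq(X, gy, rcond=None)[0]
    a1 = b1 * agb.std(ddof=1) / gy.std(ddof=1)
    a2 = b2 * hi.std(ddof=1) / gy.std(ddof=1)
    total = abs(a1) + abs(a2)
    return 100 * abs(a1) / total, 100 * abs(a2) / total

# Synthetic plot-level data: GY = AGB x HI plus noise, AGB varying much more than HI
rng = np.random.default_rng(0)
agb = rng.normal(1200, 200, 48)        # g m-2
hi = rng.normal(0.47, 0.02, 48)        # harvest index as a fraction
gy = agb * hi + rng.normal(0, 20, 48)  # g m-2
c_agb, c_hi = contributions(agb, hi, gy)  # AGB dominates, as in the paper's 84%/16%
```

Because AGB varies far more than HI in this toy data, the standardized AGB coefficient dominates, mirroring the paper's five-fold contribution gap.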
Yield Performance
The HI was significantly related to the genotypes of the wheat, but the effect of genotype on the AGB and grain yield (GY) was irregular. The sowing pattern significantly affected the AGB, HI and grain yield, and the interaction with genotype significantly influenced the AGB, HI and grain yield during the 2017-2018 and 2018-2019 growing seasons (Table 1). Each cultivar produced the highest yield under the BU sowing conditions (Table 1; Figure 3A-C). Compared with the LDN sowing pattern, the average yield and AGB under the BU patterns were nearly double (Figure 3). The HI did not change with the sowing pattern, but the AGB was significantly increased under the BU patterns (Figure 3). The contribution of AGB to yield (84%) was more than five-fold higher than that of HI (16%; Table 2).

Table 1. The above-ground biomass (AGB, g m −2 ), harvest index (HI, %) and grain yield (GY, g m −2 ) and their means for four wheat genotypes under four different sowing patterns during three consecutive growing seasons. LSD values at p = 0.05 are in parentheses. n.s not significant, * p < 0.05, ** p < 0.01, *** p < 0.001.

Table 2. The variation in grain yield, above-ground biomass (AGB) and harvest index (HI) and the contribution of AGB and HI to grain yield in winter wheat grown from 2017-2020. *** p < 0.001.

The yield components were significantly affected by genotype, sowing pattern and their interaction (Table 3). All genotypes had a higher SN per unit area in the BU sowing patterns with narrow or wide row spacing (Table 3; Figure 4A-C). The effects of genotype and sowing pattern on grain number per spike were not regular. The genotypes made a regular and significant difference to the 1000-grain weight, however (Figure 4). G4 had the highest 1000-grain weight, followed by G19 and G30, while that of G1 was the lowest (Table 3).
Relationships between Yield and Its Components
The increase in grain yield was significantly positively correlated with the AGB under different sowing patterns (r = 0.98, p < 0.001). All genotypes produced a higher mean AGB and grain yield under the BU sowing patterns than under both the LDW and LDN sowing patterns (Figure 5A). The correlation between SN per unit area and grain yield mirrored that between AGB and grain yield (r = 0.69, p < 0.001; Figure 5B). There was also a positive correlation between grain yield and grain number per single spike (r = 0.51, p < 0.001), although the grain number per spike under the BU patterns was not always higher than under the other patterns and showed an irregular trend (Figure 5C). The HI and 1000-grain weight were not correlated with grain yield, and the HI was not significantly different among the four sowing patterns (Figure 5D,E). The mean grain yield across the 3 years was significantly improved by using the new sowing methods, but the yield stability did not increase with the grain yield improvement, except in G19 (Figure 6).
A PCA (Figure 7) based on yield formation traits and yield components indicated that the AGB and SN per m 2 , which were highly correlated with PC1, were positively correlated with grain yield. PC2 was correlated with the HI and 1000-grain weight, the inherent features of genotype. The scores obtained under the BU sowing patterns were closer to the SN, AGB and yield loadings than those under the LD conditions.
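A PCA of this kind (standardize the trait columns, then decompose) can be sketched with numpy's SVD; the trait matrix below is synthetic, with AGB and SN made to co-vary and HI kept independent, purely to illustrate the pattern described:

```python
import numpy as np

def pca_loadings(X, n_components=2):
    """PCA on standardized columns via SVD; returns the component loadings
    and the fraction of total variance each component explains."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    var = s**2 / (len(X) - 1)
    return Vt[:n_components], (var / var.sum())[:n_components]

# Synthetic standardized traits: columns are AGB, SN (tracking AGB), and HI
rng = np.random.default_rng(1)
n = 48
agb = rng.normal(0, 1, n)
traits = np.column_stack([
    agb,                                    # AGB
    0.9 * agb + 0.3 * rng.normal(0, 1, n),  # SN, strongly correlated with AGB
    rng.normal(0, 1, n),                    # HI, independent of the other two
])
loadings, explained = pca_loadings(traits)
# PC1 captures the correlated AGB/SN pair; HI loads mainly on a later component
```

With a correlated AGB/SN pair and an independent HI, the first component absorbs the shared AGB/SN variance, which is the structure the paper's Figure 7 describes.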
Discussion
In this study, a novel sowing pattern was reported, and it improved grain yield by 29.5% across all genotypes and years in Southern China, compared with the improvements reported for other sowing methods such as raised-bed sowing (5.2%) [14], bed sowing (7.0%) [22] and wide-precision sowing (6.7-12.7%) [6]. These results not only highlighted the importance of developing new sowing methods to increase grain yield but also indicated that the new sowing method implemented in this study vastly improved yield. Moreover, we found no significant differences in the yield obtained from the four winter wheat cultivars under the BU sowing patterns, indicating that these four cultivars had similar yield potentials and that proper sowing methods could help to realize this high yield potential. However, the yield stability did not improve with increases in grain yield in three of the four cultivars. Thus, more work, such as improved fertilizer management, will be required to improve the yield stability [13].
Previous studies showed that yield improvement was associated with increases in radiation use efficiency [6], biomass [8,9], leaf area index [23] and yield components [10,20]. In this study, however, we focused on the response of the biomass and yield components and their roles in yield performance under different sowing patterns. The AGB and HI are the two main factors that determine grain yield, as indicated by the equation: grain yield = (AGB × HI)/100 [24]. In this study, the biomass increased by 24-38%, while the grain yield increased by 25-34% under the new sowing patterns; the HI, however, was almost the same across the four sowing patterns (0.45-0.48). Moreover, the contribution of the AGB to grain yield was more than five-fold higher than that of the HI. These results clearly showed that (1) increasing the biomass was the main driver of grain yield improvement under the BU sowing patterns and (2) it was hard to increase grain yield by increasing the HI, which was already almost 0.5. Solar radiation is essential for plant growth under field conditions, and the leaf area index is the main factor determining the absorption of solar radiation [6]. Thus, the large amount of biomass may have resulted from an increase in the leaf area index [23], a high radiation interception ratio and use efficiency [6,25] and better land utilization related to narrow row spacing. Reasonable row spacing can improve the spatial arrangement of the plants and help to increase the radiation interception ratio and use efficiency [26,27].
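The yield identity above can be illustrated with a short numerical sketch. The AGB value below is hypothetical, not the study's data; HI is taken as a fraction, so the identity reduces to grain yield = AGB × HI:

```python
# Hypothetical illustration of the yield identity: grain yield = AGB x HI
# (the paper writes GY = (AGB x HI)/100 with HI expressed as a percentage).
agb = 18.0  # aboveground biomass, t/ha (illustrative value, not from the study)
hi = 0.47   # harvest index as a fraction (the paper reports 0.45-0.48)

grain_yield = agb * hi
print(round(grain_yield, 2))  # 8.46 t/ha

# A 30% biomass gain raises yield proportionally, while an HI near 0.5
# leaves little headroom -- the point made in the text.
print(round(agb * 1.30 * hi, 2))  # 11.0 t/ha
```

This makes the paper's argument concrete: with HI effectively capped, biomass is the only lever left for yield gains.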
Increases in yield components, such as TGW, SN and GN, have contributed significantly to yield improvement in winter wheat worldwide [7,10,28-30]. In this study, we observed variations in the 1000-grain weight among the different winter wheat cultivars, but these did not contribute to yield performance, indicating that the 1000-grain weight is a specific property of each cultivar on which the sowing patterns had little effect. Both the SN per m² and the grain number per spike were positively correlated with yield performance, indicating that these two traits could be altered through breeding and field management in winter wheat. Thus, developing new farming methods combined with high-yield-potential winter wheat cultivars could further increase grain yield [9]. Furthermore, we found that the SN per m² was more important than the grain number per spike for yield performance, and 51% of the variation in the SN per m² was explained by the high accumulation of biomass. Moreover, changing the row spacing also contributed to increasing the SN per m² [26,27].
Conclusions
In this study, the BU sowing patterns significantly increased the grain yield of winter wheat in three consecutive growing seasons. The increase in aboveground biomass, not the harvest index, was the main driver of the improvement in grain yield, and the increase in aboveground biomass was associated with the improvement in spike number per m². The traits fixed by the wheat genotype, such as HI, grain number per spike and 1000-grain weight, did not change, or changed irregularly, under the different sowing patterns.
|
2021-11-03T15:18:01.085Z
|
2021-11-01T00:00:00.000
|
{
"year": 2021,
"sha1": "790313d0991b6638d942e8df5efeb58099498747",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0472/11/11/1077/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "374f8ae408b88b4680650bb174786c29c4a83615",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
|
235411877
|
pes2o/s2orc
|
v3-fos-license
|
Initial experience of endoscopic ultrasound‐guided antegrade covered stent placement with long duodenal extension for malignant distal biliary obstruction (with video)
Abstract Background/Purpose This study aimed to evaluate the feasibility of endoscopic ultrasound (EUS)‐guided antegrade covered stent placement with long duodenal extension (EASL) for malignant distal biliary obstruction (MDBO) with duodenal obstruction (DO) or surgically altered anatomy (SAA) after failed endoscopic retrograde cholangiopancreatography (ERCP). Methods Outcomes were technical and clinical success, reintervention rate, adverse events, stent patency, and overall survival. Inverse probability of treatment weighting (IPTW) and competing‐risk analysis were performed to compare with conventional EUS‐BD. Results Twenty‐five patients (DO, n = 18; SAA, n = 7) were included. The technical and clinical success rates were 96% and 84%, respectively. Reintervention occurred in two patients (8.3%). Adverse events occurred in six patients (24%; two cholangitis, 16%; four mild postprocedural pancreatitis [24% (n = 4/17) in patients with non‐pancreatic cancers]). The median patency was 9.4 months, and the overall survival was 2.73 months. After IPTW adjustment, the median patency in the EASL (n = 25) and conventional EUS‐BD (n = 29) were 10.1 and 6.5 months, respectively (P = .018). Conclusions EASL has acceptable clinical outcomes with a low reintervention rate but higher rate of postprocedural pancreatitis in patients with non‐pancreatic cancers. Randomized trials comparing EASL and conventional EUS‐BD for MDBO with pancreatic cancers and DO/SAA after failed ERCP are needed to validate our findings.
| INTRODUCTION
Endoscopic retrograde cholangiopancreatography (ERCP) with self-expanding metal stent (SEMS) placement has been the primary choice for the palliation of malignant distal biliary obstruction (MDBO) owing to its long patency duration. 1 However, ERCP with transpapillary metal stenting is not always successful in patients with duodenal obstruction (DO) or a surgically altered anatomy (SAA). Conventionally, the percutaneous approach has been used after a failed ERCP; however, it is associated with considerable morbidities and an adverse event rate of up to 33%. 2 Endoscopic ultrasound (EUS)-guided biliary drainage (EUS-BD) may be preferred for its better clinical success, lower adverse event rate, and fewer reinterventions than the percutaneous approach after a failed ERCP. 3 EUS-guided hepaticogastrostomy (EUS-HGS) with transmural or antegrade stenting has been suggested as a practical alternative for patients with DO or SAA after a failed ERCP. 4 However, stent dysfunction related to sludge impaction in EUS-HGS with transmural covered metal stenting and tumor ingrowth in EUS-guided antegrade uncovered metal stenting (EUS-AGUS) are not uncommon. Furthermore, reflux of gastroduodenal contents such as food material can lead to stent dysfunction induced by sludge formation or ascending infection when the stent crosses the main duodenal papilla. 5 To simultaneously prevent reflux cholangitis and tumor ingrowth, percutaneous antegrade placement of the distal end of the stent at the third portion of the duodenum has been proposed for MDBO. 6 This study aimed to evaluate whether EUS-guided antegrade covered metal stent with long duodenal extension (EASL) in patients with unresectable MDBO after a failed ERCP can reduce the reintervention rate for stent dysfunction due to reflux cholangitis and tumor ingrowth without increasing the adverse events.
| Patients
This was a retrospective pilot study with a single participating center for EASL (Asan Medical Center, Korea). From September 2016 to June 2018, patients with unresectable MDBO and failed ERCP owing to DO or SAA who were unsuitable for EUS-guided choledochoduodenostomy were consecutively enrolled in this study. The patients were treated using EASL (with a fully covered metal stent measuring 8 mm in diameter and 11-13 cm in length). Patients with coagulopathy (international normalized ratio ≥3, platelet count ≤50,000/mm³) or age <18 years were excluded. The primary outcome was technical success. The secondary outcomes were clinical success, reintervention rate, adverse events, stent patency, and overall survival. The results were compared with those of conventional EUS-BD performed during the same period in three centers (Gifu University, Japan; Kindai University, Japan; and The University of Tokyo, Japan). This study was approved by the institutional review board (IRB) of each hospital (Asan Medical Center IRB approval number: 2018-0562, Kindai University IRB approval number: 30-149, Gifu University: 2018-084, University of Tokyo: 2018125NI).
| Procedure
Endoscopic retrograde cholangiopancreatography and the subsequent EUS-BD were performed by one expert with experience of >5,000 cases of ERCP and at least 125 cases of EUS-BD before the study period. 7 The detailed procedures of EASL are described in Figure 2 and Video 1. In brief, after puncturing the left intrahepatic duct with a 19-gauge EUS needle and crossing the distal bile duct stricture with placement of a guidewire in the duodenum, the guidewire was straightened in the bile duct and coiled in the distal duodenum for pushability and an easier procedure. After dilation of the fistula tract and distal biliary stricture with a 4-mm Hurricane balloon catheter (Boston Scientific), a fully covered SEMS was deployed with a long duodenal extension, with at least 5 cm of the stent secured in the second and third portions of the duodenum (Figure 2A-E).
For the other EUS-guided drainage procedures in the conventional EUS-BD group (n = 29; 15 in Kindai University, nine in The University of Tokyo, and five in Gifu University), EUS-guided hepaticogastrostomy (EUS-HGS) with transmural stenting or EUS-guided antegrade uncovered metal stenting (EUS-AGUS) was performed at the discretion of each endoscopist.
| Definitions
Technical success was defined as satisfactory transpapillary deployment of the stent across the papilla with a long duodenal extension (Figure 2E). Clinical success was defined as a decrease in bilirubin level to normal or to less than a quarter of the pretreatment value within the first month. 8 Reintervention was defined as any type of endoscopic or percutaneous procedure for relieving stent obstruction. Stent obstruction requiring reintervention was diagnosed when a patient developed cholangitis or jaundice and/or when bile duct dilation was evident in imaging studies. 9 Stent patency duration was defined as the period from the initial stent placement to the recurrence of stent obstruction requiring reintervention. 9 Stent migration was defined as any displacement of the stent into the bile duct (proximal migration) or the duodenum (distal migration). 9 Overall survival was calculated from the day of stent insertion to the last day of follow-up or death. Adverse events were classified according to the lexicon for endoscopic adverse events proposed by consensus guidelines. 10
| Statistical analysis
Descriptive statistics, including means, standard deviations, and percentages, were calculated. Categorical parameters are expressed as frequencies and proportions and compared using the chi-square test or Fisher's exact test. We estimated the cumulative stent patency and overall survival using the Kaplan-Meier method. All reported P-values are two-sided, and a P-value of <.05 was considered to indicate statistical significance. Data were analyzed using R version 3.5.3 (R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org). The results were additionally analyzed for comparison with conventional EUS-BD.
To reduce the impact of treatment selection bias and potential confounding in an observational comparison between the EASL and conventional EUS-BD groups, the inverse probability of treatment weighting (IPTW) method based on propensity score analysis was used. With this technique, the weight for patients receiving the treatment (EASL, treatment = 1) was the inverse of the propensity score, and the weight for patients not receiving it (treatment = 0) was the inverse of (1 − propensity score). The propensity score was estimated with treatment assignment as the dependent variable in a multiple logistic regression analysis that included all the variables in Table 3. Absolute standardized differences were used to diagnose the balance after propensity analysis. All absolute standardized differences after IPTW were <0.2.
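The weight construction described above can be sketched as follows. The propensity scores and treatment assignments below are synthetic, for illustration only; this is not the study's code:

```python
import numpy as np

# treated: 1 = EASL, 0 = conventional EUS-BD (hypothetical assignment)
treated = np.array([1, 1, 0, 0, 1])
# Propensity scores, e.g. fitted by logistic regression on the Table 3 covariates
ps = np.array([0.8, 0.6, 0.3, 0.5, 0.4])

# IPTW: 1/ps for treated patients, 1/(1 - ps) for controls
weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
print(np.round(weights, 3))  # weights: 1.25, 1.667, 1.429, 2.0, 2.5
```

Patients who received a treatment they were unlikely to receive get large weights, which rebalances the two groups on the measured covariates before the outcome models are fitted.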
To assess the treatment effect, we performed IPTW-adjusted logistic or Cox model analysis with robust standard errors, as appropriate for the outcome. In addition, Fine and Gray competing-risk analysis was performed, in which death during the follow-up was considered a competing event for assessing reintervention.
| Baseline characteristics
A total of 25 patients were included in this study. The baseline characteristics are summarized in Table 1. The reasons for failed ERCP were DO in 18 patients (complete DO requiring duodenal stenting, n = 7) and SAA in seven patients (three total gastrectomies with Roux-en-Y gastrojejunostomy, four Billroth II and Roux-en-Y anastomoses). Among the 18 patients with duodenal obstruction, seven had DO type I and 11 had DO type II. Among the seven patients with duodenal stenting, the types of DO were type I (n = 3) and type II (n = 4), and an uncovered stent was deployed in all seven patients. EASL was performed after duodenal stent placement in four patients (median 18 days; one with type I and three with type II DO [Figure 2E]), and vice versa in three patients (median 6 days; two with type I and one with type II DO [Figure 3]).
| Primary outcome
The technical success rate for EASL was 96% (n = 24/25). Technical failure occurred in one patient in the EASL group because the guidewire could not pass the stricture site of the common bile duct owing to complete obstruction by pancreatic cancer. The patient was managed with EUS-HGS with transmural metal stenting. All cases were suitable for EUS-HGS. However, for the evaluation of stent patency and reintervention rate of EASL, EUS-HGS with transmural stenting during the same session was not attempted. The median procedure time was 20.5 minutes (interquartile range 11.25).
| Secondary outcomes
Clinical success was achieved in 21 patients (84%). In the other four patients, the reasons for clinical failure were advanced disease in three patients and technical failure in one patient. With respect to adverse events, six occurred in the EASL group (two cases of cholangitis due to stent malfunction and four cases of postprocedural pancreatitis in non-pancreatic cancer patients). Reintervention for stent obstruction occurred in two cases (8.3%), which were managed with percutaneous transhepatic biliary drainage. The four cases of postprocedural pancreatitis were all mild and improved with conservative treatment. Two cases of spontaneous distal migration (the stent passed with the stool), related to shrinkage of the tumor after chemotherapy, occurred during the follow-up (one in a patient with ampullary cancer and one in a patient with a pancreatic neuroendocrine tumor); however, reintervention was not needed because the patients improved with chemotherapy. The median observation period was 2.7 months (95% confidence interval [CI]: 2.62-5.65). The median patency was 9.4 months (95% CI: 7.96-not available), and the overall survival was 2.73 months (95% CI 2.43-7.86) (Figure S1). The clinical outcomes of EASL are summarized in Table 2.
Figure 3. Fluoroscopic images of the side-by-side placement of the biliary and duodenal metal stents (A-C), showing no contrast reflux into the covered biliary metal stent with long duodenal extension after injection of contrast via the duodenal metal stent (D).

The reintervention rate was lower in the EASL group than in the conventional EUS-BD group; however, the difference was of only marginal significance (hazard ratio 0.242, 95% CI 0.057-1.035; P = .056) (Table 3). After IPTW adjustment, the median stent patency in the EASL and conventional EUS-BD groups was 10.1 and 6.5 months, respectively (P = .018; Figure 4).
| DISCUSSION
EUS-HGS with transmural stenting has a potential risk of serious adverse events, such as proximal migration of the stent, whereas EUS-AGUS has a risk of tumor ingrowth. 11 Furthermore, obstruction by sludge or stones often occurs despite EUS-HGS with transmural stenting to bypass the stricture. 12 Therefore, in conventional EUS-BD in patients with DO or SAA after a failed ERCP, reintervention related to stent dysfunction may frequently be required. In this pilot study, EASL showed a 96% technical success rate and an 84% clinical success rate, which were comparable to the rates of other EUS-guided biliary drainages. 13 The EASL group showed a low intervention rate (2/24) during the follow-up period, which may be lower than that of other modalities. 14 The median stent patency of the EASL group was 9.4 months, which seems to be comparable to a recent study on EUS-HGS with partially covered metal stenting, which showed a median stent patency of 6.3 months in 110 patients (75 with DO and 16 with SAA) with malignant biliary obstruction. 12 The low reintervention rate and comparable stent patency of EASL may have resulted from a decrease in reflux cholangitis ( Figure 3D) owing to the long duodenal extension and the prevention of tumor ingrowth by a covered metal stent. However, this interpretation may be premature because of the relatively small number of patients included in our pilot study.
Gwon et al introduced percutaneous antegrade placement of a double-stent system with an outer self-expanding uncovered stent and an inner expanded polytetrafluoroethylene (ePTFE)-covered stent for MDBO. 6 As the length was 21 cm, the distal end was located in the third or fourth duodenal portion or in the jejunum, similar to the concept of our study, although the stent was much longer in their study. However, 10 patients (23.8%) experienced stent occlusion by food and biliary sludge, which required reintervention. As it was a single study, the interpretation of the results is limited; however, the occlusion of the covered metal stent could be attributed to ePTFE, which is reported to be vulnerable to biofilm formation, or to the excessively long extension of the stent, which can increase antegrade flow resistance. 15 As we used an 11-13-cm-long stent in our pilot study, it can be presumed that the antegrade flow resistance would be lower than that with the stent used by Gwon et al. In terms of overall adverse events, four patients had mild postprocedural pancreatitis, which rarely occurs in EUS-HGS. In this study, 68% of the patients in the EASL group had non-pancreatic cancer, which is a risk factor for pancreatitis after metal stent placement. 16 None of the four patients with post-ERCP pancreatitis in our study had pancreatic cancer. Therefore, EASL may be considered for patients with pancreatic cancer after a failed ERCP. As another merit of EASL, a duodenal stent for accompanying type II DO could be safely inserted as the distal end of the biliary covered stent is placed in the third portion of the duodenum (Figure 2E and Figure 3). As usual, the biliary stent patency via the pre-positioned duodenal stent can be affected by the duodenal stent patency.

Table 3. Clinical outcomes using IPTW and propensity scores between the EASL and conventional EUS-BD groups.
In our pilot study, none of the three patients (0%, 0/3) with EASL and a pre-positioned duodenal stent in stent-in-stent fashion (duodenal stent first and EASL later in type II DO, Figure 2E) experienced recurrent biliary obstruction. Recurrent biliary obstruction after EASL without a duodenal stent was observed in two patients (11.8%, 2/17). However, with this small number of patients and the limited observation time, it is hard to tell whether biliary stent patency or recurrent biliary obstruction in EASL is affected by the patency of a pre-positioned duodenal metal stent.
A covered or uncovered duodenal metal stent could also be placed after EASL in a side-by-side fashion (EASL first and duodenal stent placement later in type II DO, Figure 3). In our pilot study, however, only one patient with type II DO had side-by-side EASL and uncovered duodenal metal stent placement. Therefore, future larger studies are required to evaluate the patency of side-by-side EASL with covered or uncovered duodenal metal stents and the ideal way of deploying both the biliary and the duodenal metal stent in EASL. This study had several limitations. First, this was a single-arm study with a small number of patients included for EASL. Therefore, we performed an additional comparative analysis with IPTW of propensity scores to overcome this matter, but the heterogeneity of the comparison group and the small patient number make it difficult to conclude that there is a benefit to our method; the present study may thus serve as a basis for further well-designed studies in the future. Second, a recent study proposed EUS-HGS combined with transmural and antegrade covered stenting for prolonged stent patency and reduced reinterventions. 11 However, the cost of the metal stents used for this procedure should not be neglected. Further comparative studies with a cost-effectiveness analysis between EASL and EUS-HGS combined with transmural novel plastic and antegrade stenting would be of interest. 17 Third, the inherent limitations of a retrospective analysis remain. Fourth, as the life expectancy of the patients was short, there was insufficient time to observe stent dysfunctions.
In summary, EASL has acceptable technical and clinical success rates, patency, and adverse events but a higher rate of postprocedural pancreatitis in patients with non-pancreatic cancers. The low number of reinterventions may be an attractive pilot study result warranting further studies on EASL. Future randomized trials comparing the EASL and conventional EUS-BD in patients with unresectable pancreatic cancer with DO or SAA after a failed ERCP are needed to validate our findings.
|
2021-06-13T06:16:31.483Z
|
2021-06-12T00:00:00.000
|
{
"year": 2021,
"sha1": "af57f0cf49b4f7cce82ed55017d69aca0761561e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1002/jhbp.1011",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "b2f7954fcf3661ba2440a31c2afda70ce59a8e71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
73563205
|
pes2o/s2orc
|
v3-fos-license
|
The impact of Migrant Workers’ Remittances on the Living Standards of families in Morocco: a Propensity Score Matching Approach
This article attempts to assess empirically the impact of remittances on household expenditure and relative poverty in Morocco. We apply propensity score matching methods to the 2006/2007 Moroccan Living Standards Measurement Survey. We find that migrants' remittances can improve living standards among Moroccan households and negatively affect the incidence of poverty. The results show a statistically significant and positive impact of those remittances on recipient households' expenditures. They are also significantly associated with a decline in the probability of being in poverty for rural households, which decreases by 11.3 percentage points. In comparison, this probability decreases by 3 points in urban areas.
Introduction
For several decades, the fight against poverty has been a major policy concern for national governments and international institutions. The Millennium Declaration of the United Nations (2000) placed the fight against poverty at the center of development policies. Morocco, like all other signatories of this declaration, committed itself to achieving measurable targets by 2015, among them the fight against poverty. 1

1 The other Millennium Development Goals (MDGs) relate to primary education, gender equality, reducing child mortality, improving maternal health, the fight against HIV/AIDS and other diseases, environmental sustainability, and creating a global partnership for development.

While some progress has been made in the eradication of extreme poverty, continuous and very substantial efforts are still needed to fight poverty and accelerate measures in the areas of education, health, gender equality, etc. According to a fairly large body of literature, private and public transfers often constitute a significant component of total household income and thus contribute to the reduction of income poverty and to increased investment in human capital in certain developing countries. This is the case, for example, of private transfers from migrant workers. In general, a rich literature on the welfare impacts of these private transfers highlights their positive effect on poverty reduction in the countries of origin by increasing household income and smoothing consumption (see, for example, Adams, 1991; Brown and Jimenez, 2007; Acosta et al., 2007; Gubert et al., 2010; Combes et al., 2011; Esquivel and Huerta-Pineda, 2006; Adams and Page, 2005). At the macro level, Anyanwu and Erhijakpor (2010) used a panel data set on poverty and international remittances for 33 African countries to examine the impact of international remittances on poverty reduction over the period 1990-2005. They found that international remittances reduce the incidence, depth and severity of poverty in African countries. Adams and Page (2005), in their broader analysis of the impact of international migration and remittances on poverty indicators in 71 developing countries, showed that a 10 percent increase in the proportion of international migrants in the country of origin leads to a 2.1 percent fall in the number of people living on less than 1 US$ a day. Similar conclusions were also drawn at the micro level by Adams (1991). The author finds that in Egypt the number of poor rural households declines by 9.8 percent when they receive international remittances. However, the link between international migration and poverty needs to be probed, especially if a majority of migrants come
from the wealthiest households, because migration is selective on age, gender, wealth, etc. It is argued that the selectivity of migration is one of the key determinants of the returns to international migration and thus of its effect on poverty reduction. In reality, as De Haas (2007) suggests, if migration is a selective process, most direct benefits of remittances are also selective, tending not to flow to the poorest members of communities. In other words, if migrants are not drawn from the lowest quintiles of the income distribution in their country of origin, the impact of migration on poverty might not be direct and immediate, and its effects on structural poverty are likely to occur through substantial indirect effects (Kapur, 2004). Recently, these challenges have given rise to innovative methods for estimating the possible impacts of remittances on poverty in recipient countries.
The counterfactual approach usually taken in the migration and remittances literature focuses on estimating the household income level that would have prevailed in the absence of migration and comparing it with actual household income with remittances (Adams, 1991; Brown and Jimenez, 2007; Gubert et al., 2010; Acosta et al., 2007). Esquivel and Huerta-Pineda (2006) analyzed the relationship between international migration and poverty in Mexico by comparing incomes and poverty rates among remittance-receiving households with those estimated for similar households that do not receive remittances. They find that receiving remittances reduces a household's probability of being in poverty by 6-8 percentage points.
In the past two decades, remittances by Moroccans residing abroad have increased. According to data from the World Bank, remittance inflows reached more than 7.25 billion US$ in 2011. In addition, migrant workers' remittances remain an important source of financing for the Moroccan economy (7.28 percent of Morocco's gross domestic product in 2011) and one of the main means of ensuring recipient family income. In fact, the well-being of households may be affected by international migration; for example, it is estimated that, in 2007, approximately 13 percent of rural incomes in Morocco depended on migrants' remittances. Thus, after the consumption of food products, health and education constitute the main priorities in terms of household expenditure.
The existing studies on the relationship between Moroccan migration and poverty are rare. To the best of our knowledge, there is a single study on the subject (Bourchachen, 2000). The author suggests that international remittances have decreased the number of Moroccans living in poverty from 6.5 million to 5.3 million. Our contribution proposes to estimate the effect of these financial flows on households' welfare levels by carrying out a microeconometric analysis. In particular, we assess the impact of migrants' remittances on poverty and standards of living in Morocco using propensity-score matching (PSM) methods. These methods were initially used to evaluate whether a medical treatment has an effect. In our study, we consider the receipt of international remittances as a treatment. In reality, the heterogeneity of households and the problem of self-selection challenge the evaluation of the "real" effect of remittances on household expenditure and poverty. These problems can be overcome by exploring econometric methods such as the PSM approach. In this paper, we apply this method in order to obtain treatment effects of migrants' remittances on the well-being of remittance-recipient households. We also evaluate the extent to which selection bias on unobserved covariates would nullify propensity score matching estimates of the effects of migrants' remittances.
The rest of the paper is structured as follows. Section 2 describes the data and the variables under consideration. Section 3 explains our methodological procedure. The empirical results are then presented in Section 4. Section 5 provides an application of sensitivity analysis in order to judge the causality of the different results. The last section concludes.
Data and variables used in estimation
The data used in this paper are from the Moroccan Living Standards Measurement Survey (LSMS), which was implemented by the High Commission for Planning (HCP). 2 A detailed analysis of this household survey shows that 15 percent of households receive transfers from abroad. The average annual amount transferred exceeded 11,540 MAD in 2006-2007. The survey is based on a weighted sample of 7,062 households drawn from all regions of Morocco (1,079 households receive international remittances; the remaining 5,983 households in the sample did not benefit from such transfers). The descriptive analysis of the sample shows that remittances are a major component of recipient household income: the share of remittances in household expenditure is about 40 percent.
Table 1 depicts that remittances increase the annual expenditure of a recipient household.
Remittance-receiving households have more members with middle and high secondary education than non-remittance households; further, household heads are older in remittance-receiving households. Of all migrants, 66 percent transfer funds to Morocco. Furthermore, remittances are sent at very high frequencies: 36 percent of individuals sent twelve or more remittances over the sample period (at least monthly), 15.52 percent sent one or more, and 19 percent did not send remittances regularly. Table 2 presents the importance of remittances in the income distribution. As can be seen, the proportion of households receiving remittances increases from 13.9 percent of those in the lowest income quintile to 14.17 percent in the second quintile and 30.76 percent in the highest quintile (i.e., the 20 percent of households with the highest income). Interestingly, in the case of Morocco, it is possible that international migrants do not all come from the lowest quintiles of the income distribution. This outcome poses methodological challenges for researchers carrying out quantitative analyses of remittance impacts. In the spirit of counterfactual analysis with observational data, this study uses an econometric technique called propensity-score matching to gauge these impacts empirically. To do so, we consider two types of explanatory variables of household income:
-The socio-economic characteristics of the household: age, education and sex of the household head, a proxy for household income, the education level within the household (indicators for the proportion of household members with primary, middle and high secondary education, and higher education), and the area of residence (urban or rural). As we seek to estimate the welfare level of both urban and rural households, the productive capital held by households takes two forms: land and/or businesses.
- The characteristics of the commune of residence: we introduce the regional unemployment rate in order to control for the characteristics of the municipality.
We chose to measure the household's standard of living by its actual expenditure rather than its income. This choice is dictated by the fact that income is generally poorly measured, especially in rural areas. In addition, household expenditure can take into account price differences across municipalities. In our analysis, household expenditure includes food and tobacco, clothing, health care, housing, home furnishings, transportation, education, leisure and culture, and other goods. A household is considered poor if its members cannot cover their expenses. According to the HCP definition, a household is poor if its annual expenditure per person is less than or equal to 3,834 MAD (for households in urban areas) or 3,569 MAD (for households in rural areas). Nationally, in 2007, 8.9 percent of the population in Morocco was under this threshold (14.4 percent in rural areas and 4.8 percent in urban areas). As regards extreme poverty, Morocco has been successful in achieving Goal 1 of the Millennium Development Goals (MDGs) by reducing the number of people living in extreme poverty. According to statistics provided by the HCP, poverty at US$ 1 (PPP) per day per person declined from 3.5 percent in 1990 to 2 percent in 2001 and 0.6 percent in 2008.
Methodological Approach
Matching techniques aim to estimate the specific effect of a measure (here, the receipt of international remittances) on the situation of its beneficiaries. If beneficiaries were chosen based on a number of characteristics, the effect of the measure is not cleanly identified.
Matching methods thus try to correct this composition bias. In fact, remittance decisions could influence the living conditions of recipient households. In this case, households receiving remittances may differ from households that do not receive international transfers: the two populations are not alike. Therefore, it is necessary to ensure that the effect attributed to these financial flows is not due solely to the particular profile of remittance-recipient households.
To control for these potential biases, we construct, using the propensity score matching method, a population of households receiving remittances identical to the population of non-recipients, so that migration and transfers become a random event. If the observed differences are significant, they can be attributed to remittance inflows. Define an indicator variable T_i equal to one if household i receives transfers from abroad and zero otherwise. Y_i is the potential outcome variable, represented in our study by the poverty status of household i, defined on the basis of the national poverty line; Y_i0 represents the counterfactual outcome value when T_i = 0.
We define the average treatment effect on the treated group of households:

ATT = E[Y_i1 - Y_i0 | T_i = 1]

and the average treatment effect on the entire population:

ATE = E[Y_i1 - Y_i0],

where Y_i1 is the outcome when the household receives remittances (T_i = 1). (According to the HCP report (2010), in 2007 the relative poverty line per person per year was 3,834 MAD in urban areas and 3,569 MAD in rural areas, i.e., an average of US$ 2.15 PPP per person per day, with $1 PPP = 4.88 MAD.)
The difference E[Y_i0 | T_i = 1] - E[Y_i0 | T_i = 0] is a sampling bias due to a non-random sample of the population. In other words, the populations of recipient and non-recipient households are not identical. If we had used random assignment, the likelihood of bias would be reduced and there would be no systematic difference between treated and untreated units; in that case this term would equal zero. Consequently, to eliminate this sampling bias, Y_i0 and T_i must be independent. For this purpose, matching methods make the conditional independence assumption: conditional on observable variables X, assignment to treatment is random (Fougère, 2007, p. 111). It means that, conditional on X, the outcomes are independent of treatment, and thus the outcomes of non-treated units can be used to approximate the counterfactual outcome of treated units in the absence of treatment.
In practice, matching on a large number of characteristics is difficult, which is why propensity score matching (Rosenbaum and Rubin, 1983) is important: it provides a one-dimensional summary of all these characteristics, i.e., a propensity score.
The propensity score is defined by P(X) = Pr(T = 1 | X). If the untreated household ĩ is paired with the treated household i, the final estimator for the average treatment effect on the treated is obtained as the average of the differences between the situation of treated households and their counterfactuals:

ATT = (1/N) Σ_{i∈I} (Y_i - Y_ĩ),

where I is the subsample of treated households and N is the number of treated households.
Estimation using propensity score matching requires two steps. In the first step, we estimate the propensity scores of households with a logit or probit model containing the explanatory variables of the probability of receiving remittances: age, education and sex of the household head, a proxy for household income, the education level within the household (indicators for the proportion of household members with primary, middle and high secondary education, and higher education), area of residence (urban or rural), and the regional unemployment rate. The main results of the estimation of the probit model are presented in the appendix (Table A.2). In the second step, we estimate the average treatment effect on the treated (ATT). The final estimator for this average treatment effect is obtained as the average of the differences in the situation of treated households and their counterfactuals. The mean difference between the two groups should be statistically significant to speak of an effect of remittances on the households surveyed.
Many mechanisms can be used to find the non-recipient households whose propensity scores are close to those of recipient households. These include, among others, nearest neighbour matching and kernel matching. In practice, the nearest neighbour method chooses, for each recipient household, the counterfactual household that is closest in terms of propensity score. Nearest neighbours are not determined by comparing treated observations to every single control, but rather by first sorting all records by the estimated propensity score and then searching forward and backward for the closest control unit. With kernel matching, each treated unit is matched with a weighted average of all controls, with weights that are inversely proportional to the distance between the propensity scores of treated and controls (see Becker and Ichino, 2002). The nearest neighbour method requires a maximum distance between the propensity scores of treated households and their nearest neighbours (caliper) beyond which no matching occurs. The caliper threshold set in the analysis is 0.01.
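The two matching schemes above can be sketched in a few lines. This is a minimal illustration on simulated data, not the paper's LSMS sample; the function names, the toy data-generating process, and the kernel choice are our own assumptions.

```python
# Sketch of nearest-neighbour matching with a caliper and of kernel matching
# on an estimated propensity score. Toy data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def att_nearest_neighbour(ps, y, treated, caliper=0.01):
    """ATT via 1-nearest-neighbour matching on the propensity score."""
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    diffs = []
    for i in t_idx:
        d = np.abs(ps[c_idx] - ps[i])
        j = np.argmin(d)
        if d[j] <= caliper:               # discard poor-quality matches
            diffs.append(y[i] - y[c_idx[j]])
    return np.mean(diffs)

def att_kernel(ps, y, treated, bandwidth=0.05):
    """ATT via kernel matching: each treated unit is compared with a
    weighted average of all controls, weights decaying with distance."""
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    diffs = []
    for i in t_idx:
        u = (ps[c_idx] - ps[i]) / bandwidth
        w = np.maximum(0.75 * (1 - u**2), 0.0)   # Epanechnikov kernel
        if w.sum() > 0:
            diffs.append(y[i] - np.average(y[c_idx], weights=w))
    return np.mean(diffs)

# Toy data: the "treatment" raises the outcome by 2 on average.
n = 2000
x = rng.normal(size=n)
ps = 1 / (1 + np.exp(-x))                 # stand-in for a fitted logit/probit score
treated = (rng.uniform(size=n) < ps).astype(int)
y = 1.0 * x + 2.0 * treated + rng.normal(size=n)

print(att_nearest_neighbour(ps, y, treated))
print(att_kernel(ps, y, treated))
```

Both estimators should recover an effect close to the true value of 2, since matching on the propensity score balances the confounder x between groups.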
Econometric studies insist that the balancing property of the observed variables in the two groups (treated and counterfactual) should be satisfied in order to confirm the validity of matching (balancing tests for propensity score matching). In other words, equality of means (for each variable explaining the probability of receiving remittances) between the treatment and control groups must be ensured. We use the pstest command in Stata to test balancing and find that the balancing property of the propensity scores is satisfied (results are reported in Table A.2 in the Appendix).
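The kind of balance diagnostic produced by pstest can be illustrated with the standardized mean difference for a single covariate. The function below is a hedged sketch; the variable name and data are illustrative, not the paper's covariates.

```python
# Sketch of a covariate balance check: standardized mean difference
# (in percent), the quantity pstest-style diagnostics report per covariate.
import numpy as np

def standardized_difference(x_t, x_c):
    """Standardized mean difference in percent; |value| below roughly 10
    is conventionally taken as acceptable balance."""
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return 100 * (x_t.mean() - x_c.mean()) / pooled_sd

# Illustrative: identical samples are perfectly balanced (difference = 0),
# shifted samples show a large imbalance.
balanced = standardized_difference(np.array([1.0, 2.0, 3.0]),
                                   np.array([1.0, 2.0, 3.0]))
shifted = standardized_difference(np.array([1.0, 2.0, 3.0]),
                                  np.array([2.0, 3.0, 4.0]))
print(balanced, shifted)
```

In practice this statistic would be computed for every covariate in the propensity score model, before and after matching.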
Empirical results
Recall that our analysis evaluates the relative importance of international remittances in improving the living standards of recipient households and the financial contribution of migrants to the income of their households of origin. We start by deriving the estimations for all households and then apply the same specification to urban and rural areas separately. Table 3 presents the results of our first estimation. Firstly, it appears that the different matching methods provide very similar estimates. Secondly, the ATT is significant for all outcome categories (at the 1 percent level). Thirdly, the results based on the poverty indicator (outcome variable) show that remittances significantly reduce a household's probability of being in poverty, i.e., there is a negative (causal) effect of the receipt of remittances on the propensity of recipients to be poor. This effect takes values between 4.5 and 5.5 percentage points depending on the specification. (Notes to Table 3: for the kernel estimator, we applied the bootstrap to calculate the standard errors (50 replications); Abadie and Imbens (2006) show that bootstrapped standard errors are not valid for nearest-neighbour matching with a fixed number of neighbours. We impose the common support condition to reduce poor-quality matches. The psmatch2 command in Stata is used to estimate the different models. The caliper is 0.01, i.e., the maximum allowable distance between propensity scores (with nearest neighbour). Matching with the nearest neighbour is without replacement (a control household can only be chosen once in the construction of the counterfactual) and in descending order. Source: LSMS 2006/2007.)
These results confirm those obtained by the majority of studies on the subject (see, for example, Gubert et al., 2010, or Brown and Jimenez, 2007). It is important to mention that some studies have suggested that poor households can and do benefit indirectly from international migration, and also that the economic status of households could explain their use of remittances: richer households are more likely to invest remittances in various forms of productive investment, while poorer households spend a greater share of their income on durable goods, healthcare, and housing. Income and employment multipliers from remittances are quite high, and many of the indirect benefits do not accrue to migrant households themselves but to others. In other words, it is also necessary to take into account the indirect multiplier effects of migration and remittances on communities of origin as a whole (including households without remittances). This would require positive effects of international migration on employment, income, and production.
Table 3 also points out some key differences between households with and without migrants' transfers. It reveals that the expenditures of treated households are on average about 12,167 MAD per year higher (15,370 MAD with kernel matching) than those of the control households.
Using the matched subsamples, we can estimate the ATT for rural households as well as for urban households, following the same procedure as for the whole sample. As Table 4 shows, for rural households, remittances reduce the probability of being below the poverty line by 11.3 percentage points. In comparison, this probability decreases by only 2.8 points for urban households. This reveals significant variability in the average results when the ATT is estimated by area of residence. It is interesting to note that in Morocco poverty is most severe and most widespread in rural areas. In fact, the most recent data from national household surveys show that the majority of the country's poor still live in rural areas (14.4 percent in rural areas versus 4.8 percent in urban areas in 2007). ("… potentially be filled by the poor, or wages may be pressured upward, also potentially benefiting the poor. Second, remittances add liquidity to local markets, potentially stimulating economic activity. Third, when migrants return from urban areas or abroad, they bring new skills and experiences with them, sometimes even starting microenterprises that create local employment.") Furthermore, the results show a statistically significant and positive ATT for rural households' expenditure: the average increase in expenditure of treated rural households is 21,799 MAD (i.e., 4,723 MAD per person), statistically significant at the 1 percent level or better (see Table 4).
Robustness check
We conduct a sensitivity analysis on the estimation results. It is undertaken to check the strength of the conditional independence assumption, and whether the influence of unobservable factors that may affect both remittance receipt and the outcome variables is strong enough to alter the matching estimates.
To do this, we use Rosenbaum's (2002) approach, based on the Mantel-Haenszel (1959) test statistic. It determines bounds on the significance level (critical p-value) of the average treatment effect (ATT) for different levels of hidden bias. The idea is to increase the value of γ (the parameter capturing the effect of unobservable variables on the probability of receiving remittances) and to check whether the results remain robust once hidden bias is taken into account. The higher the level of γ at which the ATT remains statistically different from zero, the more robust the estimation results are to the potential influence of hidden bias.
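For a binary outcome such as poverty status in matched pairs, Rosenbaum bounds can be sketched as follows. The pair counts are invented for illustration, and this is one common variant (binomial bounds on discordant pairs), offered only as an assumed approximation of the Table 5 computation, not a reproduction of it.

```python
# Sketch of Rosenbaum (2002) sensitivity bounds for a binary outcome in
# matched pairs. Counts are illustrative, not the paper's data.
from scipy.stats import binom

def rosenbaum_pvalue_bounds(n_discordant, n_treated_positive, gamma):
    """Bounds on the one-sided p-value when unobserved bias could change
    the within-pair odds of treatment by at most `gamma`."""
    p_hi = gamma / (1 + gamma)   # most unfavourable hidden bias
    p_lo = 1 / (1 + gamma)       # most favourable hidden bias
    sf = lambda p: binom.sf(n_treated_positive - 1, n_discordant, p)
    return sf(p_hi), sf(p_lo)    # (upper bound, lower bound)

# 200 discordant pairs; in 130 of them the treated unit shows the event.
upper, lower = rosenbaum_pvalue_bounds(200, 130, gamma=1.0)
print(upper, lower)              # gamma=1 collapses both to the sign-test p-value
u2, l2 = rosenbaum_pvalue_bounds(200, 130, gamma=2.0)
print(u2, l2)                    # the bounds widen as hidden bias grows
```

The reported threshold is then the largest γ at which the upper-bound p-value stays below the chosen significance level.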
The results, presented in Table 5, are highly robust to unobserved heterogeneity, the threshold being higher than 2. Unfortunately, sensitivity analysis does not determine whether biases really exist; it only shows how the existence of possible bias could undermine the significance of the estimates (Aakvik, 2001).
Conclusion
Migrants contribute in various ways to the well-being of their households of origin. This paper assesses the impact of international remittances on poverty and standards of living in Morocco. The analysis is based on propensity-score matching and uses national data from a Moroccan household survey. Our results are interesting in a number of respects. Firstly, we show that migrants' remittances negatively affect the propensity of their recipients to be poor.
This effect takes values between 4.5 and 5.5 percentage points depending on the specification.
Secondly, we find a significant improvement in the expenditure of remittance-recipient households. In particular, remittances are associated with an increase in household expenditure of 12,167 MAD per year. In rural areas, the expenditures of recipient households increase on average by about 21,799 MAD. Thirdly, when we distinguish households according to their area of residence, it is also worth noting that remittances lead to a statistically significant decline in the probability of being below the poverty line for rural households: it decreases by 11.3 percentage points. In comparison, this probability decreases by 3 points in urban areas.
Our study suggests that matching can help solve the problems of heterogeneity and self-selection in migration studies. It is especially relevant for the analysis of household welfare, where the receipt of remittances can depend on some observable household-specific characteristics. However, more research on the impact of remittances on poor households using a more specific database, namely a panel database, is needed to confirm that poverty has continued its downward trend in the last few decades and that remittances to Morocco are partly responsible for this trend.
The findings are indicative of specific policy tools that could be made available for poor households. For example, governments may introduce policies, such as public transfer programs, to reduce the population of the rural poor.
On another level, this study provides an analysis of some household factors selected from remittances literature influencing the probability of receiving remittances.More specifically, the results show that the household variables, namely, education, gender and age of household head are correlated with the probability of receiving remittances.
Table 1 .
Selected descriptive statistics
Table 2 .
Remittances by quintile of household expenditure and areas of residence (%)
Table 3 .
Average treatment effects of remittances on household poverty and expenditure
Table 4 .
Average treatment effects of remittances on poverty and expenditures, by area of residence

Table 5 .

Mantel-Haenszel (1959) bounds for variable Poverty
Evaluation Genetic Variation and Diversity of Grain Yield and Quality Traits in Rice (Oryza sativa L.) Genotypes for Low Input NPK Fertilizers
Determination of genetic variance in a large number of rice genotypes is an effective strategy for increasing yield. The goal of this research was to determine the genetic variability, phenotypic (PCV) and genotypic (GCV) coefficients of variation, broad-sense heritability, expected genetic advance and multivariate analysis for eight rice grain quality and yield traits, in twenty Egyptian and exotic genotypes under low NPK fertilizer input levels at RRTC, Sakha, Kafr El-Sheikh, Egypt and evaluated across two successive seasons. Results revealed highly significant mean squares for all traits. High estimates of both PCV and GCV were detected for grain elongation followed by gelatinization temperature and head rice. High estimates of heritability were noted for grain length, grain shape, hulling, milling, head rice, amylose content, grain elongation and grain yield. Results revealed that highly significant differences among different genotypes were observed for studied characteristics under different NPK levels. Cluster results revealed that genotypes from the same origin or taxonomy type were clustered together. Diversity analysis showed four clusters. Cluster I and III had maximum genotypes (70%) and Cluster IV showed the highest mean values for studied traits. The results revealed that PC1 and PC2 accounted for 65.6% of the diversity between genotypes investigated. These findings show that some genotypes have a lot of diversity, indicating an opportunity to breed for low-input genotypes without sacrificing grain production and quality. GZ10590-1-3-3-2 and IET1444, both of which have high grain yield, can be employed as hybrid parents and could help with further genetic research for reduced NPK input.
Introduction
Rice is one of the staple food crops for about half of the world's population. Therefore, rice production must be significantly increased to meet the needs of a growing world population and rising global demand. Samonte et al. (2006) reported that nitrogen is one of the most essential macronutrients for rice production. Fertilizers with appropriate management practices help to increase the productivity of rice in farmers' fields (Gairhe et al. 2018; Timsina et al. 2012). N, P and K are macro-elements. Many studies have shown that the appropriate use of NPK fertilizers enhances yield and substantially improves rice quality (Oikeh et al. 2008). Recommendations for chemical fertilizers should be based on soil analysis and crop response. A good fertilization strategy combines the use of organic and chemical fertilizers while improving crop productivity and environmental quality (Devkota et al. 2019). Nitrogen plays a vital role in determining the growth and yield potential of crops. The best mineral fertilizer rate is the one that yields the highest economic return at the lowest expense (Ananthi et al. 2010).
Grain quality is a major factor in determining the market value of agricultural products and foods in every phase from production to consumption. Rice grain quality has always been an important consideration in variety selection and development. Cooking and eating quality traits are among the factors used in assessing the grain quality of rice. Grain quality will become even more essential in the future as many of those who rely heavily on rice as a staple diet become better off and demand higher-quality rice (Lampe 1993). Rice has different cooking and eating properties depending on the variety and grain type.
The dry-flaky cooking characteristic of rice is found in varieties with a high percentage of amylose, a medium gelatinization temperature and relatively low water absorption. Varieties with low amylose and low gelatinization temperature tend to be sticky and cohesive when cooked, absorb more water, and thus show long grain elongation after cooking.
Genetic parameters such as GCV and PCV measure the amount of genetic diversity in genetic resources as well as the degree to which the genotype is modified by the environment. When selection is made primarily on yield-contributing characters, heritability and genetic advance are key selection factors. Heritability estimates combined with genetic advance are usually more useful than heritability estimates alone for estimating the gain under selection (Paul et al. 2006). Cluster analysis is applied when crop genotypes need to be categorized; the resulting clusters are important for selecting the best parents for breeders (Sanni et al. 2012). The main objectives of the present investigation are to evaluate the performance of grain quality and yield traits in twenty rice genotypes at low fertilizer input levels in order to i) estimate genotypic (GCV) and phenotypic (PCV) coefficients of variability, broad-sense heritability and genetic advance for grain quality and yield traits, ii) utilize multivariate analysis to better understand the relationships and patterns among genotypes, and iii) examine the connections between grain yield and grain quality attributes.
Plant materials
In total, twenty Egyptian and exotic rice genotypes were chosen at random and employed in this investigation. The genotypes were provided by the genetic stock of the Rice Breeding Program, Agricultural Research Center (ARC, Giza, Egypt). The names, origins and subspecies groups of these rice genotypes are presented in Table 1.
Experimental layout
The present investigation was carried out at the Research Farm of the Rice Research and Training Center (RRTC), Sakha, Kafr El-Sheikh, Egypt, during the 2019 and 2020 planting seasons. The selected genotypes were evaluated under four levels of NPK fertilizer, i.e., the full dose, two-thirds, one-third and zero of the recommended dose of NPK. The recommended doses of NPK are 165, 36 and 58 kg of N, P2O5 and K2O per hectare, respectively. Representative soil samples were collected at a depth of 0-30 cm from the soil surface. Soil analysis followed the procedure of Black et al. (1965). The results of the soil analysis for the studied seasons are presented in Table 2.
The nursery was well prepared and fertilized with four kg/m calcium superphosphate (15.5% P2O5) before plowing; three kg of urea (46.5% N) was applied after plowing, and one kg of zinc sulfate (22% Zn) was applied immediately before sowing and after puddling. Rice seeds at the rate of 60 kg/ha were soaked in fresh water for 24 hours and incubated for 48 hours to improve germination. The pre-germinated seeds were broadcast on May 15th in both seasons. The investigation used a split-plot design with three replications: NPK treatments were distributed over the main plots and the genotypes were allocated to the sub-plots. The permanent field was plowed and then well dry-leveled.
Phosphorus fertilizer in the form of calcium superphosphate (15.5% P2O5) was applied before land preparation according to the treatment schedule. Potassium fertilizer in the form of potassium sulfate (48% K2O) was incorporated into the dry soil before planting according to the treatments used in this study. Nitrogen fertilizer was added according to the treatments in the form of urea (46.5% N): two-thirds was applied and incorporated into the dry soil before planting, and one-third was applied as a topdressing thirty days after transplanting. The permanent field was immediately irrigated. Thirty-day-old seedlings of each genotype were individually transplanted in 10 rows per replicate with a spacing of 20 cm between rows and 20 cm between plants.
Data collection
All rice grain quality and grain yield traits were evaluated according to the standard evaluation system for rice (IRRI 1996). At harvest, grain yield (t/ha) was measured from a randomly chosen area of 10 square meters. Laboratory analysis was conducted for the grain quality traits: the grain samples were milled and analyzed for physicochemical properties. Milled rice out-turn was determined by husking 200 g of rough rice and milling it in a Satake rice mill. Head rice recovery was determined by separating broken parts from milled rice. Milling % and head rice % were expressed as percentages of rough rice and milled rice, respectively. Rough grain length and breadth were measured with slide calipers (IRRI 1996). In determining rough grain shape, rough rice was first classified into four classes based on length: very long (more than 7.5 mm), long (6.61 to 7.5 mm), medium (5.51 to 6.6 mm) and short (5.5 mm or less). The grain was then classified into five classes based on the length-to-breadth ratio: long (ratio above 4), slender (ratio from 3.1 to 4), medium (ratio from 2.1 to 3.0), bold (ratio from 1.1 to 2.0) and round (ratio less than 1.1). Based on amylose content, milled rice was classified as waxy (1-2% amylose), very low (>2-9%), low (>9-20%), intermediate (>20-25%) and high (25-33%).
For measurement of the grain elongation ratio, ten grains, measured for length and width, were placed in a 20 ml glass test tube and soaked for 20 minutes in 5 ml of tap water. After soaking, the test tubes were immersed in boiling water for around 30 minutes. When the grains were fully cooked, the water was drained. The cooked grains were then placed on a glass sheet for a few minutes to drain excess moisture before their length and width were measured. The grain elongation ratio is the relative change in rice grain length after cooking. To determine gelatinization temperature (GT), six whole milled grains of rice from each plant were spaced evenly in small transparent plastic boxes containing 10 ml of 1.70% potassium hydroxide solution. The boxes were covered and left undisturbed for 23 hours in an incubator set to 30°C. The GT was visually rated on a seven-point numerical scale based on the alkali spreading and clearing of the starchy endosperm (Table 3).
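The classification rules quoted above translate directly into code. The thresholds come from the text (IRRI 1996); the function names are ours.

```python
# Sketch of the grain length and amylose classification rules from the text.

def length_class(mm):
    """Rough-grain length class (IRRI 1996 thresholds as quoted)."""
    if mm > 7.5:
        return "very long"
    if mm > 6.6:
        return "long"        # 6.61-7.5 mm
    if mm > 5.5:
        return "medium"      # 5.51-6.6 mm
    return "short"           # 5.5 mm or less

def amylose_class(pct):
    """Milled-rice amylose class."""
    if pct <= 2:
        return "waxy"
    if pct <= 9:
        return "very low"
    if pct <= 20:
        return "low"
    if pct <= 25:
        return "intermediate"
    return "high"            # 25-33 %

print(length_class(7.0), amylose_class(22.5))   # → long intermediate
```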
Statistical analysis
The data obtained for each trait were statistically evaluated over the two seasons according to Le Clerg et al. (1962) and then subjected to analysis of variance, which was used to partition the gross phenotypic variability into components due to genetic (hereditary) and non-genetic (environmental) factors. Genotypic variance is the part of the phenotypic variance that can be attributed to genotypic differences among the phenotypes. Similarly, phenotypic variance refers to the total variation among phenotypes. The genotypic and phenotypic coefficients of variability were computed as

GCV = 100 × √Vg / X and PCV = 100 × √Vp / X,

where Vp, Vg and X are the phenotypic variance, genotypic variance and grand mean per season, respectively, for the trait under consideration. Broad-sense heritability (h2B), expressed as the percentage ratio of the genotypic variance (Vg) to the phenotypic variance (Vph), was estimated on a genotypic mean basis as described by Allard (1999). Expected genetic advance (GA) and GA as a percent of the mean, assuming selection of the superior 5% of the genotypes, were estimated by the methods of Fehr (1987) as follows:

GA = k × Sph × h2B and GA% = 100 × GA / x,

where k is a constant (which varies with selection intensity and stands at 2.06 for 5%), Sph is the phenotypic standard deviation (√Vph), h2B is the heritability ratio and x is the season mean of the trait.
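Assuming the usual definitions behind these formulas (Allard 1999; Fehr 1987), the genetic parameters can be sketched as follows. The variance components and grand mean in the example are illustrative numbers, not the paper's estimates.

```python
# Sketch of the GCV, PCV, broad-sense heritability and genetic-advance
# formulas described in the text. Inputs are illustrative.
import math

def genetic_parameters(v_g, v_ph, grand_mean, k=2.06):
    gcv = 100 * math.sqrt(v_g) / grand_mean      # genotypic CV (%)
    pcv = 100 * math.sqrt(v_ph) / grand_mean     # phenotypic CV (%)
    h2b = v_g / v_ph                             # broad-sense heritability
    ga = k * math.sqrt(v_ph) * h2b               # expected genetic advance
    ga_pct = 100 * ga / grand_mean               # GA as % of the mean
    return {"GCV": gcv, "PCV": pcv, "h2B(%)": 100 * h2b, "GA%": ga_pct}

print(genetic_parameters(v_g=4.0, v_ph=5.0, grand_mean=10.0))
```

Note that PCV ≥ GCV always holds here because Vph ≥ Vg, which matches the pattern reported for every trait in Table 4.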
To create the dendrogram based on squared Euclidean distance, the unweighted pair-group method with arithmetic average (UPGMA) linkage was applied using SPSS version 15 (IBM Corporation 2010). In Ward's minimum variance method, the initial cluster distances are defined as the squared Euclidean distance between points: d_ij = d({X_i}, {X_j}) = ||X_i - X_j||^2. Principal component analysis (PCA) was subsequently performed using Genstat version 12.0 software.
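The clustering step (UPGMA on squared Euclidean distances between trait vectors) can be sketched with SciPy instead of SPSS. The four-genotype trait matrix is a toy example, not the paper's data.

```python
# Sketch of UPGMA (average-linkage) clustering on squared Euclidean
# distances between trait vectors, as described in the text.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

traits = np.array([
    [7.2, 3.1, 22.0],   # hypothetical genotype A: length, shape, amylose
    [7.1, 3.0, 21.5],   # B, close to A
    [5.6, 2.2, 28.0],   # C
    [5.5, 2.1, 27.5],   # D, close to C
])
d = pdist(traits, metric="sqeuclidean")      # squared Euclidean distances
tree = linkage(d, method="average")          # UPGMA
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)                              # A,B grouped; C,D grouped
```

Cutting the resulting tree at a chosen level yields the genotype groups that the diversity analysis interprets.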
Results
Genetic variability between rice genotypes

The grand mean, genotypic and phenotypic coefficients of variability, broad-sense heritability and genetic advance as a percentage of the mean are presented in Table 4. Low estimates of phenotypic and genotypic variance were recorded for all the studied traits except head rice % and grain elongation. The environmental variance was very low for all traits, especially for grain shape and grain yield. The phenotypic coefficient of variability (PCV) was higher than the GCV for all the characters studied. PCV ranged from 2.572 for hulling to 19.956 for grain elongation. PCV and GCV were lower than 10% for grain length, grain shape, hulling percentage, milling percentage, amylose content and grain yield (t/ha). In contrast, high estimates of both PCV and GCV were detected for grain elongation, followed by gelatinization temperature and head rice. Negligible differences between PCV and GCV were recorded for grain length, grain shape, hulling, milling, head rice recovery, amylose, grain elongation and grain yield. In general, PCV values were higher than GCV for the various characters studied. Among the desirable traits, low GCV and PCV were observed for grain length. High estimates of broad-sense heritability were noted for grain length, grain shape, hulling, milling, head rice, amylose content, grain elongation and grain yield; the estimates ranged from 71.25% for grain length to 99.55% for both grain elongation and grain yield, while a moderate estimate was exhibited for gelatinization temperature (67.78%). High estimates of expected genetic advance were found for grain elongation (40.924%), head rice recovery (27.336%) and gelatinization temperature (25.603%). Grain elongation, gelatinization temperature and head rice showed high PCV and GCV along with high to moderate heritability (h2) and genetic advance.
The additive gene action governed the above-mentioned three traits. High heritability with high genetic advance as percent mean was observed for all the grain quality traits except for hulling percent and milling percent.
Genotype performance under different NPK treatments

The data showed highly significant differences among the genotypes for the studied characteristics under different NPK levels. Grain yield, grain quality and cooking quality traits were significantly affected by NPK treatments. The studied characteristics increased gradually as NPK levels rose from 0 up to the full recommended dose of NPK fertilizers. The results revealed that the GZ10590-1-3-3-2 rice genotype produced the highest grain yield, followed by IET1444, while Nerica1 came in the last rank and gave the lowest value in this respect (Fig. 1). Grain length was greatest in IRAT170, which came in the first rank, while Giza178 recorded the shortest grains (Fig. 2). Grain shape was highest in IRAT170, which came in the first rank, followed by Giza182 (Fig. 3). Milyang109 produced the lowest values of grain shape. Giza177 recorded the highest hulling percentage, followed by Giza179. Korea1, Nerica1 and Milyang109 came in the last rank, recording nearly the same, lowest hulling percentages (Fig. 4). The highest milling percentages were observed for the rice genotypes Giza177, GZ10991-5-18-5-1, GZ10333-9-1-1-3 and GZ10101-5-1-1-1, while the lowest milling value was observed for Nerica1 (Fig. 5). GZ10333-9-1-1-3 recorded the highest head rice values, while GZ10598-9-1-5-5 recorded the lowest in this study (Fig. 6). Amylose content was highest in Nerica1, which came in the first rank, while GZ10598-9-1-5-5 recorded the lowest value (Fig. 7). The highest grain elongation was recorded by IET1444, while IRAT170 and GZ10590-1-3-3-2 recorded the lowest (Fig. 8).
Gelatinization temperature (GT) was highest in IET1444, which ranked first, while IRAT170 and GZ10590-1-3-3-2 recorded the lowest values (Fig. 9).
Diversity analysis
Principal component analyses Principal component analysis (PCA) is used to summarize the variation across multiple explanatory factors. PCA was performed on the grain quality traits of the twenty rice genotypes; eigenvalues and the fraction of variation explained by each principal component are shown in Table 5. PC1 explained 37.6% of the total variation observed among the genotypes, while PC2 explained 28.0%. The total diversity was captured by nine principal components (Table 5). The first three principal components, each with an eigenvalue above unity, jointly accounted for 80.9% of the total variation among the genotypes, and the first four components explained the larger fraction of the divergence (88.2%). Hulling was correlated with milling, gelatinization temperature was closely associated with grain elongation, and grain shape was correlated with grain length. All Egyptian genotypes and promising lines were characterized by high yielding ability and high milling and hulling recovery. IET 1444 was superior in gelatinization temperature and grain elongation compared with the other rice genotypes, while both Nerica 1 and IRAT 170 were distinctive in grain length and grain shape (Fig. 10). The principal components for grain yield and grain quality traits in the twenty rice genotypes, and the distribution of traits among them, are reported in Tables 6 and 7, respectively. Grain length, milling and amylose content loaded on PC1; head rice, gelatinization temperature and grain elongation on PC2; and grain yield and hulling on PC3.
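The PCA procedure described above (standardized traits, eigenvalues compared against unity, cumulative explained variance for the leading components) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis: the 20 × 9 trait matrix here is randomly generated placeholder data.

```python
# Illustrative sketch of the PCA step: nine grain-quality traits for twenty
# genotypes, retaining components with eigenvalue > 1 (Kaiser criterion).
# Trait values are randomly generated placeholders, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 9))            # 20 genotypes x 9 traits (placeholder)

Xs = StandardScaler().fit_transform(X)  # standardize so eigenvalues are comparable
pca = PCA().fit(Xs)

eigenvalues = pca.explained_variance_
explained = pca.explained_variance_ratio_ * 100
retained = int(np.sum(eigenvalues > 1))  # components passing the Kaiser criterion

print(f"PC1-PC3 cumulative variance: {explained[:3].sum():.1f}%")
print(f"components with eigenvalue > 1: {retained}")
```

With the real trait matrix in place of the placeholder, `explained[:3].sum()` would correspond to the 80.9% reported for the first three components.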
Cluster analysis
Clustering analysis was performed to look for similarities among rice accessions and assess the possibility of hybridization. Mean performance of the clusters showed that genotypes with maximum grain yield, high milling output, good grain shape and low amylose content were grouped in cluster I, whereas genotypes with maximum hulling were grouped in cluster II.
Moreover, genotypes with the minimum head rice fell into cluster III, whereas low-yielding genotypes with soft gelatinization temperature and long grain elongation were grouped into cluster IV (Table 8).
Among the twenty rice genotypes, cluster analysis following Ward's method produced four clusters (Fig. 11). Genotypes of the same origin or taxonomic type tended to cluster together. Clusters I (7 genotypes) and III (7 genotypes) contained the most genotypes (70% combined), while cluster IV (one genotype) had the highest mean values for the qualities being evaluated. Of the 7 genotypes in cluster I, 5 were Egyptian japonica types with high yield, the shortest grains, bold grain shape, the highest milling and head rice recovery and low amylose content, namely Giza 177, Sakha 107, Gz 10101-5-1-1-1, Gz 10598-9-1-1-5-5 and Gz10590-1-1-3-9-1, while the other two, Giza 178 and Korea 1, were indica/japonica types. Cluster III aggregated 7 genotypes, 4 of which were Egyptian japonica types with high hulling, namely Sakha 108, Sakha 109, Gz 10333-9-1-1-3 and Gz 10590-1-3-3-2; the other three were exotic, namely Milyang 109 (indica/japonica), IRAT 170 (indica) and Nerica 1 (indica). Five rice genotypes were grouped into cluster II: Giza 179 and Suweon 375 (indica/japonica), Fukunishiki and Gz 10991-5-18-5-1 (japonica), and Giza 182 (indica). Finally, a single genotype, IET 1444 (indica), tolerant to water stress and suitable for cultivation under drought conditions, was placed in cluster IV. The distribution of the 20 rice genotypes across clusters is shown in Table 9, and the squared Euclidean dissimilarity matrix based on the studied traits is given in Table 10.
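The clustering step described above (Ward's linkage, a squared-Euclidean dissimilarity matrix, four clusters) can be sketched in a few lines. This is a hypothetical reconstruction on placeholder data, not the study's computation:

```python
# Minimal sketch of Ward's hierarchical clustering cut into four clusters,
# plus the condensed squared-Euclidean dissimilarity matrix. The 20 x 9
# trait matrix is a randomly generated placeholder.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 9))           # 20 genotypes x 9 traits (placeholder)

Z = linkage(X, method="ward")          # Ward's minimum-variance linkage
labels = fcluster(Z, t=4, criterion="maxclust")  # force four clusters

sq_euclidean = pdist(X, metric="sqeuclidean")    # condensed dissimilarity matrix
print(np.bincount(labels)[1:])         # cluster sizes
```

On the real trait data, `labels` would reproduce the cluster memberships of Table 9 and `sq_euclidean` the dissimilarities of Table 10.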
Discussion
Low estimates of phenotypic and genotypic variances were recorded for all the studied traits except head rice and grain elongation. The environmental variance was very low for all traits, especially grain shape and grain yield, indicating little environmental influence on the expression of these traits. PCV was higher than GCV for all the characters studied. PCV and GCV were low for grain length, grain shape, hulling, milling, amylose content and grain yield, indicating limited scope for further genetic improvement of these traits through selection. In contrast, high estimates of both PCV and GCV were detected for grain elongation, followed by gelatinization temperature and head rice, indicating wide variability among the varieties for these traits and the possibility of improving them through selection. Negligible differences between PCV and GCV were recorded for grain length, grain shape, hulling, milling, head rice recovery, amylose, grain elongation and grain yield, suggesting a low environmental influence on the expression of these traits. Furthermore, Bharath et al.
(2018) reported that quantitative characters showed little difference between PCV and GCV, and recorded high PCV and GCV for single plant yield and gelatinization temperature. Moreover, Archana et al. (2018) showed that PCV values were generally higher than GCV for the various characters studied. Grain elongation, gelatinization temperature and head rice showed high PCV and GCV along with high to moderate heritability (h²) and genetic advance, so these traits are less influenced by the environment. High heritability accompanied by high genetic advance as percent of mean was recorded for grain yield per plant (Archana et al., 2018). This signifies that these characters are governed by additive gene action and that selection for these traits would be effective. Similar findings were recorded previously (Hammoud et al. 2006a; Sahu et al. 2017). The coefficient of variation ranged from 8.61% for hulling percentage to 45.01% for gelatinization temperature.
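The genetic parameters discussed above (GCV, PCV, broad-sense heritability, genetic advance as percent of mean) follow directly from variance components. The sketch below uses the standard textbook formulas with made-up variance components, not values from this study:

```python
# Illustrative computation of GCV, PCV, broad-sense heritability (h2) and
# genetic advance as percent of mean from variance components. The input
# values are hypothetical, not taken from this study.
def genetic_params(var_g, var_e, grand_mean, k=2.06):
    """var_g: genotypic variance; var_e: environmental (error) variance;
    k = 2.06 is the selection differential at 5% selection intensity."""
    var_p = var_g + var_e                       # phenotypic variance
    gcv = (var_g ** 0.5 / grand_mean) * 100     # genotypic CV (%)
    pcv = (var_p ** 0.5 / grand_mean) * 100     # phenotypic CV (%)
    h2 = var_g / var_p                          # broad-sense heritability
    ga = k * h2 * var_p ** 0.5                  # expected genetic advance
    gam = ga / grand_mean * 100                 # GA as percent of mean
    return gcv, pcv, h2 * 100, gam

gcv, pcv, h2, gam = genetic_params(var_g=4.0, var_e=0.5, grand_mean=10.0)
```

Since PCV is computed from the larger phenotypic variance, PCV ≥ GCV always holds; a small gap between the two is exactly the "negligible difference" interpreted above as low environmental influence.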
There were highly significant differences among genotypes in the studied characteristics under different NPK levels. Grain yield, grain quality and cooking characteristics were affected significantly by NPK treatments, and the studied characteristics increased gradually as NPK levels rose from 0 up to the full recommended dose of NPK fertilizers. The beneficial impacts of NPK on rice productivity and quality have been noted in several previous investigations. The favorable effect of NPK fertilizer application on grain yield might be due to increased NPK availability in the soil and subsequently higher content in rice plants, producing more energy, enhancing the photosynthetic rate and improving the grain filling process (Biswas and Dravid 2001; Ibrahim 2001). The effect of NPK fertilizer application on hulling percentage was mainly due to maximal starch storage in the grain endosperm, which reduced the hull components such as palea, lemma, pericarp, aleurone layers and rachilla. The increase in milling percentage with increasing NPK levels may be due to increased metabolite substances in the grains (Asif et al. 1999; Metwally 2007; Naeem et al. 2010). The GZ10590-1-3-3-2 genotype produced the highest grain yield followed by IET1444, while Nerica1 ranked last with the lowest value; this could be attributed to the superiority of GZ10590-1-3-3-2 in growth vigor as well as yield attributes. The differences among rice genotypes in yield and grain quality characteristics may be due to genetically inherited variation. Ebaid and El-Rewainy (2005) and Metwally et al. (2020) reported that hulling, milling, head rice, gelatinization temperature, grain elongation and amylose content varied among different Egyptian rice genotypes. Singh et al.
(2011) indicated that variation in grain quality characteristics among rice genotypes is dominated by their genetic background. Zhao et al. (2018) reported that the GS9 gene regulates grain shape and hull thickness by altering cell division, meaning that rice grain shape is mainly controlled by the genetic background. The variation among the studied genotypes in cooking qualities may be due to differences in grain shape and thickness; Mohapatra and Bal (2006) suggested that the thickness of the rice grain is an important factor in the diffusion of water during cooking. PCA is used to summarize the variation across multiple explanatory factors. The first three principal components, each with an eigenvalue above unity, jointly accounted for 80.9% of the total variation among the genotypes, while the first four components explained 88.2% of the divergence. Grain length, milling and amylose content loaded on PC1; head rice, gelatinization temperature and grain elongation on PC2; and grain yield and hulling on PC3. Mean performance of the clusters showed that genotypes with maximum grain yield, high milling output, good grain shape and low amylose content were grouped in cluster I, genotypes with maximum hulling in cluster II, genotypes with minimum head rice in cluster III, and low-yielding genotypes with soft gelatinization temperature and long grain elongation in cluster IV.
Conclusions
High heritability of grain quality and yield traits suggests that the environment has only a minor influence, implying that simple selection-based breeding methods will be efficient for genetic improvement. Low-input fertilization will be one of the necessary agricultural production strategies in the coming years, allowing a growing population to be fed while keeping costs low. This study examined the use of different rates of the essential elements nitrogen (N), phosphorus (P) and potassium (K) to fertilize rice. The results showed highly significant mean squares for all characteristics, indicating genetic differences among the rice genotypes for all traits. Low estimates of PCV and GCV were recorded for all traits except head rice and grain elongation. Grain yield, grain quality and cooking traits were influenced significantly by NPK treatments, and the studied characteristics improved progressively as NPK levels rose from 0 up to the full recommended dose of NPK fertilizers. There were highly significant variations among genotypes in the studied traits; the GZ10590-1-3-3-2 genotype exhibited the highest grain yield, followed by IET1444. It was also concluded that both origin and taxonomy were responsible for the clustering of genotypes.
Declarations
Authors' contributions SGHRS contributed to supervision, encouragement, suggesting the problem, design, data analysis and writing up the manuscript. TFM contributed to support, design, performance of field experiments, data analysis and writing up the manuscript. SHAT contributed to design, preparation, scientific advice, data analysis and writing up the manuscript. MMS contributed to design, data analysis and writing up the manuscript. KFMS contributed to design, data analysis and writing up the manuscript, and followed up the publication with the journal (correspondence). MABE contributed to design, performance of field experiments, data analysis and writing up the manuscript. All authors read and approved the final version.
Code availability
Not applicable.
Figure 5
Mean performance of milling rice % trait among twenty rice genotypes over two contrasting environments (2019 and 2020) as affected by NPK treatments
Figure 6
Mean performance of head rice % trait among twenty rice genotypes over two contrasting environments (2019 and 2020) as affected by NPK treatments
Figure 7
Mean performance of amylose content % trait among twenty rice genotypes over two contrasting environments (2019 and 2020) as affected by NPK treatments
Figure 8
Mean performance of gelatinization temperature trait among twenty rice genotypes over two contrasting environments (2019 and 2020) as affected by NPK treatments
Figure 9
Mean performance of grain elongation % trait among twenty rice genotypes over two contrasting environments (2019 and 2020) as affected by NPK treatments
Figure 10
Principal component analysis (PCA) for twenty Egyptian and exotic rice genotypes and nine characters (grain yield, grain length, grain shape, hulling, milling, head rice, amylose content, gelatinization temperature and grain elongation)
Figure 11
Clustering pattern (pooled over 2019 and 2020 seasons, RRTC, Sakha, Egypt) of the twenty Egyptian and exotic rice genotypes using Ward's method based on grain yield and quality traits
|
2021-11-07T16:15:32.210Z
|
2021-11-05T00:00:00.000
|
{
"year": 2021,
"sha1": "93e6b80013846cbbd9ccf5cff7566094cb54b1a0",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1049255/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "8910b5b3c576f4073b3d91934ca3f347447b9454",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
|
237594087
|
pes2o/s2orc
|
v3-fos-license
|
Epidemiological profile of dengue in Brazil between the years 2014 and 2019
Med
INTRODUCTION
Dengue is one of the main endemic diseases in Brazil. It is an arbovirus transmitted via the bite of the Aedes aegypti mosquito and can be divided into four serotypes: DENV-1, DENV-2, DENV-3, and DENV-4 1 . The first report of dengue in Brazilian territory dates back to the end of the 19th century, but only in 1981 was it possible to isolate the serotypes of the virus, which has since spread throughout the country 2 .
The classification scheme of dengue divides the disease into three categories. The first, dengue without warning signs, has as its initial and main manifestation the abrupt onset of a high fever (39-40°C) in association with severe headache, myalgia, arthralgia, and retro-orbital pain. Patients may also manifest maculopapular rash, anorexia, diarrhea, nausea, and vomiting. The symptoms usually improve after the third day of onset 3 . Some patients, after defervescence of the fever, may progress to dengue with warning signs, the second category of the disease, presenting severe and continuous abdominal pain, persistent vomiting, pleural and/or pericardial effusion, ascites, postural hypotension, hepatomegaly, mucosal bleeding, lethargy, irritability, and a progressive increase in hematocrit. These manifestations must always be investigated, since they can lead to the third and most dangerous category, severe dengue, which can promote a range of outcomes including shock, hemorrhage, organ dysfunction and even death 3,4 .
The definitive diagnosis of dengue is performed in the laboratory through serology and viral antigen detection tests. Since it is not possible to obtain the test results immediately, the World Health Organization (WHO) recommends that the tourniquet test be performed during the screening of all patients under suspicion of dengue and without signs of bleeding 4,5 .
The control of A. aegypti is the main form of disease prevention. From the above findings, it is clear that dengue is a problem that needs to be tackled in Brazil 6 . This study aimed to trace the epidemiological profile of the disease in the country between the years 2014 and 2019.
METHODS
This is an observational, descriptive, cross-sectional, and retrospective study. Data collection was performed using the information available at the Notifiable Diseases Information System (SINAN) and the SUS Department of Informatics (DATASUS), for the period between January 1, 2014 and December 31, 2019. Through these systems, the relation between Brazilian macro-regions and the following variables was observed: the number of probable cases, serotypes, sex, race, age group, final diagnosis, disease progression, need for hospitalization, and the confirmation criteria of the disease. Statistical analysis was performed using SPSS V20, Minitab 16, and Excel Office 2010 software. This study adopted a significance level of 0.05; therefore, all confidence intervals met 95% statistical confidence.
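The two descriptive quantities used throughout this study, incidence per 100,000 inhabitants and 95% confidence intervals, can be sketched as below. This is an illustrative reconstruction, not the authors' SPSS/Minitab analysis; the case count comes from the text, while the population size is a hypothetical placeholder.

```python
# Illustrative computation of incidence per 100,000 and a 95% CI for a
# proportion (normal approximation). The case count (5,867,255) is from the
# text; the population figure is an assumed placeholder, not from the study.
import math

cases = 5_867_255
population = 210_000_000          # assumed Brazilian population (placeholder)

incidence = cases / population * 100_000        # cases per 100,000 inhabitants

p = cases / population                          # proportion affected
se = math.sqrt(p * (1 - p) / population)        # standard error
ci_low, ci_high = p - 1.96 * se, p + 1.96 * se  # normal-approximation 95% CI

print(f"incidence: {incidence:.1f} per 100,000")
```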
Since the data collection was performed online, and all the data are available on the SINAN website and are in the public domain, this research is free of ethical risks.
Regarding the sex of the evaluated population, the males represented 2,599,974 (44.4%) cases, while the females represented 3,258,284 (55.6%) cases.As for the age group, the highest prevalence occurred in individuals between 20 and 39 years (38.3%).
Concerning the race, the brown skin prevailed in the North, Northeast, and Midwest (80.7, 78.5, and 57.8%, respectively) regions, while in the Southeast and the South regions the most affected populations were the whites.
This study also considered the need for hospitalization, which, overall, was low, but had most of its occurrences in the North (10.6%) and Northeast (9.6%) regions. As for disease progression, most patients were cured (n=4,275,802), and a total of 3,444 people died from the disease during the study period.
In the matter of serotypes, DENV-1 prevailed (87.5%) between 2014 and 2017.However, in 2018 and 2019, DENV-2 was the most detected serotype in the country (63%).DENV-3, the least common of them, was responsible only for 7 cases in the North, 28 in the Northeast, 20 in the Southeast, 27 in the South, and 10 in the Midwest regions (Table 1).
DISCUSSION
According to the WHO, in recent decades the incidence of dengue has been increasing exponentially, especially in places near the tropics, such as the Americas and the Caribbean, South-East Asia and the Asia-Pacific regions 7 . This can be explained by factors such as the hot and humid climate, low levels of basic sanitation, disordered urbanization, and vector resistance to insecticides and larvicides 8,9 .
In this context, this study showed that, in Brazil, between 2014 and 2019, 5,867,255 cases of dengue were reported, with the years 2015 (n=1,696,340), 2016 (n=1,514,873), and 2019 (n=1,557,452) standing out for a significant increase in occurrences. These data align with the bulletin released by the WHO in 2020, the Epidemiological Update of Dengue and Other Arboviruses, and in Brazil can be explained mainly by two events: the increase in rainfall in these years and the introduction of a new serotype, DENV-2, which barely circulated in the country before 2018 and has since become the most prevalent serotype of dengue, as shown in this study 10,11 .
Referring to the macro-regions of Brazil, two stood out: the Southeast, which concentrated the majority of dengue notifications (n=3,378,636), and the Midwest, which registered the highest incidence per 100,000 inhabitants. The Southeast is the most populous region of the country, which may have contributed to it being the place with the greatest number of notifications 12 .
Regarding the distribution of cases by sex, similar to what was evidenced by Martins et al. 13 , this study also identified that women were the most affected by the disease (55.6%). This cannot be explained by a single factor; however, Cardoso et al. 14 believed that, in addition to spending more time indoors, an environment favorable to A. aegypti, women tend to seek health care more often than men and consequently are diagnosed more.
Throughout the study period, in the North, Northeast, and Midwest regions, the brown people was the most afflicted race (80.7%, 78.5%, and 57.8%, respectively), a result confirmed by Oliveira et al. 15 but opposed to what was found by Santana e Duarte 16 , who identified a predominance of the white race (32.4%) in the same regions.However, in the Southeast and the South regions, the white race was more affected 17 .
The analysis of the age groups revealed that individuals between 20 and 39 years (38.3%) were the ones who fell ill most. These results differ from those obtained by Bravo et al. 18 in the Philippines, where the most affected age group was between 5 and 14 years.
As laboratory tests are not always available, the clinical and epidemiological criterion was validated by the Ministry of Health and is usually adopted during endemics/epidemics/pandemics after the circulation of the virus is acknowledged in the area 19 .
Regarding the final classification of the disease, dengue without warning signs was the most prevalent (n=4,401,555), while dengue with warning signs (n=67,875) and severe dengue (n=6,201) were less common. Overall, the need for hospitalization was low, but it was needed most in the North (10.6%) and Northeast (9.6%) regions, which burdened the public health system and highlights a failure of the A. aegypti eradication plan 20 . These data are remarkably different from India where, according to Ganeshkumar et al. 21 , dengue is the main cause of hospitalization.
As a limitation of this study, we identified that the data were collected from a secondary source that is fueled by health professionals, who often do not fill out the forms correctly, therefore interfering in statistical analysis and results.
CONCLUSIONS
This epidemiological analysis showed that, in the period between 2014 and 2019, 5,867,255 cases of dengue were reported in Brazil, and 2015 was the year that registered the most notifications. The highest incidence per 100,000 inhabitants occurred in the Midwest region, and most of the cases occurred in the Southeast region. There was a switch in serotype predominance, from DENV-1 between 2014 and 2017 to DENV-2 since then.
Figure 1 .
Figure 1. Evolution of the incidence of dengue in Brazil between the years 2014 and 2019.
Figure 2 .
Figure 2. Incidence of dengue in the macro-regions of Brazil between the years 2014 and 2019.
|
2021-09-23T06:23:26.359Z
|
2021-06-01T00:00:00.000
|
{
"year": 2021,
"sha1": "f8dc56c9100c1404f94e3c0a17c1f2df8c3af94d",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ramb/a/SJNgnQXsxkzsHncjsmWGzdc/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1bfad4131fd003e90cf936dc68c03e31517017e3",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3776130
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of PD-L1 expression on vortex-isolated circulating tumor cells in metastatic lung cancer
Metastatic non-small cell lung cancer (NSCLC) is a highly fatal and immunogenic malignancy. Although the immune system is known to recognize these tumor cells, one mechanism by which NSCLC can evade the immune system is via overexpression of programmed cell death ligand 1 (PD-L1). Recent clinical trials of PD-1 and PD-L1 inhibitors have returned promising clinical responses. Important for personalizing therapy, patients with higher intensity staining for PD-L1 on tumor biopsies responded better. Thus, there has been interest in using PD-L1 tumor expression as a criterion for patient selection. Currently available methods of screening involve invasive tumor biopsy, followed by histological grading of PD-L1 levels. Biopsies have a high risk of complications, and only allow sampling from limited tumor sections, which may not reflect overall tumor heterogeneity. Circulating tumor cell (CTC) PD-L1 levels could aid in screening patients, and could supplement tissue PD-L1 biopsy results by testing PD-L1 expression from disseminated tumor sites. Towards establishing CTCs as a screening tool, we developed a protocol to isolate CTCs at high purity and immunostain for PD-L1. Monitoring of PD-L1 expression on CTCs could be an additional biomarker for precision medicine that may help in determining response to immunotherapies.
Several challenges exist in screening patients with only an invasive biopsy of the primary tumor. Biopsies allow sampling from limited sections of the tumor at one time point, which may not detect tumor heterogeneity (Fig. 1). Furthermore, especially for lung cancer, the biopsy tissue may be limited or may have been taken much earlier in the cancer's course (i.e. before it became metastatic). This is because repeat biopsies are avoided due to potential serious complications. If a biopsy is limited to the primary tumor at a single time point, it also does not allow evaluation of other metastasized tumor sites, and the primary tumor may not necessarily be representative of the metastatic sites. As reported, some patients whose primary tumor was negative for PD-L1 still responded well to anti PD-1 treatment, potentially because the biopsy may not have captured the heterogeneous expression of PD-L1 on the tumor 7 . Biopsy of multiple sites or serial biopsies during treatment could address some of these issues, however it may not be feasible due to the invasiveness of the procedure and the potential risks to the patient. In this regard, PD-L1 expression on circulating tumor cells (CTCs) could aid in screening and monitoring patients 16 . CTCs are tumor cells that are shed from various locations of the primary and/or metastatic tumors [17][18][19] . As such, they may represent a greater portion of the spectrum of genetic and epigenetic variability within a patient's tumors (Fig. 1). Additionally, monitoring PD-L1 levels over time on CTCs may potentially yield information about modulation of tumor PD-L1 expression in the presence of inhibition of the PD-1/PD-L1 interaction.
There have been few studies exploring PD-L1 expression on CTCs, either in breast cancer 15 or in bladder cancer 20 , and another examining nuclear PD-L1 expression in colon and prostate CTCs 16,21 . To our knowledge, only one recent study, from Nicolazzo et al., evaluated PD-L1 expression in NSCLC CTCs in the context of active immunotherapy treatment, particularly PD-1/PD-L1 inhibition (nivolumab in their study) 22 . Most of these previous studies utilized specific surface markers for CTC capture, either with CellSearch or with similar magnetic bead technology, and did not isolate cells in a manner unbiased to surface expression. There are still other knowledge gaps, particularly in how PD-L1 expression on CTCs correlates with expression on tumor biopsies, what method to use for quantifying PD-L1 expression on both CTCs and tumor biopsies, and how PD-L1 expression on CTCs varies, both at the time of initial treatment and as therapy continues.
Here, we evaluate PD-L1 expression on 31 CTC-containing samples obtained from 22 patients with metastatic NSCLC who were scheduled to receive or were receiving PD-1 or PD-L1 inhibitors, including 11 metastatic NSCLC patients scheduled to receive the anti-PD-1 treatment pembrolizumab (one patient ended up receiving erlotinib) (Fig. 1A, Table 1). Most patients were evaluated for CTC collection prior to treatment or at the beginning of the second cycle of treatment. For patients receiving pembrolizumab or erlotinib, we compared the quantitative expression of PD-L1 to levels on tumor biopsies taken before treatment when available (N = 4, Fig. 1B) and assessed whether levels were associated with progression-free survival (PFS). Tumors of patients receiving pembrolizumab were originally graded as positive for PD-L1, as this was one of the initial inclusion criteria for receiving this therapy.
Figure 1. (A) CTC workflow: ① Blood is collected from cancer patients and processed through Vortex technology to enrich for CTCs. ② Blood is diluted 10X with PBS and ③ injected through the microfluidic device with syringe pumps. ④ Purified cells are collected into a 96-well plate, where they are ⑤ stained with immunofluorescence markers and imaged. ⑥ Fluorescence intensity can be analyzed and PD-L1 gene expression quantified. (B) Tumor biopsy workflow: in parallel with the CTC workflow, lung biopsies were analyzed for PD-L1 expression. While a biopsy provides information on intra-tumor heterogeneity, only the CTCs present in a blood draw can cover both intra- and inter-tumor heterogeneity.
Among the 22 patients (Table 1), 10 NSCLC patients were receiving the anti-PD-1 treatment pembrolizumab, 9 the anti-PD-1 treatment nivolumab (Bristol-Myers Squibb), and 2 the anti-PD-L1 treatment avelumab (EMD Serono). One patient was evaluated for treatment with pembrolizumab but eventually received erlotinib.
For 2 patients (patients #16 and 19), blood was collected at several time points: 5 times for patient 16 (before treatment and 4 follow-up draws) and 6 times for patient 19 (after the first dose, 4 follow-up draws, and after treatment). As this was a pilot study, samples were evaluated at different time points, though effort was made to collect blood samples prior to any treatment commencing or within three weeks of starting treatment. Of these 31 patient samples, 17 were sampled prior to commencing treatment, 2 were sampled two weeks after the first dose, 11 were taken at various time points while on treatment, and 1 after treatment. All patients receiving pembrolizumab were categorized as having positive PD-L1 expression in their original tumor biopsies 7 . 4 matched biopsies were available for analysis for this study.
This study was approved by the UCLA IRB (protocol #11-001798). All patients provided informed consent for participation in this study. After obtaining informed consent, 6-10 cc of blood were collected from each patient in EDTA tubes. Collected samples were processed to isolate CTCs within four hours of collection. Blood samples from healthy volunteers (n = 10) of various ages were similarly processed. All methods were performed in accordance with the relevant UCLA IRB guidelines and regulations.
Isolation of CTCs using Vortex technology. We used a microfluidic device for rapid, size-based capture of CTCs from blood called the Vortex HT chip, as previously described by the authors 23 (Fig. 1A). The Vortex HT chip utilizes inertial microfluidic flow to isolate CTCs within microscale vortices. Captured cells are then released and collected off-chip in a well plate for fixation and immunostaining.
Cell lines and WBCs. Lung cancer cell line staining controls A549 (adenocarcinoma), H1703 (squamous carcinoma) and H3255 (adenocarcinoma) were cultured in RPMI media supplemented with 10% FBS and 1% pen/strep. HeLa cells were cultured in DMEM media supplemented with 10% FBS and 1% pen/strep. At 70% confluence, cells were harvested using 0.25% trypsin and fixed with 2% paraformaldehyde. White blood cells (WBCs) were isolated from healthy blood using RBC lysis buffer (eBioscience) and similarly fixed with 2% paraformaldehyde. During each staining experiment, an aliquot of each of these fixed cell solutions was stained in the same well plate alongside the CTC samples for normalization.
Immunofluorescence staining of circulating tumor cells.
Collected cells were fixed with 2% paraformaldehyde (Electron Microscopy Sciences) for 10 min, permeabilized with 0.4% v/v Triton X-100 (Research Products International Corp) for 7 min, and blocked with 5% goat serum (Invitrogen) for 30 min. To identify CTCs, cells were labelled for 40 minutes at 37 °C with 4,6-diamidino-2-phenylindole (DAPI) (Life Technologies), anti-CD45-phycoerythrin (CD45-PE, clone HI30, BD Biosciences), and a cocktail of primary antibodies to identify cytokeratin (CK)-positive cells (Pan-CK clone AE1/AE3, eBioscience; clone CK3-6H5, Miltenyi Biotec; and CK clone CAM5.2, BD Biosciences). To quantify PD-L1 levels, cells were also stained with an anti-PD-L1 antibody (ProSci Inc); a secondary antibody labeled with Alexa Fluor 647 was used as the fluorescent reporter for the PD-L1 antibody. One set of cells consisting of A549, H1703 and healthy white blood cells (WBCs) was stained along with each patient sample. These additional cells were necessary to normalize for the staining process, antibody batch, and microscope conditions over the length of the study, and to report fluorescence intensities that could be compared across CTCs from many samples. The staining protocol for PD-L1 was optimized to positively stain lung cancer cell lines (Supp. Figure 1). After staining, the cells were imaged (Axio Observer Z1, Zeiss) and manually enumerated using specific classification criteria. We identified CTCs in patient samples as DAPI+/CK+/CD45−, or DAPI+/CK−/CD45− along with cytopathological features of malignancy, as described previously 23 . All CTC and WBC counts were checked by two independent reviewers (Table 1). PD-L1 expression on these CTCs was quantified using a semi-automated algorithm as described below.
Some CTCs could not be evaluated for PD-L1 expression due to the presence of fluorescent debris overlapping all or part of the cells in the PD-L1 Cy-5 channel and thus obscuring the full PD-L1 signal; the PD-L1 status on these cells was thus considered undetermined (identified as "UD" in Table 1).
Immunohistochemistry of lung tumor biopsies.
Immunohistochemistry was performed by the UCLA TPCL Pathology core facility. Briefly, thin tumor sections were cut from paraffin tissue blocks of biopsies obtained prior to treatment (Fig. 1B). The slides were deparaffinized in xylene and re-hydrated through graded ethyl alcohols (100% x3, 95% x2) to distilled water; initially xylene for about 10 min and the remaining treatments for 1 minute each with agitation. Antigen retrieval was performed in a pressure cooker (5 minutes at max temperature) in high pH Tris-EDTA buffer and samples were cooled for 15 minutes at room temperature after pressure returned to atmospheric pressure. The slides were incubated with primary antibody (rabbit monoclonal anti-PDL1 clone SP142, Sina Biological, at 1/200 dilution) for 60 minutes followed by anti-rabbit, horseradish peroxidase polymer (Refine detection kit from Leica) for 15 minutes. The slides were washed with buffer after each of the primary antibody and polymer steps and then incubated with hydrogen peroxide/diaminobenzidine for 10 minutes. Cells were counterstained with hematoxylin. Biopsy tissues were only available for analysis from 4 patients.
Quantification of PD-L1 levels on CTCs and tumor samples.
In order to quantify PD-L1 expression on CTCs, we developed a semi-automated imaging algorithm using a custom MATLAB script. The script was used to quantify the cell sizes and normalized fluorescence levels for each cell from each patient sample (Supp. Figure 2). Briefly, an edge detection algorithm was used to locate the outline of the cell membrane (from the transmitted light image) and convert the outline to a binary image mask. The mask was then overlaid on the fluorescence images from each channel and used to identify the fluorescence per pixel in each cell. The sum of the pixel intensities of the PD-L1 channel (Alexa Fluor 647) in the area identified as the cell was calculated for each CTC and for ~100 control cells of each type. To normalize the fluorescence intensity of the CTCs, we utilized the lung cancer line H1703, as these cells had the highest expression of PD-L1 of the three lung cancer lines used (A549, H1703, and H3255) 24 (Scientific Reports (2018) 8:2592 | DOI:10.1038/s41598-018-19245-w). Staining a fixed batch of these cells along with each sample allowed normalization of the CTC data. We used the following equation (Eq. 1) to calculate the normalized intensity:

normalized intensity = Σ I_CTC / [(1/K) Σ_{k=1}^{K} Σ I_k]    (1)

where I is the pixel value and K is the total number of H1703 cells analyzed (the inner sums run over the pixels of the CTC and of the k-th H1703 cell, respectively). This value shows the relative intensity of PD-L1 expression on CTCs to that of H1703 cells. If the CTC has much higher expression than H1703, then the normalized value would be greater than 1. Once the CTC fluorescence intensities were normalized, they were categorized as PD-L1 negative (normalized intensity between 0 and 0.05) or PD-L1 positive. The quantity of PD-L1 expression was further categorized as either low (normalized intensity between 0.05 and 0.4), medium (0.4-0.7), or high (>0.7), as defined by the cutoff values in Fig. 2B. These descriptor bins were set initially by visual inspection.
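A minimal Python sketch of this normalization and binning (the function names and toy pixel arrays are invented for illustration; the bin edges are those given above, and the normalization assumes division by the mean summed intensity of the H1703 controls, as described):

```python
import numpy as np

def normalized_intensity(ctc_pixels, h1703_cells):
    # Summed PD-L1 pixel intensity of the CTC divided by the mean
    # summed intensity over the K H1703 control cells (Eq. 1).
    control_mean = np.mean([np.sum(c) for c in h1703_cells])
    return float(np.sum(ctc_pixels)) / control_mean

def pdl1_category(norm):
    # CTC bins: negative 0-0.05, low 0.05-0.4, medium 0.4-0.7, high >0.7
    if norm <= 0.05:
        return "negative"
    if norm <= 0.4:
        return "low"
    if norm <= 0.7:
        return "medium"
    return "high"

# Toy data: 100 control cells each summing to 100, one CTC summing to 45.
controls = [np.ones((10, 10)) for _ in range(100)]
ctc = np.ones((5, 9))
norm = normalized_intensity(ctc, controls)  # 0.45
print(pdl1_category(norm))                  # medium
```

On this scale, a value of 1 means the CTC carries as much PD-L1 signal as the average H1703 control cell.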
To quantify PD-L1 expression on the tumor biopsy sections, when available, the thin biopsy specimens were analyzed using the positive pixel count algorithm in HALO software (Indica Labs). The intensity signal from each cell was categorized as negative for PD-L1 expression (intensity between 0 and 0.04) or as positive (0.04-1). Positive cells were further categorized into low (intensity between 0.04 and 0.1), medium (0.1-0.2) and high (0.2-1) levels, as indicated by the cutoff values shown in Fig. 2B. Lymphocytes at the periphery of the tumor were excluded by the software based on cell size and nuclear features.
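Since the biopsy results are reported as per-bin fractions of tumor cells, the same binning can be expressed as a small function (the function name is hypothetical; the cut-offs are the HALO values stated above):

```python
def pdl1_fractions(intensities, edges=(0.04, 0.1, 0.2)):
    """Fraction of cells in each PD-L1 intensity bin (intensities on [0, 1])."""
    lo, mid, hi = edges
    counts = {"negative": 0, "low": 0, "medium": 0, "high": 0}
    for x in intensities:
        if x <= lo:
            counts["negative"] += 1
        elif x <= mid:
            counts["low"] += 1
        elif x <= hi:
            counts["medium"] += 1
        else:
            counts["high"] += 1
    n = len(intensities)
    return {k: c / n for k, c in counts.items()}

# Toy example: one cell per bin.
fractions = pdl1_fractions([0.02, 0.05, 0.15, 0.5])
print(fractions)  # each bin holds 0.25 of the cells
```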
Association with outcome and statistical analysis. We categorized patients by number of CTCs (<1.32 CTCs/mL or ≥1.32 CTCs/mL) and obtained clinical information (progression free survival) and immune related response criteria (irRC) based on imaging scan data when available 25,26 . Patients were categorized by the overall best response to treatment (complete response, partial response, stable disease, progressive disease, or not evaluable). The association of CTC and PD-L1+ counts with progression free survival was assessed using Cox proportional-hazards models. To quantify the effects of interest, hazard ratios (HR) along with 95% confidence intervals (CI) were estimated by the model. All statistical analyses were performed using R V3.1.2 (Vienna, Austria) and IBM SPSS V23 (Armonk, NY). P values less than 0.05 were considered significant.

CTCs were isolated and enumerated from each patient sample (Table 1, Fig. 2B). Besides the CTCs, between 1 and 93 WBCs were collected per mL, indicating high purity (Table 1, Fig. 2C). As negative controls, we tested blood samples from 10 healthy volunteers, male and female, of different ages (Table 1, Fig. 2B). Using the same enumeration criteria described for the patients, 0 to 1.25 cells per mL were isolated from healthy controls and characterized as CTCs. Based on these enumeration data, a "healthy" cut-off value was defined as the mean number of CTCs + 2 SD (mean = 0.556, SD = 0.385), giving a cut-off of 1.32 CTCs/mL. Using this threshold, 14 of 31 patient samples (45%) were considered positive for CTCs.
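The healthy cut-off calculation (mean + 2 SD of the healthy-control counts) is easy to reproduce; the function names are illustrative, and the mean and SD are the values reported in the text:

```python
def ctc_cutoff(mean, sd, n_sd=2):
    # "Healthy" cut-off = mean + 2*SD of CTC-like counts in healthy controls.
    return mean + n_sd * sd

def is_ctc_positive(ctcs_per_ml, cutoff=1.32):
    # A sample is called CTC-positive at or above the cut-off.
    return ctcs_per_ml >= cutoff

cutoff = ctc_cutoff(mean=0.556, sd=0.385)
print(round(cutoff, 2))  # 1.33; the text reports this threshold as 1.32 CTCs/mL
```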
PD-L1 can be quantified on CTCs prior to treatment with PD-1 inhibition. We then developed a method for quantifying PD-L1 expression on lung cancer cells using immunofluorescence staining. To identify the optimal primary and secondary antibody concentrations, we utilized HeLa cells as a positive control and RBCs as a negative staining control for PD-L1 (Lee). We tested three commercially available PD-L1 antibodies (ProSci Ref# 4059, BioLegend clone 29E.2A3, and eBioscience clone MIH1) and determined that the ProSci antibody had the most intense specific staining while maintaining the least non-specific staining (Supp. Figure 1A).
We used the three lung cancer cell lines A549 (adenocarcinoma), H1703 (squamous cell carcinoma), and H3255 (adenocarcinoma) to develop and validate the PD-L1 fluorescence immunostaining protocol and quantification algorithms. The H1703 line was found to have the highest overall expression of PD-L1, while H3255 had minimal to no PD-L1 expression (Supp. Figure 1B-C); we thus used H1703 as the positive staining control and H3255 as the negative staining control.
Once these parameters had been determined, we implemented this protocol to quantify PD-L1 expression on the isolated lung cancer patient CTCs; examples of patient sample staining are shown in Fig. 2A. For each patient, the numbers of CTCs positive and negative for PD-L1 were counted (Table 1, Fig. 2D). Of patient samples with CTCs, 30/31 had one or more PD-L1+ CTCs (Fig. 2D). The fraction of PD-L1 positive CTCs among these patients ranged from 2.2 to 100% (Table 1, Fig. 2D).
PD-L1 expression on tumor biopsy sections can be quantified and compared with CTC expression prior to treatment.
We next examined the concordance of PD-L1 staining between CTCs and tumor biopsy sections, as PD-L1 positivity in these sections has been shown to be a predictor of outcome in lung cancer, and the PD-1 inhibitor pembrolizumab requires a patient's tumor biopsy to be positive for PD-L1 prior to administration (as determined by an FDA-approved companion diagnostic). Tumor biopsies prior to treatment were available for only 4 of the 22 patients in this study. For these 4, thin sections of tumor were cut from the paraffin block, stained, and the resulting PD-L1 levels quantified as described above. Sample images for negative, low, medium, and high PD-L1 staining are shown in Fig. 3A. Although all 4 tumor biopsies were initially scored as PD-L1 positive, quantification showed that the majority of positive cells had low expression (Fig. 3C). Patient P07 had the lowest fraction of medium (7.64%) and high (0.46%) staining cells in the corresponding biopsy, with 91.9% of the cells being either low or negative for PD-L1. This was reflected in the CTCs as well, as P07 had the lowest fraction of PD-L1 positive CTCs (15.8%, i.e. 3 of 19; all 3 of these cells were classified as low PD-L1 expression). The two patients (P05 and P06) with the highest fraction of PD-L1 positive cells in their tumors (P05: 99.5%, P06: 99.9%) also had the highest fraction of PD-L1 positive CTCs (P05: 47.4%, P06: 66.7%), with a substantial share of CTCs having medium or high PD-L1 expression (P05: 26.3%, P06: 12.5%; 4 of 24 CTCs in P06). For Patient P01, tumor and CTCs again showed a similar pattern: 18.6% of tumor cells and 37% of CTCs were PD-L1 negative, while 67.3% and 63%, respectively, were PD-L1 positive but with a low expression level. Interestingly, for all 4 patients, the fraction of CTCs that were negative for PD-L1 staining was always higher than the fraction of negative cells in the corresponding biopsy.
Association of CTC count and progression free survival. As PD-L1 expression is detectable on CTCs, we wanted to determine whether its detection can be an adjunct predictor of response to PD-1/PD-L1 inhibition. We determined progression using the irRC criteria. To assess the impact of overall CTC count and PD-L1 expression, we counted all CTCs and the subset that were PD-L1 positive (Table 1, Fig. 2D). We limited the analysis in this section to those patients who had a blood collection immediately prior to starting treatment (n = 17). For patients still responding to treatment, the data cutoff for response was July 8, 2016. For total CTC count, we pooled patients into two categories: those with <1.32 CTCs/mL and those with ≥1.32 CTCs/mL (Fig. 4). Using this categorization, the hazard ratio for progression free survival on PD-1/PD-L1 inhibition was 0.48 (95% CI, 0.14-1.64; P = 0.239) for patients with ≥1.32 CTCs/mL. We also analyzed the association of PD-L1 expression with progression free survival for the 17 patients prior to starting treatment. The hazard ratio for PFS for patients with ≥2 PD-L1+ CTCs (overall count) was 0.83 (95% CI, 0.24-2.84; P = 0.764). Of note, of the 5/17 patients with either partial response (PR) or stable disease (SD), three had >50% PD-L1+ CTCs, while the other two had no PD-L1 positive CTCs; however, given the small number of patients with a response, this association needs to be analyzed in a larger cohort to determine the overall impact of an increased fraction of PD-L1+ CTCs.
Although we did not serially collect blood from all patients in this initial study, we did so for two patients (Patients P16 and P19, Fig. 5). For Patient P19, the first collection was after the first dose of pembrolizumab (Fig. 5, top). The patient continued to receive pembrolizumab as he was having a decrease in tumor burden as measured by radiographic scan. However, a blood draw two and a half months after the first draw revealed an increase in CTC count from 0.67 to 9.67 CTCs/mL, with a low fraction of PD-L1+ CTCs (13.7%) (Table 1). Imaging conducted one month later demonstrated new brain metastases; the patient expired one year later. For Patient P16, 5 blood draws were collected at the beginning of and during the course of avelumab therapy. Tumor burden was measured as well and decreased over time; the patient was stable at the last CT scan. Across the 5 blood draws, CTC number varied from 0.62 to 3.87 CTCs/mL, but always with a high proportion of the CTCs being PD-L1 positive (between 80.6% and 100%). Interestingly, at the very beginning of the PD-L1 inhibitor therapy, 100% of the CTCs collected were PD-L1 positive. Future work will involve serially tracking patients to see how CTC monitoring (both total and PD-L1+ CTC level) may correlate with efficacy or loss of efficacy of treatment.
Discussion
Although immunotherapy represents a breakthrough in the treatment of selected cancers, only a fraction of patients respond. In metastatic NSCLC, the overall response rate is approximately 20% 7,14 . Although several studies have indicated that selected biomarkers (such as tumor PD-L1 expression or the presence of CD8 + infiltrating lymphocytes) are correlated with patient response, there is still a need for other ancillary and non-invasive biomarkers that can be used to monitor response to ultimately help guide the treatment course. CTCs have recently gained momentum as a non-invasive liquid biopsy for monitoring of treatment response and as a source of genetic material to understand treatment failures. Although the analysis of circulating cell-free tumor DNA (ctDNA) is also being explored, especially in identifying the presence of known druggable mutations, such an approach is not suited for identifying phenotypic changes such as levels of PD-L1 on tumor cells. There are no known consistent genetic lesions associated with up-regulated expression of PD-L1. Also, a variety of other cell types express PD-L1 (for example macrophages 13 ) and could release protein, extracellular vesicles, or mRNA into the blood stream. CTCs are ideally suited to characterize PD-L1 expression through a non-invasive liquid biopsy in that they are short-lived markers of the active invasive tumors, with holistic phenotypic and proteomic information.
This study represents one of the first to examine PD-L1 expression on NSCLC CTCs in the context of anti-PD-1/PD-L1 treatment. Some of the newly approved checkpoint inhibitors (specifically pembrolizumab) require a companion diagnostic indicating tumor positivity for PD-L1 prior to administration. In this pilot study, we therefore aimed first to develop algorithms to quantify PD-L1 expression on CTCs and tumor biopsies, and second to explore whether CTC PD-L1 expression was correlated with tumor expression. Although PD-L1 expression alone was not predictive of progression free survival, we did note several potential trends in CTC count and PD-L1 expression. Even given a limited sample size, we noted that among patients with >1.32 CTCs/mL, those with >50% PD-L1+ CTCs had improved overall response (3 of 4 patients), though most of these patients were also PD-L1 positive in their tumor biopsies. Heterogeneity of PD-L1 levels could indicate intra-tumoral or intra-patient heterogeneity, as each of these patients had multiple metastatic sites and the CTCs could break off from any or several of these sites.

Figure 5 (caption): (Top) In Patient P19, tumor burden initially decreased over time, indicating a shrinking of the right upper lobe tumor and a potential response to the treatment. However, the patient soon after was found on imaging to have developed metastases to the brain and lung, which was preceded by the increase in CTC number from 0.67 to 9.67 CTCs/mL. The patient died at day 538. (Bottom) In Patient P16, tumor burden decreased over time and was stable at the last time point. CTC number varied from 0.62 to 3.87 CTCs/mL, but always with a high proportion of the CTCs being PD-L1 positive (between 80.6% and 100%).
Nevertheless, the trends we find here suggest that PD-L1 status on CTCs may track that of tumor tissue and that this may be a useful correlate in helping to assess the potential for response to immunotherapy. Our findings indicate that PD-L1 expression on CTCs can be quantified as an adjunct biomarker. This work also suggests that serial monitoring of a patient's overall CTC count and PD-L1+ CTC count may help to identify changes in response to treatment, and that these cells may be a readily obtainable source for understanding tumor evolution; this work now needs to be confirmed with larger prospective studies. Future work will examine the time course of CTCs and their PD-L1 levels during immunotherapy treatment, as well as how changes in CTC count and PD-L1 expression may correlate with overall response.
One limitation of this study or any CTC study in general is that it can be difficult to draw associations with few cells. In many cases, CTCs are present prior to treatment but their numbers can be limited, due in part to previous chemotherapeutic treatment or the fundamental patient-to-patient heterogeneity of tumor behavior. This fundamental issue is exacerbated by practical challenges with the limited number of patient samples that are available from patients on clinical trials for new therapies.
Conclusions
Quantified CTC PD-L1 levels, when combined with tumor biopsy results, could aid in identifying patients more likely to respond to therapy, or likely to have become resistant to treatment when tracking levels over time. This study indicates that continued work is warranted to analyze and compare more CTC samples from patients on anti-PD-1 pathway treatment, to more robustly determine how PD-L1 expression correlates with tumor levels, fluctuates in response to treatment, and predicts response. The methods we describe here are potentially applicable to any tumor type and potentially any treatment course, as the size-based approach does not exclude cells on the basis of presence or absence of surface markers, and the size selectivity criteria 27 can be tuned based on the cancer type known to be present.
Fairy wrasses perceive and respond to their deep red fluorescent coloration
Fluorescence enables the display of wavelengths that are absent in the natural environment, offering the potential to generate conspicuous colour contrasts. The marine fairy wrasse Cirrhilabrus solorensis displays prominent fluorescence in the deep red range (650–700 nm). This is remarkable because marine fishes are generally assumed to have poor sensitivity in this part of the visual spectrum. Here, we investigated whether C. solorensis males can perceive the fluorescence featured in this species by testing whether the presence or absence of red fluorescence affects male–male interactions under exclusive blue illumination. Given that males respond aggressively towards mirror-image stimuli, we quantified agonistic behaviour against mirrors covered with filters that did or did not absorb long (i.e. red) wavelengths. Males showed significantly fewer agonistic responses when their fluorescent signal was masked, independent of brightness differences. Our results unequivocally show that C. solorensis can see its deep red fluorescent coloration and that this pattern affects male–male interactions. This is the first study to demonstrate that deep red fluorescent body coloration can be perceived and has behavioural significance in a reef fish.
Introduction
Colour signals appear particularly strong if they involve wavelengths that are otherwise missing from the environment. Which colours can be displayed, however, depends on the prevailing ambient light conditions. A particularly striking constriction of the available spectrum occurs in marine habitats, where the low-energy, long-wavelength part of the downwelling sunlight (more than 600 nm) is quickly absorbed by seawater, leaving little red and orange light below 10-20 m depth [1][2][3]. Therefore, in all but the shallowest euphotic environments, red pigments of marine fish cannot reflect red light and will appear dark grey [4][5][6][7]. This wavelength-specific attenuation of sunlight is accompanied by a dominance of blue and yellow body colours in reef fishes [8]. Consistent with these prevailing hues, the visual systems of most reef fish investigated to date have spectral sensitivities biased towards short and intermediate wavelengths [9,10]. As a consequence, previous research on reef fish vision has focused on the 350-600 nm range of the colour spectrum [3].
The recent discovery of red fluorescent coloration in more than 180 fish taxa has, however, challenged this view [11,12]. In contrast to the prevalent reflective coloration, fluorescent pigments absorb short-wavelength light and re-emit photons at longer wavelengths. As a consequence, fluorescence can generate red colour even when the corresponding long wavelengths are entirely absent from the ambient light environment. Thus, fluorescent pigments may offer fish the opportunity to generate conspicuous colour contrasts [6,11], particularly in deeper waters.
Measurements of the spectral sensitivity of the goby Eviota atriventris (formerly Eviota pellucida [13]) have shown that this species possesses long-wavelength visual pigments that make it physiologically sensitive to this species's red fluorescent coloration [11]. Moreover, fluorescent particles can be actively aggregated and dispersed within specialized chromatophores via hormonal and nervous control [7,14], corroborating the proposed role of red fluorescence as a signalling mechanism in reef fish [11]. While fluorescence has been associated with visual signals in parrots [15,16], spiders [17] and mantis shrimps [18], experimental data illustrating any behavioural response to fluorescent colour stimuli in reef fishes are lacking to date and the ecological role of long-wavelength fluorescence remains to be shown [6].
Here, we study behavioural responses elicited by red fluorescent colour patterns in the fairy wrasse Cirrhilabrus solorensis [19]. The genus Cirrhilabrus comprises more than 40 closely related species of small, diurnal Indo-Pacific labrids [20]. Fairy wrasses are common at the base of reef slopes at depths between 10 and 65 m [21,22], well below the depth to which red sunlight can penetrate. Cirrhilabrus solorensis features distinct red fluorescent body coloration (figure 1) with a unique deep red peak emission around 660 nm. Fluorescent emission in a comparable wavelength range has to date only been documented in one other reef fish species, the wrasse Pseudocheilinus evanidus [11]. Our own measurements show that other species of wrasses (for example in the genera Paracheilinus and Symphodus) also show deep red fluorescence (T.G. & N.K.M. 2013, unpublished data). In deep-sea dragon fishes, deep red fluorescence has been associated with bioluminescence [23,24], which has been proposed to constitute a private waveband used for interspecific communication and prey illumination ( [25,26] and references therein). For marine fish living in the euphotic zone, however, the ability to perceive such deep (more than 650 nm) red colours has never been shown.
In this study, we test the hypothesis that C. solorensis can perceive its own deep red fluorescence and demonstrate the behavioural significance of fluorescent colour patterns in intraspecific interactions. We chose a behavioural response assay as our experimental paradigm in order to capture the synthesis of all sensory and neural processes while also providing indications for adaptive significance [27]. In the field, males court groups of females while defending their territories against other males. Pilot experiments in the laboratory showed that male C. solorensis react towards their own mirror image with threat displays, chasing and biting in ways similar to the behaviour shown in male-male interactions in the field (T.G. 2011, personal observation). Such mirror-image stimuli (MIS) are commonly used in studies of fish ethology and enable the experimental manipulation of colour and illumination level via filters (reviewed in [28]). Here, we quantified agonistic reactions of males confronted with a set of MIS treatments that either showed or concealed the red fluorescent component of the mirror image, supplemented by control treatments with different brightness.
Material and methods (a) Study species
The fairy wrasse C. solorensis was selected as a study species due to its deep red fluorescent body pattern, its occurrence at depths devoid of red sunlight and its display of diverse intrasexual behaviour. Being protogynous hermaphrodites [29], all terminal-phase males are derived from initial-phase females. Cirrhilabrus solorensis exhibits a strong dimorphism between these successive sexual phases: males are generally larger and have longer pelvic fins than females (see also [30]), but most notably feature a distinct body pattern that appears purple under broad-spectrum white light but fluoresces red under monochromatic blue light illumination (figure 1).
(b) Animal maintenance
Experiments were conducted in the laboratory at the University of Tübingen, Germany, between September 2012 and January 2013, and approved by the local state authority under permit no. ZO 1/12. A total of 27 adult male individuals were obtained from an ornamental fish trader (von Wussow Importe, Pinneberg, Germany) and housed individually in 60 l aquaria. Opaque black PVC sheets between tanks were used to prevent males from seeing each other and thus avoid uncontrolled agonistic interactions. Each aquarium contained a small flower pot as shelter. All fish were fed daily with a standardized mixture of Mysis shrimp and Calanus zooplankton. Water was kept at a temperature of 25-26 °C and 33-35 ppt salinity. Illumination was set to a 12 L : 12 D cycle. To confine red colour to fluorescence and to exclude interfering ambient red light, all animals were kept and experimentally tested under nearly monochromatic blue illumination (LED spots no. 71104, Lumitronix GmbH, Hechingen, Germany). Neither ultraviolet (less than 400 nm) nor wavelengths of more than 520 nm were present in the illumination spectrum, which featured a peak emission (λmax) of 462 nm, a predominant wavelength in clear oceanic waters [31]. Prior to testing the effects of red fluorescence on territorial defence reactions, all fish were acclimatized to their tanks for a minimum of 45 days to ensure that the fish had become well accustomed and had successfully established new territories in their respective aquaria. Eleven males failed to do so; these individuals turned out to be highly timid, and persistently concealed themselves upon the appearance of the experimenter and during any subsequent treatment. As this rendered behavioural observations towards a mirror image impossible, those fish were excluded from further experimentation. Each of the 16 remaining male fish was repeatedly exposed to every experimental treatment.
(c) Experimental treatments and filter properties
In order to test the effects of red fluorescent body coloration on male agonistic behaviour, we presented individual C. solorensis with a 15 × 15 cm silver glass mirror and manipulated the colour composition of the mirror image by covering the mirror with different colour filters (LEE Filters, Hampshire, UK) held in place by metal pegs.
In the experimental treatment (NoRED), a red-opaque filter (LEE no. 729, figure 2a,c) was used in front of the mirror to block all wavelengths between 550 and 750 nm. With such a filter, the mirror reflects the fish in the ambient blue colours while masking its red fluorescence. As this filter not only blocks red light but also decreases brightness in the blue-green spectrum, we needed to rule out the possibility that male wrasses display less agonistic behaviour towards a non-red mirror image simply because it appears darker. For this reason, we used two different neutral density (ND) filters as controls. These ND filters (LEE no. 209 and no. 210) alter brightness independent of hue (i.e. they transmit all colours, including red, but reduce the overall brightness of the mirror image to 50% and 25% of the ambient light, respectively; control treatments ND50 and ND25, figure 2a,b). As a positive control, we presented the fish with a mirror (control treatment NoFILTER), which generated a bright mirror image containing all available wavelengths, including the fish's red fluorescence.
To examine whether the red fluorescent patches alone elicit any behavioural response, we also covered the mirror with a filter that blocks the wavelength range 380-600 nm (LEE no. 106; figure 2), thus transmitting only the red fluorescent body pattern while obscuring the blue reflection of the fish and so dissociating the colour patch from the fish shape (control treatment RedONLY). In order to ensure that all the agonistic behaviour observed was caused by the mirror image and not by the mere presence of the glass pane or filter sheet, we added several negative controls: each filter was also presented separately against the grey, non-reflective back of the mirror (negative control treatments back+NoRED, back+ND50, back+ND25, back+RedONLY). To further eliminate possible olfactory and chemical cues, all filters used in this experiment were present in the aquaria simultaneously during each treatment, concealed at the reverse side of the mirror.
Preliminary analyses showed that in all these negative controls, as well as in the treatment only transmitting red fluorescent coloration without the outline of the fish (RedONLY), the fish showed no aggressive behaviour. In order to focus on planned comparisons and reduce the risk of type I errors [32], we excluded these control treatments from further statistical analysis.
Qualitative filter transmission characteristics (figure 2a) were measured with a spectrometer (QE65000, Ocean Optics, FL), connected via a fibre-optic cable (Ocean Optics QR600 -7-UV125BX) to a halogen light source (Ocean Optics HL-2000), with the light-emitting and -collecting probe pointing at a diffuse white reflectance standard (Spectralon SRS-99, Labsphere, NH). Filters were placed individually in an in-line filter holder (Ocean Optics FHS-UV) in the light path leading to the spectrometer, and transmission data were recorded with SPECTRA SUITE v. 6.1 software (Ocean Optics).
To also assess quantitative transmission properties of the filters (figure 3), we measured the overall amount of light (380-780 nm) transmitted under experimental conditions using a portable photospectrometer (SpectraScan PR-670 with Cosine Corrector CP-670, Photo Research Inc., CA). With the filter completely covering the spectrometer's photo detector, we took five standardized measurements of photon irradiance for each filter used.
(d) Experimental procedure and data recording
For each single treatment, a mirror with attached filters was carefully lowered into the water and placed at the side of the tank, whereupon the experimenter withdrew to minimize human interference. The fish's behaviour was then recorded for 2 min with a video camera (Sony HDR-CX6) mounted on a tripod parallel to the mirror pane. Experimental testing started in the morning and finished in the early afternoon. To eliminate daytime as a confounding factor, the testing sequence was designed in such a way that each day we started with a different animal, which was then subjected to all treatments in a randomized sequence; the completion of such a sequence was termed an experimental run. Owing to constraints in laboratory space, the experiment had to be divided into two sequential trials with eight animals each. Behavioural data were extracted from the video sequences with the observer always blind to the treatment (see electronic supplementary material). We evaluated the frequency of three distinct agonistic behaviours: (i) display, (ii) bite and (iii) tail-slap. Display behaviour was initiated by the fish swimming parallel to the mirror, whereupon the animal abruptly stopped and erected all fins before swimming on again. Bites were counted each time the fish bit the mirror, which sometimes culminated in attempted jaw locking. Tail-slaps consisted of a sudden hitting motion of the labrid's caudal fin against the mirror and were usually observed at the end of a sequence of agonistic reactions.
(e) Fluorescence photography and morphometric parameters
One week after completion of the behavioural experiment, we measured individual body length, total body area and red fluorescent body area of each fish. For this purpose, each individual was transferred into a small, custom-made aquarium with a scale bar. Fish were photographed under monochromatic blue illumination provided by two blue LED torches (mini compact LCD, Hartenberger, Köln, Germany), each in combination with a subtractive dichroic blue filter (FD2C, Thorlabs, NJ). We used a digital still camera (Canon EOS 7D), standardized settings (1/15th sec, f/8, ISO 800 and white balance of 7450 K) and an EF-S 60 mm f/2.8 macro lens in combination with an optical long-pass filter attenuating short wavelengths below 550 nm (LEE filter no. 105). The latter served to artificially enhance the visibility of the red fluorescent pattern for image analysis. The fish pictures were imported, calibrated and measured in IMAGEJ v. 1.45s [33]. For the fluorescent area measurements, we set the colour threshold function to select only pixels with RGB red values exceeding 210. Fluorescence excitation and emission characteristics (figure 1c) were determined by measuring male opercular scale samples with a spectrofluorometer (QuantaMaster 40, Photon Technology International, NJ) equipped with two liquid light guides (LLG 380): one for excitation, aimed at a 45° angle at the fish scale, and one for collecting the emission signal, aimed perpendicular to the scale. Both tips were less than 5 mm away from the sample. The sample was measured in salt water to limit osmosis-related artefacts and suppress reflection, which is much stronger in air. For this purpose, the tips of both light guides were also submerged. Excitation was varied from 330 to 730 nm in 4 nm steps. Emission was measured from 350 to 750 nm, also in 4 nm steps. The entry and exit slits of both monochromators (excitation source and emission measurement) were set to 5 nm.
Emitted light was integrated by a photomultiplier (Hamamatsu PMT R928) in 1 s bins. The results were corrected for the transmission properties of the liquid light guides as well as the quantum efficiency of the photomultiplier at each measured wavelength.
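The area-measurement step described above (selecting pixels whose RGB red value exceeds 210 and converting the pixel count to an area via the photographed scale bar) can be sketched as follows. The study used ImageJ; this Python version, with hypothetical pixel values and scale factor, is illustrative only:

```python
# Minimal sketch of the fluorescent-area threshold step (the study used
# ImageJ's colour threshold; this is only an illustration). A pixel
# counts as "red fluorescent" when its RGB red value exceeds 210; the
# pixel count is converted to mm^2 with a calibration factor derived
# from the photographed scale bar. All values here are hypothetical.

RED_THRESHOLD = 210

def fluorescent_area(pixels, mm2_per_pixel):
    """pixels: iterable of (r, g, b) tuples; returns area in mm^2."""
    n_red = sum(1 for (r, g, b) in pixels if r > RED_THRESHOLD)
    return n_red * mm2_per_pixel

# Hypothetical 2x3 image flattened to a pixel list:
img = [(250, 40, 30), (211, 90, 80), (200, 10, 10),
       (120, 120, 120), (255, 0, 0), (30, 30, 30)]

print(fluorescent_area(img, mm2_per_pixel=0.01))  # 3 pixels exceed the threshold
```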
(f ) Statistical analysis
Statistical analysis was done in R v. 2.15.2 [34]. Generalized linear mixed models (GLMMs) were used to examine sources of variation in the total number of displays, bites and tail-slaps between treatments. All response variables represented count data following a Poisson distribution and were modelled using the glmer function in the 'lme4' package [35]. To account for repeated measurements per individual fish, all models contained individual ID as a random intercept factor with 16 levels. Fixed factors included the experimental treatment (four levels: treatment NoRED, controls ND25 and ND50, and positive control NoFILTER) as well as the experimental trial (two levels: first and second). After correcting for overdispersion, model reduction showed that the fixed factor experimental trial did not improve model fit as evaluated by the Bayesian information criterion. This indicates that the treatment effects did not differ between the two experimental trials, and we thus omitted this factor from the final analysis. Cases in which a given behaviour was not observed were included as zero values. One individual performed so many bites that it was considered an outlier and removed from the analysis. We conducted post hoc comparisons for each pair of treatments with Tukey's HSD, using the glht function in the 'multcomp' package [36]. In order to investigate potential effects of morphometric parameters on agonistic behaviour, we added total body length, total body area, red fluorescent body area and all their interactions as additional covariates to the model. All results were considered significant at p < 0.05.
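The model-reduction step above can be illustrated with the Bayesian information criterion, BIC = k·ln(n) − 2·ln(L): a fixed factor is dropped when including it fails to lower the BIC. A sketch with hypothetical log-likelihoods (not the fitted values from the study, which used glmer in R):

```python
import math

# Illustration of the BIC-based model-reduction step. The BIC penalizes
# extra parameters: BIC = k * ln(n) - 2 * log-likelihood. A fixed
# factor (here, "experimental trial") is omitted when the richer model
# does not lower the BIC. The log-likelihoods below are hypothetical.

def bic(log_likelihood, n_params, n_obs):
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

n_obs = 64  # e.g. 16 fish x 4 treatments

bic_without_trial = bic(log_likelihood=-150.0, n_params=5, n_obs=n_obs)
bic_with_trial    = bic(log_likelihood=-149.5, n_params=6, n_obs=n_obs)

# The small likelihood gain does not offset the parameter penalty,
# so the simpler model (without "trial") is preferred:
print(bic_without_trial < bic_with_trial)  # → True
```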
Results
We observed significantly less display behaviour under the experimental NoRED treatment compared with controls ND25, ND50 and NoFILTER (Tukey's HSD tests, all p < 0.001; figure 4). Bites were also significantly less frequent in the NoRED treatment compared with all control treatments, while we found no significant difference between the different control treatments (table 1 and figure 5). Tail-slaps were observed too rarely to make a statistical analysis meaningful. The direct comparison between the experimental NoRED treatment and the darkest control (ND25) is particularly revealing: under our experimental light conditions, the NoRED filter transmits approximately 15% more light than the red-transparent control filter ND25 (figure 3). Nevertheless, both displays and bites occurred significantly less frequently under the NoRED treatment compared with the ND25 treatment (display behaviour: z = −4.559, n = 16, p < 0.001, Tukey's HSD; bites: z = −2.675, n = 15, p = 0.034).
The morphometric parameters body size, total body area and fluorescent body area, and all of their interactions, did not have statistically significant effects on the observed agonistic behaviours (GLMMs for morphometric parameters, all p > 0.25).
Discussion
Male C. solorensis showed significantly fewer agonistic responses when confronted with a mirror image masking their red fluorescent body patterns compared with control treatments where their fluorescent coloration remained visible. Pairwise comparisons between control treatments revealed that a change in brightness alone had no significant effect on the observed behaviour. This clearly suggests that agonistic behaviour in C. solorensis is influenced by the presence of red fluorescent body coloration in the fish's mirror image, rather than through a change in brightness.
We thus conclude that (i) C. solorensis is able to perceive the deep red fluorescent coloration of its conspecifics and that (ii) this fluorescent colour pattern affects agonistic male-male interactions. This is the first study to demonstrate that deep red fluorescent body coloration can have a behavioural significance in a reef fish.
Why does the red fluorescent coloration influence male agonistic interactions? One explanation is that this colour pattern facilitates the recognition of male conspecifics, similar to the role of purely reflective colour patterns in other marine and freshwater fish [37][38][39]. An experimentally manipulated mirror image lacking that stimulus could therefore fail to be recognized as a rival. However, the fact that males did show some agonistic behaviour when confronted with a red-deprived mirror image (although at significantly lower rates) indicates that even without red colour, the mirror image was perceived as a potential intruder. Also, when protogynous Cirrhilabrus wrasses change sex from initial-phase females to terminal-phase males, transitional-phase individuals already resemble males in shape but still lack the fluorescent dorsal and opercular stripe (T.G. 2011, personal observation; see also [30,40]). A red-deprived mirror image may therefore be perceived as a transitional male that is not yet judged to be a fully competent rival, and thus receives only limited attention from territorial males.
The mere presence of deep red colour without the outline of the fairy wrasse (treatment RedONLY) proved insufficient to evoke any aggressive responses. This is not unexpected because many other reef organisms (such as stony corals and calcareous algae) also exhibit red fluorescence [11].
In recent years, short-wavelength ultraviolet colour patterns have been shown to serve species recognition and modulate male aggression in damselfish [39,41], and affect mate choice and territorial behaviour in guppies [42] and sticklebacks [43][44][45]. As many predatory fish are unable to detect ultraviolet light [46,47], UV coloration has been suggested to act as a private communication channel [41,48]. Red fluorescence in reef fish also has the potential to serve private communication: the fluorescent colour pattern of C. solorensis peaks at around 660 nm, a visual range for which most reef fish families have poor or no sensitivity [3,10,49]. This reduced sensitivity for red probably represents an adaptive response to the lack of long-wavelength sunlight in most marine habitats, making its perception superfluous. In this blue-dominated environment, however, red fluorescence enables fish to display signals with a particularly high chromatic contrast and conspicuousness to those few receivers that possess photoreceptor sensitivity in this long-wavelength range. The same signals remain invisible (or at least inconspicuous) to others with peak sensitivities at shorter wavelengths.
The suitability of red colour signals for private communication is further enhanced by the rapid attenuation of long wavelengths in seawater [2]: red fluorescent coloration is particularly well suited for short-range visual interactions, as is usually the case for social and sexual interactions among conspecifics. At the same time, its information content is rapidly lost at the greater distances relevant for most predators to detect their prey. As this study demonstrates that fairy wrasses do perceive their fluorescent colour pattern and use it for intraspecific interactions, we propose that C. solorensis may have shifted its visual communication towards wavelengths that predatory fish are less likely to pick up. Our discovery of a reef fish that uses long-wavelength fluorescence for intraspecific interaction raises several questions that will be addressed in future work: first, physiological characterizations of the long-waveband photoreceptor sensitivity of these fish will help towards understanding the intermediate perceptual steps enabling the behavioural responses documented here. Second, the fluorescent pigment and its associated costs should be characterized. Third, in addition to the male-male interactions described here, the male-limited fluorescent pattern of C. solorensis is a good candidate trait to affect female choice. Finally, to investigate the potential use of red fluorescence as private communication, the exact visual capabilities of predators in this wavelength range need to be examined, while taking into account functional costs of evolving the ability to detect such signals [49].
Placenta accreta spectrum into the parametrium, morbidity differences between upper and lower location
Abstract Objective To demonstrate the surgical and morbidity differences between upper and lower parametrial placenta invasion (PPI). Materials and methods Forty patients with placenta accreta spectrum (PAS) into the parametrium underwent surgery between 2015 and 2020. Based on the peritoneal reflection, the study compared two types of parametrial placental invasion (PPI), upper or lower. The surgical approach to PAS followed a conservative-resective method. Before delivery, surgical staging by pelvic fascia dissection established a final diagnosis of placental invasion. In upper PPI cases, the team attempted to repair the uterus after resecting all invaded tissues or performed a hysterectomy. In cases of lower PPI, experts performed a hysterectomy in all cases. The team used proximal vascular (aortic occlusion) control only in cases of lower PPI. Surgical dissection for lower PPI started by finding the ureter in the pararectal space and ligating all the tissues (placenta and newly formed vessels) to create a tunnel releasing the ureter from the placenta and its supplying vessels. Overall, at least three pieces of the invaded area were sent for histological analysis. Results Forty patients with PPI were included, 13 in the upper parametrium and 27 in the lower parametrium. MRI indicated PPI in 33/40 patients; in three, the diagnosis was presumed by ultrasound or medical background. Intrasurgical staging categorized the PPI cases and established the diagnosis in seven previously undetected cases. The expert team completed a total hysterectomy in 2/13 upper PPI cases and in all lower PPI cases (27/27). Hysterectomies in the upper PPI group were performed because of extensive damage to the lateral uterine wall or tubal compromise. Ureteral injury ensued in six cases, all corresponding to cases without catheterization or with incomplete ureteral identification.
All forms of proximal aortic vascular control (aortic balloon, internal aortic compression, or aortic loop) were efficient for controlling bleeding; in contrast, ligature of the internal iliac artery proved useless, resulting in uncontrollable bleeding and maternal death (2/27). All patients had antecedents of manual placental removal, abortion, curettage after a cesarean section, or repeated D&C. Conclusions Lower PAS parametrial involvement is uncommon but associated with elevated maternal morbidity. Upper and lower PPI have different surgical risks and technical approaches; consequently, an accurate diagnosis is needed. A clinical background of manual placental removal, abortion, or curettage after a cesarean or repeated D&C should prompt evaluation for possible PPI. For patients with high-risk antecedents or unclear ultrasound findings, a T2-weighted MRI is always recommended. Performing comprehensive surgical staging in PAS allows the efficient diagnosis of PPI before undertaking any dissection procedures.
Introduction
Placenta accreta spectrum (PAS) is a heterogeneous disease that includes severe forms, with high maternal morbidity, as well as less severe forms. Treatment by expert groups implies a relatively low frequency of complications, but highly complex cases may require many hospital resources and lead to severe maternal morbidity [1] and even death [2]. In addition, the location of placental invasion is related to particular surgical difficulty and morbidity [1], especially when the lower part of the bladder, the cervix, or the parametrium is involved [3].
Information about PAS with parametrial involvement is negligible [4]. In addition, few existing articles describe the clinical results of these women, almost always mentioning the use of a high number of transfusions [5][6][7], aortic occlusion [5,7], multiple admissions to the operating theater [8], elevated illness, and long-term severe sequelae [8].
PAS with parametrial placental invasion (PPI) implies the involvement of many arterial pedicles from multiple sources, which the obstetrician does not often handle. Moreover, the parametrial space, mainly located under the peritoneal reflection, is a narrow and deep area of the pelvis near the ureter and multiple vascular structures [7]. Thus, it is necessary to establish management guidelines for PAS specialists and general obstetricians. PPI may be divided into upper or lower cases based on its position relative to the peritoneal reflection. While upper PPI may only require an average procedure, an inadequate approach to lower PPI could lead to fast, uncontrollable bleeding, severe morbidity, and even mortality [9].
Although PAS is not an actual placental invasion, just a placental protrusion, we continue using the term "invasion" because average readers frequently use it.
We describe the clinical results of PAS cases with parametrial placenta invasion (PPI) in three low-and middle-income countries and propose a sequential approach for management accordingly.
Materials and methods
Forty patients with PAS into the parametrium underwent surgery between January 2015 and December 2020. The specialist team received the patients in private, university, and public hospitals in Buenos Aires, Argentina; in the Valle de Lili Foundation, Cali, Colombia; and in the Dr. Soetomo General Hospital, Indonesia. The senior specialist performed prenatal imaging in all patients, including abdominal and transvaginal ultrasound and T2-weighted, ultrafast magnetic resonance imaging (MRI). Patients were split into two PPI groups according to whether the placental invasion was above (upper) or below (lower) the peritoneal reflection line. The surgical approach to PAS followed the previously described conservative-resective method [10]. Before delivery, the surgeon performed a pelvic coalesced fascia dissection to achieve a definitive diagnosis of placental invasion. Experts performed surgical staging after cutting inside the round ligament, completed a dissection of the broad ligament folds, and separated the entire vesicouterine fold (Figure 1). Direct observation of the parametrial space between the two sheets of the broad ligament was classified as follows: class 0, no PPI; class A, PPI-like lateral uterine dehiscence, without strong tissue adherence or neovascularization; and class B, PPI with evidence of newly formed vessels in the placenta or with firm tissue adherence to the pelvic wall.
In upper PPI cases, the surgical team attempted to repair the uterus after resection of the invaded tissues or performed a hysterectomy; overall, hemostasis was completed by clamping the uterine arteries together with part of the myometrium. The choice between conservative repair and a resective-ablative procedure depended on the woman's desire for future pregnancy. In cases of lower PPI, experts performed a hysterectomy in all instances. First, the urologist inserted a simple ureteral catheter. In lower PPI, repeated resistance to advancement of the ureteral catheter indicated that catheterization should be stopped due to the risk of PPI rupture and massive, unexpected bleeding [11]. The urologist repaired any detected ureteral injuries by resection of the damaged borders, use of a double-J catheter, four stitches for ureteral approximation, and drainage; immediate repair could bring satisfactory surgical results and fewer complications [12]. In cases of hidden ureteral damage, integrity was reestablished by ureteral reimplantation within 7-10 days. Proximal vascular control was used only in patients with lower PPI. In four instances, the vascular surgeons used an elastomeric infrarenal aortic balloon (REBOA™, Prytime Medical, Boerne, TX); in three, abdominal aortic loops were placed [13]; in other cases, an obstetrician performed internal manual aortic compression; and in two, the internal iliac arteries were ligated bilaterally.
Surgical dissection for lower PPI cases started by finding the ureter in the pararectal space (Video 1). A surgical retractor then separated the round ligament laterally to expose the PPI area covering the ureter (Supplementary Material 1, F1). Later, the surgeon ligated the placenta and newly formed vessels above the ureter using Vicryl number 0 (Supplementary Material 1, F2), creating a tunnel to release the ureter from the placenta and its connecting vessels. Because the pelvic ureteral blood supply enters from the lateral side, ligatures must be applied on the inner (medial) side of the ureter (Supplementary Material 1, F3). The PPI is then completely separated before performing the hysterectomy [14], reducing the risk of unexpected bleeding.
In all cases, at least three pieces of the invaded area were sent for histological analysis. Continuous variables were expressed as medians and interquartile ranges and analyzed with the Mann-Whitney U-test. Categorical variables were expressed as proportions and compared using the Chi-square or Fisher's exact test (Stata, version 14.0, StataCorp., College Station, TX). Approval was obtained from the Ethics Committee under protocol number 550-2015.
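The categorical comparisons mentioned above used the Chi-square or Fisher's exact test (computed in Stata). As an illustration of the latter, a self-contained two-sided Fisher's exact test for a 2 × 2 table, with hypothetical counts (not from the study):

```python
import math

# Illustrative pure-Python two-sided Fisher's exact test for a 2x2
# contingency table [[a, b], [c, d]], the test type used here for
# categorical comparisons (the study itself used Stata 14).

def fisher_exact_two_sided(a, b, c, d):
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = math.comb(n, c1)

    def p_table(x):  # hypergeometric probability of the top-left cell = x
        return math.comb(r1, x) * math.comb(r2, c1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Sum probabilities of all tables at least as extreme as observed:
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical 2x2: complication present/absent in lower vs. upper PPI.
p = fisher_exact_two_sided(3, 1, 1, 3)
print(round(p, 4))  # → 0.4857
```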
Main findings
Lower parametrial PAS involvement is an infrequent but potentially life-threatening situation [15]. At surgery, the surgical team confirmed 40 cases of PPI, 13 in the upper parametrium and 27 in the lower parametrium. MRI indicated PPI in 21/32 patients; in seven, the identification was made by surgical staging, and in three, the diagnosis was presumed by ultrasound [16]. False-positive ultrasound interpretations occurred in cases in which accreta was suspected to be a clinical concern but turned out not to be present [17]. Conversely, false-negative readings may lead to unanticipated complications or tragic consequences [18]; for this reason, high-risk patients must be operated on as PAS-positive to decrease maternal morbidity [19]. The coexistence of MRI signs in the lateral uterine wall was associated with an elevated risk of bleeding [20] and complications [21]; an MRI study would facilitate preoperative planning and evaluation of maternal outcomes in most cases with maternal risk factors. Therefore, MRI should be used in at-risk patients to accurately identify the placental location, regardless of the ultrasonography results. Intrasurgical staging categorizes the position and features of the invaded placenta and the newly formed vessels between the lateral uterus and the internal iliac vascular fascia (Figure 2).
All PPIs had a medical background, for instance, manual removal of the placenta [22], iterative D&C [23], or curettage after cesarean section or abortion [24]. Lower PPI showed more complications, greater blood loss, a requirement for proximal vascular control, longer operative time, and more urinary injuries than upper PPI (Table 1). The possibility of unexpected massive bleeding increases in lower PPI; heavy bleeding is associated with the presence of newly formed vessels but not with the size of the lower PPI (Supplementary Material F4). Surgical diagnosis of lower PPI with the mentioned features signals a risk of major blood loss [7] and should alert the surgeon to stop any dissection until aortic vascular control is established. The efficiency of aortic vascular control was directly associated with its timing, with the best results obtained when it was applied before, rather than during or after, the parametrial dissection. The use of internal iliac vascular control resulted in massive, uncontrollable bleeding, with fast and not immediately recognized blood loss [25,26], coagulopathy, and metabolic acidosis, leading to two cases of maternal death. In both cases, the surgical team underestimated a small piece of lower PPI and was surprised by uncontrollable bleeding despite bilateral internal iliac vascular control. During active bleeding or persistent oozing, damage-control surgery [27] was attempted using pelvic packing. However, it was completely useless because blood and clots passed unnoticed through pelvic subperitoneal spaces and retroperitoneal areas (Supplementary Material F5). In upper PPI, bleeding was efficiently controlled using uterine vessel compression alone (Supplementary Material F6). In all cases of upper PPI, uterine repair was technically possible, but the decision for a resective procedure was the final choice of the obstetrician according to the mother's preferences.
All cases of ureteral ligature occurred in patients without ureteral catheterization or without complete ureteral identification; the ureter was then reimplanted [28] within 7-10 days without further complications.
Internal manual compression of the aorta effectively controls pelvic bleeding in unexpected parametrial bleeding and facilitates dissection maneuvers. Histological analysis confirmed PAS: over 120 samples (three samples from each of the 40 cases) were collected, and all showed placenta percreta. PAS is not an authentic invasion but just a placental protrusion, although morbidity and blood loss are closely associated with the invasion topography and not with the degree of placental invasion [10].
Figure 2. Parasagittal cut of the right female pelvis in an embalmed corpse; the peritoneal sheet of the parametrium was transilluminated. Notice that the parametrium, a narrow space, contains plenty of arteries and veins as well as the ureter.
Strengths and limitations
The study's main strength is that we collected a high number of cases of this uncommon PAS location. The use of aortic vascular control and precise ureteral dissection demonstrated safety and decreased blood loss [29] in patients with a lower PPI. Compared with existing papers, this study represents the most extensive series on PPI and could be an outstanding guide for experts and beginners. A surgical guide in severe cases of PAS proposes an alternative of leaving the placenta in situ, although this option is connected with severe maternal morbidity, especially in placenta percreta [30]. The main limitation of our study is the poor comparability with other publications, which are only a few case reports. Furthermore, lower PPI is connected with hazardous surgery, massive and uncontrollable bleeding, severe maternal morbidity, and even death; probably for this reason, hospitals or doctors are not prone to publish their results.
Moreover, the low frequency of PPI made it impossible to collect information from centers with limited PAS cases per year. Although Argentina legalized abortion some years ago, Argentina, Colombia, and Indonesia are countries with a high rate of unlawful abortions, procedures closely connected to uterine damage. Maternal mortality in percreta is approximately 7% [31], but this is a general value; the rate is likely significantly higher in lower PPI. Although this series appears substantial, the relative values for PAS complications are not sufficient to establish real statistical significance [32].
Interpretation
The presence of newly formed vessels or a direct placental attachment (without serosa) in lower PPI indicated the possibility of uncontrollable massive bleeding [33], independent of the extrauterine placental size. Lower PPI bleeding is associated with challenging hemostasis, first because of the diversity of its blood supply sources [34] and second because of the particularly deep and narrow anatomy of the placental invasion. Upper PPI may look impressive, but it is not technically difficult to handle when managed by expert groups. Because the myometrium of the upper uterus is thick and healthy compared with that surrounding a lower PPI, the use of aortic vascular control is recommended only in cases of lower PPI, especially in the presence of newly formed vessels or firm placental adherence to the internal iliac elements. First, the newly formed vessels have multiple connections with the lower anastomotic circle [35]; second, the possibility of undiagnosed extension to the ischiorectal fossa through the levator ani muscle (the roof of the ischiorectal fossae) is unknown until deep dissection.
The concomitant use of aortic vascular control and specific ureteral dissection is safe and minimizes the surgical risk in lower PPI. Surgeons handling a complicated case of PAS may be tempted to avoid touching the placenta, leaving it in situ as a definitive treatment or addressing it later with a delayed hysterectomy. In the case of early and massive, unexpected bleeding, a surgical solution is problematic, first due to the placental placement and second due to the risk of coagulopathy, hypovolemic shock, and metabolic acidosis. In these cases, primary aortic vascular control is a priority to gain time to replace volume, restore clotting, and stabilize hemodynamic parameters [36]. Although unpublished, massive embolization for lower PPI is not recommended given the wide pelvic and extrapelvic anastomotic net [37]. Although there are some prospective randomized trials [38,39], retrospective cohort and physiological research on internal iliac ligature [40], and studies that have demonstrated its inefficacy in cases of pelvisubperitoneal bleeding, the use of internal iliac artery occlusion is still associated with severe complications [41][42][43] and even death [44,45].
Conversely, the aortic vascular control below renal arteries blocks the iliac internal, external, some aortic, and femoral anastomotic components bilaterally in a simple way. Consequently, it is an invaluable tool to achieve hemostasis in lower PPI. Comparable efficacy of aortic blocking has been demonstrated using a specific aortic balloon, an aortic cross-clamp [46], or inexpensive aortic slinging [13].
Conclusions
Lower PAS parametrial involvement is uncommon but is associated with elevated maternal morbidity. Upper and lower PPI have different surgical risks and require different technical approaches; consequently, accurate identification is greatly needed.
Women with a clinical background of manual placental removal, abortion, or repeated D&C must be carefully examined to diagnose a possible PPI. For patients with high-risk antecedents or doubtful ultrasound findings, a T2-weighted MRI is recommended. Performing comprehensive surgical staging in PAS allows efficient diagnosis of PPI before any dissection maneuvers that could cause unexpected massive bleeding. Knowing what to do, and what not to do, is essential for avoiding unnecessary organ ablation and uncontrollable massive bleeding.
Cardiovascular Disease Burden in Patients with Non-Dialysis Dependent Chronic Kidney Disease in Cameroon: Case of the Douala General Hospital
Introduction: Cardiovascular disease (CVD) is the major cause of morbidity and mortality in patients with chronic kidney disease (CKD). Objective: To evaluate the burden of CVD and audit the management of cardiovascular risk factors (CVRF) in patients with non-dialysis (ND) dependent CKD in Cameroon. Patients and Methods: A cross-sectional study in the Douala General Hospital was conducted from January to March 2016, including CKD patients' stages 3-5 ND. Socio-demographic data, comorbidities, medications and biological data were extracted from patients' records. For each participant, lipid profile and urinary protein excretion were measured, and a resting electrocardiogram was done. Hypertension, diabetes, dyslipidemia, obesity, smoking, alcohol consumption, anemia, hyperuricemia, proteinuria and high calcium-phosphorus product were considered as CVRF. CVD was defined as a history of stroke, peripheral artery disease, coronary heart disease (CHD), left ventricular hypertrophy (LVH), heart failure (HF) and arrhythmia. We used the KDOQI 2003, KDIGO 2012 and JNC 8 guidelines for the definition and evaluation of the management of lipid abnormalities, proteinuria and hypertension, respectively. Results: A total of 83 patients (45 males) were included; mean age was 56 ± 15 years. Mean number of CVRFs per patient was 5.19 ± 1.64; hypertension (90.3%), obesity (79.5%), anemia (78.3%), dyslipidemia (69.8%) and hyperuricemia (69.8%) were the most frequent. Mean number of

How to cite this paper: Halle, M.P., Kom, M.F., Kamdem, F., Mouliom, S., Fouda, H., Dzudie, A., Kaze, F.F. and Ashuntantang, E.G. (2020) Cardiovascular Disease Burden in Patients with Non-Dialysis Dependent Chronic Kidney Disease in Cameroon: Case of the Douala General Hospital. Open Journal of Nephrology, 10, 171-186. https://doi.org/10.4236/ojneph.2020.103017
Received: June 3, 2020; Accepted: July 4, 2020; Published: July 7, 2020
Copyright © 2020 by author(s) and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/ Open Access
Introduction
Chronic kidney disease (CKD) is a public health problem with a prevalence estimated at 13% [1]. It progresses through 5 stages and carries a high morbi-mortality [2]. Cardiovascular disease (CVD) is frequent at each stage of CKD, with reported prevalences ranging from 26% to 48% [3]-[8]. Furthermore, CKD is an independent risk factor for cardiovascular morbidity and mortality [9] [10], and CVD represents almost 50% of the causes of death in CKD patients [11] [13].
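The five CKD stages referred to above are conventionally defined by estimated glomerular filtration rate (eGFR, ml/min/1.73 m²) under the KDOQI/KDIGO classification; a simple illustrative helper (the study enrolled stages 3-5):

```python
# Illustrative helper: CKD stage from eGFR (ml/min/1.73 m^2), following
# the standard KDOQI/KDIGO cut-offs (>=90, 60-89, 30-59, 15-29, <15).
# Stage 1 additionally requires other evidence of kidney damage; that
# clinical nuance is omitted in this simple sketch.

def ckd_stage(egfr):
    if egfr >= 90:
        return 1
    if egfr >= 60:
        return 2
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5

print([ckd_stage(v) for v in (95, 70, 45, 20, 8)])  # → [1, 2, 3, 4, 5]
```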
Traditional risk factors such as diabetes, hypertension, age, gender, and dyslipidemia are the main cardiovascular risk factors (CVRF) in general, but in CKD patients, specific factors such as anemia, endothelial dysfunction, vascular calcification, hyperparathyroidism, hyperhomocysteinemia, albuminuria, and chronic inflammation are also associated with CVD. Studies have shown that more than one CVRF exists in patients with CKD, and that the number increases with decreasing renal function [13] [14]. The majority of these CVRF are potentially modifiable.
Early detection and adequate management of these factors is a key strategy in the prevention of CVD. Blood pressure should be maintained below 140/90 mmHg, and blockade of the renin-angiotensin-aldosterone system (RAAS) is the cornerstone of treatment [15] [16]. In diabetic patients, keeping hemoglobin A1C levels below 7% and treating dyslipidemia to a low-density lipoprotein cholesterol below 90 mg/dl is fundamental [17] [18] [19]. Anemia, a strong predictor of left ventricular hypertrophy, should be treated, with recommended hemoglobin levels between 11 and 12 g/dl [20] [21] [22]. Treatment of hyperphosphatemia reduces the rate of cardiac valve calcification and CKD progression [23] [24]. Studies have shown that treatment of these risk factors is suboptimal and their control poor [25].
CKD is a major health issue in sub-Saharan Africa (SSA), with an estimated prevalence of 13.9% [26]. The morbidity and mortality of these patients is high due to the occurrence of CVD [3]. Data on the epidemiology of CVRF and CVD in CKD patients in SSA are limited. The few available studies reported that the burden of CVRF and CVD is high in CKD patients in this setting and that the care of these patients is largely inadequate and suboptimal [27] [28]. In Cameroon, CKD prevalence is high, mainly due to hypertension, diabetes and chronic glomerulonephritis [29] [30].
These patients carry high morbidity and mortality; late referral to a nephrologist as well as poor management is frequent [31] [32] [33], and there are no data on the epidemiology of CVRF and CVD in these patients. Therefore, we undertook this study to describe the spectrum of CVRF and evaluate their management according to guidelines among CKD patients in Cameroon.
Study Design and Setting
This cross-sectional study was carried out from 1st January to 31st March 2016 in the outpatient section of the nephrology unit of the Douala General Hospital (DGH), a tertiary referral hospital in Cameroon. DGH is a 320-bed public institution and the main referral hospital for patients with kidney disease in the Littoral region of the country. The medical staff of the unit comprises two nephrologists and one general practitioner. Patients with CKD referred to the unit are assigned a unique identifier, attached to one of the nephrologists, and then followed up at intervals determined by the stage of the renal disease and comorbidities. At the first consultation in the unit, each patient has a clinical assessment and laboratory tests done. The diagnosis of kidney disease was based on an elevated serum creatinine level with or without urine dipstick abnormalities, and a glomerular filtration rate (GFR) less than 60 ml/min/1.73 m2. GFR was estimated using the CKD-EPI equation [34]. The etiology of CKD was mostly based on clinical arguments in the absence of renal biopsy.
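GFR estimation with the CKD-EPI creatinine equation and the corresponding GFR staging can be sketched as follows (a minimal Python illustration; the 2009 CKD-EPI coefficients are quoted from memory and the patient values in the example are hypothetical):

```python
def egfr_ckd_epi(scr_mg_dl, age_years, female, black=False):
    """2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9          # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411    # sex-specific exponent
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def ckd_gfr_stage(egfr):
    """GFR categories; this study enrolled stages 3-5 (eGFR < 60)."""
    if egfr >= 90: return "1"
    if egfr >= 60: return "2"
    if egfr >= 45: return "3a"
    if egfr >= 30: return "3b"
    if egfr >= 15: return "4"
    return "5"
```

For a hypothetical 56-year-old man with a serum creatinine of 1.8 mg/dl, this gives an eGFR of roughly 41 mL/min/1.73 m^2, i.e. stage 3b.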
Data Collection
Final-year undergraduate medical students consecutively collected data on all patients who provided written informed consent and attended their first nephrology consultation with a nephrologist's diagnosis of CKD stage 3 to 5 non-dialyzed. We included all consenting patients with CKD stage 3 to 5 non-dialyzed who had been followed up in the unit for more than 3 months. From medical records, we collected relevant data: socio-demographic data (sex, age, level of education, source of funding), anthropometric parameters (weight, height, waist and hip circumference), comorbidities (hypertension, diabetes mellitus, history of CVD, HIV, gout), lifestyle habits (alcohol and tobacco use), etiology of CKD, medications, and blood pressure at referral. Biological parameters obtained within 3 months of inclusion in the study (serum urea and creatinine levels, glycemia, uric acid, hemoglobin level, serum albumin, calcium, and phosphorus) were collected. For each participant, 5 ml of fasting blood was collected for triglyceride, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) measurement using an AUTOMATIC COBAS 311 analyzer (HITACHI®, Tokyo 105-8717, Japan) in the biochemistry laboratory of the DGH. A spot urine sample was also collected before 9 a.m. for protein and creatinine dosage by the pyrogallol red colorimetric method and the Jaffe kinetic method, respectively, using a visual spectrophotometer (BIOMERIEUX®, France). The urinary protein to creatinine ratio (UPCR) was computed as urinary protein/urinary creatinine in mg/g. A resting electrocardiogram (ECG) was done for all participants using a single-channel CARDIART 6180T ECG (2011 FROST and SULLIVAN-Delhi-India). The ECG leads were placed in line with the recommendations of the American Heart Association guidelines [35].
Definitions and Calculations
Chronic kidney disease was defined as an estimated GFR < 60 ml/min for more than 3 months, with or without complications such as anemia, hypocalcemia and hyperphosphatemia, or other signs of chronicity such as abnormalities of kidney size or structure on ultrasound.
Hypertension, diabetes, dyslipidemia, obesity, smoking and alcohol use were considered as traditional CVRF. Hypertension was defined as either blood pressure > 140/90 mmHg or use of antihypertensive medications. Diabetes mellitus was defined as either a history of diabetes, fasting glucose ≥ 7.0 mmol/L, HbA1c ≥ 7%, or use of anti-diabetic medications. Dyslipidemia was defined as any abnormality of plasma lipid concentration or treatment with a statin. Lipid abnormalities were considered as: total cholesterol > 2.40 g/l, LDL > 1.60 g/l, HDL < 0.40 g/l, triglycerides > 2 g/l [36].
BMI was calculated as weight/(height)2. Obesity was defined as a BMI ≥ 30 kg/m2, and abdominal obesity as a waist circumference > 94 cm in men and > 80 cm in women, or a waist-to-hip ratio > 1 in males and > 0.85 in females. Tobacco use was defined as a history of smoking within the last 6 months.
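The two obesity definitions above can be expressed directly (a small Python sketch; the example measurements are hypothetical):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def obesity_flags(weight_kg, height_m, waist_cm, hip_cm, male):
    """Apply the study's cutoffs: BMI >= 30 for obesity; waist > 94/80 cm
    or waist-to-hip ratio > 1.0/0.85 (men/women) for abdominal obesity."""
    waist_cut, whr_cut = (94.0, 1.0) if male else (80.0, 0.85)
    return {
        "bmi": round(bmi(weight_kg, height_m), 1),
        "obese": bmi(weight_kg, height_m) >= 30.0,
        "abdominal_obesity": (waist_cm > waist_cut
                              or waist_cm / hip_cm > whr_cut),
    }
```

Note that the two definitions can diverge sharply: in this cohort, abdominal obesity (79.5%) was far more frequent than BMI-defined obesity (23.17%).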
Alcohol use was defined as a history of average alcohol consumption greater than 14 bottles of beer or equivalent a week for men, and greater than 7 bottles of beer or equivalent a week for women, within the last 6 months [37].
Non-traditional cardiovascular risk factors were: anemia, hyperuricemia, proteinuria and a high calcium-phosphate product.
Proteinuria was defined as urinary protein excretion > 150 mg/g [15]. Hyperuricemia was defined as a serum uric acid level > 70 mg/l in men and > 60 mg/l in women [38], or use of hypouricemic agents. Anemia was defined as a hemoglobin of less than 13 g/dl in males and less than 12 g/dl in females [15]. The calcium-phosphorus product was considered high if Ca × PO4 > 55 mg2/dl2.
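Applying the non-traditional risk factor cutoffs above is mechanical (a Python sketch; the laboratory values in the example are hypothetical):

```python
def nontraditional_cvrf(upcr_mg_g, uric_mg_l, hb_g_dl,
                        calcium_mg_dl, phosphorus_mg_dl, male):
    """Flag each non-traditional CVRF using the study's definitions."""
    return {
        "proteinuria": upcr_mg_g > 150,                        # mg/g
        "hyperuricemia": uric_mg_l > (70 if male else 60),     # mg/l
        "anemia": hb_g_dl < (13 if male else 12),              # g/dl
        "high_ca_x_p": calcium_mg_dl * phosphorus_mg_dl > 55,  # mg^2/dl^2
    }
```

Summing the flags for a patient gives the non-traditional contribution to the per-patient CVRF count (mean 5.19 in this cohort when traditional factors are included).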
CVD was defined as a history of stroke, peripheral artery disease, coronary heart disease (CHD), left ventricular hypertrophy (LVH), heart failure (HF) or arrhythmia.
Peripheral artery disease was defined as prior radiological confirmation of atherosclerosis of limb vessels or the absence of lower limb pulses. Coronary heart disease was defined as a history of myocardial infarction, ST segment abnormalities or abnormal Q waves on ECG. LVH was defined using either the Sokolow-Lyon criteria or the Cornell voltage criteria on ECG [35]. HF was defined as any systolic or diastolic dysfunction of the heart confirmed by a physician, with or without an ejection fraction less than 50%. Arrhythmia was considered as a history of atrial fibrillation with current medication, or ECG signs of atrial fibrillation.
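As a sketch, the two ECG voltage criteria named above can be coded as follows. The common formulations are assumed here: Sokolow-Lyon positive if S in V1 plus the taller of R in V5/V6 is at least 35 mm, and Cornell positive if R in aVL plus S in V3 exceeds 28 mm in men or 20 mm in women; these cutoffs are quoted from memory and the example amplitudes are hypothetical.

```python
def lvh_on_ecg(s_v1, r_v5, r_v6, r_avl, s_v3, male):
    """Voltage criteria for LVH; all amplitudes in mm (1 mm = 0.1 mV)."""
    sokolow_lyon = s_v1 + max(r_v5, r_v6) >= 35
    cornell = r_avl + s_v3 > (28 if male else 20)
    return {"sokolow_lyon": sokolow_lyon,
            "cornell": cornell,
            "lvh": sokolow_lyon or cornell}
```

Either positive criterion counts as LVH, matching the "either/or" definition used in the study.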
Statistical Analysis
Data were analyzed with EPI Info version 7 software. Continuous variables are presented as means (standard deviation) when the distribution was symmetrical, or medians (25th-75th IQR) when skewed. Categorical data are expressed as frequencies, proportions and percentages. Comparisons between proportions were done using the chi-square test or Fisher's exact test where appropriate.
Means were compared using the t-test for two means and the ANOVA test for more than two means. The degree of association between qualitative variables was evaluated by estimating odds ratios. The level of statistical significance was set at a p value < 0.05.
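The association measures used here reduce to simple 2×2-table arithmetic (a Python sketch with made-up counts):

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 degree of freedom) for the same table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

With 1 degree of freedom, a chi-square statistic above 3.84 corresponds to p < 0.05, the significance level used in the study.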
Prevalence of CVRF, CVD and treatment
The mean number of CVRF in the study population was 5.19 ± 1.64 (Figure 2), and the number increased, although not significantly, with the severity of CKD (Table 2). The mean number of non-traditional CVRF increased significantly with the severity of CKD (Table 2).
Discussion
In the present study, including 83 patients with CKD stage 3-5 ND in Cameroon, we found that more than 2/3 of participants presented at least one CVD, with LVH, CHD and HF being the most frequent. The prevalence of CVD, LVH and CHD increased with the severity of CKD. The mean number of CVRF was 5.19 ± 1.64. Hypertension (90.3%), abdominal obesity (79.5%), anemia (78.3%), hyperuricemia (69.8%), dyslipidemia (69.8%), proteinuria (44.5%) and diabetes (42.1%) were the most frequent CVRF. The management of these factors was appropriate for the majority, but control was poor, especially for blood pressure and anemia.
Prevalence CVRF
We found a mean number of 5 CVRF per patient, with traditional CVRF accounting for over two-thirds of the risk burden. When only traditional factors were considered, there was a mean number of 3 traditional CVRF, higher than the 1-2 factors reported in some high-income countries [41] [42]. Early presentation of patients for medical care, universal access to healthcare and less physician inertia towards therapeutic guidelines may account for the lower numbers in those countries [41] [42] [43]. Hypertension was the most common CVRF; it constitutes the leading cause of CKD in sub-Saharan Africa [26] [30] [31] as well as a complication of it [44], which could explain the higher prevalence of coronary artery disease (CAD) and LVH in these patients with CKD, as noted elsewhere with rates ranging from 87% to 90% [6]. Obesity was present in 23.17% of patients, higher than the mean urban prevalence reported in Cameroon [45]; this may reflect the rising epidemic of obesity observed in Africa and worldwide [46]. Obesity is also the main risk factor for hypertension and diabetes, the 2 leading causes of CKD in our setting [29]. Few studies have reported on the prevalence of obesity in the CKD population, with rates varying from 16% to 38% depending on the method of diagnosis and the definition [28] [42] [47].
We found anemia, hyperuricemia and proteinuria to be common non-traditional CVRF in this CKD population. Few studies include non-traditional CVRF in computing the number of factors in CKD [48]. As expected, anemia and proteinuria increased with the severity of CKD. The prevalence of hyperuricemia was high, with about half of these patients at CKD. Similarly, Chonchol et al. reported an increase in the prevalence of hyperuricemia with GFR decline [49].
The prevalence of proteinuria in CKD varies worldwide with the definition used and the clinical characteristics of the study population. The prevalence of proteinuria in the present study was 44.5%. This is higher than in most studies in the literature, which range from 3% to 37% [50], but lower than the prevalence found in other studies in Africa [28] [47].
Prevalence of CVD
More than 2/3 of participants presented at least one CVD, and the rates increased significantly with the severity of CKD; this is in consonance with the findings in the literature [2] [8] [10]. CVD is the leading cause of morbidity and mortality in CKD patients [11], with reported prevalences ranging from 26% to 48% [4] [5] [6] [7]. LVH and CHD were the most common CVD, similar to other studies in SSA [50] [51]. The high prevalence of hypertension and anemia, and of patients with advanced CKD, in our study may explain this high prevalence. About 12% of the population had heart failure, with 2/3 of these individuals in CKD stage 5.
Much higher rates were reported in the Western world [52]; the difference may be due to diagnostic criteria. Our prevalence of CHD (30%) was higher than that reported in patients on haemodialysis in Cameroon [53], which may suggest that the majority of patients with CHD die before reaching end-stage renal disease. Also, our diagnosis of CHD was mostly made from ST segment changes on ECG, which are not specific to ischemia and could inflate the prevalence.
Management
We found that 2/3 of patients with proteinuria and 61% with hypertension were on RAAS blockers. Targeting dyslipidemia is fundamental in managing cardiovascular complications in CKD [19]. About half of participants with dyslipidemia were on statins. Higher rates were shown in a population in Taiwan [39]. This low rate of statin use in our study could be due to the high cost of the drug, which is an out-of-pocket payment in our setting [56]. The control of LDL-cholesterol was optimal in half of the patients on statins, and the rate decreased significantly with the severity of CKD.
Higher control rates were reported by Akpan et al. [55]. We found that just 6% of patients met their target hemoglobin. This is in conformity with the findings of Nicola et al. [43] in Italy. The control rate of anemia, a risk factor for LVH, is poor, and one main reason is the high cost of erythropoiesis-stimulating agents, which are an out-of-pocket payment in our setting [20]. We observed that 72% of patients with hyperuricemia were on a hypouricemic agent. There are no guidelines suggesting that asymptomatic hyperuricemia should be treated. However, studies have shown that hyperuricemia is a factor of progression of CKD and that treatment reduces this progression [57].
Strength and Limitations
This study has some limitations. We used ECG criteria to determine LVH and did not confirm with echocardiography. Blood pressure control was defined using office values, whereas ambulatory blood pressure monitoring would have been preferable. We used self-report and review of medical records to define CVD, which may have missed a small group of participants with undiagnosed CVD. Finally, this was a single-center study and the results cannot be generalized. Despite these limitations, this is the first study to provide basic information on the burden of CVRF and CVD in CKD patients in our setting.
Conclusion
In conclusion, this study found that CKD patients have a high prevalence of CVD and CVRF. Adequate control of these risk factors is important to reduce the mortality of patients, but control was suboptimal in our setting, especially for hypertension and anemia. This basic evaluation may serve as a foundation for further studies with a larger sample and follow-up data.
|
2020-07-16T09:02:35.612Z
|
2020-07-06T00:00:00.000
|
{
"year": 2020,
"sha1": "c3c29fc4962adeb5dd1ff423aba3d86508379f77",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/pdf/ojneph_2020070614034577.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36936213880fe21fede63a58184aea123dc968e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235289318
|
pes2o/s2orc
|
v3-fos-license
|
Influence of information integrity control of the unified model of the automated information system of commercial enterprises on conditional profit
A unified model of the automated information system (AIS) at commercial enterprises (UMAIS CE) is examined in the article as a protected AIS for the support of the commercial activity of enterprises, with a common structure and configuration of equipment. Information integrity control (IIC) is an integral part of this model, and its functioning has an impact on the contingent profit of the commercial enterprise considered within the framework of UMAIS CE. A mathematical model of IIC UMAIS CE is presented in this context, as well as a number of computing experiments that allow the determination of a strategy for the effective use of IIC UMAIS CE and specific ways to increase the contingent profit of the commercial enterprise.
Scope of the problem
A commercial enterprise (CE) is a juridical entity having profit-seeking as the main objective of its activity and sharing this profit between its members: founders and employees. In most cases, the main kind of its activity is the sale of products and/or services.
For the most efficient organization of its activity, a CE actively applies an automated information system (AIS). An AIS is a system consisting of personnel and a complex of devices automating its activity, required for the execution of its functions with the use of information technologies.
Security of the information processed in an AIS CE is required because it may involve commercially classified information and/or personal data. Analysis of the system and application software employed in AIS CE demonstrated that this software, as a rule, contains many vulnerabilities. The presence of "zero-day" vulnerabilities is also possible, for which information is absent from the corresponding databases, for example, the CVE database. The software is further characterized by the absence of checks for non-declared facilities [1], which implies a high probability of realization of a number of threats to information security, directed both at the theft of information containing commercially classified data and at the disruption of operation of software and hardware components. The main shortcomings of the typical utility intended for integrity control of the considered AIS, revealed during operation of the system, consist in its inability to counteract an attack (it only records the fact of such an attack), the absence of recovery means after unauthorized modification of the information, and the absence of a convenient interface for the interaction of the AIS administrator with the utility. Taking into account the quantitative and qualitative growth of the number of threats to the information security of AIS CE, it was proposed to develop the latter on the basis of the reference model of the protected automated system (RMPAS), which regulates a security model (SM) providing flexible, convenient and secure differentiation of access to the data. RMPAS is presented in [2].
In RMPAS, the organization of control of information integrity (CII) is distinguished by the initialization of a control checkout at each level of RMPAS for each discretion level, which reduces the efficiency of client service and thus the contingent profit of the CE. Therefore, for the adaptation of CII RMPAS, its optimization is required, directed at increasing the contingent profit of the CE under a sufficient control level.
In fact, for the adaptation of RMPAS to the specific features of an AIS CE, the development of a corresponding model of CII is also required. To this end, let us consider the unified model of the automated information system of commercial enterprises (UMAIS CE) as a model of an AIS CE, i.e. a protected AIS CE with a generalized structure and configuration of equipment, which provides versatility of the obtained results.
UMAIS CE is characterized by the presence of control of information integrity (CII), which is an element of the subsystem providing the integrity of the operational environment (software tools and processed data). CII is a tool for testing the operational environment, intended for a periodic comparison of its current state with the reference one.
The aim of our work is to present the results of computing experiments based on a mathematical model of CII UMAIS CE, allowing the determination of a strategy for the efficient use of CII UMAIS CE and certain ways to increase the contingent profit of a commercial enterprise.
In order to develop the mathematical model of CII UMAIS CE, let us use criteria for the quality of its functioning, allowing optimization directed at maximum profit earning by the CE under sufficient control. These criteria were specially developed for the adaptation of CII RMPAS to the specific features of a CE. The main one consists in the necessity for the CE to earn contingent profit. The aim of controlling the CII service is the same for different AIS: support of a reasonable trade-off between meeting the requirements on AIS information security and the requirements of its mission.
According to [2], the following characteristics are applied to CII RMPAS, each assessed by criteria. Efficiency of CII means its ability to provide control of the integrity of the information to be checked. The criterion of this characteristic is the functionality of CII (D_f), which corresponds to the completeness of its set of functions from the viewpoint of using it as a software facility.
Aggressivity of CII is its ability to support the needed level of efficiency of UMAIS CE relative to its mission. The criteria of this characteristic are: resource aggressivity of CII (D_ra), meaning additional consumption of hardware resources, and functional aggressivity of CII (D_fa), the compatibility of CII with the information technology in the UMAIS CE system.
Ease of use of CII (D_u) is the effort of the personnel required to support its functioning.
Two new criteria, distinguished by their scientific novelty, are presented below. They are described in detail in [3].
Contingent profit (D_p) is a criterion expressing the average profit from a received customer order. It allows controlling the effect of the additional time resource consumption of CII on the contingent profit; it is part of the aggressivity characteristic of CII. Sufficiency of integrity control (D_dkc) is a criterion that corresponds to the ability of CII UMAIS CE to perform the specified CII functions; it is part of the efficiency characteristic of CII.
The first four criteria are named statistical ones. They take Boolean values: "1" is an acceptable quality of control, while "0" is an unacceptable one. The two new criteria are called dynamic ones. Their results always take positive values, and a greater value of either is interpreted as a better quality of CII by the given criterion. The problem of optimization of CII UMAIS CE is therefore formulated in terms of these criteria. A probabilistic model is then considered for the analysis of the dynamic criteria of CII UMAIS CE, in the form of an absorbing CEE. It is associated with the transitions between UMAIS CE levels where the resources are shared under hierarchical restructuring [4], as regulated by RMPAS. The initial state corresponds to the start of CII after authorization of a discretionary access (dwelling at the identification level). The final state implies getting access to the data (dwelling at the informational level). For details see [5].
Let us introduce a criterion of dynamic efficiency of CII that can be used to express the dynamic criteria. The following definitions are used below. P_s is the expectation value of the profit from a regular incoming order, in case it is actually placed, calculated on the basis of sales data for previous periods of time.
t_dd(t) is a random value of the total time of CII processing during the common discretionary access for a certain order. t_p_max is the maximum permissible limit of the time of integrity-control processing during the common discretionary access for a certain order; it is a random value with an exponential distribution with mean t_mp, and it means the maximal client waiting time after which the order will be cancelled. 2) Next, we specify an inhomogeneous distribution of the information amount when 1 0.86. 3) Then, we specify the distribution of the information amount in accordance with its assumed distribution over RMPAS levels.
The result is quite foreseeable: when the amount of information intended for the immutability check-up increases, the accessibility of this information is reduced.
The results of the experiment that have practical significance are formulated as conclusions: 1) If K_max is increased, then the criterion of dynamic efficiency diminishes (an increase of the integrity-control sufficiency criterion D_dkc and a decrease of the contingent profit criterion D_p), since with an increase of the amount of information checked for immutability, the probability of concealment of an integrity breach is reduced while the time required for the check-up increases. Experiment 3: we now define ways of increasing the value of the criterion "Contingent profit" of CII UMAIS CE, and elucidate how one can considerably increase the value of this criterion and, hence, the actual profit of the CE, with only an insignificant loss of immunity.
The value of the criterion D_dkc(t) is regulated by the parameter K_max. Using the above-mentioned program for simulation of CII UMAIS CE, let us consider the contingent profit of the CE for one working day without the application of the optimization procedure, i.e. for K_max = 1, and with its application, i.e. for 0 < K_max < 1. Let us apply the following formulas, where P_z is the expectation value of profit from a regular incoming order (this order can be placed or cancelled); P_s is the expectation value of profit from a regular incoming order in case it is actually placed, calculated on the basis of sales data for previous periods; P_d is the expectation value of profit per one working day of the CE; and N_z is the expectation value of the number of incoming orders per working day of the commercial enterprise, calculated on the basis of sales data for previous periods.
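As a hedged illustration of these definitions, the following Monte Carlo sketch estimates the contingent profit per day: an order is placed only when the total CII processing time t_dd stays below the client's exponentially distributed patience t_p_max (mean t_mp). All parameter values are hypothetical, and the CII processing time is simplified to a constant per setting of K_max.

```python
import random

def simulate_day_profit(n_orders, profit_per_order, mean_patience_s,
                        cii_time_s, seed=1):
    """Return the contingent profit P_d for one working day: an order
    contributes P_s only when the CII processing time is shorter than
    the client's exponential patience (mean t_mp)."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(n_orders):
        patience = rng.expovariate(1.0 / mean_patience_s)
        if cii_time_s < patience:   # the order survives the integrity check
            profit += profit_per_order
    return profit

# Full integrity check (K_max = 1) vs. a reduced check that shortens t_dd.
full = simulate_day_profit(200, 10.0, 30.0, cii_time_s=20.0)
reduced = simulate_day_profit(200, 10.0, 30.0, cii_time_s=5.0)
```

With the same random seed, the reduced check can only retain more orders, reproducing the paper's trade-off: lowering K_max sacrifices some control sufficiency D_dkc but raises the contingent profit D_p.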
|
2021-06-03T01:13:10.757Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "4744e0dc10ef924fda3fb5a89789c83779c172e2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1742-6596/1902/1/012065",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4744e0dc10ef924fda3fb5a89789c83779c172e2",
"s2fieldsofstudy": [
"Computer Science",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
267395296
|
pes2o/s2orc
|
v3-fos-license
|
Non-exposed endoscopic wall-inversion surgery with one-step nucleic acid amplification for early gastrointestinal tumors: Personal experience and literature review
BACKGROUND Laparoscopic and endoscopic cooperative surgery is a safe, organ-sparing surgery that achieves full-thickness resection with adequate margins. Recent studies have demonstrated the safety and efficacy of these procedures. However, these techniques are limited by the exposure of the tumor and mucosa to the peritoneal cavity, which could lead to viable cancer cell seeding and the spillage of gastric juice or enteric liquids into the peritoneal cavity. Non-exposed endoscopic wall-inversion surgery (NEWS) is highly accurate in determining the resection margins to prevent intraperitoneal contamination because the tumor is inverted into the visceral lumen instead of the peritoneal cavity. Accurate intraoperative assessment of the nodal status could allow stratification of the extent of resection. One-step nucleic acid amplification (OSNA) can provide a rapid method of evaluating nodal tissue, whilst near-infrared laparoscopy together with indocyanine green can identify relevant nodal tissue intraoperatively. AIM To determine the safety and feasibility of NEWS in early gastric and colon cancers and of adding rapid intraoperative lymph node (LN) assessment with OSNA. METHODS The patient-based experiential portion of our investigations was conducted at the General and Oncological Surgery Unit of the St. Giuseppe Moscati Hospital (Avellino, Italy). Patients with early-stage gastric or colon cancer (diagnosed via endoscopy, endoscopic ultrasound, and computed tomography) were included. All lesions were treated by NEWS procedure with intraoperative OSNA assay between January 2022 and October 2022. LNs were examined intraoperatively with OSNA and postoperatively with conventional histology. We analyzed patient demographics, lesion features, histopathological diagnoses, R0 resection (negative margins) status, adverse events, and follow-up results. Data were collected prospectively and analyzed retrospectively. 
RESULTS A total of 10 patients (5 males and 5 females) with an average age of 70.4 ± 4.5 years (range: 62-78 years) were enrolled in this study. Five patients were diagnosed with gastric cancer. The remaining 5 patients were diagnosed with early-stage colon cancer. The mean tumor diameter was 23.8 ± 11.6 mm (range: 15-36 mm). The NEWS procedure was successful in all cases. The mean procedure time was 111.5 ± 10.7 min (range: 80-145 min). The OSNA assay revealed no LN metastases in any patients. Histologically complete resection (R0) was achieved in 9 patients (90.0%). There was no recurrence during the follow-up period. CONCLUSION NEWS combined with sentinel LN biopsy and OSNA assay is an effective and safe technique for the removal of selected early gastric and colon cancers in which it is not possible to adopt conventional endoscopic resection techniques. This procedure allows clinicians to acquire additional information on the LN status intraoperatively.
INTRODUCTION
The most effective treatment for patients with resectable gastrointestinal submucosal tumors (SMTs), including gastrointestinal stromal tumors [1][2][3][4] and early gastrointestinal cancer [5], is complete surgical resection. Segmental resection is acceptable based on oncologic principles [6][7][8]. Endoscopic submucosal dissection (ESD) is a well-studied treatment for adenomas and early cancer of the gastrointestinal tract [9]. In some cases, however, it has low rates of microscopically margin-negative (R0) resection. ESD for lesions involving the muscularis propria remains controversial due to the increased risk of perforation [10][11][12][13]. Adverse events have also been described for lesions located in anatomical positions that are difficult to access.
Previous studies reported that laparoscopic wedge resection for gastric SMTs was oncologically feasible and safe, with decreased blood loss and length of hospital stay [14][15][16] compared to other classic gastric resections. Using the conventional laparoscopic approach, it is difficult to establish the exact line of resection for intraluminally growing gastrointestinal tumors. With limited or overabundant resections, there is a risk of postoperative functional problems or of infiltrated resection margins with postoperative recurrence [17,18]. Endoscopic control of the resection margin is essential for safe local resection in early gastrointestinal cancers [19]. Narrow-band imaging (NBI) is a digital optical method of image-enhanced endoscopy that enhances the vessel and surface structure of colorectal lesions using 415 nm and 540 nm wavelength filters [20][21][22]. Several NBI classifications for colorectal lesions have been proposed. These classifications were unified as the NBI International Colorectal Endoscopic (NICE) classification for non-magnifying endoscopy in 2009 and the Japan NBI Expert Team (JNET) classification for magnifying endoscopy in 2014 [20,[23][24][25][26][27].
The NICE classification divides lesions into three groups based on histology: hyperplastic or sessile serrated polyp (type 1), adenoma (type 2), and deep submucosal invasive cancer (type 3). Three criteria are used: surface pattern, vessel pattern and colour [26,27]. Japanese endoscopists separated the NICE type 2 category into type 2A (low-grade adenoma) and type 2B (high-grade adenoma and submucosal cancer) using magnifying endoscopy, and developed the JNET classification as an advanced version of the NICE classification. When magnifying endoscopy is available, the JNET classification is better at selecting the correct therapeutic strategy based on precise diagnosis [20].
To overcome the limitations of endoscopic resections and wedge resections, Hiki et al [28] combined the ESD technique, which properly determines the resection margin, with laparoscopic full-thickness excision, developing the hybrid technique of laparoscopic and endoscopic cooperative surgery (LECS). LECS preserves the vascularization and innervation of the wall and the organ's functionality, with a better postoperative quality of life for the patient.
Diagnostic imaging assessment of lymphadenopathy in gastric and colorectal cancers (CRCs) is challenging. Each imaging modality faces specific intrinsic limitations that hinder detection and prevent a correct evaluation of important lymph node (LN) details; therefore, only the size criterion is available to identify lymphadenopathies [45]. A new semi-automated diagnostic method called one-step nucleic acid amplification (OSNA) has been developed to detect LN metastases. OSNA is based on reverse transcription-loop-mediated isothermal amplification [46,47] to amplify cytokeratin 19 (CK19) mRNA. The OSNA test has been used successfully in numerous malignancies [48]. Yaguchi et al [49] defined the CK19 mRNA cutoff for identifying LN metastases with the OSNA assay in gastric cancer [49-52]. Kumagai et al [50] showed that OSNA could be advantageously used to diagnose LN metastases in advanced gastric cancer.
In this study, we assessed the safety and feasibility of non-exposed endoscopic wall-inversion surgery (NEWS) in early gastric and colon cancers while adding rapid intraoperative LN assessment with OSNA.
MATERIALS AND METHODS
A cohort study approach was taken, relying on data from a prospectively maintained database of consecutive patients undergoing elective NEWS for early gastric and colon cancers at St. Giuseppe Moscati Hospital of Avellino, Italy. The database included preoperative, intraoperative, and postoperative data. Inclusion criteria were: (1) Adult age; (2) diagnosis of early gastric or colon cancer; and (3) eligibility for the NEWS procedure. The exclusion criterion was allergy to indocyanine green (ICG). In total, 10 patients were included in this study.
This study was carried out in continuity with our previous study [59] and conducted according to the ethical principles of the Institution, following the Declaration of Helsinki, with approval by the Institutional Review Board of St. Giuseppe Moscati Hospital (Approval No. 201801). Written informed consent for the publication of this article was granted by the study participants.
Surgical-endoscopic technique and histopathological evaluation
Immediately after access to the abdominal cavity by laparotomy or laparoscopy, a flexible endoscope was passed to the tumor site (Figure 1) to allow a 0.5 mL submucosal injection of ICG (2.5-5.0 mg/mL) at four points around the tumor (Figure 2). The nodal basin was then examined under near-infrared (NIR) illumination (Karl Storz SE & Co., Tuttlingen, Germany). In the case of the stomach, we proceeded to excise identifiable nodes in one or two nodal basins (Figure 3). In the case of the colon, any identifiable sentinel LNs were excised and submitted immediately to the pathology department. LNs harvested in this way were cleaned of fat. LNs weighing between 0.05 g and 0.60 g or with a cross diameter of less than 8 mm were processed exclusively with the OSNA technique. LNs weighing more than 0.60 g or with a diameter over 8 mm were dissected and analyzed using both the OSNA technique and hematoxylin and eosin (H&E). In the latter case, LNs were divided at 2-mm intervals, and nonadjacent blocks were alternately subjected to definitive histopathological examination with H&E or to OSNA. Our technique for the preparation of materials for OSNA and the interpretation of results has already been described in our previous study [59] (Figure 4). The patient's total tumor load (TTL) was the sum of the CK19 mRNA tumor copies/μL of each LN.
OSNA results with CK19 mRNA above 250 copies/μL were designated positive, and those with fewer than 250 copies/μL were considered negative [59]. The result of the sentinel LN analysis was communicated intraoperatively to the surgeon. All patients underwent the NEWS procedure.
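The per-node readout and TTL bookkeeping described above amount to a threshold test and a sum. A minimal Python sketch, with the 250 copies/μL cutoff taken from the text and the node readings invented for illustration (a node at exactly 250 copies/μL is treated as negative here, since the text leaves that boundary unspecified):

```python
# Cutoff from the text: > 250 CK19 mRNA copies/uL -> metastasis-positive node.
CUTOFF_COPIES_PER_UL = 250.0

def classify_node(ck19_copies_per_ul: float) -> str:
    """Label one lymph node from its CK19 mRNA copy number (OSNA readout)."""
    return "positive" if ck19_copies_per_ul > CUTOFF_COPIES_PER_UL else "negative"

def total_tumor_load(node_values: list[float]) -> float:
    """TTL = sum of CK19 mRNA copies/uL over all analyzed nodes."""
    return sum(node_values)

# Hypothetical readings for three nodes of one patient.
nodes = [120.0, 80.0, 40.0]
print([classify_node(v) for v in nodes])  # ['negative', 'negative', 'negative']
print(total_tumor_load(nodes))            # 240.0
```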
In this procedure, mucosal markings are placed around the tumor, followed by serosal markings via laparoscope under endoscopic navigation. A longitudinal seromuscular incision is performed laparoscopically around the serosal markings, taking special care not to create a full-thickness opening in the wall. The pending submucosal vessels are coagulated laparoscopically. The seromuscular layers are sutured transversely (Figure 5) to avoid postoperative strictures, particularly in patients with colon cancer, and the lesion is inverted into the lumen. Finally, circumferential mucosal and submucosal incisions are made endoscopically around the inverted tumor (Figure 6). The resected tumor is retrieved, and the mucosal defect is closed with several endoscopic clips.
Patients with early gastric cancer and early colon cancer were encouraged to begin drinking and eating on postoperative day 1 after the NEWS procedure. If the postoperative course was uneventful, patients could be discharged on postoperative day 1 or 2.
Subsequently, the records of each patient were discussed in a multidisciplinary conference of surgeons, pathologists, and oncologists, considering the definitive examination of the surgical specimen and the result of the sentinel LN analysis. The final histology of the surgical specimen and harvested LNs was always concluded based on the H&E analysis.
RESULTS
A total of 10 patients (5 males and 5 females) with an average age of 70.4 ± 4.5 years (range: 62-78 years) were observed in this study. Five patients were diagnosed with early-stage gastric cancer, and five with early-stage colon cancer. The mean tumor diameter was 23.8 ± 11.6 mm (range: 15-36 mm). The patient characteristics are shown in Table 1.
The NEWS procedure was successful in all cases. The mean procedure time was 111.5 ± 10.7 min (range: 80-145 min). The OSNA assay revealed no LN metastases in any patient. The diagnostic accuracy of OSNA in predicting LN status based on the sentinel LN concept, compared with the postoperative histological examination, was 100%. Complete histological resection (R0) was achieved in 9 (90.0%) patients. The average post-procedure length of hospitalization was 3.1 ± 4.8 d (range: 1-8 d).
There was no recurrence during the follow-up. The mean follow-up was 6.3 ± 4.2 mo.
Only 1 patient experienced a complication: in the gastric cancer group, 1 patient was treated conservatively for an intra-abdominal fluid collection (Table 2). One patient who underwent removal of a lesion of the proximal transverse colon presented positive focal margins at histological examination and therefore underwent a right hemicolectomy. The definitive histological examination showed no residual tumor foci or LN metastases (Table 3).
The average LN count was 2.5 ± 2.2, ranging from 1 to 5 nodes per patient. Five patients had early gastric cancer, and five had early CRC confirmed by final histology.
DISCUSSION
In this article, we investigated the possibility of adding to NEWS (for early gastrointestinal cancers) an intraoperative study of LNs, using fluorescence with ICG to identify the LN basin of the tumor and OSNA for the analysis of the excised LNs. We also performed a literature review on the use of NEWS in early gastrointestinal cancers, on sentinel LN and LN basin studies in gastric and colonic cancers, and on the intraoperative use of OSNA.
NEWS in early gastrointestinal cancers
NEWS has been reported as a novel full-thickness resection technique without wall perforation. It is primarily used to treat early gastrointestinal cancer [30,32-34,60,61]. An advantage of NEWS is that all the gastric or intestinal layers can be resected precisely under direct visualization by laparoscopy and endoscopy; as a result, wall resection is limited and oncological principles are respected. The size of the tumor is one of the main limitations of NEWS: lesions should be 30 mm or less, as the resected specimen is retrieved perorally or transanally. The LECS concept, initially adopted for gastric lesions, can also be applied to colorectal tumors [38]. The appropriate indications of LECS for colorectal tumors are: (1) Intramucosal carcinoma (Tis) and adenoma with high-grade atypia accompanied by severe fibrosis of the submucosal layer (tumor recurrence after endoscopic or surgical resection); (2) Tis and adenoma with high-grade atypia involving the appendix or a diverticulum; or (3) intraluminal or intramural growth-type SMTs. LECS for the colon is a safe and feasible procedure [62,63], with a 0% rate of residual tumor or local recurrence for colorectal tumors. Currently, modified LECS procedures can be applied to cases of early gastrointestinal cancer (within the indication for endoscopic mucosal resection/ESD) that would be technically difficult to treat with endoscopic mucosal resection/ESD. Innovative organ-preserving procedures such as transanal minimally invasive surgery, LECS, and modified LECS allow adequate resection of early-stage tumors without extensive interventions [64].
Sentinel LN and nodal basin
The LN stage remains crucial for oncological treatment. Despite improvements in imaging techniques, the identification of LN metastases in gastric cancer or CRC is still unsatisfactory [65-67]. Sentinel LN biopsy for early gastric cancer is reportedly helpful when deciding whether to perform LN dissection, and LECS combined with sentinel LN dissection has been attempted for early gastric cancer as well [31,68-70]. The sentinel LN is the first LN that receives lymphatic drainage from the primary tumor and is a predictor of the pathological status of all other LNs. Miwa [71] studied lymphatic basin dissection in gastric cancer as a method of sentinel LN biopsy: a dye tracer injected near the gastric lesion drains into and stains the lymphatic system specific to the tumor; the stained lymphatic system is then dissected en bloc, and the sentinel LN analysis is performed.
Multiple reports have investigated lymphatic basin dissection as a specific form of sentinel LN biopsy in stomach cancer [72-75]. The lymphatic basin identified by dye mapping is excised en bloc, and the sentinel LNs are retrieved ex vivo after dissection of the basin and sent for rapid intraoperative diagnosis. If a positive LN is found after sentinel LN biopsy, a D2 gastrectomy is added. If the LNs are negative for metastases, further dissection is avoided, the gastric vasculature outside the basin is preserved, and the gastric resection area is minimized [76-79].
In recent years, a number of clinical trials in Japan and South Korea have shown that the safety and therapeutic effect of sentinel LN navigation surgery in early gastric cancer are acceptable [74,80-84]. A multicenter study in Japan showed that the detection rate of sentinel LNs in early gastric cancer reached 97.5%, with 93% sensitivity and 99% accuracy [75]. A multicenter randomized controlled trial (SENORITA) conducted in South Korea failed to demonstrate the noninferiority of the sentinel LN biopsy group to the radical gastrectomy group for 3-year disease-free survival. However, 3-year disease-free survival and 3-year overall survival did not differ after rescue surgery in cases of recurrence or metachronous gastric cancer, and the sentinel LN biopsy group had better long-term quality of life and nutrition than the radical gastrectomy group [76,77].
Goto et al [34] reported the use of NEWS for early gastric cancer in combination with sentinel LN navigation surgery (SNNS). The lymphatic basin is an essential concept in SNNS.
OSNA in gastric cancer and CRC
Recent developments in the OSNA test allow metastases to be identified rapidly over the entire LN. The OSNA assay, already extensively studied in breast cancer [56], is under evaluation as an alternative diagnostic test for identifying secondary localizations in sentinel LNs in gastric cancer [65]. In the study by Yaguchi et al [49], the agreement rate between the OSNA test and H&E staining was 94.4%, and the sensitivity and specificity were 88.9% and 96.8%, respectively, with a CK19 mRNA cutoff value of 250 copies/μL. The multicenter study by Kumagai et al [50] showed that the OSNA test has the same precision as histological examination at 2-mm intervals in detecting LN metastases in stomach cancer. Shoji et al [85] showed that single-tracer sentinel LN mapping by ICG fluorescence imaging with intraoperative diagnosis by OSNA assay is feasible and safe.
In 1999, Joosten et al [86] introduced the sentinel LN concept in CRC to reduce false-negative results and study the importance of LN involvement. The main advantage of sentinel LN mapping in CRC is the identification of nodes with an increased risk of metastasis so that they can undergo further testing.
The OSNA test can also detect colorectal metastases in LNs based on CK19 levels within 20 min of removal [87-89]. Until recently, it was predominantly used to evaluate the entire LN basin, but its rapidity could make it a practical intraoperative test to guide decision-making.
Vogelaar et al [90] compared OSNA with a single H&E stain and with multilevel fine pathology (immunohistochemistry with pan-cytokeratin antibody staining) to identify metastases in the sentinel LNs of colon cancer patients (Table 3 reports the tumor size, number of retrieved lymph nodes, and clinical and anatomical parameters of the colonic lesions: tumor number 5; tumor size 24.0 ± 10.6 mm). OSNA and fine pathological examination were superior to a single H&E stain; additionally, combining the two methods gave a 46.5% upstaging rate. The study by Yamamoto et al [91] showed that the higher the sum of CK19 mRNA, the greater the number of histologically positive LNs. The median CK19 mRNA value was significantly lower in patients with three or fewer metastatic regional LNs than in patients with four or more positive LNs. The median TTL values of pN0, pN1 (1-3 positive LNs), and pN2 (4 or more positive LNs) were 1550 copies/μL (300-320000 copies/μL), 24050 copies/μL (250-890000 copies/μL), and 90600 copies/μL (7700-1635100 copies/μL), respectively; TTL increases with advancing LN stage. In the study by Aldecoa et al [92], TTL correlated with pT stage (P = 0.01) and tumor size (P < 0.01) in low-grade tumors, and classic high-risk factors correlated with TTL in patients with stage I-II colon cancer. These results show that the sum of CK19 mRNA obtained by the OSNA method is comparable with the current pathology diagnosis system and pave the way for a new molecular staging using OSNA based on the amount of CK19 mRNA rather than the number of LN metastases. Furthermore, TTL has been suggested to be related to a poor prognosis, worse disease-free survival, and other CRC risk factors [45,93,94], such as pN, pT, tumor grade, male sex, tumor size, and lymphovascular invasion.
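The pN groups that the TTL medians above refer to can be sketched as simple threshold logic; a hypothetical helper using the cutoffs stated in the text (pN0 = no positive LNs, pN1 = 1-3, pN2 = 4 or more):

```python
def pn_stage(positive_nodes: int) -> str:
    """Map the number of metastatic regional LNs to the pN group used above."""
    if positive_nodes == 0:
        return "pN0"
    if positive_nodes <= 3:
        return "pN1"
    return "pN2"

print([pn_stage(k) for k in (0, 2, 5)])  # ['pN0', 'pN1', 'pN2']
```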
The literature shows an association between the use of fluorescence to search for colorectal LNs and the use of OSNA to provide an accurate and rapid assessment of the oncological status of those specific LNs, which can give information on the entire LN basin. In the study by Shimada et al [58], there was high agreement between OSNA and histopathology; for this reason, the OSNA test can be considered a convenient, objective, and valuable alternative to the current pathological method for detecting LN metastases. Some studies have used intraoperative OSNA testing to evaluate its diagnostic accuracy and the time elapsed between surgery and postoperative adjuvant chemotherapy [59,95,96]. A rapid intraoperative test such as OSNA (which takes approximately 40 min [95]) that provides information on the status of the LNs in the tumor drainage basin can overcome the limitations of preoperative imaging in clinically LN-negative patients. Thanks to its intraoperative diagnostic accuracy and rapidity, the OSNA test may be indispensable for safely performing minimal resections in patients with early-stage gastrointestinal cancer. With more experience using this technology and further refinement of the technique, ICG analysis could be used intraoperatively in conjunction with OSNA to identify LN metastases and influence surgical decision-making.
In our previous study [37], we described the advantages and limitations of the OSNA technique. Some authors have proposed that CK19 positivity on immunohistochemistry of the primary tumor should be a prerequisite for OSNA use [37,73].
To our knowledge, this is the first study investigating the utility of intraoperative OSNA testing in the assessment of sentinel LN in patients with early-stage gastrointestinal cancer undergoing the NEWS procedure.
Study limitations
This was a single-center pilot study. As such, the number of patients was small, and the selected patients did not have LN metastases; it was therefore not possible to validate the sentinel LN technique.
At present, two methods are predominantly used to detect sentinel LNs: injection of a radioisotope colloid, with hot LNs identified by a gamma probe, and injection of a dye around the tumor, with sentinel LNs colored blue or green. The currently established double-tracer method (dye and radioisotope tracers) described in several studies [97-99] appears to increase the sensitivity for identifying true sentinel LNs. We used only a single-tracer method in this study. ICG has low allergenic potential, deep detection depth, high sensitivity, and a stable signal. Furthermore, the tight regulation and costs of radioactive substances limit the widespread use of the probe-guided method.
The SNNS concept has been established in early gastric cancer but is still controversial in CRC. Colorectal sentinel LN basins are not yet well defined, and the usefulness of sentinel LN basin dissection for curative resection needs to be investigated. In this situation, the effectiveness of SNNS combined with OSNA could not be determined in CRC.
CONCLUSION
NEWS is a feasible and safe technique for organ-sparing surgery in selected patients in centers with extensive experience in endoscopy, laparoscopy, and robotic surgery. ICG-NIR lymphangiography combined with sentinel LN OSNA analysis is feasible and may allow intraoperative prediction of LN status in patients with early colon or gastric cancer treated with the NEWS procedure. Furthermore, its implementation allows more precise staging. ICG-NIR lymphangiography and OSNA could be used to plan personalized surgery and lymphadenectomy in patients with early-stage cancers. Prospective multicenter studies with large patient cohorts are needed to provide definitive conclusions.
Research background
A growing number of studies in the literature concern endoscopic and surgical resections for organ preservation in early gastrointestinal neoplasms while respecting oncological principles. The study of lymph nodes (LNs) with one-step nucleic acid amplification (OSNA) is also the subject of numerous studies.
Research motivation
Organ-sparing endoscopic techniques and imaging do not currently allow an accurate LN study. LN biopsy with rapid intraoperative results during a laparoscopic and endoscopic cooperative approach can add important information.
Research objectives
This article aims to stimulate studies that can add further information on LN status in patients with early gastrointestinal cancer, who, if treated only with an endoscopic technique or with modified laparoscopic and endoscopic cooperative surgery, would have no information on LN status beyond radiological findings. Our study is the first to evaluate the utility of the intraoperative OSNA assay in assessing sentinel nodes (SNs) in patients with early-stage gastrointestinal cancer undergoing non-exposed endoscopic wall-inversion surgery (NEWS).
Research methods
This pilot study with a literature review is based on data collected prospectively from a database of patients undergoing elective NEWS for early gastrointestinal cancer at St. Giuseppe Moscati Hospital of Avellino, Italy. The database included preoperative, operative, and postoperative data. Inclusion criteria were adult patients with early gastric or colonic cancer eligible for the NEWS procedure. The exclusion criterion was allergy to indocyanine green (ICG).
Research results
A total of 10 patients were enrolled in this study, including 5 gastric and 5 colonic early-stage cancers. The NEWS procedure was successful in all cases. The OSNA assay revealed no LN metastasis in any patient. The diagnostic accuracy in predicting LN status based on the SN concept by OSNA, compared with the postoperative histological examination, was 100%. Histologically complete resection (R0) was achieved in 9 (90.0%) patients. An intra-abdominal fluid collection treated conservatively was observed in 1 (10.0%) patient of the gastric group. One patient who underwent removal of a lesion of the proximal transverse colon presented positive focal margins on definitive histological examination and therefore subsequently underwent a right hemicolectomy; the definitive histological examination showed no residual tumor foci or LN metastases. The mean follow-up was 6.3 ± 4.2 mo, and there was no recurrence during the follow-up period. Ours is a single-center pilot study with a small number of patients; as the selected patients were all without LN metastasis, it was not possible to validate the sentinel node technique.
Research conclusions
Our study is the first to analyze the utility of the intraoperative OSNA assay in sentinel node and nodal basin assessment in patients with early-stage gastrointestinal cancer undergoing the NEWS procedure. NEWS is a feasible and safe technique for organ-sparing surgery in selected patients. Additionally, combining NEWS with intraoperative OSNA will allow more precise staging.
Research perspectives
OSNA and ICG near-infrared lymphangiography could be used to develop customized surgery and lymphadenectomy in patients with early cancers. Prospective multicenter studies with large patient cohorts are needed to provide definitive conclusions.
Figure 2
Figure 2 Endoscopic injection of indocyanine green at cardinal points 1 cm from the margins of a prepyloric early gastric cancer lesion.A: First injection of indocyanine green (ICG); B: Second injection of ICG; C: Cardinal points of the lesion injected with ICG.
Figure 3
Figure 3 Sentinel lymph node biopsy.A: Level 4 d node; B: Level 4 d fluorescent node with near-infrared vision.
Figure 4
Figure 4 One-step nucleic acid amplification assay. A-F: Lymph nodes were prepared and placed in homogenized lysis buffer (Lynorhag; Sysmex) and then centrifuged. CK19 mRNA was extracted from the lysate and analyzed by reverse transcription-loop-mediated isothermal amplification in the RD-100i system (Sysmex) using the Lynoamp (Sysmex) reagent kit [59].
Figure 5
Figure 5 Laparoscopic and robotic surgical incision and reconstruction of the external gastric wall.A and B: Incision, representative views; C and D: Reconstruction, representative views.
Figure 6
Figure 6 Endoscopic full thickness resection of early gastric cancer using the non-exposed endoscopic wall-inversion surgery procedure.A: Mucosal markings placed around the tumor; B: Endoscopic incision of the internal layers of the gastric wall; C: Specimen.
Table 1 Patient demographics and clinical results
Data are presented as n, n/total, n (%), or mean ± SD (range).All patients were included in the evaluations.ASA: American Society of Anesthesiologists; BMI: Body mass index.
Gas and high-performance thin-layer chromatography-based determination methods for quantification of thymol in semisolid traditional dosage forms
The control and standardization of herbal products is a critical point in the preparation of these medicaments. Among the different pharmaceutical dosage forms in the Traditional Persian Medicine (TPM) literature, Jawarish is a semisolid gastrointestinal dosage form with reported beneficial effects. Jawarish-e-Khuzi, which includes Zataria multiflora, Lepidium sativum, Trachyspermum ammi, Terminalia chebula, ferrous sulfate, and honey, is one of the popular traditional oral formulations. However, no established control and standardization procedure exists for this formulation. In this study, Jawarish-e-Khuzi was prepared based on one of the pharmaceutical textbooks of TPM. Using gas chromatography/mass spectrometry (GC/MS), the volatile composition of this formulation was analyzed. Subsequently, gas chromatography/flame ionization detection (GC/FID) and high-performance thin-layer chromatography (HPTLC) were employed to determine the main component. The GC/MS results showed thymol to be the main constituent. In the content determination by GC/FID, thymol was found to be 0.02% of the whole preparation, and the HPTLC result corresponded with that of GC/FID. Based on the method validation parameters, both the GC/FID and HPTLC methods are useful for determining the volatile content of semisolid dosage forms.
INTRODUCTION
For thousands of years, medicinal plants have been employed by mankind for various ailments. In contrast to current pharmaceutical agents, the pharmacological assessment of natural medicaments has been relatively neglected, and human trials involving herbal medicines are still infrequently performed. Lack of attention to the pharmaceutical analysis of medicinal plants and herbal preparations has contributed to concerns about herbal remedies with poorly defined chemical composition and inadequate knowledge of their active markers. Consequently, natural health products are often produced with unknown pharmacoactive components. Moreover, the determination of toxicity and the propensity for interaction with pharmaceutical agents are often neglected. Without adequate description and standardization of the herbal remedy under study, further clinical research becomes unreliable because of inherent inconsistencies in the substance(s) being studied [1]. Plants synthesize a large number of bioactive compounds that are candidate sources of new drugs [2]. The most important purpose of using traditional medicine is to discover and present new drugs to the pharmaceutical market around the world [3]. About 25% of the medicines in modern pharmacopeias have been derived from plants [4].
Traditional medicine in Iran has a rich history reaching back to ancient Persia [5]. Traditional Persian Medicine (TPM), as a great ancient school, has provided plant-based resources for clinical studies [6]. Reviewing the historical TPM literature can be particularly useful for obtaining valuable information about the application of medicinal herbs [7].
The pharmaceutical textbooks of TPM, called "Qarābadin", describe the compounding of natural medicaments, their administration, warnings and precautions, uses, dosage forms, and target organs [8]. Gastrointestinal (GI) diseases are prevalent among humans. Although treatment options for GI disease have been restricted, herbal drugs have been widely used for GI problems [9]. In TPM, the digestive system is supposed to have a notable effect on the other organs of the body, so specific attention is paid to gastric function in the treatment of various diseases [10]. The medical literature of TPM mentions a large number of GI diseases and their treatments [11]. Jawarish formulations are herbal drugs that have been used to relieve gastrointestinal problems [12]. Jawarish-e-Khuzi is one of the dosage forms mentioned in the Teb-e-Akbari; this semisolid traditional formulation comprises five ingredients (Zataria multiflora, Lepidium sativum, Trachyspermum ammi, Terminalia chebula, and ferrous sulfate) plus honey.
Lepidium sativum Linn. (L.S), generally known as garden cress, belongs to the Brassicaceae and is broadly cultivated in many countries of the world [17]. Alkaloids, flavonoids, tannins, glucosinolates, sterols, triterpenes, saponins, anthracene glycosides, carbohydrates, proteins, and phenolics are the chief bioactive components of garden cress [18]. This plant has various pharmacological effects, including antibacterial, antifungal, antioxidant, cytotoxic, diuretic, hepatoprotective, hypoglycemic, antiosteoporotic, antiasthmatic, anticarcinogenic, cardiotonic, smooth- and skeletal-muscle contracting, fracture-healing, chemoprotective, and hemagglutinating activities. In Indian folk medicine, it has also been prescribed for menstrual cycle regulation, for gastrointestinal problems (diarrhea and constipation), and to increase milk production [19].
Trachyspermum ammi (L.) Sprague (T.A), commonly called Ajowan, belongs to the Apiaceae and is widely cultivated all over the world. Thymol is the major constituent of the essential oil of Ajowan seeds. Ajowan seeds possess aphrodisiac, diuretic, antimicrobial, antiviral, antiulcer, anti-inflammatory, analgesic, and bronchodilatory, as well as antitumor and antioxidant, properties [20,21]. Despite many investigations, Jawarish-e-Khuzi with these ingredients has never been evaluated, standardized, or reformulated. The current study was therefore conducted to introduce this semisolid dosage form and, in parallel, to determine its major volatile constituent by gas chromatography/flame ionization detection (GC/FID) and high-performance thin-layer chromatography (HPTLC), and additionally to compare these two methods.
Jawarish-e-Khuzi.
As described in the Teb-e-Akbari book, Jawarish-e-Khuzi is a semisolid formulation made with the ingredients listed in Table 1. All ingredients of this formulation were purchased from a popular medicinal plant market in Shiraz and brought to the Department of Phytopharmaceuticals (Traditional Pharmacy), Shiraz School of Pharmacy, for authentication and assignment of voucher numbers.
According to the Teb-e-Akbari instruction, L.S (garden cress) was dried on a hot plate for 10 minutes at 140 °C. The other plants were ground separately in an electric mill and sieved through a 30 British mesh. Finally, 300 g of Z.M and T.A, 200 g of T.C, 500 g of L.S, and 100 g of ferrous sulfate were mixed together (final weight of the product was 1300 g). Then honey in the same amount as the powder was added to the finished product and mixed well.
Hydro-distillation and sample preparation.
Jawarish-e-Khuzi in dry form was hydro-distilled in a Clevenger apparatus for 4 hours: 300 g of Jawarish powder was weighed, soaked in a defined amount of distilled water for 24 hours, and then poured into the Clevenger apparatus. After the extraction, the essential oil yield was calculated on a dry-weight basis. The whole essential oil was transferred to screw-capped test tubes, dried over anhydrous sodium sulfate, and stored at -20 °C until GC/MS analysis.
GC/MS analysis.
GC/MS analysis was performed on an Agilent GC-MSD system (model 7890A). An HP-5MS capillary column (phenyl methyl siloxane, L × I.D. 30 m × 0.25 mm, 0.25-µm film thickness) was used with helium as the carrier gas at a 1 mL/min flow rate. The GC oven temperature was programmed from 60 °C (0 min) to 220 °C (heating rate 5 °C/min) and then held for 10 min at 220 °C. The mass spectrometer (Agilent Technologies 5975C) was operated at 70 eV. The mass range was recorded from 30 to 600 m/z and the injector temperature was set at 280 °C. Essential oil components were identified from their Kovats indices, calculated using the retention times of synchronously injected normal alkanes (C9-C24), as well as from their mass spectra, by comparison with the Wiley (7n) and Adams library spectra [22].
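The retention-index identification step described above can be sketched as the van den Dool-Kratz linear retention index commonly used for temperature-programmed runs; the retention times below are invented for illustration only:

```python
def linear_retention_index(rt, alkane_rts):
    """Van den Dool-Kratz linear retention index for a temperature-
    programmed run. `rt` is the analyte retention time (min) and
    `alkane_rts` maps carbon number -> retention time of that n-alkane."""
    carbons = sorted(alkane_rts)
    for n, n1 in zip(carbons, carbons[1:]):
        if alkane_rts[n] <= rt <= alkane_rts[n1]:
            frac = (rt - alkane_rts[n]) / (alkane_rts[n1] - alkane_rts[n])
            return 100.0 * (n + frac)
    raise ValueError("retention time not bracketed by the alkane series")

# Illustrative values only: a peak at 17.0 min bracketed by C12 (15.0 min)
# and C13 (19.0 min) falls halfway between, giving index 1250
print(linear_retention_index(17.0, {12: 15.0, 13: 19.0}))  # 1250.0
```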
GC/FID analysis.
GC/FID analysis was performed on an Agilent GC-FID system (model 7890A) equipped with an HP-5 column (phenylmethyl siloxane, L × I.D. 25 m × 0.32 mm, 0.52-µm film thickness) and a flame ionization detector (FID). Nitrogen (5th grade) was used as the carrier gas at a 1 mL/min flow rate. The column temperature was programmed from 60 °C (0 min) to 250 °C at a heating rate of 5 °C/min and then held for 10 min at 250 °C. The injector and detector were set at 270 °C and 300 °C, respectively. A stock solution of thymol (the main bioactive component of the essential oil; 99-101% purity) as the reference compound was prepared in methanol. A calibration curve is needed for quantification of the specific marker. To plot the calibration curve, dilutions of thymol (0.04, 0.1, 0.5, 0.1 mg/ml) were prepared in methanol and about 1 µl of each sample was injected into the GC/FID three times. Also, the essential oil obtained from the Jawarish was prepared (0.02 mg/ml) and injected into the GC/FID three times a day on three days to determine the inter-day and intra-day variation and the relative standard deviation (RSD). The limit of detection (LOD) and limit of quantification (LOQ) were determined.
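The calibration fit and the usual ICH-style LOD/LOQ estimates (3.3σ/slope and 10σ/slope, with σ the residual standard deviation of the regression) can be sketched as follows. The concentration/area pairs are hypothetical, chosen only to illustrate the computation, not the study's data:

```python
import numpy as np

# Hypothetical calibration data (concentration in mg/ml vs. mean peak area)
conc = np.array([0.04, 0.1, 0.5, 1.0])
area = np.array([41.0, 99.0, 502.0, 1001.0])

# Least-squares line y = slope*x + intercept and coefficient of determination
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# ICH-style limits from the residual standard deviation of the fit
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

print(f"R^2 = {r2:.4f}, LOD = {lod:.4f} mg/ml, LOQ = {loq:.4f} mg/ml")
```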
High-performance thin-layer chromatography (HPTLC) analysis.
HPTLC analysis was performed using a CAMAG TLC system equipped with an ATS 4 (Automatic TLC Sampler 4) and an automatic developing chamber (ADC2). The stationary phase was a silica gel 60 F254 plate (10×10 cm, Merck, Germany) and the mobile phase was toluene:ethyl acetate (9:1; v/v).
A stock solution containing 6 mg/ml thymol and decreasing serial dilutions of the stock in the range of 3, 1.5, 0.75, 0.375 and 0.187 mg/ml were prepared in methanol to establish the calibration curve. Also, 4 mg of Jawarish essential oil was diluted in 1 ml of ethanol. Then, five samples (2, 1, 0.5, 0.25 and 0.125 mg/ml) were prepared and each one was injected three times.
Finally, the chromatographic spots were visualized first under ultraviolet lamps emitting at 254 and 365 nm and then with anisaldehyde-sulfuric acid reagent. All chemicals and solvents were purchased as analytical grade from Merck (Germany) or Sigma-Aldrich (USA).
Essential oil and yield determination.
The amount of dried sample, essential oil and yield are shown in Table 2.
GC/MS Essential oil analysis.
About 1 µl of the dehydrated Jawarish sample without honey was injected into the GC/MS instrument and its components were analyzed. Table 3 and Figure 1 show the chemical composition and the GC/MS chromatogram of the Jawarish-e-Khuzi essential oil, respectively. According to the GC/MS analysis, thymol is the main volatile component of Jawarish-e-Khuzi.
Determination of the amount of thymol via GC/FID method.
The GC/FID method was applied to determine the exact amount of thymol in the Jawarish-e-Khuzi formulation. For this purpose, the standard curve of thymol was plotted and the related equation was calculated from different concentrations of the standard (Figure 2). Table 4 presents the mean ± SD area and RSD for each of the four standard concentrations of thymol. The linearity of the protocol was confirmed by R2 (R2 = 0.99). Then, to determine the exact amount of thymol in the formulation, 1 µl of the prepared essential oil was injected into the GC/FID three times a day on three days. The intra-day and inter-day variations were calculated and are shown in Table 5. Finally, Table 6 presents the exact amount of thymol in the Jawarish-e-Khuzi formulation.
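The precision figures above reduce to relative standard deviations of replicate injections, and sample concentrations are read back off the fitted calibration line. A small sketch; the replicate areas and line parameters are invented for illustration, not the study's measurements:

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation (%) of replicate injections."""
    return 100.0 * stdev(values) / mean(values)

def concentration(area, slope, intercept):
    """Back-calculate concentration from the calibration line y = slope*x + intercept."""
    return (area - intercept) / slope

# Hypothetical replicate peak areas (not study data)
intra_day = [512.0, 508.0, 515.0]   # three injections on one day
inter_day = [511.7, 506.0, 518.2]   # daily mean areas over three days
print(round(rsd_percent(intra_day), 2), round(rsd_percent(inter_day), 2))
```

Both RSD values coming out below a preset limit (e.g. 10%) is what the article uses to confirm method validity.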
Quantification of the amount of thymol via HPTLC method.
To verify the results of the GC/FID method, the presence and amount of thymol in the Jawarish essential oil were also determined via HPTLC. First, different concentrations of Jawarish essential oil (4, 2, 1, 0.5, 0.25 and 0.125 mg/ml) and of the primary thymol stock (6, 3, 1.5, 0.75, 0.375 and 0.187 mg/ml) were prepared. After loading these concentrations onto the HPTLC plate, the plate was developed in a tank with toluene:ethyl acetate (9:1; v/v) as the mobile phase. Following development of the spots and drying at room temperature, derivatization was carried out by treatment with the sulfuric acid/anisaldehyde reagent, and the spots appeared upon heating. The spots were photographed under white light and UV (254 and 366 nm), before and after treatment with the sulfuric acid/anisaldehyde reagent. Figure 3 shows the HPTLC plate under visible light after development.
Also, to calculate the exact concentration of thymol, the areas under the curve for the different standard (thymol) concentrations were calculated; the results are shown in Table 7. Subsequently, a calibration curve was plotted (Figure 4). Based on the calibration curve, the linearity of the protocol was confirmed by R2 (0.96). Then, the areas under the curve for the different concentrations of Jawarish essential oil were calculated (Table 8). Eventually, after considering the dilution factor and the essential oil percentage, the thymol content was determined as 2.33 mg per 100 g of Jawarish.
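The final back-calculation combines the thymol fraction of the oil with the amount of oil recovered per gram of product. A sketch of that arithmetic; the 50% thymol fraction and the oil recovery below are hypothetical values chosen only to reproduce the reported order of magnitude:

```python
def thymol_mg_per_100g(thymol_fraction_in_oil: float,
                       oil_mg_per_g_sample: float) -> float:
    """mg thymol per 100 g of product =
    (mg thymol / mg oil) * (mg oil / g sample) * 100 g."""
    return thymol_fraction_in_oil * oil_mg_per_g_sample * 100.0

# Hypothetical inputs: if the oil were 50% thymol and 100 g of Jawarish
# yielded 4.66 mg of oil, the product would contain 2.33 mg thymol / 100 g
print(round(thymol_mg_per_100g(0.5, 0.0466), 2))  # 2.33
```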
CONCLUSIONS
Herbal remedies have been employed to treat human disease for thousands of years. Although new drugs have been approved in modern medicine, the use of medicinal herbs has not declined [8]. Many people believe that the use of herbal drugs carries no risk, but significant adverse reactions following the administration of herbal medicines have been reported. It therefore seems that the standardization of herbal medicines is necessary [32]. Moreover, standardization, as a critical process, can help formulators obtain reproducible drug responses.
GI disorders are frequent in the general population, and herbal medicine has an important role in their treatment. There is growing interest in the prescription and use of herbal medicaments for these disorders. Jawarish formulations are a set of compound herbal drugs prescribed for improving GI problems. Jawarish-e-Khuzi, as a kind of Jawarish, includes Zataria multiflora, Lepidium sativum, Trachyspermum ammi, Terminalia chebula, ferrous sulfate, and honey. This formulation is widely prescribed by traditional physicians in Iran. Until now, there have been no notable, proven control and standardization or pharmacognosy studies on this formulation. The current study aimed to introduce specific and reliable parameters for the standardization of Jawarish-e-Khuzi as a semi-solid herbal medicine used in the treatment of GI diseases.
To this end, Jawarish-e-Khuzi was prepared based on the authoritative instruction from the Teb-e-Akbari book. The essential oil was then extracted using a Clevenger apparatus, and GC/MS was employed to assess the volatile content of the formulation.
According to the GC/MS analysis, thymol is the main component of the obtained essential oil. Following content determination via GC/FID and HPTLC, the exact amount of thymol was evaluated, and the result of the GC/FID method was verified by the HPTLC analysis. The validity of these methods was confirmed by the RSD (<10%).
Regarding the quantification of thymol in the Jawarish-e-Khuzi formulation, hydrodistillation with a Clevenger apparatus was confirmed as an appropriate and easy method to extract active volatile components from a semisolid formulation.
"The Alarm Was Grass Fire": Emergency Services' Perceptions of Chemical Incident Mobilization in Sweden - a Focus Group Study
Background: Chemical incidents are infrequent but potentially disastrous, and the World Health Organization calls for inter-organizational coordination of the actors involved. Multi-organizational studies of chemical response capacities are scarce and testable hypotheses are largely lacking. We aimed to describe chemical incident experiences and perceptions of Swedish fire and rescue services, emergency medical services, police services, and emergency dispatch services personnel. Methods: Eight emergency service organizations in two distinct and dissimilar regions in Sweden participated in one organization-specific focus group interview each. The total number of respondents was 25 (7 females and 18 males). A qualitative inductive content analysis was performed. Results: Three types of information processing were derived as emerging during acute-phase chemical incident mobilization: unspecified (a caller communicating with an emergency medical dispatcher), specified (each emergency service obtaining organization-specific expert information), and aligned (continually updated information from the scene condensed and disseminated back to all parties at the scene). Improvable shortcomings were identified, e.g. randomness (unspecified information processing), inter-organizational reticence (specified information processing), and down-prioritizing central information transmission while saving lives (aligned information processing). Conclusions: It is inferred that sensitivity, specificity and time-effectiveness in the flow of information may be improved by automation, public education, revised dispatcher education, and use of technical resources in the field. It is further inferred that inter-organizational coordination may be improved by inter-organizational training and revised standard operating procedures. We propose systematic assessments of chemical incident probabilities in Sweden.
Background
Chemical exposures are distinguished by equivocal characteristics. Toxic emissions may cause immediate or delayed symptoms, may be colorless, odorless and tasteless, may contaminate rescuers and the environment, and may escalate to disaster if not contained. In 2009, the World Health Organization (WHO) stated that in the event of any chemical incident, a timely and robust mechanism is needed to mobilize responders. [1] The WHO noted that, since chemical incidents are both complex and acute, an optimal response can only be achieved through coordination of the actors involved, stressing the importance of interoperability of communication equipment, procedures, and systems. Toxic emissions are relatively uncommon in Sweden, but serious incidents have occurred. [2] [3] The implementation of new technologies and materials in buildings and vehicles, e.g. polyurethane and electric car traction batteries, may add new or increased toxicological risks in fires. Road tunnels and underground transportation hubs may contribute to toxicological exposure levels in vehicle fires, [4] and in structure fires, discussions about the Grenfell Tower disaster in London 2017 have involved the choice of materials in relation to the cyanide poisoning of its residents. [5] There is also an awareness of the potential use of chemical agents in acts of terrorism. [6] Thus, it would seem of interest to our and other countries to examine to what degree they possess timely and robust mechanisms to mobilize responders in chemical incidents.
Previously, the general performance of Swedish emergency medical dispatch protocols has been evaluated. [7] [8] In the neighbouring country of Finland, the preparedness of the emergency medical services for chemical emergencies has been surveyed, finding insufficient antidote and decontamination capacity. [9] A similar survey in the Netherlands found a serious lack of hospital preparedness for chemical, biological, or radionuclear (CBRN) incidents. [10] Also in the Netherlands, a study of general coordination capacities of emergency services found patterns of organizational fragmentation emerging when challenged with disaster training. [11]

1.2 Importance

Literature on integrated inter-organizational response capacities to chemical incidents is scarce and testable hypotheses are largely lacking.
Aim
The aim of this study was to describe chemical incident experiences and perceptions of emergency services and emergency service dispatch personnel.
Study design, setting, and selection of participants
This study has an inductive approach with interpretations of interviews, following the consolidated criteria for reporting qualitative studies (COREQ). [12] Eight Swedish emergency service organizations, including fire and rescue services (RS), emergency medical services (EMS), police services (PS), and emergency dispatch services (EDS), were invited by convenience via e-mail to participate in one organization-specific focus group interview each. Each focus group interview was intended to consist of about five respondents and to last 1-1.5 hours. Swedish emergency services have regional diversity and a degree of national heterogeneity regarding terminology and systems of hierarchical relationships. The invited organizations were located in two distinct and dissimilar regions in Sweden, with clear geographical separation between their respective catchment areas. Both catchment areas contain heavy chemical industries, busy industrial ports, as well as roads and railways carrying hazardous materials.
This study was approved by the Swedish Ethical Review Authority (dnr 2019-02043) and was conducted in accordance with the Helsinki Declaration. [13]

2.2 Data collection

Interviews were conducted June through September, 2019. All interviews were conducted by the principal author. A semi-structured interview guide was developed through iterative discussions among the authors (Appendix A). All interviews were held at the respective workplace during working hours in a separate room, without anyone present besides the respondents and researchers. The audio-recorded interviews lasted between 44 and 90 minutes and were transcribed verbatim.
Characteristics of study subjects and derived categories
All invited organizations chose to participate, including a specialist EMS for chemical incidents, the Chemical Ambulance (CA). The total number of respondents was 25 (7 females; 18 males). No respondent declined participation or dropped out. In two cases, only one respondent could participate; these were considered in-depth interviews. Respondent experience levels are presented in Table 1.
Analysis
The text was analyzed using qualitative content analysis, [14] [15] including iterative steps to enhance trustworthiness of the results. This method analyses both manifest (explicit) and latent (implicit) content, by grouping meaning units into subcategories, categories, and themes, with quotations used to describe internal consistency. Emphasis was placed on the manifest content, i.e., what the respondents actually said. To mitigate analysis biases, initial coding was performed independently by the authors as follows: first, the authors read the entire material thoroughly several times to grasp the content. The principal author analyzed two interviews, marking meaning units and codes relating to the aim of the study. The other authors independently followed the same procedure with the rest of the interviews. Thereafter, categorizations were performed from all the codes, leading to consensus about emergent sub-categories, from which categories were derived. Finally, a theme was derived. The respondents did not provide feedback on the findings.
Results
Derived categories are presented in Table 2.
Main category: Unspecified information processing

The derived category Unspecified information processing concerns how a caller communicates with the emergency medical dispatcher (EMD). It further concerns how the emergency medical communication center (EMCC) receives the incoming call, understands and indexes the information, as well as how experts listening in (added by EMD request to the call) are coordinated, and how the information is disseminated.
Subcategory: The incoming call
EMDs expect the incoming call to provide information about how the caller has perceived the situation. The information can be quite clear (e.g. a gas leak), but there were descriptions of misunderstandings. The caller may only have a fragmented understanding of the situation, while at the same time, there may be several callers who can provide complementary information. Some callers were perceived to show adaptive capacity to take action.
The alarm was grass fire. True, when they arrived it sure was a grass fire, 20 x 30 m [next to] the railway... and also nine carriages lying there leaking.
Subcategory: Receiving, understanding and indexing
The initial information from an incoming call was said to provide, at best, only a partial description of the situation. The indexing process, by contrast, must rapidly identify and attach to the conversation personnel with the correct type of competence. The indexing process emphasizes automation. EMDs ask questions from the emergency dispatch index (EDI) interview guide, a criteria-based dispatch protocol structured in sub-indices, e.g. the medical index. In situations having a clear EDI, a swift response can be expected. Experiences from trucking incidents with a non-Swedish-speaking crew included insufficient cargo information. An EMD seeks to assess risk, and typically gives basic safety advice to callers. An open mind was perceived as needed to characterize an incident. One important variable mentioned was geographical position, perceived to be facilitated by local knowledge. The EMD must be able to adapt socio-linguistically to the caller, including dialectal nuances. Interviewing was considered an art learned through experience. In the event of multiple, simultaneous incidents, it was perceived as difficult to know if they were related, since an EMD typically handles only one event at a time.
Subcategory: Listening in and disseminating information
The respondents described a rapid assembly and coordination of competence by listening in from senior personnel from both the RS and EMS. All parties become participants in the conversation. Listening in was considered beneficial for creating inter-organizational consensus about the situation. In the view of the respondents, the information disseminated should contain facts about possible hazardous materials involved, life-saving needs, and an estimate of urgency. A general experience of being called out to chemical incidents was initial uncertainty about hazardous materials. The PS were perceived by the other organizations to have access to restricted information (e.g. ongoing lethal violence). The national public warning system was perceived as functional and appropriately used. The CA was, if possible, called out to the scene, but was also consulted for listening in to give telemedical advice. It was perceived that the CA was not contacted by default in all chemical incidents, and that the primary contact to the CA might be through another party than the EMCC, such as the RS calling directly from a scene. A lack of medical knowledge was considered problematic for an EMD, but help could be obtained from available nursing competence. An EMD cannot deploy too many resources to an incident while, by contrast, the emergency dispatch liaison officer (EDLO) may activate all available RS resources without regard to cost. By further contrast, the EDLOs were perceived as having a sufficient number of indices and the mandate to deviate from these, making decisions other than those predetermined.
I have been to incidents where the police only afterwards told us it was a threatening situation. -RS respondent, focus group #2.
Main category: Specified information processing

The derived category Specified information processing concerns the need for the emergency services, after receiving initial information from the EMCC, to collect additional information relevant for their own organization.
Subcategory: Obtaining organization-specific information

Additional information sought included hazardous materials involved and wind direction. A prime source of additional information mentioned was a national database and decision support service provided by the Swedish Civil Contingencies Agency; the respondents did not discuss this service in terms of any technical problems. The Swedish Poisons Information Centre also provided information, as did general Internet search engines such as Google. Additional information was needed for the EMS to be optimally distributed throughout the catchment area, in contrast to the RS' approach of scaling up initially and then withdrawing excess resources. The expertise of the CA was perceived as contributing to optimal resource allocation, including being able to withdraw excess resources from a scene. It was perceived that the PS can share certain information with the CA that it cannot with "ordinary" EMS personnel. The respondents perceived that specific information was needed to know which equipment was to be taken out to the scene. This was seen as problematic for the PS, who do not always ride with chemical protection clothing in a typical police car, yet may be the first to arrive at the scene. Without organization-specific additional information, the respondents described a risk of moving up too close to an event.
Emergency medical respondents described how they also can arrive at a scene before receiving additional information. It was described that if you are in a vehicle already moving, there was less time to gather information than if you are at the station. The national collaborative radio channel system was perceived as helpful in providing additional information.
You often get information from the rescue service, they are key players.
The police, we are alerted late. We have calculated that on average the difference between when the rescue service gets the alarm and until we have a patrol that gets the alarm is 12 minutes.
Subcategory: Risk assessment
Risk assessment information was perceived as obtained from the EMCC, from the other organizations, from people at the scene, by observing (e.g. with binoculars) from a distance, and by using drones. A chemical incident was considered a prime responsibility of the RS, though they were not always first to the scene. The PS and EMS trusted RS risk assessments. Risk assessments for urban vs. rural areas differ. Evacuation need was described as a function of risk assessment. The respondents expressed the view that personnel should not be endangered to save lives, yet sometimes take excess risks. As part of the risk assessment, the level of personal protection needed and time limits to exposure are determined. It was perceived that actions taken by the PS must be given precedence.
Should we arrive first knowing that lives are on the line, then we might skip the chemical suit since we can almost not move in it, and pull [the victims] out from the danger zone.
We have seen the police go down into fires with filter masks and say that it takes everything. Yes, but it does not add oxygen.
Main category: Aligned information processing
The derived category Aligned information processing concerns the need for the EMCC to receive continually updated reports from the scene, in order to be able to disseminate a more complete overall situation description to all parties.
Subcategory: Building puzzles centrally
Throughout operations during a chemical incident, the EMCC was perceived as having the task of centrally collecting new information, fitting this information into the situation description, and continuously disseminating the updated description to all parties. In contrast to the central need for information, it was perceived as difficult to continuously send updated information from an ongoing operation. Respondents thought that obtaining and transmitting such information was not possible while saving lives. Inter-organizational communication at the scene was perceived as, at times, unsatisfactory. Respondents described how easy it was to forget the other organizations while focusing on urgent work of their own. It was perceived that information from the PS needs to be actively requested. One perceived value of continuously updated information is that it may disseminate awareness of multiple, simultaneous incidents.
The police use their own [radio] channel and they do not always come over into ours. The ambulance has its own channel and rarely changes to a collaborative channel.
As soon as we get the [first unit] windshield report, then it supplements the information we received from the beginning. I do not feel we get as much response from the police, [it is] almost always just the ambulance and the rescue service.
Theme: Rare, elusive, and dangerous
A theme throughout the interviews was characterized as Rare, elusive, and dangerous. Rare, because the respondents described chemical incidents as infrequent; elusive, because the respondents described chemical incidents as difficult to characterize and possibly concealed within another category of incident; dangerous, because respondents described chemical incidents as threatening both to the public and to themselves, with the potential to escalate from a minor to a major incident.
Main findings
While our respondents did not describe technical problems with communication systems, they perceived difficulties in centrally collecting continuous reports from the scene, and in disseminating that updated information to all engaged parties. In terms of the WHO recommendations, it appears that the interoperability of communication equipment and systems was considered unproblematic, but that the coordination of actors involved has improvable variables. This is in line with a theoretical suggestion that disaster response interoperability degrades as the number of actors increases. [16]

4.2 Unspecified information processing

The EMD receiving the initial incoming call has a formidable task. Information retrieval obstacles revealed include a fragmented understanding of the situation by the caller; caller-EMD language problems (including dialects); and the incomplete predictive capacity of the EDI interview guide. Types of chemical incidents having comparatively well-defined interview guides, e.g. gas leaks, are described as less challenging. This indicates a problem when the crucial first communication falls outside predicted scenarios. The initial incoming call can be received by any EMD, stochastically, regardless of experience.
In Sweden, EMD education is 10-11 weeks. [17] In an evolving chemical disaster, the initial information retrieval is dependent on two human variables (the caller and the EMD), who both have properties of randomness and a less-than-expert level of chemical competence. A countermeasure could be increased automation, e.g. revising current interview guides regarding chemical incidents or adding an automated third party to the caller-EMD conversation in the form of artificial intelligence (AI). Limitations to AI usefulness should, however, be considered. [18] Recommendations from a recent report on Swedish EDS include standardizing terminology, developing technical solutions for information sharing and geodata systems, and organizational reform. [19]

4.3 Specified information processing
After the initial information has been passed on from the EMCC to the respective emergency services, the emergent problem is no longer randomness, but information access and literacy. All engaged parties need to obtain as much organization-specific information of as high quality and relevance as possible, as fast as possible. It has been noted that toxicologists must be ready to gain and interpret analytical data in the response phase, to support both medical care and repeated risk assessment. [20] It appears from our interviews that the most commonly used source of information at this stage is a database and decision support service provided by the Swedish Civil Contingencies Agency. [21,22] However, the respondents also discussed other means of obtaining information, including Google, to date subject to third-party influence. [23] The view of respondents from other organizations that the PS do not always release all information may be worth enquiry. The positive perception of the consultant role of the CA, i.e. incidents in which its personnel partakes only telemedically, suggests a broader implication of such specialist competence, possibly nationwide.
Risk assessment in chemical incidents is perceived as dependent on inter-organizational communication.
Respondents considered the RS to have prime responsibility for risk assessment, but the service first arriving may be the EMS or PS. Conflicting statements emerged about when in the sequence of arrivals the PS typically enter the scene: on the one hand, the PS consider it probable that they arrive first at an urban scene, since they are patrolling in their vehicles and may be only a few minutes' drive away; on the other hand, a deployment delay of up to twelve minutes between RS and PS is mentioned. Apart from a need to examine police deployment routines, this variability also indicates initial excess risk exposure for the PS. This is consistent with a study of first responders injured in acute chemical incidents in the USA 2002-2012, finding that police officers had rarely used personal protective equipment. [24] Firefighters were, however, most frequently injured. Our respondents described risk assessment information from various sources as being collected and processed by the EMCC (having engaged expertise centrally) and then disseminated, with the RS as a main addressee. In triage of chemical incident victims, the need to assess the risk of secondary contamination to emergency services must also be considered. [25] Our results show a high level of trust in the RS capacity to make risk assessments. However, it must be noted that this capacity is partially dependent on the capacity of the EMDs to continually receive updated information from the scene, which is described by all organizations as difficult.
Aligned information processing
While there were no descriptions of technical problems with communication systems, EMDs described difficulties in receiving continuous reports from the scene, and in disseminating updated situation descriptions. This is not a unilateral view of EMD respondents: respondents working in the field confirm how difficult they find it to prioritize information transmission while saving lives. Previous literature notes how the needs of the injured take precedence over professional cross-border cooperation. [26]

In our country, moose collisions are frequent, immediately characterizable, and without escalation potential, [29] whereas chemical incidents possess the combined properties of infrequency, escalation potential, and initial indefinability. Based on our results, improvements in chemical emergency inter-organizational communication that could benefit victims and decrease risk to society seem achievable. Perceived from the viewpoints of the respondents as an outlier event difficult to initially characterize, a major chemical incident could form a "black swan" to societal robustness. [30] Arguably, preparedness and response to such events rests on the back of strong day-to-day systems. [31]
Conclusions
It may be inferred from our results that during chemical incident mobilization, sensitivity, specificity and time-effectiveness in the initial flow of information can be improved by automation, public education, revised dispatcher education, and use of technical resources in the field. An increased degree of EMCC automation may involve revising or expanding current indexing interview guides, as well as the development of purposeful artificial intelligence. Public education interventions may target select populations. Interventions in dispatcher education and certification should include early years of professional development. Suggested technical resources in the field include drones, robotics and artificial intelligence. Unmanned data collection will probably also improve personnel safety. It is further inferred that inter-organizational coordination can be improved by inter-organizational training and revised standard operating procedures. Finally, we propose systematic assessments of chemical incident probabilities in Sweden.
Limitations
Our own experiences of prehospital emergency healthcare entailed a risk that we imposed our own views during interviews or were biased when coding. We sought to self-monitor and to hold our preunderstandings within brackets.
Availability of data and materials
The datasets generated and analysed during the current study are not publicly available due to respondent integrity concerns.
Boundary Behavior of the Ginzburg-Landau Order Parameter in the Surface Superconductivity Regime
We study the 2D Ginzburg-Landau theory for a type-II superconductor in an applied magnetic field varying between the second and third critical value. In this regime the order parameter minimizing the GL energy is concentrated along the boundary of the sample and is well approximated to leading order by a simplified 1D profile in the direction perpendicular to the boundary. Motivated by a conjecture of Xing-Bin Pan, we address the question of whether this approximation can hold uniformly in the boundary region. We prove that this is indeed the case as a corollary of a refined, second order energy expansion including contributions due to the curvature of the sample. Local variations of the GL order parameter are controlled by the second order term of this energy expansion, which allows us to prove the desired uniformity of the surface superconductivity layer.
Introduction
The Ginzburg-Landau (GL) theory of superconductivity, originating in [GL], provides a phenomenological, macroscopic, description of the response of a superconductor to an applied magnetic field. Several years after it was introduced, it turned out that it could be derived from the microscopic BCS theory [BCS, Gor] and should thus be seen as a mean-field/semiclassical approximation of many-body quantum mechanics. A mathematically rigorous derivation starting from BCS theory has been provided recently [FHSS].
Within GL theory, the state of a superconductor is described by an order parameter Ψ : R 2 → C and an induced magnetic vector potential κσA : R 2 → R 2 generating an induced magnetic field h = κσ curl A.
The ground state of the theory is found by minimizing the energy functional where κ > 0 is a physical parameter (penetration depth) characteristic of the material, and κσ measures the intensity of the external magnetic field, that we assume to be constant throughout the sample. We consider a model for an infinitely long cylinder of cross-section Ω ⊂ R 2 , a compact simply connected set with regular boundary. Note the invariance of the functional under the gauge transformation, which implies that the only physically relevant quantities are the gauge invariant ones such as the induced magnetic field h and the density |Ψ| 2 . The latter gives the local relative density of electrons bound in Cooper pairs. It is well-known that a minimizing Ψ must satisfy |Ψ| 2 ≤ 1. A value |Ψ| = 1 (respectively, |Ψ| = 0) corresponds to the superconducting (respectively, normal) phase where all (respectively, none) of the electrons form Cooper pairs. The perfectly superconducting state with |Ψ| = 1 everywhere is an approximate ground state of the functional for small applied field and the normal state where Ψ vanishes identically is the ground state for large magnetic field. In between these two extremes, different mixed phases can occur, with normal and superconducting regions varying in proportion and organization. A vast mathematical literature has been devoted to the study of these mixed phases in type-II superconductors (characterized by κ > 1/√2), in particular in the limit κ → ∞ (extreme type-II). Reviews and extensive lists of references may be found in [FH3,SS2,Sig]. Two main phenomena attracted much attention:
• The formation of hexagonal vortex lattices when the applied magnetic field varies between the first and second critical field, first predicted by Abrikosov [Abr], and later experimentally observed (see, e.g., [H et al]).
In this phase, vortices (zeros of the order parameter with quantized phase circulation) sit in small normal regions included in the superconducting phase and form regular patterns.
• The occurrence of a surface superconductivity regime when the applied magnetic fields varies between the second and third critical fields. In this case, superconductivity is completely destroyed in the bulk of the sample and survives only at the boundary, as predicted in [SJdG]. We refer to [N et al] for experimental observations.
We refer to [CR] for a more thorough discussion of the context. We shall be concerned with the surface superconductivity regime, which in the above units translates into the assumption for some fixed parameter b satisfying the conditions where Θ 0 is a spectral parameter (minimal ground state energy of the shifted harmonic oscillator on the half-line, see [FH3,Chapter 3]): Θ 0 := inf α∈R inf { ∫ R + dt ( |∂ t u| 2 + (t + α) 2 |u| 2 ) , ‖u‖ L 2 (R + ) = 1 } . (1.5) From now on we introduce more convenient units to deal with the surface superconductivity phenomenon: we define the small parameter and study the asymptotics ε → 0 of the minimization of the functional (1.1), which in the new units reads (1.7) We shall denote by E GL the associated ground state energy and by (Ψ GL , A GL ) a minimizing pair (known to exist by standard methods [FH3,SS2]).
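The spectral constant Θ 0 defined in (1.5) is easy to approximate numerically. Below is a small, self-contained finite-difference sketch (our own illustration, not part of the paper; the grid sizes and truncation are arbitrary choices): it discretizes -u'' + (t+α)²u on a truncated half-line with a Neumann condition at t = 0 and minimizes the lowest eigenvalue over the shift α, reproducing the known values Θ 0 ≈ 0.59 with optimal α ≈ -√Θ 0 ≈ -0.77.

```python
import numpy as np

def ground_energy(alpha, L=10.0, N=400):
    """Lowest eigenvalue of -u'' + (t+alpha)^2 u on [0, L], Neumann at t = 0.

    Cell-centered finite differences: nodes t_i = (i + 1/2) h. Dropping the
    ghost coupling at i = 0 encodes u'(0) = 0; the implicit Dirichlet cut at
    t = L is harmless because the ground state decays like a Gaussian.
    """
    h = L / N
    t = (np.arange(N) + 0.5) * h
    H = np.diag(2.0 / h**2 + (t + alpha) ** 2)
    off = -np.ones(N - 1) / h**2
    H += np.diag(off, 1) + np.diag(off, -1)
    H[0, 0] -= 1.0 / h**2          # Neumann correction at t = 0
    return np.linalg.eigvalsh(H)[0]

alphas = np.linspace(-1.0, -0.5, 51)
energies = np.array([ground_energy(a) for a in alphas])
theta0 = energies.min()
alpha0 = alphas[energies.argmin()]
print(f"Theta_0 ~ {theta0:.4f} at alpha ~ {alpha0:.3f}")
```

A finer α grid or a proper 1D minimizer would sharpen the optimal shift, but at this resolution the value of Θ 0 is already accurate to about three decimal places.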
The salient features of the surface superconductivity phase are as follows: • The GL order parameter is concentrated in a thin boundary layer of thickness ∼ ε = (κσ) −1/2 . It decays exponentially to zero as a function of the distance from the boundary.
• The applied magnetic field is very close to the induced magnetic field, curl A ≈ 1.
• Up to an appropriate choice of gauge and a mapping to boundary coordinates, the ground state of the theory is essentially governed by the minimization of a 1D energy functional in the direction perpendicular to the boundary.
A review of rigorous statements corresponding to these physical facts may be found in [FH3]. One of their consequences is the energy asymptotics where |∂Ω| is the length of the boundary of Ω, and E 1D 0 is obtained by minimizing the functional (1.11) both with respect to the function f and the real number α. We proved recently [CR] that (1.10) holds in the full surface superconductivity regime, i.e. for 1 < b < Θ −1 0 . This followed a series of partial results due to several authors [Alm1,AH,FH1,FH2,FHP,LP,Pan], summarized in [FH3,Theorem 14.1.1]. Some of these also concern the limiting regime b ↗ Θ −1 0 . The other limiting case b ↘ 1, where the transition from boundary to bulk behavior occurs, is studied in [FK, Kac], whereas results in the regime b < 1 may be found in [AS,Alm2,SS1]. The rationale behind (1.10) is that, up to a suitable choice of gauge, any minimizing order parameter Ψ GL for (1.1) has the structure where (f 0 , α 0 ) is a minimizing pair for (1.11) and (s, τ ) = (tangent coordinate, normal coordinate) are boundary coordinates defined in a tubular neighborhood of ∂Ω with τ = dist(r, ∂Ω) for any point r there. Results in the direction of (1.12) may be found in the above references, in particular [AH,FHP,FH2]. In [CR,Theorem 2.1] we proved that (1.13) for any 1 < b < Θ −1 0 in the limit ε → 0. A very natural question is whether the above estimate may be improved to a uniform control (in L ∞ norm) of the local discrepancy between the modulus of the true GL minimizer and the simplified normal profile f 0 (τ /ε). Indeed, (1.13) is still compatible with the vanishing of Ψ GL in small regions, e.g., vortices, inside the boundary layer. Proving that such local deviations from the normal profile do not occur would explain the observed uniformity of the surface superconducting layer (see again [N et al] for experimental pictures). Interest in this problem (stated as Open Problem number 4 in the list in [FH3,Page 267]) originates from a conjecture of X.B.
Pan [Pan,Conjecture 1] and an affirmative solution has been provided in [CR] for the particular case of a disc sample. The purpose of this paper is to extend the result to general samples with regular boundary (the case with corners is known to require a different analysis [FH3,Chapter 15]).
Local variations (on a scale O(ε)) in the tangential variable are compatible with the energy estimate (1.10), and thus the uniform estimate obtained for disc samples in [CR] is based on an expansion of the energy to the next order: where E 1D (k) is the minimum (with respect to both the real number α and the function f ) of the ε-dependent functional (1.15), where the constant c 0 has to be chosen large enough and k = R −1 is the curvature of the disc under consideration, whose radius we denote by R. Of course, (1.11) is simply the above functional where one sets k = 0, ε = 0, which amounts to neglecting the curvature of the boundary. When the curvature is constant, (1.14) in fact follows from a next order expansion of the GL order parameter beyond (1.12): where (α(k), f k ) is a minimizing pair for (1.15). Note that for any fixed k, f 0 (t) − f k (t) = O(ε), (1.17) so that (1.16) is a slight refinement of (1.12), but the O(ε) correction corresponds to a contribution of order 1 beyond (1.10) in (1.14), which turns out to be the order that controls local density variations.
As suggested by the previous results in the disc case, the corrections to the energy asymptotics (1.10) must be curvature-dependent. The case of a general sample where the curvature of the boundary is not constant is then obviously harder to treat than the case of a disc, where one obtains (1.14) by a simple variant of the proof of (1.10), as explained in our previous paper [CR].
In fact, we shall obtain below the desired uniformity result for the order parameter in general domains as a corollary of the energy expansion (γ is a fixed constant) where the integral runs over the boundary of the sample, k(s) being the curvature of the boundary as a function of the tangential coordinate s. Just like the particular case (1.14), (1.18) contains the leading order (1.10), but O(1) corrections are also evaluated precisely. As suggested by the energy formula, the GL order parameter has in fact small but fast variations in the tangential variable which contribute to the subleading order of the energy. More precisely, one should think of the order parameter as having the approximate form (again, up to a suitable choice of gauge) with f k(s) , α(k(s)) a minimizing pair for the energy functional (1.15) at curvature k = k(s). The main difficulty we encounter in the present paper is to precisely capture the subtle curvature-dependent variations encoded in (1.19). What our new result (1.19) shows (we give a rigorous statement below) is that curvature-dependent deviations from (1.12) do exist but are of limited amplitude and can be completely understood via the minimization of the family of 1D functionals (1.15). A crucial input of our analysis is therefore a detailed inspection of the k-dependence of the ground state of (1.15). We can deduce from (1.18) a uniform density estimate settling the general case of [Pan, Conjecture 1] and [FH3,Open Problem 4,page 267]. We believe that the energy estimate (1.18) is of independent interest since it helps in clarifying the role of domain curvature in surface superconductivity physics. It was previously known (see [FH3,Chapters 8 and 13] and references therein) that corrections to the value of the third critical field depend on the domain's curvature, but applications of these results are limited to the regime where b → Θ −1 0 when ε → 0.
The present paper seems to contain the first results indicating the role of the curvature in the regime 1 < b < Θ −1 0 . This role may seem rather limited since it only concerns the second order in the energy asymptotics, but it is in fact crucial in controlling local variations of the order parameter and in allowing us to prove a strong form of uniformity for the surface superconductivity layer.
Our main results are rigorously stated and further discussed in Section 2, their proofs occupy the rest of the paper. Some material from [CR] is recalled in Appendix A for convenience.
Notation. In the whole paper, C denotes a generic fixed positive constant independent of ε whose value changes from formula to formula. An O(δ) is always meant to be a quantity whose absolute value is bounded by δ = δ(ε) in the limit ε → 0. We use O(ε ∞ ) to denote a quantity (like exp(−ε −1 )) going to 0 faster than any power of ε and | log ε| ∞ to denote | log ε| a where a > 0 is some unspecified, fixed but possibly large constant. Such quantities will always appear multiplied by a power of ε, e.g., ε| log ε| ∞ , which is an O(ε 1−c ) for any 0 < c < 1, and hence we usually do not specify the precise power a.

Acknowledgements. M.C. acknowledges the support of the project "Condensed Matter in Mathematical Physics (Cond-Math)" (code RBFR13WAET). N.R. acknowledges the support of the ANR project Mathostaq (ANR-13-JS01-0005-01). We also acknowledge the hospitality of the Institut Henri Poincaré, Paris.
Statements
We first state the refined energy and density estimates that reveal the contributions of the domain's boundary. As suggested by (1.19), we now introduce a reference profile that includes these variations. A piecewise constant function in the tangential direction is sufficient for our purpose and we thus first introduce a decomposition of the superconducting boundary layer that will be used throughout the paper. The thickness of this layer in the normal direction should roughly be of order ε, but to fully capture the phenomenon at hand we need to consider a layer of size c 0 ε| log ε| where c 0 is a fixed, large enough constant. By a passage to boundary coordinates and dilation of the normal variable on scale ε (see [FH3,Appendix F] or Section 4 below), the surface superconducting layer can be mapped to We split this domain into N ε = O(ε −1 ) rectangular cells {C n } n=1,...,Nε of constant side length ℓ ε ∝ ε in the s direction. We denote by s n , s n+1 = s n + ℓ ε the s coordinates of the boundaries of the cell C n : and we may clearly choose ℓ ε = ε|∂Ω| (1 + O(ε)) for definiteness. We will approximate the curvature k(s) by its mean value k n in each cell: We also denote f n := f kn , α n := α(k n ) respectively the optimal profile and phase associated with k n , obtained by minimizing (1.15) first with respect to f and then to α. The reference profile is then the piecewise continuous function and we compare the density of the full GL order parameter to g ref in the next theorem. Note that because of the gauge invariance of the energy functional, the phase of the order parameter is not an observable quantity, so the next statement is only about the density |Ψ GL | 2 .
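As an illustration of this cell decomposition (our own toy example, not taken from the paper), the following sketch splits the boundary of an ellipse into cells of equal arc length and replaces the curvature by its cell-wise mean k n ; by construction the piecewise-constant curvature still integrates to 2π over the whole boundary (Gauss-Bonnet for a simple closed curve), which serves as a sanity check.

```python
import numpy as np

# Toy sample boundary: an ellipse with semi-axes a, b (arbitrary choices).
a, b = 2.0, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = a * np.cos(theta), b * np.sin(theta)
ds = np.hypot(np.diff(x), np.diff(y))          # arc-length elements
s = np.concatenate([[0.0], np.cumsum(ds)])     # arc-length coordinate
perimeter = s[-1]

# Curvature of the ellipse, evaluated at segment midpoints.
tm = 0.5 * (theta[:-1] + theta[1:])
kappa = a * b / (a**2 * np.sin(tm)**2 + b**2 * np.cos(tm)**2) ** 1.5

# Split the boundary into N cells of equal side length (N_eps = O(1/eps)
# in the paper) and approximate kappa by its arc-length mean in each cell.
N = 50
s_mid = 0.5 * (s[:-1] + s[1:])
cell = np.minimum((s_mid / perimeter * N).astype(int), N - 1)
length = np.array([ds[cell == n].sum() for n in range(N)])
k_mean = np.array([(kappa[cell == n] * ds[cell == n]).sum()
                   for n in range(N)]) / length

total_turning = (k_mean * length).sum()
print(total_turning)   # ~ 2*pi ~ 6.2832
```

The cell means interpolate between the extreme curvatures b/a² = 0.25 and a/b² = 2 of the ellipse, mirroring how k n samples k(s) along ∂Ω.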
Theorem 2.1. Let Ω ⊂ R 2 be any smooth, bounded and simply connected domain. For any fixed 1 < b < Θ −1 0 , in the limit ε → 0, it holds and (2.6)

Remark 2.1 (The energy to subleading order). The most precise result prior to the above is [CR,Theorem 2.1], where the leading order is computed and the remainder is shown to be at most of order 1. Such a result had been obtained before in [FHP] for a smaller range of parameters, namely for 1.25 ≤ b < Θ −1 0 , see also [FH3,Chapter 14] and references therein. The above theorem evaluates precisely the O(1) term, which is better appreciated in light of the following comments: 1. In the effective 1D functional (1.15), the parameter k that corresponds to the local curvature of the sample appears with an ε prefactor. As a consequence, one may show (see Section 3.1 below) that for all s ∈ [0, |∂Ω|] so that (2.5) contains the previously known results. More generally we prove below that so that E 1D (k(s)) has variations of order ε on the scale of the boundary layer. These contribute to a term of order 1 that is included in (2.5).
2. Undoing the mapping to boundary coordinates, one should note that g ref (s, ε −1 t) has fast variations (at scale ε) in both the t and s directions. The latter are, however, of limited amplitude, which explains why they enter the energy only at subleading order, and why a piecewise constant profile is sufficient to capture the physics.
3. We had previously proved the density estimate (1.13), which is less precise than (2.6). Note in particular that (2.6) does not hold at this level of precision if one replaces g 2 ref (s, ε −1 t) by the simpler profile f 2 0 (ε −1 t). 4. Strictly speaking the function g ref is defined only in the boundary layer Ã ε , so that (2.6) should be interpreted as if g ref were to vanish outside Ã ε . However the estimate there is obviously true thanks to the exponential decay of Ψ GL .
We now turn to the uniform density estimates that follow from the above theorem. Here we can be less precise than before. Indeed, as suggested by the previous discussion, a density deviation of order ε on a length scale of order ε only produces a O(ε 2 ) error in the energy. Thus, using (2.5) we may only rule out local variations of a smaller order than the tangential variations included in (2.4), and for this reason we will compare |Ψ GL | in L ∞ norm only to the simplified profile f 0 (ε −1 τ ), since by (1.17) f 0 (t) − f k (t) = O(ε). Also, the result may be proved only in a region where the density is relatively large, namely in the region (2.7), where bl stands for "boundary layer" and 0 < γ ε ≪ 1 is any quantity such that where a > 0 is a suitably large constant related to the power of | log ε| appearing in (2.5). The inclusion in (2.7) follows from (A.6) below and ensures we are really considering a significant boundary layer: recall that the physically relevant region has a thickness roughly of order ε| log ε|.
Theorem 2.2. Under the assumptions of the previous theorem, it holds (2.9). In particular, for any r ∈ ∂Ω we have (2.10), where C does not depend on r.
Estimate (2.10) solves the original form of Pan's conjecture [Pan,Conjecture 1]. In addition, since f 0 is strictly positive, the stronger estimate (2.9) ensures that Ψ GL does not vanish in the boundary layer (2.7). A physical consequence of the theorem is thus that normal inclusions, such as vortices, may not occur in the surface superconductivity phase. This is very natural in view of the existing knowledge on type-II superconductors but had not been proved previously.
We now return to the question of the phase of the order parameter. Of course, the full phase cannot be estimated because of gauge invariance, but gauge invariant quantities linked to the phase can. One such quantity is the winding number (a.k.a. phase circulation or topological degree) of Ψ GL around the boundary ∂Ω, defined in (2.11); we estimate it in (2.12), in the limit ε → 0.
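The winding number of a complex field around a closed loop can be computed robustly from sampled values; the following small sketch (our own illustration, not from the paper) accumulates wrapped phase increments along the loop and divides by 2π.

```python
import numpy as np

def winding(values):
    """Topological degree of a nowhere-vanishing complex field sampled
    along a closed loop: sum the principal-value phase increments
    between consecutive samples and divide by 2*pi."""
    phases = np.angle(values)
    incr = np.diff(np.concatenate([phases, phases[:1]]))
    incr = (incr + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
    return int(round(incr.sum() / (2.0 * np.pi)))

# Test field of degree 7 with a non-constant modulus (arbitrary example).
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
psi = (1.0 + 0.3 * np.cos(theta)) * np.exp(1j * 7 * theta)
print(winding(psi))  # 7
```

The wrapping step is what makes the computation insensitive to the 2π ambiguity of each sampled phase, as long as the field is sampled finely enough that true increments stay below π.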
Note that the remainder term in (2.12) is much larger than ε −1 |α(k) − α 0 | = O(1), so that the above result does not allow us to estimate corrections due to curvature. We believe that, just as we had to expand the energy to second order to obtain the refined first order results Theorems 2.2 and 2.3, obtaining uniform density estimates and degree estimates at the second order would require expanding the energy to the third order, which goes beyond the scope of the present paper.
We had proved Theorems 2.2 and 2.3 before in the particular, significantly easier, case where Ω is a disc. The next subsection contains a sketch of the proof of the general case, where new ingredients enter, due to the necessity to take into account the non-trivial curvature of the boundary.
Sketch of proof
In the regime of interest to this paper, the GL order parameter is concentrated along the boundary of the sample and the induced magnetic field is extremely close to the applied one. The tools that allow one to prove these facts are well-known and described at length in the monograph [FH3]. We shall thus not elaborate on this and the formal considerations presented in this subsection take as starting point the following effective functional where (s, t) represent boundary coordinates in the original domain Ω, the normal coordinate t having been dilated on scale ε, and ψ can be thought of as Ψ GL (r(s, εt)), i.e., the order parameter restricted to the boundary layer. We denote by k(s) the curvature of the original domain and have set a ε (s, t), with ⌊ · ⌋ standing for the integer part. Note that a specific choice of gauge has been made to obtain (2.13).
Thanks to the methods exposed in [FH3], one can show that the minimization of the above functional gives the full GL energy in units of ε −1 , up to extremely small remainder terms, provided c 0 is chosen large enough. To keep track of the fact that the domain A ε = [0, |∂Ω|] × [0, c 0 | log ε|] corresponds to the unfolded boundary layer of the original domain and ψ to the GL order parameter in boundary coordinates, one should impose periodicity of ψ in the s direction.
Here we shall informally explain the main steps of the proof that where G Aε is the ground state energy associated to (2.13). When k(s) ≡ k is constant (the disc case), one may use the ansatz and recover the functional (1.15). It is then shown in [CR] that the above ansatz is essentially optimal if one chooses α = α(k) and f = f k . An informal sketch of the proof in the case k = 0 is given in Section 3.2 therein. The main insight in the general case is to realize that the above ansatz stays valid locally in s. Indeed, since the terms involving k(s) in (2.13) come multiplied by an ε factor, it is natural to expect variations in s to be weak and the state of the system to be roughly of the form (1.19), directly inspired by (2.17). As usual the upper and lower bound inequalities in (2.16) are proved separately.
Upper bound. To recover the integral in the energy estimate (2.16), we use a Riemann sum over the cell decomposition A ε = ∪ Nε n=1 C n introduced at the beginning of Section 2.1. Indeed, as already suggested in (2.4), a piecewise constant approximation in the s-direction will be sufficient. Our trial state roughly has the form ψ(s, t) = f n (t) e −i(ε −1 α n s − ε δ ε s) , for s n ≤ s ≤ s n+1 . (2.18) Of course, we need to make this function continuous to obtain an admissible trial state, and we do so by small local corrections, described in more detail in Section 4.1. We may then approximate the curvature by its mean value in each cell, making a relative error of order ε 2 per cell. Evaluating the energy of the trial state in this way we obtain an upper bound of the form where the o(1) error is due to the necessary modifications to (2.18) to make it continuous. The crucial point is to be able to control this error by showing that the modification needs not be a large one. This requires a detailed analysis of the k dependence of the relevant quantities E 1D (k), α(k) and f k obtained by minimizing (1.15). Indeed, we prove in Section 3.1 below that and, in a suitable norm, which will allow us to obtain the desired control of the o(1) in (2.19) and conclude the proof by a Riemann sum argument.
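The accuracy of replacing k(s) by its cell-wise mean before summing can be checked numerically on a toy model (our own illustration, with a smooth stand-in E in place of E 1D ): approximating ∫ E(k(s)) ds by the Riemann sum Σ n E(k n ) ℓ over cells of length ℓ produces a total error of order ℓ 2 , i.e. halving the cell size divides the error by roughly four.

```python
import numpy as np

# Toy curvature profile on a boundary of length 2*pi, and a smooth
# stand-in E for the cell energy E^1D(k) (both arbitrary choices).
k = np.cos                      # k(s) = cos(s)
E = lambda x: x**2              # stand-in for E^1D(k)
exact = np.pi                   # integral of cos(s)^2 over [0, 2*pi]

def riemann_with_cell_means(N):
    """Sum_n E(mean of k over cell n) * cell_length, with N equal cells."""
    edges = np.linspace(0.0, 2.0 * np.pi, N + 1)
    ell = edges[1] - edges[0]
    # the mean of cos over [a, b] is (sin b - sin a)/(b - a), exactly
    k_mean = (np.sin(edges[1:]) - np.sin(edges[:-1])) / ell
    return np.sum(E(k_mean) * ell)

err_N = abs(riemann_with_cell_means(40) - exact)
err_2N = abs(riemann_with_cell_means(80) - exact)
print(err_N / err_2N)   # ~ 4: second-order accuracy in the cell size
```

The per-cell error here is O(ℓ 3 ) (controlled by the variance of k within a cell), so the total over O(1/ℓ) cells is O(ℓ 2 ), which is the scaling the Riemann sum argument relies on.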
Lower bound. In view of the argument we use for the upper bound, the natural idea to obtain the corresponding lower bound is to use the strategy for the disc case we developed in [CR] locally in each cell. In the disc case, a classical method of energy decoupling and Stokes' formula lead to the lower bound where we have used the strict positivity of f k to write and the "cost function" is This method is inspired by our previous works on the related Gross-Pitaevskii theory of rotating Bose-Einstein condensates [CRY,CPRY1,CPRY2] (informal summaries may be found in [CPRY3,CPRY4]). Some of the steps leading to (2.20) have also been used before in this context [AH]. The desired lower bound in the disc case follows from (2.20) and the fact that K k is essentially positive for any k. This is proved by carefully exploiting special properties of f k and α(k).
To deal with the general case where the curvature is not constant, we again split the domain A ε into small cells, approximate the curvature by a constant in each cell and use the above strategy locally. A serious new difficulty however comes from the use of Stokes' formula in the derivation of (2.20). We need to reduce the terms produced by Stokes' formula to expressions involving only first order derivatives of the order parameter, using further integration by parts. In the disc case, boundary terms associated with this operation vanish due to the periodicity of ψ in the s variable. When doing the integrations by parts in each cell, using different f k and α(k) in (2.21), the boundary terms do not vanish since we artificially introduce some (small) discontinuity by choosing a cell-dependent profile f kn as reference.
To estimate these boundary terms we proceed as follows: the term at s = s n+1 , made of one part coming from the cell C n and one from the cell C n+1 , is integrated by parts back to become a bulk term in the cell C n . In this sketch we ignore a rather large amount of technical complications and state what is essentially the conclusion of this procedure: and the "modified cost function" is and χ n is a suitable localization function supported in C n with χ n (s n+1 ) = 1 that we use to perform the integration by parts in C n . Note that the dependence of the new cost function on both k n and k n+1 is due to the fact that the original boundary terms at s n+1 that we transform into bulk terms in C n involved both u n and u n+1 . The last step is to prove a bound of the form on the "correction function" I n,n+1 , so that This allows us to conclude that (essentially) K̃ n ≥ 0 by a perturbation of the argument applied to K kn in [CR] and thus concludes the lower bound proof modulo the same Riemann sum argument as in the upper bound part. Note the important fact that the quantity in the l.h.s. of (2.24) is proved to be small relative to f 2 kn (t), including in a region where the latter function is exponentially decaying. This bound requires a thorough analysis of auxiliary functions linked to (1.15) and is in fact a rather strong manifestation of the continuity of this minimization problem as a function of k.
The rest of the paper is organized as follows: Section 3 contains the detailed analysis of the effective, curvature-dependent, 1D problem. The necessary continuity properties as function of the curvature are given in Subsection 3.1 and the analysis of the associated auxiliary functions in Subsection 3.2. The details of the energy upper bound are then presented in Section 4 and the energy lower bound is proved in Section 5. We deduce our other main results in Section 6. Appendix A recalls for the convenience of the reader some material from [CR] that we use throughout the paper.
Effective Problems and Auxiliary Functions
This section is devoted to the analysis of the 1D curvature-dependent reduced functionals whose minimization allows us to reconstruct the leading and sub-leading order of the full GL energy. We shall prove results in two directions: • We carefully analyse the dependence of the 1D variational problems as a function of curvature in Subsection 3.1. Our analysis, in particular the estimate of the subleading order of the GL energy, requires some quantitative control on the variations of the optimal 1D energy, phase and density when the curvature parameter is varied, that is when we move along the boundary layer of the original sample along the transverse direction.
• In our previous paper [CR] we have proved the positivity property of the cost function which is the main ingredient in the proof of the energy lower bound in the case of a disc (constant curvature). As mentioned above, the study of general domains with smooth curvature that we perform here will require to estimate more auxiliary functions, which is the subject of Subsection 3.2.
We shall use as input some key properties of the 1D problem at fixed k that we proved in [CR]. These are recalled in Appendix A below for the convenience of the reader.
Effective 1D functionals
We take for granted the three crucial but standard steps of reduction to the boundary layer, replacement of the vector potential and mapping to boundary coordinates. Our considerations thus start from the following reduced GL functional giving the original energy in units of ε −1 , up to negligible remainders: where k(s) is the curvature of the original domain. We have set and
(3.3), with ⌊ · ⌋ standing for the integer part. The boundary layer in rescaled coordinates is denoted by The effective functionals that we shall be concerned with in this section are obtained by computing the energy (3.1) of certain special states. In particular we have to go beyond the simple ansätze considered so far in the literature, e.g., in [FH3,CR], and obtain the following effective energies:
• 2D functional with definite phase. Inserting the ansatz in (3.1), with g and S respectively real-valued density and phase, we obtain In the particular case where ∂ s S = α ∈ 2πZ we may obtain a simpler functional of the density alone However to capture the next to leading order of (3.1) we do consider a non-constant ∂ s S to accommodate curvature variations, which is in some sense the main novelty of the present paper. In particular, (3.7) does not provide the O(ε) correction to the full GL energy. By contrast, (3.6) does, once minimized over the phase factor S as well as the density g. We will not prove this directly although it follows rather easily from our analysis.
• 1D functional with given curvature and phase. If the curvature k(s) ≡ k is constant (the disc case), the minimization of (3.7) reduces to the 1D problem (3.10) In the sequel we shall denote (3.11) Note that (3.9) includes O(ε) corrections due to curvature. As explained above, our approach is to approximate the curvature of the domain as a piecewise constant function and hence an important ingredient is to study the above 1D problem for different values of k, and prove some continuity properties when k is varied. For k = 0 (the half-plane case, sometimes referred to as the half-cylinder case) we recover the familiar functional (1.11), which has been known to play a crucial role in surface superconductivity physics for a long time (see [FH3,Chapter 14] and references therein).
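For readers who want to experiment with the k = 0 problem, here is a small gradient-flow sketch (entirely our own, not from the paper). It assumes the standard half-plane form E 1D 0,α (f) = ∫ 0 ∞ ( |f′|² + (t+α)² f² − (2f² − f⁴)/(2b) ) dt, fixes α = −0.77 (close to the optimal linear shift −√Θ 0 ), and relaxes a discretized f; for 1 < b < Θ 0 −1 the minimal energy is strictly negative, reflecting the existence of a nontrivial surface superconducting profile.

```python
import numpy as np

b = 1.5                     # fixed, inside (1, 1/Theta_0) since 1/Theta_0 ~ 1.69
alpha = -0.77               # near the optimal shift -sqrt(Theta_0)
T, N = 8.0, 80              # truncated half-line [0, T] and grid resolution
h = T / N
t = (np.arange(N) + 0.5) * h

# Linear part L = -d^2/dt^2 + (t+alpha)^2 - 1/b, Neumann at t = 0
# (cell-centered finite differences, implicit Dirichlet cut at t = T).
L = np.diag(2.0 / h**2 + (t + alpha) ** 2 - 1.0 / b)
off = -np.ones(N - 1) / h**2
L += np.diag(off, 1) + np.diag(off, -1)
L[0, 0] -= 1.0 / h**2       # Neumann correction at t = 0

def energy(f):
    # discrete version of E^1D_{0,alpha}(f)
    return h * (f @ (L @ f) + 0.5 * np.sum(f**4) / b)

# Gradient flow f' = -(L f + f^3 / b), explicit Euler from a generic start.
f = np.exp(-t)
eta = 0.002
for _ in range(40000):
    f -= eta * (L @ f + f**3 / b)

E0 = energy(f)
print(E0)    # strictly negative for this b
```

The quartic term stabilizes the flow, so the iteration settles on the nontrivial minimizer rather than the unstable f ≡ 0 state; sweeping α as well would recover the full minimization defining E 1D 0 .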
In this section we provide details about the minimization of (3.9) that go beyond our previous study [CR,Section 3.1]. We will use the following notation: • Minimizing (3.9) with respect to f at fixed α we get a minimizer f k,α and an energy E 1D (k, α).
• Minimizing the latter with respect to α we get some (a priori non-unique) α(k) and some energy E 1D (k).
The following proposition contains the crucial continuity properties (as a function of k) of these objects: Proposition 3.1 (Dependence on curvature of the 1D minimization problem). Let k, k′ ∈ R be bounded independently of ε and 1 < b < Θ −1 0 ; then the following holds: (3.14) Finally, for all n ∈ N, We first prove (3.13) and (3.14) and explain that these estimates imply the following lemma: Lemma 3.1 (Preliminary estimate on density variations).
Under the assumptions of Proposition 3.1 the estimate (3.16) holds.

Proof of Lemma 3.1. We proceed in three steps.

Step 1. Energy decoupling. We use the strict positivity of f_k, recalled in the appendix, to write any function f on I_ε as f = f_k v.
We can then use the variational equation (A.1) satisfied by f_k to decouple the functional E^{1D}_{k′,α′} in the usual way, originating in [LM]. Namely, we integrate by parts and use the fact that f_k satisfies Neumann boundary conditions. Inserting the resulting identity into the definition of E^{1D}_{k′,α′} and using (A.3), we obtain a decoupled expression valid for any f. In the case α = α(k) we can insert the trial state v ≡ 1 in the above, which gives (3.19) in view of the bounds on f_k recalled in Appendix A and an easy estimate valid for any t ∈ I_ε. Exchanging the roles of k and k′ in (3.19) we obtain the reverse inequality, and hence (3.13) is proved.
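The decoupling computation alluded to here can be sketched as follows (a standard identity going back to [LM]; the vanishing of the boundary terms is due to the Neumann conditions ∂_t f_k(0) = ∂_t f_k(c_0|log ε|) = 0):

```latex
% Substituting f = f_k v and integrating the cross term by parts:
\int_{I_\varepsilon} |\partial_t (f_k v)|^2 \,\mathrm{d}t
  = \int_{I_\varepsilon} f_k^2\, |\partial_t v|^2 \,\mathrm{d}t
  - \int_{I_\varepsilon} f_k\, \big(\partial_t^2 f_k\big)\, v^2 \,\mathrm{d}t ,
```

so that −∂_t² f_k can be replaced via the variational equation (A.1), turning the kinetic energy of f into the reduced kinetic energy of v weighted by f_k² plus potential terms.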
Step 2. Use of the cost function. We now consider the case α = α(k′), f = f_{k′} and bound from below the term on the second line of (3.17). A simple computation gives (3.20). We may now follow closely the procedure of [CR, Section 5.2]: with the potential function F_k defined in (A.8) below, an integration by parts yields the identity (3.21), the boundary terms vanishing thanks to Lemma A.3. We now split the integral into a main part running from 0 to t̄_{k,ε} and a boundary part running from t̄_{k,ε} to c_0|log ε|, where t̄_{k,ε} is defined in (A.12) and (A.13) below. For the second part, it follows from the decay estimates of Lemma A.2 that the bound (3.22) holds. To see this, one can simply adapt the procedure in [CR, Eqs. (5.21)-(5.28)]. The bound (3.22) is in fact easier to derive than the corresponding estimate in [CR] because the decay estimates in Lemma A.2 are stronger than the Agmon estimates we had to use in that case. Details are thus omitted. We turn to the main part of the integral (3.21), which lives in [0, t̄_{k,ε}]. Since F_k is negative we obtain, using Lemma A.4 and the Cauchy-Schwarz inequality, a lower bound valid for any 0 < d_ε ≤ C|log ε|^{−4}. Inserting this bound and (3.22) in (3.17), and using (3.20) and (3.21), yields the lower bound (3.23), where v = f_{k′}/f_k and we have also used the uniform bound (A.2) to estimate the fourth term on the r.h.s. of (3.17).
Step 3. Conclusion. We still have to bound the first term in the second line of (3.23). Inserting the resulting bound in (3.23), using again (A.2) and dropping a positive term, we finally get (3.24), where we have chosen d_ε = |log ε|^{−5}, which is compatible with the requirement 0 < d_ε ≤ C|log ε|^{−4}. Combining with the estimate (3.13) proved in Step 1 concludes the proof of (3.14). To get (3.16) one has to use in addition (A.6), which provides, under the assumptions of the Proposition, a pointwise lower bound on f_k with a constant C independent of ε.
To conclude the proof of Proposition 3.1 it only remains to discuss (3.15). We shall upgrade the estimate (3.16) to better norms, taking advantage of the 1D nature of the problem and using a standard bootstrap argument.
Proof of Proposition 3.1. We write f_{k′} = f_k + (f_{k′} − f_k) and expand the energy using the variational equation (A.1) for f_k, where the O(ε|k − k′||log ε|^∞) error is, as before, due to the replacement of the curvature k ↔ k′. Using the same procedure to expand E^{1D}(k′) = E^{1D}_{k′}[f_{k′}] and combining the result with the above, we obtain (3.25). Next we note that, thanks to (3.14), the relevant supremum is under control, as revealed by an easy computation starting from the expression (3.10). Thus, using (3.16) and the Cauchy-Schwarz inequality, we can estimate the mixed terms. For the term on the third line of (3.25) we notice that, using the growth of the potentials V_k and V_{k′} for large t, the integrand is positive in Ĩ_ε := [c_1(log|log ε|)^{1/2}, c_0|log ε|] for any constant c_1 and ε small enough. On the other hand, combining (3.16) and the pointwise lower bound in (A.6) yields a complementary bound. Splitting the integral into two pieces and using this together with (3.26), we deduce from (3.25) the bound (3.27), and combining with the previous L² bound this gives control of the H¹ norm. Since we work on a 1D interval, the Sobolev inequality implies (3.28). Then, integrating the bound (3.27) from c_1(log|log ε|)^{1/2} to c_0|log ε|, we can extend (3.28) to the whole interval I_ε, which is (3.15) for n = 0. The bounds on the derivatives follow by a standard bootstrap argument, inserting the L^∞ bound in the variational equations.
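The 1D Sobolev inequality invoked above can be taken in the following standard form (the precise variant used is an assumption; any of the usual interval versions suffices): for an interval I with |I| ≥ 1 and u ∈ H¹(I),

```latex
\|u\|_{L^\infty(I)}^2 \;\le\; C\, \|u\|_{L^2(I)}
  \big( \|u\|_{L^2(I)} + \|\partial_t u\|_{L^2(I)} \big),
```

which converts the L² control of f_{k′} − f_k and of its derivative into a uniform bound of the type (3.28).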
Estimates on auxiliary functions
In this Section we collect some useful estimates of other quantities involving the 1D densities as well as the optimal phases. It turns out that we need an estimate of the k-dependence of ∂_t log(f_k), provided in the following

Proposition 3.2 (Estimate of logarithmic derivatives). Let k, k′ ∈ R be bounded independently of ε and 1 < b < Θ_0^{−1}. Then the estimate (3.29) holds.

Proof. Let us denote for short g as in (3.30). We first notice that the estimate is obviously true in the region where f_k ≥ |log ε|^{−M} for any finite M > 0, thanks to (3.15) and (A.7). Let t_* be the unique solution to f_k(t_*) = |log ε|^{−M} (uniqueness follows from the properties of f_k discussed in Proposition A.1). To complete the proof it thus suffices to prove the estimate in the region [t_*, c_0|log ε|]. Notice also that, thanks to (A.6), it must be that t_* → ∞ when ε → 0. At the boundary of the interval [t_*, t_ε] (recall (3.11)), one has g = 0 because of the Neumann boundary conditions. Hence if the supremum of |g| is reached at the boundary there is nothing to prove. Let us then assume that sup_{t∈[t_*,t_ε]} |g| = |g(t_0)| for some t_* < t_0 < t_ε, so that g′(t_0) = 0, i.e., (3.32) holds. Since f_k and f_{k′} are both decreasing in [t_*, t_ε] (see again Proposition A.1), we also obtain a sign condition at t_0. The variational equations satisfied by f_k and f_{k′}, on the other hand, imply (3.33) and (3.34), thanks to (3.14) and (3.15); for the first two terms the estimate (A.7) has also been used for the derivatives f′_k and f′_{k′}. Plugging (3.33) and (3.34) into (3.32), we get the desired estimate in one case, and the result follows immediately. Therefore we can assume the opposite, but we claim that this also implies (3.37), using (A.7) again. Indeed h_k(t_0) < 0, since V_k(t_0) ≫ 1, which follows from t_0 > t_* ≫ 1, and therefore (3.37) holds. An identical argument applies to h_{k′} and thus to the sum. Finally, the explicit expression of g′(t) in combination with (3.37) gives, for t ≥ t_0, a differential inequality which implies the result.
The above estimate is mainly useful in providing bounds on quantities of the form alluded to in Subsection 2.2. As announced there, the main difficulty is that we need to show that I_{k,k′} is small relative to f_k², which is the content of the following corollary. We need the following notation:

[0, t̄_{k,ε}] := {t : f_k(t) ≥ |log ε|³ f_k(t_ε)}. (3.40)

Note that the monotonicity of f_k for large t guarantees that the above set is indeed an interval and that t̄_{k,ε} = t_ε + O(log|log ε|).
and, setting t̄_ε := min{t̄_{k,ε}, t̄_{k′,ε}}, the corresponding bound holds on [0, t̄_ε]. Using the definition of the potential function (A.8) and its properties (A.9), we can rewrite I_{k,k′} as in (3.44). We first observe that a suitable lower bound on f_k holds in the relevant range, as easily follows by combining the monotonicity of f_k for large t with its strict positivity close to the origin (see Proposition A.1 and Lemma A.2 for the details). Hence we can bound the last term on the r.h.s. of (3.44) as in (3.46). For the first term on the r.h.s. of (3.44) we exploit the estimate (3.47), which can be proven by using (3.45), the elementary estimate |1 − e^δ| ≤ |δ|e^{|δ|}, δ ∈ R, and (3.29). Putting together (3.44) with (3.46) and (3.47), we conclude the proof of (3.42). To obtain the second estimate we first note that, since F_k(t) ≤ 0, the positivity of K_k in [0, t̄_{k,ε}] recalled in Lemma A.4 ensures the required sign, and the proof is complete.
Energy Upper Bound
We now turn to the proof of the energy upper bound corresponding to (2.5), namely we prove the following: Proposition 4.1 (Upper bound to the full GL energy).
Let 1 < b < Θ_0^{−1} and ε be small enough. Then the upper bound (4.1) holds, where s ↦ k(s) is the curvature of the boundary ∂Ω as a function of the tangential coordinate.
This result is proven as usual by evaluating the GL energy of a trial state having the expected physical features. As is well-known [FH3], such a trial state should be concentrated along the boundary of the sample, and the induced magnetic field should be chosen close to the applied one. Before entering the heart of the proof, we briefly explain how these considerations allow us to reduce to the proof of an upper bound to the reduced functional (3.1). We define the infimum of the reduced functional under periodic boundary conditions in the tangential direction and prove Proof. This is a standard reduction for which more details may be found in [FH3,Section 14.4.2] and references therein. See also [CR,Sections 4.1 and 5.1]. We provide a sketch of the proof for completeness. We first pick the trial vector potential as where F is the induced vector potential written in a gauge where div F = 0, namely the unique solution of div F = 0, in Ω, curl F = 1, in Ω, F · ν = 0, on ∂Ω.
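As a concrete illustration (not needed for the proof), when Ω = B_R(0) is a disc the solution of the system above is explicit:

```latex
% Unit-field Coulomb gauge on a disc centered at the origin:
\mathbf{F}(\mathbf r) = \tfrac12\,(-y,\, x), \qquad
\nabla\cdot\mathbf F = 0, \qquad
\nabla\times\mathbf F = \partial_x F_2 - \partial_y F_1 = 1, \qquad
\mathbf F\cdot\boldsymbol\nu = 0 \ \text{ on } \partial B_R ,
```

the last property holding because F is everywhere tangent to circles centered at the origin.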
Next we introduce boundary coordinates as described in [FH3, Appendix F]: let γ : [0, |∂Ω|] → ∂Ω be a counterclockwise parametrization of the boundary ∂Ω such that |γ′(ξ)| = 1. The unit vector directed along the inward normal to the boundary at a point γ(ξ) will be denoted by ν(ξ). The curvature k(ξ) is then defined through the identity γ″(ξ) = k(ξ)ν(ξ).
Our trial state will essentially live in the region Ã_ε defined in (4.4), and in such a region we can introduce tubular coordinates (s, εt) (note the rescaling of the normal variable) such that, for any given r ∈ Ã_ε, εt = dist(r, ∂Ω), i.e., r(s, εt) = γ(s) + εt ν(s), (4.5) which can obviously be realized as a diffeomorphism for ε small enough. Hence in the new coordinates (s, t) the boundary layer becomes the rectangle A_ε. We now pick a function ψ(s, t) defined on A_ε, satisfying periodic boundary conditions in the s variable. Using a smooth cut-off function χ(t) with χ(t) ≡ 1 for t ∈ [0, c_0|log ε|] and χ(t) exponentially decreasing for t > c_0|log ε|, we associate to ψ the GL trial state Ψ_trial(r) := ψ(s, t)χ(t).
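For instance (a sanity check, not part of the argument), on a disc of radius R the above objects read:

```latex
\gamma(\xi) = R\,\big(\cos(\xi/R),\, \sin(\xi/R)\big), \qquad
|\gamma'(\xi)| = 1, \qquad
\boldsymbol\nu(\xi) = -\big(\cos(\xi/R),\, \sin(\xi/R)\big),
```

so that γ″(ξ) = −R^{−1}(cos(ξ/R), sin(ξ/R)) = R^{−1} ν(ξ), recovering the constant curvature k ≡ 1/R of the disc case discussed in Section 3.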
Then, with the definition of G Aε as in (3.1), a relatively straightforward computation gives and the desired result follows immediately. Note that this computation uses the gauge invariance of the GL functional, e.g., through [FH3,Lemma F.1.1].
The problem is now reduced to the construction of a proper trial state for G Aε . To capture the O(ε) correction (which depends on curvature) to the leading order of the GL energy (which does not depend explicitly on curvature), we need a more elaborate function than has been considered so far. The construction is detailed in Subsection 4.1 and the computation completing the proof of Proposition 4.1 is given in Subsection 4.2.
The trial state in boundary coordinates
We start by recalling the splitting of the domain A_ε defined in (3.4) into N_ε ∝ ε^{−1} rectangular cells {C_n}_{n=1,...,N_ε}, with boundaries s_n, s_{n+1} in the s-coordinate, with the convention that s_1 = 0 for simplicity. We approximate the curvature k(s) inside each cell by its mean value and set

k_n := (s_{n+1} − s_n)^{−1} ∫_{s_n}^{s_{n+1}} ds k(s). (4.8)

We also denote by

α_n = α(k_n) (4.9)

the optimal phase associated to k_n, obtained by minimizing E^{1D}(k_n, α) with respect to α as in Section 3.1.
The assumption of smoothness of the boundary guarantees that the k_n are bounded uniformly in n. We can then apply Proposition 3.1 to obtain the corresponding continuity estimates for any finite m ∈ N.
Our trial state has the form where δ ε is the number (3.3). The density g and phase factor S are defined as follows: • The density. The modulus of our wave function is constructed to be essentially piecewise constant in the s-direction, with the form f kn (t) in the cell C n . The admissibility of the trial state requires that g be continuous and we thus set: g(s, t) := f kn + χ n , (4.14) where the function χ n satisfies (4.15) the continuity at the s n boundary being ensured by χ n−1 . A simple choice is given by (4.16) Note that |k n − k n+1 | ≤ C|s n − s n+1 | ≤ Cε since the curvature is assumed to be a smooth function of s. Clearly, in view of Proposition 3.1 we can impose the following bounds on χ n : (4.17) so that χ n is indeed only a small correction to the desired density f kn in C n .
• The phase. We let S = S(s) = S_loc(s) + S_glo(s), where S_loc varies locally (on the scale of a cell) and S_glo varies globally (on the scale of the full interval [0, |∂Ω|]) and is chosen to enforce the periodicity of the trial state at the boundary. The term S_loc is the main one, and its s-derivative should be equal to α_n in each cell C_n in order that the evaluation of the energy be naturally connected to the 1D functional we studied before. We define S_loc recursively by setting S_loc(s) = α_1 s in C_1, and S_loc(s) = α_n(s − s_n) + S_loc(s_n) in C_n, n ≥ 2. The factor S_glo ensures that the total phase increment is an integer multiple of 2π, which is required for (4.13) to be periodic in the s-direction and hence to correspond to a single-valued wave function in the original variables. The conditions we impose on S_glo are thus S_glo(s_1) = 0 (4.20) together with (4.21), with ⌊·⌋ standing for the integer part. Thanks to (4.19), we can thus clearly impose that S_glo be regular with a small derivative.

Remark 4.1 (s-dependence of the trial state). The main novelty here is the fact that the density and phase of the trial state have (small) variations on the scale of the cells, which are of size O(ε) in the s-variable. A noteworthy point is that the phase need not have any t-dependence to evaluate the energy at the level of precision we require; this is essentially due to the fact that the t² term in (3.2) comes multiplied by an ε factor. The main point that renders the computation of the energy doable is (4.17), and this is where the analysis of Subsection 3.1 enters heavily.
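The construction of S_glo can be illustrated numerically. The sketch below uses hypothetical slopes α_n and cell boundaries s_n (not the actual minimizers): it builds the continuous piecewise-linear S_loc with slope α_n in cell C_n, then adds a linear correction S_glo so that the total increment over the full interval is ⌊·⌋-rounded to a multiple of 2π, making e^{iS} single-valued:

```python
import math

def build_phase(alphas, boundaries):
    """Given slopes alpha_n on cells [s_n, s_{n+1}] (hypothetical values),
    return the total increment of S_loc over the interval and the constant
    slope of the linear correction S_glo enforcing a 2*pi*Z total increment."""
    L = boundaries[-1] - boundaries[0]
    # Total increment of the continuous piecewise-linear S_loc
    s_loc_total = sum(a * (boundaries[i + 1] - boundaries[i])
                      for i, a in enumerate(alphas))
    # Round the total winding down to an integer multiple of 2*pi
    target = 2 * math.pi * math.floor(s_loc_total / (2 * math.pi))
    glo_slope = (target - s_loc_total) / L  # S_glo(s) = glo_slope * s
    return s_loc_total, glo_slope

alphas = [1.3, 1.1, 0.9, 1.2]           # hypothetical optimal phases alpha_n
boundaries = [0.0, 0.5, 1.0, 1.5, 2.0]  # hypothetical cell boundaries s_n
total, slope = build_phase(alphas, boundaries)
S_end = total + slope * (boundaries[-1] - boundaries[0])
# e^{iS} is single-valued iff S(L) - S(0) is an integer multiple of 2*pi
print(abs(S_end % (2 * math.pi)))  # → 0.0
```

Note that |∂_s S_glo| stays of order one, so the correction is indeed a small perturbation of the locally optimal phase, consistent with the regularity imposed on S_glo above.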
The energy of the trial state
We may now complete the proof of Proposition 4.1 by proving Lemma 4.2 (Upper bound for the boundary functional). With ψ trial given by the preceding construction, it holds The upper bound (4.1) follows from Lemmas 4.1 and 4.2 since ψ trial is periodic in the s-variable and hence an admissible trial state for G Aε .
Proof. As explained in Subsection 3.1, inserting (4.13) into (3.1) yields where E 2D S [g] is defined in (3.6). For clarity we split the estimate of the r.h.s. of the above equation into several steps. We use the shorter notation f n for f kn when this generates no confusion.
Step 1. Approximating the curvature. In view of the continuity of the trial function, the energy is the sum of the energies restricted to each cell. We approximate k(s) by k_n in C_n as announced, and note that since k is regular we have |k(s) − k(s_n)| ≤ Cε in each cell, with a constant C independent of n. We thus obtain (4.24), since each k-dependent term comes multiplied by an ε factor, and then (4.26) up to O(ε³|log ε|^∞) errors.

Step 3. The 1D functional inside each cell. We now have to estimate an essentially 1D functional in each cell, closely related to (3.9):

∫_{C_n} dt ds (1 − εk_n t) [ |∂_t g|² + ε²(1 − εk_n t)^{−2}|∂_s g|² + (t + α_n − ½ εt²k_n)² (1 − εk_n t)^{−2} g² − (1/2b)(2g² − g⁴) ]. (4.27)

We may now expand g according to (4.14) in the above expression and use the variational equation (A.1) to cancel the first-order terms in χ_n. This yields (4.28), where we only have to use (4.17) to obtain the final estimate.
Step 4. Riemann sum approximation. Gathering all the above estimates we obtain (4.29); indeed, (3.13) implies that inside C_n the replacement of E^{1D}(k(s)) by E^{1D}(k_n) is harmless. Recognizing in (4.29) a Riemann sum of N_ε ∝ ε^{−1} terms, and recalling that E^{1D}(k_n) is of order 1 irrespective of n, allows replacing the sum by the corresponding integral. Combining (4.23) and (4.29) we obtain (4.22), which concludes the proof of Lemma 4.2 and hence that of Proposition 4.1, via Lemma 4.1.
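The accuracy of replacing k(s) by its cell-wise mean value can be checked on a toy example. The sketch below uses an arbitrary smooth curvature k(s) = sin s and a stand-in energy E(k) = k² (both hypothetical, chosen only for illustration) and verifies that the cell-wise sum reproduces ∫ E(k(s)) ds with a second-order error in the cell size, consistent with the O(ε²) accuracy claimed for the Riemann sum above:

```python
import math

def mean_value_sum(k, E, L, N):
    """Riemann-type sum using the mean value of k on each of N cells of [0, L]."""
    h = L / N
    total = 0.0
    for n in range(N):
        a, b = n * h, (n + 1) * h
        # mean value of k over the cell, via a fine composite midpoint rule
        m = 40
        kn = sum(k(a + (j + 0.5) * (b - a) / m) for j in range(m)) / m
        total += h * E(kn)
    return total

k = lambda s: math.sin(s)          # stand-in smooth curvature
E = lambda x: x * x                # stand-in cell energy E^{1D}(k)
L = 2 * math.pi
exact = math.pi                    # integral of sin^2 over [0, 2*pi]
err_N  = abs(mean_value_sum(k, E, L, 50)  - exact)
err_2N = abs(mean_value_sum(k, E, L, 100) - exact)
print(err_N / err_2N)              # ≈ 4, i.e. second-order accuracy
```

The second-order rate reflects the cancellation ∫_cell (k − k_n) ds = 0 built into the mean-value choice of k_n, which is precisely why per-cell errors sum to O(ε²) rather than O(ε).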
Energy Lower Bound
The main result proven in this section is the following Proposition 5.1 (Energy lower bound).
Let Ω ⊂ R² be any smooth simply connected domain. For any fixed 1 < b < Θ_0^{−1}, in the limit ε → 0 the lower bound (5.1) holds. We first reduce the problem to the study of decoupled functionals in the boundary layer in Subsection 5.1 and then provide lower bounds for these in Subsection 5.2, which contains the main new ideas of our proof.
Preliminary reductions
As in Section 4, the starting point is a restriction to the boundary layer together with a replacement of the vector potential. We refer to the proof of Lemma 4.1 and in particular (4.5) for the definition of the boundary coordinates.
with ψ(s, t) = Ψ_GL(r(s, εt)) in A_ε, where G_{A_ε} is the boundary functional defined in (3.1).

Proof. A simplified version of the result for disc samples is proven in [CR, Proposition 4.1], where a rougher lower bound is also derived for general domains. This latter result is obtained by dropping the curvature-dependent terms from the energy, which was sufficient for the analysis contained there. Here we need more precision in order to obtain a remainder term of order o(ε). We highlight the main steps and skip most of the technical details. A suitable partition of unity together with the standard Agmon estimates (see [FH1, Section 14.4]) allows us to restrict the integration to the boundary layer, where Ψ_1 is given in terms of Ψ_GL in the form Ψ_1 = f_1 Ψ_GL for some 0 ≤ f_1 ≤ 1, depending only on the distance from the boundary, with support containing the set Ã_ε defined by (4.4) and contained in {r ∈ Ω | dist(r, ∂Ω) ≤ Cε|log ε|} for a possibly large constant C. The constant c_0 in the definition (4.4) of the boundary layer has to be chosen large enough, but the choice of the support of f_1 is otherwise arbitrary, and one can clearly pick f_1 in such a way that f_1 = 1 in Ã_ε and f_1 goes smoothly to 0 outside of it.
The second ingredient of the proof is the replacement of the magnetic potential A_GL; this can be done following the same strategy applied to disc samples in [CR, Eqs. (4.18)-(4.26)], whose estimates are not affected by the dependence of the curvature on s. The crucial properties used there are indeed provided by the Agmon estimates, see below.
The overall prefactor ε −1 is then inherited from the rescaling of the normal coordinate τ = εt in the tubular neighborhood of the boundary. Note here the use of a different convention with respect to both [CR,FH1], where the tangential coordinate s was rescaled too.
We need to rephrase some well-known decay estimates in a form suited to our needs. The Agmon estimates proven in [FH2, Eq. (12.9)] can be translated into analogous bounds applying to ψ(s, t) = Ψ_GL(r(s, εt)) in A_ε: for some constant A > 0 the bound (5.4) holds. We are also going to use two further bounds, (5.5), proven in [FH2, Eqs. (10.21) and (11.50)]. These bounds imply (5.6), by (5.4) and the assumptions on t_1 and t̄. Indeed the factor e^{−At_1/2} = ε^{Ac_0(1+o(1))/2} can be made smaller than any power of ε by taking c_0 large enough.
For the second estimate we use a tangential cut-off function, i.e., a smooth function χ(s) with support in [s_0, 2π], such that 0 ≤ χ ≤ 1, χ(s_0) = 1 and |∂_s χ| ≤ C. Then we proceed as in the estimate above (recall that t_ε := c_0|log ε|), where the main ingredient is again (5.4) together with the assumption on t̄.
We now introduce some reduced energy functionals defined over the cells introduced before; see Subsection 4.1 for the notation. In each cell we write

ψ(s, t) =: u_n(s, t) f_n(t) exp(−i(α_n/ε + δ_ε)s), (5.9)

and introduce the reduced functionals (5.10). Note that in (5.10) the curvature is approximated by its mean value in the cell C_n. These objects play a crucial role in the sequel, as per

Lemma 5.3 (Lower bound in terms of the reduced functionals). With the previous notation, the corresponding lower bound holds.

Proof. With the above cell decomposition, we can estimate the energy cell by cell, with integrand

(1 − εk_n t)^{−2} |(ε∂_s + i a_n(t))ψ|² − (1/2b)(2|ψ|² − |ψ|⁴), (5.15)

and

a_n(t) := −t + ½ εk_n t² + εδ_ε. (5.16)

The remainder term has been estimated as follows: the replacement of k(s) by k_n produces two different error terms which can be estimated separately. In estimating the first error term (5.17), we use the fact that |k(s) − k_n| ≤ Cε inside any given cell. Inside any given cell C_n we can then decouple the functional in the usual way (see [CR, Lemma 5.2] for a statement in this context) to obtain (5.19). The first term in (5.19) is a Riemann sum approximation of the leading-order term in (5.1): using (4.30), we immediately get the desired estimate, which concludes the proof.
Lower bounds to reduced functionals
In view of our previous reductions, the final lower bound (5.1) is a consequence of the following lemma Lemma 5.4 (Lower bound on the reduced functionals).
With the previous notation, we have Proposition 5.1 now follows by a combination of Lemmas 5.1, 5.3 and 5.4 because the two sums in the right-hand side of (5.21) are positive. These terms will prove useful to obtain our density and degree estimates in Section 6.
We can now focus on the proof of Lemma 5.4, which is the core argument of the proof of Proposition 5.1.
Proof of Lemma 5.4. The proof is split into two rather different steps. In the first one we essentially follow the strategy of [CR, Section 5.2] to control the main part of the only potentially negative term in (5.10). This is done locally inside each cell and mainly uses the positivity of the cost function, Lemma A.4. This strategy however involves an application of Stokes' formula and subsequent further integrations by parts, so as to put the resulting terms in a form (involving only first-order derivatives, see (5.26)) in which they can be compared with the kinetic term. This produces unphysical surface terms located on the boundaries of the (rather artificial) cells we have introduced. The second step of the proof consists in controlling those, which requires summing them all and reorganizing the sum in a convenient manner. It is only in this step that we cease working locally inside each cell.
Step 1. Lower bound inside each cell. First, we split the integration into two regions, one where a suitable lower bound on the density f_n holds true and another one yielding only a very small contribution. More precisely we set

R_n := {(s, t) ∈ C_n : f_n(t) ≥ |log ε|³ f_n(t_ε)}. (5.22)

Note that the monotonicity of f_n for large t (see Proposition A.1) guarantees that R_n is in fact a rectangle, as in (5.23). Now we use the potential function F_n(t) defined in (5.24) and compute the corresponding surface term, where we have exploited the vanishing of F_n at t = 0 and t = t_ε. Next we split the r.h.s. of the above expression into an integral over D_n := C_n \ R_n and a rest. In order to compare the first part with the kinetic energy and show that the sum is positive, we have to perform another integration by parts, leading to (5.26). The first term in (5.26) can be bounded using some kinetic energy as in (5.27), where we have used the inequality ab ≤ ½(δa² + δ^{−1}b²) and the negativity of F_n(t) (see Lemma A.3). Combining the above lower bound with (5.10) and (5.14) and dropping the part of the kinetic energy located in R_n, we get (5.28), where K_n is the cost function defined in (A.10), for some given d_ε satisfying (A.11). The third term in (5.28) is bounded from below by a quantity smaller than any power of ε, provided c_0 is chosen large enough. This is shown using the same strategy as in [CR, Eq. (5.21) and the following discussion], and we skip the details for the sake of brevity. For the first term we use the positivity of K_n provided by Lemma A.4. We then conclude with (5.30), and it only remains to bound the first term on its r.h.s. from below. We are not actually able to bound the term coming from cell n separately, so in the next step we put back the sum over cells.
Step 2. Summing and controlling boundary terms. We now conclude the proof of (5.21) by proving the inequality (5.31). Grouping (5.30) and (5.31), and choosing d_ε = 2|log ε|^{−4} (which we are free to do), concludes the proof. We turn to our claim (5.31). Once we have put back the sum over all cells, the idea is to associate the two terms evaluated on the same boundary, which come from two adjacent cells and therefore contain two different densities: this yields (5.32), where we assume without loss of generality that t̄_{n,ε} < t̄_{n+1,ε}. If on the other hand t̄_{n,ε} > t̄_{n+1,ε}, in (5.32) t̄_{n,ε} should be replaced with t̄_{n+1,ε} and in place of R_n one would find the analogous region attached to the (n+1)-th cell. In other words the remainder R_n is inherited from the fact that the decomposition C_n = D_n ∪ R_n clearly depends on n, and the boundary terms in (5.32) do not compensate exactly. However it is clear from what follows that the estimate of such a boundary term is the same in both cases and essentially relies on the second inequality in (5.6): recalling that, for any t ≤ t̄_{n+1,ε}, we have a pointwise bound, where we have used (5.5) and (A.7), i.e., |f′_{n+1}(t)| ≤ |log ε|³ f_{n+1}(t). The identity (5.32) hence yields a boundary expression. Using now the definitions (5.9) of u_n and u_{n+1}, we get

J_t[u_{n+1}](s_n, t) = i G_{n,n+1}(t) G′_{n,n+1}(t) |u_n(s_n, t)|² + G²_{n,n+1}(t) J_t[u_n](s_n, t), (5.37)

where we have set G_{n,n+1} as in (5.38). The l.h.s. of the above expression is real, so that we can take the real part of the identity, obtaining (5.39). To estimate the r.h.s. we integrate by parts back, introducing a suitable cut-off function. Let, for any given n = 1, . . . , N_ε, χ_n(s) be a suitable smooth function, such that

χ_n(s_n) = 1, χ_n(½(s_n + s_{n+1})) = 0, supp(χ_n) ⊂ [s_n, ½(s_n + s_{n+1})], |∂_s χ_n| ≤ C. (5.40)

We can then rewrite the boundary term as in (5.41), where we have set for short I_{n,n+1} as in (5.42) (compare with (3.39)). The first contribution to (5.41) can be cast in a form analogous to (5.27):

∫_{s_n}^{(s_n+s_{n+1})/2} ds χ_n(s) I_{n,n+1}(t̄_{n,ε}) J_s[u_n](s, t̄_{n,ε}).
(5.43) The first term on the r.h.s. can be handled as we did for (5.27), where we use (3.42) with k = k_n, k′ = k_{n+1} and recall that |k_n − k_{n+1}| ≤ Cε to bound I_{n,n+1}. The last term in (5.43) can easily be shown to provide a small correction: using (3.42) again yields |I_{n,n+1}(t̄_{n,ε})| ≤ Cε|log ε|^∞ f_n²(t̄_{n,ε}), so that by (5.6) and (5.23)

∫_{s_n}^{(s_n+s_{n+1})/2} ds χ_n(s) I_{n,n+1}(t̄_{n,ε}) J_s[u_n](s, t̄_{n,ε})

is negligible, where we have estimated the s-derivative of ψ by means of (5.5). Hence, combining (5.43) with (5.44) and (5.46), we can bound (5.41) from below, where we have chosen δ = ε|log ε|^a for some suitably large a > 0 to compensate the |log ε| prefactor (this generates the coefficient |log ε|^{−5}), and used (5.5) to estimate the remaining term. For the second term on the r.h.s. of (5.46) we proceed in the same way, using first (3.42) and the assumption |∂_s χ_n| ≤ C, to get

ε ∫_0^{t̄_{n,ε}} dt ∫_{s_n}^{(s_n+s_{n+1})/2} ds ∂_s χ_n I_{n,n+1} J_t[u_n] ≤ Cε²|log ε|^∞ ∫_{D_n} ds dt f_n² |u_n| |∂_t u_n|
≤ Cε²|log ε|^∞ ∫_{D_n} ds dt [ (1 − εk_n t) f_n² |∂_t u_n|² + |ψ|² ]
≤ Cε²|log ε|^∞ ∫_{D_n} ds dt (1 − εk_n t) f_n² |∂_t u_n|² + O(ε³|log ε|^∞). (5.48)

Collecting all the previous estimates yields our claim (5.31) (recall that there are N_ε ∝ ε^{−1} terms to be summed, whence the final error of order ε²|log ε|^∞).
Density and Degree Estimates
In this section we prove the main results about the behavior of |Ψ GL | close to the boundary of the sample ∂Ω and an estimate of its degree at ∂Ω.
We now focus on the refined density estimate discussed in Theorem 2.2 and the proof of Pan's conjecture. The result is obtained via an adaptation of the arguments used in [CR, Section 5.3], originating in [BBH1]. The general idea is by now rather standard, so we will mainly comment on the changes needed to make those arguments work in the present setting.
We can now turn to the proof of the estimate of the winding number of Ψ GL along ∂Ω.
Proof of Theorem 2.3. Thanks to the positivity of g_ref at t = 0 (see Lemma A.2) and the result discussed above, Ψ_GL never vanishes on ∂Ω and therefore its winding number is well defined. The rest of the proof follows the lines of [CR, Proof of Theorem 2.4]. A minor modification is due to the cell decomposition and the use of a different decoupling in each cell: the analogue of [CR, Lemma 5.4] holds here as well. To see that, we introduce a cut-off function χ(t) with support contained in [0, |log ε|^{−1}] and such that 0 ≤ χ ≤ 1, χ(0) = 1 and |∂_t χ| = O(|log ε|). Then we compute the degree as in (6.9). The three terms on the r.h.s. of that expression are going to be bounded independently. We first observe that, exactly as we derived (6.1), one can also extract from the comparison between the energy upper and lower bounds (see (5.21)) the following estimate:

Σ_{n=1}^{N_ε} ∫_{C_n} ds dt (1 − εk_n t) f_n² [ |∂_t u_n|² + (1 − εk_n t)^{−2} |ε∂_s u_n|² ] ≤ Cε²|log ε|^∞. (6.10)

Then we can estimate the absolute value of the first two terms on the r.h.s. of (6.9) by using the Cauchy-Schwarz inequality as in (6.11), where we have exploited the pointwise lower bound (A.6), which implies f_n(t) ≥ C > 0 for any t ∈ [0, |log ε|^{−1}] and n = 1, . . . , N_ε, to reinstate the density f_n² in the expression. Now the bound f_n|u_n| = |ψ| ≤ 1 together with (6.10) yield the conclusion.

We also use some decay and gradient estimates for the minimizing density. The following is a combination of [CR, Proposition 3.3 and Lemma A.1]:

Lemma A.2 (Useful bounds on f_{k,α}). For any 1 < b < Θ_0^{−1}, k ∈ R and ε sufficiently small, there exist two positive constants c, C > 0 independent of ε such that the stated bounds hold for any t ∈ I_ε.
Moreover there exists a finite constant C such that the corresponding gradient bound holds.

A.2 Positivity of the cost function

A less standard part of our analysis in [CR] is the introduction of a cost function K_k, whose positivity is one of the crucial ingredients of the energy lower bounds in the present paper.