SFI Enhances Therapeutic Efficiency of Gefitinib: An Insight into Reversal of Resistance to Targeted Therapy in Non-small Cell Lung Cancer Cells

Background: The clinical application of EGFR tyrosine kinase inhibitors is always accompanied by inevitable drug resistance. However, the mechanism remains elusive. In the present study, we investigate the involvement of the MAPK/SREBP1 pathway in NSCLC gefitinib resistance and evaluate the synergistic effects of shenqi fuzheng injection (SFI) and gefitinib on NSCLC cells. Methods: To investigate the involvement of the MAPK/SREBP1 pathway in gefitinib resistance, western blotting was used to examine p-MEK, p-ERK and SREBP1 expression in PC-9 and PC-9/GR cells, the MTT assay was used to measure cell proliferation, and the wound healing assay was used to assess cell migration. To detect the cooperative effects of SFI and gefitinib, a clonogenic assay was used to assess cell proliferation, apoptosis was analyzed by flow cytometry, and immunofluorescence was used to detect gefitinib binding to EGFR. Western blotting was used to determine whether SFI regulates resistance to gefitinib via suppression of the MAPK/SREBP1 pathway. Results: Our results showed that the MAPK/SREBP1 pathway mediated resistance to gefitinib in NSCLC cells. The MAPK pathway was found to directly target SREBP1, and inhibition of SREBP1 increased gefitinib sensitivity. In addition, SFI showed cooperative anti-proliferative and pro-apoptotic effects on gefitinib-resistant cells via down-regulation of the MAPK/SREBP1 pathway. Moreover, the combination of SFI and gefitinib enhanced gefitinib binding to EGFR, resulting in the restoration of sensitivity to gefitinib. Conclusions: Taken together, the MAPK/SREBP1 pathway could be regarded as a potential treatment target for overcoming resistance to EGFR-TKIs in NSCLC, and adjuvant therapy with SFI could be a potential therapeutic strategy for treating gefitinib-resistant disease.

Introduction

Lung cancer is one of the most common types of cancer diagnosed worldwide and leads to high mortality [1,2]. Non-small cell lung cancer (NSCLC) accounts for 85% of all cases [3]. Approximately 64% of patients with NSCLC harbor an oncogenic driver mutation, such as epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma 2 viral oncogene homolog (KRAS), which allows targeted therapies that improve survival and safety compared with conventional chemotherapy [4]. EGFR tyrosine kinase inhibitors (EGFR-TKIs) are effective in approximately 70% of NSCLC patients with EGFR-activating mutations [5]. Inevitably, most patients develop acquired resistance after an average of 1 year of treatment with EGFR-TKIs, due to a variety of mechanisms [6]. The possible mechanisms of EGFR-TKI resistance include second-site mutation of the EGFR kinase domain, histological transformation, activation of bypass signaling pathways including the MAPK pathway, and molecular changes that promote cell survival and inhibit apoptosis [7]. For instance, although AZD9291 was developed to overcome secondary resistance mutations, resistant cells can still develop a bypass pathway to reactivate downstream proliferation and survival signals [8]. It is therefore necessary to elucidate these complex and ambiguous resistance mechanisms and to develop alternative therapeutic methods to overcome EGFR-TKI resistance. Our team has demonstrated that a high intracellular level of cholesterol is the leading cause of gefitinib resistance in non-small cell lung cancer [9].
Evidence has shown that the expression of sterol regulatory element binding protein 1 (SREBP1) is high in tumor tissue [10]. SREBP1 is a key transcription factor of lipid homeostasis and activates genes required for the synthesis of cholesterol [11]. Inhibiting SREBP1 expression decreased tumor growth in vivo [12]. Signal transduction pathways such as the MAPK pathway have been identified as regulators of SREBP1 expression in cancer cells [13,14]. However, the role that the MAPK/SREBP1 pathway plays in EGFR-mutant, gefitinib-resistant NSCLC cells has not been clarified. Many traditional Chinese medicines (TCMs) have anticancer effects and can enhance the efficacy of EGFR-TKIs in NSCLC [15,16]. Combining TCM and EGFR-TKIs is therefore a promising anticancer strategy to overcome drug resistance. Shenqi fuzheng injection (SFI) is a modern TCM commonly used in the clinic as an antitumor injection. Two Chinese medicinal herbs, codonopsis and astragali, are its main constituents [17]. It has been reported that the combination of SFI and chemotherapy could improve quality of life, reduce toxicity and exhibit synergistic antitumor effects in NSCLC patients [18,19]. However, the synergistic effects of SFI and gefitinib on gefitinib-resistant NSCLC cells, and the underlying mechanisms, are poorly understood, although such a combination may become a promising strategy to overcome EGFR-TKI resistance. In the current study, the role of the MAPK/SREBP1 pathway in NSCLC with resistance to gefitinib was assessed for the first time, and the potential therapeutic effect of targeting the MAPK/SREBP1 pathway was examined in NSCLC cells.

Cell culture

Human NSCLC H1650 and H1975 cells were obtained from 3D Medicines. Human NSCLC PC-9 and PC-9/GR cells were given by Dr. Zhou Caicun. Cells were cultured in DMEM containing 12% fetal bovine serum (Biological Industries) and penicillin-streptomycin solution (1X) at 37 °C in an atmosphere of 5% CO2.

Cell proliferation assay

The effects of gefitinib and SFI on cell proliferation were measured by MTT assay. Cells in 96-well plates were treated with the indicated drugs. Then, 150 μL of MTT solution was added to each well and incubated at 37 °C for 4 h. Absorbance was determined at 570 nm using a microplate reader (Thermo Multiskan FC, USA).

Drug synergy analysis

The data from the PC-9/GR, H1975 and H1650 cell proliferation assays were analyzed with CompuSyn software (Biosoft, Cambridge, UK) to investigate the synergistic effects of SFI and gefitinib on cells in vitro. The combination index (CI)-isobologram equation was applied as previously described [20].

Clonogenic assay

500 cells per well were seeded into 12-well plates and cultured at 37 °C in DMEM supplemented with no drug, SFI (1:10), 4 μM gefitinib, or the combination. After 14 days, the cells were fixed with 4% paraformaldehyde and stained with crystal violet solution. Finally, the plates were washed, dried at room temperature and photographed.

Wound healing assay

To evaluate the migration ability of cells, a wound healing assay was performed. The cells were seeded in 96-well plates and, once they had grown to 80% confluence, wounds were scratched with a pipette tip across the center of each well. After being washed, the cells were incubated in a humidified atmosphere. The migrated cells were observed under an optical microscope at 0 and 12 h.
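For readers reproducing the dose-response analysis, the MTT readings described above are conventionally converted to percent viability before curve fitting. The following is a minimal sketch of that normalisation; the blank-correction scheme, the function name and the example values are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch: converting raw MTT absorbance readings (570 nm) into
# percent viability for dose-response analysis. The blank-correction
# scheme and the example values are assumptions, not taken from the paper.

def percent_viability(a_treated, a_control, a_blank):
    """Blank-corrected viability of a treated well relative to the untreated control."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical mean absorbances from replicate wells:
print(percent_viability(a_treated=0.62, a_control=1.10, a_blank=0.08))  # ~52.9
```

Apoptosis assay

To test apoptosis, cells were seeded in 6-well plates and incubated with the indicated drugs for 24 h before dissociation and collection. The cells were harvested and re-suspended in 500 µL binding buffer.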
The cells were then stained with 5 µL Annexin V-FITC and propidium iodide for 5-10 min in the dark. Flow cytometry analysis (Becton Dickinson FACSCalibur; Becton-Dickinson, USA) was conducted to detect apoptosis.

Western blotting assay

Cells were treated with the indicated drugs, and total protein was then collected. The protein concentration in the supernatants was determined according to the BCA protein assay kit instructions (Beyotime, P0010). Then, 60 μg of protein was separated by 8%-12% SDS-PAGE and probed with antibodies. Detection was performed with ECL reagents (KeyGen, KGP1121).

Immunofluorescence assay

Cells (1×10⁵ cells/mL) were cultured on dishes and treated with a fluorescently labeled quinazoline skeleton of gefitinib (10 µM) alone or in combination with SFI (1:10) for 3 h. Cells were washed with PBS, then incubated with DAPI (Beyotime, C1006) for a further 20 min. Finally, images were acquired with a ZEISS confocal microscope.

Gefitinib induced cytotoxicity in NSCLC cells with high constitutive levels of MAPK and SREBP1

We chose gefitinib-sensitive PC-9 cells harboring the EGFR exon 19 deletion and gefitinib-resistant PC-9/GR cells for these experiments. To quantify gefitinib cytotoxicity, the MTT assay was performed. The cells were incubated for 24, 48 or 72 h at the indicated doses of gefitinib (1, 3, 9, 27, 81, 243, 729 and 2187 nmol/L). As shown in Fig. 1A, gefitinib significantly inhibited the proliferation of PC-9 cells in a time- and dose-dependent manner, and only slightly inhibited the proliferation of PC-9/GR cells. To test whether MAPK signaling cascades were phosphorylated and activated in cells with resistance to gefitinib, western blotting was conducted. As expected, phosphorylated MEK and ERK were elevated in PC-9/GR compared with PC-9 cells (Fig. 1B). Since it has been reported that the MAPK pathway directly affects the transcriptional activity of SREBP1 [21], we compared the expression of SREBP1 in the two cell lines. Notably, we found higher expression of both flSREBP1 and mSREBP1 in PC-9/GR cells.

Figure 2. Effects of MAPK pathway inhibition on SREBP1 expression, and of SREBP1 inhibition on migration and proliferation, in PC-9/GR cells. (A) Western blotting for p-MEK, MEK, flSREBP1 and mSREBP1 protein expression in PC-9/GR cells after treatment with 10 μM U0126. **p<0.01 or ***p<0.001 compared to the control group. (B) Cells were treated with gradient concentrations of betulin (5 and 10 μM). The closed wound areas of the 5 and 10 μM betulin groups at 12 h were significantly smaller than that of the control group. (C) PC-9/GR cells were treated with 5 μM betulin alone or combined with 30 μM gefitinib for 48 h. Relative cell viability was measured by MTT assay. **p<0.01 or ***p<0.001 compared to the combination group.

Inhibition of SREBP1 reversed gefitinib resistance in NSCLC cells

U0126 is a selective inhibitor of MEK kinases [22] that has been widely used as an inhibitor of the MAPK pathway in diverse fields [23]. To confirm the effects of the MAPK pathway on SREBP1 expression, PC-9/GR cells were treated with 10 μM U0126. We found that both flSREBP1 and mSREBP1 expression were inhibited by U0126. These data indicated that the overexpression of SREBP1 was due to the activated MAPK pathway in PC-9/GR cells (Fig. 2A). Betulin was previously identified as an inhibitor of the SREBP pathway [24]. When cells were treated with gradient concentrations of betulin, the migration of PC-9/GR cells was inhibited (Fig. 2B).
To examine whether inhibiting SREBP1 could reverse resistance to gefitinib, 5 μM betulin was then combined with 30 μM gefitinib. The combined treatment exhibited a greater anti-proliferative effect on PC-9/GR cells than either drug alone (Fig. 2C). These data confirmed that the MAPK/SREBP1 pathway mediated resistance to gefitinib in NSCLC cells.

SFI synergizes with gefitinib to inhibit cell proliferation and clonogenicity in PC-9/GR, H1975 and H1650 cells

Shenqi fuzheng injection (SFI) is extracted from astragali and codonopsis and is generally used to improve the immune function of patients with NSCLC [25]. A previous study reported that astragaloside IV inhibits the accumulation and nuclear translocation of SREBP1 [26]. We presumed that SFI might show synergistic antitumor effects with gefitinib by inhibiting SREBP1 expression in NSCLC cells. To investigate these synergistic effects, PC-9/GR, H1975 and H1650 cells were selected for further study. Cells were treated with different concentrations of gefitinib (1.875, 3.75, 7.5, 15, 30, 60, 90 and 120 μmol/L) or SFI (1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64, 1:128), alone or in combination, for 24, 48 or 72 h. The combined treatment inhibited PC-9/GR, H1975 and H1650 cells significantly more than either drug alone (Fig. 3A, B and C). The results of the CI-isobologram analysis showed that SFI and gefitinib had synergistic effects on the PC-9/GR (CI: 0.179-0.982), H1975 (CI: 0.032-0.582), and H1650 (CI: 0.360-0.834) cells at 72 h. The combination of SFI and gefitinib rendered the PC-9/GR, H1975 and H1650 cells more sensitive to gefitinib. To further evaluate the synergistic anti-cancer effects of gefitinib and SFI, a clonogenic assay was performed. We found that SFI and gefitinib, alone or in combination, inhibited colony formation in all three cell lines. Furthermore, the combined treatment significantly reduced the number of colonies compared to each drug alone (Fig. 3D, E and F).

SFI enhances the effect of gefitinib on inducing apoptosis in NSCLC cells

To test the regulation by SFI of cell apoptosis induced by gefitinib, cells were co-treated with SFI and gefitinib. As shown in Fig. 4A-C, SFI synergized with gefitinib in inducing cell apoptosis. The combined treatment increased the expression of cleaved caspase-3, cleaved caspase-9 and the pro-apoptotic protein Bax, and decreased the expression of the anti-apoptotic protein Bcl-2 (Fig. 4D, E and F).

The synergistic efficacy of SFI and gefitinib is dependent on inhibition of the MAPK/SREBP1 pathway

To elucidate whether a possible mechanism of the synergy is the regulation of the MAPK/SREBP1 pathway, cells were treated with SFI or gefitinib alone, or in combination, for 24 h, and EGFR-related protein levels were then analyzed by western blotting. As shown in Fig. 5A-C, neither gefitinib nor SFI alone had regulatory effects on the levels of phosphorylated EGFR, MEK and ERK. Nevertheless, the combined treatment significantly inhibited p-EGFR, p-MEK and p-ERK expression. We then investigated the expression of flSREBP1 and mSREBP1, and a similar tendency was observed (Fig. 5D, E and F). To sum up, the combination of SFI and gefitinib could be a potential therapeutic strategy for treating gefitinib-resistant NSCLC cells via regulation of the MAPK/SREBP1 pathway.
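The CI-isobologram analysis reported above (performed with CompuSyn, following [20]) rests on the Chou-Talalay combination index. As a rough sketch, assuming the standard Chou-Talalay definition and purely hypothetical doses, the calculation looks like this:

```python
# Sketch of the Chou-Talalay combination index (CI) underlying the
# CI-isobologram analysis above. Dose values here are hypothetical; in
# practice the Dx values come from fitted dose-effect curves (e.g. CompuSyn).

def combination_index(d1, d2, dx1, dx2):
    """CI = d1/Dx1 + d2/Dx2.

    d1, d2:   doses of drugs 1 and 2 used in combination to reach effect x.
    dx1, dx2: doses of each drug alone that reach the same effect x.
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / dx1 + d2 / dx2

# Hypothetical example: the combination reaches the target effect at a
# quarter and an eighth of the single-agent doses, giving CI < 1 (synergy).
print(combination_index(d1=10.0, d2=0.05, dx1=40.0, dx2=0.4))  # 0.375
```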
SFI enhances gefitinib binding to EGFR, resulting in restoration of sensitivity to gefitinib in PC-9/GR and H1975 cells

SREBP1 is a transcription factor that maintains cellular lipid homeostasis by regulating the expression of many enzymes needed for the formation of cholesterol and fatty acids, which are main components of the mammalian cell membrane. EGFR is known to be a plasma membrane-resident protein whose function is modulated by its surrounding lipid environment [27]. To determine whether SFI can change the affinity of gefitinib for EGFR, cells were treated with gefitinib alone or in combination with SFI. The fluorescence intensity represented the binding capacity of gefitinib to EGFR. Enhanced fluorescence intensity was observed by confocal imaging when PC-9/GR and H1975 cells were co-treated with SFI and gefitinib (Fig. 6A, B and C). These results revealed that SFI increased gefitinib affinity in acquired-resistant PC-9/GR and H1975 cells, but not in primary-resistant H1650 cells.

Discussion

Gefitinib was the first EGFR-TKI approved for the therapy of patients with NSCLC [28]. By competitively interacting with the ATP-binding site, gefitinib can inhibit EGFR kinase activity, prevent auto-phosphorylation and suppress downstream signaling. NSCLC patients harboring EGFR mutations demonstrate good responses to gefitinib. Unfortunately, the clinical application of gefitinib is limited by drug resistance arising through many mechanisms, including the secondary T790M mutation, the most common mechanism of gefitinib resistance, which manifests in approximately 60% of patients. Third-generation EGFR-TKIs, such as osimertinib, are designed to overcome the T790M mutation. This new agent significantly increases the overall response rates of patients. However, similar to gefitinib, the application of osimertinib has been accompanied by drug resistance. Several mechanisms of resistance have been identified, including the EGFR C797S mutation, MET amplification and epithelial-mesenchymal transition (EMT) [29]. Even with fourth-generation EGFR-TKIs in clinical research, the complex mechanisms of drug resistance have not been fully revealed. Thus, there is a need to understand the underlying mechanisms and identify key molecular targets so as to develop new strategies to overcome EGFR-TKI resistance. This study is based on our previous work, which showed that high levels of cholesterol in lipid rafts are responsible for gefitinib resistance in NSCLC cells and that the depletion of cholesterol can restore sensitivity to gefitinib. We presumed that the key molecules involved in the regulation of cellular cholesterol levels could be targets for overcoming EGFR-TKI resistance. SREBP1 is a key transcription factor for cholesterol homeostasis, regulating the transcriptional activation of target genes such as 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) and low-density lipoprotein receptor (LDLR) [30]. In the present study, we found higher expression of SREBP1 in PC-9/GR cells compared to PC-9 cells (p<0.001). As documented previously, SREBP1 can promote proliferation, metastasis and EMT in cancer cells by providing membrane building materials [31]. We obtained similar results, in that the suppression of SREBP1 by betulin inhibited the migration of PC-9/GR cells. A further study was conducted to investigate the role of SREBP1 in gefitinib resistance by treating cells with the combination of betulin and gefitinib.
Results showed that inhibition of SREBP1 enhanced sensitivity to gefitinib in NSCLC cells. The Ras-Raf-MEK-ERK mitogen-activated protein kinase (MAPK) pathway governs fundamental physiological processes, such as cell proliferation, metabolism, cell death and survival in NSCLC [32]. It is activated by extracellular ligands, such as epidermal growth factor (EGF), and promotes cell survival by regulating a range of targets including caspase-3, caspase-9, Bcl-xl and Bad [33]. We found that MAPK signaling cascades were phosphorylated and activated in PC-9/GR cells. Previous studies have identified SREBP as a downstream effector of MAPK cascades in prostate cancer and melanoma, but not in NSCLC. To determine the targeting relationship between the MAPK pathway and SREBP1 in gefitinib-resistant NSCLC cells, we pharmacologically inhibited the MAPK pathway with U0126. We discovered that in PC-9/GR cells, the expression of SREBP1 (both flSREBP1 and mSREBP1) was regulated by the MAPK pathway. Our results demonstrated that the MAPK/SREBP1 pathway was responsible for gefitinib resistance in NSCLC cells. SFI is mainly composed of codonopsis and astragali. Previous studies have reported that the inhibitory effects of astragaloside on cancer cells are probably related to its regulation of the MAPK pathway [34,35]. Moreover, astragaloside IV inhibited the accumulation and nuclear translocation of mature SREBP1 [36]. We therefore presumed that SFI might show synergistic antitumor effects with gefitinib by regulating the MAPK/SREBP1 pathway in gefitinib-resistant NSCLC cells. Here, we selected PC-9/GR, H1975 and H1650 cells to detect the synergistic effects of SFI and gefitinib. The results suggested that SFI cooperated with gefitinib to inhibit cell proliferation and clonogenicity and to induce apoptosis, consistent with the study conducted by Xiong et al., who documented that SFI increased chemotherapy sensitivity in cisplatin-resistant NSCLC cells by regulating the cell cycle and initiating mitochondrial apoptosis [37]. We further validated that the underlying mechanism by which SFI reverses gefitinib resistance is the inhibition of the MAPK/SREBP1 pathway. The secondary T790M mutation within the ATP site of EGFR is the most common mechanism of resistance to first-generation EGFR-TKIs in lung cancers [38], as it reduces the binding efficacy of EGFR-TKIs to the EGFR kinase domain [39]. EGFR is a membrane-bound receptor consisting of an extracellular module and an intracellular kinase domain. The activation and function of EGFR are related to membrane lipids [40]. Main components of the membrane, such as phospholipids, fatty acids and cholesterol, have been reported to regulate the drug sensitivity of gefitinib in NSCLC [41,42]. SREBP1 is the key transcription factor playing a central role in lipid metabolism. Based on the result that SFI combined with gefitinib can reduce SREBP1 protein expression, we speculated that SFI might enhance gefitinib binding to EGFR by inhibiting SREBP1, and we therefore detected the binding of gefitinib to EGFR by immunofluorescence. We found that SFI enhanced gefitinib binding to EGFR in acquired-resistant PC-9/GR and H1975 NSCLC cells, but not in primary-resistant H1650 cells. The resistance mechanism of H1650 cells is associated with PTEN deletion, which does not affect gefitinib binding to EGFR [43]. Therefore, it is not surprising that SFI did not exert any effect on the binding of gefitinib to EGFR in H1650 cells.
In summary, our data show for the first time that the MAPK/SREBP1 pathway is responsible for gefitinib resistance in NSCLC cells, and lead to the conclusion that combined treatment with SFI augmented gefitinib's anti-proliferative and pro-apoptotic potential in gefitinib-resistant NSCLC cells through regulation of the MAPK/SREBP1 pathway. Moreover, the combined treatment enhanced gefitinib binding to EGFR, resulting in the restoration of sensitivity to gefitinib in acquired-resistant NSCLC cells. Thus, inhibition of the MAPK/SREBP1 pathway is a promising strategy to overcome gefitinib resistance in NSCLC cells, and adjuvant therapy with SFI could be a potential therapeutic strategy for treating gefitinib-resistant disease.
Designing experiments to study welding processes: using the Taguchi method

Identification of significant process parameters using experiments needs to be carefully formulated, as it can be a resource-demanding process. Using appropriate statistical techniques such as the Taguchi method of factorial design of experiments, the number of necessary experiments can be reduced and the statistical significance of parameters can be safely identified. In the case of linear friction welding it was found that the frequency of oscillation, power input and forging pressure are statistically insignificant for the range of friction pressures studied.

An experiment can be considered as a process seeking to answer one or more carefully formulated questions. It should have carefully described goals, which will be used to choose the appropriate factors and their ranges, as well as the relevant procedure. The factors studied should not be confounded with other variables, with the chosen experimental sequence removing the effects of the uncontrolled variables. Replication of the experiments, together with randomisation, helps to limit bias. While replication ensures a measure of precision, randomisation provides validity of that measure of precision. Using this approach, many evaluations are usually needed to obtain sufficient information, which can be a time-consuming process.

The term "design of experiments" originated around 1920 with Ronald A. Fisher, a British scientist who studied and proposed a more systematic approach in order to maximize the knowledge gained from experimental data [1]. Since then, design of experiments has become an important methodology that maximizes the knowledge gained from experimental data by using a smart positioning of points in the design space. This methodology provides a strong tool to design and analyze experiments; it eliminates redundant observations and reduces the time and resources needed to perform experiments. In general, we can say that a good distribution of points achieved through a DOE (design of experiments) technique will extract as much information as possible from a system, based on as few data points as possible. Ideally, a set of points produced with an appropriate DOE should have a good distribution of input parameter configurations. This equates to having a low correlation between inputs. The DOE approach is important for determining the behavior of the objective function under examination because it is able to identify which factors are most important. The choice of DOE depends mainly on the type of objectives and on the number of variables involved. Usually, only linear or quadratic relations are detected. Fortunately, however, higher-order interactions are rarely important, and for most purposes it is only necessary to evaluate the main effects of each variable. This can be done with just a fraction of the runs, using only a "high" and "low" setting for each factor and some center points when necessary. Therefore, DOE statistical techniques are especially useful in complex physical processes, such as welding. These processes usually involve a large number of interrelated parameters, ranging from applied pressure to operating temperature and material properties, which are related by complex laws not yet described to the extent necessary for successful industrial implementation of such processes.
Friction Welding Experiments

Linear friction welding is a solid-state process for joining materials, either metals or plastics, together [Fig. 1] through intimate contact of a plasticised interface, which is generated by frictional heat produced as one component is moved under pressure in a direct reciprocating mode relative to another. The process is observed to have four distinct phases, which have been previously described [2] in some detail, and are discussed only briefly here for completeness.

Phase I, the Initial Phase. In the initial phase the two workpieces are moving under pressure in a linear reciprocating manner. Heat is generated from solid friction, with the friction coefficient between the oscillating workpieces not exceeding unity, but increasing throughout this phase. The true surface contact area increases throughout this phase due to wear and the thermal softening effects of movement. No weld penetration is experienced at this stage. This phase is critical for the rest of the process to proceed, for if insufficient heat is generated the next phase will not follow.

Phase II, the Transition Phase. Large wear particles are expelled from the rubbing interface. The heat affected zone expands from the asperities into the bulk of the material until phase III is reached. The true contact area is considered to be 100% of the cross-sectional area, and the plasticised layer formed between the two rubbing surfaces cannot support the axial load, thus deforming permanently. Macroscopically, under the naked eye, in certain materials such as Ti6Al4V (numbers indicate wt.%), red hot spots appear at the interface, which extend with time until they cover the whole of the rubbing interface, and are accompanied by an exothermic reaction with oxygen.

Phase III, the Equilibrium Phase. Axial shortening begins to register as plasticised matter is expelled into the upset. Material in the heat affected zone that has yielded, from the friction pressure exerted on it and the high temperature reached, moves out of the rubbing interface aided by the oscillatory movement. This forms a flash, which may take different shapes depending on the material extruded. The material never reaches melting conditions at the interface, as experimental data have shown. But even if such temperatures were reached, the molten material would have been expelled out of the interface by the friction pressure with the aid of the workpiece movement, as molten material cannot withstand any load. Macroscopically, under the naked eye, in certain materials such as Ti6Al4V there is an exothermic reaction with oxygen, as was observed by the author in all the experiments performed with this material [3]. In the case of Ti6Al4V, which is studied in this paper, the extruded material from the two specimens forms a single joined flash, and not separate flashes for each specimen [4]. This indicates that the plastic material at the rubbing interface has been joined together at this stage.

Phase IV, the Deceleration Phase. When the desired upset is reached the two materials are brought to rest very rapidly (in less than 0.1 s), and forging pressure may be applied. This last phase is thought to be of importance by specialists in the friction welding industry, and is used to consolidate the weld.
From the foregoing description it is evident that there is a power input limit below which welding is not possible. If operating below this limit, either by using a smaller amplitude of oscillation, rubbing at a lower frequency of oscillation, or applying a smaller friction pressure than necessary, the workpieces will never reach conditions which will produce a well-defined flash and subsequently join to form sound welds.

Linear friction welding is a joining process aimed at extending the current applications of rotary friction welding to non-axisymmetric metal and plastic components. However, the two processes differ considerably in the mode of heat input and the stress field imposed on the plasticised layer, and therefore existing rotary friction welding models are not directly applicable to linear friction welding. The more uniform interfacial energy generation present in linear friction welding may account for higher-integrity welds. Moreover, much of the research in rotary friction welding is of an empirical nature, which cannot be used to predict weldability and optimum welding parameters for new materials with linear friction welding.

Designing Experiments

In the parametric design of the experiments the fractional factorial method [5] is used to assess the effect of a number of factors on the impact strength of linear friction welded Ti6Al4V joints. If the full factorial experiment method were used, four experiments plus replications would be required. The fractional factorial experimental design enables the reduction of the number of experiments by using an adequately chosen fraction of the treatment combinations required for the complete factorial experiment, and the study of the combined effect of individual factors. The combined effect of the individual factors has to be less significant than that of the main factors. Therefore, some understanding of the influence of both the main factors and the interactive factors is required for aliasing to be carried out. To avoid invalid results, the interactive effect should not be aliased with less significant main effects.

In designing a fractional factorial experiment, care must be taken so that all factors carry equal weight at all levels that they may take, and orthogonal arrays [6][7][8][9] are used to that effect. In these arrays, each factor is equally influenced by the effects of the other factors under study.

In [5] a number of orthogonal arrays are given for different experiments, and an L4 array is shown [Fig. 2]. The first row indicates the number of factors which will be tested, which is 3 in this case. The first column shows the number of experiments that must be completed for the fractional factorial experiment, in this case four. The other columns show the levels of each factor. In the first experiment of this array all factors are set to level 1, and similarly for the other rows.

Each factor in an orthogonal array has a degree of freedom associated with it, which prescribes the orthogonal array selected. The degrees of freedom of each factor equal the number of levels that it takes minus one. For an interaction factor, the degrees of freedom equal the product of the degrees of freedom of the factors that compose it. The sum of the degrees of freedom of the individual factors studied must be at most equal to the degrees of freedom of the orthogonal array. The degrees of freedom of the array equal the number of experiments performed minus one.
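As a concrete illustration of the array and the degree-of-freedom bookkeeping just described, the following sketch encodes the standard Taguchi L4 (2^3) layout; the row ordering follows Taguchi's published tables.

```python
# The standard Taguchi L4 (2^3) orthogonal array: four experiments (rows),
# three two-level columns. Column 3 can be reserved for the interaction
# between the factors assigned to columns 1 and 2.
from itertools import combinations

L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

# Degree-of-freedom bookkeeping as described above:
dof_factor = 2 - 1        # each two-level factor contributes one
dof_array = len(L4) - 1   # four experiments give three
assert 3 * dof_factor <= dof_array

# Orthogonality: every pair of columns contains each level combination
# (1,1), (1,2), (2,1), (2,2) exactly once.
for a, b in combinations(range(3), 2):
    assert len({(row[a], row[b]) for row in L4}) == 4
```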
To study the interaction between factors, the orthogonal array can be used to include this interaction as a separate factor. The number of factors under investigation will then have to be reduced, so as to retain the correct number of degrees of freedom. A linear graph [Fig. 2] is used to maintain the orthogonality in the array. It corresponds to columns in the orthogonal array, and on each line the factors investigated for association are shown. For example, column 3 is reserved for the interaction between factors 1 and 2.

Analysis of Designed Experiments

Once the linear friction welding experiments with Ti6Al4V have been completed [Table 1], the results are analysed by calculating the signal-to-noise (S/N) ratio for each factor and each level in these experiments. This ratio is the reciprocal of the variance of the measurement error, and is maximal for the combination of parameter levels that has the minimum error variance. Calculating the average S/N value for each factor at each level and plotting these averages reveals the effect of the factor on the variable used to assess the experiments. In addition, analysis of variance (ANOVA) techniques can be used to study the fractional factorial experiments and identify the significance of each factor.

The linear friction welding process is controlled by a number of parameters such as the frequency of oscillation, the amplitude of oscillation and the friction pressure [2]. These parameters directly affect the energy input into the process, as the frictional heat generated by the oscillating process is directly related to them. Therefore, a joint may be produced depending on the values of the parameters used, with its weld strength affected by these as well as by the forging force applied at the end of the process.

The variable used to assess the linear friction welding experiments in this investigation was the impact strength of the produced joint using a Charpy impact test, as the objective was to create joints with a high impact strength. The Charpy impact test is a standardized high strain-rate test which determines the amount of energy absorbed by a notched material during fracture. This absorbed energy is an estimate of the material's or the joint's toughness. It is a test widely used by industry. Friction welding produces joints whose tensile strength is almost equal to that of the parent material.

Joints produced were assessed using the Charpy impact test on an Avery impact test machine. The apparatus consists of a pendulum hammer swinging at a notched sample of the welded joint. The energy transferred to the specimen can be inferred by comparing the height of the hammer before and after the fracture. The notch in the specimen needs to be of regular dimensions and geometry.

A measure of experimental error is necessary to estimate the significance of the results. In large factorial experiments, estimates of higher-order interactions can be obtained. These estimates are actually estimates of experimental error, as it is assumed that higher-order interactions are physically impossible. In small factorial designs, as in the one used here, there are no estimates of higher-order interactions, and the estimate is based on past experience.
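As a worked sketch of this analysis, the snippet below computes S/N ratios and per-level factor averages for an L4 experiment. The paper does not state which S/N formulation was used; the larger-the-better form is assumed here because impact strength is to be maximised, and the response values are invented.

```python
# Hedged sketch of the S/N analysis described above, assuming Taguchi's
# "larger-the-better" form (appropriate when maximising impact strength):
#     S/N = -10 * log10( (1/n) * sum_i 1 / y_i^2 )
# The replicate responses below are invented for illustration.
import math

def sn_larger_is_better(ys):
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Two replicates of the response (e.g. Charpy impact energy, J) per L4 run.
responses = [[18.0, 20.0], [22.0, 21.0], [19.0, 17.0], [23.0, 24.0]]
sn = [sn_larger_is_better(ys) for ys in responses]

# Effect of the factor in column 1 of the L4 array: runs 1-2 sit at level 1
# and runs 3-4 at level 2, so the factor effect is seen by averaging the
# S/N values within each group and comparing the two means.
level1_avg = (sn[0] + sn[1]) / 2
level2_avg = (sn[2] + sn[3]) / 2
print(level1_avg, level2_avg)
```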
The analysis of these linear friction welding experiments with Ti6Al4V is valid for the material studied and the operating range of the process parameters studied. Different materials will probably produce different results and demonstrate different parameter sensitivities. In addition to this complication, it is possible that process parameters outside the envelope used in this experimental design may show different parameter sensitivities, although care has been taken to keep this possibility to a minimum.

Parametric Investigation

Using the Taguchi method of designing fractional factorial experiments, the effects of these parameters were explored using two orthogonal L4 arrays. The effect of the individual parameters is studied, as well as the combined effect that they may have on the strength of the weld. The L4 array is used in these designed experiments, where two factors are varied over two levels each. Although in this case this design does not reduce the number of experiments performed, it should identify any statistically significant factors and distinguish any combined effect they have on the process. Analysis of the results indicates the effect of every factor on the parameter used for assessment, as well as the effect of the combined interaction of the two factors. All experiments were replicated, as is common practice for the validation of results. Once the experiments had been completed, the results were analysed by calculating the signal-to-noise (S/N) ratio for each factor and each level. Calculating the average S/N value for each factor and plotting it for each level reveals the effect of the factor on the variable used to assess these experiments.

Effect of frequency of oscillation and friction pressure

At a constant amplitude of oscillation of 0.92 mm, eight linear friction welds of Ti6Al4V were produced at frequencies of oscillation of 50 and 100 Hz, and at two friction pressures of 32 and 39 MPa [Table 1]. As the mechanism applying the friction pressure produced a varying pressure during the process, the friction pressure value used was the one achieved at the end of the process, as it is more representative of the conditions that exist at the end of the process and could govern the impact strength of the joint. The initial friction pressures applied were higher, by such an amount as to take into account the reduction in frictional force due to the axial shortening produced during the subsequent run.

Analysing the results [Table 2] showed that the parameters studied in this experiment, i.e. frequency of oscillation and friction pressure, were not statistically significant, as their variance ratio was below 2. As expected, the combined effect does not affect the weld integrity either. Increasing the friction pressure did not produce statistically stronger welds. It should be noted that the range of the friction pressures used in this set of designed experiments was limited by the operational characteristics of the linear friction welding rig.
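The variance ratio used as the significance criterion above can be obtained from a simple ANOVA decomposition. The sketch below shows the calculation for one two-level factor in a replicated L4 design; the data are invented, and estimating error from within-run replicate scatter is one of several possible choices.

```python
# Sketch of the variance-ratio (F-ratio) criterion used above: a factor is
# treated as statistically insignificant when its ratio falls below 2.
# Replicate data are invented; error is estimated from within-run scatter.
responses = [[18.0, 20.0], [22.0, 21.0], [19.0, 17.0], [23.0, 24.0]]
n_total = sum(len(r) for r in responses)
grand_mean = sum(y for r in responses for y in r) / n_total

# Factor assigned to column 1 of the L4 array: runs 1-2 at level 1,
# runs 3-4 at level 2 (four observations per level).
level1 = [y for r in responses[:2] for y in r]
level2 = [y for r in responses[2:] for y in r]
m1, m2 = sum(level1) / 4, sum(level2) / 4
ss_factor = 4 * (m1 - grand_mean) ** 2 + 4 * (m2 - grand_mean) ** 2  # dof = 1

# Error sum of squares from replicate scatter within each run (dof = 4).
ss_error = sum((y - sum(r) / len(r)) ** 2 for r in responses for y in r)

variance_ratio = (ss_factor / 1) / (ss_error / 4)
print(variance_ratio)  # here ~0.4, well below the threshold of 2
```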
A large number of experiments performed later, following a wide range of functional improvements made to the welding rig, where the full allowable range of friction pressure was used, showed an effect of friction pressure on the impact strength of the joints. This emphasizes the need to select an appropriately wide range of parameters for the analysis to be representative of the process. Those experiments are not included in this work, as the aim of this paper is to demonstrate the use of fractional factorial experiments and not to present an extensive list of experimental data.

Effect of power input and forging pressure

Power input and the forging pressure applied at the end of the process were examined using the same orthogonal array as before, at an amplitude of oscillation of 3 mm. The power input parameter was changed by altering the friction pressure, and the forging pressure was investigated at two levels, one the same as the final friction pressure and the other at 80 MPa [Table 3]. The low level of forging force was effected by not applying any additional pressure at the end of the process, but leaving the welded specimens in the chucks under the friction pressure.

As can be seen [Table 4], the specific power input parameter and the forging pressure, as well as the combined interaction between the two parameters, are not statistically significant, as their variance ratios are below 2. It should be noted that the range of the friction pressures used was limited by the operational characteristics of the rig and the design of experiment procedure. As stated earlier, experimental results where the full allowable range of friction pressure was used showed a significant effect of friction pressure [3].

Conclusions

A number of linear friction welding parameters for Ti6Al4V were studied using the Taguchi method of designing fractional factorial experiments to identify their significance. It was found that:
• The frequency of oscillation and friction pressure were not statistically significant for the range of friction pressures studied.
• Power input to the joint and forging pressure were not statistically significant either.

Figure 1. Schematic depiction of the four distinct phases that are incorporated in the linear friction welding process.
Figure 2. L4 orthogonal array used for fractional factorial experimental designs, and the linear graph used to manipulate it [5].
‘Aye’ or ‘No’? Speech-level Sentiment Analysis of Hansard UK Parliamentary Debate Transcripts

Transcripts of UK parliamentary debates provide access to the opinions of politicians towards many important topics, but due to the large quantity of textual data and the specialised language used, they are not straightforward for human readers to process. We apply opinion mining methods to these transcripts to classify the sentiment polarity of speakers as being either positive or negative towards the motions proposed in the debates. We compare classification performance on a novel corpus using both manually annotated sentiment labels and labels derived from the speakers' votes ('aye' or 'no'). We introduce a two-step classification model, and evaluate the performance of both one- and two-step models, as well as the use of a range of textual and contextual features. Results suggest that textual features are more indicative of manually annotated class labels. Conversely, in addition to boosting performance, contextual metadata features are particularly indicative of vote labels. Use of the two-step debate model results in performance gains and appears to capture some of the complexity of the debate format. Optimum performance on this data is achieved using all features to train a multi-layer neural network, indicating that such models may be most able to exploit the relationships between textual and contextual cues in parliamentary debate speeches.

Introduction

In the United Kingdom, transcripts of parliamentary debates (known as Hansard) are publicly and freely available. This provides access to a wealth of information concerning the opinions and attitudes of Members of Parliament (MPs) and their parties towards arguably the most important topics facing society, as well as potential insights into the parliamentary democratic process. However, the large quantity of recorded material in Hansard, combined with the esoteric speaking style and opaque procedural language of Parliament, makes manual retrieval of information from these data a daunting task for the non-expert citizen. Despite the fact that opinion mining has been one of the most active areas of research in natural language processing (NLP), and a widespread need for political information has been cited as a motivation for the development of opinion mining technologies (Pang and Lee, 2008), automatic analysis of the positions taken by speakers in parliamentary debates has received relatively little attention from researchers.

Sentiment analysis is the task of automatically identifying the polarity (positive or negative) of the position taken by the holder of an opinion towards a target, such as an organization, a policy, a movement, or a product. We apply sentiment analysis methods to speeches made in the House of Commons of the UK Parliament to classify their sentiment polarity as being either positive (in support) or negative (in opposition) towards the target of each speech; that is, the motion proposed in the debate in question. Prior work on this task has relied on the use of MPs' division votes as sentiment polarity labels, under the assumption that these votes represent the speakers' opinions towards the subjects under discussion: votes for 'Aye' (that the motion be approved) or 'No' (that it be negated) are presumed to indicate positive and negative sentiment, respectively.
However, as MP voting is to a large extent constrained by party affiliations, with members often under pressure to follow the party whip regardless of their personal opinion (Searing, 1994; Norton, 1997), we perform sentiment analysis experiments on the Hansard Debates with Sentiment Tags (HanDeSeT) corpus, which features manually annotated sentiment labels in addition to those extracted from division votes. In Parliament, the tabled motions under debate, by their nature, either approve of or oppose some piece of legislation or state of affairs, and hence also display sentiment polarity towards those targets. We therefore present a two-stage sentiment analysis model in which, first, the sentiment of the motion towards the subject of the debate is determined, before sentiment analysis is carried out on the corresponding speeches.

Our contributions

In this paper, we compare the use of speakers' division votes with manually annotated polarity labels for the evaluation of sentiment analysis systems, and introduce a two-step sentiment analysis model for parliamentary debates in which the sentiment of both speeches and motions is classified. For the two-step model, we also propose an alternative method for determining motion sentiment that infers polarity labels from the relationship to the Government of the speakers who introduce the motions. Additionally, we evaluate the use of n-gram textual features and a range of contextual features extracted from metadata related to the speakers.

Background: UK parliamentary debates

The UK Parliament consists of two chambers: the House of Commons and the House of Lords. The former is the superior legislative chamber, the target of most public and media attention, and the focus of this study. Each debate in the House of Commons begins with a motion proposed by an MP. Following this, MPs may speak, when invited, any number of times during a debate. Each speaking turn may be comprised of a short statement or question, or a longer passage, divided into paragraphs in the transcript. At any time during a debate, but most typically at the end, a division may be called. At this point MPs physically file through one of two division lobbies to register their vote: 'aye' to support, and 'no' to oppose the motion in question. Labels extracted from the records of these divisions are referred to in this paper as division vote sentiment labels.

Related Work

Sentiment analysis has attracted substantial interest in NLP research, where the majority of work focusses on determining people's opinions in product reviews (e.g., Pang et al. (2002), Mukherjee and Bhattacharyya (2012)) and social media posts (e.g., Pak and Paroubek (2010), Rosenthal et al. (2017)). In the political speech domain, several papers address the application of opinion classification to debates from the United States Congress. For example, Thomas et al. (2006) use a supervised classification model (support vector machine) to determine whether or not individual speech segments support a piece of legislation, using contextual discourse information to obtain enhanced performance, while Burfoot et al. (2011) apply a collective classification approach to Congressional speeches, using the speakers' voting records to obtain sentiment labels. In Europe, Grijzenhout et al. (2010) perform sentiment analysis at the paragraph level on manually labelled Dutch parliamentary transcripts. For a related but somewhat different task on UK Hansard transcripts, Duthie et al.
(2016) present a manually annotated corpus for the detection of speakers' positions, not towards the subject of debate, but rather towards other members' 'ethos', which they define as the 'character' of the target, who is another participant in the debate. For sentiment analysis on this domain, Onyimadu et al. (2013) use a sentiment lexicon to identify opinionated text in House of Commons debates for ternary (positive, negative, neutral) classification at the sentence level, reporting an average accuracy of 43% agreement between a classifier's predictions and the manually applied gold standard labels. The most similar approach to ours is that of Salah (2014), which compares text classification using machine learning techniques with the use of sentiment lexicons to predict 'speaker attitude' on the concatenated speeches of MPs in the House of Commons, again relying on members' division votes as labels. We challenge the assumption that these votes reflect speaker sentiment by comparing these labels with those of human annotators. We also extend their use of party affiliation information, including other meta information about the debate participants, and examine whether these features are indeed predictive of sentiment as expressed in the speeches, or simply of likely voting outcome.

Data: the HanDeSeT corpus

We use the Hansard Debates with Sentiment Tags (HanDeSeT) corpus. The corpus consists of 1251 units, each of which is composed of a parliamentary speech of up to five utterances and an associated motion. Content inserted by the Hansard reporters, certain set procedural phrases, and quotations have all been removed from the text. Each speech has two binary (1 for positive or 0 for negative) sentiment polarity labels, produced with different labelling methods:
1. A speaker-vote label extracted from the division associated with the corresponding debate: 'aye' = 1, 'no' = 0.
2. A manually annotated gold standard label.
All motions have also been assigned two sentiment labels:
1. A label derived from the party affiliation of the MP who proposes the motion: 1 if they are a member of the governing party or coalition at the time of the debate, 0 otherwise.
2. A manually annotated gold standard label.
In addition, the following metadata is included with each unit: debate ID, speaker party affiliation, and motion party affiliation. A detailed description of the corpus and annotation process can be found in Abercrombie and Batista-Navarro (2018).

Debate speech sentiment models

The motions tabled in these parliamentary debates express either positive or negative sentiment towards a piece of legislation, policy, or state of affairs, and members of the chamber speak either in support of, or in opposition to, the motion. For example, a motion may call on members to approve or reject a Bill, Act or Paper, or express approval or condemnation of a policy or situation. The sentiment polarity of the motion under debate may therefore have a significant effect on the language used by a speaker when either supporting or opposing the motion. For example, for motions that commend the Government, speeches which support the motion are likely to incorporate positive language, while those that oppose the motion will tend to include typically negative language. On the other hand, for motions that oppose Government policy, speeches favourable to the motion are themselves also likely to use typically negative language towards the Government, and unfavourable speeches will conversely use positive language, as in Example 1.
Figure 1: Three classification models for sentiment analysis of parliamentary debates. In model 1, all speeches are classified together, while in models 2a and 2b, speeches given in response to positive and negative motions are classified separately.

(1) Speech: I do not support the regulations. The Government's rhetoric and practice do not add up. If I may paraphrase a well-respected authority, that which we call a tax rise by any other name would sting as hard, and that would be the effect of the regulations.

In this case, the motion expresses negative sentiment towards a piece of legislation, and the speech (extract) uses negative language to communicate positive sentiment towards the motion. This 'double negative' effect presents complications for the learning of textual classification features, where lexical features that may be indicative of sentiment can differ in their polarity depending on the sentiment of the motion to which they respond. We therefore propose two models for comparison, as well as two different ways of classifying debate motions (see Figure 1):
1. Model 1: A one-step Speech sentiment analysis model, in which all units in the corpus are passed to the classifier simultaneously.
2. Model 2: A two-step Motion-speech sentiment analysis model, in which the corpus is first divided into those units with motions expressing positive, and those expressing negative, sentiment polarity, before these two groups are classified separately. For this model, we also compare two methods of applying sentiment labels to the motions:
(a) 2a: Sentiment classification using n-gram text features, learned from manually annotated labels.
(b) 2b: Under the assumption that motions proposed by the Government are positive, and those proposed by other parties are negative, motions are divided by the party affiliation of the MP that proposes them: positive if they are a member of the governing party or coalition, negative if not.

Experiments

We perform experiments to compare sentiment classification performance using combinations of the following (a sketch of this setup appears after the list):
• Two machine learning models:
- Support Vector Machines (SVM): linear support vector classification.
- Multi-layer Perceptron (MLP): a neural network with a hidden layer of 100 units, using rectified linear unit (ReLU) activation, L-BFGS optimization and a maximum of 200 epochs.
• Supervised learning of sentiment polarity classes using both manually annotated labels and division vote labels.
• The two debate models: the one-step Speech sentiment model and the two-step Motion-speech sentiment model. For the Motion-speech model, we also compare classification of the motions using n-gram textual features with labelling them simply according to the party affiliation of the MP who proposes the motion: positive if they are a member of the governing party or coalition, negative otherwise.
• The following learning features:
- Textual features extracted from lowercased, tokenized motions and speeches:
  * N-grams: all uni-, bi-, and trigrams, and combinations of these.
- Contextual metadata features for speech classification:
  * Speaker party affiliation. Intuition suggests that a speaker's party membership should be a strong indicator of sentiment towards many topics, and Salah (2014) showed this to be the case, at least as far as correlation with speakers' division votes goes.
  * Debate ID number. As there are usually multiple speeches in each debate, and MPs will often express similar sentiments to members of their own party in a particular debate, we also follow Salah (2014) in including this feature to capture possible correlations between MPs' speech and voting behaviour.
  * Motion party affiliation. Because MPs are likely to be more or less supportive of a motion depending on who proposes it, we add that Member's party as a further contextual feature.
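The paper does not include an implementation, but the setup above maps naturally onto scikit-learn. The following is a minimal sketch under that assumption; the column names ('speech', 'speaker_party', 'debate_id', 'motion_party', 'motion_is_gov', 'label') and the count-based n-gram encoding are hypothetical choices, while the classifier settings mirror those listed above.

```python
# Minimal sketch of the experimental setup, assuming scikit-learn and a
# pandas DataFrame of corpus units. Column names are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

def make_model(classifier):
    features = ColumnTransformer([
        # Textual features: all uni-, bi- and trigrams from the speech text.
        ("ngrams", CountVectorizer(lowercase=True, ngram_range=(1, 3)), "speech"),
        # Contextual metadata features, one-hot encoded.
        ("meta", OneHotEncoder(handle_unknown="ignore"),
         ["speaker_party", "debate_id", "motion_party"]),
    ])
    return make_pipeline(features, classifier)

svm = make_model(LinearSVC())
mlp = make_model(MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                               solver="lbfgs", max_iter=200))

def evaluate_one_step(df, model):
    """Model 1: classify all speech units together (10-fold CV)."""
    return cross_val_score(model, df, df["label"], cv=10, scoring="f1").mean()

def evaluate_two_step(df, model):
    """Model 2b: split units by the proposing MP's relationship to the
    Government, then classify each group's speeches separately."""
    for is_gov, group in df.groupby("motion_is_gov"):
        score = cross_val_score(model, group, group["label"],
                                cv=10, scoring="f1").mean()
        print("government motion" if is_gov else "opposition motion", score)
```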
Results & Discussion

We present the results of classification using 10-fold cross-validation. Due to slight imbalances in class labels, F1 scores are reported in addition to accuracy. For motion classification, the SVM classifier achieves accuracy of 92.1% and an F1 score of 0.921, while the MLP classifier obtains accuracy of 93.0% and an F1 score of 0.931. Considering human agreement rates on this task (Cohen's κ = 0.91), this is probably close to the optimal performance that could be expected. Many of the features most indicative of positive motion sentiment are related to the practicalities of legislation, reflecting the fact that many of these motions are brought by the Government in an effort to pass law. Many negative motions include structures such as '(this House) believes that/notes that/disagrees with/calls on the Government to...', and this is also reflected in the most discriminating n-gram features (see Table 1).

Table 1: Top 10 most discriminating positive and negative n-gram features ranked by SVM training coefficients using manually annotated labels.
Rank  Positive     Negative
1     security     notes
2     connection   amend
3     given        believes
4     purposes     calls
5     general      government
6     new          calls government
7     schedule     dated
8     proceedings  eu
9     session      disagrees
10    programme    number

Speech classification performance scores are presented in Table 2. The highest accuracy and F1 scores overall, using both labelling methods, are achieved using all features to train the MLP classifier. These results provide a number of insights into the relationships between the labelling methods used, the textual and metadata features in the corpus, and the debate models applied.

Labelling Methods

Results indicate a correlation between the labelling method used and the performance resulting from the use of different feature types for classification. Use of manually annotated labels leads to slightly better performance when only textual features are considered, while with division vote labels, the inclusion (or exclusive use) of metadata leads to considerable gains in performance (see Figure 2). It therefore appears that information in the text correlates more closely with human understanding of the sentiment expressed in the speech, while contextual information regarding the speakers involved is more indicative of voting intention, with speaker party affiliation a particularly strong indicator of this label. However, while these results support the hypothesis that manual labels are more indicative of speech sentiment, considering the associated costs and the relatively small differences in performance, use of division votes may be the more pragmatic choice for this task for practical purposes.

Table 2: Accuracy and F1 scores for one- and two-step models (the latter using automatically classified motion sentiment labels or Government/opposition motion sentiment labels). Results include division vote and manually annotated sentiment labels, and speech sentiment classification is performed using the support vector machine (SVM) and the multi-layer perceptron (MLP) classifiers. The best overall scores for each metric are in bold and the best scores using textual n-gram features only are underlined.
Debate Models

Compared to the one-step Speech model, use of the Motion-speech models produces improved results for both classifiers under most model-feature configurations. It therefore seems that use of such a two-step model may go some way towards capturing the complex nature of these debates, in which positive language can indicate negative sentiment polarity and vice versa. Exceptions to this occur when the classifier is trained using contextual metadata features only. Here, as textual features are ignored, the two-step model becomes effectively redundant. Interestingly, the use in model 2b of labels derived from the relationship of the MP who proposes the motion to the Government (Government or opposition) is generally as effective as training a classifier on manually annotated labels (model 2a). This suggests that a two-step Motion-speech model can be used without the need for costly manual annotations, at least as far as motion sentiment labels are concerned.

Features

For textual features, the inclusion of bi- and trigrams does not appear to significantly improve speech classification performance over the use of only unigrams for this task, particularly for the two-step models (see Figure 3).

Table 3: Top 10 most discriminating textual n-gram features ranked by coefficients learned by training the SVM classifier. The bottom row of this table (*) shows the total mean sentiment score of the items in each column, as extracted from SentiWordNet 3.0.

Ranking of n-grams by their SVM training coefficients also reveals that few bigrams and no trigrams feature in the top 10 most discriminating features (see Table 3). Examination of these predictive items underlines the fact that discriminating textual features for this task are not generally those that would be thought of as expressing positive or negative sentiment, even when using the two-step model. Calculating the average polarity of these lexical items (the mean score of all entries for each item) according to a sentiment lexicon (a sketch of this computation is given at the end of this subsection), we find that 36.7% are neutral, 42.5% positive, and only 16.7% negative. This suggests that MPs tend to follow parliamentary guidelines to practise 'good temper and moderation', avoiding negative language in these debates, whatever point they may be making. The acquisition of sentiment polarity by objectively neutral language that we see here may also be due to the corpus containing a combination of debates on a wide variety of subjects and a relative sparsity of speeches addressing each of these topics. In debates which are skewed towards having more speakers either supporting or opposing the motion, topic words can become indicative of one or the other polarity. Hence, in this corpus, generally neutral lexemes such as 'fox' or 'Wales' become indicative of positive and negative sentiment polarity respectively. While the use of contextual metadata features improves overall performance, in some cases their inclusion leads to incorrect classification. This is prevalent in cases where an MP's sentiment is contrary to that of the majority of other members of their party, or in debates where MPs do not vote along party lines. In such cases, party affiliation can act as a confounding feature.
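The lexicon scoring described above can be reproduced along the following lines with NLTK's SentiWordNet interface; the word list is a sample drawn from the features discussed around Table 3, and the neutral/positive/negative bucketing rule is our own assumption.

from nltk.corpus import sentiwordnet as swn
# Requires: nltk.download("sentiwordnet"); nltk.download("wordnet")

def mean_polarity(word):
    """Mean (pos - neg) score over all SentiWordNet entries for a word."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

words = ["security", "notes", "government", "disagrees", "welcome"]
for w in words:
    score = mean_polarity(w)
    label = "neutral" if score == 0 else ("positive" if score > 0 else "negative")
    print(f"{w:10s} {score:+.3f} {label}")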
Classifiers

Using textual features only, there is no significant difference between the performance of the two classifiers. However, when contextual metadata features are included, the MLP tends to obtain higher accuracy and F1 scores, suggesting that such neural networks may be better able to exploit the complex relationships between textual and contextual cues in these parliamentary debates.

Error Analysis

Even using the best performing model-classifier-label-features configurations, some speeches are not classified correctly. We manually examined the examples for which, using all learning features, and no matter which labels or model were used, the MLP classifier's predicted labels did not match the supervision labels. In the majority of these cases, we observed the following:

1. Speeches were longer than average (mean 218.8 vs. 167.8 words for the whole corpus).
2. Either: the speech sentiment label did not agree with the majority of that speaker's party (19.4% of errors), the speaker's party was split in the debate concerned (11.9%), the speaker was the only member of their party in this debate (22.4%), or the debate featured only that one speech (4.5%).

In the remaining cases, speeches by Conservative MPs were erroneously classified as negative, and those of Labour or SNP speakers as positive. It therefore appears that the party affiliation feature may carry too much weight. While this feature is clearly strongly indicative of speaker sentiment, it can lead the classifier to over-generalise. For the use of textual features only, we also examined examples in which the best performing (highest accuracy) configuration, the Motion-speech model with SVM and manual labels, classified speeches incorrectly. While it is difficult to identify a common thread between all these cases, it appears that, on many occasions, these speeches feature speakers addressing off-topic or tangentially related subject matter (see Example 2, in which the speaker talks about a different event than the target of the motion).

(2) Motion: That the draft European Union Referendum (Date of Referendum etc.) Regulations 2016, which were laid before this House on 22 February, be approved.
Speech: On suspicious intentions, may I remind the right hon. Gentleman that he campaigned with the Conservative party and the Labour party in Scotland, telling the people of Scotland that if they voted no in the Scottish referendum, they would be guaranteed to remain in the EU? What is his position on that point today?

Even when speeches do contain subjective language directed at the motion, as in Example 3, multiple opinion targets, such as other MPs, parties, and topics, can also be present, complicating the task of sentiment analysis at this level of granularity.

(3) Speech: We have always been opposed, and we continue to be opposed, to guillotines. They are wrong in principle and in this case. However, we are realistic and we know that the Government have a majority. We welcome very much the comments and support of the hon. Member for Thurrock... First, the Bill is unnecessary and should not have been introduced... As the Government failed to think the matter through and to act, it is unfair that hon. Members should be penalised by lack of time... Secondly, until a few minutes ago, I was under the impression that the Opposition line was to make their point on the guillotine, but not to divide the House. That will only penalise us, as we will lose another 15 to 20 minutes. I ask the hon. Member for Grantham and Stamford to think.
Conclusions

We have evaluated the use of manually annotated labels and division vote labels for sentiment analysis of speeches taken from Hansard UK House of Commons debate transcripts in the HanDeSeT corpus. We have also introduced a new two-step model for debate speech sentiment analysis and evaluated its performance against the one-step model. We also compared the performance on this task of both SVM and MLP classifiers, and the use of both textual n-gram features and contextual metadata features. Results suggest that while contextual metadata can be highly predictive of speakers' division votes, manually annotated labels more closely reflect speakers' sentiment as expressed in their speeches. However, considering the large overlap between the two sets of labels, manual annotation may not be cost-effective for future work or for the creation of larger datasets. Our two-step Motion-speech model outperforms a simple one-step model in nearly all label-feature-classifier configurations, and therefore seems better able to take account of the complexities inherent in the structure of House of Commons debates, such as double negation. Additionally, we have found that labelling motions according to the relationship to the Government of the speakers who propose them can approximate the effects of sentiment classification of debate motions, thus avoiding the need for costly manual annotations for this step. Overall, it seems that sentiment analysis of Hansard transcripts at the speech level does not yield major insights beyond those that could be obtained by merely examining MPs' voting records. A more fine-grained analysis may be required to access the opinions expressed in these debates. In future work, we will focus on applying sentiment analysis to the different targets of the speakers' sentiment, such as the various topics and subtopics that arise in parliamentary debates.
Blockage of saline intrusions in restricted, two-layer exchange flows across a submerged sill obstruction

Results are presented from a series of large-scale experiments investigating the internal and near-bed dynamics of bi-directional stratified flows with a net-barotropic component across a submerged, trapezoidal, sill obstruction. High-resolution velocity and density profiles are obtained in the vicinity of the obstruction to observe internal-flow dynamics under a range of parametric forcing conditions (i.e. variable saline and fresh water volume fluxes; density differences; sill obstruction submergence depths). Detailed synoptic velocity fields are measured across the sill crest using 2D particle image velocimetry, while the density structure of the two-layer exchange flows is measured using micro-conductivity probes at several sill locations. These measurements are designed to aid qualitative and quantitative interpretation of the internal-flow processes associated with the lower saline intrusion layer blockage conditions, and indicate that the primary mechanism for this blockage is mass exchange from the saline intrusion layer due to significant interfacial mixing and entrainment under dominant, net-barotropic, flow conditions in the upper freshwater layer. This interfacial mixing is quantified by considering both the isopycnal separation of vertically-sorted density profiles across the sill and the calculation of corresponding Thorpe overturning length scales. Analysis of the synoptic velocity fields and density profiles also indicates that the net exchange flow conditions remain subcritical (G < 1) across the sill for all parametric conditions tested. An analytical two-layer exchange flow model is then developed to include frictional and entrainment effects, both of which are needed to account for turbulent stresses and saline entrainment into the upper freshwater layer. The experimental results are used to validate two key model parameters: (1) the internal-flow head loss associated with boundary friction and interfacial shear; and (2) the mass exchange from the lower saline layer into the upper fresh layer due to entrainment.

Introduction

The presence of natural topographic flow obstructions (e.g. sills, sand bars) can have significant implications for the intrusion of saline marine waters into semi-enclosed estuarine impoundments or fjordic basins. For example, the obstruction of exchange flows within partially-blocked estuaries can impact adversely on estuarine ecology due to the inhibition of tidal intrusion across submerged sand bars at the river mouth, with the suppression of associated estuarine circulation and mixing processes exacerbating stagnation and contaminant accumulation problems within the estuarine impoundments [2,3]. There are a number of field observations in semi-enclosed seas such as the Baltic Sea [12-14, 16, 18] which indicate that two-way patterns of internal flow are present under different background vorticity conditions, associated with extensive mixing near permanent fronts [19]. In coastal regions, some estuaries can be completely blocked from saline marine water intrusion into the river basin, while others are strongly influenced by saline water circulations in the estuary mouth, with restricted intrusion into the estuary basin flowing in the opposite direction to the overlying freshwater outflow layer [22].
Such bi-directional stratified flows can lead to significant depthwise variations and strong gradients in both velocity and density profiles, leading to high gradient Richardson numbers [17,22]. The dynamics of these exchange flows can be represented by the position of two interfaces, namely (i) the density interface separating the intruding saline water from the overlying, outflowing fresh water layer, and (ii) the zero-velocity interface determined by the reversal point in the velocity profile. Turbulent fluxes within the strong interfacial shear layer generated can result in significant interfacial mixing and transfer of mass and momentum between the layers. In some cases, strong vertical entrainment is present within the region of the salt-water return flow, while, in other circumstances, interfacial waves are first formed at the density interface, with the resulting interfacial instabilities (e.g. Kelvin-Helmholtz or Holmboe instabilities) providing an additional mechanism for vertical mixing across the density interface.

The study of Farmer and Armi [5] concentrated on investigating the initial dynamics of stratified flows developing over submerged topography. Comparing their observations with numerical simulations [6], they found that, although the model agreed moderately well with the observed end state, it failed to reproduce the observations during the period of flow establishment. They concluded that the bottom boundary dynamics had a fundamental role in the initial evolution of the flow and that numerical models which aim to simulate stratified flows in a sill region must accurately represent the bottom boundary layer. More recently, Negretti et al. [20] and Fouli and Zhu [7] conducted experiments to investigate the mechanisms by which interfacial waves are generated in two-layer exchange flows over submerged bottom sills, focusing on the influence of barotropic forcing and the generation conditions for Kelvin-Helmholtz instabilities, respectively. Negretti et al. [21] also investigated the influence of boundary roughness on the generation and collapse mechanisms for large-scale interfacial waves in two-layer flows down a slope, defining two main sources of entrainment associated with the waves themselves and the bottom roughness. Despite these investigations, there have been relatively few detailed laboratory studies of the effect of bottom boundary dynamics on sill exchange flows to date. Furthermore, as these near-bed processes are expected to induce suspension and transportation of bed sediments, they are important for mass transport and water quality within coastal regions of restricted exchange [9,10].

In restricted bi-directional stratified flow problems, the internal flow dynamics are expected to be sensitive to (i) the dimensions of the obstruction (i.e. sill length, height and submergence depth), (ii) density (and stratification) differences between the two water bodies separated by the sill obstruction, and (iii) external barotropic forcing conditions due to tidal and freshwater inflows [7,20]. In this context, however, the range of parametric conditions under which these restricted exchange flows are initiated (or indeed blocked) is not, as yet, completely understood. In addition, further research is required to investigate the physical mechanisms associated with shear-induced mixing processes, vertical entrainment and the generation of interfacial waves by bi-directional flows across the obstruction.
Improved knowledge of the bottom boundary dynamics associated with the intrusion of marine saline waters is also required to parameterise boundary layer processes associated with restricted, bi-directional stratified flows across topographic obstructions. These processes, in particular, are known to be crucial for water circulation, mixing, stratification, deep-water renewal, bottom stagnation and flushing within these semi-enclosed water bodies, although their exact role in each of these processes remains somewhat unclear. Internal hydraulic theory can provide a useful analytical modelling approach for the preliminary interpretation of the complicated internal flow dynamics of restricted, two-layer exchange flows across a submerged sill obstruction. In this regard, Zhu and Lawrence [24] included frictional and non-hydrostatic effects in their two-layer hydraulic model to investigate the case of a baroclinic exchange flow within a silled channel connecting two homogeneous water reservoirs of different densities. It was found from their study that the interface elevations measured at different sections, both in the vicinity of the sill obstruction and at more remote channel locations, corresponded well to predicted elevations from internal hydraulic theory when the internal flow head loss was specified in the range 0.0-0.1 of the total fluid depth. Cuthbertson et al. [2,3] successfully applied a similar two-layer exchange flow model to consider the case of a slowly descending barrier, initially separating two water reservoirs of different density. Within their investigations, the rate of descent of the barrier was assumed (correctly) to be sufficiently slow for the unsteady two-layer exchange flow generated above the sill crest to adjust continuously to the appropriate quasi-steady conditions at every stage of the barrier descent. Their results demonstrated that the thicknesses of the two layers at the barrier crest could be predicted satisfactorily by an internal hydraulic model that (i) assumed the existence of either a single control point (i.e. at the barrier crest [3]) or two control points (i.e. at the barrier crest and channel exit [2]); and (ii) incorporated internal flow losses from the sudden expansion and contraction of the upper and lower layers, respectively, at the channel exit [3]. In the present study, these two-layer exchange flow models for rectangular-shaped channels have been extended to include both frictional and entrainment effects, which are required to account for turbulent stresses and mass transfer from the lower saline layer (i.e. due to entrainment). As such, the experimental results are used to validate two key parameters in the internal flow model, namely: (i) the internal-flow head loss associated with boundary friction and interfacial shear; and (ii) the mass exchange m from the lower saline layer into the upper fresh water layer. The theoretical results will thus be compared directly with the experimental findings. In this context, a key aim of the current study is to (i) address current knowledge gaps on interfacial mixing processes in bi-directional stratified flows generated across a submerged obstruction, and (ii) define the parametric influences (i.e. flow, density difference and obstruction submergence depth) on shear-driven mixing and entrainment dynamics across the sill, as well as the physical mechanisms associated with blockage of saline intrusions.

The physical system

A schematic representation of the physical system under investigation is shown in Fig. 1.
A trapezoidal-shaped, submerged sill obstruction S of height h_s, sill length l_s and approach slope angle α_s is installed in a rectangular channel of overall length L, width B and depth H. This sill obstruction restricts the exchange flows generated between a freshwater impoundment I and the saline water basin M. The initial, undisturbed experimental configuration was one in which the rectangular channel is filled with freshwater of density ρ_1, submerging the trapezoidal sill to a depth h_b (= H − h_s). Saline water of density ρ_2 is then introduced at the bottom of basin M at an initially low volume flux Q_2 to allow a dense stratified layer to develop, whilst minimising mixing with the overlying fresh water layer. Once this layer is established, the saline water volume flux Q_2 is increased to a prescribed flow rate and a dense water intrusion is initiated across the submerged sill, before flowing down the inclined sill slope into basin I and out of the channel as a bottom gravity current.

Fig. 1: Schematic representation of the physical system under investigation. Sections A, B and C indicate the locations defined in the idealised internal-flow hydraulic modelling approach (see Sect. 4) where internal hydraulic controls are expected to form (i.e. at location A and in the sill region between B and C).

After this saline intrusion develops into a quasi-steady saline overflow across the sill, a counter-flowing upper freshwater layer of density ρ_1 and volume flux Q_1 is initiated across the sill. This upper freshwater layer also adjusts to quasi-steady conditions before being increased incrementally throughout the experiment to investigate the influence of an increasing net-barotropic flow component (Q_1 > Q_2) on the saline intrusion layer. As such, the parametric changes in the bi-directional exchange flows generated across the sill obstruction are effected by varying (i) the relative fresh Q_1 and saline Q_2 water volume fluxes, (ii) the density excess Δρ (= ρ_2 − ρ_1), and (iii) the sill submergence depth h_b. All other parameters associated with the sill obstruction geometry are kept constant within all experimental runs.

Experimental set-up and procedure

Laboratory configuration

The experimental program was conducted in a large-scale facility (Coriolis Platform II) at the Laboratoire des Écoulements Géophysiques et Industriels (LEGI) in Grenoble. For the current experimental study, a 9 m-long by 1.5 m-wide by 1.2 m-deep rectangular channel was constructed within the circular basin of overall dimensions 13 m diameter and 1.2 m depth (see Fig. 2), allowing total water depths H of up to 1 m to be considered. The rigid trapezoidal sill obstruction had a horizontal sill length l_s = 2 m, at a height h_s = 0.5 m above the channel floor, and inclined sill approaches set at an angle α_s = 26.57° (see Fig. 2a). The walls of the rectangular channel were constructed from transparent acrylic to facilitate laser flow illumination and visualization of the bi-directional stratified flow development across the sill. It is noted that the basin-sill slope transitions in the current configuration are expected to have negligible effects on the internal-flow dynamics resulting from flow separation or other non-hydrostatic effects.
With the circular basin and rectangular channel filled with freshwater to a total depth H = 0.85-1.0 m, the counter-flowing saline water (ρ_2 = 1004.7-1009.6 kg m⁻³) and overlying freshwater (ρ_1 = 1000 kg m⁻³) layers were externally driven across the submerged sill obstruction. The saline water was delivered to the bottom of basin M via a gravity feed system and a 0.3 m-high by 1.5 m-wide rectangular manifold section (Fig. 2a), while fresh water was recirculated within the channel and surrounding circular tank by two centrifugal pumps positioned in the upper part of basin M, directly above the saline water manifold (Fig. 2b). These two flow systems provided saline and fresh water volume fluxes in the ranges Q_2 = 2.64-6.94 l s⁻¹ (i.e. q_2 = Q_2/B = 0.00176-0.00463 m² s⁻¹) and Q_1 = 0-30 l s⁻¹ (i.e. q_1 = Q_1/B = 0-0.02 m² s⁻¹), respectively. In all experimental runs, the saline volume flux Q_2 was held constant while the freshwater volume flux Q_1 was increased systematically in incremental steps (i.e. Q_1 = 0, 3, 11, 18, 26 and 30 l s⁻¹) at prescribed elapsed times t, with corresponding quasi-steady exchange flow conditions developing across the sill for each Q_1:Q_2 combination. In order to maintain a quasi-constant depth H (and sill submergence depth h_b) within the channel and surrounding basin during each experimental run, water was drained continuously from the bottom of the circular basin, outside the rectangular channel, at a flow rate equivalent to the saline inflow volume flux Q_2. The parametric dependence of the bi-directional stratified flow conditions developed across the sill obstruction was therefore tested in relation to (i) the relative sill submergence depth h_b/H = (1 − h_s/H) = 0.5-0.6; (ii) the relative density difference of the fresh and salt water inflows, i.e. (ρ_2 − ρ_1)/ρ_1 = 0.005-0.01; and (iii) the relative magnitude of fresh and saline water volume fluxes, i.e. Q_1/Q_2 (= q_1/q_2) = 0-11.36. Summary details of the parametric experimental conditions tested are presented in Table 1. (Note: full details of individual run parameters are given in the supplementary material, Table S1.)

Instrumentation and measurements

Experimental measurements focused mainly on obtaining high-resolution density and velocity fields, both across the sill obstruction and at selected locations within basins M and I (i.e. on either side of the obstruction), for the range of different bi-directional stratified flows tested. Flow illumination was provided by a continuous laser system sited at the far end of basin I, which produced a vertical laser light sheet aligned along the channel centerline (see Fig. 2a). Two-dimensional Particle Image Velocimetry (PIV) was then used to measure velocity fields within the resulting vertical (XZ) plane, employing two side-mounted digital CCD cameras (Dalsa 1M60, resolution 1024 × 1024 pixels) to record instantaneous flow velocity fields at a frame acquisition rate of 10 Hz within specific regions of interest (i.e. across the 2 m-long sill section and on the down-sloping face of the sill obstruction into basin I) (see Fig. 2a). These PIV measurements were obtained over two-minute durations for each parametric flow condition tested, allowing synoptic (i.e. time-averaged) velocity fields to be generated for the regions of interest. The PIV was performed with direct image cross-correlation and a 3-point Gaussian subpixel estimator of the correlation maximum, with a mask used to restrict the analysis to flow regions.
Successive PIV iterations were conducted with increasing resolutions (two to four iterations were generally conducted, depending on the seeding quality). The final correlation box was 20 × 30 pixels in size, providing spatial resolutions of typically 10 pixels in the vertical and 15 pixels in the horizontal. This vertical resolution represented 2.5 cm for Dalsa 1 (along the sill crest) and 1.2 cm for Dalsa 2 (on the down-sloping face of the sill). After each PIV iteration, a smoothing interpolation was performed using thin-plate splines to eliminate vectors (above a displacement threshold of 1.5 pixels) that were considered false. As this elimination was used only to reduce the search range for the next iteration, the final velocity vector fields obtained from the last PIV measurement had no smoothing applied. Finally, the velocity data were linearly interpolated on a regular grid with a 1 cm mesh size to perform statistical analysis. Time intervals for the PIV were chosen to obtain maximum displacements of 5-10 pixels between successive images. With a root-mean-square precision of 0.2 pixels, this corresponds to a relative precision of about 5% of the maximum instantaneous velocity. Since these errors were random with zero mean, the corresponding precision on the mean velocities was somewhat higher. The processing software used is documented at http://servforge.legi.grenoble-inp.fr/projects/soft-uvmat, from which the source can be downloaded.

Table 1: Summary of main experimental variables and derived parameters (run numbers as listed in Table S1).

High-resolution density profile measurements were also obtained at key locations, both across the sill obstruction and within basin M, using an array of motorized micro-conductivity probes [8] (C1-C5, Fig. 2a). These micro-conductivity probes traversed vertically through the full depth of the developed two-layer exchange flows at a rate of 5 mm s⁻¹, with full density profiles taken over time periods of 70-90 s for the range of sill submergence depths tested (i.e. h_b = 0.345-0.45 m, see Table 1). These detailed density profile measurements enabled mixing characteristics at the interface between the counter-flowing fresh and saline layers to be measured. Corresponding ADV velocity profile measurements were also obtained across the sill, which were used essentially to calibrate the source fresh water volumetric fluxes Q_1 generated within the channel under a range of different centrifugal pump motor speeds.

Internal hydraulic modelling

Composite Froude number

The critical condition for two-layer exchange flow across the sill (Fig. 1) is defined by a relationship between the thicknesses of the counter-flowing fresh and saline water layers [h_1(x) and h_2(x), respectively], their corresponding flow velocities [u_1(x) and u_2(x), respectively], and the reduced gravitational acceleration g′:

u_1²(x)/(g′ h_1(x)) + u_2²(x)/(g′ h_2(x)) = 1, with g′ = g(1 − Γ)    (1)

where Γ = ρ_1/ρ_2 is the density ratio between the fresh and saline waters. Within the Boussinesq approximation, the assumption is made that (1 − Γ) ≪ 1 when defining the critical condition for development of maximal two-layer exchange flow. The restricted exchange between the two basins I and M is controlled by the fresh and saline water volume fluxes, Q_1 and Q_2, as well as by the total submergence depth h_b (= h_1 + h_2) above the sill obstruction height h_s(x) (see Fig. 1). Internal-flow hydraulic controls for the inviscid flow case might be expected to form at locations A and B on the obstruction side near the saline water source (Fig. 1).
However, for the current sill geometry (i.e. constant depth across the sill and in the basins on either side), and for exchange flows with net-barotropic flow components, these controls are expected to occur over regions rather than at exact locations. In general, the current study was concerned with investigating exchange flows with net-barotropic components (i.e. q_1 ≠ q_2) generated within the channel, to define the specific parametric conditions under which blockage of the saline intrusion layer occurs across the sill. Under these conditions, the net-barotropic flow component in the upper layer can shift the location of the first control, as in a horizontally-constricted flow, in a manner dependent on the internal-flow head loss. Similarly, in the viscid case of sill flow, the second control [F_1²(x) + F_2²(x) = 1] at location B (Fig. 1) can be shifted along the sill in the direction of the fresh-water source (i.e. towards location C), whilst for strongly dissipative cases, it may be shifted beyond the sill area (i.e. into basin I). At these hydraulic control sections, the composite Froude number G is critical, such that

G²(x) = F_1²(x) + F_2²(x) = 1    (2)

where F_1(x) and F_2(x) are the local densimetric Froude numbers for the counter-flowing fresh and saline water layers, defined for a rectangular cross-sectional channel as

F_1²(x) = u_1²(x)/(g′ h_1(x)),  F_2²(x) = u_2²(x)/(g′ h_2(x))    (3)

The specific volume flux, i.e. the volume flux per unit width, for the counter-flowing fresh and saline layers can be defined as q_1 = Q_1/B and q_2 = Q_2/B. Thus, the corresponding flow velocities can be defined as u_1(x) = q_1/h_1(x) and u_2(x) = q_2/h_2(x), respectively. As such, Eq. (1) becomes

G²(x) = q_1²/(g′ h_1³(x)) + q_2²/(g′ h_2³(x)) = 1    (4)

It is noted here that, within an idealised, inviscid mathematical model representation of the exchange flows across the sill configuration under investigation, the composite Froude number G is constant when both the channel depth and width are constant. However, in the extended, viscid model developed below, G will vary due to mass transfer and internal-flow energy losses across the sill.
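To make the criticality check of Eq. (4) concrete, the sketch below evaluates the composite Froude number from prescribed layer depths and specific fluxes; the numerical values are illustrative assumptions loosely based on the experimental ranges, not measured data.

import numpy as np

def composite_froude_sq(q1, q2, h1, h2, g_prime):
    """Composite Froude number squared, G^2 = q1^2/(g' h1^3) + q2^2/(g' h2^3).

    q1, q2  : specific (per-unit-width) fluxes of the fresh and saline layers [m^2/s]
    h1, h2  : fresh and saline layer thicknesses [m]
    g_prime : reduced gravity g' = g (rho2 - rho1) / rho2 [m/s^2]
    """
    return q1**2 / (g_prime * h1**3) + q2**2 / (g_prime * h2**3)

g = 9.81
rho1, rho2 = 1000.0, 1007.0            # assumed fresh/saline densities [kg/m^3]
g_prime = g * (rho2 - rho1) / rho2     # reduced gravity, ~0.068 m/s^2
q1 = 0.02      # upper-layer flux per unit width [m^2/s] (Q1 = 30 l/s over B = 1.5 m)
q2 = 0.0046    # lower-layer flux per unit width [m^2/s]
h1, h2 = 0.30, 0.10                    # assumed layer thicknesses over the sill [m]

G2 = composite_froude_sq(q1, q2, h1, h2, g_prime)
print(f"G^2 = {G2:.3f} ->", "subcritical" if G2 < 1.0 else "critical/supercritical")

With these assumed values G^2 is around 0.5, i.e. subcritical, consistent with the exchange-flow conditions reported across the sill crest later in the paper.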
Hydraulic modelling of two-layer exchange flows

For the two-layer exchange flows under consideration here, the relative density difference between the superimposed, counter-flowing, fresh and saline water layers in the internal-flow energy equation is assumed to be small, and the pressure p_0 at the free surface is atmospheric. For this inviscid, irrotational flow, the Bernoulli equations of the coupled layers yield the two-layer flow equations for the layer energies E_1(x) and E_2(x) (Eq. 5). In the modelling of two-layer exchange flows, it is customary to define the internal-flow energy equation as the difference between these layer energies, ΔE(x) = E_1(x) − E_2(x) (Eq. 6). Substituting for E_1(x) and E_2(x) in Eq. (6), the internal-flow energy equation at a particular sill location x is obtained (Eq. 7). The flow velocities of the counter-flowing fresh and saline layers, u_1(x) and u_2(x), can also be expressed in terms of the corresponding specific flow rates, i.e. q_1 = u_1(x)·h_1(x) and q_2 = u_2(x)·h_2(x), respectively, allowing Eq. (7) to be rewritten in terms of the flow-rate parameter K = q_2²/(2g′) and the ratio q* = q_1/q_2 of the upper fresh and lower saline layer volume fluxes per unit width (Eq. 8). This version of the internal-flow energy equation can be non-dimensionalised (Eq. 9) using appropriately normalised layer depths and energies. The maximal flow rate per unit channel width can then be derived from the dimensionless Eq. (9) by applying the implicit function differentiation theorem with respect to the dimensionless lower-layer depth h_2*. In this way, the stratified-flow controlled flow rate is given by Eq. (10) of [4]. This lower-layer flow rate corresponds to the bottom saline intrusion across the sill obstruction, while the counter-flowing upper fresh water flow rate can be determined directly from q_1 = q*·q_2. However, in the case of exchange flows with a net-barotropic component in the upper freshwater layer, the resulting bi-directional flow can be regarded as sub-maximal rather than maximal (i.e. two internal hydraulic controls are present).

Maximal and sub-maximal flow modelling

In studies of bi-directional channel flows, the internal hydraulic modelling solutions are usually limited to maximal or sub-maximal exchange flows [1,4,24]. Therefore, an essential consideration in the internal hydraulic analysis of two-layer flows has been to determine the location(s) of sections of internal control (G² = 1). In the present experimental study, a trapezoidal sill was used to separate the fresh- and salt-water sources, and thus, from inviscid internal hydraulic theory, the primary control should be located at the basin M end of the trapezoidal sill (i.e. at location A, Fig. 1). In the case of maximal exchange flow conditions developing, a second control should be located at the basin M end of the trapezoidal-sill crest (i.e. at location B, Fig. 1). According to Armi and Farmer [1], for this maximal exchange-flow case, the two controls are connected by an internally sub-critical branch (i.e. G² < 1), and separated from the upstream and downstream channel parts by super-critical branches (i.e. G² > 1). However, with a comparatively large net-barotropic flow component in the surface layer, the second control can also be located on the fresh water source side (i.e. in basin I). The system of internal hydraulic model equations [15,24] for maximal exchange consists of four relationships and includes, respectively, the critical flow conditions and internal-flow energy equations at the two control locations. As a first approximation, this standard approach is used, and the internal-flow model equations are applied at two locations: the hydraulic model equations at location A (Eqs. 11, 12) and the hydraulic model equations at a section between locations B and C (Eqs. 13, 14). Here, subscripts A and BC are used to denote the parameters specified for the control locations in basin M (i.e. section A, Fig. 1) and across the trapezoidal sill crest (i.e. between sections B and C, Fig. 1), respectively. A particular goal of the hydraulic modelling presented herein is to investigate the sensitivity of the internal-flow dynamics of the bi-directional stratified flow generated in the rectangular channel configuration incorporating a submerged, trapezoidal, sill obstruction. For this purpose, in addition to the flux ratio q* of the source fresh and saline volume fluxes across the sill, another key non-dimensional parameter is introduced, namely the mass transfer coefficient m = q_2,BC/q_2,A, which represents the loss of mass Δq_2 = (q_2,A − q_2,BC) from entrainment of the saline water layer between the two control locations A and BC, respectively. The internal-flow head loss is also estimated for different runs according to the formula ΔE* = E*_A − E*_BC.
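The structure of this four-equation system can be summarised compactly. The grouping below is a sketch assembled from the definitions given in the text; it is not a verbatim reproduction of Eqs. (11)-(14), whose detailed forms follow from substituting the layer depths at each section.

\[
\begin{aligned}
& F_{1,A}^{2} + F_{2,A}^{2} = 1 && \text{(criticality at section A)}\\
& F_{1,BC}^{2} + F_{2,BC}^{2} = 1 && \text{(criticality at section BC)}\\
& E^{*}_{BC} + \Delta E^{*} = E^{*}_{A} && \text{(internal-flow energy with head loss)}\\
& q_{2,BC} = m\, q_{2,A}, \qquad q_1 = q^{*} q_2 && \text{(lower-layer mass transfer and flux ratio)}
\end{aligned}
\]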
A simple graphical solution of the internal hydraulic model (i.e. Eqs. 11-14) can be used to determine a universal solution for the two-layer exchange flow over the trapezoidal sill for which a net-barotropic flow component is present (i.e. q* ≠ 1). As an example, the solution domain for the normalised lower-layer thickness h_2* at control sections A and BC (i.e. h_2,A* and h_2,BC*) is shown in Fig. 3a for the inviscid exchange flow case, with flow ratio q* = 1, internal energy ratio (E*_BC + ΔE*)/E*_A = 1 (i.e. blue lines in Fig. 3a), and no mass transfer from the lower saline layer (i.e. m = q_2,BC/q_2,A = 1, red lines in Fig. 3a). When internal-flow head losses and lower-layer entrainment are included (i.e. (E*_BC + ΔE*)/E*_A < 1 and m < 1), the slope of the interface between the two control sections (i.e. A and BC) increases when compared to the corresponding slope for the inviscid case (i.e. due to a larger depth h_2,A in basin M and a lower depth h_2,BC over the sill). This extended internal hydraulic model (accounting for both lower-layer entrainment and head losses) can be applied straightforwardly to the cases of net-barotropic flow in the lower saline (q_1 < q_2, q* < 1) and upper fresh (q_1 > q_2, q* > 1) layers. The extended internal-flow hydraulic modelling approach will be used to estimate the limits for mass exchanges and internal-flow head losses in the present experiments.

Description of exchange flow and saline blockage conditions

Within the current experimental study, the development of bi-directional stratified flows across the sill obstruction was measured for both net-barotropic flows in the upper freshwater (q* > 1) and lower saline (q* < 1) layers using PIV measurements. In this context, Figs. 4 and 5 present examples of synoptic, time-averaged velocity vector fields and corresponding colour maps of the horizontal U velocity component for these net exchange flows generated across the horizontal sill and down the inclined slope into impoundment basin I. It was noted during PIV analysis that the measured velocity fields at specific x locations along the sill, and on the sill crest at the freshwater impoundment basin I, were distorted significantly by viewing obstructions in the transparent flume wall sections and the positioning of micro-conductivity density probes at these locations. As such, the velocity vector fields in these regions have been blanked out and discounted from subsequent quantitative analysis of the exchange flows. For run EX2, Fig. 4a-c indicates that increasing the upper source freshwater flow rate Q_1 in incremental steps (i.e. Q_1 = 0, 12 and 30 l s⁻¹ shown) for a prescribed saline water volume flux Q_2 = 6.94 l s⁻¹ results in bi-directional stratified flow conditions with reducing lower-layer thickness h_2, defined by the u = 0 contour elevation, and increased velocity in the upper fresh layer (i.e. u_1 → 6 cm s⁻¹). However, the corresponding lower saline layer velocity u_2 does not diminish significantly (i.e. u_2 ≈ 4 cm s⁻¹) under increasingly dominant upper fresh water flows, and bi-directional stratified flow conditions persist across the sill and down the slope into basin I for all q* values tested (i.e. q* = 0 → 4.32). By contrast, for run EX7, Fig. 5a-d shows both a general reduction in lower-layer thickness h_2 and velocity u_2 as the net-barotropic forcing in the upper fresh layer increases. Indeed, it is shown in Fig. 5d that the saline intrusion is completely blocked across the sill under the strongest net-barotropic forcing conditions in the upper freshwater layer (i.e. q* = 3.75 and 4.32). It is noted that the only significant parametric difference between runs EX2 and EX7 is the total flow depth H and, hence, the total submergence depth h_b of the horizontal sill (i.e. h_b = 0.43 m and 0.349 m, respectively), indicating its parametric significance to the conditions under which the saline intrusion is blocked.
In this context, Fig. 6 defines the parametric conditions under which saline blockage occurs, plotting the non-dimensional source freshwater volume flux q_1²/(g′_0 h_b³) versus the volume flux ratio q* = q_1/q_2. The magnitude of q_1²/(g′_0 h_b³), which is equivalent to the square of the densimetric Froude number F_1 for the freshwater layer when h_1 = h_b, is shown to control the parametric conditions under which the saline intrusion layer is blocked. This occurs above a critical value of q_1²/(g′_0 h_b³) ≈ 0.125 (Fig. 6). In a dimensional sense, this is somewhat surprising, as intuitively it might have been expected that lower saline volume fluxes q_2 across the sill (e.g. run EX5) could be blocked by correspondingly reduced fresh water volume fluxes q_1, thus maintaining the same critical net-barotropic flow condition in the upper freshwater layer (i.e. q* > 1) for saline blockage to occur. However, bi-directional stratified flows are shown to develop in all runs where q_1²/(g′_0 h_b³) ≤ 0.1, over a corresponding q* range of 0 to 11.36 (Fig. 6), suggesting that saline layer blockage requires a specific parametric combination of high freshwater volume flux q_1 together with low submergence depth h_b and reduced gravity g′_0 values (e.g. runs EX6 and EX7). In this context, a reduction in g′_0 and/or h_b appears to increase shear-driven interfacial mixing across the sill between the counter-flowing fresh and saline water layers, leading to enhanced entrainment of saline water by the dominant upper fresh water layer, especially at higher q* > 1 values. This is also evidenced by the significant reduction in, and eventual disappearance of, the u = 0 contour elevation in Fig. 5 (run EX7) with increasing q* values. By contrast, the corresponding run (EX2, Fig. 4) at the higher submergence depth h_b indicates a less pronounced reduction in the u = 0 contour elevation with increasing q* values, suggesting the exchange flow is more stably stratified, with less interfacial mixing and entrainment of saline water into the dominant counter-flowing fresh water layer observed even at high q* ≫ 1 values.

Within the velocity profiles extracted from the synoptic velocity fields (Figs. 4, 5), the elevation of the u = 0 velocity interface between the counter-flowing layers is shown to reduce as the upper fresh water layer velocity u_1 increases under increasing q* values. However, within run EX2 (Fig. 7a), the lower saline intrusion layer is shown to remain relatively persistent, both in terms of thickness h_2 and velocity u_2, across the sill for all q* values tested. This suggests that, although the net-barotropic flow generated in the upper fresh layer has a pronounced dynamic influence on the saline intrusion, it is not sufficient to block it completely. Run EX7 (Fig. 7b) reveals similar parametric trends of increasing and reducing upper and lower layer velocities u_1 and u_2, respectively, and decreasing saline layer thickness h_2 as the flux ratio q* is increased. In this case, however, the parametric dependence on q* results in the complete blockage of the saline water intrusion (i.e. h_2 → 0). It is noted that the key parametric difference between runs EX2 (Fig. 7a) and EX7 (Fig. 7b) is the sill submergence depth h_b (i.e. h_b = 0.43 m and 0.349 m, respectively). As such, the observed variation in the u = 0 interface height in both cases (i.e. h_2 = 15 → 10 mm and 18 → 0 mm, respectively) for increasing q* values is therefore controlled primarily by the submergence depth h_b, thus confirming its parametric importance in the blockage of saline intrusions across the sill.
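As a compact illustration of this criterion, the following sketch evaluates q_1²/(g′_0 h_b³) and applies the blockage threshold of approximately 0.125 reported in the concluding remarks; the inputs are assumed example values in the ranges quoted for the low-submergence runs, not tabulated run data.

def blockage_parameter(q1, g0_prime, h_b):
    """Non-dimensional freshwater forcing q1^2 / (g0' * h_b^3) (Fig. 6 abscissa)."""
    return q1**2 / (g0_prime * h_b**3)

# Assumed example values, roughly in the EX6/EX7 parameter range:
q1 = 0.02         # freshwater flux per unit width [m^2/s]
g0_prime = 0.05   # reduced gravity g0' [m/s^2] (relative density difference ~0.005)
h_b = 0.349       # sill submergence depth [m]

P = blockage_parameter(q1, g0_prime, h_b)
print(f"q1^2/(g0' h_b^3) = {P:.3f}")
print("saline intrusion blocked" if P > 0.125 else "bi-directional exchange persists")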
It is possible to determine the local upper q̂_1 and lower q̂_2 layer volume fluxes (per unit width) at specific x positions along the sill through integration of the velocity profiles shown in Fig. 7 above and below the u = 0 elevation (Eqs. 17, 18; a numerical sketch of this integration is given at the end of this subsection). In this way, calculations of the local upper and lower layer volume fluxes, Q̂_1 and Q̂_2, respectively, along with corresponding measurements of layer thicknesses h_1 and h_2, can be determined at various positions along the sill. A summary of these calculated fresh and saline volume fluxes Q̂_1 and Q̂_2, and corresponding layer thicknesses h_1 and h_2, is provided in the supplementary material (Table S2) at locations x = −30 and −200 cm along the sill for runs EX2, EX3, EX6 and EX7. It is interesting to note that these local fluxes are, in general, significantly lower than the specified fresh and saline water flows Q_1 and Q_2 at source. This is likely to be due, in part, to uncertainties in predicting Q̂_1 and Q̂_2 (Eqs. 17, 18) from single bi-directional velocity profiles along the sill crest. However, it is informative to investigate how the local flux ratio Q̂_1/Q̂_2 varies along the sill under varying parametric conditions, to provide insight into the influence of net-barotropic flow conditions on the volume flux changes in the upper fresh and lower saline layers across the sill. In this context, Fig. 8 presents these local flux ratio variations along the sill.
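A minimal numerical version of this layer-flux integration is sketched below for a single measured velocity profile u(z); the profile used is synthetic, and the simple sign-based layer split is an assumption, since the published Eqs. (17) and (18) are not reproduced here.

import numpy as np

def _trapz(y, x):
    """Plain trapezoidal integration, avoiding version-specific numpy helpers."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def layer_fluxes(z, u):
    """Integrate a bi-directional velocity profile above/below the u = 0 crossing.

    Returns (q1_hat, q2_hat): per-unit-width fluxes of the upper (u > 0,
    towards basin I) and lower (u < 0, saline intrusion) layers [m^2/s].
    """
    q1_hat = _trapz(np.where(u > 0.0, u, 0.0), z)    # upper-layer flux
    q2_hat = -_trapz(np.where(u < 0.0, u, 0.0), z)   # lower-layer flux (magnitude)
    return q1_hat, q2_hat

# Synthetic two-layer profile: saline intrusion below z ~ 0.1 m, fresh outflow above.
z = np.linspace(0.0, 0.4, 81)               # height above the sill crest [m]
u = np.where(z < 0.1, -0.04, 0.06)          # u2 ~ -4 cm/s, u1 ~ +6 cm/s

q1_hat, q2_hat = layer_fluxes(z, u)
print(f"q1_hat = {q1_hat:.4f} m^2/s, q2_hat = {q2_hat:.4f} m^2/s, "
      f"ratio = {q1_hat / q2_hat:.2f}")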
Analysis of density profiles

Density profiles for the exchange flows generated across the sill were measured via micro-conductivity probes located at x/l_s = 0.0, −0.25 and −0.5. Figure 9a, b presents the raw density profiles from runs EX2 and EX3 for the values of the fresh to saline volume flux ratio q* shown. Both these runs indicate significant levels of mixing throughout the lower saline intrusion layer at x/l_s = 0.0 [Fig. 9a(iii), b(iii)] as it spills over the sill crest into impoundment basin I. At x/l_s = −0.25 and −0.5, however, mixing is confined to the interfacial shear region between the counter-flowing fresh and saline layers, with some evidence of denser water entrainment into the upper freshwater layer [e.g. Fig. 9a(i), (ii), b(i), (ii)], while the saline intrusion layer appears to have a relatively stable density structure. It is noted here that the lower layer density increases over the first three q* conditions, indicating that the full density excess for the bi-directional stratified flow is not established over these q* values. This is possibly due to mixing and dilution of the inflowing saline water source flux during the initial infilling stage within basin M. Similar interfacial mixing is observed at low q* values in run EX7 [Fig. 9c(i), (ii)], while increased levels of mixing and dense-water entrainment into the upper freshwater layer are observed at higher q* values as the lower saline intrusion layer diminishes in thickness, before eventually disappearing at q* = 4.32.

Fig. 9: Raw density profile measurements obtained by micro-conductivity probes in runs (a) EX2, (b) EX3 and (c) EX7 at longitudinal positions of (i) x = −100 cm, (ii) x = −50 cm and (iii) x = 0 cm across the sill for the exchange flows generated with the fresh-saline flux ratios q* as shown.

In order to investigate the level of mixing associated with bi-directional stratified flows with varying net-barotropic components across the sill, the density profiles measured at the three x/l_s locations were sorted vertically into an equivalent stable density profile for each of the different q* conditions tested. Examples of these sorted density profiles are plotted non-dimensionally in Fig. 10a-c for runs EX2, EX3 and EX7, respectively, as the density excess ρ′ = [ρ(z) − ρ_1]/(ρ_2 − ρ_1) versus the normalised submergence depth z/h_b above the sill at location x/l_s = −0.25 for the range of q* values tested. The elevations of the ρ′ = 0.2, 0.5 and 0.8 isopycnals are plotted on individual sorted density profiles, providing an indication of the mixing layer thickness at the different sill locations [3] for different q* values. The vertical separation of these isopycnals is typically largest at (i) low q* values, most probably due to initial saline-fresh water mixing in basin M during filling, and (ii) higher q* values, due to increased interfacial mixing and entrainment of the lower saline intrusion layer by strong net-barotropic flow components in the upper freshwater layer. This is illustrated in Fig. 10c (i.e. run EX7, x/l_s = −0.25), where the isopycnal separation in the generated exchange flows is greatest at q* = 0, 3.03 and 3.75, prior to the blockage of the saline intrusion layer at q* = 4.32. It is also interesting to note that the elevation of the ρ′ = 0.5 isopycnal rises initially with increasing q* (i.e. q* = 0 → 1.15) before reducing with further increasing q* values (i.e. q* = 1.15 → 3.75). It is anticipated that the former effect is due to the less dominant fresh water layer acting to slow down the saline water intrusion layer across the sill, which for a given flux q_2 will result in the intrusion layer becoming thicker. Conversely, under more dominant upper fresh water flows (i.e. higher q* values), the mixing and entrainment of saline water at the interface will reduce the overall thickness of the saline intrusion layer, prior to its complete removal at q* = 4.32. In comparison, for runs EX2 and EX3 (Fig. 10a, b), where ultimately saline blockage does not occur, both the isopycnal separation and the elevation of the ρ′ = 0.5 isopycnal are more consistent over the range of q* values tested. Figure 11 shows the normalised isopycnal separation thicknesses d/h_b (where d is the elevation difference between the ρ′ = 0.2 and 0.8 isopycnals) for all runs in which bi-directional exchange flows are generated across the sill, plotted versus a modified densimetric Froude number q_1 q_2/(g′_0 h_b³). This non-dimensional exchange flow parameter takes account of both the dominant role of the upper fresh water layer in generating interfacial mixing and the relative magnitude of the fresh-saline volume flux ratio q*. These d/h_b values are averaged across the three density probe locations, with the error bars shown representing the standard deviation in these measurements. The figure shows a general trend of increasing d/h_b values with increasing q_1 q_2/(g′_0 h_b³), both within individual runs and over the range of parametric conditions tested. This indicates that, for a specific prescribed saline volume flux q_2, an increase in fresh water volume flux q_1 and/or a reduction in reduced gravity g′_0 or submergence depth h_b tends to result in increased interfacial mixing (i.e. larger isopycnal separation), with correspondingly higher variability in these measurements (i.e. larger error bars). It is also noted that the largest d/h_b values [up to O(10⁻¹)] are typically obtained for the two runs (EX6 and EX7) in which the saline intrusion layer was blocked at high q* values.
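The isopycnal-separation measure can be extracted from a measured profile as sketched below: the profile is first sorted into a gravitationally stable state, the normalised density excess ρ′ is formed, and the elevations of chosen isopycnals are interpolated. The demonstration profile is synthetic.

import numpy as np

def isopycnal_separation(z, rho, rho1, rho2, levels=(0.2, 0.8)):
    """Elevation difference d between two isopycnals of the vertically
    sorted (stable) profile, using rho' = (rho - rho1) / (rho2 - rho1)."""
    rho_sorted = np.sort(rho)[::-1]                 # densest fluid at the bottom
    rho_prime = (rho_sorted - rho1) / (rho2 - rho1)
    # rho' decreases upwards, so interpolate on the reversed (increasing) arrays:
    def z_of(level):
        return np.interp(level, rho_prime[::-1], z[::-1])
    z_upper, z_lower = z_of(levels[0]), z_of(levels[1])  # rho' = 0.2 lies above 0.8
    return z_upper - z_lower

# Synthetic profile with an interfacial mixing layer centred at z = 0.15 m:
z = np.linspace(0.0, 0.4, 201)
rho1, rho2 = 1000.0, 1007.0
rho = rho1 + (rho2 - rho1) / (1.0 + np.exp((z - 0.15) / 0.02))

d = isopycnal_separation(z, rho, rho1, rho2)
print(f"isopycnal separation d = {d * 100:.1f} cm")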
It is observed that some density profiles highlight significant density inversions [e.g. Fig. 9c(iii)] that are indicative of large instabilities being generated at the interface between the counter-flowing fresh and saline layers, particularly close to the basin I end of the sill (i.e. x/l_s = 0). These interfacial instabilities can be defined quantitatively by the Thorpe overturning length scale L_T [23], which is a measure of the vertical scale of short-wave instabilities, such as Kelvin-Helmholtz overturning motions, associated with shear-induced interfacial mixing. It is defined as the root-mean-square of the vertical displacements required to re-order the measured density profile such that the resulting stratification becomes gravitationally stable. In the current study, where the time scales of short-wave instabilities and overturning motions are expected to be significantly shorter than the density profiling time scale (i.e. 70-90 s), L_T is used only as a semi-quantitative measure of the ensemble-averaged mixing characteristics at the three x/l_s positions along the sill crest. Furthermore, estimations of L_T can be subject to significant errors from noise in density profiles [11]. Hence, its prediction is limited to the re-ordered density profiles between isopycnals ρ′ = 0.2 and 0.8 (see Fig. 10). In this context, Fig. 12 presents normalised Thorpe length scales L_T/h_b plotted versus q* for runs EX2, EX3 and EX7. These plots show a general increase in L_T/h_b with q*, again confirming that larger interfacial instabilities are generated under stronger net-barotropic flow conditions in the upper freshwater layer (i.e. q* ≫ 1). In Fig. 12a, b, the L_T/h_b values increase by an order of magnitude [O(10⁻² → 10⁻¹)] over the range of q* values tested (i.e. q* = 0 → 4.3), with no clear dependence on x/l_s location. By comparison, Fig. 12c shows that estimated L_T/h_b values in run EX7 are significantly higher at x/l_s = 0 than at x/l_s = −0.25 or −0.5 for q* values of 0.43, 1.15 and 1.73, with Thorpe length scales at x/l_s = 0 approaching up to half the total sill submergence depth (i.e. L_T/h_b ≈ 0.4-0.5). This may indicate that the degradation and eventual blockage of the saline intrusion layer across the sill is initiated by large-scale instabilities forming at the basin I end of the sill crest, leading to bulk mixing and entrainment under dominant fresh water flows with strong net-barotropic components (i.e. q* ≫ 1). By contrast, more general shear-induced interfacial mixing between stable counter-flowing fresh and saline water layers is characterised by smaller L_T/h_b values under all q* conditions [e.g. run EX3, Fig. 12b]. These results are also in general accord with the isopycnal separation measurements shown in Fig. 11.
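A sketch of the Thorpe-scale estimate used here is given below: each sample of the measured profile is matched to its position in the vertically sorted profile, and L_T is the rms of the resulting displacements, restricted (as described above) to samples lying between the ρ′ = 0.2 and 0.8 isopycnals. The demonstration profile and its single imposed overturn are synthetic.

import numpy as np

def thorpe_scale(z, rho, rho1, rho2, band=(0.2, 0.8)):
    """Root-mean-square Thorpe displacement over a chosen rho' band.

    z must be ascending; the density profile is re-ordered so that the
    densest fluid sits at the bottom, and each sample's displacement is
    the distance it must move to reach its sorted position.
    """
    order = np.argsort(-rho, kind="stable")      # indices that stack densest first
    displacement = z[order] - z                  # Thorpe displacement per sorted slot
    rho_sorted = rho[order]
    rho_prime = (rho_sorted - rho1) / (rho2 - rho1)
    mask = (rho_prime >= band[0]) & (rho_prime <= band[1])
    return np.sqrt(np.mean(displacement[mask] ** 2))

# Synthetic stable profile with one interfacial overturn around z = 0.15 m:
z = np.linspace(0.0, 0.4, 201)
rho1, rho2 = 1000.0, 1007.0
rho = rho1 + (rho2 - rho1) / (1.0 + np.exp((z - 0.15) / 0.02))
overturn = (z > 0.13) & (z < 0.17)
rho[overturn] = rho[overturn][::-1]              # flip the interface to mimic an overturn

L_T = thorpe_scale(z, rho, rho1, rho2)
print(f"L_T = {L_T * 100:.1f} cm")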
Composite Froude number

The composite Froude number G for the two-layer exchange flows generated across the sill was calculated for each run, following Eq. (3), as

G²(x) = ū_1²(x)/(g′ h_1(x)) + ū_2²(x)/(g′ h_2(x))    (19)

where ū_1(x) and ū_2(x) are representative layer-averaged velocities for the counter-flowing fresh and saline water layers, respectively. These are obtained by integrating PIV-derived velocity profiles (e.g. Fig. 7) above and below the u = 0 contour elevation at all x positions along the sill and dividing by the corresponding fresh and saline layer thicknesses h_1(x) and h_2(x). The reduced gravitational acceleration g′ term in Eq. (19) is also based on local density profile measurements across the sill, detailed in the supplementary material (Table S2). As such, Fig. 13 shows the spatial variation in estimated composite Froude numbers G across the sill (i.e. x/l_s = −1.0 → 0.0) for individual runs and the range of fresh-saline flux ratio q* values shown in each plot. Within Fig. 13, the horizontal error bars represent the sill region over which G values were averaged, while the vertical error bars represent ±1 standard deviation in the individual G values contributing to the spatially-averaged G values plotted. Within all runs, it is apparent that the estimated composite Froude numbers remain subcritical (i.e. G² < 1) at all locations along the sill for all q* values. The largest G values (i.e. G ≈ 0.55-0.85) are obtained in run EX2 (Fig. 13a) under the highest q* values (i.e. q* = 3.75 and 4.32), within which bi-directional exchange flows were generated across the sill even under the strongest net-barotropic forcing in the upper freshwater layer (see Fig. 4c). By contrast, smaller G values are typically estimated in all other runs over the range of q* values in which bi-directional exchange flows are generated (Fig. 13b-e). The fresh-saline flux ratio q* itself demonstrates a weak influence on the estimated G values (i.e. G increases as q* increases), especially within runs EX6 and EX7 (Fig. 13d, e), where the saline intrusion layer becomes increasingly diminished in thickness h_2, then completely blocked, for increasing q* values (Fig. 5). It is also observed that the estimated G values tend to increase towards the impoundment I end of the sill, due primarily to the observed increase in the lower saline layer velocity u_2 and reduction in layer thickness h_2 as the sill crest location x/l_s → 0 (e.g. see Fig. 4). In general, as the findings suggest that the internal flow remains subcritical (i.e. G² < 1) along the full length of the sill, the second internal hydraulic control point (G² = 1) must be positioned at some location within the freshwater impoundment I. This finding is in accord with Laanearu et al. [16], who found that for net exchange flows in a laterally-confined river channel, where the dominant barotropic component was in the upper freshwater layer, the location of the second hydraulic control could be displaced towards the freshwater source. It should be noted that, while the spatially-averaged G values plotted in Fig. 13 are representative of the local exchange flow conditions generated across the sill, these specific values should only be considered as estimates. This is due to both the velocity and density profile measurements, used in the estimation of G, varying significantly from the idealised two-layer exchange flow condition. As such, they include uncertainties associated with turbulent fluctuations and interfacial wave activities.

Fig. 13: Longitudinal variations in the estimated composite Froude number (G² = F_1² + F_2²) [Eq. (19)] along the sill crest (x/l_s = −1.0 → 0.0) for runs (a) EX2, (b) EX3, (c) EX4, (d) EX6 and (e) EX7 for the values of fresh-saline flux ratio q* as shown. Horizontal error bars indicate the sill region extent over which each spatially-averaged G value is attained, while the vertical error bars indicate ±1 standard deviation in these predicted spatially-averaged G values.
Analytical modelling

Modelling of exchange flows over sill obstructions is usually restricted to two-layer cases, where the flows in the upper and lower layers are in opposite directions. For idealised maximal exchange flows [24], two critical-flow sections (G² = 1) are assumed to form in the obstructed channel: one at location A in basin M and the second on the sill crest between locations B and C (see Fig. 1). However, sub-maximal exchange can also occur when only one critical-flow section exists, established at location A in basin M. Within the current experimental study, the second control is expected to be located outside of the trapezoidal sill area, toward the freshwater source (i.e. in basin I, Fig. 1). This is confirmed by the estimated composite Froude numbers across the sill crest (i.e. Fig. 13), which were shown to remain subcritical (i.e. G² < 1) throughout. The experimental results also show that, under certain parametric conditions (Fig. 6), dynamic blocking of the saline water intrusion can occur across the sill. This is analogous to "salt-wedge" behaviour in stratified estuaries, which has been modelled experimentally [22] and has also been observed from flow velocity profiles and density front observations in a river channel [16]. The dynamic blocking of saline water intrusions in regions of restricted exchange remains relatively poorly understood, as it involves complex internal-flow dynamics and forcing due to the interfacial mixing and entrainment between the counter-flowing fresh and saline layers. In the idealised analytical two-layer hydraulic model (i.e. the inviscid flow case), upper or lower layer blockage can be simulated by reducing or increasing the fresh–saline flux ratio q* significantly (i.e. q* → 0 and 1/q* → 0, respectively) [1]. Within the current experiments, however, the dynamic blocking condition for the saline intrusion layer across the sill occurs at a finite value of 1/q* < 1. As such, an additional complexity arises from the appropriate model representation of interfacial mixing and entrainment processes in a non-idealised two-layer hydraulic model (i.e. the viscid flow case). Consequently, the extended internal-flow hydraulic model (detailed in Sect. 4) allows specification both of an internal-flow head loss ΔE* and of a mass transfer coefficient m from the lower saline layer between the two control (G² = 1) points A and BC (Fig. 1). This permits determination of the dynamic conditions for saline intrusions under restricted, two-layer exchange flows across the submerged sill obstruction, which can be compared directly with experimental observations of the u = 0 interface height across the sill. Essentially, two dimensionless sill submergence depths h_s* = h_s/H = (1 − h_b/H) = 0.532 (runs EX2 and EX3) and 0.588 (runs EX4–EX7) were considered in the current experimental study of exchange flows with varying net-barotropic flow components (i.e. varying q* values). The extended internal-flow hydraulic model is therefore applied to predict the interface heights of maximal exchange flows generated across the sill, based on the two control point (G² = 1) solutions, for the range of parametric conditions tested. The comparisons in Fig. 14a, b between the extended two-layer hydraulic model predictions of interface elevations and the corresponding experimental data highlight the importance of specifying appropriate energy loss ΔE* and mass transfer m coefficients to accurately predict interface elevations for bi-directional stratified flows generated across the sill.
In Fig. 14a, setting m = 0.75 clearly represents a 25% reduction in the lower saline layer volume flux between the two control (G² = 1) points, while the lower m = 0.25 value specified in Fig. 14b represents a 75% reduction in saline volume flux between these control points. This also appears to be largely consistent with the maximum reduction in local saline volume flux across the sill of 86%, calculated from synoptic PIV velocity fields (see Sect. 5.2). In general, the extended two-layer hydraulic model developed is shown to provide reasonable predictions of the measured interface elevations across the sill over the range of bi-directional stratified flows generated for different q* values. For both dimensionless sill submergence depths, h_s* = 0.532 and 0.588, it is shown that careful selection of the internal-flow head loss ΔE* and saline mass transfer m coefficients, which are mutually independent, is clearly crucial to predicting the experimentally determined interfaces.

Concluding remarks

The current study has investigated the development of exchange flows across a submerged sill obstruction through a large-scale experimental study and complementary theoretical analysis using an extended two-layer internal-flow hydraulic modelling approach. The experiments focused on obtaining detailed synoptic velocity fields of these exchange flows from PIV measurements across the sill, as well as corresponding density profiles at specific sill locations using micro-conductivity probes. The synoptic velocity fields typically indicated that the lower saline intrusion layer reduced in overall thickness h₂ as the net-barotropic flow component in the upper freshwater layer increased (i.e. as the fresh–saline flux ratio q* increased). In the majority of runs, however, this dominant upper freshwater flow (i.e. q* > 1) was insufficient to block the saline intrusion across the sill completely. This dynamic blocking only occurred in runs EX6 and EX7 under exchange flow conditions with the strongest net-barotropic component in the upper fresh layer (i.e. the highest q* values), and for the parametric combination of reduced sill submergence depth h_b and density difference Δρ between the fresh and saline waters. Indeed, the experiments demonstrated that the magnitude of a densimetric Froude number based on the upper freshwater flux q₁ and the sill submergence depth h_b was required to exceed 0.125 for dynamic blockage of the saline intrusion layer, irrespective of the magnitude of the source fresh–saline volume flux ratio q*. For exchange flows with increasing net-barotropic components in the upper fresh layer, the presence of sharp slope discontinuities in the trapezoidal sill and sill–basin transitions was also expected to influence interfacial mixing, entrainment and the eventual blockage of the saline intrusions. However, no direct experimental evidence was observed to suggest that the sill and basin geometry, apart from the submergence depth h_b, played a significant role in the dynamic blocking of the saline intrusion. As such, it is anticipated that the parametric conditions required for saline layer blockage across the experimental sill (i.e. q₁²/(g₀′h_b³) > 0.125, Fig. 6) may also be applicable to real estuarine conditions with dominant net-barotropic flows in the upper freshwater layer. Local fresh (Q̇₁) and saline (Q̇₂) water fluxes were calculated at both ends of the horizontal sill crest to examine the mass transfer between the counter-flowing layers.
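The empirical blockage criterion quoted above reduces to a one-line check. The sketch below merely restates that experimental result (the threshold 0.125 and the parameter grouping follow Fig. 6; the variable names are illustrative):

```python
def saline_intrusion_blocked(q1, g_prime, h_b, threshold=0.125):
    """Dynamic-blocking check for the saline intrusion layer.

    q1: upper-layer freshwater flux per unit width [m^2/s];
    g_prime: reduced gravity g' [m/s^2]; h_b: sill submergence depth [m].
    Blockage was observed when q1^2 / (g' h_b^3) exceeded ~0.125,
    irrespective of the source flux ratio q*.
    """
    return q1**2 / (g_prime * h_b**3) > threshold
```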
At the marine basin M end of the sill (i.e. x/l_s = −1.0), the local flux ratio Q̇₁/Q̇₂ was found to vary over a similar range to the source fresh–saline volume flux ratio q*, while close to the impoundment I end of the sill (i.e. x/l_s = −0.15), this local flux ratio Q̇₁/Q̇₂ increased significantly (Fig. 8). Importantly, this finding was indicative of a significant reduction in the lower layer flux Q̇₂ in the direction of the saline intrusion across the sill, which also tended to increase with increasing net-barotropic flow in the upper freshwater layer (i.e. higher q* values). This represented significant mass exchanges of up to 86% from the lower saline layer to the upper freshwater layer, driven by interfacial entrainment under increasingly dominant upper freshwater flows (i.e. q* ≫ 1). It was therefore important to represent this mass transfer coefficient as an internal-flow process in the extended two-layer hydraulic modelling approach used to predict the dynamic behaviour of restricted exchange flows with net-barotropic components. The levels of interfacial mixing in the bi-directional stratified flows generated across the sill were indicated by the significant instabilities observed in the recorded density profiles (Fig. 9), especially at the impoundment I end of the sill (i.e. x/l_s = 0). A quantitative measure of the normalised mixing layer thickness δ/h_b was determined from the vertical separation of the ρ′ = 0.2 and 0.8 isopycnals in vertically-sorted density profiles across the sill (Fig. 10). This mixing thickness was shown (Fig. 11) to increase with the modified densimetric Froude number q₁²/(g₀′h_b³)(1/q*) = q₁q₂/(g₀′h_b³), indicating that for a prescribed saline volume flux q₂, increasing the freshwater volume flux q₁ and/or reducing g₀′ or h_b tends to result in increased interfacial mixing. Corresponding estimates of the Thorpe overturning length scales L_T (Fig. 12) were also shown to increase monotonically as q* increases, approaching 40–50% of the overall sill submergence depth h_b in some cases, especially close to basin I (i.e. x/l_s = 0). Clearly, these large-scale instabilities may be indicative of large, shear-induced, Kelvin–Helmholtz-type billows generated on the density interface at the leading edge of the sill crest under exchange flows with strong net-barotropic components in the upper layer. However, the relatively long time scale of the density profiling measurements meant that the estimated L_T values were more likely to be indicative of ensemble-averaged mixing characteristics rather than of individual instabilities. In this regard, the nature of the interfacial instabilities generated across the sill, as well as the mixing and entrainment mechanisms leading to blockage of the saline intrusion layer, may have been better identified using a two-phase PIV/PLIF system. Estimates of the composite Froude number G² were also obtained from the synoptic PIV velocity fields and density profile measurements across the sill. These indicated that the bi-directional stratified flow conditions generated across the sill remained subcritical (i.e. G² < 1) in all runs and for all q* values, although with G values typically increasing towards the impoundment I end of the sill (i.e. x/l_s → 0). It is noted here that under the idealised, inviscid, two-layer hydraulic modelling of maximal exchange flows with a net-barotropic component in the upper layer, a second internal control point (i.e. G² = 1) would be expected to form across the sill region of uniform depth and width (i.e. between sections B and C, Fig. 1).
However, the fact that the exchange flow conditions across the sill remained subcritical (i.e. G² < 1) throughout suggests that this second internal hydraulic control point (G² = 1) is displaced to a location in the freshwater impoundment I. An extended internal-flow hydraulic model has been developed to predict interface elevations at the two control sections, at A and at a sill section BC, assuming that maximal exchange flow conditions are generated across the sill; these predictions were compared with spatially-averaged u = 0 velocity interface elevations measured across the sill crest. Predictions from the extended two-layer hydraulic model were obtained for the idealised, inviscid flow case (i.e. ΔE* = 0, m = 0), the viscid flow case (i.e. ΔE* = 0.1, m = 0), and the viscid flow case with finite saline mass transfer to the upper freshwater layer (i.e. ΔE* = 0.1, 0 < m < 1). In general, reasonable agreement was observed between the predicted interface elevations h₂,A* and (h₂,BC* + h_s*) and the measured elevations across the sill, when appropriate values of the internal energy loss ΔE* (= 0.1) and saline mass transfer m (= 0.75 and 0.25) coefficients were specified in the extended two-layer hydraulic model. In particular, the specification of m = 0.25, corresponding to a 75% mass transfer of saline flux into the upper freshwater layer, appears to be in general accord with the calculated maximum saline volume flux Q̇₂ reduction of 86% across the sill, where a bi-directional stratified flow is still present. Furthermore, it is also noted that for the experimental runs in which the saline intrusion layer is dynamically blocked, the corresponding saline mass transfer coefficient m = 0 by definition, representing full entrainment of the saline layer into the upper fresh layer. It can therefore be concluded that the combined effect of bottom friction and interfacial entrainment is important in determining the behaviour of bi-directional stratified flows generated across the submerged trapezoidal sill obstruction, and in defining the conditions under which dynamic blocking of the saline intrusion occurs. More general application of the extended two-layer model developed in this paper to a wider range of restricted exchange flow configurations would require a detailed sensitivity analysis of its predictive capabilities over different sill and channel constriction geometries and over specified ranges of ΔE* and m values; both of these are beyond the scope of the current paper.
Analysis of Religious Tourism in Ukraine: Challenges of Time and Prospects

The main objectives of tourism business management are to develop and implement new strategies for promoting the product in the tourism market with the help of new technologies, as well as to improve existing ones by using the main management functions. The article aims to improve the religious tourism sector in the modern tourism market and to draw attention to religious tourism management. Research methods include analysis, synthesis, comparison, generalization and forecasting, as well as the use of systemic, activity-oriented, historical and culturological approaches. The article offers some ways of solving the issues of religious tourism, taking into account the specifics of this sector. It also covers some problematic issues in religious tourism in the context of the main management functions. The article argues that the potential of religious tourism remains unrealized in terms of management and marketing research on tourism. The crisis of the global tourism industry caused by the COVID-19 pandemic has done significant harm to the entire tourism sector of Ukraine's economy, but it opens a unique window of opportunity for Ukraine to become a world-famous religious destination. The WTTC drew attention to this, urging public- and private-sector leaders to work together to pave the way for the economic recovery needed by the travel and tourism industry and to create millions of jobs.

Introduction

Researchers of religious tourism claim that many pilgrims prefer travelling in a group of like-minded people for the performance of rituals upon arrival at the desired place or for preachings about a righteous life. Indeed, the spiritual power of pilgrimage to the Holy Places is generally recognized. At the same time, it is no less important that the participants in a pilgrimage cultivate a special sense of community during it. The socio-spiritual guidelines of a certain faith are often engraved in the minds and behaviour of people outside of church institutions and religious denominations. The transformation processes in the sphere of pilgrimage and religious tourism have been considered in a number of scientific works. Z. Bauman, in the work «From Pilgrim to Tourist», considered transformational processes in the field of tourism, describing the differences between the tourist, the tramp and the pilgrim, and how these affect the formation of society [1]. George Tardge, in the book «Pilgrimage», considered the motives for religious travel and its goals, the search for truth, religious routes and sacred places [2]. P. Koelo, in the novel «Diary of the Magician, or Pilgrimage», described the path of St. James to Santiago de Compostela along routes through Spain and Portugal [3]. Among Ukrainian researchers, attention should be paid to the work of V. Pazenok, «Tourismology. The Theoretical Image of Tourism», which describes the goals of religious tourism and the characteristics of pilgrims and religious tourists, their differences and common features [4]. S. Panchenko, in the monograph «Religious Tourism in Ukraine: State, Potential, Perspectives», considered religious tourism in detail, including its features in terms of different denominations, objects of pilgrimage, and prospects for the development of this direction [5]. O.
Borisova, in the textbook «Specialized Types of Tourism», highlights religious tourism as a separate type and considers its features in terms of interconfessional relations, paying detailed attention to the routes and sacred places of different denominations [6]. V. Gorskyi, in the article «A Trip as a Phenomenon of Culture», describes the value of a pilgrimage for a person and for the tourism industry and its infrastructure, and the spiritual value of a religious journey [7]. Z. Sapelkina, in the work «Religion and Culture. Religious Tourism», considers religious tourism in terms of management and marketing and analyzes the prospects for the development of this direction [8]. P. Yarotskyi, in the scientific work «Philosophical Aspect of Pilgrimage (Religious) Tourism in Ukraine», examines the diplomatic resolution of interconfessional problems and conflicts and the laws and legal documents that regulate them internationally, since Ukraine is a multi-confessional state with its own peculiarities of culture, religion and traditions [9]. The analysis of these works testifies that the problems of religious tourism need further study. Management implies applying management concepts and tools while taking into account socio-economic, demographic and socio-cultural factors. It allows one to gain benefits and additional financial resources. The main objectives of tourism business management are to develop and implement new strategies for promoting the product in the tourism market with the help of new technologies, as well as to improve existing ones by using the main management functions. Management functions reflect both the essence and the content of management activities in the field of tourism. The main functions of tourism business management include planning, organization, regulation, motivation and control. Each function is important for the organization of tourism activities, in particular in the field of religious tourism [10]. Tourism business management also ensures the interaction between organization, planning, forecasting and financial activities, legal support, the psychology of communication and customer services, the sociology of labour and other aspects of tourism. Tourism management principles can be divided into general and individual. General principles include systematicity (the system's interdependence with other systems; interconnections between all links of the system; consideration of the effects of internal and external factors on the system's functioning); integration; multifunctionality (taking into account all aspects of tourism activities, not only those associated with sales of tourism products); objectivity; and values-based orientations. Individual principles involve scientific justification; an optimal correlation between centralization and decentralization in making management decisions; planning; motivation; and observance of employees' rights and assurance of their responsibility [11]. Religious tourism management requires cooperation with religious leaders and knowledge of the cultural differences of the target audience, religious practices and the spiritual components of pilgrimage or religious leisure activities. These are essential for preparing the general programme of a tour, producing souvenirs, providing food or clothing, and involving support staff [12]. Ukraine is rich in religious and pilgrimage sites, which attract many pilgrims. There are many significant religious objects in all regions.
The largest pilgrimage centres in the country are in Kyiv (Saint Sophia's Cathedral, Kyiv Pechersk Lavra). The article aims to improve the religious tourism sector in the modern tourism market and to draw attention to religious tourism management.

Methods

The problem of religious tourism was investigated through the synthesis and analysis of philosophical, religious and management literature. Culturological, phenomenological and religious-studies methods were also used, which allow one to look at the development of religious tourism from the point of view of history and of forecasts for the future. The research was carried out on the basis of scientific analysis, using the principles of objectivity, systematicity, historical and comparative analysis, and causation, together with axiological, logic-semantic, activity-based, comparative and prognostic approaches. The methodological basis of the article was formed by scientific works that define the concepts, approaches and methods serving as instruments for studying religious tourism as a phenomenon of social life, characterizing the problems of human self-affirmation by means of travel and the implementation of religious journeys [13]. The article uses elements of several scientific approaches:
- systemic (the worldview of pilgrims is considered as a complex holistic system, ultimately consisting of simple elements and conditioned by both objective and subjective factors);
- phenomenological (everyday human being and life; this approach makes it possible to consider the flow of consciousness in terms of the sense-forming constructs of the worldview system of the culture of pilgrims);
- comparative (comparison of the ideological orientations of people of various religions in various socio-cultural systems as a principle for achieving objective knowledge);
- socio-philosophical (makes it possible to present pilgrimage as a specific social phenomenon and a noticeable factor in the development and improvement of society).

Results

In total, there are about 130 pilgrimage offices, services, companies and agencies in Ukraine. The geography of their proposed routes covers the entire territory of Ukraine and all countries of Europe and the Middle East. On average, one such service offers about 50 tours per year. The exact number of domestic religious tourists cannot be determined. Assuming that all active believers go on trips only for religious purposes, their share in the total tourist flow ranges from 8.5% to 20%. At the same time, religious tourism accounts for 160 to 360 thousand organized trips and excursions, of which 120–300 thousand were foreign and only 25–60 thousand were domestic. It is important to note that the share of religious tourism in the total tourist flow in Ukraine hardly exceeds 8–10%, most likely being even lower. This means that the level of religious tourism development in Ukraine is at least twice as low as the world average. There are on average about 60–70 thousand foreign pilgrims in Ukraine and about the same number of religious and educational tourists from abroad. Two-thirds of them are Israelis, and no more than 15% (8–10 thousand) are Russian pilgrims [14]. A religious worldview affects society ambiguously and sometimes contradictorily. Indeed, it can both unite and divide people, having caused many wars and conflicts on religious grounds. It can contribute to developing good moral values, a humane worldview and civic activism. However, it can also reinforce intolerance, hatred of otherwise-minded people and even general contempt for a person as such.
It can result in fanaticism, misanthropy, antisocial behaviour and religious extremism. Thus, all these and other measures can facilitate the dynamic development of tourism in Ukraine, with further prospects for developing international tourism in general and cognitive tourism in particular. Compared to European countries, Ukraine does not use the different types of tourism that could ensure its economic growth and solve the existing problems in the protection of its historical heritage [15]. In recent years, the number of Ukrainian tourists making pilgrimages to the relics of their country has increased significantly. But it is almost impossible to calculate the number of domestic travellers, because they do not use agencies and instead travel on their own or at the initiative of the clergy. International tourists who come to Ukraine for religious purposes are also not included in the statistics. The approximate annual number of Hasidic pilgrims is from 30,000 to 40,000. As for the distribution of incoming (international) tourists, out of 83,703 tourists in 2019, 5,232 came for business reasons, 58,404 for recreation, 2,390 for health purposes, 194 as athletes, 16,874 for special tourism (not specified), and 422 for other purposes [16].

Discussion

The sacred places of Ukraine should be considered in terms of: 1) significant religious-and-historical places, objects and similar sites; 2) denominational objects (Orthodox, Jewish, Catholic, Muslim, Hasidic, pagan); 3) descriptions of specific saints and of religious and historical places by region (churches, convents or monasteries, lavras, graves, hermitages, icons, places of residence of prominent figures of different denominations). The main objectives of developing religious tourism in Ukraine are as follows: 1) to restore all ancient religious architectural sites; 2) to restore religious architecture made of wood; 3) to restore ancient pictorial, written and sculptural cult objects, providing rooms for their public display (i.e., to establish museums); 4) to restore palace complexes and arrange their territories; 5) to publish popular literature in foreign languages informing about the religious history of Ukraine, the development of individual religious and spiritual centres and their prominent figures, many of whom also lived in Western Europe, and all sacred places in the country; 6) to create a system of benefits for those tourist organizations which promote religious tourism as part of their package of services; 7) to create a system of benefits for investors, to attract investments in the restoration of historical monuments in Ukraine; 8) to prepare optimal routes for religious tourists, including to other historic sites of Ukraine; 9) to establish affiliations of travel agencies abroad, launch an advertising campaign, publish brochures on religious tourism in Ukraine for foreign readers, and create and broadcast television programmes about prominent religious sites and figures; 10) to introduce a general register of important religious sites and sacred sites from a tourist's point of view; 11) to organize logistic systems and infrastructure in Ukraine to provide routes for religious tourists and pilgrims (transport, hotels, catering establishments, sightseeing tours, souvenirs) [17]. Also, it is essential to further develop cognitive tourism.
This requires the following steps: 1) to create a base of tourist resources, which should include all historical and cultural monuments; 2) to develop long-term programmes and strategies for restoring cultural monuments; 3) to cultivate in tourists responsibility for preserving cultural monuments and a corresponding attitude towards the objects of cognitive tourism; 4) to organize scientific and thematic excursions to historical and cultural monuments for educational purposes; 5) to implement a consolidated policy to increase the role of cognitive tourism in society.

Conclusions

Tourism in Ukraine is a poorly developed sector of the economy, with religious tourism in its infancy. New challenges of the time and the COVID-19 pandemic are making their own adjustments, and religious tourism is now in a difficult situation. Nevertheless, as scientific studies show, religious tourism during the pandemic has established itself as a sustainable type of tourism, surviving due to the stability and faith of believers. In Ukraine, however, it still lacks a clear concept, content and information sources. Therefore, in our view, it is worth launching a website that accumulates information on existing forms of religious tourism and pilgrimage and on the routes and sacred sites offered by travel agencies, tour operators and pilgrimage centres. This would provide people with relevant and comprehensive information while enabling the monitoring and analysis of the tourist services market [18]. Ukraine should be able to attract international and domestic tourists with its historical heritage. Besides, all monuments should be restored on the basis of relevant programmes and laws, and should be able to provide for, develop and preserve themselves thanks to tourist money. Moreover, one should always remember that everything starts with culture in general and an attitude towards cultural monuments in particular. It is vital to respect both the cognitive and the cultural heritage of the country. Given that everyone is constantly preoccupied with worries, life situations, various plans, successes and losses, people need, first of all, spiritual enrichment and, at the same time, some time to rest. They should always care about their body, soul and spiritual principles. When combined with physical work, spiritual work can create a new personality, realized in a spiritual sense.
AuthPDB: Query Authentication for Outsourced Probabilistic Databases

Spurred by developments such as cloud computing, there are increasing efforts towards the outsourcing of data management. A company (data owner) who lacks expertise and computational resources can outsource his data to a third-party service provider (server), who provides storage and query evaluation on the outsourced data as services. One of the security concerns of the outsourcing paradigm is the integrity of the returned query results on the outsourced data. In this paper, we consider the outsourcing of probabilistic databases, on which query evaluation is of high complexity. A dishonest server may return cheap (and incorrect) query answers, hoping that the client, who has weak computational power, cannot catch the incorrect results. To address this issue, we design efficient integrity verification methods for both all-answer and top-k query evaluation on outsourced probabilistic databases. Our empirical results demonstrate the effectiveness and efficiency of our verification methods.

I. INTRODUCTION

Recently there has been an increasing demand for managing incomplete and uncertain data that emerges from scientific data management, sensor data management, data cleaning [1], [2], and information extraction [3]. Probabilistic database systems have been considered a successful tool for managing uncertain data. A large number of systems (e.g., [4], [5], [6]) have been developed for performing efficient query processing on probabilistic databases. One major challenge of query evaluation on probabilistic databases is its high complexity; the evaluation of certain types of queries is of #P-complete complexity [5]. The high complexity of query evaluation and the complex semantics behind probabilistic databases hinder common users from establishing probabilistic database management for their own business and research use. A cost-effective solution is to outsource the probabilistic database to a third-party service provider (e.g., the Cloud). In the outsourcing paradigm, the data owner who has a large volume of data but limited knowledge and resources for data analysis can send his data to the service provider. There exist quite a few outsourcing services that support applications based on probabilistic databases, for example, Nogamy [7] for data integration, Flatworld [8] for data cleaning, and Informatics [9] for scientific data management. In this paper, we consider two types of queries on probabilistic databases, namely all-answer queries that return all possible answers from the probabilistic database, and top-k queries that return the k possible answers of the largest probability. Example 1.1 gives examples of both types of queries.

Example 1.1: Consider a probabilistic data instance D in Table I (a), in which each record corresponds to an estate property listed for sale, collected from the Internet. Each record is associated with a tuple probability that describes its reliability. Consider a query Q: σ_{Price≥20k}(D) (i.e., return all property records whose price is at least 20k). Table I (d) shows the all-answer results of Q, which include four possible answers {t₂}, {t₃}, {t₂, t₃}, and ∅. Each possible answer is associated with an answer probability, calculated from the tuple probabilities. How to compute the answer probability will be explained in Section II.
The top-3 answers are {({t₂, t₃}, 0.48), ({t₂}, 0.32), ({t₃}, 0.12)} (i.e., the answers of top-3 probability).

The complexity of all-answer and top-k query evaluation on probabilistic databases is exponential in the size of the dataset. An untrusted service provider (server) is incentivized to improve its revenue, e.g. by computing with fewer resources while charging for more, especially when it believes that the client cannot easily re-compute the results given his limited computational resources. Therefore, the server can forge the probability of possible answers with a random value. It can also cheat on the top-k results by randomly choosing k possible answers. Furthermore, a malicious server may return wrong query answers intentionally (e.g., return {t₃} as the top-1 query result instead of {t₂, t₃}) if the server is hired to promote the estate properties in the Fonda area and demote those in the Cave Junction area.

The problem of authenticating query evaluation over outsourced deterministic databases has been well investigated in the literature (e.g., [10], [11], [12]). These existing solutions can verify whether the records (named the hit set) in the returned possible answers satisfy the selection constraint of the query (e.g., Price ≥ 20k in Example 1.1). However, they cannot verify whether the probability of each answer (e.g., the probability 0.32 of the answer {t₂}) is correct. Apparently, re-computing the probability of each possible answer is prohibitively expensive, as the number of possible answers is exponential in the size of the hit set. For example, the hit set of the query Q contains two records {t₂, t₃}; there are 2² = 4 possible answers in the all-answer results of Q.

[Table I: (a) an example of probabilistic database D; (b) possible instance W₁, with Pr(W₁) = p₁p₂(1 − p₃) = 0.128; (c) possible instance W₂, with Pr(W₂) = (1 − p₁)(1 − p₂)p₃ = 0.072; query Q: σ_{Price≥20k}(D).]

In this paper, we design AuthPDB, a framework that supports efficient result integrity verification of all-answer and top-k query evaluation on outsourced probabilistic databases. Figure 1 illustrates the framework of AuthPDB in a nutshell. Before outsourcing, the data owner constructs an authenticated data structure (ADS) T of his probabilistic dataset D. He transmits both D and T to the server, and sends the root hash h_root of T to the legitimate clients. For a given query, the server evaluates the query on D and obtains the answers R. To serve the purpose of integrity verification, the server constructs a proof of R, which takes the format of a verification object (VO). The server returns both R and the VO to the client. The client verifies whether R is correct based on the VO. In particular, our contributions include the following. First, we design a new ADS named aggregated probability B-tree (APB-tree). The APB-tree enables verification of the probability of the query answers directly on the tree by integrating the tuple probability information in the outsourced database with the state-of-the-art Merkle hash tree [13]. Second, we design an authentication method that verifies the authenticity, soundness and completeness of all-answer query results. The verification method enables the server to construct a VO of the returned answers from the APB-tree. To avoid re-computing the probability of each possible answer, the client partitions the answers into several groups. The probability of the possible answers in the same group satisfies a certain group property.
Based on the grouping, the client re-calculates the probability of only one possible answer per group. The probability of the other possible answers in the same group is verified based on the group property, whose cost is much cheaper than probability re-computation. Third, we design an efficient verification method to authenticate the top-k answers (i.e., whether the returned k possible answers are indeed associated with the top-k probability). Instead of re-computing the probability of all possible answers (as the naive method does), our authentication method minimizes the re-computation of probabilities that are not top-k. In particular, our method only needs to compute at most ⌈k ln k⌉ + k probabilities to find the top-k probability; in other words, it only has to compute at most ⌈k ln k⌉ probabilities besides the top-k ones. Fourth, we formally prove the security of our verification methods for both all-answer and top-k queries in the presence of a malicious adversary who has full knowledge of the verification methods. Last but not least, we perform an extensive set of experiments on both real-world and synthetic datasets to evaluate the performance of our verification approaches. Our experimental results show that our verification approaches are efficient.

The paper is organized as follows. Section II explains the preliminaries. Section III discusses our ADS structure. Section IV presents our verification methods for all-answers and top-k answers. Sections V and VI present the security and complexity analysis respectively. Section VII discusses our experimental results. Section VIII presents the related work. Section IX concludes the paper.

A. Probabilistic Databases

In this paper, we consider the tuple uncertainty probabilistic database model [14], [5] that is widely used for probabilistic database management. In this model, each probabilistic relational instance D consists of a set of basic attributes that describe the data. Each tuple t_i is associated with a probability p_i ∈ (0, 1] that denotes the existence probability of t_i. We call the pair (t_i, p_i) the tuple-probability pair (TP-pair). We follow the same tuple-independence assumption as [5], [14], i.e., the existence of a tuple is independent of the existence of the other tuples in the database. Table I (a) shows an example of the probabilistic database. We consider the possible world semantics [15] that is widely used for probabilistic databases. It models the probabilistic database as a probability distribution over all deterministic versions of the database [16], [17]. Following this model, for any probabilistic database D, its possible worlds PWD(D) is defined as the set of possible database instances of D, each instance W associated with a probability Pr(W). Formally, PWD(D) = {(W, Pr(W)) | W ⊆ D}, where Pr(W) = Π_{t_i∈W} p_i × Π_{t_i∈D\W} (1 − p_i). In general, given a probabilistic database that consists of n tuples, there are 2ⁿ possible worlds. For example, the probabilistic database in Table I (a) has eight possible worlds. Table I (b) and (c) display two of these possible worlds. In this paper, we consider two types of query evaluation over the probabilistic database, namely the all-answer and top-k queries. For the sake of simplicity, we only consider single-dimensional selection queries, denoted as σ_{A∈[l,u]}(D), where A is an indexing attribute, and [l, u] is the selection range. Our approach can be extended to multi-dimensional selection queries. Next, we formally define the two types of queries and present their evaluation process.
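As a concrete illustration of these semantics, the following Python sketch enumerates the possible worlds of a small tuple-independent instance and evaluates an all-answer selection by brute force; the tuple ids and probabilities are illustrative (chosen to be consistent with the probabilities in the running example), and this is of course the exponential-time baseline, not the paper's verification method:

```python
from itertools import product

def possible_worlds(tp_pairs):
    """Enumerate the 2^n possible worlds of a tuple-independent
    probabilistic database, yielding (world, Pr(world))."""
    for presence in product([False, True], repeat=len(tp_pairs)):
        world, prob = set(), 1.0
        for (t, p), present in zip(tp_pairs, presence):
            prob *= p if present else (1.0 - p)
            if present:
                world.add(t)
        yield frozenset(world), prob

def all_answers(tp_pairs, predicate):
    """Apply the selection to every world; sum the probabilities of
    worlds that yield the same answer (the all-answer semantics)."""
    answers = {}
    for world, prob in possible_worlds(tp_pairs):
        answer = frozenset(t for t in world if predicate(t))
        answers[answer] = answers.get(answer, 0.0) + prob
    return answers

D = [("t1", 0.4), ("t2", 0.8), ("t3", 0.6)]   # illustrative TP-pairs
hit = {"t2", "t3"}                            # tuples satisfying Q
print(all_answers(D, lambda t: t in hit))
# {t2,t3}: 0.48, {t2}: 0.32, {t3}: 0.12, {}: 0.08
```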
All-answer queries. Given a probabilistic data instance D and its possible worlds PWD(D), for a given query Q on D, its answer Q(D) is defined as a set of possible answers, each associated with a probability. The result is obtained by applying Q to each deterministic instance in PWD(D). The probability of each unique answer is calculated as the sum over all instances that return the same answer. Formally, R = Q(D) = {(A, P) | A = Q(W) for some W ∈ PWD(D), and P = Σ_{W∈PWD(D): Q(W)=A} Pr(W)}. We define the hit set H as the set of unique tuples in R (i.e., the unique tuples in D that satisfy the selection range of Q). We call each pair (A, P) ∈ R an answer-probability pair (AP-pair). A more efficient way to compute the probability P of each unique answer A is the following (we use \ to denote set difference): P = Π_{t_i∈A} p_i × Π_{t_i∈H\A} (1 − p_i).   (3)

Example 2.1: Continue with the probabilistic database D in Table I.

B. Authenticated Data Structure (ADS)

One of the widely-used authenticated data structures (ADS) is the Merkle Hash tree (MHT) [13]. A MHT T is a tree in which each leaf node N stores the digest of a tuple t: h_N = H(t), where H() is a one-way, collision-resistant hash function (e.g. SHA-1 [18]). Each internal node N of T is assigned h_N = H(h_{N₁}|| . . . ||h_{N_f}), where N₁, . . . , N_f are the children of N, and || is the concatenation operator. The hash value h_root of the root node is used as the digest of the tree. Before outsourcing, the data owner constructs a MHT of the relation D, and keeps h_root locally. Then he sends the MHT to the server, and h_root to the client. When the server returns the query result to the client, it searches through the MHT and constructs a verification object (VO) of the query results by including the information in the MHT regarding the query results. The client can verify the query results by re-computing the root hash h′_root from the VO and the query results. The query results are considered correct if h′_root = h_root.

C. Condensed RSA

RSA [19] is a classic public key encryption scheme. The scheme generates two λ/2-bit random prime numbers p and q, where λ is the security parameter. It computes B = pq, and finds a pair of integers (e, d) such that e, d ∈ Z*_B and ed ≡ 1 mod φ(B), where φ(B) = (p − 1)(q − 1). The public key p_k = (B, e) is released to the public, while the private key s_k = d is kept secret. Given a message m, its signature σ is generated as σ = H(m)^d (mod B), where H() is a full-domain cryptographic hash function that converts a message into a value in Z*_B. To enable authentication of a sequence of messages, a signature aggregation scheme named Condensed-RSA [20] compresses a set of RSA signatures into a single signature. To prove the authenticity of m₁, . . . , m_n, the prover computes a single aggregate signature σ₁,ₙ as σ₁,ₙ = Π_{i=1}^n σ_i (mod B). Only σ₁,ₙ is sent to the verifier, who verifies the authenticity of m₁, . . . , m_n by checking whether (σ₁,ₙ)^e ≡ Π_{i=1}^n H(m_i) (mod B).
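The Condensed-RSA mechanics can be sketched in a few lines of Python. The hash-to-Z_B mapping and the textbook-sized key below are illustrative simplifications (a real deployment needs a proper full-domain hash and a full-size modulus):

```python
import hashlib

def fdh(m: bytes, B: int) -> int:
    """Toy full-domain hash into Z_B (illustration only)."""
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % B

def rsa_sign(m: bytes, d: int, B: int) -> int:
    return pow(fdh(m, B), d, B)              # sigma = H(m)^d mod B

def condense(sigmas, B: int) -> int:
    agg = 1
    for s in sigmas:                         # sigma_{1,n} = prod sigma_i mod B
        agg = agg * s % B
    return agg

def condensed_verify(msgs, agg: int, e: int, B: int) -> bool:
    expected = 1
    for m in msgs:                           # prod H(m_i) mod B
        expected = expected * fdh(m, B) % B
    return pow(agg, e, B) == expected        # agg^e ?= prod H(m_i)

B, e, d = 3233, 17, 2753                     # p = 61, q = 53 (toy key)
msgs = [b"(t2, 0.8)", b"(t3, 0.6)"]
agg = condense([rsa_sign(m, d, B) for m in msgs], B)
assert condensed_verify(msgs, agg, e, B)
```

Correctness follows from (Π H(m_i)^d)^e ≡ Π H(m_i)^{de} ≡ Π H(m_i) (mod B), provided each H(m_i) is invertible modulo B.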
D. Authentication Goal

All-answer queries. Given a probabilistic data instance D and an all-answer query Q, let R be the answer of Q that is returned by the server. The client verifies the authenticity, soundness, and completeness of R.
• Authenticity: for each AP-pair (A_i, P_i) ∈ R, it verifies whether all tuples in A_i exist in D and have not been tampered with.
• Soundness: for each AP-pair (A_i, P_i) ∈ R, it verifies two types of soundness: (1) result soundness (R-soundness) verifies whether A_i satisfies the selection condition of Q; and (2) probability soundness (P-soundness) verifies whether P_i is correct (according to Equation (3)).
• Completeness: it verifies whether R includes all the AP-pairs that satisfy Q.

Top-k queries. Given a top-k query Q_k, the client verifies the authenticity, soundness, and completeness of the top-k results.
• Authenticity: the same as that of the all-answer queries.
• Soundness: for each AP-pair in R, it verifies three types of soundness: result soundness (R-soundness), probability soundness (P-soundness), and top-k soundness (TopK-soundness). R-soundness and P-soundness are the same as those of the all-answer queries. TopK-soundness verifies whether the AP-pairs of R are indeed of the k largest probability. It is worth noting that TopK-soundness naturally implies completeness; thus, we do not elaborate on the completeness verification of top-k queries.

The problem of verifying the authenticity, soundness, and completeness of selection queries over deterministic databases has been well investigated in the literature (e.g., [10], [11], [12]). However, these works can only authenticate the hit set; they cannot verify P-soundness and TopK-soundness. The challenge to address is to enable the client, who has limited computational resources (e.g., on mobile devices), to verify query results that are potentially large, as the number of possible answers is exponential in the hit set size.

E. Authentication Protocol and Security Model

In this section, we first define the authentication protocol. Then we define the security of the authentication protocol. We adapt the definition of authentication protocols in [21] to our setting. Formally,

Definition 2.1 (Authentication protocol): Let D be any probabilistic database. Let Q be an (all-answer/top-k) query on D, auth(D) be the authenticated data structure (ADS) constructed from D, and Π be the proof of the query result R. The authentication protocol is a collection of the following four polynomial-time algorithms:
• {s_k, p_k} ← genkey(1^λ): given the security parameter λ, it outputs a secret key s_k and a public key p_k;
• {auth(D), δ} ← setup(D, s_k, p_k): on input of a probabilistic database D, the secret key s_k, and the public key p_k, it computes the authenticated data structure auth(D) and its digest δ;
• {R, Π} ← certify(Q, auth(D), p_k): given a query Q, the authenticated data structure auth(D) and the public key p_k, it returns the result R, along with its proof Π;
• {accept, reject} ← verify(Q, R, δ, Π, p_k): given a query Q, the result R, the digest δ of the ADS auth(D), the proof Π, and the public key p_k, it outputs either accept or reject.

The genkey protocol is straightforward. Given the security parameter λ, the data owner picks a collision-resistant hash function H whose output length is λ bits. Then the data owner generates the keys of RSA signatures (i.e., B, e, and d) with respect to λ (Section II-C). The genkey protocol outputs a pair of secret and public keys, where s_k = d and p_k = {H, B, e}. In the following sections, we mainly focus on the design of the setup, certify, and verify algorithms.
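For orientation, the four algorithms can be viewed as the following interface; the class and method signatures below are a skeleton mirroring Definition 2.1, not the paper's concrete constructions:

```python
from typing import Any, Tuple

class AuthScheme:
    """Interface sketch of the four-algorithm authentication protocol."""

    def genkey(self, lam: int) -> Tuple[Any, Any]:
        """Return (sk, pk) for security parameter lam."""
        raise NotImplementedError

    def setup(self, D: Any, sk: Any, pk: Any) -> Tuple[Any, Any]:
        """Owner side: build the ADS auth(D) and its digest delta."""
        raise NotImplementedError

    def certify(self, Q: Any, auth_D: Any, pk: Any) -> Tuple[Any, Any]:
        """Server side: evaluate Q on auth(D); return (R, proof)."""
        raise NotImplementedError

    def verify(self, Q: Any, R: Any, delta: Any, proof: Any, pk: Any) -> bool:
        """Client side: accept (True) or reject (False)."""
        raise NotImplementedError
```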
In this paper, we consider a malicious adversary who has full knowledge of the authentication protocol. Next, we define the security of the authentication protocol against such a malicious adversary.

Definition 2.2 (Security): Let Auth be an authentication scheme {genkey, setup, certify, verify}, λ be the security parameter, ε(λ) be a negligible function, and {s_k, p_k} ← genkey(1^λ). Let also Adv be a probabilistic polynomial-time adversary that is only given p_k. The adversary has unlimited access to all algorithms of Auth, except for the algorithm setup, to which he has only oracle access. Then, for the query Q, Adv returns a wrong result R ≠ Q(D) and a proof Π. The authentication scheme Auth is secure if for all λ ∈ N, for all {s_k, p_k} pairs generated by the genkey scheme, and for any probabilistic polynomial-time adversary Adv, it holds that Pr[verify(Q, R, δ, Π, p_k) = accept] ≤ ε(λ). Intuitively, the authentication protocol is secure if the probability that wrong query results can be accepted is negligible.

III. AGGREGATED PROBABILITY B-TREE (APB-TREE)

To facilitate efficient query authentication on probabilistic databases, we design a new authenticated data structure named aggregated probability B-tree (APB-tree). In this section, we first describe the APB-tree structure. Then we present the setup protocol that constructs the APB-tree.

[Fig. 2: An example of the APB-tree.]

The APB-tree is built on top of the Merkle Hash tree (MHT) [13]. It stores an aggregated probability in each tree node. In particular, given a probabilistic database instance D, as well as the secret key s_k and the public key p_k, its TP-pairs are sorted by the canonical order of their values on the indexing attribute A. Each TP-pair (t_i, p_i) corresponds to a leaf node in the APB-tree, whose value takes the format (t_i, p_i, σ_i, h_i), where σ_i is the RSA signature of the TP-pair, h_i = H(t_i||p_i||σ_i), and H() is a full-domain cryptographic hash function that is part of p_k. For any internal node N of the APB-tree, let cov(N) be the set of tuples enclosed in the leaf nodes of the subtree rooted at N, and C = {N₁, . . . , N_f} be the children nodes of N, where f is the fanout of N. Apparently, cov(N) = ∪_{i=1}^f cov(N_i). The internal node N takes the format (t_min, t_max, p, σ, h), where: t_min = min(cov(N)) and t_max = max(cov(N)) are the minimum and maximum values of the tuples in cov(N) respectively; the probability p = Π_{1≤i≤f} N_i.p (i.e. the product of the probabilities of the leaf tuples in the subtree rooted at N); the signature σ = Π_{1≤i≤f} N_i.σ (mod B); and the hash value h = H(t_min||t_max||p||σ||h_{1→f}), where h_{1→f} = H(h₁|| . . . ||h_f). Figure 2 shows an example of the APB-tree. It is worth noting that the APB-tree only supports one-dimensional query authentication. It can be extended to facilitate multi-dimensional selection queries by switching the underlying indexing structure to the R-tree [22]. Before outsourcing the original database instance D to the server, the data owner constructs the APB-tree T from D, and sends both D and T to the server. The data owner only sends the root hash h_root to legitimate clients.
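The node contents of the APB-tree can be assembled bottom-up as in the following sketch; the dictionary representation and the naive field serialization are illustrative choices, while the aggregation rules (product of probabilities, modular product of signatures, and the hash chaining) follow the definitions above:

```python
import hashlib

def Hc(*parts: bytes) -> bytes:
    """Collision-resistant hash of the concatenated inputs."""
    return hashlib.sha256(b"".join(parts)).digest()

def enc(x) -> bytes:
    return repr(x).encode()       # naive field serialization (illustrative)

def leaf(t, p, sigma):
    """Leaf node (t_i, p_i, sigma_i, h_i), h_i = H(t_i||p_i||sigma_i)."""
    return {"t_min": t, "t_max": t, "p": p, "sigma": sigma,
            "h": Hc(enc(t), enc(p), enc(sigma))}

def internal(children, B):
    """Internal node (t_min, t_max, p, sigma, h) over its children."""
    t_min = min(c["t_min"] for c in children)
    t_max = max(c["t_max"] for c in children)
    p, sigma = 1.0, 1
    for c in children:
        p *= c["p"]                       # aggregated probability
        sigma = sigma * c["sigma"] % B    # condensed-RSA aggregate
    h_1f = Hc(*[c["h"] for c in children])      # H(h_1 || ... || h_f)
    return {"t_min": t_min, "t_max": t_max, "p": p, "sigma": sigma,
            "h": Hc(enc(t_min), enc(t_max), enc(p), enc(sigma), h_1f)}
```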
IV. AUTHENTICATION METHODS

We observe that the authenticity and R-soundness of both all-answers and top-k answers can be verified by the same process that verifies the soundness and completeness of the hit set. Therefore, we design a verification method that consists of two steps: (1) verification of the hit set; and (2) verification of the all-answers/top-k answers based on the authenticated hit set. Note that the hit set can be generated from the AP-pairs in the query results; the server does not need to return it separately from the query results. Next, we explain the details of these two steps. The verification of the hit set is discussed in Section IV-A. The verification of all-answers and top-k answers is explained in two separate subsections (Sections IV-B and IV-C) respectively.

A. Authentication of Hit Set

There are quite a few existing solutions to verify the authenticity, soundness and completeness of selection query execution for outsourced deterministic databases (e.g., [23], [24]). The common idea of these solutions is the following: the server traverses the ADS and visits the essential nodes to construct the VO. The VO is sent back to the client together with the query results. From the query results and the VO, the client re-constructs the traversal path used in query execution and verifies that it is indeed authentic. We adapt the same idea in this paper to verify the correctness (i.e., authenticity, soundness, and completeness) of the hit set.

Certify Protocol: We first introduce a number of definitions before the discussion of the Certify protocol.

Definition 4.1: Given an APB-tree T and a selection query Q of the range [l, u], we say an internal node N(t_min, t_max, p, σ, h) of T is a maximum false hit node (MF-node) if both of the following conditions are satisfied: Condition (1): t_min > u or t_max < l (i.e., the tuples of N are false hits of Q); and Condition (2): the parent of N does not satisfy Condition (1) (i.e., N is maximal).

Given a selection query Q, we categorize a tuple t into one of three types:
• M-tuple (short for matching tuple): if t is located in the query range;
• NC-tuple (short for non-candidate tuple): if t is a descendant of an MF-node;
• C-tuple (short for candidate tuple): if t does not satisfy the selection condition and is not covered by any MF-node.
Obviously, the hit set H is the set of M-tuples.

Definition 4.2: Given a probabilistic data instance D, its APB-tree T, and a query Q, let N be a set of nodes of T. We say N is the minimum coverage set (MCS) of Q if: (1) ∪_{N∈N} cov(N) = H, where H is the hit set of Q (i.e., N covers all hit tuples); (2) for any pair of nodes N_i, N_j ∈ N (i ≠ j), cov(N_i) ∩ cov(N_j) = ∅ (i.e., N_i and N_j cover non-overlapping tuples); and (3) N contains the minimum number of nodes. In the rest of the paper, we use M(Q) to denote the MCS of a query Q. Take Figure 2 as an example, and consider a query Q whose hit set is H = {t₃, t₄, t₅, t₆}. The MCS of the query Q is M(Q) = {N₃, N₄₆}.

Now we are ready to describe how to construct the VO.

Definition 4.3: Given a probabilistic data instance D and its APB-tree T, consider a query Q on T, and let C and MF be the set of nodes of C-tuples and the set of MF-nodes respectively. The VO of Q includes the following information:
• for each leaf node N in M(Q) ∪ C, the pair (t, p, σ) is stored in the VO, where t, p and σ are the tuple, probability, and RSA signature stored in N;
• for each internal node N in M(Q) ∪ MF, the tuple (t_min, t_max, p, σ, h) is stored in the VO.
A pair of brackets is injected before and after the objects that are located in the same tree node, to denote the structure information.

Verify Protocol: Given a query Q and the returned results R, the protocol verifies the correctness of the hit set H. This is achieved by re-constructing the root hash from the VO and comparing it against the root digest received from the data owner. In particular, for each leaf node N in the VO, the client computes h_N = H(t_i||p_i||σ_i), where t_i, p_i and σ_i are obtained from the VO directly. For each internal node N, if N is included in the VO, the client can easily calculate h_N = H(t_min||t_max||p||σ||h_{1→f}). For those internal nodes that are not included in the VO, the client recovers h_N from their children nodes. The client repeats this process until it obtains the root hash value h′_root. The client compares h′_root with the local copy h_root, which is shared by the data owner. If h′_root = h_root, the client is assured that the VO was constructed from the original APB-tree.
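The bottom-up root-hash recovery can be sketched as a small recursion over a VO-shaped tree; the node encoding below ("hash" for pruned subtrees whose digests are supplied in the VO, "leaf" and "inner" for disclosed nodes) is an illustrative choice, assuming all payload fields are already serialized to bytes:

```python
import hashlib

def Hc(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def recompute_root(node) -> bytes:
    """Recover the root hash from a VO-shaped tree.

    node is one of:
      ("hash", h)                   -- pruned subtree; digest h is in the VO
      ("leaf", (t, p, sigma))       -- disclosed leaf
      ("inner", payload, children)  -- payload = (t_min, t_max, p, sigma)
    """
    kind = node[0]
    if kind == "hash":
        return node[1]
    if kind == "leaf":
        t, p, sigma = node[1]
        return Hc(t, p, sigma)                   # h_N = H(t||p||sigma)
    _, (t_min, t_max, p, sigma), children = node
    h_1f = Hc(*[recompute_root(c) for c in children])
    return Hc(t_min, t_max, p, sigma, h_1f)

# The client accepts only if recompute_root(vo) equals the owner's h_root.
```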
Next, for every tuple t_i ∈ H, the client retrieves its corresponding TP-pair (t_i, p_i) from the result R. Furthermore, the client recovers the minimum coverage set M(Q) from the VO by including those nodes whose coverage falls into the query range. Specifically, the client includes a node N in M(Q) if t ∈ [l, u] for a leaf node N in the VO, or if t_min ≥ l and t_max ≤ u for an internal node. The client then verifies the correctness of the hit set H by checking that the tuples covered by M(Q) are exactly the tuples in H, and that the aggregated signatures of the nodes in M(Q) pass the Condensed-RSA verification (Section II-C).

Example 4.1: Consider the APB-tree in Figure 2 and a query Q whose M-tuples are t₃, t₄, t₅ and t₆ (i.e., H = {t₃, t₄, t₅, t₆}). The VO is constructed following Definition 4.3; among its components are h₄→₆ = H(h₄||h₅||h₆) and h₇→₉ = H(h₇||h₈||h₉). In the verification process, the client first re-calculates the root hash value of the APB-tree. Next, from the VO, the client constructs M(Q) = {N₃, N₄₆} and checks whether the aggregated signature σ₃ · σ₄₆ passes the Condensed-RSA verification. This verifies the authenticity, soundness, and completeness of H.

B. Authentication of All-answer Queries

Given a query Q and its all-answer result R returned by the server, a naive verification method is for the client to re-compute the probability of all AP-pairs in R. Apparently, the naive method is prohibitively expensive, since the number of AP-pairs is exponential in the size of the hit set of Q. Therefore, we design an efficient verify protocol that does not need to calculate the probability of all AP-pairs for the verification of P-soundness. Instead, the verify protocol partitions the AP-pairs into several groups. For all the AP-pairs in the same group, the verify protocol only re-calculates the probability of one single AP-pair, while the probability of the remaining AP-pairs is verified by checking whether the AP-pairs in the same group satisfy a certain property. This enables efficient verification of P-soundness. For each AP-pair, the probability calculation takes O(h) complexity, where h is the size of the hit set, while our grouping-based approach verifies it with O(1) complexity. Therefore, our grouping-based approach saves the verification cost by a factor of h. Because P-soundness can be verified using the same VO constructed by the certify protocol for the hit set, we omit the details of the certify protocol and only discuss the details of the verify protocol.

Verify Protocol: In the verification process, the client partitions the AP-pairs into several groups. We first explain how the AP-pairs are grouped. The grouping is based on the concept of the AP-lattice, which we define formally below. We use S(v) to denote the set of tuples that a vertex v corresponds to.

Definition 4.4 (AP-lattice): Given a query Q with hit set H and its all-answer result R, the AP-lattice L of Q is a graph in which each vertex v corresponds to an AP-pair (A, P) ∈ R (with S(v) = A), and two vertices v_i and v_j are connected by an edge ε(v_i, v_j) labeled with a tuple t ∈ H if S(v_j) = S(v_i) ∪ {t}.

[Fig. 3: An example of an AP-lattice. For simplicity, only a subset of the edges is labeled. One group is shown; the edges in red denote the pairs of AP-pairs in the same group.]

Theorem 4.1: Given a probabilistic database D, a query Q, and the AP-lattice L of Q, for any edge ε(v_i, v_j) ∈ L, let t be the tuple in D that corresponds to the label of ε(v_i, v_j), and let (A_i, P_{A_i}) and (A_j, P_{A_j}) be the AP-pairs that v_i and v_j correspond to respectively. Then it always holds that P_{A_j}/P_{A_i} = p_t/(1 − p_t), where p_t is the probability associated with the tuple t in D.

Proof: As v_i and v_j are connected by an edge whose label is t in the AP-lattice, it must be true that A_j = A_i ∪ {t}. According to Equation (3), we have P_{A_j} = Π_{t′∈A_j} p_{t′} × Π_{t′∈H\A_j} (1 − p_{t′}) = P_{A_i} × p_t/(1 − p_t), where p_t is the probability of t in D. Therefore, it is easy to see that P_{A_j}/P_{A_i} = p_t/(1 − p_t).
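The O(1)-per-pair group check of Theorem 4.1 is easy to state in code; the data layout below is an illustrative assumption (answers as frozensets, tuple probabilities in a dict), and a small tolerance is used for floating-point comparison:

```python
def check_group(pairs, probs, tol=1e-9):
    """Verify P-soundness for AP-pairs sharing the same edge label.

    pairs: list of ((A_i, P_i), (A_j, P_j)) with A_j = A_i ∪ {t};
    probs: dict from tuple id to its existence probability p_t.
    Each pair is checked against P_j / P_i = p_t / (1 - p_t).
    """
    for (A_i, P_i), (A_j, P_j) in pairs:
        (t,) = set(A_j) - set(A_i)   # the single tuple labeling the edge
        p_t = probs[t]
        if abs(P_j / P_i - p_t / (1.0 - p_t)) > tol:
            return False
    return True

probs = {"t1": 0.4, "t2": 0.8, "t3": 0.6}          # illustrative values
pairs = [((frozenset({"t1"}), 0.032), (frozenset({"t1", "t2"}), 0.128)),
         ((frozenset({"t3"}), 0.072), (frozenset({"t2", "t3"}), 0.288))]
assert check_group(pairs, probs)                    # both edges labeled t2
```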
To gain a better understanding of Theorem 4.1, consider the AP-lattice in Figure 3 and two pairs of answer pairs, <{t_1}, {t_1, t_2}> and <{t_3}, {t_2, t_3}>, in the lattice. Since their corresponding edges in the AP-lattice are labeled with the same tuple t_2, it must be true that P_{{t_1,t_2}} / P_{{t_1}} = P_{{t_2,t_3}} / P_{{t_3}} = p_2 / (1 − p_2), where p_2 is the probability of t_2.

Based on this property of the AP-lattice, the client groups the AP-pairs by the following procedure: any two pairs of AP-pairs PA = <(A_i, P_i), (A_j, P_j)> and PA′ = <(A′_i, P′_i), (A′_j, P′_j)> are assigned to the same group if their corresponding edges in the AP-lattice carry the same label. Figure 3 uses colors to show partial grouping results. For simplicity, we only show one group, with the edges colored red. The grouping is constructed before the client performs the verification procedure.

Given a query Q and its returned all-answer results R, let (H, P_H) be the seed AP-pair (i.e., the answer of the seed AP-pair contains all M-tuples). The verification follows a 2-step procedure: (1) verify the authenticity, R-soundness and completeness of R; (2) verify the P-soundness of R. Next, we explain the details of these two steps.

Step 1: Verification of authenticity, R-soundness, completeness. After the hit set H passes the verification, the authenticity of R naturally follows. R-soundness of R is authenticated by checking, for each AP-pair (A, P) ∈ R, whether A ⊆ H. The completeness of R is verified by checking if the number of AP-pairs of R equals 2^h, where h = |H|.

Step 2: Verification of P-soundness. First, the client verifies the P-soundness of the seed AP-pair by checking if Π_{N_j ∈ M(Q)} p_j = P_H, because the minimum coverage set M(Q) exactly covers the hit set H (Def. 4.2). If the result P_H passes the verification, the client is assured of the P-soundness of the seed AP-pair, and continues to verify the P-soundness of non-seed AP-pairs based on the grouping of AP-pairs constructed from the AP-lattice L. In particular, for each group G, and for each pair of AP-pairs <(A_i, P_i), (A_j, P_j)> in G, WLOG we assume A_i ⊂ A_j. The client then verifies if P_j / P_i = p_t / (1 − p_t), where p_t is the probability of the tuple t = A_j \ A_i.
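The following C++ sketch puts Step 2 together. It is a simplified rendering under two assumptions: each APB-tree node in the MCS exposes the product of its covered tuples' probabilities (so the seed check Π_{N_j ∈ M(Q)} p_j = P_H is a single multiplication pass), and each group is represented by its lattice-adjacent pairs of AP-pairs. All type and function names are our own.

```cpp
#include <cmath>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

using Answer = std::set<std::string>;
struct APPair { Answer A; double P; };

// Step 2 sketch: (a) check the seed AP-pair against the aggregate
// probabilities of the MCS nodes; (b) for each grouped, lattice-adjacent
// pair of AP-pairs, re-check only the ratio P_j / P_i = p_t / (1 - p_t).
bool verify_p_soundness(const APPair& seed,
                        const std::vector<double>& mcs_node_probs,
                        const std::vector<std::pair<APPair, APPair>>& groups,
                        const std::map<std::string, double>& tuple_prob,
                        double eps = 1e-9) {
    // (a) Seed check: the MCS covers exactly the hit set H (Def. 4.2).
    double prod = 1.0;
    for (double p : mcs_node_probs) prod *= p;
    if (std::fabs(prod - seed.P) > eps) return false;

    // (b) Grouped pairs: A_j must equal A_i plus exactly one tuple t.
    for (const auto& [pi, pj] : groups) {
        Answer diff;
        for (const auto& t : pj.A)
            if (!pi.A.count(t)) diff.insert(t);
        if (diff.size() != 1) return false;           // not lattice-adjacent
        double pt = tuple_prob.at(*diff.begin());
        if (std::fabs(pj.P / pi.P - pt / (1.0 - pt)) > eps) return false;
    }
    return true;
}
```

Only one probability per group is recomputed from scratch (here, the seed); every other AP-pair costs a constant-time ratio check, which is where the factor-of-h saving comes from.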
C. Authentication of Top-K Queries

In this section, we discuss the details of the certify and verify protocols for top-k queries. To verify the authenticity, R-soundness and P-soundness of the returned top-k AP-pairs, we follow the same VO construction strategy as for all-answer queries. However, the VO construction procedure for the top-k answers is different from that of the all-answer results for two reasons. First, the seed AP-pair that is used for all-answer query authentication may not be included in the top-k result. Second, the returned top-k results may not include all the AP-pairs that satisfy the query range. This makes it impossible to verify the P-soundness of the top-k AP-pairs by using grouping (Section IV-B). Next, we discuss how to resolve these two challenges.

Certify Protocol: To overcome the first challenge, we require the server to include the seed AP-pair (H, P_H) and the proof of H (Section IV-A) in the VO, regardless of whether the seed is included in the top-k result. To overcome the second challenge, we require that the VO include some additional AP-pairs (called witness AP-pairs) to ensure that, with these witness AP-pairs, each top-k AP-pair is reachable from the seed AP-pair in the AP-lattice. There may exist more than one set of witness AP-pairs. To reduce the VO size, we pick the minimum set of witness AP-pairs. For example, consider the AP-lattice in Figure 4, in which the top-k AP-pairs are colored blue. Given two sets of witness AP-pairs S_1 = {{t_1, t_3, t_4}, {t_2, t_3, t_4}} and S_2 = {{t_1, t_2, t_4}}, we pick S_2 as the minimum witness set that is added to the VO in addition to the top-k AP-pairs. Next, we formally define the minimum witness set (MWS).

Definition 4.5: Given a top-k query Q_k and its returned results R = {(A_i, P_i) | 1 ≤ i ≤ k}, let L be the AP-lattice of R, and (H, P_H) be the seed AP-pair of R. A set of AP-pairs X is the minimum witness set (MWS) of R if X satisfies the following two conditions: (1) for each AP-pair (A, P) ∈ R, there is a path (v_0, v_1, ..., v_j, v_seed) ∈ L, where v_0 corresponds to the AP-pair (A, P), v_seed corresponds to the seed AP-pair, and each v_i (1 ≤ i ≤ j) corresponds to an AP-pair in R ∪ X; and (2) the size of X is minimum.

The problem of discovering the minimum witness set (MWS) is equivalent to the classic minimum Steiner tree (MST) problem [25]. It is well known that the MST problem is NP-hard; therefore, the MWS problem is NP-hard too. We first adapt the 2-approximation algorithm [26] to discover the MST. Then we obtain the MWS by extending the paths between the node pairs in the MST. The time complexity is O(k² + L), where k is the size of the top-k result R, and L is the total number of edges in the MST. Based on the definition of MWS, we are now ready to define the VO of top-k query evaluation: it includes the top-k AP-pairs, the seed AP-pair (H, P_H) together with the proof of H, and the MWS of R.

Verify Protocol: The verification of a top-k result R follows a 2-step procedure: (1) verify the authenticity, R-soundness, P-soundness and completeness of R; (2) verify the TopK-soundness of R. The latter is remarkably challenging, since the client has to check if the returned AP-pairs have the top-k probability without re-computing the probability of all AP-pairs. To address this challenge, we design an efficient verification method that only has to compute at most ⌈k ln k⌉ + k probabilities to find the top-k ones. Next, we explain the details of the two steps of our Verify protocol.

Step 1: Verification of authenticity, R-soundness, P-soundness, completeness. Similar to all-answer verification in Section IV-B, once the hit set H passes the verification, the authenticity of R naturally follows. R-soundness of R is authenticated by checking, for each AP-pair (A, P) ∈ R, whether A ⊆ H. The completeness of R is verified by checking if the number of AP-pairs of R equals k. P-soundness verification is slightly different from all-answer query authentication, since the client may not access all the AP-pairs in the AP-lattice L. The client first checks, for every AP-pair (A, P) ∈ R, whether there exists a path (1) that consists of nodes corresponding to the AP-pairs in R ∪ X only, where X is the MWS of R, and (2) that connects the AP-pair (A, P) to the seed node in the AP-lattice. Next, the client groups the AP-pairs in R ∪ X based on the edge labels in the AP-lattice L. For each pair of AP-pairs <(A_i, P_i), (A_j, P_j)> in the same group with A_i ⊂ A_j, the client verifies if P_j / P_i = p_t / (1 − p_t), where p_t is the probability of the tuple t = A_j \ A_i.

Step 2: Verification of TopK-soundness. The challenge is to minimize the probability computation of any non-top-k AP-pair. To address this challenge, we design an algorithm based on the divide-and-conquer strategy that enables the client to catch any TopK-soundness violation. Next we discuss the details of our approach. Let R_{1,h/2} and R_{h/2+1,h} denote the top-k AP-pairs of the first and second halves of the hit set H, sorted by descending probability.

Algorithm 1: Combine(R_{1,h/2}, R_{h/2+1,h}, k)
1: R ← ∅
2: for i = 1 to k do
3:   Let (A_i, P_i) be the AP-pair of the i-th largest probability in R_{1,h/2}
4:   for j = 1 to ⌊k/i⌋ do
5:     Let (A_j, P_j) be the AP-pair of the j-th largest probability in R_{h/2+1,h}
6:     Let (A, P) be a new AP-pair, where A = A_i ∪ A_j, and P = P_i × P_j
7:     Add (A, P) to R
8:   end for
9: end for
10: Sort the AP-pairs in R by their probability
11: Keep the AP-pairs of top-k probability in R
12: return R

The key idea of our verification approach is that the client generates the top-k AP-pairs from the hit set H in polynomial time, and compares them with R.
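A minimal C++ rendering of the combine step (Algorithm 1) follows; the pruning bound i × j ≤ k is justified by Lemma 4.1 below. The APPair layout and function names are illustrative, and both inputs are assumed to be sorted by descending probability.

```cpp
#include <algorithm>
#include <cstddef>
#include <set>
#include <string>
#include <utility>
#include <vector>

struct APPair { std::set<std::string> A; double P; };

// Algorithm 1 sketch: combine the top-k AP-pairs of two disjoint halves
// of the hit set. left and right are sorted by descending probability.
std::vector<APPair> combine(const std::vector<APPair>& left,
                            const std::vector<APPair>& right, std::size_t k) {
    std::vector<APPair> out;
    for (std::size_t i = 0; i < left.size() && i < k; ++i) {
        // With 1-indexed ranks, a combined pair can land in the top-k only
        // if i * j <= k, so the i-th left pair meets at most k / i right pairs.
        std::size_t jmax = std::min(right.size(), k / (i + 1));
        for (std::size_t j = 0; j < jmax; ++j) {
            APPair merged;
            merged.A = left[i].A;                      // A = A_i ∪ A_j
            merged.A.insert(right[j].A.begin(), right[j].A.end());
            merged.P = left[i].P * right[j].P;         // tuple independence
            out.push_back(std::move(merged));
        }
    }
    std::sort(out.begin(), out.end(),
              [](const APPair& a, const APPair& b) { return a.P > b.P; });
    if (out.size() > k) out.resize(k);                 // keep top-k only
    return out;
}
```

Applying combine recursively to halves of H (Algorithm 2's strategy) yields the client's own top-k list, which it then compares against the server's claimed R.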
Note that the hit set H is always included in the VO of top-k answers. In the literature, the divide-and-conquer (DC) strategy has been widely applied to top-k search [27], [28]. We adapt the same strategy to our verification. Before we explain the details of the verification method, we first present Lemma 4.1, which shows that the top-k AP-pairs of a large hit set H can be computed from the top-k AP-pairs of its disjoint subsets.

Lemma 4.1: Let R_{1,h/2} and R_{h/2+1,h} be the top-k AP-pairs of H[1, ..., h/2] and H[h/2+1, ..., h] respectively. For any AP-pair (A, P) in the top-k AP-pairs R_{1,h} of H, it must be true that A = A_i ∪ A_j and P = P_i × P_j, where (A_i, P_i) is the AP-pair of the i-th largest probability in R_{1,h/2}, (A_j, P_j) is the AP-pair of the j-th largest probability in R_{h/2+1,h}, and i × j ≤ k.

Proof. First, it is straightforward that A is a possible answer for the hit set H, i.e., A ⊆ H, since A_i ⊆ H[1, ..., h/2] and A_j ⊆ H[h/2+1, ..., h]. Also, it is easy to see that Pr(A) = Pr(A_i) × Pr(A_j) = P_i × P_j = P. Next, there exist at least i × j AP-pairs on H whose probability is no smaller than P: for any answer (A_x, P_x) ∈ R_{1,h/2} with 1 ≤ x ≤ i and any (A_y, P_y) ∈ R_{h/2+1,h} with 1 ≤ y ≤ j, it must be true that P′ = P_x × P_y ≥ P, because P_x ≥ P_i and P_y ≥ P_j. So among all the possible AP-pairs for the hit set H, the rank of (A, P) is at least i × j. Therefore, in order for (A, P) to be included in R_{1,h}, it must be true that i × j ≤ k.

Based on Lemma 4.1, we design the divide-and-conquer (DC) method that combines the top-k AP-pairs of H[1, ..., h/2] and H[h/2+1, ..., h]. Algorithm 1 shows the pseudo code. From Lines 3 to 9, for each AP-pair (A_i, P_i) ∈ R_{1,h/2}, we only consider its combination with an AP-pair (A_j, P_j) ∈ R_{h/2+1,h} such that i × j ≤ k. Among all the generated AP-pairs, we keep the k AP-pairs of the highest probability and arrange them in descending order of probability.

Based on Algorithm 1, we design the verification method that generates the top-k AP-pairs from the hit set H. Algorithm 2 shows the pseudo code. At a high level, we keep dividing H until it includes only a single tuple. If t is the only tuple in H, there exist only two AP-pairs, i.e., ({t}, p) and (∅, 1 − p), where p is the probability associated with t. After that, we keep combining the solutions from the subsets of H to generate the top-k AP-pairs of H. The total number of AP-pairs whose probability needs to be re-computed by Algorithms 1 and 2 is N = Σ_{i=1}^{k} ⌊k/i⌋ ≤ k Σ_{i=1}^{k} (1/i). Since Σ_{i=1}^{k} (1/i) ≤ ln k + 1 (a property of the harmonic series), it follows that N ≤ k ln k + k. In other words, at most ⌈k ln k⌉ + k probabilities have to be computed to generate the top-k answers.
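The bound is easy to check empirically. This short C++ program, a sketch that counts the candidates generated by a single combine over ranks 1..k, confirms N = Σ_{i=1}^{k} ⌊k/i⌋ ≤ k ln k + k for k up to 1000.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

int main() {
    for (std::size_t k = 1; k <= 1000; ++k) {
        std::size_t N = 0;
        for (std::size_t i = 1; i <= k; ++i) N += k / i; // floor(k/i)
        // Harmonic-series bound: N <= k ln k + k (small eps for float slack).
        assert(static_cast<double>(N) <= k * std::log(static_cast<double>(k)) + k + 1e-9);
    }
    return 0;
}
```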
V. SECURITY ANALYSIS

In this section, we prove that our verification methods are secure (Def. 2.2). Our security analysis first shows the security of the authentication procedure of the hit set, as it is the common procedure for the authentication of both all-answer and top-k answers. Then we discuss the security of the authentication protocols of all-answer and top-k queries respectively.

A. Authentication of the Hit Set

Theorem 5.1: Given a probabilistic data instance D and an all-answer/top-k query Q on D, let H be the hit set of Q. Our authentication protocol of the hit set H is secure (Def. 2.2) under the RSA assumption and the collision-resistant hash function.

Proof. Given a query Q, consider a probabilistic polynomial-time adversary Adv that generates an incorrect result R′ with an incorrect hit set H′ and a proof Π′ of H′, and tries to pass the verification routine with H′. The incorrect hit set H′ must fall into one of the following cases:
• Case 1. Authenticity violation: H′ includes a tuple that is not in the dataset, i.e., H′ ⊄ D.
• Case 2. R-soundness violation: H′ includes a tuple that does not satisfy the selection range in Q.
• Case 3. Completeness violation: there is at least one M-tuple missing in H′.
Next, we prove that the probability that Adv can pass the verification is negligible.

For Case 1, let H be the correct hit set for the query Q, and H′ be the incorrect hit set. Without loss of generality, let t′ be the only tuple that exists in H′ but not in D; in other words, H′ = H ∪ {t′}. In order to pass the verification, the signature of the MCS M′(Q) constructed from H′ must pass the root hash re-construction procedure in the protocol. The probability that Adv can pass the verification is the same as the probability of forging the condensed-RSA signature of M(Q) (i.e., of the correct answer), which is negligible [20]: condensed-RSA is proven to be existentially unforgeable against chosen-plaintext attacks for any probabilistic polynomial-time adversary under the RSA assumption [20].

For Case 2, without loss of generality, we assume that t, with t ∈ D and t ∉ [l, u], is the only incorrect tuple that exists in the hit set H′; in other words, H′ = H ∪ {t}. In order to make the incorrect result pass verification, t must not be covered by a leaf node in M′(Q); otherwise, the client can catch t with 100% certainty according to our certify protocol. Thus, t must be covered by an internal node in M′(Q). Let N(t_min, t_max, p, σ, h) be the internal node that covers t. Obviously, either t_min > u or t_max < l. To let it pass the verification, Adv must substitute N with a node N′(t′_min, t′_max, p′, σ′, h′) such that l ≤ t′_min, t′_max ≤ u, and the hash of N′ matches that of N; in other words, N′ is an internal node in the selection range of Q whose hash collides with that of N. Thus the probability that Adv passes the verification is the same as the collision probability of the hash function H, which is negligible. In other words, the security against Case 2 follows from the security of collision-resistant hash functions [29].

For Case 3, let H be the correct hit set for the query Q, and H′ be the incorrect hit set. Without loss of generality, let t′ be the only M-tuple in H that is missing in H′; in other words, H′ = H − {t′}. In order to pass the verification, the signature of the MCS M′(Q) constructed from H′ must pass the root hash re-construction procedure in the protocol. The probability that Adv can pass the verification is the same as the probability of forging the condensed-RSA signature of M(Q), similar to Case 1.

B. Authentication of All-answer Queries

In this section, we prove that our authentication protocol of all-answer queries meets the security definition (Def. 2.2).

Theorem 5.2: Given a probabilistic data instance D and an all-answer query Q on D, our authentication protocol of all-answer query evaluation is secure under the RSA assumption and the collision-resistant hash function.

Proof. Consider a probabilistic polynomial-time adversary Adv that generates an incorrect result R′, which must fall into one of the following cases:
• Case 1. Authenticity violation: there exists an AP-pair (A_i, P_i) ∈ R′ s.t. A_i ⊄ D.
• Case 2. R-soundness violation: there exists an AP-pair (A_i, P_i) ∈ R′ s.t. A_i does not satisfy the selection range in Q.
• Case 3. P-soundness violation: there exists an AP-pair (A_i, P_i) ∈ R′ s.t. P_i is not the correct probability of the answer A_i.
• Case 4. Completeness violation: there exists an AP-pair on the hit set that is missing in R′.
With full knowledge of the authentication protocol, Adv generates the proof Π′ of R′, aiming to have R′ accepted by using Π′. Next, we prove that the probability that Adv can pass verification is negligible.

First, consider Cases 1 and 2. Given the correct hit set H, if there exists an AP-pair (A_i, P_i) ∈ R′ s.t. A_i ⊄ D or A_i does not satisfy the selection range in Q, it must be true that A_i ⊄ H. The probability that Adv can pass the verification is 0. Next, let us consider Case 3.
To pass the P-soundness verification, Adv must make sure that (Π_{N_j ∈ M′(Q)} σ_j)^e = Π_{t_i ∈ H} H(t_i || p_i) (mod B) and Π_{N_j ∈ M′(Q)} p_j = P_H. To achieve this, Adv must return at least one tuple t_i ∈ H s.t. p′_i ≠ p_i; besides, the proof must pass the authentication based on the condensed-RSA signature, i.e., Adv must generate a valid condensed-RSA signature for M′(Q) over the incorrect probabilities. Similar to Case 1 in the proof of Theorem 5.1 on hit set authentication (Section V-A), for both scenarios the security follows from the RSA assumption. For Case 4, given the correct hit set H, if a pair (A_i, P_i) is missing in R′, it must be true that 2^|H| ≠ |R′|. The probability that Adv can pass the verification is 0.

C. Authentication of Top-k Queries

In this section, we prove that our verification approach for top-k answers is secure (Def. 2.2).

Theorem 5.3: Given a probabilistic dataset D and a top-k query Q_k on D, our authentication scheme of top-k query evaluation is secure under the RSA assumption and the collision-resistant hash function.

Proof. Given a query Q_k, consider a probabilistic polynomial-time adversary Adv that generates an incorrect top-k result R′, which must fall into one of the following cases:
• Cases 1-4. Violation of authenticity, R-soundness, P-soundness and completeness. These are similar to Cases 1-4 for all-answer queries (Section V-B).
• Case 5. TopK-soundness violation: there is a pair (A_i, P_i) in R′ s.t. P_i is smaller than the k-th largest probability.
Adv generates the proof Π′ of R′, and tries to pass the verification routine by utilizing Π′. Cases 1-4 are similar to the proof of Theorem 5.2 of all-answer query evaluation (Section V-B); the only difference is that the verification method also checks the AP-pairs in the MWS in Case 3 and checks if |R| = k in Case 4. For Case 5, the probability that Adv can pass the verification is negligible. This is straightforward, as the verification approach itself generates the correct top-k results.

VI. COMPLEXITY ANALYSIS

For verification preparation, the data owner constructs the APB-tree with O(n(C_σ + C_H + C_P)) time complexity, where C_σ, C_H and C_P denote the cost of one signature, one hash, and one probability computation respectively. It is worth noting that this is a one-time process, and its cost can be amortized over subsequent query verification. Moreover, in our experiments, we find that C_P is cheaper than C_H and C_σ by three orders of magnitude. In the VO verification process of an all-answer query, the main complexity arises from the hash and signature computation of the MCS, the MF-nodes and the C-tuples, which is O((n_MCS + n_MF + n_C)(C_H + C_σ)). In comparison, the server has to traverse the APB-tree to construct the VO, whose complexity is linear in n. Since in practice n_MCS, n_MF and n_C are significantly smaller than n, the verification complexity at the client side is substantially smaller than the VO construction complexity at the server side. To verify top-k queries, the client generates the top-k answers from the hit set by using our Algorithm 2, whose complexity is only O(hk log k), while the server needs to discover the MWS, which takes O(k² + L) time. We summarize the complexity analysis of the all-answer and top-k query verification approaches in Table II. The verification complexity of our approach is substantially cheaper than that of local query execution. Therefore, our query authentication approach is particularly suitable for the outsourcing paradigm.

VII. EXPERIMENTS

A. Setup

Hardware. We execute the experiments on a computer with a 2.7GHz Intel CPU and 8GB RAM, running the Mac OS X operating system. We implement the algorithms in C++.
Datasets and queries. We use two datasets, namely the International Ice Patrol (IIP) Iceberg Sighting dataset and the Uservisit dataset. The IIP dataset is provided by the National Snow & Ice Data Center (https://nsidc.org/data/g00807); it has 1 million tuples and 10 attributes. The Uservisit dataset is generated by using HiBench (https://github.com/intel-hadoop/HiBench), a data benchmark suite; it has 10 million tuples and 8 attributes. We generate the tuple probabilities by following two different probability distributions: the uniform distribution and the normal distribution N(0.5, 0.2²). For both datasets, we prepare a number of queries whose selection ratio (i.e., the percentage of tuples that satisfy the filtering conditions) varies from 0.3% to 30%. For each selection ratio, we generate five unique queries. All the results shown later in this section are the average results of all the queries of the same selection ratio, with 20 executions per query. For the top-k queries, we vary k from 10 to 300, and the hit set size from 14 to 24.

Compared method. To the best of our knowledge, this is the first work on query authentication for probabilistic databases; thus we do not have any state-of-the-art work to compare with. Therefore, we consider a baseline approach by which the client first verifies the correctness (soundness and completeness) of the hit set by using the verification approach for deterministic databases [23], and then computes the answer probability of each possible answer. The top-k result is obtained by returning the k AP-pairs of the highest probability.

B. All-answer Query Verification

First, we measure the VO construction time. The result is shown in Figure 5 (a). The main observation is that the VO construction time increases linearly with the growth of the selection ratio. This is because the server traverses the APB-tree to find the hit set; consequently, the larger the hit set is, the more APB-tree nodes the server visits to construct the VO. Nevertheless, the VO construction process is extremely fast (it never exceeds 2.5 seconds), even for queries of large selection ratio such as 30%. Furthermore, the VO construction time is insensitive to the probability distribution of the data, as the tuple probability does not change the number of APB-tree nodes that are visited to construct the VO. We have similar observations on the IIP dataset; we omit the discussion due to limited space.

Second, we measure how the VO size changes with regard to various query selection ratios. The results are shown in Figure 5 (b). First, we observe that the VO size is always relatively small (around 6KB, 0.002% of the data size). We also observe that the VO size first increases with the growth of the query selection ratio, then drops. This is because when the query selection ratio rises from 0.3% to 20%, the number of C-tuples, the number of MF-nodes and the size of the minimum coverage set (MCS) that covers the M-tuples grow slightly. After that, the number of C-tuples and the size of the MCS drop slightly, since an internal node in the APB-tree can cover more tuples in the hit set. Overall, the change in the VO size is insubstantial (within 10%) as the query selection ratio increases.

Third, we compare the VO size |VO| with the size of the query result |R|, and define the VO size ratio r_s = |VO| / |R|, where R does not include the VO. We display the VO size ratio for various query selection ratios in Figure 5 (c).
Overall, the VO size is negligible compared with the query result size (the ratio never exceeds 0.7%). Furthermore, we observe that with the growth of the query selection ratio, the VO size ratio decreases dramatically. The reason is that while the size of the query results grows exponentially in the hit set size, the VO size is relatively stable (as shown in Figure 5 (b)).

In Figure 8 (a), we display the verification time at the client side measured on the Uservisit dataset; the results on the IIP dataset, shown in Figure 7, are similar. We only show the results for h ≤ 24, since we are not able to generate all answers in memory when h is larger than 24. The results are consistent with our theoretical analysis: the verification time increases exponentially with the size of the hit set. We also observe that the verification time is insensitive to the probability distribution of the data. This is not surprising, as the verification time is determined by the hit set size, not the tuple probability.

C. Top-k Query Verification

VO construction time. The VO construction time is very small for both datasets. As shown in Table III, it never exceeds 0.3 seconds. The fast VO construction is due to the fact that a small hit set (i.e., 14-24 tuples) leads to fast APB-tree traversal for VO construction.

VO size. We measure the VO size and show the results in Figure 6. We can see that the VO size is very small and stable. It never exceeds 5KB when the size of the hit set varies from 16 to 24. We also notice that the VO size is insensitive to the choice of k, since the MWS only takes a very small fraction of the VO.

Fig. 8. Verification time (ms) of Step 1 and Step 2 for (a) all-answer queries and (b) top-k queries.

VO verification time. We evaluate the verification time of Step 1 (verification of authenticity, R-soundness and completeness) and Step 2 (verification of TopK-soundness) of our verification approach separately. We measure the verification time for small hit sets. From the results shown in Figure 8 (b), we observe that the verification is very fast: it never takes more than 25 milliseconds. We also observe that Step 1 dominates the verification time. Moreover, with the increase of k, the verification time of Step 1 stays stable, but the verification time of Step 2 grows. This is because Step 1 only verifies the correctness of the hit set, whose time performance is independent of k, whereas the complexity of Step 2 depends on k. Besides, we observe that the verification time of both steps increases linearly with the hit set size. This is consistent with our complexity analysis in Section VI.

D. Comparison with Baseline

We measure the verification time T of our approach and the verification time T_B of the baseline approach, and report the ratio of verification time T / T_B. Intuitively, the smaller the ratio, the more efficient our approach is compared with the baseline approach.

All-answer queries. We show the ratio of verification time on the Uservisit dataset in Figure 10 (a). First, we observe that the ratio of verification time is always no more than 9%; in other words, in most cases, our verification approach is more than 10 times as efficient as the baseline approach. Second, the ratio of verification time decreases when the hit set grows larger. This is because the P-soundness verification (Step 2 in Section IV-B) takes the majority of the verification time.
With the growth of the hit set size, the generation time of all-answer AP-pairs for the baseline approach increases much faster than the P-soundness verification time, which means T_B grows faster than T; thus, the ratio of verification time decreases. This demonstrates that our verification method is well suited for the verification of all-answer queries that have large hit sets. The observations on the IIP dataset are similar, as shown in Figure 9.

Top-k queries. We vary the hit set size from 14 to 24, and report the ratio of verification time in Figure 10 (b). The results show that in all cases the ratio is within 10%, which shows that our verification approach is at least 10 times more efficient than the baseline. Furthermore, with the growth of the hit set size, the ratio of verification time decreases dramatically, because the verification complexity of our approach is polynomial in the size of the hit set, while that of the baseline is exponential in the size of the hit set. Moreover, we observe that a smaller k yields a smaller ratio of verification time, since the verification time of our method increases with the growth of k, while the complexity of the baseline approach is independent of k.

VIII. RELATED WORK

Authentication of Outsourced SQL Queries. Authentication of SQL query results has been studied by a large body of literature, and a variety of SQL queries have been considered. Due to the space limit, we only discuss the related work on range and aggregate queries, which are most relevant to our work. None of the existing works consider SQL evaluation on probabilistic databases. First, most of the verification methods for range queries use a tree-based authentication data structure (ADS). With certain information stored in the ADS, the ADS can also be used to verify aggregation queries, such as SUM, MIN and MAX. Most tree-based methods [30], [23] handle single-dimensional range queries by constructing the VO from a Merkle hash tree. Zhang et al. [12] design a system named IntegriDB that can handle a rich subset of SQL queries, including multi-dimensional range queries, joins, and aggregate queries. Those authentication methods can verify the tuple probability, but they cannot verify P-soundness and TopK-soundness. Li et al. [10] design efficient index structures for the authentication of a variety of aggregate queries. Compared with this work, we store the aggregate tuple probability in the ADS instead of the aggregate tuple attribute values.

Query Evaluation on Probabilistic Databases. Fuhr et al. [31] define the relational algebra for probabilistic databases and introduce the intensional semantics for query evaluation. Dalvi et al. [5] translate an arbitrarily complex SQL query with uncertain predicates to extensional query semantics. More work on algorithms and applications of probabilistic databases can be found in [32]. Re et al. [16] initiated the research on top-k query evaluation on probabilistic databases: they present the top-k query in DNF and design a multi-simulation-based Monte Carlo algorithm for top-k evaluation. Zhang et al. [33] study the semantics of top-k query evaluation in probabilistic databases. Soliman et al. [34] define a top-k query model named the U-topK model, which returns the k-length tuple vector of the highest probability. A good survey of top-k query evaluation on probabilistic databases can be found in [35]. Most of these works focus on designing efficient algorithms for query evaluation on probabilistic databases.
None of them consider authentication of query evaluation.

IX. CONCLUSION

In this paper, we study the query authentication problem for outsourced probabilistic databases. We design efficient verification solutions for both all-answer and top-k queries. Empirical studies demonstrate the efficiency of our approaches. In the future, we plan to investigate query authentication methods that support database updates and other operations (e.g., join and aggregation) on probabilistic databases.
Strategies To Increase Alcohol Screening in Health Care Settings

Although health care settings offer an ideal opportunity for identifying people who are currently experiencing or are at risk for problems with alcohol, clinicians screen fewer than one-half of their patients for alcohol use disorders. The rate of alcohol screening may be increased, however, by applying strategies shown to promote the use of screening procedures for other medical problems, such as cancer. These strategies include group education (e.g., workshops or seminars), training given by respected colleagues (i.e., opinion leaders), performance feedback, educational outreach visits to individual physicians (i.e., academic detailing), and financial incentives or penalties. Using clinic-based system protocols (e.g., patient questionnaires) can help make the implementation of alcohol screening in clinical practice both efficient and effective. Although incorporating alcohol screening into other high-priority clinical activities and screening programs remains a challenge, routine alcohol screening as a standard of care for all patients is receiving increased acceptance.

Both the U.S. Department of Agriculture and the Department of Health and Human Services recommend limiting alcohol consumption to no more than two standard drinks per day for men and no more than one standard drink per day for women and people over age 65. Alcohol use above these recommended limits is associated with a wide range of health-related concerns, including high blood pressure, trauma, accidents, domestic violence, cancer, fetal alcohol syndrome, and mental health problems. In fact, alcohol use disorders are some of the most common problems seen in health care settings. Studies suggest that 20 percent of the people who seek care in hospitals and outpatient clinics are at risk for or are experiencing alcohol-related problems (Fleming et al. 1998). Because patients consider their doctors to be trusted and credible sources of health information, health care settings are ideal for implementing alcohol-screening procedures. Several screening tests to identify alcohol use disorders in patients have been developed for use in clinical settings. These tests are highly sensitive, specific, and similar in accuracy to a blood pressure measurement to detect high blood pressure or a glucose tolerance test to screen for diabetes. For patients who screen positive for alcohol use disorders, physicians can take action to promote healthy, successful outcomes. For example, both alcohol consumption and health care utilization decrease when clinicians incorporate simple procedures (i.e., brief interventions, such as providing written material and advice) into routine office visits with patients who are nondependent drinkers and provide specialized treatment for patients who are alcohol dependent (Fleming et al. 1997a).

PREVALENCE OF ALCOHOL SCREENING

Despite findings that support the implementation of routine alcohol screening and demonstrate its advantages, the rate of alcohol screening in health care settings remains lower than 50 percent, as several studies have noted. For example, Moore and colleagues (1989) conducted a survey in a large university hospital in Baltimore, Maryland, and found that physicians recorded an alcohol use history for only about one-third of their patients. Inpatient psychiatric units had the highest rates of screening in this study, and surgical units had the lowest.
Another study, conducted in the emergency department of a large teaching hospital, surveyed 346 patients involved in motor vehicle crashes and found that physicians obtained the patient's blood alcohol level in fewer than 25 percent of the cases (Chang and Astrachan 1988). One possible reason for such a low rate of alcohol screening may be related to medicolegal concerns. Clinicians may not realize that blood alcohol levels obtained in an acute care setting are not admissible as evidence for legal actions when the sampling does not follow a chain-of-custody collection procedure to safeguard it against any opportunity for interference (e.g., tampering). Schmidt and colleagues (1995) found that 20 percent of the patients who participated in exit interviews at a general medical clinic reported that their physicians had asked them about their alcohol use in the previous 6 months. Of the 26 patients who were asked about their alcohol consumption, only 2 received specific recommendations. Patients who screened positive for a diagnosis of alcohol abuse or dependence (according to criteria set forth in the American Psychiatric Association's Diagnostic and Statistical Manual, Third Edition, Revised [DSM-III-R]) were slightly more likely to have been asked about their alcohol use, but none of the patients who met current criteria for alcohol dependence were referred to a peer-support group, such as Alcoholics Anonymous, or to alcohol treatment. In another study, researchers assessed the alcohol use of a sample of 972 adults in 2 rural primary care practices. Of the 110 patients who met DSM-III-R criteria for alcohol abuse or dependence, only 9 reported that their physicians had talked to them about their drinking in the last 6 months. These studies suggest that physicians do not routinely screen their patients for alcohol use disorders. Too often, patients continue to be treated for alcohol-related trauma, high blood pressure, depression, anxiety, and other health problems without being treated for their underlying alcohol problem. Moreover, failing to screen for alcohol use disorders can result in serious clinical consequences. Surgeons and anesthesiologists, in particular, should consider alcohol screening as part of routine preoperative care, because alcoholics may require a greater amount of anesthesia to achieve the desired effect. In addition, delirium tremens may develop during the postoperative period, and alcohol withdrawal can severely compromise a patient's recovery from surgical procedures. Similarly, patients admitted to trauma or coronary care units who develop delirium tremens are at greater risk for respiratory failure, blood flow restriction to the heart muscle (i.e., myocardial ischemia), and brain damage. Routine alcohol screening and early treatment of withdrawal will minimize the development of such complications.

STRATEGIES TO INCREASE ALCOHOL SCREENING RATES

To increase alcohol screening rates in clinical settings, physicians must be encouraged to change their practice routines to include screening for every patient. Routine screening for all patients, however, requires overcoming barriers and issues in health care systems that currently block the way. Several strategies have been found to be effective in promoting the use of screening procedures for other medical problems (e.g., cancer) in health care settings.
These strategies can be classified into the following five general categories:
• Group education sessions
• Education by respected colleagues (i.e., opinion leaders)
• Performance feedback
• Educational outreach to individual physicians (i.e., academic detailing)
• Financial incentives or penalties.
Often, a combination of these types of strategies is used. Although their efficacy when applied to alcohol screening has not been widely tested, these strategies appear to offer a promising opportunity for the field. This article discusses each of the five types of strategies and presents reports of their effectiveness in other medical fields and, when available, in the alcohol field.

Group Education Sessions

Courses, seminars, and workshops on screening practices and procedures are sometimes offered to groups of health care professionals. These educational opportunities are used to increase the rates of routine screening in clinical practice. Group education sessions can vary in their effectiveness, however, depending on their structure and content. Schwartz and Cohen (1990) describe education as the "provision of new information," which is frequently necessary but not usually sufficient to change behavior. Physicians often require strong evidence before they will consider altering their routines. Therefore, change strategies that rely solely on providing new information without addressing the complex behavioral and organizational factors that influence physicians' behavior are generally not successful. Effective group education strategies rely on interactive methods, such as peer discussion and skills practice, rather than lectures alone. Combining an educational program with some of the other intervention strategies presented later in this article (e.g., clinic-based system protocols and feedback from peers) also increases the program's effectiveness (Davis et al. 1995). The National Institute on Alcohol Abuse and Alcoholism's (NIAAA's) development of a trainers' manual for use with the Institute's guide for physicians is one example of an educational program that incorporates many of these group education strategies (Fleming et al. 1997b).

Numerous researchers have examined the efficacy of group education in changing physician behavior and its subsequent effects on improving patient health. Davis and colleagues (1995) surveyed the physician performance literature from 1975 to 1994 and found 160 studies that evaluated educational strategies, 99 of which were randomized clinical trials conducted in a variety of medical fields. Seventy percent of these studies reported changes in physician performance, and 48 percent of the studies that measured health outcomes found a positive change. The impact of the educational strategies varied among the different types of methods employed, however. Formal continuing medical education (CME) courses using lectures and handouts had limited impact, whereas educational programs that included peer discussion and skills-practice sessions were more effective. A study by Dietrich and colleagues (1990) examined CME programs on controlling cancer and concluded that programs using interactive discussion groups, opinion leaders, and physician-formulated plans (i.e., protocols and procedures the physicians selected for implementing a cancer screening program into their practice) result in improved knowledge and self-reported behavior change. Cohen and colleagues (1994) listed several factors associated with effective CME programs.
For example, effectiveness was enhanced when the trainers were physicians identified by their peers as being respected and influential and when the trainers used multiple methods, especially methods that were designed not only to motivate physicians, but also to teach them new skills and help them change their practice environments. Educational programs conducted for health care professionals on alcohol screening should incorporate the findings of all these reports. In particular, role playing can be an invaluable way to teach physicians how to become more comfortable with alcohol screening questions and interview techniques by allowing them to rehearse their skills before they interact with their patients. Because nothing can substitute for practice and repetition, role playing with colleagues, standardized patients (i.e., people trained to play a specific role), or people in recovery can build a physician's confidence in his or her alcohol-screening skills. For example, role playing can help physicians learn to focus as much on what patients do not say (i.e., nonverbal cues) as on what they do say when questioned about their alcohol use. Trainers can facilitate role playing in a small group, or, if the group is large, trainers can use a paired role-play technique in which participants role play with the person sitting next to them.

Opinion Leaders

Opinion leaders are respected colleagues who are trusted sources of clinical information. These leaders can be local physicians or colleagues known as experts at a State or national level. Often they are trained in the same specialty as the physicians to whom they are speaking. The presentation of new information involving changes in clinical practice can be very effective when conducted by a trusted colleague. This effectiveness was demonstrated by a study done in the obstetrics field, in which the researchers performed a randomized, controlled trial with 76 physicians in 16 community hospitals to increase rates of vaginal births in women with previous histories of cesarean sections (Lomas et al. 1991). The trial included three groups of providers. First, a control group of physicians received a one-time mailing informing them of the recommended cesarean section guidelines and simply requesting that they implement these guidelines. A second group had their patients' charts audited to compare actual practices with the recommended guidelines; this group of providers met quarterly for feedback and discussion on the audit results. The third group received written and oral communication from a physician nominated as an "educationally influential opinion leader," who educated the physicians on the advantages and safety of vaginal births after a previous cesarean section. After 24 months, vaginal birth rates in the audit-and-feedback group were no different from those in the control group. The rates of cesarean section fell dramatically, however, among the physicians educated by an opinion leader. The patients of this group also had shorter hospital stays. No adverse clinical outcomes were attributable to any of the education efforts. The use of respected colleagues as opinion leaders has special importance for the alcohol field, where societal and health care system barriers may impede the incorporation of alcohol screening into routine clinical care. Opinion leaders can help overcome these barriers by legitimizing and providing the scientific rationale for implementing alcohol-screening procedures.
In addition, these leaders can counter societal biases and attitudes that place a lower value on spending health care resources for a so-called self-inflicted problem. Just as opinion leaders in the cardiology field can justify large expenditures to prevent and treat smoking-induced heart disease, opinion leaders in the alcohol field can provide the rationale for the prevention and treatment of alcohol use disorders. One of the most promising developments in the alcohol field is the expanding number of faculty in primary care, obstetrics, emergency medicine, and surgery who are teaching their colleagues how to screen for alcohol problems. Opinion leaders such as these faculty members can play a major role in educating physicians and facilitating changes in physicians' alcohol-screening practices. Although research on the effectiveness of using opinion leaders to change behavior is limited, this strategy appears to have potential.

Performance Feedback

Changing a physician's clinical behavior is not an easy process; however, providing feedback is one of the most powerful methods available, especially when a physician perceives a need for change in clinical care. According to Greco and Eisenberg (1993), feedback includes various ways of giving health care providers information about their practice performance and patient outcomes compared with the performance of other providers. Feedback can be used to introduce a new procedure, or it can be part of an overall clinic quality assurance system. Examples of effective feedback include confidential performance evaluations based on medical record reviews, written feedback by quality assurance committees, and feedback derived from patient satisfaction questionnaires. Peer-review feedback is increasingly used by managed care organizations to modify physician behavior, especially in the prevention field (e.g., to encourage immunizations and cancer prevention activities). Data gathered from peer-review feedback also are used to monitor the quality of care that patients receive as well as serve as the basis for financial incentives for physicians. Researchers in various health fields have evaluated feedback as a tool for changing physician behavior. Through more than 30 years of research, Bowers and Franklin (1977) have shown that general organizational change can be greatly facilitated when data about systems functioning are collected, communicated to the organization's members, and used to provide opportunities for diagnosis and action. Recent studies reviewed by Greco and Eisenberg (1993) report a range of positive effects of feedback, including reductions in laboratory and total hospital costs. Schwartz and Cohen (1990) describe some of the ways that feedback can be given. These include both impersonal means, such as providing computer profiles or reports, and personal interactions, such as through peer review groups or committees. Schwartz and Cohen report that feedback is most effective in changing behavior when it is delivered in a timely fashion, is combined with both education and either incentives or administrative changes (e.g., the reorganization of charts in a computerized or problem-based format), and includes comparisons with other peers. In practical terms, one way to offer feedback is to audit the medical records for a group of physicians and provide each physician with an individual performance rating relative to his or her peers. The physician could receive this feedback in a confidential report, perhaps as part of an educational session on alcohol screening.
In addition, showing a slide during the session that anonymously lists each physician's rating can be a powerful motivation for change. Other studies suggest additional approaches to providing feedback. Morrow and colleagues describe a framework for changing preventive health care behavior by combining peer review, feedback, and financial incentives. Payne and colleagues (1984) demonstrated an improvement in outpatient care following feedback to a group of physicians attending a seminar in which they participated in problem identification, problem-solving, and solution implementation. The resulting improvement was reinforced through followup consultations after the seminar. A report by Ockene and colleagues (1997) provides some of the first data on the use of feedback in the alcohol field. This study trained 31 clinicians (i.e., faculty, residents, and advanced nurse practitioners) in techniques for providing brief advice counseling to patients for alcohol use disorders. Each clinician participated in a 90-minute training workshop followed by a 30-minute one-on-one feedback session 2 to 6 weeks later. Standardized patients were used to rate the clinical skills of the participants before and after the workshop and feedback sessions. A comparison of the "before" and "after" ratings demonstrated significant improvements in the clinicians' skills, attitudes, and knowledge related to alcohol and alcohol screening. As suggested in this brief review of the literature, the provision of feedback can change physician behavior and clinical practice. Eisenberg and Williams (1981) suggest that feedback works by capitalizing on the health care provider's sense of achievement and desire to excel. Regardless of the reason why feedback works, however, the success of this strategy makes it appealing for application in the alcohol field.

Academic Detailing

Academic detailing refers to clinic-based educational activities focused on individual practitioners. These educational activities involve outreach visits to offer short didactic presentations to physicians, skills training through role playing, performance feedback, or discussions on pertinent topics (e.g., how to overcome staff resistance to incorporating new procedures). Studies have focused on a variety of health care professionals, such as physicians, nurses, pharmacists, and health educators, to conduct these office-based outreach visits. In addition, pharmaceutical companies have employed this strategy effectively to encourage physicians to prescribe certain medications. Soumerai and Avorn (1990) examined face-to-face outreach visits by clinical pharmacists and the provision of written materials and compared the effects that these two methods had on changing physicians' prescribing patterns. In this study, 435 physicians were assigned randomly to receive one of the two experimental methods and were assessed for changes in their prescribing patterns. The results showed that educational visits significantly changed the physicians' prescribing patterns. In addition, the strength of the effects depended on the number of one-on-one followup visits by the clinical pharmacist: the more visits, the greater the change in prescribing patterns. The study concluded that brevity, repetition, and reinforcement of recommended practices are important elements in changing physician behavior.

Financial Incentives or Penalties

Research suggests that financial incentives are another effective tool for changing clinician behavior.
Incentives can be based on a variety of indicators, such as the number of patients immunized, the frequency of screening for a selected health problem (e.g., mammography for women over age 50), the number of prescriptions written for selected medications (e.g., expensive antibiotics), the number of patients referred to specialty care, or the number of patients hospitalized. Positive incentives can include bonuses, higher base salaries, or increases in the negotiated rate a managed care organization pays physicians per enrolled patient (i.e., capitation payments). These types of incentives can be powerful motivators. For example, Hickson and colleagues (1987) performed a randomized clinical trial to determine whether pediatric residents who were paid per patient would attend more patients in the clinic (and thereby become more efficient in preparation for their future work) compared with residents who received a fixed salary. Not surprisingly, the residents who were paid per patient took care of significantly more patients, implying that the financial incentive was an effective motivator. Negative financial incentives (i.e., penalties) also produce changes in behavior, as found in another study by Hillman and colleagues (1989). This study examined rates of patient hospitalization among a group of primary-care physicians who were at personal financial risk for referral and hospital care. The results indicated that the rate of patient hospitalization decreased after this reimbursement policy was implemented. Although additional research in this area is warranted, one can reasonably assume that creating financial incentives for physicians could be applied to the alcohol field to facilitate the implementation of alcohol-screening procedures. As an example, managed care companies could review medical records and provide a year-end bonus to physicians who screened a predetermined percentage of patients for alcohol problems in the preceding year.

IMPLEMENTATION OF ALCOHOL SCREENING

To incorporate routine alcohol screening efficiently, physicians can adapt a comprehensive clinic-based program similar to the programs used to screen for other health concerns, such as high blood pressure, cancer, elevated cholesterol levels, and tobacco use. In many clinical practices, screening for these health concerns already has become a routine element of care and usually includes procedures such as patient questionnaires, physical measurements or laboratory tests, manual or computerized reminder systems to ensure a thorough examination and assist with followup, standardized prevention messages, and protocol-driven treatment methods. Clinic-based systems acknowledge the complexity of implementing a new activity into a busy practice and the need to systematize the activity as part of routine care. In addition, a clinic-based system requires the active participation of all staff members, not just the individual clinician responsible for questioning the patients. Front-desk staff, for example, often distribute questionnaires and attach reminder printouts to patient charts. Nurses score the questionnaires and follow established protocols designed to manage positive and negative responses. Medical record clerks record the information in the charts and in databases. Physicians then use the data for clinical decisionmaking.
The effectiveness of clinic-based systems has been an active area of research since the early 1980's (Kottke et al. 1988). For example, Solberg and colleagues (1990) conducted a study on a clinic-based system designed to establish a smoking cessation program. The program included patient interviews to screen for current smoking status, chart labels (i.e., color-coded stickers placed on the outside of the chart to indicate the patient's smoking status), brief messages advising patients of the importance of smoking cessation, reminder cards attached to the patient's medical record to prompt physicians to inquire about smoking status during the visit, and followup telephone calls by clinic nurses. After 1 year, the researchers reported smoking cessation rates of greater than 20 percent. Black and colleagues (1995) reported on a study designed to assess the effect of preprinted, structured, complaint-specific patient encounter forms (i.e., "quick sheets") on the documentation, resource use, and treatment of emergency room patients. These quick sheets aimed to guide care for common clinical conditions (e.g., asthma, sore throat, and cuts) and were based on expectations developed by medical staff in the emergency medicine field. Study results demonstrated a variety of positive outcomes, including improved documentation of the patient's history and physical findings, decreased use of clinical tests and medications, and decreased costs. For alcohol screening, a comprehensive clinic-based program could include the following components:
• Questionnaires administered to patients by the receptionist or nurse, preferably with the alcohol questions embedded among general health questions
• A readily available assessment tool, such as one of the instruments discussed in the next section
• A computerized reminder system maintained by clerical staff to prompt clinicians to screen patients for alcohol use disorders or to follow up on previous treatment recommendations
• A list, periodically updated by clerical staff, of alcohol specialists, peer-support meetings (e.g., Alcoholics Anonymous or Al-Anon), and community support agencies.
Although the ultimate goal is to provide alcohol screening for all patients, screening in clinical settings could initially focus on particular high-risk groups, such as patients who are pregnant, suffering traumatic injuries, or receiving medication for high blood pressure or depression. Alcohol screening also could serve as one component of several targeted health issues, such as breast cancer or tobacco use screening.

Choosing a Screening Instrument

Several alcohol screening instruments with good accuracy are available for use in health care settings, including the instruments discussed here and in the related article by Cherpitel (see pp. 348-351). Each screening instrument has particular strengths and weaknesses and varies in its applicability to clinical settings. When selecting a screening procedure for routine implementation, clinicians and health care systems should consider factors such as the goals of the screening process, the target population (e.g., young adults, pregnant women, or the elderly), intervention options, clinician training needs, and costs. The physicians' guide developed by NIAAA recommends quantity/frequency and binge-drinking questions (see text box, p. 348) as the primary screening test (NIAAA 1995).
These questions are sensitive (i.e., they correctly identify patients with alcohol use disorders in a high percentage of cases) and have a low rate of false-positive results. The questions are easy to use and can be incorporated into a physician's practice with minimal cost and effort. Although patients sometimes underreport their alcohol use (particularly patients who are alcohol dependent or intoxicated), underreporting can be minimized with the use of appropriate interview techniques (i.e., a direct, nonjudgmental approach); collaborative reports (i.e., family member reports and medical record reviews); and laboratory tests (i.e., breath analysis; blood alcohol level; or levels of other excessive alcohol consumption indicators, such as the enzyme gamma-glutamyl transferase or the blood component carbohydrate-deficient transferrin). When screening patients for current or lifetime alcohol dependence, NIAAA's physicians' guide recommends using the CAGE questions (see CAGE text box, p. 349). As many as 50 percent of at-risk drinkers will not be identified if questions are limited to the four components of the CAGE test, however (Adams et al. 1996). To avoid missing the identification of at-risk drinkers, clinicians can use general health-screening questionnaires that include the CAGE questions, such as the PRIME-MD (Spitzer et al. 1994) or the Health Screening Survey (Fleming and Barry 1991). Some alcohol-screening instruments work best with specific patient populations. Health care systems that primarily focus on women, for example, may want to use the TWEAK test (see TWEAK text box, p. 349) or a similar instrument, such as T-ACE, that has been designed and tested specifically for use with women. Emergency-care clinicians should consider using the Rapid Alcohol Problems Screen (RAPS), which appears to have advantages over other screening methods when applied in emergency-care settings. Regardless of which alcohol screening instrument is selected, clinicians also may want to establish a brief assessment procedure for patients who screen positive. Examples of pencil-and-paper assessment tools for use in general health care settings include the 10-question Alcohol Use Disorders Identification Test, the 25-question Michigan Alcohol Screening Test, and the 15-question Short Alcohol Dependence Data Questionnaire (Davidson and Raistrick 1986). Clinicians should use an assessment procedure to determine where a patient is on the spectrum of alcohol use (i.e., whether the patient is a low-risk, at-risk, problem, or dependent drinker) before proceeding with a therapeutic plan, which may range from brief intervention to referral to an alcohol treatment program. A positive alcohol screen can have enormous implications for a patient, possibly affecting his or her employment status, ability to obtain insurance, and status in the community. Therefore, clinicians should diagnose an alcohol use disorder with the same caution used to diagnose other medical problems.

SUMMARY

The U.S. health care system provides a great opportunity to identify the majority of people adversely affected by alcohol use disorders. Several specific and sensitive screening tests are available to help clinicians implement routine alcohol screening in their practices. In addition, brief intervention trials have found that simply asking questions about alcohol use can reduce levels of drinking (Bien et al. 1993).
The challenge, however, is to incorporate alcohol-screening procedures in the context of a multitude of other clinical activities and screening programs. For example, alcohol screening must compete and fit in with screening for immunization status, breast cancer, colon cancer, prostate cancer, cholesterol level, and smoking status, all of which have become high priorities in managed care systems. The incorporation of routine screening and treatment for high blood pressure and high cholesterol levels in the 1980's did not occur in the U.S. health care system until research demonstrated that screening for these conditions reduced the frequency of illnesses and deaths (Veterans Administration Cooperative Study Group on Antihypertensive Agents 1967; Multiple Risk Factor Intervention Trial Research Group 1982). Although decreased health care utilization and costs were not a major factor in the acceptance of routine screening for high blood pressure and cholesterol levels as the "standard of care," the emergence of managed care systems demands that any new screening procedures be cost-effective. Therefore, additional research is needed to establish that alcohol screening, brief intervention, and referral to specialized alcohol treatment care truly reduce disease, deaths, health care utilization, and costs. In particular, limited information currently exists on the cost-effectiveness of alcohol screening and brief intervention (French in press). Changing physician behavior is a complex endeavor not to be taken lightly. The process of change is similar to changing a patient's behavior regarding alcohol use: knowledge and education are not enough. Educational endeavors must be expanded in medical schools, residency training sites, and CME programs to include role playing and skills-based workshops on alcohol screening. In addition, new physicians should be tested on their alcohol-screening skills as part of the requirement for graduation from medical school and residency. Standards of care also must change so that all patients admitted to hospitals and seen in outpatient clinics are screened for alcohol use disorders just as they are screened for high blood pressure, tobacco use, and high cholesterol levels. Accreditation groups (e.g., the Joint Commission, which accredits hospitals; the American Association of Medical Colleges, which accredits medical schools; and the 24 Residency Review Committees, which accredit residency programs) have the opportunity to require routine alcohol screening and adequate training for students and residents. Although alcohol screening for all patients is not yet the current standard of care, the acceptance of routine screening nevertheless has come a long way over the last 20 years, as the following examples demonstrate: • Most medical schools and residency programs now provide educational programs on alcohol screening. • Many hospitals are developing alcohol consulting services. • Most health forms given to new patients in hospitals and outpatient clinics, as well as many nursing intake forms, now include alcohol-screening questions. • Screening all pregnant women for alcohol use is becoming the standard of care in many areas of the country. • Some managed care companies are beginning to include (i.e., "carve in") specialized alcohol treatment services under primary care in order to facilitate referral and communication.
• More than a dozen brief intervention clinical trials currently being supported by NIAAA should provide evidence to convince the managed care industry that alcohol screening and brief intervention are cost-effective. • The number of medical school faculty who work in the alcohol area continues to increase. By applying the knowledge and experience gained in changing physician behavior and systems of care in other areas of medicine, the goal of routine alcohol screening for all patients in the U.S. health care system appears to be within reach.
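As a closing footnote to the screening-test vocabulary used throughout this article (sensitivity, false positives), the small worked example below shows how sensitivity, specificity, and positive predictive value relate. The counts are hypothetical and not taken from any study cited above.

```python
# Hypothetical counts chosen only to illustrate the terms; they are not
# taken from any study cited in this article.
tp, fn = 45, 5      # 50 patients with an alcohol use disorder
fp, tn = 20, 430    # 450 patients without one

sensitivity = tp / (tp + fn)   # 0.90: most true cases are identified
specificity = tn / (tn + fp)   # ~0.96: few false-positive results
ppv = tp / (tp + fp)           # ~0.69: chance a positive screen is a true case

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} PPV={ppv:.2f}")
```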
2017-06-17T11:26:13.779Z
1997-12-01T00:00:00.000
{ "year": 1997, "sha1": "3749ff4093bde460764990e262b13efef6d4d883", "oa_license": "CC0", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3749ff4093bde460764990e262b13efef6d4d883", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5884398
pes2o/s2orc
v3-fos-license
Identification of a Cardiac Specific Protein Transduction Domain by In Vivo Biopanning Using a M13 Phage Peptide Display Library in Mice Background A peptide able to transduce cardiac tissue specifically, delivering cargoes to the heart, would be of significant therapeutic potential for delivery of small molecules, proteins and nucleic acids. In order to identify peptide(s) able to transduce heart tissue, biopanning was performed in cell culture and in vivo with a M13 phage peptide display library. Methods and Results A cardiomyoblast cell line, H9C2, was incubated with a M13 phage 12 amino acid peptide display library. Internalized phage was recovered, amplified and then subjected to a total of three rounds of in vivo biopanning where infectious phage was isolated from cardiac tissue following intravenous injection. After the third round, 60% of sequenced plaques carried the peptide sequence APWHLSSQYSRT, termed cardiac targeting peptide (CTP). We demonstrate that CTP was able to transduce cardiomyocytes functionally in culture in a concentration and cell-type dependent manner. Mice injected with CTP showed significant transduction of heart tissue with minimal uptake by lung and kidney capillaries, and no uptake in liver, skeletal muscle, spleen or brain. The level of heart transduction by CTP also was greater than with a cationic transduction domain. Conclusions Biopanning using a peptide phage display library identified a peptide able to transduce heart tissue in vivo efficiently and specifically. CTP could be used to deliver therapeutic peptides, proteins and nucleic acid specifically to the heart. Introduction Ischemic heart disease and occlusive coronary artery disease continue to be the number one killer in the developed world. There are an estimated 500,000 acute ST-elevation myocardial infarctions (MI) in the US alone each year [1], and this is becoming an increasingly significant problem in the developing world [2]. Current approaches for management of an acutely occluded coronary artery leading to an MI consist of anti-platelet and anti-thrombotic strategies with intervention aimed at opening the infarct-related artery in a timely fashion. Although this approach is able to protect cardiomyocytes from necrosis, with resulting decrease in morbidity and mortality, it necessitates exposing the heart to post-ischemic reperfusion injury. Limiting this reperfusion injury and decreasing apoptosis would ultimately lead to greater myocardial salvage and prevention of development of heart failure. Numerous animal studies have identified biological agents able to ameliorate this ischemia-reperfusion injury and reduce the ultimate infarct size [3,4]. However, further development of these approaches is hindered by the inability to deliver the biologic agents to the myocardium in a tissue-specific, efficient and rapid manner. A protein transduction peptide specific for the heart would be able to deliver biologic agents in a timely fashion to the heart when given at the time of reperfusion for an infarction. Protein transduction domains (PTD) are small cationic peptides that can cross cellular membranes, and are able to transport large, biologically active molecules into mammalian cells in culture as well as in vivo. The limitation of PTDs is the non-specific transduction of all tissue types with some tissues, such as liver and kidney, taking up the PTD much more avidly than heart tissue. 
Thus there is a need to identify peptides able to target cardiac tissue specifically for delivery of biologics of therapeutic potential. Screening approaches using peptide phage display libraries are effective for identifying peptides able to bind to specific ligand targets as well as identifying peptides with novel properties. Phage display uses filamentous bacteriophage, such as M13, that are able to replicate in E. coli. The proteins or peptides to be displayed are fused to the N-terminus of phage coat protein pIII or pVIII and thus are present on the surface of the phage. Screening of peptide phage display libraries has been used in vivo to identify peptides able to target tumor vasculature [5], adipose tissue [6] and pancreatic islet cells [7]. In addition, it has been used to identify peptides able to facilitate internalization of intact, infectious phage into specific cell types such as synovial fibroblasts [8]. In vivo phage display also has been utilized to target atherosclerotic plaques [9], and to probe the heart vasculature for endothelial markers [10]. Although in vitro selection of a specific peptide sequence carrying phage resulted in increased targeting of cardiomyocytes by phage in vivo [11], it remains to be determined if the peptide can actually deliver "cargo" peptides or proteins of therapeutic potential to the heart. If such were indeed the case, it would open up new avenues of drug development, leading to delivery of therapeutics directly to the ischemic heart. In the current study, we utilized a combinatorial approach of cell culture and in vivo biopanning using an M13 phage peptide display library to identify peptide(s) with potential for cardiomyocyte transduction in vivo in a tissue-specific manner. We have identified a peptide, termed Cardiac Targeting Peptide or CTP, that is able to transduce cardiomyocytes specifically in culture and in vivo. This peptide could be used to deliver peptides, proteins or nucleic acid of therapeutic potential specifically to the heart. Phage display A combined approach of in vitro and in vivo screening of a phage peptide display library for cardiomyocyte-specific transduction peptides was utilized. Cardiomyoblasts, H9C2 cells (ATCC, CRL-1446), were incubated with 10 µl (1×10^11 pfu) of a 12-mer M13 phage peptide display library (NEB, E8110S), for 6 hours at 37°C, 5% CO2. Cells were then washed extensively, trypsinized and lysed by a single freeze-thaw cycle. Recovered phage was titered and amplified. The post-amplified phage was again titered and administered intravenously by retro-orbital injection at a dose of 3.5×10^11, to a female Balb/c mouse. The mice were pre-treated with intra-peritoneal injection of Chloroquine (20 mg/Kg) 24 hours prior to and on the day of the phage injection, in order to minimize intra-lysosomal destruction of internalized phage and increase the chances of recovering internalized phage. The phage was allowed to circulate for 24 hours, after which the mice were euthanized and heart and kidney tissues obtained. The rationale for this approach was based on the observation that after intravenous injection, native M13 phage had a half-life in blood of 4.5 hours [12]. Therefore we allowed the phage to circulate for ~5-6 half-lives to maximize the chance of uptake by cardiomyocytes and minimize contamination with non-specific phage circulating in the blood stream.
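A back-of-the-envelope check of that rationale, using only the two figures stated in the text (a 4.5-hour blood half-life [12] and a 24-hour circulation time), shows how little non-specific phage should remain in the blood at harvest:

```python
# Back-of-the-envelope check, using only figures stated in the text:
# a 4.5-hour blood half-life [12] and a 24-hour circulation time.
HALF_LIFE_H = 4.5
CIRCULATION_H = 24.0

n_half_lives = CIRCULATION_H / HALF_LIFE_H   # about 5.3 half-lives
fraction_left = 0.5 ** n_half_lives          # about 0.025

print(f"{n_half_lives:.1f} half-lives leave {fraction_left:.1%} "
      "of injected phage in the blood")
```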
To minimize the destruction of internalized phage in lysosomal compartments, the mice were pretreated with Chloroquine, a drug known to increase the pH of lysosomal compartments and theoretically decrease intracellular destruction of phage. The collected tissues were digested with collagenase and phage recovered by a single freeze/thaw cycle. Recovered phage was then titered, normalized by tissue weight and subsequently amplified for a second round of biopanning. A total of three in vivo biopanning rounds were performed followed by sequencing of 10 plaques. All animal studies were approved by the University of Pittsburgh Institutional Animal Care and Use Committee (protocol approval number 0804422A-1). Confocal microscopy The cardiac targeting peptide (CTP) was synthesized in the University of Pittsburgh Peptide Synthesis Facility in either 6-carboxyfluorescein (CTP-6CF) labeled or biotinylated forms or conjugated to NBD (Nemo-binding domain), an 11-amino acid peptide (TALDWSWLQTE) which inhibits activation of the inducible NF-κB Kinase (IKK) by binding to the regulatory subunit (Nemo) of IKK. Luciferase assays H9C2 cells and MCA205 cells were transfected using Lipofectamine (Invitrogen, 11668-027) with a reporter plasmid expressing luciferase under an NF-κB promoter site as well as a Renilla control plasmid for normalization of transfection efficiencies. Twenty-four hours later, cells were treated with increasing concentrations of CTP-NBD and 30 minutes later challenged with murine TNF-α, 10 ng/ml, for 3 hours. Cells were then washed, trypsinized, lysed and supernatant collected for Luciferase activity assay. Differences across groups were compared using an unpaired Student's t-test. A two-tailed p-value of <0.05 was considered statistically significant. In vivo imaging studies The initial in vivo targeting studies were performed using CTP-6CF. Female Balb/C mice were injected retro-orbitally with CTP-6CF (25 mg/Kg) and euthanized 15 minutes later. Heart cross-sections were stained for actin using phalloidin Alexa-647 (Molecular Probes, A22287) and stained for laminin using a rabbit anti-laminin antibody followed by a goat anti-rabbit Cy3 (Jackson ImmunoResearch, 111-167-003) secondary antibody. Five non-overlapping sections were taken from each heart for quantification of green fluorescence (CTP-6CF) expressed as a percentage of total area (blue; stained for actin). A control peptide (CON; ARPLEHGSDKAT), picked from the original, unselected M13 phage library, CTP and 8-Lysine (8K, a homopolymer of lysine), a known cationic protein transduction domain, were synthesized in a biotinylated form. 200 mM of biotinylated CTP, CON and 8K, or equivalent volume of PBS, were incubated with 10 µl of Streptavidin-Alexa488 (2 ng/ml; Molecular Probes, S32354) for 2 hours at room temperature. Female Balb/c mice were intravenously (retro-orbitally) injected with peptides at a dose of 10 mg/Kg and then euthanized 30 minutes post-injection. Mice were also injected with biotinylated CTP conjugated to Streptavidin-Alexa488 at a dose of 10 mg/Kg and euthanized after varying circulation times to allow for tracking studies to be performed. Post-euthanasia heart, liver, lung, spleen, kidney, skeletal muscle and brain were harvested for cryosectioning followed by confocal microscopy. Sections were cross-stained with DRAQ5, a nuclear stain. For confocal microscopy, laser intensities/gains were set using negative control (PBS injected) heart tissue to minimize background fluorescence.
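A minimal sketch of the normalization and comparison used in the luciferase assays just described (firefly readings divided by Renilla readings, then an unpaired Student's t-test). The RLU values are invented for illustration; only the workflow mirrors the text.

```python
# A sketch of the luciferase analysis described above. RLU values are
# invented for illustration; only the workflow (firefly/Renilla
# normalization, then an unpaired Student's t-test) mirrors the text.
from scipy import stats

# firefly RLU: three TNF-α-only wells, then three TNF-α + CTP-NBD wells
firefly = [1200.0, 1350.0, 1100.0, 520.0, 480.0, 560.0]
renilla = [300.0, 310.0, 290.0, 305.0, 295.0, 310.0]  # control RLU per well

normalized = [f / r for f, r in zip(firefly, renilla)]
tnf_only, tnf_plus_nbd = normalized[:3], normalized[3:]

t, p = stats.ttest_ind(tnf_only, tnf_plus_nbd)  # unpaired, two-tailed
print(f"t = {t:.2f}, p = {p:.4f}")              # significant if p < 0.05
```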
Once the laser intensity for FITC was set using the control hearts from PBS-injected mice, it was kept constant across all subsequent imaging. Also serial scanning was performed to prevent "bleed-through" from one laser wavelength to another. Biotinylated CON or CTP peptides were labeled with neutravidin-conjugated fluospheres (Molecular Probes, F8770) with an overnight incubation at 4°C. These fluospheres are 40 nm in diameter and emit at 605 nm, allowing for in vivo bead tracking. Female Balb/c mice received intracardiac injections of fluospheres-labeled CON peptide, CTP peptide or control PBS-incubated fluospheres alone. Mice were anesthetized with isoflurane delivered by the XGI-8 Gas Anesthesia System (Xenogen). Initial isoflurane concentration was set to 2.5% and was reduced to 1.5% once the animals were anesthetized. Mice were then imaged at 30, 60, 120, and 180 minutes post-injection with the IVIS Lumina (Caliper Life Sciences Inc.). All mouse studies were approved by the Institutional Animal Care and Use Committee at the University of Pittsburgh (protocol approval number 0804422A-1). Results Identification of a cardiac specific transduction peptide by biopanning of a M13 phage display peptide library In order to identify a peptide able to preferentially transduce cardiac tissue in vivo, a screening protocol using a 12-amino-acid M13 peptide phage display library was utilized. The first cycle of screening of the phage peptide display library for cardiomyocyte-specific transduction peptides was performed on a rat cardiomyocyte cell line, H9C2, in culture. The H9C2 cells were incubated with the M13 phage peptide display library then washed extensively and possibly internalized phage recovered following trypsinization and lysis by freeze-thaw. For each of the subsequent three rounds, the phage were injected intravenously and mice euthanized 24 hours post-injection. The hearts and kidneys were isolated, enzymatically digested and associated phage recovered. The isolated phage were quantified and expressed as number of phage per gram of tissue weight. Following each round of in vivo screening, there was a steady increase in the ratio of phage recovered from the heart relative to the kidneys, suggesting enrichment of phage targeting the heart (Fig. 1). After the third round of in vivo screening, 10 plaques were selected and sequenced. Six of the 10 phage contained the identical nucleic acid sequence of gcgccgtggcatctttcgtcgcagtattctcgtact, corresponding to the peptide APWHLSSQYSRT, termed cardiac targeting peptide (CTP). A BLAST search in the NCBI database revealed that this sequence shared no homology with known naturally occurring peptides or proteins. Confocal microscopy analysis demonstrates preferential targeting of cardiomyoblasts In order to examine the ability of CTP to transduce cardiomyocytes preferentially in a dose-dependent manner, fluorescent confocal microscopy was performed using the peptide coupled to 6-carboxyfluorescein (6-CF). H9C2, 3T3, MCA-205, HeLa and HK-2 cells were incubated with increasing concentrations of CTP-6-CF, washed, fixed and counterstained with DRAQ5, a nuclear stain. As shown in Figure 2, significant internalization of CTP-6-CF was observed in H9C2 cells compared to relatively minor internalization by 3T3, MCA-205 and HeLa cells at high concentrations, with no appreciable uptake by HK-2 cells.
These results, performed by confocal analysis, demonstrate both the specificity of transduction by CTP as well as that the peptide is internalized, and not simply binding to the cell membrane. Inhibition of IKK/NF-κB signal transduction by a CTP-NBD fusion peptide demonstrates functional delivery to cardiomyoblasts To confirm functional transduction of H9C2 cells by CTP, the ability to deliver a peptide, NBD, able to block activation of the IKK/NF-κB transduction pathway, was examined. H9C2 and MCA205 cells were transfected with a plasmid expressing the luciferase marker under the control of an NF-κB-dependent promoter. Twenty-four hours post-transfection, cells were pretreated with the CTP-NBD fusion peptide for 30 minutes, followed by stimulation with murine TNF-α for three hours. TNF-α treatment alone caused an increase in NF-κB transcriptional activity, which was inhibited by pre-treating the H9C2 cells with increasing concentrations of CTP-NBD, in a dose-dependent fashion (Fig. 3a). In contrast, experiments performed using MCA205 cells did not show any inhibition of TNF-α-mediated NF-κB activation (Fig. 3b). CTP transduces cardiac tissue in vivo To demonstrate transduction of heart tissue in vivo, CTP-6CF or the biotinylated forms of CTP, CON and 8K peptides coupled to Streptavidin-Alexa 488 (SA488) were injected intravenously (retro-orbitally). Mice were euthanized at varying time points and heart and multiple other organs harvested for confocal microscopy. Confocal microscopy of heart tissue from mice injected with CTP-6CF showed rapid (15 minutes) transduction of heart tissue (Fig. 4). Staining for actin and laminin showed co-localization of CTP-6CF fluorescence (green; Figure 4a) with actin (blue; Figure 4b), but not laminin (red; Figure 4c). These co-localization studies strongly suggest that CTP is internalized into cardiac cells in vivo, similar to the cell culture experiments. Quantification of transduction, using Metamorph software, revealed that approximately 15% of the total heart was being transduced by CTP following intravenous injection (Fig. 4e). Injection of the CTP-biotin-SA488 complex showed rapid, efficient and specific transduction of heart tissue at 30 minutes in a diffuse pattern compared to Streptavidin-Alexa 488 alone. There was no appreciable transduction seen of liver, skeletal muscle, brain (Fig. 5) or spleen (data not shown). The only other organs with uptake were a small percentage of lung capillaries as well as limited transduction of endothelial cells of the glomerular capillaries in the cortex of the kidneys (Fig. 5). These results demonstrate the specificity of CTP transduction in vivo. Figure 1. Enrichment of cardiac specific phage by multiple rounds of biopanning. After a single screening cycle of phage incubated with H9C2 cells, recovered phage was amplified, titered and injected intravenously into Balb/c mice. After a circulation time of 24 hours, mice were euthanized, heart and kidney dissected, digested with collagenase II, cells lysed and recovered phage titered. Recovered phage was amplified, re-titered and injected for the subsequent round of biopanning. Phage recovered from heart versus kidney from each cycle of in vivo phage display was normalized by gram of tissue weight and expressed as a ratio of heart to kidney. doi:10.1371/journal.pone.0012252.g001 To examine the biodistribution of CTP-biotin-SA488 over time, mice were euthanized at different time points following intravenous injection.
Even with the large CTP-biotin-SA488 complex, efficient transduction of the heart was seen at 15 minutes, mainly confined to the sub-epicardial region of the heart. At 30 minutes this became more diffuse and by 120 minutes there was almost no fluorescence seen in the heart (Fig. 6). Over these three time points, the fluorescence gradually increased in the kidney glomerular capillaries (Fig. 6, center column), suggesting that this might be the mode of excretion of this peptide or at least the fluorescence after peptide breakdown. To confirm further the ability of CTP to transduce cardiac tissue in vivo, the peptide was coupled to fluospheres that allow for analysis of localization by whole animal imaging. Balb/c mice were injected intracardially with 40 nm neutravidin-labeled fluospheres alone, CTP-biotin and CON-biotin labeled with these fluospheres. Mouse imaging was performed at baseline and 30, 60, 120 and 180 minutes. CTP+fluospheres were retained in the heart, as opposed to fluospheres alone or CON+fluospheres, which dissipated immediately after injection. CTP+fluospheres could still be found localized to the heart at 3 hours post-injection (Fig. 7). Figure 4. Internalization and quantification of transduction by CTP-6CF in cardiac tissue in vivo. Cross-sections of mouse heart were stained for actin (blue) and laminin (red). Confocal microscopy showed co-localization of CTP-6CF (a) with actin (b) but not laminin (c) as seen in the color-merged micrograph (d). FITC fluorescence from non-overlapping heart micrographs from mice injected with CTP-6CF (n = 3) or PBS (n = 3) was quantified and expressed as a percentage of total area calculated from staining for actin (Fig. 4e; error bars represent standard error of the mean). CTP-6CF - green; Actin - blue; Laminin - red. Scale bars represent 100 µm. doi:10.1371/journal.pone.0012252.g004 To determine the relative efficiency as well as specificity of transduction of cardiac tissue by CTP, the transduction ability of CTP was compared with 8K, a well-characterized cationic protein transduction domain. Mice were injected with a 10 mg/Kg dose of either CTP-SA488, 8K-SA488 or CON-SA488 conjugate and euthanized 30 minutes post-injection (Fig. 8). Mice treated with 8K-SA488 showed robust transduction of hepatocytes as well as kidney glomeruli with very little uptake in heart tissue. In contrast, CTP-SA488 conjugate showed only robust transduction of heart tissue with some uptake in the kidney glomerular capillaries and none by liver or spleen. The CON-SA488 complex did not show appreciable uptake in any organ. It is important to note that all of the analysis of CTP transduction in vivo was performed with the L-form, the naturally occurring form, of the peptide. Preliminary experiments using a non-degradable D-form have shown a far more efficient transduction that persists for extended periods of time (data not shown). Thus, it appears that there is degradation of the L-CTP complexes over time. Figure 5. CTP specifically transduces cardiac tissue in vivo. Confocal analysis (20×) was performed on tissues from heart, liver, kidney, lung, skeletal muscle and brain (a-f) from mice euthanized 30 minutes after intravenous injection of CTP-biotin-SA488 conjugate (10 mg/Kg) or PBS+SA488. Slides were counter-stained with DRAQ5, a nuclear stain. CTP-SA488 - green, Nuclei - blue. N = 3 in each group; scale bars represent 100 µm. doi:10.1371/journal.pone.0012252.g005 Discussion The clinical application of potentially effective biological therapies for common acute cardiac conditions, like myocardial infarction, has been limited by efficiency and specificity of delivery of therapeutic agents. For example, for gene therapy approaches, such as plasmid DNA, delivery to the heart is very inefficient whereas there are significant time delays associated with cardiac gene delivery using viral-based vectors. In addition, there are issues regarding the presence of pre-existing neutralizing antibodies or immune responses to certain viral vectors. The well-characterized cell-penetrating peptides, like TAT from HIV coat protein, homopolymers of arginine or lysine, are not cell specific and transduce hepatocytes and multiple other organs in addition to the heart. Therefore, identifying a peptide with transduction capabilities specific for the heart would allow for new approaches for effective cardiac delivery of therapeutics. We previously have reported the ability to identify a synovial-specific transduction peptide by screening an M13 phage peptide display library for internalized phage [8]. Thus we screened a large phage peptide display library in order to identify novel peptides potentially able to transduce cardiomyocytes in vivo. Indeed, we report here the identification of a specific peptide, termed CTP, which 15 or 30 minutes post-peripheral intravenous injection can efficiently and specifically transduce cardiac tissue (Fig. 4 and 5, respectively). Transduction of cardiomyocytes on confocal microscopy of cross-sections of the mouse heart occurred in a diffuse manner, though there appeared to be some preference for the subendocardial and subepicardial regions at the earlier time point of 15 minutes. No other organ showed uptake except kidney glomeruli, limited to the cortex, and rare lung capillaries, to a much lesser extent than heart tissue. Furthermore, CTP was able to transduce heart tissue in vivo far more efficiently and in a tissue-specific manner as compared to 8-Lysine, a known PTD (Fig. 8). Since the initial description of in vivo screening of phage display libraries by Pasqualini and Ruoslahti [13], this approach has been utilized to identify peptides that target tumor vasculature [5], adipose tissue [6], pancreatic islet cells [7], synoviocytes [8], atherosclerotic plaques [9] as well as heart endothelial cells [10]. This approach also has been used in cell culture with adherent primary cardiomyocytes to isolate a 20-mer peptide with homology to tenascin-X [11], an extracellular matrix protein. The phage displaying this peptide was found to be associated with cardiomyocytes isolated from mice treated with it in vivo. However, although it was preferentially associated with cardiomyocytes, it could still be isolated from lung tissue. Furthermore, it was unclear whether this 20-mer peptide was able to function as a cardiac transduction domain and transduce heart tissue in vivo independent of the phage carrying it. In our screening approach, we used a combination of cell culture and in vivo screening to identify a peptide able to be internalized into cardiac tissue. The first cycle was performed on cardiomyocytes as a screening approach to limit the population of non-specific phage from the initial phage library. All subsequent cycles were in vivo with intravenous injection in mice followed by a prolonged circulation time of 24 hours.
Using this approach we identified a peptide that is able to deliver fluorescently labeled Streptavidin, a ~60 kDa complex, to cardiac cells in vivo without transduction of liver, spleen, skeletal muscle or brain, with minimal uptake by lung and glomerular capillaries. A BLAST search in the NCBI database did not reveal homology to any known, naturally occurring proteins. Interestingly, two separate groups of investigators, using an in vitro screening of a phage display library approach, have identified the exact same sequence as CTP, and shown it to have high affinity for binding to apatite-based, bone-like minerals [14] and two specific sulfated carbohydrates [15]. However, it is unclear how the ability of CTP to interact with these cellular components in vitro facilitates cardiac-specific transduction in vivo. We currently are examining whether the transduction process is energy independent as well as whether it involves endocytosis. Also, the size limitation for the cargo to be delivered is unknown, but presumably it is as large as or larger than the M13 phage used for the identification of the peptide. Given the fact that CTP-mediated transduction of cardiac tissue is efficient, specific and rapid, it could be used to deliver a variety of proteins, peptides, small molecules and viral and non-viral gene transfer vectors to the heart for treating cardiac conditions. In addition, it could be used diagnostically to determine the extent of viable cardiac tissue following infarct or ischemia-reperfusion injury. Overall, the identification of a heart-specific delivery peptide should allow for novel biological treatments for cardiac conditions.
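As a quick, reproducible check of the sequencing result reported above (and one that can be run before any BLAST search), translating the recovered phage insert in silico yields the CTP peptide. The sketch below uses Biopython; a plain codon-table dictionary would work equally well.

```python
# Verifying that the enriched phage insert encodes CTP (APWHLSSQYSRT).
# Requires Biopython (pip install biopython); the nucleotide sequence is
# the one reported in the Results section above.
from Bio.Seq import Seq

insert = Seq("gcgccgtggcatctttcgtcgcagtattctcgtact")
peptide = insert.translate()  # standard codon table; no stop codon present
print(peptide)                # -> APWHLSSQYSRT, the cardiac targeting peptide
```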
2014-10-01T00:00:00.000Z
2010-08-17T00:00:00.000
{ "year": 2010, "sha1": "307b486ce6e1754fa083e15005dc8a88f423df7f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0012252&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "307b486ce6e1754fa083e15005dc8a88f423df7f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
59616652
pes2o/s2orc
v3-fos-license
Disclosure of complementary medicine use to medical providers: a systematic review and meta-analysis Concomitant complementary medicine (CM) and conventional medicine use is frequent and carries potential risks. Yet, CM users frequently neglect to disclose CM use to medical providers. Our systematic review examines rates of and reasons for CM use disclosure to medical providers. Observational studies published 2003–2016 were searched (AMED, CINAHL, MEDLINE, PsycINFO). Eighty-six papers reporting disclosure rates and/or reasons for disclosure/non-disclosure of CM use to medical providers were reviewed. Fourteen were selected for meta-analysis of disclosure rates of biologically-based CM. Overall disclosure rates varied (7–80%). Meta-analysis revealed a 33% disclosure rate (95% CI: 24% to 43%) for biologically-based CM. Reasons for non-disclosure included lack of inquiry from medical providers, fear of provider disapproval, perception of disclosure as unimportant, belief providers lacked CM knowledge, lacking time, and belief CM was safe. Reasons for disclosure included inquiry from medical providers, belief providers would support CM use, belief disclosure was important for safety, and belief providers would give advice about CM. Disclosure appears to be influenced by the nature of patient-provider communication. However, inconsistent definitions of CM and lack of a standard measure for disclosure created substantial heterogeneity between studies. Disclosure of CM use to medical providers must be encouraged for safe, effective patient care. Rationale: Concomitant use of CM and conventional medicines can carry a variety of risks and benefits. Effective communication between patients and providers is essential in order to ensure risks are minimised and benefits are optimised. Previous research from over a decade ago has shown that disclosure rates of CM use to conventional medical providers can vary widely and are often much lower than is desirable. The reasons patients give for not disclosing may offer insights into how disclosure rates might be improved. In order to identify whether or not disclosure rates are still lower than desired, and how communication about CM use between patients and providers might be optimised, a review of the current literature should be undertaken as an update on previous reviews of the topic. Research Question: To what extent do users of CM disclose this use to conventional medical providers, and what are their reasons for disclosing or not disclosing? Objectives: • To provide an update on the review by Robinson & McGrail (2004) • To assess rates of disclosure of CM use to conventional medical providers • To assess reasons for disclosing and not disclosing CM use to conventional medical providers. Manual search to be undertaken of reference lists from reviews identified during search, of reference lists from papers selected for review, and according to authors' expertise in topic. 3. Outcomes report rates of disclosure/non-disclosure and/or reasons for disclosure/non-disclosure of CM use to conventional health/medical practitioners. 4. CM defined as any service, product or practice outside of conventional/dominant medical system, whether self-prescribed or accessed through CM (non-conventional) practitioner. 5. Sample can be reasonably described as comprising members of the general population. 2. Sample cannot be reasonably described as comprising members of the general population (e.g. disease-specific population).
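Before the meta-analysis-specific criteria below, it may help to sketch how a pooled disclosure rate with a confidence interval (like the 33%, 95% CI 24% to 43% reported above) is typically computed from study-level proportions. The study counts here are hypothetical, and the method shown (logit transform with a DerSimonian-Laird tau-squared) is a standard random-effects choice rather than the procedure stated in this protocol.

```python
# Random-effects pooling of proportions: logit transform plus a
# DerSimonian-Laird estimate of between-study variance. Counts are
# hypothetical and do not reproduce the review's 14 included studies.
import numpy as np

events = np.array([30, 55, 20, 80])     # patients disclosing CM use
totals = np.array([100, 150, 90, 200])  # CM users asked, per study

p = events / totals
y = np.log(p / (1 - p))                      # logit proportions
v = 1 / events + 1 / (totals - events)       # approximate variances

w = 1 / v                                    # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DerSimonian-Laird

w_re = 1 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = y_re - 1.96 * se, y_re + 1.96 * se

expit = lambda x: 1 / (1 + np.exp(-x))       # back-transform to a proportion
print(f"pooled rate {expit(y_re):.0%} (95% CI {expit(lo):.0%} to {expit(hi):.0%})")
```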
Additional Eligibility Criteria for Meta-Analysis: With respect to homogeneity, additional criteria will be applied to identify those papers suitable for meta-analysis. 2. Disclosure rate well-defined and consistent between included studies. 3. Definition of CM or type of CM used consistent between included studies. 4. Acceptable score in risk of bias assessment. Data Management: Citations to be managed using EndNote (Clarivate Analytics) citation management software. A copy of the library is to be archived after each step of the selection process. Selection Process: Citations will be filtered first by assessing the paper's title, then by assessing abstract, in order to identify which papers require reading by full-text. Citations will be retained at each step if it is conceivable that the paper may meet eligibility criteria. Full text articles will then be screened against all inclusion/exclusion criteria. Selection of studies for meta-analysis will be undertaken after data extraction to identify those with sufficient homogeneity. The filtering process will be undertaken by HF and overseen by AS, with a selected sample of eligible studies to be reviewed at each stage of screening. Discrepancies in opinion regarding which citations should be retained and which should be excluded will be resolved through discussion until consensus is reached. Data Extraction: A customised form will be used to systematically extract data from each retained full-text paper. This will include study characteristics (year, study design, location, setting, population, sample, funding sources) and details of disclosure/non-disclosure (rates, reasons), as well as space for additional disclosure-related data identified a posteriori. Data extraction will be performed by HF and overseen by AS. Any data which is potentially relevant to the research question but does not explicitly fit within pre-defined variables will be discussed until consensus is reached in regards to its inclusion. Outcomes:
2019-02-08T15:09:09.999Z
2019-02-07T00:00:00.000
{ "year": 2019, "sha1": "0d56163e2d3ab618bebb98aefca18193dbceeafd", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-38279-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69e3298e4258bdbf5dd99453c104202f14958667", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247810825
pes2o/s2orc
v3-fos-license
Molecular Diagnosis of Primary Hyperoxaluria Type 1 and Distal Renal Tubular Acidosis in Moroccan Patients With Nephrolithiasis and/or Nephrocalcinosis Nephrolithiasis (NL) and urolithiasis (UL) are usual reasons for hospitalization and presentation in pediatric outpatient departments and their incidence continues to rise worldwide. In Morocco, a previous epidemiological study done in the Fez region between January 2003 and November 2013 reported a prevalence of 0.83% of childhood UL. In two studies, heritability accounted for almost half of all NL or nephrocalcinosis (NC) prevalence. Genetic factors must be considered in the etiological diagnosis of urinary lithiasis in Morocco since the frequency of consanguineous marriages is high. Hereditary tubular disorders, especially distal renal tubular acidosis (dRTA) and Dent disease, and metabolic disorders like idiopathic hypercalciuria and hyperoxaluria are the most common causes of medullary NC. Primary hyperoxaluria type 1 (PH1), which can generate an early onset of NC and often chronic kidney disease (CKD), should always be considered and thoroughly diagnosed. The aim of this work was to establish a molecular diagnosis of PH1 and dRTA and, thus, to predict and explain the disease phenotype in a cohort of 44 Moroccan patients with NL and/or NC by analyzing the AGXT and ATP6V1B1 genes that cause NL and/or NC when mutated. Disease phenotype was molecularly explained and solved in six of 44 individuals with NL and/or NC (13.6%). In the pediatric subgroup of individuals, a causative mutation in 16.2% was identified, whereas in the adult cohort no pathogenic mutation was detected. In our patients, PH1 was objectified in 67% of cases followed by dRTA in 33% of cases. We suggest that prompt detection and prophylactic treatment of UL are necessary to limit the risk of permanent renal damage and thus prevent or delay the progression to CKD. Introduction Nephrolithiasis (NL) and urolithiasis (UL) can be defined by solid stones developed in the kidney (NL) or the lower urinary tract (UL). Nephrocalcinosis (NC) results from calcium phosphate or calcium oxalate deposition in the kidney parenchyma, mainly in tubular epithelial cells and in the interstitial tissue [1]. NC is evaluated by ultrasonography according to the anatomical region of the deposit which can be cortical and diffuse NC or medullary NC, with the latter being classified as grade I, II, or III depending on their degree of echogenicity [2]. All three stages are frequently encountered during hospitalization and presentation in pediatric departments [3]. Although the incidence and the prevalence of NL and NC are still unknown, the condition is not so rare. Over the last several decades, the incidence of pediatric NL and NC has notably risen [4]. Episodes of colicky pain, the necessity for surgical intervention, high recurrence rate, and high economic cost are the major factors that can initiate a progression to chronic kidney disease (CKD), which may explain high morbidity in NL/NC patients [5]. NL is developed by up to 10% of individuals worldwide [6]. In Morocco, stone disease and stone-related hospitalization are seen more often in adults than in children. Indeed, the prevalence of childhood UL between January 2003 and November 2013 was estimated to be 0.83% [7]. Genetic and anatomical causes represent the main risk factors (~75%) for the development of kidney stones in children [8].
Kidney stones are not the disease itself, but the first symptom of the underlying disease and do not represent the diagnosis, which means that every first kidney stone in children has to be investigated carefully to disclose the underlying disease [3,9]. NL and NC are known to share a certain degree of heritability. In fact, NL and/or NC of hereditary origin can make up to half of the total cases [10,11]. Mutations in at least 30 genes can lead to monogenic forms of NL and/or NC due to autosomal-recessive, autosomal-dominant, or X-linked transmission, according to the Online Mendelian Inheritance in Man (OMIM) database. Causative mutations in 11.4% of adults and 20.8% of early-onset cases with NC/NL have been reported by Halbritter et al. [11]. This not only confirms a considerable occurrence of heritable NL/NC but also demonstrates the importance of mutation detection for prescribing appropriate therapeutic and preventative measures. Hereditary monogenic kidney stones are classified into three groups: 1) inborn errors of metabolism of which primary hyperoxaluria type 1 (PH1) is the most dreadful; 2) congenital tubulopathies, especially distal renal tubular acidosis (dRTA) with or without hearing loss; 3) cystinuria [12]. In Morocco, the frequency of consanguineous marriages is very high. In fact, the homogenization of the gene pool of the population is reflected at the individual level by the accumulation of recessive alleles in the homozygous state within the loci, thus increasing the risk of expression of monogenic or even multifactorial diseases [13]. In this study, we aimed to establish, in a cohort of 44 Moroccan patients with NL and/or NC, a molecular diagnosis of PH1 and dRTA by analyzing, respectively, the AGXT and ATP6V1B1 genes that cause NL and/or NC when mutated. Patients Forty-four Moroccan patients from 40 unrelated families were enrolled in this study. Patients were recruited from the nephrology and pediatric departments of Hassan II University Hospital in Fez. The study was approved by the University Hospital Ethics Committee (Faculty of Medicine and Pharmacy, Fez) and referenced as 06/18. Patients were informed about the aim of the study, and their consent to genetic testing was obtained. The inclusion criterion of this study was defined by the first clinical manifestation of NL and/or the existence of NC on renal ultrasound. However, the exclusion criterion includes any condition or medication that might have caused a secondary renal stone disease. Clinical data, pedigree information, and blood samples were collected from 44 individuals. The collected data for this cohort include sex, age, history of consanguinity, and ultrasound findings. The cohort was composed of 32 male and 12 female patients. Among these, 33 had NL and nine demonstrated NC by renal ultrasound. Two exhibited both NC and NL. Molecular genetic testing QIAamp DNA Blood Mini Kit (Qiagen, Inc.) was used to extract genomic DNA from the patient's peripheral blood. The molecular study was performed by direct sequencing of exons 1, 2, 7, 9, and 10 of the AGXT gene to investigate the Maghrebian mutation, p.Ile244Thr, and the described Moroccan mutations: p.Val326TyrfsX15, p.Lys12ArgfsX34, and p.Arg111X [14], and exon 12 of the ATP6V1B1 gene to inquire into the most recurrent mutation in North African populations, c.1155dupC [15][16][17]. All studied exons (coding regions and exon-intron junctions) of each gene were amplified by polymerase chain reaction (PCR).
PCR reactions were performed in a total volume of 25 µL containing 10 ng of DNA for exons 1 and 2 of the AGXT gene, 100 ng of DNA for the remaining exons, 2.5 µL of 10× enzyme buffer, 0.2 mM of each dNTP, 1.5 mM MgCl2, 0.4 µM of each primer, and 0.5 U Taq DNA polymerase (Invitrogen). All PCR primers and conditions are illustrated in Table 1. Statistical analysis Analysis of the collected data was done using the Statistical Package for the Social Sciences (SPSS) version 22.0 (SPSS Inc., Chicago, IL, USA). Frequencies and percentages were used to describe the data. To compare the proportion of values in each category, we used nonparametric chi-square (goodness-of-fit test). A P-value <0.05 was considered statistically significant for all tests. Patients' epidemiological and clinical data Forty-four patients from 40 unrelated families from different regions of Morocco were enrolled in this study: 12 female and 32 male patients with a sex ratio M/F of 2.7. Patients were aged from 1 to 58 years and 84% were <18 years. The average delay between the onset of a lithiasis disease and the diagnosis of the hereditary cause determined in six cases was six years (2-10 years) (Table 2). Consanguinity was present in 43% of these patients, while 57% of them showed negative consanguinity. According to the presenting complaint, 3 (6.8%) of the patients had sensorineural hearing loss and 5 (11.4%) were diagnosed with failure to thrive. Renal ultrasound results were in favor of NC in 9 (20.5%) patients, NL in 33 (75%), and both NL and NC in 2 (4.5%) (Table 3). Causative mutations were identified in six of the 44 patients (13.6%) (Figure 1). We also detected a causative mutation in 16.2% (six of 37) of patients in the pediatric subgroup, which demonstrated an onset before 18 years of age. However, we did not identify any pathogenic mutation in the adult cohort (≥18 years), a result that was not statistically significant (P = 0.568). The sex of the molecularly solved patients was normalized to that of the cohort and allowed us to verify a possible correlation between sex and monogenic causes of the disease. The cohort consisted of 12 females and 32 males of whom five carrying pathogenic mutations were male and one was female (Table 4), resulting in a statistically insignificant difference in the detection of pathogenic mutations between sexes (P = 0.664). The age of patients when the disease first manifested was less than six years for those with PH1 and less than one year for the patient diagnosed with dRTA. All PH1 patients presented isolated NL (P = 0.558), and their parents had positive consanguinity (P = 0.029). Two PH1 patients had renal impairment; three had a positive family history of renal stone. Two PH1 patients were siblings. For the remaining two patients with dRTA, their parents had positive consanguinity (P = 0.181). They presented medullary NC (P = 0.038), sensorineural hearing loss (P = 0.003), and failure to thrive (P = 0.011). Discussion Early onset of NL and NC in children are frustrating conditions for both clinicians and families because they are frequently unnoticed. Many monogenic mutations responsible for NL and/or NC pathologies have been identified in recent years [18]. In this study, 44 patients with NL and/or NC underwent a mutational analysis. By sequencing the coding regions of AGXT and ATP6V1B1 genes, which are among the genes known to lead to monogenic NL and/or NC, we spotlighted causative mutations in six out of 44 of them (13.6%).
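The P values quoted above come from the nonparametric chi-square goodness-of-fit test named in the statistical analysis section. As a sketch of how such a value is obtained, the fragment below tests the cohort's sex split (32 male, 12 female) against an even split; this particular pairing of counts and null hypothesis is assumed for illustration, since the exact SPSS comparisons are not reported.

```python
# Chi-square goodness-of-fit sketch. Observed counts are the cohort's
# sex distribution; testing against a uniform expected split is an
# assumption for illustration, not the paper's reported comparison.
from scipy.stats import chisquare

observed = [32, 12]                    # males, females in the cohort
stat, p = chisquare(observed)          # expected defaults to a uniform split
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # p < 0.05 -> proportions differ
```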
After analyzing the age distribution of patients in whom causative mutations are identified, it has been shown that recessive monogenic diseases usually occur earlier in life than dominant monogenic diseases [11]. Furthermore, genetic causes for the development of kidney stones are more frequent in children, while in adults kidney stones are predominantly due to dietary imbalance [12,19]. Our results agree with this for monogenic causes of NL and/or NC. Table 4 shows that all mutated patients had an onset before 18 years of age. The average delay between the onset of a lithiasis disease and the diagnosis of the hereditary cause, as determined in six cases, was six years (2-10 years). In infantile renal tubular diseases, clinical manifestations occur mostly in the first decades of life and are easily diagnosed [20,21]. In one of our patients, the clinical manifestations started at the age of three months but the diagnosis of the etiology in question was molecularly confirmed eight years later. Indeed, there is a great delay in the diagnosis of the hereditary character of NL and/or NC. In our patients, the average delay was six years even though the main elements that point to a genetic cause were common: a high percentage of parental consanguinity; familial cases of UL; dialysis nephropathy or death; bilateral, multiple, and recurrent calculus; or NC [20]. The fact that the correlation between sex or age of onset and monogenic causes of disease was not statistically significant (P = 0.664 and P = 0.568, respectively) could be explained by the small size of the cohort. AGXT was, above all, the most predominant disease-causing gene in the cohort we studied (P < 0.001). The median age of the first stone was five years. This finding is in line with retrospective analysis of stone composition indicating that PH1 is the main recessive monogenic origin of stone diseases in pediatric patients [11,12]. In our patients, PH1 was objectified in 67% of cases followed by dRTA in 33% of cases. This distribution is in accordance with that described by the Cristal laboratory in France. In fact, PH1 was the main cause noted in 45% of pediatric cases followed by dRTA in 5% of cases [12]. PH1 is certainly underdiagnosed in Morocco because only four mutations have been studied among the more than 178 identified in PH type 1, and the search for specific mutations of PH type 2 and type 3 is not in current practice yet [14,22]. PH1 is the most devastating of the familial forms of lithiasis and represents a frequent cause of CKD and dialysis [23]. At the time of its diagnosis, all the patients already had renal impairment. There is a geographic and ethnic specificity of the mutations which are decisive in the severity of the disease. c.508G>A/p.G170R is most common in Europe and North America while c.731T>C/p.I244T is most common in the Maghreb region [24]. The only mutation of the AGXT gene identified in our patients was c.731T>C/p.I244T, and this mutation was reported as the most frequent in previous Moroccan, Tunisian and Libyan series [14,[25][26][27]. Vitamin B6 (pyridoxine) prescribed at a dose of 5-10 mg/kg/day can reduce oxaluria (300 to 600 mg/day) by up to 30% in 30% of patients by diverting the metabolism of oxalate toward the more soluble glycine [28]. Patients carrying the c.508G>A/p.G170R or c.454T>A/p.F152I mutation are good responders [29]. It is imperative to test this treatment in any patient with PH1 because of the lack of a close correlation between genotype and phenotype.
This treatment should be maintained even at the CKD stage [29]. None of the mutations mentioned above was detected in our cohort. Genetic disorders common in pediatric patients can be linked to primary or inherited forms of dRTA [30][31][32][33]. Mutations in transport/channel genes, expressed in both the kidney and the inner ear, such as ATP6V1B1 and ATP6V0A4, can cause progressive sensorineural hearing loss in children as a result of dRTA [17,[34][35][36][37]. Additionally, significant functional impairment in urinary acidification and no responsiveness to acute acid load are seen in children presenting with recessive dRTA with nonsense mutations in the ATP6V1B1 gene. Indeed, in this study, we identified two cases of dRTA with an early onset of deafness due to a mutation in this gene. It is, therefore, worthwhile to determine the monogenic origins of NL and/or NC very early. This will have important prognostic implications and will allow the adoption of the best therapeutic strategy. This includes suggesting practical implications such as initiating audiometry for ATP6V1B1 patients and predicting responsiveness to a vitamin B6 (pyridoxine) treatment since patients carrying the c.508G>A/p.G170R or c.454T>A/p.F152I mutation are known to be good responders [29]. Individuals with PH1 caused by mutations in the AGXT gene represent an excellent example of individualized therapy based on molecular genetic diagnosis. Indeed, it has been shown that sensitivity to pyridoxine by these patients is linked to the presence of a distinctive allele (G170R) [38]. We believe that by including a genetic diagnosis in the repertoire of clinical tests we may avoid invasive and potentially harmful procedures such as the liver biopsy prescribed for patients with suspected primary hyperoxaluria type 1. Therefore, genetic screening can be valuable if there is an atypical clinical presentation or if the standard diagnosis is hampered by progression to CKD. Extremely rare diseases can be properly recognized if more genetic screening is performed. We plan in future studies to consider whole-exome sequencing as a more effective approach to determine the molecular genetic basis of NL and/or NC. Conclusions This study highlighted mutations in at least two genes, namely the AGXT and ATP6V1B1, among the 30 genes known to be linked to monogenic forms of NL and/or NC. The mutation rate in our cohort was 13.6%. We emphasize the importance of prescribing specific genetic tests in the clinical practice of pediatric patients with NL and/or NC. Genetic screening, if implemented, would considerably improve the current approach to both prophylaxis and treatment and could lead to efficient personalized treatments. Genetic counseling and/or mutation analysis for the patient's healthy relatives at risk is recommended in people with NL and/or NC. Finally, the information provided by mutation analysis will not only allow early detection of such pathologies by clinicians but also the follow-up of disease development and the introduction of preventive treatments when possible. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. University Hospital Ethics Committee (Faculty of Medicine and Pharmacy, Fez) issued approval 06/18. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-03-31T15:18:01.527Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "70ad001f7638c145f15d884e1d723a3124d25f5f", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/86820-molecular-diagnosis-of-primary-hyperoxaluria-type-1-and-distal-renal-tubular-acidosis-in-moroccan-patients-with-nephrolithiasis-andor-nephrocalcinosis.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec61fdec5ed2768a9a4a7f4c23ee3df605bebeab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
201168439
pes2o/s2orc
v3-fos-license
A Systematic Review and Meta-analysis of Interventions to Improve Play Skills in Children with Autism Spectrum Disorder Children with autism spectrum disorders (ASD) experience difficulty with play, and a number of different interventions have been developed and evaluated to address this deficit. This systematic review of randomized controlled trials identified 19 studies reporting on play-based interventions for children with ASD aged 2–12 years. The components of each study, including elements of the interventions and methodological quality, were examined. A meta-analysis was completed for 11 studies, and a small but significant treatment effect was identified (Hedges' g = 0.439). The current review supports future development of interventions with a focus on the child with ASD across social environments. Outcome measures and comprehensive reporting of intervention components are important considerations in future intervention development and testing. Significance for clinicians and future research is discussed. PROSPERO registration number: RD42015026263. Introduction Children with autism spectrum disorders (ASD) often experience difficulties with play and forming and maintaining peer relationships. Research has demonstrated that these social difficulties persist into adolescence and young adulthood (Schall and McDonough 2010). This review will focus on interventions that target play in children with ASD. For the purpose of this review, play is defined as a transaction between the individual and the environment which includes "…the presence of three elements: intrinsic motivation, internal control, and the freedom to suspend reality" (Skard and Bundy 2008, p. 71). Additionally, it needs to be apparent to both the players and observers that this transaction is playful, by the cues the players give and read; Bundy (2012) identifies this as being in the play frame. This definition of play is both contemporary and comprehensive, is appropriate across different ages and stages of development, and has been used by many observation and intervention studies in the past. Play is an important aspect of childhood, and there are many benefits in promoting play. Play is the context in which most childhood friendships are formed, from early preschool years through to adolescence (Bundy 2012). Play is essential to childhood development and provides an ideal opportunity and context for parent and peer engagement. Play, as an independent occupation, not just a means to promote other skills or development, is a legitimate and necessary outcome because it is a critical element of the human experience (Parham et al. 1996). Despite this importance, play may have diminished social validity or priority (Foster and Mash 1999). Certainly, time available for play has been significantly reduced for some children as other areas of development, such as academic outcomes, are increasingly valued (Ginsburg 2007). The three elements are not fixed, but rather move depending on the child's experience, and can tilt the transaction away from non-play to play. If children in the play frame have reduced internal control or intrinsic motivation, then play can tilt back to work (Bundy 2012). As play is defined as an intrinsically motivated transaction, simply providing toys in an engaging environment will not necessarily guarantee a child will play (Bundy 2011).
Similarly, it is not enough for a study to claim it delivered a play intervention if the transaction was highly structured or if the child was required to follow a set play routine. Furthermore, the complexity and constantly changing nature of play make measuring play ability difficult for educators, clinicians, and researchers (Brooke 2004). Frequently, measures are of readily observable social skills or children's behavior from the perspective of a parent or teacher, rather than of a child's play ability (McAloney and Stagnitti 2009). Observation of unstructured play in a natural play context would support an accurate and authentic assessment (Ray-Kaeser and Lynch 2017). Such an assessment would be reliant on the assessor's definition and reporting of play skills to assure valid and reliable results and comparisons across individuals, contexts, and studies (Ray-Kaeser and Lynch 2017). Children with ASD have difficulty with components of play, specifically, turn taking, changing activities away from preferred interests, reduced symbolic quality, and relinquishing control of preferred play activities (MacDonald et al. 2009). For children with ASD, improvements in play skills lead to increased positive social interactions, as well as decreased inappropriate behaviors (Jung and Sainato 2013). Specifically, children with ASD with average cognitive functioning have been reported to have difficulties in social initiation and in social-emotional understanding (Sigman et al. 1999). It is this reduced social understanding, rather than social disinterest or insensitivity, that is the primary deficit for social play (Sigman et al. 1999). As a result of their social play challenges and subsequent lack of opportunity, these children can be caught in a cycle of social isolation. Given that children with ASD who do not acquire age-appropriate social skills may lack opportunities for positive peer interactions, explicit training in social play with peers is a necessary intervention (Bauminger 2002; Jordan 2003; Jung and Sainato 2013). Several different approaches to interventions have been developed to address impaired social interactions and play in children with ASD. These different approaches include coaching the child with ASD, identifying and addressing individual play skills and interests, and developing supportive relationships and environments. Peer-focused interventions, including integrated play groups, peer buddy systems, and group interventions, represent the largest type of social play intervention for children with ASD (Bass and Mulick 2007). The inclusion of peers in the intervention helps support generalization of skills to other environments and creates a more authentic social environment for children with ASD to develop their social play skills (Chan et al. 2009). The use of typically developing peers in interventions is further supported as friendships are associated with prosocial behaviors and act as a protective factor against rejection and bullying, especially in the preschool and school environments (Chang et al. 2015). Although promoting play has been largely neglected in school contexts due to the focus by teachers on academic outcomes, some interventions focus on upskilling teachers to be able to provide intervention to the child with ASD and to create a supportive social environment (Kossyvaki and Papoudi 2016).
Parents are also frequently a focus of interventions, especially with younger children (McConachie and Diggle 2007), as the social behavior of a child with ASD has been shown to be significantly enhanced when the interaction style of the parent or caregiver adapts to the child's play level (Freeman and Kasari 2013; Meirsschaut et al. 2010). To date, no researchers have conducted systematic reviews of play-based interventions for children with ASD tested using randomized controlled trials (RCTs). Previous systematic reviews have investigated social developmental outcomes in children with ASD, including pragmatic language interventions (Parsons et al. 2017), after school programs for personal and social skills (Durlak et al. 2010), and early behavior interventions (Warren et al. 2011) with outcomes that included improvements in cognitive performance, language skills, and adaptive behavior skills. A recent systematic review of RCTs for preschool children with ASD included interventions addressing behavioral, communication-focused, and developmental outcomes of ASD general symptoms, but not interventions addressing play outcomes (Tachibana et al. 2017). A review of 13 play-based intervention studies for children with ASD identified improvements when the intervention built upon the child's existing play repertoire (Luckett et al. 2007). However, the effectiveness of these interventions could not be determined as the majority of the 13 studies used single case study designs (Luckett et al. 2007). Since the Luckett et al. (2007) review was published more than a decade ago, a number of researchers have published RCTs of play interventions for children with ASD (Corbett et al. 2016a, b; Kasari et al. 2014a, b). To date, however, no systematic reviews have been conducted of play-based interventions for children with ASD that have been investigated using RCTs. Objectives This systematic review focuses on the efficacy of play-based interventions to address the play skills of children with ASD. This systematic review aimed to summarize key characteristics of a range of play-based interventions for children with ASD and assess the quality of published RCTs. This meta-analysis addressed the following research questions: (1) are play interventions effective in improving play outcomes when compared to a non-play intervention or treatment as usual control group? and (2) do the following intervention characteristics mediate intervention effects: (a) focus of intervention (i.e., child, parent, peer, teacher or combination), (b) intervention setting, and (c) group or individual therapy? Protocol and Registration The methodology and reporting of this systematic review were based on the PRISMA and PRISMA-P statements (Moher et al. 2009; Shamseer et al. 2015), and the review was registered with PROSPERO (registration number RD42015026263; Booth 2013). Eligibility Criteria and Study Selection Studies were included if they met four inclusion criteria: (1) participants must include children who have a diagnosis of ASD according to the DSM-III-R, DSM-IV, or DSM-5 criteria; (2) study designs were RCTs; (3) the interventions included play as per the definition adopted in this study; and (4) treatment outcomes were assessed using play measures. Multimodal intervention programs in which the play-based intervention was part of a variety of social or behavioral components were also included.
These criteria were selected to identify play-based intervention studies for children with ASD that are classified as level II on the National Health and Medical Research Council (NHMRC) Hierarchy of Evidence (NHMRC 2011). The Australian NHMRC developed the NHMRC Hierarchy of Evidence to rank and evaluate the evidence for healthcare interventions. According to the NHMRC Hierarchy of Evidence, level II studies are well-designed RCTs (NHMRC 2011). Information Sources and Search Studies were identified through the following two-step procedure. First, an electronic database search was conducted using PubMed, Embase, PsychINFO, CINAHL, and ERIC, the databases in which social intervention studies are most likely to be indexed. Two categories of subject headings were used in combination: (1) disorder (autism spectrum disorder; ASD) and (2) randomized controlled trials. Free text searches were also conducted for all databases on September 4, 2017. Both the subject headings and the free text terms with limitations are described in Table 1. Secondly, identified studies were then searched for inclusion of play (see Fig. 1). Gray literature was searched using Google Scholar for disorder, RCT, and play. Synthesis of Results and Methodological Quality Data across all studies were extracted independently by the first author using data extraction tables. Intervention characteristics were extracted for the following: (1) focus of the intervention and play skills targeted; (2) interventionists and procedure described in the study; and (3) setting, mode of delivery, and duration. Data on study characteristics were then extrapolated and synthesized into several categories: (1) group design and participant group numbers, (2) play as primary or secondary focus of study, (3) age range (means and standard deviations), (4) inclusion and exclusion criteria for participants, and (5) the play outcome measure used and results of the treatment. The QualSyst critical appraisal tool was used to assess the methodological quality of the included studies (Kmet et al. 2004). The 14-item checklist has a three-point ordinal scoring system (yes = 2, partial = 1, no = 0) that provides a systematic, reproducible, and quantitative means of assessing the quality of research. The total QualSyst score can be converted to a percentage score, with a QualSyst score of ≥ 80% considered strong quality, a score of 60-79% good quality, a score of 50-59% adequate quality, and a score of < 50% poor methodological quality. All included studies were reviewed by two assessors, and interrater reliability was established for the ratings.
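To make the scoring arithmetic concrete, the short Python sketch below (a hypothetical illustration, not tooling distributed with QualSyst or by Kmet et al.) converts 14 invented item ratings into the percentage score and quality band just described.

```python
# Minimal sketch of QualSyst scoring: 14 items rated yes = 2, partial = 1,
# no = 0; the sum is converted to a percentage of the 28-point maximum and
# banded using the thresholds given in the text. The ratings are invented.
def qualsyst_percent(item_scores):
    assert len(item_scores) == 14 and all(s in (0, 1, 2) for s in item_scores)
    return 100 * sum(item_scores) / 28

def quality_band(percent):
    if percent >= 80:
        return "strong"
    if percent >= 60:
        return "good"
    if percent >= 50:
        return "adequate"
    return "poor"

scores = [2, 2, 1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 1]  # one hypothetical study
pct = qualsyst_percent(scores)
print(f"QualSyst score: {pct:.0f}% ({quality_band(pct)})")
```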
Meta-analysis A meta-analysis and overall treatment effects were calculated for play-based interventions on pre-post outcome measures. Between-group analyses were also conducted to compare post-intervention scores with control groups that included another intervention or a treatment as usual comparator group. Studies that included a no treatment or delayed control group were removed from the between-group analyses (Corbett et al. 2016a, b; Frankel et al. 2010a, b; Kasari et al. 2010a, b). Subgroup analyses were conducted to compare the effect as a function of intervention characteristics: (1) group or individual therapy, (2) focus of intervention (i.e., child, parent, peer, or teacher), and (3) setting (i.e., clinic, home, or school setting). A meta-regression analysis was conducted to determine whether focus of intervention, setting, or group or individual therapy mediated intervention effects. The study sample size (eight studies) allowed for multivariate analysis involving up to two covariates without compromising power (Borenstein et al. 2011), so one model addressed the interaction between group vs individual therapy and setting, and the other addressed the interaction between the focus of the intervention and setting. To compare effect sizes, pre- and post-intervention means, standard deviations, and sample sizes were extracted. If the data required for the meta-analysis calculations were not reported, attempts were made to contact authors to request the data. When multiple outcome measures of play were reported for one intervention, the measure that evaluated the highest level of play skills was extracted for analysis (e.g., symbolic play types were selected over functional play types in a structured play assessment). Extracted means, standard deviations, and sample sizes for pre- and post-intervention measures were entered into Comprehensive Meta-Analysis, version 3.3.070 (Borenstein et al. 2005). A random effects model was used to generate effect sizes. The Hedges' g formula for standardized mean difference with a confidence interval of 95% was used to report effect size. Using Cohen's d convention for interpretation, an effect size of < 0.2 reflects a negligible difference, between 0.2 and 0.49 is small, between 0.5 and 0.79 is moderate, and > 0.8 is large (Cohen 1988). Given that studies that report large and significant treatment effects are more likely to be selected for publication (Borenstein et al. 2005), it is possible that some low-effect or non-significant interventions are missing from the meta-analysis. The presence of publication bias was assessed using the classic fail-safe N. The test calculates the number of additional studies that, if included in the analysis, would nullify the measured effect (N). If N is large, it can be considered unlikely that there would be so many unpublished low-effect studies, and it can be assumed that the meta-analysis is not compromised by publication bias (Borenstein et al. 2005).
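The quantities in this paragraph can be illustrated with a minimal Python sketch. This is not the authors' analysis (which used the Comprehensive Meta-Analysis package); it simply shows, under standard textbook formulas, how a Hedges' g with its variance, a DerSimonian-Laird random-effects pooled effect with I², and Rosenthal's classic fail-safe N are computed. All study values below are invented placeholders.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    v = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return j * d, v                          # (g, variance of g)

def random_effects(effects):
    """DerSimonian-Laird random-effects pooling of (g, variance) pairs."""
    w = [1 / v for _, v in effects]
    g_fixed = sum(wi * gi for wi, (gi, _) in zip(w, effects)) / sum(w)
    q = sum(wi * (gi - g_fixed)**2 for wi, (gi, _) in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0   # between-study variance
    w_star = [1 / (v + tau2) for _, v in effects]
    pooled = sum(wi * gi for wi, (gi, _) in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, q, i2

def fail_safe_n(z_scores, z_alpha=1.96):
    """Rosenthal's classic fail-safe N at a two-tailed alpha of 0.05."""
    return (sum(z_scores) ** 2) / z_alpha**2 - len(z_scores)

# Invented example: (mean, sd, n) for a treated vs a comparison group
effects = [hedges_g(12.4, 3.1, 20, 10.8, 3.4, 19),
           hedges_g(8.9, 2.2, 33, 8.1, 2.5, 31),
           hedges_g(15.2, 4.0, 15, 12.1, 4.3, 14)]
pooled, se, q, i2 = random_effects(effects)
print(f"pooled Hedges' g = {pooled:.3f}, z = {pooled/se:.2f}, I^2 = {i2:.1f}%")
print(f"classic fail-safe N = {fail_safe_n([2.1, 1.3, 1.8]):.0f}")
```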
Study Selection A total of 327 papers were identified and screened through the subject heading and free text searches (see Fig. 1). The first author assessed all 327 abstracts for inclusion, and the fourth author assessed a random selection of 40% for interrater reliability; weighted Kappa 0.88 (95% CI [0.76, 1.00]). A total of 82 full text articles were accessed to determine if the studies met the inclusion criteria. Specifically, further information was needed regarding the description of the intervention and the outcome measures to determine if studies met the inclusion criteria. Of these, 63 studies were excluded for one or more of the following reasons: 10 did not include children aged 2 to 12 years of age; 3 did not include participants with ASD; 26 were not an RCT study design; 26 did not include a play-based intervention as defined by our study; and 46 did not have an outcome measure for play (see Table 2). A total of 19 studies were selected for this systematic review based on the inclusion criteria. All the selected studies included participants aged 2-12 years with a diagnosis of ASD, used an RCT study design, investigated a play-based intervention, and reported on play outcomes that aligned with the definition of play adopted for this review. Study Characteristics Participants The 19 studies that met the eligibility criteria included a total of 1149 participants aged between 2 and 12 years. Of these, 11 studies included only preschool-aged children (2 to 5 years of age) involving 670 participants and nine studies included only primary school-aged children (5 to 12 years of age) with a total of 479 participants. Treatment group sample size ranged from 4 to 76 participants. Intervention A detailed description of each intervention is provided in Table 3. Interventions focused on the child with ASD, a parent or caregiver, a teacher, or typically developing peers of the child with ASD. Ten interventions occurred in the preschool or school setting, one in the community, five in the clinic, two in the home, and one with a combination of both clinic and home sessions. Comparator Group All participants included in control groups had a diagnosis of ASD. Across the 19 studies, there were three different types of comparator group: wait-list control group, non-play-based intervention control group, and an alternative play-based intervention control group. Seven studies assigned control participants to wait-list control groups who served as a no-treatment comparison during the intervention phase of the project and then went on to receive the intervention at a later stage. Control participants in four studies attended intervention for the same duration as the intervention group but participated in activities that did not meet the definition for a play-based intervention. Control groups in six studies were assigned to an alternative play-based treatment. A further three studies included both an alternative play-based intervention and a wait-list comparator group. Outcome Measures All outcome measures reported on play outcomes that matched the definition of play used in this study. Of the studies included, one used a parent-report questionnaire and 18 used observations of the child's behavior, 13 of which used a validated outcome measure with published psychometric properties.
Fifteen studies showed significant improvements in treatment outcomes between groups for their selected play outcome measure; four did not identify any significant difference between the groups. Further details on the characteristics of the included studies are reported in Table 4. Meta-Analysis: Synthesis of Results Eleven of the 19 studies eligible for the systematic review were included in the meta-analysis (see Fig. 2). The remaining eight studies could not be included in the meta-analysis as they did not contain the data required for calculations. One study reported individual scores. We contacted the remaining seven authors to collect the required data needed for the meta-analysis. Six authors did not respond, and one author no longer had access to the database. Effect sizes ranged from 0.033 to 1.898 in the pre-post intervention within-group analysis, as shown in Fig. 2. The overall intervention effect was small but significant (z(11) = 3.744, p < 0.001, Hedges' g = 0.439, 95% CI [0.209, 0.669]). The within-group heterogeneity was not significant (Q(11) = 17.210, p = 0.070), and 41.9% of the true variability (I²) could be explained by individual study characteristics. A small but significant between-group total effect size also favored play-based interventions for children with ASD. Following the subgroup analysis of intervention characteristics, a meta-regression analysis was performed on eight studies to further explain the variability of the results (Chang et al. 2016a, b; Goods et al. 2013a, b; Kasari et al. 2006a, b, 2012a, b, 2014a, b, 2015; Poslawsky et al. 2015a, b; Quirmbach et al. 2009a, b). The analysis of intervention characteristics indicated that intervention setting and group vs individual therapy were not significant mediators of intervention effects (see Table 5). However, the focus of the intervention (i.e., child, parent, peer or teacher) was found to be a significant mediator of play outcomes (Q(3) = 8.52, p = 0.036).
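For illustration, the categorical-moderator (subgroup) test behind a statistic such as Q(3) = 8.52 can be sketched as below. This self-contained example uses simple inverse-variance pooling rather than the mixed-effects meta-regression the authors ran, and the per-study effects, variances, and focus labels are invented.

```python
# Hypothetical sketch of a subgroup moderator test: pool each subgroup with
# inverse-variance weights, then ask whether the subgroup means differ by
# more than sampling error (Q_between is chi-square with subgroups - 1 df
# under the null). The (g, variance, focus) triples below are invented.
from collections import defaultdict

studies = [(0.61, 0.05, "child"),   (0.48, 0.07, "child"),
           (0.22, 0.06, "parent"),  (0.15, 0.08, "parent"),
           (0.30, 0.09, "peer"),    (0.26, 0.07, "peer"),
           (0.10, 0.07, "teacher"), (0.05, 0.09, "teacher")]

groups = defaultdict(list)
for g, v, focus in studies:
    groups[focus].append((g, v))

subgroup = {}
for focus, effects in groups.items():
    w = [1 / v for _, v in effects]                  # inverse-variance weights
    mean = sum(wi * gi for wi, (gi, _) in zip(w, effects)) / sum(w)
    subgroup[focus] = (mean, sum(w))                 # pooled mean and weight

w_total = sum(w for _, w in subgroup.values())
grand = sum(m * w for m, w in subgroup.values()) / w_total
q_between = sum(w * (m - grand)**2 for m, w in subgroup.values())
print(f"Q_between({len(subgroup) - 1}) = {q_between:.2f}")
```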
Methodological Quality Table 4 contains a description of the methodological quality and QualSyst ratings of the included studies. Two studies had adequate quality using the QualSyst checklist, and three studies had good quality. The remaining 14 studies had strong quality. Interrater agreement for overall scores of methodological quality of the included studies was Kappa 0.884 (95% CI [0.755, 1.000]). As detailed in Table 3, the tabulated interventions ranged from Lego therapy, in which a group of three children jointly build a Lego set through a social division of labor (an "engineer," a "supplier," and a "builder"), communicating and following social rules to complete the build while the therapist highlights social problems and helps the children generate their own solutions, to classroom- and playground-based social groups with typically developing peers as role models, teacher training that embedded joint attention activities into everyday preschool routines, and caregiver-child free-play sessions coded for engagement states and for functional and symbolic play acts; the corresponding settings, dosages, and play outcome results are summarized in Tables 3 and 4. Risk of Bias within Studies The fail-safe N calculated during the meta-analysis is 67, indicating a low risk of publication bias. This means that we would need to locate and include 67 "null" studies for the combined 2-tailed p value to exceed 0.050. Discussion The aim of this study was to review and analyze the evidence for interventions to improve social play skills in children with ASD. A systematic review and meta-analysis of RCT studies were completed using the PRISMA and PRISMA-P statements as guides (Moher et al. 2009; Shamseer et al. 2015). The present study included 19 RCTs with a total of 1149 participants investigating the effectiveness of interventions to improve social play in children aged 2 to 12 years with ASD. When comparing individual child vs group interventions, the meta-analysis of 11 of these studies identified a small but significant effect size in favor of interventions focused on the individual child, as compared with group interventions.
In terms of the focus of the intervention, the meta-analysis demonstrated significantly better outcomes if the focus of the intervention was the child with ASD, as opposed to parents, peers, or teachers. The meta-analysis in this review showed that it is not one intervention characteristic, but the combination of different intervention components, that leads to the development of improved play skills. This systematic review allows clinicians to identify combinations of intervention components that may be effective to use with children with ASD to improve play outcomes, and it provides recommendations for future research. However, the definition of play and how it is measured are inconsistent across different studies. This inconsistency of definition and reporting is a challenge for clinicians when attempting to identify effective play interventions for children with ASD. Similarly, further investigations require a consistent understanding and clear reporting of what play is to allow researchers to develop and test the multimodal and active ingredients in effective play interventions. The findings of the meta-analysis show a small effect size, which indicates that play interventions are feasible and achievable in clinical practice; however, there is a continued need to add to the evidence for play-based interventions to further strengthen them. Intervention Approaches The most commonly used approaches to improve play skills across studies were as follows: twelve studies created supportive environments and relationships by upskilling peers, parents or teachers; ten studies coached the child with ASD; ten studies identified and developed individualized play skills and interests (e.g., Field et al. 2001a, b); and one study used social stories (Quirmbach et al. 2009a, b). It is difficult to identify which approaches are essential in improving play skills. One study that demonstrated significant large treatment effects utilized both coaching of the child with ASD and identifying and developing individual play skills (Kasari et al. 2006a, b). The researchers did this by utilizing specific techniques using naturally occurring opportunities to prompt a particular treatment goal, such as imitating the child's actions on toys and using the child's activity interests to develop play routines. Of the three studies that demonstrated significant moderate treatment effects, two included both supportive environment and relationships and development of individual play skills (Kasari et al. 2014a, b, 2015). Both approaches in these studies used specific techniques to create opportunities for establishing jointly engaged play routines. The third study with moderate treatment effects included the approaches of supportive environment and relationships and coaching of the child with ASD (Corbett et al. 2016a, b). Techniques included the use of video modeling and peer mediators (Corbett et al. 2016a, b). Creating supportive relationships through the upskilling of parents, teachers, and peers in interventions may also provide support for generalization of play skills across environments and with other people. These relationships are frequently responsible for creating the social environment for interaction and transaction for the child with ASD. Parent, peer, and teacher mediated interventions show promise and require further development and investigation.
Intervention Dosage Intervention dosage refers to the quantity of treatment provided and can be reported as total hours or over a set period of time, such as a 1-h session per week (Linstead et al. 2017). Ten of the 19 studies reviewed reported either one or two sessions per week, and four of the five effective interventions with the largest effect sizes utilized either daily or twice-weekly sessions over multiple weeks (six to 12 weeks; Corbett et al. 2016a, b; Kasari et al. 2006a, b, 2014a, b, 2015). Multiple opportunities over time are needed to allow for practice of social play skills, from joint engagement to initiating play to joining in with peers who are already playing. This is similar to findings from the 2005 review of play therapy, which identified that the efficacy of treatment delivered by a therapist increases with the number of sessions (up to a range of between 30 and 35 sessions; Bratton et al. 2005). The session duration for the majority of interventions in this review was between 30 min and an hour. Play interventions in this review were less time intensive when compared to weekly social skills training interventions. Social skills training session duration ranged from 1 to 3 h across eight studies, and in another review focusing only on group interventions, session duration ranged from 1 to 1.5 h across five RCT studies (Rao et al. 2008; Reichow et al. 2013). This difference in time may be reflective of the age of participants in the play interventions (ranging from 2 to 12 years). Using shorter sessions for younger participants is developmentally more appropriate to support engagement and learning, compared with the older participants in the social skills interventions (ranging from 6 to 18 years). Regardless, the play intervention session duration range in this review appears to be feasible. In considering what the optimal dosage may be, the current review identified that three of the five interventions with large effect sizes involved sessions of between 30 min and an hour, with multiple sessions per week and a total number of intervention hours ranging from 10 to 15 h (Kasari et al. 2006a, b, 2014a, b, 2015). Kasari et al. (2006a, b) compared daily 30-min sessions in a preschool setting (focusing on symbolic play, as compared to a joint attention intervention of the same duration and a no treatment control group), whereas Kasari et al. (2015a, b) compared a twice-weekly 30-min play session with a weekly 60-min parent-only psychoeducational intervention. Kasari et al. (2014a, b) compared a twice-weekly 60-min play session with the child and parent in the home with a weekly 2-h parent-only education group program. Authors of a 2017 review of behavioral interventions for children with ASD in a clinical setting found a linear relationship between treatment intensity and treatment outcomes (Linstead et al. 2017). Linstead et al. (2017) examined the results of 726 children with a mean age of 7.1 years and found that the intensity of the intervention accounted for 35% of the variance in treatment outcomes. Multiple sessions over time allow for complex skills to be developed, reviewed, and assimilated, supporting possible generalization of play skills to other environments and with other social partners.
Importantly, as social play interactions become more complex across early and middle childhood, intervention components need to change to meet the demands of the increasingly complex contexts and skills required for successful engagement (Del Giudice 2014). Setting The current review found that the play setting did not appear to influence the effectiveness of the interventions. It may be helpful to consider implementing interventions across various naturalistic settings to reinforce treatment principles and promote generalization of treatment effects. A naturalistic play environment provides the opportunity to develop play skills and interests, assisting with skill generalization across contexts and outside the intervention context. These results are consistent with the results of a previous review of school-based social skills interventions for children with ASD (Bellini et al. 2007). Using a meta-analysis of 55 single-subject design studies, Bellini et al. (2007) recommended that educators in school settings select interventions that could be implemented in naturalistic settings, as opposed to removing children from the classroom or playground for the intervention. Bellini et al. (2007) suggested that the familiarity and inclusion in real social situations had a positive effect on treatment outcomes. Further research is required to investigate contextual factors that influence outcomes. As such, clinicians and educators should not limit their choice of interventions to improve play skills in children with ASD based on setting. Unfortunately, reporting of generalization of play skills across environments has been neglected in the studies included in this review. The lack of reporting of generalization of skills is consistent with other psychosocial interventions for children with ASD (Rao et al. 2008; Reichow et al. 2013). Interventions that provide opportunities to develop skills in real social situations and across different contexts need to be balanced with what is feasible and practical for families, clinicians, and researchers. Outcome Measures Play is frequently used to improve other developmental areas, rather than being the focus of the study (Wong et al. 2015). Many studies using a play-based intervention to improve communication and social skills in children with ASD did not use a play outcome measure, resulting in their exclusion from this analysis. These excluded studies typically reported on aspects of social communication, such as joint attention, but not a comprehensive measure that captures the complex skills involved in play. Play as an independent outcome may have diminished social validity as it is not researched as much as other related skills, such as language and general social skills. Social validity is the significance of the intervention strategies and treatment objectives and refers to the perceived social importance of the intervention results (Foster and Mash 1999). Certainly, social interactions with peers have demonstrated social validity, but this is not necessarily associated with play skills (Watkins et al. 2015). Furthermore, reduced social validity is often related to reduced treatment fidelity, which, in turn, may influence treatment effects (Callahan et al. 2017). Therefore, there is a need to educate parents, teachers, clinicians, and researchers on the importance of improved play outcomes in and of themselves.
Clinicians and researchers should consider the feasibility of including additional education and resources for parents and teachers on the importance of improving play as an outcome of the intervention. Even when studies met the inclusion criteria for this review, play outcomes were not necessarily the primary focus of the intervention. This may be due to the reduced social validity of play with clinicians and researchers. An alternative explanation may be that the outcomes focused on foundation-level social skills that are easier to observe and therefore to measure. For example, joint attention, behavior, and communication outcomes were frequently the outcomes measured in play-based interventions. However, it is difficult to say if improvements in these foundation skills contribute to the development of more complex play skills without therapists and researchers also reporting on play outcomes. Using outcome measures that report on play will support the social validity of play and encourage researchers, clinicians, and families to take play seriously (Bundy 1993). Reporting on both play and social outcomes allows researchers to develop interventions that are more closely aligned with the outcomes that families value and that will impact peer engagement. Observation was the most frequent means of measuring play in this review; however, not all observations were reported using validated measures with published psychometric properties. An example of an appropriate norm-referenced standardized assessment is the Child-Initiated Pretend Play Assessment (CHIPPA; Stagnitti 2007; Uren and Stagnitti 2009). The CHIPPA measures the complexity of a child's play skills, their ability to use symbols, and their reliance on someone else for play ideas. A possible explanation for why researchers are creating outcome measures specific to the study, rather than using preexisting outcome measures with proven psychometric properties, is the difficulty of measuring play in a natural setting, given the complexity and intrinsic motivation inherent to play (Bundy 1993, 2011). As such, measuring playfulness may provide a consistent, valid, and reliable alternative (Bundy 1993, 2011). Playfulness is defined as a disposition to engage in play and has been shown to be responsive to change following intervention (Bundy 2010). The Test of Playfulness is an appropriate outcome measure for observing play in natural settings with robust psychometric properties (Bundy 2010; Skard and Bundy 2008). Recommendations The continued development of play interventions for children with ASD using RCTs is important. Researchers conducting RCTs need to clearly report the intervention components following the CONSORT statement, so that play-based intervention research can be advanced and potentially adapted to different settings. Consistent and comprehensive reporting of play outcomes using valid and reliable measures when investigating a play-based intervention is needed. The use of play outcomes by researchers and clinicians will support the social validity of play and allow for balanced comparisons between interventions. Future research should also consider identifying and comparing the active ingredients within an intervention. Specifically, further investigation is recommended into the use of peers and how they could be more effectively utilized to support the child with ASD to improve their play.
Finally, while the current review included children aged 2 to 12 years, there was significant variability in the inclusion criteria of the participants across studies, including the developmental ability of the study participants. We recommend that future investigations include descriptive information on participants' language and social skills to enable clinicians to determine if the intervention would be appropriate to their client's needs. Limitations The inclusion criteria requiring the studies to report on play outcomes were necessary to be able to compare across studies; however, they were potentially restrictive, and effective play intervention studies may have been missed in this review because they did not explicitly report on play outcomes. While participant demographics of the intervention and comparison groups were similar across the different studies, there was some variability in the type of comparison groups. Seven of the studies used a wait-list, no-treatment control group, while the remaining studies used an alternative treatment comparison group. Due to the differences in these comparison group types, and to make balanced comparisons between the studies, we included only studies with alternative treatment comparison groups in the meta-regression. This ensured homogeneity between comparison groups and outcomes but limited the number of combinations that could be assessed due to collinearity. As a result, significant relationships between study components may not have been identified. Conclusion The results of this systematic review and meta-analysis suggest that play-based interventions produced small to medium treatment effects between 0.083 and 0.586 for children with ASD.
Nanocarbon-Based Mixed Matrix Pebax-1657 Flat Sheet Membranes for CO2/CH4 Separation In the present work, Pebax-1657, a commercial multiblock copolymer (poly(ether-block-amide)) consisting of 40% rigid amide (PA6) groups and 60% flexible ether (PEO) linkages, was selected as the base polymer for preparing dense flat sheet mixed matrix membranes (MMMs) using the solution casting method. Carbon nanofillers, specifically raw and treated (plasma and oxidized) multi-walled carbon nanotubes (MWCNTs) and graphene nanoplatelets (GNPs), were incorporated into the polymeric matrix in order to improve the gas-separation performance and the polymer's structural properties. The developed membranes were characterized by means of SEM and FTIR, and their mechanical properties were also evaluated. Well-established models were employed in order to compare the experimental data with theoretical calculations concerning the tensile properties of the MMMs. Most remarkably, the tensile strength of the mixed matrix membrane with oxidized GNPs was enhanced by 55.3% compared to the pure polymeric membrane, and its tensile modulus increased 3.2 times compared to the neat one. In addition, the effect of nanofiller type, structure and amount on the separation performance for a real binary CO2/CH4 (10/90 vol.%) mixture was evaluated under elevated pressure conditions. A maximum CO2/CH4 separation factor of 21.9 was reached with a CO2 permeability of 384 Barrer. Overall, the MMMs exhibited enhanced gas permeabilities (up to fivefold values) without sacrificing gas selectivity compared to the corresponding pure polymeric membrane. Introduction In recent years, gas separations such as N2 production, H2 recovery, CO2 capture and natural gas sweetening have been achieved with polymeric membranes, an efficient process that competes well with established separation processes such as adsorption, extraction and cryogenic distillation [1]. Membranes offer easy fabrication and scalability, low capital and operating cost, operational simplicity, low energy consumption and maintenance, mechanical reliability and a small carbon footprint. All these features render them a promising method in high-performance gas-separation applications [2]. However, the gas-separation performance of polymeric membranes is frequently restricted by Robeson's trade-off upper bound [3]: polymers with high permeability exhibit low selectivity and vice versa [3,4]. To address this, nanotechnology is used in membrane science, creating a new type of nanoengineered materials in which the membrane properties are modified in the direction of improving the filtration/separation processes [5]. In one reported modification, one-dimensional halloysite nanotubes (HNTs, up to 0.2 wt.% relative to the casting solution) were incorporated within the thin film via a solution casting technique. The change in crystallization of the polyamide component, which was induced at the HNT surface, provided the composite membrane with an ultrahigh CO2/N2 selectivity of up to 290 combined with a moderate CO2 permeability of 80.4 Barrer. In another work, in 2019, Farashi et al. [21] studied Pebax-1657 mixed matrix membranes with different contents of aluminum oxide (Al2O3) (0, 2, 4, 6 and 8 wt.%) in the polymer matrix (Pebax). Permeances of pure CO2 and CH4 gases were measured in the range of pressures between 3 and 15 bar at 25 °C. The results revealed better separation efficiency (both CO2 permeability and CO2/CH4 selectivity) of the nanocomposite membranes than of the pristine membrane.
For example, the CO2 permeability and ideal CO2/CH4 selectivity values for the neat membrane at a pressure of 3 bar were 123.46 Barrer and 21.21, respectively, whereas those values for the membrane comprising 8 wt.% of Al2O3 were 159.27 Barrer and 24.73, respectively. In addition to the selection of the polymer matrix, the selection of the specific kind of filler material is another route for improving the selectivity performance of the membrane. Carbon-based nanomaterials are reported as promising filler materials for producing improved mixed matrix membranes for gas-separation applications [5]. Among others, multi-walled carbon nanotubes (MWCNTs) and graphene nanoplatelets (GNPs) [26,27] are two types of nanocarbons, nanoscale carbon derivative materials, which can provide the desired improvement in both the permeability and the selectivity performance of the resulting mixed matrix membrane when used as membrane filler materials. The fact that both MWCNTs and GNPs are relatively cheap and easily available materials, combined with their good physicochemical properties, has rendered them particularly popular in composite membrane technology. Among others, some of the advantages of these two nanomaterials are (1) the facile production methods, (2) the ability for surface modification using wet chemistry, (3) their dispersibility using sonication, (4) their conductive properties and (5) their large specific surface areas [28]. In the current work, Pebax-MH1657, a commercial multiblock copolymer (poly(ether-block-amide)) with remarkable CO2 separation properties, was selected as the base polymer of the MMMs. In addition, both raw and treated MWCNTs and GNPs were used as membrane filler materials for producing composite membranes of 0.7, 3.0 and 5.3 wt.% in carbon nanofillers relative to the polymer content. The characterization and performance evaluation of the thirteen produced membranes were carried out by means of SEM, FTIR, gas mixture permeability evaluation for CO2/CH4 (10/90), contact angle measurements and mechanical tests. Materials Pebax MH 1657 (containing approximately 60 wt.% polyether segments and 40 wt.% polyamide segments) was purchased from Arkema S.A., France. Ethanol was purchased from VWR International Ltd., Lutterworth, UK. All of the above chemicals were of analytical grade and were used without further purification. Ultrapure water (Milli-Q, 18 MΩ·cm) was used throughout this study. The carbon nanofillers were provided by our colleagues at FutureCarbon GmbH, Bayreuth, Germany. For the preparation of the flat sheet Pebax-1657-based mixed matrix membranes, a dispersion of raw and treated (plasma or oxidized) multi-walled carbon nanotubes (MWCNTs) and graphene nanoplatelets (GNPs) was incorporated into the polymeric matrix in order to improve the gas-separation performance and the polymer's structural properties. The raw GNPs were produced by a water-based milling process, and for the wet chemical treatment of the GNPs, KMnO4 was employed as the oxidizing agent. In addition, the raw MWCNTs were produced by chemical vapor deposition of hydrocarbon gas on iron-based catalysts, and for the functionalization of the MWCNTs, plasma treatment was implemented (optimum operational parameters: He with 500 W plasma power for 10 min exposure time and O2 with 500 W for 70 min exposure time). Dispersions of all twelve different MMM systems were formed by the solution blending method with an additional dispersion step to avoid agglomeration.
The obtained homogeneous solutions were poured into Petri dishes, and solvent evaporation was performed under controlled atmospheric conditions. Finally, the films were dried in an oven at 60 °C for 2 h in order to remove any residual solvent. The overall membrane preparation process is illustrated in Figure 1.
The status of dispersion quality was evaluated directly by optical inspection of the solutions and the prepared membranes, as well as indirectly, mainly by mechanical strength testing and microscopic analysis. After preparation, the membranes were dried, and the permeation properties were analyzed for CO2 and CH4. Instrumentation-Characterization The composite carbon-based Pebax-1657 membranes were evaluated concerning their CO2/CH4 selectivity and permeability performance in a flow selectivity apparatus in conjunction with high-sensitivity gas chromatography. In addition, the prepared membranes were characterized by Fourier transform infrared spectroscopy (FTIR) conducted on a Nicolet Magna-IR Spectrometer 550 (Thermo Fisher Scientific, Waltham, MA, USA). The morphological characterization of selected samples was investigated by scanning electron microscopy (SEM) in a JEOL JSM-7401F instrument (Tokyo, Japan). The membranes' mechanical properties (Young's modulus, ultimate tensile strength and tensile elongation) were determined in a Thümler GmbH Tensile Tester (Roth, Germany) equipped with a PA6110 Nordic Transducer load cell with a maximum force of 250 N [29]. The specimens were prepared according to ASTM D882. Dynamic contact angle (CA) measurements of water/membrane interfaces took place using the Krüss DSA30S optical contact angle measuring instrument (Krüss GmbH, Hamburg, Germany). Membrane Preparation The first step of the membrane preparation process was the blending of two solutions, "A" and "B". Solution "A" was prepared by dissolving the Pebax-MH1657 pellets, 5 wt.%, in an EtOH/H2O (70/30 wt.%) solvent, whereas solution "B" was derived by dispersing the carbon nanofiller in the same solvent (EtOH/H2O, 70/30 wt.%). Solution "A" was first refluxed at 80 °C for 2 h, and solution "B" was sonicated for 1 h. The final solution, obtained after mixing the two prepared solutions, was stirred and sonicated for half an hour before being poured into a flat glass Petri dish. Subsequently, the solvent was evaporated overnight at room temperature, and finally the films were dried in an electrical oven at 60 °C for about two hours. In Table 1, all the studied cases for the MMM preparation are presented, namely the concentrations of the precursor solutions, the polymer and carbon nanomaterial (CNM) contents, and the final membrane concentrations after drying for the three studied cases of filler content (all percentages in wt.%).
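As a worked illustration of the compositions behind Table 1, the sketch below assumes that a filler loading of x wt.% means filler/(filler + polymer) in the dried film; the batch size is invented, and solution "A" is taken as 5 wt.% Pebax as described above.

```python
# Hypothetical mass-balance sketch for the dry-film compositions; the
# definition of the loading and the batch quantities are assumptions,
# not values taken from Table 1.
def filler_mass_for_loading(polymer_mass_g, loading_wt_pct):
    """Filler mass (g) such that filler/(filler + polymer) equals the loading."""
    f = loading_wt_pct / 100.0
    return polymer_mass_g * f / (1.0 - f)

polymer_g = 50.0 * 0.05           # 50 g of 5 wt.% solution "A" -> 2.5 g Pebax
for loading in (0.7, 3.0, 5.3):   # the three studied filler contents (wt.%)
    filler_mg = 1000 * filler_mass_for_loading(polymer_g, loading)
    print(f"{loading:>4} wt.% film: {filler_mg:.1f} mg filler "
          f"per {polymer_g:.2f} g Pebax")
```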
Gas Permeability/Separation Measurements under Continuous Flow Conditions Gas permeability and selectivity evaluation was performed by the "flow method" using the gas chromatography (GC) technique. The gas permeance values were measured in the apparatus presented in Figure 2, where the permeate stream was directed to a highly sensitive gas chromatography instrument, and the permeance coefficient was calculated by the integration of the recorded GC peak [30]. Helium was used as the carrier gas (Figure 2). The experimental setup for the permeance measurements, using gas chromatography analysis, has been described in detail previously [31]. Using this setup, mixture selectivity experiments of 10/90 (mole concentration) CO2/CH4 were performed. All thirteen flat sheet membranes, each one with an effective permeation area of about 5.3 cm², were successively inserted into a metallic (bronze) membrane housing. Each membrane was placed into the cell and thoroughly degassed for at least 24 h at 10⁻⁶ mbar and 80 °C before the permeance/selectivity measurements. The 10/90 CO2/CH4 (mole concentration) gas mixture was introduced to the feed side of the membrane, whereas helium was used as the sweep gas on the permeate side. Mass flow controllers (Brooks Instruments, 0-50 mL/min) were used to define the flow rates of each gas. On the retentate side, the pressure was controlled by a backpressure regulator, while the permeate side was maintained at atmospheric pressure. The transmembrane pressure was recorded using a differential manometer. An 8610C gas chromatograph equipped with high-sensitivity TCD and FID detectors was used for analysis of both gas lines. The selectivity coefficients were calculated according to the following equation [33]:

α(gas1/gas2) = (A(gas1/perm)/A(gas2/perm)) / (A(gas1/feed)/A(gas2/feed))

where A(gas1/perm), A(gas1/feed) and A(gas2/perm), A(gas2/feed) are the peak surfaces for the permeate and feed gas streams, respectively.
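A minimal Python sketch of the two quantities obtained with this setup is given below. The GC peak surfaces, flux, film thickness, and pressure values are invented placeholders rather than measurements from this work, and the conversion assumes the standard definition 1 Barrer = 10⁻¹⁰ cm³(STP)·cm/(cm²·s·cmHg).

```python
# Illustrative calculation of the mixed-gas separation factor from GC peak
# surfaces (per the equation above) and of a permeability in Barrer from a
# measured flux. All numbers below are hypothetical placeholders.
def separation_factor(a1_perm, a2_perm, a1_feed, a2_feed):
    """alpha(1/2) = (A(1,perm)/A(2,perm)) / (A(1,feed)/A(2,feed))."""
    return (a1_perm / a2_perm) / (a1_feed / a2_feed)

def permeability_barrer(flux_cm3stp_per_s, area_cm2, thickness_cm, dp_cmhg):
    """Permeability = flux * thickness / (area * pressure difference),
    converted to Barrer (1 Barrer = 1e-10 cm^3(STP).cm / (cm^2.s.cmHg))."""
    return flux_cm3stp_per_s * thickness_cm / (area_cm2 * dp_cmhg) / 1e-10

# Hypothetical CO2/CH4 (10/90) run: GC peak surfaces for permeate and feed
alpha = separation_factor(a1_perm=4.2e5, a2_perm=1.7e5,
                          a1_feed=0.9e5, a2_feed=8.1e5)
# Hypothetical CO2 flux through a 40 um film at ~1 bar CO2 partial pressure
p_co2 = permeability_barrer(flux_cm3stp_per_s=3.9e-3, area_cm2=5.3,
                            thickness_cm=40e-4, dp_cmhg=76)
print(f"CO2/CH4 separation factor = {alpha:.1f}, P(CO2) = {p_co2:.0f} Barrer")
```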
Morphology of Nanofillers/Prepared MMMs

High-purity carbon nanotubes were produced by catalyst-assisted chemical vapor deposition [34] and were treated as described in Section 2.1. The GNPs were produced following the water-based milling process [35] and were oxidized using KMnO4 as the oxidizing agent (see Section 2.1). In Figure 3, SEM micrographs illustrate the morphology of the MWCNTs and their interwoven and entangled arrangement. They appeared in the form of ribbon complexes with no sign of any impurities, and their outer diameter ranged between 13 and 23 nm. The GNP fillers have a wide range of dimensions in both lateral size and thickness, which fluctuate from 2 to 5 µm and from 50 to 100 nm, respectively. In addition, the GNPs have a high purity and a well-defined structure of uniform flakes.

Figure 4 summarizes the SEM micrographs of four selected membranes: the neat Pebax-MH1657 membrane, the MM5 sample (3 wt.% of raw GNPs relative to polymer content), the MM8 sample (3 wt.% of plasma-treated MWCNTs) and the MM11 sample (3 wt.% of raw MWCNTs). The SEM images illustrate two main characteristics of the prepared membranes, namely that they show a dense structure without any pinholes and that their thickness ranges from about 8 to 90 µm. Neither the carbon nanotubes nor the GNPs are visible in the cross-sectional images of the three selected mixed matrix membranes at the presented magnification, in which no significant differences in the matrix are observed.

This may be sufficient proof for the homogeneous dispersion of the filler in the polymer matrix, without observable agglomerates or obvious defects in the matrix, indicating the good affinity and adhesion between the polymer and the nanofillers. The existence of some white "dots", especially on the MM5 membrane, can be attributed to dust, which sticks to the surface after the membrane cutting preparation with liquid nitrogen as a cooling agent. The membranes are extremely flexible, which becomes obvious in the SEM image of the neat sample (curved membrane).

FTIR Analysis

The FTIR spectra of the thirteen Pebax-1657-based membranes are shown in Figure 5. All samples exhibit very similar spectra. The peak at 3294 cm−1 indicates the presence of the N-H amide group [36], and the peaks at 2938 and 2864 cm−1 the existence of the aliphatic C-H groups and the δ(C-H) and ν(C-H) vibrations [37]. The characteristic peaks at 1640 and 1544 cm−1 correspond to the hydrogen-bonded amide peak and to the C-O stretching band, respectively. The two characteristic peaks at 1731 and 1099 cm−1 correspond to the C=O (carbonyl group) and C-O-C (ether group) stretching vibrations in the pure Pebax-1657 structure [38].
Water Contact Angle Measurements

The water contact angle (WCA) measurements were performed employing a Krüss DSA30S instrument, as mentioned. The CA measuring instrument has a range of 180° for surface tensions ranging from 0.01 to 2000 mN/m. A digital image, followed by the calculated droplet contact angle, is recorded automatically by the Advance-Krüss software. The instrument provides remarkable reproducibility and high measurement accuracy [39]. During the measurement, at any equilibrium stage of the drop/surface system, a calculated contact angle is recorded automatically. The affinity of the membranes' surfaces to water was assessed through the equilibrium contact angle of all studied samples, as shown in Table 2. The presented value for each sample is the average of five measurements from different spots on the membrane's surface. It is obvious that in all cases of mixed matrix membranes, the surface hydrophilicity was lower compared to the neat Pebax-1657 membrane.
Specifically, the WCA is 63.4° for the neat Pebax-1657 membrane, while for the derived MMMs it fluctuated between 69° and ~108°. The same behavior was also previously observed for cross-linked Pebax membranes, where a higher grade of crosslinking resulted in an analogous increase in surface hydrophobicity [40]. Similarly, this trend has also been presented in a study regarding MWCNTs/Pebax MMMs [41]. In all four groups of mixed matrix membranes (relative to the filler's type), a common feature was noticed: an increase in filler concentration leads to a reduction in surface hydrophilicity. In particular, the GNP fillers affect the membranes' hydrophilicity more intensely, as these nanomaterials render the hydrophobicity sturdier than the MWCNTs. This can be attributed to the shape and dimensions (see Section 3.1) of the robust GNPs. The edges of GNP flakes protruding from the surface form a kind of comb at the membrane's surface, which increases its roughness. The existence of the GNPs' edges/wrinkles is probably more in line with a Cassie-Baxter wetting state than a Wenzel state [42], and therefore the surface hydrophilicity decreases; the higher roughness leads to a reduction in water wettability. Furthermore, as observed in Table 2, the higher measured WCA values correspond to the modified nanofillers (both cases of MWCNTs and GNPs) and not to the raw nanofillers, providing good evidence for their better dispersion and compatibility with the polymeric matrices. The difference in surface hydrophilicity is apparent in the three selected water contact angle images presented in Figure 6.
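A minimal numerical sketch of the two wetting models invoked here (the standard Wenzel and Cassie-Baxter relations; the roughness factor and solid fraction values are illustrative assumptions):

```python
import math

def wenzel(theta_deg, r):
    """Wenzel: cos(theta*) = r*cos(theta). Roughness r >= 1 amplifies the
    intrinsic tendency, so an intrinsically hydrophilic surface gets MORE
    hydrophilic -- it cannot explain the observed hydrophobicity increase."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_deg, f_solid):
    """Cassie-Baxter: cos(theta*) = f*(cos(theta) + 1) - 1. Air pockets
    under the drop (f_solid < 1) always push the apparent angle upward."""
    c = f_solid * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

theta = 63.4  # neat Pebax-1657 WCA from Table 2
print(f"Wenzel,        r = 1.2: {wenzel(theta, 1.2):.1f} deg")        # ~57.5 (more hydrophilic)
print(f"Cassie-Baxter, f = 0.5: {cassie_baxter(theta, 0.5):.1f} deg")  # ~106  (more hydrophobic)
```

With a plausible solid fraction, the Cassie-Baxter state reproduces apparent angles near the ~108° measured for the GNP-loaded MMMs, consistent with the argument above.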
Mechanical Properties

The mechanical behavior of the studied membranes, namely their Young's modulus, ultimate tensile strength and tensile elongation at fracture, was also investigated [29]. These three characteristic mechanical properties were evaluated from the tensile axial stress-strain curves, and their full calculation has been described in our previous work [43]. In Table 3 and Figure 7, the numerical results of the mechanical tests of all samples are summarized, and the tensile properties of all membranes are depicted. The measurements were performed at an ambient humidity of about 50%, and the samples were pre-equilibrated at this condition prior to each measurement. By analyzing the data in Table 3, it becomes clear that the values for the twelve MMMs can differ slightly or significantly compared to the "neat" membrane. The characteristic factor that plays the major role in determining the Young's modulus and the ultimate tensile strength is the concentration of the nanofiller material. In all four cases of different added nanofiller materials, a higher concentration results in higher values of both abovementioned properties. For both properties, the addition of GNPs is more effective than the addition of MWCNTs. The highest values are reached in the cases of oxidized GNPs and plasma-treated MWCNTs, and not for the corresponding raw materials. Specifically, for the membranes prepared with oxidized GNPs, the Young's modulus increased up to 3.2 times (MM3), and for the membranes with the plasma-treated MWCNTs up to 2.3 times (MM9), compared to the neat polymeric membrane. The observed differences between the mixed matrix membranes with raw and modified nanofillers are explained by the homogeneous and uniform dispersion of the modified GNPs and MWCNTs in the polymeric matrix, leading to stiffness enhancement of the polymer composite. The Young's modulus value of the neat Pebax-1657 membrane, ~59 MPa, is also reported by Duan et al. [44] in their recent work, where covalent organic framework (COF)-functionalized MMMs were studied concerning CO2/N2 performance. Similar to our results is the behavior of the functionalized-GO/Pebax-1657 membranes in the work of Zhang et al. [45], where the addition of functionalized graphene oxide (f-GO) into the Pebax-1657 matrix resulted in higher Young's modulus values, from ~46 MPa for the neat membrane up to 126 MPa for the 0.7 wt.% f-GO/Pebax-1657 sample. In contrast to our results, where the addition of carbon nanofillers led to an increase in the Young's moduli, for the reported addition of COF-5 at concentrations up to 3 wt.%, the Young's modulus was always subordinate to that of the corresponding neat membrane.
This detriment of the Young's modulus has also been observed in another work, by Fam et al. [46]. Indeed, for the case of Pebax-1657/ionic liquid (IL) membranes, the initial value of 73.7 MPa decreased down to 1.2 MPa for the membrane with 80% IL loading. Furthermore, the good interfacial adhesion between the polymer and the nanofillers impelled the high toughness of the mixed matrix membranes, and again the ones with embedded treated nanofillers presented better results: the ultimate tensile strengths of the MM3 and MM9 membranes were higher than that of the neat membrane by 55.3% and 23.5%, respectively. On the other hand, the more flexible MWCNT nanofillers enhanced the membranes' elongation at fracture more effectively than the stiffer GNP nanofillers. An increase of up to 64.1% in the elongation at fracture was observed for the 5.3 wt.% plasma-treated MWCNT fraction (MM9 membrane) compared to the neat polymeric membrane. In contrast, a maximum elongation (210.8%) was achieved for the MM10 membrane (0.7 wt.% raw MWCNTs), and a further increase in raw MWCNT content up to 5.3 wt.% impaired the elongation. This trend could be explained based on the hypothesis that at high MWCNT loadings the membranes (MM11 and MM12) become more brittle because of weaker interfacial binding between polymer and nanofiller and, simultaneously, a higher restriction of the chains' mobility within the polymer matrix [47,48]; conversely, the stronger interaction between the modified MWCNTs and the polymer matrix, caused by the incorporation of functional groups in the nanofiller's structure, results in a better outcome. The trade-off between ductility and tensile strength has also been reported in the literature [49,50]. The rigid agglomerates of MWCNTs in the soft segment of the polymer matrix, formed after an inefficient dispersion, act as stress raisers, resulting in premature fracture. An overall remark is that the type of nanofiller, its modification/treatment (which facilitates its good dispersibility), as well as its weight fraction in the polymer matrix significantly affect the mechanical properties of the prepared mixed matrix membranes.

In order to compare the experimental Young's modulus and tensile strength of the mixed matrix membranes with theoretical predictions, three (Halpin-Tsai, Ekvall and Whitney-Riley) and two (Halpin-Kardos and Hirsch) well-known models were employed, respectively. Firstly, for the aligned and the randomly and unidirectionally distributed filler conditions, the equations of the Halpin-Tsai model are the following [51]:

E_c = E_m (1 + ζ η ϕ_f)/(1 − η ϕ_f), with η = (E_f/E_m − 1)/(E_f/E_m + ζ) and ζ = k (l_f/t_f),

where E_c, E_m and E_f are the Young's moduli of the composite, the Pebax matrix and the filler (MPa), respectively; ϕ_f is the volume fraction of the filler in the composite, derived from the equation ϕ_f = (w_f/ρ_f)/(w_f/ρ_f + (1 − w_f)/ρ_m), where w_f is the filler mass fraction and ρ_f and ρ_m are the densities of the filler (2.2 and 2.0 g/cm3 for GNPs and MWCNTs, respectively) and of the matrix (1.01 g/cm3), respectively; ζ is a parameter regarding the nanofiller's geometry, distribution and loading, with k = 2/3 and 2 for GNPs and MWCNTs, respectively; while l_f and t_f refer to the length and the thickness/diameter of the GNPs/MWCNTs, respectively. These latter parameters were defined by SEM analysis. According to the Ekvall model, the Young's modulus is calculated as a function of ϕ_m, the volume fraction of the matrix in the composite, and ν_m, the Poisson's ratio of the matrix (0.3) [52].
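As a minimal numerical sketch of the Halpin-Tsai estimate (standard form of the model; the filler aspect ratio below is an illustrative assumption, since the actual l_f/t_f values were taken from SEM):

```python
def halpin_tsai(E_m, E_f, w_f, rho_f, rho_m, k, aspect_ratio):
    """Halpin-Tsai estimate of the composite Young's modulus (aligned fillers).

    E_m, E_f     : matrix and filler moduli (same units, e.g. MPa)
    w_f          : filler mass fraction (e.g. 0.03 for 3 wt.%)
    rho_f, rho_m : filler and matrix densities (g/cm^3)
    k            : geometry constant (2/3 for platelets/GNPs, 2 for tubes/MWCNTs)
    aspect_ratio : l_f / t_f, as obtained from SEM
    """
    # convert mass fraction to volume fraction
    phi_f = (w_f / rho_f) / (w_f / rho_f + (1.0 - w_f) / rho_m)
    zeta = k * aspect_ratio
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * phi_f) / (1.0 - eta * phi_f)

# 3 wt.% GNPs in Pebax-1657 (E_m = 58.7 MPa, E_f ~ 1e6 MPa), assumed aspect ratio ~45
print(f"E_c = {halpin_tsai(58.7, 1.0e6, 0.03, 2.2, 1.01, 2/3, 45.0):.0f} MPa")  # ~84 MPa
```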
Moreover, for the Whitney-Riley model, the Young's modulus is calculated as a function of ν_f, the Poisson's ratio of the filler (0.22 for GNPs and 0.35 for MWCNTs), and G_m, the shear modulus of the matrix, obtained from the equation G_m = E_m/(2(1 + ν_m)) (22.6 MPa) [53]. For all aforementioned models, the Young's moduli of the matrix (Pebax), the GNPs and the MWCNTs are 58.7 MPa (from the tensile test), ~1000 GPa and ~600 GPa, respectively. In Figure 8, the comparison of the theoretical calculations from the three models with the experimental tensile moduli of all MMMs is depicted. For low nanofiller loading (0.7 wt.%), the Halpin-Tsai model for the aligned distributed nanofiller condition fitted the experimental data perfectly for all nanofillers except raw GNPs. For higher loadings, the Halpin-Tsai model for randomly and unidirectionally distributed nanofillers is most suitable, mainly for the case of modified nanofillers, indicating a sufficient dispersion quality and the avoidance of aggregate formation. In general, the other two models, which are based only on the Poisson's ratio, diverge from the experimental data to a greater or lesser extent, depending on the type or amount of nanofiller.

Similarly, for the theoretical calculations of the composites' ultimate tensile strength, the Halpin-Kardos model is based on equations involving UTS_f, UTS_m and UTS_c, the ultimate tensile strengths of the filler, the matrix and the composite (MPa), respectively [54]. Finally, the Hirsch model combines the parallel and series bounds through a stress-transfer parameter [55]:

UTS_c = x (ϕ_m UTS_m + ϕ_f UTS_f) + (1 − x) UTS_f UTS_m/(ϕ_m UTS_f + ϕ_f UTS_m),

where x is a parameter that determines the stress transfer between the matrix and the filler. For both abovementioned models, the tensile strengths of the Pebax matrix, the GNPs and the MWCNTs are 7.2 MPa (from the tensile test), ~10 GPa and ~20 GPa, respectively. The overall theoretical calculations are illustrated in Figure 9. As observed, the Hirsch model is apparently more consistent for all types of nanofillers than the Halpin-Kardos model, indicating that the parameter x (Equation (20)) is a crucial factor for the prediction of the tensile strength of the composites.

CO2/CH4 Permeability and Selectivity Results

In the present work, the binary CO2/CH4 gas mixture was selected in order to study the membranes' performance based on their permeability and selectivity. Experiments were conducted over a feed pressure range of 1.3 to 5.0 bar at a temperature of 25 °C. The measurements were performed as described in Section 2.4. In Table 4, the CO2 and CH4 permeability values and the respective CO2/CH4 selectivities for the 10/90 (molar concentration) CO2/CH4 gas mixture are presented at the five studied feed pressures of 1.3, 2, 3, 4 and 5 bar. Table 4 and Figure 10 exhibit the effect of both GNPs and MWCNTs, as four different nanomaterial types, for three different filler-loading concentrations.
All membranes are selective for CO2 and present permeability values fluctuating between 67 Barrer (MM4 membrane at 5 bar feed pressure) and 384 Barrer (MM11 membrane at 4 bar feed pressure).

Figure 10. CO2 permeability (left) and CO2/CH4 separation factor (selectivity) (right) versus pressure drop across all studied MMMs for 10% v/v CO2 in CH4. Temperature was kept constant at 298 K.

As seen in Table 4, the CO2 permeability values change in a different manner for the MWCNT- and the GNP-based MMMs. For the cases of raw and oxidized GNPs (samples MM1 and MM4), a decrease of ~10% in CO2 permeability is observed for the MMMs loaded with 0.7 wt.% o-GNPs. This can be attributed to the fact that this very low amount of GNPs works like a crosslinker and makes the polymeric matrix less flexible, with lower free volume than the neat matrix, resulting in a decrease in gas diffusivity and consequently also in gas permeability [56]. By increasing the amount of GNPs, the CO2 permeability also increases, presenting a maximum value at concentrations of 3 and 5.3 wt.% in the cases of oxidized and raw GNPs, respectively. This behavior is in correlation with what is reported in the literature for Pebax [46,57,58] but also for other polymeric mixed matrix membranes [16,59,60]. Here, it must be noted that the addition of the oxidized GNPs ultimately provides improved properties and a better position in the Robeson plot of selectivity versus permeability, as the selectivity remains constant while the CO2 permeability simultaneously increases by about 46% at 1.3 bar.

On the other hand, in the case of the MWCNT-based MMMs, the CO2 permeability increases even for the very low amount of 0.7 wt.%, whereas the GNPs require concentrations above 3 wt.% in order to increase the permeability, as mentioned. The smooth one-dimensional nanochannels of the MWCNTs could act as accelerated CO2 transport pathways through the MMMs. For both raw and plasma-treated carbon nanotubes, and at all feed pressures, the highest CO2 permeability is observed for the MMMs with a filler concentration of 3 wt.%. Furthermore, the membranes with oxidized GNPs need a concentration of 5.3 wt.% in order to reach the MWCNT performance concerning permeability.

Effect of Feed Pressure: All membranes were tested in CO2/CH4 (10/90 vol.%) at five different transmembrane pressures: 1.3, 2, 3, 4 and 5 bar. With one exception (that of the MM11 membrane), the influence of feed pressure on CO2 permeability was negligible for the twelve membranes.
As seen in Table 4 and Figure 10, only small fluctuations, of about ±2-7%, in CO2 permeability were observed as the pressure rose from 1.3 to 5 bar. However, for the MM11 membrane, the effect of feed pressure on CO2 permeability was positive, with an increase from 270 Barrer at a feed pressure of 1.3 bar to 384 Barrer at a feed pressure of 4 bar. On the contrary, the effect of pressure on CO2/CH4 selectivity is negative, with a slight decrease compared to the value of the neat membrane. The only exception to this trend was observed for the MM11 membrane (see Figure 10). At this point, irrespective of their concentration, the nanofillers retain the selectivity of the neat membrane above 4 bar, withstanding membrane compaction. Overall, the MMMs exhibited enhanced gas permeabilities, with up to fivefold values, without sacrificing selectivity compared to the neat polymeric membrane.

Conclusions and Outlook

Pebax-1657 mixed matrix flat sheet membranes were prepared following the solution casting/solvent evaporation technique, characterized and tested concerning their CO2/CH4 separation performance. Two types of carbon nanofillers were used, namely multi-walled carbon nanotubes and graphene nanoplatelets, both as raw materials and after plasma-treatment modification and an oxidation process, respectively. In both cases, the modified nanomaterials (the oxidized GNPs and the plasma-treated MWCNTs) had a stronger reduction effect on the water wettability of the prepared MMMs. The addition of these carbon-based nanomaterials resulted in membranes with improved mechanical properties and higher CO2 permeability, while the selectivity was maintained at the level of the neat membrane. The polar groups of the modified nanofillers form hydrogen bonds with the polymeric chains of Pebax; these hydrogen-bonding interactions disturb the polymer chain packing and increase the free volume (voids) available for the penetration of CO2 and CH4 molecules, and consequently increase the gas diffusion (permeability). Furthermore, the functional groups on the surface of the nanofillers may interact with gases (e.g., CO2) and increase their solubility in the MMMs, thereby facilitating the transport of CO2 through the MMMs. Improved resistance to membrane compaction was an additional benefit of all examined nanofillers. The CO2/CH4 separation factor for the tested 10/90 (mole fraction) mixture fluctuated between 16 and 21, with the higher values observed at the lowest transmembrane pressure of 1.3 bar. A general conclusion is that the MMMs exhibited enhanced gas permeabilities, up to fivefold values (~384 Barrer for MM11), without sacrificing the gas selectivity in comparison with the neat membrane.
Pharmacodynamic monitoring of (immuno)proteasome inhibition during bortezomib treatment of a critically ill patient with lupus nephritis and myocarditis

Objective: To describe the pharmacodynamic monitoring of (immuno)proteasome inhibition following treatment with bortezomib in a therapy-refractory systemic lupus erythematosus (SLE) patient with life-threatening myocarditis and lupus nephritis.

Patient and methods: Inhibition of the catalytic activities of the proteasome subunits β5 (constitutive proteasome), β5i and β1i (immunoproteasome) was measured in peripheral blood mononuclear cells using subunit-specific fluorogenic peptide substrates in a patient who received three cycles of bortezomib (1.3 mg/m2 subcutaneously, days 1, 4, 8 and 11; every three weeks) along with plasma exchange during the first two cycles.

Results: Proteasome β5, β5i and β1i subunit activities were readily inhibited 1 h after bortezomib administration. Twenty-four hours post-bortezomib administration, β5 and β5i activities were largely restored, whereas inhibition of β1i activity was sustained. Clinically, after three cycles, cardiac function had improved, with concurrent improvement of haemodynamic stability during haemodialysis. Anti-dsDNA dropped from >400 to 12 IU/mL, along with normalisation of complement C3 and C4. Bortezomib therapy was well tolerated, and the patient now has a sustained remission for >16 months.

Conclusions: This case illustrates the potential benefit of pharmacodynamic monitoring of (immuno)proteasome subunit-specific activity after bortezomib dosing in patients with therapy-refractory SLE. This tool may hold potential to guide personalised/precision dosing aiming to achieve maximal efficacy and minimal toxicity.

INTRODUCTION

Systemic lupus erythematosus (SLE) is a chronic autoimmune disease with heterogeneous presentation and involvement of multiple organ systems, resulting in high morbidity and a threefold higher mortality rate than in the general population. Myocarditis is an uncommon manifestation and occurs particularly in conjunction with pericarditis. Active nephropathy is observed in nearly 30% of patients with SLE and is associated with a further increase in mortality risk. 1
To date, beyond conventional immunosuppressive agents, various biologicals are used for therapy: rituximab and belimumab (targeting B cells), abatacept (inhibition of T cell activation) and eculizumab (interfering in the complement cascade), demonstrating variable efficacies. 2 In spite of these therapies, a subgroup of patients with SLE are refractory to treatment and experience increasing morbidity due to ongoing disease activity and/or drug toxicity. Proteasome inhibitors have been identified as a novel experimental treatment modality based on their mechanisms of action (depletion of long-lived plasma cells and inhibitory effects on critical signalling pathways) and have shown encouraging effects in animal models with lupus-like disease. 3-5 The proteasome inhibitor bortezomib has been approved for the treatment of multiple myeloma and mantle cell lymphoma 6 and has also been successfully applied in a small group of refractory patients with SLE 7 and in two patients with SLE with concomitant multiple myeloma. 8 Subcutaneous administration confers similar area under the curve concentrations as intravenous administration, but with much lower peak levels and, subsequently, a reduction of side effects (e.g., polyneuropathy). Pharmacodynamic monitoring in these trials was performed by measuring the inhibition of the activity of the constitutive proteasome subunit β5 by bortezomib. 9 Autoimmune diseases like SLE, however, are characterised by upregulation of immunoproteasome subunits. 10 11 New assays are now available to measure the specific catalytic activity of the subunits of the immunoproteasome, and these assays could be of value to optimise the dosing of bortezomib in patients with SLE. Here, as a feasibility study, we measured the specific activity of the immunoproteasome subunits β5i and β1i, as well as of the constitutive proteasome subunit β5, in blood cells during bortezomib treatment.

MATERIALS AND METHODS

Catalytic activity of (immuno)proteasome subunits

When feasible, blood samples were drawn prior to bortezomib therapy and 1 and 24 h after bortezomib administration during consecutive cycles of bortezomib. Peripheral blood mononuclear cells (PBMCs) were harvested by Ficoll density gradient centrifugation and stored at −80°C until analysis. The catalytic activity of the constitutive proteasome subunit β5 and of the immunoproteasome subunits β5i and β1i was analysed in cell extracts of PBMCs using specific fluorogenic peptide substrates (Ac-WLA-AMC, Ac-ANW-AMC and Ac-PAL-AMC, respectively), essentially as described previously. 12

PATIENT HISTORY

A 43-year-old male patient of South American origin was diagnosed with SLE in 2009, based on pericarditis, arthritis, lymphadenopathy and positive autoimmune serology (antinuclear, anti-dsDNA, anti-Sm and anti-RNP antibodies). He had a history of persistent disease (arthritis, myopathy and lymphadenopathy) under successive treatments with hydroxychloroquine, azathioprine, rituximab and mycophenolate mofetil (MMF) in combination with prednisolone and courses of methylprednisolone (MPNS). Despite this treatment, he was diagnosed with proliferative lupus nephritis (ISN/RPS class IV-G) in August 2012, for which he was treated with cyclophosphamide according to the 'Eurolupus' regimen and subsequently with MMF. With this treatment, his kidney function recovered and urinalysis normalised. In April 2013, he developed heart failure. A myocardial biopsy showed evidence of myocarditis with abundant infiltration of macrophages next to lymphocytes (figure 1).
He was treated with MPNS and intravenous immunoglobulin. This resulted in a temporary clinical response, but in November 2013 he was admitted to the intensive care unit with respiratory failure caused by heart failure and acute kidney failure, despite maintenance therapy with corticosteroids and MMF. The acute kidney failure was induced by a flare of lupus nephritis (proteinuria increased to 1.6 g/day), probably concomitant with acute tubular necrosis due to heart failure. Treatment included non-invasive ventilation, renal replacement therapy and MPNS. At this stage, experimental therapy with bortezomib (1.3 mg/m2, subcutaneously, days 1, 4, 8 and 11; every three weeks) was started because of the otherwise expected fatal outcome, in combination with plasma exchange during the first two cycles. Bortezomib therapy was well tolerated (except for transient thrombocytopenia during the first cycle) and effective. After three cycles, cardiac function had improved, along with normalisation of anti-dsDNA levels (>400 to 12 IU/mL) and complement C3 and C4 (figure 2). Maintenance therapy consisted of MMF 1000 mg twice daily and low-dose prednisolone. The patient now has a sustained remission for almost 2 years. From blood samples drawn during the first two cycles, pharmacodynamic monitoring of the inhibition of the catalytic activity of the individual proteasome subunits known to be targeted by bortezomib, that is, constitutive β5 and the immunoproteasome subunits β5i and β1i, was assessed (figure 3). In PBMCs, 1 h after bortezomib administration, β5 activity was suppressed (mean 45% compared with untreated controls), but this activity had largely recovered 24 h later. Likewise, immunoproteasome β5i catalytic activity was potently inhibited 1 h after drug administration (mean 73% compared with untreated controls). After 24 h, residual β5i inhibition was 25% compared with untreated controls. Finally, β1i activity was also potently inhibited 1 h after bortezomib administration (mean 74% compared with untreated controls), but remarkably, this inhibition was largely sustained (mean 65% compared with untreated controls) over 24 h.

DISCUSSION

In this study, we describe for the first time the dynamics of immunoproteasome inhibition in a patient with SLE, by assessing bortezomib-induced inhibition in PBMCs of the catalytic activities associated with the β5i and β1i immunoproteasome subunits, next to the β5 constitutive subunit. Consistent with bortezomib being a reversible proteasome inhibitor, 3 the inhibition of β5 and β5i shortly after bortezomib administration was largely relieved after 24 h. Interestingly, the dynamics of β1i catalytic activity showed a different profile. First, inhibition by bortezomib was sustained for >24 h. Second, basal β1i activity appears to decrease during the course of bortezomib treatment. The latter could reflect the loss of immune-competent cells with aberrant β1i activity during treatment. In this regard, Ghannam et al 10 showed that active inflammation in myositis was associated with upregulation of β1i expression. Similarly, Morawietz et al 11 showed that expression of β1i is significantly increased in inflammatory infiltrates of salivary glands in patients with Sjögren's syndrome.
As bortezomib targeting may involve various immune cells (B cells, plasma cells, T cells, macrophages and dendritic cells), 3 4 13 it is conceivable that inhibition of β1i activity therein contributes to bortezomib's therapeutic effect, for example, by induction of apoptosis, suppression of pro-inflammatory cytokine release and/or altered generation of antigenic peptides, with a consequently lower autoimmune response. Together with the first two cycles of bortezomib, our patient received plasma exchanges that could have contributed to his recovery. Although plasma exchange is still recommended in life-threatening SLE disease activity, several studies did not show any benefit of plasma exchange in patients with active lupus nephritis. 14 Moreover, the sustained clinical response makes a plasma exchange-induced improvement less likely. Given the fact that bortezomib therapy showed efficacy by inducing remission of disease activity in several individual cases and large case series of patients with therapy-refractory SLE, 7 8 15 further exploration of proteasome inhibitor-based therapies is warranted. Although bortezomib therapy over three cycles was well tolerated by our patient (except for transient thrombocytopenia), awareness of potential toxic side effects should be considered in case of repeated treatments. 6 Specific adverse events may include peripheral neuropathy, thrombocytopenia, diarrhoea and infectious complications, among which the latter were reported by Alexander and colleagues in a series of 12 patients with SLE treated with bortezomib 7 according to schedules applied for multiple myeloma treatment. 6 15 Assessment of 'molecular therapeutic efficacy' by measuring (immuno)proteasome subunit inhibition, particularly of the β1i subunit, would be helpful to design optimal dosing strategies in future clinical studies with bortezomib, to achieve maximal efficacy and minimal toxicity for patients with refractory SLE.

Figure 3. Bortezomib (BTZ)-induced inhibition of (immuno)proteasome activity in peripheral blood cells of a patient with systemic lupus erythematosus. Catalytic activity of β5, β5i and β1i is depicted at three time points during bortezomib treatment: prior to bortezomib dosing and 1 and 24 h post-bortezomib administration. NA, sample not available.
A Case Report of ChAdOx1 nCoV-19 Corona Virus Recombinant Vaccine-Related Granuloma Annulare

Background: Granuloma annulare (GA) is a benign, self-limiting inflammatory skin condition of unknown origin that may occur following multiple etiological triggers. GA incited secondary to vaccination has been rarely reported in the medical literature. The COVID-19 pandemic has introduced extensive global immunization against the SARS-CoV-2 virus, bringing a gamut of vaccine-related complications. We elucidate a case report of the spontaneous eventuality of GA following the ChAdOx1 nCoV-19 Corona Virus Recombinant Vaccine.

Case Report: A healthy 26-year-old male presented with a one-week history of an asymptomatic single, flesh-pink patch with a raised margin over his left ventral forearm. On close examination, the margin of the lesion had multiple annularly arranged papules. A biopsy of the lesion was done, and histopathology revealed numerous palisading granulomas in the dermis, consistent with findings of localized GA. The patient was managed with once-daily external application of a highly potent topical corticosteroid, which was used intermittently by the patient. However, the lesion showed spontaneous resolution in one month.

Conclusion: Identifying ChAdOx1 nCoV-19 vaccine-related adverse events following its first dose is paramount, as evidence on the proportion of local or systemic severe cutaneous adverse skin reactions (SCARs) on subsequent dosing is scarce. A more extensive systematic review corroborating SCARs and the safety profile following immunization with the ChAdOx1 nCoV-19 vaccine prevails to be the need of the hour.

INTRODUCTION

Granuloma annulare (GA) is a benign, idiopathic, self-limiting inflammatory skin condition commonly reported following trauma, insect bites, viral infections, and malignancy [1,2]. Few cases describe the spontaneous occurrence of granuloma annulare following vaccinations. With the ongoing extensive global immunization program against the SARS-CoV-2 virus, 7.41 billion doses have been administered globally, covering 51.5% of the world population with at least one dose of a COVID-19 vaccine [3]. However, anecdotal reports on the cutaneous adverse reactions following COVID-19 immunization form lacunae in their early diagnosis and active medical management. Here, we describe a case of a GA-like eruption following the ChAdOx1 nCoV-19 Corona Virus Recombinant Vaccine in a young Indian male who has no medical history nor allergies.

CASE REPORT

A 26-year-old male, an otherwise healthy software professional, presented with a one-week history of a single lesion over his left forearm. His skin lesions were neither itchy nor painful. He has no personal or family history of skin diseases or autoimmune conditions. His recent medical history includes the first dose of COVID-19 immunization ten days prior, injected into his left deltoid. Prior to vaccination, he had no symptoms suggestive of COVID-19 disease. Physical examination revealed a solitary, well-demarcated, annular, erythematous plaque with raised margins over the ventral aspect of his left forearm. On closer inspection, the margin of the lesion had multiple flesh-pink pinhead papules, with a regressing pattern towards the center. The patient shared a photograph of the same lesion taken on day 1 of its appearance, showing a smaller, flesh-pink patch with a raised and irregular margin with central hyperpigmentation.
A 4 mm punch biopsy from the lesion revealed discrete areas of well-circumscribed central necrobiotic collagen surrounded by a palisade of histiocytes, multinucleate giant cells and perivascular lymphocytes in the mid-dermis, consistent with necrobiotic palisading granulomas. The patient was diagnosed with localized GA following COVID-19 immunization with the ChAdOx1 nCoV-19 Corona Virus recombinant vaccine. The patient was reassured about the benign nature of the lesion and was initiated on topical mometasone furoate cream, with follow-up after ten days. The patient came for review after one month, with a history of discontinuation of treatment within ten days and partial improvement of the lesion, which eventually resolved spontaneously at 1 month from its first appearance.

DISCUSSION

Granuloma annulare is an idiopathic granulomatous dermatosis that commonly presents as an asymptomatic, self-limiting papular eruption [4]. With a higher incidence in children and young adults, GA is approximately twice as common in females as in males [1,5]. The lesions are usually found over the arms, legs, hands, and feet but may rarely present over the palms, penis, ears and periocular area. The lesions' morphology is in accordance with the clinical subtypes, namely generalized, localized, linear, perforating, and subcutaneous [1,2,4]. Seen in up to 15% of cases [1], generalized GA is described by the presence of 10 or more lesions or widespread plaques [2]. Patients presenting with hundreds of discrete or confluent papules are not uncommon. Lesions of localized GA are common over the dorsum of the hands or feet, arranged in a distinctive annular configuration showing large, slightly erythematous patches with a palpable margin on which scattered papules may subsequently arise. Perforating GA, seen in 5% of GA cases, shows tender, umbilicated lesions in a localized distribution that may rarely be generalized [1,4]. The rarest clinical variant of GA, namely subcutaneous GA, presents with subcutaneous nodules, seen especially in children [1], and has a close clinical resemblance to rheumatoid nodules, although there is no history of arthritis, and serology for rheumatoid factor, antineutrophilic cytoplasmic antibody (ANCA), antinuclear antibodies (ANA) and anti-citrullinated protein (anti-CCP) antibody is normal. The pathogenesis of GA is based on alternating views of immunoglobulin-mediated vasculitis and a delayed-type hypersensitivity response to an unknown antigen [1,6,7]. The cell-mediated immune response appears to be marked by prominent activated helper T cells. The exact mechanism by which GA was triggered in our patient is unknown. Immunological activation following vaccination may explain the presence of activated T cells in the lymphocytic infiltrate in the palisading granulomas. The traumatic inoculation hypothesis is less convincing, as the site of granuloma formation is distant from the injection site [8]. The histopathology of GA is characterized by necrobiosis, granuloma formation and abundant mucin deposition involving the dermis and subcutis. The term 'necrobiosis' is used to describe tissue death and its simultaneous but inadequate replacement by viable tissue. Four distinctive histological patterns are observed in GA, namely the infiltrative (interstitial) pattern, the palisading granuloma pattern, the epithelioid nodule (sarcoidal granuloma) pattern, and a mixed pattern [1]. GA shows a characteristic palisading granuloma, a pattern exemplified by stacked epithelioid histiocytes aligned around a central focus of mucin [2].
In some instances, histiocytes seen as foci within the dermis can be distributed interstitially, as strands, cords, or columns between bundles of collagen in other foci. Synthesis of types I and III collagen also occurs as a reparative response. Necrobiosis lipoidica, a common differential diagnosis of GA, shows pan-dermal inflammation, linear arrays of histiocytes surrounding necrobiotic collagen and abundant plasma cells [9]. The presence of mucin and the absence of asteroid bodies or other giant cell inclusions also make sarcoidosis less likely [2]. The lesions do not display scaling and are not accompanied by vesicles or pustules, which helps distinguish GA from tinea corporis [10]. In addition, hyphae can be visualized in a potassium hydroxide preparation from a suspected lesion of tinea corporis, but not in a lesion of GA [10]. Hansen's disease is less likely in the absence of anaesthesia in the lesion and/or with a normal peripheral nerve examination, especially in regions where leprosy is endemic. Several vaccines have been reported to trigger GA [5]. The Bacillus Calmette-Guérin (BCG) vaccine has been most frequently reported [4], followed by the hepatitis B vaccine, the influenza vaccine, the tetanus and diphtheria-tetanus toxoid vaccines and the pneumococcal vaccine [11]. GA after SARS-CoV-2 vaccination has not been previously described. Most cases of GA following immunization with any of the above have occurred in young patients, probably because the frequency of vaccination is higher at a younger age as part of routine immunization. However, owing to the gravity of the COVID-19 pandemic, there is extensive immunization among the adult and paediatric populations. This expands the opportunity to identify various complications that occur after COVID-19 immunization. A wide spectrum of vaccine formulations in the pipeline against COVID-19 disease are based upon inactivated or live attenuated viruses, protein subunits, virus-like particles (VLPs), viral vectors (replicating and non-replicating), DNA, RNA, nanoparticles, etc., with each exhibiting unique merits and demerits [12]. According to the WHO, a "vaccine must provide a highly favourable benefit-risk contour, with high efficacy, only mild or transient adverse effects and no serious ailments" [13]. The ChAdOx1 nCoV-19 vaccine used by our patient is a recombinant vaccine based on viral vector technology, scheduled as two doses of 0.5 mL injected intramuscularly. The Indian government has recommended that the time interval between the 1st and 2nd doses should be 12-16 weeks. The most commonly reported adverse reactions to COVID-19 vaccinations are usually mild, transient, and widely acceptable over time [14].

CONCLUSION

Vaccine hesitancy and literacy pose significant challenges for the success of the ongoing immunization program. The general public should be aware of the minor side effects, which are manageable with symptomatic treatment. The challenge of meeting the public's expectations towards accepting COVID-19 vaccines is critical to countering this pandemic disease. Undoubtedly, improving the knowledge and skills of health care workers trusted by their communities can be a valuable resource to promote successful vaccination campaigns and improve the overall acceptance of COVID-19 vaccines. The health care workers must engage in the education and motivation of the patients, which makes the latter feel safe, respected, and provided with an opportunity to make informed health-related decisions more accurately.
Future studies systematically reviewing the minor and serious adverse reactions to COVID-19 vaccines would improve vaccine acceptance by expanding the traditional views that weigh the risks and benefits associated with COVID-19 vaccines.

DISCLAIMER

The products used for this research are commonly and predominantly used in our research area and country. There is no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for litigation but for the advancement of knowledge. Also, the research was not funded by the producing company; instead, it was financed by the personal efforts of the authors.

CONSENT AND ETHICAL APPROVAL

As per international or university standards, written ethical approval has been collected and preserved by the author(s). The statements, texts and photographic materials used in this report have been consented to by the patient to be made available in a variety of formats and platforms by the reporting author.
A power-aware task scheduler for energy harvesting-based wearable biomedical systems using snake optimizer

There is an increasing interest in energy harvesting for wearable biomedical devices. This requires power conservation and management to ensure long-term and steady operation. Hence, task scheduling algorithms are used throughout this work to provide a reliable solution that minimizes energy consumption while respecting the system's operating constraints. This study proposes a novel power-aware task scheduler to manage system operations; for example, we used the scheduler to handle system operations that include heart rate and temperature sensors. Two optimization techniques have been used to illustrate the impact of task scheduling on energy consumption. The first is based on the Snake Optimizer (SO), and the second is a greedy approach to compute the Hamming-based Tikhonov regularization. The FPA approach showed a 50% improvement in the convergence time of the scheduler.

Introduction

In recent years, there has been a growing trend in using smart IoT and low-power wearable devices due to their role in promoting the quality of life for people [1-3]. However, rechargeable battery-based wearable devices have a charging and discharging nature which requires repetitive human intervention [4]. Minimizing this frequent intervention by increasing the size of the energy storage element in such devices is inapplicable and hinders a comfortable and user-friendly design [4,5]. Furthermore, batteries represent an added cost to the device and a non-environmentally friendly solution. In order to overcome the limitations of battery-based wearable devices, energy harvesters such as kinetic, solar, and thermal ones have been used to harvest energy for powering up wearable devices [6]. Using energy harvesting does not eliminate the need for batteries but reduces it. This is because current systems are designed to give fixed performance, and they shut off whenever the energy is below a certain level. This requires a fixed and stable energy source, which is not the case for most energy harvesting technologies [6-8]. Energy harvesting technology is widely used in biomedical wearable devices, as the energy can be easily scavenged from the ambient motion (kinetic energy) of the human body [9-11]. Such biomedical devices can be used for continuous monitoring purposes by sensing and acquiring different bio-signals (e.g., heart rate, blood glucose level, temperature, and oxygen saturation of the blood) [12,13]. Energy harvesting is remarkably beneficial in this context, as it provides a simultaneous source of energy from which the transducer can harvest sufficient energy to power the sensors. Therefore, two techniques are utilized for powering, harvest-and-store and harvest-and-use, depending on the body activity, due to the variable nature of the energy gathered from human motion [14,15]. When the kinetic activity of the human body is high, there is sufficient energy to be used for powering purposes and to be stored in a supercapacitor. This stored energy is then used whenever the activity of the human body is low and insufficient to feed the wearable device with the minimal required energy.
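A minimal sketch of how such a harvest-and-store/harvest-and-use policy could be expressed (the threshold and power values are illustrative assumptions, not taken from the paper):

```python
def power_policy(p_harvest_mw, p_load_mw, v_cap, v_cap_min=2.0):
    """Decide how to power the load at one time step.

    p_harvest_mw : instantaneous harvested power (e.g., from a piezoelectric harvester)
    p_load_mw    : power currently demanded by the MCU and sensors
    v_cap        : supercapacitor voltage (state of the energy reservoir)
    v_cap_min    : minimum voltage below which the load must be shed
    """
    if p_harvest_mw >= p_load_mw:
        # harvest-and-use: feed the load directly, store the surplus
        return "use-direct", p_harvest_mw - p_load_mw   # surplus goes to the supercap
    elif v_cap > v_cap_min:
        # harvest-and-store fallback: draw the deficit from the supercap
        return "draw-storage", p_harvest_mw - p_load_mw  # negative: storage drains
    else:
        # reservoir depleted: the scheduler must defer or disable tasks
        return "shed-load", 0.0

print(power_policy(p_harvest_mw=5.0, p_load_mw=3.0, v_cap=2.8))  # ('use-direct', 2.0)
print(power_policy(p_harvest_mw=1.0, p_load_mw=3.0, v_cap=2.8))  # ('draw-storage', -2.0)
```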
Accordingly, energy harvesting-based wearable devices contain a Power Management Unit (PMU) to control and organize the flow of energy throughout the device [16,17]. However, the PMU does not control the application of the system and uses only the microcontroller's built-in features to manage the power consumption. This raises the need for finding a way to correlate the available energy with the system functionality. This means enabling/disabling some functions of the system based on the available energy or, alternatively, adapting the running function and the required time for each function based on the available energy. This will make the system survive severe energy conditions without interruption. Hence, task scheduling algorithms can be used to create a table of the time windows, in terms of location and width, for executing each function in the system based on the available energy [18]. Thus, task scheduling can be considered a beneficial mechanism to manage the execution of various sensing tasks under limited and time-varying harvested energy [14,19]. This can be crucial to maximize the performance in the context of continuous monitoring and to avoid any interruption in the device operation [8]. Three main task scheduling approaches were considered, namely Dynamic Voltage and Frequency Scaling (DVFS), the decomposing and combining of tasks, and duty cycling, to create the optimum order of task operation based on the available energy [14]. The details of these algorithms will be presented in the upcoming section, and then the impact of the order generated by each algorithm on the energy will be illustrated. In this paper, sensor task scheduling is proposed for energy harvesting-based wearable biomedical devices, based on the Snake Optimizer (SO) and a greedy approach to compute Hamming-based Tikhonov regularization that mandates the feasibility of the obtained solution. The remaining parts of the paper are organized as follows. The literature review is presented in Sect. 2. The system description and problem formulation, including the objective function, constraints and the Tikhonov regularization approach, are found in Sect. 3. In Sect. 4, the details of the adopted snake optimization technique are found. The results are presented and discussed in Sect. 5, while the conclusion is summarized in Sect. 6.

Literature survey

This section considers previous studies that focused on providing task scheduling algorithms for energy harvesting-based wearable devices. The sensor nodes consume power depending on the clock frequency and the delivered voltage. Consequently, it is possible to adjust both the supplied voltage and the operating frequency to optimize the consumed power on a real-time basis. Using the concept of dynamic voltage and frequency scaling, the energy consumption can be minimized while considering the performance constraints. The Weather-Conditioned Moving Average (WCMA) is one method used for this kind of task scheduling; traditional intertask scheduling (W-LSA) and task migration have been proposed to enhance performance by proactively balancing the task workload [20]. Liu et al. [18] introduced a scheduling algorithm that estimates the predicted harvested energy and adapts the processing of the task according to it and to the available energy. They also suggested, in another study [21], switching between the direct use of the currently harvested energy and the stored energy while running the sensor nodes, to avoid wasting harvested energy through battery leakage.
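Returning to the voltage/frequency dependence that DVFS exploits, the following is a minimal sketch using the standard CMOS dynamic power model (the capacitance and operating points are illustrative assumptions, not figures from the surveyed papers):

```python
def dynamic_power_mw(c_eff_nf, v_volts, f_mhz):
    """Standard CMOS dynamic power model: P = C_eff * V^2 * f.

    c_eff_nf : effective switched capacitance (nF)
    v_volts  : supply voltage (V)
    f_mhz    : clock frequency (MHz)
    Units: nF * V^2 * MHz = mW.
    """
    return c_eff_nf * v_volts**2 * f_mhz

# A task needing T cycles consumes E = P * (T / f); lowering V and f together
# cuts energy roughly with V^2, at the cost of a longer execution time.
cycles = 2e6
for v, f in [(3.3, 16.0), (2.4, 8.0), (1.8, 4.0)]:
    p = dynamic_power_mw(0.5, v, f)
    t_ms = cycles / (f * 1e6) * 1e3
    print(f"V={v} V, f={f} MHz -> P={p:5.1f} mW, t={t_ms:5.1f} ms, E={p * t_ms:7.1f} uJ")
```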
Another scheduling technique was presented by Liu et al. in [22], where they combined both the concepts of adaptive and static schedules with DVFS to achieve the highest possible performance while considering timing constraints. The algorithm schedules the tasks adaptively whenever there is a prediction of overflowing energy, to optimize the operation by maximizing the benefit from the energy being harvested. Allavena and Mossé proposed a task scheduling technique in [23] which uses DVFS and prioritizes the tasks according to their deadlines. On the other hand, the choice of task execution depends on the available energy: if the current energy is higher than a certain predefined threshold, the task can be executed; otherwise, the task is postponed. For healthcare monitoring, a task scheduler was developed by Ravinagarajan et al. in [24]. In a framework based on DVFS and a linear regression-based algorithm, their scheduler manages both periodic and sporadic tasks. Although the DVFS-based algorithms are efficient with the energy harvesting technology, they may not offer sufficient voltage levels to execute some tasks. In contrast, the task scheduling algorithms that rely on decomposing and combining tasks can decompose the energy-intensive tasks into multiple sub-tasks that demand lower energy during their operation. Zhu et al. in [25] introduced an algorithm based on task decomposition and combining to optimize the energy consumption. The decomposition phase divides an intensive task into two separate sub-tasks of data sensing and transmission when the scavenged energy is limited, while in the combining phase, multiple transmission sub-tasks can be combined by grouping the data and transmitting them in a single data packet to minimize the energy consumption. This concurrent task execution is characterized by relatively reduced delay and smaller latency [14]. By evaluating their task scheduling algorithm, the results showed that it could efficiently utilize the dynamically available energy. More tasks can be executed with fewer missed deadlines, which shows the reliability of the decomposing and combining-based algorithm. In energy harvesting-based devices, duty cycle adjusting mechanisms are commonly used to optimize energy consumption. Duty cycling is mainly based on managing the sleep and awake modes, which reduces the overall energy consumption while maintaining the functionality of sensor nodes [26]. In an energy budget-based duty cycling framework, Kansal et al. [27] introduced a technique where the duty cycle of each sensor node of the system depends on the average harvested energy and the energy consumed in the wake and sleep modes. In their study, the duty cycle adjustment depends on the overall energy consumption, which cannot exceed the average harvested energy. In [28], a mathematical model was described for duty cycling. The model maximizes the system's performance on a real-time basis by employing an exponentially weighted moving average scheme to predict the harvested energy, which helps estimate the duty cycle at which to operate the sensor nodes of the system. In another study [29], the authors employed a dynamically controlled duty cycle to maximize the utilization of the harvested energy, considering duty cycle reduction whenever the harvested energy is limited. Other energy budget-based duty cycling mechanisms were presented in [30] and [31] by Yang et al.
The first study introduced an adaptive sensing scheduling algorithm by dynamically adjusting the sensing rate according to the available energy budget, while an online scheduling policy was suggested in the second study. In case of the unavailability of sufficient harvested energy, estimating or predicting the upcoming energy is beneficial for managing the tasks in terms of execution or delay. Previous related studies [32][33][34][35] showed different methods for predicting the future harvested energy; most of these studies considered the surrounding environment in the energy prediction. System overview In this section, a mathematical formulation of the problem of interest is developed by stating the objective function used in the optimization technique and presenting Tikhonov's regularization approach. In order to profile the problem, it is crucial to mention the components of our adopted energy harvesting-based biomedical wearable device, which represents a test platform for the algorithm. The system consists of the basic essential components that guarantee the functionality: a piezoelectric harvester, a bridge rectifier, a supercapacitor, a processing unit, and sensors. However, not all of these components participate in the problem formulation. The sensors and the supercapacitor are the only components that matter in the problem formulation, which governs the energy consumption (and its corresponding voltage drop across the supercapacitor) based on the activity of the sensors. The proposed algorithm can still be applied to the system even if the number of functions is increased. In this study, two sensors are utilized: a heart rate sensor and a temperature sensor, where the activity of both sensors determines the corresponding voltage drop across the supercapacitor, which acts as the system energy reservoir. The heart rate sensor is selected because it measures a quantity that needs to be monitored continuously; this represents the situation of a high frequency of discharging cycles and tests the algorithm's capabilities in heavy-load applications, while the temperature sensor represents relatively infrequent operations. The temperature nevertheless indicates several biological conditions, such as blood pressure, which makes it an important parameter to measure [36]. We adopted the beat-to-beat optical Pulse Sensor as our heart rate sensor [37] and the non-contact infrared MLX90614 thermometer [38] as the temperature sensor, both connected to the microcontroller [39]. The main objective of using the optimization technique, SO, is to find the best schedule which organizes the operation across the two sensors. Accordingly, before proceeding with the problem formulation, we need to model the activity of the two sensors by representing it in two bits. The MSB (Most Significant Bit) indicates the status of the heart rate sensor, while the LSB (Least Significant Bit) indicates the status of the temperature sensor. If the sensor is ON, the binary value of its corresponding bit is equal to 1; if the sensor is OFF, the binary value of its corresponding bit is equal to 0. We consider a smart wearable system that contains a power-aware task scheduler; thus, by decoding the two binary bits into the corresponding decimal value, a more compact representation is achieved, with each slot state decoded to a decimal value y_k ∈ {0, 1, 2, 3} (e.g., heart rate ON with temperature OFF gives the binary pair 10, i.e., y_k = 2), where y_k is a decision variable which indicates the activity of the two sensors over a specific period divided into N_slots time slots.
The index k refers to the slot number, where k ∈ {1, 2, ..., N_slots}. Accordingly, the desired task schedule is expected to be a sequence of decision variables Y = {y_1, y_2, ..., y_{N_slots}} corresponding to the defined N_slots time slots, as illustrated in Fig. 1. Assumptions and constraints The sensors are used to acquire some human vital signs and then translate them into a meaningful value [40]. Consequently, these sensors should operate on a periodic basis, ensuring valid real-time operation. The frequency of signal acquisition is determined by the type of signal being sensed [40,41]. In general, some body parameters are likely to change more rapidly according to the body's activity and health condition while others are not. For example, the heart rate frequently fluctuates and needs multiple measurements over a short time horizon, while the body temperature can be obtained with a single measurement over a longer time horizon. Hence, neither the reading frequency nor the measurement period is the same for the two sensors, and this must be considered while formulating the problem to maintain the functionality constraints. Thus, the assumed period P_hr upon which the heart rate sensor operates must be less than the assumed period P_t upon which the temperature sensor operates. These operational criteria can be listed as follows: • The heart rate sensor measures the body's heart rate only once during each P_hr period. • The temperature sensor acquires the body temperature only once during each P_t period. • P_hr < P_t. Objective function Maximizing the energy throughout the energy harvesting-based biomedical wearable device is the main target of this work. Consequently, the optimization technique is meant to find the schedule Y which maximizes the energy over N_slots time slots. In order to find the best schedule, we need to calculate the voltage drop across the supercapacitor to determine the effect of the schedule upon the energy consumption. To maximize the energy, we should guarantee that the final voltage across the supercapacitor, V_final, is maximized as well. V_final can be calculated from a dataset we generated using a lab-based biomedical device: starting from an initial voltage, we iterate through the dataset and accumulate the voltage drop across the supercapacitor at the end of each time slot. Moreover, to force the snake optimization technique to eliminate infeasible solutions as much as possible, a regularization approach is chosen for this study. A Hamming-based Tikhonov regularization is adopted to penalize an infeasible solution depending on how much the solution Y violates the earlier-mentioned constraints. Accordingly, the objective function can be expressed mathematically as $\hat{Y} = \arg\max_{Y} \left( V_{final}(Y) - T_{Tikhonov}(Y) \right)$, where $T_{Tikhonov}$ is the regularization term that will be described in depth in the upcoming subsection and $\hat{Y}$ is the optimum schedule. When the search space of the problem is large, regularization techniques such as Tikhonov's are preferably imposed to constrain the possible solution space [45]. The strategy is based on adding a regularization term to the objective function in order to approach a particular solution with desirable properties [8]. This added term enforces the optimization algorithm to exclude the infeasible solutions by adding a penalty. Through this, it is possible to find or approach the desired optimal solution $\hat{Y}$, which satisfies our objective of maximizing the energy.
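As a concrete illustration of the formulation above (and of the greedy regularization detailed in the next subsection), the following Python sketch encodes a two-sensor schedule, builds the nearest feasible per-sensor schedule greedily, and evaluates the penalized fitness. The per-state voltage-drop table, the exact feasibility rule (fire once every P_n slots starting from the first ON slot), and all names are our assumptions for illustration; the paper derives the drops from a lab-generated dataset and defines the greedy step in its Algorithm 1.

```python
# Illustrative per-state voltage drops (volts); the paper instead reads
# these from a dataset generated on a lab-based wearable platform.
DROP = {0: 0.000, 1: 0.002, 2: 0.004, 3: 0.006}

def encode_slot(hr_on, temp_on):
    """MSB = heart-rate sensor, LSB = temperature sensor -> y_k in {0..3}."""
    return (int(hr_on) << 1) | int(temp_on)

def nearest_feasible(bits, period):
    """Greedy nearest feasible schedule for one sensor: fire once every
    `period` slots starting from the first ON slot (assumed rule)."""
    start = next((i for i, b in enumerate(bits) if b), 0)
    return [1 if i >= start and (i - start) % period == 0 else 0
            for i in range(len(bits))]

def hamming(a, b):
    """Number of bit positions where the two vectors differ (XOR + sum)."""
    return sum(x ^ y for x, y in zip(a, b))

def fitness(Y, periods=(3, 5), v_init=3.3, lam=0.6):
    """V_final minus a Hamming-based Tikhonov penalty (squared, L2-style)."""
    v_final = v_init - sum(DROP[y] for y in Y)
    penalty = 0.0
    for sensor, period in enumerate(periods):   # sensor 0 = heart rate (MSB)
        bits = [(y >> (1 - sensor)) & 1 for y in Y]
        penalty += hamming(bits, nearest_feasible(bits, period)) ** 2
    return v_final - lam * penalty

Y = [3, 0, 0, 2, 0, 1, 2, 0, 0, 2]   # a 10-slot candidate schedule
print(encode_slot(True, False))      # -> 2 (heart rate only)
print(f"fitness = {fitness(Y):.3f}") # feasible here, so zero penalty
```

A feasible schedule such as the one above incurs zero penalty, so the optimizer is steered toward schedules that both respect the sensing periods and minimize the accumulated voltage drop.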
In this study, the solution Y can be represented in terms of binary values, as illustrated earlier. Accordingly, we propose a greedy approach to efficiently compute the Tikhonov regularization term based on the Hamming distance. In the context of dealing with binary values (0 or 1), the Hamming distance between two bit-vectors of equal length is the number of bits at which the corresponding elements of the vectors have different values, and it can be represented as follows [46]: $d_H\!\left(Y, Y^{NF}\right) = s\!\left(Y \oplus Y^{NF}\right)$, where $Y^{NF}$ is the nearest feasible solution to solution Y, ⊕ is the XOR operator, and s(·) is a summation operator over the bit vector. However, our energy harvesting-based wearable biomedical system has two sensors whose schedule, solution Y, can be represented as two separate binary vectors. Fig. 1: Illustration of the desired schedule Y over the time slots. Thus, the Hamming distance in this study is computed for the solution of each sensor separately using Algorithm 1, which eventually represents the L2 (2-norm) regularization, and hence $T_{Tikhonov}$ is calculated as follows: $T_{Tikhonov} = \lambda \sum_{n=1}^{N_s} d_H\!\left(Sol_n, Sol_n^{NF}\right)^2$, where λ is the Lagrange multiplier and N_s is the number of sensors, which is equal to two in this study. According to this number of sensors N_s, Algorithm 1 is repeated to compute the required Hamming distance for each sensor solution Sol_n separately along the N_slots time slots. The algorithm first computes the nearest feasible solution Sol_n^{NF} for sensor n and then finds the Hamming distance between it and Sol_n. For computing Sol_n^{NF}, the sensor period P_n is needed along with the slot number S_n at which the sensor first starts to operate. For more elaboration, suppose we have Sol_n = {1, 0, 1, 1, 0, 1}_bin and P_n = 2; once S_n is found, the nearest feasible solution Sol_n^{NF} can be generated as shown in Fig. 2, and hence the Hamming distance can be computed. Since we are dealing with an optimization problem with a relatively large search space, the meta-heuristic is not guaranteed to reach the global maximum. Moreover, the exact computation of the nearest neighbor is an NP-hard problem, so it can explode exponentially in time. Accordingly, we propose an approximate algorithm to compute the nearest feasible solution. Snake optimizer In this work, the snake optimization technique is utilized to find the most feasible task schedule for the above-described problem. The snake optimization algorithm is a recent nature-inspired technique that was introduced in [47]. The inspiration comes from the behavior of snake mating in nature. Snakes are cold-blooded vertebrates belonging to the reptiles, and crucial factors govern the mating between the female and male snakes. The competition among the males to attract the females' attention for mating starts to take place when the temperature is sufficiently low during the late spring and early summer [48]. However, the mating process relies not only on the temperature and the female's decision, but also on the food availability [48]. Accordingly, mating occurs only when the temperature is low and food is available; otherwise, the snakes will only search for food or eat what is already there. By considering this information in the optimization context, two phases in the searching process can be figured out: exploration and exploitation. When the environmental conditions of available food and low temperature are not met, the snakes only go searching for food to survive; this is the exploration phase.
At the same time, the exploitation phase has various transition phases to efficiently reach the global optimum. If the temperature is high but food is available, the snakes care only about eating the existing food. In the case of food availability and a low-temperature environment, the mating process occurs. There are two different cases for the mating process: the fighting and mating modes. In the fight mode, each male will compete for the best female, and each female will attempt to select the best male. On the other hand, in the mating mode, the mating of each pair is related to the quantity of the available food. Before getting more familiar with the mathematics behind the SO technique, it is important to mention that SO, just like all metaheuristic algorithms, starts by generating a random initial population in order to begin the optimization process. Also, this work follows the original parameters of the SO technique presented earlier by Hashim et al. in [47], where the population is assumed to be divided into two groups of equal size: a male group and a female group. The process starts by defining the surrounding temperature T_surr and the food quantity Q_food as decaying functions of the iteration count, $T_{surr} = \exp(-t/t_{max})$ and $Q_{food} = c_1 \exp\!\left((t - t_{max})/t_{max}\right)$ with a constant c_1 as in [47], where t is the current iteration and t_max is the maximum number of iterations. If Q_food is less than a threshold of 0.25, the snakes go into exploration and search for food; this is modeled by a position update toward a randomly selected snake, where X_{i,j} refers to the position of the i-th snake of gender j (male or female), and the update is applied during exploration to both groups, the male group (X_{i,m}) and the female one (X_{i,f}). X_{rand,j} refers to the position of a random snake, r is a random number following a uniform distribution between 0 and 1, and A_j represents the ability of the snake to find food during searching, measured in terms of the fitness f. Furthermore, if Q_food is above the threshold and T_surr is higher than 0.6 (which means the surrounding environment is hot), then no mating occurs and the snakes focus on feeding themselves with the available food; this behavior is expressed as a move toward X_food, the position of the best individuals. Whenever food is still available and the temperature becomes cold enough, dropping below the 0.6 threshold, mating occurs either in the fight mode or the mating mode. During the fighting mode, the fighting abilities of the female and male agents are calculated from the group fitnesses, where f_{best,f} is the fitness of the best agent of the female group, f_{best,m} is the fitness of the best agent of the male group, and f_i is the agent's fitness. Consequently, the fighting mode is modeled in terms of X_{i,m} and X_{i,f}, the i-th male and female positions, and X_{best,m} and X_{best,f}, the positions of the best individual in the male group and the female group, respectively. In the mating mode, if an egg hatches, the worst male X_{worst,m} and the worst female X_{worst,f} are selected and replaced by the new offspring. In terms of the formulated problem, the SO optimization algorithm is used to generate the desired task schedule; a simplified skeleton of this control flow is sketched below. The optimization algorithm starts by setting the parameters and reading the data to work on. The parameters are the initial voltage across the supercapacitor, the number of time slots N_slots, P_hr, P_t, λ, and finally the upper and lower scheduling boundaries.
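To make the control flow concrete, here is a minimal, heavily simplified Python skeleton. The 0.25 food and 0.6 temperature thresholds follow the text, and the exponential decay forms for T_surr and Q_food follow [47]; the position updates, however, are deliberately simplified stand-ins for the full exploration, fighting, and mating equations, and the remaining names and constants are our assumptions. The paper's scheduler additionally works on the binary schedule encoding with the Tikhonov-penalized fitness; this toy maximizes a smooth function instead.

```python
import math
import random

def snake_optimizer(fit, dim, n_agents=20, t_max=100, lo=0.0, hi=1.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_agents)]
    best = max(pop, key=fit)
    for t in range(1, t_max + 1):
        temp = math.exp(-t / t_max)              # surrounding temperature
        q = 0.5 * math.exp((t - t_max) / t_max)  # food quantity (c1 = 0.5)
        for i, x in enumerate(pop):
            if q < 0.25:                 # exploration: follow a random snake
                ref = random.choice(pop)
            elif temp > 0.6:             # hot + food: move toward the food
                ref = best
            else:                        # cold + food: fight/mating, here
                ref = best               # collapsed into a pull to the best
            step = random.uniform(-1.0, 1.0)
            pop[i] = [min(hi, max(lo, xj + step * (rj - xj)))
                      for xj, rj in zip(x, ref)]
        best = max(pop + [best], key=fit)
    return best

# Toy run: maximize a concave bump centred at 0.3 in each dimension.
best = snake_optimizer(lambda x: -sum((v - 0.3) ** 2 for v in x), dim=4)
print([round(v, 3) for v in best])
```

The essential structure carried over to the scheduler is the iteration-dependent gating between exploration and exploitation, with the best schedule carried from one iteration to the next.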
The algorithm iterates for N_itr iterations, updating the best solution and carrying it from one iteration to the next. Results Several experiments are conducted to validate the performance of the snake optimizer technique for sensor task scheduling. Furthermore, the technique's performance is compared with that of the FPA (Flower Pollination Algorithm) [49], which was utilized in a prior study [8] for task scheduling. The implementation of SO, the Hamming distance-based Tikhonov regularization, and all the experiments were done using MATLAB. Firstly, the SO technique is run at different N_slots to examine the effect of changing the search-space complexity on the performance. The other parameters are set as follows: P_hr = 3, P_t = 5, the initial voltage across the supercapacitor = 3.3 V, and λ = 0.6, where λ is considered a tuning parameter employed during regularization. The value of λ is chosen to fit the objective function, which calculates the analog final voltage V_final. It is also noticeable that P_hr is shorter than P_t to meet the problem constraint; however, both values are chosen to be small in order to guarantee the monitoring functionality. Figure 3 shows the convergence plot of the SO technique for finding the optimal schedule Ŷ for the different N_slots. As the objective function maximizes the energy across the supercapacitor, the convergence plot tends to increase during the 500 iterations. Looking at the curves, the SO algorithm shows fast convergence and stagnation (i.e., a no-change stage) for N_slots = 20, while the evolution curve for N_slots = 40 shows faster convergence than that for N_slots = 60. Furthermore, it is noticeable from the figure that stagnation is likely to occur during some iterations, where no better solution is being found. Yet, SO proves its ability to overcome stagnation over time by reaching better solutions. Fig. 3 also shows the potential of the SO technique in dealing with large search spaces to achieve a better schedule when N_slots is 40. Although SO is still not guaranteed to reach the optimal schedule, its behavior reveals its efficiency on the assigned problem. It is also important to study the voltage drop across the supercapacitor that results from the best solution, or best task schedule, found by the optimization technique. This can be done by observing the final voltage versus the different N_slots time slots, as shown in Table 1. It is logical that the final voltage V_final across the supercapacitor decreases as the number of time slots increases, due to the energy demand. From these experiments, it can be concluded that SO shows better optimization performance than the state-of-the-art FPA algorithm. A set of validation experiments is conducted to compare the performance of the FPA and SO optimization algorithms on the same problem. FPA showed in previous related work [8] its reliability in finding feasible schedules for a small number of N_slots; here it is tested along with SO on finding feasible solutions for a relatively larger number of time slots. Our Tikhonov regularization approach is used with both optimization algorithms to maximize the final voltage V_final. Figures 4, 5 and 6 compare the performance of SO and FPA in finding the most feasible schedule for N_slots equal to 25, 50 and 100, respectively. More iterations are involved in these FPA and SO experiments, where N_itr = 1000.
The remaining key parameters stay the same except for P_hr and P_t, which were updated to 6 and 8, respectively. This update was made to enrich the conducted experiments by investigating the performance for less frequent operations than in the previous experiments and in those conducted in [8]. The figures show that the SO algorithm is more reliable for all the N_slots values and can reach more feasible solutions that meet the objective of maximizing the energy. Besides the feasibility, the SO algorithm converges more noticeably than the FPA algorithm over the iterations. As the number of time slots N_slots increases, the search space dramatically increases, making finding feasible solutions more challenging. The voltage drop across the supercapacitor for the best schedules found by both algorithms is shown in Fig. 7. From the figure, it is obvious that the schedules of SO result in less voltage drop than those of FPA for both time-slot settings; for 50 time slots, SO's voltage drop is smaller by around 0.035 volts. In terms of energy management, the schedule that maximizes the final voltage across the supercapacitor is preferable if there are no operational constraints. Yet, a constrained problem is adopted in this approach, where finding a feasible schedule that satisfies the operation constraints is crucial to maintain the functionality. The search space in this study depends on the number of slots N_slots and the number of involved components N_comp; thus, the computational cost is $O(2^{N_{comp} \times N_{slots}})$. Although the search space is considerably challenging, the results of optimizing the energy throughout the wearable system using the SO-based task scheduling approach outperform the state-of-the-art FPA algorithm. Also, the snake optimizer algorithm has proven its potential in tackling the optimization limitations together with the proposed greedy approach for computing the Hamming distance-based Tikhonov regularizing term. Conclusion In this work, two optimization techniques are used in order to create an optimum scheduler for task operation. The main goal of the proposed techniques is to avoid power interruption. A test platform, consisting of a temperature sensor and a heart rate sensor, is used to illustrate the impact of the task scheduling on the battery lifetime. The first algorithm is based on the snake optimization technique, with the functionality constraints enforced through a Hamming-based Tikhonov regularization computed in a greedy approach. The proposed task scheduling technique can be generalized and efficiently maximizes the supercapacitor's stored energy. The second algorithm is the flower pollination algorithm, used as a baseline for maintaining the supercapacitor energy. The capability of generalization of the SO technique makes it suitable for the inclusion of more sensors and modules in future studies. The SO algorithm showed better convergence time than FPA (over 50%) for this problem. Moreover, SO achieved 20% more energy saving than FPA for the same number of iterations. Supplementary information The dataset used in the presented experiments is provided as Data.mat and Data.csv, along with its description in Description.pdf. research. He is currently an assistant professor at Nile University, Egypt. He was a Research Associate and EDA/CAD Specialist with the School of Engineering, Newcastle University, Newcastle upon Tyne, U.K. He was a Teaching Assistant with the Faculty of Engineering, Fayoum University, Fayoum, Egypt, for nine years and an R&D Firmware Engineer for eight years. He was also an R&D Manager for an LED company in Qatar for one year and a half.
His current research interests include smart energy harvesting systems and power management for biomedical implantable devices and lab-on-chip systems. He is also interested in the thermal impact of implantable devices on human tissues, embedded system design for lab-on-chip systems, and the investigation of fractional circuits and systems, specifically fractional-order analog filters for signal processing and fractional-order modelling for biomedical applications. His research aims to establish a new healthcare monitoring system with on-the-fly diagnosis through the development of autonomous devices. He has published more than 80 papers in prestigious journals, with a Google Scholar h-index of 15. He received the best thesis award from Cairo University for 2014 and the State Encouragement Award in Egypt in 2019 for his contributions to the field of biomedical research in Egypt and worldwide. He won a Fellowship from the Royal Academy of Engineering in Leaders in Innovation Fellowships (LIF) and is a fellow of the Higher Education Academy in the UK. M. Saeed Darweesh received his Master's and Ph.D. degrees (with honors) in Electronics and Electrical Communications Engineering from the Faculty of Engineering, Cairo University, Giza, Egypt, in 2013 and 2017. Currently, he is a full-time Associate Professor in the ECE program, School of Engineering and Applied Sciences, Nile University, Egypt. Besides, he is an expert in the Phi Science Institute. He is also a former Adjunct Assistant Professor at the American University in Cairo (AUC), Zewail City (ZC) of Science and Technology, and the Institute of Aviation Engineering and Technology (IAET). He is the IEEE Egypt Young Professionals Chairman for the term 2022-2023 and the IEEE CAS Egypt Chapter Treasurer for 2023-2024. He is a PI of several research projects funded by the Information Technology Industry Development Agency (ITIDA), and a Co-PI and Research Associate in 18 research projects funded by different agencies such as the Science and Technology Development Fund (STDF), the National Telecom Regulatory Authority (NTRA), and the Academy of Scientific Research and Technology (ASRT). He has a solid technical background with a keen interest in machine learning and artificial intelligence. His research interests focus on Narrow-Band IoT, Autonomous Driving Vehicle-to-Vehicle (V2V) Systems, Wireless Communications, Biomedical Engineering (EEG Seizure Detection, Sleepiness Detection using EEG, and Breast Cancer Classification), and Data Compression. He has had research stays at the American University in Cairo (AUC), Zewail City (ZC) of Science and Technology, the Faculty of Engineering, and the National Institute of Laser Enhanced Sciences (NILES), Cairo University. He has worked for several telecom operators and suppliers (Orange, Alcatel-Lucent, and Geniprocess) and has a strong computer networking and security background.
2023-04-01T15:06:39.380Z
2023-03-30T00:00:00.000
{ "year": 2023, "sha1": "21192c18e82d18d327d773b915ba242a7dea7512", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10470-023-02154-y.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "31070d746e17922930cc646a67e0646b5e3c247e", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [] }
234367163
pes2o/s2orc
v3-fos-license
Rupture of an epidural filter connector during bolus administration of local anesthetic: a case report Background Epidural catheters are routinely placed for many surgical procedures and to treat various pain conditions. Known complications arising from epidural catheter equipment malfunction include epidural pump failure, epidural catheter shearing, epidural catheter connector failure, epidural filter connector cracking, and loss-of-resistance syringe malfunction. Practitioners need to be aware of these potentially dangerous complications and take measures to mitigate the chances of causing significant patient harm. We report on the complete breakage of an epidural filter connector during epidural bolus administration of local anesthetic by hand with a syringe. Case presentation A B. Braun Perifix® epidural catheter was placed in a 73-year-old male scheduled for radical prostatectomy. During the operation, a continuous infusion of local anesthetic was administered through the epidural catheter in addition to general endotracheal anesthesia. At the conclusion of surgery and after extubation, the patient endorsed incisional pain. The epidural filter connector broke in half as a bolus of local anesthetic was administered by hand with a syringe. The local anesthetic sprayed widely throughout the room as the fragmented epidural filter connector became a projectile object that recoiled and struck the patient. Conclusions This incident placed the patient and surrounding healthcare providers at substantial risk for injury and infection from the fractured epidural filter connector becoming a projectile object and from the local anesthetic spray. The most plausible cause of this event was from a large amount of pressure being applied to the filter connector. This may have occurred by excessive force being applied by hand to the syringe, by the presence of a clogged filter, or by the catheter being kinked or blocked proximal to the filter. Being aware of this deleterious complication and potentially modifying existing epidural bolus techniques, such as using smaller syringes with less applied force and checking all epidural components vigilantly prior to and during bolus administration, can help anesthesia providers deliver the safest possible care to patients with epidural catheters. Background Epidural catheters are commonly placed for a wide range of surgical procedures and to aid in the management of acute and chronic pain. Complications related to epidural catheter placement such as epidural hematoma, nerve injury, and infection are well known and described in the literature [1]. Complications related to epidural equipment problems or defects are lesser known and limited to case reports in the literature describing epidural pump failure, catheter connector failure, filter connector cracking, epidural catheter shearing and breakage, and loss-of-resistance syringe malfunction [2][3][4][5][6][7]. Epidural boluses of local anesthetic or opioid are routinely administered to patients in order to enhance pain control. This may be accomplished through the epidural catheter connector or filter connector using the bolus feature of an infusion pump or by hand using a syringe. Epidural filter connectors have traditionally been used to decrease bacterial intrusion into the epidural space and to block debris such as glass or plastic from entering [8]. 
We describe the first case reported in the literature in which the epidural filter connector split in two while a bolus of local anesthetic was being administered to the patient by hand. Written HIPAA consent for the publication of this case report was obtained from the patient. This manuscript adheres to the applicable EQUATOR guidelines. Case Presentation A 73-year-old male long-term cigarette smoker with no significant past medical history presented for right nephroureterectomy and radical cystoprostatectomy with ileal conduit diversion for bladder and ureteral cancer. Prior to the induction of general anesthesia, a T10-11 epidural catheter was placed to be utilized intraoperatively and for post-operative pain management. The epidural catheter was placed with components from a B. Braun Perifix® continuous epidural anesthesia tray (B. Braun Medical Inc., Bethlehem, Pennsylvania, USA). A 20-gauge closed-tip epidural catheter was inserted into the epidural space at T10-11. This was accomplished using a loss-of-resistance technique through a midline approach with a 17-gauge Tuohy needle and a Perifix™ plastic luer slip loss-of-resistance syringe. The epidural catheter was inserted five cm into the epidural space and then connected to a clamp-style catheter connector, which was in turn attached to a 0.2 μm filter connector. A test dose of 3mL of 1.5% lidocaine with 1:200,000 epinephrine was administered without significant resistance through the filter connector with a 20mL luer lock plastic syringe. The test dose was negative, general anesthesia was induced, and the procedure began as planned. The anesthetic was maintained using a combination of inhaled sevoflurane through an endotracheal tube and a continuous infusion of 0.0625% bupivacaine through the epidural catheter. A B. Braun Perfusor® Space Infusion Pump was used to deliver the continuous epidural infusion. Infusion rates varied between 2 and 8mL per hour for the duration of the operation, and the high-pressure alarms on the pump never activated during the case. At the conclusion of an uneventful and successful surgical procedure, the patient was extubated smoothly. The patient endorsed incisional pain, and the patient's blood pressure, heart rate, and respiratory rate were all mildly elevated from baseline. The continuous epidural infusion was stopped. A bolus of 5mL of 0.25% bupivacaine with 1:200,000 epinephrine was delivered by hand using a BD 10mL Luer-Lok™ syringe (Becton, Dickinson and Company, Franklin Lakes, New Jersey, USA) via the epidural filter connector. Midway through the hand-delivered bolus, the top of the epidural filter connector popped off from the rest of the mechanism, spraying the contents of the syringe several feet in multiple directions (Fig. 1). The bottom half of the epidural filter connector, which remained connected to the clamp-style catheter connector and the epidural catheter itself, recoiled and struck the patient near his shoulder, which did not cause any immediate noticeable harm. It was estimated that the patient received 2mL of the epidural bolus before the filter connector broke. An additional 3mL was then administered to the patient via the clamp-style catheter connector without significant resistance. The patient was then taken to the post-anesthesia care unit (PACU), where a new filter connector was attached to the clamp-style catheter connector. A BD Alaris™ patient-controlled analgesia (PCA) pump was then used to deliver 0.0625% bupivacaine with 10mcg/mL hydromorphone at 6mL/hour with a 2mL every-20-minutes demand option. Fig. 1: A staged image showing the top of the epidural filter connector separated from the base with surrounding local anesthetic spray. The epidural catheter was removed on post-operative day (POD) 3 as the patient transitioned to oral pain medications. The patient was discharged from the hospital on POD 5 in stable condition. Discussion Epidural catheter and related equipment design and innovation have steadily progressed over many decades [9]. Therefore, complications related to epidural equipment disturbances are uncommon and limited to case reports describing epidural pump failure, catheter connector failure, filter connector cracking, epidural catheter shearing and breakage, and loss-of-resistance syringe malfunction [2][3][4][5][6][7]. To our knowledge, this is the first description in the literature of an epidural filter connector breaking in half during bolus administration of local anesthetic by hand with a syringe. Total volume of local anesthetic and increasing patient age allow for a greater distribution and spread of sensory blockade after epidural injection, while the speed of delivery and the pressure exerted during local anesthetic injection through an epidural catheter have lesser effects [10]. Though it is not our aim or practice, it is possible that too much pressure was exerted on the filter connector as the syringe plunger was depressed by hand during bolus administration. Additionally, syringe size is inversely proportional to the amount of pressure that can be generated when injecting into tissues and cavities such as the epidural space [11]. In our case, we used a 10mL syringe for injection, which would generate less pressure than the 3mL and 5mL syringes that are also commonly used to dose epidural catheters. It may be prudent to use pressure monitoring devices such as the B-Smart™ in-line manometer (B. Braun Medical Inc., Bethlehem, Pennsylvania, USA) or CompuFlo® computerized injection pump technology (Milestone Scientific Inc., Livingston, New Jersey, USA) to avoid high injection pressures, which could lead to both tissue and nerve injury as well as disrupt the integrity of the epidural catheter equipment, as we experienced in our case [12]. Epidural filter connectors are widely used when placing epidural catheters in order to minimize the risk of bacterial contamination of the epidural space [8]. They are also placed to mitigate the risk of particulate debris, such as glass or plastic from syringes, vials, or other epidural equipment, entering the epidural space. Particulate matter entering the epidural space and leading to nerve impingement or injury could happen more easily without a filter connector in place, but has also happened with a filter connector attached [13]. There are no case reports of this happening with modern epidural filters. In our case, it is possible that some debris entered and clogged the filter, leading to increased injection pressures and subsequent rupture of the filter connector. The filter membrane of the connector appeared intact after the apparatus broke, though we cannot be certain that some degree of clogging took place on a microscopic level. It is also possible that the filter connector had a manufacturer's defect which led to a weakening or blocking of the apparatus. This seems to be a rare occurrence, and a thorough review of the literature revealed a single letter correspondence describing a redundant and misaligned filter membrane causing a blocked epidural filter connector [14].
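To put the syringe-size point above in rough numbers, injection pressure for a fixed thumb force scales with the inverse of the plunger cross-sectional area (P = F/A). The short Python sketch below is a back-of-envelope illustration only; the plunger diameters are assumed typical values for these syringe sizes, not measurements from this case, and the 30 N force is likewise an assumption.

```python
import math

def injection_pressure_kpa(force_n: float, plunger_diam_mm: float) -> float:
    """Pressure (kPa) for a given thumb force and plunger diameter."""
    area_m2 = math.pi * (plunger_diam_mm / 2 / 1000) ** 2
    return force_n / area_m2 / 1000.0   # Pa -> kPa

# Assumed, approximate plunger diameters for common syringe sizes.
for size_ml, diam_mm in [(3, 8.6), (5, 12.0), (10, 14.5)]:
    p = injection_pressure_kpa(30.0, diam_mm)
    print(f"{size_ml:>2} mL syringe: ~{p:.0f} kPa at 30 N")
```

Under these assumptions, the same force on a 3 mL syringe produces roughly three times the pressure of a 10 mL syringe, which is consistent with the clinical preference for larger syringes when bolusing by hand.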
We reached out to B. Braun Medical Inc., and the company had never heard of such an event; it also stated that its equipment adheres to the highest standards of manufacturing and quality control checks. The epidural catheter and the clamp-style connector, which are proximal to the filter connector, could have caused an obstruction leading to increased pressure within the system and eventual filter connector breakage. It is unlikely that the clamp-style connector was damaged because, after the filter connector broke, injection of local anesthetic through the clamp-style connector proved to be easy and without significant resistance. Epidural catheters themselves can experience coiling, curling, kinking, knotting, and stretching [15,16]. They can also develop blood clots at the tip [17]. Again, these would be unlikely causes for the filter connector rupture in our case, as injection through the clamp-style connector was smooth. Additionally, when the epidural catheter was removed on POD 3, it was found to be undamaged and completely intact. Another potential cause of proximal obstruction within the components of the epidural system is counterpressure build-up, where pressure builds within the epidural space and injecting becomes more difficult as increased amounts of fluid, such as local anesthetic or saline, are injected [18]. This would occur rarely within the actual epidural space and would be more likely if the epidural catheter were inserted into another tissue plane or muscle. Again, our injection was smooth once the epidural filter connector was removed, making counterpressure build-up a less likely reason for our filter rupture. Another concerning aspect of the epidural filter connector breaking is that the top popped off from the rest of the filter, causing bupivacaine to be sprayed in a wide path. At the time, only two practitioners were near the patient, and both were wearing facemasks and eye protection. The bottom half of the filter connector remained connected to the clamp-style catheter connector, and thus to the epidural catheter itself, as it broke away from the top half of the filter connector. No one was injured as the two parts disconnected, though the patient and the two providers nearby did get mildly saturated with the bupivacaine spray. The providers' eye protection kept any fluid from entering their eyes, and the patient's eyes were closed at the time. The risk of harm from bupivacaine entering a person's eyes is relatively low, but it can be damaging in those with pre-existing eye conditions or if the fluid is sprayed with high force and in large quantities [19]. Additionally, there was a risk of introducing infection to the providers in the room and to the patient from the now-compromised connector and from the local anesthetic spray. Our incident seems most likely related to enough pressure being generated by hand to disrupt the integrity of the filter connector. It is very unlikely that an epidural infusion pump itself could generate enough force to damage a filter connector, and modern pumps stop infusing when pressure limits are reached. This case also raises the question of whether providers should bolus epidural catheters by hand through the clamp connector rather than through the filter connector; a review of the literature does not aid in making this determination. There is some reassurance in knowing that bolusing epidurals by hand with syringes is not a common reason for clamp connector breakage or disconnection [20].
Epidural catheter equipment malfunction is uncommon but has the potential for serious consequences when it does occur. Bolusing an epidural catheter with a syringe by hand could generate enough pressure to disrupt the integrity of the filter connector. A broken filter connector becoming a projectile object as well as spray of local anesthetic could harm the patient and surrounding personnel. Careful consideration should be taken in determining how much force to use when bolusing an epidural catheter with a syringe by hand and whether to administer the local anesthetic through the filter connector itself or through the other connector that attaches directly to the epidural catheter.
2021-05-12T14:07:09.698Z
2021-05-12T00:00:00.000
{ "year": 2021, "sha1": "caddce484df24ccd771f4af7b3f0051bd0d559c9", "oa_license": "CCBY", "oa_url": "https://bmcanesthesiol.biomedcentral.com/track/pdf/10.1186/s12871-021-01372-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "caddce484df24ccd771f4af7b3f0051bd0d559c9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118510559
pes2o/s2orc
v3-fos-license
Orbital diamagnetic susceptibility in excitonic condensation phase We study the orbital diamagnetic susceptibility in the excitonic condensation phase using the mean-field approximation for a two-band model defined on a square lattice. We find that, in semiconductors, the excitonic condensation acquires a finite diamagnetic susceptibility due to the spontaneous hybridization between the valence and the conduction bands, whereas in semimetals, the diamagnetic susceptibility of the normal phase is suppressed by the excitonic condensation. We also study the orbital diamagnetic and Pauli paramagnetic susceptibilities of Ta2NiSe5 using a two-dimensional three-band model and find that the calculated temperature dependence of the magnetic susceptibility is in qualitative agreement with experiment. I. INTRODUCTION The spontaneous pair condensation of electrons and holes (excitons) in semiconductors or semimetals was predicted to occur as an exotic ground state of matter more than half a century ago [1][2][3][4][5][6]. This phase is referred to as the excitonic (condensation) phase (EP). Actual materials realizing the EP are, however, still being searched for because the exciton is charge neutral and, unlike in superconductivity, detecting the pair condensation experimentally is not straightforward. One of the characteristic changes in the electronic structure at the EP transition is the band gap opening in semimetals and the band edge flattening in semiconductors, which angle-resolved photoemission spectroscopy (ARPES) experiments can detect. It was thereby suggested that some materials such as Ta2NiSe5 [7][8][9][10] and 1T-TiSe2 [11][12][13] are actually in the EP. In particular, for Ta2NiSe5, characteristic behaviors of the elastic constant, specific heat, ultrasonic attenuation rate, and NMR relaxation rate [14], as well as the ARPES spectrum [9,10], were discussed in this respect. A possible occurrence of a Fulde-Ferrell-Larkin-Ovchinnikov-type excitonic state in Ta2NiSe5 under high pressures was also discussed [15,16]. Besides these physical quantities, it is known that a strong enhancement of the diamagnetic susceptibility below the excitonic transition temperature is observed in both Ta2NiSe5 [17] and 1T-TiSe2 [18], which suggests that a fundamental relationship may exist between the excitonic condensation and diamagnetism. In this paper, we therefore calculate the orbital diamagnetic susceptibility in the excitonic condensation phase and consider its physical significance, which we hope will shed some light on the excitonic condensation in real materials. The orbital diamagnetic susceptibility in a periodic potential was first formulated by Peierls [19]; his formula was, however, applicable only to a single-band system. Then, after much effort was made to extend the formula to multiband systems, Fukuyama [20] succeeded in generalizing it, writing it in a mathematically compact form. This formula is applicable to tight-binding lattice models as well [21][22][23], and we therefore use it in the present calculations. It was recently pointed out [24,25] that the use of Bloch wave functions, rather than the pure tight-binding lattice model, is important; the significance of this we will, however, leave for future study. Because it is known that the effects of spin fluctuations hardly affect the diamagnetic susceptibility [26], we expect that the formula will also be useful for the EP.
In this paper, we will first introduce a two-orbital square-lattice model with an interorbital Coulomb interaction, which is a minimum model for the excitonic condensation with active spin degrees of freedom. We will then obtain the EP of this model in the mean-field approximation and calculate the orbital diamagnetic susceptibility of this phase. We will thereby show that, in semiconductors, the excitonic condensation acquires a finite diamagnetic susceptibility due to spontaneous hybridization between the valence and the conduction bands, whereas in semimetals, the diamagnetic susceptibility in the normal phase (NP) is suppressed by the excitonic condensation via the band gap opening. We will clarify the origin of these behaviors by a simple model calculation. We will also introduce a two-dimensional three-band model for describing the band structure near the Fermi level of Ta2NiSe5 and calculate the orbital diamagnetic and Pauli paramagnetic susceptibilities of this system, assuming the spin-singlet excitonic condensation. We will show that the temperature dependence of the calculated total magnetic susceptibility is in qualitative agreement with experiment. The rest of this paper is organized as follows. In Sec. II, we present our study of the two-orbital square-lattice model, where we obtain the mean-field solution for the EP of the model, calculate the orbital susceptibility, and discuss its significance to the excitonic condensation. In Sec. III, we present the two-dimensional tight-binding model of Ta2NiSe5 and, assuming the excitonic condensation, we calculate the orbital diamagnetic and Pauli paramagnetic susceptibilities of this system. A summary of the paper is given in Sec. IV. A. Excitonic condensation Let us first introduce a two-orbital model defined on the two-dimensional square lattice [see Fig. 1(a)], where the f and c orbitals form the valence and conduction bands with hopping integrals t_f and t_c, respectively, which are separated by the energy level splitting D. There is no hopping of electrons between the f and the c orbitals, but an interorbital repulsive interaction V acts between two electrons in the f and c orbitals. This is a minimum lattice model for the excitonic condensation. The Hamiltonian consists of a noninteracting part [Eq. (1)] and the interorbital interaction [Eq. (2)], where f_{i,σ} (f†_{j,σ}) is the annihilation (creation) operator of an electron with spin σ in the f orbital at site i and c_{i,σ} (c†_{j,σ}) is that in the c orbital. The symbol ⟨i, j⟩ stands for a nearest-neighbor pair of sites i and j. Defining the order parameter of the spin-singlet EP [Eq. (3)] in terms of the anomalous expectation value ⟨c†_{i,σ} f_{i,σ}⟩, we rewrite Eq. (2) into its mean-field form [Eq. (4)]. Here, we neglect the intraorbital terms containing c†_{i,σ} c_{i,σ} or f†_{i,σ} f_{i,σ} because we do not consider other ordered phases, such as spin-density-wave and charge-density-wave phases, in the present study. Introducing the Fourier transformations of the orbital operators, where N is the number of unit cells, we obtain the mean-field Hamiltonian in momentum space, where a is the lattice constant. Note that the Hartree shift is excluded since we neglect the intraorbital terms. B. Orbital susceptibility Applying a uniform magnetic field perpendicular to the lattice plane, the orbital susceptibility of our system is given by the formula of Ref. [22] or, equivalently, by that of Ref. [23], where $\mu_B = e\hbar/2mc$ is the Bohr magneton, $a_B = \hbar^2/me^2$ is the Bohr radius, and $\mathrm{Ry} = e^2/2a_B$ is the Rydberg constant.
In these formulas, the electric current and stress tensor operators are defined from the Hamiltonian, and G is the temperature Green's function, in which $\omega_m = (2m+1)\pi/\beta$ is the Matsubara frequency with inverse temperature $\beta = 1/k_B T$ and μ is the chemical potential; k_B is the Boltzmann constant. For the k-summation, we divide the Brillouin zone into 400×400 meshes in the mean-field self-consistent calculations and 1000×1000 meshes in the susceptibility calculations. The summation over the Matsubara frequencies is carried out with the usual analytical continuation technique. C. Results for the square-lattice model We assume a particle-hole symmetric situation t = t_f = −t_c and a number of electrons at half filling (two electrons per site), so that we can set the chemical potential to zero. We thus have a direct-gap semiconductor for D/t > 8 and a semimetal for D/t < 8 at V/t = 0. Figure 1(b) shows the calculated phase diagram of our model in the mean-field approximation; this phase diagram is enlarged near the semimetal-semiconductor phase boundary in Fig. 1(c). We find that, in the semimetallic region D/t < 8, the EP persists down to an infinitesimal value of V/t, whereas in the semiconducting region D/t > 8, it vanishes at a finite value of V/t. Thus, a comparatively large value of the interorbital Coulomb interaction is required for the excitonic condensation in the semiconducting region. This result is in apparent contrast to the result of the electron gas model [27,28], where the EP survives well above the semimetal-semiconductor transition point. This contrast may be understood because we assume a constant value of the Coulomb interaction V in the lattice model, whereas in the gas model, the interaction is screened in the semimetallic region but not in the semiconducting region. We then calculate the orbital susceptibility using the formula of Ref. [23], in which n_F(ε) is the Fermi distribution function and n′_F(ε) is its derivative. The calculated results for the temperature dependence of the orbital susceptibility are shown in Fig. 2, where we define the constant $\chi_0 = \mu_B^2 a^4 t N / (6 a_B^4 \mathrm{Ry}^2)$. Let us first discuss the semiconducting case [see Fig. 2(a)], where we assume the parameter values D/t = 8.2 and V/t = 4.5, so that we obtain the transition temperature k_B T_c/t = 0.056. Above the transition temperature, the orbital susceptibility is negative, indicating that the system is diamagnetic. As the temperature decreases, the EP transition occurs, at which the orbital susceptibility shows a kink. Decreasing the temperature further, we find that the diamagnetic susceptibility is much enhanced in the EP compared with the NP. At zero temperature, the orbital susceptibility remains negative (diamagnetic), whereas it vanishes in the NP. Next, let us discuss the semimetallic case [see Fig. 2(b)], where we assume D/t = 6 and V/t = 2.5, so that we obtain k_B T_c/t = 0.065. In the NP, the orbital susceptibility is largely negative (strongly diamagnetic) and almost temperature independent. As the temperature decreases, the orbital susceptibility shows a kink at the excitonic transition, and below the transition temperature, the diamagnetism is slightly weakened in the EP compared with the NP. The orbital susceptibility calculated as a function of D/t at low temperature (k_B T/t = 0.01) is shown in Fig. 2(c), where we find that an essential difference occurs between the semiconducting and the semimetallic regions.
In the semiconducting region D/t > 8, the orbital susceptibility is almost zero in the NP, and the diamagnetism appears only for large values of V/t, for which the system goes into the EP. In the semimetallic region D/t < 8, on the other hand, the orbital susceptibility is largely negative (strongly diamagnetic) already in the NP, and the diamagnetism is weakened when the system goes into the EP with increasing V/t. We note here that exactly the same results for the orbital susceptibility as above are obtained when we assume the spin-triplet excitonic condensation. Also note that no change occurs in our results for the orbital susceptibility in the indirect-gap situation t_f = t_c. Now, let us clarify the origin of the above-discussed behaviors of the orbital susceptibility. To this end, we introduce a hybridization t_cf between the c and the f orbitals artificially, without taking into account the excitonic condensation; i.e., we add a c-f hybridization term of the form $t_{cf} \sum_{i,\sigma} \big( c^{\dagger}_{i,\sigma} f_{i,\sigma} + \mathrm{H.c.} \big)$ [Eq. (14)] to the noninteracting Hamiltonian [Eq. (1)], but we neglect the interaction term [Eq. (2)]. The model remains electron-hole symmetric. We thus calculate the orbital susceptibility using the formula given in Sec. II B. First, we consider the semiconducting case (D/t = 8.2) in the absence of the c-f hybridization, t_cf/t = 0. The orbital susceptibility calculated as a function of the chemical potential μ/t is shown in Fig. 3(a). Here, the c and f orbitals are completely independent, so it is clear that the orbital susceptibility vanishes when μ/t is in the semiconducting gap; i.e., the electrons in the filled f band cannot move under an infinitesimal magnetic field. The peaks at μ/t = ±4.1 are due to the van Hove singularity of the present model. Now, introducing a finite value of t_cf, we find that the system acquires a diamagnetic susceptibility even at μ/t = 0 [see Fig. 3(b)], where the band gap remains open. This result may be understood because the electrons in the filled valence band become mobile via the electron hopping t_cf (or c-f hybridization) under an infinitesimal magnetic field. In Fig. 3(c), we show the t_cf dependence of the orbital susceptibility in the semiconducting case. We find that the system acquires diamagnetism with increasing t_cf, as discussed above. However, the diamagnetism is weakened again for very large values of t_cf because the band gap in this situation becomes too large for the electrons to move easily. In the semimetallic case, for which the result is shown in Fig. 3(d), we find that the orbital susceptibility, which is largely negative even at t_cf = 0 as discussed above, is suppressed with increasing t_cf. This is because the band gap opens at t_cf > 0 and the gap size increases with increasing t_cf, so that the electrons become less mobile. The same discussion can be applied to the interpretation of the orbital susceptibility calculated in the EP because the essential feature of the excitonic condensation is the spontaneous hybridization between the valence and the conduction bands. Namely, the excitonic order parameter in Eq. (4) plays exactly the same role as t_cf in Eq. (14). III. MAGNETIC SUSCEPTIBILITY OF Ta2NiSe5 Let us apply our theory to Ta2NiSe5, which is a candidate material for the spin-singlet excitonic condensation.
Because the orbital susceptibility calculation requires a system of more than one dimension, we extend the one-dimensional three-chain model proposed in [9,14] to a two-dimensional one, where the interchain hopping parameters are introduced as shown in Fig. 4(a). The noninteracting tight-binding Hamiltonian contains the intrachain hopping terms along the Ta and Ni chains together with interchain terms such as $\sum_{j,\sigma} \big( c^{\dagger}_{R_j+a_2,2,\sigma} c_{j,1,\sigma} + c^{\dagger}_{R_j+a_2-a_1,2,\sigma} c_{j,1,\sigma} + \mathrm{H.c.} \big)$, where t_c is the hopping integral along the Ta chains, t_f is that along the Ni chains, t_cc1 and t_cc2 are the interchain hopping integrals between the Ta chains, and t_ff is the interchain hopping integral between the Ni chains. These are illustrated in Fig. 4(a). The annihilation operator of an electron with spin σ in the α-th Ta chain is written as c_{j,α,σ} or c_{R_j,α,σ}, and that in the Ni chain as f_{j,σ} or f_{R_j,σ}, where R_j is the position of the j-th unit cell. From the band structure calculation [9], we set t_c = −0.8, t_f = 0.4, t_cc1 = −0.02, t_cc2 = −0.1, t_ff = 0.01, and ε_c − ε_f = 2.95 in units of eV. The band dispersions of this model are shown in Fig. 4(b). The primitive cell vectors are given by a_1 = (a, 0) and a_2 = (−a/2, b), where a = 3.496 Å and b = 7.820 Å are estimated from experiment [29]. Note that the doubly degenerate conduction bands of the three-chain model [9] split into two due to the interchain hopping integral between the Ta chains. We also include the intersite repulsion term between the Ni and Ta ions, just as in the three-chain model [9,14], defined in terms of the number operators $n^c_{j,\alpha,\sigma} = c^{\dagger}_{j,\alpha,\sigma} c_{j,\alpha,\sigma}$ and $n^f_{j,\sigma} = f^{\dagger}_{j,\sigma} f_{j,\sigma}$, where V is the strength of this interaction. The intrasite repulsion terms on the Ni and Ta ions are neglected for the same reasons given in Sec. II A. An electron-phonon coupling term is required to explain the lattice distortion that occurs at the EP transition in Ta2NiSe5. However, because the mean-field Hamiltonian of the system is written in terms of a sum of the order parameter of the lattice distortion and the order parameter of the excitonic condensation, as is evident in Ref. [9], the electron-phonon coupling term contributes to the orbital susceptibility just as the V term does. This means that the two contributions to the orbital susceptibility cannot be distinguished within the framework of the theory, so hereafter we refer only to the V term as a representative of both the V term and the electron-phonon coupling term. Then, defining the excitonic order parameter analogously to Sec. II A, we apply the mean-field approximation to Eq. (16), just as in Eq. (4), and solve the gap equation self-consistently. We thus find a transition temperature of 605 K at V = 0.9 eV, which is considerably larger than the experimental value of 328 K [17]; the discrepancy may be attributed to the mean-field approximation ignoring quantum fluctuations. We then apply the formula discussed in Sec. II B and calculate the orbital susceptibility of this phase. The Pauli paramagnetic susceptibility, which may be affected by the excitonic condensation and can have a strong temperature dependence, may also contribute to the temperature dependence of the magnetic susceptibility of Ta2NiSe5. We therefore calculate the spin susceptibility as well, which takes the Pauli form $\chi_{spin} = -\mu_B^2 \sum_{k,\epsilon,\sigma} n'_F(E_{k,\epsilon,\sigma})$, where E_{k,ε,σ} is the eigenenergy of the gap equation with wave vector k, band ε, and spin σ [30].
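As a numerical illustration of this Pauli term, the short Python sketch below evaluates the sum of −∂n_F/∂E over a toy pair of gapped one-dimensional bands standing in for the mean-field eigenenergies E_{k,ε,σ}. The gap value, the dispersion, and the units are our assumptions, not the paper's three-band solution; the sketch only reproduces the qualitative point that a gapped spectrum suppresses χ_spin rapidly as the temperature decreases.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant (eV/K)

def chi_spin(energies, temperature_k, mu=0.0):
    """Average of -dn_F/dE over the supplied eigenenergies.

    -dn_F/dE = beta / (4 cosh^2(beta (E - mu) / 2)); the mu_B^2 prefactor
    is dropped, so units here are schematic (per eV).
    """
    beta = 1.0 / (K_B * temperature_k)
    x = np.clip(beta * (energies - mu) / 2, -300, 300)  # avoid overflow
    return np.mean(beta / (4.0 * np.cosh(x) ** 2))

# Toy gapped bands E = +/- sqrt(eps_k^2 + Delta^2) standing in for the
# mean-field eigenenergies; Delta = 0.16 eV and the dispersion are assumed.
k = np.linspace(-np.pi, np.pi, 2000)
eps_k = 0.8 * np.cos(k)
bands = np.concatenate([+np.sqrt(eps_k**2 + 0.16**2),
                        -np.sqrt(eps_k**2 + 0.16**2)])

for T in (600, 400, 200, 100):
    print(f"T = {T:3d} K: chi_spin ~ {chi_spin(bands, T):.4g}")
```

In the actual mean-field calculation the gap itself grows below T_c, which steepens the drop of χ_spin relative to this fixed-gap toy.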
Figure 5 shows the calculated temperature dependence of the orbital susceptibility χ_orb, the spin susceptibility χ_spin, and the total susceptibility χ_tot = χ_orb + χ_spin in the EP, where we use the two-dimensional three-band model discussed above. We find that the orbital susceptibility is diamagnetic and has the typical temperature dependence of the semiconducting phase discussed in Sec. II C, because this system is a direct-gap semiconductor. However, the contribution of the orbital susceptibility is rather small. This is because the system is quasi-one-dimensional, so that the component coming from the electric current perpendicular to the chains is much smaller than the component coming from the electric current parallel to the chains. The spin susceptibility, on the other hand, has large values at high temperatures and decreases rapidly below the excitonic transition temperature. We do not include the large diamagnetic contribution from the core electrons or the Van Vleck susceptibility, both of which are important in the total magnetic susceptibility [31]; however, these contributions are almost temperature independent and merely produce a uniform negative shift of the total magnetic susceptibility. Our result indicates that the spin and orbital susceptibilities cooperatively enhance the diamagnetism in the EP of Ta₂NiSe₅, which is qualitatively consistent with experiment [17]. Effects of electron correlations, as well as recent developments in the theory of diamagnetism [24,25], which are neglected in the present calculations, should be taken into account for more quantitative discussions, but we believe that the essential features of the temperature dependence of the magnetic susceptibility of Ta₂NiSe₅, assuming the excitonic condensation, are captured by the present calculations.

IV. SUMMARY

We have studied the orbital diamagnetic susceptibility in the excitonic condensation phase using the mean-field approximation for interacting tight-binding lattice models. We calculated the orbital susceptibility for the two-band model defined on the square lattice and found that, in semiconductors, the excitonic condensation produces a finite diamagnetic susceptibility, whereas in semimetals, the diamagnetic susceptibility of the NP is suppressed by the excitonic condensation. We showed that these results can be interpreted in terms of the hybridization between the valence and the conduction bands: in semiconductors, the electrons in the valence band become mobile via the spontaneous hybridization with the conduction band, so that the system acquires the diamagnetism when the excitonic condensation occurs, whereas in semimetals, the system is diamagnetic already in the NP, and the spontaneous hybridization between the valence and the conduction bands opens a band gap, which suppresses the diamagnetism. We also studied the orbital diamagnetic and Pauli paramagnetic susceptibilities of Ta₂NiSe₅ using the two-dimensional three-band model and found that the spin and orbital susceptibilities cooperatively lead to the rapid decrease in the magnetic susceptibility at the excitonic condensation, in qualitative agreement with experiment.
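As a closing illustration of the temperature dependence discussed above, the following sketch evaluates the Pauli-type expression χ_spin ∝ Σ_k (−∂f/∂E_k) for a band that acquires a mean-field gap below a transition temperature. The dispersion and the BCS-like interpolation Δ(T) = Δ₀√(1 − T/T_c) are assumptions chosen only to reproduce the qualitative drop of χ_spin below T_c; they are not the paper's self-consistent solution.

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def chi_spin(T, Tc=328.0, delta0=0.15, nk=4001):
    """Pauli-type spin susceptibility (arbitrary units) of a toy 1D band
    xi(k) = -0.8 cos(k) eV that acquires a gap Delta(T) below Tc."""
    k = np.linspace(-np.pi, np.pi, nk)
    xi = -0.8 * np.cos(k)
    delta = delta0 * np.sqrt(max(1.0 - T / Tc, 0.0))  # assumed BCS-like gap
    E = np.sign(xi) * np.sqrt(xi**2 + delta**2)       # gapped quasiparticles
    # -df/dE = 1 / (4 kB T cosh^2(E / 2 kB T))
    return np.mean(1.0 / (4 * kB * T * np.cosh(E / (2 * kB * T))**2))

for T in [600, 450, 328, 250, 150, 50]:
    print(f"T = {T:4d} K:  chi_spin ~ {chi_spin(T):.4f}")
# Above Tc the susceptibility is roughly constant (Pauli behavior); once the
# gap opens it is suppressed exponentially, mimicking the rapid decrease of
# chi_spin below the excitonic transition described in the text.
```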
New Method Based on the Direct Analysis in Real Time Coupled with Time-of-Flight Mass Spectrometry to Investigate the Thermal Depolymerization of Poly(methyl methacrylate)

In this work, the isothermal decomposition of poly(methyl methacrylate) synthesized in bulk by the radical polymerization of methyl methacrylate in the presence of azobisisobutyronitrile as the initiator was carried out and monitored for the first time with the DART-ToF-MS technique at different temperatures. Nuclear magnetic resonance (NMR) analysis revealed a predominantly atactic microstructure, and size-exclusion chromatography (SEC) analysis indicated a number-average molecular weight of 3 × 10⁵ g·mol⁻¹ and a polydispersity index of 2.47 for this polymer. Non-isothermal decomposition of this polymer carried out with thermogravimetric analysis (TGA) showed that the weight-loss process occurs in two steps: the first starts at approximately 224 °C and the second at 320 °C. The isothermal decomposition of this polymer carried out and monitored with the DART-ToF-MS method revealed only one stage of weight loss in this process, which begins at approximately 250 °C, not far from the second step observed in the non-isothermal process conducted with the TGA method. The results obtained with the MS part of this technique revealed that the isothermal decomposition of this polymer regenerates a significant part of the methyl methacrylate monomer, which increases with temperature. This process involves radical chain reactions leading to homolytic chain scissions and to the formation of secondary and tertiary alkyl radicals, mainly regenerating methyl methacrylate monomer through an unzipping rearrangement. Although they are in the minority, other fragments, such as the isomers of 2-methyl carboxyl, 4-methyl, penta-2,4-diene and dimethyl carbate, are also among the products detected. At 200 °C, no trace of monomer was observed, which coincides with the first step of the weight loss observed in the TGA. These compounds are different from those reported by other researchers using TGA coupled with mass spectrometry, in which methyl isobutyrate and traces of methyl pyruvate and 2,3-butanedione were detected.

Introduction

Poly(methyl methacrylate) (PMMA) is an essential material in the industrial field. This polymer is the source of several applications, such as optic fibers for the transmission of light, plastic glasses, contact lenses, materials for dental prostheses, signs and displays, and billboard liquid crystal displays (LCDs) [1-4]. The wide applicability of this polymer in different sectors of industry is due to its exceptional properties, such as transparency, toughness, a glass-like refractive index, non-toxicity and resistance to impact and to most chemicals. Moreover, its production is expected to increase over the next few years and reach 8.16 billion USD by 2025. This will put pressure on its average price, which is already high compared to other traditional polymers, and lead to growing concerns about the waste management of this material [5,6]. The early studies of the bulk thermal depolymerization of PMMA date from the end of the 1940s. Indeed, several researchers [7-10] at the time investigated this important aspect of polymers in order to understand the mechanisms involved in such a process. The thermal depolymerization of PMMA was also carried out in solution in diphenyl ether, 1,2,4-trichlorobenzene and α-methylnaphthalene.
Some experiments conducted by Bywater and Black [9] suggested that the mechanism of this degradation involved end-of-chain initiation, depropagation and chain breaking by solvent transfer. These processes were observed in all the studied solvents. The degradation by chain transfer was increasingly efficient in the series trichlorobenzene < diphenyl ether < α-methylnaphthalene, while the chain-initiation rate was almost the same in all solvents. Since then, several works on this subject have been carried out. The decomposition of PMMA in all its microstructures has been widely investigated with thermogravimetric analysis (TGA). According to different studies, the non-isothermal decomposition of this polymer depends mainly on its tacticity, its molecular weight and the initiator involved in the polymerization. Indeed, Jellinek and Luh [11] investigated the thermal decomposition of isotactic and syndiotactic PMMA with TGA in a closed system over a range of temperatures between 300 °C and 400 °C in an inert atmosphere. These authors reported that the initial zip lengths of the isotactic polymers were approximately ten to twenty times longer than those of the syndiotactic microstructures. The depolymerization characteristics of the isotactic polymers showed a kinetic chain length smaller than the length of the polymer chains, whereas those of the syndiotactic microstructures exhibited a kinetic chain length of the same order of magnitude as the length of the polymer chain. Regarding the effect of molecular weight on the decomposition of this polymer, Ferriol et al. [12] investigated the degradation of two PMMA samples, synthesized by a free-radical polymerization route, with molecular masses of 350,000 and 996,000 g·mol⁻¹. The TGA thermogram obtained showed three weight-loss stages for the PMMA of 3.5 × 10⁵ g·mol⁻¹, while its DTG curve revealed four decomposition steps. A similar behavior was also reported by Manring et al. [13]. In the same area, Chen [14] reported a study on the depolymerization of PMMA of different molecular masses in toluene, ethyl acetate and chloroform; the results revealed that the polymer of 2.2 × 10⁵ g·mol⁻¹ underwent degradation, while the low-molecular-weight sample showed a depolymerization process. Kashiwagi et al. [15] established that the thermal decomposition of PMMA prepared by a free-radical route takes place in three steps. The least stable step (approximately 165 °C) would be initiated by head-to-head (H-H) bond scissions, the H-H bond dissociation energy being estimated to be lower than that of a backbone C-C bond due to the large steric hindrance and the inductive effect of the vicinal ester groups. The second step, which occurs at approximately 270 °C, corresponds to the chain scission of the terminal unsaturated groups (resulting from termination by disproportionation), involving homolytic β-scission at the vinyl group. The last step (approximately 350 °C) is initiated by random scission within the polymer chain. The effect of terminal groups, i.e. of the nature of the initiator used in the preparation of PMMA, on its thermal decomposition was studied by Li and Ren [16]. They revealed that, when a thiol was used as an initiator, the thermal degradation of the resulting polymer led mainly to the monomer. The mechanism suggested by these authors involved only the scission of the main chains. Grassie et al.
[17] studied the non-isothermal decomposition of polydisperse PMMA with molecular weights ranging between 3.6 × 10⁴ and 1.79 × 10⁵ g·mol⁻¹, obtained by a free-radical polymerization route using different initiators. They concluded that the nature of the terminal group has an effect on the rate of degradation, such that the introduction of 1,4-diaminoanthraquinone terminal groups prevented the degradation of PMMA at 220 °C. Similar results were also described by Madorsky [18], who found that PMMA initiated with benzoyl peroxide was thermally much less stable than PMMA polymerized thermally in the absence of an initiator: PMMA-benzoyl peroxide started to degrade slowly at 240 °C, while thermally polymerized PMMA degraded at a similar rate at 310 °C. Brockhaus and Jenckel [19] investigated the effect of molecular weight on the depolymerization of PMMA prepared by free-radical polymerization initiated with benzoyl peroxide, at temperatures in the range between 250 and 350 °C. These authors deduced that two reactions occur during the decomposition process: one initiated at the unsaturated and the other at the saturated chain ends. The non-isothermal decomposition of PMMA of 9.96 × 10⁵ g·mol⁻¹, synthesized by free-radical polymerization, was studied by Peterson et al. [20] using TGA. The weight loss was found to occur stepwise, beginning at 150 °C before slowing down momentarily at 300 °C (with a 40 wt% loss) and continuing in a second degradation step. These results seem to be consistent with those obtained by Grassie [17] and Madorsky [18] for PMMA initiated with benzoyl peroxide. This two-step decomposition does not agree with the results obtained by Holland and Hay [21] in the non-isothermal mode for commercial PMMA synthesized via free radicals, which showed a single one-step decomposition starting at 290 °C. Recently, Godiya et al. [22] used thermogravimetric analysis coupled with mass spectrometry (TGA-MS) to study the depolymerization of PMMA. The results obtained revealed that, in this process, the polymer chains decomposed mainly into methyl methacrylate, along with a few non-polymerizable species that prevented the re-polymerization of the recovered monomer. This study stated that, besides the main by-product (methyl isobutyrate), traces of methyl pyruvate and 2,3-butanedione were also formed during the thermal depolymerization of PMMA. The 2,3-butanedione formed was found to be responsible for the unpleasant odor of the recovered MMA. To optimize the recovery of methyl methacrylate (MMA) from the depolymerization of poly(methyl methacrylate) (PMMA) dental resin fragments/residues, in order to pilot the experiments at technical scale, Bisi dos Santos et al. [23] used thermogravimetric analysis (TG/DTG/DTA). The liquid-phase products obtained at 420 °C were subjected to fractional distillation. The results revealed that the depolymerization of PMMA dental resin waste led to methyl methacrylate with concentrations varying between 83.454 and 98.975%. This study also showed that the optimum operating conditions to achieve high MMA concentrations, as well as elevated yields of liquid reaction products, were 345 °C and 80 min. The influence of temperature on the recovery and purity of methyl methacrylate (MMA) obtained by depolymerization of poly(methyl methacrylate) (PMMA) dental resin scraps was investigated by Ferreira et al. [24].
The GC-MS analysis identified methyl methacrylate (MMA) and ethylene glycol dimethacrylate (EGDMA). Methyl methacrylate concentrations varied between 94.20 and 95.66%, showing a moderate increase with rising depolymerization temperature. In our literature review, we did not find any detailed article dealing with the isothermal decomposition of PMMA and the products resulting from this process. In the present investigation, our objective was to study the isothermal decomposition of PMMA using, for the first time, a new technique based on direct analysis in real time coupled with time-of-flight mass spectrometry (DART-ToF-MS), the results being compared with those obtained in a non-isothermal process carried out with TGA. To achieve this goal, PMMA was synthesized by the bulk polymerization of methyl methacrylate using AIBN as the initiator. The structure and tacticity of the prepared polymer were characterized with nuclear magnetic resonance (NMR), and the average molecular weights were determined with size-exclusion chromatography (SEC). The isothermal decomposition of the synthesized polymer was investigated with DART-ToF-MS at temperatures ranging between 200 and 550 °C, while the non-isothermal decomposition of the same polymer was carried out with the TGA method from 25 to 600 °C. DSC analysis was also employed in this work to characterize the different transitions of the synthesized PMMA, notably that of the first mass loss observed on the TGA thermogram, reported in the literature between 150 and 270 °C. The main products, or their isomers, generated by the isothermal decomposition of PMMA were elucidated in the MS part of the present study.

Preparation of Poly(methyl methacrylate)

PMMA was prepared by a free-radical bulk polymerization of MMA in the presence of AIBN as the initiator at 70 °C. A total of 30 g (0.3 mol) of MMA was introduced into a three-necked round-bottom flask containing 11.5 mg (0.07 mmol) of AIBN and fitted with a condenser on its main neck. The condenser was connected at its top opening to a bubbler containing silicone oil. The monomer was introduced through one of the two side necks, while nitrogen gas flowed through the other at a rate of 3 mL·min⁻¹, producing bubbles that expelled the air from the reactor and ensured the homogenization of the reaction mixture and of the reaction temperature. A very viscous solution was obtained after approximately 20 min of reaction. The reactor was then allowed to cool in air to ambient temperature (~25 °C), and the polymer obtained was isolated by precipitation in n-heptane. To eliminate the residual monomer and oligomers, the collected PMMA was purified three times by dissolution in THF followed by precipitation in heptane. The polymer was dried in open air for 12 h and then under vacuum at 50 °C for 24 h. The average percentage yield of the polymerization was 75.22 wt%. This procedure was performed in triplicate under the same conditions, and the results of the polymerization were taken as the arithmetic mean of the three trials. The apparent molecular weight, M, estimated from the percentage yield of the polymerization through Equation (1) [25], was 1.61 × 10⁵ g·mol⁻¹:

M = (R × n_MMA × M_MMA) / (2 × n_AIBN)    (1)

where R is the fractional yield of the polymerization, n_MMA and n_AIBN are the mole numbers of MMA and AIBN, respectively, M_MMA is the molar mass of MMA, and 2 is the number of free radicals resulting from the dissociation of AIBN.
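As a quick numerical check of Equation (1) with the quantities stated above (30 g of MMA, 11.5 mg of AIBN, 75.22 wt% yield), the following short sketch reproduces the reported apparent molecular weight; the molar masses used are standard values.

```python
# Check of Equation (1): M = R * n_MMA * M_MMA / (2 * n_AIBN)
M_MMA = 100.12            # g/mol, methyl methacrylate
M_AIBN = 164.21           # g/mol, azobisisobutyronitrile
n_MMA = 30.0 / M_MMA      # mol (~0.3 mol, as stated)
n_AIBN = 0.0115 / M_AIBN  # mol (~0.07 mmol, as stated)
R = 0.7522                # fractional yield (75.22 wt%)

M = R * n_MMA * M_MMA / (2 * n_AIBN)
print(f"n_AIBN = {n_AIBN*1e3:.3f} mmol")  # ~0.070 mmol
print(f"M      = {M:.3e} g/mol")          # ~1.61e5 g/mol, as reported
```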
NMR Analysis

The structure and the tacticity of the polymer obtained were examined with ¹H NMR and ¹³C NMR in CDCl₃ at room temperature (~25 °C) using a JEOL FX 90 Q NMR apparatus at 500 and 200 MHz, respectively.

SEC Analysis

The average molecular weights of the prepared PMMA were estimated in THF at 30 °C with size-exclusion chromatography (SEC) on a Varian apparatus. This instrument is equipped with a JASCO 880-PU HPLC pump operated at a flow rate of 1.0 mL·min⁻¹, refractive index and UV detectors, and Shodex GPC KF-806 M (8.0 mm I.D. × 300 mm) columns calibrated with polystyrene standards. The results obtained indicated 2.30 × 10⁵ g·mol⁻¹, 5.67 × 10⁵ g·mol⁻¹ and 2.47 for M_n, M_w and the polydispersity index (I), respectively. The average degree of polymerization (Dp) of the prepared PMMA, which is by definition equal to the average molecular weight of the polymer divided by the molar mass of the monomer unit,

Dp = M_n / M_MMA    (2)

calculated from Equation (2) [26], was 2297.

TGA Analysis

The TGA of PMMA was performed under dynamic nitrogen gas on a TGA/DSC Mettler-Toledo thermogravimeter (Columbus, OH, USA). Samples weighing between 10 and 14 mg were loaded into the TGA aluminum pan and then heated from 25 °C to 600 °C at a heating rate of 50 °C·min⁻¹.

DSC Analysis

The DSC thermogram of PMMA was traced using a DSC device (Shimadzu DSC-60, Kyoto, Japan) previously calibrated with indium. Between 10 and 12 mg of sample were packed in aluminum DSC pans before being placed in the DSC cell. The samples were scanned by heating from 30 to 280 °C at a heating rate of 10 °C·min⁻¹. The value of the glass transition temperature (Tg) was taken from the inflection point of the thermal curves.

DART-ToF-MS Analysis

The isothermal decomposition of PMMA was analyzed with an AccuTOF LC-plus JMS-T100LP mass spectrometer (JEOL, Tokyo, Japan) working with a direct-analysis-in-real-time (DART) ion source (IonSense, Saugus, MA, USA). The sample was used without any prior preparation. The fragments resulting from the decomposition process were evaporated in a stream of helium heated at constant temperatures of 200, 250, 300, 350, 400, 450, 500 and 550 °C. The heated helium/vapor mixture was then ionized by excited metastable helium atoms before entering the ion source of the time-of-flight mass spectrometer. The experimental conditions were as follows: vacuum level of 1.3 × 10⁵ Pa, He used as the heating and ionization gas, ring lens voltage of 4 V, peak voltage of 500 V, and a mass resolution ranging between 3600 and 4900. The sample (1-2 mg), in powder form, was placed on a small piece of sandpaper and introduced between the DART outlet and the MS inlet port; polyethylene glycol (10⁴ g·mol⁻¹) was used for calibration.

Results and Discussion

The characterization of PMMA was carried out on the three replicas, and the data reported are their arithmetic means.

NMR Analysis

The structure and the microstructure of the synthesized PMMA were characterized with ¹H and ¹³C NMR, and Figures 1 and 2 show the different signals of the protons and of carbon-13, respectively. As can be seen from the ¹H NMR spectrum, the characteristic peaks centered at 0.92, 1.78 and 3.51 ppm are attributed to the protons of the α-methyl (a), methylene (b) and ester methyl (c) groups, respectively. No other significant peaks assignable to impurities or residual monomer are observed in this spectrum.
The ¹³C NMR spectrum of the prepared PMMA in Figure 2 shows the structure of this polymer through the five signals attributed to the carbons a, b, c, d and e at 20.0, 45.5, 51.3, 53.0 and 177.6 ppm, respectively. The tacticity of PMMA was estimated from the deconvolution of the overlapping signals located between 0.62 and 1.63 ppm in the ¹H NMR spectrum, assigned to the methyl groups (a) belonging to the different tetrads. According to the literature [27], the tetrads characterizing the tacticity of PMMA are gathered in Table 1. The deconvolution of these overlapping signals into Lorentzian peaks led to the results in Figure 3. The fraction of each triad composing the polymer sample is calculated from the following equation:

x_i = a_i / a_T

where a_i and a_T are the surface area of the triad i and the total area, respectively. On the other hand, the deconvolution of the signals of carbon (d) in the ¹³C NMR spectrum in Figure 2, which appear between 176 and 180 ppm, as shown in Figure 4, reveals the presence of signals belonging to the microstructures of the mrrm, mrrr/rrrm, rrrr, mmrr/rmrr and rmmr tetrads at 178.75, 178.27, 177.80, 176.90 and 176.05 ppm, respectively [27]. The integration of these signals made it possible to estimate the tacticity of this polymer, and the results obtained are grouped with those determined by ¹H NMR in Table 1.
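The deconvolution step described above can be reproduced with a few lines of fitting code. The sketch below generates a synthetic three-peak methyl region and recovers the triad fractions from the fitted Lorentzian areas; the peak positions (roughly the rr, mr and mm methyl resonances of PMMA) and all amplitudes are assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, A, x0, g):
    return A * g**2 / ((x - x0)**2 + g**2)

def three_peaks(x, *p):
    return sum(lorentzian(x, *p[3*i:3*i+3]) for i in range(3))

# Synthetic 1H methyl region (ppm); assumed positions ~0.85 (rr), 1.02 (mr),
# 1.21 (mm), with noise added to mimic an experimental spectrum.
x = np.linspace(0.6, 1.6, 600)
true = [1.0, 0.85, 0.04, 0.65, 1.02, 0.04, 0.12, 1.21, 0.04]
y = three_peaks(x, *true) + 0.01 * np.random.default_rng(0).normal(size=x.size)

p0 = [0.8, 0.84, 0.05, 0.5, 1.00, 0.05, 0.1, 1.20, 0.05]  # initial guess
popt, _ = curve_fit(three_peaks, x, y, p0=p0)

# Area of each Lorentzian is pi * A * g; triad fraction x_i = a_i / a_T
areas = np.array([np.pi * popt[3*i] * abs(popt[3*i+2]) for i in range(3)])
for name, frac in zip(["rr", "mr", "mm"], 100 * areas / areas.sum()):
    print(f"{name}: {frac:5.1f} %")
```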
TG Analysis

The TGA/DTA thermogram of the synthesized PMMA (M_n = 2.3 × 10⁵ g·mol⁻¹, M_w/M_n = 2.47), whose microstructure is mainly atactic, is shown in Figure 5. This thermal curve shows two stages of weight loss in the decomposition process. The first step begins at approximately 224 ± 3 °C and ends at 272 ± 3 °C, during which 4.5 ± 0.6 wt% of the sample is volatilized; the second step begins at 320 ± 4 °C, during which a large amount of monomer is recovered. According to certain authors [16,28], the small amount of material released in the first step consists of residual unreacted monomer, solvent and/or precipitant encrusted in the polymer. Indeed, the effect of the residual solvent involved in the synthesis of this polymer on its thermal decomposition behavior was investigated by Kizilduman et al. [29]; it was found that the first weight loss begins at 288 °C (acetone), 152 °C (THF) and 154 °C (chloroform and toluene). Other authors attribute this first step to thermal decomposition of PMMA initiated by scissions of the head-to-head (H-H) linkages [15,30,31]. The second stage observed for the PMMA obtained in this work involves a radical decomposition process, which begins from the chain ends containing unsaturated bonds and regenerates a large amount of monomer through the depolymerization reaction [22].

DSC Analysis

The DSC analysis of the prepared PMMA aims to clarify the nature of the weight loss observed at approximately 200 °C in the TGA/DTA thermograms of this polymer. The profile of the DSC thermogram in Figure 6 shows the glass transition temperature of this PMMA at 125 ± 2 °C, which agrees with the literature [32]. The small endothermic peaks located between 188 ± 2 °C and 288 ± 2 °C, which coincide with the first mass-loss temperature observed on the TGA thermogram, are probably attributable to the vaporization of some of the fragments released in the first step of the decomposition of this polymer. Such thermal behavior was also observed on the DSC thermograms of PMMA by Ulu et al. [31] and El-Zaher et al. [33] and was likewise attributed to the decomposition of this polymer.

DART-ToF-MS Study

For the characterization of the polymer samples, the experimental parameters used in the DART-ToF-MS were optimized. The best results were observed in the positive ionization mode. Most of the mass spectral peaks corresponded to protonated ions [M + H]⁺, while the molecular peaks M⁺ were also present for a few compounds. In some cases,
The isothermal decomposition of PMMA performed by means of DART-ToF-MS in a stream of helium atmosphere was carried out at temperatures that ranged between 200 and 550 • C, and the results obtained are shown in Figure 7 (200, 250 and 300 • C) and Figure 8 (350, 350, 400, 450, 500 and 550 • C). The characteristics of the fragments that resulted from this decomposition are grouped in Table 2. As can be observed from these data, PMMA begins its thermal decomposition at approximately 200 • C. This value is not very far from that observed on the thermogram of the TGA (224 • C), although the heating mode used in this method, which is isothermal, is different to that of TGA, which is non-isothermal. At 200 • C, in addition to the intense signal of cyclopentyl benzene used in this study as an internal reference that appears on the DART-Tof-MS spectrogram at 147.116 m/z, other small signals are observed, the most important of which appear at 91.085, 117.105, 132.105 and 141.097 m/z, which do not characterize the monomer, and their intensities increase with temperature. This confirms the presence of the endothermic peaks observed in the DSC thermogram between 188 and 288 • C, indicating the vaporization of the released molecules with molar masses varying between 91 and 141 g·mol −1 , such as the isomers of leucine (C 6 H 14 NO 2 ), 5-amino-5-oxo-1-pentanaminium(C 5 H 13 N 2 O) and 2-Methyl carboxyl, 4-methyl, penta-2,4-diene (C 8 H 13 O 2) . This coincides well with the temperature of the first stage of mass loss at 224 • C already observed on the TGA thermogram. These results remove the nuance on the nature of the released products during the first step of the mass loss in which certain authors using the TGA method [16,28] attribute them to the residues of monomer and traces of solvent and precipitant incrusted in the polymer matrix. At 350 • C and more, the thermal decomposition of PMMA regenerates a significant amount of monomer, which increases with temperature. This finding confirms the results reported by different authors using the thermal pyrolysis [3,[34][35][36] and the TGA methods [18,19]. The mechanism of the thermal depolymerization of PMMA in an inert medium is well known in the literature [28,[37][38][39][40]. According to these authors, the thermal decomposition of PMMA involves radical chain reactions leading to homolytic scissions of the chains and leading to the formation of secondary and tertiary alkyl radicals, mainly regenerating methyl methacrylate monomer through an unzipping rearrangement as shown in Scheme 1. What still remains random and poorly known is the mechanism highlighted to obtain the other resulting fragments due to the uncontrollable size and movement of the radicals formed in the degradation process. For this reason, the products resulted from the decomposition of this polymer other than MMA differ from one method used to another and one experiment to another. For example, Zeng et al. [28], using TGA coupled with MS (TGA-MS) to study the non-thermal decomposition of PMMA, revealed a large amount of monomer and other byproducts, such as methyl isobutyrate, traces of methyl pyruvate and 2,3-butanedione, which are totally absent in the DART-ToF-MS thermograms. ment of the radicals formed in the degradation process. For this reason, the products resulted from the decomposition of this polymer other than MMA differ from one method used to another and one experiment to another. For example, Zeng et al. 
Scheme 1. Probable mechanism of the depolymerization of PMMA.

The chemical formulas and degrees of saturation of the fragments that resulted from this decomposition, estimated from the data of the MS part of this technique, are gathered in Table 2, and the proposed names and structures of the main fragments and/or their isomers are grouped in Table 3. Except for the regenerated monomer, obtained as indicated in Scheme 1, the other molecules result from a random recombination of the different radicals produced by the thermal decomposition of PMMA.

Conclusions

DART mass spectrometry was successfully applied to study the isothermal decomposition of PMMA.
It was found that the non-isothermal decomposition of PMMA studied with TGA shows two steps of weight loss: the first begins at approximately 224 °C and the second starts at 320 °C. However, the isothermal decomposition of this polymer carried out with the DART-ToF-MS method revealed only one stage of weight loss, which started at approximately 200 °C. At this temperature and above, the thermal decomposition of PMMA regenerated a significant part of the monomer, confirming the literature. It was also revealed that the amount of MMA recovered increased with temperature, reaching a maximum at 550 °C. Among the molecules released during the first stage of the decomposition process, no trace of monomer was detected, thus resolving the ambiguity concerning the nature of the products generated during this stage.

Data Availability Statement: The data presented in this study are available in the article.
The entanglement entropy of 1D systems in continuous and homogenous space

We introduce a systematic framework to calculate the bipartite entanglement entropy of a compact spatial subsystem in a one-dimensional quantum gas which can be mapped into a noninteracting fermion system. We show that when working with a finite number of particles N, the Rényi entanglement entropies grow as log N, with a prefactor that is given by the central charge. We apply this novel technique to the ground state and to excited states of periodic systems. We also consider systems with boundaries. We derive universal formulas for the leading behavior and for subleading corrections to the scaling. The universality of the results allows us to make predictions for the finite-size scaling forms of the corrections to the scaling.

Introduction

Entanglement is a fundamental phenomenon of quantum mechanics. Much theoretical work has focused on the entanglement properties of quantum many-body systems, showing their importance in characterizing the many-body dynamics [1]. In particular, many studies have been devoted to quantifying the nontrivial connections between different parts of an extended quantum system, by computing the von Neumann or Rényi entanglement entropies of the reduced density matrix ρ_A of a subsystem A. The Rényi entanglement entropies are defined as

S_α = 1/(1 − α) ln Tr ρ_A^α.    (1)

For α → 1 this definition gives the most commonly used von Neumann entropy S₁ = −Tr ρ_A ln ρ_A, while for α → ∞ it gives the logarithm of the largest eigenvalue of ρ_A, also known as the single copy entanglement [2]. One of the most remarkable results is the universal behavior displayed by the entanglement entropy at 1D conformal quantum critical points (i.e. with dynamical critical exponent z = 1), determined by the central charge [3] of the underlying conformal field theory (CFT) [4,5,6,7]. For a partition of an infinite 1D system into a finite piece A of length ℓ and the remainder, the Rényi entanglement entropies for ℓ much larger than the short-distance cutoff a are

S_α = (c/6)(1 + 1/α) ln(ℓ/a) + c_α,    (2)

where c is the central charge and c_α a non-universal constant. When A is a finite interval of length ℓ in a finite periodic system of length L, CFT predicts the universal asymptotic scaling [6]

S_α = (c/6)(1 + 1/α) ln[(L/πa) sin(πℓ/L)] + c_α,    (3)

where, remarkably, c_α is the same non-universal constant as in Eq. (2). For future reference, it is also important to mention the result in a finite system of length L with some boundary conditions at its ends, for an interval of length ℓ starting from one of the two boundaries [6,8,9]:

S_α = (c/12)(1 + 1/α) ln[(2L/πa) sin(πℓ/L)] + c_α/2 + ln g,    (4)

where again c_α is the same non-universal constant as above and ln g is the universal boundary entropy of Affleck and Ludwig [10].

All the Rényi entropies S_α are proper and equivalent measures of entanglement in a pure state [1], but the knowledge of S_α for different α characterizes the full spectrum of non-zero eigenvalues of ρ_A (see e.g. [11]), providing significantly more information on the entanglement than the knowledge of the von Neumann entropy alone. The CFT results reviewed above have been confirmed in many spin chains and in 1D itinerant systems on the lattice (too many to be mentioned here; we refer the interested reader to the comprehensive reviews on the subject [1]). These studies have allowed a deeper understanding of the convergence and precision [12] of 1D simulation algorithms based on the so-called matrix product states [13]. However, analogous results must also be valid for systems in continuous space, and therefore be directly derivable in continuous models.
Apart from the interest in describing trapped 1D gases experimentally realized with cold atoms, the entanglement of continuous models is also instrumental in developing 1D tensor network algorithms for gases, such as the one proposed in [14]. Despite this fundamental interest, almost no effort (with the exception of Refs. [15,16] and the orbital partitioning in quantum Hall states [17]) has been devoted to the spatial entanglement of gas models (which is distinguished from particle partitioning [18]). In a previous short communication [19] we introduced a systematic framework to tackle free fermion gases in any external conditions for an arbitrarily large number of particles. The most general result of this investigation was that, when dealing with a finite number of particles N, the 1D entanglement entropy grows like ln N, with a prefactor that again is given by the central charge. In this formulation N acts as a UV cutoff, representing a concrete alternative to the lattice. In this manuscript, we detail the calculations of Ref. [19] for homogeneous 1D gases and report a series of new results about the leading and subleading corrections to their scaling behavior. The degree of universality of these results allows us to make novel predictions for spin chains regarding some universal functions describing the corrections to the scaling. The determination of these functions was left as an open problem by previous lattice investigations [20,21,22].

The model and its equivalence with others

We consider a system of free spinless fermions in the continuum interval [0, L]. We work with a finite number of particles N. Therefore all quantities, and in particular the entanglement entropies, are finite, since N acts as a cutoff. Appropriate boundary conditions (BC) are imposed in order to have a discrete energy spectrum. Apart from their intrinsic interest, spinless free fermions are also equivalent to other models of direct physical application. The 1D Bose gas with short-ranged repulsive interaction (i.e. the Lieb-Liniger model [23]), with Hamiltonian

H_LL = −Σ_{i=1}^N ∂²/∂x_i² + 2C Σ_{i<j} δ(x_i − x_j),

in the limit of strong interaction C → ∞ (i.e. impenetrable bosons, also known as the Tonks-Girardeau gas), is exactly mapped to spinless fermions [24], and the entanglement entropies of a single interval in the two models coincide, because the bosons in an interval are functions only of the fermions in the same interval (this is no longer true for several disjoint intervals, because of the presence of a bosonization string, analogously to spin-chain models [25,26,27]). The properties of the Lieb-Liniger model are described solely by the dimensionless parameter γ = CL/N [23]; thus the Tonks-Girardeau limit describes the dilute model (i.e. N/L ≪ 1) for any value of C.

Another important model mappable to free fermions is the spin-1/2 XX chain, defined by the Hamiltonian

H_XX = −Σ_l [ (σ^x_l σ^x_{l+1} + σ^y_l σ^y_{l+1})/4 − (h/2) σ^z_l ],    (7)

where σ^{x,y,z}_l are the Pauli matrices at site l. The Jordan-Wigner transformation maps this model to the quadratic Hamiltonian of spinless fermions

H = −Σ_l [ (c†_l c_{l+1} + c†_{l+1} c_l)/2 − h (c†_l c_l − 1/2) ].    (8)

Here h represents the chemical potential for the spinless fermions c_l, which satisfy the canonical anti-commutation relations {c_l, c†_m} = δ_{l,m}. The Hamiltonian (8) is diagonal in momentum space, and for |h| < 1 the ground state is a Fermi sea with filling ν = arccos|h|/π. Only for |h| < 1 are we dealing with a gapless theory. The continuum limit of the Hamiltonian (8) is then the system of free fermions we are considering in this paper, so all the universal properties that do not depend on the lattice regularization can be obtained from the continuum model.
At this point, it is worth discussing in some detail how the continuum limit is obtained. The lattice model is formed by L_lat sites separated by the lattice spacing a (usually set to 1 in lattice studies). N particles populate the chain with filling ν = N/L_lat, and we are interested in the entanglement entropy of ℓ_lat sites. The continuum limit is a system of N free fermions in a box of length L, obtained by sending a → 0 and ν → 0 while keeping νℓ_lat fixed and equal to Nℓ/L in the continuum, where L = aL_lat and ℓ = aℓ_lat. This allows us to use the CFT results to predict a priori some of the results we are going to derive. In Ref. [28], Eq. (2) has been derived for the XX chain, obtaining

S_α = (1/6)(1 + 1/α) ln[2ℓ_lat sin(πν)] + c_α,

where the non-universal constant c_α is also determined explicitly. Combining this exact result with the CFT prediction in Eq. (3), we get the asymptotic scaling behavior of the entanglement entropies in a finite XX chain,

S_α = (1/6)(1 + 1/α) ln[(2L_lat/π) sin(πℓ_lat/L_lat) sin(πν)] + c_α.

Taking now the continuum limit a → 0, ν → 0 as explained above, we arrive at the prediction

S_α = (1/6)(1 + 1/α) ln[2N sin(πℓ/L)] + c_α,    (13)

where a term proportional to ln a, coming from the normalization of the reduced density matrix, has been subtracted.

Finally, we quote the 1D Bose-Hubbard model, described by the Hamiltonian

H_BH = −t Σ_i ( b†_i b_{i+1} + b†_{i+1} b_i ) + (U/2) Σ_i n_i (n_i − 1),

where b_i are bosonic operators and n_i ≡ b†_i b_i is the particle density operator. The hard-core limit U → ∞ of the Bose-Hubbard model implies that the particle number n_i per site is restricted to the values n_i = 0, 1, so that in this limit the model can be exactly mapped onto a lattice model of spinless fermions. Clearly, the continuum limit of the Bose-Hubbard model is nothing but the Lieb-Liniger gas introduced above.

The method

We consider a system of N non-interacting spinless fermions with a discrete one-particle energy spectrum, such as a finite system or one confined by a suitable external potential. The many-body wave functions Ψ(x₁, ..., x_N) can be written in terms of the one-particle eigenstates as a Slater determinant,

Ψ(x₁, ..., x_N) = (1/√N!) det_{j,k} [φ_k(x_j)],

where the normalized wave functions φ_k(x) represent the occupied single-particle energy levels. The ground state is obtained by filling the N levels with lowest energies. Thus, the ground-state two-point correlator is

C(x, y) = ⟨c†(x) c(y)⟩ = Σ_{k=1}^N φ*_k(x) φ_k(y),    (16)

where c(x) is the fermionic annihilation operator and the one-particle eigenfunctions φ_k(x) are ordered according to their energies. The reduced density matrix of a subsystem A extending from x₁ to x₂ can be written as

ρ_A ∝ exp[ −∫_A dx dy c†(x) H(x, y) c(y) ],    (17)

where H = ln[(1 − C)/C] and the normalization constant is fixed by requiring Tr ρ_A = 1. This equation can be straightforwardly seen as the continuum limit of the formula for lattice free fermions [29,30], but it can also be obtained following the standard derivation of Ref. [29] in the path integral formalism. In the passage from the lattice to the continuum, the normalization factor in (17) depends explicitly on the lattice spacing a and is responsible for the subtraction of the term proportional to ln a in Eq. (13).

We want to compute the bipartite Rényi entanglement entropies, defined as in Eq. (1), of the space interval A in this fermion gas. For this purpose, we introduce the Fredholm determinant‡

D_A(λ) ≡ det[ λ δ_A − C_A ],    (18)

where C_A(x, y) is the restriction of C(x, y) to the interval at hand, from x₁ to x₂, which can be written as C_A = P_A C P_A, with P_A the projector on the interval A. The same definition holds for δ_A(x, y) = P_A δ(x − y) P_A. Following the ideas developed for the lattice model [28], D_A(λ) can be introduced in such a way that it is a polynomial in λ having as zeros the N eigenvalues of C_A.
The Gaussian form of ρ_A in Eq. (17) allows us to exploit the relation between the eigenvalues of ρ_A and C_A to write

S_α = (1/2πi) ∮ dλ e_α(λ) (d/dλ) ln D_A(λ),    (21)

where the integration contour encircles the segment [0, 1], and

e_α(λ) = 1/(1 − α) ln[λ^α + (1 − λ)^α].

For α → 1, e₁(λ) = −λ ln λ − (1 − λ) ln(1 − λ) and Eq. (21) reproduces the von Neumann definition.

‡ A Fredholm determinant is the extension of the standard determinant to continuous matrices. Its simplest operative definition is through the generalization to continuous kernels K(x, y) of the standard identity ln det(1 + M) = Tr ln(1 + M) for a finite matrix M, where the traces are simply Tr K^n = ∫ dx₁ ⋯ dx_n K(x₁, x₂) K(x₂, x₃) ⋯ K(x_n, x₁).

The integral representation (21) has already been derived and used in the context of discrete chain models [28], where it involves the determinant of a standard matrix with the lattice sites as indices. Here, the Fredholm determinant is turned into a standard one by introducing the N × N reduced overlap matrix A (also considered in Ref. [15]) with elements

A_{nm} = ∫_{x₁}^{x₂} dz φ*_n(z) φ_m(z),    (23)

such that Tr C_A^p = Tr A^p for every integer p, a relation that follows from Eq. (16). Thus

D_A(λ) = det(λI − A) = Π_{m=1}^N (λ − a_m),

where Eq. (18) has been used twice and a_m denote the eigenvalues of A. Inserting this into the integral (21), we obtain

S_α = Σ_{m=1}^N e_α(a_m),    (26)

as a consequence of the residue theorem. The matrix A is easily obtained for any non-interacting model from the one-particle wave functions, as the definition (23) shows. Calculating the entanglement entropies is then reduced to an N × N eigenvalue problem that can easily be solved numerically and, in some instances, even analytically, as we are going to show.
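The reduction to an N × N eigenvalue problem is straightforward to implement. The sketch below is a minimal generic implementation (ours, not the paper's code): it builds the overlap matrix (23) by numerical quadrature from arbitrary normalized one-particle wave functions and evaluates Eq. (26); the eigenvalues are clipped slightly away from 0 and 1 to avoid harmless 0·ln 0 issues in floating point.

```python
import numpy as np

def renyi_entropies(phis, x1, x2, alphas, npts=2000):
    """Entanglement entropies of the interval [x1, x2].

    phis : list of callables, the N occupied one-particle wave functions
           (assumed orthonormal on the full system).
    Returns {alpha: S_alpha} from S_alpha = sum_m e_alpha(a_m), Eq. (26).
    """
    z = np.linspace(x1, x2, npts)
    F = np.array([phi(z) for phi in phis])     # N x npts
    w = np.gradient(z)                         # quadrature weights ~ dz
    A = (F.conj() * w) @ F.T                   # A_nm = int_A phi_n* phi_m
    a = np.clip(np.linalg.eigvalsh((A + A.conj().T) / 2), 1e-12, 1 - 1e-12)
    out = {}
    for al in alphas:
        if al == 1:
            out[al] = float(-np.sum(a*np.log(a) + (1-a)*np.log(1-a)))
        else:
            out[al] = float(np.sum(np.log(a**al + (1-a)**al)) / (1 - al))
    return out

# Example: N fermions in a periodic box of length L, interval [0, L/2]
L, N = 1.0, 21
ks = range(-(N//2), N//2 + 1)                  # symmetric Fermi sea
phis = [lambda x, k=k: np.exp(2j*np.pi*k*x/L)/np.sqrt(L) for k in ks]
print(renyi_entropies(phis, 0.0, L/2, alphas=[1, 2]))
```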
The ground state of systems with periodic boundary conditions

In a system of length L with periodic boundary conditions (BC), the normalized one-particle wave functions are plane waves, φ_k(x) = e^{2πikx/L}/√L, with integer wave numbers k and energies E_k = 2π²k²/L². For some physical problems one has to impose antiperiodic BC on the fermionic degrees of freedom (the appropriate BC can also depend on the parity of N), so that the momentum is quantized in terms of half-integer wave numbers. However, as long as we are interested in the entanglement entropy of a single interval, this does not change the final results because, as we shall see shortly, the elements of the matrix A depend only on differences of momenta, which are always integer. Different results would instead be obtained for the entanglement of two disjoint intervals, similarly to what happens in CFT [31,32] and in lattice models [25,26,27]. The element of the overlap matrix between two one-particle eigenstates with wave numbers k₁ and k₂ is

A_{k₁k₂} = e^{−πi(k₁−k₂)(x₁+x₂)/L} sin[π(k₁ − k₂)(x₂ − x₁)/L] / [π(k₁ − k₂)].    (28)

The elements of the matrix A are not invariant under translations, because of the explicit dependence of the phase factor on x₁ + x₂. However, the eigenvalues of A do not depend on this phase factor (although the eigenvectors do), so the entanglement entropies are translationally invariant, as they must be. Indeed, in the determinant of λI − A, we can bring out of each column the factor e^{−πik₁(x₁+x₂)/L} and out of each row the factor e^{πik₂(x₁+x₂)/L}; since k₁ and k₂ run over the same set of integers, the product of all these phases is 1, regardless of the values of the k's. In the following we use this freedom to fix the phase factor to 1 and denote ℓ = x₂ − x₁.

The ground state of a fermion gas with N particles is obtained by filling the N k-modes with lowest energies. In the case of odd N, this amounts to filling symmetrically the N states with |k| ≤ (N − 1)/2 (the zero mode is clearly included). For even N, there are two degenerate states, obtained from the (N − 1)-particle ground state by filling the first available state either on the right or on the left. This small difference between odd and even N does not play any role, because the elements of the matrix A in Eq. (28) depend only on the difference between k₁ and k₂. Thus we can just start counting modes from the lowest occupied one-particle k state, and the resulting matrix A for a segment of length ℓ = x₂ − x₁ is, from Eq. (28),

A_{nm} = sin[π(n − m)ℓ/L] / [π(n − m)],    n, m = 1, ..., N,    (29)

with diagonal elements equal to ℓ/L. By inserting the N eigenvalues of A into Eq. (26), we obtain the entanglement entropy of a system of N particles. This is very easily done numerically, as in Fig. 1.

The leading behavior of the entanglement entropies

We can also compute analytically the large-N behavior of the entanglement entropies: ln D_A(λ) = ln det(λI − A) is the logarithm of a Toeplitz determinant, so we can use the Fisher-Hartwig conjecture [33] to calculate the large-N behavior of S_α rigorously. However, going through all the technical complications of the Fisher-Hartwig conjecture is not needed, because we can exploit the results obtained for the very similar matrices of lattice free fermions on the infinite line. In fact, for lattice models in the thermodynamic limit at fixed filling ν = N/L_lat, it has been found that the entanglement entropies of ℓ_lat consecutive sites are given by Eq. (21), where D_A(λ) is a standard ℓ_lat × ℓ_lat determinant with the correlation matrix C given by [30,35]

C^lat_{nm} = sin[π(n − m)ν] / [π(n − m)].    (30)

It is evident that this matrix C^lat_{nm} is the same as A in Eq. (29) upon identifying ν with ℓ/L. However, this is only a mathematical coincidence, and it will most probably not hold for interacting systems. Indeed, in Eq. (29) we have a finite system and the indices label the occupied modes of the N particles in the full system, whereas in Eq. (30) we have an infinite lattice with filling ν and the indices refer to the lattice sites of the subsystem. Having established this equivalence between the two matrices, we can use the exact calculations of Ref. [28] (see also [21]), replace ν with ℓ/L, and obtain the asymptotic behavior of the desired entanglement entropies as

S_α(ℓ) = (1/6)(1 + 1/α) ln[2N sin(πℓ/L)] + c_α.    (31)

Eq. (31) agrees with the CFT prediction for finite systems with periodic BC in Eq. (3) and represents an explicit analytic confirmation of this CFT result. It coincides with the scaling prediction in Eq. (13) from the lattice model, but here it has been derived from first principles. Notice that we cannot recover the infinite-volume limit from Eq. (29), because this limit must be taken at fixed ratio N/L: if we naively take L → ∞ in Eq. (29), we get a meaningless result, reflecting the non-commutativity of the limits. Oppositely, after computing the determinant as in Eq. (31), the infinite-volume limit exists at finite density N/L. Fig. 1 shows a comparison with exact finite-N calculations for α = 1, 2, 5, ∞. It is evident that (especially for large α) the data are affected by finite-N corrections, which are calculated exactly in the next subsection.
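As a concrete check of Eqs. (29) and (31), the following sketch diagonalizes the sine-kernel overlap matrix for ℓ = L/2 and fits the coefficient of ln[2N sin(πℓ/L)]; with c = 1 the fitted slope should approach (1 + 1/α)/6. This is our own illustration, with the α values and N range chosen arbitrarily.

```python
import numpy as np

def S_alpha(N, x, alpha):
    """Entropy from A_nm = sin(pi (n-m) x)/(pi (n-m)), Eq. (29), x = l/L."""
    n = np.arange(N)
    q = n[:, None] - n[None, :]
    A = np.where(q == 0, x,
                 np.sin(np.pi * q * x) / (np.pi * np.where(q == 0, 1, q)))
    a = np.clip(np.linalg.eigvalsh(A), 1e-12, 1 - 1e-12)
    if alpha == 1:
        return -np.sum(a*np.log(a) + (1-a)*np.log(1-a))
    return np.sum(np.log(a**alpha + (1-a)**alpha)) / (1 - alpha)

x = 0.5  # half system, l/L = 1/2
Ns = np.arange(20, 201, 20)
for alpha in (1, 2):
    S = np.array([S_alpha(N, x, alpha) for N in Ns])
    u = np.log(2 * Ns * np.sin(np.pi * x))
    slope = np.polyfit(u, S, 1)[0]
    print(f"alpha = {alpha}: fitted slope = {slope:.4f}, "
          f"CFT value (1 + 1/alpha)/6 = {(1 + 1/alpha)/6:.4f}")
```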
Corrections to the asymptotic behavior and universal FSS in finite chains

The above correspondence between the determinants giving the entanglement entropies of the continuous system and those of spin chains permits a quantitative description of the corrections to the leading behavior in Eq. (31), by exploiting the results based on the generalized Fisher-Hartwig conjecture [36] in Refs. [20,21]. We introduce the difference between the entanglement at finite N and its asymptotic value,

d_α(N) ≡ S_α(N) − S_α^asy.

We can again use the spin-chain results of Refs. [20,21], where a quantity analogous to d_α(N) was calculated at leading order. Using these results and replacing ν with ℓ/L (in Refs. [28,20,21] k_F = πν was used, but here we prefer ν to avoid confusion with the Fermi momentum of the continuous system), we obtain that the leading correction term is of the form

d_α(N) = f_α cos(2πNℓ/L) [2N sin(πℓ/L)]^{−2/α},    (33)

with f_α an explicitly known α-dependent amplitude [20,21]. A check of the correctness of this expression is reported in Fig. 2, where we plot the absolute value of d_α(N) for the half-system entanglement (i.e. ℓ/L = 1/2) for N up to 100. The power-law behavior is evident, and the straight lines are given by Eq. (33) without any adjustable parameter.

Figure 2. Leading asymptotic correction to the Rényi entanglement entropies of the half system ℓ = L/2 for N up to 100 and α = 2, 5. Straight lines correspond to the prediction (33), which agrees with the numerical data. Note that subleading corrections also oscillate and can be described by Eq. (34).

These corrections of the form N^{−2/α} correspond to the corrections with exponent −2/α found within conformal field theory [37], which have already been generalized to other situations, such as massive field theories [38], confined systems [39] and disordered models [40], and have been carefully checked numerically in many different models [20,41,22]. Subleading corrections to the scaling are visible in Fig. 2, and for large values of α they have a sizable effect. These can be calculated exactly by adapting the results of Ref. [21] (based on the generalized Fisher-Hartwig conjecture [36] and random matrix theory [42]) and replacing ν with ℓ/L; the full result for d_α(ℓ) up to order N^{−3} can be cast in the form of a double expansion in the variables N^{−2/α} and N^{−1} [Eq. (34)]. As for the spin chain, while Eq. (34) provides an infinite number of contributions, for a given fixed value of α only a finite number of them are larger than the leading neglected term, which is always of order O(N^{−3}). To be specific, for α = 2, 3, 10, Eq. (34) gives the leading 4, 8 and 46 terms in the asymptotic expansion of d_α(N), and hence the leading 6, 10 and 48 terms in the expansion of S_α(ℓ). We do not report here all the terms contributing for specific values of α; they can be obtained by a simple adaptation of the results above or from Ref. [21].

A remarkable exact result that we obtain from the previous analysis is the universal finite-size scaling (FSS) form for finite XX spin chains with periodic BC. Indeed, as explained in section 1.1, the above result represents the continuum limit of the XX spin chain in a finite volume. For these spin chains, a universal FSS form has been observed in Ref. [20], where the correction d_α(ℓ_lat), normalized by the leading amplitude, was considered as a function F_α(X) of X = ℓ_lat/L_lat [Eq. (39)]. In Ref. [20], chains with an odd number of spins L_lat were considered; Fig. 3 shows these results for α = 2 and magnetic field h = 0. The figure shows a perfect data collapse, supporting the correctness of the FSS ansatz. Already in Ref. [20] it was observed that the scaling function is perfectly described by F₂(X) = ±cos(πX), as is evident from the figure (where it is impossible to distinguish the data from the conjecture). However, this is in apparent contradiction with our result, which suggests that an FSS form does not exist and that the quantity F_α(X) in Eq. (39) should instead be a function of ℓ_lat itself; in particular,

F_α = cos(2πν ℓ_lat),    (40)

where we changed the variable of F_α from X to ℓ_lat. To elucidate what is happening, in the right panel of Fig. 3 we report the numerical data at L_lat = 101 in zero magnetic field against our new prediction, Eq. (40). Since L_lat is odd, the ground state is not exactly at half filling but at ν = (L_lat − 1)/(2L_lat).
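The resolution of the apparent contradiction is purely trigonometric: at ν = (L_lat − 1)/(2L_lat), the function cos(2πνℓ_lat) of Eq. (40), sampled at integer ℓ_lat, coincides with ±cos(πX). A three-line numerical check (our own) of the sampling identity derived below makes this explicit.

```python
import numpy as np

L = 101                      # odd chain length
nu = (L - 1) / (2 * L)       # filling of the odd chain at h = 0
l = np.arange(1, L)          # integer block lengths
X = l / L

lhs = np.cos(2 * np.pi * nu * l)        # Eq. (40), sampled at integer l
rhs = (-1.0)**l * np.cos(np.pi * X)     # the conjecture of Ref. [20]
print("max |difference| =", np.abs(lhs - rhs).max())  # ~1e-15 (identical)
```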
In Fig. 3 (right panel) one observes that the prediction (40), i.e. the strongly oscillating curve, agrees with the numerical data, apart from some subleading corrections to the scaling. The ± cos(πX) form (also shown in the right panel) is nothing but the sampling of this curve at integer values of ℓ_lat. Indeed, from basic trigonometry we have

cos(2πν ℓ_lat) = cos(π ℓ_lat) cos(π ℓ_lat/L_lat) + sin(π ℓ_lat) sin(π ℓ_lat/L_lat) = (−1)^{ℓ_lat} cos(πX),

where we used that ℓ_lat is an integer. The final expression is exactly the phenomenological result conjectured in Ref. [20], which we thus prove and generalize here. Indeed, for even chains, always in zero field, Ref. [20] proposed phenomenologically F_α(X) = ±1, which corresponds to cos(2πν ℓ_lat), i.e., for ν = 1/2, to (−1)^{ℓ_lat}. We checked Eq. (40) against other spin-chain results at different fillings, always finding agreement. Leading corrections to the scaling with a structure similar to Eq. (40) have been conjectured in Ref. [43] for spin chains with open boundary conditions.

3.2.1. The single copy entanglement: α → ∞. For α → ∞, the Rényi entanglement entropy gives the logarithm of the maximum eigenvalue of ρ_A, also known as the single copy entanglement [2]. It is not possible to obtain the result at α → ∞ from the general form in Eq. (34), because all the corrections of the form N^{−2/α} resum. Instead of redoing the whole calculation to resum these terms, we again exploit the correspondence with the infinite spin chain and simply obtain the final result from Ref. [21] by substituting ν with ℓ/L. After straightforward algebra we obtain the closed form for the single copy entanglement, in which the constant b = exp(−Ψ(1/2)) ≈ 7.12429 appears. We have checked these results against exact numerical computations that we do not report here.

It is interesting also in this case to explore the consequences of this result for finite spin chains, along the same lines as above for the finite-α results. In Ref. [20], for chains with an odd number of spins, it was shown that the data for several choices of L_lat and ℓ_lat collapse onto a single master curve when plotted in suitably rescaled variables. Numerical data showing this collapse (analogous to those in Ref. [20]) are reported in Fig. 4 (left) for zero magnetic field and odd L_lat. In contrast to the case of finite α, the shape of this curve was too complicated to be guessed in Ref. [20]. As before, assuming universality in the FSS towards the continuum limit, we predict the scaling function reported in Eq. (42). For a small chain with L_lat = 37, this curve is shown in the right panel of Fig. 4: it displays high-frequency oscillations, but it coincides perfectly with the exact lattice calculations (apart from small subleading corrections). As for finite α, the smooth result obtained for chains of different lengths (reported in the left panel) is a consequence of the sampling at integer ℓ_lat. Using the properties of the dilogarithm Li_2(y), the envelope in the left panel of Fig. 4 takes a closed form that is also shown in both panels; in the left one it is indistinguishable from the data points.

3.2.2. The von Neumann entropy: α = 1. We consider again the XX spin chain, for which the corrections to the scaling of the von Neumann entropy have not been considered quantitatively in finite systems. For infinite systems, Ref. [21] reports the exact result quoted in Eq. (47). It should be pointed out that this term is an "analytical correction" to the scaling, i.e. it is not due to the insertion of an irrelevant operator. For this reason, its finite-size scaling cannot be obtained simply by replacing the distance ℓ_lat with the chord length, as was done for d_α at finite N. For finite systems, we expect the FSS form of Eq. (48), where F_1(X) is an unknown function with F_1(0) fixed by Eq. (47).
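The single copy entanglement is also trivial numerically: for free fermions the largest eigenvalue of ρ_A factorizes over the modes of the overlap matrix, so S_∞ follows directly from the spectrum of A. The routine below is a sketch built on that standard factorization; it reuses the functions defined earlier.

```python
import numpy as np

def single_copy_entanglement(A):
    """S_inf = -ln(largest eigenvalue of rho_A): for free fermions that
    eigenvalue is the product over modes of max(a_k, 1 - a_k)."""
    a = np.clip(np.linalg.eigvalsh(A), 1e-14, 1 - 1e-14)
    return float(-np.sum(np.log(np.maximum(a, 1.0 - a))))

A = overlap_matrix_periodic(100, 0.5)
print(single_copy_entanglement(A))   # alpha -> infinity directly
print(renyi_entropy(A, alpha=200))   # large-alpha Renyi approaches it
```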
However, by looking at the continuum limit of this FSS form, it is reasonable to propose the ansatz F_1(X) = A + B sin^2(πX). The constant A can be fixed by requiring that F_1(0) is given by Eq. (47). The constant B can be fixed with the numerical data (e.g. by the scaling at X = 1/2). After a careful analysis, we conjecture the FSS scaling function reported in Eq. (49).

Excited states in periodic chains. We now turn our attention to excited states, which can be easily treated within the formalism we introduced. Indeed, the only change compared to the ground state is in Eq. (16), where we have to sum over the occupied one-particle levels. It is convenient to have a simple graphical representation of the many-body states. This can easily be done by representing each single-particle state with a circle, filling in black the occupied ones and leaving the others empty. For example, the ground state is

∘ ∘ ∘ • • • ⋯ • • • ∘ ∘ ∘ (N consecutive filled circles), (50)

where the underlined (central) circle represents the zero-momentum mode. When working at fixed particle number N, excited states are obtained from the ground state just by moving black circles onto empty white ones.

The entanglement of excited states has already been considered a few times in the literature, but only in the context of discrete lattice models. In [44] it was shown that the negativity (which is a different measure of entanglement, related to some Rényi entropies [1]) shows a universal scaling in critical spin systems. In Ref. [45], on the basis of Toeplitz-matrix arguments for the XX spin chain and of exact calculations for the anisotropic Heisenberg chain, it was shown that only a small subclass of excited states can exhibit a universal logarithmic divergence with the subsystem size ℓ, while most of the states strongly violate the area law, their entanglement entropies increasing linearly with ℓ with a non-universal prefactor. The states providing universal scaling are those in which there is a finite (and possibly small) number of sets of one-particle states occupied sequentially, e.g.

∘ • • • ∘ ∘ ∘ • • ∘ ⋯, (51)

and the location of these blocks of states is not essential. This set of states includes all low-lying excited states. In Ref. [46] it was shown that the entanglement entropies of low-lying excited states display a universal finite-size scaling that is different from the ground-state one of Eq. (3). These can be calculated by means of CFT, because low-lying states in CFT language are described by the action of a scaling operator on the ground state. The states obtained by applying a primary operator to the ground state are of particular importance. In this case, the Rényi entropies for integer α have been related to the correlation functions of these operators on an α-sheeted Riemann surface [46]. We refer the interested reader to the original reference [46] and limit ourselves to quoting the main result, Eq. (52), in which F^{(α)}_Υ(X) is the universal scaling function depending on the operator Υ whose action on the ground state gives the desired excited state. In particular, F^{(α)}_Υ(0) = 1, i.e., in the thermodynamic limit all these low-lying states have entropies degenerate with the ground state, in agreement with Ref. [45].

Two sets of primary operators can easily be treated for a free boson compactified on a circle, which describes the thermodynamic limit of the free-fermion gas we are considering. First, the vertex operators V(x), for which Ref. [46] reports F^{(α)}_V(X) = 1 (i.e., the entanglement entropies are the same as in the ground state).
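Since the matrix elements depend only on the differences of the occupied momenta (Eq. (28)), a single routine covers the ground state and all the occupation diagrams above. A sketch follows; an overall phase depending on the segment position is dropped, which is harmless because it amounts to a diagonal unitary conjugation that leaves the spectrum unchanged. The check at the end illustrates numerically why the vertex-operator (rigidly shifted) states have the same entropies as the ground state.

```python
import numpy as np

def overlap_matrix_modes(modes, x):
    """Overlap matrix for an arbitrary set of occupied momentum modes
    (integers n_i, with k_i = 2*pi*n_i/L) and a segment of relative
    length x; depends only on the differences n_i - n_j."""
    n = np.asarray(list(modes))
    d = n[:, None] - n[None, :]
    denom = np.pi * np.where(d == 0, 1, d)
    return np.where(d == 0, x, np.sin(np.pi * d * x) / denom)

# a rigid shift of all occupied momenta leaves A (hence all entropies) unchanged
N, x, M = 40, 0.3, 7
assert np.allclose(overlap_matrix_modes(range(N), x),
                   overlap_matrix_modes(range(M, N + M), x))
```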
In the free-fermion gas, these vertex-operator states correspond to the excited states obtained by shifting the ground state (50) in momentum space, i.e. replacing all momenta k_i by k_i + 2πM/L, with M an arbitrary integer. The matrix A is always given by Eq. (28), which depends only on the differences between the various momenta, and so it is exactly equal to the ground-state one, confirming the prediction F^{(α)}_V(X) = 1.

The other operator considered in Ref. [46] is Υ = i∂φ, which has been found to have the non-trivial scaling function given, for integer α, in Eq. (53). Here H is a 2α × 2α matrix whose elements are reported in Eq. (54). For α = 2, this reduces to the simple expression in Eq. (55), but for any other α > 2 the explicit formulas are too cumbersome to be reported in their full glory. It must be mentioned that the analytic continuation of F^{(α)}_{i∂φ} in α is not yet known, and so the von Neumann entanglement entropy of this excited state is also still unknown. However, for small X the expansion reported in Eq. (56) has been found, whose analytic continuation is obvious.

The excited state generated by the action of i∂φ on the ground state is the particle-hole excitation obtained by moving one particle from the highest occupied level to the first available one, i.e., graphically,

∘ • • • ⋯ • • ∘ • ∘. (57)

The corresponding N × N matrix A then has its first N − 1 rows and columns identical to the ground-state ones of Eq. (29), while the last row is different, as given in Eq. (58), with A_{mN} = A_{Nm}. Although only one row and one column differ from the ground state, the matrix A ceases to be a Toeplitz matrix and (to the best of our knowledge) no analytic treatment is possible anymore. We check the prediction of Eq. (53) numerically. In Fig. 6 we report the numerically calculated scaling function for several values of N as a function of X, for α = 2, 3. It is evident that in the large-N limit the CFT prediction (53) is approached, with small oscillating corrections to the scaling that are more pronounced for small X. In order to shed some light on the analytic continuation at α → 1, we also report (again in Fig. 6) the scaling function for the von Neumann entropy. In contrast to F^{(α)} with α ≥ 2, the corrections to the scaling are much smaller, as for the entanglement in the ground state. Unfortunately, as already stated, the analytic continuation to α → 1 of Eq. (53) for arbitrary X is not yet known, and so the data in the figure cannot be compared with an exact prediction. Such an analytic continuation is, however, known for small X from Eq. (56). This prediction is reported on top of the numerical data, and they agree perfectly up to X ∼ 0.1.

Having established the leading asymptotic behavior, we move our interest to the leading corrections to the scaling. We find numerically that these corrections have the same exponents as in the ground state, i.e. they decay with the α-dependent power law N^{−2/α}. In order to show this, we report in Fig. 6 (left panels) the deviation from the asymptotic scaling function for the half-system entanglement. The data clearly show corrections of the form (−1)^N N^{−2/α} for α > 1. We found numerically that D_2 ≃ (−1)^N 0.19039… and D_3 ≃ (−1)^N 0.225…. These non-universal amplitudes are different from the ones found for the ground state, and we were able neither to calculate nor to guess their α dependence. We checked that for general ℓ/L the corrections are of the form cos(2πNℓ/L) N^{−2/α}, as for the ground state. Furthermore, the subleading corrections seem to have the same power structure as in the ground state (cf. Eq. (34)). The von Neumann entropy at α = 1 requires a separate analysis.
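The particle-hole state (57) is just another occupation list, so the generic-mode routine above gives its entropies at once. In the usage sketch below, the finite-N estimator of the scaling function, F ≈ exp[(1 − α)(S_α^{exc} − S_α^{gs})], is our reading of the ratio of Tr ρ_A^α between the excited and ground states in Ref. [46].

```python
import numpy as np
# reuses overlap_matrix_modes() and renyi_entropy() from the sketches above
N, x, alpha = 60, 0.25, 2
gs = list(range(N))              # ground state: N consecutive modes
ph = list(range(N - 1)) + [N]    # top particle moved up one level, cf. (57)
S_gs = renyi_entropy(overlap_matrix_modes(gs, x), alpha)
S_ph = renyi_entropy(overlap_matrix_modes(ph, x), alpha)
print(np.exp((1 - alpha) * (S_ph - S_gs)))  # finite-N estimate of F^(2)(x)
```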
As the last panel of Fig. 6 shows, the corrections to the scaling at α = 1 are monotonic. However, in this case we do not know exactly the constant term in the leading behavior. An accurate numerical analysis for the half-system entanglement is consistent with a two-parameter behavior with y_1 = 0.540726… and y_2 = 0.35…. These numerical values have been obtained by fitting the data for N > 100, 150, 200 and keeping the stability of the fit under control. Although these fitting parameters have been extracted at asymptotically large N, Fig. 6 shows that the fit describes the data very accurately down to N ∼ 3.

Hard-wall boundaries. We now consider a gas of spinless fermions confined in the interval [0, L] by a hard-wall potential, i.e. the gas density vanishes outside the interval (x ∉ [0, L]) and the boundary condition is that the wave function vanishes at the boundaries (Dirichlet BC). The one-particle wave functions are

φ_k(x) = \sqrt{2/L}\, \sin(\pi k x/L), \qquad k = 1, 2, \dots, \qquad (63)

with energies E_k = π²k²/(2L²).

5.1.1. An interval starting from the boundary. The elements of the overlap matrix (cf. Eq. (23)) between two one-particle eigenstates n and m take a particularly simple form for an interval starting from the boundary, i.e. A = [0, ℓ]. In fact we have

A_{nm} = \frac{\sin[\pi(n-m)\ell/L]}{\pi(n-m)} − \frac{\sin[\pi(n+m)\ell/L]}{\pi(n+m)}, \qquad n, m = 1, \dots, N, \qquad (64)

where the first term equals ℓ/L for n = m. As in the periodic case, the matrix A above is exactly the same as the correlation matrix C_lat of the corresponding lattice model with ν replaced by ℓ/L. This has been considered in Ref. [43], where, using a recent generalization of the Fisher-Hartwig conjecture to Toeplitz+Hankel matrices [47], the asymptotic behavior of the entanglement entropies of the lattice model was calculated exactly. Exploiting the equivalence between the two problems (i.e. replacing ν with ℓ/L in Ref. [43]), we easily obtain the asymptotic behavior of the entanglement entropies quoted in Eq. (65), where E_α is defined in Eq. (11). Notice that this result agrees with the general CFT prediction in Eq. (4) with ln g = 0, a well-known result for open boundary conditions [10]. A comparison of the finite-N results with Eq. (65) is shown in Fig. 7. It is evident that for any α there are corrections to the scaling oscillating with N. These are of order O(N^{−1/α}) and can be deduced exactly from the analogy with the lattice model solved in Ref. [43]. Defining the deviation d_α(N) from the asymptotic behavior as in Eq. (66), from Ref. [43], after replacing ν with ℓ/L, we obtain the expression reported in Eq. (67). Fig. 7 (right panel) shows these corrections for the half-system entanglement entropy for α = 1, 2, 3. Further corrections of the form N^{−p/α}, with p integer, can be straightforwardly deduced from the analysis in Ref. [43]. We mention that, as in the periodic case, these leading N^{−1/α} corrections correspond to the ones of the form O(ℓ^{−1/α}) found within CFT [37,43].

Generic interval. In the case of a subsystem consisting of a generic interval A = [x_1, x_2], the entanglement entropies require a different analysis. The general formula for the matrix A is slightly more complicated: it is given in Eq. (68) in terms of the matrix B defined in Eq. (64). The entanglement entropies of the interval [x_1, x_2] can be computed by inserting the eigenvalues of A into Eq. (26). This allows us to easily compute S_α(N) up to large values of N and compare its behavior with the asymptotic CFT prediction

S_α(N) = \frac{1}{6}\left(1 + \frac{1}{\alpha}\right)\left\{\ln(4N) + \frac{1}{2}\ln\frac{\sin^2[\pi(y_2-y_1)/2]\,\sin(\pi y_1)\,\sin(\pi y_2)}{\sin^2[\pi(y_2+y_1)/2]}\right\} + E_\alpha, \qquad (69)

where y_i = x_i/L. The proof of this equation is a straightforward CFT exercise that we report in Appendix A. In particular, we considered a block of size Ly centered at the middle of the system, i.e., y_2 = 1/2 + y/2 and y_1 = 1/2 − y/2.
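For hard walls the same numerical machinery applies with the sine wave functions above. The matrix coded below is our reconstruction of Eq. (64), obtained by integrating φ_n φ_m over [0, ℓ]; note the minus sign between the difference and sum terms, to be contrasted with the Neumann case of the next section.

```python
import numpy as np

def overlap_matrix_dirichlet(N, x):
    """Overlap matrix for N fermions in a hard-wall box and the interval
    [0, ell], x = ell/L, from phi_n(u) = sqrt(2/L) sin(n*pi*u/L):
    A_nm = sin(pi(n-m)x)/(pi(n-m)) - sin(pi(n+m)x)/(pi(n+m))."""
    n = np.arange(1, N + 1)
    d = n[:, None] - n[None, :]
    s = n[:, None] + n[None, :]
    denom = np.pi * np.where(d == 0, 1, d)
    diff_term = np.where(d == 0, x, np.sin(np.pi * d * x) / denom)
    return diff_term - np.sin(np.pi * s * x) / (np.pi * s)

# reuses renyi_entropy() from the first sketch; compare with Eq. (65)
print(renyi_entropy(overlap_matrix_dirichlet(100, 0.5), alpha=2))
```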
The data for the entanglement entropy approach the asymptotic behavior predicted by Eq. (69).

Neumann boundary conditions. Another interesting situation arises when imposing Neumann boundary conditions on the fermionic wave function, i.e. requiring that the derivative of the wave function vanishes at the two boundaries at 0 and L. In this case, the normalized one-particle wave functions are φ_0(x) = 1/\sqrt{L} and φ_n(x) = \sqrt{2/L} \cos(nπx/L) for n ≥ 1, with the same energies as for Dirichlet BC. As an important difference compared to Dirichlet BC, the zero mode with k = 0 also contributes. The matrix A is readily calculated. It is an N × N matrix whose entries are more easily written if we count rows and columns with n, m starting from 0 up to N − 1, as for the modes above. For an interval of length ℓ starting from the boundary (z = ℓ/L), straightforward calculations lead to

A_{nm} = \frac{\sin[\pi(n-m)z]}{\pi(n-m)} + \frac{\sin[\pi(n+m)z]}{\pi(n+m)}, \qquad n, m = 1, \dots, N-1,

with A_{00} = z and A_{0m} = A_{m0} = \sqrt{2}\,\sin(\pi m z)/(\pi m). Note, as differences compared to Dirichlet BC, the plus sign between the two terms and the zero-mode contribution for n, m = 0. Because of the presence of the zero row and column, A is not of the Toeplitz+Hankel form it has for Dirichlet BC, and thus the recent generalizations of the Fisher-Hartwig conjecture in Ref. [47] cannot be used. We therefore determine the matrix A numerically for various N and, through Eq. (26), compute the Rényi entanglement entropies shown in Fig. 9 (left panel). The analysis of their large-N behavior gives the form reported in Eq. (74), shown as continuous lines in the figure. This form is consistent with the general CFT expectation in Eq. (4) with g = 1. In order to avoid confusion with the CFT literature, we stress that in this paper we are considering Neumann and Dirichlet BC on the fermionic degrees of freedom. These do not correspond to Dirichlet and Neumann BC on the bosonic field obtained from bosonization of the fermionic theory, which instead are well known to have different g functions (see e.g. [48]). It is known that both fermionic BC correspond to Neumann conditions on the bosonized field, so it should not be a surprise that the asymptotic behavior up to O(N⁰) is the same as in Eq. (65). Notice that we have included an O(1/N) term in the leading behavior of the logarithm (i.e. the −1 in (2N − 1)), which has the effect of canceling the leading non-oscillating correction to the scaling. Such a term was present also for Dirichlet BC, but with the opposite sign. While there it was motivated by the mapping to the lattice model (cf. Ref. [43]), here we introduce it on a phenomenological basis in order to describe the data (see below), and we do not have any mathematical explanation for it.

We now consider the corrections in N to the leading behavior, which are again consistent with the general CFT scaling O(N^{−1/α}). On the basis of the numerics, we guess exactly the first correction to the scaling, and we can write the entanglement entropies in the form of Eq. (75). In Fig. 9 (right panel) we show the evidence for this scaling for the half-system entanglement (ℓ = L/2). We report the rescaled deviation of Eq. (76), which for α = 1, 2, 3 approaches at large N the value predicted by the ansatz (75),

r_α = \frac{2}{1-\alpha}\,\frac{\Gamma(1/2 + 1/(2\alpha))}{\Gamma(1/2 − 1/(2\alpha))}.

For α = 1 the leading corrections are of order 1/N. The figure shows that the absolute values for even and odd N coincide (while the corrections have opposite signs). This confirms that parametrizing the leading term with (2N − 1) in Eq. (74) cancels the 1/N non-oscillating corrections completely. Some of the factors (2N − 1) in Eq. (75) are subleading and are not explicitly tested by the numerical data presented.
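The Neumann matrix written above is equally easy to code; the zero-mode row and column spoil the Toeplitz+Hankel structure for the analytics but are no obstacle numerically. The entries below follow our reconstruction of the garbled expression (constant mode 1/√L plus cosine modes), with the plus sign noted in the text.

```python
import numpy as np

def overlap_matrix_neumann(N, x):
    """Overlap matrix for Neumann BC, modes n = 0..N-1 with
    phi_0 = 1/sqrt(L) and phi_n = sqrt(2/L) cos(n*pi*u/L):
    A_00 = x, A_0m = sqrt(2) sin(pi m x)/(pi m), and for n, m >= 1
    A_nm = sin(pi(n-m)x)/(pi(n-m)) + sin(pi(n+m)x)/(pi(n+m))."""
    A = np.empty((N, N))
    A[0, 0] = x
    m = np.arange(1, N)
    A[0, 1:] = A[1:, 0] = np.sqrt(2) * np.sin(np.pi * m * x) / (np.pi * m)
    d = m[:, None] - m[None, :]
    s = m[:, None] + m[None, :]
    denom = np.pi * np.where(d == 0, 1, d)
    A[1:, 1:] = (np.where(d == 0, x, np.sin(np.pi * d * x) / denom)
                 + np.sin(np.pi * s * x) / (np.pi * s))
    return A

# reuses renyi_entropy() from the first sketch; compare with Eq. (74)
print(renyi_entropy(overlap_matrix_neumann(100, 0.5), alpha=2))
```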
These factors have been introduced from the analogy with the Dirichlet BC results. Finally, we stress that we do not have any mathematical basis to justify Eq. (75): while its general structure can be inferred from CFT [37], because fermionic Neumann BC are in the same universality class as Dirichlet BC, the non-universal amplitude of the correction has been guessed by exploiting the analogy with the Dirichlet results and tested against the numerical data. We also considered numerically other situations, such as other bipartitions of the system. However, none of these results presents particularly relevant or unexpected features worth mentioning here.

Conclusions. In this manuscript we report the details of the computation of the entanglement entropies of continuous systems (gases), which was anticipated in the short communication [19]. The most important ingredient needed to write down the entanglement entropies in terms of finite determinants is the use of the reduced overlap matrix in Eq. (23). The calculation of the entanglement entropies is then mapped onto the solution of an eigenvalue problem for an N × N matrix, with N the number of particles of the gas. For the ground state of a periodic system we obtain the leading behavior in the form of Eq. (31), while for a gas with Dirichlet or Neumann boundary conditions we find the behaviors of Eqs. (65) and (74), both in agreement with CFT and scaling expectations; here, however, they have been obtained from first principles. We also derive the corresponding leading behavior for some classes of excited states. Furthermore, adapting the results of Refs. [21,27] to the problem at hand, we also calculate subleading corrections. The universality of these formulas allowed us to infer the finite-size scaling forms for spin chains, which are reported in Eq. (40) for 1 < α < ∞, in Eq. (42) for α = ∞, and in Eq. (49) for α = 1. The determination of these exact formulas had been left as an open problem by previous investigations.

Some other applications of this novel method (such as to systems with defects, star graphs, and gases confined by an external potential, both in and out of equilibrium) have already been presented briefly in Ref. [19], but they will be detailed elsewhere. Other generalizations, which we are currently investigating, concern the calculation of the entanglement for quadratic Hamiltonians that do not conserve the fermion number (such as the continuum limit of the XY model), free gases in higher dimensions, and different geometries. Finally, some non-equilibrium situations, such as local quantum quenches (e.g., instantaneously turning a defect on or off), can also be tackled within this framework. The asymptotic CFT results are known in several circumstances [51], but analytic calculations for specific models are still missing. They may provide important insights in view of the recent proposals of using the full counting statistics after a quench as an experimental probe and a measure of entanglement [52].

Appendix A. The constant E_α is given by Eq. (11), in accordance with the result for the single interval in a periodic system. Assuming the scaling hypothesis when working with a finite number of particles, Eq. (69) follows simply by replacing L/π with 2N, using the argument of Section 1.1.
Prestress Force Identification for Externally Prestressed Concrete Beam Based on Frequency Equation and Measured Frequencies

A prestress force identification method for externally prestressed concrete uniform beams, based on the frequency equation and the measured frequencies, is developed. For the sake of identification accuracy, we first look for an appropriate method to solve the free vibration equation of an externally prestressed concrete beam and then combine the measured frequencies with the frequency equation to identify the prestress force. To obtain the exact solution of the free vibration equation of a multispan externally prestressed concrete beam, an analytical model of the externally prestressed concrete beam is set up based on the Bernoulli-Euler beam theory, and the functional relation between the prestress variation and the vibration displacement is built. The multispan externally prestressed concrete beam is taken as multiple single-span beams which must meet the bending moment and rotation angle boundary conditions; the free vibration equation is solved using the sublevel simultaneous method, and the semi-analytical solution of the free vibration equation, which considers the influence of prestress on section rigidity and beam length, is obtained. Taking a simply supported concrete beam and a two-span concrete beam with external tendons as examples, frequency function curves are obtained by substituting the measured frequencies into the frequency equation, and the prestress force can be identified using the abscissa of the cross point of the frequency functions. The identified value of the prestress force is in good agreement with the test results. The method can accurately identify the prestress force of externally prestressed concrete beams and trace the trend of the effective prestress force.

Introduction

Externally prestressed concrete structures are broadly applied in highway bridges, urban bridges, and railway bridges with the development of external prestressing technology. In the design and construction process of an externally prestressed concrete bridge, the prestress force is often determined according to theoretical formulas [1]. But in the actual construction process, many factors such as relaxation of the steel, shrinkage and creep of the concrete, and ambient temperature can lead to changes of the prestress force, and the prestress force can change markedly when the concrete beam develops cracks or failure. Therefore, in order to effectively control the operating state and the bearing capacity of bridges, it is very important to identify the prestress force of externally prestressed concrete bridges. An existing method with good accuracy is to install force sensors in the prestressed concrete beam to monitor the change of the prestress force. The disadvantage of this approach is that the sensors are expensive and their accuracy decreases with increasing age in service. Above all, it is necessary to find a simple and effective method to identify the prestress force. In recent years, scholars have done a lot of research on the identification of prestress force and obtained some results.
Lu and Law [2] presented a method for the identification of the prestress force of a prestressed concrete bridge deck using the measured structural dynamic responses; the prestress force is identified using a sensitivity-based finite element model updating method in the inverse analysis. Law and Lu [3] also studied the time-domain response of a prestressed Euler-Bernoulli beam under external excitation based on modal superposition, with the prestress force identified in the time domain by a system identification approach. Li et al. [4] carried out numerical simulations to identify the magnitude of the prestress force in a highway bridge by making use of the dynamic responses from moving vehicular loads, based on dynamic response sensitivity-based finite element model updating. Law et al. [5] developed a new method of prestress identification using a wavelet-based method in which the approximation of the measured response is used to form the identification equation. Bu and Wang [6] presented a BP neural network method to identify the effective prestress for a simply supported PRC beam bridge based on modal frequencies and dynamic responses of the bridge. Abraham et al. [7] investigated the feasibility of using a damage location algorithm technique for detecting loss of prestress in a prestressed concrete bridge. Kim et al. [8] studied a vibration-based method to detect prestress loss in beam-type PSC bridges by monitoring changes in a few natural frequencies. Xuan et al. [9] evaluated the prestress loss quantitatively in steel-strand reinforced structures by an optical fiber-sensor-based monitoring technique. However, the prestress force and prestress loss cannot be estimated directly, simply, and accurately unless the beam has been instrumented at the time of construction. Several researchers have also studied the dynamic behavior of prestressed beams with external tendons and predicted the relation between the modal frequency and the given prestress force. Miyamoto et al. [10] studied the effect of the prestressing force introduced by the external tendons on the vibration characteristics of a composite girder with the results of dynamic tests and derived a formula for calculating the natural frequency of a composite girder based on a vibration equation. Hamed and Frostig [11] presented the effect of the magnitude of the prestressing force on the natural frequencies of prestressed beams with bonded and unbonded tendons. Saiidi et al. [12,13] reported a study on the modal frequency due to the prestress force with laboratory test results. The above researchers only considered the prestressing effect on the dynamic characteristics of simply supported beams. Very few works have been presented on the effect of prestressing on the dynamic responses of a beam and on the identification of the prestress force, directly or indirectly.

The exact solution of the free vibration equation of a multispan externally prestressed concrete uniform beam is obtained in this paper. An inverse problem to identify the prestress force based on the frequency equation and the measured frequencies is then presented, taking the prestress force as an unknown parameter in the frequency functions. The prestress force identification method is suited to externally prestressed concrete uniform beams. Firstly, based on Miyamoto et al.'s
study [10], the functional relation between the prestress variation and the vibration displacement of the multispan externally prestressed concrete beam is built according to the basic principle of the force method. The multispan externally prestressed concrete beam is considered as multiple single-span beams which must meet the bending moment and rotation angle boundary conditions. The free vibration equations of the multispan externally prestressed concrete beam are given by using the sublevel simultaneous method, which simplifies the solution of the dynamic equations, and the semi-analytical solution of the free vibration equations, which considers the influence of prestress on section rigidity and beam length, is obtained. Then, the frequency functions obtained from the frequency equation are used to identify the prestress force by an appropriate method. Two dynamic tests of externally prestressed concrete beams in the laboratory are presented to illustrate the effectiveness and robustness of the proposed method. At last, the effect of the error of the measured frequencies on the identification of the prestress force is studied for the proposed method.

2.1. Vibration Equation of Externally Prestressed Simply Supported Beam. An externally prestressed simply supported beam is shown in Figure 1. It is assumed that the prestress force has no loss along the beam length and that the bending of the beam meets the plane section assumption. The vibration equation of this simply supported beam is given in Eq. (1), where EI is the flexural rigidity of the beam, m is the mass of the beam per unit length, y(x, t) is the transverse deflection, N is the horizontal component of the prestress force T, e is the equivalent eccentricity of the external tendons, and ΔT is the variation of the prestress force due to flexural vibration. Because the eccentricity of the external tendons is not the same at different positions along the beam, the equivalent eccentricity can be calculated according to the principle of equal areas in the bending moment diagram.

2.2. Vibration Equation of Multispan Externally Prestressed Beam. A multispan externally prestressed continuous beam with n spans is shown in Figure 2, and the i-th span of the beam is taken as the study subject. The rotation angle and bending moment of the beam end at point i are θ_{i,i+1} and M_{i,i+1}, and the rotation angle and bending moment of the beam end at point i + 1 are θ_{i+1,i} and M_{i+1,i}, respectively. According to Eq. (1), the free vibration equation of the i-th span of the beam can be written as Eq. (2), where y_i(x, t) is the transverse deflection of the i-th span, e_i is the equivalent eccentricity of the external tendons of the i-th span, ΔT_i is the variation of the prestress force due to flexural vibration of the i-th span, and N_i is the horizontal component of the prestress force of the i-th span.

The rotation angle and bending moment at both ends of the i-th span of the beam need to satisfy the boundary conditions of Eq. (3). The first and the last span of the multispan externally prestressed concrete beam must meet the boundary conditions of Eq. (4). Obviously, the free vibration equation of the multispan externally prestressed concrete beam can be considered as the free vibration equations of multiple single-span externally prestressed beams which must satisfy the rotation angle and bending moment boundary conditions, as shown in Eqs. (3) and (4). In order to solve the vibration equations, the relation between the prestress variation ΔT and the vibration displacement y(x, t) should be defined first.

2.3. Relations between Prestress Variation and Vibration Displacement.
The prestress force changes with the vibration displacement during the free vibration of the multispan externally prestressed concrete beam. The free vibration of the beam is considered under the small-deformation condition, so the relation between the prestress variation ΔT and the vibration displacement y(x, t) can approximately be seen as a linear relationship in the geometric deformation [10,14]. Assume that there is a concentrated force P at the midspan of the beam: we first obtain the relation between the prestress variation ΔT and the concentrated force P, then obtain the relation between the vibration displacement y(x, t) and the concentrated force P, and at last find the relationship between the prestress variation ΔT and the vibration displacement y(x, t) by variable replacement.

The side spans (i = 1, n) of the multispan externally prestressed concrete beam can be simplified approximately as the structure shown in Figure 3(a). A concentrated force P acts on the midspan of the side-span beam model; the prestress variation ΔT and the bending moment on the support are identified as the unknown forces X_1 and X_2, and the basic system can be generated after removing the redundant constraints. The bending moment diagrams with the unknown forces X_1 = 1 and X_2 = 1 and with the concentrated force P acting on the beam model are shown in Figure 3(a). The deformation compatibility equations can be written as Eq. (5), with the flexibility coefficients given in the accompanying expressions; Eq. (5) can then be rewritten as Eq. (6). The vertical displacement at the midspan of the side-span beam model can be expressed as Eq. (7), and substituting (6) into (7) gives Eq. (8).

When the concentrated force P acts on the midspan of the beam model, a vertical displacement is produced at the midspan, and the external tendons develop an internal force which produces the prestress variation ΔT. At the same time, this internal force leads to a vertical displacement Δf in the direction opposite to the displacement produced by P. The vertical displacement Δf can be written as Eq. (9), and the vertical displacement caused by the concentrated force P can be calculated as in Eq. (10). Substituting (8) and (9) into (10), we obtain Eq. (11), with the coefficient defined in Eq. (12).

The middle spans (2 ≤ i ≤ n − 1) of the multispan externally prestressed concrete beam can be simplified as the structure shown in Figure 3(b). A concentrated force P acts on the midspan of the middle-span beam model, and the unknown forces are X_1, X_2, and X_3. The deformation compatibility equations can be expressed as Eq. (13), whose flexibility coefficients and ΔT can be calculated by (5); Eq. (13) can be rewritten as Eq. (14), with the coefficients given in Eq. (15). The vertical displacement at the midspan of the middle-span model can be expressed as Eq. (16), and the vertical displacement Δf caused by the internal force can be written as Eq. (17). Substituting (16) and (17) into (10), the relationship between the prestress variation ΔT and the vibration displacement y(x, t) can again be expressed as in (11), but with the coefficient given by Eq. (18).

2.4. Equivalent Eccentricity e. The equivalent eccentricity e can be computed according to the principle that the areas of the bending moment diagrams are equal [10,14]. As shown in Figure 3(b), the bending moment of the middle span caused by the external tendons can be written as Eq. (19), and the area of the bending moment diagram is given by Eq. (20). Similarly, the equivalent eccentricity of the side span can be written as Eq. (22).

Substituting (11) into (2), we get Eq. (23). Equation (23) is the free vibration equation of the multispan externally prestressed concrete beam, and its section rigidity and beam length can be modified as follows.
Kim et al. [8] considered that the total rigidity of a prestressed beam is the sum of the flexural stiffness of the reinforced concrete beam and the flexural stiffness of the prestressed steel, and took the prestressed steel as a cable fixed at both ends of the beam. According to the principle that the natural frequency of the cable is equal to that of the beam, we can obtain Eq. (24), and the total rigidity of the prestressed beam can be written as Eq. (25), where L_i is the beam length of the i-th span and k is the modal order.

The prestress force on the cross section can be regarded as an axial force and a moment, and the beam length will change under the axial force [15]. The actual beam length of the i-th span can be written as Eq. (26). The section rigidity and beam length in Eq. (23) should be corrected according to Eqs. (25) and (26) before solving it.

3. Frequency Equation of Multispan Externally Prestressed Concrete Beam

3.1. To Solve the Vibration Equation. Xiong et al. [14,16] utilized the Dirac function to establish the vibration equation of an externally prestressed continuous beam, but this method is not suitable for solving the vibration equation of three-span and more-than-three-span externally prestressed continuous beams. This paper translates the vibration equation of the multispan externally prestressed concrete beam into the vibration equations of multiple single-span beams which must satisfy the rotation angle and bending moment conditions. According to Eq. (23), the vibration equation of the i-th single-span beam can be simplified as Eq. (27). For any mode of vibration, the lateral deflection y(x, t) may be written in the form [17]

y(x, t) = Φ(x) q(t), (28)

where Φ(x) is the modal deflection and q(t) is a harmonic function of time t. Substitution of (28) into (27) then yields the fourth-order constant-coefficient differential equation (29). Assuming a solution of (29) of the form Φ(x) = e^{λx} and substituting it into (29), we get the characteristic equation (30). The general solution of (29) can then be written as Eq. (32), where A, B, C, and D are constants which can be obtained from the rotation angle and bending moment boundary conditions.

3.2. To Solve the Modal Equation. As shown in Figure 2, the displacement and bending moment at the ends of the i-th single-span beam can be written in terms of the end rotations and moments. Using (32) and its second partial derivative, the constants A, B, C, and D can be obtained, and taking the values of the constants back into (32), the modal functions can be derived. According to (32), the rotation angle equation of the i-th single-span beam can then be written down as well, with coefficients involving combinations of cosine and hyperbolic-cosine terms.

3.3. To Solve the Frequency Equation. For the i-th support, which is shown in Figure 2, the equality θ_{i,i+1} = θ_{i,i−1} always holds, and the angles on both sides of the i-th support can be rewritten as Eq. (36). Since the angles on both sides of the i-th support must be equal (θ_{i,i−1} = θ_{i,i+1}, 2 ≤ i ≤ n), we obtain Eq. (37). Equation (37) has n + 1 unknowns and n − 1 equations; casting (37) into matrix form gives Eq. (38), with the coefficient matrix defined in Eq. (39). The bending moments at the first and last span beam ends need to satisfy M_1 = 0 and M_{n+1} = 0, so Eq. (38) can be simplified to Eq. (40), with the reduced matrix given in Eq. (41). Equation (40) must have a nonzero solution according to the physical meaning of the formula, so the frequency equation of the n-span externally prestressed concrete beam can be written as Eq. (42). The roots obtained by solving Eq. (42) are the first several natural frequencies of the n-span externally prestressed concrete beam.
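Numerically, the roots of the frequency equation (42) are most easily located by scanning the determinant for sign changes and refining each bracket. The sketch below assumes a user-supplied callable freq_det(omega) that evaluates the determinant for the beam under a fixed prestress force, since the explicit matrix entries are not reproduced above.

```python
import numpy as np
from scipy.optimize import brentq

def natural_frequencies(freq_det, omega_max, n_roots=3, n_grid=20000):
    """First few roots of freq_det(omega) = 0 (the frequency equation (42))
    via a sign-change scan followed by Brent refinement."""
    grid = np.linspace(1e-6, omega_max, n_grid)
    vals = np.array([freq_det(w) for w in grid])
    roots = []
    for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
        roots.append(brentq(freq_det, grid[i], grid[i + 1]))
        if len(roots) == n_roots:
            break
    return roots
```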
Identification from the Frequency Equation and Measured Frequencies. In order to identify the prestress force according to the frequency equation (42), the frequency function f(T) is defined as in Eq. (43), where the prestress force T is the independent variable of the frequency function f(T). Substituting a measured frequency into the frequency function f(T) and setting the frequency function equal to zero (f(T) = 0), the prestress force can be obtained by solving the equation f(T) = 0. When the k-th order measured frequency is substituted, the frequency function can be rewritten as in Eq. (44). We can obtain frequency functions f_1(T), f_2(T), …, f_m(T) if there are m measured frequencies. The prestress force could be identified by looking for the zeros of the frequency functions if the measured frequencies were accurate enough. Actually, there are inevitable errors in the measured frequencies, and the true prestress force will appear near the zeros of the frequency functions. If we still identified the prestress force at the zero of a single frequency function, the errors in the prestress force could be larger. In order to obtain more accurate results, the finite-order measured frequencies are substituted into the frequency functions, and we get the relationship equations (45) for the true prestress force. The graphs of (45) must have an intersection, and the value of the independent variable at the intersection is the prestress force which needs to be identified. Concrete steps of the prestress force identification method are given with examples below.

Prestress Force Identification in a Single-Span Beam. A simply supported concrete beam with external tendons is studied. The length of the beam is 2.6 m, and the height and width of the section are 0.15 m and 0.12 m, respectively. The concrete grade is C35; there is an external tendon (7Φ5) within the beam, the cross-sectional area of the external tendon is 139 mm², and the eccentricity of the external tendon is 0.125 m. A schematic diagram of the single-span concrete beam with external tendons is shown in Figure 4.

The biggest tensile force of the external tendon is 120 kN, according to the principles that the tensile force cannot exceed 75% of the tensile strength and that the eccentrically compressed concrete beam cannot crack under the prestress force. The stretching device is a hydraulic jack with a pull-press sensor at the beam end. The tensile force in each tensioning stage is measured by the pull-press sensor. The external tendon was tensioned using the multilevel tensioning method, and the vibration signal of the beam, excited with a hammer, was collected by an acceleration sensor at each tensioning stage. Photos of the test are shown in Figure 5. When the test was completed, the acceleration signals were analyzed by digital signal processing methods including the FFT. The first two order frequencies of the single-span beam in each tensioning stage are obtained and shown in Table 1.
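The identification step itself reduces to finding where two of the frequency functions of Eq. (45) cross. A minimal sketch, assuming each f_i(T) is available as a callable and that the crossing is bracketed by [T_lo, T_hi]:

```python
from scipy.optimize import brentq

def identify_prestress(f1, f2, T_lo, T_hi):
    """Abscissa of the crossing point of two frequency functions f_i(T)
    (Eq. (45)), each built by plugging one measured frequency into the
    frequency equation; T is the unknown prestress force."""
    return brentq(lambda T: f1(T) - f2(T), T_lo, T_hi)
```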
The values of material parameters such as the elastic modulus and density cannot directly take the standard values because of manufacturing errors and material differences. The values of the material parameters must therefore be corrected before identifying the prestress force. On the basis of the state with the external tendon layout completed but not yet tensioned, the values of the material parameters are corrected by using the frequency equation (42) and the measured frequencies. After the correction, the first two order frequencies computed with the modified material parameters from the frequency equation (42) fit the measured frequencies, as shown in Table 2. Obviously, the corrected results for the first two order frequencies are in good agreement with the measured frequencies at the stage with the external tendon layout completed and no tensioning.

The frequency function f(T) of the simply supported concrete beam with a straight external tendon, according to (44), can be written as Eq. (46), where f_k is the k-th measured frequency of the test beam, e is the eccentricity of the external tendon, and r is the radius of gyration. The frequency function f(T) can be rewritten as f_1(T) and f_2(T) for k = 1 and k = 2 (to identify the prestress force using the 1st and 2nd measured frequencies). The abscissa of the intersection of the frequency functions f_1(T) and f_2(T) is the prestress force which needs to be identified. Graphs of the frequency functions f_1(T) and f_2(T) are shown in Figure 6, and the identified prestress force and its error are shown in Table 2. Figure 6 shows that the graphs of the frequency functions f_1(T) and f_2(T) do meet at one point in every tensioning state, and the intersection of the frequency functions f_1(T) and f_2(T) is close to the function zeros, which matches the theoretical analysis in Section 4.1. Table 2 shows that the identified prestress force is slightly larger than the true value, with a maximum error of 6.59%. This shows that the new method is feasible and can reflect the changing trend of the prestress force in the beam.

Prestress Force Identification in a Two-Span Beam. A two-span concrete beam with external tendons is studied. The height of the beam is 0.36 m, the width of the beam is 0.17 m, the concrete grade is C20, and the span lengths are 4.3 m + 4.3 m. There is an external tendon (7Φ5) within the beam; the cross-sectional area of the external tendon is 139 mm², and the biggest tensile force of the external tendon is 180 kN. The test method and procedure for the two-span beam are the same as for the single-span beam. A schematic diagram of the two-span concrete beam with external tendons is shown in Figure 7. The first three order frequencies of the two-span beam in each tensioning stage are obtained and shown in Table 3.

The frequency equation of the two-span concrete beam with external tendons, based on (38), can be obtained as Eq. (47), where Eq. (47) is not yet corrected by (25) and (26). The values of the material parameters of the two-span concrete beam with external tendons must be corrected based on the first three order measured frequencies after (47) is corrected by (25) and (26); the correction method is the same as for the single-span beam in Section 5.1. The corrected values of the material parameters are shown in Table 4.
The frequency function f(T) of the two-span concrete beam with external tendons can be written according to (44). The frequency function f(T) can be rewritten as f_1(T), f_2(T), and f_3(T) for k = 1, k = 2, and k = 3 (to identify the prestress force using the first three measured frequencies). The abscissa of the intersection of the frequency functions f_1(T), f_2(T), and f_3(T) is the identified prestress force. Because there are always errors in the measured frequencies, the graphs of the frequency functions f_1(T), f_2(T), and f_3(T) cannot meet exactly at one point. Actually, the graphs of any two frequency functions meet at one point, so the three frequency functions have three intersections. The prestress force of the two-span concrete beam is identified based on the first three measured frequencies, which gives higher accuracy than a result based on only the first two measured frequencies. The true prestress force will appear inside the triangle formed by the three intersections. According to the geometric relationship of the triangle, the identified prestress force can be obtained as the abscissa of the triangle's center of gravity. Graphs of the frequency functions f_1(T), f_2(T), and f_3(T) are shown in Figure 8, and the identified prestress force and error are shown in Table 4.

Figure 8 shows that the graphs of the frequency functions f_1(T), f_2(T), and f_3(T) have three intersections, and the frequency function values at the triangle's center of gravity are close to the function zeros, which matches the theoretical analysis above. (Note to Table 5: "0" stands for an error which does not appear in the given calculating condition, and "1" stands for an error which appears in the given calculating condition.) The identified prestress force appears near the function zeros. Table 4 shows that the identified prestress force of the two-span concrete beam with external tendons is also larger than the true value, with a maximum error of 6.89%. The identification method for the prestress force based on the frequency equation and the measured frequencies can effectively identify the prestress force.

Effect of Measured Frequency Errors. Structural dynamic responses can be collected using acceleration sensors in practical engineering, and the natural frequencies can be obtained from the acceleration data by spectral transformation. The low-order natural frequencies usually have higher precision than the high-order natural frequencies. If the test environment is relatively stable and the test beam does not show cracks or plastic deformation during the tensioning stages, the change of the measured frequencies reflects the effect of the prestress force on the dynamic characteristics of the structure. The prestress force can be effectively recognized based on the measured frequencies and the frequency functions of the externally prestressed concrete beam presented in this paper. Natural frequencies obtained by signal processing technology are relatively accurate, but there is a certain error between the measured results and the true values, caused by the test method and the data processing method. It is necessary to study how the identified prestress force is affected by the error of the measured frequencies.
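Before turning to the error analysis, note that the triangle-of-gravity step described above for the two-span beam is a one-liner on top of the pairwise crossing routine: each vertex of the triangle is a pairwise intersection, so the centroid abscissa is the mean of the three crossing abscissas. A sketch, assuming the three frequency functions f1, f2, f3 and a common bracket [T_lo, T_hi]:

```python
# reuses identify_prestress() from the sketch above
def identify_from_three(f1, f2, f3, T_lo, T_hi):
    """Centroid-abscissa estimate of the prestress force from the three
    pairwise intersections of the frequency functions (cf. Figure 8)."""
    pairs = [(f1, f2), (f1, f3), (f2, f3)]
    return sum(identify_prestress(a, b, T_lo, T_hi) for a, b in pairs) / 3.0
```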
Taking the single-span beam of Section 5.1 as an example, the first two order frequencies under different prestress forces (T = 60 kN, 90 kN, and 120 kN) can be obtained by Eq. (42); it is then assumed that the frequencies obtained by Eq. (42) carry a maximum error of ±3%. The frequencies and the errors can be combined into different calculating conditions. Frequencies with different errors in the different calculating conditions are plugged into the frequency functions (45). The graphs of the frequency functions with the first two order frequencies will have an intersection, and the identified prestress force can be obtained from the intersection, as shown in Section 5. The error analysis results under the different calculating conditions are shown in Table 5.

Table 5 shows that the identified prestress force differs considerably across the different frequency-and-error combinations within the same tensioning stage, which illustrates that the error of the natural frequencies has a significant effect on the accuracy of the prestress force identification: the more accurate the measured frequencies are, the higher the precision of the identified prestress force. Across different tensioning stages with the same frequency error, the smaller the prestress force is, the more significantly the identified results are affected by the frequency error, and the influence of the frequency error on the prestress force identification wanes with increasing tensioning force. The identified results are affected more significantly by the error of the higher-order frequency. Apparently, there exists a nonlinear relationship between the natural frequencies and the prestress force. Above all, when the test environment is relatively stable and the beam does not show cracks or plastic deformation during the tensioning stages, the natural frequencies can be obtained accurately using a proper test method and data processing method; the prestress force can then be identified based on the frequency equation and the measured frequencies, and the identification accuracy for the prestress force depends on the accuracy of the measured frequencies. In long-term bridge health monitoring, the dynamic response of the bridge can be collected by sensors, the influence of environmental factors and external excitation can be eliminated from the signals, and the measured frequencies can be obtained by spectral transformation. The prestress force of the bridge can then be identified based on the frequency equation and the measured frequencies, and the changing trend of the prestress force can be tracked.
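The calculating conditions of Table 5 can be mimicked by perturbing each nominal frequency by −3%, 0, or +3% and re-running the identification for every combination. The sketch below assumes a hypothetical builder make_freq_function(f_measured) that returns the frequency function of Eq. (44) for one measured frequency.

```python
import itertools

def sensitivity_scan(freqs, make_freq_function, T_lo, T_hi, eps=0.03):
    """Identified prestress force for every +/-3% error combination of the
    measured frequencies (cf. Table 5); reuses identify_prestress()."""
    results = []
    for signs in itertools.product((-eps, 0.0, eps), repeat=len(freqs)):
        fs = [make_freq_function(f * (1 + s)) for f, s in zip(freqs, signs)]
        results.append((signs, identify_prestress(fs[0], fs[1], T_lo, T_hi)))
    return results
```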
Conclusion. In this study, a new method to identify the prestress force in externally prestressed concrete beams based on the frequency equation and the measured frequencies is proposed. The effectiveness of the prestress force identification method is demonstrated by tests on a single-span externally prestressed concrete beam and a two-span externally prestressed concrete beam. Taking the single-span beam as an example, the influence of the error of the measured frequencies on the identified results is analyzed by numerical calculation. The free vibration equation of the multispan externally prestressed concrete beam is solved using the sublevel simultaneous method, with the multispan externally prestressed concrete beam taken as multiple single-span beams which must meet the bending moment and rotation angle boundary conditions. The functional relation between the prestress variation and the vibration displacement is built, and the formula for the equivalent eccentricity is presented. In long-term bridge health monitoring, the measured frequencies can be obtained by practical signal processing. The prestress force of the bridge can be identified based on the new identification method, and the changing trend of the prestress force can be tracked.

Figure 1: Analysis model of the vibration system.
Figure 2: Analysis model of the i-th span of the beam.
Figure 3: The analysis model and bending moment diagram.
Table 1: Measured frequencies and identified prestress force of the single-span beam. Error denotes (identified value − test value)/test value × 100%; the same applies in Table 3.
Table 2: The corrected result of material parameters and frequencies. Note: Error denotes (mode with corrected or uncorrected material parameter − test mode)/test mode × 100%; the same applies in Table 4.
Table 3: Measured frequencies and identified prestress force of the two-span beam.
Table 4: The corrected result of material parameters and frequencies.
Table 5: The error analysis results.
Significantly higher serum tumor marker levels in patients with oral submucous fibrosis

Background/purpose: Our previous study showed that carcinoembryonic antigen (CEA), squamous cell carcinoma antigen (SCC-Ag), and ferritin are significantly higher in patients with oral potentially malignant disorders (OPMDs, including oral leukoplakia, oral erythroleukoplakia, and oral verrucous hyperplasia) than in healthy controls (HCs). Oral submucous fibrosis (OSF) is also recognized as an OPMD. This study evaluated whether these three serum tumor marker levels were also significantly higher in OSF patients than in HCs.

Materials and methods: The serum CEA, SCC-Ag, and ferritin levels in 41 OSF patients and 164 HCs were measured and compared. Patients with serum CEA level ≥3 ng/mL, SCC-Ag level ≥2 ng/mL, and ferritin level ≥250 ng/mL were scored as serum positive for CEA, SCC-Ag, and ferritin, respectively.

Results: We found significantly higher mean serum CEA, SCC-Ag, and ferritin levels in the 41 OSF patients than in the 164 HCs (all P-values < 0.05). Moreover, the 41 OSF patients had significantly higher serum positive rates of CEA (39.0%), SCC-Ag (19.5%), and ferritin (53.7%) than the 164 HCs (all P-values < 0.05). Of the 41 OSF patients, 26 (63.4%), 7 (17.1%), and 2 (4.9%) had serum positivity for one, two, or three of the tumor markers CEA, SCC-Ag, and ferritin, respectively.

Conclusion: There are significantly higher mean serum CEA, SCC-Ag, and ferritin levels and significantly higher serum positive rates of CEA, SCC-Ag, and ferritin in OSF patients than in HCs. The serum CEA, SCC-Ag, and ferritin levels may serve as tumor markers for the evaluation of the malignant potential of OSF lesions.

Introduction

An estimated 354,864 new cases of lip and oral cavity cancers were reported worldwide in 2018, and 177,384 deaths from lip and oral cavity cancers occurred in the same year. By comparison, the global number of new cases of lip and oral cavity cancers emerging in 2012 was about 300,400, with 145,400 deaths from lip and oral cavity cancers in the same year. [1,2] In Taiwan, the data of the Ministry of Health and Welfare show that oral cancer was the fifth leading cause of cancer death in the total population and the fourth leading cause of cancer death in males in 2019. [3] The growing number of newly diagnosed oral cancers and the increasing number of deaths from oral cancers highlight the importance of early diagnosis and treatment of oral cancers.

Oral squamous cell carcinoma (OSCC) is the most common oral cancer. [4] Oral leukoplakia (OL), oral erythroleukoplakia (OEL), oral verrucous hyperplasia (OVH), and oral submucous fibrosis (OSF) are considered to be common oral potentially malignant disorders (OPMDs). [5-10] The malignant transformation rates of OPMDs are reported to be 1-7% for homogeneous thick OL, 4-15% for granular or verruciform OL, 18-47% (mean 28%) for OEL, 3-17% for OVH, and 7-13% for OSF lesions. [6-9] The high malignant transformation rates of OPMDs also indicate the importance of early diagnosis and treatment of OPMDs.

OSF is a chronic inflammatory disease with deposition of excessive collagen in the subepithelial connective tissue or superficial muscle layer. [10] The onset of OSF is relatively insidious and often lasts for several years. Although OSF patients may complain of stiffness of the oral mucosa, difficulty in tongue movements, and limitation of mouth opening in the later phase, the initial symptoms of OSF include a burning sensation of the oral mucosa and sensitivity to spicy and irritating foods.
Several treatment modalities for OSF have been proposed, including surgical intervention, medical treatments, and physiotherapy, but an effective treatment for OSF is still lacking. [11-15] Thus, early diagnosis of OSF is pivotal for the prevention of its progression.

Biomarkers are substances that can be detected or measured in certain biological or pathogenic processes. [16,17] Many biomarkers have been recognized in human cancers, and some of them can even be applied as tools to detect cancers or to predict treatment outcomes; for instance, prostate-specific antigen (PSA) for screening prostate cancers, [18] CA15-3 for monitoring breast cancers, [19] and carcinoembryonic antigen (CEA) for detecting colorectal adenocarcinomas. [20] In recent years, more and more biomarkers have been shown to be related to OSF. [21-30] The biomarkers involved in OSF can be further categorized into solid tissue markers, serum markers, and saliva markers.

Tissue markers associated with OSF include proliferating cell nuclear antigen (PCNA), Ki67, hypoxia-inducible factor 1-alpha (HIF-1a), E-cadherin, Shh, Gli-1, CD1a, CD207, bax, and p53, which are involved in cell proliferation, hypoxia, epithelial-mesenchymal transition, immunity, tumor suppressor genes, cell apoptosis, and angiogenesis. [21-25] Serum markers consist of beta-carotene, copper, malondialdehyde (MDA), and lactate dehydrogenase (LDH). Moreover, LDH can be detected in the saliva of OSF patients as well. These markers are involved in antioxidant activity, oxidative stress, and cell metabolism, respectively. [26-30] The number of serum and saliva markers is lower than the number of solid tissue markers. However, for clinical use, the acquisition of sera or saliva for marker detection is relatively convenient and well accepted by patients.

Squamous cell carcinoma antigen (SCC-Ag) is a tumor-associated protein found to be associated with uterine cervical carcinomas. [31,32] Elevated serum SCC-Ag levels and high serum SCC-Ag-positive rates in OSCC patients have been reported in several studies. [33-36] Also, the serum SCC-Ag level has been shown to have the potential to detect recurrence and metastasis during the follow-up period in post-operative SCC patients. [31-36] A more recent study has demonstrated that the serum SCC-Ag, ferritin, and CEA levels in OSCC patients are significantly increased when compared with those in patients with benign oral tumors or healthy control subjects. [37] Our recent study assessed the serum CEA, SCC-Ag, and ferritin levels in OL, OEL, and OVH patients and found significantly higher mean serum CEA, SCC-Ag, and ferritin levels and significantly higher serum positive rates of CEA, SCC-Ag, and ferritin in these OL, OEL, and OVH patients than in healthy control subjects. [38] Thus, the serum CEA, SCC-Ag, and ferritin levels are tumor markers for the evaluation of the malignant potential of OL, OEL, and OVH lesions. [38]

Oral risk habits, including betel quid chewing, cigarette smoking, and alcohol consumption, are involved in the multistep progression of OPMDs in Taiwan. [39] The betel quid chewing habit is a major risk factor for OSF, and the concurrent use of cigarettes and alcohol has synergistic effects on the malignant transformation of OSF.
Taken together, the main purpose of this study was to assess whether OSF patients have significantly higher mean serum CEA, SCC-Ag, and ferritin levels and significantly higher serum positive rates of CEA, SCC-Ag, and ferritin than healthy control subjects. In addition, the relations between these serum markers and oral risk habits in OSF patients were further evaluated. Study participants The study subjects consisted of 41 OSF patients (39 men and 2 women; age range, 23-69 years; mean, 43.9 ± 11.9 years). All the OSF patients were seen consecutively, diagnosed, and treated in the Department of Dentistry, Far Eastern Memorial Hospital, New Taipei City, Taiwan from 2019 to 2020. This study was reviewed and approved by the Institutional Review Board at the Far Eastern Memorial Hospital (FEMH No.: 107116-E). The diagnosis of OSF was made when patients exhibited characteristic manifestations of OSF, including paleness of the oral mucosa, prominent fibrous bands or stiffness of the buccal mucosa, and limitation of mouth opening. 42 Because OSF is a clinically apparent disease, biopsy is usually not performed for characteristic cases of OSF. However, when OPMD lesions such as OL, OEL, or OVH occurred in OSF patients, biopsy was often performed to rule out epithelial dysplasia and oral malignancy. Three OPMD lesions in three OSF patients were biopsied in this study, and the histopathological results showed epithelial hyperplasia in two lesions and severe dysplasia in one lesion. The exclusion criteria included patients with autoimmune diseases (such as systemic lupus erythematosus, rheumatoid arthritis, Sjögren's syndrome, pemphigus vulgaris, and cicatricial pemphigoid), other inflammatory diseases, malignancy, serum creatinine concentrations indicative of renal dysfunction (men, >131 μM; women, >115 μM), and a past medical history of stroke, heavy alcohol use, or diseases of the liver, kidney, or coronary arteries. For each OSF patient, four age- (within 3 years of the age of the OSF patient) and sex-matched healthy control subjects were recruited from dental patients with dental caries, pulpal disease, malocclusion, or missing teeth but without any oral mucosal or systemic diseases. Thus, a total of 164 healthy control subjects were included in this study. In addition, none of the participating OSF patients or healthy control subjects had taken any prescription medication for their oral diseases for at least 3 months before entering the study. 43,44 Patients' oral habits, including the details of daily/weekly consumption of betel quid, cigarettes, and alcohol as well as the duration of these habits, were recorded. OSF patients were defined as betel quid chewers when they chewed 2 or more betel quids daily for at least one year, as cigarette smokers when they smoked every day for at least one year and consumed more than 50 packs of cigarettes per year, and as alcohol drinkers when they drank on more than three days and consumed more than 20 g of pure alcohol per week for at least one year. 43,44 By these definitions, all 41 OSF patients were betel quid chewers, but the data on cigarette smoking habit were available for 38 OSF patients and the data on alcohol drinking habit were available for 30 OSF patients. We further divided our OSF patients into current chewers and ex-chewers.
In addition, the chewers or smokers were further stratified into different groups by their daily consumption of betel quids or cigarettes and the duration of these chewing or smoking habits. 43-45 Determination of serum CEA, SCC-Ag, and ferritin concentrations After obtaining the signed consent forms, blood samples were drawn from the 41 OSF patients and 164 healthy control subjects for measurement of serum CEA, SCC-Ag, and ferritin concentrations. The serum CEA, SCC-Ag, and ferritin concentrations were determined by the routine tests performed in the Department of Laboratory Medicine, Far Eastern Memorial Hospital. Based on previous related studies, patients with serum CEA level ≥3 ng/mL, SCC-Ag level ≥2 ng/mL, and ferritin level ≥250 ng/mL were scored as positive for serum CEA, SCC-Ag, and ferritin, respectively. 33-36,46-52 Statistical analysis The mean serum levels of CEA, SCC-Ag, and ferritin were compared between 41 OSF patients and 164 healthy control subjects by the Student's t-test. The serum positive rates of CEA, SCC-Ag, and ferritin between 41 OSF patients and 164 healthy control subjects were compared by the chi-square or Fisher's exact test, where appropriate. Pearson correlation was used to test whether there were significant correlations between any two of these three markers in OSF patients. The mean serum CEA, SCC-Ag, and ferritin levels between two groups of 41 OSF patients with or without alcohol drinking or cigarette smoking habit, as well as between two groups of chewers or smokers consuming different amounts or durations of betel quids or cigarettes, respectively, were compared by the Student's t-test. The result was considered significant if the P-value was less than 0.05. Results The mean serum levels of CEA, SCC-Ag, and ferritin in 41 OSF patients and in 164 healthy control subjects are shown in Table 1. The mean serum CEA, SCC-Ag, and ferritin levels were 3.0 ± 1.7 ng/mL, 1.4 ± 1.2 ng/mL, and 282.6 ± 159.8 ng/mL in 41 OSF patients and 1.4 ± 0.7 ng/mL, 0.9 ± 0.3 ng/mL, and 59.9 ± 72.7 ng/mL in 164 healthy control subjects, respectively. There were significantly higher mean serum CEA, SCC-Ag, and ferritin levels in 41 OSF patients than in 164 healthy control subjects (all P-values < 0.05, Table 1). We also found that some OSF patients showed serum positivity for one, two, or three of the tumor markers CEA, SCC-Ag, and ferritin. Of the 41 OSF patients, 26 (63.4%), 7 (17.1%), and 2 (4.9%) had serum positivities of one, two, or three tumor markers including CEA, SCC-Ag, and ferritin, respectively (Table 3). Besides, 6 OSF patients had no serum positivity for CEA, SCC-Ag, or ferritin (Table 3). Moreover, Pearson correlation analysis demonstrated no significant correlations between any two of these markers in OSF patients (data not shown). We further investigated whether the oral risk habits might influence the serum tumor marker levels in the 41 OSF patients. In this study, all 41 OSF patients were betel quid chewers, including 19 current chewers and 22 ex-chewers. The data on cigarette smoking habit were available for 38 OSF patients, including 30 smokers and 8 non-smokers. The data on alcohol drinking habit were available for 30 OSF patients, including 17 drinkers and 13 non-drinkers.
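The group comparisons described in the Statistical analysis section above can be illustrated with a minimal Python sketch. The values below are simulated from the reported means and standard deviations purely for illustration; they are not the patient data, and only the positivity cutoffs are taken from the text.

```python
# Minimal sketch of the reported comparisons: Student's t-test on mean
# serum levels and Fisher's exact test on serum positive rates
# (41 OSF patients vs. 164 healthy controls). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ferritin_osf = rng.normal(282.6, 159.8, 41)   # illustrative values only
ferritin_hc = rng.normal(59.9, 72.7, 164)

t, p = stats.ttest_ind(ferritin_osf, ferritin_hc)
print(f"t-test: t = {t:.2f}, P = {p:.3g}")

# Serum positivity for ferritin uses the >=250 ng/mL cutoff defined above.
pos_osf = int((ferritin_osf >= 250).sum())
pos_hc = int((ferritin_hc >= 250).sum())
table = [[pos_osf, 41 - pos_osf], [pos_hc, 164 - pos_hc]]
odds_ratio, p = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, P = {p:.3g}")
```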
Comparisons of mean serum CEA, SCC-Ag, and ferritin levels between two groups of OSF patients with or without alcohol drinking or cigarette smoking habit, as well as between two groups of chewers or smokers consuming different amounts or durations of betel quids or cigarettes, respectively, are shown in Table 4. In general, there were no significant correlations between alcohol drinking, betel quid chewing, or cigarette smoking habit and the serum tumor marker levels in the two different groups of OSF patients (Table 4). However, the mean serum SCC-Ag level was found to be higher in betel quid chewers consuming more than 20 quids per day than in those consuming less than or equal to 20 quids per day (marginally significant, P = 0.09) (Table 4). Discussion In the past decade, various biomarkers have been discovered to be associated with OSF. These biomarkers may be involved in certain physiological processes or pathological alterations. Hosthor et al. found that the serum copper level is significantly increased in OSF and OSCC patients compared with that in healthy control subjects. 26 Besides, both OSF and OSCC patients have the betel quid chewing habit, while the healthy control subjects do not. Because copper ions may damage proteins, RNA or DNA by generating superoxide radicals that can initiate the malignant transformation process, they concluded that serum copper ions and the betel quid ingredients may be associated with the pathogenesis and progression of OSF and OSCC. 26 Beta-carotene can reduce free radical damage and probably hamper the development of malignancy. Rathod et al. discovered that 40 OSF patients had decreased serum beta-carotene levels compared with 40 healthy control subjects. 27 Moreover, the lowest serum beta-carotene levels were found in the most severe OSF patients (with mouth opening < 10 mm), suggesting a possible protective function of beta-carotene in OSF patients. 27 DNA adducts with malondialdehyde (MDA) detected in oral mucosal cells have been regarded as a marker of oral cancer risk. Paulose et al. showed higher serum MDA levels and more DNA damage in 30 OSF patients than in 30 healthy control subjects. 28 Hence, MDA, an oxidative biomarker, together with comet assay analysis to detect DNA damage, may be of diagnostic value in identifying OSF patients at high risk of malignant transformation. 28 Lactate dehydrogenase (LDH) is a cytoplasmic enzyme in human cells, and its increase in the serum or saliva may reflect the alteration from normal tissue to premalignant lesions or even to oral cancers via glycolysis. Mishra et al. demonstrated significantly higher serum and salivary LDH levels in OSF patients than in age-matched healthy control subjects. 29 The serum LDH level was further shown to be correlated with the frequency of the areca nut chewing habit and mouth opening in OSF patients, whereas the salivary LDH level was not. Therefore, the serum LDH level may be a good biological marker for evaluation of the malignant potential of OSF lesions. 29 In this study, we found significantly higher mean serum CEA, SCC-Ag, and ferritin levels as well as significantly higher serum positive rates of CEA (39.0%), SCC-Ag (19.5%), and ferritin (53.7%) in 41 OSF patients than in 164 healthy control subjects. These findings indicate that the serum CEA, SCC-Ag, and ferritin levels may serve as important biomarkers for evaluation of the malignant potential of OSF lesions.
SCC-Ag is a tumor-associated protein that is involved in the pathogenesis and progression of several human SCCs by inhibiting cell apoptosis and promoting tumor cell migration. 53 Although it was first recognized in cervical SCC, several studies have demonstrated higher serum SCC-Ag levels and serum SCC-Ag positive rates in OSCC patients. 36,46-48,54 Yoshimura et al. reported a positive correlation between serum SCC-Ag level and clinical stage, lymph node status or recurrence in oromaxillary cancer patients. 36 Yoshida et al. also proved the close relations between serum SCC-Ag level and tumor size, tumor site, clinical stage or recurrence in OSCC patients. 47 Similarly, Lin et al. found elevated serum SCC-Ag levels in OSCC patients and their significant associations with tumor status and lymph node status. Besides, serum SCC-Ag positivity was also found to be associated with prognostic parameters, including disease-free survival, overall survival, and distant metastasis. 48 Thus, the significantly higher serum SCC-Ag levels in OSF patients than in healthy controls in this study may suggest an increased malignant transformation potential in OSF patients. However, Travassos et al. performed a meta-analysis of 1901 head and neck SCC cases and discovered that an elevated serum SCC-Ag level is significantly correlated with male sex and advanced TNM stage, but is not significantly associated with overall survival or disease-free survival. 52 (Table 4 caption: Comparisons of mean serum CEA, SCC-Ag, and ferritin levels, expressed as mean ± standard deviation in ng/mL, between two groups of OSF patients with or without alcohol drinking or cigarette smoking habit, as well as between two groups of chewers or smokers consuming different amounts or durations of betel quids or cigarettes, respectively.) Ferritin is an iron storage protein, and elevation of serum ferritin has been found in patients with different cancers. It is generally considered that its elevation in cancer patients is most likely induced by an inflammatory status. Besides, ferritin can provide abundant iron for DNA synthesis, which is necessary for maintaining the high proliferation potential of cancer cells. Extracellular ferritin may also cause immunosuppressive effects on immune cells, making cancer immunotherapy more difficult. 55,56 Stevens et al. showed that an increase of ferritin and a decrease of transferrin levels may be utilized to predict the presence of hepatocellular carcinoma. 57 Baharvand et al. found a significantly higher serum ferritin level in 60 oral cancer patients than in healthy control subjects. 51 Therefore, the significantly higher serum ferritin levels in OSF patients than in healthy controls indicate that the OSF oral mucosa is under a higher inflammatory status and also has a relatively higher malignant potential. CEA is a cell surface glycoprotein that functions as an intercellular adhesion molecule. Dysregulation of CEA inhibits terminal differentiation of epithelial cells and anoikis; in turn, the cells maintain a proliferative potential, consequently resulting in tumorigenesis. 58-60 Elevation of serum CEA has been reported to show a close clinical association with colorectal cancers. 20,61
Because high serum CEA has also been reported in OSCC patients, the significantly higher serum CEA levels in OSF patients than in healthy controls may also suggest an augmented malignant potential of OSF lesions. 33,34,37 Combinations of SCC-Ag and other tumor markers can be applied together for detection of OSCC or head and neck SCC and for prediction of T status, N status, clinical or pathological stage, recurrence, and/or overall or disease-free survival in OSCC or head and neck SCC patients. Kurokawa et al. have shown significantly higher serum positive rates of CEA, SCC-Ag, and immunosuppressive acidic protein (IAP) in OSCC patients than in control patients. 33,34 Kimura et al. also assessed the serum SCC-Ag, CEA, and ferritin levels in a group of head and neck SCC patients. 50 They discovered that serum SCC-Ag levels are correlated with tumor size, lymph node metastasis, clinical stage, and survival rates. However, there were no significant associations between the serum CEA or ferritin levels and the clinicopathological parameters of head and neck SCC patients in their study. 50 Most previous studies assessed the serum SCC-Ag, CEA, and ferritin levels in OSCC and head and neck SCC patients, and only a few studies measured the serum SCC-Ag, CEA, and ferritin levels in OPMD patients. Thakur and Guttikonda showed decreased blood hemoglobin and serum iron and ferritin levels in OSF patients compared with normal control subjects. 30 There was also an inverse correlation between these markers and the severity of OSF, suggesting that blood hemoglobin and serum iron and ferritin levels are reliable biomarkers in OSF patients. 30 Our previous studies assessed the CEA, SCC-Ag, and ferritin levels in OL, OEL and OVH patients. Significantly higher mean serum CEA, SCC-Ag, and ferritin levels as well as significantly higher serum positive rates of CEA, SCC-Ag, and ferritin were found in OL, OEL and OVH patients than in healthy control subjects. 38 Therefore, based on the results of the above-mentioned studies, the serum CEA, SCC-Ag, and ferritin levels seem to be potential markers for OPMDs including OL, OEL, OVH, and OSF, and may serve as clinical tools to evaluate the malignant potential of these OPMD lesions. With regard to the relations between oral risk habits and serum CEA, SCC-Ag, and ferritin levels, in general we found no significant associations of alcohol drinking, betel quid chewing, and cigarette smoking habits with the serum CEA, SCC-Ag, and ferritin levels in OSF patients, except for a marginally significantly higher mean serum SCC-Ag level in betel quid chewers consuming more than 20 quids per day than in those consuming less than or equal to 20 quids per day (Table 4). Furthermore, although there was no significant difference, the mean serum CEA, SCC-Ag and ferritin levels were higher in smokers than in non-smokers (Table 4); these results were similar to those reported previously in our OL, OEL, and OVH patients. 38 Because the sample size of this study was relatively small, further studies with a larger sample of OSF patients are needed to clarify the exact relations between the serum CEA, SCC-Ag and ferritin levels in OSF patients and oral risk habits such as betel quid chewing, cigarette smoking, and alcohol drinking. We conclude that there are significantly higher mean serum CEA, SCC-Ag, and ferritin levels and significantly higher serum positive rates of CEA, SCC-Ag, and ferritin in OSF patients than in healthy control subjects.
The serum SCC-Ag, CEA, and ferritin levels may serve as tumor markers for evaluation of the malignant potential of OSF lesions. Declaration of competing interest The authors have no conflicts of interest relevant to this article.
Transcatheter Arterial Chemoembolization for Intermediate-Stage Hepatocellular Carcinoma: Clinical Outcome and Safety in Elderly Patients Aim: The aim of our study was to compare clinical outcomes between elderly patients aged ≥75 years (elderly group, n=66) with intermediate hepatocellular carcinoma (HCC) undergoing transcatheter arterial chemoembolization (TACE) and younger patients aged <75 years (control group, n=84) with intermediate HCC undergoing TACE. Methods: Clinical outcomes, including overall survival (OS) and tumor response rate at initial therapy, were compared between these two groups. Results: The median survival time and the 1- and 3-year cumulative OS rates were 2.90 years and 84.1% and 48.0%, respectively, in the elderly group and 2.44 years and 78.2% and 39.3%, respectively, in the control group (p=0.887). The objective response rate in the elderly group was 81.8% (54/66 patients), while that in the control group was 78.6% (66/84 patients) (p=0.227). Conclusion: Elderly patients with intermediate HCC undergoing TACE had a prognosis comparable with that of younger patients with intermediate HCC undergoing TACE. Societal aging implies that the number of elderly patients with malignancy will rise in the future (9). In Japan, 75-year-old men and women have an average expected life span of around 5 and 10 years, respectively, and Japan has the greatest longevity in the world (10). The risk of developing HCC is known to be age-dependent, and patients aged ≥75 years sometimes present with HCC (11,12). The increased longevity of the population means that more elderly patients with HCC are to be expected in the coming years. In Japan, the adjusted HCC mortality has increased in recent years (13). Moreover, the average age of patients with HCC in Japan is increasing, as is the proportion of elderly patients with HCC (14). Thus, there is an urgent need to identify the optimal management for HCC in elderly patients. TACE is a procedure whereby an embolic agent is injected into the tumor-feeding artery to deprive the tumor of its major nutrient source by means of embolization; this results in ischemic necrosis of the targeted tumor (8). The survival benefit of TACE in patients with unresectable HCC was established in two randomized controlled trials and one meta-analysis (15-17). Thus, TACE plays an important role in the treatment of unresectable HCC. It is clearly defined as a first-line therapy with a better 2-year survival rate than that of conservative therapy (18). The Barcelona Clinic Liver Cancer (BCLC) intermediate stage (BCLC-B) includes Child-Pugh A and B patients with multifocal HCC, defined as more than three tumors of any size or two to three tumors with a maximal diameter of <3 cm and a single HCC of <5 cm (18-21). The BCLC classification indicates that these patients are optimal candidates for TACE (18,19). Advanced age was previously considered to be a contraindication for TACE in the treatment of HCC (22). There are few data regarding the clinical outcome in elderly patients with intermediate HCC undergoing TACE (21,23-26), and most of them are reported from countries other than Japan. Furthermore, the BCLC classification does not stratify strategies according to age (18,19). Materials and Methods Patients.
We performed TACE as an initial treatment in 150 treatment-naive patients diagnosed with intermediate-stage HCC in the Department of Gastroenterology and Hepatology, Osaka Red Cross Hospital, Japan between December 2003 and December 2012. Of these patients, 147 were treated with TACE using an epirubicin-mitomycin-lipiodol (EML) emulsion, and three were treated with TACE using a miriplatin-lipiodol emulsion. We categorized them into two groups: the elderly group (≥75 years old, n=66) and the control group (<75 years old, n=84). The breakpoint of 75 years of age was chosen because in Japan, patients aged ≥75 years are covered by a health insurance system that differs from that for patients aged <75 years. We compared the clinical outcomes including overall survival (OS), tumor response rate, and safety between these two groups. Patients diagnosed with HCC rupture at initial therapy were not included in this study because they were treated with transcatheter arterial embolization without chemoembolization. Written informed consent was obtained from all patients prior to each therapy, and the study protocol complied with all provisions of the Declaration of Helsinki. This study was approved by the Ethics Committee of Osaka Red Cross Hospital, Japan, and the need for written informed consent was waived because the data were analyzed retrospectively and anonymously. The present study comprised a retrospective analysis of patient records registered in our database, and all treatments were conducted in an open-label manner. HCC diagnosis. HCC was diagnosed using abdominal ultrasound and dynamic computed tomography (CT) scans (hyperattenuation during the arterial phase in all or some part of the tumor and hypoattenuation in the portal-venous phase) and/or magnetic resonance imaging (MRI), based mainly on the recommendations of the American Association for the Study of Liver Diseases (18). Arterial-and portal-phase dynamic CT images were obtained at approximately 30 and 120 s, respectively, after the injection of the contrast material. When carrying out angiography, we also confirmed the presence of intermediate-stage HCC using CT during hepatic arteriography (CTHA) and CT during arterial portography (CTAP) (27,28). TACE procedure. In our angiography room, a catheter was advanced to the superior mesenteric artery, and CTAP was performed to investigate the site and size of the HCC. Furthermore, we confirmed the patency of the portal vein at the time of postmesenteric portography. A catheter was then advanced to the celiac artery, and a microcatheter was advanced to the common hepatic artery or proper hepatic artery through a catheter. This approach was used to perform CTHA and digital subtraction angiography with the purpose of investigating the tumor vascularity and identifying the feeding vessels. After the completion of these procedures, a microcatheter was advanced as close as possible to the feeding vessels of the targeted tumor. This was followed by intra-arterial infusion of an anticancer agent and lipiodol emulsion via the feeding arteries according to tumor size and liver function (20,29,30). After the infusion of the anticancer agent and lipiodol emulsion, gelatin sponge particles were slowly injected into the feeding arteries to prevent reflux into untreated segments. The sites of injection of the embolizing agents were segmental or subsegmental in all patients treated with TACE. When patients had poor liver function, the doses of the anticancer agents and lipiodol were reduced. 
Assessment of treatment efficacy. Treatment efficacy was evaluated using CT findings within 2 months after the initial treatment. We regarded lipiodol accumulation in targeted tumors seen on CT scans as an indication of necrosis. This was because several studies previously reported that the lipiodol retention areas observed on CT corresponded to necrotic areas (31,32). A complete response (CR) was defined as the disappearance of all targeted tumors or 100% tumor necrosis, a partial response (PR) was defined as a ≥50% reduction in tumor size and/or necrosis, and progressive disease (PD) was defined as >25% tumor enlargement and/or the appearance of any new HCC tumors. Stable disease (SD) was defined as disease that did not qualify for classification as CR, PR, or PD. Follow-up. Follow-up after each therapy comprised periodic blood tests and monitoring of tumor markers, including α-fetoprotein and des-γ-carboxy prothrombin. Dynamic CT scans and/or MRI were obtained every 2 to 4 months after each therapy. Chest CT, whole abdominal CT, brain MRI, and bone scintigraphy were performed when extrahepatic HCC recurrence was suspected. When disease progression of the treated HCC lesions was observed after the initial therapy and/or new hepatic lesions were observed, the most appropriate treatments were performed if the liver functional reserve was adequate and if patients did not refuse such therapies. These treatments included transcatheter arterial therapies in most cases. However, when the treated lesion was well controlled after the initial therapy and the new lesion appeared in the liver, percutaneous ablative therapies were also considered. In cases that were refractory to transcatheter arterial therapies or those involving extrahepatic metastases, a molecular targeting therapy such as sorafenib was also considered (33). Statistical analysis. Data were analyzed using univariate and multivariate analyses. Continuous variables were compared using the unpaired t-test, and categorical variables were compared using Fisher's exact test. For analysis of OS, follow-up ended at the time of death from any cause, and the remaining patients were censored at the last follow-up visit. The cumulative OS rates were calculated using the Kaplan-Meier method and tested using the log-rank test. Factors with a p value of <0.05 in univariate analysis were subjected to multivariate analysis using the Cox proportional hazards model. These statistical methods were used to estimate the interval from initial treatment. Data were analyzed using SPSS software (SPSS Inc., Chicago, IL, USA) for Microsoft Windows. Data are expressed as the mean ± standard deviation. Values of p<0.05 were considered to be statistically significant. Results Baseline characteristics. The baseline characteristics of the patients in the two groups are shown in Table I. The median observation periods were 1.6 years (range, 0.2-5.3 years) in the elderly group and 1.9 years (range, 0.2-9.0 years) in the control group. There was a significantly higher proportion of female patients, a lower positivity rate for hepatitis B surface antigen, and a lower body mass index (BMI) in the elderly group. The serum albumin level, prothrombin time (PT), and platelet count were significantly higher in the elderly group than in the control group, and the proportion of patients with Child-Pugh class A disease was significantly higher in the elderly group than in the control group.
These findings indicated that patients in the elderly group had a liver functional reserve superior to that of patients in the control group. No significant difference was observed in comorbid diseases between the two groups. Median survival time and cumulative OS rates. The median survival time (MST) and the 1-, 3-, and 5-year cumulative OS rates were 2.90 years and 84.1%, 48.0%, and 15.0%, respectively, in the elderly group and 2.44 years and 78.2%, 39.3%, and 33.8%, respectively, in the control group; there was no significant difference between the two groups (p=0.887) (Figure 1). Mean doses of anticancer agents and lipiodol in the two groups. In the elderly group, TACE using EML emulsion containing epirubicin (Farmorubicin; Pfizer) at a mean dose of 39.1 ± 9.8 mg, mitomycin (Mitomycin C; Kyowa Hakko Kirin Company, Ltd., Tokyo, Japan) at a mean dose of 8.8 ± 3.3 mg, and lipiodol at a mean dose of 5.9 ± 2.8 ml was performed in 65 patients, and TACE using miriplatin-lipiodol emulsion containing miriplatin (Miripla; Dainippon Sumitomo, Tokyo, Japan) at a dose of 140 mg and lipiodol at a dose of 7 ml was performed in 1 patient (29,30,34). In the control group, TACE using EML emulsion containing epirubicin at a mean dose of 39.3 ± 10.9 mg, mitomycin at a mean dose of 8.9 ± 3.1 mg, and lipiodol at a mean dose of 5.6 ± 2.7 ml was performed in 82 patients, and TACE using miriplatin-lipiodol emulsion containing miriplatin at a dose of 120 mg and lipiodol at a dose of 6 ml was performed in 2 patients (29,30,34). Treatment efficacy at initial treatment in the two groups. In the elderly group, a CR was achieved in 10 patients, a PR in 44 patients, SD in 11 patients, and PD in 1 patient. Thus, the objective response rate (ORR) in the elderly group was 81.8% (54/66 patients). In the control group, a CR was achieved in 20 patients, a PR in 46 patients, SD in 18 patients, and PD in 0 patients. Thus, the ORR in the control group was 78.6% (66/84 patients). The difference in initial treatment efficacy between the two groups did not reach significance (p=0.227). Univariate and multivariate analyses of factors contributing to OS. Univariate analysis identified the following factors as being significantly associated with OS for all cases (n=150): the Child-Pugh classification (p<0.001), tumor number of ≤5 (p=0.001), tumor distribution (p=0.001), maximum tumor size of ≤4.5 cm (p=0.008), objective tumor response at initial treatment (p=0.004), serum albumin level of ≥3.7 g/dl (p=0.014), and total bilirubin level of ≥1.0 mg/dl (p=0.011) (Table II). The hazard ratios and 95% confidence intervals calculated using multivariate analysis for the factors with p-values of <0.05 in the univariate analysis are detailed in Table II. The Child-Pugh classification (p=0.039), tumor number of ≤5 (p=0.018), maximum tumor size of ≤4.5 cm (p=0.048), and ORR at initial therapy (p=0.010) were found to be significant predictors linked to OS in multivariate analysis. Causes of death. Thirty-two patients in the elderly group (48.5%) died during the follow-up period. The causes of death were HCC progression in 24 patients, liver failure in 4 patients, and miscellaneous causes in 4 patients. Fifty-two patients in the control group (61.9%) died during the follow-up period, and the causes of death were HCC progression in 29 patients, liver failure in 19 patients, and miscellaneous causes in 4 patients. Adverse events and hospitalization days in the two groups.
In both groups, symptoms associated with postembolization syndrome such as fever, appetite loss, abdominal pain, and nausea were transient and mostly resolved within 2 weeks after the initial treatment (35). In the elderly group, serious adverse events (SAEs) were observed in three patients (4.5%). Each of these three patients had one of the following SAEs: cholangitis, aspiration pneumonia, or liver abscess formation. All of these SAEs were managed successfully. Thus, TACE-related mortality in the elderly group was 0%. In the control group, SAEs were observed in five patients (6.0%). Each of these five patients had one of the following SAEs: acute respiratory distress syndrome (ARDS), hepatic encephalopathy, hyponatremia, hyperbilirubinemia, or refractory ascites. All of these SAEs were managed successfully, although in one patient who developed ARDS, management in the intensive care unit was required. Thus, TACE-related mortality in the control group was 0%. The mean number of hospitalization days in the elderly group tended to be fewer than that in the control group (10.7 ± 5.0 vs. 12.6 ± 6.7 days, respectively; p=0.053). Subgroup analyses according to the Child-Pugh classification. A significant difference was observed between the two groups in terms of the Child-Pugh classification (p=0.019), and we therefore performed subgroup analyses according to this classification. No significant difference (p=0.390) was observed between the two groups in terms of OS in patients with Child-Pugh class A disease (53 patients [80.3%] in the elderly group and 52 patients [61.9%] in the control group); the MST was 3.62 years in the elderly group and 3.32 years in the control group (Figure 2A). Similarly, no significant difference (p=0.434) was found between the two groups in terms of OS in patients with Child-Pugh class B disease (13 patients [19.7%] in the elderly group and 32 [38.1%] in the control group); the MST was 2.10 years in the elderly group and 1.72 years in the control group (Figure 2B). Subgroup analyses according to maximum tumor size. Although there were no significant differences in baseline tumor characteristics between the two groups, tumor-related characteristics are reportedly prognostic factors associated with OS in patients with HCC undergoing TACE (8,15-17). Hence, we performed subgroup analyses according to maximum tumor size. No significant difference (p=0.861) was observed between the two groups in terms of OS in patients with a maximum tumor size of ≥4.5 cm (39 patients [59.1%] in the elderly group and 38 [45.2%] in the control group); the MST was 2.41 years in the elderly group and 1.88 years in the control group (Figure 3A). Similarly, no significant difference (p=0.559) was found between the two groups in terms of OS in patients with a maximum tumor size of <4.5 cm (27 patients [40.9%] in the elderly group and 46 [54.8%] in the control group); the MST was 3.78 years in the elderly group and 2.92 years in the control group (Figure 3B). Subgroup analyses according to gender and other factors. Since a significant difference in the proportion of male patients was observed between the two groups, we performed subgroup analyses according to gender. In male patients (34 patients in the elderly group and 63 in the control group), the MST was 2.68 years in the elderly group and 2.56 years in the control group (p=0.986).
In female patients (32 patients in the elderly group and 21 in the control group), the MST was 3.62 years in the elderly group and 2.10 years in the control group (p=0.885). In subgroup analyses of other factors [tumor number (>5 or <5), presence or absence of ORR, tumor distribution (bilobar or unilobar), pretreatment serum albumin level (>3.7 g/dl or <3.7 g/dl) and total bilirubin (>1 mg/dl or <1 mg/dl)], no significant difference was observed between the two groups in terms of OS (data not shown). Discussion In Japan, there is a trend toward an increasing number of elderly patients with HCC. In addition, the latest estimates suggest that the incidence of HCC peaks above the age of 70 years worldwide (36). However, few investigators have reported the clinical outcome in elderly patients with intermediate-stage HCC who underwent TACE as initial therapy, although there are several studies on the clinical outcome in elderly patients with HCC who underwent surgical resection or ablative therapies (21,23-26,37-47). Hence, we conducted the current comparative study. Our results showed no significant difference in OS or treatment efficacy at initial therapy between the elderly group and the control group, and similar results were obtained in all subgroup analyses. These findings indicate that elderly patients with intermediate-stage HCC who underwent TACE had a prognosis comparable with that of younger patients. Cohen et al. reported that the MST in patients with HCC aged ≥75 years treated with TACE was 1.88 years, while in our study, the MST in the elderly group was 2.90 years (25). Because the baseline characteristics differed between their study and ours, it may not be possible to reach a definitive conclusion. However, our TACE procedure may have been more effective than that of Cohen et al. In the present study, a significantly higher proportion of female patients and a lower positivity rate for hepatitis B surface antigen were found in the elderly group than in the control group, and patients in the elderly group had a liver functional reserve superior to that of patients in the control group. In previous studies, elderly patients with HCC were more likely to be women (37-47). This may have been associated with a larger female elderly population because of their longer life expectancy (39). The fact that males tend to drink and smoke more than females in general may also be associated with our observations, although in this study, drinking and smoking histories were not obtained from all study subjects. Furthermore, as in our study, elderly patients with HCC were more likely to be hepatitis C virus (HCV) carriers than hepatitis B virus (HBV) carriers in many previous studies (37-47). This finding may be explained by the fact that most HBV carriers acquire the virus via vertical transmission in the perinatal period, whereas most HCV carriers are infected at a later stage in life. HCC therefore manifests as a complication in HCV carriers much later in life than in HBV carriers (40-47). Interestingly, however, the elderly group had a significantly lower BMI than that of the control group. Hepatic steatosis is significantly correlated with an increasing BMI and results in accelerated liver carcinogenesis (48-50). These facts may be related to our observations.
In our multivariate analysis, the Child-Pugh classification, tumor number of ≤5, maximum tumor size of ≤4.5 cm, and ORR at initial therapy were significant predictors associated with OS. Takayasu et al. reported in their large study that the degree of liver damage, alpha-fetoprotein level, maximum tumor size, number of lesions, and degree of portal vein invasion were significant factors linked to OS according to their multivariate analysis (8). Our results are consistent with their reports. In general, elderly patients have a significantly higher proportion of comorbid diseases than younger patients (24,25,37-47). However, our baseline characteristics showed no significant difference in comorbid diseases between the two groups. Elderly patients with HCC with severe or numerous comorbid diseases may have been excluded from the current analysis because of the expected TACE-related SAEs. The fact that the proportion of patients with Child-Pugh B disease was significantly lower in the elderly group than in the control group may also be due to the same reason. In our study, TACE-related mortality was 0% in both groups. TACE-related mortality reportedly ranges from 0.5% to 7% (8,21,23). Furthermore, the mean number of hospitalization days in the elderly group tended to be fewer than that in the control group. Our safety profile of TACE in elderly patients with HCC is encouraging. This study had several limitations. First, it was a retrospective study over a period of 10 years. Thus, the diagnostic and treatment procedures for HCC may not have been consistent across patients, potentially leading to bias. Second, the sample sizes in the two cohorts were small for analysis. Third, as mentioned earlier, elderly patients with severe comorbid diseases may have been excluded from this analysis, also potentially leading to bias. Larger prospective comparative studies will therefore be needed in the future to confirm these results. However, our study results demonstrated that the elderly group had a prognosis comparable with that in the control group and that our TACE procedure was safe. In conclusion, elderly patients with intermediate-stage HCC undergoing TACE had a prognosis comparable with that of younger patients with intermediate-stage HCC undergoing TACE. TACE for elderly patients with intermediate-stage HCC should not be withheld based on advanced age alone.
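As a supplementary illustration of the survival analyses reported above (Kaplan-Meier OS estimates, the log-rank test, and Cox proportional hazards modeling), the following Python sketch reproduces the workflow on synthetic data. The authors used SPSS; the lifelines package, the variable names, and all values below are assumptions for illustration only.

```python
# Minimal sketch of the reported survival workflow, on synthetic data:
# Kaplan-Meier OS per group, a log-rank test between groups, and a Cox
# model yielding hazard ratios with 95% CIs (as in Table II).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "years": rng.exponential(2.7, 150),      # follow-up time (synthetic)
    "death": rng.integers(0, 2, 150),        # 1 = died, 0 = censored
    "elderly": [1] * 66 + [0] * 84,          # >=75 years vs. <75 years
    "child_pugh_b": rng.integers(0, 2, 150), # hypothetical covariates
    "tumor_gt5": rng.integers(0, 2, 150),
})

km = KaplanMeierFitter()
for grp, label in [(1, "elderly"), (0, "control")]:
    sub = df[df["elderly"] == grp]
    km.fit(sub["years"], sub["death"], label=label)
    print(label, "median OS:", km.median_survival_time_)

res = logrank_test(df.loc[df.elderly == 1, "years"], df.loc[df.elderly == 0, "years"],
                   df.loc[df.elderly == 1, "death"], df.loc[df.elderly == 0, "death"])
print("log-rank p =", res.p_value)

cph = CoxPHFitter().fit(df, duration_col="years", event_col="death")
cph.print_summary()  # hazard ratios and 95% confidence intervals
```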
RepurposeDrugs: an interactive web-portal and predictive platform for repurposing mono- and combination therapies Abstract RepurposeDrugs (https://repurposedrugs.org/) is a comprehensive web-portal that combines a unique drug indication database with a machine learning (ML) predictor to discover new drug-indication associations for approved as well as investigational mono- and combination therapies. The platform provides detailed information on treatment status, disease indications and clinical trials across 25 indication categories, including neoplasms and cardiovascular conditions. The current version comprises 4314 compounds (approved, terminated or investigational) and 161 drug combinations linked to 1756 indications/conditions, totaling 28,148 drug-disease pairs. By leveraging data on both approved and failed indications, RepurposeDrugs provides ML-based predictions of the approval potential of new drug-disease indications, both for mono- and combinatorial therapies, demonstrating high predictive accuracy in cross-validation. The ML predictor is further validated through a number of real-world case studies, demonstrating its power to accurately identify repurposing candidates with a high likelihood of future approval. To our knowledge, the RepurposeDrugs web-portal is the first integrative database and ML-based predictor for interactive exploration and prediction of both single-drug and combination approval likelihood across indications. Given its broad coverage of indication areas and therapeutic options, we expect it to accelerate many future drug repurposing projects. Introduction The pharmaceutical industry has increasingly shifted its focus toward discovering novel applications for approved drugs, a strategy commonly known as drug repositioning or repurposing (DR) [1]. This approach is particularly appealing because it can expedite drug development, lower costs and address unmet medical needs [2]. The average cost of developing a drug from scratch currently ranges between 1-2 billion dollars [3,4], while drugs granted extended uses (utility extension or DR) mainly incur regulatory and administrative costs only; however, if a different dosing or route of administration is needed, then further safety evaluations are often required. For example, sildenafil, initially designed for angina, was successfully repurposed for erectile dysfunction in 1998 [5]. Another successful example of DR is azathioprine, initially developed for rheumatoid arthritis but later repositioned for renal transplantation [6].
Over the recent years, various computational resources have been developed to support DR efforts, including DrugRepo [7], Drug Repurposing Hub [8], repoDB [9] and RepurposeDB [10], alongside hundreds of other web-based databases that can directly or indirectly support DR [11]. In addition to web-based resources, various DR prediction methods have been developed, such as PREDICT, a widely used computational method for predicting drug-disease relationships [12]. The PREDICT approach utilizes drug-drug and disease-disease networks, focusing on 593 drugs and 313 diseases. Another study introduced a bipartite graph-based methodology for uncovering new drug indications through their relationship with similar drugs [13]. The CMAF method employs matrix factorization to predict drug-indication associations based on drug and indication similarity networks [14]. However, these methods cover only a limited set of drugs and indication pairs, leading to limited prediction capability. Additionally, the lack of interactive web-applications for these methodologies limits their practical utility and broader accessibility. (Table 1 caption: Approved, terminated (failed), investigational and predicted drug-indication associations. The top three rows show statistics for single drug-indication associations, whereas the bottom three rows are for drug combinations in the current version of RepurposeDrugs.) To address these limitations and facilitate DR among researchers without programming skills, we developed RepurposeDrugs (https://repurposedrugs.org/), a machine learning (ML)-based web-portal that provides a versatile approach to uncover new relationships between drugs and indications, covering both single and combination therapies. RepurposeDrugs systematically classifies drug indications into 25 distinct categories, presented through interactive heatmaps that visualize established and emerging indications for various drugs and drug combinations. Each drug-indication pair is linked to its clinical trial data, ensuring that both investigational and approved therapies are easily traceable. Furthermore, RepurposeDrugs features an 'Analyze custom data' option that enables users to upload approved drugs or investigational compounds and obtain predicted approval likelihood scores for various disease indications. To the best of our knowledge, RepurposeDrugs is the first web-tool that provides a comprehensive database and user-friendly portal for exploring drug and combination therapy indications and ML-based approval predictions across multiple indications and drug classes.
Drug-indication associations The dataset of drug-indication associations was extracted and manually curated from the clinical trials database (https://clinicaltrials.gov/). The drug-indication associations were then classified into approved and terminated, based on the reported status of each clinical trial. Approved drug-indication associations are those for which phase 4 of any clinical study has been passed, whereas terminated drug-indication pairs represent those associations for which at least two trials were terminated/withdrawn for the particular disease indication, other than the one in which the drug was originally approved. All the approved drug indications in RepurposeDrugs correspond to FDA approvals. The dataset was initially derived using an in-house computational pipeline and subsequently manually curated and uploaded as the backend database of RepurposeDrugs. Despite a thorough literature review, we found no existing datasets or methods that report or predict the likelihood of drug approval for specific indications. We believe that our curated benchmark dataset (Supplementary File 1) can serve as a valuable resource for future research. Additionally, investigational indications for approved drugs and investigational compounds not yet approved were extracted from the ChEMBL database (Version 33) (Table 1). There are 4187 unique approved drugs or investigational compounds for 1669 disease indications currently under clinical trial phases I, II or III. Some of these compounds may be under investigation in multiple clinical phases concurrently. Figure 1a and b shows the distribution of reported clinical trials for unique compounds and indications, respectively. Figure 1c and d shows chord plots, sourced from the RepurposeDrugs statistics tab, which highlight the overlap of shared drugs between approved and failed indications across multiple disease groups. Notably, cardiovascular and infectious disease indications currently have the highest numbers of approved drugs. In contrast, most failed drug trials are carried out in neoplasms and digestive system indications, with a notable overlap of terminated drugs between these two groups (35 drugs, as indicated in Figure 1d). These visualizations offer valuable insights for future DR research, suggesting the potential cross-applicability of drugs between certain disease groups. Classifying diseases into high-level groups We mapped all the approved drugs and investigational compounds to PubChem IDs and standard InChIKeys. Additionally, we unified the naming of the disease indications according to Unified Medical Language System Concept Unique Identifiers (UMLS CUIs) guidelines [15] for effective cross-referencing with disease databases such as DisGeNET [15] and OpenTargets [16]. Leveraging DisGeNET data, we classified all disease indications into 25 overarching indication groups. Investigational disease indications that were inferred from ChEMBL were represented using Experimental Factor Ontology (EFO) and Medical Subject Headings (MeSH) IDs. Therefore, we mapped all EFO and MeSH IDs to UMLS CUIs so that the indication classifications could also be performed for investigational drug-indication associations from the ChEMBL dataset.
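The approved/terminated labeling rule described above can be sketched as follows. This is a hypothetical simplification that assumes a flat trials table with drug, indication, phase and status columns; the actual in-house curation pipeline and the subsequent manual review are more involved.

```python
# Sketch of the labeling rule: "approved" if any completed phase 4 trial
# exists for the pair; "terminated" if >= 2 terminated/withdrawn trials
# exist for a pair that is not among the drug's approved indications.
import pandas as pd

trials = pd.DataFrame({
    "drug": ["A", "A", "A", "B", "B", "B"],
    "indication": ["melanoma", "asthma", "asthma",
                   "epilepsy", "epilepsy", "epilepsy"],
    "phase": [4, 2, 2, 3, 2, 4],
    "status": ["Completed", "Terminated", "Withdrawn",
               "Terminated", "Terminated", "Completed"],
})

approved = (trials[(trials.phase == 4) & (trials.status == "Completed")]
            [["drug", "indication"]].drop_duplicates())

failed = (trials[trials.status.isin(["Terminated", "Withdrawn"])]
          .groupby(["drug", "indication"]).size().reset_index(name="n"))
failed = failed[failed.n >= 2].merge(approved, on=["drug", "indication"],
                                     how="left", indicator=True)
failed = failed[failed._merge == "left_only"][["drug", "indication"]]
print(approved, failed, sep="\n\n")
```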
Primary targets and 2D structures of the drugs We linked 2D structures for all the drugs and investigational compounds using RDKit (version 2023.09.2). Furthermore, we collected primary target information for all drugs from the MICHA database [17]. There were 987 drugs and investigational compounds (23%) for which we could find primary target information. Users of the web-tool can see the primary targets and 2D structures by hovering over the drug name in the heatmap on the landing page of RepurposeDrugs. Graphical user interface The RepurposeDrugs landing page features an interactive heatmap that displays indications for a user-selected set of drugs and diseases. Users have the flexibility to add or remove indications and/or drugs using the dropdown list in the right-hand menu (Figure 2). Hovering over a drug name brings up a popup window that provides the PubChem ID, primary targets and the drug's 2D structure. The heatmap shows information on single drugs by default, but users can also explore the heatmap for drug combinations by selecting the 'Combinations' tab at the top of the page. Additionally, the platform allows users to rearrange the legend positions for indication groups, clinical phases and predicted approval likelihoods, as well as export high-quality figures in PNG or SVG formats, in addition to XLSX data download. The web-platform further features chord diagrams under the RepurposeDrugs statistics tab, showcasing overlaps of shared drugs across various indication groups for approved and failed indications (Figure 1c and d). A word cloud plot, available in the lower right area of the landing page, highlights drugs with the most approved disease indications or terminated trials. Finally, the 'Analyze custom data' feature enables users to input compound names or isomeric SMILES, offering predictions of the approval likelihood across various disease indications. We encourage users to propose enhancements or report issues by using the feedback button on the right side of the landing page. ML-based prediction algorithm To enable approval likelihood predictions for single drugs and drug combinations, we fine-tuned two XGBoost regression models, one for each prediction scenario. The positive outcome in these models represents approved indications, whereas the negative outcome comprises cases where at least two trials failed for a specific drug-indication association, excluding those already approved. Statistics for the training datasets are shown in Table 1, which indicate a relatively balanced distribution of positive and negative classes for both single drugs and drug combinations. While training the models, investigational disease indications were not mixed with the approved ones. Figure 3a outlines the RepurposeDrugs workflow, where step 1 involves manual curation of the datasets, including integration of drug/disease identifiers, collection of primary target and structural information, and categorization of diseases into high-level groups per DisGeNET classifications. The subsequent step focuses on XGBoost model training and testing, incorporating comprehensive feature extraction using diverse drug and disease descriptors, as explained in Section 2.5.1. Finally, we employed a conformal prediction approach to filter out low-confidence predictions and identify the most confident new drug-indication associations (Section 2.5.3). Importantly, for drug combination predictive modeling, we formulated a combination feature vector for each compound combination (Section 2.5.1).
The model operates via a user interface, accepting drug SMILES or names as inputs, which are processed through a web tool to generate indication predictions (Figure 3b). These inputs are paired with one-hot encoded indication information from our database, streamlining the process by only requiring drug names or SMILES from users. The model then generates predictions, returning the likelihoods for each indication directly to the user. A step-by-step workflow illustrating the use of the RepurposeDrugs web tool for the generation of predictions is shown in Supplementary Figure 1. Feature description The XGBoost model was trained with various structural descriptors, including 2D, 3D and graph neural network (GNN) based fingerprints of drugs, alongside Lipinski's Rule of Five (RO5) descriptors. The 2D fingerprints encode structural information and included ECFP4 (1024 bits), ECFP6 (1024 bits), MACCS (166 bits), Klekota-Roth (4860 bits), PubChem (881 bits) and E-State (79 bits). The 3D fingerprints, E3FP, offer spatial arrangement insights with a length of 4096 bits (descriptors), while GNN-based fingerprints (3DInfoMax) capture molecular graph structures (256 bits). Each fingerprint bit, regardless of the fingerprint type, was treated as an individual predictor variable in the model. The fingerprint vector of each drug combination was defined by applying a logical OR operation to each bit of the individual drugs' fingerprints, effectively capturing features present in either drug of the combination. In addition to the fingerprints, molecular descriptors such as molecular weight, ALOGP, number of hydrogen bond acceptors and donors, topological polar surface area, number of rotatable bonds, aromatic ring count, heavy atom count and various properties related to Lipinski's rules were also included in the model. For diseases, descriptors included disease names and disease classes, both encoded using one-hot encoding. Prediction algorithm We utilized the XGBoost python package (v.1.5.0) for binary classification to identify drug-indication associations, chosen for its efficiency with sparse datasets and excellent performance in binary classification tasks [18]. Optimal XGBoost parameters were determined using Bayesian optimization and 10-fold cross-validation (CV), efficiently minimizing overfitting while maximizing performance, with the RMSE as the primary performance metric. The hyperparameters tuned through Bayesian optimization included the number of leaves, maximum depth, minimum data in a leaf, learning rate, bagging fraction and feature fraction. Conformal predictions The conformal prediction framework assigns a confidence score to each drug-indication association prediction, ensuring only high-probability matches are given to the user [19]. This approach uses statistical techniques to assess the reliability of predictions based on the data distribution in the training set. In short, conformal prediction utilizes the absolute values of the CV residuals as the dependent variable of an error model that predicts how nonconforming each prediction is, which subdivides the prediction space into regions of different confidence levels. The RepurposeDrugs platform filters out low-confidence predictions by setting a confidence threshold of 0.8 by default, hence focusing on the most promising drug-indication associations.
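As an illustration of the feature construction described above, the following minimal Python sketch builds an ECFP4 fingerprint per drug with RDKit, combines two drugs with a bitwise OR, and appends a one-hot indication vector. The SMILES pair, the 25-class grouping index and the vector layout are illustrative assumptions, not the authors' exact pipeline (which also concatenates several other fingerprint types and molecular descriptors).

```python
# Minimal sketch of the combination feature vector: ECFP4 bits per drug,
# bitwise OR across the pair, plus a one-hot encoded indication group.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def ecfp4(smiles: str, n_bits: int = 1024) -> np.ndarray:
    """Morgan fingerprint with radius 2 (ECFP4) as a 0/1 numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp, dtype=np.uint8)

def combination_vector(smiles_a: str, smiles_b: str) -> np.ndarray:
    """Logical OR keeps features present in either drug of the pair."""
    return np.bitwise_or(ecfp4(smiles_a), ecfp4(smiles_b))

# Hypothetical pair (aspirin + caffeine) and an arbitrary indication group
# index within an illustrative 25-category one-hot encoding.
combo_fp = combination_vector("CC(=O)Oc1ccccc1C(=O)O",
                              "Cn1cnc2c1c(=O)n(C)c(=O)n2C")
indication = np.zeros(25, dtype=np.uint8)
indication[3] = 1  # index chosen arbitrarily for illustration
x = np.concatenate([combo_fp, indication])
print(x.shape)  # (1049,) -> one row of the model's input matrix
```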
Performance metrics To ensure the models' capability to generalize to new data, out-of-fold predictions were utilized as the key method for validating performance. This approach uses each fold of the data as a temporary validation set while training occurs on the remaining folds, thereby providing a comprehensive evaluation of the model's performance on data not seen during training. Such out-of-fold predictions offer a robust indication of how the model might perform in real-world scenarios, contributing to an unbiased and dependable performance evaluation. We used the Pearson (point-biserial) correlation coefficient to measure the relationship between the combined out-of-fold predictions of the model (representing the predicted approval likelihood) and the binary outcome of actual drug approval (approved or terminated). The combined out-of-fold predictions, derived from aggregating the predictions made on each validation set during the 10-fold cross-validation process, serve as a continuous variable in this context. The Pearson correlation coefficient, ranging from -1 (perfect negative correlation) through 0 (no correlation) to 1 (perfect positive correlation), was pivotal for assessing the relationship between model predictions and actual drug approvals, thus providing insight into the models' predictive accuracy. Evaluating model performance We evaluated the XGBoost model's accuracy at predicting new drug-indication associations in two scenarios: (1) correlating out-of-fold predictions made on the training data (see Section 2.5, Table 1) with the actual outcomes in the test data, and (2) assessing predictions for drug-indication pairs in phase I, II and III clinical trials, which were not part of the training data, to determine their future likelihood of approval. Figure 4a illustrates the correlation between the out-of-fold predictions on the holdout data excluded from the training process and the actual 'approved' or 'terminated' outcomes. We observed a strong positive correlation (Pearson point-biserial correlation coefficient of 0.75, P < 0.001) between the out-of-fold predictions and clinical trial outcomes for single compounds or approved drugs.
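A minimal sketch of this evaluation, on synthetic data, is shown below: out-of-fold probabilities from 10-fold CV are correlated with the binary approved/terminated labels via the point-biserial coefficient. The hyperparameters and data dimensions are placeholders, not the tuned values from the paper.

```python
# Sketch: out-of-fold predicted probabilities vs. binary outcomes.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_predict
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1049)).astype(np.float32)  # fingerprint + indication bits
y = rng.integers(0, 2, size=500)                             # 1 = approved, 0 = terminated

model = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.05)
# Each sample's probability comes from a model that never saw it in training.
oof = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
r_pb, p = pointbiserialr(y, oof)
print(f"point-biserial r = {r_pb:.2f}, P = {p:.3g}")
```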
We further compared XGBoost performance with other prevalent algorithms including LightGBM, Random Forest, Kernel SVM and Elastic Net. Our analysis supports the use of the XGBoost method, which, together with another gradient boosting method, LightGBM, achieved the highest prediction accuracy of r = 0.75, as detailed in Supplementary Table 1. Since XGBoost is trained on fingerprints representing molecular features, we additionally compared it with a model that leverages molecular graphs for its predictions. For this, we implemented a hybrid model combining a GNN and a convolutional neural network (CNN) to process both graph-based features derived from SMILES and other relevant features such as disease information. However, this approach achieved a lower performance (r = 0.56) compared with XGBoost, demonstrating the superiority of the fingerprint-based XGBoost approach for our dataset. The relatively small size of our dataset (1744 drug-indication associations) likely limited the effectiveness of the GNN and CNN models, which typically require larger datasets to train effectively.

We further used an independent dataset of compounds currently in phases I, II and III clinical trials for additional validation, which were not included in the model training. Our predictions revealed a clear pattern across the clinical developmental phases (Figure 4b). In phase I, compounds had an average predicted approval likelihood of 29%, reflecting the preliminary nature of this phase. The predicted approval likelihood increased to 38% for phase II trials, in line with its emphasis on efficacy. Most notably, phase III trials showed a substantial jump to a 63% mean predicted likelihood, echoing their advanced stage and closer proximity to market approval. These results highlight our model's capability to estimate the likelihood of drug approval at different stages of clinical trials. Importantly, the predictions follow a bimodal distribution, since all the low-confidence predictions with conformal scores less than 0.8 were excluded.

The model predictions for drug combination approval likelihood showed a slightly lower, yet highly significant correlation (r_pb = 0.56, P < 0.001) than for individual drugs (Figure 4c). A similar pattern across the development phases was observed in the independent validation set (Figure 4d), confirming that the drug combination predictions are also clinically meaningful. This reduced performance compared with single drugs is likely due to the complexity of drug combination modes of action, which can significantly affect efficacy and safety profiles, and to the smaller combination dataset, making such prediction tasks more challenging. Nonetheless, the moderate positive correlation indicates that the model is surprisingly effective at capturing the intricacies of drug combination interactions in current clinical trials.
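The phase-wise comparison can be reproduced with a rank-based two-sample test, as sketched below; we assume the 'Wilcoxon test' in the figure legend refers to the rank-sum variant, and the data here are synthetic placeholders whose means roughly match the reported 29%, 38% and 63%:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Synthetic predicted approval likelihoods per clinical phase
phase1 = rng.beta(2, 5, size=50)  # mean ~ 0.29
phase2 = rng.beta(3, 5, size=50)  # mean ~ 0.38
phase3 = rng.beta(5, 3, size=50)  # mean ~ 0.63

for label, a, b in [("I vs II", phase1, phase2), ("II vs III", phase2, phase3)]:
    stat, p = ranksums(a, b)
    print(f"Phases {label}: rank-sum statistic = {stat:.2f}, p = {p:.3g}")
```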
Enhancing DR with ML predictor

When evaluating the performance of the RepurposeDrugs ML predictor, we focused on its ability to predict potential new uses for drugs, particularly those in phase III clinical trials that are close to market entry but not yet included in our training dataset (Table 2). This predictive capability is notably exemplified in the case of Capecitabine as an effective treatment for colorectal carcinoma, aligning with findings from ongoing phase III clinical trials (clinical trial identifier: NCT00312000). This predictive insight is particularly valuable for understanding the intricate drug-indication relationships in complex anticancer therapy. In the field of neurological disorders, the ML predictor proved its versatility by predicting Valproic acid and Zonisamide, both established epilepsy treatments, as potential therapeutic options for managing schizophrenia, in line with phase III clinical trial findings (NCT00073164 and NCT00401973). Further demonstrating its utility, the ML predictor pinpointed semaglutide and tirzepatide, both approved for Type 2 diabetes. The ML predictor precisely identified semaglutide, a GLP-1 receptor agonist approved in 2017 [20], and tirzepatide, a newer dual GIP and GLP-1 receptor agonist, for their significant roles in diabetes management [21], each with a 100% prediction confidence and a predicted approval likelihood of 1.0. This successful case example underscores the model's ability to discover novel therapeutic applications for a drug beyond its original indication, even when this information was not available in the training data. Notably, both anti-diabetic drugs, tirzepatide and semaglutide, despite having passed phase 4, were not included in the RepurposeDrugs training data due to an oversight in our initial data curation pipeline. These drugs were subsequently identified and incorporated into the validation dataset, allowing us to leverage the existing data while acknowledging and correcting the initial oversight in the data curation.

Table 2. Predictive efficacy of the RepurposeDrugs ML predictor in DR. Outcomes of the RepurposeDrugs ML predictor's analysis, focusing on various drugs and their repurposing potential.

In addition to these five DR cases, Table 2 highlights five additional predictions by RepurposeDrugs, aligning with ongoing investigations in clinical trials. It is important to note that some of the drugs were present in the training dataset as approved and/or terminated indications, while others were not present at all (Table 2, column 5). These case examples effectively highlight the ML predictor's proficiency in accurately identifying drug-indication associations, underscoring its potential as a powerful tool to accelerate and de-risk the DR and discovery process. The RepurposeDrugs platform offers unique insights into drug-indication associations, both for mono- and combinatorial therapies, especially in indication areas where capturing complex therapeutic relationships is crucial, such as cancer and neurological indications, paving the way for innovative medical research and patient care applications.
Conclusions and future directions

RepurposeDrugs is the first web tool and comprehensive database that provides existing drug-indication associations and predicted approval likelihoods for single drugs and drug combinations across a vast range of indications and drug classes, accompanied by convenient interactive visualizations. The predictive platform combines comprehensive datasets through an intuitive heatmap that visualizes drug-indication associations (approved, investigational, terminated and predicted), disease groups, primary targets and 2D structures of the drugs and associated clinical trial studies. Approved and terminated drug-indications were sourced from https://clinicaltrials.gov/. This dataset was first compiled using an in-house computational pipeline, then manually verified before being added to RepurposeDrugs. The unstructured nature of the data, such as varying synonyms for drugs and indications, complicates the automatic extraction of all approved drug-disease indications from https://clinicaltrials.gov/. Consequently, several associations, like those for 'tirzepatide' and 'semaglutide' (referenced in Section 3.2), were not initially included in the database. In the future, we will further improve our in-house curation pipeline so that it automatically identifies and does not miss approved drug-indications. Our dataset of approved and terminated drug-indications (Supplementary File 1) can serve as a benchmark for future method developers to train and compare prediction methods.

Each data point of known associations in RepurposeDrugs is linked to clinical trials using NCT IDs, enabling users to explore additional information such as the number of patients, enrollment criteria and outcomes of the study. Classification of indications into 25 high-level groups further helps to cluster drugs based on indication areas. The 'Analyze custom data' option, available in the GUI, allows users to predict disease indications for approved, investigational or even preclinical compounds (via isomeric SMILES), helping users to identify potential compounds or combinations for a particular indication. As an important part of RepurposeDrugs, we have developed an in-house pipeline to curate drug-indication associations in a semi-automated way. Such a pipeline will be crucial for processing new drug-indication associations once they become available from clinical trials. In the future, we plan to extend the extraction of approval status beyond the FDA, and source approved drugs from the European Medicines Agency (EMA) and the Japanese Pharmaceuticals and Medical Devices Agency (PMDA).
The XGBoost prediction algorithm used in RepurposeDrugs achieved a significant correlation of 0.75 for single drug-indication association prediction and 0.56 for drug combinations. Despite the lower correlation for drug combinations, partly due to the limited dataset size (only 65 unique drug combinations are currently available for 55 disease indications), future enhancements with additional trial data are expected to improve the model's predictive capabilities. Building on this foundation, we also acknowledge the benefits of integrating more complex biological data in the future to increase model performance. However, our main goal with RepurposeDrugs was to develop an exceptionally user-friendly web tool for exploring existing data and predicting potential drug-disease indications. Currently, RepurposeDrugs requires only drug SMILES or drug names as input, enhancing the ease of use for researchers without programming skills. SMILES are a widely recognized format readily available in public databases like PubChem or ChEMBL, making them easily obtainable for users. This approach ensures that RepurposeDrugs remains accessible while retaining the potential for future expansion and refinement. In subsequent updates, we plan to explore how integrating additional data types may further improve the model's predictive capabilities, allowing us to gradually enhance the platform while maintaining its ease of use.

To demonstrate the performance of RepurposeDrugs, we identified 10 repurposing scenarios (likelihood of approval >80%) that match ongoing phase III and phase IV studies (Table 2), further justifying the predictive capability of RepurposeDrugs.

Finally, the web application of RepurposeDrugs allows the user to export high-quality, customized figures tailored to the user's selection of drugs and indications, which can be directly utilized in manuscripts or scientific presentations. RepurposeDrugs is anticipated to become a valuable resource for the DR community, facilitating the development of innovative DR strategies and enabling the effective utilization of existing data for predictive analysis.

Key Points

• RepurposeDrugs is the first web tool and comprehensive database that provides existing drug-indication associations and predicts approval likelihoods for single drugs and combinations across a wide range of indications and drug classes. It features convenient interactive visualizations.
• The RepurposeDrugs database contains 382 drugs with approved indications and 4187 drugs or compounds currently under investigation for other uses. It also includes 65 approved drug combinations for specific indications and 34 combinations under investigation.
• The platform combines extensive datasets into an intuitive heatmap that visualizes drug-indication associations (approved, investigational, terminated and predicted), indication groups, primary targets, drug 2D structures and associated clinical trial studies.
• Each data point of known associations in RepurposeDrugs is linked to clinical trials using NCT IDs, allowing users to explore additional information such as patient numbers, enrollment criteria and study outcomes.
Figure 1. Comparative distribution of (a) unique drugs (or compounds) and (b) disease indications in different developmental stages: approved, terminated and under investigation in clinical phases I, II and III. (c) Number of approved drugs common across various indication groups. (d) Number of approved drugs, with failed clinical trials, common across different indication groups.

Figure 2. The RepurposeDrugs landing page displays an interactive heatmap of drug-indication associations with indications color-coded by high-level indication groups (left panel), and a word cloud plot highlighting drugs with the most approved disease indications (bottom right side). Hovering over a drug name brings up a popup window that provides the PubChem ID, primary targets, standard InChIKey and the drug's 2D structure. Hovering over a cell in the heatmap shows the drug-indication pair, the study phase and NCT IDs for a clinical trial. The 'Analyze custom data' feature at the right panel is linked with an ML predictor that predicts the approval likelihood of input drugs or preclinical compounds against hundreds of disease indications.

Figure 3. RepurposeDrugs prediction workflow. (a) Schematic of model construction. Left panel: manual curation of datasets, integration of drug/indication identifiers, structural information and indication categorization. Middle panel: model training and testing, highlighting feature extraction from drug and indication descriptors. Right panel: new drug-indication association predictions, employing a conformal prediction approach to exclude low-confidence predictions. (b) The model processes drug SMILES or names via a web tool to predict the likelihood of drug approval for indications available in the database.

Figure 4. Validation of the model performance for indication associations with single drugs (a and b) and drug combinations (c and d). Predictive performance was assessed using two sets of validation data: (a) out-of-fold predictions generated during the model training process and (b) an independent dataset from phase I, phase II and phase III clinical trials. Panels (c) and (d) display analogous validation outcomes for drug combination data. The predicted likelihood of approval is quantified on a scale ranging from 0 (not likely) to 1 (highly likely). The observed incremental trend from phase I to phase III in the predicted likelihoods underscores the model's proficient performance. Significant differences with P < 0.05 (Wilcoxon test) are marked with '*', while 'n' denotes the total count of drugs and drug combinations present in each analysis.
2024-07-10T06:17:15.763Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "4c6887687aabffd68045f02bd101518df1c78f02", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "58d44968b424704a31b27454d3335b2e72ba2220", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
1872893
pes2o/s2orc
v3-fos-license
Experimental manipulation reveals few subclinical impacts of a parasite community in juvenile kangaroos

Introduction

Gastrointestinal helminths commonly infect mammalian herbivores (Sykes, 1987). Helminth infections often have clinical impacts, causing disease and mortality (Holmes, 1987), but can also cause what has been termed 'subclinical' disease (Gunn and Irvine, 2003), inducing more subtle effects in the host. Subclinical impacts are well known in livestock: reductions in appetite and food absorption caused by helminth infections can decrease host fecundity and growth (Mejia et al., 1999) and body condition (Loyacano et al., 2002), and alter metabolism (O'Kelly et al., 1988). In contrast, the effects of parasitism on wildlife have received far less attention, although there is mounting evidence that parasites can have similar negative impacts, reducing fitness (Watson, 2013) as well as causing complex changes to host physiology (Van Houtert and Sykes, 1996), behaviour (Scantlebury et al., 2007) and population dynamics (Hudson et al., 1992a; Albon et al., 2002; Stien et al., 2002). However, the types of impacts measured in livestock and wildlife often differ, due to the difficulties of studying natural host-parasite relationships and quantifying fitness consequences. In order to investigate such effects thoroughly, experimental manipulation is imperative.

Field experimentation allows the actual costs of parasites on hosts to be investigated rigorously, eliminating many of the issues associated with extrapolating laboratory results onto free-living individuals or populations (Seitz and Ratte, 1991). Studies of fitness consequences in the wild typically focus on natural covariation between parasite load and fitness parameters, and so may be confounded by the inherent differences among individuals that can contribute to high parasite burdens. Consequently, it is often unclear whether changes in fitness parameters are due to heavy parasite burdens, or if these burdens result from other pre-existing factors related to fitness. Ecological host-parasite studies in wild animals have been mostly based on correlations, although there have been some field experiments in wild systems (e.g. Svalbard reindeer, Rangifer tarandus platyrhynchus, Soay sheep, Ovis aries (Gulland, 1992) and red grouse, Lagopus lagopus scoticus (Hudson et al., 1992b)).

The subclinical effects of parasites can be extremely difficult to quantify in the wild. Ecologists tend to use body condition as an indicator of an animal's health and reproductive potential. Body condition essentially reflects available energy reserves (Green, 2001): an animal in good condition is assumed to have more reserves than one in poor condition (Schulte-Hostedde et al., 2005). Energy reserves can be quantified directly by measuring fat stores (e.g. the amount of back or kidney fat (Riney, 1955)) or non-invasively using mass/size ratio indices of body condition, which attempt to determine the size of energy stores after correcting for structural body size (Schulte-Hostedde et al., 2005). Alternatively, haematological and serum biochemical parameters can be used to assess an animal's health. Although less commonly used in ecological studies of wildlife, haematological parameters may provide more sensitive information on the immediate physiological status of a host (Milner et al., 2003; Budischak et al., 2012).
That is because parasite infections can alter haematological parameters directly through haematophagy (blood-feeding) and indirectly through activation of host immunity in response to infection or by limiting the digestion and absorption of essential nutrients, such as protein (Colditz, 2008). Red blood cell counts, haemoglobin and plasma protein concentrations can all be used to assess an animal's health and condition, and have been directly linked to performance and reproductive success (Moore and Hopkins, 2009). The combined application of both haematological and body condition indices may therefore provide greater insight into the subclinical effects of parasite communities on a host.

Juvenile mortality is commonly increased by infection (Schmidt et al., 1979); however, evidence from livestock suggests that this age-class can also experience considerable subclinical effects such as reductions in body weight (Chiejina and Sewell, 1974), growth (Loyacano et al., 2002) and appetite (Kyriazakis et al., 1998). Despite the evidence from livestock hosts, it is unclear to what degree subclinical effects occur in juveniles of wildlife species. Theoretically, juveniles should experience significant costs when infected with parasites, due to the nutritional deficits parasites cause and the costs of mounting an immune response (Colditz, 2008), and these effects should be particularly marked during early growth and development. Such effects are important to comprehend, as it is well established that conditions early in life can have significant implications for survival and reproductive success as an adult (Metcalfe and Monaghan, 2001).

Most wildlife hosts harbour complex parasite communities (Bordes and Morand, 2011), and kangaroos (Marsupialia: Macropodidae) are known to support more species of parasites than any other group of mammals (Beveridge and Chilton, 2001). The eastern grey kangaroo (Macropus giganteus) carries a diverse fauna of gastrointestinal nematode parasites in its complex, sacculated forestomach, with most species showing seasonal fluctuations, peaking in the winter months (Arundel et al., 1990). Most of these gastrointestinal nematodes are directly transmitted via ingestion (Sykes, 1987). Adult kangaroos do not appear to develop immunity to most of these nematode species, and juveniles are susceptible to gastrointestinal parasitism, primarily from high burdens (400-1500) of the intestinal trichostrongylid nematode Globocephaloides trifidospicularis. Juveniles can experience high mortality, coupled with declining haematocrit and plasma protein concentrations, in their first winter post-weaning, between 14 and 20 months of age (Arundel et al., 1990).

Populations of eastern grey kangaroos can reach high densities, and individuals are gregarious, forming mixed-sex, open-membership groups to forage and rest (Coulson, 2009), conditions that favour helminth parasite transmission (Altizer et al., 2003). Eastern grey kangaroos are capable of breeding throughout the year, but most births occur between September and March, during the austral spring/summer months (Poole, 1983). Following a short gestation period and then an extended period of development in the pouch, young exit permanently at around 320 days (Poole, 1975). Toward the end of pouch life, juveniles begin to forage on the pasture and are exposed to the infective stages of nematodes. Juveniles will continue to associate with and suckle from their mothers until over 18 months of age (Poole, 1975).
The year following permanent pouch exit is the most critical for juvenile kangaroos, as they must undergo substantial growth. During this period, the average monthly weight gain is 1.4 kg for males and 0.9 kg for females (Poole et al., 1982). To sustain this growth, juveniles have around 1.8 times the energy requirement of mature, non-lactating females (Munn and Dawson, 2004). In addition, during this period of growth, individuals are immunologically naive (Arundel et al., 1990) and become infected by gastrointestinal parasites. Individual variability in body size increases following permanent pouch exit (Poole et al., 1982), suggesting that growth rate is a plastic trait that can be influenced by external factors.

We examined the effect of concomitant infection with multiple parasites on the growth, body condition and blood chemistry of one cohort of free-ranging juvenile eastern grey kangaroos by manipulating parasite loads. We removed gastrointestinal parasites from a group of juveniles using an oral anthelmintic and then compared them with control individuals, with the expectation that control juveniles would show subclinical effects. We predicted that, due to an increased availability of nutrients and energy resources, treated juveniles would have a greater growth rate and mass gain, and would increase their body condition relative to controls. We also predicted that there would be changes in haematological parameters, with decreases in red blood cell counts, haemoglobin concentration and haematocrit in control juveniles. Similarly, we expected that serum biochemistry would indicate subclinical effects, with decreased levels of total protein and albumin, and increased levels of globulin.

Study site

This study was conducted at the Anglesea Golf Club (38°24′S, 144°10′E) in southern Victoria, Australia, in 2012. The golf course covers an area of 73 ha and contains open, grassy fairways dominated by couch grass (Cynodon dactylon), separated by patches of woodland and shrubland (Inwood et al., 2008). The course is bordered by native heathy woodland to the north and west; kangaroos move freely between the course and native vegetation, as well as through residential properties in the south and east. Population surveys (following Inwood et al., 2008) at the time of the study showed that the population density of kangaroos at the site was approximately 2.0/ha (Cripps and Coulson, unpublished data). Potential predators at the site include the red fox (Vulpes vulpes) and domestic dogs (Canis lupus familiaris).

Animal capture and treatment

Juvenile kangaroos (n = 42) were first captured in March and April 2012. Due to their habituation to humans, kangaroos at this site tolerate close approach on foot. Juveniles were identified primarily based on their size, but also on whether they were closely associated with an adult female. Juveniles were captured using either an extendable pole syringe (1.4 m, 2.4 m or 3.6 m long) (King et al., 2011) or an injection arrow fired from a band-powered gun (Para-medic; Wildvet, Melbourne, Victoria, Australia). Both methods injected the hind limb musculature with Zoletil® 100 (100 mg/mL of a 50:50 tiletamine hydrochloride/zolazepam hydrochloride mixture; Virbac Animal Health Pty Ltd, Milperra, New South Wales, Australia) at a dose of approximately 5 mg/kg body mass. To identify individuals, they were fitted with a unique combination of coloured, reflective ear tags (Leader, Craigieburn, Victoria, Australia).
Standard body measurements (Poole et al., 1982) were collected using a retractable tape measure and Vernier calipers. Leg, pes (foot) and arm lengths were measured to the nearest mm; body mass was measured to the nearest 0.1 kg using 25-kg spring scales (Salter, Melbourne, Victoria, Australia). The approximate date of birth of each individual was calculated from the mean of three estimates based on leg, foot and arm measurements at the first capture, using the growth tables provided by Poole et al. (1982). In some cases, one measurement gave an estimate that was >2 months apart from the other two. If this occurred, birthdate was calculated using the mean of the other two estimates. Only individuals born after 1 August 2010 were included in the final analysis, to ensure that they were ≤21 months of age and therefore encountering nematode larvae for the first time over the winter months of June-August 2012.

Individuals were randomly allocated to either a control (n = 20) or a treatment (n = 22) group, stratified by sex to ensure equal numbers of males and females in each group (a sketch of this allocation scheme is given below). Treated individuals were given an oral dose of albendazole (Alben® for sheep, lambs and goats, 19 g/L, Virbac Animal Health Pty Ltd, Milperra, New South Wales, Australia) at a rate of 3.8 mg/kg body mass (Cripps et al., 2013), while control individuals were left untreated. No oral control was administered to untreated individuals, to avoid indirectly affecting the gastrointestinal fauna. A number of juveniles disappeared (either died or dispersed) during the study, so only 15 control (8 male, 7 female) and 12 treatment (8 male, 4 female) kangaroos were included in the analysis. The average age of these individuals at first capture was 14.5 months (range 11-19.5 months).

Juvenile kangaroos were recaptured between May and June 2012 to re-administer the anthelmintic to the treated group and to collect body measurements from both groups. The interval of re-treatment was based on the estimated pre-patent period of infection (approximately 3 months) in eastern grey kangaroos (Cripps et al., 2013). Individuals were recaptured in a similar order to their initial captures; the average time between first and second capture was 77 days (range 69-91). Individuals were recaptured again between July and September 2012 (mean recapture interval of 77 days, range 68-114 days) and final body measurements were collected. In total, each individual was captured three times.

Faecal egg counts

To determine the efficacy of the parasite treatment, egg counts were conducted on faecal samples collected within 33 days of first capture. Samples were collected again at 40-90 days post-treatment, prior to the second re-treatment period. At this point, it is possible that many of the nematodes were larval stages and thus were not reflected in the faecal egg counts. Faecal samples were collected in 2-h blocks at dawn and dusk, as this is when kangaroos are actively foraging and defaecation rates are greatest (Johnson et al., 1987). Marked juveniles were observed until they defaecated, and observers collected faecal samples immediately after they were deposited. It was not possible to collect samples from every individual, so faecal samples were collected for a subset of juveniles in each group. Samples were collected from 13 juveniles (5 treated, 8 controls) on days 12-33 post-treatment, and from 20 juveniles (9 treated, 11 controls) on days 40-90 post-treatment. Samples were maintained at 4°C and analysed within 24 h.
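A minimal sketch of the sex-stratified random allocation described above; the data structure and the even split within each stratum are illustrative simplifications (the actual study groups were 20 and 22 animals):

```python
import random

def stratified_allocation(animals, seed=2012):
    """Shuffle within each sex stratum, then split into control and treatment."""
    random.seed(seed)
    groups = {"control": [], "treatment": []}
    for sex in ("male", "female"):
        stratum = [a for a in animals if a["sex"] == sex]
        random.shuffle(stratum)
        half = len(stratum) // 2
        groups["control"].extend(stratum[:half])
        groups["treatment"].extend(stratum[half:])
    return groups

# Toy cohort of 42 juveniles with alternating sexes
animals = [{"id": i, "sex": "male" if i % 2 else "female"} for i in range(42)]
groups = stratified_allocation(animals)
print(len(groups["control"]), "control;", len(groups["treatment"]), "treatment")
```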
The number of eggs per gram (epg) of faeces was determined by a modified McMaster technique using 2 g of faeces mixed with 60 mL of saturated sodium nitrate solution (Redox Pty Ltd, Minto, New South Wales, Australia). An aliquot of 0.5 mL of homogenised filtrate was transferred into a Whitlock Universal counting chamber and examined under a microscope at 100× magnification. Only typical strongylid eggs were counted, with each egg representing 60 epg of faeces (2 g in 60 mL gives 1/60 g of faeces in the 0.5 mL aliquot, hence 60 epg per egg counted).

Blood collection and analysis

During the second recapture, blood samples were also collected from the lateral caudal vein. Blood for haematology was transferred into 2 mL vacutainers containing EDTA, and blood for serum analysis into 5 mL vacutainers containing gel for serum separation but no additives. Vials were immediately placed in a cooler and taken back to the field base, where blood for serum analysis was centrifuged for 15 min. Blood smears were also prepared on glass slides within 4 h of collection. Sera and whole blood samples were refrigerated until transport to a commercial, NATA-accredited, diagnostic veterinary laboratory (IDEXX Laboratories, Mount Waverley, Victoria, Australia). The time between blood collection and delivery to the lab was <24 h. Whole blood was analysed for the number of red blood cells, the haemoglobin concentration and the haematocrit. Assessment of haematologic values was performed using a Sysmex XT-2000i haematology analyzer (Sysmex Corporation, Kobe, Japan). Serum chemistry profiles were obtained with an Olympus AU 400 analyzer (Olympus Diagnostics, Hamburg, Germany) and included total protein, albumin and globulin.

Statistical analysis

Logistic regression was used to analyse the effects of sex and treatment on the disappearance rate of individuals throughout the study. Faecal egg count reduction calculations were made according to Wood et al. (1995) using the Excel plug-in 'Reso' (Cameron, 2003). Analysis of the effects of treatment on kangaroo faecal egg counts was carried out using Genstat, Version 10 (VSN International Ltd., Hemel Hempstead, UK). Faecal egg counts were log(1 + epg) transformed to meet the assumptions of normality and analysed using restricted maximum-likelihood analyses (REML), with time and treatment as fixed factors, and kangaroo identity as a random factor to account for repeated measures.

Differences in body mass and leg lengths of juveniles in the treated and control groups at the initial capture were tested using independent-sample t-tests. The Scaled Mass Index (Peig and Green, 2009) was used to measure body condition. This index overcomes several drawbacks of other indices, many of which fail to account for the changing relationship between mass and length as growth occurs. Following Peig and Green's (2009) procedure, the body measurement most strongly correlated with body mass (on a log-log scale) was determined. Initially each sex was tested separately, but there was no difference in the strength of the correlations, so the sexes were pooled. Both leg and arm length were more highly correlated with body mass (leg: r = 0.92, P < 0.01; arm: r = 0.91, P < 0.01) than pes length (r = 0.81, P < 0.01), so we chose leg length as the length (L_i) value for each individual. The population mean (L_0) was calculated separately for each of the three capture periods.
The Scaled Mass Index of body condition was then calculated as M_i (L_0/L_i)^b_SMA, where M_i is the body mass of the individual and b_SMA is the standardised major axis regression slope of the ln M_i versus ln L_i plot (Peig and Green, 2009); a computational sketch of this index is given at the end of this section. To assess the effect of treatment across the three capture periods, repeated-measures ANOVAs were performed to determine differences in body mass, leg length and the Scaled Mass Index. The main factor was treatment (albendazole administered or untreated control), and the repeated factor was capture period. Mauchly's test of sphericity indicated that the assumption of sphericity was violated in all cases (body mass: χ²(2) = 9.86, P = 0.007; leg: χ²(2) = 10.93, P = 0.004; body condition: χ²(2) = 13.97, P = 0.001), so a Greenhouse-Geisser correction was used. Analysis of the effect of treatment on haematological and serum biochemical parameters was carried out using independent-sample t-tests, or Mann-Whitney U tests in the cases where data were non-normal.

Power analyses were performed for each growth and blood parameter using G*Power (Erdfelder et al., 1996). The magnitude of effects for most of the parameters we tested is rarely reported in the literature, and the magnitude of body mass effects reported for livestock hosts is extremely variable. When compared with parasitised controls, unparasitised heifers gained approximately 18% more weight (Mejia et al., 1999), whereas in sheep, the difference ranged from 74% (McLeod and Wolff, 1968) to 81% (Anderson et al., 1980). Consequently, we used both 20% and 80% differences between the treatment and control groups as the effect sizes in these calculations. All other statistical analyses were carried out using SPSS Version 21 (IBM Corporation, Armonk, New York, USA). The assumptions of parametric statistical analyses (normality and equality of variances) were tested for all sets of data. Normality was assessed using the Kolmogorov-Smirnov statistic with α > 0.05, and Levene's test for homogeneity of variances was used to test equality of variances, with α > 0.05. We did not apply sequential Bonferroni adjustments, on the basis that we report power analyses for all our tests and had already selected a subset of blood and growth parameters for analysis (Moran, 2003).

Mean (±SE) faecal egg counts of treated kangaroos (36 ± 24 epg) were much lower than those of control kangaroos (885 ± 251 epg; F(1, 25.6) = 19.23, P < 0.001; Fig. 1) 12-33 days following treatment, representing a 99% reduction in faecal egg counts. There was no difference in the faecal egg counts within each group in either time period (F(1, 3.1) = 2.41, P = 0.22). However, there was a significant interaction between time and treatment (F(1, 3.4) = 32.36, P = 0.007), such that faecal egg counts in the treated group increased more than those in the control group. No cestode eggs were detected in the faecal flotations. Eimeria oocysts were present in some samples, but at very low numbers.

Serum albumin levels were 8% higher in treated juveniles than in controls, and this difference was significant (P = 0.01, Table 2). While there were no significant differences in the concentrations of total protein, globulin or haemoglobin, the red cell count or the haematocrit, all of these parameters showed trends in the predicted directions, with lower levels in the parasitised juveniles. Power was high for all the blood parameters (>0.8, Table 2) for both 20% and 80% effect sizes.
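The Scaled Mass Index calculation referenced above can be sketched as follows, assuming the standardised major axis slope is obtained as the OLS slope divided by the Pearson correlation (a standard identity); the data and variable names are placeholders:

```python
import numpy as np

def scaled_mass_index(mass: np.ndarray, length: np.ndarray) -> np.ndarray:
    """Scaled Mass Index (Peig & Green, 2009): M_i * (L_0 / L_i) ** b_SMA."""
    ln_m, ln_l = np.log(mass), np.log(length)
    b_ols = np.polyfit(ln_l, ln_m, 1)[0]   # OLS slope of ln M on ln L
    r = np.corrcoef(ln_l, ln_m)[0, 1]      # Pearson correlation on the log scale
    b_sma = b_ols / r                      # standardised major axis slope
    l0 = length.mean()                     # population mean length (per capture)
    return mass * (l0 / length) ** b_sma

# Toy data: body mass (kg) and leg length (mm) of five juveniles
mass = np.array([12.1, 14.3, 10.8, 15.0, 13.2])
leg = np.array([420.0, 455.0, 400.0, 470.0, 440.0])
print(scaled_mass_index(mass, leg))
```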
Discussion

Contrary to our predictions, experimental removal of parasites from juvenile kangaroos had no effect on their body condition, mass gain or limb growth. Juvenile kangaroos treated with anthelmintics had significantly higher albumin levels than control juveniles, but showed no differences in any other blood parameters. There was no evidence of parasite-induced mortality at our site, and the disappearance rate of juveniles (39%) was comparable to that seen in another eastern grey kangaroo population subject to predation by red foxes (Banks et al., 2000). Our results suggest that juvenile eastern grey kangaroos are largely unaffected by gastrointestinal parasitism in populations where burdens of G. trifidospicularis are low.

Defence against parasites incurs a cost, which may lead to resources being partitioned into immune responses rather than growth (Zuk and Stoehr, 2002). As resources are limited, animals undergoing rapid growth must partition and prioritize them appropriately (the partitioning framework; Coop and Kyriazakis, 1999). Accordingly, a growing animal encountering parasites for the first time would be expected to prioritise immunity over growth (Coop and Kyriazakis, 1999). We therefore predicted that untreated (control) juvenile kangaroos, which were encountering nematode parasite larvae for the first time, would have a reduced allocation to growth and energy reserves, and that this would be reflected in their blood parameters. Field experiments such as ours are rare in wildlife hosts, but have mostly shown significant subclinical effects on hosts (e.g. Hillegass et al., 2010; Stien et al., 2002). It was therefore surprising that the removal of parasites had little effect on growth or the haematological variables examined, particularly as the faecal egg counts were high in the parasitised kangaroos. However, when compared with untreated controls, anthelmintic-treated juveniles tended to gain more weight, and tended to have longer leg and pes lengths. Although these trends were not significant, the power analysis suggests that with larger sample sizes, greater impacts of parasitism in juvenile grey kangaroos might have been seen. If individuals were investing extra energy and nutrients in several areas at once (such as both skeletal and muscular growth), it may have been difficult to detect changes in each parameter alone (Munger and Karasov, 1989), leading to low power.

Haematological parameters can be altered by parasites directly through haematophagy (blood-feeding) and indirectly by limiting the digestion and absorption of essential nutrients, such as amino acids and protein (Colditz, 2008). Lowered albumin levels in kangaroos could be directly caused by blood loss and/or inflammation in the gastrointestinal tract (Rothschild et al., 1988; Arundel et al., 1990). In juvenile eastern grey kangaroos, heavy infections of G. trifidospicularis have been associated with severe anaemia and clinical disease (Arundel et al., 1990; I. Beveridge, pers. obs.), although burdens (from post-mortems) at our site were more than five times lower than the levels known to cause disease (400-1500; Arundel et al., 1990), perhaps explaining why we observed only an 8% difference in albumin levels between the two groups of juveniles. The nematode M. baylisi may also feed on blood (Arundel et al., 1990), although clinical impacts have never been confirmed. Larvae of a third species,
R. rosemariae, can cause severe lesions on the gastric mucosa, yet hosts can carry large burdens without any obvious effects on health (Beveridge and Presidente, 1978). As in all eastern grey kangaroo populations, helminth coinfection was ubiquitous in kangaroos at our site, and juveniles were infected with between 5 and 8 different helminth species (post-mortem data; Cripps, unpublished data). However, due to the inability to morphologically distinguish the eggs of the various taxa in the faeces, it was impossible to determine which combination of species was present in any individual in our study at any one time, and therefore the hypoalbuminaemia could not be attributed to any helminth species in particular. Severe blood loss should also reduce concentrations of haemoglobin and total protein, and lower the haematocrit, but we found no differences in any of these despite our high statistical power. We are confident our experimental study would have detected even small changes in any of the other blood parameters. Alternatively, the hypoalbuminaemia we observed could have resulted from reduced food intake and consequent malnutrition, which is a common cause of reduced albumin synthesis (Rothschild et al., 1972). Voluntary reductions in food intake are common during parasitic infections in livestock (Holmes, 1987; Kyriazakis et al., 1998), but whether parasite-infected kangaroos also exhibit anorexia is unknown.

Our study demonstrates that when levels of G. trifidospicularis are low, kangaroo hosts are able to tolerate their helminth community and exhibit few subclinical effects of infection. Increasing resource acquisition may be pivotal in allowing hosts to reduce the potential costs of parasitism. Livestock hosts on high-protein diets show reduced pathophysiological responses to parasitism (reviewed by Van Houtert and Sykes, 1996). For example, infected sheep maintained on high-protein diets increase their live weight gain by around 85% compared with control sheep (Van Houtert et al., 1995). Similarly, juvenile kangaroos could compensate for the costs of parasitism by using high-quality resources that offset nutrient and resource depletion. The best predictors of body condition in free-ranging kangaroos are the biomass and quality of forage (Shepherd, 1987; Moss and Croft, 2009), and juvenile kangaroos cannot sustain their growth on a poor-quality diet, even when they are still suckling (Munn and Dawson, 2003, 2006). The climate at Anglesea is mild, and the fairways are irrigated, fertilized and regularly mown, which encourages new foliage with high protein content (Jarman, 1974; Mattson, 1980). Furthermore, although the juveniles in our study should have been weaned, maternal care in kangaroos is a variable trait that can be influenced by a number of factors, including a mother's age and/or body condition, and environmental conditions (Stuart-Dick and Higginbottom, 1989).

Table 1. Repeated measures ANOVA of treatment effects on selected ecological growth parameters for control and anthelmintic-treated juvenile eastern grey kangaroos at the Anglesea Golf Club, Victoria, Australia, from May to September 2012. The summaries show the sample size (n), the mean (±SE) increase (first to final capture) and the P-value for each group. The statistical power of this experiment to detect a significant difference between treatment and control groups was calculated for effect sizes of 20% and 80%.
Differential maternal investment among individual mothers could explain the high variability in the growth parameters we measured, leading to low power to detect change. The combination of resources from lactation and protein-rich pasture could have allowed infected juveniles to maintain their growth and body condition in spite of parasitic infections.

We have strong experimental evidence that, contrary to our predictions, juvenile kangaroos experience few subclinical effects of parasitism. While parasites clearly have subclinical effects in many herbivorous hosts, data on free-ranging wildlife can be difficult to obtain, and experimental field manipulations like ours are imperative for investigating such relationships. Importantly, our study is one of the first to combine both growth parameters and haematological parameters, and supports the suggestion by Budischak et al. (2012) that haematological parameters are more sensitive to the subclinical effects of parasitism. In our study, where levels of G. trifidospicularis were relatively low, gastrointestinal helminths had minimal subclinical effects on the juvenile eastern grey kangaroos. However, even small differences in growth and blood parameters could be biologically meaningful, and may have implications for individuals later in life, particularly for life-history traits (Metcalfe and Monaghan, 2001). Future studies should take a longitudinal, individual-based approach toward examining the cumulative effects of parasites over time.

Conflicts of Interest

There are no known conflicts of interest.
2016-05-12T22:15:10.714Z
2014-04-13T00:00:00.000
{ "year": 2014, "sha1": "b4271231ce622e976a89101500934381bbaf440e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijppaw.2014.03.005", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4271231ce622e976a89101500934381bbaf440e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
245576587
pes2o/s2orc
v3-fos-license
Prime editor-mediated correction of a pathogenic mutation in purebred dogs

Canine hip dysplasia (HD) is a multifactorial disease caused by interactions between genetic and environmental factors. HD, which mainly occurs in medium- to large-sized dogs, is a disease that causes severe pain and requires surgical intervention. However, the procedure is not straightforward, and the only way to ameliorate the situation is to exclude individual dogs with HD from breeding programs. Recently, prime editing (PE), a novel genome editing tool based on the CRISPR-Cas9 system, has been developed and validated in plants and mice. In this study, we successfully corrected a mutation related to HD in Labrador retriever dogs for the first time. We collected cells from a dog diagnosed with HD, corrected the mutation using PE, and generated mutation-corrected dogs by somatic cell nuclear transfer. The results indicate that PE technology can potentially be used as a platform to correct genetic defects in dogs.

Domestic dogs (Canis lupus familiaris) are the most variable mammalian species 1,2. More than 400 breeds have been developed by intense artificial selection from a limited number of founders 1,3. Consequently, purebred dogs have a greater risk of suffering from genetic disorders than any other species 4. A number of scientific publications have described the health problems of purebred dogs 5-11 and emphasized the need for action 9-14; the problem has also been highlighted recently in public media 15. As a result, many breeders are increasingly using DNA tests to reduce the frequency of deleterious mutations in their breeding programs 4. However, no direct treatment has been developed to solve these inherent problems. In particular, canine hip dysplasia (HD) is the most common inherited polygenic orthopedic trait in dogs; however, there is still no ideal medical or surgical treatment 16.

Genome editing tools, such as CRISPR/Cas9 technology, can be a solution to this genetic problem. In particular, prime editing (PE), a novel and universal precision genome-editing technology, has great potential for the correction of pathogenic alleles in purebred dogs. Unlike the conventional CRISPR/Cas9 system, PE does not induce double-strand breaks, which can introduce random indel mutations at the target locus. PE is designed to nick a single strand of the target genomic locus and then install the intended sequence change using reverse transcriptase 17. While PE was originally developed in human cells 17, it has recently been used to develop genome-edited plant varieties and animals, including mice and fruit flies 18-20. However, there has been no report on the use of PE in dogs.

In this study, we used PE to determine whether a point mutation associated with canine HD can be corrected in canine fibroblasts. Furthermore, we attempted to confirm the possibility of producing genetically modified dogs by somatic cell nuclear transfer (SCNT) using PE-mediated gene-corrected canine fibroblasts.

Results

Vector construction and prime editor guide RNA design

We selected the target point mutation locus for PE-mediated gene correction based on our previous study, which identified 25 SNPs correlating with hip dysplasia in dogs 21. Among the 25 SNPs, BICF2S23030416 (Supplement Table 1) was selected as the genome editing target in this study because it showed the highest statistical significance for canine HD (p < 0.0001).
The BICF2S23030416 SNP is located in an intergenic region on chromosome 4 and is hypothesized to function in regulating MSH homeobox 2 (MSX2). MSX2 has been utilized as a representative marker for the induction of cell ossification 22,23. Our previous report showed that dogs (Labrador retrievers) with HD had a T to C point mutation at the BICF2S23030416 locus among the 25 SNP mutations assessed; therefore, we designed a PE vector to correct the T to C point mutation at this target locus. A lentiviral vector expressing both the PE enzyme (CRISPR/Cas9 nickase fused with M-MLV reverse transcriptase) and a prime editor guide RNA (pegRNA) was constructed and cloned (Fig. 1a). The pegRNA consisted of a primer binding site that could hybridize with sequences near the BICF2S23030416 locus, and a reverse transcriptase template containing the corrected genomic sequence at the point mutation site (Fig. 1b).

Establishment of a gene-corrected canine fibroblast cell line

Ear fibroblasts were collected from a Labrador retriever dog diagnosed with HD (donor patient) and cultured in vitro. The T to C point mutation at the BICF2S23030416 locus in the fibroblasts was confirmed by sequence analysis (Fig. 1c and Supplementary Figure 1, donor patient). Lentiviral particles expressing PE were transduced into the fibroblasts; the rate of transduction was measured by expression of the enhanced green fluorescent protein (EGFP) reporter gene. After 5 days of culture, gDNA was isolated from the transduced fibroblasts. Sequence analysis confirmed that the PE-treated fibroblasts had a T sequence at the BICF2S23030416 locus, mixed with C, indicating that PE successfully recovered one allele of the point mutation at the target site (Fig. 1c).

Generation of gene-corrected dogs

After showing that we could correct the point mutation in fibroblasts, we demonstrated that we could produce gene-corrected dogs using SCNT. Among the fibroblasts produced using PE, we selected 'C>T cell #1' for SCNT. In total, 18 reconstructed embryos were generated by SCNT using PE-treated fibroblasts and then surgically transferred into the oviducts of a surrogate mother (Table 1). Pregnancy was detected by ultrasonography at 40 days of gestation, and two puppies weighing 656 g (C>T dog #1) and 585 g (C>T dog #2) were delivered by cesarean section (Fig. 2a). We confirmed the integration of the PE vector by EGFP expression (Fig. 2b) and polymerase chain reaction (PCR) analysis (Fig. 2c). As expected, the C to T gene correction at the BICF2S23030416 locus was confirmed in both puppies (Fig. 2d and Supplementary Figure 1). These results are in line with the sequence analysis data from the PE-treated fibroblasts used as donor cells for SCNT (Fig. 1c). We also performed an in silico analysis of potential off-target loci in C>T dog #1 and C>T dog #2. No off-target mutations were identified in any of the analyzed loci (Table 2).

Discussion

In the present study, we successfully generated, for the first time, two gene-corrected dogs cloned from a dog diagnosed with HD using PE technology. HD is a musculoskeletal disorder caused by an unstable connection between the femoral head and the acetabulum and is accompanied by severe pain. It is a common disorder in medium- to large-sized dogs, and is known to cause osteoarthritis, lameness and decreased mobility 24. HD is a polygenic disease caused by a combination of genetic and environmental factors 25.
Thus, to reduce the prevalence of HD, breeding strategies incorporating screening schemes are widely used 24. However, studies that eliminate the cause of HD by directly controlling the causative gene have not yet been reported in dogs.

PE technology is effective and a potential solution for correcting genetic mutations in specific canine breeds. It is a simple and highly efficient gene correction system compared with CRISPR/Cas9-mediated homology-directed repair (CRISPR-HDR) 17-19. The CRISPR-HDR method is dependent on cell division events and requires an additional donor DNA template to correct genetic mutations. PE overcomes the shortcomings of CRISPR-HDR; it can be performed at any stage of the cell cycle and does not require additional donor DNA 17. Therefore, PE is expected to be a very useful tool, enabling precise target sequence correction at specific loci in dogs. In addition, we also analyzed sequences from the potential off-target loci and did not find any unexpected mutations. The off-target analysis results revealed that the PE system is specific in canine cells. These findings are in line with previous studies demonstrating that PE-mediated base conversion is highly specific 17,18.

We corrected a single SNP in a dog with the HD phenotype. However, because multiple SNP mutations contribute to HD, additional gene correction at the other SNP loci related to HD might be needed to generate a fully HD-recovered canine breed. We regard the current study as the starting point for overcoming HD in purebred dogs. Thus, we integrated our PE vector into the genome of our gene-corrected dogs and plan to perform further studies focused on correcting the additional SNPs. The integrated PE system will induce spontaneous nickase activity at the target site. In the current study, we did not find any indel mutations in our sequencing results; however, a more stable form of PE delivery, such as a ribonucleoprotein (RNP), can be recommended for further studies. Precise editing of pathogenic SNPs in dogs also provides valuable information for understanding the role of each SNP as it relates to HD. Since canine HD is remarkably similar in clinical expression and pathogenesis to human HD 26, information gleaned from gene-corrected dogs may be very useful for understanding human HD. Thus, PE may be a very useful tool for generating genome-edited dog models to study human diseases.

Table 1. Production of C>T dogs by somatic cell nuclear transfer (SCNT). The number of oocytes used for SCNT is shown. A total of 42 mature oocytes were used. Three dogs donated oocytes, and one dog was used as a surrogate mother. A total of two cloned dogs were achieved.

In conclusion, we successfully confirmed the feasibility of PE in dogs and produced HD-related gene-corrected dogs using PE. To the best of our knowledge, this is the first study to adapt PE for use in a canine system. Further studies to analyze the gait, behavior and mobility of the current gene-corrected dogs, and the generation of additional gene-corrected dogs, are needed to understand the relationship between each SNP and HD.

Materials and methods

Ethics statement

The experimental procedures and methods used in this study were approved by the Animal Welfare and Ethics Office (2019012A-CNU-174), Chungnam National University, Daejeon, and performed according to "The Guide for the Care and Use of Laboratory Animals" published by the IACUC of Chungnam National University.
Female mixed-breed dogs from 2 to 6 years of age were used in this study as oocyte donors and embryo transfer recipients. The dogs were housed indoors and fed once daily, with water available ad libitum. All methods are reported in accordance with the ARRIVE guidelines (https://arriveguidelines.org) for the reporting of animal experiments in the Methods section.

Construction of prime editor vector and production of lentiviral particles

The vector for PE was purchased from Addgene (Watertown, MA, USA: #135955) and modified to correct HD-related SNPs. Briefly, the CMV promoter was obtained by PCR using the primer sets 5′-gaattcttgacattgattattgactag-3′ and 5′-tctagaaatttcgataagccagtaagc-3′, and inserted into the vector by EcoRI and XbaI (NEB Inc., MA, USA: #R0101M and #R0145M) enzyme digestion. The pegRNA targeting the HD locus was newly synthesized and then added to the vector using PacI (NEB Inc., MA, USA: #R0547S) and EcoRI. Finally, the vector was confirmed by sequencing. The lentiviral particles of the PE vector were produced by a commercial vendor (Lugen SCI, Inc., Bucheon, South Korea).

Collection and establishment of canine fibroblast cell lines, transduction, and transgene analysis

Fibroblasts were collected from the ears of an 18-month-old Labrador retriever diagnosed with HD (donor patient). The primary fibroblasts were cultured in vitro using culture medium composed of DMEM-GlutaMAX, 15% fetal bovine serum, and 1% penicillin/streptomycin solution (GIBCO, Inc.). For transduction, PE lentiviral particles at a multiplicity of infection (MOI) of 100, with 1 μg/mL polybrene, were transduced into 1 × 10^5 fibroblasts per well of a 12-well plate. Transgene expression was confirmed by EGFP, and integration of the vector was confirmed by sequence analysis.

Collection of in vivo matured canine oocytes

We collected mature oocytes from dogs as described previously 27. The concentration of progesterone in the blood was measured to determine the optimal timing for harvesting mature oocytes. After confirming the time of estrus, blood was collected, and progesterone was measured using VET Chroma (ANIVET Inc., Chuncheon, South Korea). When the analyzed progesterone level was in the range of 4-7 ng/mL, we considered that day as ovulation. Three days after ovulation, mature oocytes were surgically collected. During the procedure, all dogs were treated with ketamine and xylazine at a concentration of 6 mg/kg, and anesthesia was maintained with 2% isoflurane. After exposing the ovary and uterus, a 24G intravenous catheter was inserted into the oviductal lumen near the uterotubal junction, and culture medium was flushed through to collect mature oocytes. The culture medium was prepared by adding 2 mM NaHCO3, 1% penicillin/streptomycin, 0.5% bovine serum albumin, and 10% FBS to medium 199 containing 25 mM HEPES.

SCNT and embryo transfer

To generate gene-corrected dogs, SCNT followed by embryo transfer was performed following the method described elsewhere 27. Briefly, in vivo matured oocytes with the first polar body were used for micromanipulation. Metaphase chromosomes were removed from the oocytes by aspiration. A single cell (C>T cell) was transferred into the perivitelline space of an enucleated oocyte, and each donor cell-cytoplast couplet was fused by two pulses of direct current (24-26 V for 15 μs) using an Electro-Cell fusion apparatus.
The fused SCNT embryos were chemically activated by incubation with 10 μM calcium ionophore (Sigma) followed by 1.9 mM 6-dimethylaminopurine (6-DMAP). The activated SCNT embryos were surgically transferred into the oviducts of estrus-synchronized surrogates. Pregnancy was confirmed by ultrasonography 30 days after embryo transfer.

PCR validation and sequencing analysis. Transgene integration into the genomes of transduced fibroblasts and gene-corrected dogs was confirmed by PCR. The PCR primers used to validate the Cas9 sequence in the vector were 5′-catcgctattaccatggtgat-3′ and 5′-ctcttgcagatagcagatcc-3′. These primer sets detect the linkage between the CMV promoter and the dCas9 of the vector used in this study. Sequencing of the target locus was performed to validate the PE-mediated gene correction. The sequencing primers used were 5′-gacgccaagggagcagatatt-3′ and 5′-cctctcttatgagaacagcat-3′ (Bioneer Inc., Daejeon, South Korea). In addition, TA cloning was performed for accurate sequencing analysis using the products generated through PCR (Supplementary Fig. 1). PCR products and T vector (Promega Inc., WI, USA: #A1360) were mixed at a ratio of 1:3 and ligated (NEB Inc., MA, USA: #M0202), and plasmid DNA was isolated and purified from the resulting bacterial colonies. After confirming the insert by EcoRI restriction digestion, sequencing analysis was performed.

Analysis of off-target mutations in gene-corrected dogs. Potential off-target loci were identified in silico using Cas-OFFinder (http://www.rgenome.net/cas-offinder/). We selected two potential off-target loci with two mismatches and another three loci with three mismatches relative to the genomic target sequence of the pegRNA used in this study. The potential off-target loci were PCR amplified from the genomic DNA of C>T dog #1 and C>T dog #2, and sequencing analysis was performed (Supplementary Table 2).

Ethics approval and consent to participate. In conducting this study, we collected cells from a retriever with hip dysplasia after the study had been explained to the owner and consent was obtained. The experimental procedures and methods used in this study were approved by the Animal Welfare and Ethics Office (CNU-01090) of Chungnam National University, Daejeon, and performed according to the Guide for the Care and Use of Laboratory Animals published by the IACUC of Chungnam National University. All methods are reported in accordance with the ARRIVE guidelines (https://arriveguidelines.org) for the reporting of animal experiments.

Data availability. The datasets generated and/or analysed during the current study are not publicly available because some data are required for our further studies, but they are available from the corresponding author on reasonable request.
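As a footnote to the sequencing analysis described above, validating the C>T correction from TA-clone reads ultimately reduces to tallying which base occupies the target position. A minimal sketch with placeholder flanking sequences (not the real HD locus) is shown below; a real analysis would also handle reverse complements and base quality.

```python
# Minimal sketch of tallying the corrected base from TA-clone sequencing
# reads. The flanking sequences and reads are placeholders, not the real
# HD locus; real analysis would also handle reverse complements and quality.

from collections import Counter

def tally_allele(reads, flank5, flank3):
    """Count the base observed between two fixed flanking sequences."""
    counts = Counter()
    for read in reads:
        i = read.find(flank5)
        if i == -1:
            continue
        pos = i + len(flank5)                    # position of the SNP base
        if read[pos + 1 : pos + 1 + len(flank3)] == flank3:
            counts[read[pos]] += 1
    return counts

reads = ["AAACGTGTTTACCAA",                      # corrected allele (T)
         "AAACGTGCTTACCAA"]                      # uncorrected allele (C)
print(tally_allele(reads, "ACGTG", "TTACC"))     # Counter({'T': 1, 'C': 1})
```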
Protective Effects of Antrodia Cinnamomea Against Liver Injury

Chinese herbal medicine (中草藥) attracts much attention in the treatment of liver injuries. Numerous studies have revealed various biological activities of medicinal mushrooms such as Antrodia Cinnamomea (牛樟芝). Although A. cinnamomea is rare in the wild, recent developments in fermentation and cultivation technologies make the mycelia and fruiting bodies of this valuable medicinal mushroom readily available. Liver diseases such as fatty liver, hepatitis, hepatic fibrosis, and liver cancer are complicated processes of liver injury that have a tremendous impact on human society. In this article, we reviewed studies on the hepatoprotective effects of the fruiting bodies and mycelia of A. cinnamomea performed in different experimental models. The results of those studies suggest the potential application of A. cinnamomea in preventing and treating liver diseases and its potential to be developed into health foods or new drugs.

Introduction

Several medicinal mushrooms are known to be valuable for their effectiveness in the treatment of diseases. Among the popular medicinal mushrooms such as Antrodia Cinnamomea (Figure 1), Ganoderma lucidum, Cordyceps sinensis, and Phellinus linteus, A. cinnamomea can be found only in Taiwan. Antrodia Cinnamomea is used to treat liver diseases, hypertension, abdominal pain, diarrhea, and other diseases (Ao et al., 2009). Historically, indigenous Taiwanese people have used A. cinnamomea to ameliorate liver disorders from excessive alcohol consumption. However, it is now becoming more difficult to collect wild A. cinnamomea fruiting bodies. With the development of fermentation technology, the mycelia of many medicinal mushrooms can now be obtained through submerged fermentation. This technology can effectively manage the identity and purity of medicinal mushrooms without contamination. In this review, we discuss the hepatoprotective activities of A. cinnamomea in animal models of carbon tetrachloride (CCl4)- and alcohol-induced liver injuries, as well as its in vitro anti-liver cancer activity.

The liver is the largest organ responsible for a spectrum of functions, including the uptake, metabolism, conjugation, and excretion of various endogenous and foreign substances (Hoekstra et al., 2012). Chronic hepatitis or toxification leads to severe liver injury. The damaged hepatocytes are initially denatured and then undergo fibrosis and necrosis. This process eventually leads to hepatoma (Friedman, 1997). Antrodia Cinnamomea, a species of the genus Antrodia (Polyporaceae), is a parasitic fungus that lives in the inner cavity of Cinnamomum kanehirai, which is endemic to Taiwan. Antrodia Cinnamomea is believed to be one of the most potent liver-protecting herbs in Taiwan (Ao et al., 2009). Numerous reports have been published on the chemical components of A. cinnamomea. Compounds isolated from this mushroom, such as benzenoids, diterpenes, triterpenoids, steroids, maleic/succinic acid derivatives, and polysaccharides, have been reported to exhibit some biological activities. Table 1 summarizes the reported biological activities of A. cinnamomea, including antiallergy, anticancer, antihypertension, anti-inflammation, antioxidation, hepatoprotection, neuroprotection, and immunomodulation.
Antrodia Cinnamomea and Alcohol-induced Liver Injury

Alcoholic liver disease (ALD) is liver injury induced by alcohol consumption and remains one of the most common causes of chronic liver disease worldwide. ALD includes steatosis (fatty liver), steatohepatitis (alcoholic hepatitis), and cirrhosis (Day and Yeaman, 1994). Steatosis is the earliest response of the liver to excessive alcohol use and is characterized by the accumulation of fat in hepatocytes. Steatohepatitis is characterized by the infiltration of inflammatory cells into hepatocytes and by hepatocellular injury (Gao and Bataller, 2011). Alcohol consumption suppresses the antifibrotic effects of natural killer cells and interferon-γ, and therefore enhances the progression of liver fibrosis. Alcohol promotes the accumulation of fat in the liver mainly by the substitution of ethanol for fatty acids as the major hepatic fuel (Baraona and Lieber, 1979). Ethanol increases fatty acid synthesis via acetaldehyde through the upregulation of sterol regulatory element-binding protein 1c (SREBP-1c). AMP-activated protein kinase (AMPK) (You et al., 2004), sirtuin 1 (You et al., 2008), adiponectin (You, 2009), and signal transducer and activator of transcription 3 (STAT3) (Horiguchi et al., 2008) were reported to be downregulated by alcohol. The DNA-binding and transcriptional activation activities of peroxisome proliferator-activated receptor-α, a nuclear hormone receptor, in hepatocytes are directly inhibited by acetaldehyde (Galli et al., 2001).

Three main pathways are involved in the metabolism of ethanol: the alcohol dehydrogenase (ADH) pathway in the cytosol, the microsomal ethanol-oxidizing system (MEOS) in the endoplasmic reticulum, and the catalase pathway in peroxisomes (Jiménez-López et al., 2002). The ADH pathway is the major metabolic pathway during the early stage of chronic alcohol liver injury. During the ADH-mediated oxidation of ethanol, hydrogen is transferred from the substrate to the cofactor nicotinamide adenine dinucleotide (NAD), resulting in excess conversion to its reduced form (NADH) along with the production of acetaldehyde (Cronholm et al., 1988). Excess NADH changes the redox state and leads to metabolic irregularity in the liver. The increase in the α-glycerophosphate level and the suppression of the citric acid cycle are due to the elevated NADH-to-NAD ratio, thus favoring the accumulation of hepatic triglycerides in the liver (Ao et al., 2009). In the early 1960s, Lieber et al. (1963) formulated a liquid ethanol diet and used experimental models to study ethanol-induced hepatotoxicity. Since then, the hepatoprotective effects of herbs, natural products, and chemicals have been widely investigated using these models. In Taiwan, A. cinnamomea is believed to effectively ameliorate liver disorders induced by excessive alcohol consumption. However, the effects of A. cinnamomea, either its fruiting bodies or its mycelia, on alcohol-induced liver injuries have only recently been reported (Huang et al., 2010a; Lu et al., 2007, 2011; Kumar et al., 2011; Wu et al., 2011).

Hepatoprotective Effect of A. cinnamomea Fruiting Bodies against Alcohol-induced Liver Injury

The effects of A. cinnamomea fruiting bodies on alcohol-induced liver damage have been reported. Huang et al. (2010a) investigated the effects of A. cinnamomea fruiting bodies in a chronic alcohol consumption model. Alcoholic fatty liver disease was induced by adding 20% (w/w) alcohol to the drinking water of experimental rats.
Antrodia Cinnamomea fruiting bodies (0.1 g per kilogram body weight per day; g/kg BW/day) orally administered to the chronic alcohol consumption group for 4 weeks increased the levels of fecal cholesterol and bile acid. The gene expression of 3-hydroxy-3-methylglutaryl-CoA reductase, SREBP-1c, acetyl-CoA carboxylase, fatty acid synthase, and malic enzyme was downregulated by A. cinnamomea fruiting bodies. Histological examination revealed that the alcohol-induced liver injuries, i.e., hepatocyte necrosis and inflammatory cell infiltration, were prevented by A. cinnamomea fruiting bodies (0.1 g/kg BW/day) and that the effect was even better than that of silymarin (0.25 g/kg BW/day). Wu et al. (2011) investigated the beneficial effects of A. cinnamomea fruiting bodies on alcohol-induced liver fibrosis. The expression of hepatic mRNAs, i.e., matrix metalloproteinase (MMP)-9, tumor necrosis factor (TNF)-α, Kruppel-like factor (KLF)-6, and transforming growth factor (TGF)-β1, was downregulated by orally administered A. cinnamomea fruiting bodies (0.025 g/kg BW, once a day). The hepatoprotective effect of A. cinnamomea fruiting bodies (0.025 g/kg BW, once a day) was comparable to that of silymarin (0.25 g/kg BW, once a day). This protective effect might be accounted for by the acceleration of alcohol clearance and the suppression of the ethanol-induced elevation of MMP-9, TNF-α, KLF-6, and TGF-β1 levels.

Hepatoprotective Effect of A. cinnamomea Mycelia against Alcohol-induced Liver Injury

Lu et al. (2007) reported the effects of A. cinnamomea mycelia against ethanol-induced liver injury. Male Sprague-Dawley (SD) rats were orally administered 0.5 and 1.0 g/kg BW of A. cinnamomea mycelia for 10 days. At the end of the 10th day, rats were administered ethanol (5.0 g/kg BW) by gavage to induce acute hepatic injury. The levels of serum aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and bilirubin in rats given A. cinnamomea were comparable to those in the silymarin positive group (0.1 g/kg BW). Thus, A. cinnamomea showed hepatoprotective activity in this acute ethanol-induced liver injury rat model. Lu et al. (2011) further reported that a triterpenoid-enriched fraction of the ethanolic extract of A. cinnamomea mycelia showed the greatest effectiveness in preventing ethanol-induced acute liver injury and free radical generation in rats. Kumar et al. (2011) evaluated the effect of the ethanolic extract of A. cinnamomea in ethanol-induced acute hepatotoxicity. Ethanolic extracts of A. cinnamomea mycelia (0.25, 0.5, and 1 g/kg BW, once a day) were orally administered to rats by gavage for 10 days. Silymarin (0.2 g/kg BW) was used as a positive control. After the final administration, hepatotoxicity was induced by administering ethanol (5 g/kg BW) through oral gavage. The results showed that the serum levels of ALT and AST in the group given the ethanolic extract of A. cinnamomea (1 g/kg BW) were comparable to those in the silymarin positive group (0.2 g/kg BW). The protein expression levels of heme oxygenase-1 (HO-1) and NF-E2-related factor-2 (Nrf-2) were increased in the groups orally administered A. cinnamomea ethanolic extract or silymarin. The transcription factor Nrf-2 can induce the expression of a variety of cytoprotective and detoxifying genes (e.g., HO-1). This study suggested that the hepatoprotective effects of A. cinnamomea ethanolic extract might be mediated through a mechanism that involves Nrf-2 activation and the upregulation of downstream antioxidant gene expression.
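For orientation only, the rat doses above can be translated into rough human-equivalent doses using the standard body-surface-area scaling (HED = animal dose × animal Km / human Km, with Km = 6 for rat and 37 for a 60 kg adult). The sketch below applies this conversion to the 0.25-1 g/kg BW doses; it is an illustration of the calculation, not a dosing recommendation.

```python
# Orientation only: body-surface-area scaling of the rat doses above to a
# rough human-equivalent dose (HED = animal dose * animal Km / human Km;
# Km = 6 for rat, 37 for a 60 kg adult). Not a dosing recommendation.

RAT_KM, HUMAN_KM = 6, 37

def hed_mg_per_kg(rat_dose_mg_per_kg: float) -> float:
    return rat_dose_mg_per_kg * RAT_KM / HUMAN_KM

for dose_g in (0.25, 0.5, 1.0):                  # g/kg BW used in the rat studies
    print(f"{dose_g} g/kg in rat -> ~{hed_mg_per_kg(dose_g * 1000):.0f} mg/kg HED")
# 0.25 -> ~41, 0.5 -> ~81, 1.0 -> ~162 mg/kg
```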
Figure 2 shows the hepatoprotective mechanisms of A. cinnamomea against ethanol-induced liver injuries according to the above results.

Antrodia Cinnamomea and CCl4-induced Liver Injury

Chronic CCl4 treatment causes liver injury, oxidative stress, and nitrosative stress. Collagen accumulation, which causes hepatic fibrosis, was also found to result from chronic CCl4 treatment (Tipoe et al., 2010). The hepatotoxicity of CCl4 consists of two steps. The first step involves the production of free radicals (CCl3• and CCl3OO•) through hepatic metabolism mediated by the cytochrome P450 system. The second phase involves the activation of Kupffer cells, which is accompanied by the production of inflammatory and profibrogenic mediators. Cytochrome P450 2E1 (CYP2E1) knockout mice (cyp2e1-/-) were used to investigate the involvement of CYP2E1 in the development of CCl4-induced hepatotoxicity, and CYP2E1 was found to be a major factor in CCl4-induced hepatotoxicity in mice (Wong et al., 1998). The enzymatic activation of CCl4, leading to the generation of CCl3 radicals, also disrupts the structure and function of lipids and proteins in the membrane and cell organelles (Xiao et al., 2012). The injured hepatic tissue is degraded by MMPs, while tissue inhibitors of MMPs (TIMPs) act as regulators of MMPs, preventing further degradation and damage to the newly synthesized collagen and unaffected tissue (Hemmann et al., 2007). CCl4 intoxication increased the expression of mRNAs including MMP-2, MMP-9, TIMP-1, TIMP-2, TGF-β1, α-smooth muscle actin (SMA), and procollagen (Knittel et al., 2000). The synthesis of collagen leads to liver fibrosis. CCl4-induced liver injury involves increased expression of TGF-β1 mRNA and increased hydroxyproline (the unique component of collagen) content, as observed in liver fibrosis.

Hepatoprotective Effect of A. cinnamomea Fruiting Bodies against CCl4-induced Liver Injury

Hsiao et al. (2003) investigated the hepatoprotective activity of the water extract of A. cinnamomea fruiting bodies against CCl4-induced liver injury in ICR mice. Mice were subcutaneously injected with CCl4 (40% CCl4 diluted with olive oil, 0.1 mL/10 g BW) twice a week for 8 weeks to induce chronic chemical liver injury. Water extracts of A. cinnamomea fruiting bodies (0.25, 0.75, and 1.25 mg/kg BW per day, 4 days a week) were orally administered for 8 weeks. The levels of plasma transaminases, i.e., AST and ALT, were significantly lower in the group of mice orally administered the water extract of A. cinnamomea (1250 mg/mL) than in the corn oil control group, and were comparable to those in the silymarin positive group (100 mg/kg BW). The activities of hepatic superoxide dismutase and catalase were dose-dependently increased by treatment with the water extract of A. cinnamomea. The water extract of A. cinnamomea fruiting bodies showed an ability to scavenge 1,1-diphenyl-2-picrylhydrazyl radicals in an in vitro study. Thus, the water extract of A. cinnamomea fruiting bodies could effectively prevent CCl4-induced hepatotoxicity by scavenging the CCl4-derived free radicals or by inhibiting the inflammatory mediators in CCl4-mediated lipid peroxidation. The protective mechanisms of the fruiting bodies of A. cinnamomea against CCl4-induced liver injuries, liver fibrosis, and lipid peroxidation are shown in Figure 3.
Antrodia Cinnamomea and Liver Cancer

Hepatocellular carcinoma (HCC) is one of the most lethal malignancies worldwide, especially in Taiwan, China, Korea, and Sub-Saharan Africa (Marrero, 2006). When cancer cells become invasive and metastatic, excess breakdown of the extracellular matrix (ECM) occurs (Christofori et al., 2006). MMP-2 and MMP-9, members of the MMP family that degrade the ECM during tissue remodeling, are believed to play important roles in the invasion and metastasis of liver cancers (Hofmann et al., 2005).

Protective Effect of A. cinnamomea Fruiting Bodies against Liver Cancer

Although the hepatoprotective effects of A. cinnamomea fruiting bodies and mycelia have been reported, studies on the effect of A. cinnamomea fruiting bodies in liver cancer are few. Hsu et al. (2005) reported the apoptotic effects of the ethyl acetate extract of A. cinnamomea fruiting bodies in human HCC cell lines (i.e., HepG2 and PLC/PRF/5). They found that the extract inhibited cell survival signaling by increasing the expression of IκBα in the cytosol and decreasing the amount of NF-κB in the nucleus, leading to the increased expression of Bcl-XL in both HepG2 and PLC/PRF/5 cells. Hsu et al. (2007) further reported the anti-invasion potential of the ethyl acetate extract of A. cinnamomea fruiting bodies in PLC/PRF/5 cells. This extract decreased the expression of MMP-2, MMP-9, membrane type 1-MMP, and vascular endothelial growth factor, which resulted in anti-invasion effects on PLC/PRF/5 cells. Hsieh et al. (2010) isolated three triterpenoid compounds, antcin A, antcin C, and methyl antcinate A, from the fruiting bodies and investigated the growth-inhibitory effects of these three compounds in human liver cancer cells (i.e., HepG2, Hep3B, and Huh7). Methyl antcinate A showed more inhibitory activity than the other two compounds and was most potent in Huh7 cells. Methyl antcinate A decreased the levels of antiapoptotic proteins (i.e., Bcl-2 and Bcl-XL), increased cytochrome c release from the mitochondria to the cytosol, increased the mitochondrial translocation of cofilin and Bax, and enhanced NADPH activity in cells, suggesting that the apoptotic effects of methyl antcinate A were mediated partially through a mitochondrial signaling pathway. These results indicated that methyl antcinate A isolated from A. cinnamomea fruiting bodies is a potential candidate for the treatment of HCC. Hsieh et al. (2011) Song et al. (2005b) reported the effects of MEM on the Fas and Bcl-2 pathways. Since Fas belongs to the TNF receptor (TNFR) superfamily, Fas ligands are known to induce apoptosis through Fas-induced caspase cascade activation, first caspase-8 and then caspase-3 (Yoon et al., 2002). The expression of Fas was significantly increased in MEM-treated HepG2 cells (Song et al., 2005b). The expression of death receptors (DR), including DR3, DR4, TNFRI, and TNFRII, was decreased by MEM in both a dose- and time-dependent manner. When HepG2 cells were treated with MEM, the expression of the antiapoptotic protein Bcl-2 was decreased, but that of the proapoptotic protein Bcl-XL was increased. MEM-induced cell apoptosis possibly involves the upregulation of Fas expression, which promotes Fas and FasL ligation and then passes the death message to cytosolic messengers. Thereby, procaspase-8 is activated to caspase-8, triggering the caspase activation cascade. The studies of Song et al. (2005a,b) did not identify the active compounds that contribute to the anti-HCC effects of A. cinnamomea mycelia extracts.
Lin et al. (2010) reported 4-acetylantroquinonol B, isolated from the ethanolic extract of A. cinnamomea mycelia, to be the major antihepatoma constituent. The authors used antiproliferative activity-guided isolation to trace the active compound and identify its structure. 4-Acetylantroquinonol B was purified and identified as the most potent compound in inhibiting the proliferation of HepG2 cells. The authors also reported that the antiproliferation mechanism of 4-acetylantroquinonol B (Lin and Chiang, 2011) in HepG2 cells was through a dose-dependent cell cycle arrest at the G1 phase. In the G1 phase, cyclin D, cyclin E, CDK2, and CDK4/6 work together to promote cell cycle progression. In 4-acetylantroquinonol B-treated HepG2 cells, the protein expression of CDK2 and CDK4 was slightly decreased. p27, a cell-cycle regulator, was reported to be inactivated in HCC and is considered a suppressor of liver cancer cells. The expression level of p27 in HepG2 cells was increased by 4-acetylantroquinonol B. Thus, the growth-inhibitory effect of 4-acetylantroquinonol B was mostly mediated by cell cycle arrest through the decrease in CDK2 and CDK4 and the increase in p27. Figure 4 shows the protective mechanisms of A. cinnamomea against liver cancer cells, including growth inhibition, apoptosis, and cell cycle arrest.

Submerged fermentation or solid-state culture is a process extensively used for obtaining mycelia. Mycelial components such as maleic and succinic acid derivatives and triterpenoids have also been investigated (Nakamura et al., 2004; Shao et al., 2008). Nakamura et al. (2004) reported five new maleic and succinic acid derivatives together with ergosterol peroxide from A. cinnamomea mycelia. Shao et al. (2008) identified 12 compounds from the methanolic extract of A. cinnamomea mycelia. Among them are four new compounds: 10-hydroxy-γ-dodecalactone, 11-hydroxy-γ-dodecalactone, 2-(2-hydroxyethyl)phenol, and 12-hydroxydodecanoic acid methyl ester; the other eight compounds were ergostatrien-3β-ol (ST-1), ergosterol peroxide, methyl (4-hydroxyphenyl)acetate, vanillin, 4-hydroxybenzaldehyde, hexadecanoic acid, 5-methoxymethylfuran-2-carbaldehyde, and 5-hydroxymethylfuran-2-carbaldehyde. Lu et al. (2011) investigated the hepatoprotective effect of fractions from ethanolic extracts of A. cinnamomea mycelia and analyzed the composition of fractions (Fr-) I to III. Fr-I is a triterpenoid-enriched fraction, making up 359.8 mg per gram of mycelia. Fr-II is the polysaccharide fraction (483.1 mg/g), which consists of rhamnose, arabinose, xylose, mannose, glucose, and galactose in a ratio of 0.11:0.23:0.06:0.77:0.29:1.00. Finally, Fr-III is a polyphenol-enriched fraction. The triterpenoid-enriched fraction, Fr-I, showed the best hepatoprotective activity against ethanol-induced acute liver injury in SD rats. However, the active compounds contributing to these hepatoprotective effects have not yet been reported. The active compounds isolated from A. cinnamomea shown to exhibit anti-liver cancer activities are listed in Table 2. Antcin A, antcin B, antcin C, and their derivatives methyl antcinate A and methyl antcinate B belong to the ergostane-type triterpenoids. Antroquinonol and 4-acetylantroquinonol B are ubiquinones.
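Since Fr-II is reported only as monosaccharide ratios relative to galactose, converting them to percentage composition makes the profile easier to compare; the short sketch below does this with the ratios quoted above.

```python
# The Fr-II monosaccharide ratios quoted above (relative to galactose = 1.00)
# converted to percent composition, for easier comparison.

ratios = {"rhamnose": 0.11, "arabinose": 0.23, "xylose": 0.06,
          "mannose": 0.77, "glucose": 0.29, "galactose": 1.00}

total = sum(ratios.values())
for sugar, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{sugar:>9}: {100 * r / total:5.1f}%")
# galactose ~40.7%, mannose ~31.3%, glucose ~11.8%, arabinose ~9.3%, ...
```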
These triterpenoid and ubiquinone compounds were reported to have anti-liver cancer activities in human hepatoma cell lines such as HepG2, Hep3B, and Huh7. As shown in Figure 4, antcin B, methyl antcinate A, methyl antcinate B, and antroquinonol exerted apoptosis-inducing effects through mitochondrial- or ROS-mediated pathways, whereas 4-acetylantroquinonol B inhibited cell growth by inducing cell cycle arrest.

Conclusion

Antrodia Cinnamomea attracts much attention for its effectiveness in treating liver injuries, as shown in numerous published articles. The hepatoprotective effects of the fruiting bodies and mycelia of this mushroom have been investigated in animal models of ethanol- and CCl4-induced liver injury. The extracts of A. cinnamomea fruiting bodies and mycelia have also been investigated for their anti-liver cancer activities, and several active compounds were identified to have anticancer activity in vitro. The protective mechanisms of A. cinnamomea against ethanol-induced liver injuries were found to operate through the inhibition of fatty acid synthesis and liver fibrosis (Figure 2). In addition, A. cinnamomea inhibited lipid peroxidation and liver fibrosis in CCl4-intoxicated animals (Figure 3). As for the anti-liver cancer effects, seven compounds from A. cinnamomea were found to inhibit cell growth by activating apoptosis or cell cycle arrest in human hepatoma cells (Figure 4). The present report provides scientific evidence for the use of A. cinnamomea in ameliorating liver diseases. However, human clinical trials studying the effectiveness of A. cinnamomea are still ongoing (http://clinicaltrials.gov/ct2/show/results/NCT01287286). The results of the studies described here suggest the potential of A. cinnamomea to prevent and treat liver diseases, as well as its potential to be developed into health foods or new drugs.
Quasi-Zero Stiffness Vibration Sensing and Energy Harvesting Integration Based on Buckled Piezoelectric Euler Beam

This paper presents a novel quasi-zero stiffness vibration sensing and energy harvesting integration system for absolute displacement measurement based on a buckled piezoelectric Euler beam (BPEB) with quasi-zero stiffness (QZS) characteristics. On the one hand, the BPEB provides negative stiffness to the system, thus creating a vibration-free point within the system and transforming the absolute displacement measurement problem into a relative motion sensing problem. On the other hand, during the measurement process, the BPEB collects the vibration energy of the system, which can provide electrical energy for low-power relative motion sensing devices and remarkably suppress the frequency range of the jump phenomenon, thereby further expanding the frequency-domain measurement range of the sensing system. The results show that this system can measure the absolute motion signal of the tested object in low-frequency vibration under small excitation. By adjusting parameters such as the force-electric coupling coefficient and the damping ratio, the measurement accuracy of the sensing system can be improved. Furthermore, the system can convert the mechanical energy of vibration into electrical energy to power surrounding low-power sensors, or to provide part of their power. This could potentially achieve self-powered integrated quasi-zero stiffness vibration sensing, offering another approach for automation in wireless sensing systems and the Internet of Things.

Introduction

There is a widespread demand for displacement measurement in production and daily life, for example in isolation for moving vehicles [1,2], ship-mounted optical instrument protection [3], precision engineering [4], and scientific research [5]. Particularly in low-frequency vibration applications, it is often necessary to utilize the displacement caused by vibration to better describe system motion or achieve effective wideband vibration isolation. Accelerometers are widely used to measure the absolute motion of a dynamic system. However, it is almost impossible to accurately retrieve the absolute displacement from the measured acceleration signal in real time [6,7]. Geophones are commonly applied to provide absolute velocity measurements of motion. However, these sensors are often bulky and relatively expensive [3]. A wide range of methods, including laser displacement sensors [3] and Hall displacement sensors [3], can be used for the accurate measurement of relative displacement. However, these methods require a stationary reference point during usage.

In recent years, researchers have proposed various methods to improve displacement measurement techniques, including the use of new sensing materials, nano-fabrication technology, quasi-zero stiffness structures, and more. Over the past three decades, these structures have been extensively and deeply studied, primarily as high-performance low-frequency vibration isolators. By significantly reducing the system resonance peak and its frequency, an absolutely stable point in a broadband frequency domain can be created in the QZS system, and this is used in broadband vibration displacement measurement. By using two pre-deformed horizontal springs as the negative stiffness corrector, Sun et al. [6,7] presented measurement methods for mobile platforms.
By employing a pre-deformed scissor-like structure, Jing et al. [7] proposed a 3-D QZS-based vibration sensor system for 3-D absolute motion measurement. In 1958, Molyneux [8] first designed a passive vibration isolation device consisting of two horizontal springs and one vertical spring. The high-static low-dynamic (HSLD) characteristic of the device can be achieved by changing the geometric structure of the elastic elements. Kamil Kocak et al. [9] employed flexible beams and a quasi-zero stiffness structure to reduce the starting isolation frequency range. Chen et al. [10] used a combination of quasi-zero stiffness isolation systems to significantly suppress the vibration of a large vehicle-mounted optoelectronic tracking device on a moving platform. Bo et al. [11] used a lever-type quasi-zero stiffness (QZS) vibration isolator to establish a theoretical model with ECD-QZS-VI, further improving the vibration suppression performance in the resonance region. Fulcher et al. [12] and Kashdan et al. [13] studied the structure of buckled beams with bistable and negative stiffness characteristics; they designed a novel Euler-beam negative-stiffness quasi-zero stiffness (QZS) isolator and established an analytical model for the system and the Euler beam. Wang et al. [14] combined two subordinate quasi-zero stiffness mechanisms in parallel with a vertical spring to propose a novel dual quasi-zero stiffness mechanism for a nonlinear ultra-low-frequency vibration isolator, and analyzed its vibration isolation performance. Sun et al. [6] conducted a study on the use of quasi-zero stiffness structures in vibration measurement, comparing the absolute motion of several vibration platforms with the motion signals measured from these platforms. Xu et al. [15] designed a prototype quasi-zero stiffness (QZS) system that combines a vertical helical spring with two inclined bars connected to magnet springs; the quasi-zero stiffness condition can be easily achieved by adjusting the distance between the two magnet springs. Ye et al. [16] designed a quasi-zero stiffness isolation system that supports different loads and can isolate vibrations in the low-frequency range under multiple loads, effectively isolating vibrations in various load scenarios. Zhou et al. [17] and Sun et al. [18] designed semi-active electromagnetic isolators for tunable high-static low-dynamic stiffness (HSLDS) systems using magnetic mechanisms; these electromagnetic isolators allow the damping characteristics of the HSLDS system to be adjusted. In the author's previous study [19], a six-degree-of-freedom Stewart platform with QZS legs was established, achieving excellent vibration isolation performance in six directions.

At present, there are four main ways to convert vibration energy into electrical energy: the electrostatic method [20], the friction (triboelectric) generator method [21][22][23], the electromagnetic method [24][25][26], and the piezoelectric method [27][28][29]. Fang et al. [30] drew inspiration from bird motion and designed an isolation system for capturing broadband vibrations and harvesting energy on spacecraft; this system aims to provide wideband vibration isolation while simultaneously harnessing the generated energy.
Yan et al. [31] replaced permanent magnets with springs and designed a four-stable-state piezoelectric vibration energy harvester with geometric nonlinearity in the springs, which improved the power output of the harvester. Wang et al. [32] coupled an electromagnetic generator with a frictional generator to form an energy harvester that collects energy from ultralow-frequency vibration environments, and combined the harvester with a quasi-zero stiffness mechanism to enhance the energy harvesting performance. Wang et al. [33] researched extracting energy from low-excitation-level, ultra-low-frequency vibrations; they studied a rolling magnet system and proposed a novel quasi-zero stiffness electromagnetic energy harvester (QZS-EMEH) designed to efficiently collect energy from such vibrations. Koszewnik et al. [34] used a cantilever beam with a piezoelectric harvester for structural health monitoring. Lee et al. [35] used mechanical metamaterials and phononic crystals for energy harvesting, which have a wide range of potential applications for a renewable and ecologically benign energy transition. Cheng et al. [36] proposed a novel piezoelectric energy harvesting device with a high density of energy harvested from highway traffic. Wang et al. [37] improved energy harvesting by repositioning the piezoelectric patch (PZT) in the middle of a fixed-fixed elastic steel sheet instead of at the root, as is commonly the case. To reduce the working space of the energy harvesting mechanism, Su and Tseng [38] proposed an extended Charpy piezoelectric energy harvester, which increased the output power compared with traditional energy harvesting systems. Yang et al. [39] researched the development of a multi-stage oscillator for ultra-low-frequency vibration isolation and energy harvesting, addressing the challenges faced by existing oscillators in achieving effective vibration suppression and utilization under ultra-low-frequency excitations; the proposed multi-stage oscillator aims to overcome these limitations. The theories of nonlocal elasticity and surface elasticity have been used to analyze the nonlinear vibration of nano-piezoelectric structures [40][41][42]. Kiani [43][44][45] studied the axial buckling, vibration, and instability of current-carrying bundle elements in nanosystems. So far, most energy harvesting methods are based on amplifying vibrations or on combining vibration isolation with energy harvesting. Hsiao and Chung [46], Siahpour et al. [47], and Li et al. [48] used artificial intelligence methods for checking the quality of machine-generated questions or for sensor fault diagnosis.
Jia et al. [49] established a high-temperature strain gauge automatic calibration device that can simultaneously collect the output signals of the high-temperature strain gauge, the thermocouple signals, and the displacement signals of the grating ruler; the measurement results are used to calculate the theoretical mechanical strain. Information and data processing is also a very important stage. Good information processing technology can provide great help in different areas, such as deep nonlinear state space models [50], end-to-end dual-stream convolutional neural networks [51], quotient space theory [52], and fused MG-DTRS and NRS methods [53]. Currently, few scholars have combined quasi-zero stiffness vibration sensing and energy harvesting to study the measurement of absolute motion together with the collection of vibrational energy. This paper proposes a quasi-zero stiffness vibration sensing and energy harvesting integration based on a buckled piezoelectric Euler beam. This system allows the absolute displacement of an object to be measured without magnifying vibrations. Furthermore, it can convert mechanical energy from vibration into electrical energy to power nearby low-power devices or provide part of their electrical energy, potentially achieving self-powered integrated quasi-zero stiffness vibration sensing. Combining quasi-zero stiffness vibration sensing with energy harvesting not only allows the measurement of low-frequency and ultra-low-frequency vibrations of objects but also enables the collection of mechanical energy within the structure, providing a deeper understanding of an object's vibration characteristics. Moreover, this energy harvesting method can provide partial electrical energy for low-power devices such as active sensors and wireless sensors, reducing the reliance on traditional batteries. This combination offers new insights and methods for the design and optimization of vibration control and energy utilization in structures. It holds significant theoretical and practical implications for improving structural performance, extending device lifespan, and reducing energy consumption, and is expected to bring about more innovations and breakthroughs in engineering practice.

In the author's previous research [54], a three-axis torsional quasi-zero stiffness (TQZS) system was proposed to achieve torsional vibration sensing in three rotational degrees of freedom. The QZS system converts the absolute displacement measurement problem into a relative motion measurement problem by providing a wide-frequency vibration-free point. On one hand, to achieve vibration sensing, it is typically necessary to provide power to the relative motion measurement components through wiring or regular battery replacement. On the other hand, the mechanical energy carried by the vibration itself dissipates into the environment. When the object being measured undergoes small vibrations, the system can realize sensing measurements and collect vibration energy to provide a partial power supply for low-power devices. When the object being measured generates relatively large vibration amplitudes, the system can collect vibration energy to provide green and sustainable power for components such as relative motion measurement and wireless communication. Integrating energy harvesting and vibration displacement sensing will be beneficial for environmental protection and will reduce labor and material costs.
Based on the above considerations, quasi-zero stiffness vibration sensing and energy harvesting integration based on a buckled piezoelectric Euler beam is proposed. The piezoelectric Euler beam provides negative stiffness to the entire system, while the vertical spring provides positive stiffness. Due to the QZS structure's good vibration isolation characteristics over an extensive frequency range, there will be an absolute rest point within the system. When the system is fixed to the object being measured, the relative motion measured by the system can represent the absolute motion of the object being measured. On the other hand, as the system vibrates, the piezoelectric ceramic undergoes strain and generates electrical energy, enabling energy harvesting.

Structure of the Sensor System

The quasi-zero stiffness vibration sensing and energy harvesting integrated system, as shown in Figure 1, mainly consists of two Euler beams, a vertical spring, and a vibration energy harvesting mechanism. The Euler beam provides negative stiffness to the entire system, while the vertical spring provides positive stiffness. The combination of the Euler beam and the vertical spring forms the quasi-zero stiffness structure. The surface of the Euler beam is equipped with piezoelectric strain patches. When the load mass m undergoes relative reciprocating motion with respect to the object being measured, the Euler beam deforms, and the piezoelectric material undergoes deformation. Due to the piezoelectric effect, charges are generated on the surface of the piezoelectric ceramics, which can power the low-power relative motion sensing component (and wireless communication component) or provide partial electrical energy, thus enabling energy harvesting for absolute vibration measurement.

In the measurement of displacement, the vibration sensing system should be fixed on the measured object. It is assumed that the displacements of the substrate excitation and the load mass are U and S, respectively, giving a small relative vibration x = S − U. When the vibration sensor system has the QZS property, the vibration response S should be far less than the excitation U. So, U ≈ −x; that is, the absolute motion can be obtained by measuring the relative motion.
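Before developing the full model, this measurement principle can be checked numerically on a deliberately simplified stand-in: a base-excited oscillator with near-zero stiffness and light damping. The sketch below uses arbitrary illustrative parameters (not the paper's) and confirms that the payload stays nearly at rest, so the relative motion −x reproduces the base motion U.

```python
# Simplified stand-in for the QZS sensing principle: a base-excited oscillator
# with near-zero stiffness and light damping. All parameter values are
# arbitrary illustrations, not the paper's. With S ~ 0, x = S - U gives
# U ~ -x, i.e., the relative motion reproduces the absolute base motion.

import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.02, 0.01        # mass, damping, near-zero stiffness
w, U0 = 2.0, 1e-3                # base excitation frequency and amplitude

def rhs(t, y):
    s, sdot = y                  # absolute payload displacement and velocity
    u = U0 * np.sin(w * t)
    udot = U0 * w * np.cos(w * t)
    x, xdot = s - u, sdot - udot # relative motion across the QZS connection
    return [sdot, (-c * xdot - k * x) / m]

sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], max_step=0.05)
i = sol.t > 500                  # steady-state window
ratio = np.abs(sol.y[0][i]).max() / U0
print(f"payload/base amplitude ratio: {ratio:.3f}")   # << 1, so -x tracks U
```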
Modeling of the Sensor System

2.2.1. Modeling of the BPEB

Figure 2 shows a schematic diagram of the buckled piezoelectric Euler beam. The ends of the beam are supported on the bracket, and a piezoelectric sheet made of lead zirconate titanate piezoelectric ceramic (PZT-5H) is attached to the middle of the Euler beam. Here, z represents the coordinate perpendicular to the axis, and y represents the coordinate along the axis. One end of the beam is subjected to an axial force F in the y direction, and the displacement in the z direction at the middle of the beam is denoted as w. The output voltage of the piezoelectric material is V. It is assumed that the deformation and electric field of the piezoelectric ceramic are uniform in the z direction. Under the action of the axial force F, the geometric shape of the simply supported Euler beam can be regarded as a half-sine wave. Then, the deflection at various points on the Euler beam can be expressed as

$$\delta(y) = h \sin\!\left(\frac{\pi y}{L}\right) \quad (1)$$

where h is the deflection of the midpoint of the Euler beam and L is the length of the beam. The corresponding bending strain of the Euler beam is

$$\varepsilon(y, z) = -z\,\frac{\partial^2 \delta}{\partial y^2} = z \left(\frac{\pi}{L}\right)^2 h \sin\!\left(\frac{\pi y}{L}\right) \quad (2)$$

The stress of the Euler beam is

$$\sigma = E\,\varepsilon \quad (3)$$

where E is the Young's modulus of the Euler beam. The strain of the piezoelectric ceramic can be approximated by the strain at the beam surface, which can be expressed as

$$\varepsilon_t(y) = \frac{h_L}{2} \left(\frac{\pi}{L}\right)^2 h \sin\!\left(\frac{\pi y}{L}\right) \quad (4)$$

where h_L is the thickness of the Euler beam. Using the method of small elements, when the element size of the beam is dy and the Euler beam is deformed, the element rotates by an angle θ, and the projection of the element's length in the horizontal direction is dy cos θ. The displacement l at the end of the beam is

$$l = \int_0^L (1 - \cos\theta)\,\mathrm{d}y \quad (5)$$
The cos θ value is expanded into a power series, neglecting higher-order terms:

$$\cos\theta \approx 1 - \frac{\theta^2}{2} \quad (6)$$

The deflection δ and the angle θ are related by

$$\tan\theta = \frac{\partial \delta}{\partial y} \approx \theta \quad (7)$$

Substituting Equation (7) into Equation (6) and carrying out the integration in Equation (5) gives

$$l = \frac{\pi^2 h^2}{4L}, \qquad h = \frac{2L}{\pi}\sqrt{1 - q} \quad (8)$$

where q = 1 − l/L. For the piezoelectric cell of this sensing system, the simplified constitutive equations can be written as

$$\sigma_t = Y_t\,\varepsilon_t - e_{31} E_3, \qquad u_3 = e_{31}\,\varepsilon_t + \eta_{33} E_3 \quad (9)$$

where σ_t is the stress of the piezoelectric material, Y_t is its Young's modulus [55], e_31 is the piezoelectric constant, E_3 is the electric field strength, u_3 is the charge areal density, and η_33 is the permittivity. For a uniform electric field, the relationship between E_3 and the output voltage V of the piezoelectric material is

$$E_3 = \frac{V}{h_t} \quad (10)$$

where h_t is the thickness of the piezoelectric ceramic [55]. The output current I of the piezoelectric material is

$$I = D_t\,\frac{\mathrm{d}u_3}{\mathrm{d}t} \quad (11)$$

where D_t is the electrode (cross-sectional) area of the piezoelectric material. Substituting Equations (4), (9), and (10) into Equation (11) gives the output current of the piezoelectric material, Equation (12), where d_t is the width of the piezoelectric ceramics, L_t is the length of the piezoelectric ceramics [55], and χ is a geometric factor arising from the integration of the half-sine strain over the patch length. Now, substituting Equation (8) into Equation (12) yields the first electromechanical coupling relationship, Equation (13). Under a virtual displacement λl acting on the Euler beam, the virtual strain energy of the piezoelectric Euler beam is given by Equation (14), where V_t and V_L represent the volumes of the piezoelectric layer and the beam, respectively, and S_L is the cross-sectional area of the Euler beam. Combining the above equations gives Equation (15), where I_L represents the cross-sectional moment of inertia of the beam; I_L and χ are given by Equation (16). The second electromechanical coupling relationship of the piezoelectric Euler beam, Equation (17), is obtained by using the principle of virtual work. Extracting the constants, the piezoelectric coupling relationships of the two piezoelectric Euler beams can be further simplified to Equation (18), where Γ is the electromechanical coupling coefficient, C is the internal capacitance, and B is a mechanical parameter; these are given by Equation (19).
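The small-angle step above can be verified numerically: for the half-sine deflection, the exact end shortening ∫(1 − cos θ) dy stays close to the power-series result π²h²/(4L) for small h/L. A minimal check with illustrative dimensions:

```python
# Numerical check of the power-series step above: for delta(y) = h*sin(pi*y/L),
# the exact end-shortening integral of (1 - cos(theta)) dy is close to
# pi^2 h^2 / (4 L) when h << L. Dimensions below are illustrative.

import numpy as np

L_beam, h = 0.10, 0.004                  # beam length and midpoint deflection (m)
y = np.linspace(0.0, L_beam, 20001)
slope = h * (np.pi / L_beam) * np.cos(np.pi * y / L_beam)   # d(delta)/dy
theta = np.arctan(slope)
integrand = 1.0 - np.cos(theta)
l_exact = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))  # trapezoid rule
l_series = np.pi**2 * h**2 / (4.0 * L_beam)
print(l_exact, l_series)                 # agree to within ~1% at h/L = 0.04
```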
Dynamic Modeling of the Integrated System

Assume that the piezoelectric Euler beam providing negative stiffness to the sensing system has a projected length D in the horizontal direction, that R is the equivalent resistance of the relative motion measuring element, and that C_v is the damping coefficient. When the mass m and the base move relative to each other, the Euler beam deforms and a relative displacement x is produced; the length of the beam is then given by Equation (20). Substituting Equation (20) into Equation (18) yields Equation (21). Under base displacement excitation, the equation of motion of the sensing system can be expressed as Equation (22). In this paper, the number of Euler beams N is taken as two, and β can be represented by Equation (23), where $\alpha = \sqrt{D^2 + x^2}$. Under harmonic excitation, the base motion U can be expressed as Equation (24). Substituting Equations (21) and (24) into Equation (22) yields the first electromechanical coupling equation of the system, Equation (25). Assuming that the Euler beams are evenly distributed in space, their output currents I can be described by Equation (26), where R is the load resistance. Substituting Equations (20) and (26) into Equation (18) gives the second electromechanical coupling equation of the system, Equation (27).

Next, the two force-electric coupling equations are nondimensionalized using the quantities defined in Equation (28): Ω_N is the reference frequency, ω is the dimensionless excitation frequency, C_1 is the dimensionless capacitance, Λ is a dimensionless mechanical parameter, ξ is the damping ratio, u_0 is the magnitude of the dimensionless excitation [55], X is the dimensionless displacement, v is the dimensionless output voltage, Ψ is the dimensionless electromechanical coupling coefficient, γ is also a dimensionless mechanical parameter, t is the dimensionless time, and φ is a dimensionless geometric parameter. Substituting the dimensionless parameters into the two electromechanical coupling equations simplifies them to Equation (29). From Equation (29), the dimensionless restoring force of the system is obtained as Equation (30), and the normalized stiffness as Equation (31). To satisfy the condition of quasi-zero stiffness, the stiffness at the static equilibrium point when the system is stationary must be zero (Equation (32)).

To represent the relationship between force and displacement more intuitively and to simplify the calculations, a fifth-order Taylor series expansion is performed around X = 0 under small excitation. As shown in Figure 3, the fifth-order Taylor expansion closely approximates the original expression. When the fifth-order Taylor expansion is adopted, there is a numerical difference between the approximate value and the true value, but the error over the measured amplitude range is within 0.31%, which meets general engineering accuracy requirements. Therefore, the dimensionless electromechanical coupling equations containing irrational fractions are approximated using the fifth-order Taylor series, giving the processed electromechanical coupling equations, Equations (33) and (34).
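The fifth-order Taylor step can be illustrated on a representative QZS restoring force. The form used below, f(X) = X(1 − φ/√(φ² + X²)), is a generic spring-plus-inclined-element expression tuned so the linear stiffness vanishes at X = 0; it is a stand-in for the paper's Equation (30), not a copy of it.

```python
# Fifth-order Taylor illustration on a representative QZS restoring force.
# f(X) = X * (1 - phi / sqrt(phi**2 + X**2)) is a generic form tuned so the
# linear stiffness vanishes at X = 0; it stands in for the paper's expression.

import sympy as sp

X = sp.symbols("X", real=True)
phi = sp.Rational(1, 2)                       # illustrative geometric parameter

f = X * (1 - phi / sp.sqrt(phi**2 + X**2))    # QZS-type restoring force
taylor5 = sp.series(f, X, 0, 6).removeO()
print(sp.expand(taylor5))                     # -6*X**5 + 2*X**3: no linear term

x0 = sp.Rational(1, 10)                       # a moderate displacement
err = abs((taylor5 - f).subs(X, x0) / f.subs(X, x0))
print(float(err))                             # ~1e-3, i.e., well under 1%
```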
Performance Analysis

The harmonic balance method is applied to solve the dimensionless force-electric coupling equations of the system for its dynamic response. The excitation term and the solution of the equations are both represented as Fourier series. When the tested object vibrates for one cycle, the piezoelectric beam experiences two identical mechanical states, and the output voltage also goes through two identical cycles. Therefore, the frequency of the output voltage is twice the vibration frequency of the tested object. It is assumed that the relative displacement between the foundation and the load mass of the system is in steady-state vibration, X = X_c cos(ωt) + X_s sin(ωt), and that the output voltage of the vibration sensing system is v = v_c cos(2ωt) + v_s sin(2ωt). Considering the case of small excitation, only the primary resonance response is studied, and higher harmonic terms are ignored. By equating the coefficients of cos(ωt) and sin(ωt) on the left and right sides of the motion equation, and the coefficients of cos(2ωt) and sin(2ωt) in the voltage equation, the harmonic balance equations are obtained (Equation (36)).
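The harmonic balance procedure can be sketched on a reduced model: one-term HB applied to a base-excited oscillator with pure cubic stiffness. The paper's full system adds the voltage unknowns v_c and v_s to the same kind of algebraic system; the parameters below are illustrative.

```python
# One-term harmonic balance for a base-excited oscillator with pure cubic
# stiffness, x'' + 2*zeta*x' + k3*x**3 = u0*w**2*cos(w*t), standing in for
# the QZS restoring force. Values are illustrative, not the paper's.

import numpy as np
from scipy.optimize import fsolve

zeta, k3, u0 = 0.01, 2.0, 1e-2

def residual(A, w):
    xc, xs = A
    a2 = xc**2 + xs**2
    # fundamental cos/sin balance; the (3/4)*a2 factor comes from the cubic term
    rc = -w**2 * xc + 2 * zeta * w * xs + 0.75 * k3 * a2 * xc - u0 * w**2
    rs = -w**2 * xs - 2 * zeta * w * xc + 0.75 * k3 * a2 * xs
    return [rc, rs]

for w in (0.3, 0.6, 1.0, 1.5):
    xc, xs = fsolve(residual, [u0, 0.0], args=(w,))
    print(f"w = {w:.1f}: response amplitude = {np.hypot(xc, xs):.3e}")
```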
Solving the resulting system of four coupled algebraic equations, Equation (36), yields the dynamic response of the system, as shown in Figure 4. The dynamic response and output power of the quasi-zero stiffness vibration sensing and energy harvesting integration system based on a buckled piezoelectric Euler beam are analyzed below. The default parameters used in the solution process are shown in Table 1. In general, in the quasi-zero stiffness vibration sensing system, the measurement error is caused by the movement of the load mass m. The measurement accuracy is described by the amplitude error percentage A_x and the phase difference P_u between the excitation u and the measured signal −X. The energy harvesting performance can be described by the electrical power p at the load R.
To better describe the dynamic sensing characteristics of the system, the following assumptions are made. If A_x ≤ 10% and P_u ≤ 10°, it is considered that the relative motion −X can represent the absolute motion u. If A_x ≤ 5% and P_u ≤ 5°, the measurement accuracy of the sensing system is considered high; otherwise, the sensing system is considered outside the scope of application of the sensor. Figure 4a shows the percentage of amplitude measurement error A_x, the measured phase difference P_u, and the output power p. From the amplitude measurement error curve in Figure 4a, it can be observed that frequency jump phenomena occur between frequencies of 0.2 and 0.33, which may result in larger measurement errors. For ω = 0.35, the percentage of amplitude measurement error is approximately 9%, the measured phase difference is approximately 6.9°, and the output power is 2.8 × 10⁻⁶; at this frequency, it is considered that the relative motion −X can represent the absolute motion u. For frequencies greater than 0.55, the percentage of amplitude measurement error A_x is less than 5% and the phase difference P_u is less than 5°, so the sensing system can accurately maintain the phase characteristics between the input and output signals. From the output power curve in Figure 4a, it can be seen that as the frequency increases from 0.55 to 1.8, the output power increases from 5.8 × 10⁻⁶ to 3.3 × 10⁻⁵. The system can thus provide electrical energy for low-power relative motion sensor components, enabling energy harvesting. It is noted that the portions of the response curves with negative slope correspond exactly to the turning points (unstable solutions).

The stability of the calculated results is verified by numerical simulation with the Runge-Kutta method. The frequency-amplitude curves obtained by the Harmonic Balance Method (HBM) and the Runge-Kutta Method (RKM) are also shown in Figure 4a. It can be observed that before the downward frequency jump points there is a significant difference between the HBM and RKM phase difference curves. This difference arises from the presence of numerous higher-order harmonics near the frequency jump band: the vibration response calculated by the RKM contains these harmonics, and estimating the phase difference from results with numerous harmonics carries a relatively large error. Figure 4b shows the responses at excitation frequencies of 0.03, 0.1, 0.2, and 0.3. It can be observed from Figure 4b that under low-frequency excitation, the displacement response and output voltage contain not only the fundamental frequency component but also higher harmonic components; compared with the fundamental, however, their amplitudes are relatively small. As the excitation frequency increases from 0.03 to 0.3, the higher harmonic components gradually disappear, and the vibration response becomes dominated by the fundamental frequency. It is precisely because the higher harmonics are significant at low frequencies, and the HBM ignores them, that the HBM and RKM exhibit certain differences in the low-frequency range. From Figure 4a, it can be seen that except for the points before the frequency of 0.3, the results obtained using the two methods show good consistency.
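A minimal version of the Runge-Kutta cross-check looks like the sketch below. It reuses the assumed model and coefficients from the harmonic-balance sketch above, so the printed numbers illustrate the workflow rather than reproduce the values in Figure 4.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same assumed model as in the harmonic-balance sketch (illustrative values)
xi, psi, lam, theta = 0.02, 0.08, 1.0, 0.5
a1, a3, a5 = 0.05, 1.0, 0.8
u0, w = 0.05, 0.8

def rhs(t, y):
    X, dX, v = y
    ddX = u0*w**2*np.cos(w*t) - 2*xi*dX - a1*X - a3*X**3 - a5*X**5 - psi*X*v
    return [dX, ddX, theta*X*dX - lam*v]

# Integrate through the transient, then examine the steady state
sol = solve_ivp(rhs, (0, 600), [0.0, 0.0, 0.0], max_step=0.05, dense_output=True)
t = np.linspace(500, 600, 4000)
X, _, v = sol.sol(t)

amp_X = 0.5*(X.max() - X.min())
A_x = abs(amp_X - u0)/u0*100        # percentage of amplitude measurement error
print(f"A_x ≈ {A_x:.2f}%, peak output voltage ≈ {np.abs(v).max():.2e}")
```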
Influence of Different Structural Parameters

From the electromechanical coupling relation in Equation (34), it can be seen that changes in the system's electromechanical coupling coefficient, damping ratio, and measured amplitude will affect the performance of the sensing system.

Figure 5 shows the influence of different force-electric coupling coefficients on the percentage of amplitude measurement error, the measured phase difference, and the output power of the vibration sensing system. It can be observed that as the force-electric coupling coefficient increases, the range of the frequency jump phenomena gradually decreases, and the peak also decreases. At high frequencies, as the force-electric coupling coefficient increases, the percentage of amplitude measurement error and the measured phase difference increase slightly. When the frequency is 1.5 and the force-electric coupling coefficient increases from 0.06 to 0.12, the percentage of amplitude measurement error increases from 1% to 1.5%. Compared with the low-frequency state, changes in the force-electric coupling coefficient have little effect on the amplitude error and phase difference. At a frequency of 0.8, when the force-electric coupling coefficient increases from 0.06 to 0.12, the percentage of amplitude measurement error is less than 5%, the measured phase difference is less than 5°, and the output power decreases from 1.8 × 10⁻³ to 4.9 × 10⁻⁴; under these conditions, the measurement accuracy of the sensing system is relatively high and it can provide some electrical energy for low-power devices. When the frequency exceeds 0.4, as the force-electric coupling coefficient increases from 0.04 to 0.1, the amplitude error of the sensing system stays within 10% and the measured phase difference remains below 10°, so it is considered that the relative motion −X can represent the absolute motion u. At a frequency of 0.37, when the force-electric coupling coefficient decreases from 0.12 to 0.06, frequency jump phenomena occur. Therefore, appropriately increasing the force-electric coupling coefficient can suppress the occurrence of frequency jump phenomena, expand the measurable range, and improve the accuracy and efficiency of the sensing system.
As shown in Figure 6, different damping ratios also have significant impacts on the vibration sensing system. When the frequency is below 0.45, applying a damping ratio greater than or equal to 0.015 causes frequency jumps, with the percentage of amplitude measurement error exceeding 10%, so the measurement result is inaccurate. When the frequency is above 0.7, as the damping ratio increases from 0.015 to 0.03, the percentage of amplitude measurement error gradually increases but remains below 10%, and the measured phase difference also gradually increases but remains below 10°; in this range, it is considered that the relative motion −X can represent the absolute motion u. As the damping ratio increases from 0.015 to 0.03, the peak output power of the sensing system decreases from 3.1 × 10⁻³ to 7.9 × 10⁻⁴. Again, the portions of the curves with negative slope correspond to the turning points (unstable solutions). In the high-frequency region, the effect of changes in the damping ratio on the output power is small, and appropriately reducing the damping ratio can improve the measurement accuracy of the sensing system. In the low-frequency region, appropriately increasing the damping ratio can suppress frequency jumps. Therefore, it is not advisable to apply a small damping ratio in the measurement process, to prevent the amplitude measurement error from exceeding 10% when the frequency is below 0.45. However, the damping ratio should not be too large either; otherwise, the measurement error in the
high-frequency region will increase. As before, the dashed lines in the figure represent the unstable solutions of the system. The motion differential equation, Equation (34), shows that the entire system has two main sources of damping. One comes from the damping of the spring or the damper connected in parallel with the spring. In addition, the energy harvesting mechanism can, to a certain extent, be regarded as an equivalent nonlinear damping: when the amplitude of relative motion is relatively large, the equivalent damping ratio is also relatively large, and conversely it is small. Therefore, properly reducing the linear damping and increasing the piezoelectric coupling coefficient can benefit both vibration sensing and vibration energy harvesting.
Generally, it is difficult to simultaneously balance the quasi-zero stiffness system response at both high and low frequencies. An analysis of the effects of the force-electric coupling coefficient and the damping ratio on sensing performance reveals that the two parameters influence it in slightly different ways, and the quasi-zero stiffness vibration sensing system can reduce both the peak amplitude measurement error at low frequencies and the amplitude measurement error at high frequencies by adjusting them. As shown in Figure 7, at low frequencies the peak amplitude measurement error decreases gradually with increasing force-electric coupling coefficient. At high frequencies the amplitude measurement error increases with increasing force-electric coupling coefficient, but the impact on the sensing system is relatively small. Similarly, at low frequencies, increasing the damping ratio reduces the amplitude measurement error; unlike the force-electric coupling coefficient, however, increasing the damping ratio significantly increases the amplitude measurement error at high frequencies. Combining Equation (34) and Figure 7, it can be seen that the influence of the force-electric coupling coefficient on the dynamic performance of the sensing system is related to the vibration amplitude: when the vibration amplitude is larger, its influence is significant, while when the vibration amplitude is smaller, its influence is slight. Therefore, the sensing system proposed in this paper is suited to a smaller damping coefficient and a larger force-electric coupling coefficient, thereby simultaneously obtaining a wider measurement bandwidth and better high-frequency measurement performance.
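The trade-off described above can be probed numerically with a small parameter sweep. The sketch below reuses the assumed model from the earlier sketches and simply reports the worst amplitude error over a frequency band for a few coupling coefficients and damping ratios; the coefficient values are illustrative, not those behind Figure 7.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a3, a5, lam, theta, u0 = 0.05, 1.0, 0.8, 1.0, 0.5, 0.05  # assumed values

def amp_error(w, xi, psi):
    def rhs(t, y):
        X, dX, v = y
        return [dX,
                u0*w**2*np.cos(w*t) - 2*xi*dX - a1*X - a3*X**3 - a5*X**5 - psi*X*v,
                theta*X*dX - lam*v]
    sol = solve_ivp(rhs, (0, 400), [0.0, 0.0, 0.0], max_step=0.05, dense_output=True)
    X = sol.sol(np.linspace(300, 400, 2000))[0]      # steady-state window
    return abs(0.5*(X.max() - X.min()) - u0)/u0*100  # amplitude error in percent

band = np.linspace(0.5, 2.0, 7)
for psi in (0.06, 0.12):
    print(f"psi={psi}: worst A_x = {max(amp_error(w, 0.02, psi) for w in band):.1f}%")
for xi in (0.015, 0.03):
    print(f"xi={xi}:  worst A_x = {max(amp_error(w, xi, 0.08) for w in band):.1f}%")
```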
From Figure 8, it can be observed that as the measured amplitude increases from 0.03 to 0.1, the amplitude measurement error curve shifts to the right, indicating an expanded range of frequency jump phenomena at low frequencies. At a frequency of 0.5, with measured amplitudes of 0.03, 0.05, 0.07, and 0.1, the percentages of amplitude measurement error are 4.1%, 4.1%, 4.5%, and 5.6%, respectively; the amplitude measurement error therefore grows with the measured amplitude. When the frequency is above 0.5, the measured amplitude is below 0.1, the amplitude measurement error is less than 10%, and the phase difference is less than 10°, it is considered that the relative motion −X can represent the absolute motion u. From the phase diagram P_u, it can be seen that as the measured amplitude increases from 0.03 to 0.1, the measured phase difference curve and the output power curve shift to the right. As the measured amplitude decreases, the amplitude measurement error near the resonant peak slightly increases. As the measured amplitude increases from 0.03 to 0.1, the peak of the generated output power increases from 4.5 × 10⁻⁵ to 4.1 × 10⁻³, which can power surrounding low-power devices or supply partial energy.
Comparison with Different Quasi-Zero Stiffness Vibration Sensing Systems

Figure 9 shows the curves of different quasi-zero stiffness vibration sensing systems after dimensionless transformation: the buckled piezoelectric Euler beam model, the three-spring model [6], and the roller cam model [56]. Among these three structural forms, the vertical stiffness is 40 N/m and the horizontal dimension is 0.4 m. From the figure, it can be seen that when the percentage of amplitude measurement error is 10%, the corresponding frequencies are 0.41, 0.63, and 1.08, respectively, with corresponding phase differences of 7.5°, 8.6°, and 9.1°. When the frequency exceeds 0.28, the percentage of amplitude measurement error of the buckled Euler beam structure is smaller than that of the three-spring and roller cam structures. Compared to the three-spring structure and the roller cam structure, the vibration sensor based on the buckled Euler beam structure has a larger amplitude error and phase difference in the low-frequency non-measurement range (near the resonance
peak); in fact, the buckled Euler beam vibration sensor has good vibration energy collection characteristics in this frequency range. However, its initial measurement frequency is lower, and it has higher amplitude and phase accuracy in the measurement (high-frequency) range. This is mainly because the buckled Euler beam vibration sensor uses a combination of smaller damping and an appropriate force-electric coupling coefficient. In the high-frequency measurement range, the equivalent damping due to the force-electric coupling is very small, which gives the sensor a wider measurement frequency range and better measurement performance while collecting vibration energy. This conclusion is consistent with the previous analysis.

Time-Domain Simulation

The following time-domain simulations use the fourth/fifth-order Runge-Kutta method, considering single-frequency excitation, periodic excitation, and random excitation, and obtain the time-domain responses of the quasi-zero stiffness piezoelectric Euler beam sensing system under the different excitation conditions.

Single-Frequency Excitation

A single-frequency excitation is applied to the QZS vibration sensing and energy harvesting integrated system. Figure 10a-c show the dynamic measurement performance and output voltage over time at different single-frequency excitations with an amplitude of 0.2. It can be seen that the frequency of the output voltage is twice the frequency of the displacement response, with frequencies of 0.9, 1.4, and 1.9 corresponding to Figure 10a-c. As the frequency increases from 0.9 through 1.4 to 1.9, the peak values of the percentage of amplitude measurement error are 9.4%, 5.1%, and 3.5%, respectively, and the peak values of the output voltage are 1.3 × 10⁻², 1.7 × 10⁻², and 2.1 × 10⁻². The peak amplitude measurement error is less than 10%. Therefore, as the measured frequency increases, the measurement accuracy of the sensor gradually improves, and the output voltage also gradually increases. Figure 10d-f show the dynamic measurement performance and voltage over time at different excitation amplitudes when the frequency is 1.6, with amplitudes of 0.03, 0.06, and 0.09 corresponding to Figure 10d-f. As the amplitude increases from 0.03 to 0.09, the peak values of the percentage of amplitude measurement error are 3.3%, 3.5%, and 3.6%, respectively, and the peak values of the output voltage are 4.1 × 10⁻⁴, 1.6 × 10⁻³, and 3.7 × 10⁻³. Therefore, as the measured amplitude increases, the output voltage of the system gradually increases, and the measurement error of the sensing system also increases. When the excitation amplitude is small, the system can provide some electrical energy for surrounding low-power devices; under small excitation conditions, the measurement error is within 10%, the measurement accuracy of the sensing system is high, and the relative motion measured by the system can approximately represent the absolute motion of the measured object.
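The frequency-doubling property of the output voltage is easy to confirm in simulation. The sketch below drives the assumed model from the earlier sketches at a single frequency and compares the dominant FFT peaks of displacement and voltage; the parameters are illustrative rather than those of Figure 10.

```python
import numpy as np
from scipy.integrate import solve_ivp

xi, psi, lam, theta = 0.02, 0.08, 1.0, 0.5   # assumed model, illustrative values
a1, a3, a5 = 0.05, 1.0, 0.8
u0, w = 0.2, 0.9                              # single-frequency excitation

def rhs(t, y):
    X, dX, v = y
    return [dX,
            u0*w**2*np.cos(w*t) - 2*xi*dX - a1*X - a3*X**3 - a5*X**5 - psi*X*v,
            theta*X*dX - lam*v]

sol = solve_ivp(rhs, (0, 800), [0.0, 0.0, 0.0], max_step=0.05, dense_output=True)
t = np.linspace(600, 800, 8192)               # steady-state window
X, _, v = sol.sol(t)

freq = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2*np.pi   # angular frequencies
peak = lambda s: freq[np.argmax(np.abs(np.fft.rfft(s - s.mean())))]
print(f"displacement peak ≈ {peak(X):.2f}, voltage peak ≈ {peak(v):.2f}")
# Expected: the voltage peak lies at roughly twice the displacement peak.
```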
Periodic Excitation

Figure 11a-d show the time-domain response curves of the system under different periodic excitations: (a) u = 0.02 cos(1.4t + π/3) + 0.05 cos(2.2t − π/4); (b) u = 0.03 cos(2t + 2π/3) + 0.1 cos(1.1t + 3π/4); (c) u = 0.01 cos(1.5t + π/2) + 0.05 cos(2t − π/6); and (d) u = 0.05 cos(2.3t + π/3) + 0.04 cos(1.3t + π/6) + 0.01 cos(1.6t + π/4). The peaks of the corresponding measured motion are 0.069, 0.089, 0.029, and 0.096, respectively. Therefore, under the different periodic excitations, the system can generate voltage to power low-power devices, and the relative displacement signal measured by the system can approximately represent the absolute displacement signal of the measured object.

Random Excitation

Figure 12 shows the dynamic measurement performance and its frequency spectrum under random excitation. The tested random signal is filtered using a bandpass filter with an upper cutoff frequency of 10 and a lower cutoff frequency of 0.5. Before filtering, the mean of the signal is 0, with a standard deviation of 0.5 in Figure 12a,b and a standard deviation of 0.2 in Figure 12c,d. The vibration response s values in Figure 12a,c are 8 × 10⁻³ and 2 × 10⁻³, and the peak values of the output voltage are 4.7 × 10⁻² and 4.4 × 10⁻³; the peak values of the vibration response s in Figure 12b,d are 3.4 × 10⁻⁴ and 1.4 × 10⁻³. When the frequency is less than 10, the phase remains near 0. The measurement errors between the measured object's motion and the relative motion measured by this system are within 5%, indicating high measurement accuracy. The relative motion signal obtained from this system can therefore be used as an approximation of the absolute motion signal of the measured object, and the harvested electrical energy can provide a partial power supply for nearby low-power devices.
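The band-limited random excitation described above can be generated, for example, as in the following sketch. The passband [0.5, 10] and the pre-filter statistics (zero mean, standard deviation 0.5) come from the text; the sampling rate and the 4th filter order are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 100.0                          # samples per dimensionless time unit (assumed)
t = np.arange(0.0, 200.0, 1.0/fs)
raw = rng.normal(0.0, 0.5, t.size)  # zero mean, standard deviation 0.5 before filtering

# Bandpass 0.5-10 as described in the text; the 4th order is an assumption
b, a = butter(4, [0.5, 10.0], btype="bandpass", fs=fs)
u = filtfilt(b, a, raw)             # zero-phase, band-limited excitation signal

print(f"mean = {u.mean():.2e}, std = {u.std():.3f}")
# u(t) can then drive the time-domain model from the earlier sketches.
```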
Conclusions

An integrated system of quasi-zero stiffness vibration sensing and energy harvesting based on a buckled piezoelectric Euler beam is proposed. The following conclusions were obtained:

1. The system utilizes quasi-zero stiffness vibration sensing technology, which enables the measurement of the absolute vibration displacement of the tested object under small excitation. Moreover, the electrical energy harvested by the system can be used to power low-power components or provide partial power, offering an alternative approach for wireless applications.

2. Increasing the electromechanical coupling coefficient can reduce the peak measurement error. The higher the damping ratio, the smaller the peak output power, the smaller the peak measurement error in the low-frequency range, and the higher the accuracy of the sensing system; at high frequencies, however, the amplitude measurement error may increase. A larger amplitude of the tested object results in a higher output power, but it may also decrease the accuracy of the sensing system.

3. The quasi-zero stiffness piezoelectric Euler beam vibration sensing system effectively suppresses frequency jump phenomena and significantly improves measurement performance in the high-frequency range by using a small damping ratio and a large force-electric coupling coefficient. This flexible parameter adjustment capability allows the system to perform well in various operating conditions and applications, yielding more accurate and reliable measurement results. Compared with the three-spring structure and the roller cam structure, the vibration sensor based on the Euler beam structure can achieve a wider measurement frequency band and better measurement performance.
Further studies will establish more refined sensor models, for example by incorporating higher-order vibration modes of the buckled beam into the calculations, to obtain more accurate and realistic sensor dynamic responses. In addition, deep learning and other methods will be applied to optimize the structural parameters of the sensor to achieve better vibration sensing and energy harvesting performance.

Figure 1. Scheme diagram of the quasi-zero stiffness vibration sensing and energy harvesting integration system.
Figure 2. Schematic diagram of the buckled piezoelectric Euler beam; the ends of the beam are supported on the bracket, and a piezoelectric sheet of lead zirconate titanate ceramic (PZT-5H) is attached to the middle of the beam.
Figure 3. Relationship between force and displacement.
Figure 4. (a) Frequency-domain response of the percentage of amplitude measurement error A_x, measured phase difference P_u, and output power p calculated using the Harmonic Balance Method (HBM) and the Runge-Kutta Method; (b) phase plots at excitation frequencies of 0.03, 0.1, 0.2, and 0.3 for the Runge-Kutta Method.
Figure 5. Comparison of the percentage of amplitude measurement error A_x, measured phase difference P_u, and output power p of the sensing system at different force-electric coupling coefficients Ψ.
Figure 6. Comparison of the percentage of amplitude measurement error A_x, measured phase difference P_u, and output power p of the sensing system at different damping ratios ξ.
Figure 7. Effects of different force-electric coupling coefficients and damping ratios on sensing performance.
Figure 8. Comparison of the percentage of amplitude measurement error A_x, measured phase difference P_u, and output power p of the sensing system at different measured amplitudes u₀.
Figure 9. Comparison diagram of vibration sensing systems with different quasi-zero stiffness values.
Figure 10. The frequencies corresponding to (a-c) are 0.9, 1.4, and 1.9, respectively; the amplitudes corresponding to (d-f) are 0.03, 0.06, and 0.09, respectively.
Figure 11. Measurement performance and output voltage of the system under the periodic excitations (a) u = 0.02 cos(1.4t + π/3) + 0.05 cos(2.2t − π/4); (b) u = 0.03 cos(2t + 2π/3) + 0.1 cos(1.1t + 3π/4); (c) u = 0.01 cos(1.5t + π/2) + 0.05 cos(2t − π/6); (d) u = 0.05 cos(2.3t + π/3) + 0.04 cos(1.3t + π/6) + 0.01 cos(1.6t + π/4).
Figure 12. (a) Motion and output voltage of the system at a standard deviation of 0.5; (b) motion amplitude and phase at a standard deviation of 0.5; (c) motion and output voltage at a standard deviation of 0.2; (d) motion amplitude and phase at a standard deviation of 0.2.
Table 1. Structural parameters of the quasi-zero stiffness vibration sensing and energy harvesting integration system based on the buckled piezoelectric Euler beam (e.g., L_d, length of the piezoelectric ceramic, 60 mm; t, width of the piezoelectric ceramic, 31 mm).
Effect of domestic processing treatments on iron, β-carotene, phytic acid and polyphenols of pearl millet

Abstract The objective of the present study was to evaluate the effect of various processing treatments (individually and in combination) on the iron, β-carotene, phytic acid, polyphenol and ash contents of pearl millet (Pennisetum americanum). Grains were subjected to soaking, pressure cooking, steaming, malting, pearling and extrusion cooking for different time intervals: soaking for 3, 6, 9 and 12 h; steaming for 5, 10, 15 and 20 min; pressure cooking for 2, 5, 7 and 10 min; and controlled germination (malting) for 12, 18, 24, 36, 40, 46 and 52 h, along with three combinations of treatments. The data revealed that phytic acid was reduced the most (38.23%) by malting, whereas polyphenols (49.28%) and ash content (22.09%) were decreased the most by pressure cooking. Losses of β-carotene and iron were also highest (29.79 and 16.03%, respectively) during pressure cooking in comparison to the other processing methods. However, the combined treatments showed higher retention of β-carotene and iron with greater reduction of anti-nutrients than the individual treatments. Overall, it can be concluded that a combination of domestic treatments is a better approach for improving the nutritional profile of pearl millet, which can be consumed directly or used as an ingredient in formulations such as weaning foods and bakery products.

PUBLIC INTEREST STATEMENT Several crops, such as pearl millet, sorghum and maize, can grow under adverse agro-climatic conditions, including limited irrigation and infertile land. These crops are also termed "nutri-cereals" and could be a potential source of micronutrients such as iron and zinc. If the correct combination of processing treatments (soaking, germination, pressure cooking and steaming) is employed to reduce the level of anti-nutrients such as phytic acid and polyphenols, their nutritional attributes could be improved, with better bioavailability. Foods prepared from these domestically processed, underutilized cereals could be an effective, lower-cost alternative to processed food products on the market with a similar nutritional profile, and could be a practical approach to tackling malnutrition and micronutrient-deficiency diseases such as anemia in developing and underdeveloped countries worldwide.

Introduction

Pearl millet (Pennisetum americanum), also known as Bajra in India, is an important food crop of South Asia and Africa. Because of its sustainability under adverse agro-climatic conditions, it is also termed a crop of food security. India is the largest producer of pearl millet in terms of both area and production (Yadav, 2014). It is also termed a "nutri-cereal" due to its complex carbohydrates (67.5%), high proportion of dietary fiber, and other phytochemicals with nutraceutical properties (Sumathi, Ushakumari, & Malleshi, 2007). The biological value and digestibility coefficient of pearl millet protein have been measured as 83 and 89%, respectively, and its protein efficiency ratio (1.43) is higher than that of wheat (1.2) (National Research Council, 1996). Pearl millet is known for its high amounts of macro- and micronutrients (such as B-vitamins, potassium, phosphorus, magnesium, iron, zinc, copper, and manganese) (Sihag et al., 2015). However, it also contains significant amounts of anti-nutrients, such as polyphenols, enzyme inhibitors, and phytates.
Anti-nutrients are associated with the low bioavailability of minerals and proteins. Humans and other non-ruminant animals cannot digest phytates owing to the absence of the digestive enzyme phytase. Phytate is usually found as a complex with essential minerals and/or proteins. The actual mechanism of the interactions between phytic acid and minerals is yet to be understood, although phytic acid may form a complex with a cation on the same or different molecules, within a single phosphate group or between two phosphate groups (Hithamani & Srinivasan, 2014). Similarly, polyphenols also act as anti-nutrients: they chelate divalent metal ions such as iron and zinc and reduce their bioavailability, inhibit digestive enzymes, and may precipitate proteins. Various processing treatments have been reported to reduce the level of anti-nutrients, such as soaking, germination, steaming, fermentation, and microwave heating. Several researchers have studied the effect of various processing treatments on the anti-nutrient content of different cereals/legumes (Goyal, Siddiqui, Upadhyay, & Soni, 2014; Osman & Gassem, 2013; Rao & Muralikrishna, 2001). Sharma, Goyal, and Barwal (2013) studied the effect of soaking and cooking on polyphenols, tannins, and phytates and reported an approximately 14.7-45.1% reduction in soybeans. Similarly, Hithamani and Srinivasan (2014) investigated the effect of domestic processing on the polyphenol content of pearl millet (Pennisetum glaucum) and observed that sprouting and pressure cooking reduced polyphenols by 33.52 and 41.66%, respectively. It is clear from the literature that several workers have addressed this aspect, but the effect of combinations of domestic treatments on the reduction of anti-nutrients, together with the associated loss of β-carotene and iron, is rarely discussed for pearl millet. In this context, the objective of the study was to optimize a process to develop pearl millet flour with minimum loss of iron and β-carotene and maximum reduction of anti-nutrients, so that it could serve as a base component for the formulation of pearl millet-based weaning foods.

Materials

Pearl millet grains (Pro-Agro 9444) were procured from the pearl millet breeding farm, Haryana Agricultural University, Hisar (India). Airtight plastic containers were procured for grain and flour storage. All solvents and reagents used in this study were of analytical grade and were purchased from Himedia, India and Merck, Germany.

Different combinations of treatments for pearl millet processing

The pearl millet grains were subjected to four different processing treatments, viz. soaking, pressure cooking, steaming, and controlled germination (malting), at different time intervals (Table 1).

Soaking/steeping

The raw, clean grains were soaked in water at a ratio of 1:4 for 3, 6, 9, and 12 h at 25 ± 2°C. The soaked grains were stirred periodically to remove the gases accumulated around the grains, and the steep water was changed every 2 h to prevent the growth of undesirable microbes. The grains were dried at 60 ± 2°C in a tray dryer to 13% moisture content.

Steaming

The raw, clean grains were steamed in a cooker (with the whistle detached from the lid of the pressure cooker) for 5, 10, 15, and 20 min. The ratio of grains to water was kept at 1:4. The steamed grains were then dried at 60 ± 2°C in a tray dryer to 13 ± 0.50% moisture content (dry weight basis).
Pressure cooking

The raw, clean grains were pressure cooked (with the whistle attached to the lid) for 2, 5, 7, and 10 min. The ratio of grains to water was kept at 1:4. The grains were then dried at 60 ± 2°C in a tray dryer to 13 ± 0.50% moisture content.

Controlled germination

Controlled germination of pearl millet grains was carried out for different time intervals (12, 18, 24, 36, 40, 46, and 52 h) at 25 ± 2°C. The washed grains were allowed to germinate between the folds of muslin cloth in an incubator (25 ± 2°C), and water was sprinkled intermittently to keep the cloth moist. After malting, the grains were dried at 60 ± 2°C in a tray dryer until the moisture content reached about 13 ± 0.50%. The rootlets of the germinated and dried grains were removed by scrubbing manually over a perforated tray.

Pearling

The raw and germinated pearl millet grains were pearled in a pearling machine to remove the outer greyish layers of the grains. Pearling was done by rubbing the grains against abrasive stones, and air pressure was used to remove the loosened bran layers. The degree of removal was regulated by controlling the pearling time and by adjusting the space between the abrasive stones and the screen. The grains were weighed before and after pearling to calculate the degree of pearling, i.e., [(weight before pearling − weight after pearling)/weight before pearling] × 100. Pearling was done for 40 s, corresponding to 15-20% removal of the husk. The grains were then cleaned of fines with the help of a laboratory aspirator. Pearled grains were subjected to extrusion processing as described below.

Extrusion processing

Prior to extrusion processing, the grains were preconditioned to adjust the feed moisture content. A calculated amount of water was sprinkled over the pearl millet grains to raise the feed moisture content to 13 ± 0.5%, and the moistened grains were kept in airtight polythene bags for 48 h to equilibrate. After conditioning, the grains were fed into a hopper containing a screw auger, which transported the material at a uniform rate into the barrel. The single-screw extruder transports the ingredients through three zones, viz. the feeding zone, kneading zone, and cooking zone. The temperature of the cooking zone was maintained at 110°C. The material was finally extruded through a 3 mm diameter die, where it expanded due to the sudden evaporation of water from the plasticized mass. It was cut into pieces of the desired size by a cutter adjusted to 570 rotations per minute (RPM), with a feeder speed of 115 RPM and a motor speed of 1,300 RPM. Finally, the extrudates were milled with a fine sieve attached to obtain the flour, which was kept in airtight plastic containers for further use.

Milling

The moisture content of the pearl millet grains was adjusted to 13% by conditioning. The conditioned grains were milled in a roller mill (Chopin Laboratory CD-1 mill, France).

Phytic acid

Phytic acid content was determined using a Megazyme assay kit (Wicklow, Ireland).

Ash and iron content

Ash and iron contents were estimated by the standard methods of the Association of Official Analytical Chemists (2005). Iron was estimated by atomic absorption spectrophotometry (AAS) after dry digestion.

β-carotene estimation

β-carotene extraction and saponification were performed by the method of Howe and Tanumihardjo (2006), while quantification in all treatments and in the extruded pearl millet was carried out by the method of Sanusi and Adebiyi (2009).
Statistical analyses

Means (n = 3), standard errors of the mean (SEM), linear regression analyses, and 95% confidence intervals were calculated using Microsoft Excel 2007 (Microsoft Corp., Redmond, WA). Data were subjected to one-way analysis of variance (ANOVA) to calculate the critical difference (CD) value (see the illustrative sketch below).

Effect of soaking

The phytic acid and total phenol contents of raw pearl millet flour were found to be 683.07 ± 1.87 and 207.23 ± 3.06 mg per 100 g, respectively (Table 2). Phytic acid content decreased significantly (p < 0.05) as the soaking time increased. Similarly, total polyphenol content was reduced significantly (p < 0.05) from its initial value of 207.23 mg to 198.96 and 183.49 mg after 9 and 12 h of soaking, respectively. The β-carotene level was not affected significantly up to 9 h of soaking; however, after 12 h of soaking, it was reduced significantly (p < 0.05). Iron and ash contents decreased significantly (p < 0.05) as the soaking time increased. Our findings are in accordance with those of other workers who observed a decrease in the phytic acid content of cereal grains meant for the production of weaning foods after soaking and germination (Gupta & Sehgal, 1991; Osman & Gassem, 2013). A similar reduction in phytic acid concentration during soaking of pearl millet was also reported by Duhan, Chauhan, Punia, and Kapoor (1989). Our results with respect to polyphenol content agree with the findings of Osuntogun, Adewusi, Ogundiwin, and Nwasike (1989), who observed a 20% reduction in the total polyphenol content of Nigerian sorghum due to steeping. The decrease in phytic acid and polyphenols during soaking may be attributed to their leaching into the soaking water under the concentration gradient (Abdullah, Baldwin, & Minor, 1984) and to endogenous phytase activity (Liang, Han, Nout, & Hamer, 2009). The lower level of total phenols and β-carotene after soaking may be due to the release of phenolic compounds into the soaking water (Akillioglu & Karakaya, 2010). The reduction in β-carotene content upon soaking agrees with Afify, El-Beltagi, El-Salam, and Omran (2012), who reported reduced antioxidant activity and capacity after soaking owing to the leaching of total phenols, flavonoids, vitamin E, and β-carotene into the soaking water. The reduction in iron content during soaking might likewise be due to the leaching of minerals into the soaking water (Malik, Singh, & Dahiya, 2002); a similar reduction in the iron content of pearl millet grains after 24 h of soaking was reported by Lestienne, Icard-Vernière, Mouquet, Picq, and Trèche (2005). The reduction in ash content during soaking may be due to the leaching of both micro- and macro-elements, as well as anti-nutrients, into the extraction medium (Mugendi et al., 2010).

Effect of pressure cooking

Phytic acid and total phenols were reduced significantly (p < 0.05) after both 5 and 10 min of pressure cooking (Table 3). Pressure cooking led to a significant (p < 0.05) decrease in the β-carotene, iron, and ash contents of pearl millet after just 2 min of cooking. In the present study, the soaking and cooking water were discarded, and leaching, together with thermal degradation during pressure cooking, may be the major reason for the reduction in phytic acid, total polyphenols, β-carotene, iron, and ash contents (Kataria, Chauhan, & Punia, 1989).
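As a hedged illustration of the statistical treatment described above, the sketch below runs a one-way ANOVA and computes a critical difference on triplicate phytic acid values; the numbers are invented for the example and are not the measured data reported in the tables.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate phytic acid values (mg/100 g), for illustration only
groups = {"raw":  [683.1, 681.5, 684.6],
          "3 h":  [668.0, 666.9, 669.8],
          "6 h":  [655.2, 653.8, 656.7],
          "12 h": [640.1, 641.9, 639.5]}

F, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2g}")

# Critical difference (CD) at the 5% level from the pooled error mean square
data = np.array(list(groups.values()))
k, n = data.shape
mse = data.var(axis=1, ddof=1).mean()      # pooled within-group variance (equal n)
cd = stats.t.ppf(0.975, df=k*(n - 1)) * np.sqrt(2.0*mse/n)
print(f"CD (5%) = {cd:.2f} mg/100 g")
```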
Bishnoi, Khetarpaul, and Yadav (1994) found that domestic processing and cooking methods reduced the phytic acid and polyphenol contents of various pea varieties, with germination for 48 h having a marked lowering effect. Vijayakumari, Siddhuraju, and Janardhanan (1996) studied the effect of soaking, cooking, and autoclaving on the phytic acid concentration of the tribal pulse Mucuna monosperma and found that cooking for 3 h resulted in significant reductions, with even higher losses after autoclaving. Our observations also agree with Marty and Berset (1986), who reported that the degradation of β-carotene during cooking may be due to thermal stress. Haytowitz and Matthews (1983) studied the effect of cooking on nutrient retention in legumes and found a significant amount of iron in the cooking water, indicating leakage of iron complexes from chickpea into the hot water. The increase in the electrical conductivity of cooking water reported by Avola, Patane, and Barbagallo (2012) also supports the leaching of minerals during cooking. The combined leaching of minerals and anti-nutritional factors from the pearl millet grains is the likely reason for the reduction in ash percentage during the pressure cooking treatment in the present investigation (Borade, Kadam, & Salunkhe, 1984).

Effect of steaming

The results of the present study revealed that phytic acid was reduced significantly (p < 0.05) from its initial value of 683.07 ± 1.87 mg to 625.34 ± 3.34 and 616.93 ± 1.76 mg after 5 and 10 min of steaming, respectively (Table 4). Similarly, total polyphenol content was reduced significantly (p < 0.05) from its initial value of 207.23 ± 3.06 mg to 138.96 ± 2.34, 124.03 ± 1.54, and 115.23 ± 2.28 mg after 5, 10, and 20 min of steaming, respectively. β-carotene and ash contents were reduced significantly (p < 0.05) as the steaming time increased, and there were also significant (p < 0.05) losses of iron after 5 and 15 min of steaming. In both heat treatments, viz. pressure cooking and steaming, there was a definite reduction in all the studied parameters, but the reduction was greater with pressure cooking, since pressure cooking is a more severe heat treatment than steaming.

Effect of controlled germination

From Table 5, it is evident that the phytic acid, total polyphenol, iron, and ash contents were reduced significantly (p < 0.05) by malting within 12 h, and the reductions continued up to 52 h. Losses of β-carotene, however, were non-significant up to 46 h of malting and became significant thereafter. Our findings regarding phytic acid agree with the results of Gupta and Sehgal (1991), who observed a decrease in the phytic acid contents of cereal grains meant for the production of weaning foods after soaking and germination. The decrease in phytic acid during soaking could be attributed to leaching into the soaking water under the concentration gradient; a further cause during germination is the increase in phytase activity in germinating grains (Borade et al., 1984; Rao & Deosthale, 1982). Phytase activity has also been observed in germinating wheat, barley, rye, and oats, where it hydrolyzes phytate to phosphate and myo-inositol phosphates (Larsson & Sandberg, 1992).
Our findings with reference to polyphenols are in accordance with those of Prasad, Alok, Arvind, and Nitya (2015) and Sharma and Sehgal (1992), who reported that germination of pearl millet reduces the polyphenol content. This loss of polyphenols during germination may be attributed to leaching (Rao & Deosthale, 1982) and to the hydrolysis of tannin-protein and tannin-enzyme complexes, which results in the removal of tannins or polyphenols (Farhangi & Valadon, 1981). In addition to leaching, the reduction of total phenols during germination could be facilitated by increased enzymatic hydrolysis (Bishnoi et al., 1994). Our findings for β-carotene do not agree with the results of Yang, Basu, and Ooraikul (2001), who reported that the concentration of β-carotene in wheat steadily increased with germination time. Similarly, Lee, Hwang, Lee, Chang, and Choung (2013) reported that during germination of soybean, β-carotene accumulated in whole sprouts and cotyledons, while hypocotyls did not accumulate lipophilic pigments. The contradictory finding of the present study may be attributed to the fact that the grains were dried and the cotyledons removed, whereby the major portion of the β-carotene was lost. The major reduction in iron content during malting was associated with the 9 h soaking treatment given to the pearl millet grains before malting (Table 2); the reduction in mineral contents during soaking and sprouting is likely due to the leaching of minerals into the soaking water, as recorded by earlier workers (Chavan, Kadam, & Beuchat, 1989; Rani & Hira, 1993). Our results for ash content (Table 5) show a significant (p < 0.05) decrease during controlled germination, to 1.39% compared with 1.72% at zero hours of germination. This reduction in the ash content of pearl millet flour may be attributed to the leaching of solid matter during the soaking step prior to malting. These findings agree with the observations of other workers (Duhan, Khetarpaul, & Bishnoi, 1999; Gernah, Ariahu, & Ingbian, 2011; Mubarak, 2005), who reported that germination and cooking cause a significant decrease in ash content.

Effect of combination of processing treatments

As evident from the results discussed above, different treatments reduce anti-nutrients to different extents. Hence, combinations of processing treatments were also studied to identify the best combination for obtaining pearl millet flour suitable for the preparation of products such as ready-to-reconstitute weaning food. Of the two heat treatments discussed above (pressure cooking and steaming), pressure cooking was selected for the combinations because of its significant effect on anti-nutrient reduction in less time than steaming. As the purpose of the work was to prepare flour suitable for various food products, two further treatments (pearling and extrusion) were also incorporated in the study. It is evident from Table 6 that phytic acid was reduced significantly (p < 0.05) in all three combinations in comparison to the control.
Data indicated that the reduction in phytic acid content in comparison to the control was 36, 37.3, and 38% in combinations (A), (B), and (C), respectively. However, the difference between treatments (B) and (C) in the reduction of phytic acid content was non-significant. The results obtained in the present study are in agreement with those of Duhan et al. (1999), who subjected Manak, a high-yielding cultivar of Cajanus (pigeon pea), to various domestic processing and cooking methods, including soaking, soaking and dehulling, ordinary cooking, pressure cooking, and germination, and found that the phytate concentration was reduced significantly. A reduction of 13-35% was observed after extrusion of wheat bran-starch-gluten mixes (Andersson, Hedlund, Jonsson, & Svensson, 1981). Reduced phytate levels in wheat flour on extrusion cooking were also reported by Fairweather-Tait, Portwood, Symss, Eagles, and Minski (1989). This reduction was attributed to high shear coupled with very high temperature during extrusion processing, which hydrolyzed phytate to release phosphate molecules. Total phenols were also reduced significantly (p < 0.05) with respect to the control as a result of treatments (A), (B), and (C), but the extent of reduction between combinations (B) and (C) was non-significant (Table 6). Results indicated that the reduction in total phenol content in comparison to the control was 37.8, 26.4, and 28.6% in combinations (A), (B), and (C), respectively. Our findings with respect to total phenol reduction are in agreement with those of Sinha and Kawatra (2003), who also reported significant reductions in the concentrations of phytic acid and polyphenols in cowpea as a result of soaking, de-hulling, ordinary cooking, pressure cooking, and germination. Changes in the polyphenol content after thermal treatment might result from the binding of phenolic compounds with other organic materials present (Alonso, Rubio, Muzquiz, & Marzo, 2001). β-carotene was also lost significantly (p < 0.05) in each treatment as compared to the control (Table 6). The reduction during extrusion cooking may be attributed to the exposure of β-carotene to high temperatures and high mechanical stresses, accelerating oxygen- or light-induced as well as other chemical reactions or structural changes (Emin, Mayer-Miebach, & Schuchmann, 2012). Iron was reduced significantly (p < 0.05) in treatments (A) and (B). Data revealed that iron content was reduced by 24.7 and 22.1% in treatments (A) and (B), respectively. The decrease in iron content during treatments (A) and (B) may be attributed to the leaching out of minerals into the spent water during soaking. However, in combination (C), the iron content remained almost unaffected (42.06 ppm) and was comparable to that of the control sample (43.23 ppm). Compared with the other treatments, the higher value of iron in treatment (C) could be attributed to the minute addition of iron from the screws and inner parts of the extruder during processing. Our findings are supported by Alonso et al. (2001) and Singh, Chauhan, Suresh, and Tyagi (2000), who reported a slight increase in iron content during extrusion, probably due to the addition of these minerals through the water used during extrusion processing and to the wear of metallic parts of the extruder. Similarly, Camire (2000) also noted that total iron content was increased by 38% due to extrusion.
It is evident from Table 6 that the ash content of pearl millet was reduced significantly (p < 0.05) as a result of treatments (A), (B), and (C) in comparison to the control. However, the extent of reduction amongst treatments (A), (B), and (C) was non-significant. The most probable reason for the reduction in ash content is the collective action of mechanical stress, heat degradation, and the leaching out of minerals and anti-nutrients during the different treatments. Another reason is the removal of the mineral-rich outer greyish layer, or bran portion, of the pearl millet during pearling (El Hag, El Tinay, & Yousif, 2002). From the data presented in Table 6, it is evident that there was a significant (p < 0.05) reduction in both anti-nutrients, viz. phytic acid and total polyphenols, in treatment combinations (A) and (C) as compared to the control. Table 6 also suggests that the iron content in treatment combination (C) remained similar to that of the control, whereas in combination (A) it was lower than the control. Therefore, considering the effect of treatments (A) and (C) on the reduction of anti-nutrients in pearl millet, both treatments were selected for the preparation of pearl millet flour.

Conclusions

It can be concluded that domestic processing treatments, viz. soaking, steaming, pressure cooking, malting, pearling, and extrusion, can reduce the level of anti-nutrients significantly. Of soaking, pressure cooking, and steaming, the greatest reductions in anti-nutrients, as well as in iron and β-carotene, were observed with pressure cooking. However, controlled germination was found to be the most suitable treatment, resulting in maximum loss of anti-nutrients with no significant reductions in iron and β-carotene content. The effect of combining different methods showed that treatment (C) (9 h soaking + 40 h controlled germination + pearling + extrusion cooking) was comparatively better in terms of iron retention and phytic acid reduction. Overall, it can be concluded that a combination of domestic treatments is a better approach than any individual process for improving the nutritional profile of pearl millet, which can be consumed directly or used as an ingredient in formulations such as weaning foods, bakery products, etc.
Apocynin Attenuates Cardiac Injury in Type 4 Cardiorenal Syndrome via Suppressing Cardiac Fibroblast Growth Factor-2 With Oxidative Stress Inhibition

Background Type 4 cardiorenal syndrome (CRS) refers to the cardiac injury induced by chronic kidney disease. We aimed to assess oxidative stress and cardiac injury in patients with type 4 CRS, determine whether the antioxidant apocynin attenuated cardiac injury in rats with type 4 CRS, and explore potential mechanisms.

Methods and Results A cross-sectional study was conducted among patients with type 4 CRS (n=17) and controls (n=16). Compared with controls, patients with type 4 CRS showed elevated oxidative stress, which was significantly correlated with cardiac hypertrophy and decreased ejection fraction. In the in vivo study, male Sprague-Dawley rats underwent 5/6 subtotal nephrectomy or sham surgery, followed by apocynin or vehicle treatment for 8 weeks. Eight weeks after surgery, the 5/6 subtotal nephrectomy rats mimicked type 4 CRS, showing increased serum creatinine, cardiac hypertrophy and fibrosis, and decreased ejection fraction compared with sham-operated animals. Cardiac malondialdehyde, NADPH oxidase activity, fibroblast growth factor-2, and extracellular signal-regulated kinase 1/2 (ERK1/2) phosphorylation increased significantly in the 5/6 subtotal nephrectomy rats. These changes were significantly attenuated by apocynin. The in vitro study showed that apocynin reduced angiotensin II–induced NADPH oxidase–dependent oxidative stress, upregulation of fibroblast growth factor-2 and fibrosis biomarkers, and ERK1/2 phosphorylation in cardiac fibroblasts. Importantly, the ERK1/2 inhibitor U0126 reduced the upregulation of fibroblast growth factor-2 and fibrosis biomarkers in angiotensin II–treated fibroblasts.

Conclusions Oxidative stress is a candidate mediator for type 4 CRS. Apocynin attenuated cardiac injury in type 4 CRS rats via inhibiting the NADPH oxidase–dependent oxidative stress-activated ERK1/2 pathway and subsequent fibroblast growth factor-2 upregulation. Our study added evidence to the beneficial effect of apocynin in type 4 CRS.

The heart and kidneys have a complicated and bidirectional interrelationship. The impaired function of one organ usually has a detrimental effect on the other, which in turn injures the function and structure of both organs. This interrelationship has been defined as cardiorenal syndrome (CRS). 1 Type 4 CRS refers to the injury of cardiovascular structure and function in the setting of chronic kidney disease (CKD) and is also called chronic renocardiac syndrome. 1 Previous studies showed that even mild impairment of renal function was associated with significantly elevated cardiovascular morbidity and mortality. 2 Cardiovascular diseases, especially heart failure, are the major causes of death in CKD patients. 3 The prevalence of CKD worldwide is estimated to be 8% to 16%, 4 and the high incidence of type 4 CRS places a heavy burden on public health. 3,5 Although there are new advances in the clinical treatment of both cardiovascular and renal diseases, the treatment of type 4 CRS remains a challenge. 6 The mechanism of type 4 CRS is complicated and involves many potential candidate mediators. Increasingly, findings suggest that oxidative stress may play an important role in the cardiac and renal impairments of type 4 CRS. 7
Oxidative stress is a prominent feature of CKD and can be monitored with oxidative stress indicators or inducers, including malondialdehyde (MDA), superoxide dismutase (SOD), asymmetric dimethylarginine, and advanced oxidation protein products. [8][9][10] In addition, both systemic and cardiac angiotensin II (Ang II) levels increase significantly in CKD. 11 The renin-angiotensin-aldosterone system (RAAS) is activated in patients with CKD and is involved in inducing oxidative stress and cardiac impairment. 12 Oxidative stress is increasingly recognized as a causal factor for cardiovascular diseases induced by CKD. Therefore, inhibition of oxidative stress with antioxidants may be a promising treatment strategy for type 4 CRS. Apocynin is an assembly inhibitor of nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase [NOX]) and is widely used as an antioxidant in disease models in which oxidative stress is involved. 13 Apocynin shows protective effects on the kidneys and heart through the reduction of oxidative stress. 14,15 However, whether inhibiting oxidative stress with apocynin can improve cardiac injury in type 4 CRS is still unclear. In the present study, we sought to determine the relationship between oxidative stress level and cardiac injury in patients with type 4 CRS and to explore whether apocynin could reduce oxidative stress and attenuate cardiac injury in a rat model of type 4 CRS. The extracellular signal-regulated kinase 1/2 (ERK1/2) pathway can be activated by different oxidative stress inducers, including advanced glycation end products and Ang II, and ERK1/2 is involved in the deleterious effects of these inducers on cardiac myocytes and fibroblasts. 16,17 In addition, we 18 and others 19 previously found that cardiac fibroblast growth factor (FGF)-2 is significantly upregulated in cardiac nonmyocytes by prohypertrophic factors, including Ang II, endothelin-1, and isoproterenol, and contributes to cardiac hypertrophy and fibrosis. However, it is still not clear whether FGF-2 is implicated in type 4 CRS. To explore potential mechanisms, we investigated the involvement of the ERK1/2 pathway and the role of FGF-2 in type 4 CRS.

Study Population and Data Collection

Patients admitted to Sun Yat-sen Memorial Hospital of Sun Yat-sen University for primary CKD from June to December 2013 were enrolled in this cross-sectional study. Type 4 CRS is defined as cardiac abnormalities, such as decreased cardiac function, in the setting of primary CKD. 1 In this study, patients with both moderate to severe CKD (stage 3 to 5) and heart failure were included in the type 4 CRS group. The patients had previously diagnosed primary CKD. Moderate to severe CKD was diagnosed when the estimated glomerular filtration rate was <60 mL/min as assessed using the Cockcroft-Gault formula. The diagnosis of heart failure was made according to the ESC Heart Failure Guidelines 2012. 20 Patients with New York Heart Association class II through IV were classified as having heart failure based on medical history, symptoms, signs, and echocardiographic results. Exclusion criteria were acute renal failure, kidney transplantation, nephrotic syndrome, obvious chronic or acute cardiac abnormalities before CKD, neoplasm, severe hepatopathy, infectious diseases, and acute or chronic inflammatory diseases. Patients who were taking immunosuppressive agents or classic antioxidants, such as carotenoid, vitamin C, or vitamin E, were also excluded.
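For reference, the Cockcroft-Gault estimate used for the CKD staging above can be sketched as follows; the unit conventions (serum creatinine in mg/dL, weight in kg) and the example values are our assumptions, as the paper does not spell them out:

def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# CKD stage 3 to 5 corresponds to an estimate below 60 mL/min:
print(cockcroft_gault_crcl(65, 70, 2.0, female=True))  # ~31 mL/min, below the cutoff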
Among the patients enrolled, 17 were classified as having type 4 CRS. Patients without documented renal abnormalities or heart failure were classified as controls. A total of 16 controls were enrolled; these controls had mild to moderate hypertension. The study conformed to the Helsinki Declaration. The Ethics Committee of Sun Yat-sen Memorial Hospital of Sun Yat-sen University approved the protocol. Written informed consent was obtained from all patients. On admission, medical history, demographic (age and sex), anthropometric (weight, height, and blood pressure), and biochemical parameters, as well as echocardiographic results, were recorded. Previous studies found that RAAS inhibitors and β-blockers had inhibitory effects on oxidative stress. 12,21 Therefore, the rate of use of RAAS inhibitors or β-blockers in both groups was recorded. Venous blood samples for biochemical tests were drawn after overnight fasting. SOD, an important antioxidant enzyme, was tested to assess oxidative stress level. Plasma and erythrocyte SOD activity was measured with the use of an assay kit (Cayman Chemical). The absorbance at 450 nm was recorded using a Wallac Victor 2 multilabel counter (Perkin Elmer Life Sciences). Serum N-terminal pro-brain natriuretic peptide, commonly used in heart failure diagnosis, was detected with the use of an electrochemiluminescence immunoassay (Elecsys proBNP assay, Roche Diagnostics Corporation). For measurement of estimated glomerular filtration rate, serum creatinine was tested with the use of an automatic biochemical analyzer (7170A; HITACHI). Urea and high-sensitivity C-reactive protein were also measured with an automatic biochemical analyzer. Plasma Ang II level was tested with radioimmunoassay kits (Beijing North Institute of Biological Technology, Beijing, China). Echocardiography was performed to measure patients' cardiac structural and functional changes with a 2.5-MHz transducer (Vivid 3; GE VingMed Ultrasound). Left ventricular posterior wall thickness at diastole (LVPWd), interventricular septum depth (IVSD), left ventricular end-diastolic diameter (LVEDD), and left ventricular ejection fraction (EF) were recorded.

Animal Model

Animal experiments were approved by the Animal Experimental Ethics Committee of Sun Yat-sen University and conducted in accordance with the "Guidelines for the Care and Use of Laboratory Animals" published by the US National Institutes of Health (NIH publication No. 85-23, revised 1996). Male Sprague-Dawley rats obtained from Sun Yat-sen University, weighing 160±20 g, were housed in an environment-controlled room at 24±1°C with a 12-hour light/dark cycle and given tap water and rodent chow. Animals were randomly divided into a sham-operated group (n=10), a 5/6 subtotal nephrectomy (STNx) group (n=10), and an STNx+apocynin group (n=10). A 2-step STNx, described previously as a model of CKD, was performed. 22 Briefly, rats were anaesthetized with 40 mg/kg ketamine and 5 mg/kg xylazine (intraperitoneal injection), and the adequacy of anesthesia was determined by loss of response to pinching of the skin of the abdomen, toes, or tail. Body temperature was maintained at 37°C using an electrical warming pad. The artery of the left kidney was temporarily occluded, and the upper and lower poles of this kidney were then ligated and excised. In this way, one-third of the left kidney remained. Buprenorphine (0.03 mg/kg, subcutaneous injection twice daily for 3 days) was used for postoperative analgesia.
After a 1-week recovery period, the right kidney was exposed and removed after ligation of the renal pedicle. Sham-operated rats underwent similar surgery, but only the renal envelopes were removed. Rats in the STNx+apocynin group received apocynin (Sigma-Aldrich) in drinking water (1.5 mmol/L) for 8 weeks. 23 At baseline and at weeks 4 and 8 after surgery, blood samples were collected through the tail vein for measurement of creatinine. Circulating Ang II at week 8 after surgery was detected with the use of radioimmunoassay kits (Beijing North Institute of Biological Technology).

Measurement of Blood Pressure and Heart Rate

At baseline and at weeks 4 and 8 after surgery, systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate were measured with a tail-cuff device (BP-98A; Softron). Conscious rats were placed in a restrainer with an electrical warming pad for 20 minutes, and all rats were trained to become accustomed to this process for 1 week before measurement. To avoid circadian variations in blood pressure and heart rate, all measurements were carried out between 8:00 and 11:00 AM. At least 3 measurements per rat were recorded at intervals of 1 to 2 minutes, and mean values of blood pressure and heart rate were calculated.

Histological Analysis

Rats were anesthetized with 1% pentobarbital (100 mg/kg intraperitoneally) and killed at the end of week 8. Body weight and left ventricular weight were measured to assess the ratio of left ventricular weight to body weight. Hearts were fixed with 4% paraformaldehyde and embedded in paraffin. Left ventricular sections were stained with Masson reagent for detecting fibrosis. To evaluate the degree of myocardial fibrosis, 10 fields of each section were randomly selected. The cardiac fibrosis volume fractions were calculated as the ratio of aniline blue-stained fibrosis areas to total myocardium areas with Image Pro-plus 5.0 software (Media Cybernetics).

Measurement of Cardiac MDA

Harvested hearts were stored at −80°C for Western blot analysis and the measurement of oxidative stress. Cardiac oxidative stress was determined by the measurement of cardiac MDA. 9 Briefly, homogenates of left ventricular tissue were centrifuged at 1600 g for 10 minutes at 4°C. The levels of MDA in the supernatant were measured with the thiobarbituric acid reaction using a commercial kit (Beyotime Biological). 24 The absorbency was detected with a multimode microplate reader (Spectra Max M5; Molecular Devices).

Cell Culture and Treatment

One- to 3-day-old neonatal Sprague-Dawley rats were killed by decapitation. The hearts were cut into small pieces, predigested with 0.125% trypsin for 5 minutes, and then digested with 0.06% collagenase-II for 2 hours in a shaker at 37°C. Collected cells were plated onto a culture dish for 45 minutes, after which unattached cells were removed. The remaining cardiac fibroblasts were cultured in high-glucose (4500 mg/L) Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum in a humidified incubator with 5% CO2 at 37°C. The purity of cardiac fibroblasts was greater than 98%, as determined by positive staining for vimentin and negative staining for von Willebrand factor. Cells were cultured in serum-free DMEM for 24 hours before treatment.
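As a parenthetical illustration of the fibrosis quantification in the histological analysis above, the volume fraction reduces to a simple ratio averaged over the sampled fields. A minimal sketch, assuming boolean segmentation masks per field have already been exported from the image software (the names are hypothetical):

import numpy as np

def fibrosis_volume_fraction(fibrosis_masks, myocardium_masks):
    # Ratio of aniline blue-stained area to total myocardium area,
    # averaged across the randomly selected fields of one section.
    ratios = [fib.sum() / myo.sum()
              for fib, myo in zip(fibrosis_masks, myocardium_masks)]
    return float(np.mean(ratios))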
Cells were treated with (1) dimethyl sulfoxide (DMSO) (1 µL, Sigma-Aldrich) alone, (2) Ang II (100 nmol/L, Sigma-Aldrich), (3) apocynin (100 µmol/L) alone, (4) Ang II (100 nmol/L) + apocynin (100 µmol/L), (5) the ERK1/2 inhibitor U0126 (10 µmol/L, Cell Signaling Technology), or (6) Ang II (100 nmol/L) + U0126 (10 µmol/L). Apocynin and U0126 were dissolved in DMSO and added to cells 1 hour before stimulation with Ang II. Cells were treated with Ang II for 24 hours before detection of the expression of procollagen I, procollagen III, transforming growth factor (TGF)-β, and FGF-2.

Reactive Oxygen Species Assay

We determined intracellular reactive oxygen species (ROS) in cardiac fibroblasts by detecting superoxide anion with dihydroethidium (Molecular Probes, Invitrogen). Briefly, cells were treated with Ang II (100 nmol/L) with or without apocynin (100 µmol/L) for 2 hours and then incubated with dihydroethidium (10 µmol/L) for 30 minutes. Fluorescence was observed with a fluorescence microscope (DMI3000 B; Leica), and fluorescence intensities were detected with a multimode microplate reader. Mitochondrial ROS production was measured using MitoSox Red, a fluorescent probe specific for mitochondrial ROS (Invitrogen). After treatment, cardiac fibroblasts were incubated with 3 µmol/L MitoSox Red for 30 minutes at 37°C. Fluorescence was observed with a fluorescence microscope, and fluorescence intensities were detected with a multimode microplate reader. Results were expressed as relative fluorescence intensity normalized to controls.

Measurement of NOX Activity

We used a lucigenin-enhanced chemiluminescence assay kit (Genmed Scientifics) to assess NOX activity. Briefly, the cardiac tissues and harvested cells were lysed and sonicated on ice. NOX activity was detected according to the manufacturer's instructions. Chemiluminescence readings were normalized to total protein level. The final results were expressed as relative NOX activity normalized to controls.

Western Blot Analysis

The protein samples obtained from heart extracts and cell lysates were mixed with loading buffer and boiled at 95°C for 5 minutes. Boiled samples were separated on 10% to 12% SDS-polyacrylamide gels, and proteins were transferred to PVDF membranes. The membranes were then incubated with primary antibodies: anti-TGF-β antibody, anti-total ERK1/2 antibody, anti-phospho-ERK1/2 antibody, anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) antibody (all from Cell Signaling Technology; dilution, 1:1000), anti-procollagen I antibody, anti-procollagen III antibody, or anti-FGF-2 antibody (all from Santa Cruz Biotechnology; dilution, 1:200) in Tris-buffered saline and Tween 20 containing 5% (w/v) bovine serum albumin (antibody buffer) overnight at 4°C. The membranes were then washed, incubated with horseradish peroxidase-linked secondary antibody (Cell Signaling Technology; dilution, 1:1000), and visualized with enhanced chemiluminescence (Thermo Fisher Scientific). The densities of the bands were analyzed semiquantitatively and normalized with respect to GAPDH by image software (Thermo).

Statistical Analysis

Normally distributed data were expressed as mean±SD, and non-normally distributed data were expressed as median with interquartile range. Comparisons between 2 groups were performed with the t test or Mann-Whitney U test. Repeated-measures analysis was used to examine overall differences in blood pressure, heart rate, and serum creatinine in rats over time among groups.
One-way ANOVA followed by a Bonferroni comparison test was used to compare data between multiple groups. Categorical data were compared with use of the χ² test. Partial correlation analysis was used to assess the correlations between SOD level and echocardiographic data in patients with type 4 CRS and controls after controlling for age, sex, and weight. All tests were performed using SPSS version 13.0 (SPSS Inc). Statistical differences with a 2-tailed P value <0.05 were considered to be statistically significant.

Oxidative Stress Was Significantly Associated With Cardiac Remodeling and Dysfunction in Patients With Type 4 CRS

A total of 17 patients with type 4 CRS and 16 controls were included in the study. The characteristics of the 2 groups are shown in Table 1. The 2 groups did not differ in sex, weight, DBP, or the rate of use of β-blockers or RAAS inhibitors. Patients with type 4 CRS were older and had higher SBP values than did the controls. Patients with type 4 CRS showed significantly elevated serum creatinine, urea, Ang II, and high-sensitivity C-reactive protein levels and lower estimated glomerular filtration rates compared with controls. As expected, patients with type 4 CRS displayed higher N-terminal pro-brain natriuretic peptide and lower EF than controls. Echocardiographic results showed elevated LVPWd and IVSD in patients with type 4 CRS compared with controls, indicating remarkable cardiac remodeling. Patients with type 4 CRS also showed increased LVEDD. In addition, an increased oxidative stress level was detected in patients with type 4 CRS, as suggested by a decreased serum SOD level compared with controls (93±27 versus 131±20 U/mL, P<0.05). Partial correlation analysis (Table 2) found that SOD level was inversely correlated with cardiac remodeling and positively correlated with EF after controlling for age, sex, and weight. These findings indicated that increased oxidative stress may be an important factor related to the cardiac remodeling and dysfunction in patients with type 4 CRS.

Apocynin Attenuated Cardiac Remodeling, Interstitial Fibrosis, and Cardiac Dysfunction in STNx Rats

Figure 1 shows the results for blood pressure, heart rate, and serum creatinine in rats at baseline and at weeks 4 and 8 after surgery. At 8 weeks after surgery, STNx rats showed significant increases in SBP and DBP but no significant change in heart rate (Figure 1A through 1C). STNx also resulted in significantly higher levels of serum creatinine compared with sham surgery (Figure 1D). Treatment with apocynin attenuated the increases in SBP and DBP but had no significant effect on the increased serum creatinine in STNx rats. Neither STNx nor apocynin treatment had a significant effect on heart rate. At week 8 after surgery, there was no significant difference in survival rate between STNx rats treated with and those not treated with apocynin (Table 3). STNx rats showed a significant decrease in body weight (Table 3) but a marked increase in left ventricular weight compared with sham-operated rats. STNx resulted in a significant increase in the ratio of left ventricular weight to body weight compared with sham surgery (Table 3), indicating significant cardiac remodeling. In addition, echocardiographic examination showed a significant increase in LVPWd in STNx rats compared with sham-operated rats. Increased left ventricular end-systolic diameter (LVESD) and decreased left ventricular fractional shortening and EF were observed in STNx rats (Table 3).
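As an aside, the partial correlation analysis reported above (Table 2) can be implemented by correlating the residuals left after regressing each variable on the covariates. The following sketch is ours (variable names illustrative), not the authors' SPSS procedure:

import numpy as np

def partial_corr(x, y, covariates):
    # Pearson correlation of x and y after removing the linear influence
    # of the covariates (an n x k array, e.g., age, sex, weight) from both.
    X = np.column_stack([np.ones(len(x)), covariates])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])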
Masson staining revealed increased cardiac interstitial fibrosis in STNx rats (Figure 2A and 2B). The expression of FGF-2 was upregulated in STNx rats (Figure 2C). These findings demonstrated that STNx rats with impaired renal function showed remarkable cardiac impairments, including cardiac remodeling, interstitial fibrosis, and cardiac dysfunction, all of which were improved with apocynin treatment. Increased circulating Ang II was found in STNx rats. STNx rats also showed increased cardiac oxidative stress compared with sham-operated rats, as indicated by elevated levels of MDA in left ventricular tissue (3.86±0.68 versus 2.05±0.23 nmol/mg protein, P<0.01), which was also attenuated by apocynin (3.86±0.68 versus 2.32±0.18 nmol/mg protein, P<0.01) (Figure 2D). Apocynin markedly reduced the increased NOX activity in STNx rats (Figure 2E). These findings indicated that apocynin attenuated cardiac remodeling, interstitial fibrosis, and cardiac dysfunction in rats with impaired renal function via inhibiting oxidative stress.

Apocynin Reduced NOX-Dependent Oxidative Stress and Inhibited Expressions of FGF-2 and Fibrosis Biomarkers in Cardiac Fibroblasts Treated With Ang II

To explore whether the cardioprotective effects of apocynin in type 4 CRS are independent of its antihypertensive effect, we determined the effect of apocynin on cardiac fibroblasts treated with Ang II in vitro. Ang II was used as a stimulator of oxidative stress because of its important role in CKD. Western blot analysis showed that Ang II upregulated the expressions of FGF-2 and fibrosis biomarkers, including procollagen I, procollagen III, and TGF-β, in cardiac fibroblasts (Figure 3). Ang II also induced a significant increase in general intracellular ROS (Figure 4A) and NOX activity (Figure 5) in cardiac fibroblasts, indicating increased oxidative stress. The increases in general and NOX-dependent superoxide anion and the upregulations of FGF-2 and fibrosis biomarkers induced by Ang II were attenuated with apocynin treatment. However, apocynin had no significant effect on Ang II-induced mitochondrial superoxide anion production (Figure 4B). Therefore, these in vitro findings demonstrated that apocynin had an antifibrotic effect on Ang II-treated cardiac fibroblasts through its inhibition of NOX-dependent oxidative stress.

Figure 1. The effects of apocynin on blood pressure, heart rate, and serum creatinine in rats at baseline and at weeks 4 and 8 after subtotal nephrectomy. A through C, Apocynin reduced the elevated SBP and DBP in STNx rats; STNx and apocynin had no significant effect on heart rate. D, The increased serum creatinine in STNx rats was not significantly affected by apocynin. Data are expressed as mean±SD. n=10 for each group; *P<0.01 vs sham group; #P<0.01 vs apocynin+STNx group. DBP indicates diastolic blood pressure; SBP, systolic blood pressure; STNx, 5/6 subtotally nephrectomized.

Beneficial Effect of Apocynin Occurred Through Inhibition of ERK1/2 Activation and Subsequent FGF-2 Expression

The ERK1/2 pathway was activated in STNx rats, and this activation was inhibited by apocynin treatment (Figure 6). In the in vitro study, Ang II induced phosphorylation of ERK1/2 in cardiac fibroblasts, while pretreatment with apocynin showed an inhibitory effect on ERK1/2 phosphorylation (Figure 7A). Furthermore, ERK1/2 inhibition with U0126 suppressed Ang II-induced upregulations of FGF-2, procollagen I, procollagen III, and TGF-β in cardiac fibroblasts (Figure 7B and 7C).
These results demonstrated that the protective effect of apocynin against Ang II-induced fibrosis occurred via reducing NOX-dependent oxidative stress and inhibiting subsequent ERK1/2 activation and FGF-2 expression.

Discussion

Growing attention has been paid to type 4 CRS because of its high morbidity and mortality. We found that oxidative stress was involved in the development of type 4 CRS. In patients with type 4 CRS, oxidative stress was remarkably elevated and significantly associated with cardiac remodeling and dysfunction. Furthermore, the animal study, for the first time, demonstrated that apocynin effectively attenuated cardiac remodeling and interstitial fibrosis and improved cardiac function in a rat model of type 4 CRS through inhibition of oxidative stress and FGF-2 expression. Our in vitro results indicated that the protective effects of apocynin were partly mediated by inhibition of the NOX-dependent oxidative stress-activated ERK1/2 pathway and FGF-2 upregulation and were independent of hemodynamic changes. The pathophysiological mechanism linking renal and cardiac impairments in type 4 CRS is not fully understood. The pivotal role of oxidative stress in the pathogenesis of cardiovascular diseases, including hypertension, cardiac remodeling, and heart failure, has long been emphasized. 25 Previous studies indicated that increased oxidative stress also played a pivotal role in CRS. [26][27][28] In our cross-sectional study, SOD was significantly decreased and related to cardiac remodeling and dysfunction in patients with type 4 CRS. Furthermore, increased oxidative stress was detected in STNx rats with remarkable cardiac impairments. Although there is not yet a well-accepted model of type 4 CRS, the cardiac impairments in STNx rats, including cardiac remodeling, interstitial fibrosis, and cardiac dysfunction, mimicked the cardiac changes in this syndrome. 1

Figure 3. Apocynin attenuated Ang II-induced upregulations of fibrosis biomarkers and fibroblast growth factor-2 in cardiac fibroblasts. A, Western blot results showed that Ang II-induced upregulations of procollagen I, procollagen III, and FGF-2 were reduced by apocynin (n=4). B, Apocynin attenuated Ang II-induced upregulation of TGF-β (n=4). Data are expressed as mean±SD. *P<0.01 vs control group; #P<0.01 vs Ang II group. Ang II indicates angiotensin II; FGF-2, fibroblast growth factor-2; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; TGF-β, transforming growth factor-beta.

NOX is a major source of ROS within the heart. 29 The expression and activity of NOX are upregulated in clinical and experimental cardiovascular diseases. 30 NOX activity also increased significantly in STNx rats and Ang II-treated cardiac fibroblasts in the present study. Experimental studies have found that inhibition of NOX has cardioprotective effects, including reducing blood pressure, inhibiting cardiac remodeling, and improving cardiac function. 30 Inhibition of oxidative stress with antioxidants, including NOX inhibitors, may also be a potential therapeutic strategy for CRS. [26][27][28] However, studies assessing antioxidant treatments in type 4 CRS are rare. A small randomized controlled trial by Camuglia et al 31 found that antioxidant treatment with N-acetylcysteine improved forearm blood flow in patients with CRS (n=9). This clinical trial indicated the beneficial effect of antioxidant treatment in CRS.
In the present study, inhibition of NOX with apocynin ameliorated the increased oxidative stress and the accompanying cardiac impairments in STNx rats. The present experimental study adds evidence to the beneficial effects of antioxidant treatment with apocynin in type 4 CRS. However, we should note the gaps between experimental studies and clinical trials concerning antioxidant treatments in cardiovascular diseases. Although a growing number of experimental studies report beneficial effects of antioxidant treatments on cardiovascular diseases, the results of clinical trials so far are controversial. Vitamins are the most commonly used antioxidants in clinical trials. Recently, a meta-analysis found that antioxidant vitamins, including vitamin C, vitamin E, and beta-carotene, had no significant effect on major cardiovascular diseases. 32 Differences in antioxidants and dosages may influence their curative effects on cardiovascular diseases. Some antioxidants show dose-dependent antioxidative and prooxidative properties. 33 In addition, low to moderate ROS levels promote an endogenous antioxidant response by upregulating antioxidant enzymes. 34,35 Therefore, the regulation of oxidative stress level is complicated, and there is a long way to go to find clinically effective antioxidants and related therapeutic regimens. In total, our findings confirmed the cardioprotective effects of apocynin in type 4 CRS. Apart from the cardioprotective effects, apocynin showed an antihypertensive effect. This was in accordance with previous studies in other hypertension models. 36 To investigate whether the cardioprotective effects of apocynin were independent of its antihypertensive effect, we conducted an in vitro experiment in cardiac fibroblasts, using Ang II as a stimulator because of its significant effect in CKD. 37 The in vitro results showed a significant inhibitory effect of apocynin on fibrosis and NOX-dependent oxidative stress in cardiac fibroblasts. Although apocynin had an antihypertensive effect, previous studies found that lowering blood pressure alone could not improve CKD-induced cardiac fibrosis. 38,39 Therefore, these results indicated that the effect of apocynin on cardiac interstitial fibrosis may be independent of its antihypertensive effect. Studies demonstrate that apocynin reduces NOX activity by inhibiting the expression of NOX subunits and their translocation from the cytosol to the membrane. 40,41 The cardioprotective effects of apocynin occurred via attenuation of NOX-dependent oxidative stress, but the potential downstream mechanisms are not clear. 14,42 Li et al found that osteopontin might be involved in the protection of apocynin against cardiac fibrosis. 14 Liu et al found that upregulation of the expression and activity of SERCA2a was associated with the beneficial effect of apocynin on cardiac dysfunction. 42 The present study found that the phosphorylation of ERK1/2 and the overexpression of fibrosis biomarkers in both STNx rats and cardiac fibroblasts treated with Ang II were markedly suppressed by apocynin. Furthermore, ERK1/2 inhibition suppressed the profibrotic effects of Ang II on cardiac fibroblasts. These results revealed that apocynin-mediated suppression of cardiac fibrosis occurred partly through inhibiting the NOX-dependent oxidative stress-activated ERK1/2 pathway. FGF-2 is an important profibrotic factor.
Findings from our previous study 18 and others' 19 revealed that FGF-2 played a pivotal role in cardiac remodeling and fibrosis and that cardiac nonmyocytes such as fibroblasts are the main sources of cardiac FGF-2. In the present study, we found that FGF-2 was also involved in the cardiac impairments of type 4 CRS. Many factors believed to induce oxidative stress, including Ang II, endothelin-1, and transforming growth factor-β1, can upregulate FGF-2 and are suppressed by antioxidants. 43,44 In our study, the upregulated FGF-2 and increased oxidative stress in STNx rats and Ang II-treated cardiac fibroblasts were inhibited by apocynin. Therefore, the harmful effects of oxidative stress on cardiac tissue in type 4 CRS may occur partly through upregulation of FGF-2. Previous studies found that FGF-2 activated the ERK1/2 pathway in fibroblasts and endothelial cells. 45,46 However, we found that an ERK1/2 inhibitor attenuated the upregulation of FGF-2 in fibroblasts.

Figure 7. Apocynin attenuated cardiac fibrosis via inhibiting extracellular signal-regulated kinase 1/2 activation and fibroblast growth factor-2 expression. A, Ang II-induced ERK1/2 phosphorylation (p-ERK) was inhibited by apocynin. B and C, Ang II-induced expressions of FGF-2 and fibrosis biomarkers, including procollagen I, procollagen III, and TGF-β, were reduced by U0126 (ERK1/2 inhibitor) in cardiac fibroblasts. Data are expressed as mean±SD; n=3 for each group. *P<0.01 vs control group; #P<0.01 vs Ang II group. Ang II indicates angiotensin II; p-ERK, phosphorylated extracellular signal-regulated kinase; FGF-2, fibroblast growth factor-2; TGF-β, transforming growth factor-beta; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; ERK1/2, extracellular signal-regulated kinase 1/2.

These findings indicated that there may be a positive feedback mechanism between FGF-2 and the ERK1/2 pathway, but further investigations are required. In all, our results demonstrated that the cardioprotective effects of apocynin were due to reducing NOX-dependent oxidative stress and possibly to inhibition of the positive feedback mechanism between FGF-2 and ERK1/2. Therefore, NOX-dependent oxidative stress-ERK1/2-FGF-2 may be a novel mechanism involved in the cardioprotective effects of apocynin. In addition, studies found that apocynin had protective effects on the kidneys in some models of kidney disease, including attenuating selective albuminuria, tubular apoptosis, and interstitial fibrosis. 47,48 However, in the present study, apocynin showed no significant effect on serum creatinine in STNx rats. A possible explanation may be the difference in animal models: an irreversible reduction in renal mass and function after STNx surgery may be hard to improve with apocynin treatment. There are some limitations in the present study. First, the study exploring the relationship between oxidative stress and cardiac impairments in patients with type 4 CRS was a cross-sectional study with a small sample size. Second, in the animal study, we did not use RAAS blockers as a positive control treatment. Previous studies have already demonstrated that RAAS blockers can attenuate CKD-related oxidative stress and cardiac remodeling. We will compare the distinct as well as synergistic effects of apocynin and RAAS blockers on type 4 CRS in future studies. Also, although apocynin is a commonly used antioxidant, there are some controversies about its specificity. 49 However, in this study, we did find a significant effect of apocynin on reducing oxidative stress.
Future studies need to evaluate the effects of other, more selective antioxidants in type 4 CRS.

Conclusion

Oxidative stress plays a pivotal role in the cardiac injury accompanying CKD. Increased oxidative stress was significantly linked to the cardiac remodeling and dysfunction in patients with type 4 CRS. Our results showed, for the first time, that apocynin attenuated cardiac injury in type 4 CRS. The mechanism may be inhibition of the NOX-dependent oxidative stress-activated ERK1/2 pathway and subsequent FGF-2 upregulation. The present study added evidence to the cardioprotective effects of antioxidant treatment and underlined the involvement of FGF-2 in type 4 CRS. Given the high morbidity and mortality of this syndrome and the controversy over antioxidant treatment in cardiovascular diseases, more studies are needed to assess the roles of oxidative stress and the effects of different antioxidants in type 4 CRS as well as other cardiovascular diseases.
Women's experiences of psychological treatment and psychosocial interventions for postpartum depression: a qualitative systematic review and meta-synthesis

Background To provide a comprehensive, systematic evaluation of the literature on experiences of psychological interventions for postpartum depression (PPD) in women. Depression is one of the most common postpartum mental disorders. Studies have identified that psychological interventions reduce depressive symptoms. However, less is known about the experiences of women who have received such treatments.

Methods A systematic review of the literature was conducted by searching five databases (CINAHL, Cochrane Library, EMBASE, Medline, PsycINFO) in August 2022. Studies with qualitative methodology examining women's experiences of professional treatment for PPD were included and checked for methodological quality. Eight studies (total N = 255) contributed to the findings, which were synthesized using thematic synthesis. Confidence in the synthesized evidence was assessed with GRADE CERQual.

Findings The women had received cognitive behavioral therapy (5 studies) or supportive home visits (3 studies). Treatments were individual or group-based. Two main themes were identified: Circumstances and expectations, and Experiences of treatment, with six descriptive themes. Establishing a good relationship with their health professional was important for the women, regardless of treatment model. They also expressed that they wanted to be able to choose the type and format of treatment. The women were satisfied with the support and treatment received and expressed that their emotional well-being had improved, as had the relationship with their infant.

Conclusion The findings can be helpful for developing and tailoring patient-centered care for women who are experiencing postnatal depression.

Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02772-8.

Background

Pregnancy and the first year after childbirth involve significant changes in a woman's life and can be associated with emotional distress of varying types and degrees. For some, worry and mood disturbances are natural and transient reactions to the challenges of a new life situation. For others, symptoms can persist and develop into a condition where support or treatment is needed. Depression is one of the most common postpartum mental disorders during this period. The prevalence of postpartum depression (PPD) has been estimated at 5-9% in high-income countries, and around 13% when self-report measures are used [1,2]. Women with previous mental health problems are more at risk, as are women with previous or current stressful life experiences, especially exposure to interpersonal violence, partner relationship problems, migration, and lack of support [3,4]. Associations between PPD and adverse outcomes for the child are most evident when depression is severe or recurrent, or when associated risk factors may explain a substantial part of the negative outcome on children [3].
In general, and across various cultures, mothers with PPD have been found to prefer talking therapies or supportive interventions over pharmacological treatments, in part due to fear of negative effects on the child through transmission to breastmilk [5][6][7]. A recent review highlighted how mothers put what they thought was best for their baby first when making decisions about treatment, including taking or not taking medication [8]. Systematic reviews have found that psychotherapy and psychosocial interventions for perinatal depression are generally effective [9][10][11]. Common treatments for PPD are cognitive behavior therapy (CBT), interpersonal psychotherapy (IPT), and non-directive supportive counseling, also called listening visits [9]. Treatments can use an individual or group format, take place as home visits, at a clinic, or be internet-based, and are often tailored for the postnatal period, sometimes including a parent-child interaction component. Besides outcomes in terms of symptom reduction, it is also relevant to explore women's experiences of treatment. A meta-synthesis focusing on experiences of seeking and receiving psychosocial interventions for postpartum depression found that women could experience several barriers to help-seeking but were generally positive toward the interventions they had received [12]. However, this meta-synthesis included low-quality studies. Barriers can be lack of time, stigma, childcare or transportation issues [5][6][7], and negative healthcare experiences [13]. Some women also have concerns about being judged as a "bad mother", which may delay seeking help. Another meta-synthesis of studies concerning the experiences of perinatal women with a broader range of mental health problems identified several unmet needs regarding information, collaborative integrated care, and post-treatment follow-up [14]. Some important components of treatment expressed by the women were the health professionals' non-judgmental attitude and their conveying of hope. The aim of the current review was to provide an updated and comprehensive understanding of women's experiences of psychological interventions for postpartum depression, based on a systematic evaluation of the literature and a meta-synthesis of the findings, including an assessment of the reliability of the findings.

Search strategy

An information specialist (MKF) searched five databases: CINAHL (EBSCO), Cochrane Library (Wiley), EMBASE (Embase.com), Medline (Ovid), and PsycINFO (EBSCO). Searches were run in November and December 2021, with updates in June 2022. A manual search of reference lists from the included articles was also undertaken to identify studies not captured by the electronic search. The search strategy was developed by the information specialist in collaboration with the experts in the review team and combined terms and phrases describing the population, interventions, patients' experiences, and qualitative research methods. Another information specialist at the Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU) reviewed the search strategy using the PRESS Checklist [15]. The search strategy and search terms used can be found in Appendix 1. The review used the PRISMA Guidelines for reporting the search strategy [16].

Inclusion criteria

Studies were included if they satisfied the inclusion criteria; see Table 1.
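The actual search strings are given in the review's Appendix 1; purely as an illustration of the block structure described above (population AND intervention AND experience AND qualitative-method terms), a query of this kind might be assembled as follows, with all terms hypothetical:

blocks = [
    '("postpartum depression" OR "postnatal depression")',
    '(psychotherap* OR counsel* OR "cognitive behav*" OR "listening visits")',
    '(experienc* OR perception* OR views OR attitudes)',
    '(qualitative OR interview* OR "thematic analysis")',
]
print(" AND ".join(blocks))  # one combined boolean query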
Study selection

The search process yielded 8804 unique studies. All titles and abstracts were screened for eligibility, 70 articles were assessed in full text, and eight studies were included for data extraction and synthesis after quality assessment (Fig. 1).

Quality assessment of primary studies

To assess methodological quality and risk of bias, included studies were evaluated using the SBU Quality assessment tool for studies with a qualitative design [17]. This critical appraisal tool consists of five domains (adherence to epistemological position, recruitment and appropriateness of participants, appropriateness of data collection procedures, aspects of the data analysis, and the role of the researcher), each with signaling questions. Three authors initially assessed each study (LS, PM, AD, EA, or JÅ), followed by a consensus discussion concerning the degree to which the methodological limitations impacted the findings, assessed as low, moderate, or high risk. For studies with a low or moderate risk of bias, data were extracted and compiled in tables, while studies with a high risk of bias were excluded from further analyses.

Data extraction and synthesis

An inductive thematic synthesis was conducted using a three-stage procedure, largely in line with Thomas and Harden (2008) [18]. First, the included studies were read in depth to provide a full understanding. Three authors (EA, PM, and LS) also discussed their respective pre-understanding of the field, with both insider and outsider perspectives. PM (clinical psychologist) and EA (midwife) are both researchers in the field; PM also had experience of treating PPD. LS is a psychology professor, not experienced in this field but experienced in research methodology. These authors then independently extracted meaningful units from the included studies and translated them into codes. In stage 2, codes were grouped into descriptive themes, first individually and then in a consensus procedure until everyone agreed. The same three authors then grouped the stage 2 themes, resulting in two overarching stage 3 themes. Thomas and Harden (2008) [18] have described this third step as generating analytical themes. In the current synthesis, however, the two main themes generated were descriptive and will therefore be referred to as main themes. Throughout the process, the emerging results were reflected upon in relation to the results of the primary studies to ensure that the findings were grounded in the data and interrelated with each other to form a systematic whole. Quotes illustrating the findings were selected by all authors together.

Assessment of the reliability of the combined findings

The reliability of the synthesis was assessed using GRADE-CERQual (www.cerqual.org), which consists of four domains: methodological limitations, coherence, adequacy of data, and relevance. Three authors (EA, PM, LS) conducted the assessments. First, two authors (LS and EA) assessed the synthesis individually and proposed a preliminary assessment, which was then reviewed by a third author (PM), adding new perspectives. Finally, consensus was reached among the three authors on a reliability assessment for each descriptive theme in stage 2.

Characteristics of the included studies

The eight studies represented the experiences of 255 women from the UK, Australia, and Canada. See Table 2 for detailed information about the participants, the treatments, and the research methodologies.
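As a schematic of the three-stage structure described above (codes grouped into descriptive themes, which are in turn grouped into main themes), a toy illustration follows; the theme labels are taken from the findings below, while the example codes are paraphrased and partial:

# Stage 1 codes -> stage 2 descriptive themes -> stage 3 main themes
synthesis = {
    "Circumstances and expectations": {
        "Practical circumstances and social support": [
            "childcare", "transportation", "lack of family support",
        ],
        "Expectations, previous experiences, and attitudes": [
            "fear of not being understood", "stigma",
        ],
    },
    # "Experiences of treatment" would be structured the same way.
}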
Meta-synthesis

The meta-synthesis resulted in two main themes, Circumstances and expectations and Experiences of treatment (stage 3), with two and four (stage 2) descriptive themes, respectively. See Table 3 for the certainty of evidence assessment and CERQual component grading for each descriptive theme.

Main theme 1: circumstances and expectations

Practical circumstances and social support were important for treatment to be feasible. Women in several studies described how important practical and social circumstances could be for them to take part in treatment. Women talked about practical issues such as transportation [20] and childcare [19,20] as fundamental. The internet-based therapies were appreciated for being accessible outside of office hours, despite some women having limited time for the program [22]. Another aspect was that many participants felt a lack of support from family and friends [20,21,26], and treatment was their only opportunity to talk about how they were feeling. Other women experienced some support from their family and felt that this support was vital for treatment.

"I didn't have anyone to talk to and no one actually knew about me being diagnosed with postnatal depression, my mum or anyone, no one knew, not even my partner. So it was quite nice just to offload on someone." (HV listening visits [26])

Expectations, previous experiences, and attitudes influenced how women experienced treatment. Women in most of the studies reported on how previous experiences, expectations, motivation, and beliefs about PPD influenced their experience of treatment. The women's expectations of treatment were generally positive; however, there were those who didn't believe that treatment would help them, grounded in a sense of hopelessness [21] or in low confidence in health services, e.g., fear of not being understood [25,26] or not being taken seriously [21]. Others talked about how their feelings of shame for being depressed, and thoughts about not being a good mother, affected how they believed treatment providers would perceive them [24,26]. Obstacles to seeking help could also be previous negative experiences of certain health professionals [24,25] or screening procedures [24], or fear of having their child removed if they revealed their depression [25].

"None of us have ever admitted to having postnatal depression…there is still a stigma it's incredible." (Online-CBT [21])

There were women who had their own thoughts about why they were depressed, how it should be treated, and the potential of the treatments [19,24,25]. Some women worried that other participants in group sessions [20] or the health visitor [26] might disclose confidential information and therefore chose not to share all their thoughts and problems.

Main theme 2: experiences of treatment

Overall, the included studies showed that the women were satisfied with the treatments they had received. Contributing factors were the format and content of the treatments, as well as the clinician's approach. The received treatment's modality was appreciated, but women had specific preferences concerning length, scope, and individual adaptations.
Most of the studies included women's thoughts on and experiences of the treatment formats. Women who had received group therapy generally expressed positive experiences. They appreciated hearing other women's stories and that they could support each other [19,20]. Objections towards the group format could be not feeling connected with others in the group, or that group therapy does not suit everyone [19]. Others would have liked more group sessions, and individual sessions as an adjunct to the group sessions [20]. Women who received home visits were satisfied to receive support in their own environment and appreciated the continuity [23,25,26]. Some advantages mentioned by women who received internet-based CBT were its accessibility and flexibility, and being able to work with the modules whenever they could fit them in [21,22]. Internet therapy was experienced as less scrutinizing than face-to-face therapy [22], and less stigmatizing [21]. In one treatment model, the internet format was individualized with a personal e-mail from the therapist, which was appreciated [22].

"When my maternal depression was really bad, there was no way I would have left my house to speak with a therapist - I was so weepy, shaky and terrified.…//… in those early weeks, the sort of anonymous nature of this program was a Godsend." (Internet CBT [22])

In another study, where the internet format did not include any personal contact with the therapist, there were more dropouts, and the women had several suggestions for improvement, e.g., more needs-based and relevant content, a more interactive format, and more individual support [21]. Regardless of treatment format, there were women who would have liked more treatment sessions and more flexibility and tailoring [20][21][22][26]. Other women were happy with the number of sessions [22]. Ending therapy was described as a potentially anxiety-provoking time [19,26]. When women experienced continued support from family, other group participants, or professionals, this did not have to be a problem [19]. When no other support was available, however, ending therapy could be experienced negatively [26].

"Just me thinking about it [the idea of no treatment after the visits] now makes me feel quite panicky… what would have been the point of ripping off the plaster and starting to abrade the wound, only to then just say, oh well." (HV listening visits [26])

The relationship with the clinician, and perceptions of her/his competencies, influenced how treatment was experienced. Women in all eight studies talked about how they experienced the relationship with their health professional and their competencies. The relationship with the nurse or therapist was described as important, regardless of treatment model or format. A good relationship was associated with trust and with being able to talk about their depression. Some specific aspects of the relationship mentioned were chemistry [19,22], credibility and broad competence [20,21,23,26], e.g., knowledge of both infants' needs and postnatal depression [23], interpersonal skills [20,23], and intercultural and language competencies [20].

She [health visitor] was so understanding and easy to talk to and willing to listen, that I actually opened up, otherwise I wouldn't have done.
(HV listening visits [24])

Sometimes a good relationship was not established, or mothers did not feel confident that their therapist had the appropriate competence or necessary personal qualities [24-26], or found them inflexible [26]. These experiences could lead women to decline further sessions [24, 25]. Some mothers wondered who the home visitor's primary interest was, the mother or the baby [25].

Women expressed varying opinions about the treatments' content, therapeutic approach, and the extent of their own expected contribution.

Most studies included views concerning the specific content and therapeutic approach of the received treatment, and how this impacted the women's own contribution. Women who received home visits had many thoughts about the health visitors' approach [23-26]. Active listening with an empathetic and non-judgmental approach was appreciated by many women as helpful for feelings of guilt and inadequacy [23].

"We've analyzed all the reasons why I've been down and depressed, how to, sort of, challenge negative thoughts." (Individual CBT [25])

In the older studies there were women who didn't find the home visits meaningful [24, 25], and these visits were sometimes described as too unstructured [24]. In the newer studies, however, the experiences of home visits were generally positive. Although the home visits were intended to be supportive, i.e., not giving advice, there were women who expressed a need for more clear and concrete advice from their home visitor [23, 25, 26]. Also, women who received CBT expressed positive experiences of their therapist's personal approach [19, 22].

"[The internet therapist was] so helpful and thoughtful. She wasn't hard on me like I am on myself and really made me stop and think about how I treat myself." (Individual CBT [22])

Women described positive treatment outcomes, but a few did not experience any improvement.

In general, women experienced their received intervention as helpful, and as positive for their confidence and self-esteem. Treatment was described as having led to a better understanding of their own distress and to insights about depression [20, 26], to acceptance and normalization, to a generally more positive outlook on life and the future, and to an increased sense of control [19, 20, 22].

"Not dwelling on all the negatives that I might feel, and she really made me see the little things that actually were big things that I'd done in life, so yeah, I think it made me a very different, you know, person." (Individual CBT [19])

A common experience following treatment was a better mother-infant relationship. Women described how they had gained knowledge about infants and about their own importance for their child's development [23]. Many felt that their own improved mood had led to a better relationship with their child [19, 22] and that they had become more relaxed, patient, and secure in their parental role [22, 23].

"By 12 months, I felt I had the tools within myself to continue with sureness that I was a capable, confident mother." (Supportive home visits [23])

There were women who didn't experience any improvement. In general, these women didn't perceive supportive counselling as therapy [24], or as a sufficiently powerful intervention [26], and proceeded to seek other treatments instead. This was particularly notable in women with more chronic or recurrent depression [24, 26].

Discussion

This meta-synthesis was based on studies that explored women's experiences of CBT or supportive home visits. Treatments were individual or group-based.
Overall, the women were satisfied with their treatment, although various practical and social circumstances, as well as their own expectations, had an impact on their participation in and experience of treatment. Some reported findings were increased confidence and sense of control, and a better mother-infant relationship. Similarly, in an earlier meta-synthesis of psychological and psychosocial interventions for PPD, almost all included studies reported that women found their interventions helpful, specifically concerning their distress, their parenting, and their relationships [12].

Recurring themes in the current and previous syntheses were women's wishes to be involved in decisions concerning their treatment and the impact of their own expectations of treatment [12, 14, 27]. They wanted to be involved in the choice of treatment type and format, and for treatments to be individualized, e.g., the selection and order of modules to be tailored to their personal preferences and practical circumstances. It has been argued that therapeutic alliance, as well as flexibility, i.e., tailoring psychological treatments to the individual's needs and circumstances, can be more important than fidelity to treatment protocols [28]. In meta-analyses exploring the effectiveness of treatments for PPD, CBT has consistently demonstrated a favorable impact, e.g., Sockol et al. (2015) and Huang et al. (2018) [29, 30], with a relatively large number of studies confirming these results. Furthermore, this effect seems to be consistent across formats (therapy delivered individually, in groups, or digitally) [31]. This is encouraging, suggesting that mothers' preferences for various formats align with positive outcomes from an efficacy perspective, potentially instilling a sense of confidence in clinicians when considering the delivery of CBT in diverse forms. A recent synthesis investigating experiences of psychological treatment for depression in a broader context, excluding PPD [27], highlights how expectations concerning specific therapeutic approaches or formats can influence motivation and engagement in therapy.

The current synthesis identified some general expectations, e.g., positive previous experiences of care or expecting services to be under-resourced. There were also expectations, beliefs, and fears more specific to the perinatal period and related to being a new mother, in line with other syntheses in postpartum contexts [12, 14], such as motivation to get better, or fear of not being understood or not taken seriously. Mothers also worried that they were, or would be seen as, a bad mother, sometimes to the extent of fearing that their child would be removed. Our synthesis, as well as the one by Hadfield et al. [12], also identified women's uncertainty concerning the health visitor's role and competence to assess and support the mental wellbeing of mothers, which could sometimes lead to discontinuing treatment.

Women who had received group therapy expressed mainly positive experiences, consistent with McPherson et al.'s (2020) synthesis of non-postpartum treatments, where the group format contributed to normalization when women realized that they shared similar experiences and were not alone [27].
A negative aspect of the group format identified by McPherson et al., but less evident in our synthesis, was not feeling safe disclosing feelings, and thus censoring what they shared. Common findings regarding CBT approaches were finding homework burdensome and, more evident in McPherson's synthesis than in the current one, that CBT modules could be difficult to apply.

Another finding, in line with Hadfield and Wittkowski [12] and a review by Daehn et al. investigating help-seeking among perinatal women [7], was the role of support from the partner or other family members in seeking and taking part in treatment. Practical circumstances such as transportation and childcare issues were evident for depressed mothers in the current and Hadfield's synthesis, providing one reason why home visits were appreciated. However, the review of treatments in non-postpartum populations by McPherson et al. also found that transportation could be a problem and that remote therapy was preferred by some patients [27].

The significance of establishing a good relationship with their health professional was emphasized by the women, regardless of the treatment's format or theoretical basis, consistent with other syntheses [12, 14, 27]. An empathetic, supportive, and non-judgmental approach was essential for the women's wish to follow through with the treatment, and for their recovery. This is understandable considering how depression during this period is associated with feelings of anxiety, guilt, and worthlessness [32, 33]. In the synthesis by Megnin-Viggars et al. (2015), women emphasized continuity of care, for example seeing the same nurse or therapist during the whole care period from assessment to treatment and follow-up, as important for being able to disclose symptoms of depression [14]. McPherson et al. (2020) emphasize patients' descriptions of the therapeutic relationship as collaborative, providing a space for sharing thoughts and feelings and for receiving advice [27].

Methodological strengths and limitations

Eight studies with low or moderate methodological limitations were included in the synthesis, and the findings concerning the women's experiences were concordant among the included studies. Most studies had relatively few participants, but the interviews generated rich data with detailed descriptions of experiences. Most of the studies contributed data to all six descriptive themes, which were assessed as reflecting the variation in the findings, including contradicting and differing views and the complexity of the participants' experiences. Authors had used semi-structured interview schedules with similar topic guides, which likely explains the similar types of narratives found. All studies lacked information about the researchers' competencies and experience, and their relationship to the participants; thus, how the authors' preunderstandings were taken into consideration is largely unknown.
Other limitations are that the included studies were from the UK, Australia, and Canada, and only one study targeted ethnic minorities, limiting the generalizability of our findings. Also, four of the eight included studies were more than 10 years old. Considering that we found some differences between the older and newer studies in our review, it is possible that the delivery and formats of these treatments, mainly listening visits by a nurse or health visitor, may have changed over time, suggesting a need for more updated studies.

A treatment with perhaps even better effect on depression during the perinatal period is interpersonal therapy (IPT) [34], although it is less studied. It has been suggested that IPT may be especially suitable for women with postpartum depression because it focuses on improving relationships and addressing social support, which can be critical during the challenging postpartum period. IPT has been found to help women navigate the interpersonal challenges and changes that often accompany motherhood [34]. Unfortunately, our current meta-synthesis did not include any IPT studies, and limited data on treatment experiences are available. However, one study by Grote et al. (2009) reported high treatment satisfaction among mothers treated for PPD with IPT, as assessed through a brief questionnaire [35].

Strengths of the study include our following of an established method for synthesizing qualitative findings. Furthermore, and unlike previous meta-syntheses, we used CERQual to assess confidence in these findings.

Conclusions

Most women described positive outcomes of the treatment they received, and findings suggested improved parent-related outcomes. The findings highlight the importance of involving women in decisions concerning treatment for postpartum depression so that support can be tailored to their circumstances and preferences. It is important for practitioners to take an interest in the women's own thoughts about why they are depressed and their expectations of the treatment. Furthermore, the personal approach of the health professional, who should be non-judgmental, sensitive, and able to convey hope, is important during this vulnerable time. There is a need for updated research, including experiences of IPT.

Fig. 1 PRISMA flow of the study selection process
Table 1 Inclusion and exclusion criteria
Table 2 Characteristics and methodological assessment of the included studies
Table 3 Main themes, descriptive themes, and confidence in the findings
Contraction theorem for generalized pairs

We use Kollár's gluing theory to prove the contraction theorem for generalized pairs. In particular, we show that we can run the MMP for any generalized log canonical pair.

Introduction

We work over the field of complex numbers C. In recent years, it has become increasingly clear that it is important to generalize results from the MMP for pairs to the MMP for generalized pairs; see [Bir21] and references therein. One of the most important conjectures in the MMP is the abundance conjecture. It is expected that if (X, D) is an lc pair and K_X + D is nef, then K_X + D is semi-ample. An important result in this direction is [FG14, HX16], where it is shown that log abundant nef lc pairs are semi-ample. Unfortunately, this is false for generalized lc pairs even if we assume log abundance ([LX22a, Example 1.4]). Nonetheless, some weaker semi-ampleness results related to the MMP are still believed to be true for generalized pairs and will lead to many interesting applications. Therefore it is important to understand exactly where the semi-ampleness starts to fail and what assumptions one should add to avoid this failure.

As in the log canonical case (cf. [Kol13, Section 5.5]), a generalized log canonical structure gives a stratification (called the glc stratification) of a variety with respect to its glc centers ([LX22b, Section 4]). Thanks to the P^1-link techniques developed in [FS20, Theorem 1.4] (cf. [Bir20, Theorem 3.5]), the glc stratification turns out to be nice and useful. For instance, we use the glc stratification to prove that any glc singularity is Du Bois ([LX22b, Section 6]). More importantly, we can do adjunction to glc centers via the generalized canonical bundle formula developed in [Fil20, HL21b, JLX22], and thus this stratification allows us to use Kollár's gluing theory to prove semi-ampleness properties by induction on the dimension. The essential difficulty is to show that some induced equivalence relation is finite. In the lc pair case, as explained in [HX13, HX16], the finiteness of B-representations implies the required finiteness of relations. In the glc pair case, this finiteness of B-representations can fail in general, which is the main reason that the abundance conjecture is not true for generalized pairs. However, there are certain situations where we can actually show the finiteness of Kollár's relations regardless of the B-representations. In these cases, we expect the corresponding semi-ampleness results to hold. For example, in [LX22b] Jihao Liu and the author prove the existence of glc flips and an analogue of [HX13, Theorem 1.1] in the setting of generalized pairs.

Apart from semi-ampleness, the Minimal Model Program seems to work quite well for generalized pairs. For example, termination of flips and existence of minimal models or Mori fiber spaces hold for many generalized pairs under standard assumptions similar to those for usual lc pairs ([HL22, Theorem 4.1], [LT22, Theorem 1.1], [Has22, Theorem 3.17], [LX22a, Theorems 1.2, 1.3]). As summarized in [HL21a], many very general results concerning running the MMP for lc/dlt pairs are still true for glc/gdlt pairs. Moreover, [HL21a] established the Cone theorem for glc pairs, and also established the Contraction theorem when M_X is R-Cartier. Their approach involves replacing the generalized pairs by some auxiliary usual pairs ([HL21a, Theorem 4.1]) with the help of some ample divisor.
However, there are essential differences between glc pairs and lc pairs when the ambient variety is not Q-factorial ([LX22b, Example 2.1]). Hence, in order to obtain the contraction theorem for glc pairs in full generality, which is equivalent to showing some semi-ampleness, we have to extend the theory for generalized pairs instead of just using theorems developed for lc pairs.

The main purpose of this paper is to use the glc stratification developed in [LX22b] and Kollár's gluing theory to prove the following semi-ampleness theorem:

Theorem 1.1 (= Theorem 4.1). Let (X, Δ, M)/U be a glc Q-pair, and L a nef Q-divisor such that L − (K_X + Δ + M_X) is nef and log big/U with respect to (X, Δ, M). Then L is semi-ample over U.

Since ample divisors are automatically nef and log big, in particular we have:

Theorem 1.2. Let (X, Δ, M)/U be a glc pair and A an ample/U R-divisor. Then K_X + Δ + M_X + A is nef/U if and only if it is semi-ample/U.

As an easy corollary, we have:

Theorem 1.3. Let (X, Δ + A, M)/U be a glc pair such that
• B_+(A/U) contains no glc center of (X, Δ + A, M).

By looking at the gluing relations more carefully, we can actually get a stronger result, which is the g-pair analogue of the base point free theorem for lc pairs.

Theorem 1.4 (Base point free theorem for glc pairs). Let (X, Δ, M)/U be a glc Q-pair, and L a nef/U Cartier divisor such that aL − (K_X + Δ + M_X) is nef and log big/U with respect to (X, Δ, M) for some positive real number a. Then O_X(mL) is globally generated over U for all m ≫ 0.

The immediate application is the Contraction theorem for g-pairs, which fulfills the last part of [HL21a, Theorem 1.3] when M_X is not necessarily R-Cartier:

Theorem 1.5. Let (X, Δ, M)/U be a glc pair and R a (K_X + Δ + M_X)-negative extremal ray in NE(X/U). Then R is a rational extremal ray. In particular, there exists a projective morphism cont_R : X → Y over U satisfying the following.
• For any integral curve C such that the image of C in U is a point, cont_R(C) is a point if and only if [C] ∈ R.
• (cont_R)_* O_X = O_Y; in other words, cont_R is a contraction.
• Let L be a line bundle on X such that L · R = 0; then there exists a line bundle L_Y on Y such that L = cont_R^* L_Y.

The author has been told by Jihao Liu that Theorem 1.5, along with [LX22b, Theorem 1.1], allows one to run the MMP for glc pairs in the non-Q-factorial setting:

Theorem 1.6. Let (X, Δ, M)/U be a glc pair; then we can run a (K_X + Δ + M_X)-MMP over U.

This turns out to be useful for proving some expected semi-ampleness results, since log bigness is not preserved when pulling back to Q-factorial gdlt models. Indeed, N. Tsakanikas and I will pursue the following statement in a forthcoming paper:

Theorem 1.7. Let (X, Δ, M)/U be a glc pair and A an ample/U R-divisor such that (X, Δ + A, M) is also glc. Then we can run a (K_X + Δ + M_X + A)-MMP which terminates with a Mori fiber space or a good minimal model (not necessarily Q-factorial).

Since the proof of the main theorem relies on showing certain finiteness of relations, we will inevitably run into many technical issues, so we would like to give a sketch here to explain the core ideas of the proof.

Sketch of the proof of Theorem 1.1: By perturbing the generalized pair and applying Fujino's technique (cf. [Fuj11]), we can easily reduce the question to proving that L|_V is semi-ample, where V = Ngklt(X, Δ, M) is the non-gklt locus of (X, Δ, M) with the reduced scheme structure. The subtle point here is that the structure of V is somewhat complicated (e.g.,
V may not be equi-dimensional or irreducible), so it is usually very hard to tell when a line bundle on V should be semi-ample. However, V is actually semi-normal and has a good stratification structure (the glc stratification) coming from the glc centers (cf. [LX22b]), and if we consider some nicely chosen stratified morphism, for example the normalization π : V^n → V, then we can do subadjunction to V^n, and then by induction on the dimension we know that L|_{V^n} is semi-ample. Notice that V^n = ⊔ V_i is a disjoint union of irreducible normal varieties, so for each V_i, L|_{V_i} defines a contraction g_i : V_i → Z_i with a so-called glc crepant log structure (see Definition 2.5), which induces a glc stratification on Z_i.

In order to show that L|_V is semi-ample, we must first find the correct candidate morphism g : V → Z that will be defined by L|_V, using the information coming from the g_i and π. More precisely, we must consider the relation between Z_i and Z_j (i, j need not be distinct). After some extra effort, we can give a nice interpretation of the induced relation between the Z_i's by relating it to certain group actions on the strata induced by the stratification.

Fortunately for us, we have a powerful gluing theory introduced by Kollár, with the help of which we only need to show that the above relation generated by the g_i and π is finite in a suitable sense ([Kol13, Theorem 9.21]). Moreover, we only need to check that this holds on each glc center Z_{i,γ} ⊂ Z_i. This is essentially equivalent to showing that the stabilizer group stab(Z_{i,γ}) is finite. The relation given by π is always finite since π is a finite morphism, but the contractions g_i will create extra gluing information. A more careful computation shows that the extra relations essentially come from the different minimal glc centers on V_i that dominate the same glc center on Z_i.

For simplicity, we consider the case where there is only one g_i : V_i → Z_i and g_i|_{V_{i,α}} is also a contraction for any glc center V_{i,α} ⊂ V_i. Let V_{i,α_1} and V_{i,α_2} be two different minimal glc centers that dominate Z_i. Assume that the only gluing relation coming from π is given by an isomorphism

τ_{12} : V_{i,α_1} ≃ V_{i,α_2}.

Then we can see that τ_{12} does not generate any automorphism of V_{i,α_1} or V_{i,α_2}. However, τ_{12} induces an automorphism of Z_i, which may not be of finite order in general (see Example 1.8 below). Nevertheless, the log bigness in our assumption makes the situation much better behaved (see Theorem 3.2). We actually show that for any glc center Z_{i,γ} ⊂ Z_i, there is a unique minimal glc center V_{i,α} that controls all the relations concerning Z_{i,γ}. In particular, any automorphism of Z_{i,γ} coming from the relations will lift to an automorphism of V_{i,α}, which in turn is induced by π. Therefore the finiteness of relations between the V_i ensures the finiteness of relations between the Z_i, and we can obtain the geometric quotient Z as desired. Applying the same philosophy to the total space of the line bundle mL|_V over V for sufficiently divisible m, we will be able to find a line bundle H on Z such that g^*H = mL|_V. We can easily show that H is ample; hence L|_V is semi-ample and we are done.

The following example shows that the uniqueness of minimal glc centers is really necessary when using gluing theory to find the desired Z and H that correspond to the semi-ample L.

Example 1.8 ([LX22b, Example 4.15]). Let λ ∈ C^* and consider P^1 × A^1, which can be regarded as the total space of the trivial line bundle over P^1.
We define φ_λ : {0} × A^1 ≃ {∞} × A^1 by (0, t) ↦ (∞, λt) and glue {0} × A^1 and {∞} × A^1 together using φ_λ to get a demi-normal variety M with projection p : M → C, where C is a nodal cubic. Then M is the total space of a line bundle N on C. Moreover, N ∈ Pic^0(C) ≃ G_m = C^* and can be canonically identified with λ ∈ C^*.

(1) Let W := P_C(O_C ⊕ N) be a P^1-bundle over C, and let C′ ⊂ W be the section at infinity, which belongs to |O_W(1)|. Then the normalization is W^n = P^1 × P^1, and the extended isomorphism gives the gluing relation of π_W : W^n → W. Notice that K_W is Cartier since W is a locally complete intersection. Let L := K_W + 3C′; then the relation generated by π_W and p_2 is finite if and only if λ is a root of unity.

(2) Let π_C : P^1 → C be the normalization. Then π_C^*(N) ≃ O_{P^1}, and it defines g^n : P^1 → Spec C; then the gluing relation {0} ∼ {∞} from π_C gives no extra relation under g^n, and so we get the morphism g : C → Spec C. However, if we consider the total space M, then π_M : P^1 × A^1 → M is the normalization. Notice that P^1 × A^1 is also the total space of the trivial line bundle over P^1, so there is a canonical morphism g^n_M : P^1 × A^1 → A^1 between the total spaces of the corresponding line bundles, coming from g^{n,*} O_{Spec C} = O_{P^1}. Then φ_λ induces an automorphism of A^1, namely t ↦ λt. Thus the relation generated by π_M and g^n_M on A^1 is given by {t ∼ s | t = λ^l s for some l ∈ Z}, and it is finite if and only if λ is a root of unity.

Even if λ is a root of unity, say λ generates μ_n ⊂ G_m, we have A^1/μ_n ≃ A^1 and get g_M : M → A^1 as our desired morphism. However, if we look at the equivariant G_m-action under g_M, we see the action on A^1\{0} is the natural G_m-action on G_m/μ_n. This corresponds to the fact that N is not the pullback of a line bundle on Spec C. Actually, A^1\{0} with the above G_m-action is called a Seifert bundle ([Kol13, Definition 9.50]), and it becomes a line bundle if we replace N with nN.
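The finiteness assertion in (2) can be checked directly; the following display simply rewrites the orbit of the relation above, and the identification of the quotient is the standard computation of invariants (nothing beyond the statements already made is assumed):

\[
[t] \;=\; \{\, \lambda^{l} t \mid l \in \mathbb{Z} \,\} \subset \mathbb{A}^{1}, \qquad t \in \mathbb{A}^{1},
\]

so for \(t \neq 0\) the class \([t]\) is finite if and only if \(\lambda\) is a root of unity. If \(\lambda\) generates \(\mu_{n}\), then

\[
\mathbb{C}[t]^{\mu_{n}} = \mathbb{C}[t^{n}], \qquad \mathbb{A}^{1}/\mu_{n} \simeq \mathbb{A}^{1}, \quad t \mapsto t^{n},
\]

which recovers the quotient morphism g_M : M → A^1 described above.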
Acknowledgement. The author would like to thank his advisor Christopher D. Hacon for useful discussions and constant support. He would like to thank Jihao Liu for introducing the questions and giving useful comments. He would also like to thank Jingjun Han and Nikolaos Tsakanikas for giving useful comments. The author is partially supported by NSF research grants DMS-1801851 and DMS-1952522, and by a grant from the Simons Foundation (Award Number: 256202).

Preliminaries

We adopt the standard notation and definitions in [KM98, BCHM10] and will freely use them. We will first introduce the definition of generalized pairs using b-divisors. Then we will recall glc crepant log structures and the induced glc stratifications developed in [LX22b].

2.1. Generalized pairs. We will follow the original definitions in [BZ16] but will adopt the same notation as in [HL21a]. Notice that there are some small differences from the definitions in [HL21a]: in this paper, all generalized (sub-)pairs are assumed to be NQC.

Let X ⇢ X′ be a birational map. For any valuation ν over X, we define ν_{X′} to be the center of ν on X′. A b-divisor D over X is a formal sum D = Σ_ν r_ν ν, where the ν are valuations over X and r_ν ∈ R, such that ν_X is not a divisor except for finitely many ν. If in addition r_ν ∈ Q for every ν, then D is called a Q-b-divisor over X. Let X → U be a projective morphism and assume that D is a b-divisor over X such that D descends to some birational model Y over X. If D_Y is nef/U (resp. semi-ample/U), then we say that D is nef/U (resp. semi-ample/U). If D_Y is a Cartier divisor, then we say that D is b-Cartier. If D can be written as an R_{≥0}-linear combination of nef/U b-Cartier b-divisors, then we say that D is NQC/U.

Definition 2.2 (Generalized pairs). A generalized sub-pair (g-sub-pair for short) (X, Δ, M)/U consists of a normal quasi-projective variety X associated with a projective morphism X → U, an R-divisor Δ on X, and an NQC/U b-divisor M over X, such that K_X + Δ + M_X is R-Cartier. If in addition Δ ≥ 0, we call (X, Δ, M)/U a generalized pair (g-pair).

Definition 2.3 (Singularities of generalized pairs). Let (X, Δ, M)/U be a g-(sub-)pair. For any prime divisor E and R-divisor D on X, we define mult_E D to be the multiplicity of E along D. Let h : W → X be any log resolution of (X, Supp Δ) such that M descends to W, and let

K_W + Δ_W + M_W = h^*(K_X + Δ + M_X).

The log discrepancy of a prime divisor D on W with respect to (X, Δ, M) is 1 − mult_D Δ_W, and it is denoted by a(D, X, Δ, M).

We say that (X, Δ, M) is (sub-)glc (resp. (sub-)gklt) if a(D, X, Δ, M) ≥ 0 (resp. > 0) for every log resolution h : W → X as above and every prime divisor D on W. We say that (X, Δ, M) is gdlt if (X, Δ, M) is glc and there exists a closed subset V ⊂ X such that
(1) X\V is smooth and Δ|_{X\V} is simple normal crossing, and
(2) for any prime divisor E over X such that a(E, X, Δ, M) = 0, center_X E ⊄ V and center_X E\V is an lc center of (X\V, Δ|_{X\V}).

Suppose that (X, Δ, M) is sub-glc. A glc place of (X, Δ, M) is a prime divisor E over X such that a(E, X, Δ, M) = 0. A glc center of (X, Δ, M) is the center of a glc place of (X, Δ, M) on X. The non-gklt locus Ngklt(X, Δ, M) of (X, Δ, M) is the union of all glc centers of (X, Δ, M). If a Q-g-pair (X, Δ, M) is glc, then we call (X, Δ, M) a glc Q-pair for short.

We say that an R-Cartier divisor D is log big over U with respect to (X, Δ, M) if D is big over U and, for any generalized lc center T of (X, Δ, M)/U with normalization T^n → T, the pullback D|_{T^n} is big over U.

The following lemma is important for applying [Fuj11, Theorem 13.1] when we try to prove semi-ampleness by induction. We refer the reader to [Fuj11, Section 7] for the definitions of the non-lc ideal and the non-lc locus.

Proof. Let f : Y → X be a log resolution such that Exc(f) ∪ f^{-1}(Δ) is snc and M descends to Y. Let K_Y + Δ_Y + M_Y = f^*(K_X + Δ + M_X); then f^*L − (K_Y + Δ_Y + M_Y) is nef and big, so there is an effective R-divisor E on Y such that

f^*L − (K_Y + Δ_Y + M_Y) ∼_R A_n + E,

where A_n can be chosen to be a sufficiently general effective ample R-divisor. By perturbing A_n a little bit, we can also assume that ⌊Δ_Y⌋ ⊂ Supp E. Since (1/2)(L − K_X − Δ − M_X) is nef and big, there is an effective R-divisor E′ on X such that …

(1) There exists a unique element W ∈ S_z that is minimal with respect to inclusion.
(3) Any intersection of glc centers of f : (X, Δ, M) → Z is also a union of glc centers.

Definition 2.9 ([Kol13, Definition 9.15]). Let X be a scheme. A stratification of X is a decomposition of X into a finite disjoint union of reduced locally closed subschemes. We will consider stratifications where the strata have pure dimensions and are indexed by their dimensions. We write X = ∪_i S_i X, where S_i X ⊂ X is the i-dimensional stratum. Such a stratified scheme is denoted by (X, S_*). We also assume that ∪_{i≤j} S_i X is closed for every j. The boundary of (X, S_*) is the closed subscheme B(X, S_*) := ∪_{i<dim X} S_i X = X \ S_{dim X} X, and it is denoted by B(X) if the stratification S_* is clear. We call S_{dim X} X the open stratum. Let (X, S_*) and (Y, S_*) be stratified schemes.
We say that f :

Let (Y, S_*) be a stratified scheme and f : X → Y a quasi-finite morphism such that f^{-1}(S_i Y) has pure dimension i for every i. Then S_i X := f^{-1}(S_i Y) defines a stratification of X. We denote it by (X, f^{-1}S_*), and we say that f : X → (Y, S_*) is stratifiable.

Next we give a special stratification, induced by the glc crepant log structure. The stratification of Z induced by the S_i(Z) is called the generalized log canonical stratification (glc stratification for short) of Z induced by f : (X, Δ, M) → Z. Since this is the only stratification we are going to use in the rest of this paper, we usually will not emphasize the glc crepant structure f : (X, Δ, M) → Z, and we will denote the corresponding stratified scheme by (Z, S_*). The boundary of (Z, S_*) is the closed subspace B(Z, S_*) := ∪_{i<dim Z} S_i(Z).

Crepant log structure with log bigness

In this section we show that there are no P^1-links in a glc crepant log structure if some mild log bigness assumptions are imposed on the g-pair.

Lemma 3.1. Let f : (X, Δ, M) → Z be a glc crepant log structure and V a glc center on Z. Let W and W′ be two minimal glc centers on X that dominate V. Let W̃ ⊂ X be another glc center such that V ⊂ f(W̃) (we allow X itself to be a glc center when Z = V). Then the following hold:
(1) There exist glc centers W_0, W_1, …, W_n and Ŵ_1, …, Ŵ_n on X such that W = W_0 ⊂ Ŵ_1 ⊃ W_1 ⊂ ⋯ ⊂ Ŵ_n ⊃ W_n = W′ and f(W_i) = f(Ŵ_i) = V.
(2) There exists a glc center W″ ⊂ X such that W″ ⊂ W̃ and f(W″) = V; hence we can also choose such a W″ to be minimal.

Theorem 3.2. Let (X, Δ, M)/U be a glc pair, and L a Q-Cartier Q-divisor such that A := L − (K_X + Δ + M_X) is nef and log big/U with respect to (X, Δ, M). Assume that L is semi-ample and defines a projective contraction φ : X → Z over U. Then φ can also be regarded as a glc crepant log structure (X, Δ, Ā + M) → Z. Let V be any glc center on Z; then there exists a unique minimal glc center W on X such that
• W dominates V, or in other words, φ(W) = V;
• for any glc center W̃ on X such that V ⊂ φ(W̃), we have W ⊂ W̃.
Moreover, let W̃ ⊂ X be any glc center that dominates Z and W̃^n its normalization; then W̃^n → Z is a contraction, or equivalently, the Stein factorization of W̃^n → Z is trivial.

Proof. We use induction on the dimension. If dim X = 1, then φ is an isomorphism unless X = P^1, in which case the statements are also straightforward. By shrinking Z we can assume that V is the unique minimal glc center on Z. Let W ⊂ X be a minimal glc center over Z, which implies φ(W) = V by our assumption; then it suffices to prove that any other glc center W̃ ⊂ X intersects W, which is equivalent to W ⊂ W̃ since W is minimal. By Lemma 3.1(2) we only need to show this for those minimal W̃ with φ(W̃) = V.

We first claim that there are no glc centers W̃ and Ŵ such that … If there were such a W̃, then by Lemma 2.6 we could assume W̃ ⊂ Ŵ. Let Ŵ^n be the normalization of Ŵ; then by [HL21b, Theorem 1.2] there is a glc structure (Ŵ^n, Δ_{Ŵ^n}, M_{Ŵ^n}) induced by the subadjunction (K_X + Δ + M_X)|_{Ŵ^n} = K_{Ŵ^n} + Δ_{Ŵ^n} + M_{Ŵ^n}, and the glc centers on Ŵ^n come exactly from the pullbacks of the glc centers contained in Ŵ. Let Ŵ^n → V′ → V be the Stein factorization of Ŵ^n → V; then the glc centers dominating V′ are exactly those dominating V. Therefore by the induction hypothesis W̃ should contain W (after pullback to Ŵ^n), which is a contradiction.
Now by Lemma 3.1(1) we have glc centers W_0, W_1, …, W_n and Ŵ_1, …, Ŵ_n on X such that W = W_0 ⊂ Ŵ_1 ⊃ W_1 ⊂ ⋯ ⊂ Ŵ_n ⊃ W_n = W′ and f(W_i) = f(Ŵ_i) = V. Thus, by repeatedly using the claim above, we see that the W_i and Ŵ_i all contain W. Moreover, we can consider (K_X + Δ + M_X)|_F for the general fiber F of φ and apply the above statements to (F, Δ_F, M_F). Then we can see that W̃|_F is connected, since any irreducible component of it is a glc center, and these all contain a unique common minimal glc center. Therefore φ|_{W̃^n} : W̃^n → Z must be a contraction.

Actually, Theorem 3.2 holds in a much more general setting:

Theorem 3.3. Let (X, Δ_1, M_1)/U and (X, Δ_2, M_2)/U be two glc Q-pairs with exactly the same glc places (centers). Assume that … is log big/U with respect to (X, Δ_1, M_1). Furthermore, K_X + Δ_1 + M_{1,X} is semi-ample over U and defines a glc crepant log structure f : X → Z. Let V be any glc center on Z; then there exists a unique minimal glc center W on X such that
• W dominates V, or in other words, f(W) = V;
• for any glc center W̃ on X such that V ⊂ f(W̃), we have W ⊂ W̃.
Moreover, let W̃ ⊂ X be any glc center that dominates Z and W̃^n its normalization; then W̃^n → Z is a contraction, or equivalently, the Stein factorization of W̃^n → Z is trivial.

Proof. The assumptions are preserved under adjunction, so we can use induction on the dimension, and the proof is now exactly the same as that of Theorem 3.2.

Gluing relations on glc crepant log structures

Before giving the proof of Theorem 1.1, we need to make some preparations in order to describe the relations on the glc centers in a clear way. We will keep using the following notation throughout this section.

Let (X, Δ, M)/U be a glc pair and let f : (Y, Δ_Y, M) → (X, Δ, M) be a Q-factorial gdlt modification. Let (X, S_*) and (Y, S_*) be the natural glc stratifications. Let W := ⌊Δ_Y⌋ be the boundary B(Y) of (Y, S_*). Then W is demi-normal; let π : W^n → W be the normalization. Let D be the double locus on W^n, D^n the normalization of D, and τ̃ : D^n → D^n the induced involution. We see that τ̃ induces a gluing relation R(τ̃) on (W^n, S_*); this relation is finite and W^n/R(τ̃) = W. Then we have the contraction f_i : W_i → V_i for each irreducible component W_i of W^n.

For any glc center V_{i,α} ⊂ V_i, we define Ṽ_{i,α} := Spr(V_{i,α}, W_i) and let h_{i,α} : Ṽ_{i,α} → V_{i,α} be the corresponding finite morphism, which is stratified by Lemma 2.7. Let W_{i,α,k} ⊂ W_i be a minimal glc center that dominates V_{i,α}, and suppose that there is a gluing relation τ̃_{i,α,k,T} : W_{i,α,k} → T, where T is a glc center of W_j for some j. Let f_j(T) = V_{j,β}. The corresponding morphisms fit into a commutative diagram, from which we see that T is also a minimal glc center dominating V_{j,β}, since W_{i,α,k} is a minimal glc center dominating V_{i,α}. Hence τ̃_{i,α,k,T} induces an isomorphism τ_{i,α,k,T} : Ṽ_{i,α} → Ṽ_{j,β}. Notice that τ_{i,α,k,T} is an isomorphism on glc centers and gives a relation {h_{i,α}(x) ∼ h_{j,β}(τ_{i,α,k,T}(x)) | x ∈ Ṽ_{i,α}} between V_{i,α} and V_{j,β}.

If T′ ⊂ W_i is another glc center such that V_{i,α} ⊂ f_i(T′), with a gluing relation τ̃_{T′,T″} : T′ → T″, where T″ is a glc center on W_j, then by Lemma 3.1(2) there is a minimal glc center W_{i,α,k} ⊂ T′ that dominates V_{i,α}. Since τ̃_{T′,T″} is an isomorphism on glc centers, we get a gluing relation τ̃_{i,α,k,T} : W_{i,α,k} → T by restriction. Let f_j(T) = V_{j,β}. Then we can easily see that the induced relation between V_{i,α} and V_{j,β} is the same as the one given by the τ_{i,α,k,T} above.
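In displayed form, each gluing datum constructed above consists of an isomorphism together with the relation it generates (the notation is exactly that of the preceding paragraphs):

\[
\tau_{i,\alpha,k,T} \colon \widetilde{V}_{i,\alpha} \xrightarrow{\ \sim\ } \widetilde{V}_{j,\beta},
\qquad
\big\{\, h_{i,\alpha}(x) \sim h_{j,\beta}\big(\tau_{i,\alpha,k,T}(x)\big) \ \big|\ x \in \widetilde{V}_{i,\alpha} \,\big\}.
\]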
Therefore it suffices to consider all possible τ_{i,α,k,T}, and we let R(τ) be the induced pro-finite relation on ⊔ V_i. Then R(τ) reflects the relation generated by τ̃ and the f_i. Notice that W^n/R(τ̃) = W and that all the f_i come from f. Then it is not hard to see that the pro-finite relation R(τ) is actually a finite relation (by looking at the geometric points of a fiber) and that f(W) = ⊔ V_i/R(τ) is the geometric quotient (see for example [HX13, Proposition 3.12]).

Theorem 4.1. Let (X, Δ, M)/U be a glc Q-pair, and L a nef Q-divisor such that A := L − (K_X + Δ + M_X) is nef and log big/U with respect to (X, Δ, M). Then L is semi-ample over U.

Proof. We use the notation above. By induction on the dimension we can assume that L|_{V_i} is semi-ample and defines a contraction g_i : V_i → Z_i. Then for any glc center Z_{i,γ} ⊂ Z_i, we define Z̃_{i,γ} := Spr(Z_{i,γ}, W_i) and let p_{i,γ} : Z̃_{i,γ} → Z_{i,γ} be the canonical finite stratified morphism. By Theorem 3.2, there exists a unique minimal glc center V_{i,α} ⊂ V_i that dominates Z_{i,γ}, and these morphisms fit into a commutative diagram inducing g̃_{i,γ} : Ṽ_{i,α} → Z̃_{i,γ}. Then it is easy to see that g̃_{i,γ} is a contraction by considering the gdlt crepant log structure W_i → Z_i.

Assume that there is a gluing relation τ_{i,α,k,T} : Ṽ_{i,α} → Ṽ_{j,β}. Since the pullback of L|_W from W is τ_{i,α,k,T}-invariant, V_{j,β} ⊂ V_j is also the unique minimal glc center that dominates Z_{j,θ} = g_j(V_{j,β}). Hence we have an induced gluing relation σ_{i,α,k,T} : Z̃_{i,γ} → Z̃_{j,θ}, which gives a relation between Z_{i,γ} and Z_{j,θ}. As we have discussed above, the relation induced by the g_i and R(τ) is generated by all the σ_{i,α,k,T}, which we denote by R(σ).

Now we want to show that R(σ) is a finite relation. It suffices to check this on each open stratum of Z_{i,γ}. Notice that any relation on the open stratum of Z̃_{i,γ} itself is induced by a composition of the form σ_{i_0,γ_0,i_1,γ_1} ∘ ⋯ ∘ σ_{i_{n−1},γ_{n−1},i_n,γ_n}, where σ_{i_l,γ_l,i_{l+1},γ_{l+1}} : Z̃_{i_l,γ_l} → Z̃_{i_{l+1},γ_{l+1}} is the isomorphism induced by some τ_{i,α,k,T} in R(τ) and (i_0, γ_0) = (i_n, γ_n) = (i, γ). By Theorem 3.2, for each Z_{i_l,γ_l} there is a unique minimal glc center V_{i_l,α_l} ⊂ V_{i_l} that dominates Z_{i_l,γ_l}. Therefore this relation lifts to a relation τ_{i_0,α_0,i_1,α_1} ∘ ⋯ ∘ τ_{i_{n−1},α_{n−1},i_n,α_n} in R(τ). Since the relation R(τ) is finite, so is R(τ)|_{Ṽ_{i,α}}; thus the relation R(σ) on the open stratum of Z̃_{i,γ} is also finite. By the construction we can then see that R(σ) is actually a finite, set-theoretic, stratified equivalence relation (cf. [LX22b, Lemma 4.14]). Therefore by [Kol13, Theorem 9.21] we know that the geometric quotient Z := ⊔ Z_i/R(σ) exists and there is a morphism g : f(W) → Z over U.

Let m be sufficiently divisible, so that mL is Cartier and M_i := mL|_{V_i} defines g_i : V_i → Z_i for each i. Then, as in [LX22b, Construction 4.13], we can consider the total spaces V^{M_i} of the line bundles M_i over V_i, and the total spaces Z^{H_i} of the line bundles H_i over Z_i, where H_i is very ample and g_i^* H_i = M_i. Similarly, we can define the corresponding relation R(τ^M) on ⊔ V^{M_i} and the induced relation R(σ^H) on ⊔ Z^{H_i}. Then, by applying the same arguments as above, we can deduce that R(σ^H) is also a finite, stratified equivalence relation. Possibly after replacing m by a multiple, the geometric quotient ⊔ Z^{H_i}/R(σ^H) exists by [Kol13, Theorem 9.21] and is the total space of an ample line bundle H_Z over Z, where g^* H_Z = mL|_{f(W)}. Therefore L|_{f(W)} is semi-ample over U and we are done.
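For later use, the final step of the proof can be recorded schematically; here \(\operatorname{Tot}(-)\) denotes the total space of a line bundle, a shorthand introduced only for this display:

\[
\Big(\bigsqcup_{i} V^{M_{i}}\Big)\Big/R(\tau^{M}) \;=\; \operatorname{Tot}\!\big(mL|_{f(W)}\big),
\qquad
\Big(\bigsqcup_{i} Z^{H_{i}}\Big)\Big/R(\sigma^{H}) \;=\; \operatorname{Tot}(H_{Z}),
\qquad
g^{*}H_{Z} = mL|_{f(W)}.
\]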
We will use the notation from the proof of Theorem 4.1, by which we know that L is semi-ample over U. By the induction hypothesis, we know that for any m ≫ 0, mL|_{V_i} induces the contraction g_i : V_i → Z_i; thus L_i := L|_{V_i} = g_i^* K_i for some ample line bundle K_i on Z_i. Let V^{L_i} (resp. Z^{K_i}) be the total space of the line bundle L_i (resp. K_i) over V_i (resp. Z_i), and let R(τ^L) (resp. R(σ^K)) be the corresponding gluing relation. Then ⊔ V^{L_i}/R(τ^L) is just the total space of the line bundle L|_V over V. Since every relation in R(σ^K) lifts to a relation in R(τ^L), it follows that ⊔ Z^{K_i}/R(σ^K) is also the total space of an ample line bundle H_Z over Z, where g^* H_Z = L|_V. Therefore mH_Z is very ample for m ≫ 0 by Serre vanishing and Castelnuovo-Mumford regularity (cf. [Laz04, 1.8.22]), which shows that mL|_V = g^*(mH_Z) is globally generated for all m ≫ 0.

… such that H is ample and E ≥ 0 contains no glc center of (X, Δ + A, M). Then for any sufficiently small ε > 0, (X, Δ + A + εE, M) is still glc and its glc centers are
Ciliopathy patient variants reveal organelle-specific functions for TUBB4B in axonemal microtubules

Tubulin, one of the most abundant cytoskeletal building blocks, has numerous isotypes in metazoans encoded by different conserved genes. Whether these distinct isotypes form cell-type- and context-specific microtubule structures is poorly understood. Based on a cohort of 12 patients with primary ciliary dyskinesia, as well as mouse mutants, we identified and characterized variants in the TUBB4B isotype that specifically perturbed centriole and cilium biogenesis. Distinct TUBB4B variants differentially affected microtubule dynamics and cilia formation in a dominant-negative manner. Structure-function studies revealed that different TUBB4B variants disrupted distinct tubulin interfaces, thereby enabling stratification of patients into three classes of ciliopathic diseases. These findings illustrate that specific tubulin isotypes have unique and non-redundant subcellular functions and establish a link between tubulinopathies and ciliopathies.

… types and developmental stages. Different genes, each encoding a distinct tubulin isotype, provide metazoans with discrete transcriptional modules to meet shifting demand for tubulin subunits across development and cell type (1-3). Mutations in genes encoding tubulins cause tubulinopathies, a broad spectrum of predominantly neurological conditions and neurodegenerative disorders (4). Differences in coding sequence between isotypes within a species could alter the physical properties of the microtubules they are incorporated into, thereby supporting functional specialization (5-8). This idea underlies the proposed concept of a 'tubulin code,' in which expression of a given set of isotypes, combined with specific post-translational modifications, could dictate the stability and mechanical properties of the microtubule lattices they form (3).

Whether biological context-specific subcellular or organelle-specific lattices actually exist remains to be clearly demonstrated. One area of exploration is within subcellular compartments that exhibit distinct microtubule architecture. The cilium is one such subcellular compartment. Cilia are microtubule-based organelles essential to embryonic development and required postnatally across critical physiological processes, including vision, hearing, olfaction, respiration, excretion and reproduction. Although different cilia types may vary in their final structure, function, size and number, they share certain conserved elements. In all cilia, microtubules are arranged into an axoneme, an axial structure consisting of nine microtubule doublets. This arrangement is templated by the basal body (a modified centriole), in which microtubules are organized into nine triplets. Tubulin heterodimers are polymerized into protofilaments that are radially interlinked within the microtubule doublets and lengthened longitudinally at the cilia tip. Unlike other cytoskeletal networks in a cell, the microtubules of axonemes are comparatively stable (9, 10), particularly in motile cilia, which power fluid flow and thus require considerable mechanical strength to permit constant motor-driven bending (11, 12). While conventional transmission electron microscopy (TEM) suggests that these microtubule-based structural elements are similar across basal bodies and axonemes, it remains unknown whether axoneme assembly and function utilize specific tubulin isotypes.
Mutations in over 200 genes that affect cilia structure and/or function result in over 40 conditions termed ciliopathies (13, 14). These can be roughly divided into sensory and motile ciliopathies. Sensory ciliopathies result from impaired signaling functions of non-motile primary cilia. They are associated with a spectrum of diseases, ranging from lethal multiorgan syndromes to non-syndromic forms like retinal dystrophy, which impact only a specific organ. Motile ciliopathies affect the ability of motile cilia to generate effective fluid flow (15). This results in heterogeneous clinical manifestations, which include defects in airway mucociliary clearance and hydrocephaly from accumulation of cerebrospinal fluid (CSF) in brain ventricles. The molecular basis for the clinical heterogeneity observed amongst ciliopathy patients, even within one type of condition, remains unclear.

Primary ciliary dyskinesia (PCD, OMIM: PS244400) is a motile ciliopathy affecting the structure and function of the motile cilia that line the airways, the brain ependyma, the reproductive tracts and the transient embryonic node. In patients with PCD, these cilia are static, beat in an uncoordinated manner, or are completely absent (ciliary agenesis). These ciliary defects can result in chronic respiratory disease, owing to impaired mucociliary clearance, as well as laterality defects, hydrocephaly and infertility in a subset of patients (16, 17). Syndromic PCD (i.e., with additional sensory ciliopathy features) is very rare (18, 19). PCD patients almost exclusively present with respiratory features, with or without involvement of other motile ciliated tissues. Most PCD cases are recessively inherited, owing to variants in ~50 genes (20). However, mutations in these genes only account for ~70% of PCD cases, indicating that additional unidentified causal genes likely exist.

Here, we identified mutations in the β-tubulin isotype TUBB4B associated with a subgroup of PCD. These mutations impacted different functional interfaces of the tubulin protein and resulted in distinct presentations of ciliopathic disease, where some patients showed only PCD phenotypes and others exhibited a syndromic ciliopathy. Through computational analysis, cell-based structure-function analysis, and mouse knockout studies, we defined the role of TUBB4B in ciliary function and the effect of disease-associated mutations on microtubule dynamics and axoneme assembly.

Identification of de novo heterozygous TUBB4B variants in PCD cases

To examine the molecular basis of PCD in a cohort of 8 clinically diagnosed patients, we undertook trio whole genome sequencing (WGS) (21). We identified by ultrastructural analysis a patient, P1 (HG-003), exhibiting ciliary agenesis, sometimes referred to as reduced generation of multiple motile cilia (RGMC), a specific subtype of PCD. This patient also had shunted hydrocephalus. Hydrocephalus in human patients with PCD is rare but occurs most commonly through recessive inheritance of variants in genes associated with the RGMC phenotype, such as CCNO and MCIDAS, or through heterozygous dominant de novo mutations in the master motile ciliogenesis transcriptional regulator FOXJ1. However, no pathogenic or potentially pathogenic variants have been identified in any of these genes (22-24) or in other known PCD genes.
PCD is largely inherited in an autosomal recessive manner, except for one recent example of autosomal dominant inheritance (FOXJ1) and a few cases of X-linked recessive inheritance (RPGR, PIH1D3, OFD1) (20). Unlike the other seven patients, whom we molecularly diagnosed as carrying biallelic variants in known PCD genes, we found that patient P1 carried a de novo missense mutation, p.P259L (chr9:g.137242994:C>T (hg38)), in the TUBB4B gene encoding the β-tubulin 4B isotype (Fig. S1A). We then identified an additional cohort of eleven unrelated PCD patients with heterozygous, often recurrent, variants in TUBB4B. Of these, five patients carried p.P259L, one patient carried p.P259S (chr9:g.137242993:C>T (hg38)), one patient carried an in-frame ten-amino-acid duplication p.F242_R251dup (chr9:g.137242941_137242970dup (hg38)) and four patients carried p.P358S (chr9:g.137243290:C>T (hg38)) (Fig. 1A-C, Fig. S1). Common clinical features of airway disease, including bronchiectasis, were observed across the cohort (Fig. 1D,E). In addition, 6/12 patients exhibited the less commonly associated feature of hydrocephaly (Fig. 1F,G, Tables S1, S2). Laterality defects were uncommon, observed in only 1/12 patients. 8/12 patients presented with PCD only (PCD-only group: p.P259L, p.P259S, p.F242_R251dup). In comparison, the four patients with the p.P358S substitution also presented with Leber congenital amaurosis (LCA) associated with sensorineural hearing loss (SNHL), a syndromic phenotype (PCD+SND group). Two of these four p.P358S patients also exhibited renal defects (RD), congenital heart defects (CHD) or skeletal growth defects (SD), suggesting defects in several additional ciliated tissues. These phenotypes were all distinct from the sensory-neural disease (SND-only) previously reported to be linked to the recurrent TUBB4B missense variants p.R391H or p.R391C across four unrelated families (25). Those patients were characterized by early-onset and severe retinal dystrophy (EOSRD/LCA) associated with sensorineural hearing loss (SNHL). Importantly, no rhinopulmonary features characteristic of PCD airway dysfunction were reported for these original four families. Taken together, these findings suggested that dominant mutations in TUBB4B can cause three distinct and separate clinical presentations: a solely motile ciliopathy (PCD-only), a solely sensory ciliopathy (SND-only) and a syndromic form impacting both motile and sensory cilia (PCD+SND).

TUBB4B mutations disrupted cilia and centrosomes in patient respiratory cells

Regardless of genotype, we observed similar cellular phenotypes in respiratory epithelial cells derived from TUBB4B PCD patients. These phenotypes included reduced numbers of apically docked basal bodies, basal bodies that failed to extend an axoneme (Fig. 1H-L, Fig. S2A-D), and incomplete centriole microtubule triplets (Fig. 1K,M, Fig. S2C'-F). Axonemes that did extend were short and had bulbous tips displaying disrupted and misoriented microtubules (Fig. 1O-S, Fig. S2G-J', Movie S1). To confirm the ciliary agenesis phenotype, we expanded and differentiated control and patient respiratory epithelial cultures from nasal brushings. Patient cells in culture recapitulated the poor ciliation and reduced number of basal bodies (Fig. 1T, Fig. S2K).
PCD is most commonly caused by mutations that disrupt the expression and assembly of the axonemal dynein motors that power ciliary beating (20, 26). By immunofluorescence, we observed in TUBB4B patient cells mislocalization of dynein motors either to the cytoplasm or to the apical region where cilia should have formed (Fig. 1U,V, Fig. S2L). Furthermore, acetylated α-tubulin, which normally marks axonemes, appeared as cytoplasmic aggregates (Fig. 1U,V, Fig. S2K, arrowheads). These results suggested that axonemal motors were still produced, even in the absence of axonemes. Indeed, the rare axonemes that did form had inner and outer dynein arms (Fig. 1J,N), albeit with reduced motility (Fig. 1W, Movies S2-S6). Post-translational modifications of tubulin are a key part of the tubulin code and are normally common on ciliary microtubules. In contrast, patient cells showed alterations of these marks on the rare cilia observed, with apical cytoplasmic accumulations (Fig. 1U,V, Fig. S2K-M). These defects in centriole amplification, axoneme extension, and tubulin modification may underlie the defects in mucociliary clearance observed in patients.

TUBB4B is essential for motile cilia assembly in specific tissues

To investigate the requirement for TUBB4B in vivo, we generated Tubb4b-/- homozygous protein-null mice (Fig. S3A-C). Tubb4b-/- mice were born at Mendelian ratios but exhibited perinatal lethality with runting (Fig. 2A-C) and hydrocephaly (Fig. 2D), both features associated with motile cilia dysfunction in murine models. Consistent with the lack of prominent laterality defects amongst our TUBB4B patient cohort (1/12 patients with dextrocardia), we did not observe any left-right patterning defects in the mice. We observed defects in spermatogenesis in surviving males (Fig. 2G) and defects in oviduct multiciliated cells, which exhibited reduced cilia lengths (Fig. S4A-C). The lack of overt skeletal or growth phenotypes at birth in Tubb4b-/- neonates (Fig. 2B) suggested that TUBB4B is not required for embryonic development, where primary cilia play key roles. Indeed, Tubb4b-/- cilia in primary fibroblasts showed normal numbers and lengths (Fig. S3D-F).

To examine the effects of mutations on motile cilia function, we first evaluated the hydrocephalus phenotype. This phenotype can be caused by defects in motile cilia on ependymal cells, which generate CSF flow. Given that 75% of the PCD-only cohort of TUBB4B patients also had hydrocephaly, we expected to see defects in the multiciliated ependymal cells lining the ventricles. However, although Tubb4b-/- mice exhibited pronounced and progressive dilatations of the ventricles neonatally, without obstruction of the aqueducts, suggesting communicating hydrocephalus (Fig. 2D), motile cilia on ependymal cells exhibited grossly normal lengths and densities (Fig. 2F, Fig. S4D-F). Instead, we observed profound reductions in cilia number and length in choroid plexus cells involved in CSF secretion and regulation (Fig. 2E). Ependymal cilia further examined ex vivo (Fig. S4G-I) confirmed that cilia numbers and lengths were grossly normal, and also showed that there was no significant difference in ciliary beat pattern or frequency. These data emphasized that, despite the similarities in the molecular cascades driving multiciliogenesis between tissue types in mammals, lack of TUBB4B did not cause overt ependymal ciliary defects as it does in the adjacent choroid plexus epithelial cells.
We also observed defects in both the number and length of Tubb4b-/- tracheal cilia (Fig. 2H-L, Fig. S3G). Tubb4b-/- centrioles also failed to amplify and exhibited partially formed microtubule triplets. Despite these defects, some fully formed basal bodies managed to dock and extend rare, stumpy axonemes (Fig. 2L-T). These phenotypes (25) have been confirmed by a recent publication on an additional Tubb4b allele (27). On examining serial sections, we observed axonemal defects including the loss or duplication of central pairs, loss of microtubule doublets and microtubule disorganization arising at or just proximal to the transition zone (Fig. S5). Notably, in the absence of TUBB4B, other cytoskeletal processes looked grossly normal, including apical-basal patterning in the pseudostratified epithelium. These data suggested a unique role for TUBB4B as a critical 'limiting component' specific for organelle size control and scaling in airway epithelial cell cilia.

We performed proteomic analysis of wild-type tracheal cultures at different timepoints across airway epithelial differentiation (e.g., air-liquid interface day 4 (ALI4): centriole amplification; ALI10: early ciliogenesis) and observed that multiple different β-tubulin isotypes were expressed during the process, as indicated by unique peptide reads (Fig. 2U). Together with our knockout data, this suggested that although other β-tubulins in multiciliated airway cells are expressed during the developmental timepoints when centrioles and cilia are built, they are non-redundant with TUBB4B. TUBB4B therefore fulfils a unique role in ciliogenesis and is essential for the formation of multiple motile cilia in the respiratory epithelium. These data further support the idea that TUBB4B is a cilia-specific tubulin.

TUBB4B variants differentially impacted microtubule dynamics and tubulin heterodimer formation

In order to understand how different TUBB4B mutations might affect microtubule dynamics and ciliation, we transiently overexpressed human FLAG-tagged wild-type and disease-associated variants of TUBB4B in RPE-1 cells (Fig. 3, Fig. S6). PCD-only TUBB4B variants (p.P259L/S) failed to colocalize strongly to microtubules (Fig. 3A,B), and the PCD+SND syndromic variant (p.P358S) showed reduced colocalization. In contrast, microtubule localization was minimally affected for the SND-only variants (p.R391H/C).

Under serum-starvation conditions to induce ciliogenesis, we also examined the effects of TUBB4B variants on cilia length and number (Fig. 3C-E). We measured the kinetics of microtubule depolymerization in cells expressing these different TUBB4B variants by tracking the number and lengths of microtubules bound to the end-binding protein EB1 after cold shock followed by repolymerization (Fig. S6). PCD-only TUBB4B variants (p.P259L/S), which showed low incorporation into microtubules, including those of the centrosome, had no observable effects on cytoplasmic microtubule dynamics (Fig. 3F-H) but profoundly decreased cilia number and length (Fig. 3C-E). In contrast, the syndromic PCD+SND TUBB4B variant (p.P358S) localized to centrosomes upon repolymerization but strongly impeded the number and length of repolymerizing cytoplasmic microtubules, as well as decreasing the number and length of cilia (Fig. 3C-H). The SND-only variants (p.R391H/C) showed intermediate effects on microtubule dynamics, and only modestly affected the length of primary cilia (Fig. 3C-H).
Overexpression of wild-type TUBB4B only slightly increased cilia length without disrupting rates of ciliation or microtubule dynamics, suggesting that the effects observed for the variants are unlikely to be caused by overexpression alone (Fig. 3C-H). Together, these findings suggested that each variant acts to disrupt microtubule biology via differing mechanisms.

The p.P358S variant, however, did not show disrupted binding to α-tubulin (Fig. 3I,J, Fig. S7C). To further determine how p.P358S impacts tubulin function, we purified recombinant human TUBB4B co-expressed with TUBA1A in vitro. Both wild-type and p.P358S TUBB4B formed heterodimers with TUBA1A robustly in this system (Fig. S8A-C). In the presence of a slowly hydrolyzable GTP analogue, guanylyl-(α,β)-methylene-diphosphonate (GMPCPP), wild-type TUBA1A-TUBB4B heterodimers polymerized into micrometer-long microtubules. However, p.P358S mutant TUBB4B required both GMPCPP and taxol, which stabilizes microtubules, to form microtubules. This indicated that p.P358S-containing TUBA1A-TUBB4B heterodimers were capable of forming a microtubule lattice but required a higher critical concentration to do so (Fig. S8D). To test the effects of p.P358S on microtubule dynamics, we undertook total internal reflection fluorescence (TIRF) videomicroscopy on wild-type seed microtubules with varying ratios of isotypically pure TUBA1A-TUBB4B tubulin heterodimers containing wild-type and p.P358S variants (Fig. 3L-Q). Mutant TUBB4B potently inhibited wild-type microtubule dynamics in a dose-dependent manner in vitro, similar to what we observed in our cellular experiments (Fig. 3F,G). We observed significant decreases in the growth characteristics of p.P358S-containing microtubules (i.e. polymerization rate (Fig. 3M), growth (Fig. 3N) and nucleation frequency (Fig. 3O)) and increases in the duration of pauses during polymerization (Fig. 3Q), i.e. events without sustained microtubule growth or shrinkage.

Together these findings demonstrated how PCD-causing TUBB4B mutations disturbed centriole number and axoneme size by disrupting heterodimer assembly (PCD-only variants) or disrupting polymerization (PCD+SND) in an organelle-specific manner.

PCD-associated mutations have a dominant-negative effect in mouse and patient cell models

If, as proposed, certain mutant TUBB4B variants can act in a dominant-negative manner in vivo, we would expect to see different phenotypes in heterozygous mice carrying single patient Tubb4b mutations versus null mutations (i.e. haploinsufficiency). We therefore used CRISPR-Cas9-mediated genome editing to engineer into mice the Tubb4b patient variants carried in PCD-only (p.P259L, p.P259S), syndromic PCD+SND (p.P358S) and SND-only (p.R391H) patients, as well as deletion alleles (Fig. 4A, Fig. S3A, Fig. S10A). While animals heterozygous for the two null Tubb4b alleles described above (Fig. 2, Fig. S3) exhibited normal neonatal survival and growth (Fig. S9D), with no reduction in airway cilia length (Fig. S9A,B) or fertility defects (Fig. S9C), founder mice carrying PCD-causing mutations exhibited increased postnatal lethality (Fig. 4B). They developed pronounced hydrocephaly neonatally (Fig. 4C,D,D') and defects in mucociliary clearance within the upper airways (Fig. 4F,F'), with loss of multicilia throughout the respiratory epithelium (Fig. 4G,H).
Moreover, we were unable to transmit any of the PCD variants because surviving founders exhibited both male and female infertility (Fig. 4E). These mice therefore phenocopy PCD patients.

In contrast, we were able to generate a mouse line carrying the SND-only Tubb4b R391H/+ variant (Fig. S10A,B), although males remained infertile due to defects in spermatogenesis (Fig. 4I). Tubb4b R391H/+ mice did not develop any retinal degeneration (Fig. S10C-E), even when aged (Fig. S10E). We observed a significant (20%) reduction in airway cilia length (Fig. 4J-L) in Tubb4b R391H/+ mice, indicating a dominant effect, as we could confirm that TUBB4B protein levels were identical between control and p.R391H/+ littermates in vivo (Fig. 4M). However, SND TUBB4B mutations in mice do not recapitulate the phenotypes of human SND patients.

We further examined the effects of the disease variants on tubulin autoregulation in human airway nasal epithelial cultures by carrying out proteomic and transcriptomic analyses on lysates from healthy donors and patients carrying either the p.P259L or the p.P358S variant. Comparable levels of TUBB4B protein were detected between controls and patients (Fig. 4N), further suggesting that haploinsufficiency is not the disease mechanism. However, bulk RNA sequencing (RNASeq) of the samples from these two patients and control donors revealed distinct molecular signatures. Only the p.P259L patient samples displayed a twofold increase in TUBB4B mRNA and a concomitant increase in mRNAs encoding TUBA1A and TPPP3, a microtubule-polymerizing protein (Fig. 4O). This is consistent with the expectation that a variant disrupting α/β-tubulin heterodimer assembly would also impact the tight regulatory feedback in cells that ensures an appropriate balance of α and β subunits (28, 29). In keeping with this concept, we also observed upregulation of the mRNAs encoding the TBCA and TBCB tubulin chaperones, which bind and stabilize nascent β-tubulin and α-tubulin protein, respectively (Fig. 4P). These tubulin autoregulation signatures were not observed in the syndromic PCD+SND p.P358S samples, consistent with our observation that this variant does not disrupt tubulin heterodimer assembly (Fig. 3J) but rather exerts downstream dominant effects on microtubule dynamics (Fig. 3L).

Together these data suggested that these disease-causing variants are acting via non-loss-of-function mechanisms, i.e. through dominant-negative or gain-of-function effects.
Indeed, it is difficult to distinguish between these two possibilities (30), particularly for tubulins (31). Despite a decreased intrinsic propensity for the PCD-only variants to assemble into microtubules, transcriptional upregulation of TUBB4B itself and its chaperones still produces mutant TUBB4B protein. This mutant protein could exert dominant-negative effects over wild-type TUBB4B by competing for tubulin chaperones. In contrast, incorporation of the p.P358S variant appeared to have assembly-mediated dominant-negative effects over wild-type TUBB4B-containing microtubule lattices. Here, the variant poisoned microtubule dynamic properties in a dose-dependent manner. Overall, these results support distinct dominant-negative modes of action of TUBB4B mutations in each ciliopathy subtype. In the case of mice carrying PCD-causing Tubb4b mutations, these models phenocopied many patient features, at both cellular and physiological levels, consistent with the mutant variants acting in a dominant manner to disrupt centriolar and ciliary microtubules.

TUBB4B mutations differentially localized across tubulin surfaces according to clinical phenotype

Although TUBB4B is widely expressed, our mouse knockout studies indicated an essential and non-redundant function for this β-tubulin isotype in building airway cilia. In order to understand why, we undertook a structural approach, reprocessing cryo-electron microscopy (cryo-EM) data of the human microtubule doublet isolated from the axonemes of airway multiciliated cells (32) to determine a structure of the tubulin heterodimer at 2.8-Å resolution (Fig. 5A). Within this reconstruction, we could assign both α- and β-tubulin isotypes based on their sidechain density. After evaluating each residue of the candidate β-tubulin isotypes, we determined TUBB4B to be the best fit to the density map and thus likely the predominant isotype incorporated into airway cilia axonemes in vivo (Fig. S11-S14). Thus, structural analysis further confirmed that TUBB4B is a cilia-specific tubulin, despite the expression of many other β-tubulin isotypes within this cell type.

The sites of mutation in the TUBB4B variants associated with the three different phenotypic classes of disease (SND-only, PCD-only or PCD+SND) were differentially distributed across the structure of the protein, both within and between tubulin heterodimers (Fig. 5B, Table S3). The previously reported SND-only TUBB4B variants p.R391H and p.R391C (25) localized to the interface between adjacent tubulin heterodimers (Fig. 5C). The SND p.R391H/C mutations were moderately destabilizing to the protein but were predicted to more strongly impact longitudinal interactions with the adjacent α-tubulin in neighboring heterodimers. Indeed, recent cryo-EM maps reveal an interaction between the α-tubulin C-terminal tail that links adjacent dimers and two conserved arginine residues (R391, R392) on β-tubulin, stabilizing the microtubule filament (33). Moreover, several other pathogenic missense mutations have been reported, mostly in neurodegenerative disorders, at this position in other β-tubulin isotypes, including TUBB4A p.R391H/L, TUBB3 p.R391L, TUBB2A p.R391H and TUBB8 p.R391C, where these mutations were predicted to disrupt microtubule stability (31, 34).
The PCD-only group of variants (p.P259L/S, p.F242_R251dup) localized to the intradimer interface, the interface between the α- and β-subunits of a tubulin heterodimer (Fig. 5D,E); mutations at these positions have not previously been reported in association with human disease. Both missense mutations at P259 were predicted to destabilize the protein itself but were more likely to affect the interface with α-tubulin (Table S3). A similar disruption of this intradimer binding interface by p.F242_R251dup was expected, although the effects of an insertion mutation on protein stability are more challenging to predict.

The PCD+SND syndromic variant (p.P358S) was located within the intralumenal face of the tubulin heterodimer (the side facing into the microtubule lumen), close to the binding site of the anti-tumor drug taxol (Fig. 5F). This site promotes lateral aggregation of taxol-bound protofilaments into stabilized microtubules (35). This intralumenal position is also known to interact with many microtubule inner proteins (MIPs) within cilia axonemes (36). The p.P358S mutation was predicted to be destabilizing and could also disrupt TUBB4B interactions at the intralumenal side of protofilaments, potentially with MIPs, or lateral interactions between protofilaments. p.P358L/A/S mutations have also been reported in TUBB8, where they are associated with female infertility (37).

Our combined analysis showed that different TUBB4B mutations disrupt distinct molecular surfaces of β-tubulin, which in turn disturb different aspects of tubulin function and result in different ciliopathic disease phenotypes. We propose that how these mutations impact tubulin heterodimers and their assembly into higher-order structures within cilia and centrioles dictates whether patients present with purely motile ciliopathy features, purely sensory ciliopathy features or a syndromic form affecting both cilia types.

TUBB4B is an organelle-specific isotype localized to centrioles and cilia

While our structural analysis confirmed TUBB4B to be the predominant β-tubulin isotype in motile cilia axonemes, it remained unclear whether TUBB4B also contributes more broadly to other microtubules in cells where it is expressed, or whether organelle-specific microtubule lattices exist.

To rule out a general tubulin deficiency in Tubb4b mutants, we examined transcriptomic and proteomic expression of all β-tubulin isotypes in neonatal tracheas (Fig. 6A,B). Increased protein levels of the highly similar TUBB5 and TUBB4A isotypes in Tubb4b -/- tissue meant that overall β-tubulin levels were not significantly changed. This suggests that the phenotypes were due to lack of function of a specific, and non-redundant, isotype required for building cilia.

We hypothesized that the phenotypic sensitivity of a given tissue (and cilia type) to TUBB4B loss would reflect the relative ratios of TUBB4B to other isotypes locally available. To test this, we isolated neonatal trachea (affected by TUBB4B loss) and ependyma (not affected by TUBB4B loss) from Tubb4b +/+ and Tubb4b ALFA/+ littermates and performed quantitative immunofluorescence (Fig. 6G,H). We observed a ten-fold difference in the axonemal content of TUBB4B between these motile cilia types.
Together, these results demonstrate that although TUBB4B is expressed in multiple tissues and preferentially localized to centrioles and cilia, cilia in different cell types are composed of different ratios of β-tubulin isotypes and are thus differentially sensitive to TUBB4B loss. This supported a model in which, in tissues where TUBB4B is not inherently highly represented in cilia, such as MEFs or ependyma, other tubulin isotypes can compensate in the absence of TUBB4B. However, these tissues could still be impacted by variants that integrate into microtubules to exert a dominant-negative effect, such as the syndromic PCD variant p.P358S (Fig. S15), which abrogates microtubule dynamics.

Discussion

Given that cilia are by definition microtubule-based organelles, it is perhaps surprising that mutations in tubulin genes have not previously been observed to be associated with ciliopathies. This is likely due to a high level of redundancy amongst tubulin isotypes capable of building cilia. However, our human disease and mouse genetic data now uncover a specific requirement for TUBB4B in the construction and function of ciliary axonemes in specific tissues. We found that disease-causing TUBB4B variants can act in a dominant manner to cause a spectrum of ciliopathic diseases. The locations of these mutations across the β-tubulin protein result in different effects on tubulin heterodimer assembly and polymerization into the higher-order structures of microtubule doublets and triplets, ultimately impacting organelle number and axoneme size. These findings explain the different disease presentations across patients carrying different variants.

Differences in the patterns and levels of tubulin isotype expression, including of TUBB4B, as well as tissue-specific differences in the ratio of different tubulin isoforms used to build cilia, likely explain why certain specialized cilia and tissues are more sensitive to TUBB4B mutations (Fig. S15). In some axoneme types, like the microtubule doublets of respiratory cilia (38), we demonstrated that only one predominant β-tubulin isotype is utilized. In these cilia, mutations that inhibited heterodimer assembly completely disrupted centriole biogenesis and axoneme elongation because other isotypes cannot compensate, thus explaining why PCD-only phenotypes were observed. In other tissues, where a 'mix' of tubulin isotypes may be used to build different ciliary axonemes, other isotypes could compensate for the inefficient integration of the TUBB4B variants found in PCD-only patients (or in KO mice) into these structures, and therefore cilia function was not compromised. Hence, heterodimer-impaired TUBB4B mutations resulted in PCD-only phenotypes. In contrast, the syndromic PCD+SND variant (p.P358S) could robustly integrate into axonemes and acted in a dominant-negative manner to disrupt the microtubule lattice, thereby leading to additional sensory and renal disease phenotypes in any tissues where TUBB4B was highly expressed. For the polymerization-impaired TUBB4B variants found in SND-only patients (p.R391H/C), we observed less dramatic effects on cilia length in vivo and in vitro. This is consistent with more subtle structural defects in the kinetics or stability of the axonemal microtubules into which such variants integrated, and a tissue-specific sensitivity to dysfunction that leads to sensory ciliopathy features.
Our data also suggest that in some contexts, different tubulin isotypes may be able to compensate for the bespoke properties needed to withstand the high mechanical demands of cilia motility and the high structural order of microtubule doublets. For example, centriole amplification and ciliogenesis in mouse ependymal cells appear to be unimpacted by TUBB4B loss, unlike in ciliated airway cells, although ependymal cells expressed seemingly similar patterns of isotypes and levels of Tubb4b mRNA to ciliated airway cells (Fig. S4J). However, visualization of specific tubulin isotypes in microtubule networks across time and cellular space in vivo, through endogenous tagging of tubulin genes using Tubb4b ALFA, showed that wild-type ependymal cilia contain ten-fold less TUBB4B protein than tracheal cilia. Thus, alternate isotypes such as the highly similar TUBB4A may compensate for the loss of TUBB4B function (39). These observations are consistent with Drosophila studies suggesting that only isotypes with a particular amino acid sequence in the carboxyl terminus (EGEFXXX) are required for normal axonemal function (7, 40-42). This motif is found only in TUBB4A and TUBB4B in mammals and is the site of post-translational modifications associated with cilia stability. These findings raise the possibility that mammalian TUBB4A/B is a motile cilia-specific β-tubulin required for the unique mechanical and structural properties of motile axonemes (43).

Cilia on the choroid plexus are dramatically remodelled during development from motile to sensory/immotile (44) and then lost gradually with age (45). Our work suggests that defects in choroid plexus function could underlie hydrocephaly phenotypes more broadly in PCD patients, rather than defects in ependymal cells, which have largely been accepted to be the culprit. The exact function of choroid plexus cilia remains unclear, but it has been suggested that they regulate fluid transcytosis and that their motility could help cilia sample CSF (46). In a rapidly growing body of evidence for non-genetic causes of hydrocephaly, the importance of the choroid plexus in triggering innate immune and CSF secretory responses to drive hydrocephaly has been linked to insult-induced cilia loss in choroid plexus epithelial cells (47, 48). Moreover, in PCD patients with RGMC phenotypes, such as those carrying Multicilin variants, MRI imaging revealed fully penetrant hydrocephaly with choroid plexus hyperplasia (49). Future studies will be required to understand the mechanisms by which cilia loss regulates CSF secretion and homeostasis in the choroid plexus.
Our study raises an intriguing question: how does a cell expressing different tubulin isotypes preferentially create specific isotype-enriched microtubule structures from the different proportions of available isotypes? One possibility is regulation by the large class of microtubule-associated proteins, which can interact with tubulin and microtubules to affect their dynamic and physical properties (50). Dissecting this will require in vivo approaches such as ours, which preserve both the endogenous network of regulatory factors and the tubulin balance. Our endogenously tagged Tubb4b ALFA model allows us to sensitively monitor isotype-specific functions in development and disease during the organization of different cellular microtubule arrays. Such approaches are necessary to understand the molecular mechanisms leading to isotype-specific differences in the intracellular microtubule networks that support bespoke cell functions. For example, given the pleiotropic features of PCD+SND p.P358S patients, which include effects on kidney function, heart and bone growth, it will be important to use our Tubb4b ALFA model to study contributions of TUBB4B not only to primary cilia within these tissues, but more widely to highly specialized cardiomyocytes and renal tubular epithelia, which each have distinct cytoskeletal networks. Indeed, a single individual has been identified carrying a de novo p.Q11R variant in TUBB4B without clear PCD, exhibiting instead sensorineural hearing loss (but not LCA), renal Fanconi syndrome and hypophosphatemic rickets (51). The Q11 residue is close to the tubulin catalytic GTPase site, and the variant is proposed to lead to hyperstabilized microtubules.

In conclusion, our study provides detailed mechanistic insights into how TUBB4B variants cause a spectrum of ciliopathic diseases that spans both sensory and motile ciliopathies. The disease presentation manifesting in patients depends on how each variant affects tubulin heterodimer pools, as well as on the differential tubulin isotype composition of the cilia and centrioles into which they are incorporated. Our study extends the understanding of tubulinopathies beyond classical neurological features, links them with ciliopathies, and suggests how tubulin diversity in humans underlies and facilitates the diversity of cilia seen in vivo.

Subjects

Twelve affected individuals from twelve unrelated families (6 females and 6 males) and their healthy relatives were included in the study. Genomic DNA was extracted from peripheral blood by standard procedures. Signed and informed consent was obtained from the affected individuals as well as relatives through approved protocols.

Whole genome sequencing (WGS) and candidate prioritization

For P1 (UOE), DNA was sequenced by WGS at Edinburgh Genomics (21). Libraries were prepared using the Illumina TruSeq PCR-free protocol and sequenced on the Illumina HiSeq X platform. The average yield per sample was 136 Gb, with mean coverage of 36x (range 33.9-38.3). After first running the analysis with a virtual gene panel of 146 genes, based on the PCD PanelApp panel (v1.14) (52) with five additional genes identified in the literature (CFAP300, DNAH6, DNAJB13, STK36 and TTC25), no diagnostic variants were identified in P1. Expanded analysis identified a de novo missense mutation, p.P259L (chr9:g.137242994C>T (hg38)), in the gene TUBB4B, present only in the patient and in neither parent (Fig. S1A).
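The trio-based filtering step above, which flagged the variant as present in the proband but absent from both parents, reduces to a simple per-site genotype comparison. Below is a minimal sketch in Python, assuming a jointly genotyped trio VCF parsed as plain text; the file name and sample column order are hypothetical, and this is illustrative only, not the Edinburgh Genomics pipeline:

def genotype(sample_field):
    """Return the set of allele indices from a VCF sample column, e.g. '0/1:...'."""
    gt = sample_field.split(":")[0]
    return set(gt.replace("|", "/").split("/"))

def de_novo_candidates(vcf_path, proband=0, mother=1, father=2):
    """Yield (CHROM, POS, REF, ALT) where the proband is het and both parents are hom-ref."""
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            cols = line.rstrip("\n").split("\t")
            chrom, pos, ref, alt = cols[0], cols[1], cols[3], cols[4]
            samples = cols[9:]
            if (genotype(samples[proband]) == {"0", "1"}
                    and genotype(samples[mother]) == {"0"}
                    and genotype(samples[father]) == {"0"}):
                yield chrom, pos, ref, alt

# Hypothetical usage:
# for chrom, pos, ref, alt in de_novo_candidates("trio.vcf"):
#     print(chrom, pos, ref, alt)

In practice, such candidate lists are further filtered on genotype quality, depth and population frequency before review.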
TUBB4B is an outlier in gnomAD in terms of constraint, especially in the overrepresented synonymous category (Z-score in the bottom 1.4% of all genes; 1.5x observed vs expected variants). The gene is also highly intolerant to missense variants (Z-score in the top 0.4% of all genes; 0.28x observed vs expected variants) and intolerant to loss-of-function variants (probability of loss-of-function intolerance (pLI) = 1; 0.2x observed vs expected variants). Gene annotation for TUBB4B was obtained from gnomAD (v2).

We examined the sequence context of the P259 residue, given that it arose repeatedly by independent mutation in 7/12 patients within this study's cohort:

GTC CCG TTT
 V   P   F

The change of P>L is CCG>CTG, so deamination of a methylated CpG seems the most likely cause of that mutation. The other change at this residue, P>S, is CCG>TCG, which could be caused by non-canonical methylation (mCG) followed by deamination (56). In contrast, the sequence context of P358, which also arose as an independent de novo mutation in 4/12 patients, differs: the change of P>S is CCT>TCT, which is unlikely to be caused by non-canonical methylation.

For P8, data was accessed, analysed and filtered as described in (57). Data was reviewed by the Airlock Committee prior to export.

Whole exome sequencing (WES) and NGS targeted panel

Details of how WES genomic libraries for P2-P7 and P8-P12 were generated, captured and sequenced are summarized in Table S2.

Ciliary nasal brushing and high-speed videomicroscopy

Biopsies and brushings of ciliated epithelium were obtained using a cytology brush from the nasal mucosa (inferior turbinate) of affected individuals P1-P6, P8-P10, P12 and the p.R391C individual, and processed for ciliary investigations. All clinical experiments were performed in the absence of acute respiratory tract infections. Cilia beat frequency and pattern were assessed by high-speed video microscopy at a frame rate of >500 frames per second. Video microscopy of ciliated epithelial cells was performed using an inverted microscope with a 20X phase contrast objective (Eclipse Ti-U; Nikon, Melville, NY) enclosed in a customized environmental chamber maintained at 37 °C. Images were captured by a high-speed video camera and processed with the Sisson-Ammons Video Analysis system (Ammons Engineering, Mt. Morris, MI, USA) and analyzed using established methodologies (58). Cilia beat frequency was analyzed in at least 4 fields obtained from each cell preparation. Cells were collected under approval from the appropriate local authorities, including Washington University Institutional Review Board (IRB) approval and the local ethics committee DC-2008-512, Paris-Necker.
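Beat frequency estimation from high-speed video of this kind generally reduces to spectral analysis of pixel-intensity time series. Below is a minimal sketch of that generic FFT approach (not the Sisson-Ammons implementation itself), assuming a grayscale frame stack held as a NumPy array and an assumed physiological search band of 3-30 Hz:

import numpy as np

def estimate_cbf(frames, fps, fmin=3.0, fmax=30.0):
    """Estimate ciliary beat frequency (Hz) from a (time, y, x) grayscale stack.

    Per-pixel power spectra are pooled over the field and the dominant
    peak within the assumed physiological band is reported.
    """
    t = frames.shape[0]
    traces = frames.reshape(t, -1).astype(float)
    traces -= traces.mean(axis=0)              # remove the static (DC) component per pixel
    power = np.abs(np.fft.rfft(traces, axis=0)) ** 2
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)
    spectrum = power.sum(axis=1)               # pool all pixels in the field
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic check: 512 frames at 500 fps with a 12 Hz beat plus noise.
fps = 500.0
t = np.arange(512) / fps
frames = np.sin(2 * np.pi * 12.0 * t)[:, None, None] + 0.1 * np.random.randn(512, 64, 64)
print(f"Estimated CBF: {estimate_cbf(frames, fps):.1f} Hz")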
Transmission electron microscopy - human

Airway biopsies were immersed in 2.5% glutaraldehyde and processed by standard procedures for transmission electron microscopy ultrastructural analysis (59). Ultrathin sections were examined at a final magnification of 60,000x without knowledge of the clinical data. In samples without a ciliary agenesis phenotype, analysis of at least 50 transverse ciliary sections from different cells was required to assess the internal axonemal structure according to a quantitative method (60). Ciliary ultrastructure results were expressed as the percentage of abnormal cilia among the total number of cilia analysed. As previously reported, up to 10% of cilia in control specimens can exhibit ultrastructural defects (61). For each ciliary study, axonemal abnormalities were expressed by the ultrastructure concerned (i.e. dynein arms, central complex and/or peripheral microtubules). In cases of ciliary agenesis, multiple ultrathin sections were analyzed but very few ciliary cross-sections were observed (e.g. P4 had 9 cilia cross-sections and P5 had 4, both showing outer doublet defects). As such, quantification of these parameters is not feasible and is skewed by the fact that the primary defect is cilia rarefaction.

Electron tomography (ET) data of a section (300 nm thick, stained, plastic-embedded) were collected using a TEM CM10 (Philips, Amsterdam, The Netherlands) equipped with a TemCam-F416 camera (TVIPS, Gauting, Germany). The microscope was operated at an acceleration voltage of 80 kV. The tilt series was collected manually from -45° to 35° at 2.5° intervals (single-tilt axis) with a final pixel size of 1.102 nm. The tilt series were aligned and reconstructed using IMOD (62).

Air-liquid interface culture - human

Primary human nasal epithelial cells were expanded at 37 °C in media selective for basal cells (PneumaCult™-Ex Plus Medium, Stemcell™ Technologies, Cambridge, UK) or specialized media (58). At 80% confluence, basal cells were dissociated and seeded into 6 mm transwell inserts (Corning® Transwell® polyester membrane cell culture inserts, Flintshire, UK). Once a confluent basal monolayer had formed, the apical fluid was removed and the basolateral fluid was replaced with ALI media (PneumaCult™-ALI Medium, Stemcell™ Technologies, Cambridge, UK) to promote differentiation. Experiments were performed once cells had been at ALI for at least 3 weeks and were fully differentiated into ciliated epithelium. For RNA and proteomic studies, transwell insert membranes with cells were cut out and stored in RNAlater (ThermoFisher) or snap frozen, respectively, and stored at -80 °C until use. Cell preparations were maintained in culture for four to twelve weeks.
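The quantitative TEM scoring described above amounts to estimating a proportion of abnormal cilia, whose uncertainty shrinks with the number of cross-sections scored; this is why at least 50 sections are required and why agenesis samples with only 4-9 cross-sections cannot be scored reliably. A minimal sketch using a Wilson score interval, with illustrative counts rather than patient data:

import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Illustrative: 12 abnormal cilia out of 50 scored cross-sections.
k, n = 12, 50
lo, hi = wilson_interval(k, n)
print(f"{100 * k / n:.0f}% abnormal (95% CI {100 * lo:.0f}-{100 * hi:.0f}%); controls show up to ~10%")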
Human proteomics studies - in-gel digestion

Frozen cell pellets of cultured primary nasal epithelia from healthy unrelated controls or an unaffected parent, with parallel cultures from patient samples (p.P259L (P1), p.P358S (P9)), were lysed in 2% SDS in PBS and resolved by SDS-PAGE. Each insert was treated as an experimental replicate and graphed separately. To enrich for tubulin peptides, gel sections (between molecular weight markers of 37-75 kDa) were cut out and further dissected into 1 x 1 mm fragments. These were dehydrated with acetonitrile (ACN), reduced with 10 mM DTT and 50 mM ammonium bicarbonate (AB) for 20 min at 56 °C, then alkylated with 55 mM iodoacetamide and 50 mM AB for 1 h at RT. Samples were washed by sequential dehydration/hydration steps alternating between ACN and 50 mM AB. Samples were then digested with trypsin (Promega) at 37 °C overnight and extracted with 0.1% trifluoroacetic acid (TFA) and 80% ACN/0.1% TFA. The combined eluates were concentrated in a CentriVap Concentrator (Labconco) and loaded onto StageTips.

The tryptic peptides eluted from StageTips (80% ACN, 0.1% TFA) were lyophilised and resuspended in 0.1% TFA. Samples were analysed on a Q Exactive Plus mass spectrometer connected to an Ultimate Ultra3000 chromatography system (Thermo Scientific, Germany) incorporating an autosampler. 5 μL of each tryptic peptide sample was loaded on an Aurora column (IonOptiks, Australia, 250 mm length) and separated by an increasing ACN gradient, using a 40 min reverse-phase gradient (from 3%-40% ACN) at a flow rate of 400 nL/min. The mass spectrometer was operated in positive ion mode with a capillary temperature of 275 °C and a potential of 1,500 V applied to the column. Data were acquired with the mass spectrometer operating in automatic data-dependent switching mode, using the following settings: MS 70k resolution in the Orbitrap; MS/MS 17k resolution obtained by HCD fragmentation (26 normalised collision energy). MaxQuant version 1.6 was used for mass spectra analysis and peptide identification via the Andromeda search engine (63), using standard settings apart from: match between runs was enabled. Trypsin or LysC was chosen as the protease, with a minimum peptide length of 7 and a maximum of two missed cleavage sites. Carbamidomethylation of cysteine was set as a fixed modification, and methionine oxidation and protein N-terminal acetylation as variable modifications. Total proteomic data are available via ProteomeXchange with identifier PXD036304.

Nasal epithelial cell immunofluorescence

Nasal cells were fixed directly from the patient or after expansion in ALI cultures for wholemount staining. If direct, samples suspended in cell culture media were spread onto glass slides, air dried, and stored at -80 °C until use. Expanded cultures were fixed directly on the membrane in 4% PFA/PBS, then immunostained and imaged (Tables S7, S8). Nuclei were stained using 4′,6-diamidino-2-phenylindole (DAPI) at 1.5 µg/mL.
Genetic and three-dimensional structure analysis

The three-dimensional structures of wild-type TUBB4B and mutant variants were predicted using I-TASSER, based on the TUBB4B NM_006088.5 reference. Predicted models were aligned on the cryo-electron microscopy structure of a GDP-protofilament (GDP-K MT, EMD-6353, PDB: 3JAS) using the UCSF Chimera software. We modelled the molecular effects of missense mutations using the stability predictor FoldX v5 with all default parameters and three replicates (Table S3). We used the PDB structure 5FNV, chain B, as it possesses both the inter- and intra-dimeric interfaces with α-tubulin. ΔΔG subunit values represent the predicted effects of mutations on the TUBB4B molecule alone, ignoring any intermolecular interactions, while the ΔΔG full values were calculated using the full protein complex structure and thus include effects from the predicted disruption of α-tubulin interfaces. To visualize the structure of the Dup variant, we modelled the full variant protein using SWISS-MODEL.

Identification of the tubulin isotypes that form axonemal microtubule doublets

The tubulin isotypes that form respiratory axonemal microtubule doublets were determined using sidechain density from the 3.6-Å resolution structure of human microtubule doublets (32). All potential isotypes were first determined by mass spectrometry of the sample used for cryo-EM analysis. Candidates for α-tubulin were TUBA1A, TUBA1B, TUBA1C and TUBA4A; candidates for β-tubulin were TUBB4B, TUBB2B and TUBB5. Multiple sequence alignments were generated for the α- and β-tubulin isotypes to highlight positions in the primary sequence where the residues differed. The density corresponding to each site of variation was then examined to discriminate between candidate residues. For example, TUBB2B was excluded because its asparagine sidechain at position 57 does not match the density (Fig. S12C). The methionine sidechain at position 293 and the alanine sidechain at position 365 of TUBB4B fitted the density better than the corresponding valines of TUBB5. After performing this sequence comparison at every variable residue, we determined that the amino acid sequence of TUBB4B was the best fit to the density. The same approach was used to identify the α-tubulin isotype as TUBA1A, where the glycine of TUBA1A at position 232 was a better fit to the density than the serine of TUBA1B, TUBA1C and TUBA4A. Our assignment is consistent with the abundance of tubulin isotypes in single-cell RNA-sequencing of human multiciliated respiratory cells (64).

Site-directed mutagenesis

Patient-derived TUBB4B variants c.776C>T, p.P259L; c.775C>T, p.P259S; c.1072C>T, p.P358S; c.1171C>T, p.R391C; and c.1172G>A, p.R391H were generated by mutagenesis via inverse PCR with Phusion polymerase, using the vector pcDNA3.1-TUBB4B-C-(K)DYK (GenScript, Piscataway, USA) as template with the primers listed in Table S6. The amplified product was digested with DpnI to avoid religation of the original non-mutated DNA. Constructs were amplified in XL1-Blue competent cells (Agilent, US), and the whole ORF of each plasmid was Sanger sequenced to confirm the presence of the patient mutation.
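As a sanity check, each cDNA change above can be mapped onto its codon and translated. A minimal sketch using the codon contexts given earlier for P259 (CCG) and P358 (CCT); the R391 codon is assumed here to be CGC, since the two substitutions fix only its first two bases:

# Only the codons needed for these checks are tabulated; CGC for R391 is an
# assumption (c.1171C>T / c.1172G>A do not constrain its third base).
CODE = {"CCG": "P", "CTG": "L", "TCG": "S", "CCT": "P", "TCT": "S",
        "CGC": "R", "TGC": "C", "CAC": "H"}

def apply_snv(codon, cds_pos, alt):
    """Apply a single-nucleotide change at CDS position cds_pos (1-based)."""
    bases = list(codon)
    bases[(cds_pos - 1) % 3] = alt             # offset of the base within its codon
    return "".join(bases)

variants = [("c.776C>T (p.P259L)", "CCG", 776, "T"),
            ("c.775C>T (p.P259S)", "CCG", 775, "T"),
            ("c.1072C>T (p.P358S)", "CCT", 1072, "T"),
            ("c.1171C>T (p.R391C)", "CGC", 1171, "T"),
            ("c.1172G>A (p.R391H)", "CGC", 1172, "A")]
for name, codon, pos, alt in variants:
    mut = apply_snv(codon, pos, alt)
    print(f"{name}: {codon} ({CODE[codon]}) -> {mut} ({CODE[mut]})")

Each printed substitution reproduces the expected amino acid change, confirming the internal consistency of the cDNA and protein nomenclature.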
Microtubule co-localization and lattice dynamics in RPE1 cells

Microtubule dynamics were characterized 48 h post-transfection. Cells were fixed directly in ice-cold methanol (5 min at -20 °C) for the steady-state microtubule lattice; after being maintained on ice for 20 or 30 min for microtubule depolymerization; or after 30 min on ice followed by incubation at 37 °C for 4 or 6 min, respectively, for microtubule repolymerization. Fixed cells were permeabilized in PBS supplemented with 3% bovine serum albumin and 0.1% Triton X-100 (1 h at RT) prior to immunostaining. Staining colocalization between FLAG-positive and α-tubulin-positive signal within a given cell area (ROI) was quantified using machine learning in the Ilastik software (65); percentages of staining colocalization were generated using the JACoP plugin in ImageJ (66) and plotted using GraphPad software. Microtubule lengths in the repolymerization state were measured by determining the number of EB1 protein spots and the distance between the centrosome and each EB1 signal, using a spot detector plugin within an ROI in the Icy software (67). Individual distances were plotted using GraphPad software. Means of fiber alignment degrees (co-localization), EB1 spot numbers and microtubule lengths were calculated from two independent experiments (>30 cells for each cell line). Statistical analyses were carried out by ANOVA and the PLSD Fisher test.

Ciliary abundance and length in RPE1 cells

Transfected cells were propagated for 24 h at 37 °C, 5% CO2 in serum-free Opti-MEM Glutamax I medium to promote ciliation. Cells were fixed in ice-cold methanol and immunostained. Mean numbers of ciliated cells and cilia lengths were calculated from two independent experiments (>100 cells for each condition) using ImageJ (66). Statistical analyses were carried out by the PLSD Fisher test according to the significance of the Student's t test.

[...] Edinburgh, UK). To generate stable IMCD3 cell lines expressing control or patient variants, IMCD3 cells (CRL-2123, ATCC) were transduced with 3 x 10^7 copies/mL of virus with polybrene (H9268, Sigma) at a final concentration of 10 μg/mL in DMEM-F12 (12634010, Gibco), 10% FCS, 1% P/S at 80% confluency. After 24 hours, fresh media was added. At 96 hours after transduction, hygromycin was added to the media at a final concentration of 100 μg/mL and cells were selected for 7 days.
To test the ability of TUBB4B variants to heterodimerize, IMCD3 stable cell lines expressing wild-type and patient variants fused in frame with the small C-terminal ALFA tag (NanoTag Biotechnologies) were grown to 80% confluency. Plates were placed on ice for 30 min to depolymerize microtubules, after which the culture media was aspirated and cells were scraped into 400 µl BRB80 buffer (80 mM PIPES, 1 mM MgCl2, 1 mM EGTA, pH 6.8) plus 10% glycerol, 0.2% Triton X-100, 5 µg/ml DNase I, Halt Protease Inhibitor (Pierce) and 1 mM GTP (R1461, Thermo Scientific). Cells were lysed in a water bath sonicator for 10 min, incubated at 37 °C for 20 min and centrifuged at 13,000 rpm. Cleared supernatants were used to determine total protein levels or incubated with 20 µl ALFA SelectorPE beads (N1510, NanoTag Biotechnologies, Germany) for affinity capture of TUBB4B-ALFA for 1 hour at RT, then washed four times with BRB80 buffer and 10% glycerol. Bound proteins were released by competition with 0.1 mg of ALFA elution peptide (N1520, NanoTag Biotechnologies, Germany) for 15 min at RT. Resin eluates or total lysates were resolved on acrylamide gels and transferred using the Trans-Blot® Turbo™ Transfer System (170-4150, Biorad) with Trans-Blot Turbo™ transfer reagents (Biorad 170-4270), followed by iBind™ (SLF1000, Thermo Scientific) and iBind™ Flex Solutions (SLF2020, Thermo Scientific). Blots were immunoblotted using the antibodies listed in Tables S7, S8 and imaged, following incubation with SuperSignal™ West Pico Plus chemiluminescent substrate (34580, Thermo Scientific), on an ImageQuant 800 (Amersham), using either auto-exposure or manual exposure with a saturation indicator. All quantifications were done on non-saturated bands using ImageJ.

To visualize tubulin heterodimer chaperone complexes, the in vitro translation (IVT) reactions were run for different amounts of time. When adding excess tubulin for the tubulin 'pulse', 1 µl porcine tubulin extracted from brains was added after 80 min and the reaction allowed to proceed for a further 60 min (1 µM). The reactions were loaded onto Invitrogen NativePAGE 4 to 16%, Bis-Tris, 1.0 mm Mini Protein Gels (ThermoFisher, #BN1002BOX) according to the manufacturer's instructions, except that blue dye was excluded; the gels were transferred onto nitrocellulose membranes and processed as above for subsequent immunoblotting.

In vitro studies: cloning, protein expression and purification

Affinity tag-free wild-type and mutant recombinant human TUBB4B/TUBA1A heterodimers, as unlabelled, biotinylated and fluorophore-labelled proteins, were overexpressed and purified following the protocol described previously (68).

Polymerization of microtubules

GMPCPP-stabilized wild-type and mutant TUBB4B/TUBA1A microtubules were generated by polymerizing 1 mg/mL or 4 mg/mL unlabelled tubulin supplemented with 2.8% fluorescent tubulin and 2.8% biotinylated tubulin, following the protocol described previously (68). Taxol-stabilized mutant TUBB4B/TUBA1A microtubules were generated by polymerizing 3 mg/mL unlabelled tubulin supplemented with 2.8% fluorescent tubulin and 2.8% biotinylated tubulin, following the protocol described previously (68). GMPCPP- and taxol-double-stabilized mutant TUBB4B/TUBA1A microtubules were generated following the method described earlier, with the slight modification that 1 mM GMPCPP was used instead of GTP.
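Conceptually, the requirement for GMPCPP plus taxol reflects the critical concentration (Cc) for polymerization: above Cc the excess dimer partitions into polymer, and stabilizers act by lowering Cc. A minimal sketch of this textbook steady-state relation, with placeholder Cc values chosen for illustration rather than measured ones:

def polymer_mass(total_uM, cc_uM):
    """Above Cc, free dimer stays near Cc and the excess partitions into polymer."""
    return max(0.0, total_uM - cc_uM)

# Placeholder Cc values for illustration only, not measurements.
conditions = {"WT + GMPCPP (assumed Cc 1 uM)": 1.0,
              "p.P358S + GMPCPP (assumed Cc 8 uM)": 8.0,
              "p.P358S + GMPCPP + taxol (assumed Cc 1 uM)": 1.0}
total = 5.0  # 5 uM total tubulin
for label, cc in conditions.items():
    print(f"{label}: {polymer_mass(total, cc):.1f} uM polymer at {total:.0f} uM total")

This is a cartoon of the mass balance only; real assembly kinetics also involve nucleation barriers, which is why seeds and stabilizers matter experimentally.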
Microtubule dynamics

Free tubulin was prepared by supplementing unlabelled free tubulin with 3% fluorescent tubulin in 1x BRB80 containing 5% (w/v) glycerol, 2 mM GTP and 1 mM tris(2-carboxyethyl)phosphine (TCEP). The samples were incubated on ice for 5 min before centrifuging at 90,000 rpm at 4 °C for 10 min (Beckman Coulter, TLA 120.1) to remove aggregated tubulin. The final concentration was determined by Bradford assay. The flow chambers were assembled as described previously (68). Next, the flow chambers were treated sequentially with 0.2 mg/mL neutravidin for 5 min, followed by 1% (w/v) Pluronic for 15 min and lastly 0.2 mg/mL κ-casein for 5 min, with two washes of 10 μL working buffer in between each treatment. The microtubule seeds were immobilized by perfusing 0.8 μL of resuspended polymerized microtubules into the chamber and incubating for 8 min before washing twice with 10 µL working buffer (1x BRB80, 5% sucrose, 1 mM MgCl2 and 1 mM TCEP). After checking the density of immobilized seeds in the chamber, the chamber was perfused with 10 µL of reaction mix containing oxygen scavenger mix (4.5 mg/mL glucose, 200 mg/mL glucose oxidase, 35 mg/mL catalase and 2 mM GTP) and free tubulin (3.5 mg/mL). The flow chamber was sealed with Valap sealant (a 1:1:1 mixture of Vaseline, lanolin and paraffin) and incubated at 37 °C for 5 min before imaging. Three independent experiments were performed for varying ratios of wild-type TUBB4B/TUBA1A to mutant p.P358S (100:0, 75:25, 50:50). Time-lapse images were captured at 5 s per frame for 15 min using an iLAS3 ring-total internal reflection fluorescence (TIRF) microscope (inverted Nikon Ti2-E) with a Photometrics Prime95B camera. The microscope stage was kept at 37 °C using a warm stage controller. A 561 nm excitation laser was used.

Dynamics assay analysis

In the analysis of TIRF images, the Mosaic plugin of Fiji (https://mosaic.mpi-cbg.de/Downloads/update/Fiji/MosaicToolsuite/) was used to remove the background prior to drawing kymographs (69). Lines were drawn manually along the vertical extent of an event to measure catastrophe time, while catastrophe length was determined by manually drawing horizontal lines along the event before it underwent catastrophe. Pause time was determined by visually identifying segments in which no discernible growth was observed during the event. To estimate the time taken for a new microtubule to nucleate, the nucleation time was calculated by subtracting the catastrophe time from the total duration of the image series.

Since the polymerization rate did not allow us to distinguish polarity, all analyses were conducted by pooling events from both ends. The polymerization rate was determined by calculating the slope of the event (catastrophe length/catastrophe time). Catastrophe frequency was the total number of events divided by the total catastrophe time, while nucleation frequency was defined as the total number of events divided by the total nucleation time. The pause fraction is the time during which a filament experienced a pause event, expressed as a fraction of the total catastrophe time for that filament. Due to the stochastic nature of catastrophe and nucleation, a Poisson distribution was assumed for catastrophe frequency and nucleation frequency. The standard error of the mean catastrophe frequency was calculated by dividing the observed catastrophe frequency by the square root of the total number of events observed. Nucleation frequency was analyzed in the same manner.
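These definitions translate directly into code. A minimal sketch over synthetic per-event measurements (growth length, catastrophe time, pause time), pooling events for simplicity where the text computes some quantities per event or per filament:

import math

# Each event: (growth_length_um, catastrophe_time_s, pause_time_s).
# Synthetic placeholder values, not measured data.
events = [(2.1, 140.0, 10.0), (1.6, 95.0, 0.0), (3.0, 210.0, 25.0)]
n = len(events)
total_length = sum(e[0] for e in events)
total_cat_time = sum(e[1] for e in events)
total_pause = sum(e[2] for e in events)

poly_rate = total_length / total_cat_time          # catastrophe length / catastrophe time
cat_freq = n / total_cat_time                      # events per unit catastrophe time
pause_fraction = total_pause / total_cat_time      # pause time as a fraction of catastrophe time

# Poisson counting error: SEM of a frequency = frequency / sqrt(event count).
cat_freq_sem = cat_freq / math.sqrt(n)

print(f"polymerization rate: {poly_rate * 60:.2f} um/min")
print(f"catastrophe frequency: {cat_freq:.4f} +/- {cat_freq_sem:.4f} per s")
print(f"pause fraction: {pause_fraction:.2f}")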
Cryo-EM data processing

The starting point for cryo-EM analysis was a stack of 208,558 particles used previously to calculate a structure of the 48-nm repeat of microtubule doublets isolated from human respiratory cilia (32). These particles were extracted in 512-pixel boxes from micrographs collected on a Titan Krios microscope (Thermo Fisher Scientific) equipped with a BioQuantum K3 imaging filter (slit width, 25 eV) and a K3 detector (Gatan) (Table S4). The micrographs had a defocus range of -0.8 to -2.0 μm and a pixel size of 1.37 Å. Each particle had undergone at least one round of contrast transfer function refinement (CTFRefine) and Bayesian polishing in RELION-4.0 (70) and was a survivor of multiple rounds of three-dimensional classification. In the previous publication (32), this particle set was used to calculate a composite cryo-EM map of the human microtubule doublet with a nominal resolution of 3.6 Å (EMD-26624). Full details of the sample preparation, microscope settings and data processing steps are provided in (32).

In this study, we used this particle set to improve the resolution of an α/β-tubulin heterodimer in order to confidently identify the α- and β-tubulin isotypes. Using RELION-4.1 (70), we created a soft-edged mask over one of the best-resolved regions of the map: the microtubule wall (the ribbon) that partitions the lumens of the A and B tubules (Fig. S11A). Density outside the masked region was subtracted to focus subsequent refinements on the ribbon microtubule density. Two rounds of focused local refinement with decreasing initial angular sampling (0.3 and 0.1, respectively) with the subtracted ribbon density yielded a nominal resolution of 2.8 Å (Fig. S11A), based on the Fourier shell correlation = 0.143 criterion, which is close to the Nyquist limit imposed by the 1.37 Å pixel size set during data collection. A subregion of this map, corresponding to α/β-tubulin and colored by local resolution (Fig. S11B), shows that the majority of the tubulin dimer is resolved to better than 3 Å, consistent with the nominal resolution of 2.8 Å. Maps were sharpened using standard postprocessing in RELION-4.1. The map has been deposited in the Electron Microscopy Data Bank with accession number EMD-40480.

Identification of tubulin isotypes from cryo-EM maps

The tubulin isotypes that form human respiratory axonemal microtubule doublets were determined using well-resolved sidechain density from the 2.8-Å resolution map. All potential isotypes were considered, with the exception of TUBAL3, which has an insertion of 7 residues at position 39 that is incompatible with the α-tubulin cryo-EM density. Multiple sequence alignments were generated for the α- and β-tubulin isotypes to identify positions in the primary sequence where the residues differed (Fig. S14). The density corresponding to each site of variation was then examined manually to discriminate between candidate residues. Candidates were excluded if their residues had sidechains that extended beyond the cryo-EM density or if their sidechains were smaller than indicated by the cryo-EM density. Using this approach, TUBA1A and TUBB4B were identified as the best fits to the density for α- and β-tubulin, respectively (Fig. S12 and S13), although we cannot rule out other tubulin isotypes making a minor contribution that is averaged out during cryo-EM processing. This assignment is consistent with the upregulation of TUBA1A and TUBB4B transcripts in single-cell RNA-sequencing of human multiciliated respiratory cells (64).
Model Refinement

To refine the TUBA1A and TUBB4B models, chains LI and LH and residues within 10 Å of these chains were extracted from PDB: 7UNG using ChimeraX and refined into the improved cryo-EM density using ISOLDE (71). ISOLDE's command `write phenixRsrInput` was used to create a parameter file for subsequent real-space refinement in Phenix (72). Refinement statistics are provided in Table S4. The model has been deposited in the Protein Data Bank with accession code 8SH7.

Generation of mouse lines and patient-mutation F0 founder mouse models

For the Tubb4b R391H/+ line, the NIH Guide for the Care and Use of Laboratory Animals was followed. Tubb4b R391H/+ mice were generated using CRISPR/Cas9 as described in Fig. S6, using the guides detailed in Table S5, as shown in Fig. S10A. To generate Tubb4b ALFA/+ mice, we used the guide and repair template shown in Table S5. To generate Tubb4b +/- mice (Tubb4b KO1/+ and Tubb4b KO2/+), we used CRISPR/Cas9 as described in Fig. S3A, using the guides detailed in Table S5. To generate Tubb4b P259L/+, Tubb4b P259S/+ and Tubb4b P358S/+ mice, a similar targeting strategy was attempted using a variety of guides and repair templates detailed in Table S5. Silent mutations were included to block re-cutting, as reported to increase HDR accuracy and efficiency (73). However, we saw increased perinatal lethality and infertility in founders carrying patient-derived mutations, and no lines could be established, even using IVF techniques. Briefly, C57BL/6J female mice were super-ovulated, and fertilized embryos were injected at the 1-cell stage. The microinjection mix, consisting of RNPs with 0.35 µM guides, 1.8 µM recombinant GeneArt Platinum Cas9 (Thermo Fisher Scientific, US) and 20 ng/μl ssODN repair templates (Integrated DNA Technologies (IDT), US), was incubated at 37 °C for 10 min prior to pronuclear microinjection. Zygotes were cultured overnight before transfer to pseudopregnant CD1 females. Founders were genotyped by PCR and Sanger sequencing from genomic DNA isolated from ear biopsies, using the primers in Table S6.

To establish colonies of Tubb4b R391H/+ and Tubb4b +/- mice, founder mice were crossed with C57BL/6J and CD1 mice, respectively, to remove potential off-targets, and the heterozygous offspring were then outcrossed to C57BL/6J and CD1, respectively, for at least 5 generations to maintain a colony. CD1 was used for Tubb4b +/- mice to reduce the severity of neonatal lethality and hydrocephaly, coupled to the small litter size characteristic of C57BL/6J mice. Genotyping was performed using the primers detailed in Table S6, followed by Sanger sequencing in-house or by Transnetyx (Cordova, TN). For basal body quantification (Fig. S3, S4), mice were crossed to the transgenic line Cen2GFP (CB6-Tg(CAG-EGFP/CETN2)3-4Jgg/J, The Jackson Laboratory).

Left-right patterning analysis of mouse models

For left-right patterning defect analysis, three E12.5 litters from Tubb4b +/- intercrosses were dissected and organ laterality examined blind to genotype. All 27 of the embryos examined displayed situs solitus, including the six Tubb4b -/- mutants.
Electroretinographic analysis of mouse models

Retinal function in ten Tubb4b R391H/+ and ten wild-type C57BL/6J mice, aged 2 months to 1 year, was analyzed by electroretinography using the CELERIS Next Generation Rodent ERG Testing platform, according to the manufacturer's protocol (Diagnosys LLC, Cambridge, UK) and guidelines for animal safety. In brief, mice were dark-adapted overnight, anesthetized according to their weight, and exposed to two four-step sequences of increasing light stimuli from 0.01 to 3 cd.s/m2 and a step sequence of 3 and 10 cd.s/m2, to elicit and record rod- and cone-specific responses, respectively. Statistical analyses were carried out using GraphPad software using the post hoc Sidak's test (two-way ANOVA).

Mouse trachea proteomics

Tracheas from mutant and wild-type littermate mice, aged between P1 and P5 for Tubb4b KO experiments and between P40 and P100 for Tubb4b R391H/+ experiments, were used for total proteomics, using a filter-aided sample preparation (FASP) method. Tracheas were flash frozen in liquid nitrogen and stored at -70 °C. Samples were lysed in 2% SDS, 0.1 M DTT, 0.1 M Tris-HCl pH 7.6 by pipetting and heating to 95 °C for 3 min, and non-solubilized material was removed. The resulting sample was diluted to 100 μL with 0.1 M Tris-HCl pH 7.6. FASP purification and double digestion were done following the protocol described in (74), with the following alterations: we used Vivaspin500 30k cut-off ultrafiltration devices (Sartorius) and 50 μL aliquots. All other steps, including washes, reduction and alkylation, were identical. 1 μg of endoproteinase LysC (Wako) in 40 μL of 0.1 M Tris-HCl pH 8.5 was added. After overnight incubation at 37 °C, the LysC fraction was collected by centrifugation of the filter units for 20 min. The sample in the filter was then eluted with 40 μL of 0.1 M Tris-HCl pH 8.5 containing 1 μg of trypsin. Following a 4 h digestion, tryptic peptides were collected by centrifugation of the filter units for 20 min. Samples were acidified with 1% trifluoroacetic acid (TFA) and desalted using StageTips, dried using a CentriVap Concentrator (Labconco) and resuspended in 15 µl 0.1% TFA. Protein concentration was determined by absorption at 280 nm on a Nanodrop 1000, then 2 µg of desalted peptides were loaded onto a 50 cm emitter packed with 1.9 µm ReproSil-Pur 200 C18-AQ (Dr Maisch, Germany) using an RSLC-nano uHPLC system connected to a Fusion Lumos mass spectrometer (both Thermo, UK). Peptides were separated by a 140-min linear gradient from 5% to 30% acetonitrile, 0.5% acetic acid. The Lumos was operated using the following settings: MS 120k resolution in the Orbitrap; MS/MS obtained by HCD fragmentation (30 normalized collision energy), read out in the ion trap at "rapid" resolution with a cycle time of 1 s. The Limma package was used for mass spectra analysis and peptide identification (75). Total proteomic data are available via ProteomeXchange with identifier PXD036304.

For Fig. 2U, we specifically analyzed unique peptides for all α- and β-tubulins from the total proteomes of control mTECs across differentiation time points that we previously generated (76). Briefly, these total mTEC proteomes were derived from two animals per genotype with three experimental replicates per time point (days 4-10, animal pair 1; days 14-18, animal pair 2). The data were analyzed using the MaxQuant 1.6 software suite (https://www.maxquant.org/) by searching against the murine UniProt database with standard settings, enabling LFQ determination and matching. The data were further analyzed using the Perseus software suite. LFQ values were normalized, and 0-values were imputed using a normal distribution and standard settings.
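The Perseus-style imputation mentioned above replaces missing log2 LFQ values with draws from a down-shifted normal distribution. A minimal sketch, using the commonly cited Perseus defaults (width 0.3 SD, down-shift 1.8 SD) as assumptions and a synthetic matrix:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic log2 LFQ matrix (proteins x samples) with missing values as NaN.
log2_lfq = np.array([[25.1, 24.8, np.nan],
                     [30.2, np.nan, 29.9],
                     [np.nan, 22.5, 22.9]])

observed = log2_lfq[~np.isnan(log2_lfq)]
mu, sd = observed.mean(), observed.std()

# Draw missing values from a normal distribution down-shifted by 1.8 SD with
# width 0.3 SD, mimicking low-abundance proteins below the detection limit.
mask = np.isnan(log2_lfq)
log2_lfq[mask] = rng.normal(loc=mu - 1.8 * sd, scale=0.3 * sd, size=mask.sum())
print(np.round(log2_lfq, 2))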
Mouse trachea transcriptomics

Tracheas from mutant, heterozygous and wild-type P4 and P5 mice were dissected. RNA was extracted and measured as described for the human transcriptomics above, except that Turbo DNase was not used; the samples were instead run through a gDNA eliminator column (Qiagen) before extraction. The sequencing library was prepared as described for the human transcriptomics and was sequenced using the [...]

Histology and immunohistochemistry of mouse tissues

Mouse tissues were obtained at different stages after euthanasia by cervical dislocation, anaesthetic overdose or CO2 asphyxiation, performed according to protocol guidelines for animal safety. Upon dissection, tracheas, kidneys and brains were fixed in 4% PFA/PBS, testes in Bouin's fixative, and eyes in Davidson's fixative, according to standard protocols. Tissues were serially dehydrated and embedded in paraffin. Samples were sectioned at 5-8 µm and processed for hematoxylin-eosin (H&E) staining using standard protocols. Wild-type C57BL/6J mice were used as a reference in all knock-in Tubb4b R391H/+ and all F0 analyses.

Paraffin-embedded eye and trachea tissue sections were dewaxed and rehydrated via an ethanol series prior to antigen retrieval in 10 mM Tris-HCl pH 9.2, 2 mM EDTA, 0.01% Tween-20 for 7 min at 900 W in the microwave. Sections were blocked for 1 h with blocking solution (0.1% Tween, 50 mM NH4Cl, 1% BSA in PBS) prior to immunostaining using the primary and secondary antibodies detailed in Tables S7, S8. Samples were stained with 1.25 µg/mL DAPI (Roche, Mannheim, Germany), rinsed and mounted in Fluoromount medium (Sigma) under glass coverslips.

Transmission electron microscopy on tissues - mouse

Samples were dissected into PBS and fixed in 2% PFA/2.5% glutaraldehyde/0.1 M sodium cacodylate buffer pH 7.4 (Electron Microscopy Sciences). Tracheas were fixed for 18 h at 4 °C and then rinsed in 0.1 M sodium cacodylate buffer, post-fixed in 1% OsO4 (Agar Scientific) for 1 h and dehydrated in sequential steps of acetone prior to impregnation in increasing concentrations of resin (TAAB Lab Equipment) in acetone, followed by 100% resin, then placed in moulds and polymerized at 60 °C for 24 h. Ultrathin sections of 70 nm were subsequently cut using a diamond knife on a Leica EM UC7 ultramicrotome. Sections were stretched with chloroform to eliminate compression and mounted on Pioloform-filmed copper grids prior to staining with 1% aqueous uranyl acetate and lead citrate (Leica). They were viewed on a Philips CM100 Compustage transmission electron microscope, with images collected using an AMT CCD camera (Deben).

Brain ventricles were dissected according to (77), pre-extracted with 0.1% Triton X-100 in PBS for 1 min, then fixed in 4% PFA or ice-cold methanol for at least 24 h at 4 °C, followed by permeabilization in PBS (0.5% Triton X-100) for 20 min at room temperature. Ventricles were blocked in 4% BSA in PBST (PBS/0.25% Triton X-100) for 1 h at room temperature, then placed ependymal-layer down in primary antibodies (Table S7) in 4% BSA/PBST for at least 12 h. Ventricles were washed in PBS 3 x 10 min and incubated with secondary antibodies (Table S8) in 4% BSA/PBST (0.25% Triton X-100) at 4 °C for at least 12 h. Ventricles were washed in PBS 3 x 10 min and mounted on glass-bottom dishes (Nest, 801002) in Vectashield (VectorLabs), immobilized with a cell strainer (Greiner Bio-One, 542040).
Isolation and immunofluorescence of primary mouse cells

Mouse tracheal epithelial cells (mTECs) were isolated and cultured as described previously (78, 79). Ependymal cells were isolated from mice aged P0 to P5 and cultured as described previously (80). Mouse fibroblasts were harvested from a mix of tail and ear tissue from P5 mice as described (81). For immunofluorescence, cells were plated on coverslips or in glass-bottom plates. Ependymal and fibroblast cells were fixed in 4% methanol-free formaldehyde for 5-10 min. Samples were permeabilized in TBST (TBS/0.1% Triton X-100) for 5 min and blocked in 5% donkey serum in TBST. The corresponding primary and secondary antibody incubations (Tables S7, S8) were done overnight in 1% donkey serum in TBST. Samples were washed in TBST and stained with DAPI before mounting in ProLong Gold (Life Technologies, Thermo Fisher Scientific).

Single cell RNASeq analysis

Published 10X single-cell data and metadata were read using the Seurat (82-85) SCTransform method to obtain a gene-by-cell expression matrix for the data from (86-88), or the published gene-by-cell matrix was used for the data from (89, 90). For each cell type, as identified in the data by the respective authors, the proportion of cells expressing a given tubulin (expression greater than zero) was calculated relative to all cells of that type. The resulting matrix was visualized as a heatmap, allowing rows (cell types) to cluster (Fig. S4J (mouse airway, choroid plexus and ependymal datasets); Fig. S10F (mouse and human neuroretina across ages)).
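The proportion-expressing computation above is a per-cell-type thresholded mean. A minimal sketch over a synthetic gene-by-cell matrix, with placeholder gene names and cell-type labels standing in for the published data:

import numpy as np

# Synthetic gene-by-cell matrix; in practice this is the SCTransform output
# or the authors' published matrix, with their cell-type annotations.
genes = ["Tubb4b", "Tubb4a", "Tubb5"]
cell_types = np.array(["ependymal", "ependymal", "multiciliated", "multiciliated"])
expr = np.array([[0.0, 2.1, 1.4, 3.3],
                 [1.2, 0.0, 0.0, 0.5],
                 [2.0, 1.8, 2.2, 0.0]])

for ct in np.unique(cell_types):
    cells = expr[:, cell_types == ct]
    fractions = (cells > 0).mean(axis=1)   # fraction of cells with expression > 0
    print(ct, dict(zip(genes, np.round(fractions, 2))))

The resulting cell-type x gene matrix of fractions is what gets clustered and rendered as the heatmap.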
Quantitative imaging of Tubb4bALFA levels between cilia types
Wild-type (n=2) and heterozygous (n=3) Tubb4b ALFA/- P8 neonatal littermates were culled by intraperitoneal injection of pentobarbital. Both trachea and brain were subdissected from each animal in ice-cold PBS. Tissues were fixed overnight at 4 °C in 4% PFA/PBS, then rinsed and permeabilised in PBST (PBS, 0.5% Triton X-100 (ThermoFisher, #85111)) for 20 min and blocked using 2% BSA (Merck, #A9418) in PBS with 0.05% Tween-20 (ThermoFisher, #28320) for 30 min. Primary antibodies (FOP, acetylated α-tubulin and anti-ALFA-Alexa647) were diluted (Table S7) in block solution and incubated overnight at 4 °C. Tissues were washed 3 times with PBS for 5 min per wash. Secondary antibodies (Table S8) were then diluted in block solution, and the tissues were incubated with secondary antibodies for 2 h. Tissues were then washed 3 times with PBS prior to mounting. Tracheas were cut into longitudinal strips and mounted onto a glass microscopy slide (Fisher Scientific, #11562203) using ProLong Gold mounting medium (ThermoFisher, #P10144). Brain ventricles were mounted in round glass bottom dishes (Nest, #801002) using ProLong Gold mounting medium and secured with a glass coverslip on top. Images were acquired using a Nikon A1+ Confocal (Nikon Europe B.V., Netherlands) with an oil 60X lens, using the same pinhole, gain and laser intensities between tissues. Single planes shown in Fig. 6G were generated using ImageJ. For quantitative imaging of ratiometric levels, as shown in Fig. 6H, values of pixel intensities from single Z-planes were obtained in NIS Elements (Nikon Europe B.V., Netherlands) by drawing a line across those ciliary bundles that displayed the clearest separation between individual cilia (n = 4-8 ciliary bundles per animal per tissue). Pixels with overlapping intensity values for AcTUB and ALFA647 above background were used to calculate the ratios shown in the box plots.

For P1 (UOE), the study was approved by the London-West London & Gene Therapy Advisory Committee Research Ethics Committee (REC number 11/LO/0883); P2 by the London-Bloomsbury Research Ethics Committee (REC 08/H0713/82; IRAS ID 103488); P3 by the Institutional Ethics Review Board of the University Muenster (2015-104-f-S); and P4 and P5 by the Institutional Review Board of the Institut national de la santé et de la recherche médicale (IRB00003888; approval n°15-259). Protocols for UNC (P6, P7) human studies were approved by the Institutional Review Board at the University of North Carolina and were performed in compliance with ethical regulations. P8 was recruited for WGS as part of the 100,000 Genomes Project, under approved Research Registry Project RR185 'Study of cilia and ciliopathy genes across the 100,000 GP cohort'. For P9, the study protocols were approved by the Institutional Review Board at Washington University in St. Louis. P10 was recruited under studies approved by the Institutional Review Board for Human Use at the University of Alabama at Birmingham (US), in compliance with ethical regulations. P11 was recruited under the Undiagnosed Disease Network protocol 15-HG-0130 approved by the National Institutes of Health Institutional Review Board. For P12, the study protocols were approved by the Pediatric Ethics Committee of Tuscany.

70 °C. Samples were lysed in 2% SDS, 0.1 M DTT, 0.1 M Tris-HCl pH 7.6 by pipetting, heating to 95 °C for 3 min and then removing non-solubilized material. The resulting sample was diluted to 100 μL with 0.1 M Tris-HCl pH 7.6. FASP purification and double digestion were done following the protocol described in (74) with the following alterations: we used Vivaspin500 30k cut-off ultrafiltration devices (Sartorius) and 50 μL aliquots. All other steps, including washes, reduction and alkylation, were identical. 1 μg of endoproteinase LysC (Wako) in 40 μL of 0.1 M Tris-HCl pH 8.5 was added. After overnight incubation at 37 °C, the LysC fraction was collected by centrifugation of the filter units for 20 min. The sample in the filter was then eluted with 40 μL of 0.1 M Tris-HCl pH 8.5 containing 1 μg of trypsin. Following a 4 h digestion, tryptic peptides were collected by centrifugation of the filter units for 20 min.

Data analysis
Basal bodies were automatically quantified from wholemount tracheas, lateral ventricles and oviducts using a macro in FIJI (National Institutes of Health) to count Centrin2-GFP fluorescence intensity maxima in user-defined cells; the script used can be found on Zenodo (92). Cilia lengths for Fig. S9B were measured using NDP.view2 (Hamamatsu Photonics, Japan), while cilia lengths for IF images in Fig. S3, S4 were measured using FIJI's line tool (National Institutes of Health). Data analysis was carried out in Microsoft Excel, GraphPad Prism 9 (version 9.4.1, GraphPad Software, USA) and Matlab. Statistical tests are described in the figure legends and methods sub-sections.
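A rough Python analogue of the basal body counting just described is sketched below; the published quantification used a FIJI macro (archived on Zenodo (92)), so the scikit-image call, the input files and both threshold parameters here are illustrative placeholders rather than the published settings.

import numpy as np
from skimage import io
from skimage.feature import peak_local_max

# Hypothetical inputs: a 2D Centrin2-GFP image (e.g. a max projection) and
# a binary mask outlining one user-defined cell.
img = io.imread("centrin2_gfp.tif").astype(float)
cell_mask = io.imread("cell_roi_mask.tif").astype(bool)

# Local intensity maxima above a floor, separated by a minimum distance so
# that adjacent basal bodies are not merged into a single peak.
peaks = peak_local_max(
    img,
    min_distance=3,                        # pixels; placeholder value
    threshold_abs=np.percentile(img, 99),  # placeholder intensity floor
    labels=cell_mask.astype(int),          # restrict detection to the ROI
)
print(f"basal bodies detected in ROI: {len(peaks)}")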
Imaging
Images for Fig. 1T were acquired using epifluorescent microscopy. Brightfield images were captured on a Hamamatsu Nanozoomer XR (Hamamatsu Photonics, Japan) with 20X and 40X objectives. Confocal Z-stack projections in Fig. 1U,V, Fig. 3A,E,F, Fig. 4K, Fig. S2K-M, Fig. S6 and Fig. S10D were captured on a Spinning Disk Zeiss microscope (Zeiss, Oberkochen, Germany). Confocal Z-stack projections in Fig. S2L,M, Fig. 4H and Fig. 2E,F,I were taken on a Nikon A1+ Confocal (Nikon Europe B.V., Netherlands) with oil immersion 60X or 100X objectives, with 405, Argon, 561, 588 and 640 lasers and GaAsP detectors. 3D reconstructions of images in Fig. S3F and Fig. S4J were captured with an Andor Dragonfly and Mosaic Spinning Disc confocal using Nikon oil 40X or 100X lenses. Fig. 6D-G (regular mode), Fig. S2N,O (lightning mode), Fig. S3G (lightning mode) and Fig. S4A,D (lightning mode) were taken on a Leica STELLARIS 8 Spectral confocal microscope equipped with a White Light Laser enabling tuneable excitation wavelengths between 440-790 nm, with a 405 nm diode laser for UV. Flexible fluorophore detection occurs via two HyD S (Silicon Multi-Pixel Photon Counter) and two Power HyD X (GaAsP Hybrid) detectors (Leica Microsystems UK Ltd, Milton Keynes, UK), with an oil immersion 63X or 100X objective. Basal body quantification data was acquired using the Leica LASX acquisition software with the lightning deconvolution setting. High-speed video microscopy for mouse samples was performed on a Nikon Ti microscope with a 100X SR HP Apo TIRF objective and a Prime BSI camera (A19B204007), imaged at 250 fps. Projections, 3D reconstructions or panels were generated using ImageJ (National Institutes of Health), NIS Elements (Nikon Europe B.V., Netherlands), NDP.view2 (Hamamatsu Photonics, Japan) or Imaris software 9.9 (Oxford Instruments, UK). Final composite images were generated using the FigureJ plugin in ImageJ software (National Institutes of Health, Bethesda, MA, USA) (91), Photoshop or Illustrator (Adobe Systems, San Jose, CA).

Figure 5. Structural environment of disease-causing variants of TUBB4B. (A) Human microtubule doublet cross-section (PDB ID: 7UNG) highlighting microtubule interacting proteins (MIPs) that interact with TUBB4B residues associated with disease. Microtubule doublets are the conserved cytoskeletal element of both primary and motile cilia, consisting of a complete A-tubule with 13 protofilaments and an incomplete B-tubule with 10 protofilaments. (B) Orthogonal views showing disease-causing TUBB4B variants within the ciliary microtubule doublet lattice. The human tubulin isotypes (TUBA1A (purple) and TUBB4B (gold)) were determined based on the human microtubule doublet cryo-EM density map (32) and abundance in human multiciliated respiratory cells by scRNAseq (64). Variant positions are indicated with spheres colored based on their disease association. Only one TUBB4B molecule is shown in the cross-section (right), where R391 is not visible. (C) Interaction of R391 of TUBB4B with the microtubule inner protein CFAP126. (D-E) p.P259 and the loop p.F242-R251 locate at the intradimer interface. (F) p.P358 locates at the taxol binding site, which interacts with multiple microtubule interacting proteins (MIPs) including, for example, PIERCE2.
Animal studies for some lines were performed with approval from the French Ministry of Research (APAFiS #20324) and following ethical principles in the LEAT Facility of Imagine Institute. For all other lines, animals were maintained in an SPF environment and studies were carried out in accordance with the guidance issued by the Medical Research Council in "Responsibility in the Use of Animals in Medical Research" (July 1993) and licensed by the Home Office under the Animals (Scientific Procedures) Act 1986, under project license number P18921CDE, in facilities at the University of Edinburgh (PEL 60/6025).

Sequencing depth was variable, though the majority of libraries generated ≥33M reads (Min: 29.0M, Max: 49.8M, Mean: 40.7M). Data was deposited in GEO under the accession number GSE246488 (https://www.ncbi.nlm.nih.gov/gds/?term=GSE246488).

One-Sentence Summary
Identification of organelle-specific requirements for the TUBB4B β-tubulin isotype in building functional cilia in human health and disease.
2024-04-26T12:04:06.588Z
2024-04-26T00:00:00.000
{ "year": 2024, "sha1": "5ee9bc4e645721f92ca68e0408820717bd1c08be", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "69c4f5d70d4d3ef1a5bd5ac653e437399a1d8046", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14164047
pes2o/s2orc
v3-fos-license
Symplectic genus, minimal genus and diffeomorphisms

In this paper, the symplectic genus for any 2-dimensional class in a 4-manifold admitting a symplectic structure is introduced, and its relation with the minimal genus is studied. It is used to describe which classes in rational and irrational ruled manifolds are realized by connected symplectic surfaces. In particular, we completely determine which classes with square at least -1 in such manifolds can be represented by embedded spheres. Moreover, based on a new characterization of the action of the diffeomorphism group on the intersection forms of a rational manifold, we are able to determine the orbits of the diffeomorphism group on the set of classes represented by embedded spheres of square at least -1 in any 4-manifold admitting a symplectic structure.

§1. Introduction

Let M be a smooth, closed oriented 4−manifold. An orientation-compatible symplectic form on M is a closed two−form ω such that ω ∧ ω is nowhere vanishing and agrees with the orientation. For any oriented 4−manifold M, its symplectic cone C_M is defined as the set of cohomology classes which are represented by orientation-compatible symplectic forms. For any class e ∈ H²(M; Z), its minimal genus m(e) is the minimal genus of a smoothly embedded connected surface representing the Poincaré dual PD(e). The problem of determining the minimal genus has involved many of the important techniques in 4−manifold topology, and it has its origin in the older problem of representing the Poincaré dual of a class by an embedded sphere (see the excellent survey papers and [Kr1] on these two problems). We are here interested in studying both these problems for 4−manifolds with non-empty symplectic cone.

We will introduce the notion of symplectic genus η(e) for 4−manifolds with non-empty symplectic cone. Recall that any symplectic structure ω determines a homotopy class of compatible almost complex structures on the cotangent bundle, whose first Chern class is called the canonical class of ω. Roughly, the symplectic genus η(e) of a class e is given by the formula (e·e + K·e)/2 + 1, where K has the largest pairing against e amongst canonical classes of symplectic structures for which the symplectic area of e is positive. η(e) has many nice properties, among which are (i) invariance under the action of the diffeomorphism group and (ii) bounding the minimal genus from below. We speculate that, for most classes of positive square, the symplectic genus is in fact the minimal genus, at least when b+(M) = 1 (b+(M) is the maximal dimension of a positive definite subspace of H²(M; R)).

The minimal genus, by definition, is non-negative. And it is easy to see that the symplectic genus of a sufficiently large multiple of a class of positive square is positive. However it is not obvious that the symplectic genus of any class of positive square is non-negative. In this paper we prove

Theorem A. Let M be a smooth, closed oriented 4−manifold with non-empty symplectic cone and b+(M) = 1. Then the symplectic genus of any class of positive square is non-negative, and it coincides with the minimal genus for any sufficiently large multiple of such a class.

The proof of Theorem A is not very difficult except when the manifold is a non-minimal rational or irrational ruled manifold. In fact, for this kind of manifold we are able to obtain a much stronger result. Let us explain what such a manifold is.
Let E_M be the set of integral cohomology classes whose Poincaré duals are represented by smoothly embedded spheres of square −1. M is said to be (smoothly) minimal if E_M is the empty set. Any manifold M can be decomposed as a connected sum of a minimal manifold N with some number of CP². Such a decomposition is called a (smooth) minimal reduction of M, and N is a minimal model of M. M is said to be rational if one of its minimal models is CP² or S² × S², and irrational ruled if one of its minimal models is an S²−bundle over a Riemann surface of positive genus. When M has non-empty symplectic cone and is not rational or irrational ruled, M has a unique minimal reduction (see [L1] and [Mc3]).

Using the invariance of the symplectic genus under diffeomorphisms and the Taubes-Seiberg-Witten theory, we are able to show

Theorem B. Let M be a rational or irrational ruled 4−manifold. If e is a class with square at least one, then its symplectic genus is non-negative and computable. Furthermore, if e·e ≥ η(e) − 1, then PD(e) is represented by a connected symplectic surface, and therefore its minimal genus coincides with its symplectic genus.

For classes with square zero and −1 on rational and irrational ruled manifolds, we have similar results. Observe that if PD(e) is represented by an embedded sphere, then m(e) = 0 and therefore η(e) is zero as well. It turns out that this simple fact enables us to completely determine which classes of square at least −1 are represented by a smoothly embedded sphere in any symplectic 4−manifold. When M has non-empty C_M and is not rational or irrational ruled, such a description is known (see [T2], [Mc3] and [L1]). Let N#nCP² be the unique minimal reduction of M; then, if e has square at least −1, PD(e) is represented by a smoothly embedded sphere if and only if e is a generator of one of the CP² summands. For rational and ruled manifolds, we have

Theorem C. Let M be a rational or irrational ruled manifold and e ∈ H²(M) be a class with square at least −1. If η(e) = 0, then PD(e) is represented by a smoothly embedded sphere. Furthermore, if PD(e) is represented by a smoothly embedded sphere, then either η(e) = 0 or e is a non-primitive class of square zero with e = pe′ and η(e′) = 0.

We would like to remark that the proofs of Theorems A, B and C are built out of the work of Taubes on Seiberg-Witten invariants realizing symplectic surfaces, the wall crossing formula for proving the non-triviality of the Seiberg-Witten invariants, and the fact that for minimal manifolds with b+ = 1 it is easy to force symplectic surfaces to be connected. For non-minimal manifolds, we need the additional technical notion of the reduced class.

Beyond determining the set of classes represented by spheres and with square at least −1, we are also able to determine the action of the diffeomorphism group on this set. Let us call a class spherically representable if its Poincaré dual is represented by a smoothly embedded sphere. Let SPH(M) be the set of spherically representable classes and SPH≥−1 be the subset of classes with square at least −1. Obviously Diff(M) acts on SPH(M) and preserves SPH≥−1. We are able to completely determine the orbits of SPH≥−1 under Diff(M). To state the result we need to introduce more notation. We say a class e is of divisibility p if e = pẽ with ẽ a primitive class. Let SPH_{s,p}(M) be the subset of classes in SPH(M) with square s and divisibility p. SPH_{s,p}(M) can be further decomposed depending on the type, i.e. whether a class is characteristic or ordinary.
Recall a class is called characteristic if it is an integral lift of the second Stiefel-Whitney class. Such a class v satisfies v·u = u·u (mod 2) for any class u. A class is called ordinary if it is not characteristic. Define SPH^o_{s,p}(M) and SPH^c_{s,p}(M) to be the subsets of ordinary and characteristic classes in SPH_{s,p}(M). When the group of diffeomorphisms Diff(M) acts on H²(M), it preserves the square, the divisibility and the type. Therefore, Diff(M) acts on SPH^o_{s,p}(M) and SPH^c_{s,p}(M) separately. Remarkably this action is transitive if s ≥ −1. We do not know whether the transitivity continues to hold when s is less than −1. The case s = −2 is particularly interesting and will be the subject of further investigation. We remark that Theorem D will be applied in [L2] to prove that the fiber sums of relatively minimal Lefschetz fibrations are minimal manifolds.

Convention. From now on, when we say an integral cohomology class is represented by a surface, we mean its Poincaré dual is represented by a surface.

The organization of the paper is as follows. In §2, we study which automorphisms of the cohomology lattice of a rational manifold are realized by diffeomorphisms. Based on a characterization by Friedman and Morgan, we give a new characterization in terms of the K−symplectic cones. This new characterization will be used in §4 and is one major new theoretical input in this paper. In §3, we systematically study the symplectic genus and prove Theorems A and B. Most of this section is a series of computational lemmas which give enough case-by-case control to prove the theorems. In §4, we study the problem of representing a class by spheres and determine the action of diffeomorphism groups on SPH(M). Theorems C and D will be proved there.

The authors would like to thank Janos Kollár, Ronnie Lee, Robert Friedman and Gang Tian for helpful discussions. This research is partially supported by NSF and the 973 Program of P. R. China.

§2. Diffeomorphism group of rational manifolds and K−symplectic cones

On a manifold M, each diffeomorphism induces an automorphism of the lattice of the second integral cohomology. Hence there is a natural map from Diff(M) to the automorphism group of the lattice. Let D(M) be the image of this natural map. In other words, an automorphism is in D(M) if it is realized by an orientation-preserving diffeomorphism. We will describe D(M) for both rational and irrational ruled manifolds.

On each irrational ruled manifold M, there is a class (unique up to sign) with square zero whose Poincaré dual is represented by an embedded sphere. It is proved in [FM2] that an automorphism of the cohomology lattice is in D(M) if and only if that class is preserved up to sign. In particular, the −Id automorphism is in D(M).

The case for rational manifolds is rather complicated. Each rational manifold M is of the form CP²#nCP². When n ≤ 9, a result of Wall states that any automorphism is realized by a diffeomorphism. The more difficult case n ≥ 10 requires the concepts of P−cell and super P−cell introduced by Friedman and Morgan [FM1], and a characterization of the diffeomorphism group via these terms. In fact they are not adequate for the purpose of this paper, and we need to consider their partial compactifications and relate them to the K−symplectic cones.

Suppose M is an oriented closed manifold with b+ = 1, b− = n and no torsion in H²(M; Z).
A basis (x, α₁, ..., αₙ) for H²(M; Z) is called standard if x² = 1 and αᵢ² = −1 for each i = 1, ..., n. Let P ⊂ H²(M; R) be the positive cone of classes of positive square, with closure P̄ and boundary B = P̄ − P.

For each class x ∈ H²(M; Z) with x² < 0, we define x⊥ ⊂ H²(M; R) to be the orthogonal subspace to x with respect to the cup product, and we call x⊥ ∩ P the wall in P defined by x. Let W₁ be the set of walls in P defined by integral classes with square −1. A chamber for W₁ is the closure in P of a connected component of P − ∪_{W∈W₁} W. Any point x ∈ P with square 1 at which n mutually perpendicular walls of W₁ meet is called a corner. Any corner is an integral class (see Lemma 2.2 in [FM1]). Suppose C is a chamber for W₁. If x is a corner in C, a standard basis (x, α₁, ..., αₙ) for H²(M; Z) is called a standard basis adapted to C if αᵢ·C ≥ 0 for each i. The canonical class of the pair (x, C) is defined to be κ(x, C) = 3x − Σᵢ αᵢ. Suppose C is a chamber for W₁ and x is a corner in C; we then define a subset P(x, C) of P. Any subset of P of the form P(x, C) is called a P−cell.

Notation. For any U ⊂ P (similarly B, P̄), we will use int_P(U) (similarly int_B(U), int_P̄(U)) to denote U ∩ int(P) (similarly U ∩ int(B), U ∩ int(P̄)). For any V ⊂ P (respectively P̄), we will use V̄ to denote its closure in P (similarly P̄).

The basic properties of P−cells are summarized in the following lemma.

Lemma 2.1.
1. A P−cell is a chamber for the set of walls W₁ ∪ {κ(x, C)⊥ ∩ P}.
2. If P(x, C) = P(x′, C′), then κ(x, C) = κ(x′, C′). Thus for each P−cell P we can assign a unique canonical class of the form κ(x, C), which will be written as κ(P).
3. If ψ is an automorphism of the lattice and P is a P−cell with canonical class κ, then ψ·P is also a P−cell with canonical class ψ(κ).
4. If P and P′ are distinct P−cells, then int_P(P) and int_P(P′) are disjoint.
5. If P and P′ are distinct P−cells, then int_B(P̄ ∩ B) and int_B(P̄′ ∩ B) are disjoint. In other words, the interiors of the B−boundaries of the closures of distinct P−cells are also disjoint.

Proof. The proofs of the first 4 properties can be found in chapter II in [FM1]. Here we prove property 5. Notice that P̄ = P ∪ B. If x is any point in int_B(P̄ ∩ B), then the intersection of any sufficiently small neighborhood in P̄ of x with P is non-empty and is contained in int_P(P). Thus if int_B(P̄ ∩ B) and int_B(P̄′ ∩ B) intersect, then int_P(P) and int_P(P′) overlap, and hence P and P′ are the same P−cells by property 4. Therefore distinct P−cells have disjoint B−boundaries.

It turns out that P−cells are closely related to the K−symplectic cone introduced in [LLiu1]. Let us recall the definition of the K−symplectic cone. A class K ∈ H²(M; Z) is called a symplectic canonical class if it is the canonical class of some orientation-compatible symplectic structure. Let K be the set of symplectic canonical classes. For any K ∈ K we introduce the K−symplectic cone C_K = {[ω] | ω ∈ Ω_K}, where Ω_K is the set of orientation-compatible symplectic forms with K as the symplectic canonical class. For a manifold with b+ = 1 and any K ∈ K, we can in fact determine C_K in terms of a certain subset of E_M (see [LLiu2]). Recall that E_M is the set of integral cohomology classes represented by smoothly embedded spheres of square −1. When there is no confusion we will omit the subscript M. Introduce the set of K−exceptional spheres as E_K = {E ∈ E_M | K·E = −1}. It is proved in Theorem 4 in [LLiu1] that C_K consists of the classes in P which pair positively with every class in E_K. Let Ĉ_K be the closure of C_K in P̄.
It is then not hard to prove that Ĉ_K consists of the classes in P̄ which pair non-negatively with every class in E_K.

In order to link the P−cells and the symplectic cones, we also need to consider good generic surfaces as in [FM1]. A good generic surface X is an algebraic surface such that the anti-canonical divisor is effective and smooth, and such that any smooth rational curve has square no less than −1. All such surfaces are rational surfaces and can be holomorphically blown down to CP² (see I.2 in [FM1]). Let ρ: X → CP² be a holomorphic blow down with exceptional fibers F₁, ..., Fₙ, where each Fᵢ is an embedded rational curve with square −1. Let K_X be the canonical class of X. The surface X has many Kähler metrics. Associated with each such metric is its Kähler form and the associated cohomology class in H²(X; R). The Kähler cone A(X) of X is then the set of all Kähler cohomology classes. By the Nakai-Moishezon criterion, the Kähler cone A(X) consists of all the classes in P which pair positively on any holomorphic curve. Let Â(X) be the closure of A(X) in P̄.

Proposition 2.2. Let X be a good generic surface. Let P₀ be the P−cell containing the class ρ*H. Then P₀ coincides with Â(X), and κ(P₀) = −K_X. Moreover, Â(X) consists of all classes in Ĉ_{K_X} which pair non-negatively with −K_X.

Proof. The first statement is proved in II.3 and II.4 in [FM1]. So we only need to show that Â(X) consists of all classes in Ĉ_{K_X} which pair non-negatively with (−K_X). Since a Kähler form is a symplectic form, the Kähler cone A(X) is certainly a subset of the K_X−symplectic cone C_{K_X}. Therefore Â(X) is a subset of Ĉ_{K_X}. To prove the inclusion in the other direction, we need the characterization of Â(X) given by Proposition 3.4 in [FM1]: Â(X) consists of the classes in P̄ which pair non-negatively with −K_X and with every class in E_hol(X). Here E_hol(X) is the set of classes of embedded rational curves with square −1. With this characterization of Â(X) we just have to show that, on a good generic surface, any class e ∈ Ĉ_{K_X} with e·(−K_X) ≥ 0 is non-negative on any class in E_hol(X). This follows from the obvious inclusion E_hol(X) ⊂ E_{K_X}. The proof is complete.

Since P₀ coincides with Â(X), it is possible that the two sets E_hol(X) and E_{K_X} are the same.

Lemma 2.4. Let M be a rational 4−manifold. For each K ∈ K, there exists a P−cell P_K such that κ(P_K) = −K and Ĉ_K ∩ {e | e·(−K) ≥ 0} ⊂ P_K.

Proof. Suppose X is a good generic surface and M is the underlying rational 4−manifold. By the result in [LLiu1], D(M) acts transitively on K, and the lemma follows from Proposition 2.2. The proof is complete.

Now we introduce the super P−cell, which is defined via a reflection group associated to a P−cell. Suppose γ is a class with square −1 or −2. We can define an automorphism of the lattice as follows:

R(γ)(x) = x − 2((x·γ)/(γ·γ))γ.

This automorphism R(γ) is called the reflection along γ. For a P−cell P define G_P to be the set {α | α² = −1, α ≠ κ(P) and α defines a wall of P}. Let R(P) be the group generated by reflections along classes in G_P. The super P−cell of P is defined as

S(P) = ∪_{ψ∈R(P)} ψ(P).

We will need the following simple fact on reflections.

Lemma 2.5. For any ψ ∈ R(P), ψ(S(P)) = S(P) and ψ(S(P) ∩ B) = S(P) ∩ B.

Proof. For any class x, the statements follow from the reflection formula above by a direct computation.
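As a sanity check of the reflection formula above (and of the direct computation behind Lemma 2.5), one can verify that R(γ) is an integral isometry of the lattice; the following displays are a worked verification supplied here, not formulas recovered from the paper.

% R(gamma) is integral since 2(x.gamma)/(gamma.gamma) is an integer for
% gamma.gamma in {-1,-2}; it sends gamma to -gamma and preserves the form.
\[ R(\gamma)(\gamma) \;=\; \gamma - 2\,\frac{\gamma\cdot\gamma}{\gamma\cdot\gamma}\,\gamma \;=\; -\gamma , \]
\[ R(\gamma)(x)\cdot R(\gamma)(y)
   \;=\; x\cdot y \;-\; \frac{4\,(x\cdot\gamma)(y\cdot\gamma)}{\gamma\cdot\gamma}
   \;+\; \frac{4\,(x\cdot\gamma)(y\cdot\gamma)}{(\gamma\cdot\gamma)^{2}}\,(\gamma\cdot\gamma)
   \;=\; x\cdot y . \]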
Proposition 2.6. Let M be a rational 4−manifold. Any good generic surface X with M as the underlying manifold gives rise to a P−cell of M, denoted by P₀.
1. If φ is an automorphism, then φ(S(P)) is also a super P−cell. In particular, −S(P) is a super P−cell.
2. An automorphism is in D(M) if and only if it preserves the distinguished super P−cell S(P₀) up to sign. Consequently, D(M) is generated by −Id, R(P₀) and the isotropy subgroup of P₀.
3. If int_P(S(P)) ∩ int_P(S(P′)) is not empty, then S(P) = S(P′).
4. If int_B(S(P) ∩ B) and int_B(S(P′) ∩ B) intersect, then S(P) = S(P′).

Proof. The first three properties are in chapter II in [FM1]. The proof of property 4 goes exactly along the lines of the proof of the analogous property for the P−cells in Lemma 2.1. If x is any point in int_B(S(P) ∩ B), then the intersection of any sufficiently small neighborhood in P̄ of x with P is non-empty and is contained in int_P(S(P)). Thus if int_B(S(P) ∩ B) and int_B(S(P′) ∩ B) intersect, then S(P) and S(P′) have overlapping interiors and hence they are the same super P−cells by property 3. The last statement follows immediately from properties 3 and 4. The proof is complete.

In the next proposition we are going to relate the super P−cells ±S(P₀) to the K−symplectic cones.

Proposition 2.7. Define M and X as in Proposition 2.6. Let K₀ be the canonical class of P₀. Then every K ∈ K is of the form ±ψ(K₀) for some ψ ∈ R(P₀).

Proof. This is a consequence of the result in [LLiu1] which states that D(M) acts transitively on K. The positive cone P has two connected components. Let K be a symplectic canonical class such that C_K and C_{K₀} are in the same connected component of P. Since D(M) acts transitively on K, there exists ψ′ ∈ D(M) such that ψ′(K₀) = K. By Proposition 2.6(2), ψ′(S(P₀)) = ±S(P₀). Since ψ′(P₀) is still in the same component as P₀, ψ′(S(P₀)) = S(P₀). Therefore ψ′(P₀) is a P−cell within S(P₀). By the definition of a super P−cell, ψ′(P₀) = ψ(P₀) for some ψ ∈ R(P₀). Therefore K = ψ(K₀). By Lemma 2.1, ψ(P_{K₀}) and P_K have the same canonical class and therefore they are identical. Notice we have shown that every such P_K is of the form ψ(P₀). To prove the inclusion in the other direction, we just need to show that, for any ψ ∈ R(P₀), ψ(P₀) = P_K for some K ∈ K. This is obvious since K = ψ(K₀) is certainly a symplectic canonical class. The proof is finished.

It is mentioned in [FM2] that super P−cells are chambers for the walls given by primitive characteristic classes with square 9 − n. We can in fact show that the set of walls for S(P₀) and −S(P₀) is just the set of the symplectic canonical classes.

Now we are able to present the main result of this section, a characterization of D(M) in terms of K−symplectic cones.

Theorem 2.8. Let M be a rational 4−manifold. An automorphism φ is in D(M) if and only if there are K and K′ in K and
1. either there are classes e ∈ Ĉ_K and e′ ∈ Ĉ_{K′} with e·(−K) > 0 and e′·(−K′) > 0, such that e is mapped to e′ by φ,
2. or there are classes e ∈ Ĉ_K ∩ B and e′ ∈ Ĉ_{K′} ∩ B with e·(−K) > 0 and e′·(−K′) > 0, such that e is mapped to e′ by φ.

Proof. Due to Proposition 2.7, in the first case, we just have to show that e and e′ are in the interior of S(P₀) ∪ −S(P₀). The arguments for e and e′ are exactly the same, so we will only argue for e. By Lemma 2.4, e ∈ P_K. If e is in the interior of P_K, then e is in the interior of S(P₀) ∪ −S(P₀) by Proposition 2.7. e may fail to be in the interior of P_K only when e·E = 0 for some E ∈ E_K. Suppose P_K = ±ψ(P₀) for some ψ ∈ R(P₀); then E = ψ(E₀) for some E₀ ∈ G_{P₀}. By Lemma 2.5, the P−cell obtained by reflecting P_K along E is still in S(P₀) ∪ −S(P₀). Thus we see that e must be in the interior of S(P₀) ∪ −S(P₀).

The proof in the second case is similar. We just have to show that e and e′ are in int_B((S(P₀) ∪ −S(P₀)) ∩ B), and we only have to argue for e. By Lemma 2.4, e ∈ P_K ∩ B. e may fail to be in the interior of P_K ∩ B only when e·E = 0 for some E ∈ E_K. However, by Lemma 2.5, the reflection of P_K ∩ B along E is still in S(P₀) ∩ B. Thus we see that e must be in the interior of (S(P₀) ∪ −S(P₀)) ∩ B. The theorem is proved.
§3. Symplectic genus

We first give the formal definition of the symplectic genus for manifolds with non-empty symplectic cone. For any integral class e ∈ P, we first define a subset of K,

K_e = {K ∈ K | there exists a class τ ∈ C_K such that τ·e > 0}.

We further define a subset of K_e,

K(e) = {K ∈ K_e | K·e ≥ K′·e for any K′ ∈ K_e}.

Definition 3.1. Let K be any class in K(e). The symplectic genus of e is defined to be η(e) = (e·K + e·e)/2 + 1.

We now list some simple properties of the symplectic genus.

Lemma 3.2.
1. The symplectic genus is no bigger than the minimal genus. Furthermore, if a class is represented by a connected symplectic surface, then its symplectic genus is equal to its minimal genus.
2. η(−e) = η(e).
3. For any positive integer p, η(pe) = p(η(e) − 1) + (p(p − 1)/2)(e·e) + 1. In particular, η(pe) ≠ 0 when e·e = 0 and p ≥ 2.
4. The symplectic genus is invariant under the action of the group of orientation-preserving diffeomorphisms.
5. The symplectic genus of any sufficiently large multiple of any class of positive square is positive.

Proof. Property 1 is a consequence of the adjunction inequalities. When b+ > 1, the adjunction inequality in [KM], [MST], [OS] and [T2] asserts that the genus g of any embedded surface representing e satisfies

2g − 2 ≥ |K·e| + e·e (3.1)

for any symplectic canonical class K. When b+ = 1 and e has non-negative square, the adjunction inequality in [LLiu2] asserts that

2g − 2 ≥ K·e + e·e (3.2)

for any symplectic canonical class K ∈ K_e. When e has negative square, inequality (3.2) still holds and is basically proved in §3 in [OS]. We explain here briefly. Suppose ω is a symplectic form whose class τ pairs positively with e, and let K(ω) be its symplectic canonical class. Let s₀ be the canonical Spin^c structure with c₁(s₀) = −K(ω). The class e determines another Spin^c structure, denoted by s₀ − e. Suppose e is represented by an embedded surface of genus h such that 2h − 2 < K(ω)·e + e·e. Then, by Theorem 1.3 in [OS] and the corresponding result in [FS], in a common chamber for both s₀ and s₀ − e which is perpendicular to e, the Seiberg-Witten invariant of s₀ being nontrivial implies that the invariant of s₀ − e is nontrivial as well. The ω−symplectic chamber is such a chamber. Moreover, according to Taubes ([T1]), in this chamber, the Seiberg-Witten invariant of s₀ is nontrivial. Therefore, in the ω−symplectic chamber, the Seiberg-Witten invariant of s₀ − e is nontrivial as well. By another result of Taubes ([T2]), τ·(−e) > 0. This contradicts our assumption, so inequality (3.2) still holds in this case. Therefore, in any case, we have m(e) ≥ η(e). Suppose that e is represented by a genus h symplectic surface with respect to a symplectic form ω. Then ω is positive on this surface. If K(ω) is the symplectic canonical class of ω, then K(ω) ∈ K_e and 2h − 2 = K(ω)·e + e·e. Together with inequalities (3.1) and (3.2), we see that h = m(e) = η(e).

If K is the symplectic canonical class of a symplectic form ω, then −K is the symplectic canonical class of the symplectic form −ω. Therefore,

K(−e) = −K(e). (3.3)

And η(−e) = η(e) is an immediate consequence of equation (3.3).

For any positive integer p, we have

K_e = K_pe and K(e) = K(pe). (3.4)

The formula for η(pe) then follows from equation (3.4) with a straightforward calculation. When e·e = 0, η(pe) is therefore given by p(η(e) − 1) + 1. Evidently it is not divisible by p and consequently cannot be zero if p ≥ 2.

For any automorphism ψ induced by an orientation-preserving diffeomorphism,

ψ(K(e)) = K(ψ(e)). (3.5)

Property 4, then, is an immediate consequence of equation (3.5). The last property follows directly from the definition: let e be a class with positive square; when N is large, Ne·Ne dominates Ne·K for any K ∈ K(e), and therefore Ne has positive symplectic genus. Lemma 3.2 is proved.
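To spell out the "straightforward calculation" for property 3: by (3.4) one may compute η(pe) with the same K ∈ K(e), and substituting e·K = 2η(e) − 2 − e·e from Definition 3.1 gives the displayed formula. This derivation is supplied here; only its endpoints appear in the text above.

\[ \eta(pe) \;=\; \tfrac12\big(pe\cdot K + p^{2}\,e\cdot e\big) + 1
   \;=\; \tfrac12\Big(p\big(2\eta(e) - 2 - e\cdot e\big) + p^{2}\,e\cdot e\Big) + 1
   \;=\; p\big(\eta(e)-1\big) \;+\; \tfrac{p(p-1)}{2}\,e\cdot e \;+\; 1 . \]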
Now we set out to prove Theorem B. The proof requires the notion of reduced classes for non-minimal rational and irrational ruled manifolds (for rational manifolds, it is introduced in [Ki] and [G]). A nice property of this notion is that every class with positive square can be transformed in an explicit way to a reduced class via diffeomorphisms. Thus by Lemma 3.2(4) we only have to show that Theorem B holds for any reduced class e.

To introduce the reduced class let us review the minimal reductions of a rational or irrational ruled manifold. The only minimal rational manifolds are CP² and S² × S². And a non-minimal rational manifold has two kinds of decomposition: it is either decomposed as CP²#nCP² or as S²×S²#(n−1)CP². We will always use the first decomposition and call it a standard decomposition. The picture for irrational ruled manifolds is similar. S²−bundles over a Riemann surface of positive genus are the only minimal irrational ruled manifolds. Fixing the base surface Σ_g, there are two S²−bundles over it, the trivial one S² × Σ_g and the unique non-trivial one S²×̃Σ_g. A non-minimal irrational ruled manifold also has two types of decomposition, either as S² × Σ_g#nCP² or as S²×̃Σ_g#nCP². We will use the first decomposition and call it a standard decomposition.

Let H be a generator of H²(CP²; Z) and E₁, ..., Eₙ be the generators of the CP² summands. Let U and T be the classes in S² × Σ_g represented by {pt} × Σ_g and S² × {pt} respectively. H, E₁, ..., Eₙ are naturally considered as classes in H²(CP²#nCP²; Z) and form a basis. We will call such a basis a standard basis. Similarly, U, T, E₁, ..., Eₙ are naturally considered as classes in H²(S² × Σ_g#nCP²; Z) and form a basis. Such a basis is also called a standard basis. Given such a basis, according to Wall ([W]), an automorphism is called trivial if either it permutes the Eᵢ or it is a reflection along an Eᵢ. It was shown in [W] that trivial automorphisms are in D(M). On CP²#nCP², set K₀ = −3H + Σᵢ Eᵢ; on S² × Σ_g#nCP², set K₀ = −2U + (2g − 2)T + Σᵢ Eᵢ. By the blow up construction (see e.g. [Mc1]), K₀ is a symplectic canonical class.

Definition 3.3. For a non-minimal rational manifold with a standard decomposition CP²#nCP² and a standard basis {H, E₁, ..., Eₙ}, a class e = aH − Σᵢ bᵢEᵢ is called reduced if b₁ ≥ b₂ ≥ ⋯ ≥ bₙ ≥ 0 and a ≥ b₁ + b₂ + b₃. For a non-minimal irrational ruled manifold with a standard decomposition S² × Σ_g#nCP² and a standard basis {U, T, E₁, ..., Eₙ}, a class e = aU + bT − Σᵢ cᵢEᵢ is called reduced if a ≥ 0, c₁ ≥ c₂ ≥ ⋯ ≥ cₙ ≥ 0 and a ≥ cᵢ for any i.
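The algorithm behind Lemma 3.4(1) below repeatedly applies, besides trivial automorphisms, the reflection along the square −2 class H − E₁ − E₂ − E₃. Its explicit effect, reconstructed here from the quantity 2a − (b₁ + b₂ + b₃) used in the proof rather than copied from a recovered display, is:

% Cremona-type reflection; gamma has square -2, so R(gamma)(x) = x + (x.gamma)gamma.
\[ \gamma = H - E_1 - E_2 - E_3, \qquad e' = R(\gamma)(e) = a'H - \sum_i b_i' E_i , \]
\[ a' = 2a - b_1 - b_2 - b_3, \qquad
   b_1' = a - b_2 - b_3, \quad b_2' = a - b_1 - b_3, \quad b_3' = a - b_1 - b_2 , \]
with $b_i' = b_i$ for $i > 3$.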
Reduced classes have the following properties.

Lemma 3.4. Let M be a non-minimal rational or irrational ruled manifold with a standard decomposition and a standard basis.
1. Any class of non-negative square is equivalent to a reduced class under the action of orientation-preserving diffeomorphisms. Moreover we can find such a diffeomorphism by a simple algorithm.
2. For a class with square −1, when b−(M) ≠ 2, it either has reduced form or is equivalent to the class E₁; when b−(M) = 2, another possibility is that it is characteristic, and equivalent to H − E₁ − E₂ in the rational case and to T − E₁ in the ruled case. Similarly, for a class with square −2, when b−(M) ≠ 3, it either has reduced form or is equivalent to the class E₁ + E₂; when b−(M) = 3, another possibility is that it is characteristic, and equivalent to H − E₁ − E₂ − E₃ in the rational case and to T − E₁ − E₂ in the irrational ruled case.
3. If e is reduced, then e·F ≥ 0 for any F ∈ E_{K₀}.
4. If e is reduced, then K₀ ∈ K_e.
5. If e is a reduced class with non-negative square, then K₀ ∈ K(e), and consequently η(e) is given by (K₀·e + e·e)/2 + 1.

Proof. We divide the proof into two cases.

(i) First consider a non-minimal rational manifold with a standard decomposition CP²#nCP² and a standard basis. When n ≤ 9, all the properties have been established (for 1 and 4 see [Li1], for 2 see [LiL2]). So we assume that n ≥ 10.

Property 1. In fact, it was also proved in [Li1]. Since we will use similar arguments to prove property 2, we provide some details here. Suppose e = aH − Σᵢ bᵢEᵢ is a class with non-negative square. First of all, by the trivial automorphisms, we can arrange so that a ≥ 0 and b₁ ≥ b₂ ≥ ⋯ ≥ bₙ ≥ 0. If e is not already reduced and 2a − (b₁ + b₂ + b₃) ≥ 0, then reflecting along H − E₁ − E₂ − E₃ as displayed above gives 0 ≤ a′ < a. From this we see the process can be repeated either to lead to a reduced class or to a class with 2ã − (b̃₁ + b̃₂ + b̃₃) < 0. However, if 2ã < b̃₁ + b̃₂ + b̃₃, then from ã² ≥ Σᵢ b̃ᵢ² and (b̃₁ + b̃₂ + b̃₃)² ≤ 3(b̃₁² + b̃₂² + b̃₃²), we have (b̃₁² + b̃₂² + b̃₃²) ≤ ã² < (3/4)(b̃₁² + b̃₂² + b̃₃²). This is an obvious contradiction. In the latter case of property 2, we easily find that there are only two possibilities up to trivial automorphisms, namely those listed in the statement.

Property 3. Assume that F = tH − Σᵢ sᵢEᵢ, so that e·F = ta − Σᵢ sᵢbᵢ. It was shown in [LLiu2] that F, E₁, ..., Eₙ and H are all represented by connected smooth J−holomorphic spheres for some almost complex structure J. By the positivity of intersections of distinct J−holomorphic curves, t ≥ 0, and sᵢ ≥ 0 unless F = Eᵢ. If F = Eᵢ for some i, clearly we have e·F ≥ 0. If F ≠ Eᵢ, then t > 0 and t ≥ sᵢ ≥ 0. We can divide the bᵢ into t groups, each consisting of no more than three bᵢ. Since sᵢ is no bigger than t, the division can be made such that the bᵢ in each group have distinct indices. The condition of e being reduced implies that a − bᵢ − bⱼ − bₖ ≥ 0 for any i, j, k which are mutually distinct. The property follows.

Property 4. Notice that for any sufficiently small ε, ω_ε = H − ε Σᵢ Eᵢ is the class of a symplectic form with canonical class K₀. Since ω_ε·e > 0 for ε small, we have K₀ ∈ K_e.

Property 5. Since a reduced class e with non-negative square has a positive H term, by the light cone lemma in [LLiu2], the class of a symplectic form is positive on e only when it has a positive H term as well. Therefore, if K is any symplectic canonical class in K_e, then it has a negative H term. By Proposition 2.7, any K ∈ K_e is of the form ψ(K₀) for some ψ ∈ R(P₀). We claim that K·e ≤ K₀·e for any such K ∈ K_e. Once this is established, it is clear that K₀ ∈ K(e) and property 5 follows. Now we proceed to prove the above claim. Write ψ as a composition of reflections along classes in G_{P₀}. This, together with the fact that Fᵢ·Fⱼ ≥ 0 due to positivity of intersections, allows one to write K₀ − K as a non-negative combination of classes Fᵢ ∈ E_{K₀}. Now the claim follows from property 3.

(ii) Now consider a non-minimal irrational ruled manifold with a standard decomposition and a standard basis. Suppose e = aU + bT − Σᵢ cᵢEᵢ is a class.

Properties 1 and 2. We will prove the first two properties together. As we have mentioned, the −Id automorphism is realized by an orientation-preserving diffeomorphism. Therefore we can assume that a ≥ 0.
The easier case is when a = 0. In this case, e·e = −Σᵢ cᵢ². If e has non-negative square, then cᵢ = 0 for each i and e is simply the class bT, which is certainly reduced. If e has square −1, then cᵢ = ±1 for some i and cⱼ = 0 for any j ≠ i. Consider the reflection along [b/2]T − Eᵢ, which maps e to Eᵢ or T + Eᵢ. When n ≥ 1, the reflection along T + E₁ − E₂ maps T + E₁ to the class E₂. If e has square −2, we have cᵢ = cⱼ = 1 for some i ≠ j and cₖ = 0 for any k different from i and j.

When a is strictly positive, we will show that under an orientation-preserving diffeomorphism which is a composition of reflections along classes represented by embedded spheres with square −1, e is equivalent to a class ẽ = aU + b̃T − Σᵢ c̃ᵢEᵢ with a ≥ c̃ᵢ for each i. For any rᵢ ≥ 0 and εᵢ = ±1 to be determined, it is easy to see, via the tube construction, that μᵢ = rᵢT + εᵢEᵢ is represented by an embedded sphere with square −1. Therefore, the reflection along μᵢ is realized by an orientation-preserving diffeomorphism. Since e·μᵢ = arᵢ + εᵢcᵢ, under the reflection the coefficient of Eᵢ changes while a is invariant. We first assume that a is positive. In order for |c′ᵢ| ≤ a, we find that rᵢ and εᵢ should satisfy |cᵢ + 2εᵢarᵢ| ≤ a. Clearly, such a pair (rᵢ, εᵢ) exists; there is a unique solution when cᵢ/a is not an odd integer, and there are two solutions when cᵢ/a is an odd integer. By applying this process for each i, we obtain a desired class ẽ. Notice that ẽ is equivalent to a reduced class under trivial automorphisms. So we have proved that e is equivalent to a reduced class if a > 0.

Property 3. It is an immediate consequence of the fact (see [Bi] or [LLiu1]) that E_{K₀} = {Eᵢ, U − Eᵢ | 1 ≤ i ≤ n}. Indeed, e·Eᵢ = cᵢ and e·(U − Eᵢ) = a − cᵢ, both of which are non-negative because e is reduced.

Property 4. Consider the symplectic forms ω_ε = U + T − ε Σᵢ Eᵢ. Their canonical class is K₀ = −2U + (2g − 2)T + Σᵢ Eᵢ, and for ε small, ω_ε·e > 0 for any reduced class e = aU + bT − Σᵢ cᵢEᵢ. Therefore, K₀ is in K_e.

Property 5. Suppose now e = aU + bT − Σᵢ cᵢEᵢ is a reduced class with non-negative square. Let τ be the class of a symplectic form which is positive on e. Since both a and b are non-negative and one of them is positive, by the light cone lemma, τ must have a positive U term as well. Therefore any symplectic canonical class in K_e is of the form given by the classification above, and the difference K₀·e − K·e can be written as a sum Σᵢ Sᵢ, where each Sᵢ is a product of two factors determined by an odd integer sᵢ. We will show that Sᵢ ≥ 0 for each i. When sᵢ ≥ 3, the two factors of Sᵢ are both non-positive, so Sᵢ is non-negative. When sᵢ = 1, Sᵢ = 0. Finally, when sᵢ ≤ −1, the two factors are both non-negative, and therefore Sᵢ is non-negative. We have finished the proof of property 5 for a non-minimal ruled manifold and hence the proof of Lemma 3.4.

We will now prove a rather general result relating the symplectic genus and the minimal genus of a reduced class, using Taubes' equivalence between Seiberg-Witten invariants and Gromov-Taubes invariants ([T2]). Let us first provide some background on this equivalence (see e.g. [LLiu1] and [T2]). Recall that Seiberg-Witten invariants are defined on Spin^c structures. For manifolds with torsion-free homology, like rational and irrational ruled manifolds, the Spin^c structures correspond to characteristic classes. For this reason, we will simply speak of the Seiberg-Witten invariants of the characteristic classes. Suppose K is a symplectic canonical class; then any class of the form −K + 2e is a characteristic class. The Seiberg-Witten invariant of −K + 2e is defined when its Seiberg-Witten dimension −K·e + e·e is non-negative.
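The dimension quoted here is the standard expected dimension of the Seiberg-Witten moduli space; the following display is textbook material rather than a formula recovered from this paper, and it uses K² = 2χ(M) + 3σ(M) for a symplectic canonical class K.

\[ d(-K+2e) \;=\; \frac{(-K+2e)^{2} - \big(2\chi(M) + 3\sigma(M)\big)}{4}
   \;=\; \frac{K^{2} - 4K\cdot e + 4\,e\cdot e - K^{2}}{4}
   \;=\; -K\cdot e + e\cdot e . \]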
For manifolds with b+ = 1, the Seiberg-Witten invariants also depend on the chambers. In the presence of a symplectic form ω, there is an ω−symplectic chamber. On such a manifold, the Gromov-Taubes invariant of a class e is a suitable count of ω−symplectic surfaces representing e. The surface is not required to be connected, but is required to be embedded, and any component with negative square is an ω−symplectic sphere with square −1. When K is the symplectic canonical class of ω, a fundamental theorem of Taubes states that, if the Seiberg-Witten invariant of −K + 2e in the ω−symplectic chamber is nontrivial, then (i) e is represented by a J−holomorphic curve (possibly singular) for any ω−compatible almost complex structure J; and (ii) the Seiberg-Witten invariant is the same as the Gromov-Taubes invariant of e provided e·E ≥ 0 for any E ∈ E_K.

Proposition 3.5. Let M be a non-minimal rational or irrational ruled manifold with a standard decomposition and a standard basis. Suppose e is a reduced class. If e·e is no less than η(e) − 1, then e·e ≥ 0 and e is represented by a symplectic surface. Moreover, if e is either a class of positive square or a primitive class with square 0, e is represented by a connected symplectic surface, and therefore its minimal genus is given by its symplectic genus.

Proof. We will first prove that e is represented by a symplectic surface. By the definition of the symplectic genus and Lemma 3.4(4), η(e) ≥ (K₀·e + e·e)/2 + 1. Therefore, under our assumption, the Seiberg-Witten dimension of the class −K₀ + 2e satisfies −K₀·e + e·e ≥ 2(e·e + 1 − η(e)) ≥ 0. Now we divide the proof into two cases.

In the case of a rational manifold, for a symplectic form ω with K₀ as its canonical class, so that −K₀ = 3H − Σᵢ Eᵢ, it is shown in [LLiu2] that H is represented by an embedded J−holomorphic sphere for a generic almost complex structure J compatible with ω. Since the reduced class e = aH − Σᵢ bᵢEᵢ has a positive H term, K₀ − e has a negative H term and so (K₀ − e)·H < 0. Therefore, K₀ − e is not represented by a J−holomorphic curve, because the intersection number of two distinct J−holomorphic curves is non-negative. So the Seiberg-Witten invariant of −K₀ + 2(K₀ − e) = K₀ − 2e is trivial by the result of Taubes. By the symmetry of Seiberg-Witten invariants (see Lemma 2.3 in [LLiu1]), the Seiberg-Witten invariant of −K₀ + 2e in the non-symplectic chamber is trivial. By the wall crossing formula of Seiberg-Witten invariants (see [KM] and Lemma 3.3 in [LLiu1]), the Seiberg-Witten invariant SW_ω(−K₀ + 2e) in the ω−symplectic chamber is non-trivial. Since e is reduced, by Lemma 3.4(3), we have e·E ≥ 0 for any E ∈ E_{K₀}. Thus, e is represented by an embedded symplectic surface by the result of Taubes.

In the case of an irrational ruled manifold, by [LLiu2], for any symplectic form ω with K₀ as its canonical class, T is represented by a J−holomorphic sphere for a generic ω−compatible almost complex structure J. Since a reduced class has a positive U term and U·T = 1, we can show that K₀ − e has trivial Seiberg-Witten invariant in the ω−symplectic chamber. Applying Lemma 2.3 and Lemma 3.3 in [LLiu1] as above, and noticing that −K₀ + 2e has a positive U term and that the class γ in Lemma 3.3 in [LLiu1] is just the class T here, we find that the Seiberg-Witten invariant of −K₀ + 2e is nontrivial. Taubes's result and Lemma 3.4(3) can then be applied to show that e is represented by an embedded symplectic surface. We have shown that e is represented by a symplectic surface.
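For the record, the dimension count at the start of the proof can be written out; the following short manipulation from η(e) ≥ (K₀·e + e·e)/2 + 1 is supplied here rather than taken from the paper.

\[ K_0\cdot e \;\le\; 2\eta(e) - 2 - e\cdot e
   \;\Longrightarrow\;
   -K_0\cdot e + e\cdot e \;\ge\; 2\big(e\cdot e + 1 - \eta(e)\big) \;\ge\; 0
   \quad\text{whenever } e\cdot e \ge \eta(e) - 1 . \]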
This surface may have many components. Any component with negative square is a symplectic sphere with square −1. However, since e·E ≥ 0 for any E ∈ E_{K₀}, no such component exists. Thus, e is represented by a symplectic surface all of whose components have non-negative square, and therefore e·e is non-negative. If e·e > 0, there can only be one component by the light cone lemma. If e·e = 0, again by the light cone lemma, there might be several components, all of which are multiples of the same class. All the multiplicities have to be one because of the adjunction formula. Thus, if e is primitive, there is only one component. The proof is complete.

Notice that, as an immediate consequence of Proposition 3.5, the symplectic genus of certain reduced classes is non-negative. In fact, this weaker assertion holds in much greater generality.

Lemma 3.6. Let M be a non-minimal rational or irrational ruled manifold with a standard decomposition and a standard basis.
1. The symplectic genus of any class with positive square or any primitive class with square 0 is non-negative.
2. Any class with square −1 or −2 has non-negative symplectic genus. In addition, the classes which are equivalent to reduced classes have positive symplectic genus, and those which are not equivalent to reduced classes have symplectic genus 0.

Proof. Let e be a class with square at least 0 which is equivalent to a reduced class e′. Due to Lemma 3.2(4), e and e′ have the same symplectic genus. Suppose that the symplectic genus of e is negative; then e·e ≥ −1 ≥ η(e) − 1. By Proposition 3.5, e′ is represented by a connected symplectic surface, and hence the symplectic genus, being the genus of this surface, is non-negative. This is a contradiction.

When e·e = −1, by Lemma 3.4(2), e is either equivalent to a reduced class, or equivalent to E₁, H − E₁ − E₂ or T − E₁. It is easy to see that E₁, H − E₁ − E₂ and T − E₁ are all spherically representable and have symplectic genus zero. Suppose e is a reduced class and η(e) ≤ 0. Since e·e = −1, it satisfies the assumption of Proposition 3.5, and we can conclude that e·e ≥ 0. This contradicts our assumption. Therefore, by Lemma 3.2(4), any class equivalent to a reduced class has positive symplectic genus.

For the case of a class of square −2, the same argument as in the previous paragraph proves that the symplectic genus cannot be smaller than zero. What we still need to show is that there does not exist any reduced class e with symplectic genus 0. Suppose e is such a class. Then by definition there is a K ∈ K such that K·e = 0. In light of Lemma 3.4(4), it is also necessary that K₀·e ≤ 0. We first exclude the case K₀·e < 0. Let K′ be a symplectic canonical class such that C_{K′} and C_{K₀} are in the same component of the positive cone P. Notice that the argument in the proof of Lemma 3.4(5) actually proves that K′·e ≤ K₀·e for any reduced class e. Therefore all such K′ satisfy K′·e < 0. It is clear that any symplectic canonical class is either a K′ or a −K′. Thus, there is no symplectic canonical class K satisfying K·e = 0. This contradiction leaves the case K₀·e = 0 as the only possibility. In this case, the reflection R_e along e preserves E_{K₀} since it preserves K₀. So, if F ∈ E_{K₀}, then F′ = R_e(F) ∈ E_{K₀}, and F′ = F + (e·F)e.
By [LLiu2], for any symplectic form ω ∈ Ω_{K₀}, F and F′ are both represented by smooth J−holomorphic spheres for some generic ω−compatible almost complex structure J, so we have F·F′ ≥ 0 by the positivity of intersections. This fact, together with Lemma 3.4(3), leads to a contradiction. The lemma is proved.

We are ready to prove Theorem B. In fact, we will prove the following more general result.

Theorem B'. Let M be a rational or irrational ruled four−manifold. Suppose e is a class with square at least −1, and in the case that e has square zero, we further assume that e is a primitive class. Then its symplectic genus is non-negative and there is an algorithm to calculate its symplectic genus. Furthermore, if e·e ≥ η(e) − 1, then e is represented by a connected symplectic surface, and therefore its minimal genus coincides with its symplectic genus.

Proof. When M is minimal, M is either CP², S² × S² or an S²−bundle over a Riemann surface. The minimal genus problem for these manifolds has been completely solved. When M is non-minimal, with a choice of a standard decomposition and a standard basis, we can define reduced classes. Suppose e is a class satisfying the conditions of Theorem B'. By Lemma 3.4, under an explicit algorithm, e can be transformed to a reduced class ẽ or to a class e′ which can be represented by a symplectic sphere. The theorem then follows from Proposition 3.5 and Lemma 3.6.

Theorem A. Let M be a smooth, closed oriented 4−manifold with non-empty symplectic cone and b+(M) = 1. Then the symplectic genus of any class of positive square is non-negative, and it coincides with the minimal genus for any sufficiently large multiple of such a class.

Proof. In the rational and irrational ruled cases, by Theorem B', every class with positive square has non-negative symplectic genus. If M is neither rational nor irrational ruled, we examine the minimal case first. Given a class e with positive square and a symplectic form ω, by the light cone lemma, either ω·e > 0 or −ω·e > 0. Let us assume that we are in the first situation. By a result in [Liu], K(ω)·[ω] ≥ 0 if K(ω) is the canonical class of a symplectic form ω. Then, by the light cone lemma, K(ω)·e ≥ 0. Thus it follows directly from inequality (3.2) that the symplectic genus of e is positive.

For the non-minimal case, we claim that one can find K ∈ K_e such that K·e ≥ 0 if e·e ≥ 0. The non-negativity of η(e) then follows immediately. Suppose M = N#nCP² is the (unique) minimal reduction of M. Let E₁, ..., Eₙ be the generators of H² of the n CP² summands. Write e = e_m − r₁E₁ − ... − rₙEₙ, where e_m is the pull back of a class in H²(N; Z), also denoted by e_m. Pick a symplectic form ω_m on N such that ω_m·e_m > 0. Let K_m be the symplectic canonical class of ω_m. Then, as above, we have e_m·K_m ≥ 0. By the blow up construction, for sufficiently small ε, the class [ω_m] − εE₁ − ... − εEₙ is realized by a symplectic form on M with symplectic canonical class K_m + E₁ + ... + Eₙ. Applying the reflections along the Eᵢ, we see that [ω_m] ± εE₁ ± ... ± εEₙ are realized by symplectic forms with symplectic canonical classes K_m ∓ E₁ ∓ ... ∓ Eₙ. For possibly smaller ε, the pairing between e and [ω_m] ± εE₁ ± ... ± εEₙ is positive. Therefore, any symplectic canonical class of the form K = K_m ± E₁ ± ... ± Eₙ is in K_e. Since K_m·e_m ≥ 0, by choosing Eᵢ or −Eᵢ appropriately, we can easily find a K ∈ K_e such that K·e ≥ 0.
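The sign choice in the last step can be made explicit; the following short computation is supplied here (it is not a display from the paper).

% Since E_i . E_i = -1, each term (±E_i).(-r_i E_i) contributes ±r_i.
\[ K\cdot e \;=\; \Big(K_m + \sum_i \varepsilon_i E_i\Big)\cdot\Big(e_m - \sum_i r_i E_i\Big)
   \;=\; K_m\cdot e_m \;+\; \sum_i \varepsilon_i r_i , \qquad \varepsilon_i = \pm 1 , \]
so taking each $\varepsilon_i$ with the sign of $r_i$ gives $K\cdot e \ge K_m\cdot e_m \ge 0$.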
The last statement of the theorem, for a class e satisfying e·E ≠ 0 for any E ∈ E, is a direct consequence of the following two results, together with Lemma 3.2(1). One result is in [LLiu1]: a class of positive square which pairs non-trivially with every class in E is, up to sign, the class of a symplectic form. The other is due to Donaldson (see [D]). It states that, for any sufficiently large integer N, N[ω] can be represented by connected symplectic submanifolds.

Now suppose that e·E = 0 for some E ∈ E. By the result in [L1], there exists a symplectic form ω such that E is represented by an ω−symplectic sphere. Blowing down that sphere, we obtain a new symplectic manifold M′. There is a class e′ in M′ which pulls back to e. It is easy to see that m(le′) ≥ m(le) and η(le′) = η(le) for any integer l. If e′·E′ ≠ 0 for any E′ ∈ E_{M′}, then η(le′) = m(le′) for sufficiently large l. Therefore η(le) = η(le′) = m(le′) ≥ m(le). Together with Lemma 3.2(1) we arrive at the conclusion that η(le) = m(le). If there is still a class E′ ∈ E_{M′} such that e′·E′ = 0, we can continue the process above. However, this process can only be repeated finitely many times. The proof of Theorem A is complete.

We remark that, using some of the arguments in [LLiu1], we are in fact able to get an effective estimate on how large a multiple N is allowed in the last statement of Theorem A. Here we just mention that, in the case of a minimal manifold with b+ = 1 which is neither rational nor irrational ruled, it suffices to take N = 2|e·K|/e², where ±K are the only two symplectic canonical classes. In particular, when a manifold with b+ = 1 has a torsion symplectic canonical class, we are able to conclude that the minimal genus of every class e with positive square coincides with its symplectic genus (which is simply (e·e)/2 + 1). Such manifolds include the Enriques surface, hyperelliptic surfaces, and any torus bundle over the torus which has b+ = 1. In addition, from the results in [LiL4], [Li1] and [Kr2], manifolds with the property that the two genera coincide for any class of positive square include minimal irrational ruled manifolds, rational manifolds with b− ≤ 9 and the product of a circle with a fibered 3−manifold Y with b₁(Y) = 1.

We close this section with another remark. There are classes of positive square which do not satisfy the conditions of Theorem B but still have the same symplectic genus and minimal genus. Some of them are actually represented by connected symplectic surfaces. For any positive integer a bigger than 4, consider the reduced class aH − Σᵢ₌₁^{a²−1} Eᵢ. Its square is 1 and its symplectic genus is (a² − 3a + 2)/2. If we blow up a² − 1 points on a smooth curve of degree a, then the proper transform is a smooth curve in this given class. Others, including some classes in the non-trivial S²−bundles over Riemann surfaces, are not known to be represented by connected symplectic surfaces. To deal with such classes, we may need to find more constructive techniques as in [Li1].

§4. The classes represented by spheres

In this section we determine the set of classes represented by spheres and the orbits of Diff(M) on this set. We start with

Theorem C. Let M be a rational or irrational ruled manifold and e ∈ H²(M) be a class with square at least −1. If η(e) = 0, then PD(e) is represented by a smoothly embedded sphere. Furthermore, if PD(e) is represented by a smoothly embedded sphere, then either η(e) = 0 or e is a non-primitive class of square zero with e = pe′ and η(e′) = 0.

Suppose the symplectic genus η(e) is zero.
Again, there is a reduced class ẽ with the same square, the same divisibility, the same symplectic genus and the same minimal genus. Applying Proposition 3.5 to ẽ, together with Lemma 3.2(3), which excludes the case when ẽ is a divisible class with square zero, we conclude that m(ẽ) = 0. Therefore, m(e) is zero as well. Finally, we deal with the case that e · e = −1. By Lemma 3.6(2), either e has positive symplectic genus, or η(e) = 0 and e is spherically representable. When η(e) > 0, e is not spherically representable, due to Lemma 3.2(1). Thus, e is spherically representable if and only if η(e) = 0. The proof is finished.

For the convenience of the proof of Theorem D, we state the following corollary.

Corollary 4.1. Let M be a rational or irrational ruled 4-manifold. Suppose e is a class with positive square or a primitive class with square zero. Then the following statements are equivalent: 1. e is represented by a smoothly embedded sphere; 2. η(e) = 0.

are all spherically representable. Moreover, the spheres representing them can be chosen to be pairwise disjoint. The first claim now follows from the elementary fact: if A_1 and A_2 are represented by two spheres which intersect at most at one point, then A_1 + A_2 is spherically representable. To prove the last claim, we need the following two results.

Proposition 4.3. Up to automorphisms of H_2, the set of spherically representable classes with non-negative square is as given above.

Lemma 4.4. Let ω be a symplectic form with symplectic canonical class K. 1. Any class R with positive square which is represented by an ω-symplectic sphere is in Ĉ_K and satisfies R · (−K) > 0. 2. Any R with square 0 which is represented by an ω-symplectic sphere is in C_K ∩ B and satisfies R · (−K) > 0.

Since H_2(M; Z) has the same decomposition with respect to the basis ⟨H − E_1, H, E_2, ..., E_n⟩, there is an automorphism of H_2(M; Z) sending y to H − E_1. The non-primitive case follows immediately.

Proof of Lemma 4.4. For any ω-compatible almost complex structure J, the ω-symplectic sphere representing R can be taken to be J-holomorphic. Moreover, for a generic ω-compatible almost complex structure J, any E ∈ E_K is represented by a smooth J-holomorphic sphere. Then R · E ≥ 0 for any E ∈ E_K by the positivity of intersection of pseudo-holomorphic curves. Thus, when R has positive square, it is in Ĉ_K, and when R has square zero, it is in C_K ∩ B. In either case, by the adjunction formula, R · (−K) = 2 + R · R ≥ 2. The lemma is proved.

For an irrational ruled manifold, the only spherically representable classes with non-negative square are ±kT. From Theorem C we can list all the possible orbits with non-negative square. For a given square, when there is more than one orbit, the orbits are distinguished by divisibility.
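As a sanity check on the example class from the closing remark of §3, the adjunction formula recovers both its square and its genus. The computation below is supplied by the editor; it assumes the standard canonical class K = −3H + E_1 + ... + E_{a^2−1} of CP^2 blown up at a^2 − 1 points, which is consistent with, though not stated in, the text.

\[
e = aH - \sum_{i=1}^{a^2-1} E_i, \qquad e \cdot e = a^2 - (a^2 - 1) = 1, \qquad K \cdot e = -3a + (a^2 - 1),
\]
\[
g(e) = \frac{K \cdot e + e \cdot e}{2} + 1 = \frac{a^2 - 3a}{2} + 1 = \frac{(a-1)(a-2)}{2},
\]

which is exactly the genus of a smooth plane curve of degree a, as it must be for the proper transform described above.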
2014-10-01T00:00:00.000Z
2001-08-31T00:00:00.000
{ "year": 2001, "sha1": "a71fa7d06c9033e43b2cc13e5e35e8a54ad7624f", "oa_license": null, "oa_url": "https://www.intlpress.com/site/pub/files/_fulltext/journals/ajm/2002/0006/0001/AJM-2002-0006-0001-a007.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ecebda18608f2c800a5cbfb14d612903e33b9c5e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
59272973
pes2o/s2orc
v3-fos-license
Diatom species composition and indices for determining the ecological status of coastal Mediterranean Spanish lakes

Diatom indices have been used and tested mainly for assessing the ecological status of rivers and deep lakes, but there are scarce studies that determine their effectiveness in shallow lakes and in coastal Mediterranean lakes. This study evaluates the validity of several common diatom indices (SPI, BDI, CEC and TDIL) for the determination of the ecological quality of three coastal lakes (Valencia, Spain) and presents descriptions and ecological data of the main diatom species recorded. Diatom samples were collected from the phytobenthos, both from the epiphyton of the dominant submerged macrophytes and from the sediment. The ecological status of the systems was determined according to different physico-chemical variables and was compared with the results obtained from the epiphytic diatom communities. The results showed discrepancies among diatom indices and also with the status determined by the environmental variables. The effectiveness of the indices depended on the number of species assessed by each index with respect to the total species recorded and on the suitability of the weight assigned to each species. The results reveal the need to gather more information about the composition and ecology of the diatoms and microalgae characteristic of coastal Mediterranean standing waters. This work contributes to their better knowledge.

INTRODUCTION

The Water Framework Directive requires the European Union countries to monitor and control the ecological status of their water bodies, as an essential way to protect, improve and conserve the EU aquatic systems (WFD, Directive 2000/60/EC). Among the proposed bioindicators, benthic diatoms are one of the most relevant groups for the monitoring and control of the ecological status of aquatic ecosystems. These organisms are used in bioindication studies by means of autoecological indices based on the relative abundance of each taxon, their sensitivity to environmental factors and their ecological distribution (Della Bella & Mancini, 2009; Cejudo-Figueiras, 2011). Most of the implemented protocols using diatom indices for the assessment of water quality have been focused on rivers and deep lakes, mainly located in Central Europe (DOCE, 2008). However, there is less information about their application to shallow lakes (Kitner & Poulickova, 2003; Blanco & al., 2004; Stenger-Kovács & al., 2007), and particularly to Mediterranean shallow lakes, even though there is evidence that their ecology differs from that reported in the temperate zone (Romo & al., 2004; Moss & al., 2004). Moreover, among the indices there is a wide heterogeneity in their databases and in the diatom species that they consider. Furthermore, their effectiveness varies depending on the ecosystem type and the eco-region (Cejudo-Figueiras, 2011; Álvarez-Blanco & al., 2011, 2012). Nowadays, the information about the use of diatom indices in Mediterranean standing waters is scarce. Recently, the approach of adapting diatom indices commonly used in rivers to establish the ecological status of some inland waters in the Iberian Peninsula has been carried out (Blanco & al., 2004; Cejudo-Figueiras & al., 2010). However, similar studies of diatom index application in coastal lakes are lacking. For example, coastal water bodies have higher salinity ranges than inland freshwaters, as well as different morphology and environmental impacts on their catchment areas.
This work tests the usefulness of several well-known diatom indices in some coastal shallow lakes. The study is a first approach to the evaluation of their suitability in the determination of the ecological status in this type of ecosystems for the implementation of the WFD. Furthermore, it reports detailed taxonomic descriptions and ecological data of the main diatom species, in order to contribute to the knowledge of the characteristic diatom flora inhabiting this type of ecosystems.

MATERIAL AND METHODS

The study was carried out in three interdunar, shallow and permanent lakes located in the Natural Park of Albufera and named Mata del Fang (MFPNS) and Lagunas del Canyar (CNP1 and CNP3) (Fig. 1). Their hydrological dynamic is characteristic of this type of Mediterranean water bodies, with seasonal fluctuations in the water level mainly during summer and water inputs from rain, groundwater and saline spray. They are surrounded by helophytic vegetation (Phragmites, Typha, Juncus and Scirpus) and their bottom was covered by charophyte meadows growing on a sand-silty substrate.

The ponds were sampled on 15th October 2009. An integrated water sample from each pond was collected by sampling and mixing the water from several points across the pond, and subsamples were used for water chemical analyses and other determinations. Depth, temperature, pH, conductivity and oxygen concentration in the water were determined in situ. Dissolved nitrogen and phosphorus concentrations, as well as phytoplanktonic chlorophyll a, were analyzed according to standard methods (APHA, 1992).
Samples of epiphytic and benthic microalgae were also collected. Epiphyton samples were taken from the dominant macrophytes (Chara spp.). Epiphytic algae were detached from the plant by shaking in a standardized water volume that was fixed with lugol solution for further studies and taxonomic identification (Zimba & Hopson, 1997). Benthic algae were sampled by collecting a total of 20 sediment surface cores (upper 2 cm) taken randomly all over each pond. The cores were integrated into one sample per pond and frozen for later analyses. Diatom samples were digested with hydrogen peroxide according to standard protocols (UNE-EN 13946, 2004). Identification and counting were made from permanent preparations mounted with Naphrax®, and at least 400 valves were counted for each sample at ×1000 magnification. In addition, scanning electron microscope (SEM) analyses were carried out for accurate identification of some species. Morphometry of the dominant diatoms was determined by measuring length and width of a minimum of 20 individuals. The determination of the different taxa was carried out following specific references: Germain, 1981; Krammer & Lange-Bertalot, 1991a, b, 1997a, b, 2000; Prygiel & Coste, 2000; CHD, 2010.

The diatom indices SPI (Specific Polluosensitivity Index, CEMAGREF, 1982), BDI (Biological Diatom Index, Lenoir & Coste, 1996) and CEC (Commission of the European Communities Index, Descy & Coste, 1990) were calculated for the epiphyton by means of the software OMNIDIA version 5.1 (Lecointe & al., 1993). Only those taxa with relative abundance higher than 5% were considered in their calculation. The TDIL index was calculated according to Stenger-Kovács & al. (2007). These indices were selected for their wide use in Spain, as well as in other European countries (Blanco & al., 2008), and TDIL for being specific to shallow lakes. The ecological status of each pond was also assessed by physico-chemical metrics according to their habitat typology of permanent, coastal, interdunar lakes (BOE, 2008; CEDEX, 2010a, b).

RESULTS

The studied lakes were oligohaline, with almost saturated oxygen concentrations, neutral-alkaline pH and water transparency reaching the bottom (Table 1). Phosphate values were low and similar in the three ponds, while nitrate was the main form of soluble inorganic nitrogen (Table 1). Ammonium and nitrite were below the detection level of the analytical methods. Mata del Fang (MFPNS) recorded the highest values of nitrate and phytoplanktonic chlorophyll a (Table 1). CNP1 had concentrations of phytoplanktonic chlorophyll a slightly higher than CNP3, despite their proximity (Fig. 1) and similar nutrient concentrations (Table 1).

The ecological status assessment of the ponds using threshold values for pH, conductivity and, mainly, phytoplanktonic chlorophyll a concentration is shown in Table 2.
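Before comparing the indices, it is worth recalling how they are computed: indices of the SPI/BDI family are, at their core, Zelinka-Marvan weighted averages that combine each taxon's relative abundance with a tabulated sensitivity and indicator value. The sketch below illustrates that calculation together with the 5% abundance cutoff mentioned above; the species counts and scores are invented placeholders, not values from OMNIDIA or from this study.

```python
# Minimal sketch of a Zelinka-Marvan style diatom index with a 5% abundance cutoff.
# Scores (sensitivity s_j, indicator value v_j) are hypothetical placeholders.

counts = {"Seminavis pusilla": 180, "Rhopalodia gibba": 150, "Encyonema sp.": 70}
scores = {"Seminavis pusilla": (3.8, 2.0), "Rhopalodia gibba": (4.2, 1.0)}

total = sum(counts.values())
# Relative abundance per taxon, keeping only taxa above the 5% threshold.
abundant = {sp: n / total for sp, n in counts.items() if n / total > 0.05}

num = sum(a * s * v for sp, a in abundant.items()
          if sp in scores for s, v in [scores[sp]])
den = sum(a * v for sp, a in abundant.items()
          if sp in scores for _, v in [scores[sp]])
index = num / den  # weighted mean sensitivity of the scored taxa

coverage = sum(a for sp, a in abundant.items() if sp in scores)
print(f"index = {index:.2f}, scored fraction of counted valves = {coverage:.0%}")
```

The coverage figure mirrors the observation made below: an index's reliability drops when its database covers only a small fraction of the counted valves.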
The diatom indices showed discrepancies with the ecological status determined by the environmental variables (Tables 2, 3). Furthermore, the quality assessment also differed between the indices, mainly depending on the number of recorded diatom species considered by each index in its calculation (Table 3). The most efficient diatom indices, and those which considered the highest number of the recorded diatom species, were SPI and BDI (Table 3). However, compared to the environmental metrics, they seemed to overestimate water quality for MFPNS, while underestimating that of CNP3 (Tables 2, 3). By contrast, the CEC and TDIL indices were less efficient and took into account lower percentages of the identified diatom species (Table 3).

Diatom species description

A total of 63 species of diatoms belonging to 30 genera were identified in the epiphytic and benthic communities of the three sampled lakes. The identification was carried out to species level in 60 cases and to genus level in 3 cases (Table 4). In general, most of the identified species had ecological requirements related to alkaline pH, tolerance to salinity and a wide range of nutrient concentrations. Remarkable was the abundance and presence in the three study lakes of species related to brackish environments, such as Halamphora cf. sydowii and the species belonging to the genus Mastogloia, as well as other species, such as Seminavis pusilla, Fragilaria gracilis and Nitzschia elegantula (Table 4). In the epiphyton, Seminavis pusilla co-dominated in abundance, together with Rhopalodia gibba var. gibba in CNP3 and with Encyonema sp. in CNP1 and MFPNS (Fig. 2).

Fig. 2. Percentage of relative abundance of the main epiphytic and benthic diatoms identified in the study. Abbreviated taxa names according to Table 4.

The diatom communities in the sediment had a species composition and abundance distribution similar to that of the epiphyton in CNP1, while in CNP3 and MFPNS there were more differences, although most of the recorded species appeared in both substrates (Fig. 2, Table 4). In MFPNS the benthic microalgal community was dominated by Pseudostaurosira brevistriata, while several Halamphora species predominated in CNP3.

A catalogue of the main diatom species found with a relative abundance higher than 5% is presented below, together with a description of their taxonomic characteristics. Light (LM) and scanning electron microscope (SEM) photographs were taken from the epiphytic and benthic samples of the aquatic study systems to illustrate each taxon. The biometric ranges provided correspond to individuals measured in the samples.

Ecology and distribution: characteristic of brackish waters, widely distributed in coastal waterbodies of the Mediterranean. In this study, recorded in all the lakes (Table 4).

Ecology and distribution: broad distribution; circumneutral, fresh-brackish, oligo-eutrophic, β-mesosaprobic waters. In this study, it was observed in CNP3 and MFPNS, mainly living on the vegetation.

Ecology and distribution: in this study, it was mainly observed in the epiphyton and also in the sediment of CNP1 and MFPNS (Table 4).

Ecology and distribution: alkalophilic; fresh-brackish, mesotrophic, oxygenated waters. In this study, it was recorded in CNP1 (epiphyton and sediment) and in low numbers in the epiphyton of MFPNS (Table 4).

Ecology and distribution: circumneutral; fresh-brackish, eutrophic, α-mesosaprobic waters. In this study, it was mainly present in the epiphyton of MFPNS, CNP1 and CNP3 (Table 4).
Ecology and distribution: periphytic, alkalophilic, tolerant to a wide range of conductivity; oligosaprobic, oligo-eutrophic waters. Frequently cited in the Iberian Peninsula. In this study, it was abundant in the sediment of MFPNS (Table 4, Fig. 2).

Ecology and distribution: epiphytic; lotic and lentic brackish waters, typically considered euryhaline. Cited in different water bodies of the Spanish Mediterranean coast. In this study, recorded in the three lakes (Table 4).

Ecology and distribution: benthic, alkalophilic; fresh-brackish, oligo-eutrophic, β-mesosaprobic waters. Broad distribution. In this study, it was observed in the epiphyton of CNP1 and was abundant in that of MFPNS (less present in its sediment) (Table 4).

Length 10-14.6 µm, breadth 3.4-4 µm. Linear-elliptical valves with rounded apices. Filiform raphe with marked central nodules. Axial area narrow-linear; central area irregular, with a centrally spaced stria on both sides. Bluntly ended striae, with a radial pore distribution in the center and parallel-convergent at the ends; striae do not reach the apices. SEM showed a polar raphe fissure and apical pores along the margin.

Navicula sp. Bory de Saint-Vincent

Valves lanceolate with rounded apices and dense, fine, barely marked striae, 19-21/10 µm. Axial area very narrow; small, rounded central area. Central, filiform, straight raphe, with proximal ends slightly bent to one side and marked central nodules. Parallel striae at the ends, and striae slightly shorter and radial in the center.

Ecology and distribution: in this study, it was observed in the sediment of CNP3 (Table 4).

Length 26-31.1 µm, breadth 5.4-6.6 µm. Valves moderately dorsiventral, with a convex dorsal margin and a ventral margin slightly convex in the center. Rounded ends. Raphe almost central, straight, with the distal fissures curved towards the ventral margin and the proximal ends towards the dorsal margin; central nodules marked. Thin hyaline area along the axial axis, with irregular margins and slightly wider at the center. Marked striae, 16-17/10 µm (dorsal and ventral), parallel at the ends and radial in the center (alternation of short and long striae). Pore distribution visible by SEM.

Seminavis pusilla (Grunow)

Ecology and distribution: temperate, epicontinental coastal waters (Germain, 1981). In this study it was abundant, mainly in the epiphyton of the three study lakes (Table 4, Fig. 2).

Cells with dorsiventral valves, wider dorsally in the center, with sharply curved hook-shaped ends. Marginal raphe system on the dorsal side. Ventral margin straight; dorsal margin bent in the central area, usually with a small nick in the middle marking the position of the proximal raphe endings. Parallel costae in the center and radial at the ends, with two rows of pores between transapical costae.

Ecology and distribution: characteristic species of brackish waters. In this study, it was observed in the three study lakes, being abundant in the sediment of CNP3 (Table 4, Fig. 2).

Striae 12/10 µm, composed of two rows of pores, which form a ring at the valve contour (SEM). Central region smooth or slightly undulated; fine radial striae; 1-4 fultoportulae visible under the light microscope. Small peripheral spines visible by SEM.
DISCUSSION

The diatom indices SPI, BDI and CEC are widely used to assess water quality, mainly in rivers, both in Europe and in the Iberian Peninsula, and only recently has their suitability for lentic aquatic systems been tested (Blanco & al., 2004; Cejudo-Figueiras & al., 2010). The TDIL index was specifically designed to be applied in shallow lakes of Central Europe (Hungary) (Stenger-Kovács & al., 2007). However, coastal water bodies have higher salinity ranges and a different morphology and watershed impacts. In the present study, several problems were detected when these indices were tested in coastal shallow lakes. The main pitfall was the still limited knowledge gathered about the diatom flora and their ecological requirements in coastal Mediterranean standing waters (Trobajo, 2005; Cantoral-Uriza & Aboal, 2008). In our study, the effectiveness of the indices depended on the number of species assessed by each index with respect to the total species recorded and on the suitability of the weight assigned to each species. For instance, several of the dominant species in this study, which have also been reported in other Mediterranean water bodies (e.g. Mastogloia spp.; Tomàs, 1982), were omitted by the databases of the indices tested. In addition, one of the most representative species in our study, Seminavis pusilla, was only included by the SPI and TDIL indices. This could also explain the observed discrepancy between indices, as a result of the different species lists and weights given to diatom species by every single index (Table 3). The fact that the identification of some species remained taxonomically challenging, such as Encyonema sp., which was dominant in CNP1 and MFPNS, constitutes an additional problem in achieving the fine taxonomic accuracy required by the indices. Some of these problems have also been described by Blanco & al. (2004), when they used the SPI and BDI indices for the water quality evaluation of some inland shallow lakes of the Northwest Iberian Peninsula. Nevertheless, these authors found a good correlation between these indices and the lakes' trophic status, based on the distribution of diatom abundances and trophic preferences. Therefore, to use diatom indices more extensively in coastal lentic systems, further studies should be carried out on the description and autoecology of their characteristic diatom flora.

The ecological requirements of algal species may vary depending on the eco-region, and thus the use of any diatom index must be calibrated for different ecosystems (Álvarez-Blanco & al., 2011, 2012). The design of new and specific indices for different eco-zones is a new approach (e.g. the DDI index, Álvarez-Blanco & al., 2012). This alternative approach, even though it is time- and investment-consuming, might be more accurate for the ecological evaluation of coastal water bodies of the Mediterranean zone.

It is remarkable that a large number of the diatom species recorded in this study (38 in total, highlighted in bold in Table 4) have also been cited by other authors in coastal Mediterranean Iberian environments (lagoons, marshes, ponds, river mouths and streams) (Tomàs, 1982; Aboal, 1989; Trobajo, 2005; Cantoral-Uriza & Aboal, 2008; Rovira & al., 2009). This set of species could be a starting point for further studies, in order to define their role as bioindicators in coastal water bodies and for the design of new diatom indices. Therefore, more studies on the ecology and algal flora are needed to characterize, evaluate and implement the WFD in these ecosystems.
In general, the diatom species found in this study had a wide tolerance range to several physico-chemical variables, which agrees with the dynamic nature of coastal water bodies, adapted, for instance, to water level and environmental changes (Trobajo, 2005; Della Bella & Mancini, 2009). The limnological variables evaluated in the present study complemented the diagnosis given by the diatom indices and proved especially helpful to discriminate between possible water quality statuses (Tables 2, 3). This result agrees with that described by other authors who argued for the use of several metrics for the correct water quality evaluation of shallow lakes (Kitner & Poulícková, 2003; Moss & al., 2003).

In conclusion, our results suggest that diatom bioindication could be a useful tool for the determination of the ecological status of coastal shallow lakes, together with environmental metrics. However, it would be necessary to modify the existing indices or create new ones based specifically on the autecology and distribution of diatom species along the gradient of environmental conditions characteristic of the Mediterranean eco-zone. Therefore, it is important to gather more information about the composition and ecology of the microflora and diatoms of these ecosystems.

Table 1. Morphometric and physico-chemical characteristics of the study lakes.

Table 2. Ecological status assessment of the study lakes based on some environmental variables according to CEDEX (2010a, b).

Table 3. Diatom indices SPI, BDI, CEC and TDIL, with indication of the percentage of identified epiphytic diatom species considered by each index and the ecological status assessment for the study lakes.

Table 4. List of diatom species identified in the study. An asterisk indicates diatom species used by the indices' databases (SPI, BDI and TDIL). Relative abundance (percentage) of each species in the epiphyton and sediment. Diatom species cited in other coastal Mediterranean environments are highlighted in bold (see discussion section).
2018-12-07T21:56:37.211Z
2013-12-30T00:00:00.000
{ "year": 2013, "sha1": "f22930b4e4b0f4f7ba7dac4f2b8931111c4a1973", "oa_license": "CCBY", "oa_url": "https://rjb.revistas.csic.es/index.php/rjb/article/download/401/396/399", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "82851fc8ad07c21c5a2a4a11a1c5dbf02289eb81", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
58901894
pes2o/s2orc
v3-fos-license
Development of a New Class of Thiocyanate-Free Cyclometalated Ruthenium(II) Complex for Sensitizing Nanocrystalline TiO₂ Solar Cells

We designed and developed a new class of thiocyanate-free cyclometalated ruthenium sensitizers for sensitizing nanocrystalline TiO₂ solar cells. This complex shows an appreciably broad absorption range. Anchoring to nanocrystalline TiO₂ films for light-to-electrical energy conversion in regenerative photoelectrochemical cells achieves efficient sensitization of the TiO₂ electrode. With this new sensitizer, a power conversion efficiency of 4.76%, a short-circuit photocurrent density of 11.21 mA/cm², an open-circuit voltage of 0.62 V, and a fill factor of 0.68 were obtained under standard AM 1.5 sunlight.

1. Introduction

A molecular system that consists of a wide-band-gap semiconductor photoanode, typically TiO₂, an anchored molecular photosensitizer, a redox electrolyte, and a platinized photocathode is called a dye-sensitized solar cell (DSC) [1-5]. Among these elements, the sensitizers play a vital role in DSC. Many Ru-complex sensitizers [6-16] and organic sensitizers have been developed for DSC [17]. So far, sensitizers such as black dye, N719, and N3 are known as the best sensitizers in DSC. Black-dye-sensitized nanocrystalline TiO₂ solar cells yield solar-to-electric power conversion efficiencies of over 11% under standard AM 1.5 conditions [12, 13]. Much effort has been made to increase the photovoltaic performance and stability of devices through the development of sensitizers, electrodes, and photoanode materials. One way to improve the stability is the development of a dye without thiocyanate (SCN) donor ligands, because monodentate SCN is believed to provide the weakest dative bonding within the metal complexes, making the sensitizer unstable. A few efforts have been made to replace the SCN donor ligands with effective pyridyl pyrazolate chelating chromophores [18] and 2,4-difluorophenyl pyridinato ancillary ligands [19]. More recently, cycloruthenated compounds have been used as sensitizers for efficient DSC devices [19-24]. Although the preliminary attempts gave only limited success [20-24], a superior power conversion efficiency has now been achieved with a novel thiocyanate-free cyclometalated sensitizer [19]. However, the development of new sensitizers remains a challenging issue for DSC to improve the efficiency. Here, we report on a new class of thiocyanate-free cyclometalated ruthenium(II) complex for sensitizing nanocrystalline TiO₂ solar cells.

2. Experimental

2.1. Materials. All the solvents and chemicals were of reagent grade and used as received unless otherwise noted. Chromatographic purification was performed by gel permeation on Sephadex LH-20 (from Sigma).

2.2. Synthesis of Complex HIS1. cis-Dichlorobis(4,4′-dicarboxy-2,2′-bipyridine)ruthenium (180 mg, 0.27 mmol) and 5-phenyl-3-(trifluoromethyl)-1H-pyrazole (117 mg, 0.55 mmol) were dissolved in ethylene glycol (30 mL), and the reaction mixture was heated to 170 °C under argon for 2 h. Then tetrabutylammonium hydroxide (1.1 g, 1.37 mmol) was added to the reaction mixture, which was further heated to 170 °C under argon for 2 h. After evaporating the solvent, the resulting solid was dissolved in water (15 mL) and titrated with 0.2 M HNO₃ to pH 3.8. The reaction mixture was kept in a refrigerator overnight and then allowed to warm to 25 °C.
The resulting precipitate was collected on a sintered glass crucible by suction filtration. The solid was dissolved in a basic water solution (pH 10-11) and purified on a Sephadex LH-20 column by eluting with water. Yield: 167 mg. ¹H NMR (CD₃OD with a drop of NaOD): δ

2.3. Fabrication of Dye-Sensitized Solar Cell. A nanocrystalline TiO₂ photoelectrode of 20 µm thickness (area: 0.25 cm²) was prepared by screen printing on conducting glass as previously described [25]. The films were further treated with 0.05 M TiCl₄ and 0.1 M HCl aqueous solutions before examination [26]. Coating of the TiO₂ film was carried out by immersing it for 45 h in a 3 × 10⁻⁴ M sensitizer solution in acetonitrile/tert-butyl alcohol (1/1, v/v). Deoxycholic acid (20 mM) was added to the dye solution as a coadsorbent to prevent aggregation of the dye molecules [27, 28]. Photovoltaic measurements were performed in a two-electrode sandwich cell configuration. The dye-deposited TiO₂ film and a platinum-coated conducting glass were used as the working electrode and the counter electrode, respectively. The two electrodes were separated by a Surlyn spacer (40 µm thick) and sealed by heating the polymer frame. The electrolyte was composed of 0.6 M dimethylpropylimidazolium iodide (DMPII), 0.05 M I₂, 0.3 M TBP, and 0.1 M LiI in acetonitrile.

3. Results and Discussion

Scheme 1 shows the synthetic approach for the synthesis of the thiocyanate-free cyclometalated ruthenium(II) complex HIS1.

The absorption spectrum of the complex HIS1 is dominated by metal-to-ligand charge transfer (MLCT) transitions and shows MLCT bands in the visible region at 546 nm with a molar extinction coefficient of 12 × 10³ M⁻¹ cm⁻¹. There are high-energy bands at 380 nm due to ligand π-π* charge transitions. A comparison of the UV-vis spectra of the HIS1 and N719 complexes is displayed in Figure 1.

To gain insight into the electron distribution of this new series of complexes, for a better understanding of the charge injection and dye regeneration processes, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of complex HIS1 were calculated using the Gaussian-09 program package (Figure 2). The HOMO of cyclometalated complexes of the types [Ru(N∧N∧N)(N∧N∧C)] and [Ru(N∧N)₂(C∧N)]⁺ is typically extended over the metal and, to a lesser extent, the anionic portion of the cyclometalating ligand [29]. The LUMO typically resides on the neutral polypyridyl ligands, along with low-lying excited states delocalized over the polypyridyl portion(s) of the cyclometalating ligand (Figure 2). The ionization potential of complex HIS1 bound to a nanocrystalline TiO₂ film was determined using a photoemission yield spectrometer (Riken Keiki, AC-3E). The ground-state oxidation potential (Ru³⁺/²⁺) value of −5.95 eV obtained for sensitizer HIS1 was low enough for efficient regeneration of the oxidized dye through reaction with iodide. The excited-state oxidation potential, E*(Ru³⁺/²⁺), of sensitizer HIS1 was estimated to be −4.18 eV.

The monochromatic incident photon-to-current conversion efficiency (IPCE) for the solar cell, plotted as a function of excitation wavelength, was recorded on a CEP-2000 system (Bunkoh-Keiki Co., Ltd.).
IPCE at each incident wavelength was calculated from (1),

IPCE(λ) = (hc/q) × I_sc/(λ × P₀),   (1)

where I_sc is the photocurrent density at short circuit in mA cm⁻² under monochromatic irradiation, q is the elementary charge, λ is the wavelength of the incident radiation in nm, P₀ is the incident radiative flux in W m⁻², and h and c denote Planck's constant and the speed of light.

The photocurrent density-voltage curves and the incident photon-to-current conversion efficiency (IPCE) spectra of the cells based on sensitizer HIS1 were recorded under the illumination of air mass (AM) 1.5 sunlight (100 mW/cm², WXS-155S-10: Wacom Denso Co., Japan). Figure 3 shows the action spectra of the monochromatic incident photon-to-current conversion efficiency (IPCE) for a DSC composed of a complex-HIS1-sensitized nanocrystalline TiO₂ electrode and an iodine/triiodide redox electrolyte, with reference to an N719-based DSC constructed under comparable conditions. Although complex HIS1 shows somewhat lower IPCE values, this problem could be solved by structural modification of complex HIS1, a subject for future research. We observed an IPCE of 68% for complex HIS1, while in the case of N719 the IPCE was 76%. The dye-sensitized solar cell based on sensitizer HIS1 achieves a conversion efficiency (η) of 4.76%, a short-circuit photocurrent density of 11.21 mA/cm², an open-circuit voltage of 0.62 V, and a fill factor of 0.68 under standard AM 1.5 sunlight. An N719-sensitized solar cell, fabricated and measured under the same procedures, achieves a conversion efficiency (η) of 7.56%, a short-circuit photocurrent density of 15.83 mA/cm², an open-circuit voltage of 0.65 V, and a fill factor of 0.73. The photo-induced voltage (Voc) is determined by the difference between the quasi-Fermi level of TiO₂ and the redox potential of the electrolyte, and it can be enhanced by slowing the recombination of injected electrons in TiO₂ with oxidized species and by a negative shift of the band edge. tert-Butylpyridine (TBP) is known to increase the Voc of DSC due to an enhanced electron lifetime and a negative shift of the band edge [30, 31]. Hence, a higher Voc (0.62 V) was observed with the electrolyte containing 0.3 M TBP, compared with 0.50 V without TBP.

4. Conclusions

In summary, a new class of thiocyanate-free cyclometalated ruthenium-based dye, HIS1, was strategically designed and synthesized. This complex shows an appreciably broad absorption range. Anchoring to nanocrystalline TiO₂ films for light-to-electrical energy conversion in regenerative photoelectrochemical cells achieves efficient sensitization of the TiO₂ electrode. With this new sensitizer, a power conversion efficiency of 4.76%, a short-circuit photocurrent density of 11.21 mA/cm², an open-circuit voltage of 0.62 V, and a fill factor of 0.68 were obtained under standard AM 1.5 sunlight. Further improvement of the solar cell efficiency, as well as a dynamic study of electron injection and recombination in complex-HIS1-sensitized nanostructured TiO₂, is currently in progress in our lab and will be disclosed in due course.
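As a numerical cross-check of the figures reported above, the standard relation η = J_sc × V_oc × FF / P_in reproduces the quoted efficiencies to within rounding, and Equation (1) can be evaluated the same way. The snippet below is an illustrative calculation added by the editor, not code from the paper.

```python
# Cross-check of reported photovoltaic parameters (AM 1.5, P_in = 100 mW/cm^2).
HC_OVER_Q = 1239.84  # hc/q in eV*nm, i.e. V*nm

def efficiency(jsc_mA_cm2: float, voc_V: float, ff: float, pin_mW_cm2: float = 100.0) -> float:
    """Power conversion efficiency in percent."""
    return 100.0 * jsc_mA_cm2 * voc_V * ff / pin_mW_cm2

def ipce(jsc_A_m2: float, wavelength_nm: float, p0_W_m2: float) -> float:
    """IPCE (as a fraction) from Eq. (1): hc*I_sc/(q*lambda*P0), SI-consistent units."""
    return HC_OVER_Q * jsc_A_m2 / (wavelength_nm * p0_W_m2)

print(efficiency(11.21, 0.62, 0.68))  # ~4.73, vs. 4.76% reported for HIS1
print(efficiency(15.83, 0.65, 0.73))  # ~7.51, vs. 7.56% reported for N719
```

The small gaps between the computed and reported efficiencies are consistent with rounding of J_sc, V_oc, and FF in the text.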
2018-12-19T12:19:28.960Z
2011-05-04T00:00:00.000
{ "year": 2011, "sha1": "4166d20ad8c1933137147e447676d7cae58e8412", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijp/2011/520848.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4166d20ad8c1933137147e447676d7cae58e8412", "s2fieldsofstudy": [ "Materials Science", "Chemistry", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
240354816
pes2o/s2orc
v3-fos-license
Supporting Information: Investigating the Influence of N,N,N-trimethyl-1-adamantyl ammonium (TMAda) Structure Directing Agent on Al Siting in the Zeolite Chabazite Using Atomistic Simulations

The catalytic properties of zeolites, which are primarily determined by the framework topology and active centers (e.g. Ti⁴⁺, Sn⁴⁺ or Al³⁺), remain challenging to control in zeolite synthesis. Here, we combined first-principles and classical molecular simulations to investigate how Al siting and OSDA orientation impact the energy of a zeolite supercell. A 36 T-site CHA zeolite with TMAda⁺ (OSDA) was chosen as the model system. By applying a Boltzmann factor to each Al configuration and as a function of TMAda⁺ orientation, we came to the conclusion that Al pairs prefer to locate in 8-MRs compared to 6-MR, 4-MR and D6R, which is consistent with our previous experimental finding. We also found that the potential energy was governed by the distance between the anionic AlO₄ tetrahedra and the cationic quaternary ammonium groups (the Al-N distance), which is the key factor in determining the Al distribution. These results highlight opportunities for using classical molecular simulation combined with partial charges to represent electrostatic interactions, which enables a promising methodology to exploit the Al distribution and the OSDA orientation in larger supercell systems.

Table: CMD and AIMD results for Figure 2. The first column is the structure label; those structures have been provided in the zipped file.

Figure S1: Averaged Al-to-Al distances versus relative potential energies (∆U_i). Left: "AAB" orientation in red. Right: "AAA" orientation in blue.

Figure S2: Averaged quaternary ammonium N to quaternary ammonium N distances versus relative potential energies (∆U_i). Left: "AAB" orientation in red. Right: "AAA" orientation in blue.

1 Introduction

Zeolites are porous crystalline aluminosilicates with different three-dimensional architectures formed by the linking of oxygen atoms of AlO₄⁻ and SiO₄ tetrahedra to form a covalent network structure. 1 Because their pores are of molecular dimensions, zeolites can act as molecular sieves, which enables their application in separation processes. 2-5 The aluminum atoms serve as Brønsted acid sites, which lends zeolites catalytic activity for hydrocarbon cracking, alkylation, and isomerization. 6 Haag et al. reported high turnover rates of several hydrocarbon reactions with increasing concentration of aluminum sites. 7 Even at the low Al concentrations of Si-rich zeolites, AlO₄⁻ provides shape-selective effects in catalytic reactions when sitting at certain positions inside cavities. 8 It has been estimated that the introduction of zeolites to refining operations has resulted in a ~30% increase in gasoline yield, making zeolites one of the most important materials in petrochemical processing. 9 Compared with natural zeolites, synthetic zeolites comprise over 70% of the global market, which was valued at 29.08 billion USD in 2016. 10 Synthetic zeolites are typically made under hydrothermal conditions using special organic structure directing agents (OSDAs) that help steer the self-assembly process toward a particular pore shape and Al distribution. 11 It has been estimated that there are millions of potential zeolite frameworks, 12 but only a fraction of these have actually been synthesized.
For this reason, a great deal of experimental and theoretical work has been done to better understand the complex process by which different synthesis conditions and OSDAs lead to different zeolite structures. 13-20

The AlO₄⁻ tetrahedra in zeolites are proton active sites, which enables them to be distinguished from other tetrahedral sites (T-sites) by their surrounding chemical environment. 27 Rietveld refinement of powder X-ray diffraction is another technique which can locate OSDAs inside zeolite cavities after crystallization. 28-30 Other characterization techniques, including fluorescence spectroscopy, 31 valence-to-core X-ray emission spectroscopy (XES), 32 and atom probe tomography (APT), 33 have been applied to provide information on the relationship between Al siting and the type of OSDA used in synthesis.

Molecular modeling and simulation have also been used to help understand the relationship between OSDA, zeolite structure, and Al siting. Early work focused on using quantum mechanics (QM) and Density Functional Theory (DFT) calculations to explain the acidity of Al sites 34-36 and to develop classical core-shell force fields for modeling the zeolite framework. 37 In all these previous studies where classical MD simulations were applied, the effect of electrostatic interactions between the OSDA and the lattice was ignored. That is, only the steric effects present between the OSDA and the zeolite framework were taken into account. Since we are interested in how the OSDA controls the resulting framework and the Al distribution and siting, it is important to account for electrostatic interactions, since these undoubtedly play a major role in how Al is distributed throughout the lattice.

In this work, we combine classical and first-principles models to explore the dependence of charged OSDA/framework energies on the distribution of Al within the framework. We choose the zeolite CHA and the OSDA TMAda⁺ as a model system. This is because CHA has a single, symmetry-distinct tetrahedral site, and TMAda⁺ is known to form CHA. In addition, TMAda⁺ can adopt only two geometrically equivalent but orientationally opposite configurations within the CHA cage, thereby reducing the total number of distinct OSDA orientations within the lattice. We postulate that lattice energies are related to the Al distribution for a given orientation of TMAda⁺, which, if true, suggests that OSDA/framework interactions are a potentially exploitable design parameter in creating zeolites enriched in desirable heteroatom configurations.

2 Simulation Details

2.1 Sampling Al Configurations and OSDA Orientations

Our hypothesis is that the potential energy between OSDAs and the CHA framework is related to the probability of a given Al configuration arising during synthesis. We therefore investigated the interaction energy between a pre-formed CHA framework and co-caged OSDAs. For the CHA framework with TMAda⁺ as the OSDA, the Al distribution or "configuration", the direction of the long axis of TMAda⁺ as it sits in the framework or "OSDA orientation", and the local position of the OSDA atoms or "OSDA conformation" can all have a significant effect on the energy. Two strategies were used to ensure that exhaustive sampling was done over all configurations, orientations, and conformations. We first constructed a 36 T-site supercell using lattice constants obtained from the Database of Zeolite Structures. 49
Note that there are only two distinct ways that three TMAda⁺ cations in a 36 T-site supercell can arrange themselves: all in the same orientation (which we denote "AAA"), and two TMAda⁺ cations pointing in the same direction with the third pointing in the opposite direction (which we denote "AAB"). Three TMAda⁺ cations were added to the CHA lattice in a given orientation. Following this, three Al atoms were distributed over all possible T-sites in the lattice, excluding placements that contain Al-O-Al linkages, since this violates Löwenstein's rule. This results in 4908 Al configurations for a given TMAda⁺ orientation. By flipping the orientation of one TMAda⁺ cation and placing the Al atoms in the same way, we obtained the second set of configurations. Then, we carried out MD simulations on each of the 9816 configurations. The energy barrier for a TMAda⁺ to "flip" from one orientation to the other is so high that, on the timescale of the MD simulations, the TMAda⁺ cations only sample their local conformations; they do not flip. The average potential energy U_i of each configuration i was computed, and the relative energy of each configuration ∆U_i was computed as

∆U_i = U_i − U_ref,   (1)

where the reference potential energy U_ref is taken as that of the lowest energy configuration.

2.2 AIMD Simulations

Born-Oppenheimer molecular dynamics simulations were performed on systems containing different Al configurations and OSDA orientations using VASP 50 with PAW potentials 51,52 at the gamma point, using a plane wave cutoff of 400 eV at the PBE-D3 level. 53,54 Simulations were run at 633 K in the canonical (NVT) ensemble, using a Nosé-Hoover thermostat with a time constant of 100 fs. A 1 fs time step was used. Hydrogen atoms were replaced by deuterium atoms to allow a larger time step. Zeolite framework atoms (Si, Al and O) were kept frozen during the simulations. For each step, self-consistent-field (SCF) electronic energies were converged to 10⁻⁵ eV. 10 ps simulations were conducted for all structures. For each structure, the first 2.5 ps of the trajectory was discarded and the remaining 7.5 ps was used to calculate the average potential energy.

2.3 Classical Force Field

We used the Dreiding force field 55 to represent intramolecular OSDA interactions, as well as van der Waals interactions between the zeolite framework and the OSDA. We chose the Dreiding force field because it has been shown to perform well when treating interactions between OSDAs and siliceous zeolite frameworks. 45,46 The Dreiding force field 55 combines harmonic bond-stretch, angle-bend, torsion, and inversion terms with Lennard-Jones and Coulombic non-bonded terms. We considered the Lennard-Jones (LJ) intramolecular and intermolecular interactions for the TMAda⁺s but neglected self-interactions of the zeolite framework atoms, since the lattice is rigid. To model electrostatic interactions between the OSDA and the framework, we introduced partial charges into the system. To derive the partial charges, we performed three DFT minimization simulations starting from three different Al configurations and TMAda⁺ orientations. Then the Density Derived Electrostatic and Chemical (DDEC) approach 56-58 was used to obtain atomic net charges for the three conformations. Plane-wave, periodic supercell DFT calculations were performed using the Vienna Ab initio Simulation Package (VASP). Then, single-point calculations were conducted to generate the required AECCAR0, AECCAR2, and CHGCAR files for performing DDEC atomic population analysis. 59 The partial charges derived in this way are robust; we performed ten additional minimizations on different configurations and orientations.
The differences between the partial charges obtained from these different sets of minimizations were less than 5%, and, more importantly, the distribution of partial charges remained essentially the same. The XYZ files containing the raw partial charges obtained from the different minimizations are provided as a zipped file in the Supporting Information. We then categorized the atomic charges based on their chemical environments. The partial charges used in this work are listed in Table 1. For the zeolite atoms, it is clear from the charges on Al and Si that the Al T-site has a different chemical environment than the Si T-site. So we reserved two atom types for Al and its adjacent O in the AlO₄⁻ tetrahedra, namely al and ob in Table 1 and Figure 1. The chemical environment of Si atoms can also be affected by the number of neighboring AlO₄⁻ groups, so the adjacent Si atoms were also divided into four types accordingly. For the TMAda⁺ ion, it is clear that its carbon and hydrogen atoms need to be categorized based on their relative positions to the quaternary ammonium group. The definition of each atom type for TMAda⁺ used in Table 1 is illustrated in Figure 1. Atomic charges for chemically-equivalent atoms were summed and averaged from the raw charge files, while neutrality of the charges was also ensured during the process.

2.4 Classical Molecular Dynamics Simulations

In this study, we primarily investigate 36 T-site CHA systems with three pre-caged TMAda⁺ cations. The NVT ensemble with the Nosé-Hoover 61,62 thermostat at 433 K was applied. A cutoff of 10 Å was used for non-bonded interactions, including van der Waals attraction and repulsion, as well as electrostatic interactions. A standard long-range van der Waals tail correction was added to the energy and pressure, while a particle-particle particle-mesh solver 63 was applied to take care of the long-range electrostatic interactions.

3.1 Validating CMD results against DFT calculations

Before we analyzed the results of the CMD simulations (described below), we first needed to test the accuracy of the energies obtained from the CMD simulations. To do this, AIMD simulations were conducted on ten AAA and ten AAB systems having the lowest CMD energies and ten AAA and ten AAB systems with the highest CMD energies, for a total of 40 systems.

3.2 What are the most probable Al distributions in systems with the AAA TMAda⁺ orientation?

On the left panel of Figure 3, we show an example snapshot including the CHA framework and the AAA-oriented TMAda⁺s. In the middle panel, we report relative potential energies of different Al configurations, sorted from lowest to highest energy. Note that each point in the middle panel of Figure 3 corresponds to a specific Al configuration with the AAA-oriented TMAda⁺s. We have divided the energy distribution into low, medium, and high regions. Figure 4 shows these Al pairs schematically. To quantify the relative probabilities of different Al configurations, we performed Boltzmann weighting of each configuration. The configurational integral of the system with the AAA TMAda⁺ orientation is

Z_AAA = Σ_i exp(−∆U_i / kT),

where i denotes an Al configuration, ∆U_i is defined by Equation (1), k is the Boltzmann constant, T is the simulated temperature (433 K), and only AAA orientations are considered in the summation. The probability of an Al configuration i with an AAA TMAda⁺ orientation is

P_i = exp(−∆U_i / kT) / Z_AAA.
Finally, the probability Π_j of a particular Al pair type j (i.e. 8-MR, 6-MR, 4-MR, D6R, or isolated) for an AAA TMAda⁺ orientation is given by

Π_j = (1/3) Σ_i n_{j,i} P_i,

where n_{j,i} stands for the number of Al pairs of type j in Al configuration i. The factor of 3 accounts for the fact that there are three Al pairs in each configuration.

3.3 How different are the Al distributions in systems with the AAB TMAda⁺ orientation?

We next tested the sensitivity of the Al pair distributions to the TMAda⁺ orientation. We flipped the orientation of one TMAda⁺ to obtain an AAB orientation and repeated the simulation procedure used for the AAA orientation. Energy distributions are reported in Figure 7.

Figure: Probability distributions (Π_j) of AAB TMAda⁺ orientations. Probability distributions are split into three regions with different colors, namely "low" (blue), "medium" (black), and "high" (red). The grouping of Al configurations is based on Figure 7 and has been discussed in the text.

3.4 What contributes most to the differences in the potential energies of the configurations?

We believe that the relative energies of the different Al configurations are mainly related to the distance between the Al atoms of the anionic AlO₄ units and the N atoms of the TMAda⁺s, due to electrostatic interactions. Figure 10 shows the distribution of the Al-N distance over all Al configurations as a function of TMAda⁺ orientation. The AAB orientation has a broader distribution compared to the sharply peaked distribution for the AAA orientation. This is consistent with the energy distributions shown in Figure 3 and Figure 7. To illustrate the underlying relation of the Al-N distance to the energy, Figure 11 shows a parity plot of averaged Al-N distances versus relative potential energies (∆U_i) for the two TMAda⁺ orientations. There is a clear correlation between the Al-N distance and the relative energy, with the lower energy configurations having shorter Al-N distances. The AAA orientation has a much narrower energy and Al-N distance distribution than the AAB orientation, consistent with the results presented above. Parity plots of the N-N distance versus ∆U_i, as well as the Al-Al distance versus ∆U_i, are provided in Figures S1 and S2 in the Supporting Information. However, neither the N-N distance nor the Al-Al distance correlates well with the relative potential energy (∆U_i). Though the potential energy can be a result of multiple contributions, we believe that, among those, the cation-to-anion electrostatic interaction determined by the Al-N distance contributes the most.

Figure 10: Averaged Al to quaternary ammonium N distance distributions for AAB (red) and AAA (blue). Note: there are 3 quaternary ammoniums and 3 Al sites, so the distance on the x-axis is the value averaged over 9 pairwise distances.

Figure 11: Averaged Al to quaternary ammonium N distances versus relative potential energies (∆U_i). Left: "AAB" orientation in red. Right: "AAA" orientation in blue.

3.5 How does the TMAda⁺ orientation affect the energy within a given framework structure?

To study the effect of TMAda⁺ orientation on the energy of a given Al distribution, we selected two Al configurations from the AAA orientation in Figure 3: one from the low-energy region and one from the high-energy region; the latter is not a preferable distribution. 17 But flipping the orientation of the TMAda⁺s can stabilize the high-energy framework by reducing the Al-N distance.
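The Boltzmann bookkeeping defined in the two preceding subsections reduces to a few lines of code once the per-configuration energies ∆U_i and the pair-type counts n_{j,i} are tabulated. The sketch below is an illustrative reimplementation with made-up inputs; it is not the authors' analysis script.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K
T = 433.0          # simulation temperature in K

# Hypothetical per-configuration data: relative energy dU_i (eV) and the
# number of Al pairs of each type, n_{j,i}; every configuration has 3 pairs.
configs = [
    {"dU": 0.00, "pairs": {"8-MR": 2, "isolated": 1}},
    {"dU": 0.05, "pairs": {"8-MR": 1, "4-MR": 1, "isolated": 1}},
    {"dU": 0.30, "pairs": {"6-MR": 2, "D6R": 1}},
]

weights = [math.exp(-c["dU"] / (K_B * T)) for c in configs]
Z = sum(weights)                      # configurational sum over one orientation
probs = [w / Z for w in weights]      # P_i for each Al configuration

pair_types = ["8-MR", "6-MR", "4-MR", "D6R", "isolated"]
# Pi_j = (1/3) * sum_i n_{j,i} * P_i ; the 1/3 normalizes over 3 pairs/config.
Pi = {j: sum(c["pairs"].get(j, 0) * p for c, p in zip(configs, probs)) / 3
     for j in pair_types}
print(Pi)  # with these toy inputs, 8-MR pairs dominate, mirroring the trend above
```

Because every configuration contributes exactly three pairs, the Π_j values sum to one, which makes them directly comparable across the AAA and AAB orientations.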
Another interesting observation from comparing the two bar plots is that, although the relative potential energies (∆U_i) can be affected by the orientation of the TMAda⁺s, overall, all of the OSDA orientations associated with the low-energy configuration are lower in energy than the high-energy configurations, regardless of the orientation of the TMAda⁺s. This is consistent with the probability distributions in Figure 6 and Figure 9, where the probability distributions of Al pairs in the AAA and AAB orientations are quantitatively different but qualitatively the same. Alternatively, coarse graining could be performed to explore much larger systems.

4 Conclusions

In this work, we performed atomistic simulations to investigate how Al siting and OSDA position and orientation impact the energy of a zeolite supercell. We examined a 36 T-site model of the CHA zeolite containing three AlO₄ tetrahedra and three TMAda⁺ OSDA molecules. We enumerated all possible Al configurations as well as TMAda⁺ positions and orientations. Classical molecular dynamics simulations were used to compute the average energy of each Al configuration as a function of TMAda⁺ orientation. In addition to van der Waals interactions between the zeolite lattice and the TMAda⁺, electrostatic interactions were included by assigning partial charges to the framework atoms and the OSDA atoms using the DDEC partial charge method. By applying a Boltzmann factor to each Al configuration and OSDA orientation, we came to the conclusion that Al pairs prefer to locate in 8-MRs compared to 6-MR, 4-MR and D6R. This observation is consistent with our previous experimental finding that 6-MR Al pairs are less likely to be found if the synthesis involves only TMAda⁺ as the OSDA, without an inorganic SDA. 17 The main contribution to the potential energy comes from electrostatic interactions, which are governed by the distance between the anionic AlO₄ tetrahedra and the cationic quaternary ammonium groups (the Al-N distance). Thus this is a key factor in determining the Al distribution in the lattice. We also studied the influence of TMAda⁺ orientations on the energy of the Al distribution. The "AAB" orientation, where one TMAda⁺ points in a different direction than the other two, has a broader energy distribution over all Al configurations than the "AAA" orientation. Finally, we studied the effect of lattice system size by examining a 72 T-site lattice. Due to computational limitations, only two Al configurations were studied. For each Al configuration, we flipped the orientations of the TMAda⁺s to generate 64 distinct orientations. Consistent with the findings from the 36 T-site system, and regardless of the orientation of the TMAda⁺s, the Al-N distance was found to govern the overall energy of the system.

The highlight of this work is the construction of a classical molecular model that probes the energy of an OSDA-zeolite system as a function of Al configuration and OSDA orientation. The results of the model are consistent with the experimental finding that Al tends to localize in 8-MRs when TMAda⁺ is used as an OSDA in forming CHA. The insights of the model can help explain the energetic effects that lead an OSDA to help form a particular zeolite framework.
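Since the conclusions single out the averaged Al-N distance as the governing descriptor, it is worth noting how that metric is computed: the caption of Figure 10 states that it is the mean over the 3 × 3 = 9 Al-N pairs, and in a periodic supercell each pairwise distance should use the minimum-image convention. The helper below is an editor-supplied sketch of that calculation for an orthorhombic cell; the real CHA cell is not orthogonal, so a full lattice-matrix treatment would be needed in practice, and the coordinates and cell length used here are made up.

```python
import numpy as np

def mean_al_n_distance(al_xyz: np.ndarray, n_xyz: np.ndarray, box: np.ndarray) -> float:
    """Average over all Al-N pairs (3 x 3 = 9) with the minimum-image convention.

    al_xyz, n_xyz: (3, 3) Cartesian coordinates in angstroms;
    box: (3,) orthorhombic cell lengths (an assumption of this sketch).
    """
    d = al_xyz[:, None, :] - n_xyz[None, :, :]   # (3, 3, 3) displacement vectors
    d -= box * np.round(d / box)                 # wrap each component to nearest image
    return float(np.linalg.norm(d, axis=-1).mean())

# Toy usage with random coordinates in a 13.7 A cubic stand-in cell:
rng = np.random.default_rng(0)
box = np.array([13.7, 13.7, 13.7])
print(mean_al_n_distance(rng.uniform(0, 13.7, (3, 3)),
                         rng.uniform(0, 13.7, (3, 3)), box))
```

Averaging nine pairwise distances into a single scalar is what makes the parity plots in Figures 10 and 11 possible: each configuration collapses to one (distance, ∆U_i) point.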
In future work, we will use the model to: (1) include inorganic alkali structure directing agents and investigate their influence on Al distribution; (2) study other OSDAs and investigate their roles in shaping the formation of other aluminosilicate zeolites; (3) develop a Monte Carlo method to sample larger systems.
2021-10-26T01:16:50.778Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "0cd65e8f031e36d207ac5f5787057571f248a8c7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0cd65e8f031e36d207ac5f5787057571f248a8c7", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
243938355
pes2o/s2orc
v3-fos-license
Complexity of the Ackermann fragment with one leading existential quantifier

In this short note we prove that the satisfiability problem of the Ackermann fragment with one leading existential quantifier is ExpTime-complete.

Introduction

After it was realized that the satisfiability problem of first-order logic is undecidable, a major research program emerged with the aim of classifying prefix fragments of first-order logic based on their decidability status. This program was successfully completed in the 1980s. We refer the reader to [1] for a detailed overview of its main results. One of the two maximal decidable prefix classes with equality turned out to be the Ackermann fragment [∃*∀∃*]_= [1], i.e., the fragment of first-order logic composed of all sentences in prenex normal form with a quantifier prefix that matches the expression ∃*∀∃*. It is well-known that the Ackermann fragment has a NExpTime-complete satisfiability problem [4], and a closer analysis of the proof of the lower bound shows that it holds already for the slightly smaller fragment [∃²∀∃*]_=.

It is claimed (without proof) in [1] on page 288 that the satisfiability problem of [∃*∀∃*]_= can be efficiently reduced to that of [∀∃*]_=. ¹ If true, this claim would imply that the complexity of the Ackermann fragment remains NExpTime-hard even without leading existential quantifiers. However, in this paper we show that this claim is false (unless NExpTime = ExpTime). The following is our main result.

Theorem 1.1. The satisfiability problem of [∃∀∃*]_= is ExpTime-complete.

In [3] it was proved that already the satisfiability problem of [∀∃²] is ExpTime-hard. Thus we have the following immediate corollary.

Corollary. The satisfiability problem of [∀∃*]_= is ExpTime-complete.

To prove Theorem 1.1, we will design an alternating polynomial space procedure which, roughly speaking, starts by guessing the isomorphism type of the leading existential quantifier and proceeds to guess witnesses for universally selected elements.

Email address: reijo.jaakkola@tuni.fi (Reijo Jaakkola)

¹ More specifically, the authors claim that this reduction can be done with the method presented in Exercises 6.2.39 and 6.2.40. However, these exercises assume that the sentences under consideration do not contain equality.

Preliminaries

In this paper we will work with vocabularies which contain neither constants nor function symbols. We will also assume that there are no relation symbols of arity 0. We will use Fraktur capital letters to denote structures, and the corresponding Roman letters to denote their domains.

Let τ be a vocabulary. An atomic τ-formula α(v₁, . . . , v_m) is of the form R(v₁, . . . , v_m), where R ∈ τ. A 1-type π over τ is a maximally consistent set of unary literals, by which we mean formulas of the form α(x) or ¬α(x), where α is an atomic τ-formula. Given a model A and a ∈ A, we will use tp_A[a] to denote the unique 1-type which a realizes in A, namely the set

{α(x) : A ⊨ α(a)} ∪ {¬α(x) : A ⊭ α(a)},

where α ranges over atomic τ-formulas. The 1-type of a single element describes completely the set of quantifier-free formulas it satisfies.

For our alternating procedure we will also need a closely related notion, which specifies sufficiently large portions of the quantifier-free types of pairs of elements. To define this formally, we need to set up some notation. Consider a quantifier-free formula ψ and suppose that {x₁, . . . , x_n} contains all the free variables of ψ. We will use Atom(ψ) to denote the set of subformulas of ψ (note that by definition Atom(ψ) does not contain equalities between variables).
We let cl(ψ) denote the smallest set which contains Atom(ψ) and is closed under ∼. Fix two distinct variables x, y ∈ {x₁, . . . , xₙ}. An (x, y)-substitution is a mapping s : {x₁, . . . , xₙ} → {x₁, . . . , xₙ} with the property that s(x) = x and s(y) = y. Given an (x, y)-substitution s, we use cl(ψ, s) to denote the set

{χ(s(x₁), . . . , s(xₙ)) : χ ∈ cl(ψ)}.

For any (x, y)-substitution, a maximally consistent set ρ ⊆ cl(ψ, s) is called a (2, ψ)-profile. Note that for a fixed pair (x, y) there are at most $2^{|\psi|} \cdot n^n \le 2^{|\psi|^2}$ (2, ψ)-profiles, since ψ has at most |ψ| distinct subformulas.

Consider a model A, an assignment s : {x₁, . . . , xₙ} → A and two distinct elements a, b ∈ A. Suppose that s(x) = a and s(y) = b. We define

tp^ψ_{A,s}[a, b] := {α ∈ cl(ψ) : A ⊨ α[s]}

as the (2, ψ)-profile that (a, b) realizes in A. We emphasize that in the above definition α ranges (again) only over the formulas in cl(ψ).

Proof of the upper bound

Our goal is to design an alternating procedure running in polynomial space which determines whether a given sentence ϕ ∈ $[\exists\forall\exists^*]_=$ is satisfiable. Since APspace = ExpTime [2], Theorem 1.1 will follow from this.

Fix a sentence ϕ := ∃z∀x∃y₁ . . . ∃yₙ ψ(z, x, y₁, . . . , yₙ), where ψ is quantifier-free. Throughout this section τ will denote the set of relation symbols occurring in ϕ. For technical convenience we will replace ψ with the following quantifier-free formula:

ψ(z, z, y₁, . . . , yₙ) ∧ (x = z ∨ ψ(z, x, y₁, . . . , yₙ)).

Clearly the resulting sentence is equivalent with ϕ. Thus we can assume that x ≠ z over models of size at least two.

We start with an important auxiliary definition. In this definition, and also in the rest of this paper, the free variables of (2, ψ)-profiles are z and x. Given a 1-type π over τ and a (2, ψ)-profile ρ, a (π, ρ)-witness is a pair (C, s), where C is a τ-model and s is an assignment over C such that C ⊨ ψ(s(z), s(x), s(y₁), . . . , s(yₙ)), the element s(x) realizes π and the pair (s(z), s(x)) realizes ρ.

Observe that the size of the description of C in a witness (C, s) is in the worst case exponential with respect to |ϕ| (the length of ϕ), which causes the description of the whole witness to be too large for our purposes. However, to determine whether C ⊨ ψ(s(z), s(x), s(y₁), . . . , s(yₙ)) holds, we only need to know, in addition to C and s, the truth values that the atomic formulas of ψ receive under the assignment s in the model C, and these can be described using descriptions of size polynomial with respect to |ϕ|, since ψ has at most |ψ| subformulas. We also note that, besides the true atomic formulas, our alternating procedure needs to know the 1-types of the elements of C, which can of course also be described with a description of size polynomial with respect to |ϕ|.

We will now present the promised alternating procedure. First, we will define an auxiliary alternating process AckermannSatRoutine that receives as its input a tuple (ϕ, π₀, c, π, ρ), where c is a counter (a natural number), π₀ and π are 1-types over τ and ρ is a (2, ψ)-profile. Keeping in mind the fact that one can represent $2^{|\tau|} \cdot 2^{|\psi|^2} + 1$ using only polynomially many bits, it is clear that AckermannSat uses only a polynomial amount of space. The following two lemmas establish its correctness.

Lemma. If ϕ is satisfiable, then AckermannSat accepts ϕ.

Proof. If ϕ has a model of size one, then AckermannSat clearly accepts ϕ. On the other hand, if A is a model of ϕ of size at least two, then all the existential guesses can be made in accordance with A.

Lemma. If AckermannSat accepts ϕ, then ϕ is satisfiable.

Proof. Suppose that ϕ does not have a model of size one, but the existential player ∃ still has a positional winning strategy σ in the alternating reachability game played on the configuration graph of the procedure AckermannSat on input ϕ. Let π₀ denote the 1-type that σ instructs ∃ to choose at the start of the game.
Without loss of generality we can assume that the move determined by σ in Step 2 depends only on the current 1-type π and the (2, ψ)-profile ρ (and not on the value of c). Let W denote the set of all witnesses that are encountered in those histories of the alternating reachability game where ∃ moves according to σ. Note that, strictly speaking, AckermannSat does not guess the whole structure C in any of the witnesses (C, s) ∈ W. In other words, there might be tuples of elements of C for which AckermannSat did not specify all of the relation symbols whose interpretations contain them. For definiteness, if AckermannSat did not specify whether (c₁, . . . , cₖ) belongs to R^C, then we specify that it does not belong to R^C (in other words, we complete the models C in a minimal way, although the exact choice of completion does not matter here).

A pair (π, ρ), where π is a 1-type over τ and ρ is a (2, ψ)-profile, is called an extended 1-type. Since $2^{|\tau|} \cdot 2^{|\psi|^2} + 1$ is an upper bound on the number of extended 1-types, for every (C, s) ∈ W and i ∈ C the pair (tp^C[s(i)], tp^ψ_{C,s}[s(z), s(i)]), which we call the extended 1-type that i realizes, is encountered in some history of the alternating reachability game where ∃ follows σ. Let Φ denote the set of extended 1-types that are realized by elements in the witnesses that belong to W.

Our goal is to construct an increasing sequence of models whose union will be a model of ϕ. We start by describing how the model B₀ can be constructed. Recall that π₀ denotes the 1-type that was selected by ∃ at the start of the game. In addition to π₀, the strategy σ instructs ∃ to choose another 1-type π and a (2, ψ)-profile ρ. We define B₀ to be a model consisting of two elements b₀ and b, so that b₀ and b have the 1-types π₀ and π respectively, and furthermore the pair (b₀, b) realizes the (2, ψ)-profile ρ.

Before proceeding with the rest of the proof, we first give a high-level description of how the rest of the models B₁, B₂, . . . will be constructed. Given a model B_m, we want to assign witnesses to the elements that lack them; such elements are called defects. More precisely, a defect is an element b ∈ (B_m − {b₀}) for which there does not exist a tuple (d₁, . . . , dₙ) ∈ B_m^n such that B_m ⊨ ψ(b₀, b, d₁, . . . , dₙ). These witnesses will be selected in a natural way from W. Of course, to be able to do this, we need to make sure that every b ∈ (B_m − B_{m−1}) realizes an extended 1-type from Φ; this will be guaranteed by the construction.

Now, an important technical detail is that we have to be very careful with the way in which we specify the structure of the pairs (b₀, b), for b ≠ b₀. Indeed, AckermannSat will only specify the (2, ψ)-profile of (b₀, b), and we can only extend this after we have provided a witness for b, since the witness structure that we use might enforce additional constraints on the structure of (b₀, b). This technical detail might cause our structures B_m to be incomplete, since at stage m, for every b ∈ B_m − (B_{m−1} ∪ {b₀}), we have only specified the (2, ψ)-profile of the pair (b₀, b). This will not cause us problems, because we can completely specify the structure of (b₀, b) after we have provided a witness for b.

Now we continue with the formal proof. Suppose that we have constructed the model B_m and we want to construct the model B_{m+1}.
For every defect b we will pick a (π, ρ)-witness (C_b, s) ∈ W, where π := tp^{B_m}[b] and ρ is the (2, ψ)-profile that (b₀, b) realizes in B_m. Without loss of generality we will assume that s(z) = b₀ and s(x) = b. Furthermore, we will assume that B_m ∩ C_b = {b₀, b} and that for any distinct defects b and b′ we have C_b ∩ C_{b′} = {b₀}.

We then define the model B_{m+1} as follows. Its domain is the set B_m ∪ ⋃_b C_b, where b ranges over the defects of B_m.

1. For every R ∈ τ and (b₁, . . . , bₖ) ∈ B_m^k, where k is the arity of R and {b₁, . . . , bₖ} ≠ {b₀, b} for every defect b, we specify that (b₁, . . . , bₖ) ∈ R^{B_{m+1}} if and only if (b₁, . . . , bₖ) ∈ R^{B_m}.
2. For every R ∈ τ and (b₁, . . . , bₖ) ∈ B_m^k, where k is the arity of R and {b₁, . . . , bₖ} = {b₀, b} for some defect b, we specify that (b₁, . . . , bₖ) ∈ R^{B_{m+1}} if and only if (b₁, . . . , bₖ) ∈ R^{C_b}.
3. We will specify that (b₀, b) realizes the (2, ψ)-profile tp^ψ_{C_b,s}[b₀, b] ∪ ρ. Then, for every R ∈ τ and (b₁, . . . , bₖ) ∈ B_m^k for which we have not yet specified whether (b₁, . . . , bₖ) belongs to R^{B_{m+1}}, we specify that (b₁, . . . , bₖ) does not belong to R^{B_{m+1}}.
4. For every c ∈ (C_b − {b₀, b}), we will specify that (b₀, c) realizes tp^ψ_{C_b,s}[b₀, c].
5. For every R ∈ τ and (c₁, . . . , cₖ) ∈ C_b^k, where k is the arity of R and {c₁, . . . , cₖ} ≠ {b₀, c} for every c ∈ C_b (including b₀), we specify that (c₁, . . . , cₖ) ∈ R^{B_{m+1}} if and only if (c₁, . . . , cₖ) ∈ R^{C_b}.

This completes the construction of B_{m+1}. One can now verify that every element of B_{m+1} − B_m realizes an extended 1-type from Φ and that every defect of B_m has a witness in B_{m+1}, so the union of the increasing sequence B₀ ⊆ B₁ ⊆ . . . is a model of ϕ.

It is perhaps worthwhile to explain why the above argument fails in the case where there are two (or more) leading existential quantifiers, since the reason for this is somewhat technical. Suppose that an imaginary variant of AckermannSat would start by guessing, say, the 1-types of the two elements a, a′ that we are going to choose to interpret the two existentially quantified variables. As the procedure goes through different 1-types and different ways in which universally selected elements are related to a and a′, the procedure needs to remember how a and a′ are related to each other, if it wants to guarantee that the pair (a, a′) satisfies quantifier-free formulas in a consistent manner in the different existentially guessed witness structures. But since the procedure can run for an exponential amount of time, it could run out of memory, since describing the full structure of (a, a′) can require a description of exponential size.

Conclusions

In this paper we have fixed an error in the literature on prefix fragments by proving that the complexity of the satisfiability problem of the Ackermann fragment becomes ExpTime-complete if we allow the sentences to contain at most one leading existential quantifier. To prove the ExpTime upper bound, we designed an alternating polynomial space procedure which essentially tries to construct a model for the input sentence.

There are still open problems concerning the complexity of certain prefix fragments. Perhaps the most challenging one is to determine the exact complexity of the Ackermann fragment extended with a single unary function. This fragment was proved to be decidable in [5], but for a complete proof we refer the reader to the book [1].
Improvements in quality of life associated with biphasic insulin aspart 30 in type 2 diabetes patients in China: results from the A1chieve® observational study

Background: Based on the 24-week, prospective, non-interventional, observational study A1chieve®, we investigated how health-related quality of life (HRQoL) changed, and the predictors of such changes, in Chinese people with type 2 diabetes mellitus (T2DM) after starting with, or switching to, biphasic insulin aspart 30 (BIAsp 30).

Methods: In total, 8,578 people with T2DM starting treatment with, or switching to, BIAsp 30 were recruited from 130 urban hospitals in China. HRQoL was assessed at baseline and 24 weeks using the EuroQol-5 dimensions (EQ-5D) questionnaire. Descriptive statistics, paired t-test and chi-square test were conducted, and the linear ordinary least squares regression model was used to determine predictors of changes in EQ-5D score.

Results: Haemoglobin A1c (HbA1c) decreased from 9.5% to 7.0% after 24 weeks. The reported HRQoL measured by the EQ-5D visual analogue scale score increased by 6.2 (p < 0.001) from 75.8 to 82.0, and the EQ-5D index score increased by 0.018 (p < 0.001) from 0.875 to 0.893 for the cohort over 24 weeks. The percentage of patients reporting no problems in the mobility, pain/discomfort and anxiety/depression dimensions of EQ-5D increased significantly (p < 0.001) from 88.4% to 91.4%, 77.3% to 82.8%, and 74.2% to 77.1%, respectively. Patients with higher HbA1c levels at baseline, major hypoglycaemia or micro-complications exhibited significantly larger changes in EQ-5D scores than those with lower baseline HbA1c levels, without major hypoglycaemia or micro-complications, after controlling for demographics and other baseline characteristics.

Conclusions: BIAsp 30 treatment was associated with improved glycaemic control and HRQoL in T2DM patients in China. Patients with worse health conditions were more likely to experience larger improvements in HRQoL than those with better health conditions.

Trial registration: ClinicalTrials.gov, NCT00869908.

Electronic supplementary material: The online version of this article (doi:10.1186/s12955-014-0137-9) contains supplementary material, which is available to authorized users.

Background

Globally, the number of people with diabetes has increased at an alarming rate, and diabetes is placing a heavy economic burden on families and healthcare systems. The number of people with diabetes was estimated at more than 371 million in 2012 and is expected to reach 551.9 million by 2030 [1]. China has become the country with the largest number of people with diabetes in the world. The most recent study estimated that the prevalence of diabetes among a representative sample of Chinese adults was 11.6% and the prevalence of pre-diabetes was 50.1%, which corresponded to 113.9 million and 493.4 million people, respectively, in 2010 [2]. The Chinese Diabetes Society of the Chinese Medical Association and the International Diabetes Federation estimated that 13% of total medical expenditures in China were directly caused by diabetes in 2010 [3].

Diabetes is a debilitating disease characterized by deficiencies in insulin secretion, insulin action, or both, leading to chronic hyperglycaemia [4]. Insulin treatment is the inevitable choice for many people with type 2 diabetes (T2DM) as the disease progresses. It is typically used after glycaemic control fails or is not maintained with lifestyle changes and combinations of oral anti-diabetic medications [5].
Insulin treatment can improve glycaemic control, prevent the development of long-term complications of diabetes [6], and influence patients' quality of life [7]. There are a few studies concerning the impact of insulin use on patients' health-related quality of life (HRQoL), with the recorded impacts ranging from positive [8][9][10] to negative [11][12][13]. There are no studies regarding whether insulin therapy improves or decreases patients' quality of life in a Chinese setting. The purpose of this study was to assess how HRQoL changed, and the predictors of such changes, after starting with, or switching to, biphasic insulin aspart 30 (BIAsp 30; 30% soluble insulin aspart, 70% protamine-crystallized insulin aspart) over a 24-week period among people with T2DM in China, using Chinese subgroup data from the A1chieve® study [14].

Study design

A1chieve® was a 24-week, international, prospective, multicentre, non-interventional, observational study of people with T2DM in non-Western countries who had begun using basal insulin detemir, bolus insulin aspart and premixed insulin BIAsp 30, alone or in combination [14]. It was the largest observational study ever conducted in insulin therapy and was carried out in 28 countries across four continents (Asia, Africa, South America and Europe). Individuals with type 2 diabetes who had no prior history of using the study insulins and who had been started on one of the insulins within the 4 weeks prior to the study start were eligible for this study. People with hypersensitivity to the study insulins or excipients, and women who were pregnant, breastfeeding, or who intended to become pregnant within 6 months of the study, were excluded. The therapies were prescribed by the physicians in the course of normal clinical practice and according to treatment demand, rather than randomly assigned by the researchers.

The study was conducted in accordance with the Declaration of Helsinki. Ethics committee approval was obtained for each country, and all participants gave written informed consent prior to their inclusion in the study. In China, central ethics committee approval was obtained at the China-Japan Friendship Hospital. The coordinating sites either accepted the central ethics committee approval or additionally obtained approval from the ethics committees of their own hospitals (Additional file 1). The Chinese cohort that either started (6,612) or was switched (1,966) to BIAsp 30 in the A1chieve® study consisted of 8,578 people with T2DM from 130 urban hospitals in China. They were recruited between January 2009 and June 2010 and had an average observation period of six months. Approval from ethics committees was obtained at all the study sites.

Clinical endpoints

Clinical endpoints including safety and effectiveness outcomes were evaluated. Safety assessment included the incidence of serious adverse drug reactions (SADRs), including major hypoglycaemic events, the change in the number of hypoglycaemic events, the change in the number of nocturnal hypoglycaemic events, and the number of adverse drug reactions (ADRs) from baseline to the final visit. Effectiveness assessments included the change in haemoglobin A1c (HbA1c), fasting plasma glucose (FPG), postprandial plasma glucose (PPG) and body weight between baseline and the interim and final visits, and the change in systolic blood pressure (SBP) and lipid profile at the final visit.

HRQoL measurement

HRQoL was measured by the Chinese version of the EQ-5D questionnaire at baseline and after 24 weeks of therapy.
The EQ-5D consists of a descriptive system of five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each of the five dimensions can take one of three responses recording different levels of severity: no problems, some or moderate problems, and extreme problems. These responses can be converted into a single utility value using EQ-5D preference weights elicited from general population samples. The EQ-5D also includes a visual analogue scale (VAS) recording the respondents' direct valuation of their current HRQoL state on a graduated (0-100) scale, with higher scores for higher HRQoL [15]. The Chinese version of the EQ-5D was obtained from the EuroQol Group [16]. Its validity and reliability have been assessed in mainland China [17][18][19] and it has been used for studies of different populations in mainland China [20][21][22].

Statistical analyses

Descriptive analysis and multivariable regression were performed using SAS (Version 9.1.3, SAS Institute Inc., NC 27513-2414, USA). The changes from baseline to 24 weeks in clinical endpoints, HRQoL measured with the EQ-5D VAS, and the health utility value, as continuous variables, were analysed with the Wilcoxon signed-rank test. The UK preference weights were used for the calculation of the EQ-5D utility value because Chinese preference weights were still to be established. The change in the percentage of people reporting no problem in the EQ-5D descriptive dimensions was analysed with a chi-square test. For the descriptive analysis, the total cohort was divided into subgroups of insulin-naïve people (those not taking insulin therapy at baseline) and previously insulin-experienced people (current insulin users). Linear OLS regression was further employed to explore predictors of the changes in EQ-5D score. Independent variables included patients' demographics (age and sex), health conditions (macro-complications, micro-complications, duration of diabetes, body mass index (BMI), HbA1c, SBP, total cholesterol, high-density lipoprotein (HDL) and low-density lipoprotein (LDL)), and other related indicators (previous insulin experience, total hypoglycaemia and major hypoglycaemia) at baseline.

Results

Demographics and characteristics of respondents

The demographics and baseline characteristics of the respondents are summarized in Table 1.

Clinical endpoints

Blood glucose control measures improved markedly in both insulin-naïve and prior insulin users after 24 weeks of therapy with BIAsp 30. HbA1c decreased from 9.5% to 7.0% for the total cohort, with a decrease from 9.1% to 7.0% for prior insulin users and a decrease from 9.6% to 7.0% for the insulin-naïve group. From a similar baseline measure, the body weight of the two groups increased slightly, by 0.3 kg, during the therapy. No major hypoglycaemia was observed during the study, and reported hypoglycaemia rates (including overall, nocturnal and minor hypoglycaemia) decreased in the total cohort and in both subgroups. All of these results indicated that BIAsp 30 could improve blood glucose control without increasing the risk of hypoglycaemia (Table 2). Indicators including FPG, PPG, SBP, LDL and HDL changed favourably, and no SADR was reported during the study period [23].

Quality of life

Quality of life in the total cohort

As measured by the VAS from the EQ-5D (on a scale of 0-100), the reported QoL of the total cohort increased by 6.2, from 75.8 at baseline to 82.0 at 24 weeks (p < 0.001). The health utility value (on a scale of 0-1) increased by 0.018, from 0.875 at baseline to 0.893 at 24 weeks (p < 0.001).
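A minimal sketch of this kind of paired before/after comparison, assuming a hypothetical data frame with columns vas_baseline and vas_week24 (the column names and the synthetic data below are illustrative, not taken from the study):

```python
# Paired comparison of EQ-5D VAS scores (baseline vs. week 24) in the
# spirit of the Wilcoxon signed-rank analysis described above.
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 500  # hypothetical sample size
baseline = rng.normal(75.8, 10, n).clip(0, 100)
df = pd.DataFrame({
    "vas_baseline": baseline,
    "vas_week24": (baseline + rng.normal(6.2, 8, n)).clip(0, 100),
})

change = df["vas_week24"] - df["vas_baseline"]
stat, p = wilcoxon(df["vas_week24"], df["vas_baseline"])
print(f"mean change = {change.mean():.1f}, Wilcoxon p = {p:.2e}")
```

The OLS predictor analysis described in the Methods could be set up analogously, regressing the individual change score on the baseline covariates (HbA1c, complications, hypoglycaemia and demographics).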
The increased percentages of people reporting no problems on the descriptive EQ-5D dimensions indicated that there were improvements in HRQoL after BIAsp 30 treatment. The percentages of patients reporting no problems in three of the five dimensions of EQ-5D (mobility, pain/discomfort and anxiety/depression) increased significantly, from 88.4% to 91.4% (p < 0.0001), 77.3% to 82.8% (p < 0.0001) and 74.2% to 77.1% (p = 0.002) after 24 weeks, respectively. No statistically significant change was found in the percentage of patients who reported no problems in self-care or in usual activities (Table 3).

Quality of life for prior insulin-experienced and insulin-naïve subgroups

Quality of life improved in both insulin-experienced and insulin-naïve patients. Baseline EQ-5D VAS scores were similar for the prior insulin-experienced and insulin-naïve subgroups (75.3, 75.9). There was a significant increase in both subgroups after 24 weeks (+15.8, +14.4, p < 0.001). The baseline health utility value of the insulin-experienced group (0.851) was lower than that of the insulin-naïve group (0.882). After 24 weeks, the health utility values of the insulin-experienced and insulin-naïve groups increased by 0.035 (p < 0.001) and 0.014 (p < 0.001), resulting in a similar health utility value between the two groups. The percentages of patients reporting no problems in the dimensions of mobility, pain/discomfort and anxiety/depression increased significantly from 84.5% to 91.0% (p < 0.0001), 71.4% to 80.7% (p < 0.0001) and 71.8% to 75.5% (p = 0.0236), respectively, for the prior insulin-experienced group, and from 89.5% to 91.5% (p = 0.0005), 79.1% to 83.4% (p < 0.0001) and 74.9% to 77.5% (p = 0.0028), respectively, for the insulin-naïve group. Decreases in the percentages of patients reporting no problems were seen in the self-care dimension for the prior insulin-experienced group (from 91.9% to 90.5%) and in the usual activities dimension (from 89.2% to 88.2%) for the prior insulin-naïve group, but neither change was statistically significant. There were similar percentages of patients reporting no problems across all other dimensions between the two groups (Table 3).

Linear OLS regression for the change in EQ-5D score

Patients with higher HbA1c levels at baseline, major hypoglycaemia or micro-complications exhibited significantly larger changes in EQ-5D scores than those with lower baseline HbA1c levels, without major hypoglycaemia or micro-complications, after controlling for demographics and other baseline characteristics. HDL and LDL at baseline were negatively associated with the change in EQ-5D scores. Other variables, such as age, sex, duration of diabetes, and patients' prior insulin experience, were not significantly associated with the change in HRQoL (Table 4).

Discussion

This was the first study examining the impact of BIAsp 30 on the HRQoL of people with T2DM in China. The results showed that people with T2DM starting with, or switching to, BIAsp 30 experienced significantly increased HRQoL over 24 weeks. The findings of this study were consistent with previous studies [24,25] in other countries based on A1chieve® that evaluated how patients' HRQoL changed after BIAsp 30 treatment. The efficacy and safety of BIAsp 30 compared with other insulins were shown in randomized controlled trials [26][27][28][29][30], and the effectiveness of BIAsp 30 in near-routine clinical practice was demonstrated by observational studies [31,32].
This study extended the results from the clinical outcomes of BIAsp 30 and added further evidence for decision making through the assessment of humanistic outcomes. HRQoL is considered a multidimensional concept reflecting patients' subjective perceptions of their physical, mental and social functioning [33]. Measuring HRQoL provides a way to capture patients' subjective perceptions of clinical practice and allows a comprehensive evaluation of the health intervention. There is evidence that proper assessment of HRQoL during healthcare management can result in improvements to patients' health [34].

BIAsp 30 treatment was the most likely factor behind the improvements in HRQoL in this study. After treatment with BIAsp 30, the patients' glycaemic control improved and the rates of hypoglycaemic events decreased, and it is recognized that both of these could lead to improvements in HRQoL [35,36]. However, since A1chieve® was non-randomised and lacked a standardised treatment protocol, it should be noted that factors other than BIAsp 30 therapy itself could have contributed to the improvements as well. The circumstances in which BIAsp 30 was started were unknown, and patients' self-management activities might have been enhanced. Concomitant medication and dietary intake were not controlled either [14].

In addition to the impact of BIAsp 30 therapy on HRQoL, this paper also examined predictors of such impacts. The results of the multivariable linear regression showed that patients with a higher HbA1c level, major hypoglycaemia or micro-complications at baseline experienced a larger change in their EQ-5D scores. This finding indicates that patients with worse health conditions at baseline were more likely to experience larger improvements in HRQoL than those with better health conditions.

There were several limitations in this study. First, because the evaluation of HRQoL was based on the observational A1chieve® study, which was non-randomised and lacked a standardised treatment protocol, confounding factors such as improvement of lifestyle might affect patients' HRQoL. Second, the UK preference weights used for the utility calculations of the EQ-5D in this study might differ from comparable Chinese weights and result in an inaccurate evaluation of the change in HRQoL in Chinese people with T2DM. Moreover, although the EQ-5D has been widely used in treatment evaluation for diabetes, disease-specific questionnaires are often regarded as more sensitive than generic measures such as the EQ-5D for capturing the impact of treatment [6]. All of these issues leave room for future research.

Conclusion

This study suggested that BIAsp 30 treatment was associated with improved glycaemic control and HRQoL in people with T2DM in China. Patients with worse health conditions were more likely to experience larger improvements in HRQoL than those with better health conditions.

Additional file

Additional file 1: 130 hospitals and EC information for the A1chieve study in China.

Competing interests

This study was funded by the National Science Foundation of China (71273016) and the Young Foundation of Ministry of Education, Humanities and Social Science Research Projects (10YJC630332); the A1chieve® study was funded by Novo Nordisk (China) Pharmaceuticals Co., Ltd.
Modified Biochanin A Release from Dual pH- and Thermo-Responsive Copolymer Hydrogels

The temperature- and pH-responsive poly(N-isopropylacrylamide-co-acrylic acid), p(NIPAM-co-AA), copolymer was synthesized by free radical polymerization and examined as a carrier for the modified release of biochanin A. Biochanin A is a biologically active methoxylated isoflavone which exhibits estrogenic and other pharmacological activities. Due to its poor aqueous solubility and extensive first-pass metabolism, biochanin A has low bioavailability. The aim of this work was to incorporate biochanin A into the synthesized p(NIPAM-co-AA) copolymer and to examine its release at body temperature and at pH values that correspond to the pH values of the vaginal and rectal cavities. The amount of released biochanin A was monitored by the ultraviolet-visible (UV-Vis) spectroscopy method. The structures of the synthesized p(NIPAM-co-AA) copolymer and of the copolymer with incorporated biochanin A were characterized by using Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). The content of residual monomers in the synthesized copolymer was analyzed by using the high-pressure liquid chromatography (HPLC) method. The swelling behavior of the p(NIPAM-co-AA) copolymer was monitored in relation to the temperature and pH values of the surrounding medium. For modelling the process of p(NIPAM-co-AA) copolymer swelling, the full three-level factorial design was applied.

Introduction

Biochanin A is a methoxylated isoflavone which can be found predominantly in plants of the Fabaceae family, primarily in red clover (Trifolium pratense L.), soybean (Glycine max L.), alfalfa (Medicago sativa L.) and chickpea (Cicer arietinum L.) [1]. The chemical structure of biochanin A is given in Figure 1. Due to its structural similarity to human estrogen, biochanin A can exhibit both estrogenic and antiestrogenic activity depending on the applied concentration [2]. Biochanin A also has other pharmacological activities, such as anti-inflammatory, antioxidant, antimicrobial, antidiabetic, osteoprotective and anticancer activity, primarily against hormone-dependent cancers [3][4][5][6][7][8].

The oral administration of biochanin A is primarily limited by its poor aqueous solubility [9]. Further, pharmacokinetic studies have shown that biochanin A undergoes extensive first- and second-pass metabolism despite efficient penetration through the enterocyte membrane. Due to conjugation and hydrolysis by enzymes and intestinal bacteria, biochanin A undergoes extended enterohepatic recirculation. In this way, the ability of biochanin A to enter the systemic circulation, and hence its bioavailability, is further decreased [10]. Since biochanin A can inhibit P-glycoprotein (P-gp) as well as other transport proteins in the liver and intestines, and represents a potential substrate in glucuronidation and sulfation reactions, biochanin A administered per os can enter into unpredictable interactions with various drugs [11]. The absorption rate of biochanin A is limited by its dissolution rate, so biochanin A is classified as a BCS (Biopharmaceutics Classification System) Class II drug [12].

In order to improve the physico-chemical characteristics and safety of biochanin A, as well as to avoid its interactions with other drugs, it is necessary to develop novel drug delivery systems for biochanin A. Wu et al. [13] have prepared polymeric mixed micelles with Pluronic 127 and Plasdone S630 in a ratio of 1:1 as carriers for biochanin A.
By incorporating biochanin A into mixed micelles, a slower release was achieved over 72 h, with a total released amount of 72.86% ± 7.28%, compared with free biochanin A, which was released rapidly (66.84% ± 6.68%) within the initial 8 h. These results indicated that mixed micelles as carriers for biochanin A increase the solubility and absorption of orally administered biochanin A and consequently increase its bioavailability 2.16-fold [13]. Sachdeva et al. [14] have described the preparation of enteric-coated microparticles with biochanin A as a novel drug delivery system. About 84% of the biochanin A was released from the microparticles within 48 h at pH 6.8, and only 20% of the biochanin A at pH 1.2. Pharmacokinetic studies have shown that the incorporation of biochanin A into microparticles prolongs blood circulation, reduces clearance and enhances the oral bioavailability of biochanin A [14]. Tao et al. [15] have used solid lipid nanoparticles as carriers for biochanin A. The release of biochanin A from the solid lipid nanoparticles was monitored in vitro by using the dialysis bag method in phosphate buffer solution of pH 7.4 at 37 ± 0.5 °C. The released amount of free biochanin A was 97% within the initial 8 h, while the release of incorporated biochanin A from the solid lipid nanoparticles was prolonged, with 95% of the biochanin A released after 48 h [15]. Nanostructured lipid carriers with a biphasic release pattern have also been used as a delivery system for biochanin A. The active substance on the surface of the nanoparticles and in the outer shell was released abruptly during the initial 4 h, while the release rate from the nanoparticles' cores was relatively constant. The incorporation of biochanin A into nanostructured lipid carriers resulted in increased absorption after oral administration, reduced first-pass metabolism and a sevenfold higher bioavailability of biochanin A [16].

In order to avoid extensive first-pass metabolism, carriers for alternative routes of biochanin A administration have also been developed [17]. Hanski et al. have prepared a buccal film formulation with biochanin A using water-soluble polymers (hydroxypropyl cellulose and hydroxypropyl methylcellulose), plasticizers and disintegrants. More than 90% of the biochanin A was released from the film formulation within 4 h. The advantages of this formulation type with water-soluble polymers include the reduction of the applied dose of the active substance by increasing its bioavailability and its entry into the systemic circulation without undergoing first-pass metabolism in the liver [17].

The similarity of hydrogels to living tissue creates many possibilities for their application in biomedicine [18][19][20]. Many patents and scientific papers on the possible applications of hydrogels in drug delivery have been published, but only a few of them have resulted in commercial products. Hydrogels are interesting as drug delivery systems due to their unique physical properties [21]. The high porosity that characterizes hydrogels can be easily adjusted by controlling the crosslinking density within the matrices, and thus their affinity for water. Due to the porous structure of hydrogels, drugs can be incorporated into the matrices and then released under certain conditions. The advantages of hydrogels as drug carriers include the possibility of prolonged, continuous release, which results in maintaining a high local concentration of the pharmaceutically active substance over a long period of time [22].
The active substance can be incorporated into the hydrogel by swelling the hydrogel to equilibrium in a solution of the active substance. The hydrogel can be designed to swell and release the active substance in a specific environment depending on conditions such as pH, ionic strength, temperature, etc. An increase of body temperature leads to contraction of the hydrogel and, subsequently, to release of the active substance [23]. The incorporated drug can be released from the hydrogel through several mechanisms: diffusion-controlled, swelling-controlled, chemically controlled and environmentally controlled [18].

In this paper, a poly(N-isopropylacrylamide-co-acrylic acid) polymeric carrier, p(NIPAM-co-AA), was synthesized for alternative application routes of biochanin A. The copolymer p(NIPAM-co-AA) has specific properties due to the presence of carboxyl and amino groups in its structure, which makes it suitable for controlled drug release [24]. This type of polymer has the ability to absorb large amounts of water or physiological fluids (up to 2000 times its own mass) [25]. It also reacts to external stimuli, primarily pH and temperature, which change its absorption properties. The behavior of the copolymer at a certain pH value depends on the pKa and pKb values of the acidic and basic functional groups in the copolymer. At pH values lower than the pKa, the acrylic acid carboxyl groups are protonated (COOH) and the hydrogel is contracted. At lower pH values, the amino groups in the NIPAM monomer are protonated (NH₂⁺) and the electrostatic repulsive forces increase the hydrophilicity of the polymeric network, so the hydrogel swells [26]. Hydrophilic acrylic acid contributes to a higher degree of water absorption and to a higher value of the lower critical solution temperature, which is close to the physiological body temperature [27]. The listed characteristics of the p(NIPAM-co-AA) copolymer indicate that it can be used as a suitable carrier for drug delivery [28].

Synthesis of the Copolymeric p(NIPAM-co-AA) Hydrogel

The copolymeric hydrogel of poly(N-isopropylacrylamide-co-acrylic acid), p(NIPAM-co-AA), was synthesized by radical polymerization of NIPAM and AA (5 mol%) monomers using 1.5 mol% of EGDM as a cross-linker. The polymerization reaction was initiated by adding 2.7 mol% of AIBN. Acetone was used as the solvent. After dissolving the reactants, the homogenized reaction mixture was injected into a glass tube, which was then sealed. The polymerization reaction was performed in the following temperature regime: 0.5 h at 75 °C, 2 h at 80 °C and 0.5 h at 85 °C. After cooling, the long cylinder of synthesized copolymer was cut into disks (d × h = 5 × 2 mm, where d is the disk diameter and h is the thickness after drying, in mm). The synthesized copolymer was treated for 72 h with methanol (60 cm³ of methanol per 1 g of copolymer) to remove residual reactants. The treated copolymer was rinsed using solutions of methanol/distilled water in ratios of 75%/25%, 50%/50%, 25%/75% and 0%/100% in order to remove the methanol, and then dried at 40 °C to constant weight. The decanted methanol solution was analyzed in order to determine the residual reactant content by using high-pressure liquid chromatography (HPLC). The obtained copolymer was used for the incorporation of biochanin A.

Lyophilization of the Copolymeric p(NIPAM-co-AA) Hydrogel

Lyophilization of the p(NIPAM-co-AA) hydrogel in the swollen state was performed on an LH Leybold Heraeus Lyovac GT2 device (Frekendorf, Switzerland).
The hydrogel was first frozen at −40 °C for 24 h. In the primary drying phase, the amount of the solution was reduced by sublimation at −30 °C and a pressure of 5 Pa during 12 h. In the secondary drying phase (isothermal desorption), the hydrogel was heated at 20 °C and a pressure of 5 Pa. The lyophilized hydrogel was stored at 4-8 °C and used for the incorporation of biochanin A.

Incorporation of Biochanin A into the Copolymeric p(NIPAM-co-AA) Hydrogel

The solution of biochanin A (2 mg/cm³) was prepared by dissolving biochanin A in ethanol (96%, v/v). Samples of 0.020 g of the synthesized lyophilized and non-lyophilized p(NIPAM-co-AA) xerogels were weighed. The samples were covered with 0.6 cm³ of the biochanin A solution and allowed to swell for 4 h. The available amount of biochanin A for incorporation into the copolymer was 60 mg/g of xerogel. After reaching equilibrium, the swollen p(NIPAM-co-AA) hydrogels with incorporated biochanin A were separated from the solution by decanting. The hydrogel samples were washed with distilled water to remove excess biochanin A. The mass of the samples with incorporated biochanin A was measured in order to calculate the loading efficiency. The content of the incorporated biochanin A in the synthesized copolymers (lyophilized and non-lyophilized) was determined by measuring the mass of the samples before and after swelling in the biochanin A solution. The loading efficiency (η) of biochanin A was calculated using Equation (1):

η = (L_g/L_u) × 100%,    (1)

where L_g is the mass of biochanin A incorporated into the hydrogel (mg/g of xerogel) and L_u is the initial mass of biochanin A in the swelling solution (mg/g of xerogel).

Modified Release of Biochanin A from the Copolymeric p(NIPAM-co-AA) Hydrogel

The swollen lyophilized and non-lyophilized p(NIPAM-co-AA) hydrogels with incorporated biochanin A were covered with 2 cm³ of the appropriate medium: a solution of hydrochloric acid of pH 4.5 or a solution of sodium hydroxide of pH 7.9. The samples were stirred and thermostated in a water bath at 37 °C for 4 h. The release of biochanin A was monitored by sampling 100 µL of the solution over time (0, 0.5, 1, 2 and 4 h) and diluting it with ethanol (96%, v/v) to a volume of 2 cm³. The absorbance of the prepared samples was determined at 261 nm by the ultraviolet-visible (UV-Vis) spectroscopy method.
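The calculations behind Equation (1) and the release sampling can be illustrated with a minimal sketch; the calibration slope and intercept, the absorbances and the incorporated mass below are hypothetical placeholders rather than measured values:

```python
# Sketch: loading efficiency (Equation (1)) and the released amount computed
# from UV-Vis absorbances via a linear calibration A = a*c + b.
# All numeric values are illustrative placeholders.

L_u = 60.0    # available biochanin A, mg per g of xerogel
L_g = 48.5    # hypothetical incorporated amount, mg per g of xerogel
print(f"loading efficiency = {L_g / L_u * 100:.1f} %")

a, b = 0.095, 0.002     # hypothetical calibration slope (per ug/cm^3) and intercept
dilution = 20.0         # 100 uL of sample diluted to 2 cm^3
medium_volume = 2.0     # cm^3 of release medium

def released_ug(absorbance: float) -> float:
    """Cuvette concentration -> medium concentration -> released mass (ug)."""
    c_cuvette = (absorbance - b) / a
    return c_cuvette * dilution * medium_volume

for t_h, A in [(0.5, 0.21), (1, 0.34), (2, 0.48), (4, 0.55)]:
    print(f"t = {t_h} h: released ~ {released_ug(A):.0f} ug")
```

In practice, the small volume withdrawn at each sampling point would also be corrected for, which is omitted here for brevity.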
The synthesized p(NIPAM-co-AA) xerogel and the xerogel with incorporated biochanin A were ground to powder in an amalgamator (WIG-L-Bug, Dentsply RINN, a Division of Dentsply International Inc., York, PA, USA). FTIR spectra of biochanin A, the monomer NIPAM and the xerogels were recorded by the technique of thin transparent pellets, obtained by vacuuming and pressing under a pressure of about 200 MPa. The pellets were prepared by mixing 150 mg of KBr and 1 mg of the sample. The comonomer AA was recorded as a thin film between two plates of zinc selenide (ZnSe). FTIR spectra were recorded in the range of wavenumbers from 4000 to 400 cm⁻¹ on a Bomem Hartmann and Braun MB-series FTIR spectrophotometer (Hartmann & Braun, Baptiste, Quebec, QC, Canada). The obtained spectra were analyzed using the Win-Bomem Easy software.

High-Pressure Liquid Chromatography (HPLC)

The HPLC method was applied for qualitative and quantitative analysis of the residual reactant content in the synthesized p(NIPAM-co-AA) copolymer. The combined methanol extracts obtained after processing the synthesized copolymer were used for the analysis. The analysis was performed on an Agilent 1100 Series HPLC device (Waldbronn, Germany) with a Zorbax Eclipse XDB-C18 column, 250 × 4.6 mm, 5 µm (Agilent Technologies, Inc., Santa Clara, CA, USA), at 25 °C. Methanol was used as the mobile phase with a flow rate of 1 cm³/min. The injected sample volume was 10 µL. The detection was performed on a 1200 Series diode array detector (DAD) at wavelengths of 205 nm for AA and EGDM, and 220 nm for NIPAM. For the construction of the calibration curves, series of standard solutions of known concentrations were prepared. All samples were filtered on an Econofilter with a pore diameter of 0.45 µm and used for the HPLC analysis. The recorded spectra were processed using Agilent ChemStation software. Based on the constructed calibration curves, the equations for determining the content of NIPAM, AA and EGDM in the combined methanol extracts obtained by processing the synthesized p(NIPAM-co-AA) copolymer were obtained. The calibration curve for NIPAM was linear in the concentration range of 0.005-0.506 mg/cm³; the corresponding Equation (2) has a correlation coefficient of R² = 0.997. The calibration curve for AA was linear in the concentration range of 0.010-0.300 mg/cm³; the corresponding Equation (3) has a correlation coefficient of R² = 0.989. The calibration curve for EGDM was linear in the concentration range of 0.005-0.264 mg/cm³; the corresponding Equation (4) has a correlation coefficient of R² = 0.989. In Equations (2)-(4), A is the peak area (mAU·s) and c is the content of NIPAM, AA and EGDM (mg/cm³), respectively. From the peak integration data of the tested methanol extract samples, the obtained peak area values were within the range of the calibration curves.

Swelling Study

The lyophilized and non-lyophilized p(NIPAM-co-AA) xerogels were immersed in solutions of certain pH values (3.5, 6.0 and 8.5) and the swelling process was monitored gravimetrically. The solutions of the given pH values were prepared using HCl or NaOH, and the acidity was measured using a digital pH meter (HI9318-HI9219, HANNA, Woonsocket, RI, USA). The hydrogel samples were taken out of the solutions and the excess solution was removed from their surface. The sample mass was measured at certain periods of time until equilibrium was reached, i.e., until constant mass of the hydrogel. The swelling degree, α, was calculated according to Equation (5):

α = (m − m₀)/m₀,    (5)

where m₀ is the mass of the dry gel and m is the mass of the swollen hydrogel at the moment of time t. Equation (6) is applied to analyze the nature of the diffusion process of the solvent within the hydrogel matrix. This equation is valid for the initial phase of swelling (M_t/M_e ≤ 0.6) [29,30]:

F = M_t/M_e = k·t^n,    (6)

where F is the fractional sorption, M_t is the mass of the absorbed solvent at time t, M_e is the mass of the absorbed solvent in the equilibrium state, k is a constant characteristic for a certain type of polymer network (min^(1/n)) and n is the diffusion exponent. By taking the logarithm of Equation (6), Equation (7) is obtained:

ln F = ln k + n·ln t.    (7)

The values of the diffusion exponent n and the constant k can be determined from the slope and the intercept, respectively, of the linear relationship between ln F and ln t. The mechanism of the solvent diffusion is determined by the value of the diffusion exponent n.
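A minimal sketch of the fit behind Equation (7), assuming hypothetical gravimetric data (times in minutes and fractional sorptions F = M_t/M_e restricted to the initial phase, F ≤ 0.6):

```python
# Estimate the diffusion exponent n and constant k from Equation (7):
# ln F = ln k + n ln t, fitted by least squares on the early-time data.
import numpy as np

t = np.array([5, 10, 20, 40, 80], dtype=float)    # min, hypothetical
F = np.array([0.10, 0.15, 0.22, 0.33, 0.49])      # M_t/M_e <= 0.6, hypothetical

n, ln_k = np.polyfit(np.log(t), np.log(F), 1)     # slope = n, intercept = ln k
k = np.exp(ln_k)
print(f"n = {n:.2f}, k = {k:.3f}")
# n ~ 0.5 would indicate Fickian transport; 0.5 < n < 1 anomalous transport.
```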
"Less Fickian" diffusion is the mechanism of the solvent diffusion at n < 0.5, and the solvent transport in the polymeric matrix is considerably slower than the relaxation of polymer chains. The hydrogel swelling can be controlled by both solvent diffusion into the matrix and the relaxation of polymer chains (0.5 < n < 1), which corresponds to non-Fickian diffusion (anomalous diffusion mechanism). If n = 1, the solvent diffusion process is much faster than the relaxation of polymer chains (Type II, Case II). If n > 1, the hydrogel swelling is also controlled by the polymer chains' relaxation (Type III, Case III, Super Case II) [29][30][31][32][33][34]. Besides the mechanism of the solvent absorption, it is necessary to determine the solvent molecule diffusion coefficient (D). The most commonly used method for determining the diffusion coefficient, D, taking into account only the initial phase of swelling during which the thickness of the sample basically remains constant [34], is presented as Equation (8): where D is the diffusion coefficient (cm 2 /min) and l is the thickness of the dry hydrogel (cm). By taking the logarithm of Equation (8), Equation (9) is obtained (the linear relationship between ln(M t /M e ) and lnt): The diffusion coefficient, D, can be calculated from the intercept of the linear relationship between ln(M t /M e ) and lnt. Modelling the Process of p(NIPAM-co-AA) Copolymer Swelling The experimental design is a structured and organized way of conducting and analyzing controlled experiments in order to evaluate the factors (independent variables, X) that affect the system response (dependent variable, Y) [35]. For modelling the process of p(NIPAM-co-AA) hydrogels swelling, the full three-level factorial design was applied. In full factorial design, the effects of all experimental factors and their interaction effects on the system response are investigated [36]. The system response is the equilibrium swelling degree (α e ) of the p(NIPAM-co-AA) copolymeric hydrogels. Factors and levels applied in the three-level factorial design are given in Table 1. The analysis of variance (ANOVA) test was used for selection and evaluation of the model adequacy and statistically significant factors in the model. The factors and interactions with values of probability levels (p) lower than 0.05 were considered as statistically significant members. The optimization of the swelling process of p(NIPAM-co-AA) hydrogels was performed by using Design-Expert ® software, version 7.0.0 (Stat-Ease Inc., Minneapolis, MN, USA). Table 1. Factors and levels in full three-level factorial design for swelling process of p(NIPAM-co-AA) hydrogels. Factors Coded Actual Level Values Coded Actual The absorbances of the biochanin A samples for construction of the calibration curve and monitoring the modified release from p(NIPAM-co-AA) hydrogels (lyophilized and non-lyophilized) were measured at 261 nm in the quartz cuvette (1 × 1 × 4.5 cm) on a Varian Cary-100 spectrophotometer (Mulgrave, Victoria, Australia) at room temperature. UV spectra were processed using the Cary WinUV software. Ethanol (96%, v/v) was used as a blank. The calibration curve was constructed as the dependence of the absorbance at 261 nm on the known concentration of the biochanin A. The stock solution of the biochanin A (2 mg/cm 3 ) was prepared by dissolving biochanin A in ethanol (96%, v/v) and then diluted with ethanol in the concentration range of 2-10 µg/cm 3 . 
The obtained calibration curve for biochanin A, with a correlation coefficient (R²) of 0.999, is given in Equation (10).

Scanning Electron Microscopy (SEM)

Scanning electron microscopy (SEM) was used to examine the morphology of the synthesized copolymeric p(NIPAM-co-AA) hydrogel. The samples of the synthesized p(NIPAM-co-AA) copolymer and of the copolymer with incorporated biochanin A in the equilibrium swelling state were lyophilized on an Edwards Mini Fast 680 laboratory freeze-dryer (Edwards Ltd, Eastbourne, UK). The lyophilized samples were immersed into liquid nitrogen before cutting, to prevent breakage and deformation. After that, the samples were coated with an alloy of gold and palladium (85%/15%) under vacuum in a Fine Coat JEOL JFC-1100 Ion Sputter (JEOL Ltd., Tokyo, Japan). The metalized samples of p(NIPAM-co-AA) were scanned with a JEOL JSM-5300 scanning electron microscope (JEOL Ltd., Tokyo, Japan).

Synthesis of the Poly(N-Isopropylacrylamide-co-Acrylic Acid) Polymer

A simultaneous response to various external stimuli, such as the temperature and pH of the surrounding medium, is one of the important conditions for the application of hydrogels, especially as drug carriers [37][38][39]. In order to obtain a temperature- and pH-responsive hydrogel, the NIPAM monomer was copolymerized with the ionic monomer AA, using EGDM as a cross-linker. The free radical polymerization reaction was initiated by the 2-cyano-2-propyl radical formed at high temperature by degradation of the initiator 2,2′-azobis(2-methylpropionitrile) (Figure 2). The primary radicals formed in the initiation phase react with the monomer and cross-linker molecules to form radical species, whose structures are shown in Figure 3. In the propagation and termination phases of the polymerization reaction, a cross-linked structure of the p(NIPAM-co-AA) copolymer was formed. The possible structure of the copolymeric hydrogel network is shown in Figure 4. The synthesized p(NIPAM-co-AA) hydrogel was characterized using the FTIR method, the content of the residual reactants was determined, and the swelling degree as a function of pH and temperature was examined.

FTIR Spectroscopy Analysis

The FTIR spectrum of the NIPAM monomer is given in Figure 5. The FTIR spectrum of the comonomer acrylic acid is shown in Figure 6. In the FTIR spectrum of AA (Figure 6), a wide absorption band originating from the O-H valence vibrations, ν(OH), of the carboxyl group is observed in the range of wavenumbers from 3500 to 3200 cm⁻¹, which is in accordance with the literature [40]. In this area, the band originating from the isolated vinyl hydrogen atom (=C-H) is also expected, but it is difficult to observe due to overlapping with the band from the O-H valence vibrations; this band gives a weak maximum at 3067 cm⁻¹. The absorption band in the range of wavenumbers from 1710 to 1690 cm⁻¹ in the spectra of aliphatic carboxylic acids is assigned to the valence vibrations of C=O groups, ν(C=O). In the spectrum of AA, it appears as a strong-intensity band with a maximum at 1702 cm⁻¹. Due to conjugation with the vinyl group, this band is shifted to lower wavenumbers. For the same reason, the band of the valence vibrations of the C=C bond from the vinyl group, ν(C=C), is shifted to lower wavenumbers and appears as a strong-intensity band with a maximum at 1614 cm⁻¹. The valence vibrations of the C-O bond, ν(C-O), coupled with the in-plane deformation vibrations, δ(OH), give two bands in the spectrum of AA with maxima at 1433 and 1241 cm⁻¹, and confirm the presence of the COOH group.
The characteristic bands originating from the out-of-plane bending vibrations of the vinyl C-H bond appear at 1044 and 982 cm⁻¹. The presence of these bands in the spectrum of AA indicates that the double bond is monosubstituted. The FTIR spectrum of the synthesized copolymer p(NIPAM-co-AA) with 5 mol% of acrylic acid and 1.5 mol% of the cross-linker EGDM is shown in Figure 7. In the FTIR spectrum of the synthesized p(NIPAM-co-AA) copolymer (Figure 7), the absence and shifts of certain characteristic absorption bands of NIPAM and AA can be observed. The absence of the absorption bands originating from the valence vibrations of the vinyl C=C bonds, ν(C=C), of the monomers and cross-linker in the range of 1640-1620 cm⁻¹, and of the in-plane deformation vibrations, δ(=C-H), in the range of 1450-1200 cm⁻¹, indicates successful polymerization. Also, the absence of the bands originating from the out-of-plane C-H bending vibrations, γ(=C-H), which occur in the NIPAM and AA spectra at 989 and 918 cm⁻¹, and at 1044 and 982 cm⁻¹, respectively, clearly indicates that the vinyl groups of the monomers participated in the polymerization process. The carboxyl and alkylated amide groups from the AA and NIPAM monomers, respectively, are preserved in the structure of the copolymer, which is indicated by the presence of the corresponding bands in the FTIR spectrum of the copolymer. The broad absorption band in the FTIR spectrum of the copolymer has two saddles, one at 3488 cm⁻¹ assigned to the O-H valence vibrations, ν(OH), from the carboxyl group of the AA comonomer, and the other at 3298 cm⁻¹ assigned to the N-H valence vibrations, ν(N-H), from the NIPAM monomer [41]. The absorption band with the maximum at 1720 cm⁻¹ is assigned to the valence C=O vibrations of the carboxyl group of the AA comonomer. The maximum of this band is shifted towards higher wavenumbers by 18 units relative to the same band in the FTIR spectrum of the AA comonomer (Figure 6). In the FTIR spectrum of the copolymer, there is an absorption band at 1653 cm⁻¹ which corresponds to the amide I band originating from the valence vibrations ν(C=O); it is shifted by 5 units towards lower wavenumbers in relation to the same band in the FTIR spectrum of NIPAM (Figure 5). The analysis of the FTIR spectra of the monomers and the copolymer indicates that polymerization was achieved and that the assumed structure of the copolymer p(NIPAM-co-AA) (Figure 4) is accurate.

Residual Reactant Analysis

The HPLC method was used to analyze the methanol solutions obtained from the p(NIPAM-co-AA) copolymer in order to determine the amount of residual unreacted monomers and cross-linker. Under the selected chromatographic conditions, the retention time (R_t) of 3.278 min corresponds to NIPAM, R_t = 3.082 min to AA, and R_t = 3.403 min to EGDM. The HPLC chromatograms and UV spectra of the monomers and cross-linker are shown in Figure 8a (I and II), b (I and II) and c (I and II), respectively. The unreacted amounts of monomers and cross-linker from the p(NIPAM-co-AA) copolymer synthesis, calculated in relation to the total mass of the synthesized xerogel as well as in relation to their initial amounts in the reaction mixture, are presented in Table 2. The obtained residual reactant content was within acceptable limits and indicates almost complete conversion of the initial compounds in the process of p(NIPAM-co-AA) copolymer synthesis. The total content of the residual reactants is less than 1%, which is in accordance with the permitted limits for similar materials [42].
Since the toxicity of residual reactants depends on their content, the synthesized copolymer p(NIPAM-co-AA) can be considered safe for use as a carrier of active substances.

Swelling Study

In order to achieve safe application of p(NIPAM-co-AA) hydrogels, the stability of NIPAM microgels with different contents of AA under various conditions of temperature, pH and sodium chloride concentration was examined [43]. It has been shown that an increase of the temperature and sodium chloride concentration, as well as a decrease of pH, cause aggregation and reduce the stability of the microgel. The p(NIPAM-co-AA) microgel is unstable at high sodium chloride concentrations, at temperatures higher than 45 °C and at pH lower than 2.25 [43]. Taking into account the stability of p(NIPAM-co-AA) microgels and previous research on p(NIPAM-co-AA) hydrogels [44,45], the swelling study of the synthesized p(NIPAM-co-AA) hydrogel as a potential carrier for alternative routes of administration was conducted in the temperature range of 25-37 °C and at pH 3.5-8.5, which correspond to the temperature and pH of the vaginal and rectal cavities. The changes in the swelling degree of the p(NIPAM-co-AA) hydrogel before and after lyophilization, as a function of time, in solvents with different pH values (3.5 and 8.5) and temperatures (25 and 37 °C), are shown in Figures 9 and 10.

The sample of the p(NIPAM-co-AA) hydrogel was swollen to equilibrium in distilled water and lyophilized to obtain a polymer with a large pore size. Therefore, the lyophilized p(NIPAM-co-AA) hydrogel absorbed solvent rapidly and had a higher swelling degree in the initial phase of swelling (Figure 9). It can be observed (Figure 9) that the lyophilized p(NIPAM-co-AA) hydrogel reached lower values of the equilibrium swelling degree at both pH values (3.5 and 8.5), which could be a consequence of the reduced flexibility of the polymer chains and lower solvent absorption during re-swelling. Both the non-lyophilized and the lyophilized p(NIPAM-co-AA) hydrogels had significantly greater values of the equilibrium swelling degree in the alkaline medium at pH 8.5 (269.323 and 259.218) than in the acidic medium at pH 3.5 (11.531 and 11.226). The increase of the pH value caused the expansion of the polymer network due to electrostatic repulsion between the numerous ionized carboxyl groups (COO⁻) of the polymer chains [46]. A comparative analysis of the swelling of the non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogels at 37 °C (Figure 10) showed a behavior of the polymers similar to that at 25 °C. The polymer network that had already reached equilibrium was rigid after lyophilization, and the lyophilized p(NIPAM-co-AA) hydrogel had lower equilibrium swelling degrees at both pH values (3.5 and 8.5). The lyophilized p(NIPAM-co-AA) hydrogel absorbed solvent faster at 37 °C in the first 100 min of swelling, due to the larger pore size obtained during lyophilization. The increase of the temperature from 25 to 37 °C caused the contraction of the non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogels and a decrease of the equilibrium swelling degree (Figures 9 and 10), so this copolymeric hydrogel is classified as negatively temperature-sensitive. The values of the kinetic parameters (n, k and D) for the swelling process of the p(NIPAM-co-AA) hydrogel before and after lyophilization at 25 and 37 °C and at pH values of 3.5 and 8.5 are given in Table 3.
Table 3. Kinetic parameters of p(NIPAM-co-AA) hydrogel swelling before and after lyophilization at different pH values (3.5 and 8.5) and temperatures (25 and 37 °C). The diffusion exponent n of the p(NIPAM-co-AA) hydrogel before lyophilization had a value of 0.822-1.014 (Table 3). The swelling of the p(NIPAM-co-AA) hydrogel before lyophilization was controlled by the solvent diffusion and the relaxation of polymer chains (anomalous diffusion mechanism). The exception was the swelling of the hydrogel at 25 °C and pH 3.5, which was controlled by the relaxation of polymer chains (Super Case II). After lyophilization, the swelling of the p(NIPAM-co-AA) hydrogel at all temperature and pH values was controlled by the solvent transport into the polymer matrix ("less Fickian" diffusion) (n < 0.5, Table 3). The lyophilized p(NIPAM-co-AA) hydrogel had higher values of the diffusion coefficient D and constant k (Table 3), which indicated a higher degree of solvent penetration into the hydrogel. This result can be explained by the fact that the lyophilized hydrogel has larger distances between the nodes of the polymer network, which enables faster solvent diffusion. In order to examine the influence of process factors (pH and temperature) on the system response (equilibrium swelling degree, αe) of the p(NIPAM-co-AA) copolymer with 5 mol% of AA and 1.5 mol% of EGDM, nine experiments were performed. The matrix of the full two-factor three-level experimental design (3²) with experimental values of the responses is shown in Table 4. Table 4. Matrix of the full factorial design with experimental values of the responses. The quadratic model is better and more acceptable in comparison to the linear and two-factor interaction (2FI) models for representing the influence of temperature and pH of the solution on the equilibrium swelling degree of the synthesized p(NIPAM-co-AA) hydrogel. The quadratic model has the highest value of the coefficient of determination (R² = 0.98) and adjusted coefficient of determination (adj. R² = 0.95) and the lowest value of standard deviation. The results of the analysis of variance (ANOVA) for the quadratic model of the equilibrium swelling of p(NIPAM-co-AA) hydrogel are given in Table 5. The quadratic model is statistically significant, because the p-value is lower than 0.05 (Table 5, p = 0.0089). Both independent variables, temperature (X₁) and pH (X₂), were statistically significant members of the model, as was the squared term of the pH variable (X₂², p = 0.0342). The final equation with coded values (Equation (11)) for the quadratic model of equilibrium swelling of the p(NIPAM-co-AA) hydrogel was obtained; when the coded values were replaced by the actual ones, the second-degree polynomial Equation (12) was obtained. Equations with coded values were used to determine the variables' effects on the system response. A higher absolute value of the regression coefficient indicates a greater influence of the corresponding variable on the system response. The sign in front of the regression coefficient determines the type of variable influence on the response. A positive sign indicates a positive effect of the variable on the system response, whereas a negative one indicates a negative effect [47,48].
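Since the experimental response values of Table 4 could not be reproduced here, the following sketch shows how the coded quadratic model of such a 3² design can be refitted by ordinary least squares; the nine response values below are hypothetical placeholders, not the measured swelling degrees.

import numpy as np

# Sketch: fitting the coded quadratic response-surface model
#   alpha_e = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1**2 + b22*X2**2
# for a full 3^2 factorial design (X1 = coded temperature, X2 = coded pH).

levels = [-1, 0, 1]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Design matrix: intercept, linear, interaction and quadratic terms.
D = np.column_stack([np.ones_like(x1, dtype=float), x1, x2, x1 * x2, x1**2, x2**2])

# Placeholder responses (equilibrium swelling degrees of the nine runs).
alpha_e = np.array([11.5, 11.1, 10.6, 142.0, 133.0, 124.0, 269.3, 259.2, 249.0])

b, *_ = np.linalg.lstsq(D, alpha_e, rcond=None)
pred = D @ b
r2 = 1.0 - np.sum((alpha_e - pred) ** 2) / np.sum((alpha_e - alpha_e.mean()) ** 2)
print("coefficients b0..b22:", np.round(b, 2), "R^2 =", round(r2, 3))

Replacing the coded columns with actual temperature and pH values in the same least-squares fit would reproduce an uncoded polynomial of the form of Equation (12).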
It can be observed that pH (X₂) has the highest regression coefficient (100.22, Equation (11)) and thus the greatest influence on the equilibrium swelling degree of the p(NIPAM-co-AA) hydrogel. According to the equation with coded values, temperature (X₁) has less influence on the system response. Based on the sign in front of the regression coefficients, it can be concluded that increasing the pH value causes an increase in the equilibrium swelling degree of the p(NIPAM-co-AA) hydrogel, while temperature has the opposite effect. The functional dependence of the system response on the variables is shown in Figure 11. From Figure 11, it can be observed that with the increase of medium pH, the equilibrium swelling degree of p(NIPAM-co-AA) hydrogel increases. According to the model, the maximum value of the swelling degree of p(NIPAM-co-AA) hydrogel was obtained in the solution with pH 8.5 at 25 °C, αe = 281.11, while under the same conditions the experimental value was αe = 269.32. The copolymer with incorporated biochanin A was analyzed by the FTIR method together with biochanin A and the synthesized p(NIPAM-co-AA) copolymer (Figure 12). In the FTIR spectrum of biochanin A, a strong, broad absorption band in the range of 3400-3200 cm⁻¹ with a maximum at 3261 cm⁻¹ is assigned to valence vibrations of phenolic OH groups, ν(OH). The characteristic valence vibrations of the phenolic C-O bond, ν(C-O)Ar, give a strong band in the range of 1260-1000 cm⁻¹, which is located at 1176 cm⁻¹ in the spectrum of biochanin A [14,49,50]. In-plane deformation vibrations of hydroxyl groups, δ(OH), occur in the range of 1500-1300 cm⁻¹ and give a low-intensity band with a maximum at 1323 cm⁻¹ in the spectrum of biochanin A. The strong absorption band with a maximum at 1661 cm⁻¹ can be assigned to valence vibrations of the carbonyl group, ν(C=O). The characteristic absorption bands at 1625, 1585 and 1515 cm⁻¹ in the spectrum of biochanin A originate from valence vibrations of aromatic double bonds, ν(C=C)Ar, which is in accordance with the literature [49]. The asymmetric valence vibrations of the ether C-O-C bond, νas(C-O-C), give two strong bands in the range of 1275-1200 cm⁻¹, and they are found at 1258 and 1237 cm⁻¹ in the spectrum of biochanin A. By incorporating biochanin A into the p(NIPAM-co-AA) copolymer, the formation of hydrogen bonds between the phenolic OH groups of biochanin A (proton donor) and oxygen from the C=O and C-O groups of the side chains of the p(NIPAM-co-AA) copolymer (proton acceptor) is expected. Besides that, the C=O group of biochanin A can form hydrogen bonds with the NH and OH groups of the side chains of the p(NIPAM-co-AA) hydrogel (proton donors). In the FTIR spectrum of the p(NIPAM-co-AA) copolymer with incorporated biochanin A, the band originating from valence vibrations of the OH group of AA is shifted by 11 units to lower wavenumbers (3477 cm⁻¹) compared to its position in the spectrum of the p(NIPAM-co-AA) copolymer. The decrease of the valence vibrations' frequency indicates participation of the OH groups in the formation of hydrogen bonds, and the magnitude of this decrease is proportional to the strength of the formed bond. The maximum at 3323 cm⁻¹ in the FTIR spectrum of the copolymer with biochanin A, originating from valence vibrations of the N-H group, ν(N-H), is shifted by 25 units towards higher wavenumbers in relation to its position in the spectrum of the p(NIPAM-co-AA) copolymer.
The shifting of the maximum originating from deformation vibrations of N-H groups, δ(N-H), by 1 unit towards higher wavenumbers (1544 cm⁻¹) in the spectrum of the copolymer with biochanin A indicated that the N-H group participates in the formation of hydrogen bonds. The position of the amide band I, ν(C=O), remained the same (1653 cm⁻¹) after biochanin A incorporation into the p(NIPAM-co-AA) hydrogel, indicating that this group did not participate in the intermolecular interactions between biochanin A and the p(NIPAM-co-AA) hydrogel [51]. The absorption bands of the valence vibrations of the C-O bond in the FTIR spectrum of the copolymer with biochanin A at 1249 cm⁻¹ (νas(C-O)) and 1176 cm⁻¹ (νs(C-O)) are shifted by 7 and 4 units, respectively, towards higher wavenumbers in relation to their positions in the spectrum of the p(NIPAM-co-AA) copolymer. The maximum of the absorption band of the carbonyl group, ν(C=O), at 1717 cm⁻¹ is shifted by 3 units to lower wavenumbers in the spectrum of the copolymer with incorporated biochanin A compared to its position in the spectrum of the p(NIPAM-co-AA) copolymer. The mentioned shifts also indicate the participation of C=O groups in the formation of intermolecular hydrogen bonds. In-plane deformation vibrations, δ(OH), give one band with a maximum at 1387 cm⁻¹ in the spectrum of the copolymer with incorporated biochanin A, which is shifted by 64 units to higher wavenumbers relative to its position in the spectrum of biochanin A, indicating that OH groups participated in the formation of strong intermolecular hydrogen bonds, which is in accordance with the literature data [49]. Based on the FTIR analysis, the structure of the p(NIPAM-co-AA) copolymer with incorporated biochanin A and the formed intermolecular hydrogen bonds between the copolymer and biochanin A is given in Figure 13. Scanning Electron Microscopy Analysis The morphology of the synthesized p(NIPAM-co-AA) hydrogel and of biochanin A, and the influence of the incorporated biochanin A on the morphology of the hydrogel, were examined using the SEM method. The hydrogel samples were swollen to equilibrium and then lyophilized in order to understand the morphology better. The obtained SEM micrographs are shown in Figure 14. The pore size of the synthesized p(NIPAM-co-AA) copolymer in the swollen state goes up to 100 µm. The pores are fairly uniform, and this structural organization of the polymer network corresponds to macroporous polymers and provides enough free space for the incorporation of different molecules (Figure 14a). The incorporation of crystalline biochanin A (Figure 14c) into the p(NIPAM-co-AA) hydrogel affects the cross-sectional morphology of the hydrogel, making it less porous because the pores are filled with biochanin A molecules (Figure 14b). The results obtained by SEM analysis indicate incorporation of biochanin A into the p(NIPAM-co-AA) copolymer, which is in accordance with the results obtained by FTIR analysis. The Loading Efficiency of Biochanin A into the p(NIPAM-co-AA) Hydrogel The loading efficiency of biochanin A into the polymeric network of non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogels was determined in relation to the total available mass of biochanin A (Lu = 60 mg/g xerogel). The masses of the non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogels and incorporated biochanin A (Lg), as well as the loading efficiency (η), are shown in Table 6. Table 6. The masses of xerogels and incorporated biochanin A (Lg) and loading efficiency (η).
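The loading efficiency values of Table 6 follow directly from the ratio of incorporated to available drug. A minimal sketch of the calculation is given below; Lu = 60 mg/g xerogel is taken from the text, while the incorporated amounts are back-calculated from the efficiencies reported in the next paragraph and are therefore only illustrative.

# Minimal sketch: loading efficiency of biochanin A.
# L_U is taken from the text; the L_g values are back-calculated
# from the reported efficiencies and serve only as illustration.

L_U = 60.0  # mg of available biochanin A per g of xerogel

def loading_efficiency(l_g_mg_per_g):
    # eta (%) = incorporated amount / available amount * 100
    return 100.0 * l_g_mg_per_g / L_U

for label, l_g in (("non-lyophilized", 55.66), ("lyophilized", 58.32)):
    print(label, round(loading_efficiency(l_g), 3), "%")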
The presented results show satisfactory loading efficiency of biochanin A into the polymeric network of the non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogels. The loading efficiency for the lyophilized p(NIPAM-co-AA) hydrogel is higher (97.205%) compared to the non-lyophilized hydrogel (92.767%), which is in accordance with the results of the swelling study. In Vitro Release of Biochanin A from p(NIPAM-co-AA) Copolymer Release of biochanin A from non-lyophilized and lyophilized p(NIPAM-co-AA) copolymers was monitored in vitro at 37 °C and pH 4.5 and 7.9, which simulate body temperature and the pH environments of the vaginal and rectal spaces [51,52], using the UV/Vis method. The results of these studies are shown in Figure 15a,b, respectively. Results of biochanin A release at 37 °C in a fluid at pH 4.5 for 12 h show that the released amount from the non-lyophilized p(NIPAM-co-AA) copolymer is 24.82 mg/g xerogel (41.37%), and from the lyophilized one 27.71 mg/g xerogel (46.18%), of the total available amount (Figure 15a). In both copolymers, after 12 h, more than 50% of biochanin A remained in the pores, which provides the possibility of prolonged release in a medium that simulates the vaginal space. The content of biochanin A released from the non-lyophilized copolymer p(NIPAM-co-AA) in a fluid at pH 7.9 and 37 °C for 12 h is 50.57 mg/g xerogel (84.28%), while the content of biochanin A released from the lyophilized copolymer is 53.29 mg/g xerogel (88.83%) relative to the available amount, under pH conditions corresponding to the rectum. The studies show that the pH of the surrounding medium and lyophilization have an effect on the release of biochanin A from the copolymer p(NIPAM-co-AA). From both copolymers, non-lyophilized and lyophilized, a higher amount of biochanin A (84.28-88.83%) is released at pH 7.9 than at pH 4.5 (41.37-46.18%). At a physiological body temperature (37 °C), which is higher than the volume phase transition temperature (lower critical solution temperature, LCST) of the p(NIPAM-co-AA) copolymer [44,45], the intermolecular interactions between biochanin A and the side groups of the polymer matrix are broken and contraction of the polymer matrix starts, which initiates drug release. The kinetic parameters of the release of biochanin A from the matrix of the copolymer p(NIPAM-co-AA), calculated using Equations (6)-(9) (Table 7), show that the process follows the "less Fickian" diffusion law. The drug transport is slower than the polymer chain relaxation process and is controlled by the diffusion process. Higher values of the diffusion coefficient, D, for the lyophilized p(NIPAM-co-AA) hydrogel indicate a higher rate of biochanin A release, which is expected due to the wider distance between the nodes of the polymer network. According to the test results, it has been shown that the copolymer p(NIPAM-co-AA), non-lyophilized and lyophilized, can be suitable as a carrier for modified release of biochanin A for rectal and vaginal application. Formulations of biochanin A with the pH- and thermo-sensitive p(NIPAM-co-AA) copolymer, non-lyophilized and lyophilized, may be of interest for further testing. Table 7. Kinetic parameters of biochanin A release from poly(N-isopropylacrylamide-co-acrylic acid), p(NIPAM-co-AA), non-lyophilized and lyophilized, at pH values of 4.5 and 7.9 and at 37 °C.
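The kinetic parameters n and k of Table 7 (and of Table 3 for swelling) come from fitting the power law Mt/M∞ = k·tⁿ to the fractional-release curve. A minimal sketch of that fit is given below; the time and release arrays are hypothetical placeholders, and, as is conventional for this model, only the early portion of the curve (Mt/M∞ ≤ 0.6) is used.

import numpy as np

# Sketch: estimating the power-law (Korsmeyer-Peppas-type) parameters
#   Mt/Minf = k * t**n
# by linear regression on log(Mt/Minf) = log(k) + n*log(t).
# The data below are hypothetical placeholders, not study measurements.

t_min = np.array([15.0, 30.0, 60.0, 120.0, 240.0, 360.0])   # time, min
frac = np.array([0.10, 0.13, 0.18, 0.24, 0.32, 0.41])       # Mt/Minf

mask = frac <= 0.6            # conventional validity range of the model
n, log_k = np.polyfit(np.log(t_min[mask]), np.log(frac[mask]), 1)
k = np.exp(log_k)
print(f"n = {n:.3f}, k = {k:.4f}")   # n < 0.5 indicates 'less Fickian' transport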
Conclusions The poly(N-isopropylacrylamide-co-acrylic acid), p(NIPAM-co-AA), copolymer was synthesized by free radical polymerization of the N-isopropylacrylamide (NIPAM) monomer with 5 mol% of acrylic acid (AA) and 1.5 mol% of the cross-linker ethylene glycol dimethacrylate (EGDM) for alternative routes of biochanin A application. The increase of temperature caused the contraction of the non-lyophilized and lyophilized p(NIPAM-co-AA) hydrogel and the decrease of the equilibrium swelling degree, so this copolymeric hydrogel is classified as negatively thermo-sensitive. Based on the sign in front of the regression coefficients in the equation obtained by the three-level factorial design, it can be concluded that increasing the pH value causes an increase in the equilibrium swelling degree of the p(NIPAM-co-AA) hydrogel, while temperature has the opposite effect. Since the total content of residual reactants is less than 1%, the synthesized copolymer p(NIPAM-co-AA) can be considered safe for use as a carrier of active substances. The FTIR analysis of the hydrogel with incorporated biochanin A indicated that hydrogen bonds between the polymeric chains and molecules of biochanin A are dominant. The pore sizes of the synthesized p(NIPAM-co-AA) copolymer in the swollen state were determined by SEM analysis of the lyophilized sample and went up to 100 µm, suggesting that the synthesized hydrogel can be classified as macroporous. The loading of biochanin A into the synthesized hydrogel approached the available 60 mg/g xerogel, and the release of biochanin A was faster at pH 7.9 than at pH 4.5. About 50% of the incorporated biochanin A was released from the lyophilized hydrogel at pH 7.9 and a temperature of 37 °C in the initial 6 h. The obtained results indicate the possibility of using the pH- and thermo-sensitive p(NIPAM-co-AA) copolymer as a carrier for modified release of biochanin A in the acidic environment of the vaginal cavity and the weakly alkaline environment of the rectum.
2021-02-08T05:53:05.509Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "4e5cfaf0dc11b85e6e68f2fd002834fd5fb5ed49", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc7865815?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4e5cfaf0dc11b85e6e68f2fd002834fd5fb5ed49", "s2fieldsofstudy": [ "Materials Science", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255464227
pes2o/s2orc
v3-fos-license
Occupational health coaching for job stress management among technical college teachers: Implications for educational administrators Background: The need for stress management strategies has been empirically investigated and supported considering the demands of workplaces. However, some people in public offices do not seem to have been exposed to occupational health strategies that could reduce the adverse impacts of stress on job productivity and quality of life. Consequently, they become susceptible to mental health disturbances requiring the attention of occupational therapists. Given this, we studied the impact of occupational health coaching for job stress management among technical college teachers. Methods: Using a randomized control design, 90 technical college teachers were screened and were ready to participate. The eligible teachers were included and assigned to intervention and control groups. An occupational stress index was given to the participants before, immediately after, and 2 months after the delivery of the occupational coaching program by career counselors, while the comparison group received no intervention. Data collected were analyzed using multivariate analysis of variance. Results: The results showed a significant improvement in the management of job stress after receiving rational emotive occupational health coaching. According to the multivariate analysis of variance, there were between-group differences immediately after the intervention and 2 months later. As a result, the study suggested that career counselors and school management systems should incorporate rational-emotive behavioral therapy into workforce and workplace programs. Introduction In developing nations like Nigeria, the state of the work environment and working conditions has given serious concern to researchers. [1] The concern is especially serious among academic researchers, who have demonstrated unfair conditions in Nigerian organizations. [2] In most organizations in Nigeria today, workers experience serious hardship every day while on the job looking for conveniences, which causes work abandonment and unneeded stress. [3] This has reduced the level of work engagement, as it works against their predicted job outputs. [4] When workloads become uncontrollable and demands are excessive, teachers feel exhausted and frustrated, which could lead to work-related stress. Without a doubt, an employee's working environment affects both their general wellbeing and how successfully they perform their job obligations. An analysis of earlier studies reveals that better working circumstances have a beneficial effect on workers' performance. [3] Stress is a public health problem that constantly affects how people feel at work. According to reports, teachers in Nigeria experience a lot of stress at work. [5] It has been reported that teachers with work role ambiguity are vulnerable to job-related risks. Some of them are posed with dual roles. For example, some teachers may function as school psychologists, social workers, nurses, school counselors, or a combination of some of these, in an academic setting.
[6] The competing responsibilities and diverse roles that teachers must perform cause job stress. [7] In response to such demands, they engage in psychological combat to meet the academic requirements, social expectations, and emotional needs of students and members of their immediate families. [8] As a result, rising levels of workplace stress have been increasingly reported among teachers. [9] According to reports from earlier research, 72% of school employees in Nigeria report having a bad quality of work-life due to work-related stress. [10] It has been reported that civil servants in Nigeria are experiencing psychological and health problems attributed to job stress. [11,12] Further evidence reported that Nigeria has the worst working conditions among the developing countries of the world. [3] This situation is not encouraging and is unhealthy for workers in Nigeria, especially teachers, who receive stipends. It is a stipend because they collect salaries of less than 200 US dollars. The current state of working conditions is intolerable, [12] making work-life dangerous to the extent that many teachers are frustrated. [13,14] Beyond Nigeria's statistics, reports showed that countries like the US, Sweden, [15] Japan, [16] and Togo are affected. [17] Despite the negative impacts of stress on productivity and commitment, coaching strategies for cushioning the adverse effects are not enough. [18] Given the health and psychological states of the teachers, career-based professionals have suggested occupational health coaching for improving the quality of life of teachers. [19,20] One of the occupational interventions is rational emotive occupational health intervention. Rational emotive occupational health intervention [21] is crafted from Ellis's philosophies of rational and irrational beliefs. The philosophies are centered around demandingness, awfulizing, frustration intolerance, and depreciation beliefs. [22] Rational beliefs are characterized by being flexible, reasonable, non-extreme, objective, constructive in conclusion, and consistent with reality, while irrational beliefs are defined as being rigid, illogical, and drawing poor conclusions. Based on these philosophies, rational emotive occupational health intervention was developed to help workers deal with negative feelings and dysfunctional beliefs. It is a coaching approach that aims to modify negative beliefs, improve rational ones, and support adaptive coping in the face of a stressful working environment. Using rational emotive occupational intervention helps teachers understand that harsh working conditions (A = activating event) do not account for stress, but rather unrealistic and illogical perceptions (B = beliefs) do. If employees rely on exaggerating the truth or misrepresenting the nature of their jobs, this can result in psychosocial problems, work deviant behavior, and maladjustment. The outcome of the dysfunctional belief system may be poor adaptation and behavior inconsistent with workplace rules and regulations (NC = negative consequences). As the teacher's behavior becomes inconsistent, it can lead to withdrawal, sanctions, and termination of appointment; the teacher then requires the attention of career counselors for treatment (D = disputation). There is also evidential support showing that rational-emotive techniques are curative and preventive.
[23] Due to the negative impacts of irrational thinking on workers' behaviors, we contend that teachers experiencing significant levels of unhappiness, fatigue, and physical and mental tiredness could benefit from rational-emotive techniques. Based on this, we evaluated the effectiveness of a rational-emotive occupational health intervention in enhancing technical college teachers' stress management. In light of this, we hypothesized that, at time 2 and time 3, technical college teachers exposed to the occupational health intervention would manage their job stress significantly better than technical college teachers in the control group, and that the technical college teachers' scores on the job stress scale would be significantly influenced by the group and gender interaction effect. Research design A group-randomized trial design was adopted to assign subjects to respective groups. In group-randomized trials, complete groups are randomly assigned to treatment conditions, and everyone in the same group receives the same care. [24] Participants The study participants were registered technical college teachers with moderate and severe stress levels employed to teach students. In total, we selected 95 participants using the convenience sampling technique. Participants were randomly allocated to intervention (n = 47) and waitlisted control (n = 48) groups using random allocation software. The sample size was confirmed to be adequate using G*Power. [25] The inclusion criteria for each teacher were willingness to participate, being in service for at least 1 year, a history of stress (confirmed by the occupational stress index [OSI]), and absence of psychosis and schizophrenia. Unwillingness to continue the study, unanticipated events, and changes in the treatment plan for whatever reason were among the exclusion criteria. Dependent measures Participants' demographic information: Prior to the intervention, the participants' information was collected, including age, gender, education, occupation, and marital status. Compliance with ethical standards The principles of the Helsinki Declaration were adhered to in this investigation. Ethical approval was obtained from the Department of Educational Foundations Ethics Committee of the University of Nigeria and from the principals of the technical colleges. Consent to participate was given by the recruited teachers. It was also made known that anyone who wanted to leave the treatment sessions could do so without penalty. The participants received assurances from the researchers regarding their rights and privacy protection. Therapist and integrity checks Two therapists with a fundamental orientation in career counseling carried out the intervention. They have practiced rational-emotive therapy for more than a decade as university lecturers. To guarantee that coaching integrity was maintained, the researchers gave 2 members of the research team extra responsibilities. They were viewed as independent assessors who kept an eye on the proper application of the intervention manual. This was done because several crucial elements of the manual could be disregarded by coaches. The team was tasked with monitoring how the treatment was being administered, as well as how the participants responded during sessions, carried out the prescribed home exercises, and asked questions. Procedure The research team visited the 2 technical colleges, where flyers were distributed to teachers in their respective offices.
Prior to that, oral permission to engage the teachers was given by the school head teachers. The flyers contained the phone numbers of the research team. A few days later, 57 teachers contacted us via phone calls, expressing interest in participating. Those who indicated interest were screened against the dependent measure and the inclusion and exclusion criteria. The 50 qualified teachers were recruited for the study. A simple random sampling technique without replacement was adopted to assign the recruited participants to groups, that is, intervention (n = 25) and waitlisted (n = 25). See Figure 1 for participant allocation. The participants in the intervention group were coached using the rational emotive occupational health coaching program, and those in the comparison group were waitlisted, in that they only participated in assessment 1, assessment 2, and assessment 3. The coaching program was designed to last for 12 sessions, 90 minutes per session. These were brief coaching sessions for teachers in technical colleges.
Session 2 = addressing the aim and objectives of the sessions. The study's goals and currently available treatments were explained to the participants, highlighting the need to participate from the onset of the program to the end.
Session 3 = conceptualization of the basic terms, such as quality of work-life, work stress, and rational emotive behavior therapy.
Session 4 = assignment and statement of actions by the group members; their expectations, roles, and obligations were highlighted in line with rational emotive principles.
Session 5 = explaining irrational and rational beliefs, with examples and applications.
Session 6 = how irrational beliefs detrimentally affect work behaviors.
Session 7 = relating irrational beliefs to unhealthy reactions and feelings; stress in the workplace and its relationship with negative perceptions.
Session 8 = how negative perception and behavior lead to poor quality of work-life and poor stress management practice, as well as practice exercises.
Session 9 = identification of irrational beliefs and redefining participants' perceptions of poor quality of work-life and poor stress management practices.
Session 10 = how to apply rational-behavior techniques in changing and altering irrational beliefs and behavior related to poor quality of work-life and poor stress management practices.
Session 11 = how to apply rational beliefs in work settings and the importance of integrating quality of work-life and stress management to overcome work stress.
Session 12 = revision and termination of the sessions.
This summary was adapted from past studies that have utilized rational-emotive occupational health coaching. [28,29] The individuals in both groups were reevaluated (assessment 2) immediately following the intervention to determine whether there had been a beneficial change due to the treatment. A third evaluation was given to the participants 2 months later (assessment 3). Data analysis A statistician who was unaware of the participants' distribution used SPSS version 28 (IBM Corp., Armonk, NY) to evaluate the data. Statistical analysis was done on the data gathered prior to the intervention, following the intervention, and during the follow-up phase. For data analysis, a multivariate analysis of variance with a 0.05 level of significance was applied. The effect size of the intervention on the dependent measure was reported using ηp². Post hoc analysis was also carried out using the Sidak correction.
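For readers who wish to reproduce this analysis pipeline outside SPSS, the sketch below shows an equivalent mixed-design (group × time) analysis using the open-source pingouin package; the file name and column names are assumptions for illustration, and the function names reflect recent pingouin releases.

import pandas as pd
import pingouin as pg

# Sketch: mixed-design (between = group, within = time) analysis of OSI scores.
# Assumes a long-format table with one row per participant per assessment;
# the file and column names below are illustrative assumptions.

df = pd.read_csv("osi_long.csv")  # columns: subject, group, gender, time, osi

# Mauchly's test of sphericity on the within-subject factor.
print(pg.sphericity(df, dv="osi", within="time", subject="subject"))

# Group x time mixed ANOVA (reports partial eta-squared, np2).
aov = pg.mixed_anova(data=df, dv="osi", within="time",
                     between="group", subject="subject")
print(aov)

# Sidak-adjusted pairwise comparisons for the group x time interaction.
post = pg.pairwise_tests(data=df, dv="osi", within="time",
                         between="group", subject="subject", padjust="sidak")
print(post)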
After screening the data, an assumption violation test was run on the data using Mauchly's test. Field [30] states that when the sphericity assumption fails, the data should be interpreted using either the Greenhouse-Geisser correction or, if the ε value is ≥0.75, the Huynh-Feldt correction. However, the sphericity assumption did not fail. The effectiveness of the intervention on the outcome measures at follow-up was further determined using the univariate test. Results Of the teachers in the intervention group, 10 (45.5%) had <10 years of experience and 15 (53.6%) had ≥11 years of experience; 9 (42.9%) were males and 16 (55.2%) were females. Of the teachers in the waitlisted group, 12 (54.5%) had <10 years of experience and 13 (46.4%) had ≥11 years of experience; 12 (57.1%) were males and 13 (44.8%) were females. The sociodemographic information of the participants shows no significant difference between the groups in terms of gender (χ² = 0.739, P = .39) or years of experience (χ² = 0.325, P = .569). Table 2 shows that there was a significant effect of group on job stress scores, F(1, 46) = 227.218, P < .01, ηp² = .83. The results also indicate that technical college teachers' scores on the job stress scale were influenced by the group and gender interaction effect, F(1, 46) = 0.405, P = .52, ηp² = .01. The results also show a statistically significant effect of time on the job stress of technical college teachers, F(1, 92) = 438.430, P < .01, ηp² = .91. The results also indicate that the technical college teachers' job stress scores were significantly influenced by the group and time interaction effect, F(1, 92) = 425.882, P < .01, ηp² = .90. The follow-up result revealed that the significant effect of the intervention on job stress among technical college teachers was sustained over time, F(1, 49) = 417.276, P < .01, ηp² = .90. Table 1. Descriptive statistics for participants as measured by the OSI. The Sidak post hoc analysis in Table 3 for the group × time interaction effects shows that, at pretest, there was no significant difference between technical college teachers' OSI scores in the intervention group and those in the comparison group (mean difference = 5.017, standard error = 1.664, P = .004, 95% confidence interval [CI]: −1.668, 8.366). On the contrary, technical college teachers in the intervention group significantly reduced their OSI scores at posttest when compared to the control group (mean difference = −36.994, standard error = 1.935, P < .01, 95% CI: −40.889, −33.098). Additionally, at the follow-up test, the technical college teachers in the intervention group still showed significantly lower OSI scores than those in the control group (mean difference = −40.926, standard error = 2.081, P < .01, 95% CI: −45.114, −36.737). Figure 2 also demonstrates the interaction effect of group and time as measured by the OSI. The Sidak post hoc analysis in Table 4 for the group × gender × time interaction effects shows that, at pretest, male and female technical college teachers in the intervention group had OSI scores similar to those of the control group. Discussion This study investigated the impact of occupational health coaching on job stress among teachers in technical colleges in Enugu State, Nigeria. After delivering the coaching program and assessments, the results showed that the job stress coping strategies of the teachers were improved due to the rational emotive occupational health coaching.
The findings also showed that the group and gender interaction effect had a substantial impact on the occupational stress levels of technical college teachers. The findings also show that the group and time interaction effect had a substantial impact on the job stress levels of technical college teachers. The follow-up findings showed that the intervention's strong impact on technical college teachers' levels of job stress persisted over time. These results are consistent with earlier research that showed how group rational-emotive behavioral therapy (REBT) training effectively reduced stress and changed irrational beliefs in a range of participant types. [21,31,32] This is the major goal of REBT approaches, that is, to change erroneous beliefs responsible for severe stress. Previous research by REBT experts noted that it is very important to assess both participants' disrupted emotions and their irrational beliefs in a REBT intervention. [33] This is so because therapists can better understand how the REBT treatment alters irrational beliefs, which are thought to be the primary cause of the emotional disturbance from which stress accrues. It can also show whether the treatment had the desired effect on these irrational beliefs and on the emotional disturbance it was intended to treat. [11] The REB coaching program offers organizational personnel beneficial strategies to reduce occupational stress, as previously studied and proven. [34,35] It also supports successfully implemented REBT studies [36] that demonstrated how stress in the workplace may be controlled utilizing REBT's fundamental concepts. The earlier empirical evidence that occupation-focused rational emotive training is beneficial and therapeutic has been further supported by this study. [37] Another study that used the identical rational-emotive intervention demonstrated success in modifying unfavorable opinions of public servants. Agu et al [28] took the same stance when they advised using a rational emotive occupational health intervention (REOHI) in a workplace setting where employees spend a few weeks away from their families at home. The intervention, according to the authors, may be able to assist Nigerian workers in overcoming the negative impacts that stress has on their professional outcomes. This implies that perceptions of stress management strategies and work delivery methods may change if REOHI is implemented in technical colleges in Nigeria. The present study's finding that the group and gender interaction effect influenced the occupational stress levels of technical college teachers is not in line with a past study that found no interaction effect of rational emotive occupational health coaching and gender. [29] The variation could be due to differences in the teachers' work engagement. Unlike the present study, the reviewed study [29] sampled primary school teachers. Possibly, the exposure and risks involved in the 2 work environments are not the same, in that the role expectations of male and female teachers at the 2 levels of education (primary school and technical college) differ. Practice implications As this study has reported positive impacts of rational emotive occupational health coaching, it becomes important to consider how the finding could be replicated during practice. It should be used by professionals in the field of organizational psychology to assist those just starting their careers.
Table 3. Post hoc analyses for the OSI scores based on group × time interaction effects. For example, newly employed teachers should receive rational emotive occupational coaching as part of their professional orientation. The coaching will help them to cope easily with organizational hazards. Possibly, the managerial team could also benefit from the rational coping techniques acquired from the coaching. In the course of practice in organizational or educational settings, school counselors can employ REOHI to change teachers' unfavorable beliefs about how work environments and host communities interact. Conclusion Investigating the perceptions of workers about the work environment and conditions is important, as it reveals possible negative impacts and possible interventions. This formed the crux of the present study, which tested the impact of occupational health coaching on job stress among technical college teachers. It was imperative because the health and psychosocial well-being of Nigerian workers continue to deteriorate. From the outcome of this study, it was found that rational emotive occupational health coaching is a significant strategy for reducing job stress among technical college teachers. The results also indicate that technical college teachers' scores on the job stress scale were influenced by the group and gender interaction effect. Additionally, the findings suggest that the group and time interaction effect had a substantial impact on the job stress scores of technical college teachers. The results of the follow-up study showed that the intervention's lasting impact on technical college teachers' levels of job stress was significant. The positive outcome suggests the need for further examination of mediators and moderators of the condition and intervention. Limitations Despite the positive impacts of the intervention, we appeal to readers to interpret the results of this study with caution, as there are noted flaws that affect the external validity and generalizability of the findings to the general audience. One of the major flaws is the small sample size: the 50 participants are a small sample compared to the number of teachers in Nigeria. Secondly, a significant flaw in this study is the absence of a measurement of participants' irrational beliefs. The stress measure was also not analyzed based on its dimensions. Additionally, there was no way to gauge participant satisfaction with the treatment. We recommend that future studies fill in these gaps in light of these flaws.
2023-01-06T22:12:28.778Z
2023-01-06T00:00:00.000
{ "year": 2023, "sha1": "c700acd3ca1d0d4dd1ca96e5433383cd80e6f89b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "e486985616a506e1958ce5d8ed9dd75f094204d3", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
219054381
pes2o/s2orc
v3-fos-license
Challenges in the Analysis of Historic Concrete: Understanding the Limitations of Techniques, the Variability of the Material and the Importance of Representative Samples ABSTRACT The number of historically-significant concrete structures which require conservation and repair is ever-increasing. The use of unsuitable proprietary materials has led to poor quality repairs of historically-significant structures in the United Kingdom, some of which have resulted in damage to the historic character of the structure and accelerated deterioration of the substrate. As a result, the approach to the repair of historic concrete structures has shifted from the use of mass-produced proprietary repair materials to purpose-made 'like-for-like' replacements which, theoretically, have similar mechanical and aesthetic properties. In order to create like-for-like repair materials, the original mix proportions and water/cement (w/c) ratio of the substrate have to be established. However, there are concerns regarding the accuracy of existing techniques and standards used for the analyses of hardened concrete. Furthermore, due to a lack of available material, analyses are often carried out on samples that are much smaller than the minimum requirement for a representative sample, or from areas which are not representative. This paper discusses these issues and hopes to provide information to conservators and analysts on the limitations of techniques, the variability of the material and the importance of representative samples. Introduction When selecting a repair material for concrete structures it is critical to match the characteristics of the original material as closely as possible. Failure to match the mechanical and chemical properties can not only lead to an unsuccessful repair but can also cause significant damage and accelerated deterioration to the original material. It is usually also important to match the aesthetic characteristics, as this will allow the two materials to blend well visually, retaining the historic character of the structure. The use of unsuitable proprietary materials which do not meet these criteria has led to poor quality repairs of historically-significant structures in the United Kingdom (English Heritage 2012) and, as a result, the approach to the repair of historic concrete structures has shifted from the use of mass-produced proprietary repair materials to purpose-made 'like-for-like' replacements which, theoretically, have similar mechanical and aesthetic properties. Given the significant role the mix proportions and water/cement ratio (w/c) play in the properties of concrete, it is, understandably, desirable to replicate these in a repair material. Unfortunately, current standards for determining mix proportions and w/c ratio, such as BS 1881-124 (BSI 2015), BS 1881-211 (BSI 2016) and NT Build 361 (Nordtest Method 1999), are not suitable for use with historic concrete. However, despite this, they are applied in the assessment of historic structures as there are simply no better alternatives. This presents a problem, as the potential inaccuracy of the standard test methods is not included in test reports, and this may have a significant impact on the repair strategy applied to historic concrete structures. Scope of the problem BS 1881-124 determines w/c ratio indirectly, through separate determinations of cement content and water content.
However, following a series of round-robin laboratory tests on contemporary concretes, the Concrete Society (2014) determined that, in favourable circumstances (undamaged, uncarbonated concrete with cement content 200-500 kg/m³ and w/c 0.4-0.8, which contains aggregates that permit reliable estimates of the cement content) and with reliable analysts, the w/c ratio could only be calculated to within ±0.1, and the reproducibility error was around ±0.28 for a typical design range of 0.3-0.7. For determining cement content, the reproducibility of the BS 1881-124 method was found to be 55-85 kg/m³ for concretes with cement contents ranging from 240-425 kg/m³. With regards to BS 1881-124 more generally, it was concluded that "there is significant doubt regarding the accuracy of BS 1881-124." In order to even achieve this low level of accuracy, a petrographical examination of the concrete is first required to determine whether acid-soluble aggregate is present, as the standard utilises acid digestion of the cement matrix to determine the aggregate content. The standard itself claims that 'acceptable' results are only possible when the concrete is less than five years old and without physical or chemical damage, as these result in changes to the microstructure and porosity values which are used for calculating w/c. Furthermore, the calculations used in these standards require certain assumptions to be made about the initial chemical composition of the cement, and these are based on the soluble silica and calcium oxide contents of current Portland cements, not of the actual Portland cement being analysed. This is significant since the calcium content of the material increased considerably over the first half of the 20th century (Halstead 1961). The Nordic standard for conformity assessment, NT Build 361, describes a method of estimating the w/c ratio in hardened concrete using microscopic investigation of thin sections impregnated with a fluorescent agent. These thin sections are then compared to a series of laboratory-prepared reference samples and the w/c ratio determined by comparing the fluorescent intensity of the samples. However, the accuracy of this method has also been called into question by some authors (Neville 2003; St John 1994), who claim a realistic accuracy of ±0.1 for w/c ratio within the range of 0.4 to 0.6. Moreover, the necessity for comparable reference samples and the reduction in pore volume due to carbonation make this method also unsuitable for historic concrete. Additional methods of estimating the w/c ratio of hardened concrete are detailed in BS 1881-211 (BSI 2016) and the Applied Petrography Group code of practice for the petrographic examination of concrete, APG SR2 (Eden 2010). Both of these documents state that the criteria for the assessment of w/c should include the amount, size and distribution of calcium hydroxide (CH) in the cement paste, as concretes with a low w/c tend to develop only limited proportions of coarsely crystalline calcium hydroxide. However, the ratio of alite (C₃S) and belite (C₂S) in Portland cements has varied significantly since the 19th century, when Portland cement was first manufactured (Corish and Jackson 1982), and this affects the quantities of CH produced during hydration.
For example, if the assumption is made that the final product of hydration is C₃S₂H₃, then the approximate hydration reactions of alite and belite, and the corresponding masses involved, can be written as (Neville 2011):

2C₃S + 6H → C₃S₂H₃ + 3CH (by mass: 100 + 24 → 75 + 49)

2C₂S + 4H → C₃S₂H₃ + CH (by mass: 100 + 21 → 99 + 22)

Therefore, for C₃S and C₂S of the same mass, although a similar mass of water is required for their hydration, C₃S produces more than double the amount of CH produced by the hydration of C₂S, and so any assumptions made about the w/c from the CH content are likely to be incorrect. Changes in mean C₂S and C₃S contents over time (Corish and Jackson 1982) are shown in Figure 1. Estimations of CH content for hydrated cement pastes by production year based on mean C₂S and C₃S levels, taking into account the mass of water incorporated in cement paste for full hydration of the calcium silicate and calcium aluminate phases, are shown in Figure 2. Work has been undertaken in developing new methods of determining w/c ratio, such as that by Wong and Buenfeld (2009), which utilises scanning electron microscopy and image analysis to estimate the initial cement content, water content and free w/c ratio of hardened cement-based materials. However, there is, at present, no adequate or standardised method for accurately determining the w/c ratio of historic concrete. While microscopical methods for estimating the composition of hardened concrete have been proposed by such authors as Polivka, Kelly, and Best (1956) and Axon (1962), these can only provide volumetric proportions. While they can be used to assess conformance of a hardened concrete of known mix proportions, they cannot be independently applied to determine an unknown original mix design. In order to establish the original mix design, the specific gravities of the components would either need to be estimated or determined through physical testing undertaken in parallel, both of which present complications, as will be discussed. There are also two additional and significant challenges facing those tasked with performing analysis on historic concrete structures. Firstly, when dealing with historic structures, it is often difficult to obtain the volume of sample required to carry out analysis, and, secondly, the samples that can be obtained may not necessarily be representative of the area requiring repair, or even of the concrete in general. This is particularly problematic when dealing with historic structures, as owners are, understandably, reluctant to allow further damage to occur to a structure in order for samples to be taken, and wish to retain as much of the original fabric as possible. In addition, in the United Kingdom, it is a criminal offence to remove material from a listed structure or scheduled monument without written consent from the Secretary of State (Department for Culture, Media and Sport 1979). To put this issue in perspective, BS 1881-124 (BSI 2015) requires a minimum of two representative samples to be taken for analysis of hardened concrete from a source of less than 6 m³, and a minimum of ten independent samples from larger volumes of concrete. Furthermore, the mass should not be less than 1 kg in any case, not less than 2 kg to determine original water content, and not less than 4 kg if aggregate grading is to be determined.
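These sampling rules are simple enough to encode; the sketch below turns the BS 1881-124 minima just quoted into a small helper, so the numeric thresholds come from the text while the function itself is merely an illustrative convenience.

# Sketch: minimum sampling requirements of BS 1881-124 as quoted above.
# The thresholds come from the text; the helper itself is illustrative.

def bs1881_124_minimums(source_volume_m3, water_content=False, grading=False):
    min_samples = 2 if source_volume_m3 < 6.0 else 10
    min_mass_kg = 1.0
    if water_content:
        min_mass_kg = max(min_mass_kg, 2.0)
    if grading:
        min_mass_kg = max(min_mass_kg, 4.0)
    return min_samples, min_mass_kg

# Example: a small wall (4 m^3) where grading is required.
print(bs1881_124_minimums(4.0, water_content=True, grading=True))  # (2, 4.0)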
To carry out petrographic analysis of hardened concrete, BS 1881-211 (BSI 2016) requires a minimum area of 100×100 mm to determine air void content and for volumetric estimation of mix proportions of concrete containing coarse aggregates, and a sample size of 70×50 mm for concrete and mortar containing fine aggregate and cement paste only. However, if this is to be removed by coring, it represents a significantly larger volume. In their code of practice for the petrographic examination of concrete, the Applied Petrography Group (Eden 2010) states that: Core samples need to represent not only the surface concrete but also the concrete at depth and should be ideally no less than 70 mm in diameter and 200 mm long. Where smaller diameters are unavoidable two or more cores may be needed to represent each sampling location. ASTM C856-18a requires a minimum sample size of at least one core, preferably 6 in. (152 mm) in diameter and 1 ft. (305 mm) long, to conduct a petrographic analysis, though smaller diameter cores can be used if the aggregate is small enough, with cores three times the maximum aggregate size desirable (ASTM International 2018). Figure 2. Estimation of CH content for hydrated cement pastes by production year based on mean C₂S and C₃S levels. To conduct a microscopical determination of air-void content, ASTM C457-98 requires a minimum surface area which is dependent on the maximum aggregate size, as shown in Figure 3 (ASTM International 1998). It should be noted that, historically, aggregates were usually significantly larger than those used in modern construction, with some engineers specifying aggregate up to the size of 'an egg' (Pasley 1826 [1862]), and aggregates of 40-80 mm were not uncommon. Given the associated required sample sizes, it is understandably difficult to obtain permission to remove the minimum mass of material that would be required for a thorough analysis of a historic concrete structure. This lack of available material can often result in analysts being asked to carry out investigations on samples which are much smaller than a standard's minimum requirement for a representative sample, whether that is a mass sample for physical/chemical analysis or a thin section for microscopical analysis. If these samples are also supplied with little information as to where exactly on the structure they were taken from, it prevents the analyst from being able to provide a context for their results, a necessity when dealing with material as heterogeneous as concrete. Methodology In order to assess the limitations of the current standards when used in the analysis of historic concrete samples, nine concrete mixes were produced using Portland cement (CEM I 42.5 N) as the sole cement constituent, and with mix proportions (Table 1) based on typical mix designs from the early 20th century (Abrams 1922; Bussell 2001; Concrete Society 2009; Somerville 2001; Yeomans 1997). While concrete mixes in the early 20th century were proportioned by volume, the ones used in this study were proportioned by mass in order to ensure accuracy and eliminate errors which may arise due to the challenge of maintaining a consistent bulk density of the materials. The proportions were approximately 1:1:2, 1:2:4 and 1:1.5:3 by mass of cement, fine and coarse aggregate respectively, but with the fine aggregate content slightly adjusted for each mix in order to maintain a constant cement and coarse aggregate content per 1 m³ while varying the w/c ratio.
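To illustrate that last adjustment, the sketch below uses the absolute-volume method to find the fine-aggregate mass that fills 1 m³ once the cement, coarse aggregate and water are fixed; the densities and batch quantities are assumed typical values for illustration, not figures from Table 1 of this study.

# Sketch: absolute-volume proportioning with constant cement and coarse
# aggregate per m^3 and fine aggregate adjusted as w/c varies.
# Densities and the batch quantities are assumed typical values,
# not figures taken from Table 1 of this study.

RHO_CEMENT = 3150.0  # kg/m^3, typical particle density of Portland cement
RHO_AGG = 2650.0     # kg/m^3, typical particle density of quartz aggregate
RHO_WATER = 1000.0   # kg/m^3

def fine_aggregate_mass(cement_kg, coarse_kg, w_c):
    """Fine-aggregate mass (kg) filling the residual absolute volume of 1 m^3."""
    water_kg = w_c * cement_kg
    v_used = cement_kg / RHO_CEMENT + coarse_kg / RHO_AGG + water_kg / RHO_WATER
    return (1.0 - v_used) * RHO_AGG   # entrapped air neglected

cement, coarse = 320.0, 1280.0   # kg per m^3, nominally 1:?:4 by mass (assumed)
for w_c in (0.4, 0.5, 0.6):
    print(f"w/c = {w_c}: fine aggregate = {fine_aggregate_mass(cement, coarse, w_c):.0f} kg/m^3")

As expected, the required fine-aggregate mass falls as the w/c ratio (and hence the water volume) rises, which is the adjustment described above.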
The concrete was mixed in accordance with BS 1881-125 (BSI 2013a) and cast in 100×100×500 mm moulds. However, due to the water demand of the 1:2:4 mix combined with the low w/c of 0.4, the workability of the T1 mix was so low that it was not possible to achieve adequate compaction, and therefore the T1 mix was not included for testing. After demoulding, the concrete samples were cured in potable water for 28 days. A slice of approximately 100×100×15 mm was then sawn from the centre of each concrete sample. This size was selected to replicate a similar mass of sample (300-400 g) to those that have previously been sent to the author from historic structures for analyses. After sawing, the samples were allowed to air-dry for six months and were then placed in a carbonation tank at 4% CO₂ for fourteen weeks in order to simulate the carbonation that would have occurred naturally in historic concrete. These slices were then split in half through the vertical plane and one half used for aggregate grading and density tests, while the other half was used for all chemical testing. The analyses were carried out following BS 1881-124 (BSI 2015), with the exception of density tests, which were carried out in accordance with BS EN 12390-7 (BSI 2009a); aggregate water absorption tests, which were carried out in accordance with BS EN 1097-6 (BSI 2013b); and chemically-bound water prior to carbonation, which was estimated using X-ray fluorescence (XRF) combined with an optimisation process which determined the percentage of chemically bound water by mass of anhydrous cement required to achieve full hydration. Additional porosity measurements were carried out using mercury intrusion porosimetry (MIP) on 8 mm diameter microcores that had been vacuum dried at 40°C for 24 hours. Mix proportion calculations 2.1.1. Aggregate/matrix content As the control samples were known to contain no acid-digestible aggregate, aggregate content by mass was assumed to be the insoluble residue content determined in accordance with Clause 7 of BS 1881-124 (BSI 2015). The cement matrix content as a % of the mass of total concrete was then calculated to the nearest 0.1% as follows:

matrix content (%) = 100 − insoluble residue content (%)

The aggregate and matrix contents as masses in kg per m³ of concrete mix could then be determined from the previously calculated oven-dry (OD) density, ρ c.OD:

M A.OD = (aggregate content (%) / 100) × ρ c.OD

M M.OD = (matrix content (%) / 100) × ρ c.OD

where: ρ c.OD is the density of the OD concrete in kg/m³; M A.OD is the mass of OD aggregate (total) per m³ mix in kg; M M.OD is the mass of OD matrix per m³ mix in kg. It should be noted that the matrix content is different from the anhydrous cement content, as the matrix content includes chemically bound water and CO₂, the cement having hydrated and then carbonated. 2.1.2. Anhydrous cement content LOI was carried out on powdered sub-samples of each specimen. During the LOI test, all chemically bound water and carbon dioxide that are part of the cement matrix are driven off, and so the remaining mass is attributed to the anhydrous cement and aggregate. As the overall matrix content had been previously calculated, it was then possible to calculate the anhydrous cement content of the concrete:

anhydrous cement content (%) = 100 − LOI (%) − insoluble residue content (%)

The anhydrous cement content as a mass in kg per 1 m³ of concrete mix could then be determined from the previously calculated OD density, ρ c.OD:

M cem = (anhydrous cement content (%) / 100) × ρ c.OD

where: M cem is the mass of anhydrous cement per m³ mix in kg. Combined water content The amount of chemically bound water in the cement matrix, also known as the 'combined water', is typically calculated using the procedure detailed in BS 1881-124.
2.1.3. Combined water content

The amount of chemically bound water in the cement matrix, also known as the 'combined water', is typically calculated using the procedure detailed in BS 1881-124. However, this test is particularly unsuitable for use with historic concrete, as it calculates bound water content from the mass of gas that is driven off at 1000°C and subsequently recaptured in an absorption tube; in the case of carbonated concrete, the conversion of calcium hydroxide to calcium carbonate results in the loss of measurable combined water. As such, the combined water of hydration was, as specified in BS 1881-124, assumed to be:

M_cw = 0.23 × M_cem

where: M_cw is the mass of combined water per m³ mix in kg. It should be noted that the value of 0.23 is only an estimation; BS 1881-124 states that the combined water of BS EN 197-1 (BSI 2011) CEM I and CEM III cements is between 0.20 and 0.25 times the anhydrous cement mass, for full hydration.

2.1.4. Aggregate voids ratio

The aggregate voids ratio was calculated from the results obtained from the aggregate water absorption and particle density tests, carried out in accordance with BS EN 1097-6 (BSI 2013b), using the following expression:

e_a = V_a.w / V_a.s = (M_a.SSD − M_a.OD) / (M_a.OD − M_a.IM)

where: e_a is the voids ratio of the aggregate; V_a.w is the volume of aggregate voids filled by water in m³; V_a.s is the volume of aggregate solids in m³; M_a.SSD is the mass of the saturated-surface-dried (SSD) aggregate in kg; M_a.OD is the mass of the OD aggregate in kg; M_a.IM is the mass of the saturated sample immersed in water in kg; ρ_w is the density of water in kg/m³. Voids ratios were calculated for the fine and coarse aggregate separately. Obtaining reliable results was challenging as the sample size obtained from the hardened concrete was small. In the case of the coarse aggregate, an average value was determined from all the samples tested and the same value used for all mix design calculations. In the case of the fine aggregate, the quantity of aggregate obtained was too small to perform the water absorption and particle density tests, so the test was carried out on a reference sample of fine aggregate and the same values used for the calculations of every mix design.

2.1.5. Aggregate and matrix volume

The SSD aggregate mass in kg per m³ of concrete mix could then also be determined using the previously calculated dry aggregate mass per m³ of concrete mix and the voids ratio of the aggregate. This, as well as the saturated-surface-dry aggregate density, previously determined from the procedure in BS EN 1097-6, was then used to calculate the volume of saturated-surface-dry aggregate per m³ of concrete mix:

V_a.SSD = M_A.SSD / ρ_a.SSD

where: V_a.SSD is the volume of SSD aggregate per m³ mix in m³; M_A.SSD is the mass of SSD aggregate per m³ mix in kg; ρ_a.SSD is the density of SSD aggregate in kg/m³. Assuming that the remainder of the volume is attributed to the saturated-surface-dry matrix, the volume of saturated-surface-dry matrix per m³ of concrete mix was then calculated from the expression:

V_m.SSD = 1 − V_a.SSD

where: V_m.SSD is the volume of SSD matrix per m³ mix in m³.
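Read literally, the expressions in sections 2.1.3 and 2.1.4 reduce to a few lines of arithmetic. The sketch below is one such reading, assuming the Archimedes relations implied by the immersed-mass measurement; all masses in the example are hypothetical.

```python
RHO_W = 1000.0  # density of water, kg/m3

def combined_water(m_cem, k=0.23):
    """Combined water of hydration per m3 mix; k is the BS 1881-124
    full-hydration estimate (0.20-0.25 for CEM I / CEM III)."""
    return k * m_cem

def aggregate_voids_ratio(m_ssd, m_od, m_im):
    """Voids ratio e_a from SSD, oven-dry and immersed masses (kg)."""
    v_w = (m_ssd - m_od) / RHO_W   # water-filled aggregate voids, m3
    v_s = (m_od - m_im) / RHO_W    # aggregate solids (Archimedes), m3
    return v_w / v_s

print(combined_water(382.5))                       # -> ~88.0 kg/m3
print(aggregate_voids_ratio(2.04, 2.00, 1.26))     # -> ~0.054
```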
2.1.6. Fine and coarse aggregate content

It was possible to determine the fine and coarse aggregate content by measuring the grading of the aggregate following the dry sieving procedure described in BS EN 933-1 (BSI 2012b). The fine aggregate was considered to be that which passed through the 4 mm aperture sieve, and the coarse aggregate that which was retained, and the content of each per 1 m³ mix was determined using the following expressions:

M_Af = M_A.OD × (M_f / M_t)
M_Ac = M_A.OD × (M_c / M_t)

where: M_Af is the mass of OD fine aggregate per m³ mix in kg; M_Ac is the mass of OD coarse aggregate per m³ mix in kg; M_A.OD is the mass of OD aggregate per m³ mix in kg; M_f is the mass of fine aggregate passing through the 4 mm sieve, in kg; M_c is the mass of coarse aggregate retained on the 4 mm sieve, in kg; M_t is the total mass of aggregate used in the dry sieving procedure, in kg. It should be noted that each of these masses represents the mass of aggregate only, and does not consider the additional mass of water required to bring the aggregate to a saturated-surface-dry state.

2.1.7. Concrete voids ratio

The voids ratio of each hardened concrete sample was calculated from the saturated-surface-dried and oven-dried densities, calculated in accordance with BS EN 12390-7 (BSI 2009a), using the following expression:

e_c = (ρ_c.SSD − ρ_c.OD) / (ρ_w − (ρ_c.SSD − ρ_c.OD))

where: e_c is the voids ratio of the concrete; ρ_c.SSD is the density of the saturated-surface-dried concrete in kg/m³.

2.1.8. Proportional share of concrete voids

The voids ratio of the cement matrix could be calculated from the proportional share of concrete voids attributed to the matrix. However, in order to do this, it was first necessary to calculate the proportion of concrete voids attributed to the aggregate.

Aggregate proportion of voids. The aggregate proportion of voids was calculated from the voids ratio of the aggregate and the calculated volume of SSD aggregate per m³ mix, using equation (12). It was calculated for the fine and coarse aggregates separately. As the volume of the SSD aggregate is calculated per m³ mix, this term can be expressed as a ratio (unitless) as well as a volume (m³).

e_c.a = e_a × V_A.SSD    (12)

where: e_c.a is the proportion of the concrete voids ratio attributed to the aggregate; V_A.SSD is the volume ratio of saturated-surface-dry aggregate per m³ mix.

Cement matrix proportion of voids. Assuming that the remainder of the concrete voids are found in the cement matrix, the proportion of total concrete voids attributed to it could be calculated from the expression:

e_c.m = e_c − (e_c.af + e_c.ac)

where: e_c.m is the proportion of the concrete voids ratio attributed to the cement matrix; e_c.af is the proportion of the concrete voids ratio attributed to the fine aggregate; e_c.ac is the proportion of the concrete voids ratio attributed to the coarse aggregate.

2.1.9. Cement matrix voids ratio

As with the SSD aggregate, the volume of the SSD cement matrix was calculated per m³ mix and can, therefore, be expressed as a ratio (unitless) as well as a volume (m³). The voids ratio of the cement matrix can be calculated from the expression:

e_m = e_c.m / V_M.SSD

where: e_m is the voids ratio of the cement matrix; V_M.SSD is the volume ratio of SSD matrix per m³ mix. It should be noted that, as with other test methods that have been discussed, carbonation of the cement matrix leads to a reduction in pore volume, which provides a source of error in determining the volume of voids.

2.1.10. Free water content

The volume of free water was considered to be that which filled the voids of the hardened cement matrix, and was therefore calculated using the expression:

V_fw = e_m × V_M.SSD

where: V_fw is the volume of free water per m³ mix in m³. This can then be converted to a mass:

M_fw = V_fw × ρ_w

where: M_fw is the mass of free water per m³ mix in kg.
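Taken together, sections 2.1.6 to 2.1.10 form a short chain of ratios. The sketch below strings them into one pass, under the proportional-attribution reading given above; variable names follow the notation list and all example values are hypothetical.

```python
RHO_W = 1000.0  # kg/m3

def split_fine_coarse(m_a_od, m_f, m_c):
    """Distribute total OD aggregate (kg per m3 mix) into fine and coarse
    fractions using the 4 mm sieve grading masses m_f and m_c (kg)."""
    m_t = m_f + m_c
    return m_a_od * m_f / m_t, m_a_od * m_c / m_t

def free_water(e_c, e_af, v_af_ssd, e_ac, v_ac_ssd, v_m_ssd):
    """Matrix voids ratio e_m and free water mass per m3 mix."""
    e_c_m = e_c - (e_af * v_af_ssd + e_ac * v_ac_ssd)  # matrix share of voids
    e_m = e_c_m / v_m_ssd                              # matrix voids ratio
    v_fw = e_m * v_m_ssd                               # matrix voids volume
    return e_m, v_fw * RHO_W                           # (ratio, kg water per m3)

print(split_fine_coarse(1687.5, m_f=0.35, m_c=0.65))
print(free_water(e_c=0.16, e_af=0.05, v_af_ssd=0.25, e_ac=0.03,
                 v_ac_ssd=0.45, v_m_ssd=0.30))
```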
2.1.11. Total water content

If considered to be the sum of combined water and free water, the total water content of each sample could be calculated from the expression:

M_tw = M_cw + M_fw

where: M_tw is the total mass of water per m³ mix in kg.

Mix proportion summary

Once the mix proportions had been calculated, the results could be compared against the designed mix proportions (Table 2), and the standard and mean deviations determined (Table 3). The relevant terms used to represent the specific constituents in the previous mix proportion calculations can be found in Table 4.

Mix proportions

The deviations of the mix proportions determined from the analysis of the concrete samples from the designed values were significant (Tables 2 and 3). Furthermore, there does not appear to be any correlation between the mix proportions and the observed deviations; that is to say, no general correlation could be found between the degree of variation in results and specific mix characteristics such as w/c ratio or cement content, suggesting that the errors are due to experimental or sampling errors.

Sensitivity analysis

In order to determine how each variable affected the mix proportion calculations, the mix proportions for T7, the most accurately estimated, were re-calculated 10 times, with one of the ten input variables obtained from experimental testing increased by 10% (a factor of 1.1) each time, as shown in Table 5. It is clear from these results that the calculated mix proportions were extremely sensitive to small changes in measured results. In particular, small variations in the concrete density measurements have a significant impact on the accuracy of the results due to the scaling effect when normalising the proportions for a 1 m³ mix. For example, if a 10 kg/m³ increase in OD density was applied to each mix design, it resulted in a decrease of 0.02–0.03 in the calculated w/c ratio of each sample (Figure 4). This is of particular concern as deviations in calculated density by this margin are common, as the calculations involved are themselves particularly sensitive to scaling errors inherent to the use of relatively small test samples. One particular reason for these errors is the need to weigh the sample in a SSD state, which means that, theoretically, all the pores and voids of the sample are completely saturated with water but no additional moisture is present on the outer surface. In reality, this is highly unlikely to be exact, as the determination that the sample has reached the SSD state is based purely on the perception and judgement of the individual carrying out the test. This issue is particularly relevant when dealing with small specimens, which have a relatively high ratio of surface area to total volume, and introduces the possibility that small variations in the saturation state of the surface layer will result in calculated density errors which compound as they are used throughout multiple calculations. The prevalence of this issue can be put into perspective by examining the differences in the results of the density tests. The OD and SSD densities were determined twice for each of the hardened concrete samples and the results compared. The mean deviation between tests was 10.9 and 20.4 kg/m³ for OD density and SSD density, respectively, with standard deviations of 4.8 and 4.0 kg/m³.
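The one-at-a-time perturbation behind Table 5 is straightforward to script. The sketch below mirrors that procedure in general form; estimate_wc stands in for the full calculation chain of section 2.1, and the toy closure is only an illustrative placeholder, not the real model.

```python
def sensitivity(estimate_wc, inputs, factor=1.1):
    """Scale each measured input by `factor` in turn and report the
    recomputed w/c ratio, as in the Table 5 procedure."""
    base = estimate_wc(**inputs)
    for name in inputs:
        bumped = dict(inputs, **{name: inputs[name] * factor})
        print(f"{name:>12s} x{factor}: w/c {base:.3f} -> {estimate_wc(**bumped):.3f}")

# placeholder stand-in for the full section 2.1 chain (illustrative only)
toy_wc = lambda rho_c_od, m_cem: (0.23 * m_cem + 0.05 * rho_c_od) / m_cem
sensitivity(toy_wc, {"rho_c_od": 2250.0, "m_cem": 382.5})
```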
As adjusting the mix proportion calculations with a 10 kg/m³ increase in OD density resulted in a decrease of up to 0.02–0.03 in the calculated w/c ratio of each sample, this presents a significant issue given that the mean deviation between any two OD density test results for one sample was 10.9 kg/m³. Examples of how changes in measured density can affect the calculation of w/c are shown in Figures 4 and 5. Another significant variation that occurred was in the fine, coarse and total aggregate contents. In all cases except sample T9, the total aggregate content was calculated as being lower than the designed mix. While the total aggregate content errors can be attributed to the previously discussed issues inherent to the density calculations, the ratio of both fine and coarse aggregate to total aggregate should not be affected by this. The fine and coarse aggregate contents as a percentage of total aggregate mass were calculated from the mass of aggregate passing and retained on a 4 mm aperture sieve, respectively. As the sieving procedure required the aggregates to be in an OD state, and the same sample could be retested an unlimited number of times, there is very little error introduced from the actual experimental procedure. As such, it is likely that the errors can be attributed to variations in the physical composition of the concrete sample. As shown in Table 6, in all cases except sample T3 the percentage of aggregate passing was significantly greater than expected, and there are several potential reasons this could have occurred. Firstly, once mixing was complete, the fresh concrete was hand trowelled into moulds in layers, and it is possible that some segregation occurred in the horizontal plane at this stage, causing the fine and coarse aggregate to be inconsistently positioned throughout the mould; as the sawn specimens were relatively thin in one orientation, any such segregation would be disproportionately reflected in the material tested. Secondly, as the sawn specimen was relatively thin, it is possible that a portion of the coarse aggregate that was positioned in the plane of each cut was sawn such that it then passed through the 4 mm aperture sieve and was counted as fine aggregate. In practice, this issue should be minimised by taking cores with a diameter of at least three and a half times that of the maximum aggregate size (BSI 2012a). However, as previously discussed, it is not always possible to take concrete samples of such size, particularly from historic structures. Thirdly, the concrete samples were heated in a furnace to 400 ± 5°C, as per BS 1881-124 (BSI 2015), in order to aid the break-down of the cement matrix, and this may have resulted in some fragmentation of the aggregate and an increase of finer particles. The impact of the density equation errors is again highlighted when comparing the errors obtained during aggregate sieving and aggregate mix proportion calculations, as shown in Table 6. One such example is sample T3 which, despite having a negligible error (0.4%) from the aggregate grading, had errors of −8.3% (−52 kg/m³) and −6.9% (−82 kg/m³) for fine and coarse aggregate respectively. Another example of particular note is sample T9, where an error of 22.4% in the aggregate passing the 4 mm aperture sieve resulted in a fine aggregate error of 87.3%.
These errors occur because the error in total aggregate content is distributed into fine and coarse aggregate contents using the results from the sieve grading, which in turn increases the error in terms of mass per 1 m³ mix proportionally, and when this mass error is converted into a percentage error of the original mix proportions it has the potential to become particularly high. Another source of error comes from the assumption that the insoluble residue obtained from acid-digestion of the concrete is an accurate representation of the aggregate content. There are two conditions that need to be met for this assumption to be correct: firstly, that none of the aggregates are acid-soluble, and, secondly, that all of the cement matrix is acid-soluble. The former is an issue which is relatively well understood, and can be taken into consideration by performing microscopical analyses to determine the presence of acid-soluble aggregates, quantified to allow an approximate correction to the insoluble residue results; however, the previously discussed limitations of such techniques need to be considered. The latter issue is more complex. The insoluble residue obtained from historic concrete samples can contain significant amounts of amorphous 'glassy' material which is not acid-soluble, and most likely originates from the cement and not the aggregate. The presence of this glass in Portland cement clinker is inevitable, and research carried out by Lerch (1938) approximated the glass contents of Portland cement clinker from 21 plants in the USA and found that they varied from 2 to 21 per cent. Furthermore, it was concluded that, for any given clinker composition, the glass content was dependent on the cooling conditions that the clinker was subjected to, with relatively high glass contents caused by cooling the clinker rapidly, and relatively low glass contents by cooling slowly. This presents a problem when dealing with early Portland cements, since the cement manufacturing process was, at that time, very much a process of trial and error, and this makes it difficult to predict reasonable values for the insoluble amorphous content of cements from historic concrete samples. While the amount of amorphous material can be estimated with some accuracy through quantitative X-ray diffraction (XRD) analysis, it is also possible that some aggregates contain amorphous material, and it is not possible to distinguish the amount that should be attributed to each. As such, this presents a source of error in the calculation of aggregate and cement matrix contents and subsequently results in errors in the determination of both aggregate and cement matrix chemical composition. Taking these various factors into account, it can be concluded that an accumulation of experimental errors propagated through multiple calculations, particularly those related to density tests, contributed to the significant variation of the estimated mix proportions from the designed mix proportions. Furthermore, the tests used become increasingly inaccurate as the sample size is decreased, and this is problematic when dealing with historic concrete structures where limited amounts of material are available for testing.
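Where the glassy share of the cement can be estimated (for example by quantitative XRD), the insoluble residue can at least be approximately corrected before the aggregate content of section 2.1.1 is calculated. The sketch below shows one such hypothetical correction; the glass fraction and the other inputs are assumptions for illustration, not a procedure given in BS 1881-124.

```python
def corrected_aggregate_pct(insoluble_residue_pct, cement_pct, glass_frac):
    """Remove the cement-derived acid-insoluble glass share from the
    insoluble residue; glass_frac is an assumed glass fraction of the
    anhydrous cement (Lerch 1938 reports roughly 0.02-0.21)."""
    return insoluble_residue_pct - cement_pct * glass_frac

# hypothetical: 75% residue, 17% anhydrous cement, 10% of it glassy
print(corrected_aggregate_pct(75.0, 17.0, 0.10))  # -> 73.3 %
```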
Porosity

The porosity of concrete is an important factor which affects not only the physical properties of the hardened material, such as surface texture and subsequently the manner and extent to which it will weather, but also influences its mechanical properties, such as strength, shrinkage and creep. Porosity is determined predominantly by the w/c ratio and curing conditions of the concrete (Basheer and Barbhuiya 2010) and, as it has been shown to be difficult to accurately analyse the w/c ratio of historic concrete, it may be necessary to determine the porosity of samples taken from the in-situ concrete source if a repair material is to be designed for it. However, it is unclear how suitable current techniques which directly measure porosity are for use with historic concrete, and so an investigation was carried out on the control samples T2–T9, the results of which are shown in Figures 6 and 7. Porosity was determined by two different methods: firstly, using MIP and, secondly, from the comparison of the results from OD and SSD density tests. Strictly speaking, the results from density measurements are not a measure of porosity, as they inevitably include larger air voids that were not present in the samples used in the MIP analyses. However, as the tests were carried out on laboratory-made samples, which were compacted following the standard procedure, the proportion of air voids should be minimal. There are several factors which influence the porosity of hardened concrete, which need to be considered in the comparison of results. While the w/c ratio is the key parameter controlling the formation of the cement paste microstructure, it is also important to consider that, when measuring the porosity of hardened concrete samples, the aggregate type and quantity can also have a significant impact on the porosity results, as can the curing conditions that the concrete was subjected to.

[Figure 6. MIP results comparing total porosity and its distribution in pore sizes for samples T2–T9.]

In this study, the same aggregate type was used in each sample and the mix proportions were known, which allowed a more accurate interpretation of the results. Furthermore, as the cement type used and curing conditions were the same for each sample, this eliminated two potential sources of variation between the different designed mixes. However, it is still necessary to compare the results of samples which share one equal parameter; in this case, comparison is made between the results of samples with the same mix proportions but different w/c ratio (T2/T3, T4/T5/T6, T7/T8/T9), and also between the results of samples with the same w/c ratio but different proportions of cement and sand (T4/T7, T2/T5/T8, T3/T6/T9). In both the density and MIP porosity results, it was clear that, for similar cement:sand:aggregate proportions, an increase in w/c ratio resulted in an increase in porosity. There was, however, a discrepancy between the porosity results of the MIP and density tests when comparing samples with the same w/c ratio but different mix proportions. The expectation was that, at constant w/c ratio, an increase in cement content would result in a higher porosity, as the cement matrix is more porous than the aggregate; i.e. in this study, at constant w/c, the 1:1:2 mix would have the greatest porosity and the 1:2:4 mix would have the lowest porosity. While the results of the density tests support this, the MIP results do not, as the 1:1.5:3 mixes T5 and T6 have a lower MIP porosity than the corresponding 1:2:4 mixes, T2 and T3 respectively.
This discrepancy could possibly be attributed to two factors: firstly, a significant amount of the coarsest pores in the 1:1.5:3 mix may fall outside of the range of measurement of MIP, an issue which is associated with this technique (Taylor 1997); secondly, the discrepancy may have arisen as a result of experimental and sampling errors associated with this technique, an issue which will be discussed subsequently. In any case, due to the limited number of test specimens available from each sample on which these tests were carried out, it is difficult to draw any firm conclusions on this discrepancy. This presents an issue which hinders the usefulness of MIP when trying to ascertain the correlation between particular variables, such as cement content and w/c, and the porosity of historic concrete samples. Given that this could not be achieved in a controlled study where the original mix proportions and w/c ratios were known and the variation between samples limited, it is unlikely that, in a wide-scale study where all the samples have varying mix proportions, unknown curing conditions, different cement and aggregate types, and where the number of samples available for destructive testing is limited, the use of MIP will provide any meaningful data.

Experimental error

While porosity tests can provide useful information on the pore structure of laboratory-made cement pastes and mortars, there are two important factors which need to be taken into consideration when analysing the data from tests carried out on hardened concrete, particularly that which is carbonated. Firstly, when the test is carried out on concrete, each sample will inevitably contain varying quantities of cement and aggregate. In order to give a context to the results obtained, it is important to have first determined not only the cement matrix and total aggregate contents but also the proportion of fine and coarse aggregates, as these will each have different porosities which will affect the results. In the case of the results discussed in this chapter, this issue is of less concern than with concrete taken from an in-situ source, as the original mix proportions of these samples were known. However, there will inevitably be a degree of variation from the designed mix proportions due to the heterogeneity of concrete, and this is particularly true when carrying out MIP, as the test is carried out on very small specimens (8 mm diameter cores, approximately 15 mm in length), making it very difficult to ensure that any individual test specimen is, in fact, an accurate representation of the bulk mass with known aggregate and cement matrix contents. Again, this issue is of even greater concern when dealing with samples of unknown mix proportions, due to the inaccuracies in the methods used to determine these, as discussed earlier in this study. Secondly, MIP estimates pore-entry sizes, not the distribution of pore sizes, and so if large pores can only be accessed through narrow entrances they will be incorrectly registered as smaller pores (Taylor 1997). This is problematic when dealing with carbonated concrete, as the conversion of calcium hydroxide to calcium carbonate results in an increase in the crystal volume of approximately 11.7% (Ishida and Maekawa 2001), which in turn causes a decrease in the size of pores in the concrete, causing a finer porosity to be registered during MIP tests.
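The pore-entry (rather than pore-size) character of MIP follows from how the raw data are reduced: each intrusion pressure is mapped to an entry diameter through the Washburn equation, d = −4γ·cos(θ)/P, so a large pore behind a narrow throat is registered at the throat's diameter. The sketch below shows that conversion alongside the density-based porosity figure used for comparison above; the mercury surface tension and contact angle are typical assumed values, and all inputs are illustrative.

```python
import math

RHO_W = 1000.0  # kg/m3

def porosity_from_densities(rho_ssd, rho_od):
    """Water-accessible porosity (volume %) from SSD and oven-dry concrete
    densities; unlike MIP, larger air voids are included in this figure."""
    return (rho_ssd - rho_od) / RHO_W * 100.0

def washburn_entry_diameter_nm(p_mpa, gamma=0.485, theta_deg=140.0):
    """Pore-entry diameter (nm) for an intrusion pressure (MPa) via the
    Washburn equation; gamma (N/m) and theta are assumed mercury values."""
    p = p_mpa * 1e6                                        # Pa
    return -4.0 * gamma * math.cos(math.radians(theta_deg)) / p * 1e9

print(porosity_from_densities(2390.0, 2250.0))             # -> 14.0 %
for p in (10.0, 100.0, 400.0):
    print(f"{p:6.1f} MPa -> {washburn_entry_diameter_nm(p):7.1f} nm entry")
```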
There are additional errors inherent to this technique, such as its mathematical assumption that the pores are perfectly cylindrical, which is unlikely to be the case, and the sample preparation and testing procedures, which can both alter the delicate pore structure (Taylor 1997). This creates difficulty when trying to determine the relationship between various historic cement compositions and the pore structure of cement paste, as even samples with the same cement type and w/c that are carbonated to a different degree may be analysed by MIP as being quite different, due to the effects of carbonation on the pore entry sizes. However, MIP tests may still provide valuable information when analysing an individual concrete sample from a proposed repair area. While the actual quantification of the range of pore sizes, and indeed the quantification of total porosity, may not be a particularly accurate reflection of the bulk material, and is therefore unsuitable for assessing how a certain cement type will influence the formation of pores in the hardened paste, and subsequent mechanical properties such as shrinkage, the analysis of pore entry sizes can still provide insight into the physical characteristics of the surface layer of concrete. For example, the results provided by the MIP tests on the pore entry sizes of carbonated concrete may be used to better understand how that material has degraded, or will degrade, in response to its environment, and also for comparison with potential repair materials to ensure they will have a similar surface texture and will weather and stain in a similar fashion.

Variations in the composition of in-situ concrete

While there are experimental errors that are inherent to concrete testing, it is also important to consider that the heterogeneity of concrete is generally such that, when working with small samples, the bulk material is not being taken into consideration, and any test, no matter how accurate, can only give a localised quantitative assessment of composition. With this in mind, there are several issues related to the in-situ casting of fresh concrete which need to be considered when relating the properties of relatively small analytical samples to the much larger substrate, particularly when these samples are derived from one particular area and are unlikely to be representative of the bulk material. Segregation in fresh concrete is a significant factor which contributes to an increase in the variation in the composition of the hardened concrete. It can be attributed to several factors, including over-compaction, poor placement and inadequate mix design; the latter is particularly relevant to historic concrete, as the first standards for concrete in the UK were not introduced until the first half of the 20th century. A lack of suitable grading is conducive to segregation, which in turn can result in the dense coarse aggregate particles settling to the bottom of the mix and fluid cement paste rising to the top (Neville and Brooks 2010). The effects of segregation on concrete heterogeneity should not be underestimated, particularly when selecting samples for analysis, as it has been found to result in a difference in cement content of as much as 100 kg/m³ between the top and bottom of concrete walls and columns (Skinner 1980).
Bleeding, another form of segregation which occurs in fresh concrete, is usually a result of over-compaction and can have a detrimental effect on concrete, as it causes water to rise to the top surface, creating a weak and porous layer in the hardened concrete which varies from the underlying material. It can also result in areas of high permeability below large aggregate or reinforcement as the rising water becomes trapped, leaving voids in the hardened concrete (Neville and Brooks 2010). Segregation is of far more concern when dealing with concrete cast in-situ than with concrete cast in a laboratory environment, or even cubes taken on a construction site for quality assurance tests. There are two reasons for this. Firstly, when making concrete cubes for laboratory testing, the samples are compacted following a standard procedure, such as BS EN 12390-2 (BSI 2009b), while concrete cast in-situ is compacted to the satisfaction of the concrete finisher, foreman or engineer, and this can result in varying degrees of under- or over-compaction, which subsequently affects the heterogeneity of the mix. Secondly, while the control samples used in this study did suffer from some degree of segregation, as shown in Figure 8, this predominantly results in variations through the vertical plane of the sample and, as the samples were sawn parallel to the vertical plane, these variations are contained within the dimensions of the sample being tested. When dealing with in-situ concrete, it is unlikely that the effects of segregation in the bulk of the concrete will be accurately reflected in samples taken for testing, unless they are vertical cores of the full depth of the concrete. Another influencing factor is the 'wall effect', a physical phenomenon which occurs at the interface of concrete and formwork, where the surface of the formwork affects particle packing by preventing the uniform distribution of coarse aggregate, which in turn causes an increase in the mortar content required to fill the surrounding space (Neville 2011). This results in the formation of three skin layers: the cement skin, mortar skin and concrete skin, approximately 0.1 mm, 5 mm and 30 mm thick respectively (Kreijger 1984). While the w/c ratio in these layers remains unchanged, both the cement and water content increase (Neville 2011). Furthermore, some tests have shown that the wall effect can result in an increase in sand content at the concrete surface equal to 10% of the total mass of aggregate (Shacklock 1959). As such, it is important that any material analysed from the surface skins is not considered to be representative of the bulk of the concrete, and vice versa.

Conclusions

The number of historically-significant concrete structures which require conservation and repair is ever-increasing. As the use of proprietary repair materials has previously resulted in repairs of variable quality, the approach to the repair of historic concrete structures in the United Kingdom has shifted from the use of mass-produced proprietary repair materials to purpose-made 'like-for-like' replacements.
However, there are four key issues with this approach, discussed above, which need to be considered:

(1) Doubts regarding the accuracy of existing test procedures in general;
(2) The unsuitability of existing procedures for use with historic concrete due to the physical and chemical alteration which has occurred;
(3) Limited availability of substrate to allow accurate characterisation;
(4) The variability of the substrate, and ensuring that samples are adequately representative.

Funding

Initial work on this study was supported by a collaborative doctoral research award funded by the Arts and Humanities Research Council [AH/L008467/1] and Historic Environment Scotland through the Scottish Cultural Heritage Consortium. Additional work and preparation of the manuscript were supported by the J. Paul Getty Trust.

Notations

e_a is the voids ratio of the aggregate;
e_c is the voids ratio of the concrete;
e_c.a is the proportion of the concrete voids ratio attributed to the aggregate;
e_c.ac is the proportion of the concrete voids ratio attributed to the coarse aggregate;
e_c.af is the proportion of the concrete voids ratio attributed to the fine aggregate;
e_c.m is the proportion of the concrete voids ratio attributed to the cement matrix;
e_m is the voids ratio of the cement matrix;
M_a.IM is the mass of the saturated sample immersed in water in kg;
M_a.OD is the mass of the OD aggregate (total) in kg;
M_A.OD is the mass of OD aggregate (total) per m³ mix in kg;
M_a.SSD is the mass of the SSD aggregate (total) in kg;
M_A.SSD is the mass of SSD aggregate (total) per m³ mix in kg;
M_Ac is the mass of OD coarse aggregate per m³ mix in kg;
M_Af is the mass of OD fine aggregate per m³ mix in kg;
M_c is the mass of coarse aggregate retained on the 4 mm sieve, in kg;
M_cem is the mass of anhydrous cement per m³ mix in kg;
M_cw is the mass of combined water per m³ mix in kg;
M_f is the mass of fine aggregate passing through the 4 mm sieve, in kg;
M_fw is the mass of free water per m³ mix in kg;
M_M.OD is the mass of OD matrix per m³ mix in kg;
M_t is the total mass of aggregate used in the dry sieving procedure, in kg;
M_tw is the total mass of water per m³ mix in kg;
V_a.s is the volume of aggregate solids in m³;
V_a.SSD is the volume of SSD aggregate per m³ mix in m³;
V_A.SSD is the volume ratio of saturated-surface-dry aggregate per m³ mix;
V_a.w is the volume of aggregate voids filled by water in m³;
V_fw is the volume of free water per m³ mix in m³;
V_m.SSD is the volume of SSD matrix per m³ mix in m³;
V_M.SSD is the volume ratio of SSD matrix per m³ mix;
ρ_a.SSD is the density of SSD aggregate in kg/m³;
ρ_c.OD is the density of the OD concrete in kg/m³;
ρ_c.SSD is the density of the saturated-surface-dried concrete in kg/m³;
ρ_w is the density of water in kg/m³.
Body mass index modulates the association between CDKAL1 rs10946398 variant and type 2 diabetes among Taiwanese women

CDKAL1 rs10946398 is a type 2 diabetes (T2D)-associated variant and a newly identified body mass index (BMI)-associated variant in Asian populations. We investigated the association between rs10946398 and T2D among 9908 participants aged 30–70 years, stratified by BMI: normal weight (18.5 ≤ BMI < 24 kg/m²), overweight (24 ≤ BMI < 27 kg/m²), and obesity (BMI ≥ 27 kg/m²). The CC genotype conferred a higher risk of T2D than the CA genotype: the odds ratios (ORs) were 1.83 (95% confidence interval (CI) 1.49–2.26) and 1.20 (95% CI 1.02–1.40), respectively. The C allele was a significant risk allele compared with the A allele (OR = 1.32; 95% CI 1.19–1.47). For normal weight, overweight and obese participants with the CC genotype, the ORs were, respectively, 1.69 (95% CI 1.02–2.81), 2.34 (95% CI 1.50–3.66), and 1.58 (95% CI 1.02–2.45) among men, and 1.22 (95% CI 0.67–2.22), 2.42 (95% CI 1.30–4.52), and 2.3 (95% CI 1.19–4.50) among women. The C allele ORs were higher in obese and overweight women. In conclusion, the rs10946398 CC/CA genotypes, as well as the C allele, increased the risk of T2D. The ORs were higher in women who were overweight and obese than in those with normal weight. Nonetheless, significant results were prominent only among those with the CC genotype and C allele. The C allele of rs10946398 is a reported T2D risk allele [18]. A meta-analysis of cohort studies has reported significant associations between CDKAL1 variants and T2D in general populations [19]. However, further analyses showed significant associations only in Asian, but not African, subgroups, an indication that results from individual studies are not consistent with one another. Obesity (defined by BMI) is strongly associated with the risk of type 2 diabetes [20]. There are sex differences in the pathogenesis of this disease; for example, men are reported to develop it at a relatively lower BMI than women [21]. Genetic components can have different effects in men and women; however, most previous association studies have focused on only one sex. SNP rs10946398 is one of the variants that have shown the strongest associations with diabetes, particularly in European and South East Asian populations. Moreover, it is a new BMI-associated locus specifically among Asian populations [22], yet it has received less attention than other SNPs [22]. To date, only one study has investigated the effect of this SNP on T2D in Taiwan [12]. Using Taiwan Biobank resources, we assessed whether there are sex-related differences in the association between SNP rs10946398 and T2D based on BMI. Table 1 presents the basic characteristics of the study participants and the odds ratios of type 2 diabetes. Among the 974 participants identified with type 2 diabetes, 619 were men and 355 were women. The mean age was 48.60 years (SD = 10.99) for men and 48.64 years (SD = 10.63) for women. The risk of diabetes imparted by the CC genotype (OR = 1.83; 95% CI, 1.49–2.26) was higher than that of the CA genotype (OR = 1.20; 95% CI, 1.02–1.40). In addition, the C allele's effect was more significant compared with the A allele (OR = 1.32; 95% CI, 1.19–1.47). The odds ratio for T2D was higher among men compared to women (OR = 1.20; 95% CI, 1.00–1.44). Table 2 presents the baseline characteristics of the study participants stratified by BMI. The proportions of individuals with T2D were significantly different (p < 0.001).
Obese individuals had the highest rate of diabetes (19.64%), compared with the overweight (10.01%) and normal weight (5.42%) individuals. Table 3 shows the association of rs10946398 with type 2 diabetes across the different categories of BMI. The risks imparted by the CC genotype were as follows: normal weight (OR = 1.47; 95% CI, 1.01–2.15), overweight (OR = 2.33; 95% CI, 1.62–3.33), and obesity (OR = 1.80; 95% CI, 1.25–2.57). For the CA genotype, a significant association was found only among obese individuals (OR, 1.31; 95% CI, 1.02–1.69), with a borderline association among overweight individuals (OR, 1.33, p = 0.0540). The C allele odds ratios (compared with the A allele) were 1.17 (0.97–1.41), 1.50 (1.25–1.79), and 1.33 (1.13–1.58) for normal weight, overweight and obese individuals, respectively. In the dominant model, the interaction between BMI and rs10946398 on T2D risk was significant among women (P for interaction = 0.0318) but not men (P for interaction = 0.969). Table 4 shows the association between rs10946398 and type 2 diabetes stratified by sex and BMI. There was no significant interaction between rs10946398 and sex. The risk of T2D among men with the CC genotype was as follows: OR = 1.69 (95% CI 1.02–2.81) for normal weight, 2.34 (95% CI 1.50–3.66) for overweight, and 1.58 (95% CI 1.02–2.45) for obese individuals; among women, the corresponding ORs were 1.22 (95% CI 0.67–2.22), 2.42 (95% CI 1.30–4.52), and 2.3 (95% CI 1.19–4.50).

Discussion

In this study, we found a significant association between CDKAL1 rs10946398 and type 2 diabetes among Taiwanese individuals. The association was substantially stronger in CC compared to CA carriers, as well as in C allele compared to A allele carriers. Based on stratified analyses, the CC genotype was significantly associated with T2D in overweight and obese women, as well as in men regardless of their BMI. Several studies have provided evidence for the significant contribution of CDKAL1 rs10946398 to T2D risk [13,14,16,22,23]. According to findings from a global meta-analysis, eight studies have reported a trend of elevated ORs for the C risk allele, whereas only two studies have found no associations [14]. In our initial analysis, the CA genotype was a risk factor for T2D. However, after stratification by BMI, a significant odds ratio was found only among obese individuals (OR, 1.31; 95% CI, 1.02–1.69). When stratified by sex, borderline associations were found among overweight (OR, 1.59, p = 0.064) and obese (OR, 1.51, p = 0.061) women; significant odds ratios were not observed in men. Menopause has been associated with certain unfavorable changes in the body that serve as predisposing factors for type 2 diabetes [24,25]. Post-menopausal women are believed to be more susceptible to T2D due to their greater percentage of body fat and intra-abdominal visceral fat [26]. In the present study, an increased risk of T2D was found among normal weight menopausal women (OR, 2.61; 95% CI, 1.17–5.80). Based on the criteria defined by the Department of Health, a BMI of 18.5–23.9 kg/m² indicates normal weight, 24–26.9 kg/m² indicates overweight, while ≥27 kg/m² indicates obesity [27]. The suggested BMI cut-off for Asian populations varies from 22 to 25 kg/m², and values above 26 kg/m² are associated with a higher risk of T2D [28]. In a study conducted in Taiwan, Chang and colleagues reported that the rs10946398 C allele was associated with an increased risk of T2D, and that this association was attenuated in persons with a larger BMI [12]. Nonetheless, we found contrasting results. In their study, the sample size was comparatively small and the stratifications did not include sex. In addition, there was no information about the total number of participants with a higher BMI (>26.9 kg/m²).
In another study, ENPP1 rs1044498 was significantly associated with T2D but not with obesity [29]. It is worth mentioning that T2D had no significant association with smoking, physical activity, or alcohol drinking. The mechanism through which the CDKAL1 gene influences T2D is not fully understood. However, it has been reported that it codes for the cyclin-dependent kinase 5 (CDK5) regulatory subunit-associated protein 1-like 1, which may affect the activity of the CDK5 protein [15]. This may subsequently lead to degeneration of β cells and development of type 2 diabetes mellitus [14]. In this Taiwan-based study, we have attempted to validate the association between CDKAL1 rs10946398 and type 2 diabetes. Such findings are necessary considering that T2D is one of the significant risk factors associated with mortality in Taiwan. Furthermore, noting that SNP rs10946398 is a new BMI-associated locus specifically for individuals of Asian ancestry, large dedicated studies are necessary to confirm such associations [22]. Genetic variants associated with BMI may also have associations with metabolic traits. Stratification of samples helps to clarify the role of BMI in modulating pancreatic beta-cell function possibly influenced by the rs10946398 variant. So far, such stratifications have not been widely considered in Asian populations [13]. Our study is limited in that we used a nonrandom, purposive sampling technique; hence, results may not be fully reflective of the general population. Second, there is a possibility of response bias, considering that information was collected using questionnaires. Moreover, there was no information regarding diabetes regimens. Finally, multiple stratifications might have affected our results. From this perspective, there is a need for additional research. In summary, our study provides the following conclusions: (1) there is a significant association between CDKAL1 rs10946398 and type 2 diabetes among Taiwanese men and women; (2) the CC genotype is a risk factor for type 2 diabetes in women with BMI ≥ 24 kg/m², as well as in men regardless of their BMI; (3) the CA genotype appears to be a risk factor for T2D mainly in obese individuals. In conclusion, we found that the rs10946398 CC/CA genotypes and C allele increased the risk of T2D among Taiwanese adults. Overweight and obese women had higher odds ratios than normal weight women. However, the effect was significant only among those with the CC genotype and C allele. These findings should serve as an incentive for larger dedicated studies.

Methods

Participants were classified as having type 2 diabetes if they: (1) had a fasting glucose level ≥126 mg/dl; (2) had a glycosylated hemoglobin A1c value of at least 6.5%; or (3) self-reported a history of diabetes based on the question, "have you ever been diagnosed with type 1 or type 2 diabetes by a doctor or health professional?" In general, 49.49% (n = 482) of patients with type 2 diabetes were diagnosed by physicians. Participants were grouped into BMI categories: normal weight (18.5 ≤ BMI < 24 kg/m²), overweight (24 ≤ BMI < 27 kg/m²), and obesity (BMI ≥ 27 kg/m²). The selected phenotypic characteristics included age, sex, body mass index, total cholesterol (T-CHO), triglycerides (TG), high-density lipoprotein (HDL-C), low-density lipoprotein (LDL-C), uric acid, systolic blood pressure (SBP), and diastolic blood pressure (DBP). Lifestyle factors included physical activity, smoking (never/former and current smoker), and alcohol consumption (150 c.c. per week or on a regular basis for 6 months). Information on menopause was based on self-report.
Women who reported a complete absence of menstrual periods for 12 consecutive months without hysterectomy were categorized as naturally menopausal. These variables were selected based on their use in previous studies [33,34].

Genetic variant selection/genotyping. We searched peer-reviewed literature databases (PubMed, ScienceDirect, Google Scholar, SNPedia, and the GWAS Catalog) to identify common CDKAL1 gene variants that have been associated with type 2 diabetes. Four SNPs (rs10946398, rs7754840, rs7756992, and rs9465871) were chosen for analysis. However, only rs10946398 was included in the final analysis because of its highly significant association. Furthermore, it is the only CDKAL1 SNP associated with BMI among Asian populations. In addition, it has been shown to have a more striking replication signal. SNP genotyping was performed at the National Center for Genome Medicine in Academia Sinica using the Axiom-Taiwan Biobank Array Plate (Affymetrix, Santa Clara, CA, USA) [31]. The Axiom™ Genome-Wide ASI Array Plate maximizes genomic coverage of common and rare alleles of the East Asian genome, while the Axiom™ Genome-Wide CHB Array Plate maximizes genomic coverage of common alleles of the Han Chinese genome. Only participants with call rates greater than 90% were included in the study. SNPs were excluded if the minor allele frequency (MAF) was <0.05. Also excluded were SNPs whose genotypes deviated from Hardy-Weinberg equilibrium (HWE).

Statistical analysis. We used PLINK 1.09 beta and SAS 9.3 software (SAS Institute, Cary, NC) for data management and statistical analyses. The distributions of variables were tested using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. Data with normal distributions were analyzed by t-test and ANOVA, while those not normally distributed were compared by the Wilcoxon rank-sum test and the Kruskal-Wallis test. Normally distributed variables were presented as mean and standard deviation (SD), while non-normally distributed variables (T-CHO, TG, HDL-C, and LDL-C) were presented as median and interquartile range (IQR). Genotypic associations of the SNP with T2D were determined using the Chi-square test. The risk allele/genotype-specific odds ratios and 95% confidence intervals were calculated using multivariate logistic regression models. Associations of genotypes with T2D were represented as co-dominant models. The interaction between BMI and rs10946398 on T2D risk was tested using a dominant model.

Data Availability

The data that support the findings of this study are available from the Taiwan Biobank Institutional Dataset. To gain access, interested individuals should contact "biobank@gate.sinica.edu.tw".
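As a small supplementary illustration of the statistical analysis described above, the snippet below computes an odds ratio with a Wald 95% confidence interval from a 2×2 case-control table, together with a simple HWE chi-square statistic of the kind used for SNP quality control. It is a minimal sketch with hypothetical counts, not the Taiwan Biobank data or the exact PLINK/SAS models used in the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table: a/b = exposed cases/controls,
    c/d = unexposed cases/controls (all counts illustrative)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (math.exp(math.log(or_) - z * se),
                 math.exp(math.log(or_) + z * se))

def hwe_chi2(n_aa, n_ab, n_bb):
    """One-degree-of-freedom HWE chi-square from genotype counts."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                 # minor allele frequency
    expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(odds_ratio_ci(300, 2400, 674, 6534))          # hypothetical counts
print(hwe_chi2(1200, 4600, 4100))
```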
Electrostatic Discharge Characteristics of SiGe Source/Drain PNN Tunnel FET

Gate-grounded tunnel field effect transistors (ggTFETs) are considered as basic electrostatic discharge (ESD) protection devices in TFET-integrated circuits. The transmission line pulse (TLP) ESD test method is used to analyze in depth the current characteristics and working mechanism of the Conventional TFET under ESD impact. On this basis, a SiGe Source/Drain PNN (P+N+N+) tunnel field effect transistor (TFET) is proposed and simulated with Sentaurus technology computer aided design (TCAD) software. Simulation results showed that the trigger voltage of the SiGe PNN TFET was 46.3% lower, and the failure current 13.3% higher, than those of the Conventional TFET. After analyzing the simulation results, the parameters of the SiGe PNN TFET were optimized. The unique single current path of the SiGe PNN TFET under gate grounding was analyzed and explained.

Introduction

The conventional metal oxide semiconductor field effect transistor (MOSFET) has a subthreshold swing limit of 60 mV/dec at room temperature, which limits the application of MOSFET devices in ultra-low power integrated circuits (ICs) [1,2]. In this context, the tunneling field effect transistor (TFET), which can break through the 60 mV/dec subthreshold swing limit, is a competitive candidate to replace the MOSFET in low power ICs [3-5]. Over the last decade, TFETs have gone through tremendous exploration, including SiGe sources, spacer engineering, highly doped abrupt source profiles, double gate architectures, band gap engineering using III-V materials [6-8] and vertical tunneling [9,10]. These explorations are mainly aimed at breaking through the 60 mV/dec subthreshold swing and obtaining a higher on-state current. However, in the IC industry, electrostatic discharge (ESD) impact can arise anywhere from processing, packaging and transportation to system integration and use. With decreasing process size, advanced technologies such as thin gate oxide layers, short channels and shallow junction depths, while improving device performance, also cause a significant decline in the anti-ESD capability of these devices [11]. According to reliability analyses from the Reliability Analysis Center in the United States, 15% of electronic equipment hardware failures are caused by ESD impact, and in electrostatically highly sensitive integrated circuits, 60% of hardware failures are caused by ESD impact [12]. This shows that electrostatic discharge in TFETs is a major reliability problem at sub-10 nm node technology [13,14]. Analyzing the ESD performance of TFET devices at an early stage can not only shorten the design time but also yield devices with better ESD shock resistance, especially considering that the TFET is expected to be a strong competitor to replace the FinFET in sub-10 nm node technology. However, under ESD impact, the triggering voltage of the TFET is higher than expected, which limits its application in ESD protection. To enhance the TFET's ESD performance, a SiGe Source/Drain TFET has been proposed, whose ESD characteristics are improved compared with the Conventional TFET [15]; an N+ pocket TFET has also been proposed to obtain a better ESD design window [16]; and the double-current-path phenomenon in ggTFETs has been analyzed to explore the ESD behavior [17,18]. Based on these works, this paper proposes a SiGe S/D PNN TFET. Sentaurus TCAD software (Mountain View, California, USA) is used to simulate the ESD behavior of the PNN TFET and the Conventional TFET [19-21].
The simulation results show that the trigger voltage and failure current of the PNN TFET are better than those of the Conventional TFET. In addition, this paper gives a comprehensive analysis of the SiGe PNN TFET. Different from the double-current-path phenomenon of Conventional TFETs, the unique single-current-path phenomenon of PNN TFETs was found for the first time. The influence of various process and device parameters on the ESD performance of the PNN TFET is also given, in order to obtain a better ESD design window.

Basic Concept of Electrostatic Discharge (ESD) Protection TFET

The TFET is essentially a reverse-biased gated p-i-n diode [22-24]. Under negative ESD stress, ESD current is injected into the source terminal of the TFET with the drain terminal grounded. The TFET will operate in a forward diode conduction mode and has a high current discharge capability, as shown in Figure 1a [25,26]. Under positive ESD stress, ESD current is injected into the drain terminal of the TFET with the source terminal grounded. The TFET will operate in avalanche breakdown mode to discharge the ESD current, as illustrated in Figure 1b. Since avalanche breakdown requires a relatively high electric field, the conduction voltage of the TFET under positive ESD stress is high, making it unacceptable in advanced nanoscale technologies. Thus, research on TFETs under ESD stress mainly focuses on the positive discharge mode. Figure 2a,b are schematic diagrams of the SiGe S/D PNN TFET and the Conventional TFET, both of which have a SiGe Source/Drain. In order to facilitate heat dissipation, the device size is not set to a very small value [27,28]. The high-k gate dielectric material is HfO₂. The default device parameters are: gate oxide thickness = 4 nm, gate length L_G = 100 nm, source and drain length = 100 nm, junction depth X_j = 10 nm, and thickness = 1 μm. The p+ source, n+ drain and p channel doping of the Conventional TFET are N_S = 1 × 10²⁰ cm⁻³, N_D = 5 × 10¹⁹ cm⁻³, and N_C = 1 × 10¹⁷ cm⁻³; the p+ source doping of the SiGe S/D PNN is N_S = 1 × 10²⁰ cm⁻³, and the n+ drain and n+ channel doping are both defined as N_D = 5 × 10¹⁹ cm⁻³. In order to avoid possible high defect density at the SiGe/Si interface, we set the default Ge mole fraction to 0.5. The transmission line pulse (TLP) test is a kind of non-destructive equivalent ESD impact test, which can accurately obtain the ESD characteristic parameters of a device. The drain terminal of the TFET was stressed with TLP pulses while keeping the gate and the source terminals grounded. The rise time and the pulse width were set as 10 ns and 100 ns. The voltage samples were obtained by averaging the transient data in the range of 60 ns to 90 ns [16].

Device Structure and Simulation Setup

All simulations of the above device structures are performed using Synopsys Sentaurus simulation software. In order to increase the accuracy of the simulation, the dynamic nonlocal-path model was used to analyze the band-to-band tunneling of the TFET devices under ESD impact. The tunneling probability T_BTBT depends on the electric field across the tunnel junction (E), the carrier effective mass (m*), and the source band gap (Eg); in the WKB (triangular-barrier) approximation it takes the form of Equation (1):

T_BTBT ∝ exp(−4·√(2m*)·Eg^(3/2) / (3·q·ħ·E))    (1)

The thermodynamic model was used to calculate the lattice temperature, and the van Overstraeten-de Man model was used to calculate avalanche generation. In order to obtain more accurate simulation results, the bandgap narrowing model, the carrier mobility model, the high-field saturation model, and the doping-dependent carrier recombination model were also used.
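To see why the SiGe junction helps, Equation (1) can be evaluated for the two band gaps. The snippet below does this at a field of 3.7 MV/cm; the reduced tunneling mass and the Si₀.₅Ge₀.₅ gap are assumed illustrative values, and the prefactor is ignored since only the ratio matters.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
Q = 1.602176634e-19      # elementary charge, C
M0 = 9.1093837015e-31    # electron rest mass, kg

def t_btbt(field_v_per_cm, eg_ev, m_rel=0.2):
    """Relative WKB tunneling probability from Eq. (1); m_rel is an
    assumed reduced tunneling mass in units of m0."""
    F = field_v_per_cm * 100.0                        # V/m
    Eg = eg_ev * Q                                    # J
    expo = 4.0 * math.sqrt(2.0 * m_rel * M0) * Eg**1.5 / (3.0 * Q * HBAR * F)
    return math.exp(-expo)

for label, eg in (("Si", 1.12), ("Si0.5Ge0.5 (assumed)", 0.92)):
    print(f"{label:22s} Eg={eg} eV -> T ~ {t_btbt(3.7e6, eg):.2e}")
```

With these assumptions, the smaller SiGe gap boosts the tunneling probability by roughly an order of magnitude at the same field, consistent with the larger tunneling generation discussed below.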
The ESD characteristics of the device were evaluated using the TLP simulation method. With the gate and source grounded, a TLP current with 10 ns rise time and 100 ns pulse width was applied to the drain. The voltage samples were obtained by averaging the transient data in the range of 60 ns to 90 ns.

Simulation Results and Discussion

In order to verify the performance of the PNN TFET, the failure currents, trigger voltages, and electric fields of the PNN TFET and the Conventional TFET were simulated in this work. The source-region tunneling junction of the Conventional TFET device is composed of a p+ SiGe doped source region and a low-doped p-Si substrate. Compared to the PNN TFET, due to the wide tunnel junction and high trigger voltage, the failure current of the Conventional TFET tends to be smaller. When the size of a TFET device is reduced, an excessive trigger voltage may lead to premature breakdown of the gate oxide layer, while a small failure current may lead to premature damage of the device. In order to obtain a better ESD design window, it is necessary to reduce the trigger voltage and increase the failure current. The TLP I-V curves of the PNN TFET and the Conventional TFET are shown in Figure 3. Under the same pulse current, the trigger voltage of the PNN TFET was 1.3 V, and the failure current was 3.0 mA/μm. Compared with the Conventional TFET, the trigger voltage was reduced by 66.3%, and the failure current increased by 20%. These key parameters will make it easier for the TFET to fit modern ESD design windows and improve the ESD performance of TFET devices. Figure 4 shows the distribution of the band-to-band tunneling generation in the PNN TFET and the Conventional TFET under a 1 mA current pulse. The difference is that PNN TFET tunneling mainly occurred at the source/channel junction, while in the Conventional TFET it mainly occurred at the drain/channel junction. According to the simulation results, near the channel surface (within 2 nm), both the PNN TFET and the Conventional TFET had a high band-to-band tunneling (BTBT) generation rate G_BTBT (>1 × 10³⁰ cm⁻³s⁻¹). However, from the middle of the channel (5 nm), the tunneling generation of the Conventional TFET decreased significantly, while that of the PNN TFET remained at 1 × 10³⁰ cm⁻³s⁻¹. Therefore, the tunneling distribution of the PNN TFET is more uniform, and the tunneling area is effectively increased. In order to study this improvement in tunneling uniformity, the electric field intensity distribution at the junction of drain and channel was extracted. When the band bending was greater than the band gap of the SiGe material, obvious band-to-band tunneling occurred; in other words, when the band bending exceeded the band gap of SiGe, the tunneling path obtained a large G_BTBT, since the band bending from drain to channel depends on the electric field. For the SiGe PNN TFET, obvious tunneling occurred when the electric field at the tunneling junction was higher than 3.7 MV/cm. As shown in Figure 5, the entire electric field of the PNN TFET under a 1 mA/μm TLP current was higher than 3.7 MV/cm, indicating that the tunneling area was greatly increased. For the PNN TFET, the doping concentration in the N+ channel region was 5 × 10¹⁹ cm⁻³; for the Conventional TFET, the channel was close to intrinsic, with a doping concentration of 1 × 10¹⁷ cm⁻³. When a TLP current pulse was applied, electrons accumulated in the channel, but the electron density in the middle and lower parts of the channel was much lower than that at the channel surface.
Since the channel region of the PNN TFET is heavily doped, its electron density in and below the channel is much higher than that of the Conventional TFET when the current pulse is applied. As the TLP current increased, the double-current path of the Conventional TFET became more obvious: two distinct current paths could be observed in the device, the upper one being the hole current path and the lower one the electron current path. Different from the Conventional TFET's double-current path, the total current, electron current, and hole current of the PNN TFET all flowed along one path; that is, there was only a single current path, as shown in Figure 6. At the channel/source junction, a blocky current region existed. This phenomenon can be explained as follows: for the Conventional TFET with both gate and source grounded, the drain potential rises when the ESD current is injected. Figure 7 shows the electric potential contours of the SiGe S/D PNN TFET and the Conventional TFET. The closely spaced contours near the drain/channel interface indicate that this region has a high potential and is the first region where BTBT occurs, generating a large number of electron-hole pairs. In the Conventional TFET, the holes are swept to the oxide interface by the vertical electric field [18]. For the PNN TFET, since both the channel and the drain are n+ doped while the substrate is p doped, the structure is equivalent to a reverse-biased P/N+ junction, and the electric field in the channel and the drain is essentially the same. A difference in the vertical electric field exists essentially only at the channel/source junction, which is why the blocky current is generated there.

Optimization of SiGe S/D PNN TFET Device Parameters

Under an ESD event, Joule heat is the main heat component in the device; following reference [29], it can be expressed in the standard form H = Jn^2/(q·n·μn) + Jp^2/(q·p·μp), where H is the heat, J the current density, μ the mobility, n and p the carrier concentrations, and the subindices n and p denote electrons and holes, respectively. The hole Joule heat is higher than the electron Joule heat because the holes generated by impact ionization move from the drain interface through the channel region to the source, whereas the electrons are collected by the drain and channel with little movement. This, in turn, causes a large amount of hole Joule heat to be generated at the interface regions, as shown in Figure 8. Because the hole Joule heat is the dominant heat source and the hole mobility in SiGe is higher than that in Si, the SiGe S/D TFET generates less Joule heat than a Si TFET [30]. As shown in Figure 9, the hole mobility in the SiGe S/D PNN TFET is higher than that in the Conventional TFET. For the SiGe PNN TFET, as shown in Figure 11, an increase in the Ge mole fraction led to a decrease in the trigger voltage, along with a slight reduction in the failure current. This trend follows readily from the preceding discussion: since the hole Joule heat is the dominant heat source and the hole mobility in SiGe is higher than that in Si, increasing the Ge mole fraction lowers the trigger voltage.

Conclusions

In this paper, a new SiGe source/drain PNN tunnel field-effect transistor was presented, and its ESD characteristics were studied by TCAD simulation. Compared with the Conventional TFET, the trigger voltage of the SiGe source/drain PNN TFET is reduced because the tunneling region has a high BTBT probability and a higher impact ionization coefficient.
The failure current of the SiGe source/drain PNN TFET is also increased, owing to the lower trigger voltage and the smaller Joule heat resulting from the higher hole mobility in SiGe. The unique single-current-path phenomenon in the PNN TFET has been discovered in this work; the results demonstrate that the single current path is formed because the electric field in the channel and the drain is the same. The SiGe S/D PNN TFET device parameters were also optimized in this work: an increase in the drain doping level and in the Ge mole fraction can improve the ESD performance. This enhanced ESD performance will be beneficial for constructing robust TFET-based ESD protection devices in the future.
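As a concrete illustration of the "ESD design window" reasoning used throughout this paper, the short sketch below checks whether TLP-extracted parameters fall inside an assumed window. The window bounds and all names are illustrative assumptions, not values from the paper; only the PNN TFET's trigger voltage (1.3 V) and failure current (3.0 mA/μm) come from the simulations above.

    # Minimal sketch: does a device fit an assumed ESD design window?
    # The window bounds below are illustrative assumptions, not values from the paper.
    def fits_esd_window(v_trigger, i_failure, v_ox_breakdown=3.0, i_required=2.0):
        """v_trigger and v_ox_breakdown in volts; i_failure and i_required in mA/um.
        The device must trigger below the gate-oxide breakdown voltage and
        sustain at least the required failure current before thermal damage."""
        return v_trigger < v_ox_breakdown and i_failure >= i_required

    # SiGe S/D PNN TFET values reported above: Vt1 = 1.3 V, It2 = 3.0 mA/um
    print(fits_esd_window(1.3, 3.0))  # True: inside the assumed window
    # A Conventional TFET with Vt1 of roughly 3.9 V fails the same check
    print(fits_esd_window(3.9, 2.5))  # False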
Quality aspects in the development of pelletized dosage forms

The aim of this work was to identify and collate the major common challenges that arise during pellet development. These challenges span aspects from raw material properties through to the final drying step of pelletization. The challenges associated with the particle size of drug and excipients, physicochemical properties, drug-excipient interactions, and the effect of the type/grade and amount of raw material on pellet properties are covered in this review. Technological and process-related challenges within the commonly used pelletization techniques, such as extrusion-spheronization, hot-melt extrusion and layering techniques, are also emphasized. The paper likewise gives an insight into possible ways of addressing pellet quality during development.

Introduction

Pelletized drug delivery systems are gaining paramount importance in therapeutics owing to their narrow particle size range, good flow properties, and low friability, which prevents dose dumping. Technological advancement has added a new horizon to the manufacturing and scalability of these drug delivery systems [1]. There have been many investigations on optimizing these formulations by controlling process parameters and the polymers added, in order to obtain pellets of high quality [2,3]. The schematic representation in Figure 1 gives an overview of drug delivery development and applications, which are critical in terms of polymer and manufacturing method selection. The pelletization techniques in use include extrusion-spheronization, hot-melt extrusion, layering techniques, balling (spherical agglomeration), compression, globulation, spray drying, spray congealing and cryopelletization [4,5,6]. Each technique is preferred depending on the application for which the pelletized drug delivery system is being manufactured. Therefore, determining the critical process parameters that influence product quality is important during the development process [7,8,9]. A deep understanding of these process parameters and material properties helps reduce batch manufacturing deviations, leading to a robust product [10,11,12]. Past studies highlight different approaches to pelletization, but there has not been a study collating the possible challenges that arise along the way. In this review, the major challenges within the formulation, process and technological aspects of pellet development are discussed. The report explains the effect of the type, grade and quantity of drug and excipients on pellet properties [13,14,15,16]. It also discusses the challenges that can arise from changes in processing parameters such as time, temperature and equipment speed during pelletization [3,15,17,18].

Impact of material attributes

An initial inspection and evaluation of the physicochemical properties of the raw materials is essential. By virtue of their inherent properties, the drug and excipients may be either completely dissolved or undissolved in the final dosage form [13,19]. This state of the drug in the dosage form can be the result of processing, which has an immense impact on the stability of the product.

Impact of particle size and shape on process and product

The particle size of the starting materials, such as drugs, polymers and binders, impacts the surface roughness of the pellets [18]. Small particles pack well and leave fewer peaks and valleys.
Hence, the smaller the particle size, the smoother the surface of the pellets [20,21]. Starting materials such as microcrystalline cellulose (MCC) give pellets with smoother surfaces than those produced with crosspovidone or lactose. This is because MCC disaggregates into smaller particles during the wetting process [22]. The surface roughness of the pellet therefore depends on the particle size of the disaggregated particles. In addition, with MCC, further smoothness of the pellets can be attributed to gel formation followed by pellet shrinkage [23].

Impact of API properties on processing

The critical material attributes significantly affect the choice of pelletization process, as listed in Table 1.

Table 1. Critical material attributes, the pellet quality attributes they affect, and typical control strategies.
Material attribute | Affected quality attribute | Control strategy | Ref.
Particle size distribution | Blend uniformity and pellet size | Particle sizing and optimization of blending | [24]
Fines/oversize | Content uniformity | Order of addition; number of revolutions (speed and time) | [25]
Particle shape | Sphericity of the pellets | Spheronization (speed and time) | [24,26]
Compaction behaviour | Friability and hardness of the pellets | Type and concentration of added binder | [27]
Moisture content | Material loss during processing; friability and hardness of the pellets | Environmental temperature and relative humidity | [28]

Powders containing high concentrations of hydrophobic drugs are difficult to extrude and spheronize due to the poor wettability of the powder mass. Hydrophobic drugs impart good tensile strength to the pellets because of their low water content, and they also slow dissolution [29]. Hydrophilic drugs, on the other hand, show uniform wettability of the powder mass; they tend to agglomerate because of their high water content. Pellets produced with hydrophilic drugs have low tensile strength and show a faster dissolution rate. Pellets of drugs with low solubility show a narrower size distribution than pellets of highly soluble drugs [30,31]. Hot-melt extrusion converts the extruded rods into a compactable material and thereby overcomes compressibility problems during tableting [32]. Research has shown that materials melt under the processing conditions and can affect the functionality of other ingredients [33]. Inhibition of the hardening of a polyethylene glycol (PEG)-MCC matrix by fenoprofen calcium has been reported, resulting in an impracticable product [34]. In other work, lidocaine lowered the Tg of Eudragit E/high-density polyethylene (HDPE) films [35], and a time-dependent lowering of the glass transition temperature of hydroxypropyl cellulose (HPC) films was observed with hydrocortisone [36].

Water content during processing

Water content is the most important factor influencing pellet size, pellet size range and pellet shape. Pellet size increases with increasing water content, and the required water content depends on the type of drug. At high water concentrations, powder masses, especially those containing hydrophilic drugs, agglomerate during spheronization [37,38]. During wet massing, water at a low concentration does not impart sufficient plasticity, and pellets prepared from such a poorly wetted mass are not spherical. The bulk and tapped densities as well as the flow rate increase with increasing water concentration [39]. Therefore, it is necessary to use an optimum concentration of water to obtain pellets of the desired size and sphericity with a narrow size range.
Research has demonstrated that adding glyceryl monostearate (GMS) to the powder mass can be beneficial for drugs that are moisture sensitive or sensitive to the heat required to evaporate the water, as GMS decreases the water concentration in the formulation. GMS also imparted a smoother surface and lower porosity to most formulations. However, during extrusion and spheronization, the length of the extrudates, and thereby the pellet size, increases as the GMS concentration is increased [40,41]. The melt extrusion process essentially avoids latent drug degradation due to hydrolysis and has thereby proved to be an effective and preferred anhydrous process for hydrolyzable drugs [10,42].

Impact of binders on processability during extrusion-spheronization

Both the binder concentration and the type of binder affect the physical properties and appearance of the pellets. During spheronization, pellets of larger size and lower sphericity are obtained as the binder concentration is increased, because at high binder concentrations the small particles combine with one another or with larger particles to form even larger particles [14,43,44,45]. Compared with other binders such as Methocel E15 LV, Methocel A4M and HPC-L, certain binders such as HPC-M show less effect on pellet size and sphericity at higher concentrations. More spherical pellets, a narrow size distribution and good flow can be obtained by increasing the HPC-M concentration [46,47,48].

Impact of commonly used polymer properties during extrusion-spheronization

The thermoplastic behaviour of the polymer and the formulation plays a crucial role in polymer selection. An important factor is the compatibility and stability of the plasticizer-polymer mixture. The most commonly used plasticizers are triacetin [49], citrate esters [50], and low-molecular-weight polyethylene glycols [51]. The type and level of plasticizer determine the extent of glass transition temperature (Tg) lowering for a particular polymer, which in turn improves the stability of the API and the polymer [52,53]. High-molecular-weight polymers can be processed much more easily when the shear forces are lowered, which helps them extrude out of the extruder [54,55]. Other parameters in plasticizer selection are its thermostability and volatility [56].

Microcrystalline cellulose (MCC)

Owing to its excellent plastic behaviour and cohesiveness when wetted, and its capacity to take up, hold and yield water, MCC imparts acceptable shape, size, mechanical resistance and flow properties to pellets. It is therefore the most preferred pelletization aid [18,57]. Different grades of MCC show significant differences in their physical properties, which influence water uptake. Even though MCC grades with high bulk density, lower porosity and good water-retentive capacity produce pellets of equivalent size, these pellets are less spherical and show more shape variation [58,59]. MCC exhibits batch-to-batch variability as it is derived from natural sources. Moreover, research shows that MCC is chemically incompatible with some drugs [2,60]. Another limitation is that MCC prolongs the dissolution time of pellets due to its high cohesive strength [61,62].
Therefore, MCC may be replaced with alternative excipients such as crosspovidone, carrageenan, pectinic acid, cellulose derivatives (hydroxypropylmethyl cellulose, hydroxyethyl cellulose), polyethylene oxide, modified starches, glycerides, chitosan, β-cyclodextrin and sodium alginate [22,63]. Challenges faced with some of the widely used pelletization aids are discussed below.

Cross-linked polyvinylpyrrolidone (crosspovidone)

Crosspovidone is a cross-linked synthetic polymer that acts as a water reservoir. As with MCC, its rigid yet flexible cross-linked structure facilitates the absorption, release and reabsorption of water during wetting, extrusion and spheronization, respectively. However, crosspovidone requires a greater amount of water than MCC [17]. Although it produces pellets of larger size, their particle size range is narrow. The use of crosspovidone with negligible added water removes the need for binder addition. Crosspovidone imparts a shorter dissolution time than MCC owing to its superior disintegration property [64,65].

Carrageenan

Carrageenans are acidic polysaccharides isolated from the cell walls of red seaweeds. Carrageenan is capable of replacing MCC owing to its ability to produce pellets of adequate quality with fast drug release. Carrageenans immobilize more water, impart a good size distribution and result in very fast drug release. However, carrageenans make the pellets highly porous and hence yield pellets of lower tensile strength than MCC [22,23].

Pectin

Pectin is a gel-forming, non-toxic polysaccharide ideal for colon-targeted drug delivery. The quality of pectin-based pellets depends on the concentration and type of additive used in the granulating liquid, and mucoadhesive pellets prepared from pectin show superior properties. Research demonstrates that amidated pectin produced short and nearly spherical pellets when ethanol was used in the granulating liquid. However, these pellets lack mechanical strength and are likely to disintegrate faster, giving the pellets a high dissolution rate in both acidic and basic buffers [66,67,68,69,70].

Processing challenges

The most commonly investigated pelletization techniques are extrusion-spheronization and suspension/solution/powder layering techniques. Hot-melt extrusion is another pelletization technique of increasing importance. The challenges associated with these techniques are discussed below.

Process-related challenges in the extrusion-spheronization technique

Extrusion-spheronization is the most widely used pelletization technique owing to its cost-effectiveness and its ability to produce high-quality pellets. It is a three-step process: 1. wet massing, 2. extrusion and 3. spheronization [71]. Several critical parameters for these three stages have a great influence on pellet characteristics. These parameters include the type and concentration of drug and other excipients (as discussed earlier), extruder type, extrusion pressure and speed, and spheronization speed, pressure and time [16,72]. K. Thoma investigated the effect of different types of extruders on extrusion behaviour and sphere characteristics [73]. It was observed that the extrudates from three different types of extruders differed in their properties, and these differences further affected the pellet size and other physical properties of the pellets. It has also been demonstrated that spheronization speed and time affect the size of the produced pellets [74].
An increase in spheronizer speed resulted in a decrease in pellet size, and the smaller pellets also showed a different dissolution profile [75]. Agrawal et al. reported that pellet shape is highly affected by the spheronization time: the pellets become rounder as the spheronization time is increased up to an optimum level, beyond which very little further improvement in roundness is seen [76]. In addition to spheronization time, pellet shape was also affected by the rotational speed of the friction plate [77]. However, the attrition force imparted by a longer spheronization time at lower speed is greater than that of a shorter spheronization time at higher speed, thus producing more circular pellets and rendering the pellets more flowable [78].

Process-related challenges in the hot-melt extrusion technique

Hot-melt extrusion has certain advantages over conventional pelletization techniques, such as shorter processing time, elimination of solvents and enhanced drug delivery. However, it is a challenging process due to some limitations, including degradation of thermolabile drugs, the requirement for raw materials with high flow properties, limited options for heat-stable polymers and high energy requirements. These limitations increase the overall cost of production [79]. Processing is preferably carried out at temperatures above the melting point of semi-crystalline polymers or the glass transition temperature of amorphous polymers [80,81]. According to Breitkreutz et al., material temperature control is the most critical factor in the spheronization of solid lipid-based extrudates; it can be corrected by reducing the wall jacket temperature and employing an external heat source, an IR lamp. This changed the exposure area of the material to the heat source, making it very small in comparison with the commonly used conventional setup [82].

Process-related challenges in layering techniques

Solution/suspension layering and dry powder layering techniques are used to prepare high-drug-potency and controlled-release pellets. These techniques often pose challenges such as an increase in pellet size, a rough pellet surface due to the larger particle size of the coating materials, and blockage of the nozzle resulting in non-uniform layering [83,84]. The pellet surface becomes porous when an inappropriate type or concentration of binder is used. An increase in binder concentration increases the surface smoothness of the pellets but decreases their potency. Other parameters that affect pellet characteristics are the weight of the core pellets, the solution/suspension/powder application rate, the atomizer type, position and speed, the temperature, the degree of atomization and the air cap [85].

Challenges related to the drying process

Research has shown that the drying process has a significant impact on the mechanical strength, surface properties and drug release profile of the formulated pellets [86]. Although pellets dried in a tray dryer possess greater diametrical and crushing strength, they show less elasticity. The lengthy drying time on a tray dryer enhances the in-vitro drug release of tray-dried pellets; however, it impairs the surface smoothness of the pellets and may even degrade certain drugs [86,87]. On the other hand, the drying time on a fluidized bed dryer is shorter than with tray drying, eliminating the risk of thermal degradation of the drug.
Pellets dried in a fluidized bed dryer are more elastic and possess a smoother surface than pellets dried in a tray dryer. However, these pellets possess less mechanical strength and exhibit slower dissolution rates [88]. The drying temperature also affects pellet size: it has been proposed that pellets begin to shrink with increasing temperature, resulting in a smaller particle size [75].

Improving solubility and dissolution of poorly water-soluble drugs

A 30-fold increase in the dissolution rate of 17-estradiol hemihydrate (10% w/w) compared with the pure drug has been reported using PEG 6000, polyvinylpyrrolidone or a vinylpyrrolidone-vinyl acetate copolymer together with Sucroester® WE 15 or Gelucire® 44/14 as additives [89]. An increase in the solubility of carbamazepine was reported using d-gluconolactone (GNL) in hot-melt extrusion [90]. A solid dispersion prepared with Eudragit EPO by hot-melt extrusion (HME) at a drug:carrier ratio of 4:1 resulted in more than 85% drug release in just 5 min [91]. Hot-melt extrudates of carbamazepine were compared against a simple physical mixture for improvement in solubility and dissolution, using polyethylene glycol 4000 (PEG 4000) as a hydrophilic carrier and low-melting binder. The extrudates, obtained with uniform shape and density, revealed much faster release than the physical mixture [92,93]. Solid dispersions of curcumin with different ratios of Poloxamer 407, prepared by the melt method, were subsequently extruded and spheronized. The resulting pellets showed increased solubilization; notably, the pelletization process itself had no impact on the solubilization of the solid dispersion or on the drug release profile [94].

Sustained/controlled-release and enteric-release dosage forms

Matrix-type enteric or sustained-release pellets can be produced by extrusion-spheronization or hot-melt extrusion, whereas the reservoir type contains a drug core surrounded by a polymer layer [95,96]. Nakamichi and co-workers developed sustained-release pellets of nicardipine hydrochloride and hydroxypropyl methylcellulose acetate succinate using a twin-screw extruder. The position of the screw elements and the barrel temperature were identified as critical factors in obtaining a puffed mass, which exhibits a floating property by virtue of its lowered density. The pellets were retained for a long period in the stomach as discrete floating particles, releasing drug for 24 h [97]. Sustained-release gastro-retentive floating pellets of diltiazem hydrochloride filled into capsules were made by Follonier et al. using ethyl cellulose, cellulose acetate butyrate (CAB), poly(ethylene-co-vinyl acetate) (EVAC) and a polymethacrylate derivative (Eudragit® RSPM). The parameters affecting the release of diltiazem hydrochloride were the polymer type, the drug/polymer ratio and the pellet size. The release rate was optimized by incorporating croscarmellose sodium (Ac-Di-Sol) and sodium starch glycolate (Explotab) into the formulations. The pellets produced exhibited a smooth surface and low porosity through the use of triacetin and diethyl phthalate as plasticizers [98]. Sustained-release tablets were also prepared from pellets made with cheaper excipients and exhibited only marginal differences in performance [99]. Kleinebudde and Reitz (2009) developed sustained-release matrix spheres by a solvent-free spheronization process.
A binary lipid mixture with different amounts of Witocan® 42/44 and Dynasan® 114, with theophylline as the model drug, was used to produce extrudates. Proper control of the jacket temperature and maintenance of a low product temperature helped achieve spherical sustained-release matrix pellets with low porosity, a defined surface area and a narrow particle size distribution [100]. Various wax polymers have been assessed for producing controlled-release dosage forms by hot-melt extrusion. Diclofenac as a model drug was processed with carnauba wax, and it was found that a wax matrix with high mechanical strength could be obtained even at temperatures below the melting point of the wax. The dissolution release was strongly influenced by formulating the granules with different concentrations of hydroxypropyl cellulose, methacrylic acid copolymer (Eudragit L-100) and sodium chloride [101]. Among the most attractive features reported for the preparation of controlled-release dosage forms are low processing temperatures, high kneading and dispersing ability, and low residence time of the material in the extruder [102]. Melt extrusion technology has enabled the development of controlled-release reservoir systems consisting of polyethylene vinyl acetate (EVA) copolymers, successfully leading to the Implanon® and NuvaRing® technologies [53,103]. A 24-h controlled release was achieved by using SLS as a pore-forming material in a blend of glyceryl palmito-stearate, microcrystalline cellulose and sodium alginate along with olanzapine (20:55:05:20% w/w) [103,104]. The study suggests that a small concentration of surfactant in a mixture of lipid and MCC successfully controls the release rate from such delivery devices. Enteric-coated pellets of ketorolac were prepared by incorporating Eudragit/microcrystalline cellulose (Avicel PH 101) using the extrusion/spheronization technique. Release was less than 10% in acidic medium, whereas it was complete within 60-120 min in phosphate buffer (pH 6.8) for the optimized formulation [105]. Nanocrystalline ketoprofen was converted into pellets to modify the release from the drug delivery system. When corn starch was used for making the pellets, the release rate of ketoprofen increased, but drug recovery was problematic; therefore, Cremophor® RH 40 was added during the pelletization process, resulting in sustained release [106]. Nandgude et al. formulated modified-release pellets of apremilast using microcrystalline cellulose (MCC), lactose, TKP and crosspovidone. The addition of 10% Eudragit L100 was able to sustain the release for up to 5 h [107]. Chrono-pharmacological needs were met by modifying the release of montelukast sodium pellets using an ethyl cellulose coating in a Wurster coater [108]. Spherical, extended-release pellets of aspirin were prepared using four types of lipids (adeps solidus, Compritol® 888 ATO, Precirol® ATO 5 and Compritol® HD5 ATO) and their admixtures in different ratios by solvent-free extrusion/spheronization. The pellets met the release requirements of the USP [109].

Rapidly dissolving pellets

Famotidine (FM) has low and variable bioavailability and water solubility; therefore, solid dispersions using two hydrophilic carriers, namely Gelucire 50/13 and Pluronic F-127, were employed to prepare rapidly releasing pellets by extrusion/spheronization. The drug release from the pellets was improved compared with tablets.
Tablets containing solid-dispersion pellets showed total drug release in 30 min, whereas only 30% release was seen from the normal FM formulation after 2 h [110]. Indomethacin, nifedipine, furosemide, ibuprofen, prednisolone and hydrochlorothiazide were extruded with a new co-processed excipient composed of microcrystalline cellulose (MCC), sorbitol, chitosan and Eudragit® E. Extrusion-spheronization increased the stability and solubility of the formulations [111].

Conclusion

From the above review, it is evident that the major challenge during pellet development is choosing the appropriate type and quantity of excipient for preparing pellets of the desired drug. Water content drastically affects pellet properties, and changes in the grade, type and/or quantity of excipient likewise affect them. Different challenges are associated with the different pelletization techniques and instruments. Every technique and instrument has its own pros and cons, which must be identified in order to choose the optimum technique and instrument for obtaining the desired pellets. These challenges can further be addressed by identifying the critical process parameters using the Quality by Experimental Design approach.

Expert opinion

Understanding the quality prospects of a drug delivery system solves the main problem of the development program. Polymer innovations offer multiple drug release profiles, but the lack of extensive clinical studies could result in toxicity in application. Research suggests that even blends of existing polymers with these advanced polymers will compete well and offer better safety. The inclusion of technological advances will certainly improve processing, but it places a financial burden on developers. Product quality and excellence can be achieved through control of the existing processes and equipment. Implementing risk management and quality by design helps improve product performance and produce robust products in the shortest possible time. Research on pelletized dosage forms is mainly limited to oral dosage forms; much research is still possible to achieve better drug solubility, stability and release modification of powders that must be reconstituted prior to parenteral administration.

Author contribution statement

All authors listed have significantly contributed to the development and the writing of this article.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

Data included in article/supplementary material/referenced in article.
Deep Bayesian Multi-Target Learning for Recommender Systems

With the increasing variety of services that e-commerce platforms provide, the criteria for evaluating their success also become increasingly multi-target. This work introduces a multi-target optimization framework with Bayesian modeling of the target events, called Deep Bayesian Multi-Target Learning (DBMTL). In this framework, target events are modeled as forming a Bayesian network, in which directed links are parameterized by hidden layers and learned from training samples. The structure of the Bayesian network is determined by model selection. We applied the framework to Taobao live-streaming recommendation, to simultaneously optimize (and strike a balance among) targets including click-through rate, user stay time in the live room, purchasing behaviors and interactions. Significant improvement has been observed for the proposed method over other MTL frameworks and the non-MTL model. Our practice shows that with an integrated causality structure, the learning of one target can effectively benefit from other targets, creating significant synergy effects that improve all targets. The neural network construction guided by DBMTL fits the general probabilistic model connecting features and multiple targets, making weaker assumptions than the other methods discussed in this paper. This theoretical generality brings practical generalization power over various target distributions, including sparse targets and continuous-value ones.

Introduction

Online multi-media platforms usually provide a rich set of interactions with users. This is especially true of the Taobao live-streaming application. As one of the biggest live product promotion platforms on the internet, Taobao live-streaming not only enables users to watch, comment, like and establish connections with live hosts, but also provides various portals towards add-to-cart and purchasing behaviors (Figure 1). It therefore serves the purposes of both an e-commerce platform and a content production/consumption platform. The criteria for evaluating the success of such a multi-media platform are multi-dimensional, concerning not only click-through rate (CTR) but also many other metrics relevant to user experience, such as average user stay time in live rooms, as well as other links in the transaction chain. With such a multi-dimensional evaluation system, the design of the recommendation system is naturally multi-target, taking many user actions other than the simple click-through into the labeled data system. Multi-target learning has been a very useful line of research in the recommendation system literature. An especially important case is the simultaneous pursuit of click-through rate (CTR) and conversion rate (CVR) for e-commerce and advertising platforms [Ma et al., 2018; Ni et al., 2018; Chapelle et al., 2015]. In general, the motivations for multi-target learning fall into two categories: 1) balancing various performance criteria, especially conflicting ones; 2) incorporating auxiliary target information to improve the prediction precision of the primary target. Balancing various performance criteria is usually a business requirement. As an example, for Taobao live-streaming the business pursuit includes not only user attention (CTR) but also user experience (user stay time in the live room), social connection establishment (follow) and conversion to transactions.
The balance between targets is usually achieved by applying weights during training and inference. As a work on learning architectures and their characteristics, the discussion in this paper is more relevant to the second perspective: the potential of incorporating auxiliary target information to improve the primary target. [Ruder, 2017] gives a comprehensive discussion of this paradigm. An important observation is that instead of being independent or mutually inhibitive, in many real applications multiple targets are actually highly correlated and possess significant potential for synergy. In such cases, the gradient descent directions led by different targets can actually guide each other towards a globally better solution, rather than wrestle with each other towards a mediocre compromise. Model design along this line tries to find a model that better expresses the underlying correlation among targets. This can be implemented in deep networks by hard parameter sharing [Caruna, 1993], soft parameter sharing [Evgeniou et al., 2005; Yang and Hospedales, 2016], and cross-stitch networks [Misra et al., 2016]. From a Bayesian network perspective, the shared layers generate common parents of the targets, and the unshared parameters generate distinct parents. In other words, targets can have common or distinct causes, but they are not causes of other targets. Within this big framework, the difference in performance lies in how well the deep structure matches the real depth of correlation, and how the allocation of shared and distinct parameters fits the specific reality of the application. In practice, implementing common and distinct priors through shared and distinct parameters produces satisfactory results, provided that the parameters and the sharing structure are properly tuned. From a more general perspective, however, this formulation does not capture the fact that targets can have direct causal effects on each other. An easy case is recognized in the Taobao live-streaming application, where a user cannot have any live-room actions or purchasing behaviors until he clicks and enters the live room. Clear-cut causal relationships like this can be hard-coded into the model to regularize its behavior, which is done in ESMM [Ma et al., 2018]. But less obvious relationships are not easy to hard-code. To handle the direct causal relationships across targets that need to be learned, we propose learning a Bayesian network across target events from data. A lightweight and tempting solution is to estimate the Bayesian network with target nodes only, cutting off the feature side, and then mount back an arbitrarily complex feature-side deep network to fine-tune. The problem is that when the two networks are estimated separately, the "explaining-away" problem arises [Koller et al., 2009; Jensen and others, 1996]. In light of the ever more complex deep network architectures on the feature side in today's recommendation systems, and the enormous effect that complex feature information can exert on target relationships, it is unlikely that this separated estimation workflow would produce good results. Causal relationships between targets can be obscure without evidence from features; for example, the joint distribution of user stay time in the live room and purchasing behavior may well depend on traits of the live stream and the user themselves.
These are the inspirations and considerations leading to our proposal of an integrated Bayesian framework called Deep Bayesian Multi-Target Learning, or DBMTL. In summary, DBMTL is an integrated feed-forward network modeling the feature-target and target-target relationships simultaneously. As will be shown in Section 2, it imposes weaker probabilistic assumptions than previous models. We model the causal relationships between multiple targets through direct feed-forward MLPs between target nodes, adjusting the direction of each feed-forward link through evaluation and model selection. By letting the model learn cross-inference parameters and directions automatically from data, we avoid making wrong prior assumptions about the causal relationships between target events.

Deep Bayesian Multi-Target Learning

In this section we introduce the Deep Bayesian Multi-Target Learning framework, or DBMTL. We first formulate the single/multi-target prediction problem in its most abstract probabilistic form. Then we discuss the various assumptions adopted by previous models. Finally, we describe DBMTL and how it weakens these assumptions. It will then be self-explanatory how DBMTL fits into the spectrum of probabilistic formulations.

Probabilistic Formulation for Multi-Target Learning

In probabilistic form, learning the CTR prediction model in recommendation systems can be formulated as fitting the conditional probability of the click target to the training data. Let x denote the features of an impression and l denote the label of whether the impression has been clicked or not. The learning process tries to fit a model H so as to maximize the probability P(l|x, H). In a feed-forward network setup, H represents the parameters of the multilayer perceptron mapping the feature end to the predicted target end. Without regularization, this is a maximum likelihood estimation. A regularization on H usually corresponds to a certain prior assumption on H, making it an MAP estimation, where we instead maximize P(l, H|x) = P(l|x, H) · P(H). If we have two targets to predict, denoted l and m respectively, the formulation becomes P(l, m|x, H). As an example, in the Taobao live-streaming application, l can represent the binary variable denoting whether a user has clicked and entered a live room, and m can represent the binary variable denoting whether the user has clicked the commodity list button (the commodity list button directs the user to the list of commodities being introduced by the live host; clicking it implies the user's intention to buy something). If more than two targets are concerned, the objective becomes P(l1, l2, l3, ...|x, H).

Separation of Target Variables

When we have a single binary (or multi-class) target, the last stage of prediction is usually modeled as a logistic regression (or softmax regression) problem. When there are multiple binary targets, we can model them together as a multi-class classification problem in the Cartesian product of the individual target spaces. But when the number of targets is considerable, the number of categories in the Cartesian space blows up exponentially (e.g., six binary targets already induce 2^6 = 64 joint classes). Each instance of the combined target values can then become very sparse in the data, quickly deteriorating prediction performance. So we usually avoid this exponential space expansion by separating the joint distribution into smaller joint or individual distributions, under certain assumptions about the probabilistic model.
For example, if we assume the conditional independence of the two target events, we can write

P(l, m|x, H) = P(l|x, H) · P(m|x, H).    (1)

The loss then splits into two distinct terms, and the dimensionality curse of the label space vanishes. Among deep network models, this formulation corresponds to network constructions using hard-shared layers [Caruna, 1993], which is termed vanilla MTL throughout this paper. Abstractly, the network topology assumes the pattern illustrated in Figure 2. The model can take various forms of networks for its hidden layers, but in the final layer it spurs out independent feed-forward branches towards the target heads. Most multi-target models in the literature are of this flavor [Caruna, 1993; Evgeniou et al., 2005; Yang and Hospedales, 2016; Misra et al., 2016]. One important observation of our work is that the strong independence assumption in Eq. (1) is not necessary. Using the Bayesian formula, we can instead express the likelihood in Eq. (1) as

P(l, m|x, H) = P(l|x, H) · P(m|l, x, H).    (2)

This equation holds without any assumption. To implement it we still need to assume that P(m|l, x, H) can be effectively learned, but in general the formulation of Eq. (2) makes a much weaker assumption. The corresponding network structure is illustrated in Figure 3. This is the formulation that we term DBMTL. We note that the ESMM model [Ma et al., 2018] can be regarded as an instance of this formulation, where m and l must be binary and P(m = 1|l, x, H) is further separated (with an explicit assumption about the causal relationship between l and m) into the form f(x, H) · P(l = 1|x, H), where f(x, H) = P(m′ = 1|x, H) is an inference function for a virtual binary event m′. In our construction, we directly model P(m|l, x, H) as another level of MLP and learn the parameters automatically from data. Compared with ESMM, this theoretically relaxes the assumption on P(m|l, x, H) to the entire function space expressible by the MLP. Making the cross-target relationship learnable is all the more important when the causal directions between target events are unclear. In such scenarios both directions (P(l|x, H) · P(m|l, x, H) versus P(m|x, H) · P(l|m, x, H)) can be trained and tested, and model selection can then determine which is better. It is not a matter of which construction is "correct" (from the Bayesian perspective, both are correct); it is a matter of which one is more "learnable" from data. All discussions in this section naturally generalize to scenarios with more than two targets, i.e. P(l1, l2, l3, ...|x, H). In the Taobao live-streaming application, the target events include user click, live-room purchasing behaviors, a user's time of stay in a session, interactions in the live room, establishing follow relationships, and many others. Most of these target events are binary, while a subtlety arises when the user's time of stay, a real-valued variable, enters the target set. In that case p(l1, l2, l3, ...|x, H) should be understood as a probability density rather than a probability, while all derivations and conclusions in this section still hold.

Choice of Network Structures

When there are many targets to be predicted and the causal relationships are obscure, it is generally infeasible to iterate over all setups and compare their results. The number of relationships between targets is O(n^2), and the number of all possible Bayesian network setups is 2^O(n^2). Simple techniques for Bayesian network structure learning can be used to reduce the space of exploration [Koller et al., 2009].
One way is to build up the Bayesian network incrementally and greedily, i.e. adding target nodes one at a time, keeping all existing edges unchanged, and only iterating over the links relevant to the new node to determine the best directions. This technique reduces the complexity to O(n^2). Another way is simply to provide an initial network structure based on intuition and prior knowledge about the causal relationships, then apply local variations within a certain budget of iterations, keeping good variations and dropping bad ones. Some principles can guide us towards a relatively good initial design. Directions with a natural causal relationship are usually better than directions with a natural anti-causal relationship. And it is generally better to use a more evenly distributed target to predict a less evenly distributed target than the reverse. For example, the "Follow" button click rate is less evenly distributed than the "Goods Bag" button click rate (follow clicks are relatively rare, while the fraction of live-room users who click the goods bag button is relatively closer to 1/2), so we can expect a worse result using the follow event as the cause than using the goods bag click as the cause, which is confirmed in our experiments. These principles are made clearer in Section 3.

Implementation of DBMTL and Experiments

In this section we give a detailed description of our implementation of the DBMTL network. We then run experiments to demonstrate traits of the model from several perspectives.

The DBMTL Network Structure

Our implemented DBMTL framework (Figure 4) includes an input layer, a shared embedding layer, a shared layer, specific layers and a Bayesian layer. The shared embedding layer is a shared lookup table, in which shared embedding features are learned across the different targets. The shared layer and the specific layers are multilayer perceptrons (MLPs), which capture the common features and the target-specific features, respectively. The Bayesian layer is the most important part of DBMTL. For the instance shown in Figure 4, it implements the Bayesian formula

P(t1, t2, t3|x, H) = P(t1|x, H) · P(t2|t1, x, H) · P(t3|t1, t2, x, H).

The corresponding negative log-likelihood loss is

L(x, H) = −log(P(t1, t2, t3|x, H)) = −(log(P(t1|x, H)) + log(P(t2|t1, x, H)) + log(P(t3|t1, t2, x, H))).

For practical reasons, different weights are applied to each term to control the relative importance of the various targets, transforming the loss into the form

L(x, H) = −(w1 · log f1(x, H) + w2 · log f2(t1, x, H) + w3 · log f3(t1, t2, x, H)),

where f1(x, H) = P(t1|x, H), f2(t1, x, H) = P(t2|t1, x, H) and f3(t1, t2, x, H) = P(t3|t1, t2, x, H). The functions f1, f2, f3 in the Bayesian layer are implemented as fully connected perceptrons or MLPs to learn the hidden relationships among targets. Each concatenates the embeddings of its inputs as the MLP input and outputs an embedding of the output target. Each target embedding then goes through a final linear-logistic layer to generate the final probability of the target (a minimal sketch of this layer and of the weighted loss heads is given after the experimental setup below).

Experimental Setup

In the Taobao live-streaming application, users can interact in multiple ways, and there are correspondingly many dimensions along which to evaluate its success. We give a term for each below:

• Click Through Rate (CTR) - the percentage of impressions that result in a click and entrance into the live room.
• Goodslist Conversion Rate (CGR) - in a live room there is a "Goods Bag" button which users can click to view the list of goods being introduced (a user can then select a good of interest and be forwarded to the purchase page). The Goodslist Conversion Rate is defined as the percentage of live-room users who have clicked the goods bag button.

• Follow Conversion Rate (CFR) - a user can follow a live host so that the user is notified when the host starts streaming. The Follow Conversion Rate is defined as the percentage of live-room users who have followed the host.

• Comment Conversion Rate (CCR) - a user can send real-time comments in a live room. The Comment Conversion Rate is defined as the percentage of live-room users who have commented.

• Like Conversion Rate (CLR) - a user can "like" the live host in a live room. The Like Conversion Rate is defined as the percentage of live-room users who have clicked the "Like" button.

• Average Stay Time (AST) - the time a user spends before leaving the live room is an important indication of the user's interest and satisfaction with the content of the live stream. The Average Stay Time is defined as the average time users spend in live rooms.

According to these evaluation dimensions, loss heads can easily be constructed for each target. Since CTR, CGR, CFR, CCR and CLR correspond to binary outputs, they are associated with logistic loss heads. AST is a real-value regression problem, and mean square error (MSE) is adopted as its loss function. In consideration of the scale of our real data, we use the logarithm of stay time, instead of its original value, as the AST label.

The dataset for our experiments comes from online logs. Within a certain time window, data from the first 15 days are taken as training data, and samples from the next day are used to test performance. Over a hundred features are extracted from each data sample, including many large-scale sparse id features. For the label part, since all live-room interactions depend on the user entering the room in the first place, the CGR, CFR, CCR, CLR and AST labels are only turned "on" (assigning the value 1 for binaries, and the value as-is for AST) when the CTR label is "on"; otherwise an "off" value (0 for binaries, 0.0 for AST) is assigned. In all experiments demonstrated below, the same feature extraction workflow is used for every method tested.

General Performance on Multiple Targets

In the first experiment, prediction performance on the 6 targets is evaluated and compared among various learning methods. In this experiment, the structure among targets is designed with the CTR target pointing to the others, which follows the natural-causality principle, and the weights of the CTR, CGR, CFR, CCR, CLR and AST losses are set to [0.7, 0.05, 0.05, 0.0, 0.0, 0.1]. Note that this weight setting on the one hand reflects the business view of the importance of each product target, e.g. user stay time is a relatively more important auxiliary target than goods bag click or follow; on the other hand, the CCR and CLR weights are intentionally left at 0.0 to test the generalization power of the model towards untrained targets (to be made clearer in the analysis of results). The methods tested include the single-task Wide&Deep network [Cheng et al., 2016], vanilla MTL [Caruna, 1993], ESMM [Ma et al., 2018] and our DBMTL framework. AUC for each binary target and MSE for the continuous target are evaluated, as displayed in Table 1.
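To make the construction above concrete, here is a minimal sketch, in PyTorch (a framework choice the paper does not specify), of a Bayesian layer for one binary and one continuous target, together with the weighted loss heads: a logistic loss for the binary target and MSE on log stay time for AST. All module names, layer sizes and the log1p transform are illustrative assumptions, not details from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLayer(nn.Module):
        # Sketch for targets t1 (CTR, binary) and t2 (AST, real-valued):
        # P(t1, t2 | x, H) = P(t1 | x, H) * P(t2 | t1, x, H)
        def __init__(self, dim, emb=16):
            super().__init__()
            self.f1 = nn.Sequential(nn.Linear(dim, emb), nn.ReLU())        # embedding for t1
            self.f2 = nn.Sequential(nn.Linear(dim + emb, emb), nn.ReLU())  # conditioned on t1's embedding
            self.out1 = nn.Linear(emb, 1)  # final linear-logistic head for t1
            self.out2 = nn.Linear(emb, 1)  # final regression head for log stay time

        def forward(self, h):
            e1 = self.f1(h)
            e2 = self.f2(torch.cat([h, e1], dim=-1))  # concatenate input embeddings, as in Section 3.1
            return self.out1(e1), self.out2(e2)

    def dbmtl_loss(ctr_logit, ast_pred, ctr_label, stay_seconds, w=(0.7, 0.1)):
        # Weighted negative log-likelihood of the factorized joint distribution:
        # logistic loss for the binary target, MSE on log stay time for AST.
        l1 = F.binary_cross_entropy_with_logits(ctr_logit, ctr_label)
        l2 = F.mse_loss(ast_pred, torch.log1p(stay_seconds))  # log-transform is an assumption
        return w[0] * l1 + w[1] * l2

Extending the sketch to the paper's six targets amounts to adding one conditional MLP per target, each consuming the embeddings of its parent targets in the chosen Bayesian structure.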
From the results we can observe the following. 1) Multi-target learning models in general achieve better performance than the single-target model. Even though the single-target model is specifically tuned to optimize the CTR target, the multi-target models excel on this primary target. We believe this is good evidence in support of adding auxiliary targets to improve the main target, even when performance on the auxiliary targets is not actually of interest [Ruder, 2017]. The concept manifests especially well in our application, and we believe this is because the auxiliary behaviors in Taobao live-streaming (goods bag click, follow, comment, like and stay time in the live room) are very strong indicators of the quality of the live stream, which in turn is a valuable information source for predicting the CTR target. 2) Among the three multi-target learning models, DBMTL has significantly better performance on all 6 targets. This supports the analysis in Section 2 that, with Bayesian network modeling across targets, inter-target causal relationships can be better captured, and the prediction performance benefits from the weaker statistical assumption. The effect manifests well in our scenario possibly because the targets concerned have intricate black-box causal relationships that are better expressed by an MLP. 3) Even though we intentionally left the CCR and CLR targets untrained, ESMM and DBMTL learn significant information about these two targets (vanilla MTL does not have this ability, since it does not model cross-target relationships). This on the one hand reinforces the merit of using auxiliary targets to enhance the targets of interest; on the other hand, it further implies that inter-target Bayesian modeling can indeed benefit learning.

Performance on Target Pairs

In this experiment, we take a closer look at the performance of the MTL methods on auxiliary targets with different traits. Specifically, we select CGR (a binary target with normal sparsity; a large percentage of users in the live room will click the goods bag button), CFR (a binary target with extensive sparsity; a small percentage of users in the live room will click the follow button), and AST (a real-value target). Each auxiliary target is combined with CTR to form an optimization pair. Performance on each optimization pair is listed in Tables 2, 3 and 4, respectively. The observations from this experiment are: 1) DBMTL outperforms the other methods in all three experiments, demonstrating its generality across target types. 2) ESMM and DBMTL are both more successful in modeling the auxiliary target, while the improvement is not as significant in the CTR-CGR (non-sparse) case as in the CTR-CFR (sparse) case. Referring to the analysis of [Ma et al., 2018], this may be because the inter-connection of targets brings particular gains when learning sparse targets, since a sparse target can take advantage of information from the non-sparse primary target's data. 3) The improvement of DBMTL over ESMM is most significant in the CTR-AST (real-value) case, showing the generalization power of DBMTL when dealing with continuous-value targets.

Varying Bayesian Structures

The Bayesian network structure, i.e. the directions of the connections in the acyclic Bayesian network, can have a significant influence on performance. To demonstrate the design principles proposed in Section 2.3, we select a group of three targets and three structures, each assuming the structure of one target event pointing towards the other two target events.
Comparisons of their performances are shown in Table 5. Consistent with our intuition, the natural causal direction CTR→others (natural in the sense that users can only exhibit the other behaviors once they have clicked and entered the live room) yields the best score on all criteria. We believe the reason is that "correct" causal relationships can, in general, be modeled and learned more efficiently. The CGR→others direction scores better than the CFR→others direction on all criteria, making evident the other design principle: we should favor evenly distributed targets pointing towards unevenly distributed targets rather than the reverse (the ratio of positive to negative samples for the CGR target is much more evenly distributed than for the CFR target). The rationale is that evenly distributed targets contain more information (have a higher entropy) than unevenly distributed targets.

Performance in the Online Taobao Live-streaming Environment

The online deployment of DBMTL has brought significant improvement to the Taobao live-streaming application. DBMTL improves the online CTR, CGR, CFR, CLR, CCR and AST by 4.41%, 3.06%, 2.91%, 10.23%, 5.95% and 4.99%, respectively, relative to vanilla MTL (ESMM performs worse than vanilla MTL online). The improvement was measured in an online A/B test over 2 weeks. In the online deployment, the weights of CTR, CGR, CFR, CCR, CLR and AST are set to [0.7, 0.05, 0.05, 0.05, 0.05, 0.1] for training as well as prediction. Note that performance and appropriate parameters may well vary with the specific application.

Conclusions

For the multi-target learning problem, we propose the DBMTL formulation, modeling the causal relationships among targets explicitly using a Bayesian network structure across target heads. DBMTL outperforms single-target WDL and other MTL methods on the Taobao live-streaming dataset. The success of DBMTL lies in its integral way of modeling the causal relationships among targets and features, weakening many of the assumptions that other deep MTL structures adopt for the underlying probability model. Since the DBMTL framework makes no specific assumption about target distributions and types, it readily generalizes to various distributions and value types. We also propose two principles for designing the Bayesian structure: respecting clear-cut natural causalities, and favoring higher-entropy targets pointing to lower-entropy targets. Apart from the business merits the targets themselves hold, multi-target learning, in its use of auxiliary targets to enhance the primary target, can also be regarded as a step towards blurring the line between features and labels, i.e. towards more generative rather than discriminative models. However, efficient learning of the Bayesian structure is still a challenging task when the number of nodes becomes high, which is a major obstacle on this "more generative" path.
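To make the greedy structure search of Section 2.3 concrete, the sketch below adds target nodes one at a time and, for each link touching the new node, keeps whichever direction scores better on a validation set. The evaluate callback (train the candidate structure, return a validation score such as AUC) is a hypothetical placeholder; the paper does not prescribe an implementation.

    # Minimal sketch of greedy incremental Bayesian-structure search (Section 2.3).
    # `evaluate(edges)` is an assumed user-supplied callback that trains a DBMTL
    # model with the given directed edges and returns a validation score.
    def greedy_structure(targets, evaluate):
        edges = []                      # list of (cause, effect) pairs
        for i, new in enumerate(targets):
            for old in targets[:i]:     # only links touching the new node are tried;
                fwd = edges + [(old, new)]   # existing edges stay fixed
                bwd = edges + [(new, old)]
                edges = fwd if evaluate(fwd) >= evaluate(bwd) else bwd
        return edges

    # Example with three of the paper's targets; scores would come from real training runs.
    # structure = greedy_structure(["ctr", "cgr", "cfr"], evaluate=my_validation_score)

Each pair of targets is evaluated a constant number of times, so the search costs O(n^2) training runs, matching the complexity claimed in Section 2.3.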
2019-02-25T09:11:53.000Z
2019-02-25T00:00:00.000
{ "year": 2019, "sha1": "32cbb3ccfd308fdc0340864356befd4f45aed7a2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "32cbb3ccfd308fdc0340864356befd4f45aed7a2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
267638159
pes2o/s2orc
v3-fos-license
Exercise-Induced miR-210 Promotes Cardiomyocyte Proliferation and Survival and Mediates Exercise-Induced Cardiac Protection against Ischemia/Reperfusion Injury
Exercise can stimulate physiological cardiac growth and provide a cardioprotective effect against ischemia/reperfusion (I/R) injury. MiR-210 is regulated during the adaptation process induced by exercise; however, its impact on exercise-induced physiological cardiac growth and its contribution to exercise-driven cardioprotection remain unclear. We investigated the role and mechanism of miR-210 in exercise-induced physiological cardiac growth and explored whether miR-210 contributes to exercise-induced protection in alleviating I/R injury. Here, we first observed that regular swimming exercise can markedly increase miR-210 levels in the heart and blood samples of rats and mice. Circulating miR-210 levels were also elevated after a programmed cardiac rehabilitation in patients diagnosed with coronary heart disease. In an 8-week swimming model in wild-type (WT) and miR-210 knockout (KO) rats, we demonstrated that miR-210 was not required for exercise-induced cardiac hypertrophy but did influence cardiomyocyte proliferative activity. In neonatal rat cardiomyocytes, miR-210 promoted cell proliferation and suppressed apoptosis while not altering cell size. Additionally, miR-210 promoted cardiomyocyte proliferation and survival in human embryonic stem cell-derived cardiomyocytes (hESC-CMs) and the AC16 cell line, indicating its functional roles in human cardiomyocytes. We further identified miR-210 target genes, cyclin-dependent kinase 10 (CDK10) and ephrin-A3 (EFNA3), that regulate cardiomyocyte proliferation and apoptosis. Finally, miR-210 KO and WT rats were subjected to swimming exercise followed by I/R injury. We demonstrated that miR-210 crucially contributed to exercise-driven cardioprotection against I/R injury. In summary, this study elucidates the role of miR-210, an exercise-responsive miRNA, in promoting the proliferative activity of cardiomyocytes during physiological cardiac growth. Furthermore, miR-210 plays an essential role in mediating the protective effects of exercise against cardiac I/R injury. Our findings suggest exercise as a potent nonpharmaceutical intervention for inducing miR-210, which can alleviate I/R injury and promote cardioprotection.
Introduction
Cardiovascular mortality is predominantly attributed to coronary heart diseases, including myocardial infarction [1]. Restoring coronary artery blood flow is the optimal strategy to limit infarct size, thereby preserving cardiac function and improving prognosis [2]. However, the cardiac ischemia/reperfusion (I/R) process can provoke additional myocardial damage after the primary ischemic injury [3][4][5]. Therefore, reducing I/R injury while treating myocardial infarction is key to preventing further myocardial damage and heart failure.
Exercise is regarded as a cost-effective strategy to boost cardiovascular health and is often recommended as an important component of cardiac rehabilitation [6][7][8]. The cardiac protection conferred by exercise is attributed to several mechanisms [9], among which are molecular targets identified in physiological cardiac growth that might potentially be used to prevent myocardial injury [10]. Exercise stimulates physiological cardiac growth, characterized both by cardiomyocyte enlargement, referred to as cardiomyocyte physiological hypertrophy, and by a surge in cardiomyocyte proliferative activity [11]. Mechanistically, the insulin-like growth factor 1 (IGF1)/phosphatidylinositol 3-kinase (PI3K)/AKT pathway and the regulation of the C/EBPβ transcription factor are pivotal in mediating physiological growth of the heart during exercise and can protect against pathological cardiac remodeling [12][13][14]. Notably, a microRNA (miR-222), a long noncoding RNA (CPhar), and a circular RNA (circUtrn) have been revealed to regulate exercise-induced cardiac growth and exhibit cardioprotective effects [15][16][17]. With regard to cardiomyocytes, enhanced proliferation and apoptosis-reducing effects are among the central mechanisms of exercise-induced myocardial protection [10]. Therefore, investigating the molecules that regulate exercise-induced cardiac growth is a promising way to develop new methods to prevent cardiac I/R injury.
MiR-210, also known as miR-210-3p, is a protective miRNA that protects against cardiac ischemic diseases [18]. MiR-210 is up-regulated upon hypoxic stress, and increasing miR-210 diminishes stem cell apoptosis and promotes cardiomyocyte survival [19][20][21]. MiR-210 can facilitate cardiac repair by augmenting cardiomyocyte survival as well as angiogenesis and mitochondrial metabolism in animals [22][23][24]. Interestingly, miR-210 has been revealed to be regulated by exercise [25,26]. This study investigated the function and mechanism of miR-210 in exercise-induced physiological cardiac growth and explored its involvement in exercise-driven cardiac protection against I/R injury.
MiR-210 expression was determined in animal exercise models and in blood samples from patients diagnosed with coronary heart disease before and after cardiac rehabilitation. Utilizing miR-210 knockout (KO) rats, we established a swimming exercise model to investigate the effect of miR-210 on exercise-induced physiological cardiac growth. We then studied the functional roles of miR-210 and its potential downstream targets in primary neonatal rat cardiomyocytes (NRCMs), the human AC16 cell line, and human embryonic stem cell-derived cardiomyocytes (hESC-CMs). Finally, the potential contribution of miR-210 to exercise-driven cardiac protection was evaluated in exercised miR-210 KO rats followed by cardiac I/R injury. Our research may offer a new mechanistic understanding of physiological cardiac adaptation during exercise and unravel the essential role of miR-210 in mediating exercise's cardioprotective effect.
Exercise induces miR-210 expression levels in both humans and experimental models
Murine models of swimming exercise were established, leading to physiological cardiac growth [15,27]. After a regularly programmed swimming regimen, miR-210 was markedly up-regulated in the murine heart tissues (Fig. 1A and B). Meanwhile, murine serum samples were collected after the swimming exercise regimen, showing that circulating miR-210 levels were also elevated upon exercise (Fig. 1C and D).
To find out whether miR-210 remained increased after the end of exercise, we also collected serum samples from mice 8 days after they had finished the swimming program; miR-210 was still increased in the swimming group (Fig. 1E). Interestingly, miR-210 was also increased in exercised mice that subsequently underwent cardiac I/R surgery (Fig. 1F). We further examined miR-210 levels in human serum collected from patients diagnosed with coronary heart disease after a programmed cardiac rehabilitation with targeted intensity [28]. Our results showed that miR-210 could also be induced in humans after an aerobic exercise-based rehabilitation program (Fig. 1G). Consistently, these results indicate that regular exercise can induce miR-210 expression levels in both humans and experimental models.
MiR-210 is not integral for cardiac hypertrophy but influences cardiomyocyte proliferative activity during cardiac adaptation upon exercise
Cardiac growth as a result of exercise can encompass cardiac hypertrophy and increased cardiomyocyte proliferative activity. To clarify the function of miR-210 in this process, miR-210 KO rats underwent 8-week swimming exercise alongside WT controls (Fig. 2A). As demonstrated in Fig. 2B, following exercise, miR-210 was markedly up-regulated in WT rat hearts. Upon assessing heart weight (HW), cardiac hypertrophy, and cardiomyocyte proliferative activity in rats, we observed that swimming exercise markedly increased HW in both WT and miR-210 KO rats; however, the HW increase after exercise did not show discernible differences between the two genotypes (Fig. 2C). Interestingly, swimming exercise induced an equal increase in cardiomyocyte size in both WT and miR-210 KO rats (Fig. 2D), but the exercise-induced cardiomyocyte proliferative activity observed in WT rats was absent in miR-210 KO rats (Fig. 2E). Furthermore, the atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP) expression levels remained unaltered in WT or miR-210 KO exercised rat hearts, thus ruling out pathological hypertrophy in our experimental model (Fig. 2F). Collectively, these data demonstrate that miR-210 deficiency impedes exercise-induced cardiomyocyte proliferative activity but does not dampen exercise-induced cardiac hypertrophy.
MiR-210 induces cardiomyocyte proliferation without affecting size
Following the in vivo study of miR-210's function in the exercised model, we sought to further determine its influence on cardiomyocyte size and proliferation in vitro. Primary neonatal cardiomyocytes were effectively transfected with mimic/inhibitor targeting miR-210 (Fig. 3A). Upon performing immunofluorescent …
[Fig. 2 legend, statistics: robust two-way ANOVA followed by post hoc pairwiseMedianTest was performed for (B) and (C) (HW/BW ratio); two-way ANOVA followed by Tukey post hoc test was performed for (C) (HW, BW, and HW/TL ratio) to (F). Data are mean ± SD. *P < 0.05; **P < 0.01; ***P < 0.001.]
MiR-210 mitigates OGD/R-induced cardiomyocyte apoptosis
Given that molecules regulated by exercise may exert a myocardial protection effect, we next determined whether miR-210 could regulate apoptosis in neonatal cardiomyocytes under the stress of oxygen glucose deprivation/reperfusion (OGD/R). It has been reported that miR-210 responds to hypoxic stress, and our data also showed that OGD/R stress induced miR-210 expression in both primary NRCM (Fig. 4A) and the AC16 cell line (Fig. S1D).
Immunofluorescent staining for TUNEL (terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate nick end labeling)/α-actinin and Western blot analysis further revealed that miR-210 overexpression attenuated OGD/R-induced cardiomyocyte apoptosis, while miR-210 inhibition enhanced apoptosis (Fig. 4B and C and Fig. S1E). Thus, miR-210 mitigates apoptosis in cardiomyocytes under OGD/R conditions.
CDK10 is targeted by miR-210 and is involved in cardiomyocyte proliferation
To further investigate the downstream targets of miR-210, we performed bioinformatic analysis using miRTarBase and miRWalk, and screened potential target genes that play functional roles in cell proliferation and survival. These included cyclin-dependent kinase 10 (CDK10) and ephrin-A3 (EFNA3). Algorithm analysis predicted CDK10, a cell proliferation regulator, as a downstream target of miR-210 [29,30]; however, its involvement in miR-210's regulatory effect in cardiomyocytes remained largely unknown. Here, we demonstrated that overexpressing miR-210 down-regulated CDK10, whereas inhibiting miR-210 up-regulated CDK10 (Fig. 5A). The negative regulation of CDK10 expression by miR-210 was similarly found in the AC16 cell line (Fig. S2A and C). Meanwhile, a luciferase reporter assay indicated that transfection of plasmids carrying the 3′ untranslated region (UTR) of CDK10 (predicted to bind miR-210) together with miR-210 mimic significantly reduced luciferase activity, while this was not observed after 3′UTR mutation (Fig. 5B), implying a direct interaction between miR-210 and CDK10. Further investigation was conducted through function-rescue experiments in NRCM to ascertain whether CDK10 was involved in miR-210-induced cardioprotection. As determined by EdU labeling, our data demonstrated that miR-210 inhibitor significantly reduced cardiomyocyte proliferation, which was, however, attenuated in NRCM cotransfected with CDK10 small interfering RNA (siRNA) (Fig. 5C). Notably, knockdown of CDK10 did not affect OGD/R-induced cardiomyocyte apoptosis regardless of miR-210 inhibition (Fig. 5D). Finally, we found that CDK10 was up-regulated in heart tissues from miR-210 KO rats (Fig. 5E). Thus, miR-210 also negatively regulated CDK10 in vivo. These findings suggest that CDK10 is targeted by miR-210 in regulating cardiomyocyte proliferation, but it does not regulate cardiomyocyte apoptosis.
MiR-210 targets EFNA3, regulating both cardiomyocyte proliferation and apoptosis
Previous studies have reported that EFNA3 is targeted by miR-210 in multiple cell types [31,32]. However, whether miR-210 could regulate EFNA3 in cardiomyocyte proliferation and/or apoptosis remained unanswered. We initially determined EFNA3 expression in primary NRCM and the AC16 cell line with miR-210 overexpression or inhibition. We demonstrated that EFNA3 was also negatively regulated by miR-210 at the level of cardiomyocytes (Fig. 6A and Fig. S2B and D). Luciferase reporter assay showed that the 3′UTR sequence of EFNA3 was directly targeted by miR-210 (Fig. 6B). We then performed function-rescue experiments by transfecting NRCM with miR-210 inhibitor and/or EFNA3 siRNA. We found that knockdown of EFNA3 attenuated the proliferation-reducing effect of miR-210 inhibitor (Fig. 6C). Knockdown of EFNA3 also attenuated the effect of miR-210 inhibitor in aggravating cardiomyocyte apoptosis under OGD/R stress (Fig. 6D). Additionally, EFNA3 was detected as being up-regulated in the heart tissues of miR-210-deficient rats in vivo (Fig. 6E).
Our findings thus demonstrate that miR-210 targets EFNA3, through which it regulates both cardiomyocyte proliferation and apoptosis.
MiR-210 promotes proliferation and survival of hESC-CMs
The functional role of miR-210 was further examined in hESC-CM through transfection with mimic or inhibitor targeting miR-210 (Fig. 7A). Functionally, miR-210 mimic markedly increased proliferation; conversely, miR-210 inhibitor reduced proliferation in hESC-CM (Fig. 7B). Increasing miR-210 also inhibited OGD/R-induced apoptosis of human cardiomyocytes (Fig. 7C). We further demonstrated that CDK10 and EFNA3 were negatively regulated by miR-210 in hESC-CM (Fig. 7D). Thus, miR-210 is sufficient to promote the proliferation and survival of human cardiomyocytes, underpinning its potential as an interventional target for clinical treatment.
MiR-210 mediates exercise-induced cardiac protection against I/R injury
Given the results identifying miR-210 as a significant mediator of exercise-induced cardiomyocyte proliferation and its anti-apoptotic effects in cardiomyocytes, we next elucidated its involvement in exercise-driven cardiac protection. We subjected miR-210 KO and WT rats to 8-week swimming exercise, followed by I/R surgery, with the conceptual experimental model shown in Fig. 8A. Our data demonstrated that in I/R WT rats, swimming exercise induced substantial miR-210 expression in the heart; however, miR-210 KO rats did not exhibit the same miR-210 up-regulation after exercise (Fig. 8B). Notably, exercise significantly decreased the infarction area in WT rats compared to sedentary controls, while miR-210 deficiency resulted in a loss of this exercise-induced protective effect (Fig. 8C). TUNEL staining further indicated that swimming exercise reduced myocardial apoptosis in WT rats upon I/R injury, while this effect was attenuated in miR-210 KO rats (Fig. 8D). Additionally, swimming exercise led to an increase in Ki67-positive cardiomyocytes in WT rats after I/R injury, while miR-210 deficiency significantly abolished the exercise-induced cardiomyocyte proliferation (Fig. 8E). Meanwhile, we observed a significant decrease in both CDK10 and EFNA3, the target genes of miR-210, at the protein level in the exercised group versus the sedentary group in WT rats; this reduction was reversed by miR-210 deficiency (Fig. 8F). Collectively, our findings demonstrate that the absence of miR-210 hampers exercise-driven protection against I/R injury in vivo. This, in turn, indicates that the presence and exercise-responsive up-regulation of miR-210 are crucial for mediating exercise-induced protection to alleviate I/R injury.
Discussion
Cardiac I/R injury, a secondary injury following the treatment of myocardial infarction, has attracted considerable attention. Despite this, there are few effective strategies in place to mitigate I/R injury [33]. Accumulating evidence indicates that the mechanisms underpinning exercise-induced cardiac protection and tissue repair suggest new approaches that might treat I/R injury [34][35][36][37]. This study demonstrates that miR-210, an exercise-responsive miRNA, governs exercise-induced cardiac growth and mediates the effect of exercise in alleviating cardiac I/R injury. Specifically, miR-210 functions to enhance proliferation and reduce apoptosis of cardiomyocytes through its target genes CDK10 and EFNA3. In patients diagnosed with coronary heart disease, circulating miR-210 can be induced after a programmed cardiac rehabilitation. Increasing miR-210 is also effective in promoting proliferation and survival of hESC-CM. These findings suggest its translational value for clinical treatment and indicate that exercise, by inducing miR-210, can effectively mitigate cardiac I/R injury.
Cardiomyocyte death commonly occurs during myocardial injury and represents a significant driver of ventricular remodeling and heart failure [38]. Enhancing cardiomyocyte survival and proliferation is believed to be key to myocardial protection and repair [39]. However, the proliferation capacity of cardiomyocytes is very limited in the adult mammalian heart, constraining the heart's self-renewal ability after injury [40]. Scientists have used the apical resection model to study the cellular and molecular mechanisms of cardiomyocyte proliferation [40,41]. Molecules that are required for cardiac regeneration after apical resection in neonates were found to be able to promote cardiomyocyte proliferation and myocardial repair even in the adult heart [42,43]. Interestingly, previous studies have shown that exercise can induce new cardiomyocyte generation in both healthy and injured adult mouse hearts [44]. In addition to the various molecules (miR-222, miR-17-3p, lncRNA CPhar, Mettl14, etc.) known to be involved in this physiological process [15,16,45,46], our study demonstrates that exercise-induced miR-210 is a pivotal molecular mechanism for the exercise-induced proliferative activity of cardiomyocytes, even though it is not necessary for exercise-induced hypertrophy of cardiomyocytes. Consistently, our in vitro experiments show that miR-210 enhances cardiomyocyte proliferation and exerts an apoptosis-reducing effect without influencing cardiomyocyte size. Our data further support that exercise-induced miR-210 is required to mediate exercise's protection against cardiac I/R injury, which is at least partially attributed to increased proliferative activity and reduced apoptosis of cardiomyocytes. MiR-210 may be a key miRNA involved in the potential contribution of endogenous cardiomyocyte generation to exercise-induced cardiac protection [39].
MiR-210, a hypoxia-responsive microRNA, can be induced with the development of ischemic cardiac diseases [47]. MiR-210 levels also increase in the blood circulation of heart failure patients, suggesting its potential as a predictive biomarker for heart failure or cardiovascular death [48,49]. However, the expression changes and biological functions of peripheral blood and intracellular miRNAs should be analyzed differently. Indeed, intracellular miR-210 exhibits protective roles in alleviating cardiac injury and dysfunction, including I/R injury [50]. In addition to strategies that increase miR-210, such as utilizing extracellular vesicles to load and deliver miR-210 [51], exercise has been shown to boost the expression of cardiac miR-210, thereby enhancing angiogenesis in healthy rat hearts [26]. In the present study, we demonstrate that exercise leads to markedly increased miR-210 expression in experimental murine models of swimming exercise. Meanwhile, serum levels of miR-210 remained elevated even at 8 days after the end of the swimming program in both the sham and cardiac I/R mice. We therefore also sought to determine whether miR-210 could be induced in healthy individuals or patients after exercise. In our previously reported study, circulating miR-210 expression levels were not significantly elevated in healthy individuals after a 3-month basketball training program [52]. Notably, however, circulating miR-210 was markedly induced in patients diagnosed with coronary heart disease after an 8-week programmed cardiac rehabilitation. The different response of circulating miR-210 after a 3-month basketball training program compared to an 8-week programmed cardiac rehabilitation is probably related to exercise type (aerobic versus mixed exercise), intensity (targeted intensity set as the heart rate recorded 1 min before attaining the anaerobic ventilation threshold during the cardiopulmonary exercise test (CPET) versus vigorous intensity), duration (8 weeks versus 3 months), and frequency (30 sessions at 3 times a week versus an average of 565 min a week comprising amateur basketball matches and other regular training) [28,52]. Our results at least provide evidence that a programmed exercise rehabilitation, which is commonly used as an effective and economical treatment for patients with cardiovascular diseases, can induce miR-210 expression levels in the blood circulation.
The functional studies using hESC-CM and human AC16 cells further show that miR-210 is effective in promoting proliferation and survival of human cardiomyocytes. Thus, miR-210, an exercise-responsive miRNA, has clear protective effects for cardiomyocytes, including human cardiomyocytes. Furthermore, the animal I/R experiments were implemented in miR-210 KO rats with or without swimming exercise, showing that exercise can also effectively induce miR-210 expression in the injured myocardium, thus providing protection for the heart. It is intriguing that both physiological stimuli (e.g., exercise training) and pathological stresses (e.g., cardiac I/R injury) can induce miR-210 expression in the heart. Accumulating evidence supports that miR-210 is increasingly expressed under hypoxic conditions through the hypoxia-inducible factor (HIF) pathway [53]. Interestingly, miR-210 can also be activated in a HIF-independent pathway [21]. We hypothesize that AKT activation may be instrumental in the induction of miR-210 following exercise training [9,54]. Further investigations applying RNA sequencing and proteomics of myocardium after physiological exercise and/or pathological ischemic and hypoxic stress will be useful to elucidate the upstream regulators of miR-210 in these different conditions. Herein, we propose that exercise, a nondrug intervention, can serve as a potent tool to stimulate the endogenous expression of miR-210 in the heart, thereby exerting cardioprotective effects. The underlying mechanism of increased miR-210 in response to exercise and the influence of different exercise types and regimens on cardiac miR-210 expression deserve further investigation.
Based on the observations of miR-210-regulated cardiomyocyte proliferation and apoptosis in I/R injury, we sought to screen potential downstream targets of miR-210. Our focus fell on CDK10 and EFNA3. CDK10, an essential regulator of the cell cycle, is centrally involved in regulating cell proliferation. Previous studies have highlighted the dual roles of CDK10, which can function as a tumor suppressor or an oncogene in cancers [55][56][57]. Lower levels of CDK10 were associated with resistance to endocrine therapy in breast cancer patients [58]. Methylation of the CDK10 promoter and ubiquitination of CDK10 are among the mechanisms of CDK10 regulation in cancers [58,59]. However, its role in the myocardium has been relatively underexplored. In addition, we evaluated EFNA3, a previously identified target of miR-210 [31,32], in the myocardium and in the I/R injury model after the swimming regimen. Increasing evidence indicates that EFNA3 participates in regulating apoptosis in different cell types such as nucleus pulposus cells, vascular endothelial cells, Müller cells, and sensory axons [59][60][61][62]. Lower EFNA3 expression was previously demonstrated in the peri-infarct myocardium after miR-210 overexpression treatment [23]; however, without a cell type-specific study, the exact contribution of EFNA3 to the regulation of cardiomyocyte proliferation or survival was unclear. Here, we demonstrate that miR-210 can directly target CDK10 and EFNA3, and show that miR-210 efficiently down-regulates CDK10 and EFNA3 expression in primary NRCM, the AC16 cardiomyocyte cell line, and hESC-CM. Meanwhile, we found that in the I/R hearts, exercise could up-regulate miR-210 while simultaneously down-regulating CDK10 and EFNA3 expression. However, these changes were attenuated in miR-210 KO rats. These results unveil a novel regulatory mechanism of miR-210, in which CDK10 and
EFNA3 are revealed as target genes of miR-210 that regulate cardiomyocyte proliferation. Additionally, EFNA3 down-regulation also mediates the apoptosis-reducing effect of miR-210 in cardiomyocytes. These results shed new light on the mechanisms by which miR-210 contributes to exercise-driven protection upon I/R injury.
Of note, in addition to CDK10 and EFNA3, which we have demonstrated to mediate the functional role of miR-210 in promoting cardiomyocyte proliferation and/or inhibiting apoptosis, other potential downstream target genes of miR-210 might regulate cardiomyocyte functions and be involved in the protective effect of miR-210 against cardiac I/R injury. Moreover, in addition to cardiomyocytes, miR-210 has also been reported to be altered in endothelial cells, which can regulate angiogenesis and participate in cell cross talk through paracrine actions in ischemic heart diseases [53]. As we used miR-210 KO rats in the exercised model and cardiac I/R experiment, it should be taken into consideration that other cell types (in addition to cardiomyocytes) might also act to mediate the protection of exercise against cardiac I/R injury, which deserves further investigation.
In conclusion, we elucidate the role of miR-210, an exercise-responsive miRNA, in promoting the proliferative activity of cardiomyocytes during physiological cardiac growth when induced in an exercised heart. MiR-210 enhances proliferation and exerts an apoptosis-reducing effect by targeting CDK10 and EFNA3. Furthermore, miR-210 essentially contributes to exercise-driven protection against cardiac I/R injury. MiR-210 also functions in human cardiomyocytes and can be induced in patients with coronary heart diseases after programmed cardiac rehabilitation. These findings provide robust evidence for the underlying mechanism of exercise as an effective nondrug intervention. By inducing miR-210, exercise promotes the survival and proliferative activity of cardiomyocytes and alleviates I/R injury in the heart.
Methods
Patients diagnosed with coronary heart disease were recruited at Shanghai Xuhui Central Hospital (Shanghai, China) for blood sample collection, with the protocol approved by the Ethics Committee of Shanghai Xuhui Central Hospital (number 2016-10) and written informed consent given by the participants before enrollment. All animal experiments were approved by the Shanghai University Committee for the Ethics of Animal Experiments. The animal experiments were performed under the Guidelines concerning laboratory animals for biomedical research published by the National Institutes of Health (No. 85-23, revised 1996).
Participants and cardiac rehabilitation program
From June 2016 to December 2018, a total of 20 patients diagnosed with coronary heart disease were recruited at Shanghai Xuhui Central Hospital [63]. Coronary heart disease was diagnosed by a cardiologist according to the clinical symptoms and medical examinations by electrocardiogram, echocardiography, and/or coronary angiogram. Patients underwent CPET before the cardiac rehabilitation program. Patients then underwent targeted-intensity cardiac rehabilitation (3 times per week, 8-week duration), consisting of a 5-min warm-up, 20 min of cycle ergometer aerobic exercise training at the targeted intensity (ranging from 70% to 80% of the estimated peak heart rate and recorded 1 min before attaining the anaerobic ventilation threshold during the test), and a 5-min cool-down, as previously reported [28]. The human serum samples were collected before the cardiac rehabilitation program and right after the last exercise training session and then stored at −80 °C until determination of circulating miR-210 levels.
Generation of miR-210 KO rat ESCs and rats
A total of 12 μg of homologous targeting plasmids was transferred into 5 × 10⁵ rat ESCs using electroporation to achieve miR-210 KO in rat ESCs. The cells were then plated into G418-resistant feeder-coated 35-mm dishes and further selected with G418 (GIBCO, 200 μg/ml) for 7 days to obtain single colonies, which were amplified and genotyped using polymerase chain reaction (PCR). The primers used (forward and reverse) were as follows: ctgttcctgcctctaatcaaggttatag and caccttggagccgtactggaac. MiR-210 KO rats were generated by diploid blastocyst injection and subsequent breeding and mating. The miR-210 KO rats were provided by W. Li (State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences).
Animal model and swimming exercise regimen
Eight-week-old male miR-210 KO and wild-type (WT) rats were maintained in a specific pathogen-free (SPF) animal facility (Shanghai University, Shanghai, China). Adult miR-210 KO rats and WT rats either received swimming exercise for 8 weeks or remained sedentary. Briefly, the swimming exercise regimen began with a 1-week adaptation period of 10-min sessions twice daily (once in the morning and once in the afternoon), escalating by 10 min each day until reaching 60 min twice daily [27]. From the third day, an additional 3% body weight (BW) load was introduced. The swimming duration, inclusive of the adaptation period, spanned 8 weeks. After 8 weeks, rats were anesthetized, followed by heart tissue harvest. The HW and the HW relative to BW or tibia length (TL) were determined [15,16]. Rat serum samples, heart tissues, or optimal cutting temperature compound (OCT)-embedded heart tissues were frozen and stored at −80 °C. To determine the cardiac and circulating miR-210 levels in exercised mice, male adult C57BL/6J mice purchased from Charles River (Beijing, China) underwent a 4-week swimming exercise regimen without BW load as previously reported [15,16]. At the end of the swimming exercise, mouse serum samples were collected. To further investigate whether circulating miR-210 remained increased after exercise, mice were subjected to swimming exercise for 3 weeks and then cardiac I/R injury (ischemia for 30 min and then reperfusion) as reported previously [35]. At 8 days after the end of swimming exercise (7 days after I/R or sham surgery), mouse serum samples were collected to measure circulating miR-210 levels.
Cardiac I/R injury
The involvement of miR-210 in exercise's protection was studied using miR-210-deficient rats or WT controls that either received swimming exercise or remained sedentary; following 8 weeks of swimming exercise or sedentary living, a cardiac I/R injury model was implemented [35,64]. After anesthetization and endotracheal ventilation of the rats, I/R surgery (ligation, 1 h; reperfusion, 3 h) was performed at the level of the left anterior descending coronary artery [35]. It has been reported that from 2 to 3 h after reperfusion, the injured heart can develop obvious myocardial injury, including increased infarct size, cardiac myocyte death, oxidative stress and mitochondrial metabolic dysfunction, and increased circulating levels of lactate dehydrogenase (LDH) [65][66][67][68]. Infarct size was determined by 2,3,5-triphenyltetrazolium chloride staining [35]. Subsequently, heart tissues or OCT-embedded heart tissues were frozen and stored at −80 °C.
OGD/R stress
OGD/R stress was implemented in cardiomyocytes as a widely used cellular model to mimic cardiac I/R injury. Briefly, NRCM or AC16 cells were exposed to a hypoxic condition (<1% O₂) in glucose- and serum-deprived DMEM for 8 h. Following this, cells were cultured in DMEM with serum and glucose under normoxic conditions for 12 h as previously reported [35]. Transfections with miR-210 mimic/inhibitor, CDK10/EFNA3 siRNA, or negative controls were performed for 48 h, as previously described, before the end of OGD/R stress. After 20 h of OGD/R stress, TUNEL staining was conducted in NRCM or AC16 cells. For hESC-CM, cells were subjected to OGD/R stress (oxygen glucose deprivation, 16 h; reperfusion, 12 h) for a total of 28 h before cell harvest.
Luciferase reporter assay
To evaluate the direct interaction between miR-210 and its target genes, the binding sequences (mutated or not) in the 3′UTR of CDK10 and EFNA3 were inserted into the pGL3-Basic luciferase reporter vector. The miR-210 mimic and the luciferase reporter-containing plasmids were cotransfected into human embryonic kidney (HEK) 293 cells. The Dual-Luciferase Reporter Assay (Promega) was used to measure firefly and Renilla luciferase activities. The sequences of the CDK10 and EFNA3 binding sites used for the luciferase reporter assay are listed in Table S1.
Statistical analysis
All statistical analyses were performed using SPSS 20.0 or GraphPad Prism 8. Data were presented as mean ± SD. All data were first analyzed by a normality test. For data that passed the normality test, unpaired Student's t test was used for statistical analyses between 2 groups; one-way analysis of variance (ANOVA) was used for statistical analyses among 3 groups; two-way ANOVA with Tukey post hoc test was applied for comparisons among 4 groups. For data that did not pass the normality test, we used the Mann-Whitney U test to compare differences between 2 groups, and robust two-way ANOVA followed by post hoc pairwiseMedianTest in the rcompanion package for comparisons among multiple groups. Paired Student's t test was used to compare the difference in human serum miR-210 expression levels before and after the cardiac rehabilitation program. P < 0.05 indicated statistical significance.
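As a compact companion to the procedure above, here is a minimal sketch of the same two-group decision logic in Python with SciPy. The function name and data layout are ours; the paper used SPSS, GraphPad Prism, and the rcompanion R package, and the Wilcoxon signed-rank test as the paired non-parametric branch is our assumption, since the paper does not specify one.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05, paired=False):
    """Normality test first, then a parametric test or its rank-based
    analogue, mirroring the two-group procedure described above."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        res = stats.ttest_rel(a, b) if paired else stats.ttest_ind(a, b)
    else:
        # Wilcoxon signed-rank for paired non-normal data is our assumption.
        res = stats.wilcoxon(a, b) if paired else stats.mannwhitneyu(a, b)
    return res.statistic, res.pvalue
```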
Fig. 1. MiR-210 expression is increased in the heart and blood circulation upon exercise training. (A and B) qRT-PCR of miR-210 expression in the rat (A, n = 4) and mouse (B, n = 6) heart tissues after a regular swimming exercise regimen. (C and D) qRT-PCR of miR-210 expression in the serum of rats (C, n = 4) and mice (D, n = 6) after a regular swimming exercise regimen. (E) qRT-PCR of miR-210 expression in the serum of mice at 8 days after finishing the swimming regimen (n = 6). Mice performed a 3-week swimming regimen followed by sham surgery. Serum samples were collected at 8 days after swimming exercise. (F) qRT-PCR of miR-210 expression in the serum of mice with cardiac ischemia/reperfusion (I/R) injury at 8 days after finishing the swimming regimen (n = 6). Mice performed a 3-week swimming regimen followed by cardiac I/R injury for 7 days. (G) qRT-PCR of miR-210 expression in human serum samples from patients with coronary heart disease before and after 8 weeks of cardiac rehabilitation (n = 20). For statistical analysis, unpaired Student's t test was performed for (A) to (D) and (F). Mann-Whitney U test was performed for (E). Paired Student's t test was performed for (G) to compare the difference in human serum miR-210 expression levels before and after the cardiac rehabilitation program. Data are mean ± SD. *P < 0.05; **P < 0.01.
2024-02-14T16:15:07.625Z
2024-02-12T00:00:00.000
{ "year": 2024, "sha1": "e30f41dd52900361b06df7772237bf47a33a35a4", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3304e0cbb94fd64956c63fde908255a4e940ce64", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
37588722
pes2o/s2orc
v3-fos-license
Arachidonate and related unsaturated fatty acids selectively inactivate the guanine nucleotide-binding regulatory protein, Gz.
Gz is a member of the family of trimeric guanine nucleotide-binding regulatory proteins (G proteins), which play a crucial role in signaling across cell membranes. The expression of Gz is predominantly confined to neuronal cells and platelets, suggesting an involvement in a neuroendocrine process. Although the signaling pathway in which Gz participates is not yet known, it has been linked to inhibition of adenylyl cyclase. We have found that arachidonate and related unsaturated fatty acids suppress guanine nucleotide binding to the α subunit of Gz. This inhibition of nucleotide binding by cis-unsaturated fatty acids is specific for Gzα; other G protein α subunits are relatively insensitive to these lipids. The IC50 for inhibition by the lipids closely corresponds to their critical micellar concentrations, suggesting that the interaction of the lipid micelle with Gzα is the primary event leading to inhibition. The presence of the acidic group of the fatty acid is critical for inhibition, as no effect is observed with the corresponding fatty alcohol. While arachidonic acid produces near-complete inhibition of both GDP and guanosine 5′-(3-O-thio)triphosphate binding by Gzα, release of GDP from the protein was unaffected.
Furthermore, the rate of inactivation of Gzα by arachidonate is essentially identical to the rate of GDP release from the protein, indicating that GDP release is required for inactivation. These observations indicate that the mechanism of inactivation of Gzα by unsaturated fatty acids is through an interaction of an acidic lipid micelle with the nucleotide-free form of the protein. Although the physiologic significance of this finding is unclear, similar effects of unsaturated fatty acids on other proteins involved in cell signaling indicate potential roles for these lipids in signal modulation. Additionally, the ability of arachidonate to inactivate this adenylyl cyclase-inhibitory G protein provides a molecular mechanism for previous findings that treatment of platelets with arachidonate results in elevated cAMP levels.
Trimeric guanine nucleotide-binding regulatory proteins (G proteins) comprise a class of membrane-associated proteins that participate in a wide variety of signal transduction pathways by communicating the external signal from cell surface receptors to intracellular effector molecules (1,2). These G proteins are αβγ heterotrimers, consisting of two functional subunits: an α subunit containing bound guanine nucleotide, and a βγ complex. In the resting (GDP-bound) state, a G protein can interact with a liganded receptor in a fashion that drives the exchange of GDP for GTP on the α subunit. The α-GTP and βγ subunits then dissociate, and both subunit complexes can interact with, and modulate the activity of, downstream effectors. The signal is terminated by an intrinsic GTPase activity of the α subunit; subsequent reassociation with the βγ complex returns the system to its resting state. Effector molecules for G proteins include adenylyl cyclase, certain subtypes of phospholipase C, and various ion channels (3,4).
G proteins are classified by the identity of their α subunit. The high sequence homology among these polypeptides has led to the cloning of several forms for which precise physiologic roles have not yet been ascribed. One such isotype is Gzα (5,6). The distribution of Gzα is limited primarily to platelets and neurons, implicating this G protein in some specific role in these tissues (5)(6)(7)(8). The protein has been purified from bovine brain as well as from a bacterial expression system and shown to possess biochemical properties distinct from other G protein α subunits (9). For example, nucleotide exchange by Gzα is highly dependent on free magnesium concentrations. At free magnesium concentrations greater than 10⁻⁵ M, GTP binding by Gzα is nearly completely suppressed. This effect is not seen with other G proteins; in fact, the presence of high magnesium concentrations generally stimulates their rates of nucleotide exchange (2). Magnesium-dependent suppression of nucleotide exchange is observed, however, with members of the monomeric family of GTP-binding proteins, e.g. Ras (10). Gzα also has a very slow intrinsic rate of GTP hydrolysis, more similar to that of Ras and Ras-related proteins than to that of other α subunits (9). Although Gz is formally a member of the Gi family, it is insensitive to ADP-ribosylation catalyzed by pertussis toxin (9), a modification that inactivates the other members of the Gi family (11). A property that Gzα does share with most members of the Gi family is an ability to mediate inhibition of adenylyl cyclase (12,13).
In addition, Gzα serves as an excellent substrate for activated protein kinase C both in vitro and in intact platelets (14), and evidence has been obtained that this phosphorylation blocks subunit interactions of this G protein (15).
Several reports have appeared recently indicating that particular biogenically active lipids can interact in vitro with signaling proteins and modulate their activities. For example, arachidonate and related unsaturated fatty acids physically associate with, and inhibit the activity of, the Ras GTPase-activating protein known as GAP (16,17). Such lipids can also regulate the association of the Ras-related protein, Rac, with a specific GDP dissociation inhibitor (18). Similarly, cis-unsaturated fatty acids such as oleate and arachidonate have been shown to activate protein kinase C (19). While the mechanism by which lipids modulate the activities of these proteins is not completely defined, their interaction raises interesting possibilities for the role of lipids in cellular regulation.
In this study, we demonstrate that cis-unsaturated fatty acids block GTPγS binding by Gzα. The mechanism of inactivation involves a specific effect of lipid micelles on the nucleotide-free form of the protein. These observations are of particular interest since the tissues in which Gzα is found are known to accumulate significant levels of arachidonic acid in response to certain activating stimuli (20,21), and thus the potential exists for cross-talk between arachidonate-producing pathways and those controlled by Gz.
EXPERIMENTAL PROCEDURES
Production and Purification of Recombinant G Protein α Subunits-Recombinant Gzα was expressed in Escherichia coli and purified as described previously (9). The protein was stored at −80°C in 50 mM HEPES, 1 mM EDTA, 1 mM DTT, and 5 mM MgCl2, supplemented with 2 mg/ml bovine serum albumin. Recombinant Gsα was purified from a bacterial expression system as described (22). Recombinant Giα1, Giα3, and Goα were generous gifts of Maurine Linder (Washington University School of Medicine, St. Louis, MO) (23).
Lipid Storage and Micelle Preparation-All lipids were purchased from Sigma, dissolved in ethanol at final concentrations of 50 mM, and stored under N2 at −80°C. For the preparation of pure micelles, the required amount of ethanolic lipid solution was dried under vacuum and suspended in 50 mM HEPES, pH 8.0, 1 mM EDTA, and 1 mM DTT. This suspension was subjected to bath sonication until homogeneous. Mixed micelles were prepared by dissolving the dried lipids in the same buffer containing 0.1% Lubrol (ICN) at 30°C with vortexing or brief sonication.
Guanine Nucleotide Binding Assays-Guanine nucleotide binding by G protein α subunits was quantitated as described previously (24). Briefly, 1-5 pmol of the protein to be analyzed was diluted to 30 µl with 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, and, where indicated, 0.1% Lubrol. 30 µl of GTPγS binding mix consisting of 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, and 2 µM [35S]GTPγS (specific activity, ~10,000 cpm/pmol) was then added. In experiments investigating competition between arachidonate and GTPγS, the concentration of the nucleotide was varied as indicated in the appropriate figure legend. Reactions were initiated by addition of protein, and, unless otherwise indicated, incubation conditions were set as a function of the intrinsic rates of exchange of the various α subunits.
These were: Gzα, 30 min at 30°C; Giα1 and Giα3, 20 min at 30°C; Goα, 2 min at 20°C; Gsα, 6 min at 20°C. The free Mg²⁺ concentration during incubation was 700 nM unless otherwise indicated. Reactions were terminated by the addition of 2 ml of ice-cold 20 mM Tris-Cl, pH 8.0, 25 mM MgCl2, and 100 mM NaCl. Samples were kept on ice until filtration through BA85 nitrocellulose filters. Filters were dried, and radioactivity was determined by liquid scintillation spectrophotometry.
For experiments assessing the time course of GDP dissociation from Gzα, 11 pmol of the protein were incubated at 30°C for 60 min in the presence of 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, 100 mM NaCl, 0.05% Lubrol, and 0.5 µM [3H]GDP (specific activity, ~26,000 cpm/pmol). Arachidonic acid (300 µM) or palmitic acid (300 µM) in 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, and 0.1% Lubrol was added, and samples were incubated for an additional 2 min. Samples were then spiked with unlabeled GDP such that the final concentration of GDP in the "chase" was 50 µM. The additions of arachidonate and GDP were of small enough volume so as not to significantly perturb the relative concentrations of protein or detergent. At the time points indicated in the appropriate figure, aliquots of Gzα were removed to ice-cold buffer (20 mM Tris-Cl, pH 7.7, 100 mM NaCl, 25 mM MgCl2) and stored on ice until filtration through BA85 nitrocellulose filters. Filters were dried, and radioactivity was determined by liquid scintillation spectrophotometry.
For experiments demonstrating the recovery of binding activity with time, Gzα (7.6 pmol) was incubated under the standard reaction conditions plus 300 µM arachidonate. After 5 min, the reaction was diluted 10-fold with 50 mM HEPES, 1 mM EDTA, 1 mM DTT, 0.05% Lubrol, and 2 µM GTPγS. At the times indicated in the appropriate figure, aliquots were removed from the incubation into ice-cold buffer (20 mM Tris-Cl, pH 7.7, 100 mM NaCl, 25 mM MgCl2), and bound nucleotide was determined.
For experiments assessing the time course of inactivation of Gzα by arachidonate, ~7 pmol of protein were incubated in the presence or absence of 300 µM arachidonic acid for up to 90 min. At the time points indicated, aliquots of the incubation mixture containing 0.4 pmol of Gzα were removed and immediately subjected to a 60-min GTPγS binding assay. Transferring the protein from the pre-incubation to the GTPγS binding reaction effectively diluted the arachidonate to 30 µM in the samples in which the pre-incubation was performed in the presence of 300 µM lipid. Additional changes from the standard reaction mixture included the presence of 5 µM GDP during the pre-incubation to stabilize the G protein and the inclusion of 10 µM [35S]GTPγS (specific activity, ~10,000 cpm/pmol) in the GTPγS binding mix.
Fluorimetric Determination of Critical Micellar Concentrations of Lipids-CMC values for lipids were determined by fluorescence spectroscopy as described by Chattopadhyay and London (25) using the fluorescent probe 1,6-diphenyl-1,3,5-hexatriene. A stock 500 µM suspension of lipid (see above) was diluted to the appropriate concentration in 500 µl of 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, 5 mM MgCl2, and 1.5 mg/ml bovine serum albumin, and this mixture was then added to 500 µl of 50 mM HEPES, pH 7.6, 1 mM EDTA, 1 mM DTT, and 1 µM GTP. Following equilibration to room temperature, 1,6-diphenyl-1,3,5-hexatriene (2 µl of a 1 mM solution in tetrahydrofuran) was added, and the tubes were incubated in the dark for at least 30 min.
Fluorescence measurements were performed on a Perkin-Elmer 650-40 fluorescence spectrophotometer set at excitation and emission wavelengths of 358 and 430 nm, respectively.
RESULTS
Cis-unsaturated Fatty Acids Inhibit GTPγS Binding by Gzα-Reports of the effects of arachidonic acid on proteins involved in cell signaling prompted us to evaluate arachidonate and related lipids for potential effects on G protein activities. The initial experiments focused on Gz, since, as noted in the Introduction, this G protein is predominantly expressed in tissues with highly active phospholipase A2 pathways. We first tested various fatty acids for their effect on the ability of Gzα to bind GTPγS, a non-hydrolyzable analog of GTP. As shown in Fig. 1, the 20-carbon unsaturated fatty acid, arachidonic acid, dramatically inhibited the ability of Gzα to bind guanine nucleotide. GTPγS binding by Gzα was suppressed in a dose-dependent fashion, and suppression was nearly complete at a concentration of 120 µM arachidonic acid. Essentially identical results were obtained when the binding of [3H]GDP, rather than GTPγS, was examined (results not shown). The carboxyl group on the fatty acid was essential for inhibition of nucleotide binding, as the equivalent fatty alcohol, arachidonyl alcohol, was not inhibitory. Interestingly, arachidic acid, the saturated 20-carbon fatty acid, also had no effect on GTPγS binding by Gzα, indicating that the double bonds are required for inhibition.
The ability of arachidonate to suppress GTPγS binding by Gzα prompted us to examine whether other unsaturated fatty acids exert the same effect. This was indeed found to be the case. Oleic acid, linoleic acid, and linolenic acid all suppressed nucleotide binding by Gzα in the same dose-dependent fashion as arachidonic acid (Fig. 1). Oleic acid and linoleic acid both completely suppressed GTPγS binding by Gzα at a concentration of 175 µM, and linolenic acid was completely inhibitory at 250 µM. However, the trans-unsaturated fatty acid, elaidic acid, was only slightly inhibitory at these concentrations (data not shown).
The steepness of the inhibition curves for the unsaturated fatty acids indicated that the inhibition was not due to a simple binding event but rather to some sort of cooperative process. One such process, which is quite obvious when working with lipids, is the formation of micelles, a highly cooperative aggregation event. Accordingly, we determined the CMC for the lipids under the same conditions (e.g. ionic strength, Mg²⁺ concentration) as for the GTPγS binding experiments. CMC values for the various lipids were determined using a fluorescence technique (24), and it was found that the CMC values corresponded nearly identically to the observed IC50 for the inhibition of GTPγS binding (Table I). For example, the observed IC50 and the CMC for arachidonic acid were 60 and 73 µM, respectively. These observations provide strong evidence that the ability of the unsaturated fatty acids to suppress GTPγS binding by Gzα is micelle-dependent; i.e., it is the interaction of the protein with an anionic lipid micelle that is responsible for the inhibition.
To facilitate manipulation of the lipid in subsequent studies, we assessed whether the inhibition by arachidonate of the ability of Gzα to bind GTPγS occurred when the fatty acid was present in a mixed micelle.
The data in Fig. 2 show that this is the case, as the same type of inhibition is observed in response to increasing arachidonic acid when the fatty acid is present in a mixed micelle with the non-ionic detergent, Lubrol. The dose-response curve is shifted substantially to the right, as would be expected for a process that depends on the mole fraction of lipid in the micelle (26). In fact, the IC50 for inhibition of GTPγS binding by Gzα shifts in proportion to the mole fraction of the lipid (results not shown).
Inhibition of Nucleotide Binding by Arachidonate Is Specific for Gzα-We next determined the specificity of G protein-arachidonate interactions by determining the effect of the fatty acid on GTPγS binding by other G protein α subunits. As in the previous experiments, GTPγS binding by Gzα was inhibited 50% at 150 µM arachidonate, while at 300 µM, nucleotide binding was nearly completely suppressed (Fig. 3). GTPγS binding by Giα3, a member of the Gi subfamily to which Gzα belongs, was not significantly inhibited at 150 µM arachidonate but was inhibited ~50% at 300 µM. The other G protein α subunits tested, including that of the closely related Gi1 as well as Go and Gs, were not inhibited by arachidonate at any concentration tested. This specificity for Gzα over other G protein α subunits suggests a potential role for arachidonate in Gz signaling and prompted us to investigate the mechanism of this lipid effect on GTPγS binding by Gzα.
Mechanism of Action-The binding of GTPγS by G proteins is a two-step process involving GDP release (the rate-limiting step) and subsequent diffusion-controlled GTPγS binding by the nucleotide-free protein (2). To explore the mechanism by which arachidonic acid inhibits GTPγS binding, we first assessed the effect of arachidonic acid on the rate of GDP release from the protein. As release of bound GDP from α subunits is the rate-limiting step in nucleotide exchange, we expected that this process would be inhibited by arachidonic acid in a fashion similar to that of GTPγS binding. Quite surprisingly, however, release of [3H]GDP from Gzα occurred with the same rate constant (Fig. 4) in the presence of 300 µM palmitic acid (a non-inhibitory fatty acid) or in the presence of 300 µM arachidonic acid, a concentration that essentially completely suppressed GTPγS binding to the protein (see Fig. 2). These data indicate that the effect of arachidonic acid is exerted at the step of GTPγS binding. This would be highly unusual, since GTP binding by G proteins is normally diffusion-controlled, and thus its rate would have to be reduced by many orders of magnitude before an effect on the overall binding reaction would be observed.
FIG. 2. Effect of a non-ionic detergent on arachidonic acid-dependent inhibition of GTPγS binding by Gzα. Gzα was subjected to the GTPγS binding reaction as described under "Experimental Procedures" for 20 min at 30°C in the presence of the indicated concentrations of arachidonic acid and 0.05% Lubrol (○). For comparison, the data obtained in the absence of added Lubrol are shown (●; see Fig. 1). Data shown are from a single experiment that has been repeated at least three times. Maximal GTPγS binding is defined as the amount of GTPγS binding observed in the absence of arachidonate under each condition; the presence of Lubrol had a negligible effect on the binding.
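As a back-of-envelope illustration of the mole-fraction dependence described above (not from the paper; the assumed Lubrol monomer molecular weight is an invented round figure), the effective dose in a mixed micelle can be sketched as follows:

```python
# Illustrative sketch of the mole-fraction argument for mixed micelles.
# Only the 0.05% (w/v) Lubrol figure comes from the text; the assumed
# monomer molecular weight (~600 g/mol) and the printout are ours.

def mole_fraction(lipid_uM: float, detergent_uM: float) -> float:
    """Mole fraction of fatty acid in a fatty acid/detergent mixed micelle."""
    return lipid_uM / (lipid_uM + detergent_uM)

lubrol_uM = 0.5 / 600.0 * 1e6  # 0.05% w/v = 0.5 g/l -> roughly 830 uM monomer

for aa_uM in (60.0, 150.0, 300.0):
    x = mole_fraction(aa_uM, lubrol_uM)
    print(f"{aa_uM:5.0f} uM arachidonate -> mole fraction ~{x:.2f}")
```

On this picture, matching the inhibitory mole fraction seen with pure micelles requires a proportionally higher bulk fatty acid concentration, which is consistent with the rightward shift of the dose-response curve in Fig. 2.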
One possibility for the selective effect of arachidonate on the GTPγS binding step is that the lipid micelle could interact specifically with the unoccupied nucleotide binding site on Gzα and effectively compete for GTPγS binding. If this were the case, inhibition of nucleotide binding by arachidonic acid should be reduced by increasing the concentration of competing nucleotide. To explore this possibility, we measured the effect of arachidonic acid on GTPγS binding in the presence of increasing GTPγS concentrations. However, assessment of the arachidonate-mediated inhibition over a 50-fold range of GTPγS revealed that binding was nearly completely suppressed at all concentrations of competing nucleotide (Fig. 5). Since inhibition of nucleotide binding by arachidonic acid was unaffected at GTPγS concentrations as high as 25 µM, which is >1000-fold above the Kd of G protein α subunits for GTPγS (2), it is considered highly unlikely that the lipid micelle is competing for the nucleotide binding site of the protein. An alternative explanation for the effect of arachidonate on GTPγS binding, but not on GDP release, by Gzα is that the fatty acid could somehow interact with and inactivate the nucleotide-free form of the G protein that is a transient intermediate in the exchange process. To examine this possibility, we assessed the time dependence of the inactivation of Gzα by arachidonate. If arachidonate could exert its effect only on the nucleotide-free form of Gzα, a recovery of binding activity should be observed if the protein is exposed to high arachidonate and then is diluted to an ineffective concentration. This recovery of binding activity would then reflect the fraction of the protein that had not yet released its GDP. Furthermore, if the arachidonic acid is selectively inactivating the nucleotide-free form of Gzα, then the rate of inactivation of GTPγS binding should correspond to the rate of GDP release. Indeed, the evidence indicates that this is the case (Fig. 6). In the first experiment (Fig. 6A), binding activity was measured after Gzα was first incubated with 300 µM arachidonate for 5 min and then diluted to 30 µM arachidonate. While GTPγS binding activity was detected, the level of nucleotide binding recovered was significantly less than the control levels in which only 30 µM fatty acid had been present throughout. This same type of experiment was performed over a range of pre-incubation times with 300 µM arachidonate, from 5 to 90 min; in each case, the quantity of Gzα capable of binding nucleotide was assessed after a 10-fold dilution to the ineffective concentration of the lipid (i.e. 30 µM). The results of this analysis, shown in Fig. 6B, revealed in each case a loss of GTPγS binding activity that was not recovered by subsequent dilution. This was not due simply to protein lability, as pre-incubation of Gzα in the absence of arachidonate did not result in a loss of GTPγS binding activity. An equally important finding from this experiment is that the time dependence of the loss of the binding activity of Gzα could be fit to an exponential with a decay constant of 0.028 min^-1, which is nearly identical to the rate constant for GDP release from Gzα under the same conditions (9). Taken together, these data indicate that, in the presence of arachidonate, Gzα is able to release GDP normally but is then rapidly inactivated when the lipid micelle interacts with the nucleotide-free form of the protein.
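The quantitative crux is that the loss of binding activity follows a single exponential whose rate constant matches that of GDP release. A log-linear fit is the simplest way to recover such a rate constant; in the sketch below the activity values are hypothetical numbers generated to resemble Fig. 6B.

```python
import numpy as np

# Hypothetical recovered-binding data: pre-incubation with 300 uM arachidonate
# for increasing times, then 10-fold dilution to 30 uM (cf. Fig. 6B).
t_min = np.array([5.0, 15.0, 30.0, 45.0, 60.0, 90.0])
activity = np.array([0.87, 0.66, 0.43, 0.28, 0.19, 0.08])  # fraction of control

# ln(activity) = -k * t, so the slope of a log-linear fit gives the
# first-order inactivation rate constant.
slope, intercept = np.polyfit(t_min, np.log(activity), 1)
print(f"k_inact = {-slope:.3f} min^-1")   # ~0.028 min^-1 for these numbers
# Agreement with the GDP release rate constant implies that only the
# transiently nucleotide-free protein is attacked by the micelle.
```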
FIG. 3 legend (continued): The incubation conditions were adjusted for each α subunit as described under "Experimental Procedures." Data shown represent the mean of three separate determinations, with the 100% control value being the binding observed in the absence of added arachidonic acid. AA, arachidonic acid.

DISCUSSION

The role of lipids in cellular signaling has received increasing attention in recent years (27). It is now clear that lipids such as arachidonic acid and diacylglycerol actively participate as second messengers in signaling pathways (28, 29). Examples are also beginning to emerge of arachidonate and other cis-unsaturated fatty acids directly modulating the activities of signaling proteins. For example, these fatty acids can associate with and alter the activity of Ras-GAP (17, 30). Cis-unsaturated fatty acids have also been shown to regulate the association between the monomeric G protein, Rac, and its GDP dissociation inhibitor (18). Arachidonate and other unsaturated fatty acids have also been shown to activate certain isozymes of protein kinase C (19). In this report, we have identified an additional effect of cis-unsaturated fatty acids on a signaling protein, namely the inactivation of a G protein α subunit, specifically Gzα. Arachidonate-dependent inactivation of GTPγS binding by Gzα was quite specific for this α subunit, as treatment of a number of other α subunits had only minimal effects on their abilities to bind nucleotide. The inactivation was dependent upon the presence of an acidic group on the lipid and correlated with the formation of a lipid micelle. Several cis-unsaturated fatty acids were potent inhibitors of GTPγS binding by Gzα, with a dose dependence that matched each lipid's respective CMC. This suggests that it is an interaction between the charged surface of a micelle and Gzα that is required for the inhibition. These results are similar to those of Serth et al. (17), who observed the inhibition of Ras-GAP in the presence of fatty acids and acidic phospholipids, but not in the presence of neutral lipids, and only under conditions in which the active lipids formed micellar structures.

The inhibition of GTPγS binding by Gzα seen upon the addition of arachidonic acid could have been exerted at either of two distinct steps in the process, these being dissociation of bound GDP or association of the GTPγS. The former step was initially considered the most likely, as GDP dissociation from G proteins is ~10^7-fold slower than association of guanine nucleotides (31). To identify the step in Gzα nucleotide exchange affected by arachidonate, we directly determined the effect of arachidonic acid on the rate of GDP release. Quite surprisingly, GDP release was virtually unaffected by concentrations of arachidonate that essentially completely suppress GTPγS binding, indicating that GTPγS binding was the step being affected. An assessment of the time dependence of arachidonate inhibition of GTPγS binding revealed that (a) inhibition of GTPγS binding was not reversible even after 60 min of incubation and (b) the rate of this inactivation corresponded precisely with that of GDP release by Gzα. Taken together, these experiments indicate that the effect of arachidonate on the ability of Gz to bind guanine nucleotides is dependent on an association of the lipid micelle with the nucleotide-free form of the protein, resulting in an alteration of the protein that renders it inactive.

FIG. 6. Arachidonic acid selectively inactivates the nucleotide-free form of Gzα. A, Gzα was incubated in a batch reaction as described under "Experimental Procedures" at 30 °C in the presence of either 30 µM or 300 µM arachidonate. In one of the reaction mixtures containing 300 µM arachidonate, the mixture was diluted 10-fold with 50 mM HEPES, 1 mM EDTA, 1 mM DTT, 2 µM GTPγS (specific activity, 12,000 cpm/pmol) after 5 min of incubation. At the times indicated, aliquots containing ~0.5 pmol of Gzα were removed to ice-cold 20 mM Tris-Cl, 25 mM MgCl2, 100 mM NaCl, and bound nucleotide was determined. B, Gzα was incubated in a batch reaction as described under "Experimental Procedures" at 30 °C in 50 mM HEPES, 10 mM EDTA, 1 mM DTT, 2.65 mM MgCl2, 5 µM GDP, 0.05% Lubrol, and either 0 or 300 µM arachidonate. At the indicated times, aliquots containing ~0.4 pmol of Gzα were removed and added to a GTPγS binding reaction mixture containing 10 µM GTPγS and a free Mg2+ concentration of 700 nM. For the experiment conducted in the absence of arachidonic acid, the GTPγS reaction mixtures contained an added 30 µM arachidonic acid so that the lipid concentration in all binding assays was held constant. GTPγS binding was performed at 30 °C for 60 min for all data points. Data points represent means of six separate determinations. AA, arachidonic acid.

While lipid-mediated modulation of G protein activity by irreversible inactivation seems an unlikely mode of regulation in the cell, the selectivity of the process for Gz over other G proteins, as well as the unique distribution of Gz, provides strong reason to suspect that the process is physiologically relevant. As noted above, the cell types in which Gz is found, such as platelets and chromaffin cells, are known to possess high levels of phospholipase A2 activity (20, 21). These cells are also known to produce substantial levels of arachidonate in response to external stimuli (20, 32, 33). Also of note in this regard are previous studies showing that treatment of platelets with high levels of exogenous arachidonate results in increased intracellular cAMP accompanied by reduced aggregation (34, 35). In one of these studies, treatment of platelets with an adenylyl cyclase inhibitor restored aggregation in the presence of arachidonate, indicating that the fatty acid was exerting its effect at or upstream of adenylyl cyclase (35). The finding that arachidonate can inactivate a G protein that is both present in platelets and implicated in the inhibition of adenylyl cyclase thus provides a potential molecular mechanism for these effects. Finally, it is certainly possible that in the context of an intact cell an increase in the concentration of arachidonic acid might be only transiently inhibitory; i.e. the cellular environment could provide protection of the apoprotein form of Gzα from permanent inactivation by the lipid. Possibilities here include a protective factor in these cells that associates with Gzα, or one that reverses the association between Gzα and inhibitory lipids. Identification of the pathway in which Gzα participates will likely shed some light on these results and on the possibilities for the novel means of G protein regulation they may represent.
2018-04-03T04:03:45.946Z
1996-02-09T00:00:00.000
{ "year": 1996, "sha1": "a111c9f1fdede4ff6ffb8142e3f1f349c0694c5a", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/6/2949.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "cfc3453d7e11679492ad0baa338fbb74bf1849d5", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
251683137
pes2o/s2orc
v3-fos-license
Development of a Tailored Sol-Gel Immobilized Biocatalyst for Sustainable Synthesis of the Food Aroma Ester n-Amyl Caproate in a Continuous Solventless System

This study reports the synthesis of a hybrid sol-gel material, based on organically modified silanes (ORMOSILs) with epoxy functional groups, and its application in the stabilization of lipase type B from Candida antarctica (CalB) through sol-gel entrapment. The key immobilization parameters in the sol-gel entrapment of lipase using epoxysilanes were optimized through the design of experiments, demonstrating that glycidoxypropyl-trimethoxysilane allows the formation of a matrix with excellent properties for the biocatalytic esterifications catalyzed by this lipase, at an enzyme loading of 25 g/mol of silane. The characterization of the immobilized biocatalyst and the correlation of its catalytic efficiency with the morphological and physicochemical properties of the sol-gel matrix were accomplished through scanning electron microscopy (SEM), fluorescence microscopy (FM), as well as thermogravimetric and differential thermal analysis (TGA/DTA). The operational and thermal stability of the lipase were increased as a result of immobilization, with the entrapped lipase retaining 99% activity after 10 successive reaction cycles in the batch solventless synthesis of n-amyl caproate. A possible correlation of optimal productivity and yield was attempted for this immobilized lipase via the continuous-flow synthesis of n-amyl caproate in a solventless system. The robustness and excellent biocatalytic efficiency of the optimized biocatalyst provide a promising solution for the synthesis of food-grade flavor esters, even at larger scales.

Introduction

Esters containing non-chiral alcohols and short-chain fatty acids are important additives used in the food and cosmetic industries and are particularly popular due to their fruity flavor [1]. Current processes for the production of esters consist of the esterification of a carboxylic acid with an alcohol in the presence of non-selective inorganic catalysts at high temperatures, or of extraction from natural sources. Esters extracted from plant materials are often either too scarce or too expensive for commercial use, while those produced by chemical synthesis are not considered natural products [2]. Due to the growing concerns regarding climate change and environmental issues, industries are increasingly focusing on developing greener, safer, and more sustainable alternatives to the current industrial-scale manufacturing processes [3]. Enzyme-catalyzed processes occur under mild reaction conditions, ambient temperature, atmospheric pressure, and physiological pH, and are therefore more environmentally friendly and cost-effective [4]. Compared to chemical catalysts, enzymes are readily biodegradable and non-toxic, reducing the hazards of processing [5]. These properties allow the manufacturing of quality products with fewer raw materials, lower consumption of chemicals, water, and energy, and less waste generation [6]. Interfacially active enzymes, such as lipases, can effectively catalyze not only hydrolytic but also synthetic reactions. Esterification reactions catalyzed by lipases are therefore of great interest [7].
In particular, n-amyl caproate is an aroma chemical with a fresh floral flavor and a fruity apple taste with melon notes, marketed by several companies as a GRAS (generally recognized as safe, as indicated by FEMA, the Flavor and Extract Manufacturers Association) flavoring compound accepted as an additive in different food products [8]. Consequently, numerous attempts have been made to develop an efficient lipase system for the synthesis of food-grade esters, showing that the influence of the reaction parameters varies strongly with the substrates involved. Therefore, each ester synthesis can be considered a specific task [2]. To date, most lipase-catalyzed synthesis reactions are carried out in non-polar solvents, one of the main reasons for this being the homogeneity of the reaction mixture, even at lower operating temperatures. However, current requirements for sustainability and compatibility with food regulations have oriented developments toward the utilization of green solvents or solventless systems. Biobased solvents from cereal or sugar sources are mainly obtained by the fermentation and valorization of lignocellulose residues; examples include the furfural derivative 2-methyltetrahydrofuran (2-MeTHF) and cyclopentyl methyl ether (CPME). Of these, 2-MeTHF is probably the most promising green solvent, obtained from carbohydrates and produced on an industrial scale from lignocellulosic biomass, particularly from agricultural wastes such as corn stover or sugarcane bagasse [9]. In recent years, several applications of 2-MeTHF, as well as of CPME, in biocatalysis have been reported [10]. Although organic solvents provide important advantages as reaction media for enzymatic reactions, in the case of the synthesis of food-grade flavor esters the complete elimination of solvents is highly desirable [11]. Higher selectivity and volumetric productivity, along with improved substrate and product concentrations, are other advantages of using solventless reaction systems. Moreover, production costs generally decrease when complex purification steps are eliminated, and reducing the process hazards associated with solvent exposure, toxicity, and flammability provides a "clean" and "green" synthetic pathway, shifting the manufacturing process toward an environmentally friendly route [12]. Immobilized lipases are excellent biocatalysts for the enzymatic synthesis of the short- and medium-chain fatty acid esters used as food flavor compounds; however, their catalytic activity depends greatly on the selected immobilization method [1]. Although numerous studies refer to lipase immobilization, there are still no straightforward protocols regarding the optimal method for each type of enzyme. Thus, it is necessary to customize the immobilization procedure for the selected enzyme and the envisioned applications. The choice of the appropriate immobilization procedure has to be evaluated considering the characteristics of the biomolecules, the supports, and the intended application. The uniqueness and versatility of the sol-gel immobilization process are provided by the mild, easy, and tunable synthesis conditions [13]. The sol-gel polymer network is compact enough to retain the enzyme molecules while allowing the substrates and products to pass through. The enzyme is not bound to the polymer matrix, so inactivation of the enzyme is reduced, offering very high applicability [14].
The three-dimensional network of a sol-gel matrix creates a protective microenvironment for enzyme stability, enhanced by the forces involved in the support-enzyme interaction, such as electrostatic and hydrophobic interactions, which greatly influence the performance of the enzyme as a biocatalyst [7]. The great variety of organic silane precursors available allows many ways to improve the properties of the silica network. The most common and widely used precursors are tetraalkyl orthosilicates (tetraalkoxysilanes) Si(OR)4 (R being methyl or ethyl) in a mixture with trialkoxysilanes R′Si(OR)3 or dialkoxysilanes R′R″Si(OR)2, substituted with alkyl or aryl functional groups (R′ and R″), which allow the tuning of the xerogel matrix properties [15,16]. The properties of sol-gel particles are governed by the type and content of the particular organically modified silanes used [13]. Silanes with epoxy groups have multiple uses as coupling agents, even in the case of chemically modified 3D-printed scaffolds for enzyme immobilization [17], but their possible utilization as silane precursors for the sol-gel entrapment of lipases was reported only in our previous work [1]. The utilization of a precursor silane with an active epoxy group could lead to the supplementary stabilization of the enzyme by covalent binding without a severe multipoint attachment, due to the relatively low epoxy group content in the sol-gel matrix and the rather low reactivity of these groups under immobilization conditions. The covalent immobilization of enzymes on epoxy-activated supports allows a multipoint attachment via a reaction with the nucleophilic groups of specific amino acid residues (lysine, histidine, tyrosine) located on the surface of the enzyme molecule, combined with physical adsorption [18]. Such supports are commercially available, but it is obviously difficult to control the interaction between the enzyme and the support, and inactivation can occur due to conformational changes induced by too-strong multipoint binding. Therefore, the development and optimization of a sol-gel immobilization procedure, involving the entrapment of the enzyme and possible additional covalent bonding, is a challenging task. The design of experiments (DOE) is an important approach when optimizing chemical processes [19], one that is used to effectively explore the relationships between the inputs and outputs of a process and to better understand them [20]. At the same time, it is increasingly used for the improvement of biocatalytic processes [21][22][23], but the optimization of immobilization parameters using this method is seldom reported [16]. Response surface methodology (RSM) is an optimization technique that is also used in biocatalysis and determines the optimum process conditions by testing several variables simultaneously [2]. The aim of this study was to investigate the influence of the innovative functionalization of the sol-gel matrix using silane precursors with epoxy groups, which could also allow improved stabilization by additional covalent bonding, on the catalytic performance of immobilized lipase from Candida antarctica B (CalB). The main novelty of this research is the utilization of an immobilized biocatalyst obtained via sol-gel entrapment, optimized through experimental design, in the synthesis of the natural food aroma ester n-amyl caproate, in both batch and continuous systems.
The best biocatalyst provided excellent thermal and operational stability in batch conditions, enabling a clean synthesis process and allowing further development toward a continuous-flow system. Moreover, the optimization of the major reaction parameters (substrate concentration, flow, and temperature) in solventless conditions was accomplished, using for the first time a desirability approach to correlate the ester yield and productivity. These results demonstrate that a commercially marketed natural aroma ester can be synthesized in a continuous solventless process, using a novel biocatalyst obtained through an epoxy-functionalized sol-gel system and an immobilization process optimized via experimental design.

Materials

Lipase from Candida antarctica type B, produced by the fermentation of genetically modified microorganisms, was a generous gift from Genofocus (Daejeon, Republic of Korea). The commercial preparation CalB-IM™ (Genofocus, Daejeon, Republic of Korea) contains lipase from Candida antarctica B adsorbed onto a microporous ion-exchange resin. The silane precursors used for the sol-gel entrapment of the native lipase were tetramethoxysilane (TMOS) from Acros Organics (Geel, Belgium), while the silanes with epoxy functional groups, (3-glycidoxypropyl)trimethoxysilane (GPTMS) 99+% (product code SIG5840.1), (3-glycidoxypropyl)bis(trimethylsiloxy)methylsilane (GP(TMS)2MS) 97% (product code SIG5820.0), and 1,3-bis(glycidoxypropyl)tetramethyldisiloxane ((GP)2TMDSO) 97% (product code SIB1115.0), were purchased from Gelest (Morrisville, PA, USA). The other materials used were tris(hydroxymethyl)aminomethane, 2-propanol, 1-octyl-3-methyl-imidazolium tetrafluoroborate (OmimBF4), n-amyl alcohol, caproic acid, n-hexane, and Coomassie Brilliant Blue.

The immobilization of lipase B from Candida antarctica was studied by means of entrapment in hybrid sol-gel matrices consisting of binary silane systems of tetramethoxysilane (TMOS) with an epoxysilane. In the process of optimizing the immobilization by sol-gel entrapment, several parameters were considered. First, the nature of the silane system and the type of the basic catalyst for the hydrolysis reaction of the silane, which greatly influence the properties of the enzymatic preparations, were investigated. For each of these two parameters, three distinct settings were chosen. In search of an optimal sol-gel network for enhanced enzyme activity, combinations of the three epoxysilanes with glycidoxypropyl groups, namely GPTMS, GP(TMS)2MS, and (GP)2TMDSO, with TMOS (at a 1:1 molar ratio) were tested. For each silane combination, three catalysts (sodium fluoride, potassium fluoride, and ammonia) were investigated. The influence of the studied parameters (silane system and catalyst type) on the final properties of the immobilized preparations was evaluated in terms of immobilization yield and catalytic efficiency in the esterification reaction of n-amyl alcohol and caproic acid in a solventless medium.

Statistical Optimization of Sol-Gel Entrapment of Candida antarctica B Lipase

Candida antarctica lipase type B was immobilized through sol-gel entrapment, using a binary TMOS:GPTMS system at different molar ratios between the two silanes. The influence of immobilization parameters such as the silane molar ratio (that is, the organic matter content of the sol-gel network) and the enzyme loading (expressed as enzyme amount per total silane concentration) on the ester yield was investigated.
The experimental setup was built using the Design-Expert software (Stat-Ease, Inc., Minneapolis, MN, USA), and process optimization was carried out by response surface methodology (RSM) employing a central composite design (CCD). The levels of the immobilization parameters investigated are given in Table S1 in the Supplementary Materials. All immobilization reactions were carried out in duplicate; the results given represent the means. Table 1 provides an overview of the experimental setup with 2 factors at 3 levels, consisting of the 11 immobilization runs (8 non-center and 3 center points) and the corresponding experimentally determined ester yields. The fit of the model to the experimental data was first evaluated, followed by an analysis of variance (ANOVA) to determine the statistical significance attributed to the model terms (immobilization parameters). Subsequently, the response equation for the quadratic model was given, and the reaction parameters were optimized in order to maximize ester yield. Finally, data predictions were made using the fitted model, and a validation run was conducted under the optimized reaction conditions.

The general procedure for the immobilization of enzymes by sol-gel entrapment, which was used as the starting point for the experimental design, was previously reported by the authors of [24]. Briefly, a certain amount of Candida antarctica lipase B, depending on the enzyme loading used in the experiment, was suspended in Tris/HCl buffer (0.1 M, pH 8.0) and stirred magnetically at 600 rpm (rotations per minute) at room temperature until a homogeneous enzyme suspension was obtained. The suspension was further centrifuged at 15 °C and the supernatant was used for immobilization. The enzyme suspension, the immobilization additives (the ionic liquid OmimBF4 and 2-propanol), and a 1 M solution of the hydrolysis catalyst (NaF/KF/NH3) were added to a 4-mL glass vial, and the mixture was kept under continuous stirring at room temperature. After 30 min, the appropriate silane precursors were added to the mixture in specific ratios (amounting to a total of 6 mmoles). The mixture was kept under stirring until gelation commenced, and the obtained gel was kept at room temperature until complete polymerization was achieved. The wet gel was washed consecutively with 2-propanol, distilled water (for removal of unreacted compounds and unbound enzyme), 2-propanol, and hexane (for removal of excess water), and vacuum-filtered through a glass Buchner funnel with a sintered filter disc of G3 porosity. Subsequently, the washed gel was dried for 24 h at room temperature and then in a vacuum oven at 25 °C for another 24 h (100 mbar final level of vacuum) for complete removal of the rinsing solvent. The resulting xerogel was crushed in a mortar and stored under refrigeration (4 °C). The washing filtrate was tested for proteins using the Bradford method [25]. The efficiency of the immobilization procedure was evaluated in terms of protein immobilization yield, according to Equation (1):

Immobilization yield (%) = (Pi/Pt) × 100 (1)

where Pi is the immobilized protein quantity and Pt is the total protein quantity added in the immobilization protocol. Pi was determined by subtracting the unbound protein detected in the filtrate and washings from the initial total protein, Pt.
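As a small worked example of Equation (1), the sketch below computes the immobilization yield from Bradford measurements; the protein amounts are illustrative only.

```python
def immobilization_yield(p_total_mg: float, p_unbound_mg: float) -> float:
    """Protein immobilization yield (%) per Equation (1):
    Yield = (P_i / P_t) * 100, with P_i = P_t minus the unbound protein
    recovered in the filtrate and washings (Bradford assay)."""
    return 100.0 * (p_total_mg - p_unbound_mg) / p_total_mg

# Illustrative numbers only: 50 mg protein offered, 6 mg found in the washings.
print(f"yield = {immobilization_yield(50.0, 6.0):.0f}%")   # -> 88%
```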
Although the Bradford protein assay results can be affected by several interferences, as shown by Nicolas et al. [26], in our case they were minimized by removing the insoluble components of the native solid CalB lipase and avoiding the utilization of phosphate buffers. Under the conditions of sol-gel entrapment, the Bradford assay provides useful results, particularly when used for the comparative evaluation of different immobilization protocols. A leaching test was also carried out to determine the possible loss of enzyme by diffusion into the solution. First, 20 mg of entrapped lipase was maintained for 1 h under stirring in 0.05 M phosphate buffer at pH 7.0; following centrifugation at 3000× g, a certain supernatant volume was used for an activity test of the possibly leached enzyme with p-nitrophenyl palmitate, as described by Guo et al. [27]. Enzymatic activity was not detected in the samples by this assay.

Catalytic Efficiency of the Immobilized CalB Lipase in Solventless Batch Esterification

The esterification reactions of n-amyl alcohol with caproic acid were carried out in a closed system in solventless media, as follows: in a 2-mL Eppendorf safe-lock tube, substrates at a 1:1 molar ratio and the native or immobilized enzyme, at a biocatalyst/substrate ratio of 9 and 50 g/mol substrate for the native and immobilized enzyme, respectively, were incubated in a Thermomixer (Eppendorf AG, Hamburg, Germany) at 36 °C and 1000 rpm for 16 h. The obtained ester amounts were assayed by gas chromatography on a Varian 450 chromatograph (Varian Inc., Utrecht, The Netherlands) equipped with a flame ionization detector (FID), using a 15 m × 0.25 mm VF-1ms non-polar capillary column with a 0.25-µm film thickness of dimethylpolysiloxane. The analysis conditions were as follows: oven temperature 80-160 °C, with a heating rate of 10 °C/min; injector temperature 300 °C; detector temperature 350 °C; and carrier gas (hydrogen) flow of 1.9 mL/min. The samples were dispersed in acetone, and quantitative analysis was performed using n-dodecane as an internal standard. The ester yield and the catalytic efficiency of the lipase were determined based on the GC data. The catalytic efficiency of the biocatalysts was expressed in terms of U/g biocatalyst, defined as the amount of ester (µmoles) synthesized per time unit (1 min) by 1 g of biocatalyst under the specific reaction conditions (36 °C, 1:1 substrate molar ratio, and 16 h reaction time). The term "efficiency" was used instead of "activity" because enzymatic activity is a property related to the rate of the enzyme-catalyzed reaction and would presuppose a linear increase in the product amount during the whole reaction time (16 h in our case). The ester productivity, expressed in g ester/g substrate (caproic acid) per time unit (1 h), obtained by converting 1 g of caproic acid using 1 g of biocatalyst under the specific reaction conditions (36 °C, 1:1 substrate molar ratio, and 16 h reaction time), was also determined. All esterification experiments were run in duplicate, and the mean values were considered. Sample analysis was performed in triplicate.
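The efficiency and productivity definitions above reduce to simple arithmetic once the GC assay has returned an ester amount. The sketch below illustrates them; the ester amount and substrate charge are hypothetical values chosen to land near the efficiencies reported later, and the molar mass is that of n-amyl caproate.

```python
MW_AMYL_CAPROATE = 186.29   # g/mol (pentyl hexanoate)

def efficiency_u_per_g(ester_umol: float, time_min: float, biocat_g: float) -> float:
    # 1 U = 1 umol ester formed per minute per gram of biocatalyst over the
    # full reaction time (an "efficiency", not an initial-rate activity).
    return ester_umol / (time_min * biocat_g)

def productivity_g_per_g_acid(ester_umol: float, acid_g: float) -> float:
    return ester_umol * 1e-6 * MW_AMYL_CAPROATE / acid_g

# Hypothetical GC result (n-dodecane internal standard): 17,000 umol ester
# after 16 h with 1 g of biocatalyst and 2 g of caproic acid charged.
print(f"{efficiency_u_per_g(17000.0, 16 * 60, 1.0):.1f} U/g")            # ~17.7 U/g
print(f"{productivity_g_per_g_acid(17000.0, 2.0):.2f} g ester / g acid")  # ~1.58
```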
Thermal Stability of the Biocatalysts

The sol-gel-entrapped biocatalyst CalB-SG and the native lipase CalB were incubated in n-amyl alcohol at different temperatures in the range of 40-80 °C. After 24 h, the biocatalysts were washed several times with acetone and centrifuged for substrate removal. Fresh substrates were added, and the biocatalyst's activity was determined in the esterification reaction of n-amyl alcohol with caproic acid, as described in Section 2.3.

Influence of the Reaction Medium

For the green synthesis of the aroma ester n-amyl caproate, the reaction was tested in solventless conditions and in the presence of 2-methyltetrahydrofuran, the biomass-derived green alternative to tetrahydrofuran. For comparison, we also studied the model esterification reaction of n-amyl alcohol with caproic acid in hexane media. The biocatalyst's activity and the ester yield were determined as described in Section 2.3.

Operational Stability of the Biocatalysts in the Batch Synthesis of n-Amyl Caproate

The sol-gel-entrapped biocatalyst CalB-SG, the native CalB lipase, and the commercial immobilized biocatalyst CalB-IM™ were studied in repeated reuse cycles of the model reaction of n-amyl alcohol with caproic acid, as described in Section 2.3. After each esterification cycle, the solid biocatalyst was separated from the reaction mixture by centrifugation and washed several times with acetone. Fresh substrates were then added, and the reaction was run under the same conditions as previously described.

Scanning Electron Microscopy (SEM)

Scanning electron microscopy was performed on a Quanta FEG 250 system (FEI, Hillsboro, OR, USA) using a secondary electron detector (SED). Powder samples were collected using a spatula and were fixed on SEM stubs with carbon tape. Studies were carried out in low-vacuum mode at 5 kV and a spot size of 1.5, to avoid the charging of powder particles.

Fluorescence Microscopy (FM)

Fluorescence microscopy was performed with an inverted microscope, the Leica DMI4000B (Leica, Munich, Germany), to investigate the distribution of the enzyme within the sol-gel nanostructures. For this purpose, the lipase from Candida antarctica B was dyed with fluorescein isothiocyanate (FITC), as described in the Pierce™ FITC labeling kit. The removal of unbound FITC from the obtained solution was carried out by centrifugation in a centrifuge tube with an Amicon Ultra-4 filter (10 kDa cut-off) and repeated washings with distilled water, until the absorbance of the collected fractions at the corresponding wavelength of 493 nm was approximately 0.1. The obtained solution (containing the FITC-labeled enzyme) was concentrated to 10 mg protein/mL by centrifugation in the Amicon Ultra-4 filter tube and was used for immobilization, as described in Section 2.2.3. For comparison purposes, a blank sol-gel matrix without the enzyme-FITC complex was also prepared.

Thermal Analysis (TGA/DTA)

Thermogravimetric measurements (TGA/DTA) were recorded using a TG 209 F1 Libra thermogravimetric analyzer (Netzsch, Selb, Germany) operating at a resolution of 0.1 µg, in a nitrogen atmosphere. Thermogravimetric curves were recorded from 30 to 1000 °C, with a heating rate of 10 °C/min. The average sample mass was 4.0 ± 0.2 mg; the samples were tested in open alumina crucibles (average mass 190 ± 1.0 mg).

Continuous Flavor Ester Synthesis in a Packed-Bed Reactor in a Solventless System

The continuous synthesis of n-amyl caproate was achieved in a packed-bed reactor (PBR) consisting of a stainless-steel column (dimensions: 150 × 4.6 mm) filled with approximately 1.5 g of sol-gel-entrapped Candida antarctica B lipase in consecutively stacked rows of biocatalyst and quartz sand (for improved hydrodynamics). The substrate solution was pumped through the bioreactor by means of an HPLC pump at controlled and constant flow rates. The stainless-steel column was placed in a thermostat to maintain an adequate reaction temperature. The amount of ester synthesized per time unit was assayed periodically by gas chromatography, up to the 7 h mark, to track ester formation and the stability of the system, as described in Section 2.3 for the batch reaction system. Sample analysis was performed in triplicate. The rate of production of the aroma ester n-amyl caproate was expressed in U/g, defined as the amount of ester (µmol) obtained per time unit (1 min) by 1 g of biocatalyst under the specified reaction conditions. The productivity of the solventless system was expressed as the amount of ester (g) formed per hour and per g of the sol-gel biocatalyst.
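For orientation, the continuous-flow productivity defined above can be estimated from the feed parameters as sketched below; every number (flow rate, feed concentration, conversion) is an assumed placeholder rather than a measured value.

```python
MW_ESTER = 186.29        # g/mol, n-amyl caproate

# All feed parameters below are assumed placeholders, not measured values.
flow_ml_min = 0.1        # volumetric feed rate of the solventless substrate mix
feed_mol_per_ml = 0.0043 # mol caproic acid per mL of an equimolar neat mixture
conversion = 0.80        # assumed steady-state fractional conversion
biocat_g = 1.5           # sol-gel biocatalyst packed in the column

ester_g_per_h = flow_ml_min * 60 * feed_mol_per_ml * conversion * MW_ESTER
print(f"productivity ~ {ester_g_per_h / biocat_g:.2f} g ester / h / g biocatalyst")
```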
Screening of Silanes with Epoxy Groups and Catalysts for Improved Sol-Gels

The introduction of organic groups to the silica precursors leads to so-called organically modified silica gels (ORMOSILs). While many studies have been published on the subject of organically modified silica gels, the relationship between the properties of the functionalized support and the enzyme activity is still not completely understood [13]. The catalytic efficiency of immobilized enzymes strongly depends on the local environment provided by the supporting material. Reetz et al. [28] pioneered the use of n-alkyltrimethoxysilanes as precursors in the sol-gel process, demonstrating their efficiency in the modification of the specific activity of immobilized lipases. The utilization of functionalized silane precursors for lipase immobilization is one of the most successful achievements of the sol-gel technique. The organically modified alkoxysilane (3-glycidoxypropyl)trimethoxysilane (GPTMS) possesses two distinct types of functional groups: methoxy groups, which can be hydrolyzed to silanol groups that form a silica network during the condensation process, and a pendant epoxy group through which an organic network can be formed [29]. The terminal epoxy group can easily undergo ring-opening reactions and can also be used as a coupling agent to covalently bind organic and inorganic compounds [30]. In our study, hybrid organic-inorganic sol-gel matrices incorporating epoxysilanes were investigated for the entrapment of lipases. Since epoxysilanes carry highly reactive functional groups, we assumed that they could also lead to the formation of covalent bonds with the protein molecules, allowing a more stable double immobilization of enzymes [1]. Another important factor in sol-gel formation is the nature of the catalyst. Specifically, basic catalysts lead to larger pores in the sol-gel matrix compared to acid catalysts [31]. For this reason, sol-gel immobilization was carried out with basic catalysts, such as sodium fluoride, potassium fluoride, and ammonia. Furthermore, the presence of additives in the sol-gel matrix formation process can decrease the internal tension and contraction of the material during gel formation [31] and can significantly increase the catalytic activity of the immobilized enzymes [32]. Ionic liquids have been proven to increase the effectiveness of sol-gel entrapment [31]. In our previous study [24], the ionic liquid OmimBF4 was proven to enhance the activity and enantioselectivity of the immobilized enzyme. Figure 1 reveals a strong correlation between the silane system of the sol-gel network and the catalytic efficiency of the entrapped biocatalyst.
It can be noted that although the KF catalyst facilitates the entrapment of a larger amount of enzyme, especially in the case of networks containing larger silane molecules such as GP(TMS)2MS and (GP)2TMDSO, a significant loss in the catalytic activity of the entrapped enzyme was observed. KF also seems to favor the formation of a tighter sol-gel network, leading to impeded mass transfer. Similarly, ammonia was found to be a weak catalyst in terms of both immobilization yield and the catalytic activity of the entrapped enzyme. Interestingly, the TMOS:(GP)2TMDSO silane system, although capable of entrapping a large amount of enzyme due to the rather spacious structure of the epoxysilane, proved to be disadvantageous in terms of enzyme activity, except when NaF was used as the catalyst. The results show that TMOS:GPTMS was the most advantageous silane network system when NaF was used as the catalyst. According to these results, the best biocatalyst for the synthesis of the targeted aroma ester was obtained by entrapment in a sol-gel matrix prepared with TMOS:GPTMS silane precursors at an equimolar ratio and with NaF as the catalyst.

Statistical Optimization of Sol-Gel Entrapment of Candida antarctica B Lipase

Response surface methodology, by means of a 3-level, 2-factor central composite design, was employed to optimize the immobilization parameters in the sol-gel entrapment of Candida antarctica B lipase, namely the silane molar ratio of the precursor silane system and the enzyme loading. Table 1 provides an overview of the experimental setup, as well as the predicted and observed data for the yield of the esterification reaction catalyzed by the lipase, immobilized according to the results presented in Section 3.1.1. Among the different immobilization reactions, the maximum ester yield (88%) was achieved in experiment no. 9, at a 2:1 silane ratio and 16.66 g/mol enzyme loading, while the minimum yield (only 28%) was recorded in experimental run no. 2, at the lowest enzyme loading value and the lowest organic content in the sol-gel network (3:1 silane ratio, 8.33 g enzyme/mol silanes). In all experiments, immobilization yields of between 78 and 90% were obtained.

Model fit

The fit of the quadratic model was examined using the R² value and was determined to be 0.998, indicating that up to 99.8% of the variability in the response could be explained by the model. The plot of the experimental values of ester yield (%) versus those calculated from the model equation (Figure 2) suggests a good fit between the data, with a correlation coefficient (predicted R²) of 0.989. This is in reasonable agreement with the adjusted R² of 0.997. Overall, these results revealed that the predicted and experimental values are in good agreement, implying that the empirical model derived from RSM can be used to effectively define the relationship between the factors and the response of the system in the enzymatic synthesis of n-amyl caproate by sol-gel-entrapped Candida antarctica B lipase.

Analysis of variance (ANOVA)

Statistical analysis of the experimental data was performed using the Design-Expert software from Stat-Ease, Inc.
(Minneapolis, MN, USA) and allowed for the estimation of the main effects and interaction effects of the investigated immobilization parameters. The statistical significance of the model terms was determined using an analysis of variance. The multiple regression coefficients were obtained by employing a least-squares technique to give a second-order polynomial model, Equation (2), for ester yield; the results of the statistical analysis are summarized in Table 2. In comparison, the interaction term AB was found to be of less significance. Values of p greater than 0.1 indicate that the model terms are not significant. The model also showed a statistically insignificant lack of fit relative to the pure error, having an F-value of 1.87. Factor B (enzyme loading) has the greatest influence on the system studied, as also highlighted by the perturbation plot. The most relevant variable for enzyme immobilization is the enzyme loading, with an estimated effect of 26.00, while the silane molar ratio has a slightly negative influence on the ester yield (−2.00). The results indicate the importance of working with high loads of the enzyme.

Process optimization

Data were visualized using contour plots, as well as 3D surface plots. Graphical optimization plots are given in Figure 3. As discussed previously, the silane ratio seems to have little influence on the catalytic efficiency of the entrapped lipase; however, since the immobilization yields were lower at higher ratios, we concluded that lower ratios are preferable for the immobilization of Candida antarctica B lipase through sol-gel entrapment. The optimization analysis revealed that maximum ester yields can be obtained for enzyme loadings above 20 g/mol silane precursors. Since the immobilization yields were generally below 90%, to account for the loss of enzyme in the immobilization procedure, we determined an enzyme loading of 25 g/mol of silane precursors as the optimum.

Point prediction and model validation

Based on the statistical analysis and optimization, model validation was carried out at the suggested TMOS:GPTMS silane ratio of 1:1 and an enzyme loading of 25 g/mol of silane precursors. The predicted ester yield was 83%, in good agreement with the experimentally observed value of 84%.
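The CCD fitting behind these statistics is ordinary least squares on a quadratic design matrix. The sketch below reproduces that workflow in NumPy with placeholder coded levels and yields (not the Table 1 data), as an illustration of how the coefficient estimates and R² arise.

```python
import numpy as np

# Coded factor levels for a 2-factor, 3-level design and placeholder yields:
# A = silane molar ratio, B = enzyme loading (both coded -1/0/+1).
A = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 0, 0], float)
B = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1, 0, 0], float)
y = np.array([30, 62, 84, 32, 64, 86, 28, 60, 88, 63, 65], float)  # ester yield, %

# Quadratic (second-order) response surface model:
# y = b0 + bA*A + bB*B + bAB*A*B + bAA*A^2 + bBB*B^2, fitted by least squares.
X = np.column_stack([np.ones_like(A), A, B, A * B, A**2, B**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("b0, bA, bB, bAB, bAA, bBB =", np.round(coef, 2))
print(f"R^2 = {r2:.3f}")
```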
Immobilization Procedure of Candida antarctica B Lipase and Immobilization Yield

The immobilization yields obtained for the entrapment of Candida antarctica B lipase in sol-gel matrices consisting of the binary silane system TMOS:GPTMS, in different molar ratios and with different enzyme loadings, are presented in Table 3. The immobilization procedure proved to be very effective, with high protein entrapment yields; the lowest yield obtained was 78%, for the TMOS:GPTMS silane system at a 3:1 molar ratio and the lowest enzyme loading. The increase in the organic content of the sol-gel network (TMOS:GPTMS silane ratio) led to significantly lower gelation rates and, accordingly, to an exponential rise in gel formation time, from a couple of seconds to 12 h (data not shown).

Catalytic Efficiency of the Immobilized CalB Lipase in Solventless Batch Esterification

The ester yield obtained in the enzymatic esterification of n-amyl alcohol with caproic acid using the sol-gel-entrapped CalB lipase under optimized conditions (TMOS:GPTMS silane ratio of 1:1 and enzyme loading of 25 g/mol silane) was 84%, close to that obtained with the native enzyme (87%) under the same reaction conditions, demonstrating that the lipase was not inactivated during the immobilization process. Compared with the catalytic efficiency of the native CalB lipase (106 U/g), an expected decrease was observed after immobilization, which can be explained by the in-depth distribution of the enzyme throughout a much larger support matrix. However, the efficiency remained at satisfactory and economically viable levels (18 U/g), very close to that of the commercial enzyme preparation, CalB-IM™ (20 U/g).
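Normalizing the efficiency to the actual enzyme content makes the comparison with the native lipase more direct, as the short calculation below shows; the enzyme content of the preparation (~170 mg/g) is the figure cited in the thermal stability discussion that follows.

```python
# Values quoted in the text; the enzyme content of the sol-gel preparation
# (~170 mg enzyme per g) is cited in the thermal stability discussion below.
u_per_g_native = 106.0       # U per g of free CalB
u_per_g_prep = 18.0          # U per g of CalB-SG preparation
enzyme_g_per_g_prep = 0.17   # g enzyme per g of preparation

print(f"{u_per_g_prep / enzyme_g_per_g_prep:.0f} U per g of entrapped enzyme")
# ~106 U/g: per unit of enzyme, the entrapped lipase is about as efficient as
# the native one; the apparent drop per gram reflects dilution by the silica.
```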
Thermal Stability of the Immobilized Biocatalyst

The temperature stability of immobilized enzymes is one of the advantages of sol-gel entrapment, because the conformational flexibility of the enzyme is reduced by encapsulation inside the xerogel matrix [15]. Thus, the temperature stability of the lipase from Candida antarctica B (CalB) was studied via preincubation in n-amyl alcohol (also used as a reaction substrate) at temperatures ranging from 40 to 80 °C. The influence of temperature on the ester yield is presented in Figure 4, compared to the native biocatalyst. The activities of the sol-gel-immobilized lipase remained constant at all incubation temperatures, displaying excellent thermal stability in the studied temperature range of 40-80 °C. In contrast, the native lipase lost up to 15% of its activity as the incubation temperature increased to 80 °C. The same tendency was noticed for the catalytic efficiency values, which were almost constant, in the range of 11-12 U/g, for CalB-SG, while showing a slight decrease from 104 U/g at 40 °C to 84 U/g at 80 °C for the native CalB. The significant differences in catalytic efficiency between the native and immobilized CalB lipases are due to the much lower enzyme amount effectively introduced into the reaction system when using an immobilized biocatalyst, which contains only about 170 mg enzyme/g preparation, since a major part of the immobilized material is silica xerogel. While both the native and the immobilized lipase were used successfully over the tested temperature range, the immobilized enzyme proved more stable at elevated temperatures (80 °C) than the native lipase. The increase in incubation temperature had a beneficial effect on ester production when using the sol-gel-immobilized lipase, as can be seen from the slightly higher ester yields, probably due to improved mass transfer.

Influence of the Reaction Medium

Most of the esterification processes catalyzed by lipases are carried out in non-polar organic solvents, since in this way water can be displaced from the zone of the catalytic center while the equilibrium is shifted toward esterification [33]. In the search for a greener synthesis method for the food aroma ester n-amyl caproate, we carried out the reaction in both organic solvent and solventless systems. Along with n-hexane, the most commonly used non-polar solvent, the green biomass-derived solvent 2-methyltetrahydrofuran (2-MeTHF) was also tested. However, using this solvent for the esterification reaction significantly lowered the catalytic efficiency (Figure 5a), and lower yields (Figure 5b) were obtained than in n-hexane or in the solventless medium. The commercial immobilized lipase CalB-IM™ exhibited similar behavior, losing more than 50% of its activity in 2-MeTHF compared to n-hexane and the solventless system. The sol-gel-entrapped lipase demonstrated catalytic efficiency comparable to the commercial biocatalyst, proving that it could be used successfully for this process, with the additional advantage of reusability in comparison with the native lipase. The reaction in the solventless system led to slightly higher yields than in n-hexane and allowed us to carry out subsequent experiments without any solvent.
Operational Stability of the Biocatalysts in the Batch Synthesis of n-Amyl Caproate

The most important feature of enzymes that can be improved by immobilization is their stability upon reuse in consecutive reaction cycles, a key requirement for industrial application. Resistance to mechanical and chemical degradation by solvents and/or substrates (alcohols and acids), as well as the compatibility of the sol-gel with the reaction medium (hydrophobic in the case of the n-hexane solvent, but more hydrophilic in solventless conditions, especially when using excess amounts of alcohol), were important features of the chosen immobilization technique. Moreover, since the designed sol-gel biocatalyst was further used for the continuous-flow synthesis of n-amyl caproate in a packed-bed reactor, it was essential to maintain the integrity of the sol-gel matrix, not only under shear stress but also under the backpressure present in this type of reactor, to prevent enzyme leaching. A reusability study was carried out to demonstrate that the immobilized biocatalyst is robust and maintains its stability over many reaction cycles.
The operational stability of Candida antarctica B lipase, immobilized by the optimized sol-gel entrapment procedure using a binary system of silane precursors (TMOS:GPTMS at a 1:1 molar ratio and an enzyme loading of 25 g enzyme/mol silane), was studied in repeated batch cycles of the esterification of n-amyl alcohol with caproic acid in a solventless system. The results are shown in Figure 6, in comparison to the commercially available immobilized enzyme preparation CalB-IM™, derived from the same lipase. The lipase immobilized by the sol-gel technique proved its superiority, preserving more than 99% of its initial activity even after 10 reaction cycles. Conversely, the relative activity of the commercially available CalB-IM™ lipase decreased significantly under the selected process conditions, losing up to 80% of its initial value after 7 reuse cycles. The ester yields in the solventless batch synthesis showed the same tendency: in the case of CalB-SG, they were almost constant during the 10 reuse cycles, in the range of 82-88% ± 1.5%, while for the reference commercial biocatalyst CalB-IM™, a decrease from 88% in the first cycle to 17% after 10 uses was noticed. The catalytic efficiencies per g of biocatalyst achieved in the solventless synthesis of n-amyl caproate in batch mode were 17.6-18.1 U/g for CalB-SG and 3.9-19.2 U/g for the resin-adsorbed lipase CalB-IM™. The ester productivity relative to the amount of substrate was also determined, as described by Li et al. [34]. As there are two substrates in the esterification reaction, the results were expressed in relation to the natural caproic acid. As expected, the ester productivities in the repeated batch processes were in a close range for CalB-SG, between 1.62 and 1.71 g ester/g acid, over 10 batch reaction cycles. At the same time, the reference biocatalyst CalB-IM™ exhibited a consistent decrease in productivity, dropping from 1.74 g ester/g acid to 0.35 g ester/g acid at the end of only 7 batch reaction cycles. The high reusability of the sol-gel-immobilized biocatalyst is mainly due to the inherent properties of silica gels. The xerogels obtained in the sol-gel immobilization process are poly-siloxane glasses with high mechanical and chemical stability, these being the main advantages of such systems compared to resins (as in CalB-IM™) or other supports commonly used for the immobilization of enzymes [35].
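Expressed as simple retention percentages, the productivity figures quoted above give the following comparison; the pairing of the higher value with the first cycle is an assumption for illustration.

```python
# Reuse-cycle productivity figures quoted above (g ester / g caproic acid).
# Assumption: the upper value of each range corresponds to the first cycle.
cycles = {
    "CalB-SG (cycle 1 -> 10)": (1.71, 1.62),
    "CalB-IM (cycle 1 -> 7)":  (1.74, 0.35),
}
for name, (first, last) in cycles.items():
    print(f"{name}: {100.0 * last / first:.0f}% of initial productivity retained")
```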
Scanning Electron Microscopy (SEM)
The morphology and particle size distribution of the sol-gel-entrapped lipase were analyzed and measured using scanning electron microscopy. As described in the general procedure for immobilization by sol-gel entrapment (Section 2.2.3), the resulting xerogel was crushed in a mortar. The brittle bulk material was fragmented into smaller particles, generally in the range of 1-70 microns. The SEM micrographs are presented in Figure 7, at two different magnifications of 5000× and 10,000×, respectively. SEM images of the CalB-SG sample presented in Figure 7b show particles that have an irregular shape and a wide size distribution in the range of 13-57 microns. The block shape morphology, with smooth surfaces and sharp edges, is characteristic of brittle fractures. In addition, tiny powder particles, generally smaller than 4 microns, can be seen on the surface of the larger particles. These small particles tend to form agglomerations, as can be seen from the micrographs. Similar findings were reported by Erasmus et al. [36]. The morphology of the blank sol-gel matrix (without enzyme) in Figure 7a is similar to the preparation containing the entrapped enzyme, with a particle size distribution in the range of 15-67 microns. For the CalB-SG samples recovered after reuse in more than 10 reaction cycles (Figure 7c), and after incubation in n-amyl alcohol at 80 °C for 24 h (Figure 7d), a morphology similar to the initial CalB-SG preparation was observed; it retains the block shape structure, which correlates well with the excellent activity that remained constant after thermal stress and reuse. Only a small difference related to the edges of the particles could be detected. As can be observed in the SEM micrographs, the edges are less sharp and more rounded/smoother compared to the initial preparation, probably due to temperature stress or repeated usage. Furthermore, fine particles in the range of 15-20 microns are missing, which might be caused by the repeated washing of the powder during the recycling and reuse process.
Fluorescence Microscopy (FM)
Fluorescence microscopy was used to observe the distribution of the enzyme within the sol-gel network. Since Candida antarctica B lipase does not possess inherent fluorescence, it was labeled with fluorescein isothiocyanate (FITC). The fluorescent image of the FITC-enzyme complex immobilized in the TMOS:GPTMS matrix was recorded, as shown in Figure 8b, compared to the blank matrix in Figure 8a. The image of the xerogel preparation containing the FITC-enzyme complex showed fluorescence (which is absent in the control xerogel), with the enzyme distributed on the surface and in the inner part of the sol-gel matrix, indicating that the enzyme is distributed uniformly throughout the immobilized preparation.
Thermal Analysis (TGA/DTA)
The weight-loss curve of the sol-gel-entrapped biocatalyst obtained by thermogravimetric analysis (TGA) is presented in Figure 9. The weight loss percentage of the three regions observed (30-280 °C, 280-530 °C, and 530-990 °C) is given in Table 4. In the first region (30-280 °C), a weight loss of 1.7-5.2% is associated with the evaporation of water and some volatile compounds used in the sol-gel process. The sol-gel preparation containing the entrapped lipase showed a weight loss 3.6% higher than that of the matrix without enzyme, showing that there is a significant amount of water remaining in the preparation that is important for the preservation of the active catalytic conformation of the biocatalyst. This is correlated with the good biocatalytic efficiency obtained for our CalB-SG preparation. The difference of 6.1% in weight loss between the sol-gel preparation and the blank sol-gel matrix, SG, in the 280-530 °C interval can be assigned to the entrapped protein. The DTG curves of the CalB-SG and the blank SG matrix are quite similar, showing a peak at 450 °C compared to that of the native lipase at only 350 °C, indicating greater protection against increasing temperatures after immobilization, as shown in our previous studies [15,24]. The thermogram of the native lipase in Figure 9c was completely different in comparison with that of the sol-gel preparation (Figure 9b). The weight loss in the first region was higher (approximately 21.6%), showing the high water content of the native lipase, even though the unbound lipase is a solid material.
The thermal decomposition of the organic part of the native lipase is completed at 530 °C, showing a significant weight loss of 48.7%. The protein content measured by the Bradford method for the native lipase was 7%, the 41.7% difference representing organic compounds that were added to the lipase in order to stabilize it. The residual mass of 17.13% indicates the inorganic compounds added to the native lipase as stabilization additives. The commercial CalB-IM™ biocatalyst shows a significant weight loss of 89% up to 530 °C (Figure 9d), meaning that the immobilization support is an organic ion exchange resin. By comparison, the sol-gel matrix is mostly inorganic.
Continuous Flavor Ester Synthesis in a Packed Bed Reactor in a Solventless System
The immobilization of Candida antarctica lipase type B by entrapment in an epoxy-functionalized hybrid sol-gel led to an enzyme loading of about 170 mg/g and a protein loading of 13.3 mg/g, respectively. The continuous flow esterification of caproic acid and n-amyl alcohol (Figure 10) was performed in a solventless system, using a packed-bed reactor with 1.5 g immobilized enzyme (20 mg of protein); the main parameters and their interactions in the enzymatic synthesis of n-amyl caproate in a continuous flow regime were evaluated. The investigated reaction parameters were substrate concentration, flow, and temperature, each at three distinct levels. The levels of the three esterification reaction parameters are given in Table 5.
The system response was evaluated in terms of both ester yield (Figure 11 and Figure S2 in the Supplementary Materials) and productivity (Figure 12 and Figure S3 in the Supplementary Materials). At the given enzyme/substrate ratio (0.03 g enzyme/g caproic acid, 3.38 g enzyme/mol caproic acid), a maximum ester yield of 77% was achieved at a substrate molar ratio of 1:1, a flow of 0.2 mL/min, and a temperature of 80 °C. Although the yield in these conditions was not as high, we obtained excellent production rates (r_flow of 312 U/g) and productivities (3.5 g ester/h per gram of biocatalyst). The maximum productivity of 4.15 g ester/h per gram of sol-gel biocatalyst, however, was obtained at elevated flow rates and slightly different reaction conditions: a substrate molar ratio of 2:1, a flow of 0.4 mL/min, and a temperature of 80 °C, corresponding to a maximum production rate of 370 U/g. Comparatively, Thomas et al. [37] obtained production rates of 250 U/g in the continuous flow kinetic resolution of secondary alcohols, and a conversion rate of up to 57%. It was also found that the ester yield did not vary, beyond the experimental error, over a period of 7 h for the synthesis of n-amyl caproate in the continuous flow packed-bed reactor. Thus, we concluded that the catalyst efficiency did not diminish over 7 h of biocatalysis, even without the removal of water.
Concerning the production of aroma esters on a larger scale, the use of packed-bed reactors containing immobilized enzymes is more cost-effective than traditional batch-mode reactors. Research in the field of the miniaturization of packed-bed reactors has also revealed promising results, such as improved mass transport, leading to higher yields and shorter reaction times [38]. Table 6 shows examples of biocatalytic ester synthesis reactions carried out in different types of packed-bed reactors with immobilized enzymes, as reported by other groups, in comparison with our results. Shieh et al. [39] investigated the synthesis of hexyl laurate in a packed-bed reactor (25 cm × 0.25 cm) for the esterification of lauric acid and 1-hexanol, catalyzed by immobilized Rhizomucor miehei lipase (Lipozyme IM-77, 1.5 g). They obtained high conversion rates (97%) using a 1:2 alcohol/acid substrate ratio in hexane solvent, at a flow rate of 4.5 mL/min and 45 °C. The maximum production rate was 437.6 µmol/min. Subsequently, Shieh et al. [40] used Rhizomucor miehei lipase, immobilized by adsorption onto an anionic resin (Lipozyme IM-77), in a packed-bed reactor for the synthesis of hexyl laurate in solventless media. Under optimal conditions (54 °C and a flow rate of 0.5 mL/min) at a lauric acid concentration of 0.3 M, they obtained an ester production rate of about 87 µmol/min and a maximal yield of 60%. Woodcock et al. used Novozym 435 for the synthesis of alkyl esters in a miniaturized continuous-flow packed-bed reactor (30 cm × 1.65 mm). A substrate solution of 0.2 M in hexane (molar ratio of 1:1 acid/alcohol) was pumped through a packed bed of Novozym 435 (approximately 100 mg) at 1 µL/min and 23 °C, yielding alkyl esters at conversions of >99% over a two-hour period. They attained a productivity rate of 2.04 mg ester/hour with 100 mg of biocatalyst, at a flow rate of 1 µL/min [41]. Wang et al. [38] used a microfluidic chip packed-bed reactor (1 cm × 500 µm × 75 mm) filled with 90 mg of Novozym 435 to synthesize caffeic acid phenylethyl esters (CAPE) by the transesterification of alkyl caffeates with phenylethanol, using the ionic liquid BmimTf2N as the solvent. Under optimal conditions, they obtained up to 93% ester yield at a flow rate of 2 µL/min and a 2.5 h residence time. The maximum productivity achieved was 0.027 µmol/min at a flow rate of 20 µL/min, 60 °C, and 3 mg/mL substrate concentration (molar ratio of 1:40 ester/alcohol).
Although we did not surpass a substrate conversion of 77% with the given enzyme loading (0.02 g enzyme/g total substrate, at 1:1 molar ratio) and experimental setup, the ester yields obtained at this stage of the continuous flow experiments in the solventless enzymatic synthesis of the aroma ester n-amyl caproate are very promising and could potentially be further improved by the use of elongated packed-bed reactors and longer residence times of the substrates. We obtained very good ester production rates (up to 370 U/g), much higher than those usually achievable for ester synthesis reactions in solventless systems and comparable to those accomplished in solvent-containing systems. For example, Shieh et al., using about the same amount of immobilized lipase and a comparable flow rate, reported much lower ester productivity in the solventless synthesis of hexyl laurate. In addition, the productivities attained in our work are much higher than in the case of microchannel reactor systems, which benefit from improved mass transport and system performance.
Conclusions
Candida antarctica B lipase immobilization through entrapment with epoxysilanes was proven to be an improved sol-gel technique for rendering highly stable biocatalysts, appropriate for the clean production of food aroma esters, particularly n-amyl caproate, under mild reaction conditions (36 °C, solventless system). The stability of the biocatalyst was investigated under thermal stress and repeated usage, in comparison to the native enzyme and a commercial preparation of the same enzyme. Under the investigated conditions, the sol-gel biocatalyst exhibited excellent catalytic efficiency and robustness against external stressors. Furthermore, it proved highly cost-effective by maintaining 99% of its initial activity, even after 10 reuse cycles. The continuous flow synthesis of the aroma ester n-amyl caproate catalyzed by the entrapped CalB lipase was evaluated in terms of flow, temperature, and alcohol:acid molar ratio, leading to high productivity (4.15 g ester/h) at the optimal values of these parameters: a substrate molar ratio of 2:1, a flow of 0.4 mL/min, and a temperature of 80 °C, respectively.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods11162485/s1, Figure S1: Perturbation plot of experimental factors A (TMOS:GPTMS silane ratio) and B (enzyme loading) on the system response (ester yield, %) in the enzymatic synthesis of n-amyl hexanoate catalyzed by sol-gel-entrapped lipase from Candida antarctica B; Figure S2: Ester yield (%) in the continuous enzymatic synthesis of n-amyl hexanoate in a solventless system catalyzed by sol-gel-entrapped Candida antarctica B lipase at: (a) a substrate ratio of 2:1 and a temperature of 60 °C; (b) a substrate ratio of 2:1 and a flow rate of 0.2 mL/min; Figure S3: Productivity in the continuous enzymatic synthesis of n-amyl hexanoate in a solventless system catalyzed by sol-gel-entrapped Candida antarctica B lipase at: (a) a substrate ratio of 2:1 and a temperature of 60 °C; (b) a temperature of 80 °C and a flow rate of 0.2 mL/min; Table S1: Immobilization parameters of sol-gel entrapment of Candida antarctica B lipase used in the CCD optimization setup.
Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
2022-08-20T15:03:56.174Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "9c9279dbd55a692916f14742bd276cbd0e38b32c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/11/16/2485/pdf?version=1660732400", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9da412d6e11fb646b61be9ca2991392167efe08f", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
203374797
pes2o/s2orc
v3-fos-license
A cross-scale assessment of productivity-diversity relationships
Biodiversity and ecosystem productivity vary across the globe and considerable effort has been made to describe their relationships. Biodiversity-ecosystem functioning research has traditionally focused on how experimentally controlled species richness affects net primary productivity (S→NPP) at small spatial grains. In contrast, the influence of productivity on richness (NPP→S) has been explored at many grains in naturally assembled communities. Mismatches in spatial scale between approaches have fostered debate about the strength and direction of biodiversity-productivity relationships. Here we examine the direction and strength of productivity's influence on diversity (NPP→S) and of diversity's influence on productivity (S→NPP), and how this varies across spatial grains, using data from North American forests at grains from local (672 m²) to coarse spatial units (median area = 35,677 km²). We assess relationships using structural equation and Random Forest models, while accounting for variation in climate, environmental heterogeneity, management, and forest age. We show that relationships between S and NPP strengthen with spatial grain. Within each grain, S→NPP and NPP→S have similar magnitudes, meaning that processes underlying S→NPP and NPP→S either operate simultaneously, or that one of them is real and the other is an artifact. At all spatial grains, S was one of the weakest predictors of forest productivity, which was largely driven by biomass, temperature, and forest management and age. We conclude that spatial grain mediates relationships between biodiversity and productivity in real-world ecosystems and that results supporting predictions from each approach (NPP→S and S→NPP) serve as an impetus for future studies testing underlying mechanisms.
Significance statement
The relationships between diversity and productivity are central to efforts explaining global variation of biodiversity and rates of carbon sequestration. However, little is known about the relative importance of biodiversity as the driving force, or as the consequence, of ecosystem-level productivity. Our analysis of a comprehensive database of North American forests reveals that biodiversity and productivity can be both cause and effect and that their relationship strengthens with spatial grain. Importantly, we show that environmental context is more important in determining biodiversity and productivity than either biodiversity or productivity alone. Productivity-diversity relationships emerge at multiple spatial grains, which should widen the focus of national and global policy and research to larger spatial grains.
Table 1. Overview of hypotheses predicting grain dependence of relationships between net primary productivity (NPP) and species richness (S).
I. Direction: NPP→S and S→NPP. Mechanism: spatially asynchronous demographic stochasticity impacts small populations (or small grains) and averages out over large grains. Prediction: both NPP→S and S→NPP strengthen towards coarse grains.
II. Direction: NPP→S. Mechanism: at larger grains, higher NPP is associated with increased heterogeneity and/or dissimilarity of local patches, allowing for greater regional coexistence. Prediction: NPP→S strengthens towards coarse grains (28,35,36).
III. Direction: NPP→S. Mechanism: a statistical interaction between NPP and grain in their effect on S emerges as a consequence of increasing occupancy with NPP.
Prediction: NPP→S weakens towards coarse grains (37).
IV. Direction: NPP→S. Mechanism: at very large grains (thousands of km² and larger), high productivity increases occupancy and population size, thus increasing the probability of reproductive isolation and speciation. Prediction: NPP→S strengthens towards coarse grains.
V. Direction: S→NPP. Mechanism: stochastic sampling effects dominate at small grains, resource partitioning at larger grains ('spatial insurance'), and their relative magnitude determines the grain dependency. Prediction: both strengthening and weakening possible (39,40).
VI. Direction: S→NPP. Mechanism: functionally redundant species at the regional grain can compensate for low richness at local grains. Prediction: S→NPP strengthens towards coarse grains.
VII. Direction: S→NPP. Mechanism: with incomplete compositional turnover, proportional changes in larger-grain richness are always less than proportional changes in smaller-grain richness, such that the explanatory power of richness on changes in functioning decreases with spatial scale. Prediction: S→NPP strengthens towards coarse grains until species richness saturates (42).
Spatial patterns in productivity (NPP) and richness (S) emerged at coarser spatial grains, with higher S and NPP usually observed in the eastern USA than in the western USA (Fig. 1). Biomass, a time-integrated measure of NPP that also influences diversity, exhibited similar patterns (Fig. 1).
Figure 2 caption: bivariate relationships between S and NPP at each spatial grain; panels A and B show NPP as a response to S, panels C and D show NPP as a predictor. Solid lines are least-squares linear regressions fitted at each grain; shaded areas are standard errors. Analyses were performed using stratified random samples.
Structural Equation Models (SEM). We examined relationships between species richness and net primary productivity (NPP) across spatial grains using two SEMs for each spatial grain: the first (S→NPP) testing the direct effect of S on NPP and the indirect effect of NPP on S (via biomass), and the second (NPP→S) testing both the direct and indirect effects of NPP on S (Fig. 3). In both SEMs, environmental variables (e.g., mean annual precipitation (MAP), mean annual temperature (MAT), temperature seasonality, and elevation range), size of the species pool, forest age, and management were used to explain variation in S, biomass, and NPP. At the intermediate and coarse grains, we also included area (of each spatial unit) to account for variation in species richness due to the effects of area (see Methods).
Figure 3 caption: SEMs linking S, biomass, and NPP with environmental variables (mean annual precipitation, mean annual temperature, temperature seasonality, and elevation range), size of the species pool, forest age, and management, in forests across the contiguous USA at three spatial grains. Both models fit the data well at all spatial grains (P-value of the Chi-square test > 0.1; Table S1). Boxes represent measured variables and arrows represent relationships among variables. Solid blue and red arrows represent significant (P < 0.05) positive and negative standardized path coefficients, respectively, and their width is scaled by the corresponding standardized path coefficient. Solid and dashed gray arrows represent non-significant (P > 0.05) positive and negative standardized path coefficients, respectively.
Figure 3 caption (continued): AGE is forest age, MANAGED is forest management, ANN.PREC is mean annual precipitation, ANN.TEMP is mean annual temperature, TEMP.SEAS is temperature seasonality, ELEV.RANGE is elevation range, S.POOL is the regional species pool, and AREA is area. S, BIOMASS, NPP, and AREA were natural log transformed prior to analysis.
Figure 4 caption: standardized path coefficients of the two SEMs across spatial grains. Points are standardized path coefficients and solid lines are 95% confidence intervals. Both models fit the data well for all spatial grains (P-value of the Chi-square test > 0.1; Table S1).
The strength of the relationships between S and NPP increased with increasing spatial grain (Fig. 3 & 4). At the fine spatial grain, we found a weak direct effect of S on NPP. Overall, the SEMs show that the productivity-diversity relationship increases in strength with spatial grain, and both relationships (S→NPP and NPP→S) explain similar amounts of variation, albeit with some differences in the direct and indirect effects. At fine spatial grains, our SEMs show greater support for a strong indirect effect of NPP on S via biomass, but do not support the inverse effect of S on NPP. Towards coarser spatial grains, our SEMs do not conclusively show stronger support for one direction of the relationship between S and NPP (Table S1) or between biomass and NPP (Fig. S3D, E, and F; Table S1).
To explore the relationships between S and NPP further, and to provide an assumption-free alternative to the SEMs, we fitted two random forest models for each of the three spatial grains: one with NPP and the other with S as response variables. We found that NPP was an important predictor of S at the fine and intermediate spatial grains (Fig. 5A), with unimodal and linear effects, respectively (Fig. 5), but was less important relative to other predictors at the coarse spatial grain. For S, we found that species pool, MAT, MAP, and forest age were the best predictors at coarse grains. For NPP, we found that species richness was one of the weakest predictors relative to other covariates. Figure 5 shows the variable importance for S and NPP at the three spatial grains, measured as the mean decrease in squared error caused by each of the predictors.
The first important result is the similar magnitude of the S→NPP (18) and NPP→S (10, 26, 43) relationships at all grains. This reflects, in part, that both productivity and species richness have many environmental and geographical drivers in common (44), which complicates distinguishing correlation from causation, even when using SEMs (45, 46). There are two possible interpretations of this result: (i) it may indicate that diversity's causal effects on productivity and productivity's causal effects on diversity operate simultaneously, which was suggested by (18) but never demonstrated on observational data; or (ii) only one of the relationships is real and the other is an artifact. Without experiments that manipulate diversity in ways that mimic biodiversity change (i.e., species gains and losses) in real-world ecosystems (11, 48-50), we see little hope for resolving this with contemporary data and approaches.
Our second important result is that both S→NPP and NPP→S strengthen from the fine to the intermediate grain, and in the case of the SEM both relationships continue strengthening towards the coarsest grain. While grain-dependent shifts are often expected (Table 1), this had not been shown previously with empirical data for S→NPP using spatial grains coarser than several hectares (25, 31, 32).
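As a schematic of the random forest analysis described above, the following Python sketch (using scikit-learn and synthetic placeholder data, not the actual forest dataset) fits a model for one response and reports variable importance as the mean decrease in squared error; the predictor names mirror the SEM covariates, and an analogous model with S as the response would drop S from the predictors.

```python
# Minimal sketch of the per-grain Random Forest variable-importance analysis.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
predictors = ["S", "BIOMASS", "ANN.TEMP", "ANN.PREC", "TEMP.SEAS",
              "ELEV.RANGE", "S.POOL", "AGE", "MANAGED"]
X = pd.DataFrame(rng.normal(size=(1000, len(predictors))), columns=predictors)
y_npp = rng.normal(size=1000)  # placeholder response (NPP)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y_npp)
importance = pd.Series(rf.feature_importances_, index=predictors)
print(importance.sort_values(ascending=False))  # mean decrease in squared error
```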
If the S→NPP direction is the real causal one, then our results from SEM and RF analyses support several theoretical predictions (Table 1) and give further impetus to efforts quantifying biodiversity effects in naturally assembled ecosystems at broad spatial scales (51). If the NPP→S direction is the real causal one, then our results likewise accord with macroecological expectations (Table 1). The third possibility is that both NPP→S and S→NPP are real and that they operate simultaneously, as suggested by our SEM results. In this case, we are unaware of any theory that considers how this reciprocal relationship would be expected to change with increasing spatial grain. The one caveat applicable to interpreting any direction of diversity-productivity relationships is that of demographic stochasticity (mechanism I in Table 1), which may weaken both NPP→S and S→NPP, or their synergistic interplay, at fine spatial grains. In our study, the strong local effect of demographic stochasticity appears plausible given the small area of the forest plots (0.067 ha) and small population sizes (12.24 ± 0.02 trees per plot; range = 1-157 trees per plot) therein. This would suggest that temporal changes at the local scale are strongly shaped by demographic stochasticity.
The third key result is that other predictors, such as temperature and biomass, were particularly influential in all our analyses. That is, the grain dependence of the relationship between S and NPP was coupled with a clear increase in the effect of annual temperature (but not precipitation) on both S and NPP towards coarse grains, which supports the notion that either temperature-dependent diversification (55, 56) or ecological limits (43) shape diversity at these spatial grains. The consistently weak effect of precipitation is expected since we focus on forests, which only grow above certain precipitation thresholds (57). Second, we found a positive, indirect effect of NPP on species richness via forest biomass at the fine spatial grain, which supports multiple hypotheses (Table 1) such as the view that higher ecosystem productivity enhances species diversity by enabling larger numbers of individuals per species to persist due to lower extinction rates (35,37,58), particularly at fine grains where stochastic extinctions occur. At the same time, S was one of the weakest predictors of ecosystem functioning at any spatial grain.
Our results reveal that mechanisms associated with one direction of diversity-productivity relationships do not preclude the other. The positive indirect effect of NPP on S via biomass at the fine spatial grain provides support for the more individuals hypothesis (37), although it is typically tested at regional to continental spatial scales. Increasingly, macroecological mechanisms such as speciation gradients (60) and water-energy variables are being examined in small-grain experimental grasslands to explore their role in mediating niche-based processes (61) and biodiversity effects (62), rather than as a crucial mechanism in determining spatial variation in ecosystem functioning at large spatial scales. Rather, multiple mechanisms may underlie the grain dependency of diversity-productivity relationships across spatial grains (Table 1). These recent developments in BEF research and macroecology suggest that conceptual integration between these two disciplines is just beginning (65), yet further efforts to bridge disciplinary gaps are essential to deepen current understanding of mechanisms that underpin the shifts in diversity-productivity relationships across spatial scales.
To conclude, we show that the relationship between diversity and productivity strengthens toward coarse grains. This result is in line with expectations from both BEF theory and some (but not all) expectations from macroecological studies on NPP→S, and highlights the potential of demographic stochasticity to distort diversity-productivity relationships at fine grains. Moreover, we find similar support for both directions of diversity-productivity relationships across spatial grains, revealing that biodiversity and productivity can be both cause and effect. Future research on this relationship needs to move from fine-grained studies towards coarser spatial grains to capture the impacts of anthropogenic biodiversity change on ecosystem function.
For the coarser grains, counties were selected with a probability proportional to 1/sqrt(forest area + 1), which more likely selects small rather than large counties. This was because small counties can be merged to approach the grain of the larger spatial units; merging proceeded until the intermediate grain dataset was obtained, and then until it reached 98 merged spatial units (coarse grain dataset) (Fig. 1). Although the algorithm substantially reduced variation in area within both spatial grains (Fig. S9), it did not eliminate the variation entirely, and thus we still used area as a covariate in the statistical analyses at the intermediate and coarse grains.
Species richness (S). For all spatial grains, we estimated diversity as species richness (S) because it is the most commonly used and best understood metric of biodiversity, although other measures of diversity may be better predictors of net primary productivity (67-69). We extracted S at the fine spatial grain from forest plots in which trees were identified to species level. In total, our final dataset included 344 woody species and 93,771 plots. Species names were standardized using the Taxonomic Name Resolution Service (TNRS) (74), following the protocol described in (75). We included hybrid forms but excluded any names that could not be resolved to the species level.
Filtering of species occurrences. We restricted our analyses to woody species occurring in forest. To this end, we initially filtered the BONAP data to species classified as 'trees' in BONAP's taxonomic query system, as well as shrubs and subshrubs (77), except for 37 species without such data, for which we instead inferred woodiness from online searches or assumed resemblance among congeneric species. We also filtered out occurrences unlikely to be forest occurrences, as inferred from independent species occurrences within forested pixels (see Supplementary Note). To make species richness data internally consistent across the different spatial grains, we added a further 6,593 quality-vetted county-level forest occurrences of woody species from FIA plot records to the 282,991 occurrences in the taxonomically harmonized BONAP dataset.
Net primary productivity (NPP). This was measured as the change in tree C over time due to growth (g C m⁻² y⁻¹), and is the sum of aboveground C increment of living trees between two measurements; it conservatively excludes recruits and dead trees (67). Tree-level carbon was estimated by multiplying tree-level biomass (see below) by 0.48, while recognizing that gymnosperms may have a higher carbon content than this average value. Due to insufficient data on species' dispersal abilities, we assumed that dispersal probability between focal units and species' occurrences would decay with great-circle distance between the respective regions' centroids. We explored five alternative exponential distance-decay functions with different scaling parameters.
All of the variables used in our analyses are listed and summarized in Table 2.
Stratified random sampling. Some parts of the study region are environmentally homogeneous and extensive, while other parts are environmentally unique and small. We employed stratified random sampling (87) to (1) sample the full range of environmental conditions, (2) prevent excessive statistical leverage of the large number of data points from homogeneous areas, and (3) reduce spatial pseudoreplication (autocorrelation) by increasing the geographic distance between data points. We first identified 11 strata at the fine and intermediate grains, respectively, using multivariate regression trees with S, NPP and biomass as response variables and all covariates as predictors (Fig. 1). We then took a random and proportionally sized sample of spatial units from each stratum (fine grain, N = 1,000; intermediate grain, N = 500). We did not use stratified random sampling at the coarse spatial grain because the number of spatial units was small (N = 98) and spatial autocorrelation was low. The spatial locations of the stratified samples are in Fig. S1. All of the analyses presented here, as well as our main conclusions, are based on these stratified sub-samples of the data.
To assess the relative importance of predictors within models, we fitted Random Forest models (RFs) (33). The results from SEMs provide insight into differences among models (i.e., between the two causal pathways per spatial grain, and among spatial grains) and into the effects of covariates on S, NPP, and biomass (except for area at the fine spatial grain) (Figure S2).
Model fit can only be tested on unsaturated models, i.e., those that have at least one missing path. Therefore, we removed the path with the lowest standardized path coefficient from the model. As SEMs had an equal number of paths, we could compare model fit across all models within each spatial grain using their unadjusted R² values. After excluding the additional paths, path coefficients of S, NPP, and biomass remained qualitatively the same, and model fit to the data was still accepted (Chi-square test; P > 0.05). This indicates that the models are identifiable and their results are robust. Therefore, we did not further reduce the models, and models maintained the same number of paths within each scale. To assess the differences among scales in the relationships between S, NPP and biomass for each model, we compared the standardized regression coefficients using their 95% confidence intervals. All SEMs were fitted using the 'sem' function of the 'lavaan' package in R (88).
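The stratified, proportionally sized random sampling described above is straightforward to express in code. The sketch below is a minimal pandas version under the assumption that each spatial unit already carries a 'stratum' label from the multivariate regression trees; the data frame here is a random placeholder, not the actual spatial units.

```python
# Minimal sketch: proportional stratified random sampling of spatial units.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
units = pd.DataFrame({"stratum": rng.integers(0, 11, size=20000)})  # 11 strata

def stratified_sample(df, n_total, seed=1):
    frac = n_total / len(df)  # same sampling fraction in every stratum
    return (df.groupby("stratum", group_keys=False)
              .apply(lambda g: g.sample(frac=frac, random_state=seed)))

fine_sample = stratified_sample(units, n_total=1000)  # fine grain, N = 1,000
print(fine_sample["stratum"].value_counts().sort_index())
```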
2019-09-19T09:09:09.285Z
2019-09-14T00:00:00.000
{ "year": 2019, "sha1": "f640e5776fadc0d5291009da0eb206b97bf75496", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/geb.13165", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "f340b6cd0f0cce49e9175bbac3d7a49bc710b883", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Geography" ] }
52189195
pes2o/s2orc
v3-fos-license
New Approaches to the Integration of Navigation Systems for Autonomous Unmanned Vehicles (UAV)
The article presents an overview of the theoretical and experimental work related to unmanned aerial vehicle (UAV) motion parameter estimation based on the integration of video measurements obtained by the on-board optoelectronic camera and data from the UAV's own inertial navigation system (INS). The use of various approaches described in the literature, which show good characteristics in computer simulations or in fairly simple conditions close to laboratory ones, demonstrates the considerable complexity of the problems associated with adapting camera parameters to the changing conditions of a real flight. In our experiments, we used computer simulation methods applied to real images, as well as processing methods for videos obtained during real flights. For example, it was noted that the use of reference images that differ strongly in scale and aspect angle from the images observed in flight makes it very difficult to use the methodology of singular points. At the same time, matching the observed and reference images using rectilinear segments, such as images of road sections and the walls of buildings, looks quite promising. In addition, in our experiments we used the computation of the projective transformation matrix from frame to frame, which, together with the filtering estimates for the coordinate and angular velocities, provides additional possibilities for estimating the UAV position. Data on UAV position determination based on the methods of video navigation, obtained during real flights, are presented. New approaches to video navigation are demonstrated, based on matching rectilinear segments, characteristic curvilinear elements, and segmentation of textured and colored regions. The application of the method of calculating frame-to-frame projective transformations is also shown, which gives estimates of the displacements and rotations of the vehicle and thereby serves the UAV position estimation by filtering. Thus, the aim of the work was to analyze various approaches to UAV navigation using video data as an additional source of information about the position and velocity of the vehicle.
Introduction
The integration of observation channels in control systems of objects subjected to perturbations and measurement errors of the motion is based on the theory of observation control, which started in the early 1960s. The first works on this topic were based on a simple property of the Kalman filter, namely the possibility of determining the root-mean-square estimation error in advance, without observations, by solving the Riccati equation for the error covariance matrix [1]. The development of this methodology allowed solving problems with a combination of discrete and continuous observations for stochastic systems of discrete-continuous type.
We should underline that the main contribution of the paper is a review of different approaches to the UAV video navigation along with results of some experiments related to possible new developments which look promising in implementation of long-term autonomous missions performed by multi-purpose UAV. UAV Motion Model In modern conditions, it becomes extremely important to fulfill UAV missions without using external navigation systems, this is why in this paper we focus on the benefits which can be derived directly from video data and on how to convert them into the UAV control system entrance form. The autonomous UAV navigation tasks solution requires obtaining the coordinates estimates, camera orientation angles, as well as the coordinates velocities of the apparatus itself and the angular velocities of the camera orientation change. For navigation one can use simple UAV model taking into account kinematic relations for position, velocities and accelerations such as (1), which in discrete time have a form Here t k is a sequence of discrete times, such that t k+1 − t k = ∆t, X(t k ) ∈ R 3 is vector of current UAV coordinates in the Earth coordinate system, V(t k ) ∈ R 3 is velocities vector, ACC(t k ) ∈ R 3 is accelerations vector of, U(t k ) is vector of programmed accelerations from the UAV control system, W(t k ) is a vector of perturbations including aerodynamic influences and control system errors. For navigation needs we must get the current attitude and velocity estimates. Thus, to solve the navigation problems, the motion model must be completed with a model of observations, which can contain, in an explicit or implicit form, the current coordinates and/or velocities and possibly accelerations. Typically, this information comes from the inertial navigation system (INS) and from the sensor system or the global satellite navigation system and serves as an additional means which increases the accuracy and detects failures in the navigation system. For autonomous UAV flights this additional system is highly required and if the trained pilot uses this information automatically, for UAV it is necessary to convert the video information into data suitable for use by the vehicle control system. Below we give a series of examples of video features used for navigation. New Possibilities Related to the Usage of On-Board Opto-Electronic Cameras The use of opto-electronic cameras aboard the UAV opens a multitude of ways to separately or jointly evaluate the coordinates and velocities that characterize the position of the UAV and the orientation of the surveillance system. Some examples of successful usage of on-board cameras for micro aerial vehicles (MAV) in GPS denied environment were reported in [19,20]. It is known the series of succesful usage of such small cameras in various applcations including indoors and outdoor MAV autonomous flights [21,22]. However, in this research we were focused on outdoor UAV applications with the usage of on-board camera as an additional source of navigation information. It should be noted that there are only two approaches to video navigation: the first one is navigation through ground objects with known coordinates and the second one which is determination of the absolute UAV velocities, by observing the evolution of the video image of the underlying surface. In both cases one needs to take into account the filtering of altitude and the speed of the device received from the INS. 
In reality, both approaches should be used, but it is necessary first to investigate their accuracy characteristics. These accuracy characteristics, of course, depend on a variety of factors, such as illumination, shooting conditions, seasonality and others. Therefore, it is not possible to determine in advance which algorithms and approaches will be most effective. This is what determines the purpose of this work: to review the existing methods and, if possible, to assess their effectiveness in video navigation issues. We list only a few of them and give our comments related to our experience obtained with real and/or virtual flights.
• Usage of terrain maps and comparison of the images of observed specific objects with their positions on a preloaded terrain map. This seemingly most obvious method requires the presence of a huge collection of observable objects on board for reliable operation of the recognition system. These images must be recorded under different observation conditions, including aspect, scale, lighting, and so on. Of course, for some characteristic objects these problems are completely surmountable, but on the whole this creates serious difficulties.
• To solve this problem, special techniques have been invented based on the detection of characteristic small regions (singular points) that are distinguished by a special behavior of the illumination distribution, which can be encoded by a set of features invariant to scale and to changes of the aspect angles [23][24][25][26]. The application of this approach is described in [16,27], where it is demonstrated on model images using a 3D map of the local area. In these works we used a computer simulation of a UAV flight and simulated on-board video camera imaging. The simulating program is written in MATLAB. The feature points used are ASIFT points, as realized in OpenCV (Python) [26]. Feature points in this model work as in a real flight, because the images for the camera model and the template images were transformed by projective mapping and created from observations by different satellites. However, the use of this method is limited by the need to ensure the closeness of registration conditions. Moreover, a significant difference in the resolution level of the reference image and the images recorded in flight also leads to significant errors.
• In tasks of UAV navigation with the use of a preloaded map of the ground, the matching of reference and observed images plays a fundamental role. In recent years, the methodology based on image matching with the use of singular points has been further developed. For example, the ORB methodology, in comparison with SIFT and SURF, uses a very economical set of binary features of singular points, which allows a significant reduction of the execution time of the registration operation and demonstrates very high resistance to image noise and rotations [28]. In a series of detailed surveys [29][30][31] various alignment methods are examined on the Oxford test dataset and on others, and the ORB performance is high in terms of computation time and the rate of erroneous pixel matching. Meanwhile, from the viewpoint of solving the problems of video navigation, the accuracy of matching is more important, especially for specific images such as aerial photographs.
In this connection, the results obtained with photogrammetric surveys using ORB-SLAM2 [32][33][34], which show the high potential of the ORB methodology, are of great interest in applications related to video navigation (a minimal ORB matching sketch is given below, after this list).
• Less sensitive to the difference in shooting conditions are methods based on combining extended linear objects such as roads, house walls, rectilinear power lines and so on [18]. The analysis of linear objects gives rise to the usage of the fast Hough transform [35,36]. Here we give some results of the image matching based on combining the linear objects (see Figure 1).
• Similarly to linear objects, it is possible to use curvilinear objects that preserve their forms for successful alignment, at least across various seasons, namely the boundaries of forests and lands, and the banks of rivers and water bodies, based on their form [37] and on color-texture domains [38,39]. An example is given in Figure 3 below.
• It should be noted that the use of the above-mentioned approaches for navigation requires, on the one hand, the solution of camera calibration problems and the elimination of all kinds of registration nonlinearities, such as distortion [40][41][42] and motion blurring [43,44]. The most important peculiarity, however, is the registration of images on a 2D photodetector array, that is, the transformation of the 3D coordinates of an object into 2D, which gives only the angular coordinates of objects, known in the literature as bearing-only observations [45]. This is a special area of the nonlinear filtering problem, which may be solved more or less successfully with the aid of linearized or extended Kalman filtering, and also with particle and unscented Kalman filtering. The comparison of various filtering solutions shows [46,47] either the presence of uncontrolled bias [48] or the urgent necessity of extending the filter dimension, as for particle and unscented filtering. Meanwhile, comparison of the filtering accuracy shows almost identical accuracy [49], which is why one should prefer the simplest pseudomeasurement Kalman filter without bias, developed on the basis of Pugachev's conditionally optimal filtering [50][51][52][53]. To obtain 3D coordinates, it is also possible to measure the range, using stereo systems [54][55][56] or active radio or laser range finders [6]. The latter can be limited in use because they need essential power and disclose the UAV position, while stereo systems require very accurate calibration and the creation of a significant triangulation base, which is rather difficult to maintain on a small-sized UAV in flight.
• The problem of observing bearings only has long been in the focus of the interests of nonlinear filtering specialists, since it leads to the problem of estimating the position from nonlinear measurements. In the paper [16], we described a new filtering approach using the pseudo-measurement method, which allows expanding the observation system up to unbiased estimation of the UAV's own position, on the basis of the determination of bearings of terrestrial objects with known coordinates. However, filtering is not the only problem which arises in bearing-only observations. Another issue is the association of observed objects with their images on the template. Here various approaches based on RANSAC solutions are necessary [57], such as [58,59], but the most important is the fusion of the current position estimation with the procedure of outlier rejection [60]; for details see [16].
• In addition, bearing monitoring requires knowledge of the position of the line of sight of the surveillance system, which is not determined by the orientation angles of the apparatus coming from the INS. That is why it is of interest to estimate the line of sight position from the evolution of the optical flow (OF) or of the projective matrices describing the transformation of images of terrain sections over two consecutive frames. In the case of visual navigation one needs also the set of angles determining the orientation of the camera optical axis. The general model developed for OF observation and describing the geometry of the observation is given in [61], and the corresponding filtering equations for the UAV attitude parameters have been obtained in [62]. These equations and models were tested with the aid of a special software package [63], and the possibility of estimating the coordinate and angular velocities of the UAV was successfully demonstrated in [64][65][66]. However, neither the OF nor the evolution of projective matrices gives the exact values of the angles determining the position of the line of sight, but rather defines the angular velocities, so the problem of the angles estimation remains and must be solved with the aid of filtering.
The UAV Position and the Coordinates Velocities Estimation
The methods described above for measuring various parameters associated with the UAV movement supply different information, which must be appropriately converted into the inputs of the control system. In particular, the data on the coordinate velocities of the UAV motion are contained in the OF measurements and are extracted by filtering the dynamics equations with the corresponding measurements [65,67]. More difficult is the use of bearing-only measurements, although a complete set of filtering equations is given in the works [11,16,53]. In addition, observations of moving targets with the aid of bearing-only observations allow us to evaluate their velocities, which is shown in the work [68]. Filtering equations for the UAV velocities on the basis of the OF measurements have been given in [64,69], with examples of the estimation of the current altitude and the coordinate velocity of straightforward motion.
The UAV Angles and Angular Velocities Estimation
In general, the OF field contains the information about the coordinate velocities of the UAV and the angular velocities of the sight line. Reliable results were obtained on virtual series of images modelling a flight with constant coordinate velocity and rotation along the yaw angle [64]. However, experiments with real video show a high correlation level between different motions, for example between the pitch angle and the velocity of descent. Unfortunately, the measurement of the position of the line of sight in the UAV coordinate system, which is very precise in principle, is distorted by the own movement of the apparatus, since the UAV slopes are necessary for maneuvering and their separation from the angles of the line of sight is a very delicate problem. For example, in experiments with a quadrocopter equipped with a stabilized camera, one needed to distinguish the angle of the camera inclination from the vehicle inclination, which is necessary for the UAV motion itself. In our experiments, without careful determination of the UAV inclination angle we did not get any reliable results related to the coordinate and vertical UAV motion [61][62][63].
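As an illustration of the ORB-based matching referenced in the list above, the following minimal OpenCV sketch matches binary ORB descriptors between two successive frames; the file names are placeholders, and the resulting correspondences can feed the frame-to-frame projective estimation described later.

```python
# Minimal sketch: ORB feature matching between two successive frames.
import cv2

img1 = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the appropriate metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences")
```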
Estimation of the Angular Position

The UAV angular position is described by the three angles θ(t_k), ϕ(t_k), γ(t_k) (pitch, roll and yaw, respectively), the angular velocities ω_p(t_k), ω_r(t_k), ω_y(t_k) and the angular accelerations a_p(t_k), a_r(t_k), a_y(t_k). The pitch angle and pitch angular velocity dynamics are described by discrete-time kinematic relations driven by the noise W_p(t_k), which is white noise with variance σ²_p. The pitch angular velocity measurement using the OF has the form z(t_k) = ω_p(t_k) + W_{ω_p}(t_k), where W_{ω_p}(t_k) is the noise in the angular velocity measurements using the OF, which is white noise with variance σ²_{ω_p}. Similarly to the coordinate velocity estimation, filtering of these relations yields the pitch angle θ(t_k) and pitch angular velocity ω_p(t_k) estimates. The formulae for the estimates of ϕ, γ and ω_r, ω_y are analogous and are used in the model based on the OF estimation.

Visual-Based Navigation Approaches

Several studies have demonstrated the effectiveness of approaches based on motion field estimation and feature tracking for visual odometry [70]. Vision-based methods have been proposed even in the context of autonomous landing management [12]. In [47] a visual odometry based on geometric homography was proposed. However, the homography analysis uses only the 2D coordinates of the reference points, though for the evaluation of the current UAV altitude the 3D coordinates are necessary. All such approaches presume the presence of some recognition system in order to detect objects nominated in advance. Examples of such objects can be special buildings, crossroads, tops of mountains and so on. The principal difficulties are the different scales and aspect angles of the observed and stored images, which leads to the necessity of a huge template library in the UAV control system memory. Here one can avoid this difficulty by using another approach based on the observation of so-called feature points [71] that are scale and aspect angle invariant. For this purpose the technology of feature points [23] is used. In [10] the approach based on the correspondence between the coordinates of the reference points observed by the on-board camera and the reference points on the map loaded into the UAV's memory before the mission start was suggested. During the flight these maps are compared with the frame of the land directly observed with the help of the on-board video camera. As a result one can detect the current location and orientation without time-error accumulation. These methods are invariant to some transformations and are also noise-stable, so that the predetermined maps can differ in scale, aspect angle, season, luminosity, weather conditions, etc. This technology appeared in [72]. The contribution of the work [16] is the usage of a modified unbiased pseudomeasurement filter for bearing-only observations of some reference points with known terrain coordinates.

Kalman Filter

In order to obtain metric data from visual observations one needs first to make observations from different positions (i.e., triangulation) and then to use nonlinear filtering. However, all nonlinear filters either have an unknown bias [48] or are very difficult for on-board implementation, like the Bayesian-type estimation [27,73]. Approaches to position estimation based on bearing-only observations were analyzed long ago, especially for submarine applications [49], and nowadays for UAV applications [46].
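The pitch-channel filtering sketched above can be written as a two-state linear Kalman filter in which only the angular velocity is measured (via the OF). The sketch below uses a simple kinematic model and invented noise variances; it illustrates the structure of such an estimator rather than reproducing the filter of [62]:

```python
import numpy as np

# Two-state Kalman filter for the pitch channel: state x = [theta, omega_p],
# with only omega_p measured (from the OF). Simple kinematic model:
# theta_{k+1} = theta_k + dt*omega_k, omega_p as a random walk driven by W_p.
dt = 1.0 / 25.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
Hm = np.array([[0.0, 1.0]])               # measurement: z = omega_p + noise
Q = np.diag([0.0, 1e-4])                  # process noise (variance of W_p)
R = np.array([[1e-2]])                    # OF measurement noise (sigma^2_omega_p)

x, P = np.zeros(2), np.eye(2)

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                          # predict
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)                        # gain, shape (2, 1)
    x = x + (K @ (np.atleast_1d(z) - Hm @ x)).ravel()      # update
    P = (np.eye(2) - K @ Hm) @ P
    return x, P

rng = np.random.default_rng(0)
for z in 0.1 * np.cos(np.linspace(0.0, 2.0, 50)) + rng.normal(0.0, 0.1, 50):
    x, P = kf_step(x, P, z)
print("theta, omega_p estimates:", x)
```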
Comparison of different nonlinear filters for bearing-only observations in the problem of ground-based object localization [74] shows that the EKF (extended Kalman filter), the unscented Kalman filter, the particle filter and the pseudomeasurement filter give almost the same level of accuracy, while the pseudomeasurement filter is usually more stable and simpler for on-board implementation. This observation is in accordance with older results [49], where all these filters were compared in the problem of moving object localization. It has been mentioned that all these filters have a bias, which makes their use in data fusion rather problematic [45]. The principal requirement for such filters in data fusion is a non-biased estimate with a known mean-square characterization of the error. Among the variety of possible filters, the pseudomeasurement filter can be easily modified to satisfy the data fusion demands. The idea of such nonlinear filtering has been developed by V. S. Pugachev and I. Sinitsyn in the form of so-called conditionally-optimal filtering [50], which provides the non-biased estimation with the minimum mean squared error within the class of linear filters. In our previous works we developed such a filter (the so-called Pseudomeasurement Kalman Filter (PKF)) for the UAV position estimation and gave the algorithm for path planning along the reference trajectory under external perturbations and noisy measurements [16,53].

Optical Absolute Positioning

Some known aerospace maps of the terrain in the flight zone are loaded into the aircraft memory before the start of the flight. During the flight these maps are compared with the frame of the land directly observed with the help of the on-board video camera. For this purpose the technology of feature points [23] is used. As a result one can detect the current location and orientation without time-error accumulation. These methods are invariant to some transformations and are also noise-stable, so that the predetermined maps can vary in height, season, luminosity, weather conditions, etc. Also, since the moment of the previous aerial survey the picture of the landscape can have changed due to human and natural activity. All approaches based on the capturing of objects assigned in advance presume the presence of some on-board recognition system in order to detect and recognize such objects. Here we avoid this difficulty by using the observation of feature points [71] that are scale and aspect angle invariant. In addition, the modified pseudomeasurement Kalman Filter (PKF) is used for the estimation of the UAV positions and in the control algorithm. One should also mention the epipolar position estimation for absolute positioning [75], where it helps during landing on a runway (see Figure 4).

Projection Matrices Techniques for Videonavigation

The transformation of images of plane regions when the camera position is changed is described by a projective transformation given by the corresponding matrix. The complete matrix of the projective transformation contains information on the displacement of the main point of the lens and on the rotation of the line of sight. A camera installed on an aircraft flying over a relatively flat portion of the earth's surface registers a sequence of frames, and if there is overlap between consecutive frames, the analysis of the displacement of characteristic points in the overlap region carries information about the linear and angular motion of the camera, and thereby about the UAV motion.
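The extraction of the frame-to-frame projective matrix from the displacement of characteristic points can be sketched with standard tools; a minimal Python/OpenCV example, assuming two overlapping frames on disk (the file names are placeholders), is:

```python
import cv2
import numpy as np

# Load two overlapping frames (placeholder file names).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# ORB feature points: scale- and rotation-invariant, cheap enough for on-board use.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the appropriate metric for binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC jointly estimates the frame-to-frame projective matrix H and rejects
# outlier correspondences, as discussed above.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("H =\n", H, "\ninliers:", int(inlier_mask.sum()), "of", len(matches))
```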
Projective transformations are often used to match images, but here we use the alignment for motion analysis. Although this is quite natural, it is the first time we have used it to estimate UAV motion from actual video survey data. The OF gives the estimation via measurement of the coordinate velocities of the image shift and therefore provides only local estimations, where the angular components are highly correlated with the coordinate velocities and the latter are a few orders of magnitude larger than the angular ones. So the estimation of angular velocities on the basis of the OF looks very difficult. At the same time, the estimation on the basis of the projective matrix evolution looks more promising for the estimation of the angles of the sight line [76]. The estimation of motion via projective matrices has been known for a long time [77] and remains in the focus of researchers until now [9,78,79]. The OF works only on the basis of local information on the speed of motion, which leads to a growing error drift. Therefore, it would be useful to obtain corrective data about the orientation of the UAV at some intermediate time instances. Such information may be obtained with the aid of the projective transformation between successive frames. Of course this method has been known for a long time; however, until now we have found only a few examples in the literature of using this method for UAV navigation, see for example [80]. Perhaps the reason is that this method needs a good estimation of the initial position and the line of sight angles; therefore, fusion with filtering algorithms taking into account the dynamical model of the UAV is rather urgent. The article [80] also presents an estimation of the errors, though they depend on the specific landscape features, so it would be useful to estimate the influence of the projective matrices computation on the accuracy of the shift and rotation evaluation. Here we also present the experimental results of the UAV position estimation based on the computation of projective matrices from real video data in combination with the filtering of coordinates and angles. The basic idea of our approach is similar to [81], though in our implementation of the homography between two successive frames we add the estimation of motion via Kalman filtering. In [80] an interesting application of UAV observation to road traffic is presented, where the application of the projective transform to the estimation of the vehicle velocity by the analysis of two successive frames is demonstrated. The disadvantage of this approach is that it is based on the knowledge of the coordinates of some specific points within the field of view; generally such points are absent. A very interesting example of the projective matrix technique usage is given in [82]. However, all these approaches look like successful applications in rather specific cases. A general approach to camera pose estimation has been presented in [81].

Earth Coordinate System

We assume a flat earth surface; the coordinate system OXYZ is chosen as follows: • The origin O belongs to the earth surface. • Axis OX is directed to the east. • Axis OY is directed to the north. • Axis OZ is directed to the zenith. The earth surface is described by the equation z = 0.

The Camera Coordinate System

The pinhole camera model is used; the coordinate systems are shown in Figure 5. Denoting by C the camera matrix, by t the position of the camera center, and by R the matrix of the camera rotation, such that R^T R = R R^T = E, we have for the image p of a point r: p = C R (r − t).
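Recovering camera motion from the projective matrix between two views of a plane can be done with the standard plane-based homography decomposition. A hedged Python/OpenCV sketch follows; the intrinsic matrix C and the homography H below are invented placeholders, and the translation is recovered only up to scale:

```python
import cv2
import numpy as np

# Invented placeholder values: a pinhole intrinsic matrix C and a nearly
# identity frame-to-frame homography H, standing in for the matrices above.
C = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
H = np.array([[ 1.01,  0.02,  5.0],
              [-0.02,  1.00, -3.0],
              [ 1e-5,  0.00,  1.0]])

# Plane-induced homography decomposition returns up to four (R, t, n) candidates.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, C)
for R, t, n in zip(rotations, translations, normals):
    # One common physical filter keeps solutions whose plane normal has a
    # positive z-component in the camera frame (plane in front of the camera);
    # this mirrors the rejection of the "under the earth" solution in the text.
    if float(n[2]) > 0:
        print("candidate R:\n", R, "\nt (up to scale):", t.ravel())
```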
Representation of the Camera Rotation with the Aid of Roll, Pitch, Yaw Angles

For practical reasons the position of the camera can be defined by the superposition of three rotations corresponding to the rotations of the vehicle, such as: • Yaw, that is, rotation about the axis OZ, so that a positive angle corresponds to an anticlockwise rotation. • Pitch, that is, rotation about the axis OX, so that for a positive angle the image moves downward. • Roll, that is, rotation about the axis OY, so that for a positive angle the image moves right. If the optical axis of the camera is directed downward, all rotations are zero and the top of the image is directed to the north.

Projective Representation of the Frame-to-Frame Transformation

Let us find the matrix of the projective transformation P between the homogeneous coordinates ρ = (x, y, 1)^T on the earth surface and the frame pixels p. Denote by M^[1,2] the matrix comprising the first two columns of M. Since the points of the earth surface satisfy z = 0, we have R(r − t) = R^[1,2] (x, y)^T − Rt, so that p = C (R^[1,2], −Rt) ρ, that is, P = C (R^[1,2], −Rt).

Determining the Camera Position

Assume we have two positions of the camera, where • the first one, R_1, t_1, is known; • the second one, R_2, t_2, is to be determined. Then P_2 = C (R_2^[1,2], −R_2 t_2) and p_2 = P_2 ρ. On the other hand, p_2 = H p_1, where H is the matrix of the projective transformation of frame 1 to frame 2, so we get H P_1 = k P_2. Any matrix obtained from H by multiplication by k ≠ 0, that is kH, determines the same projective transformation. Assume we have obtained the estimate Ĥ of the matrix kH on the basis of two successive frames such as those in Figure 6. For getting Ĥ we use the RANSAC methodology [57] and interpret the difference Ĥ − kH as normal noise, so that Ĥ = kH + ε, where ε is a matrix normal noise added to all entries of Ĥ. This permits determining the second camera position as the solution of the minimization problem

(R_2, t_2, k) = arg min ‖Ĥ P_1 − k P_2‖²,   (3)

or, equivalently,

(R_2, t_2, k) = arg min ‖C^{-1} Ĥ P_1 − k (R_2^[1,2], −R_2 t_2)‖².   (4)

Figure 6. The two successive frames used for determining the projective matrix. Red crosses show the singular points used for the matrix calculation. The difference between the two frames is rather small, since it corresponds to the time interval ∆t = 1/25 s. The low resolution is due to the aircraft motion, which produces additional blurring.

Solution of the Minimization Problem

The minimization problems (3) and (4) admit the following solution. Introduce the matrix G = C^{-1} Ĥ C (R_1^[1,2], −R_1 t_1). Then the above problem may be reformulated as follows:

(R_2, t_2, k) = arg min ‖G − k (R_2^[1,2], −R_2 t_2)‖².

Denote by M^[3] the third column of the matrix M (M^[3] = M (0, 0, 1)^T) and by analogy denote M^[1] and M^[2]. So the minimization problem may be rewritten as

(R_2, t_2, k) = arg min { ‖G^[1,2] − k R_2^[1,2]‖² + ‖G^[3] + k R_2 t_2‖² }.

The first term does not depend on t; the second term achieves its minimum at t = −(1/k) R_2^T G^[3], where it is equal to zero. Thus t̂_2 = −(1/k̂) R̂_2^T G^[3]. By substitution into the original minimizing term one can reduce the problem to the following:

(R̂_2, k̂) = arg min_[R,k] ‖G^[1,2] − k R^[1,2]‖².   (5)

Vectors G^[1] and G^[2] belong to some plane γ; therefore the vectors R̂_2^[1] and R̂_2^[2] giving the minimum in (5) belong to the same plane. Then k̂ is defined in accordance with linear regression with quadratic penalization,

k̂ = (G^[1]T R̂_2^[1] + G^[2]T R̂_2^[2]) / (R̂_2^[1]T R̂_2^[1] + R̂_2^[2]T R̂_2^[2]),

and, finally, t̂_2 = −(1/k̂) R̂_2^T G^[3]. It appears that the two solutions for t̂_2 differ by the sign of the Z coordinate, so the extra solution lies under the earth and must be rejected.

Testing of the Algorithm

Suppose that we have a map represented in raster(-scan) graphics, where q = (q_x, q_y, 1)^T are the pixel coordinates.
They are related to the earth coordinates by the matrix Q, that is, q = Q ρ. Let the length unit on the earth correspond to k pixels on the map, and let the map size be w (width) and h (height) pixels, with the origin of the earth coordinate system O in the map center; then Q is determined by the scale factor k and by the translation of the origin to the map center. The relation between the frame pixels p_i and the map pixels q is given by p_i = P_i Q^{-1} q = A_i q, where A_i = P_i Q^{-1}. Note that the frame-to-frame transformation then equals H = A_2 A_1^{-1}. Then the test algorithm is as follows: • Choose the map and find the connection with the earth coordinates by determining Q. • Forget for a moment R_2, t_2. • Obtain two frames in accordance with A_i from the map. • Visually test that they correspond to R_i, t_i.

Testing Results

In this testing we evaluate the influence of the noise in the determination of the projective matrix on the camera motion estimation. Testing shows more or less good correspondence, but only on rather short time intervals, approximately 30 s; however, this is typical for algorithms giving estimates of the UAV velocities, since all algorithms which accumulate drift need correction with the aid of another algorithm. These algorithms should be based on the matching of observed images and templates, some of which are described in Sections 2 and 3.

Statistical Analysis of the Projective Matrices Algorithm

Thus, the average error equals 3.1837 m per frame, where the average frame shift equals 16.8158 m. The average relative error per frame equals ≈ 19.08% (see Table 1).

Comparison of the Projective Matrices Algorithm with the OF Estimation

Both the projective matrices algorithm and the OF give information related to the coordinate and angular velocities of the UAV. We tested the OF approach [63] on the same video data as used for the projective matrices. In Figures 11 and 12, in comparison with the corresponding data in Figures 8 and 9, one can see a very short period of reliable estimation of the coordinates, so the projective matrices computation shows more reliable tracking; however, this is just a comparison of the algorithms per se, without fusion with the INS, which is necessary in order to evaluate the current flight parameters. The estimation of the orientation angles is given in Figure 13. The described algorithm gives only one step in the estimation of the displacement and rotation of the camera, while the initial data for the operation of the algorithm must be obtained from the system for estimating the position of the apparatus. In other words, the algorithm based on the calculation of the projective transformation from frame to frame can complement the sensors of the coordinate and angular velocities of the apparatus, and its performance depends on the accuracy of the position matching of the singular points. Of course, this modelling example needs further verification with new video data and new telemetry data from the INS. It is obvious that the noise in the definition of the projective transformation matrix is decisive in assessing the operability of the algorithm and depends on a variety of factors. Therefore, further analysis of the algorithm will be performed on new flight data sets. The material presented in this Section 4 provides a complex tracking algorithm in which the computation of the projective transformation serves as a sensor of the displacements and rotations of the line of sight. Moreover, with the example of a sufficiently long flight, the possibility of determining the UAV velocities is shown on the basis of the algorithm for calculating a projective transformation from frame to frame.
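The consistency relation underlying the test algorithm above, namely that the frame-to-frame transform must equal H = A_2 A_1^{-1} for points of the planar map, can be checked in a few lines; the matrices below are invented placeholders:

```python
import numpy as np

# Placeholder map-to-frame transforms A1 and A2 (in the notation above,
# A_i = P_i Q^{-1}); the values are invented for illustration.
A1 = np.array([[1.00,  0.00, 10.0], [0.00, 1.00, -5.0], [0.0, 0.0, 1.0]])
A2 = np.array([[0.99, -0.05, 14.0], [0.05, 0.99, -2.0], [0.0, 0.0, 1.0]])
H = A2 @ np.linalg.inv(A1)                 # induced frame-to-frame transform

q = np.array([123.0, 456.0, 1.0])          # a map pixel in homogeneous form
p1, p2 = A1 @ q, A2 @ q                    # its images in the two frames
p2_from_H = H @ p1
assert np.allclose(p2 / p2[2], p2_from_H / p2_from_H[2])
print("point transfer is consistent with H = A2 @ inv(A1)")
```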
This demonstrates the possible efficiency of the approach and opens the way for its integration into the navigation system. In any case, the efficiency of this algorithm strongly depends on the other systems providing, for example, the initial estimates of the position and angles; otherwise growing errors are inevitable. It is clear that the realization of video navigation from observations of the earth's surface is a difficult task, and we are only at the very beginning of the road.

Conclusions

The article describes a number of approaches to video navigation based on observation of the earth's surface during an autonomous UAV flight. It should be noted that their performance depends on the external conditions and the observed landscape. Therefore, it is difficult to choose the most promising approach in advance, and most likely it is necessary to rely on the use of various algorithms, taking into account the environmental conditions and the computational and energy limitations of a real UAV. However, there is an important problem, namely, the determination of the quality of the evaluation and the selection of the most reliable observation channels during the flight. The theory of control of observations opens the way to a solution, since the theory of filtering, as a rule, uses the discrepancy between the predicted and observed values, and video could help to detect such discrepancies. This is probably the main direction of future research in the field of integrating video with standard navigation systems.

Author Contributions: The work presented here was carried out in collaboration among all authors. All authors have contributed to, seen and approved the manuscript. B.M. is the main author, having conducted the survey and written the content. A.P. and I.K. developed and analyzed the projective matrix approach to UAV navigation. K.S. and A.M. were responsible for the parts related to the usage of stochastic filtering in the UAV position and attitude estimation. D.S. performed the analysis of video sequences for the computation of the sequence of projective matrices. E.K. developed the geolocalization algorithm on the basis of linear objects.

Funding: This research was funded by Russian Science Foundation Grant 14-50-00150.

Conflicts of Interest: The authors declare no conflict of interest.
The Incidence of Type 2 Diabetes Mellitus and Weight Gain in People Living with HIV Receiving a Dolutegravir-Based Antiretroviral Therapy in Addis Ababa, Ethiopia: A Pilot Single-Arm Historical Cohort Study

Introduction: The development of antiretroviral therapy (ART) has immensely improved the quality of life of people living with HIV/AIDS. Despite such a change, concerns persist regarding the safety of the latest drugs added to the regimens. This study aims to evaluate the incidence of type 2 diabetes mellitus (T2DM) and weight gain in individuals receiving antiretroviral therapy containing dolutegravir at a general hospital in Addis Ababa, Ethiopia. Methods: A retrospective cohort study was conducted at RDDMH from 1 February to 30 March 2022. The study included PLHIV who had dolutegravir substituted into their combined regimen in November 2019. Collected data underwent cleaning, entry, and analysis using Statistical Package for Social Sciences (SPSS) v. 26.0 and R programming. Descriptive statistics were employed for univariate and bivariate analysis. The Kaplan–Meier model in R was used to illustrate the hazard function. A significance level of p < 0.05 and a 95% confidence interval were employed for statistical reporting. Results: The study followed 185 PLHIV on ART who either substituted their previous regimens or initiated a new dolutegravir-based regimen for 12 months. Most were females (59.5%), aged over 38 years (57.5%), married (50.8%), and had lived with HIV for 7 or more years (51.9%). The incidence proportion of T2DM in this sample was 7.0% (95% CI: 3.8–10.3). The age category (χ²(1, N = 185) = 12.29, p < 0.001) exhibited a statistically significant relationship with the incidence of T2DM. The cumulative rate of T2DM in the age group over 38 years was approximately 15.4%. The pairwise Wilcoxon signed rank test revealed statistically significant differences in BMI scores between time points. Conclusion: This study observed a noteworthy incidence of T2DM among PLHIV receiving a dolutegravir-based first-line ART. Healthcare providers should prioritize early follow-up and management options for PLHIV who are on dolutegravir-based ART regimens.

Introduction

It is apparent that antiretroviral therapy (ART) significantly improves the well-being of individuals living with human immunodeficiency virus (HIV) [1]. Currently, concerns are reported about an increased risk of other chronic conditions, including type 2 diabetes mellitus (T2DM), attributed to both underlying patient factors and certain drugs used in ART regimens [2,3]. In the literature, the incidence of diabetes mellitus (DM) has been linked to factors such as the HIV virus itself [4], co-infection with hepatitis C virus (HCV) [5], and specific classes of ARV drugs like lopinavir, indinavir, stavudine (d4T), didanosine (ddI), and zidovudine (AZT) [6,7].

On the other hand, contradictory findings have been documented, with some studies showing no association between DM incidence and HIV or ART use at all [8,9]. The conflicting results, coupled with variations in study populations and methodological inconsistencies, create confusion regarding any modifiable differences and appropriate preventive measures. Additionally, with the introduction of newer ART regimens, the risk of developing DM may change significantly among PLHIV taking these drugs. Identifying the safest newer agents in a specific context in terms of adverse events becomes crucial.
In Ethiopia, studies on the prevalence of DM among people living with HIV (PLHIV) have been conducted in various settings [10–15]. According to the findings, the cumulative burden of diabetes ranged between 7.1% and 8.8%, underscoring the public health importance of HIV-DM comorbidity in the population. Metabolic syndromes, marked by lipodystrophy, dyslipidemia, and insulin resistance, were reported at 25% [15]. While these studies have provided insights into the overall magnitude and associated factors at a specific point in time, the incidence of DM and the survival time of PLHIV receiving newer ART drugs remain unknown.

The 2018 national consolidated comprehensive guideline for HIV prevention, care, and treatment in Ethiopia recommends the use of tenofovir/lamivudine/dolutegravir (TDF/3TC/DTG) as the preferred first-line regimen for adults [16]. Interestingly, dolutegravir (DTG), known for its superior effectiveness and safety, has replaced non-nucleoside reverse transcriptase inhibitors (NNRTIs), particularly nevirapine, in many PLHIV [17,18]. Nevertheless, a few studies have raised concerns about potential adverse events, such as hyperglycemia [19,20] and weight gain [21], following the initiation of dolutegravir.

Even though suspected hyperglycemia or confirmed DM is associated with an increased risk of morbidity and mortality in the general population [22,23] and in PLHIV [24,25], determining the incidence and predictive factors among PLHIV on these drugs remains an essential step. The generated evidence can inform current practices, address modifiable factors, and significantly enhance the quality of life and longevity in this population. Considering the possibility of conflicting evidence on the potential contribution of dolutegravir to the progression of T2DM [26], this study aimed to determine the incidence of hyperglycemia and weight gain among PLHIV who were receiving a dolutegravir-based antiretroviral therapy (ART) at a public hospital in Addis Ababa, Ethiopia.

Study Setting, Design, and Period

The research took place at Ras Desta Damtew Memorial Hospital (RDDMH) in Addis Ababa, Ethiopia. RDDMH is a general hospital with 168 beds and 550 staff members, offering inpatient and outpatient services, including antiretroviral therapy (ART) [27]. The study, using a historical cohort design, investigated the T2DM and weight gain incidence among PLHIV on dolutegravir-based ART at RDDMH from 1 February to 30 March 2022.

Population and Eligibility Criteria

The source population comprised all adult individuals (≥14 years old) living with HIV (PLHIV) who had initiated any highly active antiretroviral therapy (HAART) and were receiving care in the current study setting. This demarcation is based on the consideration and linkage of this group to adult ART or other healthcare clinics in the Ethiopian health system. The eligible study population consisted of PLHIV on a dolutegravir-based HAART regimen actively undergoing follow-up at the hospital. These individuals had no history of diabetes mellitus (DM) before initiating dolutegravir (DTG), and their non-nucleoside reverse transcriptase inhibitor (NNRTI)-based regimen had been substituted by dolutegravir from the 1st through the 30th of November 2019. They were subsequently placed on the same regimen (TDF/3TC/DTG). This represented an open cohort, encompassing eligible PLHIV who started on the new treatment during the specified period.
Sample Size and Sampling Methods

The sample size for this study was determined using the single population proportion formula. As there were no earlier studies in Ethiopia regarding the prevalence of dolutegravir-related hyperglycemia in people living with HIV (PLHIV), the following assumptions were taken into consideration when estimating the minimum sample size: a 14% prevalence of dolutegravir-related hyperglycemia as documented in [28], a 5% type one error, a 95% confidence level (two-tailed test), and a 5% margin of error. As a result, a total of 185 medical records were determined to be necessary for this study. Along with this, a consecutive sampling method was employed to obtain the medical records of PLHIV for whom a substitution had been made during the reviewed period.

Variables of the Study

Outcome variable(s): T2DM and BMI.

Independent variables: The following factors were evaluated as independent (attributable) variables. Sociodemographic characteristics: these included age at baseline, sex, and occupation. Clinical characteristics: these encompassed a wide array of factors, including World Health Organization (WHO) staging at baseline, CD4 count at baseline, presence of opportunistic infections (OIs) at baseline, baseline weight, baseline blood pressure (BP) level, height, name of the baseline highly active antiretroviral therapy (HAART) regimen, type of current HAART regimen, duration since HIV diagnosis, duration since ART initiation, baseline history of any chronic comorbidity, history of any adverse drug reaction (ADR), history of therapy switch, and history of smoking.

Data Collection Instrument, Procedure, and Quality Assurance

The data collection utilized a structured data extraction format, which was developed based on a review of the relevant literature [29]. Subsequently, the baseline patient characteristics outlined above were extracted. A time-updated version of the covariates was recorded for the year of enrollment, age, CD4+, ART regimen, BMI, and WHO clinical stage at later times of follow-up. Additionally, BMI was computed as weight in kilograms divided by the square of the height in meters and was further classified into <18.5 (underweight), 18.5–24.9 (optimal), 25–29.9 (overweight), and ≥30 (obese) [30]. The measurements were taken at three time points: at baseline, when the patient was diagnosed with HIV (Time 1); when PLHIV were initiated on a dolutegravir-based regimen (the start of this follow-up period) (Time 2); and any time during the follow-up period when the patient experienced the event or was censored (Time 3).

Diagnosis of type 2 diabetes mellitus (T2DM) was considered upon confirmation of either of the following assessments being recorded by an authorized health professional: (1) FPG ≥ 126 mg/dL, as defined in the 2013 American Diabetes Association criteria [31], or (2) initiation of an anti-diabetic medication following a diagnosis of type 2 diabetes mellitus (T2DM). The use of terms like 'hyperglycemia' or 'DM' in this study refers to the measures outlined in this definition. A trained and informed data collector with experience in ART clinical service conducted the data collection. ART registry codes were employed to retrieve medical records on medication and clinical profiles, with the first author overseeing the data collection process on a daily basis. All reports and procedures in this study adhered to the recommendations of the STROBE guidelines (see Supplementary Materials).
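As a quick check of the sample size computation described in the Sample Size subsection above, the single-population-proportion formula can be evaluated directly. The study itself used SPSS and R; the following short Python sketch reproduces the reported figure:

```python
# Single-population-proportion sample size, with the assumptions stated above:
# p = 0.14 (dolutegravir-related hyperglycemia prevalence from [28]),
# z = 1.96 (two-tailed 95% confidence level), d = 0.05 (margin of error).
p, z, d = 0.14, 1.96, 0.05
n = z**2 * p * (1 - p) / d**2
print(round(n))   # -> 185, the number of medical records used in the study
```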
Data Analysis

The collected data underwent manual coding, cleaning, and entry into Microsoft Excel. Data analysis was conducted using Statistical Package for Social Sciences (SPSS) v. 26.0 and the R version 4.2 package. Descriptive statistics were employed to present univariate and bivariate analyses. When the assumptions of binary logistic regression were not met in the analysis of factors related to the hyperglycemia incidence, we employed the Chi-square test of independence. A one-year follow-up period that spanned from the first visit in the month of November 2019 to October 2020 was considered in this study. The end of follow-up was marked by either the occurrence of an event (the first diagnosis of diabetes mellitus), death due to any cause, loss to follow-up (defined as 6 months past the next visit), or the end of the cohort, whichever occurred first. The hazard function was visualized using the Kaplan–Meier model in R to observe the impact of age on the incidence of the event. The effect of dolutegravir (DTG) on the weight gain of PLHIV was assessed using a repeated measures ANOVA in R. As normality assumptions were not met for all three time intervals of the BMI measurements (Time 1, Time 2 and Time 3), Friedman's non-parametric model was employed. The Wilcoxon signed rank test was used to compare specific BMI pairs. A significance level of p < 0.05 and a 95% confidence interval were used for reporting all outputs.

Characteristics of Participants

A total of 185 charts of PLHIV who either substituted their earlier regimens or initiated a new dolutegravir-based regimen in November 2019 were followed for a period of 12 months. The majority were female (110, 59.5%) and in the age category of over 38 years (107, 57.5%); nearly half were married (94, 50.8%) and had lived with HIV for 7 or more years (96, 51.9%); about two-thirds were of normal weight (126, 68.1%); half were either experienced with ART or transferred in from other health facilities (94, 50.8%); and over one-third (69, 37.3%) were self-employed. Regarding comorbid conditions, the vast majority had no comorbidity (175, 94.6%). Of the 10 cases with comorbidity, hypertension accounted for the most cases (n = 8), followed by deep venous thrombosis (DVT) (n = 1) and asthma (n = 1). All participants were reported to be in a healthy state of health according to WHO staging (treatment stage 1). Dolutegravir was substituted in those who developed an event (see Table 1).
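The analysis pipeline described above (Friedman's test with Kendall's W as the effect size, followed by pairwise Wilcoxon signed rank tests) can be illustrated with a short Python sketch. Since the study data are not public, the BMI vectors below are simulated placeholders and the printed values will not match the reported results:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Simulated BMI vectors for the three time points (Time 1: HIV diagnosis,
# Time 2: dolutegravir start, Time 3: event/censoring); invented placeholders.
rng = np.random.default_rng(1)
bmi_t1 = rng.normal(21.5, 2.5, 185)
bmi_t2 = bmi_t1 + rng.normal(0.0, 0.8, 185)   # little change before DTG
bmi_t3 = bmi_t2 + rng.normal(0.6, 0.9, 185)   # modest gain after DTG

stat, p = friedmanchisquare(bmi_t1, bmi_t2, bmi_t3)
kendalls_w = stat / (len(bmi_t1) * (3 - 1))   # effect size: W = chi2 / (N*(k-1))
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3g}, Kendall's W = {kendalls_w:.3f}")

# Pairwise Wilcoxon signed rank tests, as in the post hoc comparisons reported.
for a, b, label in [(bmi_t1, bmi_t3, "Time 1 vs Time 3"),
                    (bmi_t2, bmi_t3, "Time 2 vs Time 3")]:
    _, pw = wilcoxon(a, b)
    print(label, f"p = {pw:.3g}")
```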
Incidence Proportion and Related Characteristics of T2DM

The incidence proportion of T2DM in the present setting was found to be 7.0% (95% CI: 3.8–10.3). A Chi-square test of independence was performed to identify the presence of relationships between the incidence of an event and patient characteristics, namely, sex, marital status, employment status, comorbid condition, ART experience, age category, years since diagnosed with HIV, and BMI when the patient started the dolutegravir-based regimen. Accordingly, it was found that only the age category (χ²(1, N = 185) = 12.29, p < 0.001) showed a statistically significant relationship with the incidence of T2DM in the cohort (Table 2). As marital status initially exhibited a statistically significant association with the incidence of type 2 diabetes mellitus (T2DM), an analysis was conducted to delve deeper into the influence of marital status on the occurrence of T2DM by stratifying it based on age. Given the prevalence of cells with expected counts below 5, the likelihood ratio test was employed for both age groups. The findings indicated no statistically significant association in either the group aged 38 years or younger (χ²(3, N = 185) = 1.701, p = 0.637) or the group aged over 38 years (χ²(3, N = 185) = 9.229, p = 0.056) (see Table 3). From the descriptive analysis, it was found that most of those at or below the age of 38 were females (75%), while males made up nearly 62% of the upper age group. However, about 16.7% of the females and 14.6% of the males over the age of 38 were diagnosed with T2DM. The age-stratified incidence of T2DM is plotted in Figure 1 below. It can be observed that until the fourth month following the substitution or initiation of dolutegravir, neither group developed T2DM. However, the upper age group was found to be more likely to be diagnosed with the disease in the subsequent months, with the cumulative magnitude reaching 15.4% at the end of the follow-up period (Figure 1).

Incidence of Weight Gain

A one-way non-parametric repeated measures ANOVA, using Friedman's test in R, was performed. The dependent variable was BMI measured at the three different time points (as a factor), namely, at baseline when the patient was diagnosed with HIV (Time 1), when PLHIV were started on a dolutegravir-based regimen (the start of this follow-up period) (Time 2), and any time during the follow-up period when the patient experienced an event or was censored (Time 3). The box plot below shows the distribution of BMI scores (dots on the plot) over the three time intervals (see Figure 2).
A statistically significant difference in BMI scores was noted at the three different time points (Friedman test, χ²(2) = 37.49, p < 0.001). Kendall's W was used as the measure of the Friedman test's effect size. Accordingly, the magnitude of the effect size (0.101) was considered small in this evaluation. The pairwise Wilcoxon signed rank test between groups revealed statistically significant differences in BMI scores between Time 1 and Time 3 (p < 0.001) and Time 2 and Time 3 (p < 0.001). These differences are indicated by asterisks placed between the respective time points on the plot (see Figure 3).

Discussion

The observed incidence of hyperglycemia in the current study setting was approximately 7%, a figure that is consistent with a study on the burden of diabetes in individuals receiving highly active antiretroviral therapy reported from eastern Ethiopia [13]. Despite similarities in the sex and age distribution between the two samples, our study, which had a greater proportion of normal-weight PLHIV when initiating the dolutegravir-based regimen, requires careful consideration due to the measurement type employed. Consequently, the emergence of new cases of type 2 diabetes mellitus (T2DM) within a 12-month follow-up period may signal a potential risk of the drug in altering blood glucose levels in this population. A recent report from Uganda documented dolutegravir-related adverse events, including hyperglycemia, reaching up to 10% [32]. Additionally, only a few cases of diabetes incidence have been reported among PLHIV receiving regimens containing this drug in Ethiopia [19,20]. Evidence from clinical trials has also suggested that integrase-strand transfer inhibitors (INSTIs), including dolutegravir, are generally linked with hyperglycemia. Up to 6% of participants were reported to have experienced a grade 2 event (serum plasma glucose level between 126 and 250 mg/dL) from both dolutegravir and raltegravir [33,34].
The study also explored factors contributing to the incidence of diabetes. However, estimating the potential effect size proved challenging in the logistic regression model due to the limited number of cases with this event. Employing a distribution-free approach via the Chi-square test, a statistically significant correlation between type 2 diabetes mellitus (T2DM) and both marital status and age category was identified. While further investigation is required to clarify the specific relationship between T2DM and dolutegravir-based highly active antiretroviral therapy (HAART), the association between diabetes risk and marital status appears to be uncommon. To confirm this, an age-stratified analysis examining the relationship between marital status and T2DM incidence revealed no statistically significant association. Conversely, the potential link between age category and diabetes may stem from the fact that nearly all individuals experiencing the event belonged to the age group above the mean. This finding aligns with prior research documenting the heightened association between increasing age and T2DM [35,36].
Assessing the age- and sex-wise presentation of the outcome, approximately 16.7% of females and 14.6% of males over the age of 38 were diagnosed with T2DM. The higher proportion in females is attributed to the inverted magnitude of the denominator in the two groups, with females being less likely to be in the upper age group compared to males. Generally, it can be speculated that the risk of diabetes increased in the age group over 38 years, which was noticeably recorded four months following the initiation of a dolutegravir-based regimen and steadily increased over the next months. While aging is a well-studied risk factor for diabetes [37], the result could also be confounded by other factors that alter glucose metabolism in older age, such as changes in body composition and insulin resistance resulting in impaired physiological regulation [38]. In addition, it has been proposed that the insulin resistance in PLHIV receiving INSTIs could be caused by the chelation of magnesium, thereby inhibiting the release and signaling of insulin [39].

The change in BMI of participants was found to be small across the three time points. The mean weight was nearly equal between the first and second measures, with a statistically significant increase noted after the initiation of dolutegravir-based HAART. No stratification was considered in the last BMI (measured after starting dolutegravir), as 93.5% of the cases had completed the follow-up period and the number of events was small. A five-year retrospective study on the same ART combination reported an average weight gain of 6 kg [40], whereas the study by Bourgi et al. [41] documented the same magnitude of weight gain in the 18 months following dolutegravir. The observed slight change in weight gain in the current study might be affected by the short follow-up period of this study and the non-parametric nature of the distribution.

This study is the first of its kind in Ethiopia to provide a contextual understanding of diabetes in people living with HIV (PLHIV) receiving dolutegravir-based regimens as their first-line highly active antiretroviral therapy (HAART). All appropriate methodological considerations and assumptions have been taken into account to improve the reliability and validity of the findings. Despite thorough efforts and careful considerations, significant limitations may limit the generalizability of our findings. Firstly, no comparison group was considered, making it impossible to compare the effects of different treatments across groups. Secondly, the selection of study participants was based on a one-month inclusion period (PLHIV for whom dolutegravir was started or substituted in the same month and who were on the same combined regimen), which might have introduced selection bias, thereby influencing the outcome of interest and between-group analyses. Thirdly, the sample size considered in this study was small, and the effect estimates might not reflect occurrences in other contexts, making the generalization of findings challenging. Lastly, potentially important clinical and virological parameters, such as blood pressure level, random or fasting glucose levels, viral load, and CD4 count, were missing for the majority of the participants and were hence not assessed.
Conclusions

A noteworthy incidence of type 2 diabetes mellitus was observed among PLHIV receiving a first-line antiretroviral therapy (ART) based on dolutegravir at the current study site. The analysis revealed a statistically significant association between type 2 diabetes mellitus and age category, indicating a progressively higher risk with advancing age. Additionally, a slight increase in the body mass index (BMI) of participants was identified during the 12-month follow-up period following the initiation of dolutegravir.

Healthcare professionals should prioritize timely screening and continuous monitoring of blood glucose markers in this population. It is recommended that prospective studies be undertaken to precisely determine the incidence level, assess the potential role of prognostic factors, such as age, and evaluate the impact on weight gain associated with dolutegravir-based first-line regimens. Furthermore, methodological approaches incorporating proper controls, larger sample sizes, diverse populations, and extended follow-up periods are warranted for a more comprehensive understanding of the implications.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/venereology3020008/s1: Table S1: STROBE_checklist.

Informed Consent Statement: Informed consent was obtained from all participants involved in the study.

Figure 2. Distribution of BMI scores across three time periods among PLHIV treated with dolutegravir-based HAART at RDDMH, Addis Ababa, Ethiopia.

Figure 3. Pairwise comparison of the Wilcoxon signed rank test between three BMI scores among PLHIV receiving a dolutegravir-based HAART at RDDMH, Addis Ababa, Ethiopia.

Author Contributions: Conceptualization, T.S.; methodology, T.S.; validation, T.S., D.G., Z.S. and D.S.; formal analysis, T.S.; investigation, E.S. and A.I.B.; data curation, T.S.; writing-original draft preparation, T.S. and A.I.B.; writing-review and editing, T.S., A.I.B., D.G., D.S. and Z.S.; supervision, E.S.; project administration, T.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Saint Paul's Hospital Millennium Medical College (Ref. No: PM22/620).

Table 1. Characteristics of study participants who were started on a DTG-based HAART at RDDMH, Addis Ababa, Ethiopia. a Classification was based on the mean of the distribution.

Table 2. Test of independence between the incidence of T2DM and selected characteristics among PLHIV receiving a dolutegravir-based HAART at RDDMH, Addis Ababa, Ethiopia.

Table 3. Test of independence between the incidence of T2DM and age-stratified marital status among PLHIV receiving a dolutegravir-based HAART at RDDMH, Addis Ababa, Ethiopia.
Bound state spectra of three-body muonic molecular ions

The results of highly accurate calculations are presented for all twenty-two known bound $S(L = 0)$-, $P(L = 1)$-, $D(L = 2)$- and $F(L = 3)$-states in the six three-body muonic molecular ions $pp\mu$, $pd\mu$, $pt\mu$, $dd\mu$, $dt\mu$ and $tt\mu$. A number of bound state properties of these muonic molecular ions have been determined numerically to high accuracy. The dependence of the total energies of these muonic molecules upon the particle masses is considered. We also discuss the current status of muon catalysis of nuclear fusion reactions.

I. INTRODUCTION

In this study we consider the bound state spectra of the muonic molecular ions ppµ, pdµ, ptµ, ddµ, dtµ and ttµ. In this paper the notations p, d, t designate the nuclei of the hydrogen isotopes (protium, deuterium and tritium, respectively), while µ means the negatively charged muon µ−. Our main goal is to determine the total energies and other bound state properties of these muonic molecular ions to a numerical accuracy high enough to be sufficient for all current and anticipated future experimental needs.

In general, the bound state spectra of the six muonic molecular ions ppµ, pdµ, ptµ, ddµ, dtµ and ttµ can be separated into three different groups [2] on qualitative grounds. The first group includes the three light muonic molecular ions ppµ, pdµ and ptµ. Each of these systems has two bound states: one S(L = 0)-state and one P(L = 1)-state, where the notation L means the total angular momentum of the three-body system. Neither of these two states is weakly bound. Note that each of these light muonic molecular ions contains at least one protium nucleus. The second group includes the two 'intermediate' muonic molecular ions ddµ and dtµ.

It can be shown (see, e.g., [3]) that the total number of bound states in any muonic molecular ion a⁺b⁺µ⁻ is determined by the lightest nucleus in this ion. This explains why only three groups of different bound state spectra can be found among the six such ions. Moreover, it follows that there must be a similarity between the energy spectra of the 'protium' muonic molecular ions ppµ, pdµ and ptµ. An analogous similarity can be found in the bound state spectra of the ddµ and dtµ ions. It can be shown that in such 'families' of muonic molecular ions the symmetric ion, e.g., ppµ, always has the maximal binding energy of the three protium ions ppµ, pdµ and ptµ. By using these similarities between the bound state spectra in each of these 'families', one also finds a number of useful relations for the total and binding energies as well as for other bound state properties of different muonic molecular ions. For instance, let us assume that we know that the excited P-state in the ddµ ion is weakly bound and its binding energy is ≈ −1.9745 eV (see, e.g., [2]). From the similarity of the bound state spectra of the ddµ and dtµ ions one predicts that the corresponding excited P-state in the dtµ ion is also weakly bound and its binding energy is above −1.9745 eV.

Our labels for the bound states in muonic molecular ions are based on atomic LS-notations (see, e.g., [4]). Note that there is another classification scheme which is still in use for muonic molecular ions and which was originally introduced to classify the bound state spectra of adiabatic molecular ions, e.g., of the H₂⁺ molecular ion [5]. In this scheme each bound state is designated by its rotational J and vibrational ν quantum numbers, i.e., we have the (J, ν)-states.
The ground state in any muonic molecular ion is designated as the (0,0)-state, while the excited P-state in this scheme is denoted as the (1,1)-state, etc. Each of these classification schemes has its own advantages and disadvantages in applications to actual systems. Note also that there is a uniform correspondence between the 'atomic' and 'molecular' classification schemes.

II. THE HAMILTONIAN AND WAVE FUNCTIONS

As mentioned above, in this study we consider the bound state spectra of the six muonic molecular ions ppµ, pdµ, ptµ, ddµ, dtµ and ttµ. All particles which form such three-body ions are assumed to be point-like and structureless. Each of these three particles has a finite mass which equals one of the masses m_µ, m_p, m_d and/or m_t; the electric charges are q_µ = −1 and q_p = q_d = q_t = +1 (in muon-atomic units, where ħ = 1, m_µ = 1, e = 1). In muon-atomic units the Hamiltonian H of a three-body muonic molecular ion, e.g., of the a⁺b⁺µ⁻ ion, takes the form

$H = -\frac{1}{2 m_a} \nabla^2_a - \frac{1}{2 m_b} \nabla^2_b - \frac{1}{2} \nabla^2_{\mu} + \frac{1}{r_{ab}} - \frac{1}{r_{a\mu}} - \frac{1}{r_{b\mu}}$,   (2)

where the two masses m_a and m_b of the nuclei of the two hydrogen isotopes must be expressed in terms of the muon mass m_µ. In fact, in this study only the muon-atomic units (ħ = 1, m_µ = 1, e = 1) are used. Advantages of these units are discussed in Section IV below.

Our computational goal is to determine exceptionally accurate solutions, i.e., the eigenstates and corresponding wave functions, of the non-relativistic Schrödinger equation $H\Psi = E\Psi$, where E < 0 and the non-relativistic Hamiltonian H is written in the form of Eq. (2). In actual calculations the wave functions of muonic molecular ions are usually approximated with the use of different variational expansions. In this work we shall consider the exponential variational expansion in the relative coordinates r₁₂, r₁₃, r₂₃ [6]. Here and everywhere below in this work the notation r_ij = |r_i − r_j| = r_ji designates the relative coordinate between particles i and j. In many cases, however, it is very convenient to introduce three new variables u₁, u₂, u₃ which are called perimetric coordinates. They are simply related to the three relative coordinates: $u_i = \frac{1}{2}(r_{ik} + r_{ij} - r_{jk})$, and therefore $r_{ij} = u_i + u_j$, where (i, j, k) = (1, 2, 3). The perimetric coordinates are truly independent, and each of them varies from 0 to +∞. This significantly simplifies the derivation of the explicit formulas for all matrix elements needed in highly accurate computations of the bound states.
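The perimetric-coordinate relations quoted above are easy to verify numerically; a minimal Python sketch (the random geometry is illustrative only):

```python
import numpy as np

# Check of the relations u_i = (r_ik + r_ij - r_jk)/2 and r_ij = u_i + u_j
# for (i, j, k) = (1, 2, 3) and its permutations, on a random triangle.
def to_perimetric(r12, r13, r23):
    u1 = 0.5 * (r13 + r12 - r23)
    u2 = 0.5 * (r23 + r12 - r13)
    u3 = 0.5 * (r23 + r13 - r12)
    return u1, u2, u3

# Side lengths taken from random particle positions, so the triangle
# inequalities -- and hence u_i >= 0 -- hold automatically.
rng = np.random.default_rng(2)
pts = rng.normal(size=(3, 3))
r12 = np.linalg.norm(pts[0] - pts[1])
r13 = np.linalg.norm(pts[0] - pts[2])
r23 = np.linalg.norm(pts[1] - pts[2])

u1, u2, u3 = to_perimetric(r12, r13, r23)
assert min(u1, u2, u3) >= 0.0
assert np.isclose(r12, u1 + u2) and np.isclose(r13, u1 + u3) and np.isclose(r23, u2 + u3)
print("perimetric coordinates:", u1, u2, u3)
```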
The explicit form of the exponential variational expansion in perimetric/relative coordinates is

$\Psi_{LM} = (1 + \kappa \hat{P}_{21}) \sum_{i=1}^{N} C_i\, \phi_i(r_{32}, r_{31}, r_{21})\, Y^{\ell_1,\ell_2}_{LM}(\mathbf{r}_{31}, \mathbf{r}_{32}) \exp(-\alpha_i u_1 - \beta_i u_2 - \gamma_i u_3) \exp[\imath (\delta_i u_1 + e_i u_2 + f_i u_3)]$,   (3)

where C_i are the linear (or variational) parameters, α_i, β_i, γ_i, δ_i, e_i and f_i are the non-linear parameters and ı is the imaginary unit. The functions $Y^{\ell_1,\ell_2}_{LM}(\mathbf{r}_{31}, \mathbf{r}_{32})$ in Eq. (3) are the bipolar harmonics [7] of the two vectors r₃₁ = r₃₁ · n₃₁ and r₃₂ = r₃₂ · n₃₂. The bipolar harmonics are defined as follows [7]:

$Y^{\ell_1,\ell_2}_{LM}(\mathbf{x}, \mathbf{y}) = x^{\ell_1} y^{\ell_2} \sum_{m_1, m_2} C^{LM}_{\ell_1 m_1; \ell_2 m_2} Y_{\ell_1 m_1}(\mathbf{n}_x) Y_{\ell_2 m_2}(\mathbf{n}_y)$,   (4)

where $C^{LM}_{\ell_1 m_1; \ell_2 m_2}$ are the Clebsch-Gordan coefficients (see, e.g., [7]) and the vectors n_x = x/x and n_y = y/y are the corresponding unit vectors constructed for arbitrary non-zero vectors x and y. Also, in this equation L is the total angular momentum of the three-body system, i.e., $\hat{L}^2 \Psi_{LM} = L(L + 1) \Psi_{LM}$, while M is the eigenvalue of the $\hat{L}_z$ operator, i.e., $\hat{L}_z \Psi_{LM} = M \Psi_{LM}$. In actual calculations it is possible to use only those bipolar harmonics for which ℓ₁ + ℓ₂ = L + ǫ, where ǫ = 0 or 1. The first choice of ǫ (i.e., ǫ = 0) corresponds to the natural spatial parity χ_P = (−1)^L of the wave functions. The second choice (i.e., ǫ = 1) represents states with the unnatural spatial parity χ_P = (−1)^(L+1). In this work we shall consider only the bound states of natural parity, since only such states exist in real physical systems. An additional family of polynomial-type functions φ_i(r₃₂, r₃₁, r₂₁) is also used in Eq. (3) to represent the inter-particle correlations at short distances. In general, these simple polynomial functions allow one to increase the overall flexibility of the variational expansion Eq. (3). In our present calculations, however, these additional functions were chosen in the form φ_i(r₃₂, r₃₁, r₂₁) = 1 for i = 1, . . . , N. The operator $\hat{P}_{21}$ in Eq. (3) is the permutation of the identical particles in symmetric three-body systems, in which case κ = ±1; otherwise κ = 0.

In general, highly accurate computations of bound states in muonic molecular ions are not easy to perform, since there are bound states with different angular momenta L (L = 0, 1, 2 and 3) and some of these states are very weakly bound. Variational expansions used in highly accurate calculations must provide fast convergence rates for each of the bound states, including all weakly bound states. Note that the actual goal of many current calculations of muonic molecular ions is the computation of various bound state properties, rather than the energies. In general, the convergence rate for some of these properties, including many nuclear-nuclear expectation values, e.g., the expectation values which include the nuclear-nuclear delta-function δ₊₊ (e.g., the $\langle \delta_{++} \frac{\partial^n}{\partial r_{++}^n} \rangle$ expectation values for n ≥ 1), is substantially slower than for the total energies. This explains our current need for highly accurate wave functions. For instance, as follows from numerical calculations, to determine the $\langle \delta_{++} \frac{\partial^n}{\partial r_{++}^n} \rangle$ expectation values to an accuracy of ±1·10⁻⁸ one needs to use wave functions which provide an accuracy of ≈ 1·10⁻¹⁵ a.u. for the total energy. Such values are needed in computations of the lowest order relativistic and QED corrections.

A separate, but serious, problem for accurate computations of three-body systems is the adiabatic divergence described in [8]. This problem always appears when 'pure atomic' variational expansions are applied to two-center Coulomb systems and/or to systems close to them. For our systems this means that the convergence rates observed for the ppµ and pdµ ions are relatively high in comparison to the analogous convergence rates for the dtµ and ttµ ions, which are substantially slower. Nevertheless, variational calculations of bound states in muonic molecular ions are of great interest for the physics of few-body systems as well as in some applications. In general, the study of bound state spectra in muonic molecular ions has provided us with a large amount of very valuable information and has drastically improved our knowledge of the bound state spectra of arbitrary Coulomb three-body systems. Furthermore, all muonic molecular ions are three-body systems with unit charges. The energy spectra of such systems have many significant differences from known atomic spectra. In particular, any Coulomb three-body system with unit charges has a finite number of bound states [9]. The only exception to this rule is the ∞H₂⁺ ion, which has an infinite number of bound states [9].

At the first stage of our two-stage optimization procedure a compact short-term booster wave function is constructed; it contains N₀ basis functions and a relatively small number of non-linear parameters in them. All these parameters must be carefully optimized.
After such an optimization the short-term booster function provides 11-15 exact decimal digits for each bound state energy in the considered muonic molecular ions. It appears that the overall accuracy of such short-term wave functions is much better for the protium muonic molecular ions ppµ, pdµ, ptµ than for the heavier ions ttµ and dtµ. At the second stage of our optimization procedure the remaining 3(N − N_0) non-linear parameters in the wave function Eq. (3) are chosen quasi-randomly from three different boxes, or parallelotopes. The total energies and other bound state properties obtained with such trial wave functions depend upon the boundaries of these boxes. In reality the boundaries of these three boxes can be described [6] with the use of only 28 non-linear parameters. The numerical values of these 28 parameters were optimized approximately with the use of N = 800, 1000, 1200 and 1400 basis functions [6]. These values allowed us to determine the approximate limit of each of these 28 parameters as N → ∞ by extrapolation. These limiting values have been used in our final computations. The two-stage strategy proposed in [2] and described above allows one to obtain very accurate variational wave functions based on the use of exponents in relative and/or perimetric coordinates. In particular, the overall accuracy of our results obtained in this study for all considered muonic molecular ions (see Tables I, II, III and IV below) is significantly higher than the accuracy obtained for these ions in earlier calculations. Moreover, by using this optimization strategy we expect to be able to increase the accuracy in future calculations by a factor of ≈ 10^3 - 10^5, which would be sufficient for all anticipated theoretical needs. The described two-stage optimization procedure has been used in this study for all bound S(L = 0)- and P(L = 1)-states in muonic molecular ions, including all excited states. For the bound D(L = 2)-states in the heavy muonic molecular ions (ddµ, dtµ and ttµ) we used another approach, in which the short-term booster function is not constructed. However, it is clear that our current strategy for optimization of the non-linear parameters in the wave functions of the bound D(L = 2)-states is not optimal. In future studies we want to improve this strategy and produce highly accurate results for all bound D(L = 2)-states in muonic molecular ions. IV. VARIATIONAL ENERGIES OF MUONIC MOLECULAR IONS The results of our calculations of different bound states can be found in Tables I-VI, where m_e designates the electron mass. Note that our highly accurate computations in this study are performed with the use of 84-104 decimal digits per computer word [12], [13], allowing total energies to be determined to an accuracy ≈ 1·10^-20 - 1·10^-23 m.a.u. A natural and effective way to perform such calculations is to assume that all particle masses and corresponding conversion factors (e.g., the factor Ry below) are exact. Such assumptions are always made in papers on highly accurate computations in few-body systems (see, e.g., [14] and [15]). The known experimental uncertainties in particle masses and conversion factors are taken into account at the last step of the calculations, when the most accurate computations are simply repeated a few times with the use of different particle masses and conversion factors.
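Returning to the second stage of the optimisation procedure described earlier in this section, the sketch below illustrates the quasi-random drawing of exponents from three boxes. It is a minimal, runnable illustration only: the box boundaries are made-up numbers, a plain pseudo-random generator stands in for the authors' quasi-random prescription, and the coupling of the 28 meta-parameters to the box boundaries is not modelled.

import random

def draw_exponents(n_terms, boxes, seed=1):
    """Draw (alpha_i, beta_i, gamma_i) triples quasi-randomly from three
    boxes, mimicking stage 2 of the two-stage strategy; in the actual
    method the box boundaries are governed by 28 optimised parameters."""
    rng = random.Random(seed)
    params = []
    for i in range(n_terms):
        box = boxes[i % len(boxes)]          # cycle through the three boxes
        params.append(tuple(rng.uniform(lo, hi) for (lo, hi) in box))
    return params

# Three boxes with illustrative (invented) boundaries for alpha, beta, gamma.
boxes = [[(0.1, 2.0)] * 3, [(1.0, 8.0)] * 3, [(5.0, 30.0)] * 3]
print(draw_exponents(5, boxes)[:2])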
In a similar spirit, the lowest order relativistic and QED corrections can be determined as the expectation values of certain operators computed with our non-relativistic wave functions. To avoid a substantial loss of numerical accuracy during such computations these non-relativistic wave functions must be extremely accurate. Table I contains the total variational energies obtained for the ground S(L = 0)-states of the non-symmetric muonic molecular ions pdµ, ptµ and dtµ, while Tables II-V contain the analogous energies for the remaining ions and bound states. Note also that the total energies of the P(L = 1)-states of the non-symmetric muonic molecular ions pdµ, ptµ and dtµ and of the excited P*(L = 1)-states of the ddµ, ttµ and dtµ ions have recently been determined in [16]. The most accurate variational energies for all known 22 bound states in the set of six muonic molecular ions studied here (expressed in muon-atomic units) can be found in Table VI. Note that the F(L = 3)-state of the ttµ ion has not been re-calculated in this study; instead it has been taken from our earlier work [2], where this state was computed with the use of quadruple precision. As follows from Tables I-VI, the total energies obtained in this study for different bound states in the six muonic molecular ions are significantly more accurate than the corresponding energies computed in earlier studies (see, e.g., [2], [14] and [15]). The current wave functions are more compact and have better overall quality than the wave functions obtained in [2], [14] and [15]. They can be used for highly accurate computations of other bound state properties, including properties which contain singular expectation values. Our variational wave functions can be used to compute some bound state properties of muonic molecular ions. A large number of bound state properties have been computed in our earlier studies (see, e.g., [6] and references therein). However, some of the bound state properties could not be determined to high numerical accuracy, due to the relatively low accuracy of the wave functions used in earlier studies. It was clear that the expectation values of some nuclear-nuclear properties, e.g., all properties which include the nuclear-nuclear delta-function, needed to be re-calculated with more accurate wave functions. By using the highly accurate wave functions obtained in this work we have performed a numerical re-calculation of a number of bound state properties for different muonic molecular ions. The computed expectation values can be found in Tables VII and VIII. Results presented in Table VII illustrate this in the case of the dtµ ion. V. MASS SHIFTS IN MUONIC MOLECULAR IONS In Eq. (6) and Eq. (7) below all energies must be expressed in the same units, e.g., in atomic units or in muon-atomic units reduced to the same muon mass. Equation (6) describes, for a non-symmetric muonic molecular ion, the linear shift of the total energy under small changes of the particle masses; the same formula can be written for any non-symmetric muonic molecular ion. In such cases we always have α ≥ β, if the coefficient α corresponds to the mass shift produced by the heaviest nucleus. For symmetric muonic molecular ions, e.g., for the ddµ ion, the analogous formula is Eq. (7). In these equations the notation 'our' denotes the mass value used in this study, while the notation 'new' designates a different mass value, e.g., from some work performed in the future. In general, the numerical values of the coefficients α and β in Eq. (6) (also called the mass gradients) are determined from separate energy calculations with different masses. In our earlier works we have used a very simple approach based on four additional calculations with different masses for non-symmetric muonic molecular ions [17].
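Given the 'our'/'new' notation and the definition of the mass gradients α ≥ β above, Eqs. (6) and (7) plausibly take the following linear form; the exact form is an assumption reconstructed from the text, not quoted from the paper.

% Eq. (6): non-symmetric ion a b mu, with alpha the gradient of the
% heaviest nucleus
E^{\rm new} = E^{\rm our}
 + \alpha\left[\left(\frac{m_\mu}{m_a}\right)^{\!\rm new}
             - \left(\frac{m_\mu}{m_a}\right)^{\!\rm our}\right]
 + \beta \left[\left(\frac{m_\mu}{m_b}\right)^{\!\rm new}
             - \left(\frac{m_\mu}{m_b}\right)^{\!\rm our}\right]
% Eq. (7): symmetric ion, e.g. d d mu
E^{\rm new} = E^{\rm our}
 + \alpha\left[\left(\frac{m_\mu}{m_d}\right)^{\!\rm new}
             - \left(\frac{m_\mu}{m_d}\right)^{\!\rm our}\right]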
For symmetric muonic molecular ions one has to perform at least two additional calculations with two different mass ratios. This method is simple, but it is not very accurate. Recently, we have developed a more accurate procedure. To describe this procedure let us consider the ground P(L = 1)-state in the ddµ ion. The mass ratio m_µ(our)/m_d(our) is designated below as x_0, while the notation h stands for the difference between a slightly shifted mass ratio m_µ/m_d and x_0. The mass gradient in Eq. (7) can now be computed with the use of a finite-difference formula in which A is a numerical parameter of order unity. It can be shown that this parameter equals the fifth-order derivative of the total energy E with respect to the mass ratio m_µ/m_d. The corresponding numerical results can be found in Table IX. This approach was used in some earlier works, but it produces incorrect results in applications to bound states which can disappear (as bound states) during variations of some physical parameter, e.g., the mass of a particle. For such states one needs to use an alternative approach described in [18]. In this method an explicit expression, Eq. (9), is obtained for the total energy ε of the two-body 'almost unbound' system, in which the quantity U is expressed through ħ and the parameters of the effective two-body interaction. Note that for real muonic molecular ions the total energy of the two-body system ε given in Eq. (9) is, in fact, the binding energy of the three-body ion, e.g., the dtµ ion, which corresponds to the lowest-by-energy decay channel: (dtµ)+ → tµ(ground state) + d+. The results of our calculations for the P*(L = 1)-state in the ddµ ion can be found in Table X. In these calculations we have increased (at each step) the mass of the µ− muon (see above) by one electron mass m_e. As follows from Table X, the P*(L = 1)-state in the ddµ ion approaches its dissociation threshold as the muon mass increases. Analogous results for the weakly bound P*(L = 1)-state in the dtµ ion can be found in Table XI. In the computations performed for this Table we varied only the muon mass m_µ, while the deuterium and tritium masses have not been changed. Under such conditions the muon threshold mass was found to be equal to m̃_µ ≈ m_µ + 1.99 m_e. It is clear that the binding energy of the P*(L = 1)-state in the dtµ ion is a function of the two independent mass ratios m_µ/m_d and m_µ/m_t. Therefore, Table XI gives only an approximate picture of how the total and binding energies (in eV) of the P*(L = 1)-state in the dtµ ion vary when the muon mass changes. To study the pre-threshold mass dependence of the weakly bound P*(L = 1)-state in the dtµ ion in detail one also needs to consider changes of the deuterium and tritium masses. Such calculations, however, are very difficult to perform, since they require substantial computer resources. VI. MUON STICKING PROBABILITIES Originally, all numerical computations of bound states in muonic molecular ions were motivated by various problems of muon-catalyzed nuclear fusion. In fact, the bound state computation of muonic molecular ions is only one of many problems which must be solved before we can discuss the possibility of using muon-catalyzed nuclear fusion for energy production and for other purposes. It is clear that the most interesting and promising case is the muon catalysis of the (d,t)-nuclear reaction in the dtµ ion. A central problem here is to determine the muon sticking probability during the nuclear reaction dtµ → 4He + µ + n, since the numerical value of this coefficient essentially determines the feasibility of using muon catalysis of nuclear fusion reactions for energy production purposes.
Let us evaluate the muon sticking probabilities for this ion by assuming that the nuclear dt-fusion occurs only in the two S(L = 0)-states (ground and excited), ignoring the possibility of nuclear fusion in the P(L = 1), P*(L = 1) and D(L = 2) bound states of the dtµ ion. The analytical expression for the muon sticking probability for a bound S(L = 0)-state (the initial state is designated with the subscript 'in'; the final state with 'fi') takes the form of Eq. (10) [6] (see also [19] and [20]), where n and ℓ are the appropriate principal and angular quantum numbers of the final hydrogen-like (4Heµ)+ ion with radial function R_nℓ(r). The choice of the factor a in Eq. (10) is discussed below. The function j_ℓ(Qr) is the spherical Bessel function (see, e.g., [21]). The factor Q is fixed by the reaction kinematics, where m_n = 1838.683662 m_e is the neutron mass, ΔE is the total energy release during the nuclear (d,t)-reaction and M_4 = 7294.2296 m_e is the mass of the 4He nucleus. In the formulas presented above φ_in(a^-1 r) is the initial 'post-process' wave function, i.e., the wave function of the system which arises when the sudden process (i.e., nuclear fusion) is over. The function φ_in(a^-1 r) can be found from the bound state wave function Ψ of the initial three-body system. For instance, in the case of nuclear fusion in the S(L = 0)-state of the dtµ muonic molecular ion one finds an expression in which δ_21 is the nuclear delta-function and C_i are the linear variational parameters from Eq. (1). These coefficients have been determined during the numerical solution of the Schrödinger equation for the initial three-body system (see Section IV). Note also that after the 'sudden' nuclear fusion the new 4He nucleus arises at the same point '2'. This does not change the relative r_32 coordinate, which is mass independent. After the nuclear reaction the r_32 coordinate becomes the helium-muon relative coordinate. In two-body atomic problems, however, it is more convenient to use the mass-weighted coordinate r. The relation between the relative r_32 coordinate and the mass-weighted r coordinate (which corresponds to the helium-muonic ion) is linear, where in muon-atomic units m_µ = 1 and M_4 is the nuclear mass (in muon-atomic units) of the 4He nucleus. Finally, the initial wave function φ_in(a^-1 r) takes a closed form in which a^-1 = (1 + M_4)/M_4. For the muon and nuclear masses indicated above one finds that a^-1(4He) = 1.028346555. The formulas given above allow us to determine the muon sticking probabilities for the dt-fusion, assumed to proceed only from the two bound S-states of the dtµ ion. Let P be the exact sticking probability of the muon in the dtµ ion. The analogous value P_s is the muon sticking probability determined for the same dtµ ion, but only for its bound S(L = 0)-states. As follows from the discussion above, we can replace the factor κ = P^-1 by the approximate value κ ≈ P_s^-1. There are also a few other corrections which can change the numerical value of the factor κ = P_s^-1 ≈ 112. The largest of such corrections corresponds to the 'muon stripping' during collisions of the fast (4Heµ)+ ion with neutral hydrogen molecules. However, even such a correction cannot change the predicted value of κ by more than 40 % [1]. In other words, the maximal value of κ is ≈ 160-170. On the other hand, to reach break-even, i.e., to compensate for the energy spent on the creation of one µ− muon (≈ 8000 MeV [1], [22]), one muon needs to catalyze at least 2285 nuclear dt-reactions.
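Returning to the kinematical factor Q defined earlier in this section, the rough numerical illustration below assumes the sudden approximation: the 4He recoil momentum follows from two-body momentum balance with the neutron, and Q is the recoil velocity felt by the muon, expressed in muon-atomic units (m_µ = 1). The inserted values of ΔE ≈ 17.59 MeV and of the muon Hartree are my own inputs, not taken from the paper, so the result is only indicative.

import math

m_e_over_mu = 1.0 / 206.7682830        # electron mass in muon masses
m_n = 1838.683662 * m_e_over_mu        # neutron mass in m.a.u. (text value)
M_4 = 7294.2296 * m_e_over_mu          # 4He mass in m.a.u. (text value)
hartree_mu_eV = 27.211386 * 206.7682830  # one m.a.u. of energy, in eV (assumed)
dE = 17.59e6 / hartree_mu_eV           # d+t -> 4He+n energy release, m.a.u.

# Momentum balance: p_He = p_n = sqrt(2*mu*dE) with mu the reduced mass.
mu = m_n * M_4 / (m_n + M_4)
p = math.sqrt(2.0 * mu * dE)
Q = p / M_4                            # 4He recoil velocity = boost on the muon
print(f"Q ~ {Q:.2f} muon-atomic units")  # ~6, close to values used in the literature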
In this evaluation we have ignored all possible energy losses and assumed 100 % efficiency for each muon. In reality any thermal-to-electrical conversion has only ∼ 30 % efficiency and only ∼ 70 % of all muons can produce the maximal number of fusion reactions. With all these corrections one finds that the factor κ must be ≈ 8,000-11,000 to reach break-even. Such values are ≈ 65 times larger than the maximal value of κ which has been measured experimentally (κ ≈ 150). Even if in future experiments the numerical value of κ were increased up to 500, it would still be ≈ 20 times smaller than the value which is needed to reach break-even. This indicates clearly that muon catalysis of nuclear reactions cannot be used for energy production purposes. It should be mentioned that the idea to use µ− muons for the production of repetitive nuclear reactions between light nuclei of hydrogen isotopes was originally proposed more than sixty years ago [23]. Based on an obvious chemical analogy these processes were called the muonic catalysis of nuclear reactions. The effect was confirmed experimentally in [24] by observing two consecutive (p,d)-nuclear reactions catalyzed by the same muon. The first numerical computations of the bound states in three-body muonic molecular ions were performed by Belyaev et al. in 1959 [25], who found only 20 bound states in the six ions ppµ, pdµ, ptµ, ddµ, dtµ and ttµ. The overall accuracy of the procedure used in [25] was very low and the authors could not confirm the boundedness of the excited P*(L = 1)-states (or (1,1)-states) in the ddµ and dtµ ions. It was concluded only that, if such states are bound, then they are very weakly bound. The binding energy of these two states was expected to be smaller than 4.5 eV, i.e., smaller than the binding energy of a typical molecule. Immediately after the publication of [25] an intense stream of speculation started about a possible interference (or resonance) between the formation of the excited P*(L = 1)-states (or (1,1)-states) in the ddµ and dtµ muonic molecular ions and different atomic/molecular processes in the surrounding molecules (see, e.g., [26] and references therein). Finally, in a few experimental studies performed by Bystritskii et al. (see [27] and [28] and references therein) it was shown that one muon can catalyze approximately 10-20 (d,d)-nuclear reactions in liquid deuterium (D2) and 90-110 (d,t)-reactions in the liquid equimolar deuterium-tritium mixture (D2:T2 = 1:1). Such very large numbers of nuclear reactions catalyzed by one muon can be explained only by the resonance (or very fast) formation of the ddµ and dtµ muonic molecular ions. Correspondingly, these processes were called 'resonance' muon-catalyzed fusion of nuclear reactions, in contrast with the 'regular' muon-catalyzed fusion observed in [24]. In experiments performed in the 1980s the total number of nuclear reactions catalyzed by one muon (i.e., the numerical value of the factor κ defined above) for the equimolar deuterium-tritium mixture was evaluated as ≈ 150 (see the discussion and references in [1]). This value is ≈ 15 times smaller than the value which is needed for theoretical break-even and ≈ 65 times smaller than necessary for actual break-even. Therefore, we have to conclude that all discussion of a 'bright future' for applications of resonance muon-catalyzed fusion for energy production purposes appears to be groundless.
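The break-even figures quoted in the two preceding passages can be checked with back-of-the-envelope arithmetic. The assumption that only ≈ 3.5 MeV per fusion (roughly the alpha-particle share of the 17.6 MeV release) counts as recoverable is mine; it is chosen because it reproduces the quoted figure of 2285 reactions per muon.

muon_cost_MeV = 8000.0    # energy to produce one negative muon (from the text)
usable_per_fusion = 3.5   # assumed recoverable energy per d-t fusion, MeV
thermal_to_electric = 0.30  # conversion efficiency quoted in the text
muon_efficiency = 0.70      # fraction of muons reaching the maximal cycle count

ideal = muon_cost_MeV / usable_per_fusion               # ~2286 fusions per muon
real = ideal / (thermal_to_electric * muon_efficiency)  # ~10900 fusions per muon
print(f"ideal break-even:  kappa >= {ideal:.0f}")
print(f"with losses:       kappa >= {real:.0f}")   # upper end of 8,000-11,000
print(f"vs. measured kappa ~ 150: factor {real / 150:.0f} short")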
VIII. CONCLUSION We have considered the problem of highly accurate calculations of bound states in the three-body muonic molecular ions ppµ, pdµ, ptµ, ddµ, dtµ and ttµ. The study of bound state spectra in the muonic molecular ions is of interest for solving some theoretical problems and in a number of applications. In fact, our present knowledge of the bound state spectra in Coulomb three-body systems with unit charges is essentially based on knowledge of the spectra of the muonic molecular ions. Note that all muonic molecular ions can easily be created in real experiments and their various properties can be measured quite accurately. From a certain point of view, the theoretical and experimental study of these ions is more interesting and informative than the traditional analysis of atomic three-body (i.e., two-electron) systems. The results of variational computations of the total energies for various bound states in the muonic molecular ions are presented in Tables I-VI, with the most accurate values collected in Table VI. The first calculations of this kind were performed decades ago [29], [30], [31], [32]. In these works only S(L = 0)- and P(L = 1)-states of muonic molecular ions were considered. It is interesting to note that at that time the non-variational calculations (see, e.g., [33] and references therein) of muonic molecular ions had a comparable overall accuracy for many bound S(L = 0)- and P(L = 1)-states in muonic molecular ions. In addition, the non-variational methods allow one to determine the total energies of the bound D(L = 2)- and F(L = 3)-states in muonic molecular ions [33]. Our first variational calculations of muonic molecular ions started 25 years ago [34]. In particular, the first successful variational computations of the weakly bound P*(L = 1)-states in the ddµ and dtµ ions were performed in our work [34] and also in [35]. However, at that time we could not compute the bound D- and F-states in the ddµ, dtµ and ttµ ions. Furthermore, our maximal accuracy achieved at that time was relatively low. Currently, the same energies for all S(L = 0)- and P(L = 1)-states in muonic molecular ions obtained in [34] can be reproduced with the use of only 20-30 exponential basis functions in Eq. (3), with carefully chosen non-linear parameters α_i, β_i and γ_i in each basis function. The first variational computations of the bound D(L = 2)-states in the dtµ, ddµ and ttµ ions were performed in 1986 [36], while analogous calculations of the F(L = 3)-state in the ttµ ion were conducted 15 years later [2]. The bound D-state in the dtµ ion was also calculated (variationally) by Kamimura in 1988 [37]. By using our highly accurate wave functions we have determined the expectation values of some bound state properties of muonic molecular ions. We also discuss the problem of mass shifts in muonic molecular ions, including the case of weakly bound states. A separate (but very important) problem is to evaluate the muon sticking probabilities for the S(L = 0)-states in the dtµ ion. By using these probabilities we have estimated the total number of (d,t)-fusion reactions which can be catalyzed by one µ− muon in a liquid equimolar D2:T2 mixture. The current status of 'resonance' muon-catalyzed nuclear fusion is briefly discussed. It is shown that this process cannot be used for energy production purposes. (a) The best variational energies known from earlier calculations.
2011-01-08T21:33:02.000Z
2010-08-18T00:00:00.000
{ "year": 2011, "sha1": "50629996e65bd22241f77b58fce57e6c4bef2df2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1008.3010", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "50629996e65bd22241f77b58fce57e6c4bef2df2", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
201243365
pes2o/s2orc
v3-fos-license
Research on New Sprinkler Heat Insulation System Based on 3D Printing Technology Firstly, the mechanical structure of the sprinkler is designed in SolidWorks in this paper. The wire for the sprinkler is made of waste PET. Then the theoretical feasibility is demonstrated by simulation analysis with the ANSYS software, drawing on the thermodynamic theory of heating and heat dissipation. Finally, the practical feasibility of the new 3D printer nozzle insulation system is verified by a physical printing test. Introduction At present, most common plastic bottles are 'No. 1' PET plastic bottles. Recycling methods for PET plastic bottles abroad mostly adopt simple plastics regeneration, and some of them adopt pyrolysis-to-monomer technology. The recycling rate of domestic PET plastic bottles is only 42%. A large part of waste PET plastic bottles is treated by traditional methods such as incineration and landfill. At present, there are many problems in the recycling of PET plastics, such as waste of resources, pollution of the environment, and unstable performance of recycled products [1][2]. PET, the main raw material of waste plastic bottles, has good plasticity and is readily regenerated, and it is available from a wide range of sources [3]. Its unique properties have been proved to be effective in 3D printing. However, FDM forming technology lacks cheap, high-quality raw materials and is in need of such materials. At present, there are few studies on the combination of PET plastics and FDM moulding technology in China; existing work focuses mainly on the regeneration of PET and the development of new FDM materials. Waste PET plastic bottles have been studied for 3D printing, but because of the unique nozzle structure and high printing temperature, the heat dissipated from the nozzle heating block through the aluminium profile is excessive, and a lot of heat is wasted during the experiments. This not only increases the difficulty of temperature control and makes the extrusion process of the nozzle unstable; it also raises the working temperature of the motor and shortens its service life. Therefore, this paper mainly addresses the design of the thermal insulation system of this new printer. Firstly, the mechanical structure of the new printer nozzle is introduced. Then, the theoretical calculation and simulation analysis of the thermal insulation system are carried out. Finally, a physical test is performed. Because the PET material used is in flake form, it needs to be modified [4]. The existing printing nozzles cannot be used for printing it, so we designed a new type of sprinkler insulation system. The structure of the printing nozzle mainly includes a screw-feeding nucleating-agent adding and control mechanism, a screw extruding mechanism, and a heating and insulation system. Figure 1 shows the structure of the printing nozzle. Among them, the nucleating-agent feeding motor, fixing bracket, nucleating-agent feeding screw and nucleating-agent barrel constitute the nucleating-agent adding mechanism; the nozzle, feeding screw and driving motor constitute the positive-and-negative screw extruding mechanism; the heating block and nozzle constitute the heating system; the belt wheel, side plate with holes and heat insulation plate constitute the heat insulation system. Flake PET material enters the mixing barrel through the feeding port via the remote wire feeding device [5]. The mixing barrel is heated by the heating rod. After the device reaches a certain temperature, the PET material melts.
The rotation of the gear motor drives the rotation of the nucleating-agent adding screw. By controlling the speed of the motor, the nucleating agent is delivered quantitatively and steadily to the mixing container below. Introduction of the mechanical structure of the thermal insulation The extrusion screw is designed with positive and negative threads. The whole screw is divided into three sections. The upper and lower ends carry positive threads and play the role of extrusion feeding. The middle part carries reverse threads; during screw rotation it exerts an upward thrust on the molten PET, which achieves homogeneous mixing of the nucleating agent and the molten PET. The material is extruded from the nozzle by the downward extrusion of the screw. The driving motor provides power to the screw through a synchronous toothed-belt drive. Compared with a direct connection between the motor and the screw, this reduces the influence of the screw heat on the motor. The sprinkler heating system controls the temperature precisely by controlling the on-off state of the heating circuit. A temperature sensor is installed near the heating rod. When the temperature is lower than the preset temperature, the heating circuit is closed, the heating rod works, and the temperature of the heating block rises. When the temperature is higher than the preset temperature, the heating circuit is opened and the heating rod stops working. Through this feedback adjustment process the heating block is kept at a constant temperature, so that the nozzle can work continuously and normally. The insulation system adopts layer-by-layer insulation of the heating block, feeding screw and driving motor. A thermal insulation board is installed on the heating block to reduce the heat transfer from the heating block to the outer frame of the printer nozzle. The synchronous toothed-belt drive between the PET feeding screw and the motor, on the one hand, ensures the uniformity and stability of the motion transfer during the extrusion process and, on the other hand, lets the motor work at a lower temperature, prolonging its service life. Mechanical structure of the thermal insulation system The insulation system designed in this paper is a layer-by-layer insulation system comprising the heating block, side plate with holes, feeding screw and driving motor. Its characteristics are: prolonging the life of the motor and improving the performance of the sprinkler. Figure 2 shows the mechanical diagram of the insulation system. The thermal insulation system uses the insulation board as the first stage, the side plate with holes as the second stage, and the synchronous belt as the third stage. The heating block is not in direct contact with the mixing container when heating; a heat insulation plate is installed in between to reduce the heat transfer from the heating block to the outer frame of the printing nozzle, realizing the first stage of heat insulation. The side plate with holes accelerates the dissipation of heat from the heating block before it reaches the body, realizing the second stage of heat insulation. The feed screw and drive motor are coupled through a synchronous belt wheel, which reduces the heat absorbed by the driving motor and realizes the third stage of heat insulation. Thermodynamic Theoretical Analysis of Heating and Heat Dissipation The sprinkler heating system controls the temperature precisely by controlling the on-off state of the heating circuit to ensure the sprinkler works continuously and normally.
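The feedback regulation just described is a simple on-off (bang-bang) scheme. The sketch below is a minimal illustration of that control loop, not the authors' firmware: the 270 °C set-point is taken from the thermal analysis later in the paper, while the hysteresis band and the toy thermal model are my own assumptions.

SETPOINT_C = 270.0   # heating-rod set-point from the ANSYS analysis
HYSTERESIS = 2.0     # assumed dead band to avoid relay chatter

def control_step(temp_c, heater_on):
    """One iteration of the on-off controller described in the text."""
    if temp_c < SETPOINT_C - HYSTERESIS:
        return True       # below the band: close the heating circuit
    if temp_c > SETPOINT_C + HYSTERESIS:
        return False      # above the band: open the heating circuit
    return heater_on      # inside the band: keep the current state

# Toy simulation with a first-order thermal model (illustrative constants).
temp, heater = 25.0, False
for _ in range(600):
    heater = control_step(temp, heater)
    temp += (3.0 if heater else 0.0) - 0.01 * (temp - 25.0)  # heat in, losses out
print(f"steady temperature ~ {temp:.0f} C, heater {'on' if heater else 'off'}")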
Installation of a heat insulation board on the heating block can reduce the heat transfer from the heating block to the outer frame of the printing sprinkler and lets the motor work at a lower temperature, prolonging its service life. Surface Heat Dissipation Coefficient of the Sprinkler Heat Dissipation Module, hc (W/(m²·K)) The computational expression is hc = 10.45 − w + 10·√w. The symbolic meanings in the formula are: hc: convective heat transfer coefficient; w: air velocity (m/s). Owing to the different air velocities at different parts of the nozzle, w1 = 6.9 m/s, w2 = 3.7 m/s and w3 = 1.44 m/s, the heat dissipation coefficients calculated are hc1 = 30, hc2 = 25 and hc3 = 20. ANSYS simulation analysis ANSYS Workbench was used for the thermal analysis of the nozzle heating and cooling module. The material of the nozzle and heating block was brass with a thermal conductivity of 108.9 W/(m·K); aluminium alloy with a thermal conductivity of 155 W/(m·K), rubber polyurethane with a thermal conductivity of 25 W/(m·K), and, for the synchronous belt, stainless steel with a thermal conductivity of 16.2 W/(m·K) were also used. It is known that the melting temperature of PET is between 240 and 255 °C. The optimum heating temperature of the heating rod is obtained by thermal analysis. Through the analysis of different heating temperatures, it is concluded that when the heating rod temperature is set to 270 °C, the temperature in the mixing barrel reaches 250 °C, which is in the melting temperature range of PET. The temperature distribution contour plot (nephogram) is shown in Figure 3. Physical Printing Test According to the model and the preceding analysis, we have machined a physical prototype according to the virtual model and printed several physical objects with the 3D printer, with good results. Conclusion The 3D printing sprinkler is easily disturbed by various factors, which affects the continuity of extrusion. Especially in the case of unreasonable temperature control, the application quality and effect of the 3D printing sprinkler will be greatly reduced. Therefore, by analyzing the temperature of the 3D printing nozzle, the three-stage insulation system described above can prolong the life of the motor and improve the performance of the nozzle, and it can provide a design reference for research in related fields.
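Referring back to the heat-dissipation analysis above: the formula hc = 10.45 − w + 10√w is a common empirical correlation for forced air convection (valid roughly for 2-20 m/s); its use here is my reconstruction, chosen because it reproduces the three quoted coefficients to within about 1 W/(m²·K) after engineering rounding.

from math import sqrt

def h_conv(w):
    """Empirical convective coefficient for air, W/(m^2*K); w in m/s."""
    return 10.45 - w + 10.0 * sqrt(w)

for w, quoted in [(6.9, 30), (3.7, 25), (1.44, 20)]:
    print(f"w = {w:5} m/s -> hc = {h_conv(w):5.1f}  (paper uses {quoted})")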
2019-08-23T10:06:29.912Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "a9ee98bf5dea442633a1c3f7890a5fecf2c00885", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/295/3/032087", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4c03315b273061ec21641b9701a7ce422a65a518", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
234741825
pes2o/s2orc
v3-fos-license
Neutron--proton spin--spin correlations in the ground states of N=Z nuclei We present expressions for the matrix elements of the spin--spin operator $\vec S_{\rm n}\cdot\vec S_{\rm p}$ in a variety of coupling schemes. These results are then applied to calculate the expectation value $\langle\vec S_{\rm n}\cdot\vec S_{\rm p}\rangle$ in eigenstates of a schematic Hamiltonian describing neutrons and protons interacting in a single-$l$ shell through a Surface Delta Interaction. The model allows us to trace $\langle\vec S_{\rm n}\cdot\vec S_{\rm p}\rangle$ as a function of the competition between the isovector and isoscalar interaction strengths and the spin--orbit splitting of the $j=l\pm \frac{1}{2}$ shells. We find negative $\langle\vec S_{\rm n}\cdot\vec S_{\rm p}\rangle$ values in the ground state of all even--even $N=Z$ nuclei, contrary to what has been observed in hadronic inelastic scattering at medium energies. We discuss the possible origin of this discrepancy and indicate directions for future theoretical and experimental studies related to neutron--proton spin--spin correlations. Introduction The nuclear pairing mechanism [1] has been, for many years, a central subject of study in low-energy nuclear physics [2]. Although the energy gain of the nuclear system due to pairing is relatively modest, pairing correlations have a strong influence on many properties of the nucleus including the moment of inertia, deformation and excitation spectra [3]. The dominant pairing in almost all known nuclei with N > Z is that in which "superconducting" pairs of neutrons (nn) and protons (pp) couple to a state with angular momentum zero and isospin T = 1, known as isovector or spin-singlet pairing. However, for nuclei with N ≈ Z, neutrons and protons occupy the same single-particle orbits at their respective Fermi surfaces and Cooper pairs, consisting of a neutron and a proton (np), may form. These types of pairs may couple in either isovector or isoscalar (spin-triplet with J = 1 and T = 0) modes, the latter being allowed by the Pauli principle. Contrary to the case of nuclei with large isospin imbalance, where the spin-orbit suppresses pairing in the triplet channel, in nuclei with N = Z the isoscalar mode is expected to dominate. Since the nuclear force is charge independent, one would also expect that pairing should manifest equivalently for np pairs with T = 1 and S = 0, akin to nn and pp pairs. While there are convincing arguments for the existence of isovector np pairs, the existence of a correlated isoscalar np pair in condensate form, and the magnitude of such collective pairing, remains an intriguing and controversial topic in nuclear-structure physics [4]. Long-standing theoretical predictions of the onset of isoscalar pairing, the interplay between both pairing modes, and the presence of a condensate composed of both isoscalar and isovector pairs have remained without experimental confirmation [4,5,6]. This is mainly because the region of the nuclear landscape near the proton drip line, where such phenomena are expected to appear, is largely unreachable and because the experimental observables are either inconclusive and/or complicated to interpret. Two-neutron transfer reactions such as (p,t) and (t,p) have provided a key probe to understand neutron pairing correlations in nuclei [7,8]. 
The rapid quenching of np pairs as one moves away from N = Z [9] suggests that the transfer of an np pair from even-even to odd-odd self-conjugate nuclei could be a sensitive tool to study np correlations. Hence, reactions such as (3He,p) and (p,3He) are among the best choices [10,11]. A different and elegant approach has been proposed by the Osaka group [12] and consists of the study of neutron-proton spin-spin correlations in the ground states of N ≈ Z nuclei. The relevant observable is ⟨S_n · S_p⟩, the scalar product of the total spins of the neutrons and protons, which can be measured through spin-M1 excitations produced by inelastic hadronic scattering at medium energies. In Fig. 1 we illustrate why this quantity can inform us on the nature of the pairing condensate. It can be seen that, given the distinctive values in the two-particle system, ⟨S_n · S_p⟩ will also depend strongly on the type of pairs being scattered across the Fermi surface, as will be discussed in Sect. 3. In a series of experiments carried out at the RCNP facility [13], high-energy-resolution proton inelastic scattering at E_p = 295 MeV was studied on 24Mg, 28Si, 32S and 36Ar. The results give positive values of ⟨S_n · S_p⟩ ≈ 0.1 for the sd shell, suggesting a predominance of quasi-deuterons, somewhat at variance with the discussion above and with USD shell-model calculations, which are unable to reproduce the experimental results. However, shell-model wave functions that take into account an enhanced spin-triplet pairing seem to reproduce the measured spin-spin correlations [15]. Also, the no-core shell model with realistic interactions [16] predicts positive values (lower limits, due to convergence) that could be attributed to mixing with higher-lying orbits due to the tensor correlation. It seems clear to us that further work is required to fully assess the spin-spin correlation and its microscopic origin. For example: are the observed spin-spin correlations between neutrons and protons connected to (a) our beloved surface-pairing BCS condensate [1], (b) aligned np pairs [17], or (c) effects of the tensor force [18]? These are questions that remain to be answered. To shed light on these questions, we develop in this work the formalism to calculate the matrix elements of the S_n · S_p operator in a variety of coupling schemes and apply it to the solution of a schematic model consisting of nucleons in a single-l shell. In spite of its simplicity, the model allows us to study the behaviour of ⟨S_n · S_p⟩ as a function of the competition between the isovector and isoscalar components of the effective force between nucleons, and of the spin-orbit splitting of the j = l ± 1/2 shells. In Sect. 2 we discuss the structure of the S_n · S_p operator and we calculate its matrix elements in Sect. 3. In Sect. 4, following a short discussion of the model, we present and discuss our results for several cases involving particles occupying shells with l = 1 to 5 and contrast these with the experimental observations to date. Finally, Sect. 5 is devoted to the summary and conclusions of our work. 2 The S_n · S_p operator The S_n · S_p operator is given by Eq. (1), where the sums are over the neutrons and over the protons in the nucleus. Introducing the isospin projection operator t_z, which gives +1/2 acting on a neutron and −1/2 acting on a proton, we rewrite this operator as Eq. (2), where the sums are over all nucleons in the nucleus. It follows that S_n · S_p contains an isoscalar as well as an isotensor part.
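A plausible LaTeX reconstruction of Eqs. (1) and (2) from the definitions just given (the paper's exact normalisation is an assumption):

% Eq. (1): sums over the neutrons (i) and the protons (j)
\vec S_{\rm n}\cdot\vec S_{\rm p} \;=\;
  \Big(\sum_{i\in{\rm n}} \vec s_i\Big)\cdot\Big(\sum_{j\in{\rm p}} \vec s_j\Big)
% Eq. (2): with t_z = +1/2 on neutrons and -1/2 on protons,
% sums now running over all nucleons
\vec S_{\rm n}\cdot\vec S_{\rm p} \;=\;
  \sum_{i}\sum_{j}\Big(\tfrac12 + t_{z,i}\Big)\Big(\tfrac12 - t_{z,j}\Big)\,
  \vec s_i\cdot\vec s_j
% Expanding the product, the term linear in (t_{z,i} - t_{z,j}) cancels by
% the i <-> j symmetry of s_i . s_j, leaving an isoscalar piece proportional
% to S.S and an isotensor piece built from t_{z,i} t_{z,j} s_i . s_j,
% consistent with the statement above.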
Let us consider the case of nucleons occupying a single-l shell. We introduce the spin, isospin and spin-isospin operators in terms of the nucleon creation operators a † lm l 1 /2ms 1 /2mt and the modified annihilation operatorsã lm l 1 /2ms 1 /2mt ≡ −(−) l+m l +ms+mt a l−m l 1 /2−ms 1 /2−mt . The operators (3) are scalar with respect to the orbital angular momentum and generate Wigner's SU(4) supermultiplet algebra [19]. The representation (2) shows that S n · S p can be written as which proves that it is an element of the SU(4) algebra. The SU(4) tensor character of S n · S p is derived in the Appendix. 3 Matrix elements of the S n · S p operator One-body matrix elements of S n · S p vanish, lm l sm s tm t | S n · S p |lm l sm s tm t = lsjm j tm t | S n · S p |lsjm j tm t = 0. Two-body matrix elements can be derived in LS or in jj coupling, and in both cases in an isospin or in a neutronproton basis. Since S n · S p is a scalar in orbital angular momentum, spin and total angular momentum, the associated projections M L , M S and M J can be suppressed. It is, however, not a scalar in isospin and therefore its matrix elements depend on the projection M T . In an LST basis the two-body matrix elements are where it is assumed that L + S + T is odd. In a JT basis the two-body matrix elements are where it is assumed that J + T is odd and that the sum runs over odd L + S + T . Equation (7) can be applied if the two nucleons are in the same j shell. If the nucleons occupy an l shell, matrix elements of S n · S p are needed with one nucleon in the l + 1 /2 and the other in the l − 1 /2 shell. In this case it is more convenient to consider the problem in a neutron-proton basis. The expression for the two-body matrix elements of S n · S p is particularly simple in an LS-coupled neutron-proton basis, where the only non-zero matrix element is (l n 1 /2)(l p 1 /2)LS| S n · S p |(l n 1 /2)(l p 1 /2)LS = (−) S+1 3 2 In a jj-coupled neutron-proton basis the matrix element of S n · S p is j n j p J| S n · S p |j n j p J = (−) j n +jp+J j n S n j n j p S p j p j n j n 1 j p j p J , (9) in terms of the reduced matrix elements If the neutron and proton occupy the same shell, j n = j p ≡ j, the matrix element reduces to For the deuteron l = 0 and j = 1 /2, and one recovers the familiar values of − 3 4 for J = 0 (isovector or spin singlet) and + 1 4 for J = 1 (isoscalar or spin triplet). Finally, it is of use to find the reduced matrix elements in LST coupling of the separate isoscalar and isotensor parts of S n · S p . We write S n · S p = T (000) where the upper indices refer to the tensor character in LST and the lower indices to the projections M L M S M T . The following relations are valid where the double-barred matrix elements are reduced in L, S and T . With the help of the expressions (6) one deduces Schematic model We consider a single-l shell, corresponding to two j shells, j = l ± 1 /2, together with the schematic Hamiltonian where n ± are the number operators for the j = l ± 1 /2 shells and the last term represents a surface delta interaction (SDI). Following Brussaard and Glaudemans [22] we introduce the isoscalar and isovector strengths, a T = a T C(R 0 ), where C(R 0 ) is a radial integral, and we adopt the notation a x ≡ a 0 and a(1 − x) ≡ a 1 , so that x indicates the relative importance of both strengths. 
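As a quick numerical check of the two-body values quoted earlier in Sect. 3 (−3/4 for J = 0 and +1/4 for J = 1 in the deuteron limit), one can bypass the 9j/6j machinery and use the elementary identity s1·s2 = [S(S+1) − s1(s1+1) − s2(s2+1)]/2 for two spin-1/2 particles:

from fractions import Fraction

def s1_dot_s2(S):
    """<s1.s2> for two spin-1/2 particles coupled to total spin S."""
    s = Fraction(1, 2)
    return Fraction(1, 2) * (S * (S + 1) - 2 * s * (s + 1))

print("J=0 (spin singlet):", s1_dot_s2(0))   # -3/4, isovector np pair
print("J=1 (spin triplet):", s1_dot_s2(1))   # +1/4, isoscalar np pair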
We note that, as long as one considers a single-l or single-j shell, as is done in the following, results obtained with SDI are identical to those with a delta interaction, except for an overall scaling of the strengths. We also note that the additional terms introduced in the modified SDI, although important to reproduce nuclear binding energies [22], do not alter wave functions and therefore do not influence expectation values of S n · S p . For any combination of its parameters the eigenstates of the Hamiltonian (15) carry good angular momentum J and isospin T . For such eigenstates the expectation value is calculated of the operator S n · S p , which, as shown above, contains an isoscalar and an isotensor component. The spectrum of the Hamiltonian (15) depends on four parameters whereas relative energies are determined by the three parameters ∆ ≡ − − + , a and x. Eigenfunctions depend on only two dimensionless parameters ∆ /a and x, which are varied in order to study their influence on the expectation value of S n · S p . A bounded parameter can be defined as In most cases rapid changes in the expectation value S n · S p occur for |∆ /a| ≈ 5 (see Sect. 4.3). With the choice of 5 in the denominator this corresponds to |y| ≈ 0.5. In the convention of a positive strength a for an attractive force and with a spin-orbit interaction that favours the alignment of spin and orbital angular momentum Calculations can be restricted to the lower half of the l shell because the results for the upper half can be obtained through the application of a particle-hole transformation. The Hamiltonian (15) is not invariant under particle-hole conjugation since this transformation induces the change ± → − ± . In the (x, y) parametrisation introduced above the particle-hole transformation leaves x invariant and induces a sign change in y. We may therefore restrict calculations to the lower half of the l shell provided we extend the parameter domain to −1 ≤ y ≤ +1. This covers all possible parameter values for all possible nucleon numbers. A number of limiting cases of interest occur, which are illustrated in the next subsections. SU(4) symmetry If a 0 = a 1 and − = + (or x = 1 2 and y = 0), the Hamiltonian (15) conserves orbital angular momentum L, spin S, isospin T and in addition has an SU(4) symmetry. Since S n · S p can be written in terms of SU(4) generators, its expectation value in the ground state depends solely on the supermultiplet labels (λµν) and on (LST ) in the ground state. For example, even-even N = Z nuclei have the ground-state labels (λµν) = (000) and (LST ) = (000). Oddodd N = Z nuclei have a ground-state configuration with (λµν) = (010), which contains two degenerate states with (LST ) = (010) (isoscalar) or (001) (isovector). These labels completely determine the expectation value of S n · S p , which therefore is independent of the nucleon number. For a SDI all N = Z nuclei have L = 0 in the ground state. Denoting the ground state of an even-even N = Z nucleus as |l 4k L = 0ST and that of an odd-odd N = Z nucleus as |l 4k+2 L = 0ST , we conclude that the following expectation values are valid: l 4k 000| S n · S p |l 4k 000 = 0, N = Z even, As far as the expectation value of S n · S p is concerned, the ground state of an odd-odd N = Z nucleus therefore behaves as a deuteron by virtue of the SU(4) symmetry. LS coupling If a 0 = a 1 and − = + (or x = 1 2 and y = 0), the Hamiltonian (15) breaks SU(4) symmetry but conserves orbital angular momentum L, spin S and isospin T . 
The energy matrix associated with the Hamiltonian (15) can therefore be constructed in an LST basis. The S n · S p operator is not an LST scalar, however, since it has an isoscalar as well as an isotensor piece. Its matrix elements can be calculated from the application of the Wigner-Eckart theorem [20,21] l n LST M T | S n · S p |l n LST M T = 1 (2L + 1)(2S + 1)(2T + 1) l n LST T (000) l n LST The n-particle LST -reduced matrix elements of T (000) and T (002) can be related recursively to the two-particle matrix elements (14) by means of coefficients of fractional parentage (CFPs) in LST coupling. The above method has the advantage of requiring the diagonalisation of matrices of only modest dimension but it has the drawback that CFPs have to be calculated recursively in LST coupling for the total number of nucleons. It is therefore more efficient to consider the problem in a neutron-proton LS basis. In this basis matrices are still of reasonable dimension and CFPs can be evaluated for the neutrons and the protons separately. For example, for 5 (7) neutrons and 5 (7) protons in the d (f ) shell with L = 0, the dimensions are 26 (731) for S = 0 and 42 (1407) for S = 1. Figure 2 shows the expectation value S n · S p as a function of x = a 0 /(a 0 + a 1 ) in the ground state of N = Z nuclei, for two neutrons and two protons in a p, d, f or g shell and for even numbers of neutrons and protons in the d shell. The ground state has (LST ) = (000) for the entire parameter range. The expectation value S n · S p is 0 at x = 1 2 , its value in the SU(4) limit, and becomes more negative as the l of the shell and/or the number of nucleons increases. Note that for l = 0 two neutrons and two protons fill the s shell (not shown in Fig. 2) and S n · S p = 0, independent of the Hamiltonian. It should also be noted that S n · S p is invariant under the exchange of a 0 and a 1 . Although no data are available at present for odd-odd N = Z nuclei, for completeness we show in Fig. 3 S n · S p as a function of x in the yrast eigenstates with (LST ) = (001) and (010), for three neutrons and three protons in a p, d, f or g shell and for odd numbers of neutrons and protons in the d shell. For x > 1 2 the isoscalar interaction is dominant and the ground state has (LST ) = (010); for x < 1 2 the isovector interaction is dominant and the ground state has (LST ) = (001). Figure 3 shows S n · S p for both states over the entire range of values 0 ≤ x ≤ 1. For all x, S n · S p is below its value at x = 1 2 , where one recovers the SU(4) values − 3 4 and 1 4 for S = 0 and S = 1, respectively. As the l of the shell and/or the number of nucleons increases, S n · S p further decreases. Spin-orbit interaction The single-particle energies of the l + 1 /2 and l − 1 /2 shells are not expected to be degenerate and, because of the spin-orbit component of the nuclear interaction, the former is the lowest, + < − . We assume for simplicity in this subsection that the isoscalar and isovector strengths are the same, a 0 = a 1 . Figure 4 shows the expectation value S n · S p in the J = 0 ground state of the Hamiltonian (15) for two neutrons and two protons in a p, d, f or g shell, and for even numbers of neutrons and protons in the d shell. Two neutrons and two protons fill the s shell (not shown in Fig. 2) and S n · S p = 0, independent of the Hamiltonian. For − = + the quantum numbers L and S are not conserved and one has to revert to labelling states with their total angular momentum J. 
Given the definition (16), results for y → ±1 approach those for a single shell with j = l ± 1 /2. This explains some of the values observed in Fig. 4 at the limits y = ±1. For example, two neutrons and two protons fill the p 1/2 shell and therefore the p (black) curve in Fig. 4(a) necessarily must converge to 0 at y = −1. Likewise, four neutrons and four protons fill the d 3/2 shell and the k = 4 (red) curve in Fig. 4(b) converges to 0 at y = −1. Furthermore, particle-hole symmetry explains some of the results of Fig. 4. In a d 5/2 shell the ground state of a 2n-2p system is the particle-hole conjugate of that of a 4n-4p system Therefore the k = 2 (blue) and k = 4 (red) curves in Fig. 4(b) converge at y = +1. The spin-orbit term in the nuclear mean field is A dependent and, with use of its estimate given in Ref. [23], one finds a splitting of the spin-orbit partner levels of the order ∆ ≈ 10(2l + 1)A −2/3 MeV. The strengths of the SDI are also A dependent and a rough estimate is given in Ref. [22], a 0 ≈ a 1 ≈ 25A −1 MeV. We arrive therefore at the following estimate of the parameters of the schematic Hamiltonian (15): Application to the d shell with A 1/3 ≈ 3 leads to |y| ≈ 0.375 and we see from Fig. 4 that for such values the transition from SU(4) to the single-j regime is taking place. For completeness we show in Fig. 5 the expectation value S n · S p as a function of y in odd-odd N = Z systems. For y = 0 one recovers the SU(4) values of − 3 4 and 1 4 for J = 0 and J = 1, respectively. It is seen that systems that are self-conjugate under particle-hole symmetry, that is, three neutrons and three protons in a p shell [black curves Fig. 5(a,c)] or five neutrons and five protons in a d shell [red curves in Fig. 5(b,d)], display a mirror symmetry with respect to y = 0 axis. In the limit of infinite spin-orbit splitting, y = ±1, the model space is effectively reduced to one constructed out of a single-j shell. Particle-hole symmetry then relates k = 1 to k = 3 in the d 3/2 shell [black and blue curves at y = −1 in Fig. 5(b,d)] as well as k = 1 to k = 5 in the d 5/2 shell [black and red curves at y = +1 in Fig. 5(b,d)]. Single-j shells In the limit | − − + | → +∞ the problem is reduced to a single-j calculation. Figure 6 shows S n · S p in the J = 0 ground state of the Hamiltonian (15) for various even-even N = Z systems confined to a single-j shell. Whether the orbital angular momentum and the spin are aligned, j = l + 1 /2, or anti-aligned, j = l − 1 /2, has little influence on the results. The expectation value is slightly less negative in the latter case except for extreme (and unphysical) values of x. Finally, in Fig. 7 the x-dependence of S n · S p is illustrated for a variety of odd-odd N = Z systems confined to a single-j shell. For k = 1 one recovers the result (11), that is, − 11 36 and − 95 324 in the g 9/2 shell, and − 9 44 and − 95 484 in the h 9/2 shell, for J = 0 and J = 1, respectively. The slightly less negative values for j = l − 1 /2 as compared to those for j = l + 1 /2 in the general case can therefore be traced back to the expression (11) for two nucleons. The main conclusion from the analysis of the single-j-shell case is that the expectation value S n · S p in an yrast J = 0 or J = 1 eigenstate is found to be negative for all possible parameter values. It is relevant to point out the work in Ref. [24], where the isovector M 1 transitions in odd-odd N = Z nuclei are interpreted in terms of quasi-deuteron configurations. 
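Returning to the parameter estimates given earlier in this subsection: assuming the bounded parameter of Eq. (16) has the form y = (Δε/a)/(5 + |Δε/a|), consistent with the statement that |Δε/a| ≈ 5 corresponds to |y| ≈ 0.5, and assuming the strength a in the ratio is the total a0 + a1 ≈ 50/A MeV, the quoted d-shell figure |y| ≈ 0.375 is reproduced. Both assumptions are reconstructions, not quoted from the paper.

A = 27    # mass number with A**(1/3) ~ 3 (sd-shell region)
l = 2     # d shell
d_eps = 10.0 * (2 * l + 1) * A ** (-2.0 / 3.0)  # spin-orbit splitting, MeV
a = 2 * 25.0 / A                                 # a0 + a1 ~ 50/A MeV (assumed)
ratio = d_eps / a
y = ratio / (5.0 + abs(ratio))
print(f"d_eps = {d_eps:.2f} MeV, a = {a:.2f} MeV, "
      f"d_eps/a = {ratio:.2f}, |y| = {y:.3f}")    # -> |y| = 0.375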
Simple analytical expressions for B(M 1) transition strengths derived within a single-j shell approximation explain well the experimental data for both j = l + 1/2 and j = l − 1/2 cases for which large and small B(M 1)s are observed. Summary and outlook This study shows that there is no 'simple' explanation for the positive values of S n · S p as observed in the experiments reported in Refs. [12,13,14]. For all possible parameter values in the Hamiltonian (15) the expectation value S n · S p is found to be negative in the ground state of all even-even N = Z nuclei. Admittedly, the Hamiltonian (15) is of a schematic character and the analysis is carried out in a single-l shell. But our results show that the naive expectation that an increase of the isoscalar (spin-triplet) interaction strength leads to positive values of S n · S p is unfounded. Also the role of the spin-orbit term in the nuclear mean field is clearly established as it inevitably leads to more negative S n · S p values in even-even N = Z nuclei. The interpretation of the results for odd-odd N = Z nuclei is more intricate. While no yrast J = 0 state is found with positive S n · S p , this might occur for yrast J = 1 eigenstates. The present results call for a theoretical study in similar vein but with a more sophisticated schematic Hamiltonian. While realistic shell-model calculations are able to reproduce the observed spin-spin correlations [15,16], it would still be worthwhile to pinpoint the exact origin of the positive S n · S p values. The positive values of S n · S p , found experimentally in sd-shell nuclei [12,13,14], might be a consequence of mixing between configurations in the s and d shells, not considered in the present work. Alternatively, they might be due to a non-central, in particular a tensor, component of the nuclear interaction. As the tensor interaction to some extent acts as a negative spin-orbit term, it is yet not clear whether its effect on S n · S p is adequately represented in the schematic Hamiltonian considered in this work, although it could be partially captured in our dimensionless parameter y. Finally, the positive S n · S p values are perhaps the result of a combination of both effects, that is, of configuration mixing and the tensor component of the nuclear interaction. Note that the positive values seen in 4 He and 12 C, where the LS coupling scheme could be considered a good approximation, may indeed favour the important role of the tensor force. This study also shows the value of extending spin-spin-correlation experiments in two directions. One is towards odd-odd N = Z nuclei where the occurrence of J = 0, T = 1 and J = 1, T = 0 states at similar energies might give complementary information. In this regard, measurements on 6 Li and 14 N will be of much interest. Above 40 Ca, a program to study (p,p') scattering with radioactive beams in inverse kinematics, at facilities such as RIKEN [25], FRIB [26] and FAIR [27], is compelling. A second direction is to go slightly off the N = Z line. Since the S n · S p operator is a combination of isoscalar and isotensor parts, the measurement of its expectation value in the J = 0 ground state of an even-even N = Z + 2 nucleus as well as in its isobaric analogue state in the neighbouring N = Z odd-odd nucleus determines the separate pieces. Along this line, an approved experiment at iThemba [28] will extend the studies of Ref. [13] measuring the spin-spin correlations in the ground states of 46,48 Ti. 
For N > Z targets, a combination of (p,p') and (d,d') scattering is required to disentangle the IS and IV components of the M1 operator.
2021-05-18T01:16:23.782Z
2021-05-15T00:00:00.000
{ "year": 2021, "sha1": "3815adfc16128215f9deefcbc4dbafdcce32548f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2105.07267", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3815adfc16128215f9deefcbc4dbafdcce32548f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221019621
pes2o/s2orc
v3-fos-license
Sodium-coupled glucose transport, the SLC5 family, and therapeutically relevant inhibitors: from molecular discovery to clinical application Sodium glucose transporters (SGLTs) belong to the mammalian solute carrier family SLC5. This family includes 12 different members in human that mediate the transport of sugars, vitamins, amino acids, or smaller organic ions such as choline. The SLC5 family belongs to the sodium symporter family (SSS), which encompasses transporters from all kingdoms of life. It furthermore shares similarity with the structural fold of the APC (amino acid-polyamine-organocation) transporter family. Three decades after the first molecular identification of the intestinal Na+-glucose cotransporter SGLT1 by expression cloning, many new discoveries have evolved, from mechanistic analysis to molecular genetics, structural biology, drug discovery, and clinical applications. All of these advances have greatly influenced physiology and medicine. While SGLT1 is essential for fast absorption of glucose and galactose in the intestine, the expression of SGLT2 is largely confined to the early part of the kidney proximal tubules, where it reabsorbs the bulk of filtered glucose. SGLT2 has been successfully exploited by the pharmaceutical industry to develop effective new drugs for the treatment of diabetic patients. These SGLT2 inhibitors, termed gliflozins, also exhibit favorable nephroprotective effects and likely also cardioprotective effects. In addition, given the recent finding that SGLT2 is also expressed in tumors of pancreas and prostate and in glioblastoma, this opens the door to potential new therapeutic strategies for cancer treatment by specifically targeting SGLT2. Likewise, further discoveries related to the functional association of other SGLTs of the SLC5 family with human pathologies will open the door to potential new therapeutic strategies. We furthermore hope that the herein summarized information about the physiological roles of SGLTs and the therapeutic benefits of the gliflozins will be useful for our readers to better understand the molecular basis of the beneficial effects of these inhibitors, also in the context of the tubuloglomerular feedback (TGF) and the renin-angiotensin system (RAS). The detailed mechanisms underlying the clinical benefits of SGLT2 inhibition by gliflozins still warrant further investigation that may serve as a basis for future drug development. Expression cloning to unveil the molecular structure and function of sodium glucose cotransporters The concept that transport of glucose across the intestinal brush border membrane (BBM) requires an active mechanism that is achieved through coupling of transport to the inwardly directed Na+ gradient was already disclosed in 1960 [26]. It was subsequently refined and extended to cover active transport of a variety of molecules, including nutrients, neurotransmitters, metabolites, and electrolytes. These transporters are called cotransporters or symporters, and they usually link uphill solute transport to the cotransport of Na+ or H+ [70]. (This article is part of the special issue on Glucose Transporters in Health and Disease in Pflügers Archiv - European Journal of Physiology.) The primary sequences of the corresponding transporters, however, remained unknown until the late 1980s, when the expression cloning approach was developed, because the hydrophobic nature of these integral membrane proteins precluded their isolation in a form suitable for amino acid sequencing.
The needed approach was conceptualized based on the observations that micro-injection of mRNA from rabbit small intestine into Xenopus laevis oocytes (X. oocytes) stimulated Na+-dependent and phlorizin-sensitive uptake of 14C-α-methyl-D-glucopyranoside (14C-α-MG), the specific substrate of the intestinal sodium-glucose cotransporter, and that micro-injection of size-fractionated intestinal poly(A) RNA of about 2.4 kb specifically induced this transport function [68]. The X. oocyte expression system offered a convenient method for this purpose, as the relatively large cells (~0.8-1.3 mm diameter) easily allowed the micro-injection of poly(A) RNA (or cRNA derived from cDNA clones), and 3 days after injection, the encoded transporters could already be detected in the oocyte cell membrane using radio-isotope uptake studies or two-electrode voltage clamping (TEVC). A cDNA transcription library was then generated from the active size fraction, cDNA clones were screened functionally for their ability to induce uptake of 14C-α-MG, and a positive clone was identified, sequenced, and fully characterized in terms of its structure, function, and physiological roles [67]. Soon thereafter, the approach resulted in the identification of the primary structure of the human intestinal Na+/glucose cotransporter SGLT1 (SLC5A1) [67]. During the following years, expression cloning became the premier method to isolate transporter clones, as it did not require any DNA or antibody probes to screen cDNA libraries, only transporter-specific functional assays [141]. Indeed, this approach has revealed the structural and mechanistic foundations for transporters of iron (SLC11A2/DMT1/DCT1) [61], vitamin C (SLC23A1/SVCT1 and SLC23A2/SVCT2) [124], urea (SLC14A2/UT1) [206], glutamate (SLC1A1/EAAC1) [75], dibasic amino acids (SLC3A1/D2) [197], oligopeptides (SLC15A1/PepT1) [42], myo-inositol (SLC5A3/SMIT) [90], iodide (SLC5A5/NIS) [27], and the epithelial calcium channel TRPV6/CaT1 [124]. The human homolog of rabbit SGLT1 (SLC5A1) [69] and the kidney-specific human homolog SGLT2 (SLC5A2) [76,198] were subsequently identified. Using an expression cloning approach in which SGLT2 was co-expressed, an important activator of SGLT2, named MAP17, was furthermore identified [23]. Overall, it soon became apparent that the sodium glucose cotransporters (SGLTs) belong to a larger family of transporters, including members in lower organisms such as the Escherichia coli Na+/proline cotransporter PutP. Three decades after the initial cloning of the rabbit intestinal Na+-glucose cotransporter SGLT1, many new discoveries have been made in the field, from mechanistic analysis to molecular genetics, structural biology, drug discovery, and clinical applications. All of these advances have greatly shaped physiology and medicine [202].

Brief description of key properties, expression patterns, physiological and pathological implications of SLC5 family members

The human SLC5 solute carrier family includes 12 members. It is part of the sodium symporter family (SSS), which encompasses members from all kingdoms of life. The SSS family is also annotated in TCDB [147] as family #2.A.21, and the corresponding transmembrane domain in the Pfam database [37] as "SSF" ("sodium:solute symporter family," Pfam ID: PF00474). SLC5 members typically transport small solutes, such as sugars, vitamins, amino acids, or smaller organic ions such as choline or monocarboxylates (short-chain fatty acids).
Proposed evolutionary relationships between human SLC5 members are shown in Fig. 1. Sequences of the 12 human SLC5 members were downloaded from the UniProt database [181] and aligned with Clustal Omega [159,160]. Based on this multiple alignment, a phylogenetic tree was generated using PhyML 3.0 with smart model selection (SMS) [60,94] with default settings (Fig. 1).

(Fig. 1 caption: Phylogenetic tree of human SLC5 members. The physiological substrates of the individual transporters are indicated (SCFA: short-chain fatty acids). The phylogenetic tree was visualized with the Interactive Tree of Life (iTOL) server [95].)

The resulting tree shows a partitioning of human SLC5 proteins in accordance with substrate selectivity, whereby the sugar transporters appear to be more closely related to each other than to non-sugar transporters. Human SLC5A7, which is a Na+/Cl−-coupled choline transporter, seems to be more distantly related to the other SLC5 members.

SLC5A1/SGLT1

SGLT1 was the first member of the SLC5 family to be cloned [67,69], and it has been extensively studied during the past 4 decades [66,71,200,202]. Its main function is the absorption of glucose and galactose across the intestinal brush-border membrane. However, as outlined later in this review, it also plays a role in the reabsorption of these sugars in the kidney, where it can be found in the S2 and S3 regions of the proximal tubules [64]. Mutations in this gene lead to intestinal glucose/galactose malabsorption (GGM, OMIM #606824), a rare metabolic disorder that causes severe diarrhea that can be fatal unless glucose and galactose are removed from the diet [19,201]. Moreover, as reflected by its renal function, GGM patients present with mild renal glucosuria (see below), urinary tract infections, and calculus formation [19,200]. Due to the pivotal role of SGLT1 in glucose absorption in the intestine, which is accompanied by water absorption, a treatment promoting the activity of SGLT1, known as oral rehydration therapy (ORT), is one of the standard medical procedures to prevent and overcome secretory diarrhea-related complications [71]. Recently, SGLT1 has been proposed to contribute to a variety of further physiological processes, including glucose sensing in the brain [202], protection against pathogens in activated lymphocytes [10], and embryonic implantation [151]. SGLT1 transports the natural sugars glucose and galactose, but not fructose or mannose. It is also able to transport some non-metabolized glucose analogues, such as α-MG and 3-O-methyl-glucose, but not 2-deoxy-glucose [71]. Regarding the transport mechanism, sugar transport requires the cotransport of Na+, which occurs with a stoichiometry of 2 Na+ ions per transported sugar molecule [21,92]. In addition, it has been proposed that SGLT1 can work as a Na+ uniporter, and even as a water and urea channel [71].

SLC5A2/SGLT2

SGLT2 is located in the apical membrane of the early renal proximal convoluted tubule S1 segments [207], where it mediates the reabsorption of most of the glucose present in the glomerular filtrate [76,184,198]. Mutations in the gene encoding SGLT2 are responsible for familial renal glucosuria (FRG, OMIM #233100) [109,174], a disorder that results in loss of glucose in the urine despite normal blood glucose levels [19].
In fact, due to its highly specialized role in proximal tubule glucose reabsorption, SGLT2 has been intensively studied by the pharmaceutical industry as a therapeutic target to control glucose levels in diabetic patients [155,170], as will be explained in detail later. Moreover, SGLT2-targeting drugs have received increasing attention due to their cardioprotective effects in diabetic patients and, thus, their possible additional use as pharmaceutical tools to prevent heart failure [187]. While the expression of SGLT2 is mainly restricted to the early proximal tubules of the kidney, its expression has interestingly also been detected in pancreas, prostate tumors, and glioblastoma, which opens the door to potential new strategies for cancer treatment by targeting SGLT2 in those tissues [202]. Numerous in vivo experiments in the 1980s highlighted the existence of a low-affinity glucose transport system in the early proximal tubules [7]. Subsequently, SGLT2 was successfully cloned and characterized in the early 1990s [76,198]. However, due to the low signal-to-noise ratio observed for protein activity in the different expression systems tested [71,76,207], an accurate and comprehensive description of its transport mechanism remained elusive for more than 2 decades. Nevertheless, recent studies revealed that co-expression of SGLT2 with MAP17, a small protein that interacts with SGLT2, greatly enhances its transport function [23]. This finding made it possible to confirm previous observations [71] that SGLT2 is a low-affinity high-capacity transporter, very selective for glucose, inhibited by phlorizin, and that the Na+ to glucose coupling stoichiometry is 1:1 [23]. Mechanistic details of SGLT2 transport function are therefore expected to follow soon, which will improve our molecular understanding of its physiological and pharmacological properties.

SLC5A4/SGLT3 (also known as SAAT1)

Expression of human SGLT3 has been described in enteric neurons of the intestinal epithelia and in neuromuscular junctions of the skeletal muscle [35]. Interestingly, when expressed in X. oocytes, this protein has been shown to be unable to transport glucose, yet glucose binding still induces currents through SGLT3-mediated depolarization of the plasma membrane resting potential. SGLT3 binds glucose with low affinity (Km = 20 mM), while galactose, fructose, and mannitol do not interact with the protein. The substrate-induced currents through SGLT3 are Na+-dependent, increase in the presence of H+, and are specifically inhibited by phlorizin [35]. Imino sugars were described as potent and specific activators of the electrogenic properties of SGLT3. Interestingly, imino sugars are also used to treat type 2 diabetes and lysosomal storage diseases based on their inhibition of α-glucosidases and glucosyltransferases [211]. Imino sugars may serve as tool compounds to explore the precise physiological role of SGLT3 [190]. Due to its location in the enteric neurons and its particular transport mechanism, it has been proposed that SGLT3 works as a glucose sensor, the activity of which might regulate intestinal motility in response to glucose [35]. Likewise, it has been speculated that it could regulate skeletal muscle activity by depolarizing neuromuscular junction cells in response to glucose [35]. Recent studies provided evidence for expression of SGLT3 in human kidney, where it might contribute to Na+ transport in proximal tubules [88].
Also, whole-exome sequencing identified a genetic variant of SGLT3 that disrupts glucose-induced sodium conductance and is present in some patients affected by attention deficit/hyperactivity disorder (ADHD). The co-segregation of the SGLT3 variant and the ADHD phenotype was, however, imperfect [154]. Taken together, while the functional properties of SGLT3 have been studied by several investigators, its physiological role is still the subject of debate [165]. The glucose-sensing mechanism that has been proposed based on in vitro studies, showing that SGLT3 generates membrane currents in the presence of high concentrations of glucose, still lacks in vivo validation. Also, the tissue localization of this protein needs to be clarified to help predict its physiological role. In addition, the role of imino sugars should be further investigated. The lack of a human disease phenotype linked to transporter malfunction and the absence of appropriate transgenic animal models and inhibitors have further slowed the definition of its true physiological role.

SLC5A9/SGLT4

The expression of SGLT4 in humans has been detected in the small intestine and kidney [171]. Functional studies using COS-7 cells overexpressing SGLT4 revealed that it is a Na+-dependent α-MG transporter (Km = 2.6 mM). Moreover, transport of α-MG was inhibited by mannose and glucose, and to a lower extent by fructose and the metabolite 1,5-anhydro-D-glucitol (1,5-AG). Direct measurements of mannose uptake by SGLT4 and the capacity of mannose to inhibit SGLT4-mediated α-MG transport (IC50 = 0.15 mM) indicated that SGLT4 is a Na+-coupled mannose transporter [171]. Three rare variants of this gene were identified in patients with proliferative diabetic retinopathy, and it was suggested that this protein is expressed in retinal endothelial cells, where it may play a role in the pathogenesis of this disease [180]. Recent genetic studies revealed that SGLT4 is expressed in kidney, pancreatic, and colorectal tumors, while not being expressed in the matched normal tissues [49]. Similarly, a genome-wide association study identified a SNP near the SGLT4 locus that affects susceptibility to colorectal cancer development [44]. Overall, the lack of detailed functional information, together with limited information regarding its expression profile, leaves the physiological role of this transporter poorly understood.

SLC5A10/SGLT5

SGLT5 has been shown to be highly expressed in human kidney [58]. Immunolocalization revealed apical expression in the proximal straight tubules [54]. Functional studies using SGLT5-overexpressing TREX HEK293 cells revealed that it is a mannose (Km = 0.45 mM) and fructose (Km = 0.62 mM) transporter that shows the typical functional characteristics of the SLC5 glucose transporter family members, such as inhibition by phlorizin and Na+ dependence of the transport process [58]. Due to its selectivity toward fructose and mannose, it has been speculated that it is responsible for reabsorption of these sugars from the glomerular filtrate. In support of this idea, studies of Slc5a10 knockout mice showed an increased loss of fructose in the urine, without affecting the plasma levels of fructose [46]. Furthermore, studies of the renal reabsorption of fructose in rat proximal tubules revealed that this process can be blocked to a large extent by phlorizin and that it is Na+-dependent [54].
In addition, recent genome-wide association studies revealed that genetic variations of SGLT5 are associated with altered 1,5-anhydroglucitol (1,5-AG) blood levels [96]. 1,5-AG is a monosaccharide found in nearly all foods; its blood concentration decreases during times of hyperglycemia and, within a couple of weeks, returns to normal levels in the absence of hyperglycemia. Rare loss-of-function mutations of SGLT5 have been associated with lower levels of 1,5-AG, indicating that SGLT5 reabsorbs this monosaccharide [100]. In healthy individuals, the 1,5-AG level is kept relatively constant through intestinal absorption and renal reabsorption. Interestingly, renal reabsorption of 1,5-AG is inhibited by competition with glucose [203]. Thus, during hyperglycemia, when the kidney cannot reabsorb all glucose via SGLT1 and SGLT2, 1,5-AG reabsorption, presumably via SGLT5, is inhibited, and 1,5-AG levels in the blood therefore decrease. Once the hyperglycemia is corrected, 1,5-AG begins to be reabsorbed from the kidney back into the blood at a steady rate, and if a person's glucose levels remain below 10 mM for approximately 4 weeks, 1,5-AG will return to its normal levels. The 1,5-AG test is currently FDA-approved for diabetes patients to measure 1,5-AG levels in the blood, in order to determine the history of hyperglycemic episodes [203], complementary to HbA1c and fructosamine tests. Despite all these interesting recent findings, only limited information is available about the role of the SGLT5 transporter in health and disease and about the extent to which its functional activity affects the 1,5-AG diabetes test. SGLT2 inhibitors, which lower blood glucose and produce glycosuria, are likely to cause abnormal circulating 1,5-AG levels and, thus, could interfere with the diabetes test.

Other SLC5 family members

The SLC5 family comprises 12 members, including the above-described sugar transporters (SGLT1-5), in addition to transporters for other substrates, such as the myo-inositol (SLC5A3 and SLC5A11), iodide (SLC5A5), monocarboxylate (SLC5A8 and SLC5A12), choline (SLC5A7), and vitamin (SLC5A6) transporters. All these membrane proteins share a common transport mechanism, which uses the Na+ electrochemical gradient to sense their substrates (SGLT3) or to take them up into cells.

SLC5A3/SMIT1 and SLC5A11/SMIT2

The SLC5A3 Na+/myo-inositol transporter, known as SMIT1, was found to be widely expressed in humans [9], with prominent expression in the intestine and brain [5]. In addition to myo-inositol (K0.5 = 50 μM), SMIT1 also transports D-glucose with low affinity [200]. Studies with SMIT1 knockout mice revealed that it plays an essential role in osteogenesis, bone formation, and maintenance of bone mineral density [28]. SMIT2 was identified as a major gene responsible for the syndrome of infantile convulsions and paroxysmal dyskinesia (ICCA syndrome), as well as for benign familial infantile convulsions (BFIC) [140]. As observed for SMIT1, SMIT2 is expressed in a wide variety of human tissues [140], but in contrast to SMIT1, SMIT2 does not transport glucose [97]. Genetic studies have shown that SMIT2 interacts with immune-related genes and seems to be involved in certain immune effects. Accordingly, it was proposed that SMIT2 could function as an autoimmune modifier [176]. Recent studies revealed a strong correlation between the expression of the myo-inositol transporters SMIT1 and SMIT2 and psychiatric diseases such as
schizophrenia and bipolar disorder, and it was suggested that alterations in their expression in specific brain regions account for the symptoms of these diseases [188]. Another recent study showed an altered expression pattern of SMIT1 and SMIT2 in the sciatic nerve and dorsal root ganglia in an experimental diabetes model, which may play a role in the pathogenesis of diabetic neuropathy [41].

SLC5A5/NIS

The Na+/iodide transporter SLC5A5, also known as NIS, is highly expressed in the thyroid gland, where it mediates accumulation of iodide (I−), which is required for the biosynthesis of the thyroid hormones T3 and T4 [30]. NIS is also expressed in non-thyroid tissues, such as salivary glands, stomach, lactating breast, and primary and metastatic breast cancer [136]. In addition, NIS is expressed in the small intestine, where it contributes to the absorption of dietary I− [114]. In addition to I−, NIS also transports thiocyanate (SCN−) and chlorate (ClO3−) with a stoichiometry of 2 Na+:1 substrate. Interestingly, it also transports the pollutant perchlorate (ClO4−), although with a stoichiometry of 1:1 [123]. Mutations in the SLC5A5 gene lead to a condition known as I− transport defect (ITD), which reduces the accumulation of I− in the thyroid and results in hypothyroidism [136]. Due to its pivotal role in I− absorption, NIS is the major target for diagnosis and therapy of thyroid cancer using iodide radioisotopes, whereby its expression level is crucial for tumor prognosis. The relatively high expression of NIS in breast cancer also makes it a potential target for the treatment of this disease [214].

SLC5A8/SMCT1 and SLC5A12/SMCT2

SMCT1 is a high-affinity membrane transporter of lactate and can also mediate the uptake of other monocarboxylates such as pyruvate, butyrate, propionate, and acetate [105]. SMCT1 expression can be found in the colon and the kidneys and, to a lower extent, in the brain and retina [48]. In the colon, SMCT1 is postulated to be responsible for the apical uptake of short-chain fatty acids (SCFA) such as acetate, propionate, and butyrate generated by bacterial fermentation of dietary fiber [11,162]. SMCT2 is a low-affinity membrane transporter of lactate, also transporting other monocarboxylates, including pyruvate and nicotinate [55]. SMCT1 and SMCT2 are both Na+-coupled. SMCT1-mediated transport is electrogenic with a Na+ to SCFA stoichiometry of 2:1, whereas SMCT2-mediated transport is electroneutral (Na+ to SCFA stoichiometry of 1:1) [162]. Moreover, SMCT1 has been shown to function as a tumor suppressor gene for colon, thyroid, stomach, kidney, and brain tumors [48]. SMCT1 and SMCT2 are both expressed in the apical membranes of the intestine and kidneys. However, while SMCT1 is found mainly in the colon and outer kidney cortex, SMCT2 is present in the proximal parts of the intestinal tract and in both kidney cortex and medulla [166]. Kidney-specific ablation of the expression of SMCT1 and SMCT2 resulted in a marked increase in urinary loss of lactate and a decrease in blood lactate levels, indicating that these transporters might be responsible for renal lactate reabsorption [172]. A recent study of protein-protein interactions revealed that the PDZK1 adaptor protein is a binding partner of both SMCT1 and SMCT2 and additionally identified a molecular complex of SMCT1-PDZK1 and the urate transporter URAT1 (SLC22A12) (see "transporter complex" in Fig. 4a on the right).
This suggests a possible role of SMCT1 in urate reabsorption in the kidney [167]. Another recent study showed expression of both SMCT1 and SMCT2 in the pancreas and demonstrated that SMCT1 function can be regulated by insulin [101].

SLC5A7/CHT1

CHT1 is a high-affinity Na+/choline transporter (Km = 2 μM) that is exclusively expressed in tissues containing cholinergic neurons. There, its transport activity constitutes the rate-limiting step for acetylcholine synthesis [84,118]. In contrast to the other members of the SLC5 family, the transport process mediated by CHT1 is Cl−-dependent and regulated by extracellular pH [72]. CHT1 resides in intracellular compartments and is translocated to the plasma membrane in response to neuronal activity [62]. CHT1 knockout mice show normal embryonic development but die after birth due to defective cholinergic neurotransmission [43]. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies [152]. A single nucleotide polymorphism in the SLC5A7 gene with high prevalence in the Asian population, resulting in a replacement of isoleucine by valine in the third transmembrane domain, has been reported to decrease the ability of CHT1 to transport choline by about 50%, thereby representing a risk factor for cholinergic dysfunction in this population [62].

SLC5A6/SMVT

SMVT is a Na+-dependent transporter of the water-soluble vitamins pantothenic acid, biotin, and α-lipoic acid, the latter being a cofactor of several enzymes such as pyruvate dehydrogenase. Transport through SMVT is electrogenic and occurs with a stoichiometry of 2 Na+ to 1 vitamin molecule [177]. Like NIS (SLC5A5), to which it shows a high degree of sequence identity, SMVT transports I− as well [29]. SMVT is ubiquitously distributed in the human body; however, stronger expression levels are found in absorptive tissues such as the intestine, the kidney, and the placenta [132]. Studies of intestine-specific Slc5a6 knockout mice indicated that SMVT is indispensable for intestinal biotin uptake [52]. In addition, SMVT was found to be required to maintain normal mucosal integrity [145]. Furthermore, it was shown that the defects due to the absence of SMVT in the knockout mouse intestine can be reversed by biotin and pantothenic acid supplementation [146]. A recent study described a neurodegenerative disorder arising as a consequence of a biallelic mutation in the SLC5A6 gene, which could be clinically improved by "triple vitamin" (biotin, pantothenate, and lipoate) replacement therapy [17]. Due to its broad tissue distribution and substrate range, SMVT is exploited as a drug delivery system to increase the bioavailability of prodrugs designed as conjugates of SMVT substrates with drugs. For example, a biotinylated lipid prodrug of acyclovir has been generated for improved cellular uptake via SMVT in corneal epithelial cells, to be used for the treatment of herpes simplex virus keratitis [182].

Description of the transport properties of SGLT1 to SGLT5

The sodium-dependent glucose transporters, or SGLTs, are secondary active transporters present in the plasma membranes of different intestinal and renal epithelial cells. As already alluded to, they are able to transport their substrates against their concentration gradients by using the energy provided by the inwardly directed Na+ electrochemical gradient generated by the Na+/K+-ATPase.
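How much uphill transport this gradient can sustain follows from simple thermodynamics: at the static head (equilibrium), the free energy gained from substrate accumulation exactly balances the free energy dissipated by the downhill movement of the coupled Na+ ions. A worked example with standard textbook values (the concrete numbers below are illustrative assumptions, not measurements from the studies cited in this review):

```latex
% Maximal accumulation ratio for a symporter coupling n Na+ ions to each
% substrate molecule S, at membrane potential V_m (inside negative):
\frac{[S]_{\mathrm{in}}}{[S]_{\mathrm{out}}}
  \;\le\;
  \left(\frac{[\mathrm{Na}^+]_{\mathrm{out}}}{[\mathrm{Na}^+]_{\mathrm{in}}}\right)^{\!n}
  \exp\!\left(\frac{-\,n\,F\,V_m}{R\,T}\right)
```

With a 10-fold inward Na+ gradient, V_m = -60 mV, and RT/F ≈ 25.7 mV, a 1:1 transporter (n = 1, SGLT2-like) can accumulate its sugar at most about 10 × e^{2.3} ≈ 100-fold, whereas a 2:1 transporter (n = 2, SGLT1-like) can reach roughly (10 × e^{2.3})^2 ≈ 10,000-fold. This squaring of the driving force is the thermodynamic rationale for the different coupling stoichiometries described in the following paragraphs.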
Their transport process follows the so-called alternating access model [73], which means that the substrate binding site is alternately exposed to the two sides of the plasma membrane (Fig. 2). Given the positive charge of the Na+ ions, the transport process is electrogenic and therefore induces depolarization of the plasma membrane resting potential. Thus, electrophysiological methods have been widely used to characterize the functional properties of these transporters [71]. These and other functional studies, together with the structural information obtained from the homolog vSGLT from Vibrio parahaemolyticus [40], have revealed the mechanistic details of the transport cycle of these transporters [31] and uncovered a kinetic model for the well-studied SGLT1 [202].

SLC5A1/SGLT1

According to the latest version of this kinetic model for SGLT1 (Fig. 2) [202], there are five different steps within the transport cycle. In the first step, 2 extracellular Na+ ions bind to the protein, which leads to the opening of the first of the two proposed structural gates, the external gate, allowing the transit of a glucose molecule through the external vestibule to a binding site located within the core of the protein (step 2). When both Na+ and glucose are bound, the external gate closes and the protein is in an occluded state, in which the binding site is not accessible from either side of the membrane (step 3). In step 4, the inner gate opens, allowing the release of glucose and Na+ through the internal vestibule to the cytosol. Finally, to complete the transport cycle, the empty transporter returns to the initial conformation (step 5). This transport cycle has been shown to be reversible, and the direction and transport rate are dependent on the transmembrane Na+ concentration gradient and the electric potential [133]. There is asymmetry in sugar kinetics and specificity between the forward and reverse transport modes, in line with the physiological role of the transporter to accumulate sugars within the cells [39]. As previously mentioned, the transport stoichiometry is 2 Na+:1 glucose per cycle for SGLT1, which has been determined by measuring the reversal potential of the transport-induced currents [21] and by combining electrophysiological and radiotracer flux measurements [76,102]. It is remarkable that in the absence of glucose, there is a specific Na+-leak current through SGLT1, indicating that this protein can also function as a Na+ uniporter. Moreover, it has been shown that SGLT1 can transport water passively. However, while the Na+ uniport requires conformational changes and is saturable, water permeation shows a behavior similar to that of water channels [98]. Interestingly, water permeation through SGLT1 is independent of the Na+ and glucose concentration gradients but can be blocked by phlorizin. This property has recently been validated experimentally, since it has been shown that water permeability through SGLT1 can be altered by mutating residues lining the sugar transport pathway [213]. Likewise, it has been shown that SGLT1 can transport urea, which permeates through the same pathway as the water molecules [213]. In terms of the physiological roles of these permeabilities, as stated by the authors [213], the water permeability of SGLT1 is orders of magnitude lower than that of aquaporins, which may be partially compensated by the high density of the transporters in the intestinal brush border membrane.
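The five-state cycle described above also lends itself to a simple numerical illustration. The sketch below treats the cycle as a continuous-time Markov model, computes the steady-state occupancy of each state, and reports the net cycle flux; the rate constants are arbitrary illustrative values, not fitted SGLT1 parameters:

```python
import numpy as np

# Five states of the SGLT1 transport cycle (Fig. 2): 1 outward-open empty,
# 2 outward-open with 2 Na+ bound, 3 occluded with Na+ and glucose,
# 4 inward-open releasing, 5 inward-open empty returning outward.
# Illustrative first-order rate constants in s^-1; NOT measured SGLT1 values.
kf = np.array([50.0, 80.0, 30.0, 60.0, 40.0])  # state i -> i+1 (mod 5)
kb = np.array([5.0, 8.0, 3.0, 6.0, 4.0])       # state i+1 -> i

n = 5
K = np.zeros((n, n))  # generator matrix: dp/dt = K @ p
for i in range(n):
    j = (i + 1) % n
    K[j, i] += kf[i]; K[i, i] -= kf[i]  # forward step out of state i
    K[i, j] += kb[i]; K[j, j] -= kb[i]  # backward step out of state j

# Steady state: K p = 0 subject to sum(p) = 1
A = np.vstack([K[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
p = np.linalg.lstsq(A, b, rcond=None)[0]

# In an unbranched cycle the same net flux passes through every transition,
# so the flux across one edge equals the overall turnover rate.
J = kf[0] * p[0] - kb[0] * p[1]
print("state occupancies:", np.round(p, 3))
print(f"net turnover: {J:.1f} cycles per transporter per second")
```

Because the loop is unbranched, slowing either of the conformational steps (2 to 3 or 3 to 4, the gate movements emphasized in the Fig. 2 caption below) lowers the net flux through every step, consistent with the statement that these conformational changes set the overall transport rate.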
As with water permeation, the precise physiological role of SGLT1-mediated urea transport requires further examination. Another feature of the SGLT1-mediated transport mechanism that has been studied in detail is its pre-steady state kinetics. The transient movements of the transporter generate electric currents, which reflect Na+ binding/release and/or the movement of charged/polar residues within the electrical field across the membrane in response to changes in the resting membrane potential of the cell [65,66]. Electrophysiological studies revealed that the SGLT1 pre-steady state currents are blocked by external phlorizin and glucose and that they are Na+-dependent, which indicates that Na+ binds the transporter in the absence of the sugar [65,99]. As a consequence of the Na+ dependence of the pre-steady state movements, the distribution of the conformations of the SGLT1 population in the membrane reflects the availability of external Na+ [71].

SLC5A2/SGLT2

As already noted, the relatively low functional activity of SGLT2 expressed in oocytes or cultured cells somewhat limited the study of the functional properties of this protein. Nevertheless, the studies clarified that SGLT2 corresponds to the previously described and long-awaited low-affinity high-capacity transporter of the kidney proximal tubule S1 segments [76]. Using expression studies in X. oocytes, it was demonstrated that human SGLT2 mediates saturable Na+-dependent and phlorizin-sensitive transport of D-glucose and α-MD, with Km values of 1.6 mM for α-MD and ~250 to 300 mM for Na+, consistent with the previously reported low-affinity Na+/glucose cotransport. In contrast to SGLT1, SGLT2 did not transport D-galactose. By comparing the initial rate of 14C-α-MD uptake with the Na+ influx calculated from α-MD-evoked inward currents, it was shown that the Na+ to glucose coupling ratio of SGLT2 is 1:1.

(Fig. 2 caption: Na+-glucose cotransporter SGLT1 kinetic model. Extracellular Na+ binds first to the Na+1 and Na+2 binding sites of the empty carrier (states #1 and #2). This opens the external gate, allowing glucose to bind to the central pocket, whereupon the outer barrier closes to form the occluded state (#3). The internal barrier opens and the two Na+ ions and glucose can exit to the cytoplasm via the aqueous inner vestibule (#4). The transport cycle is then completed (#5) and the empty carrier returns to its original state (#1), ready for the next transport cycle. The transport rate of SGLT1 depends on the rate of the conformational changes needed to open and close the outer and inner barriers (steps #2 to #3 and #3 to #4, respectively).)

Using combined in situ hybridization and immunocytochemistry with tubule segment-specific marker antibodies, it was demonstrated that there is an extremely high level of SGLT2 message in proximal tubule S1 segments [76]. Subsequent studies, using HEK293T cells as an expression system, were in agreement with these findings. In addition, the affinity for D-glucose was determined, yielding a Km of 5 mM and confirming the low affinity of SGLT2 [71]. Phlorizin turned out to be a more potent inhibitor of SGLT2 (IC50 = 11 nM) than of SGLT1 (IC50 = 140 nM). As presented later in this review (see the "SGLT2 regulation" section), a recent study demonstrated that co-expression of SGLT2 together with the protein MAP17 greatly increases SGLT2 functional activity [23,24].

SLC5A4/SGLT3
When expressed in X. oocytes, SGLT3 did not show any ability to transport glucose despite being correctly inserted in the plasma membrane. As already mentioned, glucose was able to induce depolarization of the membrane potential of the SGLT3-expressing oocytes, which was reversible and inhibited by phlorizin. These electrophysiological properties indicated substrate selectivity among the different sugars tested, and only glucose and α-MG induced currents, albeit with very low affinity (K0.5 = 20 mM). The authors highlighted that no sugar-induced currents were observed in the absence of Na+, but currents were significantly increased by lowering the pH, suggesting permeation of H+ through SGLT3. Furthermore, due to the low activation energy found when measuring the temperature dependence of the currents, the authors suggested that SGLT3 is more similar to ion channels than to transporters [35]. Studies of the substrate selectivity of SGLT3 revealed that substrate binding occurs with very low affinity, with Km values ranging from 19 to 43 mM. In contrast, SGLT3 exhibited high affinity for imino sugars, with Km values ranging from 0.5 to 9 μM. Moreover, the latter study revealed that phlorizin inhibits sugar-induced currents through SGLT3 (Ki = 0.12 mM) and hyperpolarizes the membrane potential in the absence of sugar, suggesting a possible Na+-leak current through SGLT3 [190]. Strikingly, the mutation of a single amino acid converted SGLT3 into a sugar transporter with functional properties similar to those of SGLT1. Specifically, replacement of the glutamate in position 457 by glutamine, the amino acid present in the homologous position of SGLT1 and SGLT2, turned SGLT3 into a transporter with broad selectivity for sugars, much higher affinity for glucose, α-MG, and phlorizin, and a stoichiometry of 2 Na+:1 substrate. Furthermore, these studies pointed out the high conductance for H+ at acidic pH and again posed the question of whether SGLT3 behaves like a cation uniporter or a channel-like transporter [12]. In order to better understand the function of SGLT3, several studies have been conducted with rodent SGLT3 isoforms. Some of these proteins also acted as "glucose sensors"; however, significant differences were observed regarding the response to sugars, phlorizin, and H+ [165]. Moreover, initial studies with porcine SGLT3 indicated that this isoform acts as a Na+-coupled glucose transporter [34]. Overall, as already noted in the previous section, the functional properties of SGLT3 are not consistent among species, and further studies are needed to reveal its in vivo functional properties.

SLC5A9/SGLT4

Regarding SGLT4, there is only a single study describing the functional properties of this protein, and while it provides information about the substrate selectivity, as already described above, the description of the putative transport mechanism is limited to noting the Na+ dependence of the transport process [171].

SLC5A10/SGLT5

Similarly, initial studies with SGLT5 defined its substrate selectivity and showed that the transport process is Na+-dependent [58]. In addition, a more recent study revealed that SGLT5-mediated sugar transport is electrogenic and sensitive to voltage, and the authors proposed a Na+:glucose coupling ratio of 1:1. Moreover, it was shown that in the absence of Na+, H+ can also drive glucose transport, albeit to a lower extent. It is also interesting to mention that no pre-steady state currents were observed for SGLT5 [51].
As already noted, this transporter is of particular interest in the context of the FDA-approved test to diagnose blood sugar levels in diabetes patients, which measures the plasma level of the metabolite 1,5-AG, since SGLT5 is likely responsible for the reabsorption of 1,5-AG in the proximal tubule of the kidney, and this transport is inhibited by glucose in response to hyperglycemia.

SGLT1 and SGLT2 in transepithelial sugar transport in the intestine and kidney

In the small intestine (duodenum, jejunum), dietary carbohydrates are hydrolyzed to monosaccharides by pancreatic enzymes and brush-border hydrolases such as lactase and sucrase-isomaltase, resulting in high sugar concentrations at the brush border surface after a carbohydrate-rich meal. The digestion products are primarily D-glucose, D-galactose, and D-fructose, which must be efficiently absorbed by mature enterocytes in the upper one-third of the intestinal villi to avoid osmotic imbalance, as presented in Fig. 3a. In the kidney, D-glucose is freely filtered at the glomerulus, almost completely extracted from the tubular fluid by the proximal tubule transporters SGLT2 and SGLT1 (Fig. 3b), and returned to the blood. Approximately 90% of the filtered glucose is reabsorbed by the early S1 segment of the proximal tubules, and only a smaller fraction reaches the proximal straight tubule (later part of the S2 segments and all of the S3 segments). As already noted, transport of each glucose molecule is coupled either to the cotransport of two Na+ ions (SGLT1) or of one Na+ ion (SGLT2) (Fig. 3). Once inside the cell, glucose can diffuse into the blood via GLUT2 [109]. The Na+/K+-ATPase [163], located in the basolateral membrane, pumps Na+ out of the cell to maintain the inwardly directed Na+ electrochemical gradient required to drive uphill glucose transport across the brush border membrane. The high-affinity low-capacity Na+-glucose cotransporter SGLT1 is expressed in the small intestine, whereas expression of the low-affinity high-capacity Na+/glucose cotransporter SGLT2 is almost exclusively restricted to the early proximal tubule S1 segment of the kidney (besides some expression in pancreas and cancer tissues). In the intestine, it is the "low-capacity high-affinity" SGLT1 that mediates rapid uptake of glucose and galactose. Despite its "low capacity," uptake of large amounts of sugar is ensured by the immense expansion of the absorptive area provided by the intestinal villi and microvilli, giving rise to an SGLT1-expressing membrane surface of about 200 m². Whereas D-glucose and D-galactose are absorbed in the intestine by the Na+/glucose cotransporter SGLT1, D-fructose is transported across the apical membrane by the facilitated fructose transporter GLUT5 (SLC2A5), followed by basolateral exit via GLUT2 (SLC2A2). Alternatively, fructose may exit via GLUT5, which has been shown to be expressed in the basolateral membrane as well [14]. In addition to the absorptive roles of the Na+/glucose cotransporters, their activity also enables water absorption. This occurs through the paracellular route via solvent drag across tight junctions. In addition, SGLT1 transporters themselves were shown to contribute to some extent to transcellular water transport in the intestine [213].
In the kidney proximal tubules, reabsorption of about 2/3 of the filtered water occurs via the transcellular route, where it is ensured by the expression of the aquaporin AQP1 in both the apical and basolateral membranes [127], while only a smaller part is absorbed through the paracellular route. Sodium that enters the epithelial cell through Na+-coupled transport is pumped out of the epithelial cells into the blood via the basolateral Na+/K+-ATPase. It is this resulting transepithelial Na+ flux that generates the osmotic gradient necessary to drive fluid absorption. Thus, luminal transport of glucose, galactose, and other solutes absorbed via Na+-coupled transport in the intestine and kidney stimulates transepithelial salt and water absorption. In the intestine, crypt and villus cells cooperate during digestion to cycle fluid from the blood to the intestinal lumen and back again. Crypt cells extrude Cl− through apical Cl− channels into the lumen (e.g., via the cystic fibrosis transmembrane conductance regulator, CFTR), triggering release of Na+ and water [173]. Villus cells pump Na+ back into the space between cells via the coordinated action of Na+-coupled cotransporters and the Na+/K+-ATPase.

(Fig. 3 caption, in part: SGLT2 is expressed in the apical membranes of the kidney early proximal tubule cells (segment S1). GLUT2 is expressed in the basolateral membranes of intestine and renal proximal tubule S1 cells. In kidney proximal tubule S3 segments, cytosolic glucose exit occurs via basolateral GLUT1. In the intestine, in the absence of GLUT2, the current concept is that an alternative basolateral exit pathway exists, according to which glucose is converted to glucose-6-phosphate, which is transported into vesicles, followed by exocytosis [168]. GLUT5 is expressed in both apical and basolateral membranes of intestinal cells.)

Disturbance of transepithelial glucose uptake in the intestine has significant implications in the context of fluid absorption: patients with SGLT1 defects, i.e., the rare genetic disorder GGM (OMIM #606824), have severe diarrhea [201]. Likewise, patients with renal glucosuria (SGLT2 defect) have moderate diuresis that is partially compensated by SGLT1 in the proximal tubule S3 segments [120]. The in vivo importance of SGLT1 and GLUT2 in intestinal glucose absorption and renal reabsorption was studied in great detail using Slc5a1−/− and Slc2a2−/− single or double knockout mice. In addition, positron emission tomography (PET) using 18F-labeled glucose analogs with unique transport specificities for GLUTs and SGLTs was employed to analyze these knockout animals [149,150]. The conclusions from these investigations are presented below. Further insights were derived from clinical data of patients with (1) GGM harboring different missense mutations in the SGLT1 gene [19,201] (OMIM *182380), (2) GLUT2 deficiency (OMIM #227810), also known as Fanconi-Bickel syndrome (FBS), a rare disorder of glucose homeostasis that leads to accumulation of glycogen in the liver and kidney and to glucose and galactose intolerance [38,104], and (3) familial renal glucosuria due to missense mutations in the SGLT2 gene (OMIM #233100) [109,174]. What follows is a list of three major conclusions that can be drawn from both the studies with the transgenic mice and the clinical observations:

SGLT1 is required for rapid glucose and galactose uptake in the intestine

This finding was revealed by the PET studies of Slc5a1−/− mice [150].
Furthermore, it is consistent with the severe diarrhea in affected infants with GGM: feeding breast milk or regular infant formulas leads to life-threatening dehydration due to luminal accumulation of glucose and galactose, whereas fructose-based formulas that do not contain glucose or galactose are tolerated. Interestingly, in the absence of SGLT1, the absorption of an oral glucose load delivered to the small intestine was not completely abolished, and there was slow absorption through an as yet unknown mechanism [150].

GLUT2 is dispensable in the intestine with respect to glucose and galactose absorption but essential for glucose reabsorption in the kidney and glucose transport into and out of the liver

Evidence comes from the findings that FBS patients [109] exhibit the same phenotype as Slc2a2−/− mice [149], i.e., intestinal glucose absorption in both the patients and the knockout mice was not impaired in the absence of GLUT2 [109], and that there is renal glucosuria, confirming the important role of GLUT2 in the kidney proximal tubules. How, then, is glucose absorbed across the intestinal basolateral membranes in the absence of GLUT2? The current notion is that basolateral glucose exit alternatively occurs via glucose phosphorylation, whereupon the resulting glucose-6-phosphate is transported into the ER, followed by exit via membrane exocytosis [109]. This mechanism is analogous to the alternative pathway for glucose release from hepatocytes in the absence of GLUT2 [59]. Evidence for this stems from transepithelial glucose transport measurements of the intestine of Slc2a2−/− mice [168]. Interestingly, it has recently been proposed that GLUT2 can also be targeted to the intestinal BBM, where it would play an important role in intestinal sugar absorption [79], and that at very high luminal glucose concentrations (e.g., 30 mM or higher), GLUT2 is recruited to the BBM, facilitating additional, SGLT1-independent apical glucose uptake [56]. However, as noted above, the subsequent studies with Slc5a1−/− and Slc2a2−/− mice and the findings in patients with SLC2A2 transporter defects indicate that such a role for GLUT2 in the BBM is unlikely under normal physiological conditions [139].

SGLT1 and SGLT2 are essential for glucose reabsorption in the kidney

The indispensability of these transporters was demonstrated by the measurement of 24-h glucose excretion in Slc5a1/Slc5a2 double knockout mice [130,184]. In these mice, the entire filtered glucose load was excreted in the urine, while in the single SGLT2 or SGLT1 knockout mice, 67% and 98% of the filtered glucose load was reabsorbed, respectively. This is consistent with the phenotype of patients with SLC5A1 and SLC5A2 defects. Patients with defects in SGLT2 have glucosuria and excrete less than 50% of the filtered glucose load, while those with defects in SGLT1 have only mild glucosuria. Thus, in both rodents and humans, the low-affinity high-capacity glucose transporter SGLT2 reabsorbs the bulk of the filtered glucose load in the proximal tubule S1 segments, with GLUT2 in the basolateral membrane [207], whereas the high-affinity low-capacity glucose transporter SGLT1 reabsorbs the remaining glucose molecules from the filtrate in the late proximal tubule, with predominantly GLUT1 in the basolateral membrane [66,76,92,149,184,207].
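This serial division of labor can be made concrete with simple Michaelis-Menten arithmetic. The sketch below uses the approximate Km values quoted in this review (SGLT2 ~5 mM, SGLT1 ~0.5 mM for glucose); the 10:1 Vmax ratio is an illustrative assumption chosen only to reflect the "high-capacity" versus "low-capacity" labels, not a measured quantity:

```python
# Fractional saturation of the two renal glucose transporters along a
# falling luminal glucose profile. Km values as quoted in the text; the
# 10:1 Vmax ratio is an illustrative assumption, not a measurement.
KM_SGLT2, KM_SGLT1 = 5.0, 0.5        # mM
VMAX_SGLT2, VMAX_SGLT1 = 10.0, 1.0   # arbitrary units

def rate(vmax: float, km: float, s: float) -> float:
    """Michaelis-Menten transport rate at luminal glucose concentration s (mM)."""
    return vmax * s / (km + s)

# Luminal glucose falls from ~5 mM (early S1) to ~0.1 mM (late proximal tubule).
for s in (5.0, 1.0, 0.1):
    v2 = rate(VMAX_SGLT2, KM_SGLT2, s)
    v1 = rate(VMAX_SGLT1, KM_SGLT1, s)
    print(f"[glucose] = {s:4.1f} mM | SGLT2 at {v2 / VMAX_SGLT2:4.0%} of capacity, "
          f"SGLT1 at {v1 / VMAX_SGLT1:4.0%} of capacity")
```

At 0.1 mM luminal glucose, SGLT2 operates at only about 2% of its capacity, while SGLT1 still runs at about 17%; together with its 2:1 Na+ coupling (see the accumulation-ratio example above), this is why the downstream S3 segment can scavenge the glucose that escapes the S1 segment.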
SGLT2 emerges as an attractive therapeutic target for diabetes treatment

Confirmation of the physiological importance of human SGLT2 was provided by the finding that mutations of the SGLT2 gene cause familial renal glucosuria [186]. Because loss-of-function mutations of SGLT2 result in urinary excretion of glucose, SGLT2 was confirmed to be the transporter involved in the renal reabsorption of glucose. Further validation that SGLT2 is the high-capacity transporter responsible for the reabsorption of the majority (90%) of the glucose filtered at the glomerulus came from the knockout mouse study in which the SGLT2 gene was disrupted [184]. Renal excretion of glucose was much higher in SGLT2 knockout mice than in mice with a knockout of SGLT1, which is expressed in the distal straight portion (later part of the S2 segments and all of the S3 segment) of the renal proximal tubules and accounts for 10% of glucose reabsorption [78,184]. Micro-puncture of the tubular fluid from the proximal tubules of knockout mice furthermore revealed that SGLT2 knockout almost completely abolished glucose reabsorption in the proximal portion of the proximal tubules, whereas in wild-type mice, ~90% of the filtered glucose was reabsorbed [184]. This finally established that SGLT2 is the transporter responsible for the high-capacity glucose reabsorption that had been proposed to occur in the proximal portion of the renal proximal tubules. These findings unmasked SGLT2 as an attractive target for the treatment of diabetic patients. Diabetes enhances renal glucose reabsorption by increasing the tubular glucose load and the expression of SGLT2, which maintains hyperglycemia. Inhibitors of SGLT2 enhance urinary glucose excretion and thereby lower blood glucose levels in type 1 and type 2 diabetes [47].

Regulation of SGLT2 expression

It has been a long-standing question why SGLT2 does not show a high level of functional activity when expressed in exogenous expression systems such as X. oocytes or mammalian cultured cells, where only about a 1.5- to 3-fold increase in α-MG uptake could be achieved by expression of SGLT2 compared with control oocytes [76,207]. This low expression level in heterologous expression systems initially slowed down the development of specific SGLT2 inhibitors as therapeutic lead compounds for the treatment of diabetes. Since SGLT2 had been identified as the transporter corresponding to the previously described low-affinity/high-capacity glucose reabsorptive pathway of the renal proximal tubule S1 segments, it was argued that the low functional activity of SGLT2 in exogenous expression systems contradicted its proposed role in high-capacity glucose reabsorption. Given that SGLT2 expressed alone in heterologous expression systems did not exhibit high levels of glucose uptake activity, unlike SGLT1 [42,67,76,207], it was proposed that full functional expression of SGLT2 requires an additional protein that is not expressed in the exogenous expression systems but is present in renal proximal tubules. Such a protein has indeed been successfully identified by functional expression cloning, in which SGLT2 was co-expressed in X. oocytes together with cRNA of clones from a kidney cDNA library [23]. Following screening, the protein identified was an integral membrane protein with two membrane-spanning domains, designated MAP17 (Fig. 4a).
As shown in the figure, it interacts with PDZK1, a scaffolding protein that has been shown to interact with other membrane transporters [53]. MAP17 increased the glucose uptake activity of SGLT2 in RNA-injected X. oocytes by two orders of magnitude [23] (see Fig. 4b). A database search revealed another protein, named MARDI (also called small integral membrane protein 24, SMIM24), that is structurally related to MAP17 [23]. Similar to MAP17, MARDI strongly augmented SGLT2-mediated glucose uptake when co-expressed with SGLT2 [23]. The importance of MAP17 for maintaining SGLT2 function in the renal proximal tubules under physiological conditions was further confirmed in a patient with renal glucosuria in whom SGLT2 did not show any identifiable mutations. This patient carried a homozygous mutation in the MAP17-coding gene [23]. Among 60 individuals with familial renal glucosuria, the one patient without identifiable mutations in SGLT2 displayed homozygosity for a splice mutation in the coding region of MAP17. With this finding in hand, the same research group performed an in-depth functional characterization of SGLT2/MAP17 using X. oocytes as an expression system. In agreement with previous findings, the experiments confirmed that SGLT2 is a highly selective low-affinity glucose transporter (Km = 3.4 mM). It has a transport stoichiometry of 1:1 and shows a 10-fold higher affinity for phlorizin than SGLT1. Furthermore, electrophysiological recordings showed that SGLT2 exhibits little pre-steady state and Na+-leak current, and a competitive inhibitory mechanism was described for the SGLT2 inhibitors phlorizin and dapagliflozin [23]. Although the mechanisms through which MAP17 and MARDI enhance the glucose transport activity of SGLT2 have not yet been fully clarified, it is intriguing that MAP17 did not change the quantity of SGLT2 protein in the exogenous expression systems [23]. Therefore, some direct or indirect effect of MAP17 on SGLT2 that is not due to membrane targeting or stabilization of the SGLT2 protein must be involved in this strong augmentation of transporter function. It was proposed that the protein conformation of SGLT2 must be somehow affected, because co-expression with MAP17 significantly increased both phlorizin binding and α-MG transport [23]. In terms of protein-protein interactions, MAP17 and MARDI are both PDZ-binding proteins with a typical PDZ-binding motif at their C-termini that can interact with the PDZ protein PDZK1 (see Fig. 4a, right part). This interaction (e.g., with other membrane transporters), however, was not essential for the stimulation of SGLT2 activity, because deletion of the PDZ-binding motif of MAP17 did not affect MAP17's augmentation of SGLT2 activity [23].

Upregulation of SGLT2 expression under diabetic conditions and consequences for diabetic nephropathy

Under diabetic conditions, SGLT2 expression in proximal tubules is increased [115]. Growth and hypertrophy of the diabetic kidney may be the trigger for a general increase in the transport machinery in the proximal tubule under diabetic conditions, and this may be exacerbated with advanced nephropathy, when nephrons are lost and surviving nephrons try to compensate. Increased diabetic blood glucose levels enhance the amount of glucose filtered, provided that GFR is preserved, and renal glucose reabsorption is increased both due to the increase in glucose concentration in the glomerular filtrate and due to an increase in glucose transport capacity, thereby generating a substantial tubular glucose load [210].
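The size of this tubular glucose load can be illustrated with a back-of-the-envelope calculation using typical textbook values (GFR ≈ 180 L/day; the plasma glucose concentrations below are assumed for illustration):

```latex
\text{filtered glucose load} = \mathrm{GFR}\times P_{\mathrm{glucose}}
% euglycemia:
180\ \mathrm{L/day}\times 5\ \mathrm{mmol/L} = 900\ \mathrm{mmol/day}\approx 162\ \mathrm{g/day}
% sustained hyperglycemia:
180\ \mathrm{L/day}\times 15\ \mathrm{mmol/L} = 2700\ \mathrm{mmol/day}\approx 486\ \mathrm{g/day}
```

A threefold rise in plasma glucose thus triples the filtered load even before any hyperfiltration; once this load exceeds the maximum reabsorptive capacity (TmG, introduced below), the surplus is excreted in the urine.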
Notably, in the early phase of diabetes, GFR is often enhanced, leading to glomerular hyperfiltration and further exacerbation of the tubular glucose load. It was also shown that the maximum renal glucose reabsorptive capacity (TmG) is elevated in patients with type 2 diabetes (T2D) [1]. While diabetes typically elevates glomerular filtration and tubular glucose reabsorption, it additionally activates renal gluconeogenesis, thereby further exacerbating the hyperglycemia. This may be caused by (1) diabetes-related metabolic acidosis, which stimulates renal gluconeogenesis through the conversion of glutamine to glucose in the proximal tubule epithelial cells, with generation of ammonia and bicarbonate to compensate the acidosis (for review see [77]), and (2) activation of the sympathetic nervous system in diabetes, followed by renal stimulation of gluconeogenesis via epinephrine. All these factors give rise to a vicious circle that further nourishes hyperglycemia and hyperfiltration. The kidney's safety feature, however, limits hyperglycemia by excretion of glucose in the urine once the blood glucose load exceeds TmG. Nonetheless, the diabetic kidney will undergo pathological alterations due to excessive metabolism of glucose in the tricarboxylic acid cycle, which generates enhanced oxidative stress as well as glomerular and tubulointerstitial damage. This is followed by nephropathy with loss of nephrons, while surviving nephrons try to compensate. SGLT2, which is responsible for the majority of glucose uptake in the early proximal tubules, is upregulated in diabetes, but the mechanism of regulation of SGLT2 has not yet been fully established. The following pieces of information are available that help assemble the puzzle:

i. The regulation of SGLT2 expression in T2D was investigated in Zucker rats. It was shown that SGLT1 and SGLT2 mRNA levels in Zucker diabetic obese rats were 1.6- and 4.8-fold higher than in age-matched lean rats, respectively [169].

ii. SGLT2 levels were significantly increased in the epithelial cells obtained from diabetic patients compared with those from normal subjects [135]. Accordingly, sugar uptake was increased in the cells from diabetic patients.

iii. A high level of glucose itself did not affect SGLT2 expression in tubular cells, but insulin significantly increased tubular SGLT2 levels through the generation of oxidative stress [110]. Because hyperinsulinemia accompanies the onset and early stage of T2D [80], the upregulation of SGLT2, at least at the early stage of T2D, could thus be due to stimulation by insulin. Indeed, increased glucose absorption in proximal tubule membrane vesicles of hypertensive rats was found to be due to induction of SGLT2 expression, leading to kidney damage through the generation of reactive oxygen species (ROS) [137]. Increased SGLT2 expression in the diabetic condition may thus lead to kidney damage via the production of reactive oxygen species.

iv. HNF1α was shown to control renal glucose reabsorption in mouse and human. It was demonstrated that HNF1α controls SGLT2 expression by direct transcriptional activation, whereas SGLT1 and GLUT2 were not affected [129]. In addition, it was shown that renal proximal tubular reabsorption of glucose is reduced in patients with maturity onset diabetes of the young type 3 (MODY3), which is caused by mutations of the HNF1α gene.
v. In addition, it was shown, based on studies with diabetic db/db mice and high-glucose-cultured porcine proximal tubule LLC-PK1 cells in a two-chamber system treated with the SGLT2 inhibitor canagliflozin, that SGLT2 expression is stimulated by basolateral high glucose concentrations through activation of the so-called GLUT2/importin-α1/HNF-1α pathway, while the expression of the NAD+-dependent protein deacetylase SIRT1 decreases, leading to a deficiency of autophagy [179]. Upregulation of SGLT2 expression in diabetes may thus be caused through sensing of basolateral hyperglycemia via GLUT2 [111,119]. Interestingly, SGLT2 inhibitors induce SIRT1, together with the adenosine monophosphate-activated protein kinase AMPK, both of which have been shown to stimulate autophagy, thereby ameliorating cellular stress and glomerular and tubular injury [121].

vi. It was reported that mTOR signaling is involved in the upregulation of the expression of nutrient transporters, including SGLT2, in renal proximal tubule cells. The combined deletion of mTOR complex 1 (mTORC1) and 2 (mTORC2), achieved by conditional knockout of the regulatory-associated protein of mammalian target of rapamycin (RAPTOR), an essential component of mTORC1, and of the rapamycin-insensitive companion of mammalian target of rapamycin (RICTOR), which is essential for mTORC2, resulted in the development of a Fanconi-like syndrome involving glucosuria in mice [57]. Interestingly, proteomics and phosphoproteomics of freshly isolated kidney cortex identified either reduced expression or loss of phosphorylation at critical residues of nutrient transporters, which leads to reduced nutrient transport due to perturbation of the endocytotic machinery. For example, phosphorylation of mouse Sglt2 at S623 was reduced in kidney cortex upon loss of mTORC1 [57]. This phosphosite is highly conserved between species, including human SGLT2 (S624) [50]. It was shown that phosphorylation at this site increases membrane insertion of SGLT2 and enhances glucose transport [50]. The loss of phosphorylation at S623 was thus proposed to explain the glucosuria caused by the mTORC1/mTORC2 deletion.

vii. It remains to be determined whether the scaffolding protein MAP17 is involved in the functional upregulation of proximal tubule transporters such as SGLT2 and also NHE3 (SLC9A3). Both of these transporters are stimulated by insulin to enhance Na+ and glucose reabsorption [83,110], but a role of MAP17 in the regulation of SGLT2 and other relevant transporters under diabetic conditions has not been reported thus far.

In conclusion, further studies will be needed to fully understand the mechanisms of upregulation of both SGLT2 and GLUT2 under the different diabetic conditions.

Regulation of SGLT1 in the kidney in the absence of SGLT2

SGLT2 inhibition shifts glucose uptake further downstream toward the S3 segments and thick ascending limbs (TAL). In the kidney, SGLT2 and SGLT1 account for at least 90% and about 3% of fractional glucose reabsorption (FGR), respectively, while euglycemic individuals treated with an SGLT2 inhibitor maintain an FGR of 40-50% [138]. This value is similar to the values found in Sglt2 knockout mice [130]. The increase in the contribution of SGLT1 to glucose reabsorption following SGLT2 inhibition was studied in mice [138]. Selective SGLT2 inhibition in mice revealed that SGLT2 and SGLT1 account for renal glucose reabsorption in euglycemia, with 97% and 3% being reabsorbed by SGLT2 and SGLT1, respectively.
Thus, when SGLT2 function is lacking, whether due to SGLT2 inhibition, lack of gene expression in Sglt2−/− mice, or in patients with familial glucosuria, there is significant compensation by SGLT1. An olfactory G protein-coupled receptor, Olfr1393, expressed in the kidney proximal tubule, was proposed to serve as a physiological regulator of SGLT1 expression [138]. Alternatively, the G protein-coupled glucose-sensing receptor T1R3, originally identified in sheep and rodent intestine [106,158], might be a candidate for SGLT1 regulation in kidney proximal tubules as well [215]. However, the precise mechanism by which renal SGLT1 is upregulated to compensate for the lack of SGLT2 function still awaits further investigation.

Structural biology; molecular modeling; SGLT1 and SGLT2; molecular architecture

Proteins of the SSS family usually contain 10-14 transmembrane helices (TMHs), and the tertiary structures of two representative members are known. The sodium/galactose symporter from Vibrio parahaemolyticus (vSGLT) was crystallized with bound galactose in an inward-open conformation (PDB ID: 3DH4) [40]. The structure of the inactive mutant variant K294A has also been solved and displays a very similar inward-open conformation in the absence of the bound sugar (PDB ID: 2XQ2) [194]. In addition, another protein from the same family, the sialic acid transporter SiaT from Proteus mirabilis, was crystallized in the presence of sialic acid (N-acetylneuraminic acid) (PDB ID: 5NV9) [192]. Interestingly, this structure features the protein in an outward-open conformation. The above structures show that SSS transporters share the same structural fold as APC (amino acid-polyamine-organocation) transporters, featuring a transporter core formed by 5 + 5 TMHs in an inverted-repeat arrangement. Strikingly, a bound Na+ ion was also observed in the structures of vSGLT and SiaT in a location analogous to the "Na2" cation-binding site present in many APC-fold transporters [25,125,126,161,204], indicating the conservation of the Na2 site across the structural superfamily. Many APC-type transporters with a 2:1 cation:solute transport stoichiometry also feature a second cation-binding site (termed "Na1"), which is typically closer to the substrate-binding pocket. However, the SiaT structure suggests that a second Na+ ion binds to a distinct (third) site, termed "Na3," implying that the Na1 site found in other APC-type transporters is not conserved in the SSS family [192]. This is also likely to be the case for human SLC5A1.
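The cation:solute stoichiometry matters energetically, because it sets the maximal sugar gradient a symporter can sustain. As a hedged illustration (a standard textbook relation for ion-coupled transport, not a result reported in this article), setting the free energy of the complete coupled cycle to zero at equilibrium gives

\[
\frac{[S]_i}{[S]_o} \;=\; \left(\frac{[\mathrm{Na}^+]_o}{[\mathrm{Na}^+]_i}\right)^{n} \exp\!\left(\frac{-\,n F V_m}{R T}\right),
\]

where n is the number of coupled Na+ ions and V_m the membrane potential. With illustrative values of [Na+]_o/[Na+]_i ≈ 10, V_m ≈ −60 mV, and RT/F ≈ 27 mV, a 1:1 transporter (as commonly assumed for SGLT2) could sustain roughly a 10 × e^(60/27) ≈ 100-fold sugar gradient, whereas a 2:1 transporter such as SGLT1 could in principle sustain on the order of 100 × e^(120/27) ≈ 10,000-fold, illustrating why the higher coupling ratio is advantageous where luminal glucose is scarce.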
The mechanism of transport has been extensively studied for human SLC5A1 using a variety of methods (Fig. 2). Experimental data have long supported a transport model in which Na+ ions bind to the protein first, followed by glucose [122]. Initial binding of Na+ is expected to occur at the Na2 site (termed "Na+2" in Fig. 2), from which the ion quickly moves to the Na3 site ("Na+1" in Fig. 2) [192]. After binding of the second Na+ to "Na+2" (Fig. 2), the two bound Na+ ions are expected to stabilize the transporter in an outward-open state and increase the affinity for sugar binding [40,148]. Binding of sugar induces the formation of the extracellular gate (Y87, F424, M73 in vSGLT; F101, F453, L87 in hSLC5A1; F98, F453, L84 in hSLC5A2, respectively) and thus extracellular closure [40]. The subsequent transition of the protein to an inward-facing state destabilizes the Na+ bound at the "Na+2" site, and this ion is proposed to be released on a short time scale [194]. These rearrangements are also expected to disrupt the "Na+1" site and cause the dissociation of the Na+ bound at "Na+1" (Fig. 2). The release of Na+ ions allows Y263 in vSGLT (Y290 in hSLC5A1 and hSLC5A2), which was shown to be in a stacking interaction with the bound sugar ring and was proposed to be an intracellular gate residue, to adopt a different conformation and thus facilitate the dissociation of the bound sugar [194]. Interestingly, further intracellular gating residues, R260 and D182 in SiaT, where D182 also forms part of the "Na+1" (Na3) binding site, are also conserved in both human SLC5A1 and SLC5A2 (R300 and D204/D201, respectively). In the inward-open holo structure of vSGLT, the hydroxyl groups of the bound galactose molecule make extensive hydrogen-bond contacts with protein residues, and its heterocyclic carbon ring engages in a stacking interaction with aromatic side-chains of the protein.

To assess the structural details of sugar binding to human SLC5A2 during the early steps of the transport cycle, we explored the binding modes of glucose to an outward-open state of human SLC5A2 using a combination of homology-based modeling and molecular docking (Hediger et al., unpublished observation). To this end, we used MODELLER 9.23 [195,196] to build structural models of human SLC5A2 based on the structure of SiaT (PDB ID: 5NV9), using a manually adjusted sequence alignment more suitable for loop modeling. To account for conformational variation, we generated a total of 1,000 protein models. We performed molecular docking of α-D-glucose on all 1,000 protein models using AutoDock 4.2.6 [108] with a flexible ligand and rigid protein side-chains. The search space was defined as a cube of 22.5 × 22.5 × 22.5 Å centered on the location of the galactose ring from the vSGLT structure (PDB ID: 3DH4) after fitting to SiaT. For each model of SLC5A2, 10 docked poses of glucose were generated. The best 500 of the resulting 10,000 glucose poses were energy-minimized using SMINA [86] while keeping neighboring protein side-chains flexible, and the resulting ligand poses were rescored using the Vinardo scoring function [134]. From the resulting set, glucose poses in which the distance between the center of the docked glucose and the original galactose ring from vSGLT was less than 2 Å were ranked according to their calculated binding energies. The glucose pose with the lowest energy yielded by this process is shown in Fig. 6a.
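The final pose-filtering and ranking step of this protocol can be sketched in a few lines of Python. The sketch below assumes the rescored poses have already been parsed into simple records (pose-center coordinates and Vinardo energy); the pose identifiers, coordinates, energies, and the `REF_CENTER` value are illustrative placeholders, not values from the actual study.

```python
import math

# Hypothetical rescored docking poses: center-of-ring coordinates (Å)
# and Vinardo energies (kcal/mol). In practice these would be parsed
# from the SMINA/Vinardo output files.
poses = [
    {"id": "m0042_p3", "center": (12.1, 4.8, -7.3), "energy": -6.2},
    {"id": "m0317_p1", "center": (12.4, 5.1, -7.0), "energy": -6.8},
    {"id": "m0511_p7", "center": (19.0, 5.0, -7.1), "energy": -7.5},  # distant site
]

REF_CENTER = (12.3, 5.0, -7.2)  # placeholder: galactose ring center from vSGLT
CUTOFF = 2.0                    # Å, as in the protocol described above

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Keep poses within the cutoff of the reference sugar site,
# then rank the survivors by calculated binding energy.
kept = [p for p in poses if dist(p["center"], REF_CENTER) < CUTOFF]
ranked = sorted(kept, key=lambda p: p["energy"])

for p in ranked:
    print(p["id"], f'{p["energy"]:.1f} kcal/mol')
```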
Importantly, the plane of the glucose ring predicted in our study is parallel to that of the galactose observed in the vSGLT structure, while all hydroxyl groups of the sugar ligand are in a hydrogen-bonding orientation with protein residues. This is in contrast to a recent analogous study [13], in which the sugar was predicted to bind in a perpendicular orientation. Based on our model, SLC5A2 residues K321, E99, S287, and N75 directly coordinate the sugar ligand, whereas the stacking interaction with Y290 expected in analogy to vSGLT is absent for steric reasons. The formation of this interaction might be a major driving force for the outward-to-inward transition of the apo transporter. Importantly, our results, also in line with previous studies [13], explain the deleteriousness of the naturally occurring loss-of-function variant K321R associated with familial renal glucosuria [103]. However, other mutations near the sugar-binding site, such as F98L [208,209], A102V [18], and G77R [193], might also cause loss of function in hSLC5A2 by directly interfering with glucose binding. Interestingly, during our docking studies, we consistently observed docked glucose poses in which the pyranose ring center was ~7 Å away from its expected location according to the vSGLT structure (Fig. 6f). Structural analysis of the protein environment near this region suggests a plausible sugar-binding site, where glucose would form hydrogen bonds with protein residues D158, K154, Y150, Y290, and the backbone amino and carbonyl groups of G77 and N75, respectively. Remarkably, D158 is conserved in all 7 sugar-transporting human SLC5 proteins, but not in the non-sugar-transporting SLC5 members, while the other 3 residues are partially conserved (K154 not in hSLC5A4; Y150 not in hSLC5A4 and hSLC5A10; Y290 not in hSLC5A9). We believe that these residues might form an intermediate sugar-binding site during the sugar translocation process, but further investigation is needed to clarify their roles in the transport process.
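A conservation pattern like the one just described can be checked programmatically once aligned sequences are at hand. The sketch below is purely illustrative: the alignment column is a toy subset with invented residues, not the real SLC5 alignment, and only the bookkeeping logic is meaningful.

```python
# Toy check of residue conservation across aligned SLC5 paralogs.
# The residue assignments below are INVENTED for illustration; use a
# real multiple sequence alignment (e.g., from Clustal Omega) for
# actual analysis. Only a subset of the 12 human SLC5 members is shown.
aligned_column = {  # alignment column corresponding to D158 of hSLC5A2
    "SLC5A1": "D", "SLC5A2": "D", "SLC5A4": "D", "SLC5A9": "D", "SLC5A10": "D",
    "SLC5A5": "N",  # non-sugar transporter (toy residue)
    "SLC5A8": "S",  # non-sugar transporter (toy residue)
}
sugar_transporters = {"SLC5A1", "SLC5A2", "SLC5A4", "SLC5A9", "SLC5A10"}

conserved_in_sugar = all(aligned_column[m] == "D" for m in sugar_transporters)
absent_elsewhere = all(
    aligned_column[m] != "D"
    for m in aligned_column if m not in sugar_transporters
)
print("D conserved in sugar transporters:", conserved_in_sugar)
print("D absent in non-sugar members:    ", absent_elsewhere)
```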
Development of inhibitors, analysis, and inhibition kinetics of canagliflozin acting on SGLT2 and SGLT1

The observation that suppression of renal glucose reabsorption ameliorates diabetic conditions was first reported in 1987. To demonstrate the contribution of glucotoxicity to the development and exacerbation of the diabetic condition, phlorizin was administered to partially pancreatectomized diabetic rats by continuous subcutaneous infusion through small implantable minipumps. Phlorizin was used to normalize the blood glucose level without mediating insulin actions, and, by increasing urinary glucose excretion, it normalized plasma glucose levels and corrected abnormalities associated with the diabetic condition, such as insulin deficiency and hyperglycemia [142]. The correction of hyperglycemia with phlorizin treatment also normalized tissue sensitivity to insulin in the diabetic rats [142]: phlorizin recovered the function of insulin-secreting β-cells and ameliorated tissue insulin resistance.

Although phlorizin is a high-affinity inhibitor of Na+/glucose transporters, it has never been clinically developed as an anti-diabetic drug. One of the main reasons is that phlorizin is not orally administrable. Phlorizin is an O-glycoside and is therefore hydrolyzed in the intestinal lumen by β-glucosidase and broken down into glucose and phloretin, the latter of which inhibits facilitative glucose transporters (GLUTs) [36]. The second reason is that phlorizin is not selective; it inhibits both SGLT1 and SGLT2 [42]. When phlorizin is orally administered, the strong inhibition of intestinal SGLT1 would cause severe, life-threatening diarrhea similar to the glucose-galactose malabsorption caused by loss-of-function mutations of the SGLT1 gene [178]. To overcome these disadvantages of phlorizin, its derivative T-1095 was synthesized [117]. T-1095 is an orally administrable esterified prodrug that is absorbed from the gastrointestinal tract and converted to its active form by esterases in the liver [117]. Because it is inactive in the lumen of the small intestine, T-1095 does not inhibit SGLT1 there [4]. Although the SGLT2/SGLT1 selectivity of the active form of T-1095 is much lower than that of the SGLT2 inhibitors currently in clinical use, orally administered T-1095 could selectively inhibit renal glucose reabsorption while minimally affecting intestinal glucose absorption [117]. In pre-clinical studies, orally administered T-1095 increased glucose excretion into the urine of diabetic animals and successfully decreased both blood glucose and HbA1c levels [117]. It suppressed postprandial hyperglycemia after a meal load in diabetic animals, and the hypertriglyceridemia and the development of microalbuminuria associated with diabetic conditions were also ameliorated [4,117]. The results obtained with T-1095 in diabetic animal models confirmed that inhibition of renal glucose reabsorption could be a novel therapeutic approach for diabetes mellitus.

The SGLT2 inhibitors currently in clinical use include dapagliflozin, canagliflozin, empagliflozin, ipragliflozin, tofogliflozin, luseogliflozin, and ertugliflozin [2,89,113] (see Fig. 5). Sotagliflozin was abandoned during enrollment into large clinical heart and kidney outcome trials. Although these compounds were synthesized based on the phlorizin structure, they are C-glycosides, distinct from the O-glycoside phlorizin, ensuring that they are not hydrolyzed in the intestine and remain metabolically stable in blood after absorption from the intestine [6]. They are therefore administered orally once a day for the treatment of patients with diabetes. An additional advantage of these clinically used SGLT2 inhibitors is their high affinity and selectivity for SGLT2; with the exception of sotagliflozin, which was designed as a dual inhibitor of SGLT2 and SGLT1, they are 150 to ~3,000 times more selective for SGLT2 than for SGLT1 [89].

Physiological basis of clinical benefits of SGLT2 inhibitors

SGLT2 inhibitors came into clinical use in 2012, and a substantial body of clinical evidence has since been assembled to demonstrate their effectiveness in the treatment of diabetic patients. SGLT2 inhibitors reduce blood glucose and HbA1c, recover β-cell function, and ameliorate insulin resistance. They furthermore decrease body weight and blood pressure and lower blood urate and triglyceride levels [6]. Large-scale clinical trials have also demonstrated that they are effective in the suppression of cardiovascular events and renal complications [212]. The usefulness of SGLT2-selective inhibition in the treatment of diabetes was also confirmed in Slc5a2 knockout mice, in which SGLT2 deletion reduced the hyperglycemia associated with high-fat diet and obesity, improved glucose intolerance, and increased glucose-stimulated insulin secretion [74]. Not only the effectiveness but also the clinical safety of the drugs has been established; it is well recognized that the risk of hypoglycemia is relatively low when using SGLT2 inhibitors [6].

Effects of SGLT2 inhibitors on extra-renal SGLT2

The expression of SGLT2 was originally proposed to be kidney-specific, but detailed expression studies have revealed extra-renal expression of SGLT2. In pancreatic α-cells secreting glucagon, SGLT2 was shown to be expressed together with SGLT1 [15].
It was demonstrated that the SGLT2 inhibitor dapagliflozin triggers glucagon secretion in human islets by directly acting on islet α-cells, which may explain why SGLT2 inhibitors cause a paradoxical increase of plasma glucagon levels [20]. The other example of extra-renal expression of SGLT2 is in cancers. It has been demonstrated that cancer cells upregulate glucose transporters such as GLUT1 to meet the increased demand for glucose associated with the Warburg effect [71]. SGLT2 was also shown to be expressed in some types of cancers, such as pancreatic and prostate cancers, together with SGLT1 [81,153]. It is intriguing that SGLT2 inhibitors reduce tumor growth and increase survival in a xenograft model of pancreatic cancer [153].

Molecular docking of the gliflozin inhibitors of SGLT2

To elucidate the binding of gliflozins to the human SLC5A1 and SLC5A2 proteins, we applied the computational docking protocol described above to generate docked poses of phlorizin, canagliflozin, dapagliflozin, and tofogliflozin. The chemical structures of these compounds are presented in Fig. 5. Based on the assumption that the sugar moiety of these compounds should occupy the same binding site as glucose, we filtered the resulting poses by the distance between the center of their pyranose rings and that of galactose in the vSGLT structure, as we did for glucose docking. The best-scoring (lowest binding energy) of the resulting poses are shown in Figs. 6a-e (Hediger et al., unpublished observation). Interestingly, the sugar moieties of the docked inhibitors are all rotated compared with the docked glucose molecule (Fig. 6a), but their rings still share a plane parallel to the membrane bilayer, similar to glucose. A notable exception is tofogliflozin, where the sugar moiety is sterically fixed relative to the aglycon tail and was predicted to bind in a perpendicular orientation. For phlorizin, which is a less selective inhibitor of both hSLC5A1 and hSLC5A2, aromatic stacking interactions are noticeable between the aglycon tail and protein residues Y290 and F98. This aligns well with experimental results showing that F101 in human SLC5A1 (analogous to F98 in human SLC5A2) is important for phlorizin binding [148]. For the other 3 inhibitors, the aglycon tails reach into a binding pocket lined by protein residues Y150 and W289, which engage in aromatic stacking interactions with the ligands. Notably, this binding pocket is not utilized by phlorizin, and its existence might be responsible for the significantly higher selectivity of the other 3 inhibitors for hSLC5A2. While no obvious sequence differences can be observed in this binding pocket between hSLC5A1 and hSLC5A2, dynamic factors such as extracellular gate opening could affect the accessibility or the size of this binding pocket. Notably, one relevant sequence difference lies in the hinge region of the extracellular gate, where A464 in hSLC5A2 vs. G464 in hSLC5A1 could provide more rigidity and thus a better-formed binding pocket in human SLC5A2 compared with human SLC5A1. Another analogous study on the selectivity of gliflozin compounds suggested that these highly selective compounds interact with the C-terminal region of extracellular loop 5 (EL5) directly through residue H268, and that their binding leads to a partial closure of the extracellular gate [13].
The authors argue that direct interaction with the non-conserved residue H268 in hSLC5A2 (D268 in hSLC5A1) contributes to the selectivity of gliflozins, augmented by the absence of the second bound Na+ ion at the Na3 site in hSLC5A2, which shifts the conformational equilibrium of the transporter to favor extracellular gate closure [13]. In summary, dynamic effects are likely to be responsible for the selectivity of gliflozin compounds, and further crystallographic studies are necessary to verify the precise binding modes of these inhibitors to human SLC5A1 and SLC5A2.

Clinical benefits of the SGLT2 inhibitors in type 1 and type 2 diabetes and role of the renin-angiotensin system

SGLT2 inhibitors are effective glucose-lowering drugs that primarily act by blocking the SGLT2 transporter in the renal proximal tubule. Their specific clinical benefits are as follows.

Body weight-reducing effects of SGLT2 inhibitors

These drugs lower body weight in T2D within a few weeks of starting therapy [93]. Initially, the weight loss is mostly a diuretic effect through increased osmotic diuresis and a related decrease of the extracellular volume [205]. However, the increased urine volume of patients normalized again after 4 weeks, which highlights a homeostatic adaptive mechanism [3]. The weight reduction is accompanied by a reduction of the total body fat percentage due to the negative energy balance, increased lipolysis and fatty acid oxidation, simultaneous inhibition of lipogenesis, and increased formation of ketone bodies [45,131]. The lean tissue mass, which mainly comprises muscle mass, showed no significant change following SGLT2 inhibitor treatment in one study [157], while in another study, loss of muscle mass alongside the loss of fat mass was observed in patients with T2D in response to SGLT2 inhibition [82].

Nephroprotective effects

Elevation of the glomerular filtration rate (GFR) is observed early in the pathogenesis of T1D and T2D. At the single-nephron level, diabetes-related renal hemodynamic adaptations occur to counteract the loss of functional nephron mass, thereby increasing glomerular hydraulic pressure, a phenomenon known as glomerular hyperfiltration, leading to irreversible nephron damage and contributing to the initiation and progression of kidney disease in diabetes. Reduced GFR is one of the key markers predicting the risk of end-stage renal disease and renal death in diabetes. The reported prevalence of hyperfiltration at the whole-kidney level is 10-67% in T1D and 6-73% in patients with T2D [175]. Hyperfiltration in T1D is thought to precede the onset of albuminuria and the decline in renal function, and to predispose to progressive nephron damage by increasing glomerular hydraulic pressure and the transcapillary convective flux of the ultrafiltrate carrying macromolecules, including albumin. Increased GFR in single remnant nephrons, compensating for reduced nephron numbers, further accelerates renal damage in diabetes. SGLT2 inhibitors have general nephroprotective effects. As outlined below, in diabetes patients, SGLT2 inhibitors activate the tubuloglomerular feedback (TGF) mechanism, with inhibition of the glomerular hyperfiltration that is characteristic of diabetes patients. Beneficial effects are also anticipated for patient groups without diabetes, such as chronic kidney disease (CKD) patients, but confirmatory studies are still required.
Inhibition of glomerular hyperfiltration by SGLT2 inhibitors in patients with T1D via the tubuloglomerular feedback mechanism

As already noted, glomerular hyperfiltration is a recognized risk factor for the development and progression of diabetic kidney disease (DKD). It is also a risk factor in the development of chronic kidney disease in general. Hyperfiltration of the remaining nephrons is due to activation of the renin-angiotensin-aldosterone system (RAAS), leading to increased systemic blood pressure, increased angiotensin sensitivity of efferent arterioles, and tubuloglomerular feedback (TGF) adaptation in the afferent arterioles. In addition, Na+ reabsorption in the proximal tubules is increased in T1D and T2D due to chronic hyperglycemia. As already noted, this results in the upregulation of the expression of both SGLT2 and GLUT2 [115,135]. Thereby, Na+ (i.e., NaCl) exposure at the macula densa of the juxtaglomerular apparatus diminishes, which, due to a "renal misinterpretation" of a "reduced effective arterial blood volume," leads to increased dilation of the afferent arteriole, in order to preserve the allegedly low intraglomerular pressure and the GFR in the spirit of correct autoregulation. The resulting unfavorable hyperfiltration further accelerates nephron destruction and the progression of renal failure. Clinical studies using the SGLT2 inhibitor empagliflozin revealed that SGLT2 inhibition significantly reduces hyperfiltration in patients with T1D mellitus [22]. A recent study provided the first direct demonstration of changes in renal hemodynamic function by SGLT2 inhibitors, using in vivo glomerular visualization with multi-photon microscopy in a diabetic animal model [82]. Within 2 h after application of a single dose of empagliflozin, the investigators observed an acute drop in hyperfiltration and albumin excretion in the diabetic mice. At the same time, they visualized a distinct reduction of the extended diameters of the afferent glomerular arterioles. In parallel, the amount of adenosine excreted in the urine significantly increased. The effect of the SGLT2 inhibitor could be prevented by a selective adenosine blocker, thereby validating the hypothesis that SGLT2 inhibition leads to increased adenosine production at the macula densa, which ultimately leads to constriction of the afferent arterioles. The mechanism that leads to constriction of the afferent glomerular arterioles via the tubuloglomerular feedback (TGF) to counteract hyperfiltration in the diabetic condition is outlined in Fig. 7. In addition, Box 1 presents the role of macula densa cells in controlling GFR and renin release via TGF. As shown in Fig. 7, in T1D, inhibition of SGLT2-mediated Na+-coupled glucose reabsorption in the proximal tubules results in an increase in the luminal NaCl concentration at the macula densa above the threshold value. As a result, macula densa cells trigger the reduction of renin release to suppress the renin-angiotensin-aldosterone system (RAAS) and vasoconstriction of the afferent arterioles to reduce the hyperfiltration that is commonly associated with diabetes. This is achieved via generation of extracellular adenosine in the juxtaglomerular interstitium (see Box 1). As shown in Fig. 7, adenosine can directly act on Gi-coupled A1 receptors expressed on afferent arteriolar smooth muscle cells [63,87] to trigger vasoconstriction of the afferent arterioles to reduce GFR.
In addition, adenosine causes, via A1R, inhibition of renin release in juxtaglomerular cells (see Fig. 7). The latter is believed to occur via A1R-triggered, TRPC6 channel-mediated calcium entry, followed by calcium-mediated inhibition of renin release [63]. Thus, adenosine signaling through the TGF is of central importance in preventing diabetic hyperfiltration via SGLT2 inhibition, leading to the nephroprotective action of SGLT2 inhibitors, with prevention of subsequent damage to remaining nephron segments. The mechanism explains why, in T1D, there is an initial decrease in GFR after SGLT2 inhibition that is independent of systemic blood pressure.

Box 1: Role of macula densa cells in TGF-mediated control of GFR and renin secretion

Fluid flow through the kidney nephron is kept within a narrow range under healthy conditions for optimal maintenance of salt and water balance. The macula densa cells control the tubuloglomerular feedback (TGF) to fine-tune GFR [156]. As shown in Fig. 7, when the luminal NaCl concentration at the macula densa rises, as during ECV expansion or in response to SGLT2 inhibition, NaCl entry into macula densa cells via NKCC2 leads to the production of adenosine. The macula densa cells do not have enough Na+/K+-ATPases on their basolateral surface to extrude the added Na+ taken up. This results in osmotic swelling, and the current concept is that the swelling leads to exit of ATP across the basolateral membrane via stretch-activated, non-selective maxi-anion channels, followed by its conversion to adenosine via ecto-5′-nucleotidase [16,87,143,144]. Adenosine then interacts with adenosine 1 receptors (A1R) on vascular smooth muscle, resulting in contraction of afferent arterioles and reduction of GFR (the TGF response). On the other hand, in response to a decreased luminal NaCl concentration at the macula densa (the reverse situation, illustrated in Fig. 8), i.e., in the context of reduced blood pressure or glomerular filtration, NKCC2 transport activity is reduced, which causes release of prostaglandin in the macula densa that activates signaling cascades to promote renin release and activation of the renin-angiotensin-aldosterone system (RAAS). This in turn increases blood pressure via aldosterone by increasing renal reabsorption of sodium and water. In addition, with luminal NaCl concentrations below the threshold value, vasodilation of afferent arterioles occurs, which increases glomerular filtration pressure and tubular fluid flow.

Recently, it was observed that macula densa cells of mouse and human kidney also express SGLT1, where it is anticipated to serve as a glucose sensor that further regulates TGF: glucose taken up mediates upregulation of the nitric oxide (NO) synthase NOS1, followed by generation of NO that decreases TGF and promotes glomerular hyperfiltration [215]. In support of this concept, another study revealed that (1) deletion of SGLT1 reduced glomerular hyperfiltration in diabetic Akita mice but did not markedly change blood glucose levels and (2) the increase in macula densa NOS1 expression observed in diabetic Akita mice was abolished in the absence of SGLT1 [164]. Future studies on this interesting topic will be of great interest. Additionally, further studies on the localization of SGLT1 in distal tubules would be important, because earlier reports on rat [76,92,207] and human kidney [191] were not able to provide evidence for SGLT1 expression in macula densa cells.
It will also be of interest to know whether SGLT1 is upregulated in macula densa cells in response to SGLT2 inhibition, in analogy to SGLT1 in the proximal straight tubules.

Fig. 7 Effect of SGLT2 inhibition on renal hemodynamics via the tubuloglomerular feedback (TGF) mechanism: the macula densa cells are specialized epithelial cells that form the macula densa as part of the distal tubule sensing system of the same nephron. As shown on the right, these cells detect the luminal NaCl concentration in the tubular fluid. NaCl detection occurs after its uptake by SLC12A1 (NKCC2). Elevated filtration at the glomerulus or reduced reabsorption of Na+ and water in the proximal tubule causes the tubular fluid at the macula densa to have a higher concentration of luminal NaCl. Inhibition of SGLT2-mediated Na+-coupled glucose transport significantly increases NaCl exposure at the macula densa. This is followed by increased transport activity of SLC12A1 (NKCC2) in macula densa cells. As part of the sensing mechanism, this ultimately leads to extracellular accumulation of adenosine in the juxtaglomerular interstitial space (see text for details). Adenosine can then directly act on Gi-coupled A1 receptors expressed on afferent arteriolar smooth muscle cells [63,87], triggering vasoconstriction of the afferent arterioles to reduce GFR, as shown in Fig. 8a and b.

Beneficial effects of SGLT2 inhibitors in T2D-based and combination therapies

Glomerular hyperfiltration with elevated GFR is an important risk factor for the development of diabetic kidney disease (DKD). This is also true for T2D patients, although hyperfiltration is more difficult to diagnose in them, because they often have a normal or even a reduced GFR due to loss of nephrons. It is likely, however, that they still exhibit hyperfiltration at the single-nephron level. Blockade of the renin-angiotensin system (RAS) by antihypertensive agents such as ARBs (angiotensin receptor blockers) has proven effective for the treatment of T2D patients. ARBs slow DKD progression by lowering glomerular pressure and hyperfiltration, mainly through dilation of efferent arterioles owing to the lack of angiotensin-mediated constriction [85]. In T1D, adenosine plays a central role in executing the nephroprotective response to SGLT2 inhibitors, ameliorating preglomerular arteriolar dilation and hyperfiltration via the adenosine receptor A1R acting on the vasoconstriction of afferent arterioles. A recent study was launched to examine the renoprotective benefits of combination therapy with the SGLT2 inhibitor dapagliflozin in T2D patients treated with ARBs [185]. The study was undertaken because (1) it was unclear how exactly SGLT2 inhibitors affect renal hemodynamics in patients with T2D, whose renal physiology differs from that of the previously studied T1D patients with hyperfiltration, and (2) it was unknown whether the effects of SGLT2 inhibitors are diminished in response to glucose lowering. Therefore, the effects of dapagliflozin on renal hemodynamics were also examined in T2D patients treated with the blood sugar-lowering agent metformin together with an ARB. The study revealed that the beneficial renal hemodynamic effects of SGLT2 inhibitors are fully independent of the glucose-lowering effects. Interestingly, the lowering of GFR was accompanied by a stable or even lowered renal vascular resistance (RVR), suggesting that the acute decline in GFR is mainly caused by post-glomerular vasodilation rather than preglomerular vasoconstriction.
Interestingly, in this study, the concentration of adenosine, which plays a key role in mediating the positive effects of SGLT2 inhibitors, was also significantly increased in T2D patients. Compared with the hyperfiltration in patients with T1D, however, baseline GFR and effective renal plasma flow (ERPF) were much lower in the T2D study, indicating that the preglomerular arteriolar diameter was already narrow, thus limiting or prohibiting further preglomerular vasoconstriction by adenosine (see Fig. 8c and d). Adenosine may, however, induce further post-glomerular vasodilation via A2R instead of preglomerular vasoconstriction (in accordance with the concept that activation of adenosine receptors of type A1 leads to constriction of afferent arterioles and that of type A2 to dilation of post-glomerular arteries) [183]. Therefore, the beneficial effect of SGLT2 inhibitors on the renal hemodynamics of these T2D patients with RAS blockade is most likely due to enhancement of post-glomerular vasodilation in patients with T2D, as illustrated in Fig. 8b.

Are the effects of SGLT2 inhibitors on renal hemodynamics in T1D and T2D different?

While the T1D study in young adults revealed a reduction of hyperfiltration and intraglomerular pressure by vasoconstriction of afferent arterioles, the T2D study in older adults revealed efferent arteriolar vasodilation as the mediator of the decrease in GFR and intraglomerular pressure. These are remarkably different effects of SGLT2 inhibitors in two different patient groups with diabetes. What could be the reason for these differences? First of all, there are important differences between the patient cohorts of the two studies that go beyond the difference in T1D and T2D pathogenesis. These include age, glycemic control, blood pressure, and renal function. All of these parameters could contribute to the difference in SGLT2 inhibitor effects on renal hemodynamics. Notably, the number of nephrons declines with age (by about 50% in healthy individuals between 20 and 70 years of age), while the single-nephron GFR in the remaining glomeruli varies only slightly [32]. Therefore, the differences observed in the two studies may in part reflect variability in nephron numbers among the patient cohorts. Also, there is great renal structural heterogeneity due to diabetic nephropathy in older T2D patients compared with younger T1D patients. In addition, the T2D study comprised a combination therapy with an ARB drug.

Clinical benefits in T1D and T2D patients

Taken together, SGLT2 inhibitors protect against major adverse kidney outcomes in individuals with T1D and T2D. In addition, they prevent kidney failure and reduce morbidity in patients with T2D. Combined therapies in T2D with other drugs such as ARBs prove effective for added nephroprotective actions. Combination therapies with SGLT2 inhibitors were found to be relatively free of complications, as shown by meta-analyses of data from large clinical studies [112]. In addition, SGLT2 inhibitors turn out to have general nephroprotective effects, independent of diabetes. Overall, beneficial effects include lowering of arterial blood pressure, improvement of blood sugar levels, reduction of blood uric acid levels, and slowing of the progression of diabetic nephropathy.

Renal-cardiac benefits of SGLT2 inhibitors

SGLT2 inhibitors also show great benefits for patients with heart failure, but the precise mechanisms underlying this renal-cardiac benefit are not completely understood.
Renal autoregulation could contribute toward the observed favorable effects in patients with heart failure with reduced ejection fraction (HFrEF), with and without diabetes, a topic that is currently under evaluation in clinical trials [91]. In addition, a variety of accompanying favorable changes may contribute toward these benefits: improvement of blood glucose levels, lowering of arterial blood pressure, decreases in body weight and arterial blood volume, and reduction of uric acid levels. However, none of these factors fully explains the renal-cardiac benefits. Recently, direct effects of SGLT2 inhibitors on cardiomyocytes have been discussed, although it is unclear what physiological role Na+-coupled glucose transporters play in these cells. Indeed, a recent report reveals that SGLT1 is expressed in cardiomyocytes from human tissue biopsies at both the RNA and protein levels. Interestingly, the study revealed that, while there was no expression of SGLT2, SGLT1 was expressed in normal myocardial tissue and significantly upregulated in ischemia and hypertrophy [33]. The increase in SGLT1 expression in ischemic and hypertrophic myocardium was associated with increased phosphorylation of activating domains of the intracellular second messengers ERK1/2 and mTOR, thereby mediating PKC-stimulated plasma membrane expression of SGLT1, representing a potential pharmacological target for cardio-protection. However, SGLT2 inhibitors are poor ligands of SGLT1; thus, off-target effects of SGLT2 inhibitors that are not related to membrane transporters in cardiomyocytes cannot be completely excluded in this study.

Safety of SGLT2 inhibitors and adverse effects

Although there are concerns that genital mycotic infections, urinary tract infections, and osmotic diuresis are common adverse effects of these inhibitors, meta-analyses of trials and a large population-based cohort study indicated that there is no increased risk, providing reassurance for patients [199]. Since SGLT2 inhibitors are a fairly new class of drugs, however, information on long-term adverse effects is not yet available. The safety of SGLT2 inhibitors is nevertheless supported by their highly SGLT2-selective nature. As indicated above, SGLT2-selective inhibitors do not significantly inhibit intestinal SGLT1, which is important to prevent the diarrhea that would be caused if SGLT1 were strongly inhibited, similar to glucose-galactose malabsorption [178]. In the kidney, SGLT2 selectivity is also important from the safety point of view. When SGLT2 in the proximal portion of the renal proximal tubules is completely inhibited, downstream SGLT1 in the distal portion of the proximal tubule compensates for the glucose reabsorption. When fully functional, SGLT1 can reabsorb 120 g of glucose per day, whereas under euglycemic conditions it is responsible for the reabsorption of only about 20 g of glucose per day [1]. This compensation of glucose reabsorption by SGLT1 is important to prevent hypoglycemia when SGLT2 inhibitors are used for the treatment of diabetic patients. In fact, individuals with familial renal glucosuria, a genetic loss of SGLT2 function, do not generally suffer from hypoglycemia, an observation that supports the safety of the use of SGLT2 inhibitors [186]. Therefore, it is crucial that SGLT2 inhibitors do not inhibit SGLT1 in the renal proximal tubules. Based on the plasma concentration and protein binding of the SGLT2 inhibitor canagliflozin, it was in fact estimated that it does not affect SGLT1 in proximal tubules, where the drug inhibits SGLTs by binding them from the luminal side [107,116].
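The reasoning behind such an estimate can be sketched as a simple free-drug calculation. All numbers below are invented placeholders, not the actual pharmacokinetic values from [107,116]; only the logic of comparing the free drug concentration with the transporter IC50 values is the point.

```python
# Illustrative free-drug vs. IC50 comparison for a hypothetical
# SGLT2-selective inhibitor. ALL numbers are invented placeholders,
# not measured values for canagliflozin.
total_plasma_conc_nM = 5000.0   # hypothetical total plasma concentration
fraction_bound = 0.99           # hypothetical plasma protein binding (99%)

free_conc_nM = total_plasma_conc_nM * (1.0 - fraction_bound)  # 50 nM free

ic50_sglt2_nM = 4.0      # hypothetical high-affinity target
ic50_sglt1_nM = 700.0    # hypothetical low-affinity off-target

print(f"Free drug:  {free_conc_nM:.0f} nM")
print(f"SGLT2 IC50: {ic50_sglt2_nM:.0f} nM -> free drug >> IC50, strong inhibition")
print(f"SGLT1 IC50: {ic50_sglt1_nM:.0f} nM -> free drug << IC50, little inhibition")
```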
Fig. 8 Hypothesized renal hemodynamic effects of SGLT2 inhibitors: a and b, beneficial effects in type 1 diabetes patients; c and d, beneficial effects in type 2 diabetes patients treated with a renin-angiotensin system (RAS) blocker.

Similarly, SGLT2 inhibitors are not expected to affect SGLT1 expressed in the heart and skeletal muscle when used at clinical doses, which further supports the safety of the drugs [116]. The highly SGLT2-selective nature has been an important requirement for SGLT2 inhibitors, as described above. Therefore, SGLT2 inhibitors with high selectivity that do not affect other SGLTs have been developed and launched as therapeutic drugs [6]. However, it has been recognized that α-glucosidase inhibitors, which suppress glucose absorption from the small intestine, are effective in the control of postprandial hyperglycemia, which most anti-diabetic agents are not able to normalize even though they reduce fasting blood glucose levels [8]. Because inhibition of glucose absorption from the small intestine would make a beneficial contribution to the anti-diabetic action, partial inhibition of SGLT1 in the small intestine, without causing diarrhea, has been considered. Among the SGLT2 inhibitors currently used clinically, canagliflozin shows lower selectivity for SGLT2 [89]. Furthermore, because canagliflozin shows the highest plasma protein binding, a higher dose is set for clinical usage [116]. Therefore, canagliflozin, when administered orally, transiently and partially inhibits glucose absorption from the upper small intestine, which does not cause diarrhea but contributes to reducing postprandial hyperglycemia [107,116,128]. Furthermore, the transient, partial inhibition of glucose absorption from the upper small intestine causes a partial shift of glucose absorption from the upper to the lower small intestine, which increases the secretion of the incretin GLP-1 from the lower small intestine [128]. This also contributes to the anti-diabetic action of canagliflozin and is its characteristic feature among SGLT2 inhibitors. The effect of SGLT2 inhibitors on bone fractures has recently been discussed. Evidence indicating a direct effect of SGLT2 inhibitors on fracture risk is lacking, and an increased number of falls probably contributes to fractures [131]. SGLT2 inhibitors might indirectly increase bone turnover through weight loss. Determining the relevance of the effect of SGLT2 inhibitors on bone fractures and mineral metabolism in T2D, however, requires further investigation [189].

Future perspectives

While the transport mechanisms and physiological roles of human SGLT1 in intestine and kidney have been extensively studied, there are emerging observations of new roles in other tissues that require further attention. These include expression in the heart, where its role is completely unknown, and in the brain, where it is proposed to contribute to glucose sensing. Some of the SGLT2 inhibitors can also interact with SGLT1 and, thus, while being beneficial for diabetes treatment, they could affect some of these other functions as well.
Also, while SGLT2 expression was thought to be largely confined to the early proximal tubules of the kidney, recent studies reveal expression in certain cancer types as well. This opens new opportunities for the development of cancer therapies using SGLT2 inhibitors, thereby blocking delivery of glucose into cancer cells and impairing energy production. More studies are also needed to uncover the physiological roles of other members of the SLC5 family. Suitable experimental tools are needed to define the true role of SGLT3, a potentially interesting protein that might act as a glucose sensor. However, further examination of this hypothesis is still needed. Also, only sparse information is available on SGLT4, a transporter that is particularly interesting as it seems to reabsorb the metabolite 1,5-anhydroglucitol in the kidney. Since an FDA-approved clinical test for diabetes patients is in use that might depend on the functional activity of SGLT4, further studies of the role of this transporter in health and disease would be valuable. Also, information on SGLT5 is still very limited. The expression pattern of both SGLT4 and SGLT5 in intestine and kidney somewhat resembles that of SGLT1 and SGLT2. However, they do not appear to function as backup Na+-dependent glucose transporters, and they exhibit different sugar specificities compared with SGLT1 and SGLT2. Therefore, it will be important to clarify their specific contributions toward solute transport in intestine and kidney. While the existing bacterial vSGLT and SiaT structures have contributed significantly to our understanding of the mechanism of inhibition of gliflozins, their detailed action on human SGLTs is still unknown. Structural biology efforts to elucidate the atomic structures of human SGLTs in complex with substrates and/or inhibitors are warranted. Such structures could also help to answer the question to what extent the selectivity of these compounds depends on particular dynamic properties of SGLTs or on the interaction of the inhibitors with specific protein side-chains. Identifying further the precise mechanisms of the effects of SGLT2 inhibition on renal hemodynamics and how they differ in T1D and T2D patients, as well as further uncovering the regulation mechanisms of SGLT2 upregulation in early renal proximal tubules in the diabetic states and the compensatory mechanisms of upregulation of SGLT1 in late renal proximal tubules in the absence of SGLT2, may provide a foundation for future drug discovery and strategies for novel patient-tailored therapies, including combination therapies.

Funding information

This article is funded by the Swiss National Science Foundation Sinergia grant # CRSII5_180326 "The role of mitochondrial carriers in metabolic tuning and reprogramming by calcium flow across membrane contact sites."

Compliance with ethical standards

Conflict of interest: The authors declare that they have no conflict of interest.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Natural Language Processing for Computer Scientists and Data Scientists at a Large State University

The field of Natural Language Processing (NLP) changes rapidly, requiring course offerings to adjust with those changes, and NLP is not just for computer scientists; it's a field that should be accessible to anyone who has a sufficient background. In this paper, I explain how students with Computer Science and Data Science backgrounds can be well-prepared for an upper-division NLP course at a large state university. The course covers probability and information theory, elementary linguistics, and machine and deep learning, with an attempt to balance theoretical ideas and concepts with practical applications. I explain the course objectives, topics, and assignments, and reflect on adjustments to the course over the last four years, as well as feedback from students.

Introduction

Thanks in part to access to large datasets, increases in compute power, and easy-to-use programming libraries that leverage neural architectures, the field of Natural Language Processing (NLP) has become more popular and has seen more widespread adoption in research and in commercial products. On the research side, the Association for Computational Linguistics (ACL) conference-the flagship NLP conference-and related annual conferences have seen dramatic increases in paper submissions. For example, in 2020 ACL had 3,429 paper submissions, whereas 2019 had 2,905, and this upward trend has been continuing for several years. Certainly, lowering barriers to access of NLP methods and tools for researchers and practitioners is a welcome direction for the field, enabling researchers and practitioners from many disciplines to make use of NLP. It is therefore becoming more important to better equip students with an understanding of NLP to prepare them for careers either directly related to NLP or which leverage NLP skills. In this paper, I reflect on my experience setting up and maintaining a class in NLP at Boise State University, a large state university, on how to prepare students for research and industry careers, and on how the class has changed over four years to fit the needs of students. The next section explains the course objectives. I then explain challenges that are likely common to many university student populations, how the NLP course is designed for students with Data Science and Computer Science backgrounds, and the course content, including lecture topics and assignments that are designed to fulfill the course objectives. I then offer a reflection on the three times I taught this course over the past four years, and future plans for the course.

Course Objectives

Most students who take an NLP course will not pursue a career in NLP proper; rather, they take the course to learn skills that will help them find employment or do research in areas that make use of language, largely focusing on the medium of text (e.g., anthropology, information retrieval, artificial intelligence, data mining, social media network analysis, provided they have sufficient data science training). My goal for the students is that they can identify aspects of natural language (phonetics, syntax, semantics, etc.) and how each can be processed by a computer, explain the differences between classification models and approaches, be able to map from (basic) formalisms to functional code, and use existing tools, libraries, and data sets for learning, while attempting to strike a balance between theory and practice.
In my view, there are several aspects of NLP that anyone needs to grasp in order to apply NLP techniques in novel circumstances. Those aspects are illustrated in Figure 1. No single NLP course can possibly achieve depth in all of the aspects in Figure 1, but a student who has taken courses or has experience in at least two of the areas (e.g., they have taken a statistics course and have experience with Python, or they have taken linguistics courses and have used some data science or machine learning libraries) will find success in the course more easily than those with no experience in any aspect. This introduces a challenge that has been explored in prior work on teaching NLP (Fosler-Lussier, 2008): the diversity of the student population. NLP is a discipline that is not just for computer science students, but it is challenging to prepare students for the technical skills required in an NLP course. Moreover, similar to the student population in Fosler-Lussier (2008), there should be course offerings for both graduate and undergraduate students. In my case, which is fairly common in academia, as the sole NLP researcher at the university I can only offer one course once every four semesters, for both graduate and undergraduate students, and for students with varied backgrounds, not only computer science. As a result, this is not a research methods course; rather, it is geared towards learning the important concepts and technical skills surrounding recent advances in NLP. Others have attempted to gear course content and delivery towards research (Freedman, 2008), giving students the opportunity to have open-ended assignments. I may consider this for future offerings, but for now the final project acts as an open-ended assignment, though I don't require students to read and understand recent research papers. In the following section, I explain how we prepare students of diverse backgrounds to succeed in an NLP course for upper-division undergraduate and graduate students.

Preparing Students with Diverse Academic and Technical Backgrounds

Boise State University is the largest university in Idaho, situated in the capital of the State of Idaho. The university has a high number of non-traditional students (e.g., students outside the traditional student age range, or second-degree seeking students). Moreover, the university has a high acceptance rate (over 80%) for incoming first-year students. As is the case with many universities and organizations, a greater need for "computational thinking" among students of many disciplines has been an important driver of recent changes in course offerings across many departments. Moreover, certain departments have responded to the need and student interest in machine learning course offerings. In this section, we discuss how we altered the Data Science and Computer Science curricula to meet these needs and the implications these changes have had on the NLP course.

Data Science

The Data Science offerings begin with a foundational course (based on Berkeley's data8 content) that has only a very basic math prerequisite. It introduces and allows students to practice Python, Jupyter notebooks, data analysis and visualization, and basic statistics (including the bootstrap method of statistical significance). Several more domain-specific courses follow this course, giving students options for gaining practical experience in Data Science skills relative to their abilities and career goals.
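To give a flavor of the bootstrap unit in that foundational course (a hedged sketch of the kind of exercise such a course might include, with invented data, not an actual assignment), students might estimate a confidence interval for a sample mean by resampling:

```python
import random

# Bootstrap 95% confidence interval for a sample mean.
# The data are invented for illustration.
sample = [12, 15, 9, 14, 11, 18, 10, 13, 16, 12]

def bootstrap_means(data, n_resamples=10_000):
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]  # sample with replacement
        means.append(sum(resample) / len(resample))
    return sorted(means)

means = bootstrap_means(sample)
lo = means[int(0.025 * len(means))]
hi = means[int(0.975 * len(means))]
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```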
One path, more geared towards students of STEM-related majors (though not targeting Computer Science majors) as well as some majors in the Humanities, is a certificate program that includes the foundational course, a follow-on course that gives students experience with more data analysis as well as probability and information theory, an introductory machine learning course, and a course on databases. The courses largely use Python as the programming language of choice.

Computer Science

In parallel to the changes in Data Science-related courses, the Department of Computer Science has seen increased enrollment and increased requests for machine learning-related courses. The department offers several such courses, though they focus on upper-division students (e.g., artificial intelligence, applied deep learning, information retrieval and recommender systems). This is a challenge because the main Computer Science curriculum focuses on procedural languages such as Java with little or no exposure to Python (similar to the student population reported in Freedman (2008)). Though the backgrounds can be quite diverse, my NLP course allows two prerequisite paths: all students must take a statistics course, but Computer Science students must take a Programming Languages course (which covers context free grammars for parsing computer languages and now covers some Python programming), while Data Science students must have taken the introductory machine learning course. Figure 2 depicts the two course paths visually.

NLP Course Content

In this section, I discuss course content, including topics and assignments that are designed to meet the course objectives listed above. Woven into the topics and assignments are the themes of ambiguity and limitations, explained below.

Topics & Assignments

Theme of Ambiguity

Figure 3 shows the topics (solid outlines) that roughly translate to a single lecture, though some topics require multiple lectures. One main theme that is repeated throughout the course, but is not a specific lecture topic, is ambiguity. This helps the students understand differences between natural human languages and programming languages. The Introduction to Linguistics topic, for example, gives (very) high-level overviews of phonetics, morphology, syntax, semantics, and pragmatics, with examples of ambiguity for each area of linguistics (e.g., phonetic ambiguity is illustrated by hearing someone say it's hard to recognize speech, which could be heard as it's hard to wreck a nice beach, and syntactic ambiguity is illustrated by the sentence I saw the person with the glasses having more than one syntactic parse).

Probability and Information Theory

This course does not focus only on deep learning, though many university NLP offerings seem to be moving to deep-learning-only courses. There are several reasons not to focus exclusively on deep learning at a university like Boise State. First, students will not have a depth of background in probability and information theory, nor will they have a deep understanding of optimization (both convex and non-convex) or error functions in neural networks (e.g., cross entropy). I take time early in the course to explain discrete and continuous probability, and information theory. Discrete probability theory is straightforward as it requires counting, something that is intuitive when working with language data represented as text strings.
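For example (a minimal sketch of the kind of counting exercise this part of the course motivates, using an invented toy corpus), unigram probabilities fall directly out of frequency counts:

```python
from collections import Counter

# Estimate unigram probabilities by counting -- discrete probability
# as simple relative frequency over a (toy, invented) corpus.
corpus = "the cat sat on the mat the cat slept".split()

counts = Counter(corpus)
total = sum(counts.values())

for word, count in counts.most_common(3):
    print(f"P({word}) = {count}/{total} = {count / total:.3f}")
```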
Continuous probability theory, I have found, is more difficult for students to grasp as it relates to machine learning or NLP, but building on students' understanding of discrete probability theory seems to work pedagogically. For example, if we use continuous data and try somehow to count values in that data, it's not clear what should be counted (e.g., using binning), highlighting the importance of continuous probability functions that fit around the data, and the importance of estimating parameters for those continuous functions-an important concept for understanding classifiers later in the course. To illustrate both discrete and continuous probability, I show students how to program a discrete Naive Bayes classifier (using ham/spam email classification as a task) and a continuous Gaussian Naive Bayes classifier (using the well-known iris data set) from scratch. The two classifiers have similarities, but the differences illustrate how continuous classifiers learn parameters.

Sequential Thinking

Students experience probability and information theory in a targeted and highly-scaffolded assignment. They then extend their knowledge and program, from scratch, a part-of-speech tagger using counting to estimate probabilities modeled as Hidden Markov Models. These models seem old-fashioned, but they help students gain experience beyond the standard machine learning workflow of mapping many-to-one (i.e., features to a distribution over classes), because this is a many-to-many sequential task (i.e., many words to many parts-of-speech), an important concept to understand when working with sequential data like language. It also helps students understand that NLP often goes beyond just fitting "models" because it requires things like building a trellis and decoding a trellis (undergraduate students are required to program a greedy decoder; graduates are required to program a Viterbi decoder). This is a challenging assignment for most students, irrespective of their technical background, but grasping the concepts of this assignment helps them grasp more difficult concepts that follow.

Syntax

The Syntax & Parsing assignment also deserves mention. The students use any parser in NLTK to parse a context free grammar of a fictional language with a limited vocabulary. This helps the students think about the structure of language. While there are other important ways to think about syntax, such as dependencies (which we discuss in the course), another reason for this assignment is to have the students write grammars for a small vocabulary of words in a language they don't know, and also to create a non-lexicalized version of the grammar based on parts of speech, which helps them understand coverage and syntactic ambiguity more concretely. There is no machine learning or estimation of a probabilistic grammar here, just parsing.

Semantics

An important aspect of my NLP class is semantics. I introduce students briefly to formal semantics (e.g., first-order logic), WordNet (Miller, 1995), distributional semantics, and grounded semantics. We discuss the merits of representing language "meaning" as embeddings, as well as the limitations of meaning representations trained only on text and how they might be missing important semantic knowledge (Bender and Koller, 2020). The Topic Modeling assignment uses word-level embeddings (e.g., word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014)) to represent texts and gives students an opportunity to begin using a deep learning library (tensorflow & keras or pytorch).
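One way such an assignment might begin (a hedged sketch using the gensim library's pretrained-vector downloads, not the actual assignment code) is by representing a text as the average of its word vectors and comparing texts by cosine similarity:

```python
import numpy as np
import gensim.downloader

# Represent a short text as the mean of its pretrained GloVe vectors.
# "glove-wiki-gigaword-50" downloads 50-dimensional word vectors.
vectors = gensim.downloader.load("glove-wiki-gigaword-50")

def embed_text(text):
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

doc_a = embed_text("the senate passed the budget bill")
doc_b = embed_text("lawmakers approved new government spending")

# Cosine similarity between the two document vectors
cos = np.dot(doc_a, doc_b) / (np.linalg.norm(doc_a) * np.linalg.norm(doc_b))
print(f"cosine similarity: {cos:.2f}")
```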
We then consider how a semantic representation that has knowledge of modalities beyond text (i.e., images) is part of human knowledge (e.g., what is the meaning of the word red?), and how recent work is moving in this direction. Two assignments give the students a deeper understanding of these ideas. The transfer learning assignment requires the students to use convolutional neural networks pre-trained on image data to represent objects in images and train classifiers to identify simple object types, tying images to words. This is extended in the Grounded Semantics assignment, where a binary classifier (based on the words-as-classifiers model introduced in Kennington and Schlangen (2015), then extended to work with images of "real" objects in Schlangen et al. (2016)) is trained for all words in referring expressions to objects in images, using the MSCOCO dataset annotated with referring expressions (Mao et al., 2016); a toy sketch of the words-as-classifiers idea appears below. Both assignments require ample scaffolding to help guide the students in using the libraries and datasets, and the MSCOCO dataset is much bigger than they are used to, giving them more real-world experience with a larger dataset. Deep Learning An understanding of deep learning is obviously important for recent NLP researchers and practitioners. One constant challenge is determining at what level of abstraction to present neural networks (should students know what is happening at the level of the underlying linear algebra, or is a conceptual understanding of parameter fitting in the neurons enough?). Furthermore, deep learning as a topic requires an understanding of its limitations and, at least to some degree, how it works "under the hood" (learning just how to use deep learning libraries without understanding how they work and how they "learn" from the data is akin to giving someone a car to drive without teaching them how to drive it safely). This also means explaining some common misconceptions, like the notion that neurons in neural networks "mimic" real neurons in human brains, something that is very far from true, though certainly the idea of neural networks is inspired by human biology. For my students, we progress from linear regression to logistic regression (illustrating how parameters are being fit and how gradient descent is different from directly estimating parameters in continuous probability functions; i.e., maximum likelihood estimation vs convex and non-convex optimization), building towards small neural architectures and feedforward networks. We also cover convolutional neural networks (for transfer learning and grounded semantics), attention (Vaswani et al., 2017), and transformers, including transformer-based language models like BERT (Devlin et al., 2018), and how to make use of them: students learn how such models are trained, but only fine-tuning is assigned for them to experience directly. I focus on smaller datasets and fine-tuning so students can train and tune models on their own machines. Final Project There is a "final project" requirement. Students can work solo or in a group of up to three students. The project can be anything NLP-related; projects are generally realized as using an NLP or machine/deep learning library to train on some specific task, but others include methods for data collection (a topic we don't cover in class specifically, but some students have interest in the data collection process for certain settings like second language acquisition), as well as interfaces that they evaluate with some real human users.
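Stepping back to the Grounded Semantics assignment described above, here is a rough sketch of the words-as-classifiers idea in my own simplified form: each word gets its own binary classifier over visual features of candidate objects. The feature values and the single "red" classifier below are made-up toy stand-ins, not MSCOCO data or the assignment's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy visual features for objects: [redness, greenness, size] (hypothetical).
# Positives for "red" are objects the word was used to refer to.
X_pos = rng.normal(loc=[0.9, 0.1, 0.5], scale=0.1, size=(50, 3))
X_neg = rng.normal(loc=[0.2, 0.6, 0.5], scale=0.2, size=(50, 3))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 50 + [0] * 50)

# One binary classifier per word; here, just the classifier for "red".
word_classifiers = {"red": LogisticRegression().fit(X, y)}

# Applying a word's classifier to a new candidate object: the probability
# acts as a degree of fit between the word and the object.
candidate = np.array([[0.85, 0.15, 0.4]])
print(word_classifiers["red"].predict_proba(candidate)[0, 1])
```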
Scoping the final projects is always the biggest challenge, as many students initially envision very ambitious projects (e.g., build an end-to-end chatbot from scratch). I ask students to consider how much effort it would take to do three assignments and use that as a point of comparison. Throughout the first half of the semester students can ask for feedback on project ideas, and at the halfway point in the semester, students are required to submit a short proposal that outlines the scope and timeline for their project. They have the second half of the semester to then work through the project (with a "checkpoint" part way through to inform me of progress and needed adjustments), then they write a project report on their work at the end of the semester with evaluations and analyses of their work. Graduate students must write a longer report than the undergraduate students, and graduate students are required to give a 10-minute presentation on their project. The timeline here is critical: the midway point for beginning the project allows students to have experience with classification and NLP tasks, while still having enough time to make adjustments as they work on their project. For example, some students attempt to apply BERT fine-tuning after the BERT assignment even though it wasn't in their original project proposal. Theme of Limitations As is the case with ambiguity, limitations is a theme in the course: limitations of using probability theory on language phenomena, limitations on datasets, and limitations on machine learning models. The theme of limitations ties into an overarching ethical discussion that happens at intervals throughout the semester about what can reasonably be expected from NLP technology and whom it affects as more practical models are deployed commercially. The final assignment, critical reading of the popular press, is based on a course under the same title taught by Emily Bender at the University of Washington. 5 The goal of the assignment is to learn to critically read popular articles about NLP. Given an article, students need to summarize the article, then scrutinize the sources using the following as a guide: (1) can they access the primary source, such as the original published paper; (2) do the claims in the article relate to what is claimed by the primary source; (3) was experimental work involved, or is the article simply offering conjecture based on current trends; and (4) if the article did not carry out an evaluation, what kind of evaluation would be appropriate to substantiate the claims made by the article's author(s)? Then students should relate the headline of the article to the main text and determine if reading the headline provides an abstract understanding of the article's contents, to what extent the author identified limitations of the NLP technology they were reporting on, what someone without training in NLP might take away from the article, and whether the author identified the people who might be affected (negatively or positively) by the NLP technology. This assignment gives students experience in recognizing the gap between the reality of NLP technology, how it is perceived by others, whom it affects, and its limitations. We dedicate an entire lecture to ethics, and students are also asked to consider the implications of their final projects, what their work can and cannot reasonably do, and who might be affected by their work.
6 Discussion Striking a balance between content on probability and information theory, linguistics, and machine learning is challenging for a single course, but given the diverse student population at a public state school, this approach seems to work for the students. An NLP class should have at least some content about linguistics, and framing aspects of linguistics in terms of ambiguity gives students the tools to think about how much they experience ambiguity on a daily basis, and the fact that if language were not ambiguous, data-driven NLP would be much easier (or even unnecessary). The discussions about syntax and semantics are especially important, as many have not considered (particularly those who have not learned a foreign language) how much they take for granted when it comes to understanding and producing language, both speech and written text. The discussions on how to represent meaning computationally (symbolic strings? classifiers? embeddings? graphs?) and how a model should arrive at those representations (using speech? text? images?) are rewarding for the students. While most of the assignments and examples focus on English, examples of linguistic phenomena are often shown from other languages (e.g., Japanese morphology and German declension), and the students are encouraged to work on other languages for their final project. Assignments vary in scope and scaffolding. For the probability and information theory and BERT assignments, I provide a fairly well-scaffolded template that the students fill in, whereas most other assignments are more open-ended, each with a set of reflection and analysis questions. Content Delivery Class sizes vary between 35 and 45 students. Class content is presented largely either as presentation slides or live programming using Jupyter notebooks. Slides introduce concepts and explain things outside of code (e.g., linguistics and ambiguity, or graphical models), but most concepts have concrete examples using working code. The students see code for Naive Bayes classifiers (both discrete and continuous); I use Python code to explain probability and information theory, classification tasks such as spam filtering, name classification, topic modeling, parsing, loading and preprocessing datasets, linear and logistic regression, sentiment classification, and an implementation of neural networks from scratch as well as with popular libraries. While we use NLTK for much of the instruction, following in some ways what is outlined in Bird et al. (2008), we also look at supported NLP Python libraries including textblob, flair (Akbik et al., 2019), spacy, stanza (Qi et al., 2020), scikit-learn (Pedregosa et al., 2011), tensorflow (Abadi et al., 2016) and keras (Chollet et al., 2015), pytorch (Paszke et al., 2019), and huggingface (Wolf et al., 2020). Other libraries are useful as well, but most of these help students use existing tools for standard NLP pre-processing like tokenization, sentence segmentation, stemming or lemmatization, and part-of-speech tagging, and many have existing models for common NLP tasks like sentiment classification and machine translation (a small preprocessing example follows below). The stanza library has models for many languages. All code I write or show in class is accessible to the students throughout the semester so they can refer back to the code examples for assignments and projects.
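As a minimal illustration of the standard preprocessing steps mentioned above, the sketch below runs tokenization, stop-word filtering, and stemming with NLTK, then lemmatization and part-of-speech tagging with spaCy. It is my own example rather than course code, and it assumes the NLTK data packages (punkt, stopwords) and the spaCy model en_core_web_sm have already been downloaded.

```python
import nltk
import spacy
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# Assumes prior setup:
#   nltk.download("punkt"); nltk.download("stopwords")
#   python -m spacy download en_core_web_sm
sentence = "The students were parsing sentences with NLTK and spaCy."

tokens = nltk.word_tokenize(sentence)
stops = set(stopwords.words("english"))
content = [t for t in tokens if t.isalpha() and t.lower() not in stops]

stemmer = PorterStemmer()
print([stemmer.stem(t) for t in content])         # e.g., 'parsing' -> 'pars'

nlp = spacy.load("en_core_web_sm")
doc = nlp(sentence)
print([(tok.text, tok.lemma_, tok.pos_) for tok in doc])
```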
Covering so many libraries of course means that students only obtain a fairly shallow experience with any library; the goal is to show them enough examples and give them enough experience in assignments to make sense of public documentation and other code examples that they might encounter. The course uses two books, both of which are available free online: the NLTK book, 7 and an ongoing draft of Jurafsky and Martin's upcoming 3rd edition. 8 The first assignment (Python & Jupyter in Figure 3) is an easy but important assignment: I ask the students to go through Chapter 1 and parts of Chapter 4 of the NLTK book and, for all code examples, write them by hand into a Jupyter notebook (i.e., no copying and pasting). This ensures that their programming environments are set up, steps them through how NLTK works, gives them immediate exposure to common NLP tasks like concordance and stemming, and gives them a way to practice Python syntax in the context of a Jupyter notebook. Another part of the assignment asks them to look at some Jupyter notebooks that use tokenization, counters, stop words, and n-grams, and asks them questions about best practices for authoring notebooks (including formatted comments). 9 Students can use cloud-based Jupyter servers for doing their assignments (e.g., Google colab), but all must be able to run notebooks on a local machine and spend time learning about Python environments (i.e., anaconda). Assignments are submitted and graded using okpy, which renders notebooks and allows instructors to assign grading to themselves or teaching assistants, and students can see their grades and written feedback for each assignment. 10 Adjustments for Remote Learning This course was relatively straightforward to adjust for remote delivery. The course website and okpy (for assignment submissions) are available to the students at all times. I decided to record lectures live (using Zoom) then make them available with transcripts to the students. 7 http://www.nltk.org/book_1ed/ 8 https://web.stanford.edu/~jurafsky/slp3/ 9 I use the notebooks listed here for this part of the assignment: https://github.com/bonzanini/nlp-tutorial 10 https://okpy.org/ This course has one midterm, a programming assignment that is similar in structure to the regular assignments. During an in-person semester, there would normally be a written final, but I opted to make the final part of their final project grade. Reflection on Three Offerings over 4 years Due to department constraints on offering required courses vs. elective courses (NLP is elective), I am only able to offer the NLP course in the Spring semester of odd years; i.e., every four semesters. The course is very popular, as enrollment is always over the standard class size (35 students). Below I reflect on changes that have taken place in the course due to the constant and rapid change in the field of NLP and in our undergraduate curriculum, and the implications those changes had on the course. These reflections are summarized in Table 1. As I am, to my knowledge, the first NLP researcher at Boise State University, I had to largely develop the contents of the course on my own, requiring adjustments over time as I came to better understand student preparedness. At this point, despite the two paths into the course, most students who take the course are still Computer Science students. Spring 2017 The first time I taught the course, only a small percentage of the students had experience with Python.
At the time, the only well-developed Python library for NLP was NLTK, so it and scikit-learn were the focus of practical instruction. I spent the first three weeks of the course helping students gain experience with Python (including Jupyter, numpy, pandas), then used Python as a means to help them understand probability and information theory. The course focused on generative classification, including statistical n-gram language modeling, with some exposure to discriminative models, but no exposure to neural networks. Spring 2019 Between 2017 and 2019, several important papers showing how transformer networks can be used for robust language modeling gained momentum, resulting in a shift towards analyzing and understanding their limitations (so-called BERTology; see Rogers et al. (2020) for a primer). This, along with the fact that changes in the curriculum gave students better experience with Python, caused me to shift focus from generative models to neural architectures in NLP and to cover word-level embeddings more rigorously. I spent the second half of the semester introducing neural networks (including multi-layer perceptrons and convolutional and recurrent architectures) and giving students assignments to give them practice in tensorflow and keras. After the 2017 course, I changed the pre-requisite structure to require our programming languages course instead of data structures. This led to greater preparedness in at least the syntax aspect of linguistics. Spring 2021 In this iteration, I shifted focus from recurrent to attention/transformer-based models and assigned a BERT fine-tuning exercise on a novel dataset using huggingface. I also introduced pytorch as another option for a neural network library (I also spend time on tensorflow and keras). This shift reflects a shift in my own research and understanding of the larger field, though exposure to each library is only partial and somewhat abstract. I note that students who have a data science background will likely appreciate tensorflow and keras more, as they are not as object-oriented as pytorch, which seems to be more geared towards students with Computer Science backgrounds. Students can choose which one they will use (if any) for their final projects. More students are gaining interest in machine learning and deep learning and are turning to MOOC courses or online tutorials, which has led to some degree of better preparation for the NLP course, but often students have little understanding of the limitations of machine learning and deep learning after completing those courses and tutorials. Moreover, students from our university have started an Artificial Intelligence Club (the club started in 2019; I am the faculty advisor), which has given the students guidance on courses, topics, and skills that are required for practical machine learning. Many of the NLP class students are already members of the AI Club, and the club has members from many academic disciplines. Student Feedback I reached out to former students who took the class to ask for feedback on the course. Specifically, I asked if they use the skills and concepts from the NLP class directly for their work, or if the skills and concepts transferred in any way to their work.
Student responses varied, but some answered that they use NLP directly (e.g., to analyze customer feedback or error logs), while most responded that they use many of the Python libraries we covered in class for other things that aren't necessarily NLP-related, but more geared towards Data Science. For several students, using NLP tools helped them in research projects that led to publications. Conclusions & Open Questions With each offering, the NLP course at Boise State University is better suited pedagogically for students with some Data Science or Computer Science training, and the content reflects ongoing changes in the field of NLP to ensure their preparation. The topics and assignments cover a wide range, but as students have become better prepared with Python (by the introduction of new prerequisite courses that cover Python, as well as by changing some courses to include assignments in Python), more focus is spent on topics that are more directly related to NLP. Though I feel it important to stay abreast of the ongoing changes in NLP and help students gain the knowledge and skills needed to be successful in NLP, an open question is what changes need to be made, and a related question is how soon. For example, I think at this point it is clear that neural networks are essential for NLP, though it isn't always clear which architectures should be taught (e.g., should we still cover recurrent neural networks or jump directly to transformers?). It seems important to cover even new topics sooner rather than later, though a course that is focused on research methods might be more concerned with staying up-to-date with the field, whereas a course that is more focused on general concepts and skills should wait for accessible implementations (e.g., huggingface for transformers) before covering those topics. With recordings and updated content, I hope to flip the classroom in the future by assigning readings and lecture videos before class, then use class time for working on assignments. 11 Much of my course materials, including notebooks, slides, topics, and assignments, can be found on a public Trello board. 12 11 This worked well for the Foundations of Data Science course that I introduced to the university; the second time I […]
2021-05-23T13:29:20.406Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "265dae041d068c902cbada43fd528046abde5402", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/2021.teachingnlp-1.21.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "265dae041d068c902cbada43fd528046abde5402", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
13429089
pes2o/s2orc
v3-fos-license
Imaging phase separation near the Mott boundary in the correlated organic superconductors $\kappa$-(BEDT-TTF)$_{2}$X Electronic phase separation consisting of the metallic and insulating domains with 50 -- 100 $\mu$m in diameter is found in the organic Mott system $\kappa$-[($h$8-BEDT-TTF)$_{1-x}$($d$8-BEDT-TTF)$_{x}$]$_{2}$Cu[N(CN)$_{2}$]Br by means of scanning micro-region infrared spectroscopy using the synchrotron radiation. The phase separation appears below the critical end temperature 35 -- 40 K of the first order Mott transition. The observation of the macroscopic size of the domains indicates a different class of the intrinsic electronic inhomogeneity from the nano-scale one reported in the inorganic Mott systems such as High-$T_{c}$ copper and manganese oxides. Microscopic spatially inhomogeneous electronic states have been recently observed in many kinds of correlated electron systems. Nano-scale spatial variation of the superconducting gap has been revealed in the superconducting state of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ by scanning tunnelling spectroscopy and microscopy [1]. In the normal state, charge carriers doped into antiferromagnetic insulators tend to group into some regions of the sample in the form of stripes in some copper oxides [2]. Meanwhile, a different kind of microscopic phase separation takes place in half-doped manganese oxides [3]. A small variation from half doping causes phase segregation of electron-rich ferromagnetic and electron-poor antiferromagnetic domains of submicron size within the charge ordered phase. In systems with a Mott transition, nano-scale electronic inhomogeneity with preferred orientation has been found in the slightly doped Mott insulator Ca$_{2-x}$Na$_x$CuO$_2$Cl$_2$ [4]. NiS$_{2-x}$Se$_x$ pyrite, which is a band-width controlled Mott system, has also shown microscopic electronic inhomogeneity in the critical vicinity of the metal-insulator transition [5]. These microscopic spatial electronic inhomogeneities seem to be an intrinsic feature near the criticality of changes in the charge, spin, orbital, and lattice degrees of freedom in correlated electron systems. Organic charge transfer salts based on the donor molecule bis(ethylenedithio)-tetrathiafulvalene, abbreviated BEDT-TTF or ET, have been recognized as one of the highly correlated electron systems [6]. Among them, $\kappa$-(ET)$_2$X with X = Cu(NCS)$_2$, Cu[N(CN)$_2$]Y (Y = Br and Cl), etc. have attracted considerable attention from the point of view of the strongly correlated quasi-two-dimensional electron system, because the strong dimer structure consisting of two ET molecules makes the conduction band effectively half-filled [7,8,9]. The unconventional metallic, antiferromagnetic insulating, and superconducting phases appear next to one another in the phase diagram [7,10,11].
The transitions among these phases are controlled by the applied pressure [11] and by slight chemical substitution of the donor and anion molecules [12], which must change the conduction band width W with respect to the effective Coulomb repulsion U between two electrons on a dimer. Thus the $\kappa$-(ET)$_2$X family has been considered to be a band-width controlled Mott system, in comparison with the filling controlled one in the inorganic perovskites such as High-$T_c$ copper oxides. Recently, inhomogeneous electronic states have been suggested by $^{13}$C-NMR experiments near the first order metal-insulator transition in the artificially band-width controlled $\kappa$-(ET)$_2$Cu[N(CN)$_2$]Br [13]. Below the characteristic temperature $T^*$, where the incoherent bad metallic state changes to a coherent good metal at lower temperature [14], the $^{13}$C-NMR lines fall into two groups indicating metallic and antiferromagnetic insulating nature. The results imply that the two phases coexist spatially and statically. Subsequent transport experiments have also suggested such a coexistence of two phases at low temperature [15]. Although it has been demonstrated that an inhomogeneous electronic state is realized near the first order transition, the details of the morphology, spatial distribution, domain size, and stability of the inhomogeneity have not yet been clarified. It is very important to obtain real-space information, which can give us a clue as to whether a similar nano-scale electronic inhomogeneity is realized by an exotic mechanism based on the strong correlation effect, or whether macroscopic phase separation occurs due to local potential modulation near the first order transition. In this letter, we present real-space imaging of the electronic phase separation in the partly substituted $\kappa$-[($h$8-ET)$_{1-x}$($d$8-ET)$_{x}$]$_{2}$Cu[N(CN)$_{2}$]Br, where $h$8-ET and $d$8-ET denote fully hydrogenated and deuterated ET molecules, respectively. Scanning micro-region infrared reflectance spectroscopy (SMIS) using synchrotron radiation (SR) is applied to make a two-dimensional map of the local electronic state. The results indicate that macroscopic electronic phase separation takes place near the first order metal-insulator transition, which is different from the nano-scale electronic inhomogeneity reported so far in inorganic correlated electron systems. Single crystals of $\kappa$-[($h$8-ET)$_{1-x}$($d$8-ET)$_{x}$]$_{2}$Cu[N(CN)$_{2}$]Br, partly substituted by the deuterated ET molecule, were grown by the standard electrochemical oxidation method. The substitution x denotes the nominal mole ratio of the fully deuterated ET molecule in the crystallization. We checked the actual substitution with respect to the nominal value x by measuring the intensity of the molecular vibrational mode of the terminal ethylene groups. The substitution dependence of the macroscopic phase diagram and the superconducting properties has been examined [16]. The full volume of the superconductivity has been observed in the range of x = 0 - 0.5 when the samples are cooled slowly. Above x = 0.5, however, the superconducting volume fraction decreases and becomes a few tens of vol% at x = 1, a value which strongly depends on the cooling condition. SMIS measurements were performed using SR at BL43IR in SPring-8 [17]. The polarized reflectance spectra were measured on the c-a plane along E $\parallel$ a-axis and E $\parallel$ c-axis with a Fourier transform spectrometer and a polarizer in the mid-infrared (IR) range, by use of a mercury-cadmium-telluride detector at 77 K.
An IR microscope with a precision-controlled x-y stage and the high intensity of the SR light enable us to obtain a two-dimensional reflectance spectrum map with a spatial resolution of ∼ 10 µm [18]. The sample was fixed by conductive carbon paste on a sample holder with a gold mirror, which was placed at the cold head of a helium flow type refrigerator. We took care to minimize stress on the crystals and to ensure good thermal contact in the sample setting. The reflectivity was obtained by comparison with the gold mirror at each temperature measured. In order to make a real-space image of the electronic states on the crystal surface by SMIS measurements, we use the shift of the frequency $\omega_3$ of the molecular vibration mode $\nu_3(a_g)$. The specific $\nu_3(a_g)$ mode, which is a symmetric stretching mode of the central double-bonded carbon atoms of the ET molecule, has been found to be very sensitive to the difference between metallic and insulating states due to the large electron-molecular vibration coupling [14]. The peak of the $\nu_3(a_g)$ mode shifts to lower frequency with a sharper shape in the insulating state at low temperature, while it shows the opposite behavior in the metallic state. Figure 1 shows the two-dimensional contour map of the reflectivity peak frequency $\omega_3$ of the $\nu_3(a_g)$ mode in $\kappa$-[($h$8-ET)$_{1-x}$($d$8-ET)$_{x}$]$_{2}$Cu[N(CN)$_{2}$]Br for (a) x = 0.5 and (b) x = 0.8. The polarized reflectance spectra (E $\parallel$ a-axis and E $\parallel$ c-axis) in a micro-region of ∼ 10 µm in diameter are taken with steps of 15 and 10 µm for the x = 0.5 and 0.8 samples, respectively. Typical reflectivity spectra of $\nu_3(a_g)$ in E $\parallel$ c-axis are shown in Fig. 2(c); they are taken at the O point in the dark color region and the B' point in the bright color region of the x = 0.5 sample. The bright region indicates a higher frequency of $\omega_3$, which demonstrates the metallic feature [19]. It is noted that the different absolute values of $\omega_3$ observed in the metallic (or insulating) regions of the x = 0.5 and 0.8 samples are caused by the polarization dependence of the $\nu_3(a_g)$ mode [14]. Some domains with lower and higher $\omega_3$ can be found in the major metallic and insulating regions in x = 0.5 and 0.8, respectively. In x = 0.5, the bright region is dominant almost all over the surface. In contrast, the dark region is dominant in x = 0.8. The structure and position of the domains are found to be stable over time, which can be confirmed by the mapping time (∼ 6 - 8 hours per map) in the SMIS measurement. The domains do not seem to be located at particular positions such as sample edges, steps, or scratches of the surface. The shape is almost circular, and no specific orientation with respect to the crystal axes is observed. In addition, the x = 0.75 sample, which is not shown here, has an insulating fraction of about 5/6. This insulating fraction reflects the bulk properties obtained by the magnetization measurements [16]. The superconducting volume fraction is almost the same as the sample volume from x = 0 to ∼ 0.5 under slow cooling conditions. But the fraction starts to decrease with increasing x from ∼ 0.5 to higher values. The superconducting volume fraction around x = 0.75 - 0.8 is expected to be in the range of a few tens of vol% in slow cooling and a few vol% in fast cooling. Considering the variation of the fraction with the cooling condition and the difference of the evaluation methods, the consistency between the bulk magnetization and the present measurements of the metal-insulator fraction is reasonably good. We have not detected smaller domains in the present SMIS measurements.
Inside the rectangular region framed by dashed lines in Fig. 1(b), finer mapping (3 µm step) was performed, but the spectra were the same as each other. This does not exclude the possibility of nano-scale inhomogeneity inside each scanning spot, because the obtained spectrum may be an average over nano-scale inhomogeneity within the measured spot. But the close agreement between the magnetization and the present results suggests that the phase separation occurs on a macroscopic scale. [Fig. 3 caption: maps of the $\nu_3(a_g)$ mode in E $\parallel$ c-axis for the x = 0.5 sample; the imaging region is the same as that in Fig. 1(a); picture elements are accidentally missing in the lower part of the map at 46 K [20].] Possible chemical inhomogeneity as the origin of the domain structure, such as segregation of deuterated ET, can be excluded by checking the molecular vibration mode of the terminal ethylenes of ET at each scanning point. The vibration modes of the ethylene groups and the deuterated ones appear at different frequencies, around 1250 cm$^{-1}$ and 1050 cm$^{-1}$, respectively. As can be seen in Fig. 2(c), almost the same structure and intensity of the ethylene mode are observed at both the O and B' points, which demonstrates the same substitution ratio of the deuterated ET molecule in the insulating and metallic regions. Therefore, the present finding of the domain structure strongly indicates that the electronic phase separation appears on a macroscopic scale due to the strong correlation effect near the Mott metal-insulator transition. In order to know the correlation between the formation of the phase separation and the electronic phase diagram, the temperature variation of the phase separation was measured. Figure 3 shows the two-dimensional contour maps of the reflectivity peak frequency $\omega_3$ of the $\nu_3(a_g)$ mode in E $\parallel$ c-axis at 18, 32 and 46 K in the x = 0.5 sample [20]. The scanning region is almost the same as that of the map at 4 K in Fig. 2(a). The measurements were performed from low (4 K) to high (46 K) temperature in sequence. The insulating domain does not change its position and size at the higher temperatures of 18 and 32 K, but the domain appears to vanish at 46 K. This temperature corresponds to the critical end point $T_{cr} \simeq 40$ K of the Mott first order metal-insulator transition [10,11,14]. From $T_{cr}$, the $T^*$ line and the bad metal - insulator line $T_{ins}$ extend towards the weak and strong correlation sides of the phase diagram, respectively. At temperatures above $T_{cr}$, $T^*$, and $T_{ins}$, the half-filled bad metallic state exists over a wide range of the correlation strength, which can be tuned by pressure and substitution of the anion. The critical point in the present system has been considered to be located around x = 0.5, and the phase separation may start to appear at x values larger than 0.5 [16]. On the weak correlation side from $T_{cr}$, which corresponds to the present system with x ≤ ∼ 0.5, the bad metal changes to a correlated good metal through $T^*$ and then becomes superconducting [11,14]. On the strong correlation side, the bad metal develops into a Mott insulator through $T_{ins}$ and then becomes an antiferromagnetic Mott insulator at $T_N$ [11,14]. The disappearance of the domain structure at 46 K can be explained by the absence of multiple electronic states competing with each other above $T_{cr}$, $T^*$, and $T_{ins}$. Below $T_{cr}$, on the other hand, the phase separation occurs near the boundary of the first-order transition between the Mott insulator and the correlated good metal.
Phase separation in general requires multiple potential minima of the free energy, and a domain grows from a nucleation point, which should correspond to something specific in the spatial variation of the free energy. A possible origin of the spatial variation of the free energy is the glassy conformational order-disorder of the terminal ethylene groups of the ET molecules [6,21]. The ethylene groups are known to have a conformational disorder which is frozen in by cooling through a temperature $T_{glass} \simeq 80$ K. The degree of disorder depends on the cooling rate; faster cooling introduces a larger number of disordered conformations. Such disorder has been considered to modulate the electronic states locally [22,23]. The slight spatial modulation of the potential energy may provide nucleation points for the domain growth in the phase separation. In order to clarify the mechanism of the phase separation and the process of the domain formation, a spatial imaging technique with molecular resolution must be developed. In conclusion, experimental evidence of the electronic phase separation is obtained by using the real-space imaging technique on the single crystal surface of the organic Mott system $\kappa$-[($h$8-ET)$_{1-x}$($d$8-ET)$_{x}$]$_{2}$Cu[N(CN)$_{2}$]Br. SMIS measurements using SR enable us to show the macroscopic size of the domain structure of the insulating and metallic regions. The observation of the micrometer-scale phase separation is different from the recent findings of nano-scale electronic inhomogeneity in strongly correlated inorganic systems. The origin of the phase separation may be the combination of the strong electronic correlation near the Mott transition and the characteristic structural disorder inside the ET molecules.
2017-09-07T16:04:16.244Z
2004-02-05T00:00:00.000
{ "year": 2004, "sha1": "07f39204a7ca3bc2451e1761c5a344efac3f115e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0402147", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c4c41ad229676d840d9bdac98e16b9f4c1096430", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science", "Medicine" ] }
55076231
pes2o/s2orc
v3-fos-license
On Supertwistors, the Penrose-Ward Transform and N=4 super Yang-Mills Theory It was recently shown by Witten that B-type open topological string theory with the supertwistor space CP^{3|4} as a target space is equivalent to holomorphic Chern-Simons (hCS) theory on the same space. This hCS theory in turn is equivalent to self-dual N=4 super Yang-Mills (SYM) theory in four dimensions. We review the supertwistor description of self-dual and anti-self-dual N-extended SYM theory as the integrability of super Yang-Mills fields on complex (2|N)-dimensional superplanes and demonstrate the equivalence of this description to Witten's formulation. The equivalence of the field equations of hCS theory on an open subset of CP^{3|N} to the field equations of self-dual N-extended SYM theory in four dimensions is made explicit. Furthermore, we extend the picture to the full N=4 SYM theory and, by using the known supertwistor description of this case, we show that the corresponding constraint equations are (gauge) equivalent to the field equations of hCS theory on a quadric in CP^{3|3} × CP^{3|3}. Introduction Let Z be a complex three-dimensional Calabi-Yau (CY) manifold, E a rank n complex vector bundle over Z and A a connection one-form on E. Consider the action [1] $S_{hCS} = \int_Z \Omega_0 \wedge \mathrm{tr}\left( A^{0,1} \wedge \bar{\partial} A^{0,1} + \tfrac{2}{3}\, A^{0,1} \wedge A^{0,1} \wedge A^{0,1} \right)$, (1.1) where $\Omega_0$ is the nowhere vanishing holomorphic (3,0)-form on Z and $A^{0,1}$ is the (0,1)-component of the connection one-form A. Witten has obtained (1.1) as the full target space action of the open topological B-model on a complex three-dimensional target space, on which the CY restriction arises from N =2 supersymmetry of the corresponding topological sigma model and an anomaly cancellation condition. This holomorphic Chern-Simons (hCS) theory (1.1) describes inequivalent complex structures on the bundle E → Z. In a beautiful recent paper [2], Witten observed that the above-mentioned severe CY restriction can be relaxed by considering a topological B-model whose target space is a Calabi-Yau supermanifold 1 . Here, the fermionic dimensions will also make a contribution to the first Chern class, and this yields more freedom in the choice of the bosonic dimensions to have an overall vanishing first Chern class. In particular, an extension $S_{hCS} = \int_Y \Omega \wedge \mathrm{tr}\left( \hat{A}^{0,1} \wedge \bar{\partial} \hat{A}^{0,1} + \tfrac{2}{3}\, \hat{A}^{0,1} \wedge \hat{A}^{0,1} \wedge \hat{A}^{0,1} \right)$ (1.2) of the action (1.1) to the supertwistor space P 3|4 was considered. Here Y is a subspace of P 3|4 parametrized by three complex bosonic coordinates together with their complex conjugates and four (holomorphic) fermionic coordinates, $\Omega$ is a holomorphic measure for the bosonic and fermionic coordinates, and $\hat{A}^{0,1}$ is the (0,1)-component of a connection one-form on a rank n complex vector bundle E over P 3|4 depending on both the bosonic and fermionic coordinates. It was shown [2] that there is a bijection between the moduli spaces of holomorphic Chern-Simons theory (1.2) on the supermanifold P 3|4 \ P 1|4 and of self-dual N =4 super Yang-Mills (SYM) theory on the space Ê 4 with a metric of signature (+ + + +) or (− − + +), depending on the reality conditions imposed on the supertwistor space (for related works see [3]-[10]). It was also demonstrated that the above twistor description allows one to recover Yang-Mills scattering amplitudes, in particular maximally helicity violating (MHV) ones 2 , and to clarify the holomorphicity properties 3 of these amplitudes 4 and identities appearing in this context [12]-[17].
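For orientation, it may help to record the field equations that follow from varying the hCS action (1.1) with respect to $A^{0,1}$; this is a standard variational computation added here for clarity rather than a formula quoted from the text: $\bar{\partial} A^{0,1} + A^{0,1} \wedge A^{0,1} = 0$. This condition states that $\bar{\partial} + A^{0,1}$ defines a holomorphic structure on the bundle E → Z, which is why solutions describe the inequivalent complex structures on E mentioned above.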
Note that Witten described the correspondence between hCS theory on P 3|4 and anti-self-dual N =4 SYM theory by analyzing the sheaf cohomology interpretation of the linearized field equations on the supertwistor space. The main purpose of this paper is to give a more detailed and explicit description of this correspondence for 0 ≤ N ≤ 4 beyond the linearized level. 5 We will also discuss the supertwistor description of the full N =4 SYM theory along the lines proposed in [2]. In fact, we shall consider several special cases of the following general situation. Suppose we are given complex (super)manifolds X, Y, Z and a double fibration $Z \xleftarrow{\;\pi_2\;} Y \xrightarrow{\;\pi_1\;} X$ (1.3) with surjective holomorphic projections $\pi_1$ and $\pi_2$. Then we have a correspondence between Z and X, i.e. between points in one space and subspaces of the other one: points z in Z ↔ subspaces $\pi_1(\pi_2^{-1}(z))$ in X, subspaces $\pi_2(\pi_1^{-1}(x))$ in Z ↔ points x in X. (1.4) Using the correspondence (1.4), one can transfer data given on Z to data on X (and vice versa). One may take some analytic objects h on Z (Dolbeault cohomology classes, holomorphic vector bundles, etc.) and transform them to objects f on X which will be constrained by some differential equations, since the pull-back of h to Y has to be constant along the fibres of $\pi_2$. The map PW : h → f is called the Penrose-Ward transform. Of course, one can also consider the inverse map PW$^{-1}$ : f → h. If there is an anti-linear involution τ (real structure) on X, then the set of fixed points of τ forms a real subspace $X^\tau$ of X. In the real setup, the double fibration often simplifies to the nonholomorphic fibration π : Z → $X^\tau$. (1.5) This happens when $\pi_1^{-1}(X^\tau) \cong Z$ and therefore $\pi_2$ becomes a bijection. The correspondence (1.4) is preserved in this case. In particular, if field theories are given on the spaces Z and X, then a correspondence of the type (1.4) between both spaces can be lifted to a correspondence between solutions to the field equations on Z and solutions to those on X. In general, this correspondence will not be one-to-one, since fields are usually defined only up to some (gauge) equivalence. However, one can often establish a one-to-one correspondence between elements of the moduli spaces of theories on Z and those on X. This will be specialized later in concrete examples. Our considerations in this paper are based on the results of many authors (see e.g. [19]-[43] and references therein). More details on twistor theory and the Penrose-Ward transform can be found in the books [44]-[47]. Special cases of the correspondence (1.3), (1.4) can be established between the twistor space Z = $\mathbb{C}P^3$ and the Graßmann manifold X = $G_{2,4}(\mathbb{C})$ [44], as well as the space Z = P 3 = $\mathbb{C}P^3 \setminus \mathbb{C}P^1$ (which we also call twistor space) and X = $\mathbb{C}^4 \subset G_{2,4}(\mathbb{C})$. In the following, we will focus on the geometry of the latter correspondence. 2 See [11] for an earlier discussion of this point. 3 The unexpected holomorphicity properties of the so-called maximally helicity violating amplitudes provided the original motivation to study the space P 3|4 := P 3|4 \ P 1|4 . Holomorphicity is an earmark of the topological B-model, and there is a six-dimensional description of self-dual SYM theory via the twistor correspondence. Thus one had to find a space which is CY and a complex (super)twistor space of Ê 4 at the same time: P 3|4 . 4 For a preliminary consideration of gravity amplitudes in this context, see [18]. 5 Still, it should be stressed that the supertwistor space is CY only for N =4.
6 The holomorphic line bundle O(n) over P 1 has the transition function λ n + and the first Chern number n. See appendix B for more details. (2.13) 7 Note that we use the same notation π1 and π2 for projections in completely different diagrams throughout the paper. 8 In the literature, P 3 is often called the dual twistor space, while the space of totally null self-dual 2-planes (α-planes) is called the twistor space. 9 As usual in theoretical physics, we will use the prefix "super" instead of 2-graded, and do not imply supersymmetry by that term. It is obvious that the involution τ −1 has no fixed points but does leave invariant projective lines joining p and τ −1 (p) for any p ∈ P 3 . On the other hand, the involutions τ 1 and τ 0 have fixed points which form a three-dimensional real manifold fibred over S 1 ∼ = ÊP 1 ⊂ P 1 . The space T 3 ⊂ P 3 is called real twistor space. For the real structure τ 1 , this space is described by the coordinates (z 1 ± , e iχz1 ± , e iχ ) with 0 ≤ χ < 2π, and for the real structure τ 0 , the coordinates (z 1 ± , z 2 ± , λ ± ) are real. These two descriptions are equivalent. We shall concentrate on the real structures τ ±1 since all formulae for these two cases can be written in a unified form using ε = ±1. For instance, an extension of the involution τ ε to any function f (x, λ + ) is defined as where (x µ ) are the real coordinates with µ = 1, ..., 4. On the other hand, holomorphic sections of the bundle (2.3) which are invariant under the involution τ 0 are parametrized by real coordinates Metric on the moduli space of real curves. On the space Ê 4 of real holomorphic curves P 1 x ֒→P 3 , one can introduce the metric with g = diag(+1, +1, +1, +1) for the involution τ −1 on P 3 and g = diag(−1, −1, +1, +1) for τ 1 (and τ 0 ) and g = (g µν ). Thus, the moduli space of real rational curves of degree one in P 3 is the Euclidean space 10 Ê 4,0 or the Kleinian space Ê 2,2 . Note that, in the Euclidean case, the twistor space P 3 is the space with the coordinates (x µ , λ ± ) and one can define a trivial nonholomorphic fibration over the space Ê 4 with real coordinates x = (x µ ). Therefore, on the patches U + and U − covering the space P 3 , one can use the coordinates (x, λ + ) and (x, λ − ), respectively. In the Euclidean case, 10 In our notation, Ê p,q = (Ê p+q , g) is the space Ê p+q with the metric g = diag(−1, ..., −1 q , +1, ..., +1 p ). the double fibration (2.8) simplifies to the fibration (2.20) since π −1 1 (Ê 4 ) ∼ = P 3 ⊂ F 5 and therefore the restriction of the projection π 2 to π −1 1 (Ê 4 ) is a bijection. The twistor correspondence for the Kleinian case is more complicated. In particular, we have instead of the diffeomorphism (2.19) and one should consider the spaceP 3 := P 3 \P 0 with P 0 = P 3 (2.20). For more details on the Kleinian case ε = +1, see appendix C. To smoothen the discussion in the following, we ignore this subtlety and use always (2.20) implying the restriction to the spaceP 3 in all necessary cases. Furthermore, we will call matrix-valued functions τ ε -regular, if they are regular 11 for all values of λ ∈ D in the case ε = −1 and regular for λ ∈ D with |λ| = 1 in the case ε = +1 (and also for the real structure τ 0 ), where D ⊆ P 1 is the domain under consideration. Vector fields. On the complex manifold P 3 , we have the natural basis (∂/∂z α ± , ∂/∂z 3 ± ) in the space of antiholomorphic vector fields with where with the convention 13 ε 12 := −ε 21 = −1. 
Thus the vector fields form a basis of vector fields of type (0,1) over U + ⊂ P 3 in coordinates (x, λ + ), where ∂ αα := ∂/∂x αα . The explicit form of the basis of vector fields of type (0,1) on the open set U − ⊂ P 3 follows from the transformation rules where one introduces additionally Note that the bases (2.25) and (2.26) of vector fields on P 3 are holonomic, i.e. they commute pairwise. Furthermore, in the case ε = +1, the identification (2.23) only holds onP 3 . The twistor description of self-dual Yang-Mills fields Holomorphic bundles over the twistor space. Consider a rank n holomorphic vector bundle E over the twistor space P 3 . This bundle is defined by a holomorphic transition function f +− on the intersection U + ∩ U − of the two patches covering P 3 = U + ∪ U − , i.e. the function f +− takes values in the group of nonsingular n × n matrices annihilated by the vector fields (2.25) of type (0,1): In the twistor approach to self-duality, it is assumed that E is topologically trivial and its restriction to any projective line P 1 x ֒→P 3 is holomorphically trivial. These two conditions imply that there are regular matrix-valued functions, ψ + on U + and ψ − on U − , such that and i.e. ψ + and ψ − are smooth functions of x ∈ Ê 4 and holomorphic functions of λ + and λ − , respectively. Note that A αα dx αα will be an antihermitean n × n matrix if ψ ± satisfies the following condition 14 : The antihermitean gauge potential components can be calculated from (3.5) as Combining (3.4) and (3.5), we introduce matrix-valued functions 15 where λα + are given in (2.24). Analogously, one can introduce the component Aλ + , but it will vanish as 16 Aλ (3.9) 14 Here † means hermitean conjugation. 15 Here V A 0,1 denotes the interior product of a vector field V and a (0, 1)-form A 0,1 . 16 Note also that A + α and Aλ + are components of A 0,1 . Linear system and SDYM equations. Let us rewrite (3.8) together with (3.3) in the form with similar equations for ψ − . The compatibility conditions of this linear system are To be satisfied for all (λα + ), this equation has to vanish to all orders in λ + separately, from which we obtain the self-dual Yang-Mills (SDYM) equations for a gauge potential (A αα ). It is convenient to introduce the notation F αα,ββ = ε αβ fαβ + εαβf αβ , (3.14) in which the SDYM equations are rewritten as Gauge equivalent linear systems. Note that in (3.2), we have chosen special trivializations 17 ψ ± of E over U ± such that (3.3) and therefore (3.9) were satisfied. However, one can consider more general trivializations {ψ ± } of E such that and ∂λ ±ψ ± = 0, i.e.ψ ± =ψ ± (x, λ ± ,λ ± ) are regular matrix-valued functions smooth in all coordinates on U ± . From (3.16) it follows that ϕ := ψ +ψ −1 is a globally defined regular matrix-valued function on P 3 and therefore the above two trivializations are related by the gauge transformation In general trivializations {ψ ± }, we havê where the last equality in (3.20) follows from (3.11) and (3.9). Equations (3.19) and (3.20) can be rewritten in the form of a linear system (∂λ + +Âλ + )ψ + = 0 , which is gauge equivalent to the linear system (3.10), (3.11). The compatibility conditions of this linear system are in fact the field equations of hCS theory on the space P 3 , and, e.g. on U + , they take the form Thus we have a relation between hCS on P 3 and SDYM on the moduli space Ê 4 of real holomorphic sections of the fibration P 3 → P 1 . Supertwistor geometry Coordinates. 
A super extension of the twistor space P 3 is the supermanifold P 3|N with homogeneous coordinates (ω α , λα, η i ) subject to the identification (ω α , λα, η i ) ∼ (t ω α , t λα, t η i ) for any nonzero complex scalar t. Here (ω α , λα) are homogeneous coordinates on P 3 and η i with i = 1, ..., N are Graßmann variables. Interestingly, this supertwistor space is a Calabi-Yau supermanifold in the case N =4 and one may consider B-type open topological strings living in this space [2]. Vector fields. Note that one can project from F 5|4N onto P 3|N in two steps: first from F 5|4N onto F 5|2N R , which is given in coordinates by with the x αα R from (4.9), and then from F 5|2N R onto P 3|N , which is given in coordinates by The tangent spaces to the (0|2N )-dimensional leaves of the fibration (4.16) are spanned by the vector fields The coordinates x αα R , λ ± α and ηα i belong to the kernel of these vector fields which are also tangent to the fibres of the projection 4|4N → 4|2N onto the anti-chiral superspace. The tangent spaces to the (2|N )-dimensional leaves of the projection (4.17) are spanned by the vector fields 22V where Twistor correspondence for a real superspace. Let us now discuss the action of the antilinear involutions (2.9)-(2.11) on fermionic coordinates. In the Kleinian case, we can simply define and τ 0 (θ αi ) =θ αi and τ 0 (ηα i ) =ηα i , (4.22) which matches the definition for commuting spinors. In the Euclidean case ε = −1, we can only fix a real structure on the fermionic coordinates if the number of supersymmetries N is even (see e.g. [48,49]). In these cases, one groups together the fermionic coordinates in pairs of two and defines matrices The action of τ −1 is then given by for N =2 and for N =4. The last equation can also be written in components as where there is a summation over β and j. The same definition applies to ηα i : For the definition of λα ± , see section 2. We concentrate now on the real cases defined by the involutions τ 1 and τ −1 . Note in advance that complexified self-dual N -extended SYM theory can be described by the diagram (4.6) with . After imposing the reality condition (2.16) and (4.29) or (4.31), the coordinates (x αα R , ηα i ) belong to the real anti-chiral . We keep the coordinates λ ± α complex 25 and therefore the supertwistor space P 3|N has complex dimension (3|N ). For coordinates x αα R satisfying (2.16), the vector fields (4.19) and can be identified with bosonic vector fields of type (0, 1) on P 3|N similar to the vector fields (2.25) and (2.26) on P 3|0 . In the Euclidean case, this is due to the fact that as a real supermanifold, P 3|N is diffeomorphic 26 where π 1 is one of the projections in the diagram (4.6). In other words, the map π 2 , restricted to , is one-to-one (cf. (2.19) for the purely bosonic case). Moreover, (4.20) become odd 23 See appendix B. 24 Note that in Minkowski signature, chiral and anti-chiral superspaces are always complex. 25 In the Euclidean case, there are no fixed points of τ−1 on the Riemann sphere P 1 ∋ (λ ± α ) and therefore λ ± α must be complex. The case of signature (2,2) is discussed in the appendices C and D. 26 In the case ε = +1 (and also for the real structure τ0), there is a diffeomorphism of an open subsetP 3|N of P 3|N . See appendix C for more details. vector fields of type (0, 1) on P 3|N annihilating all complex coordinates on this space. For example, for τ 1 -real vector fields with |λ ± | = 1, we havē where η + i = η1 i + λ +η1 i and γ + is given in (2.24). 
Similar formulae can be written down for τ −1 -real and τ 0 -real cases. Thus, in the real setup, the double fibration (4.6) simplifies to the nonholomorphic fibration π : where (3|N ) stands for complex and (4|2N ) for real dimensions. Fibres over a point ( in the fibration (4.37) are real holomorphic sections P 1 The supertwistor description of self-dual N -extended super Yang-Mills theory Super self-duality for extended supersymmetry. Self-dual Yang-Mills (SDYM) fields on Ê 4,0 and Ê 2,2 are solutions to the self-duality equations which is equivalently written in spinor notation as Solutions to these equations form a subset of the solution space of Yang-Mills theory. Thus, a possible supersymmetric extension of the self-duality equations can be obtained by taking the full set of SYM field equations and imposing certain constraints on them. These constraints have to include (5.1) and keep the resulting set of equations invariant under supersymmetry transformations. This works for SYM theories with N ≤ 3, and the field content of the full N -extended SYM theory splits into a self-dual supermultiplet and an anti-self-dual supermultiplet. For N =4, the situation is more complicated, as the SYM multiplet (f αβ , χ αi , φ ij ,χα i , fαβ), where the fields have the helicities (+1, + 1 Holomorphic bundles and gauge potentials. The twistor description of complexified selfdual N -extended SYM theory is known and based on the diagram (4.12) (see e.g. [29,30,32]) or (implicitly) on the diagram (4.6) (see e.g. [37,40]). Here, we consider self-dual N -extended SYM theory in the real setting based on the fibration (4.37). Namely, let us consider a holomorphic bundle E over the supertwistor space P 3|N =Û + ∪Û − without 2 -grading on the fibres and with the coordinates (2.1), (4.1). As usual, the bundle E → P 3|N is defined by a holomorphic transition function annihilated by the vector fields (4.19), (4.20), (4.33) of type (0, 1) on P 3|N , Further, it is assumed that the restriction of E to any projective line P 1 x R ,η ֒→P 3|N is holomorphically trivial and therefore there exist regular matrix-valued functions ψ and This trivialization is similar to (3.2), (3.3) in the purely bosonic case, and using arguments identical to those from section 3, one can introduce matrix-valued components of a gauge potential, where (x R , η) = (x αα R , ηα i ), and a linear system of differential equations equivalent to the existence of holomorphic sections of the bundle E. Super self-duality. The compatibility conditions of the linear system (5.11)-(5.13) read where we have introduced covariant derivatives The compatibility conditions (5.14) suggest the introduction of the following self-dual super gauge field strengths: where f ij is antisymmetric and f αβ is symmetric. Let us focus on the case N =4 and discuss the cases N <4 later on. The set of physical fields for N =4 SYM theory consists of the self-dual and the anti-self-dual field strengths of a gauge potential A αα , four spinors χ i α together with four spinorsχα i ∼ ε ijklχ jkl α of opposite chirality and six real (or three complex) scalars φ ij = φ [ij] . For N =4 super SDYM, the multiplet is joined by an additional spin-one field Gαβ ∼ ε ijkl G ijkl αβ , as discussed before. Now the above super gauge field strengths contain in their expansion exactly these fields. The lowest component of f αβ , f i α and f ij will be the SDYM field strength, the spinor field χ i α and the scalars φ ij , respectively. 
By using Bianchi identities for the self-dual super gauge field strengths, one obtains [40] successively the superfield expansions and the field equations for the physical field content 27 , where := ∇ αα ∇ αα and the antisymmetrizations [i...j] are defined to have weight 28 one. Note that by construction, all fields depend on the coordinates x αα R . As we will see soon, (5.17) is in some sense an N -independent formulation [40] of the field equations of super SDYM theory in which the cases N <4 are governed by the first N +1 equations of (5.17), where fαβ = 0 is counted as one equation and so on. In the case N =4, one can introduce "dualized" fields for which the equations of motion take the form: After rescaling some of the fields as the equations (5.19) are the field equations of the Lagrangian for N =4 self-dual SYM given in [2]. Gauge equivalent trivializations. We have described the twistor correspondence between holomorphic bundles E over the supertwistor space P 3|N trivial on projective lines in P 3|N and solutions to the field equations of self-dual N -extended SYM theory on the space Ê 4 with metric (g µν ) = diag(−ε, −ε, +1, +1). The derivation of the N =4 super self-duality equations (5.17) is 27 The fields are scaled to match the discussion following ( based on trivializations of E overÛ ± such that eqs. (5.7) and therefore (5.9) is satisfied. However, there are other convenient trivializations of E overÛ ± such that the compatibility conditions of the corresponding linear system are described by holomorphic Chern-Simons theory [1,2] on the supertwistor space. Namely, since restrictions of the bundle E to (2|N )-dimensional leaves of the fibration (4.3) are trivial 29 , there exist τ ε -regular 30 matrix-valued functionsψ ± onŨ ± such that . These trivializations are analogous to the trivializations (3.16)- (3.22) in the N =0 case. Note that similarly to (3.28) in the purely bosonic case, one can also choose the trivializationsψ ± (z α ± , λ ± ,λ ± , η ± i ) but we will not consider them here. Super hCS theory. The compatibility conditions of the linear differential equations (5.28)-(5.30) are the field equations of hCS theory on the supertwistor space 32 P 3|N . OnÛ + they read Note that fibres 2|N λ over λ ∈ P 1 in the fibration P 3|N → P 1 are exactly the βR-superplanes introduces in section 4. Super self-duality is equivalent in the discussed superfield formulation to flatness of super Yang-Mills fields on these βR-superplanes. 30 See the definition on p. 6. 31 The function ϕ is τε-regular, and in particular, it can be singular on P0,N := P 3|N | |λ ± |=1 in the Kleinian case ε = +1, see appendix D. 32 More accurately, in the case ε = +1 (and also for the real structure τ0) 0,1 (and thus hCS theory) is defined only on the subsetP 3|N of P 3|N for which |λ±| = 1. See appendix D for more details. and similarly onÛ − . Here + α andÂλ + are functions of (x αα R , λ + ,λ + , η + i ). These equations are equivalent to the equations of self-dual N -extended SYM theory on Ê 4 which form a subset of equations (5.17). As already mentioned, the most interesting case is N =4 since the supertwistor space P 3|4 is a CY supermanifold and one can derive equations (5.31), (5.32) from a manifestly Lorentz invariant action [2,39]. For this reason, we concentrate on the equivalence with self-dual SYM for the case N =4; for other values of N , the derivation goes along the same lines. 
Recall that α andÂλ are sections of the bundles 33 O(1) ⊗ 2 andŌ(−2) over P 1 since the vector fields in (5.25) take values in O(1) and the holomorphic cotangent bundle of P 1 is O(−2). Together with the fact that η ± i 's take values in the bundle ΠO(1), this fixes the dependence of ± α andÂλ ± on λα ± andλα ± . In the case N =4, this dependence takes the form and similar for − α ,Âλ − . Here, again, A αα , χ i α , φ ij ,χα i is the ordinary field content of N =4 super Yang-Mills theory and the field Gαβ is the auxiliary field arising in the N =4 self-dual case, as discussed above. It follows from (5.32)-(5.34) that 34 Consider now the cases N <4. Since the η + i 's are Graßmann variables and thus nilpotent, the expansion (5.33) and (5.34) will only have terms up to order N in the η + i 's. This exactly reduces the expansion to the appropriate field content for N -extended super SDYM theory: One should note that the antisymmetrization [·] leads to a different number of fields depending on the range of i. For example, in the case N =2, there is only one real scalar φ 12 , while for N =4 there exist six real scalars. Inserting such a truncated expansion for N <4 into the field equations (5.31) and (5.32), we obtain the first N +1 equations of (5.17), which is the appropriate set of equations for N <4 super SDYM theory. To sum up, we have described a one-to-one correspondence between gauge equivalence classes of solutions to the N -extended SDYM equations on (Ê 4 , g) with g = diag(−ε, −ε, +1, +1) and 33 The bundleŌ(n) is the complex conjugate to O(n). 34 We use the symmetrization (·) with weight one, e.g. (αβ) =αβ +βα equivalence classes 35 of holomorphic vector bundles E over the supertwistor space P 3|N such that the bundles E are holomorphically trivial on each projective line P 1 x R ,η in P 3|N . In other words, there is a bijection between the moduli spaces of hCS theory on P 3|N and the one of self-dual N -extended SYM theory on (Ê 4 , g). It is assumed that appropriate reality conditions are imposed. The Penrose-Ward transform and its inverse are defined by the formulae (5.33)-(5.35). In fact, these formulae relate solutions of the equations of motion of hCS theory on P 3|N to those of self-dual N -extended SYM theory on (Ê 4 , g). One can also write integral formulae of type (3.25) but we refrain from doing this. Dual supertwistors and N -extended anti-self-duality Coordinates. In section 4, we described the supertwistor space P 3|N as the space of (1|0)dimensional subspaces in the space 4|N . Its dual supermanifold can be defined as a space of (3|N )-dimensional planes in 4|N parametrized by homogeneous coordinates (µ α , σα, θ i ) subject to the identification (µ α , σα, θ i ) ∼ (tµ α , tσα, tθ i ) for any nonzero complex number t. We again have the supermanifold 36 P 3|N * and the space P Note that ζ ± are coordinates on the patches V ± =V ± ∩ P 1 * covering the base 37 P 1 * = P 36 We use the subscript * to denote the dual supertwistor space, its subspaces and its preimages under projections π2. 37 Recall that we use the subscript * in P 1 * for distinguishing this Riemann sphere with the coordinates ζ± from the Riemann sphere P 1 with the coordinates λ±. whose holomorphic sections P 1|N x R ,η ֒→P 3|N * are defined by the equations and P 3|N * ⊃ P 1 * , respectively. From (6.5) and (6.7), we obtain again formulae (4.9), (4.10) and the equations Vector fields. 
Using (6.8) and (6.5), one can introduce a double fibration for the complex nonchiral case and the fibration in the case of (x αα ) and (θ αi , ηα i ) satisfying the reality conditions induced by τ ε as discussed in section 4. The superspace 4|4N and its real subspace R 4|2N L are the same as in section 4 and parametrized by the same coordinates. We have coordinates with obvious projections in the fibrations (6.9) and (6.10). It is not difficult to see that the tangent spaces of the (2|3N )-dimensional leaves of the projection π 2 : F 5|4N * → P 3|N * from (6.9) are spanned by the vector fields 14) (6.15) 20 In the real case, when the coordinates (x αα L , θ αi ) belong to the real chiral superspace R 4|2N L , we have the fibration (6.10), and the vector fields (6.15) and can be identified with bosonic vector fields 38 of type (0,1) on P 3|N * similar to the vector fields (4.19) and (4.33) as discussed in section 4 for the self-dual case. As an odd vector field of type (0,1) on P 3|N * we have∂ These bosonic and fermionic vector fields of type (0,1) on P 3|N * annihilate all complex coordinates (6.1), (6.2) (or, equivalently, (6.5)) on P 3|N * . Anti-self-dual gauge fields. Consider a holomorphic vector bundle E over the space P since f +− is holomorphic. Let us consider trivializations ψ ± overV ± similar to (5.6) and (5.7), i.e. such that From these equations, we obtain matrix-valued components of a super gauge potential one-form, 20) As in the self-dual case, one can rewrite (6.26) in component fields. The full set of equations of motion for N =4 is and the cases N <4 are governed by the first N +1 equations. where (µ α Substituting (6.39) and (6.40) into the field equations (6.37) and (6.38) of hCS theory on the supertwistor space P 3|4 * , one obtains the field equations (6.28) for N =4. The appropriate truncation for N <4 is done exactly as in the self-dual case: from the nilpotency of the θ i + 's it follows that there are less fields in the expansions (6.39) and (6.40), which, in turn, yields the first N +1 equations of (6.28). Again, we have described a one-to-one correspondence between gauge equivalence classes of solutions to the N -extended anti-self-dual Yang-Mills equations on (Ê 4 , g) and equivalence classes of holomorphic vector bundles E over the dual supertwistor space P 3|N * , analogously to the self-dual case. The Penrose-Ward transform here is given by the formulae (6.39)-(6.40). Note that one can introduce a holomorphic projection p : L 5|6 → P 1 × P 1 * We denote by z A (a) with A = 1, 2, 3 bosonic coordinates on the fibres over V a in the bundle (7.9). Additionally, we use odd variables θ i (a) and η (a) i as the fermionic coordinates on these fibres. Moduli of complex submanifolds. Holomorphic sections over V a of the bundle (7.9) are spaces L 2|0 x,θ,η These sections are not independent due to equation (7.8), which is solved by the choice x αα R = x αα − θ αi ηα i and x αα L = x αα + θ αi ηα i . (7.14) 41 The space P 3|3 * is the space of (3|3)-planes in 4|3 . Each such plane is naturally described by a ray, i.e. a (1|0)-dimensional subspaces of 4|3 , orthogonal to the plane. Thus the space P 3|3 * is biholomorphic to P 3|3 , which is the space of rays in 4|3 . The quadric is exactly the appropriate orthogonality condition between elements of both projective spaces. 
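The choice (7.14) quoted above is the familiar shift between chiral, anti-chiral and symmetric coordinates; written out as a display, it reads

\[
x_R^{\alpha\dot\alpha}=x^{\alpha\dot\alpha}-\theta^{\alpha i}\eta_i^{\dot\alpha}\,,\qquad
x_L^{\alpha\dot\alpha}=x^{\alpha\dot\alpha}+\theta^{\alpha i}\eta_i^{\dot\alpha}\,,
\]

so that x^{αα̇} = ½(x_R^{αα̇} + x_L^{αα̇}) and x_L^{αα̇} − x_R^{αα̇} = 2θ^{αi}η_i^{α̇}.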
(7.19) This supermanifold is obviously projected onto 4|12 with coordinates (x αα , θ αi , ηα i ) (7.20) and onto L 5|6 with coordinates which are not all independent but equivalent to (z A (a) , λ (a) , ζ (a) , θ i (a) , η (a) i ). Note that we are considering the complex superspace 4|12 . Appropriate reality conditions will be discussed later on. which are properly glued onŨ a ∩Ũ b = ∅ into global vector fields on F 6|12 . Here D αi and D iα are vector fields given by (4.18) and (6.13). We shall also consider the antiholomorphic part of the exterior derivative d onŨ a . Holomorphic Chern-Simons theory on the quadric Holomorphic vector bundles over L 5|6 . For defining a holomorphic rank n vector bundle E over L 5|6 , one should consider a covering {U a } of L 5|6 and a collection {f ab } of holomorphic n × n matrices (Čech 1-cocycle) on nonempty intersections U a ∩ U b such that We restrict ourselves to topologically trivial bundles E → L 5|6 , i.e. those for which there exists a collection {ψ a } of regular matrix-valued functions (Čech 0-cochain) such that on any nonempty intersection U a ∩ U b . Since f ab is holomorphic on a trivialization of E over U a , we have ∂zA On nonempty intersections U a ∩ U b , we havê and therefore (8.5)-(8.7) define the (0, 1) part of a global gauge potential on the bundle E over the supermanifold L 5|6 . Super hCS theory on L 5|6 . Let us introduce the notation The compatibility condition of this linear system is the equation which is simply the field equation 43 F 0,2 Ua = 0 of hCS theory on the supermanifold L 5|6 . A special gauge. Note that restrictions of a vector bundle E → L 5|6 to the fibres 3|6 λ,ζ of the bundle (7.9) are holomorphically trivial since all these fibres are contractible. Therefore, there exist trivializationsψ a of E over U a such that and ∂zA on an open set V a = U a ∩ P 1 × P 1 * ⊂ L 5|6 with gluing conditions = ∂ρ (a) , will be nonzero. In such a gauge, one will have three nonzero componentsÃλ (a) ,Ãζ (a) andÃρ (a) , which may in principle be used for constructing an action of type (1.2) on the supermanifold L 5|6 . Supertwistors and the full N =4 super Yang-Mills theory In this section we shall consider N =3 SYM theory which is known to be equivalent to the N =4 SYM theory when formulated on Ê 4 . More explicitly, we shall consider the integrability of super Yang-Mills fields on super null lines, which turns out to be equivalent to the equations of motion of N =3 SYM theory [26,30,31,32,33], and its relation with super hCS theory on a (5|6)-dimensional supermanifold. Pulled-back bundle. Let us consider a holomorphic vector bundle E → L 5|6 and the pulledback bundle π * 2 E over the supermanifold F 6|12 with a covering {Ũ a } given by (7.18) and (7.19). Pull-backs 44 of the transition functions {f ab } of E to π * 2 E are constant along the fibres of π 2 , i.e. Holomorphic triviality on subspaces. Let us now consider holomorphic vector bundles E over L 5|6 such that their restriction to any submanifold L 2|0 x,θ,η ∼ = P 1 × P 1 * in L 5|6 is holomorphically trivial, or, equivalently, such that π * 2 E is trivial along the fibres of π 1 . For such bundles, there exist trivializations {ψ a } of π * 2 E overŨ a such that i.e. the regular matrix-valued functions ψ a are holomorphic in the coordinates on F 6|12 . It follows from (9.3) that is a globally defined regular matrix-valued (super)function on F 6|12 which generates gauge transformationsψ a → ψ a = φψ a for a = 1, ..., 4 , where (x, θ, η) = (x αα , θ αi , ηα i ). 
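As a compact reminder of the Čech dictionary used in this section (a standard sketch in the notation above), the transition functions and trivializations satisfy

\[
f_{ab}=\psi_a^{-1}\psi_b\,,\qquad
f_{ab}f_{bc}=f_{ac}\ \ \text{on}\ \ U_a\cap U_b\cap U_c\,,\qquad
\bar\partial f_{ab}=0
\;\Longrightarrow\;
\psi_a\,\bar\partial\psi_a^{-1}=\psi_b\,\bar\partial\psi_b^{-1}\,,
\]

so the locally defined (0,1)-forms ψ_a ∂̄ψ_a⁻¹ agree on overlaps and glue to a global gauge potential Â^{0,1}; for a potential of this locally pure-gauge form, F^{0,2} = ∂̄Â^{0,1} + Â^{0,1} ∧ Â^{0,1} = 0 holds identically.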
Note that the last equalities in (9.8)-(9.10) follow from a generalized Liouville theorem on P 1 × P 1 * which says that A i (a) is a local section of the bundle O(1, 0), A (a) i is a local section of the bundle O(0, 1) and A w (a) is a local section of the bundle O(1, 1) over P 1 × P 1 * . Summarizing, we have (implicitly) described the Penrose-Ward transform which maps solutions of hCS theory on L 5|6 to solutions of N =4 SYM theory on 4 . Note that the existence of a gauge in which Aλ ± = 0 = Aζ ± is equivalent to holomorphic triviality of the bundle x,θ,η ∼ = P 1 × P 1 * ֒→L 5|6 . Note furthermore that the moduli space of such bundles is a subset of the moduli space of all topologically trivial holomorphic bundles E over L 5|6 [45,34]. This means that the solution space of hCS theory on L 5|6 is larger than that of N =4 SYM theory. Looking for an action. We saw that the full set of equations of motion for N =3 SYM theory is encoded in the equation F 0,2 = 0 on the supermanifold L 5|6 which is the quadric in an open subset of P 3|3 × P 3|3 * . One might wonder whether there is some action principle for super hCS theory on this space. For complex three-dimensional supermanifolds, this is the hCS action (1.2). In the case of the (5|6)-dimensional supermanifold L 5|6 , the situation is less clear. Recall that this space is a Calabi-Yau supermanifold and thus it comes with a holomorphic volume form Ω 5|6 . Therefore, a possible ansatz is where we abbreviated 0,1 = 0,1|0,0 and ω 0,2 = ω 0,2|0,0 . For this ansatz to be correct, ω 0,2 must be nowhere vanishing (otherwise the total measure would be degenerate). Furthermore, the partial integration used for deriving the equations of motion demands that ω 0,2 is partially closed, i.e. it has to satisfy the equation 0,1 ∧∂ω 0,2 = 0. It is not clear whether such a (0, 2)-form exists on L 5|6 . Even less clear is the relation of the action (9.17) with string field theory for the target space L 5|6 . Therefore we leave this discussion to forthcoming work 46 . Reality conditions on the quadric In the purely bosonic case, one can introduce real (antihermitean) gauge fields on Ê 4 with a metric g of Euclidean signature (4, 0), Kleinian signature (2,2) or Minkowski signature (3, 1) by choosing an appropriate real structure on 4 . However, as already mentioned in section 4, on the superspace 4|4N there exists a real structure defining a Euclidean superspace only for an even number of supersymmetries. For simplicity, we restrict ourselves here to the Kleinian and Minkowskian cases. Real structure τ 1 . The Kleinian signature (2, 2) is related to anti-linear transformations 47 τ 1 of spinors defined in sections 2 and 4. Recall that 1) 46 Another possibility to obtain the equation (8.15) is to use an action of holomorphic BF type theories [42]. However, the relation of this kind of action with string field theory is also unclear. 47 We will not consider the map τ0 here. and obviously τ 2 1 = 1. Correspondingly for (λ ± , ζ ± ) ∈ P 1 × P 1 * , we have with stable points parametrizing a torus S 1 × S 1 * . For the coordinates (x αα ), we have with a metric ds 2 = det(dx αα ) of signature (2,2). Recall also that and therefore real (Majorana) fermions satisfy (cf. (4.29)) Reality of fields in the Kleinian case. For imposing reality conditions on the functions ψ a (and f ab ) inducing antihermiticity of the fields of N =3 (and N =4) SYM theory via the twistor 48 Our notation is slightly sloppy: We use the same symbol τ1 for maps defined on different spaces. 
correspondence, it is convenient to consider an open neighborhood (and an analytic continuation of the functions to a complex domain) of all these real spaces. In fact, for our purpose it is enough to consider a supermanifold of the form of an open neighborhood of the real slice in ℂ^{4|12} × P¹ × P¹_*, where U± and V± cover the projective spaces P¹ = U₊ ∪ U₋ and P¹_* = V₊ ∪ V₋, parametrized by homogeneous coordinates [λ_α̇] and [µ^α], respectively. Recall that the manifold P¹ × P¹_* is covered by the four patches 𝒱_a defined in (7.11), with coordinates (λ_(a), ζ_(a)) on 𝒱_a. The involution τ₁ interchanges these patches as 𝒱₁ ↔ 𝒱₄ and 𝒱₂ ↔ 𝒱₃ (10.12). Working on these patches, we impose a reality condition on the complex regular matrix-valued functions ψ_a = ψ_a(x^{αα̇}, θ^{αi}, η^{α̇}_i, λ_(a), ζ_(a)) by taking them to depend on τ₁-real coordinates x^{αα̇}, θ^{αi}, η^{α̇}_i and to satisfy the equations (10.13), which lead to relations of the form (10.14) between transition functions on τ₁-conjugate patches (e.g. expressing f₁₃ in terms of f₄₂(..., λ_(4), ζ_(4))). Now, using the definitions (9.8)-(9.10), one can show by direct calculation that the conditions (10.13) yield antihermitean superconnections and the real N = 3, 4 supermultiplet of ordinary fields.

Recall that the involution τ_M interchanges α-superplanes and β-superplanes and therefore exchanges opposite helicity states. It might be identified with a ℤ₂-symmetry discussed recently in the context of mirror symmetry [9] and parity invariance [15].

A τ_M-real twistor diagram. Recall that [λ_α̇] and [µ^α] are homogeneous coordinates on two Riemann spheres and that the involution τ_M maps these spheres one into another. Moreover, the fixed points of the map τ_M : P¹ × P¹_* → P¹ × P¹_* form a Riemann sphere (the diagonal, after identifying P¹_* with P̄¹), where P̄¹ (= P¹_*) denotes the Riemann sphere P¹ with the opposite complex structure. Therefore, a real slice in the space F^{6|12} = ℂ^{4|12} × P¹ × P¹_* introduced in (7.15), characterized as the fixed point set of the involution τ_M, can be defined. The fixed point set of the involution (10.15) is the diagonal in the space P^{3|3} × P̄^{3|3}, which can be identified with the complex supertwistor space P^{3|3} of real dimension (6|6). This involution also picks out a real quadric in L^{5|6}. The dimensions of all spaces in this diagram are real. For imposing the reality conditions on the superconnection components, one should proceed analogously to the case of Kleinian signature. We will not discuss this here.
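Concretely, in the inhomogeneous coordinates on the two Riemann spheres, the involution τ₁ acts as

\[
\tau_1:\;(\lambda_\pm,\zeta_\pm)\;\longmapsto\;\Big(\frac{1}{\bar\lambda_\pm},\,\frac{1}{\bar\zeta_\pm}\Big)\,,
\]

whose fixed points satisfy |λ±| = |ζ±| = 1 and hence form the torus S¹ × S¹_* mentioned above.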
All this was translated to the anti-self-dual case by using the dual supertwistor space P 3|N * . We also considered an example of the double fibration where X was chosen to be the superspace 4|12 or its real version Ê 4|12 with a metric on the body of signature (4, 0), (2, 2) or (3, 1). As supermanifold Z, we used the quadric L 5|6 in P 3|3 × P 3|3 * or a real subspace of it with the real structure depending on the signature of the metric on Ê 4 . The correspondence space Y = F 6|12 = 4|12 × P 1 × P 1 * was embedded as a submanifold 51 in Z × X by using the projections (π 1 , π 2 ). We showed that, by using a gauge transformation on the correspondence space, one can bring Witten's form of the hCS field equations to the well-known constraint equations on the supercurvature field strength corresponding to full N =3 SYM theory on the superspace 4|12 or one of its real subspaces. This theory is known to be equivalent to N = 4 SYM theory, when formulated on Ê 4 . There are a lot of open problems which deserve further study. On the field theory side, it is not clear yet how to construct an action for hCS theory on L 5|6 which will correspond to the action of N =4 SYM theory. Generalizations of the twistor correspondence and the Penrose-Ward transform to the string field theory (SFT) level may also be of interest. This could either be done in the setting proposed by [8], although it seems that due to the off-shell character of SFT one should employ the more general setting [10]; or one could concentrate on (an appropriate extension of) SFT for N=2 string theory. This theory is known to describe SDYM at tree level [50]; its SFT [51] is based on a description of N=2 string theory as a topological N=4 theory [52]. This description contains twistors from the outset: The coordinate λ ∈ P 1 , the linear system, integrability and the 50 Here the body is (Ê 4 , g). See also appendix B. 51 Recall that Y is fibred over X with fibres π −1 1 (x) diffeomorphic to submanifolds π2(π −1 1 (x)) of Z and Y is also fibred over Z with fibres π −1 2 (z) which are diffeomorphic to submanifolds π1(π −1 2 (z)) of X, i.e. Y ֒→Z × X. solution of the equations of motion by twistor methods were incorporated into the N=2 open SFT in [53,54]. However, this theory reproduces only classical bosonic SDYM theory, its symmetries and integrability properties [53,55,56]. Following various proposals, e.g. [35,57,58,4] (see also references therein), one can extend it to be spacetime supersymmetric. This is believed to lead to an explicit relation between the supersymmetric extension of N=4 topological string theory and the N=2 topological string (B-type) [5], but the picture is far from being complete. Acknowledgements We are grateful to O. Lechtenfeld, S. Uhlmann and M. Wolf for many useful comments. This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG). A. Dictionary: homogeneous ↔ inhomogeneous coordinates The sphere S 2 is diffeomorphic to the complex projective space P 1 . This space can be parametrized globally by complex homogeneous coordinates λ1 and λ2 which are not simultaneously zero (in projective spaces, the origin is excluded). So, the Riemann sphere P 1 can be covered by two coordinate patches On the intersection U + ∩ U − , we get λ + = 1/λ − . Now let us consider the expansion (5.33) and (5.34) of the super gauge potentials of hCS theory on the supertwistor space. We get the following list of objects: Aλ +Ō (−2) 3 = 1 λ1λ1Âλ + . 
This implies the following expansions in homogeneous coordinates (cf. (5.33), (5.34)): For rewriting the equations of motion in terms of this gauge potential, we also need to rewrite the vector fields (4.19) and (4.33) in homogeneous coordinates. The vector fields along the fibres are easily rewritten, analogously to the corresponding components of the gauge potential. The vector field on the sphere can be calculated by consideringÂλ + dλ + = 3Θ 3 . This impliesΘ 3 = λ1dλ2 −λ2dλ1, which has a dual vector fieldV 3 defined byV 3 Θ 3 = 1. Altogether, we obtain the basisV The field equations (5.31) and (5.32) now take the form and yield the same equations (5.17) for the physical fields. B. Some mathematical definitions Interior product. For the interior product of a vector V with a one-form A, we use the notation V A := V, A . A second common notation for this product is i V A. Holomorphic line bundles. Given the Riemann sphere P 1 ∼ = S 2 with standard patches U + and U − and coordinates λ ± on the corresponding patches and λ ± = 1/λ ∓ on U + ∩ U − , the holomorphic line bundle O(n) is defined by its transition function z + = λ n + z − , where z ± are complex coordinates on fibres over U ± . For n ≥ 0, global sections of the bundle O(n) are polynomials of degree n in the coordinates λ ± and homogeneous polynomials of degree n in homogeneous coordinates (see also appendix A). The O(n) line bundle has first Chern number n. The complex conjugate bundle to O(n) is denoted byŌ(n). Its sections have transition functionsλ n + :z + =λ n +z− . Spinor conventions. All objects with space-time indices are rewritten in spinor notation by x αα = σ αα µ x µ etc., where the sigma-matrices are determined by the metric under consideration. The homogeneous coordinates λ1 and λ2 for a point in P 1 are regarded as components of a complex commuting spinors. Their indices are raised and lowered with the antisymmetric ε-tensors. We use the convention ε 12 = ε12 = −ε 12 = −ε12 = 1, implying ε αβ ε βγ = δ α γ . The complex conjugate is obtained by conjugating the components of the spinor. A second anti-linear conjugation, denoted by· is performed for different types of spinors as where the 2 × 2-matrix C is given by The conventions for Graßmann variables are discussed in the text around (4.21)-(4.27) and (10.16). These imply in particular that Furthermore, we adopt the following convention for the conjugation of products of Graßmann variables and supernumbers in general: With this choice, products of two real objects will be real. Note that this is not the common convention used for supersymmetry in Minkowski space, and here, we define τ M (ξ 1 ξ 2 ) = τ M (ξ 2 )τ M (ξ 1 ). A more detailed discussion can be found in [59]. To see how flag manifolds naturally arise, consider the following reformulation of the (bosonic part of the) discussion following (2.8). We fix the full space to be 4 . Then we can establish the following double fibration: Let (L 1 , L 2 ) be an element of F 12 , i.e. dim L 1 = 1, dim L 2 = 2 and L 1 ⊂ L 2 . Thus F 12 fibres over F 2 with P 1 as a typical fibre, which parametrizes the freedom to choose a complex onedimensional subspace in a complex two-dimensional vector space. The projections are defined as π 2 (L 1 , L 2 ) = L 1 and π 1 (L 1 , L 2 ) = L 2 . The full connection to (2.8) becomes obvious, when we note that F 1 = P 3 = P 3 ∪ P 1 and that F 2 = G 2,4 ( ) is the complexified and compactified version of Ê 4 . 
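Returning briefly to the line bundles O(n) defined in this appendix, a low-degree example makes the transition rule z₊ = λⁿ₊ z₋ concrete. For n = 2, a global holomorphic section is

\[
s_+(\lambda_+)=a+b\,\lambda_++c\,\lambda_+^2\,,\qquad
s_-(\lambda_-)=\lambda_-^{\,2}\,s_+(1/\lambda_-)=a\,\lambda_-^2+b\,\lambda_-+c\,,
\]

which indeed satisfies s₊ = λ₊² s₋ on U₊ ∩ U₋; in homogeneous coordinates (with, say, λ₊ = λ₁/λ₂) this is the single homogeneous quadratic aλ₂² + bλ₁λ₂ + cλ₁², in accordance with the statement that global sections of O(n) are homogeneous polynomials of degree n.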
The advantage of the formulation in terms of flag manifolds is related to the fact, that the projections are immediately clear: one has to shorten the flags to suit the structure of the flags of the base space. The compactified version of the "dual" fibration (6.10) is where F 3 is the space of hyperplanes in 4 . This space is naturally dual to the space of lines, as every hyperplane is fixed by a vector orthogonal to the elements of the hyperplane. Therefore, we have F 3 = F * 1 = P 3 * ⊃ P 3 * . Also the third double fibration (7.15), which we used in the case of full N =3 SYM, is a restricted version of the diagram where F 2 = G 2,4 ( ) is again the complexified and compactified version of Ê 4 . The flag manifold F 13 is topologically the zero locus of a quadric in P 3 × P 3 * . For further details and the super generalization, see e.g. [22,38]. Supermanifolds and Calabi-Yau supermanifolds. The space Ê r|s is described by coordinates x i and θ j with 1 ≤ i ≤ r, 1 ≤ j ≤ s, where the θ j are real Graßmann variables satisfying the algebra {θ j , θ k } = 0. The superspace r|s is defined analogously, with complex coordinates: For our considerations, a supermanifold is defined to be a topological space which is locally diffeomorphic to Ê r|s or r|s . A supermanifold contains a purely bosonic part (the "body") which is parametrized in terms of bosonic coordinates. The body of a supermanifold is a real or complex manifold by itself. The 2 -grading of the superspace used for parametrizing the supermanifold induces a grading on the ring of functions on the supermanifold. For objects like subspaces, forms etc. which come with a dimension, a degree etc., we use the notation (i|j), where i and j denote the bosonic and fermionic part, respectively. We further introduce the parity-changing operator Π which, when acting on a fibre bundle, changes the parity of the fibre coordinates. For example, ΠO(n) → P 1 is parametrized by complex variables λ ± and Graßmann variables θ ± with θ + = λ n + θ − on U + ∩ U − . For a more extensive discussion of supermanifolds, see [59] and references therein. Calabi-Yau manifolds are manifolds with vanishing first Chern class which implies the existence of a globally well-defined holomorphic volume form. For our purposes, we define a Calabi-Yau supermanifold to be a supermanifold with a globally defined holomorphic volume form. Note that the body of a super CY is not a CY, in general. In the purely bosonic case, the 3-fold O(m) ⊕ O(n) → P 1 with coordinates z 1 ± , z 2 ± , λ ± is a CY, if and only if m + n = −2, and a volume form is then given by Ω 3,0 ± = ±dz 1 ± ∧ dz 2 ± ∧ dλ ± . In the super case, the fermionic coordinates can also be assigned to some line bundle, but because the Berezinian (i.e. the fermionic Jacobi determinant) enters as an inverse in the integration, a fermionic coordinate living in O(n) will contribute −n to the overall first Chern number. Thus the bundle is a CY supermanifold. Its holomorphic volume form is given byΩ where z i ± and θ j ± are coordinates of the bosonic and fermionic line bundles, respectively. The body of this supermanifold is O(1) ⊕ O(1) → P 1 and it is obviously not a CY manifold. C. The twistor geometry in the Kleinian case ε = +1 As mentioned several times in the text, one should consider hCS theory on domainsÛ ± of the supertwistor space P 3|N for which |λ ± | = 1 when working in the Kleinian case, i.e. when using the reality conditions obtained from the involution 53 τ 1 . 
In this and the following appendix, we will discuss this aspect in more detail. Let us start from the double fibration (4.6), which describes the complex supertwistor correspondence for 0 ≤ N ≤ 4. As before, we have complex coordinates (z α ± , λ ± , η ± i ) on the patchesÛ ± which cover P 3|N and (x αα R , λ ± α , ηα i ) on F 5|2N R . The projection π 1 is the trivial projection π 1 (x αα R , λ ± α , ηα i ) = (x αα R , ηα i ) and the projection π 2 is given by the formulae The action of the involution τ 1 on the coordinates of P 3|N is given by formulae (2.12) together with τ 1 (η ± i ) =η ± i /λ ± . It yields the reality conditions The set of fixed points under this involution 54 of the spaces contained in the double fibration (C.1) form real subsets . Recall that the body T 3 of the supermanifold 55 T 3|N is diffeomorphic to the space ÊP 3 \ÊP 1 (cf. (2.14)) fibred over S 1 ∼ = ÊP 1 ⊂ P 1 . Thus, we obtain the real version of the double fibration (C.1). Here, π 1 is again the trivial projection and π 2 is given by equations (C.2)-(C.4) with |λ ± | = 1. The tangent spaces to the (real) (2|N )-dimensional leaves of the fibration π 2 in (C.5) are spanned by the vector fields 53 For τ0, the description is similar and for that reason we focus on τ1. Note, however, that the P 1 embedded in the twistor spaces reduces to different S 1 s: for τ1, the constraint is λ± =λ −1 ± and for τ0 we have λ± =λ±. 54 Although τ1 was defined on P 3|N , it induces an involution on F 5|2N R which we will denote by the same symbol in the following. 55 For N = 4, T 3|N has a globally defined real volume form invariant under rescaling of homogeneous coordinates. The map π 2 in (C.9) restricted to the space R which is defined by the formulae (C.2) with |λ ± | = 1 and x αα R , ηα i subject to (C.4). Its inverse is given by and x 22 R , x 12 R and η2 i fixed by (C.4). Due to this diffeomorphism, the diagram (C.9) with the maps π 1 and π 2 restricted to R 4|2N R × H 2 becomes a nonholomorphic fibratioñ and onP 3|N , one can use either set of coordinates (z α ± , λ ± , η ± i ) and (x αα R , λ ± , ηα i ). For the dual supertwistor space P 3|N * , the discussion follows along the same lines. One merely replaces the coordinates (z α ± , λ ± , η ± i ) of P 3|N with the coordinates (wα ± , µ ± , θ i ± ) of P . Considering then the set of fixed points of the involution τ 1 as done above leads to fibrations similar to (C.5), (C.9) and (C.12). D. Comments on hCS theory in the Kleinian case ε = +1 In the case of the real structure τ 1 , i.e. ε = +1, which yields Kleinian signature (2, 2) on Ê 4 , we always discussed hCS theory onP 3|N in the text. This is due to a peculiarity of the Penrose-Ward correspondence in this case which we now discuss more explicitly. Consider the real supertwistor space T 3|N ⊂ P 3|N and a real-analytic function f τ +− : T 3|N → GL(n, ) which can be understood as an isomorphism f τ +− : E τ − → E τ + between two trivial complex vector bundles E τ ± → T 3|N . We assume that f τ +− satisfies the reality condition Given such a function f τ +− , one can extend it holomorphically into a neighborhoodÛ of T 3|N in P 3|N , such that the extension f +− of f τ +− satisfies the reality condition generalizing equation (D.1). The function f +− is holomorphic onÛ =Û + ∩Û − and can be identified with a transition function of a holomorphic vector bundle E over P 3|N =Û + ∪Û − which glues together two trivial bundles E + =Û + × n and E − =Û − × n . 
Obviously, the two trivial vector bundles E τ ± → T 3|N are restrictions of the trivial bundles E ± →Û ± to T 3|N . In the twistor approach, it is assumed that the bundle E is holomorphically trivial when restricted to any curve P 1 x R ,η ֒→P 3|N and therefore there exists a gauge in which the restriction of the transition function f +− to any P 1 x R ,η splits, where ψ τ ± are restrictions to R 4|2N R × S 1 of the matrix-valued functions ψ ± given by (D.3) and (D.4). Thus the initial twistor data consist of a real-analytic function 57 f τ +− on T 3|N satisfying (D.1) together with a splitting (D.5), from which we construct a holomorphic vector bundle E over P 3|N with a transition function f +− which is a holomorphic extension of f τ +− toÛ ⊃ T 3|N . In other words, the space of real twistor data is the moduli space of holomorphic vector bundles E → P 3|N with transition functions satisfying the reality conditions (D.2). In the purely real setting, one considers a real-analytic GL(n, )-valued function f τ +− on T 3|N satisfying the hermiticity condition (D.1) and the real double fibration (C.5). Since the pull-back of f τ +− to R 4|2N R × S 1 has to be constant along the fibres of π 2 , we obtain the constraint equations v + α f τ +− = 0 = v i + f τ +− , (D.6) 56 Recall that by 'regular', we mean smooth with nonvanishing determinant. 57 One could also consider the extension f+− and the splitting (D.5) even if f τ +− is not analytic, but in this case the solutions to the super SDYM equations can be singular. Such solutions are not related with holomorphic bundles. Recall that ψ τ + and ψ τ − extend holomorphically in λ + and λ − to H 2 + and H 2 − , respectively, and therefore we obtain from (D.9) that A ± α = λα ± A αα and A i ± = λα ± A iα , where A αα and A iα do not depend on λ ± . Then the compatibility conditions (D.10) of the linear systems (D.8) reduce to equations (5.14). In section 5 it was demonstrated that for ε = +1, these equations are equivalent to the field equations of N -extended SDYM theory on Ê 2,2 . Thus there are bijections between the moduli spaces of solutions to equations (D.10), the field equations of N -extended super SDYM theory on Ê 2,2 and the moduli space of τ 1 -real holomorphic vector bundles E over P 3|N . Consider now the extension of the linear systems (D. 8) to open domainsÛ ± = P 3|N ± ∪Û ⊃ T 3|N , whereV ± α and∂ i ± are vector fields of type (0, 1) onÛ s ± :=Û ± \P 0,N as given in (4.19)-(4.20). These vector fields annihilate f +− and from this fact and the splitting (D.3), one can also derive equations (D.11). Recall that due to the existence of a diffeomorphism between the spaces R 4|2N R × H 2 andP 3|N which is described in (C.10)-(C.11), the double fibration (C.9) simplifies to the nonholomorphic fibration (C.12). Moreover, since the restrictions of the bundle E → P 3|N to the P 1 x R -fibres of the fibration (C.12) are trivial, there exist regular matrix-valued functionsψ ± on U s ± such that f +− =ψ The existence of this gauge was already implied in [2]. Additionally, we impose the reality condition onψ ± . AlthoughÛ s consists of two disconnected pieces, the functionsψ ± are not independent on each piece because of the condition (D.14), which also guarantees (D.2) onÛ s . The functionsψ ± and their inverses are ill-defined on P 0,N since the restriction of π 2 to R 4|2N R × S 1 is a noninvertible projection onto T 3|N , see (C.10). 
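The splitting invoked here is worth displaying: holomorphic triviality on each real curve means that the restriction of the transition function factorizes (a Birkhoff-type factorization; a sketch in the notation of this appendix),

\[
f_{+-}\big|_{\mathbb{C}P^1_{x_R,\eta}}\;=\;\psi_+^{-1}(x_R,\eta;\lambda_+)\;\psi_-(x_R,\eta;\lambda_-)\,,
\]

with ψ₊ and ψ₋ extending holomorphically in λ₊ and λ₋ to H²₊ and H²₋, respectively; applying the vector fields that annihilate f₊₋ to this factorization is what produces the linear systems (D.8).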
Equating (D.2) and (D.12), one sees that the singularities of ψ̂± on P_{0,N} split off into a matrix-valued function φ⁻¹, i.e. ψ̂± = φ⁻¹ψ̃± (D.15), and this singular factor drops out of the transition function, since f₊₋ = ψ̂₊⁻¹ψ̂₋ = ψ̃₊⁻¹ψ̃₋. In other words, one can find regular matrix-valued functions ψ̃₊ on Û^s₊ and ψ̃₋ on Û^s₋ which satisfy the reality condition (D.14). These functions define a further function f^s₊₋ = ψ̃₊⁻¹ψ̃₋ : Û^s → GL(n, ℂ), which can be completed to a holomorphic function f₊₋ : Û → GL(n, ℂ) due to (D.16). The latter can be identified with a transition function of a holomorphic vector bundle E over the supertwistor space P^{3|N}. The restriction of f₊₋ to T^{3|N} is a real-analytic function f^τ₊₋ which is not constrained by any differential equation.

Thus, in the case ε = +1 (and also for the real structure τ₀), one can either consider two trivial complex vector bundles E^τ± defined over the space T^{3|N} together with an isomorphism f^τ₊₋ : E^τ₋ → E^τ₊, or a single complex vector bundle E over the space P^{3|N}. However, the appropriate hCS theory which has the same moduli space as the moduli space of (equivalence classes of) these bundles is defined on P̃^{3|N}. Moreover, real Chern-Simons theory on T^{3|N} has no moduli, since its solutions correspond to flat bundles over T^{3|N} with constant transition functions defined on the intersections of appropriate patches covering T^{3|N}. (Note that these constant transition functions are in no way related to the transition functions f₊₋ of the bundles E over P^{3|N} or to the functions f^τ₊₋ defined on the whole of T^{3|N}.)

To sum up, there is a bijection between the moduli spaces of solutions to equations (D.10) and to the hCS field equations on the space P̃^{3|N}, since both moduli spaces are bijective to the moduli space of holomorphic vector bundles over P^{3|N}. In fact, whether one uses the real supertwistor space T^{3|N} or works with its complexification P^{3|N} is partly a matter of taste. However, the complex approach is more geometrical and more natural from the point of view of an action principle and the topological B-model. For example, equations (D.10) cannot be transformed by a gauge transformation into a set of differential equations on T^{3|N}, as was possible on P̃^{3|N} in the complex case. This is due to the fact that the transition function f₊₋, which was used as a link between the two sets of equations in the complex case, does not satisfy any differential equation after restriction to T^{3|N}. From this we see that we cannot expect any action principle on T^{3|N} to yield equations equivalent to (D.10), as we had in the complex case. For these reasons, we have chosen to use the complex approach throughout the paper.
Electrically Polarized Withaferin A and Alginate-Incorporated Biphasic Calcium Phosphate Microspheres Exhibit Osteogenicity and Antibacterial Activity In Vitro

Biphasic calcium phosphate microspheres were synthesized by the water-on-oil emulsion method, and withaferin A was subsequently incorporated into the microspheres to evaluate their efficacy in biomedical applications. These withaferin A- and alginate-incorporated biphasic calcium phosphate (BCP-WFA-ALG) microspheres were then negatively polarized, and the formation of biphasic calcium phosphate was validated by X-ray diffraction. The TSDC measurement of the BCP-WFA-ALG microspheres showed a maximum current density of 5.37 nA/cm², and the contact angle of the specimen was lower than that of the control BCP microspheres in all media. The water uptake of the BCP-WFA-ALG microspheres was significantly higher than that of the pure BCP microspheres. MTT assay results showed a significant enhancement in the cell proliferation rate with the BCP-WFA-ALG composite microspheres, and the osteogenic differentiation of MG63 cells on the BCP-WFA-ALG microspheres was reflected in an increased expression of osteogenic marker genes.

Introduction

Calcium phosphate-based ceramics have widespread applications in bone repair and regeneration owing to their outstanding bioactivity, biocompatibility, biodegradability, and osteoconductivity [1,2]. However, the desired combination of all these properties for a specific application is hard to achieve in a single-phase material. There are, in general, two approaches to solving this problem through chemical modification: first, the use of composites, and second, ceramics consisting of a mixture of several phases [3,4]. In this respect, calcium phosphates and silicates are used most often in orthopedic applications. Calcium phosphate- or apatite-based materials occur in various compositions and, as a result, show different properties [5].

There is increasing interest in the use of resorbable ceramics in orthopedic applications, and biphasic calcium phosphate (BCP) ceramics have gained much attention in this regard. BCP ceramics have long been used as bone substitute materials where a controlled resorption rate in the body environment is required. BCPs are a mixture of two calcium phosphate phases: hydroxyapatite (HA), which is more stable, and tricalcium phosphate (TCP), which is more soluble. HA, with the formula Ca₁₀(PO₄)₆(OH)₂, is one of the major CaP-based biomaterials and is highly stable. Its molar Ca/P ratio is 1.67, which closely resembles the composition of natural bone, and it is known for its excellent osteoconduction and osteointegration properties. TCP, with the formula Ca₃(PO₄)₂, exists in two polymorphs, α-TCP and β-TCP, with a Ca/P ratio of 1.5. The bioresorbability of TCP is higher than that of HA, which makes it a suitable candidate for bone cements and for tissue engineering applications. Therefore, in the BCP mixture, the biodegradability and bioactivity can be tuned simply by controlling the TCP/HA ratio; hence, bone tissue grows rapidly on BCP ceramics [6-8].

The effectiveness of BCP as a bone graft material is enhanced when it is synthesized in the form of microspheres [9]. The spherical morphology of microspheres enables them to fill uneven damaged bone sites more efficiently and provides a large surface area for the attachment of bone cells.
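The quoted Ca/P ratios follow directly from the stoichiometry, as a quick check shows:

\[
\text{HA, }\mathrm{Ca_{10}(PO_4)_6(OH)_2}:\ \ \frac{\mathrm{Ca}}{\mathrm{P}}=\frac{10}{6}\approx 1.67\,;
\qquad
\text{TCP, }\mathrm{Ca_3(PO_4)_2}:\ \ \frac{\mathrm{Ca}}{\mathrm{P}}=\frac{3}{2}=1.5\,.
\]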
Hence, BCP microspheres can be used as bone filler materials. Additionally, the inter-space between microspheres provides a growth path for osteogenic activity, thereby inducing rapid bone growth. Another advantage of porous microspheres is that they can act as delivery agents for various cells and bioactive substances, such as antibiotics and growth factors. Alginate, in turn, was chosen for incorporation into the BCP microspheres because it has been used extensively in biomedical applications and is highly biocompatible, of low toxicity, and cost effective. Moreover, withaferin A (WFA) was incorporated into the BCP-alginate composite microspheres and its efficacy evaluated [10-12]. Withaferin A is derived from the medicinal plant Withania somnifera (Ashwagandha), a traditional Indian medicinal herb known for its biologically active constituents. The leaf and root extracts of this plant contain withanones, withanolides, and withaferins, among which withaferin A is a major and abundant constituent. Although WFA is known to have various pharmacological activities, such as anti-inflammatory, analgesic, and immunomodulatory effects, few studies have addressed its effect on bone regeneration; this study aims at a clearer understanding of the osteogenic properties of WFA [13-15].

Human bone exhibits a piezoelectric effect when placed under stress. According to Wolff's law, the surface charge induced under load is related to the crystallographic changes adopted by the bone. Since hydroxyapatite is the main inorganic component of bone, its crystallographic structure drives this piezoelectric behavior. It has been established that polarized hydroxyapatite surfaces stimulate osteogenicity through higher osteoconduction and protein adsorption; moreover, such charged surfaces assist in the regeneration of blood vessels. Hence, polarization of the microspheres was carried out in this study [16].

Fourier Transform Infrared Spectroscopy

The broad band observed in the FTIR spectrum (Figure 2) is due to the presence of an OH band in the microspheres; carbonate is absent from the spectrum. The peaks at 545 and 615 cm⁻¹ arise from the PO₄³⁻ functional group of BCP, the peaks at 1243 and 1600 cm⁻¹ correspond to withaferin A, and the peaks at 1054 and 1422 cm⁻¹ correspond to alginate.

Thermogravimetric Analysis

In the TGA thermogram (Figure 3), the weight of the sample was almost stable up to 245 °C, after which there was a sharp fall in weight up to 415 °C. Above this temperature, the degradation rate slowed until 764 °C, followed by an even slower degradation of the microspheres up to 1100 °C.
Thermally Stimulated Depolarization Current

The TSDC measurement (Figure 4) was carried out on the polarized BCP microsphere samples. The highest current density, 5.37 nA/cm², was observed at a poling temperature of 521 °C, and the current density was stable at 5 nA/cm² at 550 °C. It can be inferred from this measurement that a charge density (Qp) of 3.96 µC/cm² remained in the sample when it was polarized at 550 °C.

Contact Angle

The contact angle of the BCP-WFA-ALG microspheres was lower than that of the control BCP microspheres in all media: 44° ± 2.3 in DMEM cell medium and 46° ± 1.95 in SBF medium, compared with 52° ± 1.86 (DMEM) and 57° ± 2 (SBF) for the control BCP microspheres.
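The connection between contact angle and surface energy that motivates these measurements is usually rationalized through Young's equation (a standard relation, not specific to this study):

\[
\gamma_{sv}=\gamma_{sl}+\gamma_{lv}\cos\theta
\qquad\Longrightarrow\qquad
\cos\theta=\frac{\gamma_{sv}-\gamma_{sl}}{\gamma_{lv}}\,,
\]

where γ_sv, γ_sl, and γ_lv are the solid-vapor, solid-liquid, and liquid-vapor interfacial energies; a lower θ thus corresponds to a more wettable, higher-surface-energy substrate.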
Swelling Ratio

The water absorption capability of the synthesized BCP-WFA-ALG microspheres was evaluated using SBF (pH 7.4) as the immersion fluid, and the results were compared with those of pure BCP microspheres. The BCP, BCP-ALG, and BCP-WFA-ALG microspheres increased in weight significantly (by approximately 80%, 98%, and 120%, respectively) during the initial days of exposure. After 15 days, a saturation level (i.e., maximum swelling) was reached in all three types of microsphere (220%, 254%, and 274%, respectively). The water uptake pattern shown in Figure 5 indicates that the water uptake of the BCP-WFA-ALG microspheres was significantly higher than that of the pure BCP microspheres.

Degradation

The in vitro degradation behavior of the BCP-WFA-ALG microspheres was evaluated and compared with that of the pure BCP microspheres. Both types of microsphere were placed in SBF (pH 7.4) for 7 days, and the composite BCP-WFA-ALG microspheres showed an increased degradation rate compared with the BCP microspheres alone (Figure 6).
The cell density of both the samples after 1, 3, and 5 days of culture is displayed in Figure 7. As observed in the MTT assay result, there was a significant enhancement in cell proliferation rate in the BCP-WFA-ALG composite microspheres, even on day 1. The statistical study also confirmed the fact that the difference in cell density between BCP and BCP-WFA-ALG microspheres was significant (p < 0.05). MTT Assay Study An MTT assay study was employed to assess the proliferation of MG63 osteoblastlike cells derived from human osteosarcoma on both BCP and BCP-WFA-ALG microspheres. The cell density of both the samples after 1, 3, and 5 days of culture is displayed in Figure 7. As observed in the MTT assay result, there was a significant enhancement in cell proliferation rate in the BCP-WFA-ALG composite microspheres, even on day 1. The statistical study also confirmed the fact that the difference in cell density between BCP and BCP-WFA-ALG microspheres was significant (p < 0.05). Values are presented as the mean ± SD, * p < 0.05 denotes significant difference. Osteogenic Expression The osteogenic differentiation of MG63 cells on BCP-WFA-ALG microspheres was evaluated using gene expression level examination of osteogenic-related genes, such as osteocalcin (OCN) (Figure 8a), type I collagen (COL1) (Figure 8b), and RUNX2 (Figure 8c), and they were compared with the results obtained for BCP microspheres. The gene expression levels were measured after 1, 3, and 5 days of culture. The results indicated that there was an increased expression of osteogenic marker genes in the case of the BCP-WFA-ALG composite microspheres. Table 1 shows the primer sequences used in the RT-PCR study. Values are presented as the mean ± SD, * p < 0.05 denotes significant difference. Osteogenic Expression The osteogenic differentiation of MG63 cells on BCP-WFA-ALG microspheres was evaluated using gene expression level examination of osteogenic-related genes, such as osteocalcin (OCN) (Figure 8a), type I collagen (COL1) (Figure 8b), and RUNX2 (Figure 8c), and they were compared with the results obtained for BCP microspheres. The gene expression levels were measured after 1, 3, and 5 days of culture. The results indicated that there was an increased expression of osteogenic marker genes in the case of the BCP-WFA-ALG composite microspheres. Table 1 shows the primer sequences used in the RT-PCR study. Cellular Response The cell proliferation and cytoskeletal response of MG63 cells cultured on BCP-WFA-ALG microspheres was observed using FE-SEM ( Figure 9a) and CLSM (Figure 9b), respectively. Although, the seeded human osteoblast cells can adhere to both the BCP and BCP-WFA-ALG microspheres, they may act differently on both the specimens. Although the osteoblast cells showed near confluence on both samples, the cells were well spread on the BCP-WFA-ALG microspheres. Moreover, the BCP microspheres did not show significant changes in the SEM morphology of osteoblast cells; instead, the folding of the cellular membrane on microsphere surface was evident, which denotes that the cells encountered some force exerted by the pores on the microspheres. Figure 9a depicts the osteoblast cell proliferation on the BCP-WFA-ALG microspheres and Figure 9b shows the CLSM image of the stained cells, with the nucleus colored in blue and the cytoskeleton colored in green. 
Discussion Calcium phosphate-based ceramics, such as HA, TCP, BCP, etc., have gained tremendous importance in orthopedic applications due to their compositional similarity with natural bone [17,18]. However, HA, being more stable, has concerns over its biodegradability, whereas TCP has shortcomings regarding poor bioactivity [19]. Hence, BCP is a promising candidate as it offers a wide range of tunable osteoactivity, including biodegradability and bioactivity. BCP, when synthesized in the form of a microsphere, gives maximum benefit as there is an increase in injectability and flowability. The spherical shape enables it to fill bone defects with complex geometry. It is also less invasive to adjacent tissues due to its smooth surface, in contrast to other irregularly shaped particles Discussion Calcium phosphate-based ceramics, such as HA, TCP, BCP, etc., have gained tremendous importance in orthopedic applications due to their compositional similarity with natural bone [17,18]. However, HA, being more stable, has concerns over its biodegradability, whereas TCP has shortcomings regarding poor bioactivity [19]. Hence, BCP is a promising candidate as it offers a wide range of tunable osteoactivity, including biodegradability and bioactivity. BCP, when synthesized in the form of a microsphere, gives maximum benefit as there is an increase in injectability and flowability. The spherical shape enables it to fill bone defects with complex geometry. It is also less invasive to adjacent tissues due to its smooth surface, in contrast to other irregularly shaped particles used as bone fillers. Moreover, porous BCP microspheres are effective in targeted drug delivery and bone regenerative actions. Victor et al. demonstrated doxycycline loading as well as release action from BCP microspheres and found a correlation between the morphology of microspheres and drug release kinetics [20]. The FTIR spectrum showed the presence of the functional groups of all three components of the BCP-WFA-ALG microspheres. There is four-step degradation process in the TGA of the microspheres, where the first step involves dehydration of the microspheres, including physisorption and chemisorption of H 2 O along with the biopolymers. In the next two steps, there is degradation of BCP to change its form and, in the fourth step, P 2 O 7 formed in the last two steps reacts with the OH of hydroxyapatite. After about 900 • C, β-TCP forms HA, for which there is less degradation at higher temperatures. In the present study, WFA-incorporated BCP-ALG microspheres were evaluated for their osteogenic properties. It has been observed that the aforementioned composite microspheres exhibited superior osteogenic behavior compared to their pure BCP counterparts [21]. In one of the studies by Khedgikar et al., it was demonstrated that WFA causes osteoblastic differentiation using proteasomal inhibition; this was confirmed by the degradation of the RUNX2 protein and a decrease in Smurf2 gene expression [22]. There is some experimental evidence that shows that the slowing down of proteasomal activity has a great influence on bone metabolism and differentiation. All the samples used in this study were polarized, which further enhanced bone proliferation as well as microbial inhibition. The TSDC measurement carried out on the sample revealed that there is a remnant polarization charge left on the sample. The negatively polarized, i.e., N-poled sample had a stored charge density of 3.96 µC/cm 2 . 
Since natural bone is piezoelectric, an electric charge is generated in it under mechanical stress. This charge stimulates osteogenic cell differentiation and results in fracture healing. This natural mechanism is simulated in the artificial bone substitute material by introducing sufficient polarization. The effect arises from the selective adhesion of certain ionic proteins and cell membranes by simple coulombic attraction, which results in enhanced proliferation of bone cells. Many studies have demonstrated that cell adhesion and growth are high when the surface energy is high and the contact angle is low [23,24]. In the present study, the contact angle measurement revealed that the BCP-WFA-ALG microsphere has a lower contact angle than the BCP microsphere, which further supports the fact that the former exhibits better osteogenicity. The water uptake capability of a substance determines its efficacy for use in tissue engineering applications. Osteoblast cell attachment, growth, and proliferation are greatly affected by the swelling property of the scaffold. The water uptake of the BCP-WFA-ALG specimen in the present study was found to be greater than 100%, which indicates better cell activity on its surface. In addition, the water uptake capability was significantly higher than that of the BCP microspheres. It has been established that alginate readily absorbs water, and in the present investigation the water uptake of the BCP-ALG specimen was due to the alginate component. However, BCP-WFA-ALG still has a higher water absorption capability, which makes it evident that this additional uptake is due to the presence of WFA. Hence, the addition of WFA improves the bioactivity of the BCP microspheres. Cell growth and proliferation were determined by conducting a cell viability test using an MTT assay. The cell density of both samples increased with time but was higher in the case of BCP-WFA-ALG [25]. The role of an orthopedic scaffold is to aid in the process of bone regeneration and then slowly disappear from the body, leaving behind the newly formed bone [26]. Hence, the degradability of the material is a crucial property. The degradability study done in the SBF solution showed that the BCP-WFA-ALG microspheres have a higher degradation rate than the BCP microspheres. In the BCP mixture, the TCP component counteracts the stability of HA and enhances the degradability. WFA has been shown to facilitate osteoblast cell survival and proliferation. The inherent anti-inflammatory property of WFA is also useful in suppressing inflammatory cytokines, which might pose an obstacle to cell adhesion and differentiation [27-29]. Osteoblast-specific transcription factors may be enhanced due to the presence of WFA. In the present study, this was confirmed by the RT-PCR analysis: osteogenic markers such as OCN, COL1, and RUNX2 showed enhanced expression in the case of the BCP-WFA-ALG microspheres. There is little information about the detailed mechanism through which WFA influences osteogenic cell differentiation and growth. The higher attachment of osteoblast cells on the BCP-WFA-ALG microspheres, as validated by FE-SEM, may be attributed to the presence of WFA and alginate. The cytoskeletal studies produced vivid pictures of the spreading of the osteoblasts. The polarization of the microspheres is envisaged to confer antimicrobial properties, as evidenced by our previous work on negative surface charge.
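The water uptake values discussed above follow from the swelling-ratio formula defined in the Methods below. A minimal sketch of the calculation, using hypothetical weighings chosen to reproduce the 270% maximum swelling reported for the BCP-WFA-ALG microspheres:

def swelling_ratio(w_dry_mg: float, w_swollen_mg: float) -> float:
    # Degree of swelling: (Ws(t) - Wd) / Wd, as defined in the Methods.
    return (w_swollen_mg - w_dry_mg) / w_dry_mg

w_dry = 8.0       # hypothetical initial dry weight, mg
w_swollen = 29.6  # hypothetical weight after soaking in SBF, mg
print(f"Swelling ratio: {swelling_ratio(w_dry, w_swollen):.0%}")  # -> 270%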
Formation of BCP Nanoparticles and Microspheres
Calcium phosphate apatite in powder form was synthesized by the aqueous precipitation method. In this procedure, solutions of 0.4 M Ca(NO3)2·4H2O (Merck, Rahway, NJ, USA) and 0.2 M (NH4)2HPO4 (Merck, USA) were added dropwise and simultaneously into a three-necked flask at room temperature, with a buffer used to keep the pH at 11. The buffer was then removed from the prepared white-colored precipitate by washing it with distilled water several times. The resultant powder was sintered at 1100 °C for 1 h. The final product obtained was BCP, composed of 60% β-TCP and 40% HA. BCP powder particles of size < 75 nm were obtained by crushing in a mortar and pestle and subsequently passing the powder through stainless-steel sieves to remove the micro-sized particles. The BCP nanopowder was weighed in a 15:1 ratio with respect to alginate and added to a 1% sodium alginate solution together with a 2% WFA solution, and the mixture was stirred until a homogeneous slurry formed. A 10% gelatin solution was prepared using bovine skin gelatin (Merck, USA) and added to a 2% polyvinyl alcohol (Daejung Chemicals, Korea) solution at 60 °C. Then, 0.2% Triton X-100 (Invitrogen, Waltham, MA, USA) and 0.3% poly-ammonium salt (Invitrogen, USA) were added to that mixed solution. This final solution was added to the homogeneous BCP-WFA-ALG slurry. The slurry was then extruded using a disposable 10 mL syringe (BD Plastipak, Curitiba, Brazil) and a needle of 0.7 mm diameter (BD PrecisionGlide) into pre-cooled, stirring oil on a magnetic stirrer. The preparation of the alginate-WFA-BCP microspheres was thus carried out by the water-in-oil emulsion technique. The prepared granules were then taken out of the oil, rinsed with ethanol, and kept at −10 °C for 10 min. After that, the samples were washed with ultrapure water about three times, filtered, and oven dried for 24 h at 37 °C. The as-formed specimens were sieved to obtain microspheres ranging in size between 500 µm and 1000 µm, which is the ideal size for use as a bone substitute material.

Polarization of the Microspheres
The microspheres were polarized by applying an electric field of negative polarity with a polarization voltage of 2 kV/mm at 480 °C. The microspheres were coated with silver for electrical conduction. All the samples (BCP control microspheres and BCP-WFA-ALG microspheres) were polarized before further investigation.

X-ray Diffraction (XRD)
XRD of the prepared microspheres was performed and analyzed with an X'Pert PRO (PANalytical BV) instrument. Cu-Kα radiation of wavelength 1.5406 Å was used, with a current of 40 mA and a voltage of 40 kV. XRD patterns were recorded for qualitative analysis within the interval 20° ≤ 2θ ≤ 70° at a scan speed of 2°/min.

FTIR Analysis
Fourier-transform infrared (FTIR) spectroscopy was carried out to examine the functional groups present in the BCP-WFA-ALG microspheres.

TGA Study
Thermogravimetric analysis (TGA) of the BCP-WFA-ALG microspheres was performed up to a maximum temperature of 1100 °C to understand the degradation behavior of the microspheres at high temperatures.

Swelling Ratio and Degradation Analysis of BCP-WFA-ALG Microspheres
Between 5 and 10 mg of dried BCP-WFA-ALG microspheres (Wd) were placed in a solution of simulated body fluid (SBF), and the weight after swelling, Ws(t), was obtained.
The sample was then separated from the solution and blotted dry with kitchen towels. The swelling ratio (degree of swelling) was computed using the formula below:

Swelling ratio = (Ws(t) − Wd)/Wd

In this equation, Ws(t) is the weight of the microspheres after water absorption, taken at a preset time t, and Wd is the initial weight before water absorption. The degradation rate was measured in the same experiment by keeping the sample immersed in the SBF solution for a longer period.

Release Rate
The WFA release rate was evaluated by soaking 500 mg of the microspheres in 50 mL of SBF (pH 7.4) solution to assess the ion release quality of the BCP-WFA-ALG microspheres. A dialysis technique was used to study the in vitro release kinetics of WFA. WFA was placed in dialysis bags (cutoff 12,000 Da) and dialyzed against phosphate buffered saline (PBS) with continuous stirring at 37 °C for 1 day. At particular time points, samples containing WFA were withdrawn and quantified, and the withdrawn volume was replaced with the same volume of fresh medium. Quantification of WFA was carried out using a microplate reader at 215 nm, and the release rates at the different time points were estimated. The release rate study was carried out in triplicate.

In Vitro Cell Proliferation Testing: MTT Assay
MG63 human osteoblast-like cells (NCCS, Pune, India) derived from human osteosarcoma were used in this study. The cells were incubated with 5% CO2 in a humidified atmosphere at a temperature of 37 °C. The culture medium was Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS, Thermo Fisher, MA, USA), 100 U/mL penicillin (Gibco, Waltham, MA, USA), and 100 mg/mL streptomycin (Gibco, USA). The culture medium was replaced every other day. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Invitrogen, USA) assay was employed to assess the cell viability on the prepared composite microspheres. Cell proliferation was assessed by estimating mitochondrial succinate dehydrogenase activity. The BCP-WFA-ALG microspheres were fixed onto the base of a 24-well cell culture plate. Ethylene oxide (ETO) steam was used for 24 h to sterilize the samples at room temperature. After that, 1 mL of cell suspension was added to each sample well. The culture medium was replaced every second day. The cells were cultured for 1, 3, and 5 days, after which 100 µL of MTT (5 mg/mL) was added to each well and incubated for 4 h at 37 °C so that blue formazan crystals developed; these crystals were subsequently dissolved by adding 650 µL of dimethyl sulfoxide (DMSO, Invitrogen, USA) to each well, and the solution was then transferred to a 96-well plate. An ELISA microplate reader (Bio-Rad Laboratories, Hercules, CA, USA) was used to measure the absorbance at 570 nm. The BCP microspheres acted as the control for this study. Four tests were run, and the mean value was recorded.

Cell Proliferation and Cytoskeletal Response
Cell proliferation on the microspheres was observed and visualized by field emission scanning electron microscopy (FE-SEM; JSM-6700F, JEOL, Tokyo, Japan) and confocal laser scanning microscopy (CLSM; LSM700, Zeiss, Jena, Germany). MG63 osteoblast-like cells were cultured on BCP-WFA-ALG microspheres for 2 days, after which the specimens were rinsed with phosphate buffered saline (PBS) four times, fixed with 2.5% glutaraldehyde, and maintained at 4 °C for 24 h.
The specimens were then dehydrated with graded alcohol, subsequently lyophilized, and gold coated by sputtering prior to SEM observation. In addition, after 2 days, the MG63 cells, seeded at a density of 5 × 10⁴/well on a 24-multiwell plate, were rinsed four times with PBS, fixed with 4% paraformaldehyde (Thermo Fisher, Waltham, MA, USA), and permeabilized with 0.2% Triton X-100 (Merck, USA), and the cell morphology was examined by fluorescence imaging. The actin cytoskeleton was visualized by staining with Alexa Fluor 488 Phalloidin (Invitrogen, USA) using a CLSM system.

Statistical Analysis
The data presented in the present study are mean values with standard deviations. One-way analysis of variance (ANOVA) was carried out with SPSS (v.13.0, IBM SPSS, Atlanta, USA), and p < 0.05 was taken as the level of statistical significance.

Conclusions
The presence of both phases of BCP was validated by the XRD study, which showed 40% HA and 60% β-TCP. The TSDC measurement showed that a charge density of 3.96 µC/cm² was retained in the sample after polarization. The contact angle measurement showed the specimens to be hydrophilic, and the BCP-WFA-ALG microspheres exhibited a maximum swelling of 270%. It was also found that the composite microspheres exhibited superior osteogenic behavior compared to their pure BCP counterparts. The better osteogenicity of the polarized specimens may be due to the presence of WFA as well as to the stimulation of the osteoblast cells by the charged specimens. Overall, WFA-incorporated BCP microspheres could be a potential candidate for use as bone fillers, scaffolds, or bone substitute material, offering an outstanding combination of bioactivity and biodegradability.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2022-12-25T16:11:07.400Z
2022-12-22T00:00:00.000
{ "year": 2022, "sha1": "489ce1ace91c00bb039d8ecc8cf87bc7a4924dfc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/1/86/pdf?version=1671707485", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14bd5a445b7043054d301eb9b48dccfa1db2af53", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
56324786
pes2o/s2orc
v3-fos-license
Research progress and prospects of CO2-enhanced shale gas recovery and geologic sequestration

CO2 injection to strengthen shale gas development is a new technology for improving shale gas recovery while realizing geologic sequestration. Many scholars have studied the following aspects of this technology: the mechanism by which CO2 displaces CH4, the CO2 and CH4 adsorption capacities of shale, the factors affecting CO2 adsorption on shale, numerical simulation of CO2 displacement, and the advantages of supercritical CO2 flooding of CH4. Research shows that CO2 can exchange with CH4 in shale formations and improve shale gas recovery; on the other hand, shale formations are suitable for CO2 sequestration because shale gas reservoirs are compact. Supercritical CO2 has advantages such as a large fluid diffusion coefficient and the ability to dissolve in water to form carbonic acid, which can effectively improve the pore permeability of the formation, so the displacement efficiency of supercritical CO2 is high. At present, however, studies of this technology focus mainly on laboratory tests and numerical simulation, and there is still a big gap to industrial application; the combined effect of the influencing factors, suitable CO2 injection parameters for different shale gas reservoirs, and CO2 injection risks and countermeasures still need to be studied.

Introduction
CO2 capture and geologic sequestration is a technology for CO2 emission reduction that many countries have taken seriously. Shale gas reservoirs have the characteristics of low porosity, low permeability, and extreme compactness, which result in a very low gas recovery ratio. The recovery ratio of conventional natural gas can reach about 60%, while that of shale gas reservoirs is only about 4.7% to 10% without hydraulic fracturing [1]. Since fracturing fluid is likely to cause pollution, a new, environmentally friendly technology for shale gas development needs to be studied. CO2 injection not only can enhance the recovery ratio of shale gas reservoirs but also can realize permanent geologic sequestration of CO2. It is a new technology for CO2 storage and use, and its application has great significance for shale gas development and environmental protection.

Mechanism of CO2 displacement of CH4 in shale gas reservoirs [2-4]
At the micro scale, because the CO2 molecule has a linear structure and a diameter smaller than that of CH4, CO2 can access smaller micro-pores, which increases the contact area and contact time of CO2 with the shale reservoir and consequently increases the amount of CH4 displaced by CO2 in shale gas reservoirs. In terms of adsorption ability, comparative laboratory experiments on the adsorption of CO2 and CH4 by shale showed that the adsorption capacity, adsorption rate, and adsorption equilibrium time of CO2 are all superior to those of CH4 over the same experimental time. Numerical simulation models show the same result: CO2 is more easily adsorbed by shale than CH4 because CO2 has a stronger adsorption ability. At the macro scale, the replacement of CH4 by CO2 in shale reservoirs conforms to the extended form of the Langmuir equation (equation 1). The injected CO2 increases the total gas pressure and reduces the CH4 partial pressure within the shale reservoir, which leads to CH4 desorption from the shale to reach a new adsorption equilibrium.
V_i = V_{L,i} (p y_i / P_{L,i}) / (1 + Σ_{j=1}^{n} p y_j / P_{L,j})    (1)

where:
V_i: adsorption amount of gas component i, cm3/g;
V_{L,i}: Langmuir adsorption constant of component i, cm3/g;
p: gas pressure in the reservoir, MPa;
y_i: mole fraction of gas component i;
P_{L,j}: pressure of gas component j at which the adsorption amount of component j reaches 50% of its adsorption limit, MPa;
n: number of gas components.

Influencing factors of CO2 adsorption in shale gas reservoirs [2-6]
Shale desorption tests on shale samples crushed to a certain particle size proved that the adsorption of CO2 on shale is physical adsorption, which is influenced by temperature, pressure, the mineral composition of the shale, moisture content, shale gas composition, and other factors:
1) The adsorption capacity decreases with increasing temperature.
2) The adsorption capacity increases with increasing pressure.
3) Temperature and pressure have a combined effect on the adsorption and desorption behavior of shale. In the low temperature and pressure range, the influence of pressure on adsorption is greater than that of temperature; under high temperature and pressure conditions, the influence of temperature is greater than that of pressure.
4) The adsorption capacity increases with the organic matter content. Montmorillonite with a higher calcareous content has the highest CO2 adsorption capacity, while kaolinite has the lowest.
5) The pore structure influences the adsorption capacity.
6) When there is water in the shale gas reservoir, CO2 dissolves in the water to form carbonic acid, which erodes the formation and increases CO2 adsorption; this can effectively improve the permeability of the formation and increase shale gas production.
7) Injected CO2 undergoes convective diffusion with the shale gas, forming a CO2-shale gas multicomponent system. The higher the shale gas content, the more easily retrograde condensation occurs, which influences CO2 adsorption and the mode of shale gas production.

The above conclusions are almost the same for shale desorption tests on samples that preserve the original core structure and for tests that simulate the shale reservoir environment.

CO2 displacement laboratory tests with shale core samples [2,7]
Studies showed that the temperature and pressure of CO2 injection have an important effect on shale gas recovery: the recovery ratio increases with increasing CO2 injection temperature and pressure. The gas recovery ratio also differs with the production start time under CO2 injection. When CO2 has been injected for 10 hours before production begins, the competitive adsorption of CO2 and CH4 is sufficient, and the gas recovery ratio is higher than when gas production starts at the same time as CO2 injection. The reason is that when CO2 has been injected for only a short time, the CO2 concentration (i.e., the CO2 partial pressure) in the shale reservoir is low and the amount of CH4 displaced is very small. After sufficient competitive adsorption, CO2 spreads deep into the shale matrix and displaces more CH4. At the same time, the shale volume expands after adsorbing a large amount of CO2, so that pores containing adsorbed CH4 open again and CO2 displaces the CH4 inside the shale, which increases the recovery ratio.
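The competitive displacement described by equation (1) is easy to explore numerically. A minimal sketch of the extended Langmuir calculation for a binary CH4/CO2 system, with illustrative (not measured) Langmuir parameters:

# Extended Langmuir isotherm (equation 1) for a binary CH4/CO2 system.
# All parameter values are illustrative only, not measured shale data.

def extended_langmuir(v_l, p_l, p, y):
    """Adsorbed amount of each component (cm3/g) at total pressure p (MPa).
    v_l: Langmuir volumes; p_l: Langmuir pressures; y: mole fractions."""
    denom = 1.0 + sum(p * y_j / p_lj for y_j, p_lj in zip(y, p_l))
    return [v_li * (p * y_i / p_li) / denom
            for v_li, p_li, y_i in zip(v_l, p_l, y)]

v_l = [2.5, 5.0]  # CH4, CO2 Langmuir volumes, cm3/g (CO2 adsorbs more strongly)
p_l = [4.0, 2.0]  # CH4, CO2 Langmuir pressures, MPa
p = 10.0          # reservoir pressure, MPa

# As the injected CO2 mole fraction rises, adsorbed CH4 falls (desorption).
for y_co2 in (0.0, 0.3, 0.6, 0.9):
    v_ch4, v_co2 = extended_langmuir(v_l, p_l, p, [1.0 - y_co2, y_co2])
    print(f"y_CO2 = {y_co2:.1f}: adsorbed CH4 = {v_ch4:.2f} cm3/g, "
          f"CO2 = {v_co2:.2f} cm3/g")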
Numerical simulation of CO2 displacement of shale gas [3,6]
Numerical simulations have examined the influence on the recovery ratio of the CO2 injection timing, the CO2 injection rate, and the number of fractures. The results showed that there is an optimal value for the CO2 injection parameters. Multi-stage horizontal fracturing increases the number of fractures, but it does not obviously improve the gas recovery ratio.

A conceptual model of CO2 displacing CH4 in shale showed that the displacement process can be divided into two situations: at low CO2 injection pressure, shale samples with a smaller micro-pore specific surface area and a smaller micro-pore volume ratio have a higher CO2 sequestration ability, while at higher CO2 injection pressure, shale samples with a larger micro-pore specific surface area and a smaller micro-pore volume ratio have a higher CO2 sequestration ability.

The critical temperature of CO2 is 31.1 °C, its critical pressure is 7.38 MPa, and its critical density is 0.448 g/cm3. When the temperature and pressure of the shale gas reservoir are greater than the critical temperature and pressure, the injected CO2 reaches the supercritical state. Supercritical CO2 has excellent characteristics different from those of conventional liquids and gases: it can improve the development of reservoir pores and fissures and significantly improve reservoir permeability. With these characteristics, supercritical CO2 can improve single-well production and the gas recovery ratio of shale gas reservoirs.

First, the viscosity of supercritical CO2 is low, its diffusion coefficient is larger than that of CH4, and, most importantly, its surface tension is zero. It therefore flows very easily into the reservoir pores and is able to enter any space larger than the CO2 molecule. Under the effect of an external force, supercritical CO2 can effectively displace free CH4 in tiny pores and fractures. Second, supercritical CO2 has a strong solvating ability: it can dissolve pollutants in the near-wellbore area, reduce the flow resistance, and increase shale reservoir permeability, which benefits shale gas production. When the ambient temperature and pressure are well above the CO2 critical conditions, the density of CO2 approaches that of a liquid; the property differences between supercritical CO2 and shale gas then increase, and the convective diffusion effect weakens. Supercritical CO2 displacing shale gas thus resembles a piston-like displacement process under high pressure and temperature, which increases the displacement efficiency. In addition, the viscosity of supercritical CO2 is much higher than that of shale gas, and at higher pressure the displacement efficiency improves as the viscosity difference increases.

Technical prospects and suggestions for further research
Compared with traditional fracturing technology, this technology has the advantages of water saving, environmental protection, and a simple process, and it has broad development prospects. However, at present the main research effort focuses on laboratory tests and numerical simulation. In order to realize industrial application as soon as possible, the following research still needs to be strengthened:
1) The adsorption and desorption performance of shale gas is influenced by many factors. At present, most studies examine a single factor independently; the effects of multiple factors are rarely studied. In the future, it is necessary to study the combined effects of the various factors under stratigraphic conditions.
2) The CO2 injection pressure, CO2 injection temperature, and CO2 injection timing for shale gas reservoirs with different characteristics need to be studied in order to determine reasonable CO2 displacement parameters.
3) One or two representative shale gas reservoirs should be selected for pilot tests to verify the laboratory and numerical simulation research and to form a series of technologies adapted to the development of shale gas reservoirs with different depths, temperatures, and pressures.
4) It needs to be studied whether shale gas reservoirs are still suitable for CO2 geologic sequestration after fracturing measures have been implemented.
5) Due to the corrosive effects of CO2 and the risk of CO2 leakage, the study of risk identification and risk countermeasures needs to be strengthened.

Fig. 1 Isothermal adsorption curves of CO2 and CH4 at 35 °C for the same sample
2018-12-17T20:40:45.046Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "8f34cf72db443de1843386ac36f0a592ad18192f", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/28/e3sconf_icaeer2018_04002.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8f34cf72db443de1843386ac36f0a592ad18192f", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Geology" ], "extfieldsofstudy": [ "Environmental Science" ] }
250697274
pes2o/s2orc
v3-fos-license
Thyroid disease-specific quality of life questionnaires - A systematic review

Abstract
Introduction: Thyroid diseases are very common and rarely life-threatening. One of the main therapeutic goals is an improvement in quality of life, making it important to measure in clinical and research settings. The aim of this systematic review is to provide an overview of the currently available thyroid-specific quality of life questionnaires with regard to their validation quality, in order to make recommendations for clinical use, with a special focus on German questionnaires.

Methods: A systematic literature search was performed in PubMed, Google Scholar and the Cochrane Library. A total of 904 studies were identified. After excluding duplicates, non-English- or German-language texts, full texts that were not freely available and studies with irrelevant content, 64 studies reporting on 16 different questionnaires were included in the analysis.

Results: Four questionnaires concerned benign thyroid diseases (ThyPRO, ThyPRO-39, Thy-R-HRQoL and Thy-D-QoL), six malignant thyroid diseases (THYCA-QoL, ThyCa-HRQoL, EORTC-Thy34, MDASI-Thy, QOL-Thyroid and ThyCAT), and six endocrine orbitopathy (GO-QOL, GO-QLS, TED-QOL, STED-QOL, TAO-QoL and Ox-TED). Only five questionnaires were at least developed, if not validated, in German, and five were developed in more than two languages.

Conclusions: The ThyPRO and the ThyPRO-39 are the best-evaluated questionnaires for benign thyroid diseases. Alternatively, in hypothyroid patients, the adequately validated Thy-D-QoL can be used. For malignant thyroid diseases, the choice should be made individually, as all six questionnaires (THYCA-QoL, ThyCa-HRQoL, EORTC-Thy34, MDASI-Thy, QOL-Thyroid and ThyCAT) have different strengths and weaknesses. The GO-QOL is the best-validated questionnaire in endocrine orbitopathy. However, the TED-QOL is also suitable as a short screening questionnaire for these patients.

| INTRODUCTION
Thyroid diseases are very common in the general population, and their prevalence increases with age. 1 Common thyroid diseases include hypo- and hyperthyroidism, nodular goitre, thyroid cancer and autoimmune disorders such as Hashimoto thyroiditis or Graves' disease with and without endocrine orbitopathy. 2 Due to the simple and widely available diagnostic tools, thyroid diseases are often detected early, although not every thyroid disease requires therapy. 3 However, there is a consensus that quality of life (QoL) is negatively influenced by thyroid dysfunction, both in hyperthyroidism and hypothyroidism, and one of the main aims in the therapy of thyroid dysfunction should be at least preserving or, ideally, improving QoL. 4-6 For this reason, the measurement of health-related QoL has become an important issue of interest, and many instruments have been developed to measure this outcome parameter. QoL is often defined as a multidimensional subjective construct containing the dimensions of general health and physical, psychological and social functioning. It can best be measured by patients themselves in the form of questionnaires using patient-reported outcomes (PROs). 7 Typical domains in QoL questionnaires include anxiety, impaired social life or overall quality of life. In thyroid disease-specific questionnaires, these more general domains are usually complemented by disease-specific ones: based on the underlying thyroid disease, such questionnaires often contain domains like goitre symptoms, eye symptoms or tiredness.
Establishing relevant domains in a standardized manner should be part of the development process of each questionnaire. However, a crucial weak point of some questionnaires is that this development step is missing, together with the lack of a comprehensive assessment of measurement properties, such as validity and reliability, which prevents generalizability and comparability. 8 Based on these shortcomings, the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative has developed a guideline for systematic reviews of PROs and a checklist for the evaluation of studies reporting on the development of PROs. 9-12 In 2016, a systematic review was conducted with regard to the quality of thyroid-specific PROs. 13 This review judges the quality of the 14 thyroid-specific QoL questionnaires available at the time and emphasizes the need for high-quality and standardized reporting of the development of thyroid-specific QoL questionnaires. However, since then, new questionnaires have been developed. Therefore, the aim of this systematic review is, on the one hand, the presentation of current thyroid disease-specific QoL questionnaires with regard to validity and reliability. On the other hand, this review focuses on the clinical usability of the respective questionnaires in order to make recommendations for clinical practice, especially with regard to validated questionnaires in the German language, and in order to identify gaps regarding specific questions.

| METHODS
At all times during the preparation of this systematic review, the PRISMA guidelines were followed.

| Study selection, inclusion and exclusion criteria
The process of study selection and inclusion was performed by two reviewers independently (C.B. and V.U.). In the first step, the total number of studies was reduced by excluding duplicates. Then, all titles and subsequently all abstracts were screened and checked for relevance. Relevant studies were read in their entirety. All studies for which no abstract or no complete text was available were excluded. Only studies in the English or German language were included. Furthermore, only studies with human study populations were considered. In addition, studies that used only general QoL questionnaires, such as the SF-36 or the EORTC-QLQ-C30 questionnaire, were excluded, as were studies that used questionnaires that did not measure QoL. Finally, studies that used a thyroid-specific questionnaire but contained little information were also not analysed if other studies about the same questionnaire presented more information. All guidelines, reviews and meta-analyses were excluded, but their references were screened for further relevant studies. Consequently, only studies that reported on the development of a thyroid-specific QoL questionnaire were included. After this selection process, the authors C.B. and V.U. conferred about their results, and any differences were resolved by consensus. The remaining studies were read and analysed by both authors independently, and relevant information was extracted using an Excel spreadsheet. Again, any discrepancies were resolved by consensus. Finally, the retrieved studies were categorized into the following three groups: (1) benign thyroid disease questionnaires, (2) malignant thyroid disease questionnaires and (3) endocrine orbitopathy questionnaires.
| Extracted information
The following information was extracted from the relevant studies, if available:
• Author(s)

This produced 220 studies, which were then read by both aforementioned authors and independently analysed with regard to the inclusion and exclusion criteria, i.e., studies that assessed the QoL of patients with thyroid disease through thyroid-specific questionnaires. After further consensus, the selection was narrowed to 71 studies, discussing the development of 16 different, relevant questionnaires. The questionnaires that did not fit the inclusion criteria were excluded because they either were more concerned with symptoms, therapy satisfaction or anxiety (ThySRQ, ThyTSQ, HCQ and WSCI-T) or were not specific enough (the EORTC questionnaires). The NEI-VFQ-25 was also excluded, because even from the abstract it was obvious that the questionnaire, albeit thoroughly validated, was not useful in a research or clinical setting. Four of the remaining 16 questionnaires were assigned to benign thyroid diseases, including hypothyroidism and Graves' disease. Six questionnaires were assigned to malignant thyroid diseases, and six questionnaires were assigned to the third category, 'endocrine orbitopathy'. In the following sections, the identified questionnaires are presented in more detail.

| Benign thyroid disease QoL questionnaires
The following section describes the four questionnaires for benign thyroid diseases. An overview is presented in Table 1.

| ThyPRO
The ThyPRO (Thyroid Patient Reported Outcome) questionnaire was developed and validated by Watt et al. 15 It consists of 85 items, which are divided into a total of 13 scales and one single-item scale. Each item has five response options on a Likert scale ranging from '0 = no symptoms or problems' to '4 = severe symptoms or problems', based on the period of the last 4 weeks. The higher the score, the more strongly QoL is affected. The ThyPRO was developed based on a systematic review, 5 interviews with patients and professionals 15 and cognitive interviews after operationalizing the problems into items. 16 The preliminary questionnaire was adjusted after analysing construct validity by 'multitrait scaling' analysis (convergent and discriminant validity) and reliability by Cronbach's alpha. 17 Subsequently, clinical validity was evaluated by 'known-groups' analysis and reliability by test-retest analysis of the final questionnaire. 15 The dimensionality of the scales was confirmed by confirmatory factor analysis, and the extent of differential item functioning was tested by ordinal regression. 18,19 Sensitivity to changes after clinically relevant therapies was tested and compared with the generic quality of life questionnaire SF-36. 20 The ThyPRO was originally developed in Danish and English but has been translated and cross-culturally validated in many other languages such as Dutch, Indian, Italian, Serbian and Swedish. 21

| ThyPRO-39
The ThyPRO-39 questionnaire is the short version of the ThyPRO and has 39 items. 14,27 These are divided into 12 scales and one single item. The median response time is 4 min, which is shorter than that of the long version at 14 min. The development of the short version was divided into the steps of item selection, scale scoring and validation.
Items were excluded for which missing scores had previously been frequent (such as impaired sex life), which did not conform to item response theory, or for which differential item functioning or cross-cultural weaknesses had occurred in the previous validation studies of the long version. An additional score comprising seven scales covering mental and social well-being and functioning was created, and the degree of agreement between the short and long scales was assessed using agreement plots and intraclass correlations. Effect sizes and validity indices for response to the change in therapy were calculated for validation. Clinical validity was tested by the ability to discriminate between clinical patient groups. Test-retest reliability was also collected and checked against the long version. Each scale can be evaluated individually, but there is also an additional score that combines seven scales. The short version of the ThyPRO can assess QoL in many different benign thyroid diseases, and in many different languages, for instance German, Romanian, Greek or Spanish. 22,23,26,28

| Thy-R-HRQoL
The Thy-R-HRQoL (Thyroid-Related Health-Related Quality of Life) questionnaire was created by Kaniuka-Jakubowska et al. and is suitable for assessing QoL in euthyroid goitre. 29 It consists of a total of 49 questions divided into 7 domains. The answer options range from '1 = definitely no' to '6 = definitely yes'. Consequently, QoL decreases as the score increases. In terms of content, three of the domains reflect the influence of the disease on the shape of the throat, dyspnea and performing the social role, and three others reflect the severity of subjective symptoms such as difficulty breathing and foreign body sensation.

TABLE 1 Overview of the questionnaires developed for benign thyroid diseases

| Thy-D-QoL
Each of the domains is rated on how important the aspect is to the patient, from '3 = very important' to '0 = not at all important'. The two questions in a domain can then be multiplied to give a weighted domain score, which can range from '−9 = maximum negative' to '+3 = maximum positive'. However, the 18 domains can also be combined into an overall average weighted score. The lower the score, the worse the QoL. For internal validation, Cronbach's alpha was determined with a value of 0.949. A forced one-factor analysis was also performed, which confirmed that all domains can be combined into an average score. The focus of this questionnaire is clearly on QoL rather than symptoms, as the authors also developed a questionnaire for symptoms of hypothyroidism only, the Thyroid Symptom Rating Questionnaire (ThySRQ), and one for the effects of L-thyroxine therapy, the Thyroid Therapy Satisfaction Questionnaire (ThyTSQ). 31

| Further questionnaires for benign thyroid diseases
In addition to the questionnaires mentioned above, further questionnaires for the assessment of QoL in benign thyroid diseases were found in the literature search. Some of these were not validated and were only developed for one specific study, or capture only a partial aspect of QoL, such as physical and mental symptoms. Four studies measured QoL in subclinical and manifest hypothyroidism with non-validated questionnaires designed for the authors' own studies. 33

| Malignant thyroid disease QoL questionnaires
The following section describes the six questionnaires for malignant thyroid diseases. An overview is presented in Table 2. The focus here is more on psychological and functional complaints, compared with the THYCA-QoL, which focuses on symptoms.
This questionnaire was also developed as a specific extension for thyroid cancer patients to the general EORTC-QLQ-C30 questionnaire. 46 Thus, the response period, the response options and also the scoring are the same as for the EORTC and THYCA-QoL questionnaires. The questionnaire is based on a literature review, focus group discussions and an assessment of the problems for relevance by patients and experts. Initially, a preliminary version was constructed as a pilot test. Subsequently, the resulting questionnaire was analysed with respect to scaling, validation and reliability. The items refer to the past week and can be transformed into a score from 0 to 100. A high score correlates with a high symptom burden and is associated with low QoL. Since the beginning, patients from different regions of the world have been involved in the process of questionnaire development. As a result, it is already available in 15 languages: Arabic, Chinese, Dutch, English, French, German, Greek, Hebrew, Hindi, Japanese, Italian, Polish, Portuguese, Spanish and Tamil. The questionnaire was based on two literature reviews and structured interviews with patients and professionals. The problems identified were rated by them for relevance and importance and shortened to a list of 47 items. This provisional item list was tested and evaluated with debriefing interviews. Criteria previously established according to EORTC guidelines determined which items were retained, modified or excluded. In addition, items were categorized into hypothetical scales, and Cronbach's alpha and item-scale correlations were evaluated. Psychometric testing to validate the final questionnaire is still to be conducted in the fourth phase of development.

| ThyCAT
The special feature of the ThyCAT is that it is not a classical questionnaire but a computer adaptive test to measure the QoL of patients suffering from thyroid cancer. It was developed in English by Aschebrook-Kilfoy et al. 54

| Further questionnaires for thyroid cancer
The questionnaire of Emmanouilidis et al. has a focus on QoL after thyroidectomy and radioablation therapy. 56

| QoL questionnaires for endocrine orbitopathy
The following section describes the six questionnaires for endocrine orbitopathy. An overview is presented in Table 3.

| STED-QoL
Qualitative analyses of focus group discussions were conducted to create the STED-QoL. The resulting questionnaire was pilot-tested and modified according to psychometric analyses. For validation, a factor analysis was performed to confirm unidimensionality, and a Rasch analysis was performed to test the 'item response theory'. Furthermore, the ability of the questionnaire to discriminate between different disease activities was tested using ANOVA analysis. Evaluations of reliability have not yet been performed.

| TAO-QoL
The TAO-QoL (Thyroid Associated Ophthalmopathy Quality of Life) questionnaire was developed by Tehrani et al. 67 The aim was to develop a questionnaire for German patients with endocrine orbitopathy which also measures QoL after surgical therapy. It contains 90 items and was designed in collaboration with ophthalmologists and endocrinologists. Four response categories ranging from '1 = maximum satisfaction' to '4 = minimum satisfaction' are available. From all answers, the average can be calculated for a total score, where low values indicate a high QoL. Validation was indirectly demonstrated by a correlation between the QoL scores and clinical parameters (Hertel value). However, a correlation value was not available.
Reliability, measured by Cronbach's alpha, was low, with a value of 0.63.

| Ox-TED Quality of Life Score
The Ox-TED (Oxford Thyroid Eye Disease) Quality of Life Score was created by Insull et al. 68 The questionnaire consists of seven questions about the influence of the disease, the therapy and the changed appearance on general QoL and on daily activities. Each question can be answered on a scale from '1 = does not bother' to '10 = very bothersome'. All scores are added together for evaluation; the total score therefore ranges from 7 to 70. The greater the score, the greater the influence on QoL. Information on validity and reliability is not available.

| Further questionnaires for endocrine orbitopathy
Two more questionnaires could be identified that did not have validity and reliability testing. Finamor et al. created a 10-item questionnaire called GO-HRQL with three response categories ('0 = not impaired', '0.5 = somewhat impaired' and '1 = very impaired'). 69 The score ranges from '0 = minimal' to '10 = maximum' impact on QoL. It includes psychosocial aspects, such as change in appearance and influence on self-esteem and social contacts, and visual function aspects, such as walking or reading. The second questionnaire was developed by Sisson et al. 70 It consists of four items with the four response categories '0 = none' to '3 = strong', which ask about eye pain, changed appearance and visual acuity.

| DISCUSSION
The aim of this systematic review was to provide an overview of currently available thyroid-specific QoL questionnaires with a focus on the quality of the studies, considering the validation and reliability process. In contrast to the work of Wong et al., 13 the focus is not so much on the assessment of the quality of the questionnaires, as this is described clearly and in detail there. Instead, the focus is more on the clinical applicability of the questionnaires and the identification of gaps regarding specific questions. In this systematic review, a total of 16 questionnaires could be identified, along with others that capture only partial aspects of QoL. Since the assessment of QoL depends on the underlying thyroid disease, the questionnaires were divided into three main groups. For benign thyroid disease, four specific questionnaires were identified. Of these, three have been evaluated in sufficient quality to be used for studies: the ThyPRO, the ThyPRO-39 and the Thy-D-QoL. 14,15,30 The ThyPRO questionnaire is well-validated. Among the questionnaires for malignant thyroid diseases, the QOL-Thyroid 51,52 was the first to be developed and has been used by many studies. Its use is supported by the fact that its content covers not only physical but also psychological, social and spiritual aspects. The ThyCAT is the only computer adaptive test. A big advantage is that it is very short, with less than 2 min of answer time for 10 questions. Disadvantages are that it requires a smartphone or computer to access the software, and that reliability has not been tested. In terms of content, it is based on the QOL-Thyroid. No test-retest analysis was performed on any of the questionnaires. In addition, criterion validity was only examined for the MDASI-Thy. Similarly, response times and completion rates are an important aspect in evaluating the questionnaires; response times were only reported for the ThyCAT and the EORTC-Thy34, so a comparison with the other questionnaires in this respect is not possible. In conclusion, each of the six available questionnaires has strengths but also weaknesses.
A recommendation for a single questionnaire cannot be given, as individual consideration should be given to which drawbacks have the least impact on use in the planned study or the clinical setting. For a quick screening, the ThyCAT is certainly a good option. If the focus is more on specific symptoms, the MDASI-Thy could be the right choice. If different dimensions of QoL or a healthy control group are required, the EORTC questionnaire with a specific module will be a good choice. 47 If this module is sufficiently validated in the future, it will probably become the gold standard for assessing QoL in malignant thyroid disease research. Of the six specific questionnaires that were identified on the topic of QoL in endocrine orbitopathy, only three are useful for clinical and research use. The TAO-QoL and Ox-TED do not have sufficient quality for use, and the STED-QoL was developed exclusively for Asian patients. The GO-QOL questionnaire by Terwee et al. is the most commonly used disease-specific QoL questionnaire. The GO-QLS, on the other hand, focuses on the impact of impaired vision and less on consequences of altered appearance. In contrast to these two questionnaires, the three items of the TED-QoL are less specific and more superficial but quick to answer. Therefore, the TED-QoL is well suited for a quick screening of general limitations of QoL or for studies in which QoL is measured as a secondary outcome parameter. A comprehensive analysis of the aspects affecting QoL is not possible with the TED-QoL, in contrast to the GO-QoL and the GO-QLS. Response times and completion rates for these three questionnaires are reported by Fayers et al. 65 The completion times are short, at 3 min or less for all three questionnaires. The GO-QOL is the best-validated questionnaire: reliability and external, construct and criterion validity are high. An advantage over the TED-QOL and GO-QLS is that the GO-QOL is the only questionnaire for which longitudinal validity has been confirmed; thus, a comparison of QoL before and after therapies is possible with the GO-QOL. Over time, several reviews have been published on the assessment of QoL in endocrine orbitopathy. 71-74

There are certain limitations to this review. Firstly, we did not provide a quality appraisal per se or an analysis of bias. However, as this is not a typical systematic review, we did not deem it necessary. Also, the tables detailing the different questionnaires and our comments with regard to the different questionnaires in the discussion section provide enough data and information to allow the reader to make an informed decision on which questionnaire might be relevant for them. Secondly, we did not include questionnaires dealing only with the aspect of anxiety. The respective questionnaires are mentioned, but since anxiety is only one aspect influencing the multi-faceted QoL, we deemed it beyond the scope of this review to go into detail there. Our review indicates that, for malignant thyroid diseases and for endocrine orbitopathy, there is currently still some work to be done to establish a clinically usable and rigorously validated questionnaire. In the case of malignant diseases, this will probably no longer be an issue in the foreseeable future, once the thyroid-specific EORTC questionnaire is available. However, as mentioned above, the currently available questionnaires for endocrine orbitopathy are not sufficiently validated or do not cover all aspects of thyroid-specific QoL.
Thus, future research should cover this currently existing gap. For the assessment of QoL in patients with endocrine orbitopathy, the GO-QOL is best validated. As a short screening questionnaire in the clinic or for questions in which QoL is a secondary outcome, the TED-QoL seems to be an appropriate alternative.

ACKNOWLEDGEMENTS
Open access funding enabled and organized by ProjektDEAL.

FUNDING INFORMATION
This research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.

CONFLICT OF INTEREST
The authors have no relevant financial or nonfinancial interests to disclose.

ETHICAL APPROVAL
The conducted research is not related to either human or animal use.
2022-07-21T06:16:22.234Z
2022-07-20T00:00:00.000
{ "year": 2022, "sha1": "81969683d5439b2dbdacf7b6756613a04576782e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "5e37a50605543a854815eddf45c629bb2e24b85b", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
209316720
pes2o/s2orc
v3-fos-license
Phase 2 study of efgartigimod, a novel FcRn antagonist, in adult patients with primary immune thrombocytopenia

Abstract
Primary immune thrombocytopenia (ITP) is an acquired autoimmune bleeding disorder, characterized by a low platelet count (<100 × 10⁹/L) in the absence of other causes associated with thrombocytopenia. In most patients, IgG autoantibodies directed against platelet receptors can be detected. They accelerate platelet clearance and destruction, inhibit platelet production, and impair platelet function, resulting in increased risk of bleeding and impaired quality of life. Efgartigimod is a human IgG1 antibody Fc-fragment, a natural ligand of the neonatal Fc receptor (FcRn), engineered for increased affinity to FcRn, while preserving its characteristic pH-dependent binding. Efgartigimod blocks FcRn, preventing IgG recycling, and causing targeted IgG degradation. In this Phase 2 study, 38 patients were randomized 1:1:1 to receive four weekly intravenous infusions of either placebo (N = 12) or efgartigimod at a dose of 5 mg/kg (N = 13) or 10 mg/kg (N = 13). This short treatment cycle of efgartigimod in patients with ITP, predominantly refractory to previous lines of therapy, was shown to be well tolerated, and demonstrated a favorable safety profile consistent with Phase 1 data. Efgartigimod induced a rapid reduction of total IgG levels (up to 63.7% mean change from baseline), which was associated with clinically relevant increases in platelet counts (46% of patients on efgartigimod vs 25% on placebo achieved a platelet count of ≥50 × 10⁹/L on at least two occasions, and 38% vs 0% achieved ≥50 × 10⁹/L for at least 10 cumulative days), and a reduced proportion of patients with bleeding. Taken together, these data warrant further evaluation of FcRn antagonism as a novel therapeutic approach in ITP.

| INTRODUCTION
Primary immune thrombocytopenia (ITP) is an acquired autoimmune bleeding disorder characterized by a low platelet count (<100 × 10⁹/L) in the absence of other causes or disorders associated with thrombocytopenia. 1-3 The low platelet count increases the risk of skin and mucosal bleeding, gastrointestinal bleeding complications and, rarely, serious intracranial hemorrhages. 2,4,5
Patients may suffer from depression and fatigue 6 as well as side effects of existing therapies, impairing their quality of life. 7-12 Current therapeutic approaches include non-specific immunosuppression (eg, steroids and rituximab), inhibition of platelet clearance (eg, splenectomy, intravenous immunoglobulin [IVIg], anti-D globulin, and the recently FDA-approved Syk inhibitor fostamatinib 13) or stimulation of platelet production (eg, thrombopoietin receptor agonists [TPO-RAs]). 4,14 Splenectomy remains the only treatment that provides sustained remission off therapy for one year or longer in a high proportion of patients. 3 Autoantibodies in ITP, which are predominantly of the IgG class, mediate pathogenic actions by targeting surface glycoproteins (GP) expressed on platelets and megakaryocytes, the progenitor cells of platelets. 15,16 Detectable in most patients, they can opsonize platelets, resulting in clearance by splenic macrophages, and induce platelet apoptosis, 17 complement-dependent lysis 18 or desialylation of platelets with Fc-independent liver clearance. 19 Moreover, they can inhibit megakaryocyte proliferation and differentiation, resulting in diminished platelet production. 20-22 Recently, it has been reported that some anti-GP antibodies interfere with platelet functionality, inhibiting platelet aggregation 23 and blood clot formation. 24 The majority of antiplatelet antibodies are directed against GPIIb/IIIa and GPIb/IX, 25,26 but additional targets have been identified. 14 The central role of autoantibodies in the pathogenesis is further illustrated by the occurrence of ITP in infants born to mothers with ITP, due to placental transfer of autoantibodies, 27 and by the historical use of IgG-depleting treatments like immunoadsorption and plasmapheresis, which led to a reduction of platelet-associated autoantibodies 28 and increased platelet counts. 29 The neonatal Fc receptor (FcRn) is the central regulator of IgG homeostasis, rescuing IgGs from lysosomal degradation, prolonging IgG half-life, and promoting tissue distribution of IgGs. 30,31 Albumin is also recycled by FcRn, but binds at a site distinct from that of IgGs. 32 Efgartigimod is a human IgG1 antibody Fc-fragment. 33 This natural ligand of FcRn has been engineered with ABDEG mutations, located in the CH2 and CH3 domains of the Fc fragment, to increase affinity for FcRn whilst preserving its characteristic pH-dependent binding. Due to its increased affinity for FcRn at both acidic and neutral pH, efgartigimod outcompetes IgGs for binding to FcRn, resulting in accelerated degradation of endogenous IgGs. 30,34,35 In healthy volunteers (NCT03457649), efgartigimod was well tolerated and induced a rapid reduction of total IgGs and all IgG subtypes. 33 A Phase 2 study in patients with myasthenia gravis, an IgG autoantibody-mediated neuromuscular condition (NCT02965573), showed similar tolerability and IgG reduction, associated with clinically and statistically significant improvements on efficacy scales. 36 Targeted reduction of autoantibodies through FcRn blockade may prevent their pathogenic actions and represents a novel treatment modality in ITP. We investigated the safety and efficacy of efgartigimod in adult patients with primary ITP in a randomized, double-blinded, placebo-controlled Phase 2 study (NCT03102593).
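The recycling mechanism outlined above can be illustrated with a deliberately simple calculation. The following is a minimal sketch of a toy one-compartment IgG turnover model; all rate constants and the production rate are assumptions chosen for illustration, not fitted clinical values:

import math

production = 0.34  # assumed IgG synthesis rate, g/L per day
k_cat = 0.1        # assumed lysosomal catabolism rate constant, 1/day
rescue_frac = 0.7  # assumed fraction of internalized IgG normally rescued by FcRn

def effective_elimination(fcrn_blocked: float) -> float:
    # Catabolism acts only on IgG that is not rescued by FcRn; blocking the
    # receptor removes part of the rescue and accelerates elimination.
    return k_cat * (1 - rescue_frac * (1 - fcrn_blocked))

for blocked in (0.0, 0.5, 0.9):
    k_eff = effective_elimination(blocked)
    steady_state_igg = production / k_eff  # g/L at steady state
    half_life = math.log(2) / k_eff        # days
    print(f"FcRn blockade {blocked:.0%}: IgG ~ {steady_state_igg:.1f} g/L, "
          f"half-life ~ {half_life:.1f} days")

With these assumed numbers, 90% blockade roughly triples the elimination rate and lowers steady-state IgG by about two-thirds, which is of the same order as the reductions reported in this study.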
| Study design and treatment intervention
In this randomized, double-blinded, placebo-controlled Phase 2 study (Figure S1), patients were randomized 1:1:1 to receive four weekly doses of either placebo or efgartigimod, at a dose of 5 mg/kg or 10 mg/kg body weight, administered as an intravenous infusion. Patients were followed for up to 21 weeks (an initial eight-week period extended to 21 weeks after protocol amendment, Figure S1). After an additional protocol amendment, implemented part way through the study, patients were given the option of entering an open-label treatment period.

| Patients
Thirty-eight patients were randomized in 19 study centers in Ukraine and seven countries in Europe. The study included patients aged 18 to 85 years, with confirmed primary ITP according to the American Society of Hematology guidelines, 3 and an average of two platelet count measurements during screening of <30 × 10⁹/L (with no single reading >35 × 10⁹/L). Concurrent ITP therapy (ie, oral corticosteroids, oral immunosuppressants, and/or TPO-RA) was permitted during the study; it had to be at a stable dose and dosing frequency for at least four weeks prior to screening and maintained during the study. Additionally, patients with a total IgG level <6 g/L at screening were excluded. The presence of antiplatelet antibodies was not an inclusion criterion. All patients provided written informed consent prior to the commencement of any study-related procedures.

| Safety and efficacy assessments
The primary outcome was safety, assessed throughout the course of the study, including vital signs, electrocardiogram parameters, physical examination abnormalities, and clinical laboratory assessments. Treatment-emergent adverse events (TEAEs) were coded according to the Medical Dictionary for Regulatory Activities version 19.1. Secondary outcomes included platelet count responses and bleeding assessments. Other outcomes were the evaluation of pharmacodynamic (PD) and pharmacokinetic (PK) parameters, and immunogenicity. Measurements of circulating and platelet-bound autoantibodies were performed at Sanquin Diagnostic Laboratory using a commercially available solid-phase ELISA according to the manufacturer's instructions (PakAutoAssay, Immucor GTI Diagnostic, Inc, USA). 37

| Statistical analyses
This study was exploratory and not powered to address any predefined hypothesis. Safety was assessed using the safety analysis set. The TEAEs were described for each treatment arm by preferred term.

| RESULTS
Study demographics and baseline characteristics were generally comparable across the treatment groups (Table 1). Twenty-eight (73.7%) patients were classified as chronic (more than 12 months from diagnosis), eight (21.1%) as persistent (between 3 and 12 months from diagnosis), and two (5.3%) as newly diagnosed (within 3 months of diagnosis). Median duration of ITP was 4.82 years (range 0.1-47.8).

| Clinical pharmacology
Efgartigimod at 5 and 10 mg/kg induced a rapid reduction of total IgG levels (Figure S3), up to a maximum mean change from baseline of 60.4% with efgartigimod 5 mg/kg and 63.7% with efgartigimod 10 mg/kg.

| Safety
Nine (69.2%) patients treated with efgartigimod at 5 mg/kg, 11 (84.6%) with efgartigimod at 10 mg/kg, and seven (58.3%) with placebo experienced at least one TEAE; TEAEs were mainly mild or moderate in severity (Table 2). No deaths were reported. No clinically relevant changes in vital signs, electrocardiogram parameters, physical examination, or clinical laboratory assessments (eg, albumin) were observed.
One (7.7%) patient treated with efgartigimod at 10 mg/kg experienced a worsening of ITP leading to drug discontinuation. This serious TEAE was the only TEAE with CTCAE severity grade 4 (ie, life threatening) and was considered unlikely related to efgartigimod.
| Efficacy
Both efgartigimod-treated groups achieved a higher maximum mean platelet count change from baseline compared to the placebo group (figure: arrows on the X-axis indicate time points of treatment administration), with patients achieving platelet counts ≥50 × 10 9 /L on at least two occasions during the double-blind period (Table S1). Eight out of 12 (66.7%) patients achieved platelet counts at or above this threshold.
| Bleeding-related events
The incidence, location and severity of any bleeding symptoms were recorded using the World Health Organization (WHO) bleeding scale, and the ITP-specific bleeding assessment tool (ITP-BAT) (Figures S6A and S6B, respectively). 38 The proportion of patients with bleeding (total WHO >0) decreased in both efgartigimod 5 and 10 mg/kg groups, from 46.2% at baseline to a minimum of 7.7% at day 64, and from 38.5% at baseline to a minimum of 7.7% at day 29, respectively (all timepoints shown in Figure 2). In the placebo group, the proportion of patients with bleeding decreased from 33.3% at baseline to a minimum of 25.0% at day 50.
| DISCUSSION
Targeting FcRn with efgartigimod resulted in rapid and selective IgG reduction, and a greater numerical reduction was observed in the efgartigimod 10 mg/kg group, without impacting the levels of other immunoglobulin isotypes. Additionally, the total IgG reduction did not reach the low thresholds previously reported to be associated with increased risk of infection in diseases causing hypogammaglobulinemia. 39 Notably, efgartigimod administration did not result in a reduction of albumin levels, which has been observed with some anti-FcRn monoclonal antibodies, 40,41 suggesting that the Fc fragment efgartigimod is not interfering with albumin binding or influencing the fate of FcRn. 33 Autoantibodies were identified in all patients in this study and were generally reduced following efgartigimod treatment. However, no apparent correlation with the extent of the clinical effect could be observed, which could possibly be due to the small sample size and the inherent autoantibody assay limitations in ITP. 42 Efgartigimod-treated groups achieved a higher maximum mean platelet count change from baseline compared to the placebo group. The early and substantial increase in the efgartigimod 5 mg/kg group could be explained by one patient who was receiving a stable dose of TPO-RA (eltrombopag) as concurrent ITP therapy, and whose platelet count increased to more than 500 × 10 9 /L from day 8 to 15. It will be interesting to further investigate whether there is a synergistic effect between efgartigimod and TPO-RA therapy. In this study, a high variability in onset and duration of response was observed following a short exposure to efgartigimod. This could be suggestive of differential contributions of the various pathogenic autoantibody activities across different patients. As exemplified in Figure S7A, some efgartigimod-treated patients showed a rapid increase in platelet counts, reminiscent of response times reported for anti-CD16 antibody therapy, 44 IVIg therapy or splenectomy. This suggests that in some patients, a limited reduction of autoantibody levels is sufficient to inhibit Fc gamma receptor-mediated phagocytosis of opsonized platelets by macrophages present in the liver and spleen.
Other patients showed a delayed time to response, as illustrated in Figure S7B. For those patients, a rise in platelet counts was only observed after the fourth infusion (day 22), which could indicate either that a more profound autoantibody reduction is needed, and/or that the main pathogenic action of the autoantibodies consists of impairing platelet production by the megakaryocytes in the bone marrow. In such a scenario, megakaryocyte recovery would need to take place first before platelet counts can increase. Additionally, some patients demonstrated a double platelet peak following efgartigimod treatment, as exemplified in Figure S7C, suggesting two distinct pathogenic autoantibody mechanisms with different kinetics. Interestingly, this phenomenon was also described in patients with acute ITP treated with plasmapheresis. 45 Most patients who responded to efgartigimod had a transient response. Patients benefited at both doses tested, further supporting the IgG reduction hypothesis. There were some signals that the 10 mg/kg dose may be superior, including the absence of newly diagnosed patients in this group, who may more readily respond to treatments. Additionally, two patients in the main study did not receive all four 10 mg/kg doses, potentially lowering the response rate in this cohort. Furthermore, patients whose platelet counts did not increase with efgartigimod 5 mg/kg in the main study had an increased platelet count upon treatment with efgartigimod 10 mg/kg in the open-label treatment period. Finally, a decreased incidence of bleeding, measured using the bleeding scales (total WHO and ITP-BAT scores >0), was observed in both efgartigimod-treated groups, with a numerically greater reduction in the efgartigimod 10 mg/kg group. Limitations of this signal-finding study included the small number of patients and the heterogeneity of the recruited patient population, which limited the assessment of effect in different patient profiles. Additionally, the treatment intervention was short, making efficacy analyses challenging and undermining assessment of the duration of effect and potential utility as a chronic treatment. To conclude, a short treatment cycle of four weekly infusions of efgartigimod in patients with ITP predominantly refractory to previous lines of ITP therapy was well tolerated, markedly reduced IgG levels, was associated with clinically relevant increases in platelet counts in a substantial proportion of patients, and reduced the proportion of patients with bleeding (Figure 2). This suggests that targeted IgG reduction with efgartigimod is a potential new treatment modality in primary ITP and warrants further evaluation of longer-term treatment in a larger Phase 3 study.
ACKNOWLEDGMENTS
The authors thank the study investigators, coordinators, nurses, and patients and their families for their invaluable contributions to this study. This study was sponsored by argenx. We acknowledge Thierry Cousin who was the medical monitor of the study. We also thank Paul A. Imbach for expert input during the preparation of the manuscript.
UK guidelines on the management of variceal haemorrhage in cirrhotic patients
These updated guidelines on the management of variceal haemorrhage have been commissioned by the Clinical Services and Standards Committee (CSSC) of the British Society of Gastroenterology (BSG) under the auspices of the liver section of the BSG. The original guidelines which this document supersedes were written in 2000 and have undergone extensive revision by 13 members of the Guidelines Development Group (GDG). The GDG comprises elected members of the BSG liver section, representation from the British Association for the Study of the Liver (BASL) and Liver QuEST, a nursing representative and a patient representative. The quality of evidence and grading of recommendations was appraised using the AGREE II tool. The nature of variceal haemorrhage in cirrhotic patients, with its complex range of complications, makes rigid guidelines inappropriate. These guidelines deal specifically with the management of varices in patients with cirrhosis under the following subheadings: (1) primary prophylaxis; (2) acute variceal haemorrhage; (3) secondary prophylaxis of variceal haemorrhage; and (4) gastric varices. They are not designed to deal with (1) the management of the underlying liver disease; (2) the management of variceal haemorrhage in children; or (3) variceal haemorrhage from other aetiological conditions.
1.6. Offer fresh frozen plasma to patients who have either:
▸ a fibrinogen level of <1 g/L (level 5, grade D), or
▸ a prothrombin time (international normalised ratio) or activated partial thromboplastin time >1.5 times normal (level 5, grade D).
1.7. Offer prothrombin complex concentrate to patients who are taking warfarin and actively bleeding (level 5, grade D).
1.8. Treat patients who are taking warfarin and whose upper gastrointestinal bleeding has stopped in line with local warfarin protocols (level 5, grade D).
1.9. There is insufficient evidence for the use of recombinant factor VIIa in acute variceal haemorrhage (level 1b, grade B).
INTRODUCTION
The guidelines refer closely to the Baveno V consensus statement published in 2010 1 and the 2012 NICE Guidelines on Acute Upper GI bleeding (CG141). 2 These documents are widely used and offer useful evidence-based guidance. However, we feel that owing to significant recent advances, further additions and refinements to the published guidance, with particular focus on resource implications, service development and the patient pathway, are necessary. The previously mentioned documents 1 2 do not cover all the recent advances, in particular in the field of acute variceal bleeding and the role of transjugular intrahepatic portosystemic stent shunt (TIPSS). There have also been developments and better insights into drug treatment for prevention of varices and variceal bleeding, in particular the role of non-cardioselective β blockers (NSBB).
Guideline development
These guidelines were drafted after discussions within the liver section of the British Society of Gastroenterology (BSG) and acceptance of the proposal by the Clinical Services and Standards Committee (CSSC). There followed division of sections to be researched by designated authors and an exhaustive literature review. The Baveno V consensus and NICE guidelines were closely followed, and guideline quality was assessed using the AGREE tool 3 (section 'Assessing the quality of guidelines: the AGREE II instrument'). A preliminary guideline document was drafted by the authors following discussion and, where necessary, voting by members of the Guidelines Development Group (GDG). The draft guidelines were submitted for review by the CSSC, then BSG council members. Finally, full peer review was undertaken by reviewers selected by the editor of Gut. Attempts were made to preserve the format of the original guidelines, with additional sections relating to service development, the patient pathway and pre-primary prophylaxis. The section on the management of acute variceal bleeding has been extensively rewritten to take into account recent important developments in interventional radiology, drug treatment and resuscitation.
Assessing the quality of guidelines: the AGREE II instrument
The AGREE II instrument is an accepted method for appraising clinical guidelines. 3 Six domains are listed:
Scope and purpose
The guidelines are intended for use by clinicians and other healthcare professionals managing patients with cirrhosis and gastro-oesophageal varices in light of recent guidance published by NICE 2 and the Baveno V Consensus. 1 Important subsequent developments are covered in depth due to the potential impact on clinical practice. The guidelines are primarily aimed at the management of adult patients.
Guideline development group membership and stakeholder involvement
Membership of the group includes gastroenterologists, hepatologists and interventional radiologists with nursing and patient representation.
Rigour of development
The published literature was searched using Pubmed, Medline, Web of Knowledge and the Cochrane database between October 2013 and February 2015. The GDG met through a series of teleconferences during that time. The guidelines rely considerably on consensus statements published by the Baveno V Consensus and NICE. 1 2 The style of graded recommendations is determined by the level of supporting evidence (graded level 1 to 5) as described by the Oxford Centre for Evidence Based Medicine 4 (table 1) and is as follows: A: consistent level 1 studies; B: consistent level 2 or 3 studies or extrapolations from level 1 studies; C: level 4 studies or extrapolations from level 2 or 3 studies; D: level 5 evidence or troublingly inconsistent or inconclusive studies of any level. Areas of disagreement about the recommendation grade were subjected to discussion and, if necessary, voting by members of the guidelines group. Where possible, the health benefits, side effects and risks of recommendations have been discussed. The guidelines were subject to peer review after submission for consideration of publication in Gut.
Clarity and presentation
Recommendations are intended to be specific to particular situations and patient groups; where necessary, different options are listed. Key recommendations are linked to discussion threads on a discussion forum hosted on the BSG website.
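Because the level-to-grade scheme above is mechanical, it can be expressed as a simple lookup. The sketch below is illustrative only; the mapping comes from the grading text above, while the function wrapper and names are assumptions added for illustration.

```python
# Level-to-grade mapping described above (Oxford CEBM levels 1-5 -> grades A-D).
# Caveats from the text, carried here as comments: grade B also covers
# extrapolations from level 1 studies, grade C covers extrapolations from
# level 2 or 3 studies, and troublingly inconsistent or inconclusive studies
# of any level default to grade D.
LEVEL_TO_GRADE = {1: "A", 2: "B", 3: "B", 4: "C", 5: "D"}

def recommendation_grade(level: int, consistent: bool = True) -> str:
    """Return the recommendation grade for a body of evidence."""
    return LEVEL_TO_GRADE[level] if consistent else "D"

assert recommendation_grade(1) == "A"
assert recommendation_grade(3, consistent=False) == "D"
```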
Applicability
Where necessary, we have discussed organisational changes that may be needed in order to apply recommendations. We have attempted to identify key criteria for monitoring and audit purposes.
Editorial independence and conflict of interest
Guideline group members have declared any conflicts of interest.
Scheduled review of guidelines
The proposed time for review of the guidelines is 5 years to take into account new developments. To ensure that there is a facility for feedback after publication, links to the BSG discussion forums corresponding to the particular section of these guidelines are included with this document. This facility to provide new evidence is provided to all BSG members. In accordance with the AGREE II tool, the BSG forum will provide feedback.
SERVICE DELIVERY AND DEVELOPMENT
Despite improvements in outcomes following variceal bleeding, the need to optimise the management of acute variceal bleeding is highlighted in recent publications and national reports. In a national audit, 5 variceal bleeding accounted for just over 10% of all admissions with acute GI bleeding in the UK, with two-thirds having a previous history of variceal bleeding and over 50% presenting during normal working hours. Endoscopy within 24 h of presentation was achieved in only 66% of all patients and in 70% of patients with documented cirrhosis. Most procedures were performed in the endoscopy department, with just 14% performed under general anaesthetic, despite high-risk stigmata and endoscopic therapy being required in two-thirds of cases. Notably, antibiotics were administered in only 27% of patients before endoscopy, and administration of vasoactive drugs before endoscopy was only slightly higher at 44%. Furthermore, only four patients (<1%) were referred for TIPSS, which may reflect the lack of access to interventional radiology, and the fact that the audit was conducted before the trial of early TIPSS. 6 The National Confidential Enquiry into Patient Outcome and Death (NCEPOD) report 'Measuring the units' assessed clinical management before death of 594 patients with alcoholic liver disease over a 6-month period in the UK. 7 Gastrointestinal bleeding was noted in 35% of cases, with approximately 50% having variceal bleeding. Delays in endoscopy were noted in 10% of cases, and several aspects of clinical and/or organisational care were judged to be poor or unacceptable in 18% of patients presenting with GI bleeding. There were deficiencies noted in the out-of-hours rotas for GI bleeding, with 27% of hospitals not having a dedicated out-of-hours GI bleeding service. Studies from other countries have also reported deficiencies, with delays in admission to hospital and administration of antibiotics. Two observational studies showed that access to emergency endoscopy and use of prophylactic antibiotics and vasoactive drugs were better in tertiary centres, although this did not appear to affect survival. 8 9 Acute variceal haemorrhage refractory to endoscopic and pharmacological treatments, where TIPSS is usually indicated, must be managed with appropriate resources. TIPSS is an established interventional treatment for refractory or recurrent variceal haemorrhage. It remains a highly specialised procedure, requiring adequate training and experience. Knowledge of the relevant equipment, anatomy and how to deal with any complications is essential.
It should therefore be performed in centres with adequate personnel, multidisciplinary support and the equipment required to optimise management and minimise risks. 10 Regional centres with easily accessible interventional radiology services are generally best equipped to perform this procedure. Setting up regional agreements and pathways to allow transfer of appropriate patients to hospitals that undertake TIPSS procedures is an important step. These pathways could also be used to provide emergency endoscopic management if necessary due to problems with out-of-hours endoscopic cover in smaller hospitals.
[Table 1 footnotes: definitions of study quality, reference standards, follow-up and validation criteria underpinning the Oxford CEBM levels of evidence.]
This model, referred to as a 'hub and spoke' or network model, is well established for other complex procedures and helps to expedite and streamline the process. In the NCEPOD report 'Measuring the units', just 15% of hospitals had on-site access to TIPSS, while 72% had access to TIPSS in other centres. 7 There have been significant efforts to address the need to improve the upper GI bleeding (UGIB) service. A toolkit was produced in collaboration with the BSG; Association of Upper Gastrointestinal Surgeons (AUGIS); Royal Colleges of Physicians, Radiology and Nursing; and Academy of Medical Royal Colleges. 11
The key nine service standards recommended by the document are detailed below:
1. There will be a nominated individual with the authority to ensure implementation by the contracted provider.
2. Contracted providers will ensure the minimum service is adequately resourced.
3. All patients with suspected UGIB should be properly assessed and their risk scored on presentation.
4. All patients should be resuscitated before therapeutic intervention.
5. All high-risk patients with UGIB should be endoscoped within 24 h, preferably on a planned list in the first instance.
6. For patients who require more urgent intervention, either for endoscopy, interventional radiology or surgery, formal 24/7 arrangements must be available.
7. The necessary team, meeting an agreed competency level, should be available throughout the complete patient pathway.
8. Each stage of the patient pathway should be carried out in an area with 'appropriate' facilities, equipment and support, including staff experienced in the management of UGIB.
9. All hospitals must collect a minimum dataset in order to measure service provision against auditable outcomes (case-mix adjusted as appropriate).
NICE recommendations for endoscopy provision are detailed in the section 'Management of active variceal haemorrhage' recommendations. 2 The BSG has also produced a care bundle for patients admitted with decompensated cirrhosis in light of the NCEPOD report, with a checklist approach that includes gastrointestinal bleeding. 12 Since the 2008 Darzi report, quality has become a priority for the NHS. 13 With these guidelines there is a real opportunity to introduce quality outcomes based on good clinical evidence. Furthermore, by incorporating them into the liver accreditation scheme, Liver QuEST, one can improve and assure quality in liver services across the UK. 14 Therefore a small number of quality outcome measures have been chosen and form part of the key recommendations. 15
DEFINITIONS
It is important to define the terms that should be used in the context of a variceal bleed. These are the Baveno V consensus definitions. 1
Variceal haemorrhage
Variceal haemorrhage is defined as bleeding from an oesophageal or gastric varix at the time of endoscopy, or the presence of large oesophageal varices with blood in the stomach and no other recognisable cause of bleeding. An episode of bleeding is clinically significant when there is a transfusion requirement of 2 units of blood or more within 24 h of time zero, together with a systolic blood pressure of <100 mm Hg or a postural change of >20 mm Hg and/or a pulse rate >100 bpm at time zero (time zero is the time of admission to the first hospital to which the patient is taken).
Time frame of acute bleeding
The acute bleeding episode is represented by an interval of 120 h (5 days) from time zero. Evidence of any bleeding after 120 h is the first rebleeding episode.
Failure to control active bleeding
Failure to control active bleeding is defined as death or the need to change treatment, defined by one of the following criteria: 16 17
1. Fresh haematemesis or nasogastric aspiration of ≥100 mL of fresh blood ≥2 h after the start of a specific drug treatment or therapeutic endoscopy.
2. Development of hypovolaemic shock.
3. A 30 g/L drop in haemoglobin (9% drop in haematocrit) within any 24 h period if no transfusion is given. This time frame needs to be further validated.
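Because the failure-to-control criteria above form an explicit decision rule, they can be stated compactly in code. The sketch below is a minimal illustration of the three criteria as written (the death criterion is noted in the docstring rather than modelled); the function name and parameters are assumptions for illustration, not clinical software.

```python
def failure_to_control_bleeding(fresh_haematemesis_ml: float,
                                hours_since_treatment_start: float,
                                hypovolaemic_shock: bool,
                                hb_drop_g_per_l: float,
                                transfused_in_24h: bool) -> bool:
    """Apply the three failure-to-control criteria listed above.
    Death also constitutes failure but is not modelled here."""
    # 1. Fresh haematemesis or NG aspiration of >=100 mL of fresh blood
    #    >=2 h after starting drug treatment or therapeutic endoscopy.
    criterion_1 = (fresh_haematemesis_ml >= 100
                   and hours_since_treatment_start >= 2)
    # 2. Development of hypovolaemic shock.
    criterion_2 = hypovolaemic_shock
    # 3. A 30 g/L drop in haemoglobin within any 24 h period without transfusion.
    criterion_3 = hb_drop_g_per_l >= 30 and not transfused_in_24h
    return criterion_1 or criterion_2 or criterion_3
```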
Variceal rebleeding
Variceal rebleeding is defined as the occurrence of a single episode of clinically significant rebleeding from portal hypertensive sources from day 5. Clinically significant rebleeding is defined as recurrent melaena or haematemesis in any of the following settings: 1. hospital admission; 2. blood transfusion; 3. 30 g/L drop in haemoglobin; 4. death within 6 weeks.
Early mortality
Death within 6 weeks of the initial episode of bleeding.
NATURAL HISTORY OF VARICES IN CIRRHOSIS
Development of varices
The rise in portal pressure is associated with the development of a collateral circulation, which allows the portal blood to be diverted into the systemic circulation. These spontaneous shunts occur (a) at the cardia through the intrinsic and extrinsic gastro-oesophageal veins; (b) in the anal canal, where the superior haemorrhoidal vein belonging to the portal system anastomoses with the middle and inferior haemorrhoidal veins, which belong to the caval system; (c) in the falciform ligament of the liver through the para-umbilical veins, which are the remains of the umbilical circulation of the fetus; (d) in the abdominal wall and the retroperitoneal tissues, from the liver to the diaphragm, in veins in the lienorenal ligament, in the omentum and in lumbar veins; and (e) through blood diversion from the diaphragmatic, gastric, pancreatic, splenic and adrenal veins, which may drain into the left renal vein. Numerous lines of evidence suggest that varices develop and enlarge with time. Christensen et al 18 followed up a cohort of 532 patients with cirrhosis and showed that the cumulative incidence of patients with varices increased from 12% to 90% over 12 years. In a study involving 80 patients followed up for 16 months, Cales and Pascal 19 showed that 20% of patients who did not have varices developed new varices, and 42% of patients with small varices showed definite enlargement. Czaja et al 20 also showed that the prevalence of varices increased from 8% to 13% over 5 years in a cohort of patients with chronic active hepatitis, even though they were treated with prednisolone. Merli et al, 21 in a study of 213 patients with cirrhosis with no or small varices, demonstrated that the annual rate of progression of varices was 12%. A recent database analysis by D'Amico et al 22 using a competing risk model showed that the cumulative incidence of varices at 10 and 20 years was 44% and 53%, respectively, suggesting an overestimation in previous studies not using a competing risk model. The main factors that appear to determine the development of varices are continued hepatic injury, the degree of portosystemic shunting, endoscopic appearances and portal pressure. Evidence for the role of hepatic injury is derived from studies in which varices were shown to regress with time. Baker et al 23 followed up a cohort of 115 patients with oesophageal varices and showed that varices had disappeared in nine patients, regressed in seven and remained unchanged in six. They concluded that the disappearance and regression of varices might be related to abstinence from alcohol. This observation was confirmed in a study by Dagradi, 24 who followed up a cohort of patients with alcoholic cirrhosis over 3 years and showed a reduction in variceal size in 12 of the 15 patients with alcoholic cirrhosis who stopped drinking, and an enlargement in variceal size in 17 patients who continued to drink.
On the other hand, Cales and Pascal 19 showed that regression of varices occurred in 16% of patients with alcoholic cirrhosis who continued to imbibe alcohol. This might be related to the development of large portosystemic collaterals, which decompress the portal system and reduce the risk of the development of large oesophageal varices. The degree of portosystemic shunting can be quantified by measuring the diameter of the portal veins and collaterals, and can be significant in those with gastrorenal or splenorenal shunting. 25 26 Others have shown that the presence of alcoholic cirrhosis, Child's B or C cirrhosis and red wale signs on index endoscopy predicted progression of varices. 21 Groszmann et al, 27 in a placebo-controlled randomised trial of timolol in 213 cirrhotic patients without varices, showed that a baseline hepatic venous pressure gradient (HVPG) of >10 mm Hg or a ≥10% increase in HVPG during follow-up were both predictive of the development of varices.
Diagnosis of gastro-oesophageal varices
Until recently, endoscopy has been used exclusively to diagnose varices. Non-invasive methods of screening for varices include capsule endoscopy, transient elastography and use of laboratory and radiological findings.
Endoscopy
There is universal acceptance that endoscopy is the 'gold standard' for diagnosing gastro-oesophageal varices. The main limitations are intraobserver variability in the diagnosis of small or grade I oesophageal varices (figure 1A-C). Recently, unsedated nasal gastroscopy has been found to have similar accuracy to conventional endoscopy and has the advantages of tolerability and potential cost saving, since it can be performed in the clinic setting in some institutions. 28 29 However, there are no controlled studies, and banding of varices is not possible.
Capsule endoscopy
Capsule endoscopy uses a 26 mm pill-shaped device which transmits video footage that is stored and later analysed. Patients are not sedated, but patient cooperation is essential. In a large study by de Franchis et al, 30 capsule endoscopy was compared with standard gastroscopy. The primary end point of 90% or greater concordance was not achieved. Lapalus et al, 31 in a prospective study of 120 patients, demonstrated similar results with capsule endoscopy. Therefore, capsule endoscopy cannot be considered an alternative to standard endoscopy, although it may have a role in patients who refuse gastroscopy.
Transient elastography
Transient elastography (FibroScan, Echosens, Paris, France) uses the principles of ultrasound to derive tissue stiffness by measuring the speed of propagation of a low-frequency wave, which then correlates with liver fibrosis. Vizzutti et al, 32 in a study of 61 patients with hepatitis C, showed a sensitivity for prediction of oesophageal varices of 90% using a threshold of 17.6 kPa. However, specificity was poor at 43%. A study of 298 patients found the optimal cut-off point for the prediction of oesophageal varices was 21.5 kPa (sensitivity 76% and specificity 78%). 33 In one uncontrolled study the use of transient elastography was found to be as effective as HVPG at predicting portal hypertension-related complications. 34 Therefore, the role of transient elastography in predicting varices is controversial due to the lack of consistent results and controlled studies. This modality may be more useful for predicting decompensation in patients with cirrhosis.
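Sensitivity and specificity figures like those above translate into predictive values only once a prevalence is assumed. The short sketch below shows the standard Bayes calculation; the worked numbers are illustrative assumptions, not data from the cited studies.

```python
def predictive_values(sensitivity: float, specificity: float,
                      prevalence: float) -> tuple[float, float]:
    """Return (PPV, NPV) for a binary test given prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative: elastography at the 21.5 kPa cut-off (sensitivity 76%,
# specificity 78%), assuming varices are present in half the screened cohort.
ppv, npv = predictive_values(0.76, 0.78, 0.50)
print(round(ppv, 2), round(npv, 2))  # 0.78 0.76
```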
Radiological and serum parameters
A prospective study of 311 patients with chronic hepatitis C showed that a platelet count to spleen size ratio with a threshold of 909 had positive and negative predictive values of 100% and 94%, respectively. 35 These good results have not been reproduced by others, as demonstrated in a meta-analysis. 36
Risk factors for first variceal bleeding
The factors that predispose to, and precipitate, variceal haemorrhage are still not clear. The suggestion that oesophagitis may precipitate variceal haemorrhage has been discarded. 37 Presently, the most important factors that have been held responsible include (i) pressure within the varix, (ii) variceal size, (iii) tension on the variceal wall and (iv) severity of the liver disease.
Portal pressure
In most cases, portal pressure reflects intravariceal pressure, 38 and an HVPG >10 mm Hg is necessary for the development of oesophageal varices. 27 There is no linear relationship between the severity of portal hypertension and the risk of variceal haemorrhage, although an HVPG >12 mm Hg is an accepted threshold for variceal bleeding. 39 40 However, the HVPG tends to be higher in bleeders as well as in patients with larger varices. In a prospective study comparing propranolol with placebo for the prevention of first variceal haemorrhage, Groszmann et al 41 showed that bleeding from varices did not occur if the portal pressure gradient (PPG) could be reduced to <12 mm Hg. Others have shown that a 20% reduction in portal pressure protects against further bleeding. 42 These haemodynamic goals have been accepted as the aim of pharmacological treatment of portal hypertension. It is important to appreciate that gastric varices can bleed at pressures <12 mm Hg, and the influence of the wall tension of the varix plays a greater role in the risk of bleeding. 43 A greater pressure reduction may be necessary to protect against bleeding. This is further discussed in the section 'Gastric varices'. At present, measurement of portal pressure in guiding pharmacological treatments is limited to clinical trials in the UK.
Variceal size
Published results are variable owing to the lack of a definition distinguishing between large and small varices. Small (grade I) varices tend to be narrow and flatten easily with air, whereas larger (grade 2 and 3) varices are usually broader and flatten with difficulty, if at all. Numerous studies 40 44 have shown that the risk of variceal haemorrhage increases with the size of varices. 45
Variceal wall and tension
Polio and Groszmann, 46 using an in vitro model, showed that rupture of varices was related to the tension on the variceal wall. The tension depends on the radius of the varix. In this model, increasing the size of the varix and decreasing the thickness of the variceal wall caused variceal rupture. Recently, endoscopic ultrasound and manometry have been used to estimate the wall tension of varices. 47 Endoscopic features such as 'red spots' and 'wale' markings were first described by Dagradi. 24 They have been described as being important in the prediction of variceal haemorrhage. These features represent changes in variceal wall structure and tension associated with the development of microtelangiectasias and reduced wall thickness. In a retrospective study by the Japanese Research Society for Portal Hypertension, Beppu et al 48 showed that 80% of patients who had blue varices or cherry red spots bled from varices, suggesting that this was an important predictor of variceal haemorrhage in cirrhosis.
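The in vitro findings of Polio and Groszmann are usually rationalised with Frank's modification of Laplace's law, stated explicitly below. This formulation is standard in the portal hypertension literature rather than quoted from the cited paper:

$$ T = \frac{(P_i - P_e)\,r}{w} $$

where $T$ is wall tension, $P_i$ the intravariceal pressure, $P_e$ the pressure in the oesophageal lumen, $r$ the varix radius and $w$ the wall thickness. Tension, and hence rupture risk, therefore rises when transmural pressure or radius increases or when the wall thins, which is exactly the behaviour observed in the in vitro model.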
Severity of liver disease and bleeding indices
Two independent groups prospectively assessed factors predicting first variceal haemorrhage in cirrhosis (table 2). The North Italian Endoscopic Club (NIEC) 49 reported their findings in 1988, followed in 1990 by data from the Japanese. 50 Both these studies showed that the risk of bleeding was based on three factors: severity of liver disease as measured by Child class, variceal size and red wale markings. The NIEC study showed a wide range for the risk of bleeding of 6-76%, depending on the presence or absence of the different factors. Using the same variables, the NIEC index was simplified by de Franchis et al 51 and shown to correlate with the original index. Further studies showed that the HVPG and intravariceal pressure were also independent predictors of first variceal haemorrhage when analysed in conjunction with the NIEC index. 52 53 In summary, the most important factors that determine the risk of variceal haemorrhage are the severity of liver disease, size of varices and presence of red signs. Measurement of HVPG is a useful guide for selection of patients for treatment and their response to treatment, although its predictive value does not appear to improve on the NIEC index and the presence of red wale markings. 54
Risk and mortality of first variceal bleed
Data describing the overall risk of bleeding from varices must be viewed with caution and have some pitfalls in interpretation. The natural history of patients who have varices that are diagnosed as part of their baseline investigations is different from that of patients who have complications of liver disease such as ascites and encephalopathy. Most studies do not comment on either the severity of liver disease or whether patients with alcoholic cirrhosis are continuing to drink. Both these factors have a significant effect on the risk of variceal haemorrhage. Most studies report bleeding from varices in about 20-50% of patients with cirrhosis during the period of follow-up. Baker et al 23 reported variceal bleeding in 33 of 115 patients that they followed up for a mean of 3.3 years, with a mortality of 48% from the first variceal haemorrhage. These data were confirmed by Christensen et al. 18 About 70% of the episodes of bleeding occur within 2 years of diagnosis. Recent studies demonstrate a dramatic reduction in mortality following variceal bleeding, with 6-week mortality of 20% 55 and in-hospital mortality of 15%, 5 with contributions from improved endoscopic, pharmacological and radiological therapies, notably TIPSS. Intensive care treatment has also improved, with outcomes being particularly good for those requiring minimal organ support. Analysis of the non-active treatment arms in the primary prophylaxis trials comparing propranolol with placebo shows results similar to those of the primary prophylaxis shunt trials, with most episodes of bleeding occurring within the first 2 years of follow-up. In these studies the rate of first variceal haemorrhage ranged from 22% to 61%. 56-60 This large difference in the rate of first bleed relates almost certainly to the number of patients with severe liver disease included in the study (Pascal: Child C 46%, bleeding 61%; Italian Multicenter Project for Propranolol in Prevention of Bleeding (IMPP): Child C 6%, bleeding 32%; Conn: Child C 6%, bleeding 22%). Mortality varied from 24% to 49% over 2 years (Pascal: mortality 49%; IMPP: mortality 24%; Conn: mortality 24%).
Primary prophylaxis
Since 30-50% of patients with portal hypertension will bleed from varices and about 20% will die from the effects of the first bleed, it seems rational to develop prophylactic regimens to prevent the development of, and bleeding from, these varices. However, most of the published trials do not have sufficient power to identify favourable treatment effects. Based on the expected bleeding and death rates in the control group, the minimum number of patients needed to detect a 50% reduction in bleeding would be 270, and 850 patients in each arm to detect the same reduction in mortality. A proposed algorithm for surveillance and prophylaxis of varices is shown in figure 2.
Table 2: Scoring systems for quantifying the severity of cirrhosis. Severity of liver disease can be described using the Child-Pugh score or the MELD score. The Child-Pugh score is the sum of severity scores for five variables (bilirubin, albumin, prothrombin time/INR, ascites and encephalopathy). Child-Pugh class A represents a score of ≤6, class B a score of 7-9, and class C a score of ≥10. The MELD score is a formula that includes three laboratory-based variables reflecting the severity of liver disease. It was originally used to predict short-term mortality after placement of a transjugular intrahepatic portosystemic stent-shunt for variceal bleeding. Subsequently, it has been used in selecting candidates for liver transplantation. MELD score: please use the online calculator https://www.esot.org/Elita/meldCalculator.aspx. INR, international normalised ratio.
At this time there is insufficient evidence to support treating patients without varices, or 'pre-primary prophylaxis'. A large randomised placebo-controlled trial of timolol in patients without varices but with portal hypertension (defined as HVPG >6 mm Hg) did not show any effect on the development of varices or variceal bleeding. 27 The role of drug treatment in preventing bleeding in patients with small varices is unclear. Three randomised placebo-controlled trials have studied this. Cales et al 61 showed that propranolol in patients with small, or no, varices resulted in greater development of varices. However, patients without varices were included and there was significant loss of patients to follow-up. The second trial showed that nadolol reduced variceal bleeding without survival benefit, and increased adverse events. 62 Sarin et al 63 did not show any effect with propranolol, despite a significant effect on portal pressure.
Portacaval shunts
Four trials of portacaval shunts have been published, which randomised a total of 302 patients 64-67 either to prophylactic shunt surgery or to non-active treatment. A meta-analysis of these studies showed a significant benefit in the reduction of variceal bleeding (OR=0.31, 95% CI 0.17 to 0.56) but also a significantly greater risk of hepatic encephalopathy (OR=2, 95% CI 1.2 to 3.1) and mortality (OR=1.6, 95% CI 1.02 to 2.57) in patients treated with shunt surgery. 68 At this time, there is no evidence for the use of TIPSS for primary prophylaxis. 1
Devascularisation procedures
Inokuchi 50 showed that there was a significant reduction in variceal bleeding and in mortality in patients treated with a variety of devascularisation procedures. There are, however, numerous problems with the interpretation of this study because of the use of different procedures in each of the 22 centres. These results require confirmation.
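On the sample-size point raised at the start of this section, figures such as 270 and 850 per arm come from standard two-proportion power calculations. The sketch below shows the usual normal-approximation formula; the inputs are illustrative assumptions, not the exact values the guideline authors used, so the output will not reproduce their numbers.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treated: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm to detect a difference between two proportions
    (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil((z_alpha + z_beta) ** 2 * variance
                / (p_control - p_treated) ** 2)

# Illustrative only: a 50% relative reduction from a 30% control bleeding rate.
print(n_per_arm(0.30, 0.15))  # 118 per arm at 80% power
```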
Pharmacological treatment
Non-cardioselective β blockers
The mainstay of the pharmacological approach to the primary prophylaxis of variceal haemorrhage has been NSBB. Propranolol has been shown to reduce the PPG, azygos blood flow and variceal pressure. It achieves this by causing splanchnic vasoconstriction and reducing cardiac output. There is no clear dose-related reduction in HVPG, or correlation of HVPG reduction with reduction in heart rate. 69 Observational studies have shown that a 10-12% reduction in HVPG after acute administration of propranolol was associated with reduced bleeding and hepatic decompensation. 54 70 However, HVPG monitoring is not routinely available in most centres outside of larger institutions. A meta-analysis of nine placebo-controlled randomised trials (964 patients) showed that the pooled risk difference for bleeding was −11% (95% CI −21% to −1%), and for death was −9% (95% CI −18% to −1%), in favour of propranolol. 71 Nadolol exerts similar effects on portal haemodynamics, although the effect on blood pressure may not be as pronounced. Two placebo-controlled trials 58 59 have shown reduced bleeding, although in one study this was only seen on per protocol analysis. 59 There was no effect on overall survival. Carvedilol is an NSBB like propranolol, and also a vasodilator due to α1 receptor blockade. The latter reduces portocollateral resistance and, by actions on hepatic stellate cells, leads to a reduction in intrahepatic resistance. Haemodynamic studies demonstrate a greater reduction in portal pressure with carvedilol than with propranolol, although blood pressure is reduced. 72 73 The optimum dose is 6.25-12.5 mg/day. 74 Higher doses are not more effective and are associated with more adverse events, in particular hypotension. Carvedilol at a dose of 12.5 mg/day at current UK prices is considerably cheaper than propranolol 40 mg twice a day and nadolol 80 mg/day (monthly cost £1.20, £5.62 and £5, respectively). Two RCTs of carvedilol versus variceal band ligation (VBL) in primary prophylaxis have been published. 75 76 The first study 75 showed significantly reduced bleeding in the carvedilol arm (10% vs 23%, relative hazard 0.41; 95% CI 0.19 to 0.96), with no effect on survival. The second trial, by Shah et al, 76 did not show any differences in bleeding or mortality. Compliance with VBL was better in the latter trial and, unlike the first trial, there were significantly more patients with viral hepatitis than alcoholic cirrhosis. A further study 74 assessed the effect of carvedilol in patients who were haemodynamic non-responders to propranolol, where haemodynamic response was defined as HVPG reduction to ≤12 mm Hg or by >20% of baseline after 4 weeks of treatment. Patients who were haemodynamic non-responders to, or intolerant of, carvedilol were treated with VBL. Carvedilol resulted in significantly lower variceal bleeding compared with VBL, and haemodynamic responders to carvedilol or propranolol had significantly lower mortality than those treated with VBL. It is worth noting that the study was not randomised. There have been recent suggestions, based on low-level evidence, that NSBB may result in a poorer outcome in patients with cirrhosis and refractory ascites. 77
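Before moving on, note that the haemodynamic response definition used in the carvedilol study above is an explicit rule and can be stated directly. A minimal sketch follows; the function name and inputs are assumptions for illustration only.

```python
def haemodynamic_responder(baseline_hvpg_mmhg: float,
                           follow_up_hvpg_mmhg: float) -> bool:
    """Response as defined above: HVPG falls to <=12 mm Hg,
    or by >20% of baseline, after treatment."""
    absolute_target = follow_up_hvpg_mmhg <= 12.0
    relative_fall = ((baseline_hvpg_mmhg - follow_up_hvpg_mmhg)
                     / baseline_hvpg_mmhg) > 0.20
    return absolute_target or relative_fall

print(haemodynamic_responder(18.0, 14.0))  # True: a 22% fall from baseline
```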
The 'window hypothesis' for β blockers in cirrhosis has also recently been described, suggesting that NSBB are helpful in the compensated and early decompensated cirrhotic period, but may not be helpful in very early cirrhosis, such as in a patient with no varices, and may be harmful in patients with end-stage cirrhosis with refractory ascites. 78 However, recent large observational studies question the last hypothesis, with improved survival seen in patients with refractory ascites treated with NSBB, 79 unless patients have an episode of spontaneous bacterial peritonitis. 80 Therefore, until there are further prospective controlled studies, NSBB should be continued in patients with refractory ascites. The clinician must carefully monitor haemodynamic parameters such as blood pressure, and discontinue NSBB in patients with hypotension and renal impairment, as can occur after an episode of spontaneous bacterial peritonitis. Other potentially severe adverse events with NSBB include symptomatic bradycardia, asthma and cardiac failure. Less severe side effects such as fatigue, insomnia and sexual dysfunction may also result.
Isosorbide mononitrate
Interest in the use of vasodilators such as isosorbide mononitrate (ISMN) developed after the demonstration that it reduces portal pressure as effectively as propranolol, 81 but has subsequently waned. A trial comparing ISMN with propranolol showed no significant difference between these agents. 82 Another randomised trial of ISMN versus placebo did not show any difference between the two arms. 83 Therefore, ISMN is not recommended as monotherapy in primary prophylaxis.
β Blocker and ISMN
The combination of nadolol and ISMN has been compared with nadolol alone in an RCT. The combination therapy reduced the frequency of bleeding significantly, but no significant differences were detected in mortality. 84 However, Garcia-Pagan et al, 85 in a double-blind RCT of propranolol plus ISMN versus propranolol plus placebo, failed to show any differences between the two arms. Combination therapy is associated with more side effects.
Proton pump inhibitors
A placebo-controlled randomised trial reported reduced bleeding and mortality with rabeprazole after eradication of varices. 86 However, the study had a heterogeneous population, with VBL performed for both primary and secondary prophylaxis, and small numbers (n=43), limiting the validity of the conclusions. Furthermore, there was no arm comparing proton pump inhibitors with NSBB. The use of proton pump inhibitors in patients with cirrhosis and ascites was associated with an increased risk of spontaneous bacterial peritonitis in a large retrospective study. 87 This was not confirmed in a larger prospective non-randomised study. 88 However, a recent prospective observational study has shown proton pump inhibitor use to be associated with increased mortality in cirrhosis. 89 Proton pump inhibitors are also associated with an increased risk of Clostridium difficile infection. 90 There remains continuing concern about proton pump inhibitors in patients with cirrhosis, and therefore caution should be used.
Endoscopic therapy
Variceal band ligation
VBL has been compared with NSBB in 19 trials in a recent Cochrane meta-analysis of 1504 patients. 91 Despite reduced bleeding (RR=0.67, 95% CI 0.46 to 0.98) with VBL, there was no difference in overall mortality or bleeding-related mortality. The difference in bleeding was not seen when only trials with low selection or attrition bias were included. Banding can have serious complications.
The risk of fatal banding-induced bleeding was highlighted in a meta-analysis showing reduced fatal adverse events with NSBB (OR=0.14, 95% CI 0.02 to 0.99). 92 The optimal timing of banding intervals is discussed in the section 'Secondary prophylaxis of variceal haemorrhage'. A randomised trial of 96 patients who underwent endoscopic surveillance at 6 or 3 months after eradication of varices with VBL did not demonstrate a difference in bleeding or mortality. 93 However, the trial had a heterogeneous study group of patients who underwent VBL both for primary (65%) and secondary prevention (35%).
Sclerotherapy
Nineteen trials have compared endoscopic variceal sclerotherapy with no treatment. 68 Owing to the marked heterogeneity between these studies, a meta-analysis is clinically inappropriate. 68 Sclerotherapy does not offer any benefit in combination with NSBB or VBL compared with VBL or NSBB alone, and increases iatrogenic complications such as strictures. [94][95][96] At this time sclerotherapy cannot be recommended for prophylaxis of variceal haemorrhage in patients with cirrhosis.
Recommendations: primary prophylaxis of variceal haemorrhage in cirrhosis (figure 2)
1. What is the best method for primary prophylaxis?
MANAGEMENT OF ACTIVE VARICEAL HAEMORRHAGE
The average 6-week mortality of the first episode of variceal bleeding in most studies is reported to be up to 20%. There has been considerable improvement in survival since the early 1980s, when the in-hospital mortality was 40-50%, 97 compared with 15% in a recent UK audit. 5 Such is the improvement in outcomes that a patient with Child's A cirrhosis is very unlikely to succumb to an index variceal bleed. Studies have shown the Child-Pugh score, MELD score and HVPG to be strong predictors of outcomes. [98][99][100][101][102][103] The MELD score has been shown to outperform Child's score in a recent study, with a score >19 associated with 20% 6-week mortality. 103 Furthermore, the MELD score has been shown to perform as well as the traditional intensive care unit scores in predicting mortality in patients admitted to intensive care in the UK. 104 MELD >18, active bleeding, and transfusion of >4 units of packed red blood cells have been shown to be predictors of mortality and early rebleeding. 99 101 102 HVPG has also been shown to predict outcome when measured at 2 weeks after a bleed, 44 and a value of ≥20 mm Hg when measured acutely within 48 h has been shown to provide significant prognostic information. 100 However, this technique is not used routinely in the management of patients around the world, and substitution of clinical data in the latter study was shown to provide the same clinical predictive value. 100 These scoring systems are not purely academic; they allow the referring clinician to identify those patients with a high chance of rebleeding who should be transferred to a specialist centre offering, for instance, TIPSS before they rebleed. Nonetheless, probably the most important step in the management of acute variceal haemorrhage is the initial resuscitation, assessed according to standard 'ABC' practice, together with protection of the airway to prevent aspiration. Although early endoscopy allows for accurate diagnosis of the bleeding site and decisions about management (figure 3), therapeutic intervention in acute variceal bleeding can be initiated, safely in most cases, before diagnostic endoscopy.
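For reference, the MELD score used prognostically above is computed from three laboratory values. The widely used UNOS formulation is shown below; the guideline itself defers to an online calculator, so treat this as the standard published formula rather than the guideline's own definition:

$$ \mathrm{MELD} = 3.78\,\ln\big(\text{bilirubin [mg/dL]}\big) + 11.2\,\ln(\mathrm{INR}) + 9.57\,\ln\big(\text{creatinine [mg/dL]}\big) + 6.43 $$

In that formulation, each laboratory value below 1.0 is set to 1.0 and creatinine is capped at 4.0 mg/dL before taking logarithms, and the result is conventionally rounded to the nearest integer.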
As similar efficacy is demonstrated with pharmacological treatment as with sclerotherapy, the former should be first-line therapy. 99 β Blockade should not be started in the acute setting, and those already taking β blockers as prophylaxis should probably stop taking them for 48-72 h in order that the patient's physiological response to blood loss can be allowed to manifest.
General considerations
Patient evaluation
The majority of patients with a variceal bleed will be sufficiently stable to enable a full history and examination to take place. A history of alcohol excess and/or intravenous drug use should be sought, and may become particularly relevant if the patient has withdrawal symptoms after admission. Comorbidity is important when estimating risk and deciding on the use of vasopressors. The following risk factors doubled mortality after an acute variceal bleed in one US study: older age, comorbidities, male gender and not undergoing a gastroscopy within 24 h. 105 A full examination is helpful for the important negatives as much as the positives. Baseline observations should include the temperature, as infection is a serious complication with significant mortality. Confusion may be present because of encephalopathy, intoxication with alcohol or drugs, or withdrawal from alcohol or drugs. The patient should be on a continuous blood pressure and pulse monitor, and their haemodynamic status should be recorded. An oxygen saturation monitor is helpful. Stigmata of chronic liver disease and concurrent jaundice provide insight into the current status of a patient's liver, and also give warning of potential further decompensation if significant bleeding persists (see scoring systems above). Pneumonia must be actively excluded. Evidence of ascites requires a diagnostic tap to search for infection. Investigations should include full blood count, coagulation profile, liver and renal function, and blood group and save and cross-match. Blood and urine should also be cultured. An ultrasound scan later in the admission is helpful to identify subclinical ascites, flow in the portal vein and any obvious emergence of a hepatocellular carcinoma (HCC).
Location of patient
A decision must be made as to where the patient is best managed. Variceal bleeding is unpredictable, generally occurs in patients with significant liver disease and is associated with significant mortality. Hence, a high-dependency unit is usually the most appropriate initial location, although a properly staffed 'gastrointestinal bleeding bed' may be appropriate. If a patient is vomiting blood, or there is a perceived risk of a haemodynamically unstable patient having blood in the stomach, then the patient must be intubated before endoscopy, and return to an intensive care or high-dependency unit will be necessary until extubation.
Volume resuscitation and blood products
Intravenous access (two 16-18G cannulae) should have been secured on admission with a reported GI bleed. Further intravenous access may be necessary. In patients with poor venous access, advanced liver disease, or renal failure associated with their liver disease, central venous access may be helpful in guiding fluid infusions. However, the drawbacks include the risk of the procedure and a potential source of infection. Therefore, there is no absolute requirement for a central line, and no evidence of unequivocal benefit. Intravenous fluid resuscitation should be initiated with plasma expanders, aiming to maintain a systolic blood pressure of 100 mm Hg.
Care with monitoring is paramount in this group of patients. Overtransfusion has been shown to have a deleterious effect on outcome. In a recent single-centre RCT, a restrictive transfusion policy of maintaining haemoglobin between 70 and 80 g/L improved the control of variceal bleeding (11% vs 22%, p=0.05) and lowered HVPG compared with a liberal transfusion policy, without an effect on 45-day survival. 106 However, it should be noted that these results were from a single Spanish centre, which was a tertiary unit for variceal bleeding, where all patients underwent endoscopy within 6 h. Nonetheless, a restrictive transfusion policy has been recommended for some time 1 and there is now good evidence to support not transfusing a stable patient with a haemoglobin of ≥80 g/L. However, under-resuscitation should also be avoided, and while goal-oriented fluid replacement has generally not been useful in an intensive therapy unit setting, a venous saturation >70% remains an easily measurable target with some evidence to support it. 107 Interpretation and management of the clotting profile is challenging in liver disease, where there is usually a balanced deficiency of both procoagulant and anticoagulant factors. 108 The NICE guidelines recommend activation of a hospital's massive transfusion policy when there is major haemorrhage, platelet support when the platelet count is <50 × 10 9 /L, and clotting factor support when the international normalised ratio (INR) is >1.5 times normal. 2 There is no evidence for the use of 'prophylactic' clotting or platelet support to reduce the risk of rebleeding. There is insufficient evidence to support the routine use of tranexamic acid or recombinant factor VIIa. 109
Pharmacological treatment
The two major classes of drugs that have been used in the control of acute variceal bleeding are vasopressin or its analogues (either alone or in combination with nitroglycerine) and somatostatin or its analogues. Terlipressin is the only agent that has been shown to reduce mortality in placebo-controlled trials. However, in trials comparing terlipressin, somatostatin and octreotide, no difference in efficacy was identified in a systematic review 110 or in a recent large RCT. 111 Prophylactic antibiotics can result in a similar survival benefit following acute variceal bleeding.
Vasopressin
Vasopressin reduces portal blood flow, portal systemic collateral blood flow and variceal pressure. It does, however, have significant systemic side effects, such as an increase in peripheral resistance and reductions in cardiac output, heart rate and coronary blood flow. In comparison with no active treatment, the pooled results of four randomised trials showed that it reduced failure to control variceal bleeding (OR=0.22, 95% CI 0.12 to 0.43), although survival was unaffected. 68 Meta-analysis of five trials comparing sclerotherapy with vasopressin has shown a significant effect on reduction in failure to control bleeding (OR=0.51, 95% CI 0.27 to 0.97), with no effect on survival. 68
Vasopressin with nitroglycerine
The addition of nitroglycerine enhances the effect of vasopressin on portal pressure and reduces cardiovascular side effects. 112 Meta-analysis of three randomised trials comparing vasopressin alone with vasopressin and nitroglycerine showed that the combination was associated with a significant reduction in failure to control bleeding (OR=0.39, 95% CI 0.22 to 0.72), although no survival benefit was shown. 68
Pharmacological treatment The two major classes of drugs that have been used in the control of acute variceal bleeding are vasopressin or its analogues (either alone or in combination with nitroglycerine) and somatostatin or its analogues. Terlipressin is the only agent that has been shown to reduce mortality in placebo-controlled trials. However, in trials comparing terlipressin, somatostatin and octreotide, no difference in efficacy was identified in a systematic review 110 and in a recent large RCT. 111 Prophylactic antibiotics can result in a similar survival benefit following acute variceal bleeding. Vasopressin Vasopressin reduces portal blood flow, portal systemic collateral blood flow and variceal pressure. It does, however, have significant systemic side effects, such as an increase in peripheral resistance, and reduction in cardiac output, heart rate and coronary blood flow. In comparison with no active treatment, the pooled results of four randomised trials showed that it reduced failure to control variceal bleeding (OR=0.22, 95% CI 0.12 to 0.43), although survival was unaffected. 68 Meta-analysis of five trials comparing sclerotherapy with vasopressin has shown a significant effect on reduction in failure to control bleeding (OR=0.51, 95% CI 0.27 to 0.97), with no effect on survival. 68 Vasopressin with nitroglycerine The addition of nitroglycerine enhances the effect of vasopressin on portal pressure and reduces cardiovascular side effects. 112 Meta-analysis of three randomised trials comparing vasopressin alone with vasopressin and nitroglycerine showed that the combination was associated with a significant reduction in failure to control bleeding (OR=0.39, 95% CI 0.22 to 0.72), although no survival benefit was shown. 68 Terlipressin Terlipressin is a synthetic analogue of vasopressin, which has an immediate systemic vasoconstrictor action followed by portal haemodynamic effects due to slow conversion to vasopressin. In a Cochrane meta-analysis of seven placebo-controlled trials, terlipressin was shown to reduce failure to control bleeding (RR=0.66, 95% CI 0.55 to 0.93) and also to improve survival (RR=0.66, 95% CI 0.49 to 0.88). 113 In the same meta-analysis, there was no difference between terlipressin and vasopressin, balloon tamponade or endoscopic therapy in failure to control bleeding or survival. 113 The role of terlipressin in combination with VBL is explored in the section 'Endoscopic therapy in combination with pharmacological therapy'. The recommended dose of terlipressin is 2 mg IV every 4 h, although many units reduce the dose to 6 hourly as it may cause peripheral vasoconstriction, which manifests as painful hands and feet. While 5 days of IV treatment has been advocated in the Baveno V guidelines, 1 this prolonged treatment has not been shown to have a survival benefit, and for pragmatic reasons many units will stop treatment shortly after satisfactory haemostasis. In a randomised trial, terlipressin given for 24 h after satisfactory haemostasis with VBL for oesophageal variceal bleeding was as effective as 72 h of treatment. 114 In patients intolerant of terlipressin, or in countries where terlipressin is not available, alternatives should be considered. Somatostatin and octreotide Somatostatin causes selective splanchnic vasoconstriction and reduces portal pressure and portal blood flow. 115 Octreotide is a somatostatin analogue. The mechanism of action of these two agents is not clear. Inhibition of glucagon reduces vasodilatation, rather than there being a direct vasoconstrictive effect, and post-prandial gut hyperaemia is also reduced. The actions of octreotide on hepatic and systemic haemodynamics are transient, making continuous infusion necessary. Octreotide is given as a 50 μg bolus followed by an infusion of 25-50 μg/h. Somatostatin is given as a 250 μg intravenous bolus followed by an infusion of 250 μg/h. Somatostatin and octreotide have been shown to be as effective as terlipressin in acute variceal bleeding in a meta-analysis. 110 Seo et al, 111 in a large RCT of 780 patients comparing these three agents, failed to show a difference in treatment success (range 83.8-86.2%), rebleeding (range 3.4-4.4%) and mortality (range 8-8.8%). A low systolic blood pressure at presentation, high serum creatinine level, active bleeding at the emergency endoscopy, gastric variceal bleeding and Child-Pugh grade C were independent factors predicting 5-day treatment failure. 111
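For quick reference, the vasoactive regimens quoted above can be collected into one structure. This is a minimal sketch for illustration only; local protocols and the product literature take precedence, and the key names are ours.

# Vasoactive drug regimens as quoted in the text above (illustrative only;
# consult local protocols and product literature before use).
VASOACTIVE_REGIMENS = {
    "terlipressin": {"bolus": "2 mg IV",
                     "schedule": "every 4 h (6 hourly in some units)"},
    "octreotide": {"bolus": "50 ug", "infusion": "25-50 ug/h continuous"},
    "somatostatin": {"bolus": "250 ug IV", "infusion": "250 ug/h continuous"},
}

for drug, regimen in VASOACTIVE_REGIMENS.items():
    print(f"{drug}: {regimen}")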
Antibiotics Antibiotics that provide Gram-negative cover are one of the interventions which positively influence survival in variceal haemorrhage, as shown in a Cochrane meta-analysis of 12 placebo-controlled trials (RR=0.79, 95% CI 0.63 to 0.98). 116 Antibiotics were also shown to reduce bacterial infections (RR=0.43, 95% CI 0.19 to 0.97) and early rebleeding (RR=0.53, 95% CI 0.38 to 0.74). 116 Therefore, short-term antibiotics should be considered standard practice in all cirrhotic patients who have a variceal bleed, irrespective of the presence of confirmed infection. Third-generation cephalosporins, such as ceftriaxone (1 g IV, daily), have been shown to be more effective at reducing Gram-negative sepsis than oral norfloxacin, 117 but the choice of antibiotics must be dictated by local resistance patterns and availability. Proton pump inhibitors One RCT compared a short course of proton pump inhibitors with vasoconstrictor therapies after haemostasis in acute variceal bleeding. 118 Despite larger ulcers noted in the vasoconstrictor arm, there were no differences in bleeding or survival. Nearly 50% of patients had ascites, which might have implications in light of the reports of increased incidence of spontaneous bacterial peritonitis mentioned earlier. Endoscopic therapy Endoscopy should take place within 24 h of admission, and earlier if there is excessive bleeding, based on low-level evidence. 105 While many guidelines and reviews suggest that endoscopy should be carried out within 12 h, the only study that examined the influence of timing on outcome failed to demonstrate any advantage of endoscopy before 12 h. 119 The optimal time is after sufficient resuscitation and pharmacological treatment, with the endoscopy performed by a skilled endoscopy team, in a suitably equipped theatre environment and with airway protection. Airway protection is essential where the risk of aspiration is high, and affords the endoscopist time for thorough evaluation, including complete clot aspiration and controlled application of treatment, including tamponade if required. The endoscopy team must comprise an experienced endoscopy nurse acquainted with the equipment necessary for endoscopic therapy of varices, and a skilled endoscopist, competent in using banding devices and deployment of balloon tamponade. Variceal band ligation This technique is a modification of that used for the elastic band ligation of internal haemorrhoids. Its use in humans was first described in 1988. 120 A meta-analysis of seven trials comparing VBL with sclerotherapy in acute bleeding showed that VBL reduced rebleeding from varices (OR=0.47, 95% CI 0.29 to 0.78), reduced mortality (OR=0.67, 95% CI 0.46 to 0.98) and resulted in fewer oesophageal strictures (OR=0.10, 95% CI 0.03 to 0.29). 121 The number of sessions required to obliterate varices was lower with VBL (2.2 fewer sessions (95% CI 0.9 to 3.5)). Sclerotherapy Sclerotherapy has been replaced by VBL and should no longer be offered as standard of care in acute variceal haemorrhage. Other endoscopic measures In an RCT, cyanoacrylate offered no benefit over VBL, with the additional risk of embolisation and a trend towards increased rebleeding with cyanoacrylate. 122 Haemostatic powder (TC-325; Hemospray; Cook Medical, USA) has been described in a small study of nine patients who received endoscopic spray treatment for acute variceal bleeding. The study reported no rebleeding within 24 h and no mortality at 15 days. 123 Endoscopic therapy in combination with pharmacological therapy The role of combining vasoactive drugs with endoscopic therapy (VBL or sclerotherapy) was reported in a meta-analysis of eight trials. 124 Combination therapy resulted in better initial control of bleeding (RR=1.12, 95% CI 1.02 to 1.23) and 5-day haemostasis (RR=1.28, 95% CI 1.18 to 1.39), without any difference in survival. Adverse events were similar in both groups. Two RCTs have compared VBL with sclerotherapy, each in combination with vasoactive agents, in acute variceal bleeding. 125 126 Lo et al 125 used vasopressin and found that VBL resulted in better 72 h haemostasis (97% vs 76%, p=0.009), with fewer complications (5% vs 29%, p=0.007). Villanueva et al used somatostatin, and reported lower failure to control acute bleeding with VBL (4% vs 15%, p=0.02), with fewer serious complications (4% vs 13%, p=0.04).
Overall survival was similar in both trials. 125 126 Balloon tamponade Balloon tamponade is highly effective and controls acute bleeding in up to 90% of patients, although about 50% rebleed when the balloon is deflated. 127 128 It is, however, associated with serious complications, such as oesophageal ulceration and aspiration pneumonia, in up to 15-20% of patients. Despite this, it may be a life-saving treatment in cases of massive uncontrolled variceal haemorrhage pending other forms of treatment. An appropriately placed Sengstaken-Blakemore tube allows for resuscitation, safe transportation and either repeat endoscopy or radiological shunting in a patient with a stable cardiovascular system. The oesophageal balloon is rarely required, must never be used on its own and should be used only if there is continuing bleeding despite an adequately inflated gastric balloon correctly placed and with appropriate tension. Placement of the tube endoscopically or over a guide wire might reduce the risk of complications, especially oesophageal rupture. Removable oesophageal stents The SX-Ella Danis stent (ELLA-CS, Hradec Kralove, Czech Republic) is a removable covered metal mesh stent placed endoscopically in the lower oesophagus without radiological screening. It has no role in the management of gastric variceal bleeding. These stents can be left in situ for up to 2 weeks, unlike the Sengstaken-Blakemore tube, which should be removed after a maximum of 24-48 h. 129 130 No published controlled trials have compared this modality with balloon tamponade. Transjugular intrahepatic portosystemic stent-shunt Several uncontrolled studies have examined the role of salvage bare TIPSS in acute variceal bleeding. In a review of 15 studies, control of bleeding was achieved in 90-100%, with rebleeding in 6-16%. 131 Mortality varied between 75% (in hospital) and 15% (30 day). It is important to appreciate that sclerotherapy was used as first-line endoscopic therapy in most of these studies. Long-term follow-up of a study that compared TIPSS with H-graft portacaval shunts in patients for whom non-operative management had failed suggested that H-grafts were a useful method of reducing portal pressure and had a significantly lower failure rate (p=0.04), but showed no significant improvement in overall survival despite a benefit seen in Child's A and B disease. 132 A recent RCT compared emergency portocaval surgery with bare TIPSS within 24 h of presentation with acute oesophageal variceal bleeding in unselected cirrhotic patients. Emergency portocaval surgery resulted in better outcomes for long-term bleeding control, encephalopathy and survival (p<0.001). 133 Before wider application of surgery for acute variceal bleeding, more data are needed in light of the recent adoption of covered stents. There has also been a general, established change in practice towards using covered TIPSS stents (polytetrafluoroethylene (PTFE)) rather than bare metal stents, with evidence to support this change. In randomised controlled studies, these stents were shown to have higher primary patency rates than bare stents, without significant differences in survival, and with the potential for a reduced incidence of hepatic encephalopathy. 134 135 There is, however, growing evidence from two RCTs for the earlier use of TIPSS in selected patients stratified by HVPG, Child-Pugh class and active bleeding, and not just its use as a salvage option. 6 136
Monescillo et al 136 randomised patients presenting with acute oesophageal variceal haemorrhage to bare TIPSS or standard of care if the HVPG was ≥20 mm Hg within 24 h of admission. Significantly reduced treatment failure, as defined by failure to control acute bleeding and/or early rebleeding (12% vs 50%), was seen, along with improved survival (62% vs 35%), in the patients randomised to undergo a TIPSS procedure. However, the standard of care was sclerotherapy and not combination endoscopic and pharmacological treatment. This limitation and the lack of availability of HVPG measurement in most centres meant this trial did not have a significant impact on clinical practice. Garcia-Pagan et al 6 selected patients with active bleeding and Child's B cirrhosis, or patients with Child's C cirrhosis (Child's score <14), for randomisation to early PTFE-covered TIPSS within 72 h or standard of care with VBL and pharmacological treatment. This has shown encouraging results, with reduced risk of treatment failure (3% vs 50%) and improved survival (86% vs 61% at 1 year), yet without increased risk of hepatic encephalopathy. The results were supported by an observational study from the same group, although a survival benefit was not seen. 137 Furthermore, a recent well-conducted observational study did not demonstrate such high survival rates with early TIPSS, with 1-year survival of 67%, which was similar to that of patients given endoscopic and pharmacological treatments only. 138 Therefore, larger multicentred RCTs need to be undertaken to further evaluate the role of early TIPSS. It is important to make the distinction between salvage TIPSS and early TIPSS to prevent rebleeding.
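The patient-selection criteria from these two trials are concrete enough to express as a check. The sketch below simply restates the published criteria (HVPG ≥20 mm Hg for Monescillo et al; Child's B with active bleeding, or Child's C with score <14, for Garcia-Pagan et al); the function and field names are ours, and this is illustrative, not a clinical tool.

def early_tipss_candidate(hvpg_mmhg=None, child_class=None,
                          child_score=None, active_bleeding=False):
    # Monescillo et al: HVPG >= 20 mm Hg measured within 24 h of admission.
    if hvpg_mmhg is not None and hvpg_mmhg >= 20:
        return True
    # Garcia-Pagan et al: Child's B with active bleeding, or Child's C (<14).
    if child_class == "B" and active_bleeding:
        return True
    if child_class == "C" and child_score is not None and child_score < 14:
        return True
    return False

print(early_tipss_candidate(child_class="B", active_bleeding=True))  # True
print(early_tipss_candidate(child_class="C", child_score=12))        # True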
Liver transplantation This is probably appropriate only for patients who bleed while awaiting liver transplantation, although studies comparing VBL or TIPSS placement with urgent liver transplantation in this situation need to be done. Liver transplantation is an exceedingly rare option for the vast majority of patients, both because it is not commonly available and because of shortages and delays in organ procurement. No controlled trials of liver transplantation in uncontrolled/active bleeding are available. Recommendations for the control of variceal bleeding in cirrhosis are given below and in figure 3. SECONDARY PROPHYLAXIS OF VARICEAL HAEMORRHAGE β Blockers A meta-analysis of 12 trials comparing propranolol or nadolol 139 with no active treatment showed a significant reduction in rebleeding but no significant reduction in mortality. 140 The greater reduction in portal pressure with carvedilol compared with propranolol has been described in the section 'Primary prophylaxis' of this guideline. Nitrates The addition of ISMN to NSBB has been shown to reduce variceal rebleeding compared with NSBB alone, although no survival benefit was seen. 141 In addition, adverse events leading to drug withdrawal were more common in the group receiving combined drug treatment. A meta-analysis of ISMN alone or with either NSBB or endoscopic therapy reported that there was no mortality benefit from combining nitrates and NSBB compared with NSBB alone. 142 Side effects of ISMN include dizziness and headache. Owing to the side effects and the relative lack of data, ISMN is not commonly used in clinical practice. A recent RCT of 121 patients reported carvedilol to be similar to combined ISMN and NSBB therapy in the prevention of variceal rebleeding and mortality, although severe adverse events were less common with carvedilol. 143 Simvastatin A recent abstract of a multicentre RCT of 158 patients reported a survival benefit (91% vs 78%, p=0.03) from adding simvastatin to VBL and NSBB, compared with placebo, VBL and NSBB, as treatment for the prevention of variceal rebleeding. 144 There was no difference in rebleeding, and the survival benefit was restricted to Child A and B patients. Serious adverse events were similar in both groups. More data are required to investigate this interesting observation of a survival benefit from simvastatin in this situation, which may relate to its effects on hepatocellular function, fibrosis and portal pressure. Proton pump inhibitors A double-blind randomised placebo-controlled trial showed that pantoprazole reduced the size of ulcers in patients who underwent VBL. However, the total number of ulcers and other outcomes were similar in the two groups. 145 Endoscopic therapy VBL has been accepted as the preferred endoscopic treatment for the prevention of variceal rebleeding, with lower rates of rebleeding, mortality and complications than sclerotherapy. 146 147 The time interval between VBL sessions needed to achieve eradication of varices is debatable. However, a recent RCT comparing monthly with biweekly VBL after initial haemostasis with VBL in 70 patients suggested that there were fewer post-VBL ulcers in the monthly group (11% vs 57%; p<0.001). 148 Variceal recurrence, rebleeding and mortality were similar in both groups. Two meta-analyses showed there is no evidence that the addition of sclerotherapy to VBL improves clinically relevant outcomes, including variceal rebleeding and death, and the combination led to higher stricture rates. 149 150 Endoscopic therapy versus drug therapy VBL has been reported to be more effective than combined NSBB and ISMN drug therapy. 151 However, an 8-year follow-up study of this RCT found that although VBL was superior in reducing variceal rebleeding, survival rates were significantly higher in the group treated with combined drug treatment. 152 Other studies have found no superiority of VBL over combined drug therapy for prevention of variceal rebleeding or mortality. 153 154 A recent small multicentre RCT reported carvedilol to be similar to VBL in the prevention of variceal rebleeding, with a trend in favour of survival with carvedilol (73% vs 48%, p=0.110). 155 Several meta-analyses have compared drug therapy with VBL in the prevention of variceal rebleeding. One meta-analysis of six RCTs showed no significant difference in variceal rebleeding rates when comparing VBL alone with combined NSBB and ISMN therapy. However, all-cause mortality was significantly higher in patients treated with VBL (RR=1.25, 95% CI 1.01 to 1.55). 156 Three meta-analyses comparing drug therapy (NSBB alone or with ISMN) with endoscopic therapy alone reported no difference in variceal rebleeding or mortality. [157][158][159] Endoscopic+drug therapy versus either alone Numerous studies and several meta-analyses have compared combined endoscopic and drug therapy with monotherapy (endoscopic or drugs alone) in the prevention of variceal rebleeding. A meta-analysis of 23 trials assessing sclerotherapy or VBL combined with NSBB reported that combination therapy reduced rebleeding more than either endoscopic therapy or NSBB alone (pooled RR=0.68, 95% CI 0.52 to 0.89), although no difference in mortality was detected. 160 A meta-analysis of fewer studies suggested no significant difference in rebleeding between combined drug and VBL therapy and either alone.
157 A further meta-analysis reported reduced variceal rebleeding (RR=0.601, 95% CI 0.440 to 0.820) but similar mortality with combined drug and endoscopic therapy versus endoscopic therapy alone. 159 Another meta-analysis of 17 trials (14 using sclerotherapy and three using VBL) reported that combined endoscopic and NSBB therapy reduced rebleeding (OR=2.20, 95% CI 1.69 to 2.85) and overall mortality (OR=1.43, 95% CI 1.03 to 1.98) compared with endoscopic therapy alone. 161 A further meta-analysis of 10 RCTs suggested that combination therapy reduces the risk of rebleeding from oesophageal varices compared with VBL (RR=0.68, 95% CI 0.45 to 0.93) or medical treatment (RR=0.60, 95% CI 0.43 to 0.84). 162 This meta-analysis included seven trials comparing combination therapy with VBL and three trials comparing combination therapy with drug treatment. Combined VBL and drug therapy gave a survival benefit when compared with VBL alone (RR=0.52, 95% CI 0.27 to 0.99), but not when compared with medical treatment alone. Another recent meta-analysis assessed five studies comparing VBL alone with combination VBL and drug therapy, and four studies comparing drugs alone or combined with VBL. 163 This found that adding drugs to VBL reduced rebleeding (RR=0.44, 95% CI 0.28 to 0.69) with a trend towards reduced mortality, but adding VBL to drug treatment did not significantly affect either rebleeding or mortality. The meta-analyses are not entirely consistent, although it would appear that combined VBL and drug treatment might improve survival, but is likely to increase adverse effects compared with VBL alone. There appears to be less clear benefit from combined VBL and drug treatment compared with drug treatment alone. Transjugular intrahepatic portosystemic stent-shunt Three meta-analyses comparing TIPSS with endoscopic treatment (sclerotherapy or VBL) have been published. [164][165][166] The results are similar, with the largest meta-analysis of 12 RCTs showing that (bare) TIPSS reduces variceal rebleeding (OR=0.32, 95% CI 0.24 to 0.43), but is associated with an increased risk of encephalopathy (OR=2.21, 95% CI 1.61 to 3.03). 166 No differences in survival were seen. [164][165][166] Despite the problem of shunt insufficiency and the cost of shunt surveillance, TIPSS has been shown to be more cost-effective than endoscopic therapy. 167 A meta-analysis of six studies comparing TIPSS (both bare and covered) with or without variceal embolisation showed that adjuvant embolisation during TIPSS reduced rebleeding (OR=2.02, 95% CI 1.29 to 3.17) with similar shunt dysfunction, encephalopathy and mortality rates. 168 However, owing to heterogeneity of the study methodology, the authors recommended larger randomised studies using covered stents to confirm the findings. Generally, TIPSS placement using PTFE-covered stents 134 is recommended for patients for whom endoscopic and pharmacological treatment for the prevention of variceal rebleeding fails. 1 The evidence for undertaking an 'early' TIPSS procedure 6 in patients shortly after a first variceal bleed has been discussed in the "Management of acute variceal bleeding" section of this guideline. Surgery A meta-analysis demonstrated that non-selective shunts reduced rebleeding compared with no active treatment or sclerotherapy, at the expense of increased encephalopathy, with no survival benefit. 68 Non-selective shunts resulted in similar outcomes compared with distal splenorenal shunts. 
68 Extended follow-up of a randomised study comparing portocaval shunt surgery with sclerotherapy following acute variceal bleeding reported better long-term bleeding control (100% vs 20%, p<0.001) and improved survival (5-year survival 71% vs 21%, p<0.001) in the portocaval shunt arm. 169 Distal splenorenal shunt surgery was compared with TIPSS in a multicentre RCT including 140 patients with Child's A and B cirrhosis. 170 Results showed similar rebleeding and survival, but higher rates of shunt dysfunction and re-intervention in the TIPSS group, although covered stents were not used. A follow-up study suggested that TIPSS was more cost-effective. 171 Portosystemic shunts (total surgical, distal splenorenal or bare TIPSS) were compared with endoscopic therapy for variceal rebleeding in a Cochrane database systematic review. 172 Twenty-two trials incorporating 1409 patients were included. All shunt therapies reduced rebleeding (OR=0.24, 95% CI 0.18 to 0.30) at the expense of higher rates of encephalopathy (OR=2.09, 95% CI 1.20 to 3.62), with no survival advantage. TIPSS was complicated by a high incidence of shunt dysfunction. Laparoscopic splenectomy plus VBL was also compared with TIPSS for variceal rebleeding in a recent non-randomised trial of 83 patients. 173 This reported surgery plus VBL to be better than TIPSS in preventing variceal rebleeding, with low rates of encephalopathy. Liver transplantation should be considered in eligible patients following a variceal bleed, as determined by the selection criteria of the country. 174 There is no clear evidence that prior shunt surgery has a significant impact on transplant outcome. 169 Recommendations for the secondary prophylaxis of variceal bleeding in cirrhosis are given below and in figure 3. Recommendations: secondary prophylaxis of variceal haemorrhage in cirrhosis (figure 3) 1. Should VBL be used in combination with NSBB? GASTRIC VARICES Natural history At first endoscopy in patients with portal hypertension, 20% are shown to have gastric varices. 175 They are commonly seen in patients with portal hypertension due to portal or splenic vein obstruction. 175 Only 10-20% of all variceal bleeding occurs from gastric varices, but the outcome is worse than with bleeding from oesophageal varices. 175 176 Gastric varices can be classified on the basis of their location in the stomach and their relationship with oesophageal varices. This classification has implications for management. The commonly used Sarin classification divides them into (a) gastro-oesophageal varices (GOV), which are associated with oesophageal varices; and (b) isolated gastric varices (IGV), which occur independently of oesophageal varices. 175 Both GOV and IGV are subdivided into two groups. Type 1 GOV are continuous with oesophageal varices and extend for 2-5 cm below the gastro-oesophageal junction along the lesser curvature of the stomach. Type 2 GOV extend beyond the gastro-oesophageal junction into the fundus of the stomach. Type 1 IGV refers to varices that occur in the fundus of the stomach, and type 2 IGV describes varices anywhere else in the stomach, including the body, antrum and pylorus. The most common type of varices seen in cirrhosis is GOV type 1. Patients who bleed from IGV are at a significantly higher risk of dying from an episode of variceal bleeding than patients bleeding from GOV. 177
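The Sarin classification just described is essentially a two-question lookup, which the following illustrative Python sketch makes explicit (the location labels are our own shorthand):

def sarin_type(with_oesophageal_varices, location):
    # location: "lesser_curve" (2-5 cm below the gastro-oesophageal junction),
    # "fundus", or "other" (body, antrum or pylorus).
    if with_oesophageal_varices:
        return "GOV-1" if location == "lesser_curve" else "GOV-2"
    return "IGV-1" if location == "fundus" else "IGV-2"

print(sarin_type(True, "lesser_curve"))  # GOV-1, the most common type in cirrhosis
print(sarin_type(False, "fundus"))       # IGV-1, with a worse outcome when bleeding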
Management of acute gastric variceal bleeding Although no studies have reported the use of vasopressors and antibiotics specifically for the initial management of gastric variceal haemorrhage, any patient with suspected variceal bleeding should be managed as described above (see section 'Management of active variceal haemorrhage'). Once endoscopy has identified the source of bleeding as gastric varices, therapeutic options include endoscopic methods, TIPSS, other radiological procedures, surgery and long-term NSBB. Splenic vein thrombosis should be considered, and appropriate investigations undertaken, in patients presenting with gastric variceal bleeding. Endoscopic therapy Endoscopic sclerotherapy Sclerotherapy has been largely replaced by VBL, and by tissue adhesives or thrombin when appropriate for gastric varices, owing to the lower complication and rebleeding rates. Endoscopic VBL Standard VBL or the use of detachable snares has been shown to control active bleeding from gastric varices, but rebleeding and recurrence rates are high. 178 179 As GOV-1 are generally considered extensions of oesophageal varices, VBL is often used to treat bleeding at this site. However, given the larger diameter and the anatomy of other types of gastric varices, and the limited data on the use of VBL in this situation, the technique is generally not recommended for these. A non-randomised study comparing cyanoacrylate with VBL for gastric variceal bleeding reported similar haemostasis rates, but lower rebleeding with cyanoacrylate (32% vs 72%). 194 Survival and complication rates were similar in both groups. In a controlled but non-randomised study comparing cyanoacrylate with sclerotherapy for gastric variceal bleeding, Oho et al 188 showed that the haemostasis rate was significantly higher in the cyanoacrylate group. Survival was also significantly greater in patients treated with cyanoacrylate. Mishra et al 187 reported a randomised study comparing cyanoacrylate injection with β blockers in the prevention of rebleeding in 67 patients with bleeding GOV-2 or IGV-1. During a median 26-month follow-up, patients in the cyanoacrylate group had significantly lower rates of both variceal rebleeding (15% vs 55%) and mortality (3% vs 25%). Treatment modality, presence of portal hypertensive gastropathy and gastric variceal size >20 mm correlated with mortality. Another recent RCT compared repeated gastric variceal obturation with or without NSBB in patients with bleeding GOV-2 or IGV-1. 182 Mortality and rebleeding rates were similar in the two groups, although adverse effects were more common in the combination group. In a non-randomised study, Lee et al 185 suggested that endoscopic ultrasound (EUS)-guided biweekly cyanoacrylate injection, versus 'on demand' injection after recurrent bleeding, led to significantly lower rebleeding (19% vs 45%) from gastric varices, although survival was similar. However, others have not confirmed this approach. 189 EUS-guided coil therapy has recently been described as having similar efficacy, but fewer adverse events, compared with cyanoacrylate injection in a small non-randomised study. 191 Binmoeller et al 180 described a new method for the management of fundal gastric varices in 30 patients, using EUS and a combination of 2-octyl-cyanoacrylate and coils. Haemostasis was achieved in 100% of patients with no procedure-related complications. Use of coils appeared to reduce the volume of cyanoacrylate required to obliterate varices.
Endoscopic injection of thrombin Injection of bovine thrombin to successfully control gastric variceal bleeding was initially described in a small cohort in 1994. 195 Subsequent series 199 reported a high rate of initial haemostasis in acute bleeding. However, failure to control bleeding or rebleeding was reported in >50%, suggesting that thrombin has a role in bridging to definitive treatment in acute bleeding. Where thrombin was used as prophylaxis, rebleeding occurred in 20%. To date, no randomised studies assessing thrombin injection for gastric variceal bleeding have been reported. New endoscopic therapies Two recent reports have described the successful use of Hemospray (Cook Medical, USA) in the management of active gastric variceal bleeding refractory to cyanoacrylate injection therapy. 200 201 In the latter case this was used as a bridge to a TIPSS procedure, 201 but in the former case TIPSS was not undertaken owing to pre-existing cardiomyopathy. 200 No rebleeding was reported in either case at 30-day follow-up. Further data on the use of haemostatic powders in gastric variceal bleeding are required. Balloon tamponade Insertion of a Sengstaken-Blakemore or Linton-Nachlas tube may sometimes help to temporarily stabilise the patient with severe gastric variceal bleeding that is uncontrolled by the standard endoscopic methods described above. 127 The Linton-Nachlas tube has been reported to have greater efficacy in gastric variceal haemorrhage in a controlled trial. 128 However, rebleeding is almost universal if another treatment modality is not instituted. Transjugular intrahepatic portosystemic stent-shunt Initial TIPSS series using bare stents reported control of active bleeding from gastric varices in almost all patients in whom the shunt was performed successfully. [202][203][204][205][206] Tripathi et al 43 described 272 patients who had a TIPSS procedure for either gastric or oesophageal variceal bleeding. They reported similar rebleeding rates after TIPSS for either gastric or oesophageal varices. Initial PPG was lower in patients with bleeding from gastric varices. In addition, mortality was lower in those patients with initial PPG >12 mm Hg who had TIPSS for gastric compared with oesophageal variceal bleeding. Shunt insufficiency and encephalopathy rates were similar in both groups. The authors suggested aiming to reduce the PPG to <7 mm Hg in gastric variceal bleeding. Lo et al 207 undertook a randomised trial in 72 patients comparing TIPSS with cyanoacrylate injection in the prevention of gastric variceal rebleeding. Control of active bleeding had been achieved with cyanoacrylate in all patients before randomisation. They reported a significantly lower rate of gastric variceal rebleeding with TIPSS (11% vs 38%), although overall upper gastrointestinal rebleeding was similar in both groups. Encephalopathy was more common in those patients treated with TIPSS (26% vs 3%), but overall complications and survival were similar in both groups. A non-randomised study compared TIPSS with cyanoacrylate injection for gastric variceal bleeding. 208 No differences were found in haemostasis, rebleeding or survival, but the group treated with TIPSS had increased encephalopathy. Another comparative study described lower rebleeding with TIPSS, but reduced in-patient length of stay with cyanoacrylate, and similar mortality. 209 This study also reported cyanoacrylate to be more cost-effective.
Other radiological procedures The use of balloon-occluded retrograde transvenous obliteration (B-RTO) for the treatment of bleeding gastric varices was pioneered by the Japanese. 184 210 This procedure involves insertion of a balloon catheter into an outflow shunt (gastrorenal or gastric-inferior vena caval) via the femoral or internal jugular vein. Blood flow is blocked by balloon inflation, then the veins draining gastric varices are embolised with microcoils and a sclerosant injected to obliterate the varices. In a small randomised study, B-RTO was compared with TIPSS in the management of 14 patients with active gastric variceal bleeding and gastrorenal shunts. 211 Immediate haemostasis, rebleeding and encephalopathy were similar in both groups. In a non-randomised study of 27 high-risk patients, Hong et al 212 compared B-RTO with cyanoacrylate injection in acute gastric variceal bleeding. Active bleeding at baseline was more common in the cyanoacrylate group. Haemostasis rates after B-RTO and cyanoacrylate were similar at 77% and 100%. Rebleeding was higher in the cyanoacrylate group (71% vs 15%), with complications and mortality similar in both groups. This rebleeding rate after cyanoacrylate is much higher than figures reported from other studies. A large Korean retrospective study evaluated B-RTO for the management of gastric variceal haemorrhage. 213 Technical success of B-RTO was 97% with procedure-related complications seen in 4% and rebleeding in 22%. Another retrospective study of B-RTO for bleeding gastric varices described 95% technical success and 50% 5-year survival. 214 Cho et al 215 assessed B-RTO in 49 patients who had gastric varices with spontaneous gastro-systemic shunts. Procedural success rate was 84% but two procedure-related deaths occurred. No variceal recurrence or rebleeding was noted. It has been reported that B-RTO can increase PPG and may aggravate pre-existing oesophageal varices and ascites. 215 216 Although B-RTO appears to be an effective alternative to TIPSS in patients with gastric variceal bleeding who have appropriate shunts, 217 it is rarely performed outside Asian centres. 218 Percutaneous transhepatic variceal embolisation with cyanoacrylate and standard endoscopic cyanoacrylate injection have also been compared in a non-randomised study of 77 patients. 219 The authors reported lower rebleeding with the percutaneous approach, although mortality was similar in both groups. Surgery Surgery for portal hypertension should be performed by experienced surgeons in lower-risk patients, ideally in specialist units. 220 Because of the increasing use of simpler endoscopic and radiological procedures as described above, the need for such an intervention has reduced dramatically, and is mainly confined to splenectomy or splenic artery embolisation in patients with splenic vein thrombosis. 221 222 Under-running of gastric varices has been shown to control active bleeding but is followed by recurrence of bleeding in 50% of patients and is associated with a perioperative mortality of >40%. 223 Complete devascularisation of the cardia, stomach and distal oesophagus for bleeding from gastric varices is associated with good control of bleeding but is followed by rebleeding in >40% of patients and early mortality in about 50%. 224 The use of distal splenorenal shunting for bleeding from gastric varices in patients with cirrhosis was reported in six patients with Child class A or B cirrhosis. 
225 Although good control of bleeding was attained, two patients died in the postoperative period. Orloff et al 169 reported that a portal-systemic shunt can be an effective treatment for bleeding varices in patients with portal vein thrombosis and preserved liver function. Primary prophylaxis of gastric variceal bleeding A randomised study of 89 patients compared β blockers, cyanoacrylate injection and no active treatment in the primary prevention of bleeding from larger (>10 mm) GOV-2 and IGV-1. 226 Over a 26-month follow-up period, bleeding occurred in 38%, 10% and 53% of patients in the β blocker, cyanoacrylate and no-treatment groups, respectively. The cyanoacrylate group had significantly lower bleeding rates than the other groups for GOV-2, but not for IGV-1 patients. Mortality was lower in the group treated with cyanoacrylate (7%) than in those given no treatment (26%), but was similar to that in the β blocker group (17%). However, this was a small, single-centre study with an unusually high failure rate for NSBB. Many clinicians have significant concerns about the safety of cyanoacrylate injection in the context of primary prophylaxis. In a retrospective study, Kang et al suggested that cyanoacrylate injection may be an effective prophylactic treatment for higher-risk gastric varices. 227 A retrospective study evaluated the clinical outcomes of B-RTO for gastric varices, in which the procedure was performed as a primary prophylactic treatment in 40 patients. 228 The procedure was successful in 79% of patients, although procedural complications were reported in 9%. Survival at 1 and 5 years was 92% and 73%, respectively. Recommendations: management of active haemorrhage from gastric varices (figure 3) 1. What is the optimal management of bleeding gastro-oesophageal varices?
Efficiently mining Adverse Event Reporting System for multiple drug interactions. Efficiently mining multiple drug interactions and reactions from the Adverse Event Reporting System (AERS) is a challenging problem which has not been sufficiently addressed by existing methods. To tackle this challenge, we propose an FCI-filter approach which combines UMLS mapping, frequent closed itemset mining, and uninformative association identification and removal. By applying our method on AERS, we identified a large number of multiple drug interactions with reactions. By statistical analysis, we found that most of the identified associations have very small p-values, which suggests that they are statistically significant. Further analysis of the results shows that many multiple drug interactions and reactions are clinically interesting, and suggests that our method may be further improved by combining it with external knowledge. Introduction It is well understood that adverse drug reactions may pose serious health concerns for patients. The situation becomes more complicated when two or more drugs are taken together. Interactions between multiple drugs may yield additional reactions beyond those of taking them separately. To monitor adverse drug reactions, the US Food and Drug Administration built the Adverse Event Reporting System (AERS), a postmarketing drug safety surveillance database which contains adverse reports from various sources. However, AERS is essentially a large collection of drug reaction reports. A report involving multiple drugs and reactions does not necessarily indicate a causal relationship between them. In fact, records in AERS come from multiple sources coded as "Foreign", "Study", "Literature", "Consumer", "Health Professional", etc. It is not clear whether all sources produce similarly accurate reports to AERS. Thus, mining such a large dataset for causative adverse drug reactions poses a major challenge in drug safety studies. The existing work on AERS data mining and analysis mainly focuses on statistical approaches. Some studies identify the reactions caused by one drug, or the drug-drug interactions between two drugs, using statistical approaches such as Bayesian methods [1] [2] and propensity score matching [3]. Some studies focus on the analysis of a few specific adverse reactions [4] or a few drug-drug interaction pairs [5]. In [2], the authors also extend the self-controlled case series (SCCS) to analyze multiple drug interactions. However, these methods did not answer the question of how to efficiently discover multiple drug interactions, i.e., drug-drug interactions that involve two or more drugs. There are many reports in AERS involving more than 2 drugs. To tackle this challenge, Harpaz et al. [6] used association rule mining techniques to find frequent patterns. A frequent pattern (a.k.a. frequent itemset) in AERS is a set of drugs and reactions that appear in at least k reports, where k is an adjustable parameter known as the minimum support. The lower k is, the more patterns will be found and thus more computational time is needed. However, using frequent pattern mining has two major limitations. First, it is computationally very costly. If a pattern is frequent, then all its sub-patterns are frequent and should be output under the same support level k. A pattern with length x will have 2^x sub-patterns (including the empty pattern and itself).
This implies that it is computationally intractable to find a lengthy pattern because the number of sub-patterns is exponential in its length. The countermeasure is to increase k or to limit the output pattern size. But by doing this, we will miss a large volume of lengthy patterns and low-support patterns. In [6], the authors used 50, quite a high support level for mining AERS, and obtained only 2603 itemsets. Second, the association rules suggested by frequent patterns are not sufficient to support causative relationships between drug interactions and reactions. For example, if (drug_A, drug_B, reaction_A, reaction_B) is a frequent itemset, we cannot conclude that it is supportive evidence that the interaction of drug_A and drug_B leads to reaction_A and reaction_B. It may be caused by the facts that (1) drug_A causes reaction_A; (2) drug_B causes reaction_B; and (3) drug_A and drug_B are often taken together. Given the above challenging background, in this work we propose a very efficient mining method based on UMLS mapping, frequent closed itemset mining and filtering (FCI-filter) for mining multiple drug interactions from AERS. Our method efficiently finds a large number of multiple drug interactions and effectively prunes out uninformative patterns. It is important to point out that in this work we do not target finding causative relationships between drug interactions and reactions, but finding informative associations by eliminating associations that are not sufficient to support causative relationships. UMLS Mapping A drug or a reaction may have different names in AERS; for example, Alpha Lipoic Acid is also known as ALA or Lipoic Acid. In many cases a drug name in AERS includes not only the drug but also its dosage. Therefore, it is not accurate to build a transactional database based on the drug or reaction names in AERS. To tackle this issue, we map each drug or reaction name to a UMLS concept using LDPMap [7]. The UMLS is a very comprehensive collection of medical terms from various sources, such as HUGO, SNOMED CT, RxNorm, ICD9, MedDRA, etc. RxNorm contains a large collection of drug names and has been successfully used in [6] for mapping drug names. MedDRA was used for coding reactions in AERS. In the UMLS, a medical term may have various synonyms and may appear in more than one source, but it has only one unique identifier known as a CUI. In [7], we designed a layered dynamic programming mapping method (LDPMap) to effectively find the best-matching UMLS CUI for any input medical term. We have shown that LDPMap is much more accurate in mapping medical terms to the UMLS than the UMLS Metathesaurus Browser [8] and MetaMap [9]. Here, we utilize LDPMap to map each drug and reaction to a UMLS CUI. In order to increase the accuracy, dosage-related tokens such as "oz", "ml" and "mg" in drug names were removed before applying LDPMap. After applying LDPMap on the AERS data of 2012q3, we obtained 10297 unique drugs and 6838 unique reactions, and built a transactional database AERS_tdb containing 134508 records.
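A minimal sketch of that dosage-stripping step is shown below, assuming a simple regular-expression approach restricted to the tokens named above; the actual LDPMap preprocessing may differ.

import re

# Minimal sketch of the dosage-stripping step described above; the actual
# LDPMap preprocessing may differ.
DOSAGE_PATTERN = re.compile(r"\b\d+(\.\d+)?\s*(mg|ml|oz)\b", re.IGNORECASE)

def normalize_drug_name(name):
    name = DOSAGE_PATTERN.sub("", name)        # drop tokens like "600 mg"
    return re.sub(r"\s+", " ", name).strip()   # collapse leftover whitespace

print(normalize_drug_name("Alpha Lipoic Acid 600 mg"))  # -> "Alpha Lipoic Acid"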
Frequent Closed Itemset Mining In data mining, a closed itemset is defined as an itemset which does not have a superset with the same support as this itemset, and a frequent closed itemset is an itemset that is both closed and frequent. By using the concept of closed itemsets, we are able to eliminate the problem of enumerating exponential numbers of subsets. For example, if (drug_A, drug_B, reaction_A, reaction_B) is a frequent closed itemset, then we do not need to output any of its subsets (such as (drug_A, reaction_A)) unless such a subset appears in a record that does not contain all items of (drug_A, drug_B, reaction_A, reaction_B). Thus, we can see that by using the concept of frequent closed itemsets, it is possible to significantly reduce the computational cost and eliminate the output of redundant information. In this study, we use MAFIA [10], an efficient frequent closed itemset mining tool, to mine frequent closed itemsets in AERS_tdb, with the support level set to 0.00005, which implies that any closed itemset appearing in at least 0.00005 × 134508 ≈ 6.73 records (i.e., 7 or more records) in AERS_tdb will be output. As a result, we obtained 4811379 frequent closed itemsets. Since we are interested in drug-reaction relationships, we removed any itemset that contains only drugs or only reactions, and finally we got 1903630 itemsets containing both drugs and reactions. This is several orders of magnitude larger than the 2603 itemsets obtained in [6]. In addition, we observed that the maximum number of drugs contained in one itemset is 20. This suggests that these 20 drugs are often taken together and with common reactions. Uninformative Association Identification and Removal As mentioned above, the association rules suggested by frequent closed itemsets are not equivalent to causative relationships between drug interactions and reactions. An itemset is not sufficient to support a causative relationship if its items and supporting transactions (i.e., transactions containing these items) can be obtained from the interaction of other itemsets and their supporting transactions. In this case, the itemset is considered uninformative. Formally, let I denote an itemset, and T denote the complete set of transactions containing this itemset. We have the following rule: Rule 1: I is not sufficient to support causative relationships if there exists a list of itemset-transaction pairs I_1×T_1, I_2×T_2, …, I_n×T_n with I ⊆ I_1 ∪ I_2 ∪ … ∪ I_n and T = T_1 ∩ T_2 ∩ … ∩ T_n such that none of T_1, T_2, …, T_n is equal to T. In other words, if we view an itemset and its supporting transactions as a block, the above interaction can be described as a "block horizontal union" [11]. Thus, an itemset is not sufficient to support causative relationships if its block can be obtained by a block horizontal union of other blocks with different transaction sets. Here is an example: {drug_A, reaction_A} appears in and only in records 1, 3, 5; {drug_B, reaction_B} appears in and only in records 1, 2, 5; {drug_A, drug_B, reaction_A, reaction_B} appears in and only in records 1, 5. Then {drug_A, drug_B, reaction_A, reaction_B} is not sufficient to support a causative relationship in which the interaction of drug_A and drug_B causes reaction_A and reaction_B, because this relationship is a logical result of taking both drugs together. However, if in the above {drug_A, reaction_A} appears in and only in records 1, 5, then we cannot judge {drug_A, drug_B, reaction_A, reaction_B} as "not sufficient to support a causative relationship". In the following, we will use the above rule to eliminate frequent closed itemsets that are not sufficient to establish a causative relationship.
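The worked example above can be checked mechanically. The sketch below (with set representations of our own choosing) verifies that the combined itemset is covered by the union of the two component itemsets and that its transaction set equals the intersection of theirs, so Rule 1 marks it as uninformative.

# Rule 1 check on the worked example above; representations are illustrative.
block_a = (frozenset({"drug_A", "reaction_A"}), frozenset({1, 3, 5}))
block_b = (frozenset({"drug_B", "reaction_B"}), frozenset({1, 2, 5}))
combined = (frozenset({"drug_A", "drug_B", "reaction_A", "reaction_B"}),
            frozenset({1, 5}))

def uninformative(target, components):
    items, trans = target
    union_items = frozenset().union(*(i for i, _ in components))
    inter_trans = frozenset.intersection(*(t for _, t in components))
    # Covered by the union, transactions equal to the intersection, and no
    # single component already has exactly the same transaction set.
    return (items <= union_items and trans == inter_trans
            and all(t != trans for _, t in components))

print(uninformative(combined, [block_a, block_b]))  # True: filtered out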
Interestingly, we find that block interaction is not necessary for frequent closed itemsets, and Rule 1 can be simplified as: Rule 2: A frequent closed itemset I is not sufficient to support causative relationships if there exists a list of frequent closed itemsets I_1, I_2, …, I_n where I ⊆ I_1 ∪ I_2 ∪ … ∪ I_n. This is because, for frequent closed itemsets, if I ⊆ I_1 ∪ I_2 ∪ … ∪ I_n, we can conclude that for T = T_1 ∩ T_2 ∩ … ∩ T_n, none of T_1, T_2, …, T_n is equal to T. Otherwise, if one of the transaction sets, say T_k, were equal to T, it would contradict the assumption that I_k is a closed itemset, because in this case I ∪ I_k would be a superset of I_k with the same support as I_k. Next we design an efficient filtering algorithm based on Rule 2. For an itemset I with p drugs, if I ⊆ I_1 ∪ I_2 ∪ … ∪ I_n, we can observe that any I_k (1 ≤ k ≤ n) need not contain more than p drugs. Thus, the filtering algorithm does not need to consider all itemsets in order to decide whether an itemset should be filtered out. We organize itemsets into groups by the number of drugs they contain. Let IS_k denote the group of itemsets with k drugs; our filtering algorithm can be summarized by the following pseudocode:

Algorithm FCI-filter(IS_1, IS_2, …, IS_m)
 1: for i = 1 to m
 2:     for each itemset X in IS_1 ∪ … ∪ IS_i
 3:         for each itemset Y in IS_i
 4:             if X ⊂ Y then mark the items of X as covered in Y; endif
 5:         endfor
 6:     endfor
 7:     for each itemset Y in IS_i
 8:         if all items in Y are marked then remove Y; endif
 9:     endfor
10: endfor
11: return IS_1, IS_2, …, IS_m

By applying FCI-filter to the 1903630 frequent closed itemsets mined from AERS_tdb, we filtered out 654484 frequent closed itemsets and kept 1249146 frequent closed itemsets as the candidate association rules. Statistical validation We use the following statistical method to validate the filtered itemsets. Assume the count of cases taking the drug(s) and having the reaction(s) follows a Poisson distribution. For any drug(s) and reaction(s), we have the following frequencies: the total number of cases N, the number of cases taking the drug(s) n_d, and the number of cases having the reaction(s) n_r. If the drug(s) do not affect the rate of having the reaction(s), the expected count of cases taking the drug(s) and having the reaction(s) would be E = (n_d / N) · n_r, as n_d / N is the portion of people taking the drug(s). The p-value is based on the observed count of cases taking the drug(s) and having the reaction(s), denoted by O, and its expectation E; it is computed as P(X ≥ O) for X ~ Poisson(E).
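This test is straightforward to compute; the following sketch uses scipy, with made-up counts, and follows the notation above.

from scipy.stats import poisson

# Poisson test for a drug(s)/reaction(s) association, following the notation
# above. N: total cases; n_d: cases taking the drug(s); n_r: cases having the
# reaction(s); observed: cases with both. The example counts are made up.
def association_p_value(N, n_d, n_r, observed):
    expected = n_d / N * n_r                    # E = (n_d / N) * n_r
    return poisson.sf(observed - 1, expected)   # P(X >= observed), X ~ Poisson(E)

print(association_p_value(N=134508, n_d=500, n_r=800, observed=30))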
Results By applying UMLS mapping and frequent closed itemset mining, we obtained a large number of itemsets of drug interactions and reactions (Table 1). After applying algorithm FCI-filter, we removed a significant number of itemsets that are insufficient to support causative relationships (Table 1). [Table 1: counts of itemsets before and after filtering, grouped by the number of drugs per itemset.] We subjected the itemsets (i.e., drug interactions and reactions) remaining after filtering in Table 1 to statistical validation, and found that most itemsets have very small, highly significant p-values (Figure 1). In addition, for drug counts greater than 10, the p-value histogram (Figure 2) is similar to Figure 1, which further confirms the effectiveness of our drug interaction mining approach. Discussions A clinical evaluation of the data mining results reveals some interesting findings, as listed in Table 2. For instance, Aripiprazole, Citalopram hydrobromide and Mirtazapine, three psychotropic drugs sometimes used in combination therapies, were found to be in association with adverse cardiovascular events (Case 1 of Table 2). This result is highly interesting, since the potential cardiovascular side effects of antidepressants and antipsychotics have long been under debate [12] [13]. In 2011, the US Food and Drug Administration (FDA) announced that "Citalopram causes dose-dependent QT interval prolongation. Citalopram should no longer be prescribed at doses greater than 40 mg per day." Further clinical study of Aripiprazole, Citalopram hydrobromide and Mirtazapine is required to explore their association with adverse cardiovascular events. In addition to the above findings, we also observed interesting interactions involving a larger number of drugs. For example, one identified interaction contains 7 drugs and many reactions. The actions of this combination of drugs, along with the reported biochemical effects, are interesting. Many of these drugs act on ion channels or receptors, and the diverse array of biochemical effects that they result in is overwhelming. They result in increased activities of alanine aminotransferase, aspartate aminotransferase and blood lactate dehydrogenase. They also result in increased concentrations of blood creatinine, glucose and urea, as well as decreased concentrations of hemoglobin and blood uric acid. Many of these outcomes can be partly attributed to abnormal kidney or liver function, but they, along with the other associated symptoms, make analyzing their overall effects quite complex. However, this type of data analysis can provide valuable pieces of information that can act as a starting point for investigating why this combination of drugs has the resulting effects. Future work We have demonstrated above that FCI-filter is very effective in identifying important multiple drug interactions and reactions. However, the clinical evaluation also suggests some future improvements to our data mining strategy. An integration of clinical knowledge from outside the AERS database can be helpful (Cases 3, 4 and 5 of Table 2). For instance, in Case 5 of Table 2, the hypotension side effect of Bromocriptine (as a single drug) is not statistically revealed by the AERS data set, although the drug is well known clinically to cause hypotension. As such, external knowledge can make the filtering of the frequent closed itemset mining more effective.
The VSOP 5-GHz AGN Survey: IV. The Angular Size/Brightness Temperature Distribution The VSOP (VLBI Space Observatory Programme) mission is a Japanese-led project to study radio sources with sub-milliarcsec angular resolution, using an orbiting 8-m telescope on board the satellite HALCA with a global earth-based array of telescopes. A major program is the 5 GHz VSOP Survey Program, which we supplement here with VLBA observations to produce a complete and flux-density limited sample. Using statistical methods of analysis of the observed visibility amplitude versus projected (u,v) spacing, we have determined the angular size and brightness temperature distribution of bright AGN radio cores. On average, the cores have a diameter (full-width, half-power) of 0.20 mas which contains about 20% of the total source emission, and (14+/-6)% of the cores are<0.04 mas in size. About (20+/-5)% of the radio cores have a source frame brightness temperature 10^{13}K, and (3+/-2)% have 10^{14}K. A model of the high brightness temperature tail suggests that the radio cores have a brightness temperatures approx 10^{12}K, and are beamed toward the observer with an average bulk motion of v/c=0.993 +/- 0.004. Introduction On 1997 February 12, the Institute of Space and Astronautical Science of Japan (ISAS) launched a satellite called HALCA, with an 8-m radio telescope dedicated exclusively to Very Long Baseline Interferometry (VLBI) (Hirabayashi et al. 1998). The mission, called VSOP, with a spacecraft apogee height of 21400 km, gives unparalleled brightness temperature sensitivity, and allows studies of radio sources with angular resolution as small as 0.2 mas. About 75% of the mission observing time was devoted to peer-reviewed scientific projects, proposed by the world-wide astronomical community (called General Observing Time, GOT). Many VSOP publications show the complexity and evolution of the sub-milliarcsec structure of AGNs (Piner et al. 2000;Lobanov & Zensus 2001;Lister et al. 2001a;Tingay et al. 2002;Kameno et al. 2003;Murphy et al. 2003;Giroletti et al. 2004). In order to insure that a complete flux-density limited sample of AGNs were observed during the observing lifetime, the mission-led part, the VSOP 5-GHz AGN Survey, was given a major portion of the remaining observing time for sources which were not already included in GOT proposals. A general goal of the survey is a compilation of a catalog of AGN which would be used in part for planning for future space VLBI missions. A more immediate goal, reported in this paper, is to characterize the properties of the sub-milliarcsec structure in AGN, especially their angular size and brightness temperature distributions. This paper, the fourth in the VSOP AGN Survey series, presents a non-imaging statistical analysis of the angular size and brightness temperature distributions of strong AGNs. This is complementary to the approach in Paper III Scott et al. (2004) which shows the images, model fits, angular sizes and bright temperatures for 102 sources. A description of the survey compilation and supporting VLBA observations was given in Paper I by Hirabayashi et al. (2000), with additional material in Fomalont et al. (2000a). The VSOP observations and data reductions are described in Paper II by Lovell et al. (2004) (see also Moellenbrock et al. (2000)). 
In §2, we discuss the source selection, and in §3 we describe how the observed visibility amplitude versus projected spacing was determined in order to obtain a statistic which could be used to determine the angular size properties of AGN. In §4, we derive the angular size and brightness temperature distributions. We compare them with other high resolution surveys and scintillation observations, and fit a simple model to the distributions. The major results are summarized in §5. The VSOP AGN Survey Sample The VSOP 5 GHz AGN sample was defined to include all cataloged extragalactic, flat-spectrum radio sources in the sky with
• a total flux density at 5 GHz, S ≥ 0.95 Jy
• a spectral index α ≥ −0.50 (S ∝ ν^α)
• a galactic latitude |b| ≥ 10°.
This sample contains 344 sources. These criteria eliminated from consideration about 300 extragalactic sources with flux density > 1 Jy near the galactic plane, or with steep radio spectra and thus little milliarcsec structure. These sources are arcseconds in size, dominated by double-lobed structures (e.g., FRI and FRII type radio sources), and generally associated with radio galaxies. At the beginning of 2002, when this database was assembled, about 50% of the VSOP observations had been observed and processed. Because the selection of observed sources was randomized by the need to fill in observing holes between the GOT observations, little bias is introduced by not completing the observations of the entire list (see Fig. 1 of Paper III) before this analysis of the data.
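These three cuts translate directly into a filter; the sketch below applies them to a small, hypothetical catalogue table.

# Sketch of the sample selection over a hypothetical catalogue of
# (name, S_5GHz [Jy], spectral index alpha, galactic latitude b [deg]).
catalogue = [
    ("J0001+0000", 1.20, -0.20,  35.0),
    ("J0002+0000", 0.80, -0.10,  50.0),  # too faint
    ("J0003+0000", 2.10, -0.70, -40.0),  # spectrum too steep
    ("J0004+0000", 1.50,  0.10,   5.0),  # too close to the galactic plane
]

sample = [name for name, s5, alpha, b in catalogue
          if s5 >= 0.95 and alpha >= -0.50 and abs(b) >= 10.0]
print(sample)  # only J0001+0000 survives all three cuts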
Because of these image-based uncertainties, our statistical analysis uses the visibility data directly in order to determine the distribution of AGN angular size and brightness temperature. In order to extend the range of resolution of the data, we concatenated the VSOP data and the VLBApls data, and for this reason we have restricted the analysis to the 303 survey sources north of declination −43° which are covered by the VLBA observations. In Fig. 1 we give an example of the concatenated data set for J1626−2951 and the relevant processing. Fig. 1a shows the visibility amplitude versus the projected (u,v) spacing for the VSOP data observed on 22 February 1998, plus the VLBA data observed on 05 June 1996. The effect of source variability is obvious, since the VLBA points which overlap in spacing with the VSOP points are considerably higher. Using the flux density monitoring of the sources with the Australia Telescope Compact Array (ATCA) (Tingay et al. 2003), we find that the flux density of this source in June 1996 was 3.0 Jy, but only 2.1 Jy in February 1998. Although the variable component in most AGNs is confined to a small region of the emission extent, the overlap in spacing and position angle between the VLBA and VSOP surveys is sufficient to compare their visibility amplitudes. In the example of Fig. 1a, if the VLBA amplitude scale is multiplied by 2.1/3.0, a better continuity between the VLBA and VSOP points, as shown in Fig. 1b, is obtained. (The large change in flux density for this source is atypical.) Source variability corrections to an accuracy of 10% were obtained from the ATCA and University of Michigan source monitoring programs, which include most of the sources in the VSOP sample. Finally, in order to decrease the size of the database, we averaged the observed visibility amplitude in bins of width 40 Mλ, and this plot is shown in Fig. 1c. With the above processing, we obtained a database containing the visibility amplitude over a wide range of projected spacing for all sources. The VSOP Survey source list of the 303 sources north of declination −43° is given in Table 1. It is arranged as follows: Columns 1 and 2 give the J2000 name, and an alternative name. Column 3 lists the total flux density of the source at 5 GHz. Most of the sources are variable, and the total flux density comes from the original finding catalog or from the VLBA observations. The Relative Visibility versus Projected Spacing The data for all sources were processed to the form given in Fig. 1c. In order to have a statistic which is independent of the total flux density of the source, we normalized the measured correlated flux density of each source to the average visibility amplitude in the first bin, of average spacing 20 Mλ. We will denote this normalized flux density as the relative visibility (RV); it is a property of the source structure, regardless of the total intensity of the source (a short code sketch of this binning and normalization step is given below). The behavior of the RV with projected (u,v) spacing (PS) is the statistic used to characterize the source structure properties of AGNs. In order to determine an unbiased RV versus PS distribution, we averaged the contributions of the 303 sources in the following manner. First, we included the 115 fully-reduced sources, Class = A, 48.1% of the sample. We contend that those sources not yet observed or reduced, Class = B, have the same average properties as those in Class = A.
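The per-source binning and normalization described above can be sketched in a few lines. This is a minimal illustration, not the survey code: the array names, bin edges, and variability scale factor are our assumptions.

```python
import numpy as np

def relative_visibility(ps_mlambda, amp_jy, var_scale=1.0,
                        bin_width=40.0, max_ps=560.0):
    """Average visibility amplitudes in 40-Mlambda bins of projected
    spacing (PS), then normalize by the first bin (centered near
    20 Mlambda) to obtain the relative visibility (RV)."""
    amp = np.asarray(amp_jy) * var_scale   # e.g. 2.1/3.0 for the VLBA epoch
    edges = np.arange(0.0, max_ps + bin_width, bin_width)
    idx = np.digitize(ps_mlambda, edges) - 1
    binned = np.full(len(edges) - 1, np.nan)
    for k in range(len(binned)):
        sel = idx == k
        if sel.any():
            binned[k] = amp[sel].mean()
    return binned / binned[0]   # RV = 1 in the first bin by construction
```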
We included a representative portion of the 50 sources (Class = C) which were too resolved to be observed with HALCA but are nevertheless part of the VSOP AGN Sample. In practice we chose all 50, but reduced their weight in the analysis to 0.481 to match the portion of VSOP observations which had been observed and reduced by 2002 February. Fig. 2 shows the dependence of the average RV versus PS for the sample. Although data from only 165 sources of the 303 in the sample are included, the distribution should be representative of the entire sample, as described above. A preliminary version of this distribution, with fewer sources, was presented by Lovell et al. (2000). The distribution shows a strong decrease between 20 Mλ and 140 Mλ, with a less steep decrease at longer spacings. At 500 Mλ, the RV is 20% of that at 20 Mλ. Based on the measured total flux density of the sources (see Table 1, col 3), the visibility amplitude at the first point at 20 Mλ is, on average, only 52% of the total source flux density at zero projected spacing. Thus, a typical AGN contains about half of its emission in angular scales > 10 mas, which is invisible in the VLBA and VSOP observations but contained in the total source flux density. For comparison, the distribution from the VSOP observations of the Pearson-Readhead (PR) sample at 5 GHz (Lister et al. 2001a) is also shown in Fig. 2. This sample of 27 sources is defined by δ > 35°, total flux density > 1.3 Jy, with correlated flux density > 0.4 Jy on a 6000 km (100 Mλ) baseline. Using the same visibility amplitude normalization as that used with the VSOP Survey sample, the PR distribution is in reasonable agreement with that found with the VSOP sample. For the PR sample, there is a smaller decrease of the RV at the longer spacings, since the sample definition included a criterion concerning the compactness of a source. The distribution of the RV versus PS can be interpreted as that produced by a typical AGN structure. We have, thus, fit this distribution to an average source structure, given by the dashed line in Fig. 2. For the space baselines of PS > 180 Mλ, the distribution can be fit with a component of angular size 0.20 ± 0.02 mas, containing 40 ± 4% of the mas-scale flux density, or about 20% of the total flux density. The fit to shorter spacings is more ambiguous, and a range of angular scale components is needed. Most of the emission is contained in an ≈ 1 mas-sized component, but some emission is required in a component of ≈ 2.4 mas to fit the shortest VLBA spacings. As described above, a typical AGN contains even more extended emission with a scale size > 10 mas which is unobservable with the VLBA or VSOP, but has been imaged with lower frequency VLBI surveys. The fit in Fig. 2 of the three components with different angular scales is consistent with the known structure of many AGNs: The smallest angular component is the radio core, which is generally less than 1 mas in size. This component is associated with the inner part of the radio jet, which is often beamed toward the observer. The two larger components are consistent with the radio jet and internal structures observed in many sources. The radio emission which is completely resolved out in these observations is associated with larger-scale kpc-size emission. Since the structure of most radio sources is asymmetric with the radio core at one end of a linear structure, we originally fit the data in Fig.
2 to an asymmetric spatial distribution of the three components. However, the fitting is insensitive to source asymmetry and depends strongly on the component angular scales. The Angular Size Distribution Each plotted point in Fig. 2 represents the average RV for the set of sources. By fitting this RV distribution to a typical AGN source model, we obtained an average source structure. In Fig. 3 we show the range of values for the RV associated with three spacings: 60 Mλ, 220 Mλ and 440 Mλ; the spread of these values is related to the distribution of the angular size of the AGN population. At the shortest spacing of 60 Mλ, about 60% of the sources are nearly unresolved (RV > 0.8), and about 12% of the sources have RV < 0.4. At the long spacing of 440 Mλ, nearly 40% of the sources have RV < 0.2, and only about 30% have RV > 0.4. In order to determine the range of source sizes which are consistent with the distributions in Fig. 3, we used the template structure of the three-component average source model. In other words, we assumed that all sources have the same shape of RV versus PS, but are scaled in angular size (or spacing). We then convolved this template structure with a two-parameter log-normal distribution P(θ), where θ is the angular size of the radio core, with parameters θ_l, the log-mean of the angular size distribution, and d, the dispersion in the log of the angular size. The best fit for these two parameters was obtained by minimizing the χ² difference between the observed distribution in Fig. 3 and that expected from the template structure model, convolved with the distribution in Eq. (1). The result of this fit is a core size of θ_l = 0.052 mas and dispersion d = 1.45, and the fit is shown by the dark plotted points in Fig. 3. The error bars represent the estimated error based on the number of sources used in each spacing range. The cross-hatched histogram in Fig. 4 shows the angular size distribution for this fit. Approximately 80% of the radio sources have a core angular size in the range 0.03 mas to 0.8 mas. (The two larger radio components follow the same distribution, but are 5 and 12 times larger than the core.) About 14% and 4% of the sources may have an angular size less than 0.06 mas and 0.04 mas, respectively. Although the smaller angular sizes are clearly beyond the resolution capabilities of these observations, about 0.15 mas for the stronger cores (Paper III), our assumption of reasonable continuity in the distribution of angular sizes, implied by the use of Eq. (1), does imply that these small angular size components are likely to exist. We also fit the RV vs PS distribution in Fig. 3 with an angular size model which attempts to minimize the number of small sources. The fainter, open-circle points show a fit to the data with parameters θ_l = 0.09 mas and d = 1.08. These values produce an additional χ² deviation from the data which makes this solution (or a more extreme one) less than 15% as likely as a solution closer to the best model. This angular distribution limit is shown by the open histogram in Fig. 4, where it is referred to as 'largest core'. For this distribution, the proportion of sources with cores of angular diameter less than 0.06 mas drops from 14% to 6%, with only 1% less than 0.04 mas. The difference between the two distributions is an indication of the model error.
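Since Eq. (1) itself is not reproduced in this excerpt, the sketch below assumes a standard two-parameter log-normal in log10 θ, with log-mean log10 θ_l and dispersion d in dex; the base of the logarithm, the size grid, and the template interface are our assumptions. The fit itself is an ordinary χ² minimization.

```python
import numpy as np
from scipy.optimize import minimize

def lognormal_pdf(theta, theta_l, d):
    """Two-parameter log-normal in log10(theta): log-mean log10(theta_l)
    and dispersion d (dex); theta in mas."""
    x = np.log10(theta) - np.log10(theta_l)
    return np.exp(-0.5 * (x / d) ** 2) / (
        theta * d * np.sqrt(2.0 * np.pi) * np.log(10.0))

def predicted_rv(theta_grid, theta_l, d, template_rv):
    """Average RV over the size distribution: template_rv(theta_grid)
    returns the template RV(PS) curve rescaled to each core size
    (one row per size)."""
    w = lognormal_pdf(theta_grid, theta_l, d)
    w = w / w.sum()
    return (w[:, None] * template_rv(theta_grid)).sum(axis=0)

def chi2(params, theta_grid, template_rv, rv_obs, rv_err):
    model = predicted_rv(theta_grid, params[0], params[1], template_rv)
    return np.sum(((model - rv_obs) / rv_err) ** 2)

# Usage, once a template and binned RV data are in hand; the best fit
# quoted in the text is roughly theta_l = 0.052 mas, d = 1.45:
# fit = minimize(chi2, x0=[0.05, 1.4],
#                args=(theta_grid, template_rv, rv_obs, rv_err),
#                method="Nelder-Mead")
```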
The Brightness Temperature Distribution The brightness temperature distribution of the core component in the observer's frame, T_b ∝ S/θ², can be derived from the angular size distributions given in Fig. 4 and the average core flux density, S. To obtain this core flux density, Fig. 5a shows a distribution similar to that of Fig. 2, but with the y-axis as the correlated flux density rather than the RV. We have divided the distribution into a low and a high redshift distribution, separated at z = 0.8, which is the median redshift value for the sample. The core component, which becomes dominant at spacings greater than 300 Mλ, has an average flux density of 0.5 Jy. The dependence of the core flux density on redshift is small, so that the assumption of converting the angular size distribution to a brightness temperature distribution using a well-defined average core flux density of 0.5 Jy is valid. The brightness temperature distributions in the observer's reference frame are shown in Fig. 6, for the best-fit angular size distribution and the 'large-core' angular distribution. About 14% of the sources have T_b > 1.0 × 10^13 K for the best-fit angular size distribution, whereas only 6% are above this temperature for the large-core fit. For both distributions, approximately half of all AGNs have a radio core with T_b > 1.0 × 10^12 K. These distributions agree well with those in Paper III derived from the images. Because our data modeling does not contain rigid cutoffs at the high resolution limit of the observations, but uses a reasonable extrapolation below the formal resolution limit of the VSOP observations, the distributions in Fig. 6 extend to higher brightness temperatures than those from other VLBI surveys of AGNs. Our addition of AGNs which were not observed with VSOP because of the lack of significant small-scale structure also extends the brightness temperature distributions to lower values than other VLBI surveys. To correct the brightness temperature T_b from the observer to the source reference frame, the factor (1 + z), where z is the source redshift, should be applied to the brightness temperature. Fig. 5b shows the redshift distribution of the 267 sources in the sample with measured redshifts. The distribution is relatively flat out to z = 1.5, and then drops off, with a maximum redshift somewhat less than 3.0 for this sample. The average value of (1 + z) is 1.81. The plot in Fig. 5c shows a comparison of the approximate source brightness temperatures given in Table 1, col (8), versus redshift. There is clearly little systematic dependence of the brightness temperature on redshift; hence, a simple multiplication of the brightness temperature scale in Fig. 6 by 1.81 converts from the observer frame to the source frame. This assumption should produce an error no larger than the 15% difference in the average core flux density for high and low redshift sources, as shown in Fig. 5a. We believe that the statistical analysis of the VSOP+VLBA observations gives the most realistic and unbiased estimate of the proportion of high brightness radio cores at 5 GHz yet available. In the source reference frame, approximately 25% of the radio cores have T_b > 1.0 × 10^13 K using the best-fit angular size distribution, but this proportion drops to 16% for the largest-core angular size distribution. The proportion of the cores with T_b > 1.0 × 10^14 K is 4% and 1% for the two distributions.
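As a worked check on the numbers above, the conversion T_b ∝ S/θ² can be written out explicitly. The numerical coefficient below is the standard one for a circular Gaussian component, T_b = 2 ln 2 c² S / (π k ν² θ²); it is our assumption, not a value quoted in the paper.

```python
def brightness_temperature(s_jy, theta_mas, nu_ghz=5.0, z=0.0):
    """Brightness temperature (K) of a circular Gaussian component of
    flux density s_jy (Jy) and FWHM theta_mas (mas) at nu_ghz (GHz);
    the (1 + z) factor converts to the source reference frame."""
    return 1.22e12 * (1.0 + z) * s_jy / (nu_ghz ** 2 * theta_mas ** 2)

# The average core quoted in the text (S ~ 0.5 Jy, theta ~ 0.20 mas at
# 5 GHz) with the sample-average <1+z> = 1.81 lands near 10^12 K:
print(f"{brightness_temperature(0.5, 0.20, z=0.81):.2e} K")  # ~1.1e12 K
```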
The latter brightness temperature (10^14 K) corresponds to a resolution a factor of 10 finer than that of the observed VSOP data, but we believe the percentages are reliable. Approximately 65% of the radio cores have T_b > 1.0 × 10^12 K. The above results are consistent with other surveys of compact radio sources. In the 15-GHz observations of Kovalev et al. (2004), 18 of 160 sources (13%) have T_b > 1.0 × 10^13 K and 46% have T_b > 1.0 × 10^12 K. A 22 GHz survey (Moellenbrock et al. 1996) found a similar percentage of high brightness objects. Our percentage of cores with T_b > 1.0 × 10^12 K is in reasonable agreement (as it should be, since the data overlap is large) with the 53% found by Scott et al. (2004) from the images and model-fitting of 102 sources from the VSOP Survey. (Twenty sources in the Scott sample are not included in this statistical analysis because they were south of δ = −43°, or not within the strict definition of the catalog completeness; about 30 additional sources, with good visibility data but not yet imaged, are included in this statistical analysis.) The highest brightness temperature that has been measured for an individual source is 5.8 × 10^13 K, for AO0235+164 at 5 GHz (Frey et al. 2000). The scintillation of many radio sources also implies that they contain very compact, high brightness temperature radio components. In the scintillation survey of Lovell et al. (2003), about 12% of the sources were classified as variable. The variability in the total flux density averaged about 6%, and the variability time scales ranged from a few hours to a few days. An approximate interpretation of these results is: about 12% of the AGNs contain a radio component with 6% of the total source flux density and T_b ≥ 10^14 K. The results from our statistical analysis suggest that about 20% of the AGNs contain a radio component with 20% of the total source flux density and T_b ≥ 10^13 K. These two descriptions of the properties of the high brightness radio cores are compatible. Finally, a correlation between sources which scintillate and those which have relatively large correlated flux density observed by VSOP was reported by Lister, Tingay & Preston (2001b) using the PR sample of sources. A detailed study of the scintillation properties of the VSOP sample is in progress. Brightness Temperature Modeling It is generally assumed that the maximum brightness temperature from a synchrotron emitting radio source is T_max ≈ 10^12 K, because strong inverse-Compton emission will quickly quench the radio emission above this brightness (Kellermann & Pauliny-Toth 1969; Lister et al. 2001a; Tingay et al. 2001; Kellermann 2002). Calculations based on equipartition-of-energy arguments suggest that this limit may be as low as 10^10.5 K (Readhead et al. 1996). Other emission mechanisms have also been proposed, including relativistic induced Compton scattering (Sincell & Krolik 1994) and coherent synchrotron emission processes (Melrose 2002), which have a higher brightness temperature limit. Observed brightness temperatures above T_max can, however, be produced by Doppler boosting (Shklovsky 1963) of the emission from radio core material which moves with a relativistic bulk velocity v_b nearly along the direction to the observer, at an angle ψ. In order to obtain estimates of the three parameters T_max, β = v_b/c and ψ needed to reproduce the high brightness tail of the distribution in Fig.
6, we used the following simple model: the number density of sources with brightness temperature T < T_max is proportional to log(T_max/T); the bulk motion of this material is β; and the maximum orientation to the line of sight is ψ. For a more detailed modeling of superluminal sources, see Vermeulen & Cohen (1994). The plotted points in Fig. 6 are those for the values T_max = 1.0 × 10^12 K, β = 0.993, ψ = 35°. Reasonable fits are obtained for the range 0.5 × 10^12 K < T_max < 2.0 × 10^12 K and 0.990 < β < 0.997. Similar model parameters are obtained from multi-epoch observations of the 'superluminal' motions of radio components, which suggest β ≈ 0.99, corresponding to γ = (1 − β²)^(−0.5) ≈ 10 (Vermeulen & Cohen 1994; Kellermann et al. 2000; Kovalev et al. 2004). Conclusions Before the launch of HALCA, there was little direct evidence concerning the brightness temperature distribution of radio components associated with AGNs. Figs. 2 and 3 show that at 5 GHz, with baselines up to 25,000 km, significant correlated emission remains for many AGNs; future space VLBI missions with longer baselines and substantially improved sensitivity are therefore required to probe the evolution and structure of these high brightness radio cores. The VSOP AGN Survey was compiled in order to determine the properties of the sub-mas radio structure of strong AGNs. The source sample covered the entire sky (|b| > 10°) and included sources at 5 GHz above 0.95 Jy, with a relatively flat spectral index. No other criteria were used. The analysis in this paper provides an attempt at an unbiased determination of the radio source structure parameters by using the measured RV versus PS properties in a simple and straightforward manner. An analysis from the derived images and models of the individual sources (Scott et al. 2004) obtains similar results on the angular size and brightness temperature of the AGNs. The major properties of AGNs derived from the statistical analysis described in this paper are: • About 50% of the total emission from an average AGN is completely resolved even at the shortest VLBA spacings, and is therefore contained in a component > 10 mas in size. • About 40% of the milliarcsec emission (20% of the total emission) comes from a radio core of average size 0.20 ± 0.02 mas. • About 80% of the radio cores have an angular size in the range of 0.03 to 0.8 mas. We estimate that 10 ± 4% of the cores are < 0.06 mas at 5 GHz. • A majority of the AGN radio cores have a brightness temperature in excess of 1.0 × 10^12 K, and we estimate that 20 ± 5% of the cores have T_b > 1.0 × 10^13 K and 3 ± 2% have T_b > 1.0 × 10^14 K in the source reference frame. • The derived brightness temperature distribution is in good agreement with the results from other high-resolution radio source surveys and with radio scintillation observations. We gratefully acknowledge the VSOP project, which is led by the Institute of Space and Astronautical Science (ISAS, now part of the Japan Aerospace Exploration Agency, JAXA) in cooperation with many organizations and radio telescopes around the world. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
SH acknowledges support through an NRC/NASA-JPL Research Associateship; WKS thanks the Canadian Space Agency for support; RD is supported by the Japanese Society for the Promotion of Science; JEJL thanks the Australian Commonwealth Scientific & Industrial Research Organisation for support. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Finally, we thank an industrious referee for major improvements to the organization and content of the paper. Fig. 5 caption (fragment): Little dependence of the core flux density on redshift is observed; hence, the correction from the observer's frame to the source frame by simply increasing the brightness temperature scale by 1.81 is valid. Note.-Column (7): A = AGN sample and observed with VSOP; B = AGN sample, but not yet observed with VSOP or reduced; C = AGN sample, but too resolved to be observed with VSOP; D = removed from AGN sample. Redshift and identification for J1501−3918 and J1658−0739 from I.A.G. Snellin & P.G. Edwards (2004, private communication); redshift for J1522−2730 from Heidt et al. (2004)
2014-10-01T00:00:00.000Z
2004-07-03T00:00:00.000
{ "year": 2004, "sha1": "98615ec00bad9f0cfa96c0dff9a13031b4b6390a", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "b9c93f76e1b089ce25305befaa26196f3eb764cb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
197793430
pes2o/s2orc
v3-fos-license
CAN LAW ON PROBATION IMPROVE THE IMPLEMENTATION OF THE MEASURES FOR PROVIDING THE DEFENDANT’S PRESENCE IN THE CRIMINAL TRIALS IN MACEDONIA? The author critically elaborates the jurisdiction of the new Probation Service as regulated within the provisions of the newly enacted Law on Probation in the Republic of Macedonia. He states that the Macedonian legislator has omitted to regulate one very important part of the Probation Service's jurisdiction, namely the implementation of the measures for providing the defendant's presence during the criminal procedure. The author stresses the fact that, in a broader European sense, Probation Services have inherent jurisdiction over the proper implementation of these measures as ordered by the courts. Through this jurisdiction the probation service serves the court as a pre-trial service. In order to overcome this situation, the author first examines the connection between these measures and the Probation Service, and then provides specific suggestions for further improvement of the Law on Probation's provisions. INTRODUCTION Measures for providing the defendant's presence are often seen as a necessary evil for the effective and efficient conduct of criminal trials. This is because these measures impose significant restrictions on the defendant's basic human rights guaranteed during the criminal trial, while the defendant is presumed innocent until proven guilty beyond reasonable doubt. This means that through the imposition of these measures during the criminal trial, the defendant's rights in general, and the right to liberty and the right to free movement in particular, are severely restricted despite the fact that the defendant is considered innocent. Consequently, extensive application of these measures could even undermine the defendant's presumption of innocence. For these reasons, the imposition of the measures for providing the defendant's presence during criminal trials should be restricted by the courts to those cases where they are necessary or inevitable, and the courts should exercise extra caution when deciding which measure is most suitable to impose, while at the same time ensuring the least intrusion into the defendant's guaranteed rights. Accordingly, when deciding which measure for providing the defendant's presence during the criminal trial is the most appropriate and the most effective, courts often cannot rely solely upon the evidence provided by the prosecution. In the Macedonian experience, this stems from prosecutorial practice: in many cases the evidence supporting a request for the imposition of a measure for providing the defendant's presence is insufficient, since it lacks important information regarding the defendant's personality and his/her family or social ties and connections. It is needless to say that this information is of essential importance for the court when deciding whether to impose a measure at all, or when determining the most appropriate measure for providing the defendant's presence during the trial. From a comparative point of view, this problem is also observed in the criminal justice systems of other EU states. For that reason, several EU member states have introduced within their criminal justice systems specific state agencies which are responsible for gathering such evidence regarding the defendant's personality and his/her social milieu and presenting it to the court.
In most cases this information regarding the defendant's profile is presented to the court, as part of the procedure for implementation of the measures for providing the defendant's presence and as part of the sentencing process, in the form of a pre-sentence report prepared by the police, the probation service, or other state bodies. Delegation of this duty to the probation service is in most cases justified by the fact that there is a strong thread of similarity between alternative criminal sanctions and the less severe measures for providing the defendant's presence during the criminal trial, despite the fact that they serve completely different purposes. In this text the author elaborates the jurisdiction under the newly enacted Law on Probation in the Republic of North Macedonia and examines the possibility of transferring the above-mentioned duty to the Macedonian probation service, in comparison with the jurisdiction of equivalent services in several EU member states. The Law on Probation in the Republic of North Macedonia was enacted on 25 December 2015 (No. 226/2015), with vacatio legis until 1 November 2016, with the main idea of fostering and increasing the implementation of alternative sanctions by the courts. (Unfortunately, despite the fact that the vacatio legis has elapsed, this Law has been implemented in practice only since the beginning of 2018, and to date there are fewer than 20 probation officers in the Republic of North Macedonia.) Unfortunately, this Law has omitted to regulate an additional and equally important area: the implementation of measures less severe than detention for providing the defendant's presence during the criminal trial. Furthermore, this article also contains specific recommendations for extending the reach of the Law on Probation. THEORETICAL BACKGROUND FOR THE CONNECTION OF THE LESS SEVERE MEASURES FOR PROVIDING DEFENDANT'S PRESENCE AND PROBATION SERVICES Considering the nature of the alternative sanctions and the nature and specific purpose of the measures less severe than detention for providing the defendant's presence during the trial, it is acceptable to interconnect these two types of measures within one agency for their proper administration. The interconnection between these two types of measures, as Hucklesby and Marshall emphasize, rests upon their minimal limitation of the defendant's right to liberty and of the defendant's social activities, despite the fact that alternative measures are criminal sanctions, imposed only upon a finished criminal procedure and upon defendants who were found guilty, while the measures for providing the defendant's presence are imposed only upon defendants who are presumed innocent, and only during the criminal trial. In addition, these two types of measures identically impose certain obligations or limitations on the defendants in order to test their responsibility and capability to function properly in everyday life within the community of their origin. Henceforward, the same arguments used to justify reducing the use of imprisonment and fostering the imposition of alternative sanctions can also be used to justify promoting these measures, which intrude less on the defendant's right to liberty than detention. The reason for this analogy can be found in at least two different aspects.
The first aspect is connected to the empirically proven fact that the processes of resocialization and punishment of convicted persons are far more effective and efficient if the person is not removed from his/her natural environment, meaning that the sanction is served within the convicted person's community. If these arguments are plausible for convicted persons, they should be even more acceptable for persons who are standing trial and are protected by the principle of the presumption of innocence. Hence, if the defendant is considered innocent until proven guilty, he/she should be treated likewise by the courts, and his/her right to liberty should be restricted only in specific, necessary cases limited by law. This means that in every other case the defendant's presence during the criminal trial should be secured, if needed, only through the imposition of the less severe measures for providing the defendant's presence. These measures, as determined within the Criminal Procedure Code, are very similar to the alternative sanctions as regulated within the Criminal Code, particularly regarding their implementation. The second aspect concerns the fact that a specific state body or agency is necessary for the proper administration of these measures, both the alternative sanctions and the alternatives to detention. Since the establishment of a specific state agency for the administration of criminal sanctions is always expensive and carries a significant financial burden for the state budget, it is also appropriate to assign to such an agency as similar a workload as possible; this would reduce the workload of the courts and prison authorities, while at the same time increasing the overall efficiency of the courts and the criminal justice system. This means that if we establish a new criminal justice agency, then this agency should be charged with performing the complete workload of the other criminal justice stakeholders (such as the courts and prison authorities) that provide the same or similar services, in order to achieve specialization of its services. It is needless to mention that the specialization of the workload of one agency leads to an increase in the quality of its work performance. Additional opinions supporting the idea of concentrating the duties for implementation of these measures in the probation services are based upon the fact that probation officers have a more appropriate educational background and experience than police officers for implementing these measures. This, in turn, generates better interpersonal relations between the probation officers and the defendants, which means a better answer to the defendant's needs and improves the defendants' overall satisfaction with the implementation of these measures during the trial. In this sense it could be expected that the defendants would be more willing to obey the court's orders, reducing the risk of absconding or of committing further crimes, which is the desired and expected behavior of defendants. Furthermore, considering the EU member states' criminal justice systems, together with the EU acquis, we can conclude that this jurisdiction of the probation services is common in most states that have established such a specific service. For example, this experience of the probation service can be found within the criminal justice systems of several EU member states, such as the Netherlands, Belgium, Slovakia, the UK and Austria.
Furthermore, in most of the EU member states probation services provide pre-sentence reports or perform the gathering of personal information for the courts, which provides the judges with the necessary information on the defendant's character and social ties when determining the most appropriate measure for providing the defendant's presence during the criminal trial. The possibility of interconnecting the implementation of the alternative sanctions and the measures less severe than detention can also be indirectly concluded from the EU acquis, in particular EU Framework Decision No. 2009/829/JHA on supervision measures as an alternative to provisional detention, which does not determine which state agency is responsible for this. (Since the Republic of North Macedonia is a candidate member state of the EU, and has started its accession process through its specific "High Level Accession Dialogue - HLAD" process, the EU acquis is also a source of law for our national legal system, and the Macedonian legal system needs to be harmonized with it.) Due to this fact we can conclude that there is no formal objection to probation agencies implementing these measures less severe than detention. In regard to the above-mentioned arguments we can conclude that, despite the fact that at first sight it might appear that we are comparing "apples and pears", these two types of measures bear significant mutual resemblance. This is primarily based upon the fact that the implementation of these less severe measures and of the alternative sanctions is in practice usually connected with the same or similar problems, and that they have a similar implementation methodology. Due to this, in the next chapter we will critically examine the provisions of the Macedonian Law on Probation and, through the comparative method, examine the areas where this law could be improved in order to be a suitable tool for the proper implementation of the measures less severe than detention by the Macedonian courts. ANALYSIS OF THE JURISDICTION OF THE MACEDONIAN LAW ON PROBATION The enactment of the Law on Probation was eagerly expected by Macedonian academics and judicial professionals, since it was considered a useful and necessary tool for the proper implementation of the alternative measures, together with improved implementation of the measures less severe than detention during criminal trials. Unfortunately, it is obvious that the Macedonian legislator has significantly reduced the impact of this law to the implementation of the alternative measures to prison as regulated within the Criminal Code, and has only timidly introduced risk evaluation as a possible form of assistance to the judges. Bearing in mind the experience of several EU member states, jurisdiction over the measures for providing the defendant's presence was omitted by the Macedonian legislator. Furthermore, besides the implementation of the alternative sanctions and of the measures for providing the defendant's presence, another common official duty of probation services is support to the courts in the implementation of the measures less severe than detention for providing the defendant's presence, together with the provision of risk evaluations of the defendant's personal character. In order to examine whether there are actual possibilities for extending the jurisdiction of the Macedonian probation service, we need to evaluate the actual reach of the provisions of the Law on Probation, together with the correlation between the alternative sanctions and the measures for providing the defendant's presence.
Finally, we need to evaluate the correlation with the EU standards and whether they correspond to the types of alternative sanctions which are dealt with by the Macedonian probation service. Probation services during the criminal procedure and implementation of the risk evaluation schemes In most of the criminal justice systems where a probation service is established, one of its core duties is the analysis of the defendant's personality through the creation of risk evaluation schemes. Performance of the risk evaluation is particularly important for the courts when deliberating the most appropriate measure for providing the defendant's presence during the criminal trial, or the sanction at the end of the criminal trial. Fortunately, this duty of the probation service has been envisioned by the Macedonian legislator within Article 12 of the Macedonian Law on Probation. This article regulates the obligation of the probation service to summon the defendant and perform an interview with him/her, and/or to collect additional documents and personal data from other state agencies as requested by the courts, and, by using specific risk assessment tools, to generate a final report to the court regarding the defendant's state of risk. However, despite the fact that this duty of the Macedonian probation service means a great improvement of the judges' position, unfortunately the authorization for the judge to request this information from the probation service has not yet been prescribed within the provisions of the Criminal Procedure Code. This means that, at this point, there are no legal grounds within the Criminal Procedure Code for the judges to invoke the activities regulated within Article 12 of the Macedonian Law on Probation. Therefore we think that this should be addressed within the new amendments to the Criminal Procedure Code, in order to introduce the possibility for the court to request from the probation service the performance of risk evaluation schemes and pre-sentence reports as part of the court's decision-making process for the imposition of the measures for providing the defendant's presence or of the alternative sentences. (Unfortunately, these amendments do not contain any provisions regarding pre-sentence reports or risk evaluation for defendants.) Hence we think that only with these provisions in the CPC will the judges be able to use this duty of the probation service as an effective tool for assessing the most suitable measure for providing the defendant's presence. Similarities between the less severe measures than detention for providing defendant's presence during the criminal trials and alternative sanctions As a precondition to determining whether it is possible to transfer the authority for implementation of the less severe measures for providing the defendant's presence during the criminal trial to the probation service, it is necessary to analyze the level of similarity between these measures and the alternative sanctions. The level of similarity rests upon the fact that the implementation of the alternative sanctions and of the alternatives to detention bears the same or a similar burden regarding the professionalism and knowledge of the probation agencies' employees. Hence, it is often practical to consolidate the implementation of these two types of measures within one state agency.
The Macedonian legislator, within Article 144 of the Criminal Procedure Code, has regulated the following precautionary measures: a ban on leaving the residence; mandatory reporting to a specific state organ or official person; temporary withdrawal of the driving license or a ban on its issue; temporary withdrawal of the passport or a ban on its issue; a temporary restriction on visiting specific places or areas; restrictions on maintaining contact with specific persons; and a temporary ban on undertaking specific professional or work-related activities. These measures, together with house detention, bail, short-term detention and citation, are considered less severe measures than detention and serve for providing the defendant's presence during the criminal trial. The above-mentioned measures for providing the defendant's presence during criminal trials are also harmonized with the EU Framework Decision 2009/829/JHA on Supervision Measures as an Alternative to Provisional Detention. The general idea behind the implementation of these measures is based upon the theory that the defendant's right to liberty is of primary importance; this right may be limited only in inevitable cases, and detention, as the most severe measure, is to be imposed only in strictly limited and necessary cases. House detention as a measure and house imprisonment as an alternative sanction bear significant resemblance, due to the fact that both of them are implemented within the premises of the defendant's or convict's house, together with the fact that control over the implementation of these measures has, so far, been performed by police officers. This means that with both measures the convicted person or the detainee must not leave the premises of the house or residence, while supervision and control of proper implementation, until the enactment of the Law on Probation, was assigned to police officers. Granting the implementation of house imprisonment to the probation service is far more efficient and effective, since probation officers have additional knowledge and training compared with regular police officers, enabling them to determine whether the detainees or convicted persons are law-abiding and obey the limitations and restrictions imposed by the court with these measures. In addition, police officers are generally not sufficiently trained for these specific duties, which leaves them unprepared for the possible obstructions or the actual needs of the detainees or convicted persons; this means that they cannot provide proper support for, or monitoring of, the less severe measures for providing the defendant's presence. 3.2.2. Similar arguments can be made regarding the implementation of electronic monitoring, which is defined as a measure supporting the implementation of house detention within the provisions of Article 163 of the CPC. Unfortunately this measure is also not covered by the provisions of the Law on Probation, despite the fact that the probation service is the most suitable state agency for undertaking these activities. In addition, the implementation of electronic monitoring within the CPC remains vaguely regulated, since the CPC does not provide further provisions regarding this issue, and it will probably have to be regulated by additional bylaws or other laws.
This simple "face-to-face" comparison between these alternative sanctions and the preventive measures for providing the defendant's presence during the criminal trial reveals that, despite significant differences in the purpose of their implementation, their practical implementation is the same. However, leaving these measures to be implemented by two, or sometimes three, different state agencies - one of them the probation service, supported with additional training in meeting the convicted persons' personal needs and characteristics, and the others the police and the courts, which have no such understanding of the defendants' personal needs and characteristics - opens the floor for unequal and erroneous implementation. Due to these facts, we deem that the implementation of the preventive measures as regulated within the CPC should be delegated to the probation service, since this service, if properly staffed with trained employees, should implement these measures with greater success, particularly by taking into consideration the needs and individual characteristics of the defendants, without disregarding the aim of the criminal justice process. Furthermore, testing a similar measure during the criminal trial could provide significant insight for the law-enforcement agencies into the effectiveness of this measure for the specific person, should it be imposed at the end of the criminal trial as a sanction. This means that the effectiveness of a measure imposed on the defendant - his/her preparedness to follow instructions, obey certain rules, etc. - can be evaluated during the early stages of the criminal procedure. On the other hand, imposing these measures automatically as a sanction, simply because they were implemented as a measure during the criminal trial, without real insight into the defendant's behavior during the imposition of this measure within the criminal trial, is also not an acceptable or desirable practice. One must also distinguish the defendant's position during the trial, where he/she is presumed innocent, from that at the end of the trial, where he/she is proven to be guilty. This distinction is comprehensively elaborated in the case law of the European Court of Human Rights, in several judgments in which the court elaborates the reasoning behind the implementation of these measures during the trial and, as sanctions, at the end of the trial. Finally, establishing the implementation of these two types of measures "under the hood" of one state agency will increase the imposition and implementation of these measures and sanctions, since the judges will be confident that there is one state agency which undertakes all the necessary preconditions for their proper implementation. Furthermore, in correlation with the risk evaluation schemes, judges would be more certain that they have made the right selection and a proper decision regarding special and general prevention, as the general aim of the criminal justice process. CONCLUSION The enactment of the new Law on Probation in the Republic of North Macedonia had long been expected to be an effective tool for judges for the implementation of alternative sanctions and of the less severe measures for providing the defendant's presence as an alternative to detention. Unfortunately, the Macedonian Law on Probation has failed to meet these expectations.
This is due to the fact that the Macedonian legislator, with the enactment of the Law on Probation, has omitted to regulate the implementation of the alternatives to detention as measures for providing the defendant's presence during criminal trials, despite the fact that this part of the criminal justice system can be considered a genuine area of jurisdiction of the probation services. This is based upon the great resemblance between the implementation and the essence of the alternative sanctions and of the measures for providing the defendant's presence which are less severe than detention during the trial, also considering the positive regulation within the Macedonian criminal justice system. Bearing in mind the comparative experience from several EU member states, we can conclude that probation services are usually a preferable choice over the police for the implementation of these less severe measures for providing the defendant's presence. This is due to the fact that probation services are, generally, more specialized, better trained, better staffed and better equipped for undertaking these activities than the police or other state agencies. Finally, in order to improve the practical implementation and increase the use of these measures less severe than detention, it is necessary to introduce amendments to the Law on Probation which will grant the Macedonian probation service jurisdiction over the implementation of these measures. Amendments should also be made to the Criminal Procedure Code, delegating the duty of implementing these less severe measures for providing the defendant's presence to the probation service.
2019-07-21T18:03:47.183Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "18b7fb27d3ca20146b9d811ce650665166d79a3e", "oa_license": "CCBYNC", "oa_url": "https://hrcak.srce.hr/ojs/index.php/eclic/article/download/9012/5100", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "7f2b103cf4996425959b0aa8643035eddc64cdf6", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Political Science" ] }
229371744
pes2o/s2orc
v3-fos-license
An Unusual Case of Cholestatic Hepatitis due to Light-Chain Deposition Disease Light-chain deposition disease (LCDD) is a rare paraproteinaemia characterized by the deposition of monoclonal immunoglobulins with a non-fibrillar structure and hence Congo red negative deposits. Kidney disease is the most frequent manifestation, but other organs may also be involved. A 70-year-old man with hypertension and mild chronic renal failure presented with hepatomegaly without splenomegaly. His renal and liver tests rapidly worsened. Serum electrophoresis and immunofixation identified a monoclonal kappa light-chain gammopathy, with an excess of serum free kappa light chains. The bone marrow biopsy showed interstitial infiltration of plasma cells consistent with multiple myeloma at an initial phase. A periumbilical fat biopsy was negative. Echocardiography demonstrated an infiltrative cardiac disease. Biopsies of the duodenal mucosa showed fragments with eosinophilic material (Masson's staining), with atrophic crypts and chronic inflammation in the lamina propria. Staining for amyloid was negative. There was strong positivity for kappa light chains, compatible with LCDD. A liver biopsy confirmed this finding. Therapy with dexamethasone and bortezomib improved the clinical state and the hepatic and renal laboratory tests. Chemotherapy based on novel anti-myeloma agents should be rapidly considered in LCDD patients with severe organ involvement. Introduction Light-chain deposition disease (LCDD), heavy-chain deposition disease, and light- and heavy-chain deposition disease are a rare group of paraproteinaemias characterized by the deposition of monoclonal immunoglobulins with a non-fibrillar structure and hence Congo red negative deposits [1]. The diagnosis of LCDD requires histological demonstration of monotypic light-chain (LC) deposition on immunofluorescence microscopy and ultrastructural analysis of the involved organs or tissues. LCDD can occur in the context of isolated monoclonal gammopathy or of symptomatic multiple myeloma and Waldenström's macroglobulinemia. Light chain deposits are usually of the κ (kappa) isotype and can affect almost all organs [2]. Kidney disease is the most frequent manifestation, resulting in chronic kidney failure with glomerular proteinuria and sometimes nephrotic syndrome [3], but the heart, liver [4], gastrointestinal tract, and peripheral nerves may also be involved. Liver involvement has rarely been reported in LCDD in asymptomatic patients; in symptomatic ones, LCDD-associated liver involvement mainly manifests as cholestatic hepatitis and is associated with high mortality [5]. We report in this paper a patient with myeloma-associated LCDD who developed rapidly progressive liver and renal failure secondary to κ-light chain deposition, which rapidly recovered after chemotherapy. The patient has given his written informed consent to the publication of his case. Case Report A 70-year-old man with hypertension, kidney stone disease and mild chronic renal failure was admitted to our department with asthenia and sudden weight loss. Physical examination showed hepatomegaly without splenomegaly. A liver ultrasound confirmed hepatomegaly with mild hepatic steatosis and a non-homogeneous echostructure with a starry-sky appearance. There was no evidence of biliary obstruction, and the kidneys had a normal size without urinary tract obstruction. Liver stiffness was markedly increased (Fibroscan©: 53.3 kPa, IQR 18).
Blood tests showed: serum creatinine 2.3 mg/dL, ESR 120 mm/h, γGT 2003 IU/L, alkaline phosphatase (P-ALC) 732 IU/L, fibrinogen 700 mg/dL, and the presence of a monoclonal IgA kappa component of 14 g/L. Baseline liver tests, serum calcium, and blood coagulation parameters were normal. There was no history of alcohol abuse. Serological tests for hepatitis A, B and C, Epstein-Barr virus, cytomegalovirus, and herpes simplex virus were negative. Serum electrophoresis and immunofixation identified a monoclonal kappa LC gammopathy, with a serum free kappa light chain excess of 47 mg/L and a kappa/lambda ratio of 2.76. 24-h proteinuria was 1.71 g; Bence-Jones proteinuria was negative. Whole-body radiological evaluation did not demonstrate osteolytic lesions. The bone marrow biopsy showed interstitial infiltration (between 10 and 20%) of plasma cells, consistent with a plasma cell dyscrasia, most likely multiple myeloma at an initial phase. Moreover, we performed a periumbilical fat biopsy, which was negative on Congo red staining, with no features of amyloid deposits. Transthoracic echocardiography demonstrated moderate hypertrophic cardiomyopathy (no pulmonary hypertension), with mainly septal evidence of infiltrative cardiac disease (left ventricular ejection fraction 60%) and organized pericarditis adherent to the right ventricle (thickness 14 mm), without signs of compression of the cardiac chambers. The patient also underwent gastroscopy, and the biopsies of the duodenal mucosa showed fragments with eosinophilic material at the level of the lamina propria (Masson's staining), with atrophic crypts and chronic inflammation in the lamina propria (Fig. 1a, 1b, 1c). Congo red staining for amyloid was negative. The search for amyloid A and P was negative, but there was strong positivity for kappa light chains (+++) on immunohistochemistry, most compatible with LCDD. A liver biopsy was also performed, which confirmed the presence of amorphous eosinophilic deposits at the sinusoidal level, associated with moderate atrophy of the hepatic parenchyma (Fig. 2a, 2b, 2c). The deposits were Congo red negative, kappa light chain positive (+++), and PAS negative. We concluded that this was LCDD with hepatic, gastrointestinal, probably renal, and cardiac involvement, associated with IgA kappa myeloma. We started therapy with dexamethasone and bortezomib, with improvement of the hepatic and renal laboratory tests. We observed a progressive and complete recovery of the cholestatic and cytolytic hepatitis over the next 6 months, with mild improvement of the renal failure. Discussion LCDD is a rare disease of non-fibrillar deposition of monoclonal light-chain immunoglobulins, usually manifesting in the fifth or sixth decade of life with a male predominance; its incidence is unknown [6]. In LCDD, the monoclonal light chains are of the kappa subtype in 92% of cases, and it is typically associated with multiple myeloma or other lymphoproliferative disorders [6]. LCDD occurs in about 5% of patients with multiple myeloma, which is typically reported as the underlying disease in these patients [7,8]. However, recent reports indicate that a high proportion of LCDD cases are now found in patients without a symptomatic plasma cell disorder, who meet the new criteria for monoclonal gammopathy of renal significance [9]. Less frequently, LCDD may occur in the context of a lymphoid hemopathy such as Waldenström disease.
Unlike in AL amyloidosis, LC deposits in LCDD, mostly of the kappa isotype, do not stain with Congo red, and show a typical linear histological pattern along basement membranes on immunofluorescence microscopy, with a characteristic powdery punctate appearance on electron microscopy [10,11]. Based on histological and postmortem studies, liver involvement has been considered to be frequent in LCDD, but severe liver complications have rarely been reported [12]. The principal presentation was intrahepatic cholestasis, associated with cytolysis in 33% of cases. The most frequent histological finding was linear deposits of monoclonal LC in the sinusoidal or perisinusoidal spaces, with a predominance of the kappa isotype, as in our patient. A cardiac origin was also unlikely to account for the liver failure, given the absence of right ventricular heart failure on echocardiography. A transient septic state may have contributed to worsening liver and kidney function, but its resolution and the liver biopsy results clearly showed evidence of LC deposition. We finally regarded the liver and kidney failure as a specific manifestation of the disease in our patient. Steroids and melphalan, high-dose melphalan, autologous stem cell transplantation and, more recently, bortezomib-based chemotherapy are some of the treatment options [13]. Chemotherapeutic regimens containing the proteasome inhibitor bortezomib have shown efficacy and a good tolerance profile in both AL amyloidosis [14] and Randall-type MIDD, but their effect on the clearance of LC tissue deposits, although plausible, remains to be demonstrated. Another study reported the disappearance of nodular mesangial lesions and of kappa light chain deposits in a patient with kidney LCDD under long-term chemotherapy [15]. Conclusion These data indicate that chemotherapy based on novel anti-myeloma agents should be rapidly considered in LCDD patients with severe organ involvement, in order to rapidly induce efficient and sustained suppression of the pathogenic monoclonal LC. Our case confirms that liver involvement may be an important complication that requires prompt diagnosis and therapy.
Software Defined Network of Video Surveillance System Based on Enhanced Routing Algorithms

Software Defined Networking (SDN) is a new technology that separates the control plane from the data plane. SDN enables automation and programmability faster than traditional networks, and it supports Quality of Service (QoS) for video surveillance applications. One of the most significant issues in video surveillance is how to find the best path for routing packets between the source (IP cameras) and the destination (monitoring center). A video surveillance system requires fast transmission, reliable delivery, and high QoS. To improve the QoS and achieve the optimal path, the SDN architecture is used in this paper, together with different routing algorithms applied in successive steps. First, we evaluate video transmission over SDN with the Bellman-Ford algorithm. Then, because of the limitations of the Bellman-Ford algorithm, the Dijkstra algorithm is used to change the path when congestion occurs. Furthermore, the Dijkstra algorithm is used with two controllers to reduce the time consumed by the SDN controller. The POX and Pyretic SDN controllers are used such that the POX controller is responsible for network monitoring, while the Pyretic controller is responsible for the routing algorithm and path selection. Finally, a modified Dijkstra algorithm is proposed and evaluated with two controllers to enhance the performance. The results show that the modified Dijkstra algorithm outperforms the other approaches with respect to the QoS parameters.

Introduction

Video surveillance is critical for many aspects of life. The main objective of a surveillance system is to keep people safe and to minimize the human dangers associated with illegal or criminal activity. Video surveillance frameworks are very significant in our daily lives owing to the number of applications they make possible. The motivations for deploying such frameworks differ, ranging from security demands and military applications to scientific purposes (1). A video surveillance system that uses SDN comprises a number of IP cameras, OpenFlow switches, a monitoring center, and a controller. The objective of such a framework is to watch and monitor a predefined place. IP cameras capture video and send the video stream through the network to the monitoring center. The controller's policy over the network is responsible for finding the best path between the IP cameras and the monitoring center. The controller then sends the OpenFlow tables, including the information about the chosen path, to the OpenFlow switches (2). The Open Networking Foundation (ONF) (3) defines SDN as follows: "In the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications." (4). The SDN architecture consists of three layers. The first layer (infrastructure layer) consists of both physical and virtual network devices. The second layer (control layer) contains a centralized control plane, which provides a centralized global view of the entire network. The third layer (application layer) contains the network services and applications used to interact with the control layer (5). SDN uses the OpenFlow protocol to interface with OpenFlow switches; it allows the controller and all the switches to understand each other (6).
In computer networks, routing is performed by defining flow rules in a routing table; these rules contain the source and destination IP addresses and MAC addresses. When a packet arrives at a device, the device checks whether a matching entry is available in the flow table and takes the corresponding action (forward, reject, or send to the controller) as per the rules set by the routing protocol (6). The routing time of SDN networks is lower than that of traditional networks: as the number of nodes N increases, a conventional network consumes more time to change the path, while SDN requires less time (7). The Bellman-Ford algorithm uses relaxation to compute single-source shortest paths on a graph, as applied in (8). Its time complexity is O(N³), so it consumes more time to find all the paths (9). Because a surveillance system should be fast and reliable, the routing algorithm must choose the path in less time. Consequently, the Dijkstra algorithm is more suitable than Bellman-Ford for a video surveillance system: its time complexity is O(N²) (9), which is lower than that of the Bellman-Ford algorithm.
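To make the comparison concrete, the following is a minimal, self-contained sketch of the heap-based Dijkstra algorithm in plain Python (the language of the POX and Pyretic controllers used later). The topology and node names are illustrative only, chosen to resemble the five-switch network used in the evaluation; this is not controller code.

import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbour: link_weight}}. Returns (dist, prev) maps."""
    dist = {n: float("inf") for n in graph}
    prev = {n: None for n in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                          # stale heap entry, skip it
        for v, w in graph[u].items():
            if d + w < dist[v]:               # relaxation step
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

def shortest_path(prev, target):
    """Walk the prev map backwards to rebuild the path to target."""
    path = []
    while target is not None:
        path.append(target)
        target = prev[target]
    return path[::-1]

# Illustrative five-switch topology with unit link weights:
topo = {
    "S1": {"S2": 1},
    "S2": {"S1": 1, "S3": 1, "S4": 1},
    "S3": {"S2": 1, "S5": 1},
    "S4": {"S2": 1, "S5": 1},
    "S5": {"S3": 1, "S4": 1},
}
dist, prev = dijkstra(topo, "S1")
print(shortest_path(prev, "S3"))   # ['S1', 'S2', 'S3']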
Related Works

Different theories exist in the literature regarding the evolution of video surveillance systems and their relation to routing techniques. A considerable amount of literature has been published on how captured video can be transmitted over traditional networks; there are relatively few published studies in the area of video transmission over SDN. Panwaree et al. (10) evaluated video streaming over two types of OpenFlow-enabled network testbeds (a Mininet-emulated network and an Open vSwitch PC cluster). The authors used a POX controller in both setups and the VLC media player on both the server and client sides, with a shortest-path algorithm for routing. Harold and Arjan (11) made three contributions: first, they presented video over software-defined networking (V-SDN), a network construction that selects the best path using a network-wide view; second, they described the V-SDN protocols used by the designer to obtain QoS information from the network; finally, they presented the results of applying a system model and analyzed the system behavior using message complexity. The authors did not state the type of controller used in the system. A routing protocol was used to find the best path between the IP cameras and the monitoring center.

System Model

The proposed system is emulated using the Mininet emulator, a software emulator for prototyping and running network topologies. In particular, two SDN controllers are used: the POX and Pyretic controllers, both of which can work with OpenFlow switches. Compression and encoding are applied to the video before it is transmitted. Fig. 1 presents the block diagram of the proposed video surveillance system.

SDN Configuration

The SDN controller defines the set of flows permitted in the SDN data plane. Each flow in the flow table must first get permission from the controller, which confirms that the communication is permissible under the network rules (5). The SDN consists of three main modules: the topology discovery module, the statistics gathering module, and the route computation module (15). The SDN controller asks the OpenFlow switches for configuration information (topology discovery module), consisting of the operational ports and their MAC addresses, using an OFPT_FEATURES_REQUEST message. Topology discovery also relies on OFPT_PACKET_OUT and OFPT_PACKET_IN messages. The controller sends Link Layer Discovery Protocol (LLDP) packets to all ports of each OpenFlow switch using OFPT_PACKET_OUT; each message carries an LLDP packet holding the information needed to direct it to the connected port. The switches return the LLDP packets to the controller inside OFPT_PACKET_IN messages; each such packet contains the switch ID and the ingress port ID (16). The controller thus acquires complete information about the topology and uses the routing algorithm to discover the shortest path from each switch to the other switches. The controller then builds the flow tables for all switches and distributes them via the OpenFlow protocol. An OpenFlow switch contains three layers: the OpenFlow protocol API, the abstraction layer, and the software layer. The OpenFlow protocol is responsible for communication between the OpenFlow switches and the SDN controller. The abstraction layer contains one or more flow tables. The last layer implements the packet-processing function, i.e., the packet handling performed in the virtual switch (5). The flow tables are the essential data structures in an OpenFlow switch; they allow the switch to evaluate received packets and apply the appropriate decision (17). A flow table contains a number of flow entries, each consisting of three components: rule, actions, and status. The rule component consists of fields that are compared against the incoming packet (source IP and MAC, destination IP and MAC, etc.), covering the link, network, and transport layers. The action can be one of several decisions (a sketch of such a flow entry follows the list):
1. Forward the received packet to a specific port.
2. Forward the received packet to the controller.
3. Drop the received packet.
4. Flood the received packet to all available ports.
5. Send the packet to the normal processing pipeline.
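The following is a simplified, illustrative model of an OpenFlow-style flow entry and the lookup a switch performs on an incoming packet. The field and action names are chosen for readability and do not follow the exact OpenFlow structure definitions.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict                  # rule fields, e.g. {"src_ip": ..., "dst_ip": ...}
    actions: list                # e.g. ["output:2"], ["controller"], or ["drop"]
    packet_count: int = 0        # per-entry status / statistics counter

def handle_packet(flow_table, packet):
    """Return the actions of the first entry whose match fields all agree
    with the packet; on a table miss, send the packet to the controller."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            entry.packet_count += 1
            return entry.actions
    return ["controller"]        # table miss: ask the controller for a rule

table = [
    FlowEntry(match={"dst_ip": "10.0.0.8"}, actions=["output:2"]),
    FlowEntry(match={"dst_ip": "10.0.0.9"}, actions=["drop"]),
]
print(handle_packet(table, {"src_ip": "10.0.0.6", "dst_ip": "10.0.0.8"}))  # ['output:2']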
The Network Topology and Video File

The network of the proposed video surveillance system is created in Mininet. The switches connect the hosts (the cameras) to each other under an active SDN controller. The switches used are Open vSwitch (OVS) instances; OVS is a production-quality switch designed to enable large-scale network automation through programmatic extension, while still supporting standard interfaces and protocols. The proposed system follows the same steps and method used by (8) to evaluate the performance metrics. Figure 2 shows the network topology of the video surveillance system. Host6 (H6) sends the video file to Host8 (H8), and Host7 (H7) sends background traffic to Host9 (H9) to create network congestion. The transmitted video has a frame size of 352 x 288 pixels and is encoded at 30 fps using an H.264/SVC codec, with 6 clips of 10 seconds each (a total 60-second video of 1800 frames and 5364 packets). The network settings are borrowed from the reference paper so that its results can be reproduced; Table 1 lists the emulation parameters, such as bandwidth, delay, and the details of the video format. H6 divides each large video frame into many fragments, giving a total of 5364 video packets. The controller executes the Bellman-Ford algorithm to find the shortest path for each transmission. The Bellman-Ford algorithm finds the shortest path regardless of the link utilization status of both the background traffic and the video flow. Therefore, the path for the video flow is H6, switch-1, switch-2, switch-3, H8, and the path for the background traffic is H7, switch-2, switch-3, H9. Consequently, the link between switch-2 and switch-3 becomes congested.

Figure 3. Bellman-Ford over the SDN controller

This congestion slows the network: the end-to-end delay and Packet Loss Ratio (PLR) increase, and the peak signal-to-noise ratio (PSNR) decreases. All these factors reduce the efficiency and effectiveness of a video surveillance system. A possible solution is to find another algorithm that improves sending and receiving video for the surveillance system.

The Dijkstra Algorithm over One SDN Controller (Pyretic Controller)

The first scenario uses the Dijkstra algorithm instead of the Bellman-Ford algorithm, with one controller and the same network, hosts, and link settings (Fig. 5). An important implementation step is to use an appropriate data structure to store the network information (19). Link weights reflect load, which allows the transmission path to change: if the current path is congested, the system can switch to another path for fast transmission. When links are under heavy load, their associated weights are increased, so those links have a lower probability of being selected for data transmission. The logs of the SDN controller running the Dijkstra algorithm show that the original path of the video flow is H6, switch-1, switch-2, switch-3, H8 before the insertion of background traffic. When the background traffic is inserted into the network, the link between switch-2 and switch-3 becomes congested. Because of this congestion, the Dijkstra algorithm changes the transmission path to H6, switch-2, switch-4, switch-5, switch-3, H8 to avoid the congested link.

Figure 5. Dijkstra algorithm with one controller

Although a few video packets are still lost during the path-change process, the results obtained are still better than those of the Bellman-Ford algorithm. Fig. 6 shows the pseudocode of the Dijkstra algorithm.

The Dijkstra Algorithm with Two SDN Controllers (POX and Pyretic)

This section discusses the Dijkstra algorithm with two controllers (POX and Pyretic). The main reason for using two controllers is to divide the jobs between them: the Pyretic controller is responsible for routing (the Dijkstra algorithm), while the POX controller is responsible for monitoring. The network performance improves when using two controllers because the path selection process is sped up. Fig. 7 shows the topology when using two SDN controllers.

Figure 7. Dijkstra algorithm with two controllers

To achieve a given level of performance and scalability, a multi-controller architecture is used, in which a set of controllers work together. A multi-controller system can be designed in two architectures: flat or hierarchical. In a flat (horizontal) architecture, the SDN controllers are located horizontally on one level; the control plane consists of a single layer, and each controller has the same responsibilities at the same time and a partial view of its network. In a hierarchical (vertical) architecture, the SDN controllers are arranged vertically (20). The proposed system uses the flat (horizontal) architecture because it has several advantages, such as reduced control latency and improved resiliency (21).
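The following is a schematic sketch of this division of labour: a monitoring loop on the POX side periodically samples per-link throughput and, when the free bandwidth on a monitored link drops below the 1 Mbps threshold used in the evaluation below, asks the routing side (Pyretic) to recompute the path. The helpers get_throughput_bps and request_reroute are hypothetical stand-ins for the OpenFlow port-statistics machinery and the inter-controller channel; they are not real POX or Pyretic API calls.

import time

THRESHOLD_BPS = 1_000_000        # the 1 Mbps threshold used in the evaluation
POLL_INTERVAL = 1.0              # seconds between statistics polls

def monitor(links, capacity_bps, get_throughput_bps, request_reroute):
    """Poll each monitored link; when its free bandwidth falls below the
    threshold, ask the routing controller to compute a new path."""
    while True:
        for link in links:                           # e.g. [("S2", "S3")]
            free = capacity_bps[link] - get_throughput_bps(link)
            if free < THRESHOLD_BPS:
                request_reroute(link)                # signal the routing side
        time.sleep(POLL_INTERVAL)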
First, when H6 starts to transmit the video file to H8, the Pyretic controller uses the Dijkstra algorithm to find the shortest path from source to destination. The algorithm finds S1-S2-S3 as the shortest path. Then, when H7 starts to transmit background traffic to H9, the algorithm chooses S2-S3 as the shortest path from H7 to H9, and the link between S2 and S3 becomes congested. At the same time, the POX controller checks the link status by monitoring the link bandwidth. If the POX controller finds that the available bandwidth is less than 1 Mbps, it sends a command to the Pyretic controller to find a new path.

The Modified Dijkstra Algorithm with Two SDN Controllers

In this section, a new approach that modifies the Dijkstra algorithm is discussed. This approach is implemented on the same topology, and the proposed system follows the same steps and method to evaluate the performance metrics. The Pyretic controller uses the Dijkstra algorithm to find the shortest path from source to destination, which is S1-S2-S3. Then, when H7 starts to transmit background traffic to H9, the algorithm chooses S2-S3 as the shortest path from H7 to H9, and the link between S2 and S3 becomes congested. At the same time, the POX controller detects that the available bandwidth of the link on the video path is less than 1 Mbps. The link weight is then increased and, consequently, the POX controller relieves the video flow by removing the link that causes the congestion from the graph used for the background traffic and running the Dijkstra algorithm to find a new path for H7-H9. If a new path is found, the traffic flow of H7-H9 is moved to it, while the original video keeps its original transmission path (H6-H8), so the congestion problem is solved. Fig. 8 shows the execution of this modification: the green rectangle represents the original paths for H6-H8 (S1-S2-S3) and for the background traffic H7-H9 (S2-S3), and the red rectangle represents the new path for the background traffic (S2-S4-S5-S3). A sketch of this modification is given below.
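The following sketch illustrates the modification just described, reusing the dijkstra and shortest_path helpers and the example topo from the earlier sketch. The congested link is removed from the graph seen by the background flow only, and Dijkstra is re-run so that the background traffic moves to a new path while the video keeps its original route; the endpoints here are the switches where the background traffic enters and leaves the network.

from copy import deepcopy

def reroute_background(topo, congested_link, bg_in, bg_out):
    """Prune the congested link and recompute the background flow's path."""
    u, v = congested_link                    # e.g. ("S2", "S3")
    pruned = deepcopy(topo)
    pruned[u].pop(v, None)                   # remove both directions
    pruned[v].pop(u, None)
    dist, prev = dijkstra(pruned, bg_in)
    if dist[bg_out] == float("inf"):
        return None                          # no alternative path exists
    return shortest_path(prev, bg_out)

# With S2-S3 congested, the background flow H7->H9 (entering at S2,
# leaving at S3) is moved onto S2-S4-S5-S3, matching Fig. 8:
print(reroute_background(topo, ("S2", "S3"), "S2", "S3"))  # ['S2', 'S4', 'S5', 'S3']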
Performance Metrics

In this section, the metrics used to compare the scenarios are defined. The performance metrics are:
1. End-to-End Delay.
2. Packet Loss Rate.
3. Peak Signal-to-Noise Ratio.

End-to-End Delay. The end-to-end packet delay is calculated as:

Delay[packet number] = Receiving Time - Sending Time

The receiving time is found in the file received by the destination host; for example, when sending from H6 to H8, the received file on H8 contains a receiving-time column. The sending time is found in the sent file on H6. The proposed system uses a program written in C to subtract the sending time from the receiving time.

Packet Loss Rate. The Packet Loss Rate is the second metric, used to compare the results with (8). It is calculated as:

PLR = ((Total Packets - Received Packets) / Total Packets) * 100%

The total number of packets in the comparison paper is 5364. The packet-number column is found in the file received at the destination, so the number of packets that arrived over the network can be counted and subtracted from the total packets to obtain the number of packets lost in the network, which is then divided by the total packets.

Peak Signal-to-Noise Ratio. The Peak Signal-to-Noise Ratio is the third metric used for performance evaluation and comparison. The PSNR "is the ratio between the maximum possible signal in the video frame and the noise, which corrupts the signal accuracy" (13). It is calculated as:

PSNR = 20 * log10(MAX) - 10 * log10(MSE)

where MAX is the maximum possible pixel value of the frame and MSE is the mean squared error between the original and received frames. The tool prepare-received-trace1 converts the received file into the format required by the SVEF tools; its output is a frame-level received trace. The frame-level received trace, the original NALU trace, and the traffic trace are then processed by prepare-received-trace2 to obtain the received NALU trace (Fig. 9). The received NALU trace file is fed into the nalufilter tool, which removes late frames and frames that cannot be decoded because of frame dependencies. The JSVM software (version 9.19.8) cannot decode video packets affected by out-of-order delivery, corruption, or missing NALUs (22). Therefore, SVEF uses the filtered packet trace file to extract the corresponding packets from the original H.264 video file by means of the BitStreamExtractorStatic tool, whose output is decoded by the H.264 decoder to create a YUV file. The PSNR calculation requires the original and received YUV files to contain the same number of video frames; missing frames are therefore concealed by copying the previous frame, using a program written in C called frame filter. Finally, the original YUV file and the received YUV file (the output of frame filter) are used to calculate the PSNR.
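Before turning to the results, the following is a minimal sketch of how the three metrics defined above can be computed from per-packet timestamps and decoded frame data; the trace-file parsing and the SVEF pipeline details are omitted, and the example numbers are taken from the evaluation.

import math

def end_to_end_delays(send_times, recv_times):
    """Delay per received packet: receive time minus send time (seconds)."""
    return {pid: recv_times[pid] - send_times[pid] for pid in recv_times}

def packet_loss_rate(total_packets, received_packets):
    """PLR = (total - received) / total * 100%."""
    return (total_packets - received_packets) / total_packets * 100.0

def psnr(orig_frame, recv_frame, max_value=255):
    """PSNR = 20*log10(MAX) - 10*log10(MSE) over one luminance frame,
    with frames given as flat sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(orig_frame, recv_frame)) / len(orig_frame)
    if mse == 0:
        return float("inf")                  # identical frames
    return 20 * math.log10(max_value) - 10 * math.log10(mse)

# The modified-Dijkstra case: 168 of 5364 packets lost, i.e. about 3%:
print(packet_loss_rate(5364, 5364 - 168))    # ~3.13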
Results

In this section, the results of the four scenarios are discussed: first, Bellman-Ford applied over SDN; second, the Dijkstra algorithm with one SDN controller; third, the Dijkstra algorithm with two SDN controllers; and finally, the modified Dijkstra algorithm with two controllers.

The first performance metric is the end-to-end delay. The delay shown in Fig. 10, part A, is the delay with the Bellman-Ford algorithm. When the video transmission starts, the delay reaches 0.06 sec; when the background traffic starts, the network becomes congested, so the delay rises to 0.1 sec and stays at this value until the end of the transmission. Part B reveals that the delay of the Dijkstra algorithm is lower than that obtained by the Bellman-Ford algorithm. This improvement is achieved because the Dijkstra algorithm chooses another path for transmission when path congestion occurs. The delay starts at 0.04 sec while the video file is transmitted over the network. At packet 1200, the background traffic starts, and the congestion raises the end-to-end delay up to 0.08 sec for many packets. The delay then decreases to 0.03 sec around packet 4000, thanks to the rerouting capability of the Dijkstra algorithm. Part C shows the delay of the Dijkstra algorithm with the two controllers. The end-to-end delay improves when using two controllers because the flat architecture of multiple controllers reduces the controller latency. The delay is almost steady at 0.025 sec when only the video file is being sent. Then, at packet 2500, the background traffic starts and the S2-S3 link becomes congested, so the end-to-end delay rises to 0.06 sec. The delay then drops when the controller reroutes the video file onto another path. The highest point in this case, 0.06 sec, is lower than with the Dijkstra algorithm and one controller; as a result, two controllers outperform one.

Figure 10. Mininet emulation results of the four scenarios over the SDN

Part D represents the delay of the modified Dijkstra algorithm, which begins at 0.015 sec. When the network becomes congested, the delay increases to 0.03 sec; the controller then removes the path that causes the congestion from the routing of the background traffic and adds a new path through switch-4, solving the congestion problem and reducing the delay to 0.02 sec. The highest value in the figure, 0.03 sec, is the lowest peak value of all the previous methods. Table 2 summarizes the end-to-end delay comparison for all scenarios.

The second performance metric is the PLR. The PLR comparison for all scenarios is given in Table 3. The Bellman-Ford algorithm produced a high loss rate of 21%, which makes it a poor approach to path selection for a video surveillance system. First, the PLR obtained by (8) was validated. According to the PLR equation above, the PLR is calculated by counting the number of packets that arrive at the destination, subtracting it from the total number of sent packets to obtain the number of packets lost in the network, and dividing by the total. This approach is applied to all scenarios, and the PLR values are shown in Table 3. The modified Dijkstra algorithm with two controllers is a good approach for path selection in a video surveillance system: it has a loss rate of 3%, losing only 168 packets.

The third performance metric is the PSNR. The PSNR of Bellman-Ford equals 35 dB at the start of transmission and falls to 10 dB after frame 154, when the congestion occurs, as shown in Figure 11(A). The PSNR performance when the Dijkstra algorithm is used with the Pyretic controller is shown in Figure 11(B); the PSNR clearly improves compared to the Bellman-Ford algorithm. The frames start at 35 dB and drop to 10 dB when the congestion occurs; the Dijkstra algorithm then recovers the PSNR to 15 dB, with an average PSNR of 15 dB. The PSNR for the Dijkstra algorithm with two controllers is shown in Figure 11(C). The PSNR is 35 dB when the video file starts to transmit over the network; then the background traffic starts and causes congestion, so the PSNR falls to 15 dB. After the controller reroutes the background traffic onto another path, the PSNR improves to 25 dB. The PSNR of the modified Dijkstra algorithm is shown in Figure 11(D): it starts at 35 dB, falls to 15 dB during the congestion, and then returns to 35 dB once the controller reroutes the background traffic onto a new path to resolve the congestion. Therefore, this method is the most suitable for video surveillance. The PSNR comparison is given in Table 4.

Figure 11. PSNR of the four scenarios

Conclusion

The proposed system improves video transmission in several steps. The following conclusions can be drawn from the current study. With traditional networking, networking functionality is generally carried out via hardware devices such as routers, switches, and firewalls, each of which must be manually configured by an IT administrator who is responsible for keeping every device up to date with the latest configuration settings. Software-defined networking finds solutions to these issues more quickly and overcomes the limitations of traditional networking: SDN separates the hardware from the software, i.e., it separates the control plane from the forwarding plane. The main disadvantage of the Bellman-Ford algorithm is that it does not consider link-load weightings and updates paths more slowly.
Therefore, the Dijkstra algorithm is used to enhance the video transmission. The reason for using the Dijkstra algorithm is that it considers the link status: the controller detects the congestion, the link weight is increased, and consequently the controller relieves the video flow by changing its path, while Bellman-Ford keeps the video transmission on the same path. The proposed system first uses one controller with the Dijkstra algorithm; this scenario demonstrates the application of Dijkstra with an SDN controller. The results show that the Dijkstra algorithm improves video transmission compared to Bellman-Ford: after congestion occurs, the Dijkstra algorithm attempts to find a new path to solve the problem, so the delay, PLR, and PSNR all improve. The video surveillance system then uses two controllers with the Dijkstra algorithm to improve the video performance by reducing latency and making the network management more flexible. The controllers are designed in a flat architecture to achieve scalability. The reason for this setup is that the two controllers cooperate with each other: the POX controller is responsible for monitoring the network, and the Pyretic controller is responsible for selecting the path according to the algorithm used. The main contribution of this study is the performance enhancement obtained by modifying the Dijkstra algorithm. It solves the congestion problem at the level of the flow tables located inside the devices (switches and routers). The modification is done by removing one of the flows sharing the congested delivery path, establishing a new path for it, and adding that path to the flow tables.
Maternal Obesity Is Associated with Alterations in the Gut Microbiome in Toddlers

Children born to obese mothers are at increased risk for obesity, but the mechanisms behind this association are not fully delineated. A novel possible pathway linking maternal and child weight is the transmission of obesogenic microbes from mother to child. The current study examined whether maternal obesity was associated with differences in the composition of the gut microbiome in children in early life. Fecal samples from children 18-27 months of age (n = 77) were analyzed by pyro-tag 16S sequencing. Significant effects of maternal obesity on the composition of the gut microbiome of offspring were observed among dyads of higher socioeconomic status (SES). In the higher SES group (n = 47), children of obese (BMI ≥ 30) versus non-obese mothers clustered on a principal coordinate analysis (PCoA) and exhibited greater homogeneity in the composition of their gut microbiomes, as well as greater alpha diversity as indicated by the Shannon Diversity Index and measures of richness and evenness. Also in the higher SES group, children born to obese versus non-obese mothers had differences in abundances of Faecalibacterium spp., Eubacterium spp., Oscillibacter spp., and Blautia spp. Prior studies have linked some of these bacterial groups to differences in weight and diet. This study provides novel evidence that maternal obesity is associated with differences in the gut microbiome in children in early life, particularly among those of higher SES. Among obese adults, the relative contribution of genetic versus behavioral factors may differ based on SES. Consequently, the extent to which maternal obesity confers measurable changes to the gut microbiome of offspring may differ based on the etiology of maternal obesity. Continued research is needed to examine this question as well as the relevance of the observed differences in gut microbiome composition for weight trajectory over the life course.

Introduction

Obesity is a substantial public health problem globally. In the US, it is estimated that 16.9% of children ages 2-19 years and 33.8% of adults ≥ 20 years are obese [1,2]. However, early life antecedents of obesity are not well delineated. In children under 3 years of age, the strongest predictor of obesity in adolescence and adulthood is parental obesity [3]. Compared to paternal obesity, maternal obesity has greater predictive value for the body mass index (BMI) of offspring through adolescence [4,5]. However, the relative influence of genetics versus environmental pathways in the transgenerational transmission of obesity from parent to child is unknown. A novel possible mechanistic pathway linking parental and child weight is the transmission of commensal microbiota via parental exposures, particularly maternal. The microbiota are a consortium of trillions of bacteria that are resident to a variety of human body niches [6]. The vast majority of these microbes reside within the gastrointestinal (GI) tract, where they form microbial communities whose structures are stable during periods of homeostasis and heavily involved in host metabolic and nutritional functions, including food digestion and vitamin synthesis [7,8]. Disruptions in the relative abundances of microbes that comprise these communities have been associated with obesity and high-fat diets [9][10][11][12][13][14]. For example, obese mice have abnormal levels of GI Firmicutes and Bacteroidetes, two primary phyla of the GI tract microbiota [12].
Such skewed bacterial abundances may lead to alterations in energy procurement from food and a related propensity toward obesity. When microbiota from obese mice are transferred into germ-free mice, recipient mice have increased body fat, providing strong evidence of a causal link between the microbiota and obesity [14]. Factors affecting the establishment of bacterial abundances in early life are not well understood. During birth, the neonate is rapidly colonized by maternal bacteria via vertical transmission from the gastrointestinal and reproductive tracts, as well as by environmental microbes [15][16][17]. In very early life, mothers are likely to be primary donors of bacteria through physical contact and breast milk. Demonstrating such maternal influence, at one and six months of age, infants of obese mothers have significantly different bacterial population abundances compared to infants of non-obese mothers [18]. Importantly, during the first year of life, the microbiota show great transience and volatility [19]. As solid foods are introduced to the diet, the structure of the microbiota stabilizes and begins to reflect the adult profile [20]. Thus, it is important to determine if maternal influences on gut microbial groups persist in children past early infancy despite competing factors. In addition, the recent advent of next-generation pyrosequencing allows for wider study of microbial communities than permitted by earlier methods, including denaturing gradient gel electrophoresis (DGGE) and polymerase chain reaction (PCR). Utilization of this technology permits the analysis of entire bacterial communities rather than examination of smaller classification subsets selected by a priori hypotheses. To our knowledge, pyrosequencing has not been used in studies associating parental obesity with child microbiota communities. Addressing these gaps in the literature, the current study examined the association between maternal obesity and the gut microbiota profiles of toddlers at approximately two years of age using pyrosequencing technology. We hypothesized that children born to obese mothers would have a significantly different gastrointestinal microbiota, as assessed using alpha and beta diversity measurements, when compared to children born to normal-weight mothers. We also hypothesized that differences in abundances of bacterial populations previously associated with obesity would be observed in children of obese versus non-obese mothers.

Study Design

We recruited 79 women with children approximately two years of age from the general community of Columbus, Ohio. Children were excluded if their mother reported the child had a major health condition or developmental delay. Children were also excluded if they were already toilet trained. Each woman completed an online questionnaire which included assessment of her health behaviors and exposures (e.g., medications) during pregnancy as well as health and feeding behaviors in her child. Within 7 days of completing the online questionnaire, each woman collected a stool sample from her child per the protocol detailed below. Two samples were removed from statistical analyses due to low sequence count (< 5,108), resulting in a final sample of 77 mother-child pairs. This study was approved by the Ohio State University Biomedical Institutional Review Board. All women completed written informed consent for themselves and provided written consent on behalf of their children. Women received modest compensation for their participation.
Data collection occurred from May 2011 to December 2012.

Parental Characteristics

Women reported information about their age, race (self and child's father), marital status, education level (self and child's father), and total family income per year. Body mass index (BMI; kg/m²) was calculated based on the provided maternal and paternal heights and weights. BMI values ≥ 30 were classified as obese.

Perinatal Health Information

Self-report data were collected regarding exposure to antibiotics during pregnancy and while breastfeeding (if applicable). With regard to birth outcomes, women reported the route of delivery (vaginal versus C-section), gestational age at the time of delivery, and the child's sex.

Child Diet and Growth

Women reported the occurrence and duration of breastfeeding and the age at which formula (if applicable), cereals/grains, fruits/vegetables, and meats were introduced as part of the child's diet. The current frequency of each food type was also reported, from less than once per month to two or more times per day. Women reported the number of times their child had been exposed to antibiotic medications, with completion of a full prescription course (e.g., 10 days) considered as one exposure. Women also reported child exposure to probiotics in capsule/supplement form or in formula or food which specified that it contained probiotics. Finally, to determine the child's growth trajectory, women reported their child's height and weight percentile at the most recent well-visit to the pediatrician. A weight/height ratio was calculated and children were categorized into three groups: those whose weight percentile was greater than their height percentile (n = 11), those in the same percentile bracket (n = 31), and those whose weight percentile was lower than their height percentile (n = 33).

Stool Sample Collection and Storage

Women were provided with sterile wooden applicators and sterile 50 ml plastic conical collection tubes for collection. They were instructed to sterilely collect the stool sample from the child's soiled diaper with the wooden applicator and place it in the collection tube. Samples were then stored at 4°C (i.e., refrigerated) for up to 24 hours until collection by study personnel from the participant's home or delivery by the participant to OSUWMC. In the latter case, women were instructed to transport samples in a cooler with ice. Upon arrival at the Wexner Medical Center, samples were placed in long-term storage at −80°C until pyrosequencing was conducted. The 16S rRNA primers were used for single-step, 30-cycle PCR with the following thermoprofile: an initial cycle of 94°C for 3 minutes; then 28 cycles of 30 seconds at 94°C, 40 seconds at 53°C, and 1 minute at 72°C; with a final 5-minute elongation step at 72°C. Amplicons were pooled at equivalent concentrations and purified (Agencourt Bioscience Corporation, MA, USA). Sequencing was performed with the Roche 454 FLX Titanium system using the manufacturer's guidelines.

Sequencing Analysis

Analysis was performed using the open-source software package Quantitative Insights Into Microbial Ecology (QIIME), v.1.7.0 [23]. Sequences were provided via a .fasta file and sequence quality was denoted with a .qual file. Barcodes were trimmed and low-quality reads were removed. An average quality score of 25 was used, with a minimum sequence length of 200 and a maximum length of 1000. No mismatches were allowed in the primer sequence.
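As a loose illustration of these filtering rules (average quality at least 25, length between 200 and 1000, and an exact primer match), a toy per-read filter might look as follows. This only mimics the behavior of QIIME's demultiplexing/quality-filtering step; the function and parameter names are illustrative, not the actual QIIME implementation.

def passes_filter(seq, quals, primer, min_len=200, max_len=1000, min_avg_q=25):
    """Return True if one read survives the filtering rules described above."""
    if not seq.startswith(primer):            # no mismatches allowed in primer
        return False
    if not (min_len <= len(seq) <= max_len):  # length window
        return False
    return sum(quals) / len(quals) >= min_avg_q

print(passes_filter("ACGT" * 60, [30] * 240, "ACGT"))   # True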
An average of 14,862 sequences was attained per sample, and a total of 77.06% of sequences passed quality filtering. Sequences were clustered at 0.97 similarity using UClust into operational taxonomic units (OTUs) [24]. A representative sequence was selected from each OTU and the RDP classifier was used to assign taxonomy to the representative sequence [25]. Sequences were aligned using PyNAST [26] against a Greengenes core reference alignment database [27], and an OTU phylogenetic tree was assembled based upon this alignment [28]. Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) was used to identify differences in predictive metagenome function [29]. In summary, OTUs were picked from a demultiplexed fasta file containing the sequences for all 77 subjects using the closed-reference procedure, against the Greengenes 13_5 reference database [30]. These OTUs were normalized by the predicted 16S copy number, and functions were predicted from these normalized OTUs with the use of the Greengenes 13_5 database for KEGG Orthologs. From this, a BIOM table containing the predicted metagenome for each sample was attained. Each sample was rarefied at 2,000,000 before further analysis. Downstream statistical analysis was performed using STAMP [31].

Statistical Analysis

The Shannon Diversity Index (SDI), a measurement of within-sample (alpha-diversity) community diversity, as well as Chao1 (estimates richness), equitability (measures evenness), and observed_species (calculates unique OTUs), were used to ascertain differences in alpha diversity based on maternal obesity status [32]. All alpha-diversity measurements were calculated with QIIME and significance was measured using a parametric t-test at a depth of 5,930 sequences for the comparison of all obese versus non-obese groups. Depths of 4,534 sequences for the comparison of maternal obesity within the high-income group alone, and 5,126 sequences within the low-income group alone, were also used. UniFrac unweighted distance matrices were calculated from the OTU phylogenetic tree for beta diversity analyses [33]. A sampling depth of 5,108 sequences/sample was used for beta diversity for all groups. The adonis statistic, available through the vegan package for the open-source statistical program R and further employed in QIIME, was used to measure differences in variance between two groups based upon their microbiota UniFrac distance matrices [34,35]. Groups were split based upon maternal and paternal BMI, as well as by income level, and differences in community structure were determined using adonis. The permdisp statistic, also available through vegan, was then performed to verify equal variances between groups dichotomized by obesity. Chi-square analyses and two-sample t-tests were used to determine the demographic and behavioral similarity between the maternal obesity groups to identify possible confounding factors. Additionally, Pearson's correlations, univariate analysis of variance (ANOVA), and regression analyses were used to examine associations between variables including maternal BMI, the child's weight/height ratio, and the SDI. The relative abundances of bacterial groups in samples from children of obese and non-obese mothers were compared using Mann-Whitney U-tests. All analyses were performed using SPSS v.21 (IBM, Chicago, IL). For predictive functional group analysis in STAMP, Welch's t-tests were used for two-group comparisons, while Kruskal-Wallis H-tests were used for multiple-group comparisons. P-values were corrected for multiple tests using the Benjamini-Hochberg method [36], with a q-value of 0.10.
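As an illustration, the following is a minimal sketch of the alpha-diversity measures named above, computed from a single sample's vector of OTU counts. The formulas are the standard ones (Shannon index, equitability as H divided by its maximum, and the Chao1 richness estimate); QIIME's implementations may differ in details such as log base and the bias-correction variant, so this is illustrative rather than a reimplementation.

import math

def shannon(counts, base=2):
    """Shannon Diversity Index: H = -sum(p_i * log(p_i))."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p, base) for p in props)

def equitability(counts, base=2):
    """Evenness: H divided by its maximum value, log(S), for S observed OTUs."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts, base) / math.log(s, base) if s > 1 else 0.0

def chao1(counts):
    """Richness estimate: S_obs + F1^2 / (2 * F2), with F1 singleton and F2
    doubleton OTUs; falls back to a bias-corrected form when F2 is zero."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + (f1 * (f1 - 1) / 2.0 if f2 == 0 else f1 ** 2 / (2.0 * f2))

otu_counts = [120, 40, 7, 1, 1, 2, 0, 30]    # toy OTU-table column for one sample
print(shannon(otu_counts), equitability(otu_counts), chao1(otu_counts))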
Results

To identify potential factors which may confound the relationship between maternal obesity and the composition of the child microbiome, we examined the demographic and behavioral similarity between obese and non-obese women (Tables 1 and 2). Obese and non-obese women did not differ significantly in race, marital status, maternal age at the time of delivery, antibiotic exposure during pregnancy or breastfeeding, or delivery route (vaginal versus C-section). Obese women had heavier male partners than did non-obese women, with BMIs of 31.20 ± 5.98 vs. 26.91 ± 4.60, respectively (t(75) = 3.49, p = 0.001). Obese women and their partners had completed less education than non-obese women and their partners (ps ≤ 0.014). However, women did not differ in annual household income based on obesity status (χ²(3) = 1.92, p = .59), although household income was significantly correlated with both maternal (r = .65, p < 0.001) and paternal education (r = .52, p < 0.001).

Maternal obesity and beta diversity in the child gut microbiome

Unweighted UniFrac distance matrices were used to assess differences between the microbial communities, known as beta diversity, in children of obese compared to non-obese mothers. Permutational multivariate ANOVA using adonis showed that children of obese versus non-obese mothers had a different microbiota community structure (r² = 0.01539, p = 0.044). However, this did not result in clustering of two distinct populations on a principal coordinate analysis (PCoA) (Fig. 1). To further explain the significant adonis statistic in the absence of obvious clustering, permdisp, a statistic that measures the extent to which variances in different populations are equivalent, was used to compare the two groups. Dispersion of the community structures of children born to obese versus non-obese mothers differed significantly, with greater variance among children of non-obese mothers (p = 0.035, F = 4.843). In contrast, there was no difference in between-sample community structure as measured via adonis in children of obese versus non-obese fathers (r² = 0.01214, p = 0.801). Next, we examined whether the strength of the association between beta diversity and maternal obesity differed among children of mothers from higher versus lower socioeconomic backgrounds. Analyses showed no main effects of socioeconomic indicators; neither maternal education (r² = 0.01267, p = 0.615) nor income level (r² = 0.01331, p = 0.409) was associated with shifts in the offspring microbial profile. Similarly, neither maternal education nor income was associated with clustering on a PCoA (Fig. S1). Next, the interaction of obesity status with both education (high school graduate or less versus college graduate or more) and income (< $50k versus ≥ $50k) was examined. An interaction effect between income and obesity status was observed: in the high-income group, a different microbiota community structure was seen in the children of obese versus non-obese mothers (r² = 0.02547, p = 0.041). However, in the lower-income group, no significant effects of maternal obesity on beta diversity were observed (r² = 0.03798, p = 0.139). Also, in dyads from high-income households, the microbiota of children of obese mothers had greater homogeneity among the samples compared to those from non-obese mothers (F = 11.942, p = 0.003).
Furthermore, clustering based on obesity status was observed using a PCoA in the high-income group only (Fig. 2A-B). Similar effects were seen when using education as an indicator of socioeconomic status. Among mothers with high education, children born to obese mothers had a different community structure than those born to non-obese mothers (r² = 0.02049, p = 0.045), and this was partly explained by significantly greater homogeneity in variance (F = 6.215, p = 0.02). In contrast, among children born to women with less education, there were no significant differences in beta diversity based on maternal obesity status (r² = 0.05327, p = 0.61). Thus, similar results were observed in relation to income and education as indicators of socioeconomic status. Compared to education level, income was more evenly distributed in the obese and non-obese groups, providing greater statistical power. Thus, all downstream analyses focused on income.

Maternal obesity and alpha diversity in the child gut microbiome

We next examined the relationship between maternal BMI and alpha diversity of the child microbiota. First, we examined the Shannon Diversity Index (SDI), a measure of the overall diversity within a microbial community. Two samples were below the threshold for the SDI, resulting in a sample of 75 for these analyses. Results showed that children of obese mothers had a significantly higher SDI than children of non-obese mothers (t(73) = 2.1, p = 0.04; Fig. 3A). Greater alpha diversity in children born to obese mothers was associated with greater equitability (t(73) = 1.96, p = 0.05; Fig. 3B) and a trend towards greater richness as estimated by Chao1 (t(73) = 1.83, p = 0.07; Fig. 3C). Furthermore, children of obese mothers had a higher number of unique OTUs as defined by the QIIME variable observed_species (t(73) = 2.25, p = 0.03; Fig. 3D). Next, we examined interactions between maternal socioeconomic status and obesity on alpha diversity of the child gut microbiome (Fig. 4). In contrast, there were no significant differences in either the Chao1 estimation or OTUs (i.e., observed_species in QIIME) between children born to obese or non-obese fathers (data not shown). When entered into a regression model together, maternal BMI remained a significant predictor of the SDI (b = 0.324, p = 0.008) while paternal BMI was no longer significantly associated (b = 0.085, p = 0.48), suggesting that maternal BMI was the critical predictor. In addition, univariate ANOVA demonstrated that the child weight/height ratio showed no association with the toddler SDI (F(2,72) = 0.58, p = .565). Moreover, maternal BMI remained a significant predictor after including the child's WHR in the model (b = 3.178, p = 0.002), indicating an effect of maternal BMI that was independent of the child's current body composition.

Maternal obesity and phylogenetic shifts in the child gut microbiome

We next examined phylogenetic shifts in the fecal microbiome of the children, to determine if differences in abundances of given genera were evident. An area graph of the phyla present in all subjects indicated that considerable variability existed across children in the abundances of the highly abundant phyla, wherein a wide range of Firmicutes:Bacteroidetes ratios was observed (Fig. 5). Mann-Whitney U-tests revealed no significant differences in the two largest bacterial phyla in the gut, Firmicutes (p = 0.667) and Bacteroidetes (p = 0.914), when the relative abundances found in children from obese versus non-obese mothers were compared.
When analyses were conducted separately among higher versus lower income groups, no significant effects of maternal obesity on the child gut microbiome at the phylum level were observed that withstood multiple-test correction. Next, genus-level abundances were examined. The Mann-Whitney U-test was used due to the skewed distributions of the population abundances. Benjamini-Hochberg tests for multiple comparisons were used, with a q-value set at 0.10. In the overall sample, there were limited significant differences between children born to obese versus non-obese mothers after multiple-test correction (Table 3). However, examination of interactions between SES and obesity status revealed multiple associations. Among children of high-income mothers, abundances of the genera Parabacteroides (p = 0.008, q < 0.10), Eubacterium (p = 0.021, q < 0.10), Blautia (p = 0.025, q < 0.10), and Oscillibacter (p = 0.011, q < 0.10), as well as an undefined genus in Bacteroidales (p = 0.005, q < 0.10), differed significantly based on maternal obesity status (Table 4). In contrast, after correction for multiple tests, there were no significant differences between children born to obese versus non-obese mothers in the low-income group (Table 5).

Other behavioral and environmental influences upon the microbiota

In addition to influence by exposure to maternal bacteria, mothers could affect the toddler microbiome via control of the toddler diet, as diet is a primary factor in determining population abundances of the GI microbiota. In chi-square analyses, we found no significant differences in dietary patterns between children of obese versus non-obese women (Table 2). Specifically, children did not differ significantly in duration of breastfeeding, age at which grains/cereals or other foods were introduced, or the frequency of consuming meat or vegetables (p's ≥ 0.15). Children of obese versus non-obese mothers also did not differ in the extent to which they had been exposed to antibiotic medications (during pregnancy, breastfeeding, or directly during childhood) or probiotics in food or supplement form (p's ≥ .34). Because significant results in this study were found predominately in high-income dyads, we further examined potential dietary differences in children born to obese versus non-obese mothers in the high-income group. Results also showed no differences in breastfeeding duration, age at which grains/cereals or other foods were introduced, or the frequency of consuming meat or vegetables among children of obese versus non-obese mothers in this group (p's ≥ 0.13).

Figure 1. In the overall sample, datapoints did not cluster on a principal coordinate analysis (PCoA) scatter-plot as a function of maternal obesity. The beta-diversity non-parametric statistic adonis showed that children born to obese (n = 26) versus non-obese mothers (n = 51) had unique microbial profiles (p = 0.044). However, this was due to greater homogeneity among the obese group as measured with permdisp (p = 0.035). doi:10.1371/journal.pone.0113026.g001

Figure 2. Interactive effects of maternal obesity and socioeconomic status were observed; effects of maternal obesity on the child microbiome were primarily seen among the higher SES group. A) In the higher income group, children born to obese versus non-obese mothers clustered (adonis, p = 0.041) and had higher homogeneity (permdisp, p = 0.003). B) These effects of maternal obesity were not seen in children in the lower income group. doi:10.1371/journal.pone.0113026.g002
We also examined the potential role of three key environmental factors that may covary with maternal obesity status and SES: route of delivery (vaginal versus C-section), duration of breastfeeding, and antibiotic exposure in mothers and children. Analyses showed no significant associations between these factors and the community structure of the child gut microbiome (Table S1), and no clustering was observed using PCoA (Fig. S2). Also, as described earlier, these exposures did not differ based on maternal obesity status (Table 2). Further analyses among the high-income group also showed that route of delivery, maternal antibiotic use in pregnancy/breastfeeding (combined due to low occurrence), and antibiotic exposure in the child did not differ significantly based on maternal obesity status (ps ≥ .12).

Predictive metagenome

The predictive metagenome program, PICRUSt, was used to examine whether maternal obesity and other factors (duration of breastfeeding, maternal use of antibiotics during breastfeeding or pregnancy, child use of antibiotics, and birth route) were associated with altered functioning of the microbial groups. Abundances of Kyoto Encyclopedia of Genes and Genomes (KEGG) Orthologs, or KOs, were highly similar across children (Fig. S3). Deeper analysis of the KOs revealed that carbohydrate metabolism was significantly lower in children born to obese mothers. However, these differences in KO abundances did not pass correction for multiple tests, due to low effect sizes (Table S2). Likewise, when high- and low-income participants were examined separately, maternal obesity was not associated with any significant differences in functional group abundance after multiple-test correction (Table S3), nor were differences detected in functional groups based upon breastfeeding duration, antibiotic use by mother or child, or birth route (Tables S4-S6).

Discussion

Children born to obese mothers are at greater risk for obesity in adulthood compared to children of non-obese mothers, with odds ratios ranging from 1.23 to 6.12 depending on sex and age [3,37,38]. Factors including diet and genetics contribute to, but do not fully explain, this increased risk [39]. The gut microbiome may play a clinically meaningful role; bacteria that affect metabolic processes are transmitted from the mother to the infant during birth and subsequently through physical contact and, in many cases, breastfeeding [15][16][17]. Obese adults have different microbial community profiles in the gut [9][10][11], and studies show that transplanting microbiota from obese mice into germ-free mice can lead to increased body fat [14], illustrating that altered profiles of microbiota can be both obesogenic and transmittable. However, the extent to which the microbiome may contribute to the intergenerational transmission of obesity in humans is not known. This study provides novel evidence that maternal obesity is related to measurable differences in the composition of the gut microbiome in offspring, as reflected by measures of both alpha (Shannon Diversity Index, equitability, unique OTUs) and beta diversity (per adonis). Despite the lack of group clustering on a PCoA, differences in beta diversity were explained using permdisp, which indicated increased homogeneity among the microbiomes of the obese group and increased dispersion among the non-obese group.
Our results suggest that the relationship between maternal obesity and the composition of the child gut microbiome remains after accounting for paternal BMI and indicators of child body composition, supporting an exposure rather than purely genetic pathway. This is consistent with epidemiological studies showing that maternal BMI is more strongly associated with obesity in offspring than is paternal BMI [4,5]. In addition, in metagenome function analyses using PICRUSt, lower abundances of communities related to carbohydrate metabolism were observed in children born to obese versus non-obese mothers, although this result did not remain significant after statistical correction for multiple comparisons. Importantly, effects of maternal obesity on the composition of the gut microbiome in offspring were stronger and more consistent among those born to mothers of higher socioeconomic status (SES) as defined by income and/or education. Specifically, when higher and lower income groups were examined separately, differences in beta diversity in relation to maternal obesity (per adonis/permdisp and PCoA) were evident only in the higher income group, as were multiple measures of alpha diversity. Less dispersion of profiles among children born to obese compared to non-obese mothers, particularly among those of high SES, indicates that these children are developing microbial profiles typified by greater homogeneity of community structures. Additional studies are needed to determine if similar effects are present in older children, adolescents, and adults. Also demonstrating effects of socioeconomic status, among the high-income group only, children born to obese versus non-obese mothers had greater abundances of Parabacteroides spp., Oscillibacter spp., and an unclassified genus of the order Bacteroidales, as well as lower Blautia spp. and Eubacterium spp. Of note, differences in Eubacteriaceae, Oscillibacter, and Blautia have been found in prior studies of diet and obesity [40][41][42], but the clinical relevance of these bacterial types in affecting obesity risk is not fully understood. Also, when PICRUSt was used to examine metagenome function based on obesity status in the higher income group only, no significant differences were found. The mechanisms underlying the interaction between maternal obesity and SES in predicting the composition of the child gut microbiome are not known. Obesity is a health condition with multifactorial origins, both genetic and behavioral (i.e., diet, physical activity). Research on the true interaction between social-environmental and genetic factors (i.e., moderating effects) is sparse. However, among obese adults, the relative contribution of genetic versus behavioral factors may differ in those from higher versus lower socioeconomic backgrounds [43]. Relatedly, the extent to which maternal obesity confers measurable changes to the gut microbiome of offspring may differ based on the etiology of maternal obesity.

Figure 4. As with measures of beta diversity, differences in alpha diversity in relation to maternal obesity were seen predominately in the higher SES group. In the higher-income group, children born to obese versus non-obese mothers had significantly higher A) Shannon Diversity Index, B) equitability, C) Chao1 estimation, and D) observed operational taxonomic units (OTUs) (ps ≤ 0.05). In contrast, in the lower-income group, no significant effects of maternal obesity on alpha diversity indicators were observed (E-H). doi:10.1371/journal.pone.0113026.g004
Our finding of higher SDI among children of obese versus non-obese mothers contrasts with prior research linking obesity with lower alpha diversity [9,44]. However, previous studies have focused on adults or used mouse models with experimentally induced obesity. This is one of the first studies to ascertain SDI among toddlers as a function of maternal obesity. Higher SDI in children born to obese mothers may reflect interactions between their unique beta-diversity community profile and age-related effects, possibly downregulated immune surveillance or reduced GI motility, which could result in greater growth and diversification of microbial groups. Given the novelty of these findings, further investigation is required. In early life, parents largely control the diet of the child, and tend to offer solid foods that reflect their own adult diets [45]. Diet can substantially affect the composition of the gut microbiome [40,46,47]. In our sample, we found no differences between the children of obese and non-obese mothers in terms of breastfeeding behavior, age at which solid foods were introduced, or the current frequency of consumption of meat, vegetables, and cereals/grains, regardless of maternal SES. This suggests that diet did not explain the observed differences in the children's gut microbiome related to maternal obesity and SES. However, this study did not include detailed food diaries that would capture the volume and quality of foods (e.g., high versus low fat meats) consumed. Thus, the possibility remains that differences in feeding behaviors contribute to the observed association with maternal obesity and/or the interaction between maternal obesity and SES. In addition, other key factors that can affect the gut microbiome, including antibiotic exposure, breastfeeding, and route of delivery, were examined but did not account for the observed effects of maternal obesity or the interaction between maternal obesity and SES. After correction for multiple comparisons, there were no significant differences in individual KOs based upon these factors. Moreover, as described, these factors did not differ significantly based on obesity status, regardless of maternal SES. However, the role of such factors requires further attention. If continued research supports the notion that obese mothers may pass obesogenic microbiota to their infants, interventions could target manipulation of the maternal vaginal and gut microbiomes. Prior research has shown that administration of antibiotics during the delivery process reduces vaginal Lactobacillus spp. levels in the mother and corresponds to lower levels of lactobacilli in oral samples from newborns [48]. These effects are potentially detrimental, as early colonization with Lactobacillus spp. may have a preventative role in the development of allergic diseases. However, such studies demonstrate that interventions that affect population abundances in the mother can have downstream effects on the neonate's own microbial structure. A strength of this study is its focus on children between 18 and 27 months of age. Prior studies have shown that infants of obese mothers have differences in the gut microbiota, specifically in the numbers of Bacteroides spp. and Staphylococcus spp. in the stool [18]. However, the microbiota are characterized by a lack of consistency and high volatility during the first year of life [19].
These profiles generally stabilize and increase in diversity, more closely resembling adult profiles, as the range of dietary exposures for the child expands [20,49]. Thus, the current data extend prior findings and support the hypothesis that early life exposures may have lasting effects on the gut microbiota. However, considerable variability of the major phyla is still a hallmark of the 18-27-month-old child microbiota. In future studies, long-term and longitudinal examination through early childhood and adolescence would be highly valuable in explicating the extent to which the observed effects persist and ultimately influence weight. This study utilized deep pyrosequencing technology, which builds upon prior studies by allowing for whole bacterial community profiling of the toddler microbiome. Utilization of this technology allowed increased sensitivity in detecting differences in the gastrointestinal microbiota community structure between children born to obese and non-obese mothers. PICRUSt was used for prediction of metagenome function based upon 16S rRNA abundances. As reviewed, some effects in relation to maternal obesity were suggested, but these did not remain significant after correction for multiple tests. Unique microbial profiles would be expected to result in differences in microbiome function. True metagenomic shotgun sequencing will likely provide greater power to examine effects of factors such as maternal obesity on the function of the microbiota in children. In this study, parental BMI as well as children's body composition indicators (height and weight percentile) were collected via maternal report rather than direct measurement. Current maternal BMI was not the focus because 1) maternal BMI may have changed considerably since the target pregnancy (e.g., due to weight retention after the target pregnancy or subsequent weight gain) and 2) women were of childbearing age, and thus a meaningful proportion were pregnant with another child at the time of data collection. Prior studies suggest that among women of reproductive age, BMI classified by self-reported height and weight is generally accurate, resulting in correct categorization of 84%-87% and an underestimate in BMI of 0.8 kg/m² [50,51]. Because BMI by self-report tends to be slightly lower than true BMI, effects of maternal obesity on outcomes of interest may be underestimated in the current study. In addition, this study did not include collection of maternal specimens, such as vaginal or fecal samples, which would permit profiling of maternal microbial communities. This is clearly a critical next step in establishing a direct link from maternal to child microbial profiles. In conclusion, obesity is a worldwide public health issue. Identification of modifiable early life antecedents is key to addressing this disease process. A rapidly growing body of literature indicates that the gut microbiome plays a critical role in the development of obesity. Adding to this literature, the current study provides novel evidence that maternal obesity is associated with different microbial profiles in offspring 18-27 months of age. The potential role of the gut microbiome in this intergenerational transmission of obesity risk warrants further attention. In particular, the stability of such effects into later childhood and adolescence, the clinical relevance of abundances of specific bacteria in conferring risk for obesity, and the ultimate impact of early life microbial profiles on long-term weight trajectory remain to be explicated.
Supporting Information

Figure S1. Indicators of socioeconomic status (SES), maternal education (A) and income (B), did not predict differences in the offspring microbiota community structure. (TIF)

Figure S2. Other key factors which may impact the gut microbiome were not associated with differences in community structure, including (A) birth route, (B) antibiotic use by the mother while breastfeeding, (C) antibiotic use during pregnancy, (D) child antibiotic use, or (E) duration of breastfeeding. (TIF)

Figure S3. KEGG Orthologues (KOs) were highly similar across individuals. PICRUSt was used to predict metagenomic function of the child microbiome. An area graph produced by QIIME indicated that overall abundances of KOs were similar across samples. (TIF)
Audiovisual Saliency Prediction in Uncategorized Video Sequences based on Audio-Video Correlation

Substantial research has been done in saliency modeling to develop intelligent machines that can perceive and interpret their surroundings. But existing models treat videos as merely image sequences, excluding any audio information, and are unable to cope with inherently varying content. Based on the hypothesis that an audiovisual saliency model will be an improvement over traditional saliency models for natural uncategorized videos, this work aims to provide a generic audio/video saliency model augmenting a visual saliency map with an audio saliency map computed by synchronizing low-level audio and visual features. The proposed model was evaluated using different criteria against eye fixation data for the publicly available DIEM video dataset. The results show that the model outperformed two state-of-the-art visual saliency models.

Maryam Q. Butt and Anis Ur Rahman are with the School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan. Corresponding author: Anis U. Rahman, e-mail: anis.rahman@seecs.edu.pk

INTRODUCTION

Though a lot of research has been done in the general field of unimodal saliency models for both images and videos, no substantial contributions exist for bimodal models. Of more consequence is the lack of a model for computation of audiovisual saliency in complex video sequences. Existing literature for audio-video saliency modeling is scarce and often targets a specific class of videos [10], [27], [28]. Therefore, an extended saliency model to predict salient regions in complex videos with different sound classes is required. Many existing saliency algorithms are designed for images [6], [16], [24] using visual cues such as color, intensity, orientation, etc., while other models [7], [14], [22] take social cues like faces into account, resulting in more accurate eye movement predictions. Spatiotemporal saliency models [11], [15], [21] usually incorporate temporal cues like motion but ignore the effect of audio stimuli, an integral component of video content, on human gaze. Subsequently, such models are classified as unimodal models [4] where only visual stimuli are used. Interestingly, the effect of audio stimuli on human eye movements is well documented. In [25] the authors find eye movements to be spatially biased towards the source of audio, using an eye tracking experiment on images with spatially localized sound sources in three conditions: auditory (A), visual (V), and audio-visual (AV). Moreover, another study [29] analyzed the effects of different types of sounds on human gaze, involving an experiment with thirteen sound classes under audio-visual and visual conditions. The sound classes are further clustered into on-screen with one sound source, on-screen with more than one sound source, and off-screen sound source. The results show that human speech, singer(s), and human noise (on-screen sound source clusters) highly affect gaze and, more importantly, that temporally linked audiovisual stimuli have a greater effect than unsynchronized audio-visual events. The focus of this work is to propose a generic audiovisual saliency model for complex video sequences. The work differs from previous research [10], [27], [28] in that it does not restrict input videos to a certain category. To accomplish this, an audio source localization method was used to relate an audio signal with an object in the video frames in a rank correlation space.
The proposed model was evaluated against eye fixation ground truth from the DIEM dataset. The original contributions of this study are as follows: 1) We propose an audio-visual saliency model for complex scenes that, unlike existing literature, does not restrain videos to any specific category. 2) We present and analyze the results of an experimental evaluation on a publicly available dataset to examine how our proposed saliency model compares to two state-of-the-art visual saliency models. The remainder of the paper is organized as follows: Section 1 narrates background knowledge of saliency modeling and identifies the novel contribution of this work. Section 2 provides a detailed review of state-of-the-art literature, while Section 3 describes the proposed solution. Section 4 summarizes the implementation details as well as outlines the properties of the video sequences used for experimentation. This section also explains the different saliency evaluation metrics. Section 5 presents our results, followed by a discussion in Section 6. Section 7 summarizes our findings and concludes with future perspectives.

RELATED WORK

Many classical saliency models compute conspicuity from color, intensity and orientation features [1], [3], [24]. Other biologically-inspired models [20], [21] exploit spatial contrast and motion, and simulate interactions between neurons using excitation and inhibition mechanisms, while others [18], [19] propagate spatial/temporal saliency using multiscale color and motion histograms as features. In [19] pixel-level spatiotemporal saliency is computed from spatial and temporal saliencies via interaction and selection driven from superpixel-level saliency. In [18] temporal saliency is propagated forward and backward via inter-frame similarity matrices and graph-based motion saliency, whereas spatial saliency is propagated over a frame using temporal saliency and intra-frame similarity matrices. In most of these models, conspicuity maps are constructed using a variety of approaches with different visual features that are later integrated to get a final saliency map. Based on the fact that the eyes are the most important sensory organs providing much of the information humans receive, many state-of-the-art visual models [18], [19] aim at saliency computation for complex dynamic scenes. But such unimodal models tend to overlook other influential social cues like faces in social interaction scenes, and hence exhibit lower predictability [2], [30]. Moreover, social scenes involve many more sensory signals influencing eye movements spatially, such as auditory information including voice tone, music, etc., and different kinds of sounds affect eye fixations differently [25], [29]. Thus, there is a need for a bimodal saliency model incorporating both visual and audio information channels. Rapantzikos et al. [26] proposed an audio-visual saliency model for movie summarization. The visual saliency map is constructed using traditional features such as intensity, color and motion, simulating feature competition as energy minimization via gradient descent. This map is thresholded and averaged per frame to compute a 1D visual saliency curve. Maximum average Teager energy, mean instant amplitude and mean instant frequency are extracted as audio features by applying the Teager-Kaiser energy operator and an energy separation algorithm to the audio signal. The resulting feature vector is normalized to the range [0, 1] followed by weighted fusion to get an audio saliency curve.
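For concreteness, the Teager-Kaiser energy operator used for the audio features above has a very simple discrete form; a minimal NumPy sketch (not the authors' implementation) is:

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Toy signal: the operator tracks both amplitude and frequency of an AM tone.
t = np.linspace(0.0, 1.0, 8000)
sig = (1 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 440 * t)
energy = teager_kaiser(sig)
print(energy.max())
```

Unlike plain squared amplitude, this energy rises with both the amplitude and the instantaneous frequency of the signal, which is why it is a popular low-level audio saliency feature.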
The final audio-visual saliency curve is a weighted linear combination of the audio and visual saliency curves. The local maxima of the audio-visual saliency curve are used for key-frame selection. The experiments are conducted on the movie database of A.U.T.H., but no comparison or evaluation is given. Coutrot and Guyader [9] proposed an audiovisual saliency model for natural conversation scenes: a linear combination of low-level saliency, a face map, and a center bias. The low-level saliency map is constructed via Marat's spatiotemporal saliency model [21]. For face map construction, a speaker diarization algorithm is proposed that uses motion activity of faces and 26 Mel-frequency cepstral coefficients (MFCCs) as visual and audio features, respectively. The center bias is a time-independent 2D Gaussian function centered on the screen. The three maps are linearly combined into the final audiovisual saliency map, using expectation maximization to determine the weight for each. The resulting model performs better compared to the same model without speaking/mute face differentiation. However, the target video dataset belongs to a limited category: conversation scenes only. Sidaty et al. [28] proposed an audiovisual saliency model for teleconferencing and conversational videos. The three best performing models on the target database, i.e., Itti et al. [13], Harel et al. [12] and Tavakoli et al. [31], are selected as spatial models. Acoustic energy is computed per frame, and a block matching algorithm is used to construct an audio map using the face stream of the video. Then peak matching is used for audio-visual synchronization. Five fusion schemes are used to get a final map. Experiments performed on the XLIMedia database created by the authors showed that the proposed model performed better compared to the spatial models. Again, the limitation of this work is that it only targets conferencing and conversational videos. All in all, one of the major limitations of the aforementioned visual models is that they treat videos as a mute sequence of images and ignore any influence of audio stimuli. This results in inaccurate predictions where sound guides eye movement. Furthermore, another limitation of the literature is the absence of an audiovisual model for complex dynamic scenes; that is, many of the state-of-the-art models restrict the dataset used to only one specific category, for instance, conversational videos. This limits the models' performance when dealing with videos containing different sound classes.

PROPOSED SOLUTION

This section explains the proposed solution for audio-visual saliency computation for videos. The framework consists of five major stages, as illustrated in Figure 1. The first stage is the extraction of audio energy descriptors and object motion descriptors per frame, using the audio and visual stimuli as separate channels. The next stage computes an audio saliency map using these descriptors. In parallel, another stage computes a visual saliency map and a motion map, the former using low-level features and the latter from a color-coded optical flow similar to the one used for the audio maps. The last stage normalizes and combines all these maps into a unified audiovisual saliency map; a sketch of this final fusion stage is given after the next paragraph.

Feature Extraction

In this stage, we extracted visual and acoustic features from a given input video. The stage comprised two phases of feature extraction, one for audio features and the other for visual features.
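Here is a hedged sketch of the final stage just outlined, normalization to [0, 1] followed by a linear combination. The equal weights are an assumption, since this excerpt does not state how the combination weights were chosen, and the random maps stand in for real per-frame outputs:

```python
import numpy as np

def normalize01(m):
    """Rescale a map linearly to [0, 1]; constant maps become all zeros."""
    m = np.asarray(m, dtype=float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_maps(visual, audio, motion, weights=(1/3, 1/3, 1/3)):
    """Normalize the three per-frame maps and combine them linearly."""
    maps = [normalize01(m) for m in (visual, audio, motion)]
    return sum(w * m for w, m in zip(weights, maps))

# Hypothetical per-frame maps of size H x W (illustrative only).
rng = np.random.default_rng(0)
v, a, mo = (rng.random((90, 160)) for _ in range(3))
final_map = fuse_maps(v, a, mo)
```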
Audio Feature Extraction

This step outputs an audio energy descriptor a(t), extracted from the audio signal and capturing changing patterns of audio signal strength. Note that the descriptor was obtained with the same temporal resolution as the video frames. Hence, the signal was first segmented into frames according to the frame rate of the video, so that each audio frame corresponds to a video frame. Using the short-term Fourier transform (STFT), this framed signal was transformed into the time-frequency domain to get a spectrogram of the signal at each frame. The descriptor a(t) was computed by integrating the spectrogram power over frequency within each window,

a(t) = ∫ |S(t, f)|² df,

where the windowing function W(t) is defined so that neighboring windows overlap by 50%. The final descriptor was post-processed using a 1D Gaussian kernel.

Visual Feature Extraction

Based on the assumption that a moving object is a prime candidate to be the audio signal source, the per-frame acceleration of all moving objects in a given input video was computed as the motion descriptor. First, the moving objects were segmented per frame using optical flow estimation and tracked across all frames via color histograms of the regions in HSV color space. The process is as follows:

1) Optical flow computation. The method proposed by [8] was used to compute dense optical flow and corresponding color-coded optical flow images per video frame. The method uses the apparent motion of each pixel to compute forward and backward optical flows, where the former depicts the motion of pixels of frame t with reference to frame t + 1 and the latter the motion of pixels of frame t with respect to frame t − 1. The resulting flows were averaged to get a mean optical flow per frame, later used to compute the audio saliency map.

2) Frame segmentation. The color-coded mean optical flow per frame was used as input for the segmentation step. Mean shift, a nonparametric clustering algorithm, was applied to segment the input image in LUV color space. The oversegmented result of this step was refined with a simple region merging technique based on DeltaE, a color difference score, to merge closely similar regions. Regions smaller than 200 pixels were filtered out as noise and insignificant regions.

3) Region tracking. Once individual frames were segmented, a number of tracks were initialized in the first frame using the segmented regions' location and appearance features. All regions in the following segmented input frames were either assigned to an existing track or used to initialize a new track, based on location and appearance similarities. The location similarity d_E was computed as the Euclidean distance between the centroid of a new region C_n and that of an existing track C_e,

d_E = ||C_n − C_e||.

This resulted in a list of candidate tracks similar to the region under consideration for assignment within a specified search radius r. For appearance similarity, the LUV histograms of existing candidate tracks H_e were compared to the new region's histogram H_n using the cosine similarity

cosθ = (H_e · H_n) / (||H_e|| ||H_n||).

The region C_n was assigned to the track whose cosθ was maximum and greater than a specified threshold. The centroid of the track was updated to the centroid of the newly assigned region, and its histogram was replaced with the mean of the existing histogram and the new region's histogram. Otherwise, if cosθ was less than the specified threshold, the region was used to initialize a new track.

4) Calculate acceleration. In this step, the objects' acceleration was computed using the motion descriptors.
The average of the forward and backward optical flows gives the acceleration at each pixel (x, y, t) per frame,

acc(x, y, t) = (F⁺(x, y, t) + F⁻(x, y, t)) / 2,

where x and y are spatial coordinates, t is the frame number, and F⁺ and F⁻ indicate the forward and backward optical flow. The acceleration of a region ST_i^t, where i is the region index per frame t, was computed as the average acceleration of all pixels belonging to that region:

acc(ST_i^t) = (1 / |ST_i^t|) Σ_{(x,y) ∈ ST_i^t} acc(x, y, t).

The resulting acceleration vector was filtered using a Gaussian kernel to remove noise. The result is a motion descriptor of the objects in a given input video.

Audio Saliency Map Computation

For the audio saliency map computation, we used the audio-video correlation method proposed in [17]. The correlation between the aforementioned audio and motion descriptors was used to localize the source of the sound signal in the input video frames, indicating audio saliency. A Winner-Take-All (WTA) hash [33], a subfamily of hashing functions controlled by the number of permutations N and window size S, was used to transform both descriptors into a rank correlation space. Once in the common rank correlation space, the Hamming distance was used to relate the audio signal to an object.

Visual Saliency Map Computation

A classical visual saliency map was used, as proposed in [12]. The model approaches the problem of saliency computation by defining Markov chains over feature maps, extracted for intensity, color, orientation, flicker, and motion features, and treats equilibrium locations as saliency values. In detail, each value of the feature map(s) is considered a node, and the connectivity between nodes is weighted by their dissimilarity. Once a Markov chain is defined on this graph, the equilibrium distribution of the chain, computed by repeated multiplication of the Markov matrix with an initially uniform vector, accumulates mass at highly dissimilar nodes, providing activation maps. A similar mass concentration process is applied to these activation maps, and the output is summed into a final saliency map.

Motion Map Computation

The motion map indicates regions of high motion, computed using the mean optical flow per frame as described in Section 3.1.2. Adaptive thresholding proposed in [5] was applied to the flows to discard inconsequential low motion, setting pixel I_p to zero if its brightness is T percent lower than the average brightness of its surrounding pixels.

Normalization and Combination

In this final stage, the three computed maps, a) the visual saliency map, b) the audio saliency map, and c) the motion map, were normalized before being combined into a final audiovisual saliency map. Here the visual saliency map was a sum of normalized activation maps computed using the mass concentration algorithm. The other two maps were normalized to the range [0, 1] using simple linear transformations. The resulting normalized maps were linearly combined to get the final audiovisual saliency map.

IMPLEMENTATION AND EVALUATION

The proposed solution was implemented in MATLAB 2014b on Windows 10, on a 64-bit machine with an Intel i5 processor. The same setup was used for evaluation purposes. The parameters used for the proposed solution are given in Table 1.

Dataset

The Dynamic Images and Eye Movements (DIEM) dataset [23] was used for evaluation of the proposed approach. The dataset comprises 85 (eighty-five) videos, with or without audio, of varying genres. Eye fixation data were collected via a binocular eye tracking experiment with 250 participants in total, with ages ranging between 18 and 36 years and normal/corrected-to-normal vision.
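Returning to the audio saliency stage above: Winner-Take-All hashing reduces each descriptor to a code of argmax indices taken over randomly permuted windows, so descriptors with similar rank orderings produce similar codes. A hedged NumPy sketch follows; the permutation count, window size, and descriptor values are illustrative, and the paper's actual N and S come from Table 1 (not reproduced in this excerpt):

```python
import numpy as np

def wta_hash(vec, n_perms=64, window=4, seed=0):
    """WTA hash: for each permutation, record the argmax of the first
    `window` permuted elements. The same seed must be used for every
    descriptor so that all codes share the same permutations."""
    rng = np.random.default_rng(seed)
    vec = np.asarray(vec, dtype=float)
    code = np.empty(n_perms, dtype=np.int64)
    for i in range(n_perms):
        idx = rng.permutation(len(vec))[:window]
        code[i] = np.argmax(vec[idx])
    return code

def hamming(c1, c2):
    """Number of differing hash symbols (smaller = more rank-correlated)."""
    return int(np.sum(c1 != c2))

# Hypothetical descriptors: audio energy vs. acceleration of one tracked region.
audio_desc = np.array([0.1, 0.9, 0.4, 0.8, 0.2, 0.7])
region_acc = np.array([0.2, 1.0, 0.3, 0.9, 0.1, 0.6])  # similar rank order
print(hamming(wta_hash(audio_desc), wta_hash(region_acc)))
```

Because only ranks matter, the comparison is robust to the very different scales of audio energy and pixel acceleration, which is the point of moving both descriptors into a common rank correlation space.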
In this work, 25 (twenty-five) videos with audio were randomly selected for evaluation. The video sequences are listed in Table 2 along with their properties.

Evaluation Metrics

The proposed solution was evaluated using four criteria.

1) Area under the curve (AUC) is a location-based metric, where saliency pixels equal in number to the total recorded fixations are randomly extracted. The true positives (TP) and false positives (FP) are calculated for different thresholds, treating the saliency map as a classifier. The resulting values are used to plot an ROC curve and compute the AUC; the ideal score is 1.0, and a value of 0.5 indicates random classification.

2) Kullback-Leibler divergence (D_KL) is a distribution-based dissimilarity measure,

D_KL(M_s, M_f) = Σ_x M_f(x) log(M_f(x) / M_s(x)).

It estimates the loss of information when the saliency map M_s is used to approximate a fixation map M_f, both considered as probability distributions. The ideal D_KL score is zero, meaning the saliency and fixation maps are exactly the same; larger values indicate a poorer saliency model.

3) Normalized scanpath saliency (NSS) is computed as

NSS = (1/N) Σ_i (M_s(x_i) − μ_s) / σ_s,

where the saliency map M_s is normalized to zero mean and unit standard deviation and then averaged over the N fixation locations x_i. A zero score means chance prediction, whereas a high score indicates high predictability of the saliency model.

4) Linear correlation coefficient (CC) is another distribution-based metric,

CC = cov(M_s, M_f) / (σ_Ms σ_Mf).

Its output ranges between −1 and +1; the closer the score is to either extreme, the better the predictability of the saliency model.

Table 2 (partial; sequence numbers and the audio source column did not survive extraction): game trailer lego indiana jones (Computer Game); harry potter 6 trailer (Movie); home movie Charlie bit my finger again (Movie); news bee parasites (News); news sherry drinking mice (News); news us election debate (News); planet earth jungles monkeys; sport football best goals (Sports).

Comparison Methods

Based on our literature review, we found no other audiovisual saliency model for complex dynamic scenes. For the sake of comparison, we compared our proposed audiovisual saliency model against two state-of-the-art visual saliency models. The first model, proposed in [19], derives a pixel-level spatial/temporal saliency map from a superpixel-level spatial/temporal saliency map constructed using motion and color histogram features. The other spatiotemporal saliency detection model, proposed by Liu et al. [18], is based upon a superpixel-level graph and temporal propagation.

RESULTS

For evaluation, we computed saliency maps for the selected videos from the DIEM dataset using the two state-of-the-art models and the proposed model. The scores (Table 3) for the resulting saliency maps over the first 300 frames per video were compared to assess eye movement predictability. We observe that the proposed model not only outperforms both comparison methods but also achieves a satisfactorily higher average score in terms of AUC. Moreover, a lower D_KL score indicates a better saliency model with less dissimilarity to the ground truth. For the remaining evaluation metrics, CC and NSS, the proposed method yields slightly lower scores; however, the results still suggest that the proposed model makes better eye movement predictions, and thus support the idea of incorporating audio features when computing spatiotemporal saliency for unconstrained videos. Some of the video sequences performed better, for instance stewart lee, news us election debate and one show, which have an on-screen sound source with no object occlusion, and interaction. Figure 2 illustrates the saliency maps obtained by different methods.
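A hedged sketch of two of the metrics above, NSS and CC, on toy maps; the Gaussian saliency map and the fixation pattern are invented for illustration:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized scanpath saliency: z-score the map, average at fixations."""
    s = (saliency - saliency.mean()) / saliency.std()
    ys, xs = np.nonzero(fixations)          # fixations as a binary map
    return float(s[ys, xs].mean())

def cc(saliency, fixation_map):
    """Linear correlation coefficient between two (density) maps."""
    return float(np.corrcoef(saliency.ravel(), fixation_map.ravel())[0, 1])

# Toy data: a Gaussian saliency blob with fixations near its peak.
h, w = 60, 80
ys, xs = np.mgrid[0:h, 0:w]
sal = np.exp(-((ys - 30) ** 2 + (xs - 40) ** 2) / 200.0)
fix = np.zeros((h, w), dtype=bool)
fix[28:33, 38:43] = True
print(nss(sal, fix), cc(sal, sal))
```

Fixations landing on the blob produce a large positive NSS, while scattering them uniformly would drive the score toward zero, the chance level mentioned above.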
The visual comparison demonstrates that our proposed model performs comparatively better. For instance, for the video sequence with an on-screen audio source type in the third row, the visual models failed to correspond to the ground truth (GT) as they considered both faces salient; by contrast, the proposed audiovisual model marked the talking face as salient.

DISCUSSION

Spatiotemporal saliency detection is a challenging problem. It is worth mentioning that existing models ignore the audio signal in the input media. However, a number of experimental studies [25], [29], [32] discuss the influence of aural stimuli on early attention when viewing complex scenes; that is, audio stimuli can provide useful information in guiding eye movements. This influence can be incorporated into existing bottom-up models by the inclusion of low-level audio properties like energy, frequency, amplitude, etc. The resulting audiovisual saliency model makes more sense in application areas like video summarization/compression, event detection, gaze prediction, and robotic vision and interaction. There exist some models in the literature based upon multiple stimuli [9], [26], [28], but they lack a generic solution, limiting themselves to specific categories of videos. A major reason for this gap in the literature is one of the foremost challenges of audiovisual saliency models: localization of the audio source in a given frame. Some methods either use microphone arrays to triangulate a single source or only target stationary sources in a scene. These models fail to perform for dynamic videos, as they assume a single audio source. Furthermore, in an approach overcoming these restrictions using correlation analysis between audio and video segments, the audio source is a set of relevant pixels rather than an object. This approach has been used in more recent works where object segmentation precedes audiovisual correlation, allowing audio source separation to maintain the source object's shape. Since the audio and video signals are from different domains, reliable correlation requires feature transformation into a suitable common space. Moreover, it requires a method to relate an audio descriptor to object descriptors in a video frame; that is, segmentation and tracking of diversified objects in a video frame is in itself a challenging task. To be precise, the literature lacks techniques for multiple objects, which is the case in our dataset, with no a priori information about objects such as shape, color, or size. In terms of eye movement predictability, the proposed audiovisual saliency model performed better on two evaluation metrics, while producing comparable scores on the other two. This result can be attributed to the difficulty of segmenting and tracking multiple interacting objects in varying conditions like motion blur, crowds, etc. Moreover, multiple and/or off-screen audio sources make it more challenging to locate an audio source, in consequence affecting the model's performance. The proposed saliency model exhibits higher time complexity (Table 4), attributed to dense optical flow computation, which is inherently compute-intensive, being an optimization problem. The main advantage of using this method is that it estimates both forward and backward flows, and hence the optical flow of occluded regions is also computed correctly. Alternative motion estimation approaches such as block matching and phase correlation could be used instead to obtain a more efficient solution.
Likewise, segmentation of multiple objects is a complex task; it involves mean shift segmentation, a non-parametric clustering method based on kernel density estimation. The approach is not scalable due to its large feature space dimensions. Alternatively, simpler histogram- or superpixel-based segmentation methods could be used to reduce computational complexity as well as increase model predictability. A shortcoming of the current study is the use of a subset of the available dataset for evaluation. It would be interesting to perform the evaluation using the entire video dataset and/or other available datasets to reinforce our finding that aural stimuli alongside visual stimuli can provide useful information in guiding eye movements. All in all, the proposed solution scored reasonably well; however, it can be further improved. An improvement in segmentation and tracking techniques may contribute to a better audio saliency map, and in turn to a better final saliency map. Furthermore, the use of a more sophisticated visual saliency model, as well as more suitable combination techniques, can improve the final result.

CONCLUSION

Existing bottom-up saliency models only use visual stimuli, while available audio stimuli in the input media remain unused. In this paper, we proposed an audiovisual saliency model incorporating both low-level visual and audio information to produce three different saliency maps: an audio saliency map, a motion saliency map, and a visual saliency map. These maps were linearly combined to get a final saliency map, which was evaluated on the DIEM dataset using four different criteria. The results show an overall improvement against two state-of-the-art visual saliency models and reinforce the idea that aural stimuli can provide useful information to guide eye movements.
Betanodavirus B2 Causes ATP Depletion-induced Cell Death via Mitochondrial Targeting and Complex II Inhibition in Vitro and in Vivo*

The betanodavirus non-structural protein B2 is a newly discovered necrotic death factor with a still unknown role in the regulation of mitochondrial function. In the present study, we examined protein B2-mediated inhibition of mitochondrial complex II activity, which results in ATP depletion and thereby in a bioenergetic crisis in vitro and in vivo. Expression of protein B2 was detected in the cytoplasm as early as 24 h post-infection with red-spotted grouper nervous necrosis virus. Later, B2 was found in mitochondria using enhanced yellow fluorescent protein (EYFP) and immuno-EM analysis. Furthermore, the B2 mitochondrial targeting signal peptide was analyzed by serial deletion and specific point mutation. The sequence of the B2 targeting signal peptide (41RTFVISAHAA50) was identified, and its presence correlated with loss of mitochondrial membrane potential in fish cells. Protein B2 also was found to dramatically inhibit complex II (succinate dehydrogenase) activity, which impairs ATP synthesis in fish GF-1 cells as well as human embryonic kidney 293T cells. Furthermore, when B2 was injected into zebrafish embryos at the one-cell stage to determine its cytotoxicity and ability to inhibit ATP synthesis, we found that B2 caused massive embryonic cell death and depleted ATP, resulting in further embryonic death at 10 and 24 h post-fertilization. Taken together, our results indicate that betanodavirus protein B2-induced cell death is due to direct targeting of the mitochondrial matrix by a specific signal peptide that targets mitochondria and inhibits mitochondrial complex II activity, thereby reducing ATP synthesis.
Betanodaviruses cause viral nervous necrosis, an infectious neuropathological condition in fish that is characterized by necrosis of the central nervous system, including the brain and retina, and by clinical signs (e.g., abnormal swimming behavior and development of a darker body color) (1). This disease can cause mass mortality in larval and juvenile populations of several teleost species and is of global economic importance (2). The family Nodaviridae comprises the genera Alphanodavirus and Betanodavirus. Alphanodavirus predominantly infects insects, whereas Betanodavirus predominantly infects fish (3)(4)(5). Nodaviruses are small, non-enveloped, spherical viruses with bipartite positive-sense RNA genomes (RNA1 and RNA2) that are capped but not polyadenylated (3). RNA1 encodes an ≈110-kDa non-structural protein that has been designated RNA-dependent RNA polymerase or protein A. This protein is vital for replication of the viral genome. RNA2 encodes a 42-kDa capsid protein (6,7), which may also function in the induction of cell death (8,9). Nodaviruses also synthesize RNA3, a sub-genomic RNA species from the 3′ terminus of RNA1. RNA3 contains two putative open reading frames that potentially encode an 111-amino acid protein B1 and a 75-amino acid protein B2 (3,10,11). Recently, betanodavirus B1 was found to play an anti-necrotic death function in the early replication stages (10). In contrast, betanodavirus B2 was found to be either a suppressor of host siRNA silencing (12,13) or a necrotic death factor (11,14). Mitochondria are organelles required for cellular energy production, programmed cell death regulation (15), reactive oxygen production (16), and intermediary metabolism (17). Changes in mitochondrial function, such as suppression of mitochondrial metabolism, accumulation of reactive oxygen species, loss of mitochondrial membrane potential, and reduced respiration, have been shown to play a key role in the induction of cell death (18,19). Mitochondria produce the majority of cellular ATP via oxidative phosphorylation. The mitochondrial electron transport chain removes electrons from an electron donor (NADH for complex I or QH2 for complex III) and passes them to a terminal electron acceptor (O2) via a series of redox reactions. These reactions are coupled to the creation of a proton gradient across the mitochondrial inner membrane. The resulting transmembrane proton gradient is used to make ATP via ATP synthase (20). Mitochondrial disorders often present as neurological diseases such as Parkinson disease (21,22), Alzheimer disease (23)(24)(25)(26)(27), and Huntington disease (28). Several viruses and viral proteins can modulate the mitochondria-mediated death pathway in infected cells (29). Viral factors may be pro-cell death modulators, which when inserted into mitochondria trigger loss of mitochondrial membrane potential (MMP) or promote loss of MMP indirectly through activation of host factors, or anti-cell death modulators (which have sequence and/or structural similarity to the BH1-4 domains of the Bcl-2 family or inhibit cell death via other mechanisms) (29).
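As background for the bioenergetics discussed here (a standard textbook relation, not derived in this paper), the proton-motive force that couples the respiratory chain to ATP synthase combines the electrical and chemical components of the gradient:

```latex
\Delta p \;=\; \Delta\Psi_m \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
\;\approx\; \Delta\Psi_m \;-\; 61.5\,\Delta\mathrm{pH}
\quad\text{(in mV, at }37\,^{\circ}\mathrm{C}\text{)}
```

Collapse of the membrane potential term therefore directly erodes the driving force for ATP synthesis, which is the mechanistic link between MMP loss and ATP depletion that the results below exploit.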
Mitochondrial membrane permeabilization culminates in the loss of mitochondrial transmembrane potential (ΔΨm), an arrest of mitochondrial bioenergetic and biosynthetic functions, the release of mitochondrial intermembrane space proteins (including cytochrome c (30,31), Smac/DIABLO (32), apoptosis-inducing factor (33,34), and endonuclease G (35)) into the cytosol, and then exposure to pro-cell death signals (36,37). In our previous study of betanodavirus-induced host cell death, the red-spotted grouper nervous necrosis virus (RGNNV) TN1 strain induced apoptosis and post-apoptotic necrosis in a grouper liver cell line (GL-av) (38). The RGNNV infection also induced loss of the MMP, which was blocked by the MMP transition pore inhibitor BKA (38) as well as by the Bcl-2 family protein zfBcl-xL (13). In addition, B2 protein (a novel necrotic cell death inducer translated from subgenomic RNA3) acts via a Bax-mediated pathway (14) and is prevented from acting by overexpression of the anti-apoptotic gene zfBcl-xL (11,14). However, the molecular mechanism by which protein B2 induces mitochondria-mediated necrotic cell death is still unclear, and resolving it may provide insight into the molecular pathogenesis of RNA virus infection. In the present study, we demonstrate that RGNNV B2 targets mitochondria using a specific signal peptide that directs B2 to the mitochondrial matrix, subsequently causing mitochondrial disruption and necrotic cell death in fish cells. Furthermore, we attempt to determine how B2 induces necrotic cell death.

EXPERIMENTAL PROCEDURES

Cells and Virus. The grouper cell line (GF-1) was obtained from Dr. Chi (Institute of Zoology and for the Development of Life Science, Taiwan). The GF-1 cells were grown at 28°C in Leibovitz's L-15 medium (Invitrogen) supplemented with 5% fetal bovine serum and 25 µg/ml gentamycin. The human embryonic kidney 293T cells were grown at 37°C in low-glucose Dulbecco's modified Eagle's medium (DMEM; Invitrogen) supplemented with 5% fetal bovine serum under 5% CO2. Naturally infected red grouper larvae collected in 2002 in the Tainan prefecture were the source of red-spotted grouper nervous necrosis virus Tainan number 1 (RGNNV TN1), which was used to infect GF-1 cells. The virus was purified as previously described (7,38) and stored at −80°C until use. For cell transfection, 3 × 10⁵ GF-1 cells were seeded in 60-mm diameter culture dishes. On the following day, 2 µg of recombinant plasmid was mixed with Lipofectamine 2000 (Invitrogen), and the transfection procedure was carried out according to the manufacturer's instructions.

Immunofluorescence Assay. RGNNV-infected GF-1 cells (multiplicity of infection, m.o.i. = 5) were cultured on 35-mm Petri dishes. At 24, 48, and 72 h post-infection, cells were rinsed once with phosphate-buffered saline (PBS), fixed with 4% paraformaldehyde for 15 min at room temperature, and then permeabilized for 10 min with 0.2% Triton X-100 in PBS at room temperature. The immunofluorescence assay was performed by incubating these cells with primary polyclonal antibodies (1:50 dilution) against RGNNV protein B2 (14) for 1 h at room temperature, washing with PBS + 0.05% Tween 20 (PBST), incubating with secondary antibodies conjugated to TRITC or fluorescein isothiocyanate (FITC-conjugated goat anti-rabbit IgG; 1:100 dilution; Jackson ImmunoResearch Laboratories) for 40 min at room temperature, and washing three times with PBST.
Immunofluorescence was examined using an Olympus IX70 fluorescence microscope equipped with 488-nm excitation and a 515-nm long-pass filter for detection of RGNNV B2-fluorescein.

MitoTracker, Annexin-V-Alexa 568 Staining, and Mitochondrial Membrane Potential Assay. Live cells were labeled with the mitochondrion-specific dye MitoTracker Red CMXRos in accordance with the manufacturer's instructions and as described previously (11,38). For annexin-V-Alexa 568 staining, cells from the culture medium were washed with PBS and incubated for 10-15 min with 100 µl of a HEPES-based annexin-V-Alexa 568 staining solution, according to the manufacturer's instructions (Roche Applied Science). To assay the mitochondrial membrane potential, the culture medium was discarded from each dish, 500 µl of diluted MitoCapture reagent (Mitochondria BioAssay Kit; BioVision, Mountain View, CA) was added, and the dishes were incubated at 37°C for 15-20 min.

ATP Assay. The cellular ATP concentration was measured using an ATP Colorimetric/Fluorometric Assay Kit (BioVision). Cells (10⁶) were lysed in 100 µl of ATP assay buffer, homogenized, and centrifuged (13,000 × g, 2 min, 4°C) to pellet insoluble materials. The supernatants were collected and added to 96-well plates (50 µl per well) along with 50 µl/well of the reaction mixture (ATP probe, ATP converter, and developer mix in ATP assay buffer). The plates were incubated at room temperature for 30 min while protected from light, and absorbance in the wells was measured at 570 nm using a microplate reader. The absorbance of the no-ATP control was subtracted from each reading.

NAD⁺/NADH Ratio Assay (Complex I Activity Assay). NADH concentration, NAD concentration, and their ratio were measured using an NAD⁺/NADH Quantification Kit (BioVision). GF-1 and 293T cells were cultured in 35-mm Petri dishes for 24 h, transfected with pEYFP, EYFP-B2, or EYFP-B2(del) for 24 h, treated or not treated with the complex I inhibitor diphenyleneiodonium (DPI, 100 µM) (Sigma) for 24 h, rinsed once with PBS, homogenized in buffer (10⁶ cells in 400 µl of NADH/NAD extraction buffer), and centrifuged (13,000 × g, 5 min, 4°C) to pellet insoluble materials. The supernatants were transferred to new labeled tubes, and 50 µl of each was transferred to 96-well plates. NAD must be decomposed before NADH can be detected. To decompose NAD, 200 µl of each supernatant was transferred to Eppendorf tubes, heated to 60°C for 30 min in a water bath, and cooled on ice. The resulting NAD-decomposed samples were transferred to 96-well plates (50 µl/well), treated with the NAD cycling mixture (100 µl/well), incubated at room temperature for 5 min to convert NAD to NADH, and treated with NADH developer (10 µl/well) at room temperature for 2 h. The OD at 450 nm of each well was read, and the NAD⁺/NADH ratio was calculated as (NAD_total − NADH)/NADH.

Succinate Dehydrogenase (SDH) Activity Assay (Complex II Activity Assay). GF-1 and 293T cells were cultured for 24 h in 35-mm Petri dishes, transfected with pEYFP, EYFP-B2, or EYFP-B2(del) for 24 h, and treated or not treated with the complex II inhibitor 3-nitropropionic acid (3-NP, 10 mM; Sigma) for 24 h. GF-1 and 293T cells (2 × 10⁶) were washed with PBS, homogenized in 0.1 ml of extraction buffer (20 mM Tris-HCl, pH 7.2, 250 mM sucrose, 2 mM EGTA, 40 mM KCl, 1 mg/ml BSA) using a glass homogenizer, and centrifuged (2,000 × g, 5 min, 4°C) to pellet insoluble materials.
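The ratio step above is simple plate-reader arithmetic. A hypothetical Python sketch follows; the formula mirrors the (NAD_total − NADH)/NADH reading given above, while the standard-curve handling and all numbers are assumptions for illustration, not details from this study or the kit manual:

```python
def nad_ratio(od_total, od_nadh_only, od_blank, slope):
    """Hedged sketch of NAD+/NADH plate-reader arithmetic.

    od_total:     OD450 of the untreated extract (NAD+ + NADH)
    od_nadh_only: OD450 of the heat-decomposed extract (NADH only)
    od_blank:     background OD450
    slope:        hypothetical standard-curve slope (OD per pmol)
    """
    nad_total = (od_total - od_blank) / slope   # convert OD to amount
    nadh = (od_nadh_only - od_blank) / slope
    return (nad_total - nadh) / nadh            # NAD+/NADH

# Illustrative numbers only, not data from the study.
print(round(nad_ratio(0.80, 0.42, 0.05, slope=0.01), 2))
```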
The supernatants were placed into new labeled tubes, transferred to 96-well plates (90 µl/well), treated with a combination of 10× activity buffer (10 µl of 500 mM Tris-HCl, pH 8.3, 5 mM EDTA, 100 mM succinate) and 2-(p-iodophenyl)-3-(p-nitrophenyl)-5-phenyltetrazolium chloride (20 µl of 10 mM), and incubated at room temperature for 90 min. The OD at 490 nm of each well was measured with a plate reader (39).

Maintenance of Fish Embryos in Culture. Techniques for the care and breeding of zebrafish have been described previously in detail (40). Embryos were collected from natural matings and maintained in embryo medium (15 mM NaCl, 0.5 mM KCl, 1 mM CaCl2, 1 mM MgSO4, 0.05 mM Na2HPO4, 0.7 mM NaHCO3) at 28.5°C. Embryos were staged according to standard morphological criteria (41).

Microinjection of EYFP and EYFP-B2. To induce expression of protein B2 in zebrafish embryos, 2 nl of 10 or 30 ng/µl pEYFP-C1/pEYFP-B2 (linearized with EcoRI) was injected into each embryo at the one-cell stage using a gas-driven microinjector (Medical System Corporation, Greenvale, NY) (42).

Apoptotic Cell Staining. Embryos were harvested at 6 and 24 h post-fertilization (hpf), fixed with 4% paraformaldehyde in PBS (pH 7.4) at room temperature for 30 min, stained with acridine orange (1 µg/ml) for 3-5 min, washed twice in PBS, and evaluated under a fluorescence microscope (using incident light at 488 nm for excitation, with a 515-nm long-pass filter for detection) (43).

Cell Counts and Statistical Analyses. Loss of MMP and the percentage of annexin V-fluorescein-positive cells were determined in each sample by counting 200 cells. Each result is expressed as the mean ± S.E. Data were analyzed using either paired or unpaired Student's t tests, as appropriate. A value of p < 0.05 was taken to indicate a statistically significant difference between group mean values.

RESULTS

Cytosolic B2 was concentrated in mitochondria and appeared as dot-like structures (Fig. 1B, k, indicated by arrows). To track protein B2 directly, we transfected GF-1 cells with the pEYFP-C1 vector containing the open reading frame of the B2 cDNA and found transient expression of the EYFP-B2 fusion protein. At 48 h post-transfection, Western blotting detected expression of the ≈40.5-kDa EYFP-B2 fusion protein (Fig. 2A, lane 3) and 32-kDa EYFP (Fig. 2A, lane 2). Immunofluorescence revealed that the EYFP-B2 fusion protein (Fig. 2B, b and d, indicated by arrows), but not EYFP alone (Fig. 2B, a and c), targets mitochondria directly. Using immuno-EM, EYFP-B2 was found to be transiently expressed and to concentrate and aggregate in the mitochondrial matrix (Fig. 2B, f, indicated by arrows). EYFP-B2 and EYFP (Fig. 2B, e) were labeled with anti-EYFP IgG. Furthermore, we used MitoTracker Red CMXRos to confirm the localization of protein B2 in mitochondria. Superimposition of the green fluorescent image of EYFP-B2 (Fig. 2B, h) with the MitoTracker Red dye image (Fig. 2B, i) produced a yellow-green fluorescent image (Fig. 2B, j); together with the phase-contrast image (Fig. 2B, g), this indicates colocalization in mitochondria. Taken together, these results demonstrated that EYFP-B2 targets mitochondria in both RGNNV-infected and EYFP-B2-overexpressing GF-1 cells.

B2 Has a Mitochondrial Targeting Peptide. Next, we determined the signal sequence used by B2 to target mitochondria during RGNNV infection. Signal sequence analysis, using two prediction databases (iPSORT and TargetP 1.1), identified the mitochondrial targeting peptide of protein B2 (Fig. 3A).
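As a brief aside on the quantitation used throughout these Results: activity readouts such as the SDH OD490 are reported as a percentage of the vector-only control and compared with Student's t tests. A hedged sketch of that arithmetic with made-up triplicate readings (not the paper's raw data):

```python
import numpy as np
from scipy import stats

def percent_activity(od_sample, od_control):
    """Express complex II (SDH) activity as % of the vector-only control."""
    return 100.0 * np.mean(od_sample) / np.mean(od_control)

# Hypothetical OD490 triplicates (illustrative only).
eyfp    = np.array([0.52, 0.50, 0.54])   # vector control
eyfp_b2 = np.array([0.34, 0.33, 0.36])   # B2-expressing cells

print(round(percent_activity(eyfp_b2, eyfp), 1), "% of control")
t, p = stats.ttest_ind(eyfp_b2, eyfp)    # unpaired Student's t test
print("p =", p)
```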
To locate the mitochondrial targeting signal of protein B2, a sequential N-terminal to C-terminal deletion approach was used to construct EYFP-C1 vectors fused with different lengths of the B2 gene (Fig. 3B). Furthermore, we found that homology of the mitochondrial targeting motif is high among betanodavirus B2 proteins (Fig. 3C) but very low in alphanodaviruses (data not shown). The fusion proteins were expressed in GF-1 cells, and cellular expression of the B2 protein fragments was assessed by Western blot analysis. We then examined the localization of these B2 fragments using MitoTracker (to detect mitochondrial localization) and fluorescence microscopy at 48 h post-transfection. EYFP-B2 (Fig. 3E, d-f), B2 1-50 (Fig. 3E, g-i), and B2 10-50 (Fig. 3E, j-l), but not EYFP (Fig. 3E, a-c) or B2-del(41-50) (Fig. 3E, m-o), were detected in mitochondria as green, red, and yellow-green fluorescence, respectively. Taken together, the data indicate that residues 41 to 50 of protein B2 have a mitochondrial targeting function. As shown previously (11,14), protein B2 was found to induce mitochondria-mediated cell necrosis and to target mitochondria. Here, we determined that the B2 targeting signal peptide (aa 41-50) is required for the induction of MMP loss and necrotic cell death. Additionally, to determine whether B2 reduced ATP synthesis in the mitochondrial respiratory chain, we assayed the NAD⁺/NADH ratio as a readout of the NADH-ubiquinone oxidoreductase activity of complex I. EYFP-B2 (0.95) and EYFP-B2(del) (0.98) had no effect on the NAD⁺/NADH ratio in GF-1 cells at 48 h post-transfection (compare with the EYFP control (1.0), the complex I inhibitor DPI (0.08), and the complex II inhibitor 3-NP (1.02)) (Fig. 5B). In the SDH assay of complex II activity (complex II consists of four protein subunits, SDHA, SDHB, SDHC, and SDHD), EYFP-B2 was found to markedly reduce complex II activity, to 65% in GF-1 cells and 54% in 293T cells (compare with EYFP alone, 100% in both GF-1 and 293T cells; EYFP-B2(del) as a negative control, 92% in GF-1 cells and 93% in 293T cells; the DPI control, 99% in GF-1 cells and 97% in 293T cells; and the 3-NP control, 35% in GF-1 cells and 43% in 293T cells; Fig. 5C). We further found that protein B2 targeting to mitochondria induced partial degradation of complex II subunit proteins, such as SDHB, SDHC, and SDHD (Fig. 5D, lane 2), at 48 h in 293T cells as compared with the negative control (Fig. 5D, lane 1). These data suggest that protein B2 induces mitochondrial ATP depletion and necrotic cell death via inhibition of complex II activity and degradation of complex II components.

DISCUSSION

Betanodavirus causes viral nervous necrosis; infected fish lie on their side, float belly up, or swim abnormally (such as in circles or to the right). Histopathological changes include extensive cellular vacuolation and necrotic neuronal degeneration in the central nervous system and retina (44). The molecular mechanisms involved in the pathogenesis of this disease are still unknown. In the present study, the sequence (41RTFVISAHAA50) in RGNNV protein B2 was identified as the molecular signal used to target the mitochondrial matrix. Protein B2 induced MMP loss in both fish and human cells, followed by mitochondrial ATP depletion. Furthermore, this novel necrotic death factor triggered ATP depletion in embryos and consequently embryonic cell death. These results may be the first to elucidate the mechanism of betanodavirus-induced neuronal degeneration in the central nervous system and retina (44).
Figure 6. Protein B2 induces ATP depletion and death of zebrafish embryos. Embryos were injected with the vector control (EYFP) or the B2 gene (EYFP-B2) at the one-cell stage. A, phase-contrast images of embryos injected with EYFP (a and g), a higher dose of EYFP-B2 (30 ng/µl) (b and h), and a lower dose of EYFP-B2 (10 ng/µl) (c and i), and after staining with acridine orange at 10 hpf (d-f) and 24 hpf (j-l). B, the intracellular ATP concentration (measured as a percentage of the vector control concentration; n = 10,000 cells) at 10 and 24 hpf. C, embryonic death rate (n = 120 embryos per sample) at 10 and 24 hpf. All data were analyzed using either paired or unpaired Student's t tests as appropriate. *, p < 0.05 indicates a statistically significant difference between group mean values.

A Motif of the RGNNV B2 Protein Is Required for Mitochondrial Targeting during the Early Replication Stage of Infection. B2 (a 75-amino acid protein translated from subgenomic RNA3) (13) may be either a suppressor of host siRNA silencing (12,13) or a necrotic death factor (11,14). Mitochondria appear to be the targets of B2. In our system, B2 was expressed early during replication, at 24 h post-infection (Fig. 1A, lane 2). Immunofluorescence assays showed that B2 targets mitochondria-like particles (Fig. 1B, j and k) and to some extent localizes to the nucleus at 48 h post-infection, where it may act as a host siRNA silencing suppressor (Fig. 1B, j, indicated by arrows). Using EYFP tag tracing (Fig. 2B, g and h) and MitoTracker staining (Fig. 2B, i and j), protein B2 was shown to target mitochondria. Furthermore, we wanted to know whether B2 localizes to the outer or inner membrane of mitochondria, because protein function differs between them (29). Immuno-EM staining showed that most protein B2 localized to, and formed small complexes in, the mitochondrial matrix (Fig. 2B, f, indicated by arrows). Most mitochondrial proteins are synthesized in the cytoplasm as precursors, which are post-translationally translocated into either the outer or inner membrane of the mitochondria (45)(46)(47). These proteins usually have an N-terminal cleavable sequence that is either positively charged or hydrophobic and usually forms an amphipathic helix (48-52). Proteins targeted to the mitochondrial membrane often contain an N- or C-terminal targeting sequence and an anchoring signal sequence (45,53), but a mid-sequence location of a targeting motif, as found here for B2, is not well characterized. In our study, the targeting motif (41RTFVISAHAA50) of protein B2 contained 10 amino acid residues and played a very important role early during RGNNV infection. Deletion of this sequence resulted in loss of the MMP-disrupting activity (Fig. 4A, g-i) and of the necrosis-inducing activity (loss of SDH activity; Fig. 5C) in fish GF-1 cells. Positively charged and hydrophobic residues within a signal peptide are essential for mitochondrial targeting (45,52,54). When the positively charged residues (Arg52 and Arg53) and the two hydrophobic residues (Val44 and Ile45) near the signal sequence were changed to uncharged Ala residues (Fig. 3A), targeting to the mitochondrion was lost, implying that these four residues (Val44, Ile45, Arg52, and Arg53) of protein B2 are necessary for targeting (Fig. 3E, g-i) and necrosis induction (Fig. 4C, j-l).
Protein B2 Induces MMP Loss and Produces a Mitochondrial Energy Crisis

Mitochondria are vital cellular machines for maintaining cellular energy; they use oxygen to produce ATP through a process known as oxidative phosphorylation (16). The inner mitochondrial membrane contains a respiratory chain of four multisubunit protein complexes that release energy used to pump protons across this membrane. The resulting electrochemical proton gradient and mitochondrial membrane potential (MMP, ΔΨ) drive ATP formation from ADP and phosphate (16). Damage to mitochondria therefore plays an important role in a wide range of human diseases (20, 55). Mitochondrial disorders often present as neurological disorders such as Parkinson disease (21, 22), Alzheimer disease (23-27), and Huntington disease (28). A variety of studies have suggested that neural cell death can result from mitochondrial energy deficits; such death can be mediated by loss of MMP, release of cytochrome c, and depletion of ATP (56). In our system, protein B2 entered mitochondria and inhibited the activity of complex II (Fig. 5C) but not complex I (Fig. 5B), thereby depleting ATP (Fig. 5A) and degrading the component proteins of complex II, namely SDHB, SDHC, and SDHD (Fig. 5D). As in fish and human cells, marked depletion of ATP (Fig. 6B) occurred in early zebrafish embryos used as a model animal system (40) and culminated in embryonic death (Fig. 6, A and C). These in vitro and in vivo results suggest that protein B2 triggers a mitochondrial energy crisis.

Viral Proteins Induce Mitochondrial Disruption and Cellular Death

Recently, several well known DNA and RNA viruses have been shown to cause mitochondria-mediated cell death. A number of viral polypeptides modulate apoptosis by increasing or decreasing MMP through modification of the outer mitochondrial membrane or by acting on upstream or downstream steps of the apoptotic cascade (29). In contrast, only a few viral proteins (e.g., HCV NS4A (57) and HIV-1 Tat (58)) accumulate in mitochondria, induce MMP loss, and activate caspase-3-dependent cell death. Chen et al. (38) showed that the RGNNV TN1 strain induced apoptosis and post-apoptotic necrosis in a grouper liver cell line (GL-av). RGNNV infection in fish cells induced loss of MMP, which was blocked by the mitochondrial membrane permeability transition pore inhibitor BKA (38) as well as by the Bcl-2 family protein zfBcl-xL (11). Moreover, cell death was shown to depend on viral RNA replication, indicating that the viral death factor(s), proteins α and B2, must be expressed before cytochrome c can be released and caspase-3-independent signaling can be activated in mitochondria (59). Furthermore, in betanodavirus-infected cells, both the viral capsid protein α (8, 9) and the non-structural protein B2 (11, 14) trigger loss of MMP and necrosis. Protein α induces MMP loss, cytochrome c release, and increased caspase-8 and -3 activation, and zfBcl-xL blocks these post-apoptotic necrosis processes, thereby rescuing virus-infected cells (8, 9). Protein B2, on the other hand, up-regulates the pro-apoptotic gene Bax to enhance MMP loss but does not induce release of cytochrome c or activation of caspase-3-independent signaling (11); accordingly, this necrosis can be blocked by B2-specific siRNA and by the anti-apoptotic protein zfBcl-xL (11, 14). In the present study, protein B2 disrupted mitochondria via mitochondrial targeting (Fig. 3E) and inhibition of complex II function (Fig. 5, C and D). The resulting MMP loss
(Fig. 4A) and ATP depletion (Figs. 5A and 6B) contributed to necrosis induction, a strategy used by viruses to modulate the rate of cell death. As summarized in Fig. 7, protein B2 is a non-structural protein expressed early in the infection cycle. It localizes primarily to the mitochondria and secondarily to the nucleus. A targeting signal sequence of 10 amino acids in the central region of the B2 protein directs it to the mitochondrial matrix, where it disrupts mitochondrial function. Protein B2 inhibits complex II activity, leading to MMP loss and ATP depletion that culminate in a bioenergetic crisis and necrosis. These findings may provide new insights into the molecular pathogenesis of RNA viruses and suggest new approaches to clinical treatment.
The Role of Sirt3 in Mediating Cardioprotective Effects of Ras Inhibition on Cardiac Ischemia-reperfusion

Cardiac ischemia-reperfusion stimulates the renin-angiotensin system (RAS) and is associated with elevated levels of circulating angiotensin II. Numerous studies demonstrate that losartan, an antagonist of the angiotensin II type 1 receptor, improves cardiac function in animal models of ischemia-reperfusion. The molecular mechanisms underlying the cardioprotective effects of RAS inhibitors on cardiac ischemia-reperfusion remain poorly understood and are not attributable to the anti-hypertensive action of these drugs. This Commentary focuses on a study published in the Journal of Pharmacy and Pharmaceutical Sciences on the role of SIRT3 in the cardioprotective action of losartan against ischemia-reperfusion injury. We provide a comprehensive discussion of the role of mitochondria in the cardioprotective effects of losartan mediated through SIRT3.

Cardiac ischemia-reperfusion (IR) is known to stimulate the renin-angiotensin system (RAS), which may have deleterious effects on heart metabolism and function (Figure 1). Activation of RAS during myocardial infarction and ischemic heart disease is associated with elevated levels of circulating angiotensin II (AngII) (1). In addition, activation of cardiac RAS in the ischemic myocardium increases intracellular synthesis of AngII, which, together with circulating AngII, exerts detrimental effects on cardiac function through autocrine and paracrine mechanisms (2). Notably, short-term treatment with AngII exerts cardioprotective effects on IR similar to those induced by ischemic preconditioning in isolated Langendorff-perfused hearts (3). The deleterious effects of AngII on the ischemic myocardium are mediated through AngII type 1 (AT1) receptors and include suppression of contractility, arrhythmias, alterations of Ca2+ homeostasis and energy metabolism, and increased generation of reactive oxygen species (ROS)
(4). Consequently, inhibition of AngII production or action in tissues constitutes an important therapeutic strategy for protecting the heart against IR. Indeed, both angiotensin-converting enzyme (ACE) inhibitors (5) and AT1 receptor blockers have been shown to exert cardioprotection against IR injury (6). Several studies demonstrate that the AT1 receptor antagonist losartan improves cardiac function in the isolated Langendorff-perfused heart subjected to global IR (7), as well as in in vivo models of IR induced by coronary artery ligation (8). Notably, the cardioprotective effects of RAS inhibitors on cardiac IR are not associated with the anti-hypertensive action of these drugs (9). Despite the large number of studies available so far, the molecular mechanisms of cardioprotection by RAS inhibition remain unknown. Although blockade of AT1 receptors improves post-ischemic recovery, prevents arrhythmia, increases Ca2+ storage in the sarcoplasmic reticulum, reduces ROS, and attenuates mitochondrial dysfunction, a cause-effect relationship between these effects has not been established. The article by Klishadi and co-authors published in the Journal of Pharmacy and Pharmaceutical Sciences (10) attempts to establish a role for SIRT3 in the cardioprotective action of losartan following IR injury. The authors demonstrated that pre-treatment of rats with losartan (10 mg/kg/day) for 4 weeks significantly improved the recovery of hearts after in vivo IR induced by coronary artery ligation (30 min) and subsequent reperfusion (120 min). They found that electrical heart abnormalities (ventricular tachycardia and ectopic beats) after IR were attenuated by losartan, a finding that was associated with increased SIRT3 protein levels. The authors concluded that chronic administration of losartan at non-hypotensive doses could exert cardioprotection, in part, through normalization of the SIRT3 protein level in the ischemic myocardium (10). However, the involvement and role of mitochondrial SIRT3 in these cardioprotective effects of losartan were not examined, limiting the interpretation of the data. Sirtuins are class III histone deacetylases that depend on NAD+ for their activity and play an essential role in the regulation of protein activity by deacetylation. There are seven sirtuin isoforms (SIRT1-7), whose subcellular localization varies between the cytoplasm (SIRT2), the nucleus (SIRT1, 6, 7), and the mitochondria (SIRT3, 4, 5) (11). Proteomic analysis has identified 277 lysine acetylation sites on 133 mitochondrial proteins, thereby establishing that lysine acetylation is an abundant post-translational modification in mitochondria (12). Most lysine-acetylated proteins (~100 proteins) from mitochondrial fractions were metabolic enzymes involved in various aspects of energy metabolism, including the TCA cycle, fatty acid oxidation, and oxidative phosphorylation (13). SIRT3 is the main mitochondrial sirtuin isoform and plays a central role in fatty acid oxidation and ATP synthesis in cells (14). Its expression decreases with age and in neurodegenerative, cardiovascular, and metabolic diseases. The study by Klishadi et al. (10) did not evaluate mitochondrial function and/or acetylation of mitochondrial proteins in losartan-pretreated versus untreated rats subjected to IR. Also, the lack of data on the enzymatic activity of SIRT3 in mitochondria obscures the contribution of SIRT3 to losartan-induced cardioprotection in the ischemic myocardium.
We have previously shown (14) that pretreatment of rats with the direct renin inhibitor aliskiren (50 mg/kg/day) improved cardiac function after permanent coronary artery ligation for four weeks. The beneficial effects of aliskiren were associated with improved respiratory function of mitochondria and inhibition of mitochondrial permeability transition pore (PTP) opening. Interestingly, hearts of aliskiren-treated rats demonstrated high SIRT3 levels and decreased acetylation of mitochondrial proteins, including cyclophilin D (CyP-D), a key regulator of PTP formation (15). These data suggest that chronic inhibition of RAS could exert cardioprotective actions through inhibition of PTP formation by SIRT3-mediated deacetylation of CyP-D. Chronic blockade of AT1 receptors with losartan could also reduce the damaging autocrine/paracrine effects of AngII on the coronary arteries and myocardium. Losartan-induced vasodilatation could improve oxygen and substrate delivery to the ischemic myocardium at reperfusion. In addition, inhibition of AT1 receptors by losartan could prevent ROS accumulation by NADH oxidase (4), inducible nitric oxide synthase (iNOS) (16), and mitochondria (17, 18) in cardiac cells. A role for losartan in maintaining intracellular Ca2+ homeostasis in isolated guinea pig ventricular myocytes following IR injury has also been proposed (19). Since ROS and Ca2+ are the main inducers of the mitochondrial PTP, reductions in their levels by losartan following IR could prevent pore opening and improve mitochondrial function and ATP production. The latter could lead to a reduction in the AMP to ATP ratio and stimulation of AMP kinase (AMPK), a serine/threonine kinase that acts as a "fuel sensor" and regulates energy metabolism in the heart. Activation of AMPK is known to stimulate ATP synthesis, glucose transport, glycolysis, and fatty acid oxidation, and to inhibit energy-consuming anabolic pathways such as protein synthesis (20). Indeed, we have shown that losartan enhanced AMPK phosphorylation in AngII-treated cardiomyocytes (17). Losartan-induced activation of AMPK could up-regulate SIRT3 activity through changes in the NAD+/NADH ratio, the main regulator of sirtuins. AMPK-dependent increases in the protein expression of SIRT3 and manganese superoxide dismutase (MnSOD) were found in mouse skeletal muscle (21). Interestingly, the beneficial effects of SIRT3 can be mediated through a direct up-regulation of the antioxidant capacity of cardiomyocytes. SIRT3 has been shown to induce deacetylation and translocation of the transcription factor forkhead box O3 (FoxO3) to the nucleus, where it activates antioxidant-encoding genes such as MnSOD and catalase, thereby decreasing cellular levels of ROS (22). Also, SIRT3 can stimulate PGC-1α and its downstream targets, which regulate mitochondrial biogenesis and play a crucial role in cardiac diseases (23).
It is likely that acetylation of CyP-D due to down-regulation of SIRT3 facilitates its interaction with the PTP complex and stimulates pore opening, leading to mitochondria-mediated cell death and cardiac dysfunction. A causal role of CyP-D acetylation induced by down-regulation of SIRT3 in mitochondrial PTP opening was demonstrated previously (24). As mentioned above, aliskiren prevented CyP-D acetylation, which was associated with up-regulation of SIRT3 expression and PTP inhibition in post-infarction rat hearts (15). Notably, the beneficial effects of losartan on mitochondria can also be mediated through AngII receptors present in mitochondria. We (15) and others (25) have reported the expression of AT1 and AngII type 2 (AT2) receptors in cardiac and kidney mitochondria. In addition, a role for AT2 receptor activation in losartan-mediated cardioprotection cannot be excluded in the setting of RAS activation; this point needs to be evaluated. Beyond acetylation of CyP-D due to down-regulation of SIRT3, cardiac IR can activate CyP-D through its interaction with the peroxisome proliferator-activated receptor alpha (PPARα). We have recently shown that the PPARα/CyP-D interaction was associated with PTP opening in cultured cardiomyocytes subjected to oxidative stress (26) and in in vivo cardiac IR (27). Activation of AMPK by metformin abrogated the interaction and prevented PTP opening in both cases.

In conclusion, the study presented by Klishadi and co-authors (10) attempts to elucidate the role of the AngII/AT1 receptor/SIRT3 pathway in losartan-induced cardioprotection against IR injury. This report, together with previous studies, indicates the importance of mitochondria in the attenuation of cardiac dysfunction by the chronic use of RAS inhibitors in response to oxidative stress.
Image-based identification and isolation of micronucleated cells to dissect cellular consequences

Recent advances in isolating cells based on visual phenotypes have transformed our ability to identify the mechanisms and consequences of complex traits. Micronucleus (MN) formation is a frequent outcome of genome instability, triggers extensive disease-associated changes in genome structure and signaling coincident with MN rupture, and is almost exclusively defined by visual analysis. Automated MN detection in microscopy images has proved extremely challenging, limiting unbiased discovery of the mechanisms and consequences of MN formation and rupture. In this study we describe two new MN segmentation modules: a rapid and precise model for classifying micronucleated cells and their rupture status (VCS MN), and a robust model for accurate MN segmentation (MNFinder) from a broad range of microscopy images. As a proof of concept, we define the transcriptome of non-transformed human cells with intact or ruptured MN after inducing chromosome missegregation by combining VCS MN with photoactivation-based cell isolation and RNASeq. Surprisingly, we find that neither MN formation nor rupture triggers a unique transcriptional response. Instead, transcriptional changes correlate with increased aneuploidy in these cell classes. Our MN segmentation modules overcome a significant challenge to reproducible MN quantification and, joined with visual cell sorting, enable the application of powerful functional genomics assays, including pooled CRISPR screens and time-resolved analyses of cellular and genetic consequences, to a wide range of questions in MN biology.

Introduction

Recent advances in automated image analysis have led to the development of high-throughput platforms that isolate specific cell classes and match visual phenotypes to specific genetic and expression profiles. These platforms bring the power of pooled genetic screening and population-based analyses to a huge range of phenotypes that are defined solely by visual changes in subcellular features. One such feature is the micronucleus (MN), a nuclear compartment containing a few chromosomes or chromatin fragments that results from mitotic segregation errors and persistent DNA damage (Bona and Bakhoum, 2024; Guo et al., 2019). Increased MN frequency is a hallmark of carcinogen exposure, cancer development, and aging, and MN are potent drivers of massive genome structure changes, pro-inflammatory and metastasis signaling, and senescence (Bakhoum et al., 2018; Dou et al., 2017; Harding et al., 2017; He et al., 2019; Mackenzie et al., 2017; Mohr et al., 2021; Soto et al., 2018; Zhang et al., 2015). These processes are linked to MN rupture, which exposes the chromatin to the cytosol for the duration of interphase (Hatch et al., 2013) and may contribute to tumorigenesis, metastasis, aging, and inflammatory disorders (Bona and Bakhoum, 2024; Guo et al., 2019).

Most studies of the biology and consequences of MN formation and rupture take advantage of the fact that MN can be induced at high frequency in cultured cells, for instance by inhibiting the spindle assembly checkpoint kinase Mps1 (Krupina et al., 2021). However, these interventions cause diverse "off-target" nuclear and cellular changes, including chromatin bridges, aneuploidy, and DNA damage.
Several sophisticated techniques have been developed to overcome this challenge by enriching or isolating MN or micronucleated cells, including live-cell imaging of single-cell arrays, inducing Y chromosome missegregation by disrupting the centromere, and purifying MN from lysed cells by flow cytometry; these approaches have led to new insights into MN rupture, function, and consequences (Agustinus et al., 2023; Ly et al., 2016; Mohr et al., 2021; Papathanasiou et al., 2023; Zhang et al., 2015). However, all have significant limitations for unbiased analysis of the cellular consequences of MN formation and rupture and lack features necessary for high-throughput analyses of micronucleated cells. For such studies, what is needed is a way to visually identify micronucleated cells within a larger population rapidly and robustly and to target them for downstream analysis.

Automated detection of MN in microscopy images using conventional morphological transformations is challenging due to the diversity of MN shapes and sizes, their similarity to nuclear features that co-occur at high rates, including nuclear blebs and chromatin bridges, and their frequent proximity to nuclei. To address this, we developed two image analysis pipelines that combine neural network-based pixel classification with pre- and post-processing steps to rapidly identify micronucleated cells from low-resolution images (VCS MN) or to segment MN with high recall across multiple cell lines, chromatin labels, and imaging conditions (MNFinder). We demonstrate the utility of this approach by combining VCS MN with a phenotype-based cell isolation method, called Visual Cell Sorting (VCS), to define the transcriptomic profile of hTERT-RPE1 cells with no, intact, or ruptured MN by RNAseq after inducing chromosome missegregation. During VCS, single cells expressing nuclear-localized Dendra2 are photoconverted on demand based on the results of the MN classifier. Specific cell classes are then isolated by gating on Dendra fluorescence ratios during FACS (Hasle et al., 2020). We show that we can recapitulate an established aneuploidy signature using this method and find that, surprisingly, neither micronucleation nor rupture is sufficient to induce substantial transcriptional changes under these conditions. We envision that the MN segmentation and cell isolation platforms described here will be widely applied to fundamental questions in cell division and nucleus biology and to cell-based models of human disease, enabling new discoveries into the contribution of MN to cellular dysfunction.
Machine vision identifies micronucleated cells within a mixed population

We initially developed an automated pipeline to identify micronucleated cells in a mixed population using hTERT RPE-1 (RPE1) cells. RPE1 cells are a near-diploid, non-transformed human cell line with a very low frequency of spontaneous MN that has been used extensively for studies of chromosome missegregation and micronucleation (He et al., 2019; Kneissig et al., 2019; Mammel et al., 2021; Santaguida et al., 2017; Zhang et al., 2015). RPE1 cells also do not activate the cGAS-STING innate immune pathway upon micronucleation (Chen et al., 2020), similar to many cancer cells (Kwon and Bakhoum, 2020; Stetson et al., 2008). We anticipated that this would limit inflammatory signaling and increase the sensitivity of downstream analyses of the consequences of MN formation and rupture (Bakhoum et al., 2018; He et al., 2018; Santaguida et al., 2017). To enable automated image analysis and live-cell marking, we co-expressed a fluorescent chromatin marker (H2B-emiRFP703) to identify nuclei and MN with 3xDendra2-NLS (nuclear localization signal) to identify ruptured MN and photoactivate selected cells (Hasle et al., 2020; Hatch et al., 2013; Matlashov et al., 2020) (Fig. 1A). We treated these cells, referred to as RFP703/Dendra, with a low dose of an Mps1 inhibitor (Mps1i) to induce MN, which increased MN frequency to 50% of cells, with 1 MN per cell on average (Fig. S1A-B).

To distinguish MN from morphologically similar nuclear features also induced by Mps1i, including chromatin bridges and nuclear blebs (He et al., 2018; Maciejowski et al., 2015), we trained a neural net classifier on low-resolution, single-channel images of RFP703/Dendra cells after Mps1i incubation. For this first model, called VCS MN, we opted for low-resolution training images and increased speed of analysis, and we privileged positive predictive value (PPV) over recall to optimize downstream integration of the classifier into a visual cell sorting platform. To train the model, H2B channel images were passed to a Deep Retina neural net (Caicedo et al., 2019a) to generate nuclear masks, which were then used to crop the field into single-cell images, excluding cells on the image edges. On average, 75% of nuclei per field were correctly segmented and cropped. A UNet classifier using PyTorch's ResNet18 pre-trained model as its base architecture (Ronneberger et al., 2015) was then trained on 2,000 single-cell crops combining the H2B image with the results of Sobel edge detection. Classified MN pixels were converted to a mask that was mapped back onto the whole-field image (Fig. 1B). MN segments were then assigned to "parent" nuclei by proximity, which correctly associated 97% of MN (Fig. S1C). Nuclei associated with at least one MN were labeled as MN+ cells, and those associated with no MN were labeled as MN− cells. We validated the ability of VCS MN to classify cells on 6 whole-field images from two experiments and calculated recall values of 86% and 65%, respectively, for MN− and MN+ labeled cells, and PPVs of 73% and 93% (Fig. 1C). Analysis of MN classification on cropped images found a recall value of approximately 70% and a PPV of 89% for MN identification (Fig. 1D), indicating that we successfully limited false positives in our MN+ cell class, with the tradeoff of decreased purity of the MN− cell pool. We also observed a small, but statistically significant, reduction in recall for ruptured MN (Fig. 1D), likely driven by their smaller size.
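As an illustration of the proximity-based "parent" assignment step described above, the following sketch assigns each labeled MN segment to its nearest nucleus and discards MN farther than 40 px from any nucleus, mirroring the rule used by VCS MN; the function and variable names are ours, not code from the published package.

```python
# Minimal sketch of proximity-based MN-to-nucleus assignment, assuming
# two integer label images of equal shape: nuclei_labels and mn_labels
# (0 = background). Illustrative, not the published VCS MN implementation.
import numpy as np
from scipy import ndimage

MAX_DISTANCE_PX = 40  # MN farther than this from any nucleus are discarded

def assign_mn_to_nuclei(nuclei_labels: np.ndarray, mn_labels: np.ndarray) -> dict:
    """Map each MN label to the ID of its nearest nucleus, or drop it."""
    # Distance from every pixel to the nearest nucleus pixel, plus the
    # coordinates of that nearest pixel so we can look up its nucleus ID.
    dist, (near_r, near_c) = ndimage.distance_transform_edt(
        nuclei_labels == 0, return_indices=True
    )
    assignments = {}
    for mn_id in np.unique(mn_labels):
        if mn_id == 0:
            continue
        rr, cc = np.nonzero(mn_labels == mn_id)
        closest = np.argmin(dist[rr, cc])          # MN pixel nearest a nucleus
        if dist[rr, cc][closest] > MAX_DISTANCE_PX:
            continue                                # orphan MN: discard
        r, c = rr[closest], cc[closest]
        assignments[int(mn_id)] = int(nuclei_labels[near_r[r, c], near_c[r, c]])
    return assignments
```

A nucleus is then labeled MN+ if it appears as a value in the returned mapping and MN− otherwise, matching the cell-level classification described in the text.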
To determine whether this pipeline could achieve similar accuracy in another cell line, we retrained the VCS MN classifier on images of multiple cell lines acquired at two magnifications (see Methods) and assessed prediction quality on low-resolution images of U2OS cells induced to form MN. We calculated recall values of 82% and 83% in MN− and MN+ cells, respectively, PPVs of 81% and 83%, and an MN PPV of 88% (Fig. 1E). In summary, VCS MN can automatically identify the majority of micronucleated and non-micronucleated cells, with high precision, in low-resolution images of cells from multiple sources containing a mix of contaminating objects.

RPE1 cells with ruptured MN were further classified from the MN+ population based on MN Dendra2 intensity: NLS-3xDendra2 is present in intact MN and absent from ruptured MN (Fig. 1F). We quantified the maximum Dendra2 intensity in the nucleus and corresponding MN segments, and MN with a signal less than 0.16 of the nuclear signal were classified as "ruptured." This threshold correctly classified approximately 90% of MN (Fig. S1D). When appended to the VCS MN pipeline, this analysis correctly identified 60% of rupture− cells (cells with only intact MN) and 70% of rupture+ cells (cells with at least 1 ruptured MN), with a PPV near 75% in both cases (Fig. 1G). The difference in recall is likely due to the increased probability of a multi-micronucleated cell being correctly classified as MN+ and having at least 1 ruptured MN (Fig. S1E-F).

MNFinder accurately segments MN in images of attached cells

Due to the analysis constraints we imposed, MN classified by the VCS MN module were typically undersegmented (Fig. 2A), and performance diminished substantially on images taken at different magnifications or with different chromatin labeling agents. Therefore, we developed a new module, called MNFinder, that privileges accurate MN and nuclear segmentation across cell types, DNA labels, and image resolutions over PPV. MNFinder takes a single-channel chromatin image as input and integrates the results of two independent nucleus/MN (nuc/MN) and cell segmenter pipelines to generate three object groups: 1) MN; 2) cells, which group nuclei with associated MN; and 3) nuclei. Both the nuc/MN and cell pipelines use a UNet classifier for initial segmentation followed by additional image processing steps to refine the results (Fig. 2B).

We made several changes to the neural net input and architecture to design MNFinder. To adjust for highly imbalanced data sets, we incorporated attention gates in the upsampling blocks and employed focal loss during training (Lin et al., 2018). We also modified the classifier to segment both nuclei and MN to better discriminate between nucleus-associated and MN-associated pixels, increased the size of input image tiles from 48x48 px to 128x128 px to increase contextual information, and oversampled input images by 25%. Final classification results integrate the predictions from all crops containing a given object.

For nucleus/MN segmentation, we tested a variety of UNet-derived architectures (Table 1) and found that a basic UNet with attention gates performed well across multiple cell lines (Oktay et al., 2018; Su et al., 2021). We also observed that incorporating multiscale downsample blocks identified some MN that were otherwise missed, but produced an overall reduction in performance. Therefore, we developed an ensemble classifier
(Fig. S2A) that takes predicted MN weights from both UNet types as inputs to generate the final MN predictions. Nucleus predictions are retained from the basic attention-gate UNet. This classifier was trained on images of live RPE1 and U2OS cells expressing H2B-emiRFP703, or fixed RPE1 cells, HeLa cells, and hTERT human fetal fibroblasts (HFF) labeled with DAPI, using 128x128 random crops with image augmentation. To adjust for misclassification of large MN as nuclei, nuclei with an area below 250 px are automatically reclassified, and undersegmentation of MN is further limited by expanding MN object boundaries to their convex hulls (Fig. S2B).

Cell identification is not possible with standard UNets. To overcome this limitation and improve nucleus segmentation, we developed a UNet and image processing pipeline that outputs "cell" masks, defined as the concave hull of each nucleus and its associated MN, based on predicted distance and proximity maps (Fig. S2C). Map predictions are derived from single-channel image crops using a multi-decoder UNet with a UNet3+ architecture in the two main decoders (Huang et al., 2020; Mahbod et al., 2022). To improve accuracy and decrease training time, we added a third arm to the UNet that classifies foreground pixels and feeds these predictions into the distance and proximity decoders (Fig. S2D). Given the complexity of this UNet, we used a constant feature depth at every level of the encoder and decoder pathways and replaced most concatenation operations with addition (Lu et al., 2022). This UNet was trained on the same images as the nucleus/MN segmenter after annotation with concave hulls automatically generated from annotated MN and nuclei (Fig. S2C). To generate cell masks from the predicted distance and proximity maps, the results are summed and used for watershed segmentation, followed by elimination of false boundaries based on the proximity-map predictions of true cell boundaries (Fig. S2E).
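The watershed step just described can be sketched as follows, assuming the two decoder outputs are float arrays scaled to [0, 1]; the seed and foreground cutoffs are our illustrative choices, and the published pipeline additionally prunes false boundaries using the proximity-map predictions.

```python
# Hedged sketch of cell-mask generation from predicted distance and
# proximity maps: sum the maps, seed on high-confidence peaks, and run
# watershed restricted to foreground pixels. Thresholds are illustrative.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def cells_from_maps(dist_map: np.ndarray, prox_map: np.ndarray) -> np.ndarray:
    combined = dist_map + prox_map
    foreground = combined > 0.1                  # assumed foreground cutoff
    seeds, _ = ndimage.label(combined > 0.7)     # assumed seed cutoff
    # Flood "downhill" from the seeds over the inverted combined map,
    # restricted to foreground pixels, carving one basin per cell.
    return watershed(-combined, markers=seeds, mask=foreground)
```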
In the last step of the MNFinder module, the nuc/MN and cell segmentation results are integrated to produce a final set of labels identifying each unique MN, each cell with its nuclei and MN, and each unique nucleus (Fig. 2B). We validated MNFinder on single-channel images of RPE1 RFP703/Dendra, U2OS RFP703/Dendra, HeLa H2B-GFP, and HFF cells after incubation in Mps1i for 24 hours. Cells were imaged live and fixed, using 20x widefield and 40x confocal microscopy, and using H2B and DAPI to visualize DNA. In these images, MN were present in ~30-70% of cells due to induction of chromosome missegregation. These levels are elevated compared to some cancer cell lines and tumor samples (Jdey et al., 2017). Therefore, we also analyzed publicly available images of unperturbed U2OS cells from Broad Bioimage dataset BBBC039v1, which have an MN frequency of 8% (Table 2). MNFinder showed significant improvement in recall over VCS MN, with an additional improvement in PPV for some conditions (Fig. 2D). Importantly, recall and PPV were largely insensitive to image resolution, DNA label, and cell type. HeLa H2B-GFP 40x images were an outlier in terms of performance for unclear reasons, potentially due to increased nuclear shape diversity. We also calculated the per-object mIoU to determine the quality of the segmentation and found that most MN were accurately segmented, with mIoU values between 69% and 79% (Fig. 2D, Table 1).

These metrics indicate that MNFinder provides accurate and robust MN segmentation across multiple cell lines and image acquisition settings. MNFinder identifies MN with similar sensitivity and substantially improved specificity compared to existing MN enumeration programs (Table 3) (Ibarra-Arellano et al., 2024; Pons and Mauvezin, 2022) and is the only one to report a high mIoU, which is necessary for quantifying MN characteristics. This module is available as a Python package, MNFinder, via PyPI and on the Hatch Lab GitHub repository.

VCS MN suitability for analysis of micronucleated RPE1 RFP703/Dendra cell transcriptomes

To demonstrate the utility of our MN segmentation modules, we asked whether VCS MN could be used for optical cell isolation to obtain cell populations substantially depleted of and enriched for MN. Visual cell sorting (VCS) is a recently developed optical cell isolation pipeline that specifically labels and isolates multiple populations of adherent cells in a single experiment by combining on-demand image analysis with UV (405 nm) pulses of different durations targeted with single-cell accuracy. It can generate up to four different proportions of converted Dendra2, which can be quantified and sorted by FACS (Hasle et al., 2020) (Fig. 3A).

We first validated the utility of micronucleated RPE1 RFP703/Dendra cells for VCS analysis. We confirmed that we could specifically activate and sort two populations of RFP703/Dendra cells by classifying cells in a mixed pool based on CellTrace far-red labeling (Fig. S3A-C). We next confirmed that Dendra2 red:green ratios were stable for the duration of a VCS MN isolation experiment by randomly converting RFP703/Dendra cells using a short, long, or no UV pulse and analyzing nuclear fluorescence intensity 0, 4, and 8 hours after activation. Quantification of nuclear red:green ratios from the same fields over time showed the persistence of three distinct fluorescent populations and a minimal loss of red fluorescence (Fig. 3B), indicating that photoconversion persisted over the time required to activate multiple 6-well cell populations.

We also confirmed that VCS MN could accurately activate and isolate MN+ and MN− cells. RFP703/Dendra cells were incubated with Mps1i one day prior to imaging to generate MN, and Cdk1i was added prior to imaging to prevent mitosis, which dilutes the Dendra2 (red) signal and frequently alters MN status (Hatch et al., 2013). Cells were activated based on VCS MN analysis results and isolated by FACS. Isolated cells were replated in medium containing Cdk1i, fixed, and MN content was quantified by manual fluorescence image analysis. Comparison of classifier PPV for MN+ and MN− cells during activation with MN+ and MN− cell frequency after FACS found a strong enrichment for the correct cell type in each group, with the increased purity of MN+ classified cells being retained during sorting (Fig. 3C).

To determine how MN frequency affects MN cell isolation, we used the PPV and recall values from the VCS MN analysis of untreated U2OS cells (Fig. 1E: U2OS Broad, MN frequency = 8%) to estimate MN+ and MN− cell population purity. As expected, the purity of the MN− population increases and the purity of the MN+ population decreases compared to populations with a higher MN frequency (Fig. S3D). However, this represents a nearly 7-fold enrichment of MN+ cells in the isolated population, and the MN+ cell purity is comparable to the enrichment of tumor cells in patient biopsies (Wu et al., 2021).
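To make this kind of purity estimate concrete, the sketch below propagates a per-cell classifier's sensitivity and specificity, together with the underlying MN frequency, into the expected purities of the sorted MN+ and MN− pools using Bayes' rule. The formulas and input values are our illustration of the calculation, not code or the exact figures from the paper.

```python
# Hedged sketch: expected purity of FACS-sorted MN+ / MN− pools from
# per-cell classifier sensitivity and specificity plus MN prevalence.
# Input values below are illustrative, not the paper's measurements.

def sorted_pool_purity(prevalence: float, sensitivity: float, specificity: float):
    """Return (MN+ pool purity, MN− pool purity, fold enrichment of MN+)."""
    tp = prevalence * sensitivity                # true MN+ called MN+
    fp = (1 - prevalence) * (1 - specificity)    # true MN− called MN+
    tn = (1 - prevalence) * specificity          # true MN− called MN−
    fn = prevalence * (1 - sensitivity)          # true MN+ called MN−
    ppv = tp / (tp + fp)                         # purity of the MN+ pool
    npv = tn / (tn + fn)                         # purity of the MN− pool
    return ppv, npv, ppv / prevalence

if __name__ == "__main__":
    for prev in (0.08, 0.50):                    # low vs. Mps1i-induced MN frequency
        ppv, npv, fold = sorted_pool_purity(prev, sensitivity=0.83, specificity=0.82)
        print(f"MN frequency {prev:.0%}: MN+ purity {ppv:.0%}, "
              f"MN− purity {npv:.0%}, enrichment {fold:.1f}x")
```

As the prevalence term shows, the same classifier yields a purer MN− pool but a less pure MN+ pool as MN frequency falls, which is the trend described above.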
Thus, VCS can be combined with the VCS MN neural net to generate cell populations enriched for MN− and MN+ cells from a variety of conditions that are highly suitable for discovery, including genetic screening, bulk RNA and proteomic analyses, and single-cell sequencing.

To validate this pipeline for transcriptome analysis, we used RNASeq to define gene expression changes in RPE1 cells after Mps1i incubation and VCS. Cells were incubated with DMSO or Mps1i, and each population was randomly activated with short and long UV pulses (Fig. 4A). Conversion of 1,500-2,000 fields at 20x magnification allowed us to isolate 13k cells after FACS for each photoconverted population in each condition. Isolated populations were processed for RNASeq, and principal component analysis (PCA) revealed that, as expected, cells clustered first by treatment group (Fig. 4B). Analysis of the DMSO samples found minimal differences in gene expression associated with UV pulse duration (Fig. S4A-B, Table 4), consistent with previous results (Hasle et al., 2020). Therefore, data from cells activated at both pulse lengths were pooled in subsequent analyses. MA analysis identified 2,200 differentially expressed genes (DEGs) in Mps1i- versus DMSO-treated cells, 63 of which had absolute fold changes > 1.5 (Fig. 4C, Tables 5-6). We used GSEA to compare our results to previously identified changes in RPE1 cells after mitotic disruption to induce aneuploidy (Tables 7-8) and found substantial overlap between enriched Hallmark pathways in Mps1i cells isolated by VCS and previous studies (He et al., 2018; Santaguida et al., 2017). These included increased expression of inflammation-, EMT-, and p53-associated genes (Fig. 4D). Additional changes observed in VCS-processed samples fell into similar functional categories and are potentially due to differences in sequencing depth. These data confirm that the VCS MN pipeline accurately and sensitively identifies biologically relevant transcriptional changes in aneuploid RPE1 cells.

MN rupture induces few unique transcriptional changes and does not contribute to the initial aneuploidy response

To determine whether MN formation induces a transcriptional response, we treated RFP703/Dendra cells with Mps1i, activated MN+ and MN− cells based on VCS MN analysis results, and isolated differentially activated populations by FACS in duplicate (Fig. 5A). PCA revealed that results clustered largely by replicate and, consistent with this, only a few DEGs were identified, with just two having absolute fold changes greater than 1.5 (Fig. 5B-C, Table 12). Both highly altered DEGs were also strongly upregulated by Mps1i treatment (Table 13). Although our analysis did identify some batch effects, our data strongly suggest that micronucleation does not induce a unique transcriptional response.

We next compared gene expression between micronucleated cells classified as rupture+ and rupture−. Because the overall MN rupture frequency increases over time (Hatch et al., 2013), we first synchronized cells in G1 using a Cdk4/6 inhibitor followed by release into Mps1i (Mammel et al., 2021) (Fig. 5D). This results in a more consistent rate of MN rupture (Fig. S5A).
We modeled how the 4-5 hours required for analysis and activation of one well would alter population purity by manually quantifying the frequency of rupture+ cells in images taken 5 hours apart. Based on the increase in rupture+ cells we observed, we estimated only a small decrease in the purity of the rupture− population (Fig. S5B), with a sustained high level of enrichment for both populations. PCA revealed that results clustered first by condition, indicating a transcriptional difference between rupture+ and rupture− cells (Fig. 5E), and the MA plot identified 106 DEGs, 14 of which had absolute fold changes greater than 1.5-fold (Fig. 5F, Table 14). Of these, 3 were unique to cells with ruptured MN (Table 15). GSEA confirmed that most of the pathways altered in MN+ or rupture+ cells overlapped with those identified in the total aneuploid Mps1i population (Fig. 5G, Tables 16-17).

We next asked whether micronucleation or MN rupture contributed to the transcriptional response to aneuploidy. We first quantified aneuploidy frequency in MN−, rupture−, and rupture+ cells to determine whether transcriptional changes could reflect underlying differences in ploidy. Cells were labeled with probes against chromosomes 1, 11, or 18, all of which frequently missegregate into MN (Fig. S6A), and ruptured MN were identified by loss of H3K27Ac (Mammel et al., 2021; Mohr et al., 2021) (Fig. 6A). Quantification of chromosome foci number found that aneuploidy frequency varied between chromosomes but was consistently higher in MN+ compared to MN− cells. This trend was also observed in rupture+ versus rupture− micronucleated cells (Fig. 6B). Similar results were obtained when transcription loss due to MN rupture was considered (functional aneuploidy) (Fig. S6B). We next compared the fold changes of highly upregulated or downregulated genes in the Mps1i dataset to results from analysis of the subsetted populations of MN+ and rupture+ cells. All replicates were analyzed individually to reduce noise from batch effects in the MN+ results. This analysis identified one gene cluster with increased expression in the subset of Mps1i cells with ruptured MN, which included the genes FILIP1L, CREB5, TNFAIP3, ATF3, and EGR1 (Fig. 6C, Table 18). We attempted to validate increased protein expression of EGR1 and ATF3 in rupture+ cells by immunofluorescence. As a positive control, we quantified an increase in EGR1 and ATF3 nuclear mean intensity after addition of hEGF and induction of DNA damage by doxorubicin, respectively (Fig. S6C-D). Both genes were defined as upregulated by Mps1i and showed increased expression in Mps1i-treated cells compared to controls by immunofluorescence (Fig. 6D-E). However, analysis of rupture+ cells versus other classes of Mps1i-treated cells found no increase in EGR1 expression and only a small increase in ATF3 that was less than that observed between DMSO- and Mps1i-treated cells (Fig. 6D-E). Overall, our results strongly suggest that protein expression changes in MN+ and rupture+ cells are driven mainly by increased aneuploidy rather than by cellular sensing of MN formation and rupture.
Discussion

In this study, we present two machine learning-based modules to identify MN and micronucleated cells from single-channel fluorescence images and combine one with visual cell sorting to profile transcriptional responses to MN formation and rupture. We demonstrate that our MN segmentation pipeline, MNFinder, can robustly classify and segment MN in DNA-labeled images across multiple cell types and fluorescence imaging conditions. Further, we demonstrate that a separate MN cell classifier, VCS MN, rapidly and robustly identifies micronucleated cells in low-resolution images and can be combined with single-cell photoconversion to accurately isolate live cells with no, intact, or ruptured MN from a mixed population. Using this platform, we find that, unexpectedly, neither micronucleation nor rupture triggers gene expression changes beyond those associated with increased aneuploidy. Overall, our study brings a powerful high-throughput optical isolation strategy to MN biology, and we anticipate that it will enable a wide range of new investigations.

VCS MN isolation has several advantages over current methods for identifying the mechanisms and consequences of MN formation and rupture. First, it can be used on any adherent cell line in the absence of genetic perturbations. This overcomes challenges involved in using lamin B2 overexpression to inhibit MN rupture (Hatch et al., 2013), which is limited to specific cell lines and MN types (Mammel et al., 2021; Xia et al., 2019) and is complicated by additional changes in mitosis and gene expression (Agustinus et al., 2023; Han et al., 2020; Kuga et al., 2014; Liwag et al., 2024). In addition, it overcomes the cell line and MN content restrictions imposed by systems that induce missegregation of single chromosomes or chromosome arms (Lin et al., 2023; Ly et al., 2016, 2019; Shoshani et al., 2021; Trivedi et al., 2023) by enabling analysis of all missegregation events in any genetic background. Unlike live single-cell assays, it is highly scalable and eliminates the selection pressures and restrictions added by clonal expansion (Mohr et al., 2021; Papathanasiou et al., 2023; Zhang et al., 2015). Importantly, VCS MN isolation captures whole live cells, overcoming limitations associated with MN purification (Agustinus et al., 2023; Klaasen et al., 2022; Mohr et al., 2021; Papathanasiou et al., 2023; Tang et al., 2022), and permits time-resolved, population-level analyses of cellular changes and MN chromatin. VCS has several advantages over similar optical isolation and in situ sequencing techniques: it can be adapted to any wide-field microscope by adding a digital micromirror device to existing equipment, and it can be performed on attached cells, which is critical to achieve the nuclear and cytoplasmic spreading required for accurate MN identification (Li et al., 2015).
VCS MN isolation does have limitations. Due to Dendra2 signal decay and ongoing MN rupture, only about 200,000 cells can be analyzed and targeted per experiment. For optical pooled screening or analysis of rare cells, this limits the number of genes or the depth of analysis that can be achieved. Cell fixation would overcome this issue, and efforts to improve sample extraction under these conditions are ongoing (Kanfer et al., 2021; Yan et al., 2021). VCS MN isolation also requires introduction of at least one photoconvertible or photoactivatable protein to mark the cells and a second fluorescent protein to discriminate ruptured MN. This limits the channels available for additional phenotype identification. However, recent advances in cell structure prediction (Johnson et al., 2023) may vastly expand the phenotypic information available from limited cell labels. VCS MN segmentation and MNFinder precision vary across cell types, and nuclear morphologies that diverge widely from the training set could significantly impair performance. Additional training of the neural net should improve this metric, but different algorithm architectures will likely be required to identify MN in signal-rich environments such as organoids or tissue samples.

We observed upregulation of several pathways, including inflammation, epithelial-to-mesenchymal transition, and p53, in Mps1i-treated RPE1 cells that were previously identified as enriched in similar studies (He et al., 2019; Santaguida et al., 2017). These results demonstrate the suitability of our platform for detecting biologically relevant transcript changes in aneuploid cells. However, our analysis of micronucleated cells and cells with ruptured MN found only a handful of genes that were uniquely upregulated by MN rupture and no changes indicating a contribution of either condition to the aneuploidy response. Thus, in line with previous results (Santaguida et al., 2017), our findings suggest that MN and MN rupture are not sensed by the cell beyond their contribution to aneuploidy through dysregulated transcription and limited replication of the sequestered chromatin (Hatch et al., 2013; Papathanasiou et al., 2023; Zhang et al., 2015). Of significant interest is whether similar results will be obtained in cells with more robust cGAS/STING signaling. There is a discrepancy about whether cGAS binding to ruptured MN is sufficient to initiate signaling, and about how MN chromatin content may mediate this (Bakhoum et al., 2018; Chen et al., 2020; Dou et al., 2017; Harding et al., 2017; MacDonald et al., 2023; Mackenzie et al., 2017; Mohr et al., 2021; Willan et al., 2019), that VCS MN isolation is ideally suited to resolve.
VCS MN isolation is a highly flexible platform that enables powerful new approaches to fundamental questions in MN biology. VCS MN isolation can be used for optical pooled screening, an unbiased method that would be ideal for identifying mechanisms of MN rupture, genetic changes that promote proliferation of micronucleated cells, and, in combination with dCas9-based chromosome labeling (Chen et al., 2013; Maass et al., 2018; Tanenbaum et al., 2014), mechanisms that enrich specific chromosomes in MN and could drive cancer-specific aneuploidies (Ben-David and Amon, 2020). Recovering live populations of cells with intact and ruptured MN will also enable precise analysis of post-mitotic genetic and functional changes caused by these conditions. For instance, these cell populations can be analyzed for acquisition of disease-associated behaviors, including proliferation and migration, and used in in vivo tumorigenesis and metastasis assays to directly assess their contribution to cancer development. In summary, automated MN segmentation and VCS MN isolation are poised to provide critical insights into a wide range of questions about how MN form, rupture, and cause disease pathologies.

Methods

hTERT RPE-1 NLS-3xDendra2/H2B-emiRFP703 and U2OS NLS-3xDendra2/H2B-emiRFP703 cell lines were produced through serial transduction of lentiviruses. RPE1 and U2OS cells were validated by STR sequencing. Lentivirus was produced in HEK293T cells using standard protocols, and filtered medium was added with polybrene (Sigma, #H9268) for transduction. Cells were selected with 10 µg/mL blasticidin (Invivogen) and 500 µg/mL active G418 (Gibco) and FACS sorted on an Aria II sorter (BD Biosciences) for the top 20% brightest double-positive cells. hTERT RPE-1 NLS-3xDendra2-P2A-H2B-miRFP703 cells were created through viral transduction and FACS sorting for the brightest double-positive population. HeLa-H2B cells were a gift from Dr. Daphne Avgousti (Fred Hutchinson Cancer Center) and were originally acquired from Millipore (SCC117). HFF cells were a gift from Dr. Denise Galloway (Fred Hutchinson Cancer Center) (Kiyono et al., 1998).

Fixed-cell training and validation images were acquired with a Leica DMi8 laser scanning confocal microscope using the Leica Application Suite (LAS X) software and a 40x/1.15NA Oil APO CS objective (Leica), or on a Leica DMi8 microscope outfitted with a Yokogawa CSU spinning disk unit, Andor Borealis illumination, an ASI automated stage with Piezo Z, an environmental chamber, and Automatic Focus, using a 40x/1.3NA Oil PLAN APO objective. Images on the spinning disk microscope were captured using an iXon Ultra 888 EMCCD camera and MetaMorph software (v7.10.4).
Micronucleus segmentation and cell classifiers

VCS MN: The neural net was created using the FastAI 1.0 library in Python as a UNet with Torchvision's ResNet18 pre-trained model as its base architecture (Ronneberger et al., 2015). Training for MN recognition was performed using ~2,000 images of individual cells as training data, a further 164 for validation, and 177 for testing. Training images were of RPE-1 NLS-3xDendra2-P2A-H2B-miRFP703 cells after incubation in 0.5 µM reversine (an Mps1 inhibitor; EMD Millipore) or DMSO for 24 h and were taken with a 20x widefield objective on the VCS microscope. Nuclei were segmented on H2B channel images using the Deep Retina neural net (Caicedo et al., 2019), and 48x48 px image crops were generated centered on each nucleus. For training, MN pixels in cropped images were manually annotated. MN associated with chromatin bridges were ignored to ensure that labeled MN were discrete nuclear compartments.

The VCS MN classifier takes as input a 2-channel 20x image. It applies the Deep Retina neural net to the H2B channel to segment nuclei, discards any touching the edge of the image, and generates a 48x48 px crop centered on each nucleus. Each crop is processed with Sobel edge detection and linearly enlarged to 96x96 px. To accommodate the 3-channel ResNet18 architecture, each crop is expanded to three channels: the H2B channel, a duplicate of the H2B channel, and the results of Sobel edge detection. Identified MN are mapped back to the full image and assigned to the closest segmented nucleus. MN more than 40 px away from a nucleus are discarded.

Once MN are assigned to cells, the classifier calculates the maximum Dendra2 MN/nucleus intensity ratio for each MN. MN with a ratio below 0.16 are classified as ruptured. This threshold was identified using the JRip classifier in Weka 3.8.6 to define the optimal cutoff separating manually annotated intact and ruptured MN (Cohen, 1995; Witten et al., 2017). Nuclear segments are classified as MN+ or MN− cells based on the presence or absence of an associated MN segment. MN+ cells are then further classified into those with only intact MN (rupture−) and those with at least one ruptured MN (rupture+).

For analysis of MN recall and PPV, MN were segmented using PixelStudio 4.5 on an iPad (Apple). Recall was calculated as the proportion of all MN that overlapped with a predicted segment. Positive predictive value was calculated as the proportion of all predicted segments that overlapped with an MN. Mean intersection over union (mIoU) was calculated per object by quantifying the overlap between groups of true-positive pixels and their respective ground truths.
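The sketch below illustrates these object-level metrics, assuming labeled ground-truth and predicted MN masks for one image; an object counts as recalled if any predicted pixel overlaps it, and mIoU is averaged over the recalled objects. This is our simplified reading of the definitions above, not the published analysis code.

```python
# Object-level recall, PPV, and mIoU from ground-truth and predicted MN
# masks (integer label images of equal shape, 0 = background). A hedged
# sketch of the definitions in the text, not the published implementation.
import numpy as np

def object_metrics(gt: np.ndarray, pred: np.ndarray):
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]

    # Recall: fraction of ground-truth MN touched by any predicted segment.
    recalled = [i for i in gt_ids if np.any(pred[gt == i] > 0)]
    recall = len(recalled) / len(gt_ids) if gt_ids else float("nan")

    # PPV: fraction of predicted segments touching any ground-truth MN.
    hits = [j for j in pred_ids if np.any(gt[pred == j] > 0)]
    ppv = len(hits) / len(pred_ids) if pred_ids else float("nan")

    # mIoU over true positives: each recalled ground-truth MN is paired
    # with the union of the predicted segments that overlap it.
    ious = []
    for i in recalled:
        gt_mask = gt == i
        overlapping = np.unique(pred[gt_mask])
        pred_mask = np.isin(pred, overlapping[overlapping != 0])
        ious.append((gt_mask & pred_mask).sum() / (gt_mask | pred_mask).sum())
    miou = float(np.mean(ious)) if ious else float("nan")
    return recall, ppv, miou
```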
For analysis of U2OS cells, the VCS MN segmentation module was retrained on a collection of images of RPE1, U2OS, HFF, and HeLa cells after incubation in 100 nM BAY1217389 or 0.5 µM reversine for 24 h. Live images of RPE1 and U2OS NLS-3xDendra2/H2B-emiRFP703 cells were acquired on the VCS microscope at 20x. Images of fixed cells were taken on either the LSM or spinning disk confocal microscope at 40x after fixation in 4% paraformaldehyde (Electron Microscopy Sciences, #15710) for 5 min at room temperature. Cells were labeled with DAPI as indicated. ~2,300 crops of U2OS NLS-3xDendra2/H2B-emiRFP703 cells taken on the VCS microscope at 20x were used for training, with another 233 held back for validation and 910 for testing. Three images of Hoechst-labeled U2OS cells taken at 20x on a widefield microscope at 16-bit depth were downloaded from the Broad Bioimage Benchmark Collection (BBBC039v1; Bray et al., 2016; Caicedo et al., 2019b; Ljosa et al., 2012) and linearly scaled by 0.5. Crops were generated centered on manually annotated cell nuclei and fed to VCS MN to determine PPV, recall, and mIoU for this data set.

MNFinder: The MNFinder neural nets were created using TensorFlow 2.0 without transfer learning. Training was performed using 128x128 px crops generated from the same training and validation data used for retraining VCS MN.

For nucleus/MN segmentation (semantic segmentation), predictions are taken from two UNet-based neural nets, with MN predictions fed into a third, ensembling UNet. All UNets are trained independently but are otherwise identical, save for the incorporation of multiscale downsampling into one of the input UNets. For cell segmentation (instance segmentation), a UNet architecture incorporating 3 decoder pathways is used to predict distance maps, proximity maps, and foreground pixels. The distance and proximity map decoders incorporate features from a UNet3+ design: specifically, additional skip connections from multiple layers of the encoder and decoder pathways and deep supervision during training (Huang et al., 2020). Training data were generated from annotated nucleus and MN images by generating a concave hull grouping each nucleus and its associated MN using the cdBoundary package in Python (Duckham et al., 2008). This hull was transformed into a distance map by calculating the Euclidean distance transform (EDT), with each pixel value encoding the shortest distance between that pixel and the background. Proximity maps were generated by setting all pixels as foreground except those belonging to other hulls, applying an EDT masked by the cell's boundaries, and raising the result to the 4th power to sharpen edges. Both maps are scaled from 0 to 1 for each cell.

MNFinder input images taken at 20x are cropped using a 128x128 px sliding window, advancing the window by 96 px horizontally and vertically to oversample the image. 40x images are scaled down by a factor of 2 prior to input. Crops are expanded into 2-channel images, with the second channel the result of Sobel edge detection. These images are processed by the neural nets, post-processed as described, and reassembled by linear blending into a complete field. Recall and positive predictive values were calculated in the same way as for VCS MN classifier validation.
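As an illustration of the target-map construction described above, the following sketch builds a normalized distance map and a sharpened proximity map for one cell hull using SciPy's Euclidean distance transform. The helper name is ours, the hull masks are assumed to have been produced upstream (e.g., from concave hulls of annotated nuclei and MN), and details such as hull generation are omitted.

```python
# Hedged sketch of distance / proximity training-target generation for one
# cell, following the recipe above: EDT inside the hull for the distance
# map; EDT of everything except other cells, masked to this cell and raised
# to the 4th power, for the proximity map; both scaled 0-1 per cell.
# `cell_hull` and `other_hulls` are boolean masks assumed from upstream.
import numpy as np
from scipy import ndimage

def cell_target_maps(cell_hull: np.ndarray, other_hulls: np.ndarray):
    # Distance map: shortest distance from each hull pixel to background.
    dist = ndimage.distance_transform_edt(cell_hull)
    if dist.max() > 0:
        dist /= dist.max()                       # scale 0-1 per cell

    # Proximity map: treat all pixels except *other* cells as foreground,
    # take the EDT, mask to this cell, then sharpen edges with a 4th power.
    prox = ndimage.distance_transform_edt(~other_hulls)
    prox = np.where(cell_hull, prox, 0.0)
    if prox.max() > 0:
        prox = (prox / prox.max()) ** 4
    return dist, prox
```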
MNFinder was validated on images of RPE1, U2OS, HFF, and HeLa cells after incubation in 100 nM BAY1217389 or 0.5 µM reversine for 24 h. Live images of RPE1 and U2OS NLS-3xDendra2/H2B-emiRFP703 cells were acquired on the VCS microscope at 20x. Images of fixed cells were taken on either the LSM or spinning disk confocal microscope at 40x after fixation in 4% paraformaldehyde (Electron Microscopy Sciences, #15710) for 5 min at room temperature. PPV and recall for MN segmentation were calculated for the individual input UNets and the ensemble UNet.

Outline of VCS MN cell isolation experiments

Cells for VCS were plated onto 6-well, glass-bottom, black-walled plates at a density of 50,000-225,000 cells per well 1-2 days before activation. An extra unactivated well was plated as a control. One day before imaging, 100 nM Mps1i was added to the medium. One hour prior to imaging, cells were washed once in PBS and the medium changed to phenol red-free medium (GIBCO) containing 10 µM Cdk1i. The plate was transferred to the microscope, the plate center and micromirror device were aligned, and the appropriate journals (see (Hasle et al., 2020)) were initiated for VCS activation. Imaging conditions were optimized for each experiment. Images were acquired using MetaMorph and analyzed on a dedicated linked computer. 1-bit masks of MN+ nuclei and MN− nuclei were transmitted back to MetaMorph, which directed UV pulses at the segmented nuclei. Activation used either a 200 ms or an 800 ms pulse of the 405 nm laser. After imaging, the initial 5 positions and last 5 positions were reimaged for quality control, as well as 5 random positions in the unactivated well. Classifier predictions were compared to the first 3 and last 3 images from each VCS experiment, each manually annotated prior to downstream analysis, including RNA extraction.
Activated and unactivated cells were trypsinized, suspended in 2% FBS, and sorted using a FACS Aria II (BD Biosciences). Compensation for PE-blue excitation of unconverted Dendra2 was performed on the unactivated cells. Dendra2 activation-based sorting gates were defined on single cells positive for both Dendra2 and emiRFP703 using the PE-Blue-A/FITC-A ratio. Cells were sorted into 2% FBS, then pelleted and either flash frozen on dry ice or replated onto poly-L-lysine coated coverslips.

CellTrace: Activation and sorting accuracy were analyzed for RPE1 RFP703/Dendra2 cells by incubating cells in CellTrace far red (ThermoFisher) for 10 min at 37 ºC, trypsinizing and pelleting cells, mixing 1:1 with unlabeled cells, and plating. A classifier segmented nuclei with the Deep Retina neural net on the GFP channel and measured the mean far-red intensity in the nucleus (Hasle et al., 2020). The threshold intensity for activation was experimentally determined. Cells were sorted by FACS for Dendra2 ratio and CellTrace intensity, using compensation to eliminate emiRFP703 spectral overlap, and then reanalyzed on the same machine.

Mps1i+/- isolation: Cells incubated in Mps1i or DMSO were imaged and activated using a random classifier. 1-bit masks of nuclei generated using the Deep Retina neural net were randomly assigned to receive 800 ms or 200 ms pulses. At least 13k cells were collected per sorting bin, and samples were pelleted, flash frozen, and stored at -80 ˚C.

Micronucleus+/- isolation: Two wells were imaged sequentially per experiment, with the activation time for MN+ and MN- nuclei reversed between wells.

Rupture+/- isolation: Cells were plated 2 days before imaging in medium containing 1 µM Cdk4/6i (PD-0332991, Sigma). Twenty-four hours later, cells were rinsed 3x with PBS and the medium replaced with 100 nM BAY. Only 1 well was imaged per experiment, with rupture- cells receiving 800 ms and rupture+ cells receiving 200 ms pulses.

RNA isolation and sequencing
We extracted RNA from frozen cell pellets using the RNAqueous micro kit (ThermoFisher), according to the manufacturer's protocol. Residual DNA was removed by DNase I treatment, and RNA was further purified by glycogen precipitation (RNA-grade glycogen; ThermoFisher) and resuspension in ultra-pure H2O heated to 65 ºC. RNA quality and concentration were checked by the Genomics Core at the Fred Hutchinson Cancer Center with an Agilent 4200 Tapestation HighSense RNA assay, and only samples with RIN scores above 8 and 28S/18S values above 2 were further processed. cDNA synthesis and library preparation were performed by the Genomics Core using the SMARTv4 for ultra-low RNA input and Nextera XT kits (Takara). Sequencing was performed on an Illumina NextSeq 2000 sequencing system with paired-end, 50 bp reads.

RNAseq and gene-set enrichment analysis
We quantified transcripts with Salmon to map reads against the UCSC hg38 assembly at http://refgenomes.databio.org (digest: 2230c535660fb4774114bfa966a62f823fdb6d21acf138d4), using bootstrapped abundance estimates and corrections for GC bias (Patro et al., 2017); a sketch of this invocation appears below. For comparisons with data from He et al. and Santaguida et al., the original FASTA files deposited at the Sequence Read Archive were downloaded with NCBI's SRA Toolkit and quantified with Salmon (He et al., 2019; Santaguida et al., 2017). No GC-bias correction was applied, as only single-end reads were available.
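The Salmon step can be sketched as follows; the wrapper function, paths, and bootstrap count are placeholders rather than the study's actual command line, but the flags shown (`--gcBias`, `--numBootstraps`) correspond to the bootstrapped, GC-corrected quantification described above.

```python
import subprocess

def run_salmon(index_dir: str, fq1: str, fq2: str, out_dir: str,
               bootstraps: int = 30):
    """Hedged sketch of a Salmon quantification run against an hg38 index.

    The bootstrap count and paths are illustrative assumptions, not the
    settings used in the study.
    """
    cmd = ["salmon", "quant",
           "-i", index_dir,              # hg38 transcriptome index
           "-l", "A",                    # auto-detect library type
           "-1", fq1, "-2", fq2,         # paired-end 50 bp reads
           "--gcBias",                   # GC-bias correction
           "--numBootstraps", str(bootstraps),
           "-o", out_dir]
    subprocess.run(cmd, check=True)
```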
Transcript abundances were processed to find differentially expressed genes (DEGs) with the R package DESeq2 version 3.16 in R 4.2.1, RStudio 2022.07.2 build 576, and Sublime Text build 4143. Files were imported into DESeq2 with the R package tximeta (Love et al., 2020, 2014), estimated transcript counts were summarized to gene level, and low-abundance genes were filtered by keeping only those genes with estimated counts ≥ 700 in at least 2 samples. DEGs were identified using a likelihood ratio test comparing the full model with one with the condition of interest dropped, at an FDR of 0.05. Log-fold changes were corrected using empirical Bayes adaptive shrinkage (Stephens, 2017). These operations were performed before pseudogenes were filtered from the dataset.

Live-cell imaging for MN rupture frequency analysis
RPE1 NLS-3xDendra2/H2B-emiRFP703 cells were plated 2 days before imaging and treated for 24 hours with either 1 µM Cdk4/6i or DMSO. One day before imaging, cells were rinsed and incubated in 100 nM BAY1217389. Nineteen hours later, the medium was exchanged for Cdk1i medium, 5 positions were imaged in each well, and rupture- cells were activated. These positions and the surrounding area were imaged every hour for 11 hours, and the status of photoconverted cells was manually recorded.

DNA FISH
RPE1 cells plated onto poly-L-lysine coverslips were fixed in -20 ºC 100% methanol for 10 min, rehydrated for 10 min in 1x PBS, and processed for IF. Cells were then refixed in 4% PFA for 5 min at RT and incubated in 2x SSC (Sigma) for 2 x 5 min at RT. Cells were permeabilized in 0.2 M HCl (Sigma), 0.7% Triton X-100 in H2O for 15 min at RT, washed in 2x SSC, and incubated for 1 h at RT in 50% formamide (Millipore). Cells were rewashed in 2x SSC, inverted onto chr 1, 11, or 18 XCE probes (MetaSystems), and the coverslips sealed with rubber cement. Probes were hybridized at 74 ºC for 3 min and then incubated for 4 hours (chr 18) or overnight (chrs 1 and 11) at 37 ºC. After hybridization, coverslips were washed in 0.4x SSC at 74 ºC for 5 min, then in 2x SSC with 0.1% Tween-20 (Fisher) for 2 x 5 min at RT. DNA was labeled by incubation in 1 µg/mL DAPI for 5 min at RT, and coverslips were mounted in VectaShield. Images were acquired as 0.45 µm step z-stacks through the cell on the confocal LSM with a 40x objective. Cells that had more or fewer than two FISH foci were classified as aneuploid for that chromosome.

Image analysis
Dendra2 ratio stability: Nuclei were segmented on images taken at the start and end of an Mps1i+/- VCS experiment by thresholding on the GFP channel, measuring the mean intensity of GFP and RFP, and calculating the RFP:GFP ratio per nucleus for each image group. MN+/- sorting accuracy: Cells replated and fixed after sorting were imaged on the LSM confocal at 40x with 0.45 µm z-stacks through the cell. Image names were randomized prior to quantification of MN+ cells. ATF3 and EGR1 intensity: Images were acquired as 0.45 µm step z-stacks through the cell on the confocal LSM with a 40x objective. Images were corrected for illumination inhomogeneity by dividing by a dark image and background subtracted using a 60 px radius rolling ball in FIJI (Schindelin et al., 2012) (v2.9.0). A single in-focus section of each nucleus was selected, and nucleus masks were generated by thresholding on RFP-NLS. The mean intensity of ATF3 or EGR1 was calculated for each nucleus and normalized for each replicate by scaling to the median value for the DMSO control. Statistics were calculated on the raw values.
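A minimal Python analogue of the ATF3/EGR1 quantification just described (the original analysis used FIJI): illumination correction against a dark image, rolling-ball background subtraction, and per-nucleus mean intensities scaled to the DMSO median. Function and argument names are hypothetical.

```python
import numpy as np
from skimage.restoration import rolling_ball
from skimage.measure import regionprops

def nuclear_mean_intensities(img, dark_img, nuc_labels, dmso_median=None):
    """Hypothetical sketch of the per-nucleus intensity measurement.

    img:        single in-focus section of the stain of interest.
    dark_img:   dark image used to correct illumination inhomogeneity.
    nuc_labels: integer label mask from thresholding on RFP-NLS.
    """
    # Flat-field correction: divide by the dark image.
    corrected = img.astype(float) / np.maximum(dark_img.astype(float), 1e-6)
    # 60 px radius rolling-ball background subtraction.
    background = rolling_ball(corrected, radius=60)
    corrected = corrected - background
    # Mean intensity per nucleus mask.
    means = np.array([r.mean_intensity
                      for r in regionprops(nuc_labels,
                                           intensity_image=corrected)])
    if dmso_median is not None:
        # Scale to the DMSO control median for this replicate.
        means = means / dmso_median
    return means
```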
Statistical analyses
Shorthand p-values are as follows: ns: p ≥ 0.05; *: p < 0.05; **: p < 0.01; ***: p < 0.001; ****: p < 0.0001.

Generalized estimating equations (GEE) were used to determine statistical differences for nominal data with multiple variables, using binomial distributions and a logit link function (Halekoh et al., 2006). For Fig. 1C-D, data were assessed using the formula: (# recalled, # missed) ~ MN status, where MN status is whether the MN is ruptured or intact. For Fig. 5E, we also used a binomial distribution and a logit link function. For Fig. 5I-J and S5F-G, we used the formula: (# aneuploid, # normal) ~ Status × Chr, where Status is whether the cell was MN+/- (Figs. 5I, S5F) or Rupture+/- (Figs. 5J, S5G) and Chr is chromosome identity. p-values for each individual property were calculated using the drop1 function in R. In Fig. 6D-E and S6C-D, we used a gamma distribution and the formula: mean intensity ~ Population. Statistical significance for differences between single nominal variables in other figures was assessed by Barnard's exact test. The predicted change to classifier PPV in Fig. 5E was determined by reducing the true positive rate in the rupture- population by the difference in mean rupture frequencies between the beginning and end of the experiments and increasing the true positive rate in the rupture+ population by the same.
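The GEE analysis was run in R (geepack, per the Halekoh et al. citation); the following statsmodels sketch is a rough Python analogue with hypothetical column names, modeling per-MN recall as a binomial outcome with a logit link and replicate-level grouping.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def gee_recall_vs_status(df: pd.DataFrame):
    """Hypothetical Python analogue of the geepack analysis.

    df: one row per MN with a binary 'recalled' outcome, an 'mn_status'
    covariate (ruptured vs intact), and a 'replicate' grouping column
    used for the working correlation structure.
    """
    model = smf.gee("recalled ~ mn_status",
                    groups="replicate",
                    data=df,
                    family=sm.families.Binomial(),        # logit link default
                    cov_struct=sm.cov_struct.Exchangeable())
    return model.fit().summary()
```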
Figure 2: MNFinder module robustly segments MN across cell types and imaging conditions. (A) Representative image showing undersegmentation by the VCS MN neural net. (B) Overview of the MNFinder module for classifying and segmenting MN and nuclei. Images are tiled by a sliding window and processed by 2 neural nets in parallel: one for classifying regions as nuclei or MN (Nuc/MN) and one for classifying cells. Nuc/MN classifier results are post-processed to correct MN that were misclassified as small nuclei and to expand MN masks. Cell classifier gradient map outputs are used to define cells through watershed-based post-processing. Nuc/MN results are then integrated with cell results to produce final labels of individual nuclei, cells, and MN. Image crops are reassembled onto the final image by linear blending. (C-D) Example images and MN pixel predictions using MNFinder on multiple cell types (RPE1 H2B-emiRFP703, U2OS, HeLa H2B-GFP, and HFF), chromatin labels (DAPI, Hoechst, H2B-FP), and magnifications (20x, 40x). MN recall, PPV, and mean intersection-over-union (mIoU) per object were quantified across conditions. Dotted line = performance of the VCS neural net on RPE1 H2B-emiRFP703 live 20x images. Performance is similar across conditions except H2B-GFP in fixed HeLa cells (teal squares). N = 1. n (on graph) = cells.

Figure 3: VCS can isolate RPE1 RFP703/Dendra micronucleated cells. (A) Overview of the VCS protocol. Cells are plated in multi-well plates; during imaging, cellular phenotypes are quantified, VCS MN is deployed, and specified classes are photoconverted for either 200 or 800 ms, yielding two different ratios of red:green fluorescence. These differences are quantified by FACS and gated for cell sorting. Graphic created with BioRender.com. (B) Quantification of nuclear red:green ratios from images of the same field taken 0, 4, and 8 h after photoconversion, displayed as histograms. Representative images from each time point pseudocolored by log10 Dendra red:green ratio (below). N = 1, n = 82, 353, 285, 313. (C) Experimental design of MN cell isolation validation. Classifier PPV was calculated on images acquired during activation, and the frequency of MN- or MN+ cells was manually quantified in cells plated and fixed post-sorting. Pre-FACS: N = 2, n = 328, 186; post-FACS: N = 1, n = 338, 353.

Figure 4: VCS pipeline identifies Mps1i transcriptional response. (A) Timeline of experiment. (B) PCA plot showing clustering of Mps1i-treated and DMSO-treated cells by treatment (major) and by replicate (minor). Each experimental replicate represents 2 technical replicates. (C) MA plot. Differentially expressed genes (FDR-adjusted p-value < 0.05) are in green. Gray lines represent 1.5-fold change in expression. (D) Heatmap of Hallmark pathway enrichment between VCS data and data from (Santaguida et al., 2017) and (He et al., 2019) analyses of RPE1 cells after induction of chromosome missegregation. Hallmark pathways (bottom) were clustered based on manually annotated categories (top).

Figure 5: Micronucleation and rupture transcriptional changes largely overlap with aneuploidy response. (A) Timeline of experiment for MN+ and MN- cell isolation from RFP703/Dendra cells. (B) PCA plot showing clustering of MN+ and MN- cells by replicate (major) and condition (minor). (C) MA plot. Of identified DEGs, only 2 have fold-changes larger than 1.5. Both (TNFAIP3 and EGR1) are also significantly upregulated in Mps1i-treated cells. (D) Timeline of experiment for rupture+ and rupture- cell isolation. (E) PCA plot showing clustering of intact MN and ruptured MN cells by condition and replicate. (F) MA plot. Three highly differentially expressed genes unique to this dataset are indicated on the plot. (G) Heatmap of Hallmark pathway enrichment in datasets of DMSO vs Mps1i, Mps1i-treated cells with and without MN, and synchronized, Mps1i-treated, MN+ cells with and without MN rupture. Pathways are grouped based on manual annotation (left) and show substantial overlap between categories enriched in Mps1i+ cells versus the MN+ and rupture+ subsets.
Figure S2: Details of UNet architectures and output post-processing in MNFinder module. (A) The Nuc/MN ensemble classifier takes a single-channel input image of chromatin and feeds it into two parallel, attention-gated UNets, one of which also has multiscale downsamplers (yellow). In these blocks, input is fed into three parallel, differently sized convolution operations that are then concatenated. The nucleus weights from the basic UNet are retained, and both sets of MN weights are fed to a third UNet for ensembling to produce the final predictions. (B) Results from the Nuc/MN UNet are further processed to improve accuracy. To limit misclassification of large MN as small nuclei, nuclei under a user-defined area threshold are reclassified as MN. To limit MN undersegmentation, MN pixel groups are expanded by transforming each into its convex hull. (C) Example of how a "cell" is generated from existing training data by defining a concave hull that groups a nucleus and any associated MN. Distance and proximity maps are used to define cell boundaries and are derived from convex hulls for training as described in Methods. (D) Diagram of the triple-decoder cell segmenter UNet. Two of the decoders have a UNet3+-like architecture with multiple skip connections and deep supervision during training. Feature depths are kept constant, and most concatenation/max-pooling operations are replaced with addition to reduce training overhead. One decoder generates distance maps of a concave hull containing each nucleus and any associated MN (a "cell"), and the other generates a proximity map of each cell's distance to any other. The output of a third decoder, which uses a standard UNet with attention gates to segment foreground pixels (nuclei or MN), is used as input into every level of the distance- and proximity-map decoders via an integration block (magenta). (E) Resulting distance and proximity maps from the cell segmenter UNet are combined to generate seeds for watershed segmentation. To correct for oversegmentation, only labels with boundaries that intersect a skeletonized proximity map or border background pixels are retained.

Figure S3: Controls for VCS MN isolation experiments. (A) Outline of the RFP703/Dendra VCS validation experiment using CellTrace labeling as the activation trigger. Cells were incubated with CellTrace far-red and mixed with unlabeled cells at a 1:1 ratio. Nuclei were classified based on CellTrace fluorescence intensity and converted with either an 800 ms (CellTrace+) or 200 ms (CellTrace-) UV pulse. The well was only partially converted prior to FACS analysis and sorting. A representative image of the mixed population prior to photoconversion is shown. Scale bar = 10 µm. (B) FACS plot of Dendra2 red:green ratio versus CellTrace fluorescence. Colored bars represent gates. Values are the percentages of negative and positive CellTrace cells present in the 200 ms and 800 ms gates, respectively. (C) Histogram of CellTrace fluorescence in cells sorted by Dendra2 ratio after re-analysis by FACS. (D) Predicted classifier PPV (population purity) for untreated, low MN frequency U2OS cells (U2OS Broad). We observe a lower but still substantial enrichment of micronucleated cells in the MN+ population compared to a high MN frequency population (Fig. 3C). N = 1, n = 17 cells.

Figure S4: Differential UV pulses do not induce substantial transcriptional changes. (A) PCA plot of cells treated with DMSO and exposed to 800 ms or 200 ms UV.
(B) MA plot of the data in (A). Only 6 differentially expressed genes were identified in cells exposed to 800 ms vs 200 ms UV, and only 3 were downregulated over 1.5-fold: DDX39B, FASN, RGPD6.

Figure S5: Cell synchronization reduces loss of intact MN cell population purity. (A) Change in rupture frequency over time in asynchronous and synchronized cells treated with Cdk1i. Other = mitotic, MN-, or Dendra2- cells. N = 1, n = ~200 cells per time point. (B) Change in MN rupture frequency between the start and end of a VCS experiment (4 h) and predicted change in classifier PPV due to ongoing rupture of intact MN, based on values in (A).
2023-05-09T13:12:58.295Z
2023-05-05T00:00:00.000
{ "year": 2024, "sha1": "ca225443b73ad2e3acea66aceb07dcb42030fc3c", "oa_license": "CCBYNCND", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/06/07/2023.05.04.539483.full.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "77cc63863ab30715a1c821ef638cd49e42ab67a0", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
62893909
pes2o/s2orc
v3-fos-license
Robotic Astronomy and the BOOTES Network of Robotic Telescopes
The Burst Observer and Optical Transient Exploring System (BOOTES) started in 1998 as a Spanish-Czech collaboration project devoted to the study of optical emissions from gamma ray bursts (GRBs) that occur in the Universe. The first two BOOTES stations were located in Spain, 240 km apart, and included medium-size robotic telescopes with CCD cameras at the Cassegrain focus as well as all-sky cameras. The first observing station (BOOTES-1) is located at ESAt (INTA-CEDEA) in Mazagón (Huelva), and first light was obtained in July 1998. The second observing station (BOOTES-2) is located at La Mayora (CSIC) in Málaga and has been operating fully since July 2001. In 2009 BOOTES expanded abroad, with the third station (BOOTES-3) being installed in Blenheim (South Island, New Zealand) as the result of a collaboration project with several institutions from the southern hemisphere. The fourth station (BOOTES-4) is on its way, to be deployed in 2011.

Introduction
Robotic astronomical observatories were first developed in the 1960s, after electromechanical interfaces to computers became common at observatories. Nowadays there are more than 100 spread worldwide (Fig. 1). See [1] for an overview. Here are some important definitions in the field of robotic astronomical observatories:

Automated scheduled telescope (robot): A telescope that performs pre-programmed observations without the immediate help of a remote observer (e.g. avoiding an astronomer moving the mount by hand).

Remotely operated (remote) telescope: A telescope system that performs remote observations following the requests of an observer.

Autonomous robot (observatory): A telescope that performs various remote observations and is able to adapt itself to changes during task execution without any kind of human assistance (e.g. weather monitoring; the system must not endanger humans!).

BOOTES started in 1998 as a Spanish-Czech collaboration project [2] devoted to the study of optical emissions from gamma ray bursts (GRBs) that occur in the Universe. Nowadays it consists of 4 stations, three of them hosting 60 cm fast-slewing robotic telescopes aimed at contributing significantly to various scientific fields.

2 The BOOTES network of robotic telescopes

BOOTES-1
The first robotic astronomical observatory in Spain was placed in INTA's Estación de Sondeos Atmosféricos (ESAt) at the Centro de Experimentación de El Arenosillo in Mazagón (Moguer, Huelva). It has an extraordinary sky close to the Atlantic Ocean, with more than 300 clear nights a year, limited to the east by the Doñana National Park. For the first two years after 1998, BOOTES provided rapid follow-up observations for more than 40 GRBs detected by BATSE aboard the CGRO, until it was turned off in May 2000. It consisted of a 0.2 m Schmidt-Cassegrain reflector telescope (at f/10) with a CCD camera at the Cassegrain focus, providing a 40' × 30' FOV, and a couple of CCD cameras attached to the main optical tube providing a 16° × 11° FOV.

Since 2001, with the new location of the existing enclosure 100 m away from the original site, and with the addition of a second enclosure (dubbed BOOTES-1B to distinguish it from BOOTES-1A, the old one), various setups have been accomplished, the current one, as of summer 2010, being as follows:
• A 0.

BOOTES-2
The BOOTES-2 robotic astronomical station was officially opened on 7 Nov 2001 and is located at CSIC's Estación Experimental de La Mayora in Algarrobo Costa (Málaga). It is limited to the south by the Mediterranean Sea and to the north by the Tejeda-Almijara Mountains Nature Park, with Maroma peak (2,068 m a.s.l.). Unlike the two domes of the BOOTES-1 station 200 km away, its dome is controlled by a hydraulic opening system operated automatically according to the existing weather conditions. BOOTES-2 at first hosted a 0.3 m Schmidt-Cassegrain reflector telescope (f/10), which was replaced in 2009 by a 0.6 m Ritchey-Chrétien fast-slewing telescope, officially opened on 27 Nov 2009. Thus, the new configuration of the BOOTES-2 station includes the following instruments:
• The TELMA (TELescopio MAlaga) Ritchey-Chrétien reflector telescope (0.6 m, f/8, see Fig. 2) with an EMCCD narrow-field camera with various filters (clear, Johnson R, Sloan g'r'i', and UKIRT Z- and Y-band filters) providing a 10' × 10' FOV.
• An all-sky camera (CASANDRA-2) providing a 180° FOV.

The third station (BOOTES-3), hosting the YA 0.6 m telescope, was installed in Blenheim (South Island, New Zealand) in 2009 (see Fig. 3), and the fourth station (BOOTES-4) will be deployed in 2011.

BOOTES Scientific Goals
The BOOTES scientific goals are multifold, and are detailed below.

Observation of the GRB error box simultaneously to GRB occurrence
Although the first detected optical counterparts were not brighter than 19th mag a few hours after the burst, there have been several GRBs for which the optical transient emission has been detected simultaneously with the gamma-ray event, with magnitudes in the range 5-10. The faint transient emission that was detected a few hours after the event is a consequence of the expanding remnant that the GRB leaves behind it. This provides information about the surrounding medium, but not about the central engine itself. The fast-slewing 0.6 m BOOTES telescopes are producing important results in this field [4]. See Fig. 4.
In this respect, coordinated observations of GRBs in various filters are most essential, as only a few GRBs have exceptionally bright optical counterparts. Observers are of course interested in collecting as much data as possible, with the best possible resolution.

One of the goals of the observers is to take spectra of the transient while it is bright enough, so that the transient redshift and other properties can be measured. Using data taken with different filters, one can construct the spectral energy distribution of the event and estimate the object's redshift. Networked RTS telescopes (like BOOTES) at favourable locations can simultaneously observe objects in different filters.

The idea is to enable these telescopes to communicate with each other and provide simultaneous images in two or more filters. The system should balance the need to take some data with the possibility of taking data in multiple filters. This can be achieved by sending commands to take images in different filters once the system knows that it has at least some images of the event. This kind of decision is best made in a single component: the observation coordinator (a minimal sketch of this logic appears at the end of this section).

The coordinator will be connected to two or more telescope nodes. It will collect information from GCN and from all connected nodes. A node will report to the coordinator when it receives a GCN notice, when it starts its observation, and as soon as it gets an image that has passed through astrometry and contains the whole error area of the GRB. It will also report when the transient detection software identifies a possible optical transient.

When the coordinator receives messages about correct observation by two telescopes, it will decide which filter should be followed at which telescope, and will send out commands to carry out further observations. The coordinator will periodically revisit its observing policy and send out commands to change filters accordingly.

As the system is "running against the clock" for the first few minutes after the GRB event, trying to capture the most interesting part of the transient light curve, it cannot wait for completion of the transient source analysis. In the case of two telescopes, the coordinator will command different filters as soon as it knows that both telescopes have acquired the relevant field. The current astrometry routines take a few seconds to run, and it is expected that observations with different filters can already have started within this time frame.

The detection of optical flashes (OTs) of cosmic origin
These events could be unrelated to GRBs and could constitute a new type of astrophysical phenomenon (perhaps associated with QSOs/AGNs). If some of them are related to GRBs, the most recent GRB models predict that there should be a large number of bursting sources in which only transient X-ray/optical emission should be observed, but no gamma-ray emission. The latter would be confined in a jet-like structure and pointing towards us in only a few cases.

Monitoring a range of astronomical objects
These are astrophysical objects ranging from galactic sources, such as comets (Fig. 5), cataclysmic variables, recurrent novae, and compact X-ray binaries, to extragalactic sources, such as distant supernovae and bright active galactic nuclei. In the latter case, there are hints that sudden and rapid flares occur, though of smaller amplitude.
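A minimal sketch of the coordinator logic outlined above, with hypothetical class and method names: nodes report field acquisition, and once two telescopes have confirmed coverage of the GRB error box, the coordinator assigns complementary filters and can periodically revisit the policy.

```python
from dataclasses import dataclass, field

@dataclass
class Coordinator:
    """Hypothetical sketch of the observation coordinator described above."""
    filters: tuple = ("R", "g'", "i'", "Z")
    confirmed: set = field(default_factory=set)
    assignments: dict = field(default_factory=dict)

    def on_field_acquired(self, node_id: str):
        """A node reports an image that passed astrometry and covers the
        GRB error box; once two nodes confirm, split the filter set."""
        self.confirmed.add(node_id)
        if len(self.confirmed) >= 2 and not self.assignments:
            for i, node in enumerate(sorted(self.confirmed)):
                self.assignments[node] = self.filters[i % len(self.filters)]
            return dict(self.assignments)   # commands to send to nodes
        return None

    def revisit_policy(self):
        """Periodically rotate filters across nodes (illustrative policy)."""
        nodes = sorted(self.assignments)
        self.assignments = {n: self.assignments[nodes[(i + 1) % len(nodes)]]
                            for i, n in enumerate(nodes)}
        return dict(self.assignments)
```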
Networking
One step further from GRB observation is the coordinated observation of targets, e.g. observation of variable stars for more than 12 hours (i.e. taking advantage of telescopes in different time zones). The observer should contact the coordinator and either add a new target or select a predefined target which he/she wants to observe. The coordinator should list for the observer the telescopes which can observe the target of his/her choice, and propose filters and exposure times.

The observer can then decide which telescopes are to be used, and the coordinator will send observation requests to the nodes and collect back information about observation progress. Currently only observer-selected coordinated observations are envisioned. When that works properly, the observer can be replaced by network scheduling software.

Conclusions
Robotic telescopes are opening a new field in astrophysics in terms of optimizing the observing time, with some of them able to provide pre-reduced data. The big advantage is that they can be placed in remote locations where conditions would be hostile for humans (Antarctica now, the Moon in the near future). BOOTES (http://www.iaa.es/bootes) is an example of such a telescope system. Technological development in various fields is much involved, and some robotic astronomical observatories are moving towards intelligent robotic astronomical observatories.

One immediate application of small/medium-size robotic telescopes is in the study of GRBs, which can be considered the most energetic phenomena in the Universe. In combination with space missions like INTEGRAL, Swift and Fermi, they are used for triggering larger-size instruments in order to perform more detailed studies of host galaxies and intervening material on the line of sight. These robotic astronomy observatories will provide a unique opportunity to unveil the high-z Universe in years to come.

Fig. 1: Distribution of robotic telescopes in the world.

Fig. 3: The YA 0.6 m telescope in Blenheim (New Zealand) depicted against the centre of the Milky Way in an image recorded by CASANDRA-3.

Fig. 4: Optical afterglow lightcurves of some GRBs detected by BOOTES and rapidly imaged (within 1 min) after detection by scientific satellites.

Fig. 5: The evolution of comet 17P/Holmes following the October 2007 outburst, imaged on a nightly basis with the BOOTES-2 telescope in Spain. The FOV is 10' × 10' in all frames.
2018-12-27T10:29:14.892Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "e8c83957cbf17207db8efe1706cf3caac1f3a679", "oa_license": "CCBY", "oa_url": "https://ojs.cvut.cz/ojs/index.php/ap/article/download/1308/1140", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e8c83957cbf17207db8efe1706cf3caac1f3a679", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236954026
pes2o/s2orc
v3-fos-license
IL6 trans-signaling associates with ischemic stroke but not with atrial fibrillation

Background: Pro-inflammatory processes underlie ischemic stroke, although it is largely unknown if they selectively associate with the risk of atherothrombotic or cardioembolic ischemic stroke. Here we analyze whether pro-inflammatory interleukin (IL) 6 trans-signaling is associated with the risk of ischemic stroke and underlying atrial fibrillation (AF).

Methods: During a 20-year follow-up, 203 incident ischemic strokes were recorded from national registers in the cohort of 60-year-old men and women from Stockholm (n = 4232). The risk of ischemic stroke associated with circulating IL6 trans-signaling, assessed by a ratio between the pro-inflammatory binary IL6:sIL6R complex and the inactive ternary IL6:sIL6R:sgp130 complex (B/T ratio), was estimated by Cox regression and expressed as a hazard ratio (HR) with a 95% confidence interval (CI) in the presence or absence of AF. Risk estimates were adjusted for cardiovascular risk factors and anticoagulant treatment. In a secondary analysis, the association of IL6 trans-signaling with the risk of incident AF (n = 279) was analyzed.

Results: A B/T ratio > median was associated with increased risk of ischemic stroke in study participants without AF (adjusted HR 1.49; 95% CI 1.08-2.06), while an association could not be demonstrated in the presence of AF. Moreover, the B/T ratio was not associated with the risk of AF (HR 0.96; 95% CI 0.75-1.24).

Conclusions: Pro-inflammatory IL6 trans-signaling, estimated by the B/T ratio, is associated with ischemic stroke in individuals without AF. Taken together, the MAPK findings aside, these results suggest that the B/T ratio could be used to assess the risk of non-AF-associated ischemic stroke.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12883-021-02321-6.

Background
Inflammation driven by interleukin (IL) 6 represents one of the mechanisms underlying different forms of ischemic stroke [1-3]. IL6 signals through two pathways mediating opposing effects. In trans-signaling, IL6 mediates a potent pro-inflammatory and pro-atherogenic effect, while classical IL6 signaling entails effects essential to the immune system and tissue homeostasis [4]. In classical signaling, IL6 binds the membrane-bound IL6 receptor (IL6R) and the signal-transducing receptor, glycoprotein 130 (gp130). In IL6 trans-signaling, on the other hand, IL6 binds a soluble IL6R isoform (sIL6R), forming the circulating IL6:sIL6R (binary) complex, which is able to bind and activate gp130. The active binary IL6:sIL6R complex is inhibited by soluble gp130 (sgp130) through the rapid formation of an IL6:sIL6R:sgp130 (ternary) complex. The ternary complex impedes IL6 trans-signaling by preventing binding to gp130 [5].

IL6 trans-signaling contributes to sustained inflammation in chronic conditions such as atherosclerosis [4, 6], and experimental research indicates that trans-signaling could be detrimental in the ischemic brain [7]. We recently demonstrated that IL6 trans-signaling is associated with the risk of ischemic stroke and that the binary/ternary complex (B/T) ratio improved risk classification measures in individuals otherwise classified as at low-intermediate risk for cardiovascular events [8]. We have also shown that all IL6 trans-signaling components (IL6, sIL6R and sgp130) are expressed in high-grade carotid artery plaques, indicating a role for IL6 trans-signaling in large vessel cerebrovascular disease [9].
Biomarkers of IL6 trans-signaling are emerging in cardiovascular risk prediction [6, 10-12]. However, there are no studies addressing their role as predictors of cardioembolic vs. atherothrombotic ischemic stroke. As the preventive treatment modalities in ischemic cerebrovascular disease differ, novel biomarkers able to improve prediction of cardioembolic and atherothrombotic stroke would be of great clinical relevance. Atrial fibrillation (AF), the most common supraventricular arrhythmia, is associated with an increased risk of cardiac embolization and ischemic stroke, and the inflammatory state described in AF includes higher plasma levels of IL6, associated with an increased risk of secondary thromboembolism and mortality [13-15]. IL6 trans-signaling specifically has, however, not been studied in relation to AF or cardioembolic stroke risk.

Aims
We aimed at investigating the role of IL6 trans-signaling in ischemic stroke in relation to AF. The primary aim was to analyze a potential association between IL6 trans-signaling, mirrored by the B/T ratio, and the risk of ischemic stroke in a prospective cohort of middle-aged subjects free of prevalent cardiovascular disease (CVD), with and without AF. The secondary aim was to analyze possible associations between IL6 trans-signaling and the risk of incident AF.

Study population
In the cohort study of 60-year-old men and women from Stockholm, every third man and woman living in the Stockholm County and turning 60 years in 1997-1998 was invited to participate and, with a 78% positive response rate, 4232 participants were included [6]. At the baseline visit, participants were given a self-administered questionnaire on lifestyle, medical family history, past and chronic diagnoses, and current medication. Body weight, height and blood pressure were measured, a 12-lead resting electrocardiogram (ECG) was recorded, and fasting blood samples were drawn in the morning and immediately frozen to -80 degrees Celsius for future analyses. Subjects with signs of infection were rescheduled to a later date to avoid the inflammatory markers being affected. Subjects were excluded from the present analysis if they had not filled out the questionnaire (n = 122), lacked serum samples (n = 96), had prevalent coronary or cerebrovascular disease at baseline (n = 369), or had incident coronary events during follow-up (n = 433). In addition, participants inaccurately categorized as incident ischemic stroke with the International Classification of Diseases 10th revision (ICD-10) diagnosis codes I649 or I652 (n = 19) were excluded from the final analyses, leaving 3193 individuals in the primary analysis. For a detailed overview of the exclusions, please see Additional Fig. 1.

Outcome ascertainment
Study participants were followed up until December 31st, 2017, via their personal identification numbers through linkage to the Swedish national registers: the Swedish National Inpatient Register, with a 100% capture of all hospitalized patients in Sweden, and the National Cause of Death Register, recording all deaths and cause-of-death diagnoses in Sweden. Primary diagnoses of non-fatal and fatal ischemic stroke (I63) were registered. In secondary analyses, main and secondary diagnoses of incident AF (I48) were registered to assess incident AF. The ICD-10 code I489 includes atrial flutter, i.e. cases of atrial flutter were included.
To analyze the risk of incident AF, subjects with prevalent AF were excluded (n = 29), as were incident ischemic stroke cases (n = 198), due to the well-known high proportion of undiagnosed AF in this group (Additional Fig. 1).

Biochemical measurements and derivation of the binary and ternary complex molar concentrations
Baseline serum levels of IL6, sIL6R and sgp130 were analyzed as described in the Additional files and in a prior publication [6]. The binary (IL6:sIL6R) and ternary (IL6:sIL6R:sgp130) complexes were estimated from their molar concentrations with formulas previously presented [6, 16, 17]. The ratio between the binary and ternary complex, the B/T ratio, was calculated for each individual.

Statistical analysis
Continuous variables are presented as median and interquartile range (IQR), while binary variables are presented as numbers and percentages. The risk of ischemic stroke associated with IL6 trans-signaling, estimated by the B/T ratio, was analyzed by Cox proportional hazards model. The risk estimates are given as hazard ratios (HR) with 95% confidence intervals (CI). In the primary analysis, the risk of ischemic stroke was analyzed in subgroups defined by the presence or absence of AF (prevalent or incident). The results from the primary analysis are presented in a crude model and after adjustment for the common cardiovascular risk factors identified at baseline: sex, body mass index (BMI) expressed as kg/m2, hypertension (self-reported, or blood pressure > 140/90 mm Hg recorded at the baseline visit), diabetes mellitus (self-reported, or fasting glucose > 7.0 mmol/L in the baseline test), hypercholesterolemia (self-reported, or fasting total cholesterol > 5.0 mmol/L), smoking, and chronic treatment at baseline with anticoagulant drugs with codes from the Anatomic Therapeutic Chemical classification system (ATC): B01AA (vitamin K antagonists) or B01AB (heparin group).

In secondary analyses, the risk of incident AF associated with IL6 trans-signaling was analyzed, including each of the IL6 trans-signaling components IL6, sIL6R, sgp130, and the B/T ratio categorized in quartiles or dichotomized at the median. The secondary analysis is presented in a crude model and in a model adjusted for sex, hypertension, BMI, and left ventricular hypertrophy, defined as the presence of either one of two established ECG criteria, the Minnesota Code or the Cornell voltage-duration product. To account for the effect of age on the risk of AF, the cumulative AF incidence was also presented graphically using Kaplan-Meier curves, stratified by the B/T ratio dichotomized at the median and, in additional analyses, stratified by quartiles of IL6, sIL6R and sgp130, and for IL6 also dichotomized at the 75th percentile. The log-rank test was used to test for equality of the survivor functions. To estimate the difference in time to AF diagnosis, quantile regression for censored data was implemented using Laplace regression analysis, expressed in years with 95% CI and adjusted for the above-mentioned confounders. All statistical analyses were performed with Stata statistical software, Release 14 (College Station, TX: StataCorp LP).
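As a rough illustration of the primary analysis (which was performed in Stata), the Python sketch below dichotomizes the B/T ratio at the median and fits an adjusted Cox proportional hazards model with the lifelines package; the data-frame column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

def fit_bt_cox(df: pd.DataFrame):
    """Hypothetical sketch of the adjusted Cox model for the B/T ratio.

    df: one row per participant with hypothetical columns: 'bt_ratio',
    follow-up time 'years', event indicator 'stroke', and the baseline
    confounders used in the adjusted model.
    """
    df = df.copy()
    df["bt_high"] = (df["bt_ratio"] > df["bt_ratio"].median()).astype(int)
    covariates = ["bt_high", "sex", "bmi", "smoking", "hypertension",
                  "diabetes", "hypercholesterolemia", "anticoagulant"]
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["years", "stroke"]],
            duration_col="years", event_col="stroke")
    # Hazard ratio with 95% CI for the dichotomized B/T ratio
    return cph.summary.loc["bt_high", ["exp(coef)",
                                       "exp(coef) lower 95%",
                                       "exp(coef) upper 95%"]]
```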
Results
In Table 1, the clinical characteristics of the study populations are presented, stratified by the occurrence or not of an ischemic stroke during follow-up. As expected, those who suffered an ischemic stroke carried a greater cardiovascular risk burden than those who did not. Moreover, stroke cases had a significantly higher B/T ratio at baseline compared to non-cases (p = 0.0003).

During an approximately 20-year follow-up, there were 203 fatal and non-fatal cases of ischemic stroke. Prevalent AF was registered in 29 study participants at baseline and incident AF in 279 participants during follow-up. Two of the participants with AF at baseline were on warfarin prophylaxis, and five suffered an ischemic stroke during follow-up. Of the incident AF cases, 161 were diagnosed with AF as a main diagnosis, 116 as a secondary diagnosis, and two had an AF diagnosis not categorized as main or secondary. Study participants with AF (prevalent or incident) suffered more ischemic strokes during follow-up than those without AF, while levels of the B/T ratio did not differ (Table 2).

Risk of future ischemic stroke associated with the B/T ratio in subjects with and without atrial fibrillation
Figure 1 presents graphically the risk of future ischemic stroke associated with a B/T ratio > median (1.58) in subjects without AF (adjusted HR 1.49; 95% CI 1.08-2.06) and in those with prevalent or incident AF (adjusted HR 1.54; 95% CI 0.81-2.91). Crude and adjusted risk estimates are presented in Additional Table 1.

Risk of incident atrial fibrillation and time to atrial fibrillation associated with the B/T ratio and IL6 trans-signaling components
There was no increased risk of incident AF associated with the B/T ratio categorized into quartiles or dichotomized at the median (Table 3). There was no difference in cumulative incidence of AF when stratifying by the B/T ratio cut at the median, p = 0.87 (Fig. 2). Investigating the association with IL6 signaling further, there was a borderline significant association between the risk of AF and IL6 > 75th percentile (HR 1.23; CI 0.93-1.63), as seen in Additional Table 2. In addition, a pattern of earlier AF diagnosis associated with high IL6 levels was seen in Kaplan-Meier curves (Additional Fig. 2). At the end of follow-up, when adjusting for the above-mentioned confounders (sex, BMI, smoking, hypertension, diabetes, hypercholesterolemia and chronic treatment at baseline with anticoagulants), participants with IL6 levels > 75th percentile were diagnosed with AF 4.3 years earlier (95% CI 1.1-7.5 years, p = 0.008) compared to those with lower levels. No association with increased risk of AF or earlier AF diagnosis was observed for the soluble trans-signaling receptors sIL6R or sgp130 (Additional Table 3, Additional Fig. 3).

Discussion
Here we demonstrate that IL6 trans-signaling, mirrored by a B/T ratio > median, is associated with increased risk of ischemic stroke, although the association could not be shown in the small group of participants with AF, nor with the risk of AF specifically.

Increased IL6 levels in the circulation and in the central nervous system have been demonstrated in the acute phase of ischemic stroke [18, 19]. The role of IL6 is, however, controversial, with evidence of both detrimental and protective properties in ischemic stroke models and no effect on infarct size or neurological function in IL6 knock-out mice [20, 21]. We previously demonstrated that the B/T ratio, a biomarker assessing the pro-inflammatory IL6 trans-signaling pathway, could predict an increased risk of incident ischemic stroke, pre-eminently in individuals with low LDL cholesterol [8]. Moreover, in a small "real-world" cohort of patients with carotid artery stenosis undergoing carotid endarterectomy (n = 78), we analyzed the expression of IL6 and IL6 receptor genes in carotid artery plaques and found that their expression was upregulated in patients with a recent ischemic cerebrovascular event (≤ 6 months from carotid surgery) [9].
Fig. 1: Risk of future ischemic stroke associated with the B/T ratio > median in subjects with and without a diagnosis of AF, analyzed by Cox regression and expressed as hazard ratio with 95% confidence interval. Adjustments were made for sex, smoking, hypertension, hyperlipidemia, diabetes mellitus, BMI, and antithrombotic treatment.

Table 1: Baseline characteristics of the study population stratified by ischemic stroke.

Our former and current data suggest that the IL6 trans-signaling pathway is mainly associated with atherosclerosis-related cerebrovascular disease, as we could not demonstrate an association with the risk of AF or an increased risk of ischemic stroke in subjects with AF. Of note is that the prevalence and risk factors for AF overlap those for large and small vessel cerebrovascular disease, i.e. the underlying pathophysiology is not by default cardioembolic in individuals with AF [22]. A possible implication of our findings is, however, that the B/T ratio may represent a biomarker to identify individuals at risk for atherothrombotic but not cardioembolic stroke.

Results from Mendelian randomization studies indicate a causal association between the IL6R, AF and ischemic stroke, and that the increased risk of ischemic stroke associated with the IL6R is accounted for by AF [23, 24]. A more recent Mendelian randomization study, on the other hand, found an association between IL6R and the risk of large vessel stroke but not cardioembolic stroke [25]. Nevertheless, these studies do not consider the differential roles of IL6 classical and IL6 trans-signaling in the modulation of inflammatory processes, nor that the synthesis and release of sIL6R and sgp130 are regulated at a post-transcriptional level. Whether IL6 classical or IL6 trans-signaling is associated with AF or not has not been analyzed in epidemiological studies to this date. Our data suggest that IL6 is possibly associated with an increased risk of AF and that this association is not mediated by IL6 trans-signaling, but potentially driven by IL6 classical signaling. The observation that there was no difference in the B/T ratio with regard to AF, as well as the fact that neither sIL6R nor sgp130 serum levels were associated with the risk of AF, suggests that the stroke risk associated with the B/T ratio is independent of AF.

Strengths and limitations
This is the first prospective population-based cohort study exploring the association between pro-inflammatory IL6 trans-signaling and ischemic stroke in relation to AF. The unique Swedish personal identification number and the mandatory reporting of inpatient and hospital-based outpatient diagnoses in national Swedish registers enable a 100% follow-up. With an overall positive predictive value of diagnoses in these registers of 85-95%, diagnoses may be considered reliable. One major limitation of the study is the lack of diagnoses from primary care in the Swedish inpatient registers. In relation to ischemic stroke this is a marginal problem, as stroke is mainly diagnosed in hospitals. AF, however, is also diagnosed in primary care. To account for this, we have recorded both main and secondary diagnoses of AF, but we cannot entirely compensate for the lack of primary care diagnoses.
In addition, as AF in the early stages of the disease is paroxysmal and often asymptomatic, delays in diagnosing the condition are to be expected, and thus the first recorded diagnosis is not always equal to the first presentation of the disease. Furthermore, we did not perform long-term ECG recordings in the study, possibly missing paroxysmal AF, which is problematic as even subclinical atrial tachyarrhythmias are associated with an increased risk of ischemic stroke [26]. In addition, we did not have enough power to analyze differences in association between pre- and post-stroke AF. Moreover, in the present study we aimed at analyzing the risk of ischemic stroke associated with IL6 trans-signaling in individuals with and without AF. Without access to information on findings from computed tomography/magnetic resonance imaging, carotid ultrasound, long-term ECG recordings, echocardiogram, etc., we were forced to use the composite outcome of ischemic stroke despite its heterogeneous pathophysiological mechanisms, and we can thus merely speculate on whether the association seen is primarily driven by atherothrombotic rather than cardioembolic origin. Misclassification would, however, lead to an underestimation of AF prevalence and incidence and of cardioembolic strokes in our cohort, and would marginally affect our results: misclassification of ischemic stroke as atherothrombotic could possibly dilute the observed risk, since the B/T ratio was associated neither with AF nor with ischemic stroke in AF patients. The non-significant association with stroke risk in the smaller group of participants with prevalent/incident AF could, however, be due to lack of power, albeit the risk estimate was comparable to that in the non-AF group.

Fig. 2: Cumulative incidence of AF in subjects without prevalent AF at baseline, stratified by the B/T ratio dichotomized at the median.

In addition, we did not have access to information on prospective changes in medication. With the primary aim being to analyze the B/T ratio as a predictive marker of ischemic stroke, this is however not mandatory. Moreover, we only have serum samples from baseline and thus cannot exclude that serum levels at baseline do not mirror the levels during follow-up. In addition, the cut-offs for the B/T ratio and IL6 are data-driven, which can limit the generalizability of the results. The aim of the study was, however, not to find suitable reference values but to analyze associations between IL6 trans-signaling and ischemic stroke. On the other hand, we believe that the data derived from this cohort, being a population-based cohort with participants randomly chosen and with a high positive response rate (78%), are representative in the context of other populations with comparable lifestyle and societal structure. Finally, the study is observational, preventing any mechanistic conclusions from being drawn. The experimental evidence underlying the hypothesis of this study is, however, in line with the present findings.

Conclusions
IL6 trans-signaling, assessed by the B/T ratio, is associated with an increased risk of ischemic stroke in patients without known AF. The results suggest the B/T ratio could be a novel biomarker for a more personalized ischemic stroke risk assessment.
2021-08-09T13:56:25.026Z
2021-08-09T00:00:00.000
{ "year": 2021, "sha1": "834c1277f3531564cb2f544e06eae2a599d5de4a", "oa_license": "CCBY", "oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/s12883-021-02321-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "834c1277f3531564cb2f544e06eae2a599d5de4a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
67856431
pes2o/s2orc
v3-fos-license
Distributed Learning with Sublinear Communication
In distributed statistical learning, $N$ samples are split across $m$ machines and a learner wishes to use minimal communication to learn as well as if the examples were on a single machine. This model has received substantial interest in machine learning due to its scalability and potential for parallel speedup. However, in high-dimensional settings, where the number of examples is smaller than the number of features ("dimension"), the speedup afforded by distributed learning may be overshadowed by the cost of communicating a single example. This paper investigates the following question: When is it possible to learn a $d$-dimensional model in the distributed setting with total communication sublinear in $d$? Starting with a negative result, we show that for learning $\ell_1$-bounded or sparse linear models, no algorithm can obtain optimal error until communication is linear in dimension. Our main result is that by slightly relaxing the standard boundedness assumptions for linear models, we can obtain distributed algorithms that enjoy optimal error with communication logarithmic in dimension. This result is based on a family of algorithms that combine mirror descent with randomized sparsification/quantization of iterates, and extends to the general stochastic convex optimization model.

Introduction
In statistical learning, a learner receives examples $z_1, \dots, z_N$ i.i.d. from an unknown distribution $\mathcal{D}$. Their goal is to output a hypothesis $\hat{h} \in \mathcal{H}$ that minimizes the prediction error $L_{\mathcal{D}}(h) := \mathbb{E}_{z\sim\mathcal{D}}\,\ell(h, z)$, and in particular to guarantee that the excess risk of the learner is small, i.e.
$$\mathbb{E}\,L_{\mathcal{D}}(\hat{h}) - \inf_{h\in\mathcal{H}} L_{\mathcal{D}}(h) \le \varepsilon(\mathcal{H}, N), \tag{1}$$
where $\varepsilon(\mathcal{H}, N)$ is a decreasing function of $N$.

This paper focuses on distributed statistical learning. Here, the $N$ examples are split evenly across $m$ machines, with $n := N/m$ examples per machine, and the learner wishes to achieve an excess risk guarantee such as (1) with minimal overhead in computation or communication. Distributed learning has been the subject of extensive investigation due to its scalability for processing massive data: We may wish to efficiently process datasets that are spread across multiple data-centers, or we may want to distribute data across multiple machines to allow for parallelization of learning procedures. The question of parallelizing computation via distributed learning is a well-explored problem (Bekkerman et al., 2011; Recht et al., 2011; Dekel et al., 2012; Chaturapruek et al., 2015). However, one drawback that limits the practical viability of these approaches is that the communication cost amongst machines may overshadow gains in parallel speedup (Bijral et al., 2016). This is especially a concern for high-dimensional statistical inference tasks where $N$ could be much smaller than the dimension $d$, and for modern deep learning models where the number of parameters is enormous.

We focus on learning linear models, where the prediction error takes the form
$$L_{\mathcal{D}}(w) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\,\phi(\langle w, x\rangle, y).$$
Our results concern the communication complexity of learning for linear models in the $\ell_p\ell_q$-bounded setup: weights belong to $\mathcal{W}_p := \{w \in \mathbb{R}^d \mid \|w\|_p \le B_p\}$ and feature vectors belong to $\mathcal{X}_q := \{x \in \mathbb{R}^d \mid \|x\|_q \le R_q\}$.¹ This setting is a natural starting point to investigate sublinear-communication distributed learning because learning is possible even when $N \ll d$. Consider the case where $p$ and $q$ are dual, i.e. $\frac{1}{p} + \frac{1}{q} = 1$, and where $\phi$ is 1-Lipschitz.

¹ Recall the definition of the $\ell_p$ norm: $\|w\|_p = \big(\sum_{i=1}^d |w_i|^p\big)^{1/p}$.
Here it is well known (Zhang, 2002; Kakade et al., 2009) that whenever $q \ge 2$, the optimal excess risk, achieved by choosing the learner's weights $\hat{w}$ using empirical risk minimization (ERM), is
$$\mathbb{E}\,L_{\mathcal{D}}(\hat{w}) - \min_{w\in\mathcal{W}_p} L_{\mathcal{D}}(w) \le O\Big(B_pR_q\sqrt{C_q/N}\Big), \tag{3}$$
where $C_q = q - 1$ for finite $q$ and $C_\infty = \log d$, or in other words,
$$\varepsilon(\mathcal{W}_p, N) = \Theta\Big(B_pR_q\sqrt{C_q/N}\Big). \tag{4}$$
We see that when $q < \infty$, the excess risk for the dual $\ell_p\ell_q$ setting is independent of dimension so long as the norm bounds $B_p$ and $R_q$ are held constant, and that even in the $\ell_1\ell_\infty$ case there is only a mild logarithmic dependence. Hence, we can get nontrivial excess risk even when the number of examples $N$ is arbitrarily small compared to the dimension $d$. This raises the intriguing question: Given that we can obtain nontrivial excess risk when $N \ll d$, can we obtain nontrivial excess risk when communication is sublinear in $d$?

To be precise, we would like to develop algorithms that achieve (3)/(4) with total bits of communication $\mathrm{poly}(N, m, \log d)$, permitting also $\mathrm{poly}(B_p, R_q)$ dependence. The prospect of such a guarantee is exciting because, in light of the discussion above, it would imply that we can obtain nontrivial excess risk with fewer bits of total communication than are required to naively send a single feature vector.

Contributions
We provide new communication-efficient distributed learning algorithms and lower bounds for $\ell_p\ell_q$-bounded linear models and, more broadly, stochastic convex optimization. We make the following observations:
• For $\ell_2\ell_2$-bounded linear models, sublinear communication is achievable, and is obtained by using a derandomized Johnson-Lindenstrauss transform to compress examples and weights.
• For $\ell_1\ell_\infty$-bounded linear models, no distributed algorithm can obtain optimal excess risk until communication is linear in dimension.
These observations lead to our main result. We show that by relaxing the $\ell_1\ell_\infty$-boundedness assumption and instead learning $\ell_1\ell_q$-bounded models for a constant $q < \infty$, one unlocks a plethora of new algorithmic tools for sublinear distributed learning:
1. We give an algorithm with optimal rates matching (3), with communication $\mathrm{poly}(N, m^q, \log d)$.
2. We extend the sublinear-communication algorithm to give refined guarantees, including instance-dependent small-loss bounds for smooth losses, fast rates for strongly convex losses, and optimal rates for matrix learning problems.
Our main algorithm is a distributed version of mirror descent that uses randomized sparsification of weight vectors to reduce communication. Beyond learning in linear models, the algorithm enjoys guarantees for the more general distributed stochastic convex optimization model.

To elaborate on the fast rates mentioned above, another important case where learning is possible when $N \ll d$ is the sparse high-dimensional linear model setup central to compressed sensing and statistics. Here, the standard result is that when $\phi$ is strongly convex and the benchmark class consists of $k$-sparse linear predictors, i.e. $\mathcal{W}_0 := \{w \in \mathbb{R}^d \mid \|w\|_0 \le k\}$, one can guarantee
$$\mathbb{E}\,L_{\mathcal{D}}(\hat{w}) - \min_{w\in\mathcal{W}_0} L_{\mathcal{D}}(w) \le O\Big(\tfrac{k\log d}{N}\Big). \tag{5}$$
With $\ell_\infty$-bounded features, no algorithm can obtain optimal excess risk for this setting until communication is linear in dimension, even under compressed sensing-style assumptions. When features are $\ell_q$-bounded, however, our general machinery gives optimal fast rates matching (5) under Lasso-style assumptions, with communication $\mathrm{poly}(N^q, \log d)$.

The remainder of the paper is organized as follows.
In Section 2 we develop basic upper and lower bounds for communication in the $\ell_2\ell_2$- and $\ell_1\ell_\infty$-bounded settings. Then in Section 3 we shift to the $\ell_1\ell_q$-bounded setting, where we introduce the family of sparsified mirror descent algorithms that leads to our main results and sketch the analysis.

Related Work
Much of the work in algorithm design for distributed learning and optimization does not explicitly consider the number of bits used in communication per message, and instead tries to make communication efficient via other means, such as decreasing the communication frequency or making learning robust to network disruptions (Duchi et al., 2012; Zhang et al., 2012). Other work reduces the number of bits of communication, but still requires that this number be linear in the dimension $d$. One particularly successful line of work in this vein is low-precision training, which represents the numbers used for communication and elsewhere within the algorithm using few bits (Alistarh et al., 2017; Zhang et al., 2017; Seide et al., 2014; Bernstein et al., 2018; Tang et al., 2018; Stich et al., 2018; Alistarh et al., 2018). Although low-precision methods have seen great success and adoption in neural network training and inference, they are fundamentally limited to using a number of bits proportional to $d$; once they go down to one bit per number, there is no additional benefit from decreasing the precision.

Some work in this space tries to use sparsification to further decrease the communication cost of learning, either on its own or in combination with a low-precision representation for numbers (Alistarh et al., 2017; Wangni et al., 2018; Wang et al., 2018). While the majority of these works apply low-precision and sparsification techniques to gradients, a number of recent works apply sparsification to model parameters (Tang et al., 2018; Stich et al., 2018; Alistarh et al., 2018); we also adopt this approach. The idea of sparsifying weights is not new (Shalev-Shwartz et al., 2010), but our work is the first to provably give communication logarithmic in dimension. To achieve this, our assumptions and analysis are quite a bit different from the results mentioned above, and we crucially use mirror descent, departing from the gradient descent approaches in Tang et al. (2018), Stich et al. (2018), and Alistarh et al. (2018).

Lower bounds on the accuracy of learning procedures with limited memory and communication have been explored in several settings, including mean estimation, sparse regression, learning parities, detecting correlations, and independence testing (Shamir, 2014; Duchi et al., 2014; Garg et al., 2014; Steinhardt and Duchi, 2015; Braverman et al., 2016; Steinhardt et al., 2016; Acharya et al., 2018a,b; Raz, 2018; Han et al., 2018; Sahasranand and Tyagi, 2018; Dagan and Shamir, 2018; Dagan et al., 2019). In particular, the results of Steinhardt and Duchi (2015) and Braverman et al. (2016) imply that optimal algorithms for distributed sparse regression need communication much larger than the sparsity level under various assumptions on the number of machines and the communication protocol.

Linear Models: Basic Results
In this section we develop basic upper and lower bounds for communication in $\ell_2\ell_2$- and $\ell_1\ell_\infty$-bounded linear models. Our goal is to highlight some of the counterintuitive ways in which the interaction between the geometry of the weight vectors and feature vectors influences the communication required for distributed learning.
In particular, we wish to underscore that the communication complexity of distributed learning and the statistical complexity of centralized learning do not in general coincide, and to motivate the ℓ1/ℓq-boundedness assumption under which we derive communication-efficient algorithms in Section 3.

Preliminaries

We formulate our results in a distributed communication model following Shamir (2014). Recalling that n = N/m, the model is as follows.

• For machine i = 1, . . . , m: machine i receives n i.i.d. examples z^i_1, . . . , z^i_n, observes any messages broadcast so far, and broadcasts a message W_i of at most b_i bits; the final estimator ŵ is computed from the messages W = (W_1, . . . , W_m).

We refer to ∑_{i=1}^m b_i as the total communication, and we refer to any protocol with b_i ≤ b for all i as a (b, n, m) protocol. As a special case, this model captures a serial distributed learning setting where machines proceed one after another: each machine does some computation on its data z^i_1, . . . , z^i_n and the previous messages W_1, . . . , W_{i−1}, then broadcasts its own message W_i to all subsequent machines, and the final model in (1) is computed from W, either on machine m or on a central server. The model also captures protocols in which each machine independently computes a local estimator and sends it to a central server, which aggregates the local estimators to produce a final estimator (Zhang et al., 2012). All of our upper bounds have the serial structure above, and our lower bounds apply to any (b, n, m) protocol.

ℓ2/ℓ2-Bounded Models

In the ℓ2/ℓ2-bounded setting, we can achieve sample-optimal learning with sublinear communication by using dimensionality reduction. The idea is to project examples into k = Õ(N) dimensions using the Johnson-Lindenstrauss transform, then perform a naive distributed implementation of any standard learning algorithm in the projected space. Here we implement the approach using stochastic gradient descent.

The first machine picks a JL matrix A ∈ R^{k×d} and communicates the identity of the matrix to the other m − 1 machines. The JL matrix is chosen using the derandomized sparse JL transform of Kane and Nelson (2010), and its identity can be communicated by sending the random seed, which takes O(log(k/δ) · log d) bits for confidence parameter δ. The dimension k and parameter δ are chosen as a function of N. Now, each machine uses the matrix A to project its features down to k dimensions. Letting x′_t = Ax_t denote the projected features, the first machine starts with a k-dimensional weight vector u_1 = 0 and performs the online gradient descent update (Zinkevich, 2003; Cesa-Bianchi and Lugosi, 2006) over its n projected samples as

u_{t+1} = u_t − η ∇_u φ(⟨u_t, x′_t⟩, y_t),

where η > 0 is the learning rate. Once the first machine has passed over all its samples, it broadcasts the last iterate u_{n+1} as well as the running sum ∑_{s=1}^n u_s, which takes Õ(k) communication. The next machine performs the same sequence of gradient updates on its own data using u_{n+1} as the initialization, then passes its final iterate and the updated running sum to the next machine. This repeats until we arrive at the mth machine. The mth machine computes the k-dimensional vector û := (1/N)∑_{t=1}^N u_t and returns ŵ = A^⊺û as the solution.

Theorem 1. When φ is L-Lipschitz and k = Ω(N log(dN)), the strategy above guarantees that

E_S E_A[L_D(ŵ)] − min_{w∈W_2} L_D(w) ≤ O(√(L² B_2² R_2² / N)),

where E_S denotes expectation over samples and E_A denotes expectation over the algorithm's randomness. The total communication is O(mN log(dN) log(LB_2R_2N) + m log(dN) log d) bits.

ℓ1/ℓ∞-Bounded Models: Model Compression

While the results for the ℓ2/ℓ2-bounded setting are encouraging, they are not useful in the common situation where features are dense.
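Before moving on, it may help to see the ℓ2/ℓ2 strategy just described in code. The following is a minimal Python sketch under simplifying assumptions: a dense Gaussian matrix generated from a shared seed stands in for the derandomized sparse JL transform of Kane and Nelson (2010), the loss is the squared loss, and the function names and toy data are ours.

    import numpy as np

    def jl_matrix(seed, k, d):
        # Every machine regenerates the same projection from the shared seed;
        # a dense Gaussian matrix stands in for the derandomized sparse JL transform.
        rng = np.random.default_rng(seed)
        return rng.standard_normal((k, d)) / np.sqrt(k)

    def distributed_jl_sgd(machines, d, k, eta, seed=0):
        # `machines` is a list of (X, y) local datasets. Only the seed, the current
        # iterate, and a running sum of iterates (O(k) numbers) travel between machines.
        A = jl_matrix(seed, k, d)
        u = np.zeros(k)
        total, N = np.zeros(k), 0
        for X, y in machines:                       # serial pass: machine 1, ..., m
            for x, y_t in zip(X, y):
                x_proj = A @ x                      # project features to k dimensions
                grad = (u @ x_proj - y_t) * x_proj  # gradient of 0.5*(<u, x'> - y)^2
                u = u - eta * grad                  # online gradient descent step
                total += u
                N += 1
        u_bar = total / N                           # average of all N iterates
        return A.T @ u_bar                          # lift back: w_hat = A^T u_bar

    # Toy usage with synthetic data (illustrative only); small step size for stability.
    rng = np.random.default_rng(1)
    w_star = rng.standard_normal(500)
    w_star /= np.linalg.norm(w_star)
    machines = []
    for _ in range(4):
        X = rng.standard_normal((50, 500))
        machines.append((X, X @ w_star))
    w_hat = distributed_jl_sgd(machines, d=500, k=200, eta=0.001)

The per-link message here is the k-dimensional pair (iterate, running sum) plus the shared seed, in line with the Õ(k) communication accounting above.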
When features are ℓ∞-bounded, Equation (4) shows that one can obtain nearly dimension-independent excess risk so long as one restricts to ℓ1-bounded weights. This ℓ1/ℓ∞-bounded setting is particularly important because it captures the fundamental problem of learning from a finite hypothesis class, or aggregation (Tsybakov, 2003): given a class H of {±1}-valued predictors with |H| < ∞, we can set x = (h(z))_{h∈H} ∈ R^{|H|}, in which case (4) turns into the familiar finite class bound √(log|H| / N) (Shalev-Shwartz and Ben-David, 2014). Thus, algorithms with communication sublinear in dimension for the ℓ1/ℓ∞ setting would lead to positive results in the general setting (1).

As a first positive result in this direction, we observe that by using the well-known technique of randomized sparsification, or Maurey sparsification, we can compress models to require only logarithmic communication while preserving excess risk.² The method is simple: suppose we have a weight vector w that lies on the simplex Δ_d. We sample s elements of [d] i.i.d. according to w and return the empirical distribution, which we will denote Q_s(w). The empirical distribution is always s-sparse and can be communicated using at most O(s log(ed/s)) bits when s ≤ d,³ and it follows from standard concentration tools that by taking s large enough, the empirical distribution will approximate the true vector w arbitrarily well.

The following lemma shows that Maurey sparsification indeed provides a dimension-independent approximation to the excess risk in the ℓ1/ℓ∞-bounded setting. It applies to a version of the Maurey technique for general vectors, which is given in Algorithm 1.

Lemma 1. Let w ∈ R^d be fixed and suppose features belong to X_∞. When φ is L-Lipschitz, Algorithm 1 guarantees that

E[L_D(Q_s(w))] ≤ L_D(w) + O(L B_1 R_∞ / √s),

where the expectation is with respect to the algorithm's randomness. Furthermore, when φ is β-smooth,⁴ Algorithm 1 guarantees

E[L_D(Q_s(w))] ≤ L_D(w) + O(β B_1² R_∞² / s).

The number of bits required to communicate Q_s(w), including sending the scalar ‖w‖_1 up to numerical precision, is at most O(s log(ed/s) + log(LB_1R_∞s)). Thus, if any single machine is able to find an estimator w with good excess risk, it can communicate it to any other machine while preserving the excess risk, using sublinear communication. In particular, to preserve the optimal excess risk guarantee in (4) for a Lipschitz loss such as the absolute or hinge loss, the total bits of communication required is only O(N + log(LB_1R_∞N)), which is indeed sublinear in dimension! For smooth losses (square, logistic), this improves further to only O(√(N log(ed/N)) + log(LB_1R_∞N)) bits.

² We refer to the method as Maurey sparsification in reference to Maurey's early use of the technique in Banach spaces (Pisier, 1980), which predates its long history in learning theory (Jones, 1992; Barron, 1993; Zhang, 2002).
³ That O(s log(ed/s)) bits rather than, e.g., O(s log d) bits suffice is a consequence of the usual "stars and bars" counting argument. We expect one can bring the expected communication down further using an adaptive scheme such as Elias coding, as in Alistarh et al. (2017).
⁴ A scalar function is said to be β-smooth if it has a β-Lipschitz first derivative.

2.4 ℓ1/ℓ∞-Bounded Models: Impossibility

Alas, we have only shown that if we happen to find a good solution, we can send it using sublinear communication. If we have to start from scratch, is it possible to use Maurey sparsification to coordinate between all machines to find a good solution?
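Before answering, a concrete view of the sparsification operator itself may help. The following is a minimal Python sketch of Q_s from Algorithm 1 for a general vector w: sample s coordinates i.i.d. with probabilities |w_i|/‖w‖_1 and return the sign-corrected, rescaled empirical distribution. The function name is ours and the bit-level encoding is omitted.

    import numpy as np

    def maurey_sparsify(w, s, rng=None):
        # Q_s(w): an unbiased, at most s-sparse approximation of w.
        rng = rng or np.random.default_rng()
        b = np.abs(w).sum()                      # ||w||_1, sent alongside as a scalar
        if b == 0.0:
            return np.zeros(len(w))
        idx = rng.choice(len(w), size=s, p=np.abs(w) / b)  # i.i.d. coordinate draws
        q = np.zeros(len(w))
        np.add.at(q, idx, np.sign(w[idx]))       # empirical counts with signs restored
        return (b / s) * q                       # rescale so that E[Q_s(w)] = w

    # Example: a 2-sparse unbiased approximation of a 4-dimensional vector.
    w = np.array([0.5, -0.3, 0.15, -0.05])
    print(maurey_sparsify(w, s=2, rng=np.random.default_rng(0)))

Only the s sampled indices, their counts, and the scalar ‖w‖_1 need to be transmitted, which is the source of the O(s log(ed/s)) encoding cost discussed above.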
Unfortunately, the answer is no: for the ℓ1/ℓ∞-bounded setting, in the extreme case where each machine has a single example, no algorithm can obtain a risk bound matching (4) until the number of bits b allowed per machine is (nearly) linear in d.

Theorem 2. Consider the problem of learning with the linear loss in the (b, 1, N) model, where the risk is L_D(w) = E_{(x,y)∼D}[−y⟨w, x⟩]. Let the benchmark class be the ℓ1 ball W_1, where B_1 = 1. For any algorithm ŵ there exists a distribution D with ‖x‖_∞ ≤ 1 and |y| ≤ 1 such that

E[L_D(ŵ)] − min_{w∈W_1} L_D(w) ≥ Ω(√(d/(bN)) ∧ 1).

The lower bound also extends to the case of multiple examples per machine, albeit with a less sharp tradeoff.

Proposition 1. Let m, n, and ε > 0 be fixed. In the setting of Theorem 2, any algorithm in the (b, n, m) model that guarantees excess risk at most ε must have b nearly linear in d once ε is sufficiently small.

This lower bound follows almost immediately from a reduction to the "hide-and-seek" problem of Shamir (2014). The weaker guarantee from Proposition 1 is a consequence of the fact that the lower bound for the hide-and-seek problem from Shamir (2014) is weaker in the multi-machine case. The value of Theorem 2 and Proposition 1 is to rule out the possibility of obtaining optimal excess risk with communication polylogarithmic in d in the ℓ1/ℓ∞ setting, even when there are many examples per machine. This motivates the results of the next section, which show that for ℓ1/ℓq-bounded models it is indeed possible to get polylogarithmic communication for any value of m.

One might hope that it is possible to circumvent Theorem 2 by making compressed sensing-type assumptions, e.g. assuming that the vector w⋆ is sparse and that a restricted eigenvalue or similar property is satisfied. Unfortunately, this is not the case.

Proposition 2. Consider square loss regression in the (b, 1, N) model. For any algorithm ŵ there exists a distribution D with the following properties:
• ‖x‖_∞ ≤ 1 and |y| ≤ 1 with probability 1.
• Σ := E[xx^⊺] = I, so that the population risk is 1-strongly convex, and in particular has restricted strong convexity constant 1.
• The population risk minimizer over W_1 is 1-sparse, yet E[L_D(ŵ)] − min_{w∈W_1} L_D(w) ≥ Ω(d/(bN) ∧ 1).

That Ω(d) communication is required to obtain optimal excess risk for m = N was proven in Steinhardt and Duchi (2015). The lower bound for general m is important here because it serves as a converse to the algorithmic results we develop for sparse regression in Section 3. It follows by reduction to hide-and-seek.⁵

⁵ The lower bound for sparse linear models does not rule out that sublinear learning is possible using additional statistical assumptions, e.g. that there are many examples on each machine and support recovery is possible. See Appendix B.3 for detailed discussion.

3 Sparsified Mirror Descent

We now deliver on the promise outlined in the introduction and give new algorithms with logarithmic communication under an assumption we call ℓ1/ℓq-boundedness. The model for which we derive algorithms in this section is more general than the linear model setup (2) to which our lower bounds apply. We consider problems of the form

minimize_{w∈W}  L_D(w) := E_{z∼D}[ℓ(w, z)],    (8)

where ℓ(·, z) is convex, W ⊆ W_1 = {w ∈ R^d : ‖w‖_1 ≤ B_1} is a convex constraint set, and subgradients ∂ℓ(w, z) are assumed to belong to X_q = {x ∈ R^d : ‖x‖_q ≤ R_q}. This setting captures linear models with ℓ1-bounded weights and ℓq-bounded features as a special case, but is considerably more general, since the loss can be any Lipschitz function of w. We have already shown that one cannot expect sublinear-communication algorithms for ℓ1/ℓ∞-bounded models, and so the ℓq-boundedness of subgradients in (8) may be thought of as strengthening our assumption on the data generating process.
That this is stronger follows from the elementary fact that ‖x‖_q ≥ ‖x‖_∞ for all q.

Statistical complexity and nontriviality. For the dual ℓ1/ℓ∞ setup in (2) the optimal rate is Θ(√(log d / N)). While our goal is to find minimal assumptions that allow for distributed learning with sublinear communication, the reader may wonder at this point whether we have made the problem easier statistically by moving to the ℓ1/ℓq assumption. The answer is "yes, but only slightly." When q is constant, the optimal rate for ℓ1/ℓq-bounded models is Θ(√(1/N)), and so the effect of this assumption is to shave off the log d factor that was present in (4).

3.1 Lipschitz Losses

Our main algorithm is called sparsified mirror descent (Algorithm 2). The idea behind the algorithm is to run the online mirror descent algorithm (Ben-Tal and Nemirovski, 2001; Hazan, 2016) in serial across the machines and sparsify the iterates whenever we move from one machine to the next.

In a bit more detail, Algorithm 2 proceeds from machine to machine sequentially. On each machine, the algorithm generates a sequence of iterates w^i_1, . . . , w^i_n by doing a single pass over the machine's n examples z^i_1, . . . , z^i_n using the mirror descent update with regularizer R(w) = ½‖w‖_p², where 1/p + 1/q = 1, and using stochastic subgradients ∇^i_t ∈ ∂ℓ(w^i_t, z^i_t). After the last example is processed on machine i, we compress the last iterate using Maurey sparsification (Algorithm 1) and send it to the next machine, where the process is repeated.

To formally describe the algorithm, we recall the definition of the Bregman divergence. Given a convex regularization function R : R^d → R, the Bregman divergence with respect to R is defined as

D_R(w ‖ w′) = R(w) − R(w′) − ⟨∇R(w′), w − w′⟩.

For the ℓ1/ℓq setting we exclusively use the regularizer R(w) = ½‖w‖_p², where 1/p + 1/q = 1. The main guarantee for Algorithm 2 is as follows.

Theorem 3. Suppose that subgradients belong to X_q for q ≥ 2 and that W ⊆ W_1. If we run Algorithm 2 with learning rate η = (B_1/R_q)√(1/(C_qN)), sparsity level s = Ω(m^{2(q−1)}), and final sparsity level s_0 = Ω((NC_q)^{q/2}), then

E[L_D(ŵ)] − min_{w∈W} L_D(w) ≤ O(√(C_q B_1² R_q² / N)),  where C_q = q − 1.

The total number of bits sent by each machine, besides communicating the final iterate ŵ, is at most O(m^{2(q−1)} log(d/m) + log(B_1R_qN)), and so the total number of bits communicated globally is at most Õ(m^{2q−1} log d + (NC_q)^{q/2} log d). In the linear model setting (2) with a 1-Lipschitz loss φ, it suffices to set s_0 = Ω(N), so that the total bits of communication is Õ(m^{2q−1} log d + N log d).

We see that the communication required by sparsified mirror descent is exponential in the norm parameter q. This means that whenever q is constant, the overall communication is polylogarithmic in dimension. It is helpful to interpret the bound when q is allowed to grow with dimension. An elementary property of ℓq norms is that for q = log d, ‖x‖_q ≈ ‖x‖_∞ up to a multiplicative constant. In this case the communication from Theorem 3 becomes polynomial in dimension, which we know from Section 2.4 is necessary. The guarantee of Algorithm 2 extends beyond the statistical learning model to the first-order stochastic convex optimization model, as well as the online convex optimization model.

Proof sketch. The basic premise behind the algorithm and analysis is that by using the same learning rate across all machines, we can pretend as though we are running a single instance of mirror descent on a centralized machine. The key difference from the usual analysis is that we need to bound the error incurred by sparsification between successive machines. Here, the choice of the regularizer is crucial. A fundamental property used in the analysis of mirror descent is strong convexity of the regularizer.
In particular, to give convergence rates that do not depend on dimension (such as (3)) it is essential that the regularizer be Ω(1)-strongly convex. Our regularizer R indeed has this property. On the other hand, to argue that sparsification has negligible impact on convergence, our analysis leverages smoothness of the regularizer. Strong convexity and smoothness are at odds with each other: it is well known that in infinite dimension, any norm that is both strongly convex and smooth is isomorphic to a Hilbert space (Pisier, 2011). What makes our analysis work is that while the regularizer R is not smooth, it is Hölder-smooth for any finite q. This is sufficient to bound the approximation error from sparsification. To argue that the excess risk achieved by mirror descent with the ℓp regularizer R is optimal, however, it is essential that the gradients are ℓq-bounded rather than ℓ∞-bounded.

In more detail, the proof can be broken into three components:

• Telescoping. Mirror descent gives a regret bound that telescopes across all m machines up to the error introduced by sparsification. To argue that we match the optimal centralized regret, all that is required is to bound m error terms of the form D_R(w⋆ ‖ Q_s(w̄^i)) − D_R(w⋆ ‖ w̄^i), where w̄^i is machine i's final iterate.
• Hölder-smoothness. We prove (Theorem 7) that the difference above vanishes polynomially in the sparsification error ‖Q_s(w̄^i) − w̄^i‖_p, with an exponent that degrades as p → 1.
• Maurey for ℓp norms. We prove (Theorem 6) that E‖Q_s(w̄^i) − w̄^i‖_p ≲ B_1/s^{1−1/p}, and likewise for the sparsified final iterate.

With a bit more work these inequalities yield Theorem 3. We close this section with a few more notes about Algorithm 2 and its performance.

Remark 1. We can modify Algorithm 2 so that it enjoys a high-probability excess risk bound by changing the final step slightly. Instead of subsampling (i, t) randomly and returning Q_s(w^i_t), have each machine i average all its iterates w^i_1, . . . , w^i_n, then sparsify the average and send it to the final machine, which averages the averaged iterates from all machines and returns ŵ as the result. There appears to be a tradeoff here: the communication of the high-probability algorithm is Õ(m^{2q−1} + mN^{q/2}), while Algorithm 2 has communication Õ(m^{2q−1} + N^{q/2}). We leave a comprehensive exploration of this tradeoff for future work.

Remark 2. For the special case of ℓ1/ℓq-bounded linear models, it is not hard to show that the following strategy also leads to sublinear communication: truncate each feature vector to the top Θ(N^{q/2}) coordinates, then send all the truncated examples to a central server, which returns the empirical risk minimizer. This strategy matches the risk of Theorem 3 with total communication Õ(N^{q/2+1}), but has two deficiencies. First, it scales as N^{O(q)}, which is always worse than m^{O(q)}. Second, it does not appear to extend to the general optimization setting.

Smooth Losses

We can improve the statistical guarantee and total communication further in the case where L_D is smooth with respect to ℓq rather than just Lipschitz. We assume that ℓ has β_q-Lipschitz gradients, in the sense that for all w, w′ ∈ W_1 and all z,

‖∇ℓ(w, z) − ∇ℓ(w′, z)‖_q ≤ β_q ‖w − w′‖_p.

Theorem 4. Suppose in addition to the assumptions of Theorem 3 that ℓ(·, z) is non-negative and has β_q-Lipschitz gradients with respect to ℓq. Let L⋆ = inf_{w∈W} L_D(w).
If we run Algorithm 2 with learning rate η = √(B_1²/(C_qβ_qL⋆N)) ∧ 1/(4C_qβ_q) and the sparsity levels prescribed in the proof, then

E[L_D(ŵ)] − min_{w∈W} L_D(w) ≤ O(√(C_qβ_qB_1²L⋆/N) + C_qβ_qB_1²/N).

The total number of bits sent by each machine, besides communicating the final iterate ŵ, is at most O(m^{2(q−1)} log(d/m)), and so the total number of bits communicated globally is at most O(m^{2q−1} log(d/m)) plus the O(s_0 log(ed/s_0)) cost of the final sparsified iterate.

Compared to the previous theorem, this result provides a so-called "small-loss bound" (Srebro et al., 2010), with the main term scaling with the optimal loss L⋆. The dependence on N in the communication cost can be as low as O(√N), depending on the value of L⋆.

Fast Rates under Restricted Strong Convexity

So far all of the algorithmic results we have presented scale as O(N^{−1/2}). While this is optimal for generic Lipschitz losses, we mentioned in Section 2 that for strongly convex losses the rate can be improved, in a nearly dimension-independent fashion, to O(N^{−1}) for sparse high-dimensional linear models. As in the generic Lipschitz loss setting, we show that making the assumption of ℓ1/ℓq-boundedness is sufficient to get statistically optimal distributed algorithms with sublinear communication, thus providing a way around the lower bounds for fast rates in Section 2.4.

The key assumption for the results in this section is that the population risk satisfies a form of restricted strong convexity over W.

Assumption 1. For all w ∈ W, L_D(w) ≥ L_D(w⋆) + ⟨∇L_D(w⋆), w − w⋆⟩ + γ_q‖w − w⋆‖_p².

In a moment we will show how to relate this property to the standard restricted eigenvalue property in high-dimensional statistics (Negahban et al., 2012) and apply it to sparse regression. Our main algorithm for strongly convex losses is Algorithm 3. The algorithm does not introduce any new tricks for distributed learning over Algorithm 2; rather, it invokes Algorithm 2 repeatedly in an inner loop, relying on these invocations to take care of communication.

Algorithm 3 (Sparsified Mirror Descent for Fast Rates). Input: examples in the order z^1_1, . . . , z^1_n, . . . , z^m_1, . . . , z^m_n. For round k = 1, . . . , T: let ŵ_k be the result of running Algorithm 2 on N_k consecutive examples in the ordering above, with the following configuration:
1. The algorithm begins on the example immediately after the last one processed at round k − 1.
2. The initial point is ŵ_{k−1}, with learning rate and sparsity levels set as in Proposition 8.

This reduction is based on techniques developed in Juditsky and Nesterov (2014), whereby restricted strong convexity is used to establish that error decreases geometrically as a function of the number of invocations of the sub-algorithm. We refer the reader to Appendix C for additional details. The main guarantee for Algorithm 3 is as follows.

Theorem 5. Suppose Assumption 1 holds, that subgradients belong to X_q for q ≥ 2, and that W ⊂ W_1. When the parameter c > 0 is a sufficiently large absolute constant, Algorithm 3 guarantees that

E[L_D(ŵ_T)] − L_D(w⋆) ≤ O(C_qR_q²/(γ_qN)).

The total number of bits communicated depends polynomially on the scale parameters; treating scale parameters as constant, the total communication simplifies to O(N^{2q−2} m^{2q−1} log d). Note that the communication in this theorem depends polynomially on the various scale parameters, which was not the case for Theorem 3.

Extension: Matrix Learning and Beyond

The basic idea behind sparsified mirror descent, namely that by assuming ℓq-boundedness one can get away with using a Hölder-smooth regularizer that behaves well under sparsification, is not limited to the ℓ1/ℓq setting. To extend the algorithm to more general geometry, all that is required is the following:

• The constraint set W can be written as the convex hull of a set of atoms A that has sublinear bit complexity.
• The data should be bounded in some norm ‖·‖ such that the dual norm ‖·‖⋆ admits a regularizer R that is strongly convex and Hölder-smooth with respect to ‖·‖⋆.
• ‖·‖⋆ is preserved under sparsification. We remark in passing that this property and the previous one are closely related to the notions of type and cotype in Banach spaces (Pisier, 2011).

Here we deliver on this potential and sketch how to extend the results so far to matrix learning problems where W ⊆ R^{d×d} is a convex set of matrices. As in Section 3.1 we work with a generic Lipschitz loss. Letting ‖W‖_{S_p} = (∑_i σ_i(W)^p)^{1/p} denote the Schatten p-norm, we make the following spectral analogue of the ℓ1/ℓq-boundedness assumption: W ⊆ W_{S_1} := {W ∈ R^{d×d} : ‖W‖_{S_1} ≤ B_1}, and subgradients ∂ℓ(·, z) belong to X_{S_q} := {X ∈ R^{d×d} : ‖X‖_{S_q} ≤ R_q}, where q ≥ 2. Recall that S_1 and S_∞ are the nuclear norm and spectral norm. The S_1/S_∞ setup has many applications in learning (Hazan et al., 2012). We make the following key changes to Algorithm 2:

• Use the Schatten regularizer R(W) = ½‖W‖_{S_p}².
• Use the following spectral version of the Maurey operator Q_s(W): let W have singular value decomposition W = ∑_{i=1}^d σ_i u_i v_i^⊺, sample s indices i_1, . . . , i_s i.i.d. with probabilities proportional to σ_i,⁷ and set Q_s(W) = (‖W‖_{S_1}/s) ∑_{τ=1}^s u_{i_τ} v_{i_τ}^⊺.
• Encode and transmit Q_s(W) as the sequence (u_{i_1}, v_{i_1}), . . . , (u_{i_s}, v_{i_s}), plus the scalar ‖W‖_{S_1}. This takes Õ(sd) bits.

Proposition 5. Let q ≥ 2 be fixed, and suppose that subgradients belong to X_{S_q} and that W ⊆ W_{S_1}. If we run the variant of Algorithm 2 described above with learning rate η = (B_1/R_q)√(1/(C_qN)) and initial point W̄ = 0, then whenever s = Ω(m^{2(q−1)}) and s_0 = Ω(N^{q/2}), the algorithm guarantees

E[L_D(Ŵ)] − min_{W∈W} L_D(W) ≤ O(√(C_qB_1²R_q²/N)),

where C_q = q − 1. The total number of bits communicated globally is at most Õ(m^{2q−1}d + N^{q/2}d).

⁷ We may assume σ_i ≥ 0 without loss of generality.

In the matrix setting, the number of bits required to naively send weights W ∈ R^{d×d} or subgradients ∂ℓ(W, z) ∈ R^{d×d} is O(d²). The communication required by our algorithm scales with dimension only as Õ(d), so it is indeed sublinear. The proof of Proposition 5 is sketched in Appendix C. The key idea is that because the Maurey operator Q_s(W) is defined in the same basis as W, we can directly apply approximation bounds from the vector setting.

Discussion

We hope our work will lead to further development of algorithms with sublinear communication. A few immediate questions:

• Can we get matching upper and lower bounds for communication in terms of m, N, log d, and q?
• Currently all of our algorithms work serially. Can we extend the techniques to give parallel speedup?
• Returning to the general setting (1), what communication is required for learning with general hypothesis classes?

A.1 Sparsification

In this section we provide approximation guarantees for the Maurey sparsification operator Q_s defined in Algorithm 1.

Theorem 6. Let p ∈ [1, 2] be fixed. Then for any w ∈ R^d, with probability at least 1 − δ,

‖Q_s(w) − w‖_p ≤ O(‖w‖_1 (1/s^{1−1/p} + √(log(1/δ)/s))).

Moreover, the following in-expectation guarantee holds: E‖Q_s(w) − w‖_p ≤ O(‖w‖_1/s^{1−1/p}).

Proof of Theorem 6. Let B = ‖w‖_1, and let Z_τ = ‖w‖_1 sgn(w_{i_τ})e_{i_τ} − w, and observe that E[Z_τ] = 0 and Q_s(w) − w = (1/s)∑_{τ=1}^s Z_τ. Since ‖w‖_p ≤ B, we have ‖Z_τ‖_p ≤ 2B, and so Lemma 2 implies that both claims hold.

Lemma 2. Let p ∈ [1, 2]. Let Z_1, . . . , Z_s be a sequence of independent R^d-valued random variables with ‖Z_t‖_p ≤ B almost surely and E[Z_t] = 0. Then with probability at least 1 − δ,

‖(1/s)∑_{t=1}^s Z_t‖_p ≤ O(B(1/s^{1−1/p} + √(log(1/δ)/s))).

Furthermore, a sharper guarantee holds in expectation: E‖(1/s)∑_{t=1}^s Z_t‖_p ≤ O(B/s^{1−1/p}).

Proof of Lemma 2. To obtain the high-probability statement, the first step is to apply the standard McDiarmid-type high-probability uniform convergence bound for Rademacher complexity (e.g.
Shalev-Shwartz and Ben-David (2014)), which states that with probability at least 1 − δ, the deviation of ‖(1/s)∑_{t=1}^s Z_t‖_p from its expectation is at most O(B√(log(1/δ)/s)), so that it suffices to bound the Rademacher average E‖(1/s)∑_{t=1}^s ε_tZ_t‖_p, where ε ∈ {±1}^s are Rademacher random variables and we condition on Z_1, . . . , Z_s. On the other hand, for the in-expectation result, Jensen's inequality and the standard in-expectation symmetrization argument for Rademacher complexity directly bound E‖(1/s)∑_{t=1}^s Z_t‖_p by twice the same Rademacher average. From here the proof proceeds in the same fashion for both cases. Let Z_t[i] denote the ith coordinate of Z_t; by Jensen's inequality (using that p ≤ 2), the Rademacher average is bounded coordinate-wise. We now use that cross terms in the square vanish, as well as the standard inequality ‖x‖_2 ≤ ‖x‖_p for p ≤ 2, which yields a bound of O(Bs^{1/p}) before normalizing by s.

Proof of Lemma 1. We first prove the result for the smooth case. Let x and y be fixed. Let B = ‖w‖_1, and let us abbreviate R := R_∞. Let Z_τ = ⟨‖w‖_1 sgn(w_{i_τ})e_{i_τ} − w, x⟩, and observe that E[Z_τ] = 0 and ⟨Q_s(w) − w, x⟩ = (1/s)∑_{τ=1}^s Z_τ. Since we have ‖w‖_1 ≤ B and ‖x‖_∞ ≤ R almost surely, one has |Z_τ| ≤ 2BR almost surely. Using smoothness, we can expand the loss at ⟨Q_s(w), x⟩ around ⟨w, x⟩, picking up a first-order term that is linear in (1/s)∑_{τ=1}^s Z_τ and a second-order term of size at most (β/2)((1/s)∑_{τ=1}^s Z_τ)². Since E[Z_s | Z_1, . . . , Z_{s−1}] = 0, and since Z_s is bounded, taking expectations eliminates the first-order term; proceeding backwards in the same fashion over τ = s, . . . , 1, we arrive at the inequality E⟨Q_s(w) − w, x⟩² ≤ 4B²R²/s. The final result follows by taking expectation over x and y. For Lipschitz losses, we use Lipschitzness and Jensen's inequality to bound the expected loss difference by L√(E_x⟨Q_s(w) − w, x⟩²). The result now follows by appealing to the result for the smooth case to bound E_x⟨Q_s(w) − w, x⟩², since we can interpret this as the expectation of a new linear model loss E_{x,y} φ̃(⟨w′, x⟩, y) := E_x(⟨w′, x⟩ − ⟨w, x⟩)², where y = ⟨w, x⟩. This loss is 2-smooth with respect to the first argument, which leads to the final bound.

Lemma 3. Let w ∈ R^d be fixed and let F : R^d → R have a β_q-Lipschitz gradient with respect to ℓq, where q ≥ 2. Then Algorithm 1 guarantees that E[F(Q_s(w))] ≤ F(w) + O(β_q‖w‖_1²/s).

Proof of Lemma 3. The assumed gradient Lipschitzness implies that for any w, w′,

F(w′) ≤ F(w) + ⟨∇F(w), w′ − w⟩ + (β_q/2)‖w′ − w‖_p²,

where 1/p + 1/q = 1. As in the other Maurey lemmas, we write Z_τ = ‖w‖_1 sgn(w_{i_τ})e_{i_τ} − w, so that E[Z_τ] = 0 and Q_s(w) − w = (1/s)∑_{τ=1}^s Z_τ. Expanding one Z_τ at a time, using smoothness and that each Z_τ is conditionally mean-zero, and proceeding backwards in the same fashion as in Lemma 1, we get the stated bound.

A.2 Approximation for ℓp Norms

In this section we work with the regularizer R(θ) = ½‖θ‖_p², where p ∈ [1, 2], and we let q be such that 1/p + 1/q = 1. The main structural result we establish is a form of Hölder smoothness of R, which implies that ℓ1-bounded vectors can be sparsified while preserving Bregman divergences for R, with the quality degrading as p → 1. The remainder of this section is dedicated to proving Theorem 7.

We use the following generic fact about norms; all other results in this section are specific to the ℓp norm regularizer. For any norm ‖·‖ and any x, y with ‖x‖ ∨ ‖y‖ ≤ B, we have

| ‖x‖² − ‖y‖² | ≤ 2B‖x − y‖.    (12)

To begin, we need some basic approximation properties. We have the following expression for the gradient of R:

∇R(θ) = ‖θ‖_p^{2−p} (|θ_i|^{p−1} sgn(θ_i))_{i∈[d]}.    (13)

Proposition 6. For any vector θ, ‖∇R(θ)‖_q = ‖θ‖_p.

Proof of Proposition 6. Expanding the expression in (13), we have ‖∇R(θ)‖_q = ‖θ‖_p^{2−p} (∑_i |θ_i|^{(p−1)q})^{1/q}. Using that q = p/(p − 1), this simplifies to ‖θ‖_p^{2−p}·‖θ‖_p^{p−1} = ‖θ‖_p.

Proof of Lemma 4. We write the divergence difference in terms of R and its gradient. Using (12) and the expression for R, it follows that the difference is controlled by a norm term plus an inner product term; this is further upper bounded by Hölder's inequality. The result follows by using that ‖∇R(b)‖_q = ‖b‖_p ≤ B, by Proposition 6.

Lemma 5. Let p ∈ [1, 2] and let h(x) = |x|^{p−1} sgn(x). Then h is Hölder-continuous: |h(x) − h(y)| ≤ 2|x − y|^{p−1}.

Proof of Lemma 5. Fix any x, y ∈ R and assume |x| ≥ |y| without loss of generality. We have two cases. First, when sgn(x) = sgn(y) we have |h(x) − h(y)| = |x|^{p−1} − |y|^{p−1} ≤ |x − y|^{p−1}, where we have used that p − 1 ∈ [0, 1] and subadditivity of x ↦ x^{p−1} over R_+, as well as the triangle inequality.
On the other hand, if sgn(x) ≠ sgn(y), we have |h(x) − h(y)| = |x|^{p−1} + |y|^{p−1}. Now, using that sgn(x) ≠ sgn(y), we have |x − y| = |x| + |y| ≥ |x| ∨ |y|. Putting everything together, this establishes that |h(x) − h(y)| ≤ 2(|x| ∨ |y|)^{p−1} ≤ 2|x − y|^{p−1}.

Lemma 6. Suppose that ‖a‖_p ∨ ‖b‖_p ≤ B. Then ‖∇R(a) − ∇R(b)‖_q is bounded by a Hölder-type modulus in ‖a − b‖_p at scale B. The proof of Theorem 7 then follows by combining these facts: using Lemma 4, the quantity of interest is bounded by a product of norms; applying Hölder's inequality, this is controlled in terms of ‖∇R(a) − ∇R(b)‖_q; to conclude, we plug in the bound from Lemma 6.

B Proofs from Section 2

B.1 Proofs from Section 2.2

Proof of Theorem 1. Let A ∈ R^{k×d} be the derandomized JL matrix constructed according to Kane and Nelson (2010), Theorem 2. Let x′_t = Ax_t denote the projected feature vector and w⋆ = arg min_{w:‖w‖_2≤B_2} L_D(w). We first bound the regret of gradient descent in the projected space in terms of certain quantities that depend on A, then show how the JL matrix construction guarantees that these quantities are appropriately bounded. Since φ is L-Lipschitz, we have a preliminary error estimate relating the risk of ŵ to the projected risk. Now recall that the m machines are simply running online gradient descent in serial over the k-dimensional projected space, with the update u_{t+1} ← u_t − η∇_u φ(⟨u_t, x′_t⟩, y_t), where η is the learning rate parameter. The standard online gradient descent regret guarantee (Hazan, 2016) implies that for any vector u ∈ R^k,

∑_{t=1}^N φ(⟨u_t, x′_t⟩, y_t) − ∑_{t=1}^N φ(⟨u, x′_t⟩, y_t) ≤ ‖u‖_2²/(2η) + (η/2)∑_{t=1}^N ‖∇_u φ(⟨u_t, x′_t⟩, y_t)‖_2².

Since the pairs (x_t, y_t) are drawn i.i.d., the standard online-to-batch conversion lemma for online convex optimization (Cesa-Bianchi and Lugosi, 2006) converts this regret bound into an excess risk guarantee in the projected space, for any comparator u. Applying Jensen's inequality to the left-hand side and choosing u = u⋆ := Aw⋆, we conclude that the projected risk of û is within the corresponding regret-based error of that of u⋆. We now relate this bound to the risk relative to the benchmark L_D(w⋆), using (16), and take expectation with respect to the draw of A, arriving at (17). It remains to bound the right-hand side of this expression. To begin, we condition on the vector x with respect to which the outer expectation in (17) is taken. The derandomized JL transform guarantees (Kane and Nelson (2010), Theorem 2) that for any δ > 0 and any fixed vectors x, w⋆, if we pick k = O(log(1/δ)/ε²), then with probability at least 1 − δ, the quantities ‖Ax‖_2, ‖Aw⋆‖_2, and ⟨Ax, Aw⋆⟩ are preserved up to (1 ± ε) relative error. We conclude that by picking ε = O(1/√N), these quantities are preserved with probability 1 − δ. To convert this into an in-expectation guarantee, note that since entries of A belong to {−1, 0, +1}, the quantities ‖Ax‖_2, ‖Aw⋆‖_2, and ⟨Ax, Aw⋆⟩ all have magnitude O(poly(d)) with probability 1 (up to scale factors B_2 and R_2). Hence, the failure event contributes at most δ·poly(d) to the expectation. Picking δ = 1/(poly(d)N) and using the step size η = B_2²/(L²R_2²N), we get the desired bound. Since this in-expectation guarantee holds for any fixed x, it also holds in expectation over x. Using this inequality to bound the right-hand side in (17) completes the proof.

B.2 Proofs from Section 2.4

Our lower bounds are based on reduction to the so-called "hide-and-seek" problem introduced by Shamir (2014).

Definition 1 (Hide-and-seek problem). Let {P_j}_{j=1}^d be a set of product distributions over {±1}^d in which coordinates i ≠ j are uniform on {±1} and coordinate j has mean 2ρ; the problem is to identify j from samples.

Theorem 8 (Shamir (2014)). Let W ∈ [d] be the output of a (b, 1, N) protocol for the hide-and-seek problem. Then there exists some j⋆ ∈ [d] such that P_{j⋆}(W = j⋆) is at most a small constant whenever ρ ≲ √(d/(bN)).

Proof of Theorem 2. Recall that W_1 = {w ∈ R^d : ‖w‖_1 ≤ 1}. We create a family of d statistical learning instances as follows. Let the hide-and-seek parameter ρ ∈ [0, 1/2] be fixed. Let D_j have features drawn from the jth hide-and-seek distribution P_j and have y = 1, and set φ(⟨w, x⟩, y) = −⟨w, x⟩y, so that L_{D_j}(w) = −2ρw_j. Then we have min_{w∈W_1} L_{D_j}(w) = −2ρ.
Consequently, for any predictor weight vector w we have

L_{D_j}(w) − min_{w′∈W_1} L_{D_j}(w′) = 2ρ(1 − w_j).

If L_{D_j}(ŵ) − L_{D_j}(w⋆) < ρ, this implies (by rearranging) that ŵ_j > 1/2. Since ŵ ∈ W_1 and thus ∑_{i=1}^d |ŵ_i| ≤ 1, this implies j = arg max_i ŵ_i. Thus, if we define W = arg max_i ŵ_i as our decision for the hide-and-seek problem, we have P_j(W = j) ≥ P_j(L_{D_j}(ŵ) − L_{D_j}(w⋆) < ρ). Appealing to Theorem 8, this means that for every algorithm ŵ there exists an index j for which the excess risk is at least ρ with constant probability. To conclude the result we choose ρ = (1/16)√(d/(bN)) ∧ 1/2.

Proof of Proposition 1. This result is an immediate consequence of the reduction to the hide-and-seek problem established in Theorem 2. All that changes is which lower bound for the hide-and-seek problem we invoke. We set ρ ∝ √(d/(bN)) in the construction in Theorem 2, then appeal to Theorem 3 in Shamir (2014).

Proof of Proposition 2. We create a family of d statistical learning instances as follows. Let the hide-and-seek parameter ρ ∈ [0, 1/2] be fixed. Let P_j be the jth hide-and-seek distribution. We create distribution D_j via: 1) draw x ∼ P_j; 2) set y = 1. Observe that E[x_ix_k] = 0 for all i ≠ k and E[x_i²] = 1, so Σ = I. Consequently, we have

L_{D_j}(w) = E(⟨w, x⟩ − y)² = ‖w‖_2² − 4ρw_j + 1.

Let w⋆ = arg min_{‖w‖_1≤1} L_{D_j}(w). It is clear from the expression above that w⋆_i = 0 for all i ≠ j. For coordinate j we have w⋆_j = arg min_{−1≤α≤1} {α² − 4ρα}. Whenever ρ ≤ 1/2 the solution is 2ρ, so we can write w⋆ = 2ρe_j, which is clearly 1-sparse. We can now write the excess risk for a predictor w as

L_{D_j}(w) − L_{D_j}(w⋆) = (w_j − 2ρ)² + ∑_{i≠j} w_i².

Now suppose that the excess risk for w is at most ρ². Dropping the sum term in the excess risk, this implies (w_j − 2ρ)² < ρ², and it follows that w_j ∈ (ρ, 3ρ). On the other hand, we also have ∑_{i≠j} w_i² < ρ², and so any i ≠ j must have |w_i| < ρ. Together, these facts imply that if the excess risk for w is less than ρ², then j = arg max_i w_i. Thus, for any algorithm output ŵ, if we define W = arg max_i ŵ_i as our decision for the hide-and-seek problem, we have P_j(W = j) ≥ P_j(excess risk of ŵ is less than ρ²). The result follows by appealing to Theorem 2 and Theorem 3 in Shamir (2014).

B.3 Discussion: Support Recovery

Our lower bound for the sparse regression setting (5) does not rule out the possibility of sublinear-communication distributed algorithms for well-specified models. Here we sketch a strategy that works for this setting if we significantly strengthen the statistical assumptions. Suppose that we work with the square loss and that labels are realized as y = ⟨w⋆, x⟩ + ε, where ε is conditionally mean-zero and w⋆ is k-sparse. Suppose in addition that the population covariance Σ has the restricted eigenvalue property, and that w⋆ satisfies the so-called "β-min" assumption: all non-zero coordinates of w⋆ have magnitude bounded below. In this case, if N/m = Ω(k log d) and the smallest non-zero coefficients of w⋆ are at least Ω̃(√(m/N)), the following strategy works: each machine runs the Lasso on the first half of its examples to exactly recover the support of w⋆ (e.g., Loh et al. (2017)), then estimates w⋆ on the recovered support using the second half and sends the resulting k-sparse estimator to a central server, which aggregates the local estimators. This strategy has O(mk) communication by definition, but the assumptions on sparsity and β-min depend on the number of machines. How far can these assumptions be weakened?

C Proofs from Section 3

Throughout this section of the appendix we adopt the shorthand B := B_1 and R := R_q. Recall that 1/p + 1/q = 1. To simplify expressions throughout the proofs in this section, we use the conventions ŵ_0 := w̄ and w̄^i := w^i_{n+1}. We begin the section by stating a few preliminary results used to analyze the performance of Algorithm 2 and Algorithm 3. We then proceed to prove the main theorems.
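Before the remaining proofs, the following is a minimal Python sketch assembling Algorithm 2 (sparsified mirror descent) as described in Section 3.1. It is a sketch under simplifying assumptions: the Bregman projection onto the constraint set is omitted, the subgradient oracle is user-supplied, maurey_sparsify refers to the sketch given earlier, and all names are ours.

    import numpy as np

    def grad_map(w, r):
        # Gradient of 0.5 * ||w||_r^2; for r = p this is the mirror map of
        # R(w) = 0.5 * ||w||_p^2, and applying it with r = q inverts it.
        norm = np.linalg.norm(w, ord=r)
        if norm == 0.0:
            return np.zeros_like(w)
        return norm ** (2.0 - r) * np.sign(w) * np.abs(w) ** (r - 1.0)

    def sparsified_mirror_descent(machines, grad_oracle, d, q, eta, s, s0, rng=None):
        # Serial pass over machines. Each machine runs mirror descent steps with
        # R(w) = 0.5 * ||w||_p^2 (1/p + 1/q = 1); between machines the last iterate
        # is replaced by its Maurey sparsification, so only ~s log d bits travel.
        rng = rng or np.random.default_rng()
        p = q / (q - 1.0)
        w = np.zeros(d)
        iterates = []
        for Z in machines:                        # Z: one machine's local examples
            for z in Z:
                g = grad_oracle(w, z)             # subgradient, assumed ||g||_q <= R_q
                theta = grad_map(w, p) - eta * g  # gradient step in the dual space
                w = grad_map(theta, q)            # back to primal via the inverse map
                iterates.append(w)                # (projection onto W_1 omitted)
            w = maurey_sparsify(w, s, rng)        # compress before the next machine
        i = rng.integers(len(iterates))           # uniformly chosen iterate,
        return maurey_sparsify(iterates[i], s0, rng)  # sparsified at level s0

    # Example subgradient oracle for the absolute loss phi(<w, x>, y) = |<w, x> - y|:
    def abs_loss_grad(w, z):
        x, y = z
        return np.sign(w @ x - y) * x

The restart scheme of Algorithm 3 can be layered on top by calling this routine repeatedly with warm starts and geometrically shrinking parameters, as in Proposition 8.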
For the results on fast rates we need the following intermediate fact, which states that centering the regularizer R at w̄ does not change the strong convexity from Proposition 3 or the smoothness properties established in Appendix A.2.

Proof of Proposition 7. Let R_0(w) = ½‖w‖_p². The result follows from Proposition 3 and Theorem 7 by simply observing that ∇R(w) = ∇R_0(w − w̄), so that D_R(w ‖ w′) = D_{R_0}(w − w̄ ‖ w′ − w̄). To invoke Theorem 7 we use that ‖a − w̄‖_1 ≤ 2B, and likewise for b and c.

Lemma 7. Algorithm 2 guarantees that for any adaptively selected sequence ∇^i_t and all w⋆ ∈ W, any individual machine i ∈ [m] deterministically satisfies a mirror descent regret guarantee over its n examples; furthermore, since L_D is convex, this implies a corresponding bound on excess risk.

Proof of Theorem 3. While the regret guarantee implies that this holds for each machine i conditioned on the history up until the machine begins working, it suffices for our purposes to interpret the expectation as being with respect to all randomness in the algorithm's execution except for the randomness in sparsification for the final iterate ŵ. We now sum this guarantee across all machines. Rewriting in terms of w̄^i and its sparsified version ŵ^i, and using that w^1_1 = w̄, the resulting bound telescopes up to m Bregman divergence error terms. We now bound the approximation error in these terms. Using Proposition 7, we get two summands for each machine; since p ≤ 2, the second summand dominates, leading to a final bound of O(B²m(1/s)^{(p−1)/2}). To summarize, our developments so far (after normalizing by N) bound the average excess loss of the iterates. Let w̃ denote w^i_t for the index (i, t) selected uniformly at random in the final line of Algorithm 2. Interpreting the left-hand side of this bound as a conditional expectation over w̃, and noting that our boundedness assumptions imply ‖∇^i_t‖_q² ≤ R² and D_R(w⋆ ‖ w̄) = D_R(w⋆ ‖ 0) ≤ B²/2, we see that when s = Ω(m^{2/(p−1)}) this is bounded by

E[L_D(w̃)] − min_{w∈W} L_D(w) ≤ O(√(C_qB²R²/N)),    (18)

where the second inequality uses the choice of learning rate. From here we split into two cases. In the general loss case, since L_D is R-Lipschitz with respect to ℓp (implied by the assumption that subgradients lie in X_q, via duality), we get E[L_D(ŵ)] ≤ E[L_D(w̃)] + R·E‖Q_{s_0}(w̃) − w̃‖_p. We now invoke Theorem 6 once more, which implies that E‖Q_{s_0}(w̃) − w̃‖_p ≤ O(B/s_0^{1−1/p}). We see that it suffices to take s_0 = Ω((NC_q)^{p/(2(p−1))}) to ensure that this error term is of the same order as the original excess risk bound. In the linear model case, Lemma 1 directly implies that the sparsification error is O(BR/√s_0), and so s_0 = Ω(NC_q) suffices.

Proof of Theorem 4. We begin from (18) in the proof of Theorem 3 which, once s = Ω(m^{2/(p−1)}), bounds the conditional excess risk of w̃, where w̃ is the iterate w^i_t selected uniformly at random at the final step and the expectation is over all randomness except the final sparsification step. Since the loss ℓ(·, z) is smooth, convex, and non-negative, we can appeal to Lemma 3.1 from Srebro et al. (2010), which bounds the squared subgradient norms in terms of the loss values themselves. Using this bound, the gradient-norm term in the regret bound is controlled by the algorithm's own cumulative loss. Let ε := 2ηC_qβ_q. Rearranging the resulting self-bounding inequality, we obtain (1 − ε)·E[L_D(w̃)] ≤ L⋆ + D_R(w⋆ ‖ w̄)/(ηN), and so, by rearranging once more, the excess risk is controlled. The choice η = √(B²/(C_qβ_qL⋆N)) ∧ 1/(4C_qβ_q) ensures that ε ≤ 1/2 and yields the claimed small-loss bound. Now, Lemma 3 implies that, conditioned on w̃, we have E[L_D(ŵ)] ≤ L_D(w̃) + O(β_qB²/s_0). The choice s_0 = Ω(√(β_qB²N/(C_qL⋆)) ∧ N/C_q) guarantees that this approximation term is of the same order as the excess risk bound for w̃.

Proposition 8. Suppose we run Algorithm 2 with an initial point w̄ that is chosen by some randomized procedure independent of the data and of the randomness used by Algorithm 2, and suppose we are promised that this selection procedure satisfies E‖w̄ − w⋆‖_p² ≤ B̄². Suppose that subgradients belong to X_q for q ≥ 2, and that W ⊆ W_1.
Then, using learning rate η := (B̄/R)√(1/(C_qN)), s = Ω(m^{2(q−1)}(B/B̄)^{4(q−1)}), and s_0 = Ω((N/C_q)^{q/2}·(B/B̄)^q), the algorithm guarantees

E[L_D(ŵ)] − L_D(w⋆) ≤ O(B̄R√(C_q/N)).

Proof of Proposition 8. We proceed exactly as in the proof of Theorem 3, which establishes the analogous bound conditioned on w̄. We now take the expectation over w̄. We have that E[D_R(w⋆ ‖ w̄)] = ½E‖w̄ − w⋆‖_p² ≤ B̄²/2. It is straightforward to verify from here that the prescribed sparsity levels and learning rate give the desired bound.

Proof of Theorem 5. We will show inductively that E‖ŵ_k − w⋆‖_p² ≤ 2^{−k}B² =: B_k². Clearly this is true for ŵ_0. Now assume the statement is true for ŵ_k. Then, since E‖ŵ_k − w⋆‖_p² ≤ B_k², Proposition 8 guarantees that

E[L_D(ŵ_{k+1})] − L_D(w⋆) ≤ cB_kR√(C_q/N_k),

where c > 0 is some absolute constant. Since the objective satisfies the restricted strong convexity condition (Assumption 1), and since L_D is convex and W is also convex, we have ⟨∇L_D(w⋆), w − w⋆⟩ ≥ 0, and so

γ_q E‖ŵ_{k+1} − w⋆‖_p² ≤ E[L_D(ŵ_{k+1})] − L_D(w⋆) ≤ cB_kR√(C_q/N_k) ≤ γ_qB_{k+1}²,

so the recurrence indeed holds. In particular, this implies that E‖ŵ_T − w⋆‖_p² ≤ 2^{−T}B². The definition of T implies that T ≥ log₂(γ²B²N/(32C_qR²c²)), and so E‖ŵ_T − w⋆‖_p² ≤ O(C_qR²c²/(γ²N)). This proves the optimization guarantee.

To prove the communication guarantee, let m_k denote the number of consecutive machines used at round k. The total number of bits broadcast, obtained by summing the sparsity levels from Proposition 8 over the T rounds, is at most the sum over k of m_k times the per-machine cost at round k, plus an additive O(m log(BRN)) term to send the scalar norm for each sparsified iterate ŵ^i. Note that we have m_k = N_k/n ∨ 1. The first term in this sum simplifies to O(log d · (C_qR²/(nγ²B²))^{2q−1} · ∑_{k=1}^T 2^{(4q−3)k}), while the second simplifies to O(log d · (R/(γB))^q · ∑_{k=1}^T 2^{qk}). We use that ∑_{t=1}^T β^t ≤ β^{T+1} for β ≥ 2 to upper bound each sum by its final term. Substituting in the value of T and simplifying leads to the final bound.

Proof of Proposition 4. It immediately follows from the definitions in the proposition that Algorithm 3 guarantees the fast rate of Theorem 5 with γ_q as in Assumption 1. We now relate γ_q and γ. From the optimality of w⋆ and strong convexity of the square loss with respect to predictions, it holds that for all w ∈ W,

L_D(w) − L_D(w⋆) ≥ E⟨x, w − w⋆⟩².

Our assumption on γ implies E⟨x, w − w⋆⟩² = ‖Σ^{1/2}(w − w⋆)‖_2² ≥ γ‖w − w⋆‖_2². Using Proposition 9, we have ‖w − w⋆‖_p² ≤ ‖w − w⋆‖_1² ≤ 4k‖w − w⋆‖_2². Thus, it suffices to take γ_q = γ/(4k).

The following proposition is a standard result in high-dimensional statistics. For a given vector w ∈ R^d, let w_S ∈ R^d denote the same vector with all coordinates outside S ⊆ [d] set to zero.

Proposition 9. Let W, w⋆, and S be as in Proposition 4. All w ∈ W satisfy the inequality ‖(w − w⋆)_{S^c}‖_1 ≤ ‖(w − w⋆)_S‖_1.

Proof of Proposition 9. Let ν = w − w⋆. From the definition of W, we have that for all w ∈ W, ‖w⋆‖_1 ≥ ‖w‖_1 = ‖w⋆ + ν‖_1. Applying the triangle inequality and using that the ℓ1 norm decomposes coordinate-wise,

‖w⋆ + ν‖_1 = ‖w⋆ + ν_S‖_1 + ‖ν_{S^c}‖_1 ≥ ‖w⋆‖_1 − ‖ν_S‖_1 + ‖ν_{S^c}‖_1.

Rearranging, we get ‖ν_{S^c}‖_1 ≤ ‖ν_S‖_1.

Proof of Proposition 5. To begin, we recall from Kakade et al. (2012) that the regularizer R(W) = ½‖W‖_{S_p}² is (p − 1)-strongly convex for p ≤ 2. This is enough to show under our assumptions that the centralized version of mirror descent (without sparsification) guarantees excess risk O(√(C_qB_1²R_q²/N)), with C_q = q − 1, which matches the ℓ1/ℓq setting. What remains is to show that the new form of sparsification indeed preserves Bregman divergences as in the ℓ1/ℓq setting. We now show that when W and W⋆ have ‖W‖_{S_1} ∨ ‖W⋆‖_{S_1} ≤ B, the divergence error incurred by sparsification is controlled exactly as in the vector case. To begin, let U ∈ R^{d×d} be the left singular vectors of W and V ∈ R^{d×d} the right singular vectors. We define σ̃ = (‖W‖_{S_1}/s)∑_{τ=1}^s e_{i_τ}, so that we can write W = U diag(σ)V^⊺ and Q_s(W) = U diag(σ̃)V^⊺.
Now note that since the Schatten norms are unitarily invariant, we have ‖W − Q_s(W)‖_{S_p} = ‖U diag(σ − σ̃)V^⊺‖_{S_p} = ‖σ − σ̃‖_p for any p. Note that our assumptions imply ‖σ‖_1 ≤ B, and that σ̃ is simply the vector Maurey operator applied to σ, so it follows immediately from Theorem 6 that E‖σ − σ̃‖_p ≤ O(B/s^{1−1/p}). Returning to the Bregman divergence, we write

D_R(W⋆ ‖ Q_s(W)) − D_R(W⋆ ‖ W) ≤ D_R(W ‖ Q_s(W)) + 2B‖∇R(Q_s(W)) − ∇R(W)‖_{S_∞}.
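To make the matrix extension concrete, here is a minimal Python sketch of the spectral Maurey operator used above: the singular values are sparsified by the vector operator in W's own singular basis, which is exactly why the vector approximation bounds transfer. It reuses maurey_sparsify from the earlier sketch; the function name is ours.

    import numpy as np

    def spectral_maurey(W, s, rng=None):
        # Q_s(W): apply vector Maurey sparsification to the singular values of W,
        # keeping the same singular vectors (sigma >= 0, so signs are trivial).
        U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
        sigma_tilde = maurey_sparsify(sigma, s, rng)
        keep = sigma_tilde != 0                  # at most s surviving directions
        # In a real protocol one would transmit the kept (u_i, v_i) pairs and
        # ||W||_{S1}, roughly O(s d) numbers, rather than the dense d x d matrix.
        return (U[:, keep] * sigma_tilde[keep]) @ Vt[keep, :]

Because the approximation error lives entirely in the singular-value vector, ‖W − Q_s(W)‖_{S_p} = ‖σ − σ̃‖_p, matching the identity used in the proof above.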
Toward a Predictive Understanding of Earth's Microbiomes to Address 21st Century Challenges

ABSTRACT Microorganisms have shaped our planet and its inhabitants for over 3.5 billion years. Humankind has had a profound influence on the biosphere, manifested as global climate and land use changes, and extensive urbanization in response to a growing population. The challenges we face to supply food, energy, and clean water while maintaining and improving the health of our population and ecosystems are significant. Given the extensive influence of microorganisms across our biosphere, we propose that a coordinated, cross-disciplinary effort is required to understand, predict, and harness microbiome function. From the parallelization of gene function testing to precision manipulation of genes, communities, and model ecosystems and development of novel analytical and simulation approaches, we outline strategies to move microbiome research into an era of causality. These efforts will improve prediction of ecosystem response and enable the development of new, responsible, microbiome-based solutions to significant challenges of our time.

Now, well into the 21st century, the Human Genome Project fades from our rearview mirror but its lasting impact extends far into our future (1). Massively parallel DNA sequencing platforms plus significant technological advances derived from this previous international, public, and private initiative continue to drive economic development and numerous paradigm shifts across domains of the biological, physical, and social sciences. Foremost among these paradigm shifts has been the realization that our species, Homo sapiens, is at least as microbial as human in terms of cell numbers (2) and much more so in terms of genetic potential (3). The subsequent initiative to sequence our bodies' "second genome," represented by the NIH-funded Human Microbiome Project and its European equivalent, MetaHIT, has catalyzed numerous discoveries and sparked interest in identifying the contributions of our microbiota to our health, development, behavior, and emotions (summarized in Table 1 of reference 4). As a result of this initiative and our anthropocentric tendencies, the term "microbiome" is now becoming a familiar concept to the general public and serves as a nucleation point for academic and industrial efforts aiming to uncover hidden microbial roles in health and disease and to discover microbiome-based interventions. If our efforts are successful, their societal and economic impacts will likely be substantial and accompanied by both philosophical debate and ethical considerations.

We often overlook the fact that the concept and impact of the "microbiome" extend far beyond the human body. In fact, microorganisms have populated, dominated, and shaped our planet and its inhabitants for over 3.5 billion years. Plants and multicellular animals (Metazoa) first emerged ~800 million and ~700 million years ago, respectively. Modern humans have existed for roughly only 250,000 years and are thus merely a recently emerged twig in the tree of life. It is perhaps not surprising that single-celled microorganisms, the pioneers of life on Earth, played critical roles in the evolution and functioning of all other living organisms (5). Like a modern-day corporation, most eukaryotes have outsourced (or, more accurately, insourced) several key functions to bacteria (6).
The mitochondrion that functions as a cellular power plant in eukaryotes evolved from once-free-living bacteria that were engulfed; similarly, the chloroplast that is the center of photosynthesis in plants was likely derived from one or more free-living bacteria. This intermingling of genes and functions across the tree of life continues, allowing multicellular organisms to adapt more rapidly to new environments, using the versatility of their microbial partners (7-15). The ubiquity of microorganisms and their breadth of impact on the habitability of our planet have prompted musings of what life would be like without them (16). However, unlike "germfree" animals or plants in the confines of the laboratory, the health of the planet's environment and that of its inhabitants are absolutely dependent on their microbial partners.

MICROBES DROVE THE FORMATION OF OUR BIOSPHERE

So how did we get here? Stepping back approximately 2.5 to 2.3 billion years, we observe the Great Oxidation Event, a cataclysmic shift in the oxidation-reduction status of our planet that can be seen and traced in the geologic record, including global iron deposits (17). What was initially a nonoxidizing atmosphere, dominated by methane, hydrogen sulfide, and carbon dioxide, flipped (in geologic time) to an environment with abundant molecular oxygen. This flip was mediated by the rise of microorganisms capable of oxygenic photosynthesis, ancestors of today's Cyanobacteria (18), eliminating countless oxygen-sensitive microorganisms and resulting in one of Earth's most significant mass extinctions. However, the energy available from oxygen-coupled redox reactions (aerobic respiration) was significantly greater than that of the previous anaerobic lifestyles and allowed rapid diversification of new functions in a period termed the "archaean genetic expansion" (19). This period of energetic adaptation led to species diversification that was the precursor to the evolution of multicellular organisms and, ultimately, plants, animals, and the remainder of the tree of life. As such, the planet's collective microbial ancestors facilitated the formation of the biosphere as we know it (Fig. 1).

While microorganisms were initially viewed as a curiosity to be seen under the rudimentary microscopes of Anton van Leeuwenhoek, they are now appreciated as the "biogeochemical engines" that continue to support all life on Earth (20). Microorganisms are major drivers of the Earth's carbon cycle. In the ocean, phytoplankton (single-celled photosynthetic bacteria and algae) drive the "biological carbon pump" and are responsible for approximately half of the global carbon fixed from the atmosphere each year, with the remainder sequestered by the Earth's terrestrial vegetation (21). Microorganisms also perform key functions in the stabilization and recycling of this fixed carbon across our oceans and landforms. In our soils, microbes transform plant polymers and deposit their products on soil minerals, forming the basis for much of the Earth's terrestrial carbon stocks (22). However, in the face of disturbances like the tillage of agricultural soils (23) or thawing of permafrost (24, 25), microbial activity can result in the release into the atmosphere of large amounts of carbon that has been stored for thousands of years, with a potentially positive feedback to global temperatures (26). Nitrogen fixation is another remarkable chemical feat achieved by microorganisms.
Microorganisms catalyze this energetically costly reaction at ambient temperatures and pressures, frequently forming close couplings (including symbioses) with higher organisms such as plants (27) and insects (28). Our planet's ecosystems and inhabitants subsisted primarily on this microbially fixed nitrogen for 4 billion years, until the beginning of the 20th century with the production of nitrogen fertilizer by the Haber-Bosch process (29). As transformative as this engineering process of nitrogen fixation was for the production of food on our planet, it utilizes approximately 1% of global fossil energy (30) for production of the heat and pressure needed to accomplish this feat without microorganisms.

Microorganisms provide a wide range of ecosystem functions beyond carbon and nitrogen cycling. Collectively, they purify the water in our rivers, streams, lakes, reservoirs, and aquifers, naturally controlling the flux of nutrients like nitrogen and phosphorus that can regulate the development of stable ecosystems and the establishment of complex food webs. However, detrimental events can occur when the balance of microorganisms in nature is altered because of either natural or human interventions. Microbes are sources of other greenhouse gases (including methane and nitrous oxide) that are more potent or long-lived than CO2. In agricultural systems, fertilizer and manure applications stimulate the microbial release of 4 to 6 Tg of nitrous oxide per year (31), while microbially produced methane associated with rice paddies and livestock production represents approximately 30% of global methane emissions (32). Nutrient runoff from agricultural, industrial, and municipal sources promotes the growth of harmful microorganisms in our waterways, for example, forming toxic algal blooms that threaten our water supplies, health, and ecosystems (33) and contributing to dead zones in our oceans (34). The disturbance of aquifer biogeochemistry due to the drilling of wells and irrigation contributed to the "largest mass poisoning of a population in history" (35), where microorganisms mobilized naturally occurring but previously immobile arsenic (36, 37).

GLOBAL CHALLENGES OF HUMAN POPULATION GROWTH AND ENVIRONMENTAL CHANGE

The development of the Haber-Bosch process for the production of nitrogen fertilizer that led to the advent of modern production agriculture has been described as the "detonator of the population explosion" (38). With a current population of 7.3 billion and the majority of the world's population now residing within urban centers, we are entering an unprecedented phase in our Earth system, one that we, and our planet's microbiomes, have never before experienced. A number of challenges arise related to sustainable production of food, energy, and chemicals to support Earth's ever-growing human population (Fig. 2). Additionally, there is a pressing need to understand, predict, and respond to global environmental change, prevent and reverse ecosystem degradation, and manipulate the microbial origins of plant, human, and livestock diseases. To put our current and projected future Earth system state into context, the rate of CO2 entering our atmosphere is unprecedented over at least 56 million years (39), demonstrating that human impacts on our planet may persist over geologic time.
If this rate of emissions were to continue over the next few centuries, atmospheric CO2 could reach 2,000 ppmv (5 times the current concentration), average annual temperatures would rise by 8°C, and our oceans would acidify by 0.7 pH unit (39), producing conditions not experienced on Earth since the Paleocene-Eocene Thermal Maximum ~55 million years ago. As a result, our planet's natural biomes and those that we manage for food and fuel will likely experience conditions beyond their contemporary climate boundaries, and our current understanding of the sensitivities of their microbial components limits our ability to predict how they will respond (40).

THE ROLE OF MICROBIOME RESEARCH TO IMPROVE HUMAN HEALTH AND RESILIENCE

The interface of the human microbiome and health is vast, and we are at the early stages of a potential scientific revolution in this area. Despite enormous progress in the provision of a stable food supply in many parts of the world, undernutrition persists for a sizable fraction of the population in many locations (41). Simultaneously, overnutrition affects a substantial, and growing, proportion of the human population, with obesity, type 2 diabetes, and other related metabolic syndromes affecting people in both developed and developing countries (42, 43). Recent studies provide evidence that particular microbiome disruptions may play important roles in malnutrition (44, 45) and obesity (46-48) and in modulating associations between diet and disease (49). Beyond the gut, the human microbiome likely affects all organs through the immune, circulatory, and nervous systems, including communicating with our brains (50) and affecting our behavior and cognitive function (51, 52). Other emerging concepts are that a portion of the human microbiome is heritable (53) and that we have coevolved with microbes with specific properties (54). A natural extension of these concepts is that microorganisms play essential roles across the human life span, including development, maturation, reproduction, and senescence. For example, a growing body of evidence implicates microbiome perturbation during a critical window of early-life development of our immune system in the rapid increases of allergic and autoimmune conditions, including asthma, atopic dermatitis, food allergies, and inflammatory bowel disease, among others (55-58). We now appreciate that antibiotics and our increasingly industrialized lifestyles likely contribute to loss of microbes that are essential for healthy immune system development and with which our species coevolved (59, 60).

URBANIZATION AND THE INTERSECTION OF THE HUMAN AND ENVIRONMENTAL MICROBIOMES

Urbanization is a global phenomenon occurring at unprecedented pace and scale. In 1900, only 10% of the global population were urban dwellers. Now, for the first time in history, more than half the world's population lives in cities. It has been projected that 70% of humanity will live in cities by 2050 (61) (Fig. 2). Cities and the buildings within them represent an unprecedented facet of human, or even planetary, evolutionary history. One consequence of increased urbanization is that most of the world's people will be in regular contact with new combinations of microorganisms that thrive in urban built environments rather than the combinations of microorganisms characteristic of natural environments (62).
Among the emerging themes from this nascent field are that indoor microbiomes derive largely from our own bodies (66,67) and patterns of occupancy (68,69). In addition, several studies have demonstrated that characteristics such as surface materials and ventilation strategies influence the diversity and abundance of indoor microbial communities (e.g., [70][71][72][73][74]. Although we currently lack sufficient mechanistic understanding to understand the importance of indoor environmental quality in terms of microbial diversity, composition, and function to our health and development, recent evidence suggests benefits of exposure to a more diverse microbiota (75)(76)(77). A critical next step is to understand the public health implications of exposure to distinct collections of microbiomes characteristic of the built environment. THE SOCIETAL BENEFITS OF HARNESSED MICROBIOME FUNCTIONS Feeding our growing population is a grand challenge facing society. The last 100 years have seen great advances in increasing the amount of land that can support agricultural activity and the yield of food-grade crops per acre. The emergence of industrialized agriculture in early 20th century, improving the quantity and nutritional value of food, depended, in part, on understanding the role of nitrogen-fixing symbiotic microbes in yield and the role of the plant immune system in breeding for disease resistance. In contrast, incidents such as the "Dust Bowl," the catastrophic wind erosion of degraded soils in the United States and Canada during the 1930s (78), and the continual emergence or spread of plant diseases (79,80) illustrate the delicate balance between a need to intensify food production to meet population demand and the unwanted and potentially dangerous long-term consequences of altering the ecology of natural systems. Microbes protect our crops. Soil microorganisms, either as individuals or as communities, both help plants acquire nutrients and help protect crops from insect pests (81) and microbial pathogens (82). Through a better understanding of these processes, we may soon be able to harness microbes to protect crops from the many microbes that cause diseases that ravage them, leading to famine (83,84), societal upheaval, and conflict. The projected increases in population size and the desire to provide highnutritional-quality crops to a larger fraction of the population, combined with limitations in arable land and the need to maintain or enhance ecosystem services while simultaneously increase crop yields, reinforce a need to understand the impact of plant-soil-microbe interactions on agricultural productivity. This understanding must be developed for different geographic and cropping systems to enable accurate prediction of how modern agricultural management practices impact the ecology and function of microorganisms. Determining how the interactions of microbes, plants, and soil conditions confer resistance to abiotic and biotic stress or impact nutrient availability under current or future local climate conditions is likely key to producing sufficient food for a growing population, providing Microbes are Earth's "master chemists." The need to provide a sustainable and renewable supply of energy and chemicals is another grand challenge facing society. Microbes produce enzymes that catalyze all major biochemical transformations of inorganic and organic matter on the planet. 
They are also the reservoir of literally billions to trillions of genes that can ultimately be tapped for the construction of pathways to produce compounds with environmental, industrial, and pharmaceutical value. Today's global economy is heavily influenced by humankind's use of microbial activities, from our ancient practice of co-opting yeast for brewing and baking (85), the discovery and production of antibiotics (86), microbial production of life-saving hormones such as insulin (87), and the use of nitrogen-fixing microbial inoculants to reduce fertilizer needs for food and bioenergy crops (88, 89) to the presence of enzymes in our low-temperature detergents and the recent design of microbes to synthesize fuels (90) and valuable chemicals from renewable substrates (91). The desire to increase economic activity and affluence in emerging and developing countries is projected to create a large future demand for chemicals and fuels (92). The burgeoning demand for oil and other natural resources makes sustainable biocommodity production an attractive alternative way to meet the needs of these and other populations (93). Advances in biology, engineering, and genomics hold the promise that single species, consortia, or synthetic populations of microbes could produce alternatives to fuels or chemicals that have been derived from oil or other fossil fuels over the last 100 years (94). The microbe-based manufacturing of biocommodities could also provide numerous environmental benefits, especially if it depends on bio-based catalysts (enzymes) or sustainable and local production processes. Several successful industrial processes use mixed microbial cultures to make food and vitamins (95, 96). A recent report of improved hydrogen, methane, or chemical production by mixed consortia (97) illustrates how knowledge of microbial activities has the potential for increasing product yield and generating fewer toxic by-products and less waste than traditional chemical processes. In addition, the use of lignocellulose or other renewable feedstocks for microbial production of fuels and chemicals can achieve reductions in net greenhouse gas emissions compared to producing the same compounds from oil or other fossil fuels (98). Other potential benefits of using microbial processes include the generally lower energy needs (temperature and pressure) for biomanufacturing and the potential for microbes to improve the efficiency of extraction or subsequent utilization of fossil fuels. Given the finite area available and our population growth-related challenges, an understanding of soil or aquatic microbial communities also holds potential for the remediation and reclamation of currently contaminated environments for future use.

Understanding and harnessing "microbial dark matter." Historically, our progress in harnessing specific microorganisms for societal benefit has been constrained in part by our ability to cultivate only a minor fraction of the microbial diversity we now recognize. The tools of (meta)genomics that were advanced by the human genome sequencing efforts have made possible the large-scale DNA sequencing of mixed microbial communities and have revealed that we are surrounded by "microbial dark matter."
Much like the physical sciences community has coordinated to define and understand the universe's dark matter (99), microbiologists have embarked on a similar voyage using DNA sequencing to discover the hidden diversity and genetic potential of Earth's microbiomes (5, 64, 100-106). As a result, we are rapidly and continually growing new branches on the tree of life (107-111), and if we are to eventually harness this new knowledge for the benefit of humankind and our planet, we must strive to define the functions contained within this vast genetic potential (112) and determine its interaction with, and regulation by, the microbiome's local environment.

CROSS-CUTTING CHALLENGES TO MICROBIOME-BASED INNOVATION: TECHNOLOGICAL ROADBLOCKS

Despite the potential to understand, predict, and harness the Earth's critical microbiomes, several key barriers remain (Fig. 3). Just as the Human Genome Project reached across the traditional biological, physical, engineering, and social science domains to develop or respond to new technologies, next-generation advances in microbiome research must also reach beyond traditional microbiology (113). Although there have been significant advances in our ability to obtain microbial genomic information, fundamental challenges exist regarding the scalability and portability of microbial readout technology. Even with improvements, our ability to decode the functional relevance of microbiota at appropriate scales is severely limited. Similarly, our inability to establish causality in complex microbial networks limits our ability to make informed manipulations that lead to predictable outcomes in natural systems. Without systems to predict or preempt outcomes of microbiome disturbance or manipulation, we will have limited capabilities to understand the societal impacts of this new knowledge. Success in understanding, predicting, and potentially manipulating microbiomes for societal benefit will require a broadly interdisciplinary approach; unintended consequences must be thoroughly considered.

DECODING FUNCTIONS OF MICROBIAL GENOMES

The rate of DNA sequencing now outpaces our ability to determine gene functions by many orders of magnitude. In effect, we are transcribing countless libraries of books but have only a rudimentary understanding of the languages in which they are written. In most cases, what we hope to know is the products of these genes and their functions and how their production is regulated in nature. Across microbial genomes, there are whole families of genes possessing conserved "domains of unknown function" that likely provide critical (but unknown) capabilities essential for microbial survival (114, 115). To identify the biological roles of these genes requires new computational approaches that decode patterns of gene covariation across environments, conditions, and genomes to predict function. We must also develop technologies for high-throughput functional determination. For example, massively parallel systems are needed so that candidate genes can be optimized for expression, purified, or assayed in vivo or in vitro. Integrating these advances with nanoscale liquid handling (116), droplet compartmentalization of reactions (117), and high-throughput chemical imaging (118) can increase the rate of biochemical characterization of microbial genes by several orders of magnitude.
Such advances will be critical to mine the genetic potential of microbes and enable a new understanding of the beneficial and detrimental aspects of microbiome function. While obtaining genomic information has been simplified in approach, scaled in throughput, and reduced in cost, DNA sequence is a measurement of potential and not of function or activity. Other biological (macro)molecules (RNA, proteins, metabolites) provide more appropriate windows into microbial activity in situ, and improvements in the accuracy, integration, spatial and temporal resolution, and cost of analyzing these components will be a new frontier in microbiome research. Improved temporal resolution of microbiome gene expression (metatranscriptomics) or protein translation (metaproteomics) continues to illuminate the functional roles of individual species within complex microbial communities (119, 120), while metabolomic approaches are beginning to yield insights into the complexity of microbiome chemistry (121, 122). Metabolomic technologies can provide critical insights into the activities of specific genes, microbes, and microbiomes, for example, when integrated with mutant libraries (123). Various forms of chromatography coupled with mass spectrometry are used for this purpose, but all are hampered by our limited ability to translate mass spectra into reliable identification of specific molecules (124). Just as the functions of many genes in a genome are unknown, most ions from mass spectrometry of microbial cultures or communities are also unknown. Efforts to develop microbiome-relevant mass spectrometry libraries would help significantly (125), supported by developments in approaches to the structural elucidation of novel metabolites. If these technical advances are accompanied by community-adopted databases and computational platforms, the broader scientific community would leverage the many parallel efforts in this area (126, 127). Because of sensitivity limitations, cost constraints, and the destructive nature of many analytical procedures, trade-offs currently exist between spatial and temporal analyses of microbiome function. The sensitivity of many existing analytical methods can limit their application to relatively large sample volumes, and for this reason, many 'omic approaches rarely sample microorganisms in the environment on the most relevant spatial or time scales. Microorganisms exist and interact across micron-scale physical and chemical gradients, but common approaches to microbiome sampling do not capture important biological, physical, and chemical heterogeneity that is key to understanding interspecies interactions and the true environment to which microbes are responding. For example, when soil cores are homogenized to study microbial composition or activity and its relationship with soil physicochemical properties, at a human scale, this is equivalent to sampling an area of around 1,000 km² (128). If microbial ecologists were to study the biological, physical, and chemical properties of the soil microbial ecosystem at the same relative scale at which plant ecologists survey these ecosystems, they would need to survey areas of 100 m², the size of soil microaggregates (128). One question is whether we need information at this scale.
It appears that we might; microbe-mineral interactions at this scale are critical determinants of the storage of carbon and the retention of nutrients in our soils, and spectroscopic measurements at this scale have led to a paradigm shift in the theories of soil organic matter transformation (22). The technological barriers to studying microbiomes at the appropriate scale are immense but not insurmountable. Discoveries at the macroscale will always be important and could be evaluated at the nano- or microscale by using targeted and potentially nondestructive approaches that are more amenable to higher spatial and temporal resolution and higher throughput. For example, infrared (IR) imaging involves the label-free detection of functional groups associated with macromolecules through acquisition of spectra that originate from vibrational frequencies characteristic of specific chemical bonds as they respond to IR light of various wavelengths. Fourier transform IR (FTIR) spectromicroscopy, a nondestructive means to monitor chemical signatures associated with microbial growth and metabolism, when combined with high-energy light sources (e.g., as generated by synchrotrons), can be deployed at or below the single-cell scale. Further developments applying nanotechnology to IR imaging may allow finer spatial resolution even without the need for synchrotron light sources (129). Although the chemical resolution of approaches like FTIR spectromicroscopy is comparatively low, such nondestructive methods may be coupled with destructive methods of greater chemical resolution, for example, mass spectrometry imaging based on laser or ion beam ablation (130, 131).

PHYSICAL AND CHEMICAL CHARACTERIZATION OF MICROBIAL HABITATS

A detailed understanding of microbial interactions with their host or environment requires knowledge of the physical and chemical conditions that microbes experience directly. For example, with the exception of some aquatic environments, nearly all microbial ecosystems are associated with porous media that impact cell movement in addition to water flow and chemical diffusion (e.g., soil particles, mucous membranes, root mucilage, oral biofilms), and understanding the physical constraints on nutrient transport and communication requires physical characterization at the nanometer-micrometer scale. Approaches such as X-ray computed tomography allow detailed resolution of the physical structure of an environment at the scale of the microorganism (nanometer-micrometer) and above by the use of intact samples but currently provide limited chemical information (132). Although detailed chemical information can be obtained by methods that require thin sectioning, like X-ray fluorescence, near-edge X-ray absorption fine-structure spectroscopy (133), or nanoscale secondary ion mass spectrometry (134), their destructive nature prohibits our ability to monitor the dynamics inherent to microbial systems. At the micrometer-centimeter scale, electrochemical and optical probes have been productively used to profile gradients in pH, redox, and oxygen; however, these probes and their associated equipment are intrusive to the ecosystem and often expensive, limiting their application to only a few point measurements or limited time series, and their fragile nature makes them best suited for laboratory use.
New applications of low-cost, low-power, silicon-based sensor arrays (e.g., charge-coupled device or complementary metal-oxide semiconductor) have the potential to deliver field- or lab-deployable sensor networks to monitor both the variability of environments' physical (e.g., temperature or moisture) and chemical properties and the activity of microorganisms (e.g., nutrient transformation or respiration). Autonomous sensor networks of this form could expand the monitored scale from centimeters to kilometers, allowing microbial information to be utilized at scales relevant to gaining knowledge to understand, predict, and possibly mitigate some impacts of disturbance, such as climate change. These networks would clearly have broad applicability in water and environmental quality monitoring, agriculture, and many areas of industry.

TECHNOLOGIES FOR ROBUST, PORTABLE, GENOME-CENTRIC ANALYSES OF MICROBIOMES

The types of global monitoring and data integration required to develop a predictive understanding of Earth's microbiomes also require significant advances in DNA sequencing. Further reductions in cost and turnaround time, as well as improved data integration across DNA sequencing platforms and unit mobility, could allow real-time "field" studies so that researchers can reliably distinguish members of different microbiomes and readily observe their dynamics. Continued transformative improvements in DNA sequencing technologies could provide systems to facilitate more robust genome-centric analyses in a manner that would allow rapid data turnaround in field-deployable units. Such approaches, if integrated with appropriate user interfaces and standardized computing platforms, could make DNA sequencing-based analysis of microbiomes as routine as a blood test or a water nitrate measurement. However, simply acquiring more sequence data does not represent a panacea. New technologies that increase sequence throughput and mobility must be accompanied by parallel advances in bioinformatics and statistics, first to ensure data quality and comparability but also to synthesize this information into biologically meaningful formats, driving the adoption of, and accessibility to, quantitative genome-centric microbiome information.

BUILDING THE FRAMEWORK FOR MASSIVELY PARALLEL GENOME-CENTRIC QUANTITATIVE MICROBIOME ANALYSIS

De novo assembly of microbiome sequence data represents a massive computational burden that currently requires supercomputing facilities, and the population variation within genomes that is common to microbiomes can inhibit complete assembly. Advanced technologies that deliver long sequence reads will undoubtedly help with both of these issues; however, as we expand our investigation of microbiomes, there is a need for high-quality reference catalogs of microbial genomes. The need is not simply for more sequence information, but rather for a supporting and extensive catalog of reference genomes for which functional roles have been elucidated. Currently, the microbial gene or genome catalogs represent a minute fraction of known microbial diversity, with the entries heavily biased toward a few species and environments. Just like the targeted broadening of diversity within our human genome catalogs, initial efforts to expand microbial genome and gene catalogs have begun (135, 136).
Catalogs to date have focused mostly on bacteria and archaea because of their lower genome complexity; however, critical components of most microbiomes (viruses, fungi, and other microeukaryotes) have not received the same attention. While many important microbial targets for sequencing may not be immediately culturable, approaches based on single-cell sorting and subsequent sequencing (108, 137) will be important components in building out these global microbiome references. A coordinated effort to produce and share such reference catalogs would substantially enhance the predictive value of metagenomic sequence information, while also potentially reducing the computational burden that is required for de novo analyses. Each of these advances drives toward a future where the computation and prediction of the functional importance of microbiome composition will be directly determined on handheld devices, enabling rapid and accurate source tracking and monitoring of microorganisms from our hospitals to our farms and oceans. Despite transformations in our ability to decode microbial nucleic acids, analysis of the composition of a microbiome remains fraught with biases that are largely ignored. The extreme bias introduced through cultivation of microorganisms was noted and drove the adoption of cultivation-independent approaches; however, DNA extraction alone can introduce greater variance in detected microbial abundance than the variables whose impact we wish to understand (138, 139); such observations must lead to standardization of protocols (100). However, given the variation within and between systems under study (e.g., soils, intestines, water, air, insects, plants, and even computer keyboards and cell phones), a universal nucleic acid extraction method, while research-worthy, may be unrealistic. Following nucleic acid extraction, all subsequent steps (e.g., purification, amplification, library preparation, sequencing, data analysis) introduce more unquantified uncertainty and bias that prohibit truly quantitative analyses. In the likely absence of a universally appropriate and accepted protocol, bias must be quantified, for example, by using universal standards added to samples at appropriate stages of processing. These standards would be validated by dedicated organizations such as the National Institute of Standards and Technology and supplied as components of commercial extraction kits or individually. Such a set of standards would allow the community to emulate others, such as the Microarray Quality Control Consortium (140), in standardizing data generated across protocols and platforms while encouraging market diversity. It will be critical to publicly share experimental metadata and quality information alongside primary data in formats that can be easily queried and included in statistical models.

DEVELOPING AND INTEGRATING TOOLS FOR ROBUST HYPOTHESIS TESTING

The complexity of most naturally occurring microbiomes and our inability to cultivate the majority of microorganisms naturally led to the widespread use of cultivation-independent methods to determine composition and predict function. To date, the majority of microbiome studies infer causation from correlation, particularly when more diverse microbiomes are the subject of investigation. Microbiomes, as well as being complicated, are also typically complex, with many organisms combining in a nonlinear manner to form an integrated network with emergent properties.
As a result, we often lack the ability to test (i) predictions of keystone microbial species, (ii) assumptions of high functional redundancy, and (iii) the belief that microbial communities are highly resilient to disturbance. In model organisms, we can study complex genetic and metabolic networks through precise manipulation of the components. For each node in these networks (e.g., genes), we can subtract or silence, add or enhance, either individually or in combination, and observe the response of subnetworks or the entire network to these precise manipulations. This allows accurate testing of the roles of individual components in addition to their importance in the system context. Unfortunately, our ability to perform similar analyses for complex networks of microorganisms is limited. What if we could specifically remove or inhibit a given organism, a group of organisms, or specific functions shared across many organisms and observe the response of the system as a whole? This would transition the era of microbiome study away from correlation and toward the knowledge needed to attribute causation, improve prediction, and enable precise manipulation. Building defined microbial communities using tens to hundreds of isolated organisms is a valuable means to test the roles of individuals in low-complexity systems. However, this may be akin to building a genome de novo from a subset of genes to determine how the whole system functions, yet even this approach leads to surprises (115). To systematically evaluate functional roles in complex coevolved microbiome networks, subtracting organisms, observing the system's function, and subsequently replacing organisms (or mutant variants) in a manner analogous to gene deletion and complementation would precisely define the roles of individuals and their key functions. To accomplish this full cycle will require tools that precisely inhibit specific microorganisms, an ability to cultivate (or selectively capture) more microorganisms, and the ability to rapidly develop comprehensive mutant libraries, in addition to all of the aforementioned tools for monitoring the environment and the microbiome's composition and function. Several opportunities exist for the manipulation of a microbiome and its constituent parts in well-studied model or laboratory systems. Beginning at the population level, new technologies allow the high-throughput disruption of genes and the monitoring of their contributions to microbial fitness. High-density transposon mutagenesis coupled with high-throughput sequencing (transposon sequencing [141], for example, and its barcoded derivative, random bar code transposon site sequencing [142]) allows high-throughput fitness profiling of populations of mutants cultivated under selecting conditions. These approaches have revealed the essential roles of genes with no previously known function (143), steadily increasing our view of gene essentiality (144). Alteration of individual genes, species, or functional groups in a complex microbial community could be achieved via sequence-specific gene editing or deletion with CRISPR/Cas9 delivered by phage or conjugative elements (145, 146) and the use of contractile nanotubes that can target bacteria with strain-specific activity (147). In addition, as our understanding of metabolic networks in microbiomes advances, manipulation of specific members or functional groups may be achieved through the addition or removal of substrates based on thermodynamic or kinetics-based model predictions.
The properties of the physical environment are also potentially critical determinants regulating individual fitness within microbiomes (11, 148). Consequently, the design and construction of synthetic systems with controlled physical properties such as permeability, porosity, and roughness based on natural systems will be extremely valuable in determining the key factors regulating microbiome assembly, development, stability, and activity (149-151). Next-generation mathematical models are required to represent the complexity of microbiomes, scaling perhaps from the fundamentals of microbial electron transport (152) and the thermodynamics of microbial redox reactions (153), to genome-scale metabolic models of individuals and populations (154, 155), to microbial community function at the ecosystem scale (e.g., gut, soil, ocean) (156-160), and ultimately to the Earth system scale (161-163). To be useful, these models should embed the properties of microbial physiology and evolution into the physical, chemical, and biological heterogeneities characteristic of the intended length, time, and spatial scales of communities. Fully coupled models, such as those representing the microbial, environmental, and host aspects of the system (e.g., plant rhizosphere or animal intestinal tract), would allow dynamic feedbacks to be evaluated and would enlighten our understanding of the emergent phenomena (164).

THE POTENTIAL FOR TRANSFORMATIONAL DISCOVERIES UNDER A UMI

Advancing microbiome science will require the cooperation, coordination, and collaboration of scientists and engineers from many disciplines; just as in our natural ecosystems, diversity promotes productivity and stability. By extension, such efforts would likely require diversity in funding and, ideally, coordination of federal agencies, private industry partners, and philanthropic donors. For this reason, we, as a community of scientists, are one of several groups that support calls for a unified microbiome initiative (UMI) (4, 165, 166) and agree with calls for such an initiative to be built upon local leadership (103). In considering the potential value of a UMI, it is clear that improved knowledge of the activities of microbial communities can positively impact our health and that of our planet and importantly inform decision making on social and economic issues (Fig. 4).

POTENTIAL BENEFITS OF A UMI TO GROWING BIOECONOMIES

There are many examples of the profound effect of biotechnology on the medical, agricultural, and industrial economic sectors. In 2012, revenues from genetically modified organisms were ~2.5% of the United States gross domestic product, and the resulting United States National Bioeconomy Blueprint called for research and innovation to create a new bioeconomy (167). The UMI could provide knowledge to develop new microbial community applications, including designer communities for crop plants, animals, pollinator species, or rain clouds that could improve agricultural output, help mitigate the ecological and economic impact of drought, and, in the case of livestock microbiomes, greatly reduce the greenhouse gas contribution of agricultural practices.
The knowledge derived from UMI activity can also spur the development of sustainable methods to produce valuable bio-based fuels and commodities, improve the recovery of valuable subsurface fuels and chemicals, enable the manufacture of new bio-inspired materials, and catalyze the development of new industries that generate high-value products from renewable waste while lessening society's reliance on fossil fuels. Achieving this goal requires an understanding of biological systems and the creation of resultant biotechnologies to benefit humankind, enhance the biosphere, and enable economic growth. UMI-enabled discoveries can have a substantial positive societal and economic impact.

GLOBAL ETHICAL, LEGAL, AND SOCIAL ISSUES ASSOCIATED WITH A UMI AND THE POTENTIAL FOR INNOVATION

The public benefit of a UMI can be enhanced by a coordinated set of transdisciplinary, transcontinental research activities. For example, the ability to address drought resistance in crops or remediation of contaminated waters or soils needs attention by international scientific, ethical, and political experts. Recent successful examples of transcontinental research initiatives include the Human Genome Project and the International Stem Cell Initiative. Similar cooperation on the UMI could galvanize private and public funders from around the globe to collaborate on setting science priorities; provide training for scientists and legal, ethics, and policy experts; harmonize international trade and intellectual property issues; and develop a suite of local, regional, and even global funding instruments to maximize societal benefits. Given the wide-ranging potential impacts of the global microbiome, the pursuit of UMI-based discoveries and solutions must incorporate the ethical and societal implications of these discoveries and their applications and consider the current biotechnology regulatory environment. However, because it has implications for human, animal, and crop health, as well as the environment more broadly, microbiome research poses novel challenges for existing regulatory frameworks (168). The gaps in current regulation mean that meeting these challenges at a pace that keeps up with microbiome science will likely require innovation in the integration of ethical and societal considerations into the scientific process. Microbiome research and applications present unique challenges to the existing global regulatory systems (169) because traditional risk structures (the risk-benefit analyses used for traditional biotechnology products such as protein therapeutics) do not apply and because microbial communities have the potential to evolve and interact with ecological networks that cross national borders. In addition, applications of findings from microbiome research can occupy niches that are not clearly in the jurisdiction of any particular government agency. In the United States, for example, regulation by the Food and Drug Administration is product based and focuses on the safety of products for humans and animals, but probiotics and prebiotics do not fit into regulated categories. The U.S. Department of Agriculture focuses on food safety and animal and plant health but not environmental impacts of interventions that target animal or plant microbiomes. The Environmental Protection Agency regulates pollutants and toxins through the Clean Air Act and the Clean Water Act and animals, plants, and other species through the Endangered Species Act.
However, these strategies do not apply to regulation or oversight of modification of naturally occurring microbial species. Recognizing that science advances may require new regulatory policies, the U.S. Government recently released a memorandum (170) to initiate a process to update the Coordinated Framework for the Regulation of Biotechnology, last revised in 1992. The aim is to coordinate and modernize the federal regulatory framework and systems that govern the vastly altered landscape of biotechnology products, including the Food and Drug Administration, the Environmental Protection Agency, and the U.S. Department of Agriculture, while attempting to reduce barriers to innovation. Recent ethics and policy discussions regarding synthetic biology (171), genome editing (172), and approaches such as those using CRISPR-Cas systems to modify and drive the evolution of mosquito populations in the wild (173) point similarly to a need for professional self-regulation and for individual scientists to become aware of, identify, and incorporate ethical and societal considerations into actual practice (174). National-level discussions that continue to rely on the concepts of biohazard containment and risk management will likely be insufficient tools for future UMI-based research, since this work will likely provide new definitions of what is "normal," "healthy," or "diseased" (175). On the other hand, a UMI provides rich opportunities to test innovations in integrating ethical and societal considerations into microbiome research, with the input of a wide range of scientific disciplines and stakeholders. One goal of such innovation could be to use research ethics consultation (176) and stakeholder engagement (177) to identify how the aims and benefits of microbiome research, and thus the underlying values, can be brought into alignment with the needs of relevant communities. Engaging the public in microbiome research through crowdsourcing (e.g., the American Gut Human Food Project) and citizen science (178, 179) could enhance trust in the research by transforming "the public" into stakeholders and encourage broader discussion about ethical responsibilities that would extend beyond the professional scientific community (4). It would be irresponsible to proceed with mass manipulation of microbiomes without having the structures and knowledge in place to evaluate the potential consequences.

CONCLUDING POINTS

As was the case in other game-changing scientific initiatives (the Human Genome Project, the development of the Internet, the exploration of space), achieving the goals of the UMI requires combined expertise and technologies spanning numerous domains. The resultant discoveries and enabling technologies can provide the underpinning knowledge to develop applications within or across human and animal health, food production and safety, and the environment, all contributing to robust and sustainable bioeconomies while preserving the intrinsic value and biodiversity of our ecosystems. These applications have the potential to transform many scientific disciplines, to impact scholars in the social sciences and elsewhere, to spawn new economic opportunities, and to benefit the lives of citizens around the globe.

ACKNOWLEDGMENTS

We thank Jeff Miller and Sharif Taha for helpful discussion and thank Diana Swantek of LBNL for graphic art support.
2016-10-26T03:31:20.546Z
2016-05-13T00:00:00.000
{ "year": 2016, "sha1": "d45adcd8873cfdd6e21fa3fd4a4590727aab9e01", "oa_license": "CCBY", "oa_url": "https://mbio.asm.org/content/mbio/7/3/e00714-16.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6e7f2448e0e0a04c6194cb0c1ee99d802c76a6fd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
199212490
pes2o/s2orc
v3-fos-license
Fault diagnostics of Lada Kalina cars by means of a defined complex of diagnostic parameters

The technology of observing a big group of cars allows us to establish how often we meet the diagnoses that we are interested in and what the odds of these diagnoses are according to the diagnostic parameters of the developed system. On the basis of the calculations according to the data of the experimental researches, we can say that for a car with the set of signs corresponding to the set complex of diagnostic parameters, the most probable diagnosis is the first one: a fault of the electrical equipment elements. An obvious condition of diagnostics efficiency is a significant decline of the failure probability of nodes, units and vehicles in general, and also the exclusion of incorrect scheduled maintenance, which is reached under a qualitatively perfected system of diagnosing.

Introduction

Malfunction diagnosing of nodes, units and vehicles in general is carried out in most cases according to several signs, and the analysis of these signs allows one to establish the most probable diagnosis [1-4]. The higher the repairmen's qualification and the more effective the equipment by means of which faults are detected, the more reliable the received result will be. Sometimes various faults are partially accompanied by identical diagnostic parameters when the technical condition of the vehicle changes. For example, a fault of the electrical equipment elements (D1) is accompanied by uneven operation of the engine (y1), incomplete fuel combustion (y2), increased fuel consumption (y3), and obstructed or impossible engine starting (y4). Incorrect valve adjustment (D2) is accompanied by uneven operation of the engine (y1), incomplete fuel combustion (y2) and increased fuel consumption (y3). Coking of the piston rings (D3) is accompanied by the signs mentioned earlier, y1, ..., y3, and the presence of engine oil in the combustion chamber (y5). We consolidate the description of the above-stated diagnoses in a matrix, designating the existence of a sign as '1' and the lack of a sign as '0' (table 1). The controlled diagnostic parameters have accidental dispersion because of measurement errors, the accidental combination of operation modes of different elements of the vehicle, etc. Therefore, the existence or the absence of a diagnostic sign under a certain diagnosis Di is not a deterministic event ('1' or '0'), and is observed with some conditional probability PDi(yj).

Method of forecasting

We make the calculation of the most probable diagnosis according to Bayes's formula [5-8] as applied to diagnostics:

Pyj(Di) = P(Di) · PDi(yj) / P(yj), (1)

where Pyj(Di) is the probability of diagnosis Di while observing parameter yj; P(Di) is the probability of diagnosis Di; PDi(yj) is the probability of observing parameter yj under diagnosis Di; and P(yj) is the probability of observing parameter yj over all diagnoses. During diagnosis according to a complex of signs, the formula is written in the same way, but instead of the single parameter yj, the complex of parameters y* is considered.
The probability of the joint observation of the independent signs which make up the analyzed complex of diagnostic parameters can be expressed as the product of the observation probabilities of each parameter under the considered diagnosis:

PDi(y*) = PDi(y1) · PDi(y2) · ... · PDi(yv). (2)

If some signs are absent from the complex, the product contains the probability of the absence of the corresponding sign, 1 - PDi(yj). The observation probability of the complex of signs over all diagnoses is determined by the formula of total probability:

P(y*) = Σs P(Ds) · PDs(y*). (3)

Then it is necessary to consider the condition of the effective application of diagnostics in vehicle fault detection [9, 10]. As we use the probabilistic approach, one more diagnosis is added to the three diagnoses mentioned above (table 2). This added diagnosis D4, meaning all other possible malfunctions, completes the full group of events. We made the diagnosis for a car with the following complex of signs: uneven engine operation (y1), incomplete fuel combustion (y2) and obstructed or impossible engine starting (y4), while the other diagnostic parameters are not observed (noted by the sign '~'), i.e.:

y* = y1, y2, ~y3, y4, ~y5. (5)

Using the data from table 2, we calculate the diagnosis probabilities for this complex of diagnostic parameters.

Conclusion

The coking of piston rings is almost improbable, and incorrect adjustment of the engine valves is only slightly probable. If the first diagnosis is not confirmed while checking the engine, then the fourth diagnosis will be the second in order of importance: the cause of the fault of the engine of the Lada Kalina car is something else. An obvious condition of diagnostics efficiency is a significant decline of the failure probability of nodes, units and vehicles in general, and also the exclusion of incorrect scheduled maintenance, which is reached under a qualitatively perfected system of diagnosing. By aiming diagnostics at controlling the most important nodes, units and systems of the vehicle, it is possible to reduce the cost of failures. The diagnostics efficiency depends largely on the variation coefficient of the operating time to the limit condition of the vehicle elements. At rather stable values of this operating time it is possible to predict reliably the moment of failure, to carry out the maintenance service (MS) in due time and also to plan in advance the costs of buying the necessary repair parts [9-18].
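To make the computation above concrete, the following is a minimal Python sketch of the diagnosis calculation by formulas (1)-(3). The prior probabilities P(Di) and conditional probabilities PDi(yj) used here are hypothetical placeholders, since the actual values of table 2 are not reproduced in the text; with any table of the same form, the diagnosis with the largest posterior probability is taken as the most probable one.

```python
# Hypothetical priors P(Di) and conditionals P_Di(yj); table 2 values are not
# reproduced in the text, so these numbers are illustrative only.
priors = {"D1": 0.30, "D2": 0.20, "D3": 0.10, "D4": 0.40}
cond = {  # P_Di(yj) for signs y1..y5 under each diagnosis
    "D1": [0.9, 0.8, 0.6, 0.7, 0.0],
    "D2": [0.8, 0.7, 0.5, 0.0, 0.0],
    "D3": [0.7, 0.6, 0.5, 0.0, 0.9],
    "D4": [0.2, 0.2, 0.2, 0.2, 0.2],
}
observed = [True, True, False, True, False]  # y* = y1, y2, ~y3, y4, ~y5

def p_complex(d):
    """P_Di(y*): formula (2), using 1 - P_Di(yj) for absent signs."""
    p = 1.0
    for p_sign, present in zip(cond[d], observed):
        p *= p_sign if present else 1.0 - p_sign
    return p

total = sum(priors[d] * p_complex(d) for d in priors)              # P(y*): formula (3)
posterior = {d: priors[d] * p_complex(d) / total for d in priors}  # Bayes: formula (1)
for d, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(d, round(p, 3))
```

Note how the structure of the problem alone already reproduces the qualitative conclusion: because y4 is observed and PD2(y4) = PD3(y4) = 0, the posteriors of D2 and D3 vanish, leaving D1 and D4 as the only candidates.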
2019-08-03T01:36:19.491Z
2019-07-10T00:00:00.000
{ "year": 2019, "sha1": "7bd5b0adc9de1f3f4ff6bce3438c1fdcf0c794a6", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/560/1/012102", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b18fd24c40d9e302e51eec95fb5914f4accb7ea7", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Engineering" ] }
54859766
pes2o/s2orc
v3-fos-license
Modulatory Effect of Phytoestrogens and Curcumin on Induction of Annexin 1 in Human Peripheral Blood Mononuclear Cells and their Inhibitory Effect on Secretory Phospholipase A2

Purpose: To investigate the modulatory effects of phytoestrogens (coumestrol, daidzein and genistein) and curcumin on the induction and secretion of annexin 1 (ANXA1) in human peripheral blood mononuclear cells (PBMCs) under inflammatory and non-inflammatory conditions, as well as their effect on the activity of secretory phospholipase A2-V (sPLA2-V).

Methods: The modulatory effects of phytoestrogens and curcumin on the induction of ANXA1 were investigated via the sandwich ELISA method, while their effects on the activity of sPLA2-V were determined by photometric assays. In addition, the cell viability of these compounds was determined by the standard trypan blue exclusion method using PBMCs.

Results: The results indicate a significant increase (p < 0.05) in the total content of ANXA1, particularly by coumestrol (p < 0.01), in both inflammatory and non-inflammatory cells. The compounds also exhibited a dose-dependent inhibition of sPLA2-V activity; among them, curcumin and genistein were the strongest inhibitors, with IC50 values of 11.1 ± 0.3 µM and 13.6 ± 0.6 µM, respectively.

Conclusion: The investigated compounds have the potential to induce the synthesis and secretion of ANXA1 as well as to inhibit the activity of sPLA2-V, suggesting an inhibitory role in phospholipid metabolism and inflammation.

INTRODUCTION

Inflammation, a widely addressed issue with clinical implications, has attracted many researchers in multiple disciplines. Mechanistically, inflammation is a complex set of interactions among soluble and cellular components, recruited at and by the destruction of connective tissue. The major soluble components include certain intercellular messengers such as cytokines [1]. Among cellular components, blood-borne monocytes/macrophages are an integral part of inflammation.

Among all the anti-inflammatory mediators, annexin 1 (ANXA1) has been proven to be a key mediator controlling the inflammatory cascades. ANXA1 is a 37-kDa protein made up of 346 amino acids and the first member of the annexin superfamily of calcium- and phospholipid-binding proteins [2]. ANXA1 and its N-terminal portion can influence many inflammatory mechanisms and regulate the synthesis of eicosanoids, leukocyte migration and apoptosis of inflammatory cells [3]. The ability of ANXA1 to regulate the synthesis of inflammatory mediators (such as eicosanoids) is due to its ability to restrain the activity of phospholipase A2 (PLA2). PLA2 is the only superfamily of esterases that cleaves the acyl ester linkage at the sn-2 site of membrane phospholipids, liberating free fatty acids and lysophospholipids [4]. These hydrolases are pro-inflammatory enzymes responsible for the release of arachidonic acid and eicosanoid synthesis. PLA2 can be segregated into three major classes: Ca2+-dependent cytosolic PLA2 (cPLA2), Ca2+-independent cytosolic PLA2 (iPLA2) and secretory PLA2 (sPLA2). Amongst these, sPLA2s are the main contributors to the extensive production of arachidonic acid in inflammatory pathways [5].
To counteract the mediators of inflammatory cascades, a number of anti-inflammatory agents have been used. Among these, glucocorticoids (GCs) are the mainstay for the management of various inflammatory and immune disorders. GCs inhibit inflammation through a series of mechanisms, of which the induction of ANXA1 is a key one. However, adverse effects associated with their long-term usage and the development of resistance limit the use of GCs in chronic maintenance therapies. Among these deleterious effects, adrenal atrophy, Cushing syndrome, hypertension, gastrointestinal bleeding, hypogonadism and fetal growth retardation [6] are associated with the long-term use of GCs. Considering the above-mentioned clinical complications, the global focus is to discover safer and more efficacious agents.

Curcumin, a natural compound found in Curcuma species, has been used successfully as an anti-inflammatory agent. The pharmacological perspective of curcumin is under investigation and includes the demonstration of its targets, such as transcription factors, cytokines, cell adhesion molecules, surface receptors, growth factors and various kinases [7], thereby causing either direct cellular pathway inhibition or activation of secondary cellular responses. Furthermore, coumestrol, genistein and daidzein, sourced from soybeans, are phytoestrogens. Phytoestrogens are substances that promote estrogenic actions in mammals and structurally resemble the mammalian estrogen 17β-estradiol [8]. They mimic the biological activity of estrogens and have a wide range of biological activities, including estrogenic, antioxidant, anti-inflammatory, antithrombotic, anti-allergic, hypolipidemic and anti-cancer properties [9]. Many biological and anti-inflammatory effects of curcumin and phytoestrogens have been reported. However, to the best of our knowledge, their ability to modulate the induction of ANXA1 has not been reported to date. Therefore, the aim of the current study was to investigate the effect of phytoestrogens and curcumin on the induction of ANXA1 in human peripheral blood mononuclear cells as well as their inhibitory effects on the secretory phospholipase A2-V (sPLA2-V) enzyme.

Separation of human peripheral blood mononuclear cells (PBMCs)

PBMCs were obtained from blood donated by healthy volunteers. This study was approved by the human ethical committee of Universiti Kebangsaan Malaysia (UKM) (approval no. UKM 1.5.3.5/244/NF-050-2012) and conformed to the principles outlined in the Declaration of Helsinki [23]. Venous blood was collected in heparinized tubes and processed immediately. Human PBMCs were isolated by gradient centrifugation in Lymphoprep (Axis-Shield PoC AS, Oslo, Norway). Blood was diluted 1:2 with RPMI-1640 medium, carefully layered on Lymphoprep, and centrifuged at 600 g for 20 min at 20 °C. The PBMC layer was removed, washed twice with RPMI-1640 and re-suspended in RPMI-1640 complete medium in a culture tube. The cells were adjusted to 5 × 10⁵ cells/mL using a haemocytometer.

Cell viability

Cell viability was determined by the standard trypan blue exclusion method. The PBMCs (5 × 10⁵ cells/mL) were incubated with various concentrations of the compounds, ranging from 5 to 100 µg/mL, each in triplicate, at room temperature overnight. Blue dye uptake was a signal of cell death. The percentage viability was calculated from the total cell counts. The concentration of compounds at which viability was > 95% was used for further studies [10].
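As an illustration of the viability computation just described, the short Python sketch below expresses percent viability as the fraction of unstained (viable) cells among all counted cells; the cell counts shown are hypothetical, not the study's data.

```python
def percent_viability(unstained: int, stained: int) -> float:
    """Trypan blue exclusion: viable (unstained) cells as a percentage of all counted cells."""
    return 100.0 * unstained / (unstained + stained)

# Hypothetical haemocytometer counts: 192 unstained and 8 blue-stained cells
# give 96.0 % viability, above the 95 % threshold used to select working doses.
print(percent_viability(192, 8))
```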
Incubation of compounds with inflammatory and non-inflammatory PBMCs

PBMCs were incubated overnight at 37 °C and 5% CO2, either in the presence or in the absence of the test compounds. For the inflammatory condition, PBMCs were stimulated with 1 µg/mL of lipopolysaccharide (LPS) from Salmonella enterica (Sigma, Steinheim, Germany). Dexamethasone at 0.4 µg/mL (10⁻⁶ M) was used as a positive control, and a mixture containing DMSO, PBS and RPMI-1640 was used as a negative control. Control groups were included for stimulated as well as non-stimulated PBMCs [11].

Extraction of extracellular, intracellular and cell surface-associated annexin 1

Following the incubation period, cells were gently centrifuged (300 g) for 5 min at 4 °C and the supernatant was separated for extracellular ANXA1. This centrifugation did not cause any lysis. The membrane-bound ANXA1 was released into the medium by washing the cells with a Ca2+-free salt solution containing EDTA (Sigma, Steinheim, Germany). The pelleted cells were re-suspended in PBS containing 2 mM EDTA and further incubated for 2-3 min to detach the ANXA1 attached to the cell membranes. After incubation, cells were centrifuged again under the conditions mentioned before. The supernatant was isolated for the measurement of membrane-bound ANXA1. Subsequent to the removal of membrane-associated ANXA1, the cells were lysed and the lysate was taken for the calculation of the entire intracellular ANXA1 [10].

Quantification of annexin 1 by ELISA

ANXA1 was determined by sandwich enzyme-linked immunosorbent assay (ELISA) (USCN Life Science, China). The whole assay was performed as described by the manufacturer. ELISA was performed in duplicate, and data were obtained from three different donors.

sPLA2-V inhibition assay

Enzyme and DTNB yielded final concentrations of 100 ng/mL and 87 µM, respectively. Assays were performed at room temperature in 96-well microtiter plates containing DTNB, substrate solution and the respective test substance. The 100% enzyme activity control contained substrate and enzyme only. DMSO served as a negative control and was inactive at the concentration used in the assay (1.7% v/v).

Statistical analysis

The results are expressed as mean ± standard deviation. All statistical analyses were performed using GraphPad Prism 5. One-way ANOVA followed by a post-test (Tukey's multiple comparison) was carried out, with significance considered at p < 0.05 and p < 0.01.

Cell viability

The cell viability was assessed prior to every experiment. The viability of PBMCs incubated in the presence of the test compounds at concentrations of 10 µg/mL was always greater than 95%. Concentrations of compounds higher than 10 µg/mL decreased the cell viability below 95%.
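The dose-response analysis reported in the following sections rests on two computations: percent inhibition relative to the 100% activity control, and IC50 estimation from the inhibition-versus-concentration plot. Below is a minimal Python sketch of one way such values might be obtained by linear interpolation on a log-concentration axis; the absorbance rates and concentrations are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical DTNB-assay readings (reaction rates); real data are not reproduced here.
control_rate = 0.220                              # substrate + enzyme only = 100 % activity
conc_uM = np.array([1, 3, 10, 30, 100])           # inhibitor concentrations
rates = np.array([0.20, 0.17, 0.12, 0.06, 0.02])  # rates with inhibitor present

inhibition = 100.0 * (1.0 - rates / control_rate)  # percent inhibition vs. control

# IC50: the concentration giving 50 % inhibition, interpolated on log10(concentration).
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc_uM))
print(f"IC50 ~ {ic50:.1f} uM")
```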
Effect on annexin 1 level and secretion in normal PBMCs

The modulation of ANXA1 in PBMCs was observed by incubating the PBMCs either in the presence or in the absence of curcumin, coumestrol, genistein and daidzein, at 10 µg/mL for all compounds. After incubation, the intracellular, extracellular and membrane-bound ANXA1 levels were determined as described in the methods. A considerable increase was observed in the basal levels of intracellular ANXA1 with all compounds; however, coumestrol exhibited the maximum increase in the intracellular level of ANXA1. A significant increase in the level of extracellular ANXA1 was found for all the compounds as compared to the control (Figure 1a). The results presented in Figure 1a also illustrate an increase in plasma membrane-bound ANXA1, which was statistically significant for coumestrol. These results indicate the induction of synthesis and the stimulation of secretion of ANXA1 in PBMCs.

Collectively, a manifold increase in the total level of ANXA1 was observed after combining the extracellular, intracellular and membrane-bound contents of ANXA1 (Figure 1b). As compared to the untreated cells (8.1 ± 0.4 ng/mL), the total content of ANXA1 was substantially increased (21.5 ± 0.8 ng/mL) with coumestrol. With curcumin, genistein and daidzein, the level of ANXA1 increased to 14.7 ± 1.1, 13.2 ± 0.4 and 12.8 ± 0.3 ng/mL, respectively. Amongst all, coumestrol exhibited the strongest stimulation of ANXA1, while curcumin also showed noteworthy results. Genistein and daidzein moderately up-regulated ANXA1.

Effect on annexin 1 level and secretion in inflammatory PBMCs

The levels of ANXA1 were also measured under inflammatory conditions. For this purpose, PBMCs taken from the same blood donors were subjected to the inflammatory condition. The intracellular, extracellular and membrane-bound ANXA1 levels were measured as described above. Figure 2a illustrates an increase in the extracellular level of ANXA1 in stimulated PBMCs relative to the control. An increase in intracellular and plasma membrane-bound ANXA1 was also observed. In inflammatory PBMCs, a net increase was seen in the total content (extracellular, intracellular and membrane-bound) of ANXA1. The level of ANXA1 increased to 15 ± 0.7 and 11.5 ± 0.9 ng/mL for coumestrol and curcumin, respectively. Likewise, an increasing trend was seen for genistein (8.9 ± 0.3 ng/mL) and daidzein (9.8 ± 0.3 ng/mL). For all the compounds, an increasing trend in the levels of ANXA1 was seen in both normal and inflammatory cells. As in the non-inflammatory condition, the potency of coumestrol was also the highest in the inflammatory condition.
Inhibition of sPLA2-V

Secretory PLA2 inhibitory activity was determined using the Ellman method. The activity was assessed by detecting free thiols using Ellman's reagent DTNB (5,5'-dithiobis(2-nitrobenzoic acid)). The inhibition of sPLA2 activity from a human source was determined at different concentrations of the inhibitors. The IC50 values for sPLA2 were calculated by linear XY scatter plot (Table 1). The inhibition of sPLA2 activity varied for the different inhibitors at different concentrations. ANXA1 inhibited sPLA2 in a dose-dependent manner: as the concentration of ANXA1 was increased, a corresponding decrease in the enzyme activity was observed. ANXA1 preferentially inhibited sPLA2 enzyme activity, with an IC50 value of 4.9 × 10⁻⁷, and showed > 80% inhibition at 400 ng/mL. Dexamethasone also exhibited sPLA2 inhibition in a dose-dependent manner, with an IC50 value of 0.61 ± 0.1 µM (Figure 3 and Table 1). Similarly, curcumin, genistein and daidzein inhibited the sPLA2 activity in a concentration-dependent manner (Figure 4 and Table 1). Of all the compounds, curcumin exhibited the strongest inhibition of sPLA2 enzyme activity, with an IC50 value of 11.1 ± 0.3 µM. Among the phytoestrogens, genistein showed the strongest activity, with an IC50 value of 13.6 ± 0.6 µM. Coumestrol also moderately inhibited sPLA2 activity in a concentration-dependent manner.

DISCUSSION

The role of ANXA1 as an anti-inflammatory protein has been brought to the limelight in recent years. ANXA1 was initially thought to be just a cytosolic protein exerting its anti-inflammatory effects through inhibition of PLA2. However, recent studies on ANXA1 have uncovered new effects and actions of this protein. Under resting conditions, human neutrophils, monocytes and macrophages constitutively contain ANXA1 [13]. ANXA1 expression and function under GC treatment are already known, but the intracellular signalling pathways involved in the expression of ANXA1 in response to steroidal hormones are still unclear.

Although some reports have illustrated that GCs up-regulate ANXA1 synthesis by genomic mechanisms [14], other intracellular studies state that this effect is independent of the activation of nuclear GC receptors. In agreement with these reports, a recent study reported that in CCRF-CEM cells, ANXA1 synthesis was independent of the nuclear GC receptors [15]. Dexamethasone significantly increases the cellular content of ANXA1 when incubated overnight. The overnight incubation of PBMCs with 10 µg/mL of the investigated compounds resulted in the up-regulation of ANXA1. Moreover, a significant increase in the amount of ANXA1 exported across the plasma membrane was also observed. It has been reported that an increase in the cellular turnover of ANXA1 may cause a rapid export of the protein from intracellular stores to extracellular sites, and as a consequence de novo synthesis of the protein may take place to replenish the depleted intracellular levels [16]. Based on these findings, our results indicate that the investigated compounds may have the potential to efficiently stimulate ANXA1 secretion.
Recent studies on the induction of ANXA1 expression have shown that chemically similar estrogen hormones also have a role in the induction of ANXA1. In studies carried out previously, researchers demonstrated that in the human lymphoblastic CCRF-CEM cell line, the estrogen hormone 17β-estradiol (E2β) induced the synthesis of ANXA1 [15]. It is well established that E2β exerts major effects on cell growth, differentiation and function by specifically interacting with intracellular estrogen receptors (ER) [17]. The E2β-ER complex migrates to the nucleus of the cell, where it binds to estrogen-responsive elements present in the genomic DNA. It was reported that the up-regulation of ANXA1 was due to the action of E2β on estrogen receptors.

Coumestrol, genistein and daidzein are phytoestrogens. Phytoestrogens promote estrogenic actions in mammals. Mechanistically, phytoestrogens have been shown to bind to two types of estrogen receptors: estrogen receptor α and estrogen receptor β, respectively. On the basis of the above-mentioned facts, it can hence be inferred that the investigated compounds resemble E2β in structure and function, and also exhibit results similar to E2β. Curcumin is renowned for its anti-inflammatory effects and for its use in the treatment of various diseases. In this study, it was observed that curcumin up-regulates ANXA1 in PBMCs, but the mechanisms by which curcumin up-regulates ANXA1 are still unknown.

Among the PLA2 enzymes, the sPLA2 enzymes play an important role in the pathogenesis of inflammatory diseases [18]. Elevated levels of sPLA2 enzymes are detected in many inflammatory conditions. Inhibiting sPLA2s would probably be a preferable strategy, since their induced levels are predominantly associated with pathological conditions. However, selective inhibition of just one isoform of sPLA2 may not be sufficient to exert the desired effect. The originally recognized activity of ANXA1 as an inhibitor of phospholipase A2 (PLA2) was at first proposed to be responsible for its anti-inflammatory actions [19]. The inhibition of PLA2 activity, including the production of arachidonic acid, was thought to be a result of ANXA1 binding to the substrate, rather than directly to the enzyme, leading to the depletion of substrate sites and a subsequent reduction of PLA2 activity. However, this idea was reassessed, and it is now clear that a secretory form (sPLA2) as well as a cytosolic form of PLA2 (cPLA2) exist. ANXA1 has shown inhibition of sPLA2 in a concentration-dependent manner. The same mechanism has also been observed for cPLA2 inhibition [20].

The anti-inflammatory properties of GCs have been attributed to the liberation and enhanced synthesis of ANXA1, which inhibits PLA2, resulting in decreased eicosanoid synthesis. In the current study, dexamethasone was likewise found to inhibit sPLA2 in a concentration-dependent manner. In previous studies, dexamethasone had been reported to inhibit PLA2 in U937 cells, although the expression of ANXA1 was not inducible by GCs in these cells [21].
Flavonoids are antioxidants known to act as anti-inflammatory compounds by scavenging free radicals; therefore, a single molecule combining PLA2 inhibition with antioxidant activity can serve as a better anti-inflammatory agent. In the present study, the flavonoids genistein and daidzein inhibited sPLA2-V in a concentration-dependent manner. Many inhibitors block sPLA2 activity either by binding to the substrate or by chelating calcium, but the inhibition of other sPLA2 isoforms by genistein was observed to be independent of the substrate and of calcium chelation, even though the sPLA2 isoforms exhibit more than 70% homology [22]. Daidzein and genistein belong to the same class and are structurally and functionally similar, so daidzein may follow the same inhibition mechanism as genistein.

CONCLUSION

The results illustrate that the phytoestrogens and curcumin have the potential to induce the synthesis and secretion of ANXA1 in peripheral blood mononuclear cells (PBMCs). Our findings on sPLA2-V inhibition provide further evidence rationalizing the anti-inflammatory activities of the tested compounds. The present findings on sPLA2-V inhibitory activity should encourage further investigations addressing other PLA2 isoforms in inflammatory cascades.

Figure 1: (a) Concentration of extracellular, intracellular and plasma-membrane-bound levels of ANXA1 in PBMCs in the absence of compounds (control), with dexamethasone (0.4 µg/mL) and in the presence of the compounds (10 µg/mL). (b) Total content of ANXA1 in PBMCs.

Figure 2: (a) Concentration of extracellular, intracellular and plasma-membrane-bound levels of ANXA1 in inflammatory PBMCs in the absence of compounds (control), with dexamethasone (0.4 µg/mL) and in the presence of the compounds (10 µg/mL). (b) Total content of ANXA1 in inflammatory PBMCs.

Figure 4: Concentration-dependent inhibition of the sPLA2-V enzyme by curcumin, coumestrol, genistein and daidzein. Inhibition is expressed as percent of control. The data represent mean ± SD (n = 3).
Recurrence properties for linear dynamical systems: An approach via invariant measures

We study different pointwise recurrence notions for linear dynamical systems from the Ergodic Theory point of view. We show that from any reiteratively recurrent vector $x_0$, for an adjoint operator $T$ on a separable dual Banach space $X$, one can construct a $T$-invariant probability measure which contains $x_0$ in its support. This allows us to establish some equivalences, for these operators, between some strong pointwise recurrence notions which in general are completely distinguished. In particular, we show that (in our framework) reiterative recurrence coincides with frequent recurrence; for complex Hilbert spaces uniform recurrence coincides with the property of having a spanning family of unimodular eigenvectors; and the same happens for power-bounded operators on complex reflexive Banach spaces. These (surprising) properties are easily generalized to product and inverse dynamical systems, which implies some relations with the respective hypercyclicity notions. Finally we study how typical an operator with a non-zero reiteratively recurrent vector is, in the sense of Baire category.

(a) recurrence: the operator $T$ is said to be recurrent if the set
$$\mathrm{Rec}(T) := \big\{ x \in X : x \in \overline{\mathrm{Orb}(Tx, T)} \big\}$$
is dense in $X$, where each vector $x \in \mathrm{Rec}(T)$ is called a recurrent vector for $T$. By the (not so well-known) Costakis-Manoussos-Parissis theorem (see [14, Proposition 2.1]), this notion is equivalent to that of topological recurrence, i.e. for each non-empty open subset $U$ of $X$ one can find $n \in \mathbb{N}$ such that $T^n(U) \cap U \neq \emptyset$; and in this case, the set $\mathrm{Rec}(T)$ of recurrent vectors for $T$ is a dense $G_\delta$ subset of $X$;

(b) hypercyclicity: the operator $T$ is said to be hypercyclic if there exists a vector $x \in X$, called a hypercyclic vector for $T$, whose orbit $\mathrm{Orb}(x, T)$ is dense in $X$. By Birkhoff's Transitivity theorem (see [29, Theorem 1.16]), this notion is equivalent to that of topological transitivity, i.e. for each pair $U, V$ of non-empty open subsets of $X$ one can find $n \in \mathbb{N}_0$ such that $T^n(U) \cap V \neq \emptyset$; and in this case, the set of hypercyclic vectors for $T$, denoted by $\mathrm{HC}(T)$, is a dense $G_\delta$ subset of $X$.

If given a point $x \in X$ and a set $A \subset X$ we denote the return set from $x$ to $A$ by
$$N_T(x, A) := \{ n \in \mathbb{N}_0 : T^n x \in A \},$$
which will be denoted by $N(x, A)$ if no confusion with the map arises, we can reformulate the above notions in the following terms: a vector $x \in X$ is recurrent if and only if $N(x, U)$ is an infinite set for every neighbourhood $U$ of $x$; and a vector $x \in X$ is hypercyclic if and only if $N(x, U)$ is an infinite set for every non-empty open subset $U$ of $X$. Historically, hypercyclicity and its generalizations have been the most studied notions in linear dynamics, while the systematic study of linear dynamical recurrence properties started recently, in 2014, with [14], in spite of the great non-linear dynamical knowledge already existing in this area (see for instance [22]).
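To make the notion of a return set concrete, here is a small Python sketch (an illustration of ours, not taken from the paper) computing $N(x, U)$ for an irrational rotation of the circle, a classical dynamical system in which every point is recurrent and the return sets even have bounded gaps; the function name `return_set` and the chosen parameters are our own.

```python
import math

def return_set(x0, alpha, radius, n_max):
    """Return the times n <= n_max at which the orbit of the rotation
    T(x) = x + alpha (mod 1) re-enters the ball of the given radius
    around x0, with distance measured on the circle."""
    hits = []
    x = x0
    for n in range(1, n_max + 1):
        x = (x + alpha) % 1.0
        dist = min(abs(x - x0), 1.0 - abs(x - x0))  # circle distance
        if dist < radius:
            hits.append(n)
    return hits

alpha = math.sqrt(2) - 1          # irrational rotation number
N = return_set(x0=0.0, alpha=alpha, radius=0.05, n_max=500)
print(N[:10])                     # infinitely many returns in the limit
gaps = [b - a for a, b in zip(N, N[1:])]
print("max gap:", max(gaps))      # bounded gaps: a syndetic return set
```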
Direct relations between these properties and Ergodic Theory arise when we are able to consider a probability (or a positive finite) Borel measure $\mu$ on $X$ (i.e. defined on $\mathcal{B}(X)$, the $\sigma$-algebra of Borel sets of $X$), which will sometimes be required to have full support (i.e. $\mu(U) > 0$ for every non-empty open subset $U$ of $X$). We will only consider Borel measures in this work, and the word "Borel" will sometimes be omitted.

If such a measure µ exists, we can study the dynamical system (X, B(X), µ, T) from the point of view of Ergodic Theory, and the relevant properties are:

(a) invariance: the operator T is said to be µ-invariant, or equivalently, the measure µ is called T-invariant, if for each A ∈ B(X) the equality µ(T^{-1}(A)) = µ(A) holds. By the Poincaré Recurrence theorem (see [36, Theorem 1.4]), this notion implies that for every A ∈ B(X) with µ(A) > 0 there is n ∈ N such that T^n(A) ∩ A ≠ ∅. The Dirac mass δ_0 at 0 is always an invariant measure for any operator T, and we will say that a T-invariant probability measure µ is non-trivial if it is different from δ_0.

(b) ergodicity: the operator T is said to be ergodic with respect to µ, provided that the measure µ is T-invariant, and for each A ∈ B(X) with T^{-1}(A) = A we have that µ(A) ∈ {0, 1}. It is well known that the last statement is equivalent to the fact that, for each pair of sets A, B ∈ B(X) with µ(A), µ(B) > 0 there is n ∈ N_0 such that µ(T^{-n}(A) ∩ B) > 0 (see [36, Theorem 1.5]). When T is ergodic with respect to a measure with full support, it follows from Birkhoff's Pointwise Ergodic theorem that T is not only hypercyclic, but even frequently hypercyclic: there exists a vector x ∈ X such that for each non-empty open subset U of X the return set N(x, U) has positive lower density; in other words, $\underline{\mathrm{dens}}(N(x,U)) > 0$. Such a vector x is said to be a frequently hypercyclic vector for T, and the set of all frequently hypercyclic vectors is denoted by FHC(T). See [3, Corollary 5.5] for the details of this argument, and for more on frequent hypercyclicity. When T is only supposed to admit an invariant measure µ, it follows easily from the Poincaré Recurrence theorem that µ-almost every x ∈ X is a recurrent point for T (see [22, Theorem 3.3]).

Our main line of thought in this work will be to connect various (stronger) notions of recurrence via invariant measures, proceeding essentially in two steps:

- if T admits vectors with a certain (rather weak) recurrence property, prove that it admits a non-trivial invariant measure, perhaps with full support (see Theorem 2.3);

- if T admits a non-trivial invariant measure (perhaps with full support), prove that it admits vectors with a certain strong recurrence property (see Lemmas 3.1 and 4.4).

This approach in the context of linear dynamical systems comes from the paper [24], which extends to the linear setting some well-known results in the context of compact dynamical systems (see [22, Chapter 3 and Lemma 3.17]). The various recurrence notions which we will consider were introduced and studied in the work [10], but the initial study of recurrence in linear dynamics started in [14]. In the next subsection, we recall the relevant definitions and present the first main result of this paper.

Furstenberg families: recurrence and hypercyclicity notions

The Banach spaces X considered in this subsection can be either real or complex. Let us first recall the following definitions from [10]:

Definition 1.1. Given a non-empty collection of sets F ⊂ P(N_0), we say that it is a Furstenberg family if it is hereditary upward, that is, for each A ∈ F we have that every B ⊂ N_0 with A ⊂ B also belongs to F. The dual family of F is defined as the collection of sets
$$\mathcal{F}^* := \{ A \subset \mathbb{N}_0 : A \cap B \neq \emptyset \ \text{for every } B \in \mathcal{F} \}.$$

Definition 1.2. Let (X, T) be a linear dynamical system and let F be a Furstenberg family. A point x ∈ X is said to be F-recurrent (resp. F-hypercyclic) if N(x, U) ∈ F for every neighbourhood U of x (resp. for every non-empty open subset U of X). We will denote by FRec(T) (resp. FHC(T)) the set of such points and we say that T is F-recurrent (resp.
F-hypercyclic) if FRec(T) is dense in X (resp. if FHC(T) ≠ ∅). The families F for which there exist F-hypercyclic operators are by far less common than those for which F-recurrence exists, since having an orbit distributed around the whole space is much more complicated than having it distributed just around the initial point of the orbit. Furstenberg families associated just to recurrence will be used in the following subsection, but in the present one we focus on the best known cases of families for which both notions exist. In particular, a point x ∈ X is said to be

(a) frequently recurrent (resp. frequently hypercyclic) if $\underline{\mathrm{dens}}(N(x,U)) > 0$ for every neighbourhood U of x (resp. for every non-empty open subset U of X). We will denote by FRec(T) (resp. FHC(T)) the set of such points, and we say that T is frequently recurrent (resp. frequently hypercyclic) if FRec(T) is dense in X (resp. if FHC(T) ≠ ∅);

(b) U-frequently recurrent (resp. U-frequently hypercyclic) if $\overline{\mathrm{dens}}(N(x,U)) > 0$ for every neighbourhood U of x (resp. for every non-empty open subset U of X). We will denote by UFRec(T) (resp. UFHC(T)) the set of such points, and we say that T is U-frequently recurrent (resp. U-frequently hypercyclic) if UFRec(T) is dense in X (resp. if UFHC(T) ≠ ∅);

(c) reiteratively recurrent (resp. reiteratively hypercyclic) if $\mathrm{Bd}(N(x,U)) > 0$ for every neighbourhood U of x (resp. for every non-empty open subset U of X). We will denote by RRec(T) (resp. RHC(T)) the set of such points, and we say that T is reiteratively recurrent (resp. reiteratively hypercyclic) if RRec(T) is dense in X (resp. if RHC(T) ≠ ∅);

where for any A ⊂ N_0 its lower density, upper density and upper Banach density are respectively defined as
$$\underline{\mathrm{dens}}(A) := \liminf_{N \to \infty} \frac{\#(A \cap [0, N])}{N+1}, \qquad \overline{\mathrm{dens}}(A) := \limsup_{N \to \infty} \frac{\#(A \cap [0, N])}{N+1},$$
$$\mathrm{Bd}(A) := \lim_{N \to \infty} \left( \max_{m \geq 0} \frac{\#(A \cap [m, m+N])}{N+1} \right).$$

In particular, frequent, U-frequent and reiterative recurrence are clearly stronger notions than "usual" recurrence as defined in Subsection 1.1, and frequent recurrence is a stronger notion than U-frequent recurrence, which is in its turn stronger than reiterative recurrence. We point out that all these notions are not specific to the linear setting; we will actually use them in the context of Polish dynamical systems in Sections 2, 3, 5 and 6. However, since we are focused on linear dynamical systems, our first main result connects all of them in the framework of adjoint operators on separable dual Banach spaces:

Theorem 1.3. Let T : X → X be an adjoint operator on a (real or complex) separable dual Banach space X. Then we have the equality
$$\mathrm{FRec}(T) = \mathrm{UFRec}(T) = \mathrm{RRec}(T).$$
In particular:

(a) The following statements are equivalent: (i) FRec(T) ≠ {0}; (ii) UFRec(T) ≠ {0}; (iii) RRec(T) ≠ {0}; (iv) T admits a non-trivial invariant probability measure.

(b) The following statements are equivalent: (i) T is frequently recurrent; (ii) T is U-frequently recurrent; (iii) T is reiteratively recurrent; (iv) T admits an invariant probability measure with full support.

In particular, these results hold whenever T is an operator on a (real or complex) separable reflexive Banach space X.
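Since the three densities above drive all of the recurrence notions in this paper, the following Python sketch (an illustration of ours, not part of the paper) compares them on a few explicit subsets of $\mathbb{N}_0$. A computer only sees finite truncations, so the liminf, limsup and limit are necessarily approximated over a finite horizon and a fixed window length.

```python
def densities(A, horizon, window):
    """Finite-horizon approximations of the lower density, the upper
    density and the upper Banach density of the set A ∩ [0, horizon]."""
    ind = [1 if n in A else 0 for n in range(horizon + 1)]
    prefix = [0]                         # prefix[n] = #(A ∩ [0, n-1])
    for v in ind:
        prefix.append(prefix[-1] + v)
    ratios = [prefix[N + 1] / (N + 1) for N in range(horizon + 1)]
    lower = min(ratios[horizon // 2:])   # crude liminf over the tail
    upper = max(ratios[horizon // 2:])   # crude limsup over the tail
    # upper Banach density: best proportion over sliding windows
    banach = max((prefix[m + window + 1] - prefix[m]) / (window + 1)
                 for m in range(horizon - window))
    return lower, upper, banach

squares = {n * n for n in range(200)}                      # zero density
evens = set(range(0, 10001, 2))                            # density 1/2
dyadic = {n for n in range(1, 10001) if n.bit_length() % 2 == 1}
print(densities(squares, 10000, 500))   # all three approximations small
print(densities(evens,   10000, 500))   # all three near 1/2
print(densities(dyadic,  10000, 500))   # lower < upper; Banach density 1
```

The third set, a union of dyadic blocks, shows how the three quantities can genuinely differ: its initial-segment proportions oscillate, while whole windows of length 500 fit inside a single block, so the upper Banach density approximation equals 1.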
The above theorem is in spirit similar to [24, Theorem 1.3], where it is proved that every (U-)frequently hypercyclic operator on a separable reflexive space admits an invariant measure with full support. It is observed in [24, Remark 3.5] that the arguments extend to every adjoint operator acting on a separable dual Banach space. It is also proved in [24, Proposition 2.11] that, in this same setting, operators admitting an invariant measure with full support are exactly those which are frequently recurrent. However, the notion of frequent recurrence introduced in [24, Section 2.5] is rather different from the one given in Definition 1.2, since in [24] an operator T ∈ L(X) is called frequently recurrent if for every non-empty open subset U of X there exists a vector x_U ∈ U for which just the positive lower density of the return set N(x_U, U) is required. This notion is (at least formally) weaker than the one used here (see Remark 2.5). The proof of Theorem 1.3 relies on some modifications of the arguments of [24, Section 2], which will be presented in Sections 2 and 3. We mention that it cannot be extended to all operators acting on separable (infinite-dimensional) Banach spaces. Indeed, it is shown in [10, Theorem 5.7 and Corollary 5.8] that there even exist reiteratively hypercyclic operators on the space c_0(N) which do not admit any non-zero U-frequently recurrent vector.

Uniform, IP*, ∆*-recurrence and unimodular eigenvectors

In this subsection the underlying Banach spaces X are assumed to be complex. A vector x ∈ X is a unimodular eigenvector for T provided x ≠ 0 and Tx = λx for some unimodular complex number λ ∈ T = {z ∈ C : |z| = 1}. The set of unimodular eigenvectors of T will be denoted by E(T). Unimodular eigenvectors are clearly frequently recurrent vectors for T, but they enjoy some stronger recurrence properties like uniform, IP* and even ∆*-recurrence (see Definition 1.5 below). Our general aim in this paper is to investigate some contexts in which these strong forms of recurrence actually imply the existence of unimodular eigenvectors. We will see that it is indeed the case in (at least) the following two situations:

- when T is an operator on a complex Hilbert space (see Theorem 1.7 below);

- when T is a power-bounded operator on a complex reflexive Banach space (Theorem 1.9).

Let us now introduce these stronger recurrence notions, which are defined by considering Furstenberg families relevant only for the notion of recurrence and, contrary to those used in Subsection 1.2, having no hypercyclicity analogue. We will denote by S := {A ⊂ N_0 : A is syndetic} the Furstenberg family of syndetic sets, by IP := {A ⊂ N_0 : A is an IP-set} the Furstenberg family of IP-sets, and by ∆ := {A ⊂ N_0 : A is a ∆-set} the Furstenberg family of ∆-sets (the precise combinatorial definitions are recalled at the end of this subsection). From Definition 1.2 and the dual families notation we have:

Definition 1.5. Let (X, T) be a linear dynamical system. A point x ∈ X is said to be

(a) uniformly recurrent if N(x, U) ∈ S for every neighbourhood U of x. We will denote by URec(T) the set of such points, and T is uniformly recurrent if URec(T) is dense in X;

(b) IP*-recurrent if N(x, U) ∈ IP* for every neighbourhood U of x. We will denote by IP*Rec(T) the set of such points, and T is IP*-recurrent if IP*Rec(T) is dense in X;

(c) ∆*-recurrent if N(x, U) ∈ ∆* for every neighbourhood U of x. We will denote by ∆*Rec(T) the set of such points, and T is ∆*-recurrent if ∆*Rec(T) is dense in X.

It is shown in [6, Proposition 2] that the above Furstenberg families do not admit a respective hypercyclicity counterpart. As in the previous subsection, these recurrence notions could be defined for (non-linear) Polish dynamical systems, but since the eigenvectors will play a fundamental role in the connection between those concepts we will directly work with complex linear maps. The relation ∆* ⊂ IP* ⊂ S between the families (see [8]) yields the chain of inclusions ∆*Rec(T) ⊂ IP*Rec(T) ⊂ URec(T). From there the following question was proposed in [10]:

Question 1.6 ([10, Question 6.3]). Does there exist an operator which is uniformly recurrent but not IP*-recurrent?

The uniformly recurrent operators considered in [10] were also IP*-recurrent, and in fact a partial negative answer to Question 1.6 was already given in [10, Theorem 6.2] for the particular case of power-bounded operators, a condition which implies the equality of the two sets IP*Rec(T) and URec(T).
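For the reader's convenience, and since the families S, IP and ∆ are used throughout the rest of the paper, we record the standard combinatorial definitions behind them, as they appear in the usual references [8] and [22]:
$$A \subset \mathbb{N}_0 \text{ is syndetic} \iff \exists\, L \in \mathbb{N} \text{ such that } A \cap [n, n+L] \neq \emptyset \text{ for every } n \in \mathbb{N}_0 \quad (\text{bounded gaps});$$
$$A \text{ is an IP-set} \iff \exists\, (n_k)_{k \geq 1} \subset \mathbb{N} \text{ with } \Big\{ \sum_{k \in F} n_k : F \subset \mathbb{N} \text{ finite and non-empty} \Big\} \subset A;$$
$$A \text{ is a } \Delta\text{-set} \iff \exists\, (n_k)_{k \geq 1} \text{ strictly increasing with } \{ n_k - n_j : j < k \} \subset A.$$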
The second main result of this paper provides a negative answer to Question 1.6 for operators acting on a complex separable Hilbert space H, by showing the following stronger statement: any uniformly recurrent operator T ∈ L(H) has a spanning set of unimodular eigenvectors. More precisely, define the sets
$$\mathrm{FRec}_{bo}(T) := \{ x \in \mathrm{FRec}(T) : \mathrm{Orb}(x, T) \text{ is bounded} \}, \quad \mathrm{UFRec}_{bo}(T) := \{ x \in \mathrm{UFRec}(T) : \mathrm{Orb}(x, T) \text{ is bounded} \},$$
$$\mathrm{RRec}_{bo}(T) := \{ x \in \mathrm{RRec}(T) : \mathrm{Orb}(x, T) \text{ is bounded} \}.$$

Theorem 1.7. Let T : H → H be an operator on a complex separable Hilbert space H. Then we have the equality
$$\overline{\mathrm{span}}(E(T)) = \overline{\mathrm{span}}(\mathrm{RRec}_{bo}(T)).$$
Moreover:

(a) The following statements are equivalent: (i) E(T) ≠ ∅; (ii) ∆*Rec(T) ≠ {0}; (iii) IP*Rec(T) ≠ {0}; (iv) URec(T) ≠ {0}; (v) FRec_bo(T) ≠ {0}; (vi) UFRec_bo(T) ≠ {0}; (vii) RRec_bo(T) ≠ {0}; (viii) T admits a non-trivial invariant probability measure µ with $\int_H \|z\|^2 \, d\mu(z) < \infty$.

(b) The following statements are equivalent: (i) the set span(E(T)) is dense in H; (ii) T is ∆*-recurrent; (iii) T is IP*-recurrent; (iv) T is uniformly recurrent; (v) the set FRec_bo(T) is dense in H; (vi) the set UFRec_bo(T) is dense in H; (vii) the set RRec_bo(T) is dense in H; (viii) T admits an invariant probability measure µ with full support and $\int_H \|z\|^2 \, d\mu(z) < \infty$.

The proof of Theorem 1.7 is really specific to the Hilbertian setting, in a somewhat roundabout way. It relies on the following three main arguments:

- the existence of a non-trivial invariant measure with a finite second-order moment, under the assumption of the existence of a reiteratively recurrent vector with bounded orbit; this argument is the same as the one employed in the proof of Theorem 1.3 above;

- the fact that any operator on a space of type 2, admitting an invariant measure with a finite second-order moment, admits in fact a Gaussian invariant measure whose support contains that of the initial measure (see Remark 4.5);

- and lastly, the fact that on spaces of cotype 2, the existence of a Gaussian invariant measure for an operator T implies that the unimodular eigenvectors of T span a dense subspace of the support of the measure (see Step 3 of Lemma 4.4).

These last two "facts" are far from being trivial, and we refer the reader to [3, Chapter 5] for a proof, as well as for an introduction to the role of Gaussian measures in linear dynamics. Since the only spaces which are both of type 2 and of cotype 2 are those which are isomorphic to a Hilbert space, our proof of Theorem 1.7 does not seem to admit any possible extension to a non-Hilbertian setting. The following question remains widely open:

Question 1.8. Let X be a complex Banach space and let T : X → X be a uniformly recurrent operator. Is span(E(T)) a dense set in X? What about the cases where T is an adjoint operator on a separable dual Banach space, or where X is a reflexive Banach space?

A partial (but not completely satisfactory) answer is our third and last main result, which only concerns power-bounded operators on complex reflexive Banach spaces. It extends [10, Theorem 6.2] by showing that such an operator T ∈ L(X) is again uniformly recurrent if and only if it has a spanning set of unimodular eigenvectors. More precisely, we have:

Theorem 1.9. Let T : X → X be a power-bounded operator on a complex reflexive Banach space X. Then we have the equality
$$\overline{\mathrm{span}}(E(T)) = \overline{\mathrm{URec}(T)}.$$
In particular:

(a) The following statements are equivalent: (i) E(T) ≠ ∅; (ii) ∆*Rec(T) ≠ {0}; (iii) IP*Rec(T) ≠ {0}; (iv) URec(T) ≠ {0}.

(b) The following statements are equivalent: (i) the set span(E(T)) is dense in X; (ii) T is ∆*-recurrent; (iii) T is IP*-recurrent; (iv) T is uniformly recurrent.

The proof of Theorem 1.9 relies on the splitting theorem of Jacobs-Deleeuw-Glicksberg (see [31, Section 2.4]). Here the unimodular eigenvectors are obtained in a very different way than in the proof of Theorem 1.7 (via characters on a certain compact abelian group).
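The elementary half of both theorems can be made explicit by the following standard estimate, which we spell out since it explains why unimodular eigenvectors produce such strong recurrence: for a finite linear combination of unimodular eigenvectors,
$$x = \sum_{i=1}^{k} a_i x_i, \quad T x_i = \lambda_i x_i \ (\lambda_i \in \mathbb{T}) \quad \Longrightarrow \quad \|T^n x - x\| \leq \sum_{i=1}^{k} |a_i| \, |\lambda_i^n - 1| \, \|x_i\|,$$
so for every $\varepsilon > 0$ the return set $N(x, B(x, \varepsilon))$ contains a finite intersection of sets of the form $\{n : |\lambda_i^n - 1| < \delta\}$ with a suitable $\delta > 0$. Each of these sets belongs to $\Delta^*$ (see Lemma 4.1 below), and since $\Delta^*$ is a filter, the whole intersection does as well.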
Even though the arguments used in the proofs of the two theorems above still hold for complex finite-dimensional spaces, in this situation one can use directly the canonical Jordan decomposition (see [14, Theorem 4.1] and [10, Theorem 7.3]) to obtain a spanning set of unimodular eigenvectors even from "usual" recurrence as defined in Subsection 1.1.

Organization of the paper

Section 2 is devoted to the statement and proof of a purely non-linear result (Theorem 2.3) which allows one to construct invariant measures from reiteratively recurrent points, for a rather general class of Polish dynamical systems (which includes the compact ones). Theorem 2.3 is a modest improvement of [24, Theorem 1.5, Remarks 2.6 and 2.12] and its proof is based on a modification of the construction given in [24, Section 2]. In Section 3, we prove some results where frequent recurrence is deduced from reiterative recurrence, in particular Theorem 1.3. Theorems 1.7 and 1.9, which provide links between strong forms of recurrence and the existence of unimodular eigenvectors, are proved in Section 4. Sections 5 and 6 present some applications of the above results in terms of product and inverse dynamical systems respectively, while we study in Section 7 the "typicality", in the Baire Category sense, of some recurrence properties. Lastly, we gather in Section 8 some possibly interesting open questions and a few comments related to them.

2 Invariant measures from reiterative recurrence

In this section, we present a modification of the construction of [24, Section 2] which allows one to construct invariant measures from reiteratively recurrent points for a rather general class of Polish dynamical systems, including the compact ones (see Remark 2.4).

Topological assumptions and initial comments

We begin this section with some notation: whenever we consider a space of functions we will use the symbol 1l to denote the function constantly equal to 1, and given a subset A of the domain of the functions, we will write 1l_A for the indicator function of A. In particular, in ℓ^∞ = ℓ^∞(N), the space of all bounded sequences of real numbers, 1l ∈ ℓ^∞ is the sequence with all its terms equal to 1, and for every A ⊂ N, 1l_A ∈ ℓ^∞ will be the sequence in which the n-th coordinate is 1 if n ∈ A and 0 otherwise. Given a Banach limit m : ℓ^∞ → R, we will in fact write the result of the action of m on a "function" φ ∈ ℓ^∞ as the integral
$$m(\varphi) =: \int_{\mathbb{N}} \varphi(n) \, dm(n).$$

Given a topological space (X, τ) we will denote by B(X, τ) its σ-algebra of Borel sets. If there is no confusion with the topology we will simply write B(X). All the measures considered in this section will be non-negative finite Borel measures, i.e. they could be the null measure, and since they will be defined on Polish spaces the finiteness condition will imply their regularity (see [13, Proposition 8.1.12]). Given a (non-negative) finite Borel measure µ on a topological space (X, τ) we will denote its support by
$$\mathrm{supp}(\mu) := X \setminus \bigcup \{ U \subset X : U \text{ is } \tau\text{-open and } \mu(U) = 0 \}.$$
When µ is positive and regular it is easy to show that supp(µ) is non-empty and is the smallest τ-closed subset of X with full measure, i.e. µ(supp(µ)) = µ(X), the latter being true even if µ is not regular but X is second-countable (see [30, Proposition 2.3]). Moreover, a point x belongs to supp(µ) if and only if µ(U) > 0 for every neighbourhood U of x.

Before presenting the "measure-constructing machine" that will be used in the rest of this work, we give names to some properties that a Polish dynamical system (X, T) may have. In particular, let (X, τ_X) be the underlying Polish space, let τ be a Hausdorff topology on X and let K_τ be the set of τ-compact subsets of X.
The properties that we are going to consider are the following:

(I) T is a continuous self-map of (X, τ) (i.e. T : X → X is τ-continuous);

(II) τ ⊂ τ_X (i.e. τ is coarser than τ_X);

(III) B(X, τ) = B(X, τ_X) (i.e. both topologies have the same Borel sets);

(IV) every τ-compact set is τ-metrizable (i.e. every K ∈ K_τ is τ-metrizable);

(III*) every point of X has a neighbourhood basis for τ_X consisting of τ-compact sets.

In [24, Fact 2.1] it is shown easily how (II) and (III*) imply conditions (III) and (IV). For the concrete recurrence results that we obtain, it is necessary to assume conditions (I), (II) and (III*) in order to use the reiteratively recurrent points in a successful way. However, without property (III*), and assuming just conditions (I), (II), (III) and (IV), we can carry out the "construction" on which everything is based:

Lemma 2.1. Let (X, T) be a Polish dynamical system. Assume that X is endowed with a Hausdorff topology τ which fulfills (I), (II), (III) and (IV). Then for each x_0 ∈ X and each Banach limit m : ℓ^∞ → R one can find a (non-negative) T-invariant finite Borel regular measure µ on X for which µ(X) ≤ 1 and such that
$$\mu(K) \geq m(1l_{N(x_0, K)}) \quad \text{for every } K \in \mathcal{K}_\tau.$$
Moreover, we have the inclusion
$$\mathrm{supp}(\mu) \subset \overline{\mathrm{Orb}(x_0, T)}^{\tau}.$$

Remark 2.2. Lemma 2.1 is a rather technical result which allows us to construct invariant measures. Note that:

(a) Assumptions (I), (II), (III) and (IV) are fulfilled by the initial topology τ_X. However, if the τ_X-compact sets are too small, given an arbitrary point x_0 ∈ X (even with some kind of recurrence property) we could have m(1l_{N(x_0,K)}) = 0 for every τ_X-compact set K ⊂ X, and hence the measure µ obtained could be the null measure on X. We will consider a strictly coarser topology τ ⊊ τ_X in order to obtain "interesting measures" from Lemma 2.1 (see Theorems 1.3 and 1.7).

(b) Following the previous comment, even if the τ-compact sets are big enough, the measure µ could be the null measure on X if we choose a point x_0 ∈ X for which the return sets N(x_0, K) are too small, and hence m(1l_{N(x_0,K)}) = 0 for every K ∈ K_τ. We will get "interesting measures" whenever we combine Lemma 2.1 with the existence of a point x_0 ∈ X and a Banach limit m for which m(1l_{N(x_0,K)}) > 0 for some τ-compact subsets K of X. Those conditions will come from property (III*) together with the existence of a reiteratively recurrent point x_0 ∈ RRec(T); see Theorem 2.3.

(c) In [24, Remarks 2.6 and 2.12] a similar construction is carried out simply by choosing a non-principal ultrafilter U on N and considering the Banach limit
$$m(\varphi) := \lim_{N \to \mathcal{U}} \frac{1}{N} \sum_{n=1}^{N} \varphi(n), \qquad \varphi \in \ell^\infty.$$
Moreover, under the same assumptions it is also stated in [24, Remark 2.12] that for each x_0 ∈ X and each K ∈ K_τ one can find a T-invariant finite Borel measure µ on X such that µ(K) ≥ $\overline{\mathrm{dens}}$(N(x_0, K)). This just ensures that the above inequality holds true for only one fixed τ-compact subset K of X. We will encounter the same problem when working with the upper Banach density, and we will have to combine some more sophisticated Banach limits in order to cope with several τ-compact sets at the same time; see Subsection 2.3.

Here is the main result of this section:

Theorem 2.3. Let (X, T) be a Polish dynamical system. Assume that X is endowed with a Hausdorff topology τ which fulfills (I), (II) and (III*). If x_0 ∈ X is a reiteratively recurrent point for T, then one can find a T-invariant probability measure µ_{x_0} on X such that
$$x_0 \in \mathrm{supp}(\mu_{x_0}) \subset \overline{\mathrm{Orb}(x_0, T)}^{\tau}.$$
Moreover, if T is reiteratively recurrent then one can find a T-invariant probability measure µ on X with full support.

Remark 2.4.
If the Polish dynamical system T : (X, τ_X) → (X, τ_X) is locally compact, its initial topology τ_X already fulfills properties (I), (II) and (III*), and hence (III) and (IV). In particular, the latter is true whenever (X, τ_X) is a (metrizable) compact space.

Proof of Lemma 2.1

We modify the construction given in [24, Section 2.2]. Let (X, T) be a Polish dynamical system, denote by τ_X the initial topology of X and assume that X is endowed with a Hausdorff topology τ which fulfills (I), (II), (III) and (IV). Fix x_0 ∈ X and let m : ℓ^∞ → R be a Banach limit. For each K ∈ K_τ denote by C(K, τ) the space of all τ-continuous real-valued functions on K.

Fact 2.2.1. For each K ∈ K_τ there exists a (non-negative) finite Borel regular measure µ_K on (K, τ) such that
$$\int_K \varphi \, d\mu_K = \int_{\mathbb{N}} \varphi(T^n x_0) \cdot 1l_{N(x_0,K)}(n) \, dm(n) \quad \text{for every } \varphi \in C(K, \tau),$$
with the convention that the integrand vanishes whenever T^n x_0 ∉ K.

Proof. This is immediate from the Riesz Representation theorem since, as mentioned in [24, Fact 2.2], the right-hand side defines a (non-negative) linear functional on C(K, τ). Moreover, taking φ = 1l, the measure µ_K satisfies µ_K(K) = m(1l_{N(x_0,K)}).

By (III) we have the equality B(X, τ) = B(X, τ_X), and hence for each K ∈ K_τ we can extend the measure µ_K to a Borel measure on the whole space X (still denoted by µ_K) using the formula µ_K(A) := µ_K(K ∩ A) for every A ∈ B(X). Clearly µ_K(X) ≤ 1, which implies the regularity of these measures. However, since the compact sets K ∈ K_τ are not necessarily T-invariant and we could have T^{-1}(K) ∩ K = ∅, the measures µ_K are not necessarily T-invariant. As in [24] we will define the T-invariant measure we are looking for by taking the supremum of the measures µ_K, and this is possible due to the following fact:

Fact 2.2.2. If K_1, K_2 ∈ K_τ with K_1 ⊂ K_2, then µ_{K_1}(A) ≤ µ_{K_2}(A) for every A ∈ B(X).

Proof. The proof is exactly the same as that of [24, Fact 2.3] and it uses essentially conditions (II), (IV) and the positivity of m.

Since a finite union of τ-compact subsets of X is still an element of K_τ, from Fact 2.2.2 we deduce that the family (µ_K)_{K∈K_τ} has the following property: for any pair K_1, K_2 ∈ K_τ both µ_{K_1} and µ_{K_2} are dominated by µ_{K_1 ∪ K_2}, so that the family is directed and we can define
$$\mu(A) := \sup_{K \in \mathcal{K}_\tau} \mu_K(A) \quad \text{for every } A \in \mathcal{B}(X).$$

Fact 2.2.3. The set function µ is a (non-negative) finite Borel regular measure on X with µ(X) ≤ 1.

Proof. The proof is exactly the same as that of [24, Fact 2.4].

Fact 2.2.4. The measure µ is T-invariant.

Proof. The proof is exactly the same as that of [24, Fact 2.5] and it uses essentially conditions (I), (IV), the positivity of m and the fact that m is shift-invariant.

By Fact 2.2.1, for each K ∈ K_τ we have that µ(K) ≥ µ_K(K) = m(1l_{N(x_0,K)}). To finish the proof of Lemma 2.1 we include a property, not shown in [24, Section 2.2], about the support of the constructed measure:

Fact 2.2.5. We have the inclusion supp(µ) ⊂ $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$.

Proof. Let x ∉ $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$ and pick a τ-open set U containing x with U ∩ Orb(x_0, T) = ∅. For every K ∈ K_τ and every φ ∈ C(K, τ) supported in U ∩ K we have φ(T^n x_0) · 1l_{N(x_0,K)}(n) = 0 for all n, so µ_K(U) = 0 by regularity. By the definition of µ we get µ(U) = 0 and hence that x ∉ supp(µ).

Proof of Theorem 2.3

Let (X, T) be a Polish dynamical system, denote by τ_X the initial topology of X and assume that X is endowed with a Hausdorff topology τ which fulfills (I), (II) and (III*).

Fact 2.3.1. Let x_0 ∈ RRec(T) and let U ∈ K_τ be a τ_X-neighbourhood of x_0. Then one can find a T-invariant probability measure µ on X such that µ(U) > 0 and supp(µ) ⊂ $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$.

Proof. Since x_0 is reiteratively recurrent and U is a neighbourhood of x_0 we have Bd(N(x_0, U)) > 0, so there exists an increasing sequence of natural numbers (N_k)_{k∈N} ∈ N^N and a sequence of non-negative integers (m_k)_{k∈N} such that
$$\lim_{k \to \infty} \frac{\#(N(x_0, U) \cap [m_k, m_k + N_k])}{N_k + 1} = \mathrm{Bd}(N(x_0, U)) > 0. \tag{1}$$
Then we fix the Banach limit m : ℓ^∞ → R defined as
$$m(\varphi) := \lim_{k \to \mathcal{U}} \frac{1}{N_k + 1} \sum_{n = m_k}^{m_k + N_k} \varphi(n), \qquad \varphi \in \ell^\infty,$$
for some fixed non-principal ultrafilter U ⊂ P(N) on N. By (1) we have m(1l_{N(x_0,U)}) = Bd(N(x_0, U)) > 0. Since τ fulfills (I), (II) and (III*), by [24, Fact 2.1] it also has properties (III) and (IV), so we can apply Lemma 2.1 to x_0 and m, obtaining a (non-negative) T-invariant finite Borel measure µ on X for which µ(K) ≥ m(1l_{N(x_0,K)}) for each K ∈ K_τ and such that supp(µ) ⊂ $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$. In particular we get µ(U) ≥ m(1l_{N(x_0,U)}) > 0, so µ is a positive T-invariant finite Borel measure. Normalizing µ we get the desired measure.

Fact 2.3.2. Let x_0 ∈ RRec(T). Then one can find a T-invariant probability measure µ_{x_0} on X such that x_0 ∈ supp(µ_{x_0}) ⊂ $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$.

Proof. Set O(x_0) := $\overline{\mathrm{Orb}(x_0, T)}^{\tau}$. Using (III*), let (U_n)_{n∈N} be a basis of τ_X-neighbourhoods of x_0 consisting of τ-compact sets. Applying Fact 2.3.1 to each set U_n we obtain a sequence (µ_n)_{n∈N} of T-invariant probability measures on X for which µ_n(U_n) > 0 and such that supp(µ_n) ⊂ O(x_0) for each n ∈ N. Then the measure
$$\mu_{x_0} := \sum_{n=1}^{\infty} \frac{1}{2^n} \mu_n$$
is a T-invariant probability measure on X.
Moreover, for any τ_X-neighbourhood U of x_0 there is an integer n ∈ N with U_n ⊂ U, and hence
$$\mu_{x_0}(U) \geq \frac{1}{2^n} \mu_n(U_n) > 0.$$
This implies that x_0 ∈ supp(µ_{x_0}). Also, given x ∉ O(x_0) there is a τ-neighbourhood V of x, which by (II) is also a τ_X-neighbourhood of x, such that V ∩ O(x_0) = ∅. Since supp(µ_n) ⊂ O(x_0) for every n ∈ N we deduce that µ_n(V) = 0 for every n ∈ N, and by the definition of µ_{x_0} we get µ_{x_0}(V) = 0. This implies that x ∉ supp(µ_{x_0}), and hence supp(µ_{x_0}) ⊂ O(x_0).

To complete the proof of Theorem 2.3, let T be reiteratively recurrent. Since X is separable there is a countable set {x_n : n ∈ N} ⊂ RRec(T) which is dense in X. Applying Fact 2.3.2 to each point x_n we obtain a sequence (µ_{x_n})_{n∈N} of T-invariant probability measures on X such that x_n ∈ supp(µ_{x_n}) for each n ∈ N. Finally, the measure
$$\mu := \sum_{n=1}^{\infty} \frac{1}{2^n} \mu_{x_n}$$
is a T-invariant probability measure on X with full support.

Remark 2.5. If for each open subset U of X there is a point x_U ∈ X such that Bd(N(x_U, U)) > 0, then one can find a T-invariant probability measure µ on X with full support. Indeed, one just has to use (III*) to consider an appropriate countable family of τ-compact sets whose τ_X-interiors form a base of the initial topology τ_X, apply Fact 2.3.1 to those τ-compact sets and take an infinite convex combination of the obtained measures.

3 From reiterative to frequent recurrence

A key lemma

An important tool for the proof of Theorem 1.3 is the following lemma:

Lemma 3.1. Let (X, T) be a Polish dynamical system and let µ be a T-invariant probability measure on X. Then µ(FRec(T)) = 1 and, in particular, supp(µ) ⊂ $\overline{\mathrm{FRec}(T)}$.

Proof. Let B ∈ B(X) with µ(B) > 0. By the ergodic decomposition theorem we can find an ergodic T-invariant probability measure m on X such that m(B) > 0. Fix a countable basis (U_n)_{n∈N} of non-empty open subsets of X. By Birkhoff's Pointwise Ergodic theorem applied to each indicator function 1l_{U_n}, the limit density dens(N(x, U_n)) exists and equals m(U_n) for m-a.e. point x ∈ X; that is, for each n ∈ N there is a set A_n ⊂ X with m(A_n) = 1 such that dens(N(x, U_n)) = m(U_n) for every x ∈ A_n. Since a countable union of null sets is again null, the set
$$A := \mathrm{supp}(m) \cap \bigcap_{n \in \mathbb{N}} A_n$$
satisfies m(A) = 1. We claim that A ⊂ FRec(T). Indeed, for every x ∈ A and every neighbourhood U of x there is an integer n ∈ N such that x ∈ U_n ⊂ U. Since A ⊂ supp(m) we have that U_n ∩ supp(m) ≠ ∅, and hence
$$\underline{\mathrm{dens}}(N(x, U)) \geq \mathrm{dens}(N(x, U_n)) = m(U_n) > 0.$$
The arbitrariness of the neighbourhood U of x implies that x ∈ FRec(T). Now, since m(A) = 1 and m(B) > 0 we obtain A ∩ B ≠ ∅, and hence B ∩ FRec(T) ≠ ∅. Since this is true for every set B ∈ B(X) with µ(B) > 0, we deduce that µ(FRec(T)) = 1. In particular, since supp(µ) is the smallest closed subset of X with full µ-measure, we get that supp(µ) ⊂ $\overline{\mathrm{FRec}(T)}$.

Theorem 3.3. Let (X, T) be a Polish dynamical system, denote by τ_X the initial topology of X and assume that X is endowed with a Hausdorff topology τ which fulfills (I), (II) and (III*). Then we have the equality
$$\mathrm{FRec}(T) = \mathrm{UFRec}(T) = \mathrm{RRec}(T).$$
Moreover:

(a) The following statements are equivalent: (i) FRec(T) ≠ ∅; (ii) UFRec(T) ≠ ∅; (iii) RRec(T) ≠ ∅; (iv) T admits an invariant probability measure.

(b) The following statements are equivalent: (i) T is frequently recurrent; (ii) T is U-frequently recurrent; (iii) T is reiteratively recurrent; (iv) T admits an invariant probability measure with full support.

As we already mentioned in the Introduction, this result is false for general Polish dynamical systems: there even exist reiteratively hypercyclic operators on c_0(N) without any non-zero U-frequently recurrent vector (see [10, Theorem 5.7 and Corollary 5.8]). Working with linear dynamical systems implies a reformulation of the above result, which is Theorem 1.3.

Proof of Theorem 1.3

Let T : X → X be an adjoint operator on a separable dual Banach space X.
Denote by τ_‖·‖ the norm topology, consider the weak-star topology w* and note that:

(I) since T is an adjoint operator, it is a continuous self-map of (X, w*);

(II) by the definition of the topologies, we have w* ⊂ τ_‖·‖;

(III*) by the Alaoglu-Bourbaki theorem, the translates of the family of closed balls centred at 0 form a τ_‖·‖-neighbourhood basis consisting of w*-compact sets.

If T : X → X is an operator on a separable reflexive Banach space X, the same conditions hold for the weak topology. From here one can apply the same arguments as those used in the proof of Theorem 3.3. In particular, if we consider a point x_0 ∈ RRec(T) \ {0}, then the measure µ_{x_0} obtained by Theorem 2.3 is a non-trivial invariant probability measure.

Remark 3.4. The equality FRec(T) = RRec(T), and hence the equivalences (i) ⇔ (ii) ⇔ (iii) established in Theorem 1.3, are still true when the underlying space X is a non-separable reflexive Banach space. Indeed, given an operator T : X → X on a non-separable reflexive Banach space X, and given a point x_0 ∈ RRec(T), we can consider the separable closed T-invariant subspace Z := $\overline{\mathrm{span}}$(Orb(x_0, T)), which is again reflexive. Then T|_Z : Z → Z is an operator on a separable reflexive Banach space. Moreover, recurrence is a local property, i.e. for each Furstenberg family F we have the equality
$$\mathcal{F}\mathrm{Rec}(T|_Z) = \mathcal{F}\mathrm{Rec}(T) \cap Z.$$
Applying Theorem 1.3 to T|_Z we have x_0 ∈ RRec(T|_Z) and hence x_0 ∈ FRec(T|_Z) ⊂ FRec(T). However, we cannot say the same about statement (iv) of Theorem 1.3, since separability is essential to construct and extend the invariant measures onto the whole space. The above arguments are also restricted to the reflexive case because closed subspaces of a dual Banach space are not necessarily dual Banach spaces (consider c_0(N) ⊂ ℓ^∞(N)).

4 From uniform recurrence to unimodular eigenvectors

Our aim in this section is to connect some recurrence properties (stronger than those considered in Sections 2 and 3), for linear dynamical systems on complex Banach spaces, to the existence of unimodular eigenvectors. This investigation is motivated by the fact that, given a complex linear map T : X → X on a complex topological vector space X, the linear span of its unimodular eigenvectors E(T) consists of ∆*-recurrent vectors. It is shown in [10, Lemma 7.1 and Corollary 7.2] that they are IP*-recurrent, and in fact the same arguments hold by using [22, Proposition 9.8] applied to the Kronecker system consisting of the compact group T^k and the (left) multiplication (z_1, ..., z_k) ↦ (λ_1 z_1, ..., λ_k z_k) for a fixed k-tuple (λ_1, ..., λ_k) ∈ T^k. We give an alternative proof via invariant measures:

Lemma 4.1. For every complex linear dynamical system (X, T) we have span(E(T)) ⊂ ∆*Rec(T).

Proof. It is enough to check that {n ∈ N : |λ^n − 1| < ε} ∈ ∆* for every λ ∈ T and every ε > 0, since then, for any x = Σ_{i=1}^{k} a_i x_i with T x_i = λ_i x_i and any neighbourhood of x, the return set contains a finite intersection of such sets. Fix λ ∈ T and ε > 0, consider the rotation R_λ : z ↦ λz on T, which preserves the normalized Haar measure σ of T, and let U := {z ∈ T : |z − 1| < ε/2}. Given any strictly increasing sequence (n_k)_{k≥1}, the sets R_λ^{-n_k}(U) all have the same positive σ-measure, so there are j < k with σ(R_λ^{-n_j}(U) ∩ R_λ^{-n_k}(U)) > 0, and hence σ(U ∩ R_λ^{-(n_k − n_j)}(U)) > 0. Thus the set A := {n ∈ N : σ(U ∩ R_λ^{-n}(U)) > 0} meets every ∆-set, that is, A ∈ ∆*. Moreover, for n ∈ A there is z ∈ U with λ^n z ∈ U. By the triangular inequality we get |λ^n − 1| < ε for each n ∈ A, and hence ∆* ∋ A ⊂ {n ∈ N : |λ^n − 1| < ε}, so {n ∈ N : |λ^n − 1| < ε} ∈ ∆*. Since the Furstenberg family ∆* is a filter (see [8]) and λ ∈ T and ε > 0 were chosen arbitrarily, the proof is finished.

Our goal is now to prove Theorem 1.7, which states that for any operator acting on a complex Hilbert space, the existence of a non-zero reiteratively recurrent vector with bounded orbit, and in particular the existence of a uniformly recurrent vector, implies the existence of a unimodular eigenvector. The proof of Theorem 1.7 relies heavily on the machinery of Gaussian measures on (complex separable) Hilbert spaces. We begin by recalling some basic facts concerning these Gaussian measures, as well as some deeper results pertaining to the Ergodic Theory of Gaussian linear dynamical systems.
We refer the reader to one of the references [12] or [15] for more about Gaussian measures on Banach spaces, and to [3] and [4] for more on their role in linear dynamics.

Ergodic Theory for linear dynamical systems and Gaussian measures

The study of Ergodic Theory in the framework of linear dynamics started with the pioneering work of Flytzanis (see [20, 21]), and was then further developed in the papers [1], [2] and [4], among others, focusing on the existence of invariant Gaussian measures satisfying some further dynamical properties such as weak/strong mixing.

Definition 4.2. A Borel probability measure m on a complex Banach space X is said to be a Gaussian measure if every continuous linear functional x* ∈ X* has a complex Gaussian distribution when considered as a random variable on (X, B(X), m).

It is now well understood that the dynamics of a linear dynamical system (X, T) are closely related to the properties of the unimodular eigenvectors of T. The situation is especially well understood in the Hilbertian setting, since the existence of an invariant Gaussian measure (with full support, or with respect to which T is ergodic or weakly/strongly mixing) can be fully characterized in terms of the properties of the set E(T). See [1] and [3] for details. These characterizations do not hold true, in general, in the Banach space setting, but still many results are preserved, allowing for a rather thorough understanding of the Ergodic Theory of linear dynamical systems in this Gaussian framework. See [2], [3] and [4] for details.

Even though Gaussian measures are an essential tool for our proof of Theorem 1.7 (see Lemma 4.4 below), the properties that such measures (may) have are properties that arbitrary probability measures can have too. We introduce these properties following [12]:

Definition 4.3. Let µ be a probability measure on a Banach space X:

(a) suppose that there exists an element x ∈ X such that $\int_X \langle x^*, z \rangle \, d\mu(z) = \langle x^*, x \rangle$ for every x* ∈ X*; then x is called the expectation of the measure µ, and in this case we will write $\int_X z \, d\mu(z) := x$;

(b) we say that µ is centered if its expectation exists and is equal to 0 ∈ X;

(c) we say that µ has a finite second-order moment if $\int_X \|z\|^2 \, d\mu(z) < \infty$.

If µ has a finite second-order moment then its expectation (called the Pettis integral of µ) exists (see [16, Page 55]). Given a centered probability measure µ on X with a finite second-order moment, following [12, Page 169] and [3, Theorem 5.9], we can define the covariance operator of such a measure µ as the bounded linear operator R : X* → X satisfying
$$\langle y^*, R x^* \rangle = \int_X \langle x^*, z \rangle \langle y^*, z \rangle \, d\mu(z) \tag{2}$$
for every pair of elements x* and y* of X*. In other words, R x* is the Pettis integral of the X-valued map z ↦ ⟨x*, z⟩ z.

Any Gaussian measure m on X has a finite second-order moment (see [3, Exercise 5.5]), and since we will consider in this work only centered Gaussian measures, we will always have an associated covariance operator for such a measure m. When H is a complex separable Hilbert space, the covariance operator of a centered probability measure µ on H with a finite second-order moment is usually defined, in a slightly different way, as the bounded linear operator S : H → H for which
$$\langle S x, y \rangle = \int_H \langle x, z \rangle \, \overline{\langle y, z \rangle} \, d\mu(z) \quad \text{for every } x, y \in H. \tag{3}$$
Observe that, contrary to (2), the operator S in (3) acts on H itself; the two definitions are identified through the (conjugate-linear) Riesz representation of H*.

Lemma 4.4. Let T be an operator on a complex separable Hilbert space H, and let µ be a T-invariant probability measure on H with a finite second-order moment. Then
$$\mathrm{supp}(\mu) \subset \overline{\mathrm{span}}(E(T)).$$

Proof. Suppose first that µ is a centered measure on H. Then, since H is a Hilbert space, the covariance operator S of µ defined as in (3) satisfies
$$\langle S x, y \rangle = \int_H \langle x, z \rangle \, \overline{\langle y, z \rangle} \, d\mu(z) \quad \text{for every } x, y \in H,$$
and by [3, Corollary 5.15] it is also the covariance operator of a certain Gaussian measure m on H. From now on we split the proof in three steps:

Step 1.
The Gaussian measure m is T-invariant: the covariance operator of the image measure of m under T is TST*, and for every x, y ∈ H we have
$$\langle T S T^* x, y \rangle = \int_H \langle x, Tz \rangle \, \overline{\langle y, Tz \rangle} \, d\mu(z) = \int_H \langle x, z \rangle \, \overline{\langle y, z \rangle} \, d\mu(z) = \langle S x, y \rangle,$$
since µ is T-invariant. By [3, Proposition 5.22] we deduce that m is T-invariant.

Step 2. We have the equality $\overline{\mathrm{span}}(\mathrm{supp}(\mu)) = \mathrm{supp}(m)$: indeed, supp(m) is the closure of the range of S, so that supp(m)^⊥ = ker(S), and
$$\ker(S) = \Big\{ y \in H : \int_H |\langle y, z \rangle|^2 \, d\mu(z) = 0 \Big\} \overset{(*)}{=} \{ y \in H : \langle y, z \rangle = 0 \ \text{for every } z \in \mathrm{supp}(\mu) \} = \big( \mathrm{span}(\mathrm{supp}(\mu)) \big)^{\perp},$$
where the equality (*) follows from the continuity of the maps ⟨y, ·⟩ : H → C.

Step 3. We have the inclusion supp(µ) ⊂ $\overline{\mathrm{span}}$(E(T)): in [3, Theorem 5.46] it is stated that if a Banach space X has cotype 2, then every operator in L(X) admitting a Gaussian invariant measure with full support has a spanning set of unimodular eigenvectors. Since the support of m is a closed linear subspace of H (Step 2) and every Hilbert space has cotype 2, [3, Theorem 5.46] applied to the T-invariant measure m (Step 1) implies that supp(m) ⊂ $\overline{\mathrm{span}}$(E(T)), and hence, using again Step 2, we get that
$$\mathrm{supp}(\mu) \subset \overline{\mathrm{span}}(\mathrm{supp}(\mu)) = \mathrm{supp}(m) \subset \overline{\mathrm{span}}(E(T)).$$

Remark 4.5. If we start the proof of Lemma 4.4 with the underlying space being a Banach space X which has type 2, then there exists a Gaussian measure m on X whose covariance operator is R, as defined in (2). Indeed, since R is a symmetric and positive operator it admits a square root: there exist some separable Hilbert space H and an operator K : H → X such that R = KK* (see [3, Page 101]). Moreover, by the finite second-order moment condition on µ, the operator K* is an absolutely 2-summing operator, and hence such a Gaussian measure m on X exists by [3, Corollary 5.20]. However, in Step 3 of the proof above the underlying space needs to have cotype 2. Since the only spaces which are both of type 2 and of cotype 2 are those which are isomorphic to a Hilbert space, the proof of Lemma 4.4 does not extend outside of the Hilbertian setting.

As for the proof of Theorem 1.7, given x_0 ∈ RRec_bo(T) \ {0}, Theorem 2.3 applied with the weak topology (as in the proof of Theorem 1.3) provides a non-trivial T-invariant probability measure whose support is contained in the bounded set $\overline{\mathrm{Orb}(x_0, T)}^{w}$, and which therefore has a finite second-order moment. If RRec_bo(T) is dense in H, an infinite convex combination of such measures, with weights chosen small enough to keep the second-order moment finite, yields a T-invariant finite measure µ with full support and a finite second-order moment. Normalizing µ we get a T-invariant probability measure with full support and finite second-order moment. Moreover, in both cases (a) and (b) we have, by Lemma 4.4,
$$\mathrm{supp}(\mu) \subset \overline{\mathrm{span}}(E(T)),$$
from which the statements of Theorem 1.7 follow, using also Lemma 4.1.

We finish this section with the proof of Theorem 1.9, which concerns power-bounded operators on complex reflexive Banach spaces. The proof relies on the splitting theorem of Jacobs-Deleeuw-Glicksberg, and is really specific to the setting of power-bounded operators. We follow the presentation and notation of [31, Section 2.4]: if S is a semigroup of L(X), we say that S is weakly almost periodic if for any x ∈ X the set Sx = {Sx : S ∈ S} has a w-compact closure.

Proof of Theorem 1.9

Given a power-bounded operator T : X → X on a complex reflexive Banach space X, we already know that span(E(T)) ⊂ URec(T), so we just have to show the inclusion URec(T) ⊂ $\overline{\mathrm{span}}$(E(T)). We set O(x) := $\overline{\mathrm{Orb}(x, T)}^{w}$ for each x ∈ X. Since T is power-bounded, every T-orbit is bounded and has a w-compact closure. Hence, by the Jacobs-Deleeuw-Glicksberg theorem [31, Section 2.4, Theorem 4.4] applied to the (weakly almost periodic) abelian semigroup of operators {T^n : n ∈ N_0} ⊂ L(X), we obtain that X = X_rev ⊕ X_fl, where
$$X_{rev} := \{ x \in X : x \in \overline{\{T^n x : n \in \mathbb{N}\}}^{\,w} \} \quad \text{and} \quad X_{fl} := \{ x \in X : 0 \in \overline{\{T^n x : n \in \mathbb{N}\}}^{\,w} \}.$$
Moreover, by the second part of this same theorem [31, Section 2.4, Theorem 4.5] we also get that X_rev = $\overline{\mathrm{span}}$(E(T)). Let us now show that URec(T) ⊂ X_rev. Indeed, given x ∈ URec(T) \ {0} we can consider the w-compact dynamical system (O(x), T|_{O(x)}). Since the weak topology is coarser than the norm topology, we have x ∈ URec(T|_{O(x)}), and hence by [22, Theorem 1.17] the dynamical system (O(x), T|_{O(x)}) is minimal. In particular, for every y ∈ O(x) we have O(y) = O(x), which implies that x ∈ O(y). The arbitrariness of y ∈ O(x) shows that x ∈ X_rev.

5 Product dynamical systems

Given a property of a dynamical system T : X → X, it is usual to ask whether the product dynamical system T × T : X × X → X × X has the same property.
Studied cases in linear dynamics are transitivity or hypercyclicity (which gives us the concept of topological weak mixing), and in general F-transitivity or F-hypercyclicity (see [7] and [18]). Here we show that the above theorems still work for the product systems.

Theorem 5.1 (From Reiterative to N-Dimensional Frequent Recurrence). Let N ∈ N and suppose that for each 1 ≤ i ≤ N there is a Polish dynamical system (X_i, T_i) such that (X_i, τ_{X_i}) can be endowed with a Hausdorff topology τ_i which fulfills (I), (II) and (III*) with respect to the map T_i and the topology τ_{X_i}. Then for the product dynamical system T := T_1 × ··· × T_N : (X, τ_X) → (X, τ_X), where X := ∏_{i=1}^{N} X_i and τ_X is the product topology of the N topologies τ_{X_i}, we have the equality
$$\mathrm{FRec}(T) = \mathrm{UFRec}(T) = \mathrm{RRec}(T) = \prod_{i=1}^{N} \mathrm{RRec}(T_i).$$
In particular:

(a) The following statements are equivalent: (i) FRec(T) ≠ ∅; (ii) UFRec(T) ≠ ∅; (iii) RRec(T) ≠ ∅; (iv) RRec(T_i) ≠ ∅ for every 1 ≤ i ≤ N.

(b) The following statements are equivalent: (i) T is frequently recurrent; (ii) T is U-frequently recurrent; (iii) T is reiteratively recurrent; (iv) T_i is reiteratively recurrent for every 1 ≤ i ≤ N.

Proof. We clearly have the inclusion RRec(T) ⊂ ∏_{i=1}^{N} RRec(T_i). Conversely, given x_0 = (x_1, ..., x_N) ∈ ∏_{i=1}^{N} RRec(T_i), by Fact 2.3.2 each x_i admits a T_i-invariant probability measure µ_{x_i} with x_i ∈ supp(µ_{x_i}), and we can consider the product measure µ_{x_0} := ⊗_{i=1}^{N} µ_{x_i} on the product space X, which is a T-invariant measure (see [36, Theorem 1.1 and Definition 1.2]) for which x_0 ∈ supp(µ_{x_0}). Applying now Lemma 3.1 we deduce that x_0 ∈ $\overline{\mathrm{FRec}(T)}^{\tau_X}$.

The following immediate corollaries yield a product version of Theorem 1.3:

Corollary 5.2. Let N ∈ N and consider for each 1 ≤ i ≤ N an adjoint operator T_i : X_i → X_i on a separable dual Banach space X_i. Then, for the direct sum operator T = T_1 ⊕ ··· ⊕ T_N : X → X on the direct sum space X = X_1 ⊕ ··· ⊕ X_N, we have the equality
$$\mathrm{FRec}(T) = \mathrm{UFRec}(T) = \mathrm{RRec}(T) = \prod_{i=1}^{N} \mathrm{RRec}(T_i).$$
In particular, the following statements are equivalent: (i) T is frequently recurrent; (ii) T is U-frequently recurrent; (iii) T is reiteratively recurrent; (iv) T_i is reiteratively recurrent for every 1 ≤ i ≤ N. Moreover, the result holds whenever some of the T_i are operators defined on some reflexive Banach spaces X_i.

In the statement above, and whenever we consider a direct sum space X_1 ⊕ ··· ⊕ X_N, one can use any norm defining the usual product topology on X_1 ⊕ ··· ⊕ X_N (see Theorem 5.8).

Definition 5.3. Let (X, T) be a linear dynamical system and let n ∈ N. We will denote by T_n : X_n → X_n the n-fold direct sum of T with itself, i.e. the dynamical system
$$T_n := \underbrace{T \oplus \cdots \oplus T}_{n} : X_n \longrightarrow X_n, \quad \text{where } X_n := \underbrace{X \oplus \cdots \oplus X}_{n}$$
is the n-fold direct sum of X with itself.

Corollary 5.4. Let T : X → X be an adjoint operator on a separable dual Banach space X. Then the following statements are equivalent: (i) for every n ∈ N, T_n is frequently recurrent; (ii) for every n ∈ N, T_n is U-frequently recurrent; (iii) for every n ∈ N, T_n is reiteratively recurrent; (iv) T is reiteratively recurrent. In particular, the result holds whenever T is an operator on a reflexive Banach space X.

As a consequence of the above fact we can prove some results related to hypercyclicity. We start with an independent proof of [18, Theorem 2.5 and Corollary 2.6] for the particular case of reiteratively hypercyclic (adjoint) operators:

Theorem 5.5. Let T : X → X be a reiteratively hypercyclic adjoint operator on a separable dual Banach space X. Then for every n ∈ N the operator T_n is reiteratively hypercyclic and frequently recurrent. In particular, the result holds whenever T is an operator on a separable reflexive Banach space X.

Proof. Let n ∈ N.
Since T is reiteratively hypercyclic we know that:

(a) T is topologically weakly mixing (see [6, Page 548]), and hence T_n is topologically transitive, and in particular hypercyclic;

(b) T is reiteratively recurrent, and by the above results T_n is frequently recurrent, in particular reiteratively recurrent.

By [10, Theorem 2.1], hypercyclicity together with reiterative recurrence implies that T_n is reiteratively hypercyclic.

If we start just with reiterative recurrence, having a dense set of orbits converging to 0 implies a strong notion of hypercyclicity:

Theorem 5.6. Let T : X → X be an adjoint operator on a separable dual Banach space X. Suppose that there is a dense set X_0 ⊂ X such that T^k x → 0 as k → ∞ for each x ∈ X_0. The following statements are equivalent: (i) for every n ∈ N, T_n is U-frequently hypercyclic and frequently recurrent; (ii) T is reiteratively recurrent. In particular, the result holds if T is an operator on a separable reflexive Banach space X.

Proof. Clearly (i) implies (ii), even if T : X → X is not a linear map. If we suppose (ii) and we fix n ∈ N, by the above results we get that T_n is frequently recurrent and in particular U-frequently recurrent. Let Y_0 := X_0 ⊕ ··· ⊕ X_0 be the n-fold direct sum of the set X_0. Then Y_0 is a dense subset of the n-fold direct sum X_n, and every orbit of a point of Y_0 converges to (0, ..., 0) ∈ X_n. By [10, Theorem 2.12], the existence of Y_0 and the U-frequent recurrence imply that T_n is U-frequently hypercyclic.

It would be interesting to change the assumption of U-frequent hypercyclicity in the above statement into that of frequent hypercyclicity. However, as exposed in [10, Question 2.13], the following is an open problem:

Question 5.7 ([10, Question 2.13]). Let T be a frequently recurrent operator admitting a dense set of vectors with orbit convergent to 0. Is T frequently hypercyclic?

If we now focus on Theorems 1.7 and 1.9, their generalizations for product linear dynamical systems follow in a much easier way, since any N-tuple formed by unimodular eigenvectors is a linear combination of unimodular eigenvectors of the direct sum map:

Theorem 5.8. Let N ∈ N and suppose that for each 1 ≤ i ≤ N:

(a) we have an operator T_i : H_i → H_i on a complex Hilbert space H_i. Then, for the direct sum operator T = T_1 ⊕ ··· ⊕ T_N : H → H on the direct sum Hilbert space H = H_1 ⊕ ··· ⊕ H_N, we have the equality
$$\overline{\mathrm{span}}(E(T)) = \overline{\mathrm{span}}\Big( \prod_{i=1}^{N} \mathrm{RRec}_{bo}(T_i) \Big).$$
In particular, the following statements are equivalent: (i) the set span(E(T)) is dense in H; (ii) the set RRec_bo(T_i) is dense in H_i for every 1 ≤ i ≤ N.

(b) we have a power-bounded operator T_i : X_i → X_i on a complex reflexive Banach space X_i. Then, for the direct sum operator T = T_1 ⊕ ··· ⊕ T_N : X → X on the direct sum space X = X_1 ⊕ ··· ⊕ X_N, we have the equality
$$\overline{\mathrm{span}}(E(T)) = \overline{\mathrm{span}}\Big( \prod_{i=1}^{N} \mathrm{URec}(T_i) \Big).$$
In particular, the following statements are equivalent: (i) the set span(E(T)) is dense in X; (ii) T_i is uniformly recurrent for every 1 ≤ i ≤ N.

Proof. Since the vector (0, ..., 0, x_i, 0, ..., 0) belongs to E(T) whenever x_i ∈ E(T_i) for each 1 ≤ i ≤ N, it is enough to apply Theorems 1.7 and 1.9 to each operator T_i.

Finally we get the desired generalization of Theorems 1.7 and 1.9:

Corollary 5.9. Let T ∈ L(H) where H is a complex Hilbert space. The following statements are equivalent: (i) for every n ∈ N, the set span(E(T_n)) is dense in H_n; (ii) for every n ∈ N, T_n is ∆*-recurrent; (iii) for every n ∈ N, T_n is IP*-recurrent; (iv) for every n ∈ N, T_n is uniformly recurrent; (v) the set span(E(T)) is dense in H.

6 Inverse dynamical systems

As in the case of products, given a (topological) dynamical system T : X → X with some property, it is natural to ask whether the inverse dynamical system T^{-1} : X → X (if it exists and is continuous) has the same property.
This is true for hypercyclicity and reiterative hypercyclicity (see [9]), but it fails for U-frequent hypercyclicity (see [33]) and frequent hypercyclicity (see [34]). It is also known that the inverse of a frequently hypercyclic operator is U-frequently hypercyclic (see [5, Proposition 20]). If we focus on recurrence properties, the inverse of a recurrent operator is again recurrent, as [14, Proposition 2.6] shows. A simpler proof (in a transitive style) of that fact would be: given a non-empty open subset U of X and n ∈ N with T^n(U) ∩ U ≠ ∅, applying the homeomorphism T^{-n} yields
$$(T^{-1})^n(U) \cap U = T^{-n}(U) \cap U \supset T^{-n}\big( T^n(U) \cap U \big) \neq \emptyset,$$
so T^{-1} is topologically recurrent whenever T is.

Question 6.2 ([10, Question 2.14]). Let T be an invertible operator. If T is reiteratively (resp. U-frequently, frequently or uniformly) recurrent, does T^{-1} have the same property?

In fact, we could ask the same question for IP*, ∆*-recurrence and unimodular eigenvectors. However, for the latter the linearity is enough: if Tx = λx for some λ ∈ T, then T^{-1}x = λ^{-1}x = $\overline{\lambda}$x, so clearly span(E(T)) = span(E(T^{-1})). In order to answer Question 6.2 in our dual/reflexive or Hilbertian setting, we just have to recall the following trivial fact: given a homeomorphism T : X → X of a Polish space X and a Borel measure µ on X, µ is T-invariant if and only if it is T^{-1}-invariant.

Theorem 6.3 (From Reiterative to Inverse Frequent Recurrence). Let T : X → X be a homeomorphism of the Polish space (X, τ_X), and assume that X is endowed with a Hausdorff topology τ which fulfills (I) for T, (II), and (III*). Then we have
$$\mathrm{RRec}(T) \subset \mathrm{FRec}(T^{-1}).$$
Moreover, if T is reiteratively recurrent then T^{-1} is frequently recurrent.

As a corollary of the above theorem, and using the arguments from Theorem 1.3, we have:

Corollary 6.4. Let T : X → X be an invertible adjoint operator on a separable dual Banach space X. Then we have
$$\mathrm{RRec}(T) = \mathrm{FRec}(T) = \mathrm{FRec}(T^{-1}) = \mathrm{RRec}(T^{-1}).$$
Moreover, T is reiteratively (and hence frequently) recurrent if and only if so is T^{-1}. In particular, the result holds whenever T is an operator on a reflexive Banach space X.

Proof. Let S : Y → Y be an operator on a Banach space Y such that Y* = X and S* = T. It is a known fact that T is invertible if and only if S is invertible, and in this case T^{-1} = (S^{-1})*, so T^{-1} is also an adjoint operator on the separable dual Banach space X and hence it is w*-continuous. The result follows from the above theorem applied to T : X → X, the space (X, ‖·‖) and the topology w*.

With the above fact we give an alternative proof of [9, Theorem 3.6] for adjoint operators:

Theorem 6.5. Let T : X → X be an invertible adjoint operator on a separable dual Banach space. If T is reiteratively hypercyclic (and hence frequently recurrent) then so is T^{-1}. In particular, the result holds whenever T is an operator on a separable reflexive Banach space X.

Proof. By the above theorem T^{-1} is frequently recurrent and in particular reiteratively recurrent. Since hypercyclicity (or transitivity) is also preserved by taking the inverse system, [10, Theorem 2.1] implies that T^{-1} is reiteratively hypercyclic.
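As a quick numerical sanity check (an illustration of ours, not taken from the paper), the symmetry between T and T^{-1} can be observed for a planar rotation, an invertible power-bounded operator for which every vector is uniformly recurrent. For a rotation the two return sets even coincide, since the matrix is orthogonal and therefore ‖R^{-n}x − x‖ = ‖R^n x − x‖.

```python
import numpy as np

def returns(matrix, x0, radius, n_max):
    """Times n <= n_max with ||matrix^n x0 - x0|| < radius."""
    hits, x = [], x0.copy()
    for n in range(1, n_max + 1):
        x = matrix @ x
        if np.linalg.norm(x - x0) < radius:
            hits.append(n)
    return hits

theta = np.sqrt(2)                 # irrational angle: aperiodic rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x0 = np.array([1.0, 0.0])
print(returns(R, x0, 0.1, 300))                 # return set of T
print(returns(np.linalg.inv(R), x0, 0.1, 300))  # same list for T^{-1}
```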
We cannot change the assumption of reiterative hypercyclicity in the statement of Theorem 6.5 above into the assumption of U-frequent hypercyclicity, since there are invertible U-frequently hypercyclic operators on ℓ^p(N) (1 ≤ p < ∞) whose inverse is not U-frequently hypercyclic (see [33]). However, it would be interesting to know whether it is possible to change the assumption of reiterative hypercyclicity into that of frequent hypercyclicity: even though it is known that there are invertible frequently hypercyclic operators on ℓ^1(N) whose inverse is not frequently hypercyclic (see [34]), one can check that these are not adjoint operators, and moreover by [5, Proposition 20] the inverse of a frequently hypercyclic operator is always U-frequently hypercyclic. All the counterexamples mentioned here are C-type operators, which were introduced for the first time in [32] and further developed in [26, 33, 34], so a possible counterexample for the frequent hypercyclicity case could arise from those operators. If, on the other hand, one wishes to prove an analogue of Theorem 6.5 for the frequent hypercyclicity case in our dual/reflexive framework, one cannot take a similar approach, since there are chaotic operators, which are in particular frequently recurrent and hypercyclic, but not U-frequently hypercyclic (see [32] and [26]) and hence not frequently hypercyclic.

If we now focus on uniform, IP* and ∆*-recurrence, Theorems 1.7 and 1.9 combined with the equality span(E(T)) = span(E(T^{-1})) give us the following:

Corollary 6.6. Let T : H → H be an invertible operator on a complex Hilbert space H. Then span(E(T)) is dense in H if and only if span(E(T^{-1})) is dense in H. In particular, T is uniformly (and hence IP* and ∆*) recurrent if and only if so is T^{-1}.

Corollary 6.7. Let T : X → X be an invertible operator on a complex reflexive space X. If T is power-bounded, then we have
$$\overline{\mathrm{URec}(T)} = \overline{\mathrm{span}}(E(T)) = \overline{\mathrm{span}}(E(T^{-1})) \subset \overline{\mathrm{URec}(T^{-1})}.$$
In particular, if T is uniformly recurrent then span(E(T^{-1})) is a dense set in X. Moreover, if T^{-1} is also power-bounded then the above inclusion is an equality, and T is uniformly (and hence IP* and ∆*) recurrent if and only if so is T^{-1}.

7 How typical is a reiteratively recurrent operator?

Let H be a complex separable Hilbert space. For any M > 0, denote by L_M(H) the set of bounded operators T ∈ L(H) such that ‖T‖ ≤ M. Our aim in this short section is to present a result pertaining to the typicality of reiteratively recurrent operators of L_M(H), with M > 1, for one of the two (Polish) topologies SOT and SOT*. The framework that we use here is presented in detail in [26, Chapters 2 and 3], so we will be rather brief in our presentation and refer the readers to the works [25], [26] or [27] for more on typical properties of operators on Hilbert or Banach spaces. We recall that the Strong Operator Topology (SOT) on L(H) is defined as follows: any T_0 ∈ L(H) has a SOT-neighbourhood basis consisting of sets of the form
$$\{ T \in L(H) : \| (T - T_0) x_i \| < \varepsilon \ \text{ for } i = 1, \dots, k \},$$
where x_1, ..., x_k ∈ H and ε > 0; the topology SOT* is defined analogously, requiring additionally that ‖(T − T_0)* x_i‖ < ε for i = 1, ..., k.

8 Some open questions and comments

Question 8.1. Let T be a uniformly recurrent operator on a complex Banach space X. Is span(E(T)) necessarily dense in X?

Question 8.2. Does there exist an operator (possibly on a Fréchet space) which is uniformly recurrent but not ∆*-recurrent? What about distinguishing uniform recurrence from IP*-recurrence?

Note that these two questions make sense in the more general context of complex Fréchet spaces, and in fact both questions are still unsolved for that rather general class of spaces. It is clear that, in every possible complex context, a positive answer to Question 8.1 implies a negative one to Question 8.2. Moreover, it would even imply a negative answer for the real case of Question 8.2: given any uniformly recurrent real linear dynamical system we could consider its complexification, and by the product arguments used for Theorem 5.8 we would get unimodular eigenvectors and hence ∆*-recurrence; the initial real dynamical system would possibly not contain the obtained unimodular eigenvectors, but the real and complex parts of such vectors would clearly be ∆*-recurrent for the original real system.
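The complexification argument just sketched reduces to the following routine estimate, spelled out here for completeness and under the usual convention that the norm on the complexification satisfies ‖Re w‖ ≤ ‖w‖: if $\widetilde{T}$ denotes the complexification of T and $z = u + iv$ satisfies $\widetilde{T} z = \lambda z$ with $\lambda \in \mathbb{T}$, then
$$\|T^n u - u\| = \big\| \operatorname{Re}\big( (\lambda^n - 1) z \big) \big\| \leq |\lambda^n - 1| \, \|z\|,$$
and similarly for $v = \operatorname{Im}(z)$, so both u and v inherit the ∆*-recurrence from the sets $\{n : |\lambda^n - 1| < \varepsilon\} \in \Delta^*$.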
It is worth mentioning that uniform and IP*-recurrence are completely distinguished for compact dynamical systems (see the construction from [19], its properties in [11], and use [22, Theorems 1.15 and 9.12]), so the question is whether linearity prevents that distinction. The technique used in the proof of Theorem 1.7 (via Gaussian measures) is very different from the one used in Theorem 1.9 (via the Jacobs-de Leeuw-Glicksberg theorem). Indeed, we lose contact with measures, and the unimodular eigenvectors are obtained from a totally different construction (see [31, Section 2.4]). It seems to us that a more general "eigenvector-constructing machine", not restricted to the measure or power-bounded assumptions, should be developed in order to provide a better answer to Question 8.1. What we know for the moment, leaving apart the power-bounded case, which seems very specific, is the following:

Proposition 8.3. Let T be an operator on a complex separable Hilbert space H. If K is a T-invariant w-compact subset of H with 0 ∉ K, then T admits a non-trivial T-invariant Borel probability measure on H with a finite second-order moment.

Proof. We have that T|_K : (K, w) → (K, w) is a w-compact dynamical system, so it admits a T|_K-invariant probability measure µ on K (see [22, Page 62]). Since the norm topology and the weak topology on H have the same Borel sets, we can extend the measure µ to a Borel probability measure on the whole space H (still denoted by µ) using the formula µ(A) := µ(K ∩ A) for every Borel set A ∈ B(H). Note that µ is T-invariant. We deduce that: (a) µ is non-trivial, since 0 ∉ K; (b) µ has a finite second-order moment, since supp(µ) ⊆ K.

Since the weak closure of Orb(x, T) is a T-invariant w-compact subset of H for any point x ∈ H with bounded T-orbit, the arguments of the above proposition imply that, for any M > 1, a SOT*-typical operator T ∈ L_M(H) has the property that every bounded orbit of T contains 0 in its weak closure.

Proposition 8.5. Let T be an adjoint operator on a complex separable dual Banach space X. Let n ∈ N and λ ∈ T. Given a [λT]ⁿ-invariant w*-compact and convex subset K of X for which 0 ∉ K, we have E(T) ∩ span(Orb(x, T)) ≠ ∅ for some x ∈ K, and in particular E(T) ≠ ∅.

Proof. By Schauder's fixed-point theorem there is x ∈ K for which the identity [λT]ⁿx = x holds. Taking α = λ⁻ⁿ ∈ T we get that (α − Tⁿ)x = 0. If we split the polynomial (α − zⁿ) ∈ C[z] we have

α − z^n = (−1)^(n+1) (α_1 − z)(α_2 − z) ··· (α_n − z),

where the α_i's are the distinct n-th roots of α in T. Considering the vectors y_0 := x and y_j := (α_j − T)y_(j−1) for 1 ≤ j ≤ n, we have y_0 ≠ 0 since 0 ∉ K, but y_n = ±(α − Tⁿ)x = 0. Then for some 0 ≤ k ≤ n − 1 we have y_k ≠ 0 and (α_(k+1) − T)y_k = 0, so that y_k ∈ E(T) ∩ span(Orb(x, T)). In particular, E(T) ≠ ∅.

Another natural question concerning Theorem 1.7 is the relevance, in assertions (v) to (vii) of both parts (a) and (b), of the assumption that the vectors under consideration have bounded orbit. This fact is used in order to ensure that the invariant measures, which by Theorem 2.3 can be constructed from each reiteratively recurrent vector, have a finite second-order moment. Omitting this boundedness assumption (or weakening it) seems to require new ideas. We recall here the following open problem from [26]:

Question 8.6 ([26, Question 8.3]). Does there exist an operator on a complex separable Hilbert space admitting a non-trivial invariant probability measure but no eigenvalues?

The following product and inverse questions also remain open:

Question 8.7. Let T be an operator on a Fréchet space X. If T is reiteratively (resp. U-frequently, frequently, uniformly) recurrent, does the n-fold direct sum T ⊕ ··· ⊕ T, n ≥ 2, have the same property?

Question 8.9 ([10, Question 2.9]). Let T be an operator on a Fréchet space X.
Do we always have that either FRec(T) = X or FRec(T) is a meager set? This seems to be a non-trivial question even in the dual/reflexive setting, since the frequently recurrent points obtained in our construction form a "big" set with respect to a certain invariant measure, and usually this has nothing to do with "bigness" from the topological point of view (i.e., in the Baire category sense). In fact, given any chaotic operator T : X → X (i.e., hypercyclic with a dense set of periodic points), it admits an invariant probability measure µ on X with full support (see [28, Corollary 3.6]) and hence µ(FRec(T)) = 1 by Lemma 3.1. However, since T is hypercyclic, we have that FRec(T) is a meager set: otherwise, by [10, Theorem 2.7], the set FHC(T) = FRec(T) ∩ HC(T) would be co-meager, contradicting [5, Corollary 19].
Effect of Freeze-Drying on Quality and Grinding Process of Food Produce: A Review

Freeze-drying is an important processing unit operation in food powder production. It offers dehydrated products with extended shelf life and high quality. Unfortunately, food quality attributes and grinding characteristics are affected significantly during the drying process, owing to the glass transition temperature (during the drying operation) and the stress generated (during the grinding operation) in the food structure. However, freeze-drying has been successfully applied to several biological materials, ranging from animal products to plant products, owing to its specific advantages. Recently, the market demand for freeze-dried and ground food products such as spices, vegetables, and fruits is on the increase. In this study, the effect of the freeze-drying process on quality attributes, such as structural changes and the influence of glass transition during grinding, together with the effect on grinding efficiency in terms of energy requirement, grinding yield, and morphological changes in the powder as a result of temperature and drying time, is discussed. An overview of models of drying kinetics for freeze-dried food samples and of grinding characteristics, developed to optimize the drying processes and to predict the grinding characteristics, is also provided. Some limitations of the drying process with respect to grinding are also discussed, together with innovative methods to improve the drying and grinding processes.

Introduction

Drying is a unit operation by which the free water in a food is significantly reduced, thus promoting the concentration of dry matter without damaging the tissue, wholesomeness, and physical appearance of the food. The practice of drying differs among food processors; however, all methods aim to achieve extended storability by reducing the water activity of the food product, which slows the rate of deterioration and maintains quality. Drying of food products is advantageous as it minimizes packaging, storage, and transportation costs [1]. It has been practiced as a sun-drying method, considered the traditional method in most developing countries, or it can be conducted with improved technology such as spray drying, hot air drying, vacuum drying, and freeze-drying [2]. Though the operational methods of conventional drying differ from modern technology, drying in all of these methods is achieved through radiation, convection, conduction, or a combination of heat transfer mechanisms. All of these drying methods may consequently affect, partially or totally, the quality of the product [2]. However, in recent times, vacuum freeze-drying has gained wide recognition, especially for the drying of high-quality, nutritionally delicate, and expensive products. Freeze-drying is based on the sublimation mechanism, by direct dehydration of frozen products. The wide acceptability of this method lies in its process, which involves freezing the product prior to dehydration. Powdered foods serve a broad range of consumers, including infants and children [16]. In addition, cereals, roots, and tubers are also preserved in their powder form and have a wide range of uses, e.g., rice flour, wheat flour, and yam flour. However, to get the best powdered products, the quality of the drying process determines the efficiency observed during the grinding process. Therefore, the drying and grinding unit operations are coherent steps in the production of food powder, and as such, the parameters involved should be critically considered.
This has led to a number of studies being conducted with the aim of improving food powder quality through freeze-drying operations [17][18][19]. In the last few decades, a number of reviews have been conducted on the subject of freeze-drying. However, most of these reviews have centered on the quality characteristics of the food product and on improving the freeze-drying process, with limited emphasis on the effect of the drying process on the grinding characteristics of the food [20][21][22]. Therefore, the primary objective of this review is to provide an overview of the effect of freeze-drying on the quality attributes of agricultural products and to state the influence of freeze-drying on the grinding process. A review of the mathematical models for freeze-drying is also provided, and further discussion is given on future trends and solutions to grinding challenges.

Freeze-Drying of Food Produce

Freeze-drying as an industrial process involves dehydration by sublimation of the frozen ice present within the food matrix. It is a preferred method for drying foods containing compounds that are thermally sensitive and prone to oxidation, since it operates at low temperatures and under a high vacuum. It has been applied to various food products such as strawberry, apple, tomato, potato, asparagus, and pumpkin. However, its limitations lie in the shrinkage and collapse that are sometimes experienced during the drying process when it is not properly conducted; this has led to much research on freeze-drying in the last three decades. Recently, freeze-drying has become well established in the pharmaceutical and other bioproduct industries, and it is gradually becoming a well-utilized processing method in the food industries, owing to the assumption that product quality is well preserved at low temperature, to the reduction in injuries to labile bioproducts that drying at high or ambient temperature would have caused, and to its approval by food standards bodies. However, when deciding on the type of processing method to be used for a particular product, it is necessary to look beyond the surface, as freeze-drying is reported to be accompanied by several complex interactions which may prove to be a problem at various processing stages [23][24][25]. Consequently, chemical and physical changes may also accompany freezing, heating, and mass transfer within the equipment. In light of this, the authors would like to state that our intention is not to rule out the importance and advantages derived from freeze-drying, but to emphasize the possible situations or occurrences that could be observed during freeze-drying as a pre-process for the grinding unit operation.

Basis of Freeze-Drying

The process of freeze-drying can be visualized in terms of three steps: initial freezing, primary drying, and secondary drying. The combination of these steps involves the dehydration of more than 99% of the water from the initial dilute solution, which is a direct function of the temperature rather than of the initial solution concentration. The steps of freeze-drying have been discussed previously and are therefore presented only briefly in this review [26]. The initial freezing process involves the formation of ice nuclei, which depends on factors such as the cooling rate, the interfacial energy, and the interfacial morphology or nanostructure of foreign bodies [27]. Thus, ice formation may take place at a temperature below 0 °C.
A faster freezing rate results in the formation of small ice crystals within the food structure. The consequence of this is that sublimation is faster during primary drying and slower in the secondary drying stage, due to the size and distribution of the ice formed at this level. The viscosity of the unfrozen phase increases as moisture is removed, and at some point the saturation level is attained and there is no further increase in concentration or viscosity. At this level, the glass transition temperature (T g), which is associated with the maximally freeze-concentrated state, is said to be reached. For food components such as carbohydrates, the initial freezing step may be of particular importance, because carbohydrates can easily be vitrified during freezing, as indicated by the T g. The T g values of disaccharides and higher oligomeric sugars lie above −30 °C [28,29]. However, the T g is not a true thermodynamic change of state, but rather a kinetic limit at high viscosity (details on the freeze-drying of carbohydrates are discussed later), so it can only be used in critical cases where the sample is difficult to freeze-dry.

The primary drying stage is the period during which the ice crystals separate from the solute phase via sublimation. At this point, the water vapor is continuously removed from the food by keeping the pressure in the freeze-drier cabinet below the vapor pressure at the surface of the ice, and by subsequently removing the vapor with a vacuum pump and condensing it on refrigeration coils. This creates a partly dried food sample. The sublimation rate can, however, be affected by the sample thickness and by the cellular structure of the material, creating a constraint on the mass transfer coefficient and thereby reducing the rate of dehydration. This is because the freezing of food materials occurs either under natural convection conditions or in a forced convection environment (especially for material at the upper surface), and because the material is made up of cellular structures separated by air spaces, which influence the medium through which heat and mass transfer occur [30]. Nowak et al. [30] established, in their study of celery, the influence of the plant material structure (tissue) during freeze-drying. They reported that the disruption of cell walls in a mechanically damaged sample influenced the transport of water vapor to the surface of the sample for sublimation. Thermal changes occurring in the sample caused the thermal field during freezing to be uneven, with different temperatures recorded for the upper and lower layers; this consequently influenced the heat transfer capability of the material during freeze-drying. At this point, the sample temperature therefore becomes a critical parameter for continuing the drying process. To maintain sublimation, heat energy is applied to the product to compensate for sublimation cooling. Craig et al. [31] reported that an increase of temperature by 1 °C gave rise to a drying process faster by 13%. However, the heat extracted from the drying sample as water vapor must carefully balance the amount of energy added to the sample, as an increase above the T g results in melting of the ice into a solute phase. Consequently, conducting primary drying of a food sample at a temperature above the T g will lead to the production of inferior quality. This temperature point during drying is known as the "critical temperature" [32].
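To illustrate the balance just described between heat input and sublimation cooling, the following is a minimal sketch of our own (not a model from the literature cited above); the latent heat value is the commonly used approximate figure for ice, and the flow rate is a made-up example:

DELTA_H_SUB = 2.84e6  # J/kg, approximate latent heat of sublimation of ice

def shelf_heat_duty(sublimation_rate_kg_h: float) -> float:
    """Heat input (W) that exactly balances sublimation cooling.

    Supplying less heat lets the product cool and drying slows down;
    supplying more warms the product toward the critical (collapse)
    temperature discussed above.
    """
    rate_kg_s = sublimation_rate_kg_h / 3600.0
    return rate_kg_s * DELTA_H_SUB

# Example: sustaining 0.5 kg of ice sublimed per hour needs about 394 W.
print(f"{shelf_heat_duty(0.5):.0f} W")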
At the critical temperature point, the concentrated solutes in the food are sufficiently mobile to flow under the forces operating within the food structure. When this occurs, there is an instantaneous, irreversible collapse of the food structure, which restricts the rate of vapor transfer and effectively ends the drying operation. Table 1 shows the collapse temperatures of selected foods during freeze-drying. Therefore, in practice, there is a maximum ice temperature, a minimum condenser temperature, and a minimum chamber pressure, and these control the rate of mass transfer.

The secondary drying stage is a dynamic process associated with high vapor flow rates. This stage of drying is much less efficient, as it takes up 30-40% of the total process time while removing only about 5-10% of the sample moisture [42]. It begins after all the frozen water has sublimed, and it can thus be facilitated by increasing the product temperature. However, attention must be paid to certain delicate food components, such as proteins. Proteins have low thermal stability and can collapse at temperatures above their stability limit; it is therefore necessary to consider the thermal stability of food products, based on their components, to prevent collapse. The collapse arises because the "bound water" (bound to protein molecules) or the water molecules "trapped" in the glass phase are removed during this stage. Although sample collapse during secondary drying is generally less likely than collapse during primary drying, it is possible to induce collapse in the dried matrix by exposing the sample to a temperature above its T g [42]. The increased mobility of this rubbery zone after collapse can accelerate protein degradation [28]. This is observed at the ice-concentrate interface of the freeze-drying system: growing evidence has shown that the loss of protein activity is due to accumulation and unfolding of the molecules at this interface, which upon drying becomes solid or air. The main source of protein stress during freeze-drying, apart from the drying itself, is the freezing process. The destabilizing effects of freezing are significant but have yet to be fully explained; they are, however, known to be highly protein-dependent. Cold denaturation of proteins is caused by a decrease in hydrophobic effects and by the hydration of non-polar residues, and it may explain the denaturation of some proteins, even though cold denaturation kinetics might be too slow to unfold a protein during the freeze-drying process [43]. In general, although freeze-drying is capable of providing dehydrated products of very high quality, it is an expensive method, and the high process costs limit its application at industrial scale. Therefore, the quality of the dehydrated products, complex biochemical criteria (e.g., units of biological activity per milligram of product), and the cost and time of processing should all be taken into consideration to carry out the operation under optimal conditions.

Glass Transition during Freeze-Drying

T g was earlier defined as the temperature at which an amorphous system changes from the glassy to the rubbery state, and it specifically denotes a property of the amorphous material [44]. T g is a second-order thermodynamic phase transition altering the intrinsic properties of foods, including heat capacity, free volume, and viscosity.
An amorphous food is formed under non-equilibrium conditions, either by removing the dispersing medium (such as water), or from the melt by cooling, or by rapid supercooling. Such a material is not at thermodynamic equilibrium and is therefore unstable relative to the crystalline form [45]. Since the last decade, the quality of food has become the main focus of food research, and the application of knowledge from the glass transition of polymers to food systems has contributed much to understanding and predicting the behavior of foodstuffs [46]. Dried products obtained from most common drying processes, especially freeze-drying, are predominantly in a glassy amorphous form, in which the mobility of the solid matrix is highly limited. For the product to remain stable over long periods of storage, this physical state should not be altered. When the temperature rises above T g, an amorphous solid enters a "rubbery" state in which the molecular mobility of the matrix and of the reactants is accelerated, resulting in an increased rate of physico-chemical changes in dried products, such as sticking, caking, collapse, crystallization, agglomeration, loss of volatiles, browning, and oxidation [47][48][49]. In freeze-drying, collapse is a frequent problem if certain operating variables are not well set [50]. This phenomenon occurs when the solid matrix of the foodstuff can no longer support its own weight, leading to drastic structural changes shown as a marked decrease in volume, an increase in the stickiness of dry powders, and a loss of porosity. The collapse temperature is related to the T g of the maximally freeze-concentrated solute [51], and it represents the temperature above which the solute matrix loses its shape and quality is decreased. Therefore, the relationship between rehydration and T g should be interpreted in terms of porosity and collapse during freeze-drying. In most recent studies, T g was measured using differential scanning calorimetry (DSC), by analyzing the changes in heat capacity as T g alters the heat flow over a range of temperatures. Several authors therefore prefer to report the temperature at which the T g begins, stating that the changes in water mobility causing quality loss in dried products begin at this temperature [52]. In an earlier report by Anglea et al. [53], the value obtained for T g for several plant tissues, including potato, apple, and sweet potato, as well as for osmotically dehydrated potato, was −45 °C. The similarity of the glass transition temperatures of these tissues, as well as of some osmotically treated potato samples, was the foundation that some later research followed in determining the T g of plant products, leading to the assumption that the T g of plant products is close to −45 °C [54]. However, according to more recent research by Caballero-Ceron et al. [55], the T g should rather be treated as a function of the critical moisture content X c and the water activity a wc of the sample, because X c and a wc increase with the molecular weight of the components of the glassy structure. Therefore, T g is certain to change with the critical moisture content X c and the water activity a wc of the food product. This phenomenon was reported for a carbohydrate-protein system using encapsulated trehalose-whey protein isolate, as shown in Figure 1; the finding supported the observation that the plasticizing effect of water results in a decrease of the T g [56].
Therefore, a similar plant product can show a different T g depending on its critical moisture content.
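The plasticizing effect of water on T g can be sketched numerically with the standard Gordon-Taylor mixing rule; this choice of model and the parameter values below are our own illustrative assumptions, not taken from the review:

def gordon_taylor(w_water: float, tg_solid_c: float,
                  tg_water_c: float = -135.0, k: float = 4.5) -> float:
    """Glass transition temperature (deg C) of a binary solid-water mixture.

    Tg = (w_s*Tg_s + k*w_w*Tg_w) / (w_s + k*w_w), with w_s + w_w = 1.
    The Tg of amorphous water is commonly taken as about -135 deg C; k is a
    fitted constant that differs between food solids (value assumed here).
    """
    w_solid = 1.0 - w_water
    return ((w_solid * tg_solid_c + k * w_water * tg_water_c)
            / (w_solid + k * w_water))

# Even a few percent of residual moisture depresses Tg sharply:
for w in (0.00, 0.02, 0.05, 0.10):
    print(f"water fraction {w:.2f} -> Tg = {gordon_taylor(w, 60.0):.1f} deg C")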
However, most available literature discussed quality in terms of structure, collapse, and shrinkage effect. This is because, shrinkage and T g are interrelated in that significant change in volume can be noticed only if the temperature of the process is higher than the T g of the material at that particular moisture content [59]. Therefore, there's a need to intensify and establish the influence of glass transition temperature on the biochemical component of food, as these parameters may also serve as a useful tool for the choice of the appropriate materials to be freeze-dried. Selected Changes Associated with Freeze-Drying Freeze-drying is an important step to other processing units such as grinding unit operation. It is one of the basis that prepares the food sample towards grinding and as well determines to a large extent the expected quality of the product at the grinding stage. For the food technologist, properties such as shrinkage and rehydration capacity are the determinant for the quality of a dried product. During freeze-drying, the pure water present is converted to ice crystals within the solid matrix of the food. This usually modifies the structure, as the volume increases because of the lower density of ice compared to the liquid water. The interaction between kinetic constraints on the formation of crystal and thermodynamic driving forces is generally responsible for the pattern of microstructure in freeze-dried foods. Unlike in properly freeze-dried food material, shrinkage is usually observed in the morphological structures of foods that are dried at high temperatures. This occurs due to the dehydration effect resulting from the contraction of the viscoelastic matrix that was previously occupied by water. Izli and Polat [60], verified the influence of freeze-drying and convective drying method on the microstructure of the quince fruits. In the report (as presented in Figure 2), it was obvious from the photograph that the freeze-dried quince possesses a homogeneous honeycomb structure. This indicates that freeze-drying has a minimal effect on the cell structure due to dehydration via sublimation. Consequently, shrinkage was higher in the convective dried sample as a result of excess microstructural stress induced due to high moisture gradients within the product. The effect was dependent on the drying temperature, as increasing cell wall and microcavity breakage is observed in the sample as temperature increases. In essence, the porosity of the food sample increases with less shrinkage, which in turn, favors the rehydration ratio of the dried samples [60]. However, higher porosity in food also means an increase in the surface area, which implies a shorter shelf-life due to surface exposure and a higher rehydration ratio owing to the development of many open pores serving as capillaries for water uptake [61]. Consequently, higher pores also create pathways for oxygen permeability which can result in rancidity especially in food containing lipids. Therefore, it is essential to control the drying method to suit the intended purposes and achieve the desired porosity. the food. This usually modifies the structure, as the volume increases because of the lower density of ice compared to the liquid water. The interaction between kinetic constraints on the formation of crystal and thermodynamic driving forces is generally responsible for the pattern of microstructure in freeze-dried foods. 
Furthermore, it should be noted that freeze-drying can be critical in that it can cause cell damage to food samples, e.g., fruits and vegetables. This is because, during freeze-drying, the ice crystals within the food cells grow as the temperature decreases, which creates a compression force, pushing against and rupturing the cell walls of the food material [61]. Thus, to minimize the damage, other methods, such as microwave drying, are used alongside freeze-drying [62]. Carbohydrates are favored as excipients because they are chemically safe and can easily be vitrified during freezing, as indicated by the T g. However, drying conditions have been reported to damage the surface and alter the interior structure of starch granules, eventually affecting their properties, such as chemical reactivity, gelatinization, retrogradation, and pasting properties. In a study conducted by Apinan et al.
[63] on potato starch, it was found that freeze-dried potato starch granules displayed a higher enzymatic susceptibility than native potato starch granules (dried by a convective method); this occurred due to the alteration of the surface structure during the drying process (Figure 3), as hydrolysis of the starch granules by α-amylase is increased. A change in the structure of mannitol and lactose has also been reported during freeze-drying [64], occurring as a result of the separation of a frozen solution in the form of a crystalline phase; this process depends on the processing conditions. Mannitol crystallizes during freezing, whereas sucrose remains in the amorphous state throughout the drying process [64]. The presence of porous particles is expected after the sublimation of ice crystals during the freeze-drying process; this mostly influences the bulk density and compressibility of the sample, and samples with greater porosity were reported to usually show smaller compressibility. In the report of Mirhosseini et al. [65], the compressibility and compactibility of a powder affected its flow properties at the micro-scale through the adhesion forces between the particles. The angle of repose is also one of the critical features indicating the flow characteristics of powder granules: an increase in the angle of repose is associated with decreasing flowability, which measures the powder's resistance to flow under gravity due to frictional forces arising from the surface properties of the granules.
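The flow descriptors mentioned above are commonly summarized by the Carr index, the Hausner ratio, and the angle of repose; the sketch below uses these standard definitions, with classification cut-offs and density values that are illustrative assumptions rather than data from the review:

import math

def carr_index(rho_bulk: float, rho_tapped: float) -> float:
    """Compressibility (%) from loose-bulk and tapped density."""
    return 100.0 * (rho_tapped - rho_bulk) / rho_tapped

def hausner_ratio(rho_bulk: float, rho_tapped: float) -> float:
    return rho_tapped / rho_bulk

def angle_of_repose(height_mm: float, base_diameter_mm: float) -> float:
    """Angle (deg) of a conical powder heap; a higher angle means poorer flow."""
    return math.degrees(math.atan(2.0 * height_mm / base_diameter_mm))

rho_b, rho_t = 0.32, 0.41  # g/cm3, made-up freeze-dried powder values
print(f"Carr index:      {carr_index(rho_b, rho_t):.1f} %")   # > 25 % = poor flow
print(f"Hausner ratio:   {hausner_ratio(rho_b, rho_t):.2f}")  # > 1.35 = poor flow
print(f"Angle of repose: {angle_of_repose(28.0, 100.0):.1f} deg")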
From the textural viewpoint, freeze-drying plays a major role in determining the textural characteristics of food during the chewing of ready-to-eat foods, e.g., dried squid, potato, apple, and mushroom. It also influences post-drying unit operations such as grinding. In most food scenarios, food hardness is evaluated based on the grinding and breakage energy requirement; it is therefore important to maintain a good texture for optimum efficiency during grinding, to achieve high food powder quality and yield. Arumuganathan et al. [66] established the influence of freeze-drying conditions on the structural properties that determine mushroom texture. In their report, they described maintaining the structure of the mushroom during the freeze-drying process as being of great influence in achieving a soft texture. This is because, when the cellular structure deforms during dehydration, tissues are fused together, leading to an increase in firmness that causes sample hardness. However, when the original dimensions are maintained during drying, the concentration of water-soluble components due to the mobility of the aqueous phase is prevented, and hence the resulting product has a tender texture and consequently requires minimal breakage or grinding energy.

Drying Kinetics

The study of drying kinetics is very important for engineering and process optimization, as it helps to select the best and most appropriate drying methods and to control the drying process. It shows the diffusion of moisture from the food and its relationship with the drying variables over time [5].
On the other hand, some complex theories that represent the drying process from the microscopic standpoint of mass and heat transfer between each phase inside the food particle [67]. This approach thus makes use of volume averaging method to solve the governing transport equations, however, it is sometimes too complex for practical use since the required parameters are difficult or impossible to determine experimentally. For practical purposes, it is often useful to use a lumped-parameter model supported by carefully designed experimentation at a laboratory scale. Ratti and Crapiste [68] developed a lumped parameter model for hygroscopic shrinking food systems. This model is represented in Equation (1); where n w water flux, k g is mass transfer coefficient, a w is water activity, P ws is water vapor saturation, P w∞ is the water vapor at equilibrium, φ is characteristic drying parameter, X 0 is water content in dry base and Bi md is the Biot number for mass transfer defined as; and P 1 is the equilibrium relationship at the solid gas interface, that is obtained from L is the sample thickness (m). D is effective mass diffusivity inside the sample (m 2 s −1 ), and ρ is mass density. The parameter φ in Equation (1) was shown theoretically and experimentally to be independent of drying conditions and particle geometry, and only a function of moisture content [68] and it is represented as Recently simulation has been used as a preliminary evaluation of freeze-drying, and several models concerning the heat and mass transfer phenomena during freeze-drying were reported [69]. However, in most cases, adjustable parameters are needed to match the model predictions with experimental data. A new model developed by George and Datta [70], for validation of heat and mass transfer in frozen dried vegetables base on sample thickness as shown in Figure 4 was developed. This model considers the heat and mass flux which are responsible for moisture transfer from a frozen sample in the freeze-dryer due to the higher temperature of the plate on which the sample is rested during drying. Therefore, heat flux occurs through the frozen layer. Thus the model was analyzed as a function of the drying time or the effective mass diffusivity inside the sample, and it is represented as; or where D is the effective mass diffusivity inside the sample (m 2 s −1 ), t is the drying time, X 0 is the initial water content (KgH 2 0kgdry −1 ), and x is the fraction of initial moisture remaining (dimensionless). R is the universal gas constant, T temperature ( • C), L thickness of sample (m), ρ is mass density, s is solid, P is the vapor pressure, f is the freezing point and ew of water at the environment. Both sublimation and desorption are taken into account in the set of coupled non-linear partial differential equations. These equations were solved numerically by using a finite element scheme, and the simulation results agreed closely to experimental data as shown in Figure 4 for fresh carrot, capsicum and mushroom for all sample thickness. Relationship between Drying and Grinding During the drying process, most foods experience volumetric changes accompanied by internal stress formation across the food structure due to their viscoelastic properties [71]. 
Relationship between Drying and Grinding

During the drying process, most foods experience volumetric changes accompanied by internal stress formation across the food structure, due to their viscoelastic properties [71]. These changes, which are mostly influenced by the drying temperature, the drying speed, the mobility of the solid matrix, and the volume of moisture removed per unit time, have a substantial effect on the mechanical strength of the food, noticeable in its sensitivity to breakage, stress cracking, and dry-grinding quality [72,73]. When water is removed from the material, an unbalanced pressure is created between the inner components of the material and the external pressure, generating contracting stresses that lead to material shrinkage, changes in shape and, occasionally, hardening effects [74]. This is the reason why drying under vacuum in a freeze-dryer is suitable for foods intended to be made into powdered products: because mass transfer occurs by sublimation, less shrinkage and higher porosity are obtained, resulting in a less hard dried product [75]. However, as mentioned in the previous subsection, freeze-drying can still modify the structure of the food product, especially foods with a starch component, due to hydrolysis. For example, rice flour with a high damaged-starch content has been reported, owing to rapid hydration and hydrolysis by α- and β-amylase. This damage to the starch component of food during drying and subsequent grinding is a major concern in flour production, as the damaged starch separates from the intact granules, impacting both the solubility and the susceptibility to enzymatic digestion. Thus, flour with a fine particle size has more swelling power and is more prone to forming rigid gel structures than coarse particles [76]. Despite the importance of drying to grinding, most reports appear to have studied the effect of drying on the nutritional and physicochemical properties of food, and there are very limited reports focusing on the effect of drying on the grinding process. Nevertheless, since drying directly influences the grinding process and the final powdered product, it is necessary to present the useful facts from the limited information available. Therefore, the influence of the freeze-drying process on the grinding energy requirement, the grindability, and the grinding yield is discussed in the next subsections.

Energy Consumption

During grinding, mechanical energy is required to break down the particle size and also to
overcome the frictional force between the moving parts of the machine and the food material [77]. However, this energy is a direct function of how well the drying process was carried out. Drying as a pre-process before grinding has been reported to alter the physical and chemical characteristics of the food in terms of moisture content, hardness, and size, causing a reduction in the time and energy required for grinding and changes in the particle size distribution [78]. The initial moisture content of many foods before grinding is one of the most important factors determining the particle size distribution, the grinding energy, the product yield, and the grinding loss; it is thus important to control the drying process. Several reports have confirmed that food materials with high moisture prior to grinding give rise to larger particle sizes, a lower product yield, and a higher energy consumption than food materials of low moisture content, because high-moisture food becomes tough: water acts as a plasticizer within the food material, making it difficult to grind [79,80]. A few other researchers, reporting a slight deviation from the general belief that moisture content increases the energy requirement during grinding, stated that, although the differences in moisture content of the food samples they tested were insignificant, the energy requirement for grinding was linked to other parameters, such as the size, hardness, and fracturability of the food material, with larger and harder food pieces using more energy irrespective of the moisture present [81]. Several factors influence the rate of dehydration during freeze-drying, such as the sample thickness, the material properties, and the air velocity. It is therefore important to pay attention to the preliminary freezing of the food product, as stiffening of the sample structure can occur at the initial stage of freeze-drying, preventing solute and liquid motion during freeze-drying.
The implication of this is observed in the sample hardness, which results in a greater energy requirement at the grinding stage. Furthermore, rehydration can occur in a freeze-dried sample owing to the poor quality and/or alterations of the freeze-dried product. This occurrence is generally linked to the quality of the raw material (i.e., the nature of the material) and to the processing conditions (operating pressure, heating temperature, freezing rate, and freeze-drying process control). This is why freeze-dried products are sometimes adulterated to prevent stiffness and enhance moisture removal [82]. Another important feature that influences the energy requirement is the drying duration. The drying duration has characteristic effects on the physical and chemical nature of the food by altering its natural state, which in turn determines the grinding characteristics of the food. In the freeze-drying operation, however, the drying duration has more implications for the energy consumed during drying than for that consumed during grinding. Generally, freeze-drying requires a long drying period to achieve complete dehydration, due to the poor internal heat transfer inside the product and the low working pressures: the principal heat transfer mechanism is radiation, since there is poor ambient convection and poor conduction between surfaces making contact under vacuum. To address this, the freezing rate is adjusted, which primarily affects the size of the ice crystals formed within the matrix of the food sample and, thereafter, the final porosity of the freeze-dried product [83]. From the standard laws of mass transfer through porous media, it can be deduced that the larger the pore size, the easier it is to remove water vapor from the product [84]. Thus, larger pore sizes result in a more quickly dried and less hard food sample, with a fragile structure that requires minimal crushing energy. Similarly, the shrinkage occurring mostly due to collapse during freeze-drying, especially when the sample dries at a temperature greater than the T g, results in increased hardness and thus influences the grindability [85].

The energy-size reduction principles formulated by Kick, Bond, and Rittinger for designing grinding operations and predicting grinding performance have been used to determine the grinding energy requirement of various types of food materials [78,86]. In these principles, the constants, i.e., Bond's (work index), Kick's, and Rittinger's constants, are determined from the initial and final particle sizes of the material. Equations (7)-(10) are used for the evaluation of the energy and the grinding constants; all three laws derive from the general comminution relation dE = −k dL/L^n (Equation (7)), with

Bond's law: E = k_b (1/√L_2 − 1/√L_1) (8)

Kick's law: E = k_k ln(L_1/L_2) (9)

Rittinger's law: E = k_r (1/L_2 − 1/L_1) (10)

where E is the energy required for grinding, L_1 and L_2 are the mean diameters of the initial and the final ground particles, respectively, and k_b, k_k, and k_r are the Bond, Kick, and Rittinger constants, respectively. The work index (w_ind) is the energy required to grind the material from a large particle size down to one fine enough to pass through a sieve with an aperture of 100 µm [86]. Another method of analyzing the energy requirement during the grinding process involves the use of a stress model [87]. The model was initially developed for estimating the energy requirement during wet grinding and has recently been modified for dry grinding processes [10,88].
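Before turning to the details of the stress model, the three classical laws in Equations (8)-(10) can be applied directly; the sketch below implements them, with constants and particle sizes that are illustrative assumptions (in practice k_b, k_k and k_r are fitted to grinding trials for each dried material):

import math

def energy_rittinger(k_r: float, l1_um: float, l2_um: float) -> float:
    """E = k_r (1/L2 - 1/L1): new-surface-area controlled (fine grinding)."""
    return k_r * (1.0 / l2_um - 1.0 / l1_um)

def energy_kick(k_k: float, l1_um: float, l2_um: float) -> float:
    """E = k_k ln(L1/L2): size-ratio controlled (coarse crushing)."""
    return k_k * math.log(l1_um / l2_um)

def energy_bond(k_b: float, l1_um: float, l2_um: float) -> float:
    """E = k_b (1/sqrt(L2) - 1/sqrt(L1)): the intermediate (Bond) regime."""
    return k_b * (1.0 / math.sqrt(l2_um) - 1.0 / math.sqrt(l1_um))

# Grinding a freeze-dried piece from 2000 um down to 100 um (made-up k's):
l1, l2 = 2000.0, 100.0
print(f"Rittinger: {energy_rittinger(50.0, l1, l2):.3f}")
print(f"Kick:      {energy_kick(1.2, l1, l2):.3f}")
print(f"Bond:      {energy_bond(10.0, l1, l2):.3f}")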
In the stress model, the stresses acting upon each particle are estimated, since the specific surface area and the particle sizes were found to be influenced by the overall operational parameters. The basic principle of the stress model is that, for every given food, the product fineness depends on the crushing process, which is determined by two conditions: (i) how often each food particle and its resulting fragments are stressed, denoted the 'number of stress events' of each sample particle (SN_F); and (ii) how large the specific energy at each stress event is, denoted the stress intensity (SI). Since the values obtained for SN_F and SI depend on the overall process parameters, when evaluating the actual energy used for grinding any feed it is important to consider the parameters of the grinder (which are independent of the size of the food or of the particle sizes obtained). Therefore, instead of focusing on the product-related model only, the grinder-related model is also considered; it involves (i) the crushing behavior of the grinder, determined by the number of stress events supplied by the grinder per unit time, (ii) the stress frequency (SFM), and (iii) the energy supplied to the food particle by the grinder at each stress event, known as the stress energy (SE). In a report by Mucsi and Racz [88] on red grape seed, the dispersibility of the sample was analyzed using the product-related stress model; the stress intensity (SI) of the grinding media (GM) was used to describe the effect of the operational parameters, and the stress number was estimated by Equation (11), where n is the revolution number, t is the residence time, ε_GM is the porosity of the grinding media, ε_α is the porosity of the bulk grinding media, x is the particle size, d is the diameter of the grinding media, and φ_m is the material filling ratio, estimated by Equation (12):

φ_m = V_m / V_P,GM (12)

where V_m is the material volume and V_P,GM is the pore volume between the grinding media. The number of stress events SN_F and the stress intensities SI acting on each feed particle determine the crushing result. Since the stress intensity can be considered the specific energy consumed at each stress event (i.e., by each particle), the overall specific energy consumption of the grinder is a good measure of the product of the stress number and the stress intensity. Therefore, at constant stress intensity, the product fineness can be correlated either with the stress number or with the specific energy input. In a different opinion, Bunge [89] proposed that the stress intensity is not constant, and therefore suggested that the stress intensity is a function of the energy density in the grinder. In physical terms, grinding takes place in two different ways. The first is due to the high velocity gradients near the blade (crusher) and near the chamber wall (grinder wall), so that the grinding media move with different velocities; grinding media with high velocities collide with grinding media with lower velocities and lose part of their kinetic energy, which can be used for crushing. The second is that, in the area close to the grinder wall, the crushing of particles is based on centrifugal acceleration; in this zone, the particles are stressed by the pressure between the grinding media and the wall. However, the stress intensity from centrifugal force is relatively small compared to that from kinetic energy.
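A minimal sketch of the product-related bookkeeping described above, in the spirit of the stirred-media stress model; apart from the filling-ratio definition of Equation (12), every formula and numerical value below is an illustrative assumption:

def filling_ratio(v_material_cm3: float, v_pores_gm_cm3: float) -> float:
    """phi_m = V_m / V_P,GM: feed volume over pore volume between media (Eq. 12)."""
    return v_material_cm3 / v_pores_gm_cm3

def specific_energy(sn_f: float, si_j: float, m_feed_kg: float) -> float:
    """E_m ~ SN_F * SI / m_feed: at (roughly) constant stress intensity the
    product fineness correlates with this specific energy input."""
    return sn_f * si_j / m_feed_kg

phi_m = filling_ratio(v_material_cm3=40.0, v_pores_gm_cm3=120.0)
e_m = specific_energy(sn_f=5.0e6, si_j=3.0e-4, m_feed_kg=0.5)
print(f"phi_m = {phi_m:.2f}")
print(f"E_m   = {e_m:.0f} J/kg")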
Based on these physical considerations, a characteristic parameter for the stress intensity was derived. For the derivation, a number of assumptions were made: (i) only single particles are stressed intensively between the grinding media; (ii) the tangential velocity of the grinding media is proportional to the circumferential speed of the blade; (iii) the diameter of the blade is kept constant; and (iv) the elasticity of the food material is small compared to that of the particles. Under these assumptions the stressed particle volume does not depend on the grinding media size, and the stress intensity is expressed by Equation (13):

SI_GM = d^3 ρ V^2 (13)

where SI_GM is the stress intensity of the grinding media, d is the diameter of the blade, ρ is the elasticity of the food material, and V is the tangential velocity. In recent work reported by Racz and Csoke [10], the authors stated that Equation (13) proposed by Bunge [89] is only valid assuming that a single particle is involved and that no particles stick to the wall of the grinder. This, however, is not practicable, because during ultrafine and nano-grinding the drastically increasing specific surface is accompanied by a very high free surface energy, which leads to agglomeration. Therefore, the effect of the particles adhering to the grinding media should be accounted for in the evaluation of the stress intensity (Equation (14)), where ε_α is the thickness of the particle layer adhering to the grinding media surface. The advantage of the stress model for analyzing the energy requirement of freeze-dried samples thus lies in its ability to estimate the quantity of energy required based on the porosity: samples with different pore values or qualities can readily be shown to require different energy inputs for grinding. However, at the time of writing this review, no report was found on the analysis of grinding energy for freeze-dried samples; to establish the effect of porosity on the grinding of food, more work needs to be conducted.

Grinding Yield and Morphological Characteristics

For many food materials, the grinding yield depends on several factors, such as the initial moisture content, the shape, the hardness, and the nature of the food sample, which are directly related to the drying process, as drying usually influences all of these factors. For example, products with a high moisture content resulting from insufficient drying give a lower grinding yield, just as they require greater energy for grinding, as explained in the previous subsection, whereas a product with less moisture becomes more brittle and is more easily broken and converted to powder. During the freeze-drying process, the freezing step is one of the most important steps, because it fixes the structure and the physical properties of the frozen material and, consequently, determines the final stability and morphology of the freeze-dried material [19]. When freezing is conducted at a low rate, larger pores are obtained, which consequently gives a more brittle and easily broken material. This is because the sublimation of ice crystals grown within the cellular structure of the food leaves a dried matrix that is a fingerprint of the ice crystal sizes and shapes. Conducting freeze-drying at a high freezing rate, in contrast, results in less growth of the ice crystals [19].
In a poorly conducted freeze-drying process, the growth of ice crystals can rupture, push, and compress the cells, which damages the frozen tissue. This damage is more pronounced at slow freezing rates and will be discussed in the next subsection. Caparino et al. [90] reported the influence of freeze-drying on the microstructure of mango powder produced from puree. They established that freeze-drying mango at a low temperature of −25 °C gave rise to a more porous structure due to less shrinkage and collapse, and generated a fine homogeneous powder. However, the report of Larder et al. [91] showed evidence that the freeze-drying treatment altered starch morphology and disrupted double helices and crystalline structures. The mechanism behind these changes was attributed to the disordering effects of the freezing of water and the sublimation of ice crystals on native starch granules. Thus, the starch hydrogels experienced a volume reduction which, in turn, caused a reduction in the mesh size due to a loss of ordered structure in the starch gels. However, the influence of the morphological changes experienced during freeze-drying can depend on the nature of the material. The crystalline structure formed during the freeze-drying of starch is packed by ordered double helices [92]. If the orderly arranged double helices are disordered or disassociated, starch crystallinity is reduced. In the report of Larder et al. [91] on normal maize starch and high-amylose maize starch during freeze-drying, the ratio of amylopectin to amylose determined the behavior of the cellular structure during freeze-drying. Normal maize starch gel contained more amylopectin but less amylose compared to the high-amylose maize starch. Therefore, native maize starch showed more crystals from amylopectin, but its structures were less ordered compared to the high-amylose maize starch. Zhang et al. [93] stated that amylose-derived crystals always require more energy to disassociate, indicating that amylose-based reassembled aggregates may be less damaged during freeze-drying treatment. This allows foods richer in amylose, along with amylose-based components, to keep their ordered structure, with greater gel strength and resistance to destruction during freeze-drying. With regard to the ordered structure, the compactness of reassembled aggregates and the network features of starch gels determine their microstructural changes during freeze-drying treatment [94]. Thus, the combined effect of the freeze-drying process and of how the freezing step was conducted could possibly influence the grinding yield. This is because a structure that collapses during freeze-drying will result in increased firmness of the sample, thereby increasing hardness and requiring higher energy to achieve a given grinding yield.

Limitations in Grinding as a Result of the Drying Process

Grinding is a very important unit operation for achieving powdered food products, because it determines, to a great extent, the quality of the food material and what the food powder can be used for, as judged by its characteristics. During grinding operations, most foods face limitations that result in poor grindability. Several research studies have reported factors that limit grindability in relation to the drying processes applied prior to grinding [95,96].
Baudelaire [7], in the handbook of food powders, grouped the limitations observed during grinding into intrinsic (water content) and extrinsic (glass transition and caking) factors. Awareness of these factors is of great importance in optimizing the grinding process.

Impact of Freezing Rate on Crystallization and Microstructure

The phase transition part of the freezing process involves the conversion of water to ice through crystallization, and it is the key step determining the efficiency of the process and the quality of the frozen product [97]. As mentioned in the early part of this review, the formation of large ice crystals within the tissue of foods results in significant structural deformation of the tissue. On the other hand, the formation of fine crystals results in evenly distributed ice crystals within and outside the cells, which prevents cell damage and gives an improved texture requiring less energy during grinding. The crystallization process consists of two main successive stages: nucleation and crystal growth [50]. For large-sized foods, nucleation starts predominantly at the exterior surface of the food, which is exposed to the coolant [19]. The interaction between these two stages determines the crystal characteristics, i.e., the size, distribution, and morphology of the crystals. In fresh foods that have retained a cellular structure at the start of freezing, nucleation can be extracellular and/or intracellular. At slow freezing rates, nucleation is extracellular, while at the fast freezing rates of cryogenic freezing, nucleation is mainly intracellular. The turgor pressure inside the cell makes it thermodynamically favorable for nucleation to start extracellularly [98]. Additionally, the onset of intracellular ice nucleation depends on the ratio of the freezing rate to the time scale of water permeation through the cell membrane, which is mainly determined by the temperature gradient. The freezing rate is, however, a critical parameter contributing to the distribution and morphological variation observed in the microstructure of freeze-dried food materials. Silva-Espinoza et al. [99] reported the influence of the freezing rate on orange puree with added gum arabic and bamboo fiber. In that research, it was established that a slow rate of freezing resulted in the formation of bigger ice crystals and an expansion of the cells in the structure, resulting in larger pore formation. Similarly, Voda et al. [19] reported the impact of freezing conditions, such as freezing at a very low temperature (−196 °C) compared to a moderately low temperature (−28 °C), on the microstructure of carrot, apart from the freezing rate itself. In their report, as shown in Figure 5, the sample frozen at the lower temperature shows smaller pores, as smaller ice crystals form under fast cooling conditions. On the other hand, slow freezing allows ice crystals to grow outside the cells, causing damage through cell collapse and rupture. The product resulting from such slow freezing is usually difficult to grind as a consequence of its increased firmness and hardness. Similarly, due to the collapse of the food material, moisture can be trapped inside it, thereby causing a caking effect during grinding.
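The qualitative rule above (faster freezing, smaller crystals, smaller pores) is often captured with an empirical power law; the functional form and the constants below are illustrative assumptions that would have to be fitted per product.

```python
def mean_ice_crystal_size_um(freezing_rate_c_min: float,
                             k: float = 50.0, n: float = 0.5) -> float:
    """Illustrative empirical relation d = k * R**(-n): crystal (and
    hence pore) size shrinks as the freezing rate R (degC/min) grows.
    k and n are placeholder constants, fitted per product in practice."""
    return k * freezing_rate_c_min ** (-n)


# Slow freezing (0.5 degC/min) vs. fast freezing (50 degC/min):
print(mean_ice_crystal_size_um(0.5))   # ~70.7 um -> large pores, brittle matrix
print(mean_ice_crystal_size_um(50.0))  # ~7.1 um  -> small pores
```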
Caking during Grinding

Cake formation is a major issue that occurs during drying and grinding to form food powders, owing to the high cohesive forces existing between the powder particles [100]. Caking has long been an issue for further powder utilization in the food industry, and the major contributing factors are the extent of drying, the temperature during grinding, the moisture content of the food material, and the ambient humidity [101]. Many available publications state only whether caking was observed and under what conditions it was noted. However, quantitative methods have been utilized for the characterization of caking phenomena.
Among the methods for assessing powder caking are measurements of powder flowability, angle of repose, inter-particle cohesion, size reduction, and particle morphology [100]. Collapse, stickiness, and caking are a common occurrence with an improper freeze-drying process. This is due to interparticle bridging, which manifests from the loss of structure and the decrease in sample volume when the solid matrix collapses beyond the T_g. The T_g occurs over a temperature range, and this is often a relatively narrow range of about 10 to 20 °C for amorphous sugars. However, a much larger range of about 50 °C may be expected for the glass transition of polymers in foods. Within this temperature range, the T_g can be referred to as the temperature initiating the onset of the glass transition, or as the mid-point temperature of the change in specific heat capacity [101,102]. Many food samples contain amorphous glassy components, e.g., amorphous sugars, and these components are thermodynamically unstable and can crystallize. However, this requires that the molecules can move. When an amorphous component is given sufficient conditions of temperature and water content, it can mobilize as a highly viscous flow, which can make it sticky and lead to caking [103]. Exceeding the T_g of a powder increases the molecular interactions at the surfaces of powders in contact [45]. However, some caution should be exercised when interpreting the response of caking with respect to T_g. This is because many other factors associated with a powder, such as storage temperature, exposure time, and atmospheric humidity, may alter its handling behavior and appearance and thereby cause a caking effect, and also because the glass transition only occurs over a range of temperatures (10-20 °C for amorphous sugars, and up to 50 °C for food polymers). Such a case is found in the report of Fitzpatrick et al. [100], who stated that the cake formation found in lactose depended on the relationship between cohesiveness and exposure time, as the water content of the lactose powder was found to increase with the exposure period, leading to cake formation. However, the temperature had a less significant effect, except for the condition at 40 °C (100% RH), which led to greater water uptake and resulting cake formation (Figure 6). Furthermore, the caking of dried mango during the grinding process is a usual occurrence that limits the process due to stickiness, and this sticky nature of the powder has been described by the "amorphous viscosity" theory [101]. Based on this theory, the critical factor causing stickiness is viscosity, and the increase in viscosity usually observed with increased grinding time results in the formation of more cakes in food powder [101]. However, from the literature reviewed, we can also deduce that grinding time is not the only cause of caking in powders. For instance, caking may be due to the strengthening of the liquid junctions that occur in products during processing and storage [104]. Fortunately, to correct caking errors, as should be evident at this point, the strict control of moisture content and storage at low temperatures, when possible, are key factors in minimizing the effects of caking of powders.
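The coupling between moisture uptake and caking can be made quantitative with the Gordon-Taylor relation, which predicts how strongly water depresses the T_g of an amorphous solid. The default constants below (T_g of amorphous lactose near 374 K, T_g of water near 138 K, k around 6.7) are typical literature values used purely for illustration.

```python
def gordon_taylor_tg(w_water: float, tg_solid_k: float = 374.0,
                     tg_water_k: float = 138.0, k: float = 6.7) -> float:
    """Gordon-Taylor mixing rule for a solid/water blend:
        Tg = (w_s*Tg_s + k*w_w*Tg_w) / (w_s + k*w_w)
    Defaults approximate amorphous lactose; all constants are
    system-specific in practice."""
    w_solid = 1.0 - w_water
    return (w_solid * tg_solid_k + k * w_water * tg_water_k) / (
        w_solid + k * w_water)


# A few percent of absorbed water pulls Tg toward ambient temperature,
# which is why humid exposure (cf. the lactose example above) promotes caking:
for w in (0.00, 0.02, 0.05, 0.10):
    print(f"{w:.2f} water -> Tg = {gordon_taylor_tg(w) - 273.15:.0f} degC")
```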
Furthermore, the addition of food additives, such as glucose syrups in a range of about 40-60%, has also been reported to reduce stickiness and improve powder recovery by reducing the cohesive forces between the particles [105].

Effect of Grinding Technology on Food Powder Component

The content and bioavailability of nutrients depend not only on the nature of the food but also on its processing methods.
Reducing material size during grinding or shearing to fine particles with diameters of 10 to 25 µm or less increases the interfacial activity of food products [106]. Powder with a particle size below 25 µm is usually called superfine powder. At this level, the food shows an improved dissolution rate of the effective components present in the product, owing to the more effective release of the bioactive components. However, the particle size obtainable from the grinding process is a function of the type of mill or grinding technology used. For instance, the report of Dewettinck et al. [107] indicated that the type of mill and the flow diagram used during the grinding process influenced how the bran and germ are separated from the starchy endosperm. When high amounts of bran and germ are separated, the flour obtained from the starchy endosperm has lower amounts of micronutrients, because these are concentrated in the bran and germ. Additionally, the type of force (e.g., shear, compression, or impact) used to break the grain for reaching the starchy endosperm, and to further grind the starchy endosperm into flour, can also influence the availability of nutrients [108]. This is similar to the report of Violeta et al. [109], who investigated the influence of the milling method on the bioactive components of multigrain flour (based on wheat, rye, and triticale). In their report, they discovered that the multigrain flours obtained from a Buhler laboratory mill had bioactive compound values about 1.75-2 times higher than the corresponding multigrain flours obtained with a laboratory disc mill. These results indicate that the intensity of the grinding process varies with the milling equipment. When compared to the laboratory disc mill, the Buhler mill ensured a more intense grinding process, which resulted in higher amounts of smaller particles originating mainly from the pericarp and testa. However, when discussing the particle size generated by a grinding operation, precautions and proper checks must be ensured. This is because the majority of bioactive compounds present in food, such as lipids, proteins, carbohydrates, and vitamins, are readily exposed at smaller sizes, and they are sensitive to highly acidic environments and enzyme activity, and may react with oxygen in the environment. For instance, polyunsaturated fatty acids of the n-3 family, such as docosahexaenoic acid and eicosapentaenoic acid, which can be found in nuts and seeds, are known to have various health benefits, such as ameliorative effects on hypertension, inflammation, immune problems, and other diseases [110]. However, n-3 fatty acids are susceptible to oxidative deterioration, limiting their use in foods because of flavor degradation by oxidation. In addition, hydroperoxides and their secondary products originating from lipid oxidation are thought to be toxic [111]. Furthermore, issues relating to health safety could also be a factor to consider, as the larger surface area increases the risk of bioaccumulation of microorganisms. In functional foods, where bioactive components often get degraded and eventually inactivated by the hostile environment, encapsulation of these bioactive components is a readily available solution that has been used for extending the shelf-life of food products by slowing down the degradation processes, or by preventing degradation until the product is used.
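Returning to the particle-size point above, the benefit of superfine grinding for release can be rationalized with the classical Noyes-Whitney dissolution law, in which the rate scales with the powder's surface area and hence, for spheres, inversely with particle diameter. The monodisperse-sphere assumption and the numbers below are illustrative only.

```python
def dissolution_rate(d_particle: float, m_powder: float, rho: float,
                     diff: float, h: float, c_s: float, c: float,
                     v: float) -> float:
    """Noyes-Whitney: dC/dt = D * A * (Cs - C) / (V * h), with the total
    surface area of monodisperse spheres A = 6 m / (rho * d) -- so
    halving d doubles the dissolution rate (SI units throughout)."""
    area = 6.0 * m_powder / (rho * d_particle)
    return diff * area * (c_s - c) / (v * h)


# 25 um vs. 10 um particles, all else equal:
args = dict(m_powder=1e-3, rho=1500.0, diff=5e-10, h=1e-5,
            c_s=10.0, c=0.0, v=1e-4)
print(dissolution_rate(25e-6, **args) / dissolution_rate(10e-6, **args))  # 0.4
```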
Moreover, edible coatings on various food materials could provide a barrier to moisture and gas exchange, deliver colors, flavors, antioxidants, enzymes, and anti-browning agents, and could also increase the shelf-life of manufactured foods, even after the packaging is opened [112].

Innovations in Freeze-Drying Technology towards Achieving Grinding Efficiency

Food structure porosity is an important and distinct attribute of a freeze-dried product. Though it is directly related to some of the physical and chemical changes observed in freeze-dried products, such as rancidity and oxidation [113], it remains a desirable attribute during food processing as well as during the grinding of freeze-dried products, as it aids the production of a highly crispy texture. The desirability of porosity during drying is mainly due to its tendency to create a pathway through which heat and mass are transferred during the drying operation. This, in turn, affects the drying characteristics of the material and, hence, its time/temperature history, which has a direct relationship with chemical and/or biochemical change [114]. Therefore, controlling the development of pores in food during freeze-drying is crucial. The process of pore formation during freeze-drying is affected by both intrinsic factors (chemical composition and the initial structure before the drying operation) and extrinsic factors (temperature, pressure, gas atmosphere) [115]. During freeze-drying, the ice sublimation creates pores, the walls of which may shrink or collapse due to surface forces, capillary suction pressure, or gravity. This, however, can be controlled or reduced by lowering the surface tension through the use of a surfactant or organic solvent, or a pretreatment such as blanching can be applied prior to the drying operation. A review of the pretreatment methods has been published recently [116]. During the initial stage of freeze-drying, the composition of the freeze-concentrated phase surrounding the ice dictates the T_g. The T_g theory is one of the concepts proposed to explain the processes of shrinkage and collapse during drying. According to this concept, there is negligible collapse (more pores) in a material if it is processed below the glass transition, and the greater the difference between the process temperature and the T_g, the greater the collapse. Fan et al. [117] found that shrinkage or collapse stopped when the processing temperature was below the T_g. In primary drying, T_g is very relevant, and the vacuum must be sufficient to ensure that sublimation is occurring. At the end of primary drying, the pore size and the porosity are dictated by the ice crystal size, provided that collapse of the matrix walls that surrounded the ice crystals does not occur [54]. Consequently, the degree of liquid saturation also affects the pore pressure of the matrix; thus, a more saturated matrix has a greater possibility of collapse (i.e., fewer pores). Flashing off the moisture instantaneously increases the pressure inside the matrix, which can act as a counterbalancing force to the capillary forces of collapse [118]. The high viscosity (i.e., mechanical strength) of the concentrated amorphous solution around the ice may also prevent or retard shrinkage. According to the report of Bhandari et al. [45], the collapse rate increases as the viscosity of the drying matrix decreases below 10^7 Pa·s, above its T_g.
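The 10^7 Pa·s collapse threshold cited from Bhandari et al. [45] can be connected to temperature through the Williams-Landel-Ferry (WLF) equation; the "universal" constants and the 10^12 Pa·s viscosity at T_g used below are common literature assumptions rather than values from this review.

```python
def wlf_viscosity(t_c: float, tg_c: float, eta_g: float = 1e12,
                  c1: float = 17.44, c2: float = 51.6) -> float:
    """WLF estimate of viscosity above Tg:
        log10(eta / eta_g) = -C1 * (T - Tg) / (C2 + T - Tg)
    with the 'universal' constants and eta_g ~ 1e12 Pa s at Tg."""
    dt = t_c - tg_c
    return eta_g * 10.0 ** (-c1 * dt / (c2 + dt))


def collapse_likely(t_c: float, tg_c: float, eta_crit: float = 1e7) -> bool:
    """Collapse is flagged once the matrix viscosity falls below
    ~1e7 Pa s (cf. Bhandari et al. [45])."""
    return wlf_viscosity(t_c, tg_c) < eta_crit


print(collapse_likely(-30.0, -40.0))  # 10 degC above Tg -> False (~1.5e9 Pa s)
print(collapse_likely(-19.0, -40.0))  # 21 degC above Tg -> True  (~9e6 Pa s)
```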
For various food liquids during freeze-drying, the collapse temperature can vary between −5 and −60 °C, depending on their composition. The collapse temperature can be raised by the addition of high-molecular-weight materials. However, it should be noted that the increase in collapse temperature is directly related to the increase in glass transition temperature. Ratti [119] also reported that at a low drying rate, moisture profiles prevail in the sample, stresses inside the food are minimal, and shrinkage is pronounced but uniform. At a high drying rate, the surface moisture decreases very fast, so that the surface becomes stiff (the case-hardening phenomenon), limiting subsequent shrinkage. Thus, the moisture transport mechanisms and the drying rate play an important role in the formation of pores. This, consequently, will improve the grinding efficiency, provided that an adequate drying process, porosity, and moisture content are ensured.

Conclusions

Drying and grinding are the predominant unit operations that determine the quality of powdered food products. Among the several drying methods, freeze-drying is widely used for dehydrating agricultural products such as fruits and vegetables. Despite the long processing time and high cost, it is mostly preferred due to the high quality of its final product. During freeze-drying, the glass transition temperature is a critical parameter that requires careful consideration and monitoring to achieve a high-quality product. Energy models are used for quantifying the energy required for grinding. The stress model showed a tendency to be the most appropriate model for analyzing the energy used during grinding of freeze-dried samples, as it incorporates porosity, volume, and sample dimensions, and also considers the effects of adhesion and grinder properties. In addition, limitations in grinding, such as the impact of the freezing rate on crystallization and the caking occurring during grinding, can be addressed by monitoring the glass transition temperature, ensuring adequate porosity during drying, and maintaining a suitable moisture content of the sample during drying and grinding.

Conflicts of Interest: The authors declare no conflict of interest.
Design and Analysis of a Variable Inertia Spatial Robotic Tail for Dynamic Stabilization

This paper presents the design of a four degree-of-freedom (DoF) spatial tail and demonstrates the dynamic stabilization of a bipedal robotic platform through a hardware-in-the-loop simulation. The proposed tail design features three active revolute joints and an active prismatic joint, the latter of which provides a variable moment of inertia. Real-time experimental results validate the derived mathematical model when compared to simulated reactive moment results, both obtained while executing a pre-determined trajectory. A 4-DoF tail prototype was constructed, and the tail dynamics, in terms of reactive forces and moments, were validated using a 6-axis load cell. The paper also presents a case study in which a zero moment point (ZMP) placement-based trajectory planner, along with a model-based controller, was developed in order for the tail to stabilize a simulated unstable biped robot. The case study also demonstrates the capability of the motion planner and controller in reducing the system's kinetic energy during periods of instability by maintaining the ZMP within the support polygon of the host biped robot. Both experimental and simulation results show an improvement in the tail-generated reactive moments for robot stabilization through the inclusion of prismatic motion while executing complex trajectories.

Introduction

The tail is one of the most distinctive features visible in most vertebrate animal species, from mammals to fish to reptiles. These animals use their tails to assist locomotion in different forms. For example, kangaroos use tails to balance their body midair while hopping [1], while monkeys utilize their tails for climbing and navigating through tree branches [2]. Tuna exhibit excellent propulsion performance using their tails [3], and lizards have been observed leveraging their tails for pitch control and self-righting mid-air while falling [4,5]. Many research studies have highlighted the importance of the tail as a tool for stabilization, self-righting, and position manipulation [6,7]. This has encouraged research into tail-like robotic appendages on bio-inspired robots for enhanced maneuverability and stabilization. An upward trend in the exploration of tail applications in bio-inspired robotics has been seen in recent years. Lio et al. demonstrated the use of a single degree-of-freedom (DoF) active tail on a kangaroo robot to compensate for unwanted angular momentum about the pitch axis during the aerial phase of a hopping motion [8]. Patel et al. designed a one-degree-of-freedom tail to assist in the turning of a high-speed terrestrial robot [9]. That tail design was later developed into a 2-DoF (pitch and roll) rigid tail, rotating in a conical motion to stabilize the roll motion of a four-wheeled vehicle. The system used inverse dynamics in addition to servomotor constraints and torque input to generate desired trajectories for the tailed, wheeled robot [10]. A tail was also designed for a Two-Wheg Robot to assist it with climbing [11]. Suarez et al. also utilized a small-scale dual arm and a one-degree-of-freedom tail to control an aerial robot for flying and guiding [12]. In a recent study, Heim et al. found that a long and lightweight active tail could be more effective and could simplify body-pitch control compared to other tail models with the same moment of inertia [13,14].
This study also demonstrated that the use of a rigid link with a heavy mass at the end provides a simple and effective way to design robotic tails. Other researchers incorporate more complex mode shapes in their tail designs in order to generate complex moments in multiple planes. A recent trend in tail design has been the use of cable-driven, segmented structures to change the curvature profile and total mass moment of inertia of such robotic tails [13,14]. Rone et al. [13] demonstrated the use of cable-driven continuum robotic tails to generate torques in the roll, pitch, and yaw directions, as well as to change the moment of inertia by bending the tails into different curvatures. Multiple linear actuators were used to drive the cables connected to two segments of the tail to generate complex bending modes and torque profiles. A high-fidelity distributed-parameter model was then used for dynamic control [15]. Instead of DC electric motors, piezo actuators were used in an insect-sized (142 mg) aerial robot to allow rapid dynamic maneuvers and stabilization [16]. Many biped and quadruped robots have been equipped with tails to assist in the control of body attitude. The under-actuated biped robot Zappa walks using the moments generated by its tail, where the tail's changes in orientation enable motion [17]. The MIT Cheetah is also equipped with a 1-DoF tail to generate moment impulses for mid-air attitude adjustment and disturbance rejection while running at high speed [18]. Rigid-link tails provide simple and efficient ways to stabilize a robot but can generate only simple moments, while continuum robotic tails provide a changeable mass moment of inertia to generate complex moments for dedicated stabilization, at the cost of dedicated controllers and additional actuators. Building upon these previous lines of inquiry, the goal of the presented research is to design and develop a novel robotic tail platform capable of generating moments in the roll, pitch, and yaw directions and of changing the inertial properties of the tail, in order to help stabilize and manipulate a biped robot while in motion. Different from other multi-segment tails, which either require a large base actuation unit [19] or a strong base link to support each heavy self-actuated link [20], this work pursues a simple tail design with the associated control, while retaining the ability to change the tail's inertial properties as multi-segment robotic tails do. In the proposed design, the moment of inertia with respect to the base of the tail robot is made controllable by varying the position of an end effector mass using a prismatic joint. The simple design of the tail also reduces the complexity of the required real-time control, part manufacturing, and assembly. Prior work seeks to control the mid-air attitude of a quadruped robot, or uses the simple assumption that all unwanted angular momentum is dissipated into the ground once contact occurs [9,10,18]. This research explores the effects of a robotic tail as a tool to stabilize and dissipate the excess kinetic energy of a biped robot while in contact with the ground. A controller is designed and validated for the robotic tail in order to control the attitude and stability of a simulated biped robot on the ground with excess kinetic energy in the form of an unexpected impact.
The remainder of this work is divided into the following sections in order to present the design, modeling, validation, and simulation of the system. Section 2 describes the mechanical and mechatronic design of the robotic tail. Section 3 discusses the forward kinematic and dynamic modeling of the system. Section 4 presents the control architecture of the tail. Section 5 presents a case study on stabilizing a biped robot using the proposed tail, and Section 6 concludes this paper and discusses future work.

Mechanical and Mechatronic Design

The proposed robotic tail is a prismatic joint, situated on a spherical joint composed of three independent revolute joints, with a moving mass that varies the tail's moment of inertia. This design enables the execution of complex loading profiles without the need for significant actuation in the spherical joint. The reduced actuation requirements allow lower-cost and lower-performance actuators to be used in the base without sacrificing overall functionality.

Mechanical Design

This section presents a simple design that enables a 4-DoF rigid tail to achieve both rotation around the x, y, and z axes and translational motion of a moving mass in its local frame. Figure 1A,B show the prototype of the proposed robotic tail and its kinematic diagram, respectively. The three-axis (spherical) rotation is achieved by the three servomotors located at the base, whereas the moving mass is actuated through a custom-made rack-and-pinion linear actuator that can move along the tail's main link. Links 1, 2, and 3 are designed to make the rotational axes of the three servomotors intersect at one point, forming a 3-DoF spherical joint. The rack in Link 3 is made of 1060 aluminum alloy for its high strength yet relatively low weight, while the other parts of the tail are made of acrylonitrile butadiene styrene (ABS) for 3D rapid prototyping. The goal is to reduce the mass of the tail as much as possible, with the exception of the moving payload. The payload is composed of a DC motor that actuates the pinion and translates along Link 3. The motor is equipped with an incremental encoder for position feedback.

Mechatronics Design

To achieve real-time control, a simple mechatronic architecture was developed to control and sense the proposed tail's pose. As shown in Figure 2, the robotic tail is controlled by an ARM Cortex-M4 microcontroller (located in Link 1). The microcontroller communicates with the host computer over a wired USB connection. The controller receives actuator commands from Simulink (running on the host computer) and generates pulse-width modulation (PWM) signals corresponding to the desired positions of the spherical joint actuators. The controller reads the incremental encoder to record the position of the end effector and uses two limit switches for homing. Position measurements are sent to the host computer (Simulink) for use in the high-level controller.

Kinematic Modeling

The design of the rigid tail is such that three servomotors rotate the links independently while a DC motor drives the moving mass. The forward kinematics of the proposed tail design were computed using the Denavit-Hartenberg (DH) convention [21]. Figure 3 shows the coordinate frame assignment for each link, where frames 0-2, {F_0-2}, are attached to Links 1-3 with rotation angles θ_1-3, respectively. The base frame {F_B} is attached to the center of the bottom plane of the tail robot. The end effector frame {F_E} is attached to the moving mass.
The scalar value δ defines the translational distance between X_E and Z_3 along Z_E. The forward kinematics of the robotic tail can then be determined using the DH parameters in Table 1, generated from the frame coordinate assignments. Using the DH convention, a homogeneous transformation matrix A_i^j of any frame {F_j} relative to any other frame {F_i} can be calculated through the chain multiplication property via an intermediate frame {F_k}, using Equation (1). Thus, the forward kinematics can be computed by applying the chain multiplication rule in Equation (1) to describe the configuration of the tail through the joint space configuration vector.

Table 1. DH parameters of the tail (columns: i, a, α, θ, d).

Dynamics Modeling

The dynamic model of the proposed tail is built upon the forward kinematics developed in the previous section. Based on the forward kinematics, the linear and angular velocities of each link can be obtained using the recursive formulation in Equation (2), from the base link to the end effector. The dynamics of the proposed system are obtained using the Euler-Lagrange method [18]. The Lagrangian for the proposed tail can be expressed as the sum of the total kinetic energy T_i and potential energy V_i of the system using Equation (3), where 0_g and 0_p_Ci represent the gravity vector and the position vector of the CoM of Link i expressed in frame {F_0}. With the total energy of the system computed in Equation (3), the joint forces/torques can be computed using the Euler-Lagrange equations of motion, Equation (4), where q_i and q̇_i represent the displacement and rate of change of displacement of joint i, and τ_i is the joint torque/force for the i-th revolute/prismatic joint. In the more general vector formulation of the Euler-Lagrange equation (4), M(q), C(q, q̇), and G(q) represent the mass/inertia matrix of the system, the Coriolis and centrifugal (effect) matrix, and the gravity loading vector, respectively.

The tail motion generates a wrench on the tail attachment, while the tail base remains static with respect to its attached body. To estimate the wrench transmitted by the tail to the biped robot, the Euler-Lagrange method can be extended by adding a virtual 6-DoF joint between the tail base and the tail attachment, which can rotate about and translate along the X, Y, and Z axes. The base frame {F_B} is fixed on the tail base, while the virtual frame {F_V} is fixed on the biped robot. The attitude and translation of {F_B} with respect to {F_V} are described by the Euler angles Θ = (ψ, θ, φ) and the displacement vector X = (x, y, z), respectively. The transformation from {F_V} to {F_B} is performed by translation along X, Y, Z and then rotation around the X, Y, Z axes of the current frame. Using these rules, a homogeneous transformation matrix A_V^B, similar to Equation (1), can be constructed to transform quantities in {F_V} to {F_B}. For computational advantage, the angular velocity of the base frame {F_B}, in its own coordinate system, is represented as a function of the Euler angle rates Θ̇ = (ψ̇, θ̇, φ̇)ᵀ, as in [21], using Equation (5). By setting {F_V} identical to {F_B}, with Ẋ = Θ̇ = (0, 0, 0)ᵀ, and assuming an infinitesimally small displacement in position, ΔX, and attitude, ΔΘ, the angular velocity B_ω_B can be approximated by Θ̇. This infinitesimal displacement vector can be combined with Equation (4) to estimate the wrench generated by the tail at its base.
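A minimal numerical sketch of Equation (1) and the chain multiplication rule is given below; the DH rows used in the example are placeholders, since Table 1's actual parameter values are what define the real tail.

```python
import numpy as np


def dh_transform(a: float, alpha: float, theta: float, d: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])


def forward_kinematics(dh_rows) -> np.ndarray:
    """Chain multiplication (Equation (1)): A_E^B = A_1 A_2 ... A_n.
    For the tail, q = (theta_1, theta_2, theta_3, delta): three revolute
    joints plus the prismatic end effector offset delta."""
    T = np.eye(4)
    for a, alpha, theta, d in dh_rows:
        T = T @ dh_transform(a, alpha, theta, d)
    return T


# Placeholder DH rows (NOT Table 1's values): spherical base + slider.
q = (0.3, -0.2, 0.1, 0.15)
rows = [(0.0,  np.pi / 2, q[0], 0.0),
        (0.0, -np.pi / 2, q[1], 0.0),
        (0.0,        0.0, q[2], 0.0),
        (0.0,        0.0,  0.0, q[3])]   # prismatic joint: d = delta
print(forward_kinematics(rows)[:3, 3])   # end effector position
```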
Validation of Dynamic Model and Simulation Results

To validate the dynamic model of the proposed tail, simulation results from the MATLAB implementation of the presented model were compared against an MSC ADAMS simulation, in addition to actual hardware experimentation. To assess the dynamics of the tail, a joint space trajectory, obtained using inverse kinematics, was executed to match the desired end effector trajectory shown in Figure 4B. In this trajectory, the end effector first traverses a semi-circle, as in trajectory 1 (in the X-Y plane), then moves upward to the highest point in space, as in trajectory 2, followed by a curve to the ending point, as in trajectory 3. Figure 4A shows the desired position, velocity, and acceleration of the X, Y, and Z components of the end effector. In the MATLAB implementation of the proposed dynamic model, the three trajectories of the end effector were simulated, and the moment responses at the base of the robotic tail were computed using the Euler-Lagrange method discussed in Section 3. For the ADAMS study, constraints and joint definitions were added to the imported 3D CAD geometry of the tail, and the same end effector trajectories were executed. As in the MATLAB study, the moments and forces at the base were measured in ADAMS. Table 2 shows the system parameters used in the simulation of the tail dynamics.

For experimental validation of the simulation models, the tail prototype was mounted on a 6-axis load cell capable of measuring forces and torques in real time. The position commands for each joint were sent from MATLAB to the microcontroller using serial communication, and the load cell measurements were recorded. To study the effect of the end effector position on the wrench exerted by the tail on the base, joint space trajectories were executed with the end effector positioned at both its lowest (retracted) and highest (extended) possible positions on Link 3. Figure 5A presents the base moments obtained from the proposed mathematical model and the ADAMS simulation, compared against the experimental measurements collected with the tail in the retracted end effector mode. The magnitudes of the maximum (absolute) torque estimates from the proposed dynamic model about the X and Y axes were 0.5706 Nm and 0.5973 Nm, respectively. The maximum moment about the Z axis was close to zero, as expected due to the near-constant yaw-joint motor speed. The ADAMS simulation results largely corroborate the proposed model. The maximum moments observed in the experimental data were 0.5592 Nm and 0.5575 Nm about the X and Y axes, respectively. In comparison to the retracted end-effector mode, following the desired test trajectory in the extended end-effector mode generated higher torques. Figure 5B shows the torque profiles generated from the proposed mathematical model, the ADAMS simulation, and the experimental data for the extended end-effector mode. The magnitudes of the maximum (absolute) torque estimates from the proposed dynamic model about the X and Y axes were 0.9398 Nm and 0.9567 Nm, respectively, with near-zero moments about the Z axis. The maximum moments observed in the experiment about the X and Y axes were 0.8623 Nm and 0.9280 Nm. The root-mean-square (RMS) errors for the six trajectories between the proposed model and the experiment are 0.1099, 0.0742, and 0.0074 for the torques about the x, y, and z axes in Figure 5A, and 0.1813, 0.129, and 0.0129 for the torques about the x, y, and z axes in Figure 5B; the experimental torques thus closely match the simulation results.
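For reference, the RMS figures quoted above correspond to the usual root-mean-square deviation between equally sampled model and load-cell traces:

```python
import numpy as np


def rms_error(model, measured) -> float:
    """Root-mean-square deviation between two equally sampled torque
    traces, as used to compare the curves in Figure 5."""
    model, measured = np.asarray(model), np.asarray(measured)
    return float(np.sqrt(np.mean((model - measured) ** 2)))
```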
The experimental data showed good correlation with the simulation models. However, the magnitude of the maximum moment recorded in the experiment is lower than in the proposed mathematical model and the ADAMS study. In addition, a de-synchronization of the measured torque was observed with respect to the simulation results. Both effects may be attributed to the unmodeled dynamics of the freely hanging wires used to power the DC motor, manufacturing inaccuracies, or component specification deviations. Regardless, the differences are small enough that they are ignored for the remainder of this work.

Robot Stabilization Using the Robotic Tail

This paper presents a case study in which the applicability of the proposed tail for stabilization is demonstrated on a simulated biped robot after it receives an unexpected angular impulse. A hierarchical controller is developed, as shown in Figure 6. The high-level controller is composed of the trajectory planner, the zero moment point (ZMP) placement-based virtual torque estimator, and a model-based controller, while the low-level controller comprises the actuator controllers. Based on the robot trajectory, α, obtained from the trajectory planner, the predefined trajectory for δ, and the maximum torque estimated from the ZMP-based virtual torque estimator, the model-based controller can maneuver the tail to generate a counter-moment that brings the ZMP back inside the support polygon and dissipates unwanted energy from the biped robot through the tail actuators. The model-based controller computes the tail trajectory, β, and the end effector trajectory, which drive the robot tilt angle to follow the desired α and dissipate energy according to the virtual torque estimator. The low-level controller applies a proportional-integral-derivative (PID) control law to drive the actuators along the desired β and δ trajectories.

Biped Robot-Tail System

The biped robot is composed of two robotic modular legs [22], with the tail installed horizontally so that it can rotate about the z axis continuously, as shown in Figure 7A. The full system dynamics, which will later be simplified, can be expressed using Equation (6), where H is the total angular momentum of the whole system; H_G represents the net torque generated by the gravity of all parts; τ is the external torque applied to the entire system; i_I_Ci, i_ω_i, R_v_Ci, and R_r_Ci represent the moment of inertia, angular rates, linear velocity, and position of the CoM, respectively, of the i-th part in the frame of reference marked by the superscript; the subscript R represents the biped robot without the tail; and the subscripts {B, 1, 2, 3, E} are as defined in Equation (3). The full dynamic equation is used to simulate the motion of the bipedal robot with the tail, while, for computational convenience, a simplified dynamic equation is applied in the inverse-model controller, as shown in Figure 7B. The controller models the unstable biped robot as an inverted pendulum with a tilt angle, α, an actuated tail, β, and an end effector mass on the tail at a displacement, δ, with motion constrained to the lateral plane. It is also assumed that the ground provides enough friction to prevent lateral translation. The rotating joint of the inverted pendulum is located at the center of the support polygon, which is defined by the two robot feet and the direct line connecting them. The rotating active joint of the tail is located at the far end of the inverted pendulum.
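A toy version of this simplified lateral-plane model is sketched below. The single lumped tail torque and the parameter values are illustrative assumptions, not the terms of Equation (6).

```python
import math


def step_pendulum(alpha: float, alpha_dot: float, tau_tail: float,
                  dt: float = 1e-3, m: float = 10.0, l_c: float = 0.5,
                  inertia: float = 2.5, g: float = 9.81):
    """One explicit-Euler step of a planar inverted pendulum pivoting at
    the center of the support polygon. alpha is measured from the
    ground, so alpha = pi/2 is upright (matching d = pi/2 in the
    trajectory planner); gravity destabilizes the body away from
    upright, and tau_tail is the lumped reaction torque produced by
    accelerating the tail."""
    alpha_ddot = (tau_tail - m * g * l_c * math.cos(alpha)) / inertia
    return alpha + dt * alpha_dot, alpha_dot + dt * alpha_ddot
```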
The simulation parameters used in this case study are listed in Table 3. The mass of the end effector is 5 times that of the real tail model.

Trajectory Planner

Rather than pursuing one fixed target tilt angle, as in mid-air orientation adjustment, terrestrial vehicles use inverse models to generate actuator trajectories that follow a desired orientation trajectory. Smooth trajectories are always preferred, as they avoid abrupt changes in low-level actuation commands and prevent the actuators from saturating. Another concern in trajectory planning is actuation limits, which may cause unpredicted motion and drive the system to instability if the actuators are pushed beyond their limits. A trajectory optimizer is essentially a tradeoff between the trajectory error (the difference between the desired trajectory and a feasible trajectory) and the actuator limits/system health. When designing a trajectory, the initial and final conditions of the system and the smoothness of the motion are most important. The use of higher-order polynomials and other continuous functions in modeling the desired trajectory can avoid abrupt changes in the actuator commands from the low-level controller. If {α(t)}_BC and {α(t)}_Feasible are the sets of desired system state trajectories that satisfy the boundary conditions and the feasibility limitations, respectively, then the trajectory planning problem reduces to finding an optimal α(t) ∈ {α(t)}_BC ∩ {α(t)}_Feasible. In the presented case, {α(t)}_BC and {α(t)}_Feasible are described by Equation (7), where α̇_0 is the initial angular velocity of the biped robot after receiving an unwanted impulse, M_inv(·) denotes the inverse model operation, and t_set is the desired settling time within which the trajectory converges to within a small acceptable threshold, ε, of the final position. In addition, the control variables β_low and β_up define the lower and upper limits of the actuator position and velocity for this case study. In order to have controllable initial and final angular positions and velocities, the proposed method chooses an exponentially decaying log function to describe α, with design parameters {a, b, c, d}, in Equation (8), where the constant d determines the desired initial and final position of the biped robot. In our case, the parameter d is chosen as π/2 to keep the biped robot vertical. The proposed α model is a highly differentiable function that guarantees the smoothness of the actuator trajectory. By differentiating the trajectory in Equation (8) with respect to time and solving after equating to zero, the time of the local maximum, T_max, can be obtained. The initial condition on the angular velocity further constrains the design parameters of the trajectory through b = α̇(0)/a, which is obtained by setting t = 0 in Equation (9). Thus, the design parameters of the trajectory reduce to {b, c}. The constants b and c determine the position limits of the servomotors and the maximum allowed tilting of the biped robot, as shown in Figure 8. Figure 8A shows the influence of parameter b on both the body trajectory and the corresponding actuator trajectory, while Figure 8B shows the effect of parameter c. The parameters b and c are obtained by sampling from a predefined set {(b, c)} and carrying out a feasibility study on each sample by applying the inverse dynamic model M_inv(·), described in the following section. Figure 9 shows the real trajectories of both the tail actuators and the body angles, following desired body angles designed with parameters b and c.
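The exact expression of Equation (8) is not reproduced above, so the sketch below uses one plausible member of the "exponentially decaying log" family that satisfies every stated constraint: α(0) = d, α̇(0) = ab (hence b = α̇(0)/a), and α → d as t → ∞.

```python
import numpy as np


def alpha_trajectory(t, a: float, b: float, c: float, d: float = np.pi / 2):
    """Candidate profile alpha(t) = a * exp(-c t) * ln(1 + b t) + d.
    It starts at d, departs with initial rate a*b (the post-impulse body
    rate), peaks at some T_max, and decays back to d. The paper's
    Equation (8) may use a different member of this family."""
    t = np.asarray(t, dtype=float)
    return a * np.exp(-c * t) * np.log1p(b * t) + d


t = np.linspace(0.0, 5.0, 501)
alpha = alpha_trajectory(t, a=0.2, b=2.0, c=1.5)  # alpha_dot(0) = 0.4 rad/s
print(float(alpha[0]), float(alpha[-1]))          # both ~ pi/2
```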
Out of these feasible trajectories, the optimal trajectory is then selected by considering the overall performance cost. The process can be described by Equation (10), in which the normalized peak time of α and the normalized maximum value of β (β_max|b,c divided by the sum of β_max|b,c over the sampled set) control the time the body spends deviating from the stable stand and the total rotation of the tail, respectively. During stabilization, the moving mass simply follows a predefined trajectory, δ, from its lowest position to its highest position, in order to aid stabilization by increasing the moment of inertia of the robot. The end effector mass moves at a constant speed in the middle phase of the trajectory, with constant acceleration and deceleration in the start and terminal phases. This predetermined trajectory not only minimizes the controller's computational load, but also reduces the mechanical load on the rotary tail actuators, which in turn reduces the necessary acceleration and rotation.

Virtual Torque Estimator

Position and orientation control in robots often uses zero net angular momentum control in trajectory planning [23,24], whereas terrestrial vehicles more often use moment control for stabilization [25]. The proposed controller utilizes the ZMP to generate a trajectory for energy dissipation and stabilization of the robot-tail system. Multiple researchers have demonstrated the use of ZMP estimates in trajectory planning [17,25]. The ZMP is the point at which the net tipping moment acting on the robot is zero; it must be maintained inside the convex hull of the support polygon to prevent toppling of the robot [26]. The green line on the ground in Figure 7A denotes a 1-dimensional projection of the support polygon: the biped robot will stay balanced or recover to a stable configuration as long as the ZMP (red dot) is inside the support polygon. In the event of an unexpected impulse, if the ZMP begins to translate outside of the support polygon, the tail can be used to generate counter-moments that bring the ZMP back within the support polygon. While the ZMP is inside the support polygon and the tail is still rotating, the virtual torque estimator computes the maximum virtual torque that could be applied to the system to stop the tail from rotating while keeping the ZMP inside the support polygon. This motion thus assists the robot both in recovering a stable stance and in transferring the excess energy through the tail actuators via electromagnetic damping. In this paper, the ZMP-based virtual torque estimator computes the current ZMP of the robot (Figure 7) using Equation (11), following [26,27]. Here it is worth noting that the mass of the robot is ~19 times larger than that of the tail, and therefore the effect of the motion of the tail on the combined CoM of the whole robot body can be ignored. The CoM of the combined biped robot and tail with respect to the base frame is l_CoM. After receiving X_ZMP from the estimator, the maximum virtual torque that can be applied to the robot while keeping it marginally stable can be computed using Equation (12), where α_max is the marginally stable value of α and sign(H) is the sign of the angular momentum of the whole system, H. To avoid a discontinuity in the model arising from the sign function in Equation (12), the sign function has been replaced with a sigmoidal membership function.
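The body of Equation (11) is not shown above; the sketch below uses the standard planar point-mass ZMP formula from the cited literature [26,27], together with the 1-D support-polygon test of Figure 7A. The parameter values are placeholders.

```python
G = 9.81  # m/s^2


def zmp_x(x_com: float, z_com: float, xdd_com: float,
          zdd_com: float = 0.0) -> float:
    """Standard planar point-mass ZMP (cf. [26,27]):
        x_zmp = x_com - z_com * xdd_com / (zdd_com + g)."""
    return x_com - z_com * xdd_com / (zdd_com + G)


def zmp_inside(x: float, half_width: float) -> bool:
    """1-D support-polygon test: the segment between the two feet."""
    return -half_width <= x <= half_width


# A hard lateral CoM acceleration throws the ZMP outside a 6 cm margin:
print(zmp_inside(zmp_x(x_com=0.0, z_com=0.6, xdd_com=1.5), 0.06))  # False
```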
The rate at which energy is dissipated can be simplified and expressed using electromagnetic induction principles in Equation (13), where K_e is the simplified regenerative braking coefficient for DC motors.

Model-Based Controller

Based on the desired trajectory α and the maximum estimated virtual torque, the model-based controller generates the actuator trajectory using inverse model dynamics. For computational convenience, the controller uses a simplified model, which treats the tail as a single-DoF system and only considers the lateral motion, as stated in Section 5.1. The simplified model implemented in the controller is stated in Equation (14), where the quantities 0_I_C1, 0_v_C1, 0_ω_1, 0_r_C1, and m_1 represent the moment of inertia (located at O_B), linear velocity, angular velocity, center of mass vector, and mass of the leg and tail base (Link 0) with respect to the inertial frame {F_O} of Figure 7. In addition, the quantities 0_I_C2, 0_v_C2, 0_ω_2, 0_r_C2, and m_2 represent the moment of inertia, linear velocity, angular velocity, center of mass vector, and mass of the tail parts (Links {1, 2, 3, E}) at their CoM with respect to the inertial frame {F_O}. By inverting the model of the system in Equation (14), the tail control trajectory can be obtained as a continuous function β(t) = f(α, α̇, α̈, δ, δ̇, δ̈, β, β̇, τ_v), to be executed by the low-level controller.

Controller Performance

To evaluate the performance of the tail in robot stabilization, the controller was written in MATLAB Simulink and applied to the dynamic model of the tail-robot assembly. In the presented case study, the biped robot is simulated to receive an unexpected torque impulse sufficient to drive the ZMP out of the robot's support polygon. In the absence of the tail controller, the robot becomes unstable and falls to the ground. Figure 10 shows the biped robot trajectory α after receiving an impulse of 10 Nm-s. The simulation study shows that the system recovers its original orientation through the contribution of the tail dynamics. The proposed system uses the virtual torque estimator to dissipate the excess kinetic energy. In the absence of the virtual torque estimator, the robotic tail needs to keep rotating in order to balance an unexpected impulse of 2.5 Nm-s (Figure 11A), with the angle β increasing continuously in order to stabilize the angle α. In reality, the robotic tail has mechanical limits on its rotation angles due to the design, so without the estimator the tail cannot balance the unexpected impulse. However, when the virtual torque estimator is added to the robot trajectory planner, the excess energy imparted to the system by the impulse is dissipated. While adjusting the biped robot's orientation using the tail, the end effector is moved along Link 3 at different maximum speeds using the trajectory shown in Figure 11B. This results in changes to both the α and β angles. It is also worth noting that, with the virtual torque estimator in the trajectory generation, the tail moves less and stabilizes faster at higher end effector speeds. The simulation results show that the virtual torque estimator can effectively dissipate unwanted energy, hence eliminating the need for continuous rotation of the tail. Figure 12 shows the energy of the tail during the motion.
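The electromagnetic damping that produces this dissipation (Equation (13)) can be sketched as follows; the quadratic form comes from braking a DC motor through a resistance, P = (k_t·ω)²/R, lumped into a single coefficient K_e, which is an assumption rather than the paper's exact expression.

```python
import numpy as np


def dissipated_power(beta_dot, k_e: float):
    """Plausible form of Equation (13): regenerative braking of the tail
    joint dissipates power proportional to the joint rate squared,
    P = K_e * beta_dot**2 (K_e lumps k_t**2 / R for the DC motor)."""
    return k_e * np.asarray(beta_dot) ** 2


# Energy removed over a braking interval (trapezoidal accumulation):
t = np.linspace(0.0, 1.0, 101)
beta_dot = 4.0 * np.exp(-3.0 * t)  # decaying tail rate, rad/s
print(f"{np.trapz(dissipated_power(beta_dot, k_e=0.05), t):.3f} J")
```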
This energy transfers from the body to the tail and then transfers back to the body after the body returns to the stable zone. Thus, by controlling the position and velocity of the end effector simultaneously, one can limit the tail travel while stabilizing the robot.

Conclusions and Future Work

This paper presented a novel design of a 4-DoF robotic tail with a demonstrated capability to stabilize a bipedal robot. The incorporation of a prismatic joint in the system helped the tail change its moment of inertia with respect to its base, which could change the magnitude of the moments acting on the tail-robot system, all while lowering the actuation requirements for each DoF. The experimental data validated the proposed mathematical model for the tail dynamics, where the tail delivered up to 0.95 Nm of reactive moment. The case study also validated the controller for a simulated biped robot using the proposed tail, and demonstrated the capability of the proposed ZMP placement method and momentum-based control in trajectory generation and disturbance rejection. Although the behavior of the proposed tail robot was well predicted by the model, the power and speed limits of the servomotors restrict its applicability, in the tail's current form, to small and lightweight biped robots. In the future, the tail will be equipped with more powerful geared brushless DC (BLDC) motors to overcome this limitation. In addition, the tail robot will be equipped with a larger mass at the end effector to enable greater control over the moment of inertia of the system. Additional experiments will be performed with a physical bipedal robot, which is already a work in progress as a parallel project [18].
Template activating factor-I remodels the chromatin structure and stimulates transcription from the chromatin template.

To study the mechanisms of replication and transcription on chromatin, we have been using the adenovirus DNA complexed with viral basic core proteins, called Ad core. We have identified template activating factor (TAF)-I from uninfected HeLa cells as the factor that stimulates replication and transcription from the Ad core. The nuclease sensitivity assays have revealed that TAF-I remodels the Ad core, thereby making the transcription and replication apparatus accessible to the template DNA. To examine whether TAF-I remodels chromatin consisting of histones, the chromatin structure was reconstituted on a DNA fragment with core histones by the salt dialysis method. Transcription from the reconstituted chromatin was completely repressed, while TAF-I remodeled the chromatin and stimulated the transcription. TAF-I was found to interact with histones. Furthermore, it was shown that TAF-I is capable not only of disrupting the chromatin structure but also of preventing the formation of DNA-histone aggregation and transferring histones to naked DNA. The possible function of TAF-I in conjunction with a histone chaperone activity is discussed.

The eukaryotic nucleosome, a unit of chromatin, consists of 146 base pairs (bp) of DNA and a histone octamer containing two copies each of histones H2A, H2B, H3, and H4. It has been thought that some modifications of the chromatin structure would be needed before the initiation of replication or transcription (reviewed in Ref. 1). Some factors are shown to gain access to the chromatin DNA directly in vitro (2), while some others do so with the aid of proteins, such as yeast or human SWI/SNF (reviewed in Refs. 3 and 4), Drosophila NURF (5), and related factors (reviewed in Refs. 6 and 7), which facilitate the change of interaction between DNA and the histone octamer. Furthermore, the gene activity is also regulated by enzymatic modification of the histone octamer. Each histone possesses sites in its N-terminal region that can be hyperacetylated, and their acetylation and/or deacetylation are closely related to the gene activity (1, 8-12).
In order to study the molecular mechanism for activation of transcription and replication from chromatin templates, we have been using the adenovirus DNA complexed with the viral basic core proteins (Ad core) as a model system. The Ad genome is a double-stranded DNA of about 36,000 bp and forms a chromatin-like structure in the virion and in the infected cells. About 200 bp of DNA per viral nucleosome is coiled around six copies of the viral core protein VII, and each unit of viral nucleosome is bridged by the core protein V (13). Immediately after infection, early genes are transcribed, and some of their products together with the host factors NFI, II, and III drive the genome DNA replication (14,15). Newly synthesized DNA does not remain naked but transiently forms a complex with cellular histones (16). Late genes are transcribed from the newly replicated DNA, and core proteins and other viral capsid proteins are synthesized. Since histones are not present in the Ad virion, cellular histones on the newly replicated viral DNA have to be removed and replaced with newly synthesized viral core proteins before being packaged into the progeny virus capsid. This type of replacement seems similar to that of histones with protamine during spermatogenesis. Although the basic mechanisms for replication and transcription of the Ad genome DNA have been evaluated with in vitro systems using naked DNA templates, in vitro replication and transcription from the Ad core do not take place with only the factors needed for these reactions on the naked DNA template (17-19). Since the viral DNA in infected cells is also complexed with either basic viral core proteins or histones to form the chromatin structure, the access of trans-acting factors involved in replication and transcription to their cognate sites is restricted. Therefore, it is reasonable to presume that remodeling of the viral chromatin takes place before the initiation of replication and/or transcription. Recently, we have identified from uninfected HeLa cells template activating factor (TAF)-I, which stimulates the replication from the Ad core (18). TAF-I also stimulates the transcription from the E1A promoter on the Ad core but not effectively from the major late promoter (MLP) (19). There are two subtypes of TAF-I, designated TAF-Iα and TAF-Iβ, both of which have a common amino acid sequence except that the N-terminal 30-amino acid sequences are specific to each subtype. TAF-I has a long acidic tail in its C-terminal region that is required for the activation of Ad core replication and transcription (20,21). The stimulatory activity of TAF-Iβ is higher than that of TAF-Iα. TAF-Iβ is the same as the product of the set gene, which is fused to the can gene by translocation in an acute undifferentiated leukemia (21,22). TAF-I shows low but distinct amino acid sequence homology to nucleosome assembly protein (NAP)-I, which was originally identified as a factor involved in chromatin assembly (23). It is indicated that NAP-I can substitute for TAF-I in the stimulation of replication and transcription from the Ad core, and that TAF-I has NAP-I activity (20,21). Therefore, both proteins are structural and functional homologues of each other. Here we investigate the mechanisms for the stimulation of transcription by TAF-I from both the E1A and ML promoters on the Ad core and on the reconstituted chromatin consisting of histones.
TAF-I stimulates the transcription not only from the E1A promoter on the Ad core but also from the chromatin template reconstituted on the DNA containing the MLP. The nuclease sensitivity assays have revealed that TAF-I stimulates the transcription from these templates by altering the core protein-DNA or histone-DNA interaction. Furthermore, the Far Western analyses reveal that TAF-I binds to each core histone and that its binding affinity for the histone H3/H4 complex is higher than that for the H2A/H2B complex. TAF-I binds to core histones through its acidic region and prevents the formation of aggregates between DNA and core histones. Our results lead to the possibility that one of the putative physiological functions of TAF-I may be to suppress the random aggregation of DNA with basic proteins such as histones. Since other proteins have also been identified as histone chaperones, the redundancy of these histone chaperones raises the question of how their roles are assigned and coordinated in a cell.

Chromatin Reconstitution by the Salt Dialysis Method-Chromatin structure was reconstituted on the plasmid DNA or end-labeled DNA fragments by the salt dialysis method (24). The 453-bp-long DNA fragment containing the region between nucleotide positions 5789 and 6242 (where the left terminus of the Ad DNA is position 1) was prepared from plasmid pSmaF by digestion with HindIII and XhoI. Core histones, histone H2A/H2B, and histone H3/H4 were separately prepared as described by Simon and Felsenfeld (25) using hydroxylapatite column chromatography. Ten micrograms of the 453-bp-long DNA or pSmaF was mixed with core histones (8 μg) in 2 M NaCl, 10 mM Tris-HCl, pH 7.5, and 1 mM EDTA, and then dialyzed at 4°C against 10 mM Tris-HCl, pH 7.5, 1 mM EDTA, 1 mM 2-mercaptoethanol, and 0.1 mM phenylmethylsulfonyl fluoride in the presence of stepwise concentrations of NaCl. NaCl concentrations and periods for dialysis were as follows: 2 M for 2 h, 1.5 M for 4 h, 1.0 M for 4 h, 0.75 M for 4 h, and 0 M for 12 h. After dialysis, samples were subjected to centrifugation through a 5-20% sucrose gradient containing 10 mM Tris-HCl, pH 7.5, and 1 mM EDTA in an SW41Ti rotor (Beckman) at 35,000 rpm for 16 h at 4°C. Fractions (400 μl) were collected and analyzed by electrophoresis on a 0.8% agarose gel in 22 mM Tris borate and 0.8 mM EDTA. Chromatin reconstituted on the plasmid DNA was fractionated through a 15-60% glycerol gradient in a TLS55 rotor (Beckman) at 50,000 rpm for 90 min at 4°C. Fractions containing chromatin were dialyzed against 10 mM Tris-HCl, pH 7.9, and 1 mM EDTA and stored at 4°C until use. The reconstituted chromatin was used for the in vitro transcription and the nuclease sensitivity assay within 1 week.

Cell-free Transcription-A cell-free transcription assay was performed essentially as described previously (19) using nuclear extracts (about 40 μg of protein) prepared from HeLa cells in 25 mM Hepes-NaOH, pH 7.9, 12.5 mM MgCl2, 60 mM KCl, 1 mM dithiothreitol, 7.5 mM creatine phosphate, 500 μM each of the four nucleoside triphosphates, and 8% glycerol. Fifty nanograms of naked DNA or reconstituted chromatin containing 50 ng of DNA was used as a template. The reaction mixture was incubated at 30°C for 1 h, and transcripts were purified and analyzed by primer extension with the radiolabeled oligo-DNA primer PML2 (19) and 100 units of reverse transcriptase (Life Technologies, Inc.). The products were purified, separated by electrophoresis on an 8% polyacrylamide gel in the presence of 50% urea, and visualized by autoradiography.
Glycerol Gradient Assay-To analyze the interaction between the Ad core and TAF-I, the Ad core (2 μg) was incubated with TAF-I (2 μg) purified from HeLa cells at 30°C for 30 min in the transcription reaction mixture and centrifuged through a 15-60% glycerol gradient in a TLS55 rotor (Beckman) at 35,000 rpm for 2 h at 4°C. Fractions (100 μl) were collected and analyzed by 12.5% SDS-polyacrylamide gel electrophoresis (PAGE), and proteins were visualized by silver staining.

Far Western Analysis-For the Far Western analysis, a radiolabeled protein probe was prepared by phosphorylation with cAMP-dependent protein kinase (PKA) (Sigma) and [γ-32P]ATP. The rTAF-I protein, produced from TAF-I cDNA (21) cloned into NdeI- and BamHI-digested pET14bk (Novagen), carries sites that are specifically phosphorylated by PKA. When TAF-I was used as a probe (15 pmol in 5 ml of hybridization solution), about 5 pmol each of core histones, histone H2A/H2B, or histone H3/H4 was either spotted on a PVDF membrane (Millipore) or separated by electrophoresis on 15% SDS-PAGE and then transferred to a PVDF membrane. When core histones were used as a probe, 15 pmol each of the histidine-tagged recombinant proteins rTAF-Iβ(1-277), rTAF-Iβ(1-225), rTAF-Iβ(26-277), and rTAF-Iβ(133-277), prepared as described (20,21), was subjected to 12.5% SDS-PAGE and transferred to a PVDF membrane. The proteins on the membrane were denatured with HBB buffer (20 mM Hepes-KOH, pH 7.6, 5 mM MgCl2, 1 mM KCl, and 5 mM dithiothreitol) containing 6 M guanidine HCl. Renaturation of the proteins was carried out by immersing the membrane in the same buffer containing 3, 1.5, 0.75, and 0.375 M guanidine HCl successively. Finally, the membrane was soaked in HBB buffer containing 5% nonfat dry milk at 4°C overnight. The binding reaction was performed at 4°C for 3 h in HBB buffer containing 5% nonfat dry milk and 1% Nonidet P-40, and the membrane was washed at 4°C with 25 mM Tris-HCl, pH 7.4, 150 mM NaCl, and 0.1% Tween 20. γ-32P-labeled proteins on the filter were visualized by autoradiography.

Gel Mobility Shift Assay-Core histones (20 ng) were first incubated at 30°C for 30 min in the transcription reaction mixture containing 1 mg/ml BSA in the absence or presence of increasing amounts of rTAF-Iβ. This mixture was then mixed with the 453-bp-long DNA fragment (10 ng) and incubated at 30°C for 60 min. The mixture was subjected to electrophoresis on a 0.8% agarose gel containing 22 mM Tris borate and 0.8 mM EDTA, followed by autoradiography.

Disruption of the Ad Core Structure by TAF-I-Our previous study showed that TAF-I stimulates transcription from the E1A promoter on the Ad core but not from the MLP (19). Here we have examined the effect of TAF-I on the structural change of the Ad core around these promoter regions. The Ad core was subjected to digestion with restriction enzymes after incubation with or without TAF-I. Nuclease sensitivity was monitored by Southern blot analysis using the radiolabeled DNA probe complementary to either the E1A promoter region spanning nucleotide positions 455-628 from the left end of adenovirus type 5 or the MLP region spanning nucleotide positions 5779-6242 (Fig. 1A). On the E1A promoter region, TAF-I augments the accessibility of nucleases to DNA in the Ad core, and the amounts of DNA fragments generated by PvuII or NciI are increased in a dose-dependent manner (Fig. 1B, upper panel). This suggests that TAF-I induces a conformational change of the Ad core structure, so that nucleases easily gain access to the DNA.
On the other hand, TAF-I does not have such an intensive effect on the MLP region as seen in the E1A promoter region (Fig. 1B, bottom panel). Although small amounts of PvuII or NciI fragments are generated by the addition of TAF-I (lanes 3-5 and 8-10), a large portion of the MLP region in the Ad core remains uncut. The fact that TAF-I induces the conformational change of the Ad core in the E1A promoter region but not in the MLP region is in good agreement with the difference in these promoter activities on the Ad core stimulated by TAF-I, although the detailed mechanism for this selectivity of TAF-I is not known at present. Next, we examined the effect of nucleoside triphosphates (NTPs) on the TAF-I activity. As shown in Fig. 1C, the TAF-I activity is totally independent of the presence of ATP, GTP, or dATP. This is consistent with the observation that TAF-I does not have ATPase activity (data not shown). This observation suggests that the mechanism for chromatin remodeling by TAF-I is different from that of ATP-dependent chromatin remodeling complexes. TAF-Iβ had stronger chromatin remodeling activity than TAF-Iα (data not shown), which is in agreement with the fact that the stimulatory activity of TAF-Iβ for Ad core replication and transcription is greater than that of TAF-Iα (18,20,21). Since the long acidic tail present in the C-terminal region of TAF-I is required for the stimulation of Ad core DNA replication and transcription from the E1A promoter (20,21), it is possible that TAF-I associates with the basic core proteins through the acidic region and that this interaction is involved in the conformational change of the Ad core. In addition, it is not known whether TAF-I removes the core proteins from the Ad core complex. To examine this, the Ad core incubated with or without TAF-I was subjected to the glycerol gradient assay. When the Ad core is incubated with TAF-I, TAF-I is co-sedimented with the Ad core (Fig. 2). This suggests that TAF-I binds to the Ad core through the core proteins, although the protein seen between TAF-Iα and β (lanes 8-10 in both panels A and B) that associated with the Ad core is another possible target of TAF-I. However, this would not be the case, since stoichiometric amounts of TAF-I relative to core proteins are required for the maximal TAF-I activity (18,19,21) and the ratio of this protein to TAF-I is much smaller than that of core proteins to TAF-I. It is worth noting that the Ad core treated with TAF-I distributes over a broader range compared with the non-treated Ad core. These results suggest that TAF-I interacts with the Ad core, forming an Ad DNA-core protein-TAF-I tertiary complex, and that less dense Ad cores are produced due to the conformational change. It is not known whether TAF-I interacts with core protein V, VII, or both.

TAF-I Stimulates the Transcription from Chromatin Templates-Since TAF-I remodels the Ad core and stimulates the transcription from the Ad core, it is important to test whether this is also the case for cellular-type chromatin. To examine the effect of TAF-I on the chromatin template consisting of core histones, we reconstituted the chromatin structure. In order to avoid possible contamination of the reconstituted chromatin with non-histone proteins when chromatin assembly extracts of Xenopus or Drosophila are used for chromatin reconstitution, the salt dialysis method was employed.
For reconstitution of the chromatin structure, the 453-bp-long DNA fragment containing the Ad MLP spanning nucleotide positions from −260 to +190 relative to the transcription start site (+1) and core histones purified from HeLa cells were used. The mechanism of the initiation of transcription from the MLP has been well studied by use of naked DNA templates. The 453-bp-long DNA fragment contains not only the TATA box and MLTF binding site but also the downstream element factor binding site. The TATA box and MLTF/USF binding sites are the minimal requirement for the in vitro transcription reaction (26), and the binding of the downstream element factor to the specific DNA sequence between nucleotide positions +86 and +95 relative to the transcription start site further stimulates the transcription from the MLP (27). The histone-DNA complexes generated by the salt dialysis method were fractionated on a sucrose gradient, and the nucleoprotein complex in each fraction was directly analyzed by electrophoresis on an agarose gel (Fig. 3A). The histone octamer seems randomly positioned on the DNA fragment, and mono-, di-, or trinucleosomes cannot be clearly separated in this gel. Since, in theory, a specific sequence or DNA structure is needed for the positioning of the histone octamer, there would not be a strong signal for such positioning in this region, unlike that detected in the ribosomal RNA gene (28). Indeed, nucleosome positioning on the adenovirus DNA has not been reported. To confirm the structure of the reconstituted chromatin on the DNA fragment, we compared the sedimentation profile of the histone-DNA fragment complex with that of a mixture of oligonucleosomes as a marker. The mixture of oligonucleosomes was prepared by MNase digestion of chromatin reconstituted on the plasmid DNA and fractionated through the sucrose gradient. DNA present in each fraction as histone-DNA complex was deproteinized and separated by electrophoresis on an agarose gel (Fig. 3B). DNA fragments derived from mono- and dinucleosomes are well resolved. By comparison of these patterns (Fig. 3, A and B), it is assumed that nucleosomes reconstituted on the radiolabeled DNA fragment recovered in fractions 13-17 would contain mono- and dinucleosomes. In fact, when the nucleoprotein complex reconstituted on the 453-bp DNA fragment in each fraction was subjected to the MNase digestion assay, only a 150-bp fragment and 150- and 300-bp fragments, respectively, were recovered from fractions 13 and 17 (data not shown). From these results, we concluded that fractions 13 and 17 mainly contain mono- and dinucleosomes, respectively. These fractions were pooled and used in subsequent experiments. Next, the transcription reaction was performed (Fig. 4A) using the reconstituted chromatin as a template and nuclear extracts prepared from HeLa cells as the enzyme/factor source. Transcripts were detected by the primer extension method as described previously (19). When the template DNA is assembled into nucleosomes, transcription is repressed (compare lanes 1, 2, and 7). Transcription repression on the dinucleosome is more extensive than that on the mononucleosome (lanes 2 and 7). Because the promoter region and the transcription start site may remain open when a mononucleosome is formed around the 3′-end of the DNA fragment, transcription would proceed. This would be the reason why the transcription from the dinucleosome template is completely repressed (lane 7) while that from the mononucleosome is not (lane 2).
Of importance is that this transcription repression is relieved by TAF-I (lanes 3-6 and 8-11). As the increase of this transcription level by TAF-I is not observed when naked DNA is used as a template (Ref. 19 and data not shown), this transcription derepression should be induced by the conformational change of the chromatin structure by TAF-I. To confirm this, the nuclease sensitivity assays were performed. When the nucleosome is treated with increasing amounts of TAF-I (Fig. 4B), the nuclease sensitivity is increased and 293- and 160-bp DNA fragments are generated. These results indicate that TAF-I somehow remodels not only the Ad core but also the chromatin structure, and thereby stimulates the transcription from DNA-basic protein complexes in vitro. While the level of the nuclease sensitivity, reflecting the level of remodeling of the structure of the reconstituted chromatin, is increased in a TAF-I dose-dependent manner, transcription is not strictly stimulated as a function of increasing amounts of TAF-I. It is known that only a fraction of DNA can be utilized as template for the in vitro transcription reaction even when naked DNA is used. Therefore, it is likely that the amount of DNA remodeled by TAF-I is not optimal for the in vitro transcription reaction. SWI/SNF and related complexes have been shown to stimulate chromatin remodeling depending on DNA binding factors, which would lead to the promoter specificity of these chromatin remodeling factors (1, 3-7). The activity of TAF-I seems independent of promoter-specific factors, since these factors are not required for TAF-I-mediated transcription activation (19).

NAP-I-like Nature of TAF-I-TAF-I is shown to be a structural and functional homologue of NAP-I, which facilitates the formation of the chromatin structure (20,21). The nucleosome assembly reaction by NAP-I is divided into at least two steps, i.e., histone binding and transfer of histones to DNA. In fact, TAF-I binds to core histones as previously reported (29), although its binding specificity is not known. First, we tested which histones interact with TAF-I. The rTAF-I protein, which has sites specifically phosphorylated by PKA, was prepared and used as a probe for the Far Western analysis. Phosphorylation of rTAF-I by PKA did not have any significant effect on the TAF-I activity (data not shown). Core histones, the histone H2A/H2B complex, and the histone H3/H4 complex were separately purified from HeLa cells (Fig. 5A) as described previously (25). To examine the binding of TAF-I to individual histones, histones separated by SDS-PAGE were transferred to a PVDF membrane and incubated with PKA-mediated 32P-labeled TAF-I (Fig. 5B). TAF-I is capable of interacting with all histones tested in the Far Western assay. Since the structure of each histone on the membrane after denaturation/renaturation would be different from that of the native proteins, it is important to test the affinity of TAF-I for each histone complex. We have, therefore, tested the binding of TAF-I to the histone H2A/H2B and H3/H4 complexes, which are present as a heterodimer and a heterotetramer, respectively, in a cell. Five picomoles each of core histones, H2A/H2B, H3/H4, and rTAF-Iβ as a positive control or BSA as a negative control were spotted on a PVDF membrane, and the membrane was subjected to the Far Western analysis (Fig. 5C).
The 32P-labeled TAF-I protein does not bind to BSA but binds to TAF-Iβ at a low but distinct level (lanes 5 and 4), since TAF-Iα and β form both hetero- and homo-oligomers in vivo and in vitro (18) (M. Okuwaki and K. Nagata, unpublished data). TAF-I is capable of interacting with core histones (lane 1), in good agreement with the results from the pull-down assay (29), the results shown in Fig. 5B, and the glycerol gradient assay (data not shown). As shown in Fig. 5C, the binding of TAF-I to the H3/H4 complex (lane 3) is greater than that to the H2A/H2B complex (lane 2). The binding activity of TAF-I to the H3/H4 complex is about 68% of that to core histones, while that to the H2A/H2B complex is only 13%. This result suggests that TAF-I preferentially interacts with H3/H4 rather than H2A/H2B. Then, the domain of TAF-I involved in the association with histones was determined. Wild type and mutant rTAF-I proteins shown in Fig. 5D were subjected to the Far Western analysis using core histones labeled by PKA as a probe. As shown in Fig. 5E, core histones bind to the wild type TAF-Iβ (lane 1). On the other hand, rTAF-Iβ(1-225), which completely lacks the acidic region, loses the histone binding activity (lane 2). rTAF-Iβ(26-277), which lacks the N-terminal region specific for TAF-Iβ, binds to histones to the same extent as the wild type TAF-Iβ (lane 3), and the histone binding activity of rTAF-Iβ(133-277) is about half that of the wild type (lane 4). These observations suggest that the acidic region of TAF-I is crucial for histone binding and that the N-terminal region between amino acid positions 26 and 132 would be more or less involved in this activity. Next, we analyzed the chromatin assembly activity of TAF-I using the gel mobility shift assay (Fig. 6). The end-labeled, 453-bp-long DNA fragment was mixed with core histones that had been pre-incubated in the presence or absence of TAF-I. TAF-I does not bind to naked DNA (lanes 1 and 2). When DNA is directly mixed with histones, large histone-DNA aggregates that fail to enter the gel and random complexes are observed (lane 3). In contrast, when core histones are pre-incubated with TAF-I, distinct DNA-protein complexes are formed (lanes 4-7) in a TAF-I dose-dependent manner. A fraction of the histone-DNA complex formed in the presence of TAF-I migrates to the same position as that of the nucleosome reconstituted on the same DNA fragment by the salt dialysis method (lane 8). The amount of this complex is increased when increasing amounts (50, 100, 200, and 500 ng for lanes 4, 5, 6, and 7, respectively) of TAF-I are mixed. When the structure of this DNA-protein complex was examined by the MNase digestion assay, the discrete 150-bp-long DNA fragment, a unit of nucleosomal DNA, was not clearly detected (data not shown). It is therefore presumed that the DNA-histone complex mediated by TAF-I would be loosely assembled and would not form the complete nucleosome structure. However, it is noted that TAF-I prevents the large DNA-histone aggregation (compare lanes 3-7).

DISCUSSION

We have described the effects of TAF-I on the reconstituted chromatin. TAF-I was originally identified from HeLa cells as the factor that stimulates replication and transcription from the adenovirus DNA-core protein complex in a chromatin-like structure (18,19). Here we have shown that TAF-I also stimulates transcription from the reconstituted chromatin consisting of core histones through the structural change of chromatin.
(Legend to Fig. 4, fragment: mononucleosome (lanes 2-6) or dinucleosome template (lanes 7-11) and HeLa nuclear extracts in the absence (lanes 1, 2, and 7) or presence of 100 ng (lanes 3 and 8), 200 ng (lanes 4 and 9), 500 ng (lanes 5 and 10), and 1000 ng (lanes 6 and 11) of TAF-I.)

Furthermore, it is shown that TAF-I suppresses the random aggregation between DNA and histones. TAF-I remodels the Ad core structure on the E1A promoter region, while it does not have such an effect on the MLP region of the Ad core (Fig. 1). In contrast, TAF-I is capable of stimulating the transcription and remodeling the structure of the reconstituted chromatin formed on the DNA fragment containing the MLP (Fig. 4). These results suggest that TAF-I would not have any DNA sequence specificity, although the MLP on the Ad core is not effectively remodeled by TAF-I. The contradiction of these results would be explained by the difference of the Ad core structure between the E1A promoter and MLP regions. The MLP region on the Ad core may be packed more compactly than the E1A promoter region and than the MLP in the reconstituted nucleosome. In fact, when the Ad core is used as a template for transcription, the stimulation of transcription from the MLP is dependent on the genome DNA replication (19). It is presumed that the MLP region on the Ad core would be much more insensitive to TAF-I than the E1A promoter region on the Ad core and the MLP in the reconstituted chromatin. Therefore, the structural change of the Ad core within the MLP region coupled with the genome DNA replication would be needed for the transcription from the MLP. In infected cells, the transcription from the MLP is activated dramatically after the onset of the genome DNA replication compared with its transcription activity in early phases of infection. The molecular basis of this transcription switching between early and late phases via replication has not been well clarified. The transcription activation from the MLP by the switch cannot be explained simply by the increase of the genome copy number. The Ad genome is complexed with viral basic core proteins in the virion and in the cell during early stages of infection, and newly synthesized DNA would be complexed with histones of host cells and assembled into the chromatin structure following the genome replication (16). When replication occurs, the Ad core structure is drastically disrupted, and trans-acting factors could easily gain access to the parental genome template and/or the newly synthesized DNA. It is suggested that the chromatin structure would be reconstituted on the newly synthesized DNA complexed with transcription factors. Workman et al. (30) demonstrated that the transcription from chromatin reconstituted on the DNA in the presence of MLTF/USF is active, while the transcription from the chromatin template reconstituted without MLTF/USF is repressed. The mechanism of transcription activation from the MLP by trans-acting chromatin remodeling factors is the other possibility, as we have demonstrated in this study. More precise studies are needed because the transcription level relieved by TAF-I is only 10-20% of that on the naked DNA.

(Legend to Fig. 5E, fragment: Far Western analysis using γ-32P-labeled core histones as a probe. Each TAF-I protein (1.5 pmol) was separated by 12.5% SDS-PAGE. Proteins were stained with CBB (left panel) or blotted to a PVDF membrane followed by incubation with radiolabeled core histones (right panel). The level of binding activity of core histones to TAF-I shown under panel E was quantitated by a Fujix BAS2000 bioimage analyzing system. The binding activity of wild-type TAF-Iβ is set as 1. Three independent experiments showed similar results, and the result shown here is a typical one.)
These observations raise the possibility that TAF-I may cooperate with other factors in remodeling the chromatin structure in vivo. SWI/SNF, NURF, and other factors that require ATP hydrolysis are suggested to function together with sequence-specific DNA-binding proteins. Recently, Mizuguchi et al. (31) demonstrated that NURF complexes remodel the chromatin structure and stimulate the transcription dependent on GAL4 fused to HSF. Because the process of transcription activation from the MLP involves the binding of promoter-specific transcription factors such as MLTF/USF, it is possible that the chromatin structure in the MLP region is remodeled by SWI/SNF or NURF-like complexes. Since the DNA fragment used in this study contains the binding sites described above, our system would be useful to further assign the roles of these factors. Furthermore, Orphanides et al. (32) reported that an NTP-hydrolysis-independent accessory factor, termed FACT, is needed for the elongation step of transcription from chromatin templates, although the detailed mechanism of the reaction mediated by FACT is unknown at present. The functional nature of FACT would be different from that of the factors that function as histone chaperones, including TAF-I, since FACT cannot remodel the chromatin structure. TAF-I also suppresses the random aggregation of histones, possibly through complex formation with core histones, as shown in Figs. 5 and 6. NAP-I and nucleoplasmin bind preferentially to histones H2A/H2B, whereas N1/N2 and CAF-I bind preferentially to histones H3/H4 (reviewed in Ref. 33). This study showed that TAF-I preferentially interacts with histones H3/H4 rather than H2A/H2B, although more precise experiments under physiological conditions are needed. Since there are a variety of acidic proteins that have histone binding activity (35), the function of each protein and its behavior within a cell should be carefully investigated. In addition, the histone-DNA complex formed by TAF-I would not be a complete chromatin structure. The glycerol gradient assay indicated that the Ad core treated with TAF-I forms the tertiary complex (Fig. 2). Since the mobility of the major nucleoprotein complex in the presence of TAF-I, as shown in lane 7 of Fig. 6, was similar to that of the reconstituted chromatin, TAF-I would not be present in the complex. However, it is possible that TAF-I is present in the complexes tailing toward the gel origin. It has been reported that NAP-I assembles the chromatin structure cooperatively with an ATP-dependent complex (34). It is possible that the nucleoprotein complex formed by TAF-I (Fig. 6) is loosely assembled and that another factor would be needed to form the complete chromatin structure. It has been shown that polyanions such as polyglutamic acid and bulk RNA are also able to prevent the random aggregation between DNA and histones and to mediate the histone transfer process (reviewed in Refs. 33 and 35). The internal deletion mutant of TAF-Iβ, which has the same long acidic tail as wild type TAF-Iβ, loses the stimulatory activity for the replication from the Ad core (20), suggesting that not only the acidity but also the proper conformation of the acidic region in TAF-I is required for its activity.
The function of TAF-I in vivo is unknown. It is reported that components of the SWI/SNF complex are enriched in the active chromatin and nuclear matrix fractions (36). Given that TAF-I was originally purified from cytoplasmic fractions, TAF-I seems to leak out easily from the nucleus, although TAF-I has a nuclear localization signal and is retained in the nucleus in part through its acidic region (29). It is an open question how the TAF-I activities for disruption of the chromatin structure and prevention of random aggregation between DNA and histones are controlled in a cell. Since the levels of TAF-I proteins do not fluctuate significantly through the cell cycle (29), a qualitative rather than a quantitative change would be needed. It is tentatively speculated that some modifications may operate in the regulation of the TAF-I activity. Recently, it has been reported that the Xenopus TAF-Iβ homologue specifically binds to a B-type cyclin (37). Human TAF-I is found to inhibit the activity of protein phosphatase 2A (38) through multi-protein complex formation (39). Based on these facts, the TAF-I activity is possibly regulated during the cell cycle by phosphorylation, and/or TAF-I may regulate phosphorylation/dephosphorylation. Along this line, TAF-Iβ has been shown to be phosphorylated in vivo in its N-terminal region (40), although the specific kinase(s) that phosphorylates TAF-I is unknown. From these observations, TAF-I would be a multi-functional protein.
NITS_Legal at SemEval-2023 Task 6: Rhetorical Roles Prediction of Indian Legal Documents via Sentence Sequence Labeling Approach

Legal documents are notorious for their complexity and domain-specific language, making them challenging for legal practitioners as well as non-experts to comprehend. To address this issue, the LegalEval 2023 track proposed several shared tasks, including the task of Rhetorical Roles Prediction (Task A). We participated as the NITS_Legal team in Task A and conducted exploratory experiments to improve our understanding of the task. Our results suggest that sequence context is crucial in performing rhetorical roles prediction. Given the lengthy nature of legal documents, we propose a BiLSTM-based sentence sequence labeling approach that uses a local context-incorporated dataset created from the original dataset. To better represent the sentences during training, we extract legal domain-specific sentence embeddings from a Legal BERT model. Our experimental findings emphasize the importance of considering local context instead of treating each sentence independently to achieve better performance in this task. Our approach has the potential to improve the accessibility and usability of legal documents.

Introduction

Legal case documents are typically quite lengthy, often spanning many pages, which can make it time-consuming for legal practitioners and academics to read them in their entirety. In many cases, these professionals may only need to access specific portions of a document, such as the facts of the case or the arguments put forward by the parties involved (Jain et al., 2021d). However, legal case documents are often unstructured and lack clear section headings, unlike research papers or books. This can make it difficult for readers to quickly and efficiently locate the information they need. The lack of structure in legal case documents can be particularly problematic for legal practitioners who are trying to build a strong legal argument. Without the ability to easily navigate and access relevant information, lawyers may struggle to build a coherent case that is based on sound legal principles and precedents. To address this challenge, researchers and practitioners are exploring a range of approaches to structuring legal case documents and making them more accessible to readers. This includes the development of new tools (e.g., https://tax-graph.273ventures.com/) and technologies that can automatically extract key information from legal documents, such as the parties involved, the legal issues at stake, and the arguments put forward by each side (Farzindar and Lapalme, 2004; Polsley et al., 2016). By leveraging such tools, legal practitioners and academics can more quickly and efficiently access the information they need to build strong legal arguments and advance the field of law. Rhetorical role labeling of sentences is a technique that can help legal practitioners quickly comprehend the structure and specific components of a legal case document (Teufel and Moens, 2002). This method involves identifying the semantic function associated with each sentence in the document. Formally, rhetorical role labeling refers to the process of classifying each sentence in a legal document based on its role in the overall document (Saravanan et al., 2008). By understanding the specific function of each sentence, legal practitioners can more easily identify the relevant portions of the document and extract the information they need to build a strong case.
Such upstream tasks are also helpful for performing downstream tasks such as summarization (Bhattacharya et al., 2019b). This can save valuable time and improve the efficiency of legal research and analysis. To facilitate research in this area, the organizers of LegalEval 2023 have proposed a task of Rhetorical Role Labeling. The dataset provided for this task includes 247 training document-summary pairs, 30 development document-summary pairs, and 50 documents for testing. The sentences present in these documents are classified into 13 different rhetorical role classes: Preamble (PREAMBLE), Facts (FAC), Ruling By Lower Court (RLC), Issues (ISSUE), Argument by Petitioner (ARG PETITIONER), Argument by Respondent (ARG RESPONDENT), Analysis (ANALYSIS), Statute (STA), Precedent Relied (PRE RELIED), Precedent Not Relied (PRE NOT RELIED), Ratio of the decision (Ratio), Ruling By Present Court (RPC), and None (NON). For more details about the task, please refer to the overview paper (Modi et al., 2023). In this work, a detailed experimental study is conducted to solve the problem of rhetorical role labeling, considering both sentence-level and sentence-sequence-level classification approaches. Moreover, the utilization of legal domain-specific sentence embeddings is also considered in this work so that better model training is possible. From the experimental study, it has been identified that domain-specific embeddings along with the local context of sentence sequences are important for achieving improved performance in this task. Such an approach can significantly improve the comprehension of lengthy legal documents for legal practitioners as well as other readers. Our code is publicly available. The rest of the paper is organized as follows: Section 2 presents the related works for the rhetorical roles prediction task. Section 3 describes our method for performing rhetorical roles prediction. Section 4 presents the experimental results along with a detailed discussion. Finally, Section 5 concludes the paper with future research directions.

Related Works

Legal documents play a critical role in our society, as they provide the foundation for laws and regulations that govern our behavior. However, these documents can be difficult to read and understand, even for legal experts. In recent years, there has been growing interest in legal-specific tasks such as rhetorical labeling (Bhattacharya et al., 2019b; Malik et al., 2021a), legal document summarization (Jain et al., 2020, 2021b,c, 2022, 2023a), court judgment prediction (Chalkidis et al., 2020; Malik et al., 2021b; Niklaus et al., 2022), and so on. Several comparative analyses have been performed using legal documents to understand the behavior of these lengthy documents (Bhattacharya et al., 2019a; Jain et al., 2021a; Satwick Gupta et al., 2022). Due to the progress of deep learning techniques, researchers have started employing these methods to analyze Indian court judgments for rhetorical labeling tasks as well. Farzindar and Lapalme (2004) as well as Hachey and Grover (2006) used the concept of rhetorical roles to generate summaries of legal texts. This approach involves identifying the various roles played by different segments of the text.
By understanding these roles, the researchers were able to create condensed summaries that captured the main points of the text while maintaining its overall structure and coherence (Saravanan et al., 2008). Recently, Bhattacharya et al. (2019b) proposed a BiLSTM-CRF model to automatically assign rhetorical roles to the sentences of an Indian case judgment document. In another work, Malik et al. (2021a) constructed a corpus of rhetorical roles (RR) and annotated it with 13 different detailed roles. Furthermore, they proposed a multitask learning pipeline for identifying rhetorical roles. In another work, Kalamkar et al. (2022) created a larger rhetorical role dataset compared to the dataset created by Malik et al. (2021a). The authors created a baseline system using the SciBERT-HSLN architecture (Brack et al., 2021). In this work also, we utilize Indian Legal BERT (Paul et al., 2022) to create the embeddings of sentences in a sequence preserving the local context, followed by feeding them to a BiLSTM-based model for identifying the rhetorical roles.

System Description

The task of rhetorical role labeling can be modeled in several different ways. Keeping in mind the specific characteristics of legal documents, we set out in this work to find the most appropriate approach for solving the rhetorical role labeling problem. This section describes the different methods explored in this work, along with the specific implementation details.

Document-level dataset

Local context-level dataset

In this work, our baseline system employs a common technique used in natural language processing (NLP) tasks, where a pre-trained language model is used to encode text data into dense representations, followed by classification layers. More specifically, the Indian Legal BERT (Paul et al., 2022) model is used in this work to encode the sentences into 768-dimensional contextual vectors, which capture the semantic and syntactic information of the text. After encoding the sentences, a multilayer perceptron (MLP) model is employed to classify the encoded sentences into different categories. The MLP model consists of multiple dense layers followed by dropout layers, which help prevent overfitting by randomly dropping out nodes during training. The final softmax layer classifies the encoded sentence into different categories based on the probability distribution. It is important to note here that this approach performs individual sentence-level classification and does not make use of any context information. This approach is referred to as the "Baseline" approach in all the experimental analysis in the rest of the paper.

Sentence sequence labeling approach

The primary idea for model development proposed in this work is a sentence-sequence classification approach, where a local-context-based dataset is built for model training. This dataset is built with a sentence window size of 5, with the aim of training a sentence-sequence classification model that can learn from its local context. The window size is restricted to 5 sentences due to resource constraints as well as the extremely lengthy nature of legal documents. The dataset building process is pictorially depicted in Fig. 1, where we generate training samples consisting of a set of 5 sentence embeddings and their corresponding labels. In order to ensure the proper inclusion of each sentence in the dataset regardless of its position in the document, we also perform zero-vector padding (represented as s_0 in Fig. 1); a sketch of this dataset construction is given below.
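A minimal Python sketch of this local-context dataset construction follows, assuming the per-sentence Legal BERT embeddings are already available; the function names, the padding scheme, and the dummy label value are our assumptions, not the released code.

```python
import numpy as np

WINDOW, DIM = 5, 768      # window size and Legal BERT embedding size
PAD_LABEL = 13            # assumed dummy label for the zero-vector s_0

def build_windows(embeddings, labels):
    """Turn one document (per-sentence embeddings and integer labels)
    into 5-sentence samples, zero-padding both ends so that every
    sentence can occupy every window position."""
    pad = WINDOW - 1
    embs = [np.zeros(DIM)] * pad + list(embeddings) + [np.zeros(DIM)] * pad
    labs = [PAD_LABEL] * pad + list(labels) + [PAD_LABEL] * pad
    return [(np.stack(embs[i:i + WINDOW]), np.array(labs[i:i + WINDOW]))
            for i in range(len(embs) - WINDOW + 1)]

def recover_label(window_pred, mode="mid"):
    # M_SS-Mid reads the middle position of the predicted label
    # sequence; M_SS-End reads the last position.
    return window_pred[WINDOW // 2] if mode == "mid" else window_pred[-1]
```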
A BiLSTM-based deep learning model was trained on this dataset; the architecture is depicted in Fig. 2. During inference, we consider two different ways of recovering the sentence-level labels: considering the middle sentence in an input sequence as the sentence of interest for recording its label, or considering the last sentence as the sentence of interest. These two approaches are referred to as "M_SS-Mid" and "M_SS-End", respectively.

Oversampling approach

One important observation from the Task A dataset is that the sentence labels exhibit a high degree of class imbalance. Such imbalanced data often causes deep learning models to ignore the minority classes and perform poorly during inference. The level of class imbalance across the dataset is depicted pictorially in Fig. 3, with the normalized frequencies of each individual class present in the dataset. In order to deal with this issue, one straightforward idea is to employ class-weighted oversampling of minority-class samples. The proposed oversampling technique for dealing with the class-imbalance problem is as follows. Firstly, we keep only one copy of any 5-sentence sample where at least one of the labels is a dummy label for a zero-vector sentence (s_0). Secondly, for all other types of samples, we utilize the normalized frequencies of each class to calculate the exact number of times we will oversample them. This calculation is described below: let f_1, f_2, ..., f_13 be the normalized frequencies of the 13 classes. For the k-th sample in the training set, we initially have five labels [y_1^k, y_2^k, y_3^k, y_4^k, y_5^k]. Based on these labels, we compute the oversampling rate as depicted in Equation (1), where N_k denotes the number of times the k-th sample is to be oversampled, α is a hyperparameter chosen experimentally as 5, and the summation term adds up the individual normalized class frequencies of the sentences present in that sample. Such a calculation ensures that samples with majority-class sentences get a smaller amount of oversampling, whereas samples with minority-class sentences get oversampled more times. This oversampling technique gives rise to two new variants of the proposed approach discussed in subsection 3.2, which are referred to as "M_SSO-Mid" and "M_SSO-End" in the rest of the paper.

Experimental setup

The specific architecture used for the Baseline model includes a dense layer of 2048 nodes followed by a dropout layer with 0.6 probability, then another dense layer of 1024 nodes followed by another identical dropout layer, and finally a softmax layer which classifies a given sentence into the different categories. An Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 is used. Sparse categorical cross-entropy is used for calculating the loss, since the output labels are represented as integer values. For the "M_SS-Mid", "M_SS-End", "M_SSO-Mid", and "M_SSO-End" models, we use two BiLSTM layers of 128 nodes each, with a dropout layer with 0.6 probability between them, followed by a time-distributed dense layer. The learning rate, optimizer, and loss function are the same as for the Baseline model. All models are run for 500 epochs with an early stopping patience of 10.
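The two architectures described above can be sketched with tf.keras as follows. The hyperparameters mirror the ones stated in the text, while the ReLU activations and the extra class index for the dummy pad label are our assumptions; and since Equation (1) is not reproduced above, the oversampling helper assumes an inverse-proportional form consistent with the stated behavior, which is also an assumption.

```python
import math
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 14  # 13 rhetorical roles + assumed dummy pad label

def baseline_model(dim=768):
    # Sentence-level MLP over a single Legal BERT embedding.
    return models.Sequential([
        layers.Dense(2048, activation="relu", input_shape=(dim,)),
        layers.Dropout(0.6),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(0.6),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def sequence_model(window=5, dim=768):
    # Two BiLSTM(128) layers with dropout in between, then a per-position
    # (time-distributed) softmax over the 5-sentence window.
    return models.Sequential([
        layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                             input_shape=(window, dim)),
        layers.Dropout(0.6),
        layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(NUM_CLASSES, activation="softmax")),
    ])

def oversample_count(sample_labels, freqs, alpha=5.0):
    # Assumed inverse-proportional reading of Eq. (1): samples whose five
    # sentences belong to frequent classes are duplicated fewer times.
    return max(1, math.ceil(alpha / sum(freqs[y] for y in sample_labels)))

model = sequence_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```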
Results and discussion

In this section, we present the results of our experiments on the development dataset provided by the organizers, along with a detailed discussion of the experimental observations. Specifically, Tables 1, 2, and 3 report the class-wise precision, recall, and F1 scores achieved by our models on the development set. Additionally, we provide a visual representation of the overall weighted F1 scores attained by our various models on the development set in Fig. 4. Considering the class-wise Precision scores presented in Table 1, we can make certain key observations. The "M_SSO-Mid" model is able to obtain the best results for five classes in total; however, the performance of this model is not as good on the other class sentences. On the other hand, the "M_SS-Mid" model finds the best Precision values for three classes, with the results for the other classes also being quite decent. One key observation is that the oversampling-based method "M_SSO-Mid" is able to obtain a non-zero Precision for the "PRE NOT RELIED" class, which is a minority class containing the least number of samples in the entire dataset. The class-wise Recall scores for the different proposed approaches are shown in Table 2. A trend similar to Table 1 can be observed for the Recall scores as well, where the "M_SS-Mid" model achieves improved scores across most of the classes; however, the maximum number of best Recall scores is obtained by the oversampling-based approach "M_SSO-Mid". Moreover, the performance of "M_SSO-Mid" on the minority classes is better than that of the non-oversampling-based approach. Class-wise F1 scores for the proposed approaches are depicted in Table 3, which demonstrates the overall high performance of the "M_SS-Mid" model with five best F1 scores. However, it falls behind the oversampling-based approach "M_SSO-Mid" on the minority classes. From the overall weighted F1 scores shown in Fig. 4, we can observe that the "M_SS-Mid" model achieves the best results, and its oversampling-based variant obtains the second best scores. The findings from the experimental study can be summarized as follows:

• Considering the task of rhetorical role labeling as an individual sentence classification task is not an appropriate approach, as it loses a lot of context information. The local-context-based sentence sequence labeling approach is able to outperform the single-sentence classification approach in almost all cases.

• Considering the sentence of importance in the middle of the local context is better than considering it at the end of the context, as the "M_SS-Mid" model outperforms the "M_SS-End" model in almost all scenarios. This is due to the fact that the sentence in the middle has access to both its previous and next sentence context information, whereas the sentence at the end only has access to its previous sentences and their labels.

• Although oversampling-based approaches do not outperform the non-oversampling-based approaches when the overall performance is considered, they are still quite important, especially for the minority class sentences. Moreover, based on the weighted average scores from Tables 1, 2, and 3, it is quite evident that the oversampling-based approach "M_SSO-Mid" consistently gives impressive precision and recall results.
Conclusion

The organizers of LegalEval 2023 introduced a Rhetorical Roles Prediction task (Task A) as part of their competition, owing to the reputation of Indian case judgments for being lengthy and unstructured. Our team participated in this task and achieved a 71.43% F1 score on the testing data, as reported on the CodaLab leaderboard. We also conducted exploratory experiments and discovered that context is a significant factor in accurately identifying the rhetorical roles. Also, when it comes to roles that are very rare across such documents, oversampling-based model training can be quite helpful. As part of future work, an ensembling-based approach can be explored that combines the predictive power of both the oversampling- and non-oversampling-based approaches. Such an approach has the potential to achieve even higher quality rhetorical role labeling.
Targeting of surface alpha-enolase inhibits the invasiveness of pancreatic cancer cells. Pancreatic Ductal Adenocarcinoma (PDAC) is a highly aggressive malignancy characterized by rapid progression, invasiveness and resistance to treatment. We have previously demonstrated that most PDAC patients have circulating antibodies against the glycolytic enzyme alpha-enolase (ENO1), which correlates with a better response to therapy and survival. ENO1 is a metabolic enzyme, also expressed on the cell surface where it acts as a plasminogen receptor. ENO1 plays a crucial role in cell invasion and metastasis by promoting plasminogen activation into plasmin, a serine protease involved in extracellular matrix degradation. The aim of this study was to investigate the role of ENO1 in PDAC cell invasion. We observed that ENO1 was expressed on the cell surface of most PDAC cell lines. Mouse anti-human ENO1 monoclonal antibodies inhibited plasminogen-dependent invasion of human PDAC cells, as well as their metastatic spread in immunosuppressed mice. Notably, a single administration of an Adeno-Associated Virus (AAV) expressing cDNA coding for the 72/1 anti-ENO1 mAb reduced the number of lung metastases in immunosuppressed mice injected with PDAC cells. Overall, these data indicate that ENO1 is involved in PDAC cell invasion, and that administration of an anti-ENO1 mAb can be exploited as a novel therapeutic option to increase the survival of metastatic PDAC patients.

INTRODUCTION

Pancreatic Ductal Adenocarcinoma (PDAC) is the fourth leading cause of cancer mortality in developed countries. Despite the available treatments, PDAC has the worst prognosis of all major malignancies, with a 5-year survival rate of 6% and a median survival of 6 months after diagnosis [1,2]. The high mortality rate associated with PDAC is almost equal to the incidence rate, and is caused by the high frequency of metastatic disease found at diagnosis [3,4]. The plasminogen system is involved in tumor growth, invasion and metastasis [5][6][7]. Urokinase (uPA) and tissue (tPA) plasminogen activators released from cancer cells catalyze the proteolytic conversion of plasminogen to plasmin, leading to degradation of the extracellular matrix (ECM) and thus facilitating cancer cell invasion [5][6][7][8]. The uPA receptor (uPAR) is a cell membrane-anchored protein which aids the accumulation of plasminogen at the cell surface [5]. Binding proteins for plasminogen include alpha-enolase (ENO1), Annexin 2 (ANX2) and Cytokeratin 8 (CK8) [9]. Among these, ENO1 has been classified as a pancreatic cancer-associated antigen, as it is overexpressed in PDAC and induces both humoral and T cell-specific responses in patients [10,11]. In this study, a multiple approach was adopted to investigate the role of ENO1 in the invasion and metastasis of PDAC, and to develop possible therapeutic options, based on ENO1 regulation, aimed at counteracting the invasiveness of this tumor. We evaluated: i) the expression of ENO1, uPA and uPAR, and plasminogen-induced migration, in a panel of eight PDAC cell lines; ii) the in vitro and in vivo effects of anti-ENO1 monoclonal antibodies (mAbs); iii) the in vitro and in vivo effects of ENO1 silencing or mutation of its plasminogen-binding site; and iv) the effect of administering a recombinant adeno-associated viral vector (AAVV) for the expression of a complete anti-ENO1 mAb on in vivo metastatization.
Effect of the blockade of ENO1 on plasminogen-dependent invasion of PDAC cells

In the presence of plasminogen, CFPAC-1 cells were strongly invasive compared to those in the absence of plasminogen (Fig. S1a and b). No increase in invasion was observed in the presence of plasminogen for any of the other cell lines (Fig. S1a, b). As the CFPAC-1 cells produced uPA and expressed both surface uPAR and ENO1, they were able to invade in response to plasminogen. Nevertheless, as TGF-β has been shown to up-regulate both uPA and uPAR [12], its effect on plasminogen-dependent invasion was evaluated. In ENO1-expressing T3M4 and L3.6pl cells, TGF-β increased the expression of uPAR and uPA (Fig. S1c) and rendered them responsive to plasminogen-dependent invasion (Fig. S1d and Table S1). In the presence of anti-ENO1 mAb, the plasminogen-dependent invasiveness of both CFPAC-1 (Fig. 2a) and TGF-β-treated T3M4 (Fig. 2b) cells was significantly reduced. The extent of this reduction was similar to that induced in CFPAC-1 cells by the plasminogen system inhibitor EACA (Fig. 2a). By contrast, BxPC-3 cells, which expressed very low levels of ENO1, did not invade in the presence of plasminogen, and were not affected by the addition of anti-ENO1 mAb (Fig. 2a lower panel). These results were also confirmed using the Oris™-FLEX Platypus Kit, in which cells were completely plunged into Matrigel and their invasion was evaluated in the absence of chemotactic stimuli, by measuring their ability to fill a central hole in the well (Fig. 2c). In the presence of plasminogen, a similar growth pattern was observed when PDAC cells were cultured with anti-ENO1 mAb or isotype-control Ab (Fig. S2). This ruled out the possibility that the inhibitory effect of the anti-ENO1 mAb on migration is due to interference with the growth of tumor cells.

Effect of mutation of ENO1 plasminogen binding sites on the plasminogen-dependent invasion of PDAC cells

ENO1 expression was silenced in CFPAC-1 cells with a lentivirus delivering an shRNA targeting the ENO1 3'UTR (shENO1). A scrambled shRNA (shCTRL) was used as a control. Both ENO1 mRNA (Fig. S3a upper panel) and protein levels (Fig. S3a lower panel), as well as plasminogen-induced invasion (Fig. S3b), were efficiently reduced after silencing in CFPAC-1 shENO1 cells. Infection of CFPAC-1 cells with a second shRNA targeting the ENO1 CDS region (shENO1#2) gave similar results (Fig. S3). All subsequent experiments were carried out using CFPAC-1 shENO1 cells. FACS analysis revealed that uPA and uPAR expression was not modified by ENO1 silencing in CFPAC-1 cells (Fig. S3c). [Figure 1 legend fragment: intracellular ENO1 expression was evaluated by Western blot on whole-cell lysates of all PDAC cell lines with the anti-ENO1 72/1 mAb; results were normalized to β-Actin; a representative of three independent experiments is shown.] To assess the contribution of ENO1 to PDAC invasion, shENO1 cells were transfected with a mutated form of ENO1 (shENO1+TM), in which three lysines of the plasminogen binding site at the C-terminus [13] were substituted with three arginines, resulting in a non-functional plasminogen binding site (Fig. 3a). shENO1 cells transfected with a wild-type full-length exogenous ENO1 (shENO1+WT) or empty vector (shENO1+Empty) were used as controls. WB analysis showed that shENO1+WT or shENO1+TM rescued ENO1 protein levels (Fig. 3b upper panel).
Flow cytometric analysis showed a lack of ENO1 surface expression in shENO1 and shENO1+Empty cells (Fig. 3b lower panel and not shown), whereas ENO1 surface expression was rescued in shENO1+WT and shENO1+TM cells (Fig. 3b lower panel). These data demonstrated that the triple mutation in the plasminogen binding site abolished the ability of ENO1 to bind plasminogen without affecting its cell surface expression. Cells were then tested for invasive capacity in response to plasminogen. Control shENO1 CFPAC-1 cells failed to invade through the Matrigel, whereas shENO1+WT cells recovered this ability, although to a lesser extent compared to CFPAC-1 shCTRL cells (Fig. 3c). Notably, shENO1+TM cells showed significantly reduced invasion in response to plasminogen compared to shENO1+WT cells (Fig. 3c). To confirm the contribution of ENO1 to in vivo invasion and metastasis, NSG immunocompromised mice were injected i.v. with shENO1+TM, shENO1+WT or shENO1+Empty CFPAC-1 cells. On day 28, post-mortem observations confirmed a significantly reduced metastatic area in the lungs of mice injected with shENO1+Empty or shENO1+TM cells versus mice injected with shENO1+WT CFPAC-1 cells (Fig. 3d). [Figure 2 legend fragment: CFPAC-1 (lower panel) and T3M4 (b) cells were placed on Matrigel-coated transwell filters, and plasminogen (1 μg/ml or 10 μg/ml), anti-ENO1 mAb 72/1 (50 μg/ml) or an isotype-matched IgG1 mAb (50 μg/ml), EACA (50 mM) and TGF-β (10 ng/ml) were added as appropriate. Data are reported as mean ± SEM of Optical Density units (OD); each condition was tested in triplicate. (c) Effect of the anti-ENO1 72/1 mAb on migration in Matrigel (Oris™ Platypus kit) of CFPAC-1 (upper panel) and BxPC3 (lower panel) cells cultured with 50 μg/ml of anti-ENO1 or IgG1 control mAb, with or without plasminogen (40 µg/ml). Images were taken at 5× magnification. A representative of three independent experiments is shown. *p<0.05; **p<0.01; ***p<0.001.]

Anti-ENO1 mAb blocks liver metastasis in an orthotopic pancreatic tumor model

To better characterize the role of ENO1 in in vivo tumor spreading, PANC-1/P cells with low expression of surface ENO1 were orthotopically injected into the pancreases of NOD-SCID mice. Metastatic cells were harvested from livers and cultured, and were designated PANC-1/M. Western blot analysis of total ENO1 protein showed that its expression level in PANC-1/P cells was significantly up-regulated compared to that in normal-like human pancreatic duct epithelial cells, HPDE (Fig. 4a left). Conversely, although the total amount of ENO1 was not increased in metastatic PANC-1/M cells, its surface distribution was clearly augmented in these cells (Fig. 4a right), confirming that surface ENO1 may play a crucial role in tumor metastasis. To confirm the potential therapeutic use of an anti-ENO1 Ab for metastatic PDAC, experiments were conducted with a different anti-ENO1 mAb, namely E10A. The invasion of metastatic PANC-1/M cells treated with the anti-ENO1 mAb was significantly suppressed in a dose-dependent manner (Fig. 4b). In addition, these cells were orthotopically injected into the pancreases of NOD-SCID mice, followed by i.v. administration of the anti-ENO1 mAb or its isotype mAb at 2 h and 24 h post-inoculation. Metastatic tumor cells were fluorescently tracked by the IVIS system, showing that, in the orthotopic model, the liver was the major target of PANC-1/M tumor metastasis, although a few metastatic colonies were detected in lungs and spleens (Fig. 4c upper left).
Although two control mice died 2 weeks prior to the completion of the experiments, the number and tumor volume of visible metastatic colonies in the livers of mice treated with the anti-ENO1 mAb were markedly decreased compared to mice treated with the control mAb (Fig. 4c upper right). [Figure 3 legend fragment: bars represent the percentage of tumor area, calculated as (tumor area / total area) × 100. Data are reported as mean ± SEM of five mice per group. *p<0.05; **p<0.01; ***p<0.001. Statistical analysis with respect to CFPAC-1 shENO1+Empty (*), CFPAC-1 shENO1+WT ($) or CFPAC-1 shCTRL (§).] This observation was further confirmed by directly weighing each organ. Again, treatment with the anti-ENO1 mAb substantially reduced metastatic tumor masses in livers (Fig. 4c lower panel). The average weights of the livers of the anti-ENO1 mAb-treated mice were comparable to those of age-matched un-injected mice.

Anti-ENO1 mAb reduces the in vivo growth and metastasis of CFPAC-1 cells

To prove the therapeutic effect of the anti-ENO1 mAb, SCID-beige mice were injected i.v. with luciferase-expressing CFPAC-1 cells, and treated biweekly until sacrifice with anti-ENO1 mAb or an isotype-matched control mAb. Notably, CFPAC-1 cells formed large masses at the lymph node level prior to lung tumors. Anti-ENO1 mAb treatment led to a reduced number of tumor masses compared to control treatment. This effect was most evident from day 14 onwards, and the difference between anti-ENO1 mAb-treated mice and control mice was even greater on day 28 (Fig. 5a left). Post-mortem observations confirmed a reduced number of tumor masses in anti-ENO1 mAb-treated mice compared to control mice (Fig. 5a right). Only a few mice developed lung metastasis (4 out of 15 mice), as confirmed by hematoxylin-eosin stained lung sections (data not shown). This number of mice was too small to appreciate significant differences in the number and size of lung metastases between treated and control groups. An additional experiment was performed using NSG mice. Mice were pre-treated for 3 days with anti-ENO1 or control Abs prior to tumor challenge. At day 0, NSG mice were injected i.v. with luciferase-expressing CFPAC-1 cells, and treated biweekly with the mAbs until sacrifice. As early as day 14 after tumor injection, anti-ENO1 mAb-treated mice emitted a significantly reduced number of photons compared to the control group, as evaluated by IVIS Spectrum technology (Fig. 5b).

An AAVV strongly increases the anti-tumor effect of the anti-ENO1 mAb

To further enhance the effect of the anti-ENO1 mAb through continuous production of the antibody, NSG mice were injected with 1×10^11 genocopies of an Adeno-Associated Viral (AAV) vector expressing the anti-ENO1 72/1 mAb, or control AAV, into the femoral muscle, 7 days prior to i.v. CFPAC-1 cell injection (Fig. 5c upper panel). On days -7, 7 and 28, blood was taken from the mice and analyzed for the presence of anti-ENO1 mAb. A progressively increasing concentration of anti-ENO1 mAb was observed, showing that the AAVV facilitated a continuous, long-lasting and sustained production of circulating anti-ENO1 mAb (Fig. 5c lower panel). On day 28, mice injected with the AAVV expressing anti-ENO1 mAb showed a significant decrease in lung metastases compared to control mice (Fig. 5d).

DISCUSSION

Evidence from experimental models suggests that cell-associated plasminogen and its activators play a central role in tumor invasion [5,[14][15][16].
Numerous extracellular proteins have been identified as plasminogen receptors, including ENO1, ANX2 and CK8 [9], which are often de-regulated in cancer. [Figure 4 legend fragment: left panel, Western blot analysis of total ENO1 expression in HPDE, PANC-1/P and PANC-1/M cells using an in-house purified rabbit antiserum against ENO1; β-Actin was used as a loading control. Right panel, flow-cytometric analysis of cell-surface ENO1 in HPDE, PANC-1/P and PANC-1/M cells using anti-ENO1 E10A (empty area) or its isotype-control Ab (black area). (b) Dose-dependent inhibition of cell invasion by the anti-ENO1 E10A mAb in PANC-1/M cells. Cells that degraded tumor-associated matrix and migrated through the membrane were stained and quantified with the ImageJ image-processing software. One representative pair is shown in the upper panel. Data are expressed as means ± SEM and represent the fold-decrease in the invasive ability of cells treated with different doses of the anti-ENO1 E10A mAb, compared with cells treated with the control antibody (lower panel). (c) Blockade of liver metastasis by the anti-ENO1 E10A mAb in an orthotopic pancreatic tumor model. Intravenous administration of the anti-ENO1 E10A (250 μg/mouse) or control mAbs was performed at 2 h and 24 h after tumor inoculation. The tissue distribution of the luciferase-expressing cells was monitored using the IVIS imaging system every 2 weeks for a total of 6 weeks, after which mice were sacrificed. Organs from one pair of representative mice treated with the anti-ENO1 or control mAbs were photographed (upper left); metastatic tumors in different organs were visualized by exposure for 10 s and 1 s, respectively, in the luminescent mode of the IVIS system. Upper right: after sacrifice, the number and volume of metastatic tumor nodules in livers of mice treated with the anti-ENO1 (black) or control (gray) mAbs were quantified; data are expressed as mean ± SEM per group. Lower panel: intact individual organs from mice treated with the anti-ENO1 (black) or control (gray) mAbs were weighed, as indicated at the bottom; quantitative data are presented in the histograms. Normal livers from age-matched, untreated mice (white) served as healthy controls; * and ** indicate P<0.05 and P<0.01, respectively. Five mice per group were used for the in vivo experiment.]

In this study, we identified ENO1 on the surface of human PDAC cells. Notably, among the eight cell lines tested, ENO1 was expressed at intermediate or high levels in metastatic cell lines (Hs766T [17], T3M4 [18], CFPAC-1 [19], L3.6pl [20]), and was absent or expressed at lower levels in primary tumor-derived cell lines (BxPC3 [21], PANC-1 [22], PT45 [23], Mia-PaCa2 [24]). Ex vivo analysis of PANC-1 cells from a liver metastasis showed that the surface expression of ENO1 was higher compared to the parental cells from the primary tumor. This suggests that spreading and invasion of PDAC cells is strictly related to high cell surface expression of ENO1, which, in turn, facilitates the binding of elevated concentrations of plasminogen at the cell surface. Plasminogen bound at the cell surface is converted to plasmin, increasing the ability of PDAC cells to degrade the ECM. However, the mechanism by which ENO1 is expressed at the cell surface is still unknown.
Hypoxia, a condition that characterizes tumor growth in vivo, up-regulates ENO1 expression [25][26][27]; therefore, we cannot rule out that the surface expression of ENO1 results from a general increase in ENO1 transcription and translation. However, as ENO1 is phosphorylated, methylated and acetylated [26,28] in PDAC cells, the role of these post-translational modifications in the regulation of the surface localization of ENO1 in PDAC cells should also be considered. In this study, we demonstrated that the in vitro and in vivo blockade of ENO1 by treatment with two different specific mAbs reduced the migration and invasion capacity of PDAC cells. Indeed, transduction of wild-type ENO1, but not of ENO1 carrying the mutated plasminogen-binding site, restored the plasminogen-dependent invasion of CFPAC-1 cells that had been suppressed by ENO1 silencing. Taken together, these data strongly support the notion that ENO1 is involved in the plasminogen-dependent invasion of PDAC. In vitro, CFPAC-1 cells displayed an invasive ability in the presence of plasminogen, which could be ascribed to the endogenous expression of uPA and uPAR, as well as ENO1. Moreover, after exposure to TGF-β, ENO1-expressing T3M4 and L3.6pl cells were induced to up-regulate uPA and uPAR and to invade in response to plasminogen (Table S1). As the invasion of all these cell lines was inhibited by the anti-ENO1 mAb, this implies that ENO1 regulates the PDAC metastatic process in vivo, where TGF-β is available [12,[29][30][31][32][33]. Metastatic PANC-1/M cells derived from a liver metastasis, following orthotopic injection of PANC-1 cells into the pancreas, expressed higher levels of surface ENO1 compared to the primary tumor cells, suggesting a role for surface ENO1 in facilitating tumor spreading. In vivo, blockade of ENO1 by two specific anti-ENO1 mAbs reduced tumor spreading in different mouse xenograft tumor models. In the SCID-beige tumor model, we observed a particular pattern of tumor dissemination, as cells grew in lymph nodes without forming organ metastases. Since PDAC cells express CXCR4, they can migrate towards the gradient of CXCL12 released by lymphoid organs and localize in lymph nodes [34]. By contrast, NSG null mice injected with CFPAC-1 cells did not develop lymph-node masses, probably because they lack functional lymph nodes, but displayed classical lung tumor spreading. In both cases, specific treatment with the anti-ENO1 mAb was effective in inhibiting tumor growth. Finally, in NOD-SCID mice orthotopically injected with PANC-1/M cells, the anti-ENO1 mAb was effective in inhibiting the spread of liver metastases. The innovative use of AAV technology to increase the efficacy of anti-ENO1 mAb treatment in mice is noteworthy, resulting in dramatic inhibition of metastasis in the NSG model. AAVV is non-pathogenic and the recombinant vector retains none of the viral genes, making it a safer alternative to live bacterial strains. The lack of the Rep gene in the AAV vector also limits its integration potential, and the vast majority of AAV vectors are thought to remain episomal [35]. This strategy has many advantages, namely resistance to the effects of pH; a localized or broad cellular tropism depending on the AAVV serotype; efficient gene transfer; persistence of gene expression; and low toxicity in vivo. Moreover, AAV-based therapeutic strategies have been tested in humans, and several clinical trials have been successful in terms of initial safety and proof of concept [36].
Increased expression of ENO1 has been observed in many tumors [9,26,37,38], together with its ability to induce an immune response both in vitro and in vivo [10,[39][40][41][42]. Recently, we demonstrated that ENO1 is expressed on the surface of lung tumor cells and promotes ECM degradation and invasion through a plasminogen-dependent process [16]. Our findings strongly suggest that surface ENO1 is involved in the invasion of PDAC cells, and that blockade of the ENO1/plasminogen interaction, for example by using an AAVV-delivered anti-ENO1 mAb, could provide a new therapeutic approach for the treatment of metastatic PDAC patients.

Western blot analysis

PDAC cells (1×10^7) from the various cell lines were harvested, lysed, resolved and transferred to nitrocellulose membranes, as previously described [40]. Membranes were incubated for 1 h at RT with the anti-ENO1 72/1 mAb or a rabbit polyclonal anti-β-Actin antibody (Sigma-Aldrich), at dilutions of 1:2000 in Tween-Tris-Buffered Saline (TTBS), and then probed with a horseradish peroxidase (HRP)-conjugated anti-mouse IgG (Santa Cruz) or an HRP-conjugated goat anti-rabbit Ig secondary antibody (Sigma-Aldrich) at dilutions of 1:2000. For Western blot analysis of ENO1 in PANC-1/P, PANC-1/M and HPDE cells, membranes were probed with an in-house purified rabbit antiserum against ENO1 or with a mouse antibody specific to β-Actin (Sigma, St. Louis, MO, USA) as a protein loading control. Immunocomplexes were detected by probing with appropriate secondary antibodies conjugated with HRP (Jackson ImmunoResearch), and were visualized using the SuperSignal detection system (Thermo Fisher, Waltham, MA, USA).

Silencing of ENO1 in the PDAC cell line CFPAC-1

Two Mission short hairpin RNAs (shRNA), one targeting the 3'UTR of the gene coding for ENO1 (TRCN0000029324) and one targeting the CDS region (TRCN0000029327), were used to transform bacteria (Sigma-Aldrich, Milan, Italy); plasmids were purified with the PureLink HiPure Plasmid Maxiprep Kit (LifeTechnologies). Lentiviruses were produced by co-transfecting 293T packaging cells (Clontech by Diatech Lab Line Srl, Jesi, AN, Italy) with the pLKO.1 puro vector containing the shRNA and the helper vectors pCMVΔ8.74 (Addgene, Cambridge, MA, USA) and pVSV-G (Clontech), using the calcium phosphate method. Lentiviruses collected 24 h after transfection were used for transduction of the CFPAC-1 cell line supplemented with 8 μg/ml polybrene (Sigma-Aldrich) and, after 48 h of infection, cells were selected for stable silencing using 2 µg/ml Puromycin (Sigma-Aldrich). For quantitative mRNA expression analysis, a polymerase chain reaction (PCR) was carried out with total cDNA and the SYBR Green PCR Master Mix (LifeTechnologies), with a two-step amplification protocol. mRNA expression of target genes was normalized using the mRNA level of β-Actin.

Plasmid construction and mutagenesis

Total RNA from CFPAC-1 cell lines was extracted using the RNeasy Mini kit (Qiagen, Milan, Italy). RNA concentration and purity were determined using a NanoDrop instrument (Thermo Scientific by VWR, Milan, MI, Italy), and 1 µg of the total RNA was used as a template for cDNA synthesis using the iScript cDNA synthesis kit (BioRad, Segrate, MI, Italy).
Using specific primers (Table S2) encoding for ENO1, cDNA was amplified by PCR; amplification products were analyzed on 1% agarose gels, isolated from the gels using a Gel Extraction Kit (Qiagen), ligated into the EGFP retroviral vector Pallino-GFP [46] using the XhoI and NotI restriction sites, transformed into Top10 competent cells (LifeTechnologies), and sequenced. A mutated form of ENO1, bearing mutations in its plasminogen binding site at lysines 420, 422 and 434 (substituted by arginines), was obtained by three point mutations using primers containing the K420R, K422R and K434R substitutions (Table S2) and the QuikChange site-directed mutagenesis kit (Stratagene by Eppendorf, Milan, Italy); real-time PCR was performed using the C1000 thermal cycler (BioRad). The PCR product was subsequently subjected to sequencing.

Establishment of the PDAC cell line CFPAC-1 expressing ENO1 and the ENO1 mutant variant

The retrovirus was obtained by transfecting the Pallino-GFP vector containing the ENO1 gene or the ENO1 mutant variant (K420R, K422R and K434R) into GP-293 packaging cells (Clontech) co-transfected with the pVSV-G helper vector (Clontech) using the calcium phosphate method. Released retroviruses were collected 24 h after transfection and used for transduction of the 3'UTR ENO1-silenced CFPAC-1 (shENO1) cell line in the presence of 8 μg/ml polybrene (Sigma-Aldrich). After 12 h of incubation, complete medium was added and cells were cultured for a further 2 days. Cells were then analyzed for GFP content on a FACSCalibur flow cytometer (BD Biosciences). The CELLQuest™ software (BD Biosciences) was used for data acquisition and analysis of the cells expressing the full-length exogenous ENO1 (shENO1+WT), the triple-mutated ENO1 (shENO1+TM), and the empty vector (shENO1+Empty) as a control. Protein expression levels were also analyzed by Western blotting.

Enzyme-Linked Immunosorbent Assay (ELISA)

The plasminogen-binding assay was performed with a recombinant human ENO1 (rENO1) protein carrying a histidine tag (rENO1 WT) or a mutated form (rENO1 TM), produced as previously described [10]. Briefly, rENO1 (2.5 µg/mL in 0.1 M Na2CO3) was coated onto 96-well plates and incubated overnight at 4°C. After 2 h of blocking with PBS 3% bovine serum albumin (BSA) at RT, plasminogen diluted in PBS 1% BSA 0.05% Tween was added at different doses for 1 h at RT. After incubation with Streptavidin-HRP (Sigma) and then with tetramethylbenzidine (TMB) (Sigma), the plate was read on a spectrophotometer at a wavelength of 450 nm. Anti-ENO1 mAb levels were measured by ELISA, by binding to rENO1 (2 µg/mL in 0.1 M Na2CO3). Sera collected before the injection of AAVV and at 2 and 5 weeks after the injection were diluted 1:100 in DPBS, and antibody concentrations were calculated by regression analysis against a standard curve built from seven 2-fold serial dilutions starting from 1 µg/mL of the anti-ENO1 72/1 mAb [39].

Construction of the recombinant adeno-associated viral vector (AAVV) for the expression of complete anti-ENO1 72/1 mAb

Total RNA was extracted from hybridoma 72/1 cells [43] using the RNeasy Mini Kit (Qiagen), and VL and VH genes were amplified by RT-PCR with primer pairs LB13 (GAYATTGTGATGACYCAGKC) / Cκ2 (TGGATACAGTTGGTGCAGC) and VHB (AGGTSMARCTGCAGSAGTCWGG) / CHγ (GGCCAGTGGATAGAC), respectively.
Amplified fragments were sequenced and re-amplified with primer pairs VL12-BssHII (ATAGCGCGCCGTTTCAGCTCCAGCTTGGT) / VL12-EcoRV (ACTCGGATATCGTGATGACCCAGGCT) for VL, and VH14-ApaLI (TATAGTGCACTCTCAGGCCTATCTGCAGCAGT) / VH14-Eag/BspE (TAATTCCGGACGGCCGAAGAGACAGTGACCAGAGT) for VH, to allow the reconstitution of complete functional light- and heavy-chain genes for the anti-ENO1 mAb. Re-amplified V genes were inserted into a vector derived from pcDNA3 (Life Technologies) containing the sequences of the constant regions of the mouse κ light chain and γ1 heavy chain arranged in a single transcriptional unit, where the light-chain and heavy-chain genes are separated by a sequence encoding the autocatalytic peptide 2A from the FMD Virus; the two genes contained in this bicistronic mRNA are translated into a single polypeptide that spontaneously cleaves into two distinct proteins [47]. To remove the residual peptide 2A, a sequence encoding a furin cleavage site (RSKR) was introduced between the light-chain and the peptide 2A coding sequences [48]. The VL sequence was inserted into BssHII/EcoRV in this bicistronic vector, in order to join it to the κ constant region gene, while the VH segment was first inserted into ApaLI/BspEI in a pUT-SEC vector [49], in order to provide it with a sequence encoding a secretion signal; the SEC-VH unit was then excised with HindIII/EagI and joined to the mouse γ1 constant region gene in the bicistronic vector, downstream of the light-chain gene and the 2A sequence. The complete anti-ENO1 mAb transcriptional unit was then transferred via HindIII/XbaI into a plasmid vector, under the control of the cytomegalovirus immediate early promoter, to yield the final vector pAAV-72/1. To confirm that this vector was able to direct the production of functional antibodies, 7 µg of pAAV-72/1 was used to transfect approximately 3×10^6 HEK293 cells using the standard calcium phosphate method, and supernatants were used to probe cellular extracts from PDAC cells by Western blotting. The reactivity of the secreted recombinant mAb was compared to that of the original 72/1 mAb (Fig. S4). The pAAV-72/1 vector was then used for the generation of recombinant AAV (serotype 9) in the AVU (AAV Vector Unit) Core Facility of the ICGEB (Trieste, Italy), as described [50].

In vivo experiments

NOD-SCID IL2Rgamma null (NSG) mice (provided by the animal facility of the Molecular Biotechnology Center, University of Turin, Italy) were injected in the tail vein (i.v.) with 1×10^5 CFPAC-1 shCTRL, shENO1+Empty, shENO1+WT or shENO1+TM cells (in 0.1 ml DPBS), and the mice were euthanized after 28 days. SCID-beige mice (Charles River, Calco, LC, Italy) or NSG mice were injected in the tail vein (i.v.) with 1×10^5 CFPAC-1 cells expressing luciferase (in 0.1 ml DPBS), followed by biweekly injections of the anti-ENO1 72/1 mAb (500 μg/mouse) or an isotype-matched control antibody. NSG mice were pre-treated for 3 days with the same antibodies. In a different set of experiments, NSG mice were injected in the femoral muscles, 7 days before the i.v. CFPAC-1 cell challenge, with 1×10^11 genocopies of AAVV expressing the anti-ENO1 72/1 mAb or control AAVV. For detection of the in vivo growth of CFPAC-1 luciferase-transduced tumor cells, mice were anesthetized with 2% isofluorane and given i.p. injections of luciferin substrate (100 mg/kg) (Caliper Life Sciences by Promega, Milan, Italy) 10 min prior to imaging using the IVIS Spectrum in vivo Imaging System (Xenogen Corp., Alameda, CA, USA).
Images were taken on days 0, 14 and 28 after CFPAC-1 injection (SCID-beige mice) and on days 0 and 14 (NSG mice). Images were analyzed using the Living Image software for the IVIS Spectrum (Xenogen Corp.). After 28 days, mice were euthanized and checked for metastasis by histological analysis. For orthotopic experiments, NOD-SCID male mice (6-8 weeks old) were obtained from the National Laboratory Animal Center, Taiwan, and housed under specific pathogen-free conditions according to the guidelines of the Animal Care Committee at the National Health Research Institutes, Taiwan. On day 0, PANC-1/M cells expressing luciferase (1×10^6 in 200 µl per mouse) were orthotopically injected into the pancreases of the mice. The mice were intravenously administered the anti-ENO1 E10A mAb (250 μg/mouse) or its isotype control, as indicated, at 2 h and 24 h after tumor inoculation. Tissue distribution of the cells was monitored using the IVIS in vivo imaging system every 2 weeks for a total of 6 weeks, as described above. Volumes of metastatic tumors were measured using the following formula: length (mm) × width² (mm²) × (π/6). All organs from the mice treated with the anti-ENO1 E10A or control mAb, as well as from age-matched healthy mice, were also weighed. In all experiments, five mice were used in each group.

Tissue samples and histopathology

Mice were euthanized, necropsied and examined for the presence of tumor masses. Tumor masses and main organs, including lungs, spleens and livers, were fixed in 4% (v/v) neutral-buffered formalin (Sigma-Aldrich) overnight, transferred to 70% ethanol, and paraffin-embedded. For histological analysis, 5 μm formalin-fixed paraffin-embedded tissue sections were cut and stained with hematoxylin-eosin. Tumor/normal tissue ratios were evaluated with the ImageJ software.

Statistical analysis

The Student's t test (GraphPad Prism 5 Software, San Diego, CA) was used to evaluate differences in the invasion tests and in the in vivo experiments. Values are expressed as mean ± SEM.

ACKNOWLEDGMENTS

We would like to thank Marina Dapas for the preparation of the AAVV used in this study, Roberta Curto for technical support in the in vivo experiments, and Dr. Radhika Srinivasan and Dr. Marianne Murphy for critically reading the manuscript.
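As an illustrative aside (not the authors' analysis code), the two simple quantitative steps in the methods above, the tumor-volume formula and the two-sample Student's t test, can be sketched in a few lines of Python; the nodule measurements below are invented placeholders.

```python
import math
from scipy import stats  # SciPy's independent two-sample t test

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Volume estimate used above: length (mm) x width^2 (mm^2) x (pi/6)."""
    return length_mm * width_mm ** 2 * (math.pi / 6)

# Hypothetical nodule measurements (mm) for two treatment groups.
anti_eno1 = [tumor_volume(3.1, 2.0), tumor_volume(2.8, 1.9), tumor_volume(3.4, 2.2)]
control   = [tumor_volume(5.2, 4.1), tumor_volume(6.0, 4.4), tumor_volume(5.5, 3.9)]

t, p = stats.ttest_ind(anti_eno1, control)  # Student's t test, as in the paper
print(f"mean anti-ENO1 = {sum(anti_eno1)/3:.1f} mm^3, "
      f"mean control = {sum(control)/3:.1f} mm^3, p = {p:.4f}")
```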
2017-04-03T20:08:00.761Z
2015-03-14T00:00:00.000
{ "year": 2015, "sha1": "2b222fa0ee34871979923e47a8b5e743af7e62aa", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=3572&path[]=7227", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2b222fa0ee34871979923e47a8b5e743af7e62aa", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
220326790
pes2o/s2orc
v3-fos-license
Fatigue in chronic myeloid leukemia patients on tyrosine kinase inhibitor therapy: predictors and the relationship with physical activity

Fatigue is a common side effect of tyrosine kinase inhibitor (TKI) therapy in patients with chronic myeloid leukemia (CML). However, the prevalence of TKI-induced fatigue remains uncertain and little is known about predictors of fatigue and its relationship with physical activity. In this study, 220 CML patients receiving TKI therapy and 110 gender- and age-matched controls completed an online questionnaire to assess fatigue severity and fatigue predictors (Part 1). In addition, physical activity levels were objectively assessed for 7 consecutive days in 138 severely fatigued and non-fatigued CML patients using an activity monitor (Part 2). We demonstrated that the prevalence of severe fatigue was 55.5% in CML patients and 10.9% in controls (P<0.001). We identified five predictors of fatigue in our CML population: age (odds ratio [OR] 0.96, 95% confidence interval [95% CI]: 0.93-0.99), female gender (OR 1.76, 95% CI: 0.92-3.34), Charlson Comorbidity Index (OR 1.91, 95% CI: 1.16-3.13), the use of comedication known to cause fatigue (OR 3.43, 95% CI: 1.58-7.44), and physical inactivity (OR of moderately active, vigorously active and very vigorously active compared to inactive: 0.43 (95% CI: 0.12-1.52), 0.22 (95% CI: 0.06-0.74), and 0.08 (95% CI: 0.02-0.26), respectively). Objective monitoring of activity patterns confirmed that fatigued CML patients performed less physical activity of both light (P=0.017) and moderate to vigorous intensity (P=0.009). In fact, compared to the non-fatigued patients, fatigued CML patients performed 1 hour less of physical activity per day and took 2,000 fewer steps per day. Our findings facilitate the identification of patients at risk of severe fatigue and highlight the importance of setting reduction of fatigue as a treatment goal in CML care. This study was registered at The Netherlands Trial Registry, NTR7308 (Part 1) and NTR7309 (Part 2).
[…] the finding that adverse events lead to lower TKI treatment adherence 3 and therefore to poorer disease control. 4 Although TKI-induced fatigue is one of the most frequently reported adverse effects, 5,6 its actual prevalence is unknown because of the heterogeneity of the measurement techniques used across studies, and it has not been compared to that in the general population. Furthermore, clinicians are unable to identify patients at risk of fatigue, since predictors have never been assessed in this specific population of patients. Although a variety of predictors of fatigue, such as gender, age and socioeconomic status, have been described in the literature, [7][8][9] it is unknown whether these predictors, even when obtained in cancer populations, can be extrapolated to this unique group of CML patients on TKI therapy. Aside from these unmodifiable predictors of fatigue, physical activity has been identified as a modifiable predictor of fatigue in several patient populations. 10 The aim of this multicenter observational study was threefold: first, to assess the prevalence of fatigue in CML patients on TKI therapy compared to that in the general population; second, to identify predictors of fatigue in CML patients; and third, to objectively assess physical activity levels and compare these between fatigued and non-fatigued patients. In this way, we facilitate the identification of patients at risk of fatigue and provide insight into the association between fatigue and physical activity in the CML population.

Methods

CML patients aged ≥18 years who were receiving TKI therapy were invited to complete an online questionnaire to assess the prevalence and predictors of TKI-induced fatigue (Part 1). Control subjects were selected from a database consisting of over 20,000 subjects without CML who participated in previous research at the Department of Physiology at Radboud University Medical Center (Nijmegen, the Netherlands). Controls were matched for gender and age (±3 years) in a 1:2 ratio to the CML patients. A subgroup of CML patients was asked to wear an activity monitor in order to measure physical activity levels objectively (Part 2). Patients were recruited through the outpatient clinics at the Radboud University Medical Center and Amsterdam University Medical Center (Amsterdam, the Netherlands), and via CMyLife, a Dutch online platform for CML patients. 11 Informed consent was obtained from all participants. This study was approved by the Medical Review Ethics Committee region Arnhem-Nijmegen and registered at The Netherlands Trial Registry with numbers NTR7308 (Part 1) and NTR7309 (Part 2).

Part 1: questionnaire

Fatigue severity was measured by the Checklist Individual Strength subscale "subjective experience of fatigue" (CIS-fatigue), a validated fatigue questionnaire assessing fatigue over the preceding 2 weeks. 12 A score of 35 or above was considered to indicate severe fatigue.
The following general characteristics were collected: age, gender, body mass index, education level, and marital status. Time since CML diagnosis, TKI type and dose, duration of TKI treatment, and disease control (major molecular response, defined as ≤0.1% BCR-ABL transcripts on the International Scale) were collected to assess CML-related medical history. The Charlson Comorbidity Index (CCI) 13 was used to quantify participants' medical comorbidities. Both over-the-counter and prescribed medications known to cause fatigue (e.g., benzodiazepines, opioids, β-blockers, and metformin) were assessed. Lastly, potential lifestyle predictors were collected, including smoking, daily fluid and caffeine intake, alcohol consumption (beer and wine), and physical activity. Physical activity (expressed as Metabolic Equivalent of Task [MET] min/week) was classified into four categories: inactive (<500 MET min/week), moderately active (500-1,499 MET min/week), vigorously active (1,500-2,999 MET min/week), and very vigorously active (>3,000 MET min/week).

Part 2: activity monitor

Physical activity was measured with the activPAL3 micro (PAL Technologies Ltd., Glasgow, UK) 14 in a subgroup of 143 CML patients. The sample size calculation was based on data from previous research on differences in objectively assessed activity levels between fatigued and non-fatigued elderly subjects, 15 using a power of 80%, a two-tailed α level of 0.05, an estimated effect size of 0.50 and a drop-out rate of 10%. Participants wore the activity monitor 24 hours per day for 7 consecutive days and were asked to maintain their normal daily activities. In addition, employment status and total work time were reported. BCR-ABL transcript levels, hemoglobin concentration, white blood cell count and platelet count were extracted from the patients' electronic records.

Statistical analysis

Continuous data are reported as means ± standard deviation or median (interquartile range [IQR]) and categorical variables as counts and percentages. Logistic regression was performed to identify predictors of severe fatigue. Predictor variables with P values <0.10 in univariable analysis were selected for multivariable logistic regression analysis. Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated to estimate effect sizes. The optimal model was selected based on its discriminative ability, assessed by the area under the receiver operating characteristic curve, and its calibration slope. Differences in activity patterns were tested using Student t tests for independent samples when data were normally distributed, and Wilcoxon rank sum tests when data were skewed. To correct for potential confounding factors, multivariable linear regression was used. All data were analyzed using SPSS (version 22.0, IBM, Armonk, NY, USA). Statistical significance was set at a P value <0.05. Detailed information on the questionnaire and activity monitor is provided in the Online Supplementary Methods S1.

Results

A total of 357 participants were enrolled in the study, consisting of 247 CML patients and 110 controls. Figure 1 shows a schematic flowchart of the participants in the two parts of the study. Two hundred twenty CML patients (58% females, mean age 56 ± 13 years) and 110 gender- and age-matched controls completed the online questionnaire between May 2018 and May 2019 (Part 1). Of these 220 patients, 216 (98.2%) had no missing data and were included in the multivariable regression analysis.
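To make the MET-based activity categories defined in Part 1 concrete, here is a minimal, hypothetical Python helper; the thresholds are taken from the text, while the function name and example values are our own. (The source leaves the boundary at exactly 3,000 MET min/week ambiguous; the sketch assigns it to the highest category.)

```python
def activity_category(met_min_per_week: float) -> str:
    """Map weekly MET-minutes to the four categories defined in Part 1."""
    if met_min_per_week < 500:
        return "inactive"
    elif met_min_per_week < 1500:
        return "moderately active"
    elif met_min_per_week < 3000:
        return "vigorously active"
    return "very vigorously active"  # boundary value 3000 assigned here by assumption

# Example: 5 sessions/week of 30 min brisk walking at ~4 METs = 600 MET min/week.
print(activity_category(5 * 30 * 4))  # -> "moderately active"
```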
The activity monitor was worn by 143 CML patients, but five patients were excluded from analysis because of an invalid number of days registered by the activity monitor (Part 2).

Part 1: prevalence and predictors of severe fatigue

The prevalence of severe fatigue was 55.5% in the CML patients and 10.9% in the matched controls (P<0.001). Reported QoL was significantly poorer in CML patients than in controls (mean QoL scores 6.9 ± 1.5 and 8.1 ± 1.0, respectively; P<0.001), and also in severely fatigued CML patients when compared to patients without severe fatigue ([…]). The final multivariable model is presented in Figure 2, showing a positive predictive value of 73%, a negative predictive value of 68%, a sensitivity of 76%, and a specificity of 65%. Our model can be described by the following equation:

log-odds = 1.98 − 0.04 × age + 0.56 × gender (male=0, female=1) + 0.65 × CCI + 1.23 × use of comedication known to cause fatigue (no=0, yes=1) − physical activity level (inactive=0, moderately active=0.85, vigorously active=1.53, very vigorously active=2.58).

Table 2 shows the characteristics, including hematologic values, of the 138 CML patients who wore the activity monitor and were included in the final analysis. In line with the first part of the study, there was a higher proportion of female patients in the severely fatigued group than in the non-fatigued group (61% vs. 44%, respectively; P=0.039). Employment status did not differ significantly between the groups (53% vs. 66% employed in the fatigued and non-fatigued groups, respectively; P=0.20). However, total work time was significantly shorter in both severely fatigued male and female patients than in non-fatigued patients (males: 29 ± 12 h/week vs. 38 ± 12 h/week, respectively; P=0.048; females: 15 ± 8 h/week vs. […]). Figure 3 shows the daily activity patterns of fatigued and non-fatigued patients, categorized into sleeping, sitting, light-intensity physical activity, and moderate to vigorous physical activity. Severely fatigued CML patients slept significantly longer than patients without fatigue ([…]). Physical activity patterns were also analyzed for week and weekend days separately. Although there was no difference between the fatigued and non-fatigued groups in sitting time during week days (9.7 h/day [IQR 8.9-11.3] and 10.0 h/day [IQR […]], respectively; P=0.73), we found a trend towards longer sitting time during weekend days in fatigued patients compared to non-fatigued ones (9.6 h/day [IQR 8.6-10.9] and 9.2 h/day [IQR 8.1-10.5], respectively; P=0.06). Severely fatigued patients slept longer and performed less physical activity (of light as well as moderate to vigorous intensity) on both week and weekend days (all P<0.05).

Discussion

This is the first study to assess the prevalence and predictors of severe fatigue in CML patients receiving TKI treatment, and to provide insight into the relationship between severe fatigue and physical activity in this population. The prevalence of severe fatigue in our CML population was 55%. Using multivariable logistic regression, we built a model with good discriminative ability and found five significant predictors of severe fatigue in our population: younger age, female gender, higher CCI, the use of comedication known to cause fatigue, and physical inactivity. Using physical activity monitors, we objectively confirmed that severely fatigued CML patients are less physically active during the day, with regard to both light and moderate to vigorous intensity activity, on both week and weekend days.
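As a hedged illustration (not code from the paper), the published log-odds equation above can be turned into a predicted probability of severe fatigue via the standard logistic transform p = 1 / (1 + e^(−log-odds)); the coefficients come from the paper's model, while the patient values below are invented.

```python
import math

# Activity-level terms from the published equation.
ACTIVITY_TERM = {"inactive": 0.0, "moderately active": 0.85,
                 "vigorously active": 1.53, "very vigorously active": 2.58}

def p_severe_fatigue(age, female, cci, fatigue_comedication, activity):
    """Predicted probability of severe fatigue from the paper's multivariable model."""
    log_odds = (1.98 - 0.04 * age + 0.56 * female + 0.65 * cci
                + 1.23 * fatigue_comedication - ACTIVITY_TERM[activity])
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical patient: 45-year-old woman, CCI of 2, taking a fatigue-inducing
# comedication, physically inactive.
print(f"{p_severe_fatigue(45, 1, 2, 1, 'inactive'):.2f}")  # ~0.96
```

Note that the activity coefficients match the reported odds ratios (e.g., ln 0.43 ≈ −0.85), which is why the activity term enters with a minus sign.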
These findings suggest that: (i) there is a subset of CML patients particularly prone to TKI-induced fatigue, and (ii) severely fatigued patients have reduced levels of physical activity. Over half of the CML patients in the present study experienced severe fatigue, a prevalence significantly greater than that in matched controls and higher than the TKI-induced fatigue rates reported in the literature. In patients receiving imatinib, the prevalence of fatigue varied across large clinical trials from 34.5% (IRIS 16) to 15.5% (CML IV 17), 10% (DASISION 18), 22% (ENESTnd 19), 47% (BFORE 20), and 20% (EPIC 21). However, these trials were not designed to assess fatigue, which may explain the wide range of prevalence rates. We did not find a difference in fatigue prevalence between patients taking different TKIs, which is in agreement with the findings of these large trials. [18][19][20] Interestingly, severe fatigue was neither independently associated with treatment-related factors, such as TKI therapy dose and duration, nor with disease control. Although fatigue is a common sign of severe anemia, hemoglobin levels did not differ between fatigued and non-fatigued patients. Low hemoglobin levels were also not identified as an independent predictor of fatigue in patients with other hematologic malignancies. 22 We found that fatigue was more often present in younger and female patients. In line with this, Efficace et al. found that the largest differences in health-related QoL between CML patients and the general population were among younger subjects. 2 Contradictory findings are reported in the literature regarding the association between age and fatigue in other populations. For example, chronic fatigue was more often present in younger breast cancer survivors, 7 while older age has been identified as a risk factor for fatigue in both hematologic 23 and non-hematologic 24,25 cancer patients. Several studies showed an association between fatigue and gender in line with our results, with a more prominent risk for female cancer patients. 26,27 Interestingly, compared to the general population, female CML patients are more negatively affected than male CML patients in both mental and physical health. 2 Sex-related differences in disease perception and anxiety may contribute to the higher prevalence of fatigue in women, 28 although this aspect was beyond the scope of this study. Furthermore, we showed that patients with comorbidity were more often fatigued, as were patients taking comedication known to cause fatigue. This suggests the need to check patients' medication records critically, and to stop, or reduce the dose of, any comedication known to cause fatigue if possible (e.g., benzodiazepines as sleep medication), especially in those CML patients who are prone to fatigue. Both subjective and objective assessments of physical activity showed that fatigued patients were more often inactive than non-fatigued patients. More precisely, we found that, compared to non-fatigued patients, fatigued CML patients slept approximately 0.5 h/day longer, performed 1 h less of physical activity per day and took 2,000 fewer steps per day. Although it may seem self-evident that severely fatigued patients are less physically active as a result of fatigue, physical inactivity itself may contribute to the persistence of fatigue. 9 Additionally, there is a significant body of evidence to support the beneficial effects of exercise interventions in reducing fatigue levels in various (post-)cancer patient populations. 29
Interestingly, our study showed that the vast majority of the CML patients, both fatigued and non-fatigued, already met the recommended American College of Sports Medicine/American Heart Association guidelines for physical activity (i.e., 150-300 min of moderate-intensity or 75-150 min of vigorous-intensity physical activity per week, or an equivalent combination). However, higher activity levels were associated with lower levels of fatigue in our CML population. Furthermore, the extra amount of physical activity that non-fatigued patients performed compared to fatigued patients (~6.3 h of light-intensity and 1.4 h of moderate to vigorous-intensity physical activity per week) may yield additional health benefits. 30 Consequently, it is of clinical relevance to focus on preventing and treating TKI-induced fatigue in clinical practice. This is further supported by our findings that fatigued CML patients have impaired QoL and work fewer hours than non-fatigued patients. There are several limitations to this study. First, due to the cross-sectional design of the study, we cannot distinguish between cause and effect. Although we found that a reduced level of physical activity is associated with the presence of fatigue, and thus is a predictor of fatigue, we cannot state that reduced physical activity is a risk factor for fatigue. However, regardless of whether or not there is a causal relationship between fatigue and reduced levels of physical activity, our results highlight the importance of combating fatigue and of examining whether exercise interventions are useful to counteract fatigue. Secondly, because of the inclusion of a heterogeneous study population, we observed considerable variation in physical activity levels. However, the representative sample in this population-based study allows translation of the findings to clinical practice. Lastly, we used a Likert scale for the assessment of QoL in order to reduce the length of our questionnaire assessing predictors of fatigue (Part 1), even though validated QoL questionnaires have been developed for the CML population. However, a simple Likert scale has been shown to measure QoL adequately in cancer patients. 31 A major strength of our study is the objective assessment of physical activity, which ruled out response bias. Another strength is the relatively large sample size and the small amount of missing data (<3% in both parts of the study). In conclusion, we demonstrated that the majority of CML patients receiving TKI therapy experience severe fatigue and that severely fatigued patients have impaired QoL. Independent predictors of severe fatigue include younger age, female gender, higher CCI, the use of comedication known to cause fatigue, and physical inactivity. Objective assessment of physical activity showed that, compared to patients without fatigue, severely fatigued CML patients sleep more and are less active during the day, on both week and weekend days. These findings emphasize the importance of recognizing the reduction of fatigue as a treatment goal in CML care and the need for future studies to identify physical activity as a possible target to achieve this goal.

Disclosures

No conflicts of interest to disclose.

Contributions

LJ performed the research, analyzed data and wrote the manuscript with support from all authors. NB performed research and supervised the study. MD performed research and analyzed data. EB analyzed data.
MN and JJ performed research. ST and MH supervised the study.
2020-07-04T13:05:43.892Z
2020-07-02T00:00:00.000
{ "year": 2020, "sha1": "bfb6e6270ab1248f76d148a6575ce65b11e3f742", "oa_license": "CCBYNC", "oa_url": "https://haematologica.org/article/download/9797/71012", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b0da05c1c3eb81a958c3c740db6332f92bb3fd8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
106181355
pes2o/s2orc
v3-fos-license
Comparative effects of trifluoromethyl- and methyl-group substitutions in proline

Proline is one of a kind. This amino acid exhibits a variety of unique functions in biological contexts, which continue to be discovered and developed. In addition to the reactivity of the primary functional groups, the trans–cis isomerization of the peptidyl–prolyl amide bond and its impact on protein structure and function are of major interest. A variety of proline ring substitutions occur in nature, and more substitutions have been generated via chemical synthesis. Particularly promising is the 19F-labelling of proline, which offers a relatively new research application area. For example, it circumvents the lack of common NH-NMR reporters in peptidyl–prolyl fragments. Obtaining structural information from selectively fluorine-labelled peptides, proteins, and non-peptidic structures requires the analysis of the physicochemical features of the 19F-carrying proline analogues. To better understand and ultimately predict the potential perturbations (e.g., in protein stability and dynamics) introduced by fluorine labels, we conducted a comprehensive survey of the physicochemical effects of CF3 substitutions at each ring position by comparing the behavior of CF3-substituted residues with their CH3-substituted analogues. The parameters analyzed include the acid–base properties of the main-chain functional groups, carbonyl-group interactions around the residue, and the thermodynamics and kinetics of trans–cis isomerization. The results reveal significant factors to consider in the use of CF3-substituted prolines in NMR labeling and other applications. Furthermore, lipophilicity measurements demonstrate that CF3-substituted proline shows hydrophobicity comparable to valine, suggesting the potential application of these residues for enhancing interactions at nonpolar interfaces.

Introduction

Complex biological processes such as protein translation and folding are fundamentally influenced by the unique features of the amino acid proline (Pro, Fig. 1). [1][2][3][4][5][6][7] The secondary amino group of Pro creates a tertiary amide-based backbone structure that is prone to cis–trans isomerization, which in part accounts for the special role of Pro in protein folding. [4][5][6][7][8] Additionally, the cyclic nature of the Pro residue restricts the molecular conformation to certain envelope-type states of the pyrrolidine ring. [9][10][11][12] As the only coded amino acid with a restricted φ torsion, Pro is typically positioned in specific structural contexts in biological systems relative to other amino acid residues. Several unique biological phenomena associated with the presence of Pro in polypeptide structures have been investigated through substitution with structural analogues of Pro (ProAs). ProAs have been shown to impact translation yields 13,14 and velocities, 15 folding kinetics, [16][17][18] protein structural stability, [19][20][21][22][23][24][25] aggregation properties, 17,26 the biological potency of peptides 27,28 and more. In each individual case, changes resulting from the substitution of a Pro with an analogue were attributed to structural differences or were rationalized using data obtained from experimental molecular models. [29][30][31][32] Parallel to these efforts, significant computational data have been generated for select ProAs in recent literature.
[33][34][35][36][37][38] Curiously, the majority of the characterized substitution effects have involved very closely related ProAs and typically these substitutions occupied position 4 of the ring. A more general summary characterizing the effect of substitutions at each ring position is lacking. Therefore, we decided to summarize the physicochemical data of molecular models based on ProAs bearing methyl-(CH 3 -) and trifluoromethyl-group (CF 3 -) substitutions (Fig. 1). We started from a basic assumption that a methyl group would mimic the presence of an aliphatic substituent, whereas a trifluoromethyl group would exhibit an electron-withdrawing effect in addition to larger steric demands. The resulting physicochemical outcomes are presented in this work. While we sought to locate general trends related to the movement of substituents along the heterocyclic ring, our primary goal was to discern effects resulting from the presence of the CF 3 group due to its utility in 19 F labelling of polypeptides for NMR studies. The latter application is useful in peptide and protein labelling studies, 39,40 although other applications are also possible. This study is limited to a select set of regioisomeric ProAs collected from synthetic and commercial sources. Available regioisomers All isomeric methylprolines have been well described in the literature. In contrast, all trifluoromethylated prolines are relatively recent constructs, and these are all products of the XXI century chemistry. In 2002, the synthesis of the first trifluoromethylated proline, 4CF 3 Pro, was reported independently by two groups. [41][42][43] The synthesis of this regioisomer was later addressed by others. 44,45 In 2006, the first synthesis of 2CF 3 Pro was reported, 46 and additional syntheses of this amino acid and its analogues were established later. [47][48][49][50] The synthesis of 5CF 3 Pro was first reported in 2012, 51 and since then additional synthetic approaches to this compound and its diastereomers have been reported. 52,53 Synthesis of 3CF 3 Pro has not yet been provided in the literature; nonetheless, this regioisomer is available from commercial sources. Thus, we located the set of regioisomeric trifluoromethylated prolines shown in Fig. 2, while the methylproline analogues were collected from commercial and synthetic sources (see ESI †). For example, a very convenient synthesis of 5CH 3 Pro from a glutamic acid derivative and Meldrum's acid was recently reported by Mohite and Bhat. 54 Amino group Physicochemical examinations were then performed on the identified set of ProAs and their derivatives. For example, for a-trifluoromethyl amino acids, it has been established that the ammonium group is severely deactivated in nucleophilic reactions due to the electron-withdrawing effect of the fluorinated moiety. As a result, special synthetic strategies towards the incorporation of these amino acids into peptides have been developed. [55][56][57][58][59] In order to quantify the amino group deactivation, we examined the acidity of the ammonium group in the set of selected amino acids (Table 1 and Fig. 3). While the pK a of the ammonium group in methylproline remains nearly identical to that of Pro, the trifluoromethylgroup imposes a severe reduction of the value. These reductions are 4.6-4.9 pK a units at positions 2 and 5 and 2.2 units at positions 3 and 4. The magnitudes of the effects are similar to those previously described for trifluoromethylated pyrrolidines 60 and morpholines. 
61 Notably, for the free amino acid, the C-terminal carboxyl group remains ionized over the entire pH range of the ammonium protonation transition. In this case, the presence of the C-terminal charge stabilizes the ammonium group in the protonated state. 62 We mimicked the removal of the compensatory charge by esterification into a methyl ester, which reduces the pKa further.

[Fig. 2: Regioisomers of trifluoromethyl- and methyl-proline selected for this study. Indices 't' and 'c' indicate the configuration of the substituent relative to the carboxyl group.]

Carboxyl group

We then examined the acidity of the carboxyl group in the N-acetyl derivatives, which is comparable to the C-terminal residue in a polypeptide. The N-acetylation has a dual effect: it removes the N-terminal charge and simultaneously generates two new conformational states of the formally single N–C(=O) bond, the s-trans and s-cis amides, where the prefix 's-' refers to the single bond (Scheme 1). The transition between the two conformations is slow on the NMR time scale, thereby allowing the determination of the pKa for each form separately. 64 We determined the acidity of the N-acetyl derivatives; the results are shown in Fig. 4 (Table 1). The analyses revealed similar acidities among 5CF3Pro, the methyl-bearing derivatives, and Pro, while re-positioning of the CF3-group (from 4 to 3 to 2) gradually increased the acidity over a range of 1.6 pKa units.

Inter-carbonyl alignment

The difference between the pKa of the s-trans and s-cis amides is always positive (eqn (1)–(3)):

ΔpKa = pKa(trans) − pKa(cis) (1)

We have previously demonstrated that this difference is indicative of the inter-carbonyl interaction, which plays an important role in the folding propensities of the amino acids. 64 The ΔpKa is between 0.67 and 0.70 for N-acetyl proline, and this increase indicates an attraction between the carbonyl groups. In ProAs, this effect can be related to the stabilization of the C4-exo side-chain envelope conformation. 65 The inter-carbonyl alignment is an important contributor to the stability of polypeptide structures containing ProAs. This phenomenon has been attributed previously to the n→π* orbital interaction between the adjacent carbonyl groups. 66 Previous studies enable us to speculate that the examined diastereomers of 3-, 4- and 5-methylprolines should exhibit elevated ΔpKa values due to the larger contribution of the side-chain C4-exo conformation and more favourable angles between the interacting carbonyl groups. [67][68][69][70] Indeed, we observed that the experimental ΔpKa values are higher for methylprolines compared to Pro (Fig. 5 and Table 1). Interestingly, for the corresponding trifluoromethylprolines, the inter-carbonyl alignment is systematically lowered. Supporting this observation, we previously reported that the trans-amide stability in a model 4CF3Pro derivative was not enhanced despite the C4-exo conformation being dominant in solution and in the crystal structure. 45 Thus, fluorination of the molecules slightly reduces the energy of the inter-carbonyl interaction, as can be clearly deduced from the presented data. In contrast to these tendencies, the 2-substituted Pro derivatives exhibited a remarkably higher interaction of the carbonyl groups relative to the other cases. A previously reported crystal structure of a 2CH3Pro derivative illustrates that the elevated alignment energy may result from the increased proximity of the interacting groups due to steric effects of the α-substituent.
71 The larger steric effect of the CF 3 -group can explain the best alignment of the carbonyl groups featured by this amino acid. These results highlight potency of the a-substituents to stabilize the trans-amide bond according to stereoelectronic effect in addition to the steric one. Amide rotameric preference We then determined the trans/cis amide rotameric preference in N-acetyl derivatives bearing carboxylate, carboxylic acid and methyl ester groups at the C-terminus. Increasing the polarity of the carbonyl group leads to an increase in the inter-carbonyl alignment and increases the trans/cis amide ratio in the order of ester Z acid 4 salt (Scheme 1). The amide isomerism is impacted by steric factors due to the presence of bulky substituents around the amide bond for substitutions at the 2-and 5-positions in the ring. Fluorination evidently increases the original steric effect in both 2-and 5-substituted structures (Table 2). However, additional contribution of the inter-carbonyl alignment leads to additional stabilization of the trans-amide but not cis-amide structure. Thus, the substituent position confers an asymmetric effect on the trans/cis amide ratio (Fig. 6). Although, another reason for the asymmetric shape of the curves can be non-additivity of the steric sizes of the substituents. Amide rotation kinetics The rotational velocity of the amide bond around Pro is an important contributor to protein folding kinetics. Peptidyl-prolyl cis-trans isomerases are a class of enzymes that accelerate the amide rotation around prolyl residues, and the catalytic centres of these enzymes are attractive targets for the development of pharmaceutical peptide-based inhibitors. [72][73][74][75][76] ProAs often alter the amide rotational velocities in peptidyl-prolyl fragments, although the overall effect is complex. In order to clarify the effects of positional substitutions in the Pro ring, we summarized the potential contributing forces in Scheme 2. Effects A, B and C occur in the ground state. Effect A is invariant in the ground state and correlates with the basicity of the nitrogen atom, while effects B and C vary for the trans-and cis-amides and are reflected in amide thermodynamics. Since the rotation proceeds via the syn/exo transition state, 69,77 a substituent may sterically interfere with the oxygen atom, which shifts below the ring and opposite to the carboxyl group of Pro, thus creating effect D. For example, an electron-withdrawing substituent usually reduces the barrier by destabilizing the ground state resonance (effect A, Scheme 2). However, the same substituent can have an opposite and compensatory effect on the transcis barrier by increasing the energy of the inter-carbonyl alignment. For example, this occurs in the case of 4-hydroxy 78 and 4-fluoroprolines. 79 Experimentally determined velocities of the cistrans and transcis amide rotations in methyl esters of N-acetyl amino acids (water) are shown in Fig. 7 (see also Table S1 in the ESI † for the data on C-terminal carboxylates). These values can be rationalized based on the considerations outlined in Scheme 2. For example, compared to Pro, the 2CH 3 Pro derivative exhibits a decreased cistrans barrier due to effect B, whereas the transcis barrier increases due to the additional contribution from effects C and D. In the derivative analogues of 2CF 3 Pro, both rotational barriers are decreased primarily due to the additional contribution of effect A, which is not present in methylprolines. 
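The rotational velocities discussed above are conventionally reported as free-energy barriers obtained from measured first-order rate constants via the Eyring equation. The paper does not show this conversion step explicitly, so the following Python sketch is only an illustration of the standard calculation, assuming a transmission coefficient of 1; the rate constant and temperature used in the example are hypothetical, not values from this study.

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_barrier(k_obs: float, temp_k: float) -> float:
    """Free energy of activation (kJ/mol) from a first-order rate constant
    via the Eyring equation: dG_act = -R*T * ln(k*h / (kB*T))."""
    return -R * temp_k * math.log(k_obs * H / (KB * temp_k)) / 1000.0

# Hypothetical example: a trans->cis rotation rate of 0.05 s^-1 at 298 K
print(f"dG_act = {eyring_barrier(0.05, 298.0):.1f} kJ/mol")  # about 80 kJ/mol
```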
Overall, the rotational barrier is the parameter that is most complex among all presented so far. Lipophilicity The attempts to classify Pro in the paradigm of hydrophobic/ hydrophilic dualism create an ambiguity in the literature, as evidence exists supporting both classifications of Pro in polypeptides. In natural proteins, Pro can be present in both hydrophobic interiors and exposed, highly solvated stretches. 80 For example, Pro can accommodate very lipophilic prosthetic groups such as retinal in a hydrophobic core of proteins. 81 Pro interactions with aromatic residues are also thought to be guided by the hydrophobic contacts between the aromatic ring and the ring structure of proline. 82 In many other instances, Pro occupies solvated and polar stretches primarily due to the extended nature of the secondary structures it forms. 80 Noteworthy, we recently demonstrated that the addition of Pro systematically decreases the lipophilicity in an oligoproline peptide series. 83 Both trifluoromethyl-and methyl-groups are expected to increase the lipophilicity of an organic molecule, 84,85 and the same effect should potentially be seen in the amino acid derivatives. A recent study of aliphatic fluorine-containing amino acids demonstrated that the main effect of fluorination on the overall polarity occurs due to the alteration of the backbone contacts with water, 86 although we presume that for ProAs, the backbone-solvation considerations are less relevant because the backbone-forming tertiary amide is less accessible to the solvent. In order to characterize the outcome of CF 3 -/CH 3 -group substitutions, we examined experimental octan-1-ol/water partitioning (log P) values for the methyl esters of the N-acetyl amino acids (Fig. 8 and Table S2 in the ESI †). Our results demonstrate that the presence of a methyl group increased the lipophilicity by approximately 0.4 log P units, whereas the trifluoromethyl moiety exhibits a larger effect of approximately 0.7 log P units. The lipophilic contribution of the CF 3 group is slightly higher in the 2CF 3 Pro derivative, presumably due to the partial intermolecular compensation of dipoles from the trifluoromethyl-and carboxymethyl-moieties. Overall, these data suggest that trifluoromethylprolines can be considered as hydrophobic amino acids, with the log P close to that of valine (Fig. 9). The hydrophobicity of amino acid analogues is an important consideration when designing peptide sequences, and some trifluoromethylated ProAs have recently been applied in attempts to improve the membrane permeability of therapeutic peptide constructs. 87 We have demonstrated that the local polarity differences introduced by the incorporation of 4-methyl-and 4-fluoroprolines impact structural stability at a level nearly equal to the preorganization from the inter-carbonyl alignment. 88 A notable fact is that 4-methylprolines are abundant in natural products, 89,90 and these are the simplest alkylproline derivatives among others occurring in nature. 91 Another common modification is hydroxylation in position 4; finally extensive experimental data have addressed 4-fluoroprolines. 19,20 For this reason, we compared the lipophilicity of the 4-substituted proline derivatives in Fig. 10. The data illustrate that the 4-hydroxyl and 4-methyl moieties have opposite effects, with the former enhancing the molecular hydrophilicity and the latter enhancing the molecular lipophilicity. Furthermore, 4-fluorination increases polarity, though less so than hydroxylation. 
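For reference, octan-1-ol/water log P values of the kind plotted in Fig. 8-10 follow directly from measured phase concentrations. Below is a minimal Python sketch of that arithmetic; the concentration readings are hypothetical, chosen only so that the difference reproduces the roughly 0.7 log P shift reported above for the CF3 group.

```python
import math

def log_p(conc_octanol: float, conc_water: float) -> float:
    """Octan-1-ol/water partition coefficient on the log10 scale.
    The two concentrations must share the same units (e.g., mM)."""
    return math.log10(conc_octanol / conc_water)

# Hypothetical shake-flask readings for two N-acetyl amino acid methyl esters
log_p_parent = log_p(0.42, 1.00)  # parent proline derivative
log_p_cf3    = log_p(2.10, 1.00)  # CF3-substituted analogue

# The substitution effect is the difference of the two log P values
print(f"delta log P (CF3 vs. H) = {log_p_cf3 - log_p_parent:.2f}")  # ~0.7
```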
Discussion A number of chemical routes have been developed for the convenient synthesis of amino acids bearing CF 3 -groups over the last two decades. Further application of these compounds in engineering and/or in peptides and proteins requires a comprehensive understanding of the physicochemical effects of trifluoromethylation. Interestingly, since the synthesis of the first CF 3 -bearing proline in 2002, [41][42][43] only one of these compounds has undergone physiochemical characterization through investigation of trans/cis amide properties and pK a (in 2015). 45 Scheme 2 Potential effects contributing to the amide rotation barrier in N-acetyl prolyl derivatives. EWG = electron-withdrawing group. Fig. 7 Amide rotation barriers determined for methyl esters of N-acetyl trifluoromethyl-(A) and methyl-(B) prolines: transcis (full purple circles) and cistrans (empty green circles). Measured in aqueous medium at 298 or 310 K (see Table 2 for details). In an attempt to provide the missing data we have systematically characterized all of the trifluoromethylproline regioisomers and compared obtained values with those of analogous methylproline counterparts (Fig. 2). The results of the N-terminal basicity and C-terminal acidity provide a numerical description for the electronwithdrawing effect of the CF 3 -group ( Fig. 3 and 4). Subsequently, we noticed that the inter-carbonyl alignment (represented by the DpK a and DpK a * values and trans/cis equilibrium constants) is reduced in the trifluoromethyl-derivatives compared to the methyl derivatives in the 3-, 4-, and 5-substituted amino acids ( Fig. 5 and 6). For the 2-position, however, substitutions the quaternization of the residue significantly enhances the original interaction between the carbonyl groups due to the steric reasons. The kinetics of the amide bond rotation are reflected in the transcis and cistrans activation barriers (Fig. 7). The latter parameters are quite complex, as they are impacted by the amide thermodynamics as well as the N-terminal basicity. For instance, the curves in Fig. 7A (trifluoromethyl derivatives) can be considered as composite results of the barriers for the corresponding methylated amino acids (Fig. 7B) and the basicity trends (Fig. 3A). Transition state effects should be considered when the substituents interfere with the upstream carbonyl oxygen atom moving below the pyrrolidine ring. Another important observation is that trifluoromethyl substituents cause the amino acid to exhibit lipophilicity levels equivalent to those of valine, as demonstrated by the experimental examination of the octan-1-ol/water partitioning constants ( Fig. 8 and 9). Our data will enable researchers to rationally and more accurately predict and plan experiments that involve Pro substitutions, in applications such as in the design of peptidyl-prolyl cis-trans isomerase inhibitors, 72-76 expansion of the genetic amino acid repertoire [13][14][15] as well as in protein engineering [21][22][23][24][25][26] and development of other peptide therapeutics. 87 However, the most outstanding application is the use of fluorine-labelled ProAs in 19 F NMR studies of polypeptides. A few works have suggested the use of fluoroprolines for this purpose (Fig. 11). 92,93 Recently vicinal difluoroprolines were proposed for use in 19 F NMR labelling, as these derivatives have a reduced conformational bias in the side chain and the amide bond. 
94 To a great dissatisfaction, studies of the polarity changes introduced in ProAs are quite underrepresented in the literature. Even for fluoroprolines, which are most well studied, there is a very little awareness of the polarity changes introduced by these residues in the polypeptide structures. In this context, we recently demonstrated that local polarity changes introduced by ProAs may have quite a significant effect when placed on a periphery of a 9 kDa foldon trimeric propeller Fig. 9 The position of trifluoromethyl-(CF 3 Pros) and methyl-(CH 3 Pros) prolines on the lipophilicity scale based on log P octan-1-ol/water values previously reported for methyl esters of N-acetyl amino acids. 80 Fig. 10 Comparison of the experimental log P octan-1-ol/water values for 4-substituted prolines. Molecular conformation is sketched taking into account the preferred side-chain conformation. 29 structure. 88 As seen from further analysis of the log P values, 4-fluorination increases the polarity (Fig. 10), which may reduce the folding stability when the Pro residue is located at a buried, interior position. Geminal difluoroprolines 95 may potentially reduce the polarity of the proline residue following partial C-F dipole compensation 84,85 while maintaining conformational bias at minimum. 96 However, these would also have poor NMR properties due to a large coupling between the fluorine atoms. 94 Alternatively, Tressler and Zondlo adapted the use of perfluoro-tert-butytoxy 19 F probes 97,98 and proposed the use of O-perfluoro-tert-butyl-4-hydroxyprolines in NMR applications, 99 although these bulky lipophilic compounds also demonstrated a significant conformational bias in trans/cis amide equilibrium. Other proposed Pro substitutes are trifluoromethyl-4,5-methanoprolines, which were originally introduced as 19 F NMR labels for solid-state NMR studies in lipid membranes. 100 Later studies demonstrated that 4,5-methanolprolines also impose a notable conformational alteration depending on the stereochemistry of the cyclopropane unit. 101 A recent 19 F NMR study fully supported this conclusion. 102 However, their use in NMR labeling can justified given that the conformational bias is taken into account when interpreting the data. A single trifluoromethyl group represents another alternative to the above-mentioned 19 F NMR labelling strategies. The utility of this labelling scheme follows in particular from the good relaxation profile of the axially rotating trifluoromethyl-group. 103 Our study together with our previous report 45 shows that the trans/cis ratios are minimally perturbed for the 3-and 4-position substitutions, as these positions are distant from the backbone. Thus, the hydrophobic residues of CF 3 -proline are good candidates for the design and engineering of non-polar interaction interfaces. These insights have enabled the recent application of 4CF 3 Pro for the labelling and detection of the first transmembrane polyproline helix by means of solid state 19 F NMR. 104 Also, some other ProAs, 4-trifluoromethyl-3,4dehydroproline 105 and difluoro-4,5-methanoprolines 106 can be considered promising fluorine-bearing proline substitutes. However, these compounds exhibit severely compromised stability, in particular, in basic media. Finally, we believe that the physicochemical data reported in this study will further support the use of CF 3 -containing prolines for purposes ranging from customizable design of small molecules to the reprogramming of complex biological processes and structures. 
Conflicts of interest

The authors declare the following competing financial interest: SP is affiliated with a commercial company that sells some of the discussed amino acids. VK and NB declare no conflicts of interest.
Residual Similarity Based Conditional Independence Test and Its Application in Causal Discovery

Recently, many regression-based conditional independence (CI) test methods have been proposed to solve the problem of causal discovery. These methods provide alternatives to test CI by first removing the information of the controlling set from the two target variables, and then testing the independence between the corresponding residuals Res1 and Res2. When the residuals are linearly uncorrelated, the independence test between them is nontrivial. With the ability to calculate inner products in high-dimensional space, kernel-based methods are usually used to achieve this goal, but they still consume considerable time. In this paper, we investigate the independence between two linear combinations under the linear non-Gaussian structural equation model. We show that the dependence between the two residuals can be captured by the difference between the similarity of (Res1, Res2) and that of (Res1, Res3) (Res3 is generated by random permutation) in high-dimensional space. With this result, we design a new method called SCIT for CI testing, where a permutation test is performed to control the Type I error rate. The proposed method is simpler yet more efficient and effective than the existing ones. When applied to causal discovery, the proposed method outperforms the counterparts in terms of both speed and Type II error rate, especially in the case of small sample size, which is validated by our extensive experiments on various datasets.

Introduction

Independence and conditional independence (CI) are central notions in statistical model building, as well as being a foundational concept for much of statistical theory. In the problem of causal discovery, independence and CI tests are usually used for testing CIs among variables. In constraint-based methods (Pearl and Mackenzie 2018), the CI relationship x ⊥ y | Z allows us to separate x−y when constructing a probabilistic model based on the joint distribution, which results in a parsimonious representation (Zhang et al. 2011). By using CI tests, constraint-based methods (Pearl 2009) can generally return a partial directed acyclic graph (DAG) (Pearl 2009). In the causal functional model (Velikova et al. 2014; Peters et al. 2012; Zhang et al. 2016), there is a solution to infer causal directions by testing the independence between the set of independent variables x and the corresponding residual R_{x→y} (or the causal process of P(y|x)). Without any given assumption or precondition, CI testing is generally more difficult than independence testing. Many existing methods are based on explicit estimation of conditional densities or their variants, using ranks, kernels, copulas, nearest neighbours, or discretization (Diakonikolas and Kane 2016). For example, the characterization of CI as P(x|y,Z) = P(x|Z) can be used to test CI by measuring the distance between two conditional densities (Su and White 2008). Due to the curse of dimensionality, the required sample size inevitably increases dramatically with the size of the controlling set Z, which makes accurate estimation of the conditional density or related quantities hard to accomplish. Assume that Z contains only one variable with a finite number of values {z_1, ..., z_k}; then x ⊥ y | Z iff x ⊥ y | (Z = z_i) for each value z_i.
Given a sample of size n, even if the data are distributed evenly over the values of Z, we must test the independence within each subset of the sample sharing the same Z value by using only approximately n/k data points per subset. When Z is continuously distributed or contains several variables, the observed values of Z are almost surely unique. To extend the above procedure to the continuous case, we must consider the neighboring values of Z. However, it is also difficult to find appropriate neighboring points. As kernel functions are able to represent high-order moments by calculating the similarity of high-dimensional implicit functions, a series of kernel-based CI tests were presented to solve the above problems. In practice, mapping variables into reproducing kernel Hilbert spaces (RKHSs) allows us to infer properties of distributions like independence (Gretton et al. 2006). Fukumizu et al. (2007) tested conditional covariance by using the Hilbert–Schmidt norm of the conditional cross-covariance operator, as a zero operator norm is equivalent to x ⊥ y | Z when the RKHSs are built from characteristic kernels. Daudin (1980) presented a characterization of CI that transforms CI into a set of zero correlations of regression functions. Concretely, x ⊥ y | Z if and only if E(ψ̃φ̃) = 0 for all ψ ∈ L²_{xZ} and φ ∈ L²_y (L²_{xZ} and L²_y denote the spaces of square-integrable functions of (x, Z) and y, respectively), where ψ̃(x, Z) = ψ(x, Z) − r_ψ(Z) and φ̃(y, Z) = φ(y) − r_φ(Z), with r_ψ, r_φ ∈ L²_Z being regression functions. To give an empirical estimate of this characterization of CI, Zhang et al. (2011) developed a method called KCIT, which relaxes the spaces of the functions ψ, φ, r_ψ and r_φ to RKHSs. Doran et al. (2014) introduced the PKCIT method, which utilizes permutation to convert the CI test problem into an easier two-sample test problem. Strobl, Zhang, and Visweswaran (2017) used random Fourier features to approximate KCIT. Lee and Honavar (2017) employed a modified unbiased estimate of the maximum mean discrepancy to measure CI. Compared to discretization-based CI testing methods, kernel methods exploit more complete information in the data and incur less random error. Recently, regression-based tests were proposed for CI testing. Generally, these methods can be divided into two steps: regression and independence testing. An indispensable assumption used by these methods is that any information of the controlling set Z can be removed from x and y by regression. This assumption is not always true, but it works well in general continuous cases. In particular, when the information of Z can be totally removed from x and y by regression, regression-based CI tests generally work better than kernel-based methods. Grosse-Wentrup et al. (2016) transformed the CI of x ⊥ y | Z into independence between x − ψ(Z) and (y, Z); a related approach tests the independence between x − ψ(Z) and (y − φ(Z), Z) to test x ⊥ y | Z. In the two methods, ψ (or φ) is obtained by regressing x (or y) on Z, so the CI test can be reduced to a set of regression and independence tests. In practice, x − ψ(Z) ⊥ Z is a strong condition, as x − E(x|Z) ⊥ Z implies that Z causes x in many cases (Zhang and Hyvärinen 2009). On the other side, when Z contains several variables, checking whether or not P(x − ψ(Z)) is independent from the joint distribution P(y, Z) or P(y − φ(Z), Z) tends to be prohibitively expensive. Note that in the two methods, independence of the residuals is merely sufficient, but not necessary, for CI.
Flaxman, Neill, and Smola (2016) showed that given structural faithfulness and Markov assumptions (Pearl 2009), if Z causes x or y, x y|Z is equivalent to x − E(x|Z) y − E(y|Z). Similarly, here a strong condition that Z causes x or y is assumed, hence it is easy to derive the corresponding causal relations. Moreover, faithfulness condition means that x y|Z ⇒ x and y are d-separated by Z, and Markov condition implies that y are d-separated by Z ⇒ x y|Z, so CI is relaxed to d-separation given the faithfulness and Markov assumptions. However, CI is neither sufficient nor necessary to d-separation. In practice, given the faithfulness assumption, x − E(x|Z) y − E(y|Z) and x y|Z have significant correlations. For example, in (Ramsey 2014), the authors suggested to use x − E(x|Z) y − E(y|Z) to test x y|Z under the faithfulness assumption. In , the authors further conjectured that x−ψ(Z) y−φ(Z) can lead to x y|Z under nonlinear and faithfulness conditions, where ψ and φ are nonlinear functions, x, y and Z are generated by nonlinear additive noise model. Zhang, Zhou, and Guan (2018) showed that x − E(x|Z) y − E(y|Z) is sufficient to support x y|Z if the data is generated by following the linear non-Gaussian structural equation model (SEM) under the faithfulness assumption. As the residuals can be easily calculated by linear regression, the performance mainly depends on the independence test. Note that in this case, cov(x − E(x|Z), y − E(y|Z)) = 0 often holds. Therefore, it is difficult to detect the common component shared by x − E(x|Z) and y − E(y|Z). To get the best performance, this method (denoted by ReCIT) uses KCIT to achieve this goal, but it is computationally rather demanding. In (Zhang et al. 2021), the authors used kurtosis to test independence, they proved that in linear case with x y, x − h * y and x − h * r have different kurtosis, where h is the number of interpolation points, k is the times of permutations. This methods works very efficient in simple case. However, when the scenario becomes complicated with two residuals being very Gaussian, it is easy to cause Type II error where the CI hypothesis is not rejected although it is false. In this work, we aim to test the independence between the two residuals R x,Z = x − E(x|Z) and R y, and s 1,...,l are noise mutually independent. We show that the dependence between residuals can be captured by the difference between the similarity of (R x,Z , R y,Z ) and that of (R x,Z , R r ) where R r is an independent copy of R y,Z in high-dimensional space, denoted by S [ψ(R x,Z ), ψ(R y,Z )] and S [ψ(R x,Z ), ψ(R r )]. We design an elaborate test criterion for measuring the difference between the two S [ * ], by kernel and permutation based methods. The proposed method needs to calculate n × 1 similarity matrix instead of the trace of product between two n × n matrices, therefore it works more efficient. Extensive experiments show that our method performs better on regression based CI test than the counterparts, which can work faster and get a better performance in causal discovery. Similarity Based CI Test In this work, we assume that the given variables are generated by the linear non-Gaussian structural equation model (SEM), which is defined as a tuple (S , P(X)) where S = {S 1 , ..., S n } is a collection of n equations, S i : x i = pa x i +ε i , i = {1, ..., n} and pa x i corresponds to the set of direct parents of x i in a DAG G. 
The noise variables ε_i have a strictly positive density with respect to the Lebesgue measure and are independent, and all of them have the same non-Gaussian distribution. A SEM reflects the data-generating processes of X in G. We say a SEM is identifiable if it is asymmetrical in cause and effect and is able to distinguish between them. In fact, a linear SEM is generally identifiable in non-Gaussian cases (Zhang and Hyvärinen 2009).

Regression Based CI Test

Consider the following task: given two randomly selected nodes x′ and y′, we want to test whether x′ and y′ are conditionally independent given a set of variables Z. According to the mechanism of regression-based CI testing, the CI test of x′ ⊥ y′ | Z can be relaxed to an independence test between the two residuals x = x′ − E(x′|Z) and y = y′ − E(y′|Z) in the linear non-Gaussian case. As the residuals x and y can be easily calculated by linear regression, the task turns into testing the independence between x and y. Concretely, the two variables (residuals) x and y are linear combinations of independent noises s_i (i = 1, ..., l), such that x = Σ_{i=1}^{l} a_i s_i and y = Σ_{i=1}^{l} b_i s_i. When x and y are correlated, we know that they are not independent. However, if x and y are uncorrelated, it is difficult to check whether x and y are independent or not. In what follows, we try to develop a low-complexity method (compared to kernel-based methods) to test independence between two residuals. The mutual information of x and y is

I(x, y) = ∬ p(x, y) log [ p(x, y) / (p(x) p(y)) ] dx dy.

Then, I(x, y) = 0 implies p(x, y) = p(x)p(y), i.e., x and y are independent. In the continuous case, we need some method to handle the probability densities. We first review how the existing methods based on the Hilbert–Schmidt independence criterion (HSIC) work. Let us consider p(x, y) = p(x)p(y); then for any ψ and φ, square-integrable functions of x and y, respectively, we have

E[ψ(x)φ(y)] − E[ψ(x)]E[φ(y)] = 0. (2)

Therefore, to solve this problem, we need to select enough ψ and φ and check that this difference vanishes for all of them. At this point, we can use a kernel function to calculate the inner product between any ψ and φ; then L can be written as the trace of a product of centered n × n kernel matrices (the empirical HSIC statistic). As we need to multiply n × n kernel matrices, computing L is expensive, with complexity O(n³), n being the sample size.

CI Test Criterion Based on Similarity

In this section, we present a method to test CI based on similarity. Consider three variables x, y and r, where r is an independent copy of y; that is, y and r are independent and identically distributed, r ∼ p(y). Intuitively, if x and y are independent, then the similarity between ψ(x) and ψ(y) equals that between ψ(x) and ψ(r), denoted by S[ψ(x), ψ(y)] = S[ψ(x), ψ(r)], where ψ is any square-integrable function of x, y and r. On the contrary, if x and y are not independent, there must be some ψ such that S[ψ(x), ψ(y)] ≠ S[ψ(x), ψ(r)]. Therefore, we can derive the following theoretical result. Proposition 1. Given three random variables x, y and r, where r is an independent copy of y: if x and y are independent, then ∀ψ, C[ψ(x), ψ(y), ψ(r)] = 0; if x and y are not independent, then almost surely ∃ψ such that C[ψ(x), ψ(y), ψ(r)] ≠ 0, where C[ψ(x), ψ(y), ψ(r)] = S[ψ(x), ψ(y)] − S[ψ(x), ψ(r)]. We therefore can derive a test criterion by following Equ. (2)–(6) as L_xyr = E{S[ψ(x), ψ(y)]} − E{S[ψ(x), ψ(r)]}, and we need to measure how close L_xyr is to zero. In the next step, we use a kernel function to calculate the inner product between ψ(x) and ψ(y) or ψ(r) with respect to a set of ψ(*); with the Gaussian RBF kernel this gives L_xyr = E(exp(−γ‖x − y‖²)) − E(exp(−γ‖x − r‖²)). Note that the data we have is a finite sample (x, y) = ((x_1, y_1), (x_2, y_2), ..., (x_n, y_n)) from a pair of variables x and y. Here we use the permutation method to test the hypothesis of independence: H_0: x and y are independent, versus H_1: x and y are not independent. The idea behind it is that permuting y removes any dependency between x and y. Therefore, we can compare L_xyr with L_{xy^p r}, where y^p is the permutation p applied to the sample y. We choose the number of permutations k, and create k permuted samples y^{p_i}, i = 1, ..., k. Then, if x and y are truly independent, permuting y will not change L_xyr much, and therefore we will not be able to reject the null hypothesis (x and y are independent). On the other hand, if x and y are not independent, we can reject H_0 because L_xyr changes greatly. In practice, permutation tests are particularly attractive because of their simplicity and their ability to control the Type I error without any distributional assumptions (Berrett et al. 2020). Recall that our task is: given two variables x and y, test whether x and y are conditionally independent given a set of variables Z. We use the least-squares method to regress x on Z, and denote the obtained residual by R_{x,Z} = x − E(x|Z) = x − Z(ZᵀZ)⁻¹Zᵀx. Similarly, we can get the residual of regressing y on Z, R_{y,Z} = y − E(y|Z) = y − Z(ZᵀZ)⁻¹Zᵀy. By subtracting the two residuals, we obtain R_{x,Z} − R_{y,Z} = (x − Mx) − (y − My), where we simply denote the matrix Z(ZᵀZ)⁻¹Zᵀ by M. Suppose the Gaussian radial basis function (RBF) kernel is used to calculate the inner product; then the permuted counterpart of R_{y,Z} is

R_r = (R_{y,Z})^{p_r} = (y − My)^{p_r} = y^{p_r} − M^{p_r}y = r − M^{p_r}y, (12)

where M^{p_r} is the permutation p_r applied to the rows of M. Consider the test criterion with a permutation p_i. We can see that the term E_{x,y,Z∼p(x,y,Z), r∼p(y)}(exp(−γ‖x − Mx − r + M^{p_r}y‖²)) exists simultaneously in L and L^{p_i}; therefore this term can be removed, i.e.,

L = E_{x,y,Z∼p(x,y,Z)}(exp(−γ‖x − Mx − y + My‖²)) (18)

and

L^{p_i} = E_{x,y,Z∼p(x,y,Z)}(exp(−γ‖x − Mx − y^{p_i} + M^{p_i}y‖²)). (19)

Assume i = 1, ..., k; then the P-value can be defined as P-value = (1/k) Σ_i 1(L < L^{p_i}), where 1 is the indicator function. Then, given a significance level α, if P-value ≥ α, we accept H_0: x and y are independent; otherwise we accept H_1: x and y are not independent.
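To make Proposition 1 and the RBF-based criterion concrete, here is a small self-contained numerical sketch. The toy data-generating model is our own, not the paper's: x and y = |x| are uncorrelated but dependent, so a correlation test misses the dependence while the similarity difference does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 2000, 1.0

def sim(a: np.ndarray, b: np.ndarray) -> float:
    """Empirical similarity E[exp(-gamma * ||a - b||^2)] over aligned samples,
    i.e., the RBF-kernel form of S[psi(a), psi(b)] used in the text."""
    return float(np.mean(np.exp(-gamma * (a - b) ** 2)))

# Uncorrelated but dependent pair: y = |x| has (near) zero correlation with x
x = rng.uniform(-1, 1, n)
y = np.abs(x) + 0.05 * rng.uniform(-1, 1, n)
r = rng.permutation(y)            # an independent copy of y, as in Proposition 1

L_xyr = sim(x, y) - sim(x, r)     # C[psi(x), psi(y), psi(r)]
print(np.corrcoef(x, y)[0, 1])    # ~0: plain correlation misses the dependence
print(L_xyr)                      # typically well away from zero for this pair
```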
Implementation of Similarity Based CI Testing

As mentioned above, the difference between L and L^{p_i} can be used to test CI. With these theoretical results, we design a new method for CI testing called Similarity based Conditional Independence Test (SCIT in short). The details of SCIT are given in Alg. 1. To test the CI of x ⊥ y | Z, we first apply k + 1 different permutations to y and obtain k + 1 permuted samples of y; the new variables are denoted by r and y^{p_1}, ..., y^{p_k} (Line 1). Then, we calculate the k + 1 statistics L and L^{p_1}, ..., L^{p_k} according to Equ. (18) and Equ. (19). In this process, the time is mainly spent on calculating M = Z(ZᵀZ)⁻¹Zᵀ, which contains an inversion of the matrix ZᵀZ. The matrix M^{p_i} can be easily obtained from M and the permutation p_i (Line 2). In the final step, we calculate the P-value = Σ_i 1(L < L^{p_i})/k. If P-value ≥ α, we accept H_0: x ⊥ y | Z; otherwise we accept H_1: x and y are not conditionally independent given Z (Lines 3-8). As SCIT is used for linear CI testing, it can be directly applied in the PC algorithm for linear causality discovery. For more details about using regression-based CI tests in the PC algorithm, readers can refer to (Zhang, Zhou, and Guan 2018).
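As a concrete illustration of Alg. 1 as described above, the following is a minimal NumPy sketch of the SCIT procedure. It is our own rendering, not the authors' reference implementation (which is in Matlab, at the repository linked below); the function name, default parameters, and the use of a pseudo-inverse are our choices.

```python
import numpy as np

def scit(x, y, Z, k=100, gamma=1.0, alpha=0.05, rng=None):
    """Minimal sketch of SCIT (Alg. 1): permutation test of x _||_ y | Z
    based on residual similarity under a linear non-Gaussian SEM.
    Returns (p_value, accept_CI)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(x)
    Z = np.column_stack([np.ones(n), np.atleast_2d(Z).reshape(n, -1)])
    M = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T       # hat matrix Z (Z'Z)^-1 Z'
    rx = x - M @ x                               # residual R_{x,Z}
    ry = y - M @ y                               # residual R_{y,Z}

    L = np.mean(np.exp(-gamma * (rx - ry) ** 2))            # Equ. (18)
    L_perm = np.empty(k)
    for i in range(k):
        ry_p = ry[rng.permutation(n)]                        # permuted residual
        L_perm[i] = np.mean(np.exp(-gamma * (rx - ry_p) ** 2))  # Equ. (19)

    p_value = float(np.mean(L < L_perm))        # P-value = sum_i 1(L < L^{p_i})/k
    return p_value, p_value >= alpha            # accept H0 (CI) iff P-value >= alpha
```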
Discussion

Going back to Equ. (7): if the similarity S(·) is measured by the Pearson correlation coefficient ρ(·), the criterion reduces to a plain correlation test. In contrast to Daudin's characterization of CI (Daudin 1980), SCIT searches over only one function ψ, which means that Equ. (7) is sufficient but not necessary to support CI. In practice, however, assuming that any ψ can be covered by SCIT with a family of kernel functions, counterexamples will be found only in well-designed situations.

Performance Evaluation

We first compare SCIT with ReCIT (Zhang et al. 2019) and KCIT (Zhang et al. 2011) through extensive simulated experiments. SCIT and ReCIT are residual-based CI test methods; ReCIT tests the independence between the two residuals by using HSIC/KCIT. To the best of our knowledge, ReCIT is one of the best residual-based CI testing methods in linear cases, and many comparisons between ReCIT and other CI testing methods like KCIT were presented in previous works (Zhang, Zhou, and Guan 2018). We then illustrate the advantage of SCIT in causal skeleton learning. We compare our method (SCIT + PC algorithm) with the causal learning method PC_ReCIT over various causal graphs. The experimental platform adopts Matlab R2021b, an Intel i7-11700K (3.60 GHz) CPU, Windows 10, and 32 GB memory. The source code of the SCIT package is available at https://github.com/Causality-Inference/SCIT.

Effect of Controlling Set and Sample Sizes

As CI test methods are mainly affected by the size of the controlling set and the sample size, we examine by simulation how the probabilities of Type I error (where the CI hypothesis H_0 is incorrectly rejected) and Type II error (where the CI hypothesis is not rejected although it is false) of SCIT change with the size of the conditioning set Z (|Z| = 1, 2, ..., 5). Here, we consider the following two cases. In Case I, only one variable in Z, denoted by z_1, is effective; i.e., the other variables are independent of x, y, and z_1. The causal link is x → z_1 → y, in which z_1 = a·x + ε_x and y = b·z_1 + ε_y. The other variables x, z_2, ..., z_5 are independently generated following U(−1, 1), with ε_x, ε_y ∼ U(−0.2, 0.2) and a, b ∼ U(0.2, 1). The ground truth is that x and y are conditionally independent given {z_1} ∪ S, and not conditionally independent given S, for any S ⊆ Z \ {z_1}. In Case II, all variables in the conditioning set Z are effective in generating x and y. The causal link is x → Z → y, in which z_i = a_i·x + ε_i and y = Σ_i b_i·z_i + ε_y. The settings of the coefficients a_i, b_i and the noise terms ε_i, ε_y are similar to those in Case I. The ground truth is that x and y are conditionally independent given Z, and not conditionally independent given any S ⊂ Z. Recall that the residuals can be easily recovered by linear regression in this simple setting. To evaluate the robustness of these methods, we do not want the returned residuals to be very accurate; therefore, we test CI with small sample sizes of 50 and 100. The significance level is fixed at α = 0.05. Note that for a good testing method, the probability of Type I error should be as close to the significance level as possible, and the probability of Type II error should be as small as possible. We check how the errors change when increasing the dimensionality of Z and the sample size n. For each parameter setting, we randomly repeat the test 100 times and average the results. The Type I and II error rates are calculated as follows: take |Z| = 3 for example; in Case I, x is independent of y given (z_1), (z_1, z_2), (z_1, z_3) and (z_1, z_2, z_3), so Type I error rate = 1 − (the number of accepted CIs)/4. On the other side, x is not independent of y given ∅, (z_2), (z_3) and (z_2, z_3), so Type II error rate = (the number of accepted CIs)/4. Similarly, we can calculate the Type I and II error rates in Case II. The results are presented in Fig. 1. We can see that: 1. As shown in Fig. 1(a) and (c), the Type I error rate of SCIT is close to the significance level α = 0.05 (between 0.04 and 0.06). This is because we use a permutation test to control the Type I error rate; 2. As shown in Fig. 1(b), the Type II error rate of each method remains stable across |Z| = 1, 2, ..., 5.
The reason is that only one variable z 1 is effective in Case I, then the probability of rejecting CIs of x y|z 1 ∪ S for any S ⊆ Z\ z 1 would be very close. We can see that SCIT outperforms the other two methods in terms of Type II error rate; 3. As shown in Fig. 1(d), the Type II error rate of each method increases/changes with different sizes of Z. This is because all the variables z 1 , ..., z 5 are effective in Case II, then the probability of rejecting CIs of x y|S for different S ⊆ Z would be various. In this case, SCIT also achieves the best performance; 4. Increasing sample size can significantly reduce the Type II error rate, while Type I error rate is generally not impacted by sample size. Efficiency Comparison We compare the efficiency of SCIT, ReCIT and KCIT in terms of elapsed time with the sample size increasing from 50 to 1000. As presented in Table 1 regression progress is performed in SCIT, ReCIT, therefore the time-consuming difference among them depends on the respective unconditional independence test methods. SCIT is evidently faster, as it only needs to calculate similarity vectors, while ReCIT needs to calculate the trace of product of two n × n matrices. Performance on Small Graphs In this section, we evaluate SCIT, ReCIT and KCIT in more complex scenarios. We generate data from a set of random DAGs. For each DAG G, we first create four nodes v 1 , ..., v 4 , and with probability 50% each possible edge is either present or absent, and orient arrow between them from v i to v j only for i < j. Then, each variable x i corresponding to each root node in G is generated by following U(−1, 1) and each variable x i corresponding to leaf node is generated by i a i · pa x i + ε where a i ∼ U(0.2, 1) and ε ∼ U(−0.2, 0.2) independent across pa x i . For significance level 0.05 and sample sizes from 25 and 200, we simulate 100 DAGs and evaluate the performance of the three methods PC S CIT , PC ReCIT and PC KCIT on discovering causal skeletons. As shown in Fig. 2, we can see that when the sample size is small (e.g. less than 50), PC S CIT performs significantly better than other two methods. As the sample size increases, the performance of PC S CIT close to that of PC ReCIT and PC KCIT . When the sample size up to 150, the Recall, Precision and F1 curves of the three methods tend to be overlapping. Therefore, PC S CIT performs significantly better in CI test in causal discovery when the sample size is small, which is the frequentlyencountered case in reality. Fig. 3 shows the elapsed time of PC S CIT , PC ReCIT and PC FRCIT with the sample size increasing from 25 to 200, it is consistent with the result present in Table 1. SCIT can be very efficient to test CI in causal discovery with small sample size (n ≤ 1000). Performance on Causal Discovery In the experiments above, we compare SCIT and ReCIT in terms of learning causal skeletons of small DAGs, the result shows the two methods have almost the same accuracy when sample size is more than 100, though SCIT works much more efficient than the others. In this section, the two methods will be evaluated on six causal graphs 1 that cover a variety of applications, including biomedicine (Cancer and Asia), expert systems (Child), insurance evaluation (Insurance), medicine (Alarm) and agricultural industry (Barley). The structural statistics of these causal networks are summarized in Table 2. To obtain the precise ground truth in every cases, the corresponding data-generating process follows the previous works Hao 2013, 2017). 
As the residuals can be easily recovered by linear regression given enough samples, to evaluate the robustness of these methods we test CI with small sample sizes of 25 and 200. In causal discovery, partial correlation tests (Baba, Shibata, and Sibuya 2004) are often used to speed up CI testing, based on the criterion that pcorr(x, y|Z) ≠ 0 implies that x and y are not conditionally independent given Z. In order to evaluate these methods independently, we do not perform any partial correlation test here. The results are shown in Table 3 and Table 4. One can see that the Precision is higher than the Recall in most cases. We know that Recall = |Discovered edges ∩ Actual edges| / |Actual edges| and Precision = |Discovered edges ∩ Actual edges| / |Discovered edges|. The Type I errors occurring in SCIT and ReCIT do not affect PC(*) much; that is because, if a Type I error occurs, the CI test will continue to test x and y given another controlling set Z. However, such a traversal search strategy is greatly affected by Type II errors. For example, assume that the Type II error rate is r_i for each controlling set Z_i; then the probability of rejecting all CI hypotheses when they are really false is ∏_i (1 − r_i), and we have lim_{k→+∞} ∏_{i=1,...,k} (1 − r_i) = 0. Therefore, the performance of constraint-based causal discovery is largely determined by the Type II error rate of the CI tests. Comparing Table 3 with Table 4, one can see that increasing the number of samples significantly reduces the Type II error rate and thus improves the Recall of the two methods. On the other side, we can see that PC_SCIT outperforms PC_ReCIT in most cases in terms of F1, although their Precision values are very close to each other. As aforementioned, the Type I errors occurring in SCIT and ReCIT do not affect PC(*) much; therefore, both obtain high Precision. Similarly, we can see that the Recall of PC_SCIT is slightly better than that of PC_ReCIT. This result is consistent with the results presented in Fig. 1(b),(d) and Equ. (22): the lower the Type II error rate, the higher the Recall. In addition, as in the results presented in Table 1 and Fig. 3, PC_SCIT works much more efficiently; it is very suitable for testing CI or discovering causalities in low-sample scenarios. Similar to the results presented in Fig. 2, the accuracies of the methods become very close given sufficient samples, and PC_SCIT loses its advantage on accuracy. Here we mainly consider the case of small sample size, which is where SCIT shows its most significant advantage.

Conclusion

In this paper, we propose a new and fast residual-similarity-based conditional independence (CI) test method, called SCIT, to support effective and efficient causality discovery under the linear structural equation model (SEM) with non-Gaussian noise variables. Concretely, we provide a simple way to test the independence between the two residuals R_{x,Z} = x − E(x|Z) and R_{y,Z} = y − E(y|Z) returned by linear regression. We show that the dependence between the residuals can be captured by the difference between the similarity S[ψ(R_{x,Z}), ψ(R_{y,Z})] and the similarity S[ψ(R_{x,Z}), ψ(r)] given a set of square-integrable functions ψ. Kernel functions are then used to calculate the inner product, i.e., the similarity. As the value of the similarity is not scale-free, we simply use a permutation test to obtain the P-value with which to accept or reject the CI hypothesis. Our theoretical analysis proves the correctness of the proposed method, and extensive experiments verify the advantage of SCIT.
Assuming that any ψ can be covered by SCIT with a family of kernel functions, counterexamples will be found only in well-designed situations. In practice, this assumption does not always hold. In the future, we will therefore explore how to implement SCIT using neural networks, which can approximate the required functions more accurately according to the Universal Approximation Theorem.
Genome Wide Association Mapping of Spot Blotch Resistance at Seedling and Adult Plant Stages in Barley

Barley spot blotch (SB), caused by Cochliobolus sativus, is one of the major constraints to barley production in warmer regions worldwide. The study was undertaken to identify and estimate the effects of loci underlying quantitative resistance to SB at the seedling and adult plant stages. A panel of 261 high-input (HI-AM) barley genotypes consisting of released cultivars, advanced breeding lines, and landraces was screened for resistance to SB. The seedling resistance screening was conducted using two virulent isolates from Morocco (ICSB3 and SB54), while the adult plant stage resistance was evaluated at two hot-spot locations, Faizabad and Varanasi, in India under artificial inoculation using a mixture of prevalent virulent isolates. The HI-AM panel was genotyped using the DArT-Seq high-throughput genotyping platform. Genome wide association mapping (GWAM) was conducted using 13,182 PAV and 6,311 SNP markers for seedling and adult plant resistance. Both GLM and MLM models were employed in TASSEL (v 5.0) using principal component analysis and a kinship matrix as covariates. Final disease rating and Area Under Disease Progress Curve (AUDPC) were used for the evaluation of adult plant stage resistance. The GWAM analysis indicated 23 QTL at the seedling stage (14 for isolate ICSB3 and 9 for isolate SB54), while 15 QTL were detected for adult plant stage resistance (6 at Faizabad and 9 at Varanasi) and 5 for AUDPC-based resistance at Varanasi. Common QTL at the seedling and adult plant stages were found across all barley chromosomes. Seedling stage QTL together explained 73.24% of the variance for seedling resistance to isolate ICSB3 and 49.26% for isolate SB54, whereas QTL for adult plant stage resistance together explained 38.32%, 44.09% and 26.42% of the variance at Faizabad, at Varanasi, and for AUDPC at Varanasi, respectively. Several QTL identified in this study were also reported in previous studies using bi-parental and association mapping populations, corroborating our results. The promising QTL detected at both stages, once validated, can be used for marker-assisted selection (MAS) in SB resistance breeding programs in barley.

INTRODUCTION

Spot blotch (SB) of barley (Hordeum vulgare L.), also commonly referred to as leaf blight, is caused by Cochliobolus sativus [anamorph: Bipolaris sorokiniana (Sacc.) Shoem.]. It is one of the major concerns in South Asia including China, Nepal, Pakistan, Bangladesh and the humid north-eastern regions of India (Kumar et al., 2007; Chand et al., 2008; Singh et al., 2009; Vaish et al., 2011; Prasad et al., 2013). In addition, SB is also considered a serious threat to barley production in the upper Midwest of the United States and the prairie provinces of Canada (Clark, 1979; Ghazvini and Tekauz, 2007). Recently, SB has been identified in the warm regions of North Africa, especially in Morocco (Rehman et al., unpublished data). Yield losses of up to 36% in susceptible cultivars under disease-conducive conditions, together with reduced malting quality, have been reported in the United States (Clark, 1979). In a disease survey of 2003-2006 in the eastern Uttar Pradesh and Bihar states of India, SB was recovered from 63% of the blighted leaves. In addition, during field trials 42.5% SB severity was recorded on the susceptible barley variety RD 2503 even after three fungicide treatments (Singh et al., 2009). Furthermore, Vaish et al.
(2011) reported 21.3% SB incidence on barley in a survey conducted in the cold arid Trans-Himalayan region of India, where barley is grown in summer season from May to September. SB has not been reported before from Morocco, but recent disease surveys have shown its presence. The Moroccan SB isolates have shown a diversity of virulence on the set of 12differential barley genotypes tested (Rehman et al., unpublished data). Therefore, understanding host-pathogen interaction at genetic level is quite important on identifying and deploying SB resistance. The aggressiveness of SB in South Asia and North Africa is a serious threat to barley cultivation in these regions including Morocco and India. Although fungicide applications have been reported effective to control SB (Kiesling, 1985;Anonymous, 2011), but their use increases the cost of barley cultivation. Host resistance is considered important for Asian and African regions to control foliar blights where barley is grown by small holder farmers in marginal lands under low-input conditions. Thus, host resistance is widely considered to be the most sustainable and economical method for managing SB in barley (Wilcoxson et al., 1990). Remarkably stable SB resistance from NDB 112 (developed from a cross CIho 7117-77//Kindred by Wilcoxson et al. (1990) has protected six-row malting cultivars for the last 50 years in the Upper Midwest United States. Despite the transfer of all resistance loci into two-row barley like Bowman (PI483237), stable resistance like NDB 112 has not been observed and a differential expression of resistance loci in entirely different genetic background has been attributed to it (Fetch and Steffenson, 1994;Bilgic et al., 2005). The association mapping (AM) has advantages over bi-parental mapping like increased resolution for mapping QTL, greater diversity of alleles and being faster and efficient (Lander and Botstein, 1986;Buntjer et al., 2005;Yu and Buckler, 2006). Several studies have identified QTL to SB resistance by using diverse wild and cultivated germplasm against SB pathotype 1, 2, and 7 (Roy et al., 2010;Zhou and Steffenson, 2013;Wang et al., 2017). Roy et al. (2010) has shown nicely the additive effect of each QTL on SRT and APR. Breeding lines carrying resistance allele of one QTL Rcs-qtl-1H-11_10764 reduced infection rate (IR) from 0 to 20% and disease severity from 20 to 29%. Barley lines carrying two QTL Rcs-qtl-1H-11_10764 and Rcs-qtl-3H-11_10565 reduced IR from 5 to 31% and disease severity from 52 to 56%. Furthermore, barley lines carrying three QTL Rcs-qtl-1H-11_10764, Rcs-qtl-3H-11_10565, Rcs-qtl-7H-11_20162 showed 47% lower IR and 83% lower disease severity when compared with lines lacking any of three QTL. Similar findings on additive effects of QTL for stripe rust of barley have been reported (Castro et al., 2003). Mapping of effective SB resistance in South Asian and North African barley germplasm is still lagging behind resulting in slow progress in employing marker-assisted selection of SB resistance to pyramid effective genes against other foliar pathogens of barley. The present study was taken up to map SB resistance in High Input Association Mapping (HI-AM) panel using genome wide association mapping (GWAM) approach at the seedling and adult plant stages. Plant Materials The HI-AM panel used in this study is composed of 261 spring barley genotypes (released cultivars from different countries; advanced breeding lines from ICARDA's barley breeding program, and landraces from GenBank). 
The set is named the HI-AM (High Input Association Mapping) panel as most of the barley genotypes were targeted toward optimum management (supplemental irrigation and fertilizer) conditions. Out of the 261 genotypes (172 two-row and 89 six-row types), 124 were from ICARDA's barley breeding program (50 two-row and 74 six-row type), 32 from Europe (28 two-row and 4 six-row type), 34 from North America (28 two-row and 6 six-row type), 67 from South America (62 two-row and 5 six-row type), and 4 from Australia (all two-row type). The full list of genotypes is available in Supplementary Table S1. Screening for Seedling Resistance With Moroccan C. sativus Isolates The seedling resistance test (SRT) for the HI-AM panel was conducted with two C. sativus isolates under controlled conditions in the growth chamber at the International Center for Agricultural Research in the Dry Areas (ICARDA), Rabat, Morocco. These C. sativus isolates were collected from farmers' fields in Morocco during the disease survey of 2015 and were preserved as monoconidial isolates at −80 °C until further use (Supplementary Table S2). The two C. sativus isolates (ICSB3 and SB54) were classified into pathotypes by using three differential barley cultivars (NDB5883, Bowman, ND B112) as described by Fetch and Steffenson (1999). The isolate ICSB3 belongs to pathotype 7 (virulent on NDB5883, Bowman, and ND B112) and SB54 belongs to pathotype 3 (virulent on ND B5883 and Bowman) (Rehman et al., unpublished data). To produce inoculum, lyophilized agar plugs of the monoconidial isolates were incubated on V8PDA (vegetable juice 200 ml, potato dextrose agar 10 g, bacteriological agar 10 g) in the dark for 4-5 days at 20 °C, followed by incubation at 20 °C with a 12 h light/12 h dark photoperiod for 7-8 days. Further, the V8PDA plates were flooded with 5-10 ml of sterile distilled water and the conidia were harvested by rubbing the agar surface with a sterile spatula, followed by filtration through a double layer of cheesecloth. The spore density was adjusted to 5,000 conidia ml⁻¹, supplemented with a surfactant (0.01% Tween 20). About 4-5 seeds of each barley genotype were sown in peat moss in a single cone of 3.8 cm diameter and 14 cm depth (Stuewe & Sons, Inc., OR, United States) supplemented with 14-14-14 NPK, and the seedlings were raised in the growth chamber with a photoperiod of 16 h light/8 h dark at 20 °C. Each tray, containing 96 test genotypes along with the resistant (ND B112) and susceptible (Annoucer, a Moroccan variety highly susceptible to SB) checks, was inoculated with 100 ml of spore suspension using hand-held sprayers (0.2 ml/seedling) until runoff, followed by incubation under 100% relative humidity for 24 h in the dark at 20 °C. After 24 h, the seedlings were transferred to the growth chamber under the same conditions as described earlier (Fetch and Steffenson, 1999). The experiment was laid out in three replications using a randomized complete block design. A disease rating scale of 0-9 (Fetch and Steffenson, 1999) was used to evaluate the level of disease resistance at 10 days post inoculation (dpi). Based on the infection responses, barley genotypes were grouped as immune (0), resistant (1-3), moderately resistant (4-5), moderately susceptible (6), susceptible (7-8), or highly susceptible (9) as described by Fetch and Steffenson (1999). Two independent replications of the HI-AM panel were inoculated with each SB pathotype, and the mean infection types of the two replications were used in further analysis.
Screening for Spot Blotch Resistance at the Adult Stage Resistance at the adult plant stage was assessed in three trials, over 2 years, at two different locations. In the 2013-2014 growing season, the set of 261 barley genotypes (HI-AM panel), including two standard checks, Rihane-03 and VMorales, was sown in the first week of December 2013 at the Agricultural Research Farms of the Banaras Hindu University (BHU), 25.2677° N, 82.9913° E, Varanasi, and at Narendra Dev University of Agriculture and Technology (NDUAT), 26.7732° N, 82.1442° E, Faizabad, both in Uttar Pradesh, India. These genotypes were sown in 1-m rows using an augmented block design, with a highly susceptible genotype, "RD2503", repeated at intervals of 20 test genotypes. RD2503 was selected as the SB susceptible check because it showed highly susceptible reactions (IR = 8-9 on the 0-9 scale) at the seedling stage and an SB severity score of 99 (double-digit score) at the adult stage in the field. Further, RD2503 was grown as long paired rows perpendicular to the test plots, serving as spreader rows on either side. The SB isolates (locally collected and maintained as monoconidial pure cultures at BHU and NDUAT) were multiplied on sterilized sorghum grains to obtain enough inoculum. Artificial inoculation was done with a spore mixture (approximately 10⁵ spores ml⁻¹) of virulent SB isolates grown on sorghum grains at the booting stage (GS 43-49), twice, during evening hours, using a knapsack sprayer (Chaurasia et al., 1999; Joshi and Chand, 2002; Kumar et al., 2007). Experimental plots were flood irrigated after inoculation to create a conducive environment for infection and disease development. The SB severity was rated on each genotype using the double-digit (00 to 99) method according to Nagarajan and Kumar (1998). The first and second digits indicate the percent diseased area on the flag leaf (F) and on the leaf below the flag leaf (F-1), respectively. Final SB severity was scored at GS 83-85 at both locations (Zadoks et al., 1974). During the 2014-2015 crop season, the panel was screened again at BHU, Varanasi; disease severity was recorded three times at 5-day intervals during March 2015 at GS 77-87, and the area under the disease progress curve (AUDPC) at BHU was calculated (Jeger and Viljanen-Rollinson, 2001) as AUDPC = Σ_{i=1}^{n−1} [(SB_i + SB_{i+1})/2] × (t_{i+1} − t_i), where SB_i is the spot blotch severity on the i-th day, t_i is the time in days at the i-th observation, and n is the total number of observations. The genotypes were categorized into different groups based on the length of spots and haloing (reaction type), the extent of the disease severity level (double digit, i.e., on flag and flag-1 leaves) based on the maximum score of a genotype, as well as based on AUDPC values (Supplementary Table S2). Genotyping, Population Structure, and Linkage Disequilibrium The 261 genotypes of the HI-AM panel were genotyped with DArT-Seq technology (Diversity Array Technology Pty Ltd., DArT P/L). The final marker sets (13,182 PAVs and 6,311 SNPs) were obtained by removing heterozygous and monomorphic markers, markers with minor allele frequencies (MAF) < 5%, and markers with missing data > 10%. The marker distribution across the seven barley chromosomes is shown in Supplementary Figure S1. Population structure was determined using STRUCTURE version 2.3.4 (Pritchard et al., 2000); the number of subgroups was confirmed using the Bayesian Information Criterion (BIC), generated with the adegenet package for R statistical software (The R Development Core Team).
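The AUDPC formula given above is simple enough to verify numerically. The following is a minimal sketch; the function name and the example severity values are ours, not from the study.

```python
import numpy as np

def audpc(severities, times):
    """Area under the disease progress curve by the trapezoidal rule:
    AUDPC = sum over i of (SB_i + SB_{i+1}) / 2 * (t_{i+1} - t_i)."""
    sb = np.asarray(severities, dtype=float)  # SB_i: severity at each rating
    t = np.asarray(times, dtype=float)        # t_i: days at each observation
    return float(np.sum((sb[:-1] + sb[1:]) / 2.0 * np.diff(t)))

# Hypothetical example: three ratings taken at 5-day intervals,
# mirroring the BHU 2015 screening schedule.
print(audpc([30, 55, 70], [0, 5, 10]))  # -> 525.0
```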
Finally, based on principal component analysis (PCA), genotypes were assigned to subgroups or considered admixed on the basis of an 80% membership criterion. Linkage disequilibrium (LD) was calculated with TASSEL 5.2.32 (Bradbury et al., 2007). The extent of LD was estimated by non-linear regression analysis on the basis of intra-chromosomal r² values (Hill and Weir, 1988; Remington et al., 2001) using the nlstools package for R statistical software (The R Development Core Team). More information regarding genotyping, population structure, and LD analysis was reported by Visioni et al. (2018). Genome Wide Association Mapping Genome wide association mapping was performed combining genotypic data and disease severity scores at the seedling and adult plant stages. Genome scans were performed using both the General Linear Model (GLM) and the Mixed Linear Model (MLM); the general equations for GLM and MLM were reported by Visioni et al. (2018). Genomic scans using the GLM were performed incorporating population structure (GLM + PCA model) or the Q-matrix (GLM + Q model) as covariates in order to avoid type I errors. The MLM considers familial relatedness (the K model) and was used to take into account both population structure and familial relatedness (Q + K and PCA + K models). The kinship matrix (K) was estimated using TASSEL v 5.2.32 from both whole sets of markers. For both GLM and MLM analyses, a threshold of −log10(p) ≥ 3 was set for identifying significant marker-trait associations. Significant markers mapping within the interval of LD decay were considered as being linked to the same QTL, and the marker with the highest −log10(p) value was chosen as representing the QTL. Considering the stringency of the models used to account for population structure, in which most of the false positives were inherently controlled, the critical p-value for marker-trait association was first determined according to a liberal approach proposed by Chan et al. (2010) rather than using the false discovery rate. Under this approach, markers were declared significant at p = 0.0001 [−log10(p) = 4] with the selected models (Visioni et al., 2018). A further step to increase confidence in the identified QTL was taken by applying the LD-adjusted Bonferroni correction proposed by Duggal et al. (2008). The LD decay value of 4 cM (Visioni et al., 2018), corresponding to 4.3 Mbp, indicated that the 987.65 cM map of our association mapping panel was interrogated via 246 independent "locus hypotheses", and hence the Bonferroni-corrected threshold for this panel was set to −log10(p) = 3.68 (p < 0.05). QTL Alignment and Candidate Genes QTL detected for SB resistance were aligned with those previously reported in different barley germplasm by checking the position of markers at the QTL peaks in the barley pseudomolecules Morex V.2.0 database. Marker sequences were mapped in the database using the IPK Barley Blast Server. The position of the marker representative of a QTL was compared with those of markers at QTL peaks reported in previous studies and considered adjacent on the basis of the LD value (the intervals selected correspond to 4 cM on each side of the QTL peak). Molecular marker sequences were aligned to the barley physical genome. Putative candidate genes were then identified by searching within the genes aligned and located within the LD interval on both sides of the markers at QTL peaks using the PGSB database (Plant Genome System Biology).
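The LD-adjusted Bonferroni threshold quoted above can be reproduced from the reported numbers; the sketch below merely restates that arithmetic (variable names are ours).

```python
import math

# LD-adjusted Bonferroni (Duggal et al., 2008): treat each LD-decay interval
# as one independent "locus hypothesis" and split the 5% error rate among them.
map_length_cm = 987.65   # total map length interrogated by the panel
ld_decay_cm = 4.0        # LD decay reported by Visioni et al. (2018)

n_loci = int(map_length_cm / ld_decay_cm)   # -> 246 locus hypotheses
threshold = -math.log10(0.05 / n_loci)      # -> 3.69 (reported as 3.68)

print(n_loci, round(threshold, 2))
```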
The PGSB database provides access to the barley gene annotation described by the IBSC (2012). The candidate gene (CG) search was focused mainly on functional domains or genes functionally related to disease resistance mechanisms. RESULTS In the greenhouse, SB infection was uniform and reliable infection responses (IR) were recorded. The frequency distribution of IR of the HI-AM panel (261 genotypes) at the seedling stage is presented in Figure 1. Details about the IR of individual genotypes from the HI-AM panel are available in Supplementary Figure S1. The mean IR for ND B112 (resistant check) and Annoucer (susceptible check) varied from 2.5 to 4.0 and from 7 to 8.5, respectively. Of the 261 barley genotypes tested, none were immune to isolate SB54 (pathotype 3) or ICSB3 (pathotype 7). The distribution of IR of barley genotypes to isolates ICSB3 and SB54 was negatively skewed toward the MR, MS, and S categories. Adult Plant Resistance to Spot Blotch Pathogen Population in the Field The frequency distributions of SB severity of the HI-AM panel at BHU-14 (Varanasi) and NDUAT-14 (Faizabad) are presented in Figure 2A, and the AUDPC of SB at BHU in Figure 2B. The final SB disease severity and AUDPC of individual barley genotypes are presented in Figure 2 and in Supplementary Table S4. Genome Wide Association Mapping Performing GWAM for SB at the SRT stage, the GLM procedure using PCA to account for population structure was the best fitting model when analyzing data for isolate SB54 using both the PAV and SNP marker sets. On the other hand, when analyzing data for isolate ICSB3, GLM + PCA was again the best fitting model using the SNP marker set, while the MLM procedure using the PCA + K model was the best fitting model using the PAV marker set. The genome scans for isolate SB54 showed 9 QTL located on chromosomes 1H, 3H, 4H, 6H, and 7H (Table 1). Marker R² values for isolate SB54 ranged from 4.53% to 6.82%, and the total phenotypic variance explained by the 9 QTL was 49.26%. The GWAM analyses at SRT for ICSB3 identified 14 QTL located on chromosomes 1H, 3H, 4H, 6H, and 7H (Table 1), with R² ranging from 4.32% to 7.79%, together explaining 72% of the phenotypic variance. Performing GWAM for adult plant stage (APS) resistance to SB, using both the PAV and SNP marker sets, the best fitting model for NDUAT-14 (Faizabad) and BHU-14 (Varanasi) was the MLM procedure using PCA + K to account for population structure and relatedness. When the data from BHU-15-AUDPC (Varanasi) were used for GWAM, the best fitting models were MLM Q + K for the PAV and MLM PCA + K for the SNP marker sets, respectively. GWAM for APS showed a total of 15 QTL using disease severity data from the two locations, NDUAT-14 and BHU-14, and 5 QTL at BHU-15-AUDPC (Varanasi) using AUDPC values. At BHU-14 (Varanasi), 9 QTL were located on chromosomes 2H, 3H, 4H, 5H, and 7H, with marker R² between 4.44% and 5.84%, together explaining 44.09% of the total phenotypic variance (Table 2). At NDUAT-14, six QTL were found on chromosomes 1H, 2H, 4H, and 6H, with marker R² ranging from 4.64% to 9.85%, together explaining 38.32% of the total phenotypic variance. Furthermore, at BHU-15-AUDPC, five QTL were detected on chromosomes 4H, 5H, and 7H, with marker R² ranging from 4.53% to 5.69%, together explaining 26.42% of the total phenotypic variance (Table 2). QQ plots are shown in the supplementary materials (Supplementary Figures S2-S4). Overlapping QTL at SRT were found between the two isolates. These QTL were located on chromosomes 3H (2 cM and 133 cM), 6H (17 cM), and 7H (116 cM).
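To illustrate what a GLM + PCA genome scan of this kind does, the sketch below fits a per-marker least-squares model with principal components as covariates and reports −log10(p) for the marker term. This is a simplification for exposition only, not TASSEL's implementation; the MLM variants would additionally require the kinship matrix in a mixed-model solver, and all names here are ours.

```python
import numpy as np
import statsmodels.api as sm

def glm_pca_scan(genotypes, phenotype, pcs):
    """Per-marker GLM scan with principal components as covariates.

    genotypes : (n_lines, n_markers) 0/1 marker matrix (PAV or SNP calls)
    phenotype : (n_lines,) disease severity or infection response
    pcs       : (n_lines, k) principal components accounting for structure
    Returns -log10(p) of the marker term for each marker.
    """
    neg_log_p = np.empty(genotypes.shape[1])
    base = sm.add_constant(pcs)  # intercept + structure covariates
    for j in range(genotypes.shape[1]):
        X = np.column_stack([base, genotypes[:, j]])
        fit = sm.OLS(phenotype, X).fit()
        neg_log_p[j] = -np.log10(fit.pvalues[-1])  # p-value of the marker term
    return neg_log_p
```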
Furthermore, QTL SRT_ICSB3_11 for SRT, located on chromosome 7H (10 cM), overlaps with a QTL for APS located on the same chromosome at 12.75 cM (APS_Var_9). Five QTL detected for APS resistance were also found to overlap with others already reported for stripe rust by Dracatos et al. (2016) and by Visioni et al. (2018): APS-Fai-3 detected at NDUAT-14; APS-Var14-1, APS-Var14-2, and APS-Var14-8 detected at BHU-14; and APS-AUDPC-2 at BHU-AUDPC-15 (Table 3). An overview of the QTL mapped at both the SRT and APR stages is given in Figure 3. Candidate genes identified for QTL at both SRT and APR are reported in Table 3. Most of the QTL detected were located in regions enriched with functional domains or genes involved in host plant defense, based upon their annotation. In total, we have identified 26 CG (15 at SRT and 11 at APS) for SB resistance; most of the CG show homology with resistance genes belonging to the nucleotide binding site-leucine rich repeat (NBS-LRR) class, disease resistance proteins, MYB transcription factors, and genes involved in β-glucan biosynthesis. Host Resistance to SB In this study, we used two Moroccan SB isolates, pathotype 3 (SB54) and pathotype 7 (ICSB3), to map loci conferring seedling resistance in a diverse barley germplasm from ICARDA, adapted specifically to high-input conditions. SB had not been reported in Morocco until our group found this disease during a disease survey in 2015 (unpublished data). Net Form of Net Blotch (NFNB) and Spot Form of Net Blotch (SFNB) have been prevalent, with disease incidence up to 70% and disease severity from 40 to 90%, respectively (Yousfi and Ezzahiri, 2002; Jebbouj and Brahim, 2010; Gyawali et al., 2018; Rehman et al., unpublished data). The pathotype information of the Moroccan SB isolates was unknown until our studies revealed that the highly virulent pathotype 7 was present along with pathotypes 0, 1, and 2. Previous mapping studies of SB resistance in barley have used pathotype 1, 2, or 7 (Bilgic et al., 2006; Roy et al., 2010; Wang et al., 2017). To our knowledge, this is the first study in which pathotype 3 has been used for screening the barley HI-AM panel to identify SRT QTL. The identification of a pathotype 7 isolate from Morocco is quite alarming for stable barley production, because six-row barley landraces are widely grown under low-input conditions by many smallholder farmers of Morocco. Our results suggest that most of the barley genotypes grown in Morocco are very susceptible to SB54 and ICSB3 (Rehman et al., unpublished data). In SRT, 18 genotypes (7%) showed a resistant reaction to isolate SB54 and 7 genotypes (2.7%) were resistant to isolate ICSB3 (Figure 1 and Supplementary Table S3). However, a previous SRT study of a barley association mapping panel (AM-2014) at ICARDA revealed only 1 out of 336 barley genotypes to be resistant to a mixture of 19 C. sativus isolates from Morocco. This can be explained by the presence of a diverse repertoire of avirulence genes among all SB isolates acting on a diverse barley germplasm, which can mask the detection of gene-for-gene interactions. About 78% (14 out of 18) of the resistant barley genotypes in the case of SB54 and 86% (6 out of 7) in the case of ICSB3 were of the two-row type. Interestingly, seven barley genotypes (among them 88, 89, 95, 199, 216, and 218) were resistant to both pathotypes, with six of the seven lines being of the two-row type. This overrepresentation of two-row type resistance to both SB pathotypes might be the first report of its kind.
This can also be explained by the absence of population subgrouping based on ear type and/or by the fact that ICARDA's breeding program routinely exercises hybridization between two-row and six-row types (Visioni et al., 2018). Spot blotch is also a major constraint for barley production in South Asian countries such as China, Nepal, India, Bangladesh, and Pakistan, due to the hot and humid climate prevailing from February to March (Dubin and van Ginkel, 1991; Kumar et al., 2007; Chand et al., 2008; Singh et al., 2009; Vaish et al., 2011; Prasad et al., 2013). More specifically, in the North Eastern Indian states (Uttar Pradesh, Bihar, and Jharkhand), the winter is very short, and the relatively warm weather provides perfect conditions for SB. (Table notes: *Common significant QTL between the two isolates. †Common QTL between SRT and APR. QTL highlighted in bold passed the LD-adjusted Bonferroni correction. Known co-segregating loci indicated in italics refer to previous QTL mapped in the same position for stripe rust; Visioni et al., 2018.) Singh et al. (2009) reported a yield loss of 79.6% in the susceptible cultivar RD2503 in India. BHU (Varanasi) has been used as an SB hot spot for screening wheat and barley germplasm (Chand et al., 2008; Prasad et al., 2013; Gyawali et al., 2018). In our study, we also found that the disease severity was much higher at BHU-Varanasi (74 ± 15 in 2014 and 64 ± 23 in 2015) than at NDUAT-Faizabad (55 ± 11 in 2014), which corroborates the findings of Gyawali et al. (2018). This can be attributed to high inoculum pressure, a disease-conducive environment, and the existence of more virulent SB races at BHU-Varanasi than at NDUAT-Faizabad. Unfortunately, SB pathotypes in India are poorly characterized, and SRT studies with pure isolates are lacking. We found that rating genotypes by the double-digit scale based on the final observation (highest reaction) seems to indicate fewer SB-resistant barley genotypes in field screening as compared to AUDPC, where relative disease progress is recorded at three time points. For example, at NDUAT-14 and BHU-14, only 1 and 10 (4%) genotypes, respectively, were found resistant based on a single observation of the highest disease score, as compared to BHU-15-AUDPC, where 49 (19%) genotypes were found resistant to SB (Figure 2 and Supplementary Table S4). Gyawali et al. (2018) reported 6.5% (22 genotypes) to be resistant at BHU in the AM-2014 panel, while in the HI-AM panel we observed 19% (49 genotypes) resistant at BHU with AUDPC observations. Thus, the HI-AM panel offers much more diversity for SB resistance breeding programs in India. Candidate Genes In the case of SRT, 15 out of 23 QTL, and in the case of APR, 11 out of 20 QTL, showed association with functional candidate genes. The genomic regions where most of the QTL have been mapped seem to be enriched with NBS-LRR disease resistance-like proteins (10/15 SRT CG; 4/11 APS CG), pathogenesis-related proteins (2/15 SRT CG; 4/11 APS CG), and MYB transcription factors (3/15 SRT CG; 1/11 APS CG). NB-LRR disease resistance proteins have been implicated in effector-triggered immunity to various pathogens, and a similar role in SB resistance is envisaged here. Nucleotide-binding (NB)-LRR (leucine rich repeat) proteins (NLRs) have also been associated with quantitative resistance to necrotrophs. A combination of transcriptomics and association mapping of pathogens or hosts will result in the identification of novel necrotrophic effectors (NEs) and corresponding QTL, respectively.
A probable strategy would be to eliminate host plant susceptibility genes for both biotrophic and necrotrophic pathogens. Furthermore, minor R genes (APR) could be pyramided for durable control of diverse pathogens (Virdi et al., 2016; See et al., 2018). The putative candidate gene (CG) associated with the DArT2274 marker (APS_Vars_14-1) on 2H at 40.08 cM encodes the CsAtPR5 pathogenesis-related (PR) protein. PR proteins are conserved in many plant species and are induced upon biotic stresses conditioned by various pathogens (Prasath et al., 2014). The wheat ortholog TaAetPR5 is 93% identical to CsAtPR5 and was upregulated upon infection by Blumeria graminis f.sp. tritici only in the resistant line (Niu et al., 2007). Likewise, enhanced expression of PR genes was observed in resistant barley upon inoculation with P. teres teres (Al-daoude et al., 2017). CONCLUSION One two-row barley genotype, HI-AM-32 (Issaria), was recorded as resistant, and two six-row barley genotypes, HI-AM-241 (ZIGZIG/BLLU//PETUNIA 1) and HI-AM-250 (M104/TOCTE), were found to be moderately resistant across the two locations during two cropping seasons. The present study has further unlocked the genetic potential of the HI-AM panel with the identification of 15 novel QTL for SRT and 14 novel QTL for APR. Furthermore, 11 previously mapped QTL were also identified (5 for SRT and 6 for APR). Markers at the QTL peaks will enrich the existing allelic diversity for SB resistance and, once validated, could be used for MAS to pyramid multiple resistance alleles to curb the losses induced by this economically important pathogen of barley. The three lines observed as resistant/moderately resistant across the three environments can be readily utilized in barley breeding programs for the incorporation of effective SB resistance targeted at South Asia and North Africa.
2020-05-25T13:09:53.675Z
2020-05-25T00:00:00.000
{ "year": 2020, "sha1": "ca1d8bb6de43861b27dfd59aac190cf893116fd4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2020.00642/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca1d8bb6de43861b27dfd59aac190cf893116fd4", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
5930965
pes2o/s2orc
v3-fos-license
ST Segment Elevation Is Not Always Myocardial Infarction: A Case of Focal Myopericarditis Protocols exist on how to manage STEMI patients, with well-established timelines. There are times when patients present with chest pain, ST segment elevation, and biomarker elevation that are not due to coronary artery disease. These conditions usually present with normal coronary angiography. We present a case that was clinically indistinguishable from STEMI and that was diagnosed as focal myopericarditis on cardiac MRI. Introduction Typical chest pain with ST segment elevation on electrocardiogram (EKG) is a medical emergency, with well-drilled protocols and timeline targets [1]. There are times when the ST elevation is not due to coronary obstruction but a mimic. Inflammation associated with myopericarditis can produce chest pain and EKG changes that may be clinically indistinguishable from a coronary event. Myalgia, fatigue, pleuritic chest pain, and fever are common presentations associated with viral illness, but the absence of this history does not preclude postviral myopericarditis. Typical cardiac chest pain and elevated cardiac enzymes can be present in acute coronary syndrome (ACS) as well as in myopericarditis, especially its focal form. Few case reports have been published on this rare and important mimic [2,3]. We present the case of a young male who presented with indistinguishable features and was treated as STEMI, later turning out to have focal myocarditis. Case Presentation A previously healthy 33-year-old white male presented with sudden-onset substernal chest pain that had started while exercising on a treadmill one hour before. He described a left-sided sharp, nonradiating pain that persisted until he presented to the ER. He had associated nausea, diaphoresis, and shortness of breath. Nothing made it better. He denied heartburn, vomiting, cough, fever, and recent travel. He had no personal or family history of heart disease. On physical examination, he was a young athletic male with normal vital signs, and he appeared in distress from the pain. His cardiovascular examination was normal, with no murmurs or pericardial rubs. He had an elevated troponin I at 21.9 ng/ml and EKG ST segment elevation in the inferior leads (Figure 1). All other baseline laboratory tests were within normal limits. A STEMI alert was placed, and the patient had an emergent left cardiac catheterization that reported normal coronary anatomy with no obstructing coronary stenosis (Figures 2 and 3). A left ventriculogram was also normal. He was started on a heparin drip and transferred to the coronary care unit. A plain chest X-ray did not reveal any pulmonary lesions or consolidation, and a chest CT angiogram ruled out pulmonary embolism. A transthoracic echocardiogram reported a normal left ventricular ejection fraction (EF 50-55%) and a slight enlargement of the right ventricle without any wall motion abnormalities. Two days after presentation, the patient still reported continued chest pain and had an episode of nonsustained ventricular tachycardia (NSVT). At this point, a cardiac MRI was done (Figures 4-7) that demonstrated epicardial and midmyocardial enhancement in the inferior wall, sparing of the subendocardial region, and overlying focal pericardial enhancement, consistent with the EKG changes. He was started on indomethacin; his symptoms improved over the following 5 days, and he was discharged.
Discussion The presentation of myopericarditis is widely variable, ranging from asymptomatic to focal or diffuse myopericarditis, congestive failure, and even sudden cardiac death. Diffuse myopericarditis has a variable presentation and EKG changes, reflecting the degree of myonecrosis. Certain changes on EKG are associated with myocarditis rather than pure pericarditis, such as ST segment elevations and the occurrence of arrhythmias, as was evident in our patient [4]. Focal myopericarditis, on the other hand, may have EKG findings indistinguishable from STEMI, as is seen in our case. Chest pain with an unusual cardiac risk profile and normal coronary angiography should raise suspicion of focal myopericarditis. The gold standard for diagnosis of myocarditis is endomyocardial biopsy, which has a variable sensitivity of up to 64% [5]. The noninvasive cardiac MRI is increasingly being used to make the diagnosis of myocarditis and was associated with a sensitivity of up to 90% in one study [6,7]. The pattern of myocarditis on MRI includes a focal or global calculated myocardial early enhancement ratio greater than 4.5 compared to skeletal muscle, focal or global intense T2 signal indicative of edema, or late gadolinium enhancement in a non-ischemic regional distribution. These often involve the epicardium toward the myocardium, typically sparing the subendocardium, while myocardial infarction displays a pattern of enhancement involving the subendocardium [8][9][10]. Using the Lake Louise Consensus criteria [10], our patient's findings were consistent with myocarditis by displaying early gadolinium enhancement ratios, regional T2 signal edema, and myocardial late gadolinium enhancement. It is arguable that the region of cardiac MRI findings could have been consistent with spontaneously reperfused coronary artery disease, and this theory could not be proven in our case. Our patient had no traditional risk factors for coronary artery disease; though the sudden onset of symptoms pointed to ACS, the coronary angiograms were clear of atherosclerosis. Similar presentations that were ruled out in this patient include Takotsubo cardiomyopathy, as the ventriculogram was normal during cardiac catheterization. The coronary anatomy was normal, and thus both coronary artery disease and spontaneous coronary dissection were ruled out. Our patient had no eosinophilia on peripheral blood testing; hence, hypereosinophilic myocarditis and hypersensitivity myocarditis were also ruled out. Conclusion Focal myopericarditis may present with typical chest pain, ST segment elevation on EKG, and biomarker elevation that may be indistinguishable from STEMI. An unusual cardiac risk profile, in the absence of traditional cardiac risk factors, should raise suspicion of an alternative diagnosis. Cardiac MRI is useful in distinguishing myopericarditis from MI.
2018-04-03T02:42:22.483Z
2017-11-29T00:00:00.000
{ "year": 2017, "sha1": "632d400d734f1c0b47e17b6be51c82c9bfd3419b", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/cric/2017/3031792.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b48f4bb51d2609f949ca512a12509c6da97a5750", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
31089859
pes2o/s2orc
v3-fos-license
Cause of Androgenic Alopecia: Crux of the Matter Summary: What is wrong with the current understanding of the etiopathogenesis of androgenic alopecia (AGA)? What is the most important question to ask to understand AGA? Why is that question skimped? Which findings are interpreted incorrectly? Is there anything that has not been discerned about AGA until today? What are the deceptive factors for investigators? Is it possible to snap the whole view uninterruptedly in one clear picture? Answers and an overview with fresh perspectives are provided. Based on these findings, DHT is held responsible for the miniaturization of hair follicles in AGA. 6 However, as DHT is a more potent form of testosterone and androgens are expected to convert hair follicles from vellus to terminal, not the other way around, this inference gives rise to a paradox. In a review article, Randall says, "How one type of circulating hormone has such contrary effects on a single tissue depending on its body site is not clear; this biological paradox alone makes androgen action in hair follicles very intriguing." 7 It is indeed so. Androgen-dependent/independent and androgen-sensitive/insensitive hair follicle concepts have been used to explain the opposite effects of androgens on hair follicles at different body sites. [6][7][8] According to these concepts, nonbalding scalp hair is designated as androgen independent and androgen insensitive, whereas balding scalp hair is designated as androgen independent but androgen sensitive. 8 Here, there is a point that deserves attention. The androgen-dependent/independent hair follicle concept treats the whole scalp as a single body site, which stands to reason. Scalp hair, either balding or nonbalding, is considered androgen independent because it grows in the absence of androgens as opposed to, for example, axillary or pubic hair. However, the androgen-sensitive/insensitive hair follicle concept divides the scalp into 2 sites as androgen-sensitive and androgen-insensitive hair-bearing areas. This arbitrary division of the scalp is discomforting in the first place, and the inappropriateness of the concept becomes evident as it is questioned further. As stated above, DHT is accepted to be the androgen that binds to androgen receptors and effects follicular miniaturization. Along with other findings, this conclusion is based on the finding of increased levels of DHT in balding scalp compared with nonbalding scalp. Unless DHT levels in balding scalp are equal to DHT levels in nonbalding scalp, follicular miniaturization in AGA cannot be considered a matter of sensitivity to androgens. The amount of androgen is not the same in the so-called androgen-insensitive area. In a relatively recent review article, the role of androgens in the pathophysiology of AGA is documented by Kaufman. 6 Kaufman says, "In men with MPHL (male pattern hair loss), follicular miniaturization is caused by an inherited sensitivity of scalp hair follicles to normal levels of circulating androgens." "Thus, it appears that in balding men DHT binds to androgen receptors in susceptible hair follicles and, by an unknown mechanism, activates genes responsible for follicular miniaturization." Kaufman tries to put it right by saying sensitivity to "normal levels of circulating androgens," to no avail. It helps correct the terminology from one point of view only and obviously does not offer an explanation for the opposite effects of androgens on balding scalp hair follicles vs hair follicles at other body sites.
If there is sensitivity to normal levels of circulating androgens and the resulting product of this sensitivity is DHT, an overall pronounced effect is expected, not an opposite effect. DHT is a stronger form of testosterone. The androgen sensitivity concept is not only inept for explaining the role of androgens in AGA, but it is also confusing and misleading its adopters with some dubious "genes responsible for follicular miniaturization" and "an unknown mechanism" that activates them. Worst of all, it skimps the most critical question and prevents it from standing forward, which is: • Why does DHT (or 5-alpha reductase enzyme activity) increase in balding scalp? This question requires a solid answer and has priority over the other crucial questions, which are as follows: • How does DHT cause hair loss while exactly the opposite effect is expected? • Why does balding (or the increase in DHT levels) occur only at the top of the head? If these questions are considered fair questions, a valid theory on the etiology of AGA has to be able to answer all of them. I introduced a new theory in 2008. 9 It does accomplish this hard task adroitly; moreover, it is in agreement with all findings in connection with AGA. The mechanism of AGA is thought to be highly complex. 10 However, this theory provides a new viewpoint from which it seems to be quite simple. According to the theory, pressure on the hair follicles created by the weight of the scalp is the cause of AGA. The total weight of the skin, subcutaneous connective tissue, and galea is operative. With the sandwiched fat tissue and the fibrous connections between the skin and galea, all of these components of the scalp form a combined structure that sits on the cranial bones, much like a separate structure movable on the cranial bones owing to the intervening loose areolar tissue. Hair follicles are compressed by the skin against the calvarial bones. This theory is uniquely capable of explaining all related phenomena and paradoxes. In summary, the theory points out that the pressure on the hair follicles is buffered by the surrounding subcutaneous fat tissue and the young dermis, which is capable of keeping itself well hydrated. As one ages, the thickness of the subcutaneous fat tissue and the volume of the dermis decrease; 11,12 that is, the buffer diminishes and consequently the pressure on the hair follicles increases. Another factor that is well known to cause thinning of the subcutaneous fat tissue much more rapidly than aging is testosterone. [13][14][15][16] With the onset of puberty, subcutaneous fat tissue starts to decrease at an early age in the male due to the increase in testosterone levels. 17,18 Estrogen protects the cushioning tissues until after menopause in the female. [19][20][21][22][23][24] And the testosterone-effected reduction in subcutaneous fat tissue normally does not happen in the female at all. On the other hand, it has been shown that the downward growth of the early anagen follicle occurs by growth pressure. 25,26 It has to work against the compressive force described above. As the cushion reduces, the hair follicle needs to strive against a higher pressure to reach its terminal follicle size. More androgen is demanded to promote the growth. This is a local demand, and there is a mechanism for increasing the effect of androgens locally without raising systemic androgen levels.
5-alpha reductase enzyme activity increases at the locale and converts more testosterone to DHT, which has a severalfold greater affinity for androgen receptors than testosterone. And so DHT increases locally. The sequence of reactions does not end here. Increased DHT causes further erosion of the subcutaneous fat tissue around the hair follicle. [27][28][29] A vicious circle is created (Fig. 1). There is no other theory that reasonably and satisfactorily explains hair loss in AGA without ascribing a function to DHT that is opposite to its known function. DHT increases to help the hair follicle forge ahead deeper to reach its normal terminal follicle size in the face of increased pressure due to the decrease in cushioning tissues. As long as the pressure on the follicle is adequately buffered, a base androgen level is enough, and required, for healthy hair growth. 30 As the cushion decreases, the balance is lost at some point and the vicious circle is initiated. Increased DHT promotes hair growth probably mainly by stimulating mitosis in the early anagen follicle. However, the increased growth pressure due to advanced mitosis cannot overcome the compressing pressure on the hair follicle, but it speeds up and shortens the anagen phase. The hair follicle cannot grow to its full size and becomes smaller and smaller with each cycle, along with the increasing pressure on the hair follicle. Hair follicle miniaturization, that is, terminal-to-vellus conversion, takes place; the anagen-to-telogen ratio falls; and hair loss increases. A reaction dubbed microinflammation around the bulge region of individual hair follicles in balding scalp has been presumed to be a causative factor in AGA, 31,32 but most likely it is evidence for the new theory. A reaction that is much milder than the inflammation seen in a typical inflammatory scarring alopecia and that takes place at the site of mitosis suggests scavenging of products resulting from inefficient mitosis rather than a factor in the etiology of AGA. Finally, if the pressure created by the weight of the scalp causes the hair loss in AGA, as the theory claims, it is expected that the hair at the top of the head is lost. This is exactly what happens in AGA. Although the conformity is manifest and there is an appreciable relation between the shape of the cranium and the hair loss area in AGA, a few points have to be observed so as not to get confused. When observing and evaluating the hair loss area from this theory's point of view for the first time, it is better to examine only last-stage AGA cases. AGA is a progressive condition, and there are several factors that can affect where hair loss starts and how it progresses in different individuals. Observing more hair loss at presumably lower-pressure areas than at higher-pressure areas in intermediate-stage AGA cases can be deceptive and is the most common source of confusion. An example of the relation between the shape of the cranium and the hair loss area is shown in Figure 2. Outlines of the side views and back views of 2 heads are seen. The only difference between the 2 heads is the shape of the back of the calvarium. One of them is rather rounded in shape (Fig. 2A), whereas the other is more like 2 oblique surfaces that meet at an obtuse angle (Fig. 2C). The hair loss area extends down to where the back of the upright head contacts a vertical line (red line in the drawing). The pressure is relieved below this contact site as the scalp turns away from the vertical direction.
Most of the time, the point of pressure relief is located at a lower level in the latter type (Fig. 2D) than in the former (Fig. 2B), so that, looking from behind the person, a bigger bald area is seen in the latter. This is a strongly revealing finding for the new theory but may be misleading if it is not interpreted correctly. Also, the structure of the scalp has to be given due consideration during the evaluation of the hair loss area in relation to the shape of the cranium. For example, even if the back of the head is precisely straight and vertical when the head is upright, the weight of the scalp still creates pressure on the hair follicles at the back of the head, although the direction of the gravitational force is vertical, that is, parallel to the back of the head (the same applies to the hair follicles within the contact site with the vertical line in the previous example). The galea aponeurotica is a tough nonelastic structure, and there are dense, nonelastic fibrous attachments between the galea and the skin of the scalp. Downward pull in the vertical direction on the skin of the back of the head is opposed by these nonelastic fibrous attachments. The resulting net force is toward the calvarium, and it compresses the hair follicles (Fig. 3). Therefore, in such cases, the hair loss area is expected to extend down usually to the level of the border of the galea with the occipitalis muscle at the back of the head. One more important point that should be regarded is that the force of downward pull caused by gravity on the scalp skin is not distributed equally around the circumference of the head (Fig. 4). As the ears are firmly fixed to the temporal bones, they interrupt the soft-tissue continuity, shore up the soft tissues above and around them, and assume the pull of the soft tissues below. By contrast, scalp skin is continuous with the skin of the face between the ears and eyes on both sides of the face, so that the weight of the facial soft tissues adds to the pressure in the frontal part of the scalp. The circumstances are similar at the back of the head in terms of effective weights. The weight of the soft tissues below the ear level at the back of the head similarly adds to the pressure in the vertex area as an extra weight compared with the area above the ears. In most AGA cases, hair loss starts at the frontal and/or vertex areas. The new theory's unparalleled ability to explain even the details of the hair loss process and the formation of the pattern in AGA is apparent. In his review, Trüeb 10 states that genetic involvement in AGA is pronounced, but no specific gene has been identified yet and genetic predisposition to AGA remains poorly understood. He continues, "We probably deal with a polygenic inheritance, dependent on a combination of mutations, e.g. in or around the AR (androgen receptor) gene affecting the expression of the AR, and other genes controlling androgen levels." However, systemic androgen levels are normal in AGA; DHT increases locally, and the enzyme that converts testosterone to DHT is 5-alpha reductase. In the same review, Trüeb acknowledges that the genes encoding the two 5-alpha reductase isoenzymes have been shown not to be associated with AGA by Ellis et al. 33 That is, although there are many findings that suggest genetic involvement in AGA, the DHT increase in AGA is not an occurrence directly determined by genes. It comes to the same question again: Why does DHT increase in balding scalp? This is the crux of the matter.
Since its introduction, the new theory has been regarded with notable skepticism and resistance. Simplifying a very complicated problem is probably the only disadvantage of the theory. AGA has been one of the biggest and most challenging problems of humankind. It has affected so many lives throughout human history and has been a devastating condition for so many of the afflicted. It is difficult to settle for any mechanism less than highly complex. However, all natural phenomena that seem to be complex look simpler if viewed from the right standpoint.
2018-04-03T01:49:05.434Z
2013-10-01T00:00:00.000
{ "year": 2013, "sha1": "5587129316858866ff3174c725eaf0ca7afca54f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/gox.0000000000000005", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5587129316858866ff3174c725eaf0ca7afca54f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222123239
pes2o/s2orc
v3-fos-license
EXTRACTION OF ROAD EDGES FROM MLS POINT CLOUDS USING BEND ANGLE OF SCANLINES Efficient road edge extraction from point clouds acquired by Mobile Laser Scanning (MLS) is an important task because the road edge is one of the main elements of high definition maps. In this paper, we present a scanline-based road edge extraction method using the bend angle of scanlines from MLS point clouds. Scanline-based methods have the advantages that the computational cost is low, accurate road edges are easy to extract, and the results are independent of the driving speed of the MLS, compared to methods using unorganized points. In contrast, these methods have some problems: the extraction accuracy becomes low at curb cuts and intersections, and it is further degraded by scanning noise and small occlusions from weeds and fallen leaves. In addition, some parameters must be adjusted according to the mounting angle of the laser scanner on the vehicle. Therefore, we present a scanline-based road edge extraction method that solves these problems. First, the points of the scanline are projected onto a plane in order to reduce the influence of the mounting angle of the laser scanner on the vehicle. Next, the bend angle of each point is calculated using filtered point clouds, which are not vulnerable to small occlusions around the curb such as weeds. Then, points with a local maximum of bend angle that are close to the trajectories are extracted as seed points. Finally, road edges are generated from the seed points by tracking based on the bend angle of the scanlines and the smoothness of the road edges. In the experiments, our proposed method achieved a completeness of over 95.3%, a correctness of over 95.0%, a quality of over 90.7%, and an RMS difference of less than 18.7 mm in total. INTRODUCTION Point clouds acquired by Mobile Laser Scanning (MLS) have been applied to efficient road asset management and to improving the accuracy and generation cost of high definition maps. Road edges, which are defined as border lines between the roadway and sidewalks, are one of the most important pieces of information in these applications. Therefore, many methods for automatic extraction of road edges have been proposed (e.g. Qiu et al., 2016, Hervieu et al., 2013, Zai et al., 2018, Yang et al., 2013, and Ishikawa et al., 2018). In most proposed methods, road edges are acquired based on curb extraction by evaluation of the local point distribution. These methods can be classified into unorganized point-based methods and scanline-based methods according to how local points are evaluated. Among the unorganized point-based methods, Qiu et al. (2016) extracted candidate points of road edges from multiple planes obtained by the RANSAC algorithm and then extracted road edge points from the candidates based on the stability and continuity of the road width. Hervieu et al. (2013) proposed a Kalman filter-inspired method to detect curb points from roadside points extracted by evaluating the difference between normal vectors and the plane direction estimated using RANSAC. Hernández et al. (2009) extracted the road region and the road boundary by evaluating height differences of points using height images. Rodriguez-Cuenca et al. (2016) generated images recording the height difference and the number of points in each pixel from point clouds and extracted curb candidate pixels by evaluating these images.
Furthermore, curb points are extracted based on the evaluation of the distance between each point and the plane estimated by least squares fitting, using the points corresponding to the curb candidate pixels. Zai et al. (2018) proposed a road edge extraction method using supervoxels with several attributes, such as the normal vectors and intensity of the point clouds, and applied the α-shape algorithm and a graph cuts-based energy minimization algorithm to the supervoxels. The advantage of these methods is their applicability to any point clouds. However, the disadvantages are that the computational cost is high due to the iterative point searches required for normal vector calculation; that these methods are sensitive to changes in point density caused by the specification and driving speed of the MLS, because they rely on processes and parameters depending on point density, such as images with a fixed resolution; and that it is difficult to extract road edges accurately when the road edges are not parallel to the trajectories, because these methods assume parallelism to the trajectories when extracting road edges from candidate points. In contrast, scanline-based methods have advantages such as low computational cost compared with methods using unorganized point clouds, and accurate road edges that are independent of the driving speed of the MLS are easy to extract. As scanline-based methods, Yang et al. (2013) proposed a semi-automated extraction method for road boundaries. In this method, curb candidate points are extracted by a moving-window operation, where windows consist of a certain number of neighboring points on the scanlines, and curb points are extracted by optimization of the segmented curb candidate points. Miyazaki et al. (2017) proposed a scanline-based region growing method for extracting flat regions such as roads, curbstones, and sidewalks. In this method, point clouds on the scanline are approximated by polylines, and planar regions are generated by region growing based on the evaluation of the geometric distance and normal vector of each polyline. Ishikawa et al. (2018) extracted curb points and classified curb types from low-density points by angle evaluation of neighboring points on scanlines. In this method, the extracted curb points were refined by evaluating the point distribution and applying a Statistical Outlier Removal (SOR) filter. However, these methods have some problems: the extraction accuracy is low at curb cuts and intersections, because these road edges have only a slight height difference, and it is difficult to adjust the parameters. Furthermore, the extracted lines may be influenced by the mounting angle of the laser scanner on the vehicle and by weeds and fallen leaves around the curbs, because neighboring points on the scanline are used directly for angle evaluation. The objective of our research is to develop an automatic extraction method for road edges passing through the lower edge of the curb from point clouds acquired by MLS, in order to solve these problems. Our method is based on tracking of road edges using the bend angle and is not vulnerable to slight occlusions around the curb because it uses filtered point clouds. Outlines The majority of our method operates on scanlines. In our method, road edges are obtained by tracking, from seed points considered to be curb edges, based on the bend angle of each point calculated from the scanlines and the smoothness of the road edges.
In order to improve the stability of the bend angle calculation for each point, projection of point clouds, irregular point removal, scanline smoothing, and bend angle estimation based on neighbors at a certain distance are used. This method consists of (a) projection of points to the driving direction, (b) calculation of the bend angle, (c) removal of irregular points, (d) smoothing of the scanline, (e) extraction of seed points of road edges, and (f) extraction of road edges, as shown in Fig. 1. Steps (a)-(d) are applied to each scanline. A scanline is defined as a point set that is split directly above the trajectory and is ordered by acquisition time. The details of each step are described below. Projection to the Driving Direction (Fig.1 (a)) The points of the scanline are projected onto a plane perpendicular to the driving direction of the vehicle for detection of the curb edge without the influence of the mounting angle of the laser scanner on the vehicle, as shown in Fig. 2. The coordinates of a point projected onto the plane perpendicular to the driving direction are obtained by Eq. (1) and (2), where (x_t, y_t) and θ_t denote the coordinates and yaw angle of the trajectory at the acquisition time of the point. Calculation of Bend Angle (Fig.1 (b)) The degree of bending of the scanline becomes large at the edge of the curb and small at a curb cut, because the cross slope is designed to change at the border between the roadway and the sidewalk. In order to quantify the degree of bending at each point, the bend angle is defined as shown in Fig. 3. For reducing the influence of measurement noise, first, two neighbors p_a and p_b of a point of interest p_i are set as the farthest points within a fixed distance from p_i (we assume t_a < t_i < t_b, where t denotes the acquisition time of a point). Then, the angle from the line passing through p_a and p_i to the line passing through p_i and p_b is calculated as the bend angle of p_i. If the shape formed by the three points used in the calculation is concave toward the position of the laser scanner, the sign of the bend angle is positive; otherwise, it is negative. As shown in Fig. 4, robust computation of bend angles, reducing the influence of measurement noise, is realized by using the farthest points within the fixed distance. Removal of Irregular Points (Fig.1 (c)) Irregular points such as plants near the road edge may cause false extraction of road edge points in the following steps, because the absolute values of the bend angle at these points are relatively high. Therefore, points where the absolute value of the bend angle is larger than a given threshold are removed. As shown in Fig. 5, even if curbs are covered by plants, it becomes easier to extract road edges in the following steps because most of the irregular points are removed in this process.
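As a rough illustration of the bend angle computation in steps (b)-(c), the sketch below picks, for each point, the farthest fore and aft neighbours within a fixed distance and returns the signed angle between the two chords. All names, the radius value, and the sign-convention parameter are ours, placeholders rather than the paper's actual settings.

```python
import numpy as np

def bend_angles(pts, radius=0.1, scanner_side=1.0):
    """Signed bend angle per point of a projected scanline (a sketch).

    pts          : (n, 2) scanline points ordered by acquisition time,
                   already projected onto the plane perpendicular to
                   the driving direction.
    radius       : fixed distance within which the farthest fore/aft
                   neighbours p_a and p_b are chosen (illustrative value).
    scanner_side : +1 or -1, fixing the sign convention so that shapes
                   concave toward the scanner get a positive angle.
    """
    n = len(pts)
    angles = np.zeros(n)
    for i in range(1, n - 1):
        d = np.linalg.norm(pts - pts[i], axis=1)
        back = [j for j in range(i) if d[j] <= radius]
        fore = [j for j in range(i + 1, n) if d[j] <= radius]
        if not back or not fore:
            continue
        a = pts[max(back, key=lambda j: d[j])]  # farthest earlier neighbour
        b = pts[max(fore, key=lambda j: d[j])]  # farthest later neighbour
        u, v = pts[i] - a, b - pts[i]
        # Angle from line (p_a, p_i) to line (p_i, p_b); the cross
        # product sign encodes on which side the polyline bends.
        cross = u[0] * v[1] - u[1] * v[0]
        angles[i] = scanner_side * np.degrees(np.arctan2(cross, u @ v))
    return angles
```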
Smoothing of Scanline (Fig.1 (d)) Because the tracking process of road edges in the following steps is sensitive to slight changes of the bend angle, it is necessary to reduce the influence of measurement noise. In our method, Taubin smoothing (Taubin, 1995) is applied to the scanline, and the bend angle is recalculated using the smoothed scanline. Taubin smoothing is a noise removal method based on low-pass filtering that repeats the Gaussian smoothing step while alternating positive and negative scale factors. The position of each point after one smoothing step is defined as shown in Eq. (3): p̂_i(t+1) = p̂_i(t) + λ Σ_j w_ij (p̂_j(t) − p̂_i(t)), where the sum is taken over the points adjacent to point i, p̂(t) is the position after the t-th smoothing step, w is the weight, and λ is the scale factor. The weight parameter w = 0.5 is used, being the reciprocal of the number of adjacent points. The scale factor λ = 0.6307 is used for even-numbered smoothing steps, and λ = −0.6732 for odd-numbered steps, based on (Taubin, 1995). We applied Taubin smoothing to the scanline around a curb to determine the optimal number of repeats. The smoothing results for different numbers of repeats on an actual scanline are shown in Fig. 6. In the original data, a noise point exists at the side of the curb on the scanline. As the number of smoothing steps increases, this noise point blends with the points on the side of the curb and becomes less noticeable, while the shape of the curb is maintained. As a result of the experiments, we determined that a suitable number of repeats is 20.
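To make the smoothing step of Eq. (3) concrete, here is a minimal sketch using the parameters quoted above (w = 0.5, factors 0.6307 and −0.6732, 20 repeats). The function name and the fixed-endpoint choice are ours, and the order of the two alternating factors follows the usual Taubin convention.

```python
import numpy as np

def taubin_smooth(pts, repeats=20, w=0.5, lam=0.6307, mu=-0.6732):
    """Taubin (1995) low-pass smoothing of an ordered polyline.

    Each repeat applies Eq. (3) twice: a shrinking step with the
    positive factor lam and an inflating step with the negative
    factor mu, which removes noise while preserving the curb shape.
    pts is an (n, d) array; the two endpoints are kept fixed.
    """
    p = np.asarray(pts, dtype=float).copy()
    for _ in range(repeats):
        for factor in (lam, mu):
            lap = np.zeros_like(p)
            # Umbrella Laplacian, weight w = 0.5 per adjacent point:
            lap[1:-1] = w * (p[:-2] - p[1:-1]) + w * (p[2:] - p[1:-1])
            p += factor * lap
    return p
```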
2.6, and the following direction is defined by weighted difference vectors of previous road edge points, as shown in Fig. 8 and Eq. (8). If there are no points in the search range, or there are only points with a bend angle less than the given threshold in the search range, the extraction of road edges is finished. After extraction of road edges, short road edges are removed. Next, in order to remove edges between the road and objects in the roadway, such as vehicles, if points exist in a certain height range above the road edges, the edges are removed. Furthermore, to remove edges outside the roadway, if the line segment perpendicular to an extracted line l, whose endpoints are an endpoint of l and its nearest trajectory point, intersects with another extracted line, l is removed.

Experimental Data
The point cloud used in the experiments was collected using an MLS named StreetMapper 360. The specification of the MLS and information on the acquired point clouds are shown in Table 1. In these experiments, only the point cloud acquired from one laser scanner is used. The interval of points around the other side of the road edges is approximately 120 mm.

Quantitative Evaluation
The extraction results of road edges are shown in Figs. 9 and 10. Fig. 9(a) and Fig. 10(a) are overviews of the extracted road edges by the proposed method, Fig. 9(c) to (g) and Fig. 10(c) to (f) are enlarged views of the extracted road edges, and Fig. 9(b) and Fig. 10(b) are the manually extracted road edges for evaluation. It is observed that road edges of curbs, curb cuts, and intersections were extracted except for occluded parts of road edges. At the intersections, road edges are extracted until just before the position where points become sparse and road edges become unclear. It was observed that the road edges are extracted without the influence of weeds that exist around curbs, as shown in Fig. 9(f). The road edges were extracted even in parts where road edges are not parallel to the MLS driving direction, such as the bus bay shown in Fig. 9(g). On the other hand, when the direction of scanlines and road edges are parallel, the length of extracted lines at the intersection tends to be short. The performance of the proposed method is quantitatively evaluated using Heipke's proposed method (Heipke et al., 1997), which has been widely used in related works (e.g. Zhou et al., 2012). This method evaluates extracted lines using the ground truth data as the reference lines. Therefore, we manually extracted road edges in the point cloud as the reference lines. The reference lines were not generated at occluded parts, such as behind a vehicle. When a buffer is generated from the reference line, True Positive (TP) is defined as the extraction line included in the buffer, and False Positive (FP) is defined as the extraction line outside the buffer, as shown in Fig. 11. Similarly, when a buffer is generated from the extraction line, False Negative (FN) is defined as the reference line outside the buffer, as shown in Fig. 11. Then, completeness (= TP / (TP + FN)), correctness (= TP / (TP + FP)), and quality (= TP / (TP + FP + FN)) are defined following Heipke et al. (1997).
Figure 11. Evaluation method
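The buffer evaluation just described can be sketched in a few lines of Python. The 10 mm sampling interval and 50 mm buffer width follow the paper; everything else (function names, the point-wise approximation of line lengths by counting samples) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def sample_line(polyline, step=0.01):
    """Resample a 2D/3D polyline at fixed intervals (10 mm, as in the paper).
    Interior vertices may be duplicated; for metric counting this is harmless."""
    pts = [polyline[0]]
    for a, b in zip(polyline[:-1], polyline[1:]):
        seg = np.linalg.norm(b - a)
        for d in np.arange(step, seg, step):
            pts.append(a + (b - a) * d / seg)
        pts.append(b)
    return np.array(pts)

def buffer_metrics(extracted, reference, buffer_width=0.05):
    """Heipke-style evaluation: a sampled point counts as matched when it
    lies within `buffer_width` of the other line; lengths are approximated
    by sample counts."""
    ex, ref = sample_line(extracted), sample_line(reference)
    d_ex = np.min(np.linalg.norm(ex[:, None, :] - ref[None, :, :], axis=2), axis=1)
    d_ref = np.min(np.linalg.norm(ref[:, None, :] - ex[None, :, :], axis=2), axis=1)
    tp = np.sum(d_ex <= buffer_width)   # extracted samples inside the reference buffer
    fp = np.sum(d_ex > buffer_width)    # extracted samples outside it
    fn = np.sum(d_ref > buffer_width)   # reference samples not covered by the extraction
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality
```

Note that the result is sensitive to the buffer width, which is exactly why the paragraph below justifies the 50 mm choice before comparing against related works.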
This evaluation method requires a suitable setting of the buffer width because the value of the buffer width significantly affects the evaluation results. We adopted a buffer width of 50 mm because the distance from the extracted line to the true road edge was often kept within approximately 50 mm in the case of manual extraction. This buffer width is small compared to related works; for example, Zhou et al. (2012) adopted 500 mm, Qui et al. (2016) adopted 200 mm, and Kumar et al. (2013) adopted multiple values ranging from 100 to 500 mm as the buffer width. In order to simplify the calculation of each quantity, points sampled at 10 mm intervals from each line are used in the calculations. The extracted road edges were divided into curbs, curb cuts, and intersections on the left and right sides of the road, and were evaluated individually. The results of the quantitative evaluation are shown in Tables 2 and 3. The results are summarized as follows.

Completeness, correctness and quality: Our proposed method achieved a completeness of over 97.1% and 89.9%, a correctness of over 97.2% and 89.0%, and a quality of over 94.4% and 80.9% in total at the sites of Kawasaki city and Sapporo city, respectively. It is considered that the results for Sapporo city were of relatively low accuracy because the curb height is lower than in Kawasaki city. There were no extracted lines other than road edges at either site. The extraction quality of the left and right road edges was similar, 94.4% and 99.1%, at Kawasaki city. In contrast, the extraction quality of the right road edges was lower than that of the left road edges at Sapporo city, because the extraction quality is more influenced by point density when the curb height is low, as shown in Fig. 10(e) and (f). The quality of curb cuts was high, similar to the quality of curbs, on both sides. On the other hand, at the intersections the quality achieved was over 62.7%, which is lower than that of curbs or curb cuts, because the height difference is small and the road edges curve sharply.

RMS difference (α): The RMS difference α is less than 18.7 mm in total. The values for the left side, where point density was high, tend to be lower than those for the right side. The RMS difference α of the left side is close to the point intervals around the road edges, and that of the right side is smaller than the point intervals around the road edges. These results indicate that optimal points were selected as road edge points in the tracking process by the energy function.

Number of gaps and total gap length: The numbers of gaps were 24 and 86, and the total gap lengths were 17.2 m and 69.9 m in total at the sites of Kawasaki city and Sapporo city, respectively. These values tend to be better for curbs and curb cuts than for intersections.

The processing time for approximately 18 million points at the site of Kawasaki city was 74.8 seconds, and the processing time for approximately 17 million points at the site of Sapporo city was 74.3 seconds, on a PC with an Intel Core i7-7700K CPU and 32 GB RAM.

CONCLUSIONS
In this paper, we proposed an accurate extraction method of road edges from point clouds using the bend angle of scanlines acquired by MLS.
Our proposed method extracts road edges without the influence of the mounting angle of the laser scanner on the vehicle, and is robust against slight occlusions around the curb by using filtered point clouds. In addition, the road edges at curb cuts and intersections, where the height difference at the edges is small, can be extracted using tracking based on the energy defined by the bend angles of scanlines and the smoothness of the extracted line. In the experiments using two point clouds, a completeness of over 95.3%, a correctness of over 95.0%, and a quality of over 90.7% were achieved in total. Our proposed method extracts road edges at similar quality to related works, although we used a 50 mm buffer width, which is smaller than in those works. An RMS difference of less than 18.7 mm was achieved in total. This result indicates that our proposed method is highly accurate. At the intersections, road edges were extracted until just before the position where points become sparse and road edges become unclear. Furthermore, our proposed method also showed good performance at road edges where the point density is low. Future studies will include the automatic classification of curbs and curb cuts and accuracy improvement using point clouds from the two laser scanners on the MLS.
2020-08-20T10:01:54.999Z
2020-08-14T00:00:00.000
{ "year": 2020, "sha1": "759c4437f7c6205b12c8a0e053dd159cec1e11c3", "oa_license": "CCBY", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B2-2020/1091/2020/isprs-archives-XLIII-B2-2020-1091-2020.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f8e040df160c808de16f01a794748aee2e9b8d7e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
216301991
pes2o/s2orc
v3-fos-license
A Review of the Gut-Uterine Axis in Persian Medicine Literature: Implications in Polycystic Ovary Syndrome
Persian medicine (PM) takes a holistic approach towards diagnosis and management of disease states, focusing on the connections between body systems and organs. Menstrual disorders are of utmost importance in women, as they may lead to dysfunctions in other body systems. Deeming a mutual relationship between the gastrointestinal and female reproductive systems, PM physicians believed a gut-uterine axis to exist. Ehtebas-e Tams (ET), meaning menstrual retention, is not an exception, being accompanied by gastrointestinal morbidities including digestive disorders, nausea, heartburn, food craving and pica, reduced appetite, abdominal pain, and bloody diarrhea. Considering polycystic ovary syndrome (PCOS) as an instance of ET, we searched studies to investigate these correlations. While a number of the mentioned ET symptoms were confirmed by contemporary studies, others have not been investigated widely and are yet to be elucidated. Conducting studies to clarify such correlations has implications for improved diagnosis and novel modes of treatment.

Introduction
Normal menstruation is a marker of women's health [1], and regarded as the fifth vital sign in women by the American College of Obstetricians and Gynecologists [2]. Over a thousand years ago, Avicenna, the renowned Persian physician, observed menstruation as an important factor in women's health status. Considering this monthly cycle as an important factor in maintaining health, he believed that normal menstruation in terms of quantity, quality (color, density, viscosity, etc.) and timing eliminated many harmful substances from a woman's body [3]. Persian medicine (PM), known in some references as Iranian traditional medicine and practiced for thousands of years, is a humoral medicine based on the belief that the body is a constitution of four humors: blood, phlegm, yellow bile and black bile. Despite its seemingly simplistic nature at first glance, omics-based research has found evidence in support of this categorization [4]. Persian medicine offers a holistic approach toward the body and aims to diagnose and eradicate the roots of a disease with regard to the connections between body organs and systems. PM references consider the uterus, by which they mean the female reproductive system, a cardinal organ, since diseases related to this organ and its function are said to be disseminated in the body to affect all other body systems [5]. In addition to the position they considered for the uterus in relation to other organs, they have also emphasized the close connection between the uterus and gut, or the gut-uterus connection [6]. According to PM, a strong correlation exists between the uterus and the gastrointestinal (GI) system. Menstrual disorders including both increased [7,8] and decreased [3,9] menstrual bleeding lead to gastrointestinal (GI) disorders. Conversely, gastrointestinal disorders can initiate diseases in the uterus. Uterus is a general term used by PM scholars to imply the female reproductive system, not just the uterus itself [10]. An important gynecologic disorder in Persian medicine is "Ehtebas-e Tams" (ET), meaning menstrual retention, whether it be complete cessation of menstruation, an increase in the interval between cycles, or reduction in the amount of bleeding [10]. In modern semiology, this disorder can be regarded as an equivalent to polycystic ovarian syndrome (PCOS) [11,12,13].
According to physicians of Persia, ET is a critical disease with various morbidities that should not by any means be left untreated, as the wastes that should have been excreted via menstruation spread throughout the body, leading to many comorbidities [5]. Avicenna, in his book, Canon, under the topic of ET, describes its morbidities. In addition to expressing symptoms such as hirsutism and hoarseness, which are now known as hyperandrogenic symptoms, he referred to the various morbidities of ET in various organs [3], as shown in Figure 1. The most prevalent menstrual disorder accompanied by amenorrhea/oligomenorrhea in contemporary literature is polycystic ovarian syndrome (PCOS). It is estimated to affect 10% of women of child-bearing age [14]. Typically developing in adolescence, PCOS continues throughout the reproductive years and is even reported to leave sequelae after menopause [15,16]. Many endocrine and metabolic complications including insulin resistance and type 2 diabetes [17,18], dyslipidemia [19], and cardiovascular diseases [20,21] accompany PCOS. Gastrointestinal (GI) dysfunctions are also present in PCOS, as shown by many studies of recent years that have investigated this correlation. However, many aspects have remained elusive, necessitating yet more studies. The purpose of this study was to review the gastrointestinal morbidities of ET mentioned in PM references, and to investigate whether the symptoms can be applied to PCOS as a disorder accompanied by amenorrhea/oligomenorrhea.

Methods
This article is a qualitative study of two branches of literature: traditional Persian medicine and modern medicine. We first searched PM references, including Rhazes' "Al-Kitab al Hawi" (10th century), Ahwazi's "Kamil al-Sana'a" (10th century), Avicenna's "Canon of Medicine" (11th century), Akbari's Tebb-e-Akbari (18th century), and Nazem Jahan's encyclopedia "Exir-e Azam" (19th century) for words related to "Ehtebas-e tams" and gathered the data on the associated gastrointestinal morbidities. The data were then classified, coded, and analyzed. In the next step, we searched the PubMed and Scopus databases using the words polycystic ovary syndrome and each of the gastrointestinal symptoms found in PM references in combination.

Results and Discussion
Gastrointestinal morbidities of ET in PM literature include indigestion, heartburn, nausea, GI inflammation, anorexia, food craving and pica, excessive thirst, GI bleeding, and intraabdominal tumors [5,10]. In the following, each morbidity will be outlined as discussed in PM references and then investigated for any relation with PCOS.

Indigestion
ET can be accompanied by digestion weakness and deterioration. Digestion disorders as described by PM scholars include a spectrum of conditions, the mildest of which is named "digestion weakness", defined as the inability of the digestive faculty to transform food into a desirable quality to be optimally used by the body [22]. This disease is diagnosed by symptoms of delayed gastric emptying [23], postprandial fullness, abdominal distention, abdominal bloating and belching [5]. "Digestion deterioration" is a more severe form of digestion dysfunction, accompanied by symptoms of liver dysfunction in addition to gastrointestinal symptoms [3]. Both of these conditions, and also heartburn and nausea, have been mentioned to occur in ET, depending on the severity and duration of disease [3,7,9].
Symptoms of digestion weakness described by PM scholars resemble functional dyspepsia in contemporary medicine. Functional dyspepsia (FD), typically chronic and recurrent in nature [24], refers to cases where no evidence of structural disease likely to explain the symptoms can be found [24,25]. The prevalence of FD is estimated to be 5-15% in the general population, excluding cases of heartburn [27]. Affected individuals complain of upper abdominal fullness, nausea, early satiety, belching and bloating [28]. However, to increase specificity to the gastroduodenal area, Rome IV confined the main symptoms to early satiety, postprandial fullness, epigastric pain and epigastric burning [29]. Since many cases are meal related and aggravated following food ingestion, FD is categorized into three subtypes: 1) postprandial distress syndrome (PDS): characterized by postprandial fullness and early satiation, 2) epigastric pain syndrome (EPS): characterized by pain and burning in the epigastric region, and 3) overlapping subtype: characterized by overlapping features of PDS and EPS [29]. Gastric dysmotility or hypersensitivity is observed in two thirds of FD patients [30]. Four main factors were defined in a cluster analysis of dyspeptic symptoms in several hundred patients: 1) nausea, vomiting, weight loss and early satiety, associated with female sex and young age; 2) postprandial fullness and bloating; 3) pain associated with psychosocial factors; and 4) belching unrelated to psychosocial factors. The first two factors were associated with delayed gastric emptying, while the latter two were associated with gastric hypersensitivity [31]. Our search revealed no direct relationship between dyspepsia, gastrointestinal motility, or hypersensitivity and PCOS. However, a number of mechanisms have been hypothesized to play a role in FD, which were investigated in terms of relations with PCOS. Some recent studies have explored the relationship between PCO and GI diseases, one of the most important of which is research on the gut microbiome, the collective microorganisms that are resident in the GI tract. Microbiome alterations have been found in rodent PCO models [32,33]. This is also true in humans, as the fecal microbiome in PCO-affected individuals has less diversity and a different phylogenetic profile compared to healthy controls [34]. The microbiome shift results in augmented gut permeability [35], which in turn induces endotoxemia and inflammation, a process that has been proposed to play a role in the pathogenesis of PCO [36]. Additional evidence in favor of increased gut permeability in PCO individuals is a report of an increase in serum zonulin, the only known biomarker of gut permeability [35]. Another proposed mechanism for FD is infection with Helicobacter pylori and the resultant gastric and duodenal mucosal inflammation [37]. A possible relationship between H. pylori infection and PCOS has been propounded in recent years. Some studies have confirmed a more prevalent positive serology in PCOS patients [38], while others have not found this correlation to be true [39,40]. Central nervous system mechanisms have also been implicated in the pathogenesis of functional dyspepsia; for instance, cerebral glycometabolic disturbances have been found in certain brain regions [41]. There is also evidence of abnormal brain activity in FD patients, aggravated by anxiety [42].
In a population-based study, a correlation between anxiety and the PDS, but not the EPS, subtype was found, which supports the idea of different pathophysiologies of the subgroups [43]. This mechanism may be applicable to PCOS, as anxiety has been found to be more prevalent in this population [44]. Notably, PM scholars declare one of the morbidities of ET to be anxiety [8,45].

Eating Disorders
Food Craving and Pica
Pica was first described by Hippocrates, who viewed it as a yearning in pregnant women to consume earth or charcoal [46]. The desire to ingest inappropriate foodstuff, so-called "vahm" or "fasad-e shahvat" in PM [47], is categorized into two subtypes. One is associated with a pathologic desire towards a specific taste such as salty, sour, or hot food, while the other is a yearning for non-nutritional substances like earth, ice, or charcoal [47,48]. According to the Canon of Medicine, the first subtype is seen in mild diseases, whereas more severe pathologies are associated with the second subtype [3]. Vahm is more common in women and children, due to the higher percentage of moisture in their bodies [49]. Etiologies in women, as described by PM literature, include ET and early pregnancy [3,10,47]. Since Vahm includes desire towards both certain foods and non-nutritional substances, it can be regarded as an equivalent to food craving and pica, respectively. Food craving (FC) is more common in women, of whom a third report correlations with the menstrual cycle [50]. Overall, 58% of women experience FC, with 7% occurring only during pregnancy. Narrowing the definition to moderate and severe craving reduces the incidence to 42% and 21% respectively [51]. Considering the higher perimenstrual and prenatal prevalence of FC, hormonal mechanisms have been proposed [52]. Evidence regarding correlations of PCOS with food craving is found in modern literature. In a recent cohort of obese and overweight PCOS patients, FC was demonstrated to be more prevalent compared to healthy women [53]. Likewise, in the largest study investigating the relationship between PCOS and eating disorders, it was reported that food craving was significantly more prevalent in obese PCOS patients compared to lean and overweight affected individuals [54]. PCOS is also associated with eating disorders, namely binge eating disorder (BED) and bulimia nervosa. DSM-5 defines bulimia nervosa as recurrent episodes of binge eating, i.e. consumption of larger amounts of food in a discrete period than is typical for most people and a lack of control of eating during these episodes, along with recurrent inappropriate compensatory behavior such as self-induced vomiting or laxative abuse, occurring at least once a week for three months, while in BED there is no compensatory behavior [55]. In a large population-based study, women who reported lifetime binge eating were more likely to report either amenorrhea or oligomenorrhea than women who had not experienced binge eating [56]. Specifically speaking of PCOS, a number of studies have reported no correlations with eating disorders [57,58], while other research in this regard indicates a higher prevalence of eating disorders in these patients [59][60][61]. Results of the largest study in this regard indicate that over half of obese PCOS patients experience binge eating behavior, about 40% of which is clinically significant.
Moreover, binge eating is more prevalent in lean women with PCOS compared to healthy controls [54]. Binge eating disorder resembles "Joo-e kalbee", a subtype of "fasad-e shahvat", which is defined by an uncontrolled desire for eating food. PM scholars declare one of the morbidities of ET to be "fasad-e shahvat".

Anorexia
PM references categorize appetite as true and false. True appetite is the desire to consume food consequent to the body's need for food, while false appetite refers to pathologic states that are a result of dysregulation in the production and secretion of appetite-regulating factors, and lead to abnormal eating patterns [3,8]. A true appetite is the result of depletion in body stores and needs a healthy stomach to detect appetite stimulants [5,10]. Since ET is associated with incomplete expulsion of body wastes and gastrointestinal dysfunction [48], PM practitioners have proclaimed that this menstrual disorder may result in suppression, and in prolonged cases, complete loss of appetite [3,10]. Studies regarding appetite and satiety in PCOS have yielded conflicting results. Some show no difference in satiety index levels during meal tolerance tests or standard meal consumption compared to controls [62,63], while others indicate dysregulations in this respect. Exploring hormones that control hunger and satiety, including leptin, ghrelin, cholecystokinin, glucagon-like peptide-1 (GLP-1), peptide tyrosine-tyrosine (PYY), and neuropeptide Y (NPY), will help create a clearer picture of appetite regulation in PCOS. Leptin, an adipocyte-derived hormone, suppresses appetite, stimulates thermogenesis, and reduces body fat mass [64]. A number of studies have demonstrated that leptin is increased in PCOS independent of insulin resistance and that this effect may have a role in the pathogenesis of this disease [65][66][67][68]. The peptide hormone GLP-1 acts as a modulator of insulin secretion, glucose homeostasis, satiety and gastric emptying [69]. A number of studies have investigated fasting and stimulatory levels of GLP-1 in PCOS, most of which have reported its levels to be unaltered in both states [70], although there are instances of both decreased [71] and increased [72] concentrations of fasting and stimulatory GLP-1. As a product of intestinal cells, cholecystokinin (CCK) has a role in the induction of satiety, delaying gastric emptying, pancreatic beta cell proliferation, and glucose-lowering effects [73]. In a clinical study, PCOS patients have been demonstrated to have low postprandial CCK [74]. Neuropeptide Y (NPY), a member of the NPY family of biologically active peptides, regulates appetite and is increased in both obese and non-obese cases of PCOS, independent of the increase in BMI [75]. PYY, mainly secreted from L-cells of the distal intestine in response to food intake, suppresses appetite [76]. Levels of this peptide were found to be unaltered in four of the five studies that have investigated its concentrations in PCOS patients [70]. One clinical trial reported lower basal and postprandial total PYY levels in PCOS, which correlated negatively with insulin levels [77]. Therefore, it seems that appetite regulation may be impaired in women with PCOS, but to our knowledge no study has investigated the prevalence or associations of anorexia in PCOS.
GI Bleeding
Al-Zahrawi, in his thirty-volume encyclopedia of medical practices, Kitab al-Tasrif, which has been a reference for Islamic and European medicine for more than five centuries, mentioned ET as a differential diagnosis of bloody diarrhea. He believed the treatment of this disorder to be correcting the menstrual problem and boosting liver function [78]. The most common causes of lower gastrointestinal bleeding in adults include hemorrhoids, diverticula, vascular ectasias, neoplasms, and colitis, most commonly infectious or idiopathic inflammatory bowel disease. In adolescents, the most common colonic causes of significant gastrointestinal bleeding (GIB) are inflammatory bowel disease and juvenile polyps [21]. To our knowledge, there is no investigation on the prevalence of the above-mentioned causes of lower gastrointestinal bleeding in women with PCOS.

Intraabdominal Tumors
Avicenna, in his book, Canon, mentioned ET as a risk factor for the progression of intraabdominal tumors [3]. A cohort study in 2015 revealed that PCOS patients are at increased risk of colon and kidney cancers [81]. This association may be due to different conditions that accompany PCOS, including the effect of sex hormones and androgens on the incidence of GI cancers [82,83], the role of adipose tissue in the development of cancers [84], or the association of decreased insulin sensitivity with cancer risk.

Conclusion
This study attempted to review the gastrointestinal morbidities of ET (menstrual retention) mentioned in Persian medicine references, and to investigate whether the symptoms correlate to PCOS, based on evidence from contemporary literature. According to PM, a strong connection exists between the uterus and the gastrointestinal (GI) system, which is of great importance in the diagnosis and treatment of disorders in both systems. Menstrual disorders lead to gastrointestinal (GI) disorders, and conversely, gastrointestinal disorders can initiate diseases in the uterus, the female reproductive system. Specifically, menstrual disorders are considered in the differential diagnosis of gastrointestinal disease, and are therefore a target in the treatment process. Also, due to the mutual nature of this correlation, improving gastrointestinal functions is a necessary component of treating menstrual disorders. Therefore, there may be a relation between gastrointestinal disorders like functional dyspepsia and PCOS, which, according to PM, may be resolved by treating the main disease. Ideas proposed for future research would be to investigate this correlation and to see whether reconstituting normal menses leads to a decrease in gastrointestinal disorders such as FD, as proposed by Persian physicians. Despite evidence in favor of such a correlation in contemporary studies, there are still many ambiguous and in some instances conflicting results, necessitating further studies to elucidate these associations. We believe that the all-inclusive detailed data of modern medicine, integrated with the holistic vision of traditional Persian medicine, can yield a more comprehensive view toward the human body in terms of health maintenance, diagnosis, and management of any disease.

Declaration of interest
The authors declare that there is no conflict of interest.

Funding
This research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
2020-04-16T09:12:44.408Z
2020-03-16T00:00:00.000
{ "year": 2020, "sha1": "6bd97849a09b6c4db59d398627b00798f797e401", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/tim.v5i1.2669", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2a6ce8daab738e6ce5636ebbd2f826ac2faaff2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214059772
pes2o/s2orc
v3-fos-license
Effects of Playful Exercise of Older Adults on Balance and Physical Activity: a Randomized Controlled Trial
There is evidence that one of the most important approaches to improving the healthy ageing of older adults is for them to carry out daily physical activity. However, motivation to engage in physical activity is often low in old age. This study investigated the potential of engaging older adults in playful exercise to increase physical activity and balance. A randomised controlled trial (RCT) was performed with 26 independently living older adults (initially 38, but 12 were lost to illness or death during the course of the project), mean age 83.54 (SD: 7.12), 19 women. Participants were randomly allocated to intervention (n = 16) or control (n = 12) (originally 19 in each group). The intervention consisted of playful exercise on Moto tiles 6 * 2 min twice a week over 10 weeks, while control group participants engaged in normal daily activities. The intervention group participants improved functional balance (Berg's Balance Score) by an average of 5.02 points, and the control group by 2.58 points (p = 0.11). No between-group difference was observed in physical activities outside exercise sessions (p = 0.82). The difference in gain of balance as measured by BBS was below statistical significance, as a result of the sample size being too small. However, trial results suggest that older pre-frail and frail adults who engage in a moderate playful exercise programme over at least 10 weeks may potentially experience a modest gain in balance. Moreover, the playful exercise created a joyous social atmosphere among the participants, who spontaneously remarked that the play sessions were much more fun than their standard light exercise programme of one hour twice a week. This motivational outcome is important for adherence to any exercise programme and indeed for general well-being.

Introduction
According to the World Health Organization's report (2009), one area of increasing focus in health is the demographic development and the increasing life expectancy of older adults. The proportion of adults aged 65 or above was estimated at around 521 million in 2011, and this number will grow to 939 million by 2030 (United Nations, 2015). The increased life expectancy imposes age-related challenges, which affect daily life activities and normal functioning. This can cause inactivity and place old people at an increased risk of a variety of chronic diseases and disorders (World Health Organization 2015). These challenges also impose a heavy burden on society in terms of social welfare, healthcare need and the high cost associated with these challenges (Merom et al. 2012). Age-related impairment leads to deterioration of various physiological parameters important for balance, such as vision, vestibular and proprioceptive senses, and muscle strength. In addition, fall accidents are considered to be one of the most commonplace problems associated with getting older (Pua et al. 2017;Franceschi et al. 2018). Prior studies suggest that exercise can increase muscle strength, balance, activity of daily living function and walking speed (Yardley et al. 2006;Janssen and LeBlanc 2010;Gillespie et al. 2012;Lee et al. 2012). Especially for old people, physical activity can help maintain a high functional capacity (i.e., the ability to cope with everyday life). For example, exercise interventions among older adults can reduce falls (Sherrington et al. 2011;Gillespie et al. 2012;Guirguis-Blake et al. 2018).
Physical activity can prevent the development of lifestyle diseases (e.g., type 2 diabetes and cardiovascular disease) (Reiner et al. 2013;Peek et al. 2016;Daskalopoulou et al. 2017). It can simultaneously reduce symptoms and be included in the treatment of many serious diseases (Blair and Brodney 2018). Finally, physical activity contributes to good mental health, including enhanced self-confidence and joy of life, better social well-being and more energy (Newson and Kemps 2006;Yates et al. 2008;Bauman et al. 2016). In addition, important dimensions of physical activity include muscle strength and balance training, which also have a major role in health promotion and disease prevention in older adults (Garber et al. 2011). Physical activity covers all forms of movement that increase energy consumption, from sports and exercise to everyday activities such as gardening, cycling, taking the stairs or walking the dog (Titze and Marti 1997). Despite documentation of the physical and psychological benefits derived from physical activity, studies also conclude that older adults lack the motivation to be regularly physically active (Phillips et al. 2004;Schutzer and Graves 2004). Therefore, effective strategies to prevent loss of muscle strength and maintain balance in older adults are needed. In the present randomised controlled trial, we aim to add to the underexplored field of playful exercise for health research by examining how and to what extent playful exercise may empower older adults' functional and physical ability. Hence, we examine to what extent playful physical exercise during a 12-week period improves physical and functional abilities and to what extent it is accompanied by changes in physical activities outside exercise sessions.

Function Ability
Balance is a central function in most activities of daily living (ADL) (Kalron and Achiron 2013). Reduced balance increases the risk of falling (Pua et al. 2017). As age increases, gradual deterioration of various sensory systems contributes to worse postural control (Kalron and Achiron 2013). Balance problems can occur when only one of the different parameters in postural control is weakened, because the systems function in mutual interaction (Pollock et al. 2000;Laughton et al. 2003). In 2010, the World Health Organization released a report with recommendations for preventing falls, and thus fractures and other injuries, among old people (World Health Organization 2010). The report indicates that preventive measures are an important element in eliminating the problem of fall accidents, as they can slow down the number of hospitalisations related to fall accidents, which are otherwise expected to increase by 2040 due to the increasing number of people aged above 65 (Daskalopoulou et al. 2017). In addition, prior studies show a positive association between physical activity and healthy ageing (Schutzer and Graves 2004;World Health Organization 2010;Sun et al. 2013;Bauman et al. 2016;Daskalopoulou et al. 2017). It is also documented that through targeted intervention, a significant reduction in the risk of falls for this group of elderly people can be achieved (Howe et al. 2007;Tak et al. 2013). This intervention, for example, may consist of medicine optimisation and physical exercise such as balance, strength and walking training (Bauman et al. 2016).
Many studies have shown that appropriate exercise can modify and significantly delay the primary risk factors for falls, including poor balance and muscle weakness in old people (Schreiber et al. 1999;Tárraga et al. 2006;Padala et al. 2012;Bauman et al. 2016;Dietlein et al. 2018). They also showed improvement in cognitive and physical function when playing serious games (Larsen et al. 2013). Furthermore, balance training is considered to be one of the most important factors in reducing falls among older adults (Sherrington et al. 2008;Jessen and Lund 2017). In addition, a growing number of studies show that gaming has a positive impact on physical functions (Duque et al. 2013;Cho and Lee 2014).

Playful Exercise
Gamification refers to playful experiences that aim to motivate individuals in performing certain tasks by making them feel in control and aware of their abilities (Jessen 2016). Gamification is mostly used for behaviour change purposes (Cugelman 2013;Larsen et al. 2013;Edwards et al. 2016;Johnson et al. 2016;Fleming et al. 2017), not least for older adults to become more physically active (Barden et al. 2013;Larsen et al. 2013;Sailer et al. 2014;Boot et al. 2016;Skjaeret-Maroni et al. 2016;Kappen et al. 2018). Previous studies show that gamification of exercise engages participants and may lead to behavioural change in the short term (Larsen et al. 2013;Edwards et al. 2016;Skjaeret-Maroni et al. 2016;Lee et al. 2018). But it is well known that the motivation to engage in such physical activity becomes reduced when people get older, whether it is playful or not. Still, playful activity programmes for older adults may have beneficial effects: reported results include improvement in well-being and balance, and social and emotional benefits (e.g., social interaction, increased self-esteem, positive emotions) (McLaughlin et al. 2012;Larsen et al. 2013;Boot et al. 2016;Edwards et al. 2016;Kaufman et al. 2016;Dietlein et al. 2018).

Methods
The study design was based on the CONSORT guideline (Chan et al. 2014) and was planned as an RCT with blinded assessment of end points. The intervention group engaged in 12 min of playful exercise (spread over one hour in 2-min mini-sessions) twice weekly over a planned duration of 12 weeks. The training protocol included a combination of games that train both static and dynamic postural control as well as agility, reaction time, and for most older adults, endurance. Participants in both groups engaged in their normal physical and social activities including twice-weekly joint meals and social activity and one hour's light exercise (mobility training while sitting in chairs). Daily physical activity of both intervention and control group participants was recorded by activity trackers (SENS motion sensors; cf. Figure 1). The device used is a triaxial accelerometer capturing raw triaxial acceleration (±4 g) at 12.5 Hz. The collected data are wirelessly transferred to a secure cloud server via Bluetooth and can be accessed via a web interface. The tracker was attached to the leg above the knee with a plaster, changed once a week with assistance from a student. Participants were measured on Berg Balance Score (BBS), 30 Second Chair Stand Test (CST) and 6 Minute Walk Test (6MWT) at the beginning (baseline) of the study, midway, and at the end. After completing the baseline measurements and signing the informed consent, participants were assigned to two groups by block randomisation (Efird 2011).
Randomisation
For randomisation, an Excel sheet was prepared with the 38 participants' pseudonymised ID-codes, age, gender, and BBS scores. Using Excel's random function repeatedly, the 38 participants were randomly assigned to two groups (intervention and control), calculating for each repetition the group mean age, number of steps, BBS score, and gender distribution. The first repetition of randomised allocation that satisfied the following criterion was chosen: difference of group means less than 2 years of age, less than 2 points of BBS, and 4 or more males in each group.

Participants
The participants were recruited from two activity centres for the elderly in Lyngby-Taarbaek municipality, and the inclusion criteria were: age 65 or above, living independently at home, and being able to walk independently and maintain a standing position either alone or with the use of a cane or rollator. Participants signed an informed consent form approved by the Ethics Committee of the Technical University of Denmark (DTU DOC 18/00981).

Sample Size
A sample size calculation was made from literature guidance on the BBS and results of a previous 12-week feasibility trial with a cohort of 7 elderly participants (March-June 2017; unpublished). The feasibility study showed, as it subsequently became apparent, an untypically high average BBS gain of 11.2 points. The sample size calculation indicated that 2 groups of 20 would be required to detect a BBS gain of at least 5.0 points, using a power estimate of 80% and a significance threshold of 0.05.

Materials and Balance Training Programme
The study used the Moto tiles (www.Moto-tiles.com) as the playful training tool. A Moto tiles set consists of a number of tiles (typically 10) that connect to each other to form a surface. Each tile contains a microprocessor, a battery, IR communication, an FSR sensor, and 8 coloured LEDs in a circle. The FSR sensor can sense a step or a hit on a tile, and the LEDs will shine in different coloured patterns. The tiles communicate with a tablet via Bluetooth. Games are set up via the tablet, allowing for selection of any among a variety of different games that challenge the players in ways suitable to their abilities. The Moto tiles are designed to stimulate the physical activity level of all ages and to train motor abilities after injuries (Lund 2009;Lund and Marti 2009), and previous studies have reported positive effects on the balance of elderly users (Jessen and Lund 2017). Elderly users typically exercise one by one or in pairs on the tiles during repeated brief game sessions (2 min), playing interactive games that challenge their dynamic and static balance, agility, endurance, and sensorimotor reaction. The 12-week intervention plan engaged each participant in a training group of 5 elderly users. Each group met at their training centre twice weekly for their 1-h session. Each session was led by two training assistants (students) who managed the individual sessions and supported the elderly in the rare case that they might need physical assistance (Fig. 2). The training protocol included a combination of games that train both static and dynamic postural control. The planned 12-week trial period was divided into two equal periods of 6 weeks, separated by a 3-week break for Easter holidays. The original plan was to train participants in a training group of 5 elderly people each session, but small adjustments were made from time to time.
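The repeated-draw randomisation described at the start of this section can be sketched in Python. The field names and the decision to omit the step-count criterion (the paper lists steps in the sheet but states explicit thresholds only for age, BBS, and sex) are illustrative assumptions, not the authors' procedure.

```python
import random
import statistics

def block_randomise(participants, max_age_diff=2.0, max_bbs_diff=2.0, min_males=4):
    """Repeat random half/half splits until the group means satisfy the
    balance criteria reported in the paper: age difference < 2 years,
    BBS difference < 2 points, at least 4 males per group.
    `participants` is a list of dicts with hypothetical keys
    'id', 'age', 'bbs' and 'sex'."""
    while True:
        shuffled = random.sample(participants, len(participants))
        half = len(shuffled) // 2
        g1, g2 = shuffled[:half], shuffled[half:]
        age_ok = abs(statistics.mean(p['age'] for p in g1) -
                     statistics.mean(p['age'] for p in g2)) < max_age_diff
        bbs_ok = abs(statistics.mean(p['bbs'] for p in g1) -
                     statistics.mean(p['bbs'] for p in g2)) < max_bbs_diff
        males_ok = all(sum(p['sex'] == 'M' for p in g) >= min_males for g in (g1, g2))
        if age_ok and bbs_ok and males_ok:
            return g1, g2  # first draw meeting all criteria is kept
```

Keeping the first qualifying draw, rather than searching for the best-balanced one, preserves the random character of the allocation while guarding against badly skewed groups.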
If participants were prevented from attending their scheduled training session, they were offered a replacement session, sometimes on the same day when they could join the previous or the following team, the session then being extended by 12 min (cf. Fig. 2). Participants in both groups engaged in their normal physical and social activities, including their meeting twice weekly at the municipal activity centre for joint meals and social activity and one hour's light exercise (mobility training while sitting in chairs).

Control Group
Assistants met with each control group participant once a week for 5-10 min in order to: (a) encourage their continued participation, and (b) check whether the activity trackers were functioning and exchange the plaster holding the tracker.

Primary Outcome
The primary outcome was balance as measured by BBS. This test is widely used to measure the functional balance of old people and others (Rikli and Jones 1999; Li 2010). BBS consists of 14 individual tests that reflect movements in everyday life. In each test, the participant receives a score from 0 to 4 depending on their performance. The BBS scores are interpreted as 45-56 = low fall risk, 21-40 = medium fall risk, 0-20 = high fall risk, and a score < 36 indicates a fall risk close to 100% (Guccione et al. 2012). BBS is widely considered the gold standard for measuring the balance of older users (Langley and Mackintosh 2007).

Secondary Outcomes
The study had three secondary outcomes: CST, 6MWT and physical activity outside of training. The CST is used to measure the strength of the lower body (Rikli and Jones 2013). The test is performed in the following steps by instructing the participant: 1. Sit in the middle of the chair. 2. Place your hands on the opposite shoulder crossed at the wrists. 3. Keep your feet flat on the floor. 4. Keep your back straight, and keep your arms against your chest. 5. On "Go", rise to a full standing position, then sit back down again. 6. Repeat this for 30 s. The number of times the participant comes to a full standing position in 30 s is then counted. If the participant is over halfway to a standing position when 30 s have elapsed, it is counted as a stand. The 6MWT is used to measure aerobic capacity and endurance (Sherwood et al. 2016). The test is performed in these steps: A 20-m course is set up on a hard and level surface. The participants are asked to walk around the course for 6 min. The tester records how many metres are walked. If the participant needs a break, this is allowed, while the clock keeps running. Daily physical activity was recorded by SENS motion sensors 24/7 during the trial period for both intervention and control group participants. The outcome measure was the mean number of steps per day. The BBS, CST and 6MWT tests were conducted under the supervision of a certified physiotherapist and three student assistants, all blinded to group allocation.

Ethics
During recruitment, written and oral information about the study was given to all participants and their relatives. Interested participants were informed that they have the right to withdraw from the trial at any time. Performance data were recorded in a pseudonymous form. All data were collected, transferred and stored in accordance with GDPR guidelines. The study is approved by the Regional Ethics Committee (Center for Regional Udvikling Region Hovedstaden: H-19018499).

Statistical Analysis
Data were initially examined for normality violations, outliers, errors and missing values.
Missing post-trial values were replaced by mid-test values (Telenius et al. 2015). A two-sample t-test was performed to test between-group differences between the pre- and post-trial tests. A P value < .05 was considered statistically significant. All statistical analyses were performed with SAS JMP (v.9.4) and Microsoft Excel (2016).

Results
As shown in Fig. 3, out of 52 possible participants, 38 persons (73%) met the inclusion criteria and agreed to participate. Two participants (4%) were excluded for not meeting the inclusion criteria, and 12 people (23%) dropped out at the very beginning of the trial. During the 12-week study period, 10 out of 38 (26%) dropped out and 2 (5%) passed away; thus we ended up with data from 26 participants (14 (54%) in the intervention and 12 persons (46%) in the control group) at the end of the trial. The 12 participants who withdrew after randomisation were excluded from the statistical analyses. The final analysis included data from 26 participants (19 females, 7 males; mean age (SD) = 83.54 (7.12) years). All participants skipped one or several sessions due to temporary illness or other obligations. Out of the total of 24 sessions, the participation median was 20 and the mean was 18.92 (SD 2.37; max. 23; min. 14), corresponding to an attendance rate of 79%. The duration of the exercise is therefore reported for the group as 10 weeks of exercise exposure. Table 1 shows the scores on the BBS tests at baseline and at the 12-week post-trial. The intervention group increased their score on average by 5.0 points, the control group by 2.1 points. The difference between BBS baseline and BBS post-trial for each individual participant in the intervention group ranged from -2 to 11.

CST
The intervention group experienced a decrease in the number of chair stands of 1.00 on average, and the control group a decrease of 1.33. The difference was not significant (p = 0.96).

6MWT
The intervention group increased their mean 6MWT by 18.5 (SD: 72.2) metres from baseline to post-trial test. The control group increased their 6MWT by 11.0 (SD: 45.7) metres. The difference between the groups was not significant (p = 0.75).

Physical Activity
The control group walked on average 718.53 more steps per day (mean = 4859.8; SD = 356.61) than the intervention group (mean = 4140.5; SD = 418.08).

Discussion
This assessor-blinded, parallel-group RCT aimed to investigate the effects of a playful exercise intervention with Moto tiles on functional ability and daily physical activity. The study was not sufficiently powered to reveal between-group differences. The mean BBS score of both the control group and the intervention group increased during the 12-week intervention period, but while the observed increments were different between the groups, they did not achieve statistical significance (p = 0.11). We have made a post-test power analysis with 80% power, alpha 0.05, and with assumptions of a mean difference between intervention and control of 3 points and a pooled SD of 5. This post-test power analysis shows that it would require 45 participants in each group. Still, the randomisation, allocation, assessments and the intervention were found to be applicable, and future large-scale RCTs should include sufficient participants to clarify the effects of the Moto tiles.
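The post-test power analysis above can be reproduced with the standard two-sample normal-approximation formula. This is a sketch under the stated assumptions (mean difference 3 points, pooled SD 5, alpha 0.05, power 80%); the function name is hypothetical and the paper does not state which software was used for this calculation.

```python
from scipy import stats

def n_per_group(mean_diff=3.0, sd=5.0, alpha=0.05, power=0.80):
    """Required participants per group for a two-sample comparison:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / mean_diff^2."""
    z_a = stats.norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = stats.norm.ppf(power)           # 0.84 for 80% power
    return 2 * (z_a + z_b) ** 2 * sd ** 2 / mean_diff ** 2

print(round(n_per_group()))  # ~44, consistent with the ~45 per group reported
```

The normal approximation slightly understates the sample size compared with a t-distribution-based calculation, which likely explains the paper's rounding up to 45.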
As the BBS has been found to be with good internal consistency, reliability and good interrater reliability in community-dwelling older adults (Wang et al. 2006;Marques et al. 2016), the measurement properties should have the strength to find clinically relevant differences in such a trial. Furthermore, the BBS has been found to be a valid instrument and is recommended for clinical practice (Marques et al. 2016). Future studies should aim to have sufficient power to investigate clinical relevant differences on the BBS scale which has been reported to be 5 points for a range 35-44 (Donoghue et al. 2009). In this study, the relatively small number (n = 14) of participants in the intervention group increased their BBS score by 5 points, which suggests that the intervention period could possibly benefit from being longer in duration or more intense (minutes of playful exercise per week). Similar conclusions can be drawn from the findings on the CST, the 6MWT and the daily physical activity. No between-group differences were found. As the study was not powered for such an investigation, the null-hypothesis cannot be confirmed and future investigations could reveal relevant clinical differences on the abovementioned outcomes. Previous investigations have found large and promising effects from a Moto tiles intervention (Lund and Jessen 2013;Jessen and Lund 2014;Lund and Jessen 2014). However, we have not been able to replicate these findings, our results indicating modest, non-significant, improvements of balance outcomes. The difference between (Lund and Jessen 2013;Jessen and Lund 2014) occurred by chance or if Moto tile exercise indeed has a large effect on the balance of older adults. It was noticeable that the playful exercise created a joyous social atmosphere among the participants and they remarked spontaneously that the playful sessions were much more fun than their standard light "chair training". This motivational outcome is important for adherence to any exercise programmeand indeed for general wellbeing. Limitations This RCT comes with several limitations. First of all, this study was originally designed as a normal superiority RCT, powered to investigate the effects of the Moto tiles intervention. After completing the study in the available time, it was not possible to include sufficient participants and no final conclusion can be made upon the study effects or lack thereof. Studies that fail to reach the pre-specified sample size will be prone to type II errors (failing to reject the null-hypothesis when it should be rejected) but most importantly, the confidence intervals of the effect sizes and thus the mean estimates are wide and comes with a high degree of uncertainty. The conclusion should instead be focused on applicability and adherence to the intervention. Secondly, and probably most relevant, we did not measure the participants' quality of life, motivation or self-efficacy for exercise. In hindsight, instruments such as the Short Form 36 could have produced useful valid information on the quality of life effects of the intervention (Walters et al. 2001). Similarly, instruments such as the selfefficacy for exercise questionnaire could have been included to clarify the motivation and self-efficacy for exercise and physical activity (Resnick and Jenkins 2000). In summary, to provide the most useful results on the feasibility of the intervention, adherence and motivation for participation should be investigated using specific instruments. 
Conclusion
This study investigated an innovative playful exercise intervention with Moto tiles in a population of older adults. The randomisation process, allocation of participants, 24/7 tracking of physical activity over many weeks, outcome assessments, and adherence of both intervention and control participants were found to be applicable. Future large-scale RCTs should include sufficient participants to clarify the effects of the Moto tiles and whether the intervention provides a clinically relevant effect on the functional abilities of older adults. Furthermore, future studies should use specific instruments to investigate the effects on quality of life, motivation and self-efficacy for exercise and physical activity in older adults.

Acknowledgments
The work presented in this paper is part of project REACH (http://reach2020.eu), which received funding from the European Union's Horizon 2020 Research and Innovation program under grant agreement No 690425. The authors are grateful to our REACH partners for support and to all project contributors: our very dedicated participating citizens of the two care centres, Bredebo and Virumgård, staff at the centres, our student assistants and physiotherapist Mr. Andreas Lund Hessner. We also thank Mr. Christian Bøge Lyndegaard, DTU, for statistical analysis and advice.

Compliance with Ethical Standards
Competing Interests The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-02-20T09:17:03.880Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "fb3e9462e3394f2b2d4854064bf556c445e61c3b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12062-020-09273-8.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b5b0860e2ab67d3228268d6516e8a7a7e01b60e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231694598
pes2o/s2orc
v3-fos-license
The category of number in Latin: meanings of the plural with verbal nouns
This paper deals with the category of number in Latin, specifically with the different meanings of the plural with verbal nouns. In the first section, I establish a reference framework on the concept of number, and in particular the so-called "number anomalies". The second part of the paper addresses the functional complexity of the category of number itself, so it presents and exemplifies the four different meanings of plural forms with verbal nouns and explains them in light of the concepts of prototype and recategorization. The third section aims to identify the factors yielding a determined plural reading; in this way, I explain the connection between some meanings of the plural and the types of events that verbal nouns describe. Lastly, in the final section, I discuss the main results of this study.

Introduction
Latin presents a number system based on the opposition singular versus plural, where singular expresses "one" and plural designates "more than one". Aside from structuralist discussions about which one is the marked term, 1 Latin grammars tend to deal with the study of this category by listing so-called "number anomalies", that is, instances where singular forms are used instead of plural forms, and vice versa (Bassols 1956: 24-28; Kühner and Stegmann 1971 [1914]: II, 67-89; Löfstedt 1956 [1928]: II, 12-65; Pinkster 2015: 35-37). 2 These anomalies include the collective singular, generic singular, poetic uses of singular and plural, and the rhetoric plural, among other uses. 3 As expected, there are specific works that focus on these number anomalies with different perspectives and purposes (cf. e.g. Correa 1989; Sánchez 1977). However, the majority of these works also take into account the typology of nouns (collective nouns, mass nouns, abstract nouns, etc.) in order to explain some of the particular meanings that the opposition singular/plural assumes in Latin. This is how examples such as (1)-(3) are explained:
(1) (Ov. Met. 8.526-527) Alta iacet Calydon: lugent iuuenesque senesque, uulgusque proceresque gemunt… 'Lofty Calydon is brought low. Young men and old, chieftains and commons, lament and groan…' 4
(2) (Caes. Gal. 5.14.2) Interiores plerique frumenta non serunt, sed lacte et carne uiuunt pellibusque sunt uestiti 'Of the inlanders most do not sow corn, but live on milk and flesh and clothe themselves in skins'
(3) (Sen. Oct. 210-211) deus Alcides possidet Heben nec Iunonis iam timet iras 'Alcides as a god possesses Hebe and now no more fears Juno's wrath'

2 For questions relating to Agreement, cf. Pinkster (2015: 1243-1301).
3 Classifications of these anomalies are varied and may cover different uses (see the pages in the manuals cited above).
4 Translations have been taken from The Loeb Classical Library.

Example (1) contains the collective noun uulgus, which does not have a plural form in Latin. The anomaly that this kind of noun represents for the number system can be understood if we look at its semantic properties: these nouns denote entities interpreted as a unit or a collective but with a plural content, so in terms of semantics they include the notions of plurality and singularity at the same time. In example (2), frumentum appears in the plural, although this is a mass noun; here, it could be interpreted as a noun denoting different types of the concept expressed by its singular (i.e.
'different kinds of corn'), thus not referring to plurality in its strictest sense. 5 A similar situation can be seen in (3), where the noun iras, which in principle should not admit the plural form since it refers to an abstract concept, denotes 'the different acts in which that feeling can show up'; indeed, the manifestations of Juno's anger are what intimidate Alcides in the text. 6 The particular meanings that singular (1) and plural (2)-(3) forms take on in the quoted examples are related to the semantics of the nouns themselves. Prototypical nouns, such as hammer or table, present the following features: (i) they have a great temporal stability, that is, their properties change very little over repeated perceptual scans; (ii) they are multi-featured bundles of experience; (iii) they are concrete, and made out of relatively durable materials; (iv) they tend to be spatially compact rather than scattered all over the perceptual space; and (v) they tend to be countable, in contrast to non-prototypical mass nouns (Givón 2001: 51). The closer a noun is to the semantic features that configure its class (the noun category), the more freely it can be employed within the opposition singular/plural without reading problems or restrictions (Ramos 2009: 94). Thus, the behavior of number with the analyzed nouns (uulgus, frumenta and iras) is fully accounted for, given that these are not prototypes within the noun category: they are not static, well-delimited, concrete, compact, and countable objects or things. We can see, then, that some nouns deviate from the values of their prototype. In this paper, I will focus on verbal nouns, that is, nouns that derive morphologically from verbs (e.g. sartio, 'the process of hoeing' < sario, 'to hoe'; uastatio, 'the action of laying waste or ravaging (territory, etc.)' < uasto, 'to lay waste'). The particular status of this type of noun is that it exhibits verbal characteristics (basically, the capacity to denote events, with all the consequences this might entail). 7 Given the above, verbal nouns are undoubtedly one of the most interesting and revealing subgroups of the grammatical category of nouns. Rosén (1981), for instance, amongst other scholars, shows ways in which the category of the plural combines with verbal nouns: "there are two differing phenomena that manifest themselves in the countability of a verbal noun: concretization on one hand, and on the other hand transformation of plural or repetitive action" (Rosén 1981: 29). Given their morphological derivation, it might be expected that plural verbal nouns should designate plurality of actions, that is, repetition or iterativity, one of the phenomena mentioned by Rosén (1981: 29-33). However, this is not always the case, as I will show in this paper. The specific questions addressed in this article are as follows: what are the other meanings that the plural form can acquire with verbal nouns? How can we explain their variety of meanings? What is the relationship between these meanings and the type of events that verbal nouns express? To answer these questions, I will examine verbal nouns ending in -tio (such as certatio, 'struggling for superiority', or aedificatio, 'an act or process of building; a building or structure') within a reference corpus that comprises the complete works of Plautus, the first and second books of Cicero's Philippicae, books 1-4 of Livy's Ab urbe condita, and Columella's De re rustica.
Nevertheless, in order to extend the results of this study, this corpus has been amplified through examples extracted from other works dating from the period under scrutiny (archaic and classical Latin). 8
7 Of these consequences, I highlight the fact that these nouns can mirror the semantic structure of their corresponding base verbs (for example, through the use of subjective and objective genitives). 8 Specifically, the following list of authors and works are quoted in this article: Caesar (De bello ciuili), Cicero (Epistulae ad Atticum, Epistulae ad familiares, Pro Milone and Tusculanae disputationes), Seneca the Younger (Epistulae morales ad Lucilium), Petronius (Satyrica), Tacitus (Annales) and Suetonius (De uita Caesarum).
2 Meaning of the plural with verbal nouns
As mentioned before, the plural form with verbal nouns (also called "action nouns") is associated with the repetition or iteration of the event denoted by the noun itself. This iterative reading of the plural has been clearly described by typological studies: "number is normally shown only when it can be understood as signalling 'occurrences', or 'cases' of 'verbing'" (Comrie and Thompson 2007 [1985]: 354). In my corpus, a significant group of verbal nouns is interpreted in this way, as shown in the following examples:
(4) (Col. 3.21.5) … quae plerumque populationibus uolucrum pluuiisque aut uentis lacessita dilabitur '… [the early vintage], which, being assailed by the plunderings of birds and by rains or winds, usually comes to ruin'
In examples (4) and (5), the plural of populationibus and oppugnationibus refers, respectively, to repeated plundering by birds, rains, and winds, and to the numerous occasions upon which cities are besieged. Thus, here plurality denotes repetition of the event and has a lexical aspectual function: it expresses iteration. It is worth noting that the circumstances and participants involved in these events are, in principle, the same in each instance or, at least, there are no key differences between the distinct occurrences of the event. Nevertheless, for various languages, including Latin, the meaning addressed above is neither the only reading of the plural with verbal nouns nor the most frequently documented: plural forms can also take on references to different entities, that is, refer to more than one entity (which, in Rosén's terms, is called "concretization"). In the cases of "concretization", verbal nouns present the highest degree of nominality, mainly because they denote the result or effect of the event expressed by their corresponding base verbs (e.g. aedificatio meaning 'a building, a structure', derived from aedifico, 'to build'). This is also true of nouns that refer to participants of the events: for example, an instrument (munitio as 'a defence work' or 'a fortification', from munio, 'to provide [a place] with defensive works'), a location (cenatio, 'dining room', from ceno, 'to have dinner [with place or host indicated]') or any other similar content (supplicatio, meaning 'thanksgiving', a lexicalized noun deriving from supplico, 'to make propitiatory offerings'). 9 As one would expect, the meaning of the plural as a reference to different entities is also documented within my corpus.
The examples (6)-(8) illustrate this meaning with result nouns:
(6) (Cic. Mil. 53) Ante fundum Clodi quo in fundo propter insanas illas substructiones facile hominum mille uersabatur ualentium … 'Was it in front of Clodius's manor, a manor in which, thanks to those gigantic basements, a thousand able-bodied men were easily accommodated …'
(7) (Tac. Ann. 6.45) Milies sestertium in munificentia ea conlocatum, tanto acceptius in uulgum, quanto modicus priuatis aedificationibus ne publice quidem nisi duo opera struxit … 'One hundred million sesterces were invested in this act of munificence, which came the more acceptably to the multitude that he was far from extravagant in building on his own behalf [lit.: private buildings]; whilst, even on the public account, the only two works erected were …'
In (6) and (7), substructiones and aedificationibus make reference to those objects (basements and buildings) that are created as a result of the actions described by their base verbs: substruo and aedifico, respectively. On the other hand, in (8) the noun cogitationes, 'ideas', is understood as an abstract product of the general event of thinking (cf. its base verb cogito, 'to think'). This interpretation as a product, and not as an event noun, is reinforced by the adjective aliquas, 'some', which determines the noun. It is also unlikely that the action or, rather, actions of thinking can be literally 'expressed' (exprimet). The plurality of these verbal nouns simply denotes that the number of entities is more than one, regardless of whether they are concrete entities, as in (6) and (7), or abstract entities, as in (8). Examples (9) and (10) show that verbal nouns that denote a component of their base verbs (such as, for instance, a location or an instrument) admit the same plural reading: 10
(9) (Petr. 77.4) Habet quattuor cenationes, cubicula uiginti, porticus marmoratos duos, susum cellationem, cubiculum in quo ipse dormio … 'It has four dining-rooms, twenty bedrooms, two marble colonnades, an upstairs dining-room, a bedroom where I sleep myself …'
(10) (Liv. 1.33.4) nam et urbs tuta munitionibus praesidioque firmata ualido erat… 'for the city was protected by fortifications and was defended by a strong garrison…'
In phrase (9), cenatio refers to the place where the event described by its base verb ('to have dinner') takes place; its plural form expresses a number of real-world entities that is always more than one; moreover, in this example, the number is specified by the quantifier quattuor. Similarly, example (10) shows how the plural noun munitionibus, meaning 'fortifications', denotes the instrument of its base verb munio, that is, something with which someone fortifies a place. Here, again, this plural noun refers to a plurality of concrete entities. Finally, according to example (11), the plural marker of supplicationes, 'thanksgivings', functions in a similar way: the plural noun is understood as 'different ceremonies of thanksgiving':
(11) (Tac. Ann. 14.59) Decretae eo nomine supplicationes, utque Sulla et Plautus senatu mouerentur 'On that ground, a national thanksgiving was voted, together with the expulsion of Sulla and Plautus from the senate'
In view of examples (6)-(11), the verbal nouns whose plural can refer to different entities denote either the result of the events described by their base verbs or a component or particular event related to the verbs in some way.
10 It is important to note that these components can not only be typical arguments (which are necessarily expressed at syntax level).
Following the classification of arguments by Pustejovsky (1996 [1995]: 63-64), they can also be interpreted as default arguments (parameters that participate in the logical expressions but are not necessarily expressed syntactically) and shadow arguments (parameters that are semantically incorporated into the lexical item; they can only be expressed by operations of subtyping).
Consequently, in these examples, the nouns never adopt a reference to the event described by their base verbs in a strict sense. In the corpus studied, however, there are some examples in which a plural verbal noun does not carry an iterative meaning nor refer to several entities. Instead, the noun is associated with other values. This is the case of example (12), where the plurality of sationes refers neither to repeated actions of sowing nor to the cultivated lands (understood as a result of sowing):
(12) (Col. 3.13.5) Sed hae, quas rettulimus, uinearum sationes pro natura et benignitate cuiusque regionis aut usurpandae aut repudiandae sunt nobis 'But these methods of planting vineyards, as we have given them, are ours to employ or reject according to the nature and favourableness of each region'
The only way to correctly interpret (12) is by understanding the plural as referring to different 'sorts of' sowing (here reflected in the English version through 'methods'); this interpretation is made evident by the relative clause quas rettulimus, which makes clear that these types of sowing have been addressed by Columella in previous sections. In this example, we could say that the plural carries the meaning of 'sorts of' (sometimes 'types of' or 'kinds of'). In all of these cases, the plurality of verbal nouns makes reference mainly to the particular ways or manners in which a certain process can be carried out. A clear example can be seen in (13), with the same verbal noun sationes:
(13) (Col. 2.10.29) Viciae autem duae sationes sunt: prima, qua pabuli causa circa aequinoctium autumnale serimus septem modios eius in unum iugerum, secunda, qua sex modios mense Ianuario uel etiam serius iacimus semini progenerando 'Of vetch, however, there are two sowings: the first about the time of the autumnal equinox, for the purpose of forage, in which we sow seven modii to the iugerum; the second in the month of January or even later, when we scatter six modii for the production of seed'
This example clearly shows that the plural form of the noun sationes, determined by the numeral quantifier duae, once again refers to the two ways or methods of carrying out the event described by the verbal noun: the sowing. In fact, the noun is followed by an explanation of the characteristics that define both processes in terms of time, purpose, and quantity of grains. Thus, in comparison with the iterative reading of the plural, and even if the Object is the same in both types of sowings (cf. uiciae), the circumstances that surround the event are relatively different in each case. In terms of prototypes, Corbett (2000) considers that "the likely interpretation of a number form depends in part on the position of the head noun in the Animacy hierarchy (speaker > addressee > 3rd person > kin > human > animate > inanimate)" (Corbett 2000: 86). According to this proposal, verbal nouns are at the bottom of this hierarchy (they are inanimate) and so are good candidates for recategorizing readings of the plural, "since the 'normal' singular-plural opposition is typically not required" (Corbett 2000: 86).
In principle, those nouns that refer specifically to events are not expected in the plural; nevertheless, as we have shown, they may not only occur in the plural but also designate different contents. The functional complexity of the category of number in Latin does not end at this point. Exceptionally, in certain contexts, some verbal nouns in -tio intensify the event that they designate through the plural, such as, for instance, palpationes (14) and exspectationes (15). In (14), palpationes is not limited to designating repeated actions of touching or caressing (an iterative aspectual reading), but seems to intensify the way in which these actions occur, thus granting, in this specific case, a value of reprimand. 11 Similarly, the plural form exspectationes in (15) adds an intensification of the emotion ('feeling of hope') to its meaning, an interpretation that is reinforced by the adjective crebras. This value of intensification 12 has been recognized in several languages as a special use of the plural and can have different effects such as dissatisfaction, affectivity, politeness, etc. (Corbett 2000: 235-239). As Corbett (2000: 238) has noted, these effects are typically produced because the receiver usually knows (from the context or from his general knowledge) the real-world number of the referent. In this way, there is a discrepancy between the sender's presentation of the situation and the receiver's knowledge. In short, the plural forms of verbal nouns seem to have at least four meanings (or functions) in Latin: (i) reference to different entities (prototypical plural), (ii) repetition, (iii) reference to different sorts of the verbal noun, and (iv) intensification. Given this variety of meanings, the question that arises now is: what factors determine the particular reading awarded to the plural of a verbal noun? For the first plural reading the answer is easy: the prototypical plural is limited to verbal nouns that either denote the result or effect of the event described by their base verbs, that is, verbs with an effected or affected Object (substructiones, 'basements'), or that refer to participants of the events (the location in cenationes, 'dining rooms') or lexicalized nouns (supplicationes, 'different ceremonies of thanksgiving'). However, I would argue that value (iv), seen in palpationes, 'caresses', in (14), should not be included in our description here, since this seems to be a marginal interpretation: in my corpus, only a very limited number of plural verbal nouns carry a value of intensification. 13 Therefore, in the next section, I limit myself to comparing the factors yielding iterative readings, such as populationibus, 'the reiterated plunderings', in example (4), versus those yielding 'sorts of' or 'types of' interpretations, such as sationes, 'ways of sowing', in (13). As we shall see, in these two examples we are dealing with situations in which plural verbal nouns specifically describe the event designated by their base verbs.
3 Factors that determine how the plural is interpreted
To determine these factors, I will start with satio, a verbal noun whose plural form can either be interpreted according to the 'sorts of' reading (cf. example [13] examined above) or as an iteration, as shown in (16). In examples (13) and (16), satio is related to the same meaning as its base verb sero, 'to sow'.
Thus, the recategorization of its plural to one interpretation or the other cannot be due to the meaning of the verb, but must follow from a series of compositional factors related to the specific noun meaning that is activated in a given discourse. The first point to consider, then, is which characteristics are associated with iterativity, and then to compare them with the two meanings of the plural verbal nouns, that is to say, iteration and the 'sorts of' value. According to Comrie (1976), iterativity is "the repetition of a situation, the successive occurrence of several instances of the given situation" (Comrie 1976: 27). With this in mind, the iterative interpretation of sationes in (16) refers to a number of individual and completed occurrences of sero that take place at different times. This means that, in order for an event to be followed by another equal one, this first event must end: only an ended event may be repeated. As a result, we can verify that it is in such a context that the plural of satio designates different individual events. The case of (13), where the plural noun sationes gets the 'sorts of' reading, can be interpreted in a different way. Here, this verbal noun makes reference to any event of sowing of vetch (i.e. just to the pure action). Under this type of reading, the noun behaves in a generic way, as in those situations that are referred to, for instance, by infinitives such as errare humanum est, where errare designates not a situation but the pure action of 'erring'. In cases like this, the only possibility for expressing plurality would be to designate different acts of errare, that is to say, 'ways of errare'. In the same way, sationes uiciae does not designate individual events, but different pure actions of serere uiciam; it denotes generic or potential events whose effective realizations are not relevant. Thus, it is not an "event" noun in a strict sense, but a noun naming the pure "action". To summarize, the difference between an iterative reading and a 'sorts of' reading is that the former requires a reference to a series of ended occurrences, while the latter does not refer to any instance at all, but to a generic action. Such a distinction does not depend on the meaning of the verb; in fact, it is not even due to the context, although the linguistic conditions of the contexts of these two plurals can determine the change. In the first type, the verbal noun sationes is temporally anchored to the construction pinguere humum. This means that in (16) the sowings are located relative to the time expressed by pinguere; in fact, to the extent that the reiterated sowings lead to the ground being enriched, they are previous events. In this way, the iteration expressed by the plural is only possible if the verbal nouns refer to events anchored to a determined time somehow: only an event that happens, has happened or is supposed to happen may be repeated (i.e. have an iterative reading). In the second case (i.e. the 'sorts of' reading), the same noun sationes functions as the topic of its sentence and, thus, is not located in a specific time; consequently, the effective realization of the different types of sowings is not at all guaranteed. It is important to note that even if sationes has an Object encoded in the genitive (cf. uiciae), this participant does not delimit the action of serere. A consequence of these syntagmatic conditions is that, in principle, all verbal nouns must receive one interpretation or the other.
The iterative reading will always be associated with a determined time because it refers to actual events, whereas the 'sorts of' reading will not, mainly because it does not refer to any specific instance of the event. To support the idea that a relationship exists between a reference to individual instances of the events and iteration, on the one hand, and between a reference to generic events and a 'sorts of' reading, on the other, I offer below some examples of both. Firstly, examples (17)-(19) are indicative of those cases in which only an iterative reading of the plural nouns is available. This way, according to our previous analysis, the events described by these nouns are supposed to be understood as individual instances of the same event. Indeed, the plural noun uenditionibus in (17) refers to different occurrences of the action of selling carried out by Caesar (cf. eius). This is the case because the previous phrase mentions that Caesar sold different estates (dispersed ones, by the way), something that supposes more than one individual occurrence of this event. The nouns altercationes and eruptiones in (18) and (19) admit the same analysis: e.g. in (19), the sorties from the town and the firebrands by the Albici happened several times, as crebrae specifies. In addition, from the quoted examples we can confirm that, in a broad sense, the verbal nouns describe events that are carried out or repeated in the same way and by the same participants, as specified in the syntax (cf. disceptantium). The only change here is the time of the occurrence. A final interesting example can be seen in (20), which illustrates those cases where the plural admits an iteration reading even though the verbal noun describes an event that has not actually happened. This reading is possible because the verbal noun appears in a general instruction. In (20), the plural form iterationibus takes an iteration reading because it designates a series of actions of iterare that happen every time somebody wants to resoluere ueruactum in puluere. The situation in which it is included is an instruction; in this sense, the noun iteratio is inserted in a determined but virtual (not factual) time and space, since the instruction has not actually occurred but is only destined to be accomplished. In contrast, the following passages exemplify plural forms with the 'sorts of' meaning:
(21) (Cic. Tusc. 4.59) Earum igitur perturbationum, quas exposui, uariae sunt curationes. Nam neque omnis aegritudo una ratione sedatur; alia est enim lugenti, alia miseranti aut inuidenti adhibenda medicina 'The means then of attending to the disorders I have enumerated are varied. For not every distress is assuaged by one method; for there is one remedy to be applied to the mourner, another to the compassionate or envious'
(22) (Col. 9.16.2) Sed iam consummata disputatione de uillaticis pecudibus atque pastionibus, quae reliqua nobis rusticarum rerum pars superest … 'Having now finished the discussion of the animals kept at the farmhouse and their feeding, the part of husbandry which still remains to be treated …'
In these examples, curationes (21) and pastionibus (22) are not participants of actual events, but the topics of their sentences. In both cases, curationes and pastionibus are presented as the pure actions of curare perturbationes and pasci uillaticos et pecudes, respectively. Plurality in cases like these can only express different pure actions and not repeated events.
This means that they express different ways in which the actions can be carried out. In (21), specifically, the methods of attending (curationes) are said to be varied (uariae), because every distress or disorder requires a different one. Example (22) also admits the same interpretation, although due to space constraints the description of the methods of feeding is not quoted here. A similar example is insitionum in (23), where the noun appears as a determiner of tria genera, which already contains the 'sort' reference:
(23) (Col. 5.11.1) Tria genera porro insitionum antiqui tradiderunt: unum, quo resecta et fissa arbor insertos surculos accipit. Alterum … 'Further, the ancients have handed down to us three kinds of grafting; one in which the tree, which has been cut and cleft, receives the scions which have been cut; the second …'
It is important to note that, in general, this interpretation of verbal nouns appears in contexts that contain instructions about some processes or events. Such instructions are widely documented within my corpus in Columella's work. After my commentary of examples (17)-(23), it should be noted that the plural verbal nouns discussed so far could, in principle, receive the other reading in precise syntagmatic conditions, even if this is not attested in my corpus. The conditions that must be met are: (i) they must be a nominal predicative of an event conceived as having an end point; or (ii) be a noun naming the "action", that is, denoting a generic event. All in all, an examination of the plural with verbal nouns and, specifically, with these event nouns, shows that in Latin this marker can carry two different meanings: an iterative meaning and a 'sorts of' meaning. Under the first interpretation, the events described by verbal nouns are understood as a series of individual instances of the same event, while under the second, the verbal noun is not the head of an event expression, but simply the generic designation of this event.
Conclusions
The principal conclusions drawn by this study are as follows:
1. There are at least four types of interpretation for the category of number with verbal nouns: reference to different entities, iteration meaning, 'sorts of' meaning, and intensification.
2. The interpretation of the plural as referring to different entities is reserved for verbal nouns that denote the result or effect of the event expressed by the base verb or that assume the reference of an entity (an instrument, location, etc.) or of a particular event (e.g. a celebration) associated with this verb.
3. The other interpretations of plural forms, that is, the iteration meaning, 'sorts of' meaning, and intensification value, are linked to verbal nouns referring to an event.
4. Two of the meanings attributed to the plural with verbal nouns can be explained through linguistic conditions: the iterative interpretation of the plural is connected to nouns describing specific events (anchored to a determined time somehow), while the 'sorts of' interpretation is related to nouns denoting generic events (not located in a specific time).
5. Finally, as a reflection, the analysis of the category of number with verbal nouns has proven that these nouns are far more complex in their behavior than expected, given the limitations of their own category.
2020-12-24T09:07:28.354Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "e04f56474220ea16d1fcd5763daa172d4854c3c5", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/joll-2020-2011/pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "5bc6fb1c093b5b5d744138cd9971a508545190c7", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
252724696
pes2o/s2orc
v3-fos-license
Outcomes of Surgical and Mechanical Thrombectomy in Massive Saddle Pulmonary Embolism: A National Perspective
Introduction Saddle pulmonary embolism (PE) is a type of central PE that involves the bifurcation of the pulmonary arteries. First-line treatment is usually systemic thrombolytics, but surgical and mechanical thrombectomy (ST and MT) are used for patients with contraindications to thrombolytics or right heart strain. This study compares surgical and mechanical thrombectomy trends and outcomes in patients with saddle PE. Methods The data were extracted from the National In-Patient Sample (NIS) from 2016-2018 using the International Classification of Diseases-10-Clinical Modification (ICD-10-CM) diagnosis codes. We used the Cochran-Armitage trend test to analyze the trends of ST and MT and the chi-square test for statistical analyses. A two-tailed p-value of <0.05 was considered statistically significant. Results The overall trend of MT in saddle PE rose from 2016 to 2018, while ST remained stable. Around 95% of patients undergoing ST/MT were emergent admissions, with 82.5% occurring in teaching hospitals. Patients aged >65 years and those with greater comorbidity burdens were more likely to undergo MT over ST. In-hospital mortality after ST was 15.1%, and after MT was 11.1% (p < 0.001). The most common complications after ST were congestive heart failure (CHF) and atrial fibrillation (AF), and after MT were vascular events and CHF. Conclusion The use of mechanical thrombectomy steadily increased during the study period. ST is more common in large/teaching hospitals, weekend admissions, and patients transferred from other facilities. MT is more common in elderly patients with a higher comorbidity burden. Patients who underwent MT had lower mortality, shorter length of hospital stay, and fewer post-procedural complications.
Introduction
Acute pulmonary embolism (PE), a manifestation of venous thromboembolism, is the third most common cause of cardiovascular death after myocardial infarction and stroke [1]. According to physiological effects, pulmonary emboli may be classified as high-risk (super-massive or massive), intermediate-risk (submassive), or low-risk [1]. With the advent of CT pulmonary angiography as the gold standard for diagnosing pulmonary embolism, the exact anatomic locations of PE can also be determined. Based on the level of proximal extension, PE can be anatomically classified as central, lobar, segmental, and sub-segmental [2]. Central pulmonary embolism is diagnosed when thrombi are found in the main trunk of the pulmonary artery and the right or left pulmonary arteries [3]. Saddle PE refers to a specific type of central PE: an embolus lodged in the bifurcation of the main pulmonary artery trunk, often extending into the right and left pulmonary arteries [4,5]. Saddle emboli are more commonly (but not always) high-risk (super-massive or massive) or intermediate-risk (sub-massive) [6]. First-line treatment is usually systemic fibrinolysis, which carries a risk of significant bleeding [8]. As an alternative to fibrinolysis or for patients with contraindications to fibrinolysis, surgical or mechanical thrombectomy can be employed to rapidly reverse PE-related right ventricular failure and cardiogenic shock [7]. A saddle configuration does not currently alter the treatment approach.
However, there is evidence of a higher rate of hemodynamic compromise and complications (cardiac arrest, shock, respiratory failure, mechanical ventilation, longer hospital stays) in saddle PE [6,9], although there is no significant difference in short-term mortality between saddle and non-saddle PE [6,10]. This study aimed to compare the trends and in-hospital outcomes of surgical and mechanical thrombectomy in treating hospitalized patients with saddle PE. We utilized one of the most extensive population-based datasets to produce national estimates from hospitalized saddle PE patients.
Data source
We extracted our study cohort from the National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality (AHRQ) [11]. NIS is one of the largest all-payer publicly available databases on inpatient discharges from U.S. hospitals maintained by the AHRQ [11]. The NIS approximates a 20% stratified sample of discharges from US community hospitals, excluding rehabilitation and long-term acute care hospitals, and contains more than 7 million hospitalizations annually [11]. With the established survey weights in NIS, these data can be weighted to represent the standardized U.S. population and obtain national estimates with high accuracy [12].
Study population and design
NIS data from 2016-2018 were queried using International Classification of Diseases, 9th Revision, Clinical Modification and International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) diagnosis codes I26.02 and I26.92 for saddle PE. Mechanical thrombectomy and surgical thrombectomy were identified by using ICD-10-CM procedural codes. In our final analysis, we only included patients who underwent either surgical or mechanical thrombectomy (ST or MT). Detailed data regarding patient demographics and hospital-level characteristics such as geographical region, size, and teaching status were extracted as supplied by NIS [13]. We estimated comorbidities using the Elixhauser comorbidity software [14].
Statistical analysis
To establish the trend, we calculated the proportion of hospitalizations among saddle PE patients who underwent either MT or ST each year and used the Cochran-Armitage trend test for analysis (a minimal illustrative sketch of this test is given below). We performed descriptive statistics to present the baseline characteristics of saddle PE patients who underwent either MT or ST. We also estimated post-procedural outcomes by thrombectomy type and compared them using the chi-square test. We utilized SAS 9.4 (SAS Institute, Cary, NC) for all analyses and included designated weight values to produce nationally representative estimates [12]. We considered a two-tailed p-value <0.05 as statistically significant.
Temporal trends of MT and ST
Between 2016 and 2018, there were 47,820 hospitalizations for saddle PE. Of those patients, 1705 underwent MT, and 695 underwent ST. The percentages of each intervention relative to total admissions are shown in Figure 1. We observed a rise in the use of MT, from 3.22% (n = 460) in 2016 to 3.83% (n = 620) in 2017. Meanwhile, ST as an intervention remained stable over all three years, with no significant differences observed.
Baseline characteristics
The overall population included more males than females (53% vs. 47%) and more white people than other ethnicities (69% vs. 31%). However, there was no significant gender or race difference among the patients undergoing ST vs. MT.
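Before turning to the group comparisons, the Cochran-Armitage trend test referenced in the statistical analysis can be sketched as follows. This is a minimal, unweighted reimplementation rather than the authors' SAS code: the 2016 and 2017 MT counts are taken from the text, the 2018 count is inferred from the reported total of 1705, and the yearly admission totals are back-calculated approximations from the reported percentages; the published analysis additionally applied NIS survey weights.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions
    across ordered groups (here: admission years)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    if scores is None:
        scores = np.arange(len(totals), dtype=float)

    p_pooled = events.sum() / totals.sum()      # pooled proportion
    t = np.sum(scores * (events - totals * p_pooled))
    var = p_pooled * (1 - p_pooled) * (
        np.sum(totals * scores ** 2) - np.sum(totals * scores) ** 2 / totals.sum()
    )
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))               # z-score, two-sided p-value

# MT counts: 460 (2016) and 620 (2017) are reported; 625 (2018) is inferred
# from the reported total of 1705. Admission totals are approximations
# derived from the reported percentages (460/0.0322, 620/0.0383, remainder
# of the 47,820 total hospitalizations).
mt_counts = [460, 620, 625]
admissions = [14286, 16188, 17346]

z, p = cochran_armitage_trend(mt_counts, admissions)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```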
A higher proportion of patients in the >65 years age group underwent MT over ST, while the opposite was true for the 18-34 and 35-49 years age groups (p < 0.0001). Patients with higher comorbidity scores tended to undergo MT over ST, while those with lower scores underwent ST over MT (p < 0.0001). When looking at specific comorbidities, patients with hypertension, uncomplicated diabetes mellitus, chronic pulmonary disease, iron-deficiency anemia, and metastatic cancer were preferentially assigned to the MT group (p < 0.05). On the other hand, more patients with obesity, complicated diabetes mellitus, congestive heart failure, valvular heart disease, coagulopathies, and fluid-electrolyte disorders underwent ST over MT (p < 0.05). Overall, 95% of patients undergoing ST or MT were emergent or urgent admissions, with 82.5% of ST/MT occurring in teaching hospitals. Patients admitted directly from the hospital's emergency department tended to undergo MT, while those transferred from other facilities had a higher chance of undergoing ST (p < 0.0001). Similarly, patients admitted on weekdays had a higher rate of MT, while those admitted on weekends had a higher rate of ST (p = 0.004).
Temporal trends of surgical and mechanical thrombectomy
There has been a steady increase in MT-capable centers in the United States over the past 10 years, more commonly around major cities [15]. A study from 2019 demonstrated a 10-fold increase in the utilization of catheter-directed thrombolysis (CDT) over 12 years [16,17], while an earlier study from 2012 reported a sixfold increase over five years [18]. The increase in mechanical thrombectomy utilization results from increased access to capable centers and the decreased overall morbidity and mortality associated with the intervention. In 2014, the FDA approved the use of mechanical thrombectomy to treat PE [19]. The use of MT for the treatment of PE has been increasing since then, and our data reflect the same trend. On the other hand, the use of surgical thrombectomy has remained relatively stable over the years. This was expected, as there have been no innovations or changes to the specific indications for its use [17].
Baseline characteristics of the ST vs. MT groups
Most of our study population fell into the >50 years age group. This matches the tendency of pulmonary embolism to occur in elderly patients more than in younger ones [4,20]. On closer examination, the >65 years age group primarily underwent MT, while the 18-34 and 35-49 year age groups mainly underwent ST. Because of the high surgical risk of ST in the elderly, physicians prefer catheter-based thrombectomy or thrombus fragmentation over surgery [21]. The majority of our study population is also White, which is not surprising, as the population of the United States is predominantly White. Patients with hypertension, chronic pulmonary disease, and anemia were more likely to undergo MT over ST. This was an expected finding, as these comorbidities are associated with more significant risks from anesthesia and blood loss during surgery. On the other hand, patients with obesity, congestive heart failure (CHF), and valvular heart disease (VHD) were more likely to undergo ST. The existing literature offers no explanation for this. The only related studies we could find reported patients developing chronic pulmonary hypertension after treatment of PE and requiring ST as treatment [22,23]. We hypothesize that the increased pulmonary pressures associated with obesity, CHF, and VHD make it harder to proceed with mechanical thrombectomy.
Patients with complications from diabetes mellitus tended to undergo ST, while those with no complications underwent MT. Previous studies have demonstrated poor outcomes after MT in diabetics but were unable to determine the causal relationship between hyperglycemia and poor prognosis [24]. Another study determined that a lack of collateral circulation was a significant contributor to the worse results in diabetics after MT [25]. Patients with metastatic cancer were also more likely to undergo MT. This is likely because metastatic cancer increases surgical risks, and an unnecessarily invasive procedure in a patient with a limited lifespan is relatively contraindicated [21,26]. The focus is instead on minimizing complications after the intervention to maximize the quality of life. Surprisingly, patients with coagulopathy were more likely to undergo ST over MT. We expected the opposite, as these patients would be more prone to intra-operative and postoperative bleeding. We could not find any literature explaining this phenomenon. Still, we hypothesize that patients with coagulopathy are either at increased risk for the formation of additional thrombi or progression to disseminated intravascular coagulation (DIC) and thus require more drastic intervention. Patients admitted on weekends were more likely to undergo ST, while those admitted on weekdays were more likely to undergo MT. This is probably due to the availability of personnel. MT requires skilled interventional radiologists and staff to operate the specialized equipment, who are less likely to be available on weekends. On the other hand, most hospitals have surgeons and OR staff available for emergencies over the weekend. Given that massive saddle PE is an acute emergency requiring immediate treatment [27], it is only logical that hospitals proceed with ST when MT is unavailable. On a similar note, patients transferred from other facilities were more likely to undergo ST, while those admitted directly from the emergency department of the hospital were more likely to undergo MT. The reasoning here is two-fold: patients transferred from other locations are likely to have more severe forms of the disease and have more time to deteriorate during transit, so they tend to be unstable and require emergent surgery. In comparison, patients admitted directly from the emergency department are likely to receive treatment faster, allowing them to undergo MT [28].
In-hospital outcomes of ST and MT
Saddle PE is an acute, life-threatening condition associated with elevated right atrial pressure, profound hypoxemia, and heart failure, and it carries a high mortality rate even when treated promptly. Our study reported the same for both the ST and MT groups, with significantly higher mortality in the ST group (15.1% vs. 11.1% for ST and MT, respectively). Similar findings have been reported in other studies [29]. The higher mortality of ST is also expected and matches prior results: unstable, critically ill patients tend to undergo rescue surgery (ST), and the inherent risks and complications of surgery add to the mortality rate in the ST group [20]. For the same reasons, patients undergoing ST tend to have longer hospital stays and a higher chance of being discharged to a skilled nursing facility instead of going home after their postoperative stay [30]. Congestive heart failure was the most common periprocedural complication reported in both groups, with a significantly higher occurrence in the ST group.
This finding has been reported in prior studies [31] and has multiple contributing factors. First, it is a consequence of cardiopulmonary bypass (CPB) performed as part of the ST procedure [30]. Second, as discussed before, critically ill and severely hemodynamically compromised patients are likely to undergo ST [32]. Third, patients requiring ST generally have a greater risk of right ventricular strain and myocardial infarction, contributing to the development of heart failure. However, despite the high incidence of CHF, various studies have demonstrated excellent cardiac recovery after both procedures. We also found that new-onset atrial fibrillation was a frequent adverse event after both procedures, more commonly following ST than MT. PE itself may be the trigger for AF through increased right atrial pressure and subsequent right atrial strain, with ST patients having a higher risk than MT patients, as they tend to have more severe obstructions and right heart strain (as discussed earlier) [33]. There is relatively little data available on this topic, and the incidence of postoperative atrial fibrillation may be underestimated [34]. Vascular events were another significant periprocedural complication for both MT and ST, with MT having a small (1.8%) but significantly higher incidence. Patients undergoing ST usually require CPB, which exposes them to systemic anticoagulation. This increases the risk of widespread bleeding, including intracranial hemorrhage. In contrast, the MT group has a higher proportion of patients who failed or have absolute contraindications to systemic thrombolysis and high surgical risk, i.e., patients who already have an increased risk of bleeding complications [35-37]. In our cohort, 33.8% of ST patients and 17.6% of MT patients required invasive mechanical ventilation (IMV) during or after the procedure (p < 0.001). The hypoxic state generated by saddle PE is one of the main contributors to the initiation of IMV, after which patients are maintained on artificial ventilation until the saddle PE is resolved [38,39]. Patients requiring ST are usually more critically ill and frequently have been previously mechanically ventilated, increasing the chances that they may need IMV [40]. Also, ST is often used as a last resort for patients who failed to respond or have contraindications to initial management and have contraindications to MT, i.e., patients with a higher risk of needing IMV [6,41].
Limitations
This study is based on a retrospective analysis of NIS data. This analysis is inherently vulnerable to the effects of missing data or administrative errors in data entry. As ICD-9 and ICD-10 diagnostic codes were used to extract data, errors in coding the diagnosis or description of saddle PE can also be assumed. However, the size of the dataset used will offset the effect of these errors. Also, as NIS is a discharge database, each hospitalization is unique, but the same patient may be recorded multiple times for multiple admissions, inflating the study population. For the same reason, it is impossible to establish a temporal relationship between the intervention (ST or MT) and the outcomes or complications; nor can we identify whether the complications (such as AF or CHF) predated the admission for saddle PE. Our data were also limited to the three-year period from 2016 to 2018, as mechanical thrombectomy for the treatment of PE was only approved in 2014. This relatively short study period is another limitation of our study.
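As an illustration of the chi-square comparisons used throughout this analysis, the following minimal sketch contrasts the reported in-hospital mortality of the two groups (15.1% of 695 ST patients vs. 11.1% of 1705 MT patients). The death counts are back-calculated approximations from those figures, and the sketch ignores the NIS survey weights applied in the actual SAS analysis, so its p-value will not match the published one.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Death counts back-calculated from the reported rates (approximate):
st_total, mt_total = 695, 1705
st_deaths = round(0.151 * st_total)   # ~105 deaths after ST
mt_deaths = round(0.111 * mt_total)   # ~189 deaths after MT

# 2x2 contingency table: rows = procedure, columns = died / survived.
table = np.array([
    [st_deaths, st_total - st_deaths],
    [mt_deaths, mt_total - mt_deaths],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```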
Conclusions
The use of mechanical thrombectomy increased steadily over the study period, while surgical thrombectomy held steady. Surgical thrombectomy is more common in large/teaching hospitals, weekend admissions, and patients transferred from other facilities, while mechanical thrombectomy is more common in the elderly (age >65 years) and patients with a higher comorbidity score. Mechanical thrombectomy is also associated with decreased mortality and length of hospital stay compared to surgical thrombectomy and has a lower risk of complications.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-10-06T15:09:55.837Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "d64cf7bf861bc7f2024189651e57879f235a6d7f", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/114775-outcomes-of-surgical-and-mechanical-thrombectomy-in-massive-saddle-pulmonary-embolism-a-national-perspective.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6665a31dff9cf3db9aa6f9db2ecf382f6757646a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
226275501
pes2o/s2orc
v3-fos-license
Historical and projected future range sizes of the world's mammals, birds, and amphibians
Species' vulnerability to extinction is strongly impacted by their geographical range size. Formulating effective conservation strategies therefore requires a better understanding of how the ranges of the world's species have changed in the past, and how they will change under alternative future scenarios. Here, we use reconstructions of global land use and biomes since 1700, and 16 possible climatic and socio-economic scenarios until the year 2100, to map the habitat ranges of 16,919 mammal, bird, and amphibian species through time. We estimate that species have lost an average of 18% of their natural habitat range sizes thus far, and may lose up to 23% by 2100. Our data reveal that range losses have been increasing disproportionately in relation to the area of destroyed habitat, driven by a long-term increase of land use in tropical biodiversity hotspots. The outcomes of different future climate and land use trajectories for global habitat ranges vary drastically, providing important quantitative evidence for conservation planners and policy makers of the costs and benefits of alternative pathways for the future of global biodiversity.
Habitat range size is a strong predictor of species' vulnerability to extinction 1,2. As a result, two major drivers of the decline of geographic range sizes, the conversion of natural vegetation to agricultural and urban land and the transformation of suitable habitat caused by climate change, are considered two of the most important threats to global terrestrial biodiversity 3. Land-use change has caused staggering levels of habitat contractions for a range of mammal 4-6, bird 7, and amphibian species 8. Simultaneously, anthropogenic climate change has been driving shifts in species' ranges 9-12, which, whilst resulting in larger range sizes for some species, has led to severe range retractions for others 11,13,14. Declines in global range sizes due to land-use and climate change heavily contribute to the loss of local species richness 15-17 and abundance 17-19 in many parts of the world, thereby threatening essential ecosystem functions 17,20. With global agricultural area potentially increasing drastically in the coming decades 21, and climate change continuing to drive ecosystem change at an accelerating pace 22, future projections suggest that past trends in range contractions may continue 23,24, and likely contribute to projected large-scale faunal extinctions 12,13,25,26. Considering the crucial role that species' range sizes play for extinction risks, a better understanding of the long-term range dynamics of individual species, and projections of future changes under alternative scenarios, is crucial for conservation planning from the local to the global scale. Such estimates would allow quantification of historical pressures on species, and inform prioritisation of future efforts. Here, we estimate the habitat range sizes of 16,919 mammal, bird and amphibian species from the year 1700 until 2100 based on global land use and climatic conditions. We use empirical datasets of the global distribution of species, and combine these with species-specific biome preferences to estimate local habitat suitability under natural vegetation, cropland, pasture and urban land cover.
By overlaying these data with reconstructions of global biomes corresponding to past climatic conditions, and agricultural and urban areas since 1700, we estimate the historical habitat ranges of each species ('Methods'). We then extend the analysis into the future based on 16 alternative land use and climate trajectories until the year 2100, representing four emission scenarios (representative concentration pathways (RCPs) 2.6, 4.5, 6.0, 8.5), and five socio-economic pathways (shared socio-economic pathways (SSPs) 1-5) ('Methods'). SSP1 and SSP3 represent futures where socio-economic challenges for adaptation and mitigation to climate change are both low and both high, respectively; SSP4 combines high challenges to adaptation with low challenges to mitigation, while SSP5 represents the opposite case; SSP2 is a middle-of-the-road scenario of intermediate challenges to adaptation and mitigation 27,28. RCPs 2.6-8.5 represent increasing levels of global warming by the end of the century 29. Considering all possible SSPs for any given RCP is crucial, as using only one realisation per RCP can conflate effects and lead to contestable patterns (e.g., ref. 16). By design of the method used here, modelled species' habitat ranges do not exceed the outermost geographic limits of species' observed and projected occurrences. Whilst this approach still allows for ample range shifts and expansions ('Methods'), climate change may push some species beyond these bounds, which our estimates would not account for. Furthermore, our method does not account for habitat range shifts arising from climatic changes that are too small to manifest as biome changes ('Methods'); thus, range shifts in highly climatically sensitive species may be underdetected. Our estimates of the distribution of species' habitat ranges based on land use and climate represent upper estimates for the actual distribution of populations. They neither incorporate other types of human influence, such as hunting 30, suppression by introduced species 31 and pathogens 32, nor do they account for species' mobility 33, or the impacts of habitat fragmentation 34 and trophic cascade effects 35 on the viability of local populations. Our analysis reveals that species have lost an average of 18% of their natural range sizes thus far, a figure that may drop to 13% or increase to 23% by the end of the century, depending on future global climatic and socio-economic developments.
Results and discussion
Historical changes in habitat range sizes. With moderate impacts on the habitat ranges of most species up until the industrial revolution, the expansion of agricultural production and settlements alongside the rise in population growth since the early 1800s has drastically reduced the range sizes of most mammals, birds, and amphibians (Fig. 1a). Using potential natural ranges in 1850 as a reference ('Methods'), we estimate that species had lost an average of 18% of their natural habitat area by 2016. For most species, alterations in the global distribution of biomes due to past climatic change have had a much smaller effect on range sizes compared to land use, causing average range changes of <1% in the past 300 years (Supplementary Fig. 1). There is substantial variability between species in terms of the experienced range changes. Critical levels of habitat range loss affect a rapidly rising number of species, with 16% having lost more than half of their natural range so far. Among these species, tropical species account for an increasingly larger proportion (Fig. 1b),
whereas small-ranged and threatened species did not experience significantly higher range losses than other species (Supplementary Fig. 2). For an estimated 18% of species, ranges have expanded as a consequence of anthropogenic climate change and the conversion of unsuitable natural vegetation to cropland and pastures (Fig. 1a). The magnitude of habitat range contractions estimated since 1700 is not merely the result of the increasing area of converted land. Over recent centuries, range loss has increased disproportionately in relation to the total size of agricultural and urban areas (Fig. 2a). Whilst the first billion hectares converted since 1700 caused an average 3% loss of habitat size, the most recently converted half billion hectares are responsible for an average loss of 6% of natural range sizes. This acceleration of marginal range losses can be explained by a long-term trend in the location of land-use change towards tropical regions, where local species richness is higher and average range sizes are smaller, and thus where the destruction of natural habitat leads to particularly high relative range losses 36 (Fig. 2b, c). Following a long period of much less land conversion than in other parts of the world, these areas have experienced a rapid expansion of agriculture since the end of the 19th century. Habitat conversion rates reached their highest levels to date in South America around the mid-late 20th century, and in the late 20th and early 21st century in South East Asia (Fig. 2b), a global hotspot of small-ranged species 36 (Fig. 2c).
Projected future changes in habitat range sizes. Whether these past trends in habitat range losses will reverse, continue or accelerate will depend on the global emission and socio-economic pathway chosen in the coming years and decades. By 2100, average range losses could reach up to 23% in the worst-case scenario (RCP 6.0, SSP 3), or drop to 13% (roughly equivalent to levels in 1955) in the best case (RCP 2.6, SSP 1) (Fig. 3a). The proportion of species suffering the loss of at least half of their natural range size could increase to 26% (RCP 6.0, SSP 3) or decrease to 14% (RCP 2.6, SSP 1) by 2100 (Fig. 3b). Isolating the impact of climate change shows that higher levels of global warming increase both the number of species experiencing substantial range contractions and the number experiencing range expansions 11,14 (Supplementary Fig. 1). Across-species average range losses by 2100 increase consistently with higher emission levels for any given socio-economic pathway (Fig. 3a). At the same time, the differences between climate-change scenarios, in terms of average range change, are at times smaller than the differences between socio-economic scenarios. Across climate-change scenarios, average range loss is consistently highest for SSP 3 (high challenges for both mitigation and adaptation to climate change), similar for SSP4 (adaptation challenges dominate), SSP5 (mitigation challenges dominate) and SSP2 (intermediate challenges), and lowest for SSP1 (low challenges for both mitigation and adaptation). Whilst SSP 1 would enable the re-expansion of ranges in many parts of the world as the result of the abandonment of agricultural areas, notably in Southeast Asia, SSP 3 represents a continuation of land-use change in the tropics, most strongly in the Congo basin (Supplementary Fig. 4).
Our estimates of the past and present states of species' habitat ranges, and how they will be impacted under alternative future climatic and socio-economic scenarios, provide important evidence for conservation-oriented decision-making from the local to the global scale. Our results provide quantitative support for policy measures aiming at curtailing the global area of agricultural land 37,38 (by sustainably intensifying production 39-41 , encouraging dietary shifts 42,43 and stabilising population growth 44 ), especially in areas of small-ranged species 36 , steering production to agro-ecologically optimal areas when additional expansion is inevitable 39,45 , targeting land abandonment and restoration in hotspot areas 46,47 and limiting climate change 48 . Whilst our data quantify the drastic consequences for species' ranges if global land use and climate change are left unchecked, they also demonstrate the tremendous potential of timely and concerted policy action for halting and indeed partially reversing previous trends in global range contractions.

Methods

Global land-use data. For the historical time period 1700-2016, we used reconstructions of global cropland, pasture, and urban areas from the HYDE 3.2 dataset 49 (available from https://doi.org/10.17026/dans-25g-gez3). Whilst HYDE 3.2 provides land-use data as far back as 10,000 BCE, we began our analysis in the year 1700, prior to which global land-use data are subject to increased uncertainty 49,50 . A total of 47 maps, including lower and upper uncertainty bounds, are available at 10-year intervals between 1700 and 2000, and at 1-year intervals between 2000 and 2016. These data were upscaled from their original spatial resolution of 0.083° to a 0.5° grid by summing up the cropland, pasture, or urban areas of all 0.083° grid cells contained in a given 0.5° cell. For the period 2020-2100, we used 0.5°-resolution 10-year time-step projections of global cropland, pasture, and urban areas from the AIM model 51 (available from https://doi.org/10.7910/DVN/4NVGWA), covering Representative Concentration Pathways (RCPs) 2.6, 4.5, 6.0 and 8.5, and Shared Socio-economic Pathways (SSPs) 1-5. The dataset contains all possible combinations of these emission and socio-economic trajectories with the exception of RCP 2.6/SSP 3 and RCP 8.5/SSPs 1-4. The data were harmonised with the HYDE 3.2 data by adding the differences between the HYDE 3.2 and AIM cropland, pasture and urban area maps in the year 2010 to the AIM future land use projections. We refer to refs. 27-29,52 for details of the emission and socio-economic pathways, and to ref. 28 for a comparison between the AIM model and other integrated assessment models.

Global biome data. We used the BIOME4 vegetation model 53 (available from https://pmip2.lsce.ipsl.fr/synth/biome4.shtml) to simulate the distribution of global potential natural biomes between the years 1700 and 2000, and between 2020 and 2100 for each of the four climate-change scenarios considered here (RCPs 2.6, 4.5, 6.0, 8.5), at a spatial resolution of 0.5°. Inputs required by BIOME4 include the global mean atmospheric CO2 concentration, and gridded monthly means of temperature, precipitation, and percent sunshine. Past and RCP-specific future CO2 levels were obtained from refs. 54 and 55 , respectively.
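Returning briefly to the land-use grids: the upscaling described above is a simple block sum of 0.083° (approximately 1/12°) cells into 0.5° cells, i.e., 6 x 6 blocks. The following is a minimal sketch of that step; the array shapes and the random placeholder layer are illustrative assumptions, not the HYDE data themselves.

```python
import numpy as np

def upscale_area(fine, block=6):
    """Sum a 0.083-degree area field into 0.5-degree cells (6x6 blocks)."""
    ny, nx = fine.shape
    assert ny % block == 0 and nx % block == 0
    return fine.reshape(ny // block, block, nx // block, block).sum(axis=(1, 3))

# e.g., a global 0.083-degree cropland-area grid (2160 x 4320) -> 0.5-degree (360 x 720)
cropland_fine = np.random.rand(2160, 4320)  # placeholder for a HYDE 3.2 layer
cropland_05 = upscale_area(cropland_fine)
print(cropland_05.shape)                    # (360, 720)
```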
The climatic data were generated as follows. For the period 1700-1900, we used annual simulations from the HadCM3 climate model 56 (available from https://esgf-node.llnl.gov/search/cmip5/; Experiments 'past1000' and 'historical', Ensemble 'r1i1p1'). For the period 1901-2016, we used 0.5° resolution annual observational data 57 (available from https://doi.org/10.5285/10d3e3640f004c578403419aac167d82). For the period 2020-2100, and for each RCP (2.6, 4.5, 6.0, 8.5), we used annual simulations from the HadGEM2-ES climate model 58 , the MIROC5 climate model 59 and the CSIRO-Mk3.6.0 climate model 60 (available from https://esgf-node.llnl.gov/search/cmip5/; for each climate model and each RCP, we used averages from Ensembles 'r1i1p1', 'r2i1p1', 'r3i1p1', 'r4i1p1'). We downscaled and bias-corrected both the pre-1901 HadCM3 simulations and the future HadGEM2-ES, MIROC5, and CSIRO-Mk3.6.0 simulations using the delta method 61 . This method is based on applying the difference between simulated and observed climate at times at which both are available (here we used the 1900-1930 period for the historical data, and the year 2006 for the future data) to the simulated climate at points in time at which only simulated data exist (i.e., pre-1901 and post-2016) in order to correct systematic biases in the climate model 61,62 . The delta method also serves to spatially downscale the simulated climate to the 0.5° resolution of the observational data.
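As a minimal sketch of the delta-method correction just described, the code below shifts simulated fields by the observed-minus-simulated difference in a common reference period. The additive form shown is appropriate for temperature-like variables (precipitation corrections are often applied multiplicatively instead); the variable names and array shapes are illustrative assumptions.

```python
import numpy as np

def delta_correct(simulated, sim_ref, obs_ref):
    """Delta-method bias correction: shift simulated fields by the
    (observed - simulated) difference in a common reference period."""
    return simulated + (obs_ref - sim_ref)

# e.g., monthly-mean temperature fields on the 0.5-degree grid (illustrative shapes)
sim_future = np.random.rand(12, 360, 720)   # model output, e.g. for the year 2050
sim_ref    = np.random.rand(12, 360, 720)   # model output, reference period (e.g. 2006)
obs_ref    = np.random.rand(12, 360, 720)   # observations, same reference period
corrected  = delta_correct(sim_future, sim_ref, obs_ref)
```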
Estimation of species' habitat ranges. We estimated the geographic habitat ranges of individual bird, mammal, and amphibian species through time following the general methodology in ref. 23 . Our approach combines the following data:
I. Spatial polygon data of species-specific extents of occurrence of all known birds 63 , mammals, and amphibians.
II. Species-specific habitat requirements, i.e., the biome and artificial land cover categories in which each species is known to occur.
III. Maps of global potential natural biome distributions corresponding to the relevant climatic conditions through time (i.e., reconstructions for the past, and RCP-specific projections for the future).
IV. Maps of global cropland, pasture, and urban areas through time (i.e., reconstructions for the past, and RCP- and SSP-specific projections for the future).

[Fig. 3 Projected future range changes of mammals, birds and amphibians for representative concentration pathways (RCPs) 2.6, 4.5, 6.0, 8.5, and shared socioeconomic pathways (SSPs) 1-5. a Across-species median range size changes, relative to potential natural ranges in 1850 (analogous to the black line in Fig. 1a). b Percentage of species projected to experience a loss of more than half their natural range size. Plots as in Fig. 1a, and bar charts showing critical range losses by primary mega-biome (as in Fig. 1b), are shown in Supplementary Fig. 3 for individual RCP/SSP combinations.]

The data I-IV were used to estimate the habitat range of individual species at a given point in time as illustrated in Fig. 4 and detailed in the following. In a first step, we used species-specific extents of occurrence (data I), which represent the outermost geographic limits of species' observed, inferred or projected occurrences 1 . These spatial envelopes do not account for the distribution of natural or artificial land cover within that area, and therefore generally extend substantially beyond a species' actual area of occupancy 65,66 . We first remapped extents of occurrence from their original spatial polygon format to a 0.083° resolution grid using the 'rasterise' function of the 'raster' package in R, which maps spatial polygons to those raster grid cells whose centres are contained within the polygons. For each species, we then determined the proportion of 0.083° cells contained in each 0.5° grid cell that represents the species' extent of occurrence. This provides an estimate of the proportion of each 0.5° grid cell that is contained in the species' extent of occurrence. Compared to rasterising extents of occurrence directly to a 0.5° grid, this approach provides more accurate estimates of species' ranges and reduces the number of species excluded from our analysis because their extents of occurrence do not overlap with any grid cell centre.

In a second step, we refined the derived species-specific maps of the proportion of 0.5° grid cells contained in species' extents of occurrence by combining them with species-specific biome requirements and maps of global biome distributions. Species-specific biome requirements (data II) include one or more habitat categories (cf. Supplementary Table 1), in which each species is known to occur. A species was estimated as being present in a grid cell contained in its previously derived extent of occurrence under the potential natural biome at a given point in time if the species' list of habitat categories contained the local (i.e., grid cell-specific) potential natural biome at the relevant time (data III; see above). This required matching IUCN habitat categories (https://www.iucnredlist.org/resources/habitat-classification-scheme) with the biome categories of the BIOME4 vegetation model, which was done as shown in Supplementary Table 1. In this way, we subset extents of occurrence by retaining only grid cells where the natural biome type is included in a species' list of suitable habitat categories. The result of this step represents a species' estimated potential natural habitat range (i.e., in the hypothetical absence of anthropogenic land use) at a given point in time.

In a third step, we estimated actual habitat ranges by including maps of global land use through time. Each species' actual habitat range at a given time was derived by removing any unsuitable anthropogenic land from the previously estimated potential natural range. Historical and projected future land use maps (data IV; see above) provide the fraction of each grid cell that is occupied by cropland, pasture or urban areas. These data were combined with information on which of these three artificial land cover types, if any, species can occur in, which is also included in the list of species' biome requirements (data II). This allowed us, for each grid cell contained in a species' potential natural range at a given time, to estimate the proportion of the grid cell that contained suitable habitat. A species' actual habitat range size was then obtained as the sum of the areas of the remaining suitable habitat from all relevant grid cells. We applied the above method at each point in time for which global land use data are available (see above).
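Pulling the three steps together, here is a schematic of the per-species range computation. The data structures (a fractional extent-of-occurrence layer, a biome-ID grid, fractional land-use layers, and a suitability dictionary) are illustrative stand-ins for data I-IV, not the actual datasets.

```python
import numpy as np

def habitat_range(eoo_frac, biome_id, suitable_biomes,
                  cropland, pasture, urban, artificial_ok, cell_area):
    """Schematic per-species range estimate following the three steps above.

    eoo_frac        : fraction of each 0.5-deg cell inside the extent of occurrence
    biome_id        : potential natural biome category per cell at this time step
    suitable_biomes : set of biome categories the species can occupy (data II)
    cropland/pasture/urban : land-use fractions per cell (data IV)
    artificial_ok   : e.g. {'cropland': False, 'pasture': True, 'urban': False}
    cell_area       : area of each cell (e.g. km^2)
    Returns (potential natural range size, actual range size).
    """
    # Step 2: keep only cells whose potential natural biome is suitable
    natural_ok = np.isin(biome_id, list(suitable_biomes))
    potential = eoo_frac * natural_ok * cell_area

    # Step 3: remove the fractions covered by unsuitable anthropogenic land
    unsuitable = np.zeros_like(cropland)
    for name, layer in (('cropland', cropland), ('pasture', pasture), ('urban', urban)):
        if not artificial_ok[name]:
            unsuitable += layer
    actual = eoo_frac * natural_ok * np.clip(1.0 - unsuitable, 0.0, 1.0) * cell_area
    return potential.sum(), actual.sum()
```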
In this way, we obtained potential natural ranges and actual ranges for 47 points in time between 1700 and 2016 (using the baseline as well as the lower and upper uncertainty bounds of the HYDE 3.2 land-use reconstructions), and for nine points in time between 2020 and 2100 (using the 16 combinations of future climatic and socio-economic pathways (see above), each of which, in turn, was considered based on climate data from three alternative models). Thus, we considered a total of 141 historical and 432 future scenarios. Since the global distribution of natural biomes varies over time as the result of (naturally or anthropogenically) changing climatic conditions, the sizes of potential natural habitat ranges are time-dependent. This motivates considering range changes in relation to the potential natural ranges estimated at a particular reference time, for which we chose the year t0 = 1850, representing a modern pre-industrial baseline. Denoting the potential natural range and the actual range of a species i at a time t by A_i^potential(t) and A_i^actual(t), respectively, the range change associated with species i at time t as the result of the distribution of biomes and land use at that time was calculated as

ΔA_i(t) = (A_i^actual(t) − A_i^potential(t0)) / A_i^potential(t0).

Species whose potential natural habitat range size in the reference year t0 = 1850 (i.e., the range size estimated in the absence of anthropogenic land use and based on the global distribution of biomes in 1850) is zero, A_i^potential(t0) = 0, were not included in the analysis as, in this case, changes in range size are not defined. Based on the set {ΔA_i(t)} (i = 1, 2, ...) of the individual range changes of all species through time, we calculated range change percentiles at each point in time (Fig. 1a), and determined the proportion of species that have experienced the loss of a given fraction of their natural range size (Fig. 1b).

[Fig. 4 Method of estimating the potential natural and actual range for the example of the bat-eared fox (Otocyon megalotis) in the year 1900. Here, for visualisation purposes, cropland, pasture, and urban areas were aggregated into one map; in reality, our method checks each of them separately against species' artificial habitat preferences.]

Method discussion. Whilst the available climate data for a given point in time only allow us to assign one primary natural biome type to each 0.5° grid cell, microclimates within cells may, in reality, result in the presence of different biomes in parts of a cell that are not represented in our data. By design of the approach used here, grid cells containing a non-primary biome that is suitable for a species, whilst the estimated primary biome is not, do not contribute to our estimation of the species' habitat range. Conversely, grid cells containing a non-primary biome that is not suitable for a species, whilst the primary biome is suitable, would be included in their entirety in the species' estimated range. This may lead us to underestimate the range sizes of species typically occurring in non-primary biomes in areas in which the estimated primary biomes are not suitable for the species, and to overestimate the range sizes of species typically occurring in the estimated primary biome in areas where other biomes also occur that are not suitable. Higher-resolution biome data could, in principle, reduce these inaccuracies; however, generating such data in a reliable manner is not trivial. We are not aware of indications that this aspect of the approach would either systematically increase or decrease our overall estimates for range size changes across species in Fig. 1a.
Our estimation of species' habitat range sizes does not take into account habitat connectivity within or across grid cells. In principle, this can result in disconnected patches being included in a species' estimated range, despite in reality being too small to represent potentially suitable habitat. However, neither species-specific data on the minimum area below which spatially connected habitat patches become non-viable, nor reliable very-high-resolution land use and biome data, both of which would be needed to fully accommodate this issue, are currently available. Although species' extents of occurrence are based not only on known, but also inferred and projected occurrences, the data very likely remain biased as the result of range contractions that occurred before the beginning of the systematic collection and mapping of species' distributions, and that cannot be fully reconstructed. Whilst this may lead us to underestimate the absolute range sizes of species, it does not necessarily imply that we either systematically underestimate or overestimate the percentage change of species' ranges through time.

We chose the 0.5° resolution for our analysis as both the 1901-2016 observational climate data (and therefore also the pre-1901 and future climate data, which were downscaled using the observational data) and the projections of future land use are only available at this resolution. Attempts to further downscale these data would likely involve significant additional uncertainties. We are not aware of indications that an increase in the resolution of the analysis (if indeed the necessary datasets were available) would result in a systematic increase or decrease of either the absolute range sizes or the percentage change of range sizes relative to the baseline sizes, estimated here, at any point in time.

Species-specific extents of occurrence and habitat preferences have been argued to be subject to uncertainty 69 ; however, uncertainty estimates (quantitative or otherwise) are not provided with the data. In our main analysis, we therefore used the available data at face value. However, to verify that our results are not overly impacted by specific species, we performed the following bootstrapping analysis. Based on the set of species-specific range changes of all 16,919 species, estimated for the year 2016, we randomly sampled 16,919 values from this set with replacement a total of 10^4 times. For each of these 10^4 sets of range change estimates, we calculated 10%-90% percentiles analogous to Fig. 1a. For each percentile, we then calculated the mean and standard deviation of the computed 10^4 values. The result, shown in Supplementary Fig. 5, demonstrates that the uncertainties of our estimates with respect to specific species are very small, indicating that our results are robust with respect to potential uncertainties in the species data.

Estimates of temporal delays in biome shifts in response to climatic changes 70 are currently not available with the global coverage that would allow us to further refine our approach of assuming that biomes at a given point in time are determined by the climatic conditions in the preceding 30 years. This also applies to data on the dispersal speeds of plant functional types, and their effect on potential delays in colonisations of previously climatically unsuitable areas 33 ; current studies on this topic are too spatially sparse to inform our approach.
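The range-change metric and the bootstrap just described are straightforward to reproduce. The sketch below implements both; the input array is an illustrative stand-in for the 16,919 species-level estimates, not the actual data.

```python
import numpy as np

def range_change(actual_t, potential_1850):
    """Delta A_i(t) = (A_actual(t) - A_potential(1850)) / A_potential(1850);
    species with zero potential range in 1850 are excluded, as in the text."""
    keep = potential_1850 > 0
    return (actual_t[keep] - potential_1850[keep]) / potential_1850[keep]

rng = np.random.default_rng(0)
delta_2016 = rng.normal(-0.18, 0.25, size=16919)  # placeholder for the real estimates

# Bootstrap: resample with replacement 10^4 times, track the 10%-90% percentiles
qs = np.arange(10, 91, 10)
boot = np.empty((10_000, qs.size))
for b in range(10_000):
    sample = rng.choice(delta_2016, size=delta_2016.size, replace=True)
    boot[b] = np.percentile(sample, qs)
print(boot.mean(axis=0))  # mean per percentile
print(boot.std(axis=0))   # SD per percentile
```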
In our main analysis, we therefore followed the assumption commonly made in global vegetation models of no seed dispersal limitations 71 . However, to explore the impact of this assumption, we also repeated our analysis based on the extreme scenario of biomes not shifting at all between the present (year 2016) and 2100. The estimated range size changes (Supplementary Fig. 6) are quantitatively similar to the results of our main analysis (Fig. 3), consistent with our assessment of the overall stronger impact of land use compared to climate-driven biome changes. Qualitatively, i.e., in terms of how different RCP/SSP scenarios rank relative to each other, results are equivalent to those of our main analysis.

As noted in the Introduction, our estimates of future habitat ranges represent upper estimates of species' actual geographic distributions. In particular, our main analysis does not account for species' ability to migrate to areas that will become suitable habitat at a future point in time but are not at present. However, our framework allows us to examine the effect of excluding such areas from the estimated habitat range. We repeated our analysis of future changes in habitat range sizes, but considered a grid cell as part of a species' range only if the local biomes estimated for both the relevant point in the future and for the present (year 2016) were included in the species' list of biome requirements. In other words, grid cells outside of species' current potential natural habitat ranges were not counted towards their future range sizes, assuming that species are not able to migrate at all. This represents an extreme scenario that will underestimate most species' mobility (e.g., over half of the species considered here can fly) and their ability to track biome shifts. Since the habitat range derived for a species in this manner is a subset of the one estimated in our main analysis, projected range losses based on this approach are, by design, higher (Supplementary Fig. 7). Qualitatively, results are equivalent to those in Fig. 3 in terms of how different RCP/SSP scenarios rank relative to each other.

As the empirical data on species' habitat preferences only provide categorical biome requirements, not continuous climatic envelopes, the method used here does not account for range changes due to changes in climatic conditions that are too small to manifest as biome changes. However, estimating precise climatic envelopes of species can be subject to considerable uncertainty and be highly sensitive to the way in which they are estimated (see below). By construction of the method used here, species' ranges over time vary within the extents of occurrence provided with the empirical data, and do not exceed those. Justification for this assumption is provided by the fact that potential natural ranges (and, even more so, actual ranges) are generally well-contained within extents of occurrence, with the former accounting for an average of 64% of the area of the latter in the reference year 1850, thus providing ample space for range shifts and expansions within the boundaries. Additional evidence that the restriction of habitat ranges to the extents of occurrence does not prevent significant range expansions can be seen in the sizeable number of species that have already experienced such range expansions (Fig. 1a and Supplementary Fig. 1) or are predicted to do so in future scenarios of strong global warming (Supplementary Fig. 1 and Supplementary Fig. 3a).
Climate niche models estimate statistical relationships between climatic conditions and species' spatial distributions, and apply these to climate projections in order to estimate future distribution patterns 72 . By design, they have great potential for mapping species' distributions under a high degree of complexity in terms of possible predictor variables and their interactions, which has made the approach very useful in scenarios where the number of species, the geographic region and/or the temporal scale considered is relatively small, so that the statistical challenges remain manageable 73-75 . In an analysis involving the large number of species, points in time, and different climatic and land-use scenarios considered here, the challenges commonly faced by climate niche models, specifically in terms of ensuring the robustness of the underlying statistical model and the estimated parameters, and avoiding unwanted artefacts in the extrapolation behaviour 76-81 , would be very difficult to manage. By operating directly and transparently on the empirical data of species' extents of occurrence and biome requirements, and not being reliant on any particular statistical model or parameterisation, the approach used here provides the robustness needed at this scale of data 23,82 .

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
2020-11-08T14:07:09.053Z
2020-11-06T00:00:00.000
{ "year": 2020, "sha1": "0c5b02925232ed423d2c2a688cf2b86d4b8575b7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-19455-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0864bb9865ce8ffa6fa9768b18ece46b6ce56615", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Geography" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
252854498
pes2o/s2orc
v3-fos-license
Recruitment Maneuver to Reduce Postoperative Pulmonary Complications after Laparoscopic Abdominal Surgery: A Systematic Review and Meta-Analysis

Background: Lung-protective ventilation strategies are recommended for patients undergoing mechanical ventilation. However, there are currently no guidelines to follow regarding recruitment maneuvers (RMs). We attempted to identify the effects of RMs on patients undergoing laparoscopic abdominal surgery. Methods: We searched for randomized controlled trials (RCTs) in PubMed, the Cochrane Library databases, Embase, Web of Science and the ClinicalTrials.gov registry for trials published up to December 2021. The primary outcome was postoperative pulmonary complications (PPCs). The secondary outcomes consisted of the static lung compliance, driving pressure (DP), intraoperative oxygenation index (OI), OI in the post-anesthesia care unit (PACU), mean arterial pressure (MAP) and heart rate (HR). Seventeen RCTs with a total of 3480 patients were examined. Results: Patients who received RMs showed a considerable reduction in PPCs (risk ratio (RR) = 0.70; 95% confidence interval (CI): 0.62 to 0.79; p < 0.01), lower DP (weighted mean difference (WMD) = −3.96; 95% CI: −5.97 to −1.95; p < 0.01), elevated static lung compliance (WMD = 10.42; 95% CI: 6.13 to 14.71; p < 0.01) and improved OI (intraoperative: WMD = 53.54; 95% CI: 21.77 to 85.31; p < 0.01; PACU: WMD = 59.40; 95% CI: 39.10 to 79.69; p < 0.01) without substantial changes in MAP (WMD = −0.16; 95% CI: −1.35 to 1.03; p > 0.05) and HR (WMD = −1.10; 95% CI: −2.29 to 0.10; p > 0.05). Conclusions: Recruitment maneuvers reduce postoperative pulmonary complications and improve respiratory mechanics and oxygenation in patients undergoing laparoscopic abdominal surgery. More data are needed to elucidate the effect of the recruitment maneuver on the circulatory system.

Introduction

Laparoscopic surgery is becoming increasingly common owing to its minimal incisions, clear surgical views and shorter postoperative hospital stays [1,2]. However, the pneumoperitoneum and Trendelenburg position cause cephalad displacement of the diaphragm, which reduces pulmonary compliance and functional residual capacity (FRC) and greatly increases the risk of postoperative pulmonary complications (PPCs) [3]. PPCs have been reported to be associated with increased early postoperative mortality, ICU readmission and length of hospital stay [4,5]. Therefore, it is critical to prevent PPCs in the perioperative period. Lung-protective ventilation strategies, including low tidal volume (TV) ventilation, positive end-expiratory pressure (PEEP) ventilation and the recruitment maneuver (RM), are among the beneficial means for reducing PPCs that many researchers have studied [6].

There remains controversy and a lack of guidelines to follow regarding the RM. The RM can reverse pulmonary atelectasis to some extent and keep the alveoli open by increasing the airway pressure. Depending on the fluctuation of airway pressure, RMs can be divided into the sustained RM and the stepwise RM. The stepwise RM comprises a stepwise increase in TV or a stepwise increase in PEEP [7]. Previous systematic reviews have reported that the RM in patients undergoing general anesthesia improves oxygenation and reduces PPCs [8]. However, that study did not distinguish between laparoscopic and open surgery, and the number of included publications was limited.
Another large multicenter randomized controlled trial (RCT) showed that the open-lung ventilation strategy was not effective in reducing the incidence of PPCs compared to conventional protective ventilation [4]. Therefore, we performed this meta-analysis of RCTs to discuss the effect of the RM on PPCs, the respiratory mechanics and the hemodynamics during laparoscopic abdominal surgery.

Materials and Methods

We report the results of this meta-analysis in compliance with the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines [9]. This study is registered in the International Prospective Register of Systematic Reviews (PROSPERO) with registration number CRD42022315969.

Search Strategy

PubMed, Embase, the Cochrane Library databases, Web of Science and the ClinicalTrials.gov registry were searched, and we included literature published before December 2021. We used Medical Subject Headings (MeSH) terms and multiple combinations related to "Abdomen", "Laparoscopy" and "Hand-Assisted Laparoscopy" for retrieval. As there are no MeSH terms associated with the RM, we used "recruitment maneuver", "recruitment maneuvers", "RM", "open lung", "protected ventilation" or "protective ventilation" for the search, based on previous literature [10,11]. The study type was restricted to RCTs. There were no language restrictions. Finally, the above findings were combined to produce our results. The general search strategy is provided in Table 1.

Selection Criteria

Studies were selected for inclusion based on the following criteria. The screening process was performed independently by SP and WW.
• The subjects were adult patients subjected to laparoscopic abdominal surgery requiring general anesthesia and mechanical ventilation.
• The included studies were required to compare RM groups with non-RM groups (or control groups).
• The included studies had to plainly state the mechanical ventilation strategies, and the inclusion and exclusion criteria. Postoperative pulmonary complications had to be reported.
• Studies containing patients who were minors or had previous lung disease were excluded.

Data Extraction

Two researchers (YP and JW) independently collected the following information from the original texts: the first author, publication year, ASA grading, age, gender, sample size, body mass index (BMI), surgery type, ventilation settings (the TV, airway pressure, PEEP and RM), hemodynamic parameters (mean arterial pressure (MAP) and heart rate (HR)) and respiratory indicators (the incidence of PPCs, static lung compliance, driving pressure (DP), intraoperative oxygenation index (OI) and OI in the post-anesthesia care unit (PACU)). We calculated the OI as the arterial partial pressure of oxygen divided by the inspiratory oxygen fraction (PaO2/FiO2). The DP was computed as (airway plateau pressure − PEEP), while the static lung compliance was calculated as TV/(airway plateau pressure − PEEP). If the patients were divided into multiple groups in an article, only data from the RM group (followed by PEEP) and the conventional ventilation group (without RM) were recorded. Any disputes were adjudicated by SY and HX. Continuous data and dichotomous data were expressed as the means ± standard deviations (SDs) and numbers, respectively. If continuous data were provided as medians, interquartile ranges or ranges, we transformed them to means and SDs on the basis of the Cochrane Collaboration recommendations [12].
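The derived indices above are simple arithmetic on recorded ventilator and blood-gas values. A minimal sketch follows; the example numbers are purely illustrative.

```python
# Derived respiratory indices as defined in the Data Extraction section above.

def oxygenation_index(pao2_mmhg: float, fio2: float) -> float:
    """OI = PaO2 / FiO2 (PaO2 in mmHg, FiO2 as a fraction 0.21-1.0)."""
    return pao2_mmhg / fio2

def driving_pressure(plateau_cmh2o: float, peep_cmh2o: float) -> float:
    """DP = airway plateau pressure - PEEP (cm H2O)."""
    return plateau_cmh2o - peep_cmh2o

def static_compliance(tv_ml: float, plateau_cmh2o: float, peep_cmh2o: float) -> float:
    """Static lung compliance = TV / (plateau pressure - PEEP) (mL/cm H2O)."""
    return tv_ml / (plateau_cmh2o - peep_cmh2o)

# Example: PaO2 180 mmHg on FiO2 0.4, Pplat 19 cm H2O, PEEP 5 cm H2O, TV 420 mL
print(oxygenation_index(180, 0.4))    # 450.0
print(driving_pressure(19, 5))        # 14
print(static_compliance(420, 19, 5))  # 30.0
```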
Statistical Analysis

We used Review Manager 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 17.0 (StataCorp, College Station, TX, USA) to aggregate the data in accordance with the PRISMA standards [13]. The inverse-variance and Mantel-Haenszel methods were performed separately to assess continuous and dichotomous variables among merged trials. We calculated the weighted mean differences (WMDs) and 95% confidence intervals (CIs) for continuous variables, while for dichotomous variables, we derived the risk ratios (RRs) and 95% CIs. The heterogeneity was assessed using Cochrane's Q test. p > 0.10 indicated that heterogeneity was not detected, and the fixed-effects model was used to calculate the combined statistics. Additionally, p < 0.10 suggested significant heterogeneity, and the random-effects model was applied instead.

Grading Evidence Quality

The results of the assessment of the evidence quality using GRADEpro are presented in Table 3. Based on the risk of bias, inconsistency, indirectness, imprecision and publication bias, we classified the evidence quality into four levels: high, moderate, low and very low. In terms of the risk of bias, we ranked the risk for all 18 indicators assessed as not serious. The inconsistency for the static lung compliance, driving pressure, intraoperative OI and OI in the PACU was rated as severe due to I² > 50%, which indicates unacceptable heterogeneity. The indirectness and imprecision for all the indicators were classified as not serious because all the studies made direct comparisons between RMs and control groups with adequate sample sizes. No publication bias was found according to Egger's test and Begg's test. Because the RR was less than 0.5, the quality of evidence for single RMs, sustained RMs, recruited pressure < 40 cm H2O and comparisons to ZEEP was upgraded. Finally, we had moderate confidence in the outcomes for the static lung compliance, driving pressure, intraoperative OI and OI in the PACU, and high confidence in the rest of the results.

Incidence of PPCs

Seventeen studies with a total of 3480 patients reported PPCs, whose overall incidence was about 21.9% (448/1734 in the non-RM group and 314/1746 in the RM group). RMs significantly reduced PPCs, with low heterogeneity, compared to the control group (RR = 0.70; 95% CI: 0.62 to 0.79; p < 0.01; p for heterogeneity > 0.10; I² = 28%) (Figure 3).
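For readers who wish to reproduce the pooling, here is an illustrative re-implementation of the fixed-effects analysis and heterogeneity statistics. For brevity this sketch uses inverse-variance weighting of log risk ratios rather than the Mantel-Haenszel estimator used in RevMan, and the 2x2 counts are placeholders, not data from the included trials.

```python
import numpy as np

def pooled_rr_fixed(events_t, n_t, events_c, n_c):
    """Inverse-variance fixed-effects pooling of log risk ratios,
    with Cochran's Q and I^2 (an approximation to the RevMan workflow)."""
    e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1/e_t - 1/n_t + 1/e_c - 1/n_c      # variance of each log RR
    w = 1.0 / var                            # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_rr - pooled) ** 2)   # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return np.exp(pooled), ci, q, i2

# Placeholder counts for three hypothetical trials:
# (RM events, RM n, control events, control n)
rr, ci, q, i2 = pooled_rr_fixed([12, 20, 8], [100, 150, 80],
                                [20, 28, 14], [100, 150, 80])
print(rr, ci, i2)
```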
Subgroup Analysis of PPCs by Age

Only one study included subjects who were all elderly patients (age ≥ 65 years). Six studies, with a total of 1823 patients, included non-elderly subjects (age < 65 years). The results showed that RMs reduced the incidence of PPCs in non-elderly patients but were not effective in elderly patients (age ≥ 65: RR = 0.82; 95% CI: 0.43 to 1.58; age < 65: RR = 0.77; 95% CI: 0.63 to 0.95; p < 0.05; p for heterogeneity > 0.10; I² = 5%). We should be cautious regarding the effect of RMs on the elderly due to the insufficient number of studies (Figure 5).

Subgroup Analysis of PPCs by the Number of RMs

Five studies used single RMs during the procedures. The remaining 12 studies used repeated RMs. The results showed that a single RM significantly reduced the incidence of PPCs with no heterogeneity (RR = 0.36; 95% CI: 0.21 to 0.64; p < 0.01; p for heterogeneity > 0.10; I² = 0%). The use of repeated RMs also reduced the incidence of PPCs with acceptable heterogeneity (RR = 0.73; 95% CI: 0.64 to 0.83; p < 0.01; p for heterogeneity > 0.10; I² = 29%). A single RM may be more efficient than repeated RMs (p for subgroup differences < 0.05; I² = 81.6%) (Figure 6).

Subgroup Analysis of PPCs by Recruited Pressure

Rothen et al. [33] reported that a recruited pressure greater than 40 cm H2O is required to ensure opening in pulmonary atelectasis. In Figure 8, we divide the included studies into two groups according to a recruited-pressure cut-off of 40 cm H2O: five studies used recruited pressures ≥ 40 cm H2O, while four studies used recruited pressures < 40 cm H2O. The results showed that the incidence of PPCs was reduced when the recruited pressure was less than 40 cm H2O, while a recruited pressure ≥ 40 cm H2O was not beneficial for improving outcomes (recruited pressure ≥ 40 cm H2O: RR = 0.50; 95% CI: 0.24 to 1.04; p > 0.05; p for heterogeneity > 0.10; I² = 21%; recruited pressure < 40 cm H2O: RR = 0.41; 95% CI: 0.27 to 0.61; p < 0.01; p for heterogeneity > 0.10; I² = 0%). The heterogeneity was reduced to 0% in a sensitivity analysis by excluding the study of Nestler et al. [24] from the subgroup with recruited pressures ≥ 40 cm H2O (RR = 0.37; 95% CI: 0.16 to 0.84; p < 0.05; p for heterogeneity > 0.10; I² = 0%).
Subgroup Analysis of PPCs by ZEEP or PEEP Used in Control Group

In the open-lung strategy, the RM is usually used in combination with PEEP. Some control groups of the included studies used PEEP, while the rest used zero end-expiratory pressure (ZEEP). We performed subgroup analysis based on whether PEEP was used in the control group. The results showed that there was a significant difference in the incidence of PPCs, with no heterogeneity, regardless of whether PEEP was used in the control group (compared to ZEEP: RR = 0.48; 95% CI: 0.37 to 0.64; p < 0.01; p for heterogeneity > 0.10; I² = 0%; compared to PEEP: RR = 0.78; 95% CI: 0.68 to 0.90; p < 0.01; p for heterogeneity > 0.10; I² = 0%). The protective effect in comparison with ZEEP was more pronounced (p for subgroup differences < 0.01) (Figure 9).

Static Lung Compliance

Seven studies involving a total of 628 patients reported static lung compliance, and the data suggest that the RM is beneficial in enhancing lung compliance, although with high heterogeneity (WMD: 10.42; 95% CI: 6.13 to 14.71; p < 0.01; p for heterogeneity < 0.10; I² = 95%) (Figure 10).

Driving Pressure

The driving pressure was reported in seven trials with a total of 2603 individuals, and the findings showed that the RM was useful in reducing the DP, although with high heterogeneity (WMD: −3.96; 95% CI: −5.97 to −1.95; p < 0.01; p for heterogeneity < 0.10; I² = 96%) (Figure 11).

Intraoperative Oxygenation Index

The intraoperative OIs were reported for 1285 patients from 11 studies. The global data suggested that the RM could improve the intraoperative OI, although with high heterogeneity (WMD: 53.54; 95% CI: 21.77 to 85.31; p < 0.01; p for heterogeneity < 0.10; I² = 96%) (Figure 12).
Oxygenation Index in Post-Anesthesia Care Unit

Seven studies examined the postoperative OIs in patients who underwent laparoscopic abdominal surgery. The RM group had higher OIs than the control group (WMD: 59.40; 95% CI: 39.10 to 79.69; p < 0.05; p for heterogeneity < 0.10; I² = 96%) (Figure 13).

Mean Arterial Pressure

Seven studies reported the MAP [17,19,24,26,29,30,32]. Overall, there was no significant difference in the effect of the RM on the MAP compared to control (WMD: −0.16; 95% CI: −1.35 to 1.03; p > 0.05) (Figure 14).

Heart Rate

Six studies, with a total of 1692 patients, reported HR. Overall, there was no significant difference in the effect of the RM on HR compared to control (WMD: −1.10; 95% CI: −2.29 to 0.10; p > 0.05; p for heterogeneity > 0.10; I² = 0%) (Figure 15).

Discussion

This meta-analysis included 17 RCTs comparing RMs and conventional mechanical ventilation in patients undergoing laparoscopic abdominal surgery. The types of procedures included robot-assisted laparoscopic radical prostatectomy (RARP), laparoscopic colorectal cancer resection, laparoscopic gastric cancer radical surgery, laparoscopic total hysterectomy and laparoscopic bariatric surgery. Patients undergoing laparoscopic abdominal surgery are at high risk for PPCs. The RM is an effective method for improving pulmonary atelectasis. However, there are few systematic reviews or meta-analyses reporting the effects of RMs on patients undergoing laparoscopic abdominal surgery. Therefore, a comprehensive analysis of previous RCTs was necessary.
Our results showed that, for patients undergoing laparoscopic abdominal surgery, RMs reduced the incidence of PPCs and the driving pressure and improved the oxygenation and static lung compliance compared with controls, without significant differences in the MAP and HR. The heterogeneity was higher for the static lung compliance, DP, intraoperative OI and OI in the PACU, while less heterogeneity was found for PPCs, the MAP and the HR. Heterogeneity may arise from several sources. First, the enrolled patients had a wide age range and underwent different laparoscopic abdominal procedures. Second, the intraoperative ventilation strategy was highly variable: the tidal volume, RM and PEEP can all affect the oxygenation and respiratory mechanics. Third, the DP and OI were provided directly in some articles; for studies where these data were not available, we calculated them using the equations above.

Patients undergoing laparoscopic bariatric surgery are typically obese; they usually have reduced functional residual capacity (FRC), impaired oxygen reserves and comorbidities [34,35]. Pulmonary atelectasis, which plays an important role in PPCs [36], is further aggravated under the influence of general anesthesia, pneumoperitoneum and the Trendelenburg position. The role of RMs in obese patients is still worth discussing. Several studies have demonstrated that RMs can ameliorate PPCs. Reinius et al. [37] concluded that RMs alone were not sufficient to maintain improved respiratory function. We performed subgroup analysis based on BMI and found that RMs reduced PPCs in both obese and non-obese patients, with no significant difference between the two subgroups. This was contrary to the finding of Cui et al. [38], whose meta-analysis indicated that RMs did not improve PPCs in obese patients. However, that analysis included only two studies with a total of 70 patients in the obese group, and its high heterogeneity lends those findings low credibility.

The majority of patients undergoing laparoscopic radical prostatectomy and tumor resection are elderly. With increasing age, elderly patients have compromised respiratory compliance, increased closing volumes and impaired airway protective reflexes. These changes make them more prone to abnormal gas exchange and pulmonary atelectasis. Our subgroup analysis based on age showed that RMs reduced the incidence of PPCs in non-elderly patients but were not effective in elderly patients. However, there was only one study in the elderly group, containing 62 patients, which made the results less reliable. The meta-analysis by Cui et al. [38] showed that RMs reduced the incidence of PPCs in elderly patients undergoing general anesthesia. However, Cui et al. classified patients as elderly or non-elderly using a cut-off of 60 years, whereas our study used 65.

In addition to patient characteristics, the RM itself is worthy of further discussion. Some trials used a single RM [18,23,26,29,30], while others employed repeated RMs [16,17,19-22,24,25,27,28]. The results showed that both methods reduced PPCs. Unexpectedly, the single-RM subgroup had an even lower risk ratio, with a statistically significant difference from the repeated-RM subgroup, indicating a more pronounced effect. Although the RM is considered to be an effective means of reducing pulmonary atelectasis and preventing PPCs, repeated RMs are accompanied by an increased risk of lung hyperinflation and hemodynamic instability in normal lungs.
The single RMs in the included studies were administered after intubation or pneumoperitoneum, phases with a higher incidence of pulmonary atelectasis and a greater risk of hemodynamic instability due to medications, positive pressure ventilation and pneumoperitoneum. There is no high-quality evidence to recommend routine RMs after tracheal intubation for patients undergoing general anesthesia, and anesthesiologists need to assess the patient's risk-benefit ratio to tailor treatment. Continuous hemodynamic and SpO2 monitoring is necessary during RMs.

RMs are usually classified as sustained RMs and stepwise RMs. A sustained RM involves setting the airway pressure at a high value and ventilating continuously for a period of time. This is commonly achieved by adjusting the airway-pressure-limiting valve on the ventilator and squeezing the air reservoir. The sustained RM is easy to perform and is widely used in clinical settings. However, when switching back to machine-controlled mode, there is a risk that the alveoli will re-collapse. A stepwise RM gradually boosts the airway pressure by stepping up the tidal volume or PEEP. The stepwise RM is ventilator-driven and can be followed by PEEP titration, but the procedure is more complicated and time-consuming (a toy comparison of the two pressure schedules is sketched at the end of this section). As shown in Figure 7, subgroup analysis showed that the risk ratio was lower in the sustained RM group, and the difference was considered statistically significant compared to the stepwise RM group. No heterogeneity was found in either subgroup. This is consistent with the findings of Cui et al. The incidence of PPCs in patients receiving stepwise RMs was 22.6%, while that in patients receiving sustained RMs was 7.1%. Notably, the included studies only compared the RM and control groups; no direct comparison of different RMs was performed. From the available data, we could not identify which RM was more effective.

A study by Rothen et al. [33] based on CT imaging suggested that a recruited pressure of 40 cm H2O was efficient in reversing pulmonary atelectasis. We performed further subgroup analysis accordingly. Our results showed a poor reduction in the incidence of PPCs in the subgroup with a recruited pressure greater than 40 cm H2O, but a good reduction in the subgroup with a recruited pressure less than 40 cm H2O. Sensitivity analysis showed that the heterogeneity originated from the study of Nestler [24], the only study in which the number of PPC cases was greater in the RM group than in the control group. This may have been due to errors caused by the small samples, as only 25 patients per group were analyzed. After excluding this study, the results showed that a recruited pressure greater than 40 cm H2O also reduced PPCs compared to the control group.

The PEEP should be manipulated following RMs to keep the alveoli open. Karsten et al. [39] demonstrated, based on electrical impedance tomography (EIT), that the combination of the RM and PEEP promoted homogeneity of local ventilation during laparoscopic surgery and enhanced the oxygenation and lung compliance. In an observational study of 10,978 patients, Myrthe et al. [40] noted that mechanical ventilation combined with PEEP at 5-10 cm H2O was associated with fewer postoperative respiratory complications and shorter hospital stays in major abdominal surgery. Our subgroup analysis revealed that the incidence of PPCs was lower in the RM group regardless of whether PEEP was used in the control group. The protective effect of the RM coupled with PEEP was more apparent than that with ZEEP. This suggests that neither PEEP nor the RM alone is fully effective and that their combination is necessary to maximize the benefits.
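To make the contrast between sustained and stepwise maneuvers concrete, the toy sketch below generates the two airway-pressure schedules described above. All pressures, durations and step sizes are hypothetical illustrations, not recommendations drawn from the included trials.

```python
# Toy airway-pressure schedules for the two RM types discussed above.
# All numbers are hypothetical illustrations, not clinical recommendations.

def sustained_rm(hold_pressure=30, hold_seconds=30):
    """Sustained RM: hold a single high airway pressure for a fixed time."""
    return [(hold_pressure, hold_seconds)]

def stepwise_rm(start_peep=5, step=5, top_peep=20, breaths_per_step=3):
    """Stepwise RM: ramp PEEP up in increments, then back down (ventilator-driven)."""
    up = list(range(start_peep, top_peep + 1, step))
    # Each tuple is (PEEP in cm H2O, number of breaths delivered at that level)
    return [(p, breaths_per_step) for p in up + up[-2::-1]]

print(sustained_rm())  # [(30, 30)]
print(stepwise_rm())   # [(5, 3), (10, 3), (15, 3), (20, 3), (15, 3), (10, 3), (5, 3)]
```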
We analyzed two indices of pulmonary function: the static lung compliance and the driving pressure. The results showed that RMs improved static lung compliance while decreasing the DP. This may be the mechanism by which RMs reduce PPCs. It has been shown that, among patients undergoing mechanical ventilation during general anesthesia, an increased DP is associated with more PPCs, and a lower DP may be lung-protective [40]. Christopher et al. [6] also noted that the pulmonary compliance and driving pressure should be examined after RMs to assess their effects.

We evaluated the oxygenation indices of patients during surgery and in the PACU. That the RM improves intraoperative OIs has been confirmed by most studies. The effect of the RM must be maintained by continued ventilation, and there is a risk of the alveoli re-collapsing after disconnection from the circuit. However, our results suggested that the RM remained beneficial for oxygenation in the PACU.

The hemodynamic stability during the RM is noteworthy. We evaluated two hemodynamic parameters: the MAP and HR. There was no significant difference between the control and RM groups. However, we cannot conclude on this basis that the RM has no impact on the circulatory system. The time points at which the data were recorded varied widely between studies, with some recorded 60 min after pneumoperitoneum and others before the end of surgery, and the parameters during the RM itself were not recorded. In fact, the increased transpulmonary pressure (TP) during RMs causes elevations of the central venous pressure (CVP), pulmonary vascular resistance index (PVRI) and pulmonary artery pressure (PAP), which raise the preload and afterload of the right ventricle, resulting in a transient decrease in the right and left ventricular ejection fractions (R/LVEF) during RMs. Celebi et al. [41] showed that the effect of the RM on the right ventricle was temporary and that the hemodynamics returned to normal with the release of the high airway pressure. Reis et al. [42] also demonstrated that the right ventricular work increases only during the first 2 min after the intervention.

There are some limitations of this meta-analysis that need to be taken into account. First, the diagnostic criteria for PPCs varied among studies. Some studies reported the incidence of PPCs at 24 h postoperatively, while others reported PPCs at 5 or even 7 days following surgery. The PPCs were well defined in the high-quality studies but not explicitly stated in others. These factors may affect the accuracy of the conclusions. Second, the measurements for continuous data were conducted at different time points (e.g., 40, 50 or 60 min after pneumoperitoneum). Third, the quantitative hemodynamic analysis is inadequate, and the safety of the RM remains to be further clarified. Fourth, the majority of the patients included in the study had normal cardiopulmonary function, so our conclusions may not be applicable to patients with severe cardiac or pulmonary disease.

Conclusions

Our systematic review and meta-analysis have demonstrated that the recruitment maneuver reduces postoperative pulmonary complications and improves respiratory mechanics and oxygenation in patients undergoing laparoscopic abdominal surgery.
More data are needed to elucidate the effect of the recruitment maneuver on the circulatory system. In general, the use of the recruitment maneuver during mechanical ventilation may be beneficial. Moreover, the long-term outcome parameters for the recruitment maneuver, and how to choose the optimal recruitment maneuver according to patient characteristics, remain to be further explored.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest: The authors declare no conflict of interest.
2022-10-13T15:40:36.840Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "6ac6d52e92f44c1c9f65c9198c7f86ed3ca63fc4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/11/19/5841/pdf?version=1665210695", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af771f5912c9b8cc8962d1c80aeb37f38459b9be", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125740588
pes2o/s2orc
v3-fos-license
Astrophysical Tests of Screened Modified Gravity

Screened modified gravity theories evade the solar system tests that have proved prohibitive for classical alternative gravity theories. In many cases, they do not fit into the PPN formalism. The environmental dependence of the screening has motivated a concerted effort to find new and novel probes of gravity using objects that are well-studied but have hitherto not been used to test gravity. Astrophysical objects---stars, galaxies, clusters---have proved competitive tools for this purpose since they occupy the partially-screened regime between the solar system and the Hubble flow. In this article we review the current astrophysical tests of screened modified gravity theories.

Introduction

Screened modified gravity theories evade the solar system tests that have proved prohibitive for classical alternative gravity theories such as Brans-Dicke. In many cases, they do not fit into the PPN formalism. The environmental dependence of the screening has motivated a concerted effort to find new and novel probes of gravity using objects that are well-studied but have hitherto not been used to test gravity. Astrophysical objects---stars, galaxies, clusters---have proved competitive tools for this purpose since they occupy the partially-screened regime between the solar system and the Hubble flow. In this section, we review the current astrophysical tests of screened modified gravity theories. We begin by introducing the theories we will study and outline the strategy typically employed to identify astrophysical probes.

Searching for Screening Mechanisms

We will split the known theories with screening mechanisms into three distinct categories that exhibit similar effects on astrophysical objects. This allows us to identify the optimum strategy for testing each theory.

Thin-Shell Theories: Chameleon, 1,2 symmetron, 3 and dilaton 4 models all screen using the thin-shell effect. For this reason we will refer to them as thin-shell theories. The specific details of each model are not important for astrophysical tests, and one can completely parameterize them using the effective coupling β(φ_BG), where φ_BG is the asymptotic (background) field value, and the self-screening parameter

χ_BG ≡ φ_BG / (2β(φ_BG) M_pl).

For f(R) models one has f_R0 = 2χ_0/3, where χ_0 is the value of χ_BG evaluated at cosmic densities. If an object's Newtonian potential Ψ = GM/R is larger than the self-screening parameter then the object will screen itself; if not, the object will be partially unscreened. This implies that the best objects for testing these theories are non-relativistic ones. In particular, main-sequence stars have Ψ ∼ 10^-6 whereas post-main-sequence stars have Ψ ∼ 10^-7-10^-8 (owing to their larger radii) and are therefore more constraining probes. Similarly, rotationally-supported galaxies have Ψ ∼ v_circ^2 (in units where c = 1), where v_circ is the circular velocity. The most unscreened galaxies are therefore dwarf galaxies with v_circ ∼ 50 km/s, so that Ψ ∼ 10^-8. (Spirals like the Milky Way have v_circ ∼ 200 km/s, implying Ψ ∼ 10^-6.) There is the added complication of environmental screening, whereby a potentially unscreened dwarf could be screened by its cluster companions. Therefore, one needs to use void dwarfs as laboratories for testing thin-shell screening theories. Reference 5 has compiled a 'screening map' of the nearby universe using criteria developed by 6 and calibrating on N-body simulations. Recently, this has been revisited by 7 .
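As a quick numerical illustration of the self-screening criterion above, the sketch below compares surface Newtonian potentials with an assumed value of the self-screening parameter; the value of χ0 and the stellar parameters are illustrative assumptions, not observational constraints.

```python
# Thin-shell screening check: an object is self-screened when its
# Newtonian potential exceeds the self-screening parameter chi0.
# chi0 and the object parameters below are illustrative assumptions.

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m s^-1
M_SUN, R_SUN = 1.989e30, 6.957e8

def newtonian_potential(mass_kg, radius_m):
    """Dimensionless surface potential Psi = GM/(R c^2)."""
    return G * mass_kg / (radius_m * C**2)

def is_self_screened(psi, chi0):
    return psi > chi0

chi0 = 1e-7  # illustrative value (f_R0 = 2*chi0/3 for f(R) models)
psi_ms = newtonian_potential(M_SUN, R_SUN)        # main-sequence star, ~2e-6
psi_rg = newtonian_potential(M_SUN, 100 * R_SUN)  # red giant, ~2e-8
print(psi_ms, is_self_screened(psi_ms, chi0))  # screened
print(psi_rg, is_self_screened(psi_rg, chi0))  # unscreened -> useful probe
```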
Vainshtein Screening Theories: Theories that screen using the Vainshtein mechanism, so that the ratio of the scalar to Newtonian force outside a spherical object is

F_φ/F_N = α (r/r_V)^n,  r ≪ r_V,    (3)

will be referred to as Vainshtein screening theories. We parameterize the coupling strength α and the Vainshtein radius r_V (below which the force is screened) using a crossover scale r_c (= Λ_c⁻¹) akin to the DGP model, so that r_V³ ∼ α G M r_c². These include very general theories such as Horndeski,^8 but here we will only focus on cubic and quartic galileon models,^9 for which n = 3/2 and n = 2 respectively. In the case of Vainshtein screening, the Vainshtein radius is typically larger than the radius of stars and galaxies, making astrophysical tests difficult but not impossible.

Vainshtein Breaking Theories: Theories such as beyond Horndeski^10,11 and degenerate higher-order scalar-tensor (DHOST) theories (see 12 and references therein) exhibit a 'breaking of the Vainshtein mechanism' such that the Newtonian potential Ψ and the lensing potential Φ (g_ij = (1 − 2Φ)δ_ij) are corrected inside extended objects to^13–15

dΨ/dr = G M(r)/r² + (Υ_1 G/4) d²M(r)/dr²,    (4)
dΦ/dr = G M(r)/r² − (5Υ_2 G/4r) dM(r)/dr + Υ_3 G d²M(r)/dr²,    (5)

where the three dimensionless parameters Υ_i are related to the cosmological values of the functions and parameters appearing in a specific theory and also to the effective description of dark energy^15–18 (introduced by 19). The form of the corrections does not suggest the best objects for testing these theories and one must calculate on an object-by-object basis.

Roadmap of Astrophysical Tests

We will begin by discussing the most important difference between Vainshtein and thin-shell screened theories in section 2: equivalence principle violations. Next, we will introduce the theory of stellar structure in modified gravity in section 3. In the subsequent sections we review the current astrophysical bounds by object: non-relativistic stars in section 4, galactic tests in section 5, galaxy cluster tests in section 6, and tests using relativistic stars in section 7.

Weak Equivalence Principle

One important difference between thin-shell and Vainshtein screening is the presence of weak equivalence principle (WEP) violations.^a It was pointed out in the original chameleon and symmetron papers that thin-shell screening violates the WEP^1–3 because the thin-shell factors for each body, which determine their motion in an external field, are composition- and structure-dependent. This issue was studied in more detail by reference 20, who also studied equivalence principle violations in galileon theories.

Consider an extended object of inertial mass M_i and gravitational mass M_g in an applied external Newtonian potential Ψ_ext and scalar field φ_ext (chameleon, symmetron, or galileon). The equation of motion for this object in the non-relativistic limit is

M_i ẍ = −M_g ∇Ψ_ext − (Q/M_pl) ∇φ_ext.    (6)

The gravitational mass can be thought of as a 'gravitational charge' that parameterizes the response of the object to an externally applied Newtonian potential, and so we have defined an analogous scalar charge Q that quantifies the response of an object to an externally applied scalar field.^b In theories without a scalar field, the WEP is obeyed if M_i = M_g. This is not generically the case in scalar field theories because Q can depend on the structure and composition of the object. In this section we will refer to the baryonic mass M, defined by the volume integral of the matter energy density,

M = ∫ d³x √(−g) T⁰⁰ ≈ ∫ d³x ρ,    (7)

where the second equality holds in the non-relativistic limit. Note that this may include the mass of dark matter but not the self-energy of the gravitational field, which is found by integrating the Landau-Lifschitz energy-momentum pseudo-tensor.

^a We define the WEP as the statement that the motion of a test body in an external gravitational field depends only on its mass and is independent of its composition and internal structure.
^b The factor of M_pl is needed because φ_ext has different units to Ψ_ext. It is chosen so that Q has units of mass.

Thin-shell screening: For theories that screen using the thin-shell effect (chameleon and symmetron theories) one has

Q = M − M(r_s),    (8)

where M(r_s) is the mass enclosed inside the screening radius. The force between two bodies with masses M_1 and M_2 is^21,22

F = (G M_1 M_2/r²) [1 + 2β²(φ_BG) (Q_1/M_1)(Q_2/M_2)],    (9)

where Q_i is given by equation (8) with M → M_i. Thus the WEP is violated unless either Q = 0 or Q = M, i.e. the objects are fully screened or fully unscreened.
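A quick numerical illustration of equation (9) (a sketch only; the screening fractions are illustrative inputs, not fits to any object) shows how the effective force between two thin-shell screened bodies interpolates between the fully screened and fully unscreened limits:

```python
# Sketch: effective force enhancement between two thin-shell screened bodies,
# F = (G M1 M2 / r^2) * [1 + 2 beta^2 (Q1/M1)(Q2/M2)]  (equation (9)).
import numpy as np

def force_enhancement(beta, q1_over_m1, q2_over_m2):
    """Ratio F / F_Newton; Q/M ranges from 0 (screened) to 1 (unscreened)."""
    return 1.0 + 2.0 * beta**2 * q1_over_m1 * q2_over_m2

beta_fR = 1.0 / np.sqrt(6.0)                   # f(R) coupling value
print(force_enhancement(beta_fR, 1.0, 1.0))    # two unscreened bodies -> 4/3
print(force_enhancement(beta_fR, 1.0, 0.0))    # one screened body -> 1.0
```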
Vainshtein screening: In the case of Vainshtein screening, there is no thin-shell suppression. Furthermore, the equation of motion can be written in the form of a current conservation law, ∇_µ J^µ = 8παGρ, which ensures that

Q = M,    (10)

i.e. the charge is equal to the baryonic mass. The WEP is therefore satisfied in galileon theories. One possible caveat to this is many-body effects. The equation of motion for galileon theories is non-linear in second derivatives, which leads to severe violations of the superposition principle.^c The above argument circumvents this by assuming that the external galileon field is only slowly varying, so that the galilean shift symmetry can be used to superimpose the fields; it is not clear what happens away from this approximation. The full two-body problem has been studied by reference 23 for an Earth-Moon-like system; they found a mass-dependent reduction of the galileon force of ∼4%, indicating that the WEP may be broken by non-linear many-body effects. The non-linear nature of the equations makes modeling of such systems difficult. Indeed, departures from spherical symmetry do not have analytic solutions except in highly symmetric cases.^24,25 See 26,27 for some detailed studies of this issue.

Strong Equivalence Principle

The strong equivalence principle (SEP) is the statement that an object's motion is independent of its self-gravity. Unlike the WEP, the SEP is violated by all of the theories considered in this section.^d This is because the scalar field is sourced only by the baryonic mass (defined in (7)) and not the curvature, so that the no-hair theorems hold and strongly gravitating objects have no scalar charge.^e A no-hair theorem for the galileon was proved by reference 33 for static black holes and subsequently generalized by reference 29 to the case of slow rotation. Thus, if a system is composed of baryonic matter (including dark matter) and black holes, the baryonic component will have Q = M while the black holes will have Q = 0. The baryons will therefore fall at a faster rate than the black holes in an externally applied gravitational field, violating the SEP. In the case of chameleon theories, the presence of an accretion disk around black holes may source secondary scalar hair.^34,35
Stellar Structure in Modified Gravity

Stars are complicated objects whose lives, existence, and stability are a result of the interface between diverse and disparate areas of physics, including gravitational physics, atomic physics, nuclear physics, hydrodynamics, and particle physics.^36 Modern theoretical modeling of stellar structure and evolution therefore utilizes sophisticated numerical simulations that solve a large number (often in the thousands, depending on the type of star) of coupled differential equations simultaneously. Fortuitously, the effects of gravitational physics appear in a single equation, the momentum equation, which describes the Lagrangian velocity v⃗ = dr⃗/dt of a fluid element located at Lagrangian position r⃗ due to some external force (per unit mass) f⃗ and the hydrodynamic (Eulerian) pressure P:

ρ dv⃗/dt = −∇P + ρ f⃗,    (11)

where ρ(r⃗) is the Eulerian density. In the case of general relativity, the force per unit mass is simply the gradient of the Newtonian potential,

f⃗ = −∇Ψ.    (12)

For alternative theories, one must solve for the force per unit mass within the new framework. Typically this involves solving for additional scalar (or other spin) field profiles sourced by the star's mass. Note that we will only discuss non-relativistic objects here, postponing relativistic stars for a later section.

Equilibrium Structure

The velocity of each fluid element is constant for a static, spherically symmetric object in equilibrium, and so the left hand side of equation (11) is zero. In GR, the force per unit mass is simply the Newtonian force and one has the well-known hydrostatic equilibrium equation (HSEE)

dP(r)/dr = −G M(r) ρ(r)/r²,    (13)

where M(r) is the mass enclosed inside r and therefore satisfies the continuity equation

dM(r)/dr = 4πr²ρ(r).    (14)

For thin-shell screening theories, the HSEE is modified to^f

dP(r)/dr = −(G M(r) ρ(r)/r²) [1 + 2β²(φ_BG) (1 − M(r_s)/M(r)) Θ(r − r_s)],    (15)

where M(r_s) is the mass enclosed within the screening radius r_s, Θ(x) is the Heaviside step function, and the new factor arises from the fifth force F_φ = −β(φ_BG)φ′/M_pl, with φ_BG being the background (asymptotic) value of the scalar. If the star's host galaxy is self-screened then this is the field value that minimizes the effective potential at mean galactic density; if the host galaxy is unscreened then the relevant density is the mean cosmic density. In theories that exhibit Vainshtein breaking, the corresponding HSEE is^14,42–44

dP(r)/dr = −G M(r) ρ(r)/r² − (Υ_1 G/4) ρ(r) d²M(r)/dr²,    (16)

which can be expressed in alternate forms by taking derivatives of (14) to find

d²M(r)/dr² = 8πrρ(r) + 4πr²ρ′(r).    (17)

The Vainshtein radius is necessarily several orders of magnitude larger than the radius of typical stars, and so we do not give the HSEE for theories that do not include Vainshtein breaking.

The equations presented thus far do not form a closed set because the equation of state P(ρ) is not known. One must either couple these equations to microphysical and macrophysical processes such as radiative transfer, nuclear burning, opacity, and convection to calculate the equation of state (EOS), or provide a known (or approximate) equation of state. Two important equations that will arise at various points in this section are the equation of radiative transfer, which describes the temperature gradient of the star due to photon transport,

dT(r)/dr = −3κρ(r)L(r)/(64πσ r² T³),    (18)

where κ is the opacity, and the energy generation equation

dL(r)/dr = 4πr²ρ(r) Σ_i ε_i.    (19)

This equation describes the photon luminosity gradient produced by the interaction process i with rate ε_i per unit mass.

^f Note that we have ignored the mass of the field, which is a good approximation inside the unscreened region of stars.^37

A particularly useful class of approximate equations of state are the polytropic equations of state, P = Kρ^{(n+1)/n}, which are good approximations for many stars, or at least some region of them. In the context of modified gravity (MG), polytropic equations of state allow one to decouple the gravitational and non-gravitational physics. This means one can discern the effects of changing the theory parameters without the need to account for possible degeneracies with non-gravitational processes.
The stellar structure equations are self-similar for polytropic equations of state, which means one can work with dimensionless variables to extract the structure of the star independently of the central conditions. In particular, it is useful to work with the dimensionless radial coordinate

y = r/r_c,    (20)    with    r_c² = (n+1)P_c/(4πGρ_c²),    (21)

where P_c and ρ_c are the central pressure and density respectively. One can define the dimensionless function θ(y) via ρ = ρ_c θ(y)^n and

P = P_c θ(y)^{n+1},    P_c = Kρ_c^{(n+1)/n},    (22)

which encodes the structure of the star. In GR, one can take a derivative of equation (13) and apply equation (14) to find the Lane-Emden equation (LEE)

(1/y²) d/dy (y² dθ/dy) = −θ^n.    (23)

The equivalent equation for thin-shell screening theories is^38,40,41

(1/y²) d/dy (y² dθ/dy) = −[1 + 2β²(φ_BG) Θ(y − y_s)] θ^n,    (24)

where r_s = r_c y_s (i.e. y_s is the dimensionless radius where screening begins) and the factor of (1 + 2β²(φ_BG)) assumes that the star is fully unscreened outside the screening radius.^g In Vainshtein breaking theories the LEE is^14,43

(1/y²) d/dy [ y² dθ/dy + (Υ_1/4) y³ (2θ^n + y dθ^n/dy) ] = −θ^n,    (25)

which has been derived using equation (17) and the relations (22). The boundary conditions for the LEE are θ(0) = 1 (P(r = 0) = P_c) and θ′(0) = 0 (dP(r)/dr = 0 at the origin, which is a consequence of spherical symmetry). (See 46 for a detailed study of the LEE in GR.) One can find analytic solutions for specific values of n but these are typically not relevant for astrophysics, and so one must solve the LEE numerically.

^g If one were to attempt to go beyond this approximation and include the thin-shell factor (1 − M(r_s)/M(r)), the self-similarity would be lost and, with it, the simplicity of the LEE.

The radius of the star is defined as the radial coordinate where the pressure falls to zero, which defines y_R such that θ(y_R) = 0. One then has

R = r_c y_R.    (26)

The stellar mass can be found by integrating equation (14) to find

M = 4πr_c³ρ_c ∫₀^{y_R} y² θ(y)^n dy = 4πr_c³ρ_c ω_R    (GR and Vainshtein breaking),    (27)

where we have replaced θ(y)^n using the appropriate Lane-Emden equation and defined

ω_Y ≡ −y² dθ/dy |_{y=Y},    (28)

with Y = s being short for Y = y_s. Two important properties of polytropes that will be useful later on are the mass-radius relation and the central density in terms of the mass and radius:

R ∝ M^{(1−n)/(3−n)}    (29)    and    ρ_c = y_R³ M/(4π ω_R R³).    (30)

These relations are derived in 45,47 (and other similar textbooks). They apply to GR and Vainshtein breaking theories but not chameleon theories. We do not give the chameleon equivalents here since they will not be necessary.^h
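Since the LEE must in general be solved numerically, a short sketch may help fix ideas. The following Python snippet (a minimal illustration, not any published code) integrates the GR Lane-Emden equation (23) for a given index n and extracts y_R and ω_R; the thin-shell (24) and Vainshtein-breaking (25) variants can be handled by modifying the equation in the same way.

```python
# Minimal sketch: numerically solve the GR Lane-Emden equation (23)
# and extract the dimensionless radius y_R and mass omega_R.
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden_rhs(y, state, n):
    theta, dtheta = state
    src = -max(theta, 0.0) ** n   # theta^n is ill-defined for theta < 0
    # theta'' = -theta^n - (2/y) theta'; regularized at the origin
    return [dtheta, src - 2.0 * dtheta / y if y > 0 else src / 3.0]

def solve_lane_emden(n, y_max=50.0):
    # Event: surface of the star, where theta(y_R) = 0
    surface = lambda y, s, n: s[0]
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(lane_emden_rhs, (1e-6, y_max), [1.0, 0.0], args=(n,),
                    events=surface, rtol=1e-10, atol=1e-12, dense_output=True)
    y_R = sol.t_events[0][0]
    omega_R = -y_R**2 * sol.sol(y_R)[1]   # equation (28) evaluated at y_R
    return y_R, omega_R

for n in (1.0, 1.5, 3.0):
    y_R, omega_R = solve_lane_emden(n)
    print(f"n = {n}: y_R = {y_R:.4f}, omega_R = {omega_R:.4f}")
```

For n = 1 this returns y_R = ω_R = π (the known analytic solution θ = sin y/y), and for n = 3 it returns ω_R ≈ 2.018, the value quoted below for the Eddington standard model.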
Numerical Models: MESA

In order to model more complicated stars that do not have simple polytropic equations of state, one needs sophisticated numerical codes. One publicly available code that has proven invaluable for stellar structure in MG is MESA.^48 MESA solves the stellar structure equations coupled to the equations describing micro- and macrophysical processes. The reader is referred to the instrumentation papers^48–50 for a comprehensive review of MESA's capabilities. In the context of MG, MESA has been modified to solve the modified HSEE for both thin-shell screening (equation (15))^37,38,40,41 and Vainshtein breaking theories (equation (16)).^14

MESA is a one-dimensional code (meaning that it assumes spherical symmetry) that splits each star into cells of varying lengths (the number of cells depends on the complexity of the star) and assigns relevant quantities (radius, density, temperature etc.) to each cell. The set of cells and these quantities then defines a stellar model at a specific time-step. Given a specific stellar model, the stellar structure equations are discretized on each cell and solved to produce a new stellar model at a later time. Thus, the star is simulated over its entire lifetime. The publicly available version of MESA solves the GR HSEE (13). The modified versions of MESA solve either equation (15) or (16). We will briefly describe how these modifications work below.

Thin-shell: There are two independent chameleon modifications of MESA (see 51 for a recent third). The first^37 solves the full scalar differential equation using a Gauss-Seidel relaxation algorithm. The second^38,40,41 uses the thin-shell approximation. Both codes agree very well, but here we will only describe the latter implementation since it is more commonly used in the literature. Given a starting stellar model, the screening radius is computed by solving^37,38,40,41

χ_BG = 4πG ∫_{r_s}^{R} r ρ(r) dr.    (31)

The code numerically integrates rρ(r) from the first cell until the cell where equation (31) is satisfied. If the central cell is reached before this happens, the screening radius is set to zero. In the latter case, the code simply rescales G → G(1 + 2β²(φ_BG)). In the former case, the mass inside the screening radius is found and used as an input for equation (15). The next stellar model is then found by solving equation (15). The screening radius is recomputed at every time-step to account for the changes in the star's structure.
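As a concrete illustration of this algorithm (a sketch assuming the stellar model is given as simple arrays of cell radii and densities, not the actual MESA implementation), one can locate the screening radius by accumulating the integral in equation (31) inward from the surface:

```python
# Sketch of the screening-radius search of equation (31), assuming the
# stellar model is given as arrays ordered from surface (index 0) to center.
import numpy as np

G = 6.674e-8        # cgs
C2 = (2.998e10)**2  # c^2, so chi_bg is dimensionless

def screening_radius(r, rho, chi_bg):
    """Return r_s, or 0.0 if the star is fully unscreened."""
    integral = 0.0
    for i in range(len(r) - 1):
        # trapezoidal accumulation of 4*pi*G * integral of r*rho dr, inward
        dr = r[i] - r[i + 1]
        integral += 4*np.pi*G * 0.5 * (r[i]*rho[i] + r[i+1]*rho[i+1]) * dr
        if integral / C2 >= chi_bg:
            return r[i + 1]   # cell where (31) is first satisfied
    return 0.0                # fully unscreened: rescale G -> G(1 + 2 beta^2)
```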
Vainshtein breaking: MESA was first updated to include Vainshtein breaking by references 14,42. In this case, the default (GR) HSEE is replaced by equation (16). A numerical derivative of the density is taken by differencing across adjacent cells, so that d²M(r)/dr² can be computed in each cell using equation (17). The code then evolves to the next time-step using the modified HSEE for any input value of Υ_1, allowing the stellar evolution to be computed.

Radial Perturbations

Moving away from equilibrium, one can consider Lagrangian perturbations, so that

r⃗ = r r̂ + δr⃗,    (32)

and the velocity is v⃗ = δṙ⃗. The dynamics of δr describe perturbations of the star about its equilibrium configuration. Specializing to linear time-dependent radial perturbations,^i

δr(r, t) = r ξ(r) e^{iωt},    (33)

where subscript zeros refer to equilibrium quantities (found by solving the HSEE and other stellar structure equations) and Γ_{1,0} = d ln P_0/d ln ρ_0 is the first adiabatic index (Γ_{1,0} = (n+1)/n for polytropic equations of state), the momentum equation reduces to

d/dr [Γ_{1,0} P_0 r⁴ dξ/dr] + r³ ξ d/dr [(3Γ_{1,0} − 4) P_0] + ω² ρ_0 r⁴ ξ = 0.    (34)

Equation (34) is referred to as the linear adiabatic wave equation (LAWE). It is a Sturm-Liouville eigenvalue problem that must be solved given certain boundary conditions^52 defined at the center and surface of the star: regularity of ξ at the center and the vanishing of the Lagrangian pressure perturbation, ΔP = 0, at the surface. The eigenfrequencies ω_n give the periods of oscillation about equilibrium, Π_n = 2π/ω_n.

^i Non-radial modes are not important for thin-shell screened theories because they cannot be observed in galaxies other than our own (which is screened), and their governing equations have yet to be derived in MG theories.

Just like the equilibrium equations, the LAWE is self-similar and one can scale all of the dimensionful quantities out of the equation to find a dimensionless form in terms of a dimensionless frequency Ω² = ω²R³/(GM), so that the frequencies scale as ω² ∝ GM/R³, or Π ∝ G^{−1/2}. Theories where gravity is stronger therefore make stars of fixed mass and composition pulsate faster (i.e. with a shorter period). Reference 40 has derived the equivalent wave equation for thin-shell screened theories (equation (36), whose full form can be found there), which is typically referred to as the modified linear adiabatic wave equation (MLAWE); it has the same Sturm-Liouville structure as (34) with an additional fifth-force term active in the unscreened region r > r_s. The boundary conditions are the same as in GR. The effect of the scalar field is to add a density-dependent mass term for ξ(r) that increases ω (makes the period shorter) at fixed mass and composition, in line with our scaling arguments above. This is borne out by numerical simulations of polytropic and MESA models.^40 Another possible effect of stellar oscillations is that they may source scalar radiation, although detailed work for both non-relativistic^53,54 and relativistic stars^55 has found this to be negligible. For Vainshtein breaking theories, the derivation of the MLAWE is incredibly complicated but follows the relativistic derivation of reference 56, starting from perturbations of a relativistic gas sphere in a de Sitter background and taking the weak-field sub-horizon limit. The result (equation (37)) is given in reference 57, with a modified boundary condition at the center (see 57).

Stellar Stability

In GR, and in thin-shell and Vainshtein breaking theories, the wave equation is a Sturm-Liouville eigenvalue equation of the form of a differential operator L̂ acting on a function ξ(r) with weight function W(r), i.e. L̂ξ = W(r)ω²ξ. This means we can bound the lowest eigenfrequency using the variational method by constructing the functional

ω_0² ≤ (∫₀^R χ L̂χ dr) / (∫₀^R W(r) χ² dr)    (38)

for some trial function χ. Taking this to be constant, we find

ω_0² ≤ (3 ∫₀^R (3Γ_{1,0} − 4) P_0 r² dr) / (∫₀^R ρ_0 r⁴ dr)    (39)

using the GR wave equation. When Γ_{1,0} < 4/3 the lowest frequency is necessarily imaginary, signaling a tachyonic instability. In thin-shell screening theories, the equivalent of (39) contains an additional positive term,^40 so that the instability is mitigated in a screening-dependent manner. Objects that are more unscreened can have Γ_{1,0} < 4/3 and still be stable due to the compensating effect of the (positive) new term. This is borne out in the numerical computations of reference 40. Finally, in Vainshtein breaking theories the corresponding expression is given in reference 57. When Υ_1 < 0 the instability is the same as in GR, but when Υ_1 > 0 there is a second potential instability. For a star of constant density this always occurs when Υ_1 > 49/6. For more general models, one needs to integrate over the equilibrium structure to determine the presence of the instability although, given the large value for constant density stars, it is unlikely that the instability is realized in practice for sensible choices of Υ_1.

Stellar Structure Tests

In this section we review the different objects to which the theory developed in the last section has been applied and the resulting bounds on screened MG theories.

The Eddington Standard Model

One of the simplest treatments of main-sequence stars, which works well for low-mass objects, is the Eddington standard model, which makes the assumption that the star is supported by a combination of radiation pressure from photons generated by nuclear burning in the core and hydrodynamic gas pressure (ideal gas law):

P = P_gas + P_rad = (k_B ρ T)/(µ m_H) + (a/3) T⁴,    (42)

where m_H is the mass of a hydrogen atom and µ is the mean molecular weight (the mean mass per particle in units of m_H). Introducing b = P_gas/P, equation (42) implies

T³ = (3(1−b)/(a b)) (k_B ρ)/(µ m_H),    (43)

which implies b is a constant if one makes the approximation that the specific entropy (s ∝ T³/ρ) is constant. The total pressure is then

P = K ρ^{4/3},    K = [(3/a)(1−b)/b⁴]^{1/3} (k_B/(µ m_H))^{4/3},    (44)

so that the star is polytropic with n = 3 and its structure can be found by solving the Lane-Emden equation for the theory of gravity in question. For MG, the most important quantity for main-sequence stars is the luminosity, which must be determined from the radiative transfer equation. (In this section we assume that the opacity is constant, which is a good approximation for main-sequence stars, where the dominant contribution comes from electron scattering.)
Differentiating equation (42), one can find an expression for the surface luminosity using the appropriate HSEE ((13) for GR, (15) for thin-shell models, and (16) for Vainshtein breaking):

L = (4πcGM(1−b)/κ) [1 + 2β²(φ_BG)(1 − M(r_s)/M)]    (thin-shell),
L = 4πcGM(1−b)/κ    (GR and Vainshtein breaking),    (45)

where M is the stellar mass. Thus, in order to determine the luminosity (at fixed mass) we must calculate b. This is accomplished by inserting the definition of r_c (equation (21)) into equation (27):

(1−b)/b⁴ = (ω̄/ω_R)² (M/M_Edd)²,    (46)

where ω̄ ≈ 2.018 is the GR value of ω_R for n = 3 and the Eddington mass is

M_Edd ≈ 18 µ⁻² M_⊙.    (47)

Note that the GR and Vainshtein breaking luminosities are not identical despite having the same expression, since b is determined from different equations. At this point, one can discern the gross effects of MG on the stellar luminosity. First, note from equation (45) that when b = 1 the luminosity is zero. This is because this extreme value corresponds to no radiation pressure and hence no photons. When b ≪ 1 the star is dominated by radiation pressure and one has L ∝ GM. Conversely, when b is close to unity (so that the star is gas-pressure supported) one can write b = 1 − δ for δ ≪ 1, and equation (46) shows

δ ≈ (ω̄/ω_R)² (M/M_Edd)² ∝ G³M².    (48)

One then has L ∝ G⁴M³. This means that the effects of MG are more pronounced in gas-pressure-supported stars. Equation (46) requires b ≈ 1 for M < M_Edd whereas b ≪ 1 for M > M_Edd, so that low-mass stars are gas-supported and high-mass stars are radiation-pressure supported. We therefore expect the effects of MG to be more pronounced in low-mass stars.

The procedure for calculating the luminosity in any given gravity theory is as follows: first, one numerically solves the relevant n = 3 Lane-Emden equation for a given set of parameters (there are no free parameters in GR) to find ω_R (= ω̄ in GR). For thin-shell models, one must also find the screening radius and ω_s using (31) (see 38,41 for the details). Once ω_R (and ω_s for chameleons) have been obtained, equation (46) can be solved numerically to find b. This can then be put into (45) to find the luminosity.
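A minimal numerical sketch of this procedure for GR and Vainshtein breaking follows (it uses the quartic (46) and Eddington mass (47) in the reconstructed forms above, and `solve_lane_emden` is the hypothetical helper from the earlier Lane-Emden sketch; the thin-shell case additionally requires ω_s and is omitted):

```python
# Sketch: luminosity ratio L/L_GR in the Eddington standard model.
from scipy.optimize import brentq

OMEGA_GR = 2.018        # omega_R for the GR n = 3 polytrope
M_EDD_FACTOR = 18.0     # M_Edd ~ 18 mu^-2 M_sun (equation (47))

def gas_fraction(mass_msun, mu, omega_R):
    """Solve (46): (1 - b)/b^4 = (omega_bar/omega_R)^2 (M/M_Edd)^2 for b."""
    rhs = (OMEGA_GR / omega_R)**2 * (mass_msun * mu**2 / M_EDD_FACTOR)**2
    f = lambda b: (1.0 - b) / b**4 - rhs
    return brentq(f, 1e-6, 1.0 - 1e-12)

def lum_ratio(mass_msun, mu, omega_R_mg, g_eff_over_g=1.0):
    """L/L_GR at fixed mass; L ~ G_eff M (1 - b) (equation (45)).
    The effective-G rescalings of the theory are passed in by hand."""
    b_gr = gas_fraction(mass_msun, mu, OMEGA_GR)
    b_mg = gas_fraction(mass_msun, mu, omega_R_mg)
    return g_eff_over_g * (1.0 - b_mg) / (1.0 - b_gr)
```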
Plots of the ratio L/L_GR are shown in figure 1 for both thin-shell and Vainshtein breaking theories. In both cases µ = 1/2, appropriate for hydrogen stars. Evidently, the effects of MG are indeed more pronounced in low-mass objects due to their gas-pressure support. We have chosen β(φ_BG) = 1/√6 for thin-shell models, corresponding to the (constant) value predicted by f(R) models. When χ_BG ≥ 10⁻⁵ the enhancements plateau at low masses because the stars are fully unscreened. The asymptotic value is precisely (1 + 2β(φ_BG)²)⁴ = (4/3)⁴ ≈ 3.16, in agreement with our prediction above that L ∝ G⁴ for fully unscreened gas-pressure-supported stars. We chose Υ_1 > 0 for the Vainshtein breaking models which, evidently, lowers the luminosity compared with GR. A good rule of thumb (but by no means a concrete feature) is that positive values of Υ_1 weaken gravity (compared with GR) in the Newtonian limit.^j Had we chosen Υ_1 < 0 we would have found the converse behavior, i.e. the luminosity would have been enhanced, a consequence of strengthened gravity.

A similar exercise can be performed with full MESA models of main-sequence stars in thin-shell theories. The resulting Hertzsprung-Russell (HR) tracks list the radius and ages of the star when the central hydrogen mass fraction X = 0.5, 0.1, and 10⁻⁵, so that one can compare stars at similar points in their evolution. The parameters are chosen so that the stars are progressively more unscreened from bottom to top. The curve at χ_BG = 10⁻⁷ mimics GR on the main-sequence because the star is fully screened (recall main-sequence stars have Ψ ∼ 10⁻⁶) but becomes unscreened on the red giant branch, when the radius of the star increases by about a factor of 10, lowering its Newtonian potential. The blue curve has a comparable shape to GR but is shifted to higher temperatures and luminosities, indicating that the star is brighter and hotter than its GR counterpart. The green curve corresponds to a star that is fully unscreened, and looks like the HR track for a 2 M_⊙ star. In all cases, at fixed X more unscreened stars are younger, indicating that stellar evolution has proceeded at a faster rate. This is because the amount of nuclear fuel is fixed (at fixed mass) but more unscreened stars need to consume it at a faster rate in order to combat the increased gravity. Thin-shell screened stars are therefore hotter, brighter, and more ephemeral the more unscreened they are. Unfortunately, these predictions have yet to be utilized as a test of chameleon theories. The main reason for this is that one requires unscreened galaxies (dwarf galaxies in cosmic voids) in order for the stars to become sufficiently unscreened. Main-sequence and post-main-sequence stars are typically not resolvable in such galaxies.

Vainshtein Breaking Stars

The HSEE for Vainshtein breaking theories was implemented into MESA by reference 14 using the method outlined in section 3.1.2. The HR tracks for solar mass stars and two solar mass stars are shown in the left and right hand panels of figure 3 respectively. One can see that, at fixed metallicity, the effect of increasingly positive Υ_1 is to make the star dimmer and cooler. This is because positive values of Υ_1 act in an equivalent manner to weakening gravity and therefore the star needs to burn nuclear fuel at a slower rate to stave off gravitational collapse. Another consequence of this is that stars evolve more slowly when Υ_1 is more positive, as evidenced by the location of the filled circles in the left panel. Negative values of Υ_1 have the opposite effect (i.e. gravity is strengthened); fuel is consumed at a faster rate and the star is hotter, brighter, and more ephemeral. On the main-sequence, these effects are degenerate with metallicity; it is evident from the figures that a GR Z = 0.03 star has a similar main-sequence track to a Vainshtein breaking star with Υ_1 = 0.1 and Z = 0.02. (If Υ_1 < 0 the effects of Vainshtein breaking are degenerate with decreasing the metallicity.) This degeneracy vanishes on the red giant branch. In theory, the effects of Vainshtein breaking should be present in all stars in our local neighborhood. In practice, to date there have been no local tests, either proposed or performed. This is due partly to the degeneracy with metallicity, although this can either be corrected for with other measurements or avoided by using post-main-sequence stars.

A Stellar Bound for Vainshtein Breaking Theories

One important requirement for the stability of stars is that P′(r) < 0.^58 At the center of the star, the pressure, density, and mass can be expanded as

P(r) = P_c + ½P_2 r² + ⋯,    ρ(r) = ρ_c + ½ρ_2 r² + ⋯,    M(r) = (4π/3)ρ_c r³ + ⋯,    (49)

where the linear terms are absent in the expansions of P(r) and ρ(r) because one needs P′(0) = ρ′(0) = 0; the expansion for M(r) begins at cubic order in order to be consistent with equation (14). Plugging these expansions into the HSEE (16) yields

P_2 = −(4π/3) G ρ_c² [1 + (3/2)Υ_1],

so that P′(r) < 0 near the origin requires 1 + (3/2)Υ_1 > 0, implying the bound Υ_1 > −2/3. This bound was first derived by reference 43 using similar arguments.
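The expansion step is easy to verify symbolically; the following short sketch (using sympy, with the series truncated at the orders quoted above) reproduces the coefficient 1 + 3Υ_1/2:

```python
# Verify the central expansion of the Vainshtein-breaking HSEE (16):
# dP/dr = -G M rho / r^2 - (Upsilon_1 G / 4) rho M''(r).
import sympy as sp

r, G, rho_c, Up = sp.symbols("r G rho_c Upsilon_1", positive=True)
M = sp.Rational(4, 3) * sp.pi * rho_c * r**3   # leading central behavior
rho = rho_c                                    # density constant at leading order
dPdr = -G * M * rho / r**2 - Up * G / 4 * rho * sp.diff(M, r, 2)
P2 = sp.simplify(dPdr / r)                     # dP/dr = P_2 * r near r = 0
print(P2)   # -> -(4/3)*pi*G*rho_c**2*(1 + 3*Upsilon_1/2)
```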
Dwarf Stars

Dwarf stars are those that populate the mass range between Jupiter-mass planets (M_J ∼ 10⁻³ M_⊙) and main-sequence stars with masses M ∼ O(0.1 M_⊙). When first formed, a star will contract under its own self-gravity, liberating energy and increasing the temperature and density. The contraction must be halted by the onset of pressure support, either due to electron degeneracy pressure or thermonuclear fusion. In the former case, the star is inert and is referred to as a brown dwarf. In the latter case, it is a red dwarf. Only stars that are sufficiently heavy can achieve the requisite core density and temperature for hydrogen burning to proceed efficiently. Thus, low-mass stars are brown dwarfs and higher-mass stars are red dwarfs. The transition mass, the minimum mass for hydrogen burning (MMHB), is M_MMHB ≈ 0.08 M_⊙ in GR. A detailed account of low-mass stars can be found in reference 59. In the context of MG, dwarf stars are good probes of Vainshtein breaking theories,^60,61 and so we focus exclusively on these in this subsection.

Brown Dwarf Stars: The Radius Plateau

Brown dwarfs are supported by electron degeneracy pressure, except in a layer near the surface, which is composed of a weakly coupled plasma that is well described by the ideal gas law. They are fully convective and therefore contract along the Hayashi track with a polytropic n = 1.5 EOS.^36 In fact, Coulomb corrections to the electron scattering processes shift the EOS of lower-mass brown dwarfs (M ≲ 4M_J) to lower values, n ≈ 1.^59,62 For n = 1 one has P_c = Kρ_c² (c.f. equation (22), and recall θ(0) = 1), so that equation (21) gives r_c² = K/(2πG), and the radius R = r_c y_R is independent of the mass. This leads to a radius plateau in the mass-radius relation for stars with masses M_J < M < M_MMHB. In GR, the plateau lies at R ≈ 0.1 R_⊙, but in Vainshtein breaking theories y_R depends on Υ_1 and therefore so does the plateau radius. This is shown in figure 4. One can see that the changes in the radius are significant for |Υ_1| ∼ O(1), although whether this can be used to place new bounds is not clear, since the data pertaining to the radius plateau are currently sparse.^63 Future data releases from Gaia, Kepler, or their successors may be able to populate the brown dwarf mass-radius diagram sufficiently.
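To make the plateau explicit, the following sketch converts y_R into a physical plateau radius for an assumed polytropic constant K (chosen by hand here to reproduce the GR value; it is not a fit). In GR, y_R = π for n = 1, while in Vainshtein breaking theories one would instead obtain a Υ_1-dependent y_R from the modified LEE (25):

```python
# Sketch: brown dwarf radius plateau for an n = 1 polytrope,
# R = y_R * sqrt(K / (2 pi G)), independent of the mass.
import numpy as np

G = 6.674e-8        # cgs
R_SUN = 6.957e10    # cm

def plateau_radius(K, y_R=np.pi):
    """n = 1: r_c^2 = K/(2 pi G); R = r_c * y_R."""
    return y_R * np.sqrt(K / (2.0 * np.pi * G))

# Illustrative K chosen to reproduce the GR plateau R ~ 0.1 R_sun:
K_example = (0.1 * R_SUN / np.pi)**2 * 2.0 * np.pi * G
print(plateau_radius(K_example) / R_SUN)   # -> 0.1 by construction
```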
Red Dwarf Stars: The Minimum Mass for Hydrogen Burning

The central conditions in low-mass stars are not sufficient for efficient burning on the PP chains. In particular, the Coulomb barriers for the ³He–³He and ³He–⁴He reactions cannot be overcome at the relevant central temperatures and densities (10⁶ K and 10³ g/cm³). Instead, proton burning proceeds via deuterium burning, with the end point being helium-3. The MMHB is the smallest mass where the luminosity generated by this reaction process can balance the luminosity lost from the star's surface. A simple model of red dwarf stars, first presented in reference 59 for GR, was adapted for Vainshtein breaking theories by references 60,61, who showed that the MMHB is sensitive to Υ_1. In this model, the star is supported by a combination of degeneracy pressure and the ideal gas law, which are both described by an n = 1.5 polytropic equation of state, so that the central conditions follow from equations (29) and (30). The degeneracy parameter η is the ratio of the Fermi energy to k_B T and measures the relative contribution of each type of pressure, degeneracy pressure being more important for larger η. The luminosity-balance condition (equation (50)) involves the function I(η), which has a minimum value of 2.34 at η = 34.7, and so there is a minimum value of M for which (50) can be satisfied: the MMHB. Assuming κ_{−2} = 1 (a dimensionless parameterization of the opacity; we will discuss this later), the MMHB in GR is M^GR_MMHB ≈ 0.08 M_⊙, whereas in Vainshtein breaking theories it depends on Υ_1, as shown in figure 5. One can see that, for positive values of Υ_1, the MMHB is larger than the GR value. This is because the weakened gravity results in lower central densities and temperatures at fixed mass, so that heavier objects are needed to reach the requisite conditions for hydrogen burning. One cannot take theories with Υ_1 too large because the theory would predict that observed red dwarf stars should be brown dwarfs. Indeed, the lightest red dwarf (M-dwarf) is Gl 866 C, with a mass M = (0.0930 ± 0.0008) M_⊙.^64 Vainshtein breaking theories are only compatible with this observation if the bound Υ_1 < 1.6 is satisfied.

This bound is incredibly robust. Indeed, there are few degeneracies with other astrophysical effects. There is a degeneracy with the opacity but, as is evident in equation (50), this is very mild and is not strong enough to impart any uncertainty onto this bound. Similarly, variations in the chemical composition between different dwarf stars are small and the compositions themselves do not evolve significantly over the lifetime of the star. Another possible degeneracy is rotation, but this acts to increase the MMHB^65,66 and can therefore only make the bound stronger. Finally, the method used to infer the star's mass is insensitive to the theory of gravity. The mass is either inferred from empirical relations, which do not assume any gravitational physics, or from the orbital dynamics of binaries,^67 which occurs in a regime where there is no Vainshtein breaking (i.e. outside the objects), so that the equations are identical to GR. See 60,61 for an extended discussion on this.

White Dwarf Stars: the Chandrasekhar Mass and Mass-Radius Relation

White dwarf stars are the remnants of low-mass stars (M ≲ 8 M_⊙) that have gone off the main-sequence to become giant stars and have subsequently had their outer layers blown away by stellar winds, leaving only the core. In the absence of any thermonuclear fusion, electron degeneracy pressure provides the counter-gravitational support. Low-mass white dwarf stars are well described by n = 1.5 polytropic equations of state (P ∝ ρ^{5/3}), corresponding to a non-relativistic degenerate electron gas. Following equation (29), this means that low-mass white dwarfs follow the mass-radius relation R ∝ M^{−1/3}, whereas fully relativistic white dwarfs (n = 3) have a fixed mass (the Chandrasekhar mass). If one tries to go to higher masses, the star is unstable and a thermonuclear explosion occurs, resulting in a type Ia supernova. This is the same instability found using perturbation theory (see equation (39)).

The majority of white dwarf stars are composed primarily of ¹²C, for which an equation of state can easily be found. We will follow the method of reference 68, which reference 44 adapted to Vainshtein breaking theories. Defining x = p_F/m_e, where p_F is the Fermi momentum, the number density of degenerate electrons is

n_e = m_e³ x³/(3π²),    (51)

while the electron pressure and energy density are P_e = m_e⁴ Ψ_1(x) and ε_e = m_e⁴ Ψ_2(x), with Ψ_i(x) given in reference 68. The density receives contributions from both the carbon atoms and the electrons, but the former are far heavier than the latter, and so the density is ρ ≈ ρ_C. On the other hand, the pressure comes primarily from the electrons and so one has P ≈ P_e. One can use these approximations with the appropriate HSEE to construct white dwarf models. In this case, the unknown functions are M(r) and x(r), which satisfy M(0) = 0 and x(0) = x_0, so that there is one free parameter defined at the center of the star. The radius is defined by P(R) = 0 (which implies x = 0), so that M = M(R). Varying x_0 allows one to build up the mass-radius relation.
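As a cross-check of the low-mass limit (a sketch only: it uses the non-relativistic P ∝ ρ^{5/3} polytrope with the textbook degeneracy constant rather than the full Ψ_i(x) equation of state), one can verify the R ∝ M^{−1/3} scaling directly:

```python
# Sketch: low-mass (non-relativistic) white dwarf mass-radius relation.
# n = 1.5 polytrope with P = K rho^(5/3); K for a degenerate electron gas
# with mean molecular weight per electron mu_e = 2 (carbon).
import numpy as np

G, HBAR, M_E, M_H = 6.674e-8, 1.055e-27, 9.109e-28, 1.673e-24  # cgs
MU_E = 2.0
K = (3 * np.pi**2)**(2/3) * HBAR**2 / (5 * M_E * (MU_E * M_H)**(5/3))

# n = 1.5 Lane-Emden values (from the numerical solver sketched earlier)
Y_R, OMEGA_R = 3.6538, 2.7141

def wd_radius(mass_g):
    # Eliminate rho_c between M = 4 pi r_c^3 rho_c omega_R (eq. (27))
    # and r_c^2 = (n+1) P_c / (4 pi G rho_c^2) (eq. (21)) for n = 1.5.
    rho_c = (mass_g / (4 * np.pi * OMEGA_R))**2 * (4 * np.pi * G / (2.5 * K))**3
    r_c = (mass_g / (4 * np.pi * rho_c * OMEGA_R))**(1/3)
    return Y_R * r_c   # scales as M^(-1/3)

M_SUN, R_SUN = 1.989e33, 6.957e10
for m in (0.2, 0.4, 0.8):
    print(m, wd_radius(m * M_SUN) / R_SUN)
```

For 0.4 M_⊙ this gives R ≈ 0.017 R_⊙, in line with observed low-mass white dwarf radii.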
Reference 44 has studied white dwarfs in Vainshtein breaking theories using the above equation of state by solving both the GR (equation (13)) and Vainshtein breaking (equation (16)) HSEEs. The mass-radius relation that they obtained is shown in the left panel of figure 6. A χ² test was performed using the observed masses and radii of 12 white dwarfs taken from reference 69, treating Υ_1 as a fitting parameter. A final bound can be found by considering rotating white dwarfs. If the white dwarf is rotating with angular frequency ω then the HSEE must be augmented by a centrifugal force:

dP(r)/dr = −G M(r) ρ(r)/r² − (Υ_1 G/4) ρ(r) d²M(r)/dr² + ρ(r) ω² r.    (52)

If at any point the pressure gradient is outward, i.e. dP/dr > 0, then the star is unstable, and so we must require^44 that the right-hand side of (52) be non-positive,

Υ_1 ≥ 4(ω²r − G M(r)/r²)/(G d²M(r)/dr²),    (53)

at every r. Note that the inequality changes to an upper bound if d ln ρ/d ln r < −2 (where d²M/dr² changes sign). For the simple case of constant density one recovers the bound of reference 43, Υ_1 > −2/3. The positive pressure contribution implies that there is a minimum stellar mass for given values of ω and Υ_1, and that the strongest bounds should come from the most rapidly rotating objects. The majority of white dwarfs are slowly rotating, but some rapidly rotating objects have been observed, in particular RX J0648.0-4418, which has a mass M = (1.28 ± 0.05) M_⊙ and rotates with a period of 13.2 s.^71 Fixing ω to this value, reference 44 has scanned a range of x_0 for different values of Υ_1 to find the range of parameters where such a star can be stable. Their results are illustrated in the right hand panel of figure 6. Accounting for the error bars, only values of Υ_1 in the range −0.59 ≤ Υ_1 ≤ 0.50 can successfully model this object.

Distance Indicator Tests

Distance indicators have proved a highly constraining novel probe of theories that screen using the thin-shell mechanism. Distance indicators are a method of inferring the distance to a galaxy based on some proxy, for example by measuring the apparent magnitude of a standard candle such as a type Ia supernova. Typically, the formula used to infer the distance is based upon empirical calibrations made locally or upon theoretical calculations. In the former case, the calibration has been performed in a screened environment, and in the latter the calculations assume GR. Therefore, if one compares two distance estimates to the same galaxy, one sensitive to the theory of gravity and the other not, the two will not agree if the galaxy is unscreened. The amount by which they agree therefore constrains the model parameters. In what follows, we will summarize how reference 39 used two different distance indicators, Cepheids and tip of the red giant branch (TRGB) stars, to constrain thin-shell models.

Screened Distance Indicators: Tip of the Red Giant Branch

When stars of 1–2 M_⊙ leave the main-sequence and ascend the red giant branch, the stellar luminosity is due to a thin shell of hydrogen burning outside the helium core. As the star ascends, the core temperature increases until it is hot enough for the triple-α process to proceed efficiently. At this point, known as the helium flash, the star moves rapidly onto the asymptotic giant branch (AGB), leaving a visible discontinuity in the I-band at an absolute magnitude I = −4.0 ± 0.1, with the spread being due to a slight dependence on metallicity. This discontinuity can be used as a distance indicator since the luminosity is known. The details of the helium flash depend on nuclear physics and not the theory of gravity, so the TRGB is a screened distance indicator.^l
Unscreened Distance Indicators: Cepheid Stars

Stars with masses 3.5–10 M_⊙ execute semi-convection-driven blue loops in the color-magnitude diagram, where the temperature increases at roughly fixed luminosity. During this phase, the stars can cross the instability strip, where they are unstable to pulsations driven by the κ-mechanism (see 52 for details of this process). In this phase, a layer of doubly ionized helium acts as a dam for energy, so that small compressions of the star go towards increasing the temperature in the ionization zone and not into increasing the outward pressure. This energy dam drives pulsations, which result in a periodic variation of the luminosity and give rise to a period-luminosity (PL) relation.^72 These stars are known as Cepheid variable stars and are used as distance indicators. In thin-shell screened theories, the inferred distance depends on the level of screening because the period of pulsation Π is shorter. This can be calculated either by solving the MLAWE (36) or by using the fact that Π ∝ G^{−1/2} to find^39

Δd/d ≈ −0.3 ΔG/G.    (54)

Thus, in thin-shell screened theories Cepheid distance indicators are unscreened and under-estimate the true distance. (As a rough guide, a fully unscreened Cepheid in f(R) gravity has ΔG/G = 1/3, so the inferred distance would be short by ∼10%; in practice only the outer envelope is unscreened and the effect is smaller.)

Comparisons and Constraints

Using the screening map, reference 73 compared the TRGB and Cepheid distances for a sample of unscreened galaxies as well as a control sample of screened galaxies. The TRGB distance was taken as the true (screened) distance, and the theoretical value of Δd/d was computed by using MESA Cepheid profiles at the blue edge of the instability strip^m to calculate ΔG/G in equation (54). An example is shown in figure 8. One can see that the two samples are consistent, and a statistical analysis yielded the constraints shown in figure 8. In this case χ_BG probes the cosmological value of χ (or, equivalently, f_R0 for f(R) models) since the galaxies are unscreened. The bounds are the strongest astrophysical ones to date and f_R0 > 4 × 10⁻⁷ is ruled out for f(R) models.

^l In fact, if χ_BG ≳ 10⁻⁶, MESA simulations reveal that the tip luminosity can decrease by 20%. This is because the core is unscreened in these cases and the temperature is increased. For this reason, the temperature needed for the helium flash is reached faster, and therefore the discontinuity occurs lower on the red giant branch. In what follows we will only consider χ_BG ≲ 10⁻⁶.
^m The location of the instability strip may change in MG models but, to date, this has never been investigated.

Asteroseismology

The use of radial stellar oscillations in Vainshtein breaking theories has been studied by reference 57, who solved the MLAWE (equation (37)) for some simple polytropic stellar models, with the results shown in figure 9. The effects are small, with the exception of brown dwarfs, where ΔΠ/Π ∼ O(1). The authors also investigated MESA models and found large changes in the period of Cepheid pulsations, although this was primarily driven by the altered equilibrium structure, which changed the intersection of the Hertzsprung-Russell track with the instability strip.

Galactic Tests

The morphology and dynamics of galaxies, in particular dwarf or low surface brightness (LSB) galaxies, have proved to be a strong tool for testing screened MG theories, especially those that screen using the thin-shell mechanism. This is partly because they have multiple components (dark matter, stars, gas) that can be screened to different levels, and partly because they themselves have Newtonian potentials of O(10⁻⁸), making them some of the most unscreened objects in the universe.^n
In this section, we discuss several novel tests that can, and in some cases have, been used to constrain thin-shell screening theories. We will use two common models for the dark matter density profile to aid in our computations: the cored isothermal sphere (CSIS),

ρ(r) = ρ_0/(1 + (r/r_0)²),    (55)

and the Navarro-Frenk-White (NFW) profile,

ρ(r) = ρ_s/[(r/r_s)(1 + r/r_s)²],    (56)

where ρ_0 and r_0 are the core density and radius (CSIS) and ρ_s and r_s are the scale density and radius (NFW). The former profile is typically a good fit to dwarf galaxies, with core radii of order 1–4 kpc,^75 whilst the latter is well-motivated both theoretically and observationally.

Rotation Curves

Theories that violate the equivalence principle, i.e. those that screen using the thin-shell effect, allow for a novel test of gravity using the rotation curves of different galactic components.^73 In particular, a galaxy is composed of stars (with Newtonian potentials Ψ ∼ O(10⁻⁷–10⁻⁶)) and diffuse gas (with Newtonian potential O(10⁻¹¹–10⁻¹²)^20) that rotate around the center with a radially-dependent circular velocity given by

v_circ²(r) = r [dΨ_gal/dr + (Q_obj/(M_obj M_pl)) dφ_gal/dr],    (57)

where a subscript 'gal' refers to fields sourced by the galaxy and M_obj and Q_obj are the mass and scalar charge of the object respectively (see section 2). Let us make two simplifying assumptions: that the galaxy is unscreened, so that dφ_gal/dr = 2β(φ_BG) G M(r) M_pl/r², and that the stars are fully screened (Q_obj = 0) whilst the gas is fully unscreened (Q = β(φ_BG)M_obj). In this case, the circular velocities for the stars, v_star, and the gas, v_gas, satisfy^73

v_gas²(r) = [1 + 2β²(φ_BG)] v_star²(r).    (58)

Thus, a comparison of the rotation curves of stars and gas can provide a novel probe of thin-shell screening theories.

^n Of course, one must use dwarf galaxies that are sufficiently isolated so as to avoid environmental screening by their neighbors. In practice, this means using dwarf galaxies in voids. See the discussion in section 1.1 for more information on this matter.

In practice, performing this test is not so simple because traditional probes of galactic rotation curves use either Hα or 21cm lines, both of which probe the gaseous, unscreened component. Another useful line is the OIII line, which results from a forbidden transition in doubly ionized oxygen. This is particularly useful for thin-shell screening theories since the line is only present at very low densities. The stellar component can be probed independently using absorption lines for metals found in stellar atmospheres, for example the MgIb triplet or the CaII lines, found in the atmospheres of K- and G-dwarfs (main-sequence stars). These stars have Newtonian potentials of order 10⁻⁶, and hence values of χ_BG smaller than this (where they are screened) can be probed, provided that their host galaxies are unscreened for the same parameters.

The screening map contains six galaxies that have both OIII and MgIb information available, which reference 76 used to perform this test. Their method is as follows: first, the gaseous rotation curve is used to fit a density profile for the galaxy, accounting for systematic errors and astrophysical scatter. (Note that the gaseous curve is measured at more finely-spaced radial intervals, so this provides a more accurate fit.) Next, this model is used to predict the stellar rotation curve, and deviations from the measured curve are quantified to determine the statistical significance with which any deviation can be rejected. The results for each individual galaxy are then combined to obtain the constraints in figure 10. (Note that these constraints probe the self-screening parameter (χ_BG = χ_0 = 3f_R0/2) at cosmic densities, since the galaxies are unscreened.) Also shown are the distance indicator constraints for comparison. One can see that distance indicators are more constraining for large couplings but rotation curves can push into the regime 2β²(φ_BG) < 0.1. (Effects on distance indicator tests are subdominant to GR in this range.) The jaggedness of the contours is a result of the small sample size. A larger sample with better kinematical data from both gaseous and stellar emission lines would greatly improve the constraints. It is possible that data from SDSS IV-MaNGA could provide such a sample although, to date, no analysis has been performed.
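To set the expected size of the signal, consider a worked example using the f(R) coupling quoted earlier (an illustration, not a fit to data): equation (58) gives

v_gas/v_star = √(1 + 2β²(φ_BG)) = √(1 + 1/3) ≈ 1.15    for β(φ_BG) = 1/√6,

i.e. a roughly 15% enhancement of the gaseous rotation curve relative to the stellar one in a fully unscreened dwarf.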
Morphological and Kinematical Distortions

Another consequence of the WEP violations discussed in section 2 is that when χ_BG ≲ 10⁻⁶ (the Newtonian potential of main-sequence stars) it is possible for the stellar component of a dwarf galaxy to be self-screening whilst the surrounding dark matter halo and gaseous component are unscreened. This leads to several novel morphological and kinematical tests of thin-shell screened theories.^73 If an unscreened dwarf galaxy of mass M_1 is falling edge-on towards another larger (but unscreened) galaxy of mass M_2 a distance d away, then the gas and dark matter will feel a larger external force than the stars and will hence fall at a faster rate. The stellar disk will then lag behind the gas and dark matter and become offset from the center. In the case of face-on infall, the stars are displaced from the equatorial plane by a height z(R_0),^73 where R_0 is the equilibrium distance from the galaxy's center. (z and R_0 can be taken to define cylindrical coordinates centered on the falling galaxy.) Since the enclosed mass scales as M(R_0) ∝ R_0^n with n < 3 for any sensible density profile, this displacement is an increasing function of distance from the center, and one hence expects the stellar disk to be warped into a U-shape that curves away from the direction of infall.

Reference 73 has simulated these scenarios by solving for the orbits of galaxies composed of 4000 stars, for dark matter halos described using both NFW and CSIS profiles. The halo and gas are taken to be fully unscreened with β(φ_BG) = 1/√2 (corresponding to a fifth force that is equal in strength to the Newtonian force). The halo falls from a distance of 240 kpc to a final distance of 100 kpc in 3 Gyr. The orbits are initially circular with a Gaussian scatter of 1 km/s. They considered two simple scenarios, edge-on infall and face-on infall, and identify the following three observational consequences of the WEP violation:

• Offset stellar disks: For edge-on infall, the stellar disk becomes offset from the gas and dark matter, as shown in figure 11, where O(kpc) offsets are evident for CSIS galaxies. The offset is smaller for NFW profiles owing to the larger slope near the center and therefore larger restoring force.

• Morphological warping: The face-on infall cases exhibit a warping of the galactic disks, whereby the stars are displaced from the principal axis by an amount that increases with distance from the galactic center. An example of this is shown in figure 12.

• Asymmetries in the rotation curves: For edge-on in-falling galaxies, the stellar rotation curve becomes asymmetric compared with the HI curve. An example of this is shown in figure 13. One can see that the zero-velocity point of the stellar rotation curve is off-axis (in the opposite direction to the galaxy's motion), whilst the HI curve is symmetric and sits on-axis. Note that the effect discussed in the previous section (faster HI circular velocities than stellar circular velocities due to self-screening of the stars and unscreening of the dwarf galaxy) is also evident in the plot.
All of the effects found above are observable, and the first attempt to use them to place constraints was made by reference 77, who analyzed data circa 2013. They searched for potential offsets between the HI and optical centroids using SDSS r-band optical measurements to trace the stellar centroid and ALFALFA radio observations of the 21cm line to trace the HI centroid. In both cases they used a sample of unscreened galaxies taken from the screening map as well as a control sample of screened galaxies. A similar test was performed by looking for offsets between the optical centroid and the kinematic HI centroid measured using the rotation curve. Both samples were consistent, and a statistical analysis accounting for both astrophysical and MG scatter did not allow the authors to place any meaningful constraints. The same authors searched for U-shaped warpings of nearly edge-on galaxies by aligning each galaxy image so that the principal axis lies along the horizontal direction and then finding the centroids in each vertical column; no constraints could be placed due to the large error bars. The authors estimate that 8,000 dwarf galaxies would be needed to test down to χ_BG ∼ 10⁻⁶ and 20,000 to reach 10⁻⁷. Finally, the authors tested the prediction of asymmetric rotation curves by using a weighted average of the difference in velocity Δv of the approaching and receding sides of the Hα rotation curve about the optical (stellar) centroid, normalized to the maximum rotation velocity v_max. The GHASP Hα survey was used for this purpose. No constraints could be placed due to large uncertainties in the modeling of the inner halo as well as systematic uncertainties due to asymmetric drift and non-circular motion.

Very recently, reference 78 has used ALFALFA observations of a sample of 10,822 galaxies taken from an updated screening map^7 to constrain thin-shell theories by searching for offsets between the optical and HI centroids. Using a forward-modeling Bayesian likelihood method, they were able to obtain a new bound χ_0 = 3f_R0/2 < 1.5 × 10⁻⁶. Improved measurements and larger samples from future surveys such as VLA or SKA could markedly improve these constraints. In particular, one could constrain β(φ_BG) ≲ 10⁻³.^78

Galaxy Cluster Tests

Galaxy clusters are another useful probe of MG models. One reason for this is that they can be probed using both non-relativistic (dynamical and kinematic) and relativistic (weak lensing) tracers, and many MG theories predict that the dynamical and lensing masses differ. Another is that they are some of the most massive objects in the universe and may enhance small fifth forces (although they are also likely to be highly self-screening).

Dynamical vs. Lensing Mass: X-ray and Lensing Comparisons

There are many ways to probe the mass of galaxy clusters. Dynamical measurements such as rotational velocities or the X-ray surface brightness and temperature use non-relativistic tracers such as the galaxies themselves or the intra-cluster medium gas, whilst weak lensing provides a relativistic probe. In GR, the mass measured using both types of tracer will agree, but in generic MG theories the dynamical mass (measured using non-relativistic objects) and the lensing mass will differ. Typically, one can quantify this difference using the PPN parameter γ, but in screened theories this is close to unity in the solar system^o and the deviation is a function of how screened the cluster is (see 79 and especially section 3.2 of 80 for an extended discussion on this).
In what follows, we will look at two tracers and two different definitions of mass. The intra-cluster ionized plasma is in hydrostatic equilibrium, and the pressure profile is therefore dependent on the dark matter mass (which is the dominant contribution to the gravitational force), which is well-modeled by an NFW profile (56). The gas emits in the X-ray, and the surface brightness can be directly related to the pressure, allowing the mass profile to be determined. We therefore define the thermal mass via

M_thermal(r) = −(r²/(G ρ_gas)) dP_gas/dr.    (60)

In theory, there is a component of non-thermal pressure that could act as a correction to this, so that the thermal mass does not truly probe the non-relativistic source for gravity. This has hitherto been ignored, and chameleon simulations have shown it to be negligible.^81 Using the ideal gas law, P_gas = k_B n_gas T_gas, where n_gas is the gas number density, one has

M_thermal(r) = −(k_B T_gas(r) r)/(G µ m_p) [d ln n_gas/d ln r + d ln T_gas/d ln r],    (61)

where µ is the mean molecular weight and m_p is the proton mass, so that ρ_gas = µ m_p n_gas. One can also relate the gas number density to the electron density via n_e = (2 + µ)n_gas/5. Using a combination of X-ray and SZ measurements, one can apply fitting functions to determine T_gas(r) (which is taken to be the electron temperature) and n_e(r), and therefore infer the thermal mass. See Appendix A of reference 82 for further details.

The dynamics of light is controlled by the lensing potential Φ + Ψ, which in GR satisfies ∇²(Φ + Ψ) = 8πGρ. Integrating once gives d(Φ + Ψ)/dr = 2GM(r)/r², which motivates the definition of a lensing mass,

M_lens(r) = (r²/2G) d(Φ + Ψ)/dr,    (62)

which can be measured using the lensing shear. In GR, the thermal (dynamical) and lensing masses are identical and probe the dark matter component,^p and so comparing the two is a novel probe of screened MG theories.

Thin-Shell Screened Theories

Theories that screen using the thin-shell mechanism are conformal scalar-tensor theories, and therefore the lensing of light is unaffected, so that the lensing mass is the true mass,^q whereas the thermal (dynamical) mass is given by

M_thermal(r) = M(r) + (β(φ_BG)/(G M_pl)) r² dφ(r)/dr,    (63)

where M(r) is the mass found by integrating the density profile. The second term is the deviation from the lensing mass due to MG, and reference 82 has placed constraints on chameleon models by using observations of the Coma cluster to constrain the deviation between this and the lensing mass. This was accomplished by assuming an NFW profile (56) and using an analytical approximation for φ(r) (obtained using said profile). Their constraints are shown in figure 14, and the bound χ_0 = 3f_R0/2 < 9 × 10⁻⁵ was obtained. A follow-up analysis was performed by reference 83 using a sample of 58 X-ray selected clusters for which both temperature data from XMM-Newton and lensing data from CFHTLenS were available; similar constraints were obtained. Currently, these constraints are not competitive with those coming from distance indicators, although this may change in the future since the number of X-ray and lensing measurements is expected to increase. (Such measurements would be applicable to a diverse range of science goals.)

Theories with Vainshtein Breaking

In theories with Vainshtein breaking, both the thermal and lensing mass are altered, since both Φ and Ψ receive corrections. Using equations (4) and (5), they are^r

M_thermal(r) = M(r) + πr³ Υ_1 [2ρ(r) + rρ′(r)],
M_lens(r) = M(r) + πr³ [(Υ_1/2 + 2Υ_3)(2ρ(r) + rρ′(r)) − (5/2)Υ_2 ρ(r)],

which can be evaluated assuming an NFW profile (56).

^p In theory, all the components are probed but the dominant contribution is from the dark matter halo.
^q By this we mean the mass found by integrating over the dark matter density profile.
^r The thermal mass and lensing mass with Υ_3 = 0 were first derived by reference 84. These expressions are fully general and are presented here for the first time.

The case Υ_3 = 0 was studied by reference 84, who used the stacked profiles of the same X-ray selected sample used by reference 83 to constrain Υ_1 and Υ_2 at the 95% confidence level. Note that the mean redshift for the cluster sample is z = 0.33, and since Υ_1 and Υ_2 can vary with time, these constraints should be taken to apply at this redshift. Models where there is no strong time-dependence are constrained to this level at z = 0.
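For intuition, one can evaluate these masses for an NFW halo; the following sketch uses the expressions quoted above (which reconstruct the Υ_3 = 0 results of reference 84 in general form, so the Υ-dependent pieces should be treated as illustrative):

```python
# Sketch: thermal vs. lensing mass for an NFW halo in Vainshtein breaking
# theories, using the (reconstructed) corrections quoted above.
import numpy as np

def nfw_rho(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1 + x)**2)

def nfw_drho_dr(r, rho_s, r_s):
    x = r / r_s
    return -rho_s * (1 + 3 * x) / (r_s * x**2 * (1 + x)**3)

def nfw_mass(r, rho_s, r_s):
    x = r / r_s
    return 4 * np.pi * rho_s * r_s**3 * (np.log(1 + x) - x / (1 + x))

def cluster_masses(r, rho_s, r_s, ups1, ups2, ups3=0.0):
    M = nfw_mass(r, rho_s, r_s)
    rho, drho = nfw_rho(r, rho_s, r_s), nfw_drho_dr(r, rho_s, r_s)
    corr = 2 * rho + r * drho
    m_thermal = M + np.pi * r**3 * ups1 * corr
    m_lens = M + np.pi * r**3 * ((ups1 / 2 + 2 * ups3) * corr - 2.5 * ups2 * rho)
    return m_thermal, m_lens
```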
Strong Equivalence Principle Violations: Black Hole Offsets

Galileon theories are difficult to test on small scales (unless they include Vainshtein breaking) due to the efficiency of the Vainshtein mechanism and, until recently, the strongest constraints came from the lack of deviations in the inverse-square law found using lunar laser ranging (LLR).^86 (Laser ranging to Mars could improve these by several orders of magnitude.^87) One way the Vainshtein mechanism has been successfully constrained is using the SEP violations discussed in section 2.2. The principle of this test, as first pointed out by reference 88, is the following: consider a galaxy falling in an external Newtonian and galileon field. The baryons (stars and gas) and dark matter all have scalar charge Q = M, but the central black hole (in fact, any black hole) has zero scalar charge. The stars, gas, and dark matter therefore fall faster than the central black hole, causing it to lag behind and become offset from the center. Eventually, the restoring force from the remaining baryons at the center will compensate for the lack of the galileon force, leading to a visible offset.^s This offset would be correlated with the direction of the galaxy's acceleration, thereby providing a smoking-gun signal.

^s In fact, it is possible for the black hole to escape the galaxy altogether in some circumstances.^85

One scenario for testing this (inspired by the proposal of 88) was proposed by reference 85. Satellite galaxies orbiting inside massive clusters are accelerating towards the center. When they are far away they can be outside the Vainshtein radius and see an unscreened galileon field, but even inside the virial radius there can be a large galileon contribution to the acceleration. (This is partly because the Vainshtein mechanism is not as efficient for extended objects^89 and partly because 2-halo corrections boost the cluster mass at large radii.^90) Figure 15 shows the predicted offsets for the Virgo cluster (modeled using a concentration c = 5 NFW profile) for satellite galaxies with constant density profiles. One can see that offsets of O(kpc) are predicted. Using a dynamical model of M87,^91 which is falling towards the center of the Virgo cluster, one finds that the central black hole is offset by no more than 0.03 arcseconds, so that the galileon force is ≲ 1000 (km/s)²/kpc. Combining this with the model for the Virgo cluster above, reference 85 obtained the constraints on cubic and quartic galileon models shown in figure 16. One can see that self-accelerating models (r_c ∼ 6–10 × 10³ Mpc) are currently unconstrained, but smaller values of r_c are excluded. Of course, this is just one system, and reference 85 discusses how future X-ray and optical surveys could improve these bounds.

It is worth mentioning here that the black hole offset test is not unique to Vainshtein screened theories or, indeed, screened MG. Any scalar-tensor theory will predict similar SEP violations. What is novel is the screening mechanism.
In the absence of Vainshtein screening, scalar-tensor theories are best tested in the laboratory or solar system79,80 (or with the other astrophysical probes discussed above).

Relativistic Stars

Relativistic stars are a good probe of alternative gravity theories, but many of the classic tests (the absence of dipole radiation, for example) are not competitive for screened MG. In the case of thin-shell screening, the screening is more efficient for objects with larger Newtonian potentials (i.e. relativistic objects) and any effects are highly degenerate with the equation of state (EOS).92 In the case of Vainshtein screening, the Vainshtein radius is several orders of magnitude larger than the radius of neutron stars and any deviations from GR are highly suppressed.93 The exception is theories with Vainshtein breaking, since the deviations inside astrophysical bodies can be important for the structure of compact objects. Given the above considerations, the entirety of this section will focus on Vainshtein breaking theories. See references 94, 95 for reviews of compact objects in MG theories. One generic feature of scalar-tensor theories with coupling strength α (= 2β²(φ_BG) for chameleons) is that there is a tachyonic instability for the scalar when the quantity^t.

Static Spherically Symmetric Stars

Unlike non-relativistic objects, for which the hydrostatic equilibrium equation (HSEE) depends universally on Υ₁ independently of the specific theory, the Tolman-Oppenheimer-Volkoff (TOV) equation for Vainshtein breaking theories depends on both the theory and the asymptotics. Indeed, since Υ₁ is a function of the cosmological time-derivative of the scalar (as well as H and second time-derivatives), one has Υ₁ = 0 in an asymptotically Minkowski spacetime, whereas in an asymptotically FRW spacetime Υ₁ may be non-zero. The situation is complicated further by the fact that there are three branches of solution for the scalar field (the equation of motion reduces to a cubic after manipulation) and the correct branch (the one which gives the correct asymptotics) requires a fully relativistic calculation to determine. For this reason, current works have used specific models that admit exact de Sitter (dS) solutions so that one can determine the correct branch of solution (and therefore Υ₁) in a controlled and systematic manner. References 57, 98, 99 have identified several models that have exact dS solutions and exhibit Vainshtein breaking. The derivation of the TOV equation for these models is long and complicated, so we refer the reader to references 57, 98, 99 for the full details; here we will only sketch it. One first solves the equations of motion to find an exact de Sitter solution. Next, the metric potentials and scalar are perturbed by introducing a perfect fluid source. One finds an exact Schwarzschild-de Sitter metric in the exterior of the star, and the correct branch of solution for the scalar inside the star is the one that matches onto this solution. Taking the sub-horizon limit, one can eliminate the scalar completely, leaving a system of three equations that must be solved: two for the metric potentials and one for the pressure. These are the Vainshtein breaking counterparts of the TOV equation found in GR. Given an equation of state, these can be solved with appropriate boundary conditions (regularity at the center and vanishing pressure at the stellar radius) to find the structure of the star.
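For orientation, the following sketch integrates the standard GR TOV system for a relativistic polytrope. The Vainshtein breaking counterparts discussed above replace the pressure equation with the Υ-corrected system of references 57, 98, 99, which is not reproduced here; only the GR limit is shown. The K = 100, Γ = 2, ρ_c = 1.28 × 10⁻³ setup is a common numerical benchmark in units G = c = M_⊙ = 1.

```python
import numpy as np

# GR baseline: integrate the Tolman-Oppenheimer-Volkoff (TOV) equations for a
# relativistic polytrope P = K rho^Gamma in geometric units (G = c = M_sun = 1).

K, Gamma = 100.0, 2.0            # illustrative polytropic EOS parameters

def rho_of_P(P):
    return (P / K) ** (1.0 / Gamma) if P > 0 else 0.0

def rhs(r, y):
    m, P = y
    rho = rho_of_P(P)
    if r == 0.0:                              # regularity at the center
        return np.array([0.0, 0.0])
    dm = 4.0 * np.pi * r**2 * rho
    dP = -(rho + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    return np.array([dm, dP])

def solve_star(rho_c, dr=1e-3):
    """March outward until the pressure vanishes; return (mass, radius)."""
    r, y = 0.0, np.array([0.0, K * rho_c**Gamma])
    while y[1] > 1e-12 * K * rho_c**Gamma:
        # classic fourth-order Runge-Kutta step
        k1 = rhs(r, y)
        k2 = rhs(r + dr / 2, y + dr / 2 * k1)
        k3 = rhs(r + dr / 2, y + dr / 2 * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return y[0], r

M, R = solve_star(rho_c=1.28e-3)              # central density in code units
# In these units one mass unit is M_sun and one length unit is ~1.477 km.
print(f"M = {M:.3f} M_sun, R = {R:.2f} (compactness C = GM/R = {M/R:.3f})")
```

This benchmark yields a star of roughly 1.4 M_⊙; scanning over central densities traces out the mass-radius relation to which the Υ-corrected curves of figure 17 would be compared.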
In the simplest models, one finds a universal parameter Υ₁ = Υ₂ = Υ (Υ₃ = 0). Reference 95 has solved the TOV equations using an n = 2 relativistic polytrope as well as two realistic (BSK20 and SLy4) neutron star equations of state. Furthermore, reference 99 has solved them using 32 equations of state, including some that include hyperons, kaons, and strange quark matter. An example of the neutron star mass-radius relation found using the SLy4 EOS is shown in figure 17. Also shown is the mass of the heaviest neutron star observed to date (PSR J0348+0432, with mass M = 2.01 ± 0.04 M_⊙).100 One can see that positive values of Υ make the stars less compact, i.e. they have lower masses at fixed radii. In the case of SLy4, even reasonably small values of Υ > 0 result in maximum masses that are not compatible with the mass of PSR J0348+0432, but since the EOS of dense neutron matter is not known, this only implies that the SLy4 EOS is not compatible with Vainshtein breaking theories with these values of Υ; indeed, by changing the EOS one can find masses in excess of 2 M_⊙. Negative values of Υ₁ produce stars that are more compact, so that they have larger masses at fixed radii. Although figure 17 only shows one EOS, the features exhibited there are generic for all the equations of state studied by reference 99. Another interesting prediction evident from the figure is that the maximum mass can be far in excess of the GR prediction. While the exact mass depends on the EOS, reference 99 has noted that, for some equations of state, the radius and mass of higher-mass stars can violate the GR causality bound, and so the observation of stars with these properties would be in tension with GR. The equivalent causality bound in Vainshtein breaking theories is unknown at present since calculating it is a more difficult task; in particular, the kinetic mixing of the scalar and metric would require one to find the sound speeds of both the scalar and pressure modes simultaneously.

Slowly Rotating Stars

A more robust method of testing gravity with relativistic stars is to use relations that are independent of the equation of state. In particular, it is well known in GR that there is a relation between the dimensionless moment of inertia Ī = I/(G²M³), where I is the moment of inertia, and the compactness C = GM/R.101,102 The compactness of a given star can be computed by solving the appropriate TOV equation, but in order to compute the moment of inertia one needs to solve the equations for a slowly rotating star to first order in its angular velocity. Given the complexity of the equations, we will once again sketch the procedure for calculating these quantities and refer the reader to reference 57 for the full details. The method essentially follows that of Hartle103 applied to scalar-tensor theories. The first step is to perturb the Schwarzschild-de Sitter metric to include the star's rotation with an angular velocity Ω.^u For slowly rotating objects, this plays the rôle of the small perturbation parameter. The quantity that must be calculated is ω(r), the coordinate angular velocity of the star as measured by a freely-falling observer. One finds that the scalar is only perturbed at O(Ω²), whereas the O(Ω) contribution to the perturbed tensor equations yields an equation of motion for ω of the form

ω″ + K₁(δν, δλ, ρ, ρ′, P, P′, Υ) ω′ + K₀(δν, δλ, ρ, ρ′, P, P′, Υ) ω = 0,

where K₁ and K₀ are given in reference 57 and reduce to their GR values of K₁ = 4/r and K₀ = 0 when Υ = 0.

^u The perturbation decays at infinity so that there is no change to the asymptotic spacetime.
The moment of inertia is then found from the solution for ω(r) via the relation I = J/Ω, with the angular momentum J read off from the (GR) exterior solution. Reference 101 found that an Ī-C relation of the form

Ī = a₁C⁻¹ + a₂C⁻² + a₃C⁻³ + a₄C⁻⁴ (70)

fits the GR relation well, and so reference 57 fit the Vainshtein breaking relation to the same function (the reader is referred to the original reference for the numerical coefficients). Their results are shown in figure 18. One can see that Vainshtein breaking theories also predict an Ī-C relation that depends on Υ and, furthermore, that it is distinct from the GR prediction. Therefore, in principle, measuring the Ī-C relation could place new bounds on Vainshtein breaking theories. In practice, this measurement is a while away since one needs to find highly relativistic systems where the (post-Newtonian) spin-orbit contribution to the precession can be measured. There are few known systems at the present time, although the next generation of radio surveys should be able to find more, making this measurement possible on a time-scale of a decade or so. Since Vainshtein breaking theories screen outside bodies, the measurement of the spin-orbit coupling, and therefore of I itself, is the same as in GR.

Astrophysical Tests of Couplings to Photons

There have been many studies of chameleon theories that couple to photons via a term in the Lagrangian of the form79,80,104

ℒ ⊃ (φ/4M_γ) F_µν F^µν. (71)

Mixing between the scalar and photons can induce both linear and circular polarizations into the (nominally unpolarized) starlight in the inter-galactic medium (IGM).105 The lack of any observed polarization places the bound M_γ > 1.1 × 10⁹ GeV provided m_eff(φ_IGM) < 1.1 × 10⁻¹¹ eV. The coupling (71) can also act as a loss mechanism whereby photons are converted into chameleons. This can result in deviations in the X-ray luminosity functions of active galactic nuclei (AGN), the lack of which imposes M_γ > 10¹¹ GeV for m_eff(φ_AGN) < 1.1 × 10⁻¹² eV. Similarly, the lack of any observed depletion of CMB photons in the Coma cluster constrains M_γ > 1.1 × 10⁹ GeV.106 Finally, the depletion of CMB photons increases the opacity of the universe and alters the distance-duality relation,107 although current constraints are not competitive with those discussed above. This may change with data releases from current and next-generation cosmological surveys.
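Each photon-coupling bound above applies only below a stated effective-mass threshold, so it can be handy to look up which constraint is operative for a given environment. The sketch below encodes the three bounds exactly as quoted; the tabulated thresholds and the simple lookup logic are the only content, and the No-threshold entry reflects that none was quoted for the Coma bound.

```python
# Lookup of the chameleon-photon coupling bounds quoted above, keyed on the
# chameleon's effective mass m_eff in the relevant environment.

BOUNDS = [
    # (probe, max m_eff [eV] for the bound to apply, lower bound on M_gamma [GeV])
    ("starlight polarization (IGM)",  1.1e-11, 1.1e9),
    ("AGN X-ray luminosity function", 1.1e-12, 1e11),
    ("CMB photon depletion (Coma)",   None,    1.1e9),  # no threshold quoted
]

def strongest_bound(m_eff_ev):
    """Return (probe, M_gamma bound in GeV) for the strongest applicable bound."""
    applicable = [(probe, mg) for probe, m_max, mg in BOUNDS
                  if m_max is None or m_eff_ev < m_max]
    return max(applicable, key=lambda t: t[1])

print(strongest_bound(1e-12))   # -> AGN bound, M_gamma > 1e11 GeV
print(strongest_bound(1e-11))   # -> polarization/Coma bound, M_gamma > 1.1e9 GeV
```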
Managing passenger flows for seaborne transportation during the COVID-19 pandemic

The ongoing coronavirus disease 2019 (COVID-19) pandemic has negatively affected the cruise and ferry industry as passenger numbers and revenues have plummeted. Therefore, we developed a holistic approach for mitigating COVID-19 during seaborne transportation in a cost-efficient way by combining behavioural changes, procedural workflows and technical innovations to reset the industry.

The attempts to curb the coronavirus disease 2019 (COVID-19) pandemic have led many nations to impose social distancing and mobility restrictions, which have greatly affected our daily lives and exposed several weaknesses in our society. Travel and transportation are vital to the welfare of society, as they guarantee the availability of food and medicine. Furthermore, mobility restrictions have negatively impacted industries, individuals and work opportunities at both the national and the international level. The cruise industry, including ship owners and the shipbuilding supply chain, cruise and ferry operators, and passenger ports, is one of the hardest hit. The news coverage of the COVID-19 outbreak on the Diamond Princess cruise ship, among other outbreaks onboard vessels, has dealt a blow to the reputation of the cruise industry in general, as the spread from one single individual resulted in several hundred infected passengers.1 The pandemic has drained revenue streams, passenger numbers have plummeted, and COVID-19 outbreaks on ships have resulted in a sharp decrease in value for cruise ship owners.2 Consequently, there is an urgent need to develop strategies to limit the spread of pathogens onboard cruise ships and ferries.

To this end, we propose rethinking seaborne passenger transportation by rapidly implementing healthy travel concepts that integrate healthcare technology and introduce behavioural and service-production changes to prevent viruses from spreading during the voyage. Furthermore, in order to ensure passenger health, cruise and ferry operators will most likely have to develop new types of service concepts for food, hosting and recreation, as many of the current core services create an ideal environment for pathogens to spread.3 Although the ongoing COVID-19 pandemic will likely affect the cruise industry much more than, for example, the global financial crisis of 2008-09 or the negative publicity from the loss of the Costa Concordia in 2012, a proactive approach to ensuring safe travel can lead to overcoming difficulties in this challenging situation, too.1-4

This perspective presents a model of macro-passenger flows based on a combination of both new and rather well-known countermeasures that considers how pathogens spread on ships and in terminals. In contrast to the detailed, zero-risk view of countermeasures that is predominant in the literature and currently implemented by central authorities, macro-passenger flows comprise the broader actions taken to combat pathogens in a more applicable near zero-risk approach. We advocate a holistic perspective on how to mitigate pandemic outbreaks that includes behavioural (e.g. social distancing), procedural (e.g. staggered boarding times) and technical (e.g. testing procedures) actions against infectious agents.
This involves identifying bottlenecks and transmission hotspots, changing boarding and transportation procedures, and calculating which countermeasures are the most cost-efficient, that is, those with the lowest price per protection. Several studies demonstrate how restrictions on mobility, social distancing, use of face masks, hand washing and general hygiene significantly reduce the transmission potential of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).1-7 However, less is known about how to combine these different countermeasures in a practical and cost-efficient way in real-life scenarios and near zero-risk contexts. The case of the Diamond Princess, where one infected passenger spread the virus to 697 people who were potentially in contact with an additional 627 386 individuals, has demonstrated that improved procedures are needed to limit the spread of contagious diseases.1,8 Some of the biggest difficulties were in implementing large-scale quarantine and obtaining medical support during the voyage and hospitalization after disembarking the passengers.2

In order to practically minimize the risk of an infected passenger boarding a ship, we suggest different terminal procedures depending on the number of passengers. The number of COVID-19-infected individuals within a population varies, but many studies estimate that the infected portion of a population during a pandemic is around 1-2%.5 One of the challenges in identifying COVID-19-infected individuals is that some do not manifest any symptoms; a meta-study estimated that asymptomatic individuals make up around 17% of the SARS-CoV-2-positive population and that the pre-symptomatic proportion is around 63%.9 Therefore, several layers of precautions are needed to identify potentially COVID-19-infected passengers, as relying solely on temperature measurement or symptom checks is insufficient. On the other hand, even with the most sophisticated reverse transcriptase polymerase chain reaction (RT-PCR) testing, with ∼90% sensitivity, there will always be a risk of false negatives, rendering the detection of COVID-19 difficult.10 Therefore, we advocate holistic and practical near zero-risk implementation strategies, as shown in Figure 1 (recommended boarding procedures and recommended onboard procedures).

Based on recent COVID-19 publications and discussions with health-sector professionals and marine-industry stakeholders, we recommend different boarding procedures depending on the size of the ship, as illustrated in Figure 1a. Simply put, the bigger the ship and the longer the duration of the voyage, the more precautions and procedures are necessary to ensure that infection does not spread among the passengers. For smaller ships, near zero risk is achieved by decreasing the maximum number of passengers and implementing a health questionnaire before boarding, combined with symptom and temperature checks at check-in (Figure 1a). For example, if up to 800 passengers are boarding, the calculated number of disease carriers is 8-16 (with a 1-2% infected population). The number of symptomatic disease carriers can initially be narrowed down by a self-diagnostic questionnaire, in which passengers are asked the evening before boarding whether they have COVID-19 symptoms. If they answer affirmatively, they need to test negative in order to travel.
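The arithmetic of this screening funnel is easy to reproduce. In the sketch below, the retention fractions (63% of carriers remaining after the questionnaire and 17% of those remaining after check-in screening) are inferred from the worked numbers quoted in this section rather than stated explicitly in the text, so they should be read as an interpretation, not as measured efficacies.

```python
# Screening-funnel arithmetic for the 800-passenger example in the text.

PASSENGERS = 800
PREVALENCE = (0.01, 0.02)       # 1-2% infected population
AFTER_QUESTIONNAIRE = 0.63      # assumed fraction slipping past the questionnaire
AFTER_CHECKIN = 0.17            # assumed fraction slipping past check-in screening

for p in PREVALENCE:
    carriers = PASSENGERS * p
    at_terminal = carriers * AFTER_QUESTIONNAIRE
    boarding = at_terminal * AFTER_CHECKIN
    print(f"prevalence {p:.0%}: {carriers:.0f} carriers -> "
          f"{at_terminal:.2f} at terminal -> {boarding:.2f} boarding")
```

Running this reproduces the figures quoted in the text: 8-16 carriers become roughly 5.04-10.08 at the terminal and 0.86-1.71 actually boarding.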
This procedure would reduce the number of potential disease carriers arriving at the terminal to ∼5.04-10.08 if the passengers comply with the instructions. At check-in, passengers should undergo both a temperature and a symptom check, possibly in combination with rapid tests, which would further narrow down the number of disease carriers to 0.86-1.71, depending on the COVID-19-carrying population, which also dictates the relevant safety procedures. For bigger ships carrying up to 2400 passengers, a tracking system is needed in addition to the above-mentioned procedures, giving around 74% efficacy if >60% of the passengers comply with the instructions. The tracking system, such as a mobile application that registers the proximity of other users, is shown during boarding to demonstrate that a passenger has not been exposed to the pathogen. To board the biggest ships, with 6000 passengers, travellers need either a negative RT-PCR test taken 1-2 days before boarding or proof of vaccination against the specific disease to achieve near zero-risk travel.

During boarding, it is advisable to spread out arrival times at the terminal so that no more than 60% of the maximum passenger capacity is present at any given time, reducing the number of potentially infectious passengers arriving at the terminal simultaneously.5,11 Dividing passengers into smaller groups can be accomplished by boarding (and scheduling terminal arrivals) in intervals. According to a passenger-movement simulation done for the St Peters terminal, the most crowded places are the queue line and the vicinity of the check-in area.11 Therefore, we suggest having separate queues across several check-in stations, with a 2 m distance between passengers, and handing out complimentary hand sanitizer and face masks at the beginning of each queue. Passenger flows should also be organized so that encounters between departing and arriving passengers are avoided. Contact between staff and passengers inside the cabins during cleaning should likewise be avoided to minimize potential cross-transmission. Furthermore, it is advisable to have separate gangways to the ship for the elderly and other high-risk groups in order to reduce their risk of contracting possible diseases during boarding.

Then, based on the transmission risk onboard and the epidemiological situation at the departure point and the destination, we suggest different modes of operation: normal condition, elevated risk or outbreak mode, which would also be communicated to the passengers with simple 'traffic light' modes of green, yellow or red. To support such operation, we recommend having several levels of procedures for mitigating the risk of spreading contagious diseases inside the vessel that can be adjusted according to the transmission risk, as illustrated in Figure 1b. The first level of protection is to introduce social distancing by reducing both mobility and the number of passengers by at least 20% in order to decrease the transmission risk by 10%.12 Then, by blocking all three main transmission routes (aerosols and direct or indirect contact) at the same time, the risk of spreading the disease is greatly reduced, depicted as an adjusted basic reproduction number (R0; Figure 1b). These procedures would incorporate face masks, hand sanitizers, and additional disinfection and antimicrobial coatings of surfaces that are often in contact with passengers.1-12
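One common way to express the effect of such layered countermeasures is to fold each intervention into an adjusted reproduction number, assuming that every layer independently removes a fraction of the remaining transmission. This multiplicative independence is a simplification (real interventions overlap and interact), and the baseline R0 and all efficacy values below except the cited 10% mobility figure are illustrative placeholders, not figures from this article.

```python
# Adjusted reproduction number under layered countermeasures, assuming each
# intervention independently removes a fraction of remaining transmission.

R0_BASELINE = 3.0   # assumed unmitigated R0, for illustration only

interventions = {
    "reduced mobility/passengers":   0.10,  # the ~10% reduction cited in the text
    "face masks":                    0.40,  # placeholder efficacy
    "hand hygiene":                  0.20,  # placeholder efficacy
    "disinfection + coatings":       0.15,  # placeholder efficacy
}

r_adj = R0_BASELINE
for name, eff in interventions.items():
    r_adj *= (1.0 - eff)
    print(f"after {name:28s}: R_adj = {r_adj:.2f}")
```

Blocking all three transmission routes at once is what makes the product of the survival fractions small, which is the qualitative point of the adjusted R0 in Figure 1b.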
A third level of risk mitigation would be to inform passengers that, when feeling sick, they can take a self-diagnostic test online, where healthcare professionals would assess the situation and possibly administer a COVID-19 test in order to determine whether quarantine is required. The fourth level, implemented during an outbreak, demands a 60% decrease in mobility to control the spread throughout the ship, with the nightlife, buffet and shopping malls closed to keep human contact to a minimum.

For the second part of our 'toolbox', we propose the use of a price-per-protection-by-usage (P/PU) calculation, in which the price of an item is weighted by the protection it offers (in percent) and divided by the number of usages. The following example illustrates the reasoning, as sketched in code below: for a disposable 1 euro face mask, the P/PU would be around 0.56-0.7, whereas for a 10 euro hand sanitizer that can be used by 100 passengers, the P/PU would be 0.0735, and an antimicrobial coating costing 1000 euros could greatly reduce the risk of contracting infectious diseases for potentially over 10 000 passengers (P/PU = 0.1). Thus, the hand sanitizer and the antimicrobial coating would provide more cost-efficient prevention as part of an acute first line of defense against contagious diseases, both now and in the future. In Figure 1b, the first level of procedures starts by decreasing the mobility of passengers, and the second level relies on additional safety measures such as face masks, increased hand hygiene and additional disinfection. The third level relies on online self-diagnostics provided by healthcare professionals, combined with rapid tests and quarantine. The fourth level represents a lockdown in which the mobility of staff and passengers is minimized.

In our procedures, we consider practical, theoretical and cost-efficient mitigation strategies for combating COVID-19 in pursuit of a near zero-risk strategy, in which the most important measures are to improve the boarding procedures so that no one who is sick boards the ship and to have standby procedures allowing the crew to respond quickly to the different risk levels during the voyage by changing passenger behaviour and the mode of operation. However, it is crucial to consider the characteristics of different types of ships and terminals, together with the movements, activities and use of protective measures by passengers and crew members during the voyage, as well as the specific characteristics of the infectious agent, all of which could influence the transmission dynamics of the specific setup. These factors are likely to affect the pathogen-spreading dynamics that dictate the most efficient mitigation procedures at each specific risk level. Nevertheless, the 'toolbox' of procedures described in this study represents a holistic approach to mitigating current and future pandemic threats during seaborne passenger transportation. Combined with calculating the price per protection of each specific countermeasure, this toolbox can serve as a practical means to 'restart' the cruise industry with a pragmatic near zero-risk approach.
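The P/PU comparison referenced above can be sketched as follows. The article quotes P/PU values but no explicit formula; the reading below (price times protection fraction, divided by the number of usages) reproduces the quoted figures, and the protection fractions are back-solved from those figures, so both should be treated as assumptions.

```python
# Price-per-protection-by-usage (P/PU) comparison for the worked example above.

def p_pu(price_eur, protection, usages):
    """Price per protection by usage: price * protection fraction / usages."""
    return price_eur * protection / usages

items = [
    # (name, price in euros, assumed protection fraction, number of usages)
    ("disposable face mask",    1.0,   0.70,  1),
    ("hand sanitizer bottle",  10.0,   0.735, 100),
    ("antimicrobial coating", 1000.0,  1.00,  10_000),
]

for name, price, prot, uses in items:
    print(f"{name:24s} P/PU = {p_pu(price, prot, uses):.4f}")
# Lower P/PU -> more cost-efficient protection per passenger contact, which is
# why the sanitizer (0.0735) and coating (0.1) beat the single-use mask (0.7).
```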