Dataset columns: id (string, length 3-9), source (string, 1 class), version (string, 1 class), text (string, length 1.54k-298k), added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25), created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00), metadata (dict).
id: 28386727
source: pes2o/s2orc
version: v3-fos-license
text:
Metabolism of Retinaldehyde and Other Aldehydes in Soluble Extracts of Human Liver and Kidney*

Purification and characterization of enzymes metabolizing retinaldehyde, propionaldehyde, and octanaldehyde from four human livers and three kidneys were carried out to identify the enzymes metabolizing retinaldehyde and their relationship to enzymes metabolizing other aldehydes. The tissue fractionation patterns from human liver and kidney were the same, indicating the presence of the same enzymes in both organs. Moreover, in both organs the major NAD+-dependent retinaldehyde activity copurified with the propionaldehyde and octanaldehyde activities and was associated with the E1 isozyme (coded for by the aldh1 gene) of human aldehyde dehydrogenase. A small amount of NAD+-dependent retinaldehyde activity was associated with the E2 isozyme (product of the aldh2 gene) of aldehyde dehydrogenase. Some NAD+-independent retinaldehyde activity in both organs was associated with aldehyde oxidase, which could be easily separated from the dehydrogenases. Experiments employing cellular retinol-binding protein (CRBP), purified from human liver, demonstrated that the E1 isozyme (but not the E2 isozyme) could utilize CRBP-bound retinaldehyde as substrate, a feature thought to be specific to retinaldehyde dehydrogenases. This is the first report of CRBP-bound retinaldehyde functioning as substrate for an aldehyde dehydrogenase of broad substrate specificity. Thus, it is concluded that in the human organism, retinaldehyde dehydrogenase (coded for by the raldh1 gene) and the broad substrate specificity E1 (a member of the EC 1.2.1.3 aldehyde dehydrogenase family) are the same enzyme. These results suggest that the E1 isozyme may be more important to alcoholism than the acetaldehyde-metabolizing enzyme, E2, because competition between acetaldehyde and retinaldehyde could result in the abnormalities of vitamin A metabolism associated with alcoholism.

In mammalian organisms retinoids and their derivatives are important in the regulation of diverse physiological functions. Retinoic acid has only recently been recognized as a major hormone in cell differentiation and development (1, 2). It is also thought to be a causative agent in diseases such as cancer (3) and, more recently, schizophrenia (4). In mammals, biosynthesis of retinoids proceeds via central or excentric cleavage of carotene to retinaldehyde, followed by its reduction to retinol or oxidation to retinoic acid (5-7). Enzymes with broad substrate specificity such as alcohol and aldehyde dehydrogenases have long been known to include retinaldehyde among their many substrates (8-10). Efforts have also been made to identify the aldehyde dehydrogenase isozymes active with retinaldehyde (11-13). The mouse enzymes were found to have activity (11), as was the human E1 isozyme (12, 13). More recently, however, cytosolic NAD+-linked retinaldehyde dehydrogenases, more specific toward all-trans-retinaldehyde and assumed to be distinct from aldehyde dehydrogenase of broad substrate specificity, have been purified from rat liver (Ref. 14, retinaldehyde dehydrogenase 1) and kidney (15). In addition, rat retinaldehyde dehydrogenase was shown to utilize cellular retinol-binding protein (CRBP)-bound retinaldehyde as substrate (14). The gene for retinaldehyde dehydrogenase 2 (RALDH2) was cloned from developing mouse eye (16) and rat testis (17), and in both cases the enzyme was characterized by expressing its cloned gene.
The primary structures of the mouse and rat enzymes and the substrate specificity of the enzyme from rat testis exhibited all the characteristic features of aldehyde dehydrogenase. There are differences in the distribution of aldehyde dehydrogenase isozymes between human and rat livers. While in the human liver the mitochondrial enzyme, E2 (coded for by the aldh2 gene), is expressed at approximately the same level as the cytoplasmic enzyme, E1 (coded for by the aldh1 gene), in the rat liver the product of the aldh2 gene is the major enzyme (18). Rat cytoplasm contains very little aldehyde dehydrogenase activity and a large number of aldehyde dehydrogenases (19), some of which have not yet been identified. Moreover, gene duplication of aldh1 must have occurred in the rat (20-22) and other animals (23), while only one aldh1 gene is known in humans. Human aldehyde dehydrogenases have been well characterized. We purified and characterized two human liver aldehyde dehydrogenases (E1 and E2) (24), human liver glutamic semialdehyde dehydrogenase (25), and human betaine aldehyde/γ-aminobutyraldehyde dehydrogenase (E3, GenBank ALDH9) (26). All were found to be of broad substrate specificity, accepting a wide spectrum of aldehydes. Retinaldehyde was recognized as substrate for the E1, E2, and E3 isozymes (27). During this investigation an attempt was made to identify retinaldehyde dehydrogenases such as those purified from rat tissues (14, 15) in mature human liver and kidney, utilizing the procedures and identification methods with which we have many years of experience. The results demonstrate that in human liver and kidney the major retinaldehyde dehydrogenase activity is associated with aldehyde dehydrogenase of broad substrate specificity, which also recognizes CRBP-bound retinaldehyde as a substrate.

Materials
Adult human autopsy livers and kidneys were from NDRI, Philadelphia, PA. NAD+ was from Roche Molecular Biochemicals, Indianapolis, IN. All-trans-retinaldehyde and all-trans-retinoic acid were obtained from Sigma-Aldrich; both were maintained under nitrogen at -70°C and used in rooms illuminated with gold light. Diethylaminobenzaldehyde was from Aldrich; disulfiram was from Ayerst. CM-Sephadex, DEAE-Sephadex, 5'-AMP Sepharose 4B, the Mono P column, Pharmalyte of pH range 3-10, agarose, gradient gels PAA 4/30, and pI standards were from Amersham Pharmacia Biotech. All other chemicals were reagent grade.

Enzyme Assays
Aldehyde Dehydrogenase (EC 1.2.1.3): Aldehyde dehydrogenase catalyzes the dehydrogenation of aldehydes in the presence of NAD+. Activity was determined by a standard assay, as previously employed (24). The assay mixture, in 100 mM sodium pyrophosphate buffer, pH 9.0, contained 500 μM NAD+, 1 mM propionaldehyde, and 1 mM EDTA. The reaction was started by addition of enzyme and followed by continuous recording at 340 nm and 25°C. Reaction velocities were calculated from the extinction coefficient of NADH, 6.22 mM⁻¹ cm⁻¹. A variant of the above assay containing 0.2 mM octanaldehyde (added in 0.02 ml of acetonitrile per 3 ml total volume) instead of propionaldehyde was also employed. The E1 and E2 isozymes can be readily distinguished by comparison of their octanaldehyde (200 μM)/propionaldehyde (1 mM) activity ratio. This ratio has been found to be 0.5-0.6 for the E1 isozyme and 0.92-0.97 for the E2 isozyme.
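As an illustration of how the spectrophotometric assay translates into the reported units, the short Python sketch below converts a recorded rate of absorbance change at 340 nm into specific activity using the NADH extinction coefficient of 6.22 mM⁻¹ cm⁻¹ given above, and applies the octanaldehyde/propionaldehyde activity-ratio criterion used to distinguish the E1 and E2 isozymes. The cuvette path length, absorbance rates, and protein amount are illustrative assumptions, not measurements from the paper.

# Convert a rate of NADH absorbance increase at 340 nm into aldehyde
# dehydrogenase specific activity, then classify an isozyme by its
# octanaldehyde/propionaldehyde activity ratio (0.5-0.6 -> E1, 0.92-0.97 -> E2).
EXTINCTION_NADH = 6.22      # mM^-1 cm^-1, from the assay description above
PATH_LENGTH_CM = 1.0        # standard cuvette path length (assumed)

def specific_activity(delta_a340_per_min, assay_volume_ml, protein_mg):
    """Return micromoles of NADH formed per minute per mg of protein."""
    # delta A / (epsilon * path) gives mM NADH formed per minute
    mM_per_min = delta_a340_per_min / (EXTINCTION_NADH * PATH_LENGTH_CM)
    # mM * ml = micromoles
    return mM_per_min * assay_volume_ml / protein_mg

def classify_isozyme(ratio_octanal_to_propionaldehyde):
    if 0.5 <= ratio_octanal_to_propionaldehyde <= 0.6:
        return "E1-like"
    if 0.92 <= ratio_octanal_to_propionaldehyde <= 0.97:
        return "E2-like"
    return "indeterminate"

# Illustrative values only (not data from the paper):
act_prop = specific_activity(0.037, assay_volume_ml=3.0, protein_mg=0.03)
act_oct = specific_activity(0.020, assay_volume_ml=3.0, protein_mg=0.03)
print(round(act_prop, 2), round(act_oct, 2), classify_isozyme(act_oct / act_prop))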
The specific activity with propionaldehyde (micromoles/min/mg of protein) was determined and adjusted to the maximal specific activity, which is 0.6 μmol/min/mg for E1, 1.6 μmol/min/mg for E2, and 0.6 μmol/min/mg for E3.

Retinaldehyde Dehydrogenase (EC 1.2.1.36): Catalyzes the NAD+-linked dehydrogenation of retinaldehyde. Activity was assayed by HPLC, by determining the retinoic acid formed from the enzyme-catalyzed dehydrogenation of retinaldehyde. For determination of activity in chromatography fractions during purification, retinaldehyde was incubated with enzyme in 100 mM Tris-glycine buffer, pH 9.0, containing 1 mM EDTA, 0.5 mM NAD+, and 20 μM all-trans-retinaldehyde in 1 ml total volume at 25°C. The reaction was initiated by addition of 10 μl of retinaldehyde dissolved in absolute ethanol. Retinaldehyde was completely soluble under these conditions up to 20 μM, and no precipitation was ever observed at 50 μM retinaldehyde. After 20 min of incubation the reaction was terminated by freezing in a dry ice-ethanol bath, which inactivates human aldehyde dehydrogenase, and the mixture was then transferred to a -80°C freezer for storage before HPLC analysis. The total activity of enzyme used per assay was in the range of 0.002-0.02 μmol/min, for which the steady-state reaction was observed for about 40 min and conversion of substrate after 20 min was not greater than 20%. Incubations were done in duplicate, and appropriate controls without NAD+ and without enzyme were included for each experimental set. The incubation mixtures were rapidly thawed (about 30 s) in a water bath, and 200 μl were injected directly onto the HPLC column. At the beginning of purification, when protein concentrations were high, the protein was precipitated after thawing with 50% (v/v) ethanol and removed by centrifugation. Reverse-phase HPLC analysis (variation between duplicates below 5%) was carried out on a Waters μBondapak C18 column with isocratic elution with acetonitrile and 1% (w/v) ammonium acetate (80:20, v/v) at a 1.25 ml/min flow rate and detection at 340 nm. The quantitative measurement of retinoic acid was obtained by comparing sample peak areas with that of standard retinoic acid. Column performance and the stability of the enzymatic assay conditions during HPLC chromatography were checked by employing tetraphenylethylene (Aldrich) as an internal standard and retinoic acid during control enzymatic assays. In both cases variation was below 5% of the averages for controls run several times for each separate set of daily experiments. The progress of the enzymatic reaction was also confirmed by measurement of retinaldehyde concentration, but variation in the range of 10-15% of the averages was too high for precise calculation at the low range. The lower detection limit with the 340 nm detector and a 200-μl sample loop was about 2 pmol for retinoic acid and 4 pmol for retinaldehyde. On the basis of HPLC and spectroscopy, all-trans-retinaldehyde was free from retinoic acid, with purity better than 96%. The purity of retinoic acid was better than 98%. Over the 0.02-0.5 detector sensitivity range (absorbance units at full scale), a linear correlation (r = 0.999) between retinoic acid concentrations up to 50 μM and peak areas was observed. Magnesium chloride (150 μM) was employed in 25 mM Pipes buffer, pH 7.6, in the presence of an appropriate substrate and 500 μM NAD+ and in the absence of EDTA, to test its effect on reaction velocity. Diethylaminobenzaldehyde (10 μM) was used with the standard assay and added before the reaction was started with substrate.
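For the HPLC-based retinaldehyde dehydrogenase assay just described, the conversion from a retinoic acid peak area to a reaction velocity is simple proportionality against the external retinoic acid standard. The Python sketch below shows that calculation for a 20-min incubation in a 1-ml reaction volume; the peak areas and standard concentration are illustrative assumptions, not numbers taken from the paper.

# Quantify retinoic acid formed in the HPLC assay by comparison with a
# retinoic acid standard, then convert to a reaction velocity.
def retinoic_acid_conc_uM(sample_peak_area, standard_peak_area, standard_conc_uM):
    """Concentration in the injected sample, assuming a linear detector response."""
    return standard_conc_uM * sample_peak_area / standard_peak_area

def velocity_nmol_per_min(conc_uM, reaction_volume_ml=1.0, incubation_min=20.0):
    """nmol of retinoic acid formed per minute in the whole incubation."""
    nmol_formed = conc_uM * reaction_volume_ml  # uM * ml = nmol
    return nmol_formed / incubation_min

# Illustrative numbers only; 2 uM product from 20 uM substrate stays within
# the <20% conversion limit stated above.
conc = retinoic_acid_conc_uM(sample_peak_area=15200, standard_peak_area=38000, standard_conc_uM=5.0)
print(round(conc, 2), "uM retinoic acid;", round(velocity_nmol_per_min(conc), 3), "nmol/min")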
Disulfiram (33 μM) was also used with the standard assay and added either before or after the start of the reaction.

Retinaldehyde Oxidase (Aldehyde Oxidase, EC 1.2.3.1): Retinaldehyde oxidase catalyzes the oxidation of retinaldehyde in the presence of atmospheric oxygen and in the absence of NAD+. Retinaldehyde oxidase activity was determined by the same procedure as that used for retinaldehyde dehydrogenation, with NAD+ omitted from the incubation mixture. In all experiments with pure enzyme the reactions were started by addition of enzyme. For pH 7.6, 50 mM Tris/HCl buffer was used, and for pH 9.0, 100 mM Tris-glycine buffer with 1 mM EDTA; both buffers contained 500 μM NAD+. Reaction mixtures (1 ml) were set up in Eppendorf tubes and incubated for 20 min at 25°C. When reactions were carried out in the presence of CRBP, retinaldehyde and CRBP were preincubated for 10 min before adding the enzyme. Kinetic constants were calculated by the Lineweaver-Burk (28) or the single-line Yun and Suelter (29) procedure, employing the statistical method of Cleland (30). All work was done under gold lights.

Protein Determination
Protein was determined by the microbiuret procedure (31) using bovine serum albumin as standard.

Electrophoretic Procedures
Isoelectric focusing was done employing 1% agarose gels prepared with Pharmalyte pH 3-10 according to the manufacturer's instructions. Propionaldehyde and retinaldehyde activities were detected on the gels by employing 1 mM propionaldehyde and 20 μM retinaldehyde in the presence of NAD+ (1 mM), nitro blue tetrazolium (1 mM), and phenazine methosulfate (0.1 mM). The gels were stained for protein with Coomassie Brilliant Blue. pI values were determined by comparison with those of pI standards (Amersham Pharmacia Biotech). Native molecular weights were determined by electrophoresis on a gradient gel (PAA 4/30) and comparison with molecular weight standards (Amersham Pharmacia Biotech). Subunit molecular weights were determined on 10% polyacrylamide gels containing SDS by comparison with known standards (Sigma).

Enzyme Purification
All purification steps were performed under anaerobic conditions, under nitrogen or argon, at 4°C. The tissues were extracted into 30 mM sodium phosphate buffer, pH 6.0, containing 0.1% (v/v) 2-mercaptoethanol and 1 mM EDTA (2 volumes of buffer per wet weight of tissue). Only the enzyme extracted into the buffer is accounted for in the subsequent purification steps. The purification procedure from 600 g of liver and about 300 g of human kidney employed, consecutively, CM-Sephadex and DEAE-Sephadex ion exchange chromatography followed by affinity chromatography on 5'-AMP Sepharose (Table I). All chromatographic steps routinely employed for aldehyde dehydrogenase purification are shown as steps 1-5 (Table I). Additional steps involving the same chromatographic column are shown as the step number with a letter suffix (e.g., steps 5A, 5B, etc.). The columns used for purification from kidney were decreased in size in proportion to the weight available. All buffers used were evacuated and exhaustively flushed with nitrogen and contained 1 mM EDTA and 0.1% (v/v) 2-mercaptoethanol. The dialyzed homogenate in 30 mM sodium phosphate buffer, pH 6.0, was loaded on a CM-Sephadex column (20 g of CM-Sephadex for 600 g of liver) equilibrated with the same buffer.
The proteins passing through the CM-Sephadex column, after adjustment of the pH to 6.8, were applied to a DEAE-Sephadex column (40 g of DEAE-Sephadex for 600 g of liver) equilibrated in 30 mM sodium phosphate buffer, pH 6.8. The column was washed with 2-3 liters of loading buffer before elution. Aldehyde dehydrogenase activity was eluted by a salt gradient from 0 to 0.25 M sodium chloride in the pH 6.8 buffer. The active fractions eluted by the salt gradient were combined and, after adjustment to pH 6.0, applied to the 5'-AMP column (20 g of 5'-AMP Sepharose 4B for 600 g of liver) equilibrated with the same buffer as that used for CM-Sephadex chromatography. After loading, the column was washed with about 5 column volumes of buffer, pH 6.0. The column was first eluted with 1 M sodium chloride in pH 6.0 buffer to obtain the E2 isozyme. Proteins were then eluted from the column with 50 mM sodium phosphate, pH 8.0, containing 1 mg/ml NAD+, to obtain the E1 isozyme. Following this step, proteins were eluted from the column with 1 M NaCl in pH 8.0 buffer containing 0.3 mg/ml NADH. The procedure employed was essentially that described by Hempel et al. (32), except that additional high-salt column washing steps were employed and the E2 isozyme was separated from the E1 isozyme on the 5'-AMP affinity column by elution with 1 M NaCl at pH 6.0 instead of the previously described elution with phosphate buffer at pH 8.0. The currently employed procedure gives a much cleaner separation of the E1 and E2 isozymes.

Structural Analysis
Structural analysis was done by the W. M. Keck Foundation, Yale University, New Haven, CT. The enzymes were carboxymethylated and digested with trypsin prior to chromatography. Tryptic peptide maps were run analytically and preparatively on microbore HPLC with blank and transferrin controls.

Purification of Human CRBP
CRBP was purified from human liver by combining the procedure of Ong and Chytil (33) with that of Fex and Johannesson (34). The crude liver homogenate in 100 mM Tris/HCl, pH 7.5, was centrifuged and the supernatant acidified to pH 5 with acetic acid. The precipitated proteins were removed by centrifugation and discarded. The supernatant was then mixed with CM52-cellulose (Whatman) equilibrated at pH 5 and, after stirring for 5 h at 4°C, filtered (4MM, Whatman). The filtrate was concentrated by lyophilization and dialyzed against the gel filtration buffer (10 mM Tris acetate, pH 7.5, containing 0.2 M NaCl). The protein solution was loaded onto a gel filtration column (Sephadex G-50), and fractions containing proteins of about 15 kDa, as determined by SDS-polyacrylamide gel electrophoresis, were collected and combined. The material was concentrated by lyophilization and dialyzed against the anion exchange buffer (10 mM imidazole acetate, pH 6.2). Anion exchange chromatography on DEAE-Sephadex was done at pH 6.2. Proteins were eluted with a gradient of 10-100 mM imidazole/acetate buffer, pH 6.2; the fractions containing proteins of 15 kDa were combined, concentrated by lyophilization, and centrifuged. The precipitate was discarded, and the supernatant was ultrafiltered using an Amicon membrane with a 30-kDa molecular mass cutoff. The protein in the filtrate was then concentrated by ultrafiltration using an Amicon membrane with a 10-kDa molecular mass cutoff.

Analysis of CRBP
The resulting protein sample produced a single protein band of about 15 kDa on SDS-polyacrylamide gel electrophoresis.
In the presence of CRBP the absorption spectrum of retinol showed the three characteristic peaks in the 320-350 nm range, as described previously (35). The fluorescence emission spectrum obtained by excitation of the protein-retinol mixture at 350 nm showed a peak at about 470 nm, which disappeared upon exposure of the sample to UV light. The dissociation constant of human CRBP for retinol was previously reported to be 10 nM and that for retinaldehyde to be 100 nM (34). During this investigation fluorometric titration with retinol was done on a Perkin-Elmer Model MPF-2A fluorescence spectrophotometer, following retinol fluorescence by the procedure described by Cogan et al. (36) at 350 nm excitation and 485 nm emission. The Kd for retinol calculated from the titration was 14 nM. Fluorometric titration with retinaldehyde was done by quenching of CRBP fluorescence (37) at 280 nm excitation and 340 nm emission. The Kd for retinaldehyde calculated (36) from the titration was 44 nM. It was determined that 62% of the purified CRBP protein was active in binding both retinol and retinaldehyde; this value was used in calculating the molar ratio of CRBP to retinaldehyde. Proportions of free and CRBP-bound retinaldehyde were calculated from the mass law equation Kd = (free retinaldehyde)(free CRBP)/(CRBP-retinaldehyde complex). At CRBP concentrations two times that of retinaldehyde, the mass law equation takes the form Kd = (Rt - x)(2Rt - x)/x, where Rt is the total retinaldehyde concentration and x is the concentration of the CRBP-retinaldehyde complex.

Purification of Retinaldehyde and Aldehyde Dehydrogenases from Caucasian Human Livers and Kidneys: Four purifications were done from human liver and three from kidney; the results are summarized in Table II. Retinaldehyde activity was separated into three fractions: (i) that adsorbed to CM-Sephadex (about 10% of total retinaldehyde activity), (ii) that adsorbed to DEAE-Sephadex and subsequently eluted by a salt gradient (the major retinaldehyde activity, about two-thirds of the total), and (iii) that which did not adsorb to DEAE-Sephadex (about 20% of total retinaldehyde activity). (i) NAD+-independent retinaldehyde activity (presumably aldehyde oxidase) was retained on CM-Sephadex. It was eluted (see step 3A in Table II) by high salt concentrations and was present in both liver and kidney homogenates. In liver it constituted about 10% of total retinaldehyde activity, using the same HPLC assay as that used for dehydrogenase activity. The CM-Sephadex also retained a small amount of NAD+-dependent propionaldehyde activity, previously characterized as glutamic γ-semialdehyde dehydrogenase (25) and found during this investigation to be inactive with retinaldehyde. (ii) The majority of the NAD+-dependent retinaldehyde activity of human liver and kidney adsorbed to DEAE-Sephadex, from which it was eluted by a salt gradient (step 4 in Table II). (iii) Loss of retinaldehyde and propionaldehyde activities during loading and washing of DEAE-Sephadex (see step 4A in Table II) has been observed during these and previous aldehyde dehydrogenase purifications. Use of smaller loads relative to column size and adjustment of the pH from 6.8 to 9 did not prevent the activity loss. Reloading of the active fractions on fresh DEAE-Sephadex resulted in non-adsorption of the initially non-adsorbing enzyme, demonstrating that the original loss of activity from DEAE was not due to an insufficient amount of DEAE-Sephadex. This fraction was active with both propionaldehyde and retinaldehyde. The retinaldehyde-active enzyme which did not bind to DEAE-Sephadex was also present in kidney.
Chromatographic profiles of kidney homogenates with propionaldehyde and retinaldehyde as substrates were identical to those of liver.

Characterization of the Major Retinaldehyde Activity: The major retinaldehyde activity eluted by a salt gradient from DEAE-Sephadex (step 4 in Table II) was loaded on the 5'-AMP Sepharose column and eluted with NAD+ in pH 8.0 buffer, where the E1 isozyme is normally eluted. Isoelectric focusing showed a single band, active with both propionaldehyde and retinaldehyde, which was identified as the E1 isozyme. The ratio of retinaldehyde to propionaldehyde activity was 0.12:1.0 and was constant throughout the elution peak. No substrate inhibition of the E1 isozyme was observed up to 20 μM retinaldehyde. Variation was about 7.5% of the average E1 activity in the retinaldehyde concentration range of 5 to 20 μM. Km and Vmax values (determined at 0.02-5 μM retinaldehyde at pH 9.0) were 120-400 nM and 30-65 nmol/min/mg, respectively. The E2 isozyme was also found to have some activity with retinaldehyde. Its Km value for retinaldehyde was similar to that determined for the E1 isozyme, but the maximal velocity was much lower (2 nmol/min/mg). IEF gel electrophoresis showed that E2 was homogeneous and that its retinaldehyde activity was not due to E1 isozyme contamination. Activity of the E2 isozyme with retinaldehyde was also measured in the presence of Mg2+ ions (38). Magnesium stimulated the retinaldehyde activity, showing that this activity belonged to the mitochondrial E2 isozyme; magnesium inhibited the retinaldehyde activity of the cytoplasmic E1 isozyme. Formation of retinoic acid from retinaldehyde in the presence of the E3 isozyme was also investigated by HPLC. This enzyme also appeared to metabolize retinaldehyde, but the velocity was extremely low, preventing determination of kinetic constants. Thus, all three isozymes recognized retinaldehyde as substrate but metabolized it at low velocity; even with the E1 isozyme the velocity of retinaldehyde dehydrogenation was only 12% of that with propionaldehyde.

Effect of CRBP on Retinaldehyde Activity of Human Aldehyde Dehydrogenases E1 and E2: The effect of CRBP on retinaldehyde dehydrogenation by the human aldehyde dehydrogenase isozymes E1 and E2 was investigated. At 1 μM retinaldehyde, in 50 mM Tris/HCl buffer, pH 7.6, at 25°C, increasing concentrations of CRBP slightly increased the activity of the human E1 isozyme. However, under the same experimental conditions the activity of the human E2 isozyme was inhibited (Fig. 1, panel A). Formation of the CRBP-retinaldehyde complex under these conditions was demonstrated by fluorometric titration of CRBP with retinaldehyde (Fig. 1, panel B). The effect of CRBP on the retinaldehyde dehydrogenase activity of the E1 isozyme is shown in Fig. 2. In the absence of CRBP the Km and Vmax values of E1 for retinaldehyde were 450 nM and 27 nmol/min/mg, respectively. The presence of CRBP at a concentration two times higher than that of retinaldehyde resulted in an increase of the Km value for retinaldehyde (1.7 μM), showing that free retinaldehyde was utilized by the enzyme at lower concentration than bound retinaldehyde. The observed Km value for the CRBP-bound retinaldehyde is an approximation because of the presence of free retinaldehyde, which also contributes to the reaction. The fact that CRBP has no effect on the maximal velocity of retinaldehyde dehydrogenation (29 nmol/min/mg) by the E1 isozyme demonstrates that CRBP-bound retinaldehyde is a substrate for the E1 isozyme.
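The interpretation of these CRBP experiments rests on knowing how much of the retinaldehyde is actually complexed. The quadratic implied by the rearranged mass-law expression given under Analysis of CRBP is sketched below in LaTeX; the 1 μM retinaldehyde and two-fold excess of active CRBP used for the worked number are the conditions of the experiment above, while the rounding and the qualitative reading are only an illustration.

\[
K_d=\frac{(R_t-x)(C_t-x)}{x}
\;\Longrightarrow\;
x^2-(R_t+C_t+K_d)\,x+R_tC_t=0,
\qquad
x=\frac{(R_t+C_t+K_d)-\sqrt{(R_t+C_t+K_d)^2-4R_tC_t}}{2},
\]
where $R_t$ is the total retinaldehyde, $C_t=2R_t$ the total active CRBP, and $x$ the CRBP-retinaldehyde complex. With $R_t=1\ \mu\mathrm{M}$, $C_t=2\ \mu\mathrm{M}$, and $K_d=0.044\ \mu\mathrm{M}$, $x\approx 0.96\ \mu\mathrm{M}$: roughly 96% of the retinaldehyde is bound, so the nearly unchanged $V_{\max}$ of E1 (27 versus 29 nmol/min/mg) cannot be attributed to the small free fraction, whereas the much lower E2 velocity is consistent with E2 acting mainly on that free fraction.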
Thus, the E1 isozyme can utilize CRBP-bound retinaldehyde. The effect of CRBP on the retinaldehyde dehydrogenation of the E2 isozyme is shown in Fig. 3. In the absence of CRBP the Km and Vmax values of E2 were 450 nM and 1.6 nmol/min/mg, respectively. The presence of CRBP at twice the concentration of retinaldehyde resulted in inhibition of the retinaldehyde dehydrogenation activity of the E2 isozyme. The Km for retinaldehyde in the presence of CRBP was determined as 310 nM. CRBP exerted its effect on Vmax, which was less than 20% of that in the absence of CRBP. In fact, the maximal velocity of the E2 isozyme in the presence of CRBP never exceeded the velocity attributable to the free retinaldehyde remaining in the incubation.

TABLE II. Summary of purification of aldehyde dehydrogenase and retinaldehyde dehydrogenase from human liver and kidney. Results are measurements of activity (IU = international units = micromoles of product formed per min) extracted by 200 ml of buffer from 100 g of liver or kidney and are presented as mean ± S.D. for n = 4 purifications (liver) and n = 3 purifications (kidney).

The retinaldehyde-active enzyme that did not bind to DEAE-Sephadex (step 4A in Table II) bound to 5'-AMP Sepharose under the same conditions as aldehyde dehydrogenase (Table III). The enzyme also behaved like the E1 and E2 isozymes during elution. 1 M NaCl at pH 6.0 eluted the propionaldehyde-active enzyme (with almost no retinaldehyde activity), which on isoelectric focusing gels could be identified as the E2 isozyme. Elution from 5'-AMP Sepharose with NAD+ at pH 8.0 resulted in recovery of most of the retinaldehyde activity and the remainder of the propionaldehyde activity. As in the case of the E1 isozyme, the remainder was eluted with high salt and NADH. On isoelectric focusing both were identified as the E1 isozyme by staining with either propionaldehyde or retinaldehyde.

TABLE III. Purification of the retinaldehyde-active enzyme (Table II) not binding to DEAE-Sephadex. Purification steps are described in Table I and are the same as those used in Table II for purification from human liver and kidney. Enzymes eluted in both steps of 5'-AMP elution were identified by isoelectric focusing and staining of gels with propionaldehyde and retinaldehyde in the presence of 500 μM NAD+. 28.3 μmol/min of propionaldehyde activity and 10.8 μmol/min of retinaldehyde activity from step 4A were loaded on the 5'-AMP Sepharose column.

This non-binding enzyme, designated E1(4A), differed from the E1 isozyme in its retinaldehyde to propionaldehyde activity ratio, which was about four times higher than that of the E1 isozyme. Its specific activity with propionaldehyde was also 25% higher (Table IV). Determination of its Km value with retinaldehyde was attempted (Fig. 4). Unlike the retinaldehyde activity of the E1 isozyme (see Fig. 2), E1(4A) was subject to pronounced substrate inhibition by retinaldehyde. Although Km and Vmax values could not be accurately determined from the data of Fig. 4, because of the rapid curvature due to substrate inhibition, the results suggest a Km of about 1.6 μM and a Vmax of about 1.4 μmol/min/mg. Both of these values are larger than those determined for the E1 isozyme. As shown in Table IV, other kinetic and physicochemical properties of E1(4A) were indistinguishable from those of E1. Structural comparison of E1 and E1(4A) by peptide mapping resulted in identical, completely superimposable peptide maps (Fig. 5).

Additional Experiments: Further investigation of the DEAE eluates of liver proteins demonstrated that a large amount of protein with a molecular mass of about 15,000 Da (which might have been CRBP) was eluted from DEAE during column loading and washing in the same fractions as the retinaldehyde-active E1(4A) enzyme.
In our previous experiments, a loss of propionaldehyde activity of 7-10% occurred during loading and washing of DEAE-Sephadex columns when human livers with no history of alcoholism were used, which compares well with the results presented in step 4A of Table II. It is of interest to note that the loss of E1(4A) on DEAE-Sephadex, as measured by loss of propionaldehyde activity, was very small when purification was carried out from the livers of one male and one female alcoholic (0.5 and 1.5%, respectively).

DISCUSSION
The procedure employed for purification of retinaldehyde dehydrogenase was similar to that used for purification of aldehyde dehydrogenases, with additional steps included to analyze fractionated proteins with properties differing from those of aldehyde dehydrogenase (Tables I and II). It was found that retinaldehyde dehydrogenase activity copurified with aldehyde dehydrogenase in both liver and kidney. Four livers and three kidneys (all from different Caucasian individuals) showed similar enzyme activity distributions in both of these tissues. Isoelectric focusing activity gels for liver and kidney extracts at different purification stages, developed with retinaldehyde and propionaldehyde as substrates, were also identical. During enzyme purification from both tissues, propionaldehyde activity paralleled retinaldehyde activity (except in the case of aldehyde oxidase, where propionaldehyde activity could not be determined by the assay employed). Therefore, we state with confidence that retinaldehyde oxidation in extracts of human liver and kidney is associated in small part with aldehyde oxidase and in large part with NAD+-linked aldehyde dehydrogenase of broad substrate specificity. We also conclude that there is no separate or special retinaldehyde dehydrogenase, with detectable catalytic activity, in extracts of kidney that is absent from extracts of liver; the aldehyde dehydrogenase composition of both tissues is also the same. No evidence could be detected for retinaldehyde dehydrogenases distinct from the known aldehyde dehydrogenases. The ratios of retinaldehyde to propionaldehyde activity varied greatly among the isozymes. The major retinaldehyde activity, comprising two-thirds of the total starting activity, was found to be associated with the E1 isozyme of human aldehyde dehydrogenase. Its Michaelis constant for retinaldehyde was low, and so was its maximal velocity. The Km values agreed with those previously determined (12, 13); the maximal velocity was, however, much lower than that previously reported. The maximal velocity of the E1 isozyme with retinaldehyde as substrate at pH 9.0 was only about 12% of its maximal velocity with propionaldehyde. However, the enzyme occurs at a large concentration in human liver (about 1 g/kg); thus, an average human liver of 1.4 kg should be capable of metabolizing about 38 μmol/min of retinaldehyde at the maximal velocity determined here, 0.027 μmol/min/mg at pH 7.6. The E2 isozyme was also found to have some activity with retinaldehyde. Its Michaelis constant for retinaldehyde was similar to that of the E1 isozyme; the maximal velocity, however, was considerably lower than that of the E1 isozyme. Nevertheless, since this enzyme occurs in human livers at protein concentrations similar to that of the E1 isozyme, even at this low velocity 1.5 μmol/min of retinoic acid could be produced from retinaldehyde by this enzyme in an average human liver.
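The whole-organ estimate quoted above follows from simple arithmetic on the stated enzyme content and maximal velocity; the short LaTeX restatement below uses only the figures given in the text and makes the step explicit.

\[
1.4\ \mathrm{kg\ liver}\times 1\ \mathrm{g\ E1/kg} \approx 1.4\times 10^{3}\ \mathrm{mg\ E1},
\qquad
1.4\times 10^{3}\ \mathrm{mg}\times 0.027\ \mu\mathrm{mol\,min^{-1}\,mg^{-1}} \approx 38\ \mu\mathrm{mol\,min^{-1}}.
\]

A parallel calculation with the much lower E2 maximal velocity underlies the 1.5 μmol/min estimate quoted for that isozyme.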
The activity of the E3 isozyme with retinaldehyde was found to be so small that it could not be accurately determined by HPLC. Thus, the E1 isozyme of human aldehyde dehydrogenase appears to be a major contributor to retinaldehyde metabolism in human liver and kidney. Retinaldehyde is inherently unstable and is known to be present in tissues in a bound form, bound inside the cell to CRBP. Utilization of CRBP-bound retinaldehyde by retinaldehyde dehydrogenase has been described (14) as a distinguishing characteristic of retinaldehyde dehydrogenases. It was important, therefore, during this investigation to find out whether aldehyde dehydrogenases could utilize retinaldehyde bound to CRBP. Human CRBP purified from liver was used for studying the retinaldehyde activity of the E1 and E2 isozymes. As shown in Fig. 2, CRBP-bound retinaldehyde is utilized by the E1 isozyme as substrate. What makes this experiment especially convincing is that the E2 isozyme apparently cannot utilize CRBP-bound retinaldehyde (Fig. 3); only free retinaldehyde can be utilized by the E2 isozyme. Thus, the E1 isozyme exhibits features previously attributed to specific retinaldehyde dehydrogenases. In view of the above results, it can be stated with confidence that retinaldehyde dehydrogenase 1 (14, 15) and E1 are the same enzyme. The second major retinaldehyde activity (constituting about 15-20% of the starting retinaldehyde activity, Table II) was eluted during washing of DEAE-Sephadex (step 4A in Table II) and was called E1(4A) because of its similarity to the E1 isozyme (Table IV). Although E1(4A) appeared to be the same as the E1 isozyme on isoelectric focusing gels and in its behavior on 5'-AMP Sepharose, as well as in the majority of its properties (see Table IV), its retinaldehyde:propionaldehyde activity ratio was considerably higher, about four times higher than that of the E1 isozyme (Table III). The ratio of retinaldehyde to propionaldehyde activity was determined at 20 μM retinaldehyde, where substrate inhibition is considerable (see Fig. 4); thus this ratio is even higher at lower retinaldehyde concentrations. The maximal velocity of 1.4 μmol/min/mg, obtained by extrapolation of the data in Fig. 4, is higher than those reported for rat retinaldehyde dehydrogenases (14, 15). The enzyme was also subject to substrate inhibition by retinaldehyde (Fig. 4), which was not observed with the E1 isozyme (Fig. 2). Thus, it appeared that this enzyme might be the human equivalent of the animal retinaldehyde dehydrogenases. However, when E1(4A) was subjected to careful peptide mapping to compare it with the E1 isozyme, no structural differences could be detected (Fig. 5). If there are only a few substitutions, they may not be visible in tryptic peptide maps, where symmetrical peaks usually represent mixtures of several peptides. Thus, the question of the primary structure of E1(4A) cannot be finally resolved without complete sequencing of the E1(4A) protein. This may be important in view of the fact that further investigations indicated that E1(4A) was present at much lower concentrations in DEAE-Sephadex eluates from the livers of alcoholics. There are also other possibilities. The enzyme that does not attach to DEAE might be E1 that is bound to CRBP or to some other small ligand. Large amounts of a protein with a molecular mass of about 15,000 Da were visualized in the chromatography eluates containing E1(4A).
It has recently been demonstrated that binding of metals can alter the substrate specificity of an enzyme involved in the methionine salvage pathway (39). Thus, higher activity with retinaldehyde, achieved in other mammals by gene duplication, may have been achieved in humans by ligand binding. E1 is extremely unstable and sensitive to atmospheric oxygen. If E1(4A) represents E1 bound to CRBP, its higher specific activity with propionaldehyde could be the result of a protective effect of CRBP. The similarity of E1(4A) to the E1 isozyme, however, argues against E1(4A) being a specific retinaldehyde dehydrogenase. Even if some structural differences are detected, E1(4A) has to be considered a variant of the E1 isozyme (see Table IV) and is, therefore, an aldehyde dehydrogenase of broad substrate specificity. The third major activity was associated with aldehyde oxidase, as previously observed by Chen and Juchau (40) in rat conceptal homogenates. This enzyme could be easily separated from the dehydrogenases by chromatography on CM-Sephadex. In one liver, less than 1% of total propionaldehyde or octanaldehyde activity separated on DEAE from the E1, E2, and E3 isozymes, suggesting that it may be yet another, hitherto unidentified, aldehyde dehydrogenase; no activity with retinaldehyde was detected in this fraction. Retinaldehyde dehydrogenase 2, which has been reported to occur in response to developmental stimuli (16, 17), has not yet been described in the human organism. This enzyme has recently been cloned from developing mouse eye (16) and rat testis (17). The published amino acid sequence and the reported properties (17) are so similar to those of aldehyde dehydrogenase that there is no reason to assume that even this enzyme is not an aldehyde dehydrogenase of broad substrate specificity, since it can apparently utilize short and long chain aldehydes as substrates (17).

FIG. 5. Tryptic peptide maps of the E1 and E1(4A) isozymes. Tryptic digests of E1(4A) (75 μmol) and E1 (75 μmol) were chromatographed preparatively on microbore HPLC, in addition to blank and transferrin controls. Top, E1(4A); bottom, E1; blank and transferrin controls not shown.
added: 2018-04-03T01:12:10.308Z
created: 1999-11-19T00:00:00.000
metadata: { "year": 1999, "sha1": "a483ad268e2101621cd6840648b733948de384ab", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/274/47/33366.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "cb21dea7f36297867ecc53ffce58c8df106b7bcf", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine", "Biology" ] }
id: 53454182
source: pes2o/s2orc
version: v3-fos-license
text:
RSTM Numerical Simulation of Channel Particulate Flow with Rough Wall

Turbulent gas-solid particle flows in channels have numerous engineering applications, ranging from pneumatic conveying systems to coal gasifiers and chemical reactor design, and are one of the most thoroughly investigated subjects in the area of particulate flows. These flows are very complex and are influenced by various physical phenomena, such as particle-turbulence and particle-particle interactions, deposition, gravitational and viscous drag forces, particle rotation, lift forces, etc.

Introduction
Turbulent gas-solid particle flows in channels have numerous engineering applications, ranging from pneumatic conveying systems to coal gasifiers and chemical reactor design, and are one of the most thoroughly investigated subjects in the area of particulate flows. These flows are very complex and are influenced by various physical phenomena, such as particle-turbulence and particle-particle interactions, deposition, gravitational and viscous drag forces, particle rotation, lift forces, etc. The mutual effect of particles and flow turbulence has been the subject of numerous theoretical studies over several decades. These studies have reported on the influence of gas turbulence on particles (one-way coupling) and of particles on the turbulence of the carrier gas flow (two-way coupling), and, in the case of high flow mass loading, on particle-particle interactions as well (four-way coupling). The influence of particles on gas turbulence consists in turbulence attenuation or augmentation, depending on the relation between the parameters of the gas and the particles. There are different approaches and numerical models that describe the mutual effect of gas turbulence and particles. The k-ε models elaborated earlier for turbulent particulate flows, e.g. [1-5], considered only turbulence attenuation, through additional terms in the equations for the turbulence kinetic energy and its dissipation rate. The results obtained by these models were validated against experimental data on turbulent particulate free-surface flows [6]. Later on, the models [7, 8] considered both the turbulence augmentation and the attenuation occurring in pipe particulate flows, depending on the flow mass loading and the Stokes number. These models were then extended to free-surface flows. As opposed to the k-ε models, [7, 8] considered both the turbulence augmentation caused by the velocity slip between gas and particles and the turbulence attenuation due to the change of the turbulence macroscale occurring in the particulate flow as compared to the unladen flow. This approach has been successfully tested for various pipe and channel particulate flows. Currently, the probability density function (PDF) approach is widely applied for the numerical modeling of particulate flows. The PDF models, for example [9-13], contain more complete differential transport equations, which are written for various velocity correlations and consider both the turbulence augmentation and attenuation due to the particles.
As opposed to pipe flows, rectangular and square channel flows, even when unladen, are considerably anisotropic with respect to the components of the turbulence energy; this anisotropy is most vividly expressed near the channel walls and corners and is notable for the secondary flows it produces. In addition, the presence of particles aggravates such anisotropy. Such flows are studied by Reynolds stress turbulence models (RSTM), which are based on the transport equations for all components of the Reynolds stress tensor and for the turbulence dissipation rate. The RSTM approach allows a complete analysis of the influence of particles on the longitudinal, radial, and azimuthal components of the turbulence kinetic energy, including possible modifications of the cross-correlation velocity moments. A few studies based on the RSTM approach showed its good performance and capability for simulation of complicated flows, e.g. [14], as well as for turbulent particulate flows, see for example [15]. Recently, a nonlinear algebraic Reynolds stress model based on the PDF approach was proposed in [16] for a gas flow laden with small heavy particles. The original equations written for each component of the Reynolds stress were reduced to a general form in terms of the turbulence energy and its dissipation rate with the additional effect of the particulate phase. Eventually, the model [16] operated with the k-ε solution and did not allow analysis of the particle effect on each component of the Reynolds stress. The 3D RSTM model presented in this chapter is intended for simulation of the downward turbulent particulate flow in a channel of square cross-section (aspect ratio of 1:6) with rough walls. In order to verify and validate the developed model, separate investigations were carried out. The first study was the simulation of the downward unladen gas flow in a channel of rectangular cross-section with smooth and rough walls. The second study relates to the downward grid-generated turbulent particulate flow in the same channel with smooth walls. The further stage of this study will be the development of the present model for application to the particulate channel flow with rough walls and an initial level of turbulence.

Governing equations and numerical method
The present 3D RSTM model is based on the two-way coupling k-L model [8] and applies the 3D RANS equations and the RSTM closure momentum equations. The sketch of the computational flow domain is shown in Figure 1 for the case of the downward grid-generated turbulent particulate flow in the channel of square cross-section. Here u and u_s are the longitudinal components of the velocities of gas and particles, respectively.

Governing equations for the Reynolds stress turbulence model
The numerical simulation of the stationary incompressible 3D turbulent particulate flow in the square cross-section channel was performed with the 3D RANS model, applying the 3D Reynolds stress turbulence model for closure of the governing equations of the gas, while the particulate phase was modeled in the frame of the 3D Euler approach with the equations closed by the two-way coupling model [8] and the eddy-viscosity concept.
The particles were introduced into a developed isotropic turbulent flow set up in the channel domain, which had been computed beforehand to obtain the flow velocity field. The system of the momentum and closure equations of the gas phase is identical for the unladen and the particle-laden flows under the impact of the viscous drag force. Therefore, only the system of equations of the gas phase written for the case of the particle-laden flow in Cartesian coordinates is presented here.

The 3D governing equations for the stationary gas phase of the laden flow are written together with the closure equations as follows (Eqs. 1-11): the continuity equation; the x-, y-, and z-components of the momentum equation; the transport equations of the x-, y-, and z-normal components of the Reynolds stress; the transport equations of the xy, xz, and yz shear components of the Reynolds stress; and the transport equation of the dissipation rate of the turbulence kinetic energy. Here u, v, and w are the axial, transverse, and spanwise time-averaged velocity components of the gas phase, respectively.

The given system of transport equations (Eqs. 1-11) is based on the model [17], with the numerical constants taken from [18]. In these equations, the turbulence integral time scales refer to the unladen and particle-laden flows, respectively; k and k_0 are the turbulence kinetic energies of the gas in the particle-laden and unladen flows, respectively; ε and ε_0 are the dissipation rates of the turbulence kinetic energy in the particle-laden and unladen flows, respectively; τ_p is the Stokesian particle response time, τ_p = ρ_p δ²/(18 ρ ν); ν is the gas viscosity; and (u − u_s), (v − v_s), and (w − w_s) are the components of the slip velocity. The additional terms of Eqs. (2-7) pertain to the presence of particles in the flow and contain the particle mass concentration α. The influence of particles on the gas is accounted for by the aerodynamic drag force in the momentum equations (the last term on the right-hand sides of Eqs. 2-4), and by the turbulence generation and attenuation effects contained in the transport equations of the components of the Reynolds stress (the penultimate and last terms on the right-hand sides of Eqs. 5-7, respectively). The model applies the two-way coupling approach [8], in which the turbulence generation terms are proportional to the squared slip velocity, and the turbulence attenuation terms are expressed via the hybrid length scale L_h and the hybrid dissipation rate ε_h of the particle-laden flow, where L_h is calculated as the harmonic average of the integral length scale of the unladen flow L_0 and the interparticle distance λ. The influence of particles on the shear Reynolds stress components is considered in Eqs. (8-10) indirectly, via the averaged velocity field (u, v, w). The production terms P are determined according to [18]. The diffusive terms, i.e. the second-order partial derivatives over the Cartesian coordinates appearing as the first three terms in Eqs. (5-11), are given, e.g., in [18].
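The bodies of Eqs. 1-11 are given in the original chapter; as a point of reference for the structure just described, a generic LaTeX sketch of the incompressible continuity equation and of an x-momentum equation of the RANS type with a two-way-coupling drag term follows. This is not a verbatim reproduction: the drag-term coefficient α/τ'_p is the standard form implied by the variables defined above, the harmonic-average relation for L_h is written in its textbook form (whether the factor of 2 is included depends on the convention of [8]), and the exact Reynolds-stress and dissipation-rate transport equations and constants should be taken from [17, 18].

\[
\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=0,
\]
\[
u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z}
= -\frac{1}{\rho}\frac{\partial p}{\partial x}
+\frac{\partial}{\partial x}\Bigl(\nu\frac{\partial u}{\partial x}-\overline{u'u'}\Bigr)
+\frac{\partial}{\partial y}\Bigl(\nu\frac{\partial u}{\partial y}-\overline{u'v'}\Bigr)
+\frac{\partial}{\partial z}\Bigl(\nu\frac{\partial u}{\partial z}-\overline{u'w'}\Bigr)
+ g_x - \frac{\alpha}{\tau_p'}\,(u-u_s),
\]
\[
\tau_p=\frac{\rho_p\,\delta^2}{18\,\rho\,\nu},
\qquad
L_h=\frac{2\,L_0\,\lambda}{L_0+\lambda}.
\]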
The anisotropy terms R of the normal and shear components of the Reynolds stress are defined by various pressure-rate-of-strain models of isotropic turbulence, written in terms of variations of the constants C_R and C_2 [18]. The relative friction coefficient C'_D is expressed as C'_D = 1 + 0.15 Re_s^0.687 for the non-Stokesian streamlining of a particle. The particle Reynolds number Re_s is calculated according to [19] from the particle diameter and the magnitude of the slip velocity.

The 3D governing equations for the particulate phase are written as follows: the particle mass conservation equation and the x- and y-components of the particle momentum equation. The closure model for the transport equations of the particulate phase follows the PDF model [20], which determines the turbulent kinetic energy of the dispersed phase and the coefficients of turbulent viscosity and turbulent diffusion of the particulate phase; here ν_t is the turbulent viscosity of the gas, ν_t = 0.09 k_0²/ε_0, and τ'_p = τ_p/C'_D is the particle response time corrected for the non-Stokesian regime of particle motion.

Boundary conditions for the Reynolds stress turbulence model
The grid-generated turbulent flow is vertical, and it is symmetrical with respect to the vertical axis in both the y- and z-directions. Therefore, symmetry conditions are set at the flow axis and wall conditions are set at the wall. In the case of the rough and smooth walls the flow was asymmetrical in the y-direction and symmetrical in the z-direction. The axisymmetric conditions are written for z = 0. The boundary conditions for the particulate phase are set at the wall, for y = 0.5h and for z = 0.5h, and boundary conditions are also set at the exit of the channel. Additionally, the initial boundary conditions are set for three specific cases:
1. the low level of initial turbulence intensity that usually occurs at the axis of a turbulent channel flow;
2. the high level of initial turbulence generated by two different grids:
a. a small grid with mesh size M = 4.8 mm;
b. a large grid with mesh size M = 10 mm.

Numerical method
The control volume method was applied to solve the 3D partial differential equations written for the unladen flow (Eqs. 1-11) and the particulate phase (Eqs. 26-29), respectively, taking into account the boundary conditions (Eqs. 30-40). The governing equations were solved using the implicit lower-upper (ILU) matrix decomposition method with flux-blending deferred-correction and upwind-differencing schemes [21]. This method is utilized for calculations of particulate turbulent flows in channels of rectangular and square cross-sections. The calculations were performed in dimensional form for all flow conditions. The number of control volumes was 1,120,000.

Numerical results
The validation of the present model took place in two stages. In the case of the unladen flow, the model was validated by comparison of the kinetic (normal) components of the stresses with the experimental data [22] obtained for a specially constructed horizontal turbulent gas flow in a channel of rectangular cross-section (aspect ratio of 1:6), 54 mm in width, with smooth and rough walls, for a flow Reynolds number Re = 56000 and a roughness height of 3.18 mm.
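Before turning to the particle-laden validation case below, the Python sketch here makes the particle-response closure concrete: it evaluates the Stokesian response time τ_p = ρ_p δ²/(18 ρ ν) (interpreting ρ and ν as the gas density and kinematic viscosity), the particle Reynolds number based on the slip velocity, the drag correction C'_D = 1 + 0.15 Re_s^0.687, and the corrected response time τ'_p = τ_p/C'_D. The 700-μm glass beads of density 2500 kg/m³ are those of the validation case described below; the air properties and the slip velocity are illustrative assumptions, not numbers from the chapter.

# Particle response time and non-Stokesian drag correction for the
# gas-solid channel flow closure described above.
rho_p = 2500.0      # particle density, kg/m^3 (700-um glass beads)
d_p = 700e-6        # particle diameter, m
rho_g = 1.2         # air density, kg/m^3 (assumed)
nu_g = 1.5e-5       # air kinematic viscosity, m^2/s (assumed)

def stokes_response_time(rho_p, d_p, rho_g, nu_g):
    """tau_p = rho_p * d^2 / (18 * rho_g * nu_g)."""
    return rho_p * d_p**2 / (18.0 * rho_g * nu_g)

def drag_correction(slip_velocity, d_p, nu_g):
    """Return Re_s and C'_D = 1 + 0.15 Re_s^0.687, with Re_s based on the slip velocity."""
    re_s = abs(slip_velocity) * d_p / nu_g
    return re_s, 1.0 + 0.15 * re_s**0.687

tau_p = stokes_response_time(rho_p, d_p, rho_g, nu_g)                 # ~3.8 s for these beads
re_s, c_d = drag_correction(slip_velocity=1.0, d_p=d_p, nu_g=nu_g)    # 1 m/s slip assumed
tau_p_corrected = tau_p / c_d
print(tau_p, re_s, c_d, tau_p_corrected)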
Figure 2 shows the distributions of the longitudinal component of the averaged gas velocity u_0 over the channel cross-section for two cases: (i) smooth walls and (ii) the left wall rough and the right wall smooth, for a mean flow velocity of 15.5 m/s. Figure 3 shows the distributions of the normalized Reynolds normal stress tensor components obtained for the same conditions as Figure 2. The distance y/h = 0 corresponds to the rough wall and y/h = 1 corresponds to the smooth wall. The subscript "0" denotes the unladen flow conditions. One can see that in the case of smooth channel walls, the mean flow velocity and the components of the turbulence kinetic energy demonstrate the representative symmetrical turbulent distributions over the cross-section of the rectangular channel. The change to rough walls results in a transformation of these distributions. The maximum of the distribution of the time-averaged flow velocity moves towards the smooth wall. A similar change applies to the distributions of each component of the turbulence kinetic energy. These numerical results demonstrate satisfactory agreement with the experimental data [22].

The next step of the study was the extension of the present model to the gas-solid grid-generated turbulent downward vertical channel flow. The experimental data [23], obtained for the channel flow of 200 mm square cross-section loaded with 700-μm glass beads of physical density 2500 kg/m³, were used for the model validation. The mean flow velocity was 9.5 m/s, and the flow mass loading was 0.14 kg dust/kg air. Grids of square mesh size M = 4.8 and 10 mm were used for generating the initial turbulence length scale of the flow. The validity criterion was based on satisfactory agreement between the axial turbulence decay curves occurring behind the different grids in the unladen and particle-laden flows obtained by the given RSTM model and by the experiments [23]. Figure 4 demonstrates such agreement for the grid M = 4.8 mm. Figure 5 shows the decay curves calculated by the present RSTM model for the grids M = 4.8 and 10 mm. As follows from Figs. 4 and 5, a pronounced turbulence enhancement by particles is observed for both grids. The character of the turbulence attenuation occurring along the flow axis agrees with the behavior of the decay curves in the grid-generated turbulent flows described in [24]. One can see that the turbulence enhancement occupies over 75% of the half-width of the channel; this takes place during the initial period of the turbulence decay of the particle-laden flow as compared to the unladen flow. In addition, the distributions of Δu, Δv, and Δw are uniform, which corresponds to the initial grid-generated homogeneous isotropic turbulence. The distributions of the modification of Δu, Δv, and Δw beyond the initial period of the turbulence decay (location x/M ≈ 200 in Figures 6-11) are typical of the turbulent particulate channel flow. One can see that in this case the turbulence enhancement becomes slower, since here the turbulence level is substantially smaller as compared with the initial period of decay, i.e. for x/M < 100 (see Figures 4 and 5).
for x / M < 100 (s.Figures 4 and 5).This means that the grid-generated turbulence of the particulate flow decays downstream, and this causes the decrease of the rate of turbulence enhancement due to the particles occurred beyond the initial period of the turbulence decay.As a result, the turbulence is attenuated, that is expressed in terms of decrease of Δ u towards the pipe wall (s. Figure 9).Such tendency has been shown qualitatively in [25]. The certain increase of Δ u , Δ v and Δ w , that is observed verge towards the wall (s., arises from the growth of the slip velocity (s.curves 1, 2, 3 in Figure 12).The decrease of Δ u , Δ v and Δ w taken place in the immediate vicinity of the wall is caused by the decrease of the length scale of the energy-containing vortices and, thus, the increase of the dissipation of the turbulence kinetic energy.The analysis of Figure 13 shows that the increase of the grid mesh size results in the weaker contribution of particles to the turbulence enhancement and dissipation of the kinetic energy taking place over the cross-section for the initial period of the turbulence decay.This can be explained by the higher rate of the particles involvement into the turbulent motion due to the longer residence time that comes from the larger size of the eddies. Conclusions The RSTM model has been elaborated for the horizontal and vertical turbulent particulate flows in the channels of rectangular and square cross-sections with the smooth and rough walls. The present RSTM model has been validated for the unladen channel gas flow with the rough wall.It satisfactorily described the experimental data on the averaged gas axial velocity and three components of the turbulence energy. Further, the present model was applied to simulate the vertical grid-generated turbulent particulate channel flow.It considered both the enhancement and attenuation of turbulence by means of the additional terms of the transport equations of the normal Reynolds stress components.The model allowed to carry out the calculations covering the long distance of the channel length without using algebraic assumptions for various components of the Reynolds stress.The numerical results showed the effects of the particles and the mesh size of the turbulence generating grids on the turbulence modification that had been observed in experiments.It was obtained that the character of modification of all three normal components of the Reynolds stress taken place at the initial period of the turbulence decay are uniform almost all over the channel cross-sections.The increase of the grid mesh size slows down the rate of the turbulence enhancement which is caused by particles. Figure 2 . Figure 2. Numerical and experimental [22] distributions of the longitudinal component of the averaged velocity of gas over the channel cross-section. Figures 6 - show the cross-section modifications of three components of the Reynolds stress, Δ u , Δ v , Δ w , caused by 700-μm glass beads, calculated by the presented RSTM model at two locations of the initial period of the grid-generated turbulence decay x / M = 46 and 93 as well as beyond it for x / M ≈ 200.Here: Figure 3 . Figure 3. Numerical and experimental [22] distributions of the normalized Reynolds normal stress tensor components. Figure 4 . Figure 4. Axial turbulence decay behind the grid M=4.8 mm: 1 and 3 are the data [23] got for the unladen and particle-laden flows, respectively; 2 and 4 are the numerical data obtained for the same conditions. Figure 5 . 
Figure 5. The calculated axial turbulence decay behind the grids: 1 and 2 are the data obtained for the unladen and particle-laden flow, respectively, at M=4.8 mm; 3 and 4 are the data obtained for the same conditions at M=10 mm.
Figure 6. Effect of particles on the modification of the x-normal component of the Reynolds stress: M=4.8 mm, z=0.
Figure 7. Effect of particles on the modification of the y-normal component of the Reynolds stress: M=4.8 mm, z=0.
Figure 8. Effect of particles on the modification of the z-normal component of the Reynolds stress: M=4.8 mm, z=0.
Figure 9. Effect of particles on the modification of the x-normal component of the Reynolds stress: M=10 mm, z=0.
Figure 10. Effect of particles on the modification of the y-normal component of the Reynolds stress: M=10 mm, z=0.
Figure 11.
Figure 12. The cross-section distributions of the axial gas and particle velocities and the particle mass concentration for the grid M=4.8 mm: 1 - u / U for x / M = 46; 2 - u / U , 3 - u s / U and 4 - α / α m for x / M ≈ 200. Here α m is the value of the mass concentration occurring at the flow axis.
Figure 13.
2017-09-18T09:28:21.640Z
2014-02-12T00:00:00.000
{ "year": 2014, "sha1": "02d4a9721b81ca79f6db8f28da724f663df8d2c4", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/45880", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "da98d706c0c9123ddff7da4abd69874f80434990", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
252857766
pes2o/s2orc
v3-fos-license
Study assessment, quizzes, and critical thinking skills of elementary school students
The use of Quizizz can be arranged based on the learning objectives to be achieved. Quizizz can be used by educators as an alternative way to assess learning as effectively as possible in order to achieve the learning objectives. The purpose of this study was to analyze Quizizz-based assessment for improving critical thinking in the fourth grade of elementary school. This research uses a descriptive qualitative design with 6 educators and 30 students as subjects; data were collected through questionnaires given to students and educators. The data analysis consists of data reduction, data presentation, and verification. The results of this study show that fourth grade students at SDN 1 Jati Indah Tanjung Bintang still need an assessment application suited to the needs of the times, namely one that can be accessed online, such as Quizizz, for the cognitive assessment of students, and that critical thinking has not yet increased in the learning process; the alternative solution therefore requires the development of Quizizz-based assessments to improve critical thinking in elementary school thematic learning.
Introduction
Education is an orientation process in the form of knowledge transfer. In this educational process, students are positioned as the subjects of education. Educational activities are aimed at developing the learning skills, knowledge, and personality of students. Formal educational activities are usually carried out in schools. Good education is education that is able to equip students with knowledge, skills, good personalities, and active learning. To achieve this goal, education must be of good quality. Only with quality education can we achieve educational goals and improve the quality of education, and improving the quality of education requires improving the learning process. Assessment of the learning process can be seen from the changes that occur compared to the previous state. Assessment of learning outcomes is an important stage in learning activities. Assessment is the process of gathering information as the basis for making decisions about students, the curriculum, and school policies (Basuki and Hariyanto, 2014). Assessment aims to identify student skills before and after learning, provide feedback to teachers to improve the learning tools used (including teaching methods, approaches, activities and learning resources), and provide information to parents and schools about the effectiveness of education (Hamzah, 2014).
Efforts to improve the quality of learning are possible through improving the quality of the assessment system. Learning in this 21st century has a difference with learning in the past. Currently learning requires standards as a reference to achieve learning objectives. Through the standards that have been set, teachers have definite guidelines about what is taught and what is to be achieved. Advances in information and communication technology have changed human lifestyles, both in working, socializing, playing and studying. Entering the 21st century, these technological advances have entered various aspects of life, including in the field of education. Learning in the 21st century must be able to prepare the generation of Indonesian people to meet the advancement of information and communication technology in social life, (Syahputra, 2018). The development of era of advanced technology make the teacher must follow the era of the progress of the times. According to Hariyanto & Jannah (2020), teachers have a vital role in changing a nation, there are various things that teachers need to do as a form of revolution in the digital era . Thinking skills that must be mastered by students in education in the 21st century are creative, critical thinking, problem solving, and decision making. The way of working or the ability to work in a global and digital world is that students must be able to communicate and collaborate. Critical thinking includes the ability to think in high order (high order of thinking) which is one component in the issue of 21st century intelligence (The issue of 21st century literacy). Critical thinking is an important thing that must be possessed in building student knowledge. Critical thinking skills will stimulate students' cognitive reasoning in acquiring knowledge. Students' critical thinking is needed, because during the learning process students develop thinking ideas about the problems contained in learning, (Diharjo et al., 2017). Many teachers are still afraid to use media in the form of web-based applications. This causes researchers to be interested in knowing how teachers and students perceive the use of the Quizizz web application as a learning medium. The perception of teachers and students is necessary. Thus, no more teachers are afraid to try using webbased applications as learning media. Many web-based applications are currently being developed such as Kahoot, Quizizz, Ruang Guru, Zenius and many more. In this study, researchers used a web application, namely Quizizz. Quizizz is a web tool for creating interactive quiz games that are used when learning in class. An interactive quiz that has up to 4 or more answer options including the correct answer and an image can be added to the background question (Ramadhani et al. 2020). Currently, there are lots of modern or technology-based evaluation tools that can be used by teachers in order to provide an assessment or evaluation to students. This of course can make evaluations conducted by teachers more effective and efficient. In addition, the use of technology-based evaluation tools is also expected to make students more relaxed in carrying out tests. Some modern technology-based evaluation tools that can be used by teachers to create quizzes or record student opinions are such as kahoot, quizizz, socrative, polldaddy, verso, poll everywhere, google form, classmaker, and so on (Chaiyo & Nokham, 2017). 
Quizizz is an application that provides formative questions with various choices that are presented in a fun and interesting way for all students. In the opinion of Noor (2020), Quizizz is an interactive quiz game that can be applied while studying in class as an example of implementing formative assessment. Quzizz can be accessed with a computer or android that is connected to the internet network. It has an attractive appearance and is easy to input questions. The process of implementing the Quizizz application-based assessment in the form of questions and answers is automatically displayed on the screen of each user, both smartphone and PC/computer, so there is no need for LCD or projection screen assistance (Wihartanti et al., 2019). Researchers also distributed questionnaires of needs analysis to 30 students in class IVA at SDN 1 Jati Indah Tanjung Bintang. Collecting data on the needs of students aimed to see how important it is to use the assessment using the Quizizz application. The results of the needs analysis questionnaire show that students need smartphones/laptops/notebooks 83 % of the total 30 students. Besides, 78 % of students need an assessment application according to the needs of the times, namely in the form of an application that can be accessed online. Furthermore, 87% of 30 students expect the use of applications such as Quizizz in the cognitive assessment of students. Various previous studies (Rini, 2017) mention that the use of the SETS approach has an effect on improving elementary school students' science process skills. Regarding with assessment instruments (Pusparani, 2020) the use of Quizizz media as an application for learning evaluation activities was declared effective because it was able to improve learning outcomes and students' understanding of the material. In addition, Quizizz media is considered efficient for teachers and students because it is easy to use, more efficient in paper use (paperless), and can be done anywhere and anytime. A research of Handayani & Wulandari (2021) show that that Quizizz also includes 21st century skills such as critical thinking skills, creative and innovative skills, communication skills, and collaborative skills. Quizizz has many positive impacts on learning, especially for increasing student motivation, where motivation itself can improve critical thinking skills and creativity skills. Kurniawan (2021) affirms that the effect of using Quizizz as an exercise task on the learning outcomes of grade 5 elementary school students create decent results and can have an impact on student learning outcomes. Methods The method used in this research was descriptive qualitative method by using questionnaire and interview as data collection techniques. This research was conducted on fourth grade students at at SDN 1 Jati Indah Tanjung Bintang in 2022 academic year with a total of 6 educators and 30 students as subjects of this research. This data analysis technique has three stages, namely data reduction, data presentation, and data verification. The aim is to simplify abstract data into a clear and detailed summary, which the data later on is presented in a simpler form in the form of narrative exposure and compiled to reveal the analysis of Elementary School assessment. 
In this data collection technique, the data can be analyzed by calculating the average of each answer based on the score obtained, using the following assessment criteria: Results The results of the research obtained based on the distribution of needs analysis questionnaires carried out to 30 students in class IVA at SDN 1 Jati Indah Tanjung Bintang. Data Collection on the needs of students to see how important the use of assessment is by using the Quizizz application. The results of the needs analysis questionnaire show that students need a smartphone/laptop/notebook 83% of the total 30 students. 78% of students need an assessment application according to the needs of the times, namely in the form of an application that can be accessed online. 87% of 30 students expect the use of applications such as Quizizz in the cognitive assessment of students. The results indicate those things can cause the lack of increasing character values, motivation, and the decreased of the level of understanding of students in receiving the learning material presented so that it can affect the results of student learning evaluations. These reasons make teachers feel that using the Quizizz application can make learning more varied, thus it can improve students' skills in learning because indeed the use of online learning evaluation using Quizizz having advantages that can respondents feel yet there are also disadvantages. Students work on questions in the form of quizzes and crosswords to see or observe the formation of students' memory and critical thinking skills. According to Lismaya (2019), critical thinking is a process of intelligence, like the creation of concepts, and application by evaluating all information obtained from interpretation and observation, field experience, deep reflection or communication as a basis for belief and taking an action. Therefore, in e-learning using media, for example crossword puzzles, students can develop memory of the material explained by the teacher, observe daily life events. During multiple choice assessments, it has the advantage that students are not allowed to question or cheat students and friends. Therefore, because of the time that has been determined in a question, when students answering Quizizz they don't get the opportunity to ask people around or look at notebooks or look for answers on the Google platform. Thus, over time after taking the test, students can find out the ranking of all students who took the test. Students currently answering questions in the quiz media can find out the right and wrong answers to the questions made. Meanwhile, the advantages of crossword puzzles are that students can work individually without having to be with other students, without too much time to work on questions, and students can practice critical thinking and creativity because these crosswords must match the numbers in the box provided with the answer deemed to fit the part of the box. Discuss the assessment of the use of puzzles and crosswords for students to think critically so that students find it difficult to answer questions on quizzes because time is so fast. However, time that is too fast can train students' critical thinking skills to be able to continue to stimulate memory of the material that has been explained by the teacher. Meanwhile, the crossword is not problem because the time is not as fast as the quiz on the radio. 
This natural crossword game can train creativity and shape students' thinking skills according to the number of missing squares, thus the answer must match the space in the box. The research is also based on relevant research conducted by Wihartanti et al., (2019) entitled Smartphone-Based Application "Quizizz" as a Learning Media. The similarity of the research includes the aspects studied, namely the learning media. The research differences are in the subjects used. This study concludes that "Quizizz" is the best alternative to be used as a learning medium that is available on mobile applications such as Android and the App Store and can be used as a website via a browser on a computer. Quizizz is effective in increasing the enthusiasm of students in learning. Students are more interested, more focused and serious in implementing it . One way to make assessments assisted by android-based mobile phones and computers is to use facilities on the internet in the form of a Contain Management System that has been programmed in the form of a website, one of which is Quzizz. According to Noor (2020), Quizizz is an interactive quiz game that can be applied while studying in class as an example of implementing formative assessment. Quizizz can show data and statistics related to student performance (Putri, & Dwijayanti, 2020). Quzizz can be accessed with a computer or android that is connected to the internet network, has an attractive appearance and is easy to input questions (Agustina & Rusmana, 2019). The process of implementing the Quizizz application-based assessment in the form of questions and answers are automatically displayed on the screen of each user, both smartphone and PC/computer, so there is no need for LCD or projection screen assistance (Wihartanti et al., 2019). The use of Quizizz allows students to be challenged in class because the scores obtained by students will be displayed after the test is ended by what position the students are in. In addition, the existence of pictures or caricatures when they are done with the test becomes an interesting thing because students become enthusiastic to take the test again and get the highest score. The application of Quizizz learning needs to be done continuously, so that Quizizz can become a competitive application as a learning assessment, in the middle adaptation of 21st century education. The use of learning assessment itself cannot be separated from learning patterns. Learning patterns are organized, then applied based on the boundaries of educational technology. Basically, there are 4 learning patterns applied in Indonesia, 1) Traditional Patterns, namely the teacher-student relationship directly, 2) Teacher patterns with media, 3) Media learning patterns, 4) Media-only learning patterns. The use of the Quizizz application as a learning medium is included in the category of learning pattern number 3, which places th e media as a component of the learning system on a par with other components. Quzizz is selected because it has an attractive appearance and the preparation of test questions is very easy. Similar with the website in general, Quzizz can be accessed with a computer or android that is connected to the internet network. Quizizz is able to adapt to the learning objectives thus students have an attitude of curiosity towards the material. The content of the material from Quizizz can also be adjusted to the content of the material so that students feel more familiar with the concepts being taught. 
This is in accordance with the research proposed by (Chaiyo & Nokham, 2017) which concluded that the use of the Quizizz web-based application supports learning and increases students' concentration, engagement, fun and motivation. Quizizz also helps them to realize their level of knowledge and facilitates understanding of concepts and enhances their learning process. The form of Quizizz which is like playing a game makes practicality and flexibility of use so that students do not feel bored in using it. Students also feel that they can learn independently thus it is suitable for large groups and small groups. Easy access in Quizizz which only enters a number code makes Quizizz easy to use. The attractive technical quality makes students want to learn and the feedback from students makes Quizizz attract students' attention (Chaiyo & Nokham, 2017). Learning patterns created and empowered through the Quizizz application are interactive multimedia patterns. The Quizizz application has advantages that can be easily used in addition to learning media, as well as learning evaluation materials, for example, there are data and statistical calculations of student performance, the results of which can describe the extent to which students understand the material, which will later be used as a measurement for overall learning evaluation. Thus, it gives a new color to the teacher's evalu ation and learning patterns that are fun for students. According to Nesbit & Leacock (2019), learning evaluation will be maximized if teachers can measure student competencies carefully. One of the applications that can be used as an online test or evaluation tool is the Quizziz application. Quizziz is an online application that contains material that is packaged interactively with various themes (Aini, 2019). According to Darmaningrat et al (2018), the advantage of Quizizz is that it is easy to access, especially for teachers who are not very tech-savvy. Then in Quizizz there are also some interesting features that teachers can use in updating evaluation questions for students. Quizziz can be used as a learning evaluation because of its unique appearance and equipped with music that can make children forget that they are doing exams or tests. In addition, in the Quizziz application, images can be added according to the subjects to be evaluated. The contribution of the application of the Quizizz application in learning is expected to increase students' critical thinking, in addition to the various benefits that can be felt through the use of the Quizizz application as a learning assessment, one of which can motivate and attract students' interest to get learning in new and more fun ways so as to make students more enthusiastic about learning. Quizizz assessment must pay attention to the development and ability of students. Most teachers use assessment in the form of a written test. With the use of tests, the result is not impressive by students, thus student learning outcomes are low. Alternative use of assessment can be in the form of Quizzz as a stimulant that is "fun" but still "learning" which can refresh memory, be interesting, and give a good impression in students' brain memory. Thus, it is hoped that the use of Quizizz as an assessment medium can improve. Quizizz is a web tool for creating interactive quiz games for use in your classroom learning, for example for formative assessment. 
There are various other features available in the Quizizz application, which can be used as a tool for teachers to give assignments or homework. Besides doing assignments, students can feel learning that does not requires hard thinking about answers, because the Quizizz application has a fresh look and is rich in fun things. A game cannot be separated from creative, innovative, adventurous, and fun elements, which in turn can foster positive motivation to learn from each student. Therefore , they can realize the ideals and goals of education in a concrete and even way. The teacher can also add an image to the background of the question and adjust the question settings according to the need of teachers (Aini, 2019). Quizizz can be used as a good and fun learning strategy without losing the essence of ongoing learning. Even this strategy can involve active student participation from the beginning. In addition, the demands of the 4.0 industrial revolution era make various sectors of life including the education sector need to reorient in determining the direction of education policy to answer the challenges of the industrial revolution 4.0 which demands a significant and comprehensive increase in individual capacity through various efficiencies in the world of education, such as the education system that involves technology in the learning process. Conclusion Based on the results of the development research that has been carried out regarding the assessment on theme 8 through Quizizz as an application to evaluate learning in class IV SDN 1 Jati Indah Tanjung Bintang, the result of research can be concluded as follows: Quizizz as an application for learning evaluation activities has many features that teachers can use for learning evaluation activities, not only containing multiple choices and descriptions, as well as checklists according to the needs of the teacher. Quizizz can be used individually or in groups. This application can also be used directly or as a task/homework. Quizizz makes it easy for teachers to analyze students' questions and answers, and teachers can send quizz results to parents. Based on the results of the research, it is necessary to do several things as an effort to further utilize the product, namely the development of Quizzz-based assessments to improve critical thinking.
2022-10-13T15:42:07.031Z
2022-10-09T00:00:00.000
{ "year": 2022, "sha1": "05af44becbd38eb3cd3d7e1055c7dd0c78b48739", "oa_license": "CCBYNCSA", "oa_url": "https://lighthouse-pub.com/ajet/article/download/33/133", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "08a563983a4be93c44e8fad50a7acfa2467dcfe8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
13986025
pes2o/s2orc
v3-fos-license
Nonlinear Dependencies of Biochemical Reactions for Context-specific Signaling Dynamics
Mathematical modeling can provide unique insights and predictions about a signaling pathway. Parameter variations allow identification of key reactions that govern signaling features such as the response time that may have a direct impact on the functional outcome. The effect of varying one parameter, however, may depend on values of another. To address the issue, we performed multi-parameter variations of an experimentally validated mathematical model of NF-κB regulatory network, and analyzed the inter-relationships of the parameters in shaping key dynamic features. We find that nonlinear dependencies are ubiquitous among parameters. Such phenomena may underlie the emergence of cell type-specific behaviors from essentially the same molecular network. Our results from a multivariate ensemble of models highlight the hypothesis that cell type specificity in signaling phenotype can arise from quantitatively altered strength of reactions in the pathway, in the absence of tissue-specific factors that re-wire the network for a new topology.
Mathematical modeling of cell signaling pathways is recognized as an important component of molecular systems biology [1][2][3][4][5][6] . However, it is still a long way before the approach is widely accepted and utilized in mainstream cell biology. This could be attributed to several things. Models are often represented by time-dependent equations that contain kinetic parameters, and most of these rate constants are unknown. One can attempt to estimate some of the constants by in vitro assays, but it is not clear how they approximate the in vivo values. Other rate constants are simply not feasible to measure directly and need to be inferred. Therefore, quite often it is judged that mathematical modeling of a pathway is likely to produce a 'wrong' model, because it is impossible to determine all rate constants accurately. So how can one avoid using wrong models? A most relevant clue may come from the experimental counterpart: biological results do not come from studying the behavior of one cell. Even in single cell experiments, a finding is confirmed to be definitive if it is reproduced in a large number of cells.
Thus, it would be more appropriate to consider an ensemble of models that occupy a 'cloud' of multi-parameter space and correspond to the natural variability of the biological system, rather than looking for 'the correct model' (with a single set of parameter values). Exploration of a range of possible parameter values is necessary not only because of the uncertainty in the model parameter values that were inferred or compiled from diverse sources. But also, individual cells are likely to have slightly variable rate constants for any molecular process in the model 7 . Moreover, studying the parameter space helps understand all the possible behaviors that could be realized under certain pathological or distinct situations. Here we applied these principles to the widely studied NF-kB pathway and considered an ensemble of models and their signaling characteristics. We present theoretical evidence that context-specific signaling behavior can emerge from parameter dependencies inherent in the nonlinear network of molecular interactions. Our results also imply the existence of situations where reaction kinetics can have discrepant signaling roles in different cell contexts. Results NF-kB as a prototypic signaling system within a complex network. NF-kB is an example of latent transcription factors that respond to cell stress and operate in a feedback-controlled network [8][9][10] . It regulates numerous cell signaling processes and its activity is controlled in part by the level of nuclear translocation. In resting cells, the predominant dimer p65:p50 exists mostly as a cytoplasmic complex bound to its inhibitor IkB proteins. Numerous upstream signals induce degradation of the IkB proteins following phosphorylation by the IkB kinase complex (IKK). This release from latency allows NF-kB to translocate into the nucleus and activate expression of target genes, including several feedback genes 8,11 . We used a previously published model of NF-kB that captures experimentally observed behaviors reasonably well [12][13][14] . The mathematical model includes the core NF-kB regulatory network that operates in virtually all cell types ( Fig. 1, Tables 1 and 2). Diverse upstream signals that activate the canonical NF-kB pathway converge at the IKK complex. It consists of catalytic subunits IKKa and IKKb, and the regulatory subunit NEMO. The 'IKK' in the model corresponds to the active form of IKK, as it has different kinase activities depending on many factors such as its phosphorylation status. The input 'IKK' is introduced as an approximate step function in later simulations. All the processes that negatively regulate IKK activity are combined into a first order term with rate constant neg. The IKK-initiated processes, IkBa phosphorylation at serines by IKK, ubiquitination, and degradation of IkBa by the proteasome is lumped into a single catalytic reaction with constant r 1 . r 2 is for a similar but less efficient reaction that targets free IkBa. IkBa binding to NF-kB, and IKK binding to NF-kB-bound or free IkBa, are all reversible reactions with association and dissociation rates that are roughly based on their binding affinities. dg 2 and dg 1 denote parameters for the constitutive degradation of NF-kB bound and free IkBa, respectively. The nuclear import and export of NF-kB, IkBa, and the complex are included as first order terms. Finally, the induction of IkBa gene by NF-kB is represented by a first order process with a rate constant s and a time delay t. 
The delay allows one to incorporate multiple processes during the de novo IkBa synthesis (gene transcription, mRNA processing and export, translation, folding, etc.) into a single term, thereby avoiding unnecessary model complexities that would arise from numerous unknown kinetics involving the intermediates. A multivariate ensemble of mathematical models for NF-kB in the high-dimensional parameter space. We explored our differential equations model with an extensive multi-parameter sampling approach. Instead of varying one parameter at a time while fixing all the others, which results in an extremely limited investigation of the system properties, we employed a large set of random parameter combinations for model simulations. Such a high-dimensional ensemble of parameter states better recapitulate the true variability within a population of single cells, because individual cells are unlikely to have identical values for any rate constant. In fact, typical single cell measurements from flow cytometry or quantitative microscopy result in a distribution, not a single value. Each parameter was allowed to vary by two orders of magnitude, and 1000 random combinations of parameters were generated by Latin Hypercube sampling method for computational efficiency (see Methods for details). The randomly generated parameter sets were used to solve the delay differential equations where IKK is activated at t 5 0 and to obtain our multi-parameter variation results. Because of the significant coverage of the high dimensional parameter space, the simulated time course profiles consisted of remarkably diverse response patterns ( Fig. 2A), providing numerous signaling dynamics that are possible and may be realized in some cellular and microenvironmental contexts. Control parameters that influence characteristic features of signaling dynamics. To identify the parameters that influence NF-kB signaling dynamics, we examined the sensitivity of four defining characteristics in a temporal profile of free nuclear NF-kB (see Figure 1 | A mathematical model of the core regulatory network for NF-kB. The process diagram represents the individual reactions included in our model. It includes essential regulatory events such as IKK activation, inducible/constitutive degradation of IkBa, nuclear import/export, inducible synthesis of IkBa, and post-stimulus attenuation of IKK activity. The quantitative model is described in full by the differential equations in Table 1. The arrows are color-coded based on the reaction type (black: transport, red: complex formation, gray: degradation, purple: multiple molecular processes). www.nature.com/scientificreports SCIENTIFIC REPORTS | 2 : 616 | DOI: 10.1038/srep00616 Fig. 2B), against variations in parameter values. We will consider F 1 , the integrated activity, which is the area under the time course curve divided by the time interval. It is also mathematically equivalent to the time average response. The first response magnitude F 2 is simply the height of the first peak. F 3 , the response time, is the time from the onset of stimulation to the first peak. F 4 is the period of oscillation if the temporal profile is periodic. These features capture some essential aspects of a temporal profile. To assess sensitivity of feature F k (k 5 1, …, 4) against variations in parameter p i (i 5 1, …, 18, as ordered in Table 2), we binned the parameter vectors in the high dimensional parameter space, according to their p i values (regardless of the other parameter values). 
Bin-average F k values were obtained and the standard deviation of these values across the bins was taken to be our sensitivity measure D i F k . Table S1 shows the parameters sorted by this measure, i.e. how much each parameter influences F k . Parameter dependencies are prevalent. Next we addressed our main question by determining whether the influence of a parameter on F k depends on another parameter. We first illustrate some cases of control parameter pairs with a strong interaction in Fig. 3 (using InterF below; see Methods). Panel A shows how the integrated activity F 1 was influenced by rates of IKK association to NF-kB:IkBa complex (a 2 ) and IKK-induced phosphorylation/ degradation of NF-kB bound IkBa (r 1 ). Their relationship represented by the best-fit surface indicates that F 1 is a decreasing function of r 1 for low a 2 values, but F 1 is roughly a parabola for high a 2 . This can be interpreted in biological terms as follows. First, the integrated activity of NF-kB over time is a most likely determinant of the transcriptional output of direct NF-kB-dependent genes 12 . Then panel A implies that the gene output can be greater for slower signaldependent degradation of IkBa when IKK binding to substrate is relatively slow. But in a different cellular context where substrate recognition of IKK is faster (due to local tethering, for example), Table 1 | Differential equations for modeling the core NF-kB network. The 9-variable, 18-parameter delay differential equations model was adapted from 21 with the addition of a term that represents the post-stimulus attenuation of IKK activity ('neg IKK' for the equation for IKK). (Variable definitions: NF 5 NF-kB, I 5 IkBa, IKK 5 the active IKK complex, the colon indicates a bound complex, and the subscript 'n' denotes nuclear species.) the transcriptional output may be generally elevated with a slight moderation at a mid-range degradation rate of IkBa. It is also to be noted that the parameter dependencies are not necessarily symmetric, i.e. the effect of r 1 depended on a 2 , but a 2 did not depend on r 1 (Fig. 3A). Our simulations also found F 1 to depend on i I and a 3 in a nonlinear fashion (Fig. 3B). If the import rate i I was low, F 1 increased with the association rate a 3 , but if i I was high, a 3 had little effect on F 1 . Figure 3C shows another pair of inter-dependent parameters caused by a more complex nonlinearity in their influence on F 1 . When the constitutive degradation of NF-kB bound IkBa (dg 2 ) was minimal, shorter time delays involved in IkBa re-synthesis (t) resulted in higher integrated NF-kB activity. However, this trend switched to a completely different outcome when the degradation of NF-kB bound IkBa was constitutively higher: There was an optimal time delay that produced maximal transcriptional activity in such a condition. Therefore we conclude that the effect of t depends on dg 2 . The response time F 3 had differential dependence on d 1 and s in that F 3 was minimized for a distinct value of IkBa synthesis rate s only if d 1 , the dissociation rate of NF-kB:IkBa was high (Fig. 3D). Figure 3E indicates that the constitutive degradation of free IkBa (dg 1 ) affected how the IkBa re-synthesis rate (s) influenced the response time, F 3 . If the constitutive degradation was low, the response was fastest at an optimal IkBa induction rate. If, on the other hand, the degradation was constitutively fast, the response was generally fast regardless of the synthesis rate. 
Finally, we examined all the inter-dependencies systematically in the following way. For each combination (i, j, k), the coordinate effect of p i and p j on F k was extracted by fitting the data with a smooth surface as shown in Fig. 3. We defined a quantity InterF k (i, j) to capture the deviation from the independence of p i effect on F k from p j (see Methods). Nonzero InterF k (i, j) values indicate the presence of an inter-dependency for the two parameters, where the parameter p i had a different qualitative effect on F k depending on the value range of p j . There were numerous such incidences and some pairs (i, j) corresponded to parameters that had weak influences on F k , where any dependencies would impose an insignificant effect. So, for cases in Fig. 3, we chose those pairs that had significant influence as single control parameters and had high interF values. We summarize all the results in the 'parameter dependency map' in Fig. 4 (strong to weak interF in yellow to red) which shows the prevalence of inter-dependencies among parameters in shaping the signaling dynamics. Most parameters had differential effects on F k depending on the values of one or more parameters. On the other extreme, some vertical stretches of red are discernable from the map and correspond to parameters that did not depend on the other parameters. Most of these exert strong control over the relevant F k . For example, the time delay (t) and the inactivation rate of IKK (neg) were critical parameters that determine the period F 4 (see Table S1), and their influence on the period were not affected by other parameters. We explore a possible manifestation of our findings by illustrating a scenario that corresponds to Fig. 3A in more detail (Fig. 5). In mathematical terms, we found that F 1 (r 1 ) is a decreasing function for low a 2 (lower arrow in the surface plot) and is an increasing function for a higher range of a 2 (upper arrow). In biological terms, this implies that the transcriptional consequence of inhibiting signalinduced degradation of IkBa can vary depending on the association rate of IKK to its substrate, NF-kB bound IkBa. This rate, in turn, may well depend on the cell type under study. Cellular features such as the organization and volume of the cytoplasm, or local tethering of kinase scaffolds, are different for distinct cell types. Smaller cytoplasm and local clustering can endow the cells with faster substrate recognition with little need for diffusion-based association. Cell type A represents such a situation. When such cells are treated with an inhibitor that reduces the induced degradation of IkBa, the integrated activity of NF-kB, therefore target gene output, is decreased (indicated by the direction of the upper arrow on the surface plot). However, just the opposite outcome is expected for the same perturbation in another cell type B, where the IKK recognition of its substrate is relatively slow. Other dependencies we found can similarly be elaborated with concrete biological interpretations. Inter-dependencies of reaction kinetics may well explain, perhaps to a significant extent, the cell type specificity of the signaling roles of numerous factors that seem to have context-dependent actions 15,16 . We note that most signaling pathways possess feedback structures and that the ensuing system nonlinearity is likely to cause interdependencies of parameters. 
To this end, we have looked into another signaling pathway, Wnt/b-catenin, and found a similar extent of parameter dependencies (Myong-Hee Sung, Songjoon Baek, Kwang-Hyun Cho, unpublished data). Discussion A significant hindrance in translating the knowledge from a particular quantitative signaling model to a real-world molecular system is the lack of in vivo measurements of the kinetic parameters from the relevant context, such as particular cell lines or primary tissues. We have looked into the effects of varying kinetic parameters upon signaling characteristics and their dependencies on other parameters. For example, suppose that a higher degradation rate of a signaling protein A has the effect of shortening the response time. But this effect may depend on whether the synthesis rate of protein B is within a certain range. The effect of A on response time may even be opposite in other conditions or cellular contexts. By extensive simulations of an NF-kB model, we demonstrate that such a phenomenon can be widespread. This may be a source of apparently discrepant behaviors of the same cellular signaling system in different biological contexts. The phenomenon seen here may underlie the differential effect of a given molecular process/reaction that is dependent upon distinct levels or efficiencies of another reaction. In general, different cell types are thought to have differences in splice variants 17,18 , organiza- tion of the genome into accessible chromatin domains 19 , basal turnover of signaling factors, subunit composition of holoenzymes that may affect catalysis rates 20 , and more. Here our results explain how quantitative differences in such key molecular systems can lead to qualitatively distinct signaling behaviors. Methods A mathematical model of NF-kB signaling network. We used a published mathematical model 12 . Briefly, the delay differential equations (DDE) model described in 21 was modified by including a term (neg in Fig. 1) to represent the inactivation of IKK by various mechanisms including A20, CYLD, and IKK autophosphorylation 11 . These IKK inactivation mechanisms lack single cell data on their kinetic parameters and could not be represented individually. The 9-variable DDE model is shown in Table 1 and the reference parameter values are listed in Table 2. Multi-parameter variation and model simulations. Each kinetic parameter was varied by 2 orders of magnitude around the reference value (from 0.1-to 10-fold), and was randomly combined with others by the Latin Hypercube sampling method to limit the total number of simulations. The time delay parameter for IkBa synthesis was constrained to vary between 30 and 55 minutes to avoid an unrealistic range. Specifically, parameter combinations were generated by the following procedure. For the j-th parameter, we subdivide the range of the parameter into n (5 5) subintervals of equal size. Then randomly sample n values (p ij , i 5 1, …, n), one from each subinterval, for the j-th parameter. The uniform sampling was done on the logscale for all parameters. To combine these values of individual parameters to generate sets of parameter values, we randomly permute the n values for each parameter to get the parameter vectors, i.e. we permute the elements of each column of the matrix p ij separately, and use the rows as the parameter vectors. This sampling method was implemented by the MATLAB function 'lhsdesign' to produce 1000 sets of parameter values. These parameter vectors were used for DDE simulations. 
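As a rough illustration of the Latin Hypercube construction used for the parameter ensemble in this study (stratified draws on a log scale with independent per-parameter permutations), here is a minimal Python sketch. It is not the authors' MATLAB code; the reference values, fold-range, and random seed are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube_log(ref_values, n_samples=1000, fold=10.0):
    """Latin Hypercube sampling of rate constants on a log scale.

    Each parameter is varied over [ref/fold, ref*fold]; the unit interval is
    split into n_samples equal strata, one value is drawn per stratum, and the
    strata are permuted independently for every parameter (column).
    """
    ref_values = np.asarray(ref_values, dtype=float)
    n_params = ref_values.size
    lo = np.log10(ref_values / fold)
    hi = np.log10(ref_values * fold)

    # Stratified uniform draws in [0, 1): row i falls in stratum i before permutation.
    strata = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):                      # independent column permutations
        strata[:, j] = rng.permutation(strata[:, j])

    # Map onto the log10 ranges and return linear-scale parameter vectors (rows).
    return 10.0 ** (lo + strata * (hi - lo))

# Example: 18 hypothetical reference rate constants, varied 0.1- to 10-fold.
reference = np.ones(18)                 # placeholder values, not the published ones
param_sets = latin_hypercube_log(reference)
print(param_sets.shape)                 # (1000, 18)
```

The additional constraint on the time-delay parameter (30-55 min) would be imposed by overriding the range of the corresponding column before sampling.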
The initial condition for numerical solutions of DDE was provided by specifying a constant history of I = 0.03 mM; NF:I = 0.04 mM; I n = 0.03 mM; the other variables = 0. The time delay was given by the parameter t. All simulations were run by using the MATLAB solver 'dde23' on the time interval [−10 h, 12 h]. IKK activation was introduced at t = 0 by a sharp Gaussian k(t) (standard deviation = 5 min) multiplied by 0.025 mM/min. Evaluation of NF n from each numerical solution was obtained at the 5 min resolution grid of time points spanning [0 h, 12 h]. The evaluated series NF n (t) for each simulation was taken as its time course profile for subsequent analyses. Dynamic measures F i . For each time course profile, F i (i = 1, …, 4) values were calculated as follows. F 1 = (1/T) ∫_0^T NF n (t) dt / [total NF], where T is the time course interval (12 h) and [total NF] is the amount of all NF-containing molecular species (determined by the initial condition and fixed at 0.04 mM). F 2 and F 3 were obtained by finding the first time point t* (> 0) where the time series becomes decreasing, i.e. NF n (t*+dt) − NF n (t*) ≤ 0. Then F 2 = NF n (t*)/[total NF] and F 3 = t*. For the period F 4 , all time course profiles were analyzed by Fourier analysis to sort for the oscillating profiles. NF n was considered oscillating if the periodogram from the fast Fourier transform had a detectable peak between 0 h and 5 h either as a global maximum or a local maximum which is at least 0.7 of the global maximum. 345 cases (among 1000) passed the criteria and were used for the analysis of F 4 .
Figure 5 | Biological manifestations: a plausible scenario. Implications from the dependence of r 1 , signal-induced degradation of IkBa, on a 2 , the association rate of IKK to its substrate, in affecting F 1 , the integrated activity of NF-kB. The surface plot is from the result in Fig. 3A. Assuming F 1 is a primary determinant of transcriptional output of NF-kB dependent genes, inhibition of signal-dependent degradation of IkBa (that lowers r 1 ) has opposite effect on transcriptional output in two cell types A and B, where the IKK substrate recognition is fast and slow (with distinct ranges of parameter a 2 ), respectively.
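The feature extraction and the periodogram-based oscillation test described in the Methods can be sketched in Python as follows. This is an illustrative reimplementation rather than the original analysis scripts, and the peak-detection convention is a simplified stand-in for the stated 0.7-of-global-maximum criterion.

```python
import numpy as np

def dynamic_features(t, nf_n, total_nf=0.04):
    """F1 (time-averaged activity), F2 (first-peak height), F3 (response time)
    from a nuclear NF-kB time course nf_n sampled at times t (hours)."""
    f1 = np.trapz(nf_n, t) / (t[-1] - t[0]) / total_nf
    decreasing = np.where(np.diff(nf_n) <= 0)[0]   # first point where the series stops rising
    i_peak = decreasing[0] if decreasing.size else len(nf_n) - 1
    return f1, nf_n[i_peak] / total_nf, t[i_peak]

def is_oscillating(t, nf_n, max_period_h=5.0, rel_height=0.7):
    """Crude FFT periodogram test: accept profiles whose strongest short-period
    component (period < max_period_h) is at least rel_height of the overall maximum."""
    power = np.abs(np.fft.rfft(nf_n - nf_n.mean())) ** 2
    freq = np.fft.rfftfreq(len(nf_n), d=t[1] - t[0])     # cycles per hour
    short = freq > 1.0 / max_period_h
    return bool(short.any() and power[short].max() >= rel_height * power[1:].max())

# Toy damped oscillation sampled every 5 minutes over 12 h.
t = np.arange(0.0, 12.0 + 1e-9, 5.0 / 60.0)
toy = 0.02 * np.exp(-t / 6.0) * (1.0 - np.cos(2.0 * np.pi * t / 1.5))
print(dynamic_features(t, toy), is_oscillating(t, toy))
```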
2018-04-03T02:33:55.362Z
2012-08-31T00:00:00.000
{ "year": 2012, "sha1": "52fe6ff32b6c9b3a26c26844f93bf4d553afe2aa", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/srep00616.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52fe6ff32b6c9b3a26c26844f93bf4d553afe2aa", "s2fieldsofstudy": [ "Biology", "Mathematics" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250144841
pes2o/s2orc
v3-fos-license
New physics above 50 TeV: probing its phenomenology through UHECR air-shower simulations
Ground based observations appear to indicate that Ultra High Energy Cosmic Rays (UHECR) of the highest energies (>10^{18.7} eV) consist of heavy particles -- shower depth and muon production data both pointing towards this conclusion. On the other hand, cosmic-ray arrival directions at energies >10^{18.9} eV exhibit a dipole anisotropy, which disfavors heavy composition, since higher-Z nuclei are strongly deflected by the Galactic magnetic field, suppressing anisotropy. This is the composition problem of UHECR. One solution could be the existence of yet-unknown effects in proton interactions at center-of-mass (CM) energies above 50 TeV, which would alter the interaction cross section and the multiplicity of interaction products, mimicking heavy primaries. We study the impact of such changes on cosmic-ray observables using simulations of Extensive Air-Showers (EAS), in order to place constraints on the phenomenology of any new effects in high energy proton interactions that could be probed by \sqrt{s}>50 TeV collisions. We simulate showers of primaries with energies in the range 10^{17} - 10^{20} eV using the CORSIKA code, modified to implement a possible increase in cross-section and multiplicity in hadronic collisions exceeding a CM energy threshold of 50 TeV. We study the composition-sensitive shower observables (shower depth, muons) as a function of cross-section, multiplicity, and primary energy. We find that in order to match the Auger shower depth measurements by means of new hadronic collision effects alone (if extragalactic UHECR are all protons even at the highest energies), the cross-section of proton-air interactions has to be about 800 mb at 140 TeV CM energy, accompanied by an increase of a factor of 2-3 in secondary particles. We also study the muon production of the showers in the same scenario.
I. INTRODUCTION
Cosmic Rays (CR) are the most energetic particles in the universe. Over 100 years have passed since they were first discovered by Hess [1], yet still today their composition, origin and acceleration mechanism are subjects of debate. This lingering uncertainty stems not only from observational limitations but also from particle-physics uncertainties: the first collisions of ultra-high-energy cosmic rays (UHECRs, E ≳ 10^18 eV) with the Earth's atmosphere occur at center-of-mass (CM) energies exceeding 40 TeV and reaching 300 TeV; for comparison, current lab tests of hadronic physics (in the Large Hadron Collider, LHC) only reach CM energies of 14 TeV. Observationally, the greatest challenges in studying UHECRs are their low flux and their inability to penetrate the atmosphere: detection has to be indirect, tracing either the development of the extensive particle air shower (EAS) caused by a CR's collision with the atmosphere, or the EAS products reaching the ground, or both; and dedicated UHECR observatories need collective areas of thousands of km^2. Despite significant progress in recent decades in improving statistics of UHECR detections thanks to very large facilities such as the Pierre Auger Observatory in Argentina [2] and the Telescope Array in the United States [3], important questions regarding UHECR astrophysics remain open. In particular, the distribution of UHECR arrival directions on the sky, their origin, and their composition constitute three persistent, interconnected puzzles.
Because cosmic rays are charged, it is not possible to directly associate their arrival directions with the sources that accelerate them. However, there are astrophysical arguments about possible classes and cosmic locations of cosmic-ray sources, with important implications for the resulting cosmic-ray properties. At lower energies (< 10 12 eV), it is fairly certain that cosmic rays originate in Galactic sources. This is evidenced by differences in the cosmic ray fluxes estimated for different galaxies through observations of gamma rays originating in the decay of neutral pions pro-duced through collisions of cosmic rays with interstellar gas (e.g., [4][5][6]). At very high energies (> 8 × 10 18 eV) we are equally confident that cosmic rays are extragalactic, since the anisotropic distribution of arrival directions that starts emerging at these energies is not correlated with the Galactic plane or the Galactic center [7]. The exact energy at which the transition from Galactic to extragalactic cosmic rays occurs is still under debate (see, e.g., [8] for a recent review), however most recent works assume that it happens somewhere between 10 17 and 10 18.5 eV. Hints for this are seen both in the spectrum and in the composition of cosmic rays at this energy range. The steepening of the spectrum seen at the "second knee" and the transition, at the same energies, to a heavier composition [9] point towards a population originating in magnetically-confining accelerators reaching its maximum possible energy (as per the Hillas criterion, [10,11]). At energies between ∼ 10 17 eV and ∼ 10 18.5 eV, composition-sensitive observables from KASCADE-Grande, Auger, and Telescope Array indicate a transition back to lighter composition [12][13][14]. At ∼ 10 18.7 eV the spectrum also becomes harder, a feature known as the ankle [14,15]. There are two interpretations of the ankle. In the first, the ankle marks the transition to UHECR of extragalactic origin (e.g., [16][17][18]). In the second, this transition is assumed to have already happened at somewhat lower energies. As a result, the composition is already light at the ankle, and the ankle spectral feature is actually a "dip" caused by electron-positron losses (e.g., [19][20][21]). The debate of ankle-versus-dip is not, however, the only controversy at these energies. A second one is the socalled composition problem of UHECR. This is summarized as follows: At energies of 10 18.7 eV, compositionsensitive variables, taken at face-value, indicate a transition back to heavier composition [22]. However, there are certain astrophysical indicators against a heavy composition: (a) The spectrum at this energy is transitioning to a shallower slope, not a steeper one -i.e, there is no coincident spectral indication that the UHECR accelerators are reaching their maximum energy [23] (b) Anisotropies start to emerge at these energies [7,[24][25][26][27]. This might not be so severe a problem if the Galactic magnetic field has the overall low strengths indicated, e.g., by [28]. However, recent studies of the Galactic magnetic field have shown that it is approximately an order of magnitude stronger than previously thought [29] in a small region near the reported hotspot from TA [26,27]. If indeed the average Galactic magnetic field is proven to be just a few times stronger than the existing models, combined with the dipole anisotropy at high energies, we can conclude that UHECR are light nuclei. 
The reason is that heavy nuclei are strongly deflected from Galactic magnetic fields and would spread over all the sky, eliminating all evidence of anisotropy. (c) Heavier nuclei photodissociate fast during propagation (e.g., [18,30,31]) -with the exception of iron -so the composition be-comes lighter during propagation, unless it starts out as pure iron. However, iron is far from a best-fit to Auger composition-sensitive observables. Instead, observations can be better fit by a mix of intermediate-mass nuclei, requiring an astrophysically contrived composition of the accelerated particles at the source (e.g., [32][33][34][35][36]). In contrast, models that are more natural astrophysically are not in as good agreement with composition-sensitive observables [37,38]. In addition to these astrophysical considerations, there are particle-physics considerations that add to the composition problem. Hybrid detectors such as Auger and Telescope Array measure composition indirectly, in two ways: from the atmospheric slant depth at which the shower reaches a maximum, X max (measured through fluorescent detectors); and from the number of muons reaching the ground (measured by surface array tanks). At a fixed primary CR energy, heavier nuclei will typically give a lower and less variable X max ; and they will produce more muons. The observations of X max and muon numbers are then compared against the predictions from air-shower simulations. However, the best-fit compositions from muons and X max do not match [39]: too many muons are produced on the ground compared to what would be expected from the best-fit composition obtained from X max alone. The latter two problems indicate that the air-shower simulations (or rather, the hadronic collision simulation models on which these are based) may not be capturing correctly the development of showers. This is not altogether unexpected, since the first collision of a 10 17 eV cosmic ray with a stationary atmospheric proton is already super-LHC: we are simulating collisions of these primaries with the atmosphere based on theoretical extrapolations of hadronic behavior to higher energies. This has led several authors to hypothesize that the problem may lie in the hadronic collision models themselves. The solution that has been proposed in this context is that, above a threshold energy E th , the proton-Air interaction changes due to new new physics beyond the SM. This scenario is widely recognized both by the Auger Collaboration [40][41][42] and other authors [43][44][45][46]. In this scenario, the composition of the primary remains light. In [46] (hereafter PT19) we calculated analytically phenomenological constraints on any new effect that would alter hadronic interactions in such a way as to mimic a heavy composition at the highest UHECR energies. We showed that if the multiplicity of first-collision products increased over the SM predictions at a certain rate, the growth of the average X max with energy can be fully explained while keeping the composition light even at the highest energies. We also showed that a simultaneous increase in the proton-air cross-section over the SM prediction would improve agreement of σ Xmax (the showerto-shower variation in X max ) with the data, although we did not calculate the optimal behavior of the cross-section to best match Auger observations. In this paper, we extend the analytic formulation of PT19 using EAS simulations. For that purpose, we use a widely known program: CORSIKA 1 [47]. 
CORSIKA uses extrapolations of the SM to post-LHC energies to model hadronic interactions. In this work we used two such models: EPOS LHC [48] and QGSJETII-04 [49]. For low-energy interactions we used FLUKA [50], a Monte Carlo code used extensively at CERN. Our main goals are: (a) To test whether an increase in the multiplicity of first-collision products can indeed yield the changes in the X_max distribution predicted in PT19, as contrasted with, e.g., the implementation of such a change by [51], who found that the variance of X_max remains practically unchanged under a change in product multiplicity. (b) To calculate the optimal change in cross-section that best matches Auger X_max data. In contrast, in PT19 we did not fully explore the parameter space, but only argued that an increase in cross section changes the X_max variance in the direction of better agreement with Auger data. (c) To evaluate the impact of this scenario on the muon problem, which was not addressed in PT19. This paper is organized as follows. In Section II we present the mathematical formulation of our implementation of new-physics effects in the first collision of E > 10^18 eV CR with the atmosphere. We discuss the results of our simulations in Section III, and we summarize and discuss our conclusions in Section IV.

II. MATHEMATICAL FORMULATION

The "new physics above 50 TeV" scenario that we explore here using simulations of EAS implements two phenomenological changes in the first collision of the incoming primary cosmic ray with the atmosphere (as in PT19): (1) an increase in the multiplicity of the first-collision products; and (2) an increase in the cross section of the first interaction with the atmosphere. Our approach is phenomenological and not tied to any specific new-physics model. However, several candidate particles and new-physics mechanisms exist that might lead to such behavior (see, for example, [43,52-54]). In §II A we discuss how we describe these phenomenological changes quantitatively. We discuss how these changes impact the slant depth of the showers in §II B, and the muons reaching the ground in §II C. In §II D we describe how we implement the changes of §II A in air-shower simulations using CORSIKA. In this paper, we assume that all extragalactic cosmic rays reaching the Earth are protons. However, not all E > 10^18 eV cosmic rays detected on Earth are extragalactic, and, more importantly, the high-energy end of the Galactic cosmic-ray spectrum has a heavy composition, additionally affecting the slant depth and muon content. In order, therefore, to compare our results with observations, we additionally need a model for the way the Galactic cosmic-ray flux cuts off with energy. We describe this model in §II E.

A. Parametrization of changes in cross section and multiplicity

The cross-section of protons with nitrogen has a logarithmic behaviour at high energies [55]. For that reason, the cross section is usually parameterized as σ_SM(ε) = σ_0 + β ln ε, where σ_0 and β are constants. Here we normalize the energy scale as ε = E/E_th, where E_th is the threshold energy above which new physics sets in. Based on the arguments in PT19, we will take E_th = 10^18 eV, corresponding, for a collision of a primary proton with a stationary atmospheric proton, to a CM collision energy of ∼ 50 TeV.
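As a quick numerical check of the threshold just quoted, the center-of-mass energy of a fixed-target collision in the ultra-relativistic limit is √s ≈ √(2 E m_p c²). The short sketch below is illustrative only (the proton rest energy is a standard constant; nothing here comes from the paper's tables); it confirms that E_th = 10^18 eV corresponds to √s ≈ 43 TeV, i.e., of order 50 TeV, while 10^17 eV is already at the LHC scale.

```python
import math

M_P_EV = 0.938272e9  # proton rest energy in eV

def cm_energy_ev(e_lab_ev: float) -> float:
    """Center-of-mass energy (eV) for a primary of lab energy e_lab_ev
    hitting a stationary proton, in the ultra-relativistic limit."""
    return math.sqrt(2.0 * e_lab_ev * M_P_EV)

for e in (1e17, 1e18, 1e19, 1e20):
    print(f"E = {e:.0e} eV  ->  sqrt(s) ~ {cm_energy_ev(e) / 1e12:.1f} TeV")
```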
We note that this threshold is neither coincident with the location of the break seen in Auger X max data, nor fine-tuned (other choices that satisfy both that the threshold is ultra-LHC and that it lies below the Auger data break give similarly good results). We can calculate σ 0 and β from hadronic interaction models (here we use EPOS LHC and QGSJETII-04) and the standard-model extrapolations employed therein. The results are given in the first 2 lines of Table (I). If new phenomena take place at ε > 1, the cross section of the first interaction may change. Here we assume that such a change will only affect the value of the coefficient β of the energy-dependent term, and that the cross section will be continuous at ε = 1: We will parameterize this change in terms of the fractional change δ in the coefficient β relative to its standard-model value. Defining then we obtain In other words, for ε > 1, the cross section will deviate logarithmically from its standard-model-predicted value at the particular energy, with a coefficient δβ. Clearly, δ = 0 corresponds to no change to the cross-section at any energy over the standard-model prediction. We note however that the uncertainty in the standard-model predictions for the proton-air cross-section to super-LHC energies is high (see, e.g., Fig. 2 of [51]), and values of δ as high as 3.5 could still be consistent with the standard model within uncertainties. We also postulate an increase of the number of secondary particles produced after the first collision of the primary with the atmosphere. We limit the effect to the first collision since, for energies of interest, secondary particles will, with very high probability, have energies such that their collisions with air will occur at CM energies below the threshold for new physics. We parameterize this increase in first-collision product multiplicity by where N is the number of secondaries produced under new physics, and N SM is the standard-model prediction for the number of secondaries. It is possible that new physics may also change the charged-particle ratio of the products; we do not however implement such a change here. We further discuss this issue in §III. B. Shower maximum The slant depth of the shower maximum is the air column density traversed by the EAS front -measured from the top of the atmosphere -for which the shower front achieves its maximum atmospheric ionization rate: In Eq. (6), ρ is the density of the atmosphere, l is a length measured along the path the shower particles traverse in it, and x max is the height of the shower maximum. X max quantifies the total atmospheric column the shower has already encountered at its maximum, independently of the inclination of the incoming CR. X max can be written as the sum of two terms. The first one is the column density after which the first interaction of the cosmic ray primary takes place, X int ; and the second one is the column density between first interaction and shower maximum, X long , corresponding to the "longitudinal" development of the shower: 1. The first interaction: Xint The probability that a CR has not interacted with the atmosphere in the vicinity of height x is where m is the average mass of the particles in the atmosphere (mainly nitrogen) and σ CR-Air is the cosmicray-air cross section for the energy of the primary. 
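The cross-section prescription of §II A can be written compactly in code; the statistics of the first-interaction depth, discussed next, then follow from it. The sketch below is illustrative only: the σ_0 and β values in the example call are placeholders standing in for the Table I fits (not reproduced here), and the function simply scales the logarithmic coefficient by (1 + δ) above threshold, which keeps the cross section continuous at ε = 1.

```python
import math

def sigma_p_air(eps: float, sigma0: float, beta: float, delta: float = 0.0) -> float:
    """Proton-air cross section (mb) as a function of eps = E / E_th (eps > 0).

    For eps <= 1 the standard-model parametrization sigma0 + beta*ln(eps) is used;
    for eps > 1 the coefficient of the logarithm becomes (1 + delta)*beta, so the
    cross section is continuous at eps = 1.  sigma0 and beta below are placeholder
    values, not the Table I fits."""
    if eps <= 1.0:
        return sigma0 + beta * math.log(eps)
    return sigma0 + (1.0 + delta) * beta * math.log(eps)

# Example with hypothetical sigma0 = 520 mb, beta = 60 mb per e-fold in energy
for delta in (0.0, 2.0, 5.0):
    print(delta, sigma_p_air(10.0, 520.0, 60.0, delta))
```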
The average value of the depth of the first interaction for a given primary energy thus is Since X int follows Poisson statistics, its variance will be If new physics sets in for ε > 1, the distribution of X int for primaries of a given energy will be affected through the change in σ CR-Air , which (assuming the composition remains light) will take the value of σ p,Air,new given by Eq. (4): and The longitudinal development: X long Following the CR interaction with the atmosphere, secondary and subsequent generation of particles are produced. As the shower of particles evolve though the atmosphere, the energy per particle decreases, partly because of ionization losses and partly because of new particle production. This process continues until the energy of the shower particles drops below the energy threshold of new particle production, at which point the shower continues to evolve by ionization losses alone. This evolution of the shower is observable through the fluorescence of ionized atoms of the atmosphere, which can be detected with ground telescopes. The intensity of this fluorescent light encodes the energy loss rate of the shower front. X long measures the column density traversed by the shower front after the first interaction until the energy loss rate reaches its maximum value. The dependence of the longitudinal column, X long , with energy can be derived from the simple model of Heitler [56,57], where the initial particle produces two daughter particles which split the primary's energy, and the process continues until the energy of each particle reaches a critical energy below which the process cannot continue. This results to a logarithmic increase of X long with energy. A more realistic calculation of X long is more complex because additional phenomena take place (e.g. bremsstrahlung, pair production, pion production, hadronization), and there are significant showerto-shower fluctuations. For that purpose numerical simulations are used to follow the development of EAS (e.g. CORSIKA). The final behavior of X long with energy, however, is still logarithmic: where X 0 and α are constants, while σ X long remains approximately constant with energy. In the last 2 lines of Table (I) we show the best-fit parameters X 0 and α for protons, derived from CORSIKA EAS simulations using two different models (EPOS LHC and QGSJETII-4) of hadronic interactions. Both models employ standard-model extrapolations for collisions at super-LHC energies. The parameters do depend slightly on the hadronic interactions model, however both models produce qualitatively similar results. If now new physics sets in for ε > 1, the distribution of X long will change. To quantify this change, we model empirically the shower as n( ) "component showers", of energy /n( ) on average, developing independently. Note that this approach is conceptually and qualitatively different from that of [51] who used a multiplicative factor to increase the number of products in each collision -the most prominent difference being that the presence of independently developing "component showers" decreases the shower-to-shower fluctuations of X long (i.e. σ X long ), while a multiplicative increase of products of identical distribution as the original shower leaves σ X long unchanged. Under this change, X long,new becomes [from Eq. (12)] To produce an analytical estimate the variance of X long we take, as in PT19, the average of individual "component-shower" longitudinal depth, 1 n i X long,i to be a reasonable estimation of X long . 
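Before the variance estimate is completed (the argument continues below), the two ingredients introduced so far can be sketched numerically: a mean first-interaction depth m/σ, and a longitudinal column that grows logarithmically with energy and shifts down by α log10 n when the primary energy is shared among n component showers. The reference depth, elongation coefficient, reference energy, and mean air mass in this sketch are assumed placeholder values, not the Table I fits.

```python
import numpy as np

def mean_x_int(sigma_mb: float, mean_mass_amu: float = 14.5) -> float:
    """Mean depth of the first interaction <X_int> = m / sigma in g/cm^2
    (mean_mass_amu is a typical mean air mass, an assumed value)."""
    m_air = mean_mass_amu * 1.66054e-24      # g
    return m_air / (sigma_mb * 1e-27)        # 1 mb = 1e-27 cm^2

def x_long(e_ev: float, n: int = 1, x_ref: float = 750.0,
           alpha: float = 60.0, e_ref: float = 1e19) -> float:
    """Logarithmic growth of the longitudinal column, written relative to a
    reference energy: X_long = x_ref + alpha*log10(E / (n * e_ref)).
    Splitting the primary among n component showers shifts X_long down by
    alpha*log10(n).  x_ref, alpha, e_ref are placeholders, not Table I fits."""
    return x_ref + alpha * np.log10(e_ev / (n * e_ref))

print(mean_x_int(550.0))                 # ~44 g/cm^2 for a 550 mb cross section
print(x_long(1e19), x_long(1e19, n=3))   # component showers develop higher up
```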
Then X_long is the "sample mean" of n "draws" from the underlying distribution of X_long,i, and the distribution of these "sample means" has a variance given by the "error in the mean" formula, Var(X_long,new) = Var(X_long,i)/n(ε). Here Var(X_long,i) is the variance of X_long,i, and it can be assumed to follow the SM predictions, since each subshower of the sample will have ε < 1.

New physics mimics a transition to heavy composition

The average value and the variance of X_max are sensitive to the first interaction -- both its cross-section and the multiplicity of its products. A higher first-interaction cross-section will result in lower X_int and Var(X_int), and therefore lower X_max. A large number of first-collision products will distribute the energy of the primary more widely, resulting in lower X_long and therefore lower X_max. A larger number of first-collision products will also result in reduced shower-to-shower fluctuations in X_long, and in turn a lower Var(X_long) and a lower Var(X_max). A heavy primary composition will drive both X_max and Var(X_max) to lower values for a given energy, through both an increased cross-section of the primary-air collision and an increased first-collision product multiplicity. The new-physics scenario we discuss here will also move the X_max distribution in the same direction through similar changes in cross-section and multiplicity, and can thus mimic a transition to a heavier composition.

Constraining n(ε) and δ

The empirical model we have presented here features a free parameter δ and a free function n(ε). However, if we assume that the change of slope in X_max(ε) observed by Auger at the highest energies is, in its entirety, due to new physics, so that all primaries at these energies are protons, we can completely determine n(ε) as a function of δ, leaving only a single free parameter in our model (which can also be optimized by comparison to other moments of the X_max distribution, as we will see in §III).

C. Muons

The muonic component of EAS is also a problem in UHECR physics. Muons are produced mainly when charged pions or kaons decay, indicating hadronic interactions. They have a large mean free path, and consequently their journey through the atmosphere is mostly undisturbed. Upon reaching the ground, muons can be detected via the Cherenkov radiation they produce inside the water tanks comprising the surface array of hybrid experiments such as Auger and TA. Thus the energy and spatial distribution of muons can be measured. A useful parameter for comparing experiment with simulations is the ratio N_μ/N_μ,19, where N_μ is the number of muons detected on the ground. The reference parameter N_μ,19 = 2.148 × 10^7 is inferred from simulations assuming a proton primary with energy 10^19 eV, taking into consideration the detector's response for muons above 0.3 GeV (the Cherenkov threshold for Auger's water tanks) that reach Auger's site at an altitude of 1425 m with inclination θ = 60°. Aab et al. [58] report that N_μ,19 does not depend strongly on the selected high-energy model; they quote a ∼ 11% systematic error introduced in this way. The reason for selecting such inclined showers is that for inclinations θ > 60°, EAS are dominated mostly by muons, since the electromagnetic part is absorbed by the atmosphere. This is in fact the inclination we used in our simulations. Auger reports that the average of that ratio depends on energy as ⟨N_μ/N_μ,19⟩ = a (E/10^19 eV)^b, where a = 1.841 and b = 1.029.
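The relation just quoted is straightforward to evaluate, and it also illustrates why splitting the primary among component showers changes the expected muon number: if the simulated muon yield scales as N_μ ∝ E^β with β slightly below unity (β ≈ 0.9 is a typical Heitler-Matthews-type value, assumed here rather than taken from the paper), then n subshowers of energy E/n produce n^(1−β) times more muons in total. A minimal sketch:

```python
def mean_muon_ratio(e_ev: float, a: float = 1.841, b: float = 1.029) -> float:
    """Auger-reported mean muon ratio <N_mu/N_mu,19> = a * (E / 10^19 eV)^b."""
    return a * (e_ev / 1e19) ** b

def muon_boost_from_splitting(n: int, beta: float = 0.9) -> float:
    """Total-muon enhancement if a primary of energy E is split among n
    independently developing component showers and the simulated muon yield
    scales as N_mu ~ E**beta (beta ~ 0.9 assumed, not taken from the paper):
    n * (1/n)**beta = n**(1 - beta)."""
    return n ** (1.0 - beta)

print(mean_muon_ratio(1e19))           # 1.841 at 10^19 eV by construction
print(muon_boost_from_splitting(3))    # ~1.12: a ~12% muon increase for n = 3
```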
They also report a relative standard deviation of Auger observes 25 − 40% more muons at 10 19 eV than high energy models predict, assuming proton as a primary. However, muons are observed to be in overabundance even if a heavier composition, consistent with the one that would produce the observed X max ( ) is assumed for the primaries. In the scenario where proton interactions change above 10 18 eV introducing new physics, the muon production will change due to the increase of multiplicity for the secondary particles. The total number of muons on the ground will be the sum of muons that each component-shower produces. Note that each component-shower does not produce the same amount of muons. We then expect the average number of muons on the ground to be D. CORSIKA simulations We now turn to EAS simulations. We wish to: (a) test whether the implementation, in simulations, of new physics as discussed in §II A will produce the same behavior as our analytic approximations; and (b) determine the optimal phenomenological parametrization (cross section and the multiplicity, quantified by δ and n(ε) in our description) that any new proton-air interaction must exhibit in order for EAS to produce the observed data in Auger if the composition of primaries is to remain light up to the highest energies. We simulated showers induced by primaries with energies in the range 10 17 − 10 20 eV with step in log E of 0.1. At each energy bin, we performed 1000 EAS simulations. We simulated EAS with the first collision treated either with SM extrapolations, or with new physics as per our phenomenological implementation. We performed SM EAS simulations for proton primaries with E < E th = 10 18 eV, and for the heavier Galactic primaries (see next section), since their per-nucleon kinetic energy never exceeds 10 18 eV. We implemented new physics for all proton primaries with E > 10 18 eV. CORSIKA EAS simulations using either EPOS-LHC or QGSJETII-04 yielded our SM results. For low-energy interactions we used FLUKA. We also used the CONEX hybrid scheme [59] which decreases the simulation time dramatically. Each simulation generates three output files. The first records the energy deposited as a function of depth in the atmosphere. We fitted a Geisser-Hillas to the simulated data to estimate the shower longitudinal-development maximum, X long , for each shower. For each energy bin, we then calculated the average value of X long and its variance. The second output file records information on the cross section of the primary with the atmosphere and from it we calculated X int . The second output file also contains the number of muons detected on the ground. We fitted a convolution of a Gaussian with an exponential to calculate the average value and the relevant variance of the number of muons on the ground. The third output file (stack file) contains information about the secondary particles produced after the first interaction. The results from this run are the Standard Model predictions from extrapolated models. For the new-physics simulations, we used the following approach. For a given value of δ, we first calculated the multiplicity according to Eq.(17) at each energy bin. We then combined stack files from the same energy bin (produced by SM simulations) according to the calculated multiplicity: we rounded n(ε) to the nearest integer, and we combined as many stack files, accounting for energy and momentum conservation. To this end, we divided the energy and momentum of each particle by the number of stacked files. 
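The stack-merging bookkeeping described above -- combining several first-interaction stacks and dividing each particle's energy and momentum by the number of stacked files -- can be sketched as follows; the use of the merged stacks continues below. The data layout here is schematic, not the actual CORSIKA stack format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    pid: int    # particle code
    e: float    # energy
    px: float   # momentum components
    py: float
    pz: float

def merge_stacks(stacks: List[List[Particle]]) -> List[Particle]:
    """Merge n first-interaction stacks into one, rescaling each particle's energy
    and momentum by 1/n so that the merged stack carries the energy and momentum
    of a single primary (the bookkeeping described in the text)."""
    n = len(stacks)
    return [
        Particle(p.pid, p.e / n, p.px / n, p.py / n, p.pz / n)
        for stack in stacks
        for p in stack
    ]

# Schematic usage: three copies of a toy single-particle stack
toy = [Particle(pid=211, e=1.0e18, px=0.0, py=0.0, pz=1.0e18)]
merged = merge_stacks([toy, toy, toy])
print(len(merged), merged[0].e)   # 3 particles, each carrying 1/3 of the energy
```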
We then used the combined stack files as input to CORSIKA and continued the simulation of the EAS, obtaining data files for the energy deposition as a function of depth in the atmosphere and the muon number on the ground. The cross section was calculated from Eq.(2). E. Cosmic Ray Flux At energies above 10 18.3 eV, both Auger and TA observe a dipole distribution of CR uncorrelated with the galactic plane [60,61]. This indicates that above this energy, CR are of extra-galactic origin. The energy at which this transition takes place is an important input for our calculations. The reason is that Galactic CR are heavy particles and thus the the energy per nucleon will be below the threshold of new physics. As a result we need not apply any new physics corrections to their EAS simulations. We model this transition using a simple, phenomenological approach, based on three assumptions: (1) That above 10 17 eV cosmic rays consist of a single, fixedcomposition Galactic component, and a single, fixedcomposition extragalactic component. The energydependent fraction of Galactic CR is f (ε). The fraction of extragalactic CR is then 1 − f (ε). (2) That above 10 17 eV we can model the Galactic CR spectrum (differential particle flux J(ε)) as a power law of slope −γ G , cutting off exponentially at a a characteristic energy ε G , corresponding to the maximum energy of Galactic CR accelerators: where ε 17.5 = 10 17.5 eV/E th = 10 −0.5 . (3) That CR at energies lower than those where losses (either e + e − or pion photoproduction) become important, the extragalactic CR flux J EG (ε) is a single power law. At low energies (E < 10 17.5 eV), Auger data constrain J G,0 = 4.1 × 10 −15 (km 2 eV yr sr) −1 , and γ G = 2.9. By virtue of our third assumption above, we can also constrain ε G by demanding that, for 10 17.5 eV/E th < ε < 10 18.2 eV/E th the extragalactic spectrum J EG (ε) = J total, Auger − J G (ε) is consistent with a single power law. We thus find ε G = 10 17.9 eV/E th , which results in an extragalactic spectrum consistent with J EG (ε) ∝ ε −2.0 between 10 17.5 and 10 18.2 eV (see Fig. 1). The resulting Galactic CR fraction, f (ε) = J G (ε)/J total,Auger (ε) is shown in the inset of Fig. 1. We note that under the three assumptions adopted here, extragalactic CR are found to dominate already at 10 18 eV, the composition at 10 18.5 is light, and the ankle must be an e + e − "dip". In this simple scenario, the probability density function of X max will be leading to an average shower maximum and its variance V ar(X max ) = f V ar(X max,G ) with subscripts G and EG referring to the Galactic and extragalactic populations respectively. At energies around 10 17 eV, CR are mainly of Galactic origin. In this paper we assume Galactic CR to be one type of nucleus for simplicity. We assume the Galactic component to be helium for simplicity -lighter than PT09, who had assumed carbon. Ultimately, the Galactic component should ideally be simulated with appropriate mixed composition, with each species cutting off at different energies according to its charged, as per the Hillas criterion [10]. III. RESULTS We performed CORSIKA simulations as described in §II D for δ = 0, 2.9, 3.5, 4, 6, 8 and 10, treating the Galactic-to-extragalactic transition as described in §II E. The simulation results for X max are (by construction) in excellent agreement with Auger data 4 for all values of δ (Fig. 2). 
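The Galactic-to-extragalactic decomposition of §II E that enters these comparisons can be sketched as below. The Galactic normalization, slope, and cutoff are the values quoted in the text; the extragalactic normalization is a placeholder chosen only so that the two components cross near 10^18 eV; and the mixture mean and variance of X_max use the standard two-population mixture formulas.

```python
import numpy as np

E_TH = 1e18                      # eV, threshold used to normalize eps = E / E_th
EPS_17_5 = 10**17.5 / E_TH
EPS_G = 10**17.9 / E_TH          # Galactic cutoff quoted in the text

def j_gal(eps, j0=4.1e-15, gamma=2.9):
    """Galactic differential flux (km^-2 eV^-1 yr^-1 sr^-1): power law with exponential cutoff."""
    return j0 * (eps / EPS_17_5) ** (-gamma) * np.exp(-eps / EPS_G)

def j_exgal(eps, j0_eg=5.0e-16, slope=2.0):
    """Extragalactic flux; slope -2 as in the text, normalization j0_eg is a placeholder."""
    return j0_eg * (eps / EPS_17_5) ** (-slope)

def galactic_fraction(eps):
    jg, je = j_gal(eps), j_exgal(eps)
    return jg / (jg + je)

def mixture_xmax(f, mu_g, var_g, mu_eg, var_eg):
    """Mean and variance of X_max for a Galactic/extragalactic mixture
    (standard two-population mixture formulas)."""
    mu = f * mu_g + (1.0 - f) * mu_eg
    var = f * var_g + (1.0 - f) * var_eg + f * (1.0 - f) * (mu_g - mu_eg) ** 2
    return mu, var

eps = 10 ** np.arange(17.0, 19.01, 0.5) / E_TH
print(np.round(galactic_fraction(eps), 3))
```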
Our simulations are also a much improved fit to the σ_Xmax data even for δ = 0 (no change in cross section, only multiplicity increases). For higher δ the agreement improves further and is optimal for δ between 4 and 8 (Fig. 3). In Figs. 2 and 3 we did not plot results for all simulated values of δ in order to keep the figures legible. Figs. 2 and 3 show a deviation of the simulation results from observations at low energies. We expect that this discrepancy is due to the assumption of a Galactic component consisting purely of helium. A more reasonable assumption is a mixture of helium, carbon, and oxygen, with ratios that depend on the energy. However, since the details of Galactic CR composition have little to no impact on the (dis)agreement between theory and observations of X_max at the highest energies, we chose, for simplicity, not to focus on the modeling of the Galactic component in this work. Instead, we implement the very simple recipe described above, aiming only to show the direction in which X_max and σ_Xmax will change due to the Galactic-to-extragalactic transition. We plan to return to this problem and relax the assumption of a single-species Galactic component in a future publication. We next explore the agreement between the new-physics σ(X_max) and the observed Auger data, quantifying it by means of the reduced χ² statistic. Since we have treated the Galactic component in a very approximate way, we only use energies at which extragalactic protons have fully dominated the UHECR flux (E > 10^18.5 eV) for this comparison.

[Fig. 2 caption, partially recovered: When we alter the way the cross section and the first-interaction product multiplicity scale with energy as in Eqs. (4) and (17), EAS simulations with protons as primaries (green and red) reproduce the observed data well at the highest energies. The simplifying assumption of a single-component Galactic CR population is the reason our simulations deviate from the observational data at lower energies.]

In Fig. 4 we plot the reduced χ² as a function of δ for our EAS simulation results (datapoints), as well as for our analytic approximation for σ_Xmax given by Eq. (19) (solid lines). We calculate the position of minimum χ² for our simulations by fitting parabolas (dashed lines) to our datapoints. EPOS-LHC has a minimum of χ² = 1.5 at δ = 4.8 and QGSJETII-04 has a minimum of χ² = 5.2 at δ = 6.7. EPOS-LHC has overall better performance. At their minimum χ², the cross section at an energy of 10^19 eV rises to 830 mb (788 mb) for EPOS-LHC (QGSJETII-04). Furthermore, at the same energy the multiplicity has increased by a factor of 3 (2).

[Fig. 3 caption, partially recovered: "... (Fig. 2), its standard deviation does. We obtain the best agreement between simulated σ_Xmax and Auger observations for δ near 6 for QGSJETII-04 (upper) and for δ near 4 for EPOS-LHC (lower). Further increase of the δ parameter does not result in significant changes at high energies, as σ_Xmax quickly reaches an asymptotic behavior."]

In Figs. 5 and 6 we show how the changes in cross-section and multiplicity in our scenario affect the number of muons measured on the ground. Fig. 5 shows the fractional difference of muons on the ground from simulations relative to the data observed by Auger. The new-physics scenario produces more muons than the SM prediction. Although this is an improvement, a deviation of 30-37% from the observational data still persists.
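Returning to the χ²-based comparison described above, the procedure -- a reduced χ² for each simulated δ, followed by a parabolic fit to locate the minimum -- is simple to reproduce. The numbers in the example below are hypothetical placeholders, not the simulation results.

```python
import numpy as np

def reduced_chi2(model, data, err, n_params=1):
    """Reduced chi^2 of a model against data points with 1-sigma uncertainties."""
    resid = (np.asarray(model) - np.asarray(data)) / np.asarray(err)
    return float(np.sum(resid**2) / (resid.size - n_params))

def parabola_minimum(deltas, chi2):
    """Location of the minimum of a parabola fitted to (delta, chi2) points."""
    a, b, _ = np.polyfit(deltas, chi2, 2)
    return -b / (2.0 * a)

# Hypothetical illustration only -- these are NOT the simulation results:
deltas = np.array([0.0, 2.9, 3.5, 4.0, 6.0, 8.0, 10.0])
chi2 = np.array([9.0, 4.0, 3.0, 2.2, 1.7, 2.4, 3.8])
print(f"chi^2 is minimized near delta = {parabola_minimum(deltas, chi2):.1f}")
```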
The reason is that the muon production does not depend strongly on the overall multiplicity of the secondary particles, but rather on the ratio of pions and kaons produced after the first interaction [62]. The fraction of such particles in the products of the first collision does not change in the implementation of new physics we have considered here. However, such a change in the charged-particle ratio may be an important feature in any specific new-physics model that attempts to fully explain the UHECR composition problem, including the muon problem. We do, however, point out that the residual discrepancy between the simulated and observed number of muons in our current implementation is constant with energy, unlike the SM predictions, where the discrepancy increases with energy. Our implementation additionally produces a variance for the muon number that is consistent with the observed data (Fig. 6), including the correct trend with energy. We plan to revisit this issue in a future publication, adding the possibility of a change in the charged-particle ratio.

[Fig. 4 caption: Agreement between new-physics EAS simulations and Auger data for σ_Xmax and E > 10^18.5 eV, quantified through the reduced χ² statistic, as a function of the value of the δ parameter. Overall, EPOS-LHC simulations produce results more consistent with observational data. To find the position of the minimum χ², we perform a parabolic fit to the datapoints. The locations of the minima are at δ = 4.8 for EPOS-LHC and at δ = 6.7 for QGSJETII-04.]

IV. SUMMARY AND CONCLUSIONS

We performed simulations of EAS with CORSIKA, appropriately modified for interactions above E_th = 10^18 eV to feature an increased proton-air cross-section and first-collision product multiplicity. We have parameterized the increase in cross section through a parameter δ, defined as the fractional increase of the coefficient of logarithmic growth with energy of the proton-air cross-section with respect to its standard-model value (see Eq. 4). We have parameterized the increase in product multiplicity through a function n(ε), defined as the ratio of first-collision products over their SM-predicted number. By demanding that Auger observations of X_max as a function of energy above 10^18.7 eV are reproduced under the assumption that all UHECR at these energies are protons, we can determine n(ε) for any given value of δ (see Eq. 17). This then leaves δ as the only parameter in our description.

[Fig. 5 caption: Fractional difference of muons on the ground predicted by simulations relative to the observed data at the Auger Observatory. When new physics sets in above E_th, the number of muons changes (green and red), due to the change in product multiplicity (hence, independently of δ). This change does not fully reconcile the simulated number of muons with observations. However, unlike the SM prediction (grey), the discrepancy does not increase with energy above 10^18.5 eV, but rather stabilizes around 30% (40%) for EPOS-LHC (QGSJETII-04).]
In each case, product multiplicity increases by a factor n(ε) given by Eq. (17). These results provide phenomenological constraints on the properties (cross section, multiplicity) of any new effect beyond the SM that may set in for collisions at CM energies exceeding ∼ 50 TeV, if such an effect is to be held responsible for the change of behavior of the X_max distribution observed by Auger for collisions of primaries more energetic than 10^18.7 eV with the atmosphere. As far as the muon problem is concerned, we found that although the change in multiplicity we have investigated here does improve the agreement between Auger data and EAS simulations, it is not by itself sufficient to fully resolve the discrepancy. For this reason, we speculate that any new effect setting in at 50 TeV may also induce a change in the charged-particle ratio, which would further increase the number of muons produced and detected on the ground. We plan to investigate this possibility in a future publication. Current and planned advances in the tomographic mapping of the Galactic magnetic field through local measurements [29,63-67] are expected to make feasible an electromagnetic determination of the charge of UHECR at the highest energies in the near future [68]. If such studies provide unequivocal evidence that UHECR at the highest energies are indeed protons, then this will be a strong argument in favor of new physics setting in for hadronic interactions at CM energies above 50 TeV, with the phenomenology of any such new effect exhibiting the behavior calculated in this work.

[Fig. 6 caption: Standard deviation over average of the muon ratio as a function of energy. When new physics sets in above E_th, this quantity becomes more consistent with the observed data than SM predictions (grey), and reproduces the trend with energy observed by Auger.]

ACKNOWLEDGMENTS

S.R. and V.P. would like to dedicate this work to the memory of our friend and collaborator Theodore Tomaras, whom we have lost way too soon. We will always fondly remember all the exciting and fruitful scientific discussions and debates that we have had, and we will sorely miss the ones that will now never take place. We thank Alan Watson, Nicusor Arsene, Konstantina Dolapsaki, Andreas Tersenov, and Christos Litos for helpful comments and discussions that improved this manuscript, and Dieter Heck for valuable feedback on the use of CORSIKA. This work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project 1552 CIRCE). V.P. acknowledges support from the Foundation of Research and Technology -- Hellas Synergy Grants Program through project MagMASim, jointly implemented by the Institute of Astrophysics and the Institute of Applied and Computational Mathematics.
Efficiency of Sampling Sunfishes Using Snorkeling in Clear, Warm-Water Streams of the South-Central United States The continued evaluation of fish-sampling gears and methods is essential to identify their applicability across environmental conditions and among species. Although limited by visibility, snorkeling has potential advantages relative to other fish-sampling gears in wadeable streams (e.g., minimally intrusive, cost effective, and appropriate in deeper areas). Clear water is common to warm-water streams; however, the use of snorkeling for monitoring streamfish populations has largely focused on cold-water systems. To assess relative snorkeling efficiency in warm-water streams, we compared standardized single-pass snorkel counts to tow-barge electrofishing abundance estimates for six sunfishes (Centrarchidae) in the Ozark Highlands ecoregion of northwest Oklahoma and southwest Missouri under relatively similar environmental conditions (i.e., clear water, cobble substrates, low-flow conditions). Snorkeling efficiency was variable among sunfishes and consistently low for species with cryptic traits and habitat use. We also did not detect cryptic sunfishes (i.e., a single individual was not encountered) using snorkeling at multiple stream reaches where estimated abundance was . 50 within a 0.5to 1.0-km stream reach. Our findings indicate that snorkeling has applications for monitoring sunfish populations and assemblages when using an abundance estimator or accounting for imperfect detection; however, it is inappropriate for estimating population size of cryptic sunfishes. We encourage continued research into the applicability of snorkeling to estimate warm-water stream fish abundance. Introduction All fish-sampling gears are imperfect and biased in some way. Stream-fish capture probability (i.e., the estimated proportion of individuals captured) is variable across sampling conditions and among both gear types and species (Peterson and Paukert 2009;Gwinn et al. 2016). Standardizing sampling gears and methods (Bonar et al. 2009a(Bonar et al. , 2015, though important aspects of sound stream-fish monitoring, does not alleviate the inferential issues associated with variable capture probability (Peterson et al. 2004;Price and Peterson 2010;Mollenhauer et al. 2017). Thus, the continued evaluation of biases and limitations of stream-fish sampling methods is essential for ecological advancements and well-informed conservation and management decisions. Preliminary evaluations are particularly important for fish-sampling gears with a paucity of information about potential applications (e.g., poorly evaluated species or systems). For example, initial assessments can prioritize future research directions and identify inappropriate gear types (e.g., consistently low capture probability). Backpack and tow-barge electrofishing, along with seining, are the most common sampling methods for estimating fish abundance in wadeable streams (Rabeni et al. 2009); however, single-pass snorkel counts are an option given adequate visibility (Dunham et al. 2009;Thurow et al. 2012). Snorkeling is both minimally intrusive and cost effective (e.g., gear requirements are typically only a mask, snorkel, and wetsuit). Snorkeling is also less limited by water depth than seining and electrofishing, thus making it more applicable for sampling deeper pools. Although clear water is often more strongly associated with cold-water streams, adequate visibility for fish snorkel counts is also common in warm-water systems. 
For example, many streams of the south-central United States have substantial groundwater influence and excellent underwater visibility during dry weather periods (Nigh and Schroeder 2002). However, the applicability of snorkeling to estimate stream-fish abundance in warm-water streams has received relatively little attention by researchers relative to cold-water systems (but see Dauwalter et al. 2007;Jordan et al. 2008;Brewer and Ellersieck 2011;Weaver et al. 2014;Hain et al. 2016). For example, snorkeling was among the gear choices discussed for cold-water streams, but not warm-water streams, in a recent American Fisheries Society text (Bonar et al. 2009b) outlining standard fish-sampling gears and methods. Our objective was to examine the applicability of snorkeling for monitoring sunfish (Centrarchidae) populations in wadeable warm-water streams on the basis of a relative efficiency (i.e., a point count relative to an absolute abundance estimate). We differentiated efficiency from capture probability as the former being an observed proportion and the latter a modeled estimate. The sunfishes of interest in our study were Bluegill Lepomis macrochirus, Green Sunfish Lepomis cyanellus, Longear Sunfish Lepomis megalotis, Redear Sunfish Lepomis microlophus, Rock Bass Ambloplites rupestris, and Warmouth Lepomis gulosus. Study area We sampled sunfish populations using both snorkeling and tow-barge electrofishing in 20 stream reaches in the Ozark Highlands ecoregion of northwest Oklahoma and southwest Missouri from summer to early autumn 2014-2015 ( Figure 1). The Ozark Highlands are characterized by cherty-limestone lithology and oak-hickory forests, with valleys primarily converted to pasture (Woods et al. 2005). All reaches were wadeable (i.e., most habitat was , 1 m deep; Rabeni et al. 2009) and comprised three to five riffle-run-pool sequences 0.5-1 km in length to characterize in-stream habitat. We conducted all sampling under good visibility (horizontal underwater water clarity 3.0 m) and relatively low flows (0.09-4.10 m 3 /s). Substrate was primarily cobble across our reaches. Methods We performed standardized single-pass snorkel counts (Dunham et al. 2009) before tow-barge electrofishing using two to three trained crew members. We installed two sets of block-off nets at both the upstream and downstream end of each reach to ensure a closed system following Peterson et al. (2004). Three snorkelers sampled most reaches, but we used only two when wetted channel width was , 10 m (i.e., a third ''lane'' was not available in the reach; see below). We slowly snorkeled stream areas 0.2-m deep in an upstream direction, while avoiding sudden movements and carefully inspecting areas of structure (e.g., under logs and between boulders). Each snorkeler maintained a designated longitudinal lane and remained lateral to other crew members, while communicating using underwater hand signals to minimize double counting. When snorkelers either passed or were passed by sunfishes 50-mm total length (TL), they identified them to species and recorded them on an underwater wrist cuff. Our size restriction excluded most age-0 fishes not recruited to electrofishing (McClendon and Rabeni 1986; personal observations). We used fish silhouettes and rocks of known sizes to confirm the ability of crew to recognize fish-size cutoffs underwater (Dunham et al. 2009). Approximately 24 h after the snorkel counts, we performed standardized two-pass removal electrofishing (Rabeni et al. 
2009) using a tow barge (Midwest Lake Management, Polo, Missouri). The electrofishing crew comprised three people: one tow-barge operator armed with a hand net and two persons equipped with dip nets, each operating one of the anodes. We electrofished stream areas 0.2-m deep in an upstream direction with a zigzag pattern, while thoroughly sampling areas with structure. We used pulsed direct current, 60 Hz, and a 25% duty cycle for electrofishing. We adjusted voltage to a target power (W) to standardize the electrical field across levels of ambient water conductivity, while minimizing electrofishing-induced injuries (Miranda 2009). We measured sunfishes 50-mm TL and identified to species. We estimated sunfish electrofishing capture probabilities (i.e., estimated number of individuals captured) using the model described by Mollenhauer et al. (2017) to calculate sunfish abundance estimates. Briefly, Mollenhauer et al. (2017) used a series of mark-recaptures across a range of sampling conditions and fish sizes (i.e., gear calibration; Peterson and Paukert 2009) to develop a multispecies electrofishing capture probability model, where a cross-validation indicated good model performance. We calculated sunfish abundance estimates for fish 50 mm TL asN ¼ c/q (Thompson and Seber 1994;Peterson and Paukert 2009), whereN is the speciesspecific abundance estimate for each stream reach, c is electrofishing count, andq is estimated capture probability approximated from the logit scale to values from 0 to 1 (Jørgensen and Pedersen 1998). Electrofishing counts are provided as supplemental material (Table S2, Supplemental Material). We compared relative snorkeling efficiency among stream fishes using the snorkel count at each reach divided by the electrofishing abundance estimate. We calculated weighted means and standard deviations using the R package Hmisc (Harrell 2018) to summarize snorkeling efficiencies. We used the electrofishing abundance estimate as the sample size for the weighted statistics, where species-reach observations with counts of 0 (thus, we did not include abundance estimates of 0) were not included. We constrained snorkeling efficiency to 1 for the weighted summary statistics when snorkel counts exceeded electrofishing estimates (i.e., we used the electrofishing estimate for the snorkel count). Sunfish Lepomis microlophus, Rock Bass Ambloplites rupestris, and Warmouth Lepomis gulosus) from summer to early autumn 2014-2015 using both snorkeling and two-barge electrofishing. We defined stream reaches as three to five riffle-run-pool sequences 0.5-1 km in length to characterize in-stream habitat. Results and Discussion Relative snorkeling efficiency (reported as weighted mean 6 SD) was variable among species-reach observations (Table 1). Snorkeling efficiency ranged from 0.00 to 1.00 (0.28 6 0.23; n ¼ 95 species-reach observations) and was consistently , 0.10 for Rock Bass, Green Sunfish, and Warmouth. With the exception of Longear Sunfish, each species had at least one false absence using snorkeling (i.e., efficiency of 0 at a reach). Mean snorkeling efficiency was highest and most variable for Longear Sunfish (0.41 6 0.22; n ¼ 18 reaches) and Bluegill (0.32 6 0.16; n ¼ 18 reaches; one false absence). Mean snorkeling efficiency was lower and similar for Redear Sunfish (0.14 6 0.11; n ¼ 10 reaches; five false absences) and Rock Bass (0.11 6 0.10; n ¼ 20 reaches; one false absence). 
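The efficiency calculation described in the Methods above -- an abundance estimate N-hat = c/q-hat with q-hat back-transformed from the logit scale, efficiencies capped at 1, and abundance-weighted summary statistics -- can be sketched as below. The counts and logit-scale capture probabilities in the example are hypothetical, and the weighted SD uses a simple population-weighted form, which may differ slightly in normalization from the Hmisc convention used in the study.

```python
import numpy as np

def abundance_estimate(catch, logit_q):
    """Abundance estimate N_hat = c / q_hat, with q_hat back-transformed
    from the logit scale (as in the capture-probability model)."""
    q_hat = 1.0 / (1.0 + np.exp(-np.asarray(logit_q, float)))
    return np.asarray(catch, float) / q_hat

def weighted_efficiency_summary(snorkel_counts, abundance):
    """Relative snorkeling efficiency (capped at 1) with its abundance-weighted
    mean and SD; observations with zero estimated abundance are dropped."""
    s = np.asarray(snorkel_counts, float)
    n_hat = np.asarray(abundance, float)
    keep = n_hat > 0
    eff = np.minimum(s[keep] / n_hat[keep], 1.0)
    w = n_hat[keep]
    mean = np.average(eff, weights=w)
    sd = np.sqrt(np.average((eff - mean) ** 2, weights=w))
    return eff, mean, sd

# Hypothetical counts and logit-scale capture probabilities (not the study's data):
n_hat = abundance_estimate([40, 8, 55], [-0.4, -0.2, 0.1])
eff, m, sd = weighted_efficiency_summary([12, 0, 30], n_hat)
print(np.round(eff, 2), round(m, 2), round(sd, 2))
```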
Estimated abundance for Redear Sunfish was almost exclusively < 20, and mean snorkeling efficiency was largely based on one stream reach located near an impoundment with disproportionately higher estimated abundance (Spavinaw Creek2; > 1,000). Mean snorkeling efficiency was lowest and least variable for Green Sunfish (0.03 ± 0.06; n = 20 reaches; four false absences) and Warmouth (0.03 ± 0.03; n = 9 stream reaches; six false absences). Snorkel counts exceeded electrofishing estimates for five observations (Bluegill and Longear Sunfish: 2, Redear Sunfish and Rock Bass: 1). We encountered a species using snorkeling, but not electrofishing, at only one stream reach (Redear Sunfish at Buffalo Creek2; n = 1 individual).

Table 1. Relative snorkeling efficiency and weighted mean and SD for six sunfishes (Bluegill Lepomis macrochirus, Green Sunfish Lepomis cyanellus, Longear Sunfish Lepomis megalotis, Redear Sunfish Lepomis microlophus, Rock Bass Ambloplites rupestris, and Warmouth Lepomis gulosus) at 20 stream reaches in the Ozarks Highland ecoregion of northeast Oklahoma and southwest Missouri sampled from summer to early autumn 2014-2015 (Figure 1). We calculated snorkeling efficiency as the snorkel count (S) divided by the two-pass tow-barge electrofishing abundance estimate (E). We defined stream reaches as three to five riffle-run-pool complexes to characterize in-stream habitat. We derived electrofishing estimates using the multispecies capture probability model described by Mollenhauer et al. (2017) and used them as the sample size for the weighted means and standard deviations. We calculated the weighted summary statistics using the package Hmisc (Harrell 2018) in the statistical software R (R Core Team 2018). NA indicates reaches that were not included in the calculations because electrofishing counts were 0. Electrofishing counts and relative uncertainty for the abundance estimates are provided in Table S1.

The relatively higher and lower snorkeling efficiencies among sunfishes can be related to species traits (e.g., behavior and coloration) and habitat use. Both Longear Sunfish (Witt and Marzolf 1954; Bietz 1981) and Bluegill (Colgan et al. 1979; Dugatkin and Wilson 1992) are gregarious fishes with bright coloration often observed outside of cover, which would promote higher snorkeling efficiency (Dunham et al. 2009; Thurow et al. 2012). Conversely, sunfishes with lower snorkeling efficiency exhibited cryptic traits and habitat use. Both Green Sunfish (Werner and Hall 1977; Stuber et al. 1982) and Warmouth (McMahon et al. 1984) tend to occupy shallow, heavily vegetated areas and have cryptic coloration. Rock Bass also tend to occupy dense instream cover and have cryptic coloration (Casterlin and Reynolds 1979; Probst et al. 1984; Grossman et al. 1995). We also commonly observed Rock Bass using interstitial spaces and displaying skittish behavior. Other researchers have associated cryptic traits and habitat use to lower stream-fish snorkeling efficiency and capture probability (e.g., Bozek and Rahel 1991; Korman et al. 2010; Macnaughton et al. 2014). False absences among sunfishes when using a single snorkel pass were associated with both lower fish abundance and cryptic traits. We detected a species using snorkeling on only 2 of 10 species-reach observations when estimated abundance was < 10 (Redear Sunfish at Flint Creek and Rock Bass at Caney Creek; Table 1).
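A simple way to see why false absences concentrate at low abundance and low per-individual sighting probability is to treat each individual as sighted independently on a single pass (an idealization that schooling and cryptic behavior will violate). Under that assumption, the chance of missing every individual is (1 - q)^N; the per-individual probability used below is only an illustrative value in the range of the mean efficiencies reported here.

```python
def p_all_missed(q_individual: float, n_fish: int) -> float:
    """Probability that a single snorkel pass records no individuals at all,
    assuming each of n_fish individuals is sighted independently with
    probability q_individual (an idealization; schooling and cryptic
    behavior violate independence)."""
    return (1.0 - q_individual) ** n_fish

# With a per-individual sighting probability of 0.03 (similar to the mean
# efficiencies reported for Green Sunfish and Warmouth), even a reach holding
# 50 fish yields a false absence on roughly one pass in five:
print(round(p_all_missed(0.03, 50), 2))   # ~0.22
```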
However, all false absences for Green Sunfish and Rock Bass and four of six false absences for Warmouth occurred at reaches with estimated abundances . 50. McManamay et al. (2014) also observed relationships between single-pass snorkeling detection and both densities and cryptic traits for warm-water stream fishes. We did not identify any obvious trends in snorkeling efficiency associated with water clarity and flow conditions across the ranges encountered during our sampling. For example, snorkeling efficiency between the reach sampled under the highest flows (Big Sugar Creek; 4.10 m 3 /s) and the reach sampled under the lowest flows (Evansville Creek2; 0.09 m 3 /s) was similar across sunfishes. Additionally, snorkeling efficiency was also similar at reaches sampled under both higher visibility (e.g., Flint Creek; horizontal water clarity of 5 m) and lower visibility (Fourteenmile Creek; horizontal water clarity of 3 m). However, we recognize that numerous interacting factors contribute to variation in sampling efficiency (thus, also underlying capture probability). In addition to visibility, in-stream structure (Mullner et al. 1998;Wildman and Neuman 2003), water depth (Schill and Griffith 1984;Brewer and Ellersieck 2011), underlying lithology (Ensign et al. 1995;Albanese et al. 2011), and fish densities (Hillman et al. 1992;Dunham et al. 2009) also can contribute to variable snorkeling capture probability. Given adequate data (e.g., sample size and variation in sampling conditions), hierarchical modeling can be used to estimate capture probability and abundance and species detection probabilities among species and across sites. Multispecies electrofishing capture probability models have been developed for warm-water streams (e.g., Price and Peterson 2010;Mollenhauer et al. 2017); however, common abundance estimators are not well suited for stream-fish snorkeling applications. For example, removal estimators are generally not feasible for snorkeling because physical capture of fishes is difficult (but see Dorazio et al. 2005;Jordan et al. 2008), and a secondary method is typically required to mark individuals for mark-recapture estimation (e.g., Brewer and Ellersieck 2011). Repeated counts (e.g., Royle 2004) and repeated sighting approaches (e.g., double observer; Royle et al. 2004;Koneff et al. 2008) can be used with abundance estimators that do not require physical capture; however, evaluations of their feasibility for stream-fish snorkeling has been extremely limited (but see Webster et al. 2008). Similarly, occupancy modeling (MacKenzie et al. 2006) could be used to account for imperfect snorkeling detection when assessing streamfish species occurrence (e.g., Hagler et al. 2011;McManamay et al. 2013;Fraley et al. 2017). Here, our objective was not to quantify or explicitly identify sources of variability related to snorkeling capture probability or detection, but rather to provide a preliminary assessment of snorkeling applications to monitor stream-dwelling sunfish populations on the basis of a relatively small number of stream reaches. The electrofishing abundance estimates provided a useful benchmark to compare the snorkel counts; however; there are always caveats associated with modeled data and their applications. For example, relative uncertainty around the abundance estimates varied (i.e., the point estimate was more reliable at some stream reaches than at others; Table S1, Supplemental Material). 
Thus, in some instances relative snorkeling efficiency may have been affected by a less accurate electrofishing estimate (see Mollenhauer et al. 2017 for a detailed discussion of model performance and limitations). Nevertheless, we feel that the reasonable level of uncertainty around the vast majority of the abundance estimates and the clear trends in snorkeling efficiency among sunfishes suggest that perfect knowledge of reach-to-reach abundance (impossible to achieve) would not change our major findings. Management implications Our findings indicated that snorkeling is applicable for monitoring sunfishes in warm-water streams using single-pass snorkel counts given variable capture probability and species detection are considered. Relying solely on standardized gears and methods when comparing abundances and community metrics (e.g., species diversity and richness) among sites can lead to misinterpreted ecological relationships and misinformed conservation and management decisions. The lower efficiency for Green Sunfish, Rock Bass, and Warmouth suggests that snorkeling is inappropriate for estimating population size for cryptic sunfishes even when using an abundance estimator. The consistently low efficiency associated with sampling these species would likely result in difficulty meeting model assumptions or levels of uncertainty that would make the population estimates uninformative. The detection of Redear Sunfish at a lowdensity reach using snorkeling, but not electrofishing, highlights the potential value of snorkeling as a secondary gear to establish the occurrence of rarer warm-water stream fishes. Although snorkeling is typically most often associated with cold-water stream-fish monitoring, other recent studies have also examined its applicability in clear, warm-water streams. For example, Brewer and Ellersieck (2011) evaluated snorkeling capture probability of age-0 Smallmouth Bass Micropterus dolomieu in Ozark Highland streams. Hain et al. (2016) compared snorkel counts with mark-recapture estimates for the stream-dwelling 'O'opu nākea Awaous guamensis in Hawaii. Stream-fish scientists likely underappreciate the potential of snorkeling as a noninvasive sampling method to monitor warm-water assemblages, and widespread applicability across stream fishes and systems remains relatively unexplored. Identifying limitations (e.g., consistently low capture probability) and providing insight into potential bias (e.g., relationships with species traits and habitat use or prevailing environmental conditions) are important aspects of fish-sampling gear evaluations to prioritize future research directions. We encourage continued research into the applicability of snorkeling to estimate warm-water stream-fish abundance. Supplemental Material Please note: The Journal of Fish and Wildlife Management is not responsible for the content or functionality of any supplemental material. Queries should be directed to the corresponding author for the article. Table S1. Two-pass tow-barge electrofishing abundance estimates and associated 95% confidence intervals (CI) for six sunfishes (Bluegill Lepomis macrochirus, Green Sunfish Lepomis cyanellus, Longear Sunfish Lepomis megalotis, Redear Sunfish Lepomis microlophus, Rock Bass Ambloplites rupestris, and Warmouth Lepomis gulosus) at 20 stream reaches 0.5-1 km in length in the Ozarks Highland ecoregion of northeast Oklahoma and southwest Missouri sampled from summer to early autumn 2014-2015.
Electron acceleration in a nonrelativistic shock with very high Alfv\'en Mach number Electron acceleration associated with various plasma kinetic instabilities in a nonrelativistic, very-high-Alfv\'en Mach-number ($M_A \sim 45$) shock is revealed by means of a two-dimensional fully kinetic PIC simulation. Electromagnetic (ion Weibel) and electrostatic (ion-acoustic and Buneman) instabilities are strongly activated at the same time in different regions of the two-dimensional shock structure. Relativistic electrons are quickly produced predominantly by the shock surfing mechanism with the Buneman instability at the leading edge of the foot. The energy spectrum has a high-energy tail exceeding the upstream ion kinetic energy accompanying the main thermal population. This gives a favorable condition for the ion acoustic instability at the shock front, which in turn results in additional energization. The large-amplitude ion Weibel instability generates current sheets in the foot, implying another dissipation mechanism via magnetic reconnection in a three-dimensional shock structure in the very-high-$M_A$ regime. Collisionless shocks provide us great opportunities to explore nonlinear dynamics in strongly inhomogeneous plasmas. Dynamics therein result in excitation of various types of electrostatic and electromagnetic waves and associated plasma heating and acceleration. Extreme circumstances encountered in such situations can be realized in astrophysical phenomena, such as supernova remnant (SNR) shocks where the plasma kinetic energy overwhelms other magnetic and plasma internal energies. SNR shocks have been thought to be a generator of cosmic rays, and exploring nonlinear dynamics in extreme circumstances therefore clarifies how charged particles are accelerated to relativistic energies out of the thermal counterpart. The magnetized collisionless shock is characterized by the Alfvén Mach number M A , which is the ratio of the flow speed V 0 to Alfvén speed V A in the upstream. When the Alfvén Mach number exceeds a critical value (∼ 3), the plasma cannot be fully dissipated at the shock, and additional dissipation is compensated by the ion specularly reflected by the shock front [1,2]. In particular, in very-high-M A shocks, the ion can provide free energy for various plasma kinetic instabilities [3][4][5]. SNR shocks are indeed such cases of very high M A . Remote imaging of SNR shocks has provided rich information of fine-scale structures in which the presence of relativistic electrons has been evidenced [6]. It is only recently that a high-M A shock was directly measured with relativistic electrons accelerated in the vicinity of the Kronian bow shock [7]. Laboratory experiments involving a high-power laser facility provide other opportunities of exploring high-Mach-number shocks [8][9][10]. Although such experimental studies reveal the macroscopic nature of high-Mach-number shocks, they still lack detailed information on electric and magnetic fields, and associated mechanisms of particle acceleration. While numerical simulation is an alternative way of exploring these extreme environments, examining nonrelativistic, high-M A shocks is still computationally challenging. This is because of a strong dependence of CPU time on the ion-to-electron mass ratio (M/m), which increases as (M/m) 3(2.5) in a three-(two-)dimensional fully kinetic simulation of a collisionless shock. This scaling has limited discussions either with small mass ratios or in moderate M A shocks. 
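Two of the quantities discussed above are easy to make concrete: the upstream Alfvén speed that sets M_A = V_0/V_A, and the steep growth of the computational cost with ion-to-electron mass ratio. The sketch below uses the standard Gaussian-units expression for V_A and the (M/m)^2.5 (2D) and (M/m)^3 (3D) scalings quoted in the text; the example field strength and density are arbitrary illustrative values, not the simulation parameters.

```python
import math

def alfven_speed_cm_s(b_gauss: float, n_cm3: float, mass_ratio: float = 1836.0) -> float:
    """Upstream Alfven speed V_A = B / sqrt(4 pi n m_i) in Gaussian units,
    for a hydrogen plasma with ion mass m_i = mass_ratio * m_e."""
    m_e = 9.109e-28  # electron mass in g
    return b_gauss / math.sqrt(4.0 * math.pi * n_cm3 * mass_ratio * m_e)

def relative_pic_cost(mass_ratio: float, dims: int = 2) -> float:
    """Relative CPU cost of a fully kinetic shock simulation versus mass ratio,
    using the (M/m)^2.5 (2D) and (M/m)^3 (3D) scalings quoted in the text."""
    return mass_ratio ** (2.5 if dims == 2 else 3.0)

# Illustrative upstream values (arbitrary): B = 10 microgauss, n = 1 cm^-3
v_a = alfven_speed_cm_s(1.0e-5, 1.0)
print(f"V_A ~ {v_a / 1e5:.1f} km/s; M_A = 45 would then imply V_0 ~ {45 * v_a / 1e5:.0f} km/s")
print(f"2D cost of the real mass ratio vs M/m = 100: "
      f"{relative_pic_cost(1836.0) / relative_pic_cost(100.0):.0f}x")
```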
Nonetheless, numerical experiments have revealed part of their signatures. One-dimensional simulation studies have proposed an efficient electron acceleration mechanism in the high-M A regime, which can be an agent of the diffusive shock acceleration in SNR shocks [11,12]. The process -the electron shock surfing acceleration (SSA) -is re-alized with an electrostatic field via the Buneman instability and the motional electric field at the leading edge of the foot. The accelerated electrons are trapped/reflected by largeamplitude wave electric fields, in contrast to the classical shock drift acceleration where the compressed magnetic field plays the role [13], and are energized much more efficiently. It has been a controversial issue whether the mechanism operates efficiently in multidimensional shock structures [14][15][16]. Investigation of nonlinear saturation levels of the Buneman instability in multiple dimensions [17] led to the condition for the effective electron SSA [18]. There is therefore a need for very-high-M A shock studies with more physically important mass ratios. In this Letter, we report results from a fully kinetic particle-in-cell (PIC) simulation of such a very-high-M A shock. We found that various types of electrostatic and electromagnetic instabilities are strongly activated at the same time, but in different regions of the two-dimensional shock structure. Electron energization associated with the instabilities is discussed. We use a two-dimensional PIC code to examine a shock evolution. The code implements a second-order (spline) shape function with a charge conservation scheme [19] to inhibit "low- The spatial profile of the ion number density at Ω gi T = 4 is shown in Fig. 1(a). From the upstream (right) to the downstream (left) regions, there is a transition between X = 60λ i and X = 47λ i (foot). This transition region is followed by a rapid increase in the number density (ramp), a peak of the value (overshoot), and a recovery to the downstream value (∼ 4N 0 ) in X < 42λ i . While this overall signature is essentially the same as signatures found in supercritical collisionless shocks [2], the magnitude of the overshoot value reaches N i = 25N 0 . A long-wavelength mode (m = 1) as seen in the density inhomogeneity along the shock front has been similarly found in the two-dimensional kinetic hybrid simulations [20]. Structures of different scales and modes are found in these regions. Figure 1 has already formed in the foot (50.0λ i ≤ X ≤ 53.3λ i ) before reaching the shock front (blue line in Fig. 2(b)). In the downstream (43.3λ i ≤ X ≤ 45.0λ i , red line in Fig. 2(b)), the maximum energy reached γ ∼ 12, which is 2. The wave vector is almost orthogonal to the electrostatic mode at the leading edge of the foot in Fig. 1(b). The amplitude is very large as compared with the upstream magnetic field B 0 . It varies from 5 B 0 to 10 B 0 within the ion inertia scale around X = 47λ i , resulting in self-generated current sheets. The Fourier power spectrum of B z in the foot region in Fig. 3(a) shows that the mode with the wavelength of the ion inertia length (|k|λ i ∼ 2π) is dominant and the wave vector is tilted from the x axis, which are features closely related to the motion of the reflected ion. Figure 3 motion in the velocity space. Thus, the velocity distribution function is highly anisotropic. This situation is subject to the ion-beam Weibel instability [22]. Indeed, the wave vector in Fig. 1(c) and Fig. 
3(a) is almost perpendicular to the direction of the anisotropy in the velocity space. The observed ion-scale electromagnetic mode corresponds to the fastest growing mode of the instability [23]. There exist large density gradients at the ramp and behind the overshoot in the present high-M A shock. The strong plasma inhomogeneity permits a kind of drift wave to grow along the shock surface with an amplitude of |E| ∼ 5B 0 (25E 0 ) as shown in Fig. 1(d). Figure 4(a) shows the power spectrum of E y in the y direction averaged over the region behind the overshoot (43.3λ i ≤ X ≤ 45.0λ i ). The excited strong electrostatic wave is powered at where λ e is the electron inertia length in this region. The region consists of the relativistically hot (T e ∼ mc 2 ) electron preheated by the Buneman instability at the leading edge of the foot (Fig. 4(b)), and the transmitted and reflected ions ( Fig. 4(b) and 4(c)). The electron drift motion is in the +y direction with a speed of v d = 0.1c. Although the temperature ratio T e /T i is not large, the non-Maxwell ion distribution and the background electron temperature gradient relax the threshold of the ion acoustic instability even for the case with T i ∼ T e [3,24]. The present configuration with the out-of-plane magnetic field component limits possibilities of other types of kinetic instability. In particular, the strong magnetic field com-pression at the overshoot would be free energy for the ion cyclotron instability owing to the anisotropy of the ion temperature [25]. The resultant large-amplitude electromagnetic fields would modify the present coherent shock front structure and work as a scattering body for the preaccelerated electron [26]. The unprecedentedly high-M A PIC simulation enabled us to confirm the theoretical prediction Eq. (1) for the first time with a large mass ratio sufficient to separate ion and electron dynamics. Furthermore, the introduction of multidimensionality provides new insights into nonlinear shock dynamics, in which various kinetic instabilities are activated at the same time and competing with each other . The anisotropy of the ion distribution function in the foot destabilized the ion-beam Weibel instability that generates current sheets. This implies that the magnetic reconnection, which cannot be realized in the present two-dimensional configuration, can be another dissipation mechanism in shocks with much higher M A in three-dimensional space. The strongly inhomogeneous plasma around the overshoot introduces free energy for the drift instability along the shock surface. The characteristics suggest growth of the ion acoustic (IA) instability, while a number of instabilities have resulted from linear kinetic theories [3,27]. However, the instability works only for complementary heating of electrons, since they are already relativistically hot (T ∼ mc 2 ≫ E 2 IA /8πN 0 ); electrons are substantially heated at the leading edge of the foot by the Buneman instability rather than at the shock front. The electrostatic field with large amplitude at the leading edge also allows efficient electron SSA. The resultant distribution of electron energy has a high-energy tail exceeding the upstream ion kinetic energy, suggesting that the electron SSA is a robust preacceleration mechanism that seeds the electron diffusive shock acceleration in young SNR shocks. ACKNOWLEDGMENTS This work was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (Startup) 23840047. 
Numerical computations were conducted using the Fujitsu PRIMEHPC FX10
2013-10-31T02:36:56.000Z
2013-10-31T00:00:00.000
{ "year": 2013, "sha1": "085e2e9746d3d7b5060c166ce1a20dced4b1aabe", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "085e2e9746d3d7b5060c166ce1a20dced4b1aabe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
59434813
pes2o/s2orc
v3-fos-license
Dynamic modeling and control of flexible manipulators: a review In this paper a survey of flexible robotic manipulators has been carried out. The significance of flexible mechanical arms is considered compared with traditional robot arms .The applications of flexible mechanical arm are introduced. Papers are classified according to flexible manipulator modeling technology and flexible manipulators control method. Introduction With the completion of the International Space Station and exploration of deep space, advanced aerospace technology is acquired. Mankind completed the first on track operation in 1973. Solar panels on the Sky lab space station were repaired successfully through the extravehicular activities [1]. However, astronauts cannot perform on-orbit operations accurately and effectively, restricted by physical conditions and the environment. Nowadays scientists are pursuing applications of space manipulator to complete a large number of difficult and dangerous tasks. Significant features of space maneuvering arm are large load capacity and high control requirements. Multi-degree freedom flexible mechanical arm has become popular considering the launch costs, space energy constraints, and space operations [2]. As a result, robotic arms are widely used in dangerous, monotonous and tedious repetitive applications. However, most robotic arms are traditional rigid manipulators. They are mostly designed from heavy-duty materials and built in a manner to maximize stiffness in an attempt to minimize the vibration of end-effectors to achieve desired position accuracy [3]. Typical examples of flexible structural systems are: large-size solar panels, light-arm robots, large parabolic expansion antennas, satellite whip antennas, and high-sensitivity radio telescope reflectors [4]. Flexible manipulators can improve production efficiency, reduce operating time and decrease consumption of resources. Elastic deformation and vibration are unavoidable in the operation of flexible manipulators. Elastic deformation and vibration of flexible mechanical arm mainly occurs in joints and arms. The production and transmission power, location perception, mechanical connection are three tasks of joints, which is the key of the movement ability, movement precision, sport stability and motion safety of the manipulator. Motor, transmission, sports shaft and sensors are main components of joints. The joint flexibility is mainly determined by the two series of flexible components, torque sensor and harmonic reducer. Robotic arm is often made of lightweight materials, such as carbon fibers, which have the characteristics of low density, high rigidity and high strength [2]. Therefore, joint flexibility and arm flexibility must be considered in the dynamic modeling of manipulators, to ensure the control accuracy and system reliability, reduce the elastic vibration and decrease the life span of flexible mechanical arms. Meanwhile, the space flexible manipulator and its rigid carrier form a set of typical rigid and soft fit multi-body system. It is necessary to research dynamics of the flexible multi-body system. In recent years, many scholars have done a lot of work on the modeling theory, calculation methods and experimental research of flexible multi-body systems. However, mechanisms of motion and elastic deformation between the rigid and soft in a large range are not fully understood [4]. 
In summary, the study of flexible arms can have a significant impact on the field of aerospace and can also improve the efficiency of industrial production. Research Status of Multi-body System Dynamics Multi-body system dynamics includes multi-rigid-body dynamics and flexible multi-body system dynamics. Multi-rigid-body system dynamics has been studied by many scholars and its theory is well established. The first multi-body dynamics monograph, written by Wittenburg [5], illustrates the kinematics and dynamics of multi-rigid-body systems very well. Graph theory was introduced into multi-body system dynamics, laying a solid foundation for later research in related fields. Kane [6] proposed Kane's method based on the analysis and comparison of various dynamical principles, replacing generalized coordinates with generalized speeds and combining the advantages of vector mechanics and analytical mechanics. Haug [7] proposed a computer-oriented Cartesian coordinate modeling method, which has been applied commercially. Since then, scholars such as Roberson [8], Nikravesh [9], Schiehlen [10] and Huston [11] have made important contributions to this area. Through this in-depth study, the theory of multi-rigid-body dynamics has matured and has produced many commercial software packages. In the field of flexible multi-body system dynamics, the kineto-elastodynamics (KED) method [12] was the first proposed to solve such problems. In KED, the elastic deformation is obtained by finite element analysis, but the interaction between large-scale rigid motion and structural elastic deformation is not considered. Subsequently, the floating coordinate method was proposed, which combines multi-rigid-body dynamics with structural dynamics. It takes full advantage of modal techniques and offers good computational efficiency and accuracy for small deformations and low-speed, large-range motion, and it is the most widely used method in flexible multi-body system modeling [13]. In 1996, Shabana [14] proposed the absolute nodal coordinate method based on finite element and continuum mechanics theory. Its modeling process requires neither small-deformation nor local-coordinate-system assumptions, so it can faithfully reflect the dynamic behavior of large deformations [2]. Berzeri proposed simplified elasticity models for one-dimensional beams based on different assumptions and compared them. Omar and Shabana proposed a planar shear-deformable beam element based on continuum mechanics theory, in which the inconsistency between bending strain and axial strain leads to shear locking. This problem can be solved by redefining the bending strain using the local tangent coordinate system of the element [15]. Multi-body system dynamics modeling The multi-body system dynamics model is the basis of dynamic analysis. The form of the kinetic equations depends on the dynamic modeling method chosen. The Newton-Euler method, the Lagrange method and Kane's method are the most widely used. The Newton-Euler method has a clear physical meaning, but the number of equations is large, which leads to low computational efficiency. The Lagrange method, based on kinetic and potential energy, is suitable for relatively simple flexible multi-body systems and avoids the appearance of internal forces in the equations.
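As a minimal illustration of the energy-based Lagrange route just described, the equation of motion of a single rigid link swinging about its joint can be derived symbolically. The link is treated as a uniform rod, flexibility is ignored, and the symbols are generic rather than taken from any specific manipulator.

```python
import sympy as sp

# Lagrange's equation for one rigid link (uniform rod of mass m, length l)
# rotating about its joint under gravity g -- the kinetic/potential-energy route.
t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
q = sp.Function("q")(t)                      # joint angle measured from the vertical
qd = sp.diff(q, t)

T = sp.Rational(1, 6) * m * l**2 * qd**2     # kinetic energy, inertia about the joint I = m*l^2/3
V = -m * g * (l / 2) * sp.cos(q)             # potential energy of the centre of mass
L = T - V

# d/dt(dL/dq') - dL/dq = tau; with tau = 0 this is the free equation of motion
eom = sp.simplify(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q))
print(eom)   # expected form: m*l**2*q''/3 + m*g*l*sin(q)/2
```

No internal joint forces appear in the result, which is exactly the property of the Lagrange formulation noted above; extending the same procedure to a flexible link would add modal coordinates to q and elastic strain energy to V.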
Angular velocity and partial velocity are introduced by Kane's method, instead of the generalized rate of the generalized coordinates. It avoids the differential operation and is more suitable for the automatic derivation of the application computer [16]. In addition to the above three classical methods, the dynamic modeling method of the multi-body system also includes an optimization algorithm based on the principle of Gauss extreme value. Hamilton principle transmits the traditional Euclidean geometry to the symplectic geometry, the multi -body dynamics equation are solved by the symplectic mathematical framework [17]. Discrete Method In the flexible multi-body system modeling, the discrete method can be divided into two categories: the discretization of the physical model and the discretization of the mathematical model. The purpose of the former is to discrete the actual engineering flexible multi-body system into physical model. The latter mainly chooses the appropriate expression to describe the flexible deformation of the object in the flexible multi-body system [18]. (1) Rayleigh-Ritz method On the basis of satisfying the displacement compatibility condition and the complete condition, a hypothetical displacement field function is constructed, and the modal analysis vector and the corresponding modal coordinates are used to describe the displacement of the object in time. The method has high computational efficiency. (2) Finite element method The objects are divided into many simple shapes such as lines, triangular elements, tetrahedral elements, etc. The units are connected at the nodes, and the damping and stiffness of each unit are equally transplanted to the nodes, such as external loads, etc. Finite element method is suitable for dealing with complex boundary, shape and complex load under the problem. ( 3) Modal analysis The modal coordinate is used to describe the change of the component with time, and the modal synthesis and mode truncation are carried out to reduce the size of the solution. It can be used to consider the range of modal truncation according to the system prior parameters. The computational complexity is relatively small. However, this method cannot describe the rigid body motion in the system accurately, so it is not suitable for solving the dynamic problem with large rigid body displacement [16]. Difficulties of flexible manipulator control Because of its light-weight and long arms, a series of problems will arise in the actual movement, it is difficult to make sure the accuracy and efficiency of the flexible robot arm control. At the same time, the flexible manipulator needs to track its trajectory during operation. In addition, unlike the rigid robotic arm, movements of the flexible manipulator and the platform are coupled to each other, which leads to higher control errors. Scholars have made a lot of research on the problem of flexible manipulator control, and proposed many mature control methods 3.2 Classic control method (1). PID control In the classical control method, PID feedback control methods are widely used . PID parameters can be adjusted according to the external disturbance and the system's own kinetic parameters to achieve the effect of a more fine PID control. (2) Computational torque method Computational torque method is a dynamic control method considering the dynamic model of the manipulator. It is also the most important and most widely used method in the tracking control of the manipulator. 
The basic idea is to introduce a model-based nonlinear compensation in the internal control loop, which makes the nonlinear coupling robot system realizing global linearization and decoupling, and then use the classical PD control to control the linear stationary system after decoupling [19]. Modern control method (1). Adaptive control method Adaptive control can adjust the parameters of the controller to adapt to the state change according to system process status data measured in real time. Lin Lih-Chang [20] and YehSy-Lin [21] have designed an adaptive control rule to identify the uncertainty parameters. (2).Robust control method Because the flexible manipulator model is discredited from the infinite dimension to the finite dynamic model, there are obvious uncertainties in the dynamic equation. Many scholars use the robust control method to solve this problem [22]. Jong-Guk [23] designed a recursive robust controller to eliminate the effects of uncertainties. (3).Optimal control method In the problems of the active control of the flexible manipulator, the optimal control quantity can be solved by using the optimal control theory in order to maximize the vibration suppression effect [24]. (4).Other control methods The singular value perturbation control [25] divides the robotic arm system into fast-changing subsystems and slow-changing subsystem. Sliding mode control [26] has a good adaptability in a wide range of applications through the design of the sliding surface to switch the state of arm. Conclusions This review of flexible robotic manipulators indicates that in the field of automation and manufacturing, the dynamic analysis and control of flexible manipulators is an important research area. A large number of researches have been done to improve efficiency, accuracy and reliability. A series of problems elicited by flexible manipulators are also worth study, such as inhibition of flexible mechanical arm elastic vibration and deformation, rigid and flexible co-system system, flexible mechanical arm dynamics modeling. As to authenticate the theoretical modeling, more experiments are needed.
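A minimal numerical sketch of the computed-torque idea described above, applied to a single rigid joint: the model-based term cancels the gravity nonlinearity, and a PD loop acts on the remaining linear error dynamics. The inertia, gains and reference trajectory are arbitrary illustrative values, and joint or link flexibility is not modeled.

```python
import numpy as np

# Single-link arm: I*qdd + m*g*l*sin(q) = tau   (illustrative parameters)
I, m, g, l = 0.5, 1.0, 9.81, 0.4
Kp, Kd = 100.0, 20.0                      # PD gains of the outer linear loop

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """Model-based nonlinear compensation plus PD control on the tracking error."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd * ed + Kp * e        # desired acceleration of the linearized system
    return I * v + m * g * l * np.sin(q)  # cancel gravity, scale by inertia

dt, T = 1e-3, 3.0
q, qd = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
    qdd = (tau - m * g * l * np.sin(q)) / I                     # plant dynamics
    qd += qdd * dt
    q  += qd * dt
print(f"final tracking error: {abs(np.sin(T) - q):.2e} rad")
```

With an exact model the closed-loop error obeys e'' + Kd e' + Kp e = 0 and decays regardless of the reference; model mismatch and unmodeled flexibility are what the adaptive, robust and sliding-mode schemes surveyed above are meant to handle.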
2018-12-31T17:10:48.169Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "b54d634bb8d3354fecb28e0be19acd0623944d6d", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/25878989.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b54d634bb8d3354fecb28e0be19acd0623944d6d", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
254320248
pes2o/s2orc
v3-fos-license
Looking Back to Look Forward: What to Expect in a Redo Surgery for a Bioprosthesis Replacement Redo surgeries are becoming more common because of an increased rate of bioprosthesis implantation. We performed a retrospective study on patients who underwent redo replacement of an aortic and/or mitral bioprosthesis between 2005 and 2018 to evaluate intra-hospital mortality and morbidity. Univariate analysis was performed on the propensity score variables to determine predictors of mortality. A total of 180 patients were enrolled in the study: Group A (replacement of aortic bioprosthesis) with 136 patients (75.56%) and group B (replacement of mitral bioprosthesis ± aortic bioprosthesis) with 44 patients (24.44%). NYHA class ≥ 3 and female sex were significantly more common in group B. Cardiopulmonary-bypass time and aortic cross-clamping time in group A and group B were, respectively, 154.95 ± 74.35 and 190.25 ± 77.44 (p = 0.0005) and 115.99 ± 53.54 and 144.91 ± 52.53 (p = 0.0004). Overall mortality was 8.89%. After propensity score adjustment, Group B was confirmed to have an increased risk of death (OR 3.32 CI 95% 1.02–10.88 p < 0.0001), gastrointestinal complications (OR 7.784 CI 95% 1.005–60.282 p < 0.0002) and pulmonary complications (OR 2.381 CI 95% 1.038–5.46 p < 0.0001). At the univariate analysis, endocarditis, cardiopulmonary-bypass and aortic cross clamping time, NYHA class ≥ 3 and urgency setting were significantly associated to death. Intra-hospital outcomes were acceptable regarding mortality and complications. Patients who need redo surgery on mitral bioprosthesis have an increased risk of post-operative pulmonary and gastrointestinal complications and mortality. Therefore the choice of mitral bioprosthesis at time of first surgery should be carefully evaluated. Introduction In cardiac valve surgery, the most commonly used prostheses to replace patients' diseased valves are biological ones, as opposed to mechanical ones [1,2]. Indeed, bioprostheses have several advantages: first of all, life-time anticoagulant therapy is usually deemed not necessary, even if patients might have the indications to take the anticoagulant drugs for a short period [3]. Secondly, bioprostheses are not noisy, which means that they cause less discomfort to patients. Nevertheless, there are also disadvantages linked to the use of biological valves: firstly, a smaller effective valve orifice and, secondly, the structural degeneration of the prosthesis [4]. This latter mentioned disadvantage is unavoidable at the present time and determines the need to undergo a reoperation [5,6]. The main reason for the rise in valvular reinterventions or redo surgeries is that a growing number of biological prostheses are being implanted in young patients [7], partly due to the development of percutaneous surgeries in recent years, which may allow a future valve-in-valve procedure [8]. Indeed, biological prostheses can also be recommended for patients younger than 50 years old [9,10]. Percutaneous techniques for treating failing valvular bioprostheses are developing more and more but remain an alternative to surgery in the medium-high surgical risk group only for the aortic valve, with the TAVI technique, and for the high surgical risks associated with the more complex mitral valve. A further unknown of transcatheter valve implantation techniques is the durability of the valve bioprostheses [11] and the consequent risk in explanting a TAVR [12]. 
Nevertheless, redo surgery has higher mortality and morbidity when compared to first surgery [13,14]. The aim of this study is to analyze the immediate post-operative outcomes (survival and main complications) of patients who undergo redo cardiac surgery on a previously implanted bioprothesis through the assessment of a group of patients subjected to the above-mentioned surgical operation. Materials and Methods We performed a retrospective monocentric study on patients who underwent replacement of a bioprosthesis in the aortic and/or mitral position between 2005 and 2018. The study was approved by the local ethical committee (n. R1480/21-CCM 1554) with the need for consent waived given the retrospective nature of the study. Data are available upon request. Inclusion criteria included previous surgery with implantation of biological prosthesis in aortic and/or mitral positions (in the case of double replacement, both valves were, in all cases, replaced with bioprostheses at the time of first surgery) and indication to undergo redo surgery because of malfunctioning of the valve. Exclusion criteria included only being under age. Intraoperative data were obtained retrospectively and stored in a database. The biological prosthesis dysfunction definition has been reviewed over the years. Aetiology for redo surgery was either endocarditis, paravalvular leak or structural valve deterioration (SVD). Our patients who underwent redo surgery because of SVD were in stage 3 of the definition proposed by Dvir Danny et al. [4]. Pulmonary complications were defined as pleural effusion and/or pneumothorax needing tube placement, pneumonia, prolonged mechanical ventilation (>48 h) and acute pulmonary insufficiency (P/F < 100). Gastrointestinal complications were defined as intestinal ischemia or perforation. Diagnostic Work-Up and Surgery In the case of elective surgery, all patients underwent echocardiographic studies to evaluate and define the aetiology of the bioprosthesis disease. In the case of endocarditis, an antibiotic therapy was also initiated. Moreover, a CT scan was performed to study adherences and the sternal relationship with the heart. In contrast, in urgent cases, once the correct diagnosis was obtained, the CT scan might have not been performed, depending on the clinical status of the patient. The surgery was carried out through re-sternotomy (only one patient underwent thoracotomy for replacement of mitral bioprosthesis) and cardiopulmonary bypass (CPB) was instituted either centrally or peripherally, depending on mediastinal adherences. After aortic cross clamp, the left atrium and/or aorta were opened to examine the bioprosthesis and confirm the indication (SVD, endocarditis or paravalvular leak). Subsequently, the bioprosthesis was removed and a new prosthesis was implanted. In the case of replacement of mitral and aortic bioprostheses, the mitral bioprosthesis was implanted before the aortic one. The choice of the new type of prosthesis (biological or mechanical) was discussed pre-operatively with the patient and decided upon depending on the age, comorbidities and risk of another surgery. Statistical Analysis Continuous variables were expressed as mean ± standard deviation, for normally distributed variables, as medians and quartiles (25-75%) for continuous variables not normally distributed and as numbers (percentages) for categorical variables. 
To identify differences between the two groups in terms of mean, median, or percentage, t-test, Wilcoxon's test, Fisher's exact test, and χ 2 were used. The multivariate logistic model was implemented to assess whether the group was a predictor of the individual endpoints (exitus, gastrointestinal complications and pulmonary complications), after adjustment for propensity score. The propensity score was estimated running a logistic model including these characteristics: preoperative ECG, NYHA class, etiology, and endocarditis; these were chosen through an epidemiological approach (i.e., those factors that in the clinician's experience can be confounders). Finally, a univariate analysis was performed on the propensity score variables to determine predictors of mortality (as total intra-hospital death). A p-value < 0.05 was considered significant. All analyses were performed using SAS 9.4 software. Pre Operative Results A total of 180 patients underwent redo surgery between 2005 and 2018, among 8500 who underwent cardiac surgery. Patients were divided in two groups: Group A and Group B. Group A included 136 (75.56%) cases who underwent replacement of an aortic bioprosthesis. Group B included 44 (24.44%) patients who underwent replacement of a mitral valve bioprosthesis only (30 patients) or of both mitral and aortic valve bioprostheses (14 patients). Pre-operative characteristics are reported in Table 1. Table 1). The mean telediastolic diameter was in range of normality in both groups. Aetiology for redo surgery is described in Table 1. Out of 180 patients, 41 (22.78%) underwent surgery because of endocarditis, 125 (69.44%) had a bioprosthesis degeneration and 11 (6.11%) had a paravalvular leak. We did not observe any statistical differences between the two groups. Intra Operative Results Intraoperative features taken into account for this study are listed in Table 2. The number of surgeries performed in an emergency setting were significantly higher in group A (19 patients, 13.96%), than in group B (1 patient, 2.27%) (p = 0.03). Only in 11 patients (6.11%) was cardiopulmonary bypass instituted through femoral vessels. In all other cases, a central cannulation was preferred. Clamping time was also statistically different between the two groups: 115.99 ± 53.54 min in group A versus 144.91 ± 52.53 in group B (p = 0.0004). Lastly, cardiopulmonary bypass (CPB) time was observed to be higher in group B (190.25 ± 77.44) than in group A (154.95 ± 74.35) (p = 0.0005). Of the whole considered population, 30.56% underwent concomitant procedures (including tricuspid valve repair and/or aorto-coronary bypass) but no difference between the two groups was noticed. Post-Operative Results Overall, 16 patients (8.89%) out of 180 died after surgery, 8 in group B (18.18%) and 8 in group A (5.88%). Hence, the mortality in group B was statistically higher than in group A (p = 0.0001). Among causes of death, seven patients (43.75%) died because of multiorgan failure, one patient (6.25%) because of intestinal ischemia, one patient (6.25%) because of intractable haemorrhage in the operating room and seven patients (43.75%) because of intractable cardiac failure. Moreover, among the 16 deceased patients, 12 (75%) underwent surgery because of endocarditis and 6 (37.56%) were operated on in an urgency setting. Anyway, even without including in the analysis patients who had endocarditis (37, 20.5%), mortality was similar. 
Indeed, on a total of 143 patients, there were 9 deaths (6.29%), in group A 106 with 5 deaths (4.71%) and in group B 37 patients with 4 deaths (10.81%). Post-operative complications are listed in Table 3. Pulmonary complications affected 45 patients (25%) in total; group B reported a higher percentage of these complications (38.64%) than group A (20.59%) (p = 0.0001). GI complications were also higher in group B: 6.82% vs. 1.47% (p = 0.0002). After the propensity score adjustment, it was confirmed that patients in group B had a significantly higher risk of mortality, gastrointestinal and pulmonary complications (Table 4). A univariate analysis was then performed to evaluate potential risk factors for mortality in our whole population. All analyzed variables are listed in Table 5. Discussion In the last few years of valvular surgery, bioprostheses are being implanted more commonly. Therefore, surgeons are facing many redo surgeries to replace a failing biological prosthesis. In order to evaluate the impact of REDO surgery for replacement of biological prosthesis on intra-hospital outcomes, we performed a retrospective study on patients who underwent replacement of aortic and/or mitral biological prosthesis. The aetiologies taken into account were either SVD, endocarditis or paravalvular leak. The overall mortality was 8.89%. Moreover, the results showed that replacement of mitral bioprosthesis was an independent risk factor for death, gastrointestinal and pulmonary complications. Our mortality was in line with already published data, being between 7.3% and 10.9% [15][16][17][18][19]. Among predictors of mortality, our univariate analysis found that CPB time, aortic cross-clamping time, NYHA ≥ 3, urgency setting and endocarditis were significant predictive factors. Sex was not a significant predictor, which is in line with literature, which shows conflicting results. Indeed, Vogt et al. [20] and Pansini et al. [18] found a higher mortality in females, while Akins et al. [17] reported an increased mortality in males. Longer CPB time and aortic cross-clamping times are known risk factors for worse surgical outcomes, as are urgency and endocarditis. Indeed, 43.7% of exitus was present in patients who underwent surgery for endocarditis, which is also in line with previously published studies [17,19]. Moreover, a higher NYHA class is known to be a risk factor for mortality [7,19,20]. Of interest, replacement of mitral bioprosthesis was a risk factor for death, gastrointestinal and pulmonary complications. A similar result was reported by Jones et al. and Lytle et al. [13][14][15][16][17][18][19], whose studies demonstrated that patients undergoing mitral valve replacement have a higher risk of mortality than patients undergoing aortic valve replacement, mainly due to post-operative acute myocardial infarction, rupture of the left ventricle and arrhythmias. Nevertheless, the causes of death in our patients in group B were multiorgan failure and cardiogenic shock. The increased mortality in patients who underwent a replacement of a mitral bioprosthesis might have both surgical and clinical reasons. First of all, surgical access to the mitral valve requires a deeper lysis of adherences and an increased manipulation of the heart. Furthermore, patients are usually more frail. Indeed, patients in group B were more prone to have an NYHA ≥ 3, indicating a worse underlying clinical state [21]. Replacement of a mitral valve bioprosthesis resulted in an increased risk of gastrointestinal complications. 
In the literature [17,[22][23][24], causes are mainly related to low cardiac output syndrome, post-operative arrhythmias (no difference in our pool of patients), use of noradrenaline and intra-aortic balloon pump (which had a higher incidence in group B in our study) and CPB time (significantly higher in group B). Moreover, Balsam et al. [21] showed a correlation between NYHA ≥ 3 and gastrointestinal complications, in line with our results. Pulmonary complications might also be related to the worse clinical picture of the patients in group B because of the underlying pathology. Mitral pathology already causes an altered lung function, and it could be exacerbated in a redo surgery. Our study shows that a redo surgery to change a biological prosthesis is not risk-free. Despite advent of valve-in-valve procedures, their use might not be indicated in all cases, such as endocarditis, risk of patient-prosthesis mismatch, high risk of embolization and left ventricle tract obstruction. Moreover, long term results are still lacking. Therefore, the role of the heart team during the first surgery becomes pivotal in assess, as much as possible, the possibility of permitting a future valve-in-valve procedure and the risk of a future re-intervention. The most important limitation of our study is its retrospective nature. Moreover, it covers a long span of time, therefore developments in surgical techniques have been made during it. It lacks a long term follow up, even though this was not the primary objective of the study, it could give a deeper insight to the results of the surgery. Conclusions In our experience, redo surgery for replacement of mitral bioprosthesis carries an increased risk of mortality and serious complications (gastrointestinal and pulmonary). Therefore, the choice of biological prosthesis at the time of first surgery must be carefully evaluated, and the anatomical criteria for a future percutaneous mitral valve-in-valve procedure might be assessed at the time of the first surgery, in order to assess the possibility of performing a minimally invasive treatment for a future prosthesis dysfunction. Nevertheless, the patients should be aware that, in case of a bioprosthesis dysfunction needing a traditional open surgery (which might anyway be the only option, especially in case of endocarditis), mitral bioprosthesis replacement carries an higher risk in terms of mortality, gastrointestinal and pulmonary complications when compared to other standard redo valvular surgery.
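The propensity-score adjustment described in the Methods (a logistic model of group membership on preoperative ECG, NYHA class, aetiology and endocarditis, whose predicted probability is then entered as a covariate in the outcome model) can be sketched as follows. This is only an illustration with synthetic data and assumed, simplified binary column names; the study itself was analyzed with SAS 9.4.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per patient, columns assumed for illustration.
# group = 1 for mitral +/- aortic redo (group B), 0 for aortic-only redo (group A).
rng = np.random.default_rng(0)
n = 180
df = pd.DataFrame({
    "group":        rng.integers(0, 2, n),
    "preop_ecg":    rng.integers(0, 2, n),   # simplified 0/1 coding of preoperative ECG
    "nyha_ge3":     rng.integers(0, 2, n),
    "aetiology_svd": rng.integers(0, 2, n),  # simplified 0/1 coding of aetiology
    "endocarditis": rng.integers(0, 2, n),
    "death":        rng.integers(0, 2, n),
})

# Step 1: propensity score from a logistic model of group membership
ps_exog = sm.add_constant(df[["preop_ecg", "nyha_ge3", "aetiology_svd", "endocarditis"]])
ps_model = sm.Logit(df["group"], ps_exog).fit(disp=0)
df["pscore"] = ps_model.predict()

# Step 2: outcome model -- is group a predictor of death after propensity adjustment?
out_model = sm.Logit(df["death"], sm.add_constant(df[["group", "pscore"]])).fit(disp=0)
odds_ratio = np.exp(out_model.params["group"])
ci_low, ci_high = np.exp(out_model.conf_int().loc["group"])
print(f"adjusted OR for group B vs A: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Because the data here are random, the printed odds ratio is meaningless; the sketch only shows the two-step structure of the adjustment reported in the paper.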
2022-12-07T19:01:59.114Z
2022-11-30T00:00:00.000
{ "year": 2022, "sha1": "293120c910d0602043dbb3763d89398a95fd9799", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/11/23/7104/pdf?version=1669791618", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e531c79553fb9e85533296d067b523eb17c70a6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
219589773
pes2o/s2orc
v3-fos-license
Nitrated monoaromatic hydrocarbons (nitrophenols, nitrocatechols, nitrosalicylic acids) in ambient air: levels, mass size distributions and inhalation bioaccessibility Nitrated monoaromatic hydrocarbons (NMAHs) are ubiquitous in the environment and an important part of atmospheric humic-like substances (HULIS) and brown carbon. They are ecotoxic, and their toxic potential for humans is under-researched. NMAHs were determined in size-segregated ambient particulate matter collected at two urban sites in central Europe, Ostrava and Kladno, Czech Republic. The average sums of 12 NMAHs (Σ12NMAH) measured in winter PM10 samples from Ostrava and Kladno were 102 and 93 ng m−3, respectively, and 8.8 ng m−3 in summer PM10 samples from Ostrava. The concentrations in winter corresponded to 6.3–7.3% and 2.6–3.1% of HULIS-C and water-soluble organic carbon (WSOC), respectively. Nitrocatechols represented 67–93%, 61–73% and 28–96% of NMAHs in PM10 samples collected in winter and summer at Ostrava and in winter at Kladno, respectively. The mass size distribution of the targeted substance classes peaked in the submicrometre size fractions (PM1), often in the PM0.5 size fraction, especially in summer. The bioaccessible fraction of NMAHs was determined by leaching PM3 samples in two simulated lung fluids, Gamble’s solution and artificial lysosomal fluid (ALF). More than half of the NMAH mass was found to be bioaccessible, almost completely so for nitrosalicylic acids. The bioaccessible fraction was generally higher when using ALF (which mimics the chemical environment created by macrophage activity, pH 4.5) than Gamble’s solution (pH 7.4). Bioaccessibility may be negligible for lipophilic substances (i.e. log KOW > 4.5). Electronic supplementary material The online version of this article (10.1007/s11356-020-09540-3) contains supplementary material, which is available to authorized users. Introduction Nitrated monoaromatic hydrocarbons (NMAHs) are an important part of humic-like substances (HULIS), which in turn constitute a large mass fraction of particulate matter (PM) water-soluble organic carbon (WSOC; Graber and Rudich 2006) and brown carbon (Laskin et al. 2015). NMAHs are primarily emitted into the atmosphere or formed by secondary processes. Gas- and aqueous-phase oxidation and nitration of lignin thermal decomposition products (m-cresol, phenols, methoxyphenols, catechols, salicylic acid, etc.) are major formation pathways for 4-nitrocatechol (4-NC), methylnitrocatechols (MNCs), nitroguaiacols (NGs) and nitrosalicylic acids (NSAs; Iinuma et al. 2010;Kelly et al. 2010;Kroflič et al. 2015;Frka et al. 2016;Teich et al. 2017;Xie et al. 2017;Finewax et al. 2018;Wang et al. 2019). Traffic and coal and wood combustion, as well as industry and agricultural use of pesticides, are considered the main primary emission sources of nitrophenols (NPs), which can also be secondarily formed in the atmosphere (Harrison et al. 2005;Iinuma et al. 2007;Kitanovski et al. 2012;Inomata et al. 2015;Wang et al. 2018). 4-NC and MNCs are well-established tracers for biomass burning secondary organic aerosols (Iinuma et al. 2010;Kitanovski et al. 2012;Kahnt et al. 2013;Caumo et al. 2016;Chow et al. 2016). NSAs may also be formed in secondary organic aerosols exposed to NO 3 radicals (Ramaswamy et al.
2019). NMAHs may represent up to 1% and 2% of PM 10 mass and HULIS, respectively Kitanovski et al. 2012Kitanovski et al. , 2020Kahnt et al. 2013;Caumo et al. 2016). NPs and NSAs are proven to have adverse effects on human health (estrogenic activity, carcinogenicity, cataract; Karim and Gupta 2001;Brüning et al. 2002;Harrison et al. 2005;Grundlingh et al. 2011;Kovacic and Somanathan 2014), while little is known about the toxicology of NCs. NMAHs may redox cycle in epithelial lung fluid and be a source of reactive oxygen species (ROS) in the lungs. Organic chemicals in ambient PM contribute significantly to air pollution and its adverse health effects (Lewtas 1993;Jones 1999;Shiraiwa et al. 2017). Extracts of ambient wood burning aerosol induce mutagenicity and intracellular production of ROS more than road traffic aerosol (Velali et al. 2019). Polar fractions of organic PM extracts show higher toxicities than apolar ones (Nováková et al. 2020). The complete pollutant mass in the air may not be bioaccessible upon inhalation as the dissolution of the substance in the epithelial lung lining fluid (LLF) is a prerequisite for biological activity. However, this prerequisite is not needed, when the substances are carried by ultrafine particles which may penetrate membranes completely (Oberdörster et al. 2004;Li et al. 2017). Unlike heavy metals in PM (Wiseman and Zereini 2014;Wiseman 2015;Kastury et al. 2017;Polezer et al. 2019), the organic matter (OM) fraction of PM that is potentially soluble in LLF has hardly been studied. The most common approach for in vitro assessment of the bioaccessibility of PM chemicals in LLF is by determining the fraction of the total concentration of a chemical leached from PM deposited filters immersed in simulated LLFs, under controlled conditions (Wiseman 2015). The two most commonly used simulated LLFs are artificial lysosomal fluid (ALF; Colombo et al. 2008;Wiseman 2015) and Gamble's solution (Marques et al. 2011;Wiseman 2015). ALF mimics the chemical environment around inhaled particles after being phagocytized by lung alveolar and interstitial macrophages. It is an acidic aqueous electrolyte without lipids, pH 4.5 (Table S1). Gamble's solution is the most common simulated LLF and represents the interstitial fluid in the lung. It is a neutral aqueous electrolyte without lipids, proteins and antioxidants, pH 7.4 (Table S1). The bioaccessible fraction of a chemical in PM is calculated as f bio_p = c p LLF /c p MeOH × 100 (%), where c p LLF is the leached concentration in LLF and c p MeOH is the total concentration (from extraction in methanol) of the substance in PM samples used for leaching. The aim of this present study was to determine levels and mass size distributions of NMAHs in the atmospheric PM collected at two urban locations in the Czech Republic. Inhalation bioaccessibility of semivolatile organic compounds so far has been mostly focusing on PAHs (Wei et al. 2018). For the first time, we quantify the inhalation bioaccessibility of NMAHs in PM. PAHs' and nitro-and oxy-PAHs' abundances and bioaccessibility in the same PM samples (Lammel et al. 2020a, b), as well as simultaneously in the gas phase (Lammel et al. 2020a), are presented in companion papers. Toxicities of these PM samples, as well as the mixture toxicity of the substance classes addressed (reconstituted mixtures), are published elsewhere (Nováková et al. 2020). 
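The bioaccessible fraction defined above is simply the ratio of the leached concentration to the total (methanol-extractable) concentration. A minimal sketch, with made-up PM3 concentrations standing in for measured values:

```python
def bioaccessible_fraction(c_llf_ng_m3, c_meoh_ng_m3):
    """f_bio_p = c_LLF / c_MeOH x 100 (%), per substance class and size fraction."""
    return 100.0 * c_llf_ng_m3 / c_meoh_ng_m3

# Illustrative (made-up) PM3 concentrations in ng m^-3: total (methanol extract)
# and leached in a simulated lung fluid, for the three substance classes.
totals  = {"nitrophenols": 4.2, "nitrocatechols": 55.0, "nitrosalicylic acids": 3.1}
leached = {"nitrophenols": 2.5, "nitrocatechols": 38.0, "nitrosalicylic acids": 3.0}

for compound, c_total in totals.items():
    f_bio = bioaccessible_fraction(leached[compound], c_total)
    print(f"{compound:22s} f_bio_p = {f_bio:5.1f} %")
```

In the study, c_LLF comes from the ALF or Gamble's solution leachates and c_MeOH from parallel methanol extractions of the same filters, so blank correction and recovery checks apply to both terms of the ratio.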
Sampling sites Air samples were collected at two urban and one rural site in the Czech Republic, Kladno-Švermov (50°10′ 01″ N/14°06′ 15″E) during 10-14 February 2016 and Ostrava-Přivoz (49°51′23″N/18°16′11″E) during 15-27 February and 5-17 September 2016, respectively (Fig. S1). In Kladno, an industrial town (≈ 70,000 inhabitants), a heat plant but no major industries were working during the campaign. The Ostrava site is located quite central in the industrial area (≈ 500,000 inhabitants). It is a station of the Czech Hydrometeorological Institute (CHMI). A major cokery with 200 furnaces, a major metallurgical plant, a waste burner and other industries are within 3 km from the site. Ostrava is a hot spot of air pollution in Europe (Pokorná et al. 2015(Pokorná et al. , 2016Kozáková et al. 2019). For example, abundance of polycyclic aromatic hydrocarbons (PAHs) is high in Ostrava and the biological effects of PM are evident, in particular during winter time (Líbalová et al. 2012;Šram et al. 2013;Topinka et al. 2015;Pokorná et al. 2015;Leoni et al. 2016). Filter samples were kept on-site and during transport cool (≈ 0°C), then stored at temperatures below − 18°C. Leaching of NMAHs in simulated lung fluids and chemical analysis Two LLFs were used, i.e. artificial lysosomal fluid (ALF; Colombo et al. 2008) and Gamble's solution (Marques et al. 2011). Their compositions are given in the supplementary material (SM) ( Table S1). The bioaccessible fractions of NMAHs in PM 3 (f bio ) were obtained by leaching the slotted and backup PM deposited QFFs with particles < 3 μm in 20 mL of simulated LLF by shaking (60 revolutions min −1 ) in a 100-mL flask during 24 h in an incubator at 37°C, in the dark. Dependent on NMAH load, 1.5-cm 2 cuts up to one strip (out of 10 strips of length 12 cm) of each slotted QFF were leached, while 1.5-20-cm 2 cuts were leached from backup QFFs. The leachates were filtered through 0.45-μm cellulose acetate membrane, acidified with formic acid (1.0 mL 98-100% formic acid per 20 mL leachate), spiked with 4-nitrophenold 4 (internal standard (IS); spiked mass 100 ng) and loaded on solid-phase extraction disks (SPE disks; BakerBond SPEEDISK DVB H 2 Ophilic, J.T. Baker). Targeted compounds were eluted from SPE disks sequentially with methanolic solution of EDTA (3.4 nmol mL −1 ) and a mixture of methanolic solution of EDTA (3.4 nmol mL −1 ) and acetonitrile (1:1). The obtained extract was concentrated to 0.5 mL using a TurboVap II (bath temperature, 40°C; nitrogen gas pressure, 15 psi; Biotage, Uppsala, Sweden). The concentrated extract was filtered through a 0.2-μm PTFE syringe filter (4 mm, Whatman; GE Healthcare, Little Chalfont, UK) into a 2-mL vial and was evaporated to near dryness under the gentle stream of nitrogen (99.999%; Westfalen AG, Münster, Germany). Finally, the extract was dissolved in methanol/ water mixture (3/7, v/v) containing 5 mM ammonium formate buffer pH 3 and 400 μM EDTA for LC/MS analysis. The determination of NMAHs in the PM filter samples was done using a validated analytical procedure (Kitanovski et al. 2012 with small modifications. In short, a 1.5-cm 2 section of the filter was spiked with 4-nitrophenol-d 4 (IS; spiked mass, 100 ng) and extracted three times (5 min each) with 10 mL methanolic solution of EDTA (3.4 nmol mL −1 ) in an ultrasonic bath. The combined extracts were concentrated, filtered, dried and re-dissolved for LC/MS analysis as described above for SPE extracts. The targeted NMAHs, i.e. 
2 NSAs, 4 NCs and 6 NPs (listed in Table S2 together with main physicochemical properties), were determined using an Agilent 1200 Series HPLC system (Agilent Technologies, Waldbronn, Germany) coupled to an Agilent 6130B single quadrupole mass spectrometer equipped with an electrospray ionisation (ESI) source . Atlantis T3 column (150 mm × 2.1 mm i.d., 3-μm particles size; Waters, Milford, USA), connected to an Atlantis T3 VanGuard pre-column (5 mm × 2.1 mm i.d., 3-μm particles size; Waters), was used for the separation of the targeted analytes. NMAHs were eluted isocratically using a mobile phase consisted of methanol/tetrahydrofuran/water (30/15/55, v/v/v) mixture containing 5 mM ammonium formate buffer pH 3 at a flow rate of 0.2 mL min −1 . The column temperature and injection volume were 30°C and 10 μL, respectively (Kitanovski et al. 2012). For the detection and quantification of NMAHs, the mass spectrometer was operated in single ion monitoring (SIM) and negative ESI mode. The optimised ESI-MS parameters were as follows: 1000 V for the ESI capillary voltage, 30 psig for the nebulizer pressure and 12 L min −1 and 340°C for the drying gas flow and temperature, respectively. High-purity nitrogen was used as a nebulizer and drying gas. 3-Methyl-4-nitrocatechol (3-M-4-NC) concentrations were calculated based on the calibration curve of 4-methyl-5-nitrocatechol (4-M-5-NC) due to the lack of a reference standard for 3-M-4-NC and its structural similarity to 4-M-5-NC. LC/MSD ChemStation (Agilent Technologies) was used for data acquisition and analysis. Field blanks (n = 3) were prepared during sample collection by mounting the pre-baked filters on the sampler without switching it on. These filters were subsequently retrieved and processed along with the rest of the samples. The mean of two or three field blank values was subtracted from the sample values (in both methanol extracts and leachates). Values below the mean + 3 standard deviations of the field blank values were considered to be below the limit of quantification (<LOQ). LOQs for the various campaigns are listed in Table S4. Heavy metal content, aerosol number and mass size distributions (MSDs), meteorological and trace gases were also covered by respective methods, described in the supplementary material (SM) (S1.4). Concentration levels and mass size distributions The levels of the targeted substance classes in PM 10 are listed in Table 1, and the time series are shown in Fig. S2. With PM 2.5 ranging 15-34 μg m −3 (Table 1), the sites were considerably polluted. The pollution by heavy metals in Ostrava air was found very high, independently of season (Table 1; Fig. S3) and must be seen in the context of the local metallurgical industries and coal production and burning (Pokorná et al. 2015;Vossler et al. 2015). The pollution at the urban sites was less reflected by the levels of the secondary inorganic aerosol (SO 4 2− , NO 3 − , NH 4 + ), because these are regionally distributed pollutants, exhibiting a low urban-to-rural gradient (Lammel et al. 2003). The NMAH levels at the Kladno and Ostrava (winter) sites corresponded to 2.6 and 3.1% of the WSOC, respectively, and 6.3 and 7.3% of the HULIS-C, respectively (Voliotis et al. 2017). NMAHs were dominated by 4-NC and MNCs (Fig. S4a). The patterns in PM 1 and PM 10 are rather similar unlike typical for many other aerosol constituents (Putaud et al. 2010). Mass size distributions of NMAHs are shown in Fig. 1 and S6. 
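The sub-micrometre shares discussed next (e.g. PM1 accounting for 80–90% of the NCs) follow from summing the size-segregated impactor stages up to the relevant cut-off. A minimal sketch with hypothetical stage cut-offs and loadings:

```python
# Illustrative impactor stages (upper cut-off diameters in micrometres) and
# hypothetical NMAH mass concentrations per stage in ng m^-3.
stage_cutoffs = [0.49, 1.0, 3.0, 7.2, 10.0]
stage_conc    = [40.0, 30.0, 12.0, 6.0, 4.0]

total = sum(stage_conc)
pm1   = sum(c for d, c in zip(stage_cutoffs, stage_conc) if d <= 1.0)
pm3   = sum(c for d, c in zip(stage_cutoffs, stage_conc) if d <= 3.0)

print(f"PM1/PM10 mass fraction: {100 * pm1 / total:.0f} %")
print(f"PM1/PM3  mass fraction: {100 * pm1 / pm3:.0f} %")
```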
PM 1 accounts for 80-90% of NCs, 70-80% of NSAs (as well as of the NMAHs in total) and 40-60% of NPs. For all NMAH substance classes, the significance of the smallest size, PM 0.49 , was higher in summer than in winter (in Ostrava). In contrast, the significance of a super-μm mode (3-7 μm) of the NP and NSA MSDs decreased in summer, completely in the case of NSAs. A high fraction of NPs, 30-50%, was associated with the coarse fraction (PM 10 -PM 3 ) in winter (Fig. 1). These results are in agreement with previous reports from other urban sites in central and southern Europe and China (Li et al. 2016). The aerosol number size distributions (characterised in Fig. S5) indicated close combustion sources and are consistent with a possible influence of wood burning. The MSDs peaking in the sub-micrometre size range highlight the significance of NMAH inhalation exposure of the deep lung (Kitanovski et al. 2020), similar to other aromatic combustion byproducts like the parent PAHs (Ringuet et al. 2012) and polychlorinated dibenzodioxins and -furans (Zhang et al. 2016). Bioaccessibility The lowermost 4 impactor stage filters of the campaigns at Kladno (1 winter-time sample set) and Ostrava (3 winter- and 3 summer-time sample sets), encompassing PM 3 , were leached in ALF. Only one sample set encompassing PM 3 per location and season (3 sample sets in total) was leached in Gamble's solution (GS; Table 2; Table S5; Fig. 2). Using ALF, more than half of the NMAH mass was found bioaccessible in winter, and almost all of it, 94%, in summer (Table 2a). This could be related to a higher content of hydrophobic substances in PM in winter. In central Europe, fossil fuel combustion byproducts, in particular PAHs, reach much higher concentrations in winter, also in urban air and also in Ostrava (Lammel et al. 2010;CHMI 2013;Vossler et al. 2015). The differences in f bio_p across the campaigns were less pronounced when using ALF (Table 2) than when using GS to leach samples (insignificant differences for p < 0.05, t test). Often a lower f bio_p was found for all NMAH species when using the neutral GS than when using the acidic ALF (Table 2 and Table S5a; note that because fewer samples were leached by GS than by ALF, directly comparable f bio_p data are given in Table 2, but not in Table S5), but also the opposite was found (Kladno sample, Table 2, Table S5b). (Fig. 1 shows the time-weighted mean Σ 12 NMAH and sub-class mass size distributions; the error bars give the standard deviation from the campaign mean, n = 3 for Ostrava, n = 1 for Kladno.) NSAs were almost completely bioaccessible, i.e. f bio_p ≈100% in both LLFs. Bioaccessible fractions > 100% most likely reflect leaching procedure artefacts. They are more pronounced for NSAs and NCs when leached in ALF (Table S5). Therefore, we investigated the stability of NMAHs during the leaching procedure by spiking the LLFs with an NMAH standard mix and carrying out the usual 24-h leaching. The results of the stability study (Table S7) showed > 100% recoveries for NSAs in both LLFs, but usually < 100% recoveries for NCs. Neither NCs nor methylnitrophenols (MNPs) nor dinitrophenols (DNPs) were found more stable in ALF than in GS (not significant, p < 0.05, t test). With a pKa (acidity constant) of 6.78 at 35°C (Gelb et al. 1989), the majority of 4-NC (but also MNC) molecules will be deprotonated in GS (pH 7.4) at 37°C.
In deprotonated form, NCs are more susceptible to oxidation (e.g. by the dissolved oxygen in LLFs) and to formation of nitrated 1,2-benzoquinones, which could not be measured by the analytical method employed here. In ALF at pH 4.5, NCs are in neutral form and more stable, hence their higher recoveries from ALF. This could also explain their higher f bio_p in ALF (significant at the p < 0.05 level, t test; Table S5). For MNPs and DNPs, however, their lower stability in ALF is unexplained, having in mind their pKa values around 7.3 and 4.0, respectively (Schwarzenbach et al. 1988), as well as their high recoveries during the SPE clean-up after the leaching process (Table S3). Only for summer samples from Ostrava are the NMAHs' bioaccessible fractions in ALF much higher than 100% (range, 99-187%; Table S5b), suggesting possible aqueous-phase formation of NMAHs from their precursors in the PM during the leaching process (positive artefact) under mildly acidic conditions (pH 4.5; Kroflič et al. 2018). This hypothesis is supported by the high levels of PM 10 (PM 2.5 ), NO x and Fe (Fig. S3) measured during the summer sampling campaign (Table 1a), which could facilitate the oxidation and nitration of NMAH precursors. Interestingly, for the same sample sets, very low bioaccessibility in GS was observed for NCs (range, 9-77%; Table S5b), which cannot be solely explained by the NC stability results (50-99%; Table S7). Due to the high Fe content of the samples, NCs could partly exist as monocomplexes of Fe 3+ and enhance the production of reactive species by Fenton or Fenton-like systems (Salgado et al. 2017). During these processes, NCs can be oxidised or degraded by the formed reactive species, thus diminishing their leached concentrations (negative artefact), as well as their measured bioaccessible fractions. For both LLFs, f bio_p was found to be independent of particle size, i.e. it did not differ significantly between sub-micrometre particles and the PM 3 size fraction (p < 0.05, t test; Table 2). This is also reflected in similar (statistically not different, p < 0.05) PM 1 /PM 3 values for the PM methanol extracts and the LLF leachates (Table S6). The range of physicochemical properties of NMAHs is not large, spanning 2 and 1 orders of magnitude for water solubility, s, and K OW , respectively (listed in Table S2). The respective data for Ostrava winter (Table S5) are shown together with data for 7 oxygenated polycyclic aromatic hydrocarbons (OPAHs; Lammel et al. 2020b), so that s and K OW across the two substance classes range over 5 and 4 orders of magnitude, respectively (Fig. 2, Fig. S7). The bioaccessible fractions of NMAHs, f bio p , were similar in winter and summer (Fig. S7), reflecting that ambient aerosol chemical composition in source areas (anthropogenic sources) is subject to little seasonal variation (Putaud et al. 2010). f bio_p decreased with the compound's increasing K OW (Fig. 2) and decreasing water solubility (Fig. S7). Bioaccessibility may be negligible for lipophilic substances (i.e. log K OW > 4.5). The lack of a clear trend in Fig. S7 reflects the aqueous electrolyte nature of the LLFs. The MSDs of the bioaccessible fractions were only slightly shifted relative to the MSDs of the PM methanol extracts. For example, for GS, the bioaccessible sub-micrometre mass fraction in PM 3 , i.e.
PM 1 /PM 3 , deviated typically only within 2% from the total sub-micrometre mass fraction in PM 3 (Table S6b), while for ALF these shifts were up to ≈ 10% (Table S6a), in the sense that the sub-micrometre fraction was less bioaccessible than the coarse size fraction. This is possibly related to a higher hydrophobicity of PM 1 particles as compared with coarse PM. Hydrophobicity may limit the leachability of particles. Hydrophobicity was not determined, but more than 60% of EC and OC, which often represent hydrophobic constituents, were associated with the PM 1 mass fraction, more than in coarse PM (cumulative MSDs, Fig. S5). Conclusions and suggestions for research Inhalation bioaccessibility of the nitrated monoaromatic pollutants in PM as operationally defined by leaching filter samples in simulated lung fluids was found very high for both an aqueous acidic (pH 4.5, ALF) and a neutral electrolyte (pH 7.4, Gamble's solution). This emphasises the human inhalation exposure to polar constituents of particulate organic matter. Bioaccessibility of a given PM constituent will depend on not only the substance properties but also the aerosol matrix (e.g. its hydrophobicity). Here, a limited number of samples have been analysed. Among aerosol types, only urban aerosols, strongly influenced by fossil fuel burning sources (metallurgical industries and coal production and burning, road traffic; Lammel et al. 2020b) were covered. More such data should be gained from other aerosol types and extended to other organic pollutants, abundant in aerosols, such as polycyclic aromatic compounds. The determination of bioaccessibility based on leaching with simulated lung fluids may even be an underestimate, as ultrafine particles may penetrate through the membrane and thus deliver pollutants without dissolution in the lung fluid. On the other hand, the presence of false-positive (f bio p >> 100%) and false-negative artefacts (f bio p < 50%) during the in vitro tests of bioaccessibility should be avoided by (a) optimization of the duration of the tests (allowing less time for unwanted reactions to occur), (b) using degassed LLFs and performing the tests in inert atmosphere for analytes that could be easily oxidised (which is opposite to the real conditions in the lung) and (c) by using more realistic LLF models that contain lipids, proteins and antioxidants (e.g. Boisa et al. 2014). The presence of organic constituents and antioxidants in LLFs would serve as "buffer" for PM and potentially in situ formed ROS during the leaching procedure. Only the bioaccessible fraction of pollutants can become biologically effective, such as ROS active. While the reduction potential as an indicator for redox reactivity is available for a number of NMAHs such as nitrobenzenes (Uchimiya et al. 2010), determination of the oxidative potential (OP) of organic pollutants has so far been limited to quinones (Charrier and Anastasio 2012;Yu et al. 2018;Lammel et al. 2020b) and N-heterocycles (Dou et al. 2015). Finally, the inhalation exposure to the targeted NMAHs is in fact higher, because part of the NMAH mass will be distributed to the gasphase of ambient aerosols, not considered in this study.
In Situ Hydrothermal Construction of Direct Solid-State Nano-Z-Scheme BiVO4/Pyridine-Doped g-C3N4 Photocatalyst with Efficient Visible-Light-Induced Photocatalytic Degradation of Phenol and Dyes In the current study, a mediator-free solid-state BiVO4/pyridine-doped g-C3N4 nano-Z-scheme photocatalytic system (BDCN) with superior visible-light absorption and optimized photocatalytic activity was constructed via an in situ hydrothermal method for the first time. The pyridine-doped g-C3N4 (DCN) nanosheets show strong absorbance in the visible-light region by pyridine doping, and the BiVO4 (∼10 nm) nanoparticles are successfully in situ grown on the surface of DCN nanosheets by the controlled hydrothermal method. Under the irradiation of visible light (λ > 420 nm), the BiVO4/DCN nanocomposite photocatalysts efficiently degrade phenol and methyl orange (MO) and display much higher photocatalytic activity than the individual DCN, bulk BiVO4, or the simple physical mixture of DCN and BiVO4. The greatly improved photocatalytic ability is attributed to the construction of the direct Z-scheme system in the BiVO4/DCN nanocomposite free from any mediator, which leads to enhanced separation of photogenerated electron–hole pairs, as confirmed by the photocurrent analysis. The possible Z-scheme mechanism of the BiVO4/DCN nanocomposite photocatalyst was investigated by transient time-resolved luminescence decay spectrum, active species trapping experiments, electron paramagnetic resonance (EPR) technology, and hydrogen evolution test. INTRODUCTION Semiconductor-based photocatalysis has attracted increasing attentions as it has the potential to eliminate the environmental contamination because of its low cost and eco-friendly nature. 1,2 Photocatalysts with efficient solar energy utilization capability and quantum efficiency are necessary for applying the photocatalysis technology into practical applications. Unfortunately, the limitation on practical applications of the most applied semiconductor photocatalysts is that they can only absorb ultraviolet (UV) light. 3 As a result, exploiting visiblelight-driven photocatalysts with high efficiency and good durability has been a hot spot in the field of photocatalysis. Recently, graphitic carbon nitride (g-C 3 N 4 ) has been regarded as a promising visible-light-driven photocatalyst because of its appropriate band gap and unique optical properties with promising performance in the degradation of organic pollutants. 4,5 The relatively very negative conduction band (CB) (−1.13 eV) enables a strong reduction potential of the photogenerated electron. However, the photocatalytic activity of g-C 3 N 4 is restricted by its insufficient visible-light absorbance, slow charge-carrier transport, and high recombination of photogenerated charge carriers. To overcome these inherent drawbacks of g-C 3 N 4 , various methods, including structure regulations, 6,7 molecular doping, 8−19 and heterojunction, 20−28 have been developed to improve the photocatalytic activity and selectivity of g-C 3 N 4 . Inspired by the thermal polymerization process of the precursors during the fabrication of C 3 N 4 , Wang et al. 
8 exploited a molecular doping strategy in which small organic molecules were incorporated into the g-C 3 N 4 network by a copolymerization process, which can not only enhance the light absorbance of g-C 3 N 4 but also create surface dyadic heterostructures that promote the separation of the generated charge carriers on the surface of the molecule-doped g-C 3 N 4 , thus resulting in a higher photocatalytic activity. Subsequently, various types of monomer precursors with cyano and/or amino groups have been developed for integrating organic molecules with different functional groups into the CN frameworks. 9−19 However, after modification by the organic molecular doping method, the valence band (VB) position of g-C 3 N 4 is sharply shifted toward a more negative potential, which is detrimental to its application in organic pollutant degradation because it dramatically decreases the oxidizing ability of the organic-molecule-doped g-C 3 N 4 . 19,29 Therefore, it is highly desirable to develop a strategy that maintains or enhances both the reducing and oxidizing power of the organic-molecule-doped g-C 3 N 4 . More recently, the construction of g-C 3 N 4 -based Z-scheme systems has been developed as another feasible and efficient method to promote the photocatalytic performance of g-C 3 N 4 , because the Z-scheme charge-carrier-transfer channel can facilitate the separation and restrain the recombination of photoinduced charge carriers. 30−39 In the Z-scheme charge-carrier-transfer channel, the photogenerated electrons in the CB of the oxidation semiconductor recombine with the photogenerated holes in the VB of the reduction semiconductor, rather than following the vastly reported heterojunction channel in the composites. Thus, the reducing power of the photogenerated electrons in the CB of the reduction semiconductor and the oxidizing power of the photogenerated holes in the VB of the oxidation semiconductor are retained and even enhanced in the Z-scheme system. This motivated us to enhance the reducibility and oxidizability of the organic-molecule-doped g-C 3 N 4 by constructing a mediator-free solid-state g-C 3 N 4 -based Z-scheme system. BiVO 4 has a band gap of 2.40 eV, making it a visible-light-responsive semiconductor with great potential in visible-light-driven Z-scheme photocatalysts. 40 Its relatively low VB position makes it a good candidate for constructing a Z-scheme photocatalytic system with g-C 3 N 4 . It is highly anticipated that, by combining the enhanced light absorbance provided by the organic molecular doping strategy with the preserved or enhanced reducibility and oxidizability provided by the Z-scheme strategy, our proposed BiVO 4 /organic-molecule-doped g-C 3 N 4 Z-scheme photocatalytic system will be quite promising as an excellent photocatalytic system. To the best of our knowledge, the combination of organic molecular doping of g-C 3 N 4 by copolymerization and a mediator-free solid-state Z-scheme design to synergistically achieve enhanced photocatalytic properties has not been reported yet. Moreover, very few works have focused on the geometric architecture of g-C 3 N 4 -based Z-scheme systems, such as size, spatial distribution, and morphology, which has a critical influence on the light-harvesting activity, charge-carrier separation, and photocatalytic activity of the g-C 3 N 4 -based Z-scheme system.
In our current work, a novel and mediator-free solid-state BiVO 4 /pyridine-doped g-C 3 N 4 (BDCN) nano-Z-scheme photocatalytic system with superior visible-light absorbance ability and photocatalytic activity was demonstrated for the first time, which was obtained by the successful in situ growth of BiVO 4 nanoparticles (∼10 nm) on the surface of pyridine-doped g-C 3 N 4 (denoted as DCN) nanosheets via a controlled hydrothermal method. The obtained BDCN nano-Z-scheme photocatalytic system was characterized by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, transmission electron microscopy (TEM), and diffuse reflectance spectrum (DRS). Degradation of phenol and methyl orange (MO) was performed to evaluate the photocatalytic activity of the samples under visible-light irradiation (λ > 420 nm). Furthermore, the possible photocatalytic Z-scheme mechanism was investigated and confirmed via active species trapping experiments, electron paramagnetic resonance (EPR) measurement, and hydrogen evolution test, and the stability and recyclability of the BDCN photocatalytic system were also examined. RESULTS AND DISCUSSION 2.1. Structural and Optical Analysis. To verify the successful doping of the pyridine ring into the carbon nitride network in the DCN and BDCN samples, the solid-state 13 C NMR spectrum measurement was recorded, and the obtained results are shown in Figure 1A. Compared with the pure g-C 3 N 4 (CN) and the BiVO 4 /undoped g-C 3 N 4 nanocomposite (denoted as BCN, prepared by the same hydrothermal method, and the mass percentage of g-C 3 N 4 was optimized in terms of the results of photocatalytic analysis), the DCN and BDCN4 (40% mass percentage of DCN in BDCN) samples show an additional broad peak centered at 115.8 ppm in the 13 C NMR spectra, suggesting that the pyridine ring was doped successfully into the carbon nitride network in the DCN and BDCN4 samples. Moreover, the BDCN4 sample shows the identical solid state 13 C NMR spectrum before and after photocatalytic reaction ( Figure 1B), suggesting that the pyridine ring in the carbon nitride network is stable under light irradiation. The XRD patterns was taken to detect the crystalline and chemical structures of the samples, and the outputs are shown in Figure 2. Two pronounced peaks in the DCN sample at 13.04°and 27.47°are observed, corresponding to the (100) and (002) diffractions of DCN, respectively. The (100) and (002) diffractions are attributed to the in-plane structural packing mode and the interlayer stacking of aromatic networks, respectively. 4,5 The diffraction pattern of the bulk BiVO 4 sample shows a series of peaks that can be indexed to the monoclinic shelties phase of BiVO 4 (JCPDS card #14-0688). In the case of the BDCN nanocomposites, the XRD pattern of BDCN shows both the characteristic diffractions of DCN and the monoclinic shelties of BiVO 4 , implying that the hydrothermal process leads to the successful in situ formation of a BiVO 4 structure and has no dramatic influence on the structure of DCN. In addition, the intensities of DCN diffraction peaks gradually increase from BDCN1 to BDCN5, corresponding to the variation of DCN contents from 10 to 50%. The FT-IR spectra of DCN, BiVO 4 , and BDCN1−5 samples are shown in Figure S1. The pure BiVO 4 sample shows the peaks at 839 and 745 cm −1 , which correspond to ν1 symmetric stretching and ν3 asymmetric stretching vibrations of VO 4 , respectively. 
The DCN sample shows typical peaks at 900− 1500 and 809 cm −1 , representing the typical stretching vibration of CN heterocyclic and plane breathing and vibration of triazine units, respectively. Moreover, the broad peak at 2800−3450 cm −1 is ascribed to the adsorbed O−H vibration in water molecules and the N−H vibration because of the surface uncondensed amine groups. For the BDCN composites, all BDCN samples show the characteristic peaks of both DCN and BiVO 4 , further confirming the composite structure between DCN and BiVO 4 . It also indicates that the hydrothermal treatment did not change the C−N heterocycles and the chemical bond dramatically in the carbon nitride structure. The actual mass percent of DCN in the composite photocatalyst was determined by thermogravimetric analysis (TGA). DCN was decomposed at 1000 K, whereas BiVO 4 remained stable above 1000 K ( Figure S2). Thus, the mass percent of DCN in the composite photocatalysts was estimated by the mass loss between 300 and 1000 K, which was determined as 10.8, 19.9, 30.4, 39.9, and 49.7% for the samples BDCN1−BDCN5, respectively. This is in good agreement with the theoretical mass percentages. The optical properties of the DCN, BiVO 4 , and BDCN composites were monitored by DRS, as shown in Figure 3. Pure bulk BiVO 4 shows a steep absorption edge at around 524 nm, which reveals a band gap energy of 2.37 eV. DCN has a broader absorption edge at 576 nm, relating to a band gap energy of 2.16 eV. After combined with BiVO 4 , the obtained BDCN composites present a gradual shift to the red edge of the absorption band as the DCN contents increase, indicating that DCN has a significant light-harvesting ability in the BDCN composite photocatalysts. From Figure S3, it can be found that both DCN and BDCN samples show much wider light absorption range than the pure CN and BCN samples, suggesting that the pyridine doping can efficiently extend the light absorbance of the doped carbon nitride and BDCN samples. The morphology of DCN, BiVO 4 , and BDCN was characterized by scanning electron microscopy (SEM), as shown in Figure 4. DCN shows the crumpled layered structure containing several stacking layers, indicating the planar graphite-like structure of carbon nitride. The blank BiVO 4 exhibits a decagonal shape with a size of 150−300 nm. Compared with the morphology of DCN and BiVO 4 , the smallsized BiVO 4 particles with a size of about 10 nm are found to disperse on the crumpled DCN surface, which will be further confirmed by the TEM images. Additionally, the energydispersive spectrometry (EDS) was also adopted to analyze the elements in the BDCN4 nanocomposite, and the result is given in Figure 4D. It can be clearly found that C, N, Bi, V, and O elements exist in the BDCN4 samples, which further confirms the compositional structure of the BDCN samples. Moreover, the morphology of DCN, BiVO 4 , and contact interface between DCN and BiVO 4 in the BDCN sample was analyzed by TEM and high-resolution TEM (HRTEM), as shown in Figure 5. The typical graphite-like structure of DCN is clearly evidenced in Figure 5A, as evidenced by the result of SEM. In the absence of DCN, the BiVO 4 particles obtained by the hydrothermal route show the agglomerations with diameters ranging from 150 to 300 nm, and no nanosized BiVO 4 particles are obtained ( Figure 5B). 
However, it is very interesting to note from Figure 5C that the BiVO 4 nanoparticles with the average size of approximately 10 nm were successfully in situ synthesized and deposited on the surface of DCN uniformly in the presence of DCN during the hydrothermal process after precise control over the hydrothermal synthesis. Compared with the bulk BiVO 4 , the size of in situ synthesized BiVO 4 decreases obviously with the introduction of DCN. Bi 3+ ions can be bound on the surface of DCN by chemisorption between the Bi 3+ ion and the heptazine rings of DCN, which successfully prevents the overgrowth and agglomeration of the BiVO 4 particles. This demonstrates that the in situ hydrothermal method is a feasible route to construct the uniform BiVO 4 /DCN nanocomposite photocatalyst. The HRTEM image in Figure 5D shows that the lattice spacings of 0.292 and 0.308 nm correspond to the (040) and (121) planes of monoclinic shelties of BiVO 4 , respectively, and the lattice spacing of 0.324 nm corresponds to the (020) plane of DCN. Figure 5D also reveals the intimate interface between DCN nanosheets and BiVO 4 nanoparticles, indicating that the BiVO 4 nanoparticles are tightly attached to the surface of DCN, which is beneficial to the transport of photoinduced charge carriers. X-ray photoelectron spectroscopy (XPS) technology was applied to further study the surface chemical composition and the oxidation state of the elements in the BDCN4, as shown in Figure S4. The C 1s, N 1s, Bi 4f, V 2p, and O 1s are all found in the survey XPS spectrum ( Figure S4A). The peak of C 1s at 284.6 eV can be attributed to the adventitious carbon on the surface of g-C 3 N 4 , and the two peaks at 286 and 288.2 eV both belong to the sp 2 -hybridized C [C−(N) 3 ] ( Figure S4B). The main features of N 1s include a broad and wide peak from 398 to 403.5 eV ( Figure S4C), in which the existence of the sp 2bonded DCN in the BDCN4 nanocomposites was verified by the sp 2 -hybridized nitrogen (CN−C) of N 1s peak at 398.8 eV. The peak at 400.1 eV corresponds to tertiary nitrogen N− (C) 3 groups, and the peak at 401.3 eV relates to the effects of charging reaction. 17−19 The binding energies of Bi 4f 7/2 and Bi 4f 5/2 are 164.3 and 159.2 eV, respectively ( Figure S4D), and the two peaks at 516.6 and 526.1 eV can be attributed to V 2p ( Figure S4E). In the O 1s region ( Figure S4F), one peak at 529.4 eV belongs to the Bi−O bonds of (Bi 2 O 2 ) 2+ units and one minor peak at 530.1 eV is attributed to the −OH groups on the surface. The XPS results further demonstrate the composition of DCN and BiVO 4 in the prepared BDCN nanocomposites. 2.2. Photocatalysis Analysis. The photocatalytic activities of the as-prepared DCN, BiVO 4 , BCN, and BDCN samples were examined by the photodegradation of phenol ( Figure 6A,B) and MO ( Figure 6C,D) under visible-light irradiation (λ > 420 nm). Because of the poor oxidation ability of DCN associated with the relatively high level of VB position, the photodegradation of phenol is not quite efficient. The DCN sample presents only 21.2% removal efficiency within 150 min ( Figure 6A) on phenol. The bulk BiVO 4 shows better oxidation ability because of its low level of VB position, achieving 49.5% removal efficiency in the degradation of phenol in 150 min. 
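The apparent rate constants reported in the next paragraph are the kind of values commonly obtained from a pseudo-first-order fit of the degradation data, ln(C 0 /C) = k app t. The sketch below illustrates such a fit with made-up data points; it is not the authors' analysis code, and the pseudo-first-order assumption is inferred here from the units (min −1 ) rather than stated explicitly in the text.

```python
import numpy as np

# Minimal sketch: pseudo-first-order fit ln(C0/C) = k_app * t.
# Times (min) and C/C0 values are illustrative, not digitized from Figure 6.
t = np.array([0, 30, 60, 90, 120, 150], dtype=float)        # irradiation time / min
c_over_c0 = np.array([1.00, 0.52, 0.27, 0.15, 0.08, 0.04])  # remaining fraction

y = np.log(1.0 / c_over_c0)                 # ln(C0/C)
k_app, intercept = np.polyfit(t, y, 1)      # slope = apparent rate constant / min^-1

print(f"k_app = {k_app:.4f} min^-1")        # ~0.021 min^-1 for these illustrative data
```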
Interestingly, the BDCN nanocomposites demonstrate much higher activities than pristine DCN or bulk BiVO 4 , up to 69.8, 77.3, 87.4, 97.1, and 92.0% removal efficiency for the BDCN1, BDCN2, BDCN3, BDCN4, and BDCN5 nanocomposite photocatalysts, respectively, which increases with the DCN content from 10 to 40% but decreases at 50%. This decreased photocatalytic activity was due to the decrease in the light absorbance of BiVO 4 and/or the enhanced recombination of photoinduced charge carriers when the mass percent of DCN is over 50%. In the presence of the BDCN4 nanocomposite, which shows the best photocatalytic activity over all BDCN nanocomposite photocatalysts, nearly 92% of phenol and 97% of MO molecules (Figure 6C) are photodecomposed within 150 min under visible-light irradiation. Moreover, the apparent rate constant for phenol photodecomposition obtained from the BDCN4 sample is 0.02202 min −1 , which is about 13.7 times higher than that of the DCN sample (0.0016 min −1 ) and 5.0 times higher than that from the BiVO 4 sample (0.0044 min −1 ) (Figure 6B). The apparent rate constant for the MO decomposition obtained from the BDCN4 sample is 0.0215 min −1 , which is about 6.3 times higher than that from the DCN sample (0.0034 min −1 ) and 4.2 times higher than that from the BiVO 4 sample (0.0051 min −1 ) (Figure 6D). Moreover, the BDCN4 sample shows a more effective photocatalytic activity than the pyridine-free undoped BiVO 4 /g-C 3 N 4 -4 (the BCN nanocomposite with the mass percentage of g-C 3 N 4 at 40%), a simple physical mixture of DCN and BiVO 4 (40% of DCN by mass), BiVO 4 /CN samples made by a mixing-calcination method, and BiVO 4 /CN made by ultrasound-assisted 41 and calcination methods, 42 implying more efficient charge-carrier separation and transfer in the BDCN nanocomposite than in the simple physical mixture and the mixing-calcination-prepared sample, and a much more efficient visible-light-harvesting capability than that of the BCN sample (Figure S5). The total organic carbon (TOC) analysis was also applied to evaluate the mineralization ratio of organic contaminant photodecomposition in water over the photocatalysts. As illustrated in Figure 7, the TOC in the phenol and MO solutions decreases with irradiation time during photodegradation, down to 38.6 and 26.3% after 150 min of irradiation, respectively, indicating that about 61.4 and 73.6% of the organic carbon in phenol and MO, respectively, is photodegraded into inorganic carbon (CO 2 ). The TOC results indicate that phenol and MO are extensively degraded by the photocatalytic process over the BDCN nanocomposites, rather than just being decolorized to some organic intermediates, which may still be hazardous to the environment. However, the TOC in the phenol and MO solutions did not decrease to zero, suggesting that there must be some intermediate products 43 that are not completely mineralized. The interfacial charge-carrier-transfer dynamics between the DCN, BiVO 4 , and BDCN photocatalysts were assessed by photocurrent−time (transient photocurrent response) measurements, and the results are shown in Figure 9A. Obviously, all BDCN nanocomposites exhibit a higher photocurrent than DCN and BiVO 4 , indicating that the formation of the BDCN nanocomposite by combining DCN with BiVO 4 can facilitate the separation of photogenerated electron−hole pairs. However, BDCN5 shows a decreased photocurrent, which is attributed to the excess DCN that decreases the light absorbance of BiVO 4 and enhances the recombination of photogenerated electron−hole pairs.
In addition, a comparison of the transient photocurrent responses of the CN, BCN, DCN, and BDCN4 samples is shown in Figure 9B. The DCN and BDCN4 samples show much higher photocurrent than CN and BCN because of the incorporation of pyridine into the CN network, which can delocalize the aromatic system. This enhances the separation of photogenerated electron−hole pairs. 2.3. Mechanism of Photocatalytic Degradation. A series of active oxygen species, such as holes (h + ), hydroxyl radicals ( • OH), and superoxide radicals ( • O 2 − ), are hypothesized to be involved in the current photocatalytic degradation of phenol or dye. 44 To investigate the photocatalytic mechanism, ammonium oxalate (AO), isopropanol (IPA), and benzoquinone (BQ) were introduced into the photocatalytic oxidation process separately over the BDCN4 photocatalyst and acted as the scavengers for h + , • OH, and • O 2 − , respectively. The photocatalytic analysis results (Figure 10A) indicate that the addition of IPA and AO slightly hinders the MO degradation rate but the addition of BQ strongly suppresses it, indicating that • O 2 − is the major oxygen active species, whereas • OH is the minor oxygen active species in the photodegradation. The addition of IPA and AO shows a similar inhibition effect, which suggests that • OH is generated by the reaction between h + and H 2 O. Meanwhile, the photocatalytic experiments over the BDCN4 photocatalyst were also performed under different atmospheres (O 2 and N 2 gases) to investigate the active oxygen radicals, and the results are given in Figure 10B. It is found that the degradation rate is suppressed under an N 2 atmosphere but slightly improved by bubbling O 2 , revealing that O 2 dissolved in the suspension acts as the electron trap, thus resulting in the generation of • O 2 − . For comparison, the photocatalytic activity of the BiVO 4 and DCN samples after the addition of AO, IPA, and BQ was also investigated, as shown in Figure 10B,C, from which it can be concluded that • O 2 − plays the most crucial role in the oxidation process over DCN, whereas for the pure bulk BiVO 4 , • OH was the main active species during the photodegradation process. Therefore, in general, the active species of the BDCN nanocomposite in the degradation process are determined to be • O 2 − and • OH. To explain the enhanced photocatalytic activity and explore the possible mechanism, the band edge positions of the VB and CB of BiVO 4 and DCN were determined by VB XPS and the empirical relation E VB = E CB + E g , as shown in Figure 10D. The band gap energy of DCN was taken as 2.16 eV, as discussed above in the results of DRS. According to the VB XPS and empirical eq 2, the VB and CB positions of DCN were calculated as 1.81 and −0.35 eV versus the normal hydrogen electrode (NHE), respectively. For BiVO 4 , the positions of the VB and CB were calculated by the following empirical eqs 1 and 2: E VB = χ − E e + 0.5E g (1) and E CB = E VB − E g (2), where E VB corresponds to the VB edge potential; E CB corresponds to the CB edge potential; χ corresponds to the electronegativity of the semiconductor, which is the geometric average of the absolute electronegativities of the constituent atoms (the χ value of BiVO 4 is 6.15 eV); E e corresponds to the free-electron energy on the hydrogen scale (E e ≈ 4.5 eV); and E g corresponds to the band gap energy of the semiconductor.
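As a quick arithmetic check of eqs 1 and 2 with the constants listed above (χ = 6.15 eV, E e ≈ 4.5 eV) and the BiVO 4 band gap of about 2.36 eV quoted in the next paragraph, the short sketch below reproduces the calculation; the resulting band edge values follow from the equations rather than being quoted from the paper.

```python
# Minimal sketch of the band-edge arithmetic from eqs 1 and 2 (constants as stated in the text).
chi_bivo4 = 6.15   # absolute electronegativity of BiVO4 / eV
E_e = 4.5          # free-electron energy on the hydrogen scale / eV
E_g = 2.36         # band gap of BiVO4 / eV (value reported in the next paragraph)

E_VB = chi_bivo4 - E_e + 0.5 * E_g   # eq 1
E_CB = E_VB - E_g                    # eq 2

print(f"E_VB = {E_VB:.2f} eV vs. NHE, E_CB = {E_CB:.2f} eV vs. NHE")  # 2.83 and 0.47 eV
```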
The band gap energy of BiVO 4 was calculated as 2.36 eV. To further confirm the hypothesized Z-scheme charge-carrier-transfer route, transient time-resolved luminescence decay and photoluminescence (PL) measurements were conducted, and the results are shown in Figure 12. These decay curves are well-fitted by a double-exponential PL decay model, as shown in Figure 12A−C. The fast PL lifetime component (τ 1 ) and the slow PL lifetime component (τ 2 ) are ascribed to surface-related nonradiative recombination processes and to the recombination of free excitons, 45 respectively. Compared with BiVO 4 (4.78 ns) and DCN (4.95 ns), the BDCN4 sample displays a longer PL decay lifetime of 10.22 ns. Both the τ 1 and τ 2 components and the overall PL lifetime (τ) of BDCN are longer than those of BiVO 4 and DCN, suggesting that the separation of photogenerated charge carriers in BDCN is more efficient than in BiVO 4 and DCN. 46,47 Moreover, the PL spectra show that the PL intensities of BDCN are higher than those of DCN and BiVO 4 (Figure 12D), suggesting a higher radiative recombination probability of the photogenerated charge carriers in the composites than in DCN and BiVO 4 . If the charge-carrier transfer in BDCN4 followed a conventional heterojunction route, faster PL decay kinetics and a lower PL intensity should be observed. However, BDCN4 shows a much slower PL decay process and a stronger PL intensity, and, as shown in the result of the photocurrent−time measurement above, the charge-carrier transfer in BDCN is more efficient than in the other photocatalysts, which is contradictory to a traditional heterojunction and suggests that the photogenerated electron−hole pairs between BiVO 4 and DCN follow a transfer channel other than the heterojunction one. Furthermore, the photocatalytic hydrogen generation experiment was carried out, and the result is shown in Figure S7. The average hydrogen generation rate of BDCN is 26.3 μmol/h, which is about twice that of DCN (14.4 μmol/h), whereas BiVO 4 cannot generate hydrogen, revealing that the CB position of BDCN is more negative, which further argues against the heterojunction mechanism. Consistent with similar works reported by other groups, 36−40,45−48 the Z-scheme charge-carrier-transfer channel existing in natural photosynthesis appears to be more appropriate for our BDCN photocatalysts, as shown in Scheme 1B. Under visible-light irradiation, both DCN and BiVO 4 can be excited and electron−hole pairs are generated. The photoexcited electrons in the CB of BiVO 4 are transferred quickly to the VB of DCN through the interface between DCN and BiVO 4 and then combine with the holes in the VB of DCN, which leads to the slower PL decay process and stronger PL intensity of BDCN compared with those of DCN and BiVO 4 . Consequently, the most negative electrons in the CB of DCN reduce the molecular oxygen dissolved in water to yield • O 2 − , and the most positive holes in the VB of BiVO 4 generate the active • OH radicals. The verification of • O 2 − and • OH radicals as the active species confirms the likely direct Z-scheme mechanism of our photocatalytic system. Namely, the Z-scheme charge-carrier-transfer channel preserves the strong reducing capacity of the electrons in the CB of DCN and the oxidizing capacity of the holes in the VB of BiVO 4 for the production of • O 2 − and • OH reactive species, thus leading to highly efficient separation of photogenerated electron−hole pairs and enhanced photocatalytic performance.
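The PL lifetimes quoted above come from fitting the decay traces to a double-exponential model, I(t) = A 1 exp(−t/τ 1 ) + A 2 exp(−t/τ 2 ). The sketch below illustrates such a fit and one common convention for an intensity-weighted average lifetime; the decay trace is synthetic, and the averaging formula is an assumption rather than necessarily the one used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: double-exponential PL decay fit. The trace below is synthetic.
def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 60, 300)                          # time / ns
signal = biexp(t, 0.7, 2.0, 0.3, 12.0)               # synthetic decay
signal += np.random.default_rng(0).normal(0, 0.005, t.size)

p0 = [0.5, 1.0, 0.5, 10.0]                           # initial guesses
(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, signal, p0=p0)

# One common convention for an intensity-weighted average lifetime:
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns, tau_avg = {tau_avg:.2f} ns")
```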
In general, the much higher photocatalytic activity of BDCN than that of DCN, bulk BiVO 4 , BCN, or physical mixture can be ascribed to (1) the enhanced absorption ability by pyridine doping method, (2) the in situ growth of small nanosized BiVO 4 particles, and (3) the mediator-free Z-scheme chargecarrier-transfer route between the doped DCN and BiVO 4 nanoparticles, which can not only preserve the high oxidation potential of the photogenerated electrons and holes in the CB of DCN and VB of BiVO 4 but also reduce the recombination rate of electron−hole pairs. The stability and reusability of the BDCN4 photocatalyst were assessed by the recycling photodegradation experiment, and the result is shown in Figure 13. Even after five times of successive recycling photodegradation experiments, the BDCN4 photocatalyst did not show obvious decrease in the photocatalytic degradation activity under visible-light irradiation, implying that the BDCN4 photocatalyst is sufficiently stable for photocatalytic degradation. Moreover, the crystalline structure of BDCN4 photocatalysts after photocatalytic experiments was examined by XRD, as shown in Figure S8. No obvious change in the crystalline structure was observed, further demonstrating the superior stability of our BDCN nanocomposite photocatalyst. CONCLUSIONS A direct solid-state nano-Z-scheme BiVO 4 /DCN photocatalytic system with a superior visible-light-harvesting ability and a photocatalytic activity was synthesized via an in situ hydrothermal method in this work. The BiVO 4 nanoparticles were successfully in situ deposited on the DCN nanosheets uniformly. All as-synthesized BiVO 4 /DCN nanocomposite photocatalysts showed photocatalytic activity superior to those of the pristine DCN, bulk BiVO 4 , and physical DCN/ BiVO 4 mixture under visible-light irradiation (λ > 420 nm). • O 2 − and • OH are confirmed to be the oxygen active species by the active species trapping experiments and EPR technology. Moreover, the hydrogen evolution test, decreased PL decay process, and increase in the PL intensity further verified that the Z-scheme charge-carrier-transfer channel is more suitable to explain the mechanism than that of heterojunction for the BDCN photocatalyst. The significantly improved photocatalytic activity under visible-light irradiation of our successful design of solid-state nano-Z-scheme principled BiVO 4 /DCN nanocomposite can be ascribed to the enhanced visible-light absorption and largely reduced the recombination of photogenerated electron−hole pairs. Moreover, the excellent stability and the recycling ability of our BiVO 4 /DCN nanocomposite photocatalyst ensure its practical applications in the environmental remediation. EXPERIMENTAL SECTION 4.1. Preparation of DCN, BiVO 4 , and BDCN. The DCN sample was synthesized via organic pyridine doping approach and thermal copolymerization treatment, as reported in our previous research. 14 Briefly, 70 mg of 2,6-diaminopyridine (DPY) and 3.0 g of dicyandiamide were dispersed into 15 mL of water, and the mixed solution was maintained at 100°C under stirring to remove water. Finally, the obtained solid precursor was put in a crucible and maintained at 550°C for 4 h in a muffle furnace with the heating rate of 15°C/min under the air atmosphere. The obtained solid pyridine-doped g-C 3 N 4 was denoted as DCN. The BDCN nanocomposite photocatalysts were in situ synthesized by a controlled hydrothermal route. 
The Bi 3+ ions were absorbed on the surface of g-C 3 N 4 by a chemical bond 49,50 because of the tri-s-triazine (heptazine) ring structure of g-C 3 N 4 , which benefited the growth and dispersion of BiVO 4 nanoparticles on the surface of the DCN sheet. The controlled in situ hydrothermal synthesis procedure of BDCN is described as follows: 1.81 g of Bi(NO 3 ) 3 ·5H 2 O, 0.435 g of NH 4 VO 3 , 1.16 g of urea, and different amounts of DCN were mixed in 30 mL of deionized (DI) water. HNO 3 (6 M) was used to adjust the pH of the mixture to 1, and the mixture was stirred for 1 h at 25°C. The obtained mixture solution was sealed into a Teflon-lined stainless steel autoclave (50 mL) and kept at 180°C for 4 h. After being cooled down to 25°C naturally, the precipitate was filtered, collected, washed by DI water, and finally maintained at 60°C for drying. The obtained BDCN nanocomposite photocatalysts with different mass percentages of DCN varying from 10 to 50% were denoted as BDCN1, BDCN2, BDCN3, BDCN4, and BDCN5, respectively. In the absence of DCN, only the bulk BiVO 4 sample was obtained by the same hydrothermal process. For comparison, the pure g-C 3 N 4 was prepared by thermal polymerization (denoted as CN). In addition, the BiVO 4 / undoped g-C 3 N 4 nanocomposite was also prepared by the same hydrothermal method (denoted as BCN, the mass percentage of g-C 3 N 4 was optimized in terms of the results of photocatalytic analysis). 4.2. Characterization. Powder XRD measurement was carried out on a D/Max-IIIA instrument by using Cu Kα radiation (the scanning rate was 0.02°·s −1 ). The FT-IR spectrum was recorded on a Vector 33 infrared spectrometer. Thermal degradation was identified via TGA (TGA 92-18, Setaram, N 2 atmosphere, 313−1173 K). DRS were recorded by a Shimadzu U-3010 spectrophotometer with BaSO 4 as the reflectance standard. XPS of the samples was obtained by an ESCALAB 250Xi spectrometer equipped with a pre-reduction chamber. SEM and TEM images were obtained by S-3700 scanning TEM and JEM 2100 field emission TEM. The solidstate 13 C NMR spectra were measured by a Bruker Avance III 500 spectrometer. The transient time-resolved luminescence decay spectra were recorded by FLS980 series of fluorescence spectrometers. EPR measurements with and without light irradiation were recorded using a Bruker model A300 spectrometer. Two filters, an IR cutoff (λ < 800 nm) and a UV cutoff (λ > 420 nm), installed between the 300 W Xe lamp and the samples, filtered out the light, which were used as the visible-light source. The TOC of the phenol or MO solution was tested by a high-temperature TOC/TNb analyzer (Liqui TOC II, Elementar, Germany). 4.3. Photocatalytic Activity. Degradation of the phenol and MO solution under visible-light irradiation was performed to evaluate the photocatalytic activities of the catalysts. The phenol compound is considered to be a direct threat to the health of humankind because of its highly toxic, persistent, and biorecalcitrant properties. In a typical experiment, 0.05 g of the as-prepared photocatalyst was dispersed into 150 mL of the phenol or MO aqueous solution with a concentration of 10 mg/L under stirring. Before light irradiation, the suspension was stirred for half an hour in the dark with the light turned off to get the adsorption−desorption equilibrium. 
After light irradiation, about 5 mL of the suspension was taken every 0.5 h and centrifuged to remove the photocatalysts and get a clear solution in the upper level, of which the absorption spectrum was obtained by a Shimadzu UV-2050 spectrophotometer. The experiments of the active species trapping were carried out with the aim to explore the active species involved during the degradation process and the possible photocatalytic mechanism. A certain amount of AO, BQ, and IPA was introduced into the reaction suspensions to capture the possible active species, such as hole (h + ), superoxide radical ( • O 2 − ), and hydroxyl radical ( • OH), respectively. N 2 and O 2 gases were also bubbled into the reaction system to further confirm the active species. To test the stability and reusability of the BDCN nanocomposite photocatalyst, the BDCN4 nanocomposite photocatalyst was centrifuged and collected after the photocatalytic test experiment, dried in 80°C, and redispersed into the fresh phenol or MO solution. The recycling photocatalytic reaction was repeated under the same condition for successive five times, and the XRD pattern of the BDCN4 sample was recorded and compared with that of the original sample to detect any degradation of the photocatalyst after each photocatalytic run. Electrochemical Analysis. An electrochemical analyzer (CHI-660E, Chenhua, China) was used to test the electrochemical properties of the samples in a conventional three-electrode cell. The working electrodes were made by the doctor-blading method. A Pt sheet was used as a counter electrode, and an Ag/AgCl electrode was used as a reference electrode. The Na 2 SO 4 aqueous solution (1.0 M) was used as the supporting electrolyte. The area exposed in the solution and illuminated by the lamp was 0.25 cm 2 . With a voltage of 0.5 V bias versus Ag/AgCl and a perturbation signal of 10 mV with a frequency at 1 kHz set, the periodic on/off photocurrent response of the modified fluorine-doped tin oxide glass (FTO) was measured.
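Referring back to the photocatalytic activity protocol in Section 4.3, the residual phenol or MO concentration is normally obtained from the absorbance at the analyte's absorption maximum, assuming Beer–Lambert proportionality (C/C 0 ≈ A/A 0 ). The sketch below shows that conversion and the resulting removal efficiency for purely illustrative absorbance readings; it is not part of the published data treatment.

```python
# Minimal sketch: converting absorbance readings (taken every 0.5 h) into C/C0 and
# removal efficiency, assuming Beer-Lambert proportionality C/C0 = A/A0.
# Absorbance values are illustrative, not read from the published figures.
absorbance = [1.25, 0.78, 0.49, 0.30, 0.18, 0.10]   # at lambda_max, every 30 min
a0 = absorbance[0]

c_over_c0 = [a / a0 for a in absorbance]
removal_pct = 100.0 * (1.0 - c_over_c0[-1])

print([round(x, 3) for x in c_over_c0])
print(f"removal after 150 min: {removal_pct:.1f} %")   # 92.0 % for these numbers
```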
A Rapid Motor Task-Based Screening Tool for Parkinsonism in Community-Based Studies Background: The prevalence of parkinsonism in developing countries is largely unknown due to difficulty in ascertainment because access to neurologists is often limited. Objective: Develop and validate a parkinsonism screening tool using objective motor task-based tests that can be administered by non-clinicians. Methods: In a cross-sectional population-based sample from South Africa, we evaluated 315 adults, age >40, from an Mn-exposed (smelter) community, using the Unified Parkinson Disease Rating Scale motor subsection 3 (UPDRS3), Purdue grooved pegboard, and kinematic-UPDRS3-based motor tasks. In 275 participants (training dataset), we constructed a linear regression model to predict UPDRS3. We selected motor task summary measures independently associated with UPDRS3 (p < 0.05). We validated the model internally in the remaining 40 participants from the manganese-exposed community (test dataset) using the area under the receiver operating characteristic curve (AUC), and externally in another population-based sample of 90 participants from another South African community with only background levels of environmental Mn exposure. Results: The mean UPDRS3 score in participants from the Mn-exposed community was 9.1 in both the training and test datasets (standard deviation = 6.4 and 6.1, respectively). Together, 57 (18.1%) participants in this community had a UPDRS3 ≥ 15, including three with Parkinson's disease. In the non-exposed community, the mean UPDRS3 was 3.9 (standard deviation = 4.3). Three (3.3%) had a UPDRS3 ≥ 15. Grooved pegboard time and mean velocity for hand rotation and finger tapping tasks were strongly associated with UPDRS3. Using these motor task summary measures and age, the UPDRS3 predictive model performed very well. In the test dataset, AUCs were 0.81 (95% CI 0.68, 0.94) and 0.91 (95% CI 0.81, 1.00) for cut points for neurologist-assessed UPDRS3 ≥ 10 and UPDRS3 ≥ 15, respectively. In the external validation dataset, the AUC was 0.85 (95% CI 0.73, 0.97) for UPDRS3 ≥ 10. AUCs were 0.76–0.82 when excluding age. Conclusion: A predictive model based on a series of objective motor tasks performs very well in assessing severity of parkinsonism in both Mn-exposed and non-exposed population-based cohorts. INTRODUCTION The Global Burden of Disease Study estimates that the number of people affected by Parkinson's disease (PD) more than doubled from 1990 to 2015 with the highest prevalence in high-income regions and lowest in sub-Saharan Africa and Eastern Europe in 2015 (1,2). However, little is known about the burden of the disease in resource-poor environments such as many regions in Africa (3). Relatively low reported PD prevalence in Africa is almost certainly inaccurate, since case identification is challenging in countries that lack sufficient research and clinical expertise to survey their populations (4). Globally, there are 3.1 neurologists per 100,000 people, whereas in Africa there are only 0.1 neurologists per 100,000 people (5). Given the increasing life expectancy in many African countries, estimating true disease burden of diseases of aging, such as PD, is critical to providing adequate healthcare resources for these patients. The Unified Parkinson Disease Rating Scale motor subsection 3 (UPDRS3) remains the most widely used tool to quantify parkinsonian motor signs in patient and non-patient populations (6)(7)(8)(9)(10). 
In addition to quantifying parkinsonism severity, this standardized examination elicits the cardinal signs required to make a diagnosis of PD. Nevertheless, many developing countries lack the relevant clinical expertise required to quantify parkinsonism or diagnose PD accurately. For this study, we sought to develop a motor battery that can be used to predict UPDRS3 scores in population-based African cohorts. In practice, such estimates of the UPDRS3 score might be useful for initial screening to identify those who should receive further evaluation by a neurologist. These UPDRS3 estimates might also be suitable for epidemiological studies investigating neurological health effects of environmental or occupational exposures when UPDRS3 assessment by a movement disorders specialist is not feasible. Standard Protocol Approvals The Washington University School of Medicine Human Research Protection Office (St. Louis, Missouri, United States) and the University of the Witwatersrand Human Research Ethics Committee (Johannesburg, Gauteng, South Africa) approved this study. All participants provided written informed consent. Participants Within a cross-sectional population-based study with 315 South African adults age >40 and living <5 km from a large manganese (Mn) smelter in Meyerton, South Africa, we developed and validated a predictive model for UPDRS3, in training (N = 275, 87%) and test (N = 40, 13%) datasets, respectively. We enrolled the participants in these two groups consecutively (Supplementary Figure 1). Participants in the training dataset lived a mean of 1.85 km (SD = 0.77) from the smelter, and the participants in the test dataset lived a mean of 1.96 Km (SD = 0.74) from the smelter. The participants from Meyerton comprised a subset of participants who were recruited as part of a larger environmental Mn study (11). The recruitment approach for this larger study was based upon the location of residence, and was designed to obtain a true population-based sample within three Meyerton-based settlements. Briefly, we pre-selected every other residence (two communities) or all residences (one, smaller community). We attempted to recruit all age-eligible adults in each pre-selected residence to participate in the study, or, in the two communities where only half of residences were selected, the residence to the left if no one was home or eligible. Across the three settlements in the Meyerton community, 462/666 (69.4%) of homes that we visited had at least one eligible adult who agreed to participate. Air monitoring in all three Meyerton settlements confirmed relatively high mean concentrations of PM 2.5 -Mn (203 ng/m 3 at a long-term fixed site in one settlement, and based upon concurrent sampling approximately half that level in the other two settlements) (11). This is ∼12-20 times higher than mean PM 2.5 -Mn in other populated areas in South Africa (11,12), and ∼4 times higher than the mean modeled PM 2.5 -Mn exposure levels for an Mn smelter community in the United States (13). We therefore refer to all participants from this community as Mnexposed. We externally validated our model in 90 additional participants, also >40 years old, from a community in the same province, Ethembalethu. Ethembalethu was smaller than the Meyerton-based settlements, so we attempted to recruit every age-eligible resident using the same door-to-door approach to obtain a population-based sample. 
In this community, 79/108 (73.1%) of homes that we visited had at least one age-eligible adult who agreed to participate. Ethembalethu is an industryfree community, with no nearby Mn smelting or mining operations, located ∼70 km from Meyerton. Air monitoring in Ethembalethu demonstrated mean concentrations of PM 2.5 -Mn ∼20 times lower than at the fixed site in Meyerton (10 ng/m 3 ) (11). This is ∼40% lower than mean PM 2.5 -Mn levels measured in the air in a city on the southeastern coast of South Africa with some industry but no Mn smelting or mining activities (12). We therefore refer to all participants from Ethembalethu as non-exposed. For inclusion in the present work, we required participants in both communities to meet the following criteria: (1) be non-ambidextrous (self-report as right-handed or left-handed), (2) complete at least one trial for the dominant and nondominant hand for each of five motor tasks under the direction of a trained test administrator, (3) have been examined by a movement disorder specialist using the UPDRS3 and had complete UPDRS3 data. Exclusion criteria were: (1) current use of neuroleptic medications, and (2) neurologic co-morbidities that might compromise the accuracy of the UPDRS3, such as stroke or spasticity. In order to include individuals with other conditions that would cause incomplete UPDRS3 data, such as injured/missing limbs or inability to undergo the pull test to assess balance, we used imputation of the respective subscore(s) to obtain complete UPDRS3 data when possible. We included individuals regardless of UPDRS3 score, presence of PD, or occupational Mn exposure, in order to ensure that our original population-based samples remained representative of the respective settlements. The above exclusions, need for imputation, presence of PD, and past or current occupational Mn exposure were all uncommon (all <2.5%) in the larger study (11). Grooved Pegboard and Kinematic Testing Participants completed five motor tasks-the grooved pegboard task and four accelerometry-based kinematic-UPDRS3 tasks-in their homes, at the time of enrollment into the original study. These kinematic-UPDRS3 tasks were designed to characterize finger tapping, hand rotation, action tremor, and postural tremor. All participants had grooved pegboard and kinematic testing conducted by one of three non-clinician test administrators. We trained all three test administrators previously using a videobased training module, followed by supervised administration of the tests to non-research participants. In addition, we made frequent data quality checks throughout the study. For the grooved pegboard task, we used a standard grooved pegboard device (Lafayette Instrument Company, Lafayette, Indiana) and followed published testing procedures (14). For the four kinematic tests, test administrators placed a wireless motion sensor (Kinesia TM , Great Lakes NeuroTechnologies, Independence, Ohio) (15)(16)(17)(18)(19) on the top of the participant's index finger. The Kinesia Motion Sensory device comprises a triaxial accelerometer and triaxial gyroscope, allowing measurement of acceleration (linear) and velocity (angular), respectively, along all three axes (x, y, and z) at 64 Hertz. We recorded the digitized signals on a computer tablet, using motion capture software (Great Lakes NeuroTechnologies, Independence, Ohio), following at least one non-recorded practice trial. 
Participants then completed three 12-second trials while seated for each hand for each task: (1) postural tremor-participant was instructed to raise both arms, straight out in front of his/her body and stay as still as possible; (2) action tremor-participant alternated touching his/her index finger to his/her nose and to the administrator's finger held an arm's length away from the participant; (3) finger tapping-participant tapped his/her index finger and thumb together while keeping the other fingers stable and the elbow extended; (4) hand rotation-participant rotated his/her hand at the wrist, positioning the arm so that the elbow was flexed and the hand open. Participants were instructed to perform finger tapping and hand rotation tasks with as large an amplitude and as fast, as possible. Participants completed the trials for the right hand first, followed by the left hand, for each of these four tasks. A previous study in PD patients that used a similar Kinesia Motion Sensory device found that in the more parkinsonian hand the test-retest reliability across three 15s trials, as measured by the intraclass correlation coefficient, was 0.71 for postural tremor and 0.94 for finger tapping speed (18). We then developed, validated, and applied comprehensive computer code to process these large datasets (Supplementary Material 1). Specifically, we first checked and standardized data from each trial. For example, we removed kinematic data for trials that appeared to be incomplete (<12 s) or for sensor failures. We then calculated six summary measures across the three trials (mean velocity, mean peak velocity, coefficient of variation, decrement in peak velocity, cycles/second, and decrement in cycles/second; Supplementary Table 1). We also calculated three summary measures for each hand from grooved pegboard testing (time to place all 25 pegs, number of pegs placed, number of pegs dropped). We calculated all of the kinematic and grooved pegboard measures for each hand (dominant, non-dominant), using a mean of all ≤3 trials and by taking only the first trial. As an additional variation for mean velocity for the finger tapping and hand rotation tasks, we isolated both the upward/downward motions and the clockwise/counterclockwise motions, respectively. Assessment of UPDRS3 Score and Subscores One movement disorder specialist (BR) examined all participants using the UPDRS3 (20), blinded to performance on the grooved pegboard and kinematic motor tasks. The examination occurred in one central non-clinical location in each of the two communities, while study staff conducted the testing in-home on an earlier date (a median of 37 and 3 days earlier, respectively, for Meyerton and Ethembalethu) without the examiner present, and individual testing results were not available to the examiner. In addition to UPDRS3 total score, we focused on selected UPDRS3 subscores (upper limb bradykinesia and tremor) to facilitate validation and selection of grooved pegboard and kinematic summary measures for development of the prediction of the total UPDRS3 score (Supplementary Material 2). Determination of Handedness and Demographic Variables We used self-reported handedness to classify UPDRS3 subscores and the five motor tasks as dominant or non-dominant. Participants also provided socio-demographic information, including age, sex, ethnicity, and home language. Statistical Analysis We performed all data processing and statistical analyses using Stata version MP 14.2 (StataCorp, College Station, Texas) (21). 
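The authors processed the raw 64-Hz sensor streams with their own code (Supplementary Material 1) and did the statistical work in Stata. Purely as an illustration, the sketch below shows how two of the summary measures named above, mean velocity and a coefficient of variation, could be derived from one 12-second trial of angular-velocity samples; the data are synthetic, the variability calculation is one plausible reading of "coefficient of variation", and none of this is the study's pipeline.

```python
import numpy as np

# Illustrative sketch (not the study's code): summary measures from one 12-s trial of
# angular velocity sampled at 64 Hz on the gyroscope axis relevant to the task.
FS = 64                                        # sampling rate / Hz
rng = np.random.default_rng(1)
omega = np.abs(rng.normal(200, 60, 12 * FS))   # synthetic |angular velocity| in deg/s

mean_velocity = omega.mean()
peak_velocity = omega.max()

# One plausible variability measure: CV of per-second mean velocity across the trial
per_second = omega.reshape(12, FS).mean(axis=1)
cv = per_second.std(ddof=1) / per_second.mean()

print(f"mean velocity = {mean_velocity:.1f} deg/s, "
      f"peak = {peak_velocity:.1f} deg/s, CV = {cv:.3f}")
```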
To help prioritize which summary measures to include in the predictive model, we estimated Spearman's ρ correlation coefficients between each of the kinematic and grooved pegboard test summary measures and the respective UPDRS3 subscores (our gold standard) (Supplementary Material 2). Specifically, we sought to identify summary measures with the Spearman's ρ correlation coefficients of the greatest magnitude (relative to each other, or at least weakly or significantly correlated, i.e., ρ > 0.20 and/or with p < 0.05) and, more importantly, in the expected direction (positive or negative). Greater UPDRS3 scores and subscores indicate greater parkinsonism, which we anticipated would be associated with longer grooved pegboard times, fewer pegs placed, and more pegs dropped. With regard to the kinematic tests, we also anticipated that greater UPDRS3 scores would be associated with lower velocities (or fewer cycles per second) and greater decrement (and hence greater variability as assessed by the coefficient of variation) on the finger tapping and hand rotation tasks. We investigated whether correlations differed according to hand dominance and whether motor task data from only one hand might be sufficient to predict UPDRS3. We also examined the association between summary measures derived from only the first of the three trials to determine if a single trial would be sufficient for UPDRS3 prediction. Model Development We used linear regression with the total neurologist rated UPDRS3 as the outcome variable, to predict UPDRS3 in our training dataset (N = 275). Age and motor task summary measures selected above were our primary a priori predictors of interest, and we initially retained all as continuous measures (8,22). We then used locally weighted scatterplot-smoothing (LOWESS) graphs to determine, for each of these predictors, whether linear modeling or other approaches were most appropriate. The LOWESS graphs suggested a quadratic term for hand rotation and finger tapping mean velocities, which we verified and confirmed to be true for only hand rotation (In a simple linear regression between UPDRS3, hand rotation mean velocity, and its quadratic term, the quadratic term was statistically significant; therefore, we included this term in our model. In contrast, the square term for finger tapping was not statistically significant when including finger tapping mean velocity). To assess multicollinearity, we used the variance inflation factor (VIF) and conservatively verified that the VIF was <2 for all predictors in our final model. The exception was the hand rotation linear term and its quadratic term because these are inherently correlated, so both were included to better capture the true association between that predictor variable and UPDRS3. Secondarily, we repeated this model development process but did not allow age to be included as a predictor, given that age in the Mn-exposed community could be a surrogate for duration of Mn exposure, and therefore the coefficient for age might not translate well to settings without this potential cause of parkinsonism. Model Validation We formally validated model performance in our test dataset (N = 40) and in the independent, external dataset (N = 90), using the receiver operating characteristic (ROC) curve (21,23). We estimated the area under the ROC curve (AUC) using a dichotomized UPDRS3 variable (UPDRS3 ≥ 10 and UPDRS3 ≥ 15, when possible) as our gold standard. 
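The analyses above were run in Stata. As an illustration of the same workflow only, the sketch below fits a linear model for UPDRS3 that includes a quadratic hand-rotation term and then evaluates a dichotomized prediction with an ROC AUC on held-out data; the inputs are synthetic, the split mirrors the 275/40 training/test division, and scikit-learn stands in for the study's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score

# Illustrative workflow only (the study used Stata): predict UPDRS3 from age,
# grooved pegboard time, finger-tapping mean velocity, and hand-rotation mean
# velocity plus its square, then validate with an ROC AUC at a UPDRS3 >= 10 cut point.
rng = np.random.default_rng(2)
n = 315
age = rng.uniform(40, 80, n)
peg_time = rng.normal(90, 20, n)
tap_vel = rng.normal(250, 60, n)
rot_vel = rng.normal(300, 80, n)

# Synthetic "true" UPDRS3 with the expected directions of association
updrs3 = (0.15 * age + 0.05 * peg_time - 0.02 * tap_vel - 0.03 * rot_vel
          + 0.00004 * rot_vel**2 + rng.normal(0, 3, n) + 10)

X = np.column_stack([age, peg_time, tap_vel, rot_vel, rot_vel**2])
train, test = slice(0, 275), slice(275, 315)

model = LinearRegression().fit(X[train], updrs3[train])
pred = model.predict(X[test])
auc = roc_auc_score(updrs3[test] >= 10, pred)
print(f"test AUC (UPDRS3 >= 10): {auc:.2f}")
```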
We used a UPDRS3 score of 15 as a cut point because most idiopathic PD patients become symptomatic and present for medical attention with UPDRS3 scores ≥15 (24-26), i.e., that this threshold would reflect functionally impairing motor dysfunction. We also used a UPDRS3 score of 10 as a cut point because our primary focus was to develop a screening tool, i.e., one would likely use a more conservative (lower) UPDRS3 than 15. We also calculated sensitivity and specificity for these same gold standard variables at selected predicted UPDRS3 cut points. In order to obtain an overall measure of whether the predicted UPDRS3 might be suitable for use as a continuous outcome measure in epidemiologic studies, we calculated Spearman's ρ correlation coefficient to measure agreement between neurologist-assessed UPDRS3 and the model-derived UPDRS3. We calculated the 95% confidence interval (CI) for this ρ and for the AUCs. Obtaining an estimate for the lower CI is more informative than a p-value. A CI for ρ that excludes zero indicates significance at a two-sided α = 0.05, but for this particular comparison we viewed a lower CI > 0.20 to indicate clearly that the observed correlation was at least weakly positive. An AUC of 0.5 indicates discrimination ability no better than chance, an AUC of 0.7 is fair, an AUC of 0.8 is good, an AUC of 0.9 is excellent, and an AUC of 1.0 indicates perfect discrimination (27). Therefore, we viewed a lower CI > 0.70 to indicate clearly that the observed AUC was at least fair. Characteristics of Participants Most (98.5%) of participants were of black/African ethnicity, with a median age of 51 ( Table 1). The mean UPDRS3 score was 7.9 (SD = 6.3, range 0.0-38.5), but differed markedly between the two communities. Upper limb bradykinesia contributed substantially to UPDRS3 as 16.8% of participants had >6 points in total across the six upper limb bradykinesia subscores, i.e., a subscore of >1 on at least one of these six scores (not shown). In contrast, action/postural tremor was relatively uncommon. Only 20 (4.9%) of the participants had an action/postural tremor subscore of 2, and only three participants had prominent action/postural tremor or only rest tremor (subscore >2). Three participants who had complete kinematic and grooved pegboard data, all from Meyerton, had PD according to this neurological examination. Performance of Summary Measures Several summary measures for upper limb bradykinesia correlated with the respective UPDRS3 subscores (Supplementary Table 2), with grooved pegboard time and hand rotation and finger tapping mean velocities demonstrating the greatest agreement. Of these three measures, the mean velocity for hand rotation demonstrated the best agreement (ρ = −0.43 and ρ = −0.42 for the dominant and non-dominant hand, respectively), with the corresponding UPDRS3 subscore, i.e., rapid alternating movements. Agreement for these kinematic summary measures was not improved by isolating the direction of the movement, i.e., upward vs. downward motions within finger tapping or clockwise vs. counterclockwise motions within rapid alternating movements. The other kinematic summary measures (cycles/second, decrement in peak velocity, and decrement in cycles/second) did not perform as well as mean and peak velocities as a measure of upper limb bradykinesia. Of these, the greatest agreement was for coefficient of variation and "decrement" in the non-dominant hand for the hand rotation task (Supplementary Table 2). 
The absolute values of the correlation coefficients between the action/postural tremor summary measures and UPDRS3 subscore were all well below 0.20 (action tremor task: ρ = −0.06 to 0.08; postural tremor task: ρ = −0.11 to 0.06; Supplementary Table 3). When comparing a given measure to the respective UPDRS3 subscore, dominant and non-dominant hands yielded similar correlations (Supplementary Table 2). However, testing from the non-dominant hand performed slightly better for the "decrement" summary measures. We found similar correlations between UPDRS3 and kinematic testing when we used only the first trial, rather than the mean of all available trials (Supplementary Table 4).
UPDRS3 Predictive Model
The final model included the following predictors for one trial from the non-dominant hand: hand rotation kinematic mean velocity linear term and quadratic term (squared term); finger tapping kinematic mean velocity; grooved pegboard time; and age, each as linear terms (Table 2, Figure 1). Finger tapping and hand rotation mean velocities were inversely associated with UPDRS3, whereas age and grooved pegboard time were positively associated with UPDRS3. Administration of the tests selected for this predictive model took <10 min (Figure 1).
Model Performance
In our test dataset, performance of the predictive model as measured by the AUC was 0.81 (95% CI 0.68, 0.94) for identifying participants with neurologist-assessed UPDRS3 ≥ 10 (Figure 2). Too few participants in this sample had a UPDRS3 ≥ 15 to construct a smooth ROC curve. When attempting to identify participants who had a UPDRS3 ≥ 10, a cut point of predicted UPDRS3 of 4 resulted in sensitivity of 83.3% and specificity of 68.3% (Supplementary Table 6). Agreement between the neurologist-assessed and predicted UPDRS3 as continuous variables was 0.58 (95% CI 0.46, 0.74). In a post-hoc exploratory analysis in which we considered how well the predictive model might identify the three individuals with PD in Meyerton (training and test datasets combined), the predicted UPDRS3 scores were 16.3, 17.1, and 21.7, all above the UPDRS3 ≥ 15 cut point. (Table 2 abbreviation: UPDRS3, Unified Parkinson's Disease Rating Scale motor subsection 3.)
DISCUSSION
In this large study in South Africa we predicted the UPDRS3 score with a limited battery of kinematic and grooved pegboard tests administered by trained non-clinician community members in <10 min per participant. AUCs in two validation datasets demonstrated that performance of this predictive model for identifying individuals with UPDRS3 scores ≥10 or ≥15 was good to excellent. In addition, we confirmed that there was at least a moderate correlation between the predicted UPDRS3 scores and neurologist-assessed UPDRS3. Notably, this UPDRS3 predictive model worked equally well in communities with and without environmental Mn exposure, i.e., in population-based samples with either relatively high or relatively low UPDRS3 scores. Taken together, our findings indicate that this UPDRS3 predictive model likely can be applied to facilitate screening programs or research studies in a wide variety of settings, including under-resourced environments with limited access to PD specialists or other neurologists. In all potential applications of this UPDRS3 predictive model, the first step after administering the three-test battery is to calculate the mean velocities from the two kinematic tests.
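These application steps are illustrated in the following minimal sketch; the mean-velocity helper, regression coefficients, and referral cut point are placeholders for illustration and are not the fitted values from this study.

```python
# Sketch of the three application steps: summary measures -> prediction -> cut point.
import numpy as np

def mean_velocity(samples_deg_per_s: np.ndarray) -> float:
    """Step 1: mean absolute velocity over all sampled time points of a task,
    after isolating the relevant axis (x for tapping, y for hand rotation)."""
    return float(np.mean(np.abs(samples_deg_per_s)))

def predict_updrs3(rot_vel, tap_vel, peg_time, age,
                   coef=(20.0, -0.08, 5e-5, -0.01, 0.05, 0.10)):
    """Step 2: insert the summary measures into the prediction equation.
    coef = (intercept, rotation linear, rotation quadratic, tapping, pegboard, age);
    all coefficient values here are hypothetical."""
    b0, b1, b2, b3, b4, b5 = coef
    return b0 + b1 * rot_vel + b2 * rot_vel**2 + b3 * tap_vel + b4 * peg_time + b5 * age

rot_vel = mean_velocity(np.random.default_rng(2).normal(0, 300, 2000))  # y-axis, hand rotation
tap_vel = mean_velocity(np.random.default_rng(3).normal(0, 400, 2000))  # x-axis, finger tapping
score = predict_updrs3(rot_vel, tap_vel, peg_time=95.0, age=55.0)

# Step 3: dichotomize at a pre-selected cut point to flag people for specialist review.
needs_referral = score >= 4
print(round(score, 1), needs_referral)
```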
Of the many kinematic summary measures we assessed, the mean velocities are the easiest to calculate, further ensuring the potential usefulness of our model in many situations. Calculation of these summary measures only requires the isolation of the kinematic data from the relevant axis, i.e., x-axis for finger tapping and y-axis for hand rotations, before taking the mean across all sampled time points with the respective movement. In the second step, these two means from kinematic testing, along with the grooved pegboard time and age, are simply inserted into the equation produced by our predictive model. The resulting UPDRS3 score could be used in an epidemiologic study in which it is not possible for a neurologist to examine any participants. Alternatively, in a third step, one would dichotomize the predicted UPDRS3 score according to a pre-selected cut point to identify individuals with a particular UPDRS3, corresponding to the desired sensitivity and specificity. One likely would dichotomize the predicted UPDRS3 scores at the lowest possible cut point so as to achieve the highest sensitivity that is feasible, given that a specialist would need to examine all individuals with predicted UPDRS3 scores above the selected cut point. As part of a tiered screening protocol, maximizing sensitivity would either facilitate clinical care to the largest percentage of people in a community who might benefit from treatment, or alternatively would minimize misclassification of research study participants in terms of a dichotomous outcome of interest. As evidence of the potential utility of this type of protocol, a similar tiered screening approach was adopted in a previous occupational Mn exposure setting in South Africa (28). We acknowledge some limitations that could affect usefulness of our predictive model. First, the kinematic methods may not be easily incorporated in all settings. We trained our fieldworkers extensively in the use of kinematic devices, and we view this training as essential. Second, kinematic testing generates large amounts of data, requiring expertise in data handling. In addition, a substantial amount of computer storage and computational power would be required to process these data for many individuals at once, which might be a challenge in under-resourced environments. Third, while we used a common motion capture device, such devices have unique operational principles, instrumentation, and type of data produced (29) so data from other devices might not be ideal for use in our model. Finally, while a UPDRS3 score from a predictive model might not be as accurate as neurologist assessment, our tool is more objective than the UPDRS3. As a result, our model can address potential important challenges that might arise in epidemiologic studies, such as examiner blinding. The strengths of our study include the use of one movement disorder specialist to obtain gold standard UPDRS3 ratings, the relatively large sample size, and a unique study population at risk for parkinsonism paired with a lower risk population, each representative of their underlying communities. These strengths positioned us well to develop and validate the resulting UPDRS3 predictive model, which also has several strengths. First, we were successful in minimizing the time required to complete the included motor testing without materially affecting performance of the UPDRS3 predictive model. Most notably, we confirmed that it was sufficient to conduct only one trial of the selected tests in one hand. 
Second, all predictors can be assessed objectively, i.e., without subjective assessments that might require clinical expertise. Although some prior predictive models of UPDRS3 score or "parkinsonism" benefitted from the use of objective motor assessment (either a different fine motor task or gait assessment), some of these models required test administrators to make clinical judgements (30,31). Third, no translation of questions or questionnaires is required beyond obtaining age. Some prior predictive models required assessment of selected medical conditions (30,31), as might be done via questioning, or even administration of full questionnaires (32). Translation burden (33), and population literacy makes a purely questionnaire-based screening protocol especially challenging to administer in low-resourced countries. Avoiding the need for translation is especially beneficial in the context of Africa and other locales in which many languages are spoken. Restriction of our model to age and a modest battery of objective motor tasks to avoid the above potential challenges did not come at the expense of performance. Previous occupational or community-based cohorts in which investigators attempted to identify people with parkinsonism achieved AUCs ranging from 0.72 to 0.79 (30)(31)(32). The predictive model we present achieved slightly better AUCs than in these studies. Both the grooved pegboard test and the finger tapping task, such as assessed in a similar manner as here, have been shown to provide good discrimination between existing PD patients and comparable controls previously, with AUCs as high as 0.80-0.87 (34,35). While our study is unique in the use of these particular tests in population-based non-patient populations to estimate a UPDRS3 score, our results are consistent with these prior studies, suggesting that these motor tests are useful in a variety of study or clinical populations. Nonetheless, validation of our exact predictive model before its application in additional populations is recommended. However, we anticipate the model's usefulness in additional populations, given the model worked well to predict UPDRS3 in two communities with markedly different mean air Mn levels and UPDRS3 scores. While we were only able to explore the potential performance of the model in identifying PD in one of these communities, those results were very encouraging, as well. These findings demonstrate the robust predictive ability of the model in multiple practice settings and further underscore the usefulness of selected motor tasks in screening for parkinsonism. In summary, using selected accelerometry-based kinematic-UPDRS3 tasks and the grooved pegboard, we developed a UPDRS3 predictive model to identify individuals with potential parkinsonism, which demonstrated good predictive ability. The model performed exceptionally well considering that we applied it in non-patient populations and relied on non-clinicians to administer the tests. These findings have important public health applications in screening for or assessing parkinsonism in clinical or research settings in regions of the world with limited clinical neurologic expertise. The proposed screening tool provides an alternative assessment in these low-resourced environments because of ease of administration. DATA AVAILABILITY STATEMENT Individual participant data that underlie the results reported in this article, after de-identification, will be made available upon reasonable request, following article publication. 
Data will be made available to investigators whose proposed use of the data has been approved by an independent review committee identified for this purpose and after approval of the protocol by the University of the Witwatersrand Ethics Committee and the Washington University Human Resource Protection Committee. The analysis performed must be solely to achieve aims in the approved proposal. Proposals should be directed to BR, at racetteb@wustl.edu, and requestors will need to sign a data use agreement. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Washington University School of Medicine Human Research Protection Office (St. Louis, Missouri, United States) and the University of the Witwatersrand Human Research Ethics Committee (Johannesburg, Gauteng, South Africa). The participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS BR, MU, GN, and SN: research project conception, organization, execution, and manuscript preparation review and critique. SN: statistical analysis design. WD and SN: statistical analysis execution. BR, MU, and GN: statistical analysis review and critique. WD: manuscript preparation and writing of the first draft. All authors: approval of final manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by the National Institutes of Health-National Institute of Environmental Health Sciences (Grant Nos. R01ES025991, R01ES026891-S1, and K01ES028295). The funder had no role in the design and conduct of the study; collection, statistical analysis, or interpretation of the data; preparation, review, or approval of the manuscript; and the decision to publish these results.
2021-05-13T13:28:53.827Z
2021-05-13T00:00:00.000
{ "year": 2021, "sha1": "46508bf1d60cd80dd367f66e40450d0d89d772eb", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.653066/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46508bf1d60cd80dd367f66e40450d0d89d772eb", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12681192
pes2o/s2orc
v3-fos-license
Dinitrogen fixation and dissolved organic nitrogen fueled primary production and particulate export during the VAHINE mesocosm experiment (New Caledonia lagoon) . In the oligotrophic ocean characterized by nitrate (NO − 3 ) depletion in surface waters, dinitrogen (N 2 ) fixation Introduction Nitrogen (N) availability constitutes one of the most limiting factors for marine primary production (PP) (Moore et al., 2013). About 80 % of the global ocean surface is depleted in dissolved inorganic N (nitrate (NO − 3 ) and ammonium (NH + 4 ) < 1 µmol L −1 ), and characterized by low PP, low biomass and low particulate matter export fluxes (Longhurst, 2007). In these low nitrate, low chlorophyll (LNLC) ecosystems, the strong stratification of the photic surface layer prevents the mixing with NO − 3 -replete deep waters and requires phytoplanktonic communities to rely on alternative N sources for growth. These sources comprise the virtually inexhaustible dissolved dinitrogen (N 2 ) pool (∼ 400 µmol L −1 ), and the large (∼ 5 µmol L −1 ) but mainly refractory dissolved organic N (DON) pool. The first pool (N 2 ) is only accessible to diazotrophic organisms able to reduce the N 2 gas molecule into bioavailable NH + 4 . This process called N 2 fixation (or diazotrophy) is responsible for the main external source of N for the upper ocean (Gruber and Galloway, 2008;Mahaffey, 2005) and fuels PP in LNLC ecosystems (e.g., Dugdale and Goering, 1967;Karl et al., 1997). However, the fate of recently fixed N 2 in the planktonic food web and its potential impact on carbon export are poorly understood. Moreover, this fate may differ according to the diazotrophic species involved in N 2 fixation. The widely distributed filamentous cyanobacterium Trichodesmium spp., one of the main contributors to global N 2 fixation (Capone et al., 1997), is rarely found in sediment traps (Chen et al., 2003;Walsby, 1992), indicating that Trichodesmium spp. has a low direct export efficiency. However, a recent study performed in the southwest Pacific indicates that N fixed by Trichodesmium spp. is preferentially and rapidly (within few days) transferred to diatoms and bacteria (Bonnet et al., 2015b), potentially resulting in indirect carbon export. Conversely, diatom-diazotroph associations (DDAs) drive an efficient biological carbon pump in the Amazon River plume (Subramaniam et al., 2008) and in the North Pacific Gyre (Dore et al., 2008;Karl et al., 2012), indicating efficient export of the N fixed by these organisms. Finally, unicellular N 2 -fixing cyanobacteria (UCYN) are presumably the most abundant in the global ocean and are also major contributors to global N 2 fixation (e. g. Moisander et al., 2010;Montoya et al., 2004). However, little is known regarding the fate of the N fixed by UCYN, whether it is directly or indirectly exported out of the euphotic zone or recycled in surface waters (Thompson and Zehr, 2013). The second N pool (DON) may constitute a significant N source for planktonic communities but remains poorly constrained (Bronk, 2002). The DON pool is a "black box" composed of various chemical products more or less refractory with their own specific turnover time (Bronk et al., 2007). The persistence of high DON concentrations in surface oceanic waters has previously been understood to mean that it is unavailable for the marine biota. 
However, the determination of DON concentrations is subject to high analytical uncertainties (Czerny et al., 2013) that may hide low but ecologically relevant changes in concentrations resulting from the consumption of the labile or semi-labile fraction of the DON pool. Furthermore, due to the heterogeneous composition of DON, isotopic labeling experiments using tracers are difficult to conduct, which explains the lack of information on the fluxes transiting in and out of the DON pool (Bronk, 2002; Bronk et al., 2007). Although heterotrophic bacteria are presumably the main users of this organic pool, it has been shown that primary producers can also use it to meet their N requirements (Antia et al., 1991; Berman and Bronk, 2003). As with fixed N 2 , the fate of DON assimilated by marine plankton will depend on the consumers, i.e., whether they remineralize or export the particulate organic matter produced. Studying the fate of N in the ocean is complex as it requires one to follow the biogeochemical characteristics, the succession of planktonic species and the potential export from the same water mass for several weeks. In the open ocean, such studies are further complicated by physical processes (e.g., lateral advection) that spread the water masses. Here, we isolated a part of the water column from physical dispersion using in situ large mesocosms (52 m 3 ) equipped with sediment traps in order to overcome this issue. The objectives of this study were (1) to investigate the contribution of N 2 fixation and DON use to PP and particle export, and (2) to trace the fate of these N sources in the ecosystem, i.e., whether the freshly produced particulate organic N (PON) is accumulated or exported out of the system. The mesocosms were deployed in the subtropical New Caledonian lagoon (southwest Pacific), characterized by LNLC conditions, where high N 2 fixation rates and abundances of Trichodesmium spp. and UCYN communities have been reported (Biegala and Garcia et al., 2007; Le Borgne, 2008, 2010). Phosphate (PO 3− 4 ) availability has previously been reported to control N 2 fixation in the southwest Pacific (Moutin et al., 2005, 2008). In order to avoid PO 3− 4 limitation and to create favorable conditions for diazotroph growth, the mesocosms were fertilized with PO 3− 4 at the beginning of the experiment. Diazotrophs developed extensively in the mesocosms during the 23-day experiment (Turk et al., 2015). The diazotroph community was dominated by DDAs during the first phase of the experiment and by unicellular diazotrophs (UCYN-C) during the second phase. The experiment took place in the southwest lagoon of New Caledonia (Ouillon et al., 2010). The complete description of the mesocosm design is detailed in Bonnet et al. (2015c). Briefly, the enclosures were cylindrical bags 2.3 m in diameter and reaching about 15 m deep into the water. The bags extended 1 m above the surface to prevent the inclusion of external water. They were supported by a polyethylene frame and kept at the surface with floats. The bags were straightened by weights at the bottom of the mesocosms. After deployment, the mesocosms were left open at the bottom for 24 h to ensure total homogeneity in the water column. On day 1, the bottom was closed with a sediment trap consisting of a funnel-shaped end fitted with a 3 in. adapter for fastening a collection bottle for the daily sinking material. The PO 3− 4 fertilization was conducted in the evening of day 4. The fertilization consisted of an addition in each mesocosm of 20 L of a filtered seawater solution enriched with KH 2 PO 4 (41.6 mM) leading to a final concentration of ∼ 0.8 µmol L −1 in the mesocosms.
To ensure homogenization, the solution was added to each mesocosm using polyethylene tubing connected to a Teflon pump lifted regularly from the bottom to the surface of the mesocosms.
In situ monitoring and water sampling
CTD casts and water collection were conducted daily in each of the three replicate mesocosms (hereafter called M1, M2, and M3) and in surrounding waters. Seawater sampling was performed from a floating platform moved around the mesocosms. CTD casts were performed at 10:00 (local time) in each mesocosm and in surrounding waters using a memory probe SBE 911 plus (Sea-Bird Electronics, Inc.) equipped with conductivity, turbidity, fluorimetry, temperature and dissolved oxygen sensors. The CTD was operated at a speed of 0.2-0.3 m s −1 . The water was collected just before the CTD casts at three depths in each mesocosm (1, 6 and 12 m) using an air-compressed Teflon pump (AstiPure ™ ) connected to polyethylene tubing. Samples for particulate and dissolved organic and inorganic matter (C, N and P) were first collected in 50 L polypropylene carboys at the three depths and subsampled back on the R/V Alis, moored 1 nautical mile away from the mesocosm site. Samples for N 2 fixation rate and PP determination were directly collected from the pump in polycarbonate bottles (4.5 L) for each depth in each mesocosm and in surrounding waters. Sinking material was collected every day by divers from sediment traps as described in Bonnet et al. (2015c).
Primary production rates and phosphate turnover time
PP rates and PO 3− 4 turnover time (T PO 4 , i.e., the ratio of PO 3− 4 concentration to uptake) were measured using the 14 C/ 33 P dual labeling method (Duhamel et al., 2006). Bottles (60 mL) were amended with 33 P and 14 C and incubated for 3 to 4 h on a mooring line close to the mesocosm site at the sampling depths. Incubations were stopped by adding 50 µL of KH 2 PO 4 solution (10 mmol L −1 ), which minimizes further 33 P assimilation through isotope dilution, and by keeping the bottles in the dark to stop 14 C uptake. Samples were then filtered on 0.2 µm polycarbonate membrane filters, and placed into scintillation vials with 250 µL of HCl 0.5 M. After 12 h, 5 mL of scintillation liquid (ULTIMA Gold MV, PerkinElmer Inc.) was added to each vial before the first count on a Packard Tri-Carb ® 2100TR scintillation counter. The activities of 33 P and 14 C were separated using a second count made 5 months later, taking into account the half-life of 33 P of 25.38 days (Duhamel et al., 2006). PP and T PO 4 were calculated according to Moutin et al. (2002).
N 2 fixation rates
Samples for N 2 fixation incubations were collected in 4.5 L polycarbonate bottles. The latter were amended with 15 N 2 -enriched seawater according to the protocol developed by Mohr et al. (2010). Briefly, the 15 N 2 -enriched seawater was prepared daily from 0.2 µm filtered seawater collected from the same site in a 4.5 L polycarbonate bottle. Seawater was first degassed through a degassing membrane (Membrana, Minimodule ® , flow rate fixed at 450 mL min −1 ) connected to a vacuum pump (< 200 mbar absolute pressure) for at least 1 h. It was then tightly closed without headspace with a silicone septum cap and amended with 1 mL of 15 N 2 (98.9 %, Cambridge Isotopes) per 100 mL. The bottle was then shaken vigorously and incubated overnight at 3 bars (20 m depth) to promote 15 N 2 dissolution.
Incubation bottles were then amended with 1 : 20 (vol : vol) of 15 N 2 -enriched seawater and closed without headspace with silicone septum caps. These bottles were incubated on an in situ mooring line close to the mesocosms at the appropriate sampling depths for 24 h. After incubation, 12 mL of incubated water was sampled in Exetainers ® in 10 replicate samples and analyzed using a Membrane Inlet Mass Spectrometer (Kana et al., 1994) to estimate the final enrichment of the 15 N 2 pool during the incubation. The measured final 15 N / 14 N ratio of the N 2 in the incubation bottles was 2.4 ± 0.2 atom% (n = 10). Samples were then filtered on combusted (450 °C, 4 h) GF/F filters and stored at −20 °C for the duration of the cruise. Every day, T0 samples were spiked with 15 N 2 and immediately filtered in order to determine the initial background 15 N / 14 N ratio of PON for calculation of N 2 fixation rates. Filters were then dried at 60 °C for 24 h prior to analysis using a mass spectrometer (Delta plus, Thermo Fisher Scientific) coupled with an elemental analyzer (Flash EA, Thermo Fisher Scientific) for PON concentration and PON 15 N enrichment determination. The standard deviation was 0.004 µmol for PON and 0.0001 atom % for the 15 N / 14 N isotopic ratio. The fluxes were defined as significant when 15 N enrichment was higher than 3 times the standard deviation obtained from T0 samples. The fluxes were calculated according to the equation detailed in Montoya et al. (1996). A recent study (Dabundo et al., 2014) reports potential contamination of commercial 15 N 2 gas stocks with 15 N-enriched NH + 4 , NO − 3 and/or nitrite (NO − 2 ), and nitrous oxide. The 15 N 2 Cambridge Isotopes stocks analyzed contained low concentrations of 15 N contaminants, and the potential overestimation of N 2 fixation rates modeled using this contamination level would range from undetectable to 0.02 nmol N L −1 d −1 . These rates are at the lower end of the range of rates measured in this study and we thus considered that this issue did not affect the results reported here.
Chlorophyll a and inorganic and organic matter analyses
Samples for chlorophyll a (Chl a) concentration determination were collected by filtering 550 mL of seawater on GF/F filters. Filters were directly stored in liquid nitrogen. Chl a was extracted in methanol and measured by fluorometry (Herbland et al., 1985). Samples for total organic carbon (TOC) concentrations were collected in duplicate at only one depth (6 m) in each mesocosm and in surrounding waters in precombusted (450 °C, 4 h) 12 mL sealed glassware flasks, acidified with H 3 PO 4 and stored in the dark at 4 °C until analysis. Samples were analyzed on a Shimadzu TOC-V analyzer with a typical precision of 2 µmol L −1 . Samples for particulate organic carbon (POC) concentrations were collected by filtering 2.3 L of seawater through a precombusted (450 °C, 4 h) GF/F filter and determined using the combustion method (Strickland and Parsons, 1972) with an EA 2400 CHN analyzer. Filters were not acidified to remove inorganic carbon as it is assumed to be < 10 % of the total particulate C (Wangersky, 1994). Dissolved organic carbon (DOC) concentrations were calculated as the difference between TOC and POC concentrations. The DOC precision calculated from the analytical precision of each term according to the error propagation law was 5 µmol L −1 . Samples for NH + 4 were collected in 40 mL glass vials and analyzed by the fluorescence method according to Holmes et al.
(1999) on a Trilogy fluorometer (Turner Designs). The detection limit was 0.01 µmol L −1 . Samples for NO − 3 , NO − 2 , PO 3− 4 , total N (TN) and total P (TP) concentration determination were collected in 40 mL glass bottles and stored at −20 °C until analysis. NO − 3 , NO − 2 and PO 3− 4 concentrations were determined using a segmented flow analyzer according to Aminot and Kérouel (2007). The detection limit was 0.01 and 0.005 µmol L −1 for NO − 3 + NO − 2 and PO 3− 4 , respectively. TN and TP concentrations were determined according to the wet oxidation procedure described in Pujo-Pay and Raimbault (1994). The precision was 0.5 µmol L −1 and 0.02 µmol L −1 for TN and TP, respectively. Samples for PON and particulate organic P (POP) concentrations were collected by filtering 1.2 L of water on precombusted (450 °C, 4 h) and acid washed (HCl, 10 %) GF/F filters and analyzed according to the wet oxidation protocol described in Pujo-Pay and Raimbault (1994) with a precision of 0.06 and 0.007 µmol L −1 for PON and POP, respectively. DON concentrations were calculated by subtracting PON, NO − 3 , NO − 2 and NH + 4 concentrations from TN concentrations. Dissolved organic P (DOP) concentrations were calculated by subtracting POP and PO 3− 4 concentrations from TP concentrations. The precision calculated according to the propagation law of the analytical precision associated with each parameter was 0.5 and 0.03 µmol L −1 for DON and DOP, respectively. Samples from sediment traps were collected daily and preserved in a 5 % buffered solution of formaldehyde and stored at 4 °C until analysis. All the swimmers were handpicked from each sample, and were found to be a negligible source of particulate matter compared to the total particulate matter exported (< 5 %). Samples were then desalted using ultrapure water (Milli-Q grade) and freeze dried. The daily amounts of POC exported (POC export ) and PON exported (PON export ) were measured using a CHN analyzer (Perkin Elmer 2400). The POP exported (POP export ) was measured after mineralization using nitric acid and further determination of mineralized P according to Pujo-Pay and Raimbault (1994).
Data presentation and statistical analyses
Building a mass-balanced elemental budget is theoretically possible as all the stocks and fluxes were sampled in the mesocosms. However, attempts to close an elemental mass budget in similar mesocosm studies were limited by the large analytical uncertainties on the organic pool determination (Czerny et al., 2013; Guieu et al., 2014). Nevertheless, as the only incoming (N 2 fixation) and export (PON export ) fluxes of N were characterized accurately, we were able to calculate the change in TN content (ΔTN calc ) of a mesocosm. It was defined as ΔTN calc = ΣN 2,fix − ΣPON export , where ΣN 2,fix and ΣPON export are the depth-averaged N 2 fixation and PON export fluxes cumulated over time. This approach does not discriminate among the different N pools in the water column but allows a precise evaluation of the TN variation in the mesocosm and a direct comparison of the N 2 fixation and the PON export . The values of fluxes and concentrations presented in the text are averaged over the three depths (no significant differences were observed among depths, paired Friedman test, α = 0.05). Statistical differences between each mesocosm or between the mesocosms and surrounding waters were tested using the paired non-parametric Wilcoxon signed-rank test (α = 0.05) for each parameter presented.
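As a numerical illustration of the difference and budget calculations described above, the following minimal sketch uses invented concentrations, the cumulated P2 fluxes reported in the Results, and assumes independent analytical errors added in quadrature.

```python
# DON by difference, its propagated uncertainty, and the Delta TN_calc budget term.
import math

tn, pon, no3, no2, nh4 = 6.5, 1.0, 0.02, 0.01, 0.01   # µmol N L-1, illustrative
don = tn - (pon + no3 + no2 + nh4)                     # DON by difference

# Propagation of assumed-independent analytical uncertainties (quadrature).
s_tn, s_pon, s_no3no2, s_nh4 = 0.5, 0.06, 0.01, 0.01
s_don = math.sqrt(s_tn**2 + s_pon**2 + s_no3no2**2 + s_nh4**2)

# Budget term: change in total N from the two measured fluxes (P2 values, µmol N L-1).
sum_n2fix, sum_pon_export = 0.25, 0.45
delta_tn_calc = sum_n2fix - sum_pon_export
print(f"DON = {don:.2f} ± {s_don:.2f}; dTN_calc = {delta_tn_calc:.2f}")
```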
If no significant differences between the mesocosms were detected, the values were averaged between the mesocosms. The associated uncertainties were calculated by combining the analytical precision and the standard deviation of each term according to the propagation of errors law. Differences between P1 and P2 were tested using the non-parametric Kruskal-Wallis test (α = 0.05).
Hydrological background
The detailed description of hydrological and inorganic nutrient conditions during the experiment is presented in Bonnet et al. (2015c). Briefly, seawater temperature increased inside the mesocosms and in surrounding waters from 25.5 to 26.7 °C over the course of the experiment. The water column was well mixed in the mesocosms as temperature and salinity were homogeneous with depth over the course of the experiment. The sum of NO − 3 and NO − 2 concentrations averaged over depth in the mesocosms was below 0.04 µmol L −1 the day before the PO 3− 4 fertilization (day 4) and decreased to 0.01 µmol L −1 towards the end of the experiment. NH + 4 concentrations were close to the detection limit of 0.01 µmol L −1 from day 1 to day 18 and increased in all the mesocosms up to 0.06 µmol L −1 toward the end of the experiment. Prior to the PO 3− 4 fertilization, PO 3− 4 concentrations in the mesocosms ranged from 0.02 to 0.05 µmol L −1 . The day after the fertilization, PO 3− 4 concentrations reached ∼ 0.8 µmol L −1 in all mesocosms. Then, the concentrations decreased steadily, tending towards initial concentrations (0.02-0.08 µmol L −1 ) at the end of the experiment. In surrounding waters, NO − 3 remained below 0.20 µmol L −1 and PO 3− 4 averaged 0.05 µmol L −1 throughout the experiment.
(Figure 1. Temporal evolution of (a) PO 3− 4 turnover time, (b) N 2 fixation rates and (c) primary production (PP) rates (µmol C L −1 d −1 ) in the mesocosms M1 (red), M2 (blue) and M3 (green) and in surrounding waters (black). The three dots of each color represent the measured values at the three sampled depths. The solid lines are the 3-day running mean value. P1 and P2 denote the two phases of the experiment when the diazotrophic community was dominated by diatom-diazotroph associations and unicellular N 2 -fixing cyanobacteria (group C), respectively.)
Phosphate turnover time
The evolution of T PO 4 was closely related to the dynamics of PO 3− 4 concentrations. Before the PO 3− 4 fertilization, T PO 4 decreased from 1.0 ± 0.1 d on day 3 to 0.4 ± 0.1 d on day 4 in all the mesocosms (Fig. 1a). At the start of P1, T PO 4 dramatically increased in all mesocosms, reaching 35.7 ± 15.7, 30.1 ± 8.6 and 35.8 ± 10.5 d in M1, M2 and M3, respectively. T PO 4 then decreased steadily in all the mesocosms, reaching 1 d on day 14, day 19 and day 21 in M1, M2 and M3, respectively. At the end of the experiment (day 23), T PO 4 values were the lowest reached over the entire experiment in all the mesocosms with values below 0.2 d. In surrounding waters, T PO 4 was stable around 1.8 ± 0.7 d from the start of the experiment to day 15 and then decreased to reach 0.5 ± 0.1 d on day 23 (Fig. 1a).
N 2 fixation and primary production rates
Before the PO 3− 4 fertilization, N 2 fixation rates inside the mesocosms were 17.4 ± 7.3 nmol N L −1 d −1 and decreased in the days following the fertilization (Fig. 1b). During P1, the average N 2 fixation rates in the mesocosms were 9.8 ± 4.0 nmol N L −1 d −1 . During P2, N 2 fixation rates in the mesocosms were significantly (p < 0.05) higher than during P1, averaging 27.7 ± 8.6 nmol N L −1 d −1 .
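For reference, a minimal sketch of the isotope-dilution calculation behind such rates (after Montoya et al., 1996) is given below; apart from the measured 2.4 atom% enrichment of the N 2 pool reported in the methods, the numbers are illustrative.

```python
# N2 fixation rate from 15N enrichment of PON relative to the enriched N2 pool.
a_pon_t0 = 0.3663    # natural-abundance 15N of PON at T0 (atom %)
a_pon_t24 = 0.3750   # 15N of PON after the 24 h incubation (atom %), illustrative
a_n2 = 2.4           # measured 15N of the dissolved N2 pool during incubation (atom %)
pon = 1000.0         # PON concentration (nmol N L-1), illustrative
dt = 1.0             # incubation time (days)

n2_fix = (a_pon_t24 - a_pon_t0) / (a_n2 - a_pon_t0) * pon / dt
print(f"N2 fixation rate ~ {n2_fix:.1f} nmol N L-1 d-1")
```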
N 2 fixation rates were not significantly different (p > 0.05) among the three mesocosms throughout the experiment. In surrounding waters, N 2 fixation rates did not show any clear pattern over the course of the experiment and averaged 9.2 nmol N L −1 d −1 , ranging from 1.9 to 29.3 nmol N L −1 d −1 (Fig. 1b). The day before the PO 3− 4 fertilization, PP was not significantly different (p > 0.05) among the three mesocosms and averaged 0.4 ± 0.1 µmol C L −1 d −1 (Fig. 1c). During P1, PP increased steadily in the mesocosms to reach 0.9 ± 0.1 µmol C L −1 d −1 at the end of P1. During P2, while T PO 4 was decreasing, PP continued to increase in the mesocosms with values generally exceeding 1.5 µmol C L −1 d −1 . During P2, PP was significantly higher in M3 than in M1 and M2 (p < 0.05). In surrounding waters, PP was stable before and during P1 at 0.9 ± 0.3 µmol C L −1 d −1 and increased during P2, reaching 1.5 ± 0.2 µmol C L −1 d −1 on day 23. Over the whole experiment, PP in the mesocosms was not significantly different from surrounding waters (p > 0.05) except in M3 during P2 (p < 0.05). Assuming that all the diazotrophs are primary producers and have a C : N fixation ratio of 6.6 (Redfield, 1934), we calculated that N 2 fixation contributed 10.8 ± 5.0 % (range 3.7-32.2 %) of PP in the mesocosms after the PO 3− 4 fertilization and 5.7 ± 2.0 % (range 2.2-9.1 %) in surrounding waters. The contribution of N 2 fixation to PP was not significantly different (p > 0.05, n = 57) during P1 (9.0 ± 3.3 %) and P2 (12.6 ± 6.1 %).
Chl a and particulate organic matter dynamics
The day before the fertilization, Chl a concentrations in the mesocosms were 0.21 ± 0.05 µg L −1 (Fig. 2). During P1, Chl a did not show any clear pattern; concentrations were in the 0.12 to 0.28 µg L −1 range. During P2, Chl a increased in all the mesocosms but to a greater extent in M3 compared to M1 and M2 and reached maximal depth-averaged concentrations of 0.55 ± 0.01, 0.47 ± 0.08 and 1.29 ± 0.22 µg L −1 in M1, M2 and M3, respectively. Before and during P1, Chl a concentrations in surrounding waters were close to the concentrations in the mesocosms and ranged between 0.09 and 0.28 µg L −1 . During P2, they increased but to a lesser extent than in the mesocosms, with daily averaged concentrations of 0.42 ± 0.03 µg L −1 on day 23. The day before the PO 3− 4 fertilization, POC concentrations ranged from 9 to 15 µmol L −1 (Fig. 3a). During P1, POC concentrations did not show any clear pattern in the mesocosms, and concentrations ranged from 6 to 13 µmol L −1 . During P2, POC concentrations increased in M3, reaching 18 µmol L −1 on day 21, whereas they remained stable in M1 and M2. POC concentrations in surrounding waters were stable throughout the experiment and were significantly lower (p < 0.05) than in the mesocosms. Initial PON concentrations were about 0.9 µmol L −1 and remained relatively stable during P1 (Fig. 3b). During P2, PON concentrations increased in all the mesocosms, by a factor of 1.5 in M1 and M2, and by a factor of 2 in M3 at day 23, reaching 2 µmol L −1 . PON concentrations also increased outside but to a lesser extent, with values remaining below 1 µmol L −1 . POP concentrations showed the same pattern as PON concentrations. The day before the PO 3− 4 fertilization, POP concentrations were not significantly different (p > 0.05) between the mesocosms and averaged 0.05 µmol L −1 (Fig. 3c). During P1, the concentrations in the mesocosms remained relatively stable.
During P2, POP concentrations increased in all the mesocosms but to a greater extent in M3, reaching 0.07, 0.07 and 0.12 µmol L −1 in M1, M2 and M3, respectively. The particulate POC / PON ratio decreased over the course of the experiment from 12 to 8, but remained higher than the Redfield ratio (6.6).
Dissolved organic matter dynamics
DOC concentrations ranged from 50 to 74 µmol L −1 (average value of 60 ± 4 µmol L −1 ) over the course of the experiment without any clear trend over the 23 days (Fig. 4a). Furthermore, no significant differences were measured between the mesocosms and surrounding waters (p > 0.05). Before the PO 3− 4 fertilization, DON concentrations averaged 5.2 ± 0.5 µmol L −1 on day 4 (Fig. 4b). Concentrations remained stable during P1 in and out of the mesocosms. During P2, DON concentrations decreased significantly in the mesocosms (Fig. 4b); DON concentrations in surrounding waters were not significantly different from those in the mesocosms up to day 17 (p > 0.05). From this day, even though a significant decrease (p < 0.05) in DON concentrations was also observed in surrounding waters, the resulting concentrations were significantly higher outside the mesocosms than inside (p < 0.05). The DOP dynamics was similar to the DON dynamics: during P1, DOP concentrations were on average 0.14 ± 0.03 µmol L −1 and remained stable up to day 14, day 16 and day 17 for M1, M2 and M3, respectively (Fig. 4c). After these days, DOP concentrations significantly decreased (p < 0.05) and reached 0.09 ± 0.01 µmol L −1 on average. DOP concentrations also decreased in surrounding waters from day 18, but to a lesser extent than in the mesocosms.
(Figure 5. Temporal evolution of (a) particulate organic carbon exported (POC export ), (b) particulate organic nitrogen exported (PON export ) and (c) particulate organic phosphorus exported (POP export ) fluxes (nmol L −1 d −1 ) in the mesocosms expressed in equivalent water volume. The color code is identical to that in Fig. 1. The solid lines are the 3-day running mean value. P1 and P2 denote the two phases of the experiment when the diazotrophic community was dominated by diatom-diazotroph associations and unicellular N 2 -fixing cyanobacteria (group C), respectively.)
Export fluxes and their coupling with primary production and N 2 fixation
Before the PO 3− 4 fertilization (day 4), the exported fluxes were not significantly different (p > 0.05) among the three mesocosms (104 ± 35, 6.5 ± 1.7 and 0.35 ± 0.09 nmol L −1 d −1 on average for POC export , PON export and POP export , respectively) (Fig. 5). The daily exported particulate matter remained relatively stable during P1, averaging 164 ± 141, 10.2 ± 7.1 and 0.9 ± 1.3 nmol L −1 d −1 for POC export , PON export and POP export , respectively. During P2, the daily export fluxes increased continuously in all the mesocosms, to a greater extent than during P1, peaking at 1197 ± 257 and 106.1 ± 20.1 nmol L −1 d −1 for C and N, respectively. The daily POP export ranged from 1.0 to 18.4 nmol L −1 d −1 .
The e ratio, defined as the amount of exported carbon (POC export ) relative to the fixed carbon (PP), was significantly higher (p < 0.05, n = 57) during P2 (39.7 ± 4.9 %) than during P1 (23.9 ± 20.2 %). The time-integrated N 2 fixation over P1 was 0.10 ± 0.02 µmol L −1 on average for all the mesocosms and did not significantly differ from the integrated PON export of 0.10 ± 0.04 µmol L −1 (Fig. 6). The resulting change in the TN pool in the mesocosms remained between −0.01 and 0.01 µmol L −1 and was not significantly different from 0 (sign test, p > 0.05). The time-integrated N 2 fixation rate over P2 was 0.25 ± 0.06 µmol L −1 and the PON export was 0.45 ± 0.04 µmol L −1 (Fig. 6). The resulting change in the ΔTN calc pool in the mesocosms remained not significantly different from 0 (sign test, p > 0.05) up to day 18 but deviated negatively from 0 (sign test, p < 0.05) from day 19 to day 23 (Fig. 6). At the end of P2, the decrease in the ΔTN calc pool was 0.20 ± 0.04 µmol L −1 .
The contribution of N 2 fixation to PP (10.8 ± 5.0 %) in the mesocosms and in surrounding waters (5.7 ± 2.0 %) was in the upper range of previous studies in the Pacific Ocean (Raimbault and Garcia, 2008; Shiozaki et al., 2013) and the Mediterranean Sea (Bonnet et al., 2011; Ridame et al., 2013). Prior to PO 3− 4 fertilization, NO − 3 concentrations were < 0.04 µmol L −1 . As there was no external supply of NO − 3 , the potential consumption of the initial NO − 3 in the mesocosms represented < 11.5 % of the integrated N 2 fixation rates over P1 and P2 (0.35 ± 0.08 µmol L −1 ) (Fig. 6). Thus, N 2 fixation supplied nearly all of the new production during the experiment. These results indicate that in a N-depleted system, diazotrophs can provide enough new N to sustain high PP (exceeding 2 µmol C L −1 d −1 ) and biomass (up to 1.42 µg L −1 of Chl a), as long as PO 3− 4 does not limit N 2 fixation.
The relative efficiency of different diazotrophs to export particulate matter
Only a few studies have focused on the direct coupling between N 2 fixation and particulate export (Dore et al., 2008; Karl et al., 2012; White et al., 2012). To our knowledge, the only study comparing the export efficiency of different diazotrophs reports that DDA blooms could contribute up to 44 % of the direct export in the northeast Pacific, while UCYN (Group A) and Trichodesmium spp. could account for only 0 to 10 % of the export. The scarcity of data is due to methodological issues associated with the use of sediment traps in the open ocean, owing to (1) the patchy distribution of N 2 fixers that are not necessarily collected by the sediment traps, and (2) the temporal lag between production and export, which is difficult to assess (Nodder and Waite, 2001). The mesocosm approach was designed to overcome these experimental limitations. The shallow depth of the traps (∼ 15 m) and the absence of NO − 3 normally supplied via the nitracline prevent any comparison with open ocean studies. Nevertheless, the mesocosm approach enables a comparison of the export efficiency under contrasting ecological situations. In this case, the period dominated by DDAs (P1) is compared with the period dominated by UCYN-C (P2). During P1, the biomass was stable in the mesocosms and the amount of recently fixed N 2 was equal to the amount of exported PON, suggesting a tight coupling between the two processes (Fig. 6). It has been shown that large aggregates of the diatom Rhizosolenia spp., representing the majority of DDAs during P1 (Turk et al., 2015), can sink at high rates (Villareal et al., 1996).
(Figure 6. Time-integrated dinitrogen fixation rates (ΣN 2,fix ) and particulate organic nitrogen exported (ΣPON export ) during P1 (dominance of diatom-diazotroph associations) and P2 (dominance of unicellular N 2 -fixing cyanobacteria group C) in the mesocosms, together with the calculated change in total N content (ΔTN calc ), defined as the difference between ΣN 2,fix and ΣPON export . The shaded areas represent the uncertainty associated with the ΔTN calc change.)
This suggests that during this experiment, the N 2 recently fixed by DDAs remained within the symbiotic association and was quickly exported in the settling particles. This agrees with Karl et al. (2012), who showed that DDAs support the export pulses regularly observed in late summer in the tropical North Pacific Ocean. During P2, the increase in PON concentrations (Fig. 3b) suggests that part of the freshly produced biomass remained in the water column. The accumulation of PON probably favored remineralization processes, explaining the increase in NH + 4 concentrations. This may have enhanced the transfer of the recently fixed N 2 to the non-diazotrophic plankton, as demonstrated by Bonnet et al. (2015a), and explains the development of picocyanobacteria (Leblanc et al., 2015). Additionally, the total amount of N provided by N 2 fixation did not account for all the exported PON during P2 (Fig. 6), implying that an additional N source played a significant role in promoting the export. The only alternative N source is DON, which indeed exhibited a significant decrease in concentration of 0.9 ± 0.7 µmol L −1 in the mesocosms during P2 (see Sect. 4.3 for further discussion on DON consumption). Assuming that DON and N 2 fixation are the only possible sources of N in the mesocosms, we calculated that a DON use of 0.9 µmol L −1 would have supported ∼ 78 % of the PON production during P2, and potentially fueled the PON export to the same extent. This is in agreement with Torres-Valdés et al. (2009) and Letscher et al. (2013), who showed that the DON pool is a dynamic contributor to the N cycle able to support up to 40 % of the vertical PON export in the oligotrophic gyres of the Pacific and Atlantic Oceans. A quantification of the diazotrophs in the sediment traps, performed on day 19, shows that ∼ 10 % of the UCYN-C biomass in the mesocosms was exported on this day, explaining ∼ 7 % of the PON export (Bonnet et al., 2015a). Thus, the N 2 recently fixed by UCYN-C can directly be exported, but is probably more efficiently transferred to non-diazotrophic plankton through mineralization processes. The contrast between P1 and P2 is also observed using the e ratio. The production driven by UCYN-C was more efficient in promoting POC export than the production driven by DDAs. During P1, it is probable that the C recently fixed by DDAs remained within the symbiotic association and sank with the recently fixed N 2 , constituting a direct and net C export. During P2, the higher efficiency of C export strongly suggests that the DON ultimately fueled PP which, in turn, increased POC export. Additionally, when UCYN-C dominated, an enhanced N remineralization may have enabled more C to be fixed per unit of fixed N 2 , leading to a higher e ratio. A proportionally higher N remineralization following high PP and N 2 fixation rates is supported by similar findings in the western North Pacific warm pool (Shiozaki et al., 2013).
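A back-of-the-envelope version of the budget figures discussed above is sketched below, using values reported in the text and assuming a C : N ratio of 6.6 and that PON production during P2 can be approximated by DON use plus time-integrated N 2 fixation.

```python
# Rough reconstruction of the percentage figures quoted in the discussion.
c_to_n = 6.6

# Contribution of N2 fixation to primary production (example P2 values from the text).
n2_fix = 27.7e-3      # µmol N L-1 d-1 (27.7 nmol N L-1 d-1)
pp = 1.5              # µmol C L-1 d-1
frac_pp_from_n2 = n2_fix * c_to_n / pp            # ~0.12, i.e. ~12 % of PP

# Share of P2 PON production potentially supported by the DON drawdown.
don_drawdown = 0.9    # µmol N L-1 over P2
pon_production = don_drawdown + 0.25              # DON use + time-integrated N2 fixation
frac_from_don = don_drawdown / pon_production     # ~0.78, i.e. ~78 %

print(f"{frac_pp_from_n2:.0%} of PP from N2 fixation; "
      f"{frac_from_don:.0%} of PON production from DON")
```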
The unexpectedly high dissolved organic matter consumption
The use of dissolved organic compounds, and its implications for PP in the open ocean, has long been demonstrated (Antia et al., 1991). The use of DOP by plankton communities in the oligotrophic ocean has been observed in the North Pacific Ocean (Bjorkman and Karl, 2003) and in the Atlantic Ocean (Lomas et al., 2010; Mather et al., 2008) and generally occurs under PO 3− 4 limitation. In this study, the decrease in DOP concentrations during P2 occurred when T PO 4 reached the lowest levels, confirming the ability of the planktonic community to use DOP under low PO 3− 4 availability. More surprisingly, the significant and rapid decrease in DON concentrations (Fig. 4) observed during the development of UCYN-C (P2) in the mesocosms was associated with a rapid increase in PP (Fig. 1c), biomass (Figs. 2 and 3) and bacterial production (BP) (Van Wambeke et al., 2015), suggesting high consumption of DON directly or indirectly by primary producers. In the open ocean, DON is mainly refractory. Nevertheless, it is now recognized that a fraction of the DON is labile and can directly support phytoplankton growth, while a semi-labile fraction can be mineralized by bacterioplankton (Antia et al., 1991; Bronk, 2002; Bronk et al., 2007). In this study, we propose three hypotheses that could explain the observed decrease in DON concentrations during P2: (i) bacterial mineralization of DON triggered by high PP, (ii) direct uptake of DON by primary producers including UCYN-C, and (iii) abiotic photo-degradation of DON into NH + 4 .
i. The increase in PP driven by high N 2 fixation rates during P2 led to an increase in bacterial production (Van Wambeke et al., 2015). The significant negative correlation between BP and DON concentrations (Spearman rank correlation, r = −0.35; p < 0.001) indicates significant consumption of DON by bacterial mineralization. Diazotrophs are known to over-fix C relative to N (Mulholland et al., 2007), which may explain why the POC / PON ratio was above the Redfield ratio during the experiment. The resulting N deficit for bacterial mineralization may have been met by the labile or semi-labile DON pool. This hypothesis is supported by Van Wambeke et al. (2015), who showed that BP was limited by N availability in the mesocosms during the experiment. Based on BP data and assuming a bacterial growth efficiency of 10 to 30 % (del Giorgio and Cole, 1998) and a C / N ratio of 6.6 in bacteria cells (Fukuda et al., 1998), bacterial respiration would have led to a DON consumption of 0.2 to 0.7 µmol L −1 during P2, supporting at least part of the DON removal of ∼ 0.9 µmol L −1 reported here.
ii. An alternative explanation for the decrease in DON concentrations is direct consumption by primary producers. Cyanobacteria are known to use DON compounds such as urea (Collier et al., 2009; Painter et al., 2008), to such an extent that DON has been reported to be one of the main sources of N for cyanobacterial blooms in coastal waters (Glibert et al., 2004). The DON decrease occurring during the development of UCYN-C, whose abundances reached 5 × 10 5 nifH copies L −1 (Turk et al., 2015), raises the question of their ability to use DON to meet their N requirements. Direct uptake of glutamate and amino acids (constitutive components of the DON pool) has been reported in natural and laboratory populations of Trichodesmium spp.
Furthermore, large decreases in DON concentrations were observed after blooms of the diazotroph Aphanizomenon ovalisporum in Lake Kinneret (Berman, 1997). The hypothesis of a direct use of DON by A. ovalisporum was confirmed in a culture experiment where the development of this diazotroph was stimulated by DON additions (Berman, 1997, 1999). To our knowledge, no direct uptake measurements of DON compounds have been performed on UCYN. However, the ureA gene involved in urea assimilation has been identified in the cyanobacterial diazotrophic strain Cyanothece PCC 7822 (Bandyopadhyay et al., 2011), which is closely related to the UCYN-C cluster. These pieces of evidence suggest that in addition to N 2 fixation, UCYN-C might be able to use the DON pool to meet their N requirements.
iii. Finally, photo-degradation could be a possible sink of DON in surface waters (Bronk, 2002). A field study performed in the ultraoligotrophic eastern basin of the Mediterranean Sea indicates a production of NH + 4 from DON of 0.2-2.9 nmol N L −1 d −1 in surface waters (Kitidis et al., 2006). Taking into account the highest rates reported above, this process cannot explain more than 10 % of the observed DON removal. Moreover, the DON decrease occurred only during P2, whereas photodegradation would occur continuously over the entire experiment.
The first two hypotheses (i and ii) are more likely to explain the DON decrease during P2. Neither of these two hypotheses can be excluded, even though direct proof of large uptake of DON by UCYN-C is lacking. Thus, in this study, the DON use was directly or indirectly triggered by the UCYN-C activity.
Conclusions
This study confirms that, in the southwest Pacific, N 2 fixation is a biogeochemically relevant process able to provide sufficient new N to drive new PP, biomass accumulation and organic matter export as long as P is not limiting. The fate of the recently fixed N appears to be closely related to the diazotrophic community involved in N 2 fixation. A strong coupling of N 2 fixation and PON export occurred when DDAs dominated the diazotrophic community, suggesting their direct export. When the community was dominated by UCYN-C, biomass accumulation was observed together with an efficient particulate export. A significant decrease in DON concentrations was observed during the same period, indicating a direct or indirect use of DON by UCYN-C. Thus, in addition to fueling primary production, UCYN-C appear to be able to enhance regenerated production based both on the transfer of recently fixed N 2 toward non-fixing planktonic groups and on the use of the DON pool. This use of DON exceeded the new N provided by N 2 fixation even though the N 2 fixation rates were among the highest reported in the literature for the global ocean. These results suggest that DON has to be considered a dynamic pool in LNLC areas as it may provide significant amounts of N and contribute significantly to particulate export.
2015-03-27T04:16:54.000Z
2015-07-07T00:00:00.000
{ "year": 2015, "sha1": "91934691d33ff3037231b13a28eb532ade8e9648", "oa_license": "CCBY", "oa_url": "https://www.biogeosciences.net/12/4099/2015/bg-12-4099-2015.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "645b74a3c8b0ab5429bfabda4793c593fd3c9898", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
255715280
pes2o/s2orc
v3-fos-license
New Approaches to Project Risk Assessment Utilizing the Monte Carlo Method : An environment of turbulence in the market in recent years and increasing inflation, mainly as a result of the post-COVID period and the ongoing military operation in Ukraine, represents a significant financial risk factor for many companies, which has a negative impact on managerial decisions. A lot of enterprises are forced to look for ways to effectively assess the riskiness of the projects that they would like to implement in the future. The aim of the article is to present a new approach for companies with which to assess the riskiness of projects. The basis of this is the use of the new Crystal Ball software tool and the effective application of the Monte Carlo method. The article deals with the current issues of investment and financial planning, which are the basic pillars for effective management decisions with the goal of sustainability. The article has verified a methodology that allows companies to make effective investment decisions based on assessing the level of risk. For practical application, the Monte Carlo method was chosen, as it uses sensitivity analysis and simulations, which were evaluated for two types of projects. Both simulations were primarily carried out based on a deterministic approach through traditional mathematical models. Subsequently, stochastic modeling was performed using the Crystal Ball software tool. As a result of the sensitivity analysis, two tornado graphs were created, which display risk factors according to the degree of their influence on the criterion value. The output of this article is the presentation of these new approaches for financial decision-making within companies. Introduction Creating the conditions for correct investment decisions is a key factor leading to the sustainability of businesses in the future.A systemic approach focused on the sustainability of businesses in the field of financial and investment planning can create a comprehensive view of the issue of effective managerial decision-making. Currently, several authors are interested in and draw relationships between financialization and technological innovations, as well as analyzing the behavior of nonfinancial enterprises in financing from both a macro and micro perspective [1][2][3]. 
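As an open-source illustration of the kind of Monte Carlo simulation and tornado-style sensitivity ranking that the article performs with Crystal Ball, the following sketch uses triangular assumptions for the risk factors and NPV as the forecast variable; all cash-flow figures and distributions are invented for illustration and are not taken from the article's projects.

```python
# Monte Carlo NPV simulation with triangular risk-factor assumptions and a
# simple rank-correlation sensitivity ranking (tornado-style).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_trials = 10_000
discount_rate = 0.08
investment = 1_000_000.0
years = 5

# Risk factors as triangular distributions (pessimistic, most likely, optimistic).
revenue = rng.triangular(380_000, 500_000, 650_000, n_trials)  # yearly revenue
costs = rng.triangular(180_000, 220_000, 300_000, n_trials)    # yearly operating costs

cash_flow = revenue - costs
npv = -investment + sum(cash_flow / (1 + discount_rate) ** t for t in range(1, years + 1))
print(f"mean NPV = {npv.mean():,.0f}, P(NPV < 0) = {(npv < 0).mean():.1%}")

# Tornado-style ranking: rank correlation of each risk factor with the criterion value.
for name, factor in [("revenue", revenue), ("costs", costs)]:
    rho, _ = spearmanr(factor, npv)
    print(f"{name}: rank correlation with NPV = {rho:+.2f}")
```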
Risk is generally perceived as the uncertainty of future development, the uncertainty of whether the projects that the company invests in will be profitable or will make a loss.The success or, on the contrary, the failure of business projects can significantly affect the economic result of the company and, in the worst case, even the very existence of the company [4][5][6].For this reason, companies should pay attention to the assessment of the risks of individual business projects before their implementation.Currently, risk management is very neglected in practice, but globalization forces our entrepreneurs to apply new methods of risk management to their businesses in order to be competitive [7,8].Risk analysis is usually understood as a process of defining threats, the probability of their occurrence, and the impact on assets, i.e., determining risks and their severity [9][10][11].Other authors [12][13][14][15] describe risk analysis as part of five basic phases of risk management: the determination of project risk factors, the determination of the significance of risk factors, the determination of project risk, the assessment of project risk and the adoption of measures to reduce it, and the preparation of a corrective action plan. We currently know several risk management methods for each business activity or strategy.In general, we distinguish between deterministic and stochastic (probabilistic) approaches to risk measurement.Deterministic approaches assume that a certain value of one variable is assigned a certain value of the second variable.In stochastic approaches, it is assumed that a certain value of one variable corresponds with certain probabilities of different values of the other variable.Stochastic approaches incorporate variability into the risk measurement model itself by specifying a probability distribution for the random variables.In particular, the following types of probability models can be used to measure risk: models based on an expert determination of subjective probability distributions, analytical models, and simulation models [4,8,9,16]. For new business plans, the greater part of the required probability distributions of risk factors must be determined by subjective estimation based on expert evaluation.It is usually easier to determine them in the form of a discrete probability distribution for three decision variants: pessimistic, most probable, and optimistic.In the second type of probabilistic model, an analytical approach is used using standard theoretical probability distributions for the continuous and discrete variables.The result of the solution is the determination of the consequences of risk variants in the sense of determining the probability distribution of the values of the evaluation criteria for individual risk variants.The third type of probabilistic model-simulation models-is used when the problem is too complex for the use of the previous methods.The main phases of simulation studies are the definition of the problem, the creation of a simulation model, the specification of input variable parameters and their mutual relations, and the simulation and design of new experiments.Currently, the use of simulation models is associated with the application of Monte Carlo computer simulations [4]. 
Large portfolios of financial assets or commodities with high variability, which can significantly affect the financial stability of the company, will require more sophisticated techniques, including statistical analyses based on the value at risk (VaR) and cash flow at risk models. VaR models make it possible to estimate the value of the risk in the portfolio as the maximum loss in the event that the portfolio had to be held for a fixed period, with a predetermined level of significance (usually with a probability of 95% or 99%) based on past experience [17].

The categorization of individual methods for risk analysis is presented in Table 1.

Table 1. Categorization of methods for risk analysis.

Approach and typical methods:
- Qualitative: What-if method, scenario analyses, failure mode and consequence questionnaires, criticality analyses (FMEA/FMECA), hazard and operability analysis (HAZOP), human error analysis (HEA), block reliability scheme, fault tree analysis (FTA), event tree analysis (ETA), probability risk analysis and safety assessment (PRA & PSA), survey questionnaires.
- Quantitative: Statistical, cost and efficiency analysis, expert systems, analysis of the relative value of risk, sensitivity analyses, Monte Carlo simulations, critical point analysis, reduced standard methods, cost-benefit analysis, the Delphi method.
- Combined (qualitative and quantitative approaches): Fault tree analysis, the Delphi method, value chain analysis.

Types of methods:
- Qualitative methodologies used in nuclear and chemical processing plants: Preliminary hazard analysis (PHA), hazard and operability analysis (HAZOP), failure mode and consequence analysis (FMEA/FMECA).
- Tree techniques used to quantify the probability of occurrence of accidents and other adverse events leading to loss of life or economic loss: Fault tree analysis (FTA), event tree analysis (ETA), cause and effect analysis (CCA), fault tree risk management (MORT), organizational safety management by assessment technique (SMORT).
- Techniques for a dynamic system: Dynamic event logic analytical method (DYLAM), dynamic event tree analytical method (DETAM), Markov modeling.
- Inductive methods (what if?): Preliminary hazard analysis (PHA), checklists, human error analysis (HEA), hazard and operability analysis (HAZOP), criticality failure mode and consequence analysis (FMECA).
- Deductive methods (so how?): Event and fault trees.
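As an illustration of the VaR models mentioned above, the short sketch below estimates the 95% and 99% value at risk of a portfolio as the corresponding percentile of a simulated profit-and-loss distribution. The normally distributed P&L sample is a stand-in assumption for illustration, not data from the article.

```python
# Minimal sketch of a historical-simulation style VaR estimate, assuming we already
# hold a vector of simulated (or historical) portfolio value changes in EUR.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical P&L of a portfolio over a fixed holding period (EUR).
pnl = rng.normal(loc=200.0, scale=5_000.0, size=50_000)

def value_at_risk(pnl, confidence=0.95):
    """Maximum loss not exceeded with the given probability (reported as a positive number)."""
    return -np.percentile(pnl, 100.0 * (1.0 - confidence))

print("95% VaR:", round(value_at_risk(pnl, 0.95), 2))
print("99% VaR:", round(value_at_risk(pnl, 0.99), 2))
```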
The basis of risk management is a certain systematic procedure for working with risk and uncertainty aimed at increasing the quality of project preparation and evaluation. The first three phases of risk management include determining the risk factors, determining their significance, and determining project risk [21][22][23]. These three phases are collectively referred to as project risk analysis. The next two phases are referred to as the project's own risk management [10,24,25]:
• The 1st phase of risk management is the determination of risk factors. The content of this phase is the determination of risk factors as quantities whose possible future development could affect the economic results, the criteria of the economic efficiency of the project (profit, return on capital, and net present value), and its financial stability;
• The 2nd phase of risk management is the determination of the significance of the risk factors. The importance of the risk factors can basically be determined in two ways, namely expertly or by using sensitivity analysis;
• The 3rd phase determines the risk of investment projects. Project risk can be determined numerically or indirectly. In numerical form, the risk is determined using statistical characteristics (dispersion, standard deviation, coefficient of variation), which serve as a measure of risk in financial management. Project risk is indirectly determined using certain managerial characteristics, which, in their summary, provide information on a greater or lesser degree of risk.

Hertz and Thomas [26] prescribe the content of risk analysis, which includes the analysis of input variables (resulting in the determination of the risk factors and their distribution functions), Monte Carlo simulation (the generation of risk situations), and the evaluation of outputs based on the obtained probability distributions. Berkowitz [27] divides risk analysis into two basic parts: the identification of risk factors and their impact on the value of the portfolio, and a model that connects the risk factors with the observed output quantity.

Savvides [28] presents a risk analysis model, which consists of a sequence of seven basic steps, ensuring the processing of a certain number of inputs (random variables, i.e., risk factors, deterministic variables, and parameters) for the calculation of the outputs (selected criteria for evaluating business projects).

Several authors [29][30][31] discuss the procedure for determining the significance of risk factors in two ways, namely, the expert assessment of risk factors or sensitivity analysis.

The expert assessment of the significance of risk consists of a professional evaluation by managers who have the necessary knowledge and experience in the areas where the individual risk factors fall. The significance of the risk is assessed from two points of view. The first is the probability of the occurrence of the risk factor, and the second is the intensity of the negative impact that the occurrence of the risk factor has on the results of the project [32].
The purpose of the sensitivity analysis is to determine the sensitivity of the project's economic criterion, such as its net present value, profit, or profitability of invested funds, to the factors that influence this criterion. It therefore means determining how certain changes in these factors, for example, changes in the volume of production or the utilization of production capacity, changes in the selling prices of products, the prices of the basic raw materials, materials and energy, the investment costs, the interest and tax rates, the exchange rates, the project lifetime, and the discount rates, affect the chosen economic criterion of the project [18,33]. Those factors in which certain changes, e.g., a deviation of 10% from the most probable value, cause only a small change in this criterion can be considered to have little importance, because the sensitivity of the chosen criterion to changes in these factors is small.

On the contrary, those factors in which the same change causes significant changes in the chosen criterion will certainly be significant for us. The given criterion is highly sensitive to changes in these factors. However, in the case of risk factors with smaller impacts on the project's profit, it is necessary to remember that the percentage changes in profit refer to an increase in these factors by a specified percentage. If the possible changes in some risk factors with a small impact on profit can be significantly greater (e.g., in the case of energy prices), it is also necessary to consider such a factor as a significant risk factor. Therefore, not only the results of the sensitivity analysis but also the possible range of these factors are essential to define the unimportant risk factors that can be neglected, working only with their most probable estimates [34].

The main goal of these methods is to allow those managers who are responsible for risk management to have more transparent access to information about threats and to ensure integrated risk management throughout the enterprise at the level of strategic management. In the current conditions of business uncertainty, simple deterministic models are not sufficient; we need to focus more on the use of probabilistic methods for measuring risk, which provide greater possibilities in terms of the information support of decision-making processes.

In our opinion, these methods most accurately determine the extent of risks and allow investors to more easily decide on which investment project to invest in, as well as help them decide on reducing or transferring risk to another entity.

The basic shortcoming of the traditional methods for evaluating investment projects is a single-scenario approach based on an optimistic assumption of the development of the business environment. An increase in the quality of investment decision-making, in terms of respect for risk and uncertainty, can be brought about by probabilistic approaches, a significant representative of which is the Monte Carlo simulation [35].

This tool requires the identification of the risk factors affecting investment projects and, thus, their evaluation criteria. The result of the application of Monte Carlo simulation is the probability distribution of these quantities and, subsequently, an easier decision for the investor to accept or reject investment projects based on the valuable information about the size of the project's risk obtained by this method [19,36].
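A minimal sketch of the one-at-a-time sensitivity test described above is given below: each factor is shifted by plus or minus 10% from its most probable value while the others are held fixed, and the resulting change in profit is recorded. The profit model and all input figures are hypothetical placeholders, not the company's data.

```python
# Minimal sketch of a one-at-a-time +/-10% sensitivity test on a simple profit model.
base = {"sales": 1_000.0, "price": 55.0, "unit_cost": 32.0, "fixed_cost": 15_000.0}

def profit(p):
    # Hypothetical criterion: contribution margin minus fixed costs.
    return p["sales"] * (p["price"] - p["unit_cost"]) - p["fixed_cost"]

profit_base = profit(base)
for factor in base:
    for shift in (-0.10, +0.10):
        scenario = dict(base)
        scenario[factor] = base[factor] * (1.0 + shift)
        delta = profit(scenario) - profit_base
        print(f"{factor:>10} {shift:+.0%}: profit change {delta:+10.1f} EUR")
```

Factors whose rows show only small profit changes would be candidates for treatment by their most probable estimates only, subject to the caveat about their possible range discussed above.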
The Monte Carlo method originated in the 20th century; even so, it is still considered one of the most advanced methods today. The wide application of this method results from its simple adaptation to current conditions and the usability of modern software tools. For this very reason, this method has become a multidisciplinary method used in various scientific branches, such as the field of physics and electrical engineering [37][38][39][40], chemistry [41,42], safety assessment [43], industry [44,45], the public sector [46], economics [36,[47][48][49], and many other fields. Practice has shown that the use of the Monte Carlo method leads to a significant reduction in variance but, above all, to a reduction in computing time [50,51].

The goal of our contribution is to apply the Crystal Ball software tool and Monte Carlo simulation in the evaluation of investment projects, which creates prerequisites for expanding the applied use of simulation software tools in risk management in practice. The article is aimed at solving the issue of financing the investment activities of companies in order to decide on a more effective project. The modeling process was based on the evaluation of the economic efficiency of the investment and a decision about which of the two projects is more advantageous and less risky in terms of future sustainability.

The secondary goal was to integrate the use of new classical and modern economic-statistical methods, which are effective tools for the sustainability of businesses [1,3,52]. The application verification was based on the methodology presented by us in our published article [19]. The methodology shows two approaches to eliminating risk in enterprises in Slovakia. The first approach represents the modeling of financial risks using the principles of financial mathematics in order to optimize them. The second approach is stochastic modeling, which is based on the use of simulations.

The purpose of the article is to present new approaches to assessing the riskiness of projects and investment decisions. At the same time, the aim of the article is to verify, using a practical example, the methodology created by us aimed at achieving the sustainability of businesses in the territory of the Slovak Republic. The problem is primarily that businesses in the territory of the Slovak Republic use traditional and outdated methods that do not take risks and the factor of time into account in decision-making processes and in the processes of assessing projects and investments. The purpose of this contribution is to provide guidance for these companies on how to integrate new modern approaches into decision-making processes. The article applies the methodology of assessing project and investment decisions to the environment of a real company with the aim of introducing new software tools to companies that will facilitate the decision-making processes of the company's management and, thus, make the decision-making about the future investments of these companies more efficient.

Despite the wide applicability of the Monte Carlo method in published studies, there is no guide for the simple integration of this method into decision-making processes in companies. A methodology was therefore created for the conditions of companies in the territory of the Slovak Republic, which provides simple instructions for companies on how to integrate new approaches in the form of the Monte Carlo method into their internal processes.
The use of the Monte Carlo method through the software environment creates space for companies to implement simulations that integrate risk assessment, especially when taking time into account. The businesses will obtain a realistic idea of the future development of their investments. The main advantage of the methodology is the fact that the introduction of such an approach for companies in the conditions of the Slovak Republic does not represent high initial investments and will contribute to their sustainability.

Materials and Methods

The article deals with the issue of investment decision-making in enterprises in the territory of the Slovak Republic. The basic principle of the article is the verification of the methodology that was presented in the authors' previous publications [19]. The methodology is aimed at solving the investment decisions of the company when implementing modern software tools. Several companies operating in the territory of the Slovak Republic were chosen to verify the methodology. To fulfil the objective of the presented article, the article presents the outputs obtained from the methodology verification process within a company that acts as a partner company ensuring security in transport sector companies, such as airports and transport companies. We will not name the company due to GDPR. Among other things, the analyzed company provides a number of products for companies in the transport sector that are essential as part of a security solution. The list of products is shown in Figure 1.

The analyzed company was forced to make a decision in 2022 to modernize its technological procedures in production manufacturing. The company considered purchasing two types of lines:
• A project: the purchase of a new sheet metal ringer SIHR 6/3, 2050 × 6 mm. The amount of this investment is EUR 47,422.08;
• B project: the purchase of a new welding machine, amounting to EUR 88,000.
For research purposes, the lifetime of both devices was 12 years in the company's accounting records. The introduction of full automation brings with it an increase in production in direct proportion to the requested quantity, a reduction in labor costs, and a reduction in nondelivery. However, an increase in the variable costs associated with energy consumption is also expected.
The methodology is focused on the use of the Monte Carlo method applied through the Crystal Ball software tool in the MS Excel environment. The sequence of steps is shown in Figure 2. As the algorithm of the methodology shows, the first step is to develop the mathematical apparatus, which was processed in the MS Excel environment. The mathematical apparatus represents the modeling of deterministic variables that do not take into account changes in time. The basic monitored value was the profit. The following relations have been used in the calculation:
1. Depreciation: the company primarily uses linear depreciation, and this has also been modeled for the purpose of verifying the methodology; the annual depreciation is the acquisition cost of the device divided by its depreciation lifetime.
2. The value of operating costs has been calculated according to the following relationship: C = DC + IC + D + OC, where DC is the direct cost, IC is the indirect cost, D is the depreciation, and OC represents other costs.
3. Revenues are calculated using the following relationship: R = P × S, where R is the revenue, P is the price, and S is the sale (quantity of sales).
4. The financial risk assessment model also took into account the tax burden in the form of the income tax calculation. According to § 15 letter (b) of the Income Tax Act, the corporate income tax rate in Slovakia is 21% and is calculated from the tax base after the deduction of the tax loss [53]. The tax base is therefore the profit before tax reduced by the deductible tax loss.
5. Profit after tax is calculated according to the relationship: EAT = EBT − 0.21 × tax base, where EAT is earnings after taxes and EBT is earnings before taxes.

In order to perform the necessary analyses, defining the basic parameters of the Monte Carlo simulation was required. The criterion value that has been assessed is profit before tax (EBT). Fixed costs, variable costs, sales price, and production are considered to be risk values (given that risk mapping has shown that they are the riskiest financial risks).
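The deterministic core of relations 1-5 can be summarized in a few lines of Python. The sketch below uses the A project investment of EUR 47,422.08 and the 12-year lifetime quoted in the text, while the price, quantity, and cost inputs are hypothetical placeholders rather than the company's data.

```python
# Minimal sketch of the deterministic profit model described by relations 1-5.
ACQUISITION_COST = 47_422.08   # A project investment (EUR), from the text
LIFETIME_YEARS = 12            # depreciation lifetime, from the text
TAX_RATE = 0.21                # Slovak corporate income tax rate

def annual_profit(price, quantity, direct_cost, indirect_cost, other_cost, tax_loss=0.0):
    depreciation = ACQUISITION_COST / LIFETIME_YEARS                          # relation 1 (linear)
    operating_cost = direct_cost + indirect_cost + depreciation + other_cost  # relation 2
    revenue = price * quantity                                                # relation 3
    ebt = revenue - operating_cost                                            # profit before tax
    tax_base = max(ebt - tax_loss, 0.0)                                       # relation 4
    eat = ebt - TAX_RATE * tax_base                                           # relation 5
    return ebt, eat

# Hypothetical inputs for illustration only.
ebt, eat = annual_profit(price=55.0, quantity=1_200, direct_cost=28_000.0,
                         indirect_cost=9_500.0, other_cost=2_000.0)
print(f"EBT = {ebt:,.2f} EUR, EAT = {eat:,.2f} EUR")
```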
Risk Mapping

As part of the risk mapping, a risk factor assessment matrix has been created. The matrix is based on an expert risk assessment. The essence of the expert assessment of a risk's significance when using risk assessment matrices is that this significance is assessed from two aspects. First of all, the probability of the occurrence of the risk was defined, and then the intensity of the negative impact that the occurrence of the risk had on the company was assessed. The significance of the risk was assessed on the basis of a higher probability of occurrence and a higher intensity of the negative impact of this risk on the company. The output is a semiquantitative assessment of the significance of the company's risks based on the risk assessment matrix or its graphic display. The resulting risk assessment matrix is shown in Figure 3.

The risk matrix provides a graphical representation of the probability of occurrence of a risk and its intensity. The significance of the impact of the risk is shown by a color scale: red, orange, and green. The risks that are the highest for the company are marked in red. On the contrary, the lowest risks are those marked in green. The orange color indicates the risks with a medium level of riskiness. From the risk matrix, it can be stated that red risks are unacceptable for the company, and the company must immediately minimize them. The orange risks are temporarily acceptable risks, which require the implementation of measures within the company. The green risks are acceptable risks and do not require immediate action. It is clear from the elaborated risk matrix that financial risks are considered the riskiest for the company. For this reason, profit was set as the criterion value in the simulations.
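A simple way to reproduce the semiquantitative logic of such a risk matrix is sketched below: each risk receives expert scores for probability of occurrence and intensity of negative impact, and their product is mapped onto green, orange, and red bands. The risk names, the 1-5 scale, the scores, and the band thresholds are illustrative assumptions, not the company's actual matrix shown in Figure 3.

```python
# Minimal sketch of a semiquantitative risk matrix (probability x impact scoring).
RISKS = {
    "fixed costs":        (4, 5),   # (probability 1-5, impact 1-5), expert-set, illustrative
    "variable costs":     (4, 4),
    "selling price":      (3, 5),
    "production volume":  (3, 4),
    "legislative change": (2, 2),
}

def band(probability, impact):
    score = probability * impact
    if score >= 15:
        return "red (unacceptable, minimize immediately)"
    if score >= 8:
        return "orange (temporarily acceptable, plan measures)"
    return "green (acceptable, no immediate action)"

for name, (p, i) in RISKS.items():
    print(f"{name:>18}: score {p * i:>2} -> {band(p, i)}")
```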
Sensitivity Analysis in the Simulation Model

The software tool Crystal Ball, which was used for the Monte Carlo simulation, enables a sensitivity analysis to be performed through a tornado plot and a spider plot. The goal of this analysis was to get a basic idea of the impact of the individual risk factors on the criterion values, profit and cash flow, and thus also to provide a kind of check of whether the impact makes sense and whether there is, by chance, an error in the model. The principle of this analysis is that the resulting values of the criterion value are calculated based on the selection of values from the predefined intervals of the possible values of the risk factors.

The output of the analysis is a tornado graph, which displays the individual risk factors in descending order according to the degree of their influence on the criterion value. The degree of influence is calculated according to the resulting values that the criterion variable achieves at the boundaries of the considered risk factor intervals. For the needs of the sensitivity analysis in the simulation environment, the quantiles of 10% and 90% were chosen. Even in this case, the influence of only one risk factor is always considered, without taking into account the simultaneous influence of other risk factors. The tornado graphs for both monitored projects, the A project and the B project, are shown in Figures 4 and 5.

As can be seen from both graphs, the main risk factors are the fixed costs and the selling price of the P6Te product. The figures show that the 10% quantile of the risk factor in the form of the fixed costs has a value of EUR 59,614.91 in the A project and EUR 177,866.91 in the B project. Subsequently, the 90% quantile reaches a value of EUR 65,979.09 for the fixed costs in the A project and EUR 196,855.09 in the B project. It follows from the above that the range of values of the criterion value is the widest between the 10% and 90% quantiles of the considered fixed costs. This means that if the fixed costs of the A project are at their 10% quantile, the value of the profit will be EUR 34,585.09. The other values from the tornado charts of both projects can be interpreted in the same way.
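The tornado-graph data can also be generated outside Crystal Ball with the one-at-a-time recalculation sketched below: each factor is set in turn to its 10% and 90% quantile while the others remain at their base values, and the factors are ordered by the resulting profit swing. The fixed-cost quantiles are the A project values quoted above; the remaining quantiles and the simple profit model are assumptions for illustration.

```python
# Minimal sketch of producing tornado-graph data from 10% / 90% factor quantiles.
base = {"fixed_cost": 62_500.0, "price": 55.0, "unit_cost": 32.0, "sales": 3_000.0}
quantiles = {                                   # (10% quantile, 90% quantile)
    "fixed_cost": (59_614.91, 65_979.09),       # A project values quoted in the text
    "price":      (51.0, 59.0),                 # hypothetical
    "unit_cost":  (30.0, 34.5),                 # hypothetical
    "sales":      (2_700.0, 3_300.0),           # hypothetical
}

def profit(p):
    return p["sales"] * (p["price"] - p["unit_cost"]) - p["fixed_cost"]

swings = []
for factor, (q10, q90) in quantiles.items():
    lo = profit({**base, factor: q10})          # all other factors stay at base values
    hi = profit({**base, factor: q90})
    swings.append((abs(hi - lo), factor, lo, hi))

# The tornado graph lists the widest bar first, so sort by swing size.
for swing, factor, lo, hi in sorted(swings, reverse=True):
    print(f"{factor:>10}: profit from {lo:,.0f} to {hi:,.0f} EUR (swing {swing:,.0f})")
```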
The spider chart is also part of this analysis. The principle of this graph is practically identical to that of the tornado graph, with the difference that the resulting values of the criterion value are monitored not only at the boundary values of the risk factor intervals, but also between them. The spider charts of both projects are shown in Figures 6 and 7. The spider chart shows the degree of influence of the risk factors using the slope of the lines. The advantage of this graph compared to the tornado graph is that it can also capture a possible nonlinear influence of the risk factor in the observed quantile interval, precisely because the recalculation of the criterion value is carried out at several points from the interval of the possible values of the risk factor and not just at two. Additionally, in this case, the results of both charts confirmed the results obtained from the tornado charts.

Monte Carlo Simulation

If the behavior of the model seems "reasonable", it is possible to proceed to the Monte Carlo simulation itself in the Crystal Ball software environment. Setting the number of simulation steps is important when starting the simulation. For the needs of the simulation in the analyzed company, the number of simulation steps was set to 10,000, which means that a total of 10,000 values were generated within the simulation for each of the risk factors and, consequently, 10,000 values were also obtained for each criterion quantity.

The primary result of the Monte Carlo simulation is the frequency histogram of the criterion variable and its automatic recalculation (normalization) into a probability distribution. This enables the calculation of a whole range of statistical data. The main meaning of the frequency/probability distribution from the point of view of risk analysis is the overall view of the possible values of the criterion quantity and their frequency/probability. The results of the Monte Carlo simulation and the statistical analysis of the selected company for the A project are shown in Figure 8, and for the B project, in Figure 9.
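The Monte Carlo step itself can be sketched as follows: 10,000 trials are drawn for each risk factor from an assumed distribution, the profit criterion (EBT) is evaluated for every trial, and the resulting sample is summarized; the histogram of this sample corresponds to the type of output shown in Figures 8 and 9. The distribution families and parameters below are illustrative assumptions, not the ones fitted in the study.

```python
# Minimal sketch of a 10,000-trial Monte Carlo simulation of the profit criterion.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

sales      = rng.triangular(2_700, 3_000, 3_300, N)   # units (assumed distribution)
price      = rng.normal(55.0, 2.5, N)                 # EUR per unit (assumed)
unit_cost  = rng.normal(32.0, 1.5, N)                 # EUR per unit (assumed)
fixed_cost = rng.normal(62_500.0, 2_500.0, N)         # EUR per year (assumed)

ebt = sales * (price - unit_cost) - fixed_cost        # criterion value per trial

print("mean EBT:            ", round(ebt.mean(), 2))
print("median EBT:          ", round(np.median(ebt), 2))
print("std. deviation:      ", round(ebt.std(ddof=1), 2))
print("coefficient of var.: ", round(ebt.std(ddof=1) / ebt.mean(), 4))
print("P(EBT > 0):          ", round((ebt > 0).mean(), 4))
# np.histogram(ebt, bins=50) would give the frequency histogram analogous to Figures 8 and 9.
```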
Both graphs show that the distribution for both projects is symmetrical according to the mean value and the probability. At the same time, it follows from both graphs that, in the case of the A project and the B project, the company will achieve a positive value for the criterion value, i.e., a profit, with a 100% probability.

Another important analysis was obtained using the Monte Carlo simulation: the Monte Carlo sensitivity analyses. It should be noted that although these results are interpreted similarly to the classic sensitivity analyses mentioned above, the sensitivity analysis using Monte Carlo simulation is based on a completely different principle. This means that individual risk factors are analyzed from the point of view of their contribution to the total variance of the distribution of the criterion quantity. The graphic outputs of these analyses for the A project are shown in Figure 10, and for the B project, in Figure 11.

Crystal Ball calculates the sensitivity by computing the rank correlation coefficients between every assumption and every forecast while the simulation is running. Correlation coefficients provide a meaningful measure of the degree to which assumptions and forecasts change together. If an assumption and a forecast have a high correlation coefficient, it means that the assumption has a significant impact on the forecast (both through its uncertainty and its model sensitivity). Positive coefficients indicate that an increase in the assumption is associated with an increase in the forecast. Negative coefficients imply the opposite situation. The larger the absolute value of the correlation coefficient, the stronger the relationship. It is important to note that the "Contribution To Variance" method is only an approximation and is not precisely a variance decomposition. Crystal Ball calculates Contribution To Variance by squaring the rank correlation coefficients and normalizing them to 100%. Both the alternate "Rank Correlation View" and the Contribution To Variance view display the direction of each assumption's relationship to the target forecast. Assumptions with a positive relationship have bars on the right side of the zero line; assumptions with a negative relationship have bars on the left side of the zero line [54].

The influence of risk factors on the criterion value described in this way is very illustrative and can be understood even by laymen. However, from an analytical point of view, it is necessary to bear in mind that this is a derived and not completely accurate calculation. The principle of this sensitivity analysis is rank correlation, within which the values of individual risk factors are generated and the resulting criterion values are calculated. This is a kind of contribution to the variance based on squaring the rank correlation values and normalizing them to 100%. Subsequently, all the generated values are ranked, and the degree of rank correlation between the risk factors and criterion variables is calculated. In this way, the influence of individual risk factors on the criterion value is demonstrated through the correlation value while simultaneously including the influence of all the other variables.

Despite the fact that a problem may arise when comparing both sensitivity analyses, in the case of the A project and the B project, the results are uniform in the identification of the riskiest factors, i.e., the fixed costs and selling price.

Discussion

Applying risk analysis to financial and investment decision-making is not easy due to the fundamental differences between deterministic and probabilistic approaches. Important barriers to successful implementation include, above all, the fact that it requires a change in thinking and a change in the traditional, long-established system processes for decision-making, and it is necessary to overcome resistance to changes.
An important limiting factor within sensitivity analysis in a simulation environment is that it analyzes the impact of individual risk factors in isolation, i.e., without including the dependencies between risk factors. Therefore, there is a danger arising from the exclusion of one of the risk factors which, based on this sensitivity analysis, appears to be insignificant due to the neglect of its influence in connection with another risk factor. However, if we summarize the conclusions from the sensitivity analysis in the simulation environment, whether in the form of a tornado or spider web graph, it is significant mainly for the following reasons:
1. It provides a certain first visual check of the consistency of the relationships between the risk factors and the criterion value;
2. It evaluates the significance of the individual assumed risk factors in relation to the criterion value and compiles a certain possible list of risk factors that are unlikely to be important for further analyses;
3. It detects possible nonlinear relationships between risk factors and the criterion value.

The sensitivity analysis is a relatively complex method, whose results reflect two influences:
1. The sensitivity of the model: in general, the sensitivity of the criterion quantity to the risk factor, which results from the relationships defined in the mathematical model, e.g., how the criterion value changes when the value of the risk factor changes by 1%;
2. The uncertainty of the risk factor values: the possible values the risk factor can reach.

If the sensitivity of the model is high, even small changes in the values of the risk factors will lead to significant changes in the resulting criterion value. On the contrary, if the sensitivity of the model is relatively small, significant changes in the criterion value may not occur even with larger deviations in the values of the risk factors.

As the sensitivity analysis showed, fixed costs and selling prices can be considered the riskiest factors. The correctness of the methodology was also confirmed by the fact that both sensitivity analyses, the classical one (in the simulation environment) and the sensitivity analysis in the Monte Carlo method, demonstrated the significance of the same risk factors for the criterion variable EBT.

The core of the presented methodology is the Monte Carlo method. Monte Carlo simulation requires a much more complex analysis than traditional deterministic models. The objective of the verification of the methodology was the assessment of the profitability of the projects in the selected company. The probability of project implementation within the given time limit is determined after completing the total number of cycles. The statistical metrics derived from these iterations are useful for determining the resulting decision about the success of the project [55,56]. Monte Carlo simulation involves choosing a statistical distribution representing the risk factor, which, in our case, is the duration of each activity, and then running a large number of iterations, creating the same number of different schedules for the project and calculating its total duration [57].
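For readers who want to reproduce the Contribution To Variance view described above outside Crystal Ball, the sketch below computes Spearman rank correlations between each sampled risk factor and the criterion value, squares them, and normalizes the squares to 100%. The sampled inputs reuse the same hypothetical distributions as in the simulation sketch above.

```python
# Minimal sketch of the "Contribution To Variance" approximation:
# squared rank correlations, normalized to 100%.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
N = 10_000
factors = {
    "sales":      rng.triangular(2_700, 3_000, 3_300, N),   # all distributions are assumed
    "price":      rng.normal(55.0, 2.5, N),
    "unit_cost":  rng.normal(32.0, 1.5, N),
    "fixed_cost": rng.normal(62_500.0, 2_500.0, N),
}
ebt = factors["sales"] * (factors["price"] - factors["unit_cost"]) - factors["fixed_cost"]

rank_corr = {name: spearmanr(values, ebt)[0] for name, values in factors.items()}
total = sum(r ** 2 for r in rank_corr.values())

for name, r in sorted(rank_corr.items(), key=lambda kv: abs(kv[1]), reverse=True):
    share = 100.0 * r ** 2 / total
    print(f"{name:>10}: rank correlation {r:+.3f}, contribution to variance {share:5.1f}%")
```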
In order to assess the profitability of the projects, the profit output values and statistical indicators were obtained through Monte Carlo simulation. A comparison of the outputted statistical indicators is presented in Table 2. The most interesting value is the difference between the mean value and the median, which is given by the skewness of the distribution. The distribution of the B project is skewed to the left, to the disadvantage of the company (the skewness is negative), i.e., the probability of significant negative deviations of the profit is greater than the analogous probability of positive deviations. In the case of the B project, the difference between the minimum and maximum values generated by the simulation is significant.

When deciding between two projects, the following characteristics were applied:
• If two projects have the same average value of expected revenues, the project with a lower standard deviation is preferred;
• If two projects have the same standard deviation, the project with a higher average value of expected revenues is preferred;
• In each project, a higher mean value and a lower standard deviation are preferred;
• If the project has a higher mean value and a lower deviation than all the other projects, it is optimal;
• If the projects have a different mean value and a different deviation, the project with a lower coefficient of variation is preferred.

On the basis of the above-mentioned findings, it can be concluded that the A project is a more advantageous and less risky project for the analyzed company.

Many companies today rely on well-known traditional methods for decision-making processes. However, in order for the decisions of the company's management to be effective, it is necessary that they take into account individual risks and provide management with information about developments over time. For this reason, it is necessary to apply new approaches not only in decision-making processes but also in the system procedures of individual companies.

In professional contributions, it is possible to find studies dealing with the application of the Monte Carlo method in partial calculations or in the solution of partial problems. Despite the multidisciplinary nature and wide applicability of the Monte Carlo method, there is no study that could provide guidance to companies on how to apply this method in decision-making processes. The research carried out enabled the creation of a methodology that integrates this method into the decision-making processes of companies in the transport sector in the territory of the Slovak Republic. At the same time, the article demonstrated the applicability of such an approach in practice. The application of this approach in enterprises in the territory of the Slovak Republic thus becomes unique.

However, the methodology is limited by the conditions of the market environment of the companies in the territory of the Slovak Republic. It is primarily about the legislative conditions or the financial and educational possibilities of individual companies. However, with sufficient knowledge of the Monte Carlo method, its wide applicability provides scope for use in other types of businesses as well. The feasibility of such an approach needs to be subjected to future research.
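The decision rules listed above can be applied mechanically to the simulated profit samples of the two projects, as in the sketch below. The two normal samples stand in for the simulation outputs and are not the statistics reported in Table 2.

```python
# Minimal sketch of comparing two projects by mean, standard deviation,
# and coefficient of variation of their simulated profit samples.
import numpy as np

def summarize(profits):
    mean = profits.mean()
    std = profits.std(ddof=1)
    return mean, std, std / mean   # coefficient of variation

rng = np.random.default_rng(7)
project_a = rng.normal(33_000.0, 2_000.0, 10_000)   # hypothetical EBT sample, A project
project_b = rng.normal(36_000.0, 6_500.0, 10_000)   # hypothetical EBT sample, B project

mean_a, std_a, cv_a = summarize(project_a)
mean_b, std_b, cv_b = summarize(project_b)
print(f"A: mean {mean_a:,.0f}  std {std_a:,.0f}  CV {cv_a:.3f}")
print(f"B: mean {mean_b:,.0f}  std {std_b:,.0f}  CV {cv_b:.3f}")

# With different means and deviations, the rule of the lower coefficient of variation applies.
preferred = "A" if cv_a < cv_b else "B"
print("Preferred project by coefficient of variation:", preferred)
```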
Conclusions

Our methodology for evaluating investment projects was focused on solving the financing of investment activities in transport companies, where simulations and calculations in the MS Excel software environment were chosen as a tool to achieve this goal. The simulation tool used was the Crystal Ball simulation software, which is based on the Monte Carlo method. As part of the verification of the methodology, two approaches that focused on the analysis and evaluation of the financial risks of investment projects were implemented. In order to fulfill the goal of the article, deterministic calculations were used to assess the riskiness of two projects using a mathematical apparatus based on the principles of financial mathematics. The resulting ranking was used to assign an uncertainty to activity duration and estimate the probability of a project being completed on time, employing the Monte Carlo simulation approach. The main contribution of this article is the development of an innovative framework that coordinates an established qualitative and quantitative risk classification approach with a powerful simulation approach to effectively predict time deviations while executing complex projects under uncertainty [55,56].

The integration of new software tools into investment decisions is represented by the simulations of the Monte Carlo method based on the stochastic approach in the Crystal Ball software environment. The simulation is based on the modeling of the criterion value in the form of profit, taking into account risk factors defined as the distribution functions of input variables. The application of such an approach to managerial decision-making when assessing investment projects is unknown in Slovak companies and thus becomes unique. The uniqueness of the project assessment lies in the integration of various multicriteria approaches. The outputs of the article form part of the research into the VEGA project, which verifies the methodology on a sample of 100 enterprises in the transport sector in the Slovak Republic. The transport industry is an investment-intensive industry, and the question of how to mitigate risks in this sector is currently being discussed intensively.

This article presents the verification of the effective assessment of the investment projects of enterprises. The goal is to ensure the sustainability of businesses based on the integration of new approaches into managerial decision-making. The application of probabilistic approaches in financial decision-making is negatively affected mainly by a lack of the necessary knowledge or the weak support of sophisticated computer methods in the practice of companies. It is, therefore, necessary for companies, in their future research, to focus attention on the education of managers and the use of sophisticated modern tools for managing the risk of business projects. The result of such an effort should be a gradual change in corporate culture that supports expert work with risk. The possibility of applying the procedure in specific Slovak companies can be considered a practical contribution of the article. The proposals presented in the article form a system of solutions and are applicable, under certain conditions, in the practice of other industrial enterprises through the selection of individual methods and models and by supplementing, replacing, or expanding them with other specific characteristics and processes, according to a specific type of industry [52,58]. The basis of this will be the ability of colleges, universities, and
scientific and research institutions to transmit the widest possible spectrum of the latest knowledge and findings in the field of risk management, with the aim of creating a platform for business practice for further development in this area. It is possible to state that, even at present, many of the methods that are defined have shortcomings and errors, which are pointed out by several authors dealing with this issue. These shortcomings often limit the application of these models in the practice of the companies themselves [2,3]. Therefore, it is advisable for every expert, evaluator, and risk manager not to rely on the results of a single risk analysis but to use several methods for such an evaluation at the same time and to draw conclusions from their combined results, which will bring them objective, more correct results. The implementation of the methods and models built in this way will enable Slovak companies, as well as other companies in the European region, to create space for the further rationalization and streamlining of business processes, increasing economic efficiency and performance and establishing their own business strategies for the future. At the same time, such methods of risk analysis could be an impetus (mainly for medium-sized enterprises) for the application of not only traditional, already proven methods but also modern researched methods and approaches, which will bring them a new perspective on the field of risk management and the possible complete elimination of risks, from which they will start their business development potential.

Figure 1. Products of the analyzed company.
Figure 2. The assessment methodology algorithm for the investment decisions [19].
Figure 4. Tornado graph of the A project.
Figure 5. Tornado graph of the B project.
Figure 6. Spider chart of the A project.
Figure 7. Spider chart of the B project.
Figure 8. Probability/numerical distribution of profit for the A project.
Figure 9. Probability/numerical distribution of profit for the B project.
Figure 10. Profit sensitivity analysis-Monte Carlo simulation in the A project.
Figure 11. Profit sensitivity analysis-Monte Carlo simulation in the B project.
Table 2. Comparison of the A project and the B project statistics.
Effects of capacitive and resistive electric transfer therapy on pain and lumbar muscle stiffness and activity in patients with chronic low back pain

[Purpose] In this study, we investigated the therapeutic effects of capacitive and resistive electric transfer therapy in patients with chronic low back pain. [Participants and Methods] The study included 24 patients with chronic low back pain (12 patients each in the intervention and sham groups). Pain intensity, superficial and deep lumbar multifidus stiffness, and maximum forward trunk flexion and the associated activation level of the iliocostalis (thoracic and lumbar components) and lumbar multifidus muscles were measured. [Results] Post-intervention pain intensity and muscle stiffness were significantly lower than pre-intervention measurements in the intervention group. However, no between-group difference was observed in the muscle activation level at the end-point of standing trunk flexion. [Conclusion] Our findings highlight a significant therapeutic benefit of capacitive and resistive electric transfer therapy in patients with chronic low back pain and muscle stiffness.

INTRODUCTION

Capacitive and resistive electric transfer (CRet) therapy has increasingly been reported for the treatment of low back pain (LBP) in recent years. CRet includes two therapeutic modes, capacitive electrode transfer (CET) for deep thermal therapy and resistive electrode transfer (RET) for superficial thermal therapy. The operating frequency of CRet (500 kHz) reduces the capacitive impedance at the electrode-skin interface, lowering the risk of skin burn associated with traditional deep thermal and superficial thermal therapies. Previous studies reported that among individuals with non-specific LBP, CRet therapy produced vasodilation in deep local tissues and an increase in temperature, with resulting improvements in hemoglobin saturation 1-4). These effects of CRet reduce pain and increase the range of motion of the lumbar spine. However, the therapeutic effects of CRet for chronic low back pain (CLBP) have not been well examined to date 5). Muscle stiffness and the flexion-relaxation phenomenon (FRP) have previously been used as objective indicators of treatment effects for LBP, as patients with LBP have a stiffer lumbar multifidus muscle than healthy individuals 6). The FRP specifically refers to the relaxation (i.e., absence of muscle activity) of the thoracolumbar extensor muscles at the point of maximum standing trunk flexion that is observed in 82%-100% of adults without LBP 7). By contrast, persisting muscle activity at the point of maximum standing trunk flexion has been reported in adults with CLBP 8-10). The FRP is thought to reflect the coordination between the passive supporting tissues of the lumbar spine and the active contribution of the flexor and extensor muscles of the trunk, with this coordination being crucial to providing functional stability to the spine 11). It has been hypothesized that the increased fatiguability and pain of the erector spinae associated with LBP result in decreased spinal stability, causing the observed alteration of the FRP 12, 13). In addition, ischemic changes in spinal tissues due to reduced local blood flow and the accumulation of muscle byproducts associated with CLBP increase the stiffness of the thoracolumbar muscles, further leading to a loss of lumbar spine flexibility and a change in the point of maximum standing trunk flexion 3, 14).
Based on this evidence, improving local blood circulation, decreasing muscle stiffness, pain, and muscle fatiguability, and increasing lumbar spine flexibility are therapeutic targets for patients with CLBP, which might normalize activity of the thoracolumbar musculature and, hence, the FRP. As recent studies have reported on the therapeutic benefits of CRet to improve local blood circulation and muscle fatiguability, as well as for pain relief 3,4) , we sought to evaluate the therapeutic effects of CRet therapy in improving pain and muscle stiffness as well as in normalizing muscle activity during maximum standing forward trunk flexion and the FRP among patients with CLBP. PARTICIPANTS AND METHODS This was a double-blinded randomized clinical trial. The study group consisted of 24 male patients with CLBP, randomly allocated to either the intervention or sham group (n=12 each). A medical history questionnaire was used to screen for the following exclusion criteria: nerve root compression, disc prolapse, spinal canal stenosis, tumors, spondylolisthesis, LBP with extensive neurological symptoms, and use of painkillers. Patients with LBP with confirmed FRP before the intervention were also excluded 15) . Participants provided informed consent. All methods were performed according to the standards of the Declaration of Helsinki. The study was approved by the ethics committee of the Kanazawa Orthopedic Sports Medicine Clinic (kanazawa-OSMC-2021-004). CRet, both therapeutic and sham, was applied in a single session to the lower back, for 15 min. The Physio Radio Stim Pro CRET system was used (SAKAI Medical Co., Ltd., Tokyo, Japan). Participants were placed in the prone position on a plinth. A rigid circular electrode (diameter, 60 mm) was used as the active electrode, placed over the lumbar multifidus and erector spinae muscles. A rectangular electrode (dimensions, 150 × 210 mm) was used as the inactive electrode, placed on the abdominal area. Manufacturer-supplied cream was used to maintain conductivity between the electrode and the skin surface. For the sham treatment, electrodes were placed but no CRet treatment was applied. Therapeutic CRet was delivered at a frequency of 500 kHz and consisted of 5-min of CET, followed by 10-min of RET. The intensity was individually set at 6-7 on the following 11-point scale of subjective heat sensation, with anchors at '0' (no heat sensation) and '10' (highest heat sensation tolerable) 2, 4) . The following outcomes were evaluated: LBP intensity, stiffness of the superficial and deep lumbar multifidus, and maximum forward trunk flexion and associated activation level of the iliocostalis (thoracic and lumbar component) and lumbar multifidus muscles. LBP intensity was evaluated using a 100-mm visual analog scale (VAS), with anchors at '0' (no pain) and '100' (worst possible pain). Muscle stiffness was evaluated by elastography using a B-mode ultrasound apparatus (SSD-3500SV; Fuji Film, Tokyo, Japan) with a linear transducer (scanning frequency, 7.5 MHz). An acoustic coupler (Young's modulus, 22.6 kPa; EZU-TECPL1, Fuji Film) was placed between the probe and the surface being assessed. Images were recorded over the superficial and deep lumbar multifidus muscles, as per previously described methods 6) . All elastography measurements were performed by an experienced technician. The strain ratio was calculated as the measurement area of the muscle component evaluated (A) divided by the area of the acoustic coupler (B). 
A strain ratio calculated for the acoustic coupler and a reference material was used to normalize the measured A/B ratio, as previously described 16, 17). A strain ratio <1 indicated that the muscle was less stiff (i.e., softer) than the reference material.

Muscle activation levels were evaluated using surface electromyography (EMG) with the active-electrode MQ8/16 telemetric EMG system (Kissei Comtec, Nagano, Japan). Disposable Ag/AgCl surface electrodes were used (area, 1 × 1 cm), with an inter-electrode distance of 1 cm. Using previously described methods 18), the electrodes were placed over the thoracic and lumbar components of the iliocostalis lumborum muscle and the lumbar multifidus.

The trunk flexion maneuver used to evaluate the muscle activation level (the FRP) was performed from a standardized 'start' position, in static standing, with both arms relaxed naturally along the body. The static standing position was held for 4 s to obtain baseline muscle activity levels. Participants were then asked to flex their trunk forward and to hold their maximum flexion position for 4 s, and then to return to the static standing position and to hold this position for 4 s. Three trials of the flexion maneuver were performed, with the average EMG values used for analysis.

EMG signals were recorded at the start position and at maximum flexion. EMG signals were sampled at a frequency of 1 kHz and recorded to a computer for offline processing and analysis (Kine Analyzer, Kissei Comtec, Japan). Signals were band-pass filtered (20-450 Hz), full-wave rectified, and smoothed using the root mean square (RMS) method. The RMS value for each muscle in the static standing position recorded before the CRet session was set to 1 to normalize values for between-participant analysis. An RMS value for the lumbar multifidus muscle of <1 after the intervention was indicative of a normalization of the FRP (i.e., absence of muscle activity at the point of maximum standing trunk flexion).
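For readers interested in reproducing the EMG processing chain described above (20-450 Hz band-pass filtering, full-wave rectification, RMS smoothing, and normalization to the pre-intervention static-standing value), a minimal Python sketch is given below. The filter order, the 100 ms smoothing window, and the synthetic signals are assumptions for illustration only and are not part of the original protocol.

```python
# Minimal sketch of the EMG processing chain: band-pass filter, rectify, moving RMS, normalize.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1_000  # sampling frequency, Hz (as reported)

def rms_envelope(raw_emg, window_ms=100):
    # 20-450 Hz band-pass; a 4th-order Butterworth filter is an assumption.
    b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, raw_emg)
    rectified = np.abs(filtered)                    # full-wave rectification
    window = int(FS * window_ms / 1_000)
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))  # moving RMS

rng = np.random.default_rng(3)
standing = rng.normal(0.0, 0.05, 4 * FS)      # 4 s of simulated baseline activity
max_flexion = rng.normal(0.0, 0.04, 4 * FS)   # 4 s of simulated activity at full flexion

baseline_rms = rms_envelope(standing).mean()
flexion_rms = rms_envelope(max_flexion).mean()
normalized = flexion_rms / baseline_rms        # a value <1 would indicate flexion-relaxation
print(round(normalized, 3))
```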
The effect of heat in alleviating LBP has previously been described and includes local vasodilation, for ischemic pain relief 19) , and decreased conduction velocity in pain mediating fibers (Aδ and C), increasing the pain threshold 20) . Similarly, a previous study has reported on the decrease in muscle stiffness of the supraspinatus muscle with CRet 21) , as we identified for the lumbar musculature. This effect is likely mediated by the deep vasodilation induced by CRet, improving local blood circulation and, thus, decreasing the internal pressure of local tissues caused by an accumulation of fluid and waste byproducts in ischemic tissues 16) . The FRP is mediated by both active (muscles) and passive (ligaments and fascia) spinal tissues 7) . LBP has been associated with dysfunction in the active components, including abnormal muscle activation levels and patterns, as well as increased muscle fatiguability 13) . Although we had hypothesized, a priori, a positive effect of CRet on the FRP, our findings were not supportive of this hypothesis, with no effect of CRet on activation levels of the lumbar extensors during the forward flexion maneuver in our study group. This lack of effect might reflect a contribution of passive spinal tissues to the abnormal FRP observed in patients with CLBP. A previous research has reported on micro-injury to passive spinal tissues with repeated loading or stretching stress, resulting in degeneration and reduced stability of the thoracolumbar fascia 22) . Organoleptic changes in other passive spinal tissues, including the supraspinous ligament and intervertebral capsule, due to continuous or repeated elongation stress caused by reflex activity of the lumbar multifidus and erector spinae muscles, have also been reported 23) . The immediate improvement in muscle stiffness and recovery of muscle fatigue with CRet are thought to reflect its effects on active spinal tissues, with no indications of therapeutic effects for spinal tissue degeneration and reduced spinal stability [1][2][3][4] . Studies have reported on the positive therapeutic effect of exercise on the FRP among individuals with LBP 24,25) . These exercises focus on the coordination between active and passive spinal structures to improve spinal stability and posture control, such as exercises using the Neurac Sling System 25) . Therapeutic effects of exercise are achieved over a longer term period of intervention compared to our single session CRet intervention. Yet, our single session of CRet was effective in achieving a decrease in the VAS pain immediately after the treatment (9.38 ± 10.16 mm). CRet may therefore be more effective than exercise to achieve an acute reduction in LBP. Consequently, CRet therapy appears to influence different tissues of the lumbar spine than therapeutic exercise, which supports the combined use of CRet and exercise to achieve pain relief and a normal FRP. We do note previous findings have a possible healing effect of CRet on passive spinal tissues by facilitating the proliferation of precursor cells and collagen remodulation 21) . Our study was a single intervention, so it's unclear what the long-term effects will be. Future research is required to evaluate these effects of CRet, as well as the benefits of combined CRet and therapeutic exercise for the treatment of CLBP. In summary, our findings indicate an acute therapeutic benefit of the intervention on LBP and muscle stiffness. 
Research is needed to evaluate the effect of capacitive and resistive electric transfer therapy on passive spinal tissues and of combining this intervention with therapeutic exercise.
Funding: This clinical trial was not funded.
Running gait biomechanics in female runners with sacroiliac joint pain [Purpose] To identify running gait biomechanics associated with sacroiliac (SI) joint pain in female runners compared to healthy controls. [Participants and Methods] In this case-control study, treadmill running gait biomechanics of female runners diagnosed SI joint pain, (by ultrasound-guided diagnostic SI joint injection and/or ≥2 positive SI physical exam maneuvers) were compared with age, height, mass, and BMI matched healthy female runners. Sagittal and coronal plane treadmill running video angles were measured and compared. [Results] Eighteen female runners with SI pain, and 63 matched controls, were analyzed. There was no difference in age, height, mass, or BMI between groups. At the point of initial contact, runners with SI joint pain demonstrated less knee flexion, greater tibial overstride, and greater ankle dorsiflexion, compared to controls. In midstance, runners with SI pain had greater contralateral pelvic drop compared to controls. For unilateral SI joint pain cases (N=15), greater contralateral pelvic drop was observed when loading their affected side compared to the unaffected side. [Conclusion] Female runners with SI joint pain demonstrated greater contralateral pelvic drop during midstance phase; along with less knee flexion, greater “tibial overstride”, and greater ankle dorsiflexion at initial contact compared to controls. INTRODUCTION According to a recent report from the International Association of Athletics Federation, running's popularity has increased by nearly 60% over the past decade, with millions of people participating in running events annually, a trend largely fueled by a growing population of female runners, who now make up over 50% of race entrants 1) . Despite the numerous health benefits of running [2][3][4][5][6] , musculoskeletal injuries are also common, with a 19-79% incidence among runners [7][8][9][10][11] . Running-related injuries can be the result of overuse, anatomic predisposition, and/or problems with gait biomechanics [12][13][14] . Injuries to the back, pelvis, hip, and thigh have been reported to account for approximately 25-35% of all injuries sustained by runners, and can require prolonged periods of rehabilitation and time away from sport [15][16][17][18] . Sacroiliac (SI) joint pain can be particularly challenging to diagnose and treat as therapeutic options are relatively limited 19,20) . Given that commonly used imaging modalities such as MRI and radiographs may show no abnormality in the setting of functional SI joint pain, diagnosis relies on detailed physical examination and the gold standard of diagnostic-anesthetic SI joint injections under ultrasound or fluoroscopic guidance 21,22) . The SI joint complex functions in the transmission, dampening, and distribution of forces from the lower extremities to the spine 23) . Repetitive torsional and shear forces can cause deleterious effects, strain, and pain at the SI joint 24) . Given the SI joint's integral role in force distribution during ambulation, return to sport can be especially challenging among runners and athletes participating in running-based sports, who repetitively load the lower extremities and lumbopelvic-hip girdle for prolonged periods of exercise and through varying degrees of fatigue which can alter an athlete's baseline gait biomechanics. 
Recent studies indicated that SI joint pain is more common among females than males, which has been posited to be in part due to gender-related differences in joint contour and orientation in relation to center of gravity, and hormonally-derived joint mobility, contributing to relatively less SI joint stability in females 25) . Running gait analysis has been utilized to evaluate patterns that are associated with several common running injuries [25][26][27][28][29][30] . Despite advances in a growing body of research on running gait mechanics and retraining, no published reports exist that have investigated running gait mechanics specifically in female runners with SI-joint pain. Identifying running mechanics associated with SI joint pain in a female population is the first step toward developing evidence-based strategies for the integration of running gait retraining into the management and prevention of SI joint pain in female runners. The purpose of this study was to identify running biomechanical differences between healthy female runners and female runners with SI joint pain. We hypothesized that those with SI joint pain would exhibit poorer peri-pelvic control while running with coronal plane mechanics including greater contralateral hip drop (CPD), greater hip adduction, and greater knee valgus during midstance phase of the gait cycle compared to healthy controls. Additionally, we hypothesized that those with SI joint pain would demonstrate sagittal plane running mechanics at the point of initial contact (IC) that might contribute to a stiff landing and "breaking impulse" including less hip flexion, less knee flexion, greater "tibial overstride," and greater ankle dorsiflexion compared to healthy controls. PARTICIPANTS AND METHODS A case control retrospective study design was used. Video analysis of running treadmills is routinely obtained because it is a part of care included in clinical evaluation of patients in the Injured Runners Clinic at Boston Children's Hospital-Sports Medicine and the Micheli Center for Sports Injury Prevention. Two-dimensional video analysis was used with the intention that two-dimensional analysis is more commonly available in clinical facilities than three-dimensional motion capture, and thus findings would be more easily translated for wide clinical application. Institutional Review Board approval was obtained prior to commencement of this study. All study participants signed informed consent for participation. The case group (N=18) included female patients seen at the Boston Children's Hospital Division of Sports Medicine (Boston, MA, USA), diagnosed with SI joint pain based on history and physical exam, including 2 or more positive SI joint pain provocative tests on physical exam and/or a positive response after ultrasound-guided SI joint injection of with rapid-acting anesthetic and corticosteroid (e.g. >90% pain relief on post-procedure SI-joint provocative testing). Physical exam testing criteria were established based on published data on the sensitivity and specificity of composite SI joint provocative physical exam test findings in the diagnosis of SI joint pain 23,[31][32][33][34] . The control group (N=63) was comprised of asymptomatic female runners enrolled in the Running Injury Prevention Program at the Micheli Center for Injury Prevention (Waltham, MA, USA), matched based on age, height, weight, and BMI. Inclusion criteria included female gender, self-identification as a runner or running-based sport participant. 
Exclusion criteria included: significant co-existing musculoskeletal pathology including history of orthopedic surgery for the back or lower extremity, significant congenital or acquired spinal pathology, neuromuscular disorders, rheumatologic conditions (e.g. spondyloarthropathies), or participant inability to tolerate treadmill running for 5 minutes for any reason including significant pain, medical comorbidity, or functional limitation. Prior to initiating the video recordings, markers (neon-colored adhesive tape) were placed at key anatomic landmarks by a certified athletic trainer, including: the acromioclavicular joint of the shoulder, greater trochanter of the femur, anterior superior iliac spine (ASIS), posterior superior iliac spine (PSIS), medial and lateral femoral epicondyles, fibular head, fifth metatarsal head (the shoe overlying this bony landmark), distal achilles tendon, and the lateral malleoli. All study participants had prior lifetime experience running on a treadmill. Prior to using the treadmill, participants did self-guided warm up stretching for 5 minutes. Each participant was instructed to "run at a comfortable pace they would choose if running a long distance" and subsequently selected their treadmill speed based on their comfort level. All participants maintained a pace of at least 4 miles per hour and demonstrated a flight phase in their gait cycle. Treadmill slope grade was set at zero. Each participant ran for a total of 5 minutes. Video recordings began after a minimum of 3 minutes of running and when the participant reported they had had adequate warm up and felt comfortable running at their preferred speed. Participants were asked to notify staff if they experienced any pain or discomfort while running. Two high-speed video cameras (Casio Exlim 1, Casio, llc, Tokyo, Japan), resolution 512 × 384 pixels at 300 frames per second (fps) were mounted on a commercially available casio video camera stand (Casio EX FH25, Casio America Inc., Dover, NJ, USA), locked at a standard height to maintain the video camera lens 84 centimeters from the floor with the camera mount apparatus locked in position to maintain the camera's orthogonal positioning relative to the floor. Videos were recorded using two separate cameras, for coronal plane and sagittal plane. The sagittal video ( Fig. 1) camera stand was positioned at a distance of 2.5 meters from the center of the treadmill belt with the camera's optical axis perpendicular to the runner's plane of movement. The camera was confirmed to be in position in line with the runner's greater trochanter marker, for consistency. The posterior/coronal plane video camera was set up behind the treadmill (facing the dorsum of the runner) with the camera positioned 3 meters from the center point of the treadmill belt with the camera's optical axis in parallel with the runner's plane of movement. Coronal and sagittal plane videos included the runner's full body. The video images were analyzed by blinded study personnel using ImageJ software (US National Institutes of Health; Bethesda, MD, USA). From the video recordings, 10-30 second video clips were created, with the goal of capturing 5 complete strides. Still-frame images were taken from the videos for measurement. 
For the purpose of this study ground reaction force (GRF) data from the Noraxon treadmill's force plate (Noraxon USA; Scottsdale, AZ, USA) was used strictly for correlation with video images and confirmation of key gait cycle events including initial contact, midstance, and toe off and delineation of stance and swing phases of the gait cycle. Initial contact (IC) event was identified as the first contact of the sole of the shoe with the treadmill on sagittal plane video, selected based on both visual review and confirmed by correlation with force plate recordings. Video of the coronal plane was assessed and midstance phase identified as the first sub-phase of single limb support when the full foot maintained contact with the ground while the contralateral leg was moving through swing phase. Midstance phase on coronal video was confirmed by correlation with vertical GRF apex before conversion into terminal stance phase. Each still-frame image selected for analysis was taken after at least the third foot strike on the clip, in order to allow the study personnel analyzing the video to familiarize with the participant's gait pattern and most accurately select the video still-frame image reflecting the gait cycle event of interest. SI Joint Pain Provocative Physical Exam Tests used in the diagnosis of SI joint pain outlined below have been described previously in the literature, with 2 or more positive tests combined demonstrated to have excellent sensitivity and specificity [31][32][33][34] . POSITIVE TEST: Pain reproduced at the SI joint. • SI Compression Test: With patient in decubitus position, a vertically directed force is applied to the iliac crest directed towards the floor, transversely across the pelvis, compressing the SI joints. • SI Thigh Thrust Test: With the patient in supine position with the affected-side hip in 90 degrees flexion, the sacrum is fixated against the table with the examiner's hand, and a vertically oriented force is applied through the line of the femur directed posteriorly, producing a posterior shearing force at the SI joint. • Sacral Thrust Test: With patient in prone position, a vertically directed force is applied to the midline of the sacrum at the apex of the curve of the sacrum, directed anteriorly, producing a posterior shearing force at the SI joints with the sacrum nutated. • Gaenslen's Test: With the patient in supine position, one hip in 90 degrees of flexion and the contralateral hip in 0-5 degrees extension off the edge of the exam table, the pelvis is stressed with a torsion force by a superior/posterior force applied to the knee and a posteriorly directed force applied to the contralateral knee. • SI Distraction Test: With the patient in supine position, vertically oriented pressure is applied to the anterior superior iliac spinous processes directed posteriorly, distracting the sacroiliac joint. • Patrick's FABER Test: With the patient in supine position, the affected side leg is held in flexion, abduction, and external rotation, with the affected-side foot crossed over the opposite-side thigh. The pelvis is stabilized at the opposite ASIS with the hand of the examiner. A gentle downward force is applied to the affected-side knee of the patient and is steadily increased, exaggerating the motion of hip flexion, abduction, and external rotation. 
• Sacral Torque Test: With patient in decubitus position, a horizontally directed force is applied to the sacrum while a torque rotation force is generated by the examiner's top hand applying posterior rotational force to the anterior iliac spine. SAGITTAL PLANE ANGLES: Measured from still-frame images extracted from video taken from video camera positioned to the runner's side (sagittal plane). Measurements are based on anatomic landmarks identified and marked on the limb ipsilateral to the location of the camera at the point of initial contact. • Trunk Posture Angle: Generated by the intersection between a line from the superior tip of the greater trochanter to the acromioclavicular joint and a vertical axis (angle measurements anterior to the greater trochanter denoted as positive, and angle measurements posterior to the greater trochanter were denoted as negative). • Pelvic Tilt Angle: Generated by the intersection between a line drawn from the Anterior Superior Iliac Spine (ASIS) to the Posterior Superior Iliac Spine (PSIS) and level horizontal axis. • Hip Flexion Angle: Generated by the intersection between a line drawn from the greater trochanter to the center of the knee and a vertical axis. • Knee Angle: Generated by the intersection of a line drawn from the greater trochanter to the center of the knee, and a line from the center of the knee to the lateral malleolus. • Overstride Angle: Generated by the intersection between a line drawn from the lateral malleolus to the fibular head, and a vertical line. Angle measurement anterior to the fibular head are considered positive and those posterior to the fibular head are considered negative angles. • Ankle Angle: Generated by the intersection of a line drawn from the fibular head to the lateral malleolus, and a line from the lateral malleolus to a marker on the shoe overlying the fifth metatarsal head. • Foot Inclination Angle: Generated by the intersection between a line drawn along the sole of the shoe and the treadmill surface. CORONAL PLANE ANGLES: Measured from still-frame images extracted from footage taken from video camera positioned posterior to the runner. Angles are ascribed to the weight bearing limb in midstance of the running gait cycle. • Contralateral Pelvic Drop Angle: Generated by the intersection of a line drawn between the PSIS of the weigh bearing limb and the PSIS of the contralateral side, and a horizontal axis. Angle is ascribed to the weight bearing limb. • Hip Adduction Angle: Generated by the intersection of a line drawn between the greater trochanter of the femur and the midline of the posterior knee (point equidistant between the medial and lateral femoral condyles), and a vertical axis. • Knee Valgus Angle: Generated by the intersection of a line drawn between the greater trochanter of the femur and the midline of the posterior knee (point equidistant between the medial and lateral femoral condyles), and a line from the midline of the posterior knee to the distal insertion of the Achilles. For statistical analysis: for continuous variables, Shapiro-Wilk test was used to determine normality of distribution. When the normality was not violated, independent t-test was used. Conversely, when the data were not normally distributed, Mann-Whitney U test was employed. Physical characteristics including age, height, mass, and BMI were compared between runners with SI joint pain and healthy controls. 
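As one concrete example of how such measurements can be reproduced from digitized landmark coordinates, the sketch below computes an overstride-style angle between the lateral malleolus-to-fibular head line and the image vertical; the coordinate conventions, sign convention, and pixel values are hypothetical illustrations (the study itself measured angles in ImageJ).

```python
import math

def shank_vertical_angle(malleolus_xy, fibular_head_xy):
    """
    Angle (degrees) between the malleolus-to-fibular-head segment and the
    vertical image axis. Assumes +x points in the direction of travel and
    +y points upward; the sign convention (positive when the malleolus lies
    anterior to the fibular head, i.e., the foot lands ahead of the shank)
    is illustrative only.
    """
    dx = malleolus_xy[0] - fibular_head_xy[0]
    dy = fibular_head_xy[1] - malleolus_xy[1]
    return math.degrees(math.atan2(dx, dy))

# Hypothetical pixel coordinates at initial contact.
print(round(shank_vertical_angle((412.0, 118.0), (398.0, 242.0)), 1))  # ~6.4 degrees
```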
Biomechanical variables of interest analyzed included sagittal plane angles at point of initial contact: trunk posture angles, pelvic tilt angles, hip flexion angles, knee flexion angles, tibial overstride angles, ankle dorsiflexion angles, and foot inclination angles, along with coronal plane angles at midstance: contralateral pelvic drop (CPD), hip adduction angle, and knee valgus angle. For categorical variables, foot strike type at IC in sagittal plane, a χ 2 analysis was used for right and left foot separately. The foot strike was originally categorized rearfoot and non-rear foot strike types. Then, the foot strike patterns were further compared by proportion of rearfoot, midfoot, and forefoot strikes. For all comparisons, p=0.05 was used as a critical statistical significant value, and the IBM SPSS statistical software (Version 23, SPSS Inc, Chicago, IL, USA) was used for all analyses. RESULTS A total of 81 runners met the inclusion criteria (runners with SI pain: N=18, healthy control runners: N=63). Because the Shapiro-Wilk test indicated non-normally distributed patterns in continuous variables, Mann-Whitney U test was used. There were no differences in age, height, mass, and BMI between the two groups ( Table 1). In the sagittal plane at IC, runners with SI pain had significantly less knee flexion (p=0.018) and greater tibial overstride angles (p=0.026). Those with SI pain also demonstrated greater ankle dorsiflexion (p=0.010) and foot inclination angles at the point of IC (p<0.001) ( Table 2). There was no significant difference in sagittal plane IC hip angles, pelvic tilt angles, and trunk posture angles ( Table 2). In the coronal plane at midstance phase, for runners with SI pain-when their symptomatic side was weightbearing, there was significantly greater CPD compared to healthy controls (p=0.005) ( Table 3). There was no significant difference identified in hip adduction angles or knee valgus angles between runners with SI pain and healthy controls ( Table 3). For foot strike pattern categories (rearfoot, midfoot, and forefoot strike), 87% of all runners in the study exhibited a rearfoot strike pattern. A χ 2 analysis did not indicate any statistical difference in categorical footstrike patterns between limbs with SI pain and healthy controls (Table 4). DISCUSSION Female runners with SI joint pain demonstrated significant differences in certain components of running gait mechanics compared to controls. In the sagittal plan at IC, those with SI joint pain demonstrated what has been characterized as "braking impulse mechanics" with less knee flexion, greater "tibial overstride", and greater ankle dorsiflexion, when compared to these measures in healthy controls-effectively creating a landing mechanism that is stiffer and lands the foot farther in front of the runner's center of mass. In coronal plane at midstance, when the affected limb was weightbearing, there was significantly greater CPD compared to healthy control runners. Moreover, for patients with unilateral SI joint pain (N=15), a significantly higher degree of CPD was seen on the symptomatic side compared to degree of CPD for the asymptomatic limb, suggestive of poorer hip control on the symptomatic side. Among healthy controls there was no significant difference in CPD between their right and left lower extremities. The SI joints play a key role in both dampening and distributing GRF during ambulation 16,35,36) . 
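A minimal sketch of the group-comparison logic reported above, assuming SciPy's standard tests: Shapiro-Wilk to check normality, then an independent t-test or Mann-Whitney U accordingly. The arrays are placeholder values, not study data.

```python
import numpy as np
from scipy import stats

def compare_groups(si_pain, controls, alpha=0.05):
    """Choose the two-sample test based on Shapiro-Wilk normality, as described above."""
    normal = (stats.shapiro(si_pain).pvalue > alpha) and (stats.shapiro(controls).pvalue > alpha)
    if normal:
        name, res = "independent t-test", stats.ttest_ind(si_pain, controls)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(si_pain, controls, alternative="two-sided")
    return name, res.pvalue

# Placeholder contralateral pelvic drop angles (degrees); N=18 vs. N=63 as in the study.
rng = np.random.default_rng(0)
print(compare_groups(rng.normal(7.0, 2.0, 18), rng.normal(5.0, 2.0, 63)))
```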
The SI joint surfaces are positioned in parallel to vertical loading forces, making the joint vulnerable to shear forces [37][38][39] . Although there are multiple stabilizing mechanisms for the joint, including gross orientation, intraarticular surface contour, and compression by overlying ligaments and fascia, when stressed, SI joint motion can occur along multiple axes, including rotation up to 8 degrees and translation up to 8 mm [38][39][40][41][42][43] . Our findings supported our hypothesis that runners with SI joint pain would demonstrate a "stiff" landing with less knee flexion, greater tibial overstride, and greater ankle dorsiflexion at the point of IC. Previous studies have demonstrated associations between greater tibial overstride angles and relatively high GRF in running [44][45][46][47] . Overstride patterns and higher magnitude GRF have been found to be associated with running-related musculoskeletal injuries such as stress fractures in the lower leg and plantar fasciitis 9,10,[48][49][50] . When the knee flexes from the point of IC through loading phase, there is eccentric contraction of the quadriceps, helping to absorb GRF 44) . Those who land with a less flexed knee have a "stiffer" landing, with less engagement of the quadriceps muscles in the loading phase, potentially leading to greater force vectors being transmitted up the kinetic chain to the SI joint. Several studies have synthesized the mechanical implications of ankle dorsiflexion on GRF load and distribution in the lower extremities 21,36,50,51) . Kinematic and kinetic running analyses by Tam et al. demonstrated a positive association between increased ankle dorsiflexion at IC and initial force loading rate in runners 52) . Loading rate has been found to be a risk factor for tibial stress fractures and other running-related injuries 53,54) . Increased loading rates experienced by runners with a greater degree of ankle dorsiflexion at IC could contribute to stress on the SI joint. Lieberman et al. analyzed the kinetic influence of ankle dorsiflexion at IC, demonstrating that a highly dorsiflexed ankle will convert little translational energy into rotational energy about the ankle joint, increasing the magnitude of the impact transients 36) . Moreover, the degree of ankle dorsiflexion at IC alters the way that runners attenuate the rate of loading, and the more dorsiflexion at IC, the less effective soft tissue involvement in force distribution 50,52) . In addition to these mechanical factors, there were significant differences in coronal plane mechanics between runners with SI pain and those without. When a symptomatic limb with SI pain was loaded through midstance phase, there was significantly greater CPD. Ireland et al. found that weakness of the hip abductors causes increased coronal plane hip motion 55) . Abnormal motor control patterns of the gluteus medius are found in individuals with low back pain 56) . When the limb is loaded, dysfunction of the hip abductors leads to contralateral pelvic drop, hip internal rotation, and valgus force at the knee. In order to maintain balance, there may be compensatory rotation of the pelvis into a counternutated position, with ventral rotation of the iliac bones relative to the sacrum and predisposing the SI joint to strain and pain. Vleeming et al. showed that the long dorsal SI ligament is tensed when the SI joints are counternutated (dorsal rotation of the sacrum relative to the iliac bones with movement of the sacral promontory posteriorly and superiorly with concurrent ventral ilium-on-sacrum rotation) and slackened when the SI joints are nutated (ventral rotation of the sacrum relative to the iliac bones with movement of the sacral promontory anteriorly and inferiorly with concurrent dorsal ilium-on-sacrum rotation). Nutation likely leads to more effective compression and force closure of the SI joint 42,43,57,58) . Hungerford et al. showed that movement patterns were fully reversed between healthy individuals and SI joint pain patients: nutation was found to occur in healthy persons on the weightbearing side, but counternutation was found to occur on the weightbearing side in those with SI joint pain dysfunction. These findings are thought to be due to reduced tonicity of the erector spinae, gluteus maximus, biceps femoris, and external oblique muscles 59) . This type of inappropriate postural loading may have marked effects on stresses on the SI joint 60,61) . There were limitations to this study that warrant consideration. First, two-dimensional video technology was utilized, limiting our ability to collect data on horizontal plane (Z-axis) motion as would be captured with three-dimensional technology. That being said, findings in this study may be more broadly clinically applicable given that two-dimensional gait evaluation technology is far more commonly available in most clinical settings. Additionally, it is known that soft tissue artifact is inherent in surface-based movement analysis, and can be a significant source of error in movement analysis in humans. However, it is expected that this error is systematically applied and does not bias between-group comparisons. Nonetheless, considering the relatively small differences between groups for these coronal and sagittal plane variables, results should be interpreted with acknowledgement of these factors 62,63) . Whereas this study focused on treadmill-based running, further research is needed to verify that the biomechanical features identified in this study also apply in the setting of over ground running 64) . Although this study provides an important first step in the identification of biomechanical features associated with SI joint pain in runners, future prospective longitudinal studies will further inform sports medicine professionals and coaches to help mitigate risk of this running-related injury. This study was designed to identify running mechanical features associated with SI joint pain in a population of female runners. Female runners with SI joint pain demonstrated sagittal plane mechanics with a significantly "stiffer landing", including less knee flexion, greater tibial overstride, and greater ankle dorsiflexion at the point of initial contact. In the coronal plane, those with SI pain had a greater degree of contralateral pelvic drop during midstance phase, suggestive of altered lumbosacral and hip abductor muscle recruitment for those with SI pain. While further studies are warranted, these findings suggest a potential role for gait analysis and retraining in the management of SI joint pain, a common running-related injury.
Author contributions: All of the named authors contributed to study design, data collection and interpretation, and approved the final manuscript as submitted.
Conflicts of interest: All authors have no conflict of interest to disclose.
The Effect of Distributive Justice and Situational Leadership on Job Satisfaction through Work Family Conflict (Case Study of Full-time Working Women in the Banking Sector in Jakarta) Work Family conflict is a phenomenon that is experienced by most people in big cities, where working women are a demand of the times as an effort to support the family economy. This study aims to analyze the effect of distributive justice and situational leadership on job satisfaction through work family conflict. The object of this research is women who work fulltime in the banking sector in Jakarta. A total of 100 respondents filled out the questionnaire, the sample was taken using the purposive sampling technique. Data were analyzed using PLS (Partial Least Square analysis). The results of this study indicate that distributive justice has no impact on job satisfaction and situational leadership has a positive effect on job satisfaction. Work family conflict has no impact on job satisfaction. Distributive justice through work family conflict has no effect on job satisfaction. Situational leadership through work family conflict has no impact on job satisfaction. I. INTRODUCTION The economic development in Indonesia makes household needs increase. Husband and wife working together to earn a living (working) for their family's future are common in this era of globalization. A phenomenon marked by changes in demographic trends that hit the whole world, namely an increasing number of working women. Work-family conflict is a phenomenon that is experienced by most people in big cities, where working women are a demand of the times in an effort to support the family economy. Parents will be faced with the issue of which interests will come first, family or work, as research conducted by Asbari et al. [1], this study found that the main factors for housewives to work outside the home were financial and educational factors. Another factor is the factor to fill spare time and to socialize with colleagues. Research on work-family conflict in relation to job satisfaction has been previously conducted by Srimulyani & Prasetian [2] with the title The Effect of Mediation on Job Satisfaction on the Work-Family Conflict (WFC) relationship and organizational commitment. The results of the study indicate that work family conflict has a negative effect on job satisfaction. Work family conflict arises when someone who performs his role in a job has difficulty carrying out his role in the family and vice versa. Job satisfaction is a positive state of mind, happy and always working hard, employees who work hard and have feelings of pleasure towards their work are assets in the organization, they will produce a good performance and image for the organization. Every worker or employee must have his level of satisfaction which can be measured by the performance of the employee working in the company, but each employee with one another does not necessarily have the same level of job satisfaction. Therefore, to establish a good level of job satisfaction, the company needs to take action so that employees can feel comfortable doing their job well. Previous research that discusses the variables of distributive justice, situational leadership, work family conflict, and job satisfaction has been done separately before. Jaenab & Kurniawati [3] conducted a study on the level of distributive justice and interactional justice of compensation on job satisfaction. 
In addition, research from Li, McCauley, & Shaffer [4] examines the impact of Leadership Behavior on Employee Work Family Conflict Outcomes. The results showed that leadership behavior had a negative effect on work family conflict. From the description above, it is interesting to conduct research on the effect of distributive justice and situational leadership on job satisfaction through work family conflict, with the case of female employees in the banking sector in DKI Jakarta. This indicates the need for further research that examines these four variables, namely distributive justice, situational leadership, job satisfaction, and work-family conflict. A. Job Satisfaction Robbins & Judge [5] argue that job satisfaction is a general attitude of an individual towards his work. Furthermore, Sekartini [6] revealed that job satisfaction is a positive feeling about one's job which is the result of an evaluation of characteristics, and job satisfaction also reflects one's job. Another understanding of job satisfaction is put forward by Simanjuntak, Nadapdap, & Winarto [7], that job satisfaction is a feeling of pleasure, where there is a match between employee expectations and the results they receive for the job. B. Work Family Conflict Definition of Work Family Conflict according to Goltzman & Peleg [8] "Work Family Conflict (WFC) is a form of conflict in which role pressures from the work and family domains conflict with each other" That is, Work Family Conflict (WFC) is a form of conflict where the role pressures from work and family roles conflict. The definition of Work-Family Conflict put forward by Costa et al. [9] "Work-family conflict (WFC) refers to situations in which it is difficult to condition family and professional demands". Furthermore, the definition of Work-Family Conflict according to Dhakirah, Hidayatinnisa, & Setiawati [10] is a conflict due to the demands of roles from work and family at a time that cannot be aligned, where on the one hand are required to perform their duties as employees in a workplace and others are required as family members. C. Distributive Justice According to Greenberg and Baron [11] distributive justice is a person's perception of fairness in the distribution of resources among employees. In other words, the perceived fairness of how rewards are distributed among employees. Distributive justice refers to the rewards allocated among employees; interactional justice refers to interpersonal relationships in determining organizational output. Distributive justice is about how one compares the input (input) with the result (outcome). D. Situational Leadership According to Daft [12] that the situational leadership model created by Hersey and Blanchard focuses an approach that focuses on great attention to the characteristics of employees in determining appropriate leadership behavior. Every organization has a culture that serves to form rules and guidelines in thinking and acting in achieving the goals that have been set. The following is the framework of the research model proposed in this study: Previous research that discussed distributive justice and job satisfaction by Irawan & Sudarma [16] showed the results of research partially distributive justice had a positive impact on job satisfaction. The research of Atmojo & Tjahjono [27] also explains that aspects of distributive justice and procedural justice of compensation have a positive impact on paramedic compensation satisfaction and paramedic performance. 
Previous research that discusses situational leadership and job satisfaction by [18]. The results show that partially distributive justice has a positive impact on job satisfaction. Research by Putra et al. [33] also explains that situational leadership has a positive impact on the job satisfaction of hotel employees. H3: Distributive Justice has a negative impact on Work-family conflict. A previous study that discussed the relationship between organizational justice and work family conflict was the research of Tziner & Sharoni [19] which found that organizational justice was negatively related to work-family conflict. This is also in line with Sorush Niknamian's research [20] which found that organizational justice is negatively related to work-family conflict. H4: Situational Leadership has a negative impact on Work family conflict. Previous research that discussed the relationship between transformational leadership and work-family conflict was the research of Nicholas Gillet et al. [21] who found that leadership had a negative impact on work-family conflict. This is also in line with the research of Li A. [4] who found that situational leadership was negatively related to work-family conflict. [Figure: research framework with Distributive Justice [14] and Situational Leadership [18] as predictors of Job Satisfaction [17] through Work Family Conflict [15], [16], with hypothesis paths H1-H6.] H5: Work-family conflict has a negative impact on job satisfaction. Previous research that discussed the relationship between work family conflict and job satisfaction was the research of Sihaloho & Damrus [22] which found that work family conflict was negatively related to job satisfaction. This is also in line with research by Rajak A. [32], which found that work family conflict was negatively related to job satisfaction. H6: Distributive Justice through Work-family conflict has a negative impact on Job Satisfaction. Previous research discussing work-family conflict justice and job satisfaction by [23] found that work-family-conflict was negatively related to job satisfaction and [26] research at PT. Port of Indonesia I (Persero) Medan. The results showed that partially work-to-family conflict had a negative and significant impact on job satisfaction. H7: Situational Leadership through work-family conflict has a negative impact on job satisfaction. Previous research discussing work-family conflict justice and job satisfaction by Putra [24] found that leadership (transformational, situational) through work-family-conflict was negatively related to job satisfaction and research by Agung et al. [25]. The results showed that partial situational leadership through work family conflict has a negative and significant impact on job satisfaction. III. RESEARCH METHODOLOGY This research is included in associative research with the form of a causal relationship or cause and effect. According to Sugiyono [26], a causal relationship is a cause-and-effect relationship. So, here there are independent variables (influence) and dependent variables (influenced). This means that the research focuses on the effect of distributive justice and situational leadership as independent variables through work family conflict as a mediating variable on job satisfaction as the dependent variable. The place where this research was conducted is in the city of Jakarta, Indonesia, including employees of Bank Mandiri, BNI, Danamon, Permata, and BCA. Data were analyzed using the Structural Equation Model Partial Least Square (SEM PLS) approach by evaluating the measurement model and structural model. A.
Characteristics of the Respondent profile 1) Respondents Based on Age Of the 100 respondents studied, 35% were aged 20-29 years, 46% were aged 30-39 years, and 19% were aged >40 years; thus, the majority of the female full-time workers in the banking sector in Jakarta who were studied were aged 30 to 39 years (46%). 2) Respondents Based on Education Based on the results of the study, the most common highest educational background among respondents who are full-time female workers in the banking sector in Jakarta was a bachelor's degree (49%). 3) Respondents Based on Child Age Based on the results of the study, among full-time female workers in the banking sector in Jakarta who already have children under 10 years of age, the largest group of respondents had children aged 4 to 6 years (39%). B. SEM-PLS Analysis Results In this chapter, we will discuss the results of statistical data analysis using the Smart PLS program, namely the Outer model and Inner model tests. 1) Convergent Validity Test A convergent validity value is the value of the loading factor on the latent variable with its indicators. The value of convergent validity is used to determine the validity of a construct. According to the general rule (rule of thumb), a loading factor indicator value of 0.7 is said to be valid. However, in the development of new models or indicators, a loading factor value between 0.5-0.6 is still acceptable [27]. In this study, we use a limit of 0.5, so indicators whose loading factor values are above 0.5 are declared valid. Here are the results of the validity test: -Job satisfaction variable: there are 2 invalid items, namely KS03 and KS07. This is because the value of the loading factor is less than 0.5. -Work-family conflict variable: there is 1 invalid item, namely WFC06. This is because the value of the loading factor is less than 0.5. These invalid items will be deleted; in the model image in SmartPLS, invalid indicators are removed and the validity test is carried out again in stage 2, with the results as follows: 2) Inner Model Test Results Testing of the inner model or structural model is carried out to see the value of R Square and test the influence between variables. 2.1) R Square Analysis This analysis is to determine the percentage of endogenous construct variability which can be explained by exogenous construct variability. This analysis is also to find out the goodness of the structural equation model. The greater the R-square value, the more the exogenous variables can explain the endogenous variable, and so the better the structural equation. The output results of the R Square value are as follows: 2.2) Predictive Relevance (Q2) This is also known as the Stone-Geisser test. This test is carried out to show that the model has predictive capability if the value is above 0. This value is obtained by the formula [29]: Q² = 1 − (1 − R1²)(1 − R2²) … (1 − Rp²), where R1², R2², … Rp² are the R-squares of the exogenous variables in the equation model. If Q2 > 0, the model has predictive relevance, and if the value of Q2 < 0, the model lacks predictive relevance [28]. The Q2 test is calculated by using MS Excel. The obtained result is 0.744; because the value is more than 0, the model has predictive relevance. 2.3) Goodness of Fit Index (GoF) This index is used to evaluate the overall structural and measurement model. 
This GoF index is a single measure used to validate the combined performance of the measurement model or external model and structural model or internal model. The purpose of the GoF assessment is to measure the performance of the PLS model both at the measurement stage and in the structural model by focusing on predicting the overall performance of the model, which can be calculated by the following formula [30] in [29]: GoF = √(average AVE × average R²). The criteria are a value of 0.10 (small GoF), a value of 0.25 (medium GoF), and a value of 0.36 (large GoF) [28, p. 83]. The GoF test is calculated by using MS Excel. The result is 0.528, so the GoF is large. C. Hypothesis Testing (Influence between Variables) In this hypothesis testing stage, it is analyzed whether there is a significant effect of the independent variables on the dependent variable. Testing the proposed hypothesis is done by looking at the path coefficients, which show the parameter coefficients and the statistical significance value of t. The significance of the estimated parameters can provide information about the relationship between variables. The limit for rejecting or accepting the hypothesis uses a probability of 0.05. The table below presents the estimated output for structural model testing: Conclusion: 1. Distributive Justice has no effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta. This is because the t value < tTable (0.277 < 1.96) or P values > 0.05 (0.782 > 0.05), so Ho is accepted, and Ha is rejected. 2. Situational Leadership has a positive effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta. This is because the value of t count > tTable (2.534 > 1.96) or P values < 0.05 (0.012 < 0.05), so Ho is rejected, and Ha is accepted. It has a positive effect because the positive coefficient value is 0.470, meaning that if situational leadership increases, job satisfaction will also increase. 3. Distributive justice has a negative effect on work-family conflict in full-time female workers in the banking sector in Jakarta. This is because the value of t count > tTable (2.028 > 1.96) or P values < 0.05 (0.043 < 0.05), so Ho is rejected, and Ha is accepted. It has a negative effect because the negative coefficient value is -0.132, meaning that if distributive justice increases, work-family conflict will decrease. 4. Situational Leadership has a negative effect on Work-family conflict in full-time female workers in the banking sector in Jakarta. This is because the value of t count > tTable (16.842 > 1.96) or P values < 0.05 (0.000 < 0.05), so Ho is rejected, and Ha is accepted. It has a negative effect because the negative coefficient value is -0.774, meaning that if situational leadership increases, work-family conflict will decrease. Note: Conclusions 5 and 6 are discussed in the test of the effect of the mediating variable below. D. Test the Effect of Mediation Variables (Test Indirect Effect) Testing the effect of the mediating variable is used to determine whether the mediating or intervening variable mediates the effect of the independent variable on the dependent variable or not. The results of the path analysis or mediation effect test can be seen in the output of the Indirect Effect; if the P-value is less than 0.05 then there is a mediation effect [31]. The output results of the mediating variable influence test or indirect influence test are as follows: The analysis results are as follows: 1.
Distributive justice through work-family conflict has no effect on job satisfaction in full-time female workers in the banking sector in Jakarta. This is based on the test of the effect of the mediating variable. The P-value of the indirect effect of Distributive Justice on Job Satisfaction through Work-family conflict is 0.587, which is greater than 0.05. 2. Situational Leadership through Work-family conflict has no effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta. This is based on the test of the effect of the mediating variable, the P-value of the indirect influence of Situational Leadership on Job Satisfaction through Work-family conflict is 0.503, which is greater than 0.05. E. Discussion 1) The Effect of Distributive Justice on Job Satisfaction Based on the results of the study, it is known that distributive justice has no effect on job satisfaction in fulltime female workers in the banking sector in Jakarta. Thus, the first hypothesis which states "Distributive Justice has a positive effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta" is not proven and can be declared not accepted. Previous research that discussed distributive justice and job satisfaction by Irawan & Sudarma [16] showed the results of research partially distributive justice had a positive effect on job satisfaction. 2) The Effect of Situational Leadership on Job Satisfaction Based on the results of the study, it is known that Situational Leadership has a positive effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta. This means that if situational leadership increases, job satisfaction will also increase. Thus, the second hypothesis which states that "situational leadership has a positive effect on job satisfaction in full-time female workers in the banking sector in Jakarta" is proven and can be declared accepted. Previous research that discusses situational leadership and job satisfaction by Solihin Mattalatta [18]. The results show that partially distributive justice has a positive effect on job satisfaction Putra et al [33] research also explains that situational leadership has a positive effect on the job satisfaction of hotel employees. 3) The Effect of Distributive Justice on Work Family Conflict Based on the results of the study, it is known that distributive justice has a negative effect on work-family conflict in full-time female workers in the banking sector in Jakarta. Thus, the third hypothesis which states "Distributive Justice has a negative effect on work-family conflict in fulltime female workers in the banking sector in Jakarta" is proven and can be declared accepted. A previous study that discussed the relationship between organizational justice and work-family conflict was the research of Tziner & Sharoni [20] which found that organizational justice was negatively related to work-family conflict. This is also in line with Sorush Niknamian's research [20] which found that organizational justice is negatively related to work-family conflict. 4) Effects of Situational Leadership on Work Family Conflict Based on the results of the study, it is known that Situational Leadership has a negative effect on work-family conflict in full-time female workers in the banking sector in Jakarta. 
Thus, the fourth hypothesis which states that "situational leadership has a negative effect on work-family conflict in full-time female workers in the banking sector in Jakarta" is proven and can be declared accepted. Previous research that discussed the relationship between transformational leadership and work family conflict was the research of Nicholas Gillet et al. [21] which found that leadership was negatively related to work-family conflict. This is also in line with the research of Li A. [4] who found that situational leadership was negatively related to workfamily conflict 5) Effect of Work Family Conflict on Job Satisfaction Based on the results of the study, it is known that workfamily conflict has no effect on job satisfaction in full-time female workers in the banking sector in Jakarta. Thus, the fifth hypothesis which states "Work-family conflict has a negative effect on job satisfaction in full-time female workers in the banking sector in Jakarta" is not proven and can be declared not accepted. Previous research that discussed the relationship between work family conflict and job satisfaction was the research of Sihaloho & Damrus [22] which found that work family conflict was negatively related to job satisfaction. This is also in line with the research of Rajak A. [32] found that work family conflict was negatively related to job satisfaction. 6) The Effect of Distributive Justice on Job Satisfaction through Work Family Conflict Based on the results of the study, it is known that distributive justice through work-family conflict has no effect on job satisfaction in full-time female workers in the banking sector in Jakarta. Thus, the fifth hypothesis which states "Distributive Justice through Work-family conflict has a negative effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta" is not proven and can be declared not accepted. Previous research discussing work-family conflict justice and job satisfaction by Utama & Sintaasih [23] found that work-family-conflict was negatively related to job satisfaction and Damrus & Sihaloho [22] research at PT. Port of Indonesia I (Persero) Medan. The results showed that partially work-to-family conflict had a negative and significant effect on job satisfaction. 7) The Effect of Situational Justice on Job Satisfaction through Work Family Conflict Based on the results of the study, it is known that Situational Leadership through work-family conflict has no effect on job satisfaction in full-time female workers in the banking sector in Jakarta. Thus, the sixth hypothesis which states that "situational justice through work-family conflict has a negative effect on job satisfaction in full-time female workers in the banking sector in Jakarta" is not proven and can be declared not accepted. Previous research discussing work-family conflict justice and job satisfaction by Putra [24] found that leadership (transformational, situational) through work-family-conflict was negatively related to job satisfaction and research by Agung et al. [25]. The results showed that partial situational leadership through work family conflict has a negative and significant effect on job satisfaction. A. Conclusions Based on the results of research and data analysis in the previous chapter, several conclusions were obtained, namely: 1. Distributive Justice has no impact on Job Satisfaction in full-time female workers in the banking sector in Jakarta. 2. 
Situational Leadership has a positive impact on Job Satisfaction in full-time female workers in the banking sector in Jakarta. This means that if situational leadership increases, job satisfaction will also increase. 3. Distributive justice has a negative impact on work-family conflict in full-time female workers in the banking sector in Jakarta. This means that if distributive justice increases, work-family conflict will decrease. 4. Situational Leadership has a negative impact on Work-family conflict in full-time female workers in the banking sector in Jakarta. This means that if situational leadership increases, work-family conflict will decrease. 5. Work-family conflict has no impact on job satisfaction in full-time female workers in the banking sector in Jakarta. 6. Distributive justice through work-family conflict has no effect on job satisfaction in full-time female workers in the banking sector in Jakarta. 7. Situational Leadership through Work-family conflict has no effect on Job Satisfaction in full-time female workers in the banking sector in Jakarta. B. Suggestion After analyzing and observing all the existing limitations, the researcher provides the following suggestions: 1) For Organization 1. It is recommended that companies implement work-life balance, which is a state of balance between two demands in which an individual's work and life are given equal weight; in other words, no aspect is ignored, including work, personal, family, spiritual, and social life. 2. Company leaders are advised to emphasize to their superiors the importance of clarity in assigning tasks, such as explaining the rules of the game that must be obeyed by employees and explaining the priorities of the various existing task targets. 2) For further research 1. For further research, it is possible to develop a research model with a more varied population and sample so that it becomes useful input for the company. 2. For further research, more variables can be used, so that the research results will be more valid.
Enhancing the quality of cognitive behavioral therapy in community mental health through artificial intelligence generated fidelity feedback (Project AFFECT): a study protocol Background Each year, millions of Americans receive evidence-based psychotherapies (EBPs) like cognitive behavioral therapy (CBT) for the treatment of mental and behavioral health problems. Yet, at present, there is no scalable method for evaluating the quality of psychotherapy services, leaving EBP quality and effectiveness largely unmeasured and unknown. Project AFFECT will develop and evaluate an AI-based software system to automatically estimate CBT fidelity from a recording of a CBT session. Project AFFECT is an NIMH-funded research partnership between the Penn Collaborative for CBT and Implementation Science and Lyssn.io, Inc. ("Lyssn"), a start-up developing AI-based technologies that are objective, scalable, and cost-efficient, to support training, supervision, and quality assurance of EBPs. Lyssn provides HIPAA-compliant, cloud-based software for secure recording, sharing, and reviewing of therapy sessions, which includes AI-generated metrics for CBT. The proposed tool will build from and be integrated into this core platform. Methods Phase I will work from an existing software prototype to develop a LyssnCBT user interface geared to the needs of community mental health (CMH) agencies. Core activities include a user-centered design focus group and interviews with community mental health therapists, supervisors, and administrators to inform the design and development of LyssnCBT. LyssnCBT will be evaluated for usability and implementation readiness in a final stage of Phase I. Phase II will conduct a stepped-wedge, hybrid implementation-effectiveness randomized trial (N = 1,875 clients) to evaluate the effectiveness of LyssnCBT in improving therapist CBT skills and client outcomes and reducing client drop-out. Analyses will also examine the hypothesized mechanism of action underlying LyssnCBT. Discussion Successful execution will provide automated, scalable CBT fidelity feedback for the first time ever, supporting high-quality training, supervision, and quality assurance, and providing a core technology foundation that could support the quality delivery of a range of EBPs in the future. Trial registration ClinicalTrials.gov; NCT05340738; approved 4/21/2022.
Keywords: Artificial intelligence, Cognitive behavioral therapy, Fidelity, Competence, Community mental health, Training, Supervision, User-centered design, Technology, Implementation science
Background There is a mental health crisis in the United States. One out of five adults will receive a mental health diagnosis in their lifetime [1], and major depression is currently the single biggest contributor to disability globally [2]. Over the past several decades, scientists have demonstrated the efficacy and cost-effectiveness [3,4] of treatments for mental health disorders, including psychosocial interventions such as Cognitive Behavioral Therapy (CBT) [5,6]. Despite the billions of dollars spent to disseminate evidence-based psychotherapies (EBPs) like CBT into clinical settings [5][6][7][8][9], access to these effective treatments remains severely limited. Training, policy mandates, and value-based incentives have not translated to broad access to high-quality EBP care in the community [10][11][12]. A specific barrier to effective implementation and sustainment of psychosocial interventions is the ability to measure therapist fidelity. Proctor et al. concluded that "The foremost challenge [to disseminating EBPs] may be measuring implementation fidelity quickly and efficiently" ([13], p. 70; italics added).
To effectively implement psychosocial interventions in community settings and capitalize on the significant investment that health systems have made in EBPs, technology is needed to scale up fidelity assessment "quickly and efficiently". While measurement-based care has demonstrated differences in the effectiveness of providers [14], the existing technology for evaluating therapist EBP fidelity and quality does not scale up to real-world use. Specifically, the research-based, gold-standard approach for assessing fidelity is behavioral coding: A session is recorded, and then this "raw data" is rated by trained human coders. Research on training and quality assurance indicates that using objective, performance-based feedback like behavioral coding can enhance and sustain therapist skills [15], and ultimately client access to EBPs such as CBT [16,17]; without performance-based feedback or quality monitoring, the return on investment of costly implementation efforts is often lost [15,18]. However, this process is time-consuming, expensive, and at times error-prone: It is a non-starter in the vast majority of community practice settings. Accordingly, mental health services researchers utilize a variety of alternative measures, including patterns of utilization (e.g., continuity of care, the number of sessions in a specified time period after diagnosis), therapist self-reports of adherence, client-rated measures of satisfaction, or measures of clinical outcomes [19,20]. However, these are proxies of intervention quality, distal to the content of the clinical encounter, and/or subject to self-observation bias. They are all problematic indicators of fidelity and quality. Behavioral coding provides a methodology for measuring EBP fidelity, but it is impractical at scale, forcing reliance on feasible but circumspect metrics [13,15]. Advances in machine learning (ML) and artificial intelligence (AI) have transformed computers' abilities to create, understand, and respond to natural language. There have been major advances in basic processes (e.g., natural language understanding), as well as consumer-facing technologies (e.g., Alexa, Siri). In addition, cloud-based computing means that any internet-connected device can access server-based computing power that can scale on demand. Lyssn.io, Inc. (or Lyssn, pronounced 'listen') is a technology start-up deploying an array of such AI technologies to support training, supervision, and quality assurance of EBPs. Lyssn has an established cloud-based platform which includes: a) user management and organization of sessions, clinicians, and supervisors; b) recording, playback, and annotation of audio or video data from therapy sessions; c) speech-to-text transcription; d) AI-generated fidelity and quality metrics; and e) data summaries and visualizations for feedback. Therapists and supervisors access Lyssn via a web browser, and a therapist's caseload of patients is shown in a dashboard. Via the web-based dashboard, therapists can record in-person or telehealth sessions or upload sessions recorded elsewhere. The session review interface enables time-linked comments directly in the video (or audio) playback, which facilitates efficient use of traditional supervision. The platform allows therapists and their supervisors to discuss a session asynchronously and immediately queue up a portion of the session to review. In addition, each session is automatically transcribed via Lyssn's in-house, state-of-the-art speech recognition algorithms, trained on over 4,000 sessions.
Lyssn's algorithms automatically identify separate speakers and their role (i.e., client vs therapist), and the transcript is searchable and linked to the recording to support efficient review and supervision [21][22][23]. This supervision platform serves as a base for the AI-generated psychotherapy quality metrics. Study team members led the foundational research that established that ML-based evaluation of psychotherapy quality is possible [24]. The Lyssn platform incorporates algorithms that automatically identify Motivational Interviewing (MI) fidelity codes from session recordings. These algorithms utilize speech and language features to identify both session-level (e.g., how empathic was the therapist in this session?) and per-utterance (e.g., open questions, affirmations, confrontations within talk-turns) MI fidelity codes. The study team has published numerous papers on machine learning applied to MI and psychotherapy common factors (e.g., therapeutic alliance, facilitative interpersonal skills). The proposed work will extend the AI aspects of the Lyssn platform to CBT and further develop the Lyssn user interface to support community mental health needs and workflows. The Lyssn MI psychotherapy quality metrics have recently been extended to CBT, building on a 15-year partnership between the Penn Collaborative and Philadelphia's Department of Behavioral Health and Intellectual disAbility Services (DBHIDS) to implement CBT in Philadelphia's community mental health (CMH) system, known as the Beck Community Initiative (BCI; [49]). After an initial implementation readiness phase, intensive workshops teach CBT theory and strategies, followed by weekly group case consultation. In the 6-month consultation phase, therapists' CBT competence is rated from recorded therapy sessions using the Cognitive Therapy Rating Scale (CTRS; [50][51][52]). BCI training significantly improved CBT quality [49]. Prior to training, only 2% of therapists demonstrated CBT competence, while the majority of therapists (79.6%) demonstrated competence by the end of training [53]. While mean final competence scores (M = 41.2) were above the criterion threshold by the certification point, there remains an opportunity to improve skills among those who have completed training, as well as those who continue to participate in BCI training. Considerable research [15] also shows that training effects wear off over time without additional support, such as performance-based feedback, but this requires ongoing and extensive efforts (i.e., CTRS coding) that are not income-generating or reimbursed by payers. Clearly, human-based fidelity coding presents a substantial challenge in the training protocol. BCI CTRS coding to date is the equivalent of a doctoral-level rater working full-time for almost 4 years, or approximately $100-$125 per session rated, more than the cost (reimbursement) of the session itself. It is expensive, a rate-limiting factor for scaling up training, and does not provide for sustainability, as ongoing CTRS coding ends at 6 months post-workshop. Using technology instead of humans for feedback on CBT competence would promote scale, efficiency, sustainability, and more effective allocation of limited human training resources.
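As an aside, the "percent of human reliability" benchmark used below is a simple ratio of the AI-human correlation to the human inter-rater reliability. The following minimal sketch (illustrative only, reusing the example numbers from the text; not study code) makes that computation explicit.

```python
# Illustrative computation of the "percent of human reliability" benchmark;
# the numbers mirror the worked example in the text.

def percent_of_human_reliability(ai_human_corr: float, human_reliability: float) -> float:
    """Express AI-human agreement as a fraction of human inter-rater reliability."""
    if human_reliability <= 0:
        raise ValueError("human reliability must be positive")
    return ai_human_corr / human_reliability

ratio = percent_of_human_reliability(ai_human_corr=0.75, human_reliability=0.80)
print(f"{ratio:.0%} of human reliability")  # -> 94% of human reliability
```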
Supported by R56 MH118550, the study team demonstrated initial feasibility of using ML models to rate CBT fidelity (i.e., CTRS codes; [50]) from linguistic features using a subset of sessions from previous BCI trainings [54,55]. In 2020, Lyssn established a data use agreement with the University of Pennsylvania for the updated corpus of recordings (n = 2,494) and related CTRS ratings. Using transformer-based, deep neural networks [56], Lyssn developed AI-generated models for each of the 11 CTRS codes using all 2,494 sessions (within a cross-validation framework of test and training partitions; [57,58]). The goal is that AI-generated metrics are indistinguishable from human-generated metrics, with a benchmark of 80% of human reliability (e.g., if human reliability is 0.80 and AI-predicted scores correlate with human scores at 0.75, then AI predictions are 0.75 / 0.80 or 94% of human reliability). All results are based on a 30% test set of sessions that is totally distinct from the training set where models were originally developed and fit. Results shown in Fig. 1 demonstrate very strong signal and prediction. In all but one instance (CTRS code Understanding), AI-generated predictions cross the 80% of human reliability threshold. It is worth noting that in almost all clinical research using the CTRS, reliability estimates focus exclusively on the CTRS total score [49,59]. Using AI-generated metrics, the tools achieve 100% of human reliability on the total score and also demonstrate highly accurate individual item reliability. This published research and large-scale analyses of the Penn Collaborative's CTRS data demonstrate feasibility for the development of a LyssnCBT tool for automated fidelity feedback. Moreover, Lyssn has a standing in-house coding team that continuously provides new validation and calibration data to assess the ongoing performance of the production AI algorithms. Objectives and aims The primary objectives of Project AFFECT (AI-Based Fidelity Feedback to Enhance CBT) are to 1) refine a LyssnCBT user interface geared to CMH clinical, supervision, and administrative workflows and needs and evaluate it for usability and implementation readiness and 2) prospectively evaluate both service and implementation outcomes using LyssnCBT for supervision and quality assurance. LyssnCBT will massively scale up evaluation and feedback capacity to support high-quality CBT in routine care settings across the US. The end goal is more therapists across the country providing higher quality CBT to the millions of Americans suffering from mental health challenges. Project AFFECT has two phases. In Phase 1 the objective is to use an existing prototype to develop LyssnCBT for CMH settings and evaluate its usability. This will include: understanding community stakeholder needs to inform software design and functionality (Aim 1), and evaluating usability and implementation readiness of LyssnCBT with CMH therapists and supervisors in a standardized roleplay design (Aim 2). Iterative software development and preliminary system validation will ensure readiness to advance to Phase 2 testing. The objective of Phase 2 is to evaluate LyssnCBT in real-world CMH settings. A hybrid type 2 implementation-effectiveness, stepped-wedge randomized study of LyssnCBT will evaluate improvement in CBT skill use and client outcomes with 50 therapists and 1,875 clients across 5 CMH clinics (Aim 1). In addition, the hypothesized mechanism by which LyssnCBT affects clinical outcomes will be assessed (Aim 2).
Study setting DBHIDS is a $1 billion per year healthcare system with over 300 agencies that provide behavioral health services to the city's 470,000 Medicaid recipients, plus thousands of uninsured and underinsured individuals [51]. To qualify for services, individuals must live in Philadelphia and earn no more than 138% of the federal poverty index. The DBHIDS client population is racially and ethnically diverse (e.g., 50.1% Black / African-American, 24.8% Latino / Hispanic, 21.4% White / Caucasian) and over 54% of clients are women. Clinics treat individuals with a broad range of mental health and substance use problems: Depression (30.6%), Substance Use / Dependence (28.9%), Bipolar / Other Mood Disorders (26.7%), Psychotic Disorders (13.3%), and Anxiety Disorders (12.8%; [60]). Additional therapists are enrolled in the BCI each year through its contracts with DBHIDS. Participants and procedures For Phase 1, Project AFFECT will recruit therapists, supervisors, and administrators (n = 25) from within the DBHIDS network. Inclusion criteria will include being currently employed at an adult outpatient CMH program that has received CBT training and implementation support from the BCI and being able to engage in study processes in English. For Phase 2, the Project AFFECT study team will identify 5 adult outpatient CMH programs from among the BCI partner programs who agree to integrate LyssnCBT into their routine procedures. Across the 5 agencies, therapists (n = 50) and their supervisors will be recruited for participation. Clients from the caseloads of participating therapists will be recruited to participate, with a goal of 5 consenting clients per therapist. Median treatment length at DBHIDS programs is approximately 10 weekly sessions. Across 18 months of planned data collection (~ 75 weeks), a minimum of 1,875 clients for 50 therapists are expected (i.e., 50 therapists × 5 sessions per week × 75 weeks = 18,750 sessions, with an average of 10 sessions per client). Phase 1 research design and methods (1 year) The goal of Phase 1 is to refine a fully functional prototype of LyssnCBT designed for CMH use and workflows.
Fig. 1 Percentage agreement between AI-generated and human-generated CTRS codes
It will be integrated within the Lyssn cloud-based software platform and make use of previously developed AI models for CBT fidelity. The prototype will be evaluated for usability and implementation readiness, and the Phase 1 milestones will establish readiness for a randomized evaluation of LyssnCBT in Phase 2. Aim 1: Community Mental Health (CMH) user-centered design and software development The LyssnCBT user interface (UI) for CMH settings and workflows will be refined using an iterative, user-centered design (UCD) process so that the front-end UI of the system is maximally useful (and implementable) to a variety of end-users. Three groups of stakeholder participants will be recruited from sites previously trained in CBT by the BCI: therapists (n = 3), supervisors (n = 3) and clinic administrators (n = 4). Participants will be compensated $50 for their participation. The focus group and individual interviews will probe: typical client population, clinical and supervision workflows, current information technology (IT) infrastructure, and (administrators only) how quality assurance is currently conducted. A brief demonstration of the Lyssn platform and the existing CBT fidelity prototype will be provided.
Participants will be queried about whether and how CTRS-based feedback is currently used within ongoing supervision, additional features of sessions and/or clients that would be useful for the LyssnCBT system to capture and report back, and perceived motivators and barriers to adoption of the LyssnCBT system. Based on the input from the UCD design sessions, the research team will refine the design of the LyssnCBT software. It will build from Lyssn's HIPAA-compliant cloud system, and while final features and functions will be shaped by the design sessions, the LyssnCBT UI is anticipated to include interactive summaries of CBT fidelity scores (individual items, plus total), allowing summarization of ranges of sessions, clients, and therapists along with drill down to individual sessions for review and supervision. Aim 2: usability and implementation readiness of LyssnCBT with standardized patients After the LyssnCBT software is adjusted to reflect the UCD feedback, usability and implementation readiness will be assessed using standardized patient (SP) methodology. Ten CMH therapists and five supervisors will be recruited to participate in an individual, 60-90 min session, including an SP "therapy" session and semi-structured interview. (Note: Supervisors will not record SP sessions and interview questions will be framed around supervision processes, but otherwise will be largely similar to therapist sessions.) There will be a brief introduction to the Lyssn recording platform, and then each therapist will use it to record a 15-min session with the SP (played by a study team member), treating it as if it were a regular therapy session at their clinic. The recorded therapy session will be processed by LyssnCBT. The therapist participants will then be guided through the LyssnCBT interface, which will display the SP session just recorded along with other roleplayed sessions pre-recorded by the research team. During the semistructured interview, participants will be solicited for feedback on UI elements, including visualizations of CBT measures, data fields, and navigation controls. Participants will then be asked how they would imagine using LyssnCBT to complete critical actions, like assessing their performance during a clinical session or reviewing a session during supervision. At the end of the sessions, therapist and supervisor participants will complete brief (4-item) implementation measures of acceptability (Acceptability of Intervention Measure; AIM; [61]), appropriateness (Intervention Appropriateness Measure; IAM; [61]), and feasibility (Feasibility of Intervention Measure; FIM; [61]), as well as usability (System Usability Scale; SUS; [62]) with respect to the LyssnCBT prototype. The research team will review and refine the LyssnCBT platform based on the feedback gathered in this stage. Phase 2 research design and methods (3 years) The primary research activity of Phase 2 is a type 2 hybrid implementation-effectiveness, randomized study comparing LyssnCBT for clinical and supervision services to services as usual (SAU), where the primary outcomes include therapist CBT skill and client outcomes (symptom improvement and drop-out). Working closely with DBHIDS leadership, 5 programs will be recruited from among the Penn Collaborative / BCI CMH partner organizations, targeting adult outpatient mental health clinics with 8 or more staff therapists. 
Agency participation will include adopting LyssnCBT at the program level to facilitate its integration into standard workflow practices, including requesting consent to record and participate from clients during the standard intake process, and into supervision practices. Therapists (n = 50) and their supervisors will be recruited, where each therapist will have a caseload of about 5 clients participating in the study at any given time. (Note: Participating therapists may use LyssnCBT with as many of their clients as they wish, but they will be asked for a minimum of 5 clients consenting to participate in study data collection at any given time.) Median treatment length at DBHIDS clinics is approximately 10 weekly sessions. Across 18 months of planned data collection (~ 75 weeks), recruitment is expected to yield a minimum of 1,875 clients (i.e., 50 therapists × 5 sessions per week × 75 weeks = 18,750 sessions, with an average of 10 sessions per client). LyssnCBT will be compared to SAU using a stepped-wedge design in which each clinic will have SAU and LyssnCBT phases. Stepped-wedge designs allow the intervention (here, LyssnCBT) to eventually roll out to all clinics and therapists and also have greater power than a parallel cluster randomized trial [63]. As shown in Fig. 2, all 5 clinics will start with SAU (black solid lines), and clinics will be randomized to begin LyssnCBT sequentially over time (dashed purple) using simple randomization. The names of the five participating agencies will each be enclosed in individual sealed envelopes, and every two months, a study team member will select one envelope with the name of the agency to begin LyssnCBT. At the start of the trial when all clinics are in the SAU phase, all clinics will begin using a modified version of the Lyssn platform for recording sessions that provides access to the recording and session sharing functionality, but no other features (e.g., speech-to-text transcription, annotation tools, any AI-generated metrics). When a clinic is randomly selected to use LyssnCBT, there will be an onboarding and training session to cover the software and clinical / supervision protocols. Participants may withdraw or take away permission to use and disclose their information at any time by sending written notice to the investigator for the study. If they withdraw their permission, they will not be able to stay in the study. Assessments / outcomes There are two classes of primary outcomes: a) therapist CBT fidelity, and b) client outcomes. CBT fidelity will be assessed by AI-generated CTRS scores for every recorded therapy session, which will be recorded via the Lyssn platform during both SAU and LyssnCBT phases of the study (approximately 18,750 sessions in total). Client outcomes will be evaluated using the Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder-7 (GAD-7) at each session. The PHQ-9 is a brief, widely used depression inventory with 9 total items, and similarly, the GAD-7 is a brief, widely used anxiety inventory with 7 total items [64][65][66]. The PHQ-9 and GAD-7 are reliable and valid and can be completed in 2-3 min. These client self-report measures will be collected via a web-based survey tool that supports text and email notification via URL to complete assessments and integrates with the Lyssn platform. Before a participating therapist begins a new session using the Lyssn platform, an email (or text) notification will be sent to the client to complete the PHQ-9 and GAD-7.
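For illustration, the simple randomization described above (one sealed envelope drawn every two months to determine which clinic crosses over next) could be sketched as follows; the clinic names and seed are placeholders, and this is a hypothetical sketch, not the study's actual procedure or software.

```python
# Hypothetical sketch of the stepped-wedge crossover schedule: five clinics all
# start in SAU, and one randomly chosen clinic begins LyssnCBT every two months.
import random

def draw_crossover_schedule(clinics, months_between_steps=2, seed=None):
    """Return (month, clinic) pairs giving each clinic's LyssnCBT start, in drawn order."""
    rng = random.Random(seed)
    order = list(clinics)
    rng.shuffle(order)  # equivalent to drawing the sealed envelopes one at a time
    return [(step * months_between_steps, clinic) for step, clinic in enumerate(order, start=1)]

clinics = ["Clinic A", "Clinic B", "Clinic C", "Clinic D", "Clinic E"]  # placeholder names
for month, clinic in draw_crossover_schedule(clinics, seed=42):
    print(f"Month {month:2d}: {clinic} begins LyssnCBT")
```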
Data will also be collected on client drop-out / premature termination via a brief monthly survey sent out to participating therapists. Finally, after three months of engagement with the LyssnCBT tools, each participating therapist and supervisor will complete the battery of implementation measures, including the SUS, AIM, IAM, and FIM, as well as have an opportunity to provide more general feedback on the LyssnCBT system. All assessments are summarized in In addition, therapists will be compensated $15 per participating client. Clients will be compensated with one $10 gift card for allowing the research team access to the data from their weekly symptom measures and session recordings. Data analysis plan CBT fidelity, client symptom measures (PHQ-9, GAD-7), and client drop-out will be analyzed with mixed-effects models (also called hierarchical or multilevel models; [67][68][69][70]). Mixed models are very flexible with respect to nested and imbalanced data, where the current data will contain repeated measures within clients and within therapists with varying numbers of sessions and clients. Within each individual clinic, a stepped-wedge design is similar to an interrupted time series, with pre-intervention (i.e., SAU) and post-intervention (i.e., LyssnCBT) phases. To model the intervention effect, separate slopes will be examined for time by phase to capture differential changes in outcomes across the two phases. Condition (LyssnCBT vs SAU) will be dummy-coded, and clinics will also be included in the model as dummy-coded control variables. Analyses across the outcomes will be very similar, with the exception that client drop-out is a binary outcome, in which case a logistic mixed-effects model will be used instead. The primary focus in all analyses is the main effect of Condition and its interaction with Time (during the phase since LyssnCBT started). Finally, sensitivity analyses will examine missing data. Missing data could be a function of a therapist not recording a session, or of a client not completing self-report measures. The research team will review weekly reports of anticipated recordings and assessments to prevent missing data whenever possible. No significant challenges with missing data are anticipated, though calculations have assumed 20% attrition in power analyses (see below). Mixed-effects models provide unbiased estimates in the presence of missing data as long as missing data can be assumed to be Missing at Random (MAR; [70]). If missing data is greater than 20% or there are other concerns about the MAR assumption, a pattern-mixture approach will be used to address (potentially) non-ignorable missing data [71]. Conceptual model and evaluation of the proposed mechanism of action According to the deliberate practice model, both repetition and specific, performance-based feedback are crucial to improving provider skill [72][73][74][75]. LyssnCBT is designed to enhance exposure to repeated practice opportunities (i.e., CBT sessions) with exactly the type of specific, performance-based feedback emphasized in the deliberate practice model. In support of this, high-quality implementation efforts that are inclusive of practice with feedback increase CBT competence [49]. The skills training involved in CBT (encompassing both cognitive interventions, like generating alternative explanations and cognitive coping skills, and behavioral interventions, like behavioral activation and coping skills) acts as a mediator in reducing distress and impairment among individuals with mental health problems.
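As a rough sketch of the kind of mixed-effects model described in the data analysis plan above (dummy-coded condition, a time-by-phase slope, clinic control variables, and a therapist-level random intercept), the following uses statsmodels on simulated data. The variable names, data-generating values, and the simplification to a single random intercept are assumptions made for illustration, not the study's actual model specification or data.

```python
# Simplified illustration of a mixed-effects model for a session-level outcome
# (here a synthetic PHQ-9 score) with condition, time-in-phase, clinic dummies,
# and a therapist random intercept. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for therapist in range(30):
    clinic = therapist % 5
    therapist_effect = rng.normal(0, 1.0)
    for _ in range(20):
        condition = int(rng.integers(0, 2))        # 0 = SAU, 1 = LyssnCBT
        time_in_phase = int(rng.integers(0, 10))   # sessions since the phase began
        phq9 = 12 - 1.5 * condition - 0.1 * time_in_phase + therapist_effect + rng.normal(0, 3)
        rows.append(dict(phq9=phq9, condition=condition, time_in_phase=time_in_phase,
                         clinic=clinic, therapist=f"T{therapist}"))
df = pd.DataFrame(rows)

model = smf.mixedlm("phq9 ~ condition * time_in_phase + C(clinic)",
                    data=df, groups=df["therapist"])
print(model.fit().summary())
```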
Adopting principles of experimental therapeutics outlined in the NIMH strategic plan [76] and applying them to the conceptual frameworks guiding LyssnCBT, the following mechanism of action will be assessed: LyssnCBT will provide performance-based feedback on CBT fidelity, which should improve therapist CBT skills, which in turn should improve client outcomes. Importantly, most process or mechanism research has been limited to 100-200 sessions due to human-based observational coding [77]. In the present study, the proposed mediation model above will be assessed using more than 18,000 sessions. The hypothesized mediation model will be tested using mixed models [78]. Specifically, analyses will test the total effect (or "c" pathway in the mediation literature) of LyssnCBT on client outcomes (i.e., PHQ-9, GAD-7, premature drop-out), then the effect of LyssnCBT on CTRS scores (or "a" pathway). Finally, analyses will test the direct effect of LyssnCBT on client outcomes ("c prime") while controlling for the effect of the CTRS mediator ("b" pathway). The indirect ("a*b") effect will be tested via bootstrap confidence intervals [79] and is a direct estimate of the hypothesized mechanism. Two additional analyses are also planned. While the above analyses provide the traditional approach to mediation, it is specifically hypothesized that it is changes in CBT fidelity (i.e., CTRS scores) due to LyssnCBT which would drive improved client outcomes. To examine this hypothesis, CTRS deviation scores will be created. Mean CTRS scores during SAU will be estimated for each therapist and for each CTRS item (e.g., mean Agenda score during the SAU phase). These means will be subtracted from each corresponding CTRS score during the LyssnCBT phase (e.g., if mean Agenda during SAU is 2.5 and the Agenda score in a new session is 6, the deviation score would be +3.5). These CTRS deviation scores will provide somewhat more specific information on whether the improvement in CBT fidelity is driving client outcomes. Finally, the Lyssn platform collects user-interaction data, which will be used to examine whether time spent reviewing sessions and interacting with the LyssnCBT UI are predictive of improved client outcomes. Power and sample size Power and sample size estimation took into account a number of factors: 1) correlation of CBT fidelity and client symptoms within therapists, 2) repeated measures, 3) the stepped-wedge design, and 4) a range of possible effect sizes. Intraclass correlation coefficients (ICC; i.e., the correlation in the data due to nesting) were based on recent analyses of more than 400 CBT therapists and associated CBT fidelity ratings (ICC = 0.20; [80]) and published literature on patient symptoms (ICC = 0.10; [14]). Power and sample size estimation focused on the effect of LyssnCBT on client outcomes, where a smaller effect size is expected as compared to therapist CBT skill, which is being directly targeted by LyssnCBT. Sample size was estimated using the formula for mixed model power analysis developed by Raudenbush and colleagues [81] that incorporates therapist ICCs and repeated measures, used in conjunction with a design effect from the specific stepped-wedge design proposed [66]. Because stepped-wedge studies include both within- and between-therapist comparisons, they typically have greater power than similar parallel-groups cluster randomized trials [82]. A range of effect sizes for LyssnCBT on client outcomes (from 0.05 to 0.25) and therapist sample sizes were examined.
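Returning to the CTRS deviation scores described above, the arithmetic is simply each LyssnCBT-phase item score minus that therapist's item mean from the SAU phase; a minimal sketch with hypothetical values (matching the +3.5 example in the text) follows.

```python
# Hypothetical example of CTRS deviation scores: per-item SAU means for one
# therapist are subtracted from the item scores of a LyssnCBT-phase session.
sau_item_means = {"Agenda": 2.5, "Feedback": 3.0}        # therapist's SAU-phase means (made up)
new_session_scores = {"Agenda": 6.0, "Feedback": 4.0}    # one LyssnCBT-phase session (made up)

deviation_scores = {item: new_session_scores[item] - sau_item_means[item]
                    for item in sau_item_means}
print(deviation_scores)  # {'Agenda': 3.5, 'Feedback': 1.0}
```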
Given the proposed design of 5 clinics, 50 providers, 1,875 total clients, and 18,750 repeated measures, the current design has power of 0.80 or more to detect LyssnCBT effects of d = 0.15 or greater. This would represent a 'small' effect, but given the number of clients who receive CBT services in a given year, this would still entail a large effect in the population over time [83]. Data management All data will be recorded using a secure, password-protected, HIPAA-compliant cloud platform (LyssnCBT). The platform will be hosted on Amazon Web Services (AWS) and require two-factor authentication to ensure the security of all sensitive patient data. Lyssn maintains a Business Associate Agreement (BAA) with AWS to ensure that both parties are adhering to HIPAA guidelines. This platform streams data directly to secure data storage, ensuring that recordings do not reside on less secure therapist computers or tablets. All hard disks utilize full hard drive encryption (in compliance with HIPAA guidelines) and client identifiers have a second level of encryption in the database tables. This cloud-based recording system has already been built and has been used to securely record and store sessions as part of R44 DA046243. Once files have been recorded, they are then processed on the cloud-based platform. This processing includes speech signal processing methods, voice activity detection, speaker segmentation (or diarization), and automated speech recognition (ASR). LyssnCBT therapist competence ratings and speech feature data will then be used in the computer-generated summary reports. These summary reports will be viewable on the same secure, HIPAA-compliant cloud platform used for recording. Therapists and supervisors will view the LyssnCBT feedback reports together during supervision, and therapists may also choose to log in to view feedback metrics independently. All other data and source materials are only accessible to the researchers. The research team will only have access to individually identifiable private information (e.g., name and contact information) on an as-needed basis, such as to confirm that recordings are only being copied for sessions from consented therapists and that LyssnCBT feedback reports are given to the appropriate therapist only. Data transfer of all files will be protected using strong Advanced Encryption Standard (AES) encryption; electronic data will only be maintained on network servers and computers that incorporate security protections; any hard copies that contain identifiable data will be stored in locked file cabinets; subject-identifiable information will be replaced with identification numbers at the earliest possible time; and subject-identifiable information and the link to identification numbers will be kept separately from data. Discussion Artificial intelligence holds great promise for advancing our ability to evaluate therapist skills at scale, providing a view inside the black box of psychotherapy as it is delivered in routine care. This study will employ user-centered design to engage community stakeholders in refining a tool to evaluate and support clinicians' CBT skills, and then evaluate the impact of that tool on CBT skill and client outcomes in a publicly funded mental health care system. The study will also examine the hypothesized mechanism by which LyssnCBT affects clinician and client outcomes.
It will be among the first studies to test artificial-intelligence-generated metrics and tools to improve skills and client outcomes in routine mental health care. Outcomes will have a significant impact on the advancement of strategies to implement mental health EBPs at scale and with fidelity, which in turn may have positive impacts for broader accessibility of these treatments. Trial results will be reported in peer-reviewed publications, at scientific presentations, and through open presentations to the community mental health system within which this research will be conducted. Potential problems and alternative strategies It is possible that technology problems (e.g., lack of computer or computer access) will require a more extensive period of system enhancements. The study team has an extensive history of working within DBHIDS and similar sites, and the pilot data suggest that sites have the necessary, basic technology infrastructure. However, the investigators will be attentive to these technical issues during the Phase 1 research and have budgeted for just this possibility. Provider turnover and client no-shows could affect the retention rate in the randomized trial and/or increase missing data, reducing statistical power and increasing standard error. Additionally, recruitment for the randomized trial may be slower than expected. The team will proactively employ several strategies to mitigate this risk. If the study flow does not support the recruitment targets by the end of month 9 of Phase 2, the team will work with the recruited agencies and DBHIDS leadership to either identify new providers for recruitment at existing study sites or expand recruitment efforts to additional sites. Potential for impact The DBHIDS network includes over 300 agencies. Conservatively assuming that each agency has 10 therapists and each therapist sees 30 clients per week, that is 90,000 sessions per week, approximately 4.5 million per year, in one publicly funded behavioral health system in a single large American city. There are currently no feasible methods for estimating the quality of even a fraction of those sessions. The current research builds upon a robust, existing platform and lays the groundwork for a feasible, technology-enabled assessment of CBT intervention quality at scale, which can inform performance-based feedback for training, supervision, and quality assurance. The combination of the team, the project, and the partnership with DBHIDS presents a unique opportunity to massively scale up quality monitoring of CBT interventions, and the underlying methodologies would lay a foundation for psychosocial intervention quality monitoring and feedback in general. The implications of this research would be improved outcomes for clients, improved support for therapists, and improved quality assurance processes for behavioral healthcare systems.
Comparison of genomic and amino acid sequences of eight Japanese encephalitis virus isolates from bats We compared nucleotide and deduced amino acid sequences of eight Japanese encephalitis virus (JEV) isolates derived from bats in China. We also compared the bat JEV isolates with other JEV isolates available from GenBank to determine their genetic similarity. We found a high genetic homogeneity among the bat JEVs isolated in different geographical areas from various bat species at different time periods. All eight bat JEV isolates belonged to genotype III. The mean evolutionary rate of bat JEV isolates was lower than that of isolates of other origin, but this difference was not statistically significant. Based on these results, we presume that the bat JEV isolates might be evolutionarily conserved. The eight bat JEV isolates were phylogenetically similar to the mosquito BN19 and human Liyujie isolates of JEV. These results indicate that bats might be involved in the natural cycle of JEV. monkeys [13,28,43]. However, only pigs and water birds are considered reservoirs of the virus [2,22,33,43]. Bats are recognised as important reservoirs of a large number of zoonotic viruses [1,3,30,45]. Some species of bats can maintain viruses for long periods of time [16,37]. Previous studies have shown that JEV and/or serum antibody against JEV may exist in bats in Japan and China [10,23,28,36,48]. However, the role of bats in the JEV life cycle is unknown. Limited information is currently available about bat-derived JEV isolates. The full-length nucleotide sequences of four JEV isolates (B58, GB30, HB49 and HN97) derived from bats in Yunnan Province, China, were determined [28,48]. The B58 and GB30 isolates were isolated from a Rousettus leschenaultia bat in 1989 and a Murina aurata bat in 1997, respectively. The HB49 and HN97 isolates were isolated from a R. leschenaultia bat in 1990. These four bat JEV isolates belonged to GIII [28,48]. In recent years, we collected four JEV isolates from bats captured in Guangdong, Hainan and Hunan provinces. The genetic relationship between the genomes of these bat-derived JEV isolates and the previously collected bat-derived isolates remains unknown. Here, we compared the genetic characteristics of eight bat JEV isolates and compared them to those of JEV isolates of other origins available from GenBank. Materials and methods Bats were sampled at four natural habitats in three regions in Guangdong, Hainan and Hunan provinces of southern China between July 2007 and August 2009. Bats were captured using mist nets at natural habitats of bats (e.g., caves or palm trees). The sampling method was as described previously [31]. Bat brain samples were taken in the laboratory, immediately placed into tubes containing 300 µl of RNAlater (QIAGEN, Hilden, Germany), and stored at -80°C until used. The supernatants from brain homogenates were used to inoculate baby hamster kidney (BHK-21) cells and were consecutively passed three times. The virus was isolated as described previously [48]. Four viruses were isolated and designated the GD1, HN2, SY87 and YY158 isolates. Full-length genomic sequences were obtained from the GD1 and HN2 isolates, while only the sequence of the E gene was obtained for the SY87 and YY158 isolates. The GD1 isolate was obtained from a Myotis ricketti bat collected in Huizhou, Guangdong Province, in 2009, and the HN2 isolate was obtained from a Miniopterus schreibersii bat that was collected in Haikou, Hainan Province, in 2008.
The SY87 isolate was obtained from a Rhinolophus affinis bat and the YY158 isolate was obtained from a M. schreibersii bat, both of which were collected in Yueyang, Hunan Province, in 2008. A total of 105 full-length JEV genomic sequences were downloaded from GenBank, including four bat-derived JEV genomic sequences. Phylogenetic trees were constructed based on these 105 nucleotide sequences and the two nucleotide sequences of JEV (GD1, HN2 isolates) determined in this study. Consequently, twenty-seven full-length genomic sequences of JEV isolates were selected from the phylogenetic tree based on region, isolation time, host and phylogenetic position. In addition, fourteen E gene sequences of JEV isolates were selected in the analyses, which included two sequences of the JEV E gene determined in this study. A total of 41 JEV isolates were used for constructing the phylogenetic trees, which contained isolates isolated from mosquitoes (n = 14), humans (n = 12), pigs (n = 3), vaccine (n = 2), midges (n = 2) and bats (n = 8) (Table 1). The JEV isolate that was first isolated from human brain in 1935 (Nakayama strain) was used as the prototype strain in sequence comparisons. Multiple sequence alignments were performed using MEGA 4.0 [40]. The percent identity within the nucleotide sequence alignment was determined using MegAlign (DNASTAR, Madison, WI, USA). Geneious 5.5.6 was used to show differences in the nucleotide and amino acid alignments. Phylogenetic trees were constructed based on the 41 E gene nucleotide sequences using the maximum-likelihood method in PHYLIP 3.9.6 [14]. The E gene of West Nile virus was used as an outgroup. In addition, the maximum-parsimony method in PHYLIP 3.9.6, the neighbor-joining method in MEGA 4.0 [32], and the Bayesian method in BEAST 1.5.4 [11] were used in the analyses. The rate of nucleotide substitutions per site was estimated using the Bayesian Markov chain Monte Carlo (MCMC) approach as implemented in the BEAST 1.5.4 package [11]. The analysis was performed by using the HKY substitution model under a coalescent model of constant population size. In each case, the relaxed molecular clock model was used. The resulting convergence was analyzed by using Tracer 1.5. A 95% highest posterior density (HPD) interval was determined to ascertain the uncertainty in the parameter estimates. Results The eight bat JEV isolates (GD1, HN2, SY87, YY158, B58, GB30, HB49 and HB97) used were obtained at different times over two decades. Four of these isolates were isolated in Yunnan Province (Fig. 1), which has been a highly epidemic area for JE since the 1990s, with an average incidence of infection greater than 0.5/100,000 people [46,48]. Two of the isolates were isolated in Hunan Province (Fig. 1), with an average incidence of infection between 0.2/100,000 and 0.5/100,000 people [46,53]. The other two isolates were from Guangdong and Hainan Province (Fig. 1), respectively, which were once highly endemic areas for JE before the 1990s but are currently low-endemic areas with an incidence of less than 0.2/100,000 people annually [46,53]. The diversity of the 27 full-length genomes and the 41 E genes at the nucleotide and amino acid level is shown in Table 2. The isolates generally shared high nucleotide and amino acid sequence identity. The amino acid sequence identities were higher than the corresponding nucleotide sequence identities. The full-length nucleotide sequences of bat JEV isolates shared identities from 99.4% to 99.9%, and the E gene sequences shared identities from 99.2% to 99.9%.
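As a side note, the percent-identity values reported here are, conceptually, just the fraction of matching positions in a pairwise alignment; the toy sketch below (made-up sequences, not the MegAlign/DNASTAR implementation used in the study) illustrates the idea.

```python
# Toy illustration of pairwise percent identity over an existing alignment
# (gap positions are ignored). This is not the MegAlign algorithm.

def percent_identity(seq1: str, seq2: str) -> float:
    assert len(seq1) == len(seq2), "sequences must come from the same alignment"
    compared = matches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue                      # skip gapped positions
        compared += 1
        matches += a == b
    return 100.0 * matches / compared if compared else 0.0

print(percent_identity("ATGCGT-ACGT", "ATGCGTTACGA"))  # 90.0
```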
However, across all isolates, the full-length nucleotide sequences shared identities from 79.4% to 99.9%, and the identities of the E gene sequences ranged from 77.4% to 99.9%. When the comparison was restricted to the same isolation time period (1986 to 2009), there was 97.0-99.1% identity in the nucleotide sequences and 96.7-99.7% identity in the E gene sequences of JEVs isolated from humans. There was 79.6-97.2% and 77.9-97.3% identity in mosquito JEVs in the genomic and E gene sequences, respectively. The gene sequence homology of JEV isolates from bats was higher than that of isolates from other hosts (Table 2). When the comparison was restricted to GIII, the genetic homogeneity in the bat JEV isolates was likely higher than in those derived from humans and mosquitoes (data not shown). We compared six bat JEV isolates (GD1, HN2, B58, GB30, HB49 and HB97) with the Nakayama strain on the basis of UTR variation (Table 3). The six bat JEVs shared the same nucleotide changes (C14 → T14 and T49 → C49) in the 5′ NTR (Table 3). Two nucleotide changes were observed (Table 3). Two other nucleotide differences in the 3′ NTR of GD1 and HN2 were also revealed (Table 3). In addition, one nucleotide was absent in the 3′ NTR of the GD1 isolate, and two nucleotides were absent in that of the HN2 isolate (Table 3). Phylogenetic analysis Five genotypes were distinguished based on the E gene nucleotide sequences of the 41 selected JEV isolates (Fig. 3), which is consistent with the classification made by Chen and colleagues [6,7]. The phylogenetic analysis demonstrated that all bat JEV isolates belonged to GIII. The GD1, HN2, SY87 and YY158 isolates belonged to the same subgroup (Fig. 3). In addition, these isolates were similar to the GB30, B58, HB49 and HB97 isolates (Fig. 3). Notably, the BN19 isolate, which was isolated from a mosquito in Yunnan Province, China, in 1982, and the Liyujie isolate, which was obtained from a human in Yunnan Province, China, in 1979, were most closely related to the GD1 and HN2 isolates. Similar trees were produced by the neighbor-joining, maximum-parsimony, and Bayesian methods. Evolutionary analysis of the eight E genes of the bat JEV isolates showed that the mean evolutionary rate of the bat JEV isolates was 1.44 × 10⁻⁴ (95% HPD = 2.33 × 10⁻⁷ to 4.41 × 10⁻⁴) nucleotide substitutions per site per year. The mean evolutionary rate previously reported from an analysis of 35 full-length genomes derived from humans, pigs and mosquitoes was 4.35 × 10⁻⁴ (95% HPD = 3.49 × 10⁻⁴ to 5.30 × 10⁻⁴) nucleotide substitutions per site per year [24]. Discussion Bats are known to be reservoir hosts for many zoonotic viruses, such as SARS-coronavirus-like viruses of bats [19], Hendra virus [15] and Ebola virus [41]. JEV was isolated from naturally infected Miniopterus schreibersii [3]. There are some features of bats that might help explain the detection of JEV in bats. R. affinis, M. ricketti and M. schreibersii can migrate hundreds of miles to their hibernation sites. Thus, bats have more opportunities to come into contact with humans or other animals at different geographical locations, which makes interspecies transmission possible. Secondly, R. leschenaultia and M. schreibersii exhibit an exceptionally long lifespan, ranging up to 14 years. The long lifespan of bats may enhance the persistence of chronic infections [50]. In addition, some bat species also hibernate over the winter [49]. Sulkin et al.
[37] found that infectious JEV was recovered from seropositive bats fifteen weeks after a shift in temperature. The reduced body temperature and metabolic rate may suppress immune responses and reduce the rate of virus replication, and therefore JEV could persist for extended periods without evidence of disease [37]. There are currently approximately 105 fully sequenced JEV isolates available from different hosts [28,48]. Genetic variation has been reported among JEV isolates isolated from widely different time periods and geographical locations [6,21,28]. In the present study, we selected JEV isolates with genetic information available from GenBank based on their genotype, time period, geographic region and host from which they were isolated, and we used these reference isolates to compare the genetic variation of the eight bat-derived JEV isolates from China between 1986 and 2009. The isolates showed identities from 79.4% to 99.9% at the nucleotide level and identities from 91.1% to 99.9% at the amino acid level. Most of the differences were base substitutions and nucleotide changes that did not result in amino acid alterations (Fig. 2), which is consistent with previous findings [6]. The results indicate that most of the nucleotide mutations in the bat JEV isolates are silent. Notably, the bat JEV isolates showed 99.4-99.9% genetic homogeneity in the full-length nucleotide sequences and 99.2-99.9% genetic homogeneity in the E gene sequences, which was higher than in isolates from other hosts (Table 2). Also, the results of the evolutionary analysis showed that bat JEV isolates probably had slower evolutionary rates than JEV isolates of other origins. The mean evolutionary rates of bat JEV isolates tended to be lower than those of isolates of other origins, but there was no statistically significant difference. This suggests that JEVs from bats might be more phylogenetically conserved than isolates from humans, swine and mosquitoes (Table 2). Moreover, according to the phylogenetic analysis, the GD1, HN2, SY87 and YY158 isolates were most closely related to the other four bat JEV isolates (B58, GB30, HB49 and HB97), showing a relatively high bootstrap value (Fig. 3). The eight bat JEV isolates clustered into the same subgroup, although they were isolated from different bat species within separate regions and were originally isolated over the span of more than two decades. The reason for this phenomenon is unclear. It may be attributed to host preference, with GIII JEVs having adapted to bats. Even though Van den Hurk et al. [44] performed laboratory-based infections of Pteropus alecto (Megachiroptera: Pteropididae) with JEV TS3306 (GII), there was no evidence that bats could harbor other JEV strains in nature except those belonging to GIII. It is unknown whether the other three genotypes of JEV circulate in bats in nature. In this study, the bats from which JEV isolates were isolated looked healthy, suggesting that the virus is not pathogenic to bats. Since sufficient nucleotide sequence information was not available about human or other host origins in the regions where bat JEVs were isolated, we could not determine the relationships between bat JEV isolates and isolates from other host origins in local areas. Further studies are needed to explore the role of bats in the natural cycle of JEV.
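A full Bayesian rate estimate requires BEAST, but the basic idea of a substitution rate in substitutions per site per year, as compared above, can be illustrated with a much simpler root-to-tip regression, a different and cruder technique than the one used in the study; the distances and years below are invented for demonstration only.

```python
# Root-to-tip regression sketch: regress genetic distance from an assumed root
# against sampling year; the slope approximates substitutions/site/year.
# All numbers are hypothetical and do not reproduce the study's estimates.
import numpy as np

sampling_years = np.array([1989, 1990, 1997, 2008, 2008, 2008, 2009])
root_to_tip_distances = np.array([0.0010, 0.0012, 0.0021, 0.0038, 0.0036, 0.0039, 0.0040])

slope, intercept = np.polyfit(sampling_years, root_to_tip_distances, 1)
print(f"Estimated rate: {slope:.2e} substitutions/site/year")
```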
However, it is worth noting that the human Liyujie isolate and the mosquito-derived BN19 isolate from Yunnan Province in China were closely related to the six bat isolates (Fig. 3), with high amino acid similarities of 99.0% to 99.6% and 98.8% to 99.8%, respectively. This indicates that a relationship might exist between humans, mosquitoes and bats within the JEV transmission cycle. In conclusion, our study showed that the eight bat JEV isolates (GD1, HN2, SY87, YY158, B58, GB30, HB49 and HB97) belonged to GIII of JEV and shared a high degree of genetic identity. We presume that bat JEV isolates might be more evolutionarily conserved than JEV isolates of other origins. In consideration of the bat JEV isolates being phylogenetically similar to the mosquito isolate (BN19) and the human isolate (Liyujie) from China and the Nakayama strain, bats might be involved in the JEV cycle in nature. However, we could not conclude whether bats are the hosts for JEV or are occasionally infected by JEV based on current evidence. Further virological and molecular epidemiologic studies of the bat JEVs are still needed.
Fig. 3 Phylogenetic tree generated based on the envelope (E) gene sequence using the maximum-likelihood method. Numbers above or below branches indicate neighbour-joining bootstrap values. West Nile virus was used as an outgroup. Genotypes are indicated on the right. The four bat JEV isolates sequenced in the present study are indicated with a circle on the left, and four other previously reported bat JEV isolates are indicated with a rhombus on the left. The scale bar indicates the number of nucleotide substitutions per site.
Spekkens's Symmetric No-Go Theorem In a 2008 paper, Spekkens improved the traditional notions of non-negativity of Wigner-style quasi-probability distributions and non-contextuality of observations. He showed that the two improved notions are equivalent to each other. Then he proved what he called an even-handed no-go theorem. The paper contains some minor inaccuracies and one false claim, in the proof of the no-go theorem. This claim, early in the proof, is used in an essential way in the rest of the argument. Here we analyze carefully Spekkens's proof of the no-go theorem, explain the inaccuracies, reduce the task of proving the no-go theorem to the special case of a single qubit, and then prove the special case. This gives us a complete proof of Spekkens's no-go theorem. In [6], Spekkens clarifies the ways in which classical theories differ from quantum mechanics. He improves the traditional notions of non-negativity of Wigner-style quasi-probability distributions and noncontextuality of observations. He argues that the improvements more accurately capture what a classical universe would look like. Thus, both of these improved notions serve to distinguish quantum theory from classical theories, in particular from theories that use hidden variables in an attempt to explain the results of quantum mechanics on a classical basis. Spekkens then shows that the two improved notions are equivalent to each other. Spekkens's improvements of non-negativity and non-contextuality emphasize the involvement of both preparations and measurements. In the second part of [6], Spekkens provides what he calls an even-handed approach to a no-go theorem. The theorem asserts that the requirement of non-contextuality (or equivalently of non-negativity) prevents a theory from matching the predictions of quantum mechanics; in other words, non-contextual hidden-variable theories can't succeed. "Even-handed" means that the proof treats preparations and measurements in a symmetrical way. The paper [6] contains some minor inaccuracies and one false claim, in the proof of the no-go theorem. The false claim is "that a function f that is convex-linear on a convex set S of operators that span the space of Hermitian operators (and that takes value zero on the zero operator if the latter is in S) can be uniquely extended to a linear function on this space." Unfortunately, this claim, early in the proof, is used in an essential way in the rest of the argument. In this note, we analyze carefully Spekkens's proof of the no-go theorem, explain the inaccuracies, reduce the task of proving the no-go theorem to the special case of a single qubit and then prove the special case. This gives us a complete proof of Spekkens's no-go theorem. An alternative proof of the no-go theorem is given in the series of papers [1,2,3]. Definitions Spekkens defines a quasiprobability representation of a quantum system by the following features.
QPR1 Every density operator ρ is represented by a normalized and real-valued function µ_ρ on a measurable space Λ.
QPR2 Every positive operator-valued measure (POVM) {E_k} is represented by a set {ξ_{E_k}} of real-valued functions on Λ that sum to the unit function on Λ. (The trivial POVM {I} is represented by ξ_I(λ) = 1, and the zero operator is represented by the zero function.)
QPR3 For all density operators ρ and all POVM elements E_k, we have ∫ µ_ρ(λ) ξ_{E_k}(λ) dλ = Tr(ρ E_k).
A quasiprobability representation is called nonnegative if all the functions µ_ρ and ξ_E take only nonnegative values.
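As a concrete illustration (not taken from [6] or from the present paper), the single-qubit discrete Wigner representation built from four phase-point operators satisfies QPR1-QPR3, with Λ a four-point set carrying the counting measure so that the integral in QPR3 becomes a sum. The sketch below checks the three conditions numerically and also shows that µ_ρ can take negative values, as the no-go theorem leads one to expect; the specific ρ and POVM are arbitrary choices made for the demonstration.

```python
# Single-qubit discrete Wigner representation: Lambda = {0,1,2,3}, counting measure,
# mu_rho(k) = Tr(rho A_k)/2 and xi_E(k) = Tr(E A_k), where A_k are phase-point operators.
# The script verifies QPR1 (normalization), QPR2 (the xi's of a POVM sum to 1 pointwise),
# and QPR3 (sum_k mu_rho(k) xi_E(k) = Tr(rho E_k)), and prints a mu with a negative entry.
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(a, b):
    return 0.5 * (I2 + (-1)**a * sz + (-1)**b * sx + (-1)**(a + b) * sy)

A = [phase_point(a, b) for a in (0, 1) for b in (0, 1)]

def mu(rho):
    return np.array([np.trace(rho @ Ak).real / 2 for Ak in A])

def xi(E):
    return np.array([np.trace(E @ Ak).real for Ak in A])

s = 1 / np.sqrt(3)
rho = 0.5 * (I2 - s * (sx + sy + sz))                   # pure state, Bloch vector -(1,1,1)/sqrt(3)
E = np.array([[0.8, 0.3], [0.3, 0.4]], dtype=complex)   # Hermitian with spectrum inside [0, 1]
povm = [E, I2 - E]

assert abs(mu(rho).sum() - 1) < 1e-12                     # QPR1: normalization
assert np.allclose(sum(xi(Ek) for Ek in povm), 1)         # QPR2: xi's sum to the unit function
for Ek in povm:                                           # QPR3: the pairing reproduces Tr(rho E_k)
    assert abs(np.dot(mu(rho), xi(Ek)) - np.trace(rho @ Ek).real) < 1e-12
print("QPR1-QPR3 hold; mu(rho) =", mu(rho))               # note the negative first entry
```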
We begin our analysis by looking carefully at the notions used in this definition of quasiprobability representation and clarifying some aspects of the definition. 1.1. Density operator. Within the definition of quasiprobability representation, Spekkens explains "density operator" as "a positive traceclass operator on a Hilbert space H". Although "trace-class" implies that the operator ρ has a well-defined trace, Spekkens presumably intended more, namely that the trace Tr(ρ) should be equal to 1. This would conform with the usual meaning of "density operator"; it would also account for the requirement that µ ρ be normalized. If one could multiply ρ by a positive real factor and still have a density operator, then the associated µ ρ should also be multiplied by the same factor. From now on, we shall assume that "trace 1" is included in the definition of density operator. It is also worth remembering that "trace-class" is important only in the context of infinite-dimensional Hilbert spaces. If H is finitedimensional, then all (linear) operators on it are in the trace class. Spekkens's proof of the no-go theorem does not require an infinitedimensional space; it works as long as the dimension of H is at least 2. So for many purposes, we need not worry about the "trace-class" clause in the definition of density operators. 1.2. Measurable space. The phrase "measurable space" is standard terminology for a set X together with a σ-algebra Σ of subsets of X, the members of Σ being called the measurable sets. A measurable space differs from a measure space in that the latter has, in addition to X and Σ, a countably additive measure defined on all the measurable sets. We believe that Spekkens intends Λ to be not merely a measurable space but a measure space. He uses the formula µ ρ (λ) dλ = 1 as the definition of the requirement in QPR1 that µ ρ be normalized. This integral and the one in clause QPR3 of the definition of quasiprobability representation both presuppose the presence of a measure to make sense of dλ. They also presuppose that the functions µ ρ are measurable. An alternative modification to make sense of these integrals would be to change the requirement that ρ is represented by a function and to require instead that it be represented by a measure, say ν ρ . The notation µ ρ (λ) dλ could then be taken to be syntactic sugar for dν ρ (λ). This alternative approach has, as far as we can see, two disadvantages and two advantages. The first disadvantage is that it requires us to understand Spekkens's notation µ ρ (λ) dλ, which looks like a standard notation, as syntactic sugar for something rather different. The second is that it explicitly contradicts Spekkens' assertion that µ ρ should be a function. The first advantage is that it preserves Spekkens's convention that Λ is merely a measurable space, not a measure space. The second advantage is that it is more general. In the approach with an a priori given measure dλ, multiplying it by functions µ ρ produces only those measures µ ρ (λ) dλ that are absolutely continuous with respect to dλ. The alternative approach allows arbitrary measures (on the given σalgebra Σ) without any requirement of absolute continuity. 1.3. Positive operator-valued measures. The second defining feature, QPR2, of a quasiprobability representation represents positive operator valued measures 1 {E k } by sets of functions ξ E k . The elements E k of a POVM are positive Hermitian operators such that I − E k is also positive. 
That is, the spectrum of E k lies in the interval [0, 1] of the real line. Conversely, any such operator occurs as a member of some POVM, and usually as a member of many POVMs. Specifically, if E is a positive Hermitian operator and I − E is also positive, then {E, I − E} is a POVM; unless E = I, we can replace I − E in this POVM by two or more positive operators whose sum is I − E, thereby obtaining other POVMs containing E. 1 We follow Spekkens's usage of "POVM" to refer to a discrete set of operators. This usage agrees with the standard text [4]. There is a generalization, involving operator-valued measures; see for example [8]. For our purposes, the simpler version is adequate, since the no-go theorem for these simpler POVMs implies the theorem for the broader class. The question arises whether the function ξ E k in a quasiprobability representation can depend on the POVM from which E k was taken or must depend only on the operator E k itself. The wording of the definition suggests the former, while the notation ξ E k suggests the latter. Fortunately for our purposes, Spekkens's definition of "measurement noncontextuality" requires that ξ E k "depends only on the associated POVM element E k " (italics added). Since our goal in this paper, the no-go theorem, is about noncontextual representations, we can safely follow the notation and assume that ξ E depends only on E, not on the POVM in which it occurs (and, a fortiori, not on the measurement process by which that POVM is realized). An additional hypothesis At the beginning of his proof of the no-go theorem, Spekkens notes that a mixture ρ = j w j ρ j of density operators ρ j with weights w j can be prepared by first randomly choosing one value of j from the probability distribution {w j } and then preparing ρ j . He infers that "clearly" µ ρ (λ) = j w j µ ρ j (λ). Although this inference is highly plausible and natural on physical grounds, it does not follow from just the definition of quasiprobability distribution as quoted above. Suppose that the functions ξ E do not span the whole space of square-integrable functions on Λ, so that there is a function σ orthogonal to all of these ξ E 's, where "orthogonal" means that σ(λ)ξ E (λ) dλ = 0. One could modify the µ ρ functions by adding to each one some multiple of σ, obtaining µ ′ ρ = µ ρ + c ρ σ and still satisfying the definition of quasiprobability representation. Here the coefficients c ρ can be chosen arbitrarily for each density operator ρ. By choosing them in a sufficiently incoherent way, one could arrange that µ ′ ρ (λ) = j w j µ ′ ρ j (λ). If, on the other hand, the ξ E 's do span the whole space of functions on Λ, then Spekkens's desired equation µ ρ (λ) = j w j µ ρ j (λ) does follow, for all but a measure-zero set of λ's, because the two sides of the equation must give the same result when integrated against any ξ E . Unfortunately, nothing in the definition of quasiprobability representations requires the ξ E 's to span the whole space. For example, given any quasiprobability representation, we can obtain another, physically equivalent one as follows. Replace Λ by the disjoint union Λ 1 ⊔ Λ 2 of two copies of Λ. Define the measure of any subset of Λ 1 ⊔ Λ 2 to be the average of the original measures of its intersections with the two copies of Λ. Define all the functions µ ρ and ξ E on the new space by simply copying the original values on both of the Λ i 's. 
The result is a quasiprobability representation in which the ξ E 's span only the space of functions that are the same on the two copies of Λ. The result of this discussion is that, in order to prove the no-go theorem along the lines proposed by Spekkens, we must add an additional hypothesis about mixtures of densities. There is a similar assumption for mixtures of measurements. Convex-linearity Hypothesis: Let {w j } be a probability distribution on a set of indices j. • This hypothesis is exactly statements (7) and (8) in [6]. The name of the hypothesis refers to the following terminology, which we shall need again later. Thus, the convex-linearity hypothesis says that the functions ρ → µ ρ and E → ξ E are convex-linear on the sets of density matrices and POVM elements, respectively. The no-go theorem On an intuitive level, the no-go theorem asserts that nonnegative quasiprobability representations 2 subject to the convex-linearity hypothesis cannot reproduce the predictions of quantum mechanics. A considerable amount of agreement with quantum mechanics is already built into the definition of quasiprobability representations. Specifically, the equation Tr(ρE k ) = µ ρ (λ)ξ E k (λ) dλ says that the expectation of E k in state ρ is the same whether computed by the quantum formula Tr(ρE k ) or as an average using the functions µ ρ and ξ E k from the quasiprobability representation. Spekkens's no-go theorem asserts that there is no nonnegative quasiprobability representation satisfying convex-linearity. A small technical point is that the no-go theorem presupposes that the quantum mechanics is non-trivial. Quantum mechanics on Hilbert spaces of dimensions 0 or 1 is classical (and trivial), so we must assume that we are dealing with a Hilbert space H of dimension at least 2. An inspection of Spekkens's argument reveals that he never uses any stronger assumptions about H. Thus, the no-go theorem can be formally stated as follows. Theorem 2. For a Hilbert space H of dimension at least two, there is no way to define nonnegative µ ρ , for all density operators ρ, and to define nonnegative ξ E , for all positive Hermitian operators E with I − E positive, so as to satisfy both the definition of a quasiprobability representation and the convex-linearity hypothesis. Reduction to two dimensions In this section, we reduce the task of proving Spekkens's no-go theorem to the special case where H has dimension 2. (In the terminology of quantum computing, H represents a single qubit.) More generally, we show that, if there were a nonnegative quasiprobability representation satisfying convex-linearity for some Hilbert space H, then there would also be such a representation, using the same measure space Λ, for any nonzero, closed subspace H ′ of H. To see this, suppose functions µ ρ (for all ρ) and ξ E (for all E) constitute such a representation for H. Let i : H ′ → H be the inclusion map (the identity map of H ′ regarded as a map into H), and let p : H → H ′ be the orthogonal projection map (sending each vector in H ′ to itself and sending each vector orthogonal to H ′ to 0). Also, fix some unit vector |α ∈ H ′ . Each density operator ρ on H ′ gives rise to a density operatorρ = i • ρ • p on H. For pure states, this amounts to just considering a state vector in H ′ as a vector in the larger Hilbert space H. For mixed states, the extension preserves averages. We begin defining a quasiprobability representation for H ′ by setting µ ′ ρ = µρ. 
We note that this is a normalized nonnegative real-valued function on Λ, and that it satisfies the part of convex-linearity that refers to the representations of densities. It is tempting to proceed exactly analogously with POVM elements E and their representing functions ξ E . That procedure doesn't quite work, because the definition of quasiprobability representation imposes a specific requirement on ξ I , where I is the identity operator. Unfortunately, if I is the identity operator on H ′ , then i • I • p is not the identity operator on H. So we must proceed slightly differently, and it is here that the fixed unit vector |α will be useful. Given a POVM element E on H ′ , i.e., a positive, Hermitian operator such that I − E is also positive, we defineĒ to be the unique linear operator on H such that In other words,Ē agrees with E on H ′ and with a scalar multiple of the identity on the orthogonal complement of H ′ , the multiplier of the identity being α|E|α . This extension process produces POVM elements for H; indeed, if a set {E k } of operators is a POVM for H ′ , then {Ē k } is a POVM for H. Furthermore, the extension process sends the identity and zero operators on H ′ to the identity and zero operators on H, and the process respects weighted averages. We continue the definition of a quasiprobability representation for H ′ by setting ξ ′ E = ξĒ for all POVM elements E on H ′ . The remarks above immediately imply that these functions ξ ′ E are as required by the second part, QPR2, of the definition of quasiprobability representation, that they are nonnegative, and that they satisfy the relevant part of the convex-linearity hypothesis. To verify the last part, QPR3, of the definition of quasiprobability representation, we observe that, for any density operator ρ and POVM element E on H ′ , the extensionsρ andĒ agree with ρ and E on H ′ , while on the orthogonal complement of H ′ ,ρ acts as zero andĒ acts as a scalar multiple of the identity. It follows immediately that Tr(ρĒ) = Tr(ρE), and therefore as required. This completes the proof that nonnegative quasiprobability representations subject to convex-linearity can be "restricted" to nonzero, closed subspaces of the original Hilbert space. Therefore, it suffices to prove the no-go theorem in the special case where H has dimension 2. Remark 3. By concentrating on the case of dimension 2, we gain two advantages. First, we can avoid some technicalities that would arise for infinite-dimensional Hilbert spaces. Second, we obtain a more concrete picture of the relevant spaces of density operators and measurements. (The first of these advantages would result from reduction to any finite number of dimensions; the second benefits specifically from dimension 2.) Convex-linear transformations Spekkens asserts that, if a function f is convex-linear on a convex set S of operators that span the space of Hermitian operators (and f takes the value zero on the zero operator if the latter is in S), then f can be uniquely extended to a linear function on this space. Unfortunately, such a linear extension need not exist in the general case, when zero is not in S. 3 For a simple example, consider the function that is identically 1 on an S that spans the space of Hermitian operators, does not contain 0, but does contain two orthogonal projections and their sum. The correct version of the result extends f not to a linear function but to translated-linear function, i.e., a composition of translations and a linear function. 
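Worked out explicitly, the simple example mentioned above goes as follows (the particular projections are only illustrative). Let P₁ and P₂ be two orthogonal rank-one projections, and suppose S contains P₁, P₂ and P₁ + P₂ (together with enough further operators to span the Hermitian operators), while 0 ∉ S. Put f(v) = 1 for all v in Conv(S). Then f is convex-linear on S, since for any convex combination

f(a₁v₁ + · · · + aₙvₙ) = 1 = a₁ + · · · + aₙ = a₁f(v₁) + · · · + aₙf(vₙ).

But a linear extension g would have to satisfy g(P₁ + P₂) = g(P₁) + g(P₂), i.e., 1 = 1 + 1, which is impossible. A translated-linear extension does exist: pick any u₀ ∈ S and take h to be the zero linear map, so that f(v) = 1 + h(v − u₀) = 1 on all of Aff(S).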
The rest of this section is devoted to a proof of this fact, in somewhat greater generality than we need. It applies to arbitrary real vector spaces; that the space consists of Hermitian operators is irrelevant. The convex hull, Conv(S), of a subset S of a real vector space V consists of the convex combinations a 1 v 1 + · · · + a n v n of vectors v 1 , . . . , v n ∈ S where a 1 + · · · + a n = 1 and every a i ≥ 0. The affine hull, Aff(S), of S consists of the affine combinations a 1 v 1 + · · · + a n v n of vectors v 1 , . . . , v n ∈ S where a 1 + · · · + a n = 1 but some coefficients a i may be negative. A set is convex if it contains all the convex combinations of its members; similarly, it is an affine space if it contains all the affine combinations of its members. An easy computation shows that convex hulls are convex and affine hulls are affine spaces; that is Conv(Conv(S)) = Conv(S) and Aff(Aff(S)) = Aff(S). An affine space A in a vector space V is said to be parallel to a linear subspace L of V if A = u 0 + L = {u 0 + v : v ∈ L} for some u 0 ∈ V . It is easy to see that, if an affine space A is parallel to a linear space L as above, then (i) L is unique, (ii) u 0 ∈ A, (iii) any vector in A can play the role of the translator u 0 , and (iv) A is either equal to L or disjoint from L. Lemma 4 ( §1 in [5]). Any affine subspace A of a real vector space V is parallel to a linear subspace L of V . Proof. If A contains the zero vector 0 then it is a linear subspace. Indeed, if v ∈ A then any multiple av = av For the general case, let u 0 be any vector in the affine space A. It suffices to show that L = {v − u 0 : v ∈ A} is an affine space, because then the preceding paragraph shows that it is a linear space, and clearly A = u 0 + L. Any affine combination a 1 (v 1 − u 0 ) + · · · + a n (v n − u 0 ) of vectors in L (so the v i are in A and the sum of the a i is 1) can be rewritten as (a 1 v 1 + · · · + a n v n ) − u 0 , which is in L. Let V and W be real vector spaces, S a subset of V , C = Conv(S) its convex hull, and A = Aff(S) its affine hull. Recall that a transformation f : C → W is convex-linear on S if f (a 1 v 1 + · · · + a n v n ) = a 1 f (v 1 ) + · · · + a n f (v n ) for any convex combination a 1 v 1 + · · · + a n v n of vectors v i from S. A transformation f : A → W is translated-linear if it has the form f (v) = w 0 + h(v − u 0 ) for some w 0 ∈ W , some u 0 ∈ A, and some linear function h : L → W defined on the linear space L = A − u 0 parallel to A. Proposition 5. With notation as above, any transformation f : C → W that is convex-linear on S has a unique extension to a translatedlinear function on A. Proof. Notice first that translations v → v − u 0 and linear functions both preserve affine combinations. A translated-linear function, being the composition of two translations and a linear function, therefore also preserves affine combinations. This observation implies the uniqueness part of the proposition. Indeed, every element of A is an affine combination a 1 s 1 + · · · + a n s n of elements of S, and therefore any translated-linear extension of f must map it to a 1 f (s 1 ) + · · · + a n f (s n ). To prove the existence part of the proposition, it will be useful to work with the graphs of functions. For any function g : S → W with S ⊆ V , its graph is the subset of V ⊕ W consisting of the pairs (s, g(s)) for s ∈ S. 
4 We record for future reference that the graph of g is a linear subspace of V ⊕ W if and only if the domain of g is a linear subspace of V and g is a linear transformation from that domain to W . We also note that the projection π : V ⊕ W → V : (v, w) → v is a linear transformation that sends the graph of any g to the domain of g. In the situation of the proposition, let f : C → W be a transformation that is convex-linear on S, and let F ⊆ V ⊕ W be its graph. Also, let F − be the graph of the restriction of f to S. Notice that the convex-linearity of f on S means exactly that F is the convex hull of F − . It follows that F and F − have the same affine hull, because We claim that this affine hull Aff(F − ) is the graph of a function; that is, it does not contain two distinct elements (v, w) and (v, w ′ ) with the same first component v. To see this, suppose we had two such elements in Aff(F ) = Aff(F − ), say (v, w) = a 1 (s 1 , f (s 1 )) + · · · + a m (s m , f (s m )) and (v, w ′ ) = b 1 (t 1 , f (t 1 )) + · · · + b n (t n , f (t n )), where all the s i 's and t j 's are in S and where (1) a 1 + · · · + a m = b 1 + · · · + b n , because both sides are equal to 1. So we have (2) a 1 s 1 + · · · + a m s m = b 1 t 1 + · · · + b n t n , because both sides are equal to v, and we want to prove w = w ′ , i.e., In the special case where all coefficients a i and b j are ≥ 0, vector v is in C and both sides of (3) are equal to f (v). The general case reduces to this special case as follows. In all three equations (1)-(3), move every summand with a negative coefficient to the other side, and then divide the resulting equations by the left part of the rearranged equation (1). As a result we return to the special case already treated. Since the old version of (3) follows from the new one, this completes the proof of our claim that Aff(F ) = Aff(F − ) is the graph of a function. By Lemma 4, the affine space Aff(F ) is parallel to a linear subspace H of V ⊕W , say Aff(F ) = (u 0 , w 0 ) + H, where u 0 ∈ V and w 0 ∈ W . From the fact that Aff(F ) is the graph of a function, it follows immediately that H is also the graph of a function. Indeed, if H contains (v, w) and Let h be the function whose graph is H. Because H is a linear subspace of V ⊕ W , we know that h is a linear transformation from some linear subspace L of V into W . The fact that (u 0 , w 0 ) + H = Aff(F ) tells us, by applying the linear projection π : V ⊕ W → V , that u 0 + L equals π(Aff(F )) = Aff(π(F )) = Aff(C) = A, where the first equality comes from linearity of π and the second from the fact that F is the graph of the function f whose domain is C. So A is parallel to the linear subspace L of V . Furthermore, for each v ∈ C, we have ( On the other hand, in contrast to Proposition 5, thish is not unique (unless L = V ). Also, in the case of infinite-dimensional spaces, the extension process requires the axiom of choice (to extend bases) and need not be wellbehaved with respect to natural topologies on the vector spaces. Density operators and POVM elements in two dimensions In this section, we recall the form of density operators and POVM elements in the case where H is two-dimensional. In this case, a basis for the Hermitian operators on H is given by the identity and the three Pauli matrices It will be convenient to use vector notation, denoting the triple of matrices (X, Y, Z) by X. Then the general Hermitian matrix looks like where w and the three components of x are real numbers. 
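The displays referred to in the last two sentences are the standard ones; written out, they read

X = [[0, 1], [1, 0]],  Y = [[0, −i], [i, 0]],  Z = [[1, 0], [0, −1]],

and every Hermitian operator on the two-dimensional H can be written uniquely as

wI + x · X = wI + x₁X + x₂Y + x₃Z,

where w and the three components of x = (x₁, x₂, x₃) are real numbers.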
The eigenvalues of this Hermitian matrix are w ± ||x||. In particular, the trace of this matrix is 2w, and the matrix is positive if and only if w ≥ ||x||.

Density matrices are the Hermitian, positive matrices of trace 1, so they have the form

ρ(x) = (1/2)(I + x · X)  with ||x|| ≤ 1.

As indicated by the notation, we parametrize these density matrices by three-component vectors x of norm ≤ 1. The three-dimensional ball that serves as the parameter space here is called the Bloch sphere (with its interior). Similarly, POVM elements have the form

E = E(m, p) = mI + pX + qY + rZ = mI + p · X

with ||p|| ≤ m ≤ 1 − ||p|| (because E and I − E are positive operators) and therefore ||p|| ≤ 1/2. The parameter space here, consisting of all four-component vectors satisfying these inequalities, is a double cone over a three-dimensional ball of radius 1/2.

We record for future reference the traces Tr(I) = 2, Tr(X) = Tr(Y) = Tr(Z) = 0 and the multiplication table X² = Y² = Z² = I, XY = −YX = iZ, YZ = −ZY = iX, ZX = −XZ = iY. From these facts, it is easy to compute that

Tr(ρ(x)E(m, p)) = m + p · x,

where the factor 1/2 in the definition of ρ(x) has cancelled the factor 2 arising from Tr(I).

Quasiprobability representation

Finally, we are ready to prove Theorem 2. Suppose, toward a contradiction, that we have a nonnegative quasiprobability representation satisfying convex-linearity, for a two-dimensional H. In view of Proposition 5, we know that

µ_{ρ(x)}(λ) = x · A(λ) + C(λ)  and  ξ_{E(m,p)}(λ) = p · B(λ) + mD(λ) + F(λ)

for some nine functions A_i(λ), B_i(λ), C(λ), D(λ), F(λ), where the index i ranges from 1 to 3. (The "translated" part of "translated-linear" accounts for C and F.)

The definition of quasiprobability representation leads to some simplifications. E(0, 0) is the zero operator, whose associated ξ function is required to be identically zero. That gives us F(λ) = 0 for all λ, so we can simply omit F from the formula for ξ. Also, E(1, 0) is the identity operator, whose associated ξ function is required to be identically 1. That gives us D(λ) = 1 for all λ. So we can simplify the ξ formula above to read ξ_{E(m,p)}(λ) = p · B(λ) + m.

Clause QPR3 of the definition requires that Tr(ρ(x)E(m, p)) = ∫ µ_{ρ(x)}(λ) ξ_{E(m,p)}(λ) dλ. We already evaluated the trace on the left side of this equation at the end of the preceding section. The integral on the right side is

∫ [ (p · B(λ))(x · A(λ)) + (p · B(λ))C(λ) + m(x · A(λ)) + mC(λ) ] dλ.

Comparing the trace and the integral, and equating coefficients of the various monomials in m, p, and x, we find that

∫ B_i(λ)A_j(λ) dλ = δ_ij,  ∫ B_i(λ)C(λ) dλ = 0,  ∫ A_i(λ) dλ = 0, and ∫ C(λ) dλ = 1.

Next, we extract as much information as we can from the assumption that all the functions µ_ρ and ξ_E are nonnegative. In the case of ξ_E, this means that, as long as ||p|| ≤ m, 1 − m (so that E(m, p) is a POVM element), we must have m + p · B(λ) ≥ 0 for all λ. Temporarily consider a fixed λ and a fixed m ∈ [0, 1/2]. To get the most information out of the inequality m + p · B(λ) ≥ 0, we choose the "worst" vector p, i.e., we make p · B(λ) as negative as possible, by choosing p in the opposite direction to B(λ) and with the largest permitted magnitude, namely m. That is, we take

p = −m B(λ)/||B(λ)||,

so that our inequality becomes 0 ≤ m(1 − ||B(λ)||), and therefore ||B(λ)|| ≤ 1 for all λ. Repeating the exercise for m ∈ [1/2, 1] gives no new information.

So we turn to the case of µ_{ρ(x)}, for which the nonnegativity requirement reads x · A(λ) + C(λ) ≥ 0. For each fixed λ, we consider the "worst" x, namely a vector x in the direction opposite to A(λ) and with the maximum allowed magnitude, namely 1. So we take

x = −A(λ)/||A(λ)||

and obtain the inequality 0 ≤ −||A(λ)|| + C(λ). Thus, we have ||A(λ)|| ≤ C(λ) for all λ.
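The trace identity Tr(ρ(x)E(m, p)) = m + p · x used above is easy to check numerically; the following short script is a sketch that verifies it for randomly drawn admissible parameters:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = np.stack([X, Y, Z])

rng = np.random.default_rng(0)

def random_ball_vector():
    """Random three-component vector with norm <= 1."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v) * rng.random()

for _ in range(1000):
    x = random_ball_vector()                     # Bloch vector, ||x|| <= 1
    rho = 0.5 * (I2 + np.einsum('i,ijk->jk', x, PAULI))

    m = rng.random()                             # 0 <= m <= 1
    p = random_ball_vector() * min(m, 1 - m)     # ||p|| <= min(m, 1-m), so E and I-E are positive
    E = m * I2 + np.einsum('i,ijk->jk', p, PAULI)

    lhs = np.trace(rho @ E).real
    rhs = m + p @ x
    assert abs(lhs - rhs) < 1e-12

print("Tr(rho(x) E(m,p)) = m + p.x verified for 1000 random cases")
```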
Research of Installation Stress of EMU Aluminum Alloy Beam and its Connecting Structure Bolt hanging is one of the common ways of rail vehicle hanging equipment. In this paper, the assembly structure of a type of EMU and converter is taken as the research object, and the load values at the four suspension points are calculated by theoretical calculations. A simulation model of a single hanger is established, and the simulation is determined by reference to the theoretical calculation under working conditions, the stress distribution at the hanging points under different loads was studied. The results show that under different loading conditions, the stress distribution of beam is basically the same. The maximum stress of the beam occurs at the point where it contacts the bolt head. Compared with other regions, the stress at the corner of the T-slot of the beam is also larger. Introduction For high-speed EMUs, subway vehicles, etc., most of the equipment under the car is directly suspended on the beam of the chassis of the car body by bolts, mainly including braking, electrical, and air conditioning equipment. The T-slot of the beam and the converter box are connected by special rectangular bolts and belong to the direct hanging type [1]. The bottom beam structure of high-speed EMUs uses A7N01S-T5 aluminum alloy extruded materials. Its strength is high, and can extrude thinwalled profiles with complex shapes, and its welding performance is good , but under the action of tensile stress, it is prone to stress corrosion failure behavior [2]. In order to prevent the occurrence of stress corrosion of the beam, it is particularly important to study the stress distribution of the beam assembly structure. As for the research of the hanging structure, there are more studies in the two major fields of petrochemicals and nuclear power technology involving large equipment structures. In fact, in the field of high-speed EMU technology, hanging problems are also common, and the problem of hanging equipment directly related to train safety ,so it is very important to the reliability of high-speed EMUs [3]. However, there are few studies on the installation stress of the car body under and the beam. Zhang Shucui proposed to convert the underconstrained problem into the static balance problem of a multibody system, and based on the principle of virtual work, calculated the balance equation for the hanging node force and its corresponding tangent stiffness matrix [4]. Li Tuo studied the optimal lifting ear position of the containment module, and used ANSYS software to make a detailed study and analysis of the stress at the lifting ear position of the three-stage lifting [5]. Based on the calculation of static equilibrium theory, this paper proposes a calculation method of eccentric load, which is applied to the calculation of the load at the suspension point of the beam. Based on this, the detailed stress distribution of the assembly structure is obtained through simulation research. Introduction of Beam Hanging Structure In this paper, the connection model of the beam and converter of a type of EMU is taken as the research object, and the specific stress distribution at the suspension point of the beam is studied. The static load of the beam is mainly transmitted to the beam by the gravity of the converter through the bolt. But because the converter is not homogeneous and its center of mass is not in the geometric center, it belongs to the problem of eccentric load distribution. 
The analysis of the stress distribution at each suspension point of the beam is of great significance for improving the safety and reliability of the beam and the EMU. This paper studies the assembly stress distribution of two beams that suspend the converter through four suspension points. The concrete structure of the T-slot of the beam and the bolt connection is shown in the figure.

Load Calculation at the Suspension Points of the Beam

The center of gravity of the hanging weight does not coincide with the geometric center of the hanging points, so the arrangement is an eccentric hanging. For this eccentric suspension structure, and based on certain assumptions, the load distribution over the beam suspension points is analyzed by theoretical calculation (a numerical sketch of this calculation is given below). Assume that the hanging points of each beam lie in the same plane; the positions of the hanging points are as follows: According to the static balance conditions of the beam, the moments about the OX axis and the OY axis are obtained respectively: Because the sum of the loads carried by the lifting points is equal to the gravity of the hanging weight, that is: Assuming that the hanging points are in the same plane, and that hanging points a, b and hanging points c, d are each arranged in a row, the deflection at the intersection of the lines through hanging points a, d and hanging points b, c can take only one value [6], which is: The stiffness values of the hanging points are basically equal, so: where W is the gravity of the hanging weight. According to formulas (1)-(3) and (6), P_i (i = 1, 2, 3, 4) can be obtained.

The above calculation method is used for the load calculation of the hanging points studied in this paper. The total weight of the converter is 1800 kg. The distribution of the T-slot beam suspension points for the converter is as follows; the numbers 1-4 indicate the suspension points. The calculated loads at suspension points 1-4 are 4978 N, 4037 N, 4963 N, and 4052 N, respectively.

Simulation of the Single-Hanging-Point Assembly Structure

The distributed load at each hanging point was obtained through the theoretical calculation in the second section; the maximum hanging-point force is close to 5000 N. In order to further analyze the stress distribution of the hanging assembly structure, a single-hanging-point assembly model of the beam, bolt, and hanger was established, and the stress response under different external loads was obtained.

Simulation Method for the Contact Relationship and Preload

A bolted connection is a typical nonlinear contact problem: when two separate surfaces touch and become mutually tangent, they are said to be in contact. The nonlinearity of the contact is caused by the nonlinearity of the state conditions.
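The sketch below illustrates the eccentric four-point load calculation described in the preceding section: force balance, two moment balances about axes through the center of gravity, and, as a closure assumption for the statically indeterminate system (equal stiffness, coplanar points), equal total load on the two diagonal pairs. The hanging-point coordinates and center-of-gravity offset are hypothetical example values, not the converter geometry used in the paper.

```python
import numpy as np

g = 9.81                       # m/s^2
mass = 1800.0                  # kg, total converter mass (from the paper)
W = mass * g                   # total weight, N

# Hanging-point coordinates (x, y) in metres -- hypothetical rectangular layout
pts = np.array([
    [0.0, 0.0],    # point 1 (a)
    [2.0, 0.0],    # point 2 (b)
    [2.0, 1.2],    # point 3 (c)
    [0.0, 1.2],    # point 4 (d)
])

# Hypothetical centre of gravity, offset from the geometric centre -> eccentric load
cg = np.array([0.95, 0.63])

# Unknowns P1..P4.  Rows of A:
#   (1) sum of the point loads equals the weight
#   (2) moment balance about the x axis through the centre of gravity
#   (3) moment balance about the y axis through the centre of gravity
#   (4) closure assumption: diagonal pairs carry equal total load (P1+P3 = P2+P4)
A = np.array([
    [1.0, 1.0, 1.0, 1.0],
    list(pts[:, 1] - cg[1]),
    list(pts[:, 0] - cg[0]),
    [1.0, -1.0, 1.0, -1.0],
])
b = np.array([W, 0.0, 0.0, 0.0])

P = np.linalg.solve(A, b)
for i, load in enumerate(P, start=1):
    print(f"P{i} = {load:8.1f} N")
print("sum =", round(P.sum(), 1), "N  (should equal W =", round(W, 1), "N)")
```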
In order to simulate the assembly between the connected part and the bolt, we need to use contact elements in ANSYS. The first is to prevent the contact surfaces from penetrating each other, which is called contact coordination; the second is to transfer normal pressure and friction between the contact surfaces, and the other is to track the change of the relative position of the contact surfaces [7][8]. In order to simulate the pretension force of bolts, a pretension force element needs to be established in ANSYS. Generally, PRETS179 element is used. This element has only one degree of freedom and can only apply tension load,bending moment or torque load will be ignored. When creating the element, it is required that the fasteners be a whole, and the fasteners are cut into two parts at the selected pre-tension position, a pre-tension section is generated using a pre-tension element, and then pretension force is applied on the pre-tension section [9][10]. Related Parameters The T-slot cross beam is made of A7N01S-T5 aluminum alloy thin-walled profile, and its mechanical properties are as follows: The faster adopts special T-bolt and HARD-LOCK strong lock nut, and the material is steel. The Tbolt adopts M20, class 8.8. According to the actual assembly experience, the pre-tightening torque is selected in the range of 156.9-200N· m. The preload force is selected according to where F is the preload force of the bolt, d is the nominal diameter of the thread, and T is the tightening torque. Establishment of Simulation Model In order to analyze the distribution of the installation stress of the beam and the hanger, the following model of the connection between the single hanger and the beam was established, and the bolts and nuts were connected as a whole, ignoring the threads, and it was convenient to apply the pretensioning force to the bolts using the section method. By establishing surface-to-surface contact between the lower surface of the T-nut head and the upper surface of the T-corner inside the beam, the lower surface of the T-corner of the beam and the upper surface of the hanger, and the upper surface of the nut and the lower surface of the hanger, it can be more realisticly to simulate the of assembly situation of beam, hanger and bolt .In order to improve the calculation accuracy, the three parts in the assembly model were divided into hexahedral meshes, and solid 185 elements were assigned for calculation. A vertical load is applied to the MASS element below the hanger, and the MASS element is connected to a row of nodes below the hanger through rigid elements. The finite element model of the node is as follows: Results Analysis From the above calculation results, it can be known that under different working conditions, the stress distribution of the beam and the hanger are basically the same, and the position of the maximum stress is also basically the same. The maximum stress of the beam occurs where it comes in contact with the bolt head. The maximum stress of the hanger occurs near the bolt hole. In addition to the position where the beam is in contact with the head of the bolt, the stress at the corner of the T-slot is also larger than the stress in other surrounding areas, which should be paid special attention. 
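The torque-preload relation referred to above is not reproduced in the text; a commonly used engineering approximation is T ≈ K·F·d with a torque factor K. Both the relation and the value of K below are assumptions made here for illustration (the paper does not state K). With these assumed values the estimate falls roughly in the 41-53 kN range, of the same order as the 45 kN preload applied in the analysis below.

```python
# Rough preload estimate for the M20 T-bolt from the tightening torque.
# T = K * F * d and the torque factor K are standard engineering approximations
# assumed here; K is not given in the paper.

d = 0.020            # nominal thread diameter, m (M20)
K = 0.19             # assumed torque factor for a lightly lubricated steel bolt

for T in (156.9, 170.0, 200.0):          # tightening torque range from the paper, N*m
    F = T / (K * d)                      # estimated preload force, N
    print(f"T = {T:6.1f} N*m  ->  F = {F / 1000:5.1f} kN")
```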
When only the 45 kN preload is applied, the maximum stress of the beam is 151.4 MPa, the maximum stress at the T-slot of the beam is 57.8 MPa, and the maximum stress of the hanger is 110.1 MPa. When a vertical load of 1500 N is applied in addition to the preload, the maximum stress of the beam is 150.9 MPa, the maximum stress at the T-slot of the beam is 57.5 MPa, and the maximum stress of the hanger is 113.6 MPa. When a vertical load of 5000 N is applied in addition to the preload, the maximum stress of the beam is 146.1 MPa, the maximum stress at the T-slot of the beam is 51.2 MPa, and the maximum stress of the hanger is 148.4 MPa. These results show that, as the vertical load increases, the maximum stress of the beam and the maximum stress at the T-slot of the beam decrease slightly, while the maximum stress of the hanger increases significantly.

Conclusion

1. Based on static balance theory, the load distribution among the hanging points of the suspension assembly structure is obtained. The maximum hanging-point load is 4978 N and the minimum is 4037 N; the difference between the two is about 900 N.

2. By establishing the assembly model of a single hanging point, the stress nephograms of the beam and the hanger are obtained. The results show that, owing to the bolt pre-tensioning force, the maximum stress of the beam appears on the side in contact with the bolt head.
Comparison of Scanning LiDAR with Other Remote Sensing Measurements and Transport Model Predictions for a Saharan Dust Case : The evolution and the properties of a Saharan dust plume were studied near the city of Karlsruhe in southwest Germany (8.4298°E, 49.0953°N) from 7 to 9 April 2018, combining a scanning LiDAR (90°, 30°), a vertically pointing LiDAR (90°), a sun photometer, and the transport model ICON-ART. Based on this Saharan dust case, we discuss the advantages of a scanning aerosol LiDAR and validate a method to determine LiDAR ratios independently. The LiDAR measurements at 355 nm showed that the dust particles had backscatter coefficients of 0.86 ± 0.14 Mm − 1 sr − 1 , extinction coefficients of 40 ± 0.8 Mm − 1 , a LiDAR ratio of 46 ± 5 sr, and a linear particle depolarisation ratio of 0.27 ± 0.023. These values are in good agreement with those obtained in previous studies of Saharan dust plumes in Western Europe. Compared to the remote sensing measurements, the transport model predicted the plume arrival time, its layer height, and its structure quite well. The comparison of dust plume backscatter values from the ICON-ART model and observations for two days showed a correlation with a slope of 0.9 ± 0.1 at 355 nm. This work will be useful for future studies to characterise aerosol particles employing scanning LiDARs. Introduction Atmospheric dust has a significant impact on the Earth's climate system, but the impacts remain highly uncertain [1]. These uncertainties are attributed to the larger spatial-temporal variability of aerosol dust and its complex interaction with atmosphere constituents, radiation, and clouds [2]. Besides, dust particles can participate in cloud formation as Cloud Condensation Nuclei (CCNs) and Ice-Nucleating Particles (INPs), and these clouds can redistribute solar radiation [3][4][5][6][7][8]. Furthermore, dust plumes can modify cloud microphysics and may even change precipitation distributions [9,10]. Hence, simultaneous observation of clouds and dust plumes can help unravel the details of dust-cloud interaction processes. Understanding the distribution and the properties of dust is the key to quantifying radiative forcing [11]. For decades, satellites have been used to study the properties and transportation of dust around the globe. However, their data still have limitations, especially concerning the characterisation of the vertical structure at a high resolution of dust plumes for passive sensors aboard satellites (e.g., Meteosat, Terra and Aqua) and cannot not obtain continuous datasets at one place for polar orbit satellites (e.g., Terra and Aqua, CALIPSO). In addition, satellite data still have limitations compared to ground-based active remote sensing methods, e.g., concerning the characterisation of the structures of dust plumes especially for low aerosol particle concentrations [12]. obtain vertical aerosol profiles [42]. However, this technology is still not widely used due to the complex configurations of this kind of system. The aerosol optical depth measured by a sun photometer can also be used to constrain the LiDAR retrieval, thus helping us obtain column-averaged LiDAR ratios [43]. Scanning aerosol LiDARs have been used to determine three-dimensional aerosol distributions [44][45][46][47] and particle orientations [48]. Furthermore, a method using multiple angles, e.g., based on scanning aerosol LiDAR measurements, was proposed to retrieve extinction coefficients independently in horizontal homogeneous atmospheres [49,50]. 
The uncertainties of this method applied for inhomogeneous atmospheres and an improved method for poorly stratified atmospheres were also discussed [51][52][53]. This method has the advantage of retrieving extinction coefficients from elastic LiDAR measurements without assumptions of LiDAR ratios for the elastic LiDAR. Another better way to obtain extinction coefficients from elastic LiDAR measurements is via the Klett-Fernald method with a known LiDAR ratio. However, to the best of our knowledge, there is no method to retrieve LiDAR ratios directly from elastic LiDAR measurements. Sun photometers can also be used to infer wavelength-dependent optical and microphysical properties of aerosols from observing direct and diffuse solar radiation [54,55]. The ground-based sun photometer aerosol network AERONET (AErosol RObotic NETwork) provides a long-term, continuous, and readily accessible public domain database for aerosol research [54]. Various global and regional transport models have been developed, and many of them can simulate the transport, transformation, and properties of aerosol particles. Examples of such models are the general circulation model, ECHAM-HAMMOZ [56,57], ECHAM/MESSy Atmospheric Chemistry (EMAC) [58][59][60], the Whole Atmosphere Community Climate Model (WACCM) [61,62], the Weather Research and Forecasting (WRF) model coupled with Chemistry (WRF/Chem) [63], the CONsortium for small-scale MOdeling (COSMO) and its extension Aerosol and Reactive Trace gases (ART) [64], and its successor, ICOsahedral Nonhydrostatic (ICON), and its extension Aerosol and Reactive Trace gases (ART) [65,66]. Special focus has been on mineral dust due to its strong impact on atmospheric radiative forcing [1]. A three-dimensional mineral dust model has been developed to study its impact on the radiative balance of the atmosphere [67]. Recently, various models such as CAMS [68], WRF/Chem [69], EMAC [70], COSMO-ART [64,71], and ICON-ART [65,72,73] have been used to predict mineral dust plumes. A multi-model forecast comparison is available by the Sand and Dust Storm Warning Advisory and Assessment System [74], a program of the World Meteorological Organisation (WMO). For the dust event that occurred in April 2018, we collected a comprehensive set of data and compared it with a global transport model simulation to understand the distribution and evolution of dust near the city of Karlsruhe, in southwest Germany. Two LiDAR systems and a sun photometer were used to investigate the dust event employing different retrieval methods. The major objective was to quantify the uncertainties of different measurement and retrieval methods including a demonstration of how useful scanning LiDAR measurements can be in addition to vertical LiDAR and sun photometer data and what kind of understanding of the aerosol properties can be achieved by combining the different measurement techniques. Furthermore, we compared these observational data with predictions calculated by the state-of-the-art transport model, the ICON-ART model, to understand the distribution and evolution of dust near the city of Karlsruhe, in southwest Germany. This paper is organised as follows. Section 2 describes the remote sensing methods and the model simulations. Details of the dust observations and dust properties are discussed in Section 3 including a comparison of the different remote sensing methods, as well as how they compare to the model predictions. In the final section, we provide some conclusions. 
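All of the retrievals discussed below rest on the elastic LiDAR equation, which in its usual single-scattering form reads

P(r) = C · O(r) · β(r) · exp( −2 ∫₀ʳ α(r′) dr′ ) / r²,

where P is the received power, C the system constant, O(r) the overlap function, and β and α the total (particle plus molecular) backscatter and extinction coefficients. The particle LiDAR ratio used throughout is S_p(r) = α_p(r) / β_p(r).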
Methods Two LiDAR systems were used in this study, a vertically pointing system called the Deutscher Wetterdienst-Depolarization Raman LiDAR (DWD-DELiRA (LR111-D200, Raymetrics Inc.)) and a spatially scanning system called the Karlsruhe scanning aerosol LiDAR (KASCAL (LR111-ESS-D200, Raymetrics Inc. Athens, Greece)). Both have an emission wavelength of 355 nm and are equipped with elastic, depolarisation, and Raman channels, hence providing profiles of extinction coefficients, particle backscatter coefficients, and depolarisation ratios. Besides, a sun photometer (CE-318, CIMEL, Holben et al. [54]) provides the wavelength-dependent AOD, Angstrom exponent, under clear sky via inversion, Aerosol Size Distributions (ASDs), and SSA. To predict the dust transport and distribution, the online coupled model system ICON-ART [65] was used. The model system is running in quasi-operational mode by Deutscher Wetterdienst (DWD). Remote Sensing Instruments For this observation campaign, the KASCAL and DWD-DELiRA LiDAR systems were deployed on the campus north of the Karlsruhe Institute of Technology (8.4298°E, 49.0953°N, 119 m above sea level). The KASCAL is operated by the Institute of Meteorology and Climate Research (IMK-AAF) at the Karlsruhe Institute of Technology and the DWD-DELIRA by the Deutscher Wetterdienst (DWD). The horizontal distance of the two LiDAR systems was 500 m. The sun photometer was installed at a roof between both LiDARs. The KASCAL LiDAR system is a mobile scanning system with an emission wavelength of 355 nm. The laser pulse energy and repetition frequency are 32 mJ and 20 Hz, respectively. The laser head, a 200 mm telescope, and LiDAR signal detection units are mounted on a rotating platform allowing zenith angles from −7º to 90º and azimuth angles from 0º to 360º. The overlap range of both LiDAR systems is 255 m, and the overlap correction for the KASCAL LiDAR was performed with horizontal measurements assuming horizontal homogeneous conditions. The overlap correction for the DWD-DELIRA was performed by comparing with scanning aerosol LiDAR. This KASCAL LiDAR works automatically, time-controlled, and continuously via software developed by Raymetrics Inc. [75,76]. The fixed vertically pointing LiDAR (DWD-DELiRA) has a laser energy power of 50-55 mJ at an emission wavelength of 355 nm and a 300 mm telescope area, as well as the same detection channels as the scanning LiDAR system. For the data analysis and calibration of the system, we followed the quality standards of the European Aerosol Research LiDAR Network (EARLINET) [77]. The quality of the retrieval method was studied by EARLINET. The developed calculus modules, EARLINET Single Calculus Chain (SCC), show that the mean relative deviation is 10% (15%) for the Raman (Klett-Fernald) backscatter coefficient method and 15 % for Raman extinction coefficient retrieval [78]. Further, for optical depths τ ≤ 5 and SNR = 10, the estimated relative error is below 10%; for optical depths τ ≤ 5 and LiDAR ratio errors of 10%, the estimated error is below approximately 4% [79]. The uncertainties given in this paper include standard deviations from repeated profile measurements, which include both systematic LiDAR uncertainties and the variability of the atmosphere. For the data analysis in this Saharan dust case, Hamming window filters whose window length is 75 m (10 bins) were applied to raw LiDAR signals firstly. 
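As an illustration of the pre-processing step just described, the sketch below applies a normalised Hamming window of a given length to a synthetic range-corrected signal. The signal itself is made up for the example; the 10-bin window corresponds to 75 m at the implied 7.5 m raw resolution.

```python
import numpy as np

dz = 7.5                                   # range-bin size, m (75 m / 10 bins)
z = np.arange(dz, 10000.0, dz)

# Synthetic range-corrected signal: decaying background plus an elevated layer and noise
rng = np.random.default_rng(1)
signal = np.exp(-z / 3000.0) + 0.4 * np.exp(-0.5 * ((z - 4000.0) / 400.0) ** 2)
signal += rng.normal(scale=0.02, size=z.size)

def hamming_smooth(x, nbins=10):
    """Smooth a profile with a normalised Hamming window of nbins points."""
    w = np.hamming(nbins)
    w /= w.sum()
    return np.convolve(x, w, mode="same")

smoothed = hamming_smooth(signal, nbins=10)
print(f"std of raw signal above 7 km:      {signal[z > 7000].std():.4f}")
print(f"std of smoothed signal above 7 km: {smoothed[z > 7000].std():.4f}")
```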
Then, extinction and backscatter coefficients at 355 nm were both calculated from the elastic channel using the Klett-Fernald method [34,35] and were also calculated from the elastic and Raman channels [80]. For extinction coefficients calculated with the Raman method, Hamming window filters with window lengths of 300 m (40 bins) were applied to the Raman signals, and subsequently, the retrievals were performed with an average vertical resolution of 150 m. Please note that the Raman data are only available for night time measurements. We compared our data analysis algorithm with the Single Calculus Chain (SCC) code (EARLINET) and the Raymeterics code and obtained consistent results. For the backscatter coefficients retrieved based on the Klett-Fernald method, we only show the results for periods free of clouds as the presence of clouds makes it very difficult to choose reasonable reference values for the retrieval methods. Particle depolarisation was calculated as suggested by Freudenthaler et al. [13]: Here, δ m is the depolarisation ratio of gas molecules, which was assumed to be 0.004 in this paper according to Behrendt and Nakamura [81], δ v is the volume depolarisation ratio, and R is the backscatter ratio: Here, β p is the backscatter coefficient of particles and β m is the backscatter coefficient of molecules. The extinction coefficients and LiDAR ratios can also be retrieved using a multiangle method [49,50]. Additionally, we found that LiDAR ratios can be retrieved from elastic LiDAR signals independently. Although retrieval of LiDAR ratios is straightforward with known extinction coefficients based on the LiDAR equation, to the best of our knowledge, there is no report about retrieving LiDAR ratios from single elastic LiDAR measurements. Hence, finding a stable method to determine LiDAR ratios is important for further application of elastic LiDARs. Therefore, we propose a method employing the backscatter coefficients measured at different LiDAR viewing angles to determine the LiDAR ratios. This method works due to the fact that backscatter coefficients retrieved using the Klett-Fernald method for different LiDAR viewing angles show a difference even in horizontally homogeneous atmospheres. This difference varies with the LiDAR ratios assumed. This method was tested by simulations, proven mathematically, and validated using this Saharan dust case. Firstly, we constructed LiDAR signals based on the LiDAR equation at two elevation angles with the same aerosol backscatter coefficient profile. Then, we retrieved backscatter coefficients at these two angles with different values of the LiDAR ratio using the Klett-Fernald method. Figure 1 shows the input (dashed line) and retrieved (red and green line) backscatter coefficient profiles for different values of the LiDAR ratios. The input LiDAR ratio used in the LiDAR equation for this simulation was 55 sr. The differences between the input and retrieved profiles increased with LiDAR ratios deviating from the input LiDAR ratio (55 sr). More importantly, we found increasing differences between the profiles retrieved for different viewing angles for LiDAR ratios deviating from the input LiDAR ratio (55 sr). This implies that we can use the difference of two retrieved profiles at different view angles to determine the LiDAR ratios. In conclusion, this method allows for retrieving the LiDAR ratio if assuming a horizontal homogeneous atmosphere based on elastic LiDAR measurements at two different observation angles. 
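The sketch below illustrates the two-angle consistency idea in an idealised, aerosol-only atmosphere: elastic signals are simulated at two elevation angles from one altitude-dependent backscatter profile, both are inverted with the single-component Klett backward solution for several trial LiDAR ratios, and the vertical/slant ratio of the retrieved backscatter in the elevated layer is close to unity only for the correct LiDAR ratio. The profile shapes, layer height, and the "true" LiDAR ratio are made-up illustration values, and molecular scattering is neglected for brevity.

```python
import numpy as np

dz = 15.0
z = np.arange(dz, 8000.0, dz)                    # altitude above the lidar, m
beta = 1e-6 * np.exp(-z / 2000.0)                # boundary-layer aerosol, m^-1 sr^-1
beta += 8e-7 * np.exp(-0.5 * ((z - 4000.0) / 600.0) ** 2)   # elevated dust layer
S_TRUE = 55.0                                    # lidar ratio used to build the signals, sr

def simulate(elev_deg):
    """Range-corrected elastic signal along a line of sight with elevation elev_deg,
    assuming horizontal homogeneity (properties depend on altitude only)."""
    sin_e = np.sin(np.radians(elev_deg))
    dr = dz / sin_e                              # range step along the line of sight
    alpha = S_TRUE * beta                        # extinction at each altitude
    tau = np.cumsum(alpha) * dr                  # optical depth along the (slant) path
    return beta * np.exp(-2.0 * tau)             # system constant dropped

def klett_backward(rc, dr, lidar_ratio, beta_ref):
    """Klett backward solution for a single-component atmosphere,
    with the reference value given at the far end of the profile."""
    integral = np.zeros_like(rc)                 # int_r^rmax rc dr', built from the far end
    for i in range(len(rc) - 2, -1, -1):
        integral[i] = integral[i + 1] + 0.5 * (rc[i] + rc[i + 1]) * dr
    return rc / (rc[-1] / beta_ref + 2.0 * lidar_ratio * integral)

rc90, rc30 = simulate(90.0), simulate(30.0)
layer = (z > 3000.0) & (z < 5000.0)              # the elevated layer

for S_try in (30.0, 55.0, 80.0):
    b90 = klett_backward(rc90, dz / np.sin(np.radians(90.0)), S_try, beta[-1])
    b30 = klett_backward(rc30, dz / np.sin(np.radians(30.0)), S_try, beta[-1])
    ratio = np.mean(b90[layer] / b30[layer])
    print(f"assumed S = {S_try:4.0f} sr -> vertical/slant backscatter ratio in layer = {ratio:.3f}")
```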
A mathematical derivation for this method is given in the Supplementary Materials. To test how sensitive this difference in backscatter coefficients of different viewing angles depends on the LiDAR ratio, we performed a series of simulations. Figure 2 shows the ratio between vertical and slant backscatter coefficients for different LiDAR ratios with the input LiDAR ratios being 55 sr and 30 sr, respectively. From this figure, we can identify the ratio equal to unity when the retrieval LiDAR ratio is equal to the correct LiDAR ratio. Besides, a smaller value of a chosen LiDAR ratio caused the backscatter coefficient from the vertical retrieval being smaller than that from the slant retrieval, and vice versa. A sun photometer (CE-318, CIMEL, Holben et al. [54]), located between the two LiDAR systems on a roof top 25 m above the ground level, measures solar radiance at 339 nm, 379 nm, 441 nm, 501 nm, 675 nm, 869 nm, 940 nm, 1021 nm, and 1638 nm. This allows the calculation of wavelength-dependent AOD. The sun photometer data can also be used to calculate other aerosol parameters (e.g., SSA, AE, ASD, and Complex Refractive Index (CRI) [82][83][84]. The SSA is the ratio of the scattering coefficient to the extinction coefficient, which has a negative correlation with the absorption ability of the aerosol particles. Hence, this parameter can be used to characterise the scattering and absorption capability of the particles. The AE is a parameter that describes the wavelength dependency of AOD. A stronger wavelength dependence occurs when the sizes of particles are smaller than or equivalent to the incident wavelength. Hence, AE has a negative correlation with particle size. From clear-sky measurements with the sun photometer, ASD between 0.05 and 15 µm and complex refractive index in the range 1.33-1.6 and 0.0005i-0.5i [20,84] can be derived. The sun photometer is part of AERONET, and for this work, we used the level 2.0 data [85]. Aerosol Transport Modelling To predict the dust transport and distribution, the online-coupled model system ICON-ART was used. ICON is a weather and climate model that solves the full three-dimensional non-hydrostatic and compressible Navier-Stokes equations on an icosahedral grid [86]. The ART module is an extension of ICON to include the life cycle and cloud/radiation feedback of aerosols and trace gases. Mineral dust in ART is represented by three lognormal modes with mass median diameters of 1.5 µm, 6.7 µm, and 14.2 µm and standard deviations of 1.7, 1.6 and 1.5, respectively. The dust emission scheme is based on Vogel et al. [87] and Rieger et al. [65], which considers the soil properties (size distribution, residual soil moisture), the soil dispersion state, and soil type heterogeneity. The dust removal processes include sedimentation, dry, and wet deposition. The simulations were performed on a global domain including a regional nest (over North Africa and Europe) with horizontal resolutions of 40 km and 20 km, respectively. The vertical resolution of the model ranged from tens of meters to several kilometres from low to high altitudes. At altitudes that often contain dust layers from long-range transport (2-6 km), the vertical resolution ranged from 200 m to 400 m. The ICON-ART model describes with the above 3 modes desert dust particles and provides the dust particle concentration. The concentrations were used together with the particle mass efficiencies provided by Meng et al. 
[88] for calculating the particle backscatter coefficient and extinction coefficient. For this study, the altitude-dependent backscatter coefficients and column AODs were used to compare the ICON-ART calculations to the results from the LiDAR and sun photometer measurements. For the comparison of the ICON-ART model with the LiDAR measurements, the altitude in the y axis means the height above sea level and for the other situation, the height means the height above the ground level. Results and Discussion Based on measurements near the city of Karlsruhe in southwest Germany during a Saharan dust event in April 2018, we demonstrate the advantages of multiangle LiDAR measurements. Furthermore, we compared the LiDAR and sun photometer data to characterise the dust aerosol, and finally, we validated the transport model predictions for this single dust event. Application of Two-Angle LiDAR Measurements for a Saharan Dust Case The method to retrieve LiDAR ratios discussed in the Methods Section was applied for a Saharan dust event from 19:21 to 22:54 (UTC) on 8 April 2018 with the dust layer at an altitude between 2.5 km and 6.0 km. We chose this time period since we had a wellstratified dust layer and also Raman data available. Figure 3 shows backscatter coefficients from vertical and slant LiDAR measurements for different values of assumed LiDAR ratios ranging from 20 sr to 80 sr. Consistent backscatter values for vertical and slant profiles in the dust layer are only available for a LiDAR ratio of 50 sr. This means that a LiDAR ratio of around 50 sr at the Saharan dust layer can be derived from this case, which is a typical value for Saharan dust over Europe [89]. Figure 3a shows that the backscatter coefficients for vertical and slant measurements were consistent for Saharan dust particles (2.5-6.0 km), but inconsistent for boundary layer aerosol particles (below 1 km) when the LiDAR ratio was assumed to be 50 sr. This is because a LiDAR ratio of 50 sr is not suitable for boundary layer aerosol particles at this location [21]. Therefore, we calculated LiDAR ratios based on our Raman signals for boundary layer aerosol particles and the Saharan dust particles. The results are shown in the right panel of Figure 4. The LiDAR ratio for the dust particles was 46 ± 5 sr and for the boundary layer aerosol particles 31 ± 3 sr as the average of both vertical and slant measurements. We parameterised these LiDAR ratios of 46 sr and 31 sr, respectively, as a function of altitude with a single step at 2 km and then used this as the LiDAR ratio for the elastic LiDAR signal retrieval. The results are shown in the left of Figure 4. These backscatter coefficients are consistent for vertical and slant measurements for both dust and boundary layer aerosol. A LiDAR ratio of 31 sr below a 2 km altitude led to much better agreement of the backscatter coefficient profiles from the elastic channel and Raman channel compared to Figure 3c. However, there remain small differences at low altitudes for backscatter coefficients from elastic data for two elevation angles. This inconsistency may be related to an inhomogeneous atmosphere in the boundary layer, as can also be seen in the backscatter coefficients calculated from the Raman data. The retrieval for the elastic channel data uses two different LiDAR ratios at different altitudes. 
In the legend of Figure 4, "Elastic" and "Raman" denote the LiDAR data channel used in the retrieval; "vertical" and "slant" denote the laser beam direction; and "low" denotes data retrieval for low altitudes (below 2 km); e.g., "ElasticVerticalLow" means the backscatter coefficient retrieved from the elastic channel in the vertical direction for altitudes below 2 km. The application of this multiangle method to this Saharan dust case proved that the method is useful for retrieving LiDAR ratios from scanning elastic LiDAR measurements. Compared with other methods such as Raman or HSRL LiDAR, the multiangle method provides an applicable solution for both day and night, as the elastic LiDAR can obtain reliable measurements during both periods. Furthermore, this method can also be used in multiwavelength scanning LiDAR systems to determine wavelength-dependent LiDAR ratios. Characteristic Properties of the Saharan Dust Determined by Remote Sensing In early April 2018, a far southward-reaching upper-level trough associated with a large low-pressure complex in the western North Atlantic led to a cold front with strong surface winds and dust emission in the Northern Sahara in Morocco and Algeria. The dust was transported northward into the western Mediterranean, where it entered a warm conveyor belt that effectively lifted the dust and transported it towards central Europe. This Saharan dust plume was characterised by the methods described above for nearly three days in April 2018 near the city of Karlsruhe in southwest Germany. The ICON-ART model predicted the arrival of the dust plume and its spatial-temporal evolution, as characterised by the LiDAR and sun photometer measurements. Figure 5 shows the corresponding backscatter coefficients from the scanning LiDAR (a), the vertically pointing LiDAR (b), and the ICON-ART model simulation (c), as well as the linear depolarisation values of the KASCAL (d) for 7-9 April 2018. The scanning LiDAR was operated by performing vertical and slant measurements at 90° and 30° elevation angles alternately, with integration times of 250 s for each observation angle. The data shown for KASCAL were averaged over two of these measurement periods. The data shown for the DWD-DELiRA were averaged over 30 min. As can be seen in these figures, the plume arrived in Karlsruhe at 11:00 on 7 April (dashed line T1) and lasted about three days. Initially, this dust layer showed a maximum of the backscatter at an altitude of 2.5 km, and the dust subsequently also reached lower altitudes. At 12:00 UTC on 8 April, another dust layer between 5.0 km and 11.0 km arrived at the observation station (dashed line T2). This layer then started sinking and overlapped with the lower dust layer at around 3:00 on 9 April (dashed line T3). A cloud with a base at 4.5 km appearing at 11:00 (UTC) on 9 April made it difficult to retrieve the backscatter coefficients for the aerosol particles below. Hence, the backscatter coefficients of the LiDAR measurements are not shown for this period. In addition, two periods (C1 and C2) are highlighted for which we performed a more detailed analysis. All four panels in Figure 5 show a good agreement among dust layer height, dust plume arrival times, and dust plume structures. In particular, ICON-ART predicted the arrival time of the dust plume precisely (±20 min difference with respect to the observations). This indicates that the model reproduced the synoptic-scale processes very well, which led to the precise prediction of the dust transport.
Thus, the generally good agreement between the LiDAR measurements and ICON-ART partially validated the model's capability to predict dust transport. Please note that this particular model run only included desert dust aerosols. Hence, differences due to boundary layer aerosol particles were expected in this case. Furthermore, there were dust layers predicted by the model at higher altitudes (e.g., a dust plume at around 8 km on 7 April and 8 April) which were not detected by the LiDAR measurements. Potential reasons for the agreement and differences between LiDAR observations and model predictions are discussed in Section 3.3. During this dust event, the LiDARs used three optical measurement paths (two vertical measurements and one slant measurement with an elevation angle of 30°). The comparison of these three profiles can be used to test different LiDAR retrieval methods and to characterise the properties of the dust plume (e.g., the horizontal homogeneity of the dust plume). Figure 6 shows the extinction and backscatter coefficients obtained with different retrieval methods and along different optical paths for the measurement time from 19:21 to 22:54 (UTC) on 8 April (period C2 in Figure 5), averaged over 66 min for the scanning LiDAR measurements and 213 min for the vertical LiDAR measurements. Please note that the scanning LiDAR measured alternately at two elevation angles (90° and 30°). A LiDAR ratio of 50 sr, which is a typical value observed for Saharan dust [89], was used in the Klett-Fernald method to retrieve the elastic backscatter coefficients and extinction coefficients. Figure 6a,b shows the extinction coefficients from different retrieval methods (elastic, Raman, and multiangle methods) and from different optical paths, respectively. Figure 6c,d shows the backscatter coefficients from different retrieval methods (elastic, Raman, and multiangle methods) and from different optical paths, respectively. The extinction coefficients and backscatter coefficients calculated using the above methods, shown in Figure 6a,c, were consistent, but the extinction coefficients calculated from the Raman measurements had larger variations. In addition to the classical methods to retrieve extinction coefficients, we also calculated the extinction coefficients from the elastic channels with a multiangle method, which also agreed with the other methods. The denoising methods can have a substantial impact on the remaining variability of the extinction coefficients retrieved from Raman data. In Figure S1, we provide extinction coefficients retrieved from Raman data for different filters and different filter lengths. In addition, the average extinction coefficients and their standard deviations, averaged from 4.0 km to 6.0 km altitude, are listed in Table S1. These data show that the mean values of the extinction coefficients for different filter types and filter lengths remained almost constant. In contrast, their uncertainties varied from around 35 Mm⁻¹ to 5 Mm⁻¹ for window lengths from 82.5 m to 1207.5 m for the different types of filters. Hence, the Raman extinction coefficients were affected more by the filter window length than by the filter type, which is in agreement with the observations by Shen and Cao [90]. The extinction coefficients and backscatter coefficients for the different optical paths are shown in Figure 6b and Figure 6d, respectively. The consistency of these profiles reflects the high quality of the measurements and retrieval algorithms.
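The filter-sensitivity check described above (cf. Figure S1 and Table S1) can be sketched as follows; this is an illustrative example, not the processing code used in the study. It assumes a Savitzky-Golay filter from SciPy, an equidistant altitude grid, and a nominal 7.5 m range-bin spacing; all of these are assumptions rather than values taken from the instrument description.

```python
import numpy as np
from scipy.signal import savgol_filter

def layer_stats_for_windows(alpha_raw, z, windows_m, dz=7.5,
                            layer=(4000.0, 6000.0), polyorder=2):
    """Smooth a noisy Raman-derived extinction profile with Savitzky-Golay
    filters of different window lengths and report the layer mean and
    standard deviation (cf. the 4.0-6.0 km averages in Table S1).

    alpha_raw : noisy extinction coefficient profile
    z         : altitude grid [m], assumed equidistant with spacing dz
    windows_m : iterable of window lengths in metres (e.g. 82.5 ... 1207.5)
    """
    in_layer = (z >= layer[0]) & (z <= layer[1])
    results = {}
    for w in windows_m:
        n = max(polyorder + 1, int(round(w / dz)))
        n += 1 - n % 2                      # Savitzky-Golay needs an odd window length
        alpha_s = savgol_filter(alpha_raw, window_length=n, polyorder=polyorder)
        results[w] = (alpha_s[in_layer].mean(), alpha_s[in_layer].std())
    return results

# e.g. layer_stats_for_windows(alpha, z, windows_m=[82.5, 247.5, 607.5, 1207.5])
```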
A comparison between the active LiDARs and the passive sun photometer can help to understand the properties of the dust aerosol particles by employing dust aerosol scattering information from different scattering angles to retrieve the particles' microphysical properties. During this dust event, we compared the AOD from the DWD-DELiRA measurements and from the sun photometer for two consecutive days (7-8 April). The AOD from the sun photometer was the AERONET Version 3 level 2.0 product [83], while that from the LiDAR measurements was derived in the following way. Firstly, as discussed above, two aerosol layers with different LiDAR ratios were present. Hence, we used two different LiDAR ratios at different altitudes to retrieve the backscatter coefficients, which are shown in Figure S2. We used a LiDAR ratio of 50 sr for the upper layer (above 2 km, red line) and 30 sr for the lower layer (below 2 km, green line), typical values for Saharan dust and boundary layer aerosol [89]. Secondly, constant backscatter coefficients were assumed in the LiDAR overlap region, and these constant values were set to the backscatter coefficients at a range of 255 m (the overlap region of the DWD-DELiRA). Finally, the AOD in the far range (e.g., the stratosphere) was assumed to be zero. The hourly AODs from the sun photometer, the vertical LiDAR (DWD-DELiRA), and the ICON-ART model are shown in Figure 7. Please note that the model result is discussed in detail in Section 3.3. As the signal-to-noise ratio of KASCAL is low in the daytime, AODs were not calculated for this LiDAR. All three methods showed a similar trend, with AODs increasing from around 0.13 to 0.45 during these two days. However, the average AOD retrieved from the LiDAR data for the two days was systematically lower, by 0.053 ± 0.031, than that from the sun photometer after wavelength conversion to 340 nm. The AE used for this wavelength conversion was 0.471, which was calculated from the sun photometer data at the wavelengths of 340 nm and 380 nm. The average stratospheric AOD for the years 2018-2019 in the Northern Hemisphere was 0.01 at 340 nm [91]. Hence, the averaged AOD measured by the sun photometer was still larger, by 0.043 ± 0.031, than the AOD from the LiDAR measurements even when the stratospheric AOD is taken into account. This bias may be due to the assumption of constant backscatter coefficients in the overlap region of the LiDAR being inappropriate. Such an uncertainty in the AOD corresponds to an uncertainty in the backscatter coefficients of 5.6 ± 4.1 Mm⁻¹ sr⁻¹ in the overlap region, which is reasonable for typical boundary layer aerosol variations [92-94]. On 8 April, clouds led to increased uncertainties in the AOD retrievals from the LiDAR measurements and also to some data gaps in the sun photometer data. Hence, the AOD from the LiDAR measurements can be given only for some selected clear-sky periods, while the sun photometer still had enough valid data points to calculate hourly averages. For this dust event, the vertical and slant volume (δv) and particle (δp) depolarisation ratios were measured by the two different LiDAR systems, and the volume depolarisation ratios for these two elevation angles are shown in Figure 8. No obvious difference between the vertical and slant measurements was found for the volume depolarisation ratios and particle depolarisation ratios. This may mean that the dust particles had no specific orientation [95,96].
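Two of the steps described above, integrating a backscatter profile with an altitude-dependent LiDAR ratio to obtain a column AOD and converting an AOD between wavelengths with the Angström exponent, can be written compactly as in the following sketch. Variable names and default values are illustrative assumptions, not the study's actual processing code.

```python
import numpy as np

def lidar_aod(beta, z, s_low=30.0, s_high=50.0, z_step=2000.0):
    """Column AOD from an aerosol backscatter profile beta(z) [m^-1 sr^-1]
    using a two-step LiDAR ratio: s_low below z_step, s_high above."""
    S = np.where(z < z_step, s_low, s_high)   # altitude-dependent LiDAR ratio [sr]
    alpha = S * beta                          # extinction coefficient [m^-1]
    return np.trapz(alpha, z)                 # integrate over altitude [m]

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angström exponent from AODs at two wavelengths (e.g. 340 nm and 380 nm)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

def convert_aod(tau, lam_from, lam_to, ae):
    """Convert an AOD from one wavelength to another using the AE."""
    return tau * (lam_to / lam_from) ** (-ae)

# Example: convert a 355 nm LiDAR AOD to 340 nm with AE = 0.471.
# tau_340 = convert_aod(tau_355, 355.0, 340.0, 0.471)
```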
The particle depolarisation ratio of this dust plume was 0.27 ± 0.023, which is very similar to, but slightly larger than, the depolarisation ratios determined at 532 nm for Saharan dust particles [13,97]. However, this is still within the combined uncertainty limits. Furthermore, the day-to-day variability of the values given by Freudenthaler et al. [13] ranged from about 0.22 to 0.31. Figure 9 shows the SSA, AE, and ASD calculated based on the sun photometer measurements on 7 and 8 April. The AE at the wavelengths of 440/880 nm decreased from 1.38 to 0.08 during these two days, as shown in Figure 9a,b. This corresponds to a weaker wavelength dependence of the AOD, which may be caused by larger particles. The particle size distributions provided by the sun photometer retrievals are shown in Figure 9c,d. They indeed showed increasing amounts of larger particles. The maximum column-integrated volume concentration of coarse-mode particles increased from around 0.007 µm³/µm² in the early morning of 7 April (before the Saharan dust arrival) to 0.093 µm³/µm² in the afternoon of 8 April. The Single Scattering Albedos (SSAs) determined for the wavelengths between 439 nm and 1018 nm ranged between 0.88 and 0.96 and agreed quite well with the data from previous observations (cf. Table S2). Model-Observation Comparison Model simulations and LiDAR observations were used to study the spatial and temporal evolution of a dust plume in this study. The comparison between the model and LiDAR results can be used to evaluate the performance of the model simulation, including dust layer height, dust arrival time, dust layer structure, and dust optical parameters. The evolution of the dust plume over Karlsruhe predicted by the ICON-ART model is shown in Panel (c) of Figure 5. According to the model simulation, the dust layer arrived in Karlsruhe at 11:00 on 7 April, and the plume passed over that location for nearly two and a half days. Two dust layers were observed from 12:00 (UTC) on 8 April to the morning of 9 April, after which they merged. A comparison between the model prediction and the LiDAR measurement is shown in Panel (b) of Figure 5, where the black contour lines show the modelled backscatter coefficient and the contour fill shows the LiDAR (DWD-DELiRA) observation. The white line in Panel (b) is the cloud base height from the LiDAR measurements. The dust layer heights (vertical extent) and their peak heights (the heights of the maximum backscatter coefficients) for both the LiDAR measurements and the ICON-ART prediction are shown in Figure S3. The criteria for an aloft dust layer are as follows: (i) the δp value is larger than 0.1 throughout the layer; (ii) the layer thickness exceeds 0.3 km; (iii) the layer base is above the planetary boundary layer [98]. This figure shows a very good agreement in the dust layer heights between the two measurements and the ICON-ART prediction. The comparison showed that the dust plume arrival time, layer height, structure, and backscatter coefficients were consistent between the LiDAR measurements and the model simulation for this event. Although the LiDAR data showed more details of the dust plume structures, the agreement with the model was quite good considering the relatively coarse spatial resolution used in this model run. On the other hand, in the presence of clouds, aerosol properties cannot be retrieved from LiDAR data. Therefore, a comparison for thin dust layers is not always meaningful.
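The three aloft-layer criteria listed above translate directly into a simple layer-finding routine. The sketch below is an illustrative implementation under the assumption of a regular altitude grid and a known planetary boundary layer height; it is not taken from the paper.

```python
import numpy as np

def find_dust_layers(delta_p, z, pbl_height, dp_min=0.1, min_thickness=300.0):
    """Identify aloft dust layers from a particle depolarisation profile.

    Criteria (cf. the text): delta_p > dp_min throughout the layer,
    thickness > min_thickness (m), and layer base above the PBL height (m).
    Returns a list of (base, top) tuples in metres.
    """
    mask = (delta_p > dp_min) & (z > pbl_height)
    layers, start = [], None
    for i, flag in enumerate(mask):
        if flag and start is None:
            start = i                                    # layer base found
        elif not flag and start is not None:
            if z[i - 1] - z[start] > min_thickness:
                layers.append((z[start], z[i - 1]))      # keep sufficiently thick layers
            start = None
    if start is not None and z[-1] - z[start] > min_thickness:
        layers.append((z[start], z[-1]))                 # layer extending to profile top
    return layers

# e.g. find_dust_layers(delta_p, z, pbl_height=1000.0)
```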
The dust layer height range was based on the dust layer heights shown in Figure S3, which do not include the boundary layer aerosol for the LiDAR measurements. All comparisons between the LiDAR measurements and the ICON-ART model results followed these criteria. A comparison of the vertical backscatter coefficient profiles between the LiDARs and the ICON-ART model predictions is shown in the right panel of Figure 5. The backscatter coefficients are given for the LiDAR measurements from 15:30 to 16:30 and for the ICON-ART calculation at 16:00 on 7 April. This figure shows a good agreement between the measured and modelled backscatter coefficients over the vertical extent of the dust layer for this time period. The comparison between the LiDAR and the ICON-ART model showed that the ICON-ART predictions agreed very well with the measurements, although some variability can be observed as well. In addition, the AODs for three wavelengths from the sun photometer and the model calculation are shown in Figure 10. This figure shows that the AODs from the sun photometer and the model follow a similar trend. However, the modelled AODs were systematically lower than those from the sun photometer. Figure S4 shows the time series of the coarse-particle-mode AOD from the sun photometer and the modelled AOD; the modelled AOD values agreed well with the coarse-mode AOD of the sun photometer at a wavelength of 550 nm. Hence, the underestimation of the AOD by the model shown in Figure 10 is partly due to the fact that the modelled AOD only included the Saharan dust plume, whereas the sun photometer AOD also included the boundary layer aerosol. After the arrival of the dust plume, the AOD from the model calculation was systematically lower than the sun photometer measurement, and the bias between the model and the sun photometer increased with decreasing wavelength towards the ultraviolet (UV) spectral region. In other words, the discrepancy was wavelength dependent, with a bigger difference in the UV. Figure 11 shows the correlation of the backscatter coefficients from the LiDAR measurements and the ICON-ART simulations for the dust plume, assuming both Non-Spherical (NSP) (left panel) and Spherical (SPH) (right panel) particles. The colour of the scatter points in this figure indicates the normalised density of the backscatter coefficients, i.e., the frequency of occurrence of these values. These data points were selected for the dust layers shown in Figure S3, which do not include the boundary layer aerosol for the LiDAR measurements. The parameterisations for the NSP and SPH particles are given in Hoshyaripour et al. [73] and were based on the work by Meng et al. [88]. For the whole dust episode, there was a remarkable agreement between the model simulation and the observations, although individual profiles might differ significantly. The regression fitting was performed for both the NSP and SPH data points that had a normalised density greater than 0.4. The corresponding results of the regression analysis for the NSP particles showed a slope of 0.9 ± 0.1 and an R² of 0.68. This is an excellent result taking into account all the uncertainties and assumptions of the measurements and model simulations. However, the regression fitting for the SPH particles had a slope of 2.3 ± 0.3 and an R² of 0.68. This means that assuming spherical particles led to overestimated backscatter coefficients. This is confirmed in Figure S5, which shows the backscatter coefficients of the two LiDAR measurements and the two model simulations using the SPH and NSP parameterisations, respectively.
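The density-filtered regression described above can be sketched as follows, assuming matched arrays of measured and modelled backscatter coefficients for the dust layers. The bin count and the normalisation choice are illustrative assumptions; the 0.4 density threshold mirrors the text.

```python
import numpy as np
from scipy import stats

def density_filtered_regression(beta_lidar, beta_model, bins=50, density_min=0.4):
    """Linear regression between measured and modelled backscatter coefficients,
    restricted to points whose normalised 2D-histogram density exceeds
    density_min (cf. the >0.4 threshold used in the text)."""
    beta_lidar = np.asarray(beta_lidar)
    beta_model = np.asarray(beta_model)
    H, xedges, yedges = np.histogram2d(beta_lidar, beta_model, bins=bins)
    H = H / H.max()                                          # normalised density per bin
    ix = np.clip(np.digitize(beta_lidar, xedges) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(beta_model, yedges) - 1, 0, bins - 1)
    keep = H[ix, iy] > density_min                           # dense part of the scatter
    res = stats.linregress(beta_lidar[keep], beta_model[keep])
    return res.slope, res.intercept, res.rvalue ** 2

# slope, intercept, r2 = density_filtered_regression(beta_lidar, beta_model)
```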
This figure shows that the ICON-ART model overestimated the backscatter coefficients at a wavelength of 355 nm when spherical particles were assumed in the calculation of the backscatter coefficients. The mean values and standard deviations of the backscatter coefficients from the LiDAR measurement and the ICON-ART simulation are shown in Table S3, which also confirms that an inappropriate parameterisation overestimates the predicted backscatter coefficient. The reason for the overestimation is that spherical particles have larger backscatter coefficients (at 180°) than non-spherical particles [73,99]. The physical explanation of this phenomenon is that, for spherical particles, surface waves can contribute to the backscatter, hence causing larger backscatter coefficients for spherical particles [100]. The vertical profiles of the backscatter coefficient from the two LiDAR measurements and the two ICON-ART model simulations are shown in Figure S6 for two selected periods, indicated as C1 and C2 in Figure 5. Comparing Figure S5 and Figure S6, we found that ICON-ART could predict the dust layer structures quite well for most of this event, but it also showed substantial differences from the LiDAR measurements, e.g., for the time period C2 (cf. Figure S6, right). The coarse-mode AOD of the sun photometer and the ICON-ART results for the spherical and non-spherical particle models are shown in Figure S4. All AOD values followed a similar trend, but the model results were higher by a factor of 1.25 ± 0.21 for NSP particles and 1.14 ± 0.18 for SPH particles at a wavelength of 550 nm. Conclusions The objectives of this work were to compare different measurements and retrieval methods, including scanning LiDAR measurements and sun photometer data, and to demonstrate which aerosol properties can be determined by combining the different measurement techniques. Furthermore, we wanted to assess the quality of the dust plume predictions of the ICON-ART model by comparison with the observations. The evolution and the properties of a Saharan dust plume were characterised over two and a half days by combining data from a scanning LiDAR, a vertical LiDAR, and a sun photometer. The comprehensive dataset from the different methods could characterise the dust plume in different ways, thus providing additional information for further analysis. The scanning LiDAR measurements enabled us to retrieve LiDAR ratios and extinction coefficients independently and during both day and night; these were comparable to the Raman-based retrievals. The comparison of the extinction and backscatter coefficients for different retrieval methods was used to quantify the uncertainties of the different methods and the impact of different denoising filters on the extinction coefficients from Raman-scattering LiDAR signals. The consistency among the three different LiDAR laser beam paths reflected the high quality of the measurements as well as of the retrieval algorithms. The vertical and slant volume and particle depolarisation ratio measurements contained information on the shape and, partially, the orientation of the dust particles. The comparison between the LiDAR and sun photometer measurements proved useful to study dust optical properties such as the aerosol optical depth and to obtain information about LiDAR parameters such as the LiDAR ratio. The wavelength-dependent optical parameters and the microphysics of the dust particles provided by the sun photometer indicated larger particles over the observation station for this dust event.
The comparison between the LiDAR measurements, the sun photometer, and the ICON-ART predictions showed quite good agreement for the dust arrival time, dust layer height and structure, backscatter coefficients, and AODs. The average coarse-mode AOD from the model was larger by a factor of 1.25 ± 0.21 compared to that of the sun photometer at a wavelength of 550 nm. The modelled backscatter coefficients for dust showed a correlation with the LiDAR observations with a slope of 0.9 ± 0.1 (R² = 0.68) at a wavelength of 355 nm. However, the model can overestimate the observed backscatter coefficients when spherical particles are assumed. The corresponding correlation between the model and LiDAR data showed a slope of 2.3 ± 0.3 (R² = 0.74). This demonstrates how crucial it is to use an appropriate parameterisation for the dust particle optics. This has implications for the particle assimilation schemes of the models. Despite the good agreement between the model predictions and the observations for this Saharan dust plume at one location, we cannot generalise this result. Systematic comparisons for different meteorological conditions and at different locations are needed to substantiate the model validation and to facilitate a potential improvement of the dust processes (emission, transport, removal, and microphysics) and properties (size distribution and optics) in the models. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/rs14071693/s1, Figure S1: Extinction coefficients from the Raman signal from vertical and slant measurements with different types of filters and different filter lengths, Figure S2: Elastic backscatter coefficients from vertical LiDAR measurements for different values of altitude-dependent LiDAR ratios with an interval time of 1 h, Figure S3: Time series of dust layer heights and peak heights (the heights of the maximum backscatter coefficients) for both LiDAR measurements and ICON-ART prediction, as well as cloud base heights (green line) measured by LiDAR from 7 to 9 April 2018, Figure S4: AOD from the sun photometer (coarse mode) and ICON-ART for both the SPH and NSP particle model simulations on 7 and 8 April at 1 h temporal resolution. SPH = spherical; NSP = non-spherical, Figure S5: Time series of backscatter coefficients from KASCAL measurements (a) and from DWD-DELiRA measurements with ICON-ART results shown as black contour lines (b), as well as ICON-ART results for SPH particles (c) and for NSP particles (d) from 7 to 9 April 2018. Please note that the model data only include the Saharan dust, while the LiDAR data also show other aerosol particles and clouds. The profiles of the backscatter coefficients measured by the two LiDARs from 15:30 to 16:30 and predicted by ICON-ART for 16:00 on 7 April 2018 (indicated as C1 in the contour plots) are shown on the right side of this figure. The vertical dashed lines in the contour plots indicate the dust arrival (T1), the appearance of the second dust layer (T2), and the merging of the two dust layers (T3). C1 and C2 represent the time periods used for a more detailed data analysis. SPH = spherical; NSP = non-spherical, Figure S6: Profiles of the backscatter coefficient from KASCAL (both vertical and slant directions) and DWD-DELiRA measurements, as well as the ICON-ART model simulation, for two typical cases indicated as C1 (left) and C2 (right) in Figure 1.
SPH = spherical; NSP = non-spherical, Table S1: Averaged extinction coefficients and their standard deviations for different window types and lengths, Table S2: Overview of SSAs measured for Saharan dust, Table S3: Comparison of backscatter coefficients from LiDAR and ICON-ART based on spherical parameterisation (SPH) and non-spherical parameterisation (NSP). Appendix: Mathematical derivation of the multiangle method.
Integrated transformations of plant biomass to valuable chemicals, biodegradable polymers and nanoporous carbons Integrated transformations of wood biomass to valuable chemicals and materials are described. They include the separation of the main biomass components, the conversion of cellulose to glucose, levulinic acid and biodegradable polymers, and of lignin to nanoporous carbons. For wood fractionation into pure cellulose and low-molecular-mass lignin, the methods of catalytic oxidation and exploded autohydrolysis are used. The processes of acid-catalysed hydrolysis of cellulose to glucose and levulinic acid were optimized. New methods for the synthesis of biodegradable polymers from the lactone of levulinic acid and of nanoporous carbons from lignin are suggested. Introduction The annual increment of plant biomass significantly exceeds the yearly demand of mankind for fuels and chemical products [1]. Russia holds around 23 % of the world's forest resources, which can serve as a relevant source of raw material for the production of a large variety of needed chemicals, materials and alternative fuels. Wood biomass contains 40-50 % cellulose, 15-30 % hemicelluloses and up to 30 % lignin [2]. Cellulose is a linear polymer constructed from C6H10O5 units. Hemicelluloses are branched polysaccharides containing C5 units, with shorter chains compared to cellulose. Lignin is a non-regular polymer of aromatic nature composed of phenylpropane fragments. Traditional industrial technologies for deep wood processing are inefficient and hazardous for the environment, and they give only a limited range of products. Promising directions in the development of innovative technologies for processing plant biomass into valuable products are connected with the design of integrated processes which ensure the total utilization of all main components of the biomass. In this paper, some results of a study of the integrated transformations of wood components to valuable chemicals, biodegradable polymers and nanoporous carbon materials are presented. viscosimetry method. The chain-length distribution was determined by reversed-phase chromatography using a Nova-Pak C18 column (octadecyl-coupled silica gel sorbent) and an evaporative light-scattering detector, model 500 (Alltech Corporation, USA). ¹H NMR spectroscopy was also used to study the polymer structure. The biodegradability of the obtained polymers was estimated from the live-weight gain of cultures of the microorganisms Saccharomyces cerevisiae, Streptomyces chrysomallus and Streptomyces lividans and from the decrease in polymer mass. Results and discussion The study of the integrated transformation of wood sawdust to cellulose, glucose, levulinic acid, biodegradable polymers and nanoporous carbons was accomplished according to the scheme presented in Figure 1. The first stage of the studied integrated transformations is the separation of wood into cellulose and low-molecular-mass lignin. The following methods of wood separation were compared: catalytic oxidation by hydrogen peroxide in an acetic acid-water medium and exploded autohydrolysis with overheated water steam. The complete separation of wood biomass into cellulose and soluble lignin was realized by catalytic oxidation with hydrogen peroxide in an acetic acid-water medium in the presence of a sulfuric acid catalyst. It was found that increasing the oxidation temperature from 110 to 140 °C increases the cellulose content and decreases the lignin concentration in the cellulosic product.
At the same time, the yield of the cellulosic product decreases, because the oxidative destruction of lignin and polysaccharides is accelerated with temperature. Similar effects were observed when the initial hydrogen peroxide concentration was increased from 2.0 to 10.2 %. The optimal conditions of aspen-wood and birch-wood oxidation were found, which provide a rather high yield of cellulose (48-50 % relative to a.d. wood) with a very low content of residual lignin (0.3-0.4 % mas.): temperature 120-130 °C, H2O2 concentration 4-6 % mas., H2SO4 concentration 2 % mas. In order to obtain high-quality cellulose from abies or larch wood, which contain more lignin than aspen or birch wood, it is necessary to use more active catalysts, such as TiO2. The process of high-temperature hydrolysis of wood cellulose by diluted mineral acids is used in industry and demands low expenses of acid catalyst. However, the low-temperature hydrolysis of cellulose by concentrated acids has its own advantages: a higher yield of glucose and the possibility to carry out the hydrolysis process at atmospheric pressure. The data on the hydrolysis of cellulose from aspen wood with concentrated sulfuric acid are presented in Table 1. The obtained solutions contain no C5 sugars, which inhibit the fermentation of glucose to alcohols, lactic acid, polyhydroxyalkanoates, etc. An effective method of lignin depolymerization and hemicellulose removal from wood is the short-time treatment of wood with overheated water steam followed by a fast pressure drop (exploded autohydrolysis) [5]. A significant reduction of the hemicellulose concentration was observed after aspen wood treatment at 220 °C under the conditions of exploded autohydrolysis (Figure 2). Figure 2. Content of hemicelluloses in aspen wood autohydrolyzed at 187 °C (A) and 220 °C (B). The same results were obtained after treatment of birch wood, abies wood and larch wood. The optimal conditions of wood autohydrolysis were selected (220-230 °C, 3 min), which allow a glucose solution with a C5-sugar concentration of less than 1 % to be obtained. C5 sugars are completely absent in the glucose solutions obtained via the stages of catalytic oxidation of autohydrolysed wood to cellulose and subsequent cellulose hydrolysis by concentrated H2SO4. The other valuable product of the acid-catalyzed conversion of carbohydrates is levulinic acid (LA). High-temperature and low-temperature methods are used for LA synthesis. In the presence of a sulfuric acid catalyst, the highest yield of LA from cellulose reaches 35-40 % mas. at 240 °C. The relative reactivity of different carbohydrates at 98 °C decreases in the order: fructose > sucrose > inulin > glucose > cellulose. Glucose is 20-40 times less reactive than fructose, and cellulose is 30 times less reactive than glucose. In the conversion of glucose to LA, hydrochloric acid is twelve times more active than sulfuric acid, but the selectivities of LA formation are practically the same in both cases. In the presence of a phosphoric acid catalyst, the maximum yield of levulinic acid does not exceed 5 mol. %, and the rate of the process is low. The lactone of LA (α-angelicalactone) is an attractive substance for the synthesis of new types of biodegradable polymeric materials. There are two possible ways of angelicalactone polymerization: opening of the olefinic bond with formation of a polyfuranone, and opening of the lactone ring with formation of a polyester.
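As a rough plausibility check on the 35-40 % mass yield quoted above (this is a back-of-the-envelope calculation, not a figure from the paper), the stoichiometric ceiling for levulinic acid from cellulose follows from one anhydroglucose unit giving at most one molecule each of levulinic and formic acid:

```python
# Stoichiometric ceiling for levulinic acid (LA) from cellulose:
#   (C6H10O5)n + n H2O -> n C6H12O6 (glucose)
#   C6H12O6 -> C5H8O3 (LA) + HCOOH + H2O
M_anhydroglucose = 162.14   # g/mol, cellulose repeat unit C6H10O5
M_glucose        = 180.16   # g/mol
M_levulinic      = 116.12   # g/mol, C5H8O3

max_yield_from_cellulose = M_levulinic / M_anhydroglucose   # ~0.716 (71.6 wt%)
max_yield_from_glucose   = M_levulinic / M_glucose          # ~0.645 (64.5 wt%)

observed = 0.375            # midpoint of the 35-40 wt% yield quoted in the text
fraction_of_theoretical = observed / max_yield_from_cellulose   # ~0.52

print(f"theoretical maximum from cellulose: {max_yield_from_cellulose:.1%}")
print(f"observed 35-40 wt% is roughly {fraction_of_theoretical:.0%} of theoretical")
```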
Polyesterification of angelicalactone is the most interesting route for producing new biologically compatible polymers [6]. Polyesters of α-angelicalactone were obtained with the use of alkali-based catalysts: sodium butoxide and NaOH. The obtained polymers are light-yellow resins or solids, insoluble in water and soluble in polar organic solvents. The molecular weight of the obtained polymers is up to 2000 amu. According to ¹H NMR spectroscopy data, the polymer of α-angelicalactone is a polyester with a "head-to-tail" structure. The fraction of polyester intermonomeric bonds in the obtained polymerization products reaches 68-80 %. The obtained polymers undergo complete biodegradation by the microorganism Saccharomyces cerevisiae within 5-15 days, and by Streptomyces lividans and Streptomyces chrysomallus within 20-30 days. For producing nanoporous carbons from various types of carbon-containing raw materials, different methods of alkaline thermal activation are used [7]. It was found that molten sodium hydroxide and potassium hydroxide promote a significant development of the nanoporous structure of carbon materials obtained from lignin and cellulose. Such characteristics of the obtained nanoporous carbons as the specific surface area and the volume and size of the pores depend on the nature of the initial raw material and alkali, the raw material/alkali ratio and the temperature of the thermal treatment. The specific surface area of nanoporous carbons obtained by alkaline thermal activation of cellulose and lignin in molten KOH goes through a maximum with increasing KOH/raw material ratio (Fig. 3). The maximal surface area of carbons from cellulose (1170 m²/g) corresponds to a KOH content in the mixture of 67 % mas. In the case of lignin, the maximal surface area of the obtained carbons (2035 m²/g) corresponds to a KOH content of 75 % mas. Nanoporous carbon materials obtained by alkaline thermal activation of cellulose and lignin have a high sorption activity towards hydrogen (sorption capacity 3 % mas. at 77 K and 5 MPa H2) and towards volatile organic compounds (hexane, CHCl3, benzene, butanol, etc.). They are also able to separate He-CH4 and H2-CH4 mixtures (separation factor 3.6-3.8). Conclusion As a result of the accomplished study, an integrated process of wood biomass conversion to cellulose and low-molecular-mass lignin with their subsequent transformation to glucose, levulinic acid, biodegradable polymers and nanoporous carbons was developed. For each stage of the developed integrated process, optimal reaction conditions were selected which allow the target products to be obtained in high yield and with the required characteristics.
Urban environment as an independent predictor of insulin resistance in a South Asian population Background Developing countries, such as India, are experiencing rapid urbanization, which may have a major impact on the environment, including worsening air and water quality, noise and the problems of waste disposal. We used health data from an ongoing cohort study based in southern India to examine the relationship between the urban environment and homeostasis model assessment of insulin resistance (HOMA-IR). Methods We utilized three metrics of urbanization: distance from urban center; population density in the India Census; and satellite-based land cover. Restricted to participants without diabetes (N = 6350), we built logistic regression models adjusted for traditional risk factors to test the association between urban environment and HOMA-IR. Results In adjusted models, residing within 0-20 km of the urban center was associated with an odds ratio for HOMA-IR of 1.79 (95% CI 1.39, 2.29) for females and 2.30 (95% CI 1.64, 3.22) for males compared to residing in the furthest 61-80 km distance group. Similar statistically significant results were identified using the other metrics. Conclusions We identified associations between urban environment and HOMA-IR in a cohort of adults. These associations were robust using various metrics of urbanization and adjustment for individual predictors. Our results are of public health concern due to the global movement of large numbers of people from rural to urban areas and the already large burden of diabetes. Electronic supplementary material The online version of this article (10.1186/s12942-019-0169-9) contains supplementary material, which is available to authorized users. Background Currently, 54% of the world's population lives in urban areas, a proportion that is expected to increase to 66% by 2050 [1]. Most of the expected urban growth will take place in developing countries in Asia and Africa. Next to China, the world's second largest urban population resides in India, with approximately 410 million people, and this number is projected to double by 2050 [1]. India had over 69.2 million people living with diabetes in 2015, and this number is expected to grow to 123.5 million by 2040 [2,3]. In India, urban compared to rural populations have significantly higher diabetes prevalence [4,5]. Studies have shown that urbanization in India is associated with increased consumption of energy-rich foods and a decrease in energy expenditure (through less physical activity), leading to obesity and increased risk of developing type 2 diabetes mellitus (diabetes) and other cardiometabolic conditions [5-8]. Rapid urbanization in India also often coincides with increased environmental pollution, with potential harmful effects on health due to undesirable changes in the physical, chemical or biological characteristics of air, water or land [9]. Emerging epidemiologic data suggest that environmental pollutants could be a risk factor for diabetes [10,11]. In a study in Chennai, the overall diabetes prevalence increased from 11.6% in 1995 to 13.9% in 2000 [15]. Chennai, located in the rapidly urbanizing southern state of Tamil Nadu, is the fourth largest metropolitan city in India. Subsequent studies of adults over 20 years old in Chennai showed that the prevalence of diabetes increased from 14.3% in 2003-2004 to 18.6% in 2006 [3,4].
In a more recent study (2010-2011) the age standardized prevalence of diabetes in Chennai was 22.8% (95% CI 21.5-24.1%) [12]. These results indicate a rapid increase in prevalence of diabetes in Chennai city in recent decades [3,4,12,13]. In a comparative study, residents in Chennai had lower BMI and waist circumference (WC) measurements than Asian Indians living in the U.S., but still had a higher prevalence of diabetes even at normal levels of BMI [14]. Adjustment for age, sex, WC, and systolic blood pressure did not fully explain differences in the odds of diabetes between the two groups suggesting that factors besides age and central adiposity play a role in diabetes development. There is minimal data on urban environmental degradation and risk for diabetes in developing countries such as India [9]. Insulin resistance, which is a reduction in the cellular response to endogenous insulin, is a powerful predictor of future development for diabetes [15]. Studies have shown links between insulin resistance and various chemicals, such as phthalates and bisphenol A (BPA), found in polluted environments [16][17][18]. Animal and recent epidemiological studies have reported that air pollutants, such as, nitrogen dioxide (NO 2 ) and PM 2.5 may affect insulin sensitivity [19][20][21]. Given the high levels of environmental pollutants in India, it is plausible that some of these pollutants could be factors within the urban environment contributing to increased diabetes risk [5]. In the current study, while controlling for traditional risk factors, we examine the cross sectional association between insulin resistance and measures of urban environment defined using the following metrics: (1) distance from urban center; (2) population density in the India Census; and (3) satellite-derived land cover type. We used health data from the Population Study of Urban, Rural, and Semi-urban Regions for the Detection of Endovascular Disease and Prevalence of Risk Factors and Holistic Intervention Study (PURSE-HIS) in a population recruited from Chennai and surrounding areas [22]. Methods The PURSE-HIS was designed and implemented to understand the prevalence and progression of subclinical and overt cardiovascular disease (CVD) and its risk factors in urban, semi-urban, and rural communities in southern India. Detailed methodology has been published elsewhere [22]. Briefly, Chennai served as the primary location from which the urban study population was recruited. The semi-urban and rural areas were near Chennai in the Thiruvallur and Kanchipuram districts, respectively. A total of 8080 participants over 20 years of age were recruited between 2009 and 2011 from urban (N = 2221), semi-urban (N = 2821) and rural (N = 3038) areas. A two stage cluster sampling method was used to ensure adequate spatial variability amongst administrative divisions. After excluding participants with a previous history of diabetes or newly diagnosed diabetes (a fasting blood glucose ≥ 126 mg/dL or a 2-h oral glucose tolerance test (OGTT) ≥ 200 mg/dL) our sample size was 6350; which included 3670 females and 2680 males. Questionnaire and clinical data collection An interviewer-administered questionnaire was used to collect data on demographics, CVD and its risk factors [22]. Physical activity was measured by a physiotherapist using the Global Physical Activity Questionnaire [23] and a score was calculated. A clinical psychologist assessed the level of stress and anxiety levels using the Presumptive Stressful Life Event Scale [24]. 
A socioeconomic (SE) score was computed based on a revision of the Kuppuswamy classification scale [25,26]. Kuppuswamy's SE score was originally proposed in 1976 and was built for the Indian population, combining values for education, occupation and income to create a robust estimate of standard of living. Participants are categorized into lower, middle and upper classes. Energy (food) intake was assessed from a 24-h recall of meals and a food frequency questionnaire [27]. Body mass index (BMI) was calculated by dividing the participant's measured weight in kilograms by the square of height in meters. Fasting blood specimens were collected and assayed for fasting blood sugar (FBS) and fasting insulin levels [22]. Homeostasis model assessment of insulin resistance (HOMA-IR) was calculated as fasting plasma insulin (mU/L) × FBS (mmol/L)/22.5. Since a diagnostic test for insulin resistance does not exist, insulin resistance was defined as a HOMA-IR level above the 75th percentile, as previously defined in multiple cohort and epidemiological studies [28]. Geo-location and creation of urbanization metrics We defined urbanization using three different metrics: distance from urban center (Chennai), land cover type, and census community designation. Residential addresses of study participants were geolocated to the nearest road or intersection through manual assignment by a single researcher using Google Earth© over the study area, which spanned a geographic region of approximately 80 km by 80 km (Fig. 1). For quality control purposes, the Google Earth© location identification process was repeated by a second researcher with 100 randomly selected participants to examine potential positional error. We found For the first metric, the urban center of the study region was defined as the flag post on the ramparts of the Fort Saint George historic landmark in Chennai, in accordance with historical and local custom. Residential location KML files were imported into ArcGIS v10.1 (ESRI, Redlands, CA) to calculate the distance, in kilometers, and compass angle from the urban center for each participant using the Near Tool. The second metric utilized land cover data (MCD12Q1, NASA) obtained through the online Data Pool at the NASA Land Processes Distributed Active Archive Center. The values were derived from Terra and Aqua-MODIS land cover data products, which provided yearly averages [29]. The data presented are from the year 2010 and have 500 m × 500 m resolution. We based our groups on the 17 land cover classifications of the International Geosphere Biosphere Program Plant Functional Scheme, which together have 72-77% classification accuracy [30]. We mapped the classifications as five distinct groups, which included urban, trees/shrubs, grass, crops, and other (Fig. 1). (The figure also includes water for illustration; however, no participants resided in these grids.) We then aggregated all the non-urban land classifications into a single group, which we designated as rural, and the remaining groups as urban. The third metric, census community designation, was based on data from the 2011 India census. Participants residing in urban areas were those living in a municipality with a total population of at least 5000 and a population density of 400 persons/km² or more. Those residing in municipalities with smaller populations or densities were designated as non-urban [31].
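The HOMA-IR definition and the 75th-percentile cut-off used in this study can be expressed in a few lines. The sketch below is illustrative, with hypothetical column names and an assumed mg/dL-to-mmol/L conversion for glucose (the paper reports FBS in mg/dL but uses mmol/L in the HOMA-IR formula); it is not the study's analysis code.

```python
import pandas as pd

def add_homa_ir(df, insulin_col="fasting_insulin_mU_L", glucose_col="fbs_mg_dL"):
    """Compute HOMA-IR = fasting insulin (mU/L) x fasting glucose (mmol/L) / 22.5
    and flag insulin resistance as HOMA-IR above the cohort's 75th percentile."""
    df = df.copy()
    glucose_mmol_L = df[glucose_col] / 18.0          # convert mg/dL to mmol/L
    df["homa_ir"] = df[insulin_col] * glucose_mmol_L / 22.5
    df["insulin_resistant"] = df["homa_ir"] > df["homa_ir"].quantile(0.75)
    return df

# Example with hypothetical values:
# df = pd.DataFrame({"fasting_insulin_mU_L": [8.0, 15.0], "fbs_mg_dL": [90.0, 105.0]})
# add_homa_ir(df)[["homa_ir", "insulin_resistant"]]
```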
Statistical analyses We first evaluated descriptive statistics for population characteristics according to urban designation and separated by gender. Analysis of variance was used to check for significant differences on mean scores in both genders between rural and urban residents. Analysis of variance was also used to test for significant differences in HOMA-IR scores according to categories of age (≤ 39 or ≥ 40), body mass index (non-obese [BMI ≤ 24.9] or obese [BMI ≥ 25]), physical activity (low, moderate or high), SE score (lower, middle or upper) and smoking (smoker or non-smoker). We ran logistic regression models to evaluate the association between each urbanization metric and the odds of having a HOMA-IR level in the fourth quartile of the distribution. Logistic regression models were adjusted for age, BMI, physical activity, energy intake, SE score and smoking in separate models for males and females. However, no adjustment for smoking was made in the models for females due to the very low prevalence of smoking. We evaluated effect modification, in models with distance to urban center as the exposure, by stratifying on categories of age, smoking status, BMI and physical activity. Potential modifiers were removed as covariates in the model as appropriate when evaluating modification. The standardized coefficients and 95% confidence intervals were multiplied by the interquartile range (IQR) (i.e. 32.6 km). We also conducted a sensitivity analysis that substituted WC for BMI in adjusted regression models. Results Population characteristics are given in Table 1. The mean age for females was 40 years and for males was 45 years. In both females and males, when compared to the rural population, the urban population had a significantly higher energy intake, SE score, stress score, insulin level and HOMA-IR. The urban population also had a higher BMI and was less physically active. The prevalence of smoking was higher in rural males compared to urban males and despite an overall low prevalence, smoking was higher among urban females compared to rural females. Table 2 shows HOMA-IR levels stratified by demographic and urbanization variables. The overall mean HOMA-IR levels were 1.98 ± 1.61 for females and 1.71 ± 1.39 for males. HOMA-IR levels in both females and males were significantly higher in sub-populations with low and moderate physical activity compared to high physical activity. HOMA-IR levels were significantly higher among non-smoking males compared to smoking males and also higher for participants who were obese. There were no statistically significant differences by age, however, mean HOMA-IR was slightly higher for older females and younger males. HOMA-IR levels were higher for residents of urban areas compared to non-urban. Mean HOMA-IR levels were 2.69 ± 2.44 for females and 2.39 ± 2.29 for males living within 0-20 km from city center. This was statistically significantly higher than their counterparts living at a greater distance. Urban designation compared to rural according to land area was statistically significantly higher among females and males. Similarly, census derived urban residence was statistically significantly higher for females and males compared to rural populations. Results of unadjusted and adjusted logistic regression models of HOMA-IR are given in Table 3. Adjustment for potential confounders resulted in an attenuation of the effect in most cases. 
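The adjusted models described above are standard logistic regressions; a minimal sketch using statsmodels is given below. The formula, variable names, and reference category are illustrative assumptions about how such a model could be specified, not the study's actual code.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_adjusted_model(df):
    """Adjusted logistic regression for top-quartile HOMA-IR vs. distance band.

    Assumes df has columns: insulin_resistant (0/1), dist_band (categorical,
    with the farthest 61-80 km band as reference), age, bmi, activity,
    energy_intake, se_score, smoker. All names are illustrative.
    """
    formula = ("insulin_resistant ~ C(dist_band, Treatment(reference='61-80')) "
               "+ age + bmi + C(activity) + energy_intake + C(se_score) + smoker")
    fit = smf.logit(formula, data=df).fit(disp=0)
    odds_ratios = np.exp(fit.params)        # coefficients -> odds ratios
    conf_int = np.exp(fit.conf_int())       # 95% CIs on the OR scale
    return odds_ratios, conf_int

# Sex-stratified fits, dropping the smoking term for females as in the text:
# or_male, ci_male = fit_adjusted_model(df[df["sex"] == "male"])
```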
The OR for high HOMA-IR appeared to increase as the distance to the urban center decreased. Changes in HOMA-IR are estimated for an IQR increase in distance to urban center (32.6 km). We also examined the modification of the association between distance to urban center and HOMA-IR by age, smoking, BMI, physical activity and energy intake in sex-stratified multivariate models (Additional file 2: Figure 1). Results show a significant increase in HOMA-IR the closer participants resided to the urban center of Chennai, with 0.19 mg/dL (95% CI 0.13, 0.25) and 0.16 mg/dL (95% CI 0.09, 0.22) changes in HOMA-IR per IQR of distance in females and males, respectively. (Table 2, Mean HOMA-IR (mg/dL) stratified by demographic and urbanization variables, notes statistically significant differences between non-obese and obese participants (p < 0.01) in both sexes; between the low/moderate and high physical activity groups (p < 0.05); between smokers and non-smokers among males (p < 0.05); between distance categories (p < 0.05 to p < 0.001); and between urban and non-urban as well as rural and urban designations (p < 0.001) in both sexes.) The estimated effect of distance on HOMA-IR was larger among obese than non-obese males. However, effects between obese and non-obese females were similar. In both males and females there was a greater effect of distance on HOMA-IR for participants reporting moderate and low physical activity compared to high physical activity, although there was a high degree of overlap in confidence intervals. A sensitivity analysis was conducted that replaced BMI with WC in the adjusted logistic regression models (Additional file 1: Table 1). The models adjusting for WC had a significantly lower magnitude of association with an IQR distance from the urban center in males (− 0.09 mg/dL [95% CI − 0.16, − 0.013]) than the models adjusting for BMI, but the association was not significantly different in females (− 0.16 mg/dL [95% CI − 0.22, − 0.09]). Discussion In a population-based representative sample of adults in India without diabetes we investigated the association between residing in an urban environment and insulin resistance, which is an important underlying metabolic condition predisposing to the development of diabetes [32,33]. After controlling for age, BMI, energy intake, SE score, physical activity, stress and smoking status, there were independent associations between multiple metrics of urban environment and HOMA-IR. Those residing in urban areas as defined by land cover and census category had higher HOMA-IR levels than those in rural or non-urban areas. The largest increase was found for participants living within 20 km of the city center. In multivariate models there were gender-specific differences in the effect of age and obesity on the association between distance from the urban center and HOMA-IR, such that the association was more pronounced in younger females and among obese males. Previous studies in young populations suggest that girls are intrinsically more insulin resistant [34].
Further, reports show that type 2 diabetes in younger populations show a female preponderance [35][36][37]. However, at older ages with increases in BMI, there is a greater amounts of visceral and hepatic adipose tissue in males, when compared with females, which contributes to higher insulin resistance in males [38]. These findings are consistent with the greater effect modification of the relationship between distance from urban center and HOMA-IR among younger females and obese males that we found. A small number of studies have evaluated the impact of urbanization on insulin resistance in varied locations across the globe. A higher prevalence of insulin resistance was identified in Floresian men (a specific ethnic group in Indonesia) that had moved to an urban center (Jakarta) compared to men than remained in the rural area [39]. Similarly, another study revealed statistically significant higher HOMA-IR in Ghanaian adults living in urban areas compared to rural areas [40]. Due to urbanization in India, environmental degradation has been occurring very rapidly resulting in poor water quality, air pollution, noise, dust and heat, as well as problems with disposal of solid and hazardous wastes [9]. Thirteen of the world's 20 cities with the highest levels of particulate matter less than 2.5 μm in aerodynamic diameter (PM 2.5 ) are located within India. Significant sources of air pollution in India include motor vehicles, electricity generation, manufacturing, construction and road dust, which have increased in India's cities in recent years along with the rapid growth in industry, power and transportation [41]. Air pollution, specifically PM 2.5 and nitrogen dioxide, have achieved recent attention given associations with diabetes and insulin resistance in multiple studies [10,20,42,43]. Proposed mechanisms for these effects include oxidative stress; endothelial dysfunction; overactivity of the sympathetic nervous system; changes in immune response in visceral adipose tissues; and altered insulin sensitivity and glucose metabolism [42,43]. Other chemicals such as persistent organic pollutants and endocrine disruptors have also been associated with diabetes [11]. These chemicals may act as antagonists or agonists to endogenous hormones necessary to maintain homeostasis or affect normal functioning of mitochondria [44]. However, it is important to note the spatial variation and temporality between increased pollution and this health effect, because there may be a lag between further degradation of the environment and diagnosis of adverse effects. Understanding the relationship between pollution and insulin resistance will require a more detailed analysis of these temporal trends. Other contributors to this association are also possible including access to qualified medical care. Past research has found that although there is a greater concentration of medical workers in urban areas a large proportion of those practitioners are also unqualified [45]. Diet is a key factor in insulin resistance and evidence of differences comparing urban and rural populations is mixed. One study reported similar fruit and vegetable intake among both populations [46], another reported high intake of fruits and vegetables, along with higher intake of carbohydrates, meat and dairy for urban populations [47]. One of the strengths of our study is the use of three metrics to test the associations between urban environment and HOMA-IR. 
Land cover classification allowed us to reduce exposure misclassification by identifying smaller or developing urban enclaves outside of the city center. For example, we could identify a rapidly urbanizing municipality approximately 65 km southwest from the urban center ( Fig. 1) that was classified as rural in the India Census data. Female participants residing in this second urban cluster had a mean HOMA-IR of 1.83 mg/ dL (SD: 2.17 mg/dL), which was significantly higher than female participants residing in the same distance interval (60-80 km distance group), who had a mean HOMA-IR of 1.57 mg/dL (SD: 0.97 mg/dl). It is possible that the second urban cluster introduced exposure misclassification in the multivariate analysis based on Census classification resulting in a null effect (Table 3). However, examining three metrics reveals an overall commonality of the association between urban residence and higher HOMA-IR. Our study also has several limitations. Although the land cover data are able to identify rapidly developing urban areas we must compare data for temporally close, but different, years. This will result in some error, which we have sought to address by gathering data on exposure before outcome. Also, the land cover classifications do not differentiate specific land uses within urban areas. There are likely to be differential exposures comparing residential versus industrial land use that may be important to the outcome. Another potential limitation is our geocoding method. Geocoding participant locations can be difficult in rapidly developing regions in India without reliable address network systems and Global Position System (GPS) ascertainment is not viable with large sample populations. Exposure misclassification from positional error could affect our analysis at the edges of our distance interval cut points, as well as with the 500 m × 500 m MODIS land cover grids. Nevertheless, in a subset of participants for whom we compared the geocoded location to the location recorded from a GPS, the mean difference was 0.19 km which is relatively small compared to the 20 km distance categories used in our main analysis. We would anticipate this error to be nondifferential with respect to our outcome and therefore would be expected to bias results towards the null. We found that adjustment for WC instead of BMI resulted in an attenuation of effects among males. This indicates possible residual confounding when using BMI as the measure for adiposity, which may not adequately account for fat distribution. Finally, the study design of our analysis was cross-sectional. We are therefore limited in our ability to evaluate temporality with regard to urban expansion and the effect of urbanization on HOMA-IR. Future work with this cohort may allow us to draw stronger conclusions about which aspects of the urban environment may be most important to the association with HOMA-IR and whether there is a causal association. Future analyses could consider ambient and household air pollution, which are often pervasive, persistent and exist at higher concentrations in urban areas of India [9]. Conclusion We have identified independent associations between the urban environment and insulin resistance in a cohort of adults in Southern India. The association was robust using various matrices of urbanization and adjustment
2019-02-14T21:37:25.931Z
2019-02-12T00:00:00.000
{ "year": 2019, "sha1": "a670da0ea78a089c0b30a8e8e01b91563eff8811", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12942-019-0169-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a670da0ea78a089c0b30a8e8e01b91563eff8811", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
269246028
pes2o/s2orc
v3-fos-license
Phosphorylation-Driven Epichaperome Assembly: A Critical Regulator of Cellular Adaptability and Proliferation The intricate protein-chaperone network is vital for cellular function. Recent discoveries have unveiled the existence of specialized chaperone complexes called epichaperomes, protein assemblies orchestrating the reconfiguration of protein-protein interaction networks, enhancing cellular adaptability and proliferation. This study delves into the structural and regulatory aspects of epichaperomes, with a particular emphasis on the significance of post-translational modifications in shaping their formation and function. A central finding of this investigation is the identification of specific PTMs on HSP90, particularly at residues Ser226 and Ser255 situated within an intrinsically disordered region, as critical determinants in epichaperome assembly. Our data demonstrate that the phosphorylation of these serine residues enhances HSP90’s interaction with other chaperones and co-chaperones, creating a microenvironment conducive to epichaperome formation. Furthermore, this study establishes a direct link between epichaperome function and cellular physiology, especially in contexts where robust proliferation and adaptive behavior are essential, such as cancer and stem cell maintenance. These findings not only provide mechanistic insights but also hold promise for the development of novel therapeutic strategies targeting chaperone complexes in diseases characterized by epichaperome dysregulation, bridging the gap between fundamental research and precision medicine. Introduction Conventional wisdom, as crystallized in Beadle and Tatum's 1941 paradigm of "one gene-one enzyme-one function," has traditionally delineated targets as outcomes of protein expression changes or point mutations within proteins.However, it is increasingly apparent that protein dysfunctions in the context of many disorders, including cancer, neurodegenerative disorders, among others, are predominantly shaped by changes in interaction strengths and cellular mislocalization.These factors, in turn, can be modulated by variations in post-translational modifications (PTMs), stabilization of disease-associated protein conformations, and other protein-modifying mechanisms 1,2 .Within this complex context, Heat Shock Protein 90 (HSP90) emerges as a compelling exemplar, transcending the boundaries of conventional understanding 3 . 
Positioned as a versatile chaperone, often referred to as the guardian of the proteome, HSP90 assumes a pivotal task in the realm of maintaining cellular equilibrium by facilitating protein folding, stabilization, and degradation 4 .Under the canonical folding paradigm, HSP90 functions as a homodimer.Each protomer is composed of an N-terminal domain (NTD), a middle domain (MD), and a C-terminal dimerization domain (CTD) 4,5 .The NTD contains a nucleotide binding pocket, where ATP binding and hydrolysis takes place 6 .The chaperone cycle of HSP90 is coupled to a series of dynamic conformational changes accompanying its ATPase cycle.Beginning with NTD/MD and MD/CTD interdomain rotations and cross-monomer dimerization 7 , HSP90 transitions from open to closed conformational states, while folding client proteins 8,9 .HSP70 and HOP (HSP70-HSP90 organizing protein) bring client proteins to HSP90 and form the loading complex 10 .Other co-chaperones participate at different stages of the HSP90 chaperone cycle and regulate its conformational changes along the chaperone and ATPase cycle 4 .Co-chaperones may have different preferences for client proteins, fine-tuning subcellular pools of HSP90 to mitigate stressors and maintain proteostasis 11 .These assemblies are further shaped by PTMs in HSP90, co-chaperones and client proteins 12 .Overall, the highly orchestrated interactions among these proteins -both chaperones and clients -are transient in the chaperone cycle under physiological conditions.While this classical understanding portrays HSP90 as a dimeric entity that interacts dynamically with co-chaperones and client proteins, research has uncovered a spectrum of multimeric HSP90 forms, each sculpted by the cellular milieu and the presence of stress-inducing factors 3 .These multimers, whether homo-oligomeric or hetero-oligomeric, expand HSP90's functional repertoire, blurring the boundaries between traditional chaperone functions and newfound roles as holdases or scaffold proteins.In disease contexts, such as cancer and neurodegenerative disorders, HSP90's conformational adaptability gives rise to epichaperomes-distinctive hetero-oligomeric formations of tightly bound chaperone, co-chaperones and other factors [13][14][15] .This phenomenon goes beyond mere biochemical curiosity; it represents a fundamental mechanism by which cells respond to stressors, whether of genetic, proteotoxic or environmental nature 3,[16][17][18] .Unlike chaperones which help proteins fold or assemble, epichaperomes exert a maladaptive influence, reshaping the assembly and connectivity of proteins pivotal for sustaining pathological traits.For example, in cancer, epichaperomes take on scaffolding functions not found in normal cells, altering the assembly and connectivity of proteins important for maintaining a malignant phenotype and enhancing their activity, which provides a survival advantage to cancer cells and tumor-supporting cells 13,19 .In Alzheimer's disease epichaperomes rewire the connectivity of, and thus negatively impact, proteins integral for synaptic plasticity, brain energetics and immune response 15 . 
The revelation of HSP90's maladaptive multimeric epichaperomes has also profound implications for therapeutic interventions, including in the treatment of diverse disease states including cancers and of neurodegenerative disorders.Rather than a blanket inhibition of all HSP90 pools, targeting specific pathologic conformations of HSP90 as found in epichaperomes while sparing normal HSP90 functions holds the promise of enhancing the safety as well as the immunostimulatory and anticancer effects of HSP90 inhibitors 3 . Despite these important mechanistic and therapeutic implications, key factors facilitating HSP90 incorporation in epichaperomes -namely the conformations that enable epichaperome formation and structural elements that support the enrichment of such conformation -remain unknown.In this study, we use a combination of chemical biology and unbiased mass spectrometry techniques to elucidate the conformation of HSP90 populated in epichaperomes and to characterize molecular factors that support and favor the enrichment of such conformation.Beyond structural revelations, our findings demonstrate how these factors directly influences cellular behaviors, particularly in contexts where robust proliferation and adaptation are crucial, such as cancer and stem cell maintenance.This direct link between epichaperome function and fundamental cellular processes has translational relevance for therapeutic development. Pluripotent stem cells and cancer cells share epichaperomes Epichaperomes nucleated through enhanced interactions between HSP90 and HSP70, namely the heat shock cognate 70 (HSC70) isoform, are a distinct feature of cancer cells 13,19 .Epichaperomes containing HSP90 are detected in iPSCs (induced pluripotent stem cells) 20 , in leukemia stem cells 21,22 and in glioma cancer stem cells (CSCs) 23 .Hyperactivation of the transcription factor c-MYC required in generating iPSCs 24 , maintaining embryonic stem cells (ESCs) 25 and CSCs 26 , is also a driving factor in epichaperome formation in tumors, irrespective of the tumor type 13,27 .Notably, these epichaperomes are all sensitive to and can be disrupted by small molecules such as PU-H71 (zelavespib) or PU-AD (icapamespib) that bind to HSP90 13,23,28 , suggesting that a similar composition, facilitated by a specific conformation of HSP90, may characterize epichaperomes in these distinct cellular contexts. To test this hypothesis, we initially explored the composition of epichaperomes in selected cellular models, encompassing pluripotent stem cells and cancer cells.For pluripotent stem cells, we examined two mouse embryonic stem cell lines (E14 and ZHBTc4) and a human induced pluripotent cell line (hiPSC).Additionally, two cancer cell lines, well-characterized in terms of epichaperome composition and function, were chosen as representative epichaperome-positive (MDA-MB-468) and -negative/low (ASPC1) cancer cells (Fig. 1a-f and Supplementary Figs.1,2). 
In contrast to folding chaperone complexes, which are inherently dynamic and short-lived 6 , epichaperomes represent long-lasting heterooligomeric assemblies composed of tightly associated chaperones, co-chaperones, and various other factors.HSP90 is a major component found within epichaperomes along with other chaperones, co-chaperones, and scaffolding proteins like HSP70 (especially HSC70), CDC37, AHA1, and HOP 13 .Consequently, when we analyzed cell homogenates containing epichaperomes using native PAGE followed by immunoblotting with antibodies specific to epichaperome constituent chaperones and cochaperones, we observed a range of high-molecular-weight species, both distinct and indistinct, in addition to the primary band(s) characteristic of chaperones.This observation held true for both pluripotent stem cells and cancer cells (Fig. 1b, Supplementary Fig. 1a-d and refs. 13,19,20).Notably, HSP90 immunoblotting revealed the presence of species comprising HSP90 in epichaperome assemblies in cancer cells and pluripotent stem cells, in addition to the prominent 242 kDa band, which is a characteristic of non-transformed cells 13,19,29 . Epichaperomes undergo disassembly during iPSC differentiation 20 or when cancer cells are treated with PU-H71 or PU-AD 15,23,28,30 .Therefore, next we induced the differentiation of the pluripotent stem cells under investigation.In the ZHBTc4 cell line, Oct4 expression is controlled by a Tet (tetracycline)-off oct4 regulatory system 31 .Down-regulation of Oct4 in ZHBTc4 cells has been reported to induce trophoblast differentiation, which is characterized by changes in cell morphology, specifically, cells flattening into epithelial-like cells, and is associated with slower growth 32 .Mouse embryonic E14 stem cells undergo spontaneous differentiation into embryoid bodies when cultured in suspension without antidifferentiation factors such as leukemia inhibitory factor 33 and induced pluripotent stem cells differentiate into mature dopaminergic neurons using a floor-plate based differentiation protocol 34 .We confirmed that differentiation of these pluripotent stem cells was correlated with the disassembly of epichaperomes, as observed through native PAGE immunoblotting.This disassembly is evident by a reduction in high-molecular-weight chaperone species on native PAGE observed when immunoblotting for epichaperome constituent chaperones (see HSP90α/β, HOP, HSC70, CDC37, AHA1, HSP110 in Fig. 1b and Supplementary Fig. 1), with minimal changes observed in total chaperone levels on SDS PAGE.Notably, for HSP90, a decrease in bands other than those in the ~242 kDa range was observed upon differentiation, supportive of epichaperome disassembly (see HSP90 immunoblotting).PU-H71 serves as an epichaperome probe that, in contrast to the tested antibodies which indiscriminately detect epichaperomes and other HSP90 pools, exhibits a preference for HSP90 when it is integrated into epichaperomes 13 .Labeled derivatives of PU-H71 can, therefore, be employed to detect HSP90 within epichaperomes, distinguishing it from other HSP90 pools (as illustrated in Fig. 1c and Supplementary Fig. 2a-c).To achieve this, we generated lysates from ZHBTc4, E14 cells, and MDA-MB-468 cells under conditions that preserve native protein assemblies.Subsequently, we labeled these homogenates with a clickable PU-probe (PU-TCO, ref. 
19).After running these labeled samples on native PAGE gels, we conjugated the PU-probe with a Cy5 dye and visualized epichaperomes, confirming the presence of epichaperomes in both the ESCs and the cancer cells.These epichaperomes were characterized by multimers observed at and above ~300 kDa (Fig. 1c).Moreover, the labeling of epichaperomes by the PU-probe decreased upon ESC differentiation, supportive of epichaperome disassembly (Fig. 1c and Supplementary Fig. 2b). Additionally, we conducted labeling experiments using live E14 ESCs, instead of homogenates, employing a PU-CW800 probe (a derivative of PU-H71 conjugated with an 800 nm near-infrared dye) or a control derivative (an inactive PU-derivative that does not interact with epichaperomes) (see Supplementary Note 1).The most responsive target of the PU-probes, but not the control probe, was an HSP90 assembly of approximately 300 kDa, thus above the major 242 kDa band preferred by the anti-HSP90 antibody.This species was detected on Native-PAGE in PU-probe treated cell lysates but not in control treated cell lysates (Supplementary Fig. 2c). In summary, the predominant HSP90 band characteristic of epichaperomes is a 300 kDa assembly, distinctly differing from the typical ~242 kDa band observed in non-transformed cells 13,19,32 when analyzed on native PAGE gels.Mass spectrometric (MS) analysis of the ~300 kDa assembly confirmed the presence of HSP90 and HSC70 as the primary protein components of this multimeric complex (Supplementary Data 1, 300 kDa LC-MS).This finding aligns with the composition of core epichaperome complexes previously reported in cancer cells 13 .Consequently, these findings combined confirm that both cancer cells and pluripotent stem cells share HSP90 and HSC70 as integral constituents of their core epichaperomes. To gain further insights into epichaperome assemblies, we employed resin-based affinity purification experiments.Specifically, we utilized resins with immobilized PU-H71, referred to as PU-beads, and an inert control molecule on control beads, following established procedures 13 (Fig. 1d).As an additional control, we employed a resin containing immobilized geldanamycin (GA), known for its ability to bind and isolate predominantly un-complexed HSP90 (GA-beads, Supplementary Fig. 3 and ref. 35 ).Subsequently, we subjected the protein cargo isolated by these probes to unbiased MS analysis.To precisely determine the protein components of the cargo, we conducted in-gel digestion of the entire gel lanes and employed liquid chromatography/mass spectrometry (LC-MS/MS) in conjunction with the semi-quantitative spectra-counting method 36,37 for the identification and quantification of cargo proteins (Supplementary Data 1). We observed that the cargo isolated by PU-beads from ESCs contained 26 of the 42 major chaperone and co-chaperones identified prior in cancer cells 13 as being epichaperome components (Fig. 1d).The interaction between PU-beads and epichaperomes was specific towards PU-H71, because control resins did not purify noticeable protein complexes.Similarly, GA-beads precipitated HSP90 but few co-purifying proteins and epichaperome components (Supplementary Fig. 3, Supplementary Data 1) consistent with previous results that GA isolates largely an un-complexed HSP90 38 . 
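For orientation, the sketch below shows one common way spectral counts are turned into relative protein abundances, namely length-normalized NSAF values. The protein list and most counts are placeholders (the HSP90β/HSP90α counts echo the values quoted for the PU-beads cargo later in the text), and the exact spectra-counting scheme used in the study (refs. 36,37) may differ in its details.

```python
# Minimal sketch of spectral-count-based semi-quantification (NSAF-style normalization).
counts = {  # protein -> (spectral counts, approximate protein length in residues)
    "HSP90AB1": (708, 724),  # count as quoted later in the text for the PU-beads cargo
    "HSP90AA1": (540, 732),  # count as quoted later in the text for the PU-beads cargo
    "HSPA8":    (610, 646),  # placeholder count (HSC70)
    "STIP1":    (150, 543),  # placeholder count (HOP)
    "AHSA1":    ( 45, 338),  # placeholder count (AHA1)
}

def nsaf(table):
    """Normalized spectral abundance factor: (SpC/length) / sum of (SpC/length) over all proteins."""
    saf = {p: c / l for p, (c, l) in table.items()}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

for protein, value in sorted(nsaf(counts).items(), key=lambda kv: -kv[1]):
    print(f"{protein:9s} NSAF = {value:.3f}")

# Raw count ratios can also serve as a rough relative measure,
# e.g. the HSP90beta/HSP90alpha comparison discussed in the text:
print("beta/alpha spectral count ratio:",
      round(counts["HSP90AB1"][0] / counts["HSP90AA1"][0], 2))
```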
In mammalian cells, HSP90 exists in two paralogs, HSP90α and HSP90β 39 , both of which have been reported to play roles in epichaperome formation in cancer cells 13 .To assess the isoform composition of HSP90 within epichaperomes, we exploited the subtle difference between one pair of isobaric peptides, namely 88Thr-Lys100 in HSP90α and 83Thr-Lys95 in HSP90β, where a single amino acid distinguishes them (Ile in HSP90α and Leu in HSP90β) (Supplementary Fig. 4a).The assignment of HSP90 isoforms relied on co-eluting peptides obtained from the isobaric peptide present in purified HSP90β (Supplementary Fig. 4b,c).Extracted ion chromatograms of the peptide mass revealed an approximate ~1.5 β/α ratio in the ESC lysate and the cargo isolated by PU-beads (Fig. 1e), while the GA-beads cargo exhibited a ~1.0 β/α ratio.Similar findings were obtained through spectra counting, with the HSP90β/HSP90α ratio determined using spectral counting consistent with ratios obtained through MS intensity calculations (Supplementary Data 1: 708/540 = 1.31 for the PU-beads cargo; 219/235 = 0.93 for the GA-beads cargo).This validation of spectra counting as an effective semi-quantitative method supports the conclusion that epichaperomes isolated from ESCs exhibit a predominantly unbiased HSP90 paralog composition, akin to what has been reported for cancer cells 13 . In summary, the wealth of complementary biochemical experiments presented here lends strong support to the idea that both cancer cells and pluripotent stem cells harbor epichaperomes that are compositionally similar.Notably, HSP90 and HSC70 emerge as major constituents of the core epichaperome structure, serving as a scaffold for recruiting various co-chaperones to create specific epichaperome assemblies.This shared architectural similarity between epichaperomes in ESCs and cancer cells underscores the existence of a common epichaperome-enabling HSP90 conformer that is enriched in both biological contexts. Epichaperome-enabling conformation of HSP90 MS identification of cross-linked residues that are in spatial proximity but not necessarily close in primary sequence, provides valuable distance restraints that can be employed for computational modeling of proteins and protein complexes [40][41][42] .Therefore, to determine the conformation of HSP90 in epichaperomes, we used a chemical cross-linking and mass spectrometry (CX-MS) approach to identify and quantify cross-linked peptides of PU-H71-favored HSP90 pools. To ensure the capture of the epichaperome-enabling conformation, we first cross-linked cellular lysates using the amine-reactive cross-linker DSS (disuccinimidyl suberate) prior to HSP90 capture on the PU-beads 13,35 (Fig. 2a).Parallel experiments were conducted using GA-beads, corresponding to solid-support immobilized GA, as a control 13,35 .The identity of cross-linked HSP90 peptides purified by PU-or GA-beads pull-down can be found in Supplementary Data 2. Notably, the alpha carbon distances between all cross-linked residues, as identified with high confidence, fell below the maximal span of DSS (30 Å).This suggests that proteins retained their native states without significant conformational perturbations during the cross-linking process. 
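The 30 Å limit mentioned above is what turns identified cross-links into usable distance restraints. Below is a minimal sketch of that check: it measures Cα–Cα distances for candidate lysine pairs and flags violations. The coordinates are invented placeholders; in practice they would be read from a structural model or MD snapshot (for instance, the open- and closed-state structures discussed here), with residue numbers mapped to that structure's numbering.

```python
import numpy as np

MAX_DSS_SPAN = 30.0   # approximate maximal Calpha-Calpha span bridged by DSS, in Å

# Hypothetical Calpha coordinates (Å); replace with coordinates extracted from a PDB/mmCIF file.
ca_coords = {
    "K58":  np.array([12.4,  3.1, 27.8]),
    "K112": np.array([25.9, 11.6, 35.2]),
    "K444": np.array([ 4.7, 40.2, 18.9]),
    "K616": np.array([21.5, 47.8, 30.1]),
}
crosslinked_pairs = [("K58", "K112"), ("K444", "K616")]

for a, b in crosslinked_pairs:
    d = float(np.linalg.norm(ca_coords[a] - ca_coords[b]))
    verdict = "compatible with DSS" if d <= MAX_DSS_SPAN else "violates DSS restraint"
    print(f"{a}-{b}: {d:5.1f} Å -> {verdict}")
```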
We calculated the cross-linking percentage for each pair of cross-linked PU-or GA-bound HSP90 residues.This calculation involved normalizing the MS ion intensity of cross-linked peptides by the sum of all cross-linked peptides and cross-linker-modified peptides containing the cross-linked residues.By doing so, we could mitigate the impact of variations in the reactivity of cross-linked residues, allowing us to primarily assess the influence of the distance between cross-linked residues and their local secondary structures 43 . Most cross-linked pairs from both PU-and GA-bound samples exhibited similar cross-linking percentages, with data points evenly distributed around a trend line with a slope of 1 (dotted line, Fig. 2b).This observation suggests a broad similarity in secondary and tertiary structures between these HSP90 populations.However, clear differences emerged, revealing conformational distinctions between the PU-and GA-favored HSP90 subpopulations (highlighted by orange circles, Fig. 2b). Notably, residues Lys58-Lys112 in HSP90α and Lys53-Lys107 in HSP90β, situated within the ligand-binding pocket, displayed a higher cross-linking percentage in PU-bound HSP90 populations compared to their GA-bound counterparts (Fig. 2b).This observation aligns with distinct pocket configurations preferred by each ligand, as previously observed through X-ray crystallography [44][45][46][47][48] .Specifically, crystal structures show the bulkier GA binds more superficially, causing helices 4 and 5 (Fig. 2d) to move away from the nucleotide binding site, thereby preventing full closure of the ATP lid.Moreover, the side-chain amino functional group of Lys112 forms a hydrogen bond with a benzoquinone oxygen of GA.This pocket configuration aligns with the reduced cross-linking activity of the lysine pair mentioned above.Conversely, PU-H71 binds deeply within the pocket.In this configuration, helices 4 and 5 are packed against helix 2 with Lys112 and Lys58 in HSP90α (or Lys107 and Lys53 in HSP90β) positioned more favorably for cross-linking.This arrangement of lysine residues is more likely to be found in the closed conformation of HSP90 (Fig. 2c), as proposed by crystallographic studies (PDB: 2CG9) 49 . It is essential to reiterate that the cross-linking experiments were conducted to 'lock' HSP90 conformations with covalent bonds before resin-based affinity purification experiments using the PU-or GA-beads.Consequently, the X-ray structures of PU-or GA-bound HSP90 NTD closely reflect a preferred pocket configuration that each ligand may capture in the cell, and in this case, for PU-H71, it is indicative of the pocket configuration of HSP90 in the epichaperomes.Furthermore, differences in HSP90 conformation were corroborated by cross-linked pairs located at the interfaces between NTD/MT (HSP90α: Lys293-Lys363) and MD/CTD (HSP90α: Lys444-Lys616; HSP90β: Lys435-Lys607) (Fig. 2b).These interfaces undergo significant reorientation during the HSP90 conformational cycle, implying a distinct HSP90 conformation favored by PU-H71 compared to GA. Lys444 in HSP90α (Lys435 in HSP90β) and Lys616 in HSP90α (Lys607 in HSP90β) are positioned either within the middle of the MD or in proximity to the central axis of the HSP90 homodimer (Fig. 
2c).The distance between these lysine residues can provide insights into the relative placement of the monomer arms in specific HSP90 conformations (e.g., 20 Å in closed-like conformations; 29 Å in open-like conformations).The lower cross-linking percentage observed for Lys444 and Lys616 in HSP90α (Lys435 and Lys607 in HSP90β) in GA-favored HSP90 suggests a longer distance (29 Å) between them, supporting GA's preference for binding to an open-like conformation.In contrast, the moderate cross-linking percentage detected for these residues in PU-H71-favored HSP90 implies a medium distance (20 Å) between them, favoring a closed-like conformation enriched in epichaperomes (Fig. 2c). Additionally, a third pair of cross-linked residues (Lys293 and Lys363 in HSP90α) supports this notion.Located near the interface between the NTD and the MD, their positions are sensitive to the ligand binding state of the NTD, leading to changes in the relative positioning of secondary structures near the NTD/MD interface and altering the distance between Lys293α and Lys363α.Consistent with the cross-linked pair at MD/CTD interface, a closed-like conformation (16 Å) in PU-H71 bound HSP90 will be more amenable than an open-like conformation (13 Å) in GA-bound since the short distance might have limited the location of side-chains for cross-linking reactions. In summary, our CX-MS data, supported by several cross-linked residue pairs situated in structurally distinct regions, the nucleotide-binding pocket, and the NTD/MD and MD/CTD interfaces, shed light on the conformation adopted by HSP90 within epichaperomes.These findings underscore the notion that an enrichment of the closed-like conformation of HSP90 in specific cellular environments favors the formation of epichaperomes. Specific PTMs support HSP90 incorporation into epichaperomes To uncover the factors that facilitate the enrichment of the epichaperome-favoring HSP90 conformation, we conducted a comprehensive examination of the HSP90 pools isolated by PU-H71 and GA, searching for potential differences.Notably, we identified several peptides phosphorylated on Ser231 and Ser263 in HSP90α (Ser226 and Ser255 in HSP90β) exclusively in the PU-H71 cargo from ESCs (Fig. 3a,b and Supplementary Data 3).High-quality MS/MS spectra (illustrated for Ser226 and Ser255 phosphopeptides in HSP90β, Fig. 3b) coupled with precise mass accuracy allowed for the unequivocal identification of the peptide sequences and the phosphorylation sites.In contrast, these phosphorylated peptides were notably absent in substantial quantities in the GA cargo (Supplementary Data 3). Subsequently, we performed label-free quantitation of these phosphopeptides using ion intensity measurements and observed a significant enrichment in the PU-beads cargo, particularly in the case of Ser255 of HSP90β.For instance, the Ser255 phosphopeptide displayed a nearly threefold enrichment in the PU-H71 cargo compared to the lysate, after protein loading normalization using a representative tryptic peptide (Fig. 3c). 
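To make the normalization described above explicit, here is a small sketch of the cross-linking percentage calculation: the intensity of a given cross-linked pair divided by the summed intensities of all cross-linked and cross-linker-modified (mono-link) peptides containing either residue. All peptide identities and intensities below are placeholders, not measured values.

```python
# Sketch of the cross-linking percentage normalization described in the text.
def crosslink_percentage(pair, crosslinked, monolinked):
    """pair: (resA, resB); crosslinked: {(resA, resB): intensity}; monolinked: {res: intensity}."""
    numerator = crosslinked.get(pair, 0.0)
    denominator = sum(i for (a, b), i in crosslinked.items()
                      if pair[0] in (a, b) or pair[1] in (a, b))
    denominator += monolinked.get(pair[0], 0.0) + monolinked.get(pair[1], 0.0)
    return 100.0 * numerator / denominator if denominator else 0.0

# Placeholder MS1 intensities for cross-linked and cross-linker-modified peptides.
crosslinked = {("K58", "K112"): 4.2e6, ("K58", "K98"): 1.1e6}
monolinked  = {"K58": 9.5e6, "K112": 6.3e6, "K98": 2.0e6}

for pair in crosslinked:
    print(pair, f"{crosslink_percentage(pair, crosslinked, monolinked):.1f}%")
```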
To gain further insights, we leveraged previously reported MS datasets of PU-H71-isolated cargo from epichaperome-positive cancer cells 13,19 , including MDA-MB-468 (triple negative breast cancer), Daudi (Burkitt's lymphoma), IBL-1 (AIDS-related immunoblastic lymphoma), and NCI-H1975 (non-small cell lung carcinoma), as well as from non-transformed (NT) proliferating cells in culture (e.g., MRC5, lung fibroblast and HMEC, mammary epithelial cells).This analysis revealed that phosphorylation of these serine residues is also enriched in cancer cells when compared to NT cells (Ca:NT S255 = 16; S226 = 8; S263 = 12, Fig. 3d) establishing it as a hallmark of both ESC and cancer epichaperomes.This observation further supports the idea of a shared structural and architectural foundation for epichaperomes among ESCs and cancer cells. As HSP90 is found alongside HSC70 in epichaperomes, we conducted an additional confirmatory experiment.Here, we used YK5-B, a biotinylated probe that binds to HSC70 in epichaperomes, and thus captures HSP90 in epichaperomes via HSC70 19 .PU-H71 and YK5-B were used to isolate cargo from epichaperome-positive cancer cells, including MDA-MB-468 and OCI-Ly1 (breast cancer and diffuse large B-cell lymphoma, respectively), as well as from CCD-18Co colon cells in culture (i.e., non-transformed proliferating cells in culture).We found that the Ser255 and S226 phosphopeptides of HSP90β were nearly four to five times more abundant in epichaperome-positive cancer cells compared to non-transformed proliferating cells in culture, for both the PUcargo and the YK5-B cargo.Similar enrichment was noted for Ser263 and Ser231 in HSP90α (Fig. 3e).This analysis, thus, using both PU-H71 and YK5-B probes across diverse cell types, underscores the robustness of our observations and reinforces the role of phosphorylation in the acidic linker in shaping HSP90 within epichaperomes. In light of these findings, made with two distinct probes and observed in ESCs, five cancer cell lines, each representative of a distinct cancer type, and of three non-transformed, but proliferating, cells in culture, it is evident that the epichaperome-specific agents target a subpopulation of HSP90 characterized by high phosphorylation levels in the acidic linker between the NTD and the MD, and this subpopulation predominantly assumes a closed-like conformation.In conjunction with PU's preference for HSP90 within epichaperomes, and substantiated by YK5-B, a probe that binds epichaperomes via HSC70, these results strongly indicate that phosphorylation at these two serine residues is a key driver for HSP90 incorporation into epichaperomes and, consequently, for epichaperome formation. Specific PTMs drive epichaperome formation and function To explore whether the phosphorylation of these serine residues plays a pivotal role in driving, rather than merely resulting from, epichaperome formation, we next studied the phosphomimetic (HSP90β S226E,S255E ) and the non-phosphorylatable (HSP90 S226A,S255A ) mutants. Notably, these serine residues are located within an intrinsically disordered region (IDR) of HSP90 (Supplementary Fig. 
5).IDRs are pivotal elements in the intricate network of protein-protein interactions (PPIs).These regions lack a fixed three-dimensional structure, granting them exceptional flexibility.This structural adaptability enables proteins containing IDRs to assume various conformations in response to specific cellular contexts or binding partners.Such adaptability plays a crucial role in facilitating context-dependent involvement in distinct PPIs.In the case of HSP90, these serine residues within the IDR may alter the dynamics and structure of the charged linker, contributing to stabilizing the epichaperome-enabling conformation of this chaperone, and in turn facilitating epichaperome formation. To explore this hypothesis, we conducted computational analyses to investigate the impact of each mutation on the flexibility of the charged linker (Fig. 4a-c).We constructed a model of the putative epichaperome core -namely the ~300 kDa assembly, see Fig. 1 -based on the cryo-EM structure of a multimeric HSP90 assembly (PDB: 7KW7).This structure represented 2xHSP90α, protomer A and B, bound to 2xHSP70 and 1xHOP.To create the model, we substituted HSP90 with human HSP90β using the closed-state cryo-EM structure (PDB: 8EOB).Additionally, we computationally inserted the charged linker, which was missing in the cryo-EM structures (Fig. 4a). We conducted all-atom molecular dynamics simulation of this pentameric protein assembly, with each system containing all the components along with either the EE, AA, or WT HSP90 -in both protomers.These simulations are intended to qualitatively explore the immediate response of the assembly to the perturbation induced by mutations and not to provide an extensive characterization of the assemblies' dynamics.By using a comparative MD-based approach we explore how short-term changes in the structural dynamics of different components within a large assembly may influence the emergence of states relevant for assembly stabilization.The underlying premise is that nanosecond timescale residue fluctuations in regions specifically responsive to certain states may facilitate large-scale rearrangements that underlie functional changes. These simulations revealed that the structure and conformation of the charged linker were sensitive to the phosphorylation of the serine residues.In the pentameric assembly containing the phosphomimetic EE mutant (i.e., HSP90 S226E/S255E ), the linker of HSP90, protomer A, had a high probability of forming a β-strand bordering the Ser226Glu residue (2.1% of β-strand A).This strand remained stable over the duration of the simulation.This β-strand's formation significantly decreased in the pentameric assembly containing the wild-type (WT, i.e., HSP90 S226/S255 ) protein (0.4% of β-strand A), with no secondary structure element found in the assembly containing the AA (i.e., HSP90 S226A/S255A ) mutant (Fig. 4b).Notably, ATP binding, but not ADP binding, favored a charged linker with a high content of β-strand A formation (2.1% vs. 0.3%, respectively, in the EE mutant) (Fig. 4b and Supplementary Fig. 
6a).This finding emphasizes that the observed changes in the EE mutant were not merely due to the addition of charged residues; they were intricately tied to the phosphorylation status and the specific context, including the nucleotide environment permissive of the specific HSP90 conformation (i.e., closed-like).Intriguingly, the strategic formation of β-strand A not only stabilized the charged linker but also induced a conformational switch, flipping it into an 'up' conformation, thereby fully exposing the middle domain of HSP90, where HSP70 binds (Fig. 4c, see HSP90 protomer A -HSP70 interface).While other stabilized structural elements were observed in the analyzed assemblies containing either the WT or the mutant HSP90s, no other had a similar conformational effect on the charged linker as we observed for the β-strand A (see the effect of α-helices 1 through 6 in Supplementary Fig. 6a,b). We conducted dynamical residue cross-correlation analyses to explore how different protein units or subdomains in the pentameric 2xHSP90-2xHSP70-HOP assemblies, featuring either the WT (HSP90 S226/S255 ) or mutant (HSP90 S226E/S255E or HSP90 S226A/S255A ) HSP90s, correlate in their motions throughout the simulation (Fig. 5a,b).This analysis aimed to reveal how individual components move in relation to each other.Positive dynamical cross-correlations spanning different components of the assembly within the large epichaperome core may indicate enhanced cooperative motions, suggesting increased interactions that contribute to the stability of the assembled structure.Previous studies have employed similar analyses to investigate how ligandinduced modulations influence the overall flexibility of HSP90 assemblies, facilitating progress along the chaperone cycle, thereby supporting feasibility of this approach 50 .Indeed, we observed the highest correlation among the components in assemblies containing the HSP90 EE phosphomimetic, mimicking the case where the charged linker is phosphorylated, followed by the WT, and then the non-phosphorylatable HSP90 AA mutant (Fig. 5a).Notably, the coordinated movements observed in the assemblies containing the HSP90 phosphomimetic strongly support the idea that the HSP70-HSP90-HSP90-HSP70 or HSP70-HSP90-HSP90-HSP70-HOP assemblies can be preferentially stabilized when the HSP90 charged linker is phosphorylated (Fig. 5b).This observation aligns with the prominent ~300kDa band observed for the epichaperome core in native PAGE (see Fig. 1 showing HSP90 assemblies favored by PU-H71). In contrast, in the WT HSP90 assembly, coordinated movements were primarily observed between the two HSP90 protomers, within HSP90, and between HSP90 and HSP70 and HOP, specifically through HSP90 protomer B (Fig. 5a,b).These movements are more consistent and favorable in the context of HSP90-HSP90-HSP70 or HSP90-HSP90-HOP assemblies.This observation implies that the major, broad ~242 kDa band detected by the HSP90 antibodyrepresenting the primary HSP90-containing assembly observed in differentiated ESCs (Fig. 1) and in non-transformed cells [13][14][15]17,20 -may consist of such assemblies, along with HSP90 homooligomers. In summary, both MS evidence and computational models converge to support the conclusion that phosphorylation of the charged linker is a crucial contributor to epichaperome assembly, emphasizing its role in shaping not only HSP90, but also the stability and dynamics of the epichaperome structure. 
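As a rough illustration of the cross-correlation analysis used here, the sketch below computes a dynamical cross-correlation matrix (DCCM) from an array of Cα coordinates over trajectory frames. The toy trajectory is random; in a real workflow the coordinate array would come from the aligned MD trajectory of the HSP90/HSP70/HOP assembly, loaded with a trajectory library such as MDAnalysis or mdtraj.

```python
import numpy as np

def dccm(positions):
    """Dynamical cross-correlation matrix from an (n_frames, n_atoms, 3) coordinate array.
    C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>), with dr the deviation from the time-mean."""
    dev = positions - positions.mean(axis=0, keepdims=True)          # per-atom fluctuations
    cov = np.einsum("fik,fjk->ij", dev, dev) / positions.shape[0]    # <dr_i . dr_j>
    diag = np.sqrt(np.diag(cov))
    return cov / np.outer(diag, diag)

# Toy example with random fluctuations; replace `positions` with aligned Calpha coordinates
# extracted from the simulation of the pentameric assembly.
rng = np.random.default_rng(1)
positions = rng.normal(size=(200, 50, 3)).cumsum(axis=0) * 0.01 + rng.normal(size=(1, 50, 3))
C = dccm(positions)
print(C.shape, f"mean |C_ij| = {np.abs(C).mean():.2f}")
```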
Next, we carried out an extensive biochemical and functional analysis to reinforce these findings.Given the well-established tight association between HSP90 and other chaperones and cochaperones in epichaperomes 13,19,20,51 , our focus shifted to a comprehensive evaluation of chaperone and co-chaperone proteins co-purified with the phosphomimetic (HSP90β S226E,S255E ) and non-phosphorylatable (HSP90 S226A,S255A ) mutants.Our strategy involved the purification of protein complexes containing N-terminally mCherry-tagged HSP90β in ESCs while retaining the endogenous WT HSP90 proteins.Distinctly labeled ESCs (i.e., labeled with heavy or light isotope lysine and arginine) expressing either the phosphomimetic or non-phosphorylatable mutant were subjected to immunoprecipitation (IP), followed by SDS-PAGE separation and quantitative analysis via MS to determine protein abundance (Fig. 6a-c; Supplementary Data 4).It is worth noting that we performed IP separately for the phosphomimetic and non-phosphorylatable mutants to minimize subunit exchange during IP 52 , thereby enhancing our ability to detect changes in co-chaperone binding more accurately than previous studies 53 . We found co-chaperones were among the most abundant copurifying proteins, and most cochaperones reported to participate in epichaperome formation 13,19 displayed prominent changes in the phosphomimetic mutant (Fig. 6b,c).The increased presence of epichaperome-specific cochaperones (such as AHA1 and FKBP4) 13 in phosphomimetic complexes compared to nonphosphorylatable complexes highlights a stronger association with Ser226 P /Ser255 P HSP90 as opposed to the non-phosphorylatable protein.However, we observed a slight reduction in the levels of HSC70 and HOP within phosphomimetic complexes.This decrease is potentially associated with specific subpopulations of HSP90 complexes that become more prevalent when the non-phosphorylatable Ala mutant is overexpressed in cells.The introduction of two Ala residues in the unstructured linker region of HSP90 may prompt the recruitment of HSC70 and HOP, chaperones recognized for their ability to bind unstructured unfolded protein stretches 54 .It is important to note that these assemblies are distinct from epichaperomes.Due to the anti-mCherry antibody capturing the entirety of the tagged HSP90, differentiation between specifically epichaperome-related HSP90 and a mixture of epichaperomes and other pools becomes challenging. To address these limitations, we adopted a multi-pronged approach.Firstly, we utilized immunoblotting with native cognate antibodies for chaperone assemblies retained on native PAGE, coupled with chemical blotting using PU-probes.Additionally, we employed affinity capture with PU-probes to quantify the amount of epichaperome components under each condition (Fig. 7a).For these experiments, we transfected cells with the phosphomimetic (HSP90β S226E,S255E , EE mutant) and with the non-phosphorylatable (HSP90 S226A,S255A , AA mutant) mutants, as well as with HSP90β WT or mCherry-tag only for control purposes.In this study, we chose human embryonic HEK293 cells as our cell model since they exhibit intermediate epichaperome expression levels (i.e., medium expressor, Supplementary Fig. 7), making them suitable for studying epichaperome dependence.We confirmed comparable transfection efficiency for each construct, with the tagged HSP90β protein expressed in addition to the endogenous HSP90β (Fig. 7b). 
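For readers unfamiliar with SILAC quantification, the sketch below shows the basic arithmetic: per-peptide heavy/light intensity ratios are collapsed into a protein-level median log2 ratio. Which isotope label corresponds to the EE versus AA cells, and all peptide intensities, are hypothetical here; the actual pipeline will have handled normalization and label assignment in its own way.

```python
import math
from collections import defaultdict
from statistics import median

# Placeholder peptide-level data: (protein, heavy intensity [assumed EE IP], light intensity [assumed AA IP]).
peptides = [
    ("AHSA1", 8.1e6, 3.9e6), ("AHSA1", 6.4e6, 3.1e6),
    ("FKBP4", 5.0e6, 2.4e6), ("FKBP4", 4.2e6, 2.3e6),
    ("HSPA8", 7.7e6, 8.9e6), ("STIP1", 3.3e6, 3.8e6),
]

ratios = defaultdict(list)
for protein, heavy, light in peptides:
    ratios[protein].append(math.log2(heavy / light))

for protein, values in ratios.items():
    print(f"{protein:6s} median log2(EE/AA) = {median(values):+.2f}  (n peptides = {len(values)})")
```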
Our findings revealed that cells expressing the EE mutant exhibited higher levels of epichaperomes compared to those expressing the AA mutant, as evidenced by immunoblotting of various epichaperome components (including HSP90α, HSC70, CDC37, AHA1, HOP, and HSP110) (Fig. 7c, native PAGE) and chemical blotting with the PU-Cy5 epichaperome probe (Fig. 7d).Notably, there was no significant change in the overall concentration of these proteins in association with their incorporation into epichaperomes (Fig. 7c, SDS PAGE).Epichaperome isolation using PU-beads as an affinity purification probe also revealed significantly greater incorporation of chaperones, including mCherry-HSP90β, and co-chaperones into epichaperomes in cells expressing the EE mutant compared to those containing the AA mutant HSP90 (Fig. 7e), with no substantial alterations observed in cells containing the control vectors (Supplementary Fig. 8a).In contrast, overexpression of wild-type HSP90 in HEK293 cells had a minimal impact on endogenous epichaperomes (Fig. 7c, native PAGE, and Supplementary Fig. 8a, PU-beads capture).This observation aligns with previous reports 13 suggesting that factors beyond chaperone concentration play a pivotal role in driving HSP90 incorporation into epichaperomes.Notably, cargo isolated on the control probe (control beads, Supplementary Fig. 8b) showed no detection of HSP90. We further established the dependency of epichaperome function, beyond its formation, on the phosphorylation of HSP90 serine residues (Fig. 8,9).A key characteristic shared among high epichaperome-expressing cells in PSC, CSC, and cancer cells is the hyperactivity of the transcription factor c-MYC 13,[25][26][27] .In cancer, c-MYC is frequently overexpressed or mutated, resulting in sustained activation, which drives uncontrolled cell proliferation 55 .In ESCs, c-MYC plays a crucial role in maintaining pluripotency and self-renewal, crucial for preserving the undifferentiated state of ESCs 56 .We therefore investigated the impact of HSP90β Ser226 P /Ser255 P on cellular behaviors such as self-renewal and proliferation. To assess proliferation, ESCs were transfected with plasmids containing either the phosphomimetic (HSP90β S226E,S255E ) or non-phosphorylatable (HSP90β S226A,S255A ) mutant.Notably, ESCs transfected with the HSP90β phosphomimetic mutant displayed a significantly higher proliferative rate (P<0.0001,>25%) compared to those transfected with the nonphosphorylatable variant, regardless of whether medium (1x) or high (2x) plasmid concentrations were employed (Fig. 8a).This observation lends support to the notion that HSP90β Ser226 P /Ser255 P , and consequently, epichaperomes, play a crucial role in ESC proliferation. 32.Since differentiation is also closely associated with the disassembly of epichaperomes, we next examined the phosphorylation levels of HSP90β at Ser226 and Ser255 in cells with varying self-renewal capacities.We utilized the TET-repressible oct4 mouse ESC line ZHBTc4, where the Oct4 expression is suppressed in the presence of doxycycline for ESC differentiation into trophoblastlike cells (Troph) 31 .In this experiment, we expressed WT mCherry-HSP90β in ZHBTc4 cells and quantified phosphopeptides in both ESCs and trophoblast cells following ESC differentiation (Fig. 
8b, Supplementary Data 5).After normalizing the data to mCherry-HSP90β protein loading (middle panel, ES/Troph = 0.44), we observed a 30% higher phosphorylation of HSP90β at Ser255 in stem cells compared to differentiated cells (left panel, ES/Troph = 0.57).Phosphorylation levels of HSP90β at Ser226 appeared to remain unchanged under these experimental conditions after normalizing to protein loading (right panel, ES/Troph = 0.45). Differentiation of ESCs results in a decreased proliferative rate, as indicated by the doubling time of ZHBTc4 ES cells (~12 h) and trophoblast-differentiated cells (~25 h) Pluripotency hinges on crucial transcription factors like Oct4.Oct4 is widely recognized as one of the principal transcription factors governing the self-renewal of both pluripotent stem cells and cancer cells 57 .We find Oct4 interacts with epichaperomes in ESCs (Supplementary Data 1) and exhibits significant enrichment in the cargo captured with the Ser226/Ser255 phosphomimetic compared to the non-phosphorylatable HSP90 (Supplementary Data 4, Fig. 8c, 1.4-fold EE : AA).To validate the reliance of Oct4 on epichaperomes, we examined Oct4 levels in both MDA-MB-468 cancer cells and HEK293 cells transfected with the various HSP90 plasmids.Additionally, we utilized affinity capture with PU-probes (Fig. 8d-f and Supplementary Fig. 8a).Notably, we observed that cells expressing the phosphomimetic EE mutant showed significantly elevated levels of Oct4, both overall (Fig. 8e) and within epichaperomes (i.e., those sequestered within the epichaperomes, Fig. 8f), compared to cells expressing the HSP90 AA mutant.No detectable differences were observed under control conditions (WT HSP90 and empty vector only) (Supplementary Fig. 8a).Additionally, Oct4 was sequestered by epichaperomes in MDA-MB-468 cells, supporting the idea that epichaperomes play a role in regulating pluripotency through both direct and indirect regulation of Oct4. Epichaperomes play a pivotal role in supporting enhanced proliferation by altering the regulation of various proteins involved in cell signaling 3,13,19 .Higher epichaperome levels translate to a greater number of proteins being affected, resulting in increased signaling output 13,17,58 .We therefore next assessed the signaling output of cells transfected with the various HSP90 mutants.We observed a significantly heightened epichaperome-dependent impact on key signaling effector proteins involved in cell growth and proliferation (i.e., MEK, AKT, and mTOR) in cells expressing the HSP90 EE mutant compared to those expressing the AA mutant.This was evident in both the increased phosphorylation status of these effector proteins (Fig. 9a,b) and their enhanced recruitment to epichaperome platforms (Supplementary Fig. 9a-c) in cells expressing the EE mutant, as compared to those expressing the AA mutant.Importantly, these effects occurred without notable changes in the expression levels of the proteins (Supplementary Fig. 9a, b).No measurable differences were observed under control conditions (WT HSP90 and empty vector only) (Fig. 9b and Supplementary Fig. 9a, b). Epichaperome formation fuels aggressive behaviors in cells 51,59 .Indeed, when observed under a microscope, we noted that, in comparison to cells expressing the non-phosphorylatable AA mutant (HSP90β S226A,S255A ), those expressing the phosphomimetic EE mutant (HSP90β S226E,S255E ) displayed a higher prevalence of cells with an elongated phenotype and several protrusions (Fig. 
9a,b), supportive of a mesenchymal-like phenotype 60 .These morphological changes suggest a shift towards a more stem cell-like state, or a more aggressive phenotype in the context of cancer, in cells harboring the EE HSP90 mutant (i.e., with a high epichaperome load), a feature not observed in cells carrying the AA HSP90 mutant (i.e., not permissive of epichaperome formation). Previous studies have found that irrespective of the tumor type, 60-70% of tumors contain HSP90-HSC70 epichaperomes 13,19 .Additionally, epichaperomes are known to specifically form in diseased tissue 3 .To assess whether our observations regarding the impact of the HSP90 charged linker, derived from cell models, extend to human patients and are not artifacts specific to cultured cells, we obtained surgical specimens from breast and pancreatic cancer surgeries (n = 18 tissues from 9 patients, Fig. 10a-d).Both tumor (n = 9) and tumor adjacent (n = 9) tissues, determined by gross pathological evaluation to be potentially non-cancerous, were analyzed for epichaperome levels using Native PAGE.Additionally, total HSP90β and phosphorylated HSP90β at Ser226 were assessed by SDS PAGE and immunoblotting with specific antibodies.To mitigate potential biases arising from varying HSP90 levels, each pair was normalized based on HSP90 concentration.Despite challenges in obtaining high-quality epichaperome profiles from surgical samples, a robust correlation emerged between epichaperome expression and Ser226 phosphorylation (Fig. 10c,d).Tissues positive for epichaperomes exhibited p-Ser226 HSP90β positivity, and conversely, those negative for epichaperomes showed no or negligible p-Ser226 signal. Collectively, these multifaceted biochemical and functional lines of evidence establish a compelling connection between structural features in HSP90 and the processes of epichaperome formation and function.These findings lend robust support to the hypothesis that the regulation of epichaperome processes in ESC and cancer cells-encompassing critical factors such as proliferative potential, self-renewal capacity, plasticity, and signaling output-crucially relies on the specific phosphorylation events taking place at key residues within HSP90's charged linker. DISCUSSION The intricate network of protein-chaperone interactions within cells plays a critical role in maintaining protein homeostasis and cellular function.In recent years, the discovery of epichaperomes as specialized chaperone complexes in both cancer cells and pluripotent stem cells has opened new avenues for understanding chaperone biology.This investigation offers valuable insights into the structural and regulatory intricacies of epichaperomes, with particular attention to the pivotal role played by PTMs of HSP90 in orchestrating their formation and function. A central discovery in this investigation is the recognition of specific PTMs on HSP90, especially at Ser226 and Ser255, as critical factors governing the assembly of epichaperomes.Our data reveal that phosphorylation of these serine residues enhances the association of HSP90 with other chaperones and co-chaperones, creating a microenvironment conducive to epichaperome formation.This finding underscores the significance of PTMs in regulating chaperone assemblies and highlights the potential of targeting these modifications for therapeutic intervention. 
Chaperones appear to be highly susceptible to structural and functional regulation by a spectrum of PTMs.For example, PTMs of HSP90 provide an important regulatory element, modulating cochaperone and client protein binding [61][62][63][64][65] , ATPase activity 66 , conformational cycle 62,[65][66][67] , turnover 68 and small molecule affinity 12,38 .Similar to minor changes in primary sequence, these PTMs likely regulate the access to and occupancy of key conformational states of HSP90 for in vivo processing of some essential clients.Our investigation pinpoints crucial PTMs that remodel the functional profile of HSP90, metamorphosing it from a protein-folding entity into epichaperomes, a platform orchestrating the reorganization of PPI networks for heightened cellular adaptability and proliferation. Our study uncovered a fascinating aspect of PTMs in HSP90 within epichaperomesphosphorylation events occur in an IDR of the protein.The strategic placement of these PTMs in the IDR holds profound significance, suggesting that they influence HSP90's conformation and function beyond the traditional structured regions.This adaptability is crucial for HSP90's participation in distinct PPIs, allowing it to stabilize the epichaperome-enabling conformation and restructure the interactions of numerous proteins in response to cellular stressors.Intriguingly, previous studies in yeast 69 , where the IDR was substituted with glycine-glycine-serine residues, align with our findings.These studies suggested that the charged linker (encompassing the IDR), influenced by the N-domain of HSP90, can adopt a structured form.This structured form, in turn, can stabilize interactions between specific HSP90 domains, influencing HSP90 dynamics, cochaperone binding, and overall biological function, especially in conditions of cellular stress. Changes in PPI networks play a fundamental role in cellular responses to stressors and the coordination of various biological processes 18 .These alterations, often induced by external stressors, are vital for the cell's ability to adapt and function under different conditions.Notably, less than 10% of human PPIs remain unaffected by stress-induced perturbations, highlighting the widespread impact of cellular stress on the interactome.These changes, influenced by factors such as PTMs and protein conformation, are essential for species-specific adaptation and contribute to PPI network malfunctions observed in diseases. 
One intriguing question is which kinase could phosphorylate HSP90 at these serine residues?A likely candidate is casein kinase II (CK2) 70,71 .CK2 is sequestered to epichaperomes in ESCs and in cancer cells 13 .Notably, CK2 is overexpressed in highly proliferative cells 72 and plays a role in phosphorylating numerous protein substrates involved in cell proliferation and survival 73 .Moreover, the mutation of CK2 has been shown to abolish the viability of both PSCs 74 and tumor cells 75,76 , indicating a potential direct link between epichaperome function and cellular physiology, possibly mediated by CK2 phosphorylation, which remains to be confirmed.The implications of our study go beyond providing structural and mechanistic insights.We present compelling evidence that phosphorylation of HSP90 at Ser226 and Ser255 not only promotes epichaperome formation but also influences cellular behaviors, including proliferation and selfrenewal.This suggests a direct link between epichaperome function and cellular physiology, particularly crucial in contexts such as cancer and stem cell maintenance, where robust proliferation and adaptation are vital. Plasticity, a key characteristic associated with both ESCs and cancer cells 77 , is also implicated in our findings.The morphological changes observed in cells expressing the phosphomimetic HSP90 mutant-specifically, the higher prevalence of cells with an elongated phenotype and several protrusions-hint at a mesenchymal-like phenotype 60 .This phenotypic shift is often associated with increased plasticity and is indicative of a more stem cell-like state.Our findings suggest a potential role for epichaperomes in modulating this dynamic process of cellular transition between different phenotypic states. The link between pluripotency and cancer is particularly intriguing.Cellular stress is increasingly recognized as a pivotal factor that can shift the balance between cellular pluripotency and the development of malignancies.The process of dedifferentiation, observed in regeneration in plants and some vertebrates, involves the deactivation of genes responsible for cell-specific functions, re-entry into the cell cycle, proliferation, and activation of pluripotency-associated genes 78 .Tumors also undergo dedifferentiation, where cancer cells revert to a less differentiated state, reexpress stem cell genes like Oct4, leading to the emergence of cancer stem-like cells with enhanced metastatic potential and treatment evasion 79 .Our study proposes epichaperomes as significant mediators of changes in cellular identity, partly through Oct4. 
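One inexpensive way to follow up the CK2 hypothesis computationally is to scan the linker sequence for the minimal CK2 consensus (a Ser/Thr with an acidic residue three positions downstream). The sketch below does exactly that on an invented, acidic-linker-like demo sequence; it is not the real HSP90β sequence, which would need to be retrieved (e.g., UniProt P08238) to examine Ser226 and Ser255 in their true context.

```python
import re

# Invented, acidic-linker-like demo sequence (NOT the actual HSP90beta sequence).
demo_seq = "KEISDDEAEEEKGEKEEEDKDDEEKPKIEDVGSDEEDDSGKDKK"

# Minimal CK2 consensus: Ser/Thr with Asp/Glu at position +3.
for m in re.finditer(r"(?=([ST])..[DE])", demo_seq):
    idx = m.start()                                  # 0-based index of the Ser/Thr
    context = demo_seq[max(0, idx - 3): idx + 5]
    print(f"{demo_seq[idx]}{idx + 1}: putative CK2 site (context: {context})")
```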
The revelation of HSP90's dysfunctional multimeric states carries implications for therapeutic interventions 3,16. Instead of universally inhibiting all HSP90 pools, a paradigm shift comes to the fore with precision medicine strategies. The prospect of targeting specific pathologic conformations while preserving normal HSP90 functions emerges as a promising direction. This shift beckons researchers to navigate the intricate interplay of HSP90 conformations as they forge ahead in the quest for innovative therapeutic approaches. Our study also confirms the notion that small-molecule HSP90 binders have distinct preferences for HSP90 conformers in cells, reinforcing the finding that not all HSP90 inhibitors act equally well or equally selectively on specific disease-promoting HSP90 conformations or disease-associated HSP90 assemblies in comparison with HSP90 conformers found in normal cells. The first feature determines drug efficacy, whereas the latter influences the safety profile during administration. In conclusion, our study unravels the intricate interplay between PTMs, conformational regulation, and biological functions of HSP90 within epichaperomes. These findings have implications for the development of novel therapeutic strategies targeting chaperone complexes in diseases characterized by epichaperome dysregulation, such as cancers and neurodegenerative disorders. By deciphering the regulatory mechanisms underlying epichaperomes, we move one step closer to harnessing their potential for precision medicine and therapeutic intervention.

Human biospecimens research ethical regulation statement

Surgical specimens were obtained in accordance with the guidelines and approval of the Institutional Review Board at Memorial Sloan Kettering Cancer Center, under Biospecimen Research Protocol# 09-121, project title: Ex-Vivo Testing of Breast Cancer Tumors for Sensitivity to Inhibitors of Heat Shock Proteins and Signaling Pathway Inhibitors, S. Modi, PI, and Biospecimen Research Protocol# 14-091, project title: Establishment and Characterization of Unique Mouse Models Using Patient-Derived Xenografts, E. de Stanchina, PI. The source of samples consists of unused portions of surgical specimens that are taken for reasons other than research (i.e., for patients undergoing the procedures for medical reasons unrelated to the need for research samples or to the nature of the research). No individuals were excluded on the basis of age, sex or ethnicity. Because breast cancer is a disease which overwhelmingly affects women, and is a disease that is generally not seen in children, the vast majority of breast cancer patients enrolled on protocol# 09-121 were females >18 years of age. Patient tissue samples were obtained with consent provided in written form. Samples were de-identified before receipt for use in the studies.
Reagents and Chemical Synthesis All commercial chemicals and solvents were purchased from Sigma Aldrich or Fisher Scientific and used without further purification.The identity and purity of each product was characterized by MS, HPLC, TLC, and NMR.Purity of target compounds has been determined to be >95% by LC/MS on a Waters Autopurification system with PDA, MicroMass ZQ and ELSD detector and a reversed phase column (Waters X-Bridge C18, 4.6 x 150 mm, 5 µm) eluted with water/acetonitrile gradients, containing 0.1% TFA.Stock solutions of all inhibitors were prepared in molecular biology grade DMSO (Sigma Aldrich) at 1,000× concentrations.The PU-TCO, PU-CW800 and YK5-B probes and relevant control probes, and the PU-beads and the control probes were generated using published protocols 13,19,35,[80][81][82][83][84][85] or as described in Supplementary Notes 1.The GA-biotin probe was purchased from Sigma (SML0985).Disuccinimidyl suberate (DSS) was acquired from ThermoFisher (21655). Primary specimen processing Frozen tumor and matched tumor adjacent tissues were cut into small pieces using surgical blades and weighed using a precision balance.74 mg of tissue was homogenized in 200 µL of 1× native lysis buffer in 1.5 mL microtube homogenizer for each sample.Homogenization was performed on dry ice.Post homogenization samples were incubated on ice for 30 min followed by centrifugation at 12,000×g at 4°C for 15 min.Supernatant was collected, and protein quantification was done using BCA method.Samples were normalized using total HSP90β levels for each tissue pairs.An initial SDS-PAGE was run using 5 µg of total protein for each sample.Total protein loads were adjusted to ensure equal levels of total HSP90β in tumor and corresponding matched adjacent tissue.Samples were then processed for native PAGE and SDS-PAGE to check for HSP90β and p-Ser226 HSP90β as described below. Coomassie and Ponceau S staining Where indicated, gels after native PAGE or SDS-PAGE were washed with deionized water three times for 5 min and incubated with Coomassie G-250 stain (Bio-Rad) for 1 h.The gels were washed with water after to remove the excess of the dye and imaged.Where indicated, membranes after protein transfer were incubated with Ponceau S solution (Sigma) for 10 min, then were washed with water to remove the excess of the dye and imaged. 
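The HSP90β-based loading adjustment described above is simple arithmetic; the sketch below illustrates it with placeholder densitometry values, assuming the band signal scales roughly linearly with the amount of protein loaded.

```python
# Placeholder densitometry values from the initial 5 ug run; not data from the study.
INITIAL_LOAD_UG = 5.0

pairs = {
    # pair id: (HSP90beta signal in tumor lane, signal in matched adjacent-tissue lane)
    "patient_01": (1.00, 0.62),
    "patient_02": (0.85, 0.91),
}

for pid, (t_sig, a_sig) in pairs.items():
    target = max(t_sig, a_sig)                  # equalize to the stronger lane's signal
    t_load = INITIAL_LOAD_UG * target / t_sig   # assumes signal is proportional to load
    a_load = INITIAL_LOAD_UG * target / a_sig
    print(f"{pid}: load {t_load:.1f} ug (tumor) and {a_load:.1f} ug (adjacent) for equal HSP90beta")
```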
Primary specimen analyses Specimens were harvested as previously reported 89 .Briefly, the surgical team delivered specimens in tightly sealed, sterile, leak-proof bags without fixatives.This maintained specimens in their fresh state, crucial for downstream analyses.Fresh specimens underwent sterile harvesting by the pathologist or assistant, using laminar flow hoods.Harvesting times were meticulously recorded, kept under 30 minutes post-surgery to mitigate cold ischemia effects.Primary breast tumor specimens were selectively obtained from the index lesion's periphery, avoiding central necrosis.Recognition criteria for necrotic tissue included color loss, softness, and demarcation from viable tissue.Normal breast tissue samples (e.g., normal dense/fibrous breast parenchyma) are taken from distant locations, at least 1 cm grossly away from the target lesion if feasible.In contrast, due to the relatively small size of the pancreas and the nature of surgical procedures, normal pancreas samples collected were typically in close proximity to the tumor.Whipple procedures typically involve the resection of the head of the pancreas, while distal procedures focus on the resection of the tail.Samples were initially stored in tubes with MEM and antibiotics and transported on wet ice to the laboratory immediately after procurement.Upon reaching the laboratory, samples were transferred to cryovials, 'snap' frozen, and stored at -80 °C for future molecular analyses. Chemical blotting For in-gel blotting using PUTCO, cells were harvested in 20 mM Tris pH 7.4, 20 mM KCl, 5 mM MgCl 2 , 0.01% NP40, and 10% glycerol buffer containing protease and phosphatase inhibitors (native lysis buffer), by a freeze-thaw procedure.Protein concentrations were measured by using the BCA assay according to the manufacturer's protocol (Pierce™ BCA Protein Assay Kit, Thermofisher Scientific, Waltham, MA).One hundred micrograms (100 µg) of protein were incubated with 1 µM of PUTCO in a total volume of 42 µL.Post 3 h of incubation samples were loaded in 4 to 10% native gel and run using native 1× Tris-Glycine buffer at 4°C in cold room at 125V.Following electrophoresis, the gel was incubated in 30 mL of 700 nM Cy5-Tetrazine containing ice cold 1× Tris-Glycine buffer at room temperature (RT) for 15 min for the click reaction to occur.After 15 min, the gel was washed thrice (5 min each) with ice cold 1× Tris-Glycine buffer.The gel was then imaged using ChemiDoc MP imaging system (Biorad).Alexa 546 channel (illumination: Epi-green, 520−545 nm excitation, Filter: 577-613 nm filter for green-excitable fluorophores and stains) was used to visualize mCherry-tagged species, and native page ladder (NativeMark™ Unstained Protein Standard, Cat.No. 
LC0725, Invitrogen™).The Cy5 channel (illumination: Epi-far red, 650−675 nm excitation, Filter: 700-730 nm filter for far red-excitable fluorophores and stains) was used for imaging PUTCO staining.Post capturing, the images from the two channels were merged to get the alignment of the bands with respect to the molecular weight ladder in Image Lab 6.1 (Bio-Rad).For in cell blotting using PU-CW800, E14 cells were plated at a seeding density of 1 × 10 6 per 10 cm plate and grown for 44 h before treatment with either PU-CW800 or control fluorophore (SS27) at a concentration of 1 μM in culture media for 4 h while incubating at 37°C, 5% CO 2 .Following the treatment, cells were harvested and lysed by dounce homogenization in Felts lysis buffer (20 mM HEPES at pH 7.4, 50 mM KCl, 2 mM EDTA, and 0.01% NP40) supplemented with protease, phosphatase, and deacetylase inhibitors.Cell lysates were buffer exchanged with fresh Felts lysis buffer containing supplements to remove any unbound drug before loading into a native gel.For visualization of PU-CW800 fluorescence and total protein, 200 μg of cell lysate was loaded onto a 4-10% native gradient gel and resolved at 4°C for 5 h.Fluorescence was visualized on LI-COR Odyssey CLx using Image StudioTM Software (LI-COR Biosciences) and then total protein was visualized on the same gel using Coomassie Brilliant Blue R250 stain.Band(s) with observable fluorescent signal were then processed by in-gel digestion and analyzed for LC-MS/MS to identify major proteins. SILAC and ESC transfection For metabolic labeling with SILAC (stable-isotope labeling of amino acid in cell culture), ESCs were cultured and passaged five times at 48 h intervals in media containing SILAC DMEM (Thermo Fisher 88364) supplemented with 13C-and 15N-labeled heavy L-arginine (84 mg L -1 , Cambridge isotope CNLM-539-H) and L-lysine (146 mg L -1 , Cambridge isotope CNLM-291-H) or supplemented with 12C-and 14N-labeled light L-arginine (Fisher BP2505100) and L-lysine (Fisher J6222522) amino acids for five passages to ensure complete stable isotope incorporation.For heterologous expression of HSP90 AA or EE mutants, cells were then reverse transfected with plasmid DNA using LipofectamineTM 3000 Transfection Kit (Invitrogen #L3000015) and incubated at 37°C, 5% CO 2 for 72 h at which point they were harvested. Measurement of cell proliferation E14 cells were transfected and incubated in 37°C/5% CO 2 incubator for 24 h.Cells were then replated to 6-well plate at the same dilution factor for each transfection treatment condition and then returned to incubator.At 60 h post-transfection, cell proliferation was determined via cell count for all conditions. Confocal microscopy HEK293 cells transfected with mCherry-HSP90β-AA or mCherry-HSP90β-EE plasmids were seeded at a density of 1.8 × 10 6 cells mL -1 on coverslips in a monolayer in six well plates and then grown overnight for the cells to attach.Cover slips were mounted with ProLong TM Gold antifade mountant with DAPI.Imaging was done using Leica SP8 Stellaris microscope.Images were analyzed using Image J and Leica LAS X lite software.Cell morphology was manually inspected, and the percentage of cells exhibiting an elongated phenotype and several protrusions was calculated.Specifically, cells transfected with mCherry were assessed, and those displaying the described features were counted.The percentage was then determined based on the total number of mCherry-transfected cells observed. 
Chemical precipitation and cross-linking The GA-affinity beads were prepared by incubating GA-biotin (Sigma SML0985) with Dynabeads M-280 Streptavidin (ThermoFisher 11205D) at 4°C for 2.5 h.The GA-bound beads were then incubated with cleared cell lysates or cross-linked cell lysates overnight at 4°C.For PU-beads affinity capture, cell lysates were incubated with PU-beads or control beads at 4°C for 3.5 h.Following incubation, bead conjugates were washed three times in lysis buffer before elution with sample buffer.The chemical cross-linking and HSP90 purification experiments were carried out in >3 replicates for both ligands.Samples were analyzed separately, and statistical significance was assessed. Chemical precipitation and immunoblotting Cells were harvested in 20 mM Tris pH 7.4, 20 mM KCl, 5 mM MgCl 2 , 0.01% NP40, and 10% glycerol buffer containing protease and phosphatase inhibitors (native lysis buffer), by a freezethaw procedure.Protein concentrations were measured by using the BCA assay according to the manufacturer's protocol (Pierce™ BCA Protein Assay Kit, Thermofisher Scientific, Waltham, MA).PU-beads and control beads were washed with the native gel buffer 3 times prior use.Post washing, 40 µL aliquots of the beads were distributed into the sample tubes.Five hundred micrograms (500 µg) of total protein in 300 µL final volume, adjusted with native lysis buffer were added.Samples were incubated for 3 h at 4°C on a rotor, followed by washing with native lysis buffer four times.Post washing, 30 µL of 5× Laemmli buffer was added to the beads and boiled at 95°C for 5 min.Ten micrograms (10 µg) of the lysates (2%) was used as input for the pull-down experiment.Samples were then centrifuged at 13,000 × g for 20 min and supernatant collected was loaded on to SDS-PAGE.The protein transfer and western blotting procedures were performed as described in SDS-PAGE and western blot section. IUPred analysis for disorder prediction Sequence Preprocessing: The primary amino acid sequence of human HSP90β (P08238) and HSP90α (P07900) were extracted in FASTA format.These sequences served as the input for subsequent disorder prediction using the IUPred algorithm.Calculation of Disorder Scores: The IUPred algorithm utilizes energy potentials derived from pairwise amino acid interactions to assess the local structural propensities of each residue in the protein sequence.For each residue, IUPred computes a disorder score within the range of 0 to 1.A score of 0 suggests a higher likelihood of being ordered, while a score of 1 indicates a higher likelihood of being disordered.Threshold for Disorder Classification: To classify residues as either ordered or disordered, a threshold was applied to the calculated disorder scores.A common threshold of 0.5 was employed, designating residues with scores above 0.5 as disordered.The output of the IUPred analysis consisted of a disorder profile, providing disorder scores for each residue in the input protein sequence.Residues were categorized based on the applied threshold, facilitating the identification of regions with a high probability of disorder.All analyses were performed with the default parameters of the IUPred algorithm.The results presented here are based on the specific sequence input and the applied threshold for disorder classification. 
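To make the classification step described above concrete, the thresholding and region extraction can be expressed in a few lines. This is a minimal sketch that assumes the per-residue IUPred scores have already been computed and exported (e.g., as residue/score pairs); it is not the IUPred implementation itself, and the example scores are invented for illustration.

```python
DISORDER_THRESHOLD = 0.5  # residues scoring above this are called disordered

def classify_disorder(scores, threshold=DISORDER_THRESHOLD):
    """Return per-residue calls and contiguous disordered regions.

    scores: list of (residue_number, iupred_score) pairs, e.g. parsed from
    the IUPred output for HSP90beta (P08238) or HSP90alpha (P07900).
    """
    calls = [(pos, score > threshold) for pos, score in scores]

    regions, start = [], None
    for pos, disordered in calls:
        if disordered and start is None:
            start = pos                      # open a new disordered stretch
        elif not disordered and start is not None:
            regions.append((start, prev))    # close the stretch at the previous residue
            start = None
        prev = pos
    if start is not None:
        regions.append((start, prev))        # stretch runs to the end of the sequence
    return calls, regions

# Example with made-up scores for residues 218-232 of the charged linker:
example_scores = list(zip(range(218, 233),
                          [0.62, 0.71, 0.68, 0.55, 0.49, 0.52, 0.60, 0.63,
                           0.58, 0.66, 0.70, 0.72, 0.69, 0.61, 0.57]))
_, disordered_regions = classify_disorder(example_scores)
print(disordered_regions)  # [(218, 221), (223, 232)] for these illustrative scores
```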
Protein complex preparation and docking calculations: The structure comprising HSP90β-HSP70(2)-HOP proteins was developed using the molecular comparative modeling technique, employing Modeller v10.4, the Modeller Python script 90 , and experimental template structures (PDB codes: 7KW7, 8EOB) 10,91 .The cryo-EM structure of human HSP90β (8EOB) served as the basis for obtaining coordinates for HSP90β (protomers A and B) in the developing model.To construct the assembly involving HSP70 and HOP, we utilized the sequences and atomic cryo-EM structure from the HSP90-HSP70-HOP-GR (7KW7) template.As these structures lacked certain residues, including those in the charged linker (Glu222 -Lys273), we incorporated them as intrinsic loops during computational processing.The target sequence for each HSP90β protomer was extracted from Uniprot ID: P08238.After model generation, we selected the optimal model based on the Discrete Optimized Protein Energy (DOPE) score.The final model included full-length HSP90 (excluding a ten-residue N-terminal disordered segment).For HOP and HSP70, we maintained the sequences provided in PDB:7KW7.The validated model, equipped with cocrystal ligands on each HSP90β protomer, was imported into Maestro v13.3 (Schrödinger LLC, 2022-3).Mutagenesis was performed to substitute Ser226/Ser255 with phosphomimetic conditions (Glu226/Glu255) and de-phosphorylated conditions (Ala226/Ala255) in both protomers of HSP90β.The preparation of all complexes utilized the Protein Preparation Wizard, a module for creating reliable, all-atom protein models.This involved restraining the assignment of bonds and bond orders, adding hydrogens, correcting formal charges, and filling missing side chains.Pre-processing steps included generating hetero states, H-bond assignment, and energy minimization using the Optimized Potentials for Liquid Simulations (OPLS3) force field, with a maximum root-mean-square deviation (RMSD) of 0.30 Å, employing the molecular mechanics engine Impact v9.6.Essential water atoms within 5 Å of the binding pocket were retained, while remaining waters were deleted.Structural refinement at neutral pH was carried out through the Epik v6.1 module 92 .The final refined structure served as the receptor for docking simulations.Ligands, such as ATP and ADP, underwent preparation with the LigPrep node, where the optimized ligand minimization algorithm yielded more conformers with numerous rotatable bonds, enhanced efficiency, and robustness.Different possible protonation states based on machine learning were generated, and ligand structures were minimized at pH values within the range of 7.0 and +/-2.0, to guide the selection of protonation states on acidic/basic groups on ligands consistent with their pKa values, using the OPLS_3 force field, Premin, Truncated Newton Conjugate Gradient (TNCG), and Epik v6.1 nodes.Subsequently, a receptor grid was generated around the co-crystal ligand with default parameters.Docking experiments were executed on the nucleotide binding pockets of both protomers using the XP (extra-precision) Glide program (Glide v9.6) and Prime-MMGBSA (molecular mechanics generalized born surface area) modules, respectively.The best poses in the resulting docked complexes served as the initial complex structure for MD simulations 93 .Molecular dynamics simulations: The pentameric assemblies were prepared in the following combinations: 2xHSP90(Ser226Ser255)-2xHSP70-HOP, 2xHSP90(Glu226Glu255)-2xHSP70-HOP-, 2xHSP90(Ala226Ala255)-2xHSP70-HOP, each bound to either ATP or 
ADP.These complexes underwent individual 100 ns all-atomic molecular dynamics simulations using the Desmond v7.1 module of the MAESTRO Suite from Schrodinger (www.schrodinger.com).Before simulations, each assembly was built by embedding water molecules, adjusting temperature and pressure closer to the physiological environment through the OPLS3 force field and TIP4PEW water model.The system was neutralized with counter ions (Na + / Cl -) to balance the net charge in the simulation box.The particle mesh Ewald (PME) method 94 was used for electrostatics with a 10 Å cut-off for Lennard-Jones interactions, and the SHAKE algorithm 95 was applied to restrict the motion of all covalent bonds involving hydrogen atoms.The complex system underwent a six-step relaxation protocol before productive MD simulations.The solvated system was initially minimized with solute restraints and then without solute restraints, utilizing a hybrid method of steepest descent and the LBFGS (limited memory Broyden-Fletcher-Goldfarb-Shanno) algorithm 96,97 .The energy-minimized system underwent a brief 12 ps simulation within the NVT canonical ensemble at a temperature of 10 K, followed by a similar simulation in the isothermal-isobaric (NPT) ensemble at 10 K, with restraints on nonhydrogen solute atoms.Subsequently, the system was simulated for 24 ps in the NPT ensemble at 300 K with limited restraints on nonhydrogen solute atoms.In the final equilibration step, the system was simulated for 24 ps in the NPT ensemble at 300 K without constraints to reach an equilibrium state.The minimized and equilibrated system without restraints was then subjected to a 100 ns NPT simulation for production.The temperatures and pressures of the system in the initial simulations were controlled by Berendsen thermostats and barostats, respectively 96,97 .The relaxed system underwent productive simulations using the Nose'-Hoover thermostat at 300 K and the Martyna-Tobias-Klein barostat at 1.01325 bar pressure.Atomiccoordinate data for each receptor-ligand complex and system energies were recorded every 1000 ps.Residue-pair correlations were calculated along the MD trajectory using the script trj_essential_dynamics.py available in the Schrödinger suite.Additionally, the unexplored cryptic motions, distribution of secondary structural elements, and the array of protein folding in intrinsic disordered regions were thoroughly examined using the extracted meta-trajectory data from 1000 trajectories throughout the simulation period.The secondary structure elements (SSE) index was computed to illustrate the percentage occurrence of alpha-helices (α) and beta-strands (β) during the simulation period, delineated by residue. Immunoprecipitation of mCherry-HSP90 RFP Selector (NanoTag #N0410) resins were equilibrated with lysis buffer to prepare the resin.Cell lysates were then added and incubated with the resins at 4°C with head over tail rotation for 90 min.Following incubation, resins were washed twice with lysis buffer and once with PBS before elution with 2 × sample buffer and incubation at 95°C for 5 min.Eluents were then run on a 12.5% SDS-PAGE.For SILAC samples, heavy and light replicates were immunoprecipitated separately, before combined and separated by SDS gel electrophoresis. 
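Returning to the molecular dynamics analysis above, the per-residue SSE index (the percentage of simulation frames in which a residue adopts an alpha-helix or beta-strand) can be computed from any saved trajectory. The sketch below uses MDTraj rather than the Schrödinger tools employed in the study, assumes a protein-only trajectory, and uses placeholder file names:

```python
import numpy as np
import mdtraj as md

# Placeholder file names; in practice these would be the exported trajectory and topology.
traj = md.load("hsp90_assembly_production.xtc", top="hsp90_assembly.pdb")

# Simplified DSSP codes per frame and residue: 'H' (helix), 'E' (strand), 'C' (coil/other).
dssp = md.compute_dssp(traj, simplified=True)          # shape: (n_frames, n_residues)

helix_pct  = 100.0 * (dssp == "H").mean(axis=0)        # % of frames each residue is helical
strand_pct = 100.0 * (dssp == "E").mean(axis=0)        # % of frames each residue is in a strand
sse_index  = helix_pct + strand_pct                    # per-residue SSE occurrence (%)

# Report the charged-linker segment highlighted in Figure 4 (residues 218-232).
for res, h, e in zip(traj.topology.residues, helix_pct, strand_pct):
    if 218 <= res.resSeq <= 232:
        print(f"{res}: helix {h:.1f}%  strand {e:.1f}%")
```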
Chemical cross-linking Cell lysates, with a concentration of approximately 3 μg μL -1 , underwent cross-linking using disuccinimidyl suberate (DSS; ThermoFisher# 21655) at a concentration of 2.5 mM.This process occurred at room temperature for 1 h.To terminate the reaction, 0.8 M NH 4 OH (Sigma# 09859) was added, reaching a final concentration of 25 mM, and incubated at room temperature for an additional 15 min.The lysates were clarified through two rounds of centrifugation at 16,200× g for 15 min at 4°C before proceeding to separate HSP90 using immobilized PU-H71 or GA. SDS-PAGE and trypsin digestion After elution from PU-or GA-beads, samples were loaded into 12.5% SDS-PAGE gel for separation.The entire lanes were cut into 10-15 bands and processed by in-gel digestion as described previously 19 .Briefly, gel bands were cut into small cubes, washed with 25 mM NH 4 HCO 3 /50% acetonitrile, reduced with 10 mM DTT (in 25 mM NH 4 HCO 3 ) at 56°C for 1 h, alkylated with 55 mM iodoacetamide (in 25 mM NH 4 HCO 3 ) in darkness for 45 min.Gel pieces were washed again with 25 mM NH 4 HCO 3 /50% acetonitrile and evaporated in a speed-vac to complete dryness.The dried gel samples were proteolyzed using varied volumes of trypsin (0.6-1.0 µg depending on the intensity of the gel bands) at 37°C for 4 h, before the extraction of tryptic peptides by 50% acetonitrile/2% acetic acid.Tryptic peptide mixture was concentrated down to ~7 µL before LC-MS/MS analysis.For validation experiments in Figure 3d,e, chemical precipitation and sample preparation for PTM analyses were performed as follows.For in-cell YK-B bait affinity purification, cells were plated in 10 cm plates at 6 × 10 6 cells per plate and treated with 50 µM YK5-B for 4 h.Cells were next collected and lysed in 20 mM Tris pH 7.4, 150 mM NaCl and 1% NP40 buffer.Five hundred micrograms (500 µg) of total protein were incubated with streptavidin agarose beads (ThermoFisher Scientific) for 1 h and beads were washed with 20 mM Tris pH 7.4, 100 mM NaCl and 0.1% NP40 buffer (washing buffer).For in-lysate YK5-B bait affinity purification, cells were lysed in the above-mentioned lysis buffer.Streptavidin agarose beads were incubated with 50 µM YK5-biotin for 1 h, washed and added to 500 µg of total protein and incubated overnight.The beads were then washed with the washing buffer.For PU-H71 beads pull-down, 250 µg of the same protein lysates were incubated with 40 µl PU-H71 beads for 3 h and washed.The samples were applied onto SDS-PAGE.Resulting gels were washed 3 times in distilled deionized H 2 O for 15 min each and visualized by staining overnight with Simply Blue Coomassie stain (Thermo Fisher Scientific).Stained protein gel regions were typically excised into 6 gel sections per gel lane, and completely destained as described 19 .In-gel digestion was performed overnight with MS-grade trypsin (Trypsin Gold, Mass spectrometry grade, Promega) at 5 ng mL -1 in 50 mM NH 4 HCO 3 digestion buffer and incubation at 37°C.After acidification with 10% formic acid (final concentration of 0.5-1% formic acid), peptides were extracted with 5% formic acid / 50% acetonitrile and resulting peptides were desalted using hand-packed, reversed phase Empore C18 Extraction Disks (3M, Cat#3M2215), following an established method 98 .Each of the 6 sections per sample, per gel lane, were excised and separately digested in-gel, at the same time, using the same batch and amount of trypsin.The peptides from each of these gel sections were purified and analyzed by nano-LC-MS/MS 
separately.
LC-MS data acquisition, protein and phosphopeptide identification
Briefly, the digestion mixtures were injected into a Dionex Ultimate 3000 RSLCnano UHPLC system (Dionex Corporation, Sunnyvale, CA) and separated on a 75 μm × 25 cm PepMap RSLC column (100 Å, 2 µm) at a flow rate of ~450 nL min-1. The eluant was connected directly to a nanoelectrospray ionization source of an LTQ Orbitrap XL mass spectrometer (Thermo Scientific, Waltham, MA). LC-MS data were acquired in data-dependent acquisition mode, cycling between an MS scan (m/z 315-2,000) acquired in the Orbitrap and low-energy CID analysis of the three most intense multiply charged precursors acquired in the linear ion trap. The centroided peak lists of the CID spectra were generated using PAVA and searched against the Swiss-Prot protein database using Batch-Tag, a program of the University of California San Francisco Protein Prospector software, version 5.9.2. For identification of proteins in pull-down experiments, a precursor mass tolerance of 15 ppm and a fragment mass tolerance of 0.5 Da were used for protein database searches (trypsin as enzyme; 1 miscleavage; carbamidomethyl (C) as constant modification; acetyl (protein N-term), acetyl+oxidation (protein N-term), Met-loss (protein N-term), Met-loss+acetyl (protein N-term) and oxidation (M) as variable modifications). Protein hits were reported with a Protein Prospector protein score ≥ 22, a protein discriminant score ≥ 0.0 and a peptide expectation value ≤ 0.01 99. This set of protein identification thresholds does not return any substantial false-positive protein hits from the randomized half of the concatenated database. After protein identification, a PTM search was carried out with S/T/Y phosphorylation included as variable modifications among the identified proteins. A threshold of SLIP score > 6 was imposed to keep false phosphorylation site assignment < 5% 100. Identified phosphopeptides were manually inspected by confirming the quality of MS/MS spectra and mass accuracy. Cross-linked peptides were identified using an integrated module in Protein Prospector, based on a bioinformatic strategy developed in the UCSF Mass Spectrometry Facility 41,42,101,102. Key cross-linked peptides were identified and confirmed by manually examining the returned spectrum, peptide scores, mass accuracy and absence from uncross-linked samples. For validation experiments in Figure 3e, MS data acquisition and processing were performed as follows. Desalted peptides were concentrated to a very small droplet by vacuum centrifugation and reconstituted in 10 mL 0.1% formic acid in H2O. Approximately 90% of the peptides were analyzed by nano-LC-MS/MS. A Q Exactive HF mass spectrometer was coupled directly to an EASY-nLC 1000 (Thermo Fisher Scientific) equipped with a self-packed 75 µm × 18 cm reverse-phase column (ReproSil-Pur C18, 3M, Dr.
Maisch GmbH, Germany) for peptide separation. The analytical column temperature was maintained at 50°C by a column oven (Sonation GmbH, Germany). Peptides were eluted with a 3-40% acetonitrile gradient over 60 min at a flow rate of 250 nL min-1. The mass spectrometer was operated in DDA mode with survey scans acquired at a resolution of 120,000 (at m/z 200) over a scan range of 300-1750 m/z. Up to 15 of the most abundant precursors from the survey scan were selected with an isolation window of 1.6 Th for fragmentation by higher-energy collisional dissociation with a normalized collision energy (NCE) of 27. The maximum injection time for the survey and MS/MS scans was 20 ms and 60 ms, respectively; the ion target value (Automatic Gain Control) for survey and MS/MS scan modes was set to 3e6 and 1e6, respectively.
Quantitation of phosphopeptides and crosslinked peptides
Manually confirmed, high-confidence phosphopeptides and cross-linked peptides were quantified by the peak height of the extracted ion chromatogram of each peptide monoisotope mass. For phosphopeptide quantitation, the protein loading of HSP90 peptides in lysates or from pull-down experiments was normalized to a representative, isoform-specific tryptic peptide, ELISNSSDALDK for HSP90α and ELISNASDALDK for HSP90β. Phosphopeptides with different charge states or miscleavages were considered as different measurements for quantitation of each phosphosite. To assess the relative phosphorylation levels of different phosphosites in cancer cells and non-transformed cells, the ion intensity values of all phosphopeptides for each phosphosite were summed. The average ion intensities of each phosphosite between cancer and non-transformed cells were compared. Cross-linked peptides were identified using an integrated module in Protein Prospector, based on a bioinformatic strategy developed in the UCSF Mass Spectrometry Facility 41,42,101,102. Key cross-linked peptides were identified and confirmed by manually examining the returned spectrum, peptide scores, mass accuracy and absence from uncross-linked samples. Cross-linked peptides identified from various samples were pooled together, and the cross-linking propensity of each cross-linked peptide was assessed by its cross-linking percentage 43. The cross-linking percentage for each peptide pair was calculated using the following formula:
%XL = Cross-linked peptide PH / (Cross-linked peptide PH + Dead-end XL 1 PH + Dead-end XL 2 PH),
where the peak height (PH) is the apex peak height in the LC-MS/MS runs. Dead-end XLs are crosslinker-modified peptides in which only one NHS-ester function of DSS is cross-linked to a Lys residue and the other NHS-ester function is hydrolyzed by water.
Homology modeling
The mouse HSP90 sequences for both alpha and beta isoforms were aligned and the models were built using an open-conformation template (PDB: 2IOQ), a closed-conformation template (PDB: 2CG9), and an HSP70-bound model (derived from a cryo-EM structure of the HSP90•HSP70•GR complex 10) using UCSF Modeller. Structural visualization and analysis were carried out using UCSF Chimera.
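As a concrete illustration of the cross-linking percentage (%XL) defined under Quantitation of phosphopeptides and crosslinked peptides above, the calculation for a single peptide pair reduces to one ratio of apex peak heights. The sketch below expresses the result as a percentage; the peak-height values are placeholders, not study data.

```python
def crosslink_percentage(xl_ph, deadend1_ph, deadend2_ph):
    """Cross-linking percentage (%XL) for one Lys-Lys peptide pair.

    xl_ph       -- apex peak height of the cross-linked peptide
    deadend1_ph -- apex peak height of dead-end (mono-linked, hydrolyzed) peptide 1
    deadend2_ph -- apex peak height of dead-end peptide 2
    """
    total = xl_ph + deadend1_ph + deadend2_ph
    return 100.0 * xl_ph / total if total > 0 else 0.0  # expressed as a percentage

# Illustrative peak heights, not study data:
print(crosslink_percentage(2.4e5, 7.1e5, 3.3e5))  # ~18.8 %XL
```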
Statistics and reproducibility
Unless specified above under Protein identification and Bioinformatics analyses, statistics were performed and graphs were generated using Prism 10 software (GraphPad). Statistical significance was determined using Student's t tests or ANOVA, as indicated. Means and standard errors are reported for all results unless otherwise specified. Effects reaching the 95% confidence level (i.e., p < 0.05) were interpreted as statistically significant. No statistical methods were used to pre-determine sample sizes, but these are similar to those generally employed in the field. No samples were excluded from any analysis unless explicitly stated.
AUTHOR CONTRIBUTIONS
… preparation. A.R., P.P., S.J., S.C., S.B. and H.E-B. performed experiments. C.S.D. provided reagents. V.M., C.K., J.L., P.Y., E.deS., A.C., S.M., and M.A. were involved in various aspects of biospecimen handling, including recruitment, procurement, or processing at different stages from surgery to delivery to the laboratory. R.J.C. and P.R.B. provided Protein Prospector and supported data analysis. F.C., T.A.N., G. Chiosis and A.L.B. participated in the design and analysis of various experiments. H.E-B., A.R., S.D.G., G. Colombo and T.A.N. assisted with manuscript writing and data analysis. F.C. and G.C. developed the concept and wrote the paper.
COMPETING INTERESTS
Memorial Sloan Kettering Cancer Center holds the intellectual rights to the epichaperome portfolio. G.C., A.R. and S.S. are inventors on the licensed intellectual property. All other authors declare no competing interests.
Figure 1. Embryonic stem cells and cancer cells share compositionally similar epichaperomes. a Schematic illustrating the biochemical and functional distinctions between epichaperomes, defined as long-lasting heterooligomeric assemblies composed of tightly associated chaperones and co-chaperones, and traditional chaperones. Unlike chaperones, which assist in protein folding or assembly, epichaperomes sequester proteins, reshaping protein-protein interactions and consequently altering cellular phenotypes. The schematic also outlines key principles for the use of PU-probes in epichaperome analysis. b Detection of epichaperome components (chaperones and co-chaperones) through SDS-PAGE (bottom, total protein levels) and native-PAGE (top), followed by immunoblotting. See also Supplementary Fig. 1. c Visualization of HSP90 in epichaperomes using the PU-TCO click probe. See also Supplementary Fig. 2. Gel images are representative of three independent experiments. d Epichaperome constituent chaperones and co-chaperones identified through mass spectrometry analyses of PU-beads cargo. Representative data of two independent experiments. See Supplementary Fig. 3 for the GA-cargo. e Illustration of an isobaric, discriminant peptide pair from ESC lysate samples and HSP90 captured by PU- and GA-beads. Representative data of two independent experiments. f Schematic summary. Both cancer cells and pluripotent stem cells harbor epichaperomes. These epichaperomes undergo disassembly during differentiation processes. Source data are provided in Supplementary Data 1 and in the Source data file.
Figure 2. An enrichment of the closed-like conformation of HSP90 favors epichaperome formation. a Experiment outline. b Plot comparing the cross-linking propensity of Lys residues in HSP90 bound to PU-H71 or GA. Average cross-linking percentages of PU-H71 (x-axis) and GA (y-axis) bound HSP90 cross-linked pairs are shown. Blue circles represent pairs with similar cross-linking propensity (dotted line with a slope of 1). Orange points indicate outlier cross-linked peptides, with cross-linked Lys residues 8 amino acids away and a cross-linking percentage difference ≥ 1.5 standard deviations of replicates. Solid orange circles represent p ≤ 0.05, n = 3 replicate measurements. c Homology model illustrating the HSP90 dimer in the open conformation (template PDB: 2IOQ), favored by geldanamycin (GA), and the closed conformation (template PDB: 2CG9), favored by PU-H71. One HSP90 protomer is colored to indicate the N-terminal domain (NTD, light blue), the middle domain (MD, dark blue), and the C-terminal domain (CTD, green). Cross-linked residues are labeled by pink dots and connected by red dashed lines. d NTD structures of PU-H71 (top, PDB: 2FWZ)- and GA (bottom, PDB: 1YET)-bound HSP90. Source data are provided as Supplementary Data 2.
Figure 3. Phosphorylation of key residues located in the charged linker supports HSP90 incorporation into epichaperomes. a Experiment outline and expected outcomes. b Tandem MS spectra of HSP90 Ser226 (bottom) and Ser255 (top) phosphorylated peptides are presented, supporting the sequence and phosphorylation site identification. c Comparison of the extracted ion chromatogram of the HSP90 Ser255 phosphopeptide in the PU-bead cargo (red trace, left panel) and ESC lysate (black trace, left panel) with a representative unmodified tryptic peptide in the PU-bead cargo (blue trace, right panel) and ESC lysate (black trace, right panel). d Ion intensity values of all phosphopeptides and the ratio of mean peptide intensity for each phosphosite in the samples described in panel a (n = 4 Ca and n = 2 NT). e Ratio of individual peptide intensity for each phosphosite in the samples described in the schematic (S255 n = 5; S226 n = 4; S263 n = 8; S231 n = 5). Source data are provided as Source Data file and as Supplementary Data 3, 6.
Figure 4.
Phosphorylation of key residues located in the charged linker of HSP90 leads to a conformational shift in the linker, exposing the middle domain of the protein. a Model of the HSP90-HSP90-HSP70-HSP70-HOP assembly used for the molecular dynamics simulations. A and B, protomers A and B, respectively. b Protein secondary structure elements (SSE), such as alpha-helices and beta-strands, of the charged linker of protomer A of ATP-bound HSP90 monitored throughout the MD simulation. WT (HSP90 S226/S255), phosphomimetic (HSP90 S226E/S255E) and non-phosphorylatable (HSP90 S226A/S255A) mutants were analyzed. The plot on the left reports the SSE distribution by residue index throughout the charged linker, and the plot on the right monitors each residue and its SSE assignment over time. A schematic illustrating the primary structure of full-length HSP90 with color-coded domains is also shown: NTD, N-terminal domain; MD, middle domain; and CTD, C-terminal domain. The charged linker (CL) and the location of the two key serine residues are also shown (top inset). The gray bar indicates the CL segment encompassing residues 218 to 232. c Cartoon representation of ATP-bound HSP90 protomer A in assemblies containing the phosphomimetic (HSP90 S226E/S255E) or the non-phosphorylatable (HSP90 S226A/S255A) mutants is shown. Green, reference trajectory; gray, representative trajectories of n = 1,000. The inset illustrates the surfaces available for the interaction between HSP90 A and HSP70 A when the CL is in the 'up' conformation. A blue arrow indicates the location of the key beta-strand in the charged linker. See also Supplementary Figs. 5 and 6.
Figure 5. Phosphorylation of key residues located in the charged linker of HSP90 facilitates assembly motions conducive to epichaperome core formation. a Calculated dynamic cross-correlation matrix of Cα atoms around their mean positions for 100 ns molecular dynamics simulations. ATP-bound WT (HSP90 S226/S255), phosphomimetic (HSP90 S226E/S255E) and non-phosphorylatable (HSP90 S226A/S255A) mutant-containing HSP90-HSP90-HSP70-HSP70-HOP assemblies were analyzed. The cartoon below captures the key motions among the different domains of the individual assembly components. Extents of correlated and anti-correlated motions are color-coded from blue to red, representing positive and negative correlations, respectively. The assembly contains two full-length HSP90β proteins (protomer A and protomer B). The two HSP70 proteins (HSP70 A and HSP70 B) and the HOP protein are of the sizes reported, and as per the constructs used in 7KW7. b Cartoon showing assemblies that are preferentially formed when the HSP90 charged linker is either phosphorylated (as in the EE mutant) or not phosphorylated (as in the WT protein).
Figure 9.
Regulation of epichaperome processes in ESC and cancer cells hinges on specific phosphorylation events occurring at key residues within HSP90's charged linker. a Overview of the experimental design and expected outcomes. b Detection and quantification of proteins involved in transducing signaling events that lead to cell proliferation, survival, and protein synthesis control. See Supplementary Fig. 9 for total protein levels and levels sequestered into epichaperomes. Data are presented as mean ± s.e.m.; p-S6 n = 8; p-mTOR n = 3; p-MEK1/2 n = 6; p-AKT n = 5; unpaired two-tailed t-test. c Confocal microscopy shows morphological differences between the cells transfected with either the AA or the EE HSP90 mutant. Micrographs are representative of 96 cells for EE and 62 cells for AA. Scale bar, 10 µm. Data are presented as mean ± s.e.m., n = 8 wells for EE, n = 14 wells for AA, unpaired two-tailed t-test. Source data are provided as Source data file.
Figure 10. Human tissues positive for epichaperomes exhibit p-Ser226 HSP90β positivity, and conversely, those negative for epichaperomes show no or negligible p-Ser226 signal within HSP90's charged linker. a Cartoon illustrating the processing of human tissue for biochemical analyses. Both tumor (T) and tumor-adjacent (TA) tissues, the latter determined by gross pathological evaluation to be potentially non-cancerous, were harvested and analyzed. b MDA-MB-468 breast cancer cells (epichaperome-high) and ASPC1 pancreatic cancer cells (epichaperome-low) served as controls for assessing p-Ser226 HSP90 levels. c The graph presents the relationship between epichaperome positivity and HSP90 Ser226 phosphorylation for the tissues described in panel a. Data represent mean ± s.e.m., with n = 9 tumor (T) and n = 9 paired tumor-adjacent (TA) tissues classified based on epichaperome positivity or negativity, as determined by Native PAGE (see panel d); unpaired two-tailed t-test. d Detection of epichaperomes through native-PAGE (top), and of p-Ser226 HSP90 (middle) and total HSP90 (bottom) by SDS-PAGE, followed by immunoblotting, in tissues from the indicated patient specimens, as in panel a. Blue brackets indicate the approximate position of epichaperome-incorporated HSP90. Note: Obtaining genuinely "normal" tissue adjacent to tumors presents challenges, especially in the case of pancreatic tissue. The relatively small size of the organ and the nature of surgical procedures for pancreatic cancer often lead to the collection of normal samples in close proximity to the tumor. It is crucial to acknowledge that, due to these challenges, we designate potentially normal tissue as tumor-adjacent tissue, recognizing that it may not entirely reflect a truly normal tissue state. PDAC, Pancreatic Ductal Adenocarcinoma; IDC, Invasive Ductal Carcinoma; ILC, Invasive Lobular Carcinoma; ER, Estrogen Receptor; PR, Progesterone Receptor. Source data are provided as Source data file.
Effect of lysergic acid diethylamide (LSD) on reinforcement learning in humans Background The non-selective serotonin 2A (5-HT2A) receptor agonist lysergic acid diethylamide (LSD) holds promise as a treatment for some psychiatric disorders. Psychedelic drugs such as LSD have been suggested to have therapeutic actions through their effects on learning. The behavioural effects of LSD in humans, however, remain incompletely understood. Here we examined how LSD affects probabilistic reversal learning (PRL) in healthy humans. Methods Healthy volunteers received intravenous LSD (75 μg in 10 mL saline) or placebo (10 mL saline) in a within-subjects design and completed a PRL task. Participants had to learn through trial and error which of three stimuli was rewarded most of the time, and these contingencies switched in a reversal phase. Computational models of reinforcement learning (RL) were fitted to the behavioural data to assess how LSD affected the updating (‘learning rates’) and deployment of value representations (‘reinforcement sensitivity’) during choice, as well as ‘stimulus stickiness’ (choice repetition irrespective of reinforcement history). Results Raw data measures assessing sensitivity to immediate feedback (‘win-stay’ and ‘lose-shift’ probabilities) were unaffected, whereas LSD increased the impact of the strength of initial learning on perseveration. Computational modelling revealed that the most pronounced effect of LSD was the enhancement of the reward learning rate. The punishment learning rate was also elevated. Stimulus stickiness was decreased by LSD, reflecting heightened exploration. Reinforcement sensitivity differed by phase. Conclusions Increased RL rates suggest LSD induced a state of heightened plasticity. These results indicate a potential mechanism through which revision of maladaptive associations could occur in the clinical application of LSD. Higher-order cognitive flexibility, on a set-shifting task, was impaired by acute intoxication with LSD in healthy humans (Pokorny et al., 2019).Meanwhile, psilocybin increased higherorder cognitive flexibility (set shifting), subsequent to drug treatment, in individuals with major depressive disorder (Doss et al., 2021).Ayahuasca, another psychedelic non-selective 5-HT 2A agonist, and psilocybin have been shown to increase creative thinking during and after drug administration, which was interpreted as increased psychological flexibility (Kuypers et al., 2016;Mason, Mischler, Uthaug, & Kuypers, 2019).Meanwhile, healthy human behaviour on an outcome devaluation task, used to parse habitual v. goal-directed action, was not impaired by LSD (Hutten et al., 2020). 
Here, we studied healthy human volunteers to examine the effects of LSD on a widely used translational measure of instrumental conditioning and behavioural/cognitive flexibility: probabilistic reversal learning (PRL).In contrast to the set-shifting and outcome devaluation tasks used previously, PRL models fundamental aspects of choice behaviour under uncertainty (probabilistic reinforcement) and when flexibility is required.We explored how LSD altered not only overt choice behaviour during PRL (using classical statistics) but also the underlying learning mechanisms, using computational models of reinforcement learning (RL, using Bayesian statistics), which have not been employed in previous studies.Utilising PRL in a placebo-controlled study of healthy human volunteers, the aim of the current experiment was to inform the psychological mechanisms by which LSD could have salubrious effects on mental health. Based on raw data measures, we predicted LSD would modulate either sensitivity to negative feedback or the impact of learned values on subsequent perseverative behaviour (den Ouden et al., 2013).Measuring 'staying' (repeating a choice) or 'shifting' (choosing another stimulus) after wins or losses assesses sensitivity to immediate reinforcement but does not account for the integration of feedback history across multiple experiences to influence behaviour (Daw, 2011).To this end, we applied computational models of RL.The expected value of choice options, for example, increases or decreases dynamically based on reward or punishment prediction errors (experienced better or worse than expected outcomes).A key objective of this study was to evaluate the effects of LSD on the rate at which value is updated ('learning rates')in essence, does LSD affect how quickly expectations change following reinforcement?Another question of interest was whether LSD modulates exploratory behaviour.We tested two varieties of exploration.First, we addressed whether LSD impacts the extent to which behaviour is guided Psychological Medicine by exploiting the more highly valued choice or, conversely, an exploratory pattern that is less guided by value (termed high or low 'reinforcement sensitivity,' respectively).The second variety of exploration (low 'stimulus stickiness') was value-free rather than value-based in that it represents a tendency to explore (rather than repeat) different choices (stimuli) to what has been chosen previously, regardless of the action's outcome (irrespective of value representations). 
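To make these quantities concrete, a minimal sketch of the kind of trial-by-trial value update and choice rule used here is shown below. The parameter values are arbitrary and the code is purely illustrative of the model class specified in the Methods; the actual analyses fitted these parameters hierarchically in Stan rather than simulating choices.

```python
import numpy as np

def softmax(q):
    e = np.exp(q - q.max())
    return e / e.sum()

def simulate_trial(values, prev_choice, outcome_fn,
                   alpha_rew=0.4, alpha_pun=0.3, tau_reinf=5.0, tau_stim=0.5):
    """One probabilistic-reversal-learning trial with a Rescorla-Wagner-style update.

    values      -- np.ndarray of current value estimates V for the three stimuli
    prev_choice -- index of the stimulus chosen on the previous trial (or None)
    outcome_fn  -- callable returning 1 (reward) or 0 (punishment) for a chosen stimulus
    """
    sticky = np.zeros(3)
    if prev_choice is not None:
        sticky[prev_choice] = 1.0                      # value-free tendency to repeat

    q = tau_reinf * values + tau_stim * sticky         # reinforcement-driven + sticky components
    choice = np.random.choice(3, p=softmax(q))

    r = outcome_fn(choice)                             # 1 = positive feedback, 0 = negative
    alpha = alpha_rew if r == 1 else alpha_pun         # separate reward/punishment learning rates
    values[choice] += alpha * (r - values[choice])     # prediction-error update
    return choice, r, values

# Example usage: values initialised at 0, outcome function mimicking a 75/50/25% schedule.
rng = np.random.default_rng(0)
outcome = lambda c: int(rng.random() < [0.75, 0.50, 0.25][c])
v = np.zeros(3)
choice, r, v = simulate_trial(v, prev_choice=None, outcome_fn=outcome)
```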
Subjects and drug administration Nineteen healthy volunteers (mean age 30.6; 15 males), over the age of 21, attended two sessions at least two weeks apart where they received either intravenous LSD (75 μg in 10 mL saline) or placebo (10 mL saline), in a single-blind within-subjects balanced-order design.Whereas 20 participants were included in the original study (Carhart-Harris et al., 2016b), one participant did not complete the PRL task; therefore, 19 participants are reported here.Demographic information is provided in online Supplementary Table S1.All participants provided written informed consent after briefing on the study and screening.Participants had no personal history of diagnosed psychiatric disorder, or immediate family history of a psychotic disorder.Other inclusion criteria were a normal electrocardiogram (ECG), normal screening blood tests, negative urine tests for pregnancy and recent recreational drug use, a negative breathalyser test for recent alcohol use, alcohol use limited to less than 40 UK units per week, and absence of a significant medical condition.Participants had previous experience with a classic psychedelic drug [e.g.LSD, mescaline, psilocybin/magic mushrooms, or dimethyltryptamine (DMT)/ayahuasca] without an adverse reaction, and had not used these within six weeks of the study.Screening was conducted at the Imperial College London Clinical Research Facility (ICRF) at the Hammersmith Hospital campus, and the study was carried out at the Cardiff University Brain Research Imaging Centre (CUBRIC).Participants were blinded to the condition but the experimenters were not.A cannula was inserted and secured in the antecubital fossa and injection was performed over the course of two minutes.Participants reported noticing subjective effects of LSD five to 15 min after dosing.The PRL task was administered approximately five hours after injection.Once the subjective drug effects subsided, a psychiatrist assessed suitability for discharge.This experiment was part of a larger study, the data from which are published elsewhere (e.g.Carhart-Harris et al. 2016b).Additional information can be found in Carhart-Harris et al. (2016b). Probabilistic reversal learning task A schematic of the task is shown in Fig. 
1a.On every trial, participants could choose from three visual stimuli, presented at three of four randomised locations on a computer screen.In the first half of the task (40 trials), choosing one of the stimuli resulted in positive feedback in the form of a green smiling face on 75% of trials.A second stimulus resulted in positive feedback 50% of the time, whilst the third stimulus yielded positive feedback on only 25% of trials.Negative feedback was provided in the form of a red frowning face.The first stimulus selected was defined as the initially rewarded stimulus; the choice on trial 1 always resulted in reward.The second stimulus that was selected was defined as the mostly punished stimulus, and by definition the third stimulus was then the 'neutral' stimulus.After 40 trials, the most and least optimal stimuli reversed, such that the stimulus that initially was correct 75% of the time was then only correct 25% of the time, and likewise the 25% correct stimulus then resulted in positive feedback on 75% of trials.There were 40 trials in the reversal phase.This is a recently developed version (Rostami Kandroodi et al., 2021) of a widely used PRL task (den Ouden et al., 2013;Lawrence et al., 1999) novel due to the addition of a 50% 'neutral' stimulus in order to distinguish learning to select the mostly rewarding stimulus from learning to avoid the mostly punishing stimulus. Raw data measures of behaviour We examined whether LSD impaired participants' basic overall ability to perform the task by analysing the number of responses made to each stimulus during the acquisition and reversal phases.We measured feedback sensitivity by determining whether participants stayed with the same choice following positive or negative feedback (win-stay or lose-stay).The win-stay probability was defined as the number of times an individual repeated a choice after a win, divided by the number of trials on which positive feedback occurred (opportunities to stay after a win).Lose-stay probability was calculated in the same manner: the number of times a choice was repeated following a loss, divided by the total losses experienced.Note that in previous studies with a choice between only two stimuli (or responses), this metric is usually referred to as 'win-stay/lose-shift', which also captures the tendency to repeat (rather than switch) responses following a win, and the tendency to switch (rather than repeat) choices following a loss.Random choice would result in 50% win-stay and 50% lose-shift; however, in the current paradigm with 3 stimuli, this base rate is 33% (win-)stay and 67% (lose-)shift.We therefore encode both variables with respect to the stay (rather than shift) rate, but they are still conceptually identical to earlier studies.Perseveration was defined according to den Ouden et al. (2013) and was assessed based on responses in the reversal phase.A perseverative error occurred when two or more (now incorrect) responses were made to the previously correct stimulus, and these errors could occur at any point in the reversal phase.The first trial in the reversal phase (trial 41 of 80) was excluded from the perseveration analysis, however, as at that point behaviour cannot yet have been shaped by the new feedback structure.Note again that this metric is not entirely identical to the previous studies cited employing two stimuli, as the base-rate choice for each stimulus is now 1/3, so the 'chance' level of perseverative errors is lower.Null hypothesis significance tests used α = 0.05. 
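The win-stay and lose-stay probabilities defined above can be computed directly from the trial-by-trial choice and feedback sequences. A minimal sketch follows; array names and the toy example sequence are illustrative only.

```python
import numpy as np

def stay_probabilities(choices, outcomes):
    """Win-stay and lose-stay probabilities for one participant.

    choices  -- array of chosen stimulus indices, one per trial
    outcomes -- array of 1 (positive feedback) / 0 (negative feedback), one per trial
    """
    choices, outcomes = np.asarray(choices), np.asarray(outcomes)
    stayed = choices[1:] == choices[:-1]        # did the next choice repeat the current one?
    won = outcomes[:-1] == 1                    # feedback on the current trial

    win_stay = stayed[won].mean() if won.any() else np.nan
    lose_stay = stayed[~won].mean() if (~won).any() else np.nan
    return win_stay, lose_stay

# Example: five trials with three stimuli (0, 1, 2)
ws, ls = stay_probabilities([0, 0, 1, 1, 1], [1, 0, 1, 1, 0])
print(ws, ls)  # 1.0 and 0.0 for this toy sequence
```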
Model fitting, comparison, and interpretation These methods are based on our previous work (Kanen et al., 2019).We fitted three RL models to the behavioural data using a hierarchical Bayesian method, via Hamiltonian Markov chain Monte Carlo sampling implemented in Stan 2.17.2 (Carpenter et al., 2017).Convergence was checked according to R, the potential scale reduction factor measure (Brooks & Gelman, 1998;Gelman, Hill, & Yajima, 2012), which approaches 1 for perfect convergence.Values below 1.2 are typically used as a guideline for determining model convergence (Brooks and Gelman 1998).We assumed the three models had the same prior probability (0.33).Models were compared via a bridge sampling estimate of the marginal likelihood (Gronau et al., 2017a), using the 'bridgesampling' package in R (Gronau, Singmann, & Wagenmakers, 2017b).Bridge sampling directly estimates the marginal likelihood, and therefore the posterior probability of each model given the data (and prior model probabilities), as well as the assumption that the models represent the entire group of those to be considered.Posterior distributions were interpreted using the 95% highest posterior density interval (HDI), which is the Bayesian 'credible interval.'Parameter recovery for this modelling approach has been confirmed in a previous study (Kanen et al., 2019) and is demonstrated in the online Supplementary material. The Bayesian hierarchy consisted of 'drug condition' at the highest level, and 'subject' at the level below.For each parameter, each drug condition (e.g.LSD) had its own mean (with a prior that was the same across conditions, i.e. with priors that were unbiased with respect to LSD v. placebo).This was then merged with the intersubject variability (assumed to be normally distributed; mean 0 by definition, standard deviation determined by a further prior).The priors used for each parameter are shown in Table 1.For instance, the learning rate for a given subject under LSD was taken as: the group mean LSD value for learning rate, plus the subject-specific component of learning rate.The learning rate for a given subject under placebo was taken as: the group mean placebo value for learning rate, plus the subjectspecific component of the learning rate for the same subject.This method accounts for the within-subjects structure of the study design.This was done similarly (and separately) for all other model parameters. To determine the change (LSDplacebo) in parameters, we calculated [group mean LSD learning rate] -[group mean placebo learning rate] for each of the ∼8000 simulation runs and tested them against zero via the HDI.This approach also removes distributional assumptions and provides an automatic multiple comparisons correction (Gelman et al., 2012;Gelman & Tuerlinckx, 2000;Kruschke, 2011). Models The parameters contained in each model are summarised in Tables 1 and 2. With Model 1, we tested the hypothesis that positive v. 
negative feedback guides behaviour differentially, and that LSD affects this. We augmented a basic RL model (Rescorla & Wagner, 1972) with separate learning rates for reward, α_rew, and punishment, α_pun. Positive feedback led to an increase in the value V_i of the stimulus i that was chosen, at a speed governed by the reward learning rate, α_rew, via V_{i,t+1} ← V_{i,t} + α_rew(R_t − V_{i,t}). R_t represents the outcome on trial t (defined as 1 on trials where positive feedback occurred), and (R_t − V_{i,t}) the prediction error. On trials where negative feedback occurred, R_t = 0, which led to a decrease in the value of V_i at a speed governed by the punishment learning rate, α_pun, according to V_{i,t+1} ← V_{i,t} + α_pun(R_t − V_{i,t}). Stimulus value was incorporated into the final quantity controlling choice according to Q^reinf_t = τ_reinf · V_t. The additional parameter τ_reinf, termed reinforcement sensitivity, governs the degree to which behaviour is driven by reinforcement history. The quantities Q associated with the three available choices, for a given trial, were then fed into a standard softmax choice function to compute the probability of each choice: P(choice = i) = exp(Q_i) / Σ_{j=1..n} exp(Q_j), for n = 3 choice options. The probability values for each trial emerging from the softmax function (the probability of choosing stimulus 1) were fitted to the subject's actual choices (did the subject choose stimulus 1?). No further softmax inverse temperature was applied (β = 1; see below), and as a result the reinforcement sensitivity parameter (τ_reinf) directly represented the weight given to the exponents in the softmax function. Model 2 again augmented a simple RL model, but now also described the tendency to repeat a response, irrespective of the outcome that followed it (in other words, the tendency to 'stay' regardless of outcome). With Model 2 we tested the hypothesis that LSD affects this basic perseverative tendency. This was implemented using a 'stimulus stickiness' parameter, τ_stim. The stimulus stickiness effect was modelled as Q^stim_t = τ_stim · s_{t−1}, where s_{t−1} was 1 for the stimulus that was chosen on the previous trial and 0 for the other two stimuli. In this model, we used only a single RL rate, α_reinf. Positive reinforcement led to an increase in the value V_i of the stimulus i that was chosen, at a speed controlled by the learning rate, α_reinf, via V_{i,t+1} ← V_{i,t} + α_reinf(R_t − V_{i,t}). The final quantity controlling choice incorporated the additional stickiness parameter as Q_t = Q^reinf_t + Q^stim_t = τ_reinf · V_t + τ_stim · s_{t−1}. The quantities Q, corresponding to the three choice options on a given trial, were then fed into the softmax function as above. It should be noted that if τ_stim is not in the model (or is zero), then τ_reinf is mathematically identical to the notion of softmax inverse temperature typically implemented as β. The notation τ_reinf is used, however, because it contributes to Q^reinf_t but not to Q^stim_t. A standard implementation of β, by contrast, would govern the effects of both Q^reinf_t and Q^stim_t by weighting the sum of the two (Q_t). Model 3 was the full model that incorporated separate reward and punishment learning rates as well as the stimulus stickiness parameter. With Model 3, we tested the hypothesis that LSD affects both how positive v. negative feedback guides behaviour differentially and how LSD affects a basic perseverative tendency. Again, the final quantity controlling choice was determined by Q_t = Q^reinf_t + Q^stim_t = τ_reinf · V_t + τ_stim · s_{t−1}.
We then examined the relationship between initial learning and perseveration, following den Ouden et al. (2013) (Fig.
1b).LSD enhanced the relationship between the number of correct responses during the acquisition phase and the number of perseverative errors made during the subsequent reversal stage [acquisition correct responses (LSD minus placebo) v. reversal perseverative errors (LSD minus placebo): linear regression coefficient β = 0.56, p = 0.002].Confirming this, making fewer errors during the acquisition phase predicted more perseverative errors when on LSD (β = 0.44, p = 0.003) but not when under placebo (β = 0.04, p = 0.8).Perseverative errors, a subset of all reversal errors, alone did not differ between conditions (t 18 = 0.03, p = 0.98, d = 0.01). Choice of reinforcement learning model The core modelling results are displayed in Fig. 2. We fitted and compared three RL models.Convergence was good with all three models having R < 1.2.Behaviour was best characterised by a RL model with four parameters (Table 2).The four parameters in the winning model were: (1) reward learning rate, which reflects the degree to which the chosen stimulus value is increased following a positive outcome; (2) punishment learning rate, the degree to which the chosen stimulus value is decreased following a negative outcome; (3) reinforcement sensitivity, the degree to which the values learned through reinforcement contribute to final choice; and (4) 'stimulus stickiness', which quantifies the tendency to get 'stuck' to a stimulus and choose it because it was chosen on the previous trial, irrespective of the outcome.The last two parameters resemble the explore/exploit trade-off: low values of stickiness or reinforcement sensitivity characterise two different types of exploratory behaviour. Reward and punishment learning rates First, we modelled all 80 trials in the task (both acquisition and reversal phases) and these results are depicted in Fig. 2a.The reward learning rate was significantly elevated on LSD (mean 0.87) compared to placebo (mean 0.28) [with the posterior 99.9% HDI of the difference between these means excluding zero; 0 ∉ 99.9% HDI].There was also an increased punishment learning rate under LSD (mean 0.48) relative to placebo (mean 0.39) (drug difference, 0 ∉ 99% HDI; Figure 2a 99% HDIs not shown graphically).LSD increased the reward learning rate to a greater extent than the punishment learning rate [(α rew,LSDα rew,placebo ) -(α pun,LSD -α pun,placebo ) > 0; drug difference, 0 ∉ 99% HDI]. Stimulus stickiness and reinforcement sensitivity Modelling both acquisition and reversal contiguously, stimulus stickiness was lowered by LSD (mean 0.23) relative to placebo (mean 0.43) (drug difference, 0 ∉ 90% HDI; Figure 2a), which is a manifestation of increased exploratory behaviour.Reinforcement sensitivity was not modulated by LSD (LSD mean 4.70, placebo mean 5.57; no drug difference, 0 ∈ 95% HDI).This is in line with the absence of an effect of LSD on the tendency to 'stay' following reward or punishment (see analysis of raw data measures above). Relationship between model parameters and raw data behavioural measures Analyses to understand the relationship between computational and raw data measures were conducted.Given the initial finding on the relationship between better acquisition learning and perseveration, the first question addressed was whether the elevated reward learning rate under LSD during acquisition, from the computational model, was predictive of the raw data measure of perseveration from den Ouden et al. 
Relationship between model parameters and raw data behavioural measures
Analyses were conducted to understand the relationship between computational and raw data measures. Given the initial finding on the relationship between better acquisition learning and perseveration, the first question addressed was whether the elevated reward learning rate under LSD during acquisition, from the computational model, was predictive of the raw data measure of perseveration from den Ouden et al. (2013). Simple linear regression showed that under LSD, a higher reward learning rate during acquisition predicted significantly more perseverative errors (β = 26.94, p = 0.02), whereas no such relationship was present when the same participants were under placebo (β = 9.59, p = 0.40). Next, we examined the relationship between the stimulus stickiness parameter from the computational model and the raw data measure of perseveration. Stimulus stickiness during reversal was not significantly correlated with the raw data measure of perseveration, in either the placebo (β = 4.13, p = 0.50) or LSD (β = 11.60, p = 0.09) condition. Further exploratory analyses are reported in the online Supplementary material.
Discussion
There has been a recent surge of interest in the potential therapeutic effects of psychedelics, including LSD. Theorising on the mechanisms of such effects centres on their role in enhancing learning and plasticity. In the current study, we tested these postulated effects of LSD on flexible learning in humans and found that LSD increased learning rates, exploratory behaviour, and the impact of previously learnt values on subsequent perseverative behaviour. Specifically, LSD increased the speed at which value representations were updated following prediction error (the mismatch between expectations and experience). Whilst LSD enhanced the impact of both positive and negative feedback, overall it augmented learning from reward significantly more than it augmented learning from punishment.
Behaviour was more exploratory overall under LSD, as assessed computationally in two ways, consistent with theoretical accounts of psychedelic effects which have predicted increased exploratory tendencies (Carhart-Harris & Friston, 2019). First, LSD decreased stimulus stickiness, which indicates a diminished tendency to repeat previously chosen options, irrespective of reinforcement history (value-free). This effect on stickiness was significant in all phases of the experiment: when considering the entire experiment as a whole (acquisition and reversal), when examining initial learning only (acquisition), and when isolating the reversal phase. In other words, regardless of LSD-induced changes in value-guided choice strategies (elaborated upon below), LSD promoted an overall latent tendency to explore in the form of shifting between choices, irrespective of feedback and value, which was maintained during both stable and changing circumstances. That LSD lowered stimulus stickiness may also be clinically relevant: stimulus stickiness was recently shown to be abnormally high in cocaine and amphetamine use disorders (Kanen et al., 2019).
LSD also modulated value-based exploratory tendencies (indexed by the reinforcement sensitivity parameter), which, by contrast, differed by phase. When looking at the experiment as a whole, there was no effect of LSD on reinforcement sensitivity, although this overall null effect masked the following phase-specific patterns. When examining initial learning only, reinforcement sensitivity was substantially diminished under LSD, indicating a tendency for increased exploration away from the more highly valued choice option. During the reversal phase, meanwhile, reinforcement sensitivity was increased, indicative of a heightened tendency to exploit the choice option that was computed to be more highly valued trial-by-trial, which can be seen as adaptive when circumstances change and rapid reorienting of actions is required.
A shift in the computations underlying choice was also observed in relation to RL rates, during learning to maximise reward and minimise punishment in an initial situation and when adapting actions following contingency reversal. Whereas overall LSD enhanced both the reward and punishment rates (especially for rewards), the increase in punishment learning rate appeared during the reversal phase only. The reward learning rate was elevated in both the acquisition and reversal phases. Together, these learning rate findings suggest that LSD accelerates the updating of value, in a way that is (overall) especially reward-driven, and LSD speeds up learning from negative feedback that is encountered when circumstances change.
Under LSD, better initial learning led to more perseverative responding. The implication is that when a behaviour is newly and more strongly learned through positive reinforcement (i.e. the acquisition phase) under LSD, it may persist more strongly even when that action is no longer relevant (i.e. the reversal phase). These measures of overt performance defined based on feedback are orthogonal to an overall latent tendency towards exploration irrespective of reinforcement history (low stimulus stickiness). Importantly, perseveration (den Ouden et al., 2013) itself, as assessed in the analysis of raw data measures, was not elevated by LSD, nor did it correlate with stimulus stickiness (online Supplementary Table S3).
Given the broad effect of LSD on a range of neurotransmitter systems (Nichols, 2004, 2016), it is not possible to determine the specific neurochemical mechanism underlying the observed LSD effects on learning. Nonetheless, obvious possibilities involve the serotonin and dopamine systems, in particular 5-HT2A and D2 receptors (Marona-Lewicka et al., 2005; Marona-Lewicka & Nichols, 2007; Nichols, 2004, 2016). Specifically, the psychological plasticity purportedly promoted by psychedelics is believed to be mediated through action at 5-HT2A receptors (Carhart-Harris & Nutt, 2017) via downstream enhancement of glutamatergic activity (Barre et al., 2016) and brain-derived neurotrophic factor (BDNF) expression (Hutten et al., 2021; Vaidya et al., 1997). The hypothesis that the present results regarding RL rates are driven by the serotonergic effects of LSD is supported by two recent studies in mice. Optogenetically stimulating dorsal raphé serotonin neurons enhanced RL rates (Iigaya, Fonseca, Murakami, Mainen, & Dayan, 2018), whilst activation of these neurons tracked both reward and punishment prediction errors during reversal learning (Matias et al., 2017). Neurotoxic manipulation of serotonin in marmoset monkeys during PRL, meanwhile, altered stimulus stickiness (Rygula et al., 2015): this implicates a serotonergic mechanism underlying increased exploratory behaviour following LSD administration in the present study.
In addition to affecting the serotonin system, however, LSD also acts at dopamine receptors (Nichols, 2004, 2016), albeit with a far lower direct affinity for dopamine receptors than for 5-HT receptors. Dopamine has long been known to play a crucial role in belief updating following reward (Schultz et al., 1997), and more recent evidence shows that dopaminergic manipulations may alter learning rates (Kanen et al., 2019; Schultz, 2019; Swart et al., 2017). A dopaminergic effect would be in line with our previous study, where genetic variation in the dopamine, but not the serotonin, transporter polymorphism was associated with the same enhanced relationship between acquisition and perseveration as reported here under LSD (den Ouden et al., 2013).
Serotonin-dopamine interactions represent another candidate mechanism that could underlie the present findings. For example, stimulation of 5-HT2A receptors in the prefrontal cortex of the rat enhanced ventral tegmental area dopaminergic activity (Bortolozzi, Díaz-Mataix, Scorza, Celada, & Artigas, 2005). Indeed, the initial action of LSD at 5-HT2A receptors has been proposed to sensitise dopamine neuron firing (Nichols, 2016). LSD action at D2 receptors, albeit with a low binding affinity, may be more pronounced in a late phase of LSD's effects (Marona-Lewicka et al., 2005; Marona-Lewicka & Nichols, 2007), which may be relevant given the relatively long delay between LSD administration and performance of the current task (see Methods). However, arguing against a late dopaminergic effect is a previous study in rodents where the effects of LSD on reversal learning were consistent across four different time lags between drug administration and behavioural testing (King, Martin, & Melville, 1974).
The result of the enhanced coupling of acquisition learning and perseverative responding under LSD is in line with a recent study showing that LSD induced higher-order cognitive inflexibility in a set-shifting paradigm (Pokorny et al., 2019). Importantly, these effects were blocked by co-administration of the 5-HT2A antagonist ketanserin (Pokorny et al., 2019), showing that the LSD-induced impairments were mediated by 5-HT2A agonism, consistent with a 5-HT2A mechanism underlying the present results.
LSD's effects of increasing acquisition-perseveration coupling and worsening set-shifting (Pokorny et al., 2019), in conjunction, suggest that what is newly or recently learnt through reinforcement under LSD is more 'stamped in', and thus may subsequently be harder to update. Whilst these findings are ostensibly at odds with the observation that LSD enhanced plasticity (through enhanced learning rates), they can be reconciled by considering the timing of drug administration with respect to initial learning and tests of cognitive flexibility. In both the present experiment and the previous set-shifting study (Pokorny et al., 2019), all phases of learning (acquisition and reversal) were conducted after LSD administration. In contrast, when acquisition learning was conducted prior to LSD administration, LSD resulted in improved reversal learning (using a reversal paradigm in rats; King et al., 1974). Likewise, when acquisition learning was conducted prior to the administration of a 5-HT2A antagonist, reversal learning was impaired (Boulougouris et al., 2008; also see Furr et al., 2012). Collectively, these findings suggest that whether a prior belief is down- or up-weighted under LSD may depend on whether the prior is formed before or during drug administration, respectively. This observation is of great relevance for a putative therapeutic setting, where maladaptive beliefs will have been formed before treatment.
Another important consideration for reconciling the effects of 5-HT2A receptor modulation on behavioural/cognitive flexibility is that 5-HT2A antagonism can produce opposite effects depending on whether the OFC or striatum is targeted (Amodeo et al., 2017), complicating the interpretation of studies employing systemic administration (Amodeo et al., 2014, 2020; Baker et al., 2011; Odland et al., 2021). Species, strain, dose, compound, route of administration, task specifications (and engagement of cortical and subcortical structures), and reinforcement schedule must also be considered. The application of computational modelling may also help unify effects across studies and species.
While we observed an effect of LSD on acquisition-perseveration coupling, reminiscent of a previous similar observation as a function of genetic variability in the dopamine transporter (den Ouden et al., 2013), we unexpectedly did not observe effects of LSD on acquisition performance or perseveration directly, or on lose-stay and win-stay behaviour. In fact, more broadly, the effects of LSD observed here differ from the effects of neurochemically more specific influences such as acute serotonin reuptake inhibition (Bari et al., 2010; Skandali et al., 2018) or neurotoxic serotonin depletion (Bari et al., 2010; Rygula et al., 2015). In line with this, a previous LSD administration study examining perseveration using an outcome devaluation paradigm found no effect of LSD (Hutten et al., 2020), nor did a study of visual memory during paired associates learning (Family et al., 2020).
Our computational modelling approach, here, was more sensitive in detecting the effects of LSD. It may be possible to reconcile these robust computational effects with the minimal overt behavioural performance effects via the following speculation. Subtle differences in states of underlying plasticity may not translate to overt differences in instrumental or Pavlovian responses, even if the long-term expression of these learned responses would differ. For example, in the memory reconsolidation literature, a previously learned associative memory is believed to become susceptible to disruption (e.g. pharmacologically or behaviourally) following cued reactivation or recall, for a period of several hours known as the 'reconsolidation window' (Lee, Nader, & Schiller, 2017). There is evidence that conducting extinction training (learning) during the reconsolidation window, when mechanisms of plasticity differ, does not alter the overt success or failure of extinction within the session, yet there are long-term effects; extinction learning during the reconsolidation window can be more enduring than extinction learned outside of this window (Schiller, Kanen, LeDoux, Monfils, & Phelps, 2013; Steinfurth et al., 2014). These Pavlovian extinction learning data, showing no difference during extinction itself, may parallel the instrumental conditioning data in the present study, in that we report no observable effect of LSD on most raw data measures (e.g. number of correct responses), yet latent learning processes that relate to purported mechanisms of plasticity, namely learning rate, were affected. Future studies would need to determine whether and how to harness this apparent window of heightened plasticity for therapeutic benefit.
Limitations of this study include the following. We have made a case for the critical involvement of the 5-HT2A receptor; however, we cannot be sure which particular receptor interaction(s) the current findings are caused by. LSD, in addition to binding with high affinity to 5-HT2A receptors, acts at numerous other receptors including D1, D2, 5-HT1A/1B/1D, 5-HT2C, 5-HT5A, 5-HT6, and 5-HT7 (Nichols, 2004). Indeed, 5-HT2C receptors can counter 5-HT2A effects on reversal learning (Boulougouris et al., 2008). A future study co-administering LSD with a 5-HT2A antagonist would help discern the putative 5-HT2A-mediated effects. Additionally, the subjective effects and plasma levels of LSD were not measured at the time of task administration. Furthermore, even though our parameter recovery analysis was successful (see online Supplementary material), we were unable to demonstrate the initial learning-perseveration effect observed in the behavioural data in the simulated data.
In summary, the core result of this study was that LSD enhanced the rate at which humans updated their beliefs based on feedback. RL was most enhanced by LSD following reward and, to a lesser extent, following punishment. LSD also increased exploratory behaviour. These findings have implications for understanding the mechanisms through which LSD might be therapeutically useful for revising deleterious associations.
Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S0033291722002963
Fig. 1. (a) Schematic of the PRL task. Subjects chose one of three stimuli. The timeline of a trial is depicted: stimuli appear, a choice is made, the outcome is shown, a fixation cross is presented during the intertrial interval, stimuli appear for the next trial (etc.) (RT, reaction time). One stimulus delivered positive feedback (green smiling face) with a 75% probability, one with 50%, and one with 25%. The probabilistic alternative was negative feedback (red sad face). Midway through the task, the contingencies for the best and worst stimuli swapped. s, seconds. (b) Better initial learning was predictive of more perseveration on LSD and not on placebo. Shading indicates ± 1 standard error of the mean (S.E.). (c) Trial-by-trial average probability of choosing each stimulus, averaged over subjects during the placebo session. A sliding 5-trial window was used for smoothing. The vertical dotted line indicates the reversal of contingencies. R-P indicates the mostly rewarded stimulus, later mostly punished. N-N indicates the neutral stimulus during both acquisition and reversal. P-R indicates the mostly punished stimulus, later mostly rewarded. Shading indicates ± 1 S.E. (d) Trial-by-trial average probability of choosing each stimulus, averaged over subjects during the LSD session. A sliding 5-trial window was used for smoothing. The vertical dotted line indicates the reversal of contingencies. R-P indicates the mostly rewarded stimulus, later mostly punished. N-N indicates the neutral stimulus during both acquisition and reversal. P-R indicates the mostly punished stimulus, later mostly rewarded. Shading indicates ± 1 S.E. (e) Distributions depicting the average per-subject probability (scattered dots) of choosing each stimulus while under placebo (shown in dark blue) and LSD (light blue). The mean value for each distribution is illustrated with a single dot at the base of each distribution, and the mean values for the probability of choosing different stimuli in each condition are connected by a line. Black error bars around the mean value show ± 1 S.E. The horizontal dotted line indicates chance-level 'stay' behaviour (33%). The global probability of choosing each stimulus did not differ between the placebo and LSD conditions. (f) Raw data measures of feedback sensitivity were unaffected by LSD. Distributions depicting the average per-subject probability (scattered dots) of repeating a choice (staying) after receiving positive or negative feedback under placebo (dark blue) and LSD (light blue). The horizontal dotted line indicates chance-level 'stay' behaviour (33%).
Self-supervised TransUNet for Ultrasound regional segmentation of the distal radius in children
Supervised deep learning offers great promise to automate the analysis of medical images, from segmentation to diagnosis. However, its performance relies heavily on the quality and quantity of the data annotation. Meanwhile, curating large annotated datasets for medical images requires a high level of expertise, which is time-consuming and expensive. Recently, to quench the thirst for large data sets with high-quality annotation, self-supervised learning (SSL) methods using unlabeled domain-specific data have attracted attention. Therefore, designing an SSL method that relies on minimal quantities of labeled data has far-reaching significance in medical imaging. This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) pretraining of TransUNet for segmenting bony regions from children's wrist ultrasound scans. We found that changing the embedding and loss function in SSL-MAE can produce better downstream results compared to the original SSL-MAE. In addition, we determined that pretraining only the TransUNet embedding and encoder with SSL-MAE does not work as well as TransUNet without SSL-MAE pretraining on downstream segmentation tasks.
A. Wrist fracture and ultrasound
Distal radius fractures are one of the most common fractures in the Emergency Department and account for 8%-15% of adult injuries [1]. As children have more pliable bones [2], wrist fractures are relatively more common in children than in adults, accounting for up to one-third of all pediatric fractures [3]. Currently, wrist fracture diagnosis mainly relies on radiographs; however, radiation exposure brings additional risks to infants and young children. Ultrasound (US), with the advantages of no radiation, low cost, and high portability compared to radiography, could be a potential alternative for wrist fracture detection. The detection and classification of wrist distal radius fractures are based on displacements and comminution of bone shape [4]. Therefore, developing an automatic system for accurate bony region segmentation is a critical step in wrist fracture diagnosis. Nevertheless, compared to wrist fracture detection using radiographs, there is little research on wrist fracture detection with US, due to its intrinsic characteristics such as speckle noise and blurred boundaries, which make it difficult to interpret and segment.
B. Image segmentation and TransUNet
Most classical medical image segmentation models are developed based on "encoder-decoder" architectures like U-Net [5], which allow precise segmentation at the pixel level. While the convolutional kernel is good at dealing with adjacent information, it limits the receptive field over an image, making such models poor at capturing long-range information. Global information is particularly crucial for segmentation, where local patches are often labeled based on the global image context. Transformers, initially designed for natural language processing tasks [6], utilize a self-attention embedding mechanism to make models pay more attention to regions with consequential information. Global information is always retained during the training process of Transformers. Some recent publications have formulated image segmentation as a sequence-to-sequence problem and incorporated Transformers into their architectures to leverage contextual information at every stage [7], [8]. Inspired by the U-Net shape and the success of the Transformer-based image classification model ViT [9], Chen et al. [10] proposed a new segmentation model, TransUNet, which leverages the advantages of both U-Net and Transformers. Similar to U-Net, TransUNet is a two-stage network that consists of an encoder and a decoder. The embedding, constructed from convolutional layers, is used as a feature extractor to generate a feature map from the input image to the ViT-backbone encoder. The TransUNet decoder side is analogous to U-Net's decoder. As in U-Net, skip-connections connect features after the embedding directly to the decoder. The authors stated that TransUNet surpassed state-of-the-art segmentation models, including U-Net.
C. Self-supervised learning and Masked AutoEncoder
Although deep learning has made a huge contribution to disease detection and organ/tissue segmentation in medical imaging, its application is largely restricted by limited labeled medical imaging datasets. Unlike natural images, medical image labeling requires people with years of medical training, making it costly and difficult to obtain. Hence, finding a method to train models with limited labeled data is a crucial step toward the widespread application of deep learning in medical imaging. Self-supervised learning (SSL) is an increasingly popular pretraining method in which models learn the internal and underlying features of images through proxy tasks such as positive/negative image pairing [11], gray-scale image colorization [12] and inpainting [13]; models are then initialized with the pretraining weights and fine-tuned for the downstream tasks on a small number of labeled images. SSL has been used for medical imaging datasets including US [14] and histology images [15]. Felfeliyan et al. [16] applied different distortions to arbitrary areas of unlabeled data and used improved Mask R-CNN models to predict the distortion type and loss information. He et al. [17] proposed Masked Autoencoders (SSL-MAE) for self-supervised pretraining of ViT-backbone models. They randomly masked part of the images after patch embedding and trained the model to restore the image with an encoder-decoder architecture. As the fine-tuning task could be quite different from the pretraining reconstruction, only the encoder weights were kept for downstream tasks. SSL-MAE has been shown to be successful in classification and segmentation tasks with ViT and ViT-based Mask R-CNN models.
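As a rough illustration of the masking step described above, the sketch below selects a random subset of patch-embedding tokens to keep for the encoder and records the indices of the masked tokens that the decoder must later reconstruct. It is a simplified, framework-agnostic sketch, not the SSL-MAE reference implementation, and the token shapes in the example are only placeholders.

```python
import numpy as np

def random_masking(tokens, mask_ratio=0.75, rng=None):
    """Randomly mask patch tokens, MAE-style.

    tokens : (num_patches, embed_dim) array of patch embeddings
    returns (visible_tokens, visible_idx, masked_idx)
    """
    rng = rng or np.random.default_rng()
    num_patches = tokens.shape[0]
    num_keep = int(round(num_patches * (1.0 - mask_ratio)))

    perm = rng.permutation(num_patches)      # random shuffle of patch indices
    visible_idx = np.sort(perm[:num_keep])   # tokens passed to the encoder
    masked_idx = np.sort(perm[num_keep:])    # tokens the decoder must reconstruct
    return tokens[visible_idx], visible_idx, masked_idx

# Example: 196 patch tokens of dimension 768, 75% masked as in the paper
tokens = np.zeros((196, 768), dtype=np.float32)
visible, vis_idx, msk_idx = random_masking(tokens, mask_ratio=0.75)
```

In the MAE scheme only the visible tokens are forwarded through the encoder; learnable mask tokens are re-inserted at the masked positions before the lightweight decoder reconstructs the image.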
In this study, we extended the SSL-MAE application to TransUNet to determine the utility of SSL-MAE for wrist 2D-US bony region segmentation by TransUNet. We also explored modifying the original loss function and embedding of SSL-MAE. Our ultimate goal was to obtain an accurate model from a small labeled training dataset.
A. Datasets
Data was collected prospectively at the Stollery Children's Hospital ED with institutional ethics approval. 118 children aged 0-17 with wrist trauma received 2D-US examinations with a Lumify probe in 5 locations: dorsal, proximal dorsal, radial, volar, and proximal volar. Due to logistical and technical issues, 7 children received fewer than 5 views of US exams and 7 children received more than 5 views. A musculoskeletal sonographer with 10 years of experience labeled the bony region for each patient using ITK-Snap, a freeware tool for manual medical image segmentation (Fig. 3). We converted the US video into a sequence of single-image scans for our study. Only slices with distinct and clear views, in which the sonographer felt confident of the pathology, were labeled and used. The dataset was randomly split into training, validation and test sets based on patient ID to avoid data leakage. For SSL-MAE pretraining, as most adjacent US images looked similar, we selected 1 out of every 10 scans from the original training set for image reconstruction. We then selected 10% of the pretraining data with bony region labeling for segmentation fine-tuning. All validation and test set images were kept for segmentation model evaluation. Details of the dataset can be found in Table I.
B. SSL-MAE Self-supervised pretraining
The SSL-MAE architecture was utilized for self-supervised pretraining. The SSL-MAE pretraining process can be divided into four stages: (1) patch embedding; (2) random masking of part of the patch embeddings; (3) the encoder (input: patch embeddings from the unmasked regions); and (4) the decoder for image reconstruction (input: latent features generated by the encoder plus mask tokens) (Fig. 1). To accommodate the TransUNet model, we changed (1) the original SSL-MAE ViT encoder to the TransUNet encoder and (2) the SSL-MAE patch embedding to a ResNet50 backbone embedding, which is the same as the TransUNet embedding. There is a normalization layer between the encoder and decoder in the original SSL-MAE; as its impact on the downstream segmentation task was not notable, it was retained for SSL-MAE pretraining. Influenced by Felfeliyan's work [16], we investigated changing the original mean squared error (MSE) loss function to a root mean squared error (RMSE) + mean absolute error (MAE) loss over the masked patch reconstruction. All other architectural details were kept the same as in the SSL-MAE paper. The encoder and ResNet50 embedding were initialized with ImageNet-pretrained weights for SSL-MAE pretraining.
C. Segmentation fine-tuning
Weights of the models pretrained with SSL-MAE were loaded for the TransUNet embedding and encoder. TransUNet pretrained on ImageNet was set as a baseline model for comparison. Models were then fine-tuned on an extremely small training set. We used a binary cross-entropy + Dice loss. The model with the smallest validation loss was saved as the best model. We set the segmentation threshold based on the validation set probability maps using Otsu's method, by minimizing intra-class intensity variance and maximizing inter-class variance.
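The two training objectives described above can be sketched as follows: an RMSE + MAE reconstruction loss computed only over the masked patches for pretraining, and a binary cross-entropy + Dice loss for segmentation fine-tuning. This is an illustrative sketch of the loss definitions as described in the text, not the authors' training code; equal weighting between the paired terms is an assumption.

```python
import numpy as np

def pretrain_loss(pred_patches, target_patches):
    """RMSE + MAE reconstruction loss over the masked patches only.

    pred_patches, target_patches : (num_masked_patches, patch_pixels) arrays
    """
    diff = pred_patches - target_patches
    rmse = np.sqrt(np.mean(diff ** 2))
    mae = np.mean(np.abs(diff))
    return rmse + mae                      # equal weighting assumed

def finetune_loss(probs, labels, eps=1e-7):
    """Binary cross-entropy + Dice loss for the predicted segmentation mask.

    probs, labels : arrays of the same shape with values in [0, 1]
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    intersection = np.sum(probs * labels)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(labels) + eps)
    return bce + (1.0 - dice)
```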
D. Implementation and Evaluation
All networks received 3-channel images that were normalized and resized to 224 × 224, and were optimized using the AdamW optimizer. With hyperparameter tuning, we set the learning rate to 0.0001, the weight decay to 0.05 and the batch size to 16 for all experiments.
For the self-supervised pretraining task, input images were augmented with a random horizontal flip before being forwarded into the model. A mask ratio of 0.75 was selected for pretraining after hyperparameter tuning. All pretraining models were trained for 1200 epochs, and all fine-tuning models were trained for 150 epochs. A V100 GPU on a Compute Canada server was used for model training and evaluation. Models were implemented in PyTorch. 1870 unlabeled images were used for the pretraining task, 187 labeled images were used for training, and 4215 and 3822 images were used as the validation and test sets, respectively.
Cosine similarity was used as the evaluation metric for SSL-MAE reconstruction over the masked area. The Dice similarity coefficient (DSC) and the Jaccard index were used as the segmentation evaluation metrics. For all metrics, a higher value implies better performance.
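For reference, the evaluation metrics named above can be computed as in the following minimal sketch (binary masks for the DSC and Jaccard index, flattened pixel vectors for cosine similarity); this is a generic illustration rather than the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Intersection-over-union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def cosine_similarity(recon, original, eps=1e-12):
    """Cosine similarity between flattened reconstruction and original pixels."""
    a, b = recon.ravel(), original.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```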
Fig. 2 and Table II show the SSL-MAE reconstruction results visually and quantitatively. In general, TransUNet successfully restored the masked region of the original image. The masked area reconstructions by TransUNet are quite close to the masked area of the original image (shown in white rectangles). Changing the default loss function to the RMSE+MAE loss or changing the ViT embedding to the ResNet embedding achieved higher cosine similarity between the original image and the reconstruction. The image reconstruction results show that the models learned global and local information as well as a dense image representation during the SSL-MAE pretraining process.
B. Wrist US bony region segmentation
TransUNet embedding and encoder pretrained with SSL-MAE were then fine-tuned for bony region segmentation. Table II presents quantitative segmentation results on the test set. TransUNet pretrained on ImageNet was used as the baseline model. The experiment results (Table II) indicate that the self-supervised pretrained models have performance very close to TransUNet pretrained on ImageNet. The ImageNet-pretrained model achieved the highest DSC (0.837) and Jaccard index. SSL-MAE pretraining led to a slight decrease in model performance (from 0.837 to 0.811-0.831). Out of all experiments, Model B, which is SSL-MAE with the ViT encoder and MSE loss, returned the lowest DSC (0.811). Results show that altering the loss function from the MSE loss to the RMSE+MAE loss (Models D, E) and replacing the SSL-MAE ViT patch embedding with the TransUNet ResNet50 patch embedding (Models C, E) improved Model B's performance but did not outperform ImageNet pretraining (Model A).
Qualitative results in Fig. 3 show that all models produced visually good segmentations. In the upper row image, there is a deceptive artifact that was segmented as a bony region by some models (A, B, D). However, TransUNet with embedding pretrained with SSL-MAE largely shrank the artifact prediction (C, E). The segmentation images further support the finding that changing the SSL-MAE default ViT patch embedding can help with the fine-tuning task.
The segmentation training and validation loss plot of all models is shown in Fig. 4. It shows that the TransUNets pretrained with SSL-MAE and the RMSE+MAE loss (Models D and E) converge faster and have lower and more stable validation losses compared to the other models at the beginning (epochs 0-100). The other three models have similar convergence speeds.
IV. DISCUSSION
The objective of this study was to determine the advantage of utilizing SSL-MAE pretraining compared to conventional ImageNet-pretrained TransUNet for wrist US bony region segmentation. The quantitative results show that SSL-MAE-based pretraining methods achieve accuracy close to the baseline (ImageNet pretraining), but are not able to outperform it. A possible explanation for SSL-MAE TransUNet not surpassing the baseline (TransUNet pretrained with ImageNet) is that only the embedding and encoder weights of the SSL-MAE TransUNet were used for the downstream task, and the decoder weights were left out, while skip-connections and the decoder play a crucial role in the function of TransUNet. Based on the results presented in the TransUNet paper [10], adding skip-connections greatly improved model performance and made TransUNet the best among all other models. However, since SSL-MAE applies random masking after embedding and there is a risk of information leakage from the embedding to the decoder, in this study the skip-connections were not used during image reconstruction. Therefore, our next steps will be to include both skip-connections and the decoder in a future study with SimMIM self-supervised pretraining [18], which applies random masking before patch embedding.
Results show that using a combination of two loss functions (RMSE+MAE) for pretraining has a positive effect on convergence speed. This could be attributed to the combination of losses allowing the pretraining task to learn a more variant representation. This suggests that combining multiple loss functions instead of using a single MSE loss for SSL-MAE pretraining is beneficial for downstream segmentation tasks and could help the downstream task converge faster. We will investigate the effect of other loss function combinations in future research.
By comparing the cosine similarity of the SSL-MAE reconstruction over masked regions, we can see that higher performance on the pretraining task does not necessarily lead to higher performance on the downstream task. It is worth noting that in the SSL-MAE pretraining stage, the unmasked area recoveries were not as good as the masked areas and had noticeable artifacts, particularly in dark regions. This is because the unmasked regions were not learned by the model during training. Our results are comparable to Xie's findings in masked SSL [18].
Currently, TransUNet+SSL-MAE has only been pretrained using wrist US scans; in the future we will examine the transferability of features from US scans of other body regions or other modalities.
V. CONCLUSION
This study applied the Masked AutoEncoder SSL technique to TransUNet pretraining on children's wrist US and fine-tuned TransUNet for bony region segmentation on an extremely small training set with only 187 images. Results showed that applying a loss function combination (RMSE+MAE) during the SSL-MAE pretraining stage improved the downstream segmentation task compared to using the default MSE loss. Pretraining the TransUNet patch embedding and encoder did not provide a noticeable improvement in the segmentation task.
Fig. 1. Masked Autoencoders. The ViT patch embedding was changed to a ResNet50 patch embedding and the ViT encoder was changed to the TransUNet encoder.
Fig. 2. Masked image reconstruction with SSL-MAE training. The white rectangle shows the comparison between the masked and reconstructed area.
Fig. 3. Wrist 2D-US bony region segmentation results. Details of models A-E can be found in Table II.
Fig. 4. Segmentation training and validation loss plot.
TABLE II. SSL-MAE reconstruction and wrist US bony region segmentation results (Seg. = segmentation, CS. = cosine similarity).
Modeling autism: a systems biology approach
Autism is the fastest growing developmental disorder in the world today. The prevalence of autism in the US has risen from 1 in 2500 in 1970 to 1 in 88 children today. People with autism present with repetitive movements and with social and communication impairments. These impairments can range from mild to profound. The estimated total lifetime societal cost of caring for one individual with autism is $3.2 million US dollars. With the rapid growth in this disorder and the great expense of caring for those with autism, it is imperative for both individuals and society that techniques be developed to model and understand autism. There is increasing evidence that individuals diagnosed with autism present with a highly diverse set of abnormalities affecting multiple systems of the body. To date, little to no work has been done using a whole body systems biology approach to model the characteristics of this disorder. Identification and modeling of these systems might lead to new and improved treatment protocols and to better diagnosis and treatment of the affected systems, which might improve quality of life by themselves and, in addition, might also help the core symptoms of autism, given the potential interconnections between the brain and nervous system and all the other systems being modeled. This paper first reviews research which shows that autism impacts many systems in the body, including the metabolic, mitochondrial, immunological, gastrointestinal and the neurological. These systems interact in complex and highly interdependent ways. Many of these disturbances have effects in most of the systems of the body. In particular, clinical evidence exists for increased oxidative stress, inflammation, and immune and mitochondrial dysfunction, which can affect almost every cell in the body. Three promising research areas are discussed: hierarchical modeling, subgroup analysis and modeling over time. This paper reviews some of the systems disturbed in autism and suggests several systems biology research areas. Autism poses a rich test bed for systems biology modeling techniques.
Background
Autism is the fastest rising developmental disorder in the world today. In the US the rates of autism have risen from 1 in 2500 in 1970 [1] to 1 in 88 today [2]. Autism is defined behaviorally, and is characterized by impairments in social behavior, stereotypic movements and difficulties in communicating [3]. Autism presents a burden upon both families and society as a whole. The estimated total lifetime societal cost of caring for one individual with autism is $3.2 million US dollars. This includes direct costs such as medical, therapeutic, educational and child and adult care. This figure also includes indirect costs such as loss of productivity of both the individual with autism and their caregivers [4]. In the past autism was considered purely a psychological [5] or neurological disorder [6]. There is increasing evidence that it is a highly diverse disease affecting multiple systems of the body. Some systems with strong evidence of involvement are the metabolic, gastrointestinal, immunological, mitochondrial, and neurological [7,8]. Identification and modeling of these systems may lead to new treatments. It is hard to predict all the new treatments that would result from a systems approach, but the first would be better targeting of treatments. At present, physicians often rely on therapeutic trials and on psychotropic drugs not approved for autism [9].
One of the difficulties in describing the biology of autism is that it appears to have multiple etiologies. Some children have gastrointestinal disease, while others do not [10]. Some children have frank immune disorders, while others appear healthy [11]. Some show signs of autism from birth, while others appear to have a period of normal development, and then regress [12]. In addition to the difficulties this presents for modeling autism, the complex etiology can be a confounding factor in many autism studies as the different subgroups are not apparent using just the defining behavioral characteristics. Currently, those going in for an autism evaluation do not get a comprehensive workup. These patients often cannot articulate their problems or lack the cognition to request an evaluation, so better lab workups are needed. Many of these patients present with behavioral challenges, so the testing procedures should be all-encompassing and as minimally invasive as possible. This whole-body approach to modeling could potentially generate the parameters for a comprehensive evaluation or intake that would best guide treatment. Another difficulty in understanding autism is that the various systems involved interact in complex and highly interdependent ways. This complexity points to a new paradigm in autism research using systems biology. In addition, autism poses particular difficulties as the scale of information to be modeled varies widely, from the molecular level to the anatomical. The diverse systems involved in autism and its complex etiology make the development of new techniques to model autism and mine its data imperative. This paper will first review the systems that are altered in people with autism, and then present some of the challenges autism presents to systems biologists.
Genetics, metabolism and oxidative stress
Autism has an established genetic component. Studies of twins show a concordance of 0-10% in dizygotic twins and 70-90% in monozygotic twins [13,14]. However, the search for single autism genes has not been fruitful. It appears that autism results from a combination of relatively high-frequency genes. The current model predicts that between 10 and 100 possible genetic variants may be responsible [15]. The rising rates of autism and the fact that the concordance of identical twins is not 100% support the theory that autism results from a combination of genetic and environmental factors [16][17][18]. Several genetic variants have been associated with increased risk for autism. The variants found so far are mostly associated with differences in metabolism, rather than in brain structure. The MET promoter variant rs1858830 allele "C", found at increased rates in autism, is associated with neuronal growth and development, but is also involved in immune function and gastrointestinal repair [19,20]. The fact that this genetic variant is present in 47% of the general population gives credence to the assertion that there is an environmental component to the development of autism. Many of the genetic variants at increased prevalence in autism are associated with the folic acid, transmethylation and transsulfuration metabolic pathways. Some of these genes are MTHFR, COMT, GST, RFC and TCN2. As with the MET variant, these are common in the general population. These variants decrease the activity of enzymes and decrease the efficiency of the body's ability to resolve oxidative stress, methylate genes and detoxify exogenous and endogenous toxins [21].
Oxidative stress occurs when production of Reactive Oxygen Species (ROS) and Reactive Nitrogen Species (RNS) exceeds the body's ability to neutralize them. ROS/RNS are free radicals, highly reactive molecules which can damage many parts of the cell. ROS/RNS occur through the energy production process in the mitochondria and through environmental sources. The mitochondrion is the main source of ROS/RNS and has evolved a system to neutralize the oxidants. The most important among these defences is glutathione (GSH). If the mitochondrial GSH pool is low, increased mitochondrial ROS production can occur. GSH is also the main antioxidant for extra-mitochondrial parts of the cell. GSH is produced by the sulfuration pathway as shown in Figure 1. The sulfuration pathway is linked to the methylation and folic acid pathways, and any perturbation of those pathways will affect the production of GSH. The methylation pathway provides methyl groups, CH3, to many functions in the body. S-adenosylmethionine (SAM) transfers methyl groups to be used in over 150 methyltransferase-dependent methylation reactions in the body [22], most notably the methylation of genes. This transfer results in S-adenosylhomocysteine (SAH). SAH can be reversibly transformed into homocysteine and adenosine by the SAH hydrolase (SAHH). Homocysteine can then be either remethylated to methionine or transferred to the sulfuration pathway to create glutathione. The pathway flux is influenced by the relative amounts of the components. If the activity of methionine synthase (MS) is reduced, either through reduced availability of its cofactor cobalamin (vitamin B12) or through other impairment, less homocysteine will be converted to methionine to continue the cycle. This will result in more homocysteine and SAH, which reduces SAM-dependent methylation processes. Methylation serves many important functions in the body. It is used epigenetically to turn genes on and off. A methylated gene will not be expressed [23]. Methylation is also important in the function of neurotransmitters, neurohormones, myelin, membrane phospholipids, proteins and creatine [24]. The activity of MS also determines the proportion of homocysteine shunted into the sulfuration pathway to make GSH. As the MS cofactor cobalamin is easily oxidized, oxidative stress will cause more homocysteine to be turned into GSH. In a properly functioning system this additional GSH would resolve the oxidative stress. But in autism there is evidence of continued oxidative stress [25]. Metabolic markers of oxidative stress have been found to be elevated in children with autism. Levels of glutathione, the main cellular antioxidant, were reduced. In addition, the oxidized disulfide form of glutathione (GSSG) was increased, resulting in a doubling of the GSSG/GSH ratio. The ratio of plasma S-adenosylmethionine (SAM) to S-adenosylhomocysteine (the SAM/SAH ratio) was reduced [26,27]. Evidence of increased lipid peroxidation was found, which might indicate oxidative stress [28]. Oxidative stress can have a negative effect on many systems in the body. It has been implicated in cancer, cardiovascular disease, and autoimmune disease [29][30][31]. Oxidative stress is particularly destructive to the brain. The brain has higher energy requirements, a high concentration of polyunsaturated fatty acids and lower reserves of GSH. Oxidative stress is also increased in schizophrenia, bipolar disorder and Parkinson's disease [32][33][34][35].
These interacting cycles are of great importance in autism as they have the potential for therapeutic intervention. Defects in MTHFR enzyme can be bypassed by supplementing the 5-CH3THF form of folic acid. Supplemental cobalamin can increase the efficiency of MS [36]. Supplements of other enzyme cofactors might also be of benefit [7]. The impairment of the metabolic pathways in autism can result from environmental influences in addition to genetics. Heavy metals [22] and pesticides [37,38] have been shown to inhibit the enzymes often deficient in autism. This could form a feedback loop, where insufficient activity of these systems allows toxins to remain, where they can further impair the detoxification systems. Mitochondrial system Mitochondria are the organelles responsible for the energy production in most eukaryotic cells. They convert the energy from carbohydrates and fats into adenosine triphosphate (ATP) through the process of cellular respiration. ATP is used to power most cellular functions. Mitochondria are also involved in signalling, cellular differentiation, and apoptosis, as well as the control of the cell cycle and cell growth [24]. The mitochondrion utilizes a complex series of chemical reactions to produce the ATP. During this process free radicals, including the particularly damaging super oxide, are produced. Since free radicals are so destructive, the mitochondrion has a series of defences to reduce the free radicals. If, due to genetic defects or acquired dysfunction, more free radicals are produced than the defences can reduce, oxidative stress can occur [39]. Mitochondrial disease occurs when there are mutations in the mitochondrial DNA. Mitochondrial disease is associated with a multitude of disorders including hypotonia, mitochondrial encephalomyopathy, cardiomyopathy and a range of endocrine, hepatic or renal tubular dysfunctions, myoclonic epilepsy and mitochondrial myopathy and developmental delay among others. Mitochondrial disease has many different presentations as a child can inherit a mixture of normal and mutated mitochondria from the mother [40]. There is clinical evidence of mitochondrial disease and dysfunction in autism [41]. Although only a small minority of people with autism have mitochondrial DNA (mtDNA) mutations, the rate of autism is higher among children with mitochondrial disease [42][43][44][45][46]. In addition, the task of finding genetic mutations influencing mitochondrial function is confounded by the fact that many mitochondrial functions are encoded by nuclear DNA. Mitochondrial dysfunction occurs when there is reduced mitochondrial function without genetic changes. Mitochondrial dysfunction and oxidative stress has been implicated in a variety of neurodegenerative diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS) and Huntington's disease (HD). Since the brain has high energy demands, it is more susceptible to damage from faulty mitochondria [47]. Mitochondria can be inhibited by many stressors, but chief among them are metals such as mercury, arsenic, cadmium and lead [48,49]. Pesticides and industrial chemicals have been found to inhibit mitochondrial function [50]. In addition, people with autism have been found to have higher levels of the bacterium clostridium in their guts. Clostridium produces proprionic acid, which inhibits the oxidative phosphorylation of the mitochondria [8]. 
Although most people with autism have no discernible mutation indicating primary mitochondrial disorder, labwork gives evidence to reduced mitochondrial function, namely elevated plasma lactate, hyperlactacidemia and increased lactate/pyruvate ratio. Rarely have mtDNA changes been found in people with autism with clinical signs of mitochondrial dysfunction [51][52][53]. In addition, levels of enzymes associated with resolving mitochondrial produced radical production have been found to be lower in people with autism [54]. In addition to producing ATP, mitochondria perform the important function of sequestering calcium. Calcium is also used as a biologic signal between the mitochondria and the endoplasmic reticulum. Neuronal calcium signalling causes the release of neurotransmitters and can affect the speed of signals. Diseases with defects in the mitochondrial calcium pathways have a high Co-morbid occurrence of autism [55]. Post mortem studies of autistic brains show alterations in calcium homeostasis. This study also showed a possible connection between ionized calcium levels and the immune system [56]. There are several pathways for impaired mitochondrial function to affect the brain. The brain has high energy demands and a limited ability to neutralize free radicals, thus impaired mitochondria might be damaging to neurons [57]. Mitochondrial dysfunction could also lead to reduced frequency of neuron firing, particularly of inhibitory neurons [58]. Mitochondrial dysfunction could also affect the brain indirectly, through the immune system. Mice with mitochondrial deficiency have reduced number of immune cells [59], and supplementation of mitochondrial nutrients improve immune function of Type 2 diabetic rats [60]. Mitochondrial dysfunction in areas outside the brain could lead to hepatic production of VLCFA-containing lipids arising from impaired mitochondrial fatty acid beta-oxidation. These lipids can lead to microglial activation, and release of the neurotoxin glutamate [61]. Immune system There is strong evidence of immune dysfunction in children with autism. Relatives of children with autism have increased rates of autoimmune diseases [62]. Imbalances of immune system cells and cytokines are found in many different parts of the immune system of people with autism. Total levels of lymphocytes are reduced [63,64]. The serum immunoglobulin subtypes show abnormal patterns. In particular there is often a skewed Th1-Th2 helper ratio, with most people with autism showing a Th2 predomination [64,65]. T2 skewing results in increased antibodies which can induce allergies and autoimmune reactions. Food allergies are common in children with autism [66]. Th2 skewing also makes chronic viral infections more likely. Skewing also occurs in the serum immunoglobulin subtypes. Immunoglobulins are antibodies formed by the B cells to create humoral, persistant immunity. Immunoglobulins IgM, IgA, and total IgG are depressed while IgG subtypes IgG2 and IgG4, and total IgE are increased [67][68][69][70]. Increases of pro-inflammatory cytokines along with reductions of regulatory cytokines have been found [71]. The immune system has the ability to affect the mitochondria. Cytokines such as TNFα and IL6 can facilitate calcium influx and contribute to mitochondrial dysfunction possibly contributing to the deficits of autism through the mitochondrial system [72]. Extracellular mitochondrial DNA and anti-mitochondrial antibodies have been found in the serum of children with autism [73]. 
There are several avenues for the immune system to induce autistic behaviors. Immune dysregulation could result in generalized inflammation in the brain [74]. Inflammation in the brain has been linked to a number of psychiatric diseases including schizophrenia [75], bipolar disorder [76], Alzheimer's disease [77] and depression [78]. Multiple studies have found a correlation between abnormal levels of immune factors and core autistic deficits such as speech, mood and social deficits [79][80][81][82][83][84]. Another study found that the more the levels of the cytokines IL-1, IL-5, IL-8 and IL-12p40 deviated from the norm, the more severe the stereotypical behaviors [85]. Challenge with nasal allergens during the low-pollen winter months resulted in regression in 55% of children with autism as measured by the Aberrant Behavior Checklist [86]. Children with autism have been reported to have fewer aberrant behaviors, particularly in speech, during fever, as reported in a prospective study [87]. This gives further support to an immunological component [88]. The interaction between the immune system and the brain can present in several variations. Neuropeptides can modulate the immune system by recruitment of the innate immune system and chemotaxis [11]. In mouse models, decreased lymphocytes result in impaired learning and memory [89]. Autoimmunity is present in some cases. Anti-brain antibodies have been found in children with autism, though no evidence of demyelination has been found [11]. A study of 93 children with autism found that 75% had autoantibodies to the folate receptors in the central nervous system (CNS). Impairments of these receptors can lead to reduced levels of folate in the CNS and Cerebral Folate Deficiency (CFD). The levels of folate receptor antibodies were highly correlated with cerebrospinal fluid 5-methyltetrahydrofolate concentrations, thus indicating possible CFD in the tested children. There are structural similarities between the folate receptors and proteins found in milk [90]. A milk-free diet, in addition to high-dose folinic acid supplementation, has been found to decrease the autoantibody titer and improve functioning in younger patients [91,92]. These immunological differences point to treatment options. Replacement of deficient lymphocytes in mice resolved the learning and memory difficulties [89]. Treatment of allergies often results in improvement in autistic behaviors such as hyperactivity and irritability [66]. An early study found that treatment with intravenous immune globulin in ten children with autism resulted in better speech, eye contact, focus and awareness of surroundings [93].
Gastrointestinal system
Incidence of gastrointestinal (GI) disease among those with autism varies widely, depending on exclusion criteria and whether the study was prospective or retrospective. A prospective study showed GI symptoms in 80% of patients with autism [94]. These symptoms include abdominal pain, chronic diarrhea and or constipation, and gastro esophageal reflux disease [10]. GI disease has been confirmed via endoscopy in several studies [95][96][97]. Inflammation was found throughout the GI tract, including reflux esophagitis, inflammation of the stomach and duodenum, and abnormal carbohydrate digestive enzyme activity. Other studies have found chronic patchy inflammation and lymphonodular hyperplasia. This is different from the pattern seen in classical inflammatory bowel disease, with infiltration of T cells and plasma cells into the epithelial layers of the mucosa [68,97].
Lymphocyte infiltration into the epithelial layers of the gut lining and crypt cells has been found on endoscopy. In addition, there were IgG antibodies deposited onto the epithelium and complement immune system activation. This might be indicative of an autoimmune process [98]. There is evidence of increased intestinal permeability in people with autism [99][100][101][102][103]. Increased intestinal permeability was even found in 43% of children with autism without clinical signs of bowel dysfunction [101]. Intestinal permeability allows larger molecules that would normally stay in the gut to cross into the bloodstream. Plasma and urinary concentrations of oxalate were greatly elevated in children with autism, which may be a result of increased intestinal absorption [104]. Increased permeability can lead to allergy and autoimmune processes. There appear to be multiple reasons for the increased permeability. The dietary protein gluten can bind to the CXCR3 receptor, resulting in increased zonulin levels. Zonulin regulates the opening of the tight junctions in the gut [105]. Ingested toxins such as polychlorinated biphenyls can also open the tight junctions in the gut [106]. Increased incidence of dysbiosis, an imbalance of intestinal flora, has been noted in children with autism [99,107]. Dysbiosis can result from use of antibiotics. As beneficial bacteria are killed, antibiotic-resistant pathogenic organisms can take their place. It has been theorized that toxins produced by pathogenic organisms may be affecting the brains of individuals with autism. In addition, decreased levels of disaccharide digestive enzymes have been noted in children with autism [99]. There are anecdotal reports of improvement of autistic behavior on restricted diets. Some experimental studies have reported improvements, including in socialization, speech, strange and unusual behavior [108,109], stereotyped behaviors, attention/hyperactivity [110] and physiological symptoms [109]. One study of the casein/gluten free diet considered children with and without GI symptoms separately. It found greater improvement in autistic behaviors in children with gastrointestinal symptoms compared to those without [109]. The reported improvements may be due to several reasons. Removal of allergens may result in lessened autoimmune reactions [66]. Removal of gluten may reduce intestinal permeability [103,105]. Removal of dietary proteins for which there is insufficient enzymic activity may reduce dysbiosis [111]. The brain has the potential to directly affect the functioning of the gut. Stress has been implicated in Irritable Bowel Syndrome, with alterations of the intestinal barrier function, altered balance in enteric microflora, exaggerated stress response and visceral hypersensitivity [112]. Antidepressants [113] and therapy [114] have been found to be effective treatments for irritable bowel syndrome (IBS) and inflammatory bowel disease (IBD). There is also a finding that the brains of patients with IBS have increased hypothalamic gray matter compared with controls, though it is unknown whether the brain changes result from long-term IBS or are preexisting [115].
Neurological system
Among the body systems involved in autism is obviously the brain. Anatomical differences in the cerebellum and amygdala have been noted in multiple studies, and other regions have been inconsistently identified as diverging from the average [116]. Decreases in Purkinje and granular cells have been noted [117].
Macrocephaly is present in about 20% of people with autism studied, with a general upward trend in brain size in other people with autism. The increase appears to be disproportionately from white matter enlargement. The cause of the macrocephaly is not known, though larger brains are prevalent among first-degree, unaffected relatives. Neuroinflammation is one postulated cause [118]. Minicolumns in the neocortex have been postulated as the fundamental unit of cognition [119]. Minicolumns in autistic brains appear to be narrower, with tighter spacing and higher neuron density [120]. Whether this is a sign of pathology is unclear, as the same variation occurs in autopsies of three distinguished scientists [121]. Autism does occur more often in families of mathematicians, engineers and physicists [122]. It has been theorized that narrow minicolumns facilitate discrimination and more finely tuned activities, while wider minicolumns would facilitate generalization. This is consistent with the behavioral observations of stimulus overselectivity in autism. Stimulus overselectivity is the neglect of some features and the overly focused attention on other features, to the detriment of the observation of the whole [123]. Evidence also exists for an increased ratio of excitatory to inhibitory neuronal activity in the autistic brain [119,124]. Functional MRI studies provide evidence of enhanced local connectivity and reduced global connectivity in the autistic brain. This might result in an over-analysis of smaller features and an impairment in synthesizing the information into a coherent whole [125]. It has been suggested that a feature in the development of autistic traits is a low signal-to-noise ratio in neural signals. In murine models, constant undifferentiated noise will indefinitely delay the maturation of neurons responsible for processing sound. A similar low signal-to-noise ratio in multiple systems in the autistic brain may be responsible for the impairments observed [126]. This would be consistent with the underconnectivity theory. Neuronal synchrony may be impaired if presynaptic and postsynaptic neurons do not fire within 100 ms of each other [127]. Brain hypoperfusion has been noted in several studies of subjects with autism. Interestingly, the region affected can vary widely. Hypoperfusion can result from structural abnormalities or from global effects such as oxidative stress [7]. Seizures are present in 30% of people with autism [128]. In addition, subclinical seizures are often present, and treatment with anti-epileptics can result in mental improvement [129,130].
Modeling autism
All of the systems described above interact in highly complex ways. To date, little research exists in autism modeling outside of the genetic and neurological systems. Finding commonalities between autism and other conditions may lead to new treatments. Rzhetsky used statistical models to find genetic overlaps between autism, bipolar disorder and schizophrenia [131]. Individual subsystems of importance in autism have been modeled [132,133], but work needs to be done in modeling combinations of systems. It is clear that autism poses a challenging problem for modeling due to the high level of interactions between the different elements [134]. Figure 2, which is necessarily incomplete, shows some potential interactions between the systems discussed in this paper. For example, an analysis of children with both autism and mitochondrial disease found that a high proportion, 70%, regressed during a fever [135].
This illustrates just one example of an intersystem interaction between the mitochondrial, immune and neurological systems. The dotted lines in Figure 2 indicate how even the environment might be affected by the presence of autism. For example, food allergies or special diets would change the environment through different food choices. Fecal incontinence in older children would change the activities the child would be exposed to. Energy deficits from mitochondrial dysfunction could affect school activities. And being oversensitive to sensory input would change activities and family dynamics. Much work has been done investigating the genetic basis of autism. Additional work needs to be done to find and cluster the genes involved in autism. Modeling autism will require an integration of both systems and scales. A few potential research areas are presented below.
Hierarchical modeling
Modeling autism is complex due to the different physiological scales involved. Issues of importance to model range from the organ level to the genetic. Most systems biology to date has emphasized the "lower" levels, with a strong emphasis on direct genetic interactions. Outside of a few systems, such as the cardiovascular [136], less work has been done on an organ scale. To create a true model of the human body, the microscopic and macroscopic need to be integrated. One way to do this is to use a hierarchical system. Modules can be developed to model the scale being considered, with appropriate links between levels. Techniques have been borrowed from the systems engineering and software engineering communities to aid and formalize these connections between modules. An example is BioUML, an open-source platform for multilevel biology modeling [137]. Hierarchical modeling using rule-based models has been implemented at a cellular level [138]. A hierarchical approach allows for separation of development of models for subsystems, but global effects of different substances and conditions need to be considered too. Studies of trans-organ and trans-system effects of substances are a relatively unexplored field. For example, oxidative stress affects the mitochondria directly [24], but also the larger systems such as the brain [47]. Mitochondrial stress may also affect the brain indirectly. Dysregulation of mitochondrial dynamics has been implicated in Parkinson's disease [139]. Mitochondrial stress may lead to lipid peroxidation, leading to reactive aldehyde generation in the liver, and finally to microglial activation and neuronal death [61]. Inflammation can affect many body systems. Inflammation can also be part of a feedback mechanism where inflammation creates conditions which create or perpetuate inflammation [140]. Xenobiotic substances must also be taken into account. Many exogenous substances are not typically included in existing models. Toxins such as PCBs, pesticides and heavy metals can affect the efficiency of enzymes often deficient in autism and need to be considered as a potential causative element [18,38]. In addition, the effect of toxins in combination may not be the same as the effect of the toxins in isolation [141][142][143]. The microbiome, the complex ecosystem of intestinal flora, may have an impact on many systems in the body either through immunological effects, or through microbial metabolites such as the propionic acid produced by Clostridium [144].
Special diets and supplements used by many on the autism spectrum may affect the composition of the microbiome in addition to possibly changing the function of enzymes [145,146].
Identifying subgroups
In spite of autism's many common behaviors, it has become evident that autism has a complex etiology and multiple subgroups. The development of autism appears to be a complex interaction of genes and environmental factors. Since most cases of autism are idiopathic, there are an unknown number of subgroups that may be present. Treating autism as homogeneous will obscure the differences required to ascertain the variances needed for proper treatment. Identification of subgroups would aid in both research and treatment. As David Amaral, President of the International Society for Autism Research, states, "There is not going to be rapid progress in autism research unless we subtype" [147]. This subtyping can be done on the basis of genes or clinical data. Clustering has been tried using behavioral symptoms but has had little success at identifying latent factors [148][149][150]. The benefits of subgrouping are as follows. Subgrouping the population might result in subgroups that have distinctive symptoms and pathology that are already familiar in the medical literature, and that can draw upon treatments that work in existing treatable conditions. For example, if one subgroup is a variant of a known syndrome, we can possibly benefit from the treatments known in the context of that syndrome. Subtyping would reduce the use of therapeutic trials, allowing more targeted treatment. Another benefit that accrues from subgrouping is in prevention. If we know the sequelae of another similar condition, we can take appropriate action to include appropriate preventive measures in the treatment protocol. For example, if seizures are a symptom of the similar known syndrome or condition, a periodic EEG evaluation could potentially be included in the treatment protocol. Biomarkers can be used for clustering subgroups. Many of the metabolic, immunologic, proteomic, genetic and anatomical differences listed above can be used to search for subgroups [151,152]. Biomarkers can also be identified with more advanced methods [16,153]. An important consideration is that the biomarkers used be clinically relevant, chosen to maximize the potential for treatment [154]. For example, the following parameters could be included in a feature vector in the subgroup calculation algorithm, for the purpose of clustering:
- Genetics. This can include genetic panels, such as mitochondrial panels, or the results of microarray testing.
- Lab test results, such as the above-mentioned metabolic, immunologic and proteomic biomarkers.
- Symptoms and severity as a function of time. These could be "hard" symptoms such as the presence and type of epilepsy, or "soft" symptoms such as parent reports of sociability.
- Treatments and their effectiveness. The treatments could include steps to address some of the disease markers discussed above, such as methylcobalamin and folinic acid [36] for methylation issues and carnitine [61] for mitochondrial issues.
The feature vector would be a vector with both specific values and binary markers, such as a 1 for the presence of a polymorphism or other hard symptom and a 0 for its absence. For example, a child with the MTHFR 677 genotype, a tGSH:GSSG ratio of 8.6 and no epilepsy could be represented by the feature vector [1, 8.6, 0]. Once in numerical form, a variety of pattern recognition techniques can be used.
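As a purely illustrative sketch of this encoding step (the record field names, cut-offs and values below are hypothetical, not drawn from any study cited here), mixed binary and continuous clinical data could be turned into feature vectors as follows:

```python
import numpy as np

# Hypothetical clinical records; field names and values are invented for illustration only.
patients = [
    {"mthfr_677": 1, "tgsh_gssg_ratio": 8.6, "epilepsy": 0},
    {"mthfr_677": 0, "tgsh_gssg_ratio": 3.1, "epilepsy": 1},
    {"mthfr_677": 1, "tgsh_gssg_ratio": 7.9, "epilepsy": 0},
]

FEATURES = ["mthfr_677", "tgsh_gssg_ratio", "epilepsy"]

def to_feature_vector(record):
    """Binary markers stay 0/1; continuous lab values are kept as-is
    (scaling across the whole cohort is applied later, before clustering)."""
    return np.array([float(record[f]) for f in FEATURES])

X = np.vstack([to_feature_vector(p) for p in patients])
print(X)  # each row is one child's feature vector, e.g. [1.0, 8.6, 0.0]
```

The point of the sketch is only that heterogeneous clinical observations end up as rows of a numeric matrix, which is the form the clustering techniques discussed next require.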
One popular clustering technique is the k-means [155]. The k-means algorithm is essentially a density finder. It assigns each input vector, using an indicator function, to a cluster defined by a prototype vector. The algorithm then minimizes the global average squared Euclidean distance from each input vector to its prototype. This optimization changes the position of the prototype vector to reflect Euclidean density patterns. The prototype center is the average of the input vectors assigned to it and is thus potentially representative of a subgroup. One weakness of the k-means is that it performs a hard assignment of each input vector to a cluster. An input vector is either entirely in a cluster or not at all. This would not match situations where there might be an overlap of symptoms. Fuzzy techniques would be of value in these cases. Fuzzy set theory allows intermediate levels of belonging to member sets, between 0 and 1. The fuzzy c-means (FCM) is a fuzzy generalization of the k-means algorithm that allows input vectors to belong to more than one prototype [156]. The FCM also does not suffer from the stability problems that sometimes occur in the k-means, when an input vector switches back and forth between two prototypes, changing the prototypes in the process. An important issue with clustering algorithms is the number and validity of clusters. The k-means and FCM algorithms will find the number of clusters specified during program initialization, regardless of the actual number of clusters. Some clustering algorithms can produce clusters that are empty or degenerate. Many practitioners will heuristically try different numbers of clusters and assess the fit. There are also various methods that attempt to quantify the validity of clusters [157]. The Self-Organizing Map (SOM) maintains a proximity relationship between clusters and can be useful for visualization [158]. The above techniques are unsupervised. Unsupervised techniques rely wholly on the input data to find clusters or groupings in the data. Supervised techniques incorporate additional knowledge about the expected groupings to guide the cluster development process. This additional information, if available, can aid in complex and high-dimensional problems. Support Vector Machines [159,160] and a variety of neural network algorithms can be used to find patterns in the data [161]. Although supervised algorithms can, in general, outperform unsupervised algorithms, additional "ground truth" data is often unavailable. This ground truth can be information such as genes already associated with a phenotype or the reaction to an intervention. It could also be symptoms that are themselves used as inputs, such as the previously mentioned presence of epilepsy. Most of the algorithms mentioned above measure similarity based on the Euclidean distance metric. Euclidean and other Minkowski l_p norms, such as the "city block" distance measure, will represent hyperspherical patterns well. Other distance measures are possible, such as various correlation measures [162] and non-spherical distance measures such as the Mahalanobis distance [161]. Another issue is the scale of the data. Expected results in lab tests may vary by several orders of magnitude. Therefore, it is usually advisable to normalize the data before use in an algorithm, as in the sketch below. Perhaps the most critical issue is the "curse of dimensionality".
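Before turning to that issue, a minimal sketch of the hard and soft assignments discussed above may help (a toy example: the data are arbitrary placeholders, the cluster count is chosen by hand, and the fuzzy memberships are computed around k-means centers rather than by a full FCM optimisation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# X: one row per child (see the feature-vector sketch above); values are placeholders.
X = np.array([[1, 8.6, 0], [0, 3.1, 1], [1, 7.9, 0], [0, 2.8, 1]], dtype=float)

# Lab values can differ by orders of magnitude, so standardize before clustering.
Xs = StandardScaler().fit_transform(X)

# Hard assignment: k-means with a user-chosen number of clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)
print("hard labels:", km.labels_)

# Soft assignment: FCM-style graded memberships relative to the k-means centers.
m = 2.0  # fuzzifier; m -> 1 recovers hard assignments
dist = np.linalg.norm(Xs[:, None, :] - km.cluster_centers_[None, :, :], axis=2) + 1e-9
u = 1.0 / (dist ** (2.0 / (m - 1.0)))
u /= u.sum(axis=1, keepdims=True)  # each row sums to 1: partial membership in every cluster
print("fuzzy memberships:\n", np.round(u, 2))
```

The hard labels correspond to the k-means indicator function, while the membership matrix illustrates how a child with overlapping symptoms can belong partly to more than one prototype.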
The curse of dimensionality refers to the somewhat counterintuitive properties of high-dimensional spaces, whereby additional information can result in a lessening of discernment. The simplest of the implications of high-dimensional space is that the amount of data required to adequately cover a volume increases exponentially with dimension. It can be shown geometrically that most of the volume of a high-dimensional Gaussian is contained in its tails rather than at its center. This has obvious implications for distance-based algorithms. The distances from the center of a cluster to the various points are concentrated in a small interval, and the relative differences from the various data points to the prototype become essentially the same. Thus discriminatory power can decrease with added information, even if that additional information has discriminatory power in and of itself [163,164]. That has implications for finding subgroups in a complicated disease such as autism, which might require a large number of features. Feature selection will alleviate the curse of dimensionality but may exclude features needed to find less prevalent subgroups. The curse of dimensionality may also be avoided by using subspace methods or hierarchical clustering. Another issue prevalent in autism data is the abundance of missing data. One cause of missing data would be different protocols for different studies, resulting in similar but not identical feature vectors. When utilizing clinical data, physicians will not perform all tests on all patients, resulting in missing data when patients are combined. Therefore, techniques need to be utilized to make the most of the data that is present [165,166]. Numerical data in autism research has particular challenges. The data can refer to disparate body systems. Data can be problematic to integrate across studies and research centers. For example, studies can have different selection criteria, experimental conditions, and goals. Research centers can have different testing procedures, which can lead to varying results. Data is often not precise. Fuzzy techniques should be incorporated, as many of the data considered will not be easily quantifiable, such as parent reports of behavior. Also, what might be considered outlier data may in fact be important. It may be representative of the extreme values that are evident in autism data [167]. There are myriad kinds of information that might be useful in determining autism phenotypes. As mentioned before, this might include items such as genotype information and lab results. It might also include items such as parent ratings of diarrhea odor. It is obvious that a value of '1' in these three categories would have very different meanings, although numerically they would be the same. Incorporating domain knowledge into the identification of subgroups will alleviate many of the problems noted above. As shown in the preceding sections, there is much qualitative information about autism contained in the medical literature. Most of it comes from single-system studies. Techniques need to be discovered to integrate this information together. One way to incorporate domain knowledge is to embed causal information into the solution [168]. Some preliminary, simple subgrouping has already shown promise. An analysis of the gluten and casein elimination diet showed greater improvement in symptoms in children with gastrointestinal symptoms compared to those without [109]. This information can help practitioners decide whether to recommend restrictive diets.
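As a purely illustrative aside on the distance-concentration point made above, the following synthetic example (random Gaussian data with no clinical meaning) shows how the relative spread of distances to a prototype shrinks as the number of features grows:

```python
import numpy as np

# Illustrative only: relative distance contrast shrinks as dimension grows.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    points = rng.normal(size=(1000, d))   # synthetic points around a prototype at the origin
    dist = np.linalg.norm(points, axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative spread of distances = {contrast:.2f}")
# As d grows the nearest and farthest points become almost equally far from the
# prototype, which is why adding many weak features can blunt a distance-based clustering.
```

This is why feature selection, subspace methods or hierarchical clustering, as noted above, become important when the candidate feature list is long.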
It has been proposed that there may be mitochondrial [58], intestinal permeability [103] and immune [11] subgroups in autism, but the picture is probably more complex than that, as many children may belong to multiple subgroups. Thus it is imperative to develop subgroups that have clinical significance for treating the symptoms of autism, not just statistical validity. For example, one could possibly discover a subtype of autism that presents with clinical or subclinical seizures of a certain characteristic type. Since the treatment of seizures is, by itself, a well-studied area, we could potentially establish a treatment protocol for patients in this subgroup, using treatment studies of seizure drugs in patients who also present with autism. This would result in a new treatment for those with autism, in contrast to using a seizure medication as an off-label drug without clear evidence of efficacy in this population.
Time-dependent modeling
Another issue of importance is the time scales involved. Autism is a developmental, not a static, disease. Disease progression might start prenatally and extend throughout childhood. And of course, the child's body is growing and changing. Modeling incorporating time progression has been primarily on the genetic or cellular level. Frameworks have been developed for parameter adjustments during phenotype transitions [169]. Molecular connectivity maps incorporating differentially expressed genes have been used to investigate the relationship of aging to neurological and psychiatric diseases [170]. Another time range to be considered is the progression through generations. Transgenerational changes have been shown with common toxicants. Low-level bisphenol A exposure during pregnancy in mice resulted in transgenerational alterations in gene expression and behavior [171]. Another possible avenue for children with autism would be that impairments in the mother's methylation and sulfur pathways might result in a concentration of toxins in the mother. She would then pass on a greater-than-normal amount of toxins to her child prenatally [172] and through breastfeeding [173]. This would impair the detoxification systems of the child from an early age, resulting in an even greater build-up of toxins. If this child, a girl, has children, she would pass on an even greater toxic load to her children. As the effects of toxins are more severe the earlier they are introduced, this might lead to developmental delays, including autism. Thus a non-genetic, non-epigenetic trans-generational inheritance could be occurring. A recent study showed a three-fold increase in the rate of autism among the descendants of survivors of the mercury-induced Pink disease (infantile acrodynia). The study did not separate out matrilineal descendants, so it is impossible to determine whether there were toxins passed in utero, or whether the increased incidence was a result of a genetic hypersensitivity to mercury [174]. This sort of inheritance can also happen in other systems [175]. Inducement of diabetes in pregnant rats will result in increased prevalence of diabetes and obesity in the offspring. This can lead to gestational diabetes in the children and perpetuation of the diabetes through generations, through environmental causes [176,177]. Another source of time-dependence is that the brain itself is a state machine, in the sense that future characteristics depend on past characteristics, the various interventions employed or not employed at a certain time, and so on.
Simplified modeling with reasonable assumptions can potentially be employed to answer questions of generic value. An example would be "Are outcomes better for children with regression who were treated with antiepileptic medications prior to puberty, compared to children who received such treatment later, after puberty?" Another example would be "Do children who exhibit conditions such as gastrointestinal abnormalities or seizures generally tend to lose these symptoms after a certain age?" If so, did these children receive a certain therapy, either medical, educational or behavioral, at a certain age? In summary, a time-dependent model will shed more light on brain plasticity and its contribution to the outcomes that we see in this population. In order to introduce this complexity, we propose enhancing our models using Dynamic Time Warping (DTW) [178] or a more complex model with state information, similar to hidden Markov models, in which the body is assumed to occupy a state that produces certain symptoms or observations and transitions to other states according to the model. Estimating these models and predicting outcomes would be the most complex of the techniques proposed in this article, and would be the goal for modeling such a complex time-varying system.
Discussion and conclusions
This paper contains, of necessity, an incomplete review of the issues involved in autism. Research is exploding in this area and new findings are being published every month. It is clear that the complexity of autism presents both a challenge and an opportunity for systems biologists. Modeling autism requires new techniques to be developed to harness and tame the complexity of interactions. For example, in one possible chain of interactions, impairment of the detoxification system could allow toxins to accumulate and cause mitochondrial dysfunction [48,49], which could cause immune dysfunction [179,180], which could cause gastrointestinal dysfunction, which could then affect the brain [181]. This is not to imply that this relation is the cause of autism. In fact, the whole relation could go the other way, with stress inducing bowel dysfunction [112]. The bowel dysfunction could, through opening of tight junctions, induce immune activation [182], which could contribute to mitochondrial dysfunction [72]; finally, the resultant oxidative stress could cause more resources to be used in the production of GSH, perturbing the metabolic pathways [183]. And in fact, the chain is not ordered. Gastrointestinal dysfunction could impair mitochondrial function directly through the clostridial production of propionic acid [8]. These interactions are a purely hypothetical thought experiment and are not to be represented as causes. But even so, it is apparent that the number of possible interactions of systems in autism grows almost exponentially. This necessitates a systems approach. Autism could be considered a model for other complex diseases. A similar interplay between genetic and environmental factors is suspected in many diseases such as cancer and diabetes. Since many of the genetic variants that predispose children to autism are common in the general population, findings in autism may have much broader implications for the population in general. Autism is the most rapidly increasing developmental disability, with enormous costs to individuals and to society. The importance of modeling autism research cannot be overstated.
In summary, a systems approach to modeling autism can potentially lead to the following concrete benefits. First, having a comprehensive, evolving digital data model for autism gives us a platform to capture the ongoing research in an analysable format. The model itself can "learn" as results are incorporated into the system as training data. Second, immediate tools such as a detailed hierarchical intake or follow-up questionnaire could result from the system, based on its knowledge of subtypes and interconnections, leading to better clinical care for this population. Third, the system can be used for hypothesis generation, suggesting possible research topics for clinical trials. Autism research findings need to be mined, integrated and modeled to help not just future generations, but also to improve the outcomes for the current generation of people with autism.
Terrorism in time of the pandemic: exploiting mayhem
ABSTRACT
Despite the world's overwhelming preoccupation with the COVID-19 pandemic, the threat of international and domestic terrorism is not in decline according to available indicators. The angst that the pandemic induced in millions of people, and the incapacitation of major functions and institutions of the world's societies, are exploited by both jihadist and far-right terror organisations for the spread of conspiracy theories aimed to fuel hate against their alleged nemeses, the encouragement of easy attacks against vulnerable targets, and the spread of bedlam and confusion intended to bring down governments and promote the terrorists' agenda. In this paper, we illustrate and discuss terrorism trends manifest during the COVID-19 pandemic and consider the threat these trends pose to the world's security.
Business as usual
Despite the overriding media attention to the COVID-19 pandemic and its near-total eclipse of security issues, the terrorism milieu has hardly taken a pause from its deadly pursuits or suspended the execution of its plans. Just in the week of March 11-17, 2020, Islamic State (IS) launched significant attacks in seven countries: Egypt, Niger, Nigeria, the Philippines, Somalia, and Yemen (SITE Intelligence Group, 2020e). In April alone, ISIS launched over 100 attacks in Iraq, the highest number in 2020 so far (Clarke, 2020). In Afghanistan, the Islamic State Khorasan Province (ISKP) carried out, in mid-May 2020, a devastating attack in a funeral parlour in Nangahar (24 killed, 68 wounded); another horrific (thus far unclaimed) attack was launched in a maternity ward in Kabul that killed pregnant women and babies (16 dead) (Gannon & Akhgar, 2020). In June and July, ISIS resurged also in Syria, Kashmir, Pakistan and the Philippines (SITE Intelligence Group, 2020d). Outside of Kashmir, Pakistani institutions have also been targeted by the nationalist Balochistan Liberation Army, which is alleged by Pakistan to be supported by India (Haroon Janjua, 2020). The Al-Qaeda-affiliated Al Shabab organisation reported a significant uptick in its operations, claiming 37 attacks in Somalia and Kenya, with 52 dead and 35 wounded. In Mali, the al-Qaeda-linked Jama'at Nasr al-Islam kidnapped a high-profile opposition leader (Columbo & Harris, 2020). Pro-Al Qaeda groups claimed attacks also in Syria, Mali, and Yemen (Al-Qaeda in the Arabian Peninsula). The statistics for India, Pakistan, and the Malay Archipelago showed no decline in terrorist attacks in recent months (South Asian Terrorism Portal, 2020; SITE Intelligence Group, 2020b; Yaoren, 2020). The far-right extremists have not been 'sitting idle' during the pandemic either. In the U.S., there have been 50 vehicle ramming attacks targeting protesters since late May (Allam, 2020). The Boogaloo Boys, a far-right, pro-gun, anti-lockdown group, have intensified their attacks, feeding off both the anti-lockdown protests and the police brutality protests (Jones et al., 2020). Early in the pandemic, there was a spike in anti-Asian hate crimes (ADL, 2020). In 2020 thus far, the far right has been responsible for 90% of terrorist attacks in the U.S., compared to 66% in 2019 (Jones et al., 2020). There have also been right-wing attacks against anti-lockdown protests in Germany (Goßner, 2020). Finally, even though this has not been attributed to any known extremist groups, during the pandemic there has been a significant worldwide uptick in cyberattacks, mostly targeting hospitals (INTERPOL, 2020).
So, the awe of the pandemic notwithstanding, extremist groups have not ceased sowing their own brand of horror. Far from just keeping up their activity despite the pandemic, they are using the pandemic as an opportunity to grow stronger. As shown below, they are exploiting gaps in security and the general burdens that the pandemic imposes on societies, and are pushing forward their ideologies as a cure for fear, frustration, and panic (Bloom, 2020).
A boost to messaging
Specifically, the widespread upheaval, uncertainty and global anxiety occasioned by the COVID-19 pandemic has been seen by terror organisations as a golden opportunity to tie their messaging to information about the disease and intensify their propaganda for purposes of recruitment and incitement to violence. These objectives are being pursued through a diverse and often internally inconsistent blend of communications, including conspiracy theories, claims of God's vengeance against its enemies, exhortations to weaponise the virus, and calls to take advantage of society's weakness by launching widespread attacks wherever and whenever possible. Though bolstered by the pandemic, the extremist messaging activities are not new, nor, at least in the case of ISIS, is tying their messaging to popularly trending news and popular culture hashtags. Terrorist groups have flourished online, with ISIS becoming notorious for its high-quality propaganda videos and content that resulted in the recruitment and travel to Syria of over 40,000 foreign terrorist fighters from over 100 countries around the world. Likewise, far-right groups flourished online prior to the pandemic. Far-right extremists across the Atlantic share methods and ideology across the Internet and bolster each other's hatred online. Likewise, the 'incel' ideology, which overlaps a great deal with far-right violent extremism but also praises militant jihadism (Moonshot, 2020), was born and propagated almost entirely online, and its misogynistic brand of extremist violence was recently classified as terrorism by Canadian authorities (BBC, 2020). Yet the current situation is qualitatively different from pre-pandemic realities and entails new vulnerabilities. Now, during lockdowns, people feel alone and disempowered. They are increasingly unemployed, anxious about the future, eager for a sense of community and purpose, and looking for belonging and answers; this makes them perfect prey for terrorist recruiters, who already have the skills to develop close bonds over the Internet and to incite people to violence without ever meeting in person (Speckhard & Ellenberg, 2020). Moreover, a report from the United Nations Counter-Terrorism Committee Executive Directorate [UN CTED] noted that school closures and the move to distance learning have led to a dramatic increase in unsupervised Internet activity among young people, who could be exposed to terrorist messaging on social media, online chatrooms, or gaming communities (United Nations Counter-Terrorism Committee Executive Directorate, 2020). In what follows, we illustrate the new trends in extremists' online propaganda via specific messaging examples, the likes of which number in the many thousands. These continue to elevate the threat of terrorism, keeping up a dangerous level of incitement that societies would do well to prepare for, both while the pandemic lasts and in its aftermath.
The jihadist response
The extremist narrative: grievance, culprit, method
To appreciate the potential impact of the terrorists' propaganda stratagems, it is useful to consider them in the context of a paradigmatic extremist narrative that justifies violence against some target. Its three essential elements are those of Grievance, Culprit and Method (Kruglanski et al., 2019; Kruglanski & Fishman, 2006). The Grievance element refers to the harm or injustice that a given group of people suffered, or allegedly suffered. The Culprit is the social entity (nation, state, organisation, ethnic group or religion) deemed responsible for the harm or injustice. Crucially, the Method is the unleashing of violence, deemed both an effective and a morally justifiable means of punishing the Culprit and achieving glorious victory over the enemy. In jihadist circles, moral justification for unleashing violence is often, if not always, couched in defensive terms, with the Culprit blamed for the original grievance and for anti-Muslim violence, thereby morally justifying defensive and retributive action to prevent further acts of grievance, as in 'an eye for an eye'. Different elements of the terrorist propaganda have pertained to different elements of this narrative.
Conspiracy theories
In the context of the COVID-19 pandemic, a veritable cottage industry of conspiracy theories was put in motion, bearing on the Grievance and the Culprit elements of the terrorist narrative. In this regard, the jihadists and the far right have mirrored each other, often repeating the claims made by their seemingly polar opposites and fanning the flames of conspiracies about Western governments and their responsibility for the virus. In the case of jihadists, two types of grievance have been articulated: (1) the pandemic itself and (2) former crimes against Muslims by alleged enemies of their faith. In the first category have been claims that the Chinese or the Americans deliberately initiated the spread of the virus. Such conspiracies are widely shared in jihadist chat groups as well as by other elements of mainstream society, some of whom have emulated the Western far right's (and U.S. President Trump's own) labelling of COVID-19 as 'the Chinese Virus' (SITE Intelligence Group, 2020b). For instance, Abu Ali al-Askari, a security chief for the Hezbollah Brigades, an Iraqi Shia militia, attributed the pandemic, in a 26 February 2020 tweet, to 'The capitalist countries led by America', whose 'Biological weapons are among many tools they use to crush their opponents'. Blaming the U.S. as the originator of the virus, he therefore enjoined 'all on the sincere media, the selfless and those of sound opinion to reveal those killers and expose their violations in order to reduce the danger facing our human world' (SITE Intelligence Group, 2020c). Likewise, on jihadists' Instagram pages, references to the 'New World Order', an oft-used conspiracy theory advanced by the far right, abound. One meme featured Facebook CEO Mark Zuckerberg and Microsoft founder Bill Gates sitting on a couch conspiring to exploit their enemies and gain from the pandemic. A speech bubble above Zuckerberg reads, 'I will delete the post who are exposing us on every platform which belong to us . . .,' with Gates responding, 'Good! And I will delete the people's through vaccines & viruses. however we are on same mission [sic].'
The same poster referred to the pandemic as 'Operation COVID-19: A global PSYOP [psychological operation], false flag operation, mass casualty event and series of bioterrorist attacks carried out by NATO and British Crown.' The post's author went on to claim that the pandemic was 'paid for by international banking cartel to expedite the military deployment of 5G [. . .] establish a mandatory vaccination requiring immunity certificates and other draconian healthcare mandates.' In jihadist propaganda, this simply reinforces the existing claims that the Western powers are trying to dominate and even eliminate Muslims. Moreover, it feeds sinister fears about the true objective of vaccinations offered by Western governments, after a doctor purporting to be going house to house vaccinating Pakistani children was found to actually be a spy who helped determine the location of Osama bin Laden before he was subjected to a U.S. special forces capture-and-kill operation (Reardon, 2011). These conspiracy theories of social control are bolstered by government responses to the COVID-19 crisis, including mass surveillance and the use of the military in domestic policing (United Nations Counter-Terrorism Committee Executive Directorate, 2020).
Allah's soldier
In contrast to blaming Western or Chinese powers for creating and spreading the virus as a bioweapon, common among ISIS and Al-Qaeda propagandists is the claim that the coronavirus is a soldier of Allah, sent to avenge the Muslim people's suffering brought about by the US and its allies. In this narrative, the coronavirus is seen as a type of plague sent by God that will kill the enemies of Allah, sparing the believing Muslims. For instance, ISIS spokesman Abu Hamza al-Qurashi made a speech entitled 'And the Disbelievers Will Know who Gets the Good End', released 28 May 2020, in which he compared the pandemic to the biblical story of Moses cursing Pharaoh with the 10 plagues until he relented and let God's people go. Al-Qurashi claimed that the coronavirus of today is a modern-day plague sent by Allah to afflict the U.S. and its allies, to demonstrate the righteousness of Muslims and Allah's divine power, and to turn Western unbelievers from their disbelief and compel them to repentance. Likewise, he claimed that the pandemic was divine retribution for the deaths in ISIS-held territories in Syria and Iraq caused by the U.S.-led coalition, pointing out the similar number of deaths now caused by COVID-19 and the parallel situation of people having now to stay locked in their homes (SITE Intelligence Group, 2020j). On 20 March 2020, a user on a pro-ISIS platform made an almost identical claim: 'The daily number of deaths and new infections in Europe and North America is running almost equal to the number of daily civilian deaths and injured during coalition bombings in Mosul and Raqqah', to which a fellow user replied: 'they are getting payback for their crimes inshallah, now they experience some of the pain experienced by the Ummah' (SITE Intelligence Group, 2020g). Two days later, on March 22, the first user shared a chart reporting total and new cases of COVID-19 infections and deaths in America, France, Germany, Italy, Spain and Switzerland, gleefully exclaiming 'alhamdullah' (praised be Allah) and the acronym LOL (laughing out loud). Another user tweeted: '#coronavirus is doing the work of the mujihadeen, alhamdullah; Muslims should enjoy how's Allah punishing kuffar for their support against Muslims.'
(SITE Intelligence Group, 2020g), and on 20 March 2020 the Gazan Imam Jamil Al-Mutawa sermonised: '(Allah) has sent just one soldier. what would happen had he sent 50 like the corona virus? He has sent just one soldier and it has hit all 50 (American) states . . . They talk about 25 million infected people in just one of the 50 states (California). Allah be praised' (MEMRI, 2020). Other clerics pointed to the virus's initial epicentre in Wuhan, China, as evidence that the virus was sent by Allah to punish China for its severe persecution of Uighur Muslims (Hanna, 2020). It was not only ISIS and al-Qaeda referring to the coronavirus as the soldier of Allah. A New York-based Muslim Brotherhood activist called on Egyptians infected with the virus to forgo hospitalisation and instead visit as many secular government officials and headquarters as possible, acting as human vectors to spread the infection widely among perceived government oppressors. This fits with the jihadist idea of 'martyrdom' and sacrificing oneself by attacking and dying for the good of the Muslim community (Al Arabiya English, 2020). The idea that the coronavirus was sent as 'Allah's soldier' resonates deeply with people of faith who have already accepted claims about atrocities carried out by the global world powers against Muslims worldwide: Russia's carpet bombing of Chechnya, China's oppression of Uighurs, Myanmar's rape and genocidal killing of Rohingya, and the U.S. coalition bombardments and killings of Muslim civilian populations in Syria and Iraq. The 'Allah's soldier' theory, however, puts Muslims in danger, as it claims that true believers will be spared, as were the followers of Moses from the ancient Biblical plagues. In that vein, Iranian Shia pilgrims, perhaps believing they would be divinely protected, streamed into Iraq during the holy days, carrying the virus with them as they gathered in close quarters and kissed relics in succession, spreading the disease among themselves and into Iraq. Conspiracy theories' widespread appeal, beyond fringe extremists, is due to their provision of clarity in times of troubling uncertainty and of guidance for the action required under the circumstances (Graumann & Moscovici, 1987). As demonstrated by the use of the 'Allah's Soldier' narrative as well as the conspiracy theories propagated by terror groups, extremists are often able to frame situations in a way that promotes their ideology. Logically, however, a disease sent by Allah to punish infidels cannot simultaneously be created by Bill Gates and the New World Order; the fact that ISIS supporters have put forward both narratives might lend itself to exploitation in CVE messaging efforts.
A call to action
Consistent with the identification of the Grievance/harm (the pandemic, anti-Muslim violence) and the Culprit (China, the US and its allies) is the recommended Method of restoring dignity and significance to the injured (cf. Kruglanski et al., 2019), namely unmitigated violence aimed to (1) exploit the weakness and discombobulation of the enemy, allowing the mujahideen to hit them hard and with impunity, and (2) develop a bio-weapon of one's own, given the virus's spectacular lethality. Relevant to the first theme, the editorial in the ISIS magazine NABA 226, titled 'The Worst Nightmares of the Crusaders', stated: 'Their houses are shuttered, their markets and activities disrupted . . . Do not have mercy on the disbelievers and the apostates even when they are at the height of their affliction.
Exacerbate the stress on them, so that they become weaker . . . (in) their ability to fight the mujahideen' (The Stabilization Network, 2020). And on 18 March 2020, ISIS's Maldivian supporters posted on a Dhivehi- and English-language Telegram channel: 'Today we are witnessing the start and the spread of a new and dangerous disease which has shaken the world, and thrown all the governments into panic. Their attention is diverted . . . and even if they wanted to redirect their focus against us . . . the bitter truth is that they cannot afford to do that no matter how much they wanted . . . So, take advantage! And carry out Amaliyat (operations) . . . according to your capability. Do something good which will benefit you and others with you - for the sake of Allah' (SITE Intelligence Group, 2020h). Less than a month later, a major attack occurred in the Maldives, the first terror attack in the country to be claimed by ISIS (Zahir, 2020). Furthermore, a concern has been raised that the jihadists will learn from the horrific worldwide impact of COVID-19 and intensify their efforts to switch from the use of complicated devices, bombs, and suicide attacks to biological warfare and bioterrorism. The interest of terror groups in the use of biological weapons is longstanding, and it has waxed and waned over the years (Guarrieri & Meisel, 2019). Yet recently (in 2018), the British MI5 received information that British jihadists returning from Syria and Iraq had been trained in developing basic bio-weapons like ricin and anthrax. Likewise, researchers at the International Center for the Study of Violent Extremism (ICSVE) recently interviewed doctoral-level scientists who had been recruited by ISIS to study scientific journals around the world, in multiple languages, about biological and chemical advancements and from these to compile for ISIS instructions on what to buy and how to create weapons of mass destruction in a lab they were running in Erbil, Iraq (Speckhard & Shajkovci, 2019). Al-Qaeda also experimented with anthrax and ricin, even exposing dogs to bioweaponry in attempts to test lethality. These trends raise the possibility that the devastating health impact of the coronavirus, and its potential for societal and economic disruption, would tempt terror organisations and revive their interest in pursuing the bioweapons option (Jayaratne, 2020). While unconventional weapons would take serious time to develop and would likely require state sponsorship to come to full fruition, terrorists are already taking advantage of the security gaps produced by the pandemic. At the simplest level, security forces all over the world will be depleted when servicemembers get sick, and they are also being redeployed as a result of a shift in priorities and in the interest of safety. In Iraq, American forces shifted their focus in early 2020 to Iran and Iranian-funded militias, and aircraft carriers have seen serious spread of infection among their crews while on worldwide patrol. Concerns over the spread of COVID-19 in Iraq, as well as attacks by Iranian militias, caused relocations, consolidations and depletions of the American presence as troops were cautiously withdrawn. Coalition allies also removed many of their troops from Iraq in the face of the pandemic. Western countries such as the United Kingdom have delayed deployments to Africa in order to focus on fighting the virus, leaving local security forces more vulnerable to jihadist attacks (Campbell, 2020).
ISIS urged its followers to take advantage of the power vacuum, directly referencing the pandemic by pronouncing, 'Fear of this contagion has affected them more than the contagion itself' (Magid, 2020).
Indirect initiatives
(1) Proselytisation for Islam
Beyond the glee and celebration of the coronavirus as an ally in the jihadist fight, a 'Soldier of Allah', extremist organisations recognised the danger that COVID-19 poses to believers and urged Muslims to repent and embrace their religion. On 22 March 2020, the Afghan Taliban urged Muslims and others to realise that humans are weak and should commit, therefore, to the service of Allah. Specifically, the message stated: 'Humans distinct from Islam must consider this tribulation as time for reflection and change while the Muslims in general must also return back to Allah by seeking forgiveness for their sins, and renewing their commitment to religious principles. As much as coronavirus is a calamity and plague it is also an exemplary lesson and an admonitory tribulation' (SITE Intelligence Group, 2020f). Similarly, Al-Qaeda Central, in a statement published on 31 March 2020, exhorted: 'In this crisis, we would like to remind people of knowledge and callers to Allah to intensify their efforts to call people to Allah and invite them to repent sincerely. Now is the time to spread the correct Aqeedah, call people to Jihad in the Way of Allah and revolt against oppression and oppressors. We also call upon rich Muslims to step forward and show mercy towards the poor and deprived segments of society so that they may find some solace in these distressing times. There is a dire need today to take care of the orphans, widows, families of the prisoners and to support the sincere Mujahideen' (SITE Intelligence Group, 2020i). Consistent with the notion that the pandemic represents Allah's vengeance against corrupt infidels and apostates, the embrace of true Islam was thought to offer protection from the plague and, in the worst case, to secure martyrdom. On 9 March 2020, a user of a pro-ISIS platform quipped: 'Notice, how coronavirus only affects the kuffar, rafidah and murtadeen (apostates)' (SITE Intelligence Group, 2020e), and on March 20, the Gazan Imam Jamil al-Mutawa exclaimed: 'Look how empty their streets are, and how crowded this mosque is. Who is it that has given us security and terrified them? Who is it that protected us and harmed them? Allah' (MEMRI, 2020).
(2) Humanitarian assistance to affected Muslims
Some postings acknowledge the danger that COVID-19 poses to Muslims, and in particular to jihadists in prisons and refugee camps (in Syria). A poster in an Indonesian pro-ISIS chat group wrote specifically: 'there are reports that (the virus) has spread to the Al-Hol (that) contains thousands of Muslim families who had merely resided in ISIS areas or are suspected to be ISIS family members. Oh brothers don't forget to pray for your brothers and sisters' (The Stabilization Network, 2020). And a March 19 editorial published in al Naba called for action to 'free the Muslim prisoners in the prisons of the idolaters and the camps of humiliation in which they are threatened by disease' (ibid). Humanitarian and preventative actions are now being carried out by extremist groups on behalf of those suffering from the pandemic, stepping into areas that governments have neglected. These actions serve a practical purpose in the battle for hearts and minds.
For groups that aim to establish shariah governments, the virus has provided an opportunity to prove their ability to respond effectively to a wide-scale crisis, in stark contrast to governments that defaulted on their duty in this regard. This contrast also serves to foment further distrust in the official governments, leading to anti-State violence. Opportunities for such violence are also strengthened by the strain on resources, which is causing United Nations member states to withdraw troops previously supporting those official governments (United Nations Counter-Terrorism Committee Executive Directorate, 2020). In Idlib, HTS's civilian front, known as the Salvation Government, took a number of steps to prevent and contain the spread among Muslims, despite HTS's contradictory claims that the virus was punishment for unbelievers. The steps, presumably taken with HTS leadership's approval, included releasing informational videos, conducting body-temperature checks at border crossings, educating clerics, setting up isolation centres, closing markets, and instituting remote schooling. Their actions, along with those of the Kurdish-led Autonomous Administration in Northeast Syria, stand in stark contrast to the Syrian government's repeated denial that the virus has taken a major toll on the population (Zelin & Alrifai, 2020). In Gaza, Hamas banned public gatherings and directed its fighters to focus on sanitisation of crowded areas, and in Egypt, the Muslim Brotherhood launched its 'One People' campaign to help the population deal with the economic repercussions of the pandemic. Perhaps one of the most notable responses came from Hezbollah in Lebanon, a group which has long been known for competing successfully with the government in providing services to the people. Hezbollah sent 25,000 healthcare professionals and 100 emergency vehicles to assist patients and transformed an area of a hospital previously used to treat its fighters to accommodate COVID patients. Hezbollah fighters also offered humanitarian services to Iran and rented hotels to be used for quarantine (Perry & Bassam, 2020). All these actions serve to advance jihadist groups' claims that their strict Islamist system of government is superior to the inept secular or moderate Islamic governments, and better equipped to deal with emergencies. In many parts of Africa, governments already struggle to gain the trust of the citizenry; their failure to deal effectively with the pandemic provides opportunities for groups like al Shabaab in Somalia to enter the services vacuum, emphasising to the Somali people the superiority of shariah governance (Campbell, 2020).
The far-right domain
Violent extremism on the far right has been on the rise in Western societies. The data speak for themselves: far-right terrorist attacks increased by 320% between 2014 and 2019, according to the 2019 Global Terrorism Index (Weimann & Masri, 2020). In 2018 alone, far-right terrorist attacks made up 17.2% of all terrorist incidents in the West, compared to Islamic groups, which made up 6.28% of all attacks. How did the far-right domain respond to the COVID-19 pandemic? Its reaction has been intense and widespread. Typically, its supporters seized on the opportunity that this unprecedented situation afforded for spreading their narrative and mobilising followers for a new type of violence against their perceived nemeses.
These comprised the traditional targets of far-right hate: Jews, minorities, foreigners, the government and, more generally, anyone outside the White supremacist milieu. Far-right rhetoric has been rampant on the various social media; it has comprised (1) a plethora of conspiracy theories, (2) concrete calls for violent action, including both conventional attacks and deliberate spreading of the virus, and (3) disinformation initiatives designed to promulgate chaos and panic.
Conspiracy theories
Far-right conspiracy theories that dominated the online chatter alleged that (1) COVID-19 was weaponised by the Jews or by the Chinese, (2) it is an Asian disease, a 'Chinese virus', caused by poor Chinese hygiene, or (3) indeed by 'filthy Jews'. The first type of 'theory' is exemplified by a tweet from @CEOErickHayden claiming: 'A Jewish scientist at Harvard was caught working with Chinese nationals who were smuggling biological materials to China. Now Israel has the vaccine before everyone' and, facetiously: 'How many are ill from coronavirus in Israel?' (Katz, 2020). In this vein, far-right extremists have propagated the theory that the virus is a tool of the New World Order, led by the 'usual suspects', the far-right scapegoats George Soros and Jacob Rothschild. A photo posted on Instagram featured Rothschild with the words, 'First we spread the disease, put the planet on lockdown, bankrupt the planet, invoke martial law, then BOOM, the third temple emerges.' Numerous other posts suggested that Soros was funding the people protesting police brutality in Minneapolis, Minnesota in order to start a race war when the pandemic failed to 'work'. These posts often referenced a viral video entitled 'Plandemic', which featured a discredited scientist claiming that global elites, a nonexistent cabal of which Soros is the face, were using the virus and a future vaccine as a means of social control. The video was viewed millions of times after being shared first by far-right conspiracy theorists on QAnon and then by a well-known anti-vaccination physician, a professional mixed martial arts fighter, and a Republican politician from Ohio. The video was eventually removed from YouTube and Facebook, and many posts on Instagram with the hashtag 'Plandemic' were deleted for spreading misinformation, but extremists had already been validated by their mainstream popularity. By then, they were able to frame any discreditation of the video as censorship and as an effort by the elites to silence and control those who dared to tell the truth (Frenkel et al., 2020). Timothy Wilson, a neo-Nazi who was planning to bomb a hospital in Missouri, stated: 'the Zionist operated government is using it (COVID 19) as an excuse to destroy our people. They scar people and have society break down. Mark my words, it is coming. I hope people are ready' (Martin, 2020). And the 'Nordic Resistance Movement', a Swedish neo-Nazi group, tied together the Jews, the Chinese and African Americans in the following conspiracy-theoretic message: 'This Jewish made coronavirus is affecting the international stock market because our manufacturing is our sourced, this is all relied upon by China so we are in this position because of globalisation because of the Jews. You watch they will now use our collapsing economies by importing African infinity niggers' (Katz, 2020). The second type of theory is illustrated by a 13 March 2020 blog that stated, 'the world has finally realized that Asians are a plague . . . and spread the virus to the whites.'
Similarly, the French neo-Nazi blog Blanche Europe opined: 'the Chinese unlike the Japanese have no concern for hygiene.' The Misanthropic Division, a section of the far-right organisation Azov Battalion, posted footage allegedly showing Jewish shamans summoning the coronavirus to attack non-Jews. And a 'diejewdie' user of the Gab platform tweeted 'Unless we deport these filthy Jews, this pandemic is never gonna stop' (Katz, 2020). Identification of the Jews as culprits for the pandemic is a sad reminder of the massive persecutions and massacres of Jews in the Middle Ages based on the conspiracy theory that it was Jews who propagated the Black Death plague in Europe between 1348 and 1351 (Cohn, 2007). Calls for violent action The far-right online chatter has hardly stopped at the propagation of conspiracy theories that attribute the pandemic to minorities and foreigners. In a sinister exploitation of the possibility of contagion, calls on the far right have repeatedly urged supporters to deliberately spread the coronavirus to alleged enemies of America, the Jews in particular. In the past, far-right extremists did not shrink from the desire to use CBRN weapons in order to advance their 'revolution.' This led to several CBRN plots by the far right in Western countries, the U.S. in particular, that fortunately failed (Koehler & Popella, 2018). The current pandemic offered, therefore, a 'low hanging fruit' kind of opportunity for biological violence against hated groups. A post on the 8chan board 1 stated 'if you get infected with the corona virus, go visit your local synagogue and hug as many Jews as possible, cough on all the door knobs, rails, pens, etc.' A Telegram channel 'Coronawaffen' posted a poll asking respondents 'If I get sick where should I go'. The most popular answer, given by 76% of the participants, was 'synagogue', with 'Parliament Hill' a remote second at 10%. A section of the Telegram channel called 'Only White People Go to Heaven' recommended that anyone infected with the virus 'travel to more ethnic parts of town, including mosques and synagogues, etc.' (Katz, 2020). Calls have also proliferated on far-right chat rooms to exploit the public's preoccupation with the pandemic in order to carry out devastating conventional attacks with impunity. A call inciting sympathisers to rob a non-profit organisation gleefully asserted: 'the best part is, everyone is already wearing masks! Even if they did report you it is not like the cops can spare resources in the midst of an epidemic break.' A similar message on the Telegram platform counselled would-be far-right attackers to 'wear a breathing mask, they won't question it', all that in order to 'cut the powerlines, climb on grocery stores, and cut the cooling systems hit rocks into rich neighbourhoods with tennis rackets, tip watertowers, blow up bridges, railroads and sewage treatment plants most power stations are completely unprotected, it is legal to open carry an RPG just cover your face and you won't get caught' (Katz, 2020). Some of the calls have come to fruition. In the United States, reports of verbal harassment of Asian-Americans and of anti-Semitic and xenophobic vandalism abound. In New Jersey, a girl was charged with bias-related crimes for yelling racial slurs at an Asian woman and punching her in the back of the head. 
In Manhattan, a woman was charged with hate crime assault after she spat in the face of an Asian woman and pulled some of her hair out, and another woman verbally harassed and punched a Korean woman before fleeing the scene. In Texas, a 19-year-old man attempted to murder an Asian-American family, including a 6-year-old and 2-year-old, by stabbing them in a parking lot, and in Los Angeles, a teenager was brutally assaulted (ADL, 2020). Promoting chaos Terrorists (the likes of the German Red Army Faction leader Andreas Baader or the Brazilian Communist Carlos Marighella) have long believed that their function is to create a disintegration of the civic order by provoking the authorities to an excessive response to attacks, which would undermine public trust in government and prepare society for a revolution. In like fashion, the accelerationist theory promoted by far-right movements aims at creating havoc and confusion which would lead to a collapse of the state, and pave the road to desired change. In this vein, the media group 'Terrorwave Refined', affiliated with the far-right group AWD (Atomwaffen Division), urged subscribers to augment the panic by spreading disquieting, deliberately forged lies, specifically directing them to "Make social media posts about some Chinese guy who was in the grocery store coughing in the fruit, and use a burner phone to cold call police and journalists to tell them you're about to enter significantly crowded areas while infected with the coronavirus." A 20 March 2020 post from the blog of 'Slovak's Siege Shock' counselled readers to 'spread rumors about troop deployment in urban areas' while feigning horror about the infringement on freedom these entail. Using CDC and World Health Organisation logos, far-right propagandists cynically encouraged people to (1) frequent mosques and synagogues allegedly to benefit from these venues' high hygienic standards, (2) spend time in ethnic neighbourhoods allegedly to augment one's immune system and (3) utilise the public transport system as it is 'made with anti-bacterial material' (Katz, 2020). Far-right extremists have also been able to sow chaos through their actions in virtual and physical gatherings. During the first weeks of shelter-in-place in American cities, the video chatting platform, Zoom, gained immense popularity, allowing for meetings, classes, and family events to occur while socially distancing. Not yet protesting the lockdown orders in front of statehouses, far-right extremists infiltrated Zoom calls and shared their screens, projecting violent and graphic imagery such as swastikas and pornography into the homes of unsuspecting attendees and making it impossible for schools to rely on Zoom for home-based lessons. Such actions, known as 'Zoombombing', were eventually curtailed by Zoom features requiring hosts to admit people into Zoom meetings as a default setting with an option to opt out (Lorenz, 2020). Later, as ordinary citizens grew weary of staying home, extremist views of the lockdowns as acts of tyranny and fascism spread, leading to armed protests in Michigan, North Carolina, Colorado, and elsewhere. There is evidence that the virus was spread among protesters, but public support from right-wing politicians, including President Trump, was more meaningful for the extremists yearning for significance and relevance (Wilson, 2020). 
Frequently seen at these lockdown protests, wearing Hawaiian shirts and holding semi-automatic weapons, are 'Boogaloos', who for a long time flew under the radar in the far-right world. The Boogaloo movement started online and promotes civil war against law enforcement. In contrast to other far-right movements, the civil war anticipated by Boogaloos is not divided by race, making their brand of violent extremism more palatable to those protesting the lockdowns who do not want to be associated with neo-Nazis (Evans & Wilson, 2020). In the wake of COVID-19, it appears that far-right extremists have discovered the extent of people's fear of social control and loss of liberty, and have realised how easily they can manipulate citizens, who may not normally subscribe to the extreme ideology, into joining their cause. Conclusions The uncertainty and confusion caused by the COVID-19 pandemic are being widely exploited by international and domestic terror groups for spinning a plethora of sinister schemes portending a potential new tide of violence against people and governments. The terrorists' propaganda invariably included the grievance-culprit-method elements characteristic of violence-justifying narratives. The specific contents of these elements differed across the different groups. The jihadis have contended that the West has been on a campaign to subdue Islam, and the far right has claimed that Jews, Chinese, people of colour and the governments who support them have been trying to deprive white Aryans of their freedoms and exploit them. For both the jihadists and the far-righters, the pandemic has offered new opportunities and methods for unleashing violence against the objects of their hate. In the realm of messaging, portraying the pandemic as God's punishment against evil actors, and/or identifying specific ethnic, religious or national groups as deliberate creators of the plague, may boost recruitment to extremist organisations whose simplistic 'black and white' narratives offer certainty and guidance for millions of anxious people in dire need of clarity. Likewise, with the intimacy of Internet connections and the widespread lockdowns, online recruitment of vulnerable people, youth in particular, isolated during lockdown has become much easier. The pandemic has created an opportunity for extremist groups to pin the responsibility for the virulent disease on their hated nemeses. Mixed messages regarding the origin of the virus aside, militant jihadist groups in Africa and the Middle East have taken the opportunity to provide services to civilians, highlighting the ineptitude of secular or moderate governments and the benefits of the 'true' Islam, untainted by democracy. Lockdowns, contact tracing, and vaccinations have allowed the far right to emerge from the depths of the Internet to congratulate themselves for having known the truth, that an evil cabal of Zionist elites have been planning to surveil and ultimately destroy ordinary citizens in pursuit of a New World Order. Militant jihadists and far-right extremists have both used such 'proof' to reinvigorate their loyal followers as well as attract new ones. There is no question that violent extremist groups' explanations of and reactions to COVID-19 are dangerous in many ways, but even non-violent fundamentalists can offer false security in a time of panic, posing a major threat to public health. 
An element that the jihadi narrative shares with other religious rhetoric across the board is an attitude of fatalism and surrender, the faith that religious piety guarantees God's protection and deliverance from this modern-day plague. In India, the Tablighi Jammat, a non-political Sunni evangelical movement is now known as the largest viral vector of COVID-19 in South Asia after its preacher ignored India's quarantine laws (ThePrint Team, 2020). Several evangelical clergy members in the U.S. similarly defied the authorities' exhortations to practice social distancing. Bishop Gerald Glenn of the New Deliverance Church in Virginia exclaimed 'I firmly believe that God is larger than this dreaded virus', and that he was going to keep on preaching 'unless I'm in jail or in the hospital'. Soon thereafter he died of the virus (King, 2020). In Israel, prominent ultra-orthodox rabbis refused to comply with government directives regarding the closing of schools and yeshivas. Rabbi Kaniewski of the Ponovitz yeshiva in Bnei Brak even opined that 'canceling Torah study is more dangerous than corona' (Sokol, 2020). What can be done when religious fundamentalism and political extremism provide so much clarity in times of high uncertainty, anger and anxiety? Beside allotting the necessary resources for increased vigilance and effective thwarting of plotted assaults, world leaders must offer an alternative coherence, one based on science and rationality and must disavow their own supporters who promote bigoted conspiracy theories under the guise of liberty. The peaceful Muslim community must speak in a clear voice and condemn as blasphemous the 'Soldier of Allah' portrayal of COVID-19 and remind their followers that disease can be spread to one and all alike unless protective measures are taken. Ulama who represent the voice of reason must be heard. As the common saying goes, 'God helps those who help themselves.' After all, it was the Prophet (PBUH), according to the Hadith, who recommended to fear God yet at the same time be cautious about Earthly dangers. 'Pray to God but tie the camel tight,' counselled the Prophet to a Bedouin who entered the mosque without securing his animal. Furthermore, it is told of the second Caliph, Omar, who on the way from Medina to Syria met fellow travellers who alerted him to a pandemic in which thousands have died. Omar immediately discontinued the journey and when later challenged to explain why he was running from God's decree, he answered, 'I am going from one of God's decrees to another of God's decree. ' Raising the voices of ex-far-right extremists is also useful to discredit their messages of hate. Likewise, online platforms need to work effectively as they did in identifying terrorists' content and implementing take down policies. A recent controversy with Facebook and Twitter concerning the President's own posts seeming to incite violence in the face of protests against racism make clear that when it comes to the far-right and politicians who benefit from their support this won't be an easy or noncontentious task. Far from uniting humanity against a common threat, the global uncertainty and vulnerability caused by the COVID-19 pandemic is being widely exploited by international and domestic terror groups and violent extremists. 
Presenting the pandemic as God's punishment against evil actors, and/or portraying given ethnic, religious or national groups as perpetrators of the plague may boost the recruitment to extremist organisations whose simplistic narratives offer certainty and guidance for millions of anxious people. Though everyone's attention is naturally drawn to the immense health and economic challenges that the pandemic poses, we cannot ignore the potential storm of intensified world terrorism that seems to be gathering in its shadows. Note 1. an imageboard website composed of user-created message boards, with minimal interference from the administration. Notes on contributors Arie W. Kruglanski Disclosure statement No potential conflict of interest was reported by the authors.
2020-11-05T09:08:19.649Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "2eb5f1a6066856814ff5813e9cd971cac3e7f813", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23779497.2020.1832903?needAccess=true", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d6a91e66d0a530801733fe83647a5bd2ebbd487f", "s2fieldsofstudy": [ "Political Science", "History" ], "extfieldsofstudy": [ "Political Science" ] }
46668740
pes2o/s2orc
v3-fos-license
Lossless Brownian information engine We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines. Understanding the interplay between information and thermodynamics is a fundamental challenge of nonequilibrium physics, in particular in systems of active and living matter that self-organize into information-rich homeostatic ensembles. The question emerged with Maxwell's demon who, by measuring the velocity of gas molecules, was able to sort them into fast and slow ones, thus decreasing the entropy and apparently violating the second law of thermodynamics [1]. A series of works, starting from Szilard's engine [2] through Landauer [3], Bennett [4] and others, elucidated the link between information gathered by the demon and thermodynamic entropy, thereby resolving the apparent paradox. That the demon can extract work from information has been known since these seminal papers, but recent breakthroughs in nonequilibrium thermodynamics of classical [5][6][7][8][9][10][11][12][13][14][15][16][17] and quantum systems [18][19][20][21], and experimentally realized Brownian and electronic systems [22][23][24][25][26][27], set new bounds on the demon's efficiency. And the question as to whether these bounds are sharp and how they can be realized in experiment is still open. Here, we examine a bound on demons, i.e. information engines, that follows from a generalization of the Jarzynski equality [28] to feedback-controlled systems [8,9,15,17,29], ⟨exp[−(W − ΔF) − (I − I_u)]⟩ = 1 (Eq. (1)). The exponent averaged in Eq. (1) augments the terms from the standard Jarzynski equality, the work performed on the system W and the free energy change ΔF (in units of k_BT ≡ 1), with a contribution from the information circuitry: I is the information gathered by measurements, out of which a part I_u becomes unavailable due to the irreversibility of the feedback process [17]. Applying Jensen's inequality to Eq. (1) yields a generalized second law [17], −⟨W⟩ ≤ −ΔF + ⟨I⟩ − ⟨I_u⟩ (Eq. (2)). Namely, the work extracted from the information engine, −⟨W⟩, is bounded by the free energy decrease plus the extra work that can be gained from information on the system. We call an information engine "lossless" if it achieves the tight bound of Eq. (2). This indicates that almost none of the available information from the feedback protocol is lost, while it does not exclude other, more energetically efficient protocols. We also note that the derivation of Eqs. 
(1) and (2) does not account for the external energetic cost of detecting the particle and moving the trap accordingly [17]. In this paper, we use an information engine made of a colloidal particle trapped in a harmonic potential to demonstrate the sharpness of the bound set by the generalized second law in Eq. (2). During each cycle of the engine, a high-precision measurement of the particle position is followed by a swift shift of the trap according to the measurement and thermal relaxation of the particle before the next cycle begins. Iterating the measurement-feedback-relaxation cycle, the engine can transport the particle unidirectionally, thereby extracting work from the random thermal fluctuations of the surrounding heat bath. We derive the optimal operating point of the engine, where the work extracted per cycle peaks, and show that this peak reaches the bound in Eq. (2). We also show that the engine satisfies the generalized Jarzynski equality in Eq. (1). Thus, we validate these basic nonequilibrium bounds in a nearly error-free feedback control system. The engine operates on x, the position of a Brownian particle immersed in a heat bath of temperature T (Fig. 1). The experimental setup is detailed in the following, but first we discuss the basic physical features of this information engine in terms of a simplified model. We consider a particle trapped in the harmonic potential generated by optical tweezers, V(x; x_0) = (k/2)(x − x_0)², where x_0 is the center of the trap and k its stiffness. In the low-Reynolds regime, the dynamics of the particle is overdamped [30,31], with a relaxation time τ = γ/k, where γ is the Stokes friction coefficient. Each cycle consists of measurement, feedback control and relaxation (Fig. 1). The cycle begins when the particle is at thermal equilibrium with a Boltzmann distribution, p_eq(x) ∝ exp[−V(x; x_0)/k_BT], a Gaussian of standard deviation σ = (k_BT/k)^(1/2). A nearly error-free measurement of x is then taken and serves as an input for the following feedback response: we define a region R from x_0 + L to infinity (L > 0). Whenever the particle is found in R, we instantaneously (i.e. much faster than τ ≈ 3 ms) shift the potential center 2L to the right; otherwise, if the particle is outside R, the potential remains centered at x_0. After the feedback step, we let the particle fully relax before we reiterate the cycle. Next, we consider the energy balance during a shift cycle. After the shift, the particle always returns to the same equilibrium macrostate and the free energy remains unchanged, ΔF = 0. FIG. 1. The measurement-feedback-relaxation cycle of the Brownian information engine. A particle is initially in thermal equilibrium in a harmonic potential generated by an optical trap. The feedback is determined as follows: We set a region R from L to infinity (shaded). (a) If the particle is outside R, nothing changes. (b) If the particle is inside R, we instantaneously shift the potential center to 2L. By shifting the potential center, the system extracts work equal to the change in the potential energy ΔV. After the feedback step, the system relaxes back to thermal equilibrium and the cycle repeats. Moreover, in the overdamped regime we can disregard the kinetic energy of the particle, so the change in the potential energy when the trap shifts, ΔV = V(x; x_0 + 2L) − V(x; x_0), is fully converted into heat and work [32]. However, the potential is shifted much faster than the typical relaxation time, such that the particle has no time to move and dissipate energy [33]. Therefore, all the potential energy gained by the shift is converted into work. 
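To make the energy bookkeeping of a shift cycle concrete, the following is a minimal Monte Carlo sketch of the measurement-feedback rule described above. It assumes only the Gaussian equilibrium distribution and the shift-by-2L protocol; the parameter values (k_BT = 1, σ = 1) are illustrative rather than the experimental ones, and the script is a sanity check, not the authors' analysis code.

```python
import numpy as np

# Sketch of the measurement-feedback cycle: sample the particle position from
# thermal equilibrium, apply the shift-by-2L rule whenever x falls in R, and
# record the potential-energy drop, which the text argues is extracted as work.
rng = np.random.default_rng(0)

kBT = 1.0                 # thermal energy (illustrative units)
sigma = 1.0               # equilibrium standard deviation, sigma^2 = kBT / k
k = kBT / sigma**2        # trap stiffness
L = 0.612 * sigma         # feedback threshold (value quoted later in the text)
x0 = 0.0                  # trap centre at the start of each cycle
n_cycles = 1_000_000

def V(x, centre):
    """Harmonic trap potential V(x; centre) = (k/2)(x - centre)^2."""
    return 0.5 * k * (x - centre) ** 2

x = rng.normal(loc=x0, scale=sigma, size=n_cycles)   # equilibrium positions
in_R = x > x0 + L                                     # error-free measurement

# Extracted work per cycle: -(V_after - V_before) if the trap is shifted, else 0.
W_ext = np.where(in_R, V(x, x0) - V(x, x0 + 2 * L), 0.0)

print(f"average extracted work   = {W_ext.mean():.3f} kBT per cycle")
print(f"fraction of shift cycles = {in_R.mean():.3f}")
```

With these illustrative parameters the sketch returns roughly 0.2 kBT of work per cycle; the actual experimental numbers are of course set by the calibrated k and σ.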
During the relaxation step, no work is done and only heat is produced by thermal dissipation. We conclude that the extractable work in a shift cycle is −W = −ΔV = 2kL(x − x_0 − L) for a particle measured at position x inside R, and zero otherwise. The average work extracted is therefore ⟨−W⟩ = ∫_{x_0+L}^{∞} 2kL(x − x_0 − L) p_eq(x) dx. To examine whether our feedback protocol can achieve the upper bound on the extractable work, we evaluate the terms in Eq. (2). Since the measurement is practically error-free, the net information is simply Shannon's entropy of a Gaussian variable [17,34] (Eq. (4)), where the limit of vanishing measurement error ε → 0 ensures the positive-definiteness of the entropy and the correspondence between discrete and differential entropies ([35] Ch. 8). During the relaxation phase of the feedback process, part of the information in Eq. (4) becomes unavailable [17]. To calculate the unavailable information I_u, we consider the inverse process: the particle is initially in equilibrium with the center of the trap at the shifted position x_0 + 2L (Eq. (5)). The upper bound of extractable work is then found from Eqs. (2), (4) and (5) (Eq. (6)). Finally, we verify that the feedback protocol satisfies the generalized Jarzynski equality in Eq. (1) by substituting the work and the information terms. Experimental setup. -The schematic of our home-built optical tweezers setup is shown in supplementary information as Fig. S1. A laser with 1064 nm wavelength is used for trapping a colloidal particle. The laser is fed to the Acousto-Optic Deflector (AOD) (Isomet, LS110A-XY). The AOD is controlled via an analog voltage controlled Radio-Frequency (RF) synthesizer driver (Isomet, D331-BS) and is capable of diffracting the laser light. The first order diffracted beam is focused at the sample plane of an optical microscope (Olympus IX73) using a 100x oil immersion objective lens of 1.30 numerical aperture. A second laser with 980 nm wavelength is used for tracking the particle position. A Quadrant Photo Diode (QPD; S5980, Hamamatsu) is used to detect the particle position. The electrical signal from the QPD is preamplified by a signal amplifier (OT-301, On-Trak Photonics, Inc.) and sampled periodically with a Field-Programmable Gate Array (FPGA) data acquisition card (National Instruments, PCI-7830R). The QPD is capable of tracking the particle position with a high spatial accuracy of 1 nm [36]. This is sufficient to assume that our system is capable of performing nearly error-free measurements. We have designed a real-time feedback control system using LabVIEW programmed on the FPGA target. The feedback control measurement system is capable of position detection, potential modulation, and data storage. The sample cell consists of a highly dilute solution of 1.99 μm diameter polystyrene particles suspended in deionized water. The trapping laser power at the sample stage is maintained at ~3 mW, whereas the laser power of the tracking laser is fixed at ~5% of the trapping laser power. All experiments were carried out at a constant temperature of 300 ± 0.1 K. Experimental testing of the information engine bounds. -We first calibrate the parameters of the trap (Fig. 2) by fitting the probability distribution of the particle position in thermal equilibrium, without the feedback process, to the Boltzmann distribution. The QPD measures the particle position periodically at intervals of 25 ms. The FPGA board generates a bias voltage that corresponds to the initial position of the potential center. This bias voltage is applied to the AOD via the RF synthesizer driver. 
If the particle is found in R, the FPGA board generates an updated bias voltage that corresponds to the shift of the potential center to 2L. The decision whether to update the bias voltage, and thereby shift the potential center, is taken within ~20 μs. After shifting the potential center, we wait for 25 ms, about eight times the relaxation time τ ≈ 3 ms. Finally, the potential center is instantaneously shifted back (within ~20 μs) to the initial position, we wait 25 ms for full relaxation of the particle, and the cycle is repeated. We next focus on the energetics of the information engine. We set the region R from L = 0.5σ ≈ 14 nm to infinity and perform the measurement and feedback control described above. The distribution (blue squares in Fig. 2(a)), which is obtained from 100,000 feedback cycles, is indistinguishable from the equilibrium distribution (red) with the same σ = 28 nm. Fig. 2(b) shows the distribution of the measured extracted work. In conclusion, we examined a simple information engine consisting of a colloidal particle trapped by optical tweezers. By precisely measuring the particle position and shifting the potential center practically instantaneously, we can extract positive work from a system in a single heat bath at a constant temperature, thus exceeding the conventional bound of the second law of thermodynamics. The extra work originates from information on the system, which allows the feedback protocol to generate unidirectional motion. The measured work agrees well with the theoretical prediction, and we found that maximum work can be extracted from the engine when L ≈ 0.612σ. Finally, we demonstrated that the feedback protocol satisfies the generalized Jarzynski equality and is able to achieve the equality in the generalized second law under error-free measurements. Hence, the bound on information engines (demons) from Eq. (2) is sharp.
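As a cross-check of the quoted optimum, the snippet below evaluates, under the same Gaussian-equilibrium assumption, a closed-form expression for the average extracted work per cycle as a function of l = L/σ and locates its maximum. The closed form w(l) = 2l[φ(l) − l·Q(l)] (in units of k_BT, with φ the standard normal density and Q its tail probability) is our own evaluation of the integral written above, not an equation taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def avg_extracted_work(l):
    """Average extracted work per cycle, in units of k_B*T, for threshold l = L/sigma.

    Obtained by integrating 2*k*L*(x - x0 - L) over the Gaussian equilibrium
    distribution restricted to the region R = [x0 + L, infinity).
    """
    return 2.0 * l * (norm.pdf(l) - l * norm.sf(l))

l_grid = np.linspace(0.01, 2.0, 4000)
w = avg_extracted_work(l_grid)
l_opt = l_grid[np.argmax(w)]

print(f"optimal L/sigma      = {l_opt:.3f}")   # close to the 0.612 quoted above
print(f"maximum average work = {w.max():.3f} k_BT")
```

Setting the derivative of w(l) to zero gives the condition φ(l) = 2lQ(l), whose root under this assumption is l ≈ 0.612, in line with the optimum reported in the text.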
2018-04-03T02:32:50.237Z
2018-01-12T00:00:00.000
{ "year": 2018, "sha1": "cb825025a4ed8a1ca1ce023b5e5164e98b1d7dd6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1802.01868", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "51491cee948093e87d7e7f33aa7412c2bdc28133", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Computer Science" ] }
234829465
pes2o/s2orc
v3-fos-license
Quality analysis of briquette made of faecal sludge The purpose of this study is to convert sludge waste into charcoal briquettes. The process for making briquettes was carried out by adding polyacrylamide as an adhesive. The results showed that the best briquette was obtained from sample 16 (sixteen): its heating value, moisture content, density, and length of combustion were 7436.55 Cal/g, 7.277%, 1.06 g/cm3, and 613 seconds, respectively, with a raw material to adhesive ratio, pressing pressure, and drying time of 5:2, 105 kg/cm2, and 120 minutes, respectively. The calorific value and water content are in accordance with the SNI 01-6235-2000 standard, which specifies ≥ 5000 Cal/g and a maximum of 7.75% moisture, and the density meets the American standard value of 1.0 g/cm3. Introduction As biological creatures, humans are, in living their lives, also destroyers of the environment and producers of waste. Humans are required to manage (repair what is damaged, enhance and develop) a livable environment. Law no. 23 of 1997 on Environmental Management, article 5 (1), states that everyone has the same right to a good and healthy environment [1]. The Regional Drinking Water Company (PDAM) Tirtanadi, Medan, is starting to worry about the accumulation of waste in the form of feces. Efforts to process feces are being planned in the largest city on the island of Sumatra. One option PDAM Tirtanadi is considering is to convert the waste into fertilizer or briquettes (solid fuel). PDAM Tirtanadi Medan is already thinking about processing feces into fertilizer and briquettes, and Medan City already has an IPLT, inaugurated in January 2018 [2]. The briquette production process is an effort to process feces into briquettes, so that dirt/feces can be used as raw material for a renewable substitute fuel. However, in practice the briquettes produced are still far from the required standard, as seen from the cracked, broken, and damaged briquettes shown in Figure 1 [3], so it is necessary to analyze the quality of the briquettes by testing them against briquette quality standards using the experimental design method. Visual defects of the briquettes can be seen in Figure 1. Thus, a design concept for making briquettes of more consistent quality is needed [4]. Experimental Design The basic principles that are commonly used need to be recognized; terms related to these principles include treatment, experimental error, and experimental unit. The treatment is a set of experimental conditions that will be applied to the experimental unit within the selected design scope. This treatment can be singular or occur in combination [5,6,7]. Expected Mean Square Values The analysis of variance comprises several models, including: 1. Fixed Model or Model I It is called a fixed model because the treatment levels are assumed to be fixed and the experiment has a corresponding mathematical model. 2. Random Model or Model II, in which the treatment levels are taken randomly from a population; Table 2 lists the expected mean square values for the factorial experiment under the three-factor randomized model (source of variation, degrees of freedom, expected mean square). 
3. Mixed Model or Model III Experiments with a levels of factor A and b levels of factor B, together with c levels of factor C taken randomly from the population of all levels of C, give a mixed model in which A and B are fixed while C is random. Table 3 presents the expected mean square values with respect to the three models for the three-factor factorial experiment. Mixed Model This model occurs when, in the experiment being conducted, the researcher uses all a levels of factor A, b levels of factor B taken randomly from the population of all levels of B, and c levels of factor C taken from the population of all levels of C. Table 4 presents the expected mean square values with respect to the three models for the three-factor factorial experiment. 3 x 2 x 2 x 2 Factorial Experimental Design The experimental design of the 3 x 2 x 2 x 2 factorial is Model I, namely the fixed model. This model is used when the researcher deals only with a fixed number of levels for each factor, namely a for factor A, b for factor B, c for factor C and d for factor D, all of which are used in the study. The conclusions therefore apply only to these fixed levels. Symbolically, this assumption can be written as the usual constraint that each set of fixed factor effects sums to zero over its levels. The null hypothesis that can be tested for this model is that there is no effect of the factors and no interaction effect between the factors; in the formulation of H0, every main effect and every interaction effect is set equal to zero. The critical region boundaries for each test are determined by the chosen significance level α of the F distribution, with the numerator degrees of freedom of the respective treatment (taken from the F-value tables) paired with denominator degrees of freedom equal to the error degrees of freedom. The ANOVA list for the a x b x c x d factorial experimental design can be seen in Table 5. Conceptual Framework The conceptual framework is a model that shows the logical relationship between the identified factors/variables used to analyze the research problem. In other words, the conceptual framework explains the relationship between all factors/variables that are related or explained in the theoretical basis. Research Type This research is experimental research. Experimental research aims to find causal relationships between factors that are deliberately induced, while eliminating or reducing other disturbing factors [10]. Experimental research aims to investigate the causal relationship and how strong it is by applying a treatment to one or more experimental groups and comparing the results with one or more control groups. The research stages are the steps taken to identify the problem and the factors that cause it. These factors are then analyzed to determine the extent of their influence on the problems that occur. Research Stages 2.7.1. Production Process Description Human feces that have been dried are utilized and processed by gluing, drying, molding and shaping so that they become briquettes that are safe, environmentally friendly, meet briquette quality criteria and can be used as an alternative fuel. Briquette Making Process Briquettes are made by weighing the predetermined raw materials, weighing the adhesive, mixing the raw materials and adhesive, weighing the dough, molding, pressing and drying at a predetermined temperature for a predetermined time to produce briquettes. 
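Before the briquette analyses are described, here is a minimal sketch of how the fixed-model factorial ANOVA outlined above could be computed. The column names ('ratio', 'pressure', 'drying', 'burn_time') and the data file are hypothetical placeholders standing in for the study's treatments (raw material to adhesive ratio, jack pressure, drying time) and measured response; this is an illustration of the analysis, not the authors' own computation.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data file with one row per experimental run; the column names are
# placeholders for the treatments named in the text and a response such as the
# length of combustion of the briquette.
df = pd.read_csv("briquette_runs.csv")

# Fixed model (Model I): every factor is treated as fixed, and the model contains
# all main effects and interactions. A fourth factor would be added to the
# formula in exactly the same way (e.g. "* C(factor_d)").
model = ols("burn_time ~ C(ratio) * C(pressure) * C(drying)", data=df).fit()

# ANOVA table: sums of squares, degrees of freedom, F statistics and p-values,
# i.e. the quantities tabulated in the ANOVA list for the factorial design.
print(sm.stats.anova_lm(model, typ=2))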
Briquette Analysis The briquette analysis aims to obtain the characteristics of the briquettes that have been produced, and especially to establish their quality. The analyses carried out, covering moisture content, briquette flame, density, and heating value, consist of: a. Water Content Analysis The percent moisture content is calculated as the percentage of water contained in the briquette and is inversely related to the resulting calorific value. Good briquettes are briquettes that have a low water content, so that their heating value and combustion power are high [11]. b. Briquette Density Analysis Density is a parameter in determining a good briquette. A higher density of the ingredients will result in better briquette quality than a lower density. c. Analysis of the Briquette Flame The analysis of the briquette flame consists of measuring the ignition time and the burning duration. The burning duration is measured by burning the briquettes to heat water until the water boils, waiting until the fire in the briquettes stops burning and the briquettes turn to ash, and recording how long the briquettes burn during the combustion process. d. Calorific Value Analysis The calorific value of a biomass fuel is the amount of heat energy (kJ) that can be released per unit weight of the fuel (kg) if it burns completely; combustion is complete when all of the carbon (C) in the briquette reacts with oxygen to form carbon dioxide (CO2). Experimental Design Calculations Based on the experimental data obtained, tests were carried out to determine whether the treatments given had a significant effect on the burning time of the briquettes, where the treatments are the ratio of raw material to adhesive, the jack pressure and the drying time. In this test, H0 and H1 are determined as follows: H0: There is no significant effect of any factor or of any interaction between the factors on the briquette production. H1: There is a significant effect of a factor or of an interaction between the factors on the briquette production. Analyses were performed with descriptions of a single variable in terms of the analysis tool used. Analysis and Evaluation The analysis carried out on briquettes made from human feces classifies the factors that affect the quality of the briquettes produced, as described in the production process: the time and temperature of the specified drying process and the appropriate jack pressure. The evaluation showed that the briquette production process requires an appropriate ratio of raw materials to adhesive, a set drying duration and temperature, and an appropriate jack pressure, so that the briquettes produced are quality briquettes that are durable, safe and environmentally friendly [12]. Conclusions and recommendations After the analysis is carried out, conclusions are drawn from the research results. The conclusions are the result of data processing and data analysis. This research starts from observing the existing symptoms, then formulating the problem and setting the objectives of the research. The final result of this research is a design for obtaining new alternative energy sources using materials that are readily available in the environment, such as feces (human waste). Data Collection 2.8.1. 
Briquette Making Material Data The raw material used in the briquette-making research is dried human feces. The feces were taken from the sewage treatment plant (IPLT) of PDAM Cemara, Medan. As much as 200 kg of waste was taken and stored in a container for further use and analysis. The ingredients should be dry in order to shorten the curing time. Machine and Equipment Data In the process of making briquettes, several machines and tools are used, namely an inlet tub, pH valve motor, receiving tank, coarse screen, fine screen, equalization tank, thickener, centrifugal pump, polymer mixer, belt press, aeration tub and cake drying bed, together with other equipment used to facilitate the briquette production process, such as plastic basins, shovels, digital scales, stopwatches, digital pocket scales, and thermometers [13]. Experimental Design Data The data used in the briquette production process are the drying time, the jack pressure, and the ratio of raw material to adhesive (polyacrylamide). Analysis of the Description of the Production Process For the analysis of the heating value (higher heating value, HHV), six samples with the best burning times were used to analyze their calorific values. In the third and fifth experiments, the calorific value obtained differed from the heating value in the other experiments. This may be due to the operator's lack of accuracy in reading the scale on the thermometer, so that the temperature value read was incorrect. It can be concluded that the calorific value contained in the briquettes is equal to the average calorific value of the five experiments, namely 6194.21 Cal/g. This calorific value indicates that the briquette has a medium energy/calorie content, which means the briquette has a fairly good combustion rate with a sufficiently large energy content.
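To make the quality criteria concrete, the short sketch below encodes the analyses just described as helper functions and checks the best briquette reported in the abstract against the quoted thresholds. The moisture-content and density formulas are the usual definitions and are an assumption here, since the paper states them only in words; the numerical values are the ones reported in the text, and the density benchmark is read as a minimum.

```python
def moisture_content_pct(wet_mass_g: float, dry_mass_g: float) -> float:
    """Percent moisture on a wet basis: 100 * (wet - dry) / wet (assumed definition)."""
    return 100.0 * (wet_mass_g - dry_mass_g) / wet_mass_g

def density_g_cm3(mass_g: float, volume_cm3: float) -> float:
    """Briquette density as mass divided by volume."""
    return mass_g / volume_cm3

def mean_calorific_value(values_cal_g: list[float]) -> float:
    """Average calorific value over the retained experiments."""
    return sum(values_cal_g) / len(values_cal_g)

def meets_standards(calorific_cal_g: float, moisture_pct: float, density: float) -> bool:
    """SNI 01-6235-2000: >= 5000 Cal/g and <= 7.75 % moisture; density compared
    against the 1.0 g/cm^3 American benchmark, taken here as a minimum."""
    return calorific_cal_g >= 5000.0 and moisture_pct <= 7.75 and density >= 1.0

# Best briquette reported in the abstract (sample 16):
print(meets_standards(calorific_cal_g=7436.55, moisture_pct=7.277, density=1.06))  # True
```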
2021-05-21T16:58:13.138Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "52d0da29c3eddba39a9d81da263d3553edadde02", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1122/1/012086", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d12771f836bb0cba5da8878e0a3c8308304b1253", "s2fieldsofstudy": [ "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
270722345
pes2o/s2orc
v3-fos-license
“More Mindful of ESL Students”: Teacher Participation and Learning in ESL and Content Teachers’ Collaboration in a Science Middle School Classroom Teacher collaboration has received international research attention and has emerged as an effective way for teachers to engage in professional growth opportunities (Dove & Honigsfeld, 2018; Rao & Chen, 2020). An examination of teacher collaboration can shed light on the process by which teachers work together and illuminate further possibilities for professional learning and growth across all English teaching contexts (Dove & Honigsfeld, 2018; Giles & Yazan, 2019). Building on a sociocultural theory of learning, this study examined ESL and science teachers’ participation in a collaborative partnership to enhance ESL students’ education. It investigated how both teachers learned to co-plan and co-teach ESL students in a seventh-grade science classroom in the Southeastern U.S. This study relied on qualitative data methods and employed grounded theory techniques (Charmaz, 2006). The findings showed that limited collaborative planning time and the ESL and science teachers’ disparate notions of collaborative teaching contributed to the teachers’ unequal collaborative planning and teaching roles. Consequently, different learning outcomes were realized for both teachers. Introduction Teacher collaboration has received international attention and has emerged as an effective approach to teaching English in EFL and ESL contexts (Dove & Honigsfeld, 2018;Rao & Chen, 2020).Despite the growing popularity of teacher collaborative partnerships, such collaboration is still an underexplored research area in both EFL and ESL settings (Dove & Honigsfeld, 2018).EFL students most likely receive English instruction in a setting where English may not be the dominant language spoken, while ESL students receive English instruction often in contexts where the English language is dominant (Longcope, 2009;Storch & Sato, 2020).Previous studies point out how such contextual differences can influence English language teaching and learning (Longcope, 2009;Storch & Sato, 2020).Notwithstanding this difference, researchers and teachers alike aim to improve their instructional practices to teach English more effectively regardless of the context (Khaled et al., 2020).In this way, research affirms that teacher collaboration can engage teachers and researchers in opportunities for professional growth as they work together for the shared purpose of teaching English (Darling-Hammond et al., 2017;Giles, 2019;Giles & Yazan, 2020).An examination of teacher collaboration can shed light on the process by which teachers work together and further illuminate possibilities for professional learning and growth across all English teaching contexts (Dove & Honigsfeld, 2018). With this aim, the current study investigates how a new ESL teacher initiates, participates, and sustains a collaborative partnership with a seventh-grade science teacher in a content area classroom in the Southeastern U.S. Drawing on sociocultural learning theories and earlier studies on ESL and content teachers' collaboration, this study addresses the following research questions: (1) How do ESL and science teachers participate in an emerging collaborative partnership to co-plan for and co-teach ESL students in a seventh-grade science classroom in the Southeastern U.S.? 
(2) How does the ESL and science teachers' participation in this collaborative partnership relate to how the teachers learned to co-plan for and co-teach ESL students?This paper begins with a discussion of sociocultural learning theories and a literature review of ESL and content teachers' collaboration.Then, it describes the data collection and analytic procedures, which is followed by the presentation and discussion of the findings. Sociocultural Lens for Teacher Learning This paper draws on sociocultural learning theory to argue that teacher learning is a social process that occurs through human interactions in authentic and relevant contexts (Johnson & Golombek, 2016).From this perspective, teacher learning occurs when teachers rely on other people and tools to mediate their participation in the act of teaching so that they appropriate the resources for their own future use (Johnson & Golombek, 2003).In this way, collegial interactions can mediate teachers' participation and influence the ways that teachers transform their teaching practices.Thus, teacher learning is a dynamic, life-long process where teachers reconceptualize "understandings of themselves as teachers, of their students, and of the activities of teaching" (Johnson & Golombek, 2003, p. 735).This means that shifts in teachers' perspectives of who they are (e.g., professional identity development) and what they do (e.g., the teaching act) and/or observable changes in their teaching practices evince teachers' learning processes. We conceive ESL and content teachers' collaboration as a mediational space (Martin-Beltran & Peercy, 2014) where teachers rethink how to best serve ESL students and act on these renewed understandings to transform their teaching practices and influence positively ESL students' learning outcomes (Giles & Yazan, 2020).More specifically, as the ESL and science teachers engage in collaboration, they draw on their past experiences and expertise to "co-construct knowledge" to plan for and teach ESL students in the science classroom (Martin-Beltran & Peercy, 2014, p. 5).In this collaborative space, ESL and science teachers learn by reimagining their professional identities, changing their views on ESL students, and/or experimenting with different instructional approaches. Even though collaboration can be a pathway for equitable learning opportunities for ESL students (Giles & Yazan, 2019;Peercy, Martin Beltran, et al., 2017), it is still an underexplored research area (Dove & Honigsfeld, 2018;Peercy, 2018).Few research studies examine collaborative partnerships in secondary schools (Giles & Yazan, 2019;Glazier et al., 2017).As such, collaboration between secondary ESL and content teachers warrants further investigation (Dove & Honigsfeld, 2018).Undertaking this exploration, the current study seeks to examine the development of an emerging collaborative partnership between ESL and science teachers in a secondary school. 
Methodology The School Context Situated within a large suburban district in the Southeastern U.S., Starcreek Middle School4 contained about 800 students during the 2016-2017 school year.There were 26 students classified as ESL students, which meant that students identified an additional language on a home language survey at registration and made a qualifying score on the World Class Instructional Design Assessment (WIDA) Screener and/or ACCESS for English Language Learners 2.0.This study's state requires that students who identify an additional language on a home language survey to take the WIDA screener.If students make a qualifying score (i.e., 4.9 or lower), then they qualify for English language services.Students are also placed in four content area classrooms at registration, and if students make a qualifying score on the WIDA screener, they were placed in a 55-minute ESL class period taught by the school's ESL teacher.The study's state also requires students to take the ACCESS for English Language Learners 2.0 language proficiency assessment annually until a score (e.g., 4.8 or above) is reached to exit the English language program.The majority of English language instruction took place in content area classrooms since ESL students only spent a small portion of time in the ESL classroom daily. Co-participants The collaborative team in this study consisted of a science and ESL teacher, Candace and Amanda respectively.Candace taught four 55-minute science class periods daily and spoke English only.She had 15 years of total teaching experience when the study began, all of which were at Starcreek, and taught five ESL students in that semester.Even though she reported that she had experience teaching diverse students that included ESL students throughout her professional career, she stated that she had no previous training related to ESL instruction and/or collaborating with any ESL teacher prior to engaging in this collaborative process with Amanda.On four different occasions, Candace described herself professionally unqualified and unequipped to make decisions for ESL students in the science classroom, which most likely is a consequence of her inadequate training related to ESL students and instruction.When asked why she wanted to participate in this collaborative experience, she lauded the ESL teacher's (Amanda's) willingness to work with content teachers and her own desire to engage in professional growth opportunities that were relevant and practical to the science curriculum (Interview #1, March 16, 2017).She also wanted to emphasize her willingness to collaborate, stating, "I'm not someone who is not willing.It's more the fact that I'm not always as qualified" (Interview #3, May 25, 2017).Her words speak not only to her willingness to collaborate with the ESL teacher, but they also echo earlier studies that report the content teachers' limited training and experiences working with culturally and linguistically diverse students (Brooks & Adams, 2015). 
Amanda is the study's first author and ESL teacher.She was in her second year as an ESL teacher when the study began.Majoring in English and Spanish in college, she frequently taught many ESL students in her eighth-grade language arts classroom, translated for Spanish-speaking parents at school meetings, and began conversations with the principal early in her teaching career about transitioning to her current role as the ESL teacher at Starcreek.When she had the opportunity to retain full-time employment at Starcreek, she began teaching Spanish and ESL during the 2015-2016 academic school year.Realizing she wanted to continue to grow as an ESL teacher, she began a doctoral program with a specialization in second language teaching and learning that same year.Drawing on her own experiences as a content teacher and training related to ESL instruction, she began to solidify her belief that ESL instruction necessitated a collaborative partnership where all educational stakeholders work to provide equitable learning outcomes for ESL students.Amanda chose Candace as the collaborating science teacher because Candace taught the most ESL students in the seventh grade.Candace also agreed to participate in the study by signing the informed consent form.In addition, Amanda's goals were twofold.First, she sought to promote a shared responsibility for ESL instruction, and she envisioned collaboration as the most effective way to teach content and language to ESL students since students received the majority of language instruction in content area classrooms.Such collaborative practice might work to change the school culture at Starcreek Middle School where ESL and content teachers regularly work to collaborate for ESL students' educational outcomes.She also aimed to explore a topic at the intersection of research and practice.This exploration would examine the processes and experiences involved in collaboration, and thus contribute to the research on teacher collaboration in secondary schools where there is still a gap in the literature.In this way, this study was a pilot study in preparation for her dissertation (See Giles, 2019) that reported on one of her earliest attempts to initiate and sustain collaboration with content teachers. Data Collection Data collection included two cycles of collaboration between Candace and Amanda during the 2016-2017 school year.Each cycle aimed to produce one collaboratively planned and taught lesson based on the content and language standards.The first cycle began with an interview where Amanda asked Candace to share her previous experiences working with ESL students and/or the ESL teacher.The collaborating teachers then met to plan a content lesson that included language objectives.After Candace and Amanda planned the lesson together, they engaged in collaborative teaching, and both reflected on their experiences in a reflective journal separately.The second collaborative cycle began with the second interview, which sought to clarify statements in the reflective journal and expound on ongoing learning opportunities for both teachers.Like the first cycle, Candace and Amanda planned a second lesson together based on the content and language standards, collaboratively taught the lesson, and reflected on their planning, teaching, and learning in reflective journals authored by both teachers separately.The third interview concluded the collaborative process and served to elicit refined understandings about working with ESL students and collaborating with the ESL teacher. 
More specifically, data collection used qualitative methods, which included three audio recorded semistructured interviews, two video recorded collaborative planning sessions (CPS), two reflective journals (RJ) authored by the science and ESL teachers separately, on-going e-mail correspondence (EC) between the collaborating teachers, and field notes throughout the study's duration. We used these data methods to ascertain how a new ESL teacher's participation in collaboration with a science teacher influenced how both teachers learned to co-plan for and co-teach ESL students in the science classroom. Data Analysis We employed grounded theory (Charmaz, 2006) coding techniques to analyze the ESL and science teachers' participation and learning opportunities in collaboration. The analytical process spanned three coding cycles (see Appendix 1 for the coding table). During the initial coding cycle, we used in vivo and line by line coding (Charmaz, 2006) to emphasize teachers' exact words and construct codes developed in and through the data. This process uncovered 107 initial codes. During the focused coding cycle, we divided the initial codes into six categories that explained the smaller data segments. During the final cycle, we reflected on how these categories fit together to explain how the ESL and science teachers' participation related to both teachers' learning outcomes. To this aim, this coding cycle turned the data into theme statements, which will be explained in the next section. Findings The findings showed that insufficient time for collaborative lesson planning and the science and ESL teachers' disparate notions of collaborative teaching contributed to the ESL and science teachers' unequal collaborative planning and teaching roles, thereby constraining opportunities for the ESL teacher's participation. As such, these challenges led to the ESL teacher's role as a classroom assistant during both collaborative cycles. Consequently, teachers' unequal collaborative participatory roles related to the teachers' different learning outcomes to co-plan for and co-teach ESL students. Insufficient Time for Collaborative Lesson Planning The ESL and science teachers' limited collaborative planning time constrained opportunities for the ESL teacher's participation during both collaborative cycles. After many scheduling attempts, Candace and Amanda met for the first planning session and agreed to co-teach a lab on the length of the digestive system (CPS #1, April 27, 2017). This session lasted less than five minutes because, due to personal commitments, 
In an effort to sustain the first collaborative cycle, Amanda agreed to "definitely be there" during the first collaborative teaching session even though she doubted she contributed to planning the lesson, and consequently, had unclear expectations of her own teaching role (Email correspondence, May 1, 2017).In reflection, Amanda explained her (non)contribution to the planning session: We didn't design the lesson together.There was not a collaborative planning session where we actually planned and designed the lesson.There were many scheduling conflicts which prevented this from happening.There was just not a lot I could contribute to an already designed lesson activity.(RJ #1, May 11, 2017) Amanda's reflection showed the unequal planning responsibilities during the first session where Candace assumed primary responsibility for planning the lesson.Amanda attributed this unequal division of labor to "many scheduling conflicts" and ultimately stated her struggle to contribute to "an already designed lesson activity."Candace concurred that limited planning opportunities constrained both teachers' participation in collaboration.In the second interview, Candace identified planning time as a major challenge in the following: Candace In this exchange, Candace focused on the "lack of common planning time," which meant that she attributed the limited planning time to not having a shared planning period with the ESL teacher.Instead of discussing her collaboration with the ESL teacher, she shifted the topic to emphasize the fact that she only collaborated with her academic "team" and did not meet with administrators, the counselor, the special education teacher, and/or or the ESL teacher regularly.She pinpointed scheduling conflicts as the major obstacle to collaboration but did not conceptualize a different schedule so that all administrators and teachers could meet.Her words (e.g., "we're all doing the best we can for what we have") indicate that, while she wanted additional time, she resigned herself to believe that the schedule might not change to create space for more planning time between all stakeholders. 
The second collaborative cycle paralleled the first cycle and did little to create additional opportunities for both teachers' participation. Candace stated that the challenges were "similar to before in finding the time to work and plan together" (RJ #2, May 23, 2017). Like the first cycle, the second cycle included only one planning session, which lasted less than five minutes. During the first session, Candace stated the lesson objective as dissecting a frog as a culminating activity to the human body unit (CPS #2, May 18, 2017). Attempting to share planning responsibilities during this session, Amanda asked how she could assist Candace in planning and teaching the lesson. In her response, Candace did not offer to share planning responsibilities with Amanda. Instead, she still assumed primary responsibility, which is made clear through her use of the first-person singular pronoun (e.g., "I'll be using the PowerPoint"). While she did not ask Amanda to help her plan, she wanted assistance "going group to group" to help students "[know] what they see." Without adequate content knowledge about the body parts of a frog, Amanda struggled to offer language strategies to help ESL students who could potentially have a "really hard time sometimes knowing what they are seeing." Consequently, without opportunities to share planning responsibilities and without an additional planning session, Amanda was unable to fully participate in the second collaborative cycle. Amanda admitted that she did not contribute, commenting, "I didn't have input during the planning session. She already designed the lesson activity, and I helped her implement it in class" (RJ #2, May 24, 2017). Therefore, during both collaborative cycles, Amanda did not share planning responsibilities, which constrained her opportunities for planning for and teaching ESL students in the science classroom.

As a consequence of Amanda's inability to assume planning responsibilities, Amanda's role resembled that of a classroom assistant during both collaborative teaching sessions. During the first teaching session, she helped students measure various items as they worked to complete the lab on digestion. In commenting on her own role during the first teaching session, Amanda wrote, "She [Candace] would have taught the lesson the same way with or without my assistance. I assisted students in class, of course, I'm happy to help whoever needs assistance, but I wouldn't say my assistance was crucial" (RJ #1, May 11, 2017). Based on Amanda's perceptions, she did not think her teaching role "was crucial" in helping the ESL students because Candace could have "taught the lesson the same way with or without [her]." She also stated that she assisted "students" and stressed that she was "happy to help whoever needs assistance;" yet, she did not believe she had a teaching role that helped the ESL students access and master the content objectives. Amanda's teaching role of a classroom assistant continued in the second cycle, in which she assisted students with the frog dissection. To Amanda, the teaching sessions were not collaborative. In this way, insufficient planning time constrained Amanda's opportunities to participate and relegated her teaching role to that of a classroom assistant.
Teachers' Disparate Collaborative Teaching Notions

The ESL and science teachers' disparate collaborative teaching notions prevented both teachers' full participation in the collaborative process. Candace had no previous experience engaging in collaborative planning and teaching with an ESL teacher (Interview #1, March 16, 2017). As a consequence, she did not have a prior collaborative experience to which she could compare this one with Amanda. When asked how she envisioned ideal professional learning opportunities, Candace responded that she desired "actual practical application that we could apply directly back to the classroom" (Interview #1, March 16, 2017). Since both collaborative cycles reflected Candace's expectation of "practical application," she did not express a desire for Amanda's increased contribution in the science classroom. Moreover, Amanda exceeded her expectations of the role of an ESL teacher within the school community. In commenting on Amanda's role, Candace remarked that Amanda's assistance was "quick" and "immediate," believing that Amanda made herself available in content area classrooms and beyond to help ESL students with content area assignments. In doing so, Candace juxtaposed Amanda's role with that of previous ESL teachers at Starcreek by stating that former ESL teachers provided little to no support except for managerial tasks related to the student's language plan. Since the collaborative process related to her own goals for professional development and aligned with her notions about the ESL teacher's role, Candace did not express concerns about their unequal responsibilities in collaborative planning and teaching. When asked to explain how this collaborative process reflected her ideal, she exclaimed:

I think it was great. It was perfect. We talked over everything and kind of had a plan and then you know you did a great job kind of checking on the kids with what they were understanding and doing and keeping them on task and that kind of thing. I think it was perfect and great. (Interview #2, May 11, 2017)

Candace stated that she believed this collaborative experience was "great" and "perfect" because both she and Amanda discussed the lesson "plan" and ensured that the students understood how to access and eventually meet the content objectives by "checking on the kids." Therefore, in her mind, she praised the collaborative effort because it aligned with her notions for collaborative planning and teaching.

Although this collaborative process reflected Candace's ideal, Amanda reflected that her collaboration with Candace contradicted her ideal notions. In a reflective journal, Amanda expressed her frustration, stating:
I do try to make myself available as much as possible. I'll also do whatever is needed; however, I need to know what is needed. Here, I'm not sure if Candace just didn't know what she didn't know due to her lack of training, or if she had never conceptualized collaboration in a different way. But, this wasn't collaborative. If it was collaboration, it was collaboration at the most BASIC level. (RJ #2, May 24, 2017)

Amanda wrestled with the fact that her ideal notions clashed with Candace's notions, to the point where she was unsure whether to pinpoint the cause as Candace's inexperience or her conceptualization of collaboration. Amanda emphatically stated that the process "wasn't collaborative" because she did not actually help Candace plan or teach the lessons. She stressed that this experience met, at best, her most "BASIC" expectations for collaboration. Even though the collaborative process did not meet Amanda's ideals, Amanda never shared her concerns with Candace during either collaborative cycle. Had Amanda expressed her frustrations, she may have created additional space for her own participation, especially given the fact that Candace noted Amanda's willingness to offer assistance in the content area classroom. Amanda's unwillingness to express her feelings is most likely attributed to the fact that she constantly worried that she and Candace would not sustain the collaborative process after many failed scheduling attempts because the school year was drawing to an end (Field notes, May 27, 2017). The collaborative cycles concluded with a final interview, which took place on May 25, 2017, the last day of school for students and teachers. Thus, had there been additional school days, Amanda may have shared her concerns and assumed an increased participatory role during the second collaborative cycle. Nonetheless, Candace and Amanda's different notions of collaborative planning and teaching constrained both teachers' participation in collaboration and did not make both teachers feel as if their contributions were valuable or significant, especially Amanda, who stated her desire for an increased role in collaboration.

Teachers' Different Learning Outcomes

The ESL and science teachers' unequal planning and teaching roles contributed to the teachers' different learning outcomes to co-plan for and co-teach ESL students. In her collaboration with Amanda, Candace stated that she learned to be "more mindful to think about providing accommodations for ESL students" (RJ #2, May 23, 2017). When asked how she adapted instruction to meet the ESL students' content and language needs prior to collaboration, she responded, "I do a lot of just regular accommodations that I would do for any student that needed help. I'm not sure I've done as many things that are actually targeted to their needs as they're learning the language" (Interview #1, March 16, 2017). Candace admitted that she did not plan lessons with ESL students in mind before collaboration, even though she recognized that ESL students were not always "successful" in the science classroom. She "[felt] bad" when they "[bombed] a test or [bombed] an assignment" (Interview #1, March 16, 2017). As a consequence of collaborating with Amanda, she now realized she needed to be "more mindful" of ESL students, thereby changing her mindset to include ESL students when designing lessons.
There was no evidence to suggest that Candace's learning progressed beyond her stated realization to be "more mindful of ESL students." When asked how she might be "more mindful of ESL students" in the future, she commented, "It would be ideal to have more time working collaboratively with the ESL teacher" (Interview #3, May 25, 2017). Candace's words suggest that she would think about adapting her instruction for ESL students if she had more time to collaborate with Amanda. She used the conditional tense (e.g., "it would be ideal"), which further suggests that she did not think such collaboration would take place in actuality due to time constraints and scheduling conflicts. Nonetheless, if Candace continues to collaborate with Amanda, she might further reflect on and refine her understandings about how to accommodate ESL students in the science classroom. Candace's shift in thinking is a potentially worthwhile first step in the learning process if she has continued interactions with Amanda and other colleagues to plan for and teach ESL students.

Amanda, however, moved beyond an initial realization and stated how this experience would change her approach to collaborating with content teachers. In learning how to initiate and sustain collaboration, Amanda explained:

I would make sure we had a second planning session during each cycle. Moving forward, we have to have two at least. I also have to start the collaborative process earlier to create leeway for scheduling conflicts. In future collaborative efforts with content teachers, I need to be more assertive in voicing my expectations for collaboration. This begins with explicitly stating my desire for a stronger planning and teaching role in the content area classroom. Without attempting to assume a stronger role, I will always be a classroom assistant. (RJ #2, May 24, 2017)

Amanda now understood the importance of additional planning sessions with adequate planning time to define and clarify collaborative planning and teaching responsibilities. It was also clear that she recognized that she needed to "be more assertive" and take greater risks in "voicing [her] expectations" for collaboration. Without clear expectations for collaborative planning and teaching, she thought that she would "always be a classroom assistant," which contradicted her own notions of collaboration. Moreover, she refined and appropriated a different collaborative approach when she realized that articulating her expectations for collaboration to content teachers might be an important entry point in future collaborative efforts. In moving forward, she learned to create additional sessions for collaborative planning and to directly state to content teachers her desire for an increased planning and teaching role.
Conclusion and Future Directions

This study affirms earlier work that discusses how teachers' different expectations for collaboration can lead to unequal participatory roles in collaboration (Arkoudis, 2003). Moreover, unequal planning responsibilities constrained opportunities for the ESL teacher's participation in the content area classroom and led to the ESL teacher's role as a classroom assistant (Ahmed Hersi et al., 2016; Arkoudis, 2006; Creese, 2002; McClure & Cahnmann-Taylor, 2010). This study is distinct from earlier studies that report on the ESL teacher's relegation because the content teacher did not perceive the ESL teacher's role as less than that of the content teacher. That is, both Candace and Amanda assisted students with the labs, and in Candace's opinion, Amanda distinguished herself from previous ESL teachers at Starcreek who offered little to no support. In this way, Candace might have created opportunities for Amanda's increased participation had Amanda voiced her expectations and desire for an increased role in collaboration.

This study also attests to the fact that collaboration can yield learning outcomes for teachers despite the challenges experienced in collaboration (Giles & Yazan, 2019; Martin-Beltran & Peercy, 2014; Peercy, Martin-Beltran, et al., 2017). Candace and Amanda fulfilled different learning outcomes. Candace stated her desire to be "more mindful" of ESL students in the science classroom, but this shift in thinking was not enough to change her teaching practices to focus on ESL students nor change how she engages in collaboration with Amanda (Interview #3, May 25, 2017). From a sociocultural learning perspective, Candace's learning to plan for and teach ESL students is still in the early stages of the process, where she will need to rely on tools and people to mediate how to best plan for and teach ESL students. In this regard, continued interactions with Amanda might eventually lead her to appropriate these resources for her own future use. On the other hand, there is evidence to suggest that collaboration was a mediational space for Amanda to refine her understandings about how she engages in collaboration with content teachers, which she regulated and appropriated for her own future use. She explicitly stated how she would participate differently in future collaborative endeavors. While her future collaborative actions extend beyond this study, other studies (see, for example, Giles, 2018, 2019) attest to the fact that she changed her collaborative approach with content teachers by taking an increased agentive role in planning for and teaching ESL students in content area classrooms.

This study calls for future studies on ESL and content teachers' collaboration in secondary schools where teachers voluntarily agree to engage in a collaborative partnership to impact ESL students' learning outcomes. Future studies might investigate collaboration with additional content teachers (e.g., English/language arts, mathematics, and social studies) and explore how such collaborative partnerships influence ESL students' outcomes in the content area classrooms. This study is limited by time and one collaborative effort with a science teacher, so an additional study might explore the ESL teacher's collaboration with additional subject areas. Research on the impact of collaboration on ESL students in the content area classroom would illuminate how collaboration works to actualize equitable educational outcomes for ESL students.
Implicit Multidimensional Projection of Local Subspaces

Abstract-We propose a visualization method to understand the effect of multidimensional projection on local subspaces, using implicit function differentiation. Here, we understand the local subspace as the multidimensional local neighborhood of data points. Existing methods focus on the projection of multidimensional data points, and the neighborhood information is ignored. Our method is able to analyze the shape and directional information of the local subspace to gain more insights into the global structure of the data through the perception of local structures. Local subspaces are fitted by multidimensional ellipses that are spanned by basis vectors. An accurate and efficient vector transformation method is proposed based on analytical differentiation of multidimensional projections formulated as implicit functions. The results are visualized as glyphs and analyzed using a full set of specifically designed interactions supported in our efficient web-based visualization tool. The usefulness of our method is demonstrated using various multi- and high-dimensional benchmark datasets. Our implicit differentiation vector transformation is evaluated through numerical comparisons; the overall method is evaluated through exploration examples and use cases.

Index Terms-High-dimensional data visualization, dimensionality reduction, local linear subspaces, user interaction

INTRODUCTION

Multidimensional data analysis has manifold applications in diverse domains such as finance, science, and engineering. It is often conducted by reducing data dimensionality with a dimensionality reduction (DR) technique and then visualizing the reduced data with scatterplots. By representing data points as visual marks in 2D space, 2D scatterplots are one of the most useful and common approaches [38] for visual exploration of multidimensional data. However, depicting the projected data with point-based information alone is oversimplified, as only the density distribution can be perceived. In this paper, we propose a model to visually understand multidimensional local linear subspaces after dimensionality reduction; we refer to a local linear subspace as the local linearized neighborhood in the original multidimensional space.
There are many DR techniques that preserve certain structures of the data in low-dimensional space. Early techniques aim to faithfully represent the data's global structures, for example, principal component analysis (PCA) [18] and multidimensional scaling (MDS) [2,45], however, they cannot effectively reveal the low-dimensional manifold embedded in high-dimensional data. In contrast, more recent DR methods, e.g., locally linear embedding (LLE) [36] and t-distributed stochastic neighbor embedding (t-SNE) [46], seek to map nearby points in the high-dimensional space to nearby points in the low-dimensional space. These methods can better preserve local linear structures, however, only encoding data points as positions of visual marks will not reveal such structures, except for the distance between them. In particular, the traditional point visualization misses any orientation information from the local linear structures. Fig. 1 illustrates this issue for a typical DR example along with our strategy to resolve it. Fig. 1 (a) shows a traditional dot-based visualization in a scatterplot, where only the density distribution of data points can be perceived. With our new implicit local subspace projection method ( Fig. 1 (b)), global trends and local structures are visualized through the orientation and shape of glyphs. With our approach, we can visually separate two different groups of glyphs near the bottom-right of the plot-they are generally associated with right-and left-facing faces (transparent dashed curves in green). Another example is a global trend highlighted with two green dashed curves in the image view in Fig. 1 (c); now, it is possible to identify two trends that cross at the center of the plot, each is associated with a different smoothly changing camera position. Fig. 1 (b) demonstrates that further interesting local pattern can be identified with the glyphs, where three clusters of glyphs (bounded by dash lines in the orange zoom-in) correspond to the associated images with three distinct face orientations and the crossing pattern (see the purple zoom-in) well reflects the smooth transitions of face directions. We propose a model to characterize multidimensional local subspaces with basis vectors anchored at each data point (all in the original space) and transform the basis vectors to a low-dimensional visualization space with an analytical vector transformation technique based on the implicit function theorem. The basis vectors are extracted using local PCA in the original multidimensional space. Our vector transformation method takes the DR technique as an implicit function and uses its analytical gradient to compute the accurate projected basis vectors. In this way, we guarantee that the transformation of the subspace is consistent with the projection of the points in the DR method. Our implicit function-based vector transformation has the advantage of being efficient and accurate at the same time. To visualize the projected local subspace, we construct a glyph in the form of a closed B-spline curve that captures the deformations introduced by the transformation of the basis vectors. We reduce the overlap between glyphs by interactively changing their opacities and size, and order them according to glyph area. We measure the projection quality with two metrics: the loss function of the DR method and trustworthiness [13,32], and compute the anisotropy of the local subspace in high dimensional data. 
Combining these measures, we design a set of linked views for interactively examining projection errors and the relationship between them and data attributes. Based on a web-based implementation, we demonstrate the effectiveness of our framework with case studies. Our main contributions are as follows: • A model that visualizes local subspaces for multidimensional projections such that global trends in the data can be inferred by the perception of local subspaces; • An analytical vector transformation method based on implicit function differentiation; and • The glyph representation that shows the geometry of transformed basis vectors of local structures in the same 2D domain as a traditional scatterplot. Since we do not change the basic intuition associated with traditional multidimensional visualization, our approach readily fits in any existing DR visualization pipeline, regardless of whether it uses linear or nonlinear multidimensional projections. The source code is available for download on GitHub 1 . RELATED WORK Dimensionality reduction is an important research topic in statistics, data science, and visualization. A comprehensive survey of DR methods can be found elsewhere [22,47]. One issue of DR techniques is their interpretation. This is a difficult problem [48] addressed in extensive previous work. A large body of research focuses on providing empirical guidance and design principles for interpreting DR results and assessing their quality. Some examples include a systematic literature review of the topic [37], metric-based quality assessments of synthetic data [21], and a study of different visual encoding schemes [38]. Another branch of research improves the interpretability of DR visualizations by including quality information [13,32]. Aupetit [1] encodes Voronoi cells with luminance in the 2D visualization with distortion and uncertainty measurements to show the DR quality. Similarly, CheckViz [23] visualizes distortions in the local mappings with a 2D perceptually uniform color map applied to Voronoi-cell partitionings of the 2D scatterplot after dimensionality reduction. Seifert et al. [39] visualize the local stress metric [6] as a combination of a 2D heat map and a height map, simultaneously showing the stress levels of each datum and its neighboring area. Interactive tools with brushing-and-linking and carefully designed visual encodings are also useful for understanding DR results. Embedding Projector [42] allows the user to explore DR data as a 3D 1 https://github.com/VisLabWang/DRImplicitVecXform scatterplot and shows DR processes by animation; however, it is a general tool for overview and more in-depth analysis is not supported. A set of interactive visual analysis methods using various visual encoding are proposed to study the quality of dimensionality reduction methods on large-scale datasets [28]. Stahnke et al. [43] probe multidimensional projections with a set of integrated interaction techniques that allow for the investigation of each data point and a neighborhood as well as additional information, e.g., classes, clusterings, and original dimensions. Comparative visualization [8] provides results of multiple DR techniques, where the user can assess different behaviors of these techniques through interactive exploration of multidimensional data with linked views. Relating DR results with original data dimensions is yet another way of interpreting those results. 
Axes of original data dimensions can be drawn as 3D biplot [16] axes to understand DR results in 3D plots [7]. An interactive framework [3] allows the user to visually explore forward (high-to-low) and backward (low-to-high) projections, where original data axes are shown in 2D biplots. Alternatively, a perturbation-analysis-based method [15] aims to understand nonlinear DR methods. Here, the goal is to recognize how generalized axis lines (visualized as contours) change according to user-specified infinitesimal perturbations; small changes of the data are modeled in the original multidimensional space and their effect on the projected axes in the DR display are computed by automatic differentiation. Our work has a different goal in mind: we want to understand the shape and orientation of the local structure, and, furthermore, global correlations of data points; and we use implicit differentiation to accurately transform basis vectors in the multidimensional local neighborhood. Features of interest in multidimensional space may live in lowdimensional subspaces [49]. Therefore, subspace analysis that models multidimensional data by the union of multiple subspaces is a powerful tool for multidimensional data analysis. For example, subspace clustering localizes the search for features in relevant dimensions [33,49]. Recently, sparse and low-rank subspace clustering methods have been widely used in machine learning, computer vision, and pattern recognition [11,24]. Related concepts are useful for visualization as well. For example, automatic subspace searching, grouping, and filtering, followed by interactive analysis allow users to visually explore multidimensional data [44]. Alternatively, subspace clustering and animation of dynamic projections can be used for interactive visual exploration of subspaces [25]. These methods analyze lower-dimensional subspaces and point-based projections in multidimensional datasets. In contrast, we focus on shape and direction-based analysis of local neighborhoods of the same dimensionality of the data. Flow-based scatterplots [4] and generalized sensitivity scatterplots [5] enhance 2D scatterplots with glyphs encoding local trends. These methods are effective in identifying local relationships between two variables. More recently, clusters in DR data have been visualized on scatterplots with winglets [27] that enhance data points with arcs encoding cluster information and uncertainty. However, the local patterns visualized by both approaches are not defined by the whole original data. In contrast, improved parallel coordinates plots [31,51] can directly visualize local multivariate correlations of the original data. For example, indexed-points parallel coordinates [51] represent multidimensional locally fitted planar structures by using p-flat indexed points, allowing for effective pattern recognition through visual clustering. However, such parallel coordinates are effective only for data with a small number of dimensions [41]. In contrast, our method is designed for multidimensional projections, can be applied to high-dimensional data, considers multiple vectors to faithfully describe local subspaces, and-most importantly-works consistently with any DR method. Recently, user-centered dimensionality reduction methods that allow for fine tuning of the projections have attracted increasing attention. 
Most methods [14,20,34] based on neighborhood reconstruction are designed to support full interactivity and controllability but require user-provided control points. In contrast, our method focuses on the understanding of projection results with glyph perception and user interaction, and reveals the structure of local subspace around each projection point. Fig. 2. Workflow of our method: Dimensionalty reduction (left) projects points from the original space to the low-dimensional visualization space. We extend this conventional approach by first computing a linear subspace around each data point using local PCA and then projecting the basis vectors of the subspace to the low-dimensional visualization space. These transformed vectors are used to encode the deformation of the projected subspace in glyphs. METHOD OVERVIEW Our method augments point-based multidimensional projection with the shape and orientation information depicting the multidimensional local subspace around data points. The global structure of multidimensional datasets can be better understood by including local subspaces. Fig. 3 motivates our approach for a simple example of a synthetic data with two perpendicular planes in 3D space. The projection is chosen in a way so that the projection direction is not perpendicular to the planes. Therefore, it is not possible to separate the two planes in the traditional point-based visualization ( Fig. 3 (c)) unless we use color to distinguish them ( Fig. 3 (f)). With our method, the two planes can be easily identified-even without color coding-by the shapes and directions of glyphs: thin, elongated, vertical-going glyphs associated with the plane that covers a smaller area after projection; and round glyphs that represent the plane that is better preserved in the projection ( Fig. 3 (b)). Our approach extends traditional multidimensional projection in three ways: (1) extract the local linear subspace in the original space, (2) project the subspace in a way that is consistent with the DR technique applied to the data points, and (3) visualize the information from the projection of the subspace. The respective workflow of our method is illustrated in Fig. 2. For (1), we identify the local linear structure around each data point in the multidimensional space using the k-nearest neighbors (kNN) method. The local neighborhood of a point is fitted by a multidimensional ellipsoid as we perform PCA of the neighborhood and obtain its eigenvectors and the corresponding eigenvalues. In step (2), the extracted subspaces are transformed to the 2D visualization space. To this end, we transform the eigenvectors spanning local subspaces using implicit differentiation as explained in Section 4. In practice, not all eigenvectors are needed as many of them are associated with small eigenvalues and contribute little to the subsequent computations, i.e., we use local linear subspaces that do not necessarily have the full dimen- sionality of the original dataset. Finally (3), data with local subspace information is visualized using 2D ellipse-like glyphs generated from the data points and the transformed vectors as discussed in Section 5. MATHEMATICAL MODEL This section first clarifies the terminology and mathematical assumptions that we make. Then, we discuss our way of representing local linear subspaces and the corresponding transformation of vectors. This leads to our approach to using implicit functions to compute the transformation for general nonlinear DR methods. 
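As a concrete reference for step (1) of the workflow described above (extracting the local linear subspace with kNN and local PCA), the following is a minimal NumPy sketch; it is not the authors' implementation, and the function name, parameter defaults, and the toy data in the usage example are illustrative assumptions.

```python
import numpy as np

def local_subspace_basis(X, i, k=10, L=2):
    """Step (1): fit a local linear subspace around data point X[i].

    X is an (N, D) array in the original space.  Returns the top-L local PCA
    eigenvectors (as rows, unit length) and their relative weights alpha_i.
    """
    # k nearest neighbors of X[i] by Euclidean distance (excluding the point itself)
    dist = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dist)[1:k + 1]
    patch = np.vstack([X[i][None, :], X[nbrs]])

    # local PCA: eigen-decomposition of the neighborhood covariance matrix
    C = np.cov(patch, rowvar=False)
    evals, evecs = np.linalg.eigh(C)               # ascending eigenvalues
    order = np.argsort(evals)[::-1][:L]            # keep the L largest
    V = evecs[:, order].T                          # (L, D) basis vectors
    alpha = evals[order] / evals[order].sum()      # weights lambda_i / sum_j lambda_j
    return V, alpha

# toy usage; for a linear projection such as PCA (step (2)), the basis vectors
# are then transformed simply by the d x D projection matrix M, i.e. v_i = M @ V_i
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
V, alpha = local_subspace_basis(X, i=0, k=10, L=3)
print(V.shape, alpha.round(3))                     # (3, 5) and weights summing to 1
```

For nonlinear projections, the transformation of these basis vectors is the nontrivial part, which the mathematical model below formalizes.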
Multidimensional Projection of Points We present a general definition of multidimensional projections that will be used as a basis for our extended projection approach. In the following, we use uppercase letters for variables that are related to the original multi-dimensional space, and lowercase letters for quantities in the projection space (i.e., typically the 2D visualization space). Given a multi-dimensional dataset with the dimensionality D, a point in this space is denoted P ∈ R D . Multidimensional projection takes points at P to corresponding locations p ∈ R d (d ≤ D) in the lower-dimensional visualization space: Our discussion and derivation is independent of the choice of π, i.e., we formulate our approach to work with any linear or nonlinear multidimensional projection. Most nonlinear multidimensional projection techniques employ some complex optimization approaches to arrive at the projection π. Often, this map is not defined for all R D , but only the projection of the input points is computed, i.e., the target locations p i = π(P i ). In contrast, the linear projection π is directly accessible. A prominent case is the linear projection by PCA [18]: where M is a d × D matrix consisting of the first d eigenvectors (used as row vectors) that result from the diagonalization in PCA, i.e., in our typical use case of 2D visualization, these are the two eigenvectors corresponding to the two largest eigenvalues. Local Subspaces and Transformation of Vectors Let a local subpace S within the high-dimensional space be anchored around a point at position P ∈ R D and have dimensionality L ≤ D. As shown in Fig. 4, the subspace is spanned by the set of basis vectors where L is the intrinsic dimensionality and the set of basis vectors V is obtained by computing local PCA [50]. Specifically, we first find the k nearest neighbors of P, then perform PCA with the point P and its k nearest neighbors, and finally construct V by picking the first L eigenvectors that can explain most of total variance. To analyze the effect of π on the local subspace, we reduce this problem to understanding how vectors are mapped from the highdimensional space to the low-dimensional visualization space. Let us consider a vector V ∈ R D anchored at point P. This vector is transformed [17, page 290] with the Jacobian matrix of size d × D, The subscripts refer to the respective component of the Cartesian point. It is important to point out that a vector V only "lives" together with an anchor point P, i.e., the two together form a fiber bundle [19]-here, a vector bundle or vector field. In summary, the combined multidimensional projection of points and attached vectors can be written as: can be applied to all basis vectors V i ∈ V of the local subspace. However, it should be noted that the transformed basis vector do not necessarily form a basis in the target space R d . Often, the original subspace has dimensionality L larger than the available dimensionality d in the target space. Nevertheless, we can use the set of transformed vectors to learn about the characteristics of the multidimensional projection; see later in Sect. 5 and Sect. 7. This problem does not arise for L ≤ d; here, the transformed basis vector(s) may indeed form a basis in the target space. A simple example is L = 1, i.e., when just a single vector is transformed to show the effect on local linear correlation. Direct Computation of Vector Transformation An open question is the actual computation of the vector transformation from Equation 6. 
For a few cases, this computation is straightforward. For the example of PCA (Equation 2), we have a simple linear map π and, therefore, the Jacobian matrix is identical to the matrix from the linear PCA map:

J_π(P) = ∂π(P)/∂P = M .    (7)

The above computation is valid for any linear projection. However, multidimensional projections, in general, are nonlinear and more complex, and they may not lend themselves to an analytic derivation of the Jacobian matrix. Here, we could resort to a numerical computation of the partial derivatives for the Jacobian matrix. The standard approach employs finite differences [35], for example, in the form of forward differences applied to each element of the Jacobian matrix:

(J_π(P))_ij ≈ ( π_i(P + h e_j) − π_i(P) ) / h ,    (8)

with the unit vector e_j in direction of the j-th dimension and a small distance measure h ∈ R+. Forward differences provide a first-order approximation; second-order approximation is achieved by analogous central differences. While finite differences are a valid means of approximating the vector transformation, they come with a number of shortcomings: they are just a numerical approximation, and they need to compute the projection at a number of additional points P + h e_j, which are not in the original set of data points. The latter issue has two negative implications. First, we need more computations of projections, which incurs higher computational costs. Second, and more importantly, DR techniques are data-dependent and often work only on the given input data; therefore, it can be hard to feed in data points that are different from the original point set, or these additional points will modify the projection itself, which would lead to systematic errors. To resolve these issues, we now introduce a new vector transformation approach based on the implicit function theorem.

Implicit Vector Transformation

Nonlinear projection methods typically find the relationship between P and π(P) by optimizing an objective function f(P, π(P)). Here, the mapping between P and π(P) is implicitly described by f. However, even the simpler linear projections can be formulated in this way. Therefore, we assume that any reasonable projection finds an optimal target location for each data point P by solving the following optimization problem to arrive at the projected data points π(P):

min_{π(P)} f(P, π(P)) .    (9)

In other words, any resulting point π(P) "sits" in a local minimum with respect to the cost function. If this was not the case, the projection method could and should move the projected point π(P) to reduce the cost. Equation 9 leads to the necessary condition for the minimum if f is a smooth function and the minimum is inside (not on boundaries of) the domain:

∂f(P, π(P)) / ∂π(P) = 0 .    (10)

Using the implicit-function differentiation theorem [26, Chapter 11], we take the partial derivative with respect to P on both sides of Equation 10, and apply the chain rule:

∂²f(P, π(P)) / (∂π(P) ∂P) + ( ∂²f(P, π(P)) / ∂π(P)² ) · ( ∂π(P)/∂P ) = 0 ,    (11)

∂π(P)/∂P = − ( ∂²f(P, π(P)) / ∂π(P)² )⁻¹ · ( ∂²f(P, π(P)) / (∂π(P) ∂P) ) .    (12)

Equation 12 is the key mathematical result of our paper. It describes the implicit vector transformation that we can now use to transform subspaces. The transformation is generally applicable and completely accurate as long as the reasonable assumption from Equation 9 holds for the underlying smooth multidimensional projection. Furthermore, the transformation has to be computed only for the original points P_i in the dataset, and not at any other locations or for any other points.
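As a concrete illustration of Equation 12, the following minimal NumPy sketch (not the authors' code) applies the implicit vector transformation to a toy cost f(P, p) = ||p − g(P)||², for which the optimal projection is π(P) = g(P) and the true Jacobian ∂g/∂P is known in closed form; the choice of g, the helper names, and the use of central differences for the second derivatives are assumptions made only for this illustration.

```python
import numpy as np

# Toy cost: f(P, p) = ||p - g(P)||^2, so the optimal projection is pi(P) = g(P)
# and the true Jacobian d pi / d P equals dg/dP, which we can check against Eq. 12.
def g(P):                                     # illustrative nonlinear map R^3 -> R^2
    return np.array([P[0] ** 2 + P[1], np.sin(P[1]) * P[2]])

def grad_p(P, p):                             # analytic first derivative  df/dp
    return 2.0 * (p - g(P))

def implicit_jacobian(P, p, h=1e-5):
    """Equation 12: J = -(d2f/dp2)^-1 (d2f/dp dP); the second derivatives are
    obtained by central differences of the analytic gradient at (P, p) only."""
    d, D = p.size, P.size
    H_pp = np.zeros((d, d))
    H_pP = np.zeros((d, D))
    for j in range(d):                        # differentiate grad_p w.r.t. p_j
        e = np.zeros(d); e[j] = h
        H_pp[:, j] = (grad_p(P, p + e) - grad_p(P, p - e)) / (2.0 * h)
    for j in range(D):                        # differentiate grad_p w.r.t. P_j
        e = np.zeros(D); e[j] = h
        H_pP[:, j] = (grad_p(P + e, p) - grad_p(P - e, p)) / (2.0 * h)
    return -np.linalg.solve(H_pp, H_pP)

P = np.array([0.3, -1.2, 0.8])
p = g(P)                                      # the converged projection minimizes f
J = implicit_jacobian(P, p)
J_true = np.array([[2.0 * P[0], 1.0, 0.0],
                   [0.0, np.cos(P[1]) * P[2], np.sin(P[1])]])
print(np.allclose(J, J_true, atol=1e-4))      # True: Eq. 12 recovers dg/dP
```

Note that the finite differences here are applied only to the first derivative of the cost at the already converged projection, so no additional points ever need to be projected; for a specific DR method, the two second-derivative blocks can instead be derived analytically, as done next for MDS.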
For a specific choice of nonlinear dimensionality projection method, we have to evaluate the two derivatives ∂ 2 f (P,π(P)) ∂ π(P) 2 Application to MDS Here, we briefly explain the objective functions and the strategy to compute Equation 12 for the representative nonlinear method MDS. We have also derived solutions for t-SNE that can be found in the supplemental material. The MDS method has both linear and nonlinear versions. Here, we consider the SMACOF version [2], which is widely used as the default method in data analysis packages, for example, scikit-learn, due to its good performance. SMACOF minimizes the objective function: where x i , x j are original high-dimensional data points, and y i and y j are projected low-dimensional data points. We can map this equation to our original formulation in Equation 12 by associating F with f , x i , x j with P, and y i , y j with π(P). We first compute the partial derivative ∂ F ∂ y i of Equation 13: where the denominator requires all duplicate points to be removed. , we then derive the second derivatives and ∂ 2 f (P,π(P)) for cases i = j and i = j, respectively: where I is the identity matrix, and VISUALIZATION AND INTERACTION In this section, we elaborate on how to visualize projected local subspaces with glyphs, explain specialized user interactions for visual exploration, and briefly report on the implementation of our method. Fig. 7 shows a screenshot of our interactive tool. Glyph Generation We use a glyph to visualize the transformation from the L-dimensional local linear subspace embedded in the original D-dimensional space to d-dimensional visualization space. The original basis vectors V i are the eigenvectors that come from local PCA in high-dimensional space, and are assumed to be normalized to unit length. We incorporate a weight α i that measures the importance of the eigenvalue λ i associated with V i , according to α i = λ i / ∑ L j=1 λ j . Therefore, our original basis is {α i V i |i = 1, . . . , L}, anchored at point P. According to Equation 6, it is transformed to (π(P), {α i v i |i = 1, . . . , L}). The latter one is what the glyph should visualize. However, naive visualization is not possible because the α i v i usually do not form a basis in the d-dimensional visualization space. Typically L > d and, thus direct visualization of all L vectors as lines ( Fig. 5 (a)) might cause visual clutter that impairs perception. Ellipse based encoding can result in smooth visualizations (Fig. 5 (c)), but it supports only two transformed basis vectors. Although the convex hull ( Fig. 5 (b)) of the transformed basis vectors can show all L basis vectors, it is not smooth and overlapping areas of neighboring glyphs are large and potentially misleading. Given the disadvantages of these options, we design a closed B-Spline convex hull glyph, as shown in Fig. 6 (d), which can represent the major transformed basis vectors and it is smooth and easy to distinguish. Our solution to this problem is illustrated in Fig. 6: a glyph is generated by first computing the convex hull ( Fig. 6 (b)) of the transformed vectors centering at the projected point π(P) (Fig. 6 (a)); then, a smooth shape is computed using a B-spline [9] within the convex hull whose vertices are used as control points (Fig. 6 (c)). The B-spline is used because of its desired properties: it stays within the convex hull and follows a local control of its shape, which ensures that glyphs are comparable. 
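To make the glyph construction concrete, the following is a minimal NumPy/SciPy sketch (not the authors' implementation) that builds a closed outline from the weighted, transformed basis vectors. Mirroring each vector through the center to obtain a symmetric outline, the default cubic degree, and the use of the hull vertices as control points of a periodic uniform B-spline are illustrative assumptions, and degenerate (collinear) configurations are not handled.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.interpolate import BSpline

def glyph_outline(center, vecs, k=3, samples=100):
    """Closed B-spline outline of a glyph at a projected point.

    center: (2,) projected point pi(P); vecs: (L, 2) weighted transformed basis
    vectors alpha_i * v_i.  Each vector is mirrored through the center so that
    the outline is symmetric (an assumption made for this sketch).
    """
    tips = np.vstack([center + vecs, center - vecs])
    hull = ConvexHull(tips)                        # assumes the tips are not collinear
    ctrl = tips[hull.vertices]                     # hull vertices, counter-clockwise

    # periodic uniform B-spline of degree k with the hull vertices as control points
    c = np.vstack([ctrl, ctrl[:k]])                # wrap the first k control points
    t = np.arange(len(c) + k + 1, dtype=float)     # uniform knot vector
    spline = BSpline(t, c, k)
    u = np.linspace(t[k], t[len(c)], samples)
    return spline(u)                               # (samples, 2) points on the outline

outline = glyph_outline(np.zeros(2), np.array([[1.0, 0.2], [-0.3, 0.8], [0.5, -0.6]]))
print(outline.shape)                               # (100, 2)
```

Because a B-spline always stays inside the convex hull of its control points, the resulting outline cannot extend beyond the hull of the vector tips, matching the property described above.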
One principal issue in glyph-based visualization is occlusion caused by the larger coverage on screen than by small dots. We alleviate the issue by using transparency and alpha blending: glyphs are assigned user-specified opacities, and they are drawn in layers such that larger glyphs are drawn below smaller ones. Furthermore, scaling is supported in our visualization tool so that the user can change the size of glyphs globally. The visualization is generated by alpha-blending glyphs assigned with desired visual properties-color, opacity, and scale. These visual properties can be interactively modified by the user with interactions as shown on the left-hand side of Fig. 7. Interaction Our interactive exploration system is shown in Fig. 7. It comprises three modules: the control panel ( Fig. 7 (a)), the local subspace projection viewer (Fig. 7 (b)) with glyph zoom-in (Fig. 7 (c)), and a linked point-based projection viewer with option to show image references ( Fig. 7 (d)) for comparison. The local subspace projection viewer is synchronized with the point-based viewer for zooming-and-panning-it is particularly useful for comparing local trends of glyphs and changes in image data (projection viewer) in high-dimensional datasets in computer vision as shown in Section 7. Brushing-and-linking is supported for closer examinations: the user can select glyphs (in Fig. 7 (b)) or points (in Fig. 7 (d)) of interest with a lasso and corresponding points and glyphs are highlighted. Furthermore, a set of specialized user interactions-including projection-quality filtering, flexible color mapping, and glyph structure zoom-in-are adopted in our method. The user changes opacity and size of glyphs with two sliders to reduce the occlusion for further explorations of local subspaces. Color mapping is useful when visualizing various metrics of data points and their local subspaces-for example, class, and the projection quality (see Section 5.2.1). The user may be also interested in the actual projected vectors from the local subspace. This information is especially useful for identifying anomalies. A zoom-in visualization of the glyph (Fig. 7 (c)) shows transformed basis vectors (red, green, blue, and cyan are for the first through fourth basis vector, respectively) inside the outline of the Bspline shape. The zoom-in is activated whenever the mouse pointer hovers over a glyph. Glyph filtering allows the user to focus on glyphs based on a user-selected metric among desired class, anisotropy, or projection quality. These interactions allow us to adopt the "overview first, zoom and filter, then details-on-demand" [40] process for visual analysis. Specifically, we first observe the visualization of the whole dataset to find interesting general trends and local patterns: e.g., anomaly and crossings. Then, we focus on these patterns and examine them with interactive filtering and brushing-and-linking, and verify our findings with the original data if possible (for example, images); when necessary, we zoom in to further explore the details. Projection Quality The quality of the DR method used for data points is also important for visual analysis. We introduce three metrics to describe the projection quality: projection error, neighbor preservation, and linearity. The projection error metric is the loss function of the particular DR method (Fig. 8 (a)). For MDS, the metric is the stress error. The neighborhood preservation degree is independent of the DR method, measured by the trustworthiness [13,32] (Fig. 
8 (b)), where k is set to be the same as the one used for computing local PCA. The linearity metric describes the anisotropy of the local subspace in the original dimensionality-the ratio of the magnitude between largest eigenvalue and second-largest eigenvalue. Unlike the above two metrics, coloring the linearity with each glyph only indicates how similar the glyph-encoded local subspaces in the original high-dimensional space are, but the change-of-linearity can be revealed by looking at the aspect ratio of the glpyh itself. For example, the glyphs selected by the blue box in Fig. 10 has linearity close to one, but its result shape is elongated in Fig. 10 (b). The conjunction of these metrics help us better understand behaviors of the DR method. For example, it seems that the projection error Fig. 8 (c)) is negatively correlated with the neighborhood correlation ( Fig. 8 (d)), especially for the highlights with orange circles, in the MDS projection. Implementation The computational steps in our method-vector transformation by implicit differentiation and multidimensional projections-were implemented using Python and NumPy. These steps are calculated only once and the results are used as input to our web-based user interface. The user interface was implemented using JavaScript and Vue.js. The local subspace projection viewer is based on WebGL so that interactive visualization is achieved even for a large number of glyphs; the pointbased projection viewer was realized using D3. Thanks to our efficient visualization tool, full interactivity is achieved for all datasets used in our paper. We numerically compare our accurate implicit vector transformation method to the approximated random approach used in DimReader [15]. In contrast to our analytical technique, DimReader randomly chooses half the points and calculates one projected vector of each data point using auto-differentiation, and the process is repeated for the number of L basis vectors. We use the aforementioned implementation for our method and our own implementation of the random approach with publicly-available code snippets of DimReader on the Internet (both in Python without acceleration) for all evaluations. NUMERICAL EVALUATION Note that our implicit function method is the analytical, i.e., accurate, transformation of basis vectors, whereas the random approach is an approximation. To verify the correctness of our method, a synthetic planar data sampled on a 3D regular grid is generated and projected to 2D using MDS. With a neighborhood of k = 8, the transformed basis vectors of all non-border data points should be, theoretically, exactly orthogonal, i.e., 90 degrees, and having the same magnitudes. It can be seen in Fig. 9 that glyphs of transformed basis vectors with our method appear identical ( Fig. 9 (a)), whereas different shapes of glyph are visible with the random approach ( Fig. 9 (b)). Quantitatively, Table 1 summarizes basic statistics of transformed basis vectors of non-border data points of both methods: our method has a smaller difference of mean lengths of basis vectors (mean length 1 vs 2), a mean angle closer to 90 degrees (mean angle) with lower standard deviation (std of angles) than the random method. Furthermore, distributions of these measures are shown as histograms in the supplemental material. This verifies that our method is more accurate than the random method. (a) implicit function method (b) random method Fig. 9. 
Transformed basis vectors visualized as glyph by (a) our implicit function method and (b) the random method [15]. The data points are on a plane in 3D sampled using a regular grid, and projected to 2D with MDS. Next, we perform evaluations on real-world multidimensional datasets using nonlinear projections via MDS and t-SNE. We employ typical benchmark datasets: Iris (147 4D points-three duplicated points are removed), Wine (178 13D points), and Digits40 (606 40D points). Fig. 10 compares the glyph visualizations of transformed basis vectors generated by our method and the random method on the Iris dataset color-coded with the linearity measure. It can be seen that the random method generates more abnormal glyphs-with excessively long and thin shapes or with distinct directions inside a neighborhood of uniform directions. We also measure computation times for the two methods for transforming the top five (four for the Iris data) basis vectors: the results show that our method is faster than the random method for all test datasets with both MDS and t-SNE projections (Table 2 of the supplemental material). All figures of glyph visualizations generated during the evaluation can be found in the supplemental material. This is evidence that our implicit differentiation method is a fast and accurate vector transformation method from high-dimensional to low-dimensional space. CASE STUDIES In this section, we show examples of visual analysis of various representative multidimensional datasets-ranging from benchmark datasets in machine learning to downsampled image datasets in computer visionto demonstrate the usefulness of our method. We experiment with typical DR methods-PCA, MDS [2], and t-SNE [46]-on these datasets, and summarize and discuss interesting results. Further examples can be found in the supplemental material. Examples of multidimensional datasets from the UCI machine learning repository [10] are shown in Fig. 11. The Wine dataset is a 13D data of 178 instances with 3 classes. The PCA and MDS results as shown in Fig. 11 (Wine), rows one and two. With our method (left boxes), we can see global trends from the variations of local subspace glyphs. In the PCA result, sizes of most glyphs are rather uniform, and the orientations of glyphs vary smoothly, forming a global arc structure; glyphs in the MDS projection suggest a swirling global structure. Local subspace visualization of the t-SNE projection (Fig. 11, Wine, row three, left box) shows connections between the three clusters, and the glyphs on the exterior seem to form the boundary of the clusters-this may indicate that the clusters sit on the same manifold. Besides the global trends, we can easily spot glyphs of distinct shapes (e.g., the elongated green glyphs in MDS and t-SNE) that can be further investigated. With traditional point-based visualizations (right boxes), none of the global trends can be observed nor local points-of-interest can be found as indicated by abnormal glyphs. Visualizations of the Seeds dataset (210 data points of 7D) do not exhibit clear global trends compared to the Wine example. However, it can be observed that in the PCA (Fig. 11, Seeds, row one) and the MDS (Fig. 11, Seeds, row two) projections the glyphs form a twisting structure with crossings next to the central class. The t-SNE result (Fig. 11, Seeds, row three) provides a similar look as of the Wine example; glyphs around the boundary of clusters seem to comprise the shape of a bow tie. 
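A sanity check in the spirit of this evaluation can be scripted in a few lines. The sketch below uses the linear PCA case, where the Jacobian is simply the projection matrix, instead of the MDS setup used in the paper; the plane orientation and grid size are arbitrary assumptions. For points lying exactly on a plane, the two in-plane basis vectors should remain orthogonal and of equal length after transformation.

```python
import numpy as np

# Sanity check: points on a tilted plane in 3D, projected with (global) PCA,
# where the Jacobian of the projection is simply the projection matrix M.
u, v = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
A = np.array([[1.0, 0.0, 0.5],                     # two directions spanning the plane
              [0.0, 1.0, -0.2]])                   # (illustrative values)
X = np.column_stack([u.ravel(), v.ravel()]) @ A    # (100, 3) planar samples

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
M = Vt[:2]                                         # (2, 3) PCA projection matrix

Q, _ = np.linalg.qr(A.T)                           # orthonormal in-plane basis (3, 2)
b1, b2 = M @ Q[:, 0], M @ Q[:, 1]                  # transformed basis vectors

cos_angle = b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2))
print(np.degrees(np.arccos(cos_angle)).round(2),   # ~90 degrees
      np.linalg.norm(b1).round(3),                 # ~1.0
      np.linalg.norm(b2).round(3))                 # ~1.0: the plane is preserved
```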
The COIL dataset [30] is a collection of 20 objects each of them photographed 72 times at different rotation angles. We pick three objects: a duck, a car, and a block. Scatterplots color-coded by object class visualizing projections of PCA, MDS, and t-SNE are shown in Fig. 12 (a), (c), and (e), respectively, where no correlation can be seen between points. With our implicit local subspace projection method ( Fig. 12 (b)), global trends and local structures can be visualized with the orientation and shape of glyphs. In the orange zoom-in on the red class, we see that the transition of orientations of glyphs matches that of the "duck" images. Four larger anisotropic glyphs are displayed in the purple zoom-in on the blue class-they are associated with backward facing "cars." Furthermore, the anisotropy of glyphs is correlated with the rotation angle (with respect to front-facing images) of images. In the MDS case ( Fig. 12 (row 2)), glyphs generated by our method (Fig. 12 (d)) form twisting structures that are similar to the PCA case. For example, by examining the images of the red glyphs in the orange zoom-in, it is confirmed that the crossings are actually associated to the change of orientation of the "duck"; similarly, the green glyphs in the zoom-in on the right show two distinct directions of the "block". Our visualization of the t-SNE result (Fig. 12 (f)) shows that the orientation of glyphs changes smoothly within each cluster. The orange zoomin on the green cluster along with the actual images confirms that our method successfully captures the smoothly changing image orientations in the local subspaces. The purple zoom-in confirms that there exists a crossing of orientations of the car. The Face data contains 698 face images of different camera positions and lighting conditions. Local subspace visualizations of the Face data with MDS is already shown in Fig. 1, and PCA and t-SNE results are shown in Fig. 13, respectively. With PCA projection colorized by trustworthiness ( Fig. 13 (a)), dense glyphs are seen on the left, and two trends of glyphs-one going upward and another going downwardseparate around the center of the visualization. The intersection region between these two trends has lower trustworthiness and contains three distinct types of glyphs (see the zoom-in), corresponding to the face images with three distinct orientations. Similarly, two distinct types of glyphs selected by the magenta box are associated with distinct face directions. The clusters can be clearly seen in our t-SNE result ( Fig. 13 (b)), but no prominent global trends can be recognized. However, we can identify local patterns that exhibit trends; for example, in the orange zoom-in, the trends of glyphs are associated with the smooth change of face orientation. By coloring each glpyh with the t-SNE loss function (KL divergence), we can see that the glyphs around the projection center has large size and loss values, indicating that the derivatives of the corresponding data points have large magnitudes. These experiments demonstrate the usefulness of our method in enhancing the anomaly identification ability and revealing global and local trends in data projections. Wine Seeds PCA MDS t-SNE Fig. 11. Visualizations of multidimensional datasets of Wine (first column) and Seeds (second column). Each projection method is visualized using our method (left boxes) and the point-based visualizations (right boxes). We can see global trends and twisting structures from the variations of local subspace glyphs. 
DISCUSSION, CONCLUSION, AND FUTURE WORK We have introduced a method for visualizing local linear structures in multidimensional projections using implicit differentiation. For each data sample in the original space, a local neighborhood is extracted and its structure is fitted using PCA. Basis vectors of the multidimensional local neighborhood anchored at the respective data point are transformed to the low-dimensional visualization space by calculating differentiation of implicit functions. We have derived analytical solutions for typical multidimensional projections. Next, for each data point, a glyph centered at the projected data point is generated by constructing a B-spline shape within the convex hull spanned by the transformed basis vectors. To aid the analysis of our glyph representation, a linearity metric and a projection quality metric are used to depict the faithfulness of the transformation. We have built an interactive, web-based visual analysis system that supports full-fledged, comparative analysis of our results and the point-based scatterplots. We would like to discuss a few aspects that could affect the results of our method. The number of neighbors k of the local subspace is a userset parameter-a small k extracts local information of the data, whereas a large k reveals more global information. We have observed that datasets with rather high dimensionality (e.g., Face) are less sensitive to k than datasets with lower dimensionality (e.g., Iris). Our implicit differentiation transformation requires good convergence of the DR method-otherwise, numerical errors occur during the computation of the Jacobian matrix. Also, duplicate data points need to be removed as they would yield dividing by zero error when computing the Jacobian. In terms of scalability, we argue that the visual scalability of our method is comparable to traditional point-based visualizations, because the glyphs' appearance can be interactively adjusted, and most information can be perceived with point-sized (as in the point-based visualizations) glyphs as shown in examples throughout the paper. For larger datasets, our current method also suffers from the overdraw problem, as point-based visualizations in general, and we could adopt clutter reduction techniques [12], e.g., sampling, for better visual scalability. For computational scalability, the memory size is the limiting factor of computing the Jacobian matrix. The largest dataset we have experimented on our machine contains 7494 data points of 16-D. In the future, we would like to optimize the calculation of the Jacobian matrix on GPUs. In conclusion, our method is a promising tool for understanding and analyzing multidimensional data and DR methods. A benefit of our method is that main trends and anomalies in the projected data can be quickly identified with glyphs-large differences of orientation, shape, and size can be easily perceived. Once the glyphs of interest are identified, the user can further analyze the data with interactive tools to understand the cause of trends or anomalies. We have shown the usefulness of our method through a number of multidimensional datasets with popular DR methods, including PCA, MDS, and t-SNE. In this process, we have gained new insights into familiar datasets and widely used DR methods. Another benefit is that our implicit vector transformation is accurate and fast. 
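The first stage of the pipeline summarized at the start of this section (extract a k-nearest-neighbor patch around each sample and fit its local structure with PCA) can be sketched as below. The subsequent implicit-differentiation transform of these basis vectors and the B-spline glyph construction are the paper's contribution and are not reproduced; the parameter values k and n_components are illustrative, not the paper's settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def local_bases(X, k=15, n_components=2):
    """For every sample, fit PCA to its k-nearest-neighbor patch and return the basis vectors."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    bases, variances = [], []
    for neighbors in idx:
        pca = PCA(n_components=n_components).fit(X[neighbors])
        bases.append(pca.components_)              # rows: local basis vectors in the original space
        variances.append(pca.explained_variance_)  # could be used to scale the glyph axes
    return np.asarray(bases), np.asarray(variances)

# These high-dimensional vectors, anchored at each data point, are what the paper's
# implicit-differentiation step would then map into the 2D projection to draw a glyph.
X = np.random.default_rng(0).normal(size=(200, 10))
B, V = local_bases(X)
print(B.shape, V.shape)   # (200, 2, 10) and (200, 2)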
Our vector transformation method has been assessed through numerical comparisons, and our overall method has been evaluated through case studies on benchmark datasets. In the future, we would like to integrate our approach with more DR methods. One prominent example is UMAP [29], which keeps a continuous transform function internally and thus can directly transform any point without our partial-derivative-based transformation. Second, more intelligent analysis approaches could be used to assist in analyzing our results, for example, pattern recognition methods that automatically detect trends and local patterns of interest. Finally, we would like to study the effectiveness of our approach in real applications and compare it with other explanation methods [15].
Fig. 13. PCA (a) and t-SNE (b) projections of the Face data, colorized by two different quality metrics: trustworthiness and KL divergence, respectively. From the PCA result, we can see that two trends of glyphs separate around the center, one going upward and another going downward, and in each zoom-in, the distinct types of glyphs are associated with distinct face directions. From the t-SNE result, we can identify local patterns that exhibit trends.
2020-09-08T01:01:13.305Z
2020-09-07T00:00:00.000
{ "year": 2020, "sha1": "edfbcf9570bfd67d4686fd86a15a0ddb0417f13b", "oa_license": null, "oa_url": "https://arxiv.org/pdf/2009.03259", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "edfbcf9570bfd67d4686fd86a15a0ddb0417f13b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Medicine" ] }
265510203
pes2o/s2orc
v3-fos-license
Ultra-Broadband, Compact Arbitrary-Ratio Multimode Power Splitter Based on Tilted Subwavelength Gratings : Mode division multiplexing (MDM) technology is an effective solution for high-capacity optical interconnection, and multimode power splitters, as essential components in MDM systems, have attracted widespread attention. However, supporting a wide range of modes and arbitrary power splitting ratios with large bandwidth in power splitters remains a significant challenge. In this paper, we designed a power splitter based on a subwavelength grating (SWG) structure with tilted placement on a silicon-on-insulator (SOI) substrate. We achieve arbitrary TE 0 –TE 9 mode-insensitive power distribution by altering the filling coefficient of the SWG. Thanks to our specific selection of cladding materials and compensatory design for the optical wave transmission and reflection shifts induced by SWG, our device demonstrates low additional loss (EL < 1.1 dB) and inter-mode crosstalk ( − 18.8 < CT < − 60 dB) for optical modes ranging from TE 0 to TE 9 , covering a wavelength range from 1200 nm to 1700 nm. Furthermore, our proposed device can be easily extended to higher-order modes with little loss of device performance, offering significant potential in MDM platforms. Introduction With the development of artificial intelligence technology, various industries are deeply integrating with intelligent information technology.Due to the increasing prominence of issues such as the slowing of Moore's Law, the "power wall" [1], and the "von Neumann bottleneck" [2,3] in traditional electronic integrated circuits, silicon-based optoelectronic platforms are emerging as promising new technologies in the "post-Moore's Law" era [4].These can greatly reduce the limitations on the development and application of artificial intelligence technology and have become the main direction of widespread attention and breakthroughs in the industry at home and abroad [5].In the optical interconnect and optical computing network of the silicon-based photonic platform, a large number of passive devices are employed for controlling the transmission [6][7][8], coupling [9,10], and power distribution of light.Among these, optical splitters, as essential components in ultra-compact photonic integrated circuits, play a crucial role in various domains, including sensors, optical switches, logic gates, modulators, signal processing, and more.Additionally, the rapid advancement of MDM technology enables power splitters supporting multiple modes to enhance the transmission and processing speed of high-capacity information greatly.Presently, common multimode power splitters include devices such as multimode interference couplers (MMI) [11,12] and directional couplers (DC) [9,13].These couplers have limited-wavelength bandwidth support and require complex designs to achieve arbitrary power splitting ratios (PSRs), which significantly restricts their utility in on-chip MDM applications.Therefore, designing a broadband multimode splitter with arbitrary PSRs remains a challenging and strategically significant endeavor. 
In this work, inspired by geometric optical prisms, we propose a novel multimode 2 × 2 power splitter.When light waves pass through the etched subwavelength grating (SWG) in this multimode beam splitter, they undergo frustrated total internal reflection (FTIR), which enables the transmission of some light waves while reflecting others.Moreover, by adjusting the filling coefficient of the SWG material, we can flexibly control the PSR between the reflected and transmitted light waves.We employ 2.5 D Variational Finite-difference time-domain (VARFDTD) simulations and optimization to design the structure of this power splitter, aiming to achieve high-performance broadband power distribution for multimode signals while reducing the device's dependence on etching processes.On the one hand, to enhance device performance, we have designed slight offsets for our SWG and the output ports for transmission relative to the intersection point of the input and reflector waveguide centers to compensate for the effects of Goos-Hänchen (GH) shift and the transmission offset generated by the SWG layer.On the other hand, to simplify device fabrication, we investigate the impact of grating filling materials and device cladding materials on device performance and grating dimensions.Ultimately, our proposed multimode power splitter achieves power distribution of TE 0 -TE 9 -mode light waves at arbitrary ratios within the wavelength range of 1200 nm to 1700 nm.Furthermore, the power splitter exhibits EL of less than 1.1 dB for various mode light waves, and the insertion loss (IL) variation and inter-mode CT for different mode light waves at the same wavelength and output port are both less than 1 dB and −18.8 dB, respectively. Device Design The multimode power splitter is designed on a standard silicon-on-insulator (SOI) substrate with a 220 nm thick top Si layer and a 2 µm thick buried oxide.It is covered by a 2 µm thick Si 3 N 4 material, which serves as both the cladding structure and the filling material for the SWG, as shown in Figure 1a. Figure 1b shows that the core of the device consists mainly of a cross-shaped Si waveguide with a width of W WG and a centrally tilted (θ tilt = 45 • ) SWG.The multimode power splitter draws inspiration from the structure of a spectral prism, which gives it characteristics similar to those of a spectral prism.Specifically, the light wave mode incident from input port #1 is divided into two beams by the SWG, exiting from port #2 and port #3, respectively.Likewise, if light enters from port #2, it is divided into two beams, exiting from port #1 and port #4, with the PSR remaining unchanged. To prevent the shift of light waves caused by transmission and reflection through the SWG layer from affecting the performance of the output ports, there is a longitudinal offset δ o f f set and a lateral offset δ 1 between SWG and port #2 relative to the intersection points of port #1 and port #3 waveguide centers, as illustrated in Figure 1b.We provide a detailed definition of the SWG period Λ and the SWG filling coefficient f swg in Figure 1c.According to the characteristics of the SWG and the principles of geometric optics, the SWG can be considered equivalent to a single-layer dielectric [see Figure 1d]. 
According to the Fresnel formula, the reflectance and transmittance of TE-polarized mode incident waves on a single-layer dielectric can be expressed as Equations (1) and (2), where n_co is the refractive index of the Si waveguide core layer, n_swg is the refractive index corresponding to the SWG treated as an equivalent single-layer dielectric, θ0 is the incident angle of the beam, θ is the angle of refraction and reflection at interfaces A and A' within the equivalent single-layer dielectric, and δ is the phase shift within the effective dielectric layer. According to Equations (1) and (2), the PSR of the SWG reflector can then be obtained. The n_swg in Equations (4)-(6) can be obtained using the Rytov [14] formula, where n_// and n_⊥ represent the ordinary/extraordinary effective medium indices, indicating that the SWG is essentially birefringent [15], and n_f is the refractive index of the filling medium in the SWG. The Rytov formula is derived using a zeroth-order approximation, assuming λ/Λ → ∞. For smaller values of λ/Λ, higher-order approximation expressions of the Rytov formula [16,17] can be used to improve accuracy.
As shown in Figure 1d, due to the half-wave loss generated when light incident from an optically sparse medium is reflected from an optically dense medium, which introduces an additional optical path difference, the phase shift δ within the effective dielectric layer can be obtained from Equation (7) [18], where k_eff is the effective wave vector (k_eff = k0 n_swg, with k0 the wave vector of the incident light in vacuum) and w_swg is the width of the SWG [see Figure 1c]. According to Equation (7), δ is influenced by n_swg and w_swg. Moreover, Equations (1)-(7) reveal that R_TE, T_TE, and PSR_swg are all dependent on the n_swg associated with f_swg. Consequently, the PSR between the reflected wave and the transmitted wave can be flexibly designed by adjusting f_swg and w_swg of the SWG reflector in Equations (4)-(7).
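Since Equations (1)-(7) are not reproduced in this extraction, the sketch below only illustrates the two generic ingredients named in the text: the zeroth-order Rytov estimate of the SWG's effective indices and a standard single-layer (characteristic-matrix) calculation of TE reflectance and transmittance, including the evanescent regime relevant to frustrated total internal reflection. The exact expressions used in the paper, and the example numbers below, should be treated as assumptions.

import numpy as np

def rytov_zeroth_order(n_si, n_fill, f_swg):
    """Zeroth-order Rytov effective indices of a two-material grating (lambda/period -> inf)."""
    n_par = np.sqrt(f_swg * n_si**2 + (1 - f_swg) * n_fill**2)          # E parallel to grating lines
    n_perp = 1.0 / np.sqrt(f_swg / n_si**2 + (1 - f_swg) / n_fill**2)   # E perpendicular
    return n_par, n_perp

def te_single_layer(n_in, n_layer, n_out, width, wavelength, theta0):
    """TE (s-pol) reflectance/transmittance of one layer between two semi-infinite media."""
    k0 = 2 * np.pi / wavelength
    kx = n_in * np.sin(theta0)                          # conserved transverse component
    q = lambda n: np.lib.scimath.sqrt(n**2 - kx**2)     # n*cos(theta); complex if evanescent
    q0, q1, q2 = q(n_in), q(n_layer), q(n_out)
    delta = k0 * q1 * width
    m11, m12 = np.cos(delta), 1j * np.sin(delta) / q1
    m21, m22 = 1j * q1 * np.sin(delta), np.cos(delta)
    r = ((q0 * m11 + q0 * q2 * m12 - m21 - q2 * m22) /
         (q0 * m11 + q0 * q2 * m12 + m21 + q2 * m22))
    R = abs(r)**2
    return R, 1 - R                                     # lossless media: T = 1 - R

# Illustrative numbers only (Si core, Si3N4-like fill, 45-degree incidence at 1310 nm).
n_par, n_perp = rytov_zeroth_order(n_si=3.5, n_fill=2.0, f_swg=0.65)
R, T = te_single_layer(3.5, n_perp, 3.5, width=0.3e-6, wavelength=1.31e-6, theta0=np.pi / 4)
print(n_par, n_perp, R, T)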
Simulation and Analysis
Due to the excellent compatibility of SiO2 and Si3N4 with the SOI platform and their mature growth processes on Si, they can serve as ideal cladding materials. Furthermore, since other photonic devices in our research project primarily operate around a wavelength of 1310 nm, we used the finite element method (FEM) to simulate the effect of changing the Si waveguide width on the effective refractive index of TE0-TE9 mode light in different cladding scenarios at this wavelength. The simulation results in Figure 2 show that when the cladding structures are Air, SiO2, and Si3N4, the effective refractive indices of the TE0-TE9 modes in the waveguide gradually approach one another as the central waveguide width W_WG increases. When W_WG > 8 µm, the effective refractive indices of the low-order modes TE0-TE3 become closer still. Considering that overly large dimensions would dramatically increase the subsequent simulation and modeling time for the device, we chose to investigate the output characteristics of ports #2 and #3 of the power splitter in the TE0-TE3 modes for these three cladding structures with W_WG = 8 µm.
Based on the diffraction theory of gratings, when the grating period is much smaller than the operating wavelength, only zero-order diffraction will occur. At this point, the SWG can be effectively considered a single-layer dielectric, and its effective refractive index is determined by the SWG filling structure. Therefore, specific PSRs can be achieved by designing the SWG filling structure appropriately. To ensure that the light emitted from port #2 and port #3 undergoes only zero-order diffraction, the diffraction angles (including reflection and transmission) and the angle of incidence at the grating must be equal; only specular reflection and direct transmission then occur, with no backward reflection. To suppress backward reflection, the condition in Equation (8) [19] should be satisfied, where n_eff is the effective refractive index of the given mode and λ0 is the operating wavelength. From Equation (8), it can be observed that as n_eff increases and λ0 decreases, the value of the maximum grating period Λc decreases accordingly. To meet the requirements of all three cladding structures for suppressing backward reflection, the scenario with the maximum n_eff value is used in Equation (8) for the calculations.
As shown in Figure 3, when W_WG > 8 µm, the n_eff values for the TE0-TE3 mode waves in devices with the three cladding structures are quite close. Therefore, we study the impact of the three cladding structures on the power splitter performance at this width. Since, for the same W_WG, the waveguide's n_eff with the SiO2 and Air cladding structures is lower than with Si3N4, plugging the n_eff value for the TE0-TE3 modes under the Si3N4 cladding structure (n_eff = 3.018) into Equation (8) yields Λc = 0.25 µm. Taking into consideration the suppression of backward reflection for shorter-wavelength light waves, a period of Λ = 0.2 µm is chosen.
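Equation (8) itself is lost in this extraction. A commonly used zero-order condition for suppressing backward diffraction from a grating illuminated at angle θ0 is Λ < λ0 / (n_eff (1 + sin θ0)), and with the quoted n_eff = 3.018, λ0 = 1310 nm, and θ0 = 45° this form reproduces the stated Λc ≈ 0.25 µm; treat the exact expression as an assumption. The snippet below only checks the arithmetic.

import numpy as np

def max_period_um(wavelength_um, n_eff, theta0_rad):
    # Assumed zero-order condition: no backward-propagating diffraction orders.
    return wavelength_um / (n_eff * (1 + np.sin(theta0_rad)))

print(max_period_um(1.31, 3.018, np.pi / 4))   # ~0.254 um, consistent with the 0.25 um quoted above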
Without considering the grating transmission offset and the GH shift, we employ the VARFDTD method to simulate the effects of varying f_swg and w_swg on the PSR and EL of the TE0-TE3 modes in devices with the three cladding structures when light waves at a wavelength of 1310 nm are incident, as shown in Figure 3a-c. To meet the simulation accuracy requirements, the grid resolution at the SWG is set to dx = dy = dz = 10 nm. The EL and PSR are calculated from the transmittances, where T_ijk is the transmittance of input mode TE_i (i = 0, 1, 2, 3) generating mode TE_j (j = 0, 1, 2, 3) at port k (k = 2, 3), and PSR_i denotes the PSR when the corresponding emission mode is TE_i. It can be observed that PSR_i varies with changes in f_swg, Λ, and w_swg. However, for i = 0, 1, 2, 3, the changes in PSR_i are nearly identical owing to the similarity of the effective refractive indices of these modes, as shown in Figure 3a.
The simulations and calculations in Figure 3 reveal that only at w_swg ≥ 0.3 µm is there a significant difference between the PSR of the TE3 mode and those of the other lower-order modes. This is due to the significant difference in the n_eff of the TE3 mode compared to the other lower-order modes in the simulated Si waveguide structure, and this effect can be reduced by increasing W_WG. Furthermore, from Figure 3, it is evident that in multimode power splitters with Air and SiO2 cladding structures, the EL and PSR curves for TE_i (i = 0, 1, 2, 3) almost coincide. In contrast, for multimode power splitters with the Si3N4 cladding structure, there is a noticeable deviation in the EL and PSR curves compared to the other two cladding structures. This difference is reflected in two aspects. First, when Si3N4 is used as the cladding, the PSR exhibits a smoother transition above the horizontally marked line in Figure 3a-d compared to the other two cladding structures; it is less sensitive to changes in f_swg and thus provides superior etching tolerance for the device. Second, when the three cladding structures achieve the same PSR in the power splitter, as shown in the EL curves of Figure 3a-d, the EL of TE_i in the power splitter with Si3N4 cladding is smaller than in the other two cladding structures, indicating lower loss. The reason for these differences is that, in the Air and SiO2 structures, the effective refractive index curves of mode TE_i in the Si waveguide are nearly identical, and both differ significantly from the effective refractive index curve in the Si3N4 structure, as can be seen in Figure 2a,b. The above results and analysis indicate that using Si3N4 as the cladding structure for multimode power splitters performs better in terms of etching processes and device performance than the other two cladding structures. To achieve a wide range of PSRs, low EL, and lower mode sensitivity, we ultimately selected w_swg values of 0.2 µm and 0.3 µm for the subsequent simulations.
On the one hand, due to the presence of the GH shift, the actual reflection interface is displaced relative to interface A [see Figure 1d], resulting in a mode-field mismatch between the output mode and port #3 (the reflection port). It is necessary to eliminate this mode-field mismatch to avoid additional losses and PSR errors. For TE-polarized modes, the mirror offset δ_offset required to compensate for the GH shift can be derived as Equation (11) [8], where k0 is the wave vector of incident light in a vacuum, with a value of 2π/λ0, and n_swg is the effective refractive index of the corresponding mode in the SWG layer. By combining this with Equation (4), it becomes evident that in multimode simulation scenarios the value of δ_offset is influenced by f_swg. On the other hand, as light input from port #1 undergoes reflection and refraction within the SWG before being output from port #2, the presence of the SWG causes the optical wave to experience a transmission offset as it passes through the equivalent dielectric layer. This transmission offset distance can be calculated using steady-state phase theory [20,21] as Equation (12), where A and B represent coefficients associated with n_eff, n_swg, θ, and θ0. It can be seen that δ1, unlike δ_offset, is influenced not only by f_swg but also by w_swg.
From Figure 3b, it can be observed that when w_swg = 0.2 µm and f_swg = 0.8, or w_swg = 0.3 µm and f_swg = 0.65, the PSR of the four modes is around 0.5, indicating even power splitting at both output ports. We can more easily observe the impact of the grating transmission offset and the GH displacement on the two output ports in this situation. Using Snell's law n_si sin θ0 = n_swg sin θ and Equations (4)-(6), we can calculate the value of θ by substituting f_swg, n_f, and n_si, and then determine the value of n_swg. With the value of n_swg and the SWG cladding structure, we can use FEM to calculate that n_swg is 2.04 and 2.18 under the two conditions. Plugging these values into Equations (11) and (12) yields δ_offset and δ1 values of 0.23 µm and 0.17 µm for the first condition, and 0.32 µm and 0.25 µm for the second, respectively. To validate the accuracy of this calculation, we conducted simulations using the VARFDTD method to assess the impact of the δ_offset and δ1 displacements on the ELs and inter-mode CTs of the TE_i modes at output ports #2 and #3 in the entire device, with the grid resolution at the SWG set to dx = dy = dz = 10 nm; the simulation results are shown in Figure 4. The CT is derived from the following equation:
where CT_ijk represents the crosstalk of mode TE_j (j = 0, 1, 2, 3) relative to mode TE_i (i = 0, 1, 2, 3) at port k (k = 2, 3). From Figure 4a,b, it can be observed that when δ_offset is 0.23 µm and 0.32 µm, the CT effects on the four modes are weakest and the ELs are minimized. The increase in δ_offset has a much larger impact on the CTs of the modes output from port #3 than on those output from port #2. From Figure 4c,d, it can be seen that when δ1 is 0.16 µm and 0.24 µm, the ELs for the two modes are minimal. Additionally, unlike the impact of δ_offset on port #3, the variation of δ1 has a more significant effect on the CTs of the modes at port #2. Considering the simulation step size, these simulation results are consistent with the GH shift behavior predicted by Equation (11) and the transmission shift behavior predicted by Equation (12). Furthermore, from Figure 4a-d, it can be observed that when the deviations of δ_offset and δ1 from the target values are within 20 nm, the changes in EL and CTs are less than 0.02 dB and 0.5 dB, respectively, indicating good manufacturing robustness of the device. When W_WG > 8 µm, it can be observed from Figure 2c that changing W_WG has a very small impact on n_eff, and similarly the value of n_swg remains nearly unchanged. Therefore, at W_WG = 30 µm, the values of δ_offset and δ1 influenced by this factor also do not change significantly. To compensate for the GH shift and the transmission shift, δ_offset and δ1 can be selected as 0.23 µm and 0.17 µm, respectively, at w_swg = 0.2 µm, or 0.32 µm and 0.25 µm, respectively, at w_swg = 0.3 µm, to achieve optimal device performance.
To visually demonstrate the multimode scalability of the power splitter, we used the VARFDTD method to simulate the ILs, the ELs for each mode, and the inter-mode CTs for the TE0-TE9 modes in the multimode power splitter at W_WG = 30 µm, w_swg = 0.3 µm, and f_swg values of 0.65 and 0.69, over a wavelength range of 1200-1700 nm at output ports #2 and #3 relative to the input port. The results are shown in Figure 5. The IL can be determined using the following formula: IL_ijk = -10 log10(T_ijk), with i = j (14), where IL_ijk represents the insertion loss of mode TE_j (j = 0, 1, 2, 3) relative to mode TE_i (i = 0, 1, 2, 3) at port k (k = 2, 3).
As shown earlier in Figure 3c, at f_swg = 0.65 the multimode power splitter achieves uniform power splitting for TE0-TE3 optical waves at a wavelength of 1310 nm with W_WG = 8 µm. In Figure 5a-c, it can be observed that at W_WG = 30 µm the device achieves uniform power splitting for TE0-TE5 and even higher-order modes at 1310 nm, demonstrating excellent multimode expansion performance. Figure 5d-f reveals that at f_swg = 0.69 the device achieves relatively uniform power splitting for TE0-TE5 at a wavelength of 1450 nm, with ELs for the TE0-TE5 modes below 0.5 dB across the simulated wavelength range of 1200-1700 nm, and their IL curves largely overlap. However, the ELs for the TE6-TE9 modes increase significantly. Figure 6g-j illustrates the electric field distribution of the TE6-TE9 modes at 1450 nm and f_swg = 0.69, showing that the TE6-TE9 modes leak noticeably into the cladding due to reduced confinement at the SWG grating in the Si waveguide. This leakage can be improved by reducing the n_eff difference of the higher-order modes. By increasing the width W_WG, it is easy to extend the device to more modes, which can significantly reduce the losses and differences mentioned above.
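Assuming the standard decibel definitions implied by Equation (14) and the surrounding text (IL and EL from -10 log10 of transmitted power, CT from the power ratio between an unwanted and the intended output mode), the port metrics can be computed from a table of modal transmittances as sketched below. Both the definitions and the example numbers are assumptions rather than values taken from the paper.

import numpy as np

def port_metrics(T, i, k_index):
    """T[k_index, j] = power transmittance into mode TE_j at one output port for input TE_i.
    Returns IL, EL and worst-case CT in dB under the assumed definitions."""
    T_same = T[k_index, i]                                   # intended mode at this port
    il = -10 * np.log10(T_same)                              # IL_iik = -10 log10(T_iik)
    el = -10 * np.log10(T.sum())                             # EL from total power at both ports
    unwanted = np.delete(T[k_index], i)
    ct = 10 * np.log10(unwanted.max() / T_same)              # CT of the strongest parasitic mode
    return il, el, ct

# Hypothetical example: input TE0, rows = ports #2 and #3, columns = output modes TE0..TE3.
T = np.array([[0.47, 1e-4, 5e-5, 2e-5],
              [0.46, 2e-4, 1e-4, 3e-5]])
print(port_metrics(T, i=0, k_index=0))   # IL ~ 3.3 dB, EL ~ 0.3 dB, CT ~ -36.7 dB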
Discussion
In the optimization process, we employed the VARFDTD method to simulate the entire device. We set the grid resolution at the SWG to dx = dy = dz = 10 nm and used the TE0-TE9 modes for excitation. The wavelength range of the light source was set to 1200-1700 nm. In Table 1, we summarize the data from Figure 5d-f and present detailed device data. From Table 1, it can be seen that our proposed multimode power splitter exhibits extremely low excess loss (EL < 1.1 dB) and relatively small inter-mode crosstalk (-60 dB < CT < -18.8 dB) for all TE0-TE9 modes in the wavelength range of 1200-1700 nm. Furthermore, when the PSR is around 0.5, the actual IL is within 2 dB of the target IL of 3 dB, and the maximum IL difference between the different TE0-TE9 modes is less than 1.1 dB.
In the subsequent manufacturing process, our proposed device will be manufactured based on commercial SOI chips with a top Si layer of 220 nm and a buried oxide layer 2 µm in thickness. We will employ electron beam lithography (EBL) for waveguide patterning and perform inductively coupled plasma (ICP) etching on the masked chip. The Si layer will be etched completely to a depth of 220 nm, thus achieving a complete waveguide structure. Once the waveguide structure is etched and the mask is completely removed, the device fabrication will be completed by depositing a silicon nitride film using ion-assisted pulsed DC reactive magnetron sputtering. Before the device is fabricated, Figure 4 already demonstrates that the variations in EL and CTs are less than 0.02 dB and 0.5 dB, respectively, when the compensations for the transmissive and reflective optical wave displacements deviate from the target values by less than 20 nm.
The comparison between the currently reported state-of-the-art on-chip multimode power splitters and the simulation results of our work is presented in Table 2. The results indicate that, compared to other multimode power splitters, our proposed SWG-based multimode power splitter with arbitrary PSR supports more transmission modes, has a smaller footprint, and possesses an ultra-wide operating bandwidth covering the O, E, S, C, L, and U bands. In the past, devices could often only achieve spectral splitting for a few modes with PSR = 0.5, and the wavelength range and device size could not be balanced. Additionally, for the same device size and identical excitation modes (such as TE0-TE8), compared to Ref. [22], due to our use of the Si3N4 cladding structure and the compensation for the transmission and reflection shifts of the optical waves caused by the SWG, the proposed structure in this paper can achieve lower ELs (<0.92 dB) and inter-mode CTs (<-18.8 dB) over a wider operating bandwidth.
Figure 1. Designed power splitter with SWG. (a) Schematic of the 3D structure; (b) top view schematic; (c) local enlargement of the SWG structure; (d) schematic depicting the SWG structure as equivalent to a single-layer dielectric.
Figure 2. Effective refractive indices of TE0-TE9 modes at a wavelength of 1310 nm under different cladding structures as a function of the central waveguide width W_WG. (a) Air cladding; (b) SiO2 cladding; (c) Si3N4 cladding.
Figure 5. Variations in the ELs, ILs, and CTs of TE0-TE9 mode light waves in the multimode power splitter in the wavelength range of 1200-1700 nm. (a,d) Total EL variation when f_swg is 0.65 and 0.69, respectively; (b,e) port IL variation when f_swg is 0.65 and 0.69, respectively; (c,f) inter-mode CT variation when f_swg is 0.65 and 0.69, respectively.
Figure 6. Electric field distribution of TE0-TE9 modes in the multimode power splitter at 1450 nm wavelength and f_swg = 0.69. (a-j) correspond to TE0-TE9 modes, respectively.
Table 1. The simulated results of the designed power splitters. |ΔIL_0-9| represents the IL difference of different TE0-TE9 modes at the same wavelength and the same output port, |ΔIL| refers to the absolute value of the actual IL difference between the two output ports for all modes relative to the reference IL difference, and BW represents the bandwidth.
2023-12-01T16:02:23.787Z
2023-11-29T00:00:00.000
{ "year": 2023, "sha1": "179ccbb8d9b755a9e248c06797595ed3e3a76dc2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-6732/10/12/1327/pdf?version=1701269665", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "73576f9c9d30b5ae190d07f7cd63e576065e76c7", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [] }
267198552
pes2o/s2orc
v3-fos-license
Impaired motor inhibition during perceptual inhibition in older, but not younger adults: a psychophysiological study The prefrontal cortex (PFC) governs the ability to rapidly cancel planned movements when no longer appropriate (motor inhibition) and ignore distracting stimuli (perceptual inhibition). It is unclear to what extent these processes interact, and how they are impacted by age. The interplay between perceptual and motor inhibition was investigated using a Flanker Task, a Stop Signal Task and a combined Stop Signal Flanker Task in healthy young (n = 33, Mean = 24 years) and older adults (n = 32, Mean = 71 years). PFC activity was measured with functional near-infrared spectroscopy (fNIRS), while electromyography (EMG) measured muscle activity in the fingers used to respond to the visual cues. Perceptual inhibition (the degree to which incongruent flankers slowed response time to a central cue) and motor inhibition (the speed of cancellation of EMG activation following stop cues) independently declined with age. When both processes were engaged together, PFC activity increased for both age groups, however only older adults exhibited slower motor inhibition. The results indicate that cortical upregulation was sufficient to compensate for the increased task demands in younger but not older adults, suggesting potential resource sharing and neural limitations particularly in older adults. Stopping performance The speed at which a response was able to be stopped in the SST and SSFT is presented in Fig. 2, with descriptive data presented in Supplementary Table S1 online.Stopping speed was either estimated from the latency of overt button presses via SSRT (Fig. 2a), or via physiological measures of muscle activation Cancel Time (Fig. 2b). SSRT The main effect of Age group was significant, F (1, 62.95) = 29.25,p < 0.001, with slower SSRTs in older than young adults (M diff = 49 ms, SE = 9.13, t = 5.41 p = 0.001, 95% CI [31 ms, 68 ms], d = 7.59).The main effect of congruency was also significant, F (3, 178.03) = 6.15, p < 0.001.SSRT was slower when there were incongruent flankers compared to when there were neutral flankers (M diff = 22 ms, SE = 5.55, t = 3.91, p = 0.001, 95% CI [7 ms, 37 ms], d = 0.46) and compared to no flankers on SST trials (M diff = 19 ms, SE = 5.55, t = 3.36, p = 0.006, 95% CI [4 ms, 34 ms], d = 0.40).All other comparisons across levels of congruency were not significant as was the interaction between Age group and congruency, p > 0.05.Task stimuli and the methodological approach.the main figure shows the trial sequence for each of the conditions.Participants were comfortably sat approximately 60 cm from a 27" monitor.At the beginning of each trial, a central Fixation dot was displayed on the screen.The duration of the fixation varied with a truncated exponential distribution (range: 0.6-1.1 s) to prevent anticipation of the timing of the 'Go' signal.The Go signal cued participants to respond to the direction of the central arrow (left or right) with abduction of the corresponding index finger (inset image b).On 30% of the trials of the SST, and SSFT conditions the white arrow changed colour to blue after an individually tracked stop signal delay (SSD).The colour change indicated participants should Stop the button press.Feedback about trial performance was displayed for 2 s.Depending on trial performance, the feedback for 'Go' trials was either: "You've slowed down", "Incorrect", or "RT = … ".The feedback for 'Stop' trials was either: "Correct" or 
"Incorrect".Following this feedback, a blank screen was presented for 2 s before the start of the next trial.Example visual stimuli for the CRT (Go trials only) and SST (both Go and Stop trials) conditions are presented in the top row.The lower rows show stimuli for the Flanker Task (Go trials only) and SSFT condition (both Go and Stop trials).Inset image (a) shows the montage of fNIRS channel locations (white dots), located midway between fNIRS sources (red dots) and detectors (blue dots) with 30 mm source-detector separation.Neuro-navigation coordinates (10-20 system) are also depicted (green dots).Inset image (c) shows representative EMG bursts (ci) depicts a typical EMG response when pressing the button-either for a standard Go trial or a failed stop trial.(cii) depicts a typical partial EMG response, i.e., a muscle response on the side of the cued hand was initiated but the button press was successfully aborted.(ciii) depicts a successful stop in which there was no muscle response initiated. Cancel time The mean profile of the EMG bursts on stop trials, which were used to calculated Cancel Time, are presented in Fig. 3.There was a significant main effect of Age on the proportion of stop trials with prEMG, F (1, Inf) = 13.86,p < 0.001; Young = 39% of total stop trials (95% CI [35%-42%]), Older = 30% of total stop trials (95% CI [27-33%]).There were no other significant main effects or interactions. Performance on go trials The speed at which young and older adults responded to Go trials within the different conditions are presented in Fig. 4 (with descriptive data available in Supplementary Table S2 online).Response time on Go trials was Figure 3. Characteristics of partial EMG bursts in stopping trials.Changes in stopping latency and peak EMG amplitude (both prEMG, and RT-generating EMG bursts) for young adults (left column) and older adults (right column).The x axis shows time relative to the stop signal, and the y axis shows the amplitude of prEMG bursts on unsuccessful and successful stop trials.The y axis shows normalised units relative to the average peak EMG amplitude across successful Go trials from the CRT condition for that participant.4c). RT to congruent relative to neutral flankers The significant interaction appears to be driven by differences in the way young and older adults responded to trials with congruent flankers (see Fig. 4b).For young adults, there was no significant difference in RT for Peak EMG amplitude on go trials The normalised peak EMG amplitude reflects the neural drive initiated to perform the behaviour.Difference in peak amplitude for RT-generating burst are shown in Fig. 
5.There was a significant two-way interaction between Age group and Congruency on the peak amplitude of RT-generating EMG bursts, F (7, Inf) = 5.50, p < 0.001.Specifically, peak EMG amplitude was significantly lower for all trial types compared to CRT in older adults (estimate range = 10.03-12.71%,p < 0.001, d range = 0.51-0.72)for all trial types.In contrast, in young adults, EMG peak amplitude was significantly lower for all trial types relative to the CRT condition (estimate range = 3.76-7.21%,p < 0.001, d range = 0.15-0.41)except for SST trials (estimate = 2.58, z = 2.96, p = 0.085).When young and older adult performance was compared for each trial type (e.g., Congruent Young − Congruent Older ), no comparisons were significant at the 0.05 level (d range = 0.27-0.43 for all estimates), suggesting that while decreased neural drive in more complex decision tasks became more pronounced with age, this is not statistically significant.Additional analyses comparing the baseline conditions with the experimental conditions are presented in Supplementary Material S3 (exploring the effect of visual complexity on RT) and Supplementary Material S4 online (exploring the extent of proactive slowing of go responses during the SST condition relative to the CRT condition). Prefrontal cortical activity Mean waveforms of the concentration change in oxygenated haemoglobin in the left and right PFC are shown in Fig. 6 with key output parameters presented in Fig. 7. Effect of Flanker congruency on neural activity During Go trials, there was a significant two-way interaction between Condition (Flanker/FSST) and Congruency, F (1, 732) = 4.40, p = 0.013.Bonferroni adjusted contrast tests revealed that PFC neural activity increased to a greater extent for incongruent than neutral trials of the Flanker condition, (M difference = 0.18, SE = 0.05, t = 3.54, p = 0.006, d = 0.46); and was also greater on neutral trials of the combined condition than neutral trials of the Flanker condition (M difference = 0.16, SE = 0.05, t = 0.039, d = 0.39).Four-and three-way interactions between Age group, Hemisphere, Condition, and Congruency were non-significant, and no main effects were significant. Effect of stopping on neural activity There was a significant two-way interaction between Congruency and Stop success, Association between prefrontal neural activity and behavioural performance A regression model assessing the association between the extent to which stopping speed was impacted by perceptual inhibition behaviourally (Δ SSRT Incongruent − Δ SSRT Congruent flankers] and the extent to which prefrontal neural activity increased when it was loaded with both perceptual and motor Discussion The current study investigated how ageing affects the interplay between perceptual and motor inhibition by assessing prefrontal cortical activity and behavioural performance when tasks assessing these inhibitory processes were performed in isolation, or concurrently.Consistent with our central hypothesis, when both motor and perceptual inhibition were engaged, PFC activity increased for both age groups, with only older adults exhibiting a deficit in performance.These results suggest either perceptual and motor inhibition rely on shared prefrontal resources, or there are lower thresholds of central cortical processing capacity with age. 
Motor inhibition was assessed using an approach that isolates motor stopping ability from attentional processes and waiting strategies 23 , as well as the more traditional SSRT method.By estimating the time at which partial response EMG on successful stop trials reaches peak amplitude (and thereafter begins to decrease), it was possible to estimate the time at which the stop process is engaged relative to the stop cue 23 .Cancel Time calculated with this approach is arguably more accurate than SSRT estimation as it allows action cancellation to be quantified on a trial-by-trial basis, identifying when the motor command was inhibited before an overt behavioural response occurred.Standard estimation methods for SSRT yield only one value per subject (per level of each independent variable), whereas the number of Cancel Time estimates is only limited by the number of successful stop trials in which a motor response was initiated.While the proportion of trials with partial EMG responses was somewhat less in our older, compared to younger, cohort (i.e., 54% of successful stops in older, and 77% of successful stops in younger), a large number of trials per group still yielded partial responses, allowing a robust Cancel Time metric.It also suggests that younger adults may be better able to cancel an action when muscle efferent commands have been initiated.Consistent with past research, estimates of Cancel Time were ~ 100 ms shorter than SSRTs, and mean Cancel Time in the young group on SST trials (155 ms) was consistent with the range of estimates in recent studies (110 ms-166 ms [23][24][25][30][31][32] ). LongerCancel Times were observed for older adults (~ 208 ms) and notably when perceptual inhibition was engaged by incongruent Flankers (~ 170 ms in young; 217 ms in older adults). Our study provides evidence that perceptual inhibition is more disruptive to the stopping process in older than young adults.Specifically, in older adults, Cancel Time on successful stop trials with incongruent flankers was longer than in Congruent and Neutral congruency conditions.In contrast, for young adults there was no difference in Cancel Time between different types of flankers, rather all flanker trials (congruent, neutral and incongruent) exhibited longer Cancel Times relative to the SST condition in which no flanker stimuli were presented.This suggested Cancel Time was fastest when the visual display was less complex.In other words, older adults showed greater slowing in response to incongruent Flankers, above and beyond the slowing that could conceivably be engendered by a more complex visual display. 
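The Cancel Time logic described above can be sketched for a single successful stop trial as follows: rectify and smooth the EMG, detect a partial burst after the stop signal, and take the latency of its peak relative to the stop signal as the estimate. The sampling rate, threshold, and smoothing window below are illustrative placeholders, not the study's processing parameters.

import numpy as np

def cancel_time(emg, fs, stop_onset_s, burst_threshold, smooth_ms=5):
    """Return peak latency (s) of a partial EMG burst relative to the stop signal, or None."""
    rectified = np.abs(emg - emg.mean())
    win = max(1, int(fs * smooth_ms / 1000))
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    stop_idx = int(stop_onset_s * fs)
    post = envelope[stop_idx:]
    if post.max() < burst_threshold:          # no muscle response initiated (outright stop)
        return None
    peak_idx = stop_idx + int(np.argmax(post))
    return peak_idx / fs - stop_onset_s       # burst peaks, then declines: the cancel estimate

# Illustrative synthetic trial: partial burst peaking ~160 ms after a stop signal at 0.5 s, fs = 2 kHz.
fs = 2000
t = np.arange(0, 1.2, 1 / fs)
emg = 0.02 * np.random.default_rng(1).standard_normal(t.size)
emg += 0.5 * np.exp(-((t - 0.66) ** 2) / (2 * 0.03 ** 2))    # aborted button press
print(cancel_time(emg, fs, stop_onset_s=0.5, burst_threshold=0.1))   # ~0.16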
For both young and older groups, the increase in PFC activity was highest for successful stopping on incongruent flankers compared to successful stopping on all other flanker types (none, neutral dashes, or congruent arrows), this indicates that the increase in neural activity was due the perceptual inhibition requirement of the task, rather than activity associated with recognition of success or dopaminergic reward mechanisms.The CRUNCH, or scaffolding theory of cognitive ageing 14 suggests that following a decline in neural structure and function, an increase in neural recruitment in older adults is integral to maintain behavioural performance, and a broader and more bilateral range of regions are recruited for tasks that were previously quite lateralised 33,34 .The extent to which additional neural recruitment is able to compensate for functional and structural changes in the cortex to maintain task performance depends on the task requirements and individual ability.Cognitive resources may ultimately be insufficient when task demands are high (i.e., when engaging perceptual and motor inhibition concurrently), in which case performance declines. The incongruent stop trials represent the most demanding task condition, as the participant must use perceptual inhibition to ignore the incongruent flanker information whilst cancelling their motor action.Both age groups showed increased prefrontal activity in the Incongruent SSFT condition during successful stop trials relative to failed stop trials, which may reflect an increase in the attentional resources required to perform the task.However, only older adults exhibited impaired behavioural performance when trial-level stop data was analysed (i.e., Cancel Time).Whereas, on the simpler SST task, there was an increase in PFC activity only for the older adults relative to baseline CRT but no deficit in performance (see Supplementary Materials S4 and S5).Interpreted together, these results align with the CRUNCH model, whereby compensatory cortical recruitment supports behavioural performance on simple tasks, but a resource ceiling is reached for complex tasks, after which performance declines.These results are consistent with those in our previous work 35 where, in a dual task requiring both difficult balance tasks and a cognitive verbal task, older adults failed to recruit PFC to a great enough extent to prevent deficits in balance performance. The correlation between the stopping deficit incurred by concurrent perceptual inhibition, and neural activity changes did not reach statistical significance.This result may not be surprising given CRUNCH and scaffolding theories of cognitive ageing predict the relationship between neural activity and performance is non-linear and mediated by cognitive resource capacity, and how difficult an individual finds the particular task-parameters that were not controlled for in the current study. Previous research has also reported impaired stopping on incongruent stop trials 6,8,9,27,28,36 .According to Multiple Resources Theory 26 , this deficit in performance when the tasks are performed together indicates that perceptual and motor inhibition are controlled by common processing resources.However, central capacity theories 37 would argue the deficit could be explained by a bottleneck in central processing capacity.Substituting a memory task (for example) in lieu the flanker task in future studies would resolve this. 
The "Motor and Perceptual Inhibition Test" (MAPIT) 38 , an alternative task purported to investigate perceptual and motor inhibition, shows no evidence for decreased performance in instances where both types of inhibition are required, suggesting these processes are independent 39 .The divergence from our findings is likely due to the way each inhibitory process has been operationalised.In the SSFT, motor inhibition is the ability to stop initiated actions during the SST, while perceptual inhibition is the ability to filter relevant, from misleading visual information during the Flanker task.MAPIT assesses two variants of the Simon task designed to elicit stimulus-related conflict (involving ignoring arrow location in favour of direction) and response-related conflict (involving responding to directional arrows with the non-cued hand).Resolving these conflicts are referred to as "perceptual -" and "motor inhibition" respectively, but it has been argued that the perceptual component may not engender sufficient cognitive load to elicit true perceptual inhibition 40 .Furthermore, "motor inhibition" as assessed by MAPIT requires response initiation but does not involve any form of action cancellation or stopping. During incongruent stop trials of the SSFT task, perceptual inhibition was necessarily initiated prior to motor inhibition, as response selection (inhibiting the incongruent flankers to select the responding hand-based target stimulus) occurs prior to motor initiation, and subsequent cancellation.In these trials, it was motor inhibition that was impacted by the perceptual inhibition process-perceptual inhibition itself was not affected by the possibility of having to subsequently stop.That is, despite the purported sharing of similar neural resources, it appears that only if motor inhibition is actively engaged (when a stop signal appears) does the PFC activity increase; the mere possibility of stopping does not affect ongoing perceptual inhibitory processes.This finding aligns with a previous study suggesting that during sequential cognitive control tasks, the second task tends to show performance decline as a result of resource sharing 41 .Adjusting the sequential timing of the inhibitory processes in future experimental paradigms may help determine whether there is bidirectional interference between the perceptual and motor inhibition processes and rule out the contribution of order effects. A significant problem in interpreting the results of motor inhibition research has been participants' use of slowing strategies 42,43 in an attempt to facilitate accurate stopping 42 .Although the onset of the stop signal changes iteratively with every trial, these adjustments tend to be relatively small (e.g., 50 ms) and may take a long time to "catch up" with participants who slow down drastically during the task.Moreover, significant slowing in Go trials challenges the assumptions of the horse-race model, whereby contextual independence requires that Go trial performance should be invariant regardless of whether or not stop trials may be encountered.In order to mitigate the use of waiting strategies, we used visual feedback throughout the experiment if participants RT became too slow.Prior research has demonstrated that the use of such feedback results in little or no proactive slowing and yields ~ 40% of successful stop trials with prEMG 25 .In the current study, the slowing was limited to < 25 ms across age groups (see Supplementary Materials). 
Between-subject dispersion of Cancel Time estimates was more pronounced in the older than the young adult group.This may reflect variation in age-related sensorimotor decline as all older participants scored well on the cognitive screening questionnaire.These differences become somewhat obscured when a central tendency measure (i.e., mean) SSRT is used to operationalise stopping ability.Future research in this field should consider prEMG to measure stopping performance as our research indicates that it is a more powerful and nuanced approach, that is suited to testing inhibitory control within cohorts that vary in terms of their brain health and motor skill.If prEMG can be evoked reliably, then fewer trials are needed, which is an important consideration for measurement of motor inhibition in clinical populations. In summary, this study used a combination of innovative and established protocols to investigate the interplay between perceptual and motor inhibition at both the behavioural and neural level.The findings suggest that for older adults, motor inhibition is impaired when perceptual inhibition processes are engaged.This deficit may contribute to higher rates or accidents and falls in older adults, particularly in more challenging environments. Participants The sample comprised 65 young and older adult volunteers from the Hobart community.The older group (n = 32; 17 female) had a mean age of 71 years (range: 60-90), while the young group (n = 33; 22 female) had a mean age of 24 years (range: 18-36 years).The research was approved by the University of Tasmania Human Research Ethics Committee (Ref #14865) and all participants provided signed, informed consent prior to participation in agreement with the Declaration of Helsinki.Participants were excluded if they had a neurological disorder, prior brain surgery or a metal implant in their skull.The global cognitive status of the older participants was assessed with the standardised Mini-Mental State Examination (sMMSE) 44 .All participants scored above 25 (range 27-30), indicating normal cognitive status. Behavioural tasks Participants were seated comfortably in front of a computer monitor upon which visual stimuli were presented.They completed four different tasks, all requiring motor responses using the left or right index finger by pressing custom-built response buttons mounted in the vertical plane (index finger abduction see Fig. 1b).The first two conditions (Choice Reaction Time task and Flanker task) were presented in a fixed order to establish baseline reaction time (RT) in the absence of any stopping expectation; whereas the order of the subsequent two conditions (Stop Signal Task and Stop Signal Flanker Task) was counterbalanced between participants.Trial types are depicted in Fig. 1, and trial numbers in Table 1.To prevent fatigue, Conditions 3 and 4 were broken into blocks to allow rest breaks. Condition 1 Choice Reaction Time (CRT): Participants were required to respond as quickly as possible to a centrally-presented arrow with a button press using their index finger of the hand corresponding to arrow direction (left or right). 
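As a rough illustration of this trial structure (and not the actual experiment code, which used custom vertical response buttons rather than a keyboard), a minimal PsychoPy-style sketch of a single CRT trial might look as follows; the 0.6-1.1 s jittered fixation is taken from the Figure 1 caption, while the stimulus and key names are assumptions.

```python
# Minimal, illustrative PsychoPy sketch of one choice-reaction-time (CRT) trial.
import random
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="black", units="height")
fixation = visual.Circle(win, radius=0.01, fillColor="white")
arrow = visual.TextStim(win, text="", height=0.1, color="white")

def run_crt_trial():
    """Present a jittered fixation, then a left/right arrow; return cued direction, RT, and key."""
    direction = random.choice(["left", "right"])
    arrow.text = "<" if direction == "left" else ">"

    fixation.draw()
    win.flip()
    core.wait(random.uniform(0.6, 1.1))   # truncated-exponential jitter approximated by a uniform draw

    clock = core.Clock()
    arrow.draw()
    win.flip()                            # go-signal onset
    key, rt = event.waitKeys(keyList=["left", "right"], timeStamped=clock)[0]
    return direction, rt, key

direction, rt, key = run_crt_trial()
print(f"cued={direction}, responded={key}, RT={rt*1000:.0f} ms")
```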
Condition 2 The Flanker task was used to ascertain choice RT when the central arrow was flanked by either congruent or incongruent directional arrows, or by white dashes (neutral stimulus).Slower responses to the incongruent arrows relative to the neutral or congruent arrows indicates worse perceptual inhibition ability.The inclusion of neutral flankers allowed perceptual facilitation and perceptual inhibition to be assessed relative to a condition with similar visual complexity (i.e., five stimuli on the screen, with flankers that don't provide any faciliatory of Table 1.Trial numbers for each condition.Left and right stimuli were presented with equal frequency.In Flanker and SSFT conditions, trials with congruent, incongruent and neutral flankers occurred with equal frequency.Stop trials occurred on 30% of all SST and SSFT task trials and were equally frequent across all levels of hand and congruency. Condition 3 The Stop Signal Task (SST) was used to assess motor inhibition ability.As in the CRT (Condition 1), participants responded as quickly as possible to the direction of a central arrow (i.e., the 'go' signal).On 30% of trials, the arrow changed colour (i.e., the 'stop' signal, see Fig. 1), indicating that participants should attempt to cancel the button press.The delay between the 'go' and 'stop' signals (stop signal delay; SSD) was initially set at 200 ms and adjusted in 50 ms increments using an active staircasing procedure, independently calculated for each hand. The SSD increased by 50 ms after successful stop trials, (making stopping on the subsequent trial less likely) and decreased by 50 ms after failed stop trials (making stopping on the subsequent trials more likely).Thus over all stopping trials stop success approached 50% for each hand.Given that go response slowing (i.e., a waiting strategy) undermines the assumptions of the calculations for Stop Signal Reaction Time (SSRT) 21 , feedback ("You've slowed down!") was provided after each trial when RT was > 150 ms above each participant's mean CRT (calculated from Condition 1).Go responses during SST occurred, on average, less than 25 ms later than during CRT for both young and older adults, suggesting compliance with task instructions (See Supplementary Materials for descriptive statistics). Condition 4 The combined Stop Signal Flanker Task (SSFT) measured the interaction between perceptual facilitation and inhibition, and motor inhibition.Visual stimuli were the same as those used for the Flanker task.However, on 30% of all trials, the central arrow changed colour (as described in the SST condition), indicating that participants should cancel the button press.As per the SST, independent staircasing procedures adjusted SSD for each hand and level of congruency independently to achieve a stop success rate of ~ 50%; in each congruency condition (congruent, incongruent, neutral).Feedback ("You've slowed down!") was provided when go response RTs were > 150 ms slower than that participant's mean CRT for the comparable condition, i.e., congruent/incongruent/neutral trials from the Flanker task alone (Condition 2). Physiological measures Electromyography Cutaneous electromyography (EMG) was recorded using adhesive electrodes (Ag/AgCl) positioned in a bellytendon montage over the left and right first dorsal interossei (FDI); an additional electrode positioned on the ulnar bone on each wrist was used as ground reference (Fig. 
1b). The analogue signals were amplified 1000 times, sampled at 2000 Hz, band-pass filtered between 20 and 1000 Hz (CED Power 1401 and CED 1902, Cambridge, UK), and saved for post-processing.

Functional near infrared spectroscopy
The haemodynamic response was recorded with an 8-source, 7-detector fNIRS montage over the medial and dorsolateral prefrontal cortex. The montage of optodes did not correspond directly to EEG coordinates, although the overlap with the 10-20 layout is presented in Fig. 1a. This system (NIRSport, NIRx Medizintechnik GmbH, Berlin), designed to measure PFC activity, uses near-infrared light (emitted at 760 nm and 850 nm wavelengths) to capture haemodynamic changes within the outer ~ 1.5 cm of cerebral cortex. Data were recorded with NirStar software (version 15.3) with a sampling rate of 7.8125 Hz. The experimental protocol was developed in PsychoPy3 45 software. Synchronisation between the behavioural tasks and the physiological measures was achieved with a digital TTL signal from PsychoPy that triggered EMG data collection for each trial and also marked a digital event in the continuously collected fNIRS data. The mean delay between the TTL pulse and the increase in luminosity on the screen (from black to the white arrow, measured via an Arduino photodiode) was 2.9 ms (SD = 2.0 ms), which was less than one refresh period of the 240 Hz monitor (1 frame = 4.2 ms).

Stop signal reaction time calculation
SSRT was calculated using the integration method, with go omissions replaced with each subject's slowest RT 21. SSRTs were averaged across left and right hands for each participant by trial type (SST, SSFT-Congruent, SSFT-Incongruent, and SSFT-Neutral). SSRT was only calculated for participants/conditions when assumptions of the "race model" were met, i.e., (i) mean failed stop RTs < mean Go RTs; and (ii) stopping success was between 25 and 75% 21. As such, four participants' data were removed for the congruent condition, four for the incongruent condition, two for the neutral condition, and three for the SST condition.

Electromyography processing
EMG signals were digitally filtered using a fourth-order band-pass Butterworth filter between 20 and 500 Hz. Onsets and offsets of task-related EMG bursts were detected using a single-threshold algorithm when EMG amplitude was 3 SD above baseline (defined as the lowest activity detected during that trial) 52. For robustness, EMG bursts separated by less than 20 ms were merged to represent a single burst. Using all defined onsets and offsets within a trial, we defined the RT-generating EMG burst as the last burst whose onset occurred (i) after the go signal and (ii) at least 50 ms before the button press. Partial EMG responses (prEMG) on stop trials were defined as responses that were initiated after the go stimulus but cancelled (after presentation of the stop signal) before generating a button press. Specifically, prEMG was defined as (i) EMG onset after the go signal; (ii) time of peak EMG occurring after SSD (i.e., inhibition occurred in response to the stop signal); and (iii) peak prEMG amplitude greater than 10% of the average peak RT EMG from successful Go trials. EMG signals were full-wave rectified and low-pass filtered at 10 Hz to obtain the EMG profiles. To allow comparisons between conditions, EMG profiles were normalised to the average peak EMG amplitude across successful Go trials from the CRT condition for each participant (Fig. 3).
'Cancel Time' was calculated for each prEMG trial as the time from SSD to peak prEMG amplitude 23,24.

fNIRS processing
fNIRS data were processed using established algorithms in Homer3 v 1.80.2 in Matlab (see: https://github.com/BUNPC/Homer3/wiki). Channels with excessively noisy light intensities were removed using the function Prune Channels (dRange [1, 3], SNRthresh = 2, SDrange [0, 45]) before being converted to optical density. Regions of motion artifact were then identified with set thresholds that were verified via visual inspection, and a principal components analysis was performed only on these segments to avoid over-correction in the data (HOMER 3 function: hmrMotionCorrectPCArecurse; tMotion = 0.5, tMask = 1, STDEVthresh = 9, AMPthresh = 100, nSV = 0.97, maximum iterations = 5). The number of components removed varied for each participant to ensure up to 97% of the variance in the segment of data was removed 53. The fNIRS signals were reconstructed with the remaining components and offset corrected. Data were then low-pass filtered (cut-off = 0.5 Hz) and converted to concentrations (µmol) of oxygenated haemoglobin (HbO2), deoxygenated haemoglobin (HbR) and total haemoglobin (HbT). In line with our prior work, HbO2 had the highest signal-to-noise ratio and was negatively correlated with HbR, so was used as an estimate of neural activity for subsequent analyses 20,35.

fNIRS channels were averaged within left and right hemispheres of the PFC (10 channels on each side, excluding the midline channels; see Fig. 1a). Trial-level waveforms were extracted over a 10-s time window and phase-locked to stimulus onset. Trials were grouped according to Condition type and trial outcome (success/failure on stopping trials and correct/incorrect response selection on Go trials) and averaged. By averaging over multiple trials, extra-neuronal contributions associated with respiration, pulse rate, Mayer waves, etc., were minimised. Cortical activity measured via fNIRS follows a characteristic haemodynamic response function based on the principles of neurovascular coupling; there is an initial dip in the first 1-2 s, followed by a peak and then a return to baseline. The size of the oxygenated haemoglobin change 4-6 s after a neural event (relative to t = 0 s) reflects neural activity associated with that specific event. To account for between-subject variability in decision times, the peak change in HbO2 concentration within a time window between 4 and 7 s after stimulus presentation was calculated and used as the measure of neural activity in the statistical analyses.

Stopping performance
The effects of Age (young/older), Congruency (SST/neutral/congruent/incongruent), and their interaction on the motor inhibition measures (SSRT and Cancel Time) were investigated with two mixed models, each with a subject-level random effect. Bonferroni-adjusted contrast tests were used to compare changes in stopping performance across all levels of congruency, within young and older adults, for significant interactions. For SSRT, a linear mixed model was used; but, due to positively skewed trial-level data for Cancel Time, a generalised linear mixed model (with a gamma distribution and identity link function) was used. To examine differences in the proportion of trials with and without prEMG, a binomial GLMM was run with a probit link function, with fixed factors of Age and Congruency and random intercepts for subjects.
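Before turning to the Go-trial analyses, a minimal sketch of the two stopping measures described above is given below: SSRT via the integration method (with go omissions replaced by the slowest RT) and trial-level Cancel Time (time from SSD to peak prEMG). Function names and the array-based inputs are illustrative assumptions rather than the analysis code used in the study.

```python
# Illustrative sketch (not the study's analysis code) of the integration-method SSRT
# and of trial-level Cancel Time as described in the text.
import numpy as np

def ssrt_integration(go_rts_ms, mean_ssd_ms, p_stop_fail, n_go_omissions=0):
    """SSRT = nth percentile of the go RT distribution minus mean SSD, where n is the
    probability of failing to stop; omitted go trials are replaced with the slowest RT."""
    rts = np.asarray(go_rts_ms, dtype=float)
    if n_go_omissions:
        rts = np.concatenate([rts, np.full(n_go_omissions, rts.max())])
    rts.sort()
    nth_rt = rts[int(np.ceil(p_stop_fail * len(rts))) - 1]
    return nth_rt - mean_ssd_ms

def cancel_time(emg_envelope, sample_rate_hz, go_onset_idx, ssd_ms):
    """Cancel Time = time from stop-signal onset (go onset + SSD) to the peak of the
    partial-EMG burst on a successful stop trial."""
    envelope = np.asarray(emg_envelope, dtype=float)
    peak_idx = int(np.argmax(envelope))
    peak_ms = (peak_idx - go_onset_idx) / sample_rate_hz * 1000.0
    return peak_ms - ssd_ms

# Toy example: 2000 Hz EMG envelope with a partial burst peaking ~350 ms after the go signal.
fs, go_idx = 2000, 1000
env = np.zeros(4000)
env[go_idx + 500 : go_idx + 900] = np.hanning(400)   # burst peaks ~700 samples (350 ms) post-go
print(cancel_time(env, fs, go_idx, ssd_ms=200.0))     # ~150 ms
print(ssrt_integration([380, 420, 450, 500, 520, 560], mean_ssd_ms=230.0, p_stop_fail=0.5))
```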
Go performance Reaction time data to 'Go' trials were screened to remove trials where an incorrect left/right choice response was made and anticipatory responses (RT < 150 ms) (n trials included = 26,639, n trials excluded = 893) as well as practice trials.Go RTs were analysed using a generalised linear mixed model (GLMM) with gamma distribution and identity link function.This is consistent with best-practice guidelines for RT analysis, as distributions are typically positively skewed 54 .Fixed effects were Condition (Flanker task; Combined FSST), Congruency (Congruent, Incongruent, Neutral), and Age (young/older), with a subject-level random effect and random slopes by condition.Bonferroni-adjusted contrast tests were used to examine the change in RT for congruent and incongruent flanker trials relative to the neutral flankers.To examine differences in normalised peak EMG amplitude on Go trials between Age group and Congruency | Condition (CRT, SST, Flanker Congruent, Flanker Incongruent, Flanker Neutral, FSST Congruent, FSST Incongruent, FSST Neutral) were analysed with a linear mixed model with subject-level random effect. Neural activity changes A linear mixed model investigated the effect of flanker congruency on prefrontal neural activity during Go trials.Age group, Hemisphere (left/right), Condition (Flanker/Combined) and Congruency (congruent, neutral, incongruent) were fixed effects, with participant as a random effect.To investigate the effect of stopping on neural activity, linear mixed models examined the effects of Age group, Condition, Hemisphere, and Stop Success (success/fail) on neural activity changes during Stop trial of the SST and FSST conditions with a subject-level random effect. Figure 1 . Figure 1.Task stimuli and the methodological approach.the main figure shows the trial sequence for each of the conditions.Participants were comfortably sat approximately 60 cm from a 27" monitor.At the beginning of each trial, a central Fixation dot was displayed on the screen.The duration of the fixation varied with a truncated exponential distribution (range: 0.6-1.1 s) to prevent anticipation of the timing of the 'Go' signal.The Go signal cued participants to respond to the direction of the central arrow (left or right) with abduction of the corresponding index finger (inset image b).On 30% of the trials of the SST, and SSFT conditions the white arrow changed colour to blue after an individually tracked stop signal delay (SSD).The colour change indicated participants should Stop the button press.Feedback about trial performance was displayed for 2 s.Depending on trial performance, the feedback for 'Go' trials was either: "You've slowed down", "Incorrect", or "RT = … ".The feedback for 'Stop' trials was either: "Correct" or "Incorrect".Following this feedback, a blank screen was presented for 2 s before the start of the next trial.Example visual stimuli for the CRT (Go trials only) and SST (both Go and Stop trials) conditions are presented in the top row.The lower rows show stimuli for the Flanker Task (Go trials only) and SSFT condition (both Go and Stop trials).Inset image (a) shows the montage of fNIRS channel locations (white dots), located midway between fNIRS sources (red dots) and detectors (blue dots) with 30 mm source-detector separation.Neuro-navigation coordinates (10-20 system) are also depicted (green dots).Inset image (c) shows representative EMG bursts (ci) depicts a typical EMG response when pressing the button-either for a standard Go trial or a failed stop 
trial.(cii) depicts a typical partial EMG response, i.e., a muscle response on the side of the cued hand was initiated but the button press was successfully aborted.(ciii) depicts a successful stop in which there was no muscle response initiated. Figure 2 . Figure 2. SSRT and Cancel Time measures of stopping speed.Changes in SSRT (a) and Cancel Time (b), for young and older adults.Black symbols represent group mean for each level of Flanker congruency (older: circles; young: diamonds); with black lines indicating the 95% confidence intervals around the mean.Additionally, boxplots indicate the median, interquartile range, maximum, and minimum of the data.Outliers (points > 1.5 times the interquartile range) are shown as grey points.Note that Cancel Time was faster than SSRT and that regardless of the way motor inhibition was operationalised (SSRT or Cancel Time), older adults demonstrate longer inhibition times for incongruent stop trials relative to other congruency conditions. Figure 4 . Figure 4.The effects of Age, Congruency and Condition on Reaction Time.Boxplots represent the median and IQR of data stratified by condition (Green = Combined, Purple = Flanker); with whiskers representing the maximum and minimum range values and outlier data (> 1.5 times the IQR) represented by grey dots.In panels B and C, horizontal lines represent median delta RT values, notches represent 95% CIs around the median, and outliers are indicated by black dots.(a) RT stratified by Age group, condition and congruency.Black dots represent mean RT for Congruent, Incongruent, and Neutral trials, with black lines representing the 95% CI around the means.(b) ΔRT Congruent−Neutral Trials, stratified by Age group and Condition.(c) ΔRT Incongruent−Neutral Trials, stratified by Age group and Condition. Figure 5 . Figure 5. EMG mean response profiles in Go trials for Young (left) and Older participants (right).Each condition (a) SST, (b) Flanker, and (c) SSFT is compared to the CRT condition (grey) by synching peak EMG mean (± 95% CI) waveforms.The inset figures are zoomed in on the peak to highlight comparisons between peak amplitudes.The ordinate axis shows normalised units relative to the average peak EMG amplitude across successful Go trials from the CRT condition for that participant. Figure 6 . Figure 6.Prefrontal event-related haemodynamics changes.Mean changes (± 95% CI bands) in oxygenated haemoglobin in the left (left panels) and right (right panels) PFC.Activity was aligned to go stimulus onset and zeroed over the mean level 0−2 s.Row (a) represents neural activity changes for congruent (orange lines) and incongruent (blue lines) Go trials of the Flanker task.Row (b) represents changes in oxygenated haemoglobin for successful and failed stop trials of the SST.Row (c) represents oxygenated haemoglobin changes for successful and unsuccessful incongruent stop trials of the combined SSFT task.Row (d) shows the PFC activity for successful stop trials by flanker type in the SSFT condition. Figure 7 . Figure 7. 
Changes in neural activity.(a) Changes in bilateral prefrontal neural activity on Go trials of the Flanker and SSFT tasks.As there was no interaction between Age group and other variables, this panel pools data for young and older participants across both left and right PFC.(b) Changes in bilateral prefrontal neural activity on Stop trials for the range of different flankers presentations: None (SST task), and Congruent (Con), Incongruent (Incon) and Neutral (Neu) of the SSFT.For successful stop trials (right panel), bilateral neural activity change was greater for the Incongruent stop trials relative to all other levels of Congruency.Prefrontal neural activity change was greater for incongruent stop-success trials than incongruent stop-fail trials (left panel). inhibitory information in contrast to the single central arrow in the CRT task; see Supplementary Material S3 online).
Measles virus transmission patterns and public health responses during Operation Allies Welcome: a descriptive epidemiological study Summary Background On Aug 29, 2021, Operation Allies Welcome (OAW) was established to support the resettlement of more than 80 000 Afghan evacuees in the USA. After identification of measles among evacuees, incoming evacuee flights were temporarily paused, and mass measles vaccination of evacuees aged 6 months or older was introduced domestically and overseas, with a 21-day quarantine period after vaccination. We aimed to evaluate patterns of measles virus transmission during this outbreak and the impact of control measures. Methods We conducted a measles outbreak investigation among Afghan evacuees who were resettled in the USA as part of OAW. Patients with measles were defined as individuals with an acute febrile rash illness between Aug 29, 2021, and Nov 26, 2021, and either laboratory confirmation of infection or epidemiological link to a patient with measles with laboratory confirmation. We analysed the demographics and clinical characteristics of patients with measles and used epidemiological information and whole-genome sequencing to track transmission pathways. A transmission model was used to evaluate the effects of vaccination and other interventions. Findings 47 people with measles (attack rate: 0·65 per 1000 evacuees) were reported in six US locations housing evacuees in four states. The median age of patients was 1 year (range 0–26); 33 (70%) were younger than 5 years. The age distribution shifted during the outbreak towards infants younger than 12 months. 20 (43%) patients with wild-type measles virus had rash onset after vaccination. No fatalities or community spread were identified, nor further importations after flight resumption. In a non-intervention scenario, transmission models estimated that a median of 5506 cases (IQR 10–5626) could have occurred. Infection clusters based on epidemiological criteria could be delineated into smaller clusters using phylogenetic analyses; however, sequences with few substitution count differences did not always indicate single lines of transmission. Interpretation Implementation of control measures limited measles transmission during OAW. Our findings highlight the importance of integration between epidemiological and genetic information in discerning between individual lines of transmission in an elimination setting. Funding US Centers for Disease Control and Prevention. Laboratory confirmation of measles cases Laboratory confirmation included the detection of measles-specific IgM in serum by enzyme immunoassays or the detection of measles virus RNA in a nasopharyngeal or urine specimen by real-time reverse-transcription-polymerase-chain-reaction (RT-PCR) assays, or both.Assays to detect IgM were performed at commercial laboratories, the Virginia Division of Consolidated Laboratory Services (DCLS), and CDC.Detection of measles virus RNA was performed at the Virginia DCLS, New York State Public Health Laboratory (NY), Wisconsin State Public Health Laboratory (WI), CDC, and California State Public Health Laboratory (CA).Measles genotyping and an RT-PCR assay to detect the measles vaccine strain (MeVA) 1 were performed in WI, NY, CA, and CDC. 
Measles Mumps Rubella (MMR) vaccine uptake Rates of MMR vaccine uptake among eligible Afghan evacuees during OAW were calculated using US Department of Defense (DoD) data reports documenting the daily and cumulative number of MMR doses administered at each military base from September 9 to October 15, 2021.DoD reports were available for 30 of the 37 days covering this period.To impute the number of MMR doses administered in days with missing data (October 3-7 and October 9-10) we calculated the difference in cumulative MMR doses across the data gap and assumed the doses were evenly distributed among missing days. DoD started reporting the daily and cumulative number of MMR doses administered at each military base on September 9, 2021.Limited vaccination of arriving Afghan evacuees occurred as early as August 24, 2021.We assumed the cumulative number of MMR doses documented in the September 9 report to be evenly distributed from August 24-September 8, 2021. Rates of MMR vaccine uptake at Hotel A were calculated using Department of Homeland Security data on the daily number of MMR doses administered at Hotel A. MMR vaccine eligibility and estimation of vaccine coverage Afghan evacuees were considered ineligible for vaccination if they were aged <6 months or pregnant.To estimate the denominator of MMR vaccine eligible evacuees we used three sources of information: 1. Base-specific populations on September 24, 2021, as provided by DoD (below).September 24 was selected because it occurred during the international flight pause and military base quarantine periods, during which base populations were stable.a Hotel A, the contracted isolation and quarantine hotel in Virginia, was managed by the Department of Homeland Security (DHS) and thus the census of pregnant women was taken over the duration of Hotel A being open to evacuees. Populations at eight locations housing Afghan evacuees during The number of immunocompromised persons was not accounted for in the calculation of the vaccineeligible population as this information was not available.Rates of MMR vaccine uptake among eligible Afghan evacuees at each location during the pause of international flights is shown below. One-dose MMR vaccine uptake among eligible Afghan evacuees by location.One-dose MMR vaccine uptake is plotted across nine military bases and the isolation and quarantine hotel (Hotel A) during the pause on international flights from September 10, 2021 to October 5, 2021. Schematic representation of disease states, flow between states, and parameters controlling flow in a model of measles transmission during Operation Allies Welcome. The model represents a constant (closed) population in which individuals belong to one of four states related to measles infection: susceptible (S), exposed (E), infected (I), and recovered (R).The model tracks the daily number of persons in each compartment, and incorporates stochasticity using the adaptive tau-leaping algorithm. 
2,3 Individuals in the susceptible pool (S) become exposed by the force of infection λ(t) = β·I_t, i.e., the per capita rate at which two persons come into sufficient contact to lead to infection per unit of time (β) times the number of infectious persons at time t (I_t), and progress to the exposed, pre-infectious (E) state. Transitions into the I and R compartments are determined by rates σ and γ, respectively. The effect of mass measles-mumps-rubella (MMR) vaccination is denoted by θ; susceptible persons are removed from the S compartment and added to the R compartment based on the date MMR vaccine was administered, with a lag-time of 7 days to account for a delay in vaccine-induced immunity. The compartment Sv pertains to individuals who were vaccinated and failed to produce an adequate immune response (primary vaccine failures). p represents the probability of primary vaccine failure (or 1 − vaccine effectiveness).

Transitions from the S to I compartments are determined by the transmission rate β (the per capita rate at which two persons come into sufficient contact to lead to infection per unit of time) and by the number of infectious persons at time t (I_t). β can be calculated as β = R_0/(N·D), where R_0 is the basic reproduction number, or average number of secondary cases generated per infected individual in a fully susceptible population, N is the population size, and D is the duration of infectiousness. R_0 in a particular population can be derived from the effective reproduction number R_e, or average number of secondary cases generated per infected individual in a population with some level of immunity, and the proportion of the population susceptible s, as R_0 = R_e/s. We adapted 4 the Wallinga and Teunis algorithm, 5,6 a maximum likelihood approach that uses the times between illness onsets of cases in the same infection cluster and the probability density of the serial interval (the time between successive illness onsets in a transmission chain), to infer R_e. We used dates of rash onset and a serial interval derived from household transmission studies with a gamma probability distribution and a mean (standard deviation) of 11·1 (2·47) days. 7 To obtain a range of R_e values, we applied the algorithm to observed measles patients in the following settings: (1) all locations; (2) Ft. McCoy; and (3) a single barrack with a high attack rate at Fort McCoy, Barrack A. The first was selected because many patients shared the same itineraries to come to the U.S. and could have been part of a larger infectious cluster. The second and third were selected because these were more defined infection clusters. To calculate R_0, we used early estimates of R_e that would be less affected by the containment measures implemented across bases, particularly the rapid rise in vaccine uptake, and that reflect more baseline population immunity.

To estimate the proportion of the population that was susceptible at the outset, we developed an age-specific immunity profile of Afghanistan in 2021 based on routine immunization coverage and supplementary immunization activities (details provided below). We used a weighted average of age-specific immunity estimates (weighted by population size within each age-stratum) to inform overall susceptibility and to derive R_0 from R_e. Resulting R_0 estimates are shown below; these were consistent with prior R_0 estimates of measles in various settings (range, 5-18). 8,9
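A minimal sketch of the adapted Wallinga-Teunis step that feeds these R_e (and hence R_0) estimates is given below, assuming a simple version in which each case's expected number of secondary cases is obtained from pairwise serial-interval likelihood weights over rash-onset dates. The gamma serial interval (mean 11·1 days, SD 2·47 days) is from the text; the function structure and normalisation are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative Wallinga-Teunis style estimate of Re from rash-onset dates.
import numpy as np
from scipy.stats import gamma

MEAN_SI, SD_SI = 11.1, 2.47
SHAPE = (MEAN_SI / SD_SI) ** 2      # gamma shape from mean and SD
SCALE = SD_SI ** 2 / MEAN_SI        # gamma scale

def serial_interval_pdf(days):
    return gamma.pdf(days, a=SHAPE, scale=SCALE)

def wallinga_teunis_re(onset_days):
    """Expected number of secondary cases attributed to each case in one cluster.

    For each potential infectee j, the probability that case i infected j is the
    serial-interval density at (t_j - t_i), normalised over all possible infectors."""
    t = np.asarray(onset_days, dtype=float)
    r = np.zeros(len(t))
    for j in range(len(t)):
        weights = serial_interval_pdf(t[j] - t)   # zero for onsets at or after t_j
        total = weights.sum()
        if total > 0:
            r += weights / total                  # share credit for case j among infectors
    return r

onsets = [0, 9, 11, 12, 21, 23]                   # toy rash-onset days in one cluster
re_by_case = wallinga_teunis_re(onsets)
print(re_by_case, "mean Re among early cases:", re_by_case[:2].mean())
```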
We used the median value of R_0, 14·00, for primary analyses. There is uncertainty around baseline immunity levels in this population (see below), and several caveats could affect estimation of R_e (e.g., not being able to fully account for importations and the use of prior estimates of the serial interval). 10

The infectiousness rate σ and the recovery rate γ describe the rates at which individuals progress into the I and R compartments and are inversely proportional to the pre-infectious period (the time period between infection and onset of infectiousness) and the duration of infectiousness, respectively. We parameterized the model using an 8-day pre-infectious period and a 5-day duration of infectiousness. 11,12 In the model, evacuees susceptible to measles who received MMR vaccine are removed from the S compartment and added to the R compartment according to the date of vaccine receipt and assuming a 7-day delay in acquisition of vaccine-derived immunity. We assumed a vaccine effectiveness (VE) of one dose of MMR vaccine to be 84% for infants aged 6-11 months and 92·5% for persons aged ≥12 months. 13,14 For each stratum, we created an additional compartment, Sv, for once-vaccinated evacuees who remained susceptible (primary vaccine failure) and who could contribute to transmission, but who were not vaccinated a second time during initial mass vaccination campaigns.

The differential equations approximating the stochastic process of this model are listed below, with state variables (e.g., S_i, E_i, I_i) representing proportions of the total population, for each stratum i:

dS_i/dt = -λ(t)·S_i - θ_i(t)
dSv_i/dt = p·θ_i(t) - λ(t)·Sv_i
dE_i/dt = λ(t)·(S_i + Sv_i) - σ·E_i
dI_i/dt = σ·E_i - γ·I_i
dR_i/dt = γ·I_i + (1 - p)·θ_i(t)

where I(t) = Σ_{i=1..5} I_i(t) and the force of infection λ(t) = β·I(t).

The model was divided into five strata based on MMR vaccine eligibility: (1) <6 months of age; (2) 6-11 months of age; (3) 1-11 years of age; (4) >12 years of age and not pregnant; and (5) >12 years of age and pregnant. We assumed homogeneous mixing between strata due to the congregate living environment of evacuees across the bases, where families of mixed ages resided in barracks, and all ages intermixed at common facilities such as dining and recreation areas. We used finite population sizes for each base based on the population denominators from September 24, 2021. Models were run 1000 times for 200 days independently at each of the five bases that reported cases. Models were started with one importation each into Ft. McCoy, MCB Quantico, Ft. Pickett, and JBMDL, and two importations into Holloman Air Force Base, based on the minimum number of potential importations (Figure 4). In certain outbreaks in the U.S., as was the case during OAW, some cases that are classified as an international importation (i.e., if at least some of the patient's exposure period (7-21 days before rash onset) occurred outside the U.S. and rash onset occurred within 21 days of entering the U.S., with no known measles exposure in the U.S. during that time) might actually be secondary cases to a first importation, because the link or exposure between these cases cannot be verified. This can occur when there are multiple importations and considerable mixing in the affected population. Because the probability of an outbreak and the number of subsequent cases increases with an increase in the number of importations, to initiate the models we conservatively assumed the minimum number of importations that could explain subsequent cases at each of the bases.
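The sketch below gives a minimal, illustrative discrete-time stochastic implementation of this compartmental structure for a single base: a simple binomial chain standing in for the adaptive tau-leaping algorithm actually used. Parameter values follow the text (R_0 = 14, 8-day pre-infectious period, 5-day infectious period, one-dose VE of 92·5%), but the population size, susceptibility, and vaccination schedule shown are placeholders, and the 7-day lag to vaccine-induced immunity and age strata are omitted for brevity.

```python
# Minimal stochastic SEIR(+Sv) sketch for one base; not the model used in the study.
import numpy as np

rng = np.random.default_rng(1)

def simulate_base(n_pop, s0_frac, daily_doses, r0=14.0, pre_inf_days=8, inf_days=5,
                  ve=0.925, n_import=1, n_days=200):
    beta = r0 / (n_pop * inf_days)               # per-capita transmission rate (see text)
    sigma, gamma_ = 1.0 / pre_inf_days, 1.0 / inf_days
    S = int(n_pop * s0_frac); Sv = 0
    E, I, R = 0, n_import, n_pop - int(n_pop * s0_frac) - n_import
    cases = I
    for day in range(n_days):
        p_inf = 1.0 - np.exp(-beta * I)                          # daily infection probability
        inf_S = rng.binomial(S, p_inf)                           # S  -> E
        inf_Sv = rng.binomial(Sv, p_inf)                         # Sv -> E (primary failures)
        new_infectious = rng.binomial(E, 1.0 - np.exp(-sigma))   # E  -> I
        new_recovered = rng.binomial(I, 1.0 - np.exp(-gamma_))   # I  -> R
        S -= inf_S; Sv -= inf_Sv
        doses = min(S, daily_doses[day] if day < len(daily_doses) else 0)
        protected = rng.binomial(doses, ve)       # vaccine 'takes'; 7-day lag omitted here
        S -= doses; Sv += doses - protected
        E += inf_S + inf_Sv - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered + protected
        cases += new_infectious
    return cases

# Placeholder example: ~13,000 evacuees, ~25% susceptible, a 10-day campaign of 1,000 doses/day.
median_size = np.median([simulate_base(13_000, 0.25, [1_000] * 10) for _ in range(200)])
print("median outbreak size:", median_size)
```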
Modeling the vaccination campaigns The number of daily MMR doses administered at each of the affected bases and Hotel A were used to model the vaccination campaigns.MMR vaccine was administered to all eligible individuals who lacked written documentation of MMR vaccination.Because availability of vaccination records among evacuees was exceptionally rare, doses were given indiscriminately during the vaccination campaigns, including to those who may have been already immune from prior vaccination or natural disease but who lacked such documentation.Thus, we proportioned the daily doses of MMR among evacuees who were eligible to be vaccinated (i.e., those in strata 2, 3, and 4), based on the proportion these groups contributed to in terms of overall population size at each base. Calculation of susceptibility profile Because of maternal or natural immunity, or prior vaccination, the model assumes a proportion of the population is in the recovered compartment at the outset.Serosurveys that characterize the immunity profile in Afghanistan are unavailable.We estimated the age-specific measles immunity profile of Afghanistan in 2021 using an approach described by Xi Li, Robert Perry, and James Goodson at the WHO Meeting of the Advisory Committee on Immunization and Vaccine related Implementation Research. 15The approach estimates agespecific immunity levels reached through routine immunization with the first (MCV1) and second (MCV2) dose of a measles-containing vaccine, as well as through supplementary immunization activities (SIAs). Routine immunization coverage data for MCV1 and MCV2 were obtained from WHO/UNICEF Joint Estimates of National Immunization Coverage (WUENIC) (https://immunizationdata.who.int) and SIA coverage data were obtained from WHO/IVB Database of SIAs (https://immunizationdata.who.int/listing.html?topic=additional-data&location=).The total population in Afghanistan in 2021 by single age groups were drawn from the United Nations Department of Economic and Social Affairs, Population Division (https://population.un.org/wpp/Download/Standard/CSV/).All sites were accessed on July 14, 2022. In this approach it is assumed that previously vaccinated children are reached first for a subsequent vaccine dose (either through MCV2 or SIA) before unvaccinated children ("dependent scenario"). 15The following formulas were used to estimate age-specific immunity levels, with equations 2 and 3 applied incrementally to the prior equation, such that, for example, equation 2 is applied to those who have been vaccinated with MCV1: (1) % 1 = 1 * 1 (2) In Afghanistan, MCV1 is recommended at 9 months for age, and MCV2 is recommended at 18 months of age. 16VE for MCV1 was assumed to be 84•0% when received at 6-11 months of age and 92•5% when received at >12 months of age. 14VE for MCV2 was assumed to be 95%.For the SIAs, the corresponding VE was applied based on the age cohorts targeted by the particular SIA and whether prior doses were received. 15vailable SIA coverage was based on administrative data, which can be biased because of inaccurate numerators or denominators.Because 12 (close to two-thirds) of the 19 SIA coverage estimates were >95%, including 8 estimates >100%, we adjusted all SIA estimates by a factor of 81% based on a single available post-campaign assessment showing a coverage of 92% for an SIA with reported administrative coverage of 113%.Derived estimates of age-specific immunity and susceptibility in Afghanistan are shown below. 
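As a side illustration of the dependent-scenario logic described in this section (not the derived estimates themselves, and not the authors' exact equations, which are only partly reproduced above), the sketch below shows one way the assumption that previously vaccinated children are reached first for each subsequent dose could be implemented for a single birth cohort; all coverage values in the example are placeholders.

```python
# Illustrative interpretation of the "dependent scenario": each additional dose (MCV2 or SIA)
# is assumed to reach previously vaccinated children before unvaccinated ones.
def add_dose(frac_vaccinated, frac_immune, coverage, ve_first_dose, ve_this_dose):
    """Apply one routine or campaign dose with the given coverage to a cohort."""
    to_prior = min(coverage, frac_vaccinated)        # doses reaching already-vaccinated children
    to_naive = coverage - to_prior                   # remainder reaches unvaccinated children
    # Among re-dosed children, only prior vaccine failures can newly seroconvert.
    failures_redosed = to_prior * (1 - frac_immune / frac_vaccinated) if frac_vaccinated else 0.0
    frac_immune += failures_redosed * ve_this_dose + to_naive * ve_first_dose
    frac_vaccinated = min(1.0, frac_vaccinated + to_naive)
    return frac_vaccinated, frac_immune

# Placeholder cohort: MCV1 at 65% coverage (VE 92.5%), MCV2 at 45% (VE 95%), one SIA at 75%.
vacc, imm = 0.0, 0.0
vacc, imm = add_dose(vacc, imm, 0.65, ve_first_dose=0.925, ve_this_dose=0.925)  # MCV1
vacc, imm = add_dose(vacc, imm, 0.45, ve_first_dose=0.925, ve_this_dose=0.95)   # MCV2
vacc, imm = add_dose(vacc, imm, 0.75, ve_first_dose=0.925, ve_this_dose=0.95)   # SIA
print(f"ever vaccinated: {vacc:.2f}, immune: {imm:.2f}, susceptible: {1 - imm:.2f}")
```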
Calculation of overall susceptibility of Afghan evacuees to derive 𝑹 𝟎 from 𝑹 𝒆 Similarly, the overall susceptibility of Afghan evacuees (across all ages) used to derive 0 from was calculated as the weighted average (weighted by population size within each age-stratum) of age groupspecific immunity estimates, as follows: Limitations of measles immunity profile of Afghanistan Several limitations to the approach used to estimate the immunity profile of Afghanistan should be considered.First, the approach does not incorporate immunity derived from natural infection, and it could underestimate overall population immunity, particularly among older age groups who were born when measles coverage was low, and levels of measles circulation were high.Second, the approach does not account for heterogeneity in vaccine coverage and assumes measles vaccine doses administered both routinely and through supplementary immunization activities (SIAs) were distributed homogenously across subpopulations.Third, among older age-groups, for whom measles vaccination or measles vaccination coverage estimates were unavailable, we used published data on measles immunity levels among adult Afghan asylum seekers in The Netherlands, 18 which might not be generalizable to the evacuee population of OAW.Finally, the precision of the profile estimates relies on the accuracy of the data on routine immunization and SIA coverage.For routine immunization, we used WHO and UNICEF estimates of national immunization coverage which are more conservative than coverage estimates based on administrative data.SIA coverage was based on administrative data, which can under-or overestimate vaccination coverage.Because post-campaign coverage surveys were unavailable, we adjusted the reported SIA coverage by 81% based on a single post-campaign assessment. Although representative age-specific measles seroprevalence data are not available for Afghanistan, we were able to benchmark our estimates with serology data obtained from a subset of Afghan evacuees at Ft. McCoy.These individuals were potentially exposed to the first measles case identified at Ft. McCoy and included all co-passengers on the same flight arriving to Ft. McCoy from Dulles International Airport, as well as the flights before and after the first case's flight (due to overlapping times at Dulles International Airport and during intake at Ft. McCoy).Testing was done using an enzyme immunoassay through a U.S. commercial laboratory.This test is calibrated against the established correlate of protection of 120 mIU/ml by the plaque reduction neutralization (PRN) assay.Among these 441 Afghan evacuees at Ft. McCoy tested using an enzyme immunoassay, measles-specific IgG antibodies were positive in 208 evacuees, negative in 107 evacuees, and equivocal in 126 evacuees.The overall proportion susceptible based on negative results alone was 24•2%, suggesting the approach we used might be underestimating overall susceptibility.Below Because of these important caveats, the model trajectories we present are not intended to be exact projections, but rather serve to characterize relative differences in transmissibility under various scenarios in order to assess the impact of public health interventions.In addition, sensitivity analyses were performed around estimated susceptibility levels. 
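As a small worked illustration of the population-weighted susceptibility calculation and the Ft. McCoy serology benchmark described above (the age-stratum populations and immunity values shown are made-up placeholders, since the actual denominators and the derived immunity table are not reproduced here):

```python
# Toy illustration of (i) population-weighted overall susceptibility and (ii) the
# observed susceptible proportion in the Ft. McCoy serology subset described above.
import numpy as np

pop_by_stratum = np.array([400, 700, 4500, 7000])        # placeholder stratum sizes
immune_by_stratum = np.array([0.60, 0.30, 0.75, 0.90])   # placeholder immunity estimates

overall_susceptible = 1.0 - np.average(immune_by_stratum, weights=pop_by_stratum)
print(f"weighted overall susceptibility: {overall_susceptible:.3f}")

# Serology benchmark: 107 of 441 tested evacuees were IgG-negative.
print(f"IgG-negative proportion: {107 / 441:.3f}")        # ~0.24, as reported above
```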
Modeling measles transmission at Hotel A Using the approach described above, we also modeled potential outbreak trajectories at the isolation and quarantine hotel (Hotel A) in Virginia.Because of a few key differences between Hotel A and the military bases, results from this analysis are reported separately and were not incorporated into pooled results in the main text.As a contracted isolation and quarantine facility, Hotel A received Afghan evacuees who had (or were exposed to) certain infectious diseases, including patients suspected of having measles.Evacuees noted to have symptoms (e.g., fever, rash) at Dulles International Airport were either transported to the hospital directly for medical assistance and evaluation, and upon discharge, went to Hotel A, or were transported directly to Hotel A with their family unit.Because suspected measles patients identified upon arrival to the United States were referred to Hotel A, Hotel A had substantially more importations compared to other settings (Figure 2).Social contacts and mixing in Hotel A might have been different to that seen at military bases, as patients with different conditions or exposures were grouped together on different floors.Hotel A also had a much smaller population compared to the military bases.Finally, isolation and quarantining at Hotel A ceased after about two months from the initiation of OAW, and thus measles transmission was modeled over a different time frame.We modeled measles transmission at Hotel A starting with 4 importations for a period of 60 days. Genotyping by Partial Sequence Window (N450) N450 genotyping was performed at CDC and the Wisconsin State Laboratory of Hygiene.The N450 fragment is obtained from the C-terminal 450 bases of the measles virus nucleoprotein gene (excluding the terminal stop codon), the analyses of which is widespread in molecular surveillance practice. 19Briefly, total nucleic acids (TNA) were extracted from clinical specimens (nasopharyngeal/oropharyngeal swabs); 5µL of TNA was subjected to reverse-transcription PCR (RT-PCR) amplification.Amplicons were column-purified and sequenced to double-per-reaction coverage by Sanger chemistry.Contigs were assembled, quality-edited, and then trimmed again to produce the final N450 window.Resulting N450 contigs were aligned to WHO standards to ascertain genotype (B3 for all specimens).During Operation Allies Welcome, 8 measles vaccine reactions were identified by detection of measles genotype A or a vaccine-specific RT-PCR assay (MeVa). Preparation of Unbiased RNA-Seq Libraries to Obtain Whole Genomes 20µL of specimen extract (total nucleic acid) was digested with RNAse-free DNAse for 10min at 37 o C (NEB, Ipswich, MA).RNA-Seq libraries were prepared using non-directional NEBNext® chemistry (NEB) according to manufacturer's instructions, with the following modifications: (1) RNA fragmentation was performed for 7min30s at 94 o C. (2) First strand synthesis was performed according to long-fragment recommendations -(10min at 25 o C, 50min at 42 o C, 15min at 70 o C). (3) At ligation, adaptors were diluted to a ratio of 1:100.(4) Indexing PCR was performed using recommended cycling parameters in the presence of unique dual index (UDI) oligos (NEB), for 20 cycles.Final library size distribution was confirmed using Tapestation® high-sensitivity D1000 capillary electrophoresis (Agilent, Santa Clara, CA).Library concentrations were verified by real-time PCR targeting Illumina adaptor ends (NEB). 
Enrichment of Measles cDNA Fragments in Illumina RNA-Seq Libraries to Improve Assembly Libraries were enriched for measles virus fragments using a modification of the probe-hybridization method of Metsky (CATCH). 20Briefly, 2µL of cDNA library was hybridized to biotin-tagged RNA probes in the presence of adaptor blocking primers, Human Cot-1 DNA, and sheared salmon sperm DNA for 4h at 65 o C. Probe-fragment hybrids were bound to MyOne® C1 magnetic, streptavidin-coated beads (Invitrogen/Fisher) and then washed 3x in the presence of buffers containing SSC solution and 10% SDS.Purified library was denatured from beads with 0.1 N NaOH, and was amplified using 0.1uM P5/P7 universal primers in the presence of Phusion® (NEB) G-C buffered master mix (NEB).Amplification program was as follows: Initial denaturation at 95 o C for 30s, 30 cycles of denaturation (10s, 95 o C), annealing (30s, 55 o C) and extension (30s, 72 o C).Final extension was for 5min at 72 o C. In all cases, cDNA libraries were purified using SPRISelect® magnetic beads (Beckman, Indianapolis, IN).Library size distribution and concentration were determined as described above before normalization and sequencing.Files containing primer/probe design are available at study repository: https://data.cdc.gov/Models/Measles-Case-and-Genetic-Metadata-Operation-Allies/b8tpjsmh. WGS Quality Control and Assembly to Produce Consensus Sequences Paired-end read sets were concatenated from separate sequencer lanes and were adaptor and qualitytrimmed using Trimmomatic v.0.39, using a sliding window of four bases and an average quality cutoff of 15; retained reads below 20 bases in remaining length were discarded.Trimmed reads (paired and singlet) reads were assembled de novo using SPAdes v.3.15.4.Contigs were aligned to reference sequence AF266287 (measles genotype A) using Mummer (nucmer) 4.0.0, and a preliminary de novo scaffold was generated using the assembly.pyorder_and_orient utility of viral-ngs.A more comprehensive alignment scaffold incorporating reference bases was then generated using the assembly.pyimpute_from_reference utility of viral-ngs.Trimmed read sets were realigned to this preliminary scaffold using Bowtie v.2.4.5 under "-very-sensitivelocal" presets, then sorted using PicardTools SortSam v.2.5 and deduplicated using PicardTools MarkDuplicates v.2.5.Two local indel realignment passes were performed using RealignerTargetCreator and IndelRealigner utilities of GATK v.3.7.Consensus bases were called from refined alignments using the UnifiedGenotyper in GATK v.3.7,producing consensus calls for any available pileup majority (base/indel) observed from a minimum of 10x read coverage.While covered at below a depth of 10 reads, two specimens (MVs/Wisconsin.USA/37.21/4and MVs/Wisconsin.USA/38.21)contained small regions of base ambiguity in the MF-NCR region, and Sanger contigs were generated directly from specimen TNA extracts to supply those base calls.Final annotations for NCBI submission were performed using VADR v.1.4.1.Human read content was stripped using Kraken 2 before upload of untrimmed fastqs to SRA.Sequence assembly pipeline was documented and controlled using Snakemake v.7.3.0,pipeline is available from the authors in this format or at the manuscript data repository: https://data.cdc.gov/Models/Measles-Case-and-Genetic-Metadata-Operation-Allies/b8tp-jsmh.Illumina reads, consensus assemblies, and Sanger contigs are contained in NCBI Bioproject PRJNA869081. 
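For orientation, the sketch below strings together the main command-line steps named above (read trimming, de novo assembly, and re-alignment to the scaffold) as they might be invoked from Python; the parameter strings mirror values given in the text (4-base sliding window, quality cutoff 15, minimum length 20, "--very-sensitive-local"), but the file names and exact option syntax are assumptions and do not reproduce the authors' Snakemake pipeline.

```python
# Illustrative orchestration (not the authors' pipeline) of the main assembly steps above.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

r1, r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

# 1. Adaptor/quality trimming: 4-base sliding window, Q15 cutoff, drop reads <20 bases.
run(["trimmomatic", "PE", r1, r2,
     "trim_R1.fastq.gz", "unpaired_R1.fastq.gz",
     "trim_R2.fastq.gz", "unpaired_R2.fastq.gz",
     "SLIDINGWINDOW:4:15", "MINLEN:20"])

# 2. De novo assembly of trimmed reads with SPAdes.
run(["spades.py", "-1", "trim_R1.fastq.gz", "-2", "trim_R2.fastq.gz", "-o", "spades_out"])

# 3. Re-align trimmed reads to the ordered scaffold with Bowtie2 (very sensitive, local),
#    before deduplication, indel realignment, and consensus calling as described above.
run(["bowtie2-build", "scaffold.fasta", "scaffold_index"])
run(["bowtie2", "--very-sensitive-local", "-x", "scaffold_index",
     "-1", "trim_R1.fastq.gz", "-2", "trim_R2.fastq.gz", "-S", "aligned.sam"])
```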
Common Phylogenetic and Graphical Methods (WGS and N450) Public repository sequences were downloaded from NCBI on May 03, 2022, searching for the terms "Measles Virus Genotype B3"; sequences were excluded if the N-L span was not represented, if degenerate bases were present, if the sequence was tagged as a vaccine, isolated, laboratory-adapted, or obtained from an encephalitic measles disease process (e.g.measles inclusion body encephalitis (MIBE), subacute sclerosing panencephalitis (SSPE)).Sequences were aligned using MAFFT v.7.4.1, and suitability of alignment was visually inspected.Public repository accessions for sequences and sequencing data used in phylogenetic analyses is available in the study data repository at https://data.cdc.gov/Models/Measles-Case-and-Genetic-Metadata-Operation-Allies/b8tp-jsmh.The resulting alignment was prepared alongside reasonable tip dates for the taxa represented, which for publicly available sequences is the date of the centroid (Thursday) for the epidemiological week shown in WHO nomenclature.For sequences newly obtained in this study, the rash date was used.Bayesian inference was performed using BEAST v.2.6.3 21.In all cases, sampling runs were performed using a 200 million step chain, comprised of four independently sampled chains of 50 million steps apiece.Samples were drawn at 10,000-step intervals (20,000 trees per run in total for each experiment).10 percent burn in was discarded.Also in all cases, a Bayesian skyline coalescent tree prior was used, 22 with default pop.sizes=5 and group.sizes=5.A maximum clade credibility tree was prepared from the best-fit model using mean node heights and then annotated with case metadata using R v.4.1.2with ggtree v.3.0.4 and treeio v.1.16.2. Specific Phylogenetic Methods (WGS) Base substitution models were selected using the modeltest function of IQ-Tree v.2.1.2, 23using separate input partitions for (1) a concatenation of all individual coding sequence regions (CDS) and (2) a concatenation of all intergenic regions (NCR).The alignment was limited to the extrema of the nucleoprotein (N) and large (L) protein gene cassettes (N-L span or WGS-t) to prevent use of end artifacts (exclusion of header and trailer).Recommended base substitution models were TIM+F+G4 for concatenated CDSs, and TIM3+F+G4 for concatenated NCRs (BIC score: 58886•030 in aggregate).These base substitution models were used in all later analysis. Bayesian inference was performed using BEAST v.2.6.3, 21using a modification of the recommended partitioning scheme from previous tests.In this case, a concatenation of CDS was used as previously described, but all noncoding content in the N-L span was concatenated and considered simultaneously.Molecular clock models were compared for suitability using the nested sampling 24 technique to determine marginal likelihoods and Bayes factor comparison of fit.In all cases, nested sampling runs were performed as 32 independent runs, each using one particle, subchain length=10,000 and epsilon=1•0 x 10 6 . Fit and suitability of molecular clock models for Bayesian phylogenetic inference. 
Data are derived from nested sampling estimates of marginal likelihood, from which Bayes factor differences are calculated from the model of least-adequate fit, considering standard deviations.In all cases, base substitution models are unlinked (considered separately) across partitions.However, there was considerable difficulty in achieving model convergence for models in which the CDS and NCR partition clocks were unlinked.Considering this, it was interpreted that NCR partitions in the data set lacked sufficient clocklikeness to provide stable posterior estimates under either strict or relaxed-clock conditions.The model selected for final presentation was one in which CDS and NCR partitions both used unlinked base substitution models as described above, alongside a linked, strict clock model for CDS and NCR partitions (Marginal likelihood = -29233•742, S.D.= 3•498).Considering these results, the best-fit, partition-linked clock model was used for all Bayesian inference in the study (highlighted in grey, table below).Specific Phylogenetic Methods (N450) N450 sequence windows were obtained by RT-PCR and Sanger chemistry when specimen genotyping was originally performed.WGS consensus contigs (previously described) were individually aligned to corresponding Sanger fragments to ascertain divergence of N450 from WGS, if any.Two sequences were available for case 16 (nasopharyngeal and oropharyngeal swab) and were identical; MVs/Wisconsin.USA/39.21/8/1 was retained for analyses while MVs/Wisconsin.USA/39.21/8/2 was excluded from tree inference as a case duplicate.All N450 sequences (n=44 N450 sequences from 43 sequenced cases, 43 sequences used after duplicate exclusion) were identical to corresponding windows assembled by WGS, if available (n=42 WGS sequences from 43 sequenced cases, 41 sequences were used after duplicate exclusion).To summarize, the N450 phylogeny presented herein contains two sequences not contained in the WGS inference. Marginal Newly obtained N450 sequences were aligned with equivalent genome windows extracted from whole MeV B3 sequences (n=116 for entire set) obtained from NCBI, as was performed for WGS inference.Public repository accessions for sequences and sequencing data used in phylogenetic analyses is available in the study data repository at https://data.cdc.gov/Models/Measles-Case-and-Genetic-Metadata-Operation-Allies/b8tpjsmh.Base substitution model for N450 was selected using the modeltest function of IQ-Tree v.2.1.2, 23using the entire N450 segment, without partitioning.Recommended base substitution model was TN+F (BIC score: 2071•812).This substitution model was used to perform Bayesian tree inference as described above in common methods. Identification of infector-infectee pairs and of unrelated patients Three high confidence infector-infectee pairs (1→4, 3→8, and 30→28) were identified.Patients 1 and 4 were in Germany together before coming to the U.S., shared the same incoming international flight (Flight 3), as well as the same domestic flight (from Dulles International Airport (IAD) to Ft. McCoy), while patient 1 was infectious.Patients 3 and 8 shared the same barrack (Barrack B) at Ft. McCoy while patient 3 was infectious and were the only two patients identified in that particular barrack.Patients 30 and 28 were members of the same family (Family 4) who traveled together from Spain to IAD (Flight 5) and onto Hotel A. The difference in days between the rash onsets of patients 1 and 4, 3 and 8, and 30 and 28 were 11, 9, and 11 days, respectively. 
Six case pairs were known to be unrelated (1⥇35, 24⥇35, 25⥇35, 30⥇35, 31⥇35, 47⥇35).These pairs were used as a control set for genetic comparisons.Specifically, these cases were unambiguously known not to be part of the same measles virus transmission chain within the U.S.; i.e., they did not infect each other in the U.S. Case 35 was reported in MCB Quantico, was among the last few measles cases reported during OAW and was one of two patients who arrived via the Philadelphia International Airport (PHL).Cases 1, 24, 25, 30, 31, and 47 were reported in Ft.McCoy, Hotel A, and Ft.Pickett, were among the earliest cases reported during OAW, arrived via IAD, and had onset of rash >21 days before the rash onset of case 35. Context Currently, surveillance of measles virus is performed by acquisition of partial-genome sequencing windows (N450 genotyping), and these sequences are universally analyzed across international surveillance networks to substantiate claims of measles elimination.Elimination is assigned a formal definition by WHO, specifying the absence of continuous transmission lasting greater than one year in the presence of a wellfunctioning surveillance system. 25Since the diversity of circulating measles sequences is decreasing, postelimination scenarios are envisioned in which multiple transmission chains would not be genetically distinguishable by N450, possibly leading to overestimates of transmission continuity. 26Consequently, there is a perceived research need in the measles surveillance community to assess improvement to transmission chain discrimination offered by acquisition of WGS data, keeping in mind tradeoffs of expense inherent to implementation of WGS laboratory methods. 27,28Resolution of phylogenetic models is expressed as a combination of (1) certainty of tree topology or grouping patterns, (2) certainty of molecular clock estimates, and (3) certainty of node dating when molecular clocks are used.We report improvements to these parameters in the study data (WGS) when compared to a model inferred from N450 only.Molecular surveillance of measles virus in Afghanistan is infrequently reported (94 sequences in MeaNS2 (who-gmrln.org/means2)2011-present, accessed December 20, 2022), and so the dynamics of transmission within Afghanistan are poorly understood. Interpretation of Molecular Clocks By contrast to the tree generated from WGS, resolution was lower for both (1) temporal basis (wider HPD intervals for node dating) and (2) tree topology (lower posterior node probabilities and fewer highly supported nodes (WGS: n=6, N450: n=4, at posterior>=0•9)).It is noteworthy that the 95% HPD intervals for molecular clock rates do not overlap for WGS and N450 inference as performed here.We interpret this to mean that, while a resolvable evolutionary process is observed when using only the partial N450 window, this inference artificially compresses the timescale of the true evolutionary rate of the virus, most likely a result of the exclusion of the considerable mutational evidence available in the noncoding regions.N450 is extensively used in surveillance because it is known to capture intra-type diversity of MeV but substitution rates (substitutions per site of sequence) for the entire N-L sequence, in this case, provide apparent evidence for more extensive conservation over time than are shown by N450. 
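The following sketch illustrates, under stated assumptions, why the N-L span is expected to discriminate transmission chains better than N450 alone: it converts a molecular clock rate into the expected number of substitutions accumulated over one serial interval for each sequence window. The clock rate and the N-L span length used here are illustrative assumptions, not the estimates obtained in this study.

```python
# Illustrative only: the clock rate and the N-L span length are assumptions,
# not the posterior estimates reported in this study.
clock_rate = 7.5e-4          # substitutions/site/year (typical order of magnitude for MeV, assumed)
years = 11 / 365.0           # one ~11-day serial interval between linked cases

for name, length_nt in [("N450", 450), ("N-L span (approx.)", 15_800)]:
    expected_subs = clock_rate * length_nt * years
    print(f"{name:>20}: ~{expected_subs:.2f} expected substitutions per transmission generation")
# N450 accumulates essentially no substitutions per generation, whereas the
# full N-L span accumulates a measurable fraction, so chains become
# distinguishable over a few generations.
```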
Clock rates (substitutions per site per year) for measles virus are within those expected for respiratory viruses; for measles virus these rates are infrequently reported; previous studies are cited in the table below. This study differs from previous work in several respects.First, specimens were exclusively from oropharyngeal or nasopharyngeal swab extracts, without intervening viral culture.Second, a metagenomic sequencing approach was used, meaning that read sets were produced without intervening PCR amplification.Lastly, a partitioned genetic model is implemented to infer separate base substitution rates from coding and noncoding genome regions.Current study is highlighted in gray. Specimen Type Mean Clock Rate (Subs/Site/Year) Phylogenetic Model Settings Reference Oral fluids, cell culture isolate Interpretation of Topology In both WGS and N450 trees, there is some topological uncertainty in the split of main clusters 1 and 3 (cluster 3 = single case) from cluster 2 (WGS posterior = 0•693, N450 posterior = 0•630), while support was somewhat higher for WGS.In both N450 and WGS trees, there was complete support (WGS posterior = 1•000, N450 posterior = 1•000) for the internal node representing the common ancestor of all OAW sequences and a closely related B3 strain sequenced in 2019, in California (MVs/California.USA/50•19[B3]).As mentioned, the resulting tree was observed to be better-resolved than that produced using the N450 Sanger windows.Considerable overlap of time intervals was observed for cluster-defining internal nodes in recent divergence of N450 sequences.We chiefly interpret these features to minimally support the existence of three circulating lineages predating importation to the United States with these evacuee cases.Comparison of N450 and WGS trees is shown in Fig. 
S7.The distribution of the number of patients aged less than one year as predicted by the models are plotted comparing the base-case scenario (model results using MMR vaccination uptake as it occurred, which included infants aged 6-11 months) and a scenario in which infants aged 6-11 months were not vaccinated, at each of the five military bases and the isolation and quarantine hotel (Hotel A), where measles patients were reported.The red and blue vertical dashed lines represent the mean number of measles cases in the base case scenario and the no-vaccination scenario, respectively.As shown in the main text for whole genome sequencing (Figure 5), the tree (leftmost) contains the inset group of the larger B3 tree with sequences obtained during Operation Allies Welcome study (n=43), with a closely related B3 strain (MVs/California.USA/50.19[B3])shown for visual orientation.Metadata is shown as before for WGS (genetic cluster, arrival location, shared flight, importations status, barrack or bay, and family group).High-confidence transmission events are marked by red arrows and transmissions of nil-confidence are shown as grey arrows.Sequences for which N450 windows were available, but not included in the WGS tree are marked with a purple triangle on the branch tip.The time interval (95% HPD) for the origin of the common ancestor of all OAW specimens is shown with dashed lines for reference.Briefly, (1) the temporal scale of the trees differs considerably, with wider date bars in N450 preventing discrimination of the cluster 1-2 divergence from that of 1 and 3; these dates are narrower in the WGS tree.(2) More internal nodes (n=6) exceed 0•9 posterior in the WGS result, with n=4 internal nodes supported at that level in the N450 phylogeny.Sequences available for N450 phylogenetic inference that were not available as WGS are denoted as purple triangles.In both panels, the time range (95% HPD) representing the common ancestor of all OAW specimens is shown with dashed vertical lines.S5.Base substitution (WGS hamming distance) for relevant exposure groups.For each exposure group, accumulation of base substitutions is depicted in reference to the first ordinal case (green).Table S6.Base substitution (N450 hamming distance) for relevant exposure groups.For each exposure group, accumulation of base substitutions is depicted in reference to the first ordinal case (green). Table S7. Public repository accessions for sequences and sequencing data used in phylogenetic analyses. Table is available in the study data repository at https://data.cdc.gov/Models/Measles-Case-and-Genetic-Metadata-Operation-Allies/b8tp-jsmh. Figure S1 . Figure S1.Age distribution of measles cases and change in age distribution before and after the midpoint of the outbreak during Operation Allies Welcome, August-October 2021.Panel A shows a histogram of measles cases according to age in years, color-coded by age-group.The length of the horizontal bars is equivalent to the proportion of cases in each age-group category.Panel B shows the change in the age distribution of cases before and after the midpoint of the outbreak (September 24, 2021).Before September 24, 25 cases were reported, of which 21 (84%) occurred among those aged 12 months or older, 4 (16%) among those aged 6-11 months, and none among those aged <6 months.After September 24, 22 cases were reported, of which 9 (41%) occurred among those aged 12 months or older, 8 (36%) among those 6-11 months, and 5 (23%) among those aged <6 months. Figure S2 . 
Figure S2.Location-specific measles attack rates under different vaccination scenarios.Shown are density ridgeline plots of the measles attack rates at each of the five military bases (Ft.McCoy, MCB Quantico, Holloman Air Force Base, JB McGuire-Dix-Lakehurst, and Ft.Pickett), and the isolation and quarantine hotel,Hotel A, where measles cases were reported.In each panel, the base-case scenario (model results using vaccination uptake as it occurred) is compared to scenarios in which the age of MMR vaccine administration was not lowered to 6 months, vaccination was delayed 7 days, and vaccination was delayed 14 days.The vertical lines represent the 25 th , 50 th , and 75 th quantiles. Figure S3 . Figure S3.Location-specific number of measles cases aged less than one year in scenarios in which the age of MMR vaccine administration was and was not lowered to 6 months.The distribution of the number of patients aged less than one year as predicted by the models are plotted comparing the base-case scenario (model results using MMR vaccination uptake as it occurred, which included infants aged 6-11 months) and a scenario in which infants aged 6-11 months were not vaccinated, at each of the five military bases and the isolation and quarantine hotel (Hotel A), where measles patients were reported.The red and blue vertical dashed lines represent the mean number of measles cases in the base case scenario and the no-vaccination scenario, respectively. Figure S4 . Figure S4.Location-specific number of measles cases in scenarios in which an additional importation occurred at different time points during the pause on incoming international flights.Density ridgeline plots of the number of patients caused by a single additional importation arriving on September 10 (the day flights were paused), September 17 (one week later), September 24 (two weeks later), and October 5 (the day the flights resumed), at each of the five military bases and the isolation and quarantine hotel (Hotel A), where cases of measles were reported.The vertical lines represent the median. Figure S5 . Figure S5.Location-specific measles attack rates under various combined intervention scenarios.At each of the five military bases where measles cases were reported, boxplots of measles attack rates as predicted by the model for different combinations of interventions are compared to the base-case scenario (interventions as they occurred).Combinations of interventions include a 7-or 14-day delay plus an additional importation on September 17 and/or not lowering the age of MMR vaccination to 6-11-months.The median number of measles cases and the interquartile range (IQR) are listed above each boxplot. Figure S6 . 
Figure S6.Phylogenetic Tree from N450 genotyping windows.As shown in the main text for whole genome sequencing (Figure5), the tree (leftmost) contains the inset group of the larger B3 tree with sequences obtained during Operation Allies Welcome study (n=43), with a closely related B3 strain (MVs/California.USA/50.19[B3])shown for visual orientation.Metadata is shown as before for WGS (genetic cluster, arrival location, shared flight, importations status, barrack or bay, and family group).High-confidence transmission events are marked by red arrows and transmissions of nil-confidence are shown as grey arrows.Sequences for which N450 windows were available, but not included in the WGS tree are marked with a purple triangle on the branch tip.The time interval (95% HPD) for the origin of the common ancestor of all OAW specimens is shown with dashed lines for reference. Figure S7 . Figure S7.Comparison of measles virus genotype B3 phylogenies derived from N450 genotyping windows and WGS.N450 (A) and WGS (B) subtrees (visualized as cutout from entire B3 set used in tree construction) are visualized for comparison of tree shape, node dating (purple bars), and certainty of tree shape (black dot is posterior >= 0•9).Trees are identical to those visualized in Fig S6 (N450) and Fig. 5 (WGS; main text).Briefly, (1) the temporal scale of the trees differs considerably, with wider date bars in N450 preventing discrimination of the cluster 1-2 divergence from that of 1 and 3; these dates are narrower in the WGS tree.(2) More internal nodes (n=6) exceed 0•9 posterior in the WGS result, with n=4 internal nodes supported at that level in the N450 phylogeny.Sequences available for N450 phylogenetic inference that were not available as WGS are denoted as purple triangles.In both panels, the time range (95% HPD) representing the common ancestor of all OAW specimens is shown with dashed vertical lines. Operation Allies Welcome on September 24, 2021. Number of pregnant women at each base as of October 2, 2021 as provided by DoD.Women of childbearing age (aged 12-50 years) were screened for pregnancy during the mass MMR vaccination campaigns to assess for vaccine eligibility and to facilitate early prenatal care. 𝑹 𝒆 estimates based on the Wallinga-Teunis method by setting and 𝑹 𝒐 derivations Setting Re * Proportion susceptible s R0 § sensitivity analyses were performed around the estimated 0 values.Early estimates of Re for all locations, Ft.McCoy, and Barrack A were obtained during 7-day windows ending on transmission day 8, 11, and 8, respectively. * § Calculated as Re/s Stratification of the model and age dependent susceptibilities 18MCV2 was introduced in Afghanistan in 2004.^Because12(close to two-thirds) of the 19 SIA coverage estimates were above 95%, including 8 estimates above 100%, we adjusted these estimates by a factor of 81% based on a single available post-campaign assessment showing a coverage of 92% for an SIA reporting a coverage of 113% based on administrative data.ǂInfantsweredividedinto two age-groups (<6 months and 6-11 months of age) based on our model strata.Because MCV1 is given starting at 9 months of age, half of the MCV1 coverage reported in 2021 was applied to infants aged 6-11 months.Maternally derived measles immunity among infants was assumed to be 50% (66•67% of infants aged <6 months and 33•33% of infants aged 6-11 months were assumed to be immune).17ǂǂBasedon data from Freidl et al. 
showing 88% of adult Afghan asylum seekers born between 1998 and 1971 in the Netherlands were seropositive for measles.18Themodel is divided into five strata according to MMR vaccine eligibility, i.e., based on four age groups (<6 months of age, 6-11 months of age, 12 months-11 years of age, and >12 years of age) and pregnancy status (>12 years of age and not pregnant and >12 years of age and pregnant).Complete information on the age distribution of Afghan evacuees were available for MCB Quantico and Hotel A; these distributions were similar to the age distribution of the population in Afghanistan in 2021, below.We applied the median of these proportions to base-specific populations to determine age-specific subpopulations for the stratified model.The number of pregnant women at each base as of October 2, 2021 was used to inform the last stratum (≥12 years of age and pregnant). * Age is presented in years unless otherwise specified.* of Afghan evacuees in each of four age groups based on three different data sources and median values used for vaccine-eligibility calculations, derivation of age-specific populations for the simulation analyses, and estimation of overall susceptibility. Age group Data Source Median values used in analyses UN Department of Economic and Social Affairs, Population Division The measles immunity profile of Afghanistan was used to generate weighted averages of initial susceptibility in each of our four age strata (<6 months, 6-11 months, 1-11 years, ≥12 years), i.e., for age strata spanning more than a single age year (age strata 1-11 years and ≥12 years), the overall susceptibility of the particular strata was the weighted average of the susceptibility based on the relative population size of the single age years within the age strata, as below. Proportions of the population estimated to be susceptible to measles in each of four age groups. Because the number of birth cohorts included in the infant age categories do not span more than 1 year, the susceptibility of infants aged <6 months and 6-11 months is not weighted. * Seroprevalence data from a subset of Afghan evacuees at Ft. McCoy tested for measles IgG with a commercial enzyme immunoassay Test result Age group Percent <6 months , we show the IgG antibody results by age group:
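A minimal sketch of the two calculations described above, using placeholder inputs rather than the study values: derivation of R0 from the early effective reproduction number and the susceptible fraction (R0 = Re/s, as in the table footnote), and population-weighted averaging of single-year susceptibilities within an age stratum.

```python
# R0 from the early effective reproduction number and susceptible fraction
# (R0 = Re / s).  Inputs below are placeholders, not the study estimates.
def basic_reproduction_number(re_estimate: float, susceptible_fraction: float) -> float:
    return re_estimate / susceptible_fraction

print(basic_reproduction_number(2.5, 0.2))   # hypothetical Re and s

# Weighted-average susceptibility for an age stratum spanning several single
# ages, weighted by the relative population size of each single-year cohort.
def stratum_susceptibility(susceptibility_by_age: dict, population_by_age: dict) -> float:
    total = sum(population_by_age.values())
    return sum(susceptibility_by_age[a] * population_by_age[a] / total
               for a in population_by_age)

# hypothetical single-year inputs for a 1-11-year stratum
susc = {age: 0.10 if age < 5 else 0.05 for age in range(1, 12)}
pop = {age: 1000 for age in range(1, 12)}
print(round(stratum_susceptibility(susc, pop), 3))
```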
2023-07-29T15:03:03.233Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "e896abe751e135d92a7cd1adb1242f1edade1cbc", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S2468266723001305/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a33dce1c2ddadf3702c426c9862921508d4dc989", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
58011457
pes2o/s2orc
v3-fos-license
Prevalence, genotype and antimicrobial resistance of Clostridium difficile isolates from healthy pets in Eastern China Background Clostridium difficile (C. difficile) is a main cause of antibiotic-associated diarrhoea in humans. Several studies have been performed to reveal the prevalence rate of C. difficile in cats and dogs. However, little is known about the epidemiology of C. difficile in healthy pets in China. This study aimed to assess the burden of C. difficile shedding by healthy dogs and cats in China. Furthermore, the genetic diversity and antimicrobial susceptibility patterns of the recovered isolates were determined. Methods A total of 175 faecal samples were collected from 146 healthy dogs and 29 cats. C. difficile strains were isolated and identified from the feces of these pets. The characterized C. difficile strains were typed by multilocus sequence typing (MLST), and the MICs of the isolates were determined against ampicillin, clindamycin, tetracycline, moxifloxacin, chloramphenicol, cefoxitin, metronidazole and vancomycin by the agar dilution method. Results Overall, 3 faecal samples (1.7%) were C. difficile culture positive. One sample (0.7%) from a dog was C. difficile culture positive, while two cats (7.0%) yielded positive cultures. The prevalence rate differed significantly between cats and dogs. These isolates were typed into 3 MLST genotypes and were susceptible to chloramphenicol, tetracycline, metronidazole and moxifloxacin and resistant to ampicillin, clindamycin and cefoxitin. Notably, one strain, D141–1, which was resistant to three kinds of antibiotics and carried toxin genes, was recovered in the faeces of a healthy dog. Conclusion Our results suggest that common pets may be a source of pathogenic C. difficile, indicating that household transmission of C. difficile from pets to humans can not be excluded. Background Clostridium difficile (C. difficile) is a Gram-positive spore-forming anaerobic bacillus thatis a well-known pathogen causing pseudomembranous colitis and antibioticassociated diarrhoea [1]. Clostridium difficile infection (CDI) is also a common cause of enteritis in different animal species [2]. C. difficile enterotoxin A (TcdA) and cytotoxin B (TcdB) are mainly responsible for its pathogenesis [3]. In addition, the actin perturbing binary toxin (CDT) was found as an additional toxin in 4-12% of toxigenic C. difficile. This additional toxin is composed of two independent components, a catalytic domain (CDTа) and a binding domain (CDTβ) [4]. This toxin has been reported to be independently associated with recurrent CDI [5]. The prevalence of CDI has increased globally due to inappropriate use of antibiotics. Many articles have reported the molecular epidemiology of C. difficile isolated from patients in hospitals, which has been studied extensively as an external source of CDI [6]. Additionally, some human pathogenic PCR ribotypes are found in other mammals, such as pigs, horses, and cattle. Food contamination with pathogenic C. difficile has been demonstrated in previous reports [7][8][9]. Although articles have focused on C. difficile isolated from animals, these studies have mainly aimed to reveal the possible transmission of C. difficile from animal species used for food by humans, such as seafood, beef andpork. [7][8][9][10]. However, the zoonotic potential of this pathogen remains controversial.Only a few articles have investigated the molecular epidemiology of C. difficile isolated from pets [11][12][13], and little is known about C. 
difficile in common pets in good health in China. Therefore, this study focused on the epidemiology of C. difficile isolated from the most common pets in China, dogs and cats, to potentially indicate another important route of zoonotic transmission of C. difficile other than foodborne infection. This study was performed to assess the burden of C. difficile shedding by healthy dogs and cats in Eastern China and reveal the genetic diversity and antimicrobial susceptibility patterns of isolates recovered from these healthy cats and dogs. Three different methods were used to precisely identify C. difficile isolates, and multilocus sequence typing (MLST) analysis was performed to type C. difficile isolates. To explore whether C. difficile carried by pets poses a threat to humanhealth., multiplex PCR was used to detect of toxin genes in C. difficile. Additionally, the antimicrobial susceptibility of these C. difficile isolates was determined. Sample collection Faecal samples were collected from adult pets in pet shops, located in downtown or rural areas of Xuzhou City, Jiangsu Province, China. Xuzhou is located at latitudes of 33°43′~34°58' North and longitudes of 116°22′ 118°40′ East. The average annual temperature is 14°C, and the average precipitation is 800 mm. A total of 18 pet shops were included in this study. Solid or semi-solid faecal samples were obtained from individuals of the most popular domestic species of pets that were adult, non-diarrhoeic and clinically healthy. Pets were not included if they had been exposed to antibiotics in the last 3 months before sample collection., A total of 174 faecal samples were collected, including145 samples from dogs and 29 samples from cats. The study was approved by the ethics committee of Xuzhou Medical University. All animal experiments were approved by the Animal Care and Use Committee of Xuzhou Medical University. Isolation and identification of C. difficile For enrichment cultivation of C. difficile, each faecal sample was introduced into 5 mL of brain heart infusion broth (BHI) (CM1135B, Oxoid) supplemented with 1.0 g/L taurocholic acid sodium salt hydrate (T4009, Sigma) and C. difficile selective supplement (SR0096, Oxoid) [8]. After 7 days of incubation at 37°C in an anaerobic workstation (DG250, Don Whitley Scientific), alcohol-shock was performed by mixing homogenized broth-culture with an equal volume of ethanol (96%) for 50 min at room temperature. After centrifugation, the pellet was collected and spread onto cycloserine cefoxitin fructose agar taurocholate agar plates (CCFAT) [14]. After the plates were incubated anaerobically at 37°C for 48 h, the presumptive colonies on the plates that demonstrated a typical morphology (flat, irregular yellowish and ground-glass appearance) were selected and subcultured on BHI agar plates with C. difficile selective supplement. After 48 h at 37°C for 48 h, the presumptive isolates were identified by using a C. difficile latex agglutination rapid test kit (DR1107, Oxoid) for the detection of C. difficile antigen. In addition, the presumptive isolates were subcultured in BHI broth with C. difficile selective supplement for 24 h to collect bacterial pellets for DNA extraction and PCR confirmation. For DNA extraction, the cultures were centrifuged at 13400 rpm for 5 min to collect bacterial pellets for DNA extraction using. DNA was extracted from the bacterial pellets according to the protocol provided in the QIAamp DNA Mini Kit (51,304, QIAGEN). Further identification of C. 
difficile was performed by molecular techniques, detection of a species-specific internal fragment of tpi by PCR, and sequencing of 16S rDNA as described previously [15,16]. Previously reported primers targeting tpi and 16S rDNA were used to confirm presumptive isolates [15,16]. The tpi forward primer was tpi-F (AAAGAAGCTACTAA GGGTACAAA), and the tpi reverse primer was tpi-R (CATAATATTGGGTCTATTCCTAC). PCR-positive isolates were further confirmed by amplification and sequencing of 16S rDNA. The primers for amplification of 16S rDNA were PS13 (GGAGGCAGCAGTGGGGAAT A) and PS14 (TGACGGGCGGTGTGTACAAG). All PCRs were performed in an Applied Biosystems thermal cycler (Applied Biosystems 2720, Applied Biosystems) in a final volume of 20 μL/reaction. The reaction mixture consisted of 10 μL of 2╳Taq Plus PCR MasterMix (KT205, TIANGEN), 0.2 μM each primer and 1 μL of template DNA. Unused swabs and tubes (tool and container for sample collecting) were used as the control of lab contamination and included as samples to perform isolation of C. difficile, DNA extraction and PCR. Multiplex PCR for the detection of toxin genes A 5-plex PCR was performed to detect the tcdA, tcdB, cdtA, and cdtB genes and 16S rDNA [15]. C. difficile strain ST1/RT027 (tcdA + , tcdB + , ctdA + , ctdB + ) was used as a positive control for the amplification. The conditions and primers for the PCRs were as previously reported with several modifications [15,16]. The PCR assay was performed at 94°C for 10 min, followed by 32 cycles of 94°C for 50 s, 57°C for 40 s, and 72°C for 50 s, and a final extension at 72°C for 10 min. Multilocus sequence typing (MLST) analysis All of the C. difficile isolates were further characterized by MLST. MLST was performed using seven housekeeping genes (adk, atpA, dxr, glyA, recA, sodA and tpi) to compare theisolates from pets with human strains [17]. The amplification conditions and oligonucleotide primers for MLST were used as previously reported by Griffiths et al. [17]. Seven PCR products were obtained for each strain and sequenced using PCR forward and reverse primers. The sequences of the allele were submitted to the MLST database homepage. The assignment of the allele numbers, clades and sequence types (STs) wereperformed using the C. difficile MLST website (http://pubmlst.org/cdifficile/). The programme MEGA, version 4 (Molecular Evolutionary Genetics Analysis [http://www.megasoftware.net/]), was used to construct a phylogenetic tree by the neighbour-joining method. Statistical analysis Prevalence rates were compared by the χ 2 test with Yates' correction. All calculations were performed using Prism 5.0 (GraphPad Software, Inc. USA). A P-value < 0.05 was considered statistically significant. Isolation and identification of C. difficile Three of 175 faecal samples analysed were found to contain C. difficile. The isolation rate of C. difficile was 1.7% for the total faecal samples collected in this study. C. difficile was isolated from 2/29 cat faecal samples (7.0%) and 1/146 (0.7%) dog faecal samples The prevalence rates of C. difficile in cats and dogs differed significantly, suggesting that C. difficile recovery was associated with the pet species. Antibiotic susceptibility of C. difficile isolates There is increasing concern about the emergence of multi-drug resistant bacteria among household pets and the possible transmission of resistant strains between pets and their owners. Thus, the susceptibility patterns of the isolates to 8 antibiotics were determined ( Table 1). All C. 
difficile isolates characterized in this study were susceptible to chloramphenicol, tetracycline, metronidazole and moxifloxacin. Additionally, isolate C22-3 was resistant to vancomycin, while isolate D141-1 and C23-2 were susceptible. All of the C. difficile isolates displayed resistance to the other three antibiotics ampicillin, clindamycin and cefoxitin. The toxin gene profiles A 5-plex PCR was performed to detectfour C. difficile toxin genes tcdA, tcdB, cdtA and cdtB. 16S rDNA was used as an internal PCR positive control (Fig. 1). Our results showed that isolate D141-1 contained the toxin genes tcdA and tcdB but did notcarry the binary toxin genes cdtA and cdtB. No toxin genes were found in isolate C22-3 and C23-2 isolated from cats, ( Table 2). These results suggested the possibility of transmission of toxigenic C. difficile from pets to humans via contact. C. difficile MLST analysis MLST was performed to further assess the possibility of transmission of C. difficile between humans and pets by comparing the diversity of alleles among C. difficile strains in this study. All the 3 C. difficile isolates were typed by MLST, which showed that the 3 C. difficile isolates were assigned to different STs ( Table 2). The relationships among the three isolates in this study and between the isolates and other isolates reported previously were examined using phylogenetic analysis based on the sequences of seven housekeeping genes used in MLST as described previously [17]. The results revealed that all three isolates were clustered in Clade 1, ST3, ST15 and ST129 (Fig. 2). Isolates C22-3 and C23-2were assigned to ST-3 and ST-15 (Fig. 2). Isolates D141-1, which was found to carry the toxin genestcdA and tcdB, was assigned to ST-129. Discussion Our results revealed that faecal shedding of C. difficile is not common among healthy pets in Eastern China. The samples used in this study were collected from 18 pet shops. In pet shops, animals are kept in close contact with each other and may also be exposed to C. difficile from their handlers or visitors to the shop. Since the animals were kept in close proximity with each other, the low prevalence found in this study and the fact that no MLST types were shared between animals imply that the transmission of C. difficile is not common among animals. These results agree with observations in previous reports on the epidemiology of C. difficile in pets from Spanish veterinary teaching hospitals and veterinary clinics in the Madrid region [13,20]. In the current study, colonization and transient passage of C. difficile were not differentiated. In further studies, the repeat collection of stool samples could be performed to reveal whether the isolated C. difficile colonize in or transiently pass through pets' gut. The majority of samples were collected from dogs and the rest were from cats in this study, which may result in a bias of C. difficile prevalence in cats. Thus, due to the limited number of samples from cats, the results reported here may not completely represent the prevalence of C. difficile in cats in Eastern China. The age of the animal is important for C. difficile prevalence, since the pathogen has been isolated more frequently in the faecal samples of juvenile animals. In this study, faecal samples were collected from adult pets. In a future study, faecal samples from juvenile pets could be included, and an analysis of samples from juvenile and adult animals could be performed in a larger survey in China. 
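As a minimal sketch of the prevalence comparison described in the statistical analysis section, the snippet below runs a χ2 test with Yates' correction on the reported 2×2 culture counts (2 of 29 cats and 1 of 146 dogs positive) using SciPy; because the expected counts are small, a Fisher exact test is also shown as a commonly used alternative. This illustrates the procedure only and is not a re-analysis of the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table of C. difficile culture results, using the counts reported in the
# results section (2/29 cats positive, 1/146 dogs positive).
table = np.array([[2, 29 - 2],     # cats: positive, negative
                  [1, 146 - 1]])   # dogs: positive, negative

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)  # Yates' correction
odds_ratio, p_fisher = fisher_exact(table)  # often reported alongside when expected counts are small

print(f"chi-square (Yates) p = {p_chi2:.3f}; Fisher exact p = {p_fisher:.3f}")
print("expected counts:\n", expected)
```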
Notably, one isolate recovered from dog faeces, D141-1, was resistant to three kinds of antibiotics and carried toxin genes (tcdA and tcdB). Since there is intimate contact between humans and their pets, this result suggests the possibility of transmission of toxigenic C. difficile from pets to humans during contact with pets. Vancomycin and metronidazole have long been used as first-line drugs for the treatment of CDI [21,22]. In this study, one isolate displayed high-resistance to vancomycin (MIC> 8 μg/ml). All isolates were susceptible to metronidazole (Table 1). A few isolates with low resistance or reduced susceptibility to vancomycin have been were reported in China [23][24][25]. All C. difficile isolates in this study showed resistance to clindamycin and cefoxitin, similar to clinical C. difficile in China and other countries [23,24,26,27]. All C. difficile isolates in this study exhibited high susceptibility to tetracycline, consistent with clinical isolates in China [23]. Few studies have tested the susceptibility of C. difficile to chloramphenicol and ampicilin. The results in this study showed that all three isolates were susceptible to chloramphenicol. Isolate D141-1, which contained toxin genes, showed resistance to 3 different antibiotics, including ampicillin, clindamycin and cefoxitin, which are known to promote CDI [28,29]. Some studies abroad have reported high resistence of C. difficile isolates from food or community patients, with resistance rates of 72.22 and 100%, respectively [28,30]. Some domestic studies have shown that C. difficile isolated from hospitals also displays high resistance to clindamycin (88.1%) and cefoxitin (86.67%) [18,23]. C. difficile can be classified into 5 major clades (Clade 1-5) and 2 novel clades (Clade 6, C-I) using MLST [31]. [22,32]. In the current study, the three isolates were assigned to ST-3, ST-15 and ST129, which have been identified in patients with diarrhoea in Eastern China [22,32]. These results indicate the potential for C. difficile to be transmitted from pets to humans. Since there is intimate contact between humans and their pets, the isolation of C. difficile from pets in this study suggests a possibility that humans may be colonized by C. difficile carried by pets, although faecal shedding of pathogenic C. difficile was not common among healthy dogs and cats. Given that Clade 1 contains the majority of human isolate STs [17,31], these results further imply that domesticated pets may be possible community reservoirs of C. difficile infection in humans, potentially due to the intimate contact between these pets and their owners. Conclusion There has been a lack of studies of C. difficile among animals in China. In this study, C. difficile isolates were recovered from the faeces of healthy pets in Eastern China. These results demonstrated that faecal shedding of pathogenic C. difficile is not common among healthy dogs and cats in Eastern China. The three isolates were assigned to ST-3, ST-15 and ST129, which have been identified from patients with diarrhoea in Eastern China. Among them, one isolate, D141-1, which contained toxin genes and was genotyped into ST129, was isolated from the faeces of one dog. This result implies a potential association between pets and diarrhoeal infection in humans. In addition, one isolate displaying high resistance to vancomycin was found. In summary, the results of the present study provide evidence that domestic pets may be a reservoir of human pathogenic C. 
difficile, and thus a threat from healthy companion pets to human health cannot be excluded. Nucleotide sequence accession number The sequences of 16S rDNA of C22-3, C23-2 and D141-1 have been deposited in the GenBank database under the accession numbers MK246185, MK246184 and MK246131, respectively.
2019-01-11T15:40:27.803Z
2019-01-11T00:00:00.000
{ "year": 2019, "sha1": "393f557c2a3268da6f5eb8f32e1cc12d984291b2", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-019-3678-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "393f557c2a3268da6f5eb8f32e1cc12d984291b2", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
257043764
pes2o/s2orc
v3-fos-license
An ultra-thin double-functional metasurface patch antenna for UHF RFID applications An ultra-thin double-functional metasurface patch antenna (MPA) was proposed, where it can operate not only in the antenna mode but also can simultaneously act as perfect absorber for normal incident waves, suitable for RFID applications in the 868 MHz band. The MPA structure consists of a typical coaxially-fed patch antenna merged, for the first time, with a metasurface absorber acting as artificial ground. A methodology for the unit-cell design of the metasurface is proposed followed by an equivalent circuit model analysis, which makes it possible to transform a low-loss (tanδ=0.0015\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$tan\delta =0.0015$$\end{document}) unit-cell with highly-reflective characteristics to a perfect absorber for normal incident waves. It is based on modifying the critical external coupling by properly introducing slits on the unit-cell, allowing to design an ultra-thin (λ0/225\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\lambda _0/225$$\end{document} at 868 MHz) and a very compact structure in comparison to previously developed designs. For validation purposes, the MPA was fabricated and its performances in both functional modes were characterized numerically and experimentally. It is demonstrated that merging the absorber with the patch not only allows obtaining a well-matched (|S11|<-30\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$|S_{11}|<-30$$\end{document} dB) antenna with an enhanced gain (by 175.6% compared to a typical patch) at the desired frequency but also leads to an overall thickness of only 2.5 mm (λ0/138.1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\lambda _0/138.1$$\end{document} at 868 MHz). With an absorber size limited to the MPA dimensions, a reasonable 1.3 dB reduction in powers reflected by the MPA was achieved compared to a similar size metallic sheet. Whilst having the lowest profile among the so far reported RFID readers, the proposed MPA can be conveniently fitted for example within the required volume of smart shelf RFID readers or used in portable RFID readers while being capable of mitigating multipath reflection issues and incorrect reading of RFID. www.nature.com/scientificreports/ Coaxially-fed printed patch antenna structure. Seeing that low-profile antennas are in demand, printed patch structures are favored. They, however, suffer from narrow bandwidth and low gain due to the excited surface wave, which traps and dissipates a portion of the radiated energy in the substrate. Thanks to the rapid development of design solutions to overcome those limits, discussed in a precedent section, printed patch antennas are among the most commonly used antenna types. 
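As a quick numerical check of the electrical-thickness figures quoted above (a λ0/225 unit-cell substrate and a 2.5 mm ≈ λ0/138.1 overall profile at 868 MHz), the short sketch below converts the design frequency to a free-space wavelength and evaluates both fractions.

```python
c = 299_792_458.0          # speed of light, m/s
f = 868e6                  # design frequency, Hz
lam0 = c / f               # free-space wavelength, ~0.345 m

print(f"lambda0 = {lam0 * 1e3:.1f} mm")
print(f"lambda0/225   = {lam0 / 225 * 1e3:.2f} mm   (unit-cell substrate thickness)")
print(f"lambda0/138.1 = {lam0 / 138.1 * 1e3:.2f} mm  (overall MPA thickness, ~2.5 mm)")
```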
Since the patch antenna is aimed here to be merged directly with the metasurface structure, due to the existence of the strong mutual coupling effect, the selection of the feeding technique to excite the antenna is crucial. Feeding structures such as microstrip and coplanar lines would lead to destruct the balance of the current distributions on the nearby metasurface unit-cells all along the feeding line connected to the patch. In order to avoid that problem, a coaxial feeding method was considered to excite the patch. Moreover, since linearly-polarized reader antennas typically have longer read range compared to circularly-polarized ones of the same gain, which is due to concentrating the emission in one vertical or horizontal plane rather than across two separated planes, a patch antenna with a linear polarization was chosen for the purpose of the study. Hence, a typical linearly-polarized patch antenna fed by a coaxial probe was designed to operate in the lower UHF RFID band. Figure 2 depicts the antenna geometry together with its parameters in terms of input reflection coefficient and realized gain. As observed, the antenna has a narrow 10 dB matching bandwidth ranging from 863 to 876 MHz, wide enough to cover the required UHF RFID band in Europe, and a peak realized gain of − 2.25 dBi. www.nature.com/scientificreports/ Metasurface absorber structure. Artificially engineered metasurfaces find extreme applications in radio, microwave and optical frequency regions as diverse as metalenses, high-gain antennas, energy harvesters, transmit/reflectarrays, absorbers, to name a few [41][42][43][44] . The basic structure of metasurface absorber consists of an array of unit-cells replicated a number of times in the x and y directions. The unit-cell comprises three layers: a metal patch, a dielectric, and a conductive ground plane. Here follows a comprehensive analysis for the unit-cell design by proposing a new methodology along with the equivalent circuit modeling. Design methodology. Metasurfaces control the amplitude, phase, and polarization of local fields by manipulating the shape, geometry, and arrangement of the sub-wavelength polarizable inclusions embedded in a host medium 43,44 . The interaction between the incident EM beam and the metasurface absorber can be explained by the induced currents generated from the excited electric and magnetic dipole moments on the unit-cells. A perfect metasurface absorber, assessed by the balance between the internal losses and the external coupling, prevents re-radiation by canceling out the amount of transmission and reflection power coefficinets 29 . Since the backside of the structure is completely covered by copper, transmission coefficient is zero, and hence, the absorptivity (A) can be calculated by A(ω 0 ) = 1 − |Ŵ(ω 0 )| 2 , where ω 0 and Ŵ are the angular frequency and reflection coefficient, respectively 41 . The total reachable absorption is substantially limited to the intrinsic losses of resonant unit-cells. Possible solutions to further improve the absorption characteristics include introducing additional internal losses by resistive materials or effective dielectric losses with appropriate impedance matchings 45 . Another alternative approach is to control the absorption mechanism by adjusting the external coupling rather than the internal losses 46 . The reflection coefficient Ŵ as a function of angular frequency ω 0 , internal losses 1 τ 0 , and external coupling 1 τ e is expressed as in (1). 
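Equation (1) is not reproduced in this extract. A standard one-port coupled-mode form for the reflection coefficient (rendered as "Ŵ" in this extract, i.e. Γ) that is consistent with the critical-coupling behaviour described below is given here as an assumption about the intended expression, not a verbatim reproduction:

$$\Gamma(\omega)=\frac{\dfrac{1}{\tau_e}-\dfrac{1}{\tau_0}+j\,(\omega-\omega_0)}{\dfrac{1}{\tau_e}+\dfrac{1}{\tau_0}+j\,(\omega-\omega_0)}$$

At resonance (ω = ω0) this form gives Γ = 0 when 1/τe = 1/τ0, a positive real Γ (0° reflection phase) in the overcoupled case, and a negative real Γ (180° reflection phase) in the undercoupled case, matching the description that follows; the sign convention in the paper's own equation (1) may differ.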
For 1 τ e = 1 τ 0 , no reflection is experienced at resonant frequency satisfying the condition of critical coupling ( |Ŵ| = 0 ). The latter can be achieved through a proper design of unit-cells. It should be stated that when 1 τ e > 1 τ 0 and 1 τ e < 1 τ 0 , the resonator is so-called to be overcoupled and undercoupled, respectively (see Fig. 7.5 in 46 ). In the overcoupled region, the rate of the power escape from the resonating structure (metasurface absorber at resonance) is greater than that of the internal dissipation, i.e. 1 τ e > 1 τ 0 . For the undercoupled case ( 1 τ e < 1 τ 0 ), the amount of internal losses is higher compared to power leakage from the structure. The reflection phase at the resonant frequency is 0 • and 180 • in the overcoupled and undercoupled regions, respectively. The above approach was recently implemented in 45 to design the absorber unit-cell in the millimeter waves (i.e. 30 GHz). However, the drawback is the dependency of the critical coupling condition satisfaction on the overall size of the unit-cell. In the current study, that method was further developed in the megahertz range with the aim to satisfy the critical coupling condition independently of the unit-cell dimensions. We intend to demonstrate that, on the same size and type of substrate, a simple unit-cell with highly-reflective characteristics can be transformed to a perfect absorber for normal incident waves. To achieve a perfect absorption by modifying the external coupling, the latter was controlled by properly creating rectangular slits on the metal patch, which will be explained hereafter. To assess the unit-cell behavior, a typical set of full-wave simulations were carried out using a normal incident plane wave with periodic boundary conditions parallel to the main axis. The unit-cell design is based on a simple square patch printed on an ultra-thin low-loss substrate with a low dielectric constant. As can be seen in the Smith-chart, shown in Fig. 3a, the unslotted patch lies within the overcoupled region at 1728 MHz ( Ŵ = 0.88∡0 • ) being far from the critical coupling conditions at the frequency of interest. Note that with the E polarization along the x-axis (as can be seen in Fig. 3b), the E maximum on the top surface of the unit-cell tends to appear in the edges along the y-axis. In order to keep the design compact while achieving acceptable absorption characteristics, slits were etched on the patch to be able to modify the external coupling, due to the power leakage from the structure, until reaching the critical coupling condition. A first diagonal slit, with the aim to make a longer current path, was created on the patch (Fig. 3c), to shift down the resonant frequency from 1728 to 955 MHz. That also led to move from the overcoupled to undercoupled region ( Ŵ = 0.38∡180 • , Fig. 3a). A second slot was introduced to the structure to locally symmetrize the E distribution with respect to the incident wave polarization having its maximum concentrated in the unitcell edges along the y-axis (Fig. 3d). That made it possible to further reach the critical coupling condition ( Ŵ = 0.14∡0 • , Fig. 3a) at the same frequency to the case with one slit. In order to reach the desired frequency, an additional slit was introduced horizontally in the middle of the structure where the E distribution is minimal, shown in Fig. 3e, not only to avoid disturbing the E symmetry but also to downshift its resonance frequency by further extending the length of the current path. 
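Using the relation A(ω0) = 1 − |Γ(ω0)|² quoted earlier (transmission is blocked by the ground plane), the reflection magnitudes reported for the successive unit-cell design steps, including the final three-slit value of 0.08 quoted just below, translate into absorptivities as in this minimal sketch:

```python
# Absorptivity A = 1 - |Gamma|^2 (transmission is zero because of the ground plane).
# Reflection magnitudes are those quoted for the successive unit-cell design steps.
steps = {
    "unslotted patch (1728 MHz)":  0.88,
    "one diagonal slit (955 MHz)": 0.38,
    "two slits (955 MHz)":         0.14,
    "three slits (868 MHz)":       0.08,
}

for step, gamma in steps.items():
    absorptivity = 1.0 - gamma ** 2
    print(f"{step:30s} |Gamma| = {gamma:.2f}  ->  A = {absorptivity:.3f}")
```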
With an optimized length of the last slit extracted from HFSS, the condition of critical coupling was satisfied at the desired frequency of 868 MHz ( Ŵ = 0.08∡0 • , Fig. 3a). In ultra-thin metasurface absorbers where the perfect absorption is achieved by additional internal losses, the periodicity of the cell (P), which greatly affects the absorption frequency, can be approximately calculated as in (2) www.nature.com/scientificreports/ where m , ε r , c, and f are the guided wavelength, substrate permittivity, velocity of light, and the resonance frequency, respectively. Although the critical coupling condition to attain perfect absorption was satisfied here by adjusting the external coupling rather than the internal losses, the above formulation can be used provided that the increase of the length of the current path due to the addition of the slits into the unit-cell design is considered. To illustrate the maximum extent in that length, Fig. 3f,g show the vector current distributions on the top surface of the unslotted and proposed unit-cells, respectively. The surface currents are mainly concentrated on the outer edge portions of the proposed unit-cell due to the presence of the slits (Fig. 3g) experiencing a longer path (approximately twice) compared to the unslotted patch (Fig. 3f); the latter resonant frequency is almost twice that of the proposed unit-cell (Fig. 3a). That change in resonant frequency by a factor of two explains the choice of the periodicity value of 55 mm ( Fig. 1b) rather than 110 mm (being the approximate P value at 868 MHz using (2) for ε r = 2.55 ). Note that although equation (2) reasonably estimates the resonant frequency of the unslotted patch unit-cell (i.e. 1728 MHz) for P = 55 mm, due to the absence of any additional internal losses (resistive materials or effective dielectric losses), no absorption should be expected with that unit-cell geometry (as seen in Fig. 3a, | Ŵ| = 0.88). The further additions of the slits are indeed required to satisfy the critical external coupling at 868 MHz. It is worth mentioning that, according to the Poynting theorem, the real part of the average power over time-period is related to the imaginary part of permeability and permittivity 47 . Therefore, the substrate loss tangent is a key parameter to obtain the perfect absorption. The latter was achieved here by the control of the external coupling on a low-loss unit-cell material with tanδ of only 0.0015. For a better understanding of the loss tangent impact on the absorption characteristics, Fig. 4 depicts the simulated absorption and reflection phase of the proposed unit-cell with tanδ equal to 0.0015 and 0. As observed, the lossless unit-cell behaves similar to a typical HIS reflector with the unity-magnitude and in-phase reflection (i.e. + 90 • to − 90 • ), whereas a nearly perfect absorption was achieved with tanδ = 0.0015 . The concept and the design procedure introduced in this subsection can be scaled to various frequency bands and generally applied to the design of the absorber unit-cell of any low-loss material. www.nature.com/scientificreports/ Equivalent circuit analysis. In order to have an insight on the role and effects of the slits introduced in the unit-cell on the absorption mechanism from the circuit point of view, a simplified equivalent circuit model was designed and developed. 
The unit-cell dielectric can be modeled as a combination of parallel resistor and capacitor representing the dielectric losses and its capacitance, whereas the conductive ground plane can be simply an inductor neglecting the ohmic losses 41 . The unit-cell metal patch can be described using inductors and the addition of the slits calls for extra parallel combination of equivalent resistors and capacitors. For a clearer illustration, Fig. 5 demonstrates the locations of each lumped component on the corresponding unit-cell design steps together with the equivalent electrical model. The input port was set the free space characteristic impedance (i.e. 377 ). A first approximate value was defined for some of the lumped components and the equivalent lumped circuit parameters were then optimized using ADS simulations. For instance, a ratio was considered between the inductors representing the patches based on the surface current intensities extracted from full-wave simulations using current probes defined along the current path at the resonant frequency. Note that additional inductors were introduced in the final model as the surface currents experienced multiple directions along the current path due to the addition of the horizontal slit. The capacitance of the unit-cell substrate was estimated from the microstrip line model. The modeled resistors in the gap area between the unit-cell elements and in the slits were considered to have a higher value compared to those formed between the top and the bottom conductive layers. This is due to the weaker fringing E-field (lower displacement current) within the superficial slots compared to the E-fields existing within the substrate 48 , which further leads to lower capacitance values. Figure 6 compares the |S 11 | responses for each step of the unit-cell design obtained from ADS and HFSS. As observed, with a reasonable agreement between the results, the circuit model can accurately predict the electromagnetic properties. Moreover, such an equivalent electrical model would further allow manipulate consciously its architecture to achieve the desired performance. State-of-the-art unit-cell designs operating around 868 MHz. Table 1 provides a summary of the so far developed absorbers' unit-cells with various type patch geometries printed on substrates of different dielectric permittivities and thicknesses. As it can be seen, implementing the proposed design methodology to produce the unit-cell led to an ultra-thin ( 1 225 0 ) and a very compact structure when compared to all previously reported designs. Note that although the designs developed in 5,11 have smaller dimensions compared to our proposed one, their thicknesses are larger by factors of 14.1 and 2.6, respectively. It is worth mentioning that the previous designs employed additional internal losses (by resistive materials 10 or dielectric losses 11 ) to achieve a perfect absorption. Thanks to the proposed methodology, the latter was obtained by modifying the external coupling even with a low-loss unit-cell material ( tanδ = 0.0015 ). Moreover, its size permits a large number of array elements in a compact area by placing unit-cells in a two-dimensional grid. www.nature.com/scientificreports/ Merging patch antenna and metasurface absorber. 
Following the goal to not only design an ultrathin absorber but also to make a low-profile RFID reader antenna, after successful design and introduction of both the patch antenna and the metasurface absorber configurations, they were merged into the complete antenna structure, referred to as MPA (Fig. 1c). This subsection describes in detail the MPA design presenting its electrical characteristics. A comparison was made between the parameters of a typical patch antenna on a conventional ground (Fig. 2) and when placed over the absorber acting as an artificial ground in order to assess the possible improvement brought by the later. Since the unit-cell of the absorber was analyzed with an incident planewave with the E-field pointing in the x-direction, the patch was positioned on the absorber with a similar E polarization along the x-axis. The direct combination of the antenna and the absorber is a challenging task due to the strong mutual coupling between them. This would be even more crucial when taking into account the narrow bandwidth of both the patch and the metasurface designs. Therefore, cautions have to be taken about any possible frequency adjustment due to that close proximity. The impact of the patch antenna substrate on the absorber unit-cell performance was first investigated in HFSS considering a normal incident with periodic boundary conditions. Although the patch leads to disturb the external coupling efficacy on the nearby unit-cells, which can be neglected in a large-scale array, the patch was not taken into account due to not being a repetitive pattern. A tiny downshift of 1.3% was noticed that can be explained as a dielectric loading effect of the antenna substrate on the absorber unit-cell 49 . That frequency shift is expected to be further decreased for a finite structure, it was hence discarded. A metasurface absorber as an artificial ground replacing the conventional patch antenna ground leads to modify the antenna parameters. Before exploring the latter, the number of unit-cell elements, which significantly influences the MPA radiation behavior, has to be determined. A current distribution analysis in a large scale metasurface can provide hints to the minimum number of required unit-cells. Figure 7 depicts the surface current distribution on the unit-cell array of the MPA structure composed of two different element matrices, i.e. 6 × 6 and 4 × 4, at the resonant frequency of 868 MHz with 1 W input power injected to the antenna terminal. The colors representing the current in linear scale go from dark blue (weak current density) to green to yellow www.nature.com/scientificreports/ to red (strong current density). As observed, for the MPA with a 6 × 6 element matrix, the current is mostly concentrated with a higher intensity over the central 4 × 4 elements below the radiating patch area (Fig. 7a); an almost similar current distribution can be observed on the unit-cells of the MPA with a 4 × 4 element matrix (Fig. 7b). That indicates that the size of the absorber area can be reduced to 4 × 4 unit-cells, while expecting to approximately achieve the maximum radiating properties with the MPA composed of a minimum 4 × 4 unitcells in the antenna mode. Since the considered artificial ground is a metasurface absorber rather than a high-impedance surface, it is necessary to understand its physical behavior to be able to figure out how it would contribute to the antenna radiating properties. 
In order to explain that, the loss of the artificial ground (including the substrate and copper layers) in the MPA structure was removed to convert it to a HIS 41,47 (Fig. 4). The radiated E-field in the lossless case was compared to that of the normal MPA and the typical patch antenna in a defined area in the x-z plane for an input power of 1 W at 868 MHz (Fig. 8). It can be clearly seen that the outward radiation extends to further distances for the MPA (Fig. 8b) compared to the typical patch (Fig. 8a). This is attributed to the fact that a portion of energy is dissipated in the patch substrate (indicated by dashed-line in Fig. 8) leading to a lower gain compared to the proposed MPA. In other words, replacing the conventional patch ground with the artificial ground, which was designed as an absorber rather than a HIS, contributes positively in the overall antenna radiating properties. However, for the MPA with lossless artificial ground, the extension of the fields into the free space, pointed out by horizontal arrows, is higher (Fig. 8c) compared to that radiated by the normal MPA (Fig. 8b). This means that using the artificial ground designed as absorber contributes slightly negatively to the antenna radiation properties when compared to the ground designed as HIS. www.nature.com/scientificreports/ In order to further indicate the extent of that positive and negative contributions to the MPA performance and to verify the effectiveness of the reduced sized absorber when compared to a larger scale one on the MPA radiating characteristics, Fig. 9 shows the impact of the absorber unit-cell's matrix array and dielectric loss on the |S 11 | and realized gain of the proposed MPA. The following observations were made: • the direct integration of the typical patch antenna (Fig. 2) with the final structure of the absorber slightly detuned the patch resonant frequency (shifting up from 868 to 872 MHz, Fig. 9a) and |S 11 | matching level (increasing from − 22.9 to − 9.5 dB, Fig. 9a). It, however, led to significantly increase the antenna gain by approximately 158.4% (increasing from − 2.25 to 1.87 dBi, Fig. 9b) in line with the observations made and analyzed from the antennas radiated E-fields (Fig. 8). For a fair comparison, to bring the resonant frequency back to 868 MHz and to compensate for the matching level, a small U-shaped slot was etched near the feeding point, labeled as slotted patch in Fig. 9. As it can be seen, the slotted patch (solid-blue) sharply resonates at 868 MHz with a good matching level ( < −33 dB). In this case, the gain enhances from 1.87 to 2.15 dBi. This means that replacing the conventional ground of the typical patch antenna with the proposed artificial ground leads to significantly increases the gain of about 175.6%. • reducing the number of absorber unit-cells from 6 × 6 to 4 × 4 leads to slightly decrease the gain (from 2.3 to 2.15 dBi, Fig. 9b) with hardly any change in the resonant frequency (Fig. 9a). Therefore, in line with the conclusion made from the surface current analysis (Fig. 7), the size of the absorber area can be indeed reduced to 4 × 4 unit-cells, while expecting the maximum radiating properties in the antenna mode. • omitting the loss of the absorber led to a slight increase (from 2.15 to 2.69 dBi) in the MPA gain, as deduced when comparing the E-fields radiated by the normal MPA structure (Fig. 8b) and that by the lossless absorber (Fig. 8c). This can be considered as a moderate cost to use a metasurface absorber instead of an HIS in the MPA design. 
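The percentage gain improvements and aperture efficiencies quoted in the observations that follow can be checked directly from the dBi values by standard conversions (linear gain G = 10^(dBi/10), effective aperture Ae = Gλ²/4π, and the 22 cm × 22 cm footprint quoted later); a minimal sketch, using the figures quoted in the text:

```python
import math

f = 868e6
lam = 299_792_458.0 / f                 # ~0.345 m
area = 0.22 * 0.22                      # physical footprint, m^2 (22 cm x 22 cm)

def dbi_to_linear(g_dbi: float) -> float:
    return 10 ** (g_dbi / 10.0)

g_patch, g_mpa = dbi_to_linear(-2.25), dbi_to_linear(2.15)

improvement = (g_mpa / g_patch - 1.0) * 100.0          # ~175 %, matching the ~175.6 % quoted
eff_patch = g_patch * lam ** 2 / (4 * math.pi) / area  # ~0.117, i.e. ~11.7 %
eff_mpa = g_mpa * lam ** 2 / (4 * math.pi) / area      # ~0.322, i.e. ~32.2 %

print(f"gain improvement: {improvement:.1f} %")
print(f"aperture efficiency: patch {eff_patch * 100:.1f} %, MPA {eff_mpa * 100:.1f} %")
```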
Note that, in contrast to a HIS, where the magnitude of the reflection coefficient at the resonant frequency is unity (similar to a PEC/PMC), |Γ| can reach approximately zero for a large-scale version of the proposed absorber (Fig. 3a) following the proposed methodology. Therefore, combining the absorber with the typical patch antenna not only notably increases the gain (by 175.6%) but may also help improve the multipath environment of RFID systems by mitigating severe multipath reflection interference and collision issues. It should be noted that the gain enhancement of the MPA compared to the typical patch can be explained by the increase in the maximum effective aperture of the antenna 50 . Since the physical size of both structures is identical (22 cm × 22 cm), the artificial ground leads to a higher effective aperture and, consequently, a higher aperture efficiency. The calculated aperture efficiencies for the typical patch and MPA structures are 11.7% and 32.2%, respectively (a short numerical check is given later in this section). To visualize this, Fig. 10 depicts the E-field distribution under the FR4 layer of both structures for 1 W input power at 868 MHz. As observed, the maximum effective aperture of the MPA is much larger than that of the typical patch antenna, owing to the high-intensity E-field distribution over the entire area (Fig. 10b) rather than only under the patch location (Fig. 10a).

Experimental results and discussion
To validate the proposed design concept, a prototype was fabricated (Fig. 1). Its performance in terms of input reflection coefficient, boresight gain, and radiation pattern was measured in the antenna mode. To gain insight into the absorption characteristics, even though the MPA size is limited, the power reflected from the MPA, compared to that from a metallic plate of similar dimensions, was measured in the far-field zone to demonstrate its potential effectiveness in the absorbing mode.

MPA characteristics in antenna mode. The |S11| characteristics of the MPA are shown in Fig. 11a; a remarkable agreement between the simulation and measurement results is observed. The −10 dB |S11| bandwidth fully covers the UHF RFID band in Europe, ranging from 862 to 874 MHz. The gain characteristics of the MPA as a function of frequency are depicted in Fig. 11b. A reasonable agreement was achieved between the simulated and measured gains. The slight differences between the results can be attributed to small drops of the epoxy glue, used to attach the patch to the absorber, in several areas between the unit-cell gaps, which were not considered in the simulations. The measured maximum gain is 2 dBi at boresight, close to the value predicted by the simulations (2.15 dBi). The radiation patterns of the MPA at the resonant and −10 dB band-edge frequencies in the x-z and y-z planes are shown in Fig. 12. A good agreement was obtained between the simulated and measured patterns. The measured (simulated) half-power beamwidths (HPBW) at 868 MHz are 76° (78°) and 77° (80°) in the x-z and y-z planes, respectively. The MPA has a measured front-to-back ratio of approximately 26 dB (φ = 0°) and 21.6 dB (φ = 90°) across its entire matching bandwidth. This type of pattern is very well suited for the target application.
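As a check on the aperture-efficiency figures quoted above (11.7% and 32.2%), the minimal sketch below assumes the efficiencies are the maximum effective aperture Ae = Gλ²/(4π) divided by the 22 cm × 22 cm physical footprint, using the simulated boresight gains of −2.25 dBi and 2.15 dBi. Under those assumptions the quoted values are reproduced:

```python
import math

C0 = 299_792_458.0      # speed of light (m/s)
FREQ = 868e6            # operating frequency (Hz)
A_PHYS = 0.22 * 0.22    # physical footprint of both structures (m^2)

def aperture_efficiency(gain_dbi: float) -> float:
    lam = C0 / FREQ
    g_lin = 10.0 ** (gain_dbi / 10.0)
    a_eff = g_lin * lam ** 2 / (4.0 * math.pi)   # maximum effective aperture
    return a_eff / A_PHYS

print(f"typical patch: {aperture_efficiency(-2.25):.1%}")  # ~11.7%
print(f"proposed MPA:  {aperture_efficiency(2.15):.1%}")   # ~32.2%
```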
State-of-the-art RFID reader designs operating in the UHF band.
The obtained results demonstrate the suitability of the approach of combining the metasurface absorber with the typical patch antenna, which results in a lightweight, low-cost, and ultra-thin alternative to current solutions. The proposed MPA dimensions and radiation characteristics are compared to state-of-the-art RFID reader antennas operating in the UHF band; a summary is provided in Table 2. As can be seen, not only the commercially available reader antennas [15][16][17] but also most of the designs developed in the literature [18][19][20][21][22][23]25 are bulky and cumbersome, and may not be appropriate for portable RFID applications. They do, however, have a higher gain, mainly due to the use of air substrates in their configurations. In 24 , a compact size was achieved using a very high permittivity ceramic material. In 26 , two different compact antenna structures were developed using 4.8-mm-thick FR4 and 4.6-mm-thick RO4003 ( ε r = 3.38 ) substrates. For a fair comparison with that design, the thickness of the FR4 substrate in the proposed MPA structure was first increased from 1 mm to a standard 3.2 mm in order to obtain an overall thickness comparable to that in 26 . The MPA size was then reduced to a 2 × 2 unit-cell array to have a size comparable to that in 26 . Results are given in Table 2. As can be seen, the proposed MPA provides a higher gain (by 31.8%) and bandwidth (by 27.3%) while having a similar HPBW compared to that in 26 . It is worth mentioning that for the final MPA size with the thickness increased to 4.7 mm, the gain rose to as high as 3.5 dBi. To sum up, thanks to its ultra-thin profile and the freedom to set the overall physical size by choosing the number of unit cells in the array, similar designs can be engineered, with simple modifications of the structure, to satisfy the demands and restrictions imposed by a given application.

MPA characteristics in absorbing mode. The free-space measurement method was employed to assess the absorption performance of the MPA structure. In this method, a plane wave, at normal or oblique incidence, is used to excite the unit-cells of the metasurface absorber uniformly 51 . Although nearly perfect absorption was achieved with the proposed unit-cell (Fig. 4a), a large number of unit-cells has to be considered in the demonstration test sample, where the cells farthest from the center of the structure are not only unaffected by the presence of the central patch but also have a minimal influence on the energy absorption 52 . This would not be an issue for the target application, given that RFID readers are often mounted on walls, ceilings, and/or tables, where there is usually no limit on the physical size. However, due to the limitations of fabricating a large-scale absorber in our laboratory (the LPKF ProtoLaser S4 laser machine has a maximum layout area of 22.9 cm × 30.5 cm), the prototyped MPA structure, comprising a 4 × 4 unit-cell array, which was enough to ensure the maximum radiating properties in the antenna mode (Fig. 7), was used for demonstration purposes. The measurement campaign inside the anechoic chamber is depicted in Fig. 13a. Two identical, commercially available ultra-wideband log-periodic antennas (0.6-16 GHz), one for transmitting (Tx) and one for receiving (Rx), with a 20 cm feed-to-feed distance between them, were employed.
The input signal, generated by an Agilent N9310A (9 kHz-3 GHz) RF signal generator with −10 dBm input power, was amplified by a broadband power amplifier (R&K 2737M) with a maximum gain of 28.9 dB and then fed to the Tx antenna. The Rx antenna was connected to a Keysight spectrum analyzer (N9320B, 9 kHz-3 GHz). The MPA was placed in the far field of the log-periodic antennas with a sufficiently large separation distance (50 cm), and then replaced by a conducting sheet of the same size to evaluate the absorption performance through a fair comparison. To make the antennas' peak gains coincide with the center of the MPA, both Tx and Rx were tilted by an angle of θ = 11.5°; a laser pointer was used to improve the accuracy of the setup. Since the dimensions of the prototyped absorber are limited to only 0.64λ0, perfect absorption around the target frequency cannot, of course, be expected. To gain insight into the absorption characteristics of the structure, a number of full-wave simulations were also performed, mimicking the experiments, by calculating the radar cross section (RCS) for a plane wave with a similar oblique incidence angle of θ = 11.5°. For a better understanding of the physical absorption mechanism of this limited-size structure, the RCS of a typical patch antenna was also reported. Results are shown in Fig. 13b. As far as the absorption performance for RFID applications is concerned, in line with the simulations, where a 2.4 dB RCS reduction was observed (Fig. 13b), a 1.3 dB decrease in the power levels reflected from the MPA compared to the metal plate was achieved in the experiments at the desired frequency (0.868 GHz). At frequencies outside the band of interest (e.g. 0.855 GHz), a similar RCS (0.2 dB difference) can be observed for both the typical patch and the MPA, whereas the MPA's RCS gradually decreases as the frequency increases. That reduction reaches its maximum at an up-shifted frequency (around 0.875 GHz), which can be attributed to the limited number of unit-cell elements in the metasurface structure. A larger reduction in the power reflected from the MPA compared to the metal plate can also be noticed around the frequency band of interest, e.g. at 0.868 GHz compared to 0.855 GHz, both in experiments and simulations. These results demonstrate the potential energy absorption capacity of the reduced-size MPA. An improved absorption performance might be expected for a larger number of unit-cells in the array, given that nearly perfect absorption was obtained with the designed unit-cell element (Fig. 4a).
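To put the dB figures above in perspective, the minimal sketch below converts a reduction in reflected power (or RCS) relative to the metal-plate reference into a fractional decrease. This is only the standard dB-to-linear conversion, not a full absorption-efficiency estimate, since scattering from a finite-size structure is not purely specular:

```python
def reflected_power_fraction(reduction_db: float) -> float:
    """Fraction of power still reflected, relative to the metal-plate reference."""
    return 10.0 ** (-reduction_db / 10.0)

for label, red_db in [("simulated RCS reduction", 2.4),
                      ("measured reflected-power reduction", 1.3)]:
    remaining = reflected_power_fraction(red_db)
    print(f"{label}: {red_db} dB -> {remaining:.1%} reflected, "
          f"{1 - remaining:.1%} less than the metal plate")
```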
Conclusion
An ultra-thin metasurface patch antenna with double functionality (i.e. antenna and absorbing modes) was proposed, suitable for RFID applications in the 868 MHz band. The MPA structure comprises a typical coaxially-fed patch antenna merged with a metasurface absorber acting as an artificial ground. The use of a metasurface absorber in the antenna structure is attractive for designing a reader antenna capable of mitigating multipath reflections and incorrect RFID readings, which is essential for the considered applications. A design methodology based on adjusting the external coupling rather than the internal losses was proposed to transform a low-loss ( tanδ = 0.0015 ) unit-cell with highly reflective characteristics into a perfect absorber for normally incident waves on the same size and type of substrate.

The methodology made it possible to develop an ultra-thin (λ0/225 at 868 MHz) and very compact structure in comparison to previously developed designs. To validate the proposed design, the MPA performance in both functional modes was characterized numerically and experimentally. It was demonstrated that the MPA not only has a 175.6% higher gain than a typical patch of the same size but also has a low-profile design with an overall thickness of only 2.5 mm (λ0/138.1 at 868 MHz), which is the lowest profile among the UHF RFID readers reported so far. With the MPA in absorbing mode, and taking the size limitation into account, a reasonable reduction of 1.3 dB in the power reflected by the MPA, compared to a metallic sheet of similar size, was achieved experimentally. These results indicate that the proposed MPA is a promising candidate for future portable or stationary RFID applications, competing favorably with the RFID readers developed so far.
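As a quick numerical check of the electrical-thickness figures quoted above (assuming the free-space wavelength at 868 MHz):

```python
C0 = 299_792_458.0                  # speed of light (m/s)
lam0_mm = C0 / 868e6 * 1e3          # free-space wavelength at 868 MHz (~345.4 mm)

print(lam0_mm / 138.1)              # ~2.50 mm, the quoted overall MPA thickness
print(lam0_mm / 225)                # ~1.54 mm, the quoted "ultra-thin" figure
```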
Long‐term survival of an advanced colorectal cancer patient treated with Regorafenib: Case report and literature review Abstract Two phase 3 trials reported a prolonged survival in the third‐line setting of colorectal cancer patients treated with regorafenib with the longest duration of treatment of 16 months. Herein, we reported a unique case of a patient refractory to conventional chemotherapy who showed a prolonged stable disease with regorafenib. | INTRODUCTION Regorafenib (Stivarga ® ) is a tyrosin kinase inhibitor (TKI) impairing angiogenesis through the block of both vascular endothelial growth factor receptors (VEGFR) 1 and 3 (VEGFR3) and tyrosine kinase with immunoglobulin-like and EGF-like domains 2 (TIE2). Moreover, it targets tumor microenvironment through the inhibition of platelet-derived growth factor receptor (PDGFR) and fibroblast growth factor receptor (FGFR). [1][2][3] Actually, this drug represents a therapeutic option in the third-line setting of metastatic colorectal cancer (mCRC) patients according to the results of two phase III randomized trials (CORRECT and CONCUR) which showed a significant improvement both in terms of progression-free survival (PFS) and overall survival (OS) compared to best supportive care (BSC) 4,5 alone. Median overall survivals (mOS)s were 6.4 and 8.8 months, for Western 4 and Asiatic trials, 5 respectively, with the longest duration of treatment of 16 months in CORRECT trial. Herein, we report a unique case of a patient refractory to oxaliplatin-based and irinotecan-based chemotherapy combined with bevacizumab who showed a prolonged response (25 months) to regorafenib. | CASE PRESENTATION A 57-year-old caucasian woman underwent right hemicolectomy for a poorly differentiated mucinous CRC. Presurgical radiological staging deemed negative for distant metastases. Pathological stage was pT 3 N 2 M 0 (stage III). Subsequently, she received adjuvant systemic therapy with the combination of capecitabine plus oxaliplatin for eight cycles. A computed tomography (CT) scan carried out at the end of this treatment revealed a bulky left ovarian mass (70 × 67 mm) associated with high serum levels of CA19.9 (289 µ/mL) and CEA (48 ng/mL). Therefore, the patient underwent an exploratory laparotomy with evidence of multiple peritoneal nodules. Thus, debulking surgery was performed with left ovariectomy and excision of two peritoneal metastases. Histological examination revealed a localization of well-differentiated mucinous CRC (CDX2 positive, CK7, and CK20 partially positive; KRAS-codon 12 mutation and BRAF wild-type) associated with peritoneal carcinosis. RAS and RAF determinations had been performed on metastatic site since it had been demonstrated the high concordance between RAS and RAF between primary and metastatic CRC. 6 The CT scan performed 2 months after surgery showed controlateral ovaric mass (100 × 80 mm) and multiple peritoneal nodules (maximum diameter of 70 mm in recto-uterine pouch). High levels of tumor markers (CA19.9 302 µ/mL, CEA 27 ng/mL) were observed. First-line therapy according to FOLFIRI regimen in combination with bevacizumab was started. Nevertheless, 2 months after starting of therapy a CT scan showed an increase in ovaric mass (140 × 130 mm) and peritoneal involvement and the appearance of two hepatic lesions (largest diameters of 12 and 10 mm in V and II hepatic segments, respectively). She was enrolled in a clinical trial by another referral cancer center. 
Nonetheless, a CT scan documented early progression, with the appearance of right ovarian metastases (largest diameter of 30 cm) associated with ipsilateral hydroureteronephrosis and bowel subocclusion. A second debulking cytoreductive surgery was performed, with histological confirmation of moderately differentiated mucinous metastases from CRC. A subsequent CT scan confirmed multiple peritoneal and omental implants up to 95 mm and liver metastases (Figure 1). A third-line treatment with regorafenib (160 mg p.o. daily for 3 weeks followed by 1 week of rest) was started. Remarkably, this therapeutic approach yielded a prolonged, modest reduction in the dimensions of both the peritoneal nodules and the liver metastases, associated with a decrease in serum levels of CA19.9 and CEA (to 113 ng/mL and 13 µ/mL, respectively) (Figure 1). The most frequently observed regorafenib-related grade 1-2 adverse events were hypertension, hand-foot syndrome, stomatitis, and hoarseness. Occasionally, grade 3 diarrhea and fatigue required dose modulations. After 25 months of treatment with regorafenib, a CT scan revealed progression of the peritoneal and liver metastases, in combination with a decline in performance status. After a few weeks of BSC, the patient's death was recorded.

| DISCUSSION
Regorafenib improved mOS in patients with CRC who were pretreated with conventional chemotherapies. In particular, in the CORRECT trial mOS was 6.4 months in the regorafenib group and 5 months in the placebo group (hazard ratio 0.77; 95% CI 0.64-0.94; one-sided P = .0052). 4 This phase III trial randomized pretreated patients with CRC to receive regorafenib or placebo. Additionally, the CONCUR phase III trial investigated the same randomization in pretreated Asian CRC patients. 5 This trial demonstrated an mOS improvement with regorafenib compared to placebo (hazard ratio 0.55, 95% CI 0.40-0.77, one-sided P = .00016; 8.8 vs 6.3 months, respectively). The longest duration of regorafenib treatment was 16 months. Only a few reports have described patients who received this molecule for a prolonged period (Table 1). Rosati et al reported an OS of 13 months after administration of regorafenib. 7 Of note, Callebout et al reported an OS of 25 months. 8 Nonetheless, in that report the therapeutic program was discontinuous due to a combinatorial radiotherapy approach. Conversely, our case achieved the same OS with uninterrupted medical treatment. Intriguingly, the histology in the report of Callebout and colleagues was a mucinous colorectal cancer, as in our patient. As these authors reported, mucinous histology shows a pronounced epithelial-mesenchymal transition signature, which, given FGFR and PDGFR inhibition, could represent a regorafenib target. Yoshino et al also described a case of 2-year survival with regorafenib treatment. 9 These authors reported a CRC patient with RAS/RAF wild-type disease and a sustained OS of over 9 years. Roberto et al illustrated the case of a CRC patient with a regorafenib-related OS of 36 months, even if his oligometastatic disease was controlled with stereotactic radiotherapy. 10 In this report as well, the patient's OS was about 6 years. Similarly, Korphaisarn et al. described the case of a chemo-resistant rectal cancer with a prolonged response to regorafenib and a locoregional progression that was controlled with radiotherapy. 11 These patients shared common features, such as their prolonged response to previous lines of therapy.
It is reasonable to speculate that these tumors have biological features related to their chemo-responsiveness. 12,13 Two peculiar aspects of our patient are the prolonged response to regorafenib combined with the lack of response to oxaliplatin-based and irinotecan-based chemotherapy. Furthermore, our patient underwent two debulking surgeries before beginning therapy with regorafenib. To better understand the predictive role and the clinical activity of regorafenib in CRC, a retrospective, exploratory analysis of circulating DNA and protein biomarkers was carried out in patients enrolled in the CORRECT trial. Several biomarkers have been evaluated. In particular, it was demonstrated that regorafenib had a greater impact on the OS of patients with a high concentration of TIE-1 than on those with a low concentration, 14 as emerged from a post hoc analysis of the study. Indeed, a marked response to regorafenib may be attributable to strong activation of the pathways conventionally inhibited by regorafenib, 15 namely angiogenesis and vasculogenesis, which are hyperactivated in mucinous mCRC due to the hypoxic microenvironment. 16 Our patient displayed several regorafenib-related adverse events (ie, hand-foot syndrome, stomatitis, hypertension, and hoarseness), although only of grades 1-2, with the exception of grade 3 diarrhea and fatigue, which required temporary dose modifications according to the summary of product characteristics. This aspect is also relevant in our case, given the frequent correlation between the length of treatment and the appearance of adverse events, which sometimes may require hospitalization. 17 The histopathological and clinical features of this tumor, together with its refractoriness to previous lines of chemotherapy, support its poor prognosis. Conversely, the prolonged stable disease on regorafenib, in combination with its good toxicity profile, supports the potential therapeutic role of this drug. In conclusion, we believe that only knowledge of the molecular features of the primary tumor and of its metastases could have allowed a deeper understanding of the unique history of this patient. In particular, the analysis of clinical, laboratory, and biological features, such as for other antiangiogenic drugs, 19-21 might provide novel insights explaining the long-term survival.
The antimuscarinic agent biperiden selectively impairs recognition of abstract figures without affecting the processing of non‐words Abstract Objectives The present study investigated the effects of biperiden, a muscarinic type 1 antagonist, on the recognition performance of pre‐experimentally unfamiliar abstract figures and non‐words in healthy young volunteers. The aim was to examine whether 4 mg biperiden could model the recognition memory impairment seen in healthy aging. Methods A double‐blind, placebo‐controlled, two‐way crossover study was conducted. We used a three‐phase (deep memorization, shallow memorization, and recognition) old/new discrimination paradigm in which memory strength was manipulated. Strong memories were induced by deep encoding and repetition. Deep encoding was encouraged by redrawing the abstract figures and mentioning existing rhyme words for the non‐words (semantic processing). Weak memories were created by merely instructing the participants to study the stimuli (shallow memorization). Results Biperiden impaired recognition accuracy and prolonged reaction times of the drawn and the studied abstract figures. However, participants were biased towards “old” responses in the placebo condition. The recognition of the new abstract figures was unaffected by the drug. Biperiden did not affect the recognition of the non‐words. Conclusions Although biperiden may model age‐related deficits in episodic memory, the current findings indicate that biperiden does not mimic age‐related deficits in recognition performance. | INTRODUCTION It is well-established that healthy aging is associated with memory impairments. However, the effect of aging on memory seems to depend on which memory functions are being investigated. For example, aging seems to impair episodic memory most consistently, whereas semantic memory, working memory, and procedural memory remain to a great extent intact in healthy elderly (Nilsson, 2003). Furthermore, age-related impairments are typically found in recognition memory tests (Fraundorf et al., 2019;Rhodes et al., 2019). In recognition memory paradigms, participants must recognize previously studied stimuli as "old" correctly and identify not presented ones as "new" (Malmberg, 2008). However, the aging effect on recognition memory seems to depend on the stimulus's nature (i.e., identifying a stimulus as "old" or "new"). Age appears to decrease stimulus discriminability (Fraundorf et al., 2019;Wolk et al., 2009), which is typically related to a tendency to judge presented stimuli as "old" despite them being new (Gallo et al., 2007;Kroll et al., 1996). It seems likely that these performance differences are at least partly due to an impairment in sensitivity to novelty (Czigler et al., 2006;Daffner et al., 2006Daffner et al., , 2011. Another factor could be the limited availability of processing resources in older age (Park & Festini, 2017). A final factor could be the age-related slowing in processing speed (Levin et al., 1992;van Hooren et al., 2007). Salthouse (1996) proposed that this reduction in processing speed contributes to delayed cognitive process execution and the loss of information processed at earlier stages. In addition to novelty processing, the level of processing (LOP) also seems to affect recognition performance in aged people (Fraundorf et al., 2019). 
The LOP theory predicts that deep (e.g., via mnemonics, meaning-extraction, pattern recognition, and activation of prior knowledge) and intermediate processing (e.g., phonetics) lead to superior and faster retrieval when compared to shallow processing (e.g., perceptual analyses, rehearsal) (Craik, 2002;Craik & Lockhart, 1972;Craik & Tulving, 1975;Newell & Andrews, 2004). Fraundorf et al. (2019) reported that age differences were larger when deep semantic encoding was applied compared to shallow processing. This may be related to age-related difficulties with selfinitiation of deep encoding strategies. Thus, when such strategies are provided age differences were not found (Craik & Rose, 2012;Froger et al., 2009;Logan et al., 2002). In previous studies, it has been shown that selective blocking of muscarinic type 1 (M1) receptors specifically impairs episodic memory (Borghans et al., 2017(Borghans et al., , 2020Sambeth et al., 2015;Vingerhoets et al., 2017;Wezenberg et al., 2005). In these studies, it was found that the M1 antagonist biperiden (BIP) impaired the performance in the verbal learning task (VLT) but did not affect working memory, as measured by the n-back task. These effects appeared to be selective memory impairments since BIP treatment did not affect the performance in attention tasks. These results suggest that BIP treatment could be a suitable pharmacological model of age-related episodic memory impairment. Characterizing BIP's effects can aid a better understanding of which neurotransmitter systems may underlie the age-related memory deficits. This is relevant from a scientific viewpoint, and it may be relevant for the development of treatments for age-related memory deficit. This could be an M1 agonist such as BIP. To further investigate the validity of BIP as a pharmacological model of age-related memory impairment, we examined the effect of BIP on old/new discrimination performance using pre-experimentally unfamiliar stimuli in a sample of healthy young participants. We applied a three-phase old/new discrimination memory paradigm with abstract figures and non-words (Toth et al., 2021). Memory strength was manipulated as a function of LOP (Craik, 2002;Craik & Lockhart, 1972;Craik & Tulving, 1975;Newell & Andrews, 2004) and repetition (Hintzman & Curran, 1997;Ranganath & Rainer, 2003). Repetition is known to strengthen memory by increasing the subjective sense of familiarity resulting from the re-encoding of a particular memory trace (Hintzman & Curran, 1997;Ranganath & Rainer, 2003). In the current experiment, we first familiarized the stimuli using mnemonics to induce deep processing (deep memorization): the participants were asked to redraw the abstract figures and to mention existing rhyming words for the non-words (semantic processing). In the second phase, participants were asked to merely study the stimuli (shallow memorization). Here, the previously deeply encoded items were shown again in combination with some new items. Finally, an old/new recognition test was applied in which stimuli from the first and second phases were intermixed with new ones. Both recognition accuracy and speed were assessed. Based on previous studies in healthy aging, we did not anticipate detecting drug effects on the overall correct old item recognition (drawn/semantically encoded and studied items). 
However, we anticipated lower discriminability indexes due to higher false alarm rates (incorrectly identifying new items as "old"), and slower reaction times as a consequence of drug treatment. Furthermore, we anticipated that BIP would decrease the number of correctly rejected new items. Also, we expected that BIP would increase the false alarm rates in response to the new stimuli presented only during the recognition phase. Finally, we hypothesized that in the BIP as well as the placebo (PLA) sessions, deep memorization and repetition would prompt better recognition than shallow memorization without repetition. In other words, items relying on strong memory would be better recognized than those relying on weak memory. | Participants Based on previous studies using the current paradigm, an a priori statistical power analysis using G*power 3.1 showed that in order to detect significant behavioral effects using an ANOVA, 19 participants were required with an effect size of 0.4 and power of at least 90% at a significance level of 5% (Faul et al., 2007). Therefore, 21 healthy volunteers between the age of 18 and 35 years were recruited. One participant terminated the study due to personal reasons, and thus, was excluded from further analyses. The final dataset contains 20 participants (five males, with a mean age of 23 years) who were students from Maastricht University, with the highest education level being pre-university education or bachelor's degree. Inclusion was based on medical screening, which involved filling in a medical questionnaire followed by a detailed examination by a physician. Blood and urine tests were taken to confirm the participants' health condition and to rule out the apparent use of psychoactive drugs (e.g., cannabinoids, methylphenidate, cocaine, amphetamine, antidepressants, etc.), pregnancy or lactation. Furthermore, participants were included if their body mass index fell within the range of 18.5-30 kg/m 2 . | Study design and medication A randomized, double-blind placebo (PLA) controlled two-way crossover design was applied with a counterbalancing of orders over the two sessions. This means that each participant was tested two times on two separate occasions, once receiving 4 mg BIP (Akineton®) and once PLA. The order of treatment (PLA-BIP and BIP-PLA) was balanced in the sample. The washout period was 7-14 days. The order of the medications was blinded. Treatment was applied in accordance with previous results showing that peak plasma levels of BIP are reached 60-90 min after intake of a single dose (Sudo et al., 1998). | Procedure Volunteers provided informed consent before the medical examination. Hereafter, they received training to be familiarized with the test procedures. A test battery was used during this training session, which contained a different set of stimuli from those used during the actual test days. This was done to avoid learning effects. Hereafter, the test days were scheduled within a maximum of seven days after the training session. The two testing days were scheduled at the exact same time of the day to reduce diurnal effects. Before and after the testing sessions, participants filled in questionnaires assessing their general well-being status and possible complaints (e.g., headache, drowsiness, sweating, and sleepiness). Participants had to indicate whether they experienced any of the 33 possible complaints on a four-point scale. 
For example, a score of zero stood for "I do not experience this complaint at all," and a three stood for "I am experiencing this complaint strongly." If the participants experienced any complaints not listed on the questionnaire, they were asked to mention them on the questionnaire form in writing. Scores were compared between the different time points to examine treatment-induced side effects. Adverse events were monitored using printed forms. Subsequently, 90 min before the behavioral testing, medication (BIP or PLA) was administered. The participants were asked to refrain from alcohol, smoking, and caffeine 12 h before testing and not to use drugs throughout the study. A memory paradigm with abstract figures and non-words was applied in separate tests (Toth et al., 2021). See Figure 1 for an example of the stimuli used. Every participant performed each test phase first with the abstract figures and then with the non-words to minimize the verbalization of the figurative stimuli. The experiment consisted of three phases (see Figure 2). In phase 1 (deep memorization leading to "strong" memory), participants were familiarized with a series of 15 monosyllabic abstract figures or non-words in separate tests (list 1: L1). Participants were asked to manually redraw the abstract figures on an answer sheet to induce deep LOP. They had to mention existing English or Dutch rhyming words for each non-word to induce intermediate LOP. Stimuli were presented for 1 s, and the participants were given 14 s to execute the mnemonic encoding task. If they were ready earlier, they could press a button, and 2 s later, the next stimulus appeared. Stimuli were extracted from previous studies (Glosser et al., 1998;Redoblado et al., 2003;Seidenberg et al., 1994). During phase 2 (shallow memorization leading to "weak" memory), participants were instructed to remember as many stimuli as possible. In this phase, 30 stimuli (abstract figures or non-words) were used: 15 stimuli from L1 were randomly mixed with 15 new ones (L2). All stimuli were shown for 1 s with an inter stimulus interval (ISI) of 2 s. -3 of 10 During phase 3, participants were asked to decide if they had seen the presented stimulus in the previous series (L1 and L2) or whether the stimulus was new to them (L3: new, n = 15). The 45 nonwords or abstract figures were presented for a duration of 1 s, or less in case of faster button press; the ISI was 2.5 s. Participants had to press the corresponding buttons ("old" for L1 and L2, or "new" for L3 stimuli) on a response box as quickly and accurately as possible. The Attention Network Test was administered between phase 2 and 3 as a filler task lasting 20 min (Togo et al., 2015). | Data analysis Before analysis, all data were evaluated for having normal distri- For the behavioral data, Signal Detection Theory (SDT) was applied in order to investigate the discrimination performance (Benjamin & Bawa, 2004;Benjamin et al., 2009;Stanislaw & Todorow, 1999;Verde & Rotello, 2007). Discrimination accuracy was defined as the ability to distinguish the different types of stimuli (drawn/semantically processed, studied, and new). Correct responses included an "old" response to the drawn/semantically processed items, and the studied stimuli, and a "new" response to the new items. Incorrect responses involved a "new" response to the drawn/ semantically processed items and the studied stimuli and an "old" response to the new stimuli. See Table 1 for an overview. 
Given the memory strength manipulation in the current design (deep memorization, shallow memorization and recognition), the correct response rates, being hit rates (HR) for the drawn/semantically processed and the studied items and correct rejection rates (CRR) for the new, were used to evaluate the discrimination accuracy. Furthermore, in order to investigate discriminability, non-parametric A 0 statistics were computed for the drawn/semantically processed and the studied stimuli using Equations (1 or 2) (Snodgrass & Corwin, 1988;Stanislaw & Todorow, 1999). A 0 varies from 0 to 1 with 0.5 indicating chance performance. Higher values are indicative of improved performance (Snodgrass & Corwin, 1988;Stanislaw & Todorow, 1999). A 0 : discriminability index; HR: hit rate; FAR: false alarm rate During recognition, the a priori probabilities of old and new items and the quality of the match between a test item and the memory for studied items can influence the bias parameter (Huang & Ferreira, 2020;Stanislaw & Todorow, 1999). Such a model does not fit the current paradigm due to the memory strength manipulation used and the equivalent proportion and intended comparison of the drawn/semantically processed (n = 15), studied (n = 15), and new items (n = 15; Benjamin & Bawa, 2004). After all, the final proportion of "old" and "new" responses was 2:1. Therefore, we calculated the total amount of "old" (H + FA) and "new" (M + CR) responses given by the participants. This was done to examine whether there was a preference for either the "old" or "new" responses. Results were compared using paired samples t-tests with Bonferroni corrections. RT data of the hits were evaluated, as well. To be able to use parametric tests, RT-s were transformed into |log(1/RT)| to obtain a normal distribution of the data (Osborne, 2002). Moreover, the median RT data are reported as central tendency parameters, together with the corresponding first and third interquartile ranges (Ratcliff, 1993 Post hoc tests showed that the semantically processed stimuli were recognized better than the studied (p < .001 | Complaints and POMS The analyses did not result in any significant treatment effects for the neurovegetative complaints and the POMS (all associated t values < 1.37, p > .330; t values < 1.61, p > .123, respectively; see Table 7). Also, no further complaints other than listed in the questionnaire were mentioned. There were no adverse events found. | DISCUSSION AND CONCLUSIONS The present study aimed to examine whether BIP could model the recognition memory impairment as seen in healthy aging using an old/new recognition paradigm with abstract figures and non-words. The results show that BIP impaired the correct recognition and -7 of 10 figures, it is possible that the effects were related to a response bias. Namely, we detected an "old" response bias in the PLA session but not in the BIP session. Therefore, the response bias may underlie the observed drug effects on recognition memory. Further, although it was expected that BIP would decrease the discriminability index (A 0 ) of the drawn/semantically processed and studied items, the current data did not show this impairment for either the abstract figures or the nonwords. However, as expected, BIP prolonged the reaction times when responding to the drawn and the studied abstract figures. Taken together, the effects of BIP did not fully model the typical age-related deficits in recognition performance. 
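As a note on the analysis described above: the A′ formulas referred to as Equations (1) and (2) (Snodgrass & Corwin, 1988) did not survive into this text, so the standard formulation is reproduced below together with the |log(1/RT)| transform, as a minimal sketch. The example rates are illustrative, not values from the study, and hit or false-alarm rates of exactly 0 or 1 would need the usual correction before use.

```python
import math

def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Non-parametric discriminability A'; 0.5 = chance, 1.0 = perfect."""
    h, f = hit_rate, fa_rate
    if h >= f:   # Equation (1)
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    else:        # Equation (2)
        return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def rt_transform(rt_seconds: float) -> float:
    """|log(1/RT)| transform applied to hit reaction times before parametric tests."""
    return abs(math.log(1.0 / rt_seconds))

print(a_prime(0.80, 0.20))     # 0.875 for illustrative hit/false-alarm rates
print(rt_transform(0.650))     # ~0.431 for an illustrative 650 ms reaction time
```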
The finding that BIP did not affect the recognition performance of the non-words is somewhat unexpected. The treatment effects were dependent on the type of stimulus used. It could be argued that the recognition performance of the abstract pictures was better than the non-words and that the high performance is more sensitive to treatment effects. However, the performance of the non-words was about 80% correct, which can also be considered relatively high. Moreover, the strongest treatment effects for the abstract figures were found for the studied stimuli. Here, the recognition performance was about 70% correct. Therefore, the lack of treatment effects for the non-words cannot be attributed to recognition performance level. The lack of effect for the non-words may also be explained based on an age-related difference in the use of pre-existing semantic knowledge (Badham et al., 2016;de Chastelaine et al., 2017;Fraundorf et al., 2019). Belleville et al. (2011) (Wezenberg et al., 2005). Although this is another recognition task using existing words, these data suggest that BIP could impair word recognition as seen in aging. Further studies are indicated in which the effects of BIP on the familiarity of words are tested. The elderly often have difficulties identifying new items correctly when the old items are perceived as insufficiently distinct (Dodson et al., 2007;Fraundorf et al., 2019;Gallo et al., 2007). Consequently, new items are identified as "old" in recognition tasks (i.e., more false alarms; Gallo et al., 2007;Kroll et al., 1996). The stimuli in the current experiment were pre-experimentally unfamiliar, which theoretically could make their discrimination more difficult than the preexperimentally known items. Indeed, several empirical studies have shown that memory is worse for pre-experimentally unknown versus known items, such as unfamiliar versus familiar symbols (Cycowicz & Friedman, 2007), words versus non-words (Belleville et al., 2011;Gardiner & Java, 1990). In agreement with these findings, BIP should decrease the number of correctly recognized new stimuli (abstract pictures and non-words). However, this was not observed in the current study, which further undermines the notion that BIP models recognition deficits in aging. The drug-induced impairment in reaction times to the abstract figures complies with the well-documented age-dependent cognitive slowing (Levin et al., 1992;Salthouse, 1996;van Hooren et al., 2007). A decrease in response times after BIP has also been observed in other tasks in previous results (Sambeth et al., 2015;Silver & Geraisy, 1995;Wezenberg et al., 2005). In addition, pictures are represented as integrated patterns (Rajaram, 1996), and their processing requires additional allocation of attentional resources, which can slow down reactions (Noldy et al., 1990). If this was true, then BIP might have affected attention. However, this seems unlikely considering that participants did not report sedation in the present study. Furthermore, our findings align with previous research. Firstly, the memory strength manipulation showed a clear difference between the deeply and shallowly processed stimuli (Hulstijn, 1997;Paivio & Desrochers, 1981;Solso, 1995). Secondly, in the PLA condition, previous behavioral findings using this paradigm were replicated (Toth et al., 2021). 
Namely, the correct identification of the new abstract figures and non-words was superior to old item recognition when they were merely studied and not repeated, but not when they were drawn or semantically processed. Finally, as in previous studies, 4 mg BIP did not cause any adverse effects as measured by the POMS. In closing, although BIP has been found to mimic an episodic memory impairment in young, healthy volunteers, the current data do not indicate that BIP can adequately model typical age-related deficits in recognition performance of abstract figures and nonwords. CONFLICT OF INTEREST The authors have declared no conflict of interest. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
Data Driven Classification of Opioid Patients Using Machine Learning–An Investigation The opioid crisis has led to an increased number of drug overdoses in recent years. Several approaches have been established to predict opioid prescription by health practitioners. However, due to the complex nature of the problem, the accuracy of such methods is not yet satisfactory. Dependable and reliable classification of opioid dependent patients from well-grounded data sources is essential. Majority of the previous studies do not focus on the users’ mental health association for opioid intake classification. These studies do not also employ the latest deep learning based techniques such as attention and knowledge distillation mechanism to find better insights. This paper investigates the opioid classification problem by using machine learning and deep learning based techniques. We used structured and unstructured data from the MIMIC-III database to identify intentional and unintentional intake of opioid drugs. We selected 455 patient instances and used traditional machine learning and deep learning to predict intentional and accidental users. We obtained 95% and 64% test accuracy to predict the intentional and accidental users from the structured and unstructured datasets, respectively. We also achieve a distilled knowledge based test accuracy of 76.44% from the integrated above two models. Our research includes an ablation analysis and new insights related to opioid patients are extracted. I. INTRODUCTION Opioid analgesics are generally used to alleviate severe and chronic pain in patients. Doctors and other health care practitioners prescribe opioids in large numbers, especially in the United States of America (USA). According to the Centers for Disease Control and Prevention (CDC), the approximate cost of opioid abuse in the United States is $78.5 billion per year [1]. The number of opioid prescriptions in the United States is very high; research found that around 153 million opioid drugs were prescribed in 2019 [2]. Opioids are a The associate editor coordinating the review of this manuscript and approving it for publication was Juan Wang . class of drugs prescribed as painkillers, but they are heavily overused due to their addictive nature. Several studies [3], [4] have described that patients get these medications not to control pain; but because they are dependent on them. This can also result in an overdose. In our study, we use machine learning techniques to predict users' opioid misuse patterns from both structured data (i.e., demographic information, gender, ethnicity, etc. ) and unstructured data (i.e., chronological medical history and eventnotes). Barkley and Shin [5] found that intentional overdoses correlated with a depression. Other studies [6], [7] found that the rate of intentional drug use among adolescents is worrying. Prince [8] found that there is a direct connection between taking drugs and mental illness. Jones and McCance-Katz [9] also found that opioid use disorder (OUD) is associated with mental disorders. There appears to be a direct relationship [10], [11] between mental illness and drug abuse which needs further investigation. In the studies mentioned above, most authors conduct research on a specific aspect of the opioid problem, such as particular age groups or demographics [12], [13], [14]. The database we utilize is a good source of data which includes demographic, ethnicity, medical condition and age variables to study the problem. 
Previous studies did not use contextual analysis based on natural language processing (NLP) techniques of the patients' event notes, and medical history. Deep learning and Machine Learning have gained popularity in the healthcare applications [15], [16], [17], [18]. However, the current opioid risk assessment tools [19] are insufficient in terms of predictability and automatic contextual analysis based on patients' historical data 1 . Furthermore, clinicians should be offered tools that allow determination of patients' risk of misuse before administering opioids. Considering that opioid misuse is a medical problem impacting people's health and economy, investigating the problem based on a Machine learning approach can be useful. The database that we work with has data which could be utilized to identify opioid patients. In the light of the above discussion, previous studies find an association between mental health and opioid intake. In some other studies, researchers consider demographics (e.g., age, ethnicity, etc.) for finding opioid associations. Therefore it is important to utilize the above features as the predictors of opioid intaking early warning systems. In addition to this, users' historical data provides a contextual cue for users' future behavior. Previous studies rarely employ the latest deep learning based NLP techniques such as attention and knowledge distillation mechanism from the contextual signals which can unveil better insight for the researchers. In this paper, we use data from the MIMIC-III database [20], from which we have identified the opioid cases based on keyword identification. We identify relevant tables (i.e., schemas) from the database and select 41 features which are relevant to our study. Based on the keywords and patients' history, we identify which patients take opioids intentionally. In this way, we label our dataset as opioid intake 'YES'/'NO'. Later, we build a structured (i.e., tabular) dataset. To strengthen the model, we also incorporate an unstructured dataset. As training an unstructured dataset is complex and challenging, we apply deep learning based NLP techniques. For each patient, we analyze their historical data (i.e., event notes/unstructured data), and we convert the data using word embedding and attention based LSTM techniques. Since our patients data is already labelled, we train the unstructured data with the deep learning based technique mentioned above. In this study, we obtain a higher performance model by using the structured dataset while the model using unstructured dataset shows weaker results. To build a combined model, we apply knowledge distillation technique where structured dataset shows the higher capacity network and then, we transfer the knowledge to the weaker unstructured dataset. Our study further investigates whether a pattern of opioid use has any connection with users' mental health statuses and other socio-economical determinants. Classification of opioid patients and their mental health is important, considering the number of overdose deaths per year and the financial consequences of opioid addiction [21]. Our study may benefit society in a number of ways, such as early detection of intentional and unintentional opioid misuse, reducing the effect of aggressive marketing by pharmaceutical companies which profit from pain medication use, and better surveillance of opioid misuse by authorities and stakeholders. 
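As a concrete illustration of the pipeline just outlined (word embeddings feeding an attention-based LSTM over each patient's event notes, with knowledge distillation from the stronger structured-data model to the weaker unstructured-data model), a minimal sketch follows. The vocabulary size, sequence length, layer widths, temperature, and loss weighting are placeholder assumptions, not the authors' settings, and the attention used here is a simple additive-style scoring rather than any specific published variant.

```python
import tensorflow as tf
from tensorflow.keras import layers

# --- Unstructured-data ("student") model: embedding + attention over a BiLSTM ---
VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20_000, 500, 128   # placeholder hyper-parameters

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Score each time step, normalise with a softmax, and use the weighted sum of
# LSTM states as the note-level representation.
scores = layers.Dense(1, activation="tanh")(h)                        # (batch, T, 1)
weights = layers.Softmax(axis=1)(scores)                              # attention weights
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
student_logit = layers.Dense(1)(context)                              # intentional vs. accidental
student = tf.keras.Model(tokens, student_logit)

# --- Hinton-style distillation: hard labels plus the structured-data
#     ("teacher") model's temperature-softened predictions ---
def distillation_loss(y_true, teacher_logit, student_logit, T=2.0, alpha=0.5):
    hard = tf.keras.losses.binary_crossentropy(y_true, tf.sigmoid(student_logit))
    soft = tf.keras.losses.binary_crossentropy(tf.sigmoid(teacher_logit / T),
                                               tf.sigmoid(student_logit / T))
    # The T**2 factor keeps the soft-target gradients on a comparable scale.
    return alpha * hard + (1.0 - alpha) * (T ** 2) * soft
```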
The main contributions of this study are: 1) We build a dataset by using the MIMIC-III database for predicting opioid misuse. 2) We investigate the relationship between mental health and opioid misuse by patients from their structured and unstructured data (patients' clinical event notes). 3) We develop traditional and deep learning based supervised models to predict intentional and unintentional Opioid users, using an attention based mechanism. The organization of this paper is as follows. In section II, we briefly present existing studies related to our work. Section III discusses the methodology of our study which describes the dataset, data preprocessing, ground truth procedures, feature engineering, correlations, and model architecture, respectively. Section V describes the ablation study and section VI presents the discussion of our study. Section VII concludes our study. II. RELATED WORKS Prior research has shown that using opioids and benzodiazepines increases the risk of an overdose fatality compared to opioids use alone [22]. The authors described trends in intentional abuse of opioid analgesics, benzodiazepines, or both, from 2001 to 2014. They then calculated the increased risk of mortality associated with the abuse or misuse of the combination of opioid analgesics and benzodiazepines relative to opioid analgesic abuse or misuse alone. Barkley and Shin [5] investigated the characteristics of the individuals who died from intentional drug overdoses compared to unintentional overdoses. They found that intentional overdoses are associated with depression. Another study [6] investigated the demographic characteristics related to intentional opioid usage. A study based on information provided by three poison control centers in 2002-2014 in Ohio [7] showed that the rate of intentional drug use among adolescents is alarming, and that there is a need for more research into the misuse and abuse of drugs, especially suicide drug poisoning. Legislative actions may help in controlling drug VOLUME 11, 2023 use among adolescents and young adults and preventing them from getting access to specific drugs. Mensah et al. [23] investigated factors leading to addiction and abuse of opioids. Prince [8] found that there is a direct connection between drug use and mental illness. People who have a record of past hospitalization or using prescribed painkillers for mental illnesses (e.g. schizophrenia, bipolar disorder, major depressive disorder) are more likely to seek drugs afterwards. Suicide attempts among people with severe mental illness (SMI ), who used prescribed painkillers, were 2.40 times higher than for people with other substance use disorders. People with SMI are more likely to have an opioid use disorder (OUD) and OUD was present in 5% of people with SMI. However, they were unable to identify the neurobiological risk factors in their Syndrome Model. A related study [9] presents that people, who have a record of OUD, are also likely to have mental disorders and to use other substances (nicotine, alcohol, tranquilizers, etc.). We may conclude that there is a connection between mental illness and drug use. However, there is a correlation, causality is less clear. Van et al. [10] identified the connection between mental disorders and opioid overdoses. In their review study, authors tried to explain the association between mood/depression disorder and opioid overdose. There also is an association between PTSD (post-traumatic stress disorder) and opioid overdoses. 
Bohnert and Ilgen [11] found OUD has a strong connection with suicide attempts and overdose deaths. About 40% of suicide and overdose deaths were related to opioid use disorders in 2017 in America. Some factors cause people to use opioids in order to cope with society but this may increase depression, stress, anxiety, pain and eventually lead to suicide or overdose death. Easy availability of opioids and the use of other substances combined with opioids further increases the risk of unintentional overdose deaths. Several studies [14], [24] have attempted to predict opioid abuse using traditional statistical approaches. Alzeer et al. [25] wrote a review paper attempting to find crucial indicators from literature. Their study identified 75 factors that are connected with opioid abuse. They found age and gender to be the most important indicators. Vunikili et al. [13] also presented a set of statistical models to classify patients at risk of opioid misuse, death, and drug-drug interactions. Machine learning is an emerging field for predictive analysis and is used in multiple sectors. Han et al. [12] developed a prediction model to demonstrate the efficacy of machine learning in predicting opioid patients. The MIMIC-III dataset was used to classify opioid-dependent patients. In this study, we consider the social determinants and the patient characteristics, particularly using the structured set of data from the MIMIC-III database. In addition, making use of the unstructured data could improve the decision-making process. To the best of our knowledge, no study has investigated the interaction of behavioral health and opioid drugs using domain-specific word embedding. The contribution of our research is to classify opioid patients from structured as well as unstructured data. It is challenging to classify opioid patients with unstructured data using machine learning algorithms. However, it may be feasible with the aid of knowledge distillation. Ahn et al. [26] introduced the concept of the knowledge distillation which was formulated by Hinton et al. [27]. According to Ahn et al. [26] knowledge from a higher capacity model could be compressed to a lower capacity model by training the weaker model with the logits generated by the stronger model. However, it has been established in the research by Gao et al. [28] that it is feasible to distill dark knowledge from a totally different presentation of data from a strong network to a weaker network. III. METHODOLOGY In this section, we present different parts of our methodology. Figure 1 depicts the different steps of the study. Using data from the MIMIC-III dataset, we create a structured and unstructured dataset and which we use to predict opioid misuse. We also analyse the performance of our model using different ablation studies. In the following subsections, we describe all the steps in detail: The MIMIC III database [29] has 26 schemas but only ten schemas are relevant to our study. To create our dataset, 41 relevant features were taken. Within the unstructured, data, we identified a total of 37,127 distinct cases which we filtered systematically. We extracted our cohorts from the MIMIC-III database. MIMIC-III is a massive, publicly available database which contains health-related information of over 40,000 patients from the Beth Israel Deaconess Medical Centre's intensive care units during the period from 2001 to 2012. 
Demographics, vital sign assessments taken at the bedside, diagnostic test findings, treatments, prescriptions, caregiver observations, imaging documents, and death details are also stored (both in and out of the infirmary). The database has information on 53,423 different admitted patients (aged 16 years and older) in critical care units. The tables in MIMIC-III [20] are connected by identifiers that ordinarily end in ''ID''. For instance, SUBJECT_ID identifies a single patient, HADM_ID signifies a hospital admission, and ICUSTAY_ID signifies an admission to an ICU. A description of MIMIC-III is summarized in Table 1. In our study, we also use unstructured data to understand users' opioid behavior. Note that the events [30] column from the Lab events schema of the MIMIC-III database contains a huge corpus of text data. To find opioid patients, we performed an initial query with 120 opioid keywords against every prescription in the database. We identified 408,130 cases. Table 2 presents the statistics of the keywords, features, and structured and unstructured data instances, which we filtered manually. B. DATA PREPROCESSING We selected 121 opioid-related keywords (e.g., Hydrocodone, Methadone, Fentanyl) from the literature [31], [32], [33]. We verified whether the opioid-related keywords referred to prescription opioids or to illegal addictions in this identification process. This qualitative search helped us to remove irrelevant keywords. As a result, we identified 32,152 opioid-dependent patients from the prescription tables of the MIMIC-III database with the selected keywords. The prescriptions schema is about the medications ordered for a given patient. Using the chosen opioid keywords, we performed detailed queries that found one or more opioid drugs administered to the patients and returned distinct identifiers (subject_id). The purpose of our initial data preprocessing was essentially to find the opioid-related patients in the MIMIC-III database and the group of patients who were treated for an overdose. Since MIMIC-III is a huge dataset of patients with various diseases and treatments, we first segregated the opioid patient cohort, applying every possible opioid-related keyword wherever we could so as to include the maximum number of patients. Out of the 26 tables and 202 features, we found 41 features that are relevant to our research. Therefore, the shape of our initial dataset was (32,152 × 41). For the two types of datasets, several data preprocessing techniques have been used. For the structured (i.e., tabular) dataset, the label encoding technique was used initially since many of the attributes had qualitative data. The tabular dataset had a few missing values as well. Several missing-value handling techniques are available to deal with this, for example, discarding the instance, replacing with the mean/mode/median, replacing with the next/previous value, imputing the most frequent value, KNNImputer, etc. Among those techniques, KNNImputer [34] was selected for handling the missing values. KNNImputer is widely used for missing value handling [35], [36] and provides the best outcomes for the tabular dataset that we are working with. C. GROUND TRUTH COLLECTION We manually selected data from patients' prescriptions for the ground truth collection. We asked the following questions: did the patient take any narcotics, did the patient take any opioids, and did the patient use any controlled substances?
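As a rough illustration of how candidate patients could be shortlisted for this manual review, the following sketch screens note text for a few of the screening terms implied by these questions. The file name, column names, and term list are assumptions (they follow the usual MIMIC-III NOTEEVENTS layout, not something specified in this paper), so this is only a minimal sketch of the idea, not the exact procedure used in the study.

```python
import pandas as pd

# Hypothetical path; MIMIC-III distributes NOTEEVENTS as a CSV with
# SUBJECT_ID and TEXT columns, among others.
notes = pd.read_csv("NOTEEVENTS.csv", usecols=["SUBJECT_ID", "TEXT"])

# A few illustrative screening terms; the study used a much larger,
# manually verified keyword list.
screen_terms = ["narcotic", "opioid", "controlled substance",
                "hydrocodone", "methadone", "fentanyl"]
pattern = "|".join(screen_terms)

# Flag notes that mention any screening term (case-insensitive).
hits = notes[notes["TEXT"].str.contains(pattern, case=False, na=False)]

# Shortlist of patients whose notes mention any term, to be reviewed
# manually for ground-truth labeling.
candidates = hits["SUBJECT_ID"].drop_duplicates()
print(f"{len(candidates)} candidate patients flagged for manual review")
```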
After a careful review, we selected 500 samples involving opioid overdose and mental health conditions such as depression, hypertension, and bipolar disorder. It is important to mention that MIMIC-III is a publicly available data source for research purposes, and we rely only on this source. We have not used or amalgamated any other external data. Furthermore, no personally identifiable information (PII) was disclosed, and HIPAA policy was maintained during ground truth collection. We obtained the text column from the 'Noteevents' table, where the prescriptions of the patients are stored. Noteevents is the only table in the MIMIC-III dataset which contains all the notes of the patients, and it is a comprehensive source of unstructured data. Each note is linked to the specific subject id of the patient and contains information about admission type, past medical history, socio-economic status, and a detailed description of the patient. For each patient, we searched for any opioid-related information in the prescription. As mentioned earlier, there is specific information for each patient in all the prescriptions, such as past medical history, social history, and family history, along with the medication provided to the patient. We searched the prescriptions manually to find any evidence of opioid misuse in the past medical history, and we also searched the 'Social History' and 'Family History' categories for information on depression, anxiety, living alone, or a broken family, which can potentially cause someone to feel unhappy. For example, we identified a patient (patient id: 7445) as opioid dependent or potentially a future abuser. In line with the objectives of this study, we develop datasets containing the socio-economic characteristics of the patients, in addition to related factors such as lab events and vital signs, to identify opioid patients. We performed a qualitative study, doing a brief literature search to select only the schemas and their relevant columns which may be helpful for decision making. After several data preprocessing steps, high-level categorization, and feature engineering, we found 41 factors useful for this study. Table 2 shows the selected tables and their columns used to form a structured dataset for our study. The dataset that we created for the analysis has a shape of (454 × 41), which we feed into the model to classify intentional and unintentional opioid patients. As mentioned in our contribution section, this dataset is made publicly available for further research work. D. FEATURE ENGINEERING Feature engineering is an integral part of this study. It helps to uncover the hidden patterns in the data and boost the predictive power of a machine learning model. The tabular data consisted of 21 attributes. Among those 21 attributes, 11 categorical attributes were selected manually for classification. Some of these attributes had string data, and each of the attributes had some missing values and some noise as well. Several data preprocessing techniques have been used to preprocess the tabular data. Initially, label encoding was used, replacing string data with numeric values. Secondly, missing-value imputation was used, where the closest instances were applied to fill in the missing values. Thirdly, feature selection was used to find the correlations of the attributes, and finally the dataset was randomly divided into training and testing sets (a minimal sketch of these steps follows below).
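The following is a minimal sketch of the tabular preprocessing steps just described (label encoding, KNN-based imputation, and an 80/20 split), assuming a scikit-learn/pandas workflow; the column names, the target name, and the number of neighbors are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split

def preprocess(df: pd.DataFrame, target: str = "opioid_intentional"):
    """Encode string columns, impute missing values with KNN, split 80/20."""
    df = df.copy()
    # 1) Label encoding for categorical/string attributes;
    #    missing categories (factorize code -1) are kept as NaN for imputation.
    for col in df.select_dtypes(include="object").columns:
        codes, _ = pd.factorize(df[col])
        df[col] = np.where(codes == -1, np.nan, codes)
    # 2) KNN-based imputation of the remaining missing values.
    X = df.drop(columns=[target])
    X = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(X),
                     columns=X.columns)
    y = df[target]
    # 3) Random 80/20 train/test split, as used in the study.
    return train_test_split(X, y, test_size=0.2, random_state=42)
```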
There are 363 (80%) observations in the training set and 91 (20%) in the testing set. In accordance with the literature, we divided the age attribute into three categories: young (15-39 years), middle (40-60 years), and senior (more than 60 years). Moreover, we discretized the 'los (length of stay)' attribute into three categories: short stay (1-50 days), middle stay (50-100 days), and long stay (more than 100 days). These categorical data are later converted into numerical vectors [37]. In the dataset, a feature named icd9_code was provided. However, this attribute had 41 categories and was not sufficient to describe the disease or past medical history of a patient. Hence, we performed feature engineering by creating additional fields, named ''icd9_code_desc'' and ''High level category''. Here, the feature named 'icd9_code' represents the disease code, and the added field 'icd9_code_desc' is the description corresponding to the icd-9 codes. The feature titled 'High level category' was added to categorize the icd-9 code descriptions. The purpose is to find the correlation between dependent and independent variables. The 41 categories in 'icd9_code_desc' were reduced to eight categories according to the higher level categorization: • Diseases of blood and circulatory system • Diseases of nervous system and mental disorder • Diseases of digestive system • Diseases of genitourinary system • Diseases of respiratory system • Skin, subcutaneous tissue and musculoskeletal diseases • Endocrine, metabolic, immunity disorder and sepsis • Poisoning and injury 1) AGREEMENT ANALYSIS For our data analysis purposes, we convert the ICD9 codes into higher level disease categories. We perform this task to produce a lower number of class labels. In the original dataset, if we use each ICD9 code as a single class label, there is a high chance of overlap in the class predictions for the feature; for example, respiratory system disorder and shortness of breath. We did not treat these as two different classes; rather, we collapsed them into a single class label, respiratory issues. In this way, we convert the 51 independent class labels to only eight class labels. To this end, we recruited three physicians, one from the Directorate General of Health Services (DGHS), Bangladesh, and two from Dhaka Medical College Hospital, Bangladesh. The three physicians served as annotators of the ICD9 codes so that the final result could be obtained according to a majority voting scheme. First, we produced the same spreadsheet in three independent copies. We confirmed that these annotators performed the task in an independent fashion, so that no annotator was influenced by the others. Since we have three annotators, we use Fleiss' kappa [38] to measure their agreement on the final class labels. The extent to which raters assign the same score is called agreement. To obtain a fair percentage of agreement, statisticians create a matrix where columns and rows represent the raters and the objects to be rated, respectively. If two raters are in complete agreement on an item, we record a zero (i.e., no disagreement); otherwise we record a one, which represents a disagreement. We then find the percentage of zeros, which gives the agreement score, i.e., kappa [38]. Kappa can range from −1 to +1. After calculating the agreement score using Fleiss' kappa, we obtained a kappa score of 0.9 among the physicians, which indicates strong agreement. E.
FINDING CORRELATIONS We apply data engineering to find the Date of Birth (DOB) of the patients from a masking condition due to HIPAA policy [39]. We had to involve domain experts to categorize certain features, and we did binning of the Length of stay (Los) to ensure the overall dataset is suitable for the model building stage. Initially we have aggregated 41 features from the 13 schemas of the MIMICIII dataset. However, after high level categorization of certain features, we reduce this to 10 attributes. The dataset with those 10 attributes serves as our structured dataset. We identified ten important features which are likely correlated to users' opioid behavior. Some of these attributes are continuous and some attributes are categorical in nature. We discretized the continuous values into categorical attributes. We then computed chi-square [40] correlation between our independent variables and dependent variable (i.e., opioid intake intentional YES/NO). In statistics Chi-square is used for testing the independence of two events. We identify the features which have a p-value (<0.05) and determine whether these features are significant in terms of users' opioid intake behavior. Table 3 shows the correlation with different features and users' opioid intake. Mental status, ethnicity and high level category (diagnosis) are significant features in predicting users' opioid intake behavior. F. MODEL BUILDING We build models to predict opioid patients using two different approaches: i) tabular data from correlated attributes and ii) unstructured data from patients' history notes, i.e., eventnotes. In this subsection, we first explain the method of building opioid intake behavior prediction by using the tabular data. Later, we also describe the process of model building by using our unstructured data. 1) STRUCTURED DATA CLASSIFICATION We build our structured dataset, D t , with the correlated attributes: mental status, ethnicity and ICD high level category (diagnosis). We then split our dataset (instances 454) into training and testing datasets of 80% and 20%, respectively. We train our dataset with a cross validation with 10-iterations by using the following classifiers [41]: AdaBoost Classifier, Logistic Regression, Support Vector Classifier, XGB Classifier, and Random Forest Classifier. Table 4 shows the performance of our models which are developed from the tabular dataset. 2) UNSTRUCTURED DATA CLASSIFICATION For unstructured data classification, where the patients' histories are the input of the models, several classification approaches can be used. Since the unstructured data do not have any particular format, our study performs the classification by NLP techniques. The unstructured data consists of two attributes where the first attribute refers to the patients' history and the second attribute is our target attribute opioid intake intentional YES/NO. We applied several preprocessing techniques to our unstructured dataset. The data cleaning technique is used to remove rows of incomplete data from the dataset. We discard a few irrelevant syntaxes such as brackets and punctuations from patient's history data field. We also replace some abbreviations (e.g., dr./Dr./md./m.d.) with their full form so that the words do not appear as individual sentences at the time of vectorization. Word vectorization [42] is an NLP method of mapping a word or phrase from a vocabulary to a corresponding vector. 
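A hedged sketch of this vectorization step is shown below, using TensorFlow's TextVectorization layer; the vocabulary size, sequence length, and example notes are assumptions, since the paper does not specify the exact vectorizer it used.

```python
import tensorflow as tf

# Map each cleaned history note to a fixed-length sequence of integer
# word indices before feeding it to the sequence models described next.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=20000,           # vocabulary size (assumed)
    output_mode="int",
    output_sequence_length=500  # pad/truncate each note (assumed)
)

notes = ["pt with h/o chronic back pain on methadone, depression noted",
         "no significant past medical history, admitted for observation"]
vectorizer.adapt(notes)         # build the vocabulary from training notes
sequences = vectorizer(notes)   # integer tensor of shape (2, 500)
```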
The unstructured data, D_us, has been classified using three different well-known methods: a 1D CNN-based model, a basic LSTM-based model, and an LSTM-and-attention-based model. a: LSTM AND ATTENTION TECHNIQUES Long Short-Term Memory networks, most commonly referred to as ''LSTM'', are a class of RNN that can recognize long-term dependencies. When there is a lot of information to summarize, the model performs poorly and produces inaccurate results; this is known as the RNN/LSTM long-range dependency problem. The attention mechanism used with the LSTM attempts to address this problem. The cell state is the foundation of LSTMs. The LSTM can modify the cell state by removing or adding information, which is carefully controlled via gates. Gates selectively let information through, and three such gates serve to preserve and regulate the cell state in an LSTM architecture. Choosing which information to discard from the cell state is the first stage in our LSTM model. The forget gate layer, which is based on a sigmoid function, decides what to discard; Equation 1 shows how the forget gate layer scans x_t and h_(t−1). The next step is to choose the new information that will be kept in the cell state. Two different components perform this task. The input gate layer i_t, a sigmoid layer, first determines which values will be updated. The state is then updated with a vector of potential new values, c̃_t, created by a tanh layer. These two are combined in the subsequent phase to produce an update to the state. In the very next stage, we update the old state. We multiply the previous state by f_t, omitting the items we earlier decided to forget. Then i_t·c̃_t is added, which yields the new candidate value presented in Equation 4. Based on the candidate state, the output of the LSTM cell is finalized in the output gate (according to Equation 5). In a conventional LSTM unit, the sequences are only encoded in one direction (past information). In order to preserve both past and future information, we employ a BiLSTM model [43] to encode the sequences in both directions. The attention mechanism is used successfully in natural language processing, machine translation, and image processing tasks; it extracts pertinent context information about a word from the supplied input sentence. The bidirectional LSTM's forward and backward output features are concatenated into the vectors h_t before applying attention. The formulation of the attention mechanism is described in Equations 7, 8 and 9. In a global attention model, all of the encoder's hidden states are taken into account when determining the context vector c_t. In this model type, the current target hidden state h_t and all source hidden states h_s are compared to derive a variable-length alignment vector a_t, whose size is equal to the number of time steps (according to Equation 10). The weighted average of all the source states, weighted according to a_t(s), is then calculated to create the global context vector c_t. Therefore, the computation path of this attention mechanism is: compute h_t, then the attention weights a_t(s), then the context vector c_t, and finally the attention vector a_t. To learn the text's sequence, the LSTM-based model contains one LSTM layer with 400 units. This model has 3.1 million parameters, making it more complicated than the other models. Additionally, a hybrid (LSTM + Attention) model, sketched below, was employed to classify the sequence; its layer details are described in the next paragraph.
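The following is a minimal Keras sketch of such a hybrid BiLSTM-plus-attention classifier, assuming the layer sizes given in the next paragraph (128-unit BiLSTM, a 128-dimensional attention output, a two-unit softmax output, Adam with learning rate 0.001). The attention pooling shown here is one plausible additive formulation; the vocabulary size and sequence length are assumptions carried over from the vectorization sketch above, and this is not a reproduction of the authors' exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_hybrid_model(vocab_size=20000, seq_len=500):
    """BiLSTM encoder followed by a simple additive attention pooling."""
    inputs = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, 128)(inputs)
    h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # Additive attention: score each time step, normalize, weighted sum.
    scores = layers.Dense(1, activation="tanh")(h)           # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)                 # attention weights
    context = tf.reduce_sum(weights * h, axis=1)             # (batch, 256)
    context = layers.Dense(128, activation="relu")(context)  # 128-dim vector
    outputs = layers.Dense(2, activation="softmax")(context)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```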
The attention layer, which outputs a 128-dimensional vector, comes after a bi-directional LSTM layer with 128 units in each direction. For each of the models, there is a dense layer that is fully connected and has two units. The output layer includes two units since the model requires a logit to pass through the softmax function. Adam serves as the optimizer for all models, with the same learning rate (0.001), batch size (32), and data split (80/20). b: 1D CNN The 1D CNN is a modified version of the 2D CNN. It is widely used in sequence learning and has a shallow architecture; therefore, it uses minimal resources to train and test. Compact 1D CNNs have shown improved performance in recent research for applications with low labeled data and high signal variations obtained from various sources [44]. We use a non-causal CNN since the output y depends on a future sequence of inputs x. Let the input to the convolution layer, of length n, be represented by x, and let the kernel of length k be represented by h. Let the kernel window be shifted s positions (number of strides) after each convolution operation. Then a non-causal convolution between x and h for stride s can be defined as follows: In our 1-D CNN model, we use a 128-dimensional embedding layer and 32 kernels in total. In terms of parameters, the model is less complex than the others. All models are constructed starting with a 128-dimensional embedding layer. G. COMBINING MODELS BY KNOWLEDGE DISTILLATION Consider an eventnote x from the dataset D_us, the unstructured dataset consisting of patients' histories in text with corresponding labels. The note consists of tokens t_1, t_2, . . . , t_n extracted from the text. Our aim is to train a model M_S on the dataset D_us such that it achieves an accuracy a by using knowledge distilled from a model M_T. The model M_T is trained on the structured dataset D_t with limited features t_1, t_2, . . . , t_k, where k ≪ n. We propose a novel method to classify opioid-misusing patients from the unstructured data D_us by using the knowledge of a model that is trained on structured data. Figure 2 presents the detailed pipeline showing how we combine the structured and unstructured datasets by using knowledge distillation from the structured dataset. In our knowledge distillation approach, a model trained on the unstructured data D_us learns from insights extracted from the structured data through a shallow ANN (Artificial Neural Network) model. In this case, the teacher model M_T is trained on dataset D_t, which is a tabular structured dataset with limited features. The dataset D_t is constructed by the domain experts (see Section III-D). Therefore, the dataset D_t is trustworthy, and the model learns the known limited features while training. The main purpose of the teacher model (M_T) is to provide insights on the unstructured training set (D_us) of the student model (M_S). Initially, we look at the labels of dataset D_us, extract the features from dataset D_t for the corresponding labels, and create a new dataset. Therefore, we get a new dataset D_sn whose labels are identical to the labels of dataset D_us. The student model outputs two-dimensional score vectors for each input. Additionally, the teacher model gives us scores that resemble the output of the student model. Now, these can be utilized to calculate soft probabilities. To soften the probabilities, we make use of the hyper-parameter temperature τ. When τ = 1, the softmax produces its typical output.
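As a rough illustration of how the temperature parameter softens the output distributions and how the soft and hard losses can be combined, a hedged TensorFlow sketch is given below. The exact weighting used in the paper's Equation 12 is not reproduced in the extracted text, so the tau and alpha values here are assumptions, and the tau-squared rescaling follows the usual formulation from Hinton et al. rather than anything stated in this paper.

```python
import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits,
                      tau=4.0, alpha=0.5):
    """Combined distillation loss: cross-entropy on hard labels plus a
    soft-label term from temperature-softened teacher/student outputs."""
    # Hard-label term (standard cross-entropy on the student's predictions).
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, tf.nn.softmax(student_logits))
    # Soft-label term: KL divergence between softened distributions.
    p_teacher = tf.nn.softmax(teacher_logits / tau)
    p_student = tf.nn.softmax(student_logits / tau)
    soft = tf.keras.losses.kl_divergence(p_teacher, p_student)
    # tau**2 rescales the soft-term gradient, as in Hinton et al.
    return alpha * hard + (1.0 - alpha) * (tau ** 2) * soft
```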
However, when we raise τ, the softmax output softens and reveals which classes our teacher model found to be more similar to the predicted class. Hinton et al. [27] called this dark knowledge. The teacher model itself implants this dark knowledge during training. During the distillation process, this dark knowledge is transmitted to the student model, which is built from the unstructured dataset D_us. According to the experiments of the authors [27], the value of τ could be from 1 to 20. The authors found that using the same value of τ for the student and teacher models is likely to return the best results. Formally, let (x, Y) ∈ D_us, where x is an eventnote and Y is the corresponding label. Our student model M_S, given an input x, outputs logits L_S, which can be written as L_S = M_S(x). These logit values are softened using the temperature τ and passed through the softmax function σ to get the soft probabilities, denoted by Ŷ_Sτ = σ(L_S/τ). On the other hand, Y_S = σ(L_S) denotes the hard probabilities used by the CE (cross-entropy) loss. The teacher model M_T outputs a score for each input from dataset D_sn. Assume (I, Y) ∈ D_sn, where I is the set of structured features gathered from the dataset D_t, and Y is the corresponding label, identical to the label in dataset D_us. Therefore, the score provided by the model can be denoted as L_Ti = M_T(I_i); the hard probability distribution for each input is Y_Ti = σ(L_Ti), and the softened probability is then Ŷ_Tτ = σ(L_Ti/τ). The final loss function can now be derived as Equation 12. IV. RESULTS Several classification algorithms and techniques have been used to classify the opioid patients from both the structured and unstructured datasets. A. FROM STRUCTURED DATASET Comparing the ANN (Artificial Neural Network) with traditional machine learning algorithms for classification of the tabular dataset, different values of accuracy were achieved. Table 4 presents the results of the models that have been used to classify the tabular dataset. The traditional machine learning classification algorithms, such as AdaBoost, Logistic Regression, Support Vector, XGB, and Random Forest, provide a training accuracy of 95.3%-96.7% and a testing accuracy of 92.3%-93.4%. The Random Forest classification algorithm gives the best outcome in terms of both training and testing accuracy among the traditional machine learning algorithms. On the other hand, the ANN algorithm provides a training accuracy of 95.9% and a testing accuracy of 95.9%. Comparing the ANN with the traditional classification algorithms, we found that the ANN provides slightly better testing accuracy in classifying the tabular dataset. B. FROM UNSTRUCTURED DATA Several machine learning algorithms and techniques have also been used to classify the unstructured dataset, and several values of accuracy have been achieved. Table 6 presents the results of different classification algorithms, using different approaches, for classifying opioid patients from the unstructured data. Here the results can be divided into two approaches: 1) classification without using KD and 2) classification using KD. For classification of the unstructured data without using KD (knowledge distillation), the 1D-CNN, LSTM, and Hybrid (LSTM + Attention) models achieved testing accuracies of 64.8%, 54.9%, and 61.0%, respectively. By using the KD technique, the outcomes improved for the same classification algorithms.
The testing accuracy for the unstructured data classification using KD technique are 65.9%, 57.2% and 76.44% for 1D-CNN, LSTM and Hybrid (LSTM + Attention) respectively. 1D-CNN has been performed better than other algorithms in both approaches. V. ABLATION STUDY A component of a machine learning architecture may typically be deleted or replaced as part of an experiment called an ablation study [45] to determine how these changes affect the overall performance of the system. The performance of a model may remain stable, improve or get worse when these components are changed. The accuracy can be improved by experimenting with various hyper-parameters like optimizers, learning rates, loss functions and batch sizes. Altering the model's architecture has an effect on overall performance as well. In this study, our suggested model is examined by arbitrarily removing or changing various components and parameters. A. ABLATION STUDY 1: CHANGING HIDDEN LAYERS Between the input and output layers is a layer known as the hidden layer, where artificial neurons receive a series of weighted inputs and generate an output using an activation function. The performance of the model is influenced by the hidden layers. Arbitrarily, we chose a single dense layer, which is a dense output layer. We observed a considerable change in the CNN and attention-based model results if we increased the number of hidden layers. The accuracy of the identical model with three hidden layers LSTM based mode, however, remains the same, although it significantly alters the training accuracy. Table 7 presents the performance of different models for different numbers of hidden layers. B. ABLATION STUDY 2: CHANGING BATCH SIZE The number of training samples used in one iteration is referred to as the batch size. If fewer samples are used to train the network, it uses less memory in the process overall. Minibatches typically help networks train more quickly. We do so because the weights are updated following each propagation. Experimenting with fewer samples shows that a batch size 32 appears to be optimal for all three models. We observe that changing the batch size can reduce test accuracy. Table 7 presents the performance of different models with different batch sizes. C. ABLATION STUDY 3: CHANGING OPTIMIZER We used different optimizers to investigate the performance of our models. We found that the Adam optimizer performed the best among all the optimizers. For all three optimizers, we employ the same learning rate and loss function. For this dataset, SGD [46] did not perform well and RMSprop [47] did not outperform the Adam optimizer. Table 7 represents the performance of different optimizers for the models. D. ABLATION STUDY 4: CHANGING LEARNING RATE The learning rate [48] indicates how frequently the weights are updated during training. The learning rate is a hyperparameter that can be customized and is used to train neural networks. Its value is typically small and positive in the range of 0.0 and 1.0. The learning rate significantly impacts our models performance. For majority of the models, 0.01 is the best learning rate. However, 'the accuracy increased for an attention-based model when the learning rate was 0.1 or 0.001. With such modification in learning rate, the performance is improved. Table 10 shows the performance of the models by using different learning rates. E. ABLATION STUDY 5: CHANGING DROPOUTS A method to prevent neural networks from overfitting is dropout regularization [49]. 
Dropout disables neurons and their associated connections at random. This step may change all the neurons to develop their generalization skills and keep the network from relying too much on individual neurons. We employ dropout in primary layers like LSTM and CNN because our identical models have only one dense layer. When we do not use dropouts in an LSTM-based model, the accuracy improves, but the accuracy declines to the same level when we apply more dropouts. The attention-based model is comparable in terms of test accuracy for close dropouts. For dropout 0.3, it functions a little better. However, if dropout is increased to 0.50 or more, it performs weaker. Table 11 shows the performance of our models using different drop out levels. F. SUMMARY OF THE ABLATION STUDY Identical accuracy is the term that defines a result of a model with default hyperparameters. The accuracy only VOLUME 11, 2023 drops or increases when we change the hyper-parameters from the default. Thus, we labeled Accuracy dropped, Accuracy improved in the table. We get the best accuracy for the 1d CNN based model when the optimizer is Adam, the batch size is 32, 1 hidden layer, the learning rate is 0.01 and the dropout rate is 0.3. The LSTM-Based model performs well when we don't use any dropout, the learning rate is 0.01, the optimizer is Adam, the batch size is 32 and there is 1 hidden layer. The Attention based model obtained 60.4% accuracy which is the maximum for this model. when the learning rate is 0.1 with no dropout layers, the optimizer is Adam, the batch size is 32, and there is 1 hidden layer. G. SELECTING THE BEST MODEL The best model for approach 1 is the regular 1D CNN-based model since it has obtained the maximum accuracy among all the models applied. On the other hand, the gradient boosting classifier is the best model for approach 2. Approach 1 is a direct classification approach from raw event note data without assistance of a pre-trained model. The performance of the LSTM and the attention based models is comparable and cannot surpass the accuracy of the 1D CNN-based model. we obtained 62.63% accuracy in this case with a 1D CNNbased model. Approach 2 is an approach where we rely on a pre-trained neural model named Stanza. Here, we extract the features (test, problems, treatments) using the stanza model, relying on Stanza to train the model on the same data. In this case, the gradient boosting algorithm obtained the maximum accuracy of 74%. VI. DISCUSSION Opioids are a class of drugs used for the relief of pain and many studies show that the use of opioids in USA is increasing day by day [50].Opioid actually stops the pain signals between the brain and the body which may have long term consequences like addiction and even death. In our study, we find that ethnicity has a strong correlation with users' opioid behavior (see Table 3). We also find in a few studies that different ethnic groups have different number of opioid related deaths. In 2018-2019, the distribution was 73% nonHispanic White, 15% non-Hispanic Black, 7% Hispanic, and 6% other ethnicity communities [51]. A significant increase in death rate of around 38% had been observed for non-Hispanic Black individuals from 2018 to 2019, but there was no change overall among the other ethnic groups [51]. Other studies [52] found that there are some relationship between ethnicity and intentional behavior in terms of using opioid intake. 
Patients who belong to the Hispanic/Latino, White Russian, White Brazilian, Native Hawaiian, and other Pacific Islander groups are more likely to show 'No' intentional opioid intake behavior. On the other hand, Black/African American and a few other patient groups show a larger rate of 'YES' opioid intake behavior. Several studies [8], [9], [10], [52] show that there is a strong connection between users' mental health status and their opioid intake pattern. Among the 239.4 million U.S. adults, 38.6 million had a mental health disorder. Among the adults with mental health disorders, 18.7% are opioid users, compared with only 5.0% among those without mental health disorders [52]. The study also shows that approximately 115 million opioid prescriptions are distributed each year in the US, 51.4% (60 million prescriptions) of which are received by adults who have a mental health disorder [52]. Our study is compatible with these previous findings. We also found a strong relationship between patients' mental health and the intentionality of their opioid use. Almost every one of the patients who suffers from depression, hypertension, or bipolar disorder has a tendency to misuse opioids intentionally. On the other hand, patients without mental health issues are not likely to misuse opioids or overdose. Another interesting aspect is that mental health and social determinants are controlling factors of drug abuse. Studies [53], [54] have shown that individuals with low levels of education and those who fall into high unemployment and poverty categories are at a greater risk of opioid abuse. Additionally, it is suggested that people with higher socioeconomic status are more prone to opioid abuse disorder than those with lower socioeconomic status. We used several models, LSTM, CNN, and a hybrid model, to classify the intentional opioid intake pattern from the unstructured data. Among the three models, the CNN model is able to classify the unstructured data most accurately. The LSTM and Hybrid models need a larger dataset to train accurately, and with only 453 patients' histories available, those two models obtained accuracies in the range of 52%-54%. On the other hand, the CNN is a comparatively simple model and captures the word sequences better than the LSTM in this setting; therefore, the 1D CNN model was able to reach an accuracy of 64%. Figures 3 and 4 show the word-cloud patterns in the histories of patients who take opioids intentionally (YES/NO). If we observe the problems, tests, and treatments for both classes, we see many familiar entities between the two categories. Therefore, we may find some overlap in the features of the two categories. However, as the word clouds show, the two classes share almost the same features, which makes them difficult to classify. Figure 3 shows that the 'No' intentional opioid users have some indication of mental health issues. On the other hand, Figure 4 shows that the 'YES' intentional opioid users have a strong tendency toward mental health issues in their history, such as depression and mental status findings. Our study has a number of shortcomings. We prepared independent models, but a combined model could improve the accuracy. In our study, we identify that opioid usage has an association with users' mental health issues. However, our models do not identify which opioid is associated with which mental health issue (e.g., depression, obsessive-compulsive disorder, schizophrenia). VII. CONCLUSION Opioid use is a global crisis among young and older people.
In our study, we have built a dataset by using MIMIC-III dataset where we have narrowed down a total of 26 relational tables to only 41 features. We then identified correlated features in terms of opioid intentional YES/NO user. We identified three important features which have a strong correlation. In this way, we built a tabular dataset which demonstrated a good performance in predicting users' opioid intake behavior. We have also built a deep learning based model to predict users' opioid intake behavior from their historical information (event notes). By using our tabular model, we have obtained an accuracy of 93% by using random forest classifiers. Later, by using our deep learning (i.e., 1D CNN, LSTM+ attention), we have obtained an accuracy of 66% data from patients' unstructured historical data. After using the knowledge distillation mechanism of the tabular model over the deep learning based model, we have obtained an overall accuracy of 76.44%. We found some interesting correlations with users' mental health issues. There are a number of avenues to further improve our studies. We may increase the size of our dataset which might require more manual work to discover opioid intake YES users. SADDAM AL AMIN received the B.Sc. degree in physics from Southeast Missouri State University, Missouri, MO, USA, in 2016, and the M.Sc. degree in computer science from United International University, Dhaka, Bangladesh, in 2022. He is currently pursuing the master's degree in information systems with the University of Maryland. His thesis defense works on the verge of completion at UIU. He is also working as a Federal Contractor with the Health Organization, USA. He is supporting as a Data Analyst with the Data and Evaluation Division. His research interests include data analytics inclined to the health domain epidemiology, health disparities, and public health. He is the author of more than ten peer-reviewed publications in international journals and conferences. His research interests include machine learning, natural language processing, data science, image processing, and human-computer interaction. MOHIUDDIN AHMED (Senior Member, IEEE) has been educating the next generation of cyber leaders and researching to disrupt the cybercrime ecosystem. He has edited several books and contributed articles to The Conversation. His research publications in reputed venues attracted more than 2500 citations and have been listed in the world's top 2% scientists for the 2020-2021 citation impact. He secured several external and internal grants worth more than A$1.3 Million and has been collaborating with both academia and industry. He has been regularly invited to speak at international conferences and public organizations and interviewed by the media for expert opinion. His research interests include ensuring national security and defending against ransomware attacks. SAMI AZAM is currently a leading Researcher and a Senior Lecturer with the College of Engineering and IT, Charles Darwin University, Casuarina, NT, Australia. He has number of publications in peer-reviewed journals and international conference proceedings. His research interests include computer vision, signal processing, artificial intelligence, and biomedical engineering. VOLUME 11, 2023
2022-12-21T16:04:28.753Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "a4668e948c10c6ec0445944bab917201f2b1d445", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09991956.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "171575b29f696141a52ed7d95c0404201865de18", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
3626536
pes2o/s2orc
v3-fos-license
A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network. INTRODUCTION Robot manipulators were widely investigated and applied to many fields Zhang and Zhang, 2012;Zhang, 2013, 2014a;Jin and Zhang, 2015;Zhang et al., 2015;Yamada et al., 2016), such as human-robot interaction, path tracking, industrial manufacturing, military, repetitive motion, and so on. Many researches have been focused on this topic, and various kinds of robot manipulators have been developed and investigated (Li et al., 2012Xiao and Zhang, 2013;Jin and Zhang, 2015;Zhang et al., 2015). As far as we know, there are some manipulation tasks (including large, heavy, awkwardly sized payloads) that cannot be fulfilled by only a single robot manipulator. In contrast, dual robot manipulators can not only complete some common tasks but also can finish some complex and dangerous things that the single robot manipulator is usually hard to finish Li et al., 2012Jin and Zhang, 2015). In addition, dual robot manipulators have been successfully applied to various applications (Jin and Li, 2016;Zhang et al., 2013Zhang et al., , 2015Xiao and Zhang, 2014b;Jin and Zhang, 2015;Jin et al., 2016a), e.g., load transport, cooperative assembly, dextrous grasping, coordinate welding. Therefore, using dual robot manipulators to collectively conduct complicated tasks is becoming increasingly popular. It is well known that inverse kinematics of robot manipulators (including dual manipulators) is a much more difficult problem than forward kinematics, but it is a fundamental issue in the field of robotics (also including dual robot manipulators). Generally speaking, there are two types of good methods for addressing the inverse kinematic problem. One is based on the pseudoinverse method that includes a homogeneous solution and a specific minimum-norm solution (Klein and Kee, 1989;Klein and Ahmed, 1995). However, the traditional pseudoinverse method needs to compute the inverse/pseudoinverse of matrices, which usually costs a lot of time. In addition, this method would lead to the joint angle drift when the end-effector completes a repetitive motion (Klein and Ahmed, 1995). The second method is based on optimization techniques, which treat performance criteria as objective functions (Jin and Li, 2016;Zhang et al., 2004;Guo and Zhang, 2012;Xiao and Zhang, 2013, 2014a. 
Among the existing schemes, single performance criterion is widely used to control the motion of manipulators at different joint levels, such as repetitive motion Zhang, 2013, 2014a), manipulability (Jin and Li, 2016), obstacle avoidance (Xiao and Zhang, 2016), minimum velocity norm (Guo and Zhang, 2012), and minimum torque norm (Zhang et al., 2004). It is worth pointing out that single criterion optimization schemes cannot satisfy multiple requirements in practical applications, so dual-criteria optimization schemes are needed (Hou et al., 2010). Besides, considering the importance of the repetitive motion control for dual robot manipulators, it also requires an effective criterion for solving the joint-angle drift problem of dual robot manipulators in practical applications Zhang, 2013, 2014a;Zhang et al., 2013). To satisfy the above requirements, in this article, a novel bi-criteria optimization scheme is presented and investigated for coordinated path tracking of dual robot manipulators at the joint velocity level, of which the bi-criteria consist of the minimum velocity motion (MVN) and the repetitive motion (RM). Note that the proposed optimization scheme consists of two subschemes (corresponding to the left and right manipulators). Besides, such two subschemes can be rewritten as two general quadratic programs (QPs), which is further integrated into one QP formulation. There are a lot of methods to solve the above QP problems, such as numerical algorithms, recurrent neural networks (RNN), and so on. Although the numerical algorithms can iterate good solutions, they are not suitable for real-time implementations due to their series characteristic and computational complexity. As an efficient computation tool, the neural network approach has several potential advantages in real-time applications (Li et al., 2013a,b;Xiao and Zhang, 2014c;Xiao, 2015Xiao, , 2016aXiao and Lu, 2015;Jin et al., 2016bXiao and Liao, 2016), such as parallel processing, hardware implementation ability, and distributed storage. For example, a gradient-based neural network (GNN) has been widely used to solve various challenging mathematical problems (Zhang et al., 2009;Xiao and Zhang, 2011;Yi et al., 2011;Li et al., 2013c;Xiao, 2016c). Considering the advantages of this method, GNN is developed and applied for solving the proposed bi-criteria optimization scheme and the unified QP problem. Finally, on the basis of a dual three-link planar manipulator, we conduct circular path tracking simulations using such a GNN model and the proposed bi-criteria optimization scheme. The computer simulation results further verify the feasibility and effectiveness of the proposed scheme for coordinated path tracking of dual robot manipulators using the recurrent neural network. PRELIMINARIES The forward kinematic equations of the robot manipulators at the position level and the velocity level can be expressed, respectively, as follows Zhang and Zhang, 2012;Zhang, 2013, 2014a;Jin and Zhang, 2015;Zhang et al., 2015): where θ(t) ∈ R n andθ(t) ∈ R n denote the joint position vector and the joint velocity vector, respectively; r(t) ∈ R m andṙ(t) ∈ R m denote the end-effector position vector and the end-effector velocity vector, respectively; Jacobian matrix J(θ) = ∂f(θ(t))/∂θ ∈ R m×n ; and f (·) denotes a smooth non-linear function. 
For example, for a three-link planar robot manipulator, we can readily get the forward-kinematic equation (the independent variable t is omitted for presentation convenience): where θ = [θ 1 , θ 2 , θ 3 ] T ∈ R 3 , r ∈ R 2 , l 1 denotes the length of the first link, l 2 denotes the length of the second link, and l 3 denotes the length of the third link. In addition, the variables depicted in the above are defined as c 1 = cos(θ 1 ), s 1 = sin(θ 1 ), The Jacobian matrix of f (·) can be solved in this situation by differentiating (1): Note that, in this article, we are concerned with the dual robot arms. Without loss of generality, one is called the left manipulator and the other is called the right manipulator for convenience. Therefore, the variables of the left and right robot manipulators of dual arms are correspondingly marked by subscripts l and r . For example, variables θ l and θ r denote the joint position vectors of the left and right robot manipulators of dual arms, respectively. In Section 5, we set l 1 = l 2 = l 3 = 1 m. SCHEME FORMULATION For simplicity, the bi-criteria scheme of one robot manipulator is firstly proposed. To integrate the optimization criteria of the minimum velocity norm (MVN) and the repetitive motion (RM), a bi-criteria optimization objective at the velocity level is designed as minimize ∥θ l/r ∥ 2 2 /2 + ∥θ l/r + q l/r ∥ 2 2 /2, where q l/r = ϵ(θ l/r − θ l/r (0)) with ϵ > 0. Besides, performance index ∥θ l/r ∥ 2 2 can achieve the minimum velocity motion of robot manipulators, and performance index ∥θ l/r + q l/r ∥ 2 2 /2 can complete the repetitive motion task at the joint velocity level. For the left robot manipulator, considering the forward kinematics equation and the above bi-criteria optimization objective, the bi-criteria optimization scheme can be formulated as below: whereθ l , q l , J l (θ), andṙ l are defined the same as before, but belong to the variables of the left robot manipulator. Equation (5) uses the bi-criteria optimization objective (equation (4)); and equation (6) is the forward kinematics equation (2) of the left robot manipulator of dual arms. For the right robot manipulator, the bi-criteria optimization scheme can be formulated as below in the same way: subject to J r (θ)θ r =ṙ r , whereθ r , q r , J r (θ), andṙ r are defined the same as before, but belong to the variables of the right robot manipulator. QP REFORMULATION AND UNIFICATION In this section, to obtain two standard QP formulations, the proposed subschemes are rewritten as two QPs, which can be unified into one QP problem. (1) Conversion of MVN criterion: according to definition of two norms, minimizing ∥θ l ∥ 2 2 /2 in the first term of equation (5) for the left robot manipulator is equivalent to minimizeθ T l Iθ l 2 , where I ∈ R n×n denotes an identity matrix. Similarly, MVN criterion ∥θ r ∥ 2 2 /2 in the first term of equation (7) for the right robot manipulator is equivalent to minimizeθ T r Iθ r 2 . (2) Conversion of RM criterion: the RM criterion ∥θ l + q l ∥ 2 2 /2 in the second term of equation (5) for the left robot manipulator is rewritten equivalently as which is further equivalent to the following form: where q T l q l can be deemed as a constant with respect to optimization variableθ and can be ignored during minimization. 
Thus, the RM criterion ∥θ̇_r + q_r∥_2^2/2 of the right robot manipulator can be written equivalently in the following form. [FIGURE 1 | Simulation results when the dual three-link manipulator tracks the given circular path synthesized by the bi-criteria optimization scheme (equations (19) and (20)) and GNN model (equation (23)).] Thus, through the above conversion, the bi-criteria optimization subscheme for the left robot manipulator can be formulated as the following standard QP: where x_l = θ̇_l ∈ R^n, Q_l = 2I ∈ R^(n×n), q_l = ϵ(θ_l − θ_l(0)) ∈ R^n, A_l = J_l(θ) ∈ R^(m×n), and b_l = ṙ_l. Similarly, the bi-criteria optimization subscheme of the right robot manipulator is presented as: where x_r = θ̇_r ∈ R^n, Q_r = 2I ∈ R^(n×n), q_r = ϵ(θ_r − θ_r(0)) ∈ R^n, A_r = J_r(θ) ∈ R^(m×n), and b_r = ṙ_r. Finally, the two QPs presented for the left and right robot manipulators of the two arms are unified into a new QP formulation, i.e., minimize z^T W z/2 + ω^T z, where the coefficient matrices (or vectors) are defined as below. RECURRENT NEURAL NETWORK SOLVER Note that there are many methods to solve such a standard QP problem. The most common approach is to use a Lagrange multiplier and to minimize a cost function (Li et al., 2013c). [FIGURE 2 | Simulation results when the dual three-link manipulator tracks the given circular path synthesized by the bi-criteria optimization scheme (equations (19) and (20)) and GNN model (equation (23)).] For the unified QP (equations (19) and (20)), its related Lagrangian is presented as follows: where λ ∈ R^(2m) denotes the multiplier variable. It is well known that solving the quadratic optimization (equations (19) and (20)) can be achieved by zeroing the following equations. The above equations are further equivalent to the following linear system. Note that there are a lot of methods to solve the above linear equation system (equation (21)). In this part, a gradient-based neural network (GNN) is presented and investigated for solving the proposed bi-criteria optimization scheme and the finally equivalent equation (21). Following the literature (Zhang et al., 2009; Xiao and Zhang, 2011; Yi et al., 2011; Li et al., 2013c; Xiao, 2016c), the design procedure of the GNN is listed below. First, a non-negative scalar-valued energy function Ω is defined as follows. Second, the negative gradient of Ω can be computed as −∂Ω/∂y = G^T(Gy − u). Finally, according to the gradient neural network design formula ẏ = −γ∂Ω/∂y, the GNN model for the dynamic inverse kinematics problem can be described as follows: where y ∈ R^(2n+2m) denotes the neural state of the GNN model (equation (23)). SIMULATIVE VERIFICATIONS In this part, the unified bi-criteria optimization scheme (equations (19) and (20)) is applied to a dual three-link planar manipulator and solved by the presented GNN model (equation (23)). In the computer simulations, the end-effectors of the dual manipulators are expected to simultaneously track a circle. Without loss of generality, the design parameters are ϵ = 10 and γ = 10^7; the task execution time is 8 s, and the radius of the desired circle is 0.25 m. Besides, the joints of the left and right manipulators are expected to begin with the initial states θ_l(0) = [3π/4, −2π/5, −π/4]^T rad and θ_r(0) = [π/3, 2π/5, π/4]^T rad, respectively.
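As a minimal numerical sketch of the gradient-neural-network dynamics described above, the following Python code integrates ẏ = −γ Gᵀ(Gy − u) for a fixed coefficient matrix G and vector u. In the actual scheme the coefficients come from equation (21) and vary with the joint configuration (through the Jacobians), and the paper's equations are not reproduced in the extracted text, so the toy system below is purely illustrative; the solver choice and tolerances are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gnn_solve(G, u, y0, gamma=1.0e7, t_final=8.0):
    """Integrate the GNN dynamics dy/dt = -gamma * G^T (G y - u),
    which drives the energy ||G y - u||^2 / 2 toward zero.
    gamma and the 8 s horizon follow the paper's simulation settings."""
    def dynamics(t, y):
        return -gamma * G.T @ (G @ y - u)
    sol = solve_ivp(dynamics, (0.0, t_final), y0, method="LSODA",
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]  # neural state at the end of the task duration

# Minimal usage with a toy 2x2 system (not the manipulator's matrices):
G = np.array([[2.0, 1.0], [1.0, 3.0]])
u = np.array([1.0, 2.0])
print(gnn_solve(G, u, y0=np.zeros(2), gamma=10.0, t_final=5.0))
```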
The computer simulations are illustrated in Figures 1-3, which is solved by the proposed bi-criteria optimization scheme and the presented recurrent neural network. Specifically, Figure 1 shows the whole motion trajectories of the dual three-link planar manipulators when the end-effectors track the given circular path. As seen from Figure 1A, the circular path-tracking task is performed successfully by the dual three-link planar manipulators. In addition, from Figure 1B, we can see that the final state and the initial state of the dual three-link planar manipulators coincide with each other. Figure 2 shows the joint-variable (including joint angle and joint velocity) profiles during the task execution of the dual three-link planar manipulators. From this figure, we can conclude that the proposed bi-criteria optimization scheme [synthesized by GNN model (equation (23))] can not only solve the jointangle drift problem but also prevent the occurrence of high joint velocity in this path-tracking task. Specifically, after the endeffectors completing the circular-path tracking task, the final joint states of the left and right manipulators return to their initial states, which can be seen in Figures 2A,B. In addition, from Figures 2C,D, we can observe that the situation of the high joint velocity does not happen, and the final velocity of each joint for the dual three-link manipulators is equal to zero. It is worth pointing out that, if the final joint velocities is not equal to zero, the manipulator' joints will not stop immediately at the end of the task duration; and thus, the non-repetitive problem would happen. These results demonstrate and verify the effectiveness of such a bi-criteria optimization scheme synthesized by GNN model (equation (23)). For further verifying the accuracy of the proposed bi-criteria optimization scheme and GNN model (equation (23)), Figure 3 shows the corresponding position error ε(t): = r(t) − f (θ(t)) and the velocity errorε(t) of the left robot manipulator and the right robot manipulator, where ε X and ε Y denote, respectively, the X-axis and Y-axis components of ε(t). As observed from Figures 3A,B, the corresponding X-axis and Y-axis components of position errors for the left robot manipulator and the right robot manipulator are less than 2 × 10 −5 m. Besides, from Figures 3C,D, we can obtain that the X-axis and Y-axis components of velocity errors for the left robot manipulator and the right robot manipulator are less than 6 × 10 −6 m. These demonstrate that the given circular path tracking task is fulfilled well via the proposed velocity-level bi-criteria optimization scheme. In summary, the end-effector tasks are performed very well by synthesizing the proposed velocity-level bi-criteria optimization scheme. The detailed results verifies the effectiveness and applicability of the proposed bi-criteria optimization scheme for coordinated path tracking of dual redundant robot manipulators using the recurrent neural network. CONCLUSION In this article, a novel velocity-level bi-criteria optimization scheme (i.e., integrating minimum velocity norm and repetitive motion) has been proposed and investigated for complex motion planning of dual robot manipulators. Such a bi-criteria optimization scheme can not only prevent the occurrence of high joint-velocity but also remedy the joint angle drifts of dual redundant robot manipulators well. In addition, the proposed scheme guarantees the joint velocity equals zero at the end of path tracking motion. 
To do so, two subschemes have been presented for the left and right robot manipulators, which are reformulated as two general quadratic programs (QPs). Then, such two general QP problems have been further unified into one standard QP formulation. Simulative results based on the dual three-link robot manipulators have substantiated the efficacy and applicability of the proposed velocity-level bi-criteria optimization scheme. The future work may lie in the applications of the bi-criteria optimization scheme to real robot manipulators. AUTHOR CONTRIBUTIONS LX: experiment preparation, data acquisition and processing, and publication writing; YZ: experiment preparation, data processing, and publication drafting; BL: experiment technology support and publication review; ZZ and LD: experiment preparation and publication review; LJ: experiment preparation, data acquisition, and publication review.
2017-09-05T08:42:23.656Z
2017-09-04T00:00:00.000
{ "year": 2017, "sha1": "d70d8244316289d059dc8742c11f1802c033e356", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnbot.2017.00047/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d70d8244316289d059dc8742c11f1802c033e356", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
7450399
pes2o/s2orc
v3-fos-license
Identification of a novel human Rad51 variant that promotes DNA strand exchange Rad51 plays a key role in the repair of DNA double-strand breaks through homologous recombination, which is the central process in the maintenance of genomic integrity. Five paralogs of the human Rad51 gene (hRad51) have been identified to date, including hRad51B, hRad51C, hRad51D, Xrcc2 and Xrcc3. In searches of additional hRad51 paralogs, we identified a novel hRad51 variant that lacked the sequence corresponding to exon 9 (hRad51-Δex9). The expected amino acid sequence of hRad51-Δex9 showed a frame-shift at codon 259, which resulted in a truncated C-terminus. RT-PCR analysis revealed that both hRad51 and hRad51-Δex9 were prominently expressed in the testis, but that there were subtle differences in tissue specificity. The hRad51-Δex9 protein was detected as a 31-kDa protein in the testis and localized at the nucleus. In addition, the hRad51-Δex9 protein showed a DNA-strand exchange activity comparable to that of hRad51. Taken together, these results indicate that hRad51-Δex9 promotes homologous pairing and DNA strand exchange in the nucleus, suggesting that alternative pathways in hRad51- or hRad51-Δex9-dependent manners exist for DNA recombination and repair. INTRODUCTION Homologous recombination (HR) is a fundamental process conserved in all organisms, maintaining genomic stability through the repair of exogenous and endogenous DNA double-strand breaks. HR also contributes to genomic diversity in evolution through its pivotal roles in the exchange of chromatids during meiosis (1). In addition, dysregulation of HR may lead to aberrant genetic rearrangements and genomic instability, resulting in translocations, deletions, duplications or loss of heterozygosity (2). Precise control of the HR equilibrium is therefore essential for genetic stability because both HR stimulation and repression lead to genome instability (3). Rad51, a eukaryotic ortholog of bacterial RecA, plays a central role in the repair of double-strand DNA breaks by mediating homologous pairing and strand exchange in recombinatory structures known as Rad51 foci in the nucleus (4). Rad51 belongs to the Rad52 epistasis group in Saccharomyces cerevisiae, which is comprised of a number of the key genes (Rad50 to Rad57) involved in recombinational repair of double-strand DNA breaks (5). Among the members of the Rad52 epistasis group, Rad51 shows the highest degree of sequence conservation in evolution, with 83% amino acid sequence homology between yeast and human orthologs and 99% homology between mouse and human orthologs (6). The functional importance of Rad51 has been further emphasized by the findings that Rad51 interacts with the tumor suppressor protein, p53 (7,8), and the breast cancer-susceptibility proteins, BRCA1 and BRCA2 (9)(10)(11). Additionally, elevated levels of hRad51 have been observed in a variety of tumor cells (12)(13)(14), suggesting that strict regulation of this recombinase may be essential for maintaining genome integrity. To date, five human Rad51 (hRad51) paralogs, Rad51B (Rad51L1), Rad51C (Rad51L2), Rad51D (Rad51L3), Xrcc2 and Xrcc3, have been identified. Each of these genes shows only a limited degree of sequence similarity to hRad51, however, they all contain the RecA domain for DNA recombination and the Walker A and B motifs for ATP binding and hydrolysis in the predicted amino acid sequences (15)(16)(17)(18). 
These hRad51 paralogs have presumably arisen through a series of gene duplications in the early stages of eukaryotic evolution (19). In addition, the five hRad51 paralogs have been reported to assist the DNA strand exchange activity of hRad51, forming two distinct complexes, Rad51B-Rad51C-Rad51D-Xrcc2 and hRad51C-Xrcc3 (20). Deficiency in any of the Rad51 paralogs has been shown to lead to increased sensitivity to DNA cross-linking agents and ionizing radiation in vertebrate cells (21)(22)(23). In an attempt to identify additional hRad51 paralogs in humans, we searched a human testis cDNA library. We report here a novel splice variant of hRad51, hRad51-iex9, which lacks the sequence corresponding to exon 9. This novel variant was also found in the expressed sequence tag (EST)-databases. The hRad51-iex9 protein was localized in the nucleus and detected as an expected molecular weight of 31 kDa in the testis. The hRad51-Áex9 protein showed DNA strand exchange activity that was comparable to that of hRad51, suggesting that this novel variant also functions as a recombinase. Additionally, using site-directed mutagenesis, we found that a short basic motif located in the C-terminus of hRad51-Áex9 may play a functional role in nuclear localization of this novel variant. Identification of hRad51-Dex9 A human testis 5 0 -stretch cDNA library (Clontech) was screened using a hRad51 cDNA probe. The cDNA probe was P 32 -labeled by random primer labeling, and hybridization was conducted in 50% formamide, 5Â SSPE (1Â SSPE: 150 mM sodium chloride, 10 mM sodium phosphate, 1 mM EDTA, pH 7.4), 10Â Denhardt's solution, 2% SDS and 100 mg/ml denatured salmon sperm DNA at 428C for 16 h. The filters were washed twice in 2Â SSC (1Â SSC: 150 mM sodium chloride, 15 mM sodium citrate, pH 7.0), 0.1% SDS at room temperature and then twice in 0.2 Â SSC, 0.1% SDS at 428C. Next, the filters were exposed to Kodak XAR film at -708C for varying periods of time. The positive phage clones were then sequenced using an ABI 310 automated DNA sequencer. The human EST database was also searched for identification of hRad51 paralogs using the BLASTN program (http://www.ncbi.nlm.nih.gov/cgi-bin/BLAST). The EST AI018041 clone was purchased from Open Biosystems. The nucleotide sequence reported in this paper will appear in the GenBank under accession number EU362635. RT-PCR analysis in human tissues Human Multiple Tissue cDNA panels (Clontech) were PCR-amplified using ExTag polymerase (Takara) with primers specific to both hRad51 and hRad51-Dex9 (forward: 5 0 -tttggagaattccgaactgg-3 0 ; and reverse: 5 0 -aggaagac agggagagtcg-3 0 ), which were derived from the flanking regions of exon 9. The reaction mixture was subjected to 30 cycles of 948C for 30 s, 588C for 30 s and 728C for 40 s with a predenaturation at 948C for 4 min and a final extension at 728C for 7 min. The amplified PCR products were then analyzed by electrophoresis on 2.0% agarose gels. Expression and purification of the recombinant hRad51 and hRad51-"ex9 proteins The full-coding sequences of hRad51 and hRad51-Dex9 were PCR-amplified from recombinant phage clones using Pfu DNA polymerase (Stratagene) according to the manufacturer's instructions. The sequences of the oligonucleotide primers are available upon request. A unique restriction site, either NotI or BamHI, was introduced into each primer to allow convenient subcloning. 
The PCRamplified fragments were then gel-purified and ligated into pET28b (Novagen) or pET21c (Novagen) at the NotI and BamHI restriction sites in frame with the C-terminal hexahistidine tag. The resulting expression constructs were then confirmed to contain the desired sequences by DNA sequence analysis using the BigDye termination version 3.0 (ABI). Among the expression constructs, pET28b-hRad51 and pET21c-hRad51-Áex9 were used for expression of the hRad51 and hRad51-Áex9 proteins, respectively. The Escherichia coli strain, BL21 (DE3) (Novagen), was used for transformation of the pET-derived expression constructs. The recombinant proteins were expressed and purified as previously described (24). However, the hRad51-Áex9 protein resulted in the formation of inclusion bodies. Denaturing and refolding of the hRad51-Áex9 protein into an enzymatically active form were done as previously published for other human proteins (25). Briefly, the inclusion bodies were precipitated by centrifugation at 8000 g for 20 min and then homogenized in 6 M urea, 10 mM K 2 HPO 4 , pH 8.2 and 3 mM b-mercaptoethanol. The solubilized recombinant proteins were then purified using Ni-NTA agarose resins (Qiagen). For refolding, the denatured hRad51-Áex9 protein was first dialyzed overnight against a buffer of 10 mM K 2 HPO 4 , pH 9.6, 200 mM CuCl 2 and 2% sodium N-lauroylsarcosinate and then against a buffer of 10 mM K 2 HPO 4 , pH 9.6 and 5 mM CuCl 2 . Next, the proteins were further dialyzed twice against 10 mM K 2 HPO 4 , pH 7.0. The concentration of the dialyzed protein samples was then determined using a BCA Protein Assay Kit (Bio-Rad). All of the purification procedures were conducted at 48C. The purity and size of the recombinant proteins were assessed by SDS-PAGE. The purified recombinant proteins were further confirmed by western blot analysis using a commercial hRad51 polyclonal antibody (Calbiochem). DNA strand exchange assays DNA strand exchange assays were done as previously described (26,27). Briefly, the recombinant hRad51 or hRad51-Áex9 protein (final concentration, 3.5 mM) was mixed with 125 ng (final concentration, 16.8 mM in nucleotides) of fX 174 viral DNA (New England Biolabs) in 20 ml buffer containing 20 mM HEPES, pH 6.5, 1 mM DTT, 6.6 mM MgCl 2 , 3 mM ATP, 20 mM creatine phosphate, 0.1 mg/ml creatine kinase and 50 mg/ml BSA. After 5 min of incubation at 378C, 120 ng (final concentration, 8.4 mM in base pairs) of PstI-linearized fX 174 dsDNA (New England Biolabs) in 1 ml and 1 ml of 100 mM MgCl 2 were added to the reaction mixture. Following subsequent incubation for 15, 30, 60, 120 or 240 min at 378C, 0.5% SDS and 0.5 mg/ml proteinase K were added to stop the exchange reaction. The incubated DNA samples were then run in 0.8% agarose gels. The gels were stained with 0.1 mg/ml of syber green (Molecular Probe) for 2 h and then distained in ddH 2 O for 2 h. Images were processed using Photoshop 7.0 (Adobe). Generation of a hRad51-"ex9-specific polyclonal antibody A synthetic peptide (EERKRGNQNLQNLRLS) was covalently conjugated to maleimide-activated keyhole limpet homocyanin. The peptide conjugate was then emulsified with an equal volume of complete Freund's adjuvant. Adult rabbits of 1.8-2.0 kg in weight were intramuscularly injected with 500 mg of the emulsified peptide conjugate four times at a 2-week interval. The rabbits were bled on Days 7 and 14 after the last injection, and the presence of antibodies was then evaluated using an ELISA assay. 
The antibodies were then purified using a Protein A Agarose Kit (KPL) according to the manufacturer's instructions. Western blot analysis in human tissues Human tissue specimens were homogenized in a lysis buffer containing 50 mM Tris, pH 7.5, 150 mM NaCl, 2% SDS, 1 mM EDTA, 1 mM PMSF, 1 mM aprotinin and 1 mM chymostatin. The protein concentrations of the tissue extracts were determined using a BCA Protein Assay Kit (Bio-Rad). For western blot analysis, 100 mg of tissue extracts was subjected to 12.5% SDS-PAGE and then immunoblotted onto a nitrocellulose membrane (Amersham Bioscience). The membranes were then blocked in Tris-buffered saline Tween-20 (TBST) containing 5% skimmed milk for 1 h at room temperature, after which they were incubated with the hRad51-Áex9-specific antibody, a commercial hRad51 polyclonal antibody (Calbiochem), or preimmune serum for 1 h at room temperature. The protein bands were visualized using an ECL detection system (Amersham-Pharmacia Biotech), and GAPDH was used as an internal control. Subcellular localization of hRad51-"ex9 Mammalian expression constructs of hRad51 and hRad51-Dex9 were generated by PCR-amplifying their full coding sequences from recombinant phage clones using Pfu DNA polymerase (Stratagene) according to the manufacturer's instructions. The sequences of the oligonucleotide primers are available upon request. A unique restriction site, either SacI or BamHI, was introduced into each primer for convenient subcloning. The PCR-amplified DNA fragments were then ligated into pEGFP-C1 (BD Biosciences) in frame with the N-terminal GFP tag. The resulting constructs were transiently transfected into COS-7 cells that were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 100 mg/ml of streptomycin and 100 U/ml of penicillin. At 4-10 h posttransfection, the cells were washed with phosphate-buffered saline (PBS) and then fixed with 4% paraformaldehyde for 5 min at room temperature. The fixed cells were rinsed twice with PBS, permeabilized by incubation in 0.2% Triton X-100 for 10 min and then rinsed three times with 0.1% BSA in PBS. Nuclei were stained with propidium iodide (1:1000) (Molecular Probes), and confocal microscopic analysis was performed using a Zeiss LSM510 laser-scanning microscope. Mutagenesis of hRad51-"ex9 Site-directed mutagenesis was performed using a PCRbased DpnI-treatment method that has been previously described (28). Mutagenic primers were designed to create R264A, K265Q and Del264RK in the amino acid sequence of hRad51-Áex9. The sequences of the oligonucleotide primers are available upon request. Thermocycling was conducted using Pfu DNA polymerase (Stratagene) according to the manufacturer's suggestions. The creation of mutations in the hRad51-Dex9 cDNA was confirmed by sequence analysis using the BigDye termination version 3.0 (BD Biosciences). To construct a C-terminal deletion mutant of hRad51, the sequence corresponding to codons 1 to 258 of hRad51 was PCR-amplified using Pfu DNA polymerase (Stratagene) with the following PCR primers: forward, 5 0 -ccgagctcgaatggcaatgcagatgcagc-3 0 ; and reverse, 5 0 -cgcggatcctcactcatcagcgagtcgcag-3 0 . A unique restriction site, either SacI or BamHI, was introduced into each primer to allow convenient subcloning. The PCRamplified DNA fragments were then ligated into pEGFP-C1 (BD Biosciences) in frame with the N-terminal GFP tag. 
Identification of hRad51-Dex9 The hRad51 gene is composed of 10 exons that encode a 339-amino acid polypeptide with a calculated molecular mass of 37 kDa. In an effort to identify additional hRad51 paralogs in humans, we searched a human testis cDNA library using a hRad51 cDNA probe with low stringency and obtained seven autoradiographically positive phage recombinants (data not shown). Sequence analysis of the recombinants revealed that all of the isolated clones were hRad51 cDNAs. However, one clone that contained a 1661-bp insert showed an exon-intron structure distinct from that of hRad51, specifically lacking the sequence corresponding to exon 9 of hRad51 ( Figure 1). This novel splice variant of hRad51, termed hRad51-Dex9, was also identified in searches of the human EST databases (EST ID number: AI018041). We conducted complete sequencing of EST AI018041 that was obtained from a commercial source and subsequently confirmed that the hRad51-Dex9 cDNA was identical to AI018041, with the exception that hRad51-Dex9 contained longer 5 0 -and 3 0 UTR sequences than the EST AI018041 clone. The 5 0 -UTR of hRad51-Dex9 is at least 299 bp, the coding region is 843 bp and the 3 0 -UTR is 469 bp. The deletion of exon 9 causes a frame-shift at codon 259, which leads to premature termination at codon 281. The expected amino acid sequence of the hRad51-Áex9 protein consists of codons 1 to 258 of hRad51 and 22 'out of frame' codons from exon 10, containing the Walker A and B ATP-binding motifs at residues 127-135 and 218-222, respectively (Figure 2). In addition, a basic motif that is composed of one lysine and two arginine residues is located at residues 303-306 of hRad51, and a similar basic motif is found at residues 264-266 in the newly created C-terminus of hRad51-Áex9 (Figure 2). RT-PCR ANALYSIS OF HRAD51-"EX9 IN HUMAN TISSUES To determine the expression of hRad51 and hRad51-Dex9 in human tissues, RT-PCR analysis was conducted using primers derived from the flanking regions of exon 9. The RT-PCR analysis was expected to generate a 467-bp fragment for hRad51-Dex9 and a 589-bp fragment for hRad51. DNA-amplicons of the expected sizes corresponding to both hRad51 and hRad51-Dex9 were most prominently detected in the testis (Figure 3). Both PCR amplicons were also detected, though to lesser extents, in the skeletal muscle, pancreas, thymus and ovary ( Figure 3). Additionally, the hRad51-specific amplicon was detected in the placenta, lung, liver, kidney, spleen and colon tissues, however, the hRad51-Dex9-specific-amplicon was not detected in these tissues, suggesting that different tissuespecificities exist between hRad51 and hRad51-Dex9 (Figure 3). The DNA strand exchange activity of the hRad51-"ex9 protein In an effort to express and purify enzymatically active forms of the hRad51 and hRad51-Áex9 proteins, we expressed the full coding domain sequences of hRad51 and hRad51-Dex9 using an E. coli expression system. Upon induction with 1 mM IPTG at 378C, the hexa-histidine tagged recombinant proteins of both hRad51 and hRad51-Áex9 were expressed at high levels. Fractionation of the cell lysates into different cellular compartments, such as cytoplasmic extracts, periplasmic extracts and inclusion body fractions, revealed that the recombinant hRad51 protein was present in the soluble fractions. However, the recombinant hRad51-Áex9 protein was expressed within the inclusion bodies. 
The insoluble hRad51-Áex9 protein was denatured by urea during purification and subsequently refolded by stepwise dialysis in the presence of N-lauroylsarcosinate and Cu 2+ . The apparent sizes of the expressed recombinant proteins were in good agreement with the deduced molecular mass, which was 38 kDa for the recombinant Rad51 protein and 32 kDa for the recombinant hRad51-Áex9 protein. The purified recombinant proteins were confirmed by western blot analysis using a commercial human Rad51 antibody ( Figure 4A). To assess the DNA strand exchange activities of hRad51 and hRad51-Áex9, we used the purified recombinant proteins with circular single-strand DNA (ssDNA) and linear double-strand DNA (dsDNA) of bacteriophage fX174. In DNA strand exchange reactions, the circular ssDNA forms joint molecules with the linear dsDNA through homologous pairing, and then the joint molecules are converted into nicked circular forms ( Figure 4B). Both the recombinant hRad51 and hRad51-Áex9 proteins showed the expected joint molecules and nicked circular forms of fX174 at each of the time-intervals tested. The intensities of the bands corresponding to the nicked circular form appeared approximately the same in the either reactions with hRad51 or hRad51-Áex9 ( Figure 4C), suggesting that strand exchange activity of hRad51-Áex9 is approximately similar to that of hRad51 at least in vitro. However, the hRad51-Áex9 protein showed a significantly higher activity than hRad51 in homologous DNA pairing at all the timeintervals ( Figure 4C). These results are comparable with the previous findings on C-terminal deletion mutants of the E. coli RecA protein, which also showed an enhanced activity in homologous DNA pairing (29)(30)(31). Western blot analysis of hRad51-"ex9 in human tissues To evaluate the expression of hRad51-Dex9 at the protein level in vivo, we generated a polyclonal antibody against the peptide sequence specific to hRad51-Áex9. This hRad51-Áex9 polyclonal antibody reacted with the purified recombinant hRad51-Áex9 protein, but not with the recombinant hRad51 protein (data not shown). Human placenta, lung, testis and small intestine tissues were then tested by western blot analysis. A band with the expected molecular mass of 31 kDa for hRad51-Áex9 was prominently detected in the testis; however, this 31-kDa band was rarely detected in the other tissues tested ( Figure 5A). We also investigated the expression of hRad51 and hRad51-Áex9 using a commercial antibody expected to react with both hRad51 and hRad51-Áex9. The 37-kDa hRad51 band was prominently detected in the testis, but at much lower levels in the placenta, lung and small intestine ( Figure 5B). The 31-kDa band corresponding to hRad51-Áex9, however, was detected only in the testis ( Figure 5A). These findings are consistent with those of the RT-PCR analysis that also showed prominent expression of hRad51-Dex9 only in the testis. Nuclear localization of hRad51-"ex9 To investigate the cellular localization of hRad51 and hRad51-Áex9, mammalian expression constructs containing the full coding sequence of hRad51 or hRad51-Dex9 in frame with the N-terminal GFP tag were transfected into COS-7 cells. Confocal microscopic analysis of the direct fluorescence of the fusion proteins displayed subcellular signals of hRad51 and hRad51-Áex9 in the nucleus (Figure 6Aa and b). 
In addition, both the hRad51 and hRad51-Áex9 proteins were co-localized with nucleusspecific propidium iodide staining, further confirming the nuclear localization of these proteins in the transfected cells (data not shown). However, the mutated hRad51 protein that did not contain the C-terminal region from codons 259 to 339 was primarily detected in the cytoplasmic area (Figure 6Ac). Taken together, these results indicate that the signal for the nuclear localization of hRad51 may reside in the C-terminus and, furthermore, that the frame-shifted region of hRad51-Áex9 may regain the residues required for nuclear localization. A basic motif containing a stretch of lysine and arginine residues was found at residues 264-266 (RKR) in the frame-shifted C-terminal region of hRad51-Áex9. Similar types of basic motifs have been known to act as a nuclear localization signal (NLS) in a number of nuclear proteins (32,33). To determine, therefore, if this basic motif in the C-terminus of hRad51-Áex9 could function as an NLS, we generated a series of mutant constructs that harbor a del254-256RK, R264A or K265Q mutation in the basic motif. In localization studies conducted using the mutant constructs, each of the mutated hRad51-Áex9 proteins was primarily detected in the cytoplasmic areas, but rarely in the nuclei (Figure 6Ba-c). These results strongly suggest that the basic motif located in the newly created C-terminal region of hRad51-Áex9 may function as a NLS in nuclear localization of this hRad51 variant. DISCUSSION Here we present a novel variant of hRad51, hRad51-Dex9, which aberrantly splices the hRad51 mRNA from exon 8 to exon 10, skipping exon 9. The predicted amino acid sequence of this novel variant contains a truncated C-terminus of hRad51, however, it retains the RecA domain for DNA recombination and the Walker A and B motifs for ATP binding and hydrolysis. With a purified recombinant hRad51-Áex9 protein, we showed that this novel variant is capable of catalyzing DNA strand exchanges in vitro, although further biochemical characterization would be required to determine the precise enzymatic properties of this hRad51 variant. In expression studies, hRad51-Áex9 was predominantly detected in the testis at both the mRNA and protein levels and, further, the hRad51-Áex9 protein was localized in the nucleus. Taken together, these findings indicate that hRad51-Áex9 catalyzes homologous pairing and DNA-strand exchange in the nucleus, suggesting that alternative pathways involving either hRad51 or hRad51-Áex9 may exist for DNA repair and recombination. Splice variants of other genes involved in DNA repair and recombination, including Rad52, Rad51D and DMC1, have been also reported (34)(35)(36)(37)(38). The murine and human Rad52 mRNAs undergo alternative splicing, resulting in several variants with a truncated C-terminus (34,35). Rad52 is known to catalyze the replacement of replication protein A with Rad51 on ssDNA and to promote strand exchange between complementary ssDNA and dsDNA (39,40). The human Rad52 variants interacted with both ssDNA and dsDNA; however, they did not bind to the full-length human Rad52 due to deletion of the selfinteraction domain (34). Furthermore, the murine Rad52 splice variants increased the frequency of sister chromatid repair in both mammalian cells and yeast, whereas the intact murine Rad52 was more likely involved in homology-directed repair (35). 
Alternatively spliced forms of Rad51D and DMC1 in both humans and mice have been also identified, but their functional significance has not been evaluated (36)(37)(38). However, the presence of these variants of the proteins involved in HR further implies the presence of alternative pathways for the control of recombinational repair of dsDNA breaks. Rad51 and its paralogs are found in the nucleus, however, it has not yet been determined if they are transported independently into the nucleus or through interactions with other proteins. BRCA2 has been known to play a critical role in the nuclear transport and foci formation of Rad51 upon exposure to exogenous damage (9)(10)(11). However, without any exogenous DNA damage, replication-associated formation of Rad51 foci occurred in a BRCA2-independent manner in CAPAN-1 cells that carry a BRCA2 truncation (41), suggesting that distinct mechanisms may be responsible for the nuclear localization and focus formation of Rad51 in the presence or absence of exogenous DNA-damaging agents. Further, several hRad51 paralogs have been shown to translocate into nucleus in a BRCA2-independent manner, using a basic motif composed of lysines and arginines as a NLS (42,43). hRad51C contains a basic motif composed of a short stretch of lysine and arginine residues at the C-terminus. Using a deletion construct of the C-terminal region, the basic motif of hRad51C was shown to function as a NLS for nuclear transport of hRad51C in mammalian cells (42). In addition, hRad51B that contains a basic motif at the N-terminus was shown to translocate into the nucleus in a BRCA2-independent manner (43). hRad51 also contains a basic motif at residues 303-306 (RKGR) in the C-terminus. This basic motif is deleted in hRad51-Áex9 due to the translational frame-shift. However, in the frame-shifted C-terminus of hRad51-Áex9, a similar basic motif reappears at residues 264-266 (RKR). Our studies with oligonucleotide-directed mutagenesis of the RKR motif in hRad51-Áex9 demonstrated that this short basic motif is required for the nuclear localization of hRad51-Áex9, suggesting that nuclear localization of hRad51-Áex9 may be independent of BRCA2 in the absence of any DNA-damaging agents, at least in the cultured cells tested. Rad51 has been reported to interact with p53 and BRCA2, both of which play pivotal roles in maintaining genome integrity. In response to DNA damage, p53 modulates HR through physical interaction with several proteins implicated in recombination, including Rad51, Rad54, BLM and WRN (44,45). Using in vitro binding assays, p53 was reported to interact with the region between codons 125 and 220 of hRad51 (8). The p53interactive region in hRad51 corresponds to the homooligomerization region that is critical for formation of the functional hRad51 nucleoprotein filaments (46). The conservation of the p53-interactive region in hRad51-Áex9 suggests that this novel variant also interacts with p53, unless the absence of the C-terminal region in hRad51-Áex9 affects the physical interaction with p53. BRCA2 interacts with Rad51 through the eight conserved BRC repeats (47,48), and mutations within these repeats are associated with an increased risk of breast cancer (49,50). Electron microscopy studies showed that the BRC repeat 4 interacts with the nucleotide-binding core of Rad51, whereas the BRC repeat 3 interacts with the N-terminal region of Rad51, suggesting that the BRC repeats bind to distinct regions of Rad51 (51). 
The BRCA2-interactive region in hRad51 was studied using yeast two-hybrid and in vitro binding assays, which revealed that the C-terminus of hRad51 (codons 98-339) is crucial for interaction with BRCA2 (47). Our finding that the C-terminal region (codons 280-389) of hRad51 is deleted in hRad51-Áex9 suggests that this novel variant may have a different binding property from hRad51 in interaction with BRCA2. Further characterization of the interactive profile of hRad51-Áex9, particularly with p53 and BRCA2, will be necessary to determine the functional roles that this novel recombinase may play in the maintenance of genome stability and the elimination of DNA double-strand breaks.
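As a numerical aside on the Results above: the only arithmetic behind the reported frame shift that can be checked from this excerpt is the exon-9 length implied by the RT-PCR amplicons (589 bp with exon 9 versus 467 bp without). The short sketch below simply makes that check explicit; the sequences themselves are not reproduced here, and the frame-shift position (codon 259) and premature stop (codon 281) are taken from the paper, not recomputed.

```python
# Minimal sketch: why skipping exon 9 shifts the hRad51 reading frame.
# The only numbers taken from the paper are the RT-PCR amplicon sizes
# (589 bp with exon 9, 467 bp without), which imply an exon 9 of 122 bp.

def skipping_preserves_frame(exon_length_bp: int) -> bool:
    """An internal exon can be skipped without a frame shift only if its
    length is a whole number of codons (a multiple of 3)."""
    return exon_length_bp % 3 == 0

amplicon_with_exon9 = 589
amplicon_without_exon9 = 467
exon9_length = amplicon_with_exon9 - amplicon_without_exon9  # 122 bp

print(f"exon 9 length inferred from amplicons: {exon9_length} bp")
print(f"{exon9_length} % 3 = {exon9_length % 3} -> frame preserved: "
      f"{skipping_preserves_frame(exon9_length)}")
# Because 122 is not a multiple of 3, translation downstream of the
# exon 8/exon 10 junction runs out of frame, consistent with the reported
# frame shift at codon 259 and premature termination at codon 281.
```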
2014-10-01T00:00:00.000Z
2008-04-16T00:00:00.000
{ "year": 2008, "sha1": "f2ff262223dbd95d1506726d98b2526a9cc2f709", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/36/10/3226/7178687/gkn171.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2ff262223dbd95d1506726d98b2526a9cc2f709", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
70321663
pes2o/s2orc
v3-fos-license
Evaluating Gravity-Assist Range Set Based on Supervised Machine Learning The dynamics of gravity-assist (GA) trajectories contain strong nonlinearity, which makes the traditional methods for the impulse-transfer range set (RS) intractable for the gravity-assist RS. This paper develops a novel method to evaluate the gravity-assist RS based on regression methods from the supervised machine learning (SML) field. The performances of three powerful regression methods with several common kernel functions are assessed. The Gaussian Process Regression (GPR) method with the Matérn 3/2 kernel is selected because of its minimum mean squared error (1.11 × 10⁻³ km²/s²). The predicting model based on GPR is constructed to map the orbital elements of destination orbits to the total velocity increment of the corresponding optimal GA trajectories. The percentage error of the predicting model is no more than 2%. Millions of pairs of sample points are generated by the trained predicting model. The points with a specified value of total velocity increment are extracted, and their envelope constitutes the gravity-assist RS. Both Venus GA and Mars GA trajectories are considered in this paper. Introduction The reachable domain of spacecraft designates the set of positions that the spacecraft can reach with a given initial orbit and a specified fuel constraint. The reachable domain is a useful tool for mission planning and collision-risk assessment between two spacecraft [1]. Existing research on the reachable domain is based on descriptions using positional parameters in Cartesian coordinates [2] [3] or the Keplerian orbital elements [4]. The reachable domain is also known as the range set (RS) in the studies described by Keplerian orbital elements. These studies were usually based on impulse transfer, in which appropriate mathematical tools could be used to derive the relationship between the boundary accessible points and the velocity increment. However, it is difficult to calculate the gravity-assist RS because GA models are more complex than impulse-transfer models [5] [6]. For the GA problem, the traditional RS methods cannot tractably obtain the relationship discussed above. Regression methods in supervised machine learning (SML) aim to predict unseen data accurately and efficiently by detecting implicit relationships in the training set [7]. These methods can substitute for the traditional methods and dig out the relationship between the orbital elements of the destination orbit and the total velocity increment ΔV_opt of the corresponding optimal GA trajectory. Inspired by this idea, a novel method based on SML is proposed to evaluate the gravity-assist RS numerically in this study. First, the feature and target of the SML-based predicting model are determined. A hybrid optimization combining Differential Evolution (DE) and SNOPT is employed to solve the trajectory-optimization problem based on the GA model with a deep space maneuver (DSM). Later, the performances of three powerful SML regression methods with several common choices of kernels are assessed. Finally, millions of pairs of sample points are generated by the trained predicting model thanks to the high efficiency of SML methods. The points with a specified value of ΔV_opt are extracted, and the envelope of these extracted sample points constitutes the gravity-assist RS. Both Venus GA and Mars GA trajectories are considered in this paper.
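The paper's training data, feature scaling, and GPR hyper-parameters are not given in this excerpt, so the sketch below only illustrates the kind of predicting model described above: Gaussian Process Regression with a Matérn 3/2 kernel mapping destination-orbit elements to the total velocity increment ΔV_opt, followed by dense sampling and thresholding against a total-velocity-increment limit (TVIL) to approximate the range set. The choice of orbital elements as features, the synthetic training data, and the kernel length scales are placeholders rather than values from the paper, and scikit-learn is assumed as the GPR implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

rng = np.random.default_rng(1)

# Placeholder training set (NOT the paper's data): features stand in for destination-orbit
# elements (semi-major axis a [AU], eccentricity e, inclination i [deg]); the target stands
# in for the total velocity increment dV_opt [km/s] of the optimal GA trajectory.
n_train = 1150                         # training-set size chosen in the paper
X = np.column_stack([
    rng.uniform(2.0, 3.5, n_train),    # a
    rng.uniform(0.0, 0.4, n_train),    # e
    rng.uniform(0.0, 20.0, n_train),   # i
])
y = (1.5 + 1.2 * (X[:, 0] - 2.0) + 2.0 * X[:, 1] + 0.08 * X[:, 2]
     + 0.05 * rng.standard_normal(n_train))   # toy stand-in for dV_opt

kernel = ConstantKernel(1.0) * Matern(length_scale=[1.0, 0.1, 5.0], nu=1.5)  # Matern 3/2
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True)
gpr.fit(X, y)

# Dense grid of candidate destination orbits; points whose predicted dV_opt is below a
# specified total-velocity-increment limit (TVIL) approximate the gravity-assist range set.
a_grid, i_grid = np.meshgrid(np.linspace(2.0, 3.5, 200), np.linspace(0.0, 20.0, 200))
grid = np.column_stack([a_grid.ravel(), np.full(a_grid.size, 0.1), i_grid.ravel()])
dv_pred = gpr.predict(grid)
reachable = dv_pred <= 6.0             # TVIL = 6 km/s, as in the Venus GA example
print("fraction of grid inside the range set:", reachable.mean())
```

In practice the training targets would come from the DE + SNOPT trajectory optimization described above rather than from a toy formula, and the range-set envelope would be extracted from the reachable points.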
Problem Formulation The regression methods in the SML field are able to construct a predicting model from the features to the target based on a training set that contains a number of pairs of feature values and corresponding target values. In this paper, we aim to evaluate the gravity-assist RS with the description of Keplerian orbital elements. The region of destination orbits considered here (denoted U below) covers more than 90% of the currently known main-belt asteroids. The calculation of the gravity-assist RS within this region is significant for target selection in practical missions. In this paper, the single GA trajectory is considered, and the candidate GA bodies are Venus and Mars. The gravity-assist model with DSM is introduced to design the GA trajectories. The objective function of the GA trajectory with DSM is the total velocity increment, in which ΔV_D and ΔV_I denote the two impulses applied at departure from a circular Earth parking orbit (200 km altitude) and at injection into the destination orbit. The accuracy of the predicting models is assessed by calculating the Mean Squared Error (MSE) or Percentage Error (PE) between the prediction output and the true value. Taking the Mars gravity-assist case as an example, the MSE distribution is shown in Figure 1. Figure 1. MSE distribution of different SML regression methods with different kernels. In general, the MSEs of the SML regression models decline as the training-set size grows, and the MSE curves tend to converge after the training set contains about 1400 training pairs. Among all the SML regression models in Figure 1, the GPR models show overwhelming superiority, with a faster convergence rate and a lower stable value of MSE. In addition, the GPR models are relatively insensitive to the selection of kernel functions, while the performance of the other models shows significant disparity with different kernels. As shown in the enlarged view of the GPR models, the Matérn 3/2 kernel has a slight advantage, with a minimal MSE of 1.11 × 10⁻³ km²/s² at a training-set size of 1150. In conclusion, the GPR model with the Matérn 3/2 kernel function is selected to generate the gravity-assist RS, and the training-set size is chosen as 1150 in subsequent sections of this study. The PE distribution of the predicting model based on GPR with the Matérn 3/2 kernel is shown in Figure 2. It can be seen that the maximum PE is no more than 2%, and the vast majority of PEs are less than ±1%. According to the MSE and PE of the predicting model, it can be concluded that the accuracy of the predicting model is satisfactory. As can be seen in Figure 3, only a small region in U is reachable when the TVIL for missions is set to 6 km/s, and the region tends to be completely unreachable given a lower TVIL via the Venus GA trajectory. The RS extends out markedly with the increase of the TVIL. The pattern for the Mars GA trajectory is analogous, although the minimum TVIL that makes the region U partly reachable is lower than 4 km/s (see Figure 4). This reflects the advantage of the Mars GA trajectory in reducing the total velocity increment for missions. In general, destination orbits with lower a and i are easier to reach via both Venus GA trajectories and Mars GA trajectories. Conclusions In this paper, a novel method based on SML is proposed to evaluate the gravity-assist RS. The SML regression methods are introduced to dig out the implied relationship between the orbital elements
2019-02-19T14:08:09.398Z
2018-11-29T00:00:00.000
{ "year": 2018, "sha1": "23a0bba960d41dc11ec4133cd58b96897d9af253", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/449/1/012021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "af152fa3586e1d1c6056e2d6da9dc3bddcab7959", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
85821695
pes2o/s2orc
v3-fos-license
Probing the pH Dependent Optical Properties of Aquatic, Terrestrial and Microbial Humic Substances by Sodium Borohydride Reduction Chemically reducing humic (HA) and fulvic acids (FA) provides insight into the spectroscopically identifiable structural moieties generating the optical properties of HA/FA from aquatic, microbial and terrestrial sources. Sodium borohydride reduction provides targeted reduction of carbonyl groups. The contrast between the pH-induced optical changes of untreated, reduced and reoxidized HA/FA highlights differences in the quantity and physicality of the structural components generating the optical properties associated with HA/FA. Because borohydride reactions alter pH, re-adjustment to the original pH is required. Careful titrations of selected HA/FA provided the moles of H⁺ per gram of HA/FA required to titrate reduced and reoxidized HA/FA from pH 2-11, and the pH-dependent spectral slope (S) at low (pH 2-3), neutral (pH 7-7.5) and high (pH 11.0) pH. Molar extinction coefficients (ε) (L mg⁻¹ cm⁻¹) at pH 7.6 and 350 nm provide a point of consistency linking the intrinsic pH-dependent optical properties to the concentration of material used for titrations. Soil-derived humic acids differ from aquatic humic acids in the heterogeneity of optically identifiable groups as well as the overall concentration of those groups. The microbial source of FA has a limited concentration of homogeneous titratable groups when compared to aquatic FA generated terrestrially. Microbial FA exhibits pH-linked optical recovery upon reoxidation when compared to aquatic FA, which is consistent with the presence of quinones in microbial FA. Introduction Humic substances (HS) and chromophoric dissolved organic matter (CDOM) are related and dynamic parts of the global carbon cycle (Hernes & Benner, 2003). The unique optical properties of CDOM/HS are integral to their environmental behavior. CDOM/HS have been studied for decades, but how their optical properties are generated remains poorly understood (Coble et al. 2007; Moran et al. 2000). The complexity of their environmental interactions, in combination with limited understanding of their structural underpinnings, makes the chemical and physical behavior of CDOM/HS relevant to a broad range of scientific fields, including microbial science, soil science, the study of the fate and transport of metals and anthropogenic chemicals, and the behavior of other nutrient cycles (Koopal et al. 2005; De Wit et al. 1993b). In this study, the optical properties were chemically probed using a reductant that specifically targeted carbonyl groups. The optical and potentiometric behavior of the selected model CDOM compounds representing specific ecosystems (soil, fresh-water aquatic and marine) was compared before and after reduction in order to ascertain their similarities and differences. Several spectroscopic techniques, including UV-Vis and fluorescence spectroscopy, were used to measure the reductant- and pH-induced changes to the optical properties, with the goal of identifying the chemical species involved in generating the unique optical properties universally found in CDOM. Two established models provide justification for the experiments conducted. The first is the electronic interaction model, which seeks to explain the unique optical properties associated with all CDOM/HS (Ma, 2010). The second is a combination of the non-ideal competitive absorption model (NICA) and the Donnan gel model (Koopal et al. 2005; De Wit et al. 1993a; De Wit et al.
1993b). The sample size required to implement the NICA-Donnan model limits its use to materials that can be gathered in large quantities making investigations of many colored dissolved organic samples (CDOM) difficult or prohibitively expensive.Optical titrations of lower concentration samples have been used to successfully model proton binding in Suwannee River Fulvic Acid (SRFA) by the use of pH dependent differential absorbance (A) (Dryer et al., 2008).The NICA-Donnan model describes two distributions of titratable groups centered at pH 4 and pH 7-8 respectively (Koopal et al. 2005;De Wit et al. 1993a;De Wit et al. 1993b). Model humic compounds (Table 1) derived from a solely microbial source as well as terrestrial sources from both aquatic environments and soil systems exhibit many of the same optical properties despite their dispirit methods, of generation in the environment and source materials (Brown et al., 2004).All show a pH dependent absorbance, exhibit increasing absorbance as wavelength decreases and a loss of absorbance upon borohydride reduction (Ma et al., 2010).A prominent theory linking humic model compounds is the electronic interaction model which highlights their ability to form electronic interactions and charge transfer complexes that extend long wavelength absorbance from source material that does not have absorbance spectra at long wavelengths (Ma et al., 2010).The underlying processes by which charge transfer bands or electronic interactions in CDOM/HS are generated are probed by optical and potentiometic titrations of untreated and borohydride reduced material.Borohydride reduction targets carbonyl functional groups such as aromatic ketones and uinines (Tinnacher & Honeyman, 2007).By removing a moiety suspected of participating in electronic interaction, long wavelength absorbance is partially or entirely lost and fluorescence emission increases and blue shifts (Ma et al., 2010).A direct comparison of divergent sources of fulvic and humic acid model compounds including an aquatic terrestrially derived fulvic acid, Suwannee River fulvic Acid (SRFA) and a microbial source of fulvic acid, Pony Lake fulvic Acid (PLFA) (Brown et al., 2004;McKnight et al., 1994), the soil derived humic acids Elliott humic acid (EHA) and Leonardite humic acid (LHA) and a terrestrially derived aquatic humic acid Suwannee River humic acid (SRHA) (Table 1).Analysis and comparison of untreated and borohydride reduced model humic compounds particularly at short wavelength (< 350) using potentiometric and optical titrations, spectral slope values (S), difference plots (A) (Dryer et al., 2008), fluorescence emission spectra, (Jørgensen et al., 2011), and fluorescence difference spectra (F) (Del Vecchio & Blough, 2004) provide insight into how electronic interactions work universally in the environment as well as how differences that exist between sources humic materials can potentially augment existing knowledge about the fate and transport of humic substances in the environment (Hernes & Ronald Benner, 2002). Quinine sulfate 215ehydrate and potassium hydrogen phthalate were obtained from Fisher Scientific.A Shimadzu UVPC 2401 spectrophotometer was used to acquire UV-visible absorption spectra.Fluorescence measurements were made with an Aminco-Bowman AB-2 luminescence spectrophotometer.A 4 cm band pass was used for excitation and emission.All experiments were conducted using 1 cm quartz cuvette with Milli-Q water adjusted to the appropriate pH. 
Humic acids (HA) were dissolved in a minimum amount of sodium hydroxide, diluted to the desired concentration and pH-adjusted using hydrochloric acid (HCl). The pH of each HA standard was adjusted from an average of 12 to the desired initial reduction pH of 7.6 and filtered with a 0.2 μm Nalgene nylon syringe filter (catalog number 195-2520) prior to final dilution. Fulvic acids (FA) were dissolved in pH-adjusted Milli-Q water. The pH was adjusted to the desired pH prior to final dilution if needed. Fulvic acids were not filtered. Each standard was purged with UHP nitrogen (N2) fitted with either a Restek oxygen scrubber (catalog number 20601) or an SGE Analytical Sciences oxygen trap (catalog number 103486) for 30 minutes prior to the addition of the reductant and was purged continually with ultra-high-purity N2 throughout the time course of the reduction. All reductions of HA/FA were carried out for at least 24 hours, and selected reductions were extended to 48 or 72 hours. The samples were reoxidized for 24 hours. Samples were protected from ambient light during reduction and reoxidation. Selected spectra were titrated back to the original A(0) pH because addition of borohydride consistently increased the pH of the solution. The pH and absorbance spectrum of each reduction were measured at re-oxidation time points of 10 minutes, 30 minutes, 1 hour and 24 hours. Difference spectra (ΔA) were calculated by subtracting the absorbance spectrum at time t (A(t)) from the original spectrum (A(0)). Fractional difference spectra were generated by dividing the absorbance spectrum at time t (A(t)) by the original spectrum A(0). All pH measurements were conducted using an Orion pH meter calibrated at pH 4.00, 7.00 and 10.00 daily. Calibrations were considered acceptable when the correlation coefficient was greater than 0.98. Fluorescence emission spectra were collected over the 280-600 nm excitation range. In order to avoid inner-filter effects, fluorescence measurements were kept between 0.05 and 0.10 OD. Differential emission spectra (ΔF) were calculated by subtracting the original spectrum F(0) from the subsequently borohydride-reduced spectrum F(t), where t was 24 hours. Quinine sulfate was used to standardize the fluorescence quantum yield measurements according to the method developed in the U.S. Department of Commerce, National Bureau of Standards publication 260-64, Standard Reference Materials: A Fluorescence Standard Reference Material: Quinine Sulfate Dihydrate (Velapoldi & Mielenz, 1980). Fluorescence measurements were made on the initial samples at pH 7.60, the reduced samples at pH 10.00 and the reduced-reoxidized samples at pH 7.60. Samples used for titrations were passed over a Sephadex G-10 column until the pH of the sample was found to be between pH 7.00 and pH 7.60 in order to remove residual borate and facilitate background-free titrations. Turmeric paper was used to ensure that no additional borate remained in the effluent (Scott & Webb, 1932). Molar extinction coefficients were determined using Milli-Q water at pH 7.60 and 350 nm. Absorption spectra were recorded from 190 to 820 nm against pH-adjusted Milli-Q water. The concentrations of the column effluent from the Sephadex G-10 column were determined using a calibration curve of previously reduced samples at pH 7.60 and 350 nm. The absorptivity (ε) (L mg⁻¹ cm⁻¹) was derived from the absorbance as a function of concentration according to the Beer-Lambert law (equation 1), A = ε l c, where A is the absorbance, l is the path length of the cell (cm) and c is the concentration of the sample (mg L⁻¹).
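Equation (1) referenced above is the Beer-Lambert law, A = ε l c, so the absorptivity ε (L mg⁻¹ cm⁻¹) follows from the slope of absorbance versus concentration at fixed path length. The sketch below is purely illustrative: the concentration series, absorbance readings and the two example spectra are invented placeholders rather than the authors' measurements, and the same applies to the ΔA and A(t)/A(0) arrays.

```python
import numpy as np

# Hypothetical dilution series at pH 7.60 and 350 nm (1 cm cell); the
# concentrations and absorbances below are placeholders, not measured values.
path_length_cm = 1.0
conc_mg_per_L = np.array([50.0, 100.0, 200.0, 300.0, 500.0])
absorbance_350 = np.array([0.18, 0.36, 0.71, 1.05, 1.78])

# Beer-Lambert law (equation 1): A = epsilon * l * c, so epsilon is the slope
# of A versus (l * c). A least-squares fit forced through the origin is used.
x = path_length_cm * conc_mg_per_L
epsilon = float(np.sum(x * absorbance_350) / np.sum(x * x))
print(f"epsilon = {epsilon:.5f} L mg^-1 cm^-1")

# Difference and fractional-difference spectra as defined in the methods:
# dA(t) = A(0) - A(t)  and  A(t)/A(0), evaluated wavelength by wavelength.
A0 = np.array([1.00, 0.80, 0.55, 0.30])   # placeholder spectrum before reduction
At = np.array([0.55, 0.42, 0.25, 0.12])   # placeholder spectrum after reduction
delta_A = A0 - At
fractional = At / A0
print("delta A   :", delta_A)
print("A(t)/A(0) :", fractional)
```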
In order to investigate the effect of ionic strength on the optical properties of HS, ionic strengths of 0.01, 0.10 and 1.00 mole L⁻¹ NaCl were added prior to titration of untreated and borohydride-reduced samples at concentrations that were optimized for the visible region of the spectrum (1 OD at 350 nm). Untreated humic and fulvic acid samples were matched to the absorbance of their corresponding borohydride-reduced and Sephadex-cleaned samples. Initial reaction concentrations were chosen in order to generate absorbance spectra at or below 1 absorbance unit in the spectral ranges of 190-400 nm and 400-820 nm, allowing investigation of the UV and visible ranges of the spectrophotometer. Titrations from pH 3.00 to 11.00 were completed using 0.250 mole L⁻¹ sodium hydroxide. Titrations from pH 11.00 to pH 3.00 were completed using 0.250 mole L⁻¹ hydrochloric acid. Samples were titrated over the same pH range in order to eliminate hysteresis. Titrant was delivered in 5 μl increments (1.25 μmole aliquots). Untreated Suwannee River humic and fulvic acids (SRHA and SRFA) were titrated using a concentration of 100 mg L⁻¹, the untreated soil humic acids LHA and EHA were titrated using a concentration of 50 mg L⁻¹, and the untreated PLFA was titrated using a concentration of 500 mg L⁻¹. Borohydride-reduced and Sephadex-cleaned humic and fulvic samples were titrated at the following concentrations: SRFA at 760 mg L⁻¹, SRHA at 200 mg L⁻¹, EHA at 48 mg L⁻¹, LHA at 75 mg L⁻¹ and PLFA at 260 mg L⁻¹. All titrations were carried out using an initial volume of 3.00 ml and were volume-adjusted. Each reduced, cleaned and titrated sample was carbon-normalized in order to complete the spectral slope calculations. A second set of titrations was completed at concentrations appropriate for the investigation of the UV region of the spectra (< 350 nm). These titrations were completed at an ionic strength of 0.01 mole L⁻¹ NaCl. Difference plots were generated for both the low-concentration UV-range titrations and the high-concentration visible-range titrations. The concentrations for the untreated and the borohydride-reduced and Sephadex-cleaned UV-range samples, respectively, were: SRHA 20 and 19 mg L⁻¹, SRFA 26 and 28 mg L⁻¹, PLFA 75 and 30 mg L⁻¹, EHA 10 and 48 mg L⁻¹, and LHA 13 and 11 mg L⁻¹. The wavelength dependence of the specific absorbance coefficient (a*) (equation 2) at low (pH 3.00), high (pH 11.00) and neutral pH (pH 6.00-7.60), referenced to an initial wavelength of 350 nm, was used to calculate the spectral slope (S) as in equation 3. Specific absorbance (a*) was calculated according to equation (2), a*(λ) = a(λ)/(b × C), where a*(350 nm) is the specific absorbance at 350 nm (henceforth referred to as a*), a(λ) is the absorbance at a given wavelength, b = 0.01 is the absorbance path length in meters (1 cm cell), and C is the total organic carbon in mg carbon L⁻¹. Total organic carbon was determined by high-temperature oxidation using a Shimadzu 500A TOC analyzer calibrated using potassium hydrogen phthalate (KHP) at 680 °C. The spectral slope parameter (S) was obtained by non-linear least-squares fitting of the spectra over the range of 290-820 nm to expression (3), a*(λ) = a*(λRef) exp[−S(λ − λRef)], where a*(λRef) is the specific absorption coefficient at the reference wavelength of 350 nm. Spectral slope parameterization was completed with a minimum of three replicates at pH 7.60 and 350 nm.
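Written out, expression (3) above is the single-exponential model a*(λ) = a*(λRef) exp[−S(λ − λRef)] with λRef = 350 nm; the sign convention is assumed from the cited CDOM literature. A minimal non-linear least-squares fit of S in that form is sketched below using scipy; the wavelength grid, carbon concentration and absorbance spectrum are synthetic placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

lambda_ref = 350.0          # reference wavelength, nm
b_m = 0.01                  # path length in metres (1 cm cell)
C_mgC_L = 20.0              # total organic carbon, mg C L^-1 (placeholder)

# Placeholder absorbance spectrum over 290-820 nm (roughly exponential plus noise).
wavelengths = np.arange(290.0, 821.0, 2.0)
rng = np.random.default_rng(2)
true_S = 0.016              # nm^-1, illustrative value only
A = (0.8 * np.exp(-true_S * (wavelengths - lambda_ref))
     * (1 + 0.01 * rng.standard_normal(wavelengths.size)))

# Equation (2): specific absorption coefficient a*(lambda) = a(lambda) / (b * C).
a_star = A / (b_m * C_mgC_L)

# Expression (3): a*(lambda) = a*(lambda_ref) * exp(-S * (lambda - lambda_ref)).
def slope_model(lam, a_star_ref, S):
    return a_star_ref * np.exp(-S * (lam - lambda_ref))

p0 = [a_star[np.argmin(np.abs(wavelengths - lambda_ref))], 0.015]
(a_star_ref_fit, S_fit), _ = curve_fit(slope_model, wavelengths, a_star, p0=p0)
print(f"fitted S = {S_fit:.4f} nm^-1, a*(350 nm) = {a_star_ref_fit:.3f}")
```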
Results Borohydride reduction of soil derived terrestrial material (Elliott and Leonardite humic acids) over a period of 24 hours decreases in absorbance across all wavelengths, but a maximum reduction is seen between 300 and 600 nm (Figure 1, Table 1).The difference and fractional difference maximum loss is 50 % for both Elliott and Leonardite HS.The percent loss of SRHA was consistent with the soil Has (50 %).The maximum fractional and differential fraction loss of SRFA was 70 % (Figure 1, Table 1). Figure 1.Absorbance, difference A(0) -A(T) and fractional difference A(T)/A(0) of the borohydride reduction and reoxidation of 100 mg L -1 Suwannee River humic acid (SRHA), Suwannee River fulvic acid (SRFA), 50 mg L -1 Elliott humic acid (EHA), 500 mg L -1 Leonardite humic acid (LHA) and 50 mg L -1 Pony Lake fulvic acid (PLFA) with 5 mg of sodium borohydride.Reduction and reoxidation were allowed to continue for 24 hours each.The finial pH was adjusted back to the initial pH 7.60 Absorbance, difference and fractional difference spectra of PLFA show that borohydride reduction of 500 mg L -1 PLFA produces as much as 90 % loss in absorbance with maxima between 400-500 nm (Figure 1, Table 1).Absorbance at longer wavelengths than 500 nm is very close to zero.Long wave length absorbance of untreated and borohydride reduced PLFA were found to be linearly related to concentration at pH 7.6 and 350 nm with molar extinction coefficients  = 0.00356 L mg -1 cm -1 and 0.00220 L mg -1 cm -1 respectively (Table 2).The other humic and fulvic acids, untreated and borohydride reduced exhibit linear absorbance as a function of concentration at 350 nm and pH 7.6 (Table 2).The absorbance of PLFA at long wavelengths is very low, making the use of high concentration (mg L -1 ) necessary in order to elucidate absorbance trends in this region.The absorbance increases sharply at wavelengths shorter than 350 nm.In order to capture absorbance trends across wavelength 230-800 nm and remain in the linear range of the spectrophotometer, reductions were carried out independently at two concentrations 50 mg L -1 from wavelength (230-350 nm) and at 500 mg L -1 (350-800 nm).At long wavelengths (350-800) the time dependence of the reduction behaves in a similar manner as do all other humic materials investigated (Figure 2).The majority of the borohydride induced reduction occurs within the first 2 hours of the reduction.Reoxidation over a period of 24 hours, post the 24 hours of borohydride reduction causes further loss of absorbance as seen in the time dependence of the reduction (Figure 3) and absorbance, difference and fractional difference spectra (Figure 1).Concurrently, with reoxidation absorbance loss is a reduction in pH (Table 3).An oxygen dependent reduction in pH was observed in all of the humic and fulvic acids consistent with the presence of 222uinines in the examined humic and fulvic acids (Table 3).An examination of difference and fractional difference absorbance plots and the time dependence of post borohydride reoxidation of Pony Lake and Suwannee River fulvic acids between 230 and 350 nm indicate that PLFA exhibits oxidation induced short wavelength (< 350 nm) absorbance recovery while SRFA and the other terrestrially based humic substances do not (Figures 3 and 4) despite oxidation induced changes (loss) in pH (Table 3).No concurrent, reoxidation induced long wavelength (> 350 nm) absorbance recovery is observed for PLFA or the terrestrially derived humic substances. 
Borohydride reduction caused irreversible changes in the fluorescence emission spectra.All reduced spectra were seen to blue shift and increase in intensity.The untreated (pH 7.60) and borohydride reduced (pH 7.60) emission spectra of SRFA and SRHA increase smoothly from long wavelength excitation to short wavelength excitation with the exception of very short wavelength (280-290 nm) which are noisier and shifted to the red (Figure 5).The gain in fluorescence is quantified by F = F(T) -F(0) recalling that F(T) represents 24 hours of reduction, 24 hours of reoxidation and titration back to the pH of F(0) which was 7.60.SRHA exhibits the smallest gain in fluorescent with an increase of 7 corrected fluorescence units; SRFA has an increase of 20 fluorescence units.Pony Lake fulvic acid (PLFA) doubles in intensity at wavelengths below 500 nm and does not change at wavelengths longer than 520 nm when reduced with sodium borohydride (Figure 5).The emission spectra of both the untreated and borohydride treated PLFA blue shifts uniformly with decreasing excitation wavelength with the exception of the shortest excitation wavelengths from 280-300 nm.Reduction causes a 50 nm blue shift in wavelength maxima of borohydride treated PLFA when compared untreated PLFA.An examination of the F of PLFA shows a gain in fluorescence emission at 300 nm that is not observed in other HA/FA standards.Fluorescence emission spectra and wavelength maxima of untreated terrestrial humic substances Elliott (EHA) and Leonardite (LHA) humic acids differ from SRHA.The fluorescence emission maximum of EHA increases to 575 nm from excitation wavelength 600-440 nm.Fluorescence emission maxima decreases to 550 nm between excitation wavelengths 450-400 nm remaining at fluorescence emission maxima at 550 nm from excitation wavelengths 400-280 nm.Leonardite HS fluorescence emission increases smoothly to 500 nm from excitation wavelengths 600-420 nm.The fluorescence emission remains at 500 nm from excitation 420-280 nm (Figures 5). 
Following borohydride reduction (Figures 5) reduced soil derived terrestrial HS (EHA and LHA) double in intensity and blue shift by approximately 100 nm.Upon inspection significant differences can be noted when comparing aquatic to soil derived emission spectra.The untreated soil derived material exhibit high fluorescence emission intensity at long wavelengths (red edge) relative to aquatic samples.Aquatic samples at the red edge recede to the baseline while the terrestrial samples show significant emission under the same optical conditions.Further, upon reduction the blue shift is on the scale of hundreds on nanometers for soil samples as opposed to tens of nanometers found in aquatic samples.Finally, aquatic samples appear to have smooth almost monotonic emission spectra as the excitation spectra increases in energy with the exception of the previously notes short wavelengths (280-300 nm).No sub-features appear in the aquatic samples.This is not the case in the terrestrial samples where a secondary feature can be seen at low excitation wave lengths (Figure 5).Quantum yield of untreated (Figure 6, lower left), borohydride reduced (Figure 6, lower right) show an increase in quantum yield in the terrestrially derived samples as well as the microbial sample that is at least double upon borohydride reduction at wavelengths below 450 nm reflecting the borohydride reduced loss of absorbance (Figure 1) and simultaneous but variable increase in fluorescence emission (Figure 5).The soil derived humic acids LHA exhibits a feature at 490 nm that corresponds to the secondary feature in the fluorescence emission spectra (Figure 5) that is not completely eliminated by borohydride reduction.The same feature is present in the untreated EHA fluorescence spectra (Figure 5) and eliminated by borohydride reduction.The loss of the underlying structural feature in EHA is reflected by the reduction of quantum yield at wavelengths above 450 nm.The wavelength maxima of fluorescence emission (Figure 6 untreated upper left and borohydride reduced upper right) of PLFA, SRFA and SRHA increase uniformly from low to high wavelength with the exception of very low wavelengths.The untreated soil derived humic acids exhibit a plateau at wavelengths below 420 nm.The fluorescence wavelength maxima of LHA (500 nm) is lower than the wavelength maxima of EHA (550 nm) (Figure 6). 
The pH dependence of the spectral slope values for untreated and borohydride reduced, Sephadex G-10 cleaned, terrestrially derived humic acids, at concentrations that allow UV (< 350 nm) absorbance spectra and 0.01 mol L-1 ionic strength, shows a consistent pattern of behavior (Table 4). Untreated LHA and SRHA, as well as SRFA, have lower spectral slope values than their borohydride reduced counterparts at all pHs examined (pH 3.00, pH 6-7 and pH 11.00). The untreated terrestrial soil and aquatic samples exhibit less reduction in slope between pH 3.00 and the neutral pH (pH 6-7) than the degree of change found between the neutral pH (pH 6-7) and pH 11.00. Untreated EHA exhibits no difference in the rate of change of the spectral slope value from pH 3.00 to pH 11.00; the spectral slope of untreated EHA decreases as pH increases at a uniform rate. The borohydride reduced spectral slope value of EHA increases between pH 3.00 and the neutral pH (pH 6-7), but the spectral slope decreases between the neutral pH point and pH 11.00 (Table 4). The borohydride reduced LHA has the same pattern as its untreated counterpart, but each point is found at a higher spectral slope value (Table 4). Borohydride reduced SRHA and SRFA show a slight decrease or no decrease, respectively, in the spectral slope value between pH 3.00 and pH 6-7, and a more pronounced reduction in spectral slope between the neutral pH point and the high pH point (pH 11.00). The neutral pH spectral slope values are consistent with the spectral slope values determined at pH 7.6 (n > 3), with no supplemental ionic strength addition or removal of residual borate (Table 4), with the exception of the two borohydride reduced fulvic acids. These two borohydride reduced samples exhibit a significant difference between the replicate spectral slope values generated at pH 7.6 and 350 nm and the pH dependent spectral slope values. Spectral differences due to changes in ionic strength are presented in Heighton, 2013.
Discussion
Spectral slope, optical titration and fluorescence difference spectra (ΔF) indicate that quinones may act as acceptor moieties in PLFA. Quinones do not appear to be an important component of the electronic interactions of terrestrially based aquatic or soil-derived humic or fulvic acids. The optical properties of terrestrially based sources of humic and fulvic acid have been attributed to partial oxidation of land-based plant materials, specifically lignin phenols. Microbial sources of fulvic or humic acids do not contain lignin, hence the generation of long wavelength absorbance cannot be assigned to the same precursor material; instead, electronic interactions could potentially be forming in PLFA between secondary amine heterocyclic donors and tertiary amines, heterocyclic moieties, quinones or cyclic ketone acceptors. Amino acids and the decomposition products of amino acids and/or peptidoglycan can form secondary and tertiary amines as well as other heterocyclic moieties (sulfur containing), supplying both donor and acceptor groups able to generate electronic interactions capable of producing long wavelength absorbance that could not be generated independently by the components of the system. The optical properties resulting from electronic interaction between amino acids or heterocyclic aromatic species and quinones could be expected to differ from lignin-generated charge transfer bands in several ways.
Historically, marine sources of CDOM have higher spectral slope values (S) than near-shore CDOM samples (Helms et al., 2008). If marine sources of CDOM are closely related to or contain a high proportion of bacterially sourced FA, then untreated PLFA should have a high spectral slope when compared to aquatic FA, namely SRFA. The spectral slope value (S) of borohydride reduced material such as SRFA, a source of terrestrial humic acid, would potentially differ from the spectral slope of borohydride reduced PLFA, a microbial source of humic acid, due to the difference in the donor moieties in charge transfer bands. If the short wavelength absorbance of PLFA (190-350 nm) can be attributed largely to amino acid/peptidoglycan decomposition products producing heterocyclic amines, with additional but proportionally fewer quinones and aromatic ketones, then the spectral slope should be steeper than the spectral slope of SRFA, a terrestrial source of FA. Specifically, the pH dependent spectral slope values of the microbial source of humic substance should be, and are, very different from the pH dependent spectral slope values of terrestrially based lignin phenol humic substances. Carboxylic acids from amino acids or other sources are not reduced by borohydride (Cleyden et al., 2001; Tinnacher & Honeyman, 2007). Amines (secondary or tertiary) are not reducible by simple borohydride reduction. The only nitrogen-containing compounds easily reducible by borohydride are imines, which may be present as they are used in biological systems to produce amino acids from keto acids in an enzymatically driven system (Cleyden et al., 2001), but it is unlikely that they make up a significant fraction of the PLFA structure. SRFA, when compared to PLFA, would not be enriched with nitrogen but would contain a substantially different group of reducible ketones potentially lacking in PLFA, because of the terrestrial plant derived lignin phenol from which SRFA and other terrestrially sourced humic material originate (Fang et al., 2011). Loss in absorbance due to borohydride reduction in terrestrial sources potentially represents a higher relative percentage of carbonyl moieties than that of PLFA, but our results indicate that upon borohydride reduction 80% of the long wavelength absorbance of PLFA is lost while only 60% of the SRFA absorbance is lost (Figure 1). This implies that loss of carbonyl groups in PLFA is central to the loss of long wavelength absorbance, while SRFA retains some ability to maintain electronic interactions that result in persistence of the long wavelength tail despite borohydride reduction. The ability of SRFA to maintain some electronic interactions may mean that borohydride is physically prevented from reducing carbonyl groups that are sterically hindered, or that SRFA has a greater diversity of species participating in electronic interactions that can produce long wavelength absorbance and are not borohydride reducible when compared to PLFA. The absorbance of PLFA reduced with borohydride and reoxidized would be expected to recover to a greater extent than SRFA upon reoxidation if quinones are enriched when compared to aromatic ketones. Aromatic ketones are present in PLFA, but when compared to SRFA and the diverse suite of aromatic ketone acceptors generated by lignin phenols in terrestrial sources of HA/FA, the proportion of aromatic ketones in PLFA should be expected to be lower. Quinones may be enriched when compared to aromatic ketones, as quinones are ubiquitous in microbial systems (Hiraishi et al. 1998; Hiraishi et al. 1989; Liu et al. 2000).
Ubiquinone 10 and naphthoquinones with multiple isoprene side chains have been found at high concentrations (moles mg-1 dried cells) (Hiraishi et al. 1998). The mechanism generating long wavelength optical absorption bands, electronic interactions, is likely consistent between microbial sources of fulvic acid and terrestrial humic/fulvic substances generated from lignin phenol, but the constituents that act as the donor within the electronic interaction complexes differ based on the availability of source material. Microbial sources (PLFA) do not contain phenolic groups by definition but instead may employ heterocyclic moieties, such as secondary amines, as donors and tertiary amines, quinones, aromatic ketones or other unidentified moieties as acceptors. Borohydride reduction of PLFA and the other humic/fulvic acids studied partially or completely removes acceptor moieties (quinones, aromatic ketones) from the charge transfer complex, resulting in decreased absorbance, an increased spectral slope coefficient and increased fluorescence emission intensity. The relative amount of quinone moieties participating in electronic interactions of PLFA appears to be higher than that found in the other fulvic and humic acids and can be seen in the recovery of absorbance in the UV range of the absorbance and difference spectra (Figures 3-5, Table 3). The spectral slope parameterizes an exponential curve, and as S increases the short wavelength curve becomes steeper, indicating that borohydride reduction is disrupting charge transfer bands. Donor and/or acceptor chemical moieties within the charge transfer model are no longer being quenched, resulting in an increase in short wavelength absorbance and a concurrent loss of long wavelength absorbance, with a resultant increase in the spectral slope. The spectral slope additionally has a pH dependence that is directly linked to the pKa of the absorbing species. The pH dependence of the spectral slope of the borohydride reduced humic and fulvic acids of terrestrial origin decreases at high pH, but the spectral slope value of PLFA remains consistent despite changes in pH (Table 4). Clearly, borohydride reduction results in the loss of the long wavelength tail in PLFA, but there is no concurrent gain in short wavelength absorbance. This may be a result of how spectral slope values are calculated. The spectral slope is historically parameterized from 290 nm to 820 nm (Green & Blough, 1994). The wavelength maximum of many quinones is at shorter wavelengths than 290 nm (Ma et al., 2010). Further, many heterocyclic secondary amines have pKa values that are close to or above pH 11.00 (Yamauchi & Odani, 1985). This does not definitively implicate secondary amines and quinones in the formation of electronic interactions in PLFA, but it does support the premise.
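Because this argument depends on how the spectral slope S is parameterized, a brief sketch of the conventional exponential fit is given below. The 290-820 nm window follows the range cited above from Green and Blough (1994); the data in the example are synthetic, and the original work may have fit a narrower or pH-specific window:

```python
import numpy as np

def spectral_slope(wavelength_nm, absorbance, fit_range=(290.0, 820.0)):
    """Fit A(lambda) = A0 * exp(-S * lambda) by linear regression on ln(A).

    Returns S in nm^-1; a larger S corresponds to a steeper short-wavelength rise.
    """
    wl = np.asarray(wavelength_nm, dtype=float)
    ab = np.asarray(absorbance, dtype=float)
    mask = (wl >= fit_range[0]) & (wl <= fit_range[1]) & (ab > 0)
    slope, _intercept = np.polyfit(wl[mask], np.log(ab[mask]), 1)
    return -slope

# Synthetic exponential tail with S = 0.018 nm^-1; the fit recovers the input value.
wl = np.arange(290.0, 821.0, 1.0)
a = 0.5 * np.exp(-0.018 * (wl - 290.0))
print(f"S = {spectral_slope(wl, a):.4f} nm^-1")
```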
Quantum yields and wavelength maxima reflect differences between soil and aquatic HS derived from lignin phenol (Figure 6). Two points are evident. (1) Soil humic substances (LHA and EHA) (Figures 5 and 6) have contributions to their fluorescence emission spectra that are not represented in aquatic materials (SRFA and SRHA) (Figures 5 and 6). Borohydride reduction eliminates the long wavelength fluorescence contributor in EHA and halves it in the LHA samples. This loss in long wavelength fluorescence may be an indication of disruption of the stacking of black carbon and an inability to regenerate, post borohydride reduction, the physical proximity (stacking) required for the extended conjugation needed to produce increased fluorescence emission at long wavelengths. (2) Untreated soil humic substances exhibit a plateau in fluorescence wavelength maxima that is not seen in other humic substances or observed post borohydride reduction (Figure 6). Speculatively, the soil wavelength maxima trend and the large fluorescence blue shift produced by borohydride reduction may be reflective of the lack of photochemical exposure and/or the relative physical stability afforded by the soil matrix, which may allow some moieties labile to photochemical alteration to be preserved in soil environments. It has been suggested that soil humic material is capable of forming micelles that can provide protection to labile moieties (Sutton & Sposito, 2005). Formation of micelles is unlikely due to the Beer-Lambert behavior seen in all HS in this study (Table 2), but other forms of protection, such as encapsulation or association with cations or clays, are known to provide protection from enzymatic attack (Heighton et al., 2008; Hedges et al. 2000; Baldock & Skjemstad, 2000). The soil humic acids EHA and LHA have a higher percentage of black carbon or conjugated cyclic carbon (Skjemstad et al., 2002). The carbon content of LHA is potentially high when compared to other humic acids, as this HA is derived from lignite, a precursor of coal. Although LHA potentially has significant amounts of black carbon, it is likely that the titratable quinones shown to be present in the structure of black carbon are not represented as fully as they would be in the Mollisol-derived EHA, because EHA contains more oxygen than does LHA (Mao et al., 2007; Allard & Derenne, 2007). Borohydride reduction of soil HA potentially captures additional types and a higher concentration of reducible species than seen in aquatic sources of lignin-derived HA/FA. Borohydride reduction may cause physical alterations in the macrostructure, preventing reassembly of optical components not directly affected by the reduction. This is exemplified by the loss of long wavelength fluorescence emission in the HS (LHA and EHA) (Figure 5) and by the short wavelength absorbance recovery (< 350 nm) with no concurrent long wavelength recovery (Figures 1-3). The quinone-derived recovery of reoxidized HS is small, much lower than the expected concentration observed by electrochemical means (Aeschbacher et al., 2010), indicating that quinones, although an important redox buffer, may not play a substantial role as charge transfer acceptors and therefore do not play a large role in the optics of terrestrially sourced material (EHA, LHA, SRFA, SRHA). Soil-sourced and aquatic sources of HA/FA should fit into a hierarchy of diagenesis. The premise is supported by the ranking of spectral slope coefficient values (Table 4) and the identification of chromophores in soil-derived humic acids not found in aquatic sources of humic substances (Figure 5).
Figure 3. Time dependence of the borohydride reduction and reoxidation of 35 mg L-1 Suwannee River fulvic acid (SRFA) and 50 mg L-1 Pony Lake fulvic acid (PLFA) over an optical absorbance range of 230 to 350 nm. Samples were reduced for 24 hours, reoxidized for 24 hours and titrated back to the initial pH 7.60.
Table 1. Sample descriptions and abbreviations.
Major variations in blood glucose levels in pediatric patients with type 1 diabetes Type 1 diabetes is one of the most common chronic diseases in children and adolescents, with an increasing incidence globally. Major variations in serum glucose cause severe ketoacidosis and hypoglycemia, acute metabolic complications of the disease. We performed a retrospective study on a group of 119 children and adolescents with type 1 diabetes in whom only the cases with ketoacidosis and severe hypoglycemia that required emergency hospitalization were quantified. At the same time, we identified the causes and determinants of these acute complications. According to the case study, 28.6% of patients (34 cases) presented severe hypoglycemia, the most common causes of hypoglycemia being intense physical activity without additional carbohydrate intake, delayed carbohydrate intake, and excess insulin. A total of 15.3% of patients (18 cases) had ketoacidosis, of which 55.55% were recurrent ketoacidosis. Ketoacidosis has been detected in patients with poor glycemic balance and poor treatment compliance by not following a diet and skipping insulin doses. Among the additional risk factors, we identified age over 13 years and an age of diabetes greater than 5 years, for both acute complications. INTRODUCTION Type 1 diabetes is a chronic condition found in childhood and young adults, the pathogenic mechanism being autoimmune. In recent years, there has been a steady increase in the incidence of this disease (1,2). In addition to specific degenerative complications, the course of type 1 diabetes is exacerbated by acute metabolic complications, diabetic ketoacidosis, and severe hypoglycemia. Data from the literature report that diabetic ketoacidosis, the most serious metabolic disorder, is identified in 15% to 67% of children and adolescents at the onset of the disease (3). Although it can be found in other types of diabetes, it is specific to patients with type 1 diabetes (4). Ketoacidosis is diagnosed at a glycemia >11 mmol/l (200 mg/dl), a venous pH < 7.3, bicarbonate < 15 mmol/l, with ketonuria and ketones in serum, biochemical criteria agreed by different international societies (ADA, ESPE) (5,6). Regarding the annual incidence of ketoacidosis, it is between 4.8% and 5.2% in children and young adults (7), while the mortality rate is 6% to 24% in developing countries and less than 1% in developed ones (8). The most severe complication of ketoacidosis is cerebral edema, which can occur in about 0.5-1% of children, with a mortality rate of 20-25% (9). With regard to diabetic ketoacidosis, apart from the physiological factors that condition the occurrence of ketoacidosis, there are also some additional risk factors. A young age (under 5 years), female gender, limited access to medical services, as well as unfavorable socio-economic status are discussed (10,11). Regarding hypoglycemia, it is the main barrier to achieving very good glycemic control, severe hypoglycemia being the greatest fear of patients with diabetes and their parents (12). Depending on serum glucose levels, hypoglycemia may be mild (glycemia values between 54 and 70 mg/dl), moderate (glycemia less than 54 mg/dl) or severe (when another person's intervention is needed, with blood sugar usually less than 40 mg/dl) (13). Children who have already had a severe hypoglycemic episode in the last 12 months may be included in a risk group.
Among the neurological disorders associated with the acute phase of severe hypoglycemia, we mention transient mental deficit, electroencephalographic abnormalities, and a regional increase in cerebral blood flow. In patients with repeated severe hypoglycemia, especially in children under 5 years of age, permanent cognitive dysfunction may occur (14). The paper aims to determine the prevalence of ketoacidosis and severe hypoglycemia in children and adolescents with type 1 diabetes, as well as to identify the causes and factors favoring these complications. MATERIALS AND METHODS We present the results of a retrospective study conducted between January 2015 and December 2019 on a group of 119 children and adolescents with type 1 diabetes in the records of the Pediatric Diabetology office within the "St. Spiridon" County Emergency Clinical Hospital of Iasi. The data from the dispensary sheets and their treatment notebooks from the last 5 years were analyzed. The mandatory criteria for inclusion in the study were age under 19 years and a disease duration of at least 1 year. Only the cases with ketoacidosis and severe hypoglycemia that required emergency hospitalization were quantified, aiming to determine the etiology of these complications, as well as to establish correlations with certain parameters: age, gender, environment, age of the disease, insulin therapy regimen used, and glycemic balance. The study was conducted according to the provisions of the Helsinki Declaration (the local ethics committee approved the study) and all the patients signed the consent for participation in this study. SIGNIFICANCE ANALYSIS All analyses were performed using SPSS. The ANOVA test, χ2 test, Kruskal-Wallis test and Pearson correlation were used. All data are presented as the mean ± the standard error of the mean. P < 0.05 was considered to indicate a statistically significant difference. RESULTS The analysis of socio-demographic characteristics showed a slightly higher frequency of male children (52.9%) and of children from rural areas (54.6%). The current age of the children varied between 3 years and 7 months and 18 years and 9 months, the average value being slightly higher in males (13.08 vs 12.74 years; p = 0.624) and in those from rural areas (13.33 vs. 12.42 years; p = 0.190) (Table 1). The age of the children at the onset of the disease ranged from 1 year and 7 months to 17 years and 4 months, with no significant difference in mean age between boys and girls (7.64 ± 3.71 vs. 7.84 ± 3.56 years; p = 0.769). However, there is a direct correlation, moderate in intensity, between HbA1c level and child age (r = +0.277; p = 0.002), 27.7% of children having a higher level of HbA1c at older ages (Figure 1). In patients with diabetes for more than 5 years, the estimated risk of severe hypoglycemia was approximately 2-fold higher (RR = 1.92; 95% CI: 1.32-2.78; p = 0.002) (Table 4). Among the most common causes of severe hypoglycemia were intense physical activity without additional carbohydrate intake in 15 cases, delayed carbohydrate intake in 11 cases, and insulin excess in 8 cases. Regarding ketoacidosis, it was found in 15.13% of patients (18 cases), with a slightly higher estimated risk. Among patients with ketoacidosis, 55.55% (10 cases) had recurrent ketoacidosis, these being in 39% of cases adolescents with a disease duration of more than 5 years, with low adherence to treatment and a precarious glycemic balance.
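The relative risk estimates quoted above (for example, RR = 1.92 with a 95% CI of 1.32-2.78 for severe hypoglycemia in patients with a disease duration over 5 years) follow from a standard 2 x 2 contingency calculation. The sketch below illustrates the method only; the counts shown are hypothetical and are not the study's actual contingency table:

```python
from math import exp, sqrt

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk from a 2 x 2 table.

    a, b: events and non-events in the exposed group (e.g., diabetes duration > 5 years)
    c, d: events and non-events in the unexposed group
    Returns (RR, CI lower, CI upper) using the log-normal approximation.
    """
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    return rr, rr * exp(-z * se_log_rr), rr * exp(z * se_log_rr)

# Hypothetical counts, for illustration of the calculation only.
rr, lo, hi = relative_risk(a=22, b=30, c=12, d=55)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```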
In this regard, the presence of increased glycosylated hemoglobin was observed in all patients with recurrent ketoacidosis, thus demonstrating the link between unsatisfactory glycemic control and the occurrence of ketoacidosis. In this study, because the majority of patients, 87% (103 cases), followed a basal-bolus treatment regimen, no comparison could be made between groups regarding glycemic balance and the frequency of acute metabolic complications depending on the insulin treatment. Ketoacidosis was caused in 10 cases (55.5%) by major deviations from diet and additional carbohydrate intake, and in 8 cases (45.5%) by the omission of insulin doses. Among the factors favoring the appearance of acute metabolic complications, we also mention psychological disorders, these being identified in 17 patients (14.4%). Of these, during monitoring, 8 patients (47.1%; p = 0.005) had severe hypoglycemia and 7 patients (41.2%; p = 0.004) ketoacidosis. DISCUSSIONS The results we have obtained can be compared with data from the literature that emphasize the causal factors of severe hypoglycemia and ketoacidosis. In an 18-month study of a group of 142 children and adolescents with type 1 diabetes, Wysocki et al. reported a 41% incidence of severe hypoglycemia (15), while Rewers et al. found a frequency of 19% (16). Other data from the literature demonstrate a positive correlation between the age of the disease and hypoglycemia (17) and, at the same time, a causal relationship between severe hypoglycemia and delayed meals or snacks in 44% of cases (18). As we showed in our results, inadequate glycemic control and the occurrence of ketoacidosis are strongly linked. In an international study of 49,859 pediatric patients with type 1 diabetes, Maahs et al. showed an increase in the frequency of ketoacidosis in patients with an HbA1c value between 7.5% and 9% (19). Regarding the insulin therapy regimen used, the DCCT study showed that intensive insulin therapy (multiple daily insulin injections or an insulin pump) provides better glycemic control than conventional treatment with 2 or 3 injections per day (20). Wei-Yu Chou et al. observed a better HbA1c value in intensively treated patients than in the conventionally treated group, thereby explaining the lower risk of ketoacidosis in patients with intensive insulin therapy. The same authors showed that the risk of ketoacidosis was significantly higher only in patients with HbA1c values ≥ 7.5% (21). Approximately 28%-65% of ketoacidosis cases occur due to the omission of insulin doses, which is also the major cause of this complication in patients with type 1 diabetes (22-25). Furthermore, the need for periodic psychological and neurological evaluation is very important in the monitoring of type 1 diabetes in children and adolescents (26,27). CONCLUSIONS Managing pediatric patients with type 1 diabetes can be considered a real challenge. Achieving therapeutic goals (immediate and long-term) is mainly influenced by maintaining a euglycemic status, with blood glucose values as close as possible to normal values. The diabetic child and adolescent may present, during the evolution of the disease, major variations of serum glucose with the appearance of acute metabolic imbalances, namely ketoacidosis and severe hypoglycemia.
These are, in most cases, determined by low compliance with the recommended therapeutic means (insulin therapy, specific diet, physical activity, glycemic monitoring) and by the lack of involvement and awareness on the part of the patient. Severe hypoglycemia and ketoacidosis have immediate effects on the vital prognosis, and severe hypoglycemia also has long-term effects on the functionality of the nervous system. In conclusion, it is fundamental to prevent these complications through continuous medical education of diabetic patients and their families.
The prognosis of elderly patients with hepatocellular carcinoma: A multi‐center 19‐year experience in Japan Abstract Aims This retrospective study compared the survival between elderly and non‐elderly patients. Methods A total of 5545 treatment‐naive patients with hepatocellular carcinoma (HCC) who visited 7 different hospitals from January 2000 to December 2018 were included. Patients ≥80 years old were defined as elderly patients. We divided the patients into three groups based on the timing of the initial treatment: Early, middle, and late periods defined as 2000 to 2005, 2006 to 2012, and 2013 to 2018, respectively. Results There were 132 (8.9%), 405 (17.5%), and 388 (22.2%) elderly patients in the early, middle, and late period, respectively, showing a significant increase over time (p < 0.001). In both elderly and non‐elderly patients, the median albumin‐bilirubin score significantly improved over time and the diagnosis of HCC was made slightly earlier over time. The median overall survival (OS) in elderly patients was 52.8, 42.0, and 45.6 months in the early, middle, and late period, respectively, without a significant improvement (p = 0.17) whereas the OS in non‐elderly patients was significantly improved (p < 0.001). The percentage of elderly patients receiving curative treatments did not significantly increase (p = 0.43), while that of non‐elderly patients did (p = 0.017). Non‐liver‐related death in elderly patients significantly differed among periods (p = 0.023), while liver‐related death did not (p = 0.050). Liver‐ and non‐liver‐related death in non‐elderly patients significantly differed among periods (p < 0.001, p = 0.005). Conclusions Survival in elderly patients was not improved despite an improvement in their liver function. Curative treatments should be conducted when appropriate after evaluating each elderly patient. | INTRODUCTION Primary liver cancer is the sixth-most frequent malignant tumor and the third leading cause of cancer death worldwide, with approximately 906,000 new cases and 830,000 deaths annually. 1 Hepatocellular carcinoma (HCC) accounts for 75%-85% of cases. 1 According to recent guidelines, 2,3 treatment of HCC, such as hepatic resection, liver transplantation, radiofrequency ablation (RFA), transarterial chemoembolization (TACE), systemic therapy, or best supportive care (BSC), is determined mainly based on the tumor stage and degree of liver function preservation. Thanks to advances in medical technology and healthcare, the longevity of the general population has increased in high-and upper-middle-income countries. For instance, in the United States of America, the number of people ≥65 years old markedly increased from 39.6 million in 2009 to 54.1 million in 2019, with an additional 19.6 years of average life expectancy. 4 In Europe, the average life expectancy increased from 66.9 years in 1960 to 76.8 years in 2015 for males and from 72.3 years to 82.6 years for females. 5 In general, elderly patients are considered 'fragile' due to their cardiopulmonary function, insufficient renal function, comorbidities, and altered drug metabolism, and they are considered more vulnerable to treatment-related adverse events than younger patients. Accordingly, older patients are likely to receive suboptimal or undertreatment. It is well known that cancer incidence increases with age, 6 and in cases of HCC, aging itself is a risk factor relevant to carcinogenesis. 
7,8 Although many factors are considered to have contributed to the improvement in the survival rate of HCC, 9 whether or not elderly patients can obtain as good a survival benefit as non-elderly patients remains uncertain. The present study compared the survival between elderly and non-elderly patients based on a 19-year experience. | Patients In this multicenter retrospective study, we reviewed the medical records of 5545 patients with HCC who visited 7 different hospitals from January 2000 to December 2018. We collected patient information, including their demographic features, underlying liver diseases, serum biochemistry, tumor extent, and initial treatment. Patients were diagnosed based on the pathological findings or typical radiological features of contrast-enhanced computed tomography (CT), magnetic resonance imaging (MRI), or ultrasonography (US). 10 All included patients were treatment-naïve and had not undergone any previous treatment. In general, treatment decision-making and implementation were conducted based on the discussions with multidisciplinary teams at each local hospital. The patients who were ≥80 years old were defined as elderly patients, while the remaining patients were defined as non-elderly patients. We divided the patients into three groups based on the timing of initial treatment: Early, middle, and late periods defined as 2000 to 2005, 2006 to 2012, and 2013 to 2018, respectively. The entire study protocol was approved by the Institutional Ethics Committee of Ehime Prefectural Central Hospital (No. 27-34). All procedures were done in accordance with the Declaration of Helsinki. The need for written informed consent was waived because of the retrospective nature of the study. | Definition of underlying liver diseases The underlying liver diseases in all HCC patients were determined as hepatitis B virus (HBV), hepatitis C virus (HCV), HBV + HCV, alcohol-related, or others. Cases with seropositivity for hepatitis B surface antigen (HBsAg) and positivity for anti-HCV antibody were attributed to HBV and HCV infection, respectively.
Cases with seropositivity for both HBsAg and anti-HCV antibodies were classified as HBV + HCV, and cases with seronegativity for both markers and with habitual significant alcohol intake were attributed to alcohol-related liver disease. | The assessment of the liver function, evaluation of the tumor stage, and definition of liver-related death Before the initial treatment, the severity of the liver function was evaluated by the Child-Pugh classification, albumin-bilirubin (ALBI) score, 11 and modified ALBI (mALBI) grade. 12 The ALBI score was calculated using the following formula: ALBI score = (log10 bilirubin [μmol/L] × 0.66) + (albumin [g/L] × −0.085). 11 The mALBI grade was determined by calculating the ALBI score (≤ −2.60: Grade 1, > −2.60 to −2.27: Grade 2a, > −2.27 to −1.39: Grade 2b, > −1.39: Grade 3). 12 We assessed the tumor stage according to the BCLC staging system 3 and the tumor node metastasis (TNM) stage by the Liver Cancer Study Group of Japan 6th edition. 13 We defined liver-related death as that mainly due to HCC, liver failure, and bleeding events, and non-liver-related death as that due to the other causes of death. | Statistical analyses The categorical values were described as numbers (percentage) and compared using the χ2 test and Fisher's exact test when appropriate. Continuous values were described as the median (interquartile range) and compared using the Mann-Whitney U test or Kruskal-Wallis test. The overall survival (OS) was defined as the period between the starting date of initial treatment and the date of death or last visit. We generated the Kaplan-Meier curve and compared values using the log-rank test. A post-hoc analysis was conducted using the Bonferroni method if significant differences were observed. The Cox proportional hazard model was used to investigate the factors associated with the OS. Gray's test was used to perform the competing risk survival analysis. The Fine-Gray model was applied to assess the factors relevant to non-liver-related death. Because there is a high correlation between BCLC stage and TNM stage, and between Child-Pugh classification and ALBI grade, we built two models to avoid multicollinearity. p values < 0.05 were considered to indicate statistical significance. All statistical analyses were performed using the EZR software program, ver. 1.53 (Saitama Medical Center, Jichi Medical University, Saitama, Japan), which is a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). 14 3 | RESULTS | Characteristics and survival curve of the overall patients Of the 5545 patients, 1485 (26.8%), 2315 (41.7%), and 1745 (31.5%) underwent initial treatment in the early, middle, and late period, respectively. The characteristics of the overall patients are shown in Table 1. The patients' age increased significantly over time (p < 0.001), and there were 132 (8.9%), 405 (17.5%), and 388 (22.2%) elderly patients in the early, middle, and late periods, respectively, showing a significant increase in the proportion of elderly patients over time (p < 0.001). The proportion of male patients was roughly consistent over time. The median body mass index and the percentages of obesity and diabetes mellitus (DM) increased over time (p < 0.001, p < 0.001, and p < 0.001, respectively). The percentage of HCV significantly decreased over time, while the proportion of alcohol-related and other causes significantly increased over time (p < 0.001).
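For reference, the ALBI score formula and mALBI grade cut-offs described in the Methods above can be implemented directly as follows; the bilirubin and albumin values in the example are illustrative only:

```python
from math import log10

def albi_score(bilirubin_umol_per_l, albumin_g_per_l):
    """ALBI score = 0.66 * log10(bilirubin [umol/L]) - 0.085 * albumin [g/L]."""
    return 0.66 * log10(bilirubin_umol_per_l) - 0.085 * albumin_g_per_l

def malbi_grade(score):
    """Modified ALBI grade using the cut-offs given in the Methods."""
    if score <= -2.60:
        return "1"
    if score <= -2.27:
        return "2a"
    if score <= -1.39:
        return "2b"
    return "3"

# Illustrative patient: total bilirubin 17 umol/L (about 1.0 mg/dL), albumin 40 g/L.
s = albi_score(17, 40)
print(f"ALBI score {s:.2f}, mALBI grade {malbi_grade(s)}")
```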
Regarding the liver function, the percentage of patients with Child-Pugh class A significantly increased over time, going from 68.8% in the early period to 70.8% in the middle period to 76.3% in the late period (p < 0.001). The median ALBI score also significantly improved over time (p < 0.001), resulting in an increase in the proportion of mALBI grade 1 and a decrease in the proportions of mALBI grades 2a, 2b, and 3 (p < 0.001). While the difference was quite small, the HCC stage began to be diagnosed significantly earlier, and the serum level of α-fetoprotein (AFP) decreased significantly over time. Regarding the initial treatments, the proportion receiving curative treatment showed an increasing trend. At the time of the analysis, 2957 (53.3%) patients were dead, and the remaining patients were recorded as having been lost to follow-up or still alive. The median OS in the overall patients in the early, middle, and late period was 54.0 months (95% confidence interval [CI] 49.2-60.0), 61.2 months (95% CI 56.4-66.0), and 70.8 months (95% CI 62.4-79.2), respectively, showing statistical significance (p < 0.001). The post-hoc analysis showed significant differences between the early and late periods (p < 0.001) and between the middle and the late period (p = 0.008). The 5-year OS rates in the early, middle, and late period were 47.1% (95% CI 44.2%-49.8%), 50.3% (95% CI 48.0%-52.5%), and 53.9% (95% CI 50.8%-56.9%), respectively (Figure 1). In the multivariate analysis, while an elderly age was a significant factor associated with the OS, albeit to a relatively low extent (hazard ratio [HR] of 1.17 in model 1 and 1.11 in model 2), the liver function (Child-Pugh class and mALBI grade), HCC stage (BCLC stage and TNM stage), and curative treatments greatly contributed to the survival benefit (Table S1). | Characteristics and survival curve of the elderly and non-elderly patients The characteristics of the elderly patients are described in Table 2. The trends concerning the underlying liver disease, liver function, and BCLC stage were comparable to those in the overall patients. Furthermore, the TNM stage and serum level of AFP were not significantly different among the periods, showing similar trends to those of the overall patients as well. The percentage with BSC decreased from 23.5% (n = 31) in the early period to 23.0% (n = 93) in the middle period to 14.9% (n = 58) in the late period. What is particularly striking in this table is that the proportion receiving a curative treatment did not significantly increase over time (p = 0.43). The median OS in the early, middle, and late periods was 52.8 months (95% CI 33.6-64.8), 42.0 months (95% CI 37.2-50.4), and 45.6 months (95% CI 42.0-61.2), respectively. The 5-year OS rates in the early, middle, and late periods were 47.2% (95% CI 37.3%-56.5%), 37.6% (32.0%-43.3%), and 42.9% (95% CI 35.3%-50.4%), respectively. Surprisingly, there were no significant differences in OS in the elderly patients among the periods (p = 0.17; Figure 2A). The characteristics of the non-elderly patients are described in Table 3. The trends in the etiology of chronic liver disease and the degree of liver function preservation were consistent with those of the overall patients. While the difference was quite small, the HCC stage was diagnosed significantly earlier, and the serum level of AFP decreased significantly over time. The proportion of patients receiving curative treatments increased over time.
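The period-wise survival comparisons reported above (Kaplan-Meier estimation with the log-rank test) can be outlined as in the sketch below. The records, column names, and the use of the Python lifelines package are illustrative assumptions; the original analysis was performed with the EZR interface to R:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical per-patient records: follow-up in months, death indicator (1 = dead)
# and treatment period. Column names and values are illustrative only.
df = pd.DataFrame({
    "os_months": [54, 12, 88, 61, 24, 70, 95, 33, 41, 77],
    "dead":      [1,  1,  0,  1,  1,  0,  0,  1,  1,  0],
    "period":    ["early", "early", "early", "middle", "middle",
                  "middle", "late", "late", "late", "late"],
})

for period, grp in df.groupby("period"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["os_months"], event_observed=grp["dead"], label=period)
    print(period, "median OS (months):", kmf.median_survival_time_)

# Log-rank test across the three periods.
result = multivariate_logrank_test(df["os_months"], df["period"], df["dead"])
print("log-rank p value:", result.p_value)
```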
As expected, the median OS in the non-elderly patients was 55.2 months (95% CI 48.0-60.0) in the early period and 66.0 months in the middle period, and the survival curve significantly improved over time (Figure 2B).
Figure 2. The overall survival according to the three periods in the elderly patients (A) and non-elderly patients (B). A significant difference was not observed in the elderly patients (p = 0.17), but the survival curve significantly differed in the non-elderly patients (p < 0.001).
| DISCUSSION The main findings of the present study were that the median age and percentage of elderly patients were increasing over time, and the liver function was remarkably improved while the diagnosis of HCC was made slightly earlier over time. Surprisingly, we confirmed that the prognosis of the elderly patients did not significantly improve over time, while the survival of the non-elderly patients was remarkably improved over time. Regarding the non-elderly patients, the improvement of the liver function and the increase in the curative treatment rate were presumed to largely contribute to the prolongation of the survival curve. However, no survival benefit was observed in the elderly patients, despite an improvement in their liver function and a decrease in the percentage with BSC. One possible reason for this is that the proportion of elderly patients receiving curative treatment, which showed a low HR for the OS in the multivariate analysis, did not significantly increase. Another possible reason is that non-liver-related death among elderly patients differed significantly, with the greater absolute difference across the periods, while liver-related death did not differ significantly. This finding was also supported by the results of a multivariate analysis showing that an elderly age had a high HR for non-liver-related death. A recent population-based study 15 evaluated about 3.9 million patients in seven high-income countries (Australia, Canada, Denmark, Ireland, New Zealand, Norway, and United Kingdom) with seven primary cancers (esophagus, stomach, colon, rectum, pancreas, lung, and ovary). It showed that the survival benefit was limited in elderly patients, despite advances in the treatment of cancer, while a notable improvement in the OS was observed in younger patients. Although HCC was not assessed, and Japan was not included in that report, 15 the present study corroborated this previous finding and expanded upon it by showing that the prognosis of elderly patients with HCC did not significantly improve over time. Regarding curative treatments, surgical resection seems unsuitable for elderly patients because of the increased frequency of postoperative complications. However, many retrospective studies have found that surgical treatment was effective and safe for elderly patients in comparison to younger patients, due to advances in surgical techniques and postoperative management approaches. [16][17][18] RFA is considered a curative treatment for early-stage HCC and may be feasible for elderly patients owing to its low invasiveness and mild deteriorative effect on the performance status. Two studies noted no significant differences in the OS, local tumor progression, or safety of RFA between elderly and non-elderly patients. 19,20 However, another study reported that the OS and local tumor progression were poorer in elderly patients than in non-elderly patients.
21 Although data associated with RFA have been conflicting, with careful interpretation required due to the retrospective nature of some analyses, details concerning treatment-related complications have been consistent, showing that RFA is a safe procedure for elderly patients. Given these findings of studies associated with surgical and RFA treatment, curative treatments should be conducted when appropriate after evaluating each individual case. Another point of note is that the rate of liver-related death in elderly patients did not significantly differ among periods, and age itself was strongly associated with non-liver-related death, with a high HR in the multivariate analysis. One possible reason for this is that the elderly patients had many comorbidities, leading to an increase in the rate of non-liver-related death. Although the significant extent of non-liver-related death was presumed to hamper the survival benefit expected from the improvement of the liver function, the present results do not imply that older patients are only suitable for suboptimal or undertreatment. We believe that although precision treatment is always desirable, it is particularly necessary for elderly patients. Curative treatments should include not only proper monitoring to guarantee an adequate treatment intensity but also measures to prevent or minimize the development of adverse events and deterioration of the quality of life. In this connection, the remaining life expectancy of people at the age of 80 years is estimated to be 9.18 years in men and 12.01 years in women according to the life expectancy of the general population. 22 We also used the life table 23 to calculate the age- and sex-adjusted mortality rates, showing that the expected 5-year survival rate in people ≥80 years old was 94.7% (95% CI 90.1-97.2, data not shown). This calculation indicated that the life expectancy was significantly reduced in HCC patients compared with the general population. From this point of view, we also emphasize the importance of precision treatment regardless of age. In the present cohort, the preserved liver function improved over time. This is mainly because of the development of nucleos(t)ide analog therapy for HBV and direct-acting antiviral therapies for HCV. Viral infection accounts for the majority of the underlying liver disease in the present cohort, and these therapies are highly effective with minimal adverse events, resulting in the improvement of liver function. 24,25 While the cut-off for the definition of elderly varied according to a previous review, ranging from 65 to 85 years, many studies adopted 75 years as the cut-off value for the definition of elderly. 26 In the present cohort, the median age in the late period was 72.0 years, and there were many patients ≥75 years old (accounting for 42.2% [n = 992]; data not shown), indicating that many Japanese physicians frequently encounter HCC patients ≥75 years old in clinical settings. Indeed, the patients aged 75-79 years showed an improved OS in our analysis (shown in Figure S3). On the other hand, patients ≥80 years old accounted for about 20% in the late period. Accordingly, we believe that the analysis of patients ≥80 years old is important for a growing aging society. While HBV and HCV infection are still the leading risk factors for carcinogenesis in Asia, the prevalence of HCC due to metabolic factors is increasing rapidly in Asian countries, 27 which is consistent with the present results.
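The age- and sex-adjusted 5-year survival quoted above is obtained by compounding annual life-table mortality rates. A minimal sketch of that compounding is shown below; the annual rates used are hypothetical placeholders rather than the national life-table values applied in the study:

```python
def cumulative_survival(annual_mortality_rates):
    """Cumulative survival over n years = product of (1 - q_x) for each year's rate q_x."""
    survival = 1.0
    for q in annual_mortality_rates:
        survival *= (1.0 - q)
    return survival

# Hypothetical annual mortality rates for ages 80-84; the study combined the sex-specific
# national life-table rates weighted by the cohort's age and sex composition.
q_80_to_84 = [0.009, 0.010, 0.011, 0.012, 0.013]
print(f"expected 5-year survival: {cumulative_survival(q_80_to_84):.1%}")
```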
According to a previous retrospective analysis, metabolic factors including obesity and DM are significantly increasing in non-viral-related HCC patients compared with viral-related HCC patients. 28 Another study reported that DM is an unfavorable factor associated with OS in early-stage HCC patients. 29 Effective surveillance strategies will be required in this special population.
Figure 3. The cumulative incidence of liver-related death in elderly patients (A), non-liver-related death in elderly patients (B), liver-related death in non-elderly patients (C), and non-liver-related death in non-elderly patients (D).
The present study is associated with some limitations. First, this study was conducted in a retrospective manner, and observations in patients who were included in the late period were relatively short compared with those in the early and middle periods. Second, the cause of death was not recorded in about 15% of patients. This might have influenced the present results. Third, we were unable to evaluate the Model for End-Stage Liver Disease (MELD) score due to the lack of serum creatinine values. A further study to evaluate the utility of the MELD score will be conducted. In conclusion, the percentage of elderly patients is increasing, and the prognosis of elderly patients was not shown to have improved over time. Personalized treatment implemented after evaluating each individual case and considering age-related comorbidities is desirable, especially in elderly patients. FUNDING INFORMATION There are no funding sources.
Characterization of Polystyrene Wastes as Potential Extruded Feedstock Filament for 3D Printing : The recyclability of polystyrene, acrylonitrile butadiene styrene and polyvinylchloride waste and their use as a source for 3D printing were studied. Filaments of about 3 mm in diameter were extruded successfully with a small-size extruder. The processed filaments were tested on a broad range of parameters-melt flow index, glass transition temperature, tensile properties and a pyrolysis scenario were obtained. The measured parameters were compared with parameters of virgin counterparts presented in the literature. In order to estimate the composition of the recycled material, Fourier Transform Infrared and elemental analysis of the samples was done. Introduction Plastic, being a highly versatile and resource-efficient material, has become irreplaceable material in many economic sectors, such as packaging, building and construction, transportation, and renewable energy, among others [1]. Growing plastic production and consumption have led to an increasing dependence of plastic manufacture from fossil fuel, the main resource for plastic processing, as well as to an increase of plastic waste. Statistics show that of the 27.1 million tons of post-consumer plastic collected in 2016, 31.1% was recycled, 41.6% incinerated and 27.3% landfilled. Thus, a large portion of plastic is still landfilled; however, this was the first time in Europe when recycling overcame landfill [1]. In the European Commission Action Plan for a circular economy from 2015, plastic production is identified as a key priority [2]. The circular plastic economy vision is based on the need for innovative solutions for developing new sustainable products, durable with a long lifespan, and for high-quality recyclable products after use. Among other waste management options, mechanical recycling of plastics is the most resource-effective, providing also more jobs than landfilling or incineration [3]. However, mechanical recycling can be limited by the presence of toxic components in the recyclant [4,5]. Recently, 3D printing technology, related to additive or direct manufacturing, has developed rapidly, raising interests in many fields of business and household services. Unlike conventional manufacturing, direct manufacturing makes it possible to manufacture product by using computeraided design (CAD) software or online services for product model design, whereby avoiding some intermediate stages. Such flexibility in design options allows to organize manufacturing at small companies, e.g., home 3D printing and local fabrication at a printshop [6]. Yet, 3D printing is a way for effective use of raw materials, minimizing waste and saving energy and other resources. Moreover, using "household"-scale recycling systems can be an alternative to centralized recycling due to the fact that some negative environmental impacts can be overcome. The collection, transport and transfer (CTT) of recyclable waste plays a significant part in greenhouse gas emissions from the total global warming potential of the recycling process [7]. The rapid development of 3D printing technology also includes development of the technology of 3D printer filaments. Today, multiple companies are specialized in the production and distribution of them [8][9][10]. The most popular plastics in 3D printing technology are acrylonitrile-butadienestyrene (ABS) and polylactic acid (PLA). 
Local recycling presumes that the filaments are produced from local recycled material and the quality of this material influences its recyclability, e.g., the quality of the final product. Examples of virgin and secondary plastic waste processing by using a small-scale filament extruder that converts plastic (chips, particles) into filament can be found in the literature. Baechler et al. studied the applicability of small-scale extruder for the processing of high density polyethylene (HDPE) filament [7]. Their study was focused on the estimation of filament uniformity, time of processing, and energy consumption. Mirón et al. studied influence of extrusion temperature on other extrusion parameters of PLA, such as extrusion speed and filament diameter and regularity [11]. Researchers also showed that the mechanical properties of the processed filament were similar to a commercially available one. Anderson compared 3D printed samples from virgin and recycled PLA and found that their mechanical properties are comparable or even higher for samples made from recycled source [12]. However, variability in the results of the recycled materials was significantly higher in comparison to virgin ones. Zander et al. showed that the tensile strength of printed material from recycled polyethylene terephthalate (PET) was equivalent to printed samples from commercial PET [13]. Moreover, commercial B-PET 3D filaments 100% made from recycled post-consumer PET bottles have been available on the market since 2015 [10]. The main aim of this study was to estimate the recyclability of ABS, polystyrene (PS) and polyvinylchloride (PVC) plastic waste, i.e., to measure the mechanical and physical properties of filaments manufactured from these plastics and to compare them with virgin grades. In addition, possible chemical contaminants that can present in plastic waste were also estimated. Plastic waste was collected from a local landfill or obtained from commercial companies. The plastics were separated into singular plastic grades. The samples for testing were manufactured by using a Filabot extruder. The extruded material was studied for its mechanical properties (tensile strength and modulus), melt flow index, and glass transition temperature, and a thermal degradation scenario was obtained. Elemental analysis of the samples was performed by using energy-dispersive X-ray spectroscopy (EDS). Emitted volatiles during pyrolysis were measured by a mass spectrophotometer (MS). Plastic Sources and Filament Manufacturing The materials were obtained from the companies Etelä-Karjalan Jätehuolto Oy (Lappeenranta, Finland) and Destaclean Oy (Tuusula, Finland). From the mixed plastic waste, three plastic types, ABS, PVC and PS were extracted by using a near infrared (NIR) spectroscopy device (Thermo Scientific TM MicroPHAZIR TM PC Analyzer for Plastic, Waltham, MA, USA). The plastic fragments were reduced to approx. 0.5 cm-sized flakes and then extruded by using a low speed extruder Filabot EX2 (Barre, VT, USA). The PVC filaments were extruded at 196 °C, PS at 200 °C and ABS at 180 °C constant temperatures, cooled with an Airpath-device (Tamil Nadu, India) using forced convection at ambient temperature. The extrusion flow rate was adjusted manually for each material. The diameter of the extruded filament was approx. 3 mm. Melt Flow Index Experimental melt flow index (MFI) was measured by using Dynisco LMI 5000 (Dynisco, Franklin, MA, USA) in accordance with standard EN ISO 1133-1. 
The MFI of ABS and PS was measured at 220 °C and 200 °C, respectively. Fourier-Transform Infrared Analysis (FTIR) Extruded filaments were analyzed with the Fourier-transform infrared (FTIR) technique. An FTIR spectrometer (Perkin-Elmer, Buckinghamshire, UK) equipped with an attenuated total reflection (ATR) device (MIRacle PIKE Technologies, Madison, WI, USA) with zinc selenide crystal was used. The spectra were collected by co-adding 4 scans at a resolution of 4 cm −1 in the range from 4000 to 400 cm −1 . Tensile Property Testing The tensile tests of the filaments were performed according to EN-527 standard on a Zwick Z020 machine (Ulm, Germany). The cross-head speed was 2 mm/min for modulus testing and 50 mm/min for the other measurements. The gauge length was 25 mm. The test samples, 120 mm long filaments were cut from a trial sample, conditioned according to above standard. Tests were carried out with 12 sample replicates. Differential Scanning Calorimeter (DSC) and Thermogravimetric Analysis (TGA) Thermal analysis measurements were performed by mean a differential scanning calorimeter (DSC), and thermogravimetric analysis (TGA) with a linear temperature increase (Simultaneous TG-DTA/DSC Apparatus STA 449 C/4/MFC/G/Jupiter ® , NETZSCH-Gerätebau GmbH, Selb, Germany). DSC was performed under a nitrogen atmosphere, at a 40 mL/min flow rate and heating rate of 10 °C/min. The sample of approx. 10 mg, was placed in an aluminum pan and heated from 25 to 200 °C and then cooled down to 25 °C after keeping at 200 °C for 10 min. This procedure was done twice, and the thermogram of the second scan was used for the analysis. For thermogravimetric analysis, approx. 10 mg of the specimen was heated from 25 °C to 800 or 900 °C at a rate of 10 °C/min under a helium atmosphere of 40 mL/min at a constant flow rate. Evolved gas emission (EGA) during TGA was analyzed by using a mass spectrophotometer (MS 403C Aëolos Mass Spectrophotometer, NETZSCH-Gerätebau GmbH, Selb, Germany) which was coupled with TGA. The MS analysis was limited to 160 m/z. The results were interpreted with N-Proteus ® software (NETZSCH-Gerätebau GmbH, Selb, Germany. For further EGA spectra interpretation, the database "NIST Chemistry WebBook 69" was used [14]. Scanning Electron Microscope (SEM) Analysis and Energy-Dispersive Spectroscopy (EDS) The surface morphology of the samples was studied with a scanning electron microscope (SEM), Hitachi SU3500 (Chiyoda, Tokyo, Japan). Surfaces were observed directly after processing as well as surface fracture after the tensile testing. Elemental analysis was performed with energy-dispersive Xray spectroscopy (EDS) (Thermo Scientific, Waltham, MA, USA). The results were interpreted with Pathfinder software (Pathfinder TM X-ray Microanalysis Software, Thermo Scientific, Waltham, MA, USA). Melt Flow Index (MFI) Table 1 presents the measured MFI parameters for the recycled polymers and parameters found in the literature for virgin ones. According to the results, the MFI value of recycled ABS was significantly lower than that of the virgin one, 8.9 and 15 g/10 min or 43.1 g/10 min, [15,16] respectively. The MFI of the PS, 11.5 g/10 min, was very close to the virgin grade, 12-16 g/10 min. The MFI of virgin rigid PVC varies from 1.4 to 60 g/10 min [17]. The attempts to measure the MFI of recycled PVC were not successful due to the material degrading and clogging the equipment during testing. MFI is sensitive to environmental impact and thermomechanical stress, e.g., during lifecycle and reprocessing. 
Jin et al. who studied the influence of multiple extrusion on the flow properties of low-density polyethylene (LDPE) reported that MFI decreased from 2.31 g/10 min to 0.02 g/10 min after 100 extrusions [18]. Fourier-Transform Infrared Analysis Infrared spectra of the recycled (rABS, rPS and rPVC) samples are shown in Figure 1. Comparative analysis of the samples with virgin counterparts, whose spectra are available in the literature, show that the samples are homogeneous, e.g., without noticeable impurities. The similarity in the molecular structure of ABS and PS, Figure 2, is also reflected in the similarity of their FTIR spectra. The characteristic peaks of ABS and PS, the C-H stretching vibration aromatic, at 3200-3000 cm −1 , and aliphatic, at 3000-2800 cm −1 , are clearly observed in both spectra. The band at 2237 cm −1 corresponds to the C≡N bond observed in the ABS spectra. The band at 1736 cm −1 , oxygen containing carbonyl groups (C=O) band is probably due to an oxidation process in the polymers during usage [20]. The peaks at 1602 cm −1 and 1592 cm −1 correspond to C=C aromatic double bond-stretching vibration. The absorptions at 1493 cm −1 and 1452 cm −1 are also due to carbon-carbon stretching vibrations in the aromatic ring. However, the band at 1452 cm −1 may have resulted from both ring breathing of the benzene ring and the deformation vibration of -CH2 [21]. The peaks at 1070 cm −1 and 1028 cm −1 are in-plane C-H bending of the aromatic ring. The two peaks at 757 cm −1 and 697 cm −1 are due out-of-plane aryl C-H bending for the (mono)substituted benzene ring. The absorbance bands at 966 cm −1 and 911 cm −1 in the ABS spectra correspond to C=C unsaturation (vinyl) in polybutadiene, and the 1,2 butadiene terminal vinyl C-H band, respectively [22]. The PVC spectra are characterized by aliphatic C-H stretching vibration, 3000-2800 cm −1 , the peak at 1452 cm −1 is due to -CH2 stretching vibration, and the peaks near 612 cm −1 and 691 cm −1 are due to C-Cl stretching vibration [23]. This spectrum also reveals the absence of phthalates, which have a specific region at 1620-1560 cm −1 [24]. Thus, this is consistent with the TGA-MS analysis, which also did not reveal phthalate emitting (see the section below). The peaks at 1430 cm −1 and 880 cm −1 might belong to calcium carbonate CaCO3 [25], the presence of which, i.e., Ca-ion, was detected by EDS analysis (see Table 2). The Ca-ion was also detected in the other samples, but in much smaller amounts than in PVC, which was the reason for the absence of CaCO3-specific regions in the IR spectra of ABS and PS. The ageing sign of rPVC could be detected by the presence of a peak near 1740 cm −1 , the carbonyl groups region. Differential Scanning Calorimeter (DSC) analysis The glass transition temperature, Tg, for the second and third heating-cooling cycles of the DSC analysis of the recycled polymers and their virgin counterparts, found in the literature, are listed in Table 3. As can be seen, the Tg of rPS and rPVC is lower than that of virgin ones. The Tg of rABS is similar to that of unprocessed conventional ABS [26]. Influence of additional heating-cooling cycle was insignificant for rPVC and rABS, however the Tg of rPS decreased by 2 degrees. The main reason for Tg changing is usually thermomechanical impact during reprocessing, as well as different external factors during material usage. During ageing, as known, random thermal scission or crosslinking can occur, which in turn decreases or increases Tg, respectively. 
However, changes in the Tg for recycled materials cannot be attributed solely to ageing due to other factors, as e.g., contaminants or additives, which are often present in recycled materials, can have an influence on the Tg parameter. Heat Resistance and Thermal Stability Experimental results of the thermal degradation of the polymers, mass losses, and the corresponding differential thermogravimetry (DTG) curves, are shown in Figure 3. As can be seen, PVC has a significantly faster mass loss rate compared to the ABS and PS grades. The sensitivity of PVC to heat is mostly related to the low binding energy of the C-Cl and thus process dechlorination starts at lower temperatures [28]. The low thermal stability of PVC is also associated with defects presence in the PVC structure, e.g., allylic and tertiary chloride moieties, which are formed during PVC polymerization [29]. This instability of PVC toward heat is compensated by leaving a large char portion at the end of pyrolysis. Char formation can be attributed to the formation of reactive carbonium-ion centers in the polymer, which act as an active center for crosslinking and char building [30]. Unlike in the other samples, the mass loss curve of PVC shows two steps, at the temperature region from 210 °C to 360 °C and from 360 °C to 540 °C, with peaks at 282 °C and 454 °C for the first and second steps, respectively ( Figure 3). The mass losses were about 58 wt% in the first step and in the second step about 83 wt% as a whole. Such large amount of residue was formed due to carbonaceous char formation and presence of inorganic additives [29]. In general, virgin PVC burns incompletely under inert atmosphere, leaving up to 10 wt%of carbon-rich char [31,32,33]. Based on this, the inorganic part of PVC can be estimated as about 7 wt%. Two-step mass loss during PVC pyrolysis is well known and has been described in various reports [34,35]. Briefly, the decomposition of PVC starts from dehydrochlorination, elimination of HCl, followed by benzene formation through cyclization of (CH=CH)n [36]. This process is displayed schematically in Figure 4. Along with HCl and benzene, many other volatiles are generated during PVC burning, and can be detected with FTIR, gas chromatography (GC), and MS analyzers or their combinations [34,35,37,38]. In this study, the probable gas emission was estimated on the basis of the masses-to-charge ratios (m/z) of the volatiles released, and interpreted by using the "NIST Chemistry WebBook 69" data base [14] and compared with published results found in the literature. According to the emitted gas analysis, two main components, HCl (m/z 36, 38) and benzene (m/z 77, 78), were emitted during the first degradation step, see Figure 5. Intensive water vapor, a peak at m/z 17, 18, and a large peak at m/z 44 due to CO2 evolution were also detected. Oxygen-containing gases can probably be formed due to the presence of O-containing additives. The presence of O-ion in the PVC was detected by EDS analysis (see the chapter below). Xu et al. defined CO2 generation in the presence of ferrites, O-containing fire retardants, whereas pure PVC did not emit CO2 during pyrolysis in the inert atmosphere [35]. A few peaks at m/z 39-65 can be attributed to the generation of light aliphatic hydrocarbons, C2-C5 (m/z 39-65), also including chlorinated ones [34]. McNeill et al. studied virgin PVC thermal degradation and found evolution of aliphatic hydrocarbons, namely C10-C13 alkenes (m/z 55-57) and cyclopentene (m/z 67) [36]. 
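The estimate of the inorganic content quoted above follows from a simple mass balance on the TGA residue. The sketch below is illustrative only and reproduces that arithmetic from the values stated in the text: a total mass loss of about 83 wt% (i.e., roughly 17 wt% solid residue, as also noted in the Conclusions) and up to about 10 wt% of carbon-rich char typically left by virgin PVC under an inert atmosphere.

```python
# Mass-balance estimate of the inorganic fraction in the recycled PVC,
# using only the values quoted in the text.
total_mass_loss = 83.0            # wt%, overall loss after both TGA steps
residue = 100.0 - total_mass_loss # ~17 wt% solid residue
carbon_char_upper = 10.0          # wt%, carbon-rich char typical of virgin PVC

inorganic_estimate = residue - carbon_char_upper
print(f"residue ~ {residue:.0f} wt%, inorganic part ~ {inorganic_estimate:.0f} wt%")
# residue ~ 17 wt%, inorganic part ~ 7 wt%
```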
The second step of PVC degradation, which is clearly distinguishable in the mass loss curve and the related gas emission diagram is due to the increased emission of aliphatic hydrocarbons as well as cyclic compound generation. Along with benzene (m/z 77, 78), other aromatic compounds, toluene (m/z 91, 92), styrene (m/z 51, 78, 104), C3-C5 alkyl benzenes (m/z 105) and ethylbenzene (m/z 104), and the isomers of xylene (m/z 106) were formed [36]. It can be said that these aromatic components were formed during the first step in small amounts, with significantly increased amounts during the second step, observed previously for virgin PVC pyrolysis [36]. Phthalates, which are often used in PVC manufacturing as plasticizers, were not detected. Phthalates can be identified by the presence of a peak at m/z 149 [39]. This is in line with the FTIR analysis, which did not detect a peak associated with phthalates either. In this study, rigid PVC from water tube PVC waste was used, where the amount of plasticizers should be very insignificant. PS is a homopolymer where styrene is the monomer, see Figure 2. PS had one-step mass loss scenario with onset at 360 °C and offset at 500 °C, showing a DTG peak at 426 °C. The PS sample decomposed almost completely, with 2 wt% residual material left. In general, pure PS burns completely without char, which was reported in various studies [40,41,42]. The small residue in our case can be attributed to the additives and possible contaminants that can be present in recycled materials. Mass spectrometry analysis of the evolved gases ( Figure 5) showed ion currency peaks at 39, 51, 63, 65, 78, 91, 104, 117, 118 and 130 m/z. The signals at 51, 78 and 104 m/z belong to the styrene monomer; the peaks at 77 and 78 m/z originated from benzene; the peaks at 91 and 92 m/z came from toluene; and the signals at 118 and 130 m/z might belong to methylstyrene and phenylbutadiene, respectively [43]. Seleem et al. report that PS was pyrolyzed to toluene, styrene, benzaldehyde, and 4-phenyl-1-butyne [42]. The ABS polymer is a complex molecule composed of acrylonitrile (15 wt%), butadiene (40 wt%) and styrene (45 wt%). The ABS molecule monomer is shown schematically in Figure 2. The sample showed one-step mass loss which started from 360 °C and completed at about 500 °C, leaving a residue of about 4 wt%. The peak of mass loss was at 420 °C. This is consistent with a previously published result [43]. The evolved gas analysis of rABS showed that possible gases were acrylonitrile (m/z 53), benzene (m/z 77, 78), styrene (m/z 51, 78, 104), toluene (m/z 91, 92) and methylstyrene (m/z 118). The one-step pyrolysis of ABS was described in other reports [37,44]. However, in the quasiisothermal TGA method multicomponent ABS showed more than one step mass loss due to the possibility of separation of overlapped decomposition events [44]. It was shown that ABS decomposed, first, styrene acrylonitrile, followed with butadiene (m/s 54). In another study, the researchers showed that ABS generated, first, butadiene, starting from 340 °C, then styrene at 350 °C, and acrylonitrile starting at about 400 °C [37]. According to Vouvoudi et al., ABS starts to degrade from the abstraction of the side -CN groups [45]. Vouvoudi et al. 
studied the pyrolysis of recycled ABS from waste electrical and electronic equipment (WEEE), and showed that ABS had a three-step mass loss curve where acetonitrile, acrylonitrile and styrene emission, along with several aromatic compounds with 1, 2 or 3 phenyl rings and substituted nitriles were detected [45]. SEM-EDS Analysis EDS is a fast method for the analysis of constituent elements. The elemental composition detected in the polymers is shown in Table 2. In PVC, as expected, the share of chlorine (Cl) is high, about half of the total sample weight. Elements such as Mg, Ca, Ti, Al and oxygen originated from additives that are usually applied in plastic production. For instance, they can be attributed to metalcontaining fire-retardant Mg(OH)2 and Al(OH)3, Ca-based stabilizer and pigment (TiO2) . In fact, a large amount of Ca belongs to a Ca-based stabilizer which is widely applied in PVC manufacturing [46]. The small amount of silicon, Si, which is a component of sand, can have originated from soil impurities. In addition, small amounts of Na, K, Fe and Cu were detected in ABS. Mechanical Properties The extruded filaments were tested for their tensile properties. Experimental results and values for the virgin counterparts taken from the internet sources are listed in Table 4. As can be seen, the tensile strength of recycled materials was much smaller than those of the virgin ones. The tensile modules of rABS and rPVC, in turn, were comparable with the neat equivalents. Reduction of the mechanical properties in the recycled materials can be attributed to thermomechanical action during reprocessing, aging of material during usage and presence of additives and impurities. Important information related to the mechanical properties can be received form the study of the material microstructure. Inspection of the samples with SEM, Figure 6, showed that the microstructure on the side surface of the filaments is smooth without pores and cavities. However, small particles of inclusions associated with additives and impurities can be observed. The presence of the inclusions was detected with EDS analysis (see above). A PVC filament fracture after the tensile test detected heterogeneous morphology, whereas fracture surfaces of PS and ABS are regular, without any defects. Microstructural failings in PVC filaments can be attributed to an insufficient reduction of flake size or/and the non-optimal temperature regime of the extrusion. In addition, PVC can start to degrade at enhanced temperature resulting in structural defects. Conclusions Filament samples of about 3 mm in diameter were extruded with a small-scale extruder from recycled polystyrene (PS), acrylonitrile butadiene styrene (ABS) and polyvinylchloride (PVC) materials. No visible difficulties in the filaments' processing were detected. However, filament fracture micrograph analysis detected heterogeneous morphology in the case of PVC, while PS and ABS showed regular microstructures. In terms of mechanical properties, the tensile strength of the recycled plastics was lower than those of the virgin counterparts, whereas the modulus was comparable. Thermal analysis showed that the glass transition temperatures (Tg) of the recycled PS and PVC were lower than for their virgin counterparts, whereas Tg of recycled and neat ABS was similar. The melt flow index (MFI) of rPS was similar to virgin PS, whereas the MFI of rABS was significantly lower than that of virgin ABS; the MFI of PVC was not detected. 
The samples burned under an inert atmosphere, leaving solid residues of about 17, 2, and 4 wt% for PVC, PS, and ABS, respectively. The analysis of the gases evolved during pyrolysis showed that the studied plastics decomposed following scenarios similar to those of their virgin counterparts. Funding: No funding.
Rapid Damage Assessment Using Social Media Images by Combining Human and Machine Intelligence Rapid damage assessment is one of the core tasks that response organizations perform at the onset of a disaster to understand the scale of damage to infrastructures such as roads, bridges, and buildings. This work analyzes the usefulness of social media imagery content to perform rapid damage assessment during a real-world disaster. An automatic image processing system, which was activated in collaboration with a volunteer response organization, processed ~280K images to understand the extent of damage caused by the disaster. The system achieved an accuracy of 76% computed based on the feedback received from the domain experts who analyzed ~29K system-processed images during the disaster. An extensive error analysis reveals several insights and challenges faced by the system, which are vital for the research community to advance this line of research. INTRODUCTION Rapid damage assessment is a task that humanitarian organizations perform within the first 48 to 72 hours of a disaster and is considered a prerequisite of many disaster management operations1. Assessing the severity of damage helps first responders understand affected areas and the extent of impact for the purpose of immediate rescue and relief operations. Moreover, based on the results of early damage assessment, humanitarian organizations identify Microblogging and social media platforms such as Twitter play an increasingly important role during disasters (Castillo 2016;Imran, Castillo, Diaz, et al. 2015). People turn to Twitter to get updates about an ongoing emergency event (Starbird et al. 2010;Hughes and Palen 2009). More importantly, when people in disaster areas share information about what they witness in terms of damages caused by the disaster, flooded streets, reports of missing, trapped, injured or deceased people, or other urgent needs, that information could potentially be leveraged by humanitarian organizations to gain situational awareness and to plan relief operations (Imran, Elbassuoni, et al. 2013;Purohit et al. 2014). In addition to the textual messages, images shared on Twitter carry important information pertinent to humanitarian response. This work focused on the real-time analysis of the imagery data shared on Twitter during Hurricane Dorian in 2019. In collaboration with a volunteer response organization, Montgomery County, Maryland Community Emergency Response Team (MCCERT)3, we activated our image processing system before Hurricane Dorian made landfall in the Bahamas. Based on the information requirements of our partner organization, the system filtered images that were relevant to the disaster and identified the ones that showed some damage content (e.g., damaged buildings, roads, bridges). More specifically, the damage analysis task assessed the severity of the damage using three levels: (i) severe damage, (ii) mild damage, and (iii) little-to-no damage (i.e., none). During a 13-day deployment period, the system collected around ∼280K images. It used machine learning techniques to eliminate duplicate and irrelevant images before performing the damage assessment. As a result, around ∼160K images were found as relevant and around ∼26K as containing some damage content. Domain experts from our partner volunteer response organization examined an evolving sample of images during the disaster. The purpose of having human-in-the-loop was two-fold. 
First, to keep an eye on the system generated output to verify the system was correctly classifying the images and make corrections if a mistake was identified. Second, use the human corrections to better train the system for future deployments. The human experts performed two tasks while examining over ∼29K images over several days during the system's deployment period. First, they determined if an image contained any damage content. Second, if an image was identified as containing damage, they would determine the severity of the damage using the three severity levels mentioned above. Based on the results of each expert's assessment, the system achieved an accuracy of 76% for the damage detection task and 74% for the damage severity assessment task. These are reasonable accuracy scores, which prove the effectiveness of the system for analyzing real-world disaster imagery data for rapid damage assessment. Furthermore, we performed an error analysis of the corrections resulting from performing the two tasks. Among common mistakes, we observed that the system is weak in identifying scenes that show flooding taken from afar, foggy or blurry scenes, and low-light scenes. Moreover, images that resembled damage scenes but were verified as incorrect confused the system. For example, a pile of trash would sometimes be confused as damage. Identifying deficiencies during our deployment not only helps us improve our machine learning models, but also provides valuable information for the crisis informatics research community to better understand challenges of analyzing social media imagery data during real-world disaster situations. This could lead to the discovery of additional methods and models that seek similar qualifying actionable machine output imagery to benefit decision-makers. 2http://www.resiliencenw.org/2012files/LongTermRecovery/DisasterAssessmentWorkshop.pdf 3We met the lead of the CERT team in one of the ISCRAM conferences and discussed the possibility to do a joint activation of our automatic image processing system for damage assessment. The rest of the paper is organized as follows. The next section summarizes Related Work. In the Hurricane Dorian Deployment section we provide details of the event and our system deployment. Then, we report the data collection and analysis in the section Data and Results. We later discuss our findings in the Discussion section, identify challenges, and provide future directions. Finally, we conclude the paper in the last section. RELATED WORK The importance of imagery content for disaster response has been reported in a number of studies (Turker and San 2004;Chen et al. 2013;Plank 2014;Feng et al. 2014;Fernandez Galarreta et al. 2015;Attari et al. 2017;Erdelj and Natalizio 2016;Ofli et al. 2016). These studies dominantly analyze aerial and satellite imagery data. For instance, Turker and San 2004 analyze post-earthquake aerial images to detect damaged infrastructure caused by the August 1999 Izmit earthquake in Turkey. Plank 2014 provides a comprehensive overview of multi-temporal Synthetic Aperture Radar procedures for damage assessment and highlights the advantages of SAR compared to the optical sensors. On the other hand, Fernandez Galarreta et al. 2015 andAttari et al. 2017 report the importance of images captured by Unmanned Aerial Vehicles (UAV) for damage assessment while highlighting the limitations of remote sensing data. 
These studies propose per-building damage scores by analyzing multi-perspective, overlapping and high-resolution oblique images obtained from UAVs. Ofli et al. 2016 also highlights the importance of UAV images while addressing the limitations of satellite images. The authors propose a methodology that enables volunteers to annotate aerial images, which is then combined with machine learning classifiers to tag images with damage categories. Very recently, the study of social media image analysis for disaster response has received attention from the research community (Daly and Thom 2016;Mouzannar et al. 2018;Alam et al. 2018b). For example, Daly and Thom 2016 analyze images extracted from social media data collected during a fire event. Specifically, they analyze spatio-temporal meta-data associated with the images and suggest that geo-tagged information is useful to locate the fire-affected areas. Mouzannar et al. 2018 investigate damage detection by focusing on human and environmental damages. Their study includes collecting multimodal social media posts and labeling them with six categories such as (1) infrastructural damage (e.g., damaged buildings, wrecked cars, and destroyed bridges) (2) damage to natural landscape (e.g., landslides, avalanches, and falling trees) (3) fires (e.g., wildfires and building fires) (4) floods (e.g., city, urban and rural) (5) human injuries and deaths, and (6) no damage. While many of the past works on rapid damage assessment need expensive data sources, some of which are also time consuming to deploy such as UAVs, satellites, and SAR, our work highlights the usefulness of Twitter images and utilizes an image processing pipeline proposed in (Nguyen et al. 2017). This image processing system filters irrelevant content, removes duplicates, and assesses damage severity for real-time damage assessment using deep learning techniques and human-in-the-loop. Hurricane Dorian On the morning of August 30, 2019, Hurricane Dorian was a Category 2 in the eastern Caribbean barreling toward the northern Bahaman Islands and central Florida. In the next 24 hours, the tropical storm rapidly intensified and became a potential danger. tasks. Some CERTs have expanded their team capabilities to provide virtual assistance that includes social media analysis. Montgomery County, Maryland CERT applies a methodological framework as described by (Peterson et al. 2019) when searching for mission-specific content extracted from Twitter. This includes, but is not limited to, performing the following tasks to find reports of damage: 1. Use hashtags and keywords to manually search for relevant tweets, including tweets containing images showing some degree of damage. 2. Analyze tweet text for pertinent cues that would qualify it as valuable. (e.g., context, location, user profile, etc.). 3. Download damage images into a team collaborative working document and determine the applicability of each image to the mission assignment. 5. Repeat above steps throughout operational period. The above-described methodological framework is effective for social media analysis during disasters, when the mission assignment is focused on text. For example, searching tweets for information indicating road conditions within a region impacted by the disaster. Most social media management tools that Montgomery County, Maryland CERT has used lack the capability to retrieve only tweets containing disaster images. 
This could hinder future mission assignments related to retrieving visual data because of complex and time-consuming manual steps. For example, first, each tweet would need to be individually checked by a human to determine whether an image was included. Second, if the tweet did contain an image, and that image was determined to be of value to the mission assignment, it would need to be extracted and placed within a collaborative document. Then, potentially, another human would determine the applicability of the image to the mission assignment. Manual analysis of a high-volume data source such as Twitter often leads to information overload (Hiltz and Plotnick 2013). Therefore, instead of following the above manual steps, we used an automatic Twitter image collection and processing system to find reports of damage caused by Hurricane Dorian as it was progressing. Next, we describe the details of the automatic processing system.

Automatic Image Processing System Deployment
We used the AIDR image processing system (Nguyen et al. 2017; Imran, Castillo, Lucas, Meier, and Vieweg 2014) to start collecting tweets related to Hurricane Dorian on August 30, 2019. The collection ran for about two weeks and stopped on September 14, 2019. In total, approximately 6,890,106 tweets were collected. The below-listed keywords were used to collect English-language tweets.

Image Processing Modules
The system has a number of different image analysis modules to process images on Twitter. In this system deployment, we used four of them, which are described below.

Image deduplication: Images shared during a disaster are often re-posted, cropped, resized, or re-shared with additional text inserted on the existing image. Therefore, determining whether an image is a duplicate by comparing it to all the existing images collected by the system to date is crucial. The image deduplication module performs this check by measuring the distance between a newly collected image and existing images using the Euclidean distance on features extracted from the images. More specifically, the system uses a deep neural network to extract features from an image and keeps them in a hash. We use a fine-tuned VGG16 model (Simonyan and Zisserman 2014) and extract features from its penultimate fully-connected (i.e., "fc2") layer. A Euclidean distance of less than 20 between the features of two images is taken to mean that the two images are duplicates or near-duplicates. Determining an optimal distance threshold is an empirical question, which is not the focus of this work; however, a distance of 20 worked best for our setting.

Junk filtering: Generally, Twitter is full of noisy content, and disasters are not an exception. Research studies have found images of cartoons, advertisements, celebrities, and explicit content shared in tweets related to a disaster event (Alam et al. 2018a; Alam et al. 2018b). Trending hashtags are often exploited for this purpose. Such irrelevant content must not be shown to decision-makers during disaster response and recovery efforts, given that their time is valuable and limited, and unnecessary disruptions must be avoided. The junk filtering module tries to detect irrelevant images by using a deep learning model trained to detect irrelevant concepts such as cartoons, celebrities, banners, and advertisements. The F1 score (i.e., the harmonic mean of the precision and recall) of this model is 98% (Nguyen et al. 2017).

Damage severity assessment: A unique and potentially relevant image is then finally analyzed by the damage severity assessment module, which determines the level of damage shown in the image.
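Before detailing the damage severity model, the near-duplicate check just described can be sketched in a few lines of Python. This is an illustrative sketch only: the "fc2" feature layer and the distance threshold of 20 come from the text, but the off-the-shelf ImageNet weights stand in for the authors' fine-tuned model, and the helper function names are ours.

```python
# Illustrative sketch of the near-duplicate check: deep features from VGG16's
# "fc2" layer, compared with a Euclidean-distance threshold of 20.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet")  # ImageNet weights stand in for the fine-tuned model
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def fc2_features(path):
    """Load an image, resize it to 224x224, and return its 4096-d fc2 feature vector."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]

def is_near_duplicate(feat_a, feat_b, threshold=20.0):
    """Treat two images as (near-)duplicates if their feature distance is below the threshold."""
    return float(np.linalg.norm(feat_a - feat_b)) < threshold
```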
We used a transfer learning technique to fine-tune an existing VGG16 model originally trained on the ImageNet dataset. The fine-tuning of the network (all layers) was performed on a damage-related labeled dataset consisting of three classes: severe, mild, and none. The severe damage class contains images that show fully destroyed houses, buildings, bridges, etc. The mild damage class contains images that show partially destroyed houses, buildings, or transportation infrastructure. The F1 score of this model is 83%.

Human-in-the-loop for Image Labeling in Real Time
Automatic systems are not perfect and may make mistakes. It is essential to have some human involvement, either to verify the produced results or to provide supervision to the system if/when needed (Imran, Castillo, Lucas, Meier, and Rogstadius 2014). Our system uses human-in-the-loop for both verification and supervision purposes. Data items processed by the system are used to take samples for humans to verify, and to guide the system if a mistake is identified. Such mistakes could be false positives or false negatives. Human-labeled items would then ideally be fed back to the system for retraining a new model for enhanced performance.

Figure 2. Task description page showing details of the tasks, including class definitions.

To involve humans in the verification and supervision process, we used our MicroMappers crowdsourcing system. Images downloaded and classified by our data processing system were first used to take samples. We performed this sampling every couple of hours during the operational period for Montgomery County, Maryland CERT (details in the next section). In most of the samples, we selected all severe damage and mild damage images and some from the none class from the system-processed images in a given time window of the past T hours. We did not fix the T value, i.e., the number of hours, as human processing speed depends on many unknown factors. The sampled images were then shown to human experts. For this deployment, we decided to only crowdsource the output of our damage severity assessment module, which classifies an image into one of three damage levels (i.e., classes), as described above. On a web interface, we showed an image along with the system-predicted class to the expert. The human expert either agreed or disagreed with the machine classification. In the case where they disagreed with the machine classification, they would provide a new label to the image. Figure 1 depicts the crowdsourcing interface. The interface first showed the options (Damage, No Damage, and Don't know or can't judge), which can be seen on the left. If the human selected the Damage label, the interface would further show two severity levels (Mild, Severe), which would appear on the right side of the screen. The human would select one of these two severity labels and submit their assessment. If the human selected "Don't know or can't judge", then the system would not show the two additional severity labels. The human experts were allowed to provide additional comments using the text boxes on the interface. In addition to the labeling interface, we established two other pages, one showing the task details (Figure 2) and the second a detailed tutorial with concrete examples for each class. Each human expert was instructed to go through the tutorial before beginning their labeling effort.
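Returning briefly to the damage severity module described at the start of this section, a minimal transfer-learning setup of the kind outlined there could look as follows. This is a sketch under stated assumptions: the paper fine-tunes all layers of a VGG16 model on a three-class damage dataset, but the classification head, optimizer, and hyperparameters below are illustrative choices rather than the authors' exact configuration.

```python
# Illustrative transfer-learning sketch: VGG16 backbone fine-tuned for the three
# damage-severity classes (severe, mild, none). Head and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True  # fine-tune all layers, as described in the text

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # severe / mild / none
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # labeled damage images
```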
Data Statistics
As shown in Table 1, out of all 6,890,106 tweets collected, 280,063 unique image URLs were found. The total number of downloaded images was 279,819. Around 244 images failed to download due to one of several reasons, e.g., the tweet author deleted the tweet, the image host server was down, or the connection timed out.

Automatic Classification Results
The 279,819 images that were successfully downloaded were then analyzed by the image processing modules described in the previous section. An image-based deduplication was performed as the first step, followed by the junk filtering and damage severity assessment steps. Table 2 shows the number of images that were found to be unique, relevant, and showing some level of damage, specifically severe and mild damage. Out of 279,819 images, the image-based deduplication module found 119,767 unique images, which was around 42% of the whole set. As described earlier, this image-based deduplication module relies on deep features extracted from images using a deep neural network. Due to the high retweet/re-sharing ratio on Twitter, even during a large-scale natural disaster, 58% of the images were identified as exact or near-duplicates by the system. At this stage, the process of automatically finding near-duplicate images had already reduced the chance of information overload affecting the human experts.

Figure 3. Images that are relevant but do not show any damage.

Furthermore, out of the 279,819 images, 77,580 were identified as relevant by the system. These images did not contain cartoons, celebrities, banners, advertisements, etc. Among the relevant images, some contained damage scenes while others did not. Figure 3 shows a few images that did not show any damage but were identified as relevant. Many of the relevant images showed hurricane maps or some other scene associated with rescue efforts. We show the distribution of total, relevant, and irrelevant images for the whole deployment period in Figure 4. Out of all relevant images, 26,386 were identified as containing some damage, of which 11,044 showed severe and 15,342 showed mild damage. The images with damage scenes were around 10% of all the downloaded images. The system's ability to filter out around 90% of images as potentially not containing any damage content is a significant reduction in the risk of information overload for humans. Figure 5 contains a few images which, according to the system, showed severe damage. Figure 6 shows a few images which, according to the system, included mild damage. We show the distribution of mild and severe damage images as classified by the system for the whole deployment period in Figure 7. Finally, Table 3 shows the distribution of images identified as duplicate, not relevant, and containing no damage.

Human Verification and Image Labeling Results
As Hurricane Dorian progressed, human experts from Montgomery County, Maryland CERT (N=28) were asked to look at an evolving sample of system-processed images to verify whether the system was producing the desired results. The experts were also instructed to correct any identified mistakes made by the system. Given that they were trained domain experts, not employed from an online paid crowdsourcing platform, we trusted their judgements without the need to ask multiple assessors. This meant each image was assessed by only one human expert. At the conclusion of the CERT operational period, their team lead reviewed around 2K of the completed tasks for quality assurance. This feedback can be found in the Discussion section. Table 4 shows the results of the damage detection task.
In total, the human experts analyzed 29,136 images over a 42-hour operational period from 8:00pm on September 6 to 2:00pm on September 8. These images were initially processed by the system and contained scenes of both damage and no damage. Moreover, when an image contained damage, it had one of three damage severity labels (severe, mild, none) assigned by the system. Of all 29,136 analyzed images, 1,086 were labeled as "Don't know or can't judge" by the experts. This could have been due to several reasons including blurred/low quality images, closeup shots, too dark/small, or an image containing text. From the remaining set (i.e., 28,050), the experts agreed with the system predictions for 21,384 images. This agreement can be seen in the two diagonal colored cells of the Table 4, where in 2,088 cases both system and human agreed that the image showed some damage and 19,296 cases the image showed no damage. However, there were 6,666 (5,954 + 712) images which the experts did not agree with the system. Based on the results of this human analysis, we compute the system accuracy = 76% . For the second task, which aimed to assess the severity of damage in an image, the results are shown in Table 5. The human experts agreed with the system 20,887 times, as shown in the three diagonal colored cells. However, we received a disagreement for 7,163 images. Based on the results of this human analysis, we measured the system accuracy as 74% for this task. We report detailed system performance results in terms of precision, recall, F1, and accuracy for both tasks in Table 6. The system achieved a precision of 0.89 for both tasks, which is a reasonable score. However, the recall scores are a little lower, i.e., 0.76 for task 1, and 0.74 for task 2. DISCUSSION AND ERROR ANALYSIS From an emergency manager's point of view, it is important the system does not miss any damage reports, regardless of the severity of damage. Missed damage reports could provide relevant information on an impacted area that had minimal or no actionable intelligence immediately available for decision-making. Therefore, among other cases, 'false negative' are most important for us to analyze. For example, when the machine predicts an image as not containing any damage i.e., "None" but the human expert labels it as "Severe" or "Mild". There were 357 cases where Machine=None & Human=Severe and 355 cases where Machine=None & Human=Mild. These cases can be seen in Table 5 and are analyzed next. Machine:None vs. Human:Severe: Figure 8 shows a few images where the machine prediction was None (i.e., no damage) and human assessment was Severe damage. Our in-depth analysis of these 357 images reveals that in most of these false-negative cases, the machine mainly missed flooded scenes. Another main pattern that emerged is where images with low light confused the machine such as the third image from the left in Figure 8. Also, aerial images covering a wide area caused issues for the machine to understand them (i.e., first image on the left). Image collages are also a source of problem for the machine. We define an image collage as multiple images joined together to appear as one. Such cases create even more challenges to accurately classify damage severity when the level of damage in at least one of the images contradicts the level of damage in another image within the same collage. Figure 9 shows a few cases out of 355 where the machine prediction was None, but according to the human experts these images showed Mild damage. 
Our analysis of these cases revealed that, in most of them, the damage appearing in the image was covered by another object, which made it difficult for the machine to classify them as damage images. In the second and third images from the left in Figure 9, people are standing in front of and covering parts of the damage scenes, whereas in the fourth image a white door covers 80% of the damage scene, leaving only a small area for the machine to predict it as a mild damage case. Moreover, we noticed that scenes with trees bending in strong winds were also missed by the machine. For the above two cases, further investigation revealed that our damage severity assessment model's training data lacks flooding and strong-wind scenes, which is one of the reasons the model missed many such cases.

Another important area for us to understand is the false positives our system generates. As shown in Table 5, there were 5,954 (721 + 5,233) images which, according to the machine, either contained Severe damage (i.e., 721 cases) or Mild damage (i.e., 5,233 cases), but according to the human experts these cases were None, meaning they did not show any damage. Next, we study these two cases.

Machine:Severe vs. Human:None: We extensively analyzed these 721 images. A few of them are shown in Figure 10. Our analysis revealed that most of the images appeared to contain some damage, but actually they did not. Many images contained scenes with irregular arrangements of wooden pieces, which deceived the model into predicting them as damage. The first image from the left in Figure 10 shows a pile of trash that could be interpreted as debris of destroyed built infrastructure. The second image has a wooden pathway with irregular arrangements of lumber; perhaps part of the pathway is slightly damaged, but it is not a severe damage scene. Similarly, the other two images caused confusion for the model. Having the ability to identify non-damage scenes that resemble damage scenes would be one of the most challenging tasks to address from the machine's modeling point of view. More hard negative examples would help models better understand and discriminate between positive and negative cases.

Machine:Mild vs. Human:None: Figure 11 shows a few images from this category. Our analysis revealed that the majority of these images showed maps depicting the hurricane's path, as seen in the first image from the left of Figure 11. Among other scenes, there were rough sea images or memes with flooding scenes (i.e., last image from left). Furthermore, we also noticed this category contained many images with people standing in groups or performing some activity. Overall, this category shows more variation in the scenes compared to the other categories. The misclassifications relating to maps and images where there are people can be easily fixed by feeding more hard negative examples to the machine. However, scenes of rough seas and flooding closely border the mild or severe categories and thus would be hard to tackle accurately.

[Figure 11 panel labels: hurricane path map; sand/soil and white bags; rough sea; meme.]

Challenges and Future Work
Based on feedback from the human experts, we identified a number of weaknesses and challenges that our image processing models faced. We list these challenges as future work below.
• Flood scene variations: Capturing different variations in flood scenes such as flooding on roads, in houses, forests, or fields is important yet challenging for machine learning models.
In our case, this problem occurred mainly due to the lack of appropriate training data that represented such variations. However, in some cases, even a sufficient amount of labeled data might not be enough to resolve ambiguities between a natural scene and a disaster scene. For example, rough sea scenes should not be confused as flood scenes. These difficult cases require additional considerations while training machine learning models and also raise awareness to the need for further research on effective integration of human intelligence into machine learning models. • Low-light damage scenes: We noticed many foggy and low-light scenes were missed by our models. Similar to the previous challenge, lack of training data collected in low-light conditions caused our models to miss such cases. Addressing this issue is important from a time perspective for decision-makers when a disaster occurs at night time. Accurately classified images can provide awareness of the severity of damage before daylight arrives, thus saving time and allowing for some decisions to begin to be made (e.g., resource allocation planning). In addition to collecting more appropriate training data, other image processing techniques can be used to adjust image contrast, brightness, or saturation as pre-processing steps before feeding them into the model. • Wide-area and aerial images: Images taken from afar often cover a wide area that shows many objects such as houses, trees, sky, etc. These images do not only show objects at a much smaller scale than ground-level images but they also often contain scenes with a mix of both damaged and undamaged objects and areas. Due to such large differences in the scale of objects and areas, it may not be ideal to design a single model that operates on both aerial and ground-level images for any given task. In particular for the damage assessment task, the ideal solution may require designing separate aerial and ground-level models with more localized (i.e., object-level) damage detection and assessment capabilities. • Maps and memes: Our models suffered while identifying maps and memes. However, we noticed this deficiency is mainly due to the lack of appropriate labeled data on which our models were initially trained. Adding more suitable training images would help eradicate this problem. • Damage-resembling scenes: Images that show scenes resembling damaged objects or areas constitutes a big challenge for automatic image processing models. We identified around 700 such cases during our deployment. Machine learning for such scenes may need additional semantic information about objects surrounding a damage scene to help models understand. For example, if a nearby crop field shows intact healthy crops, then it is less likely that overall image shows severe damage. CONCLUSIONS Rapid damage assessment provides crucial information about damage severity caused by a disaster in the early stages of response. Humanitarian and formal response organizations rely on field assessment reports, remote sensing methods, or satellite imagery to perform damage assessment. This work leveraged imagery data shared on Twitter to identify reports of damages using image processing techniques based on deep neural networks. Moreover, the image processing system filters out duplicate and irrelevant images which are not useful for decision-makers responding to the disaster. The system was activated before Hurricane Dorian made landfall in the Bahamas and ran for 13 days. 
Over a 42-hour operational period of collaboration with our partner volunteer response organization, the damage reports identified by the system were examined by the organization's domain experts, whose feedback revealed that the system achieved an accuracy of 76% for the damage detection task and 74% for the damage severity assessment task. Although these scores show the system's effectiveness in processing real-world disaster data, we identified a number of shortcomings of our machine learning models, which are listed in the previous section and considered potential future work.
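For reference, the two accuracy figures just cited follow directly from the expert/system agreement counts reported in the Data and Results section. The minimal check below is illustrative only and uses the counts underlying Tables 4 and 5 (29,136 assessed images minus 1,086 "can't judge" responses).

```python
# Reproducing the reported accuracies from the expert/system agreement counts.
task1_agree, task1_disagree = 21384, 6666   # damage vs. no-damage (Table 4)
task2_agree, task2_disagree = 20887, 7163   # severity: severe/mild/none (Table 5)

acc1 = task1_agree / (task1_agree + task1_disagree)
acc2 = task2_agree / (task2_agree + task2_disagree)
print(f"damage detection accuracy: {acc1:.1%}")   # ~76.2%
print(f"damage severity accuracy:  {acc2:.1%}")   # ~74.5%
```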
Evaluation of Dry Eye and Meibomian Gland Dysfunction in Teenagers with Myopia through Noninvasive Keratograph Purpose. This study aims to evaluate dry eye and ocular surface conditions of myopic teenagers by using questionnaire and clinical examinations. Methods. A total of 496 eyes from 248 myopic teenagers (7–18 years old) were studied. We administered Ocular Surface Disease Index (OSDI) questionnaire, slit-lamp examination, and Keratograph 5M. The patients were divided into 2 groups based on OSDI dry eye standard, and their ocular surfaces and meibomian gland conditions were evaluated. Results. The tear meniscus heights of the dry eye and normal groups were in normal range. Corneal fluorescein scores were significantly higher whereas noninvasive break-up time was dramatically shorter in the dry eye group than in the normal group. All three meibomian gland dysfunction parameters (i.e., meibomian gland orifice scores, meibomian gland secretion scores, and meibomian gland dropout scores) of the dry eye group were significantly higher than those of the normal group (P < 0.0001). Conclusions. The prevalence of dry eye in myopic teenagers is 18.95%. Meibomian gland dysfunction plays an important role in dry eye in myopic teenagers. The Keratograph 5M appears to provide an effective noninvasive method for assessing ocular surface situation of myopic teenagers. Introduction Dry eye disease is defined by the Report of the Definition and Classification Subcommittee of the International Dry Eye WorkShop as a multifactorial disease of tears and ocular surface, which results in symptoms of discomfort, visual disturbance, and tear film instability, with potential damage to the ocular surface [1]. Dry eye is a common ocular surface disease that often occurs in the elderly [2]. More than 20% of people in 30-40-year-olds have dry eye, and the prevalence of dry eye in people over 70 years old is as high as 36.1% [3]. Currently, with the increasing popularity of computers, video games, and smartphones in the younger generation, the incidence of myopia in teenagers is increasing annually, with a growing number of myopic teenagers exhibiting frequent blinking, sensitivity to light, and other dry eye ocular discomfort [4]. Dry eye is of an increasingly important clinical significance in myopic adolescents as it affects their quality of life. Diagnosis of dry eye currently relies on break-up time (BUT) and Schirmer's tests. However, BUT speed is different for different people. Moreover, fluorescein sodium affects the tear film's stability. BUT and Schirmer's tests are both invasive examinations. Adolescents are more difficult to evaluate than adults for ocular surface dysfunction because of poorer compliance with the procedure. Thus the traditional diagnostic methods for identifying dry eye in adolescents are less definitive since children are more sensitive to the procedure than adults. Accordingly, the data reproducibility is more variable making it more difficult to identify the disease signs in an adolescent population. Accordingly, reported dry eye incidence in myopics is underdiagnosed. Given the lower prevalence of dry eye disease in children, the diagnosis of dry eye is often overlooked by many ophthalmologists [5]. Previous studies have confirmed that Keratograph 5M (Oculus, Wetzlar, Germany) noninvasively measures noninvasive break-up time (NIBUT), tear meniscus height, and meibography with low irritability [6][7][8][9][10]. 
Therefore, in this study, we used Keratograph 5M combined with slit-lamp examination and dry eye questionnaire to give myopic adolescents a series of dry eye-related inspections and assessments and to determine the prevalence of dry eye and ocular surface conditions among myopic adolescents. Materials. A total of 248 consecutive patients (average age 12.26 ± 1.86 years, range 7-18 years; 132 female, 116 male, male to female ratio = 1 : 1.14) who went to Tianjin Medical University Eye Hospital myopia clinic from January to June in 2014 with no systemic or ocular treatment, contact lens wear, keratitis, ocular allergic disease, any other ocular surface disease, glaucoma, active and chronic uveitis, or previous ocular surgery or injury were recruited in this prospective study. Written informed consent was obtained from the parents of the patients. The study was approved by the Institutional Review Board of the Tianjin Medical University Eye Hospital and performed in accordance with the tenets of the Declaration of Helsinki. Methods. This study was a prospective study, and all inspections were performed by the same experienced examiner. Questionnaire Regarding Dry Eye. Before clinical examination, each patient completed an Ocular Surface Disease Index (OSDI) questionnaire for assessment of ocular surface symptoms and the severity of dry eye. This questionnaire [11] included questions regarding the frequency of dry eye symptoms experienced in the previous week (light sensitivity, gritty sensation, painful or sore eyes, blurred vision, and poor vision), vision-related daily activities (reading, watching TV, working on computers, and driving at night), and environmental triggers (wind, air conditioning, and low humidity). Each answer was scored on a 5-point scale (all of the time: 4, most of the time: 3, half of the time: 2, some of the time: 1, and none of the time: 0), and the OSDI score was calculated as follows: {(sum of scores × 25)/total number of questions}. Thus, the total OSDI score ranged from 0 to 100. A higher OSDI score represented greater disability. Answering was completed with the assistance of one doctor, and the completion time was controlled within 4-6 min. Currently, no uniform national standards have been established for the diagnosis of dry eye, and the diagnostic criteria are inconsistent worldwide. Based on their OSDI scores, the patients were categorized as having a normal ocular surface (0-12 points) or as having mild (13-22 points), moderate (23-32 points), or severe (33-100 points) ocular surface disease [12]. The study population was divided into normal and dry eye groups, which included those with mild dry eye, moderate dry eye, and severe dry eye. The two groups were compared to assess their ocular surface conditions. Keratograph 5M: Noninvasive Measurement for Ocular Surface. Keratograph 5M inspection items include noninvasive tear film break-up time, noninvasive tear meniscus height, and meibography. The tests were first measured in the right eye and then the left eye. Three measurements were taken, and the average of results was considered in the statistics. Keratograph 5M was used to grade the right eyelid using the following meibomian gland dropout degrees as meiboscore [13]: Grade 0: no loss of meibomian gland; Grade 1: loss of < 1/3 of the whole gland area; Grade 2: loss of 1/3-2/3 of the whole gland area; and Grade 3: loss of > 2/3 of the whole gland area. 
The meiboscore of each eye was calculated as the sum of the scores from both upper and lower eyelids, making the total meiboscore per eye range from 0 to 6.

Slit-Lamp Examination of the Anterior Segment. The following examinations were carried out sequentially using a slit-lamp: meibomian gland orifices, meibomian gland lipid secretion, and corneal fluorescein staining scores. The quality of the meibomian gland orifices was scored semiquantitatively in the central eight glands of the lower right eyelid as follows: Grade 0 is normal, that is, no obstruction of the orifices, which are covered with a thin and smooth fluid; Grade 1 is obstruction or occlusion of one or two meibomian gland orifices by secretions; Grade 2 is obstruction of two or three meibomian gland orifices with thick fluid; Grade 3 is obstruction or narrowing of almost half of the meibomian gland orifices; Grade 4 is obstruction or narrowing of more than half of the meibomian gland orifices with sticky secretions. The quality of the meibum was scored semiquantitatively in the central eight glands of the lower right eyelid as follows (0-24 points in total) [14]: Grade 0: clear fluid; Grade 1: cloudy fluid; Grade 2: cloudy, particulate fluid; and Grade 3: inspissated, toothpaste-like fluid. Corneal fluorescein staining was graded from 0 to 12 as the sum of the scores of the four corneal quadrants, each scored individually as 0 (no staining), 1 (mild staining with a few scattered dots of stain), 2 (moderate staining between grades 1 and 3), and 3 (severe staining with confluent stains or corneal filaments) [15].

Statistical Analysis. Statistical analysis was performed using SPSS version 19.0. All variables were expressed as the mean ± standard deviation. Indexes were analyzed using the nonparametric Mann-Whitney test, and the distribution of the intergroup data was assessed using the Shapiro-Wilk test. Spearman correlation analysis was used to estimate the correlations between various factors. Categorical variables were compared between the groups using the chi-square test. The confidence interval was set at 95%, and probability values of P < 0.05 were considered statistically significant.

Dry Eye Detection Rate. A total of 248 subjects (496 eyes, average age 12.26 ± 1.86 years) were recruited for the study: 116 males (average age 11.9 ± 2.55 years) and 132 females (average age 12.2 ± 2.45 years).

Comparison of General Condition and Ocular Statistical Indexes between the Dry Eye Group and the Normal Group. Table 1 shows that no significant differences in age, gender, or tear meniscus height were found between the dry eye and the normal groups. Tear meniscus height was normal for both groups (>0.20 mm), with 0.23 ± 0.03 mm in the dry eye group and 0.22 ± 0.03 mm in the normal group. The average OSDI score of the dry eye group was 27.02 ± 14.35, and its average corneal fluorescein score was 3.51 ± 1.67; in the normal group, the average OSDI score was 7.29 ± 3.36 and the average corneal fluorescein score was 1.23 ± 2.32. These two indicators were significantly higher in the dry eye group than in the normal group (P < 0.001). The average NIBUT in the dry eye group was 6.32 ± 2.49 s, significantly lower than that of the normal group, 13.14 ± 3.67 s (P < 0.001).
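To make the group means above easier to interpret, the OSDI scoring rule described in the Methods section can be written out in a few lines. This is an illustrative sketch: the formula, item scale, and severity cut-offs are taken from the text, while the example answers are hypothetical.

```python
# OSDI scoring as described in the Methods: 12 items, each rated 0-4, and
# OSDI = (sum of scores * 25) / number of questions, giving a 0-100 score.
def osdi_score(answers):
    """answers: per-item scores (0 = none of the time ... 4 = all of the time)."""
    return sum(answers) * 25 / len(answers)

def osdi_category(score):
    """Severity bands used in the study: 0-12 normal, 13-22 mild, 23-32 moderate, 33-100 severe."""
    if score <= 12:
        return "normal"
    if score <= 22:
        return "mild"
    if score <= 32:
        return "moderate"
    return "severe"

answers = [1] * 12                   # hypothetical: every item rated "some of the time"
score = osdi_score(answers)          # 25.0
print(score, osdi_category(score))   # 25.0 moderate
```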
In contrast with the normal group, the meibomian gland orifice scores, meibomian gland secretion scores, and meibomian gland dropout scores were significantly higher in the dry eye group ( < 0.0001) ( Table 2). significant inverse correlation was observed between the value of OSDI and NIBUT (rs = −0.982, = 0.000) (Figure 1). Moreover, a highly significant correlation was observed between the value of OSDI and meibomian gland dropout scores (rs = 0.838, = 0.000) (Figure 2). Discussion Recent studies showed that dry eye is a major clinical problem affecting quality of life [4] as it reduces the immunity of ocular surface, causes eye symptoms in children, leads to visual fluctuations during the day, and affects visual clarity in the daytime. Moreover, dry eye can reduce learning efficiency in children. Dry eye is widely believed to be a type of disease whose incidence increases with age [5], and thus scholars have conducted much dry eye research for the elderly. The ability of children to express eye symptoms are worse than adults, or some children may be able to express it clearly but dry eye examinations are difficult. Moreover, allergic conjunctivitis has a higher prevalence in children, and many children who have this condition also suffer from dry eye, making dry eye diagnosis more difficult [16]. Thus, the dry eye incidence in children was underestimated by many scholars. In this study, we use Keratograph 5M combined with slit-lamp examination and dry eye questionnaire to give myopic adolescents a series of dry eye-related inspections and assessments. Dry eye incidence in children was found to be 18.95% which is lower than that in adults but still not significant. Undiagnosed dry eye can lead to fragile ocular surface environment, irreversible eye damage, and increased possibility of corneal ulcers and scars [5]. Accurate diagnosis, systemic treatment, and etiological control can improve eye health and ensure good visual quality in young people. Keratograph 5M is an objective, comprehensive, and noninvasive dry eye diagnostic device that can detect NIBUT, noninvasive tear meniscus height, and meibomian gland dropout. Keratograph 5M exhibits high accuracy in the dry eye diagnosis in adults [17]. The current study shows that Keratograph 5M has a good implementation even in children, and it can be combined with questionnaire to facilitate clinical diagnosis of dry eye in children. OSDI, NIBUT, and meibomian gland dropout are correlated to dry eye in adolescents, which means that aggravated dry eye symptoms are associated with worse unstable tear film and increased meibomian gland dropout. The lower prevalence of dry eye disease in children relative to adults, limitations of diagnosis, lower degree of the subjective assessment of symptoms in children, and the lack of clinician attention reduce dry eye awareness. The meibomian glands are the main source of lipids for human tear film. The lipid layer of the tear film slows evaporation of the aqueous of tear film, preserves a clear optical surface, and forms a barrier to protect the eye from microbial agents and organic matter [18]. The meibomian gland plays a more important role than aqueous tear volume in determining the severity of ocular discomfort and dry eye conditions [19]. Lipid-deficient dry eye caused by meibomian gland dysfunction (MGD) has increasingly drawn ophthalmologists' attention. 
MGD is a chronic, diffuse abnormality of the meibomian glands, commonly characterized by terminal duct obstruction or qualitative/quantitative changes in the glandular secretions. MGD may result in alteration of the tear film, symptoms of eye irritation, clinically apparent inflammation, and ocular surface disease [20]. MGD could reduce tear film stability and cause ocular complaints, inflammation, and other ocular surface disorders [21]. The mean values of tear meniscus height in the dry eye and the normal groups were both in the normal range, whereas NIBUT in the dry eye group was shorter than that of the normal group, which suggests that the dry eye group has normal tear volume but relatively unstable tear film relative to the normal group. The dry eye group of myopic teenagers has a high corneal staining score, more abnormality of meibomian gland orifices and meibomian gland lipid secretions, and more meibomian gland dropouts, causing serious MGD. This result is similar to that of previous studies where lack of meibomian gland is also accompanied by damaged meibomian gland function [7]. This result implies that the common type of dry eye among myopic teenagers is lipid abnormalities of dry eye (i.e., evaporative dry eye). Currently, the clinical evaluation of dry eye is mainly based on BUT and Schirmer tests, whereas the evaluation of meibomian gland function and lipid layer is deficiency. Keratograph 5M, which has a high compatibility in children, has been found to provide early diagnostic and therapeutic values in children for the diagnosis of meibomian gland function and tear film stability. Combined with the questionnaire, the ratio of failure diagnosis of dry eye in children can be reduced. Currently, the main correction methods of juvenile myopia are frame glasses, contact lens, and orthokeratology (ortho-k). The effectiveness of overnight orthokeratology in flattening the cornea and temporarily reducing myopia has been widely documented [22]. Parents increasingly choose night-wear ortho-k to control myopia of their children. Given that ortho-k is placed on the cornea for the whole night, the ocular surface condition of adolescents with refractive errors should be fully assessed. When considering adolescent orthok treatment, we should also pay attention to the situation of the ocular surface of the patients, especially meibomian gland function and dry eye prevalence, which can help improve the safety of the treatment. The clinical and epidemiological aspects of dry eye in children have not been as well described as in adults [5]. The prevalence of dry eye disease in children varies greatly depending on which criteria and methods were used in previous research. Reportedly, 9.7% of all children have been diagnosed with dry eye disease [4]. Dry eye disease associated with longtime reading can have many signs and symptoms involved, a lot of which are still not understood. Many Chinese children with arduous learning tasks have experienced these signs and symptoms. Myopia has been associated with strenuous near task as well. Blink rates during near work are decreased leading to improper tear film placement. In this study, only normal myopic adolescents were chosen to analyze dry eye and ocular surface. The results suggest that the prevalence of dry eye in adolescents with myopia is 18.95% higher than other research documents entail. For further study regarding dry eye disease in children expanding the number of patients and the inclusion of emmetropes adolescents should be considered.
2016-05-04T20:20:58.661Z
2016-01-06T00:00:00.000
{ "year": 2016, "sha1": "b199a0912fd425acc04524c599f69798335b6993", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2016/6761206", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "76ea8fab881cc2352600bf0b45b501544f37b48f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14486618
pes2o/s2orc
v3-fos-license
Modified LLL algorithm with shifted start column Multiple-input multiple-output (MIMO) systems play an important role in recent wireless communication. The complexity of the different system models challenges researchers to find a good complexity-to-performance balance. Lattice reduction techniques, and the Lenstra-Lenstra-Lovasz (LLL) algorithm in particular, offer additional resources to investigate and can contribute to complexity reduction. In this paper, we modify the LLL algorithm to reduce the number of computation operations by exploiting the structure of the upper triangular matrix, without significant performance degradation. Since the first columns of the upper triangular matrix contain many zeros, the algorithm performs several operations on them with very limited benefit. We present a performance and complexity study, and our proposal shows a gain in terms of complexity while the performance remains almost unchanged. INTRODUCTION MIMO communication systems are used to provide high data rates. Basically, a MIMO system transmits multiple independent data symbols over multiple antennas. At the receiver side, a MIMO decoder is needed to detect, separate, and reconstruct the received symbols. Several linear detection schemes can be used, such as the zero-forcing (ZF) or minimum mean square error (MMSE) criteria, while maximum likelihood (ML) is considered the optimal solution for MIMO detection. Unfortunately, the ML algorithm remains too complex for hardware implementation. Linear MIMO detection techniques such as ZF and MMSE are therefore attractive in terms of complexity, but they suffer from bit error-rate (BER) performance degradation. In recent years, lattice-reduction (LR) pre-processing techniques have been proposed for use with linear detection in order to transform the system model into an equivalent system with a better-conditioned effective channel matrix. The most popular LR algorithm is the Lenstra-Lenstra-Lovàsz (LLL) algorithm, named after its inventors [1]. However, the LLL algorithm brings challenges of its own due to its higher processing complexity and non-deterministic execution time [2]. Several other variants of the LLL have been presented, such as [4] and [5], whose goal was to achieve a good complexity-to-performance balance. In this paper, we focus on the ZF decoding technique and propose a modification of the original LLL algorithm that reduces the number of loops by shifting the iteration start point. This reduces the complexity of the algorithm while keeping the BER degradation negligible. SYSTEM MODEL DESCRIPTION Throughout this paper, (.)^H and (.)^T denote, respectively, the Hermitian transpose and the transpose of a matrix. We consider a spatial multiplexing MIMO system with N_t transmit and N_r receive antennas over a time-invariant Rayleigh fading channel, y = H.s + n (1), where H is the N_r x N_t channel matrix, s the vector of transmitted symbols, and n the additive noise. On the receiver side, y = [y_1, y_2, ..., y_{N_r}]^T are the symbols at the receiver's respective antennas, which are used to estimate the transmitted symbols [3]. The receiver analyses all received information to recover the transmitted data, i.e., detection, computation, equalization, and estimation of the received data take place. At the receiver side, the linear zero-forcing (ZF) detector applies the pseudo-inverse of the channel matrix to estimate the transmitted symbols, which can be expressed as s_ZF = (H^H.H)^{-1}.H^H.y (2). The channel matrix is QR decomposed into two parts as H = Q.R.
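A minimal NumPy sketch of the system model and ZF detection step described above is given below; the 4x4 configuration, QPSK symbols, and noise level are illustrative assumptions (the simulations later in the paper use 16QAM), and the variable names follow the notation y = H.s + n.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t = n_r = 4   # 4x4 configuration assumed purely for illustration

# Rayleigh fading channel: i.i.d. complex Gaussian entries, unit average power.
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

# QPSK symbols keep the example short (the paper's simulations use 16QAM).
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_t)
n = 0.05 * (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r))
y = H @ s + n

# Zero-forcing detection: pseudo-inverse of the channel, then symbol-wise quantisation.
s_zf = np.linalg.pinv(H) @ y
s_hat = np.sign(s_zf.real) + 1j * np.sign(s_zf.imag)
print("correct detection:", np.allclose(s_hat, s))   # expected True at this noise level

# QR decomposition of the channel, H = Q.R, used below for lattice reduction.
Q, R = np.linalg.qr(H)
print("H = Q.R:", np.allclose(Q @ R, H))
```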
LATTICE REDUCTION TECHNIQUE We can interpret the columns ℎ of the channel matrix as the basis of a lattice and assume that the possible transmit vectors are given by ℤ , the m dimensional infinite integer space. Consequently, the set of all possible undisturbed received signals is given by the lattice. The LR algorithm generates a lattices reduced and near-orthogonal channel matrix = . . With matrix = . generates the same lattice as , if and only if the m × m matrix T is unimodular [2], i.e. contains only integer entries and ( ) = ±1: Also, We can find multiple bases that can be included in the space , and the goal of the LR algorithm is to find a set of least correlated base with the shortest basis vectors [5].Initially, an efficient (but supposed not optimal) way to determine a reduced basis was proposed by Lenstra, Lenstra and Lovàsz [1].Where they defined (LLL-Reduced): A basis with QR decomposition = . is called LLL-reduced with parameter δ with (1/4 < ≤ 1), if , ≤ . , 1 ≤ < ≤ And , ≤ , + , = 2, … , The first condition is called, size-reduced and the second one is called Lovàsz condition. The parameter plays an important role to the quality of the reduced basis. We will assume = 3 4 ⁄ as proposed in [1]. After applying the QR decomposition of H and doing successive size-reduces operations if the condition is fulfilled, the algorithm exchanges two vectors if Lovàsz condition is not fulfilled to generate and compute and . And so, the LLL algorithm will output , and . Looking to the LLL algorithm [1], one important element of its complexity is related to the fact that the LLL algorithm is applied for the real integer vectors. It is mandatory to reformulate the different matrices to their real-valued form, so we got: This kind of reformulation increases the number of operations and adds more latency for the system. The idea behind LR-aided linear detection is to consider the equivalent system model and perform the nonlinear quantisation on it [8]. In fact, if we combine equations (1) and (5), we can get: the equivalent model and in this case will represent a better channel quality. And so, the detector can be represented with an equivalent model with better performance due to the less noise enhancement increased by . Thus, the basic idea behind approximate lattice decoding (LD) is to use LR in conjunction with traditional low-complexity decoders. With LR, the basis B is transformed into a new basis consisting of roughly orthogonal vectors [8]. After processing the Zero Forcing lattice reduction (ZF-LR) mechanism and by combining equations (2) and (11), we can generate: The different enhancements for the original algorithm were looking for a limited iterations in term of stopping criteria, like in [5]. But we believe that the structure of the triangular matrix generated by the QR decomposition can be an axe of improvement and complexity reduction. Exploiting R matrix's structure to improve the LLL algorithm As shown in table 1 the outputs of the algorithm will be , , . With is an upper triangle matrix. The relation between them will follow (5). Looking to the LLL algorithm, at lines 4 & 5 we can see that the loop is starting from = 2. This choice is taken to reach the first column of . This means that we can start from any other column > 2 and in this case we will not perform the column swap of columns1 − 2. So, in the case that the loop starts from 3; we will not perform column swap for first column. 
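Before detailing the gains of the shifted start column, the two ingredients just described, the real-valued reformulation of the channel and the two LLL-reduced conditions on R, can be written compactly as follows. This is a sketch under the stated notation (delta = 3/4), not the authors' code, and the helper names are ours.

```python
import numpy as np

def to_real(H, y):
    """Standard real-valued reformulation applied before running the LLL algorithm.

    [Re(y)]   [Re(H)  -Im(H)] [Re(s)]   [Re(n)]
    [Im(y)] = [Im(H)   Re(H)] [Im(s)] + [Im(n)]
    """
    H_r = np.block([[H.real, -H.imag],
                    [H.imag,  H.real]])
    y_r = np.concatenate([y.real, y.imag])
    return H_r, y_r


def is_lll_reduced(R, delta=0.75):
    """Check the two LLL conditions on an upper-triangular R.

    Size-reduction: |R[i, k]| <= 0.5 * |R[i, i]| for i < k.
    Lovasz:         delta * R[k-1, k-1]^2 <= R[k, k]^2 + R[k-1, k]^2.
    """
    m = R.shape[1]
    for k in range(1, m):
        for i in range(k):
            if abs(R[i, k]) > 0.5 * abs(R[i, i]) + 1e-9:
                return False
        if delta * R[k - 1, k - 1] ** 2 > R[k, k] ** 2 + R[k - 1, k] ** 2 + 1e-9:
            return False
    return True
```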
In this case we will gain 1 loop iteration and we will reduce the column swaps at least by 1. Looking to the morphology of the matrix R which is a triangle matrix, so the first column contains only 1 active element (the rest are"0"). The major number of active elements is in the rest of the matrix. 3.2.R matrix's structure Below, is a representation of the matrix R in the case of 4 × 4 MIMO system. The number of elements by column is increasing from left to right = R , R , 0 R , R , R , R , R , Let's decompose it schematically as 2 parts, , and , (Just to mention that the above choice is arbitrary). In , , we have 26 active elements and in R , we have only 10 active elements. So, if we consider , we can get 72% of the matrix elements. Adding the , column we can get 30 elements and so 83% of the matrix elements. We are adding , to be conforming to lines 6 to 11, 13 and 16 to 17 in table 1; if we consider = 5. If we consider a new matrix R that conisists of the elements of R from column R , to R , , so we will get a matrix 5 × 8R . Which consist of 83% of actives elements of R. Thus, at the output of the LLL algorithm we will generate a matrix with 3 first elements as and R will keep the firsts 3 elements of . T , T , T , T , T , T , T , T , T , T , T , T , T , T , T , T , 0 0 0 0 0 T , 0 T , 0 0 0 0 0 T , 0 T , T , T , T , T , T , T , T , T , T , T , T , T , T , T This means that we have generated only 40 from 64 possible matrix element and only 5 from 8 possible columns for the matrix T. Consequently, for matrix we have manipulated only 30 from 36 possible active elements. This is a considerable computation relaxation. This approach can be generated for all column indexes which allow to gain more operations, and so we can change the algorithm of table 1 as below. But we should note that, logically the BER performance degradation will increase. In fact, we have some compromises to take into consideration (operations vs performance balance). Also, this approach will be more efficient as much as we use more antennas for both sides of the system. This means that we need to evaluate the cases where the approach will be beneficial in terms of complexity while keeping an acceptable performance. In the next sections we will present the simulation results and the complexity study of the proposed approach. The performance degrdadtion is around 1dB Figure 2, shows that if we consider the LLL algorithm starting point at column 3 or 4, the BER is not dramatically degrading (limited). But we gain a lot in terms of computation operations. In fact the proposed modification will avoid that the algorithm do more iterations and operations to reduce the elements of R (simultaneously to generate T), especially for the vectors without big effects on the results (performance). So, we will focus on the matrix column with maximum of active elements and avoiding making operation with almost "zeros" valued columns. Figure 4 shows the BER performance for the same approach applied to an 8×8 MIMO system. It illustrates clearly that the approach can be applied for any × MIMO system. Also, we can observe that for big sized matrixes the approach is showing better results. In fact, as much as we increase the matrix size we have more "zeroes" in the first columns and more "non-zeroes" elements for the right part of the matrix. So, we will get more possibilities to shift the start column index. COMPLEXITY GAIN In this section we will present an analysis of the operations load of the algorithm while being executed. 
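As a concrete reference for this analysis, the proposed shifted-start reduction can be sketched as below. It is an illustrative re-implementation under our own conventions: start_col = 2 (1-based) reproduces the conventional algorithm, while larger values skip size reductions and column swaps involving the leftmost, sparsely populated columns of R. The authors' exact pseudo-code in Table 1, with its Givens-rotation details and bookkeeping, may differ.

```python
import numpy as np

def lll_shifted(H, start_col=2, delta=0.75):
    """Simplified LLL reduction of a real-valued channel H with a shifted start column.

    Returns Q, R and the transform T such that Q @ R equals H @ T up to rounding.
    H is expected to be the real-valued matrix (e.g. the output of to_real above).
    """
    H = np.asarray(H, dtype=float)
    Q, R = np.linalg.qr(H)
    m = R.shape[1]
    T = np.eye(m)
    k = start_col - 1                                  # 0-based working index
    while k < m:
        # Size-reduce column k against columns down to the shifted start.
        for i in range(k - 1, start_col - 3, -1):
            mu = int(np.rint(R[i, k] / R[i, i]))
            if mu != 0:
                R[: i + 1, k] -= mu * R[: i + 1, i]
                T[:, k] -= mu * T[:, i]
        # Lovasz condition between columns k-1 and k.
        if delta * R[k - 1, k - 1] ** 2 > R[k - 1, k] ** 2 + R[k, k] ** 2:
            R[:, [k - 1, k]] = R[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            # Givens rotation restores the upper-triangular form after the swap.
            a, b = R[k - 1, k - 1], R[k, k - 1]
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[k - 1:k + 1, k - 1:] = G @ R[k - 1:k + 1, k - 1:]
            Q[:, k - 1:k + 1] = Q[:, k - 1:k + 1] @ G.T
            k = max(k - 1, start_col - 1)
        else:
            k += 1
    return Q, R, T
```

For instance, Q, R, T = lll_shifted(H_r, start_col=4) on the 8x8 real-valued channel of a 4x4 MIMO system leaves the leftmost columns of R untouched, and np.allclose(Q @ R, H_r @ T) serves as a sanity check on the invariant maintained by the reduction.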
After that, we will show the gain in term of operations and complexity that we can make after applying our proposed approach. Operations analysis By looking to the algorithm in table 1, we can observe that: • The size reduction operations (lines 7 to 13), is doing a kind of loop with − 1 iterations for a set of operation that contains; a division, two subtractions (than can be considered as addition [6]) and two multiplications if the line 9 is valid. So, in maximum of cases, the size reduction can be done with operations. • Line 14 representing Lovàsz's conditions require: Considering that the superior verification can be achieved via a subtractions operation [6]. • The columns permutation operation is being done elements by elements. Knowing that the simple two elements permutation is equivalent to three additions. Also, the algorithm doesn't make difference for zero or non-zero values. So, the column permutation will be done in: • The Givens rotation matrix corresponds to the computation of the and parameters and this is being done via a "norm" calculation from one side, which corresponds to a square root operation, two multiplications and one additions. And, two divisions from another side. • Line 19 corresponds to a comparison and an assignment. This doesn't take in consideration the constellation, since we are using the same constellation for all the paper and so the analysis remains the same. Flops analysis Ameer and al [7] and Markus in [6] indicated a kind of correspondence between the operation and the number of flops. The tables below will show the flops needed by a MIMO 4 × 4 and MIMO 8 × 8 systems and the gain that we can get after a start column shift. 32% As mentioned above, in the case of an upper triangular matrix, almost of the first columns elements are "zeros". Also, the columns permutation and matrix multiplication in the algorithm don't make a difference for "zero" and "non-zero" elements. So, it makes a lot of additions of element with zero or a multiplication by zero, etc.… Thus, the operation done in the first columns will consume a lot of resources while its income in terms of information is limited. With our approach we target to avoid the non-useful operations (first columns which are full of "zeros") and concentrate the effort on the columns with the maximum of information. Our approach shows a good operations gain (which equivalent to complexity in this case) and good performances (the BER results). In the case of MIMO 8 × 8 system we can reduce 32% of operations, when doing a column shift of 9 columns, with a very limited BER degradation (less than 2dB). We should note that in this paper, we didn't present the case of MIMO 2 × 2 system. This was related to the size of the matrix which is small and the new algorithm will not bring a considerable outcome. Keeping in mind that the LLL algorithm is mainly used to simplify the decoding with "big size" channels matrixes [2]. CONCLUSION In this paper, we proposed a modified LLL algorithm that exploits a kind of shift start column. We started from the original LLL algorithm and we modified it to escape the almost "zeros" columns of the upper triangular matrix R. And so, we avoided doing computation for the columns without big influence on the BER performance. The proposed approach is not one of the fashion modification of the LLL algorithm, but its added value come from its simplicity and complexity gain.This approach was simulated for both 4×4 and 8×8 MIMO systems and can be extended for any other MIMO system model. 
We have shown that the proposed approach saves, respectively, 23% and 32% of the operations for the 4×4 and 8×8 MIMO schemes. This is an important result: the number of computation operations, and therefore the decoding time, is reduced with only a very limited BER degradation. The complexity-versus-performance trade-off is thus favourable, since the gain in complexity outweighs the small loss in performance. We considered 16QAM modulation and a ZF receiver, for which the approach shows good results; extending the study to MMSE detection and other modulation schemes would be of interest. In this paper we also restricted ourselves to the same number of antennas on both sides; the case of a different number of antennas on each side will be the subject of future work. Finally, since the approach shows better results for "big size" MIMO systems, we believe that extending it to the case of massive MIMO would be a promising direction.
2016-07-12T08:36:07.000Z
2016-07-12T00:00:00.000
{ "year": 2016, "sha1": "33ef147f71eb1ee2f1f3eade3e935d229afc1915", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "33ef147f71eb1ee2f1f3eade3e935d229afc1915", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
6875965
pes2o/s2orc
v3-fos-license
Large Electrocaloric Effect in Relaxor Ferroelectric and Antiferroelectric Lanthanum Doped Lead Zirconate Titanate Ceramics Both relaxor ferroelectric and antiferroelectric materials can individually demonstrate large electrocaloric effects (ECE). However, in order to further enhance the ECE it is crucial to find a material system, which can exhibit simultaneously both relaxor ferroelectric and antiferroelectric properties, or easily convert from one into another in terms of the compositional tailoring. Here we report on a system, in which the structure can readily change from antiferroelectric into relaxor ferroelectric and vice versa. To this end relaxor ferroelectric Pb0.89La0.11(Zr0.7Ti0.3)0.9725O3 and antiferroelectric Pb0.93La0.07(Zr0.82Ti0.18)0.9825O3 ceramics were designed near the antiferroelectric-ferroelectric phase boundary line in the La2O3-PbZrO3-PbTiO3 phase diagram. Conventional solid state reaction processing was used to prepare the two compositions. The ECE properties were deduced from Maxwell relations and Landau-Ginzburg-Devonshire (LGD) phenomenological theory, respectively, and also directly controlled by a computer and measured by thermometry. Large electrocaloric efficiencies were obtained and comparable with the results calculated via the phenomenological theory. Results show great potential in achieving large cooling power as refrigerants. Polarization properties: Figure S4 shows the polarization as a function of temperature and external electric field for two samples. (b) ceramics Pyroelectric properties: Figure S5 shows the pyroelectric coefficient (dP/dT) as a function of temperature and external electric field for two samples. Relaxor ferroelectric properties: In general, the relaxation behavior of ferroelectric can be determined by the modified Curie-Weiss law S1 where and are the maximum dielectric constant and the corresponding temperature, and T the dielectric constant and corresponding temperature above , ′ the Curie-like constant. is the critical exponent and associated with the type of ferroelectric. When = 1 and 2, the material is corresponding to an ideal normal ferroelectric and to an ideal relaxor ferroelectric, respectively. The relaxation behavior of the ferroelectric is gradually increasing with when is between 1 and 2. can be worked out by fitting the logarithmic plots of the reciprocal permittivity ( 1 − 1 ) measured at the same frequency as a function of temperature ( − where b and c are assumed to be temperature-independent phenomenological coefficients. For the parameter a a linear temperature dependence based on the Curie-Weiss law S3 , The Landau-Ginzburg-Devonshire (LGD) phenomenological theory has also been used to explain the phase transition and dielectric properties of the antiferroelectric PZT system S3-S5 . For the antiferroelectric with orthorhombic symmetry, the polarization is along the [110] direction. It should be noted that the above relations are merely suitable for antiferroelectric single domains S4 . Based on them, the single-domain properties of PLZT can be determined and the intrinsic contributions to the properties understood. Hence, by neglecting extrinsic contributions (e.g. domain wall and defect motions), the theories can be used to further understand the properties of polycrystalline materials S4 . For the antiferroelectric ceramics, the grains distribute randomly, which leads to disordered orientation of domains. 
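Returning to the relaxor characterization above, the critical exponent γ of the modified Curie-Weiss law can be obtained from a straight-line fit in logarithmic coordinates, as described. The sketch below illustrates this step with hypothetical permittivity data; the temperatures, permittivities, and peak values are placeholders, not the measured values for these ceramics.

```python
import numpy as np

# Hypothetical permittivity-vs-temperature data above T_m (placeholder values).
T   = np.array([390., 400., 410., 420., 430., 440.])        # K
eps = np.array([8200., 7600., 6900., 6200., 5600., 5100.])
T_m, eps_m = 380.0, 8800.0                                   # assumed peak position and value

# Modified Curie-Weiss law: 1/eps - 1/eps_m = (T - T_m)**gamma / C'
# Taking logarithms gives a straight line of slope gamma and intercept -ln(C').
x = np.log(T - T_m)
y = np.log(1.0 / eps - 1.0 / eps_m)
gamma, intercept = np.polyfit(x, y, 1)
print(f"gamma = {gamma:.2f}  (1 = ideal normal ferroelectric, 2 = ideal relaxor)")
print(f"C' = {np.exp(-intercept):.3g}")
```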
When an electric field is applied on the polycrystalline ferroelectric ceramic, the distortions of at least some of the crystallites, initially randomly distribute, orient along the allowable direction along the poling electric field. Some literatures have reported the polarization of ferroelectric ceramics and crystals with the same composition at the same poling condition S6,S7 . The relationship between upper limits ̅ of the polarization of the ceramic and P of the antiferroelectric/ferroelectric single-domain is as follows S7 : tetragonal ceramic ̅ =0.831 P, rhombohedral ceramic ̅ =0.866 P, and orthorhombic ceramic ̅ =0.912 P. All of the coefficients of the Gibbs free energy function were independent of temperature, except for the antiferroelectric and ferroelectric dielectric stiffness coefficients σ 1 and α 1 , which were given as linear temperature dependences based on the Curie-Weiss law S3,S7,S8 . For the antiferroelectric orthorhombic phase, let σ 1 be β(T-T C ). Further, β, 2σ 11 +σ 12 , and σ 111 +σ 112 in the equation (S6) can be found from the first partial derivative stability conditions: where 3 and 3 are the electric field and the polarization components of a single-domain material along the coordinate axis. The electric field strengths, 5, 6 and 7 MV/m and their corresponding polarizations were selected respectively and substituted into Equation (S7) to procure the coefficient β. Then the reversible adiabatic changes in entropy (ΔS) and temperature (ΔT) can be obtained by using the relations as mentioned in Equations (S4) and (S5), and the polarization 3 as well. The parameters ( ) used for the calculation of electrocaloric effect are listed in the Table S1. where ℎ is the surrounding temperature and t the heat transfer time. More details about the test procedure and data analysis can be found in Ref. S9 and S10. During this test, an electric field of 3 MV/m was applied to the sample for 15 seconds to obtain temperature equilibrium first, then the electric field was released immediately. Meanwhile, the ECE signal appears as shown in Figure S6. The red curves are the fitted curves using equation (S8). is obtained by extrapolating the fitting toward the time of the fall of the step-like pulse. is measured in the temperature range from 303 K to 423 K at successive increments of 10 K in the temperature range of 303 K to 423 K. In the direct measurement of , one concern is the Joule heating in the samples, which will cause the enhancement of temperature when the field is applied. But in this test, the base line temperature T in Figure S6 is constant except while withdrawing the electric field, which indicates that the observed temperature change is due to ECE.
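Since the full form of Eq. (S8) is not reproduced here, the sketch below assumes the common case of an exponential relaxation of the sample temperature toward the surroundings after the field is removed, and extrapolates the fit back to the instant of field release to recover the adiabatic ΔT, as described for the direct measurement. All numerical values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

T_h, t0 = 303.0, 0.0           # surrounding temperature (K) and instant of field removal (s)

# Hypothetical thermometry trace recorded after the field is switched off.
rng = np.random.default_rng(3)
t = np.linspace(0.2, 10.0, 50)
true_dT, true_tau = -1.8, 2.5  # K and s, chosen only to generate the fake trace
T_meas = T_h + true_dT * np.exp(-(t - t0) / true_tau) + 0.02 * rng.standard_normal(t.size)

def relaxation(t, dT, tau):
    """Assumed form of Eq. (S8): exponential relaxation of the sample toward T_h."""
    return T_h + dT * np.exp(-(t - t0) / tau)

popt, _ = curve_fit(relaxation, t, T_meas, p0=(-1.0, 1.0))
print(f"extrapolated adiabatic change: dT = {popt[0]:.2f} K (tau = {popt[1]:.2f} s)")
```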
2018-04-03T03:34:13.067Z
2017-03-27T00:00:00.000
{ "year": 2017, "sha1": "13d2996b73fd0bdb93a79998875da13b172eeb1b", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep45335.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13d2996b73fd0bdb93a79998875da13b172eeb1b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
108292391
pes2o/s2orc
v3-fos-license
Breast tuberculosis, a rare entity Breast tuberculosis is a rare form of extra-pulmonary tuberculosis. It is rare in western countries, usually occurs in multiparous and lactating women but rare in male and older women. It has a varied clinical, radiological and pathological presentation that can be similar to that of a breast abscess or carcinoma. Constitutional symptoms are not usually present making it even harder to diagnose clinically. Here we present a case of a young Nepalese woman with tubercular mastitis who was initially misdiagnosed as breast abscess. Introduction Breast tuberculosis (TB) was first defined by Sir Astley Cooper in 1829 as the "scrofulous swelling in the bosom of young women" and is a rare form of extra pulmonary TB [1,2]. It is more frequently encountered in developing countries like Africa and Asia, where TB is common [2]. Breast TB is of increasing clinical relevance in the western countries due to immigration and lack of awareness is very likely to both delay diagnosis and may result in unnecessary and/or disfiguring surgery pursuing a diagnosis of carcinoma [3]. Breast tuberculosis commonly affects women of reproductive age, usually between 21 and 30 years and can present either as an abscess or as a unilateral, painless breast mass [1]. From few months to few years duration, breast tuberculosis usually presents as a solitary breast lump in the central or upper outer quadrant due to frequent extension from axillary lymph node to the breast. Presentation with multiple or bilateral breast masses is uncommon. The lump is usually irregular, ill-defined and hard, mimicking carcinoma. It may be painful, mobile or fixed to skin or underlying muscle and chest wall and can also present with ulceration of the overlying skin, breast abscess, nipple retraction, peau d'orange and breast edema. Diagnosis of breast tuberculosis is even harder to make and less likely to be considered in men [4]. Differential diagnosis of breast tuberculosis is bacterial breast abscess and mastitis, carcinoma of breast, sarcoidosis, fungal infection and other granulomatous diseases. Mammography and ultrasound of the breast are not specific enough to aid in the diagnosis of breast tuberculosis. Tuberculin skin test, interferon gamma release assay, chest radiography, computed tomography (CT) scan, fine needle aspiration cytology (FNAC), open biopsy, tuberculosis polymerase chain reaction (TB PCR) can help in accurate diagnosis of such cases. The mainstay of treatment is antitubercular treatment for at least six months. Surgical management is usually limited to drainage of abscess, resection of sinuses, excisional biopsy, segmentectomy or rarely simple mastectomy [5][6][7]. Here we present the initial presentation, pathological diagnosis and management of a case of breast tuberculosis in a 34 year old Nepalese female who was primarily diagnosed as a breast abscess. Case report A 34 year old non-lactating female presented with complaints of intermittent fever, right breast pain and swelling for 25 days. There was no history of tuberculosis or breast carcinoma in the family members. She was initially treated with antimicrobials (flucloxacillin) in another health care center but her symptoms were not relieved. On examination, there was a firm lump of 2 Â 2 cm at the lateral margin of the areola of the right breast. It was slightly tender and overlying skin was red without any discharge. On ultrasonography, there was a cystic lesion in the right breast at 9 o'clock position measuring 4 Â 5 mm. 
Surrounding the lesion was echogenic and edematous breast tissue likely to be focal mastitis. Ultrasoundguided FNAC of right breast lump was done. It showed moderately cellular benign ductal epithelial cells arranged in clusters and sheets, as well as in staghorn pattern intermingled by myoepithelial cells. Illdefined granulomas along with multinucleated giant cells were also observed against a background of numerous neutrophils and epithelioid histiocytes (Figs. 1-3). Acid fast bacilli (AFB) stain of the specimen showed numerous AFB in clusters (Fig. 3). The pathological diagnosis was given as Granumolatous mastitis, probably tuberculosis (Fig. 4). Polymerase Chain Reaction (PCR), Mantoux test and interferon gamma release (IGR) assay for Mycobacterium tuberculosis were advised to rule out other mycobacterial infection. Mycobacterium tuberculosis was detected on PCR and IGR assay (QuantiFERON-TB Gold test). She was then started with anti-tubercular treatment according to national protocol. Four drugs including rifampin, isoniazid, pyrazinamide and ethambutol were prescribed for four months which was followed by rifampicin and isoniazid for two more months. She was also prescribed with Vitamin B6 along with this tuberculosis drug regimen. Her symptoms resolved after six months of treatment. Discussion Nepal as a developing nation has major incidence of tuberculosis. Majority of the cases are pulmonary, though extrapulmonary TB is also common. Tuberculosis of the breast is very rare as compared to other extra pulmonary tuberculosis. Its incidence in histological breast specimens ranges from 3 to 4.5% in developing countries to less than 0.1% in western countries. It is far more common in females than in males with peak incidence in the age group of 21 to 40 years. Multiparty, lactation, trauma and past history of suppurative mastitis are considered to be the risk factors for breast tuberculosis [1,2]. In our case, breast was the only site involved with no evidence of another tuberculous focus on physical or radiological examination as well as no prior history of tuberculosis. Primary breast tuberculosis has also been reported in case reports by Biswas et al [8], Singal et al [9] and Azorkar et al [5]. There is lack of awareness among healthcare profession of its manifestations, so it is often overlooked in many patients. It might present as tuberculous mastitis as evidenced by a breast lump which mimics carcinoma of the breast [7]. Females of reproductive age, when they are in lactation period are at risk for tuberculous mastitis. Both breasts can be involved with equal frequency [7]. A breast mass with or without ulceration of overlying skin and discharging sinuses are common manifestations of breast TB. Multiple nodules and multiple sinuses may occur, but multiple lumps are unusual. Tenderness is more commonly seen in breast TB rather than in breast carcinomas. Our patient also initially presented with breast lump and tenderness. The upper outer quadrant of breast is most commonly involved in breast TB. Nipple and areola are rarely involved. Fixation of the overlying skin is usually seen in breast cancer, but it can be seen in breast TB [7,8].Constitutional symptoms like malaise, fever, weight loss and night sweats are present in less than 20% of the cases. Depending on the clinical and radiological features, breast tuberculosis has been classified most recently into three forms: nodular, diffuse and sclerosing. The nodular form is slow growing and well circumscribed. 
It produces an oval tumour shadow on mammography that can hardly be differentiated from breast cancer. The disseminated form is characterized by multiple lesions associated with sinus formation and mimics inflammatory breast cancer on mammography. The sclerosing form of the disease is seen in elderly women and is characterized by an excessive fibrotic process [6,7]. FNAC is an important tool in the diagnosis of breast tuberculosis. Imaging modalities such as mammography or ultrasonography are of limited value because the findings are often indistinguishable from those of breast carcinoma [7]. Because FNAC yields cells directly from the breast lesion, visualization of epithelioid cell granulomas, Langhans giant cells, and necrosis can aid in the diagnosis; an acid-fast bacilli stain or TB PCR can also be performed on the aspirate. Some authors advocate the use of ultrasound-guided breast core biopsy rather than FNAC as the first-line intervention to establish or exclude the diagnosis [10]. Conclusion Breast tuberculosis should be considered in any patient presenting with a breast problem such as an abscess or lump, even in the absence of constitutional symptoms. FNAC and/or biopsy with detection of mycobacteria can be diagnostic. Full recovery usually occurs with antituberculous treatment alone.
2019-04-12T13:29:40.656Z
2019-03-26T00:00:00.000
{ "year": 2019, "sha1": "8b4b4c01b373b93b3adb00d930c95220e210da57", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.idcr.2019.e00530", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b4b4c01b373b93b3adb00d930c95220e210da57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211078487
pes2o/s2orc
v3-fos-license
Contraception for Adolescents Although pregnancy and abortion rates have declined in adolescents, unintended pregnancies remain unacceptably high in this age group. The use of highly effective methods of contraception is one of the pillars of unintended pregnancy prevention and requires a shared decision making process within a rights based framework. Adolescents are eligible to use any method of contraception and long-acting reversible contraceptives, which are “forgettable” and highly effective, may be particularly suited for many adolescents. Contraceptive methods may have additional non-contraceptive benefits that address other needs or concerns of the adolescent. Dual method use should be encouraged among adolescents for the prevention of both unintended pregnancies and sexually transmitted infections. Health care providers have an important role to play in ensuring that adolescents have access to high quality and non-judgmental reproductive health care services and contraceptive methods in adolescent-friendly settings that recognize the unique biopsychosocial needs of the adolescent. Introduction Adolescents, defined by the World Health Organization (WHO) as individuals between the ages of 10-19 years (1), represent almost one-fifth of the world's population. During adolescence, young people navigate numerous physical, cognitive, emotional, and behavioural changes as they acquire increasing autonomy and experiment in many areas. Experimentation may include alcohol or drug use, smoking, and sexual activity, all of which may be associated with sexual and reproductive health risks such as unintended pregnancy and sexually transmitted infections (STIs). The United Nations and the WHO consider that access to safe, voluntary family planning is a human right because it is essential for promoting gender equality, advancing the autonomy of women, and reducing poverty (2,3). The WHO has identified key elements in quality of care in family planning which include: having choice among a wide range of methods; patient-provider relationships based on respect for informed choice, privacy, and confidentiality as well as the cultural and religious beliefs of the young woman; providing evidence-based information on the effectiveness, risks, and benefits of the different contraceptive methods; having technically competent trained health care workers; and having convenient access to a range of relevant services (2). The WHO also states that no method of contraception is contraindicated on the basis of age alone (4). These position statements extend to adolescents who also have the right to sexual and reproductive health services, including contraceptive care and counselling. However, access to contraceptive education and information and the availability and accessibility of contraceptive methods may be affected by the complex dynamics of social, cultural, political, and religious influences, particularly for adolescents. almost 80%. Many have had more than one partner (5,6,7,8). Adolescents have the lowest level of contraceptive knowledge and use (9). Initiation of sexual activity while they lack adequate knowledge and skills to protect themselves places adolescents at higher risk of unwanted pregnancy, unsafe abortion, and STIs (10). Although there appears to be an increase in contraceptive use at first intercourse, many adolescents still do not use any method of contraception at first intercourse or do not continue to use contraception consistently (7,11). 
The most commonly used method of contraception at first intercourse is the male condom, which is important from the STI prevention perspective but is less reliable as a contraceptive method due to typical use failure rates that are significantly higher than those seen with other contraceptive methods (12). Unintended pregnancy in adolescents can have major consequences for the young woman, her family, and society. The use of effective contraceptive methods is a cornerstone of adolescent pregnancy prevention. Although adolescent pregnancy rates are decreasing worldwide, adolescent mothers make up 11% of births (13). Although there are variations in cultural norms around age of marriage and childbearing, the majority of adolescent pregnancies are unintended (9,14,15). Adolescent pregnancy contributes to maternal and child mortality, with complications from pregnancy and childbirth being the leading cause of death for girls aged 15-19 years (13). Adolescents who give birth face significant socioeconomic challenges. Adolescents at greater risk of unintended pregnancy include those who are living in poverty, with low education and fewer employment opportunities, and marginalized populations. Pregnancy itself is an important opportunity to counsel on future contraceptive plans, as rapid repeat pregnancy is common among adolescent mothers (16). The Centers for Disease Control (CDC) Medical Eligibility Criteria for Contraceptive Use (MEC) provides guidance on post-partum contraceptive options (17). Barriers to Contraceptive Access and Use Barriers to accessing contraceptive information and methods include social or culture taboos, legal restrictions, health care provider (HCP) attitudes, and healthcare systems (9,10). The acceptability and availability of contraception for adolescents varies by region and even by countries in the same region. Adolescents may experience barriers accessing contraception including inconvenient medical clinic hours, financial restrictions, lack of confidentiality, and lack of provider training. HCPs themselves may act as medical barriers by imposing their own personal values/moralistic beliefs on the adolescent, by applying inappropriate medical contraindications on recommendations for contraceptive use, by delaying initiation of contraception unnecessarily (i.e. waiting until the next menses or until STI screening results are available), by requiring unnecessary investigations prior to contraceptive initiation (i.e. by erroneously insisting on a Pap smear prior to starting contraception), or by perpetuating unfounded myths about contraceptive use (18). HCPs should ensure that they have the necessary skills and knowledge to provide unbiased, non-judgemental, evidenced-based, adolescent-friendly sexual health and reproductive health care and to be able to dispel common myths and misperceptions about contraceptive use (Table 1) (9). The cost of contraception services and methods is a potential barrier for adolescents. Contraception may be prohibitively costly for an adolescent and the need for parental financial assistance may compromise confidentiality. Although contraception is provided at no cost in some countries, in other countries contraception is covered by private healthcare and/or by the patient paying directly. Provision of contraception at no cost may remove one financial barrier but does not guarantee high rates of utilization. Nonetheless, universal subsidies for contraception appear to be cost-effective (25). 
The annual direct cost estimates for unintended pregnancy are $320 million in Canada and $4.6 billion in the United States (26,27). Contraceptive nonadherence accounts for 69% of this cost. Cost models have shown that provision of/switching to long-acting reversiblecontraceptives (LARC) would reduce contraceptive failures and lead to cost neutrality within 12 months (26,27). The Contraceptive CHOICE Project determined that provision of free contraceptives to adolescents reduces teen pregnancy, teen birth, and abortion (28) while yielding significant cost savings (29). The CHOICE Project also found that when cost is removed, the majority of adolescents (~70%) would choose LARC. Contraceptive Counselling There should be no restrictions on the ability of adolescents to receive complete and confidential contraceptive services. An assurance of confidentiality will increase the willingness of adolescents to disclose sensitive health information and seek health care advice, while a loss of confidentiality can negatively impact an adolescent's participation in sexual health services (30). Confidentiality, including its scope and limits, should be discussed with adolescents and caregivers, and reiterated once the adolescent is alone. Regrettably, adolescents' legal rights to confidential family planning services vary by region and change over time (31). Adolescents should also be aware of instances where confidentiality may need to be breached (32). HCPs should consult local laws regarding confidentiality and age of consent, which may vary by region. An adolescent's choice of contraception should be respected, and contraception should never be coercive. The clinic should be welcoming to adolescents, ideally with flexible scheduling, convenient times (timed around school), and age appropriate visual aides (33). Scheduled follow-up visits are important to ensure method acceptability and ongoing contraceptive adherence. HCPs should engage in a shared decision making process with adolescents. There are many suggested approaches to contraception counselling. The CDC suggest that sexual history taking should include the "5Ps": Partners, Practices, Protection from STIs, Past history of STIs, and Pregnancy Prevention (33). This can help HCPs and adolescents work toward a contraceptive plan that is focussed on anticipatory guidance, education, and disease prevention. Another approach to contraception counselling is the "GATHER" approach where the HCP Greets and builds rapport, Asks questions and listens, Tells her relevant information to help her make an informed choice, Helps make a decision and provides other related information, Explains the method in detail including its effectiveness, potential side effects, and how to use it, and lastly has the patient Return for advice or further questions (34). Another approach to contraception counselling can be found in Table 2. Adolescents should be asked about intimate partner violence, and specifically about reproductive coercion. HCPs should counsel on all available contraceptive options without bias. Effectiveness, advantages and disadvantages should be discussed. Adolescents should be advised that failure rates are highest for user dependent methods (e.g. natural family planning, withdrawal, condoms, oral contraceptives) (12). LARC methods act continuously and are less user-dependent [e.g. contraceptive implants and intrauterine contraceptives (IUCs)]. 
A recent Cochrane review did not find significant differences amongst hormonal contraception, levonorgestrel releasing system (LNG-IUS), and copper intrauterine device (Cu-IUD), although the studies were small, and of low to moderate quality (35). Anticipatory discussion around anticipated menstrual side effects can reduce discontinuation of the shorter acting methods (36). The WHO has developed a tiered system to discuss contraception ( Figure 1) (37): Tier 1: LARC are methods that do not rely on the user. Myth Fact The COC pill causes weight gain and acne Placebo-controlled trials have not shown an association between COC use and weight gain. Acne improves in most women using COCs due to a decrease in circulating free androgens A pelvic exam is required prior to initiating contraception With the exception of an IUC (which requires a pelvic exam for insertion), pelvic examination is not required prior to starting a contraceptive method It is important to "take a break" from the COC every few years It is not necessary to take a "pill break". Unless medical conditions arise that contraindicate its use, the COC may be continued until pregnancy is desired or a woman wishes to switch to another contraceptive method COCs and IUCs can affect future fertility When COCs or IUCs are discontinued, a woman quickly returns to her baseline fertility IUCs cannot be used in adolescents or in nulliparous women IUCs can be safely used by adolescents and nulliparous women IUCs increase the risk of ectopic pregnancy IUCs work primarily by preventing fertilization so IUC users have half the risk of ectopic pregnancy compared to women not using contraception (19) IUCs do not have any non-contraceptive benefits The LNG-IUS is associated with a decrease in menstrual flow and less menstrual cramping. All IUCs are associated with a decreased risk of endometrial cancer (20) COCs cause cancer COCs are associated with a decreased risk of endometrial and ovarian cancer and potentially colorectal cancer. The risk of cervical cancer may be increased in COC users compared with non-users. Data on breast cancer risk with COC use is conflicting but many studies have failed to demonstrate an increased risk of breast cancer or breast cancer mortality in COC users (21,22,23) IUCs can only be inserted during menses An IUC can be inserted at any time during the menstrual cycle provided that pregnancy or the possibility of pregnancy can be ruled out (24) COC: combined oral contraceptive, IUC: intrauterine contraceptive, LNG-IUS: levonorgestrel-releasing intrauterine system, IUCs: intrauterine contraceptives Tier 3: Methods that rely on user during sexual activity (male and female condom, spermicide, natural family planning), or immediately after [emergency contraception (EC)]. Many international organizations have recommended moving to a tiered approach to contraceptive counselling, whereby HCPs present contraceptive options in order of contraceptive effectiveness and start the contraceptive discussion with Tier 1 LARC methods (8,33,38). Contraceptive effectiveness is one of a woman's most important considerations when choosing a contraceptive method (39) and using top tier methods would achieve the highest effective contraception. However, while effectiveness is a paramount characteristic, it is important that tiered counselling focused on "LARC-first" does not become too directive or coercive, particularly in vulnerable populations (40). 
In a rights-based family planning framework, the choice of contraception should be made in collaboration with each individual adolescent taking into account safety, effectiveness, accessibility, and affordability while respecting her personal beliefs, culture, preferences, and ability to be adherent (25). Age alone is not a contraindication to any contraceptive method (2,32,41). HCPs should address common myths and misperceptions (Table 1) as well as common side effects. Adolescents may fear weight gain, bleeding, acne, and mood side effects, while their parents may fear effects on future fertility and the risk of cancer. Regardless of the method of contraception chosen, adolescents should be counselled on the importance of the use of latex condoms to reduce the risk of STI acquisition (dual method) (25,38). Starting Contraception Most contraceptive methods can be initiated at any time during the menstrual cycle provided that pregnancy or the possibility of pregnancy can be ruled out (Table 3) (41,42). The "Quick Start" method refers to starting a method immediately rather than waiting for the next menstrual Table 2. Contraceptive counselling in the adolescent Be Welcoming -Use adolescent friendly language and material -Acknowledge the need for confidentiality -Remain unbiased and non-judgmental What to Ask -Reproductive and sexual history, including previous and current use of contraception -Medical history including any specific medical conditions or medications that may be contraindications to contraceptive use -Her current relationships, partners, and whether she has any concerns -What is she currently doing to prevent pregnancy? -How important is it to her to avoid pregnancy currently? -Her ability and motivation to use contraception regularly and correctly -Her needs and expectations from a contraceptive method -The level of support she has at home or from her partner -Whether she needs to hide her use of contraception -Would she prefer to have periods or to not have periods? Be Sure to Check -Her awareness of methods and whether she already as a preference -The accuracy of her knowledge -Methods matching her needs and expectations have been discussed -The identified potential options are acceptable to her -How will she pay for contraception? -Is STI screening appropriate? -Does she have any fears or concerns? What to Tell -How the method works, how effective it is, how to use it consistently and correctly, what to do if they miss/are late for a dose, and when to seek medical attention? -How it will affect her menstrual cycle? -What are the non-contraceptive benefits? -Potential side effects and what to do if they occur? -When to return for a follow-up visit? period. Waiting to initiate contraception may place an adolescent at an increased risk of unintended pregnancy. Starting contraception immediately/at the time of the visit, has been associated with improved short-term compliance and is not associated with an increased incidence of breakthrough bleeding or other side effects (43,44). When the possibility of pregnancy is uncertain, the benefits of starting a combined hormonal contraceptive (CHC) (CHC: COC, vaginal contraceptive ring, contraceptive patch) likely exceed any risk. Thus CHC can be started immediately and a follow-up pregnancy test arranged in 2-4 weeks. 
Adolescents who choose to Quick Start contraception when a very early pregnancy cannot be completely excluded can be reassured that current evidence does not demonstrate an adverse impact of contraceptive hormone exposure on either fetal development or pregnancy outcomes (45,46). When using the Quick Start method, back-up contraception (barrier method and/or abstinence) should be used for the first seven consecutive days of contraceptive use unless it is initiated on the first day of menses (42). Adolescents may choose to start hormonal contraception on the first day of the next menstrual cycle or do a "Sunday start". Starting on the first day of the menstrual cycle allows an adolescent to be reasonably sure that they are not pregnant. Initiating on a Sunday allows for a withdrawal bleed to occur on a Monday, assuming a seven-day hormone-free interval (HFI). CHCs, injectable progestins, or contraceptive implants may be started immediately after a surgical or medical pregnancy termination (47). An IUC can be inserted immediately after first or second trimester abortion. In asymptomatic patients, there is no requirement for a pelvic exam prior to initiating contraception. Pap smear screening recommendations have changed in recent years and vary by region, but most no longer advocate for Pap smear screening in adolescents; some bodies recommend delaying screening until age 21 in sexually active women while others endorse delaying Pap smear screening until age 25. STI screening can be accomplished with urine sample Table 3. Criteria for being reasonably certain a woman is not pregnant A woman has no signs or symptoms of pregnancy and meets one of the following criteria: -Is ≤7 days after the start of a normal menses -Has not had sexual intercourse since the first day of her last normal menses -Has been using a reliable method of contraception consistently and correctly -Is ≤7 days post-abortion (spontaneous or induced) -Is <4 weeks post-partum -Is exclusively breastfeeding, amenorheic, and <6 months post-partum If any of the above criteria are met, a pregnancy test is not required. In most other cases, a negative high sensitivity urine pregnancy test will reasonably exclude pregnancy IUD: intrauterine device for polymerase chain reaction, self-collection swabs, or cervical swab collection. STI screening is not a requirement prior to IUC placement. STI screening may be performed on the day of IUC insertion but insertion should not be delayed while waiting for the results, provided that there are no overt signs of infection. HCPs should provide at least a year-long prescription and should consider having samples on site to provide to adolescents (38). All adolescents should be counselled on how long to use back up contraception after starting a new contraceptive method. The Cu-IUD is effective immediately while CHC methods, the single rod implant, the LNG-IUS, and DMPA are effective after seven consecutive days of use. Additional information on what to do if they miss/delay taking their contraceptive method should be provided. Non-contraceptive Benefits Counselling on contraceptive options should also include discussion about non-contraceptive benefits. Hormonal methods can provide improvement in heavy menstrual bleeding (HMB) and dysmenorrhea. CHC can also improve cycle regularity, acne, hirsutism, and premenstrual symptoms. Adolescents may prefer concealed options such as injectables, implants or IUC. 
Emergency Contraception Regardless of the contraceptive method they choose, adolescents should be aware of EC and know that it can be used in the setting of contraceptive failure, such as condom interruption, non-adherence to hormonal contraception, or no contraceptive method used. HCPs should write prescriptions for EC, and provide information on how and when to access EC. Hormonal EC is available in many countries without a prescription. Increased availability of hormonal EC does not increase the frequency of unprotected intercourse (UPI), the likelihood of sexual risk-taking, or make women less likely to use effective contraception (48). Available EC options include: LNG-EC, 1.5 mg orally x 1 dose, high dose CHC (Yuzpe method), ulipristal acetate (UPA) (UPA-EC, 30 mg orally x 1 dose), mifepristone (low, mid dose) and insertion of Cu-IUD (25,49). The most effective EC is the Cu-IUD, which can be used up to seven days after UPI provided a pregnancy test is negative. It also has the additional benefit of ongoing contraception; however adolescents may experience barriers accessing a provider within the recommended time window (25,32). Hormonal EC can be offered up to 120 hours after UPI or contraceptive failure, although LNG-EC is more effective the sooner it is taken. UPA-EC may be used up to five days after UPI and may be more effective than LNG-EC in obese adolescents (50). There are no absolute contraindications to EC, aside from pregnancy or previous sensitivity reactions. Use of a Cu-IUD for EC has the same eligibility criteria as routine Cu-IUD insertion (2,41). LNG-EC, UPA-EC, and mid dose mifepristone are all more effective than the Yuzpe method although all methods have been shown to decrease pregnancy rates (49). The Cu-IUD causes an inflammatory reaction that is toxic to oocytes, spermatozoa, and increases smooth muscle activity in fallopian tubes and myometrium preventing implantation. Hormonal EC works by impairing follicular development of the dominant follicle provided they are taken prior to ovulation. LNG-EC is preferred over the Yuzpe method owing to higher effectiveness -up to 85% if used within 72 hours. UPA-EC is more effective than LNG-EC likely due to its ability to disrupt ovulation even if taken after the LH surge has begun. For adolescents using LNG-EC or the Yuzpe regimen, hormonal contraception can be resumed immediately. In the case of UPA-EC, initiation of hormonal contraception should be delayed for five days due to potential interactions between the two medications that may affect effectiveness and UPA-EC's ability to delay ovulation (51). Backup contraception and/or abstinence should be used until hormonal contraception has been taken for at least seven consecutive days. On the other hand, the Cu-IUD is immediately effective for ongoing contraception. EC users should have a pregnancy test if spontaneous menses do not occur within 21 days of EC use, if the next menstrual period is lighter than usual, or if it is associated with abdominal pain not typical of the woman's usual dysmenorrhea. If a pregnancy occurs in a cycle during which oral EC was taken, the adolescent should be advised that there does not appear to be a harmful effect on pregnancy outcomes and there is no increased risk of congenital abnormality (48). EC is a useful back-up method for condom use: if the condom breaks, slips, or is not used, there is still a further possibility of preventing pregnancy. 
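The time windows described above can be summarized in a small sketch. The function name is hypothetical, the thresholds are taken from the text (Cu-IUD up to seven days after UPI with a negative pregnancy test; hormonal EC up to 120 hours), and the sketch does not model relative effectiveness, eligibility criteria, or mifepristone regimens, whose availability varies; real decisions require clinical judgement.

```python
def ec_options(hours_since_upi: float) -> list[str]:
    """Return EC options still within the time windows described in the text."""
    options = []
    if hours_since_upi <= 7 * 24:   # Cu-IUD: up to 7 days, provided a pregnancy test is negative
        options.append("Cu-IUD (most effective; also provides ongoing contraception)")
    if hours_since_upi <= 120:      # hormonal EC: up to 120 hours after UPI
        options.append("UPA-EC 30 mg orally x 1")
        options.append("LNG-EC 1.5 mg orally x 1 (more effective the sooner it is taken)")
        options.append("Yuzpe regimen (less effective than LNG-EC or UPA-EC)")
    return options

print(ec_options(36))   # all listed options remain within their windows
print(ec_options(130))  # only the Cu-IUD window remains
```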
However, the efficacy of hormonal EC is significantly lower than regular use of contraception and its preventive efficacy should not be overestimated. In most clinical scenarios, EC provision should be considered an opportunity for counselling and to start a continuous and effective contraceptive method as soon as possible (5). Quick Start is described previously. Medical Eligibility Criteria for Contraceptive Use in Adolescents Although age itself is not a contraindication to the use of any method of contraception, reversible contraceptive methods are generally preferred in adolescents. Guidance for the safety of contraceptive use in women with certain characteristics or medical conditions are provided in the form of MEC from the WHO, the CDC, the Faculty of Sexual and Reproductive Healthcare, and other international organizations (4,17,52). For each medical condition/characteristic, contraceptive methods are placed in one of four categories to determine contraceptive eligibility ( Table 4). The WHO and CDC also developed Selective Practice Recommendations for Contraceptive Use that recommend which tests and exams should be performed prior to providing contraception (2,41). Breast, pelvic and genital examination, Pap smears, and bloodwork are not recommended routinely because they do not contribute to increased safety of CHC use. Ideally, blood pressure and body mass index (BMI) should be recorded for adolescents prior to starting CHC but should not delay initiation of contraception. A medical history should be taken to alert HCPs to conditions or risk factors that might be a contraindication to contraceptive use. Contraceptive Options for Adolescents Intrauterine Contraception IUCs are LARC methods that are highly effective and can be used by women of any age. Neither age nor nulliparity are contraindications to their use although rates of IUC expulsion are significantly higher in adolescents compared to older women regardless of parity or IUC type (4,53). Many international societies have stated that IUCs are a safe first line choice for adolescents (8,31,32,38,54,55) and encourage HCPs to counsel all adolescents on their use for the prevention of pregnancy due to their low typical use-failure rates and high one-year continuation rates. IUC rates have a 99% efficacy, with over 80% continuing with the method at one year (54). There are two types of IUCs: Cu-IUD and LNG-IUS. The Cu-IUDs may either have a frame (usually T-shaped) or be frameless and contain a varying amount of copper. The LNG-IUS's (LNG-IUS 20, LNG-IUS 12, LNG-IUS 8) contain different amounts of levonorgestrel in their reservoir. The main mechanism of action of all IUCs is the prevention of fertilization. Prior to providing or placing an IUC, absolute and relative contraindications should be reviewed. There is no requirement for pre-placement ultrasound. HCPs may require additional training for insertion. The success rate for insertion in adolescents is 96% (56). Adolescents may choose the LNG-IUS for its non-contraceptive benefits that include a reduction in menstrual bleeding and dysmenorrhea. The LNG-IUS 20 (Mirena®) is approved for treatment of HMB, and may prove beneficial for adolescents with HMB, bleeding disorders, and those on anti-coagulation (57). Although the LNG-IUS has less systemic absorption compared to CHCs, some adolescents experience hormonal side effects including acne, breast tenderness, headaches, and mood changes. 
Functional ovarian cysts may occur in LNG-IUS users; however, these cysts are often asymptomatic and do not require further intervention (54). Adolescents choosing a Cu-IUD may be seeking a LARC method with minimal hormonal exposure. Cu-IUD users may experience increased menstrual blood loss and dysmenorrhea. Adolescents can be offered non-steroidal anti-inflammatory drugs (NSAIDs) and/or tranexamic acid to help decrease menstrual blood loss and dysmenorrhea. With time, the number of unscheduled bleeding days tends to decrease in both LNG-IUS and Cu-IUD users. Occasionally, IUC users may request IUC removal due to ongoing dysmenorrhea. HCPs should counsel the adolescent about IUC insertion and should not rush the process. Handouts may be helpful and can include information about the need for ongoing condom use to protect against STIs, the duration of back-up contraception after insertion (seven days for the LNG-IUS, none required for the Cu-IUD), recommendations for prophylactic NSAIDs for insertion, common initial side effects such as cramping or unscheduled bleeding, and when to seek medical assessment. Pre-placement NSAIDs have been shown to reduce discomfort post-insertion. Currently, there is no evidence to support routine pre- and post-placement ultrasound. Although in selected cases vaginal and/or oral misoprostol taken pre-procedure may help with IUC insertion, its routine use should be discouraged due to an increase in side effects such as bleeding, abdominal pain and cramping, fever, and higher pain scores post-IUC insertion (58).

Table 4. Categories of medical eligibility for contraceptive use
1 - No restriction on the use of the contraceptive method.
2 - The advantages of using the method generally outweigh the theoretical or proven risks. The method can generally be used, but more careful follow-up may be required.
3 - The theoretical or proven risks usually outweigh the advantages of using the method. Use of the method requires expert clinical judgement and/or referral to a specialist contraceptive provider, because use of the method is not usually recommended unless other more appropriate methods are not available or not acceptable.
4 - There is an unacceptable health risk and the method should not be used.

Paracervical blocks may reduce pain with tenaculum placement, but have not been shown to reduce pain with IUC insertion. Smaller diameter LNG-IUSs (LNG-IUS 12, LNG-IUS 8) and Cu-IUDs may be associated with less pain on insertion. Adolescents should be offered IUC placement in the clinician's office, and routine insertion in the operating room should be avoided unless this is the adolescent's preference. Prior to IUC placement, the HCP should rule out the possibility of pregnancy (Table 3). IUCs are not associated with an increased risk of pelvic inflammatory disease or STI acquisition, although there is a small increased risk of pelvic infection within 21 days of IUC placement (59). STI screening should be performed in women at high risk of STIs prior to or at the time of insertion, but it is not necessary to delay IUC insertion until the results are available. Positive results can be treated while the IUC remains in situ (54). Routine antibiotic prophylaxis at the time of IUC placement is not recommended. IUCs can safely be used in adolescents with a history of STI, including human immunodeficiency virus (HIV), although insertion should be delayed if there is evidence of mucopurulent discharge. Immunosuppression is not a contraindication to IUC use (4,8). IUCs may be safely inserted in the immediate post-abortion and post-partum period (delivery to 48 hours).
While there may be a slightly higher expulsion rate (10%), this should not be a barrier to offering placement. Immediate postplacental insertion should not be offered in the setting of chorioamnionitis and/or post-partum hemorrhage. Progestin-only Contraceptive Options Progestin-only contraceptives do not contain estrogen and thus may be good options for young women who cannot take estrogen. There are few contraindications to progestin-only methods: current breast cancer (Category 4), breast cancer remission within five years, severe cirrhosis, hepatocellular adenoma, malignant liver tumour, and unexplained vaginal bleeding (Category 3) (4,17,60). Non-contraceptive benefits of progestin-only options include decreased dysmenorrhea and endometriosis-related pain. The most common side effect is unscheduled bleeding. All progestin-only contraceptive options are safe for adolescents, with the implant being a WHO Tier 1 contraceptive method (37). A) Contraceptive Implant The single rod implant containing etonogestrel, an active metabolite of desogestrel, is the most effective method of reversible contraception with an efficacy of 99%. It is effective in situ for up to three years, although it is likely effective for up to four years, and high continuation rates are seen at one and two years (28,60,61). Its contraceptive effect is due to cervical mucous thickening, thinning of endometrial lining, and ovulation inhibition. The most common side effect is unscheduled bleeding which is variable and does not necessarily improve with time. Implant users requesting removal often cite abnormal uterine bleeding, weight gain, or acne as the reason for removal (62). Functional cysts can be seen in users, but usually do not require further intervention (60). The implant does not have an adverse effect on bone mineral density (BMD) such as that seen with DMPA, likely owing to ongoing ovarian activity that allows for endogenous estradiol to support bone health, but there is limited evidence in adolescents. This Tier 1 method may be a good option for adolescents because it is non-coitally dependent, does not require daily user action, and is discrete. Advantages of this LARC include 3-year duration of effectiveness, reversibility, discretion, and can be used by adolescents who have contraindications to estrogen. It can be seen on X-ray. Contraceptive implants can be inserted post-abortion, and immediately post-partum thereby reducing rapid repeat pregnancy and repeat abortions among adolescents (63). B) DMPA DMPA-IM is an intramuscular injection that is administered every 12 weeks by a HCP. A lower dose subcutaneous version (DMPA-SC) that can be self-administered is available in some countries. DMPA inhibits pituitary gonadotropins, leading to anovulation and causes thickening of cervical mucous. Advantages of this method include discretion, infrequent dosing, and non-contraceptive benefits such as reductions in dysmenorrhea, premenstrual symptoms, HMB, fibroids, anemia, seizures, and sickle cell crises (8,60). It is one of the few systemic hormonal contraceptives that can be reliably used with liver-enzyme inducing drugs because its concentrations are not affected (5). Disadvantages may include having to access a HCP for intramuscular injections, unscheduled bleeding, delayed return to fertility, and weight gain. Adolescents using DMPA appear to gain more weight than non-users or users of other contraceptive methods (64). 
Adolescents who experience more than a 5% weight gain after six months of DMPA use may be at risk of continued excessive weight gain (65). DMPA has high rates of amenorrhea, with up to 68% of DMPA users being amenorrheic at 24 months. Although unscheduled bleeding may decrease in amount and frequency with time, irregular bleeding is a common reason for discontinuation. DMPA use can be associated with a reversible BMD loss, likely due to the estrogen deficiency that accompanies its use (66). This may be of concern in adolescence, when bone accrual should be occurring (67,68). The BMD loss associated with DMPA use is greatest in the first one to two years, which has led several organizations to recommend a maximum duration of use of two years. The bone loss seen with DMPA use is similar to the bone loss seen with pregnancy and appears to return to baseline within two years of discontinuation (69,70). Both the American College of Obstetricians and Gynecologists and the Society of Obstetricians and Gynaecologists of Canada have recognized the risks of unintended pregnancy in adolescents if their contraceptive options are limited and hence have stated that there should be no restriction on the use of DMPA, or on the duration of use, in women who are otherwise able to use the method (60,71). The WHO has determined that for females younger than 18 years, the advantages of using DMPA generally outweigh the theoretic safety concerns regarding fracture risk (72). Routine BMD monitoring is not recommended in adolescents using DMPA because dual energy X-ray absorptiometry has not been validated in these populations. Although studies have demonstrated that low dose estrogen supplementation limits BMD loss in adolescent DMPA users, it is not recommended because of potential adverse effects and because there is a lack of clinical evidence for the prevention of fractures in the adolescent population (71). Adolescent DMPA users should be counselled on adequate calcium and vitamin D, weight-bearing activity, and avoidance of alcohol, caffeine, and smoking, which can be associated with BMD loss. HCPs should discuss the overall risks and benefits with DMPA users at regular intervals. Recently, the WHO reviewed concerns about potential increased HIV acquisition in DMPA users. They determined that for women at high risk of HIV acquisition there are no restrictions on the use of reversible methods (73). A recent randomized controlled trial did not find an increased risk of HIV acquisition amongst Cu-IUD, DMPA-IM, or LNG implant users (74). C) The Progestin-only Pill (POP) The POP is taken every day, without an HFI. This method works via thickening of cervical mucus, with anovulation seen in only 50% of users. Adolescents should be counselled that the POP needs to be taken at the same time every day to avoid pregnancy risk. It is often used as post-partum contraception when women are breastfeeding. Users may continue to have regular cycles; however, unscheduled bleeding is the most common reason for discontinuation.
Combined Hormonal Contraception
CHC methods contain an estrogen and a progestin. They include the pill, patch, and vaginal ring. In the absence of medical contraindications adolescents can safely use CHC. Absolute and relative contraindications should be reviewed prior to initiation (4,17). Common side effects, including unscheduled bleeding, nausea, and headaches, should be discussed with the adolescent prior to initiation, as this improves continuation (36).
Adolescents and young women can be counselled that they can take the CHC with a 4- or 7-day HFI, and/or can take it cyclically or in an extended cycle (skipping periods). Benefits of extended cycle use include reduction in dysmenorrhea, HMB, acne, anemia, and conditions exacerbated by cyclic variations (e.g. migraine without aura, epilepsy, irritable bowel syndrome, inflammatory bowel disease, mood, behaviour) (8,75). Women taking CHC in an extended cycle experience either equivalent or less unscheduled bleeding than their cyclic counterparts (75). Extended/continuous cycles can be achieved by using the hormone for two, three, or more cycles back-to-back, without taking an HFI and having a withdrawal bleed. The safety of this approach is well established, and adolescents should be counselled that not experiencing bleeding during an HFI is safe, as evidenced by equivalent endometrial assessment via ultrasound and/or endometrial biopsy (75). For contraceptive efficacy, an HFI should not be taken until at least 21 consecutive days of hormonal contraception have been used. It is helpful to provide adolescents with written instructions or website links on how to take CHC in an extended cycle, and what to do if a dose is missed. Follow-up should be scheduled at one and three months to ensure the method is acceptable and to assess side effects. A. Combined Oral Contraceptive (COC) pills are the most popular hormonal contraceptives among adolescents. The typical use failure rate is 9% (12) and is usually secondary to non-adherence. Adolescents should be counselled on behaviours to increase contraceptive adherence, including a regular schedule, phone alarm, and family member support (8,9). Adolescents should be provided with resources (paper, app, online) to assist when pills are missed.
Considerations with Combined Hormonal Contraceptives
i. Weight gain: A Cochrane review did not find a significant association between COC or transdermal CHC and weight gain (78). There is currently insufficient evidence to link CHC use with weight gain. When counselling adolescents about weight gain, it is important to discuss ongoing physical development and average weight changes for women over a year. ii. Mood: Data on the effect of CHC on mood are conflicting. Placebo-controlled trials have not demonstrated a significantly increased risk of mood changes in CHC users compared with placebo users, and there is some evidence that COCs are protective for mood (79). COCs containing drospirenone are associated with an improvement in premenstrual dysphoric disorder symptoms (80). Conversely, a large Danish prospective cohort study found an increased risk for first use of an antidepressant and first diagnosis of depression among users of different types of hormonal contraception, with the highest rates among adolescents (81). HCPs should counsel adolescents that CHC may be associated with mood changes, but there is no conclusive evidence linking CHC to depression (32). iii. Venous thromboembolism (VTE): The baseline risk of VTE in adolescents is very low (1 per 10,000). CHC use is associated with a 3-fold increased risk for VTE, with an absolute risk of 3-4 per 10,000 in adolescents. There is currently inadequate data to support preferential prescribing related to increased VTE risk based on type of progestin or dose of ethinyl estradiol (82). Prospective cohort studies do not seem to show a significant difference in VTE risk by progestin type (83,84). Routine thrombophilia screening in adolescents prior to initiating CHC is not advised. iv.
BMD: Adolescence is a time of bone mass accrual which continues up to approximately age 25 years (38). Although data on CHC effects on BMD is conflicting, there is currently no evidence supporting increased risks of osteoporosis or fracture in CHC users (72,85). Early data has suggested that in healthy adolescents, COCs with at least 30 mcg ethinyl estradiol may be preferred due to poorer bone mineralization seen with lower dose options (38), and that extended regimens may be preferred to 28-day cyclic regimens because there is greater bone accrual (86). Adolescents with eating disorders are at risk for decreased BMD. Although a recent study suggested COC use was associated with normalization of bone resorption markers in adolescents with anorexia nervosa and may limit bone loss (87), CHCs are generally not recommended for prevention of osteoporosis in this population (32). v. Obesity: There are no contraindications to CHC use based on body weight and/or BMI alone (17,42). Studies demonstrate either equivalent or increased pregnancy rates among obese CHC users, however more high quality studies are needed (88). Barrier Contraception Male condoms are the most commonly used contraceptive method at first intercourse, and one of the most commonly used methods among adolescents (9). This method retains its popularity due to its low costs and lack of need for a prescription. Typical use failure rates are as high as 18% and may be higher in adolescents due to inconsistent/ incorrect use (8,89). HCPs can help ensure that adolescents understand proper condom use including sizing, placement, storage, and safe lubricants as well as how to negotiate condom use with their partners (32,89). There are concerns that adolescents choosing LARCs have the lowest rates of dual method use (90). Regardless of the contraceptive method chosen, HCPs should encourage adolescents to continue to use condoms for STI prevention as well as contraceptive back-up in the event of a contraceptive failure and/or non-use. Conclusion The ability to freely choose when and how many children to have is a basic human right. Contraception is an important pillar for the prevention of unintended pregnancy in adolescents. HCPs should strive to provide care within the human rights based framework and to work with adolescents to find a method that best meets their personal biopsychosocial needs and that they will be able to adhere to. Adolescents should have access to a wide range of contraceptive options with LARCs being first line options due to their greater effectiveness. However, as LARC uptake increases among adolescents, it is important to incorporate messages about condom use specifically for STI prevention. Healthcare providers must provide counselling that is appropriate to the adolescent, acknowledges how they access health care, and is not perceived as directive or coercive.
2018-05-08T18:08:56.191Z
2013-09-01T00:00:00.000
{ "year": 2020, "sha1": "146a8130bfde917d35c97a94dff11492570e7471", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4274/jcrpe.galenos.2019.2019.s0003", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a0f372b6088ca52ad2d134c89ffefa42293d995", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260950340
pes2o/s2orc
v3-fos-license
Use of Dichlorodimethylsilane to Produce Polydimethylsiloxane as a Substitute for Vitreous Humour: Characteristics and In Vitro Toxicity
Polydimethylsiloxane (PDMS) is a substitute for vitreous humour in vitreoretinal surgery and is usually produced from octamethylcyclotetrasiloxane (D4). In Indonesia, both commercial PDMS and D4 are limited and expensive. Dichlorodimethylsilane (DCMS) can be an alternative to produce PDMS. DCMS is cheaper and easier to obtain than D4; however, extra effort is needed in order to produce PDMS from DCMS. Therefore, this study aimed to produce PDMS from DCMS by varying the ratio of DCMS precursor to dichloromethane (DCM) solvent at ratios of 1:1 and 1:4 through the hydrolysis-condensation method under neutral conditions. The PDMS produced had medium (2.06 Pa·s) and high (3.59 Pa·s) viscosity, with densities ranging from 0.96 to 0.99 g/mL. The refractive index was 1.4034-1.4036 and the surface tension was 21 × 10−3 N/m, while the samples were able to transmit ~100% of visible light; these values are similar to the characteristics of commercial PDMS. PDMS samples were characterized using IR and NMR spectroscopy, which confirmed they were of the PDMS type. The most optimum DCMS:DCM ratio was 1:1 due to the medium-viscosity PDMS type that could be produced. The in vitro HET-CAM toxicity test showed that the samples were non-irritant, similar to PDMS produced from D4. PDMS from DCMS was non-toxic and ready to be used as a vitreous humour substitute.
Introduction
Vitreous humour is a clear gel that fills most of the volume of the eye socket [1][2][3]. It has physical and chemical properties that should not change, in order to avoid retinal detachment. This eye disorder requires vitreoretinal surgery, which involves a procedure called the vitrectomy technique. The surgery is carried out by replacing the damaged vitreous humour with a substitute material that has similar characteristics to the natural vitreous humour [4]. Polydimethylsiloxane (PDMS) is a fluid commonly used as a substitute for vitreous humour in vitreoretinal surgery [3,5]. Compared to other substitute materials, PDMS can be used for both short- and long-term applications. Previous studies have also reported various advantages of using PDMS as a vitreous substitute. PDMS has high surface tension, low specific gravity, low toxicity, transparency, and ease of removal, and it also allows more controlled retina manipulation during surgery. PDMS is the best choice for complicated retinal cases. Unlike gases, PDMS is safe for patients who need to travel by airplane or ascend to high places [1,2,4]. Moreover, the physicochemical properties of PDMS, especially its high viscosity, have also been reported to show good stability in in vivo tests [6,7]. Therefore, this material is the most suitable and widely used due to its properties and ability to provide a tamponade effect [1][2][3][4][5]. However, PDMS has been reported as unable to provide a tamponade effect on inferior retinal breaks, causing emulsification that leads to complications (cataracts, glaucoma, keratopathy, proliferative vitreoretinopathy), inflammation, corneal toxicity, increased IOP, and decreased choroidal thickness [1][2][3]. Moreover, a second operation is also required for its removal, most commonly 3 weeks (at the earliest) or 3-6 months after being injected into the eyes [2,8]. PDMS is usually produced from octamethylcyclotetrasiloxane (D4) [9][10][11][12][13].
In Indonesia, both commercial PDMS and the D4 monomer as raw materials are limited and expensive [10]. Dichlorodimethylsilane (DCMS), with the chemical formula Si(CH3)2Cl2, is a precursor that contains methyl (CH3) and silicon (Si) bonds. It can be an alternative material for producing PDMS. PDMS has been synthesized from DCMS by a hydrolysis and condensation method [14][15][16][17]. The hydrolysis method requires a solvent. Hydrolyzed DCMS produces cyclic siloxanes such as hexamethylcyclotrisiloxane (D3) and D4; the monomer then condenses into PDMS [16,17]. However, more effort is needed in order to produce PDMS from DCMS that has suitable properties as a vitreous substitute. The viscosity of a vitreous substitute is classified into three types, namely, low-, medium-, and high-viscosity. The materials used in vitreoretinal surgery must have a viscosity of at least ~1 Pa·s, 1.8 Pa·s, and ~3 Pa·s for the low-, medium-, and high-viscosity scenarios, respectively [11,18]. Another requirement for a vitreous substitute is that the material must not contain toxins harmful to the eyes; therefore, as a material used for medicinal purposes, it must go through preclinical testing and clinical trials. The Hen's Egg Test-Chorioallantoic Membrane (HET-CAM) method can be used to evaluate acute and chronic inflammatory responses to biomaterials [19,20]. This test screens tissue reactions against biomaterials quickly, simply, and cost-effectively [21]. HET-CAM is a test designed to examine macroscopic changes in the chorioallantoic membrane (CAM), such as changes in blood vessel width, hyperemia, lysis, and coagulation [22]. The CAM model also provides the ability to visualize the implant site without sacrificing test animals. A toxicity test therefore needs to be carried out to determine the feasibility and possible impact of the use of a vitreous substitute. Previous studies have successfully synthesized low- and high-viscosity PDMS from the hydrolysis-condensation of DCMS under basic conditions. Setiadji et al. obtained high-viscosity (3.84 Pa·s) PDMS using a DCMS to dichloromethane (DCM) solvent volume ratio of 1:10 through 18 h of hydrolysis and 10 min of polymerization [15]; however, this synthesis process was considered not optimal due to the long hydrolysis step. Fauziah et al. synthesized low- (1.53 Pa·s) and high-viscosity (4.49 Pa·s) PDMS with a DCMS:DCM ratio of 1:4. The hydrolysis process took 2 h, while the polymerization process took up to 18 days (for low viscosity) and 63 days (for high viscosity) at low temperatures (15-20 °C) with self-polymerization techniques [14]. The self-polymerization techniques, which required a very long processing time, are one of the disadvantages for the development of this material. Apart from basic conditions, several previous studies also reported that PDMS can be synthesized from DCMS under acidic or neutral conditions (with or without adding an initiator) [17,23]. However, synthesis under acidic conditions cannot produce PDMS with the viscosity required for vitreous humour substitution [17]. In order to produce and develop PDMS, it is very important to find the optimal conditions for the synthesis process. It is clear that, in previous studies, PDMS of high quality for vitreous humour substitution could not be produced with a shorter and easier synthesis process. Moreover, the toxicity information of the materials was also still unknown.
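As a small aside on the viscosity classification quoted above (minimums of roughly 1, 1.8, and 3 Pa·s for low-, medium-, and high-viscosity vitreous substitutes), the sketch below assigns a sample to a class from its measured viscosity. The function name is hypothetical and the cut-offs are the approximate values given in the text.

```python
def classify_viscosity(eta_pa_s: float) -> str:
    """Classify a vitreous-substitute fluid by viscosity (Pa·s),
    using the approximate minimums quoted in the text."""
    if eta_pa_s >= 3.0:
        return "high-viscosity"
    if eta_pa_s >= 1.8:
        return "medium-viscosity"
    if eta_pa_s >= 1.0:
        return "low-viscosity"
    return "below the ~1 Pa·s minimum for vitreoretinal use"

# The two samples reported later in this study:
print(classify_viscosity(2.06))  # medium-viscosity (P-1)
print(classify_viscosity(3.59))  # high-viscosity (P-2)
```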
Therefore, this research reports the synthesis of PDMS through hydrolysis and high-temperature condensation polymerization using DCMS as a precursor, in order to produce PDMS with suitable properties as a vitreous humour substitute. The hydrolysis process was carried out under neutral conditions using DCM solvent. The ratio of precursor to solvent was varied to determine the optimum process and obtain a product with the best characteristics. Furthermore, high-temperature treatment with the addition of potassium hydroxide (KOH) as a catalyst and hexamethyldisiloxane (MM) as a chain terminator was carried out using a condensation polymerization method to accelerate the reaction and shorten the polymerization time. An in vitro preclinical HET-CAM toxicity test was also carried out on the synthesized PDMS.

Materials
For the synthesis process, dichlorodimethylsilane, Si(CH3)2Cl2, DCMS, with a purity of >99.5%, was obtained from Sigma Aldrich, Darmstadt, Germany. Dichloromethane, CH2Cl2, DCM, with a purity of 99.8%, acted as the solvent and was obtained from Merck, Darmstadt, Germany. Chloroform, CHCl3, from Merck, Germany, was used in the purification process, while Milli-Q water, H2O, was used in the hydrolysis and purification processes. Potassium hydroxide, KOH, acted as a catalyst and was obtained from Merck, Germany. Hexamethyldisiloxane, O(Si(CH3)3)2, MM, acted as a chain terminator and was obtained from Sigma Aldrich, Germany. The commercial polydimethylsiloxanes (ARCIOLANE 1300, low-viscosity, and ARCIOLANE 5500, high-viscosity) from Arcadoptha, Toulouse, France, were used for the comparison of properties.

Synthesis Procedure
The PDMS synthesis process consisted of several steps, as shown in Figure 1. It started with the hydrolysis method under neutral conditions to form the OH functional group in the sample. The hydrolysis reaction was initiated by mixing DCM solvent with DCMS precursor in a varied volume ratio. Subsequently, Milli-Q water was added slowly to the solution. The hydrolysis reaction was carried out at a stirring speed of 300 rpm and a temperature of 60 °C for 240 min. The by-products and residual precipitates formed after the hydrolysis process were separated from the non-polar phase using a separating funnel. The non-polar phase was evaporated using a rotary evaporator at 50 °C for 60 min to remove the residual solvent that was still present in the hydrolysed gel. After the evaporation process, a clear hydrolysis gel was produced. The gel was saturated (stirred) at a temperature of 50 °C with a stirring speed of 200 rpm to complete the hydrolysis process. Subsequently, purification was carried out until the sample reached a neutral pH of 7; as a result, the pure hydrolyzed gel was obtained. The condensation polymerization reaction was carried out at high temperatures. The purified hydrolysis gel was condensed at a temperature of 130-140 °C with a stirring speed of 300 rpm to obtain PDMS gel. In this condensation process, KOH was used as a catalyst and MM was used as a chain terminator. Condensation was carried out by mixing the purified hydrolysis gel with 0.6 M KOH and a small amount of MM. Subsequently, the PDMS gel was obtained. The resulting PDMS gel was further purified to remove residues and obtain pure PDMS gel. The purification was carried out by dissolving the sample in chloroform and adding Milli-Q water, following the method of previous research [10]. When the sample was neutral, the Milli-Q water and chloroform were separated from the sample so that pure PDMS gel remained. PDMS samples were synthesized under several synthesis conditions, such as the volume ratio between DCMS and DCM solvent and the polymerization temperature. The synthesis condition of each sample is listed in Table 1; the synthesis parameters were the result of optimization. The sample with the first synthesis condition was coded as P-1 and that with the second synthesis condition as P-2.

Characterization
The characterization of the synthesized PDMS gel was carried out to determine the characteristics of the samples, and the results were compared with the characteristics of low- and high-viscosity commercial PDMS. The density of PDMS was determined from mass and volume measurements using Equation (1) [24]:

ρ = m/V, (1)

where ρ = density (g/mL), m = mass (g), and V = volume (mL). The viscosity of 3 mL of each PDMS sample was determined by the torsional oscillation method using a SEKONIC VISCOMATE viscometer model VM-10A-MH (SEKONIC, Tokyo, Japan). The surface tension values were determined by a capillary method using a Capillary Rise Method Dyne Gauge DG-1 (Surfgauge Instrument, Chiba, Japan) under environmental conditions of 16-20 °C and 40-65% RH. Furthermore, an AS ONE I-500 refractometer (Brix 0~90%; AS ONE, Osaka, Japan) was used to measure the refractive index. From the refractive index data, the additional diopters can be calculated using Equation (2) [25], where Ns = refractive index of the sample, Nv = refractive index of the vitreous (1.3348), AL = axial length in mm (23.35 mm), and ACD = anterior chamber depth in mm (3.06 mm). A UV-Vis spectrometer T+70 (PG Instruments Ltd., Lutterworth, UK) was used to measure the transmittance of the samples; the samples were prepared on a glass substrate (2.5 × 1 cm) and measured at wavelengths ranging from 400 nm to 900 nm (visible light). The functional groups of the samples were identified using a Perkin Elmer Spectrum 100 FTIR spectrometer (PerkinElmer, Inc., Shelton, CT, USA) over wavenumbers ranging from 500 cm−1 to 4000 cm−1, and verified using 1H- and 13C-NMR on an Agilent VNMRS500 500 MHz instrument (Agilent Technologies, Inc., Santa Clara, CA, USA). NMR characterization was carried out for sample P-2; for this characterization, the sample was dissolved in 4 mL of deuterated chloroform (CDCl3) solvent.

In Vitro HET-CAM Test
The HET-CAM test was carried out using 7-day-old embryonic white leghorn eggs weighing 50-60 g. The test used a positive control of 1% sodium dodecyl sulfate (SDS) and a negative control of 0.9% sodium chloride (NaCl). The prepared chicken eggs were divided into three groups, consisting of three eggs each for the tested sample, the positive control, and the negative control. The selected eggs were incubated for seven days at 37 °C. After the incubation process, the egg membrane was opened and 300 µL of the test material (sample, positive control, or negative control) was applied to each egg. Subsequently, observations were made at 0, 15, 30, 60, 100, and 300 s. The in vitro HET-CAM observations consisted of identifying changes in the width of the blood vessels and the presence of hemolysis; blood vessel changes were assessed on primary, secondary, and tertiary vessels. Toxic samples show a change in the width of blood vessels, as with the positive control material, while non-toxic samples show no change in blood vessels, as with the negative control material.

Figure 2 showed the physical appearance of the purified hydrolyzed gel and a PDMS sample. Both sample P-1 and sample P-2 were successfully synthesized and purified, resulting in transparent materials.

Characteristics of PDMS
Table 2 showed the characteristics of density (ρ), viscosity (η), refractive index (n), additional diopters (D), and surface tension (γ) of all samples; the properties of commercial PDMS were also listed as a comparison. The results indicate that the density of all samples was lower than that of water (1 g/mL). Based on the viscosity value, sample P-1 was categorized as medium-viscosity and P-2 as high-viscosity PDMS. The refractive index value of sample P-1 differed only slightly from that of P-2, and the samples had different refractive indexes compared with the natural vitreous (1.3348). Therefore, samples P-1 and P-2 had additional diopters of +3.406 and +3.396, respectively; however, these additional diopters were smaller than those of the commercial PDMS. In addition, both P-1 and P-2 had surface tension values of 21 × 10−3 N/m, which were higher than those of the commercial PDMS. The spectrum of the PDMS transmittance measurement in Figure 3 showed that the PDMS samples had transparency values of ~100% at visible light wavelengths. The infrared (IR) spectra of all samples are presented in Figure 4, while Table 3 showed the functional group identification results of all samples. Based on the results, all samples had a slight difference in wavenumber compared to commercial PDMS; however, their transmittance peaks showed the same spectrum, indicating that they had the same functional groups as commercial PDMS without any impurities.

Characterization via 1H-NMR and 13C-NMR
In Figure 5, the 1H-NMR and 13C-NMR spectra of sample P-2 are shown. The 1H-NMR measurement revealed only one peak, originating from the methyl (Si-CH3) group; likewise, the 13C-NMR spectrum showed a single methyl peak.

In Vitro Toxicity Test
The micro-images of the blood vessels in the HET-CAM test at the 0 s and 300 s observations are shown in Figure 6. Blood vessel damage, namely changes in the width of blood vessels and hemolysis, was identified in the positive control group (1% SDS) (Figure 6a), while the PDMS samples (P-1, Figure 6b, and P-2, Figure 6c) and the negative control (0.9% NaCl) (Figure 6d) did not show any damage to the blood vessels.

Discussion
In the hydrolysis process of DCMS, the chloride bonds were broken by water and the OH group bound with Si, replacing Cl. This caused Cl to be released from Si, which bound with H to form HCl. The reaction continued until two liquid phases were formed, namely, the hydrolysed gel and the residue containing the HCl precipitate. The two liquids were separated and the evaporation process was carried out to remove the residual solvent that remained in the sample. The sample was saturated (stirred) and purified to obtain a pure hydrolysis gel with a neutral pH. The resulting hydrolysis gel was a monomer that could be polymerized to produce PDMS. To carry out this process, the pure hydrolysed gel that had been saturated was polymerized through a high-temperature condensation method, assisted by a catalyst. In the condensation process, MM was also used to control the polymerization process. Subsequently, the polymerized sample was purified to produce a clear and transparent pure PDMS gel, as shown in Figure 2. In the hydrolysis-condensation method, some synthesis parameters played important roles, such as the temperature and duration of the synthesis process. We succeeded in synthesizing PDMS with a suitable viscosity as a vitreous substitute from DCMS, without an initiator for the hydrolysis process, in a shorter time than in previous studies. Previous studies required a greater amount of solvent and a longer synthesis time (both hydrolysis and condensation) for producing PDMS gel [14,15]. Increasing the polymerization temperature and adding KOH and MM have an impact on accelerating and controlling the polymerization process. In addition, we found that the ratio between the DCMS precursor and the solvent also affects the synthesis process and the characteristics of the resulting sample. In this study, we found that changing the synthesis parameters greatly affects the viscosity of the sample. In addition, we also proved that the hydrolysis process of DCMS can be carried out under neutral conditions; under these conditions, the whole PDMS gel synthesis process was even faster compared to previous research methods. The PDMS gel characterization results in Table 2 showed that the PDMS samples had density values of 0.96 g/mL for P-1 and 0.99 g/mL for P-2. These values were close to the 0.97 g/mL value of the commercial PDMS and the 1.0053-1.0089 g/mL value of vitreous humour [26]. The density value is related to intermolecular bonds, with higher values indicating greater intermolecular bonding; it has also been reported as one of the factors that influence the possibility of emulsification [5]. The closer the density value of the material to that of the natural vitreous, the better the performance of the material [3]. By varying the volume ratio of DCMS:DCM, two types of PDMS were obtained. The viscosity values obtained were in the medium- and high-viscosity ranges for sample P-1 (2.06 Pa·s) and P-2 (3.59 Pa·s), respectively. Medium-viscosity PDMS is a newer type of PDMS and is considered the most optimal type, owing to its low emulsification tendency compared to others [27]; furthermore, this PDMS type can also be easily injected due to its viscosity. High-viscosity PDMS has been reported to be able to reduce the emulsification level; therefore, high-viscosity PDMS has been widely chosen by surgeons for long-term use and can even be used as a permanent vitreous substitute [3,5]. However, this study has not been able to produce all PDMS types that are used as vitreous substitutes. Producing low-viscosity PDMS from DCMS is the next challenge.
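The additional diopter values quoted above (+3.406 and +3.396 D), which are discussed further in the refractive index comparison that follows, can be related to the measured refractive indices. Since the body of Equation (2) did not survive extraction, the exact formula is not shown here; the sketch below uses a simple relation built only from the stated variables (Ns, Nv = 1.3348, AL = 23.35 mm, ACD = 3.06 mm) that approximately reproduces the reported values, and it should be read as an assumption rather than as the paper's exact equation.

```python
# Assumed form of Equation (2): D ≈ 1000 * (Ns - Nv) / (AL - ACD), with lengths in mm.
NV, AL_MM, ACD_MM = 1.3348, 23.35, 3.06

def additional_diopters(ns: float) -> float:
    """Approximate added refractive power (D) of a vitreous substitute with
    refractive index ns, under the assumed form of Equation (2)."""
    return 1000.0 * (ns - NV) / (AL_MM - ACD_MM)

for label, ns in [("PDMS samples (measured index ~1.4035)", 1.4035),
                  ("natural vitreous", 1.3348)]:
    print(label, round(additional_diopters(ns), 2), "D")
# With ns ≈ 1.4034-1.4036 this gives roughly +3.4 D, close to the reported
# +3.406 and +3.396 D for samples P-1 and P-2.
```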
The refractive index values of the samples were close to the commercial PDMS and were still within the allowable range of the refractive index value of vitreous substitute, namely, +3.0D to +3.5D [26]. The refractive index values of the samples were still different from the natural vitreous (1.3348). Using PDMS as a substitute for vitreous humor cannot avoid this difference in refractive index. The difference between refractive index value of the sample and vitreous humour caused a change in refraction. The final results on vision also depend on the eye condition before silicone oil is injected [28,29]. However, the refractive index values of the samples were smaller than the commercial PDMS. The closer the refractive index of the sample was to the vitreous, the higher the visual acuity. Furthermore, the surface tension values of samples P-1 and P-2 were higher than the commercial PDMS. This high surface tension was required to avoid emulsification of the ocular fluids. The surface tension acts as a barrier wall protecting the liquid from outside influences. The high surface tension provides a good tamponade effect. The transmittance spectrum of PDMS samples in visible light was presented in Figure 3. The P-2 sample, which had a higher viscosity, exhibited slightly lower transmittance than P-1. However, the synthesized PDMS samples had a transparency value of~100%, indicating that the samples can efficiently transmit all visible light. It is very important in the optical function of vitreous substitute to transmit the light that enters the eye toward the retina [2]. The commercial PDMS is known to have excellent transparency [3]. In this research, that condition was successfully obtained in the synthesized PDMS samples. Based on the physical properties, the most optimum volume ratio of DCMS:DCM was 1:1 due to the PDMS type that can be produced (medium-viscosity PDMS). Figure 4 showed the results of the functional groups from all samples, and their identification was listed in Table 3. Compared to the commercial PDMS, all samples had a slight difference in wavenumber. However, the transmittance peaks of all samples showed the same spectrum, indicating that they had the same functional groups as commercial PDMS. Based on the IR characterization results, the synthesized PDMS gel had a functional group absorption of PDMS type with the main peaks of Si-C stretching and rocking CH 3 , Si-O-Si stretching, CH 3 symmetric deformation of Si-CH 3 , CH 3 asymmetric deformation of Si-CH 3 , and CH stretching of CH 3 . This was supported by the 1 H-NMR and 13 C-NMR characterization results, where sample P-2 had PDMS characteristics and showed only one peak in each NMR characterization, namely, Si-CH 3 . The sample only had an H atom in the methyl bonded to Si, and the C atom was only in the methyl bonded to Si. Therefore, both the 1 H NMR and 13 C NMR measurements showed only one peak for methyl. The FTIR characterization results did not show any absorption of functional groups other than PDMS, similar to the NMR characterization results. Therefore, it was concluded that the samples were pure PDMS without any impurity contamination. In this research, toxicity tests were carried out on both samples and the results were obtained after 300 s of observation. Figure 6 showed the results of the vessels of positive control, samples test, and negative control. 
The negative control did not reveal any symptoms of irritation, such as a change in the width of blood vessels and hemolysis, indicating a non-toxic sample. Meanwhile, the positive control showed damaged vessels, indicating that the sample was toxic. Based on Figure 6, samples P-1 and P-2 did not show any damage in blood vessels during the observation process, the same as for the negative control result. Therefore, it can be concluded that the synthesized PDMS samples were non-toxic through the in vitro HET-CAM toxicity test. These results ensured that the synthesized PDMS was safe for used as a vitreous substitute. The HET-CAM toxicity test method has been reported to have a good correlation (76%) with in vivo test results (Draize test) and has also been accepted as a full replacement for severe irritation tests on animals in several European countries [30]. The toxicity results of sample P-1 and sample P-2 were also in accordance with other in vitro toxicity results. Romano et al. reported that cytotoxicity effects of commercial PDMS in human retinal cells (ARPE-19 and BALB 3T3) were not found [31]. However, direct or indirect retinal toxicity of commercial PDMS has been reported in the inner retinal layer after use for 15 months (long-term) [32]. Meanwhile, the recommended time to remove the silicone oil is generally after 6 months of being injected [8]. Therefore, further studies regarding the long-term and in vivo toxicity of PDMS from DCMS are needed. Based on the characterization and toxicity results, medium-and high-viscosity PDMS were successfully synthesized from DCMS with good quality. The non-toxic HET-CAM test results indicated that the synthesized PDMS did not contain any harmful materials, the same as PDMS from the D4 monomer [33]. The synthesis of PDMS with a 1:1 of volume ratio between DCMS and DCM solvent successfully produced the optimum type of PDMS with high surface tension and non-toxicity properties. The properties of the samples indicated that PDMS from DCMS was successfully synthesized with good quality and was ready to be used as an alternative for producing PDMS as a vitreous humour substitution. Conclusions The two types of PDMS were successfully synthesized through hydrolysis and hightemperature condensation by varying the volume ratio of DCMS:DCM. The chemical and physical properties of the synthesized medium-and high-viscosity PDMS showed that they had characteristics of PDMS type. Moreover, both types of medium-and highviscosity PDMS samples were non-toxic, as determined through in vitro HET-CAM toxicity tests. The optimum volume ratio of DCMS:DCM to produce PDMS was 1:1. This method succeeded in producing the optimum type (medium-viscosity) of PDMS with high surface tension and non-toxic properties. Based on the properties and toxicity, PDMS from DCMS had good quality and non-toxicity, and was ready to be used as an alternative for producing PDMS as a vitreous humour substitute.
2023-08-17T15:08:52.941Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "a0e1e02676f67f9731786033ab500db7a67d05c3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4983/14/8/425/pdf?version=1692065640", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3148587c5008c3280a47cc2bb6d1b1a3fa41b3d1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
262169984
pes2o/s2orc
v3-fos-license
Quality Risk Evaluation of Urban Rail Transit Construction Based on AHP-FCE Method

The demand for urban transport is increasing globally, and urban rail transit is an important infrastructure for meeting this demand. The objectives of this study were to effectively control and prevent all types of risks in the construction of metro projects and to improve the quality and safety control of urban metro project construction. First, 20 index factors were selected from the five dimensions of "man-machinery-materials-methods-environment" to construct an index system for assessing urban metro construction quality risks. Second, the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation (FCE) methods were used to comprehensively evaluate the construction quality risks of subway projects, and the weights of the secondary indices were determined. Finally, the importance of the secondary indicators was evaluated using the integrated AHP-FCE method, and the model was applied to engineering practice for validation. The results indicated that the comprehensive AHP-FCE method has good adaptability and rationality and has practical application value for metro project construction quality and safety risk assessment. It can help prevent urban metro construction quality accidents and provides a novel idea for metro project construction quality risk assessment.

Introduction

With continuous urbanization throughout the world, the number of cities is gradually increasing, urban space is expanding, and the demand for urban transport is rising. Thus, urban rail transit is an important infrastructure that provides an effective solution for daily urban transport [1]. Since the world's first underground was constructed and put into operation in London in 1863 [2,3], many large cities have accelerated the construction of similar efficient, safe, low-carbon, and sustainable infrastructure, which is expected to alleviate traffic congestion, reduce air pollution, and promote sustainable urban development. As of December 2022, 545 cities in 78 countries and regions around the world had urban rail transit, with a total mileage of >41,386.12 km. China (including Hong Kong, Macao, and Taiwan) had 61 cities with rail transit, corresponding to a total operating mileage of 10,857.17 km, which ranked first in the world and accounted for 26.2% of the global total mileage, while Germany, Russia, the United States, and Ukraine ranked 2nd to 5th, respectively [4]. Urban rail transit systems are a public good that has become an integral part of urban transport and will be increasingly used globally [5,6].
Urban rail transport plays an important role in guiding the rational development of the spatial structure of cities, coordinating social resources, and improving people's lives, and it is an important way to solve the problem of urban traffic congestion [7].However, most metro construction is underground, with a closed construction space and complex building structures and construction environments, often in densely populated areas.Thus, the occurrence of uncertain disaster events such as floods, fires, equipment failures, and other quality hazards during the construction process can lead to safety accidents and cause a social amplification effect [8,9].On January 12, 2007, seven people died as a direct result of a collapse at the construction site of Pinheiros station on yellow line 4 in São Paulo, Brazil [10].On August 23, 2012, flooding during the construction of an underground tunnel in Warsaw, Poland, brought the city's traffic to a standstill [11].In China, construction quality and safety accidents occur frequently during metro construction.On July 1, 2003, leakage in the tunnel liaison channel of Shanghai Metro Line 4 caused significant ground settlement and damage to several buildings, directly resulting in economic losses of RMB 150 million [12].On February 5, 2007, an EBP shield construction accident occurred during the construction of Nanjing Metro Line 2, resulting in extensive subsidence and settlement on the road above the tunnel, which led to water, gas, and electricity outages for 2 days and caused considerable inconvenience to 5,400 residents [13].On January 2, 2015, a combustible gas explosion occurred during the underground construction of Wuhan Metro Line 3, resulting in the death of two people [14].Therefore, to promote the safe, rapid, and stable development of urban rail transit systems, it is necessary to effectively perform risk management in the construction phase of which risk assessment is an indispensable part [15].The risk assessment of urban rail transit construction projects must be considered with regard to five aspects-construction man, machinery, materials, methods, and environment-to comprehensively sort the factors affecting the construction quality in the construction process; develop a practical, efficient, and systematic urban rail transit construction quality risk assessment system; implement risk prevention and control measures according to the assessment results; reduce the risk of loss; and ensure the smooth advancement of urban rail transit construction projects. Scholars worldwide have conducted extensive research on risk assessment of urban rail transit construction and have made significant progress.Zhou et al. [16] identified metro construction collapse patterns with regard to scenarios, consequences, and causality and developed safety strategies and valuable countermeasures for metro construction practices to avoid varying degrees of quality risk in metro construction.Wu et al. [17] developed a risk assessment and safety decision-making methodology that can provide guidance for dynamic safety analyses of tunnel-induced pavement damage over time.Yan et al. [15] combined vague set and object-element theory to develop a vague fuzzy object-element model for risk assessment; the practicality and effectiveness of the model were verified through examples.Wu et al. 
[18] developed an intelligent monitoring system platform for urban rail transit project construction to realize project site monitoring and dynamic early warning management of sources of risk.Li et al. [19] proposed a BIM platform-based metro construction safety risk identification and early warning systems.Various methods for risk analysis and assessment of urban rail transit construction projects include probabilistic risk assessment (PRA) [20], the safety risk identification system (SRIS) [9], and risk-factor analysis (RFA) [13].Among them, PRA and RFA use the questionnaire survey method to classify risk factors; then, practical countermeasures are identified for metro construction by identifying and evaluating key risk factors.The SRIS, by applying graphical recognition and risk identification automation technologies to risk assessment in the preconstruction period, can identify potential safety hazards and provide dynamic risk control and early warning during the construction of urban railways.Additionally, in engineering practice, fuzzy theory has been widely used to better identify uncertainties in the quality risk assessment of construction projects.Sari et al. [21] evaluated the urban rail system in Istanbul under different risk factors using the fuzzy analytic hierarchy process (FAHP) and conducted a multicriteria assessment of the existing rail system to allocate scarce resources.Al-Labadi et al. [22] proposed a fuzzy set model that can accurately assess the safety performance of grouting operations during metro tunnel construction.Zhang et al. [23] proposed a fuzzy decision analysis method to provide guidance for safety management in metro construction. Most previous studies only focused on a particular aspect of metro construction through the research and development of monitoring platforms or the development of evaluation systems and the corresponding measures.There have been relatively few studies on construction quality risk evaluation of urban rail transit construction projects from the perspective of management science, and there are few methods for evaluating the quality risk in the actual construction process of metro projects.In the present study, we evaluated urban rail transit construction quality risks on the basis of the existing research.The three main contributions of the study are (1) methodological innovation: the integrated application of the analytic hierarchy process and fuzzy comprehensive evaluation (AHP-FCE) allows more scientific, reasonable, and practical quantitative evaluation of urban rail transit construction quality risks, and the results of the metro construction quality risk assessment can be expressed as both a vector and a value.(2) Innovative perspective: The "4M1E" management method was used to systematically explore the quality of urban rail transit construction with regard to the "man-machinery-materials-methods-environment" aspects.(3) Innovation in content: This study focused on the quality and safety risks in urban rail transit construction, which is conducive to identifying risks and helps to realize the management of metro construction quality risk evaluation.We propose a scientific and accurate evaluation method for enhancing metro construction quality control.The remainder of this paper is organized as follows: Section 2 presents the data sources and research methodology.Sections 3 and 4 present and discuss the results, respectively, Section 5 presents the main conclusions and discusses policy implications.In everyday production 
activities, decisions regarding various matters must be made. Decisions can be made according to subjective perceptions or experiences when evaluating simple matters. However, the subjective approach to decision-making is inadequate for complex system projects. Therefore, it is necessary to consider the whole picture, comprehensively consider the object under study, and grasp the general nature of the matter to obtain accurate and reasonable evaluation results. According to the relevant literature [24-27], the methods widely used for construction risk evaluation are presented in Table 1. Considering that the quality risk assessment indicators for urban rail transit construction have an obvious hierarchy, in this study the AHP method was used to determine the weight of each subgoal and subsystem and to perform mathematical analysis based on expert semi-structured interviews to determine the weights of the indicators. FCE is a method based on fuzzy mathematics membership theory and can solve multivariable problems in complex decision-making processes [26,28]. However, when dealing with complex problems and multiple evaluation indicators, it is difficult for FCE to directly provide the weight of each evaluation, for which the AHP is effective [29]. Therefore, combining the two methods not only compensates for their respective shortcomings but also ensures the accuracy and comprehensiveness of the evaluation results [30,31]. Figure 1 shows the application steps of the AHP-FCE method in the quality risk assessment of urban rail transit construction. The specific steps of the AHP-FCE method are as follows. Step 1: Determine the research objectives and use the integrated AHP-FCE method to evaluate the risk of urban rail transit construction quality. Step 2: Using the "4M1E" management method, establish a quality risk assessment index system for urban rail transit construction with regard to five aspects: "man-machinery-materials-methods-environment." Step 3: Collect and calculate the weights (W) of the indicators at each level (using Saaty's 1-9 point scale), comprehensively evaluate the comment set R (using a 1-5 point scale), and multiply the indicator weight set W by the fuzzy correlation matrix R to calculate the FCE vector A. Step 4: Quantify the evaluation results (P) to evaluate the risk level of urban rail transit construction quality. (Figure 1: Steps for quality risk assessment during the construction period of urban rail transit.)

Principles of Indicator System Construction. Depending on the study object, three principles should be followed for selecting an appropriate evaluation model to solve a practical problem: (1) Principle of applicability. Different evaluation methods have different advantages and disadvantages, use conditions, and application scopes. Therefore, the most suitable method should be selected according to the research problem. (2) Principle of rationality. Research methods should be selected practically to ensure that they can support the research, are not cumbersome, and that the data are easy to collect. (3) Principle of comprehensiveness. When a research method cannot support the research results alone, the approach should be sufficiently flexible to allow multiple methods to be combined to ensure the accuracy of the target results.
This study preferred the AHP and the FCE.In contrast, a comprehensive comparison and selection of evaluation methods and their combinations produced an evaluation model based on AHP-FCE.A construction quality risk index system for urban rail transit was constructed by integrating the construction quality risk factors in engineering construction.First, a structural model of metro construction quality risk influencing factors was constructed via the AHP, and the weights of different indicators were determined.Second, the fuzzy synthesis method was used to determine the evaluation index affiliation matrix according to the subjective scores of experts.Finally, the qualitative is transformed into quantitative, and the combination of qualitative and quantitative is adopted to comprehensively and systematically evaluate the urban rail transit construction quality risk. Selection of the Evaluation System. Through systematic sorting and investigation of quality issues that occur during the construction of subway projects, it was found that construction quality issues are influenced by various factors, such as the construction man, machinery, materials, methods, and environment.In this study, to further investigate the factors that affect the quality of subway engineering construction and facilitate the development of improvement plans, risk prevention, and control measures for project management personnel, we systematically analyzed the quality factors that affect subway engineering construction with regard to five aspects, i.e., man-machinery-materials-methods-environment, and constructed a subway engineering quality risk assessment system. (1) Factors Related to Construction Man.The participants in subway construction are not only the main people involved in project production and operation but also the main people in engineering construction quality control.The management level, technical ability, quality and safety awareness, and work experience of the participants all affect the construction quality of subway projects.Since the beginning of subway construction, the human factors that have triggered and caused quality incidents are mainly reflected in the technical level of subway engineering professionals and the standardization of construction personnel operations. (2) Factors Related to Construction Machinery.A large amount of mechanical equipment is used in the construction of subway projects, and the practicality and efficiency of the mechanical equipment are prerequisites for ensuring construction progress and quality.In the construction of subway projects, the main factors related to the construction machinery and equipment include the performance of the machinery and equipment, the failure rate of the construction machinery, daily maintenance and upkeep of the machinery and equipment, and monitoring level of the machinery and equipment.Problems in different links have varying degrees of impact on the construction quality and safety.Therefore, monitoring involves reading machinery and equipment operating parameters in real time and understanding the state and performance of machinery and equipment can prevent all types of quality risk accidents caused by machinery and equipment. 
(3) Factors Related to Construction Materials. In-depth visits and questionnaire surveys on the metro projects of the China Railway system under construction in 2016-2022 revealed that the construction of metro projects involves various engineering materials, such as steel reinforcement, concrete, prefabricated shield pieces, and electromechanical equipment. Therefore, the main factors related to construction materials that cause construction quality risks are the quality of incoming materials, whether the supply of primary (auxiliary) materials is timely, whether the management of materials in the station is standardized, and whether the waterproofing measures for materials are effective. (4) Factors Related to Construction Methods. The construction of metro projects is characterized by long periods, high difficulty, and high technological requirements. The reasonableness of the construction plan and construction process design, and whether they meet the actual needs, impacts the quality of the construction project. Because concrete is poured from a great height during metro construction, a weak and unstable support system can result in formwork deformation, which has an incalculable negative impact on project quality. (5) Factors Related to Construction Environment. Metro construction takes place in urban areas, where commercial and residential buildings surround the project. There are many construction units, and multiprofessional cross-construction is frequent. The connection between construction processes increases the pressure on the schedule and necessitates highly skilled technical personnel, and the concealed excavation of underground stations is influenced by geological conditions. Therefore, the environmental contribution to construction quality risk is mainly caused by the geology of the construction site, the surrounding buildings, and the site construction environment, such as the undercutting of municipal pipelines. The metro quality risk index system is constructed through the collation and analysis of the factors affecting the construction quality of metro projects, as shown in Figure 2.

Evaluation Models and Principles. The AHP-FCE evaluation method combines the two methods, taking into account their respective advantages, to overcome the shortcomings of subjective assumptions and better reflect the objective reality. The AHP and FCE are used to quantify the qualitative evaluation descriptions and to construct a comprehensive evaluation model for urban rail transit construction quality risk, providing an alternative reference for construction quality risk evaluation in domestic urban rail transit engineering construction.

AHP Method. The objective of the AHP method is to translate experts' judgments of qualitative aspects into quantitative data and to construct a clear hierarchy for a complex problem, combining qualitative and quantitative aspects so that the weight of each indicator can be calculated scientifically. The main steps are as follows: (1) the main factors influencing the decision-making problem are analyzed at the criterion and indicator levels; (2) the weights of the factors are calculated; and (3) the weights of the factors at different levels are compared analogously. Step 1: Establishing a hierarchical ladder model. Considering the quality risk influencing factors of the urban rail transit construction process, the objectives are graded, and each element is analyzed layer by layer.
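Step 1 amounts to laying out the target, criterion, and indicator layers explicitly. A minimal sketch of that hierarchy as a nested structure is given below (Python is used purely for illustration; the indicator wording follows the paper's Figure 2, and the code itself is not from the original study).

```python
# Hierarchical "4M1E" index system (target layer -> criterion layer B1-B5
# -> indicator layer C11-C54), expressed as a nested mapping for illustration.
INDEX_SYSTEM = {
    "Risk performance of urban metro construction quality": {
        "B1 Construction man": [
            "C11 Technical level of construction personnel",
            "C12 Specification of construction personnel",
            "C13 Management level of managers",
            "C14 Safety awareness of construction workers",
        ],
        "B2 Construction machinery": [
            "C21 Performance of machinery and equipment",
            "C22 Maintenance of machinery and equipment",
            "C23 Failure rate of machinery and equipment",
            "C24 Level of monitoring of equipment",
        ],
        "B3 Construction materials": [
            "C31 Quality of materials on construction sites",
            "C32 Availability of construction materials",
            "C33 Stockpiling of construction materials",
            "C34 Waterproofing of construction materials",
        ],
        "B4 Construction methods": [
            "C41 Soundness of programming",
            "C42 Feasibility of construction handover",
            "C43 Stability of formwork supports",
            "C44 Production of construction site samples",
        ],
        "B5 Construction environment": [
            "C51 Geology of the construction site",
            "C52 Buildings around the construction site",
            "C53 Undercutting of municipal pipelines",
            "C54 Environment of the construction site",
        ],
    }
}

# Sanity check: 5 criteria x 4 indicators = the 20 second-level indicators.
criteria = next(iter(INDEX_SYSTEM.values()))
assert sum(len(indicators) for indicators in criteria.values()) == 20
```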
In the AHP, experts from relevant fields are invited to use the 1-9 scale method to determine the relative importance of the influencing factors in the criterion or indicator layer according to subjective and objective conditions and to assign values according to the relative importance of the different factors. The indicator vector of the criterion layer is determined as $W = [W_1, W_2, \ldots, W_n]$, and the weight vector of the indicator layer is $W_i = [W_{i1}, W_{i2}, \ldots, W_{im}]$. In this study, the nine-level scale presented in Table 2 was used as the scoring criterion, where $b_{ij}$ denotes the importance ratio of factor i to factor j.

Table 2. Degree of importance and definition (Saaty's nine-level scale). 1: factor i is as important as factor j; 3: factor i is slightly more important than factor j; 5: factor i is more important than factor j; 7: factor i is significantly more important than factor j; 9: factor i is far more important than factor j; 2, 4, 6, 8: intermediate evaluation values; reciprocal: if the judgement of factor i compared with factor j is $b_{ij}$, then the judgement of factor j compared with factor i is $b_{ji} = 1/b_{ij}$.

It is assumed that there are n schemes and i indicators in the indicator layer, and indicator i is compared with indicator j with regard to importance; the judgment matrix $B = (b_{ij})$ is then obtained via scoring by the experts. After the scores for all the indicators are obtained, these values are used as the basis for judgment to clarify the relative importance of the indicators at each level, and the eigenvector corresponding to the maximum eigenvalue $\lambda_{max}$ of the judgment matrix is obtained. After the weights of each matrix are calculated, to avoid unreasonable results, the consistency index (C.I.) and the consistency ratio (C.R.), based on the average random consistency index (R.I.), are used to test whether the matrix is acceptably consistent, as follows:

$W_i = \dfrac{\sqrt[N]{P_i}}{\sum_{k=1}^{N} \sqrt[N]{P_k}}, \qquad C.I. = \dfrac{\lambda_{max} - N}{N - 1}, \qquad C.R. = \dfrac{C.I.}{R.I.}$

where i represents the number of rows in the matrix, j represents the number of columns in the matrix, $W_i$ is the weight of indicator i, N represents the order of the matrix, and $P_i$ represents the product of all indicator assignments in row i. When C.R. < 0.1, the matrix passes the consistency test and has good consistency.

FCE Method. The FCE method is based on fuzzy mathematical theory, which decomposes the total objective of the evaluation into a fuzzy set consisting of several indicators in order to deal with uncertain information. The operation process includes establishing the sets of evaluation index factors and rubrics and constructing the affiliation (membership) and fuzzy relationship matrices. Step 1: Establish a comprehensive evaluation index factor set. This set consists of the construction quality risk criterion-layer factors $B = \{B_1, B_2, \ldots, B_5\}$ and the corresponding indicator-layer factors $C_{ij}$. Step 2: Establish a comprehensive evaluation rubric set. From the literature review and expert interviews, the indicator evaluation criteria were classified into five levels with a rubric set of $V = \{V_1, V_2, V_3, V_4, V_5\}$. Step 3: Determine the fuzzy relationship matrix. Each element $r_{ij}$ expresses the degree of subordination (membership) between an evaluation factor and a specific evaluation level. When m elements are evaluated in a comprehensive analysis, a matrix $R = (r_{ij})_{m \times n}$ with m rows and n columns is obtained. Step 4: Calculation of the FCE. According to the AHP, the criterion-layer indicator vector W and the indicator-layer weight vectors $W_i$ are obtained, and then the weight coefficients of each indicator are calculated. The final FCE vector is the composition of the fuzzy matrix $R_i$ with the weight vector $W_i$, denoted $A_i = W_i \circ R_i$. In multifactor evaluation, the AHP-FCE comprehensive evaluation model S was obtained using Equations (6)-(9), and finally, the comprehensive evaluation score P was calculated using the scoring set.
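The weight derivation and consistency test described above can be reproduced with a few lines of code. The sketch below is illustrative only: the pairwise judgement matrix is hypothetical (the paper's expert matrix is not reproduced), although it was chosen so that the resulting weights are close to those reported in Table 4, and R.I. = 1.12 for a fifth-order matrix as stated in the text.

```python
import numpy as np

# Average random consistency index R.I. for matrix orders 1..7 (standard AHP values).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(judgement):
    """Root (product) method weights plus the C.I./C.R. consistency test."""
    B = np.asarray(judgement, dtype=float)
    N = B.shape[0]
    P = B.prod(axis=1)                    # product of each row's judgements
    w = P ** (1.0 / N)                    # N-th root of the row products
    w = w / w.sum()                       # normalised weight vector W
    lam_max = float(np.mean((B @ w) / w)) # estimate of the maximum eigenvalue
    CI = (lam_max - N) / (N - 1)
    CR = CI / RI[N]
    return w, lam_max, CI, CR

# Hypothetical 5x5 judgement matrix for the criterion layer B1..B5
# (ordering: materials > environment > man > machinery > methods).
B = [[1,   2,   1/4, 3,   1/3],
     [1/2, 1,   1/5, 2,   1/4],
     [4,   5,   1,   7,   2  ],
     [1/3, 1/2, 1/7, 1,   1/5],
     [3,   4,   1/2, 5,   1  ]]

w, lam_max, CI, CR = ahp_weights(B)
print("weights:", w.round(3))            # roughly [0.128, 0.080, 0.452, 0.050, 0.289]
print("lambda_max, C.I., C.R.:", round(lam_max, 3), round(CI, 3), round(CR, 3))
# C.R. < 0.1 here, so this example matrix passes the consistency test.
```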
Results

In this study, the construction project of a station in the first phase of Shaoxing city rail transit line 2 was considered as an example. The construction period of this project was short, it covered a wide range of areas, the quality requirements were high, and cross-processes were frequent. Thus, a quality and safety construction management team headed by the project manager was set up at the early stage of project construction to ensure that the quality of the project would be "qualified" and that no quality accidents of general grade or above would occur. First, a questionnaire was designed to assess the proposed evaluation indicators. During the questionnaire process, the interviewees were asked to rate the 20 quality risk influencing factors of urban rail transit construction; the obtained data were then processed quantitatively to obtain the weights of each indicator in the evaluation index system, and the quality risks of urban rail transit construction were comprehensively evaluated through calculation and analysis.

Data Sources and Tests. To ensure the objectivity and authority of the indicator weights, 10 experts engaged in different fields related to urban rail construction were invited to score the first- and second-level indicators in the indicator system; the background information of the interviewees is shown in Figure 3. The reliability coefficient α was 0.789 (Table 3); a value of α > 0.7 indicates that the questionnaire and indicator data had high credibility.

Determination of Indicator Weights. Using the 1-9 point scale, 10 experts and technicians in metro engineering construction were invited to form an expert group to judge the importance of the indicators at the criterion level; the judgment matrix was then calculated to obtain the final weights of each indicator and to conduct consistency tests. According to the calculation method for the judgment matrix, the indicator weights of the evaluation model for all the factors influencing construction quality risk were determined, as shown in Table 4. Here, the criterion layer has λ max = 5.15, C.I. = 0.037, R.I. = 1.12, and C.R. = 0.033 < 0.1, satisfying the consistency test. From Table 4, it can be seen that the index weights of the urban rail transit construction quality criterion layer are B = {B 1 , B 2 , B 3 , B 4 , B 5 } = {0.114, 0.070, 0.486, 0.043, 0.287}. The construction materials have the largest weight, reflecting their key role in the evaluation of construction quality risks in urban rail transit engineering construction. Conversely, the construction methods have the smallest weight. In the indicator layer, the quality of incoming construction materials, material waterproofing measures, construction site geology, and the onsite construction environment have higher weights; they should be given more attention during the construction of metro projects.

Construction Affiliation Matrix.
To increase the accuracy of the evaluation results, members of the expert group were invited to evaluate the factors affecting the quality of metro construction, and an evaluation matrix was established according to the evaluation results.The evaluation results were divided into five levels, each corresponding to a different evaluation value.The comprehensive evaluation set of indicators was V = {V 1 , V 2 , V 3 , V 4 , V 5 } = {slightly low, low, average, medium, high}; the evaluation set corresponded to the set of scores expressed as U = {U 1 , U 2 , U 3 , U 4 , U 5 } = {50, 60, 70, 80, 90}.Using the expert determination of the level to which each indicator of metro construction quality risk belonged, the affiliation of each indicator was obtained according to the frequency, as shown in Table 5. The above solution process yielded a final FCE result of 85.21.Thus, the final score of the urban rail transit construction quality risk was 85.21, which is between 80 and 90.Therefore, the overall risk of metro construction quality is "medium," which is consistent with the results of the expert study.Accordingly, there is considerable room for optimization and improvement in project quality control. Index System Construction and Importance Analysis. Incorporating the advantages of the AHP into FCE and constructing the AHP-FCE integrated model for qualitative and quantitative analysis can achieve the purpose of identifying and evaluating risks.The main steps of the proposed method are risk-factor identification questionnaire survey and data processing, calculation of the weight coefficients of evaluation indices based on the AHP, and risk level evaluation based on FCE [30,36].As shown in Tables 4 and 5, the construction materials (B 3 ) had the largest weight coefficients among the five main indicators for the risk assessment of urban rail transit construction quality, followed by the construction environment (B 5 ), construction man (B 1 ), construction machinery (B 2 ), and construction methods (B 4 ).The five factors with the largest weight coefficients in the 20-indicator system were the quality of incoming materials (C 31 ), waterproofing measures for materials (C 34 ), geology of the construction site (C 51 ), onsite construction environment (C 54 ), and technical level of construction personnel (C 11 ).During the construction process, managers should focus [8].Additionally, if the concrete materials are not proportioned according to the design requirements and the construction process is not conducted according to the construction technology, the construction quality of the metro station will be negatively impacted. Construction Site Geology (C 51 ). To ensure the excavation of the foundation pit and smooth tunneling of the metro station, the construction party must organize several field surveys involving technical personnel [13], invite experts to engage in risk studies and technical debates, formulate safe and efficient construction plans, and adopt new construction techniques according to the local conditions to accelerate the connection of work processes and increase the construction efficiency.Additionally, they must actively use information technology to monitor the geological environment and provide reasonable construction plans for poor-quality strata to ensure the quality of the project [15]. Onsite Construction Environment (C 54 ) . 
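Pulling together the quantities reported above (the criterion-layer weights from Table 4, the five-level score set U, and an affiliation matrix of the kind summarized in Table 5), the comprehensive score P is obtained as sketched below. The weights and score set are those given in the text, but the membership matrix R is hypothetical because Table 5 is not reproduced here, so the printed score illustrates the calculation and will differ from the reported 85.21.

```python
import numpy as np

# Fuzzy comprehensive evaluation: A = W . R, then P = A . U.
W = np.array([0.114, 0.070, 0.486, 0.043, 0.287])   # B1..B5 weights (Table 4)
U = np.array([50, 60, 70, 80, 90])                  # scores for V1..V5

# Hypothetical membership of each criterion in the five comment levels
# V = {slightly low, low, average, medium, high}; each row sums to 1.
R = np.array([
    [0.00, 0.05, 0.15, 0.40, 0.40],   # B1 construction man
    [0.00, 0.10, 0.20, 0.40, 0.30],   # B2 construction machinery
    [0.00, 0.00, 0.10, 0.35, 0.55],   # B3 construction materials
    [0.05, 0.10, 0.25, 0.40, 0.20],   # B4 construction methods
    [0.00, 0.05, 0.15, 0.45, 0.35],   # B5 construction environment
])

A = W @ R                  # weighted-average fuzzy operator
A = A / A.sum()            # normalise (already ~1 for these inputs)
P = float(A @ U)           # defuzzified comprehensive score
print("FCE vector A:", A.round(3))
print("Comprehensive score P:", round(P, 2))   # ~82, i.e. in the 80-90 ("medium") band
```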
Urban rail transit is generally near on the main road of the city, surrounded by a large number of existing buildings and underground pipelines (gas pipelines, lighting, and power cables) [9].Thus, from the perspective of the construction environment, there are numerous uncertain factors.Therefore, in the process of metro construction, hidden dangers and risks associated with the construction environment should be carefully assessed to avoid ground subsidence, which can trigger the collapse of the surrounding buildings or the emergence of cracks. Technical Level of Construction Personnel (C 11 ). The management of construction personnel, which constitute the main body of underground construction, must be strengthened to ensure that construction personnel have both professionalism and adequate technical level [37].Additionally, it is necessary to improve training on the related equipment, including new technologies and maintenance [13].Persons who do not satisfy the training and assessment requirements should be prohibited from participating in the construction process to avoid quality and safety accidents. Conclusions Quality risk identification and evaluation is a complex decision-making process affected by various factors.This study focused on the quality risk in the construction process of an underground station in Shaoxing.The elements of the construction quality risk were comprehensively analyzed through expert empirical judgment and assessment, and an AHP-FCE evaluation model was developed to quantitatively evaluate the quality risk of underground construction.Objective data were combined with subjective judgments to determine the ranges of the affiliation values and the importance ranking of the construction quality risk factors.According to the results of the study, the following conclusions are drawn: (1) The "4M1E" management method based on five aspects ("man-machine-materials-methods-environment") of the construction of urban rail transit engineering construction quality risk assessment of the five categories of a total of 20-indicator system, a systematic analysis of the underground construction quality of the factors affecting the perspective of the study has a certain degree of innovation.(2) The combined AHP-FCE-based method can quantitatively evaluate the risk of construction quality and simultaneously rank the importance of various risk factors of construction quality.The comprehensive selection of research methods not only plays the role of expert experience but also reduces the errors caused by human subjectivity and improves the objectivity and accuracy of the evaluation results.Among the factors affecting the construction quality risk, the construction materials have the largest weight (0.486), and the focus in the indicator layer is reflected in the quality of incoming materials, waterproofing measures of materials, the geology of the construction site, and the construction environment.The results of this study can help managers better understand the risks in the construction of underground projects and provide corresponding control recommendations.(3) Using the AHP-FCE evaluation model, the final score of the metro construction quality risk was obtained as 85.21, corresponding to a "medium" risk level.The evaluation results match the actual situation of the selected project, confirming the practicality and effectiveness of the evaluation model.The results of the study provide a scientific basis and reference for the evaluation of the construction quality risk of 
domestic urban rail transit. The proposed method is also applicable to other types of risk assessment; however, it is necessary to replace the assessment indices according to the actual situation.

(Figure 2: Indicator system establishment for the construction quality of urban subway. Target layer: risk performance of urban metro construction quality. Criterion layer: factors related to construction man (B1), construction machinery (B2), construction materials (B3), construction methods (B4), and construction environment (B5). Indicator layer: technical level of construction personnel (C11), specification of construction personnel (C12), management level of managers (C13), safety awareness of construction workers (C14); performance of machinery and equipment (C21), maintenance of machinery and equipment (C22), failure rate of machinery and equipment (C23), level of monitoring of equipment (C24); quality of materials on construction sites (C31), availability of construction materials (C32), stockpiling of construction materials (C33), waterproofing of construction materials (C34); soundness of programming (C41), feasibility of construction handover (C42), stability of formwork supports (C43), production of construction site samples (C44); geology of the construction site (C51), buildings around the construction site (C52), undercutting of municipal pipelines (C53), environment of the construction site (C54).)

(Table 3: Reliability analysis based on Cronbach's α coefficient for the indicators. Table 4: Evaluation model indicator weights. Table 5: Summary of evaluation model indicator weights.)
2023-09-24T15:53:40.916Z
2023-09-20T00:00:00.000
{ "year": 2023, "sha1": "0c75d0a67fe8ec1ba76f4faa05ea3892763bc726", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ace/2023/2187071.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "52bd20f472857bddbb6cbd038b3342c3b432e84d", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
54826803
pes2o/s2orc
v3-fos-license
Possibilities of Soil Revitalization in Slovakia towards Sustainability Abtract The quality of the environment in the Slovak Republic is considerably differentiated. Based on the comprehensive assessment of individual components of the environment, the territories with reduced quality of the environment (or some components of environment) were located in the process of environmental regionalization in the Slovak Republic. Damage and reduction of the environmental quality is mainly caused by anthropogenic activity and negatively influence the health and quality of life of the population living in these areas. To ensure sustainable development, it is necessary to eliminate negative impacts resulting from the reduced quality of the environment. The aim of this paper is to evaluate and propose options for soil revitalization in Slovakia, towards ensuring sustainability (especially in damaged areas). Soil recovery is now possible to perform using a variety of innovative and biological processes which are discussed in this paper. The realization of such soil revitalization could at least partially improve the current state of environmental quality (i.e. environmental sustainability) and, secondarily, in the quality of life and health of the population (social sustainability). Introduction Regions of the Slovak Republic have a different state of damage of the components of the environment, mainly due to anthropogenic activities.In the process of environmental regionalization, damaged regions were allocated.These regions with their specificities require a specific approach to the design and implementation of economic, social and environmental policies and activities so the activities carried out in these areas led towards sustainable development. Quality of environment in the Slovak republic In 1997 was established the Slovak Environmental Agency work group dedicated to specially vulnerable areas.On the basis of the state of the environment, environmental regionalization of the Slovak Republic has been developed.According SAŽP, environmental regionalization process is spatial division of the land in which, according to the established criteria and environmental characteristics of the selected files the regions with a certain quality of the environment and trends of environmental change were framed.These regions are characterized by the quality of the components of the environment, state of environmental risk factors and measures to protect the environment.In the process of environmental regionalization, 5 level of environmental quality was allocated, as indicated in Table 1.Areas classified as an area with a high level of environment quality (1st level) represent the regions least negatively affected by human activities.On the contrary, the territories included in the level 5 of environmental quality represent areas extremely adversely affected by human activities -the areas with the highest share of environmental burdens and ecological problems reflected in a change of environmental quality. According to the newest approach in the process of environmental regionalization, based on five levels of environmental quality, geomorphological conditions and other geographical or administrative specificities, three basic types of environmental quality regions were defined (Klinda, Bohuš, Semrád, 2005): • Regions of the first level of environmental quality -this type of region cover mainly the environment of high quality (level 1). 
• Regions of the second level of environmental quality -this type presents a transitional area, which is highly heterogeneous.The dominant regions are these with satisfactory environment (level 2) with a moderately deteriorated environment (level 3). • Regions of the third level of environmental quality -represent the area where environmental burdens are cumulated; the area is mostly heavily deteriorated environments (level 5) and deteriorated environment (level 4) Based on a comprehensive assessment of the state of the atmosphere, the quality of groundwater and surface water, soil, rock environment, biota and other factors, the risk (problematic) areas have been identified.These areas are identified as Level 4 -deteriorated environment and Level 5 -heavily deteriorated environment (Table 1).Recently they present almost 16% of the total land area and are inhabited by more than 37% of the total population (SAŽP and enviropotal, 2012).In these regions, there are different problems of individual components of the environment such as excessive load of SO 2 , NO x , CO emissions, decreased quality of groundwater and surface water, but also significantly deteriorated soil and soil contamination.Problems in the environmental field are then reflected on the health of the population and reduced quality of life of people living in these areas.As indicated earlier, these areas are inhabited by more than two millions of inhabitants. In some polluted areas the soil is contaminated by heavy metals (such as nickel in Galantská area, or Cadmium, lead and copper in the Upper Nitra region), in other areas soils are strongly acidified resp.alkalized.To ensure sustainable development (in all its dimensions -i.e.economic, environmental, but also social) in these regions is necessary to eliminate negative impacts resulting from the reduced quality of the environment.The revitalization of contaminated land is an essential part of the improvement of the environment and elimination the negative effects of previous anthropogenic activities. Soil alkalization as a special problem in selected areas The aim of this paper is to evaluate the possibilities of revitalization of the soil in Slovakia (in example of biological revitalization performed in the selected burdened area) which should be directed to ensure sustainable development (and its dimensions).The quality of soil resource as one of the fundamental components of the environment as well as agro-ecological conditions in the Slovak Republic is diversified.As a result of previous orientation of the national economy to build heavy industry based on high energy and raw material intensity, occurred in our country the (excessive) contamination of soil, which has negatively affected the quality of the environment including the soil characteristics.The consequences of industrial activity resulted mostly in soils acidification, alkalization and metallization.Alkalinisation of soils is largely the result of alkaline, mostly particulate imissions and despite the fact that in Slovakia it is not so widespread phenomena, from the milder impact on soil, e.g.surroundings of cement and lime plants, it can have highly devastating effects (e.g. 
in the areas of magnesite processing plants).Endogenous resources of soil alkalinisation are particularly heavily mineralized groundwater, causing soil salinization associated with alkalization.Salinization and alkalinisation of soils greatly reduces agricultural production.Alkalinisation of soils in Slovakia is caused mainly by anthropogenic activities.Slovak republic is a country extremely rich in the natural crystal magnesite.The Slovak Republic is the fourth largest producer of magnesite in the world.On the territory of Slovakia is produced more than 6.5% of world production of magnesite.Mining and processing of magnesite is also a major economic sector of the Slovak national economy.The production of magnesite is localized on sites Jelšava (Slovak Magnesite Plant in Jelšava) and Lubeník (SLOVMAG Lubeník).Extraction of magnesite and subsequent processing is a very dusty process.Production of magnesite clinkers is conducted by thermal decomposition and clinker process.These companies have affected by its production not only air quality, but mainly the quality of soils, which are due to the extraction and processing of magnesite highly alkalized.A strong alkalinisation of soils caused heavy deterioration of soils in some sites placed in the immission field of above mentioned companies to the extent that the microbial life there has disappeared.In these areas are soil also significantly metalized (with the high doses of heavy metals, in particular Hg, Mn, As, Cd, Pb, Cu, Al, Fe).Heavy metals are a leading group of contaminants, which is involved in changing soil properties and significantly interferes with the processes occurring in the soil environment.(Javoreková a kol. 2008). Jelšava-Lubeník area (figure 1) where the mining and processing of magnesite is concentrated, has become one of the most polluted areas of the Slovak Republic, where is present strong alkalinisation of the soil.In this area, during the processing of magnesite, magnesium oxide is emitted into the atmosphere.Magnesium oxide causes the alkalinisation of more than 12 000 hectares of agricultural land and more than 6,600 ha of forest land.In this area, the soil pH is around 8-9, which corresponds to a strong alkalinisation of soils.Magnesium imissions cause many undesirable phenomena on soil, vegetation and animals, reflected also in many adverse events such as poorer production and economic results.These events led to the collapse of indigenous plant communities on soils in imission field of above mentioned companies and only a few resistant species insignificant from production, agricultural, forest and aesthetics point of view are present there.Alkalinisation of soils and accumulation of other environmental problems in this area have arisen primarily to reduced crop production.In some areas, strong alkalinisation caused loss of production ability of soil where any plant is hard to be grown.Adjusting soil pH towards neutralizing the soil is quite difficult and long-term process.Adjusting soil pH is economically but also very time consuming.One of the options in the process of adjusting soil properties and revitalization of damaged soils is the use of biological processes. 
Possibilities of alkalized soil revitalization by phytomass as a way of sustainable revitalization Landscape revitalization is a complex and highly complicated process.Represents the active implementation of measures aimed at full operationalization of ecosystem functionsservices (nature, biodiversity, etc.), which country provides and which are focus on improving the quality of the existence and survival of organisms and ultimately also for sustainable human life There are most important revitalization measures in the country -agriculture, forestry and water management measures.In connection with the revitalization of the country is necessary mention of the potential and strength of the country, because in many places were caused by anthropogenic activities to their restriction or distortion, and this is what gives need of country (components of environment) revitalization in the present.Revitalization, in essence, means renewal of vital functions, (re) recovery, strengthening, recovery, renaturation, or restore soil fertility generally after damage by human activities.Revitalization of damaged soils is one of the possibilities for the preservation of their production and non-production characteristics and ensuring sustainable development.Revitalization is an increase of its ecological stability of country and the possibility of providing its sustainability (Novotný 2012).About other possibilities of revitalization of soil contaminated by anthropogenic activity deals aslo Vráblíková, Vráblík (2002).Long-term observation and our previous research showed, that in this contaminated area the Phragmites australis (Cav.)Trin appeared in last years.This plant represents one of methods of contaminated soil revitalization.Phragmites australis (Cav.)Trin is originally humid plant, but in this area it grows literally in dry sites, where ground water is in the depth of several metres.Striking vitality of Phragmites australis (Cav.)Trin was found, as mega population in more sites, where the pH value reached more than 9 (which is on the border of strongly alkalized soil) and in such sites, where it does not occur and according to the published statements its presentation was not recorded in the past.It is hopeful, dominant, resistant, anti-erosive and technically available kind providing alternative solution of sanitation and fertilization of contaminated soils.Phragmites australis (Cav.)Trin can be considered very suitable plant for revatalization and remediation of contaminated soils through biological processes. 
In the imission field of exhalation sources of Slovak magnesite plants Jelšava and Slovmag Lubeník we conducted a further study to survey the species, which should be characterized by increased resistance and could have potential properties such has Phragmites australis (Cav.)Trin.However, Phragmites australis (Cav.)Trin proved for purpose as the most suitable crop for several reasons.Therefore, we further verified the various methods of reproduction of Phragmites australis (Cav.)Trin., so that growing of this plant on contaminated soil was the most economically efficient.We have verified the vegetative and generative methods of reproduction, and we came to the conclusion that the generative reproduction appears to be more efficient (also in terms of next generation of biomass).It was subsequently verified the intensity (speed) of growth and biomass production in the process of growth and it has been established that the method of reproduction does not affect the formation of biomass (production of leaves). Other benefits of phytomass used in the process of alkalized soil revitalization towards sustainability Currently our research is focused on the possible use of Phragmites australis as an alternative energy source.This is in line with the current EU objectives of renewable energy use and sustainable development.The EU aims to achieve a higher proportion of energy from renewable sources (for 2020 should so called green energy represent 20 percent of total energy consumption).Increasing share of renewable energy is also one of the main priorities of the Energy Policy of Slovak republic (Ministry of Environment).Although current trends in the use of phyto-biomass for energy purposes move towards the use of cereals, but also other alternative and economically efficient sources are sought.E.g. maize as energy crop has many disadvantages -high inputs, fluctuations in harvest, the risk of soil erosion and limited area for cultivation (Jamriška, P).From traditional crops are best cereals (triticale, rye), including straw, while the straw energy efficiency is higher than the combustion of entire plants.For other crops can be grown for energy purposes e.g.Brassica napus, Helianthus annuus but also, for example, the grasses -Festuca arundinacea, Arrhenatherum elatius, Phragmites communis etc.About othe possibilities of using of biomass deals Bejda et al. (2002);Horbaj (2006), Source : Porvaz, Naščáková, Kotorová, Kováč, 2009 Targeted production of phytobiomass for energy use has a unique importance both for the acquisition of renewable energy, but also for the use of marginal and disadvantaged areas in the regions of Slovakia (Porvaz, Naščáková, Kotorová, Kováč, 2009). 
In the process of using Phragmites australis as an energy crop, however, the possible transfer of contaminants from the alkalized soil through the biomass into the air remains problematic. Solutions to this problem and verification of the energy efficiency of Phragmites australis are the aims of our further research. In verifying the possibility of using Phragmites australis (Cav.) Trin. for energy purposes, we concluded that this crop can be used to produce pellets for combustion, or even briquettes. After such processing, Phragmites australis has a good heat value, comparable with that of lignite, which opens a significant array of possibilities for its use as a biofuel. Furthermore, besides its energy efficiency, the economic feasibility of growing the crop is also obvious. Phragmites australis (Cav.) Trin. might therefore be a determining crop both in the biological revitalization of soils and in the use of renewable energy, with relatively low economic demands related to its cultivation and high economic efficiency of its use as a renewable energy source. The benefits of growing Phragmites australis are also evident in the field of sustainable development, particularly in its environmental and economic dimensions.

Conclusion

Increasing the fertility of contaminated soil requires significant financial funding. Growing Phragmites australis at contaminated sites provides a cheaper and more effective alternative to chemical or technical revitalization of the soil. The biomass of Phragmites australis also provides additional "green" ways of use (e.g., production of green energy), which is desirable in the process of enhancing sustainable development. Soil revitalization using biological processes, such as growing Phragmites australis, at least partially leads towards improving the quality of the environment (i.e., improvement in the environmental dimension of sustainable development) and, secondarily, the quality of life and health of the population (i.e., improvement in the social dimension of sustainable development).

Table 1: Levels of environmental quality in the Slovak Republic (SR). Source: SAŽP. Note: Data for 2010 (the update for 2014 is not yet processed and available).
Figure 1: Map of environmental quality levels in Slovakia. Source: SAŽP.
Table 2: Energy recovery of selected phytomass.
2018-12-12T14:15:16.333Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "521b61b80c492894924e5d9034259b6d07aa279d", "oa_license": "CCBYNC", "oa_url": "http://ojs.ecsdev.org/index.php/ejsd/article/download/244/235", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "521b61b80c492894924e5d9034259b6d07aa279d", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
258966405
pes2o/s2orc
v3-fos-license
Microfluidic Distillation System for Separation of Propionic Acid in Foods A microfluidic distillation system is proposed to facilitate the separation and subsequent determination of propionic acid (PA) in foods. The system comprises two main components: (1) a polymethyl methacrylate (PMMA) micro-distillation chip incorporating a micro-evaporator chamber, a sample reservoir, and a serpentine micro-condensation channel; and (2) and a DC-powered distillation module with built-in heating and cooling functions. In the distillation process, homogenized PA sample and de-ionized water are injected into the sample reservoir and micro-evaporator chamber, respectively, and the chip is then mounted on a side of the distillation module. The de-ionized water is heated by the distillation module, and the steam flows from the evaporation chamber to the sample reservoir, where it prompts the formation of PA vapor. The vapor flows through the serpentine microchannel and is condensed under the cooling effects of the distillation module to produce a PA extract solution. A small quantity of the extract is transferred to a macroscale HPLC and photodiode array (PDA) detector system, where the PA concentration is determined using a chromatographic method. The experimental results show that the microfluidic distillation system achieves a distillation (separation) efficiency of around 97% after 15 min. Moreover, in tests performed using 10 commercial baked food samples, the system achieves a limit of detection of 50 mg/L and a limit of quantitation of 96 mg/L, respectively. The practical feasibility of the proposed system is thus confirmed. Introduction Propionic acid (PA) (CH 3 CH 2 COOH) is a three-carbon short-chain fatty acid formed naturally in the human body through the fermentation of dietary fiber and indigestible carbohydrates by symbiotic bacteria in the colon. It is also widely used in the pesticide, food, plastics, and beverage industries. One of the most common uses of PA is as a preservative in extending the shelf life of baked foods such as bread and cookies. PA is naturally present in the human body and plays an important role in preventing obesity and improving the health condition of diabetes type 2 patients [1,2]. However, an excessive intake of PA is associated with a range of adverse health effects, including cognitive decline [2], gingival inflammation [3], neurotoxicity [4], and the aggravation of autism spectrum disorders (ASD) [5]. Consequently, the concentration of PA additives in foods and beverages must be carefully controlled. For example, the Taiwan Food and Drug Administration (TFDA) stipulates that the concentration of PA in bread and cakes should not exceed 2.5 g/kg (2500 ppm). Many methods are available for separating and quantifying the PA content in food, including high-performance liquid chromatography-UV detection (HPLC-UV detection) [6], gas chromatography-mass spectrometry (GC/MS) [7], and gas chromatography-flame ionization detection (GC-FID) [8]. These methods all require sample pretreatment prior to the separation and detection process. Owing to commonly high fat content and volatile components, the PA in baked foods must be isolated in some way such that its content can be properly determined. The HPLC method is most commonly performed using steam distillation (see Official Method No. 1001900044TFDA of the Taiwan Food and Drug Administration (TFDA), for example). 
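As a purely illustrative aside, the sketch below shows the kind of mass-balance arithmetic involved in relating a distillate concentration measured by HPLC back to the PA content of the original food sample and comparing it with the 2.5 g/kg regulatory limit. All sample-specific numbers are hypothetical; only the roughly 97% separation efficiency figure is taken from the study, and the official TFDA calculation procedure is not reproduced here.

```python
# Simplified, illustrative back-calculation from the HPLC-measured distillate
# concentration to the PA content of the original sample (hypothetical numbers).
measured_extract_mg_per_l = 120.0   # hypothetical HPLC result for the distillate
distillate_volume_ml = 5.0          # hypothetical collected distillate volume
sample_mass_g = 1.0                 # hypothetical homogenized sample mass
separation_efficiency = 0.97        # distillation efficiency reported for the chip

pa_mass_mg = measured_extract_mg_per_l * (distillate_volume_ml / 1000.0)
pa_in_sample_g_per_kg = (pa_mass_mg / separation_efficiency) / sample_mass_g  # mg/g == g/kg
limit_g_per_kg = 2.5                # TFDA limit for bread and cakes

verdict = "within" if pa_in_sample_g_per_kg <= limit_g_per_kg else "exceeds"
print(f"Estimated PA content: {pa_in_sample_g_per_kg:.2f} g/kg ({verdict} the 2.5 g/kg limit)")
```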
However, the distillation process is lengthy (typically around 4~6 h) and requires professional expertise and the use of bulky and specialized equipment. Thus, the distillation procedure is largely confined to modern and well-equipped laboratories. Consequently, the development of alternative distillation methods which allow the PA pretreatment process to be performed in a cheaper, faster, and more straightforward manner is of great interest. The performance of the micro-distillation systems described above is fundamentally dependent on the heat transfer efficiency and temperature distribution within the microfluidic device. Accordingly, many researchers have employed numerical simulation methods to optimize the performance of micro-distillation systems [36][37][38][39][40][41][42]. For example, Stanisch et al. [38] conducted numerical simulations to investigate the effects of the primary processing parameters (e.g., the reflux ratio, evaporation rate, and choice of feed stage) on the performance of a micro-distillation system intended for the separation of ethanol/water feed streams. Overall, the results presented in [37,38] confirmed that numerical simulations provide a versatile and effective approach for the design, characterization, and optimization of microfluidic distillation systems. The present study proposes a microfluidic distillation system consisting of a PMMA microchip and a self-built distillation module for separating the PA content in foods and beverages. The microchip incorporates a micro-evaporator filled with deionized (DI) water, a sample chamber, a serpentine microchannel condenser, and a distillate collection zone. In the distillation process, the microchip is mounted on the side of the distillation module and the evaporator chamber is heated in order to produce steam. The steam flows through a connecting microchannel to the sample chamber, where it vaporizes the homogenized sample to produce PA vapor. The vapor then flows through the serpentine channel, where it is cooled and condensed to produce PA extract. A small quantity of the distilled extract is then transferred to a HPLC and photodiode array (PDA) detector system to determine the corresponding PA concentration.
Micro-Distillation Chip Fabrication In general, different food additives have different characteristics (e.g., different boiling points, densities, chemical properties, and so on) and thus appropriate micro-distillation chip designs are required to maximize the separation efficiency depending on a specific analyte. The simple design of the microchip proposed in the present study lends itself to the use of numerical simulation methods, optimizing not only the design of the micro-distillation chip but also the operating conditions. Figure 1a presents a schematic illustration of the proposed micro-distillation chip consisting of a PMMA cover layer (thickness 1.5 mm), a PMMA chip body layer (thickness 6 mm), and an aluminum foil adhesive layer (thickness 0.3 mm). As shown in Figure 1b,c, the body layer of the microchip incorporates an evaporation chamber, a sample reservoir, a serpentine condensation microchannel, a distillate collection zone, and a release valve. The release valve is designed to prevent excessive pressure in the micro-condensation channel during distillation, to balance the pressure between the micro-condensation channel and the atmosphere, and to prevent the extract solution stock from splashing out. The PMMA layers were designed using commercial AutoCAD software (2011) and fabricated by a CO2 laser ablation system [43]. The cover layer and body layer were joined using a conventional hot-press bonding technique, and the aluminum foil was adhered to the bottom of the body layer to seal the device and improve the heat transfer efficiency within the chip. The finished chip had overall dimensions of 210 mm × 76 mm × 7.8 mm. Compared with the devices proposed in previous studies by the present group [33,34], the proposed microfluidic distillation system has the advantage that the steam required for distillation purposes is generated by an external heating source mounted in the distillation module. Similarly, the cooling effect required to condense the PA vapor in the serpentine coil is also produced by an external system installed in the distillation module. Thus, the size, cost, and complexity of the micro-distillation chip are all reduced. Furthermore, the water required to vaporize the homogenized sample is stored in the chip itself, and hence the need for an external water tank is removed. Finally, the simple design of the distillation chip, together with its low cost (<US$3), renders it suitable for single-use application, thereby eliminating the risk of cross-contamination from samples.
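As a quick consistency check, the three layer thicknesses quoted above should account for the stated overall chip height of 7.8 mm. A minimal sketch of that arithmetic follows (values are those given in the text; the variable names are mine):

```python
# Consistency check on the chip layer stack described above.
# All dimensions (mm) are those stated in the text; names are illustrative.
LAYERS_MM = {
    "PMMA cover layer": 1.5,
    "PMMA body layer": 6.0,
    "aluminum foil adhesive layer": 0.3,
}
OVERALL_MM = (210.0, 76.0, 7.8)  # finished chip: length x width x height

stack_height = sum(LAYERS_MM.values())
print(f"layer stack height: {stack_height:.1f} mm")
print("matches stated chip height:", abs(stack_height - OVERALL_MM[2]) < 1e-9)
```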
Distillation Module Figure 2a shows the self-built distillation module developed in the present study. As shown, the main components include a power supply system, a heater module, a cooler module, and two temperature control panels. The heater module comprised a 15 W pliability heater (TSC0100010gR70.5, King Lung Chin Co., Ltd., Taichung, Taiwan) with a maximum temperature capability of 180 °C mounted on a solid copper block with dimensions of 52 mm × 30 mm × 10 mm. The cooling block consisted of a commercial cooling module (72041/071/150B, Ferrotec Taiwan Co., Ltd., Hsinchu, Taiwan) with a power of 10 W, a minimum temperature capability of 4 °C, and a size of 110 mm × 40 mm × 10 mm. The module casing was made of ABS using a 3D printer (Kingssel K3040, Mastech Machine Co., Ltd., New Taipei City, Taiwan) and measured 200 mm × 100 mm × 65 mm. As shown in Figure 2b, in the distillation process, the micro-distillation chip was clipped to the side of the micro-distillation module and positioned such that the evaporator chamber and micro-condensation channel were aligned with the heater and cooler modules, respectively. On completion of the distillation process, the device was removed from the module and a small quantity of distillate was retrieved from the collection zone and transferred to a cuvette for HPLC-PDA determination of the PA concentration. Experimental Details The reagents employed in the present study included phosphoric acid (H3PO4, 85~87%, J. T. Baker, Phillipsburg, NJ, USA), PA (CH3CH2COOH, Nippon Reagent Industry Co., Ltd., Osaka, Japan, boiling point: 141 °C), and ammonium dihydrogen phosphate ((NH4)H2PO4, Showa Kako Corp., Osaka, Japan). All the chemicals were of reagent grade, and DI water with a resistance of 18.2 MΩ was used throughout. A 1 M phosphoric acid solution was prepared by diluting 67.4 mL phosphoric acid in 1000 mL DI water. 1 g of PA was dissolved in 100 mL of DI water and then diluted with the 1 M phosphoric acid solution as required to produce control samples with concentrations of 50~3000 mg/L. 1.5 g of diammonium hydrogen phosphate was dissolved in 1000 mL of DI water and adjusted to pH 3 through the addition of phosphoric acid to serve as the mobile phase for the HPLC determination process. To determine the PA concentration of the real food samples, 5 g of each food was homogenized by a commercial machine, and 0.1 g of the homogenized sample was dissolved in 1 mL DI water for 15 min distillation in the micro-distillation chip. Following the distillation process, the PA content of the sample was determined via the HPLC-PDA system in accordance with the official method published by the Taiwan Food and Drug Administration (TFDA, No. 1001900044). For comparison, the PA content of the food samples was also evaluated using a traditional benchtop steam distillation apparatus followed by HPLC-PDA separation and detection in accordance with the same TFDA No. 1001900044 method.
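The control-sample preparation above reduces to simple dilution arithmetic: 1 g of PA in 100 mL of DI water gives a 10,000 mg/L stock, which is diluted to the 50~3000 mg/L targets with the 1 M phosphoric acid solution. A minimal sketch follows; the 10 mL working volume per control is an illustrative assumption, as the prepared volumes are not stated in the text.

```python
# Dilution arithmetic for the PA control samples described above.
# The 10 mL working volume per control is an assumed, illustrative value.
STOCK_MG_PER_L = 1.0 / 0.100 * 1000.0       # 1 g PA in 100 mL DI water -> 10,000 mg/L
TARGETS_MG_PER_L = [50, 500, 1000, 1500, 2500, 3000]
WORKING_VOLUME_ML = 10.0                     # assumed final volume of each control

for target in TARGETS_MG_PER_L:
    # C1 * V1 = C2 * V2  =>  V1 = C2 * V2 / C1
    stock_ml = target * WORKING_VOLUME_ML / STOCK_MG_PER_L
    diluent_ml = WORKING_VOLUME_ML - stock_ml   # made up with 1 M phosphoric acid
    print(f"{target:>4} mg/L control: {stock_ml:.2f} mL stock + {diluent_ml:.2f} mL diluent")
```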
In the distillation process, the temperature and cooling modules of the distillation unit were set to 150 °C and 20 °C, respectively, and 4.5 mL of DI water was injected into the micro-evaporation chamber of the chip. 1 mL of homogenized sample (containing 0.1 g of the original sample) was placed in the sample reservoir. The injection inlets of the evaporation chamber and sample chamber were both sealed with heat-resistant tape. The microchip was then clipped to the side of the distillation module (as shown in Figure 2b). During the distillation process, the steam produced in the evaporation chamber flowed into the sample chamber, prompting the generation of PA vapor. The vapor flowed through the cooled serpentine channel, where it condensed and then entered the distillate collection zone due to gravity and the steam driving force. The distillation process was stopped after 15 min (as discussed later in Section 3). The distillate was retrieved from the collection zone, and its pH value was adjusted to about 3.0 through the addition of 1 M phosphoric acid solution. 25 µL of the test solution (according to TFDA, No. 1001900044) was taken for HPLC-PDA analysis. The HPLC procedure was conducted on a Shimadzu LC-20AT system equipped with a 5-µm reversed-phase chromatography column (Agilent ZORBAX Eclipse Plus C18, 0.46 cm × 25 cm). The separation process was performed using 0.15% disodium hydrogen phosphate as the mobile phase with a flowrate of 1.2 mL/min. PDA detection was then performed using ultraviolet light with a wavelength of 214 nm.
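For reference, the distillation and chromatographic settings described above can be collected into a single settings record. The sketch below does exactly that; the field names are mine, and all values are those reported in the text.

```python
# Distillation and HPLC-PDA settings reported above, collected into one record.
# Field names are illustrative; values are those stated in the text.
METHOD_SETTINGS = {
    "micro_distillation": {
        "heater_setpoint_C": 150,
        "cooler_setpoint_C": 20,
        "di_water_volume_mL": 4.5,
        "sample_volume_mL": 1.0,
        "distillation_time_min": 15,
    },
    "hplc_pda": {
        "system": "Shimadzu LC-20AT",
        "column": "Agilent ZORBAX Eclipse Plus C18, 0.46 cm x 25 cm, 5 um",
        "mobile_phase": "0.15% phosphate solution, pH 3",
        "flow_rate_mL_per_min": 1.2,
        "injection_volume_uL": 25,
        "detection_wavelength_nm": 214,
    },
}

print(METHOD_SETTINGS["hplc_pda"]["detection_wavelength_nm"])  # 214
```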
Results In general, numerical simulations provide an efficient means of optimizing the design of micro-distillation chips and exploring the corresponding flow field, steam temperature, and distillation efficiency [44,45]. In the present study, the flow field and steam temperature distribution within the micro-distillation chip were examined by ANSYS FLUENT simulations. (Note that full details of the numerical method and solution procedure are described elsewhere [33,34].) As shown in Figure 3a, a vortex structure was formed as the vapor stream entered the sample reservoir after being accelerated through the connecting microchannel. The vortex structure perturbed the sample within the chamber, thereby improving the vaporization efficiency. Figure 3b shows the simulated temperature distribution within the microchip. In general, the results confirm that a temperature setting of 150 °C for the micro-evaporator chamber is sufficient to prompt the vaporization of the PA, while a cooling temperature of 20 °C is sufficient to condense the vapor and produce PA distillate in the collection zone. Overall, the simulation results substantiate the ability of the micro-distillation chip to accomplish the distillation and condensation operations required to separate the PA content of the homogenized sample prior to HPLC-PDA determination. The average temperatures of the sample reservoir and distillate outlet of the micro-distillation chip were measured experimentally using thermocouples and were found to be 101.8 °C and 20.5 °C, respectively. The simulated temperature values (i.e., 100.2 °C and 20.1 °C, respectively) deviated by no more than 2.5% from the experimental measurements. Thus, the basic validity of the numerical model was confirmed.
For calibration purposes, six control solutions with known PA concentrations in the range of 50~3000 mg/L were prepared. For each sample, the distillation (separation) efficiency was evaluated by Equation (1).
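Equation (1) itself did not survive the text extraction, so only its role is visible here. A plausible reading, consistent with how the efficiency is discussed in the following paragraphs, is the percentage of the PA loaded into the sample reservoir that is recovered in the distillate; the sketch below is written under that assumption and uses illustrative numbers only.

```python
# Sketch of a distillation (separation) efficiency calculation. Equation (1) is
# not reproduced in the extracted text; the mass-recovery ratio below is an
# assumed but common definition, and the example values are illustrative.
def distillation_efficiency(c_distillate_mg_L: float, v_distillate_mL: float,
                            c_loaded_mg_L: float, v_loaded_mL: float) -> float:
    recovered_mg = c_distillate_mg_L * v_distillate_mL / 1000.0
    loaded_mg = c_loaded_mg_L * v_loaded_mL / 1000.0
    return 100.0 * recovered_mg / loaded_mg

# e.g., a 1000 mg/L, 1 mL control recovered as ~1 mL of 970 mg/L distillate
print(f"{distillation_efficiency(970.0, 1.0, 1000.0, 1.0):.1f} %")  # 97.0 %
```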
Figure 4 shows the variation in the distillation efficiency over time for the control sample with a PA concentration of 1000 mg/L. As shown, the efficiency increases initially with an increasing distillation time, which can be attributed to the increased amount of steam produced in the evaporation chamber as the heating proceeds. Thus, a greater amount of acid vapor is generated in the sample chamber and flows through the condensation channel. However, as the heating time is further increased, the DI water in the evaporation chamber is gradually consumed. Consequently, the quantity of PA vapor reduces, and the distillation efficiency saturates at an approximately constant value. The maximum distillation efficiency is around 97% and is obtained after 15 min. Consequently, the distillation time was set as 15 min in all of the remaining distillation experiments. Figure 5 shows the experimental results for the variation of the distillation efficiency with the condensation channel length. As the channel length first increases, the distillation efficiency also increases, since the time for which the PA vapor is exposed to the low temperature condition (20 °C) increases. However, as the channel length increases beyond 50 cm, the driving force provided by the steam is insufficient to push the distillate through the channel and into the collection zone, and therefore the distillation efficiency drops. Accordingly, the optimal condensation channel length was determined to be 50 cm. Figure 6 shows the variation in the distillation efficiency with the volume of DI water injected into the micro-evaporator chamber of the chip. Note that the results correspond to the 1000 mg/L control sample with a volume of 1 mL. As the amount of DI water increases, the volume of steam vapor generated over the distillation process also increases, and hence a greater amount of distillate is obtained in the collection zone. However, for 5 mL of water, the entire sample is distilled within 15 min and the distillation efficiency saturates. Accordingly, the injection volume of DI water was set as 4.5 mL and the sample volume as 1 mL in all of the remaining experiments.
The feasibility of the proposed microfluidic distillation system was investigated by measuring the PA concentrations of the six control samples with known concentrations of 50 mg/L, 500 mg/L, 1000 mg/L, 1500 mg/L, 2500 mg/L and 3000 mg/L, respectively. For comparison, the concentrations were also measured using the official HPLC-PDA detection method with a benchtop steam distillation apparatus. Figure 7 compares the measurement results obtained by the two methods. The high correlation coefficient (R2 = 0.9971) indicates a good agreement between the two sets of results. Moreover, six different PA concentrations (50 mg/L, 500 mg/L, 1000 mg/L, 1500 mg/L, 2500 mg/L and 3000 mg/L) were added to PA-free breads. The analytical accuracy of the proposed microfluidic distillation system and HPLC-PDA detector system is 96.7 ± 1.8%. (Note that the accuracy was evaluated using Equation (1).) Thus, the basic feasibility of the proposed system was validated.
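Two figures of merit appear in the comparison above: the squared correlation between the microchip and benchtop results, and the mean recovery on PA-spiked, PA-free bread. A minimal sketch of both calculations follows; the helpers are generic and do not reproduce the study's data.

```python
# Generic helpers for the two figures of merit discussed above: R^2 between the
# microchip and benchtop results, and percent recovery on spiked samples.
from statistics import mean, stdev

def r_squared(reference: list[float], measured: list[float]) -> float:
    # Squared Pearson correlation between paired measurements.
    mx, my = mean(reference), mean(measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, measured))
    sxx = sum((x - mx) ** 2 for x in reference)
    syy = sum((y - my) ** 2 for y in measured)
    return (sxy * sxy) / (sxx * syy)

def recovery_stats(spiked_mg_L: list[float], found_mg_L: list[float]) -> tuple[float, float]:
    # Percent recovery at each spiked level, summarized as mean and standard deviation.
    recoveries = [100.0 * f / s for s, f in zip(spiked_mg_L, found_mg_L)]
    return mean(recoveries), stdev(recoveries)
```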
The practical applicability of the proposed system was verified by detecting the PA concentrations of 10 real-world baked food samples acquired from convenience stores in Taiwan (see Table 1). For each sample, the pretreatment process was performed using the microfluidic distillation system (as described in Section 2.3), and the PA content was then evaluated using the HPLC-PDA method listed in Official Method No. 1001900044 of the Taiwan Food and Drug Administration (TFDA). For comparison purposes, the PA content of each sample was also evaluated by the Center for Agriculture and Aquaculture Product Inspection and Certification (CAAPIC) at National Pingtung University of Science and Technology (NPUST) in Taiwan, using the benchtop steam distillation, separation, and detection procedures with the same official method. In the case of the micro-distillation process, the reliability of the measurement results was ensured by testing each food sample five times, using a newly homogenized sample on each occasion. As shown in Table 1, no PA was detected in Samples #4, #8, or #9 using the micro-distillation chip. Thus, it was inferred that these samples either contained no PA, or had a PA concentration lower than the limit of detection (LOD) of the proposed device. For the official method conducted by CAAPIC, no PA was detected in Samples #4, #8, or #9, or in Sample #2. Taking the detection results obtained using the exact official HPLC method as a benchmark, the detection accuracy of the proposed micro-distillation system was quantified by Equation (2). As shown in Table 1, the detection accuracy varies from 96.8% (Sample 10) to 99.2% (Sample 3). In other words, the accuracy deviates from that of the official method by no more than 3.2%. Moreover, the proposed method has an LOD of 50 mg/L and an LOQ of 96 mg/L. Finally, the proposed distillation method requires just 0.1 g of sample for determination purposes, whereas the exact official method requires more than 100 g. Thus, the proposed system has significant benefits over the traditional method for the real-world determination of the PA concentration in baked food products. Table 2 presents a qualitative comparison of the microfluidic distillation system and detection method proposed in the present study with other PA detection methods reported in the literature. The micro-distillation method developed in this study not only achieves an outstanding recovery, but also shortens the sample pretreatment time significantly. Moreover, the developed method requires only a very small amount of test sample and analytical reagents. These combined advantages would make the developed micro-distillation chip a suitable tool for a large-scale market sampling survey of PA in foods.
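Equation (2) is likewise not reproduced in the extracted text. The sketch below assumes it expresses agreement with the official benchmark as a percentage, and it adds the common 3.3σ/slope and 10σ/slope convention for estimating LOD and LOQ, which is not necessarily the exact procedure the authors followed.

```python
# Sketch of the benchmarking arithmetic discussed above. Equation (2) is not
# reproduced in the extracted text, so the relative-agreement form below is an
# assumption; the LOD/LOQ helper uses the common 3.3*sigma/slope and
# 10*sigma/slope convention, which may differ from the authors' procedure.
def detection_accuracy_percent(chip_mg_L: float, official_mg_L: float) -> float:
    return 100.0 * (1.0 - abs(chip_mg_L - official_mg_L) / official_mg_L)

def lod_loq(blank_sd_mg_L: float, calibration_slope: float) -> tuple[float, float]:
    lod = 3.3 * blank_sd_mg_L / calibration_slope
    loq = 10.0 * blank_sd_mg_L / calibration_slope
    return lod, loq
```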
Conclusions This study has presented a microfluidic distillation system to facilitate the determination of the PA concentration in baked foods. The proposed system consists of a PMMA-based micro-distillation chip and a self-built distillation module with heating and cooling components. By retrieving the distillate from the sample, the PA concentration is determined using a conventional HPLC-PDA system. The proposed microfluidic distillation system provides several important advantages over a traditional benchtop apparatus, including a higher throughput, a reduced sample and reagent consumption, a lower power consumption, a minimal risk of cross-contamination, greater portability, and a lower fabrication cost. The experimental results have shown that the microfluidic distillation system achieves a distillation efficiency of 97% in 15 min. Moreover, the detection results obtained for control samples with known PA concentrations in the range of 50~3000 mg/L have been shown to be in excellent agreement (R2 = 0.9971) with those obtained using an official HPLC-PDA detection method with a traditional benchtop steam distillation process. Finally, the detection results obtained for 10 real-world baked food products have shown that the proposed system has an LOD of 50 mg/L and an LOQ of 96 mg/L. The system thus outperforms the official distillation method employed in the present study (LOQ = 500 mg/L) and provides a rapid and feasible approach for practical PA determination in foods. Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
2023-05-30T15:02:56.547Z
2023-05-28T00:00:00.000
{ "year": 2023, "sha1": "88ab7f0af8ad25f38a90253fd92c5abe8dcb505f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/14/6/1133/pdf?version=1685261719", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c3c180663ae830c44703a6def451e474a24aef1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
18503531
pes2o/s2orc
v3-fos-license
Phenotypic and Genotypic Characterization of Enteroaggregative Escherichia coli Strains Isolated From Diarrheic Children in Iran Background: Several studies performed in developed and developing countries have identified enteroaggregative Escherichia coli (EAEC) as the emerging cause of pediatric diarrhea. Objectives: This study investigated the phenotypic and genetic characteristics of EAEC strains isolated from children with diarrhea between 2007 and 2008 in Tehran, Iran. Materials and Methods: EAEC strains were examined for virulence plasmid genes (aap, aggR, and aatA), biofilm formation, and drug resistance. In addition, pulsed-field gel electrophoresis (PFGE) profiles of these strains were determined. Results: A significant percentage of the local EAEC strains carried the virulence plasmid genes and formed biofilms. In addition, these strains showed high resistance to ampicillin (100%), tetracycline (65.7%), streptomycin (58.7%), chloramphenicol (52.6%), and trimethoprim/sulfamethoxazole (51.7%) and had different PFGE patterns. Conclusions: These results indicated that EAEC strains isolated from Iranian children with diarrhea were heterogeneous and showed high resistance rates against commonly used antibiotics, similar to those reported in studies performed in other countries. Background Diarrheagenic Escherichia coli are the most common cause of bacterial diarrhea in infants in developing countries (1). Several studies performed in developed and developing countries worldwide have identified enteroaggregative Escherichia coli (EAEC) strains as the emerging and most common cause of pediatric diarrhea (2)(3)(4). The HEp-2 or HeLa cell adherence assay is the gold standard test for identifying EAEC strains, in which EAEC strains show a characteristic stacked-brick aggregative adherence (AA) pattern (5). However, this technique requires special expertise and facilities and is time consuming. Therefore, it is only performed in a few laboratories worldwide, thus limiting both the diagnosis of and epidemiological studies on EAEC (6). A recent study described a molecular method, multiplex PCR (mPCR), to identify EAEC strains by detecting 3 virulence plasmid genes, namely, aap, aggR, and aatA (6). The cell adhesion properties of some EAEC strains are attributed to a 60- to 65-MDa plasmid, pAA. The pAA plasmid contains genes encoding several virulence factors such as AA fimbriae (AAF/I, AAF/II, and AAF/III), dispersin (Aap), the transcriptional activator AggR, plasmid-encoded toxin, and heat-stable toxin 1 (7). EAEC strains adhere to the mucosal surface of the small and large intestines and stimulate mucus secretion, thus resulting in the formation of a thick, aggregating biofilm (8). Biofilm formation restricts the penetration of antimicrobial agents, decreases the growth rate of EAEC strains, and increases the possibility of the expression of resistance genes. For these reasons, biofilm colonies of EAEC cannot be easily eradicated using bactericidal antibiotics (9). Several methods such as pulsed-field gel electrophoresis (PFGE), random amplified polymorphic DNA (RAPD), and multilocus sequence typing (MLST) have been used for determining the molecular epidemiology of EAEC strains (7,10,11). PFGE is a powerful tool for determining the clonal identity of bacteria and for obtaining information to understand and control the spread of diseases (12). PFGE is the gold standard technique for typing many bacterial species, including E. coli, but not for typing some species such as Mycobacterium tuberculosis (13,14).
Objectives Because the cell adherence assay requires special expertise and is expensive and time consuming, we investigated the utility of mPCR for detecting EAEC strains. Few studies have performed molecular typing of, or investigated the virulence characteristics of, EAEC strains isolated from children in Iran. To our knowledge, this is the first study to use PFGE for determining the clonal relatedness of EAEC strains isolated from Iranian children with diarrhea. Thus, this study aimed to investigate the phenotypic and genotypic characteristics of EAEC strains isolated from Iranian children with diarrhea who were referred to the Children's Medical Center in Tehran, Iran. Bacterial Strains This study included 170 EAEC strains obtained from a culture collection center at the Molecular Biology Unit, Pasteur Institute of Iran. These strains were originally isolated from children with diarrhea aged below 5 years who were admitted to the Children's Medical Center hospital, Tehran, Iran, during 2007-2008. All the strains had been previously characterized as EAEC by performing the HeLa cell adherence assay. These strains were maintained at -80°C in Luria broth (HiMedia, India) supplemented with 20% glycerol (Merck, Germany). This study was approved by the Research Ethics Committee of the Pasteur Institute of Iran (no. 4312). Multiplex PCR Sequences of primers used for mPCR were based on those described previously (6). EAEC strains (17-2 and O42) and E. coli K12 were used as positive and negative controls, respectively. Multiplex PCR was performed in a 25-µL reaction mixture containing the extracted plasmid DNA as the template, 200 μmol dNTPs, 20 pmol of primers (Takapouzist, Iran) against aatA, 15 pmol of primers against aggR, 10 pmol of primers against aap, 0.75 µL MgCl2, 1 U Taq polymerase (Gibco, UK), and 2.5 µL 10× PCR buffer (Gibco, UK). PCR was performed in a MasterCycler Gradient (Eppendorf, Germany) using the following conditions: initial denaturation at 94.5°C for 3 minutes; 30 cycles of denaturation at 94.5°C for 1 minute, annealing at 50°C for 1 minute, and extension at 72°C for 1.5 minutes; and final extension at 72°C for 8 minutes. PCR products were electrophoresed on 1% agarose gel, and amplicons of the correct size were considered positive. Biofilm Assay The biofilm formation test was performed according to a method described by Wakimoto et al. (17) with some modifications. Briefly, 200 µL Mueller-Hinton broth (Merck, Germany) supplemented with 0.45% glucose was added to 96-well flat-bottom microtiter polystyrene plates (Greiner, Germany) and was inoculated with 5 µL of EAEC culture grown overnight in Luria broth at 37°C with shaking. The samples were incubated overnight (18 hours) at 37°C and were visualized by staining with 0.5% crystal violet (Sigma-Aldrich, Germany) for 5 minutes after washing with water. Biofilm formation was quantified in duplicate by adding 200 µL of 95% ethanol and by using an enzyme-linked immunosorbent assay plate reader (BioTek Instruments, Winooski, VT) at 570 nm. EAEC strain 042 was used as a positive control, and E. coli HB101 was used as a negative control. Strains with an OD at 570 nm of more than 0.2 were regarded as biofilm producers (biofilm-positive strains) according to a previous study (17).
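The biofilm scoring rule above is a simple threshold on the crystal-violet absorbance. A minimal sketch of that classification follows; the 0.2 cut-off and the duplicate readings are from the text, while the strain names and OD values are illustrative placeholders.

```python
# Classify strains as biofilm producers using the OD570 > 0.2 cut-off described
# above. Strain identifiers and readings are illustrative placeholders only.
BIOFILM_CUTOFF_OD570 = 0.2

def is_biofilm_producer(od570_replicates: list[float]) -> bool:
    # The assay was read in duplicate; use the mean of the replicate wells.
    return sum(od570_replicates) / len(od570_replicates) > BIOFILM_CUTOFF_OD570

readings = {"EAEC_042_positive_control": [1.12, 1.05], "strain_X": [0.15, 0.18]}
for strain, ods in readings.items():
    status = "biofilm-positive" if is_biofilm_producer(ods) else "biofilm-negative"
    print(strain, status)
```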
Pulsed-Field Gel Electrophoresis Thirty-one EAEC strains carrying the virulence plasmid genes were selected as representative strains and were typed using PFGE to investigate their clonal relationships, according to a protocol described by Zhao et al. (18). The DNA was digested with XbaI, and the resulting fragments were resolved by contour-clamped homogeneous electric field electrophoresis with the CHEF Mapper system (Bio-Rad, USA) in autoalgorithm mode on 1% PFGE-grade agarose gel (Bio-Rad) in 0.5× TBE (44.5 mM Tris-HCl, 44.5 mM boric acid, and 1.0 mM EDTA [pH 8.0]) at 6 V/cm for 18 hours at 14°C. The gels were stained with ethidium bromide (30 mg/L) and were digitized for computer-aided analysis. Chromosomal DNA (225-2,200 kb; Bio-Rad) of Saccharomyces cerevisiae was used as a DNA size marker. Images of PFGE patterns were clustered using GelCompar II software (Applied Maths, Belgium). The similarity percentage was determined using the Dice coefficient. Strains were considered to be clonally related if their Dice coefficient of correlation was ≥80% (19,20). Multiplex PCR The mPCR assay detected the 3 virulence plasmid genes, namely, aap, aggR, and aatA, in the examined EAEC strains. A representative agarose gel of the mPCR assay for control strains and some EAEC strains is shown in Figure 1. Of the 170 EAEC strains that were previously characterized on the basis of AA to HeLa cells, 114 (67%) strains yielded positive results in the mPCR assay and had at least one virulence plasmid gene. The remaining 56 (33%) strains yielded negative results in the mPCR assay. The frequency of aap, aggR, and aatA was 67%, 64.7%, and 47%, respectively. Biofilm Formation Biofilm formation is more common among EAEC strains than among other E. coli pathotypes (17). Therefore, we compared mean biofilm formation between EAEC and EPEC strains. The biofilm formation test was performed for all EAEC strains that yielded positive results in the mPCR assay and for 40 EPEC strains. In all, 73 (64%) EAEC strains produced biofilm, with a mean biofilm production of 0.857 and a standard deviation of ± 0.763. Of the 40 EPEC strains, 15 strains (37.5%) produced biofilm, with a mean biofilm production of 0.698 and a standard deviation of ± 0.926. The PFGE dendrogram showed 3 types, namely, A, B, and C. Type A included 11 EAEC strains, all of which could form biofilms. Type B was the largest type and included 16 strains, of which 9 produced biofilm. This PFGE type also included EAEC strain 042. Type C was smaller than the other 2 PFGE types and included 4 strains, of which 3 could form biofilms. Thus, only PFGE type A included all biofilm-producing EAEC strains; the other 2 PFGE types included both biofilm-producing and biofilm-nonproducing EAEC strains. Almost all the EAEC strains examined carried aap and aggR. Twenty-two strains in the PFGE dendrogram carried aatA, of which 12 were included in PFGE type B, 7 were included in PFGE type A, and 3 were included in PFGE type C. EAEC strains in the 3 PFGE types showed different antibiotic resistance profiles (Figure 2).
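Clustering of the PFGE profiles rests on the Dice coefficient with an 80% relatedness cut-off. A minimal sketch of that band-matching calculation follows; the band sizes are illustrative, and real analyses match bands within a position tolerance rather than by exact equality.

```python
# Dice similarity between two PFGE band patterns, with the >= 80% cut-off used
# above to call strains clonally related. Band sizes (kb) are illustrative only;
# GelCompar-style analyses match bands within a tolerance, not by exact equality.
def dice_similarity(bands_a: set[float], bands_b: set[float]) -> float:
    shared = len(bands_a & bands_b)
    return 100.0 * 2 * shared / (len(bands_a) + len(bands_b))

def clonally_related(bands_a: set[float], bands_b: set[float], cutoff: float = 80.0) -> bool:
    return dice_similarity(bands_a, bands_b) >= cutoff

strain_1 = {48.5, 97.0, 145.5, 194.0, 291.0}
strain_2 = {48.5, 97.0, 145.5, 242.5, 291.0}
print(f"Dice similarity: {dice_similarity(strain_1, strain_2):.1f}%")  # 80.0%
print("clonally related:", clonally_related(strain_1, strain_2))       # True
```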
Discussion EAEC cause acute and persistent diarrhea, mainly in children, in both developing and developed countries. The pathogenesis of infections caused by EAEC is not well understood. Several studies have shown that EAEC strains exhibit considerable heterogeneity (10,(21)(22)(23)(24). Only a few studies have been performed on EAEC strains isolated in Iran (25,26). In the present study, EAEC strains were characterized using different phenotypic and genotypic methods. To our knowledge, this is the first study in Iran to investigate the PFGE profiles of, and biofilm production by, EAEC strains. We observed that 67% of EAEC strains yielded positive results in the mPCR assay. The frequency of aap, aggR, and aatA was 67%, 64.7%, and 47%, respectively. The remaining 33% of strains yielded negative results in the mPCR assay. A study by Cerna et al. (6) and our previous studies showed that 14% and 18.9% of EAEC strains, respectively, yielded negative results in the mPCR (27). Thus, mPCR cannot detect some EAEC strains. Nevertheless, the use of mPCR increases both the sensitivity and specificity of EAEC detection, which may help in the early diagnosis of infections caused by these bacteria (6). The HeLa cell adherence assay requires special expertise and is time consuming and expensive. In contrast, the mPCR assay is inexpensive and quick (6). In developing countries such as Iran that have limited resources, the mPCR assay may be useful for monitoring diarrheagenic E. coli. The results of the present study are consistent with those of several previous studies, which indicated that EAEC strains showed high antibiotic resistance and different antibiotic resistance patterns (7,10,11,25). In the present study, only 3 antibiotic resistance patterns were observed among the EAEC strains examined. These findings are consistent with those of a study by Kahali et al. (7) that showed different antibiotic resistance patterns among EAEC strains. However, the high resistance rates of diarrheagenic E. coli, including EAEC, against commonly used antibiotics such as ampicillin, trimethoprim/sulfamethoxazole, and tetracycline are concerning because they may lead to treatment failure (28). Bangar and Mamatha (29) and Wakimoto et al. (17) reported that all EAEC strains formed biofilms. However, only 64% of the strains examined in our study produced biofilm. In addition, we observed a significant relation between EAEC strains and biofilm production (P < 0.05). Similar to that observed in previous studies (3,7,11), the EAEC strains examined in the present study had different PFGE patterns, indicating that they were genetically heterogeneous. Only some strains had identical PFGE patterns, suggesting that these strains were epidemiologically related. The differences in the PFGE patterns of EAEC strains examined in the present study and those reported in previous studies could be because of the different ancestral origins of EAEC strains in each country (28). Because the EAEC strains examined in this study showed different PFGE and antibiotic resistance patterns, no association could be determined between PFGE patterns and antibiotic resistance patterns. These results indicated that the local EAEC strains examined in this study were heterogeneous, which was consistent with that observed in other regions around the world. The PFGE patterns of the strains included in the present study also confirmed this heterogeneity and were consistent with other phenotypic studies performed on these strains. However, these findings did not provide information on the pathogenesis and the role of these strains in pediatric diarrhea.
2017-04-29T17:47:07.820Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "17efdae9b67502a4b91e9e2f60023bda0b19e284", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4609111?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "17efdae9b67502a4b91e9e2f60023bda0b19e284", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235270825
pes2o/s2orc
v3-fos-license
The Classroom Impact of Trained Special Needs Education Teachers in Selected Schools: An Evaluation Study This study sought to find out the factors that influence the classroom performance of teachers trained in special needs education. It was conducted between January and June 2019, involving a target population which comprised 3 government universal primary schools, with a total of 94 teachers and 2,386 learners. Study samples were selected, involving special needs education teachers (N = 73) and LwDs and OSNs (N = 30). The purposive sampling method was used to choose the required samples. A descriptive study design involving a qualitative approach was used. Open-ended questionnaires and interview guides were used for collecting data on the critical role that teachers play in supporting LwDs and OSNs who experience barriers to learning under an inclusive setting. One of the findings reveals that class size poses a serious challenge to teachers who are not well trained when they have LwDs and OSNs in large classes. Another finding indicates that teachers face challenges with the way the curriculum is designed, which poses a challenge to them on how best to handle it. It is also found that teachers face a challenge in managing the average class number (teacher-learner ratio). It is also found that the classroom environment, resources, and the implementation of policies on education for LwDs and OSNs have both a direct and an indirect influence on the teachers' impact in class. Based on the findings, recommendations were made that relevant authorities should increase support for teacher training and retraining for LwDs and OSNs, that curriculum modification should be done regularly, that classroom environments need regular improvement to be more disability friendly, and, lastly, that the implementation of policies on disabilities and other special needs should be carried out on a regular basis. INTRODUCTION It was out of personal, as well as professional, interest that the researchers opted to carry out this study. Being involved in teacher training for learners with disabilities (LwDs), others with special needs (OSNs) and ordinary learners, the researchers have always maintained a strong belief in the efficacy of improvement in teaching and learning of all categories of learners. Learners with disabilities are those individuals with impairments (sensory or bodily damage caused by diseases, genetic disorders, or injuries) that may comprise visual, hearing, physical, and intellectual impairments. These impairments can be acquired before, during or after birth. Each of the impairments can manifest itself in an individual in mild, moderate or severe forms. An impairment prevents an individual from performing a specific function in the usual way. For example, an individual with a visual impairment loses the visual function; thus, such a person becomes disabled. Similarly, a person with hearing impairment is one who has lost the function of hearing. Society tends to refer to such people as disabled people. This kind of terminology rather sounds derogatory and discriminatory. Instead, such people are better referred to as persons with disabilities (PwDs). Impairment leads to disability, and if societal or self-directed negative attitudes are added, then an individual becomes handicapped. Nowadays, individuals should never be referred to as handicapped anywhere in this world.
Special needs, on the other hand, is rather a wide concept covering different conditions, such as the impairments already mentioned, slow learning, being exceptionally talented and gifted, difficulties with spellings, communication, social emotional difficulties, behavioural problems, health problems (such as asthma, diabetes, sickle cell anaemia, and the like), being disadvantaged, living in streets, being homeless, being orphaned, to mention but a few. The term "ordinary" here may seem a bit disconcerting in itself, but it was felt better to use it than the term "normal" learners, as is often used in schools by virtually everybody. It was also under the second context, whereby the demand for education, especially in developing countries, has been increasing sharply over the years without a matching supply, that educators like the researchers on this study became concerned and were compelled to continuously conduct a study of this nature. Education is a right for everybody, no matter what ability or disability, as enshrined in the constitutions of many countries and also addressed under other international commitments (such as: United Nations [UN], 1948, 1993, 2006; United Nations Educational, Scientific and Cultural Organisation [UNESCO], 1990, 1994; African Union [AU], 2000). The third context under which the researchers were compelled to conduct this study was attributed to the examination-oriented syndrome of the Ugandan education system. Uganda, like many other countries, has set the standard age of six (6) years as the school-going age for young children. At the age of 6 years a child joins Primary (P.) 1, at 7 years P.2, at 8 years P.3, at 9 years P.4, at 10 years P.5, at 11 years P.6, and at 12 years P.7. At the end of P.7 all children sit for a national examination, known as the Primary Leaving Examinations (PLE). Successful candidates who get first class grades have the opportunity to join the so-called good secondary schools for four (4) years. At the end of this cycle the learners are again subjected to another national examination (referred to as the Ordinary Level examination, or Uganda Certificate of Education, UCE). Successful candidates at UCE will join higher secondary (Advanced Level) schools for two (2) years, after which they sit for A-Level (Uganda Advanced Certificate of Education, UACE). Successful candidates on these examinations enter university to pursue undergraduate bachelor's degree courses, which take from three (3) to seven (7) years to complete. Because of the tendency to repeat primary classes, some children leave the primary education cycle when they have clocked anything between 13 and 14 years of age. Many of these students who get poor grades drop out of school. LwDs and OSNs have a slim chance of doing very well in the PLE. It is a formidable task for such categories of learners to cope with the heavily academically oriented type of learning that has been going on in Uganda, with learners virtually being expected to produce academic results as if they were robots, while schools operate as if they were factories for producing facts. Some schools are graded as good, while others are labelled as poor schools. A school labelled a good school is the type where, for instance, three hundred (300) children may register for PLE, go through rigorous coaching and, when results are out, about 284 of them may pass in grade A, while only 6 may get a strong grade B.
This makes it appear as if education is treated as a sub-set of examination, and not the other way round. In light of all these and the related aspects in the background, the researchers deemed it necessary to carry out this study. The purpose (aim) of this study, therefore, was to find out the factors that influence the classroom performance of teachers trained in special needs education. Specific research questions posed were: (a) Of what concern is class size to teachers who are not properly trained when they have LwDs and OSNs in large classes? (b) In what ways do teachers work to complete the curriculum to the satisfaction of all categories of learners? (c) How do teachers manage the average class number (teacher-learner ratio)? (d) In what ways does a classroom environment pose a challenge to teachers' direct management of the teaching of the different categories of learners under an inclusive setting? (e) How does the availability of resources concern teachers' performance in promoting teaching and learning? LITERATURE REVIEW Whereas by the turn of the 1980s fewer LwDs and OSNs attended ordinary schools, as asserted by Miles (2011), from the beginning of the 1990s the state of affairs began to take a new trend. The reasons for these changes were, inter alia, the increased social awareness among the population the world over. Secondly, disability movement advocates took centre stage to ensure that the implementation of education for LwDs and OSNs alongside ordinary learners was given the priority it deserved (Mwangala, 2013). Some of the key roles played by the proponents of education for learners with disabilities and other special needs, as well as for ordinary learners (Klibthong, 2015), included instilling social skills among LwDs and OSNs. The acquired skills, Klibthong adds, enabled LwDs and OSNs, as well as other ordinary learners, to interact freely and to share all forms of activities with ease and comfort. In summary, Klibthong (2015) claims that teachers were contented with the way the development of interaction among the different categories of learners brought about significant support that later became beneficial for their own wellbeing. In line with Klibthong's (2015) observation, Lodge and Lynch (2004) had noted with interest the high levels of accepting attitudes displayed among ordinary learners toward their counterparts with disabilities and other special needs. In their view, Attfield and Williams (2013) point out the need to implement inclusion, because it is such a strong tool for cementing unity among LwDs and OSNs and their ordinary peers in schools. Through inclusion, Attfield and Williams believe that the existing discriminatory attitudes and all forms of prejudice and bias against LwDs and OSNs can get stamped out permanently in schools and in society generally. In light of this, Kotele (2010), Miles (2011), and Mnangu (2016), who perceive teachers to be the most valuable human resources available, suggest that they should be supported to promote inclusive practices in all schools. In order to enhance teachers' competence, Naicker (2006), for one, suggests that teachers have to be trained and retrained, for example through in-service training, if successful inclusive classroom implementation is to be achieved. Useful as it may be, effective teacher training should not be focused mainly on academic knowledge only; it must be balanced with skills acquisition as well (Westwood, 2017).
Balanced teacher training is the only way to go, according to McConkey and Bradley (2017), because LwDs and OSNs all learn at their own paces (so are ordinary learners), thus, without taking this into consideration, the authors observe, all good efforts to promote inclusive schools can end up wasted. McConkey and Bradley (2017), therefore, recommend that the contemporary teacher training programmes should be reviewed in order to empower teachers to be better equipped with the necessary skills that can help them to assess learning needs of each and every learner and be capable of managing a variety of individualised learning programmes (IEPs). Well trained teachers, according to Hamill et al. (2016), are those individuals who possess practical skills in instruction, communication, collaboration, alternative forms of evaluation, classroom management, conflict resolution, and those who know how to adapt curriculum and cooperative learning strategies. Hamill et al. (2016) suggest a number of helpful teaching/learning strategies, which, inter alia, include co-operative teaching/learning, IEP, the Socratic method, inquiry-based (discovery) learning, collaborative problem-solving, heterogeneous grouping, and differentiation. In support of Hamill et al. (2016) and Wedell (2016) affirms that effective teachers are those who understand a child's development and learning in addition to academic content. This argument supports the earlier view expressed by Attfield and Williams (2013), that for teachers to increase their confidence and skills, their training and development must encompass a wider scope than course attendance alone. The authors further add that a comprehensive teacher training is always an important requirement for good classroom learning of LwDs and OSNs (and for the ordinary learners too, although the authors do not mention this). Teaching experience factors have been identified by Bruwer and Heathel (2017) as an important tool necessary for promoting an effective classroom performance for both teachers and learners of all categories. The authors investigated performance of teachers who were all trained but noted that those who had long experience were comparatively performing much better than those with little experience. Much as this argument appears convincing, it should, however, be pointed out that such good experience equally works well for ordinary learners. The researchers, thus, noted that experience was one of the vital crucial factors that promotes successful learning of LwDs and OSNs. The findings, according to Bruwer and Heathel (2017), proved that however much a school was well equipped with all types of necessary teaching/learning resources, it still needed teachers with more experience on the ground. That, such teachers were needed as they had the ability and skills in utilising resources to stimulate effective and successful learning among LwDs and OSNs and ordinary learners in schools. Besides teachers' good experiences in classroom, Khan (2011) noted another area where teachers' roles were important in supporting successful learning of LwDs and OSNs. This was to do with identification and utilisation of curriculum that were consumable by different categories of LwDs and OSNs. Lunga (2015) advances Khan's argument and concern by proposing that teachers should work with curriculum designers and developers to formulate the type of curriculum that is accessible and consumable by LwDs and OSNs and ordinary learners at their respective pace, abilities and capabilities. 
Drewer (2016) supports Lunga's point of view on teachers' roles, by calling upon other professionals who are engaged in curriculum development to be flexible and supportive to teachers in development curriculum that is flexible and realistic to LwDs and OSNs, as well as ordinary learners. In conclusion, Drewer (2016) suggests that teachers should always work with other experts and guide them to design and develop curriculum that is adaptable to the needs of LwDs and OSNs. Okwano (2016), for one, dwells on the policy framework for LwDs and OSNs. Okwang believes that some governments stop at formulating policies on the provision of education for LwDs and OSNs, and that they do little to enforce implementation of such policies. Teachers, Okwang, believes, have a responsibility to remind the relevant authorities to enforce enabling environment for implementation of such important policies. In supporting Okwang, Jarvis (2016) points out that lack of action taken to implement education for LwDs and OSNs means that most young children are rendered vulnerable to multiple and intersecting risks and danger that profoundly affect their growth and development. When Weyers (2016) focused a study on the ecological aspects that influenced implementation of inclusive education in mainstream primary schools in the Eastern Cape, South Africa, the findings revealed that implementation was attributed to the entire ecological system of education in that country. Weyers noted that the systems were not supportive of one another for the success of implementation of inclusive education, pointing out that no system could stand alone, and that not even teachers could do much to improve the situation. Other observations that Weyers (2016) noted, were that classes were not very accommodative and user-friendly for learners who experienced barriers to learning. Here, Weyers further revealed that there was lack of structural modification among participating schools to accommodate the needs of learners with limited mobility. Against this background, it was further noted that LwDs and OSNs were excluded from aspects of school life and that teachers' roles to help improve the situation of such learners were severely restricted by the circumstances. This, according to Weyers, limited the learners' full participation in classroom activities, and that they were thereby denied the opportunity of developing optimally. Teachers were, therefore noted, according to Weyers, to be unable to help improve the situation. In Uganda, such a dilemma for teachers' presumed failure can be attributed to policy conflicts, so to say. Uganda is currently one of the leading countries in the eastern, central and southern African regions on policies for LwDs and OSNs. As mentioned earlier, well trained teachers for LwDs and OSNs, due to examination-oriented type of education in the country may not find it possible to provide good quality of child centred teaching/learning. In Uganda there are some teachers who are recognised as good teachers by parents, school administrators and politicians when they produce learners who score as many grade A passes at PLE as possible. For that reason, teachers are always seen on their toes, preparing learners for perfect PLE performance. 
Individualised Educational Planning (IEP) knowledge and skills that teachers acquire when they obtain higher qualifications in teaching LwDs and OSNs is something that they have to put aside for a while if they are to survive the competition of coaching learners for super grade A performance at the end of the primary education system cycle. Without adequate orientation, Kurawa (2015) points out that teachers would not do much to support and provide assistance to their LwDs and OSNs; and that they also need instructional and technical skills to work with learners' diverse needs. In Lunga (2015)'s view, some of the problems are that most schools share a common factor of having teachers who possess only low levels of qualification. All these arguments reflect the importance of teachers' training, as well as the accumulated working experiences already mentioned earlier. In the next section, empirical search, meant to solicit data from practical point of view is presented. This is based on the research purpose and the relevant research questions formulated for the study. METHODOLOGY The purpose (aim) of the study was to find out factors that influence special needs education trained teachers' performance in class. The study conducted between January and June 2019, adopted a descriptive study design and used qualitative approaches in sampling, data collection and data analysis. The study participants were 103, comprising 73 special needs teachers and 30 pupils with disabilities, who voluntarily participated in the study. The sample is as shown in Table 1. The target population consisted of 3 government universal primary schools selected from Kampala, Wakiso and Mukono. There were 94 teachers and 2,386 learners in the identified schools. For ethical purpose, the identified schools were code named as A, B, and C. The study covered participants who were aged between 9 and 14 years. Purposive sampling technique was used to select the required samples for teachers and learners. As noted in Table 1, methodological aspects address questions such as grade levels, location (namely: Kampala, Wakiso, and Mukono in Uganda), class size, inclusive classrooms, levels of teacher training and years of teaching experience. It may appear as if the Teacher: Learner ration is around 1:30, as is seen in the table. This would only be possible if each of the classes were to be divided and shared at the time of teaching. Instead, each teacher has to teach the entire class each time, as indicated in each of the brackets. Take for example, if in School A, a teacher of P.7 is to teach a specific subject, he/she has to face the entire (202), instead of 29 learners. Thus, the volume of workload becomes cumbersome. These various aspects are included for they influence the way participants provide responses for this study. Seventy-three (73) open ended questionnaires were distributed to special needs teachers, while interviews were conducted with the identified 30 pupils with disabilities and other special needs. The questionnaires, accompanied with cover letters were delivered to the prospective participants who filled them in and returned to the researchers. After gathering the questionnaires filled in by the participants, the researchers embarked on the in-depth interviews in the selected schools. Before each interview session consent forms were signed and given back to the interviewer. 
These were all focused on teachers' qualifications, teaching experiences, teaching methods, the classroom environment, the relevant policy framework on the provision of education for LwDs and OSNs. RESEARCH INSTRUMENTS As noted above, a mixture of instruments was used, and they comprised open ended questionnaires on one hand, and interview guides, on the other hand. Each of the instruments focused on the research questions, which are reproduced here, for the attention of the reader as follows: (a) the concern on class size to teachers who are not properly been well trained when they have LwDs and OSNs in the large classes (b) the ways by which teachers use to complete curriculum for the satisfaction of all categories of learners (c) how teachers manage the average class number (teacher-learner ratio) (d) the ways by which class room environment pose a challenge to teachers' direct management of all categories of learners under inclusive setting (d) how availability of resources concern teachers' performance in promoting teaching and learning. DATA ANALYSIS Content data analysis involved identification of themes and sub-themes, and categorisation of emerging themes where applicable. Analysis also involved narrations and direct quotes where necessary. In short, data was analysed using content and thematic analysis. ETHICAL ISSUES For ethical purpose, the identified schools were code named as A, B, and C. The study involved thirty (30) learners aged between 9 and 14 years, who were interviewed during the data collection. The consent of this category of participants was sought prior to commencement of each of the interview sessions. A consent form was given to each of them to sign, with a reassurance that their names and whatever information they were to give would remain confidential. They were also reassured that neither their photographs would be taken by the researchers, nor recorded voices be revealed to the third party. * The signed consent form can be provided on request by the editor of this journal. FINDINGS The findings of the study reflected similar views, as well as divergent views expressed by the participants, as well as the issues noted through observations. Findings emerged, revealing the importance of teachers' experience, the methods they use for delivery, the pressure they go through to complete the nature of curriculum and its relevance to LwDs and OSNs, the importance of policies on the provision of education for LwDs and OSNs, the class room environment, and generally the key roles played by teachers in the implementation of inclusive education for LwDs and OSNs-reflecting how vital teachers' training and experience are. All participants in this study stressed the fact that every child was different and therefore there ought to be options available to best suit the needs of learners with special needs. For some families the situation may need to change, but ultimately the important thing is to have the best suitable option. This study found out that there was a definite emphasis and importance placed on the classroom development of children with intellectual disabilities. All participants identified social learning and social awareness as positive aspects of inclusive education settings. It is not only children with disabilities that benefit socially, but all ordinary children in the school do benefit. 
From discussions on children's social interaction, an emphasis on the caring nature exhibited by primary school children toward pupils with special educational needs became evident (Nolan, 2011). Teachers identified this to be particularly so, as children grew older and the gap widened socially between children with disabilities and their peers. "The gap is going to get bigger and bigger, between her and the rest of the class, but that happens with all children with disabilities" Class size is a massive issue. "It just goes without saying, if you have a big class of thirtyodd children and you have somebody with special needs, either that child is going to lose out or the rest of the class are going to, you know someone's losing out, because you can't get to everything" (Teacher). Teachers admitted to feeling overwhelmed or anxious in some cases. "I really felt at sea I have to say, in September, because a child with particular special needs was coming into class. There are no guidelines, there's no. . .there's nothing. You are just, you are just there and you have to figure it out yourself nearly" (Teacher). The challenge in the selected schools is that most of the participants (teachers) were not trained to teach in inclusive classrooms or how to practise. A participant said: "It is important that all children must learn together so that they feel appreciated, but the problem is that we are not trained to teach learners with disabilities. If we can be trained and be supplied with resources that will assist learners such as Braille for learners who cannot see clearly, we will be able to practice inclusive education without struggling." This indicates that proper training for practising teachers is needed. TIME TABLING AND CLASSROOM SETTING Findings have revealed that the most significant barriers to learning for learners in the curriculum were the pace of teaching and the time available for completing the curriculum. A participant was concerned about learners who could not see clearly, and even though teachers always made them sit where they could see, it was difficult for learners because teachers started writing from the top of the board and the hand writings were too small to be seen and recognised the learners.: "I always let learners who cannot see clearly sit in front chairs so that they can see what is written on the chalkboard. Sometimes I have to enlarge copies of the text for those learners so that they can read or copy from big printed activities." Kibuuka (2017) advises that visual aids and enlarged print materials should be made available to all learners to learn properly at schools so that the needs of every learner are met and barriers to learning can be addressed. A participant also mentioned: "The time that is allocated for daily routine disadvantages some learners in completing their activities because we have to move from one activity to another without stopping. Sometimes I cannot cover curriculum for the term because I have to make intervention for learners who experience challenges." According to Nsamenang (2011) and reaffirmed by Kabumba (2017), different activities inside the classroom should take place simultaneously, not one after the other so that learners are able to choose the activities they want to participate in and they should decide for themselves the order in which they will tackle the different activities. 
CLASSROOM ENVIRONMENT AND SPACE A participant pointed this out: "Learners rotate in writing the activities because there is few furniture and lack of space." This indicates that teachers spend more time completing activities with learners because learners have to take turns using the furniture for writing. Based on the observations, teachers could have used different strategies, such as oral work or practising the activities outside the classroom. A participant noted that classes were overcrowded; some learners could be ignored because teachers would not identify them. Mwangala (2013) asserts that teacher-learner ratio and group sizes are assumed to be important because as the number of children increases, teachers' ability to individualise attention to children decreases, and managing large numbers of children can be stressful for even the most sensitive and knowledgeable teacher. Learners do not sit comfortably in class because space is too limited and they do not have enough chairs. A participant said: "I think in school inclusive education is hindered by overcrowding and lack of space." Another participant added: "If learners are more than 40 in class, it is difficult to attend to individual problems because we do not have enough time to do so." The size of the classroom and the number of learners in the class have an impact on monitoring and supporting learners. It is important for learners to be supported by teachers in class so that they can gain confidence in learning. Miles (2011) noted that overcrowding had also been identified as another factor that affects the practices of inclusive education. Overcrowding created a challenge for teachers to be able to identify and attend to learners who experienced challenges in class. In most cases learners with learning difficulties are ignored due to overcrowding and lack of space. It is important for all learners to be accommodated in teaching-learning classrooms. CLASSROOM MANAGEMENT Findings have revealed that some learners need the teachers' attention in order to focus on their work and complete it successfully. A respondent noted: "We, as teachers, sometimes rush children in answering questions instead of giving them time to respond; this creates frustrations and fear in them." According to Westwood (2017), teachers should vary their pace when speaking to learners and, by so doing, enable them to comprehend what is being said. Westwood also notes that teachers should understand that not all activities are suitable for the different categories of learners with special needs. The researchers observed how teachers and practitioners were administering their activities and assessments in classes. This observation is in line with Kibria (2005), who is of the view that planned activities should always be relevant and stimulating, and that learning activities should be of interest to learners and developed in ways that learners find enjoyable. TEACHERS' EXPERIENCES AND QUALIFICATIONS The researchers have observed that teachers who are more qualified understand learners' actions better than those who are less qualified or not qualified at all. As noted earlier, education in Uganda is heavily examination oriented. In light of that, a well-qualified and experienced teacher will have the knowledge and belief that education is not merely about passing examinations with very high grades; rather, education should be planned to shape a learner's better future.
That learners, be it they are ordinary or, those with disabilities and other special needs must not be turned into machines for cramming facts and regurgitate them for purpose of passing national examinations in grade A all the time. The researchers have observed that planning and presentation of qualified teachers are more interesting and appealing to learners, while on the contrary presentations of practitioners are found not to be appealing to learners. Most of the practitioners, except those who are upgrading their qualifications; struggle to maintain order in the class and to help learners with learning difficulties. On completion of their courses, the upgraders exhibit knowledge and skills that enable them maintain order in class. That, before attaining new qualifications they would lack the necessary skills, so they would have to struggle to put things right, as such. RESOURCES Resources enhance learners' understanding and grasping of the content of what is being taught while allowing them not to forget what they have learnt. Resources that will enhance learners' knowledge should be prioritised in schools so that learners are provided with the opportunity of learning and gaining understanding of a concept with ease. Availability of resources in schools contributes positively to the teaching and learning of different learners in the classroom if the resources are properly utilised. Most teachers are unable to practise inclusive education due to lack of relevant resources that enhance teaching and learning. The developing countries are the ones that are most affected on this. Developed countries have virtually all the resources they need to support their education provision. According to Kristensen et al. (2003), identification of resources and assets in the children's environment does not only help to provide a basis for learning opportunity and participation, but is important for early childhood education. The researchers have observed that most classes, especially those in developing countries lack resources that can be utilised to enhance active learning during teaching and learning while few classes have some resources but they cannot use them. Kotele (2010) noted that lack of resources prevents teachers/practitioners from differentiating activities to accommodate different learning abilities. Other findings have revealed that in some selected schools, some classes do not have enough furniture for all categories of learners, and that some learners are inconvenienced and disadvantaged. Findings have revealed that lack of infrastructures affects the practices of inclusive education. Foundation phase classes should have enough space for learners' movement and for putting different resources in different spaces as pointed out by a participant. The participant said: "I think in some schools' effective implementation of activities is hindered by overcrowding and lack of space, our classes are not user friendly, for example, if a child uses a wheelchair, he/she would not be able to move around or go to the toilet, because even our toilets are not suitable for them." The sizes of the classrooms should allow both teachers and learners to move freely without disturbing or hurting each other. The researchers have observed that practical objects are very important for enhancing learners' understanding and emphasising what is being taught so that they do not forget what they have learnt. The researchers have observed that learners could easily identify objects from posters and mention them. 
This is because the learners will have first seen such posters, touched them while teachers were teaching. The above observations are in line with Klibthong (2015), who is of the view that unsuitable school buildings are demotivating factors for a successful inclusion of LwDs and OSNs. CONCLUSION The study has concluded that teachers' long experience, qualification and continuous training and retraining are crucial factors for effective roles they play in fostering successful provision of education for LwDs and OSNs, as well as for ordinary learners. As such, it is recommended that relevant authorities should be informed of the need to channel resources for improvement in this area. One other important conclusion is that the manner in which teachers deliver curriculum content to learners is crucial and that where there is weakness, a correction ought to be done without delay. In this regard the authorities in education in a country like Uganda should redesign curriculum which is flexible and consumable by all categories of learners, and move away from the current rigid examinationoriented curriculum. Another conclusion is that formulation of good educational policies is not helpful if such policies are not effectively implemented. It is also concluded that classroom environment is a crucial matter. Last, but not least, it is concluded that classroom environment and relevant resources contribute both directly and indirectly to teachers' effective performance in the promotion of successful learning for all categories of learners. It is therefore recommended that the relevant authorities be informed of the need to improve support in these areas. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Serum Tau Proteins as Potential Biomarkers for the Assessment of Alzheimer's Disease Progression Total tau (t-tau) and phosphorylated tau (p-tau) protein elevations in cerebrospinal fluid (CSF) are well-established hallmarks of Alzheimer's disease (AD), while the associations of serum t-tau and p-tau levels with AD have been inconsistent across studies. To identify more accessible non-invasive AD biomarkers, we measured serum tau proteins and their associations with cognitive function in age-matched controls (AMC, n = 26), a mild cognitive impairment group (MCI, n = 30), and a mild-AD group (n = 20) according to the Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), and Global Deterioration Scale (GDS) scores. Serum t-tau, but not p-tau, was significantly higher in the mild-AD group than in AMC subjects (p < 0.05), and there were significant correlations of serum t-tau with MMSE and GDS scores. Receiver operating characteristic (ROC) analysis distinguished mild-AD from AMC subjects with moderate sensitivity and specificity (AUC = 0.675). We speculated that tau proteins in neuronal cell-derived exosomes (NEX) isolated from serum would be more strongly associated with brain tau levels and disease characteristics, as these exosomes can penetrate the blood-brain barrier. Indeed, ELISA and Western blotting indicated that both NEX t-tau and p-tau (S202) were significantly higher in the mild-AD group compared to the AMC (p < 0.05) and MCI (p < 0.01) groups. In contrast, serum amyloid β (Aβ1–42) was lower in the mild-AD group compared to the MCI group (p < 0.001). During the 4-year follow-up, NEX t-tau and p-tau (S202) levels were correlated with the changes in GDS and MMSE scores. In JNPL3 transgenic (Tg) mice expressing a human tau mutation, t-tau and p-tau expression levels in NEX increased with neuropathological progression, and NEX tau was correlated with tau in brain tissue exosomes (tEX), suggesting that tau proteins reach the circulation via exosomes. Taken together, our data suggest that serum tau proteins, especially NEX tau proteins, are useful biomarkers for monitoring AD progression. Introduction Alzheimer's disease (AD) is the most common neurodegenerative disorder, currently afflicting over 35.6 million individuals worldwide [1,2]. The disease is characterized behaviorally by progressive dementia and pathologically by local accumulations of amyloid β (Aβ) peptide and neurofibrillary tangles (NFTs) composed of tau protein in the brain [3]. Both Aβ accumulation and aggregation of tau in NFTs are believed to contribute directly to AD neurodegeneration and the associated cognitive decline. We first measured serum concentrations of t-tau and p-tau in all subjects by enzyme-linked immunosorbent assays (ELISAs) to examine the potential of the proteins as non-invasive biomarkers for AD (Figure 1). Indeed, serum t-tau was significantly higher in the Mild-AD group compared to AMCs (351.9 ± 50.04 pg/mL vs. 245.6 ± 33.76 pg/mL; p < 0.05, Figure 1A), while the concentration in the MCI group did not differ from AMCs (263.0 ± 37.12 pg/mL). Serum p-tau (pSer202: S202) was also slightly higher in the MCI and Mild-AD groups compared to the AMCs, but the differences did not reach significance (AMC, 98.60 ± 16.23; MCI, 127.0 ± 20.07; Mild-AD, 120.1 ± 17.84, p = 0.38, Figure 1B). The serum p-tau (S202)/t-tau protein ratio also did not differ among groups (AMC, 0.36 ± 0.03; MCI, 0.45 ± 0.03; Mild-AD, 0.35 ± 0.04, p = 0.799, Figure 1C).
These results suggest that serum t-tau may distinguish Mild-AD, but not MCI, from age-matched healthy subjects. Next, we evaluated the correlations between serum t-tau levels and neurocognitive test scores because only serum t-tau was significantly higher in the AD groups according to ELISA results. Serum t-tau concentration exhibited a weak negative correlation with MMSE score (r = −0.19, p = 0.11, Figure 1D) and a positive correlation with GDS score (r = 0.22, p = 0.06, Figure S1A) but no correlation with CDR-SOB score (r = 0.13, p = 0.27, Figure S1D). There were also no significant correlations between serum p-tau (S202) levels or the p-tau (S202)/t-tau ratio and neurocognitive test scores (Figure 1E,F and Figure S1B,C,E,F). Moreover, there was no correlation between serum t-tau or p-tau (S202) and age (t-tau, r = 0.05, p = 0.66, Figure S1G; p-tau, r = 0.02, p = 0.88, Figure S1H) despite the strong influence of age on AD risk. We also performed Receiver operating characteristic (ROC) analysis to evaluate the diagnostic utility of serum t-tau and p-tau. Serum t-tau elevation above 234.4 pg/mL distinguished Mild-AD from AMC group subjects with 75% sensitivity and 61.54% specificity (area under the curve (AUC) = 0.675, p = 0.044, Figure 1G), while serum p-tau (S202) above 58.34 pg/mL distinguished Mild-AD from AMC subjects with 78.95% sensitivity but only 40% specificity (AUC = 0.5958, p = 0.281, Figure 1H). The serum p-tau (S202)/t-tau ratio was also not a reliable marker, distinguishing Mild-AD from AMC group subjects with 42.11% sensitivity and 76% specificity using a cut-off of 0.245 (AUC = 0.525, p = 0.78, Figure 1I). Therefore, a rise in serum t-tau or p-tau distinguished mild-AD from healthy age-matched controls with only low-to-moderate accuracy.
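ROC analyses of this kind, including the AUC and the sensitivity/specificity at a chosen cut-off, can be reproduced in a few lines once per-subject biomarker values and diagnoses are available. The following is a minimal sketch in Python using scikit-learn, with the Youden index used to pick a cut-off; the concentrations and labels are hypothetical placeholders, not the study's data, and the study does not state which software it used.

```python
# Minimal sketch: ROC analysis and Youden-index cut-off for a serum biomarker.
# The concentrations below are illustrative placeholders, not the study's raw data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical serum t-tau concentrations (pg/mL): 0 = AMC, 1 = Mild-AD
ttau  = np.array([180, 210, 225, 240, 260, 300, 220, 310, 350, 400, 260, 480])
label = np.array([0,   0,   0,   0,   0,   0,   1,   1,   1,   1,   1,   1])

auc = roc_auc_score(label, ttau)
fpr, tpr, thresholds = roc_curve(label, ttau)

# Youden's J statistic picks the threshold maximizing sensitivity + specificity - 1
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"Optimal cut-off = {thresholds[best]:.1f} pg/mL "
      f"(sensitivity {tpr[best]:.2%}, specificity {1 - fpr[best]:.2%})")
```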
Figure 1. Elevated serum total tau (t-tau) protein in patients with mild Alzheimer's disease (AD). (A) Total tau (t-tau), (B) phosphorylated (p)-tau (S202), and (C) p-tau (S202)/t-tau ratio in human serum were quantified using ELISA. Serum t-tau was higher in the Mild-AD group compared to the age-matched control (AMC) group. All data were shown as means ± SEM. * p < 0.05 compared to the AMC group by one-way ANOVA and post hoc Dunn's multiple comparison test. Correlations of serum (D) t-tau, (E) p-tau (S202), and (F) p-tau (S202)/t-tau with Mini-Mental State Examination (MMSE) scores were assessed by the nonparametric Spearman's rank correlation test. Graphs show regression lines with 95% confidence intervals. Serum t-tau was significantly correlated with MMSE scores. Receiver operating characteristic (ROC) analyses of serum (G) t-tau, (H) p-tau (S202), and (I) p-tau (S202)/t-tau. ROC analysis revealed moderate diagnostic accuracy of elevated serum t-tau. AUC, area under the curve. Characteristics of Neuronal Cell-Derived Exosomes (NEX) To determine whether serum tau proteins originate from neuronal cells and enter the circulation via neuronal-derived exosomes (NEX), we isolated exosomes from serum using the ExoQuick EX precipitation solution according to Perez-Gonzalez [27] with minor modifications and then enriched NEX by immunochemical methods (Figure S2A). To ensure the identity and quality of EX and NEX, we characterized the microvesicles by NanoSight, Western blotting, and transmission electron microscopy (TEM). NanoSight results showed that particles in the ExoQuick precipitates (total serum EX) ranged in diameter from 83 to 159 nm, consistent with expected EX size (Figure S2C). Further, EX identity and NEX enrichment were confirmed by Western blot detection of the EX-specific protein marker CD63 and the neuronal marker NCAM-L1 (Figure S2D). Both CD63 and NCAM-L1 were expressed in the total EX fraction (initial ExoQuick precipitate) and the NEX fraction (after immuno-enrichment) from the serum of AD patients and CTL subjects (Figure S2D). Expression of NCAM-L1 was higher in the NEX fraction, consistent with a neuronal origin, whereas CD63 expression was lower than in the total EX fraction (EX + NEX), consistent with enrichment (Figure S2D). In addition, consistent with a brain origin of NEX isolated from serum, TEM analysis of exosomes isolated from both serum and brain tissue samples revealed similar circular structures within the same diameter range of 50 to 150 nm (Figure S2E). These combined morphometric and immunolabeling results confirmed the successful isolation of exosomes from brain and serum as well as the neural origin of the serum NEX fraction.
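As a rough illustration of how NanoSight/NTA output of the kind described above might be summarized, the sketch below reads a hypothetical size-distribution export and reports the mode diameter, the concentration-weighted mean diameter, and the fraction of particles falling in the 50-150 nm window typically expected for exosomes. The file name and column names are assumptions for illustration only, not files or formats from the study.

```python
# Minimal sketch: summarizing an NTA size-distribution export to check that the
# measured particle diameters fall in the expected exosome range (~50-150 nm).
# "nta_export.csv" and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd

nta = pd.read_csv("nta_export.csv")                 # columns assumed: size_nm, concentration
sizes = nta["size_nm"].to_numpy(dtype=float)
conc = nta["concentration"].to_numpy(dtype=float)   # particles/mL per size bin

mode_diameter = sizes[conc.argmax()]
weighted_mean = np.average(sizes, weights=conc)
in_window = conc[(sizes >= 50) & (sizes <= 150)].sum() / conc.sum()

print(f"Mode diameter: {mode_diameter:.0f} nm")
print(f"Concentration-weighted mean diameter: {weighted_mean:.0f} nm")
print(f"Fraction of particles within 50-150 nm: {in_window:.1%}")
```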
NEX t-tau and p-tau Protein Levels in Controls, Mild Cognitive Impairments, and Mild-AD Patients To investigate whether serum NEX tau proteins more accurately reflect the severity of AD than total serum tau proteins, we measured t-tau and p-tau levels in suspensions of human neural exosomes (hNEX) from the AMC (n = 23), MCI (n = 29), and Mild-AD (n = 18) groups by ELISA. The variation in EX yield was controlled by normalizing hNEX number to EX marker CD63 immunoreactivity. The number of hNEX in the Mild-AD group was significantly lower than in the MCI group (1.38 × 10⁹ ± 2.87 × 10⁸ vs. 4.32 × 10⁹ ± 7.67 × 10⁸; p < 0.05, Figure 2A) but did not differ from the AMC group (2.39 × 10⁹ ± 4.04 × 10⁸). Figure 2. Serum total tau and phosphorylated tau in neuronal cell-derived exosomes are elevated according to the severity of Alzheimer's disease. (A) The number of human neuronal cell-derived exosomes (hNEX) was quantified by ELISA for the exosome marker CD63. The number of hNEX was lower in the Mild-AD group than the MCI group. (B) Total tau (t-tau), (C) p-tau (S202), and (D) p-tau (S202)/t-tau ratio in human neuronal cell-derived exosomes (hNEX) were quantified using ELISA. hNEX t-tau and p-tau (S202) were higher in the Mild-AD group than the AMC and MCI groups. All data were shown as means ± SEM. * p < 0.05 and ** p < 0.01 compared to the AMC group and # p < 0.05 compared to the MCI group by one-way ANOVA and Holm-Sidak's or Dunn's multiple comparison test. Correlations of hNEX (E) t-tau, (F) p-tau (S202), and (G) p-tau (S202)/t-tau with MMSE scores were assessed using the nonparametric Spearman's rank correlation test. Graphs show regression lines with 95% confidence intervals. hNEX p-tau (S202) and p-tau (S202)/t-tau were significantly correlated with MMSE scores. ROC analyses of hNEX (H) t-tau, (I) p-tau (S202), and (J) p-tau (S202)/t-tau indicating moderate diagnostic accuracy of elevated hNEX p-tau (S202). AUC, area under the curve.
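The CD63-based yield correction described above (the mean CD63 signal within each assay group set to 1.00, with each sample's tau reading then divided by its relative CD63 value, as also stated in the Methods) can be expressed compactly as follows. This is a minimal sketch with hypothetical readings, sample names, and column names; it is not the authors' analysis script.

```python
# Minimal sketch of CD63 normalization of exosomal tau ELISA readings.
# All values, sample names, and column names are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "sample":  ["AMC_01", "AMC_02", "MCI_01", "MCI_02", "AD_01", "AD_02"],
    "group":   ["AMC", "AMC", "MCI", "MCI", "AD", "AD"],
    "cd63_od": [0.82, 1.10, 0.95, 1.05, 0.60, 0.75],   # CD63 ExoELISA signal (a.u.)
    "ttau_pg": [18.0, 22.5, 20.1, 24.3, 30.2, 35.8],   # raw exosomal t-tau (pg/mL)
    "ptau_pg": [ 7.5, 10.2,  9.8, 11.0, 16.4, 18.9],   # raw exosomal p-tau (pg/mL)
})

# Scale CD63 so that the mean within each assay group equals 1.00, then divide the raw
# tau readings by that relative CD63 value to correct for differences in exosome yield.
rel_cd63 = df["cd63_od"] / df.groupby("group")["cd63_od"].transform("mean")
df["ttau_norm"] = df["ttau_pg"] / rel_cd63
df["ptau_norm"] = df["ptau_pg"] / rel_cd63
print(df[["sample", "ttau_norm", "ptau_norm"]].round(1))
```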
Phosphorylated tau Protein Levels in Serum and hNEX Predict Cognitive Deterioration These tau protein measures and correlations with AD severity are from patients at different stages of AD, and thus associations with diagnostic significance may be overlooked. Therefore, we examined these associations prospectively during patient follow-up. Blood samples were collected only in the first year, and cognitive function tests were performed annually for 4 years. Changes in GDS and MMSE scores were used for the evaluation of cognitive deterioration. Based on these results, patients were divided into a slow progression group showing no significant increase in mean GDS score or significant decrease in mean MMSE score, and a cognitive deterioration group demonstrating significantly higher mean GDS scores (1st, 3.40 ± 0.16; 4th, 4.60 ± 0.16, p < 0.01, Figure 3B) and numerically lower mean MMSE score (1st, 18.30 ± 1.19; 4th, 15.20 ± 1.17, p = 0.08, Figure 3C). Serum p-tau (S202) levels were higher in the cognitive deterioration group than the slow progression group (184.2 ± 29.29 vs. 114.6 ± 15.64 pg/mL, p < 0.05, Figure 3E), and there was a significant positive correlation between serum p-tau (S202) and the change in GDS score (∆GDS) (r = 0.3909, p = 0.0297, Figure 3I). Alternatively, there were no group differences in t-tau, p-tau (T181), and Aβ1–42 or correlations of these factors with ∆GDS (Figure 3H-K). Baseline hNEX t-tau level was also greater in the cognitive deterioration group than the slow progression group (33.83 ± 6.90 vs. 18.78 ± 2.46 pg/mL, p < 0.05, Figure 4A). Similarly, hNEX p-tau (S202) level was higher in the cognitive deterioration group compared to the slow progression group (17.20 ± 3.84 vs. 9.94 ± 1.63; p < 0.05, Figure 4B). Thus, elevated hNEX t-tau, hNEX p-tau (S202), and serum p-tau (S202) are predictive of cognitive deterioration. In contrast, hNEX t-tau, p-tau (S202), p-tau (T181), and Aβ1–42 were not correlated with ∆GDS (Figure 4E-H). In addition, t-tau, p-tau (T181), and Aβ1–42 in human serum and neuronal cell-derived exosomes were not correlated with ∆MMSE score (Figure S6), but serum p-tau (S202) was negatively correlated with ∆MMSE score (r = −0.35, p = 0.05, Figure S6B). Figure 3. High baseline p-tau in serum predicts long-term cognitive deterioration. (A) Timeline of the follow-up study. Blood was collected once during the first year, and cognitive function tests were performed annually for 4 years. Changes in (B) GDS score (∆GDS) and (C) ∆MMSE score are indicative of cognitive deterioration. All data were shown as means ± SEM. ### p < 0.001 compared to the first-year score in the cognitive deterioration group using the Mann-Whitney test. Comparisons of baseline serum (D) t-tau, (E) p-tau (S202), (F) p-tau (T181), and (G) Aβ1–42 between slow progression and cognitive deterioration groups. Serum p-tau (S202) levels were higher in the cognitive deterioration group than the slow progression group. All data were shown as means ± SEM. * p < 0.05 compared to the slow progression group using the Mann-Whitney test. Correlations of serum (H) t-tau, (I) p-tau (S202), (J) p-tau (T181), and (K) Aβ1–42 with ∆GDS were assessed using the nonparametric Spearman's rank correlation test. Serum p-tau (S202) levels were significantly correlated with ∆GDS scores.
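For readers who want to mirror the structure of the follow-up analyses reported above, the sketch below applies a Mann-Whitney U test to baseline serum p-tau in the two outcome groups and a Spearman correlation between baseline p-tau and the 4-year change in GDS score. All values are hypothetical placeholders used only to show the analysis steps, not the study's measurements.

```python
# Minimal sketch of the follow-up analyses: Mann-Whitney U comparison of baseline
# serum p-tau between outcome groups, and a Spearman correlation of baseline p-tau
# with the 4-year change in GDS score. All numbers are hypothetical placeholders.
import numpy as np
from scipy import stats

slow_ptau = np.array([ 80,  95, 110, 120, 130, 100,  90])  # pg/mL, slow progression
fast_ptau = np.array([150, 170, 185, 210, 160, 230])        # pg/mL, cognitive deterioration

u, p_mw = stats.mannwhitneyu(fast_ptau, slow_ptau, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_mw:.3f}")

baseline_ptau = np.concatenate([slow_ptau, fast_ptau])
delta_gds = np.array([0, 0, 1, 0, 1, 0, 0, 1, 1, 2, 1, 1, 2])  # GDS change over 4 years
rho, p_sp = stats.spearmanr(baseline_ptau, delta_gds)
print(f"Spearman rho = {rho:.2f}, p = {p_sp:.3f}")
```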
Discussion In this study, we found significantly elevated total tau protein in the serum of Mild-AD patients compared to the MCI and AMC groups, but this elevation was only of modest efficacy for identifying AD patients according to ROC analysis. The tau protein has more than 25 phosphorylation sites [7], and changes in pSer202 (p-tau (S202)) and pThr181 (p-tau (T181)) are implicated in AD [26]. However, no significant changes in serum p-tau (S202) and p-tau (T181) were observed among AD groups compared to healthy controls. We did find significant correlations of serum t-tau levels with MMSE and GDS scores and no age- or gender-dependent changes, indicating specific dependence on disease progression. However, these correlations were not strong, possibly due to the absence of severely symptomatic patients in our sample. There was also no correlation between t-tau and CDR-SOB scores. The pathophysiological process of AD is thought to begin several years before clinical symptoms become apparent [28]; therefore, CDR-SOB scores are believed to follow neurodegenerative changes.
In our study, the CDR-SOB score was measured once when patients visited the hospital voluntarily, but a single measure was insufficient to identify mild-AD, which may explain why there were no correlations with other disease markers. In addition to the low-to-moderate accuracy, sensitivity, and specificity of serum t-tau for AD diagnosis, we found no changes in serum Aβ or correlations with neurocognitive test scores. In summary, there were significant differences in serum tau between AMC and Mild-AD, although there was no clear difference between MCI and Mild-AD groups. However, serum Aβ levels decreased with disease progression from MCI to mild-AD. Until now, there have been numerous studies searching for useful blood-based AD-biomarkers, including plasma Aβ or tau. A recent study using ultrasensitive methods showed that plasma Aβ was associated with cognitive status and CSF biomarkers and plasma Aβ 42 and Aβ 40 were lower in AD than in amnestic MCI than in non-amnestic MCI [29], whereas others reported the opposite [30][31][32][33][34]. These equivocal findings yield conflicting results concerning the predictive value of AD diagnosis and cognitive decline in the AD group. Recent reports regarding tau levels revealed that plasma tau reflected brain tau levels [17], and plasma tau levels were specifically elevated in AD patients [19]. At the same time, other studies concluded that t-tau were not suitable as AD biomarkers because of the large overlap of plasma t-tau levels between normal aging and AD [19,20]. A more recent study quantifying plasma p-tau (T181) showed that the plasma p-tau (T181) in the AD group was significantly higher than that in the age-matched control group, but showed a too low cut-off value (0.0921 pg/mL) of plasma p-tau181 [20] This result with plasma p-tau (T181) differs from our ELISA results that serum p-tau (T181) did not differ between AMC, MCI, and Mild-AD. In addition, the scale of the measured value fo p-tau was different. Perhaps this discrepancy is due to differences in the experimental scheme, including blood samples and measurement methods. However, our WB results for p-tau (T181) levels show that p-tau (T181) protein in Mild-AD was significantly increased compared to both the MCI and AMC groups. P-tau (T181)/t-tau ratio of the Mild-AD group also significantly higher than that of the AMC group. Since the results seem to differ depending on the experiment method, the potential for AD biomarkers of plasma t-tau and p-tau requires further examination for diagnostic reliability, including correlation with clinical features such as cognitive dysfunction. In contrast to total serum tau proteins, we found that serum exosome tau was more strongly predictive of disease status according to ROC, presumably as these vesicle-associated proteins better reflect the pathological status of the brain. Tau proteins are abundant in the brain, especially in distal axons [7]. We hypothesized that pathological proteins such as t-tau and p-tau can be delivered to the circulation by exosomes, which can act as biological barrier-permeable carriers. Therefore, we isolated human neural exosomes (hNEX) in serum samples from the AMC, MCI, and Mild-AD group subjects and measured expression levels of tau protein by ELISA and Western blot. Although the level of tau protein in NEX was about one-tenth that in serum, group differences were more pronounced. 
For instance, while serum p-tau did not differ among groups (Figure 1), serum hNEX p-tau was significantly higher in the Mild-AD group than the MCI and AMC groups, while the hNEX p-tau/t-tau ratio was higher in the Mild-AD group than the AMC group, respectively. Further, both NEX t-tau and p-tau (S202) protein levels distinguished MCI from mild-AD with high specificity or sensitivity according to ROC analysis. NEX p-tau (S202) was also significantly correlated with MMSE scores, further suggesting a strong association with disease progression. As the variability of cross-sectional data can overlook clinically useful associations, enrolled subjects were also followed for 4 years with annual cognitive function testing following a single blood test. Both serum and NEX p-tau (S202) during the first year were correlated with the changes in GSD and MMSE scores after 4 years, indicating that high baseline serum or NEX p-tau protein is predictive of faster disease progression and cognitive decline. Alternatively, p-tau (T181) and Aβ levels did not correlate with these changes in GSD and MMSE scores during follow-up. Both t-tau and p-tau proteins were significantly elevated in the Mild-AD group, as confirmed by Western blotting. In particular, PHF (pSer202 + pThr205) and pThr181 were significantly higher in the Mild-AD group. These results clearly show that at least some pathogenic proteins in the brain tissue can enter the circulation via exosomes, thus reflecting the current state of brain pathology. It has been reported that tau protein is difficult to measure in blood because of its short half-life [35,36]. However, pathogenic proteins in blood may be protected from enzymatic damage if contained within vesicles such as exosomes. Although NEX tau proteins can distinguish between MCI and mild-AD more accurately than the corresponding free serum proteins, there are several drawbacks. First, the isolation process is complex. Further, individual proteins only partially reflect the complex pathology of AD. A combination of NEX proteins may provide a more accurate and reliable diagnosis, disease staging, and prognosis. Since NEX isolated from blood is derived from brain tissue exosomes, we hypothesized that the changes in NEX proteins would reflect pathogenic changes in the brain. There was a significant decrease in the number of hNEX in the Mild-AD group compared to the MCI group as well as in aged AD model mice compared to younger WT and Tg mice, suggesting reduced tau protein clearance in the brain with disease progression and that exosomal egress serves a protective function. In addition to pathogenic effects, tau proteins remaining in the brain may facilitate further accumulation. This possibility was confirmed using JNPL3 Tg mice, in which p-tau accumulation was greater in the brains of 15-month-old than 4-month-old animals. In accord with human blood NEX, the number of mouse neural exosomes (mNEX) was significantly lower in 15-month-old Tg JNPL3 mice compared to age-matched WT mice. We also found that the number of neural exosomes was significantly correlated with the number of brain tissue exosomes. Expression levels of t-tau and p-tau proteins in blood NEX as well as brain tissue exosomes increased with pathological progression in JNPL3 mice. Moreover, the expression of NEX p-tau protein was significantly correlated with p-tau protein in brain exosomes. 
Collectively, these findings indicate that tau proteins in blood exosomes reflect the level of tau proteins in the brain, and thus may be useful markers for monitoring the progression of AD. In this study, we demonstrate that total tau and phospho-tau (S202) associated with brain-derived serum exosomes can distinguish mild AD from MCI and healthy controls with greater accuracy than free serum proteins. Further, we show that elevated levels of these serum exosome-associated proteins at baseline can predict long-term cognitive decline. Finally, we provide compelling evidence from AD model mice that AD-related proteins in serum exosomes are indeed derived from brain exosomes. Patients, Controls, and Methods A total of 76 subjects aged 65-90 years were recruited from Gachon University Gil Medical Center, Incheon, Korea, including 26 healthy age-matched control subjects and 50 cognitive impairment confirmed according to the criteria described in our previous report [37]. Briefly, individuals with subjective cognitive complaints were first screened for cognitive impairment using a cut-off score of > 26 of 30 on the Mini-Mental State Examination (MMSE) [38]. Subjects scoring below 26 were subjected to detailed neuropsychological testing, including the Clinical Dementia Rating-Sum of Box (CDR-SOB; scores > 2.5) and Global Deterioration Scale (GDS; scores > 3), which are broadly accepted measures for dementia [39]. Patients with comorbidities were excluded. Patients were diagnosed according to American Psychiatric Association DSM-IV criteria. All clinical tests were performed by investigators blinded to the subjects' genetic status; however, the blinded condition could not realistically be maintained for overtly demented subjects. Table 1 summarizes the clinical and demographic characteristics of the study population. The study was conducted according to the guidelines of the Ethics Committee of Gachon University Gil Medical Center and with the approval of the institutional review board of Gachon University Gil Medical Center (GAIRB2013-264, 23 October 2013, GCIRB2016-015, 21 January 2016). All the subjects provided written informed consent before participating via self-referral or referral from a family member. Serum Separation Ten milliliters (mL) of blood was collected from each participant by vein puncture into sterile vacutainers under strict aseptic conditions. The samples were kept at room temperature (RT) for 30-40 min to clot, and then centrifuged for 20 min at 1000× g to separate the serum. Serum was collected carefully, and a protease inhibitor cocktail (535140; EMD Biosciences, Inc., Darmstadt, Germany) and phosphatase inhibitor cocktail (P5726 and P0044; Sigma-Aldrich, Inc., St. Louis, Missouri, USA) were added. Serum samples were aliquoted and immediately stored at −80 • C until further analysis. Aliquots were thawed on the day of analysis. Isolation of NEX from Serum From each patient group, 16 serum samples were selected randomly for NEX isolation. Briefly, 0.5 mL of serum was mixed with 252 µL of ExoQuick EX precipitation solution (EXOQ5A-1; System Biosciences, Inc., Palo Alto, CA, USA) and incubated for 1 h at 4 • C. After centrifugation at 1500× g for 30 min at 4 • C, the pellet was resuspended in 250 µL of calcium-and magnesium-free Dulbecco's balanced salt solution (DPBS; Thermo Fisher Scientific, Waltham, MA, USA) with inhibitor cocktails. The NEX fraction was then enriched according to a previous report [26]. 
Each sample was mixed with 100 µL of 3% bovine serum albumin (BSA) and incubated for 1 h at 4 • C, followed by the addition of 1 mg rabbit anti-human CD171/NCAM-L1 (L1 cell adhesion molecule [L1CAM]) biotin-conjugated antibody (bs-1996R-Biotin; Bioss Antibodies Inc., Woburn, MA, USA) and 25 µL streptavidin-agarose resin (53116, Thermo Fisher Scientific) plus 50 µL of 3% BSA. After centrifugation at 200× g for 10 min at 4 • C and removal of the supernatant, the pellet was resuspended in 50 µL of 0.05 M glycine-HCl (pH 3.0) by vortexing for 10 s. The suspension was then mixed with 0.45 mL of M-PER mammalian protein extraction reagent (78501, Thermo Fisher Scientific) and incubated for 10 min at 37 • C with vortex mixing. Extracted proteins were immediately stored at −80 • C until further analysis. Isolation of TEX from Brain Tissue Exosomes were extracted from brain tissue according to the methods of Perez-Gonzalez and colleagues [27] with minor modifications. Frozen (−80 • C) mouse brain (one hemisphere) was chopped in trypsin solution prewarmed to 37 • C at 0.1 g/mL. The tissue was then transferred to 15 mL tubes containing 1 × trypsin (Gibco BRL., Thermo Fisher Scientific) and incubated for 20 min at 37 • C with gentle shaking. Ice-cold DMEM (with protease inhibitor (PI), Gibco BRL) was added to stop digestion. The suspension was then mixed by being gently pipetted twice, filtered through 40 µm mesh, and centrifuged at 300 × g for 10 min at 4 • C to remove brain cells and tissue. The supernatant was transferred to a fresh tube and centrifuged at 2000 × g for 10 min at 4 • C. The second supernatant was then centrifuged at 10,000 × g for 30 min at 4 • C and sequentially filtered with 0.45 µm filter and 0.2 µm filter. Each filtrate was diluted with ice-cold phosphate-buffered saline (PBS) and centrifuged 3 times, each time at 100,000 × g for 70 min at 4 • C using an ultracentrifuge (SW28, Beckman, Miami, FL, USA) to pellet the vesicles. The final extracellular vesicle pellet was resuspended in 0.95 M sucrose solution, and centrifuged through a 6-layer sucrose gradient (0.25 M, 0.6 M, 0.95 M, 1.3 M, 1.65 M, and 2 M sucrose solution in 20 mM HEPES) at 200,000× g for 16 h at 4 • C using an ultracentrifuge (SW41, Beckman). Each fraction was diluted with ice-cold PBS and centrifuged at 100,000 × g (average) for 1 h at 4 • C (SW41, Beckman) to pellet the vesicles. The supernatant was discarded and pellets were collected in ice-cold PBS (with PI). The vesicles in fractions 1-3 were then characterized by a suite of techniques including transmission electron microscopy to confirm enrichment of exosomes. Electron Microscopy Analysis of exosomes by TEM was conducted with the support of the Brain Research Core Facilities at the Korea Brain Research Institute (KBRI, Daegu, Korea). Glow discharge Formvar-carbon coated grids were prepared using the PELCO easiBlow Glow Discharge system (Ted Pella Inc., Redding, CA, USA). Grids were glow discharged for 30 s at 15 mA. Exosome pellets were suspended in 0.15 M cacodylate buffer (pH 7.4), applied onto glow discharged Formvar-carbon coated EM grids (Ted Pella Inc.), and left for 1 min in the air to allow the membranes to absorb on the surface. The excess sample liquid was then blotted off using filter paper (3M, Maplewood, MN, USA). The grid surface was floated on the surface of a 4% uranyl acetate staining solution (EMS-CHEMIE, Eftec North America, Taylor, MI, USA) for 1 min and then the excess solution was blotted off using 3M filter paper. 
The grid surface was then floated on the surface of a water droplet, and the water blotted off with filter paper. This latter procedure was repeated 5 times, and the grid was then dried in the air. Grids were then examined using a Tecnai G2 transmission electron microscope (Thermo Fisher Scientific). Vesicle size was measured using ImageJ between groups, we measured the density of the band using ImageJ (National Institutes of Health [NIH], Bethesda, Maryland, USA, https://imagej.nih.gov/ij/, 1997-2016). Nanoparticle Tracking Analysis Exosome pellets were resuspended in 60 µL PBS. Sucrose gradient fractions SN 0, SN ∆1, and SN ∆2 were concentrated in Amicon ® Ultra-4 10 kDa nominal molecular weight centrifugal filter units to a final volume of 60 µL and a 10 µL of each prepared fraction diluted to 1:100 in PBS. Pellets obtained from 2 mL of media with Exo-quick were resuspended in 1 mL of PBS for nanoparticle tracking analysis (NTA) (particle concentrations were corrected for this concentration factor). Samples were analyzed in the range of 3-15 × 10 8 /mL by NTA using the NanoSight NS500 (Malvern Panalytical, Worcestershire, UK) equipped with a 405 nm laser. Videos were acquired and analyzed using the accompanying NTA software (version 3.1, Malvern Panalytical). The number of vesicles in each sample was presented as particles/mL media (mean ± S.D., n = 6). Measurements of t-tau and p-tau Protein Levels Serum concentrations of t-tau, p-tau, and amyloid-beta (Aβ 42 ) were detected using the following ELISA kits: Human tau protein ELISA kit, human phospho-tau (S202, T205) protein ELISA kit, human phosphor-tau (T181) protein ELISA kit, human phosphor-tau (S231) protein ELISA kit, and human Aβ42 ultrasensitive ELISA kit (Table 2). We used 50 µL of the sample according to the manufacturer's instructions and diluted it 1:5 if necessary. Tau protein levels in NEX were quantified by ELISA kits, and number of exosomes in each NEX sample was quantified by ExoELISA for the CD63 antigen (EXOEL-CD63; System Biosciences, Inc., Palo Alto, CA, USA) according to the manufacturer's instructions. The mean value for all CD63 determinations in each assay group was set to 1.00, and the relative values for each sample were used to normalize t-tau and p-tau protein levels. The samples and standards were measured in duplicate, and the means of the duplicates were used for the statistical analyses. Western Blot Analysis We used Western blot to confirm the changes in t-tau and p-tau protein levels in NEX. From each group, 9 NEX samples were selected randomly for the Western blot experiment. For easy analysis, total protein in 20 µL of each sample was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene difluoride (PVDF) membranes (Merck Millipore, Darmstadt, Germany). Membranes were blocked with 5% nonfat dry milk prepared in Tris-buffered saline (TBS) (10 mM Tris pH 7.5, 150 mM NaCl) for 1 h at RT and then incubated at 4 • C overnight in the following primary antibodies diluted with TBS: primary rabbit anti-human CD63 IgG and primary rabbit anti-human TSG101 IgG for EX, goat anti-human NCAM-L1 IgG for NEX, goat anti-human tau (C-17) for t-tau, mouse anti-human phospho-PHF-tau pSer202 + Thr205 antibody (AT8), mouse anti-human phospho-Tau (Thr181) monoclonal antibody (AT270), and mouse anti-human phospho-tau (Thr231) monoclonal antibody (AT180) for p-tau (Table 2). 
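As an illustration of the exosome-count normalization described above, the sketch below (in Python) scales each sample's NEX tau reading by its CD63 signal relative to the group mean, which is set to 1.00. The function and variable names are hypothetical, and the simple division used here is only one reasonable reading of that normalization step.

```python
import numpy as np

def normalize_nex_tau(tau_pg_ml, cd63_signal, group_labels):
    """Illustrative CD63-based normalization of NEX tau ELISA readings.

    tau_pg_ml    : t-tau or p-tau concentration per sample
    cd63_signal  : ExoELISA CD63 reading (exosome abundance) per sample
    group_labels : assay-group label per sample (e.g. 'AMC', 'MCI', 'Mild-AD')
    """
    tau = np.asarray(tau_pg_ml, dtype=float)
    cd63 = np.asarray(cd63_signal, dtype=float)
    groups = np.asarray(group_labels)

    normalized = np.empty_like(tau)
    for g in np.unique(groups):
        mask = groups == g
        # Set the mean CD63 determination of each assay group to 1.00 ...
        rel_cd63 = cd63[mask] / cd63[mask].mean()
        # ... and use each sample's relative value to normalize its tau level.
        normalized[mask] = tau[mask] / rel_cd63
    return normalized
```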
After washing with TBS-T (20 mM Tris pH 7.5, 500 mM NaCl, 0.05% Tween 20), blotted membranes were incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies (goat anti-rabbit IgG, goat anti-mouse IgG, or donkey anti-goat IgG at RT for 1 h ( Table 2). After washing with TBS-T, bands were visualized by an enhanced chemiluminescence system (Thermo Fisher Scientific). Band densities were measured using ImageJ to estimate protein expression. For the comparison of NCAM-L1 expression among samples, results were first normalized to CD63 expression. For comparison of tau protein expression among groups, results were normalized to NCAM-L1 expression. All results were then expressed fold change relative to the AMC group. Immunofluorescence Mice were perfused transcardially with saline containing heparin for immunohistochemical analysis. Brains were isolated, fixed in 4% paraformaldehyde at 4 • C for 24 h, and incubated in 30% sucrose solution at 4 • C for 72 h. Frozen blocks of brain tissue were cut into 30 µm-thick coronal slices using a cryostat (Cryotome, Thermo Electron Corporation, Waltham, MA, USA), and these were stored at 4 • C in cryoprotectant solution (ethylene 30% and glycerol 30% in PBS). Brain slices were washed 3 times in PBS containing 0.2% Triton X-100, incubated in a blocking solution (0.5% BSA and 3% normal horse serum in 0.4% PBS with Tween 20) at RT for 1 h, and then incubated with mouse anti-human phospho-PHF-tau pSer202 + Thr205 antibody (AT8) (1:100, Thermo Fisher
ISA-Pol: Distributed polarizabilities and dispersion models from a basis-space implementation of the iterated stockholder atoms procedure Recently we have developed a robust, basis-space implementation of the iterated stockholder atoms (BS-ISA) approach for defining atoms in a molecule. This approach has been shown to yield rapidly convergent distributed multipole expansions with a well-defined basis-set limit. Here we use this method as the basis of a new approach, termed ISA-Pol, for obtaining non-local distributed frequency-dependent polarizabilities. We demonstrate how ISA-Pol can be combined with localization methods to obtain distributed dispersion models that share the many unique properties of the ISA: These models have a well-defined basis-set limit, lead to very accurate dispersion energies, and, remarkably, satisfy commonly used combination rules to a good accuracy. As these models are based on the ISA, they can be expected to respond to chemical and physical changes naturally, and thus they may serve as the basis for the next generation of polarization and dispersion models for ab initio force-field development. I. INTRODUCTION In the last few years, the field of intermolecular interactions has seen a tangible increased level of importance. The deep level of understanding we have achieved from decades of theoretical developments has formed the basis of new models for intermolecular interactions that finally give us the promise of the long-awaited accuracy and predictive power needed in application to complex molecular aggregation processes. These intermolecular interaction models are being developed primarily from interaction energies computed using some variant of symmetry-adapted perturbation theory (SAPT), and predominantly using the version of SAPT based on density-functional theory, SAPT(DFT). The latter choice is based both on the favourable accuracy and computational efficiency of SAPT(DFT). The general procedure for model development typically uses some mix of SAPT(DFT) calculations at specific, close-separation dimer configurations, and an analytical multipole-expanded form of the interaction energy suitable for the long range. The various implementations of this approach have been described elsewhere [1][2][3][4][5] . The advantage of using a theory like SAPT or SAPT(DFT) for the short-range energies is that the resulting interaction energy has a well-defined multipole-expanded form. Consequently, if this multipole-expanded form can be determined analytically, there can be a rigorous match between the short and long range. Indeed, this has been the basis of the above philosophy for many decades (see for example refs. [6][7][8][9][10][11]. Here SAPT(DFT) has an advantage over SAPT in that the multipolar molecular properties (multipole moments, polarizabilities, dispersion coefficients) can be readily derived from the underlying density functional method, and usually at a comparatively low computational cost. However, as is now well known 12-28 intermolecular properties must be distributed if we are to achieve high enough accuracies. The single-centre multipole expansion, which is a use-ful paradigm for diatomics or triatomics, is poorly convergent for larger molecules, for which we must use multiple expansion centres. 
These expansion centres have usually been taken to be the locations of the nuclei in the molecule, though this need not be the case, and indeed, in some cases 29,30 multiple, off-atomic sites are chosen to obtain even faster convergence of the multipole expansion. The problem with calculating distributed properties is that it does not seem possible to define a unique way of partitioning a molecular property into portions associated with the atoms in a molecule (AIMs). This ambiguity has led to a whole range of schemes to define the AIMs (see for example Refs. [31][32][33][34]), which have, in turn, resulted in some lively discussion in the published literature 35,36 . Here we do not wish to address the more philosophical issues associated with the atom-in-a-molecule, but rather focus on some of the practicalities that result from the choice of AIM method. Consider the following list of features of the distributed molecular properties that we might like to see achieved:

• Uniqueness for a given choice of AIM algorithm: While the AIMs themselves are not unique, the actual atomic domains that result from a particular choice of partitioning algorithm should be unique. That is, the result should not depend on numerical parameters, and should have a well-defined basis-set limit. This will usually imply that the resulting distributed molecular properties are also unique.

• Rapid convergence with rank: As the distributed properties will typically be used in a model for the molecular interactions, for computational reasons it is usually desirable that these models be rapidly convergent with rank. This condition implies that the atomic domains from the AIM are as close to being spherical as possible.

• Agreement with reference energies: The distributed properties should result in energies in good agreement with those from the reference electronic structure method. In our case this will be taken to be appropriate interaction energies from SAPT(DFT).

• Insensitivity to molecular conformation: We fully expect distributed properties to vary with molecular conformation, but, particularly for soft deformations, that is, those with a small change in the electronic distribution, we may expect the AIM domains and resulting molecular properties also to change only slightly.

• Agreement with physical/chemical expectations: This condition is qualitative, as we cannot define what the physically meaningful properties of an atom in a molecule should be. We can, however, hope that the resulting properties are in broad agreement with chemical/physical intuition.

• Computational efficiency: This is important if we are to apply the distribution techniques to large systems. Ideally we would like the algorithm to scale linearly with the size of the system.

Not all of these requirements need to be met to develop an interaction model for a specific system: after all, the long-range parameters can be treated as fitting parameters chosen to result in the best fit to the reference energies. However, the parameters resulting from such a mathematical fit rarely have any link to the physical properties of the system, and consequently cannot be used for the development of more general interaction models. Instead we must turn to methods that are somehow linked to the underlying properties of the atom in a molecule.
Some of the methods used to define the properties of the atoms in a molecule can be regarded as being more mathematical or numerical, though physical properties like the van der Waals radii may be used. In these methods, the molecular properties may be partitioned in a basis-space or real-space manner, though hybrids of the two are also used. Some of the more successful of these methods include the distributed multipole analysis (DMA) of Stone 37,38 , the Lo-Prop and MpProp approaches 15,39 , and methods based on constrained density fitting for the multipole moments 40 and for the polarizabilities 14,17 . We will refer to the original constrained density-fitting method of Misquitta & Stone 17 as the cDF method, and the related 'self-repulsion plus local orthogonality' method of Rob & Szalewicz 14 as the SRLO method. Both the cDF and SRLO distribution techniques use constraints in the density fitting to allow the molecular polarizabilities to be partitioned into non-local, site-site polarizabilities. These are not the local polarizabilities that one might conventionally think of, but include terms that allow for nonlocal, or through-space polarization in the molecule. 13 ( §9. 2) The methods differ in the constraints applied, with the SRLO algorithm using a constraint to reduce the charge-flow terms, that is, the polarizabilities that allow for charge movement in the molecule, to nearly zero. Using appropriate localization techniques 18,19 both the cDF and SRLO models can be made to yield effective local polarizability models. In the case of the former, we have referred to the combined method as the Williams-Stone-Misquitta, or WSM model. This model has formed the basis of much of our work so far, and indeed has been used to develop intermolecular interaction models by other groups either directly 41 or by extension 2,5,42 . As the localization schemes in the WSM model can be applied to any of the non-local polarizability models, we will refer to the localized models by appending '-L', for example the SRLO-L model would be the SRLO non-local model localized using the WSM approach. While these methods have been successful in developing useful models for both the polarization and the dispersion energies, the AIM properties resulting from either the SRLO-L or cDF-L algorithms do not have a well-defined basis-set limit and can result in unexpected, and perhaps unphysical AIM properties. Consider the cDF-L localized, isotropic polarizabilities for the thiophene molecule shown in Table I. While the dipole-dipole polarizabilities for all sites appear to be reasonably stable with basis with variations of 5% or so, the same cannot be said for the higher ranking polarizabilities: there are significant variations with basis set in the quadrupole-quadrupole polarizabilities, with negative values for the two hydrogen AIMs in the triple-ζ basis, and the octopole-octopole AIM polarizabilities are negative for most of the data in the table. We note that even though these individual polarizabilities appear unphysical, the whole description yields the correct total molecular polarizability. The SRLO-L polarizability models yield much the same picture and are not shown. These problems can be partially reduced by constraining the localization or by including more data during the refinement steps of the WSM method as indeed has been done by McDaniel and Schmidt 42 , but an alternative is needed. I. 
Localized, isotropic polarizabilities for the symmetrydistinct sites in the thiophene molecule computed with the cDF-L model, that is using cDF non-local polarizabilities localized using the WSM algorithm. The basis sets used are aug-cc-pVDZ (aDZ), aug-cc-pVTZ (aTZ), and aug-cc-pVQZ (aQZ). Atom C1 is the carbon atom attached to the sulfur atom and H1 is the hydrogen atom attached to C1. Atomic units used for all polarizabilities. Consider the more physically motivated schemes to define the AIMs. These include Bader's topological analysis (the so-called quantum theory of atoms-in-a-molecule, or QTAIM) 31 , maximum probability domain (MPD) analysis 43 , and the various methods based on the Hirshfeld stockholder partitioning [32][33][34] . The method of Bader is perhaps the most well known of the AIM techniques and has been used for defining both distributed multipole moments and polarizabilities 22,23,44,45 and has also been used to construct distributed dispersion models 46 . However while this technique satisfies a number of the properties listed above, it results in unusual AIM domains that lead to a somewhat slower convergence with rank of the expansion. The MPD approach is relatively new and has not yet been used as a means of obtaining distributed properties, but like the QTAIM method it is well defined. The Hirshfeld-like methods are appealing in their simplicity: If we define reference, usually sphericallysymmetrical atomic densities w a (r) for atom a -we shall term these the shape functions (though in other papers 47 this term is used for these functions normalized to unity) -then the density allocated to atom a in the molecule with total electronic density ρ(r) is given by Notice that even if the shape functions are spherically symmetrical, the AIM density ρ a will normally be anisotropic. This scheme for partitioning the molecular density is not only elegant, but results in smooth, nearly spherical AIM densities which satisfy many of the requirements we have listed above. However there are problems with the original Hirshfeld scheme in which the reference atomic densities were chosen to be the densities of the isolated, neutral atoms. This has been recognised 34,48 to be a poor choice as it causes the AIM densities to be as similar as possible to the neutral free atoms with the consequence that charge movement in the molecule was sometimes severely underestimated. Bultinck et al. 34,48 provided an elegant solution to this problem by allowing the reference state to be a linear combination of free ionic states, with the occupancy probabilities being determined selfconsistently in what is known as the Hirshfeld-I scheme. An even more elegant solution to the problem of the original Hirshfeld scheme was proposed by Lillestolen & Wheatley 33 who proposed that the reference atomic densities be determined self-consistently by defining them as the spherical average of the AIM densities: This method, termed the iterated stockholder atoms (ISA) algorithm requires no a priori reference states. Instead, once a guess to the states is made, eq. (1) and eq. (2) are iterated to self-consistency to achieve the desired solution. Early attempts at finding the ISA solution often needed as many as a thousand iterations to reach convergence, and sometimes failed to converge at all, but more robust algorithms have recently been developed that generally achieve convergence in a few dozen iterations. 
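The stockholder partition (eq. (1)) and the spherical-average update of the shape functions (eq. (2)) that are iterated to self-consistency can be summarized in a short sketch. The following Python fragment is a minimal real-space illustration on a fixed molecular grid, not the basis-space algorithm used in practice; the radial binning, initial guess, and convergence control are simplified placeholders.

```python
import numpy as np

def isa_iterate(rho, dist, n_bins=100, n_iter=500, tol=1e-9):
    """Schematic real-space iterated-stockholder-atoms (ISA) loop.

    rho  : (n_grid,) total molecular electron density on a grid
    dist : (n_atoms, n_grid) distance from each nucleus to each grid point
    Shape functions are tabulated on n_bins radial shells per atom; at each
    step the stockholder partition (eq. (1)) and the spherical average
    (eq. (2)) are applied until the shape functions stop changing.
    """
    n_atoms, n_grid = dist.shape
    edges = np.linspace(0.0, dist.max(), n_bins + 1)
    shell = np.clip(np.digitize(dist, edges) - 1, 0, n_bins - 1)  # radial bin of each point

    w = np.ones((n_atoms, n_bins))                                # crude initial guess
    for _ in range(n_iter):
        w_at_pts = np.take_along_axis(w, shell, axis=1)           # w_a evaluated on the grid
        denom = w_at_pts.sum(axis=0) + 1e-300
        rho_a = rho * w_at_pts / denom                            # eq. (1): AIM densities
        w_new = np.zeros_like(w)
        for a in range(n_atoms):                                  # eq. (2): shell average as a
            for b in range(n_bins):                               # stand-in for the spherical average
                mask = shell[a] == b
                if mask.any():
                    w_new[a, b] = rho_a[a, mask].mean()
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```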
49,50 These new methods work by restricting the variational freedom given to the ISA reference functions by defining them via a basis expansion rather than in real space as was formerly done. One of these methods is the basis-space ISA, or BS-ISA algorithm that we have developed and implemented in the CamCASP 51 program. We have used the BS-ISA algorithm to define distributed multipole models and have demonstrated that these multipoles exhibit all of the properties we have listed above. In fact, the BS-ISA distributed multipoles -or ISA-DMA models for short -surpass those from the wellestablished distributed multipole analysis (DMA) algorithm by Stone 38,52 in the rapidity of convergence with rank and in the stability with respect to basis set. Further, we have demonstrated how the BS-ISA density partitioning can be used, via the distributed overlap model, to achieve robust fits to the short-range part of the interaction energy and thereby to easily develop detailed analytic models for the intermolecular interaction 1 . Finally, in collaboration with Van Vleet and Schmidt 2,5 data from the BS-ISA algorithm has been used to develop the short-range repulsion and dispersion damping models for two general force fields: the Slater-FF and MAS-TIFF models. In this paper we extend the applicability of the BS-ISA algorithm to the second-order energies and we demonstrate how we can use this method to obtain distributed frequencydependent polarization models, and from these, distributed dispersion models for any closed-shell molecular system. We first describe this new algorithm, termed ISA-Pol. Next we describe a new, simplified and more flexible version of the BS-ISA algorithm, one that allows more accurate ISA solutions as well as additional sites and coarse-graining. The ISA-Pol method results in what are known as non-local polarizabilities which describe through-space polarization and charge movement in the system. While this is an important subject and leads to unexpected van der Waals interactions [53][54][55] in lowdimensional systems, we will instead focus here on the localized distributed models that lead to the conventional polarization and dispersion interactions. We describe the localization procedures in brief along with some of the important features of the methods. Then we present a wide range of results that compare the polarizabilities from ISA-Pol with those from cDF and SRLO, and demonstrate that the new models are superior in many ways. Finally we compare the dispersion energies from localized ISA-Pol models with those from SAPT(DFT). We end with an outlook on the scope and power of this method. II. THEORY The frequency-dependent polarizability tensors can be defined from the frequency-dependent density susceptibility (FDDS) function and the multipole moment operators (or any one-electron operators 17 ) as follows whereQ t is the (real) multipole moment operator of index t where the index (rank and component) is expressed in the compact notation of Stone 13 : t = 00, 10, 11c, 11s, · · · . The FDDS describes the linear response of the electron density to a frequency-dependent perturbation and can be written in sum-over-states form as whereρ(r) = k δ(r − r k ) is the electron density operator and k runs over the electrons in the system. To achieve a partitioning of the total molecular polarizability, eq. (3), into contributions from the AIM domains we define a unit function: where p a (r) is the probability of a quantity being associated with AIM a at point r. 
With two such unit functions we can define the distributed form of the FDDS as follows: Notice that the FDDS, being a two point function, is partitioned into contributions from pairs of sites. Having thus partitioned the FDDS, we can now define the distributed, non-local polarizabilities as where the multipole moment operators are now defined using the centres of sites a and b. These are the distributed multipole operators, for which we will also use the notationQ a t (r) ≡ Q t (r − R a ). A. A simplified and flexible BS-ISA algorithm In the BS-ISA algorithm 50 we represent the ISA atomic density for site a, ρ a , in terms of an appropriate local, atomic basis set: where the ξ a k are basis functions associated with site a and the coefficients c a k are determined by minimizing an appropriate ISA functional (see below). The piece-wise continuous shape functionw a is defined as where the transition radius r a 0 is defined appropriately 50 . The short-range form w a is given by a basis expansion: where the basis set consists of s-type functions taken from the basis used for the atomic expansion given in eq. (8). The longrange form of the shape function is given by where the constants A a and α a are obtained selfconsistently 50 . As we have previously explained, the purpose of this piece-wise definition of the shape function is to enforce the exponential decay of the ISA atomic densities, which is difficult to obtain with Gaussian basis sets as the very diffuse basis functions needed to model the long-range density tails tend to lead to numerical instabilities. Using w a L allows us to obtain an exponential decay without needing to use very diffuse basis functions. The ISA solutions are then be obtained from an iterative process, where, at each step of the iterations a suitable functional is minimized. One of these is the ∆ stock(A) functional which is the default in the CamCASP program. A computationally important feature of the ∆ stock(A) functional is that it can be minimized with O(N) computational cost, where N is the number of ISA sites in the system. This is possible as the ∆ stock(A) functional can be written as the sum of subfunctionals: where each of the sub-functionals, ∆ a stock(A) can be minimized independently of the others. Importantly, the total density ρ used in this functional is obtained via density fitting 50 ; this is needed to reduce the computational scaling to O(N), and it also simplifies the integrals needed. However in the original implementation, minimizing the ∆ stock(A) functional tended to lead to unacceptable inaccuracies in the ISA AIM densities; in particular the total charge of the system was often not conserved, with differences of 0.01e often encountered. Also, higher ranking molecular multipoles would not be well reproduced. Consequently we combined the ∆ stock(A) functional with the density-fitting functional to result in a hybrid DF-ISA algorithm. This algorithm involved a single parameter that controlled the relative weights given to each scheme, with a 90% weighting of the DF functional being recommended. While the results were better, there were two problems: (1) the new method had a computational scaling of O(N 3 ), and, (2) despite the mixture of the density-fitting and ISA functionals, there was still an overall loss in accuracy which resulted in small residual errors in the electrostatic energies computed from the DF-ISA algorithm compared with reference energies from SAPT(DFT). 
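As an aside before returning to these accuracy issues, the piecewise shape function of eqs. (9)-(11) described above is straightforward to sketch: a sum of s-type Gaussians at short range, switched to a decaying exponential A_a exp(−α_a r) beyond the transition radius. All numerical values in the following Python fragment are placeholders; in BS-ISA the expansion coefficients and the tail parameters are determined self-consistently.

```python
import numpy as np

def shape_function(r, d, zeta, A, alpha, r0):
    """Schematic piecewise ISA shape function.

    Short range (r < r0): sum of s-type Gaussians, coefficients d, exponents zeta.
    Long range (r >= r0): A * exp(-alpha * r), enforcing exponential decay of the tail.
    """
    r = np.asarray(r, dtype=float)
    w_short = np.einsum('k,kr->r', d, np.exp(-np.outer(zeta, r**2)))
    w_long = A * np.exp(-alpha * r)
    return np.where(r < r0, w_short, w_long)

# Example with made-up parameters for a hydrogen-like site:
r = np.linspace(0.0, 8.0, 200)
w = shape_function(r, d=np.array([0.5, 0.2]), zeta=np.array([1.2, 0.3]),
                   A=0.15, alpha=1.4, r0=3.0)
```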
The primary reason for the inaccuracy of the original algorithm was that the ISA atomic basis sets were constructed from the auxiliary basis used in the density fitting, and this inextricably linked the two basis sets. This placed limits on both basis sets, and therefore resulted in inaccuracies both in the fitted density and in the ISA solutions. This restriction in the basis sets was required for technical reasons associated with the implementation of the BS-ISA algorithm in version 5.9 of the CamCASP program. It was because of these inaccuracies that we needed to use the more computationally demanding DF-ISA algorithm. In the present algorithm implemented in CamCASP 6.0 we have removed these restrictions by introducing a third, independent, atomic basis set in the CamCASP p rogram which now contains the following bases: • The main basis: used for the molecular orbitals. • The auxiliary basis: used for the density fitting. This basis may use either Cartesian or spherical GTOs. • The atomic basis sets: used for the ISA atomic expansions. This basis set must use spherical GTOs, but is otherwise independent from the above basis sets. The atomic basis sets can therefore be increased in size if needed and placed on arbitrary sites, or removed from some sites. With this change, we are now able to control the variational flexibility of the ISA solution independently of that of the density fitting. As the ISA expansions are known to require an increased variational flexibility compared with the density fitting, we can now use larger basis for the ISA expansions, thereby leading to overall higher accuracies with functional ∆ stock(A) ; there is no longer a need to use the DF-ISA algorithm. This not only restores the O(N) computational scaling of the algorithm, but also allows us to use Cartesian GTOs in the density-fitting step, thereby significantly reducing the errors in the fitted density. In addition, we have made improvements to the way in which distributed molecular properties are extracted using the ISA solutions. Previously, distributed molecular properties such as the multipole moments were defined in terms of the ISA atomic expansions ρ a : where Q a t is the (real) distributed multipole moment of index t for site a. In the new scheme we instead use the expression This expression is formally identical to eq. (13), but as eq. (1) is never an identity, the latter expression is usually more accurate. We refer to multipole moments computed with eq. (14) as the ISA-GRID moments. III. NUMERICAL IMPLEMENTATION For single-reference wavefunctions, such as those from Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT), the FDDS can be evaluated using coupled linearresponse theory and is expressed as a sum over occupied and virtual single-particle orbitals and eigenvalues as where the subscripts i and i (v and v ) denote occupied (virtual) molecular orbitals, φ are the single-particle orbitals, and the frequency-dependent coefficients C iv,i v (ω) are defined in terms of the electric and magnetic Hessians 17,56,57 . Using density fitting [58][59][60] we express the transition densities in terms of an auxiliary basis χ k : and this allows us to write the FDDS as 17,61,62 where theC kl (ω) are the transformed coefficients which are defined asC Using the density-fitted form of the FDDS in eq. 
(7) we get where in the last step we have defined the distributed multipole moment integrals for sites a/b and auxiliary basis functions k/l: Notice that these multipole integrals are analogous with those used to define the ISA-GRID multipole moments shown in eq. (14). This is the ISA-Pol model for distributed frequencydependent non-local polarizabilities. In the cDF and SRLO methods the distribution is achieved via the auxiliary basis functions themselves 14,17 . These methods are linked to the ISA-Pol algorithm by setting the probability functions p a (r) = 1 and limiting the sum over k/l in eq. (19) to include only those auxiliary functions on sites a/b. This has the advantage of simplicity, but disadvantage that the results are dependent on the auxiliary basis set 17 . In the ISA-Pol approach the distributed polarizabilities are uniquely defined for a given set of probability functions p a , and as we know that the ISA solutions are unique 34,63 , we should expect that the ISA-Pol algorithm leads to unique distributed polarizabilities. We shall demonstrate this below. Linearising the algorithm: Issues Once the frequency-dependent coefficientsC kl (ω) have been calculated, the evaluation of α ab tu (ω) using eq. (19) for a given pair of sites a, b and angular momenta t, u scales as O(M 2 ) where there are M auxiliary basis functions in the system. If l is the maximum angular momentum for which distributed polarizabilities to be computed and N is the number of sites in the system, then there are O(N 2 l 4 ) non-local polarizabilities, so the total scaling of the calculation is O(l 4 N 2 M 2 ). If we assume on the average m auxiliary basis functions per site, then M = mN, so the computational scaling is O(l 4 m 2 N 4 ), that is, it scales as the fourth power as the number of sites. While the scaling is not necessarily unfavourable, the pre-factor, l 4 m 2 , can easily be of the order 10 6 , thereby making this calculation computationally burdensome, though it can be trivially parallelized over the pairs of sites a, b. The distributed multipole integral in the auxiliary basis defined in eq. (20) must be evaluated numerically, on a grid due to the ISA probability function p a . This function is defined as the ratio of the ISA shape functions which makes analytic evaluation unfeasible, but these are themselves piecewise continuous, so numerical evaluation is mandatory. As the numerical integration grid size scales with the number of atoms in the system, the evaluation of the Q a t,k integrals using eq. (20) would incur a computational cost scaling as O(l 2 m n g N 3 ), where n g is the average number of grid points per atom, that is, the scaling is O(N 3 ) with number of atoms. As we need fairly dense grids, particularly in the angular coordinates, to converge the higher ranking multipole moment integrals, the pre-factor l 2 m n g can be as large as 10 7 . This can make the evaluation of these integrals a significant computational cost, and even though this evaluation needs to be done only once in a calculation, it would be advantageous if the scaling could be reduced. Fortunately both of these computational costs can be reduced using locality enforced by defining neighbourhoods for each site in the system 50 . We define the neighbourhood N a of site a as site a itself and all other sites whose auxiliary basis functions overlap with those of site a within a specified threshold. 
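A minimal sketch of the contraction in eq. (19), including the neighbourhood restriction discussed in the surrounding paragraphs, is given below in Python. The array names are illustrative, and sign and prefactor conventions (which depend on how the transformed coefficients are defined) are deliberately omitted.

```python
import numpy as np

def isa_pol_block(Q_a, Q_b, C_tilde, nbhd_a, nbhd_b):
    """Schematic assembly of one non-local polarizability block alpha^{ab}_{tu}(omega).

    Q_a, Q_b : distributed multipole integrals Q^a_{t,k} and Q^b_{u,l},
               shape (n_components, n_aux); rows label multipole components t/u,
               columns label auxiliary basis functions k/l.
    C_tilde  : transformed linear-response coefficients, shape (n_aux, n_aux),
               evaluated at the frequency of interest.
    nbhd_a/b : indices of auxiliary functions belonging to the neighbourhoods of
               sites a and b; restricting the sums to these indices gives the
               reduced computational scaling discussed in the text.
    """
    ka = np.asarray(nbhd_a)
    lb = np.asarray(nbhd_b)
    # alpha^{ab}_{tu} ~ sum_{k,l} Q^a_{t,k} * C~_{kl}(omega) * Q^b_{u,l}
    return np.einsum('tk,kl,ul->tu', Q_a[:, ka], C_tilde[np.ix_(ka, lb)], Q_b[:, lb])
```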
Now consider how the neighbourhood N a can be used to reduce the computational cost of the multipole moment integrals Q a t,k for site a: • Integration grids: Rather than spanning all atoms in the system, the grids are based on sites in N a . • Probability function evaluation: p a includes a sum over all sites in the system, but this sum can be restricted to go over only sites in N a . • Auxiliary basis function k: Q a t,k is evaluated only for those k that belong to sites in N a and is set to zero otherwise. With these three changes, the computational cost of evaluating the multipole integrals is reduced to O(N). In a similar manner the cost of evaluating eq. (19) is reduced to O(N 2 ) by restricting the sum over auxiliary basis function indices k and l to include only those functions from sites in the neighbourhood of sites a and b, respectively: At present we use the same neighbourhood definition for the integration grids, ISA probability functions, and auxiliary basis functions. This may not be ideal as it is quite possible that efficiency gains may be obtained by using different definitions for the three. We have yet to explore such a possibility. There are limitations to the use of neighbourhoods to achieve linearity in computational scaling: for heavily delocalized systems such as the π-conjugated molecules the neighbourhoods may need to be increased in order to achieve sufficient accuracy in the polarizabilities. In this case, using neighbourhoods that are too small leads to increased chargeconservation errors in the BS-ISA solution, and to sum-rule violations in the charge-flow 13 contributions to the non-local polarizabilities. A. Localization of the non-local polarizabilities The main focus of this paper is not the non-local polarizabilities defined in eq. (19), but rather the localized distributed polarizability models that can be derived from these using techniques described in detail in some of our previous publications 18, 19 . This is not to diminish the importance of the non-local polarizability models, indeed these models are essential for heavily delocalized systems, and in low dimensional systems leads to van der Waals interactions that cannot be replicated by any local model [53][54][55] . However it is the local models that are commonly used, so for very pragmatic reasons we will focus on these here. Local polarizability models are an approximation, but one that often turns out to be reasonable, particularly for insulators for which electron correlations are largely local. In the WSM algorithm 18,19 we have defined a means for converting any non-local polarizablity model into an effective local one using two transformation steps: • Multipolar localization: In the two-step localization scheme that forms part of the WSM model we first transform away the non-local contributions using a multipole expansion 13 ( §9.3.3). We have explored two schemes for this purpose: the method of LeSueur & Stone 64 and that of Wheatley & Lillestolen 25 . Of these, the latter has the advantage that the non-local terms are localized along the molecular bonds and should result in better convergence of the resulting model. However either of these localization procedures lead to a degradation in the convergence of the resulting polarizability expansion. • Constrained refinement: In this step the multipolar localized polarizability models are refined to reproduce the point-to-point polarizabilities 65,13 ( §9.3.2) computed on a pseudo-random set of points surrounding the molecule. 
The idea here is to use the local polarizabilities from the first step as prior values, and allow them to relax using constraints to keep them close to their original values. These steps can be performed for polarizabilities at any frequency. One of the features of this approach is that at the refinement stage symmetries can be imposed, and if needed, models may be simplified. The WSM procedure ensures that the best resulting model is obtained. In the original WSM model we relied on non-local polarizabilities from the cDF algorithm as the starting point. This did not always work out well as the multipolar localized models often contained terms with unphysical values which would change by a considerable amount in the refinement stage. For this reason the constraints we recommended 19 were weak for the dipole-dipole polarizabilities, and completely absent for the higher ranking terms. The lack of constraints for the higher ranking terms was simply a recognition that our prior values were simply too unreliable. Looked at another way, the final polarizability models depended quite strongly on the kinds of constraints used. Here we use the ISA-Pol non-local polarizabilities as input to the WSM algorithm. From empirical observation we know that the multipolar localized models are already good and only relatively small changes occur on refinement. However the refinement step does still improve the localized models, so we continue to use it, but this time with much stricter constraints. Referring to eq.(36) in Ref. 18 (see also eq. 9.3.13 in Ref. 13), we now define the constraint matrix to be where k/k is a model parameter index (these label the polarizabilities), δ kk is the Kroneker-delta function, w 0 is a constant, and p 0 k is the reference value of the parameter (that is, the local polarizability) obtained from the multipolar step. We use w 0 = 10 −3 for calculations on the larger systems, but for smaller systems, where there is sufficient data in the point-topoint polarizabilities to yield a meaningful refinement of even the higher-ranking polarizabilities, the constraints may be relaxed using w 0 = 10 −5 . It may seem paradoxical to use constraints of any kind if the refinement step does not alter the multipolar localized ISA-Pol model by much. The reason for the use of constraints is that in a mathematical optimization it is possible for parameters to alter without a meaningful change in the cost-function. The constraints prevent this kind of mathematical wandering of parameters, particularly for large systems for which we rarely have enough data in the point-to-point polarizabilities to act as natural constraints to the parameters. IV. NUMERICAL DETAILS All SAPT(DFT) calculations have been performed using the CamCASP 5.9 program 51 with orbitals and energies computed using the DALTON 2.0 program 66 with a patch installed from the Sapt2008 code. The Kohn-Sham orbitals and orbital energies were computed using an asymptotically corrected PBE0 67 functional with Fermi-Amaldi (FA) longrange exchange potential 68 and the Tozer & Handy splicing scheme. Linear-response calculations and ISA-Pol polarizabilities were performed using the same functional but with a developer's version of CamCASP 6.0. The kernel used in the linear-response calculations is the hybrid ALDA+CHF kernel 17,69 which contains 25% CHF (coupled Hartree-Fock) and 75% ALDA (adiabatic local-density approximation). This kernel is constructed within the CamCASP code. The PW91 correlation functional 70 is used in the ALDA kernel. 
The shift needed in the asymptotic correction has been computed self-consistently using the following ionization potentials: thiophene: 0.326 a.u. 71 ; pyridine: 0.3488 a.u. 1 ; water: 0.4638 a.u. 71 ; methane: 0.4634 a.u. 71 . The vibrationally averaged molecular geometry was used for water 72 and methane 73,74 molecules, the pyridine geometry has been taken from Ref. 1, and the thiophene geometry has been obtained by geometry optimization using the PBE0 functional and the cc-pVTZ basis 75 with the NWChem 6.x program 76 . The SAPT(DFT) calculations use two kinds of basis sets: the main basis, used in the density-functional calculations, is in the MC + basis format, that is, with mid-bond and far-bond functions, and the auxiliary basis used for the density fitting is in the DC + format. The following main/auxiliary basis sets were used for the systems studies in this paper: • Methane dimer, water dimer, methane..water complex: main basis: aug-cc-pVTZ with 3s2p1d mid-bond set, and auxiliary basis: aug-cc-pVTZ-RI basis with 3s2p1d-RI basis. The ISA-Pol calculation is preceded by a BS-ISA calculation which is subsequently fed into the distributed polarizability module in CamCASP . As described in Ref. 50, the ISA expansions use basis sets created from a special set of stype functions with higher angular momentum functions taken from a standard resolution of the identity (RI) fitting basis. We have used the following combinations of basis sets for the calculations reported in this paper: • The methane and water molecules: main basis: d-augcc-pVTZ (spherical); auxiliary basis: aug-cc-pVQZ-RI (Cartesian) with ISA-set2 with s-functions on the hydrogen atoms limited to a smallest exponent of 0.25 a.u. atomic basis: like the auxiliary basis, but with spherical GTOs. For these three molecules we used the ∆ stock(A) functional for the ISA calculations, but for the thiophene molecule we used the older 'A+DF' algorithm in which we first converge the ISA solution using the ∆ stock(A) functional, and subsequently use the DF+ISA algorithm with ζ = 0.1, that is, with a weighting of 10% given to ∆ stock(A) and 90% to the density-fitting functional. As we have discussed in §II A, the DF+ISA algorithm places restrictions on the auxiliary basis set, so the basis sets used for the thiophene molecule are different, with the auxiliary and atomic basis sets being the same. For thiophene we have reported results using three kinds of main basis sets: for the aug-cc-pVDZ and aug-cc-pVTZ main basis sets, we have used an auxiliary basis consisting of ISA-set2 s-type functions with higher angular functions taken from the aug-cc-pVTZ-RI basis with spherical GTOs, and for the augcc-pVQZ main basis we have used an auxiliary basis consisting of s-functions from the ISA-set2 basis with higher angular terms from the aug-cc-pVQZ-RI basis also using spherical GTOs. We have not used the aug-cc-pVDZ-RI basis as it is not large enough for an ISA calculation. V. RESULTS Although the non-local polarizability models are fundamental, these are also, at present, of high complexity and are not suitable for most applications. So while we assess some features of the ISA-Pol non-local polarizability models, we will here be primarily concerned with the localized models. A. 
Convergence with rank The assessment of the polarizability models is complicated by the fact that there is no pure polarization energy defined in SAPT or SAPT(DFT): the second-order induction energy in these methods contains both a polarization and a chargetransfer contribution. While it is possible to separate these, for example using regularized SAPT(DFT) 80 , we inevitably then encounter the problem of damping 1,18 . An elegant solution to the first problem is to compute the polarization energy of the molecule interacting with a point charge probe. This has the advantage that the energies can be easily displayed on a surface around the molecule, and as reference energies can be easily computed using the CamCASP program, it is relatively straightforward to make comparisons of the model and reference energies and visualise the differences on the molecular surface. There is however still the issue of the damping, and we have chosen to use a simple proposal: a single parameter Tang-Toennies 81 damping model is used, and the damping param-eter is determined by requiring that the mean signed error (MSE) of the damped model energies against the reference SAPT(DFT) energies is as small as possible. We have studied three series of polarization models for each of the ISA-Pol and cDF distribution algorithms: the non-local, and localized isotropic and anisotropic models. We have determined a polarization damping parameter for each of the six series of models from the highest ranking model in the series; this parameter is then fixed for all lower ranking models in the series. For the ISA-Pol models the damping parameters are 1.57, 1.50 and 1.51 a.u. for the non-local, local (anisotropic), and local (isotropic) models, respectively, while the corresponding damping parameters for the cDF models are 1.32, 1.49 and 1.61 a.u. In Figure 1 we have displayed the reference SAPT(DFT) polarization energies for the pyridine molecule interacting with a +1e point charge probe. The energies are displayed on a 10 −3 isodensity surface computed using the CamCASP program. The resulting polarization energies are uncharacteristically large, due both to this choice of surface (which corresponds approximately to the van der Waals surface) and to the large size of the charge: typical local charges in atomic systems will usually be half as much. Also shown in Figure 1 are the errors made by the damped polarization models against the reference energies. Consider first the non-local models: the positive errors made by the NL1 model indicate an underestimation of the polarization energy. The agreement with the reference energies gets progressively and systematically better as the maximum rank increases through 2 to 3. Results for the NL4 model (the maximum rank of the non-local models, and also the most accurate for the choice of damping) are not shown. The localized, anisotropic models exhibit similar errors, but the localized, isotropic models show larger variations in the errors made. In particular, these models shown an underestimation of the polarization near the hydrogen and nitrogen atoms, and a large overestimation of the polarization in the centre of the ring. This is due to the simplicity of the isotropic models: the polarizability of an anisotropic system like pyridine cannot be correctly modelled everywhere using isotropic AIM polarizabilities. 
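The procedure used above for fixing the single Tang-Toennies damping parameter, namely choosing it so that the mean signed error of the damped model energies against the reference SAPT(DFT) point-charge polarization energies is as small as possible, amounts to a one-dimensional fit. The callable and bounds in this Python sketch are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_damping_parameter(model_energy, e_ref, beta_bounds=(0.5, 3.0)):
    """Choose the Tang-Toennies damping parameter by driving the mean signed
    error (MSE) of the damped model energies towards zero.

    model_energy : callable, model_energy(beta) -> damped model polarization
                   energies at the probe positions (hypothetical interface)
    e_ref        : reference SAPT(DFT) energies at the same probe positions
    """
    def abs_mse(beta):
        return abs(np.mean(model_energy(beta) - e_ref))
    res = minimize_scalar(abs_mse, bounds=beta_bounds, method='bounded')
    return res.x
```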
As with the distributed multipole moments 50 , the ISA AIMs lead to polarization models with better convergence with increasing rank and fewer artifacts in both the non-local and local models. In Figure 2 are shown similar results, this time for the models from the cDF algorithm. These differ from the ISA-Pol models in important ways: first of all the errors are larger, even for the non-local models, but perhaps more importantly, the variations in the errors are much larger for all models. It is the latter that is the bigger concern for model building, as variations in errors arise from to position and angle dependent variations in the quality of the model, leading to unreliable predictions. VI. CONVERGENCE WITH BASIS OF THE LOCALIZED MODELS The next question we need to address is the basis set convergence of the ISA-Pol models. We will not discuss the performance of the non-local or local anisotropic models here as it is difficult to display the data contained in these models in a meaningful and concise manner. Instead we will focus on the local, isotropic models. The construction of a local, isotropic (frequencydependent) polarizability model begins with the multipolar localization (see §III A) of the ISA-Pol non-local model. This results in an anisotropic, local model which has not yet been refined against the point-to-point polarizabilities. The isotropic model may now be obtained in one of three ways: • Directly from the unrefined anisotropic model by retaining only the isotropic part of the polarizabilities. • By refining this isotropic model using the point-to-point polarizabilities. • By refining the anisotropic model as described in §III A and subsequently retaining only the isotropic part of the polarizabilities. The second and third options should, in principle, lead to more accurate models. These two approaches lead to similar, but not identical local, isotropic polarizability models. By refining the isotropic models (the second option) we ensure that the resulting isotropic models are the most accurate possible given the limitations imposed. But while this approach may be applicable to small systems for which the isotropic approximation may be valid, it will fail for strongly anisotropic systems for which the third approach may be more appropriate. We have used the second method to obtain the isotropic polarizability models discussed in this paper. In Table II we present ISA-Pol localized, isotropic polarizabilities for the symmetry-distinct atoms in the thiophene molecule computed in three basis sets. The dipole-dipole polarizabilities (i.e. rank 1) are already reasonably well converged in the aug-cc-pVDZ basis, with the exception of the sulfur atom which needs the larger aug-cc-pVTZ basis. The quadrupole-quadrupole (rank 2) polarizabilities on the carbon and hydrogen atoms are converged in the aug-cc-pVTZ basis but the aug-cc-pVQZ basis is needed for the sulfur atom. At rank 3, the octopole-octopole polarizabilities on the carbon atoms seem to be approaching convergence in the aug-cc-pVQZ basis, but the sulfur atom is far from conver- gence. The negative octopole-octopole terms on the hydrogen atoms seem to be a result of the lack of sufficient higher angular terms on these atoms, and of the absence of dipolequadrupole and quadrupole-octopole polarizabilities in this rather drastic approximation. In the aug-cc-pVQZ basis there is only one negative term present on the H1 atom. Compare these results to those from the cDF approach shown in Table I. 
The ISA-Pol algorithm is clearly the more systematic of the two with the AIM local polarizabilities converged or approaching convergence at all ranks. Dispersion models are obtained from the ISA-Pol-L polarization models computed at imaginary frequency and recombined using methods 65,13 ( §4.3.4) implemented in the Casimir module that forms part of the CamCASP suite of programs. While we can compute both anisotropic and isotropic dispersion models, the isotropic models are easier to analyse and use, so we will focus on these only. In Figure 3 we examine the convergence of the distributed dispersion models with basis set. As the dispersion coefficients span many orders of magnitude, we have instead plotted the ratio C aa n [basis]/C aa n [aDZ] as a function of basis set used. This allows us to readily determine how the dispersion coefficients vary with increasing basis size. In the case of the two carbon atoms, the C 6 and C 8 terms have converged in the augcc-pVTZ basis, and the C 10 and C 12 terms nearly so in the aug-cc-pVQZ basis. For the two hydrogen atoms the C 6 and C 8 terms are converged in the aug-cc-pVDZ basis, but the C 10 and C 12 terms are less settled with basis set. This is probably the result of deficiencies in the higher angular part of the hydrogen basis sets, but this needs to be verified. In any case, the higher ranking dispersion terms do not make a significant contribution to the dispersion energy, and have even been fully omitted in some of our earlier models 82,83 . However the same cannot be said for the sulfur atom which is expected to make an important contribution to the dispersion energy due to its large polarizability: here while the C 6 term is well converged even in the aug-cc-pVDZ basis, the C 8 term is only just stabilizing in the aug-cc-pVQZ basis, and neither C 10 nor C 12 is even close to stabilizing in the largest basis set used. This may be either an artifact of the ISA-Pol algorithm, or a genuine shortcoming of the standard basis sets. Further and more systematic tests on a wider range of systems will be needed to determine the cause of this apparent non-convergence. In Table III we report the ISA-Pol-L isotropic dispersion coefficients for the symmetry-distinct sites in the water, methane, pyridine and thiophene molecules. Only the di- agonal, that is same-site, terms are reported: the complete dispersion models for these molecules and also those for the methane..water complex are given in the S.I. Notice that while the dispersion coefficients for the carbon atoms in these molecules are of similar magnitude, they nevertheless vary considerably in accordance with what might be expected from the variations in the local chemical environment. For example, the C1 atom in pyridine and the C1 atom in thiophene both have smaller dispersion coefficients than the other carbon atoms in the molecules, which should be expected as these atoms are bonded directly to the more electronegative N and S atoms in the respective molecules. Likewise, while the dispersion terms on the hydrogen atoms are similar, those on the hydrogen atom in water are substantially smaller due to the large electronegativity of the oxygen atom in the water molecule. The ability of the ISA-Pol-L models to provide dispersion terms from C 6 to C 12 which respond to the chemical environment of the atoms in the molecule could be used to develop more detailed and comprehensive models for the dispersion energy, but more extensive data sets will be needed for a full analysis. A. 
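The recombination of imaginary-frequency polarizabilities into dispersion coefficients, as performed by the Casimir module, can be illustrated for the simplest case: the isotropic C_6 coefficient obtained from the Casimir-Polder integral C_6^{ab} = (3/π) ∫_0^∞ α_a(iu) α_b(iu) du. The Python sketch below uses a Gauss-Legendre quadrature on a mapped frequency grid and a made-up one-term model polarizability; the full procedure of course also generates the higher-rank and anisotropic coefficients.

```python
import numpy as np

def c6_isotropic(alpha_a, alpha_b, n_quad=10, u0=0.5):
    """Casimir-Polder C6 between two sites from isotropic dipole polarizabilities
    at imaginary frequency, C6 = (3/pi) * integral_0^inf alpha_a(iu)*alpha_b(iu) du,
    evaluated by Gauss-Legendre quadrature with the mapping u = u0*(1+t)/(1-t)."""
    t, wts = np.polynomial.legendre.leggauss(n_quad)
    u = u0 * (1.0 + t) / (1.0 - t)
    du = 2.0 * u0 / (1.0 - t)**2                    # Jacobian of the mapping
    integrand = np.array([alpha_a(ui) * alpha_b(ui) for ui in u])
    return 3.0 / np.pi * np.sum(wts * du * integrand)

# Example with a one-term Drude-like model alpha(iu) = alpha0 / (1 + (u/w)^2):
alpha_model = lambda u: 4.5 / (1.0 + (u / 0.40)**2)
print(c6_isotropic(alpha_model, alpha_model))       # ~6 a.u. for these made-up parameters
```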
Assessing the models using SAPT(DFT)
The ultimate test of any dispersion model is how well it is able to match the reference dispersion energies. Here, as with the polarization models, there is the issue of damping, without which meaningful comparisons can only be made at large intermolecular separations where the damping is negligible. However, such a comparison is not useful from the practical point of view, as we are usually interested in the performance of the models at energetically important configurations, that is, in the region of the energy minimum. Consequently we do need to address the issue of damping, but as this is not the focus of this paper, we will limit the present discussion to the familiar Tang-Toennies 81 damping functions
f_n(x) = 1 - e^{-x} \sum_{k=0}^{n} x^k / k! ,
where the order n corresponds with the rank in the dispersion expansion 13 and x is a function of the site-site distance and the damping coefficient. The damping models we have used differ in the definition of x as follows:
• Ionization potential (IP) damping 82 : here x_ab = \beta_{IP} r_ab, with the single parameter \beta_{IP} determined from I_A and I_B, the vertical ionization energies, in a.u., of the two interacting molecules. This is the simplest of the damping models, with one damping parameter for all pairs of sites (a, b) between the interacting molecules A and B.
• The Slater damping from Van Vleet et al. 2 . Here the damping parameter depends on the pair of interacting atoms and is given by
x_ab = \beta_{ab} r_{ab} - \frac{\beta_{ab} r_{ab} (2\beta_{ab} r_{ab} + 3)}{\beta_{ab}^2 r_{ab}^2 + 3\beta_{ab} r_{ab} + 3} ,
where the parameter \beta_{ab} is now dependent on the sites and is defined as \beta_{ab} = \sqrt{\beta_a \beta_b}, where the parameter \beta_a is extracted from the ISA shape function w_a by fitting it to an exponential of the form K exp(-\beta_a r), and \beta_b likewise 2,50 . This damping function is motivated by the form of the overlap of two such Slater exponentials 2 .
• The scaled ISA damping model is a simplification of the Slater damping model. Here we define a scaled parameter \tilde{\beta}_a = s_A \beta_a for each site in molecule A, where \beta_a is defined above and s_A is the molecule-specific empirical scaling parameter. Next we define \beta_{ab} from the combination rule, \beta_{ab} = \sqrt{\tilde{\beta}_a \tilde{\beta}_b}, and set x_ab = \beta_{ab} r_{ab}. In Ref. 2 the scaling parameter is taken to be a constant s = 0.84 independent of the type of molecule, but here we allow the parameter to vary according to the molecule and determine it empirically by fitting the model energies to the reference dispersion energies.
A small numerical illustration of this damping machinery is sketched below, after the table caption.
In the comparisons of the ISA-Pol-L dispersion models that we now discuss, the reference dispersion energies have been computed using SAPT(DFT) and are defined as the sum of the second-order dispersion and exchange-dispersion energies, E^{(2)}_{DISP} = E^{(2)}_{disp} + E^{(2)}_{exch-disp}. All dispersion models are computed from isotropic ISA-Pol-L polarizabilities; consequently we should expect errors for systems with a strong anisotropy. In all cases the isotropic ISA-Pol-L dispersion models contain even terms from C_6 to C_12 on all atoms.
TABLE III. Localized, isotropic diagonal dispersion coefficients for the symmetry-distinct sites in the pyridine, water, methane, and thiophene dimers computed with the ISA-Pol-L model. The off-diagonal terms, including those between water and methane, are provided in the S.I. These results were computed using the d-aug-cc-pVTZ basis, with the exception of the thiophene molecule, for which we report results computed in the aug-cc-pVQZ basis. Due to the large range of numbers involved, the data are provided in a compact exponential notation with the power of 10 indicated in parentheses, that is, x.y(n) = x.y × 10^n. Atomic units are used for all dispersion coefficients.
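As a concrete illustration of the damping machinery described above, here is a small self-contained sketch that (i) implements the Tang-Toennies function, (ii) extracts an atom-specific β by fitting the tail of an isotropic shape function to K exp(-βr), and (iii) evaluates a scaled-ISA-damped site-site dispersion sum. All numerical values and function names are illustrative assumptions, not CamCASP routines or fitted parameters.

```python
import math
import numpy as np

def f_tt(n, x):
    """Tang-Toennies damping function f_n(x) = 1 - exp(-x) * sum_{k<=n} x^k / k!."""
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n + 1))

def beta_from_shape(r, w, r_min=2.0):
    """Fit the tail of an isotropic shape function w(r) to K*exp(-beta*r);
    a linear fit of ln w versus r gives beta as minus the slope."""
    mask = (r > r_min) & (w > 1e-14)
    slope, _ = np.polyfit(r[mask], np.log(w[mask]), 1)
    return -slope

def damped_pair_dispersion(r_ab, beta_a, beta_b, c_ab, s=0.84):
    """Scaled-ISA damped dispersion between two sites:
    E = -sum_n f_n(x) * C_n / r^n, with x = sqrt(s*beta_a * s*beta_b) * r."""
    x = math.sqrt(s * beta_a * s * beta_b) * r_ab
    return -sum(f_tt(n, x) * cn / r_ab ** n for n, cn in c_ab.items())

# Toy example: a shape function decaying as exp(-1.7 r), and made-up C_n values (a.u.).
r = np.linspace(0.5, 10.0, 400)
w = 0.8 * np.exp(-1.7 * r)
beta = beta_from_shape(r, w)   # recovers ~1.7
print(damped_pair_dispersion(7.0, beta, beta, {6: 25.0, 8: 6.0e2, 10: 1.5e4, 12: 4.0e5}))
```

The molecule-specific scaling parameter s used in the paper (0.76 for water and methane, 0.71 for pyridine) would then be determined by least-squares fitting of such model energies to the SAPT(DFT) reference dispersion energies.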
In Figure 4 we display dispersion energies for the methane dimer in more than 2600 dimer configurations. Because the methane molecule has high symmetry and indeed is nearly spherical, we should expect the dispersion energy of this system to be well approximated by an isotropic dispersion model. This is indeed the case, and we see nearly perfect correlation of the ISA-Pol-L dispersion energies with the scaled damping model with the reference energies. In this case a scaling parameter of 0.76 was determined. On the other hand, the IP damping model which we have recommended in the past does not provide sufficient damping, and nor does the Slater model, though it is better. Figure 5 shows data for the water dimer in more than 2000 dimer configurations. Water is a more anisotropic system than methane, and we cannot expect the isotropic models to behave as well for water dimer as for methane dimer. Once again both the IP and Slater damping models result in underdamping, though not as severely as for methane dimer. The scaled damping model with a scaling factor of 0.76 fares far better, resulting in dispersion energies for most of the dimers within ±5% from the reference energies. In Figure 6 we have displayed dispersion energies for the mixed methane···water system. The picture is the same, with the scaled damping model correlating very well with the reference energies. In Figure 7 we display dispersion energies for the pyridine dimer in over 700 configurations taken from data sets 1 and 2 from Ref. 1. The pyridine molecule is the most anisotropic one we have considered in this paper and we may therefore expect to see a relatively large scatter in the model dispersion energies. This is indeed the case: while the scaled damping dispersion model still results in the best dispersion energies, these now deviate from the reference energies by slightly more than 5%. The scaling parameter has been determined to be 0.71 which is smaller than the values obtained for the water and methane systems, and considerably smaller than the value of 0.84 recommended by Van Vleet et al. 2 Part of the reason for this is that the AIM densities for the pyridine molecule are themselves strongly anisotropic due to the π-electron density of the molecule, but the parameters β a used in eq. (26) are obtained from the isotropic shape functions and therefore the correct AIM density decay is not obtained. Instead the anisotropic AIM densities ρ a should be used, and we are currently investigating this possibility. Curiously, for this system the IP damping model is quite similar to the scaled damping, but the Slater damping model once again under-damps. B. Convergence with rank Although it is reasonably well known that the dispersion expansion should include terms beyond C 6 , it is perhaps not as well appreciated just how many terms are required for this expansion to converge (when appropriately damped). We have explored this issue in a previous paper 82 , where we concluded that models including terms to at least C 10 were needed to achieve sufficiently good agreement with SAPT(DFT). In Figure 8 we present even more extensive data for the methane dimer which clearly demonstrates that the C 6 -only models commonly used in simple force-fields, and indeed in many dispersion corrections to density-functional theory, severely underestimate the dispersion energy from SAPT(DFT). For this dimer, we need to include terms to C 10 before we begin to agree with the reference energies to within 5%. C. 
Combination rules Dispersion models in common intermolecular interaction models are usually constructed to satisfy combination rules, usually through a constrained fitting process (see for example Ref. 42). This has the advantage of greatly reducing the number of parameters in the model, and the most commonly used geometric mean combination rule has good justification from theory, although the actual dispersion coefficients may not satisfy a combination rule accurately. The geometric mean combination rule defines the mixed site C ab n dispersion coefficients as follows: where C aa n and C bb n are the same-site coefficients. This combination rule may be derived for the n = 6 terms 84 from the exact expression for the isotropic C ab 6 coefficient: by using the single-pole approximation to the isotropic frequency-dependent polarizabilities where v 0 is the pole. We additionally have to assume that the poles for the two sites a and b are similar, that is, v a 0 ≈ v b 0 . This is identical to the Unsöld average energy approximation 13 . The advantages of this combination rule are apparent: for a system of N interacting sites, only O(N) dispersion coefficients would be needed, rather than the O(N 2 ) needed without such a rule. Do the ISA-Pol dispersion models satisfy the geometric mean combination rule? Once again this question is a complex one if we account for the angular variation of the dispersion parameters, so here we will restrict this discussion to the isotropic dispersion models only. In Figure 9 we plot the dispersion coefficients for the thiophene molecule computed using the geometric mean combination rule against reference ISA-Pol-L isotropic dispersion coefficients. This is done for the aug-cc-pVnZ, n = D,T,Q basis sets. It can be seen that the ISA-Pol-L models satisfy the combination rule very well for n = 6, 8, 10, 12, that is for all ranks of the dispersion coefficients considered in this paper. In all cases, the terms that are most in error are those involving at least one of the hydrogen atoms, but these errors are reduced as the basis set gets larger, echoing the trend to more well-defined polarizabilities seen in Table II. This property of the dispersion models derived from ISA-Pol-L polarizabilities seems to hold for a variety of systems, though less well for those containing a larger fraction of hydrogen atoms. This is remarkable given that the combination rules are never imposed, and there is no reason to expect the single-pole approximation to hold, or indeed for the poles on different atoms to be similar. Further work is needed to analyse exactly why this is the case, and if and when it breaks down, but this property of the ISA-Pol-L models, if generally applicable, will be a very useful feature for the development of models of more diverse interactions. VII. ANALYSIS & OUTLOOK We have described and implemented the ISA-Pol algorithm for computing distributed frequency-dependent polarizabilities and dispersion coefficients for molecular systems. This algorithm is based on a basis-space implementation 50 of the iterated stockholder atoms (ISA) algorithm of Lillestolen and Wheatley 33 . We have described a simpler and more versatile implementation of the BS-ISA algorithm and have implemented this algorithm in a developer's version of CamCASP 6.0. This new algorithm allows for higher accuracies in the ISA solution and in the resulting distributed properties. Additionally the algorithm has a computational cost that scales linearly with the system size. 
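Referring back to the combination-rule discussion above: the two formulas alluded to there are the exact Casimir-Polder expression C_6^{ab} = (3/π) ∫_0^∞ α^a(iν) α^b(iν) dν and the single-pole form α^a(iν) ≈ α^a(0) / (1 + (ν/ν_0^a)^2). The short sketch below evaluates the integral numerically for two single-pole polarizabilities and compares C_6^{ab} with the geometric mean √(C_6^{aa} C_6^{bb}); the pole positions and static polarizabilities are arbitrary illustrative values, not data from this work.

```python
import numpy as np

def alpha_single_pole(a0, v0):
    """Single-pole model polarizability at imaginary frequency: a0 / (1 + (v/v0)^2)."""
    return lambda v: a0 / (1.0 + (v / v0) ** 2)

def c6(alpha_a, alpha_b, v_scale=0.5, npts=40):
    """Casimir-Polder integral C6 = (3/pi) * int_0^inf alpha_a(iv)*alpha_b(iv) dv,
    by Gauss-Legendre quadrature mapped to (0, inf) via v = v_scale*(1+t)/(1-t)."""
    t, wts = np.polynomial.legendre.leggauss(npts)
    v = v_scale * (1.0 + t) / (1.0 - t)
    jac = 2.0 * v_scale / (1.0 - t) ** 2
    return (3.0 / np.pi) * np.sum(wts * jac * alpha_a(v) * alpha_b(v))

a = alpha_single_pole(10.0, 0.40)   # site a: static polarizability 10 a.u., pole at 0.40 a.u.
b = alpha_single_pole(4.0, 0.55)    # site b: different static value and pole position
c_ab, c_aa, c_bb = c6(a, b), c6(a, a), c6(b, b)
print(c_ab, np.sqrt(c_aa * c_bb))   # close, but not identical, because the poles differ
```

This is essentially the content of the Unsöld-type argument in the text: when the effective poles on the two sites coincide, the geometric-mean rule becomes exact for C_6.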
The ISA-Pol algorithm results in non-local distributed polarizabilities which can be localized to result in approximate atomic polarizabilities using schemes we have discussed and demonstrated. The resulting models have many of the desired properties discussed in the Introduction. The most important of these are: • Systematic convergence of the ISA-Pol non-local polarizabilities as a function of rank. This model has been demonstrated to converge more systematically than the constrained density fitting, cDF, model we have previously proposed 17 , and also the related SRLO algorithm from Rob & Szalewicz 14 . • The localized ISA-Pol polarizabilities (ISA-Pol-L) are well defined and are usually positive definite where local models can give a good account of what are inherently non-local effects. In other words, for systems with relatively short electron correlation lengths, the ISA-Pol-L models are appropriate and systematic and lead to reasonably accurate polarization energies. • We have demonstrated that the ISA-Pol-L polarizabilities converge systematically with basis set and appear to have a well-defined basis set limit. The systematic behaviour of these distributed polarizabilities should make it possible to extrapolate the polarizabilities of the atoms-in-the-molecule (AIMs) to the complete basis set limit. This was not possible with the WSM models 18,19 built from cDF non-local polarizabilities as has been illustrated in the Introduction. • Dispersion models constructed from the ISA-Pol-L frequency-dependent polarizabilities are well defined and, when suitably damped, show exceptionally good reproduction of the SAPT(DFT) dispersion energies for a variety of anisotropic systems. • Damping of the dispersion models is achieved using the Tang-Toennies functions with atom-specific damping parameters derived using the BS-ISA algorithm. A single scaling parameter is used as described by Van Vleet et al. 2 , though we have allowed the scaling parameter to vary with the molecule. • The isotropic dispersion coefficients from the ISA-Pol-L algorithm have been shown to satisfy the geometric mean combination rule that is used in many empirical models for the dispersion energy, but is not imposed at any stage in developing the localized ISA-Pol polarizabilities. This is the case for terms from C 6 to C 12 and the accuracy of the combination rule improves with increase in the basis set used for the ISA-Pol calculation. These properties alone make the ISA-Pol and associated localized ISA-Pol-L models promising candidates for developing detailed and accurate polarization and dispersion models for intermolecular interactions. At present, these methods are limited to closed-shell molecules, but this is to a large extent a limitation of the implementation in the CamCASP 6.0 program. Amongst the issues that we have not yet resolved adequately are the determination of the damping of the polarization and dispersion models, and the problem of the anisotropy of the dispersion models. The polarization damping question has been raised by one of us elsewhere 80 but it needs to be revisited in context of the ISA-Pol models for which the damping needed is clearly different from models derived from the cDF polarizabilities (see §V A). The damping models introduced by Van Vleet et al. 2 are definitely promising. In particular, we have shown that the scaled ISA damping model can result in dispersion energies that agree with the reference SAPT(DFT) total dispersion energy, E (2) DISP , to 5% or better. 
In fact, for the methane dimer the agreement is much better than 5%, and also substantially better than that achieved by a recently proposed anisotropic LoProp-based dispersion model 16 . However there remains the question of how this can be improved and it seems like there are a few issues that need to be investigated: • Anisotropy in the damping: Perhaps the damping coefficients need to be extracted from the ISA AIM densities ρ a rather than from the ISA shape functions w a as we do currently. This would have the consequence of making the damping parameters anisotropic and these may be more appropriate at modelling interactions involving sites that are themselves strongly anisotropic. This would be the case for the oxygen atom in water and for the carbon atoms in a π-conjugated system. • Anisotropy in the dispersion coefficients: The dispersion models derived from the ISA-Pol-L polarizabili-ties include anisotropy, but we have, as yet, focused only on the isotropic parts of these models. This has been done mainly for computational reasons: most simulation codes accept only isotropic dispersion models, and the anisotropic models tend to be very complex. Recently, Van Vleet et al. 5 have demonstrated how the inclusion of atomic anisotropy can result in a rather significant improvement in the model energies, but this approach is empirical in the sense that the anisotropy parameters are determined by fitting to reference SAPT(DFT) dispersion energies. We need a way to develop practical models in a non-empirical manner. We have not investigated the transferability of the ISA-Pol polarizabilities as these are not the fundamental AIM polarizabilities, but are effective atomic polarizabilities after throughspace polarization in the Applequist sense 13,85 has been taken into account. It should however be possible to derive the 'bare' AIM polarizabilities from those computed from ISA-Pol and this is something we are currently exploring. Finally, the fundamental relation of the ISA-Pol models with the underlying ISA decomposition may eventually lead to the development of approximations that allow the models to be mapped onto the properties of the ISA AIM densities. If possible, this would significantly increase our ability to easily construct polarization models for complex molecular system, especially those too large for routine linear-response calculations in a large enough basis set. This too is something we are currently exploring. VIII. ACKNOWLEDGEMENTS We dedicate this article to the memory of Dr János Ángyán for a friendship and for many scientific discussions, one of which led to the ISA-Pol method. AJM thanks Queen Mary University of London for support and the Thomas Young Centre for a stimulating environment, and also the Université de Lorraine for a visiting professorship during which part of this work was completed. We also thank Dr Rory A. J. Gilmore for assistance in calculating the SAPT(DFT) reference energies for the water..water, methane..methane, and water..methane complexes. We thank Dr Toon Verstraelen for helpful comments on the manuscript. IX. ADDITIONAL INFORMATION All developments have been implemented in a developer's version of the CamCASP 6.0 51 program which may be obtained from the authors on request. CamCASP has been interfaced to the DALTON 2.0 (2006 through to 2015), NWChem 6.x, GAMESS(US) , and Psi4 1.1 programs. The supplementary information (SI) contains additional data from the systems we have investigated but not included in this paper. I. 
Localized, isotropic polarizabilities for the symmetry-distinct sites in the thiophene molecule computed with the SRLO-L model, that is, using SRLO non-local polarizabilities localized using the WSM algorithm. The basis sets used are aug-cc-pVDZ (aDZ), aug-cc-pVTZ (aTZ), and aug-cc-pVQZ (aQZ). Atom C1 is the carbon atom attached to the sulfur atom and H1 is the hydrogen atom attached to C1. Atomic units are used for all polarizabilities.
I. FIGURES AND TABLES FROM THE SRLO AND CDF METHODS
These figures and tables are the analogues of the ISA-Pol data presented in the main body of the paper. They are provided here so as to facilitate comparisons between these methods and the ISA-Pol approach.
A. Convergence of distributed polarizabilities with basis
The data for the ISA-Pol-L and cDF-L approaches are in the main paper. Here we present the data for the SRLO-L method. Relative dispersion coefficients for the symmetry-distinct sites in thiophene are computed using the localized, isotropic SRLO polarizabilities in the aug-cc-pVDZ (aDZ), aug-cc-pVTZ (aTZ), and aug-cc-pVQZ (aQZ) basis sets; the dispersion coefficients are given relative to the values computed in the aug-cc-pVDZ basis.
II. GEOMETRIES & LOCAL-AXIS DEFINITIONS
The molecular geometries used for the molecules studied in this paper are provided here, along with the local axis systems used to define the localized polarizabilities. The axis systems are defined in a way suitable for use in the Orient program, and the reader is referred to the documentation of that program for further details. For example, the axis definitions for a molecule with sites C and H1-H4 read:
C  z from C to H1   x from H3 to H2
H1 z from C to H1   x from H3 to H2
H2 z from C to H2   x from H4 to H1
H3 z from C to H3   x from H1 to H4
H4 z from C to H4   x from H2 to H3
End
E. Thiophene dimer
Here we include the ISA-Pol-L, cDF-L and SRLO-L dispersion models for thiophene as calculated using the aug-cc-pVQZ basis. Details of the electronic structure methods used are given in the main paper.
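To make the axis-definition convention explicit: a definition such as "z from C to H1, x from H3 to H2" fixes a right-handed local frame by taking z along the first vector and Gram-Schmidt-orthogonalizing the second direction against it. The sketch below is an illustrative re-implementation of that convention (it mirrors, but is not, the Orient code), using made-up tetrahedral coordinates.

```python
import numpy as np

def local_axes(origin, z_to, x_from, x_to):
    """Build a right-handed local axis frame from an Orient-style definition
    like 'z from C to H1, x from H3 to H2' (Gram-Schmidt on the x hint)."""
    z = z_to - origin
    z = z / np.linalg.norm(z)
    x = x_to - x_from
    x = x - np.dot(x, z) * z          # remove the component along z
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                # completes the right-handed frame
    return np.column_stack([x, y, z])

# Toy tetrahedral coordinates (bohr), ordering C, H1..H4:
C  = np.zeros(3)
H1 = np.array([ 1.19,  1.19,  1.19]); H2 = np.array([-1.19, -1.19,  1.19])
H3 = np.array([-1.19,  1.19, -1.19]); H4 = np.array([ 1.19, -1.19, -1.19])
R = local_axes(C, H1, H3, H2)         # site C: z from C to H1, x from H3 to H2
print(R)
```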
Energy metabolism disturbance in migraine: From a mitochondrial point of view
Migraine is a serious central nervous system disease with a high incidence rate. Its pathogenesis is very complex, which brings great difficulties for clinical treatment. Recently, many studies have revealed that mitochondrial dysfunction may play a key role in migraine: it disturbs Ca2+ homeostasis (Ca2+ overload), promotes the excessive production of free radicals, decreases the mitochondrial membrane potential, unbalances the opening and closing of the mPTP, and lowers the level of oxidative phosphorylation, which leads to neuronal energy exhaustion and apoptosis and finally lowers the pain threshold and triggers migraine attacks. This article mainly introduces cortical spreading depression, one mechanism in the pathogenesis of migraine, and describes how it damages the related functions of mitochondria, which in turn contributes to migraine. Oxidative phosphorylation and the tricarboxylic acid cycle are the main ways of providing energy for the body; 95% of the energy needed for cell survival is provided by the mitochondrial respiratory chain. At the same time, hypoxia can lead to cell death and migraine. The pathological opening of the mitochondrial permeability transition pore can promote the interaction between pro-apoptotic proteins and mitochondria, destroy the structure of the mPTP, and further lead to cell death. The increase of mPTP permeability can promote the accumulation of reactive oxygen species, which leads to a series of changes in the expression of proteins related to energy metabolism. Both nitric oxide and calcitonin gene-related peptide are closely related to migraine attacks, and recent studies have shown that changes in their levels can also affect the energy metabolism of the body. This paper therefore reviews the above mechanisms and discusses the mechanisms of brain energy metabolism in migraine, to provide new strategies for the prevention and treatment of migraine and to promote the development of individualized and accurate treatment of migraine.
Introduction
Migraine is a syndrome with various neurological and non-neurological manifestations. It is the sixth leading cause of disability in the world, and over time it has moved further up this ranking, affecting 11% of the world's adults and seriously compromising personal health, quality of life, and social and economic development (Murray et al., 2012; Cauchi and Robertson, 2016; Kowalska et al., 2016; Goadsby et al., 2017). Several Italian surveys have shown that migraine has a significant impact on the interpersonal relationships, emotional state, and quality of life of patients in their daily lives (Leonardi et al., 2010; Raggi et al., 2013). Therefore, the prevention and treatment of migraine has become a global problem and is of increasing concern. The complex pathogenesis of migraine poses several difficulties for its study. Impaired cerebral energy metabolism is closely related to the migraine attack threshold, and abnormal mitochondrial enzyme function is a common cause of impaired cerebral energy metabolism. Under specific environmental and triggering factors, oxidative stress and impaired cerebral energy metabolism can not only trigger migraine attacks but can even influence the severity of migraine (Gross et al., 2019). Mitochondria are sites of energy production and release and play an important role in various tissues.
They regulate the energy metabolism of brain neurons and other cells mainly through the regulation of ATP synthesis, oxidative phosphorylation reactions, tricarboxylic acid cycle (TCA), and ion homeostasis. Most of the body's energy supply is generated through the oxidative phosphorylation pathway, or the electron transport chain (ETC) contained in the mitochondria. Oxygen consumption in the brain can account for up to 25% of the body's total consumption, and its effective function requires a continuous supply of energy from ATP. Abnormal energy metabolism of mitochondria can easily damage brain tissue's normal function, so the brain and muscles are the most seriously damaged tissues in mitochondrial diseases (Roos-Araujo et al., 2014). It has been shown that the prevalence of migraine in patients with mitochondrial disease is high, suggesting that mitochondrial dysfunction may be one of the triggers for the development of migraine (Tiehuis et al., 2020). The phenomenon of cortical spreading depression (CSD) was first discovered by Leao in 1944 in experimental animals, CSD is a wave of intense depolarization of neurons and glial cells, originally described as a cortical response to noxious stimuli, manifested by negative fluctuations in cortical stability potentials and spontaneous suppression of cortical electrical activity. During the recovery of CSD, a large number of ion pumps are activated and oxygen consumption and metabolism increase, leading to hypoxia in brain tissue. In hypoxia, the ETC and oxidative phosphorylation pathways are impaired, enzyme activity in the ETC is inhibited, ATP synthesis is blocked, and excess oxygen radicals are generated, causing damage to the mitochondria, which in turn leads to cell damage. Nitric oxide (NO) is a key factor in the development of migraine and other headaches and is thought to be the trigger point for primary headaches, playing a key role in the development of migraine (Olesen, 2008). Calcitonin gene-related peptide (CGRP) is a multifunctional neuropeptide. It plays an important role in the pathophysiology of migraine by modulating neurogenic inflammation and regulating nociceptive afferents (Russell et al., 2014;Russo, 2015), CGRP and its receptors are being used as promising targets for the treatment of migraine (Edvinsson, 2015). Experimental studies have shown that changes in neurotransmitters and vasoactive substances caused by CSD play an important role in migraine attacks and that the occurrence of CSD can cause changes in the release of NO in plasma and the levels of vasoactive peptides such as CGRP (Read et al., 2000). High concentrations of NO and CGRP can further affect the activity of various components of the mitochondrial respiratory chain, resulting in damage to the mitochondrial respiratory chain, reduced ATP production, and excessive reactive oxygen species (ROS) production, leading to oxidative stress and thus causing neuronal dysfunction, which can lead to migraine (Benemei et al., 2013). Mitochondrial dysfunction and excessive ROS production can also affect the expression of NO and CGRP, and the above factors interact with each other to eventually lead to the development of migraine. In this paper, oxidative phosphorylation, tricarboxylic acid cycle, oxidative stress, NO and CGRP, which are closely related to migraine, are used as entry points to explore the mechanism of cerebral energy metabolism in migraine and provide new ideas for the treatment of migraine. 
Cortical spreading depression and migraine CSD is one of the factors in the pathogenesis of migraine. Studies have shown that patients with migraine have the transmission pattern of CSD, which activates the trigeminovascular system (TVS) pathway (Unekawa et al., 2015). CSD can induce significant changes in neurovascular responses, including the release of NO, while high concentrations of NO can significantly inhibit the activity of many components in the mitochondrial respiratory chain, resulting in damage to the mitochondrial respiratory chain. Mitochondrial damage in any part of the brain may lead to mitochondrial energy deficiency, while migraine is a response to brain energy deficiency or oxidative stress levels exceeding antioxidant capacity (Fila et al., 2019;Gross et al., 2019). Studies have shown that CSD can be an inducer of ROS formation in the cerebral cortex, meninges and trigeminal ganglia. ROS may be involved in the formation of central and peripheral sensitization by regulating protein kinase activity, altering glutamatergic nerve transmission, mediating neurogenic inflammation, and regulating ion channels, such as transient receptor potential channel V1 (TRPV1) (Meents et al., 2010). ROS can also directly activate pain receptors and promote peptidergic nerves to release migraine mediator CGRP. CGRP can act on vascular smooth muscle cells, vascular endothelial cells, and trigeminal ganglion nerve cells, resulting in corresponding vasodilation, further causing the attack of migraine. A critical role in the regulation of cortical susceptibility to CSD is played by ROS/transient receptor potential A1 (TRPA1)/ CGRP signalling. Inhibition of ROS and deactivation of TRPA1 channels may have therapeutic benefits in the prevention of stress-induced migraine via CGRP (Jiang et al., 2019). During CSD episodes, it leads to hypoxia in the brain, which impairs the normal function of mitochondria and eventually leads to oxidative stress, which can also further induce CSD (Borkum, 2021). Some studies have shown that the increase of intracellular Ca 2+ concentration in astrocytes leads to vasoconstriction during CSD, which is mediated by phospholipase A2, an arachidonic acid metabolite (Metea and Newman, 2006). A high concentration of intracellular Ca 2+ enhances peripheral pain, while a low concentration of intracellular Ca 2+ inhibits peripheral pain. The main mechanism of intracellular Ca 2+ blocking is through mitochondria, and mitochondrial dysfunction can lead to pain hypersensitivity. Mitochondrial calcium overload can depolarize the mitochondrial membrane and increase ROS, resulting in mitochondrial dysfunction, and affects ATP production through the Mitochondrial Permeability Transition Pore (mPTP), Cytochrome c (Cyt-c), Nitric Oxide Synthase (NOS), ETC, Complex I and Complex IV (Brookes et al., 2004;Xu et al., 2020a). Relationship between oxidative phosphorylation and migraine Mitochondria have multiple functions, an important one being the production of ATP through oxidative phosphorylation (eukaryotes). Experimental results suggest that CSD may exacerbate brain mitochondrial damage under hypoxic conditions . Mitochondrial damage includes changes in mitochondrial respiratory function and mitochondrial membrane potential, and hypoxia can lead to cell death and induce migraine (Ouyang et al., 2006). In the mitochondria, the process of ATP production depends on the cellular respiratory function. 
95% of the energy needed for cell survival is provided by the mitochondrial respiratory chain, which includes two processes: TCA and oxidative phosphorylation. The oxidative phosphorylation process is mainly involved in five molecular complexes with enzyme activity located on the mitochondrial inner membrane, namely, complex I, complex II, complex III, Complex IV, and complex V, namely, ATP synthase (Kobayashi et al., 2020). Complex I (NADH ubiquinone oxidoreductase/NADH dehydrogenase) has a redox module and is a classical L-shaped structure, including hydrophobic and hydrophilic structures. It is the main electron portal of the mitochondrial electron transport chain (mETC) and provides up to 40% protons for the formation of mitochondrial ATP. Under pathological conditions, complex I am the main source of reactive oxygen species. It was found that both PACAP-38 (pituitary adenylate cyclase-activating polypeptide-38) and PACAP (6-38) treatment resulted in significant downregulation of complex I subunit B6 expression in primary cultured trigeminal ganglion cells (Takács-Lovász et al., 2022), and metabolic changes and mitochondrial dysfunction, such as reduced complex I, III, IV and citrate synthase activities, were detected in migraine patients (Gross et al., 2019). Complex II (succinate-ubiquinone oxidoreductase/succinate dehydrogenase) exists in various aerobic organisms and is a complete membrane protein complex in the tricarboxylic acid cycle and aerobic respiration. Studies have shown that proapoptotic compounds, such as various anticancer drugs. Fas Ligand (FasL), or TNF-α, can induce a decrease in cytoplasmic and mitochondrial pH. These pH changes lead to the dissociation of SDHA and SDHB subunits from complex II, resulting in partial inhibition of complex II activity without any damage to SDH response. This specific inhibition leads to complex II uncoupling, superoxide production, and apoptosis (Lemarie et al., 2011). Riboflavin is required for the conversion of oxidized glutathione to the reduced form and mitochondrial respiratory chain, as complex I and II contain flavoprotein reductase and electron transferring flavoprotein (Plantone et al., 2021). Riboflavin deficiency has been shown to impair the oxidative state of the body, and some experiments have also shown that riboflavin can be effective in treating migraines (Thompson and Saluja, 2017). Complex III (cytochrome c reductase/cytochrome bc1 complex) is the central component of the respiratory chain, which transfers electrons from coenzyme Q (transfer from complex I and complex II) to cytochrome c (within complex IV) and contributes to the production of proton gradients (mitochondrial inner membrane). Migraine prevention can be achieved by applying highly specific inhibitors of oxidant production at the ubiquinone position of complex I or complex III (Orr et al., 2015). These inhibitors do not affect electron transport or ATP production, but they do reduce ROS. This approach prevents oxidative stress from reverse electron transport, but presumably does not affect the consumption of antioxidant defences by nicotinamide nucleotide transhydrogenase (NNT), which operates in the reverse mode (Borkum, 2021). Complex IV (cytochrome c oxidase) is the last electron acceptor of the respiratory chain, which participates in the reduction of oxygen to water molecules and transfers protons in the mitochondrial matrix to the intermembrane space, which is also helpful to produce the transmembrane proton gradient difference. 
Complex IV has been shown to bind to NO. NO isolation may exist in the blood vessels rich in complex IV, thus preventing vasodilation (Torres et al., 1995). Complex Ⅳ is the main target of gas signaling molecules to inhibit mitochondrial respiration. NO, carbon monoxide (CO) and hydrogen sulfide (H2S) can all reduce oxygen consumption and ATP production through complex Ⅳ pathway (Cooper and Brown, 2008). The mitochondrial dysfunction induced by nitroglycerin (GTN) was associated with abnormal levels of Bax, Bcl-2, cytochrome C oxidase and ROS, and these changes were attenuated by valproate treatment. As Bax, Bcl-2 and ROS are closely related to cell apoptosis, the abnormal Bax, Bcl-2 and ROS levels in study suggest that migraine may also be related to neurocyte apoptosis . The final enzyme of the oxidative phosphorylation pathway is the ATP synthase complex, which can use the transmembrane proton electrochemical potential energy formed by mETC to drive ADP to combine with Pi to form ATP. ATP synthase complex is a kind of protein complex that can exchange mitochondrial ADP and other inorganic salts with ATP. The ATP synthase complex is composed of F1F0-ATP synthase and two members of the mitochondrial solute carrier (SLC) protein family, namely, adenine nucleotide transferase (ANT) and mitochondrial phosphate carrier (PiC). Among them, the SLC25A3 gene is an important part of the PiC gene. It is found that the expression of PiC is closely related to the expression of SLC25A3 (Bhoj et al., 2015). ANT includes three subtypes, namely, SLC25A4 (ANT1), SLC25A5 (ANT2), and SLC25A6 (ANT3), while SLC25A3 mediates mitochondrial absorption and is responsible for the exchange between ATP and ADP. Studies have shown that the dimer of mitochondrial ATP synthase is very important for the formation of mPTP (Giorgio et al., 2013). The c subunit of mitochondrial ATP synthase may be necessary for mPTPdependent mitochondrial breakage and cell death (Bonora et al., 2013). F1F0-ATP synthase is a reversible enzyme, when calcium flux out of the matrix is positive, the proton gradient is reinforced, supporting the production of ATP. Conversely, when the flux is negative, the proton gradient is compromised. If too many protons accumulate in the matrix, the F0F1-ATP synthase reverses and the enzyme consumes ATP rather than produces it (Xu et al., 2020b). CSD is one of the pathogenesis of migraine. During CSD attacks, Ca 2+ enters mitochondria in large quantities, thus affecting F0F1-ATP synthase, so F0F1-ATP synthase is a new direction to study the pathogenesis of migraine. A feedforward loop will be formed among Ca 2+ , ROS, and mPTP (Qi and Shuai, 2016). That is, the accumulation of Ca 2+ in mitochondria can promote the production of ROS (Bertero and Maack, 2018), they act together to regulate the opening of the mPTP (Kovac et al., 2017). Among them, Ca 2+ enters the mitochondria through the mitochondrial calcium uniporter (MCU) complex and leaves the mitochondria through Na + /Ca 2+ exchanger (mNCX), H + /Ca 2+ exchanger (mHCX), and mPTP. The transient opening of mPTP can quickly release Ca 2+ . Calcium ion Ca 2+ homeostasis in mitochondria is achieved through mNCX, mHCX, MCU complex, and the mPTP (Agarwal et al., 2017). Excessive Ca 2+ in mitochondria may induce oxidative stress by depolarizing mitochondria, increasing the activity of complex I and II, and impairing mitochondrial function. Mitochondria play an important role in regulating Ca 2+ homeostasis. 
The absorption of Ca 2+ by mitochondria can buffer the concentration of Ca 2+ in and out of the cytoplasm, form intracellular Ca 2+ signal, and stimulate ATP production. When too much Ca 2+ is ingested, calcium phosphate deposits are formed in the mitochondria, which reduces the production of ATP and leads to mitochondrial dysfunction. CaV2.1 channels are calcium channels located in the presynaptic membrane and play an important role in communicating between neurons by controlling neurotransmitter release, while presynaptic Cav2 channels might be expected to drive CGRP release associated with migraine, high-voltage-activated and canonical postsynaptic Cav1 channels and low-voltage-activated Cav3 channels have both been found to regulate CGRP release in the trigeminal ganglion. Ca 2+ , potassium and sodium levels are all altered in the course of a migraine. This has led scientists to argue that migraine is a channelopathy (Antunes et al., 2022). CSD is a pathophysiological phenomenon that may be a contributor to brain damage in migraine. The researchers found that topical application of a cortical ionophore increased the rate of CSD propagation and that a higher dose of this compound induced CSD, suggesting a role for Ca 2+ influx into cells in CSD (Torrente et al., 2014). Mitochondrial permeability transition pore The mitochondrial permeability transition pore is a complex located on the mitochondrial membrane, which was previously thought to be composed of voltage-dependent anion channels (VDAC) in the outer membrane of the mitochondria, ANT in the mitochondrial inner membrane, and cyclophilin D (CypD) in the matrix (Haleckova et al., 2022). Stimulation of primary cultured trigeminal ganglion cells with capsaicin can mimic migraine attacks in vitro (Wang et al., 2016), and studies have revealed transcriptional upregulation of cytochrome c oxidase subunit IV, Mic60/Mitofilin, and VDAC1, which implies induction of mitochondrial biogenesis to compensate for the loss of mitochondria (Shibata et al., 2020). Other studies have suggested that mPTP is formed by dimers of F1F0-ATP synthase, this is called the F1F0-ATP synthase dimer model for the formation of mPTP. However, Bonora et al. believe that the c subunit of F1F0-ATP synthase is the key component of mPTP. Studies have shown that the c subunit ring of purified and reconstructed F1F0-ATP synthase forms a voltage sensitive channel, which leads to rapid and uncontrolled depolarization of the mitochondrial inner membrane in cells. High concentration of Ca 2+ in the mitochondrial matrix for a long time enlarged the c subunit loop and dissociated it from the CypD/ciclosporin A (CsA) binding site of the F1F0-ATP synthase F1 subunit. According to the latest study by Bonora (Bonora et al., 2017), the opening of the permeability transition pore complex requires the dissociation of the F1F0-ATP synthase dimer and includes the c ring of the F1F0-ATP synthase. Therefore, it is considered that the c subunit channel of highly regulated F1F0-ATP synthase may be mPTP, which is called the F1F0-ATP synthase monomer and c subunit ring model. Recently, Giorgio et al. have found that the binding of Ca 2+ to the β subunit of mitochondrial F1F0-ATP synthase leads to the transition of mitochondrial permeability. In any case, these studies provide convincing evidence that F1F0-ATP synthase is necessary for mPTP function. 
Under physiological conditions, the mPTP opens periodically and non-selectively to allow water and small molecules with a relative molecular weight below about 1.5 × 10^3 Da (1.5 kDa) to pass through. This maintains the electrochemical balance in the mitochondria, while protons can freely pass through the inner mitochondrial membrane, creating a potential difference between the inside and outside of the mitochondrial matrix and forming a balanced mitochondrial membrane potential. Under various exogenous pathological stimuli, the mPTP opens explosively, allowing substances with a relative molecular weight of more than 1.5 × 10^3 Da to pass non-selectively, resulting in the collapse of the mitochondrial membrane potential, the uncoupling of oxidative phosphorylation, and the inhibition of ATP production. Because the surface area of the inner mitochondrial membrane is significantly larger than that of the outer membrane, the swelling of the mitochondrial matrix caused by the opening of the mPTP will lead to rupture of the outer mitochondrial membrane, promote the release of Cyt-c and other pro-apoptotic factors into the cytoplasm, and initiate endogenous apoptosis. Programmed cell death (PCD) refers to the process of orderly removal of non-essential, specialized, or injured cells by suicide under the control of related genes and signaling pathways. The previously described PCD types include intrinsic and extrinsic apoptosis (Galluzzi et al., 2012), autophagy, necroptosis, parthanatos, pyroptosis, and ferroptosis. GTN-mediated increases of pain intensity, apoptosis, death, cytosolic ROS, mitochondrial ROS, caspase-3, caspase-9, cytosolic Ca2+ levels, and cytokine generation (TNF-α, IL-1β, and IL-6) in the TG of transient receptor potential melastatin 2 (TRPM2) wild-type mice were further increased by TRPM2 activation (Yazğan and Nazıroğlu, 2021). It is worth noting that the emergence of the mitochondrial permeability transition is a necessary condition for the occurrence of apoptosis, autophagy, and necroptosis (Vanden Berghe et al., 2014); the knockout of CypD or the use of its inhibitor CsA can effectively resist PCD, suggesting that the mPT phenomenon may be mediated by a specific channel composed of a series of macromolecular proteins including CypD, which future studies may confirm as an important target through which mitochondria regulate the PCD process.
Cyclophilin D
CypD is a mitochondrial matrix protein encoded by peptidylprolyl cis-trans isomerase F (PPIF). It is a member of the cyclophilin gene family. The complete CypD protein consists of 207 amino acids (22 kDa). Its most prominent feature is the cyclophilin domain of 109 amino acids, which endows most cyclophilins with the conserved prolyl isomerase activity. CypD is the only molecule genetically identified to regulate the opening of the mPTP. Hafner et al. reported that a decrease in the NAD+/NADH ratio causes a decrease in silent mating-type information regulation 2 homolog 3 (SIRT3) activity, and the decrease of SIRT3 activity can increase the acetylation level of CypD, which induces opening of the mPTP and depolarization of the membrane potential.
Under physiological conditions, the transient opening of mPTP mediated by CypD can lead to slight changes in mitochondrial membrane potential, which does not have any adverse effect on cell viability, and can reduce excessive metabolites and ions (especially Ca 2+ ) in mitochondria, to avoid mitochondrial swelling, prevent the release of pro-apoptotic factors, and finally maintain the integrity of mitochondria. mPTP is an adjustable non-selective protein channel through which water and solute, including VDAC through the mitochondrial outer membrane (OMM) and ANT through the mitochondrial inner membrane (IMM) (Gutiérrez-Aguilar and Baines, 2015; Briston et al., 2017). Under pathological conditions, the permeability transition pores continue to open under the stimulation of oxygen free radicals or Ca 2+ , resulting in mitochondrial swelling, the release of Cyt-c, and consequently caspase-3 activation, resulting in a series of mitochondriamediated apoptotic cascade responses (Eloy et al., 2012). Stimulation of cells by H 2 O 2 can simulate oxidative stress in migraine in vitro, a model of CypD protein expression or high expression in endothelial cells was established through gene silencing or cloning. The cell apoptosis rate of the CyPD low expression group was significantly lower than that of the control group, the apoptosis rate of the CyPD high expression group was significantly higher than that of the control group. CypD protein could increase oxidative stress and cause endothelial cell injury and apoptosis (Peng et al., 2015). It was shown that CypD could bind to the oligomycin sensitivity-conferring protein (OSCP) of the lateral stalk of ATP synthase, which is composed of OSCP, F6, b, and d subunits (Giorgio et al., 2009), when the OSCP on the side stalk of ATP synthase is reduced, it can increase the sensitivity of mPTP to Ca 2+ and also lead to the destabilization of ATP synthase. It was found that the interaction between TNF receptor associated protein 1 (TRAP1) and F1F0-ATP synthase increased its enzyme activity and inhibited the opening of mPTP. TRAP1 and CyPD affect cell bioenergy characteristics and survival in noxious conditions by competing with each other for their binding to F1F0-ATP synthase OSCP subunits, which plays an important role in pathophysiological conditions such as tumor transformation or adaptation to hypoxia (Cannino et al., 2022). Oligomycin sensitivity-conferring protein OSCP contains 180-190 amino acids, encoded by the nuclear ATP50 gene, its molecular weight is about 23 kDa and plays a key role in the assembly of ATP synthase. It occurs in a modular manner, preventing the formation of OSCP that can depolarize the membrane or waste intermediates of ATP (Rühle and Leister, 2015). Mitochondrial F1F0-ATP synthase is a complex V on the respiratory chain of the mitochondrial inner membrane and a 600 kDa multi-subunit complex, which is mainly composed of two parts: one is the spherical catalytic part, that is, the F1 part protruding from the inside of the membrane, including a, β, γ, and other subunits; the other part is the proton transport part, that is, the F0 part embedded in the membrane, including a, c, e, A6L, f, g, and other subunits, in which c subunits form c-ring (Giorgio et al., 2009). There is also a peripheral stem attached to the outer side of F1 and F0, consisting of OSCP, F6, b and d subunits, OSCP is located at the tip of F1 and is the binding target of CyPD. 
Both F1 and F0 are connected through the peripheral stem and catalyze the synthesis of ATP. Studies have shown that the binding ability of TRAP1 for OSCP exceeds that of CypD for OSCP, and TRAP1 can also sequester CypD away from F1F0-ATP synthase; the two mechanisms are not mutually exclusive. We believe that OSCP acts as a hub to fine-tune the enzyme activity of F1F0-ATP synthase and its conversion to the PTP by interacting with different protein regulatory factors. The combination of OSCP with different partners will produce specific biochemical results, thus optimizing the biological output to adapt to changes in environmental conditions. Some studies have shown that honokiol treatment can nearly double the expression of SIRT3 and further increase its activity; the increased SIRT3 activity deacetylates mitochondrial SIRT3 substrates, which is related to the decrease of OSCP acetylation. The ATP synthase F1 proteins α, β, γ, and OSCP contain SIRT3-specific reversible acetyllysines, which are evolutionarily conserved and bind to SIRT3. OSCP was further studied, and it was found that OSCP lysine 139 is a nutritionally sensitive target for SIRT3-dependent deacetylation (Vassilopoulos et al., 2014). To further study the function of the OSCP protein, HAP1-ΔOSCP, a monoclonal cell line with specific knockdown of OSCP, was screened from human HAP1 cells. Compared with HAP1-WT cells, HAP1-ΔOSCP cells grew slowly; the copy number of mitochondrial DNA decreased by 30%; except for complex II, the protein levels of complex I, complex III, and complex IV all decreased; and the level of oxidative phosphorylation decreased significantly (He et al., 2017).
Bcl-2 family
The Bcl-2 family includes two types of members: Bcl-2 and Bcl-xl, which inhibit apoptosis, and Bax, Bak, Bcl-xs, Bad, Bid, and Bik, which promote apoptosis. Studies have confirmed that the Bcl-2 family regulates apoptosis mainly by regulating the opening of the mPTP; in experimental systems using isolated mitochondria, Bcl-2 can inhibit the disintegration of the mitochondrial membrane potential induced by many factors, and some studies have also shown that Bcl-xl can inhibit the redistribution of Cyt-c and the depolarization of the mitochondrial membrane potential, thus inhibiting apoptosis (Vander Heiden et al., 1997). Bax can interact with ANT, an important component of the mPTP, to form pores in the mitochondrial membrane, while Bcl-2 can prevent this action and inhibit pore formation. In addition, the expression of Bcl-2 is related to the benzodiazepine receptor, another component of the mPTP, indicating that Bcl-2 itself can regulate the opening of the mPTP. Overexpression of Bcl-2 can also counteract inducers of mPTP opening, thus inhibiting apoptosis. Some studies have shown that the permeability transition is mediated by the opening of the permeability transition pore complex (PTPC). The PTPC is a supramolecular entity assembled at the junctions of the mitochondrial membranes, composed of CypD, VDAC, ANT, and other proteins. Pro- and anti-apoptotic members of the Bcl-2 family, including Bax, Bid, Bcl-2, and Bcl-xl, have been shown to bind to the PTPC and thus regulate its function (Vander Heiden et al., 2001). Bcl-xl is the main anti-apoptotic protein in the adult brain. Studies have shown that Bcl-xl directly interacts with the β-subunit of F1F0-ATP synthase to reduce ion leakage through F1F0-ATP synthase, thus increasing the transport of H+ by F1F0 during F1F0-ATP synthase activity (Alavian et al., 2011).
Bcl-xl also enhances the exchange of metabolites between mitochondria and cytoplasm through interaction with VDAC, helping to prevent the release of death-promoting factors. The enhancement of ATP production by the mitochondrial inner membrane F1F0-ATP synthase complex requires VDAC to remain open to release newly synthesized ATP into the cytoplasm. Neurons overexpressing Bcl-xl had higher levels of ATP, while cells with depleted or suppressed endogenous Bcl-xl had lower levels of ATP. Pro-apoptotic proteins Bax and Bak accelerate the opening of VDAC, while anti-apoptotic protein Bcl-xl shuts down VDAC by directly binding to VDAC. Bax and Bak allow cytochrome c to pass through VDAC from liposomes, but Bcl-xl prevents cytochrome c from passing through (Shimizu et al., 1999). Once Bax and Bak are activated, it promotes the release of cytochrome c and the division of mitochondria, resulting in the activation of apoptotic protease activating factor-1 (apaf-1) into apoptosomes and the activation of caspase-9 to activate caspase-3. It has been shown that stimulation of the dura mater of C57BL/6 mice with inflammatory soup leading to migraine attacks can initiate a programmed cell death pathway that activates total caspase-1 and converts it to active cleaved caspase-1, which cleaves and activates total caspase-3 into cleaved caspase-3 to induce apoptosis (Wang et al., 2022). Bcl-2 is located in the outer membrane of mitochondria, which can bind with Bax to form a Bcl-2/Bax heterodimer, prevent Bax from inserting into the outer membrane of mitochondria, and protect cells from apoptosis. Bcl-2 and Bax are often co-expressed in tissues and cells. when Bcl-2/Bax increases, mPTP shuts down, which promotes cell survival and otherwise leads to cell death. Parthenolide (PTL) is a sesquiterpene lactone found in large quantities in the leaves of feverfew, which possesses antiinflammatory, anti-migraine and anti-cancer properties. PTL was shown to significantly increase the Bcl-2/Bax ratio (Ren et al., 2019). The c-conformation of ANT (when ADP binds to the cytoplasmic side) is more conducive to the opening of mPTP, while the m-conformation of ANT (when ATP binds to the matrix side) is more conducive to the closed state of mPTP. Therefore, Atractyloside can stabilize the c-configuration of ANT and promote mPTP, while bongkrekic acid (Henderson and Lardy, 1970) can stabilize the m-configuration of ANT and inhibit mPTP. Mitochondrial membrane potential Under normal circumstances, the outer mitochondrial membrane has high permeability, while the permeability of the inner mitochondria membrane is relatively low. The mitochondrial membrane potential is caused by the asymmetric distribution of electrons on both sides of the inner mitochondria membrane. The low permeability and electrochemical proton gradient of the inner mitochondria membrane are the basis for maintaining the mitochondrial membrane potential, while the normal mitochondrial membrane potential is necessary for mitochondrial function (Tsujimoto and Shimizu, 2007). CaV 2.1 voltage-gated calcium channels (VGCCs) are highly expressed by cerebellar neurons, and their dysfunction is associated with human disorders such as familial hemiplegic migraine. The researchers are studying leaner and tottering mice that carry autosomal recessive mutations in the gene coding for the α1A pore-forming subunit of CaV 2.1 VGCC. Excessive leaner cerebellar granule cell (CGC) death begins soon after postnatal day 10. 
Calcium homeostasis and mitochondrial membrane potential were also changed in the CGCs of leaner mice (Bawa and Abbott, 2008). Under physiological conditions, the mitochondrial membrane potential oscillates to a small extent. Under pathological conditions, metabolic stress occurs, and when the balance between ROS production and ROS clearance is disrupted, the mitochondrial network of the whole cell is locked in a low-frequency, high-amplitude oscillation mode (Aon et al., 2008). The opening and closing of the mPTP are affected by many factors; in addition to free radicals, Ca2+, and other factors, there are some regulatory proteins: the peripheral-type benzodiazepine receptor (PBR) is an important component and regulatory protein of the mPTP. Mitochondrial inner membrane anion channel (IMAC) inhibitors can block mitochondrial oscillation, and PBR ligands can also reduce mitochondrial oscillation. The PBR is composed of VDAC, ANT, and the translocator protein (TSPO). The binding of the PBR to its ligand triggers a conformational change of the mPTP, which induces a decrease of the mitochondrial membrane potential, activation of caspase-3, and an increase of permeability, which leads to the release of apoptotic factors. A decrease of the mitochondrial membrane potential can, in theory, convert F1F0-ATP synthase into an ATP hydrolase (Diez et al., 2004). In this process, the function of ANT in the mitochondrial inner membrane also changes along with the function of the F1F0-ATP synthase. Under normal physiological conditions, ANT transports ADP from the cytoplasm into the mitochondria and transports the ATP synthesized by F1F0-ATP synthase to the cytoplasmic matrix for the physiological reactions in the cell. At this time, the cytoplasmic ATP/ADP ratio is about 100 times that of the mitochondrial matrix (Maldonado et al., 2017). Under pathological conditions, the function of ANT can be reversed and ATP can be imported into the mitochondria for hydrolysis by F1F0-ATP synthase. However, the changes in the functions of F1F0-ATP synthase and ANT do not occur at the same time, because the drop in membrane potential that causes the reversal of F1F0-ATP synthase function is greater than the drop in membrane potential that causes the reversal of ANT function. The critical values of membrane potential during these functional transitions are defined by the reversal potentials Erev-F1F0-ATPase and Erev-ANT, respectively. When the membrane potential is negatively polarized and below both Erev-F1F0-ATPase and Erev-ANT, the F1F0-ATP synthase synthesizes ATP and ANT transports ATP out of the mitochondria. When the membrane potential is depolarized above both Erev-F1F0-ATPase and Erev-ANT, F1F0-ATP synthase hydrolyzes ATP and ANT transports ATP from the cytoplasm into the mitochondria. When the membrane potential lies between Erev-F1F0-ATPase and Erev-ANT, only the function of the F1F0-ATP synthase shifts and the function of ANT does not change; that is, only when the membrane potential decreases slightly can F1F0-ATP synthase hydrolyze mitochondrial ATP without affecting the cytoplasmic ATP content (Chinopoulos, 2011). The mitochondrial membrane potential is an important index for detecting early apoptosis. When the mitochondrial membrane potential decreases, the pro-apoptotic proteins located between the inner and outer membranes of the mitochondria are released into the cytoplasm, leading to apoptosis.
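For orientation, the membrane potential discussed here is one component of the proton-motive force; the standard textbook relation (with typical order-of-magnitude values quoted only as a rough guide) is

\Delta p = \Delta\psi_m - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}

where, at 37 °C, 2.303RT/F is roughly 60 mV. With a typical Δψ_m of about -150 to -180 mV and a matrix-alkaline ΔpH of roughly 0.5-1 unit, most of the proton-motive force is carried by Δψ_m, which is why changes in the membrane potential have such a direct effect on ATP synthesis.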
The mitochondrial membrane potential is closely related to the mPTP. In the early stage of apoptosis, the mPTP can allow molecules with a relative molecular mass of less than about 1.5 kDa to pass through, which leads to the decrease of mitochondrial membrane potential, the increase of mitochondrial membrane permeability and the release of pro-apoptotic proteins, which leads to apoptosis (Fall and Bennett, 1999). Mitochondrial damage in any part of the brain can lead to insufficient mitochondrial energy, while migraine is a response to brain energy deficiency or to oxidative stress levels exceeding antioxidant capacity (Zhang et al., 2011; Kowalska et al., 2021). Oxidative phosphorylation dysfunction, mitochondrial membrane potential changes, ROS production, and energy metabolism disorders may affect astrocytes. Astrocytes repair and protect cells in the nervous system. The role of astrocytes is to eliminate ROS and maintain the environment of extracellular ions and neurotransmitters, both of which are energy-dependent processes (Chen et al., 2003; Lian and Stringer, 2004). Energy consumption caused by mitochondrial dysfunction can damage the function of astrocytes, thus increasing the susceptibility of neurons to CSD.

Translocator proteins

TSPO is a ubiquitous conserved protein located in the outer membrane of mitochondria. In the past few decades, the role of TSPO has been widely studied. It can form a complex with VDAC or ANT, has the function of transporting cholesterol during steroid production, and is also involved in regulating cell proliferation, apoptosis, and migration (Lin et al., 2014), as well as mitochondrial respiration and oxidative stress. It has been reported that TSPO inhibits downstream mitochondrial autophagy by producing ROS to inhibit ubiquitin ligase parkin (PARK2)-induced protein ubiquitination (Gatliff et al., 2014). TSPO increases the level of ROS by regulating mitochondrial Ca2+ signal transduction, increasing the level of cytoplasmic Ca2+, and activating NADPH oxidase (NOX). Downregulation of TSPO expression can reduce the level of ROS in hypoxia/reoxygenation cardiomyocytes and reduce oxidative stress, mitochondrial damage, and apoptosis (Meng et al., 2020). Studies of TSPO knockouts and of TSPO ligands, including TSPO/VDAC interactions, have demonstrated the effects of TSPO on gene expression and function (Yasin et al., 2017). The role of TSPO in metabolism is illustrated by studies showing that TSPO knockout in mice leads to altered mitochondrial energy metabolism, together with reduced oxygen consumption, mitochondrial membrane potential and ATP. TSPO also controls mitochondrial energy homeostasis by regulating fatty acid oxidation in steroidogenic cells (Lan et al., 2016). In addition, TSPO regulates autophagy by generating acute ROS and preventing PARK2 from completing protein ubiquitination (Gatliff et al., 2014). It has been reported that TSPO deregulates mitochondrial Ca2+ signaling, leading to an increase in cytoplasmic Ca2+ levels, resulting in the activation of Ca2+-dependent NOX, which increases ROS levels (Gatliff et al., 2017). Mitophagy, a specific autophagic pathway, promotes the turnover of damaged mitochondria that are engulfed in autophagosomes through a lysosome-dependent process. However, there is increasing evidence that a form of autophagic cell death is induced by oxidative stress (Ren et al., 2019). TSPO inhibits mitophagy and prevents the necessary ubiquitination of proteins.
Recent studies have shown elevated levels of TSPO in the brain and/or spinal cord of patients with various chronic pain conditions (such as chronic low back pain, fibromyalgia, migraine and Gulf War illness) (Figure 2) (Alshelh et al., 2022).

FIGURE 2 abbreviations: mPTP, mitochondrial permeability transition pore; IMAC, inner membrane anion channel; ANT, adenine nucleotide translocase; VDAC, voltage-dependent anion channel; CYPD, cyclophilin D; OSCP, oligomycin sensitivity-conferring protein; TSPO, translocator protein; MCU, mitochondrial calcium uniporter; NAD+, nicotinamide adenine dinucleotide; NADH, reduced nicotinamide adenine dinucleotide; SIRT3, Sirtuin 3.

Oxidative phosphorylation inhibitors

Oxidative phosphorylation inhibitors mainly include oligomycin, carbonyl cyanide p-trifluoromethoxyphenylhydrazone (FCCP), rotenone, and antimycin A, which have different functions. Understanding their specific sites of action helps us to better understand and design experiments.

Oligomycin

Oligomycin is an inhibitor of oxidative phosphorylation in mammalian cells. It can effectively bind the functional F0 subunit of mitochondrial F1F0-ATP synthase and change the configuration of the ATP synthase, thus inhibiting the proton flow from the mitochondrial intermembrane space back to the mitochondrial matrix. As a result, the synthesis of ATP is blocked, resulting in a lack of the energy needed for biological metabolism, so oligomycin is highly toxic to mammals. However, it has also been shown that oligomycin can act as an inhibitor of tumor cell apoptosis. In mouse histiocytomas, oligomycin treatment reduced cellular ATP levels and significantly inhibited apoptosis; oligomycin also inhibited caspase-1 and caspase-3 activities and the loss of mitochondrial membrane potential, suggesting that the inhibition of apoptosis by oligomycin may be due to inhibition of ATP production, of caspase activity, and of mitochondrial depolarization (Singh and Khar, 2005). Changes in cellular bioenergetics are closely related to migraine. The regulation of F1F0-ATP synthase, which enables eukaryotic cells to produce most of their ATP, is expected to be used in the treatment of migraine (Johnson et al., 2006). Mitochondria are key regulators of programmed cell death and energy metabolism, which means that mitochondria can be used as targets for the treatment of migraine.

Carbonyl cyanide p-trifluoromethoxyphenylhydrazone

FCCP is a lipophilic weak acid that can easily diffuse through the inner mitochondrial membrane and enter the acidic mitochondrial intermembrane space. It carries the H+ of the intermembrane space back across the inner membrane and releases it into the matrix in an uncharged form, thus eliminating the H+ concentration gradient on both sides of the inner mitochondrial membrane. This results in the loss of the proton driving force that powers ATP synthase and in the inability to synthesize ATP, thereby uncoupling and inhibiting oxidative phosphorylation (Monteiro et al., 2011). As an uncoupling agent of the electron transport chain and oxidative phosphorylation, FCCP is a proton carrier that reduces cellular ATP levels. However, if the concentration of FCCP is too high, it will not only reduce the production of ATP but also consume the ATP of the cell itself, resulting in cell death or an altered survival state, which is not conducive to follow-up research and makes the experimental results invalid.
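The four inhibitors listed in this section are typically combined in a sequential "mitochondrial stress test" on an extracellular flux analyzer: oligomycin, then FCCP, then rotenone plus antimycin A (the latter two are described below). The following sketch shows how the standard respiration parameters are derived from oxygen consumption rate (OCR) readings; the numbers are invented placeholders, not data from any study cited here.

```python
# Minimal sketch of mitochondrial stress test arithmetic from OCR readings.
# All inputs are OCR values (e.g., pmol O2/min) averaged over each assay phase;
# the example values at the bottom are hypothetical.

def mito_stress_parameters(basal, post_oligomycin, post_fccp, post_rot_aa):
    non_mito = post_rot_aa                         # OCR remaining after rotenone + antimycin A
    atp_linked = basal - post_oligomycin           # respiration blocked by oligomycin
    proton_leak = post_oligomycin - non_mito       # residual mitochondrial OCR
    maximal = post_fccp - non_mito                 # uncoupler-stimulated capacity (FCCP)
    spare = maximal - (basal - non_mito)           # reserve above basal respiration
    return {
        "ATP-linked respiration": atp_linked,
        "Proton leak": proton_leak,
        "Maximal respiration": maximal,
        "Spare respiratory capacity": spare,
        "Non-mitochondrial OCR": non_mito,
    }

print(mito_stress_parameters(basal=120.0, post_oligomycin=45.0,
                             post_fccp=210.0, post_rot_aa=20.0))
```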
Therefore, when using an extracellular flux analyzer to determine changes in cellular bioenergy metabolism, the appropriate concentration of FCCP is very important.

Rotenone

Rotenone is a fat-soluble substance extracted from the natural plant Derris trifoliata Lour and has been widely used as an insecticide in agricultural production since the 1940s. It has strong fat solubility and can easily pass through the blood-brain barrier (BBB). Once inside brain mitochondria, it selectively blocks electron transfer between the iron-sulfur cluster N2 and CoQ, thereby inhibiting the mitochondrial respiratory chain, which results in cytotoxicity and apoptosis. Mitochondria are the largest iron-metabolizing organelles in the cell and are mainly responsible for the synthesis of heme and iron-sulfur clusters. Heme and iron-sulfur clusters are two important cofactors involved in the synthesis and repair of DNA, protein synthesis and folding, the tricarboxylic acid cycle, and the normal operation of the mitochondrial electron transport chain. Disruption of iron metabolism in mitochondria can seriously affect the iron metabolism and energy metabolism of the whole cell, thus affecting the function of mitochondria and leading to the development of various diseases (Wachnowsky et al., 2018). Studies have shown that rotenone has a high affinity for mitochondria and can selectively inhibit mitochondrial complex I, which in turn affects mitochondrial function and cell survival (Xiong et al., 2012). The loss of mitochondrial membrane potential plays an important role in the process of apoptosis. Rotenone significantly reduces the mitochondrial membrane potential and ATP content. Mitochondrial membrane potential is an important index of mitochondrial function. A change of mitochondrial membrane potential affects the function of the proton pump and then affects the production of ATP.

Antimycin A

Antimycin A, an inhibitor of mitochondrial complex III, is a bactericidal antibiotic. Its mechanism includes the inhibition of electron transport between nicotinamide adenine dinucleotide (NADH) oxidase and mitochondrial cytochrome bc1 (Ransac and Mazat, 2010). The inhibition of electron transport in mitochondria leads to the collapse of the proton gradient across the mitochondrial inner membrane, thus destroying the mitochondrial membrane potential; this inhibition also leads to the formation of ROS.

Relationship between the tricarboxylic acid cycle and migraine

Mitochondria are the center of cellular energy metabolism, and the TCA cycle in mitochondria is a common metabolic pathway in aerobic organisms. The main function of the TCA cycle is to produce reducing equivalents, such as NADH and FADH2 (produced by succinate dehydrogenase). NADH and FADH2 transfer electrons to the electron transport chain (ETC) to drive oxidative phosphorylation and produce ATP. The TCA cycle consists of eight enzymes, namely citrate synthase (CS), aconitase (ACO2), isocitrate dehydrogenase (IDH), α-ketoglutarate dehydrogenase (α-KGDH), succinyl-CoA synthetase (SCS), succinate dehydrogenase (SDH), fumarate hydratase (FH) and malate dehydrogenase (MDH). Related studies evaluated the activity of platelet mitochondrial enzymes in patients with migraine with or without aura, and found that complex I, CS, and complex IV were impaired in migraine patients (Sangiorgi et al., 1994).
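For reference, the net stoichiometry of one turn of the cycle catalyzed by these eight enzymes is the textbook relation below (general biochemistry, not taken from the cited studies):

\[
\text{Acetyl-CoA} + 3\,\text{NAD}^{+} + \text{FAD} + \text{GDP} + \text{P}_{i} + 2\,\text{H}_{2}\text{O} \longrightarrow 2\,\text{CO}_{2} + 3\,\text{NADH} + 3\,\text{H}^{+} + \text{FADH}_{2} + \text{GTP} + \text{CoA-SH}
\]

The three NADH and one FADH2 generated per turn are the reducing equivalents referred to above; they feed electrons into complex I and complex II of the ETC, respectively.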
Biochemical studies have shown that the level of lactic acid in the cerebrospinal fluid of patients during migraine attacks is significantly higher than that in the intermittent period, because lactic acidosis indicates a disturbance of pyruvate utilization in the TCA cycle. The finding that migraine patients show signs of lactic acidosis, suggesting abnormalities in the functioning of the TCA, has led researchers to investigate TCA-related components further, for example pyruvate, SDH, nicotinamide adenine dinucleotide-cytochrome c reductase, succinate-cytochrome c reductase, NADH dehydrogenase and CS, in order to investigate the relationship between migraine and the TCA (Figure 3) (Stuart and Griffiths, 2012). Under physiological conditions, the TCA produces NADH, and NNT uses NADH as a substrate to produce nicotinamide adenine dinucleotide phosphate (NADPH). This reaction is energetically favored because the transhydrogenation between NADH and NADPH is coupled to the proton gradient across the IMM. NNT and NADPH play an important role in the scavenging of ROS, so the TCA is essential for maintaining the body's antioxidant levels. However, in pathological conditions, NNT depletes NADPH and regenerates NADH to produce ATP to satisfy the body's energy requirements (Bertero and Maack, 2018). The expression of the antioxidant enzymes SOD2, catalase, and glutathione peroxidase 3 (GPx-3) decreased significantly in IDH and NNT knockout mice (Lee et al., 2020). CSD can lead to a sharp decrease in NADH content, destruction of the proton gradient, and damage to NNT-dependent antioxidant defense (Borkum, 2021). Valproic acid (VPA) and its salts are widely used in migraine; the metabolism of VPA is complex and continues to be studied. One known pathway of VPA metabolism is β-oxidation linked to the tricarboxylic acid cycle (acetylation). This also supports a correlation between migraine and the TCA (Figure 3) (Shnayder et al., 2023).

FIGURE 3 abbreviations: MDH, malate dehydrogenase; IDH2, isocitrate dehydrogenase 2; α-KGDH, α-ketoglutarate dehydrogenase; IMM, inner mitochondrial membrane; ETC, electron transport chain; NNT, nicotinamide nucleotide transhydrogenase; NADP+, nicotinamide adenine dinucleotide phosphate; NADPH, reduced form of nicotinamide adenine dinucleotide phosphate.

Relationship between reactive oxygen species and migraine

Mitochondria are the main sites of ROS production, and mROS mainly come from electron leakage during mitochondrial oxidative phosphorylation. ROS are produced when O2 acquires extra electrons and include superoxide anions (O2•−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), among which H2O2 is the most important ROS signaling molecule in cells. These highly reactive species originate when a molecule of O2 receives a single electron to form the superoxide anion radical (Sheng et al., 2014). In the process of electron transfer along the mitochondrial ETC, peroxides and other ROS are formed, collectively referred to as mitochondria-derived reactive oxygen species (mROS). Complexes I and III can produce mROS; when the NADH/NAD+ ratio is high, the TCA cycle dehydrogenases also generate high amounts of ROS. When NAD+ is not available, oxygen becomes the default electron acceptor, producing superoxide and hydrogen peroxide. An in vitro study showed that NAD+ administration had protective effects against hypoxia-induced neuroinflammation, mitochondrial damage, and ROS production in BV2 microglia through the Sirt1/PGC-1α pathway (Zhao et al., 2021).
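The sequence by which these species arise from leaked electrons can be summarized by the following standard reactions (textbook chemistry, included here only for clarity and not taken from the cited studies):

\[ \text{O}_{2} + e^{-} \rightarrow \text{O}_{2}^{\bullet -} \quad \text{(one-electron reduction at complexes I and III)} \]
\[ 2\,\text{O}_{2}^{\bullet -} + 2\,\text{H}^{+} \xrightarrow{\text{SOD}} \text{H}_{2}\text{O}_{2} + \text{O}_{2} \]
\[ \text{H}_{2}\text{O}_{2} + \text{Fe}^{2+} \rightarrow \text{Fe}^{3+} + \text{OH}^{-} + {}^{\bullet}\text{OH} \quad \text{(Fenton reaction)} \]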
ROS produced by abnormal mitochondrial function have been linked to migraine. Migraineurs have an impaired metabolic capacity with increased ROS production. Nitroglycerin (NTG) induced more ROS in the low-glucose condition than in the high-glucose condition. The mitochondrial dysfunction detected by Seahorse analysis may explain the increase in ROS. Inhibition of ROS may have therapeutic benefits in the prevention of migraine, and antioxidants may become potential drugs for migraine treatment (Li et al., 2022). Under normal physiological conditions, oxygen free radicals are converted to water by antioxidant enzymes such as SOD and glutathione peroxidase (GPX), which keeps intracellular ROS at a harmless low level. When the body is under oxidative stress, large amounts of ROS are produced in the mitochondria. These oxygen free radicals can unselectively damage various components of the cell, and intracellular autophagy can reduce the secondary harm of the damaging substances and delay cell death caused by stress. ROS can be reduced by the autophagy they induce, but excessive ROS can induce excessive autophagy and thereby apoptosis (Yu et al., 2006; Castino et al., 2010). When the production of mROS exceeds the antioxidant capacity of the body, oxidative stress occurs, which consumes reduced glutathione (GSH) and damages cellular components such as proteins, lipids, DNA, and sugars, resulting in oxidative damage and mutation of mitochondrial DNA and increased permeability of the mPTP. This further leads to a decrease of mitochondrial membrane potential, a decrease in the amount of ATP produced by cells, and acceleration of the mitochondrial apoptotic pathway, which leads to cell apoptosis (Kalogeris et al., 2014). The increase of mPTP permeability can further promote the accumulation of mROS, a phenomenon called ROS-induced ROS release (RIRR) (Zorov et al., 2014). Studies have shown that the content of malondialdehyde (MDA) in blood samples of migraine patients is increased (Bernecker et al., 2011). MDA is a lipid peroxide and a marker of oxidative stress. Lipid peroxidation not only increases the production of mROS but also destroys the integrity of the mitochondrial membrane and opens the mPTP. Opening of the mPTP is an important step in the mechanism of necrosis and apoptosis (Ma et al., 2017). When the mPTP is opened, its large conductance can lead to rapid swelling due to the osmotic pressure of matrix solutes, rupture of the mitochondrial outer membrane and disintegration of the mitochondrial membrane potential (Sanderson et al., 2013). mROS play an important role in regulating some normal functions of the body and tissues. If mROS increase without being decomposed by cells in time, they reduce the activity of the respiratory chain complexes, thus reducing the function of mitochondria, affecting oxidative phosphorylation and reducing ATP synthesis, while insufficient synthesis of ATP further damages mitochondria and induces more mROS. Finally, excessive mROS cannot be eliminated in time and accumulate. The resulting damage to mitochondria and disorder of cell metabolism prevent cells from working properly and can even trigger apoptosis, thereby contributing to migraine (Sparaco et al., 2006). Under physiological conditions, an increase of Ca2+ in mitochondria promotes the production of ATP, but under pathological conditions, an increase of Ca2+ in mitochondria promotes the production of ROS (Brookes et al., 2004). It has been found that CSD can induce the formation of ROS in the cortex, meninges, and trigeminal ganglia.
ROS can directly activate pain-related electrophysiology and promote peptidergic nerves to release the migraine mediator CGRP. CGRP can further promote the sensitization and inflammation of meningeal neurons related to oxidative stress. Therefore, both direct and indirect effects of ROS can lead to migraine. Antioxidants and ROS scavengers can reduce the pain caused by ROS (Shatillo et al., 2013).

Oxidative stress markers

It was found that advanced oxidation protein products (AOPP) were significantly decreased in migraine patients treated with CGRP receptor antagonists (De Luca et al., 2021). AOPP are not only considered a marker of protein damage caused by oxidative stress, but also an important mediator of oxidative stress and inflammation (Wei et al., 2009). 8-hydroxy-2′-deoxyguanosine (8-OHdG) is an oxidative adduct produced by free radicals such as the hydroxyl radical and singlet oxygen, which attack the eighth carbon atom of the guanine base in the DNA molecule. It is related to oxidative stress damage in cerebrovascular diseases. The higher the 8-OHdG level, the more serious the oxidative stress damage (Li et al., 2013). 8-OHdG reflects the oxidative damage of nuclear and mitochondrial DNA caused by free radicals. It was found that the serum 8-OHdG of migraine patients was significantly higher than that of healthy controls (Geyik et al., 2016).

Active oxygen scavengers

mROS scavenging mainly depends on the three kinds of superoxide dismutase and on antioxidant proteins, of which the most abundant are the peroxiredoxins (PRX); PRX3 is responsible for the degradation of most of the intracellular H2O2. Some studies have shown that the reduction of H2O2 degradation after PRX1 phosphorylation leads to the local accumulation of H2O2 in the cytoplasm, thus activating growth factor-dependent signaling pathways (Woo et al., 2010). Superoxide dismutase (SOD) is an endogenous antioxidant that scavenges oxygen free radicals. Studies have found that SOD activity continues to decrease in migraine patients (Neri et al., 2015). ROS in mitochondria are mainly cleared by SOD2. SOD2 is the only antioxidant enzyme in mitochondria that can convert superoxide to hydrogen peroxide. The activity of the SOD2 enzyme is mainly affected by its acetylation level. However, the acetylation level of mitochondrial proteins is mainly regulated by SIRT3. SIRT3 can enhance the activity of oxygen free radical-scavenging enzymes, reduce the level of ROS in mitochondria and reduce oxidative stress damage. At present, three kinds of mitochondrial deacetylases have been found: SIRT3, SIRT4, and SIRT5. SIRT3 is the strongest deacetylase in mitochondria. SIRT3 regulates the level of intracellular ROS by regulating the acetylation level of SOD2. More importantly, SIRT3 can directly bind to SOD2, resulting in the deacetylation of SOD2 and enhancing the activity of SOD2, which plays an important role in regulating the homeostasis of ROS in mitochondria (Giralt and Villarroya, 2012). Most of the antioxidant defense against ROS in the cytoplasm is carried out through GPX. Glutathione can be dehydrogenated to oxidized glutathione (GSSG), which can be reduced back to GSH by NADPH under the action of glutathione reductase (GR). GPX is one of the main enzymes that catalyze the oxidation of reduced glutathione in the glutathione redox cycle.
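The glutathione redox cycle described here can be written compactly as the following standard reactions (textbook chemistry, not specific to the cited studies):

\[ \text{H}_{2}\text{O}_{2} + 2\,\text{GSH} \xrightarrow{\text{GPX}} \text{GSSG} + 2\,\text{H}_{2}\text{O} \]
\[ \text{GSSG} + \text{NADPH} + \text{H}^{+} \xrightarrow{\text{GR}} 2\,\text{GSH} + \text{NADP}^{+} \]

The NADPH consumed by GR is supplied in mitochondria largely by NNT, which links this scavenging cycle back to the TCA cycle discussed above.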
GPX specifically catalyzes the reaction of GSH with ROS to form GSSG, protecting biological membranes from ROS damage and maintaining the normal function of cells (Yin et al., 2016). ALA, also known as alpha-lipoic acid, is an eight-carbon, sulfur-containing compound that acts as both a water-soluble and a fat-soluble antioxidant. It can reduce oxidative stress directly (by removing reactive species) and indirectly (by chelating metal ions). As a coenzyme, ALA also plays an important role in energy metabolism (Müller and Krieglstein, 1995; Packer et al., 1995; Bast and Haenen, 2003). Riboflavin may play an important role in migraine prevention by participating in the antioxidant and anti-inflammatory responses elicited by mitochondrial dysfunction (Fila et al., 2021). Coenzyme Q10 (CoQ10) participates in many cellular redox reactions and plays an important role in bioenergetics and antioxidant defense. It links mitochondrial function with energy production and oxidative stress. Some studies have found that CoQ10 can play a role in the treatment of migraine (Parohan et al., 2020). Peroxisome proliferator-activated receptor-gamma coactivator 1 alpha (PGC-1α) is a transcriptional coactivator that is highly expressed in the brain, heart, skeletal muscle, and other tissues. It plays a central role in controlling mitochondrial function and mitochondrial biogenesis and is a known positive regulator of mitochondrial function and oxidative metabolism (Fernandez-Marcos and Auwerx, 2011; Wang et al., 2015). It was found that repeated infusion of inflammatory soup onto the dura mater could decrease the expression of PGC-1α in the trigeminal nucleus caudalis (TNC) in SD rats (Figure 4) (Liang et al., 2021).

FIGURE 1 | Cortical spreading depression induced by potassium chloride and nitric oxide release induced by nitroglycerin can affect the ROS/TRPA1/TRPV1/CGRP pathway, which forms positive feedback with CSD and NO and leads to energy metabolism disorder and migraine.

Relationship between NO and energy metabolism of migraine

NO is a bioactive molecule present in various tissues of organisms. It has protective effects such as dilating blood vessels and improving local blood circulation. Some studies have found that NO is a key factor in migraine and other headaches. It is considered to be a trigger of primary headaches and plays a key role in the pathogenesis of migraine (Olesen, 2008). Studies have shown that oxidative stress levels are elevated during migraine attacks and decreased during the intermittent period (Borkum, 2021). As a result, most of the biological markers related to oxidative stress, such as certain heavy metals (iron ions) and NO, show corresponding changes in different periods of migraine. The levels of these oxidative stress-related biomarkers can fluctuate significantly with migraine attack frequency, attack cycle, and the presence or absence of aura (Togha et al., 2019; Fila et al., 2021). NO is involved in the vascular mechanism induced by CSD (Pradhan et al., 2018). In a cat model of migraine induced by nitroglycerin, CSD significantly enhanced the release of NO compared with the control group. Experimental studies have also shown that the changes in neurotransmitters and vasoactive substances caused by CSD play an important role in migraine attacks. CSD can cause the release of NO and increase the plasma content of vasoactive peptides such as CGRP (Read et al., 2000).
Many experimental studies and clinical drugs have confirmed that the pathogenesis of migraine is closely related to the vasomotor state; that is, there are many metabolic disorders of vasoactive neuropeptides or neurotransmitters during a migraine attack. This evidence shows that NO is closely related to migraine and that NO production can trigger migraine. A high concentration of NO can significantly inhibit the activity of many components of the mitochondrial respiratory chain, resulting in damage to the respiratory chain and a decrease in ATP production, leading to neuronal dysfunction and migraine.

FIGURE 2 | Mechanism of energy metabolism in mitochondria. There are many proteins and protein channels on the inner and outer membranes of mitochondria. When energy metabolism is abnormal, the following abnormalities occur in mitochondria and eventually lead to migraine. 1. mPTP and IMAC are pathologically open, resulting in the release of cytochrome c and reactive oxygen species. 2. A large number of calcium ions flow in through the MCU channel, resulting in calcium overload in the mitochondria. 3. Dysfunction of the mitochondrial respiratory chain leads to the production of reactive oxygen species and insufficient production of ATP, which stimulates the production of nitric oxide. 4. A decrease of NAD+/NADH can lead to changes in SIRT3, CYPD, and other proteins, and finally to a decrease of mitochondrial membrane potential and the opening of the mitochondrial permeability transition pore. 5. An abnormal proportion of pro-apoptotic and anti-apoptotic proteins can lead to apoptosis and eventually to migraine.

NO can pass through biological membranes such as the cell membrane and the mitochondrial membrane, modify the sulfhydryl groups, heme prosthetic groups, and iron-sulfur centers of some proteins, and directly regulate the binding and release of oxygen by heme. NO controls the oxygen supply to mitochondria in this way. A high concentration of NO can significantly inhibit the activities of many components of the mitochondrial respiratory chain, such as the oxygen-binding site of cytochrome oxidase and NADH dehydrogenase (Nisoli et al., 2003). NO and carbon monoxide (CO) have shared functions, as they both bind to hemoglobin and inhibit the oxygen transport system. They also inhibit mitochondrial oxidative phosphorylation by competitive binding to cytochrome c oxidase. NO can bind both oxidized and reduced cytochrome c oxidase, and migraine can occur through all of the above pathways (Arngrim et al., 2014). Under the action of NO, cells can control the release of Cyt-c by regulating the opening of the mPTP. Generally speaking, in the presence of a high dose of NO, the opening of the mPTP increases and the mitochondrial membrane potential decreases, which leads to an increase of Cyt-c release (Brookes et al., 1999). Complex IV in the mitochondrial respiratory chain, which has been shown to bind NO, may reduce NO activity if the vasculature contains sufficient complex IV (Leao, 1947; Torres et al., 1995), thereby preventing vasodilation. Modification of the complex IV:cyt-c ratio increases the susceptibility of endothelial cells to apoptosis. Inhibition of mitochondrial protein synthesis by chloramphenicol results in an alteration of the complex IV:cyt-c ratio in mitochondria. This leads to an increase in the amount of free cyt-c in the mitochondrial intermembrane space.
There is also an alteration in mitochondrial bioenergetics and ROS production. These factors then make the cell more susceptible to NO-induced apoptosis (Ramachandran et al., 2002). Exogenous NO treatment significantly increased mitochondrial ROS content and apoptosis, whereas the mitochondrial ROS content of neurons pretreated with the NO scavenger hemoglobin (Hb) was similar to that of normal cells. This shows that NO can increase the content of ROS in mitochondria, which leads to apoptosis. In addition, high-dose NO can inhibit the enzyme activity of the mitochondrial respiratory chain, which increases mitochondrial electron leakage and the production of endogenous superoxide anions in mitochondria. Superoxide anion can react quickly with NO in mitochondria to form peroxynitrite (ONOO−), which is more toxic and pro-apoptotic. The generated ONOO− can directly damage cellular DNA and induce apoptosis, and it can also stimulate p53 through DNA damage, resulting in a decrease of Bcl-xl and an increase of Bax, leading to apoptosis (Singh and Dikshit, 2007). Studies have shown that nitroglycerin can be converted into NO in vivo, thus activating the TRPA1 channel to produce oxidative stress (Wenzl et al., 2009). Activation of the TRPA1 channel in trigeminal ganglion neurons can promote the production of ROS, while ROS can further activate trigeminal ganglion neurons to form a positive feedback circuit and increase the release of CGRP.

Relationship between CGRP and energy metabolism of migraine

As a multifunctional neuropeptide, CGRP plays an important role in the pathophysiology of migraine by regulating neurogenic inflammation and nociceptive signal input (Russell et al., 2014; Russo, 2015). It has been found that ROS promote CGRP production in a rat migraine model of CSD caused by KCl stimulation, and that ROS can reverse the reduction of cortical sensitivity to CSD after CGRP inhibition (Jiang et al., 2019).

FIGURE 3 | The tricarboxylic acid cycle produces NADH, which provides NADH for the electron transport chain and NNT. The electron transport chain transfers hydrogen ions to the intermembrane space, and the hydrogen ions located in the intermembrane space pass through NNT, yielding the antioxidant cofactor NADPH.

FIGURE 4 | The production of reactive oxygen species can lead to the opening of the mPTP, the opening of the mPTP can increase the markers of oxidative stress, and the closure of the mPTP can lead to a decrease of antioxidants.

The TRPA1 channel is a non-selective cation channel expressed in sensory nerve endings and widely distributed in neural and non-neural cells. Intracellular and extracellular Ca2+ is not only a key regulator of the TRP channel but also an important ion for activating and regulating the activity of the TRPA1 channel. The regulation of Ca2+ in TRPA1 is dependent on calmodulin (CaM). CaM binds to TRPA1 to form a calcium-sensitive channel complex (Hasan et al., 2017). Activation of TRPA1 channels increases intracellular calcium levels, similar to L-type voltage-gated calcium channels (Kowalska et al., 2021). VGCCs are classified according to their electrophysiological and pharmacological properties. The activation threshold of P/Q (Cav2.1)-, R (Cav2.3)-, L (Cav1)- and N (Cav2.2)-type VGCCs is higher than that of the T (Cav3.1-Cav3.3) channels. Although L-type and T-type VGCCs are widely distributed in many cell types, P/Q-, N- and R-type channels are mainly limited to neurons.
P/Q-type and N-type calcium channels are located in neurons, and L-type calcium channels are mainly found in the trigeminal ganglion and spinal trigeminal nucleus of rats. The CaV2.1 channel is a calcium channel located in the presynaptic membrane and plays an important role in the communication between neurons by controlling the release of neurotransmitters. Mutation of the CACNA1A gene leads to increased CaV2.1 activation, which in turn leads to an increase of intracellular Ca2+. In addition to CaV2.1, other VGCCs may play a role in the pathogenesis of migraine (Nanou and Catterall, 2018). Although presynaptic CaV2 channels may drive the release of CGRP associated with migraine, high-voltage-activated, typically postsynaptic CaV1 channels and low-voltage-activated CaV3 channels have also been found to regulate CGRP release in the trigeminal ganglion, as demonstrated by drug-blocking experiments (Amrutkar et al., 2011). The TRPA1 channel has received extensive attention in migraine studies because it is activated by a series of endogenous and exogenous stimuli that may be associated with migraine. These substances include reactive oxygen and nitrogen species (Nassini et al., 2014). ROS can activate TRP channels and promote the release of CGRP from sensory nerve endings. The function of CGRP in various processes can also be explored through TRP channels (Dussor et al., 2014; Russell et al., 2014; Veldhuis et al., 2015). H2O2 is often used as an inducer of oxidative damage to cells. By inducing the production of ROS and causing inflammation, H2O2 is involved in the pathogenesis of many nervous system diseases (Lacraz et al., 2009). H2O2 can activate sensory neurons through cysteine residues in the TRPA1 channel, and it can also promote the production of CGRP (Shatillo et al., 2013). TRPV1 and TRPA1 channels are expressed by peptidergic trigeminal neurons, which synthesize and store the main migraine mediator CGRP (Andersson et al., 2008). Increased oxidative stress in the trigeminal ganglion leads to a significant increase in CGRP levels in the trigeminal ganglion (Marone et al., 2018; Messlinger et al., 2020), which is also closely related to the pathophysiology of migraine. It is generally accepted that activation and sensitization of the primary afferent nociceptors that innervate the dural and meningeal vasculature trigger both CGRP-induced vasodilatation and neurogenic inflammation. Pain signals pass through the TNC, which relays signals to higher-order neurons in the thalamus and cortex. Central and peripheral sensitization may contribute to the maintenance of pain signals and predispose to future migraine attacks (Takayama et al., 2019). Melo-Carrillo and colleagues further demonstrated that CSD leads to sensitization of central trigeminal nociceptive neurons. That study also found that systemic administration of an anti-CGRP antibody, used in migraine prophylaxis, inhibited the development of both the activation and the sensitization of high-threshold trigeminal neurons following CSD, further substantiating the link between CSD and meningeal nociception.

FIGURE 5 | The dysfunction of the trigeminovascular system and energy metabolism can lead to migraine. The primary afferent nerve of the trigeminal nerve innervates the pia mater and dural meningeal vessels. Its efferent projection fibers connect with secondary neurons in the TNC of the brainstem. The nerve fibers of the TNC project to the thalamus and then rise further to connect with higher cortical areas. At the junction of nerve endings and vascular smooth muscle, ROS and NO jointly activate the TRPA1 channel, which leads to calcium influx, stimulates CGRP release, binds to the CGRP receptor of vascular smooth muscle, and then leads to vasodilation and migraine.
Conclusion

In the past few years, we have made great progress in understanding the mechanisms of energy metabolism in migraine. KCl-induced CSD and nitroglycerin-induced cerebral vasodilation are commonly used migraine models, which can cause disorders of energy metabolism and lead to oxidative phosphorylation and TCA dysfunction. The dysfunction of oxidative phosphorylation is mainly related to calcium ions, the mPTP, the mitochondrial membrane potential, CYPD, OSCP, TSPO, and the Bcl-2 family. Oxidative phosphorylation inhibitors are also a focus of attention. In some studies, oxidative phosphorylation inhibitors can be used to specifically inhibit complex I, complex II, complex III, and complex IV in order to observe the correspondingly affected proteins. The relationship between ROS and energy metabolism is also very close. Excess ROS leads to oxidative stress, which promotes the opening of the mPTP and then affects the expression of pro-apoptotic and anti-apoptotic proteins of the Bcl-2 family. ROS can interact with CGRP, TRPA1, and TRPV1 to cause migraine.

Author contributions

Conception or design of the work, YLZ and YCW. Data collection, YCW and YLW. Drafting the article, YCW and YLW. Critical revision of the article, GXY. All authors contributed to the article and approved the submitted version. The funding sources had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.
Systemic signaling contributes to the unfolded protein response of the plant endoplasmic reticulum The unfolded protein response (UPR) of the endoplasmic reticulum constitutes a conserved and essential cytoprotective pathway designed to survive biotic and abiotic stresses that alter the proteostasis of the endoplasmic reticulum. The UPR is typically considered cell-autonomous and it is yet unclear whether it can also act systemically through non-cell autonomous signaling. We have addressed this question using a genetic approach coupled with micro-grafting and a suite of molecular reporters in the model plant species Arabidopsis thaliana. We show that the UPR has a non-cell autonomous component, and we demonstrate that this is partially mediated by the intercellular movement of the UPR transcription factor bZIP60 facilitating systemic UPR signaling. Therefore, in multicellular eukaryotes such as plants, non-cell autonomous UPR signaling relies on the systemic movement of at least a UPR transcriptional modulator. I n physiological conditions of growth and in disease, eukaryotic life depends on the biosynthetic ability of the endoplasmic reticulum (ER) to synthesize correctly folded secretory proteins. Conditions that alter the ER proteostasis and induce accrual of misfolded proteins in the ER lead to a potentially lethal condition known as ER stress 1,2 . At the onset of ER stress, cells activate cell-intrinsic UPR signaling pathways mediated by specialized ER stress sensors whose function is to reprogram gene expression for the synthesis of effectors that attenuate ER stress and restore the biosynthetic ability of the ER 3 . If the UPR is ineffective to initiate proper cytoprotective mechanisms to attenuate ER stress, it induces programmed cell death 4 . The main branch of the UPR is mediated by the ER-associated kinase and ribonuclease inositol-requiring protein 1 (IRE1) through largely conserved mechanisms. During ER stress, upon oligomerization and trans-autophosphorylation for self-activation, IRE1 splices the mRNA of a basic leucine zipper (bZIP) transcription factor, namely HAC1 in yeast, XBP1 in metazoans, and bZIP60 in plants. This step removes the coding region for a transmembrane domain (TMD), releasing the translational inhibition of a potent UPR transcriptional factor. The newly synthesized transcription factor is translocated to the nucleus where it modulates the expression of nuclear UPR target genes for the restoration of ER proteostasis [5][6][7][8] . Multicellular eukaryotes also harness another UPR branch, which is mediated by ER membrane tethered transcription factors (MTTFs), such as ATF6 in metazoans and bZIP28 in plants 9 . Upon ER stress sensing, these MTTFs translocate to the Golgi where the transcription factor domain is cleaved off the transmembrane anchor and is then transported to the nucleus to regulate transcription of UPR target genes 9,10 . In addition to cell-intrinsic signaling, the metazoan UPR may actuate non-cell autonomous signaling for the activation of stress responses in tissues and cell types that are different from those where the ER stress signal is originated. Specifically, in C. elegans intercellular signaling of the UPR has been induced through the overexpression of spliced (i.e., active) XBP1 in neuron cells, which elicits UPR activation in non-stressed intestine cells 11 . 
Similarly, in mice overexpression of active XBP1 in hypothalamic proopiomelanocortin (POMC) neurons is followed by non-cell autonomous splicing of XBP1 and UPR activation in the liver 12 . Although the existence of secreted stress signals to actuate transcellular UPR has been hypothesized 11 , the identity of the effectors that act downstream XBP1 in intercellular communication of the UPR in metazoans is currently unknown. It is yet also unknown whether the systemic UPR signaling occurs in experimental conditions that do not rely on tissue-specific overexpression of XBP1. Plants show cell-intrinsic UPR signaling; 13 however, whether they also execute non-cell autonomous UPR signaling is still an open question. Here, we demonstrate that in plants, in addition to cell-autonomous signaling, the UPR extends to systemic tissues by non-cell autonomous signaling through the contribution of the mobile UPR transcription factor bZIP60. Our findings indicate that in eukaryotes non-cell autonomous UPR signaling can directly rely on the translocation of at least one UPR transcriptional regulator. Results Spliced bZIP60 translocates transcellularly. To test whether systemic UPR signaling may take place in plants, we first adopted a cell-type specific expression assay in Arabidopsis transgenic roots. We used the short-root (SHR) promoter, which is exclusively active in the stele, the central tissue of the root, and drives the expression of SHR 14 . The latter is a nucleus-localized transcription factor that moves from the stele, where it is synthesized, to the endodermis, a tissue layer surrounding the stele; notoriously, SHR does not reach the cortex and epidermis, which envelope the endodermis 14 . We used the SHR promoter to drive expression of cytosolic green fluorescent protein (GFP) (pSHR-GFP) 15,16 , and GFP fused either to SHR (pSHR-SHR-GFP) 14 or to a constitutively active form of bZIP60, spliced bZIP60-GFP (pSHR-sbZIP60-GFP). We used wild-type Col-0 (hereafter Col-0), an SHR knockout 17 (shr-2, hereafter shr), and bzip28/60-1 18 (hereafter bzip28/60). In bzip28/60 both UPR branches (i.e., bZIP60 and bZIP28) are inactive in conditions inducing ER stress (i.e., Tunicamycin (Tm) treatment), and the expression of UPR genes, including BiP3, the major target of sbZIP60 8 , is not actuated 19 . This setup was therefore designed to test movement of bZIP60 across tissues. A GFP fusion of bZIP60 driven by the native promoter in a bzip60 mutant 20 is localized throughout the root tissues in control conditions and in conditions of ER stress ( Supplementary Fig. 1) hampering the possibility to assess systemic movement of this transcription factor. In our experimental setup, we expected that cytosolic GFP would be detected exclusively in the stele, while SHR-GFP would be localized in the stele and the endodermis. Conversely, if sbZIP60 moved transcellularly, then sbZIP60-GFP expression in the stele would result in the accumulation of sbZIP60-GFP in the stele as well as in other cell layers. Confocal imaging of cytosolic GFP and SHR-GFP in the root of the respective Col-0 and shr transgenic lines showed a diffuse distribution of cytosolic GFP in the stele, and a localization of SHR-GFP in the nuclei of the stele and endodermis (Fig. 1a). These results are consistent with earlier findings 21 and indicate that stele-expressed cytosolic GFP accumulates only in the stele, while SHR-GFP, which is produced in the stele, moves to the endodermis 15,22 . 
When we analyzed bzip28/60; pSHR-sbZIP60-GFP roots, we found accumulation of sbZIP60-GFP in the nuclei and cytoplasm of cells in the stele and endodermis, as well as cortex and epidermis (Fig. 1a), which is comparable with the localization of GFP-bZIP60 driven by the bZIP60 native promoter in conditions of ER stress 20 (see also Supplementary Fig. 1). In addition, such distribution pattern was visible throughout the division, elongation and differentiation zones of roots with graded level of fluorescence from the younger regions of the root upward ( Supplementary Fig. 2). In light of the restricted accumulation of cytosolic GFP to the stele and of SHR-GFP to the stele and endodermis, these results strongly support that sbZIP60 can move transcellularly from the stele to the epidermis through the endodermis and cortex. Next, we tested whether the transcellular movement of sbZIP60 could play a role in UPR signaling in the tissues in which it is translocated. We developed a genetically encoded reporter for UPR activation by sbZIP60 in systemic tissues by expressing βglucuronidase (GUS) under the control of the BiP3 promoter (pBiP3-GUS) in either Col-0 (Col-0; pBiP3-GUS; positive control) or bzip28/60. In the latter background we introduced pBiP3-GUS alone (bzip28/60; pBiP3-GUS; negative control) or in combination with pSHR-sbZIP60-GFP (bzip28/60; pSHR-sbZIP60-GFP/pBiP3-GUS). We then tested where the UPR could be activated systemically in the root by stele-expressed sbZIP60. We expected that if sbZIP60 activated the BiP3 promoter in a systemic manner, then the tissue labeling by GUS would mirror the verified tissue distribution of sbZIP60-GFP (Fig. 1a) and would show staining intensity levels above background. As expected, we found no expression of pBiP3-GUS throughout the bzip28/60 roots (Fig. 1b, c), owing to the lack of functional bZIP60 and bZIP28. In wild type, BiP3 is generally expressed in the absence of induced ER stress 9,23 . Consistently with this, in Col-0; pBiP3-GUS we verified GUS expression, which was visible in the endodermis of the upper maturation zone, and predominantly in the stele and endodermis of the middle maturation and lower maturation zones (Fig. 1b, c), in agreement with tissue-specific transcriptomics analyses of the root 24 . In conditions of ER stress, the BiP3 expression was robustly enhanced in the stele, endodermis, cortex, and epidermis layers throughout all the root zones ( Supplementary Fig. 3), consistently with previous findings 25 . We then analyzed bzip28/60; pSHR-sbZIP60-GFP/ pBiP3-GUS transgenic plants and found a strong GUS expression in the stele and endodermis but also in the cortex and epidermis of all the root zones under analysis (Fig. 1b, c). The strong GUS activity in the bzip28/60; pSHR-sbZIP60-GFP/pBiP3-GUS line is likely linked to the overabundance of sbZIP60 driven by pSHR in these tissues compared to Col-0; pBiP3-GUS. Importantly also, the GUS activity in bzip28/60; pSHR-sbZIP60-GFP/pBiP3-GUS mirrors the verified distribution of pSHR-sbZIP60-GFP in the cortex and epidermis as well as BiP3 expression pattern under stress condition (Fig. 1a, Supplementary Fig. 3). These results support our original observations that sbZIP60-GFP can translocate systemically from a tissue where it is specifically expressed and activate transcription of a target gene in systemic tissues. Taken together, these data indicate that, in conditions of overexpression of a constitutively active UPR modulator, the UPR signaling is executed systemically in plants. 
The results also indicate that overexpressed sbZIP60 can act as transcellular mobile transcription factor, triggering UPR gene expression in a systemic manner. pSHR-SHR-GFP, and bzip28/60; pSHR-sbZIP60-GFP at the primary root tips of 5-day-old transgenics reveals stele (St) accumulation of GFP, and stele and endodermis (En) distribution of SHR-GFP; noticeably, sbZIP60-GFP is localized in the stele, endodermis, cortex (Co) and epidermis (Ep). Similarly to SHR-GFP, sbZIP60-GFP is localized in nuclei (arrows). As also reported earlier 22 , we did not find SHR-GFP localization in the nuclei of the cortex and epidermis. Propidium iodide (PI) was used for counterstaining. Scale bar: 50 μm. b Expression of pBIP3:GUS in bzip28/60, Col-0 and bzip28/60;pSHR-sbZIP60-GFP seedlings grown vertically on half LS agar medium for 11 days. X-Gluc was used for histochemical staining to monitor GUS activity. Scale bar: 100 μm. 60 lines that express sbZIP60 exclusively in the root. We expected that if a UPR signal moved shoot-ward from the root, we would detect expression of sbZIP60, and possibly of BiP3, not only in the roots but also in the shoots. To express sbZIP60 specifically in the root, we generated transgenic bzip28/60 expressing sbZIP60 under the control of the pRoot promoter 26 . pRoot drives the Arabidopsis glycosidase-transferase GLYT expression specifically in the roots 26 , both in unchallenged and Tm-challenged seedlings, as supported by the evidence that the expression of GLYT in the shoots is proximal to the detection limit of quantitative RT-PCR (qRT-PCR) and not statistically different in both experimental conditions ( Supplementary Fig. 4). We selected two independent transgenic lines that we named bzip28/60; pRoot::sbZIP60 and adopted an Arabidopsis shoot-root split culture system 27 in which 2-week old intact seedlings grown on vertical plates are transferred to Petri dishes that are subdivided by a sealed plate divider to separate growth media with different composition 28 . By laying intact seedlings across the plate divider, this system exposes shoot and root to the medium contained in each plate sub-compartment separately. We applied Tm or DMSO (Tm solvent) for 24 h to the medium exposed to the root and compared the levels of UPR gene transcripts in shoot and root of seedlings challenged by Tm at the root by qRT-PCR. We used Col-0 and bzip28/60 as positive and negative controls, respectively. As expected, in Col-0 the levels of the UPR gene transcripts sbZIP60 and BiP3 were more abundant in Tm-treated seedlings compared to DMSO control ( Fig. 2a, b). Conversely, the bzip28/ 60 mutant did not show elevation of either UPR marker gene in the absence and in the presence of Tm (Fig. 2a, b), despite its ability to absorb Tm ( Supplementary Fig. 5). However, the bzip28/60; pRoot::sbZIP60 lines showed sbZIP60 and BiP3 transcripts in the root at levels that were significantly higher compared to bzip28/60 ( Fig. 2a, b), indicating that the pRoot promoter is functional for the expression of these genes. Next, analyses of sbZIP60 and BiP3 transcripts in the shoot of bzip28/60; pRoot:: sbZIP60 lines showed significantly higher levels of sbZIP60 and BiP3 transcripts compared to bzip28/60. Unlike in Col-0, in bzip28/60; pRoot::sbZIP60 the sbZIP60 and BiP3 transcript levels were largely unchanged by Tm treatment, as it would be expected for genes normally controlled by ER stress-responsive promoters. 
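The qRT-PCR values reported in this and the following sections are expressed relative to an internal control (UBQ10) and to an untreated calibrator sample set to 1. The sketch below shows how such values are conventionally obtained with the 2^(-ΔΔCt) method; this is an assumption for illustration (the exact calculation is not spelled out here), and the gene names and Ct values are placeholders.

```python
# Minimal sketch of relative transcript quantification by the 2^(-ΔΔCt) method,
# assuming UBQ10 as the internal reference and an untreated sample as calibrator.
# Ct values below are invented placeholders, not data from this study.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of a target transcript versus the calibrator sample."""
    delta_ct_sample = ct_target - ct_reference               # normalize to UBQ10 (sample)
    delta_ct_calibrator = ct_target_cal - ct_reference_cal   # normalize to UBQ10 (calibrator)
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for BiP3 and UBQ10 in a treated vs. untreated tissue:
fold = relative_expression(ct_target=24.1, ct_reference=20.0,
                           ct_target_cal=27.5, ct_reference_cal=20.2)
print(f"BiP3 relative expression: {fold:.1f}")  # roughly 9-fold in this toy example
```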
Together with the evidence that pRoot is unresponsive to Tm and that it is expressed specifically in roots ( Supplementary Fig. 4), as well as the consideration that the experiments were conducted in a genetic background normally lacking endogenous expression of the UPR bZIP-transcription factors and, consequently of their target genes under ER stress, these data indicate that the rootgenerated bZIP60 and BiP3 transcripts are found in tissues in which their expression is not driven locally by an ER stressresponsive promoter. These findings support the hypothesis that the plant UPR signaling is non-cell autonomous and imply that the transcellular translocation of at least sbZIP60 transcripts from root to shoot may be involved in long-distance transduction of the UPR. Systemic UPR acts systemically in a shoot-ward direction. We next tested the occurrence of endogenous systemic UPR using the shoot-root split culture system in order to apply Tm for 24 h either to the medium at the shoot or the root of 2-week old wildtype Col-0 seedlings (Fig. 3a); we then monitored the UPR signaling in each portion of the seedling by qRT-PCR (Fig. 3b). We first monitored the sbZIP60 and BiP3 mRNA levels in seedlings treated with Tm at the roots and found increased levels compared to the root of a mock control ( Fig. 3b; D/T vs 0 h). When we analyzed the untreated shoot of the root-treated seedlings, we observed a significant raise in the transcript levels of sbZIP60 and BiP3 compared to mock control ( Fig. 3b; D/T vs 0 h). These results indicate that Tm-treatment of the root leads to UPR signaling activation both in the Tm-treated root as well as in the untreated shoot. Next, we analyzed the UPR gene transcripts in seedlings treated with Tm at the shoots ( Fig. 3a; T/D). We found that both sbZIP60 and BiP3 transcript levels increased in the shoot compared to mock control ( Fig. 3b; T/D). Furthermore, in net contrast to the D/T seedlings, no significant induction of sbZIP60 and only a slightly induction of BiP3 were detected in the untreated root upon shoot treatment compared to mock control ( Fig. 3b; T/D vs 0 h). These results point towards the possibility of a systemic actuation of ER stress responses mainly in a shootward direction. We next tested the kinetic profiles of bZIP60 transcription and splicing as well as the subsequent activation of BiP3 expression in response to ER stress within 24 h. We found that in the Tm-treated roots, a significant induction of unspliced bZIP60 (unbZIP60) occurred at 3 h ( Supplementary Fig. 6A). Consistently, the emergence of sbZIP60 transcripts started at 3 h and reached peak expression at 6 h post treatment followed by a bzip28/60; pRoot::sbZIP60 bzip28/60; pRoot::sbZIP60 Root-expressed sbZIP60 transcripts are translocated to the shoot. a, b Quantitative RT-PCR analyses of sbZIP60 (a) and BiP3 (b) in 14-day-old wild type (Col-0), bzip28/60, and bzip28/60; pRoot::sbZIP60 seedlings treated with DMSO or 0.5 µM Tm (Tunicamycin) at the root in the shoot-root split system for 24 h. Transcription of UBQ10 was used as internal control. Error bars represent s.e.m among three biological replicates. Data significantly different from the corresponding control are indicated by asterisks (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; Unpaired t-test). Black asterisks designate differences between bzip28/60 and bzip28/60; pRoot::sbZIP60 lines under Tm non-treatment condition. 
Red asterisks designate differences between bzip28/60 and bzip28/60; pRoot::sbZIP60 under Tm treatment condition decline of induction levels at 12 h and 24 h post treatment ( Supplementary Fig. 6B). The BiP3 expression displayed a similar trend like sbZIP60 but reached the peak at 12 h post treatment ( Supplementary Fig. 6C), in line with a time correlation of the sbZIP60-driven BiP3 activation in response to the ER stress. In the untreated shoots, a significant induction of unbZIP60 and sbZIP60 was detected at 3 h post treatment and BiP3 expression was predominantly detected at 6 h post treatment (Supplementary Fig. 6A-C) supporting a roughly 3 h-requirement for systemic UPR signals, including bZIP60 transcripts translocation, from treated roots to shoots to elicit the downstream UPR gene BiP3 in our experimental conditions. We then monitored Tm levels in the shoot and root of seedlings treated with Tm in the shoot-root split culture system, using HPLC/MS. As controls, we used seedlings exposed to either DMSO (D/D) or Tm (T/T) at both the shoot and the root. As expected, we observed a significant Tm accumulation in the treated shoot and root compared to the respective DMSO (mock control)-only treated tissues ( Fig. 3c; D/T and T/D vs D/D), indicating that the seedlings absorbed Tm from the growth medium. Furthermore, the root of seedlings with Tm-treated shoot showed accumulation of Tm ( Fig. 3c; T/D vs D/D), indicating that Tm can translocate from the shoot to the root. Importantly however, we found no significant increase of Tm levels in the DMSO only-treated shoot of seedlings with Tm-treated root compared to mock seedlings ( Fig. 3c; D/T vs D/D) indicating that Tm is not transported from the root to the shoot in the shoot-root split culture system to a detectable level. Taken together, these results support that an Local ER stress ignites the UPR systemically, mostly in a shoot-ward direction. a Diagrams illustrating the Arabidopsis shoot-root split culture system in which the shoot and root of an intact seedling are exposed to separate growth media with different chemical conditions: mock DMSO (D) or 0.5 µM Tm (T, Tunicamycin). T/D denotes shoot on Tm-containing medium and root on DMSO-containing medium; conversely, D/T denotes shoot on DMSOcontaining medium and root on Tm-containing medium. b Quantitative RT-PCR analyses of UPR markers in 14-day-old wild-type seedlings treated with DMSO or 0.5 μM Tm for 24 h as described in a. Values are presented relative to non-treated control (0 h), which was set to 1. Transcription of UBQ10 was used as internal control. Error bars represent s.e.m among three biological replicates. Data significantly different from the corresponding control are indicated by asterisks (*P < 0.05, ****P < 0.0001, NS, nonsignificant; Unpaired t-test). c Quantitative HPLC/MS analyses of Tm content in shoot and root of seedlings after treatments as described in a in a shoot-root split culture system. The numbers over the histograms express ng g −1 fresh weight (F.W.). Data significantly different from the corresponding control are indicated by asterisks (**P < 0.01, NS, nonsignificant; Unpaired t-test) NATURE COMMUNICATIONS | DOI: 10.1038/s41467-018-06289-9 ARTICLE NATURE COMMUNICATIONS | (2018) 9:3918 | DOI: 10.1038/s41467-018-06289-9 | www.nature.com/naturecommunications endogenous UPR signaling acts systemically mainly in a shootward direction that is independent of Tm transport from the root to the shoot. Root signals induce UPR genes in unchallenged tissues. 
We next performed reciprocal micro-grafting analyses in which the aerial tissue (scion) is grafted onto the root (rootstock) of a different plant, using wild type (Col-0) and bzip28/60 (Fig. 4). We expected that if an endogenous transcellular UPR signal existed as so far supported by our experiments (Fig. 3, Supplementary Fig. 6), we would observe UPR gene transcripts in the bzip28/60 tissues grafted with Col-0 tissues. Because our results indicate that a UPR signaling moves mainly in a shoot-ward direction (Fig. 3b, Supplementary Fig. 6), to monitor the UPR signaling at tissuespecific level we compared the abundance of sbZIP60, BiP3 and bZIP28 transcripts in self-grafts (same genetic background) and hetero-grafts (different genetic background) of scion and rootstock from independent seedlings treated with Tm at the rootstock. As a reference, we used untreated (i.e., DMSO only) micrografted seedlings with the same genetic combination as the respective Tm-treated micro-grafted seedlings. As expected, upon Tm treatment the self-grafted bzip28/60 (scion/rootstock combination indicated as bzip28/60/bzip28/60) did not show a significant increase of sbZIP60, BiP3 and bZIP28 transcripts compared to the same untreated background both in the scion and in the rootstock (Fig. 4a-c). In net contrast, compared to the bzip28/60/bzip28/60 self-grafts, the Col-0 self-grafts (Col-0/Col-0) displayed significant induction of the UPR marker genes, sbZIP60 and BiP3, in both scion and rootstock but non-induced levels of bZIP28, consistent with a bZIP28 signaling in ER stress mainly mediated at a protein level 9 (Fig. 4a-c). These controls indicate that the micro-grafting approach does not hamper the response of the grafted unions to ER stress. We then tested the bzip28/60 scion grafted on Col-0 rootstock (bzip28/60/Col-0). Compared to Col-0 self-grafts, in the rootstock of bzip28/60/Col-0 we found similar levels of UPR gene transcripts (Fig. 4a-c), indicating that the Col-0 rootstock in the bzip28/60/Col-0 grafts can respond to ER stress as the self-grafted Col-0/Col-0 rootstock. In the scion of bzip28/60/Col-0 hetero-grafts, the UPR gene transcript levels were significantly higher compared to the scions of bzip28/60 selfgrafts albeit lower when compared to the Col-0/Col-0 scion (Fig. 4a-c). Because the bzip28/60 background is unable to evoke the UPR, these results indicate that the scions of bzip28/60/Col-0 hetero-grafts contain UPR gene transcripts originated from the Col-0 rootstock. These results are consistent with our observations that a shoot-ward signal originated from a Tm-treated tissue can induce the UPR in an unchallenged systemic tissue (Fig. 3b) and findings that the bZIP28 mRNA can move intracellularly in unstressed conditions 29 . Next, we tested whether UPR signaling other than the canonical bZIP60 and bZIP28 arms could be involved in the systemic UPR response. We conducted a separated reciprocal grafting with Col-0 as the scion and bzip28/60 as the rootstock (Col-0/bzip28/60). We expected that, if other shoot-ward UPR signaling were in place beside the bZIP28 and bZIP60 arms, then the levels of BiP3 would be affected in the scion of the Col-0/ bzip28/60 hetero-graft. Similar to bzip28/60 self-grafts, there was no significant induction of sbZIP60, BiP3, and bZIP28 in the rootstocks of Col-0/bzip28/60 hetero-grafts upon Tm treatment ( Supplementary Fig. 7A-C). 
Also, in the scions of the Col-0/ bzip28/60 hetero-grafts the sbZIP60, BiP3, and bZIP28 were not induced and their respective mRNA levels were not significantly different compared to the scions of bzip28/60 self-grafts ( Supplementary Fig. 7A-C). While the lack of BiP3 induction in the scions is likely due to the absence of a functional UPR machinery in the rootstocks of Col-0/bzip28/60 hetero-grafts, these data indicate that the systemic UPR signaling requires the function of the canonical UPR bZIP-arms. Systemic UPR signaling is plasmodesmata dependent. Direct intercellular communication between plant cells, including cells of the stele and the endodermis, occurs via plasmodesmata (PD), which are connecting micro-channels between adjacent cells and the main route for certain signaling molecules in cell-to-cell trafficking 30 . To establish if the systemic UPR relies on PDmediated traffic, we tested whether sbZIP60 could target the PD. We generated a YFP fusion to sbZIP60 driven by a constitutive promoter for confocal microscopy analyses in leaf epidermal cells, which are more suitable for imaging analyses of PD compared to root tip cells. We verified a nuclear localization of YFP-sbZIP60 as well as a diffused distribution with conspicuous punctate reminiscent of PD ( Supplementary Fig. 8A). The PD localization of sbZIP60 was confirmed through co-localization analyses with CFP fused to the established PD-localized receptor-like transmembrane protein, PDLP1 31 ( Supplementary Fig. 8A). In contrast, a cytosolic YFP (cYFP) control did not show marked colocalization with PDLP1-CFP ( Supplementary Fig. 8A). Integrated density measurements of the YFP signal at the PD using Aniline Blue (AB), a vital PD dye 32 , and Pearson correlation coefficient 33 estimation, as a measure of the association of fluorescence intensity between the YFP and AB signals, indicated that YFP-sbZIP60 co-localized at PD at significantly higher levels compared to cYFP (Supplementary Fig. 8B). The evidence that sbZIP60-GFP moves away from the stele (Fig. 1a, Supplementary Fig. 2), a process that is mediated exclusively by PD, and the coincidental distribution of YFP-sbZIP60 at PD ( Supplementary Fig. 8) support that sbZIP60 is translocated via PD in conditions of ectopic expression. A PD dependency of endogenous systemic UPR response was tested in the shoot-root spilt culture system using the conditional PD blockage mutant, pMDC7-icals3m 34-36 , in which the PD passage is reduced by an enhanced accumulation of callose throughout whole seedling upon gene induction by estrogen 34,37 . In Col-0, in the presence of estrogen, the expression of sbZIP60, and BiP3 was similar to the mock condition, indicating that exogenous estrogen does not elicit ER stress (Fig. 5a). Both UPR markers exhibited similar expression profiles in shoots and roots in conditions of Tm-treatment only, or in condition of Tm-treatment in conjunction with estrogen application to the roots (Fig. 5a), further supporting that the addition of exogenous estrogen does not affect the systemic UPR response. In the conditional PD blockage mutant, there were no significant differences in the transcript levels of UPR markers in Tm+estrogen treated roots compared to Tm only treated roots, indicating no effect of estrogen on the UPR in this mutant (Fig. 5b). Most importantly, the expression of both UPR markers was significantly reduced in the shoots of seedlings treated with Tm+estrogen at the roots compared to the shoots of seedlings with Tm-only treated roots (Fig. 
5b). Therefore, an induction of PD closure compromises the systemic UPR signaling likely mediated by a translocation of signaling molecules such as sbZIP60 to elicit UPR in the distal tissues. Discussion Long-distance signaling is required for plants to actuate physiological processes and thrive in response to environmental challenges [38][39][40] . Here we demonstrate that during a 24 h-time course upon local application of Tm to wild-type seedlings, the UPR markers can be detected in systemic tissues, indicating that ER stress in plants evokes a long-distance signal transduction of the UPR. Our conclusions are further corroborated by reciprocal grafting and ectopic expression assays showing significant transcript levels of sbZIP60 and BiP3 distally from the Tm-treated tissues. Therefore, our work demonstrates that, in addition to the well-established existence of a cell-intrinsic UPR signaling, plants harness long-distance signaling to communicate the occurrence of ER stress in a tissue to systemic tissues. We also show that sbZIP60 moves transcellularly and that its movement causes downstream UPR activation in distal tissues. Based on these results, we propose that sbZIP60 participates as a non-cell autonomous factor to actuate distal UPR signaling directly through its movement across cells. Previous studies in C. elegans and mice relied on overexpression of transcriptionally active XBP1 to infer the existence of systemic UPR signaling 11,12 . Similarly, local expression of However, an endogenous signal is actuated to evoke a systemic UPR, as supported by the detection of endogenous sbZIP60 and BiP3 transcripts in the shoot of wild-type seedlings exposed to Tm at the root in the split-plate system and the micro-grafting experiments using the bzip28/60/Col-0 hetero-grafts in which we detected the endogenous sbZIP60 and BiP3 transcripts (i.e., Col-0-originated) in the scion. The lack of genetic information in the bzip28/60/Col-0 scions that is necessary to express normally sbZIP60 and BiP3 in the aerial tissues supports that the presence of transcripts of these genes in the scion is the result of endogenous systemic UPR signaling. The induction level of BiP3 in bzip28/60/Col-0 scion was lower than that in Col-0 shoots under shoot-root split treatment, which may be a consequence of a lack of UPR amplification in bzip28/60 scions. In plants, several types of mobile RNA have been identified 41 . Transcriptomic analyses in Arabidopsis phloem also reported on the existence of cellular mRNA, suggesting the potential role of these mRNA as signaling molecules in the long-distance trafficking 42 . However, also proteins can contribute to systemic signaling, as it occurs for the well-established floral systemic signaling mediated not only by FT proteins but also by the FT mRNA 39 . This may be also the case for sbZIP60 protein and sbZIP60 mRNA, which both could function as transposable molecules eliciting systemic UPR signaling. The evidence that bZIP60 is localized at the PD supports the possibility that the bZIP60 protein moves systemically, but it also possible that the intercellular translocation of bZIP60 mRNA leads to translation of an active transcription factor in systemic tissues. bZIP28 is another master UPR modulator contributing to the activation of ER stress response genes, including BiP3, ERDJ3A, and TIN1 in the plant UPR 19,43 . 
While the results obtained using bZIP60 as transgene in the bzip28/60 background provide evidence for a role of bZIP60 in systemic UPR signaling, the presence of bZIP28 in the wild-type background may facilitate to some extent the expression of BiP3 in the wild-type shoot of seedlings challenged with Tm at the root on the split-plate system. The bZIP28 transcripts are transposable between cells, at least in conditions different from ER stress 29 . Therefore, bZIP28 mRNA or its gene product may be involved in the systemic UPR regulation in parallel to the bZIP60 arm. Indeed, the lack of induction of UPR transcripts in the scions of Col-0/bzip28/60 hetero-grafts exposed to Tm at the roots argues that the systemic UPR signaling relies on the canonical UPR arms. In plants, cell-to-cell communication takes place through pathways that involve the apoplast via the continuum of the cell wall, symplastic-driven cytoplasmic transport between different cells within the same tissue or among tissues connected by PD, and vascular-driven transport between different groups of cells or tissues utilizing conducting system composed of phloem and xylem 44 . Using subcellular localization analyses and transcriptional analyses, we have found sbZIP60 protein and sbZIP60 mRNA in ectopic tissues, respectively; we also found that the sbZIP60 protein can associate with PD and enter the nucleus, implying that the movement of bZIP60 in systemic signaling may occur through PD. A requirement for PD availability for longdistance UPR signaling is supported by the evidence that in a conditional PD mutant the systemic UPR is attenuated when PD closure is induced. Although we have verified the presence of sbZIP60 protein at the PD, the sbZIP60 mRNA may translocate through the PD, as it occurs for the mRNA of other proteins 39 . Additionally, the reported existence of unspliced bZIP60 (unbZIP60) mRNA and likely co-localization with ER-associated unbZIP60 protein 20 does not exclude a potential mobility of unbZIP60 mRNA for involvement in the systemic UPR through PD where a modified ER is present 45 . The visualization of sbZIP60 protein at the PD may be facilitated by expression of sbZIP60 by the CaMV 35S promoter. However, as PD protein targeting is a highly specific process 46 , the subcellular localization of sbZIP60 at PD is unlikely a result of overexpression. This is further supported by a relatively lower frequency of localization of cytosolic YFP at the PD in the same experimental conditions. In plants, some systemic response regulators that target PD do not act directly on systemic response effectors 47,48 . The evidence provided in our work that transcellularly translocated sbZIP60 protein can induce the activity of the promoter of a target gene indicates that the systemic movement of sbZIP60 is functional in evoking UPR gene expression. Biotic stress like pathogen attack induces SAR through which the signals generated from infected sites are translocated to distal plant tissues priming the defense response and eliciting immunity for subsequent infections 40 . Based on the results that sbZIP60 traffics across cells, we propose that, similar to SAR, sbZIP60 participates in long-distance stress signaling to modulate the UPR in cells distally from the site where ER stress occurs. In nature, ER stress is caused by a variety of challenges including pathogens, heat, and salt 7,49,50 . A long-distance signaling of ER stress from challenged tissues may help cells anticipate incoming stress to yet-unchallenged tissues. 
For example, the UPR is required for plant defense response by modulating secretion of antimicrobial proteins 51 . Therefore, a systemic signaling of ER stress may prepare cells of systemic tissues for responding to a potentially-upcoming ER stress by inducing the accumulation of transcripts of ER stressattenuating proteins. In our work, we have verified significant levels of sbZIP60 and BiP3 transcript accumulation in the shoot when Tm was applied to the root; conversely, when applied to the shoot, Tm induced sbZIP60 and BiP3 transcript accumulation in the roots to much lower levels. Based on these results, we propose that a rootoriginated transcriptional signal may operate mainly in a shootward direction for ER stress signaling. However, a shootoriginated signal that moves in a root-ward direction may also exist. If such a signal were in place it would operate to lower levels than the root-originated signal, which would prevent its detection by qRT-PCR. Long-distance stress signaling in plants is mediated by a number of inducers 47 . The role of these inducers in long-distance ER stress signaling is yet to be evaluated and it is conceivable that long-distance ER stress signaling may overlay a bZIP60-mediated signaling with the action of other stress transducers. Nonetheless, it has been shown in seedlings that ER stress responses are independent from endogenous SA and that Tm does not induce accumulation of SA 52 . Therefore, long-distance ER stress signaling is likely independent from SA. The evidence that micro-grafts with a bzip28/60 rootstock fail to evoke UPR signaling in a Col-0 scion further supports that the systemic UPR signaling mainly relies on the canonical UPR arms. Our findings address the long-standing question-whether plant UPR constitutes an endogenous systemic signal. From our results, we conclude that this is the case. The identification of bZIP60 as a component of the long-distance UPR signal transduction is a significant step forward in the understanding of the mechanisms underlying systemic signaling transduction of ER stress responses in intact organisms. Shoot-root split culture system. Plants were grown vertically on the half-strength LS medium for 14 days and transferred onto a 9 cm Petri dish equipped with two compartments (Kord-Valmark #2903, USA) containing medium with either DMSO in one side or 0.5 μM Tm, 10 μM 17-β-estradiol and 0.5 μM Tm in combination with 10 μM 17-β-estradiol in the other side and cultured horizontally for an additional 1 day. Data were acquired on at least three biological replicates. qRT-PCR for gene expression analyses. Total RNA was extracted using Macherey-Nagel NucleoSpin RNA Plant kit (www.mn-net.com). All samples within an experiment were reverse transcribed at the same time using iScript cDNA synthesis Kit (BIO-RAD 1708891). Real-time qRT-PCR with SYBR green detection was performed in triplicate using the Applied Biosystems 7500 Fast Real-Time PCR System 55 . UBQ10 was utilized as an internal control in normalization of qRT-PCR, unless otherwise stated in Supplementary Fig. 4. Similar patterns of expression were observed in the three independent biological replicates. Primers used in this study are listed in Supplementary Table 1. Tunicamycin measurements. Two-week-old plate-grown seedlings (~100 mg) exposed to the shoot-root split culture system as detailed above were harvested and ground in liquid nitrogen. 
The samples were extracted at 4°C overnight using 1 ml of ice cold methanol:water (80:20 v/v) containing 0.1% formic acid; 0.1 g L −1 tunicamycin spiked with propyl 4-hydroxybenzoate as an internal standard for quantification of Tm levels. Then samples were vortexed and centrifuged at 12,000× g 4°C for 10 min, after which the flow-through samples were transferred to HPLC vials for the measurement of endogenous concentration of Tm by LC-MS with Quattro Premier XE (Waters company) instrument according to an established protocol 56 . The experiments were repeated at least two times with three biological replicates each time showing similar results. Grafting experiment. Hypocotyl micro-reciprocal grafting was performed as described in Marsch-Martínez et al. 57 , with minor modifications. Briefly, micrografting was conducted using ten-day-old seedlings grown vertically that were first grown in 16h-light/8h-dark 120 µmoles.m −2 s −1 for 6 days and then in dark for 4 additional days. Grafts were generated by transverse sectioning of the aerial and root portions followed by conjunction on growth medium. After the grafts were made, samples were grown in dark vertically for 2 days, moved to light but covered with fine mesh for 1 day and then uncovered to grow in light. Adventitious roots were removed every 2-4 days. Samples were harvested by removing the whole hypocotyl portion avoiding the potential contamination at grafted junction and processed 4 weeks after micro-grafting for further analysis. Lack of contamination of adventitious roots in grafted plants was confirmed by genotyping with PCR. Analyses were performed on at least three biological replicates. Plasmid construction. The Phusion high-fidelity DNA polymerase (NEB, USA) was employed to amplify all DNA sequences using the primer sets in Supplementary Table 1; the Gateway system (Invitrogen, USA) was used to generate expression plasmids. The promoter sequences of GLYT, SHR, and BiP3 and the coding sequences of spliced bZIP60 and PDLP1 were amplified using Arabidopsis genomic and cDNA as templates, respectively. The amplified coding sequences were recombined into indicated Gateway vectors via LR reaction (Invitrogen, USA) and confirmed by sequencing. Confocal laser scanning microscopy imaging. Confocal imaging was performed with an inverted laser scanning confocal microscope Nikon A1RSi. For subcellular localization assays, leaf tissues were mounted on a slide in a drop of tap water and viewed with the confocal microscope. GFP fluorescence was monitored at excitation wavelength of 488 nm and a bandpass 500-550 nm emission filter. Propidium iodide was monitored with a 560 nm excitation wavelength and a 570-620 nm bandpass emission filter. YFP fluorescence was monitored with a 513 nm excitation wavelength and 520-550 nm bandpass emission filter. CFP was monitored with a 443 nm excitation wavelength and a 465-505 nm emission filter. Quantification of protein co-localization at PD. Agrobacterium cultures at a cell density A 600 : 0.025 and 0.5, containing the constructs for cytosolic YFP and YFP-sbZIP60, respectively, were infiltrated through stomata into Nicotiana tabacum leaves using needleless syringe. The images were acquired 48 h after infiltration. Pearson correlation coefficient (PCC) values were calculated using the colocalization tool with default settings of the Nikon microscope software (NIS-Element AR 4.30). Areas with CFP and/or YFP signals at PD puncta were selected for the coefficient calculations. 
The coefficient calculations were performed on 170 areas of 3.4 µm² each. Propidium iodide staining. Arabidopsis transgenic seedlings were stained in a 10 μg ml−1 working solution of propidium iodide, prepared by diluting a 1 mg ml−1 stock solution (Sigma-Aldrich P4864) with half-strength LS liquid medium. GUS staining. β-Glucuronidase activity in transgenic roots carrying pBIP3-GUS was visualized with X-Gluc as substrate using a conventional protocol 58. Staining was performed three times in three independent lines with consistent results. Aniline Blue staining. The vital dye Aniline Blue diammonium salt (Sigma-Aldrich 415049), dissolved in 1 M glycine (pH = 9.5) to a 0.1% working solution, was used for PD-associated callose staining upon stomatal infiltration with a needleless syringe into Nicotiana tabacum leaves for confocal microscopy analyses. Staining was performed three times in three independent lines with consistent results.
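As a rough illustration of the co-localization quantification described above, the following sketch computes a Pearson correlation coefficient between two fluorescence channels over small regions of interest. The image arrays, ROI layout, and synthetic example are assumptions for illustration only; the values reported in this study were produced with the NIS-Elements colocalization tool.

```python
# Illustrative sketch (not the NIS-Elements pipeline used in the study): Pearson
# correlation coefficient (PCC) between two fluorescence channels over small ROIs,
# standing in for the YFP-sbZIP60 / Aniline Blue co-localization quantification.
import numpy as np

def pearson_colocalization(channel_a: np.ndarray, channel_b: np.ndarray) -> float:
    """PCC between two equally shaped intensity arrays (one ROI)."""
    a = channel_a.astype(float).ravel()
    b = channel_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def mean_pcc_over_rois(yfp_img, marker_img, rois):
    """Average PCC over a list of (row_slice, col_slice) ROIs, e.g. puncta-sized areas."""
    values = [pearson_colocalization(yfp_img[r, c], marker_img[r, c]) for r, c in rois]
    return float(np.mean(values)), values

# Synthetic example standing in for the YFP and Aniline Blue channels.
rng = np.random.default_rng(0)
yfp = rng.random((512, 512))
aniline = 0.6 * yfp + 0.4 * rng.random((512, 512))   # partially correlated channel
rois = [(slice(i, i + 16), slice(j, j + 16))
        for i in range(0, 512, 128) for j in range(0, 512, 128)]
mean_r, _ = mean_pcc_over_rois(yfp, aniline, rois)
print(f"mean PCC over {len(rois)} ROIs: {mean_r:.2f}")
```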
Evaluation of Human Resource Information System by Using HOT-Fit Model Today’s competitive business environment, many organization give more attention to enhance effectiveness and efficiency of employee. PT Pembangkitan Jawa Bali as a subsidiary of PT PLN (Persero) also continues to improve business processes of Human Resource Management through the application of Human Resource Information System (HRIS). This application has been developed since 2016 and is still implemented 67% of the HRIS PJB model design. This certainly raises questions about the development of HRIS PJB implementation, which is need for an evaluation of the implementation of HRIS PJB applications. HRIS Manager has never conducted an evaluation of HRIS PJB that can strengthen the benefits of its use for employees. Therefore, this study was conducted to evaluate the use of HRIS PJB applications in this case the focus on the Personal Management module in order to assess its usefulness to the needs of users and organizations by using Human Organization Technology (HOT) Fit Model. Primary data were obtained through a survey method by distributing questionnaires to PT PJB employees as application users. Data analysis method used is Partial Least Square using SmartPLS. The results of this study shows the suitability of this application benefits. If there is a gap, alternative solutions are needed to improve and develop applications in the future. Decision making through Borda Count Method is used to get the right solution with the current conditions and can be used as improvements priority and development of HRIS PJB applications for both application managers and management of PT PJB. I. INTRODUCTION OWADAYS it is undeniable that information technology is one of the main resources in an organization to improve competitiveness and optimal service. Therefore, every organization tries to apply information technology in order to increase effectiveness and efficiency in business processes, it aims to be able to provide added value in the form of competitive advantage. No exception the function of Human Resources (HR) in organizations that have been affected by the paradigm shift where human resource management is now moving from a silo approach to an integrated approach [1]. Integrating the HR function in planning a company's business strategy is needed so that the process of managing resources can be done effectively and efficiently. While all current HR practices are influenced by information technology, the term Human Resource Information System (HRIS) appears. According to Hendrickson [2], HRIS is defined as an integrated system used to collect, store and analyze information about an organization's human resources consisting of databases, computer applications, hardware and software needed to collect, record, store, manage send, present and manipulate data for human resource functions. HRIS consists of several modules, one of which is Personal Management. This module is related to the personnel administration process in an organization. This study evaluates the application of the Human Resource Information System (HRIS) system by focusing on the Personal Management Application (PMAN) at PT Pembangkitan Jawa Bali by using the Human Organization Technology (HOT) Fit model. 
This model was chosen because it was considered capable of explaining a comprehensive evaluation approach to the core components of the information system, which are Human, Organization, Technology and the suitability of the three components affecting net benefits on implementation of the information system. Based on the background of the problem, problem formulation of this study is evaluating the success of the application of the PMAN application at PT Pembangkitan Jawa Bali by referring to the HOT-Fit model by looking at three factors, which are human, technology, and organization. The objectives of this reasearch are to evaluate the success of the application of PMAN at PT Pembangkitan Jawa Bali and get empirical evidence regarding the successful application of the PMAN by using HOT-Fit model. By knowing the empirical evidence resulted of this study, recommendations for developing PMAN and HRIS PJB can be carried out appropriately by management in managing HRIS PJB, so that it can make the optimal utilization of PJB HRIS. II. METHOD This study uses a quantitative descriptive study method by conducting surveys and collecting primary data through interviews with guidance on distributing questionnaires to PMAN application users as respondents. In this study the object and material are the users of the PMAN application at PT Pembangkitan Jawa Bali. The types of questions used in the questionnaire are closed questions. The sampling technique used in this study was to use the Slovin formula (α = 5%) which was measured using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) [13].This study uses the HOT-Fit model developed by Yusof [3], with several modifications to assess [14]. The definition and concept of HOT-Fit variables used in this study can be explained as follows: A. Human The Human Aspect in HOT-Fit Model uses two dimensions to assess the success of information technology application , which are the system use and user satisfaction. System use focuses on the frequency and breadth of functions and investigations of information systems. Besides system use can be measured through: 1) people who use, 2) the level of use, 3) training , 4) knowledge, 5) beliefs, 6) expectations, and 7) acceptance or rejection of the system. The frequency of system use is usually measured by how often or long a user uses the system which will result in the user's dependence on the system.User Satisfaction focuses on measuring system success. User satisfaction is subjective because it depends on whose satisfaction is measured. User satisfaction is defined as an overall evaluation of user experience in using the system and the potential impact of the system. User satisfaction can be associated with perceptions of usability and user attitude towards the system that is influenced by personal characteristics. User satisfaction can be measured through 1) experience using the system, 2) the potential impact of the system, 3) perceived usefulness, 4) attitudes which are influenced by his / her personal characteristics [3] B. Organization Organizational aspects assess information systems in terms of organizational structure and organizational environment. C. Technology The technology aspect evaluates the system in terms of the quality of the information system that is related to the system quality, information quality and service quality. System quality in question is the quality or performance of the system itself. 
Both in terms of hardware and software that provides information for users. System quality can be measured through 1) ease of use, 2) ease of learning, 3) response time, 4) usefulness, 5) availability, 6) reliability, 7) completeness, 8) system flexibility, and 9) security. Even the existing system is often not used because it is not as expected. Therefore, it is important to determine whether the system (1) meets the needs of the projected user, (2) is comfortable and easy to use, and (3) matches the user's work pattern. Information quality is the quality of information output provided by the information system. Information quality can be measured through 1) completeness, 2) accuracy, 3) legibility, 4) timeliness, 5) availability, 6) relevance, 7) consistency, and 8) reliability. Service quality is related to the overall quality support provided by external providers to internal departments. Service quality can be assessed through 1) technical support, 2) quick responsiveness, 3) assurance, 4) empathy, and 5) follow-up service [3]. D. Net benefits A system can benefit a user, a group of users, an organization or a company as a whole. Net Benefit captures the balance of positive and negative impacts for its users, which includes managers and IT, staff, system developers, departments, work units or all sectors in the organization. From the human aspect, the impact of user behavior is influenced by the information it receives through the system. Changes can be in the form of influences on performance, changes in work activities, and increased productivity. Thus, an individual's Net Benefit can be evaluated using impact on work, efficiency, effectiveness, decision quality, and error reduction. From the Organizational aspect, the influence of information impacts the perceived performance of the organization. Just like individual Net Benefit, the organization's Net Benefit can also be evaluated using 1) job effects, 2) efficiency, 3) effectiveness, 4) decision quality, and 5) error reduction. Fit can be measured and analyzed using the number of definitions given by these three factors related to the relationship dimensions and information system success, which are System Quality, Information Quality, Service Quality, System Use, User Satisfaction, Structure, Environment, and Net Benefit [8] [9]. Figure 1 shows the model and variables that will be used in this study based on the HOT-Fit Model. There are three main aspects with different variables and one complementary variable. The first aspect is Technology with dimensions of system quality, information quality, and service quality. The second aspect is Human with the dimensions of system use and user satisfaction. The third aspect is the Organization [3], there were no changes in all aspects and dimensions used in this study. However, in this study a slight change was made to match the problem conditions at PT Pembangkitan Jawa Bali, which is the change in relationships between variables in each aspect in one direction. From the aspect of technology, each dimension is connected one way to each dimension on the aspects of human and organization. Likewise in each dimension on the aspects of human and organization that are connected in one direction on the net benefit dimension. This one-way relationship was chosen because of the evaluation adjustments made to find out whether the use of PJB HRIS affects the user (human) and PT Pembangkitan Jawa Bali (organization) in obtaining the overall benefit (net benefit). 
Furthermore, hypotheses were formulated based on the relationships in the HOT-Fit model used. This study draws its sample from the entire user population. Users are employees of PT Pembangkitan Jawa Bali, with a total of 2,984 employees as of April 2020. The number of samples was calculated using the Slovin formula (α = 5%), giving a result of 353 employees. The questionnaire was designed according to the HOT-Fit model. The answer choices are mapped onto a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Data were processed using SmartPLS software to test the outer and inner models. The first step is to determine the parameters according to the rule of thumb for the outer and inner models under the PLS rules shown in Table 1 [11].

III. RESULT AND DISCUSSION

From the average of the respondents' survey answers, the overall data show that many respondents gave a rating of four (4), which means that the System Quality, Information Quality, and Service Quality of the HRIS application as a whole are good. Nevertheless, closer examination reveals several factors that need to be improved. Table 2 shows that the outer loading values of all indicator items on each variable are greater than 0.7, so it can be concluded that the indicators used in this study meet convergent validity. The square root of the AVE of each variable is greater than its correlations with the other variables, as shown in Table 3; hence, discriminant validity is fulfilled. Table 4 shows that the composite reliability and Cronbach's alpha of each variable are greater than 0.7; thus, the research model compiled by the researchers meets construct reliability. Then, the inner model is tested using R-Square and bootstrapping. The R-Square values of each latent variable are greater than 0.1, or 10% (see Table 5). Hypothesis testing is done by looking at the effect coefficient and the T-value generated in the inner PLS model. The significance level used in hypothesis testing was 95% (α = 0.05); the t-table value at this level is 1.96. A research hypothesis is accepted if the T-statistic > 1.96 and the P-value < 0.05. Testing of the research hypotheses is based on the results of the bootstrap estimation in Figure 2. From Table 6, we can determine whether each hypothesis is rejected or accepted by looking at its T-statistic and p-value. Hypothesis testing (bootstrapping) resulted in only four of the sixteen hypotheses being rejected, which are H3 (System Quality and Structure), H4 (System Quality and Environment), H6 (Information Quality and User Satisfaction), and H13 (System Usage and Net Benefit). Based on the hypothesis test (bootstrapping), the factors that influence the use of the HRIS application in PT PJB, ordered from the most influential to the least influential, are Service Quality (SL) > Environment (EV) > User Satisfaction (US) > Structure (ST) > Information Quality (IQ) > System Quality (SQ) > System Use (SU). The four most influential variables were then examined using the Borda Count Method (BCM) to determine the priority solution variables that need to be improved to strengthen the benefits received by users [12]. Solution variables were constructed by distributing questionnaires to seven respondents directly responsible for the HRIS application.
The selection of respondents considers the role map of the process owners of the HRIS application, namely four experts in the Human Capital Information System subdivision. Based on the questionnaire results, a ranking of the solution variables is constructed using ranking weights. The Borda Count Method shows that the service quality variable gets the highest score. Therefore, the priority focus of the solutions to improve and develop the HRIS application is on service quality, structure, user satisfaction, and the environment (see Figure 3). Alternative solutions were then compiled in Table 7 to achieve the net benefit, taking into account the suggestions from the questionnaire recap. Table 7 (No.; solution variable; alternative solution) reads as follows:

2. Add an FAQ feature covering problems or obstacles that often appear in the application
3. Service Quality (SQ): Evaluate and upgrade the system regularly to maintain stable loading of the application
4. Add HR service features that are currently still handled manually or face to face
5. Provide internet access for all features so that they can be opened anytime and anywhere
6. Develop an integrated HRIS PJB application
7. Update the rules in the HR Regulations application
8. Structure (ST): Give all HR Admins access to employee data recaps
9. Use reminder notifications in the HRIS application to remind employees of the important schedules of PJB HR activities
10. Develop HR dashboard features for decision making
11. Evaluate and update user needs regularly
12. Add features for updating data and certificate documents of training/certification
13. User Satisfaction (US): Utilize the HRIS application as a medium for periodic feedback evaluation
14. Socialize the use of the HRIS application to the admins (HR Admin) and users (all employees) periodically
15. Update the user manual of the HRIS application regularly
16. Digital mindset socialization for all employees to increase the effectiveness of the company's business processes
17. Environment (EV): Hold regular meetings with the related divisions responsible for the evaluation and development of the HRIS application
18. Rearrange the completeness and updating of employee dossier files
19. Utilize the PMAN application for career information or job postings at the company

IV. CONCLUSION

A. Based on the results of the statistical data analysis and the discussion regarding the evaluation of the application of HRIS at PT Pembangkitan Jawa Bali, the following conclusions can be drawn:
B. The success of the PJB HRIS application is influenced by system quality, information quality, service quality, system use, user satisfaction, the environment, and the role of the organizational structure.
C. System quality affects system use and user satisfaction; that is, the higher the system quality of HRIS PJB, the higher the system use of and user satisfaction with HRIS PJB. Conversely, system quality has no influence on organizational structure and the environment.
D. Information quality has an influence on system use, organizational structure, and the environment. Conversely, information quality has no influence on user satisfaction.
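As a rough illustration of two calculations referred to above, the sketch below reproduces the Slovin sample-size computation (which yields the 353 respondents reported) and a plain Borda count over priority rankings. The exact ranking weights used in the study and the example rankings below are assumptions, since the underlying questionnaire data are not reproduced here.

```python
# Illustrative sketch (not the authors' spreadsheet): Slovin sample size and a simple
# Borda count for ranking the solution variables by priority.
import math
from collections import defaultdict

def slovin(population: int, e: float = 0.05) -> int:
    """Slovin formula n = N / (1 + N * e^2); with N = 2984 and e = 0.05 this gives 353."""
    return math.ceil(population / (1 + population * e ** 2))

def borda(rankings):
    """Each ranking lists candidates from highest to lowest priority; a candidate in
    position p (0-based) among m candidates receives m - 1 - p points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - position
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

print(slovin(2984))  # 353 respondents

# Hypothetical example: four priority variables ranked by seven respondents.
example = [["SL", "ST", "US", "EV"]] * 4 + [["SL", "US", "ST", "EV"]] * 3
print(borda(example))  # {'SL': 21, 'ST': 11, 'US': 10, 'EV': 0}
```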
ESG performance and stock prices: evidence from the COVID-19 outbreak in China This paper investigates the role of environmental, social, and governance (ESG) performance in stock prices during the market financial crisis caused by the COVID-19 pandemic. We use the Chinese listed company data as the bases for adopting an event-study method to identify the impact of ESG performance on cumulative abnormal returns. Empirical results suggest that ESG performance significantly increases firms’ cumulative abnormal returns and has asymmetric effects during the pandemic. Our results are robust to various robustness checks that consider the replacement of event window period, ESG measurement, adding other control variables, and sample exclusion of Hubei Province. We further find that reputation and insurance effects are important mechanisms through which ESG performance influences stock prices. Lastly, heterogeneous analyses show that ESG effects are considerably pronounced among firms with low human capital and bad image and in high-impact regions. Introduction I n recent years, environmental, social, and governance (ESG) investments, frequently called ethical or sustainable investments, have rapidly increased globally (Galbreath, 2013). ESG investing is an investment process that integrates ESG considerations into investment decisions (Mǎnescu, 2011). Given the COVID-19 pandemic, the necessity of ESG investing has been highlighted again (Demers et al., 2021;Manabe and Nakagawa, 2022). Investors are interested in ESG investments for at least two reasons (Renneboog et al., 2008). First, by focusing on ESG investments, ethical investment practices are actively promoted (Baldini et al., 2018;Broadstock et al., 2021). Second, ESG investments are increasingly recognized as improving the performance of managed portfolios, reducing portfolio risks, and increasing returns (Albuquerque et al., 2020;Díaz et al., 2021;Broadstock et al., 2021). Early literature on ESG investing has been partially inspired by studies of the eminent economist Milton Friedman (Friedman, 2007), who argued that ESG practices constitute a misallocation and misappropriation of valuable corporate resources. Renneboog et al. (2008) concluded that existing studies hint, but do not explicitly demonstrate, that ethical investors are willing to accept sub-optimal financial performance to pursue social or ethical objectives. Subsequently, a series of studies have expanded on the preceding literature. Some studies on ESG investing have focused on the application of returns and risk management. Hartzmark and Sussman (2019) found that investors make positive predictions on sustainable assets, steering money away from funds with low portfolio sustainability ratings to those with high ratings. They also found no evidence that high-sustainability funds outperform low-sustainability funds. Demers et al. (2021) determined that ESG performance facilitates the accumulation of intangible assets but does not serve as protection against downside risk. However, emerging studies have supported the view that ESGthemed investments have low downside risks and are minimally volatile in price during turbulent times. Hoepner et al. (2021) and Pedersen et al. (2021) obtained empirical evidence that ESG engagement reduces firms' downside risks and their exposure to downside risk factors. Albuquerque et al. 
(2020) developed a theoretical framework to show that stocks with high ESG ratings have significantly higher returns, lower return volatilities, and higher trading volumes than other stocks. Broadstock et al. (2021) showed that high-ESG portfolios typically outperform low-ESG portfolios, thereby mitigating financial risks during financial crises. Although there is limited research on the specific role of ESG performance during times of crisis, some insights have been gained from the 2008-2009 global financial crisis. Nejati et al. (2010) noted that the root causes of the current economic crisis could be moderated by a global transparency and accountability system and a public reporting of ESG performance. Erragraguy and Revelli (2015) showed that the adoption of ESG standards by firms during the crisis increased transparency, mitigated information asymmetries, and improved stock market liquidity and quality. Henke (2016) demonstrated that high-ESG-rated funds outperformed low-ESG-rated funds during the crisis, further supporting the view that investors place intrinsic value on ESG investments. In the first few months of 2020, the sudden market-wide financial crisis was triggered in response to the emerging global health crisis (i.e., , the consequences of which were more severe than those of the Great Depression in 1929-1933 and the global financial crisis in 2007/2008 (Broadstock et al., 2021). The current study shows that stock prices are empirically tested for negative shocks during the COVID-19 pandemic in some products and firms but not in others (Al-Awadhi et al., 2020;Shen et al., 2020). This result leads to a compromise opinion that the role of ESG generally depends on exposure to crisis shocks. Moreover, we are motivated to question how ESG effects vary along with different products and firms during public crises. However, only a few studies have indicated the specific role of ESG performance in crisis periods. Therefore, the goal of this paper is to fill in this research gap. We use the Chinese listed company data as the bases for adopting an event-study method to identify the impact of ESG performance on cumulative abnormal returns. The empirical results show that ESG performance is positively associated with cumulative abnormal returns during the COVID-19 pandemic. When decomposing firms with positive and negative shocks, we find that cumulative abnormal returns are positively related to ESG among firms with negative shocks but not positive shocks. These results suggest that the importance of ESG performance is reinforced in times of crisis, and is consistent with the inference that investors use ESG performance as a signal of future returns and risk mitigation. Our work builds on the current literature on the role of ESG performance in stock prices (Duuren et al., 2016;Remmer Sassen et al., 2016;Jagannathan et al., 2017) and extends the results on ESG to public crisis events. We use an event study approach to analyze the volatility of ESG performance on stock prices during the COVID-19 pandemic. Compared with the existing literature (Nejati et al., 2010;Erragraguy and Revelli, 2015;Henke, 2016), this study provides a more comprehensive perspective on the importance of ESG performance in major crises. Understanding the importance of ESG is necessary because it is a crucial indicator of risk management, non-financial performance, and sustainability. 
Through ESG practices, firms can obtain significant reputation and risk protection to reduce price volatility in times of crisis, thereby contributing to their long-term operations and sustainability. This study adds to the limited number of prior studies that have examined the impact of firms' ESG performance on their stock prices but have provided conflicting results (Friedman, 2007;Renneboog et al., 2008;Hartzmark and Sussman, 2019;Demers et al., 2021). The current study also provides additional empirical evidence on the important mechanisms of ESG performance through further analysis, thereby opening a black box for a positive relationship between ESG performance and stock prices. The results of this study suggest that reputation and insurance effects are important mechanisms by which ESG performance influences stock prices. This outcome reflects the fact that sustainability investments have low downside risks and are minimally volatile in price during turbulent periods. This finding clarifies the important role of current ESG practices in guiding investors' decision-making and provides empirical evidence for investors to focus on sustainable investment. This paper also complements the literature on the effects of public crises on financial markets. Existing literature has discussed the influence on financial markets mainly in terms of the 2008-2009 financial crisis (Nejati et al., 2010;Erragraguy and Revelli, 2015;Henke, 2016). We differ from these studies by specifically focusing on the COVID-19 pandemic. Note that the subject of our study is Chinese listed companies. Given that China is the second-largest economy in the world where the COVID-19 public crisis spread earlier and was interrupted by containment measures, a reasonable undertaking is to investigate the influence of this crisis on financial markets in the Chinese sample we use compared with other countries. The remainder of this paper is organized as follows. Section "Data and methodology" presents the sample and variables. Section "Empirical results" discusses the main empirical results and robustness analyses. Section "Further analysis" provides further mechanisms. Lastly, section "Conclusion" concludes the study. Data and methodology Data. Data of this study cover Chinese non-financial A-share listed firms in 2020. We collect data from several resources. First, we acquire ESG data from China Sino-Securities Index Information Service (Shanghai) Company Limited, a third-party data provider based in China specializing in ESG data. Second, stock prices, firm financial data, and firm management data are obtained from China Securities Markets and Accounting Research Database. In particular, stock prices are measured using cumulative abnormal returns calculated by utilizing the eventstudy method. Third, we obtain media coverage data mainly from the Chinese Research Data Service database to measure the level of media attention. Main variables. Data of this study cover Chinese non-financial A-share listed firms in 2020. We collect data from several resources. First, we acquire ESG data from China Sino-Securities Index Information Service (Shanghai) Company Limited, a thirdparty data provider based in China specializing in ESG data. Second, stock prices, firm financial data, and firm management data are obtained from the China Securities Markets and Accounting Research Database. Specifically, stock prices are measured using cumulative abnormal returns calculated by the event-study method. 
Third, we obtain media coverage data mainly from the Chinese Research Data Service database to measure the level of media attention. Stock price (CAR). Cumulative abnormal return (CAR) is widely used as a measure of stock price. We follow the previous literature (Demers et al., 2021; Zhang et al., 2021; Broadstock et al., 2021) and use CAR estimated through an event-study approach as the dependent variable. CAR is an impartial estimate of the additional or reduced firm value that accrues as a result of the occurrence (McWilliams and Siegel, 1997; Campbell et al., 1998; Fernando et al., 2012). In the COVID-19 context, this paper uses the event-study method to analyze the impact of ESG performance on stock prices. In an event study, we need to determine the event date, event window, estimation window, and estimation model. The details are as follows. (1) Event date: As COVID-19 received national attention on 20 January 2020, we draw on the existing literature and use this date as the event date in the event study. On 20 January 2020, China's central government provided important instructions on the prevention and containment of COVID-19. On the same day, panic ensued after the announcement by pulmonary experts that COVID-19 was more transmissible than previous diseases and could be passed from one person to another. As of 20 January 2020, the Chinese National Health Council officially issued the number of new cases in each province and incorporated COVID-19 into the Infectious Diseases Act and the Sanitary and Isolation Act. (2) Event window: This paper follows previous studies (Kanas, 2005; Miyajima and Yafeh, 2007; Fernando et al., 2012) and selects an event window of 11 days, from day t −5 to day t +5. (3) Estimation window: Campbell et al. (1998) documented that the estimation window for short-term events can be 120 days or even longer. Thus, we choose an estimation window of 175 days, from day t −210 to day t −36. (4) Estimation model: Following the existing literature (Campbell et al., 1998), we adopt the OLS market model to calculate expected returns. This study requires the calculation of expected, abnormal, and cumulative abnormal returns (Campbell et al., 1998). We first calculate each firm's expected return (ER_i,t) during the event period:

ER_i,t = β0 + β1 R_M,t, (1)

where ER_i,t is the expected return of firm i on day t during the event period, R_M,t represents the market return on day t during the event period, and β0 and β1 are the parameters of Model (1), estimated over the estimation window. Thereafter, we use the expected returns (ER_i,t) obtained from Model (1) to calculate the abnormal returns:

AR_i,t = R_i,t − ER_i,t, (2)

where AR_i,t is the abnormal return of firm i on day t during the event period, R_i,t is the actual return of firm i on day t during the event period, and ER_i,t is the expected return obtained from Model (1). Lastly, we use the abnormal returns (AR_i,t) obtained from Model (2) to calculate the cumulative abnormal returns:

CAR_i(k, j) = Σ (from t = k to j) AR_i,t, (3)

where CAR_i(k, j) is the cumulative abnormal return of firm i over the period of days between k and j. In this paper, the event window used to calculate CAR is [−5, 5]. ESG performance (ESG). The independent variable used in this study is the quarterly score of ESG performance, which is calculated based on three dimensions: environmental, social, and governance. Given that ESG data are disclosed quarterly, we calculate the average of ESG over the four quarters to measure annual ESG performance.
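To make the event-study computation concrete, the sketch below walks through Models (1)-(3) for a single firm: the market-model parameters are fit on the estimation window [t −210, t −36], expected and abnormal returns are formed on the event window, and CAR is the sum of abnormal returns over [−5, 5]. The data layout, column names, and the numpy/pandas implementation are illustrative assumptions rather than the authors' code.

```python
# Illustrative sketch (assumed data layout, not the authors' code) of the market-model
# event study in Models (1)-(3).
import numpy as np
import pandas as pd

def car_for_firm(firm: pd.DataFrame) -> float:
    """firm: daily data for one stock with columns ['rel_day', 'ret', 'mkt_ret'],
    where rel_day is the trading day relative to the event date (20 January 2020)."""
    est = firm[(firm["rel_day"] >= -210) & (firm["rel_day"] <= -36)]   # estimation window
    evt = firm[(firm["rel_day"] >= -5) & (firm["rel_day"] <= 5)]       # event window

    # Model (1): OLS market model fitted on the estimation window; polyfit returns
    # [slope, intercept], i.e. beta1 then beta0.
    beta1, beta0 = np.polyfit(est["mkt_ret"], est["ret"], deg=1)

    expected = beta0 + beta1 * evt["mkt_ret"]   # ER_{i,t}
    abnormal = evt["ret"] - expected            # Model (2): AR_{i,t}
    return float(abnormal.sum())                # Model (3): CAR_i(-5, +5)

# Usage: one CAR per firm from a long panel with columns
# ['stock_id', 'rel_day', 'ret', 'mkt_ret'] (assumed names):
# car = panel.groupby("stock_id").apply(car_for_firm)
```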
We also follow the previous literature Broadstock et al., 2021;Díaz et al., 2021;Demers et al., 2021) and include the following set of control variables in the estimation: leverage ratio (Lev), firm profitability (Roa), firm size (Size), nature of equity (Soe), degree of risk (Beta), shareholder structure (Top1), number of board members (Board), the duality of CEO and chairman (Dual), the proportion of independent directors (Independ), intangible asset (Intangible), and tangible asset (Tangible). The detailed definitions of the variables are presented in Table 1. Summary statistics. In response to the COVID-19 pandemic, this paper selects Chinese A-share listed companies as a study sample. This study selects the sample based on the following considerations. First, the financial sector is excluded owing to the uniqueness of its business, financial reporting, and regulatory structure. Second, we exclude firms with losses and those specially treated by stock exchanges. Third, we exclude samples with an estimation window of under 175 trading days. Fourth, our article removes samples with missing values in the ESG, CAR, and control variables. Lastly, continuous variables are winsorized at the 1% and 99% levels to mitigate concerns with extreme values. The resulting sample in our study is 2188 observations. Table 2 provides the descriptive statistics of the variables. CAR index ranges from −0.2496 to 0.4665. In addition, the mean and median of CAR are −0.0115 and −0.0367, respectively, with a standard deviation of 0.1223. This result suggests that the level of cumulative abnormal returns varies considerably across firms and that overall cumulative abnormal returns are low. Moreover, this situation implies that firms are generally subject to negative shocks during the COVID-19 pandemic. The average ESG is 0.8447, which is within the range of good. For the control variables, the average leverage is 44.21%, ROA is 5.13%, and firm size is 22.59. Our sampled chairman has~27.06% of the CEO, and the largest shareholder holder accounts for 34.61% of firm stocks. The average board size is about 8 (=e 2.1260 ) members, 37.61% of whom are independent directors. This result is consistent with the CSRC requirement for board independence . The distribution of the control variables in this paper is similar to that reported in previous research Broadstock et al., 2021;Díaz et al., 2021). Figure 1 shows a graphical representation of AARs and CARs for the event window [−5, 5]. As shown in Fig. 1, the two trend lines rise initially and fall thereafter from the announcement date. In the 4 days from 20 to 23 January 2020, firms reacted positively to the pandemic market. In the early stage of the COVID-19 pandemic, the demand for products to prevent pandemic infection was greater than the supply. This phenomenon prompted speculators to use arbitrage opportunities to invest, thereby explaining the positive response of the capital market. On the 4th day of the COVID-19 outbreak (23 February 2020), the city of Wuhan, which was the most affected city, was closed. Shortly after the lockdown, the stock market was closed for the Chinese Spring Festival from 24 January to 2 February. The market reopened on 3 February. Note that on the 5th day of trading following the COVID-19 outbreak, which was the first trading day after the closure of Wuhan (3 February 2020), the response of firms to the pandemic was extremely negative. The pandemic caused chaos and weakened the economy of China, particularly after the closure of Wuhan. 
The closure of the city to curb the further spread of the disease will destroy the entire logistics and supply chain system (Tang et al., 2021). Most factories faced shutdowns, production stoppages, and even closures, thereby causing a significant downward trend across the capital markets. The downward trend of the stock market was consistent with the previous statistical description that the COVID-19 epidemic in 2019 has had a significant negative impact on the financial market. Empirical results Effect of ESG performance on firms' stock price. We use the following multiple regression model to investigate the general relationship between ESG performance and firm stock prices during the COVID-19 pandemic: where i denotes the firm, CAR represents the firm's cumulative abnormal return during the COVID-19 pandemic, and ESG is the firm's ESG performance measure proxied by its ESG score. Our results focus on β 1 , which captures how ESG performance affects a firm's stock price. The vector Controls i stacks a series of control variables that account for the impact of firm characteristics on the stock price. Details of these variables are described in the section "ESG performance (ESG)". We also include province-fixed effects to control for unobservable regional characteristics. ESG is a highly industry-related variable (Yu and Luu, 2021). To control for industry characteristics, we control for industry fixed effects, in which ε i is the error term. Variables and definitions used in the model are shown in Table 1. We report the results of the baseline OLS regressions in Table 3. In column (1), we include only the ESG and CAR indicators after controlling for the province-and industry-fixed effects. The coefficient of ESG is 0.1479 (t-stat 3.5072), which is positive and statistically significant at the 1% level. This result indicates that ESG practices play an important role in reducing price volatility in times of crisis. Our results show that ESG performance has a significant positive impact on the cumulative excess return. In other cases, we include firm control variables in column (2). However, estimates remain positive and statistically significant. For control variables, we find that the coefficient of Size is significantly positive, suggesting that large firms reduce their risk of stock price declines. Conversely, the coefficient of Lev is negative, implying that firms with more leverages will suffer a greater risk of stock price declines. Our results are consistent with those of previous studies regarding control variables (Demers et al., 2021;Broadstock et al., 2021). In summary, our results provide suggestive evidence that firms with high ESG ratings are conducive to mitigating the downside risk of stock prices during the COVID-19 pandemic. To significantly understand whether or not the role of ESG depends on the risk exposure characteristics of crisis shock, this paper examines the influence of ESG performance on stock prices of firms subject to positive and negative shocks during the COVID-19 pandemic. We choose two approaches to classify the sample into firms with positive and negative shocks during the pandemic. On the one hand, this paper draws on the existing literature (Kanas, 2005;Miyajima and Yafeh, 2007) in classifying the positive and negative shock groups by determining whether or not the cumulative abnormal return is above 0. 
On the other hand, we use the previous literature (Al-Awadhi et al., 2020; Shen et al., 2020) as a basis for identifying the following industries as positive groups: information technology and medicine manufacturing industries. Moreover, we select the following industries to be identified as negative groups: tourism, transportation, restaurants, wholesale and retail trade, realty business, and export manufacturing industries. Table 4 shows the results of the tests of exposure characteristics during the crisis. (Notes to Table 4: detailed variable definitions are presented in Table 1; all variables are winsorized at the 1st and 99th percentiles; t-statistics are reported below coefficient estimates and are calculated based on robust standard errors; *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively.) The first two columns and the last two columns show that the results are similar whether the sample is divided in the first or the second way. Coefficients of ESG for firms in the negative group are positive and statistically significant. Conversely, coefficients of ESG for firms in the positive group are not statistically significant. These results suggest that the positive effect of ESG performance on stock prices is more significant among firms that are more severely affected by negative shocks. That is, our findings provide evidence that ESG performance acts as a risk protection tool that contributes to the sustainability of operations in turbulent times, specifically among firms with severe negative shocks. Alternative proxies for ESG performances. Because the raw ESG scores are quarterly while the remaining firm data are annual, our ESG measure may be susceptible to measurement error, which raises concerns about its validity. To address these potential issues, we consider two alternative proxies for ESG performance. First, we choose the median of the quarterly data on ESG scores (ESG1) to measure ESG performance. Second, we measure ESG performance as the ESG score in the first quarter of year t + 1 (ESG2). Similarly, we find a robust positive association between ESG performance and stock price, as shown in columns (5) and (6) of Table 5. Robustness tests Excluding the Hubei sample. This paper considers a subsample that excludes Hubei Province, which was the most affected by COVID-19. The early outbreak of COVID-19 in China was concentrated in Wuhan, Hubei Province, and spread rapidly to other provinces. Hubei has many listed firms. Thus, we draw on the existing literature (Ren et al., 2021) and exclude the Hubei sample to emphasize that our results are not driven by firms in the province most affected by COVID-19. We reestimate our regression using the subsamples. Column (7) of Table 5 shows that the coefficient of ESG is positive and significant, which remains consistent with our findings. Inclusion of other variables. Existing research has suggested that institutional investor ownership and cash holdings of firms may be associated with stock price volatility (Bushee and Noe, 2000; Chang et al., 2017). A natural concern is whether our regression results are sensitive to the inclusion of institutional investor ownership and cash holdings. To mitigate this concern, we control for institutional investor ownership and cash holdings of firms in the regression process. In column (8) of Table 5, empirical results remain positive and statistically significant for the cumulative abnormal returns.
The inclusion of firms' institutional investor ownership and cash holdings does not induce a material change in the coefficient magnitudes. That is, the empirical evidence implies that the inclusion of institutional investor ownership and cash holdings should not be an issue in our research. Further analysis Potential mechanism. This section extends the preceding results to clarify the potential mechanisms through which ESG performance positively affects firms' cumulative abnormal returns. Reputation effect. Demers et al. (2021) suggested that ESG performance is an intangible asset, which plays a positive role in optimizing supply chain partnerships, enhancing consumer product satisfaction, and improving employee productivity. Given the attention of social media and stakeholders, ESG practices are gradually becoming a tool for corporate impression management that can enhance social visibility and expand market share (Lokuwaduge and Heenetigala, 2017; Xie et al., 2019). Thus, we expect ESG performance to be conducive to increasing cumulative abnormal corporate returns through reputation enhancement. In particular, we take the number of positive online media and financial newspaper reports to represent the reputation gained by the firm. Thereafter, we use the natural logarithm of the number of positive online media reports and the number of positive financial newspaper reports as our variables New1 and New2, respectively. To test the reputation effect mechanism, we introduce an interaction term between ESG and New1 or New2 in our model. As shown in columns (1) and (2) of Table 6, the coefficients of New1 and New2 are significantly positive, suggesting that ESG practices are beneficial in enhancing corporate reputation. We also find that the coefficients of ESG × New1 and ESG × New2 are significantly negative, implying that the reputation effect of ESG performance is more pronounced among firms with lower reputations. Our findings demonstrate that ESG practices are beneficial in enhancing corporate reputation to improve cumulative excess returns during turbulent times. Insurance effect. Academics and practitioners agree that firms' risk exposures are linked to their ESG profiles. Albuquerque et al. (2020) presented empirical evidence suggesting that firms' increasing product differentiation through ESG investments reduces systemic risk and improves firm value. Hoepner et al. (2021) found corroborating evidence that ESG practices reduce firms' exposure to downside risk factors. These results support practitioner arguments that including ESG factors in investment decisions can mitigate uncompensated portfolio risks (Jagannathan et al., 2017; Pandey and Kumari, 2021; Broadstock et al., 2021). Hence, we expect ESG performance to act as a risk management tool during the COVID-19 pandemic, increasing cumulative excess returns by reducing business risk. This paper follows the previous literature (Ghosh and Olsen, 2009) in using the standard deviation of sales revenue over the past 5 years (Risk1) to measure business operating risk. To remove the effect of industry, we also use the standard deviation of sales revenue over the past 5 years adjusted for industry (Risk2) to measure business operating risk. We introduce an interaction term between ESG and Risk1 or Risk2 in the model to test the insurance effect mechanism. The results in columns (3) and (4) of Table 6 show that the coefficients of Risk1 and Risk2 are negative and significant.
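The baseline and mechanism specifications discussed above, CAR regressed on ESG, the Table 1 controls and province/industry fixed effects, with an ESG × moderator interaction added for the reputation (New1/New2) and insurance (Risk1/Risk2) tests, can be written compactly with statsmodels' formula interface. The sketch below uses synthetic data and an abbreviated control set purely for illustration; it is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real firm-level sample (columns follow Table 1; values are random).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "CAR": rng.normal(-0.01, 0.12, n),
    "ESG": rng.uniform(0.5, 1.0, n),
    "Risk1": rng.uniform(0.0, 1.0, n),
    "Lev": rng.uniform(0.1, 0.8, n), "Roa": rng.normal(0.05, 0.03, n),
    "Size": rng.normal(22.6, 1.2, n), "Top1": rng.uniform(0.1, 0.6, n),
    "province": rng.integers(0, 5, n), "industry": rng.integers(0, 8, n),
})

controls = "Lev + Roa + Size + Top1"   # the full control set of Table 1 would be listed here

# Baseline: CAR_i = b0 + b1*ESG_i + controls + province FE + industry FE + e_i
base = smf.ols(f"CAR ~ ESG + {controls} + C(province) + C(industry)",
               data=df).fit(cov_type="HC1")

# Mechanism test: interact ESG with the moderator (New1/New2 for reputation, Risk1/Risk2 for insurance).
mech = smf.ols(f"CAR ~ ESG * Risk1 + {controls} + C(province) + C(industry)",
               data=df).fit(cov_type="HC1")
print(base.params["ESG"], mech.params["ESG:Risk1"])
```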
This result is consistent with the findings of previous studies that ESG performance reduces downside risk. We also find that the coefficients of ESG × Risk1 and ESG × Risk2 are significantly positive, suggesting that the insurance effect of ESG performance is markedly pronounced among firms with high operational risk. The results of this study indicate that ESG practices can be used as a risk management tool in turbulent times, thereby increasing cumulative abnormal returns. These findings further support the insurance-enhancing effects of ESG practices. Cross-sectional analysis. In this subsection, we conduct several cross-sectional tests to examine how the impact of ESG performance on cumulative abnormal returns varies with firm characteristics during the COVID-19 pandemic. Cross-sectional tests of human capital characteristics. We investigate whether or not the human capital characteristics of firms affect the contribution of ESG performance in terms of cumulative abnormal returns during the COVID-19 pandemic. Apart from the detrimental influence of the virus on staff safety, the lockdown and physical separation measures damage firms' financial performance. Fahlenbrach et al. (2020) showed that labor-intensive firms, in which work-from-home policies are difficult to implement, have high exposure to COVID-19. By contrast, firms with a high level of technological equipment are less affected and even have the opportunity to expand their business. Thus, we predict that firms with low human capital are at considerable risk during the COVID-19 pandemic, thereby possibly increasing the sensitivity of ESG performance to cumulative abnormal returns. This paper draws on prior literature and selects the ratio of the number of employees to sales as a measure of human capital intensity (Labor). A higher value of Labor indicates higher employee productivity and higher human capital of the firm. We then divide the sample into high and low human capital groups according to whether or not Labor is above the median value of the sample. Columns (1) and (2) of Table 7 show that the coefficient of ESG is significantly positive at the 1% level among firms with low human capital, but it is not statistically significant among firms with high human capital. Thus, the role of ESG performance in preventing stock price downside is markedly significant among firms with low human capital during the COVID-19 pandemic. This result is consistent with our expectations. Cross-sectional test of public image characteristics. This subsection further investigates whether or not our main results are influenced by corporate image. The concepts of corporate image and trust have gained special relevance, as they significantly influence individual behavior (Rindell et al., 2015). Flavián et al. (2005) argued that the image perceived by consumers makes the factors involved in a transaction visible, thereby reducing the risk perceived by individuals and increasing the possibility of purchase. Compared with firms with a bad image, firms with a good image are expected to have stronger protection against downside risks (Lee et al., 2022). Consequently, we predict that firms with a bad image have a high downside risk during the COVID-19 pandemic, thereby possibly increasing the sensitivity of ESG performance to cumulative abnormal returns. This study uses the number of negative reports in online media (Bad_image) to measure corporate image.
Columns (3) and (4) of Table 7 report that the coefficient of ESG is significantly positive at the 1% level among firms with numerous negative online media reports (firms with Bad_image above the sample median), but it is not statistically significant among firms with a low number of negative online media reports. This finding is consistent with our expectation that firms with a bad image face high downside risk during the COVID-19 pandemic, thereby increasing the sensitivity of ESG performance to cumulative abnormal returns. Cross-sectional test of regional characteristics. Lastly, we explore the heterogeneous effect of ESG performance on cumulative abnormal returns for firms in different geographical locations. Shen et al. (2020) noted that, using region as a criterion, COVID-19 had a major negative influence on the severely affected regions. After China began its comprehensive campaign against COVID-19, seven provinces, namely, Hubei, Hunan, Henan, Jiangxi, Anhui, Guangdong, and Zhejiang, enforced harsh restrictions on labor and on the resumption of work. These restrictions led to a decline in consumption levels and the closure of many firms in high-impact areas. By contrast, for cities far from the infected areas, the resumption of operations was significantly earlier. Fu and Shen (2021) showed that the early resumption of work sends a signal of reduced risk to stakeholders, thereby promoting firms to obtain more investment capital. In summary, we expect that the protection of ESG performance against downside risk during the COVID-19 pandemic is more significant in high-impact regions. We draw on the previous literature (Shen et al., 2020) in selecting seven provinces (i.e., Hubei, Hunan, Henan, Jiangxi, Anhui, Guangdong, and Zhejiang) as high-impact regions and other provinces as low-impact regions. Table 7 reports the results of the heterogeneity tests. Columns (5) and (6) of Table 7 show that the coefficient of ESG is significantly positive at the 1% level for firms in high-impact regions, but it is not statistically significant for firms in low-impact regions. This result suggests that the ESG practices of firms in high-impact regions play a key role in risk management and reduce stock price volatility during the COVID-19 pandemic. Conclusion During the COVID-19 pandemic, the significant decrease in global equity values reflects strong negative investor sentiment (Broadstock et al., 2021). Hence, we ask whether this negative sentiment transfers asymmetrically across firms and whether ESG performance may be used as a valuable signal for systematically avoiding downside risk during the crisis. However, only a few studies have examined the specific role of ESG performance in this crisis period. Therefore, the goal of this paper is to fill this research gap. We use a unique environmental setting and find that ESG performance is positively associated with cumulative abnormal returns around the COVID-19 pandemic and that its impact is asymmetric. We contribute to the literature with empirical evidence on the resilience of stocks with high ESG performance during financial crises. This finding is consistent with the view that investors may take ESG performance as a signal of risk mitigation during the crisis. We further find that the reputation and insurance effects are important mechanisms through which ESG performance influences stock price. Moreover, heterogeneous analyses show that ESG effects are considerably pronounced among firms with low human capital, firms with a bad image, and firms in high-impact regions.
Overall, we conclude that ESG practices can be used as a risk management tool to enhance share price resilience, particularly in turbulent times. Our findings are of particular relevance to business managers, investors, and policy makers. For managers, this paper provides empirical evidence supporting ESG investing as a value-enhancing strategy. In addition, we find evidence that ESG practices act as impression and risk management tools that reduce downside risk in turbulent times. Thus, firms should elevate their ESG performance to become more attractive targets in the market and to expand their market share. For investors, ESG investments improve the performance of managed portfolios, reduce portfolio risk, and increase returns. Investors should consider ESG factors in their investment decisions to enhance investment returns in turbulent times. Lastly, policy makers should advocate the adoption of ESG practices and encourage companies to disclose information on ESG performance, which is essential for economic sustainability. Several limitations should be considered in future research. One limitation of this study is that the data set only includes large listed firms selected from the China Securities Market and Accounting Research Database. In our opinion, limited resources may play a key role in determining the ESG performance of small firms. Consequently, the inclusion of small- and medium-sized firms may provide different results, which is left for future research. Furthermore, this study is mainly based on a sample of Chinese listed companies, thereby limiting the generalizability of this research. Lastly, future research could investigate whether or not these results are valid in the context of developed countries or international markets, in which the business strategies and ESG disclosures of firms are influenced by the institutional environment. Data availability The data set used in this study is available from the corresponding author on reasonable request. Further data are publicly available from the China Securities Market and Accounting Research Database.
2022-07-19T13:10:40.034Z
2022-07-18T00:00:00.000
{ "year": 2022, "sha1": "8ce131e2be60330f0ad50e7cedb4f59bec0d23e6", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0b4913bd4aefe19c835c8a69f227a3ad6c9d4e31", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [] }
11459476
pes2o/s2orc
v3-fos-license
PCR-based detection of composite transposons and translocatable units from oral metagenomic DNA A composite transposon is a mobile genetic element consisting of two insertion sequences (ISs) flanking a segment of cargo DNA often containing antibiotic resistance (AR) genes. Composite transposons can move as a discrete unit. There have recently been several reports of a novel mechanism of movement of an IS26-based composite transposon through the formation of a translocatable unit (TU), carrying the internal DNA segment of a composite transposon and one copy of a flanking IS. In this study, we determined the presence of composite transposons and TUs in human oral metagenomic DNA using PCR primers based on common IS elements. Analysis of the resulting amplicons showed four different IS1216 composite transposons and one IS257 composite transposon in our metagenomic sample. As our PCR strategy would also detect TUs, PCR was carried out to detect circular TUs predicted to originate from these composite transposons. We confirmed the presence of two novel TUs, one containing an experimentally proven antiseptic resistance gene and another containing a putative universal stress response protein (UspA) encoding gene. This is the first report of a PCR strategy to amplify the DNA segments carried on composite transposons and TUs in metagenomic DNA. This approach can be used to identify AR genes associated with a variety of mobile genetic elements from metagenomes. INTRODUCTION Composite transposons are common mobile genetic elements (MGEs) responsible for the dissemination of genes involved in bacterial adaptation and survival, including those conferring antibiotic resistance and xenobiotic degradation (Nojiri, Shintani and Omori 2004; Bennett 2008). They consist of two insertion sequences (ISs) that flank a segment of DNA, which can transpose as a whole unit (including the two flanking IS elements); the active IS element alone can also transpose out of the unit. Recently, another mechanism for the translocation of DNA within an IS26-based composite transposon was described (Harmer, Moran and Hall 2014; Harmer and Hall 2015). The model is summarized in Fig. 1. A single copy of the IS element is excised together with flanking DNA to form a circular molecule. To distinguish this from 'conventional' transposition of IS elements, the transposing region was called a translocatable unit (TU). This molecule can be formed by two different mechanisms: intramolecular replicative transposition and conservative transposition by excision from the composite transposon (Fig. 1). Most of the DNA segments reported to be found on TUs are antibiotic resistance genes, such as the kanamycin resistance gene in the IS26-aphA1a TU and the tetracycline resistance gene in the IS1216-tet(S) TU (Ciric et al. 2014; Harmer and Hall 2015). Previously, studies on composite transposons and TUs focused on cultivable bacteria. However, most of the bacteria in environmental samples have not yet been cultured in the laboratory. For example, more than 700 bacterial species have been identified in the human oral cavity, but less than half of them can be cultivated (Wade 2011). Human oral metagenomic DNA was used to screen for composite transposons and TUs, as several TUs were found in oral bacteria, including Streptococcus oralis and S. infantis (Ciric, Mullany and Roberts 2011; Ciric et al. 2014). The aim of this work is to determine whether composite transposons and TUs could be detected through PCR in metagenomic DNA.
To do this, PCR primers were designed to amplify DNA flanked by IS elements to check for the presence of composite transposons in oral metagenomic DNA. Then, another set of primers was designed to determine if TUs derived from these putative composite transposons were present. We showed that novel TUs were detectable within human oral metagenomic DNA. Acquisition of the human oral metagenomic DNA Saliva samples were collected from 11 healthy males and females from the Department of Microbial Diseases, University College London (UCL) Eastman Dental Institute, as described previously (Tansirichaiya et al. 2016). None of the volunteers had received any antibiotic treatment for at least 3 months, and all gave written consent prior to sample collection. The saliva collection and processing procedures were approved by the UCL Ethics Committee (project number 5017/001) and are described previously (Tansirichaiya et al. 2016). PCR amplification The amplifications on the human oral metagenome were performed with the primers listed in Table S1 (Supporting Information). The PCR reactions were carried out with an initial denaturation at 94 °C for 3 min, followed by 35 cycles of (i) denaturation at 94 °C for 1 min, (ii) annealing at 50 °C-65 °C for 30 s and (iii) extension at 72 °C for 3 min for standard PCR, 10 min for long PCR and 5 min for Q5 PCR, and a final extension at 72 °C for 10 min. The standard PCR reaction contained 15 μL of 2X BioMix Red (Bioline, London, UK), 50-100 ng of DNA template, 0.2 μM of each primer and molecular grade water (Sigma, Dorset, UK) up to 30 μL. The long PCR reaction contained 0.25 μL of TaKaRa Ex Taq (5 units/μL) (Takara Bio, Saint-Germain-en-Laye, France), 4 μL of dNTP mixture (2.5 mM each), 5 μL of 10X Ex Taq buffer, 0.2 μM of each primer, 50-100 ng of DNA template and molecular grade water up to a total volume of 50 μL. The Q5 PCR reaction consisted of 12.5 μL Q5 high-fidelity 2X master mix (NEB, Hitchin, UK), 0.2 μM of each primer, 50-100 ng of DNA template and molecular grade water up to a total of 25 μL. PCR amplicons were visualized by gel electrophoresis on a 1% agarose gel stained with GelRed nucleic acid stain (Biotium, Cambridge, UK). PCR purification, ligation and transformation PCR products were purified by using either the QIAquick PCR Purification Kit or the QIAquick Gel Extraction Kit (Qiagen), following the manufacturer's protocols. The purified products were subsequently cloned into the pGEM-T Easy vector (Promega, Southampton, UK) and transformed into Escherichia coli α-Select Silver Efficiency competent cells (Bioline) by heat shock. Cells were grown on Luria-Bertani (LB) agar containing 100 μg/mL ampicillin, 40 μg/mL X-Gal and 0.4 mM IPTG, and incubated overnight at 37 °C. Plasmid isolation and sequencing Using blue-white screening, the white colonies (containing inserts) were subcultured into 5 mL of LB broth containing 100 μg/mL ampicillin and incubated overnight in a 37 °C shaker. Plasmid isolation was performed using the QIAprep Spin Miniprep Kit (Qiagen), as per the manufacturer's instructions. The inserts contained in each plasmid were sequenced using M13 forward and M13 reverse primers at Beckman Coulter Genomics (Genewiz, Essex, UK) with an ABI 3730XL. If the initial sequencing reaction did not cover the entire insert sequence, extra primers were designed and used for additional sequencing.
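For reference, the three cycling programmes above differ only in the extension time (3, 10 and 5 min for the standard, long and Q5 reactions). A small sketch that encodes them as data makes the difference explicit and gives a rough estimate of run time; the fixed 60 °C annealing value and the helper names are illustrative assumptions (annealing was actually varied between 50 and 65 °C depending on the primers), and ramp times are ignored.

```python
# Cycling programme: (step name, temperature in deg C, seconds), per the PCR conditions above.
def programme(extension_min):
    return {
        "initial_denaturation": (94, 180),
        "cycles": 35,
        "per_cycle": [("denaturation", 94, 60),
                      ("annealing", 60, 30),        # 50-65 deg C in the paper, depending on primers
                      ("extension", 72, extension_min * 60)],
        "final_extension": (72, 600),
    }

standard, long_pcr, q5 = programme(3), programme(10), programme(5)

def run_time_minutes(p):
    cycle = sum(sec for _, _, sec in p["per_cycle"]) * p["cycles"]
    return (p["initial_denaturation"][1] + cycle + p["final_extension"][1]) / 60

for name, p in [("standard", standard), ("long", long_pcr), ("Q5", q5)]:
    print(f"{name}: ~{run_time_minutes(p):.0f} min (excluding ramp times)")
```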
Sequence analyses The analysis of sequencing results was performed with BioEdit software version 7.2.0 (http://www.mbio.ncsu.edu/ bioedit/bioedit.html). If the insert was sequenced using multiple primers, the sequences were assembled with CAP contig function within the software (Huang 1992). The vector sequences were initially trimmed from the sequences by using VecScreen analysis tool (http://www.ncbi.nlm.nih.gov/tools/vecscreen) and identified for the primer binding sites. The sequences were then analyzed by using BlastN and BlastX for the matching with the sequences in the databases, ISFinder for the identification of IS element and Clustal Omega for the alignment of the sequences (Altschul et al. 1990;Siguier et al. 2006). The sequences of all composite transposons (CTA1 to CTA5) were submitted to the DNA database with the accession numbers from KX305930 to KX305934. Confirmation of IS elements in oral metagenomic DNA Twelve IS elements were selected for the screening of composite transposons, including those of the IS26 family (IS26, IS240, IS257 and IS1216), IS elements found in Streptococcus spp. (the most prevalent bacteria in oral cavity) (IS861, IS1161, IS1167, IS1381 and IS1548), and IS elements commonly associated with plasmids and transposons (IS3, IS256 and IS1485) (Kreth, Merritt and Qi 2009;Clewell et al. 2014). Prior to the detection of composite transposons, the presence of each IS element in oral metagenomic DNA was determined by designing the PCR primers to amplify the ISs. The amplicons of the expected size were sequenced to confirm the results. Among 12 IS elements, the sequencing results showed that 6 of them were confirmed to be present in the extracted oral metagenome, these were IS26, IS257, IS1216, IS1161, IS1167 and IS1485. Detection of composite transposons by PCR A set of PCR primers were designed to amplify outwards from the detected ISs. The amplicons from this PCR amplification could be either the DNA segment flanked by two ISs of a composite transposon or the DNA segment carried by a TU structure. After the screening, five different putative composite transposon amplicons (CTA1-5) were identified, four were IS1216based amplicons and another was an IS257-based amplicon ( Table 1 and Table S1, Supporting Information). The first amplicon (CTA1) contained two potential ORFs: one predicted to encode a small multidrug resistance protein (Qrg) and the other a hypothetical protein (similar to NAD + diphosphatase). It had 99% nucleotide identity to the IS1216 composite transposon found on Tn6087 of Streptococcus oralis F.MI.5 (Ciric, Mullany and Roberts 2011). The second amplicon (CTA2) was similar to the first one, but it was missing 229 bp of the gene predicted to encode the hypothetical protein and a region between qrg and the flanking IS1216. The next structure (CTA3) was similar to part of plasmid pIL5 and plasmid pBL1 from Lactococcus lactis subsp. lactis (Sanchez et al. 2000;Gorecki et al. 2011). The main part of this structure (84% of the query) had 98% nucleotide identity with orf14, 15 and partial orf16 from plasmid pIL5, encoding a transposase, a universal stress-like protein and a Mn 2+ /Fe 2+ transporter-like protein, respectively. The matching part of pBL1 plasmid to this structure was an ISS1-like element with 100% nucleotide identity. By analyzing the orf14 and the ISS1-like element with ISfinder, they both were matched to IS1216 with 100% similarity. 
Another structure (CTA4), which was similar to the CTA3, was also identified, but it had an additional 2329 bp compared to CTA3. The extra nucleotides were the rest of orf16 sequences missing from CTA3, and a transposase gene orf17 from pIL5. The fifth structure, CTA5, was the only structure amplified with IS257-based primers. Sequencing analysis showed that the kanamycin resistance gene, knt, and a truncated rep gene were flanked by transposase genes. It had 99% identity to part of plasmid SAP079A from Staphylococcus aureus (McDougal et al. 2010). Amplification of putative TU structures As each of amplicons described above may have formed from a TU template, another set of primers were designed to amplify outward from the DNA segments between the flanking ISs (Fig. 1). The verification PCR was carried out by using Biomix red and additionally using highly processive Q5 polymerase. This was to make sure that the amplicons were not a result of an early fall-off of the polymerase, leaving partial amplicons that could themselves act as primers in subsequent round of PCR. TUs were detected in PCRs carried out with primers designed from the DNA segments within CTA2 and CTA4; amplicons of the expected size were confirmed as containing a single entire copy of IS1216 by DNA sequencing (Fig. 2). DISCUSSION By designing the primers to read outward from IS elements, five different putative composite transposons were identified from human oral metagenomic DNA, each containing different genes predicted to be involved in environmental adaptation. The qrg gene in samples CTA1 and CTA2 was predicted to encode a small multidrug resistance protein that confers resistance to cetyltrimethylammonium bromide (CTAB), a cationic antiseptic (Ciric, Mullany and Roberts 2011). For the sample CTA5, it contained kanamycin resistance gene knt, encoding kanamycin nucleotidyltransferase protein. This protein was shown to confer resistance by catalyzing the transfer of a nucleotidyl group to the 4'-hydroxyl group of the aminoglycoside resulting in inactivating kanamycin drug (Pedersen, Benning and Holden 1995). A gene (uspA) predicted to encode a universal stress protein was found in samples CTA3 and CTA4. The precise biological function of UspA remains unknown, but it was shown that the levels of UspA are elevated during a variety of stress conditions including heat, oxidant exposure, nutrient starvation and exposure to antibiotics including polymixins and cycloserine (Kvint et al. 2003;Nachin, Nannmark and Nystrom 2005). It was hypothesized to function in the reprogramming of cells toward defense and escape by enhancing the cell's capacity to withstand stresses and modulating activities related to motility and adhesion (Nachin, Nannmark and Nystrom 2005). By designing another set of primers to determine the presence of TUs based on each amplicon, we confirmed that CTA2 and CTA4 were likely to be present as small circular molecules as the entire predicted single copy of the expected IS element could be amplified. In order to control for PCR artefacts that may result from short primer extension within a chromosomally located composite transposon, we used the highly processive Q5 DNA polymerase that confirmed the results of the Biomix red PCR. The lack of positive results for the TU PCR of CTA1, CTA3 and CTA5 (which contained similar sized direct repeats) demonstrates that PCR artefacts are unlikely. 
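The TU verification logic applied to CTA2 and CTA4 can be mimicked in silico: for a composite transposon written as IS-cargo-IS, the predicted TU circle is the cargo plus a single IS copy, and outward-facing primers placed in the cargo can only yield a product spanning an entire IS copy if the template is circular. The sketch below only illustrates that prediction; the sequences, lengths and primer positions are placeholders, not the CTA2/CTA4 sequences.

```python
def predicted_tu(composite: str, is_element: str) -> str:
    """Return the predicted translocatable-unit circle: cargo plus one IS copy.
    composite is assumed to be IS - cargo - IS with identical flanking copies."""
    assert composite.startswith(is_element) and composite.endswith(is_element)
    cargo = composite[len(is_element):-len(is_element)]
    return is_element + cargo            # circular molecule written linearly from the IS

def outward_amplicon(circle: str, fwd_start: int, rev_end: int) -> str:
    """Amplicon from outward-facing primers on a circular template:
    runs forward from fwd_start, wraps the origin, and ends at rev_end."""
    doubled = circle + circle            # simple way to walk across the junction
    return doubled[fwd_start:len(circle) + rev_end]

is1216 = "ATG" + "N" * 800               # placeholder ~0.8 kb IS element
cargo = "N" * 1500                       # placeholder cargo carrying e.g. qrg or uspA
tu = predicted_tu(is1216 + cargo + is1216, is1216)
amp = outward_amplicon(tu, fwd_start=len(is1216) + 900, rev_end=len(is1216) + 200)
print(len(tu), len(amp))                 # product spans the whole IS copy only if the template is circular
```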
This is the first time that TUs were detected in metagenomic DNA and also the first time that the stress adaptation gene uspA was found on a TU. There is a possibility that the TU amplicon could have arisen if the entire composite transposon was repeated in the host genome; however, we think that this is unlikely due to the inherent instability of large repeated units of mobile DNA. Studies on TUs are still at an early stage, and there are only a small number of identified examples. Table 1. The predicted structures of putative composite transposons amplified from the human oral metagenomic DNA. The open arrowed boxes represent ORFs, pointing in the probable direction of transcription. The IS elements and ORFs in the DNA segment are shown in blue and green, respectively. The dashed boxes, arrow boxes and dotted lines represent regions that are not present compared with the sequences in the database. Reports on the integration and excision of TUs are based on IS26 composite transposons. IS26 is a member of the IS6 family that also contains IS1216 and IS257. The transposase proteins encoded by the IS elements in this family share 40%-94% identity. It was shown that an IS26 TU preferentially integrates next to a preexisting IS26 element via a conservative reaction catalyzed by the Tnp26 transposase, rather than by replicative transposition to a new site (Harmer, Moran and Hall 2014). An intact transposase gene was shown to be important for the integration and excision of the TUs (Harmer, Moran and Hall 2014; Harmer and Hall 2015). Recently, it was shown that RecA-dependent homologous recombination could also mediate the integration of a TU; however, it is not a major factor because it occurs at an efficiency at least two orders of magnitude lower than that of the reaction catalyzed by Tnp26 (Harmer and Hall 2016). TUs are similar to another recently described MGE, called an unconventional circularizable structure (UCS) (Palmieri, Mingoia and Varaldo 2013). UCSs can be excised from a replicon that contains two direct repeats (DRs) flanking a DNA segment. After excision, the result is a non-replicative circular structure containing the DNA segment and one of the DRs. The movement of TUs and UCSs can both occur in RecA-deficient bacteria, suggesting that homologous recombination is not essential for their formation and insertion (Azpiroz, Bascuas and Laviña 2011; Harmer, Moran and Hall 2014). The difference between UCSs and TUs is that UCSs carry no recombinase gene to catalyze their insertion reactions (Palmieri, Mingoia and Varaldo 2013). This PCR strategy to detect TUs can also be applied to UCSs by designing the primers based on the DRs of the UCSs. As it has been shown that IS6-family composite transposons can transpose either as whole composite transposons or as TUs, the resistance genes associated with them have a greater chance of being transposed and spread in bacterial populations (Harmer and Hall 2016). Furthermore, composite transposons are often located on other MGEs such as plasmids and conjugative transposons, which can also facilitate their horizontal gene transfer. If the excised TUs do not integrate into a replicon, they will presumably be lost from the population during cellular replication. However, in sample CTA5, we found a truncated rep gene on a putative IS257 composite transposon. This raises the possibility that TUs can facilitate the movement of rep genes between DNA molecules, further adding to the complexity of MGE biology.
Indeed, there are some possible structures representing such rep containing TU insertion reactions such as repA-repC that is flanked by IS26 on Tn6029 (Reid, Roy Chowdhury and Djordjevic 2015) and repB located next to IS1216 on Tn6079 (de Vries et al. 2011;Reid, Roy Chowdhury and Djordjevic 2015). In conclusion, we have determined that a metagenomic approach can be used to recover both composite transposons and TUs from oral metagenome by performing PCR amplification with DNA primers based on the IS elements. Due to the fact that the primers were designed based on the IS elements, using this approach could also amplify novel genes carried by those composite transposons. This method might also be a more promising approach for the detection of the TUs in metagenome, as these small circular molecules are likely to be rare and therefore could be missed, or not assembled by metagenomic sequencing.
2018-04-03T00:22:34.405Z
2016-08-11T00:00:00.000
{ "year": 2016, "sha1": "535e9bfc84a582490fc4f64ee435d161d5cfcbdb", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/femsle/article-pdf/363/18/fnw195/23926933/fnw195.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "535e9bfc84a582490fc4f64ee435d161d5cfcbdb", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258012118
pes2o/s2orc
v3-fos-license
MOSFE-Capacitor Silicon Carbide-Based Hydrogen Gas Sensors The features of the wide band gap SiC semiconductor use in the capacitive MOSFE sensors' structure in terms of the hydrogen gas sensitivity effect, the response speed, and the measuring signals' optimal parameters are studied. Sensors in a high-temperature ceramic housing with the Me/Ta2O5/SiCn+/4H-SiC structures and two types of gas-sensitive electrodes were made: Palladium and Platinum. The effectiveness of using Platinum as an alternative to Palladium in the MOSFE-Capacitor (MOSFEC) gas sensors' high-temperature design is evaluated. It is shown that, compared with Silicon, the use of Silicon Carbide increases the response rate, while maintaining the sensors' high hydrogen sensitivity. The operating temperature and test signal frequency influence for measuring the sensor's capacitance on the sensitivity to H2 have been studied. Introduction Field-effect solid-state gas sensors based on metal-insulator-semiconductor (MIS) or metal-oxide-semiconductor (MOS) structures have been known for half a century. The beginning of the devices' practical implementation was initiated by hydrogen-sensitive transistors based on Pd/SiO2/Si structures [1,2]. The structure type choice was largely due to the compatibility of the sensitive elements' manufacturing process with silicon technology production [3]. This contributed to the sensor miniaturization and the acceptable parameter's reproducibility in mass production. It is interesting to note that the first transistors in the early 1950s were made not from Silicon, but from Germanium, which has higher electrons' and holes' mobility, and which was easier to clean from impurities. However, over time, Germanium revealed a significant drawback that limited its further use. The band gap of Ge is only 0.67 eV, and, as a result, at a temperature of about 75 °C and above, Germanium transistors are inoperable due to the free excess electrons. The way out of this situation was to use Silicon with a band gap of 1.1 eV and the impurity Si wafers obtained by using gaseous diffusion technology. However, by the 1990s, Silicon had also approached the limits of its use, and the further electronics evolution required the accelerated development of the wide band gap (WBG) semiconductor technology [4,5]. Scientific and practical results of such studies are also successfully applied in the semiconductor gas sensors' production [6][7][8][9][10][11][12]. Energy, transport, and various industries are the main areas of hydrogen sensor use at present and in the future. Hydrogen is used in metal smelting, household chemicals, glass manufacturing, semiconductor manufacturing, and oil extraction. Hydrogen is also used as a fuel for environmentally friendly cars. In addition, since Hydrogen is explosive, it is necessary to control its concentration in coal mines, nuclear reactors, battery rooms, etc.
Therefore, the Hydrogen gas analyzers' development based on explosion-proof sensors does not stop; nor does the improvement of their parameters in terms of speed and measurement accuracy [13,14]. This article investigates capacitive MOSFE hydrogen sensors in a high-temperature ceramic design (Figure 1a,b), the description of which is considered in detail in [15]. Figure 1. High-temperature ceramic package (without MOS structure) created using laser micromilling technology with Platinum and gold metallization (a). An example of the MOSFEC sensors' high-temperature design: the standard TO-8 package is used as a base (adapter) for the monolithic sintered ceramic housing (b). An example of the CV characteristics' shift of the Si-based MOS capacitor under the Hydrogen action at the operating temperature of 100 and 200 °C-black and red curves, respectively (c). Solid lines-CVC in the absence of H2, dotted lines-CVC in the presence of H2. See the text for a description of the designations. The physical basis for the capacitive MOSFE sensors' operation is the field effect, which changes the free charge carriers' concentration in the semiconductor's near-surface region at the interface with the insulator under the action of an electrical voltage applied to the sensor. When the sensor is exposed to the detected gas, its molecules diffuse through the electrode film to the metal-insulator interface, where they are adsorbed by active capture centers. This leads to a change in the electric field in the insulator and the semiconductor and a free charge carriers' redistribution in the semiconductor's near-surface region and, as a result, a shift in the MOS structure's capacitance-voltage characteristic (CVC or CV characteristics) along the voltage axis (Figure 1c). The shift ΔUbias value can be compared quantitatively with the detected gas concentration, while the MOSFE capacitor's useful signal can be either directly the ΔUbias value at a fixed reference capacitance Cref value, or the change in capacitance ΔC at a fixed Ubias value. Our technology-distinctive feature to produce MOSFEC sensors with the Pd/Ta2O5/SiO2/Si type structure is high gas sensitivity in the operating temperature range from 50 °C to 150 °C with a limit of detection (LOD) for Hydrogen at the level of 150 ppb [16,17].
Such a sensitivity, in our opinion, is largely due to a combination of the following technological factors: the use of Pulsed Laser Deposition (PLD) for the thin films' fabrication; the capacitor type of the MOS structures; and a porous electrode with a diameter of 2 to 3 mm made of catalytically active Palladium. For comparison, [13] presents a large-scale review of semiconductor hydrogen sensors (resistive, based on Schottky diodes, MOS transistors, MOS capacitors, etc.) manufactured using various physical methods: Sol-Gel Annealing, Magnetron Sputtering (MS), Thermal Oxidation, Spray Pyrolysis, PLD, etc. The LOD of the most sensitive samples presented in this review is 5 ppm. In [18], a chemo-resistive nanocomposite NiO:Pd sensor capable of detecting Hydrogen concentrations in the air down to 300 ppb and operating in the temperature range of 115-145 °C was described. In [19], the best 50 ppb Hydrogen LOD result that we could find in the literature is presented: a gas sensor based on SnO2-loaded ZnO nanofibers fabricated using an electrospinning technique with an optimal working temperature of 300 °C. Nevertheless, exceptionally high sensitivity is not sufficient, and the main problem, which is the motivation for this work, is the following. The operating temperature of 50-150 °C, which is typical for MOSFEC Si-based sensors, gives a response speed of 5-10 min when detecting hydrogen concentrations at the level of units to hundreds of ppm. This corresponds to other authors' results for different types of hydrogen sensors [13,20,21]. Such indicators are not always acceptable for safety tasks in conditions where there is a risk of rapid formation and accumulation of harmful and dangerous gases. Increasing the operating temperature can be a solution. For example, in review [13], the best response times to 1000-1500 ppm of Hydrogen are a few seconds to tens of seconds for sensors with an operating temperature of 300-500 °C. At 200 °C and above, however, active generation of the intrinsic charge carriers occurs in the Si semiconductor. As a result, the CV characteristics' shape of classical Pd/SiO2/Si type MOS structures is significantly deformed (Figure 1c) and the measurement error of the ΔUbias or ΔC values under the gas action increases, which worsens the MOSFEC sensors' LOD parameter. In addition, a thin-film Pd electrode, which, according to our experimental data, begins to oxidize at 220 °C and loses its conductive properties, can also cause a failure in operation. A well-known solution to this problem is the use of a WBG semiconductor (for example, SiC, AlN, GaN, AlGaN, diamond) as a substrate, and catalysts resistant to high temperatures as the gate material; for example, Platinum or Ruthenium [8,22,23]. In this work, we used the SiC semiconductor, which has advantages such as high chemical inertness, physical stability, and high thermal conductivity [24]. All this makes SiC suitable for use in harsh environments such as high temperatures and radiation. For example, it was shown in [22] that MOS capacitors with the Pt/TaOx/SiO2/SiC structure (Pt is a porous electrode; n-type (0001) Si-face 4H-SiC substrates) can operate at the temperature of 200 °C in environments with an extremely high concentration of water vapor (about 45% vol.). At the same time, they maintain sensitivity to H2, CO, ethane, and ethene with a LOD of a few to tens of ppm, which is applicable to the problem of monitoring the exhaust gases of fuel cells based on Hydrogen or Hydrocarbons.
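Before turning to the aim of this work, the two readout modes described for Figure 1c, the CV-curve shift ΔUbias at a fixed reference capacitance Cref and the capacitance change ΔC at a fixed bias Ubias, can be made concrete with a short interpolation sketch. The curves below are synthetic stand-ins shaped roughly like Figure 1c, not measured data, and the function names are illustrative.

```python
import numpy as np

def delta_ubias(u, c_air, c_gas, c_ref):
    """CV-curve shift along the voltage axis at a fixed reference capacitance C_ref.
    Assumes C(U) is monotonically increasing over the supplied range (steep part of the curve)."""
    u_air = np.interp(c_ref, c_air, u)
    u_gas = np.interp(c_ref, c_gas, u)
    return u_gas - u_air

def delta_c(u, c_air, c_gas, u_bias):
    """Capacitance change at a fixed bias voltage U_bias."""
    return np.interp(u_bias, u, c_gas) - np.interp(u_bias, u, c_air)

# synthetic CV curves (n-type MOS capacitor, depletion to accumulation), values in nF
u = np.linspace(-4.0, 0.5, 200)
c_air = 0.5 + 2.5 / (1 + np.exp(-(u + 2.0) / 0.3))   # curve in clean air
c_gas = 0.5 + 2.5 / (1 + np.exp(-(u + 2.5) / 0.3))   # same curve shifted by hydrogen exposure
print(delta_ubias(u, c_air, c_gas, c_ref=1.5), delta_c(u, c_air, c_gas, u_bias=-2.0))
```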
The aim of this work is to create MOSFEC sensors using PLD technology based on a SiC semiconductor substrate to expand the operating temperature range, increase speed and maintain high sensitivity to Hydrogen with a LOD of at least 150 ppb, as well as to compare the characteristics of the obtained sensors with classical sensors on the Si substrate. The metal electrode (gate) film "Me" is a distinctive feature: for Samples No. 1-Palladium obtained by the PLD method; for Samples No. 2-Platinum formed by MS on the surface of an insulator doped with Palladium by the PLD. Samples' Production and Setup Description Palladium doping was used to maintain the sensitivity at the same level as that of sensors with a Pd/Ta2O5 metal-insulator interface, which largely determines the gas-sensitive properties of sensors with a porous electrode [25,26]. For example, Figure 2b shows the Pd electrode surfaces SEM photograph, which illustrates the metal film porosity. Control samples of Si-based MOS structures for gas sensors were used as a "starting point" for comparing and identifying the WBG semiconductor contribution. The control samples were fabricated based on a n-type silicon wafer (resistivity 15 Ω·cm) with a basic thermally oxidized silicon insulator layer.
A solid-state yttrium aluminum garnet laser was used in the PLD setup. The deposition was carried out at a pressure of 1 × 10−5 Torr. The MS system was equipped with a 3-inch circular planar magnetron (Pinch Magneto series). The magnetron was operated at an argon pressure of 1 Pa (7.5 × 10−3 Torr). The target material was deposited on the substrate surface through a ceramic mask. Both methods of thin films' vacuum deposition (which have been used since the 1960s-1970s, are well studied, and debugged) provide high adhesion of the deposited film to the substrate due to the optimal value of the deposited particles' energy without damaging the substrate surface, and without mutual mixing of the target and substrate materials. This makes it possible to increase the yield of high-quality and long-term stable MOS structures and to achieve a minimum spread in the characteristics of gas sensors based on them [16,27]. On the obtained MOS structures' bases, gas sensors were fabricated in specialized miniature metal-ceramic packages measuring 6.0 × 6.4 × 2.0 mm from monolithic 96% aluminum oxide ceramics with a built-in platinum heater using Adaptive Laser Microengraving technology [15,[28][29][30]. This technology of laser processing of monolithic sintered ceramics is an affordable alternative to LTCC technology (low-temperature co-fired ceramics) and allows us to create metal-ceramic packages quickly and cheaply in small-scale production using non-standard solutions. An example of the sensors' constructive implementation is shown in Figure 1b. A similar ceramic housing, in contrast to the usual glass-to-metal one (operating temperature limit 250 °C)-which is also shown in Figure 1b, and used as a carrier and a DIP adapter-provides the ability to operate at temperatures up to 500 °C with power consumption of 0.5 W at 200 °C. The studies were carried out on an experimental setup in which gas concentrations were created by the static dilution method of Hydrogen with the air. To do this, MOSFEC sensors, the operating temperature of which was maintained and regulated by an electronic board, were placed in a sealed fixed-volume chamber with the possibility of pumping and updating the gas mixture with the pump. It was also possible to dose Hydrogen, obtained from the generator by the electrolysis method or a control gas mixture cylinder, into the volume of the chamber using a measuring syringe.
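Under the static dilution method described above, the nominal concentration follows directly from the ratio of the injected hydrogen volume to the chamber volume (assuming ideal mixing at atmospheric pressure). The sketch below illustrates the arithmetic; the 10 L chamber volume and the syringe doses are assumed values, not the actual setup parameters.

```python
def h2_ppm(injected_ml: float, chamber_l: float, purity: float = 1.0) -> float:
    """Volume-fraction concentration in ppm after injecting pure (or diluted) H2
    into a sealed chamber of fixed volume, assuming ideal mixing."""
    return injected_ml * purity / (chamber_l * 1000.0) * 1e6

for v in (0.01, 0.1, 1.0, 10.0):                      # syringe doses in mL
    print(f"{v} mL into a 10 L chamber -> {h2_ppm(v, 10.0):.0f} ppm H2")
```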
To measure the sensors' response under the Hydrogen action, two measuring devices were used independently of each other: (1) an electronic circuit board based on the PCap-01D chip (further in the text "Board CDC") [31,32] and (2) a precision digital RCL meter Aktakom AMM-3068 (NPP "ELIKS" company, Moscow, Russian Federation [33]) with the following settings: test frequencies of the measuring signal in the range from 2 to 200 kHz; test signal voltage fixed at 50 mV; output impedance 10 Ω; and scanning speed 2.7 meas./s. The electronic Board CDC used includes the following functional blocks: (1) capacitance conversion; (2) bias voltage generation; (3) heating and sensor's temperature control; and (4) communication with external devices and control. The operation principle is that the converter periodically charges and discharges the MOS capacitor and determines the capacitance value from the discharge time (which is uniquely related to the capacitance value). The measurement upper limit was 3500 pF. The bias voltage in the range from −4 to +0.5 V is set by the microcontroller and the DAC chip. The sensor's operating temperature is set and regulated according to a proportional-integral algorithm using a program stored in the microcontroller's memory. The sensor temperature is measured by a thermistor also connected to the circuit. The voltage from the thermistor, proportional to the sensor's temperature, is digitized by the ADC chip, and read by the microcontroller via the SPI interface. Response Speed Determination The tasks of the first experiment series were to determine the sensors' response speed and to assess the sensitivity level using the Board CDC, which, in addition to measuring the capacitance value of the sensor, makes it possible to measure the CV characteristics. The results are shown in Figures 3 and 4 and in Table 1.
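The capacitance-to-time conversion principle described above (the converter derives the MOS capacitance from its discharge time) and the proportional-integral temperature loop can be sketched in a few lines. The RC discharge relation, the threshold voltages, the resistor value and the controller gains below are illustrative assumptions; this is not the PCap-01D register-level interface.

```python
import math

def capacitance_from_discharge(t_discharge_s, r_ohm, v_start, v_stop):
    """For an RC discharge, t = R*C*ln(V_start/V_stop), so C = t / (R*ln(V_start/V_stop))."""
    return t_discharge_s / (r_ohm * math.log(v_start / v_stop))

# e.g. a 1 nF capacitor discharged through 10 kOhm from 3.0 V to 1.0 V takes ~11 us
print(capacitance_from_discharge(11e-6, 10e3, 3.0, 1.0) * 1e9, "nF")

def pi_step(t_set, t_meas, integral, kp=0.05, ki=0.01, dt=0.1):
    """One step of a proportional-integral heater controller: returns (duty 0..1, new integral)."""
    err = t_set - t_meas
    integral += err * dt
    duty = max(0.0, min(1.0, kp * err + ki * integral))
    return duty, integral
```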
Designations used in Table 1: T-sensor operating temperature; τ0.9-time during which the sensor response to Hydrogen supply reaches 90% of the maximum value; τ0.1-time required for the sensor readings to return to the 10% level of the maximum response value after Hydrogen removal; τfull-time required for the sensor readings' return to the zero level after the Hydrogen removal; Sensitivity-ratio of the response value ΔC to the supplied Hydrogen concentration value (ΔC/CH2); Ubias-bias voltage value applied to the MOS structure, relative to which the response value ΔC is determined. The results predictably show that, as the temperature increases, the response times of all samples improve significantly. In this case, the sensitivity of the Control sample (Si) decreases, while that of the SiC samples, on the contrary, increases significantly. According to the response times, the Control sample and Sample No. 1 (both with Palladium electrodes) are close to each other. Sample No. 2 with a Platinum electrode is noticeably worse, both at 50 °C and at 200 °C. Figure 4 shows the sensor capacitance dependencies (along the main Y-axis) and the response value ΔC (along the auxiliary Y-axis), when Hydrogen is supplied, on the bias voltage Ubias applied to the MOS capacitor's plates. This dependence is non-linear, so the sensors' sensitivity level, estimated using the Board CDC and quantified in Table 1, is relative. For a more correct comparison of the sensitivity parameters, it is necessary to turn to the physical foundations of the capacitive MOS structures' gas sensitivity and consider the magnitude of the shift in the CV characteristics ΔUbias under the gas action.
However, due to the peculiarities of the Board CDC's measuring circuit, the value of ∆U_bias depends on the reference capacitance value C_ref, by analogy with the dependence ∆C(U_bias): see Figure 4. In addition, at 200 °C the SiC sensors' capacitance, as can be seen from Figure 4b, exceeds 3500 pF at some U_bias values, which is beyond the Board CDC's measurement capabilities. For these reasons, further sensitivity studies were carried out using the RCL-meter, taking into account the previously obtained information about the sensors' response speed.

The Influence of the Measuring Test Signal Frequency

Before the main study of the sensors' hydrogen sensitivity, the influence of the measuring test signal frequency was established. The CV characteristics' shift at a sensor operating temperature of 170 °C was studied. The experimental frequencies of the measuring signal, with a fixed amplitude of 50 mV, were 2, 20, and 200 kHz. The results are shown in Figure 5 and in Table 2. As can be seen, the Control Si samples' CV characteristics (Figure 5c,f) differ greatly from those of the SiC Samples No. 1 and No. 2. The main reason is that the operating temperature of 170 °C is already high enough to start the intrinsic charge carrier generation process in Si samples, which deforms the CV characteristics and increases the useful signal's measurement error. We have already discussed this in the Introduction (Figure 1c) and confirmed it experimentally here. It has been shown that, within the measurement error, the values of ∆U_bias do not depend on the measuring test signal frequency, which correlates with our previous results [32,34]. The value of ∆C_max in the case of Sample No. 2 decreases non-linearly as the test signal frequency increases (Figure 5e). In the remainder of the work, measuring signal parameters of 20 kHz and 50 mV were used.

The Influence of the Sensor Operation Temperature

The results of the study of the sensor operating temperature's influence on the hydrogen sensitivity are shown in Figure 6 and in Table 3. For comparison, Table 3 also shows the ∆C_max(U_bias) data obtained on the Board CDC (Figure 4). The total range of temperatures studied was from 50 to 300 °C. As expected, the SiC samples exhibited a lower temperature dependence compared to the Si samples. For example, the Control sample's CVC is deformed starting from 150 °C.
At 200 °C, the ∆C(U_bias) function becomes sign-variable, which is unacceptable for a sensor gas analyzer. In this case, with a response to 1000 ppm H2, there are U_bias values at which ∆C < 0; for example, if U_bias = −1.5 V, then ∆C = −0.05 nF: see Figure 6f. This means that the gas analyzer will register a decrease in the hydrogen concentration, although in fact the opposite is true. Similar problems may arise in the case of the Si samples with a high operating temperature and when measuring the useful signal as ∆U_bias(C_ref). When comparing the SiC samples with different electrodes (Pd and Pt), we note the following features. The sensors' operating temperature at which it is possible to register the maximum value of the response ∆C_max per 1000 ppm H2 is 150-200 °C for both samples. Therefore, the use of Platinum for high-temperature performance is not a necessary condition for MOSFEC hydrogen sensors. Palladium remains the electrode that provides the highest sensitivity: compare the ∆U_bias values for Samples No. 1 and No. 2 in Table 3. In the operating temperature range of 50-150 °C, Si samples are more efficient than SiC. Thus, optimal operating temperatures for the sensors were set, the choice being due to the balance between sensitivity and speed. These values were used further in the study of the Hydrogen sensitivity in the concentration range from 1 to 1000 ppm.

Investigation of the Hydrogen Sensitivity and LOD

As noted in the Introduction, the MOSFEC sensor's useful signal (and, therefore, the sensitivity) can be measured in two ways: by the ∆U_bias value at a fixed reference capacitance C_ref, or by the change in capacitance ∆C at a fixed U_bias value (Figure 1c). However, experimental data have shown that the sensitivity estimate can be highly dependent on the measurement method and the measurement signal parameters: compare the data obtained with the RLC-Meter and with the Board CDC (Table 3). The operation principle of capacitive MOSFE gas sensors and the experimental results presented in Sections 2.3 and 2.4 indicate that, for a correct comparative analysis of different samples, it is necessary to focus on the initial signal ∆U_bias. However, the mediated ∆C signal has a higher resolution, and it is more efficient for estimating the LOD. The calibration results of the sensors' sensitivity to Hydrogen according to the RLC-Meter data are presented in Figure 7 and in Table 4, and according to the Board CDC data, on the example of Sample No. 1, in Figure 8. According to the ∆U_bias data in Table 4, the leaders in Hydrogen sensitivity are the Si samples with an operating temperature of 100 °C (Figure 9a). However, the difference between the Si and SiC samples is not that great. This can be explained by the metal-insulator interface uniformity, which largely affects the sensitivity and the function ∆U_bias(C_H2). However, according to the ∆C_max data, the Hydrogen sensitivity of the SiC samples with an operating temperature of 200 °C is 1-2 orders of magnitude higher than that of Si. From our point of view, the main reason for this is the lower temperature dependence of the SiC samples' CV characteristics.
According to the data in Figure 8b, we calculate the LOD of Hydrogen for the different samples, where N is the noise of the sensor capacitance signal, pF; S_max is the maximum sensor sensitivity, pF/ppm; and ∆C_max is the maximum change in sensor capacitance, pF, under the action of H2 with concentration C_H2, ppm.

Response times at 1-100 ppm of Hydrogen, min:
• Sample No. 1—τ0.9 = 1 ± 0.5; τ0.1 = 2 ± 1; τfull = 7 ± 2;
• Sample No. 2—τ0.9 = 5 ± 1; τ0.1 = 5 ± 2; τfull = 30 ± 10;
• Control sample—τ0.9 = 5 ± 3; τ0.1 = 10 ± 5; τfull = 20 ± 5.

Therefore, using the data obtained, we will answer the main questions of this work: What are the advantages of using SiC and what potential difficulties are associated with it?

Discussion

Let us turn again to Figure 9, which shows the calibration data for the sensors' hydrogen sensitivity. Figure 9a, already mentioned above, shows that, in terms of increasing the signal ∆U_bias, the use of SiC does not bring benefits compared to Si. However, in this work, the main goal is to accelerate the response speed of the sensor by increasing the operating temperature. This was achieved with Sample No. 1 with a Pd electrode. Nevertheless, it was shown that the sensitivity can also be increased by registering the useful signal not by the value of ∆U_bias, but by ∆C (Figure 7, Table 4). This approach has some peculiarities. Figure 9b, using the example of Sample No. 1, illustrates the dependence of the calibration characteristic ∆C(C_H2) on the measuring device (RLC-meter or Board CDC) and on the choice of operating settings.
The "C max " curve corresponds to ideal conditions under which the maximum sensor sensitivity is achieved over the entire range of gas concentrations, but, in reality, this is unattainable. Examples of really possible calibration characteristics are the curves corresponding to U bias = 0.53 V or U bias = 0.74 V (see also lines 1 and 2 in Figure 7a). As can be seen from Figure 9b, such calibrations will differ from ideal ones to the measurements' detriment of either high or low Hydrogen concentrations. Unfortunately, this feature is equally inherent for all experimental samples (Table 4). Even in the case of measuring ∆C, however, the higher sensitivity of SiC samples did not contribute to better LOD values. This can be explained by the fact that Board CDC is a circuit solution designed for the Si-based sensors. Therefore, in order to record the ∆C signal in the case of SiC samples as efficiently as possible, optimization is required. Thus, for the MOSFEC hydrogen sensors in the substitution from the Si substrate to SiC with the preservation of the Pd electrode, it was possible to achieve the following: (1) reducing the temperature effect on the CV characteristics; (2) increase of Hydrogen sensitivity function ∆C(C H2 ) by 1-2 orders of magnitude; (3) the response speed increase is not worse than 2 times. In the future, SiC will also make it possible to miniaturize the MOSFEC sensor while maintaining a high sensitivity level. Therefore, in [35,36], one of such methods is described, which consists of the nanocomposite Pd:SnO 2 film formation obtained using the Reactive Sputtering. In this case, the sensor operation efficiency requires an operating temperature increase in order to activate the oxygen vacancies' (adsorption centers) formation on the SnO 2 crystallites surface. Since the idea of upgrading field-effect gas sensors by using WBG semiconductors is not new, it is interesting to compare our results with other authors' data. Thus, in [21], using the example of hydrogen sensors based on Schottky diodes and Pt/Ta 2 O 5 structures deposited on different Si and SiC substrates by radio frequency (RF) sputtering, a comparison of the sensors' operating parameters at temperatures from 25 • C to 200 • C was undertaken. The authors tested samples for exposure to H 2 with a concentration in the range from 600 ppm to 1% vol. (10,000 ppm) and showed that the SiC sensor exhibits relatively greater sensitivity, and the Si sensor a faster response. For example, at an operating temperature of 150 • C, the characteristic response times to 1250 ppm Hydrogen for Si samples were about τ 0.9 = 2 min and τ 0.1 = 6 min, and for SiC samples, they were τ 0.9 = 5 min and τ 0.1 = 13 min. Comparing our results ( Figure 3, Table 1) with [21], we see that the response times of the Schottky diodes with a Pt contact are comparable to SiC Sample No. 2 with the Pt electrode, while our sensors with the Pd electrode (both Si-based and SiC-based) at 200 • C have a response time of 30 to 70 s. Table 5 presents the sensitivity assessing results of the experimental samples studied in this work by the value of ∆U bias in comparison with the voltage shift data of the other authors. The comparison shows that capacitive MOSFE sensors-both ours and those of other colleagues-are more sensitive to hydrogen. It is worth noting, however, the inevitable difficulties and limitations associated with the use of the SiC-based sensors at high operating temperatures. 
For example, according to estimates [37], SiC single crystals are capable of operating at 500 °C and higher, but most of the other typical structural elements of gas sensors (housing, metal contacts, etc.) are either unable to withstand such temperatures for a long time or have a strongly limited service life. This is due to several undesirable consequences of high temperature: mutual diffusion of materials, thermal expansion, corrosion, etc. Thus, the use of WBG semiconductors imposes stringent requirements on the gas sensors' design; this is why it took more than 25 years for their commercial implementation [10,12,38]. Besides, high temperature increases the efficiency of catalytic processes on the electrode surface and thus worsens the sensors' selectivity. For example, in [39], to control SO2 in emission desulfation systems in the energy sector, the authors propose to combat non-selectivity, i.e., to reduce the influence of O2, CO, and NOx, by using a cyclic mode of sensor heating up to 350-400 °C (to shift the maximum sensitivity in favor of SO2) and by carrying out linear discriminant analysis (LDA).

Summary

MOSFEC hydrogen sensors in a high-temperature ceramic housing, based on Me/Ta2O5/SiC(n+)/4H-SiC/Pt structures, were fabricated with two types of gas-sensing electrodes: Palladium, obtained using PLD; and Platinum, formed using MS. The features of using the wide band gap SiC semiconductor in the capacitive MOSFE sensor structure were studied in terms of the Hydrogen gas sensitivity, the response speed, and the optimal parameters of the measuring signals. The influence of the operating temperature and of the test signal frequency used for measuring the sensors' capacitance on the sensitivity to H2 was studied. It has been experimentally established that the sensors' operating temperature at which it is possible to register the maximum value of the response to H2 is 150-200 °C for the SiC-based samples, both with a Palladium electrode and with Platinum. In the operating temperature range of 50-150 °C, Si samples are more efficient than SiC. The calculated LOD_H2 values for the experimental samples range from 75 to 500 ppb. It is shown that, by substituting the Si substrate of the MOSFEC Hydrogen sensors with SiC while preserving the Pd electrode, it was possible to achieve a reduction of the temperature effect on the CV characteristics, an increase of the Hydrogen sensitivity function ∆C(C_H2) by 1-2 orders of magnitude, and an increase in response speed of at least a factor of 2. In the future, SiC will also make it possible to miniaturize the MOSFEC sensor while maintaining a high sensitivity level.
2023-04-08T15:07:55.811Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "09d9a74ebda4c9c3a92853da1edd4391196eb524", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "945eb1313a20d61041ebcf13b7757e8083be916c", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
231879902
pes2o/s2orc
v3-fos-license
Sufficiently Accurate Model Learning for Planning

Data-driven models of dynamical systems help planners and controllers to provide more precise and accurate motions. Most model learning algorithms will try to minimize a loss function between the observed data and the model's predictions. This can be improved using prior knowledge about the task at hand, which can be encoded in the form of constraints. This turns the unconstrained model learning problem into a constrained one. These constraints allow models with finite capacity to focus their expressive power on important aspects of the system. This can lead to models that are better suited for certain tasks. This paper introduces the constrained Sufficiently Accurate model learning approach, provides examples of such problems, and presents a theorem on how close some approximate solutions can be. The approximate solution quality will depend on the function parameterization, the loss and constraint function smoothness, and the number of samples in model learning.

I. INTRODUCTION

Dynamics models play an essential role in many modern controllers and planners. This is, for instance, the case in Model Predictive Control (MPC) [1]. They can also be used to compute feedforward terms for feedback controllers [2], or be used with a planner to find a trajectory from which stabilizing controllers can be generated [3], [4]. While model-free Reinforcement Learning techniques can solve similar problems without the use of a dynamics model [5]-[7], they generally do not scale to new problems or changes in problem parameters. Model learning methods perform admirably when the models can approximate the dynamics of the system accurately. However, the performance of these controllers can be degraded by uncertainty in the model. While robust controllers can be designed to attempt to alleviate these issues [8], these methods typically perform conservatively due to having to cater to worst-case approximations of the model. In addition, there may be effects that a robust controller designer may not be aware of. For example, consider the problem of landing a quadrotor precisely at a target as shown in Figure 9. There are complex aerodynamic effects that arise when nearby surfaces cause disturbances to the airflow. This may result in large torques when the quadrotor is hovering close to the ground and hamper precise landings. This is known as the "ground effect." These aerodynamic effects can be hard to model from just prior knowledge and may show up as a highly correlated, state-dependent, non-zero-mean noise. A common method that has been suggested to model similar effects is to learn or adjust a dynamics model with real data taken from running the system.

For linear systems, this method of System Identification (SI) or Model Learning has many results [9]-[14], such as recoverability of linear system dynamics when the data contains sufficient excitation. System identification methods have been proposed for non-linear systems [15]-[17]. However, they suffer from issues such as the need for a large amount of data or the requirement of a special problem structure such as block-based systems. While generic methods for non-linear systems do not provide the same theoretical guarantees, there has been some success in practice. In robotics, Gaussian Processes, Gaussian Mixture Models, or Neural Networks have been used to learn models of dynamics [18]-[21].
A typical process for learning these models involves selecting a parameterized model, such as a neural network with a fixed number of layers and neurons, and choosing a loss function that penalizes the output of the model for not matching the data gathered from running the real system. Then, one optimizes the parameters by minimizing the empirical risk using, for instance, stochastic gradient descent like algorithms. This formulation assumes that all transitions are equally important, since it penalizes the mismatch between model and data uniformly on all portions of the state-action space. While this formulation has shown success in some applications, prior knowledge about the task and system can inform better learning objectives. A control designer may know that a certain part of the state space requires a certain accuracy for a robust controller to work well, or that some part of the state space is more important and should have hard constraints on the model accuracy. For example, to precisely land a quadrotor, a designer may note that the accuracy of modeling the complex ground effect forces is more important near the landing site. Incorporating this prior knowledge can lead to better performing models. To address the problem of incorporating prior knowledge into model learning, we introduced the idea of Sufficiently Accurate Model Learning in [22]. This formulation is based on the inclusion of constraints in the optimization problem whose role is to introduce prior-knowledge about the importance of different state-control subsets. In the example of the quadrotor, notice that when the quadrotor is away from the surfaces, the ground effect is minor and thus, it is important to focus the learned model's expressiveness in the region of the statespace that is most heavily affected. This can be easily captured by a constraint that the average error in the important statespace regions is smaller than a desired value. These constraints will allow models with finite expressiveness concentrate on modeling important aspects of a system. One point to note is that this constrained objective can be used orthogonally to many existing methods. For example, the constrained objective can replace the unconstrained objective in [20], [21], and all other aspects of the methods can remain the same. The data can be collected the same way. The idea of using extra sensors during training for [20] need not change. While not trivial, the idea of constraints can also be applied to Gaussian process models such as [18], [19]. In its most generic form, the problem of model learning is an infinite dimensional non-convex optimization problem that involves the computation of expectations with respect to an unknown distribution over the state-action space. In addition, the formulation proposed here introduces constraints which seems to make the learning process even more challenging. However, in this work we show that solving this problem accurately is not more challenging than solving an unconstrained parametric learning problem. To reach this conclusion we solve a relaxation of the problem of interest with three common modifications: (i) function parameterization, (ii) empirical approximation, and (iii) dual problem solving. Function parameterization turns the infinite dimensional problem into one over finite function parameters. Empirical approximation allows for efficient computation of approximate expectations, and solving the dual problem leads to a convex unconstrained optimization problem. 
The three approximations introduced, however, may not yield solutions that are good approximations of the original problem. To that end, we establish a bound on the difference of the values of these solutions. This gap between the original and approximate problems depends on the number of data samples as well as the expressiveness of the function approximation (Theorem 1). In particular, the bound can be made arbitrarily small with a sufficient number of samples and with the selection of a rich enough function approximator. This implies that solving the functional constrained problem is nearly equivalent to solving a sequence of unconstrained approximate problems using primal-dual methods. This paper extends [22] to the case of empirical samples and presents Theorem 1, which relates the number of samples, the function approximation expressiveness, and the loss function smoothness to the approximation error. In addition, there is experimental validation of Theorem 1 as well as different examples to showcase the framework of Sufficiently Accurate model learning. This paper is organized as follows: Section II introduces the Sufficiently Accurate model learning framework, and Section III presents in detail the three approximations introduced to solve the problem as well as a result that bounds the error of the approximate problem (Theorem 1). Section IV provides the proof of the main theorem. Section V presents a simple primal-dual method to solve the constrained problem, while experimental results are presented in Section VI on a double integrator system with friction, a ball bouncing task, and a quadrotor landing with ground effect in simulation. Theoretical results are experimentally tested for the double integrator system. Section VII presents the paper's conclusions.

II. SUFFICIENTLY ACCURATE MODEL LEARNING

In this paper we consider the problem of learning a discrete-time dynamical system. Let us denote t ∈ Z as the time index and let x_t ∈ R^n, u_t ∈ R^p be the state and input of the system at time t. The dynamical system of interest is represented by a function f : R^n × R^p → R^n that relates the state and input at time t to the state at time t + 1,

x_{t+1} = f(x_t, u_t).   (1)

One approach to System Identification or Model Learning consists of fitting an estimated dynamical model, φ : R^n × R^p → R^n, to state transition data [18]. This state transition data consists of tuples s = (x_t, u_t, x_{t+1}) drawn from a distribution S_D with the sample space S ⊆ R^n × R^p × R^n. The estimated model φ belongs to a class of functions, Φ, of which f is an element. Φ could be, for example, the space of all continuous functions. Then, the problem of model learning reduces to finding the function φ in the class Φ that best fits the transition data. The figure of merit is a loss function ℓ : S × Φ → R. With these definitions, the problem of interest can be written as the following stochastic optimization problem

min_{φ ∈ Φ}  E_{s ∼ S_D}[ ℓ(s, φ) ].   (2)

The loss function needs to be selected to encourage the model φ to match its output with the state transition data. An example of a loss function is the p-norm, ℓ(s, φ) = ‖φ(x_t, u_t) − x_{t+1}‖_p. For p = 1, this reduces to a sum of absolute differences between each output dimension of φ and the true next state, x_{t+1}. When p = 2, this is simply a Euclidean loss. Other common losses include the Huber loss and, in discrete state settings, a 0-1 loss. Oftentimes, one can derive, from first principles, a model f̃ of the dynamical system of interest.
Depending on the complexity of the system, these models may be inaccurate since they may ignore hard-to-model dynamics or higher-order effects. For instance, one can derive a model f̃ for a quadrotor from rigid body dynamics, where forces and torques are functions of each motor's propeller speed. However, the accuracy of this model will depend on other effects that are harder to model, such as aerodynamic forces near the ground. If these aerodynamic effects are ignored, it can result in a failure to control the system or in poor performance [23]. In these cases, the target model, denoted by φ̃, can be decomposed as the sum of an analytic model and an error term,

φ̃(x_t, u_t) = f̃(x_t, u_t) + φ(x_t, u_t).

The learning of the error term, or residual model, fits the framework described in (2). For instance, the p-norm loss can be modified to take the form ℓ(s, φ) = ‖f̃(x_t, u_t) + φ(x_t, u_t) − x_{t+1}‖_p.

A characteristic of the classic model learning problem defined in (2) is that errors are uniformly weighted across the state-input space. In principle, one can craft the loss so as to represent different properties on different subsets of the state-input space. However, this design is challenging, system dependent, and dependent on the transition data available. In contrast, our approach aims to exploit prior knowledge about the errors and how they impact the control performance. For instance, based on the analysis of robust controllers one can have bounds on the error required for successful control. This information can be used to formulate the Sufficiently Accurate model learning problem, where we introduce the prior information in the form of constraints. Formally, we encode the prior information by introducing K ∈ N functions g_k : S × Φ → R. Define, in addition, a collection of subsets of transition tuples where this prior information is relevant, S_k ⊂ S, and corresponding indicator functions I_k(s) : S → {0, 1} taking the value one if s ∈ S_k and zero otherwise. With these definitions the sufficiently accurate model learning problem is defined as

min_{φ ∈ Φ}  E_{s ∼ S_D}[ ℓ(s, φ) ]   subject to   E_{s ∼ S_D}[ g_k(s, φ) I_k(s) ] ≤ 0,   k = 1, …, K.   (4)

Note that the sets S_k that define the indicator variables I_k(s) are not necessarily disjoint. In fact, in practice, they are often not. The sets can be arbitrary and have no relation to each other. Examples of how these sets are used are given at the end of this section. Notice that (4) is an infinite-dimensional problem since the optimization variable is a function and it involves the computation of expectations with respect to a possibly unknown distribution. An approximation to this problem is presented in Section III. For technical reasons, the functions g_k and ℓ should be expectation-wise Lipschitz continuous. The expectation-wise Lipschitz assumption is weaker than Lipschitz continuity, as any Lipschitz-continuous function with a Lipschitz constant L is also expectation-wise Lipschitz-continuous with a constant of L. In particular, the loss functions in Examples 1 and 2 are expectation-wise Lipschitz-continuous with some constant (cf. Appendix A). There is no assumption that the functions should be convex or continuous. Before we proceed, we present two examples of Sufficiently Accurate model learning. For notational brevity, when an expectation does not have a subscript, it is always taken over s ∼ S_D.

Example 1. This problem is a simple modification of (2). It has the same objective, but adds a constraint that a certain state-control subset, defined by a set A, should be within c accuracy. The indicator variable I_A(s) will be 1 when s is in the set A. Here, g(s, φ) = ‖φ(x_t, u_t) − x_{t+1}‖_2 − c, so that the constraint in (4) requires the expected error over the set A to be at most c E[I_A(s)].
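To make the empirical evaluation of (2) and of the Example 1 constraint concrete, the following is a minimal sketch rather than the authors' implementation; the callables `phi` (the model) and `in_A` (membership test for the set A), and the layout of a sample `s` as an `(x_t, u_t, x_next)` tuple, are assumptions introduced for illustration.

```python
import numpy as np

def loss(phi, s, p=2):
    """Per-sample p-norm loss l(s, phi) = ||phi(x_t, u_t) - x_{t+1}||_p."""
    x_t, u_t, x_next = s
    return np.linalg.norm(phi(x_t, u_t) - x_next, ord=p)

def constraint_example1(phi, s, c, in_A):
    """g(s, phi) I_A(s) for Example 1: error minus tolerance c, counted only on the set A."""
    return (loss(phi, s, p=2) - c) * float(in_A(s))

def empirical_objective_and_constraint(phi, samples, c, in_A):
    """Sample averages approximating E[l(s, phi)] and E[g(s, phi) I_A(s)]."""
    obj = np.mean([loss(phi, s) for s in samples])
    con = np.mean([constraint_example1(phi, s, c, in_A) for s in samples])
    return obj, con
```

With this estimate, a model is feasible for the Example 1 constraint whenever the second returned value is non-positive.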
This formulation allows you to trade off the accuracy in one part of the state-control space with everything else as it may be more important to a task. Another use case can be to provide an error bound for robust controllers. This is the formulation used in the quadrotor precise landing experiments detailed later in Section VI-C, where the set A is defined to be all states close to the ground where the ground effect is more prominent. VI-C. Example 2. Normalized Objective where I A is the indicator variable for the subset A = {s ∈ S : x t+1 2 ≥ δ c }, and A C is the complement of the set A. This problem formulation looks at minimizing an objective such that the error term is normalized by the size of the next state. This can be useful in cases where the states can take values in a very large range. When the state is small, the "true" measurement of the state can be dominated by noise, and the model can be better off just bounding the error rather than focusing on fitting the noise. This is the formulation used in the ball bouncing experiment in Section VI-B, where the we would like the errors in velocity prediction to be scaled to the speed, and all errors below a small speed can be constrained with a simple constant. III. PROBLEM APPROXIMATION The unconstrained problem (2) and the constrained problem (4) are functional optimization problems. In general, these are infinite dimensional and usually intractable. Instead of optimizing over the entire function space Φ, one may look at function spaces, Φ θ ⊂ Φ, parameterized by a d-dimensional vector θ ∈ Θ = R d . Examples of these classes of functions are linear functions of the form φ θ (x, u) = θ x x + θ u u where θ = [θ x , θ u ] is a vector of weights for the state and control input. More complex function approximators, such as neural networks, may be used to express a richer class of functions [21], [24]. Restricting the function space poses a problem in that the optimal solution to (4) may no longer exist in the set Φ θ . The goal under these circumstances should be to find the closest solution in Φ θ to the true optimal solution φ . Additionally, the expectations of the loss and constraint functions are in general intractable. The distributions can be unknown or hard to compute in closed form. In practice, the expectation is approximated with a finite number of data samples s i ∼ S D with i = 1, . . . , N . This yields the following empirical parameterized risk minimization problem While both function and empirical approximations are common ways to simplify the problem, the approximate problem introduced in (8) is still a constrained optimization problem and can be difficult to solve in general as it can be nonconvex in the parameters θ. This is the case for instance when the function approximator is a neural network. One approach to solve this problem is to solve the dual problem associated with (8). To aid in the definition of the dual problem, we first define the dual variables (also known as Lagrange multipliers), λ ∈ R K + , along with the Lagrangian associated with (8) Here, the symbol, 0 (s i , φ θ ), is defined as l(s i , φ θ )I 0 (s) to condense the notation. Similarly, the bolded vector, The dual problem is now defined as Notice that (10) is similar to a regularized form of (8) where each constraint is penalized by a coefficient ω k Adding this type of regularization can weight certain stateaction spaces more. In fact, if ω k is chosen to be λ N , solving (11) would be equivalent to solving (10). 
However, arbitrary choices of ω k provide no guarantees on the constraint function values. By defining the constraint functions directly, constraint function values are determined independent of any tuning factor. For problems where strong guarantees are required or easier to define, the Sufficiently Accurate framework will satisfy them by design. An alternative interpretation is that (10) provides a principled way of selecting the regularization coefficients. In Section V, we discuss an implementation of a primal dual algorithm to do so. The dual problem has two important properties that hold regardless of the structure of the optimization problem (8). For any θ that minimizes the Lagrangian L N , the resulting function-termed the dual function-is concave on the multipliers, since it is the point-wise minimum of linear functions (see e.g. [25]). Therefore, its maximization is tractable and it can be done efficiently for instance using stochastic gradient descent. In addition, the dual function is always a lower bound on the value P N and in that sense solving the dual problem (10) provides the tightest lower bound. In the case of convex problems (that fulfill Slater's Condition), it is well known that the problems have zero duality gap, and therefore P N = D N [25, Section 5.2.3]. However, the problem (8) is non-convex and a priori we do not have guarantees on how far the values of the primal and the dual are. Moreover, recall that the primal problem in (8) is an empirical approximation of the problem that we are actually interested in solving (4). The previous discussion leads to the question about the quality of the solution (10) as an approximation to (4). The duality gap is defined as the difference between the primal and dual solutions of the same problem. Here, the gap is the difference between the primal and the dual of different but closely related problems. Hence, the quantity we are interested in bounding is the surrogate duality gap defined as To provide specific bounds for the difference in the previous expression we consider the family of function classes Φ θ termed -universal function approximators. We define this notion next. To provide some intuition on the definition consider the case where Φ is the space of all continuous function, the above property is satisfied by some neural network architecture. That is, for any , there exists a class of neural network architectures, Φ θ such that Φ θ is an -universal approximator for the set of continuous functions [24, Corollary 2.1]. Thus, for any dynamical system with continuous dynamics, this assumption is mild. Other parameterizations, such as Reproducing Kernel Hilbert Spaces, are -universal as well [26]. Notice that the previous definition is an approximation on the total norm variation and hence it is a milder assumption than the universal approximation property that fully connected neural networks exhibit [24]. Next, we define an intermediate problem on which the surrogate duality gap depends: a perturbed version of problem (4) where the constraints are relaxed by L ≥ 0 where L is the constant defined in Assumption 1 and the universal approximation constant in Definition 1 1 is a vector of ones. The perturbation results in a problem whose constraints are tighter as compared to (4). The set of feasible solutions for the perturbed problem (13) is a subset of the feasible solutions for the unperturbed problem (4). The perturbed problem accounts for the approximation introduced by the parameterization. 
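A sketch of what the perturbed problem (13) looks like under these definitions, assuming the constraints of (4) are tightened by the product of the universal approximation constant ε from Definition 1 and the Lipschitz constant L from Assumption 1 (with 1 denoting the vector of ones when the K constraints are stacked):

min_{φ ∈ Φ}  E_{s ∼ S_D}[ ℓ(s, φ) ]   subject to   E_{s ∼ S_D}[ g_k(s, φ) I_k(s) ] ≤ −εL,   k = 1, …, K,

so that any φ feasible for this tightened problem is also feasible for (4).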
In the worst case scenario, if the problem (13) is infeasible, the parameterized approximation of (8) may turn infeasible as the number of samples increases. Let λ L be the solution to the dual of (13) With these definitions, we can present the main theorem that bounds the surrogate duality gap. Theorem 1. Let Φ be a compact class of functions over a compact space such that there exists φ ∈ Φ for which (4) is feasible, and let Φ θ be an -universal approximator of Φ as in Definition 1. Let the space of Lagrange multipliers, λ, be a compact set as in [27]. In addition, let Assumption 1 hold and let Φ θ satisfy the following property where λ L 1 is the optimal dual variable for the problem (14), L is the Lipschitz constant for the loss function, and H Φ θ is the random VC-entropy [28, section II.B]. Note that both arguments for H Φ θ must be positive. Then P and D N , the values of (4) and (10) respectively, satisfy where the probability is over independent samples {s 1 , s 2 , . . . , s N } drawn from the distribution S as defined in problem (8). Proof. See Section IV The intuition behind the theorem is that given some acceptable surrogate duality gap, δ, there exists a neural network architecture, Φ θ , and a number of samples, N such that the probability that the solution to (10) is within δ to the solution to (4) is very high. The choice of neural network will influence the value of and λ L . These in turn will decide the duality gap, δ, as the quantity δ − L( λ L 1 + 1) must be positive. A larger neural network will correspond to a smaller which will also has an impact on the perturbed problem (13). A smoother function and smaller will lead to smaller perturbations. Smaller perturbations can lead to a smaller dual variable, λ L . Thus, larger neural networks and smoother dynamic systems will have smaller duality gaps. If L is large, then the perturbed problem may be infeasible. In theses cases, λ L will be infinite. This corresponds to problems where the function approximation simply can not satisfy the original constraint functions. For example, using constant functions to approximate a complicated system may violate the constraint functions g k for all possible constant functions. Thus, no δ exists to bound the solution as the parameterized empirical problem (8) has no feasible solution. This theorem suggests that with a good enough function approximation and large enough N , solving (10) is a good approximation to solving (4) with large probability. There are some details to point out in Theorem 1. First, the function H Φ θ a complicated function that will usually scale with the size of the neural network. A larger neural network will lead to a smaller , but may require a larger number of samples N to adequately converge to an acceptable solution. The assumption on the limiting behavior of H Φ θ is fufilled by some neural network architectures [29], but the general behavior of this function for all neural network architectures is still a topic of research. Additionally, we assume the space of Lagrange multipliers is a compact set. This will imply, along with compact state-action space, that L is bounded. A finite Lagrange multiplier is a reasonable assumption as the problem (4) is required to be feasible [27]. The bound established in Theorem 1 depends on quantities that are in general difficult to estimate, These include H Φ θ , λ L 1 , L, . 
Thus, while this theorem provides some insights on how these quantities influence the gap between solutions, it is mainly a statement of the existence of such values that can provide a desired result. In practice, this result can be achieved by choosing increasing the sizes of neural networks as well as data samples until the desired performance is reached. Note that the theorem follows our intuition that larger neural networks and more data will give us more accurate result. However, this theorem formalizes not only that it is more accurate, but that the error will tend to 0 as number of samples and number of parameters increase. IV. PROOF OF THEOREM 1 To begin, we define an intermediate problem Note that this is the unperturbed version of (13). As a reminder, this problem uses a class of parameterized functions, but does not use data samples to approximate the expectation. Thus, it can be seen as a step in between (4) and (8). As with the dual problem to (8), we can define the Lagrangian associated with (17) (18) and the dual problem Using this intermediate problem, we can break the bound P − D N into two components. As a reminder, P is the solution to the problem we want to solve in (4), D N is the solution to the problem (10) we can feasibly solve, and D θ is the solution to an intermediate problem (19). The first half of this bound, P − D θ , is the error that arises from using a parameterized function and dual approximation. The second half of this bound, D θ − D N , is the error that arises from using empirical data samples. It can be seen as a kind of generalization error. The proof will now be split into two parts that will find a bound for each of these errors. A. Function Approximation Error We first look at the quantity P − D θ . This can be further split as follows where D is the solution to the dual problem associated with (4). This is defined with the Lagrangian and the dual problem We note that the quantity |P − D | is actually 0 due to a result from [30,Theorem 1]. The theorem is reproduced here using the notation of this paper. Theorem 2 ( [30], Theorem 1). There is zero duality between the primal problem (4) and dual problem (23), if 1) There exists a strictly feasible solution (φ, λ) to (23) 2) The distribution S is nonatomic. While the problem defined in [30] is different from the sufficiently accurate problem defined in 4, there is an equivalent problem formulation (see Appendix B). Since Theorem 1 fulfills the assumptions of Theorem 2, we get |P − D | = 0. For the second half of this approximation error, D − D θ , has also been previously studied in [31,Theorem 1] in the context of a slightly different problem formulation. The following theorem adapts [31, Theorem 1] to the Sufficiently Accurate problem formulation (4). Theorem 3. Given the primal problem (4) and the dual problem (19), along with the following assumptions 1) Φ θ is an -universal function approximator for Φ, and there exists a strictly feasible solution φ θ for (17). 2) The loss and constraint functions are expectation-wise Lipschitz-continuous with constant L. 3) All assumptions of Theorem, 2 The dual value, D θ is bounded by where λ L is the dual variable that achieves the optimal solution to (14). Proof. See Appendix C Again, the assumptions of Theorem 1 fulfill the assumptions for Theorem 3. Due to notational differences, as well as a different way of framing the optimization problem, the proof has been adapted from [31] and is given in Appendix C. 
With Theorem 2 and 3, the following can be stated B. Empirical Error We now look at the empirical error, D θ − D N . We first observe the following Lemma. Then under the assumption of Theorem 1 it follows that Proof. See Appendix D C. Probabilistic Bound Substituting the parameterized bound (25) and the empirical bound (26) in (20) yields the following implication Let P |P − D N | ≤ δ be a probability over samples {s 1 , s 2 , . . . , s N } that are drawn to estimate the expectation in the primal problem (8). Using the implication (27) it follows that where the equality follows directly from the fact that for any event A, P(A) = 1 − P(A c ). The assumptions of Theorem 1 allows us to use the following result from Statistical Learning Theory [28, (Section II.B)], Note that this theorem requires bounded loss functions. The assumptions for a bounded dual variable, and compact stateaction space in Theorem 1 satisfies this constraint. Thus, this establishes that for any δ > 0, we have lim N →∞ P |P − D N | ≤ δ = 1. This concludes the proof of the theorem. Section III has shown that problem (10) can approximate (4) given a large enough neural network and enough samples. This section will discuss how to compute a solution (10). There are many primal-dual methods [32]- [34] in the literature to solve this exact problem, and Algorithm 1 is an example of a simple primal-dual algorithm. One way to approach this problem is to consider the optimal dual variable, λ N . Given knowledge of λ N , the problem reduces to the following unconstrained minimization UseŜ to compute estimates of ∇ θ L N (θ, λ) and ∇ λ L N (θ, λ) (See (31) and (32)); A possible solution method is to start with an estimate of λ N , and solve the minimization problem. Then holding the primal variables fixed, update the dual variables by solving the outer maximization. This method can be seen as solving a sequence of unconstrained minimization problems. This method can be further approximated; instead of fully minimizing with respect to the primal variables, a gradient descent step can be taken. And instead of fully maximizing with respect to the dual variables, a gradient ascent step can be taken. This leads to Algorithm 1 where we iterate between the inner minimization step and the outer maximization step. At each iteration, dual variables are projected onto the positive orthant of R K , denoted by the projection operator, [λ] + . This is to ensure non-negativity of the dual variables. In many cases, the full gradient of ∇ θ L N (θ, λ) and ∇ λ L N (θ, λ) can be too expensive to compute. This is due to the possibly large number of samples N . An alternative is to take stochastic gradient descent and ascent steps. The gradients can be approximated by taking M random samples of the whole dataset S = s 1 , . . . , s N . The samples will be denoted asŜ = s i1 , . . . , s i M where i 1 is an integer index into whole dataset S. UsingŜ, we obtain The gradients ∇ θ 0 (s ij , φ θ ) and ∇ θ g(s ij , φ θ ) can be computed easily using backpropogation. Similarly, for ∇ λ L N (θ, λ), The dual gradient can be estimated as simply the average of the constraint functions over the sampled dataset. In the simplest form of the primal-dual algorithm, the variables are updated with simple gradient ascent/descent steps. These updates can be replaced with more complicated update schemes, such as using momentum [35] or adaptive learning rates [36]. 
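To make the procedure concrete, the following is a minimal sketch of one stochastic primal-dual update in the spirit of Algorithm 1, written with PyTorch; the model `phi_theta`, the per-sample `loss_fn`, the list `constraint_fns` (each already multiplied by its indicator), and their signatures are assumptions for illustration rather than the authors' implementation.

```python
import torch

def primal_dual_step(phi_theta, optimizer, lam, batch, loss_fn, constraint_fns, alpha_lambda):
    """One stochastic primal-dual update: gradient descent on theta, projected ascent on lambda."""
    x, u, x_next = batch                       # mini-batch of transition tuples
    pred = phi_theta(x, u)
    obj = loss_fn(pred, x_next).mean()         # empirical objective over the mini-batch
    cons = torch.stack([g(pred, x, u, x_next).mean() for g in constraint_fns])  # empirical constraints
    lagrangian = obj + torch.dot(lam, cons)    # mini-batch estimate of L_N(theta, lambda)

    optimizer.zero_grad()
    lagrangian.backward()                      # gradient with respect to theta via backpropagation
    optimizer.step()                           # primal descent step

    with torch.no_grad():                      # dual gradient is just the mean constraint value
        lam += alpha_lambda * cons.detach()
        lam.clamp_(min=0.0)                    # projection [lambda]_+ onto the positive orthant
    return obj.item(), cons.detach(), lam
```

In this sketch, the dual variables grow whenever the sampled constraints are violated on average and shrink toward zero otherwise, which is what allows the method to select the effective regularization weights automatically.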
Higher-order optimization methods, such as Newton's method, can be used to replace the gradient ascent and descent steps. For large neural networks, this can be infeasible as it requires the computation of Hessians with respect to the neural network weights. The memory complexity of the Hessian is quadratic in the number of neural network weights. The primal-dual algorithm presented here is not guaranteed to converge to the global optimum. With a proper choice of learning rate, it can converge to a local optimum or saddle point. This issue is present in unconstrained formulations like (2) as well. An example of the evolution of the loss and constraint functions is shown in Figure 1.

VI. EXPERIMENTS

This section shows examples of the Sufficiently Accurate model learning problem. First, experiments are performed using a simple double integrator experiencing unknown dynamic friction. The simplicity of this problem, along with the small state space, allows us to explore and visualize some of the properties of the approximated solution. Next, two more interesting examples are shown. One example learns how a ball bounces on a paddle with unknown paddle orientation and coefficient of restitution. The other example mitigates ground effects, which can disturb the landing sequence of a quadrotor. The experiments compare the sufficiently accurate problem (4) with the unconstrained problem (2), which will be denoted as the Uniformly Accurate problem. Each experimental subsection is broken down into three parts: 1) System and Task introduction, 2) Experimental details, and 3) Results.

A. Double Integrator with Friction

1) Introduction: To analyze aspects of the Sufficiently Accurate model learning formulation, simple experiments are performed on a double integrator system with dynamic friction. When trying to control the system, a simple base model to use is that of a double integrator without any friction,

p_{t+1} = p_t + v_t ∆t,   v_{t+1} = v_t + u_t ∆t,   (33)

where p is the position of the system, v is the velocity, u is the control input, and ∆t is the sampling time. The state of the system is x = [p, v]. The true model of the system (34), which is unknown to the controller, augments (33) with a kinetic friction term, where b(p_t) is a position-varying kinetic friction and c is a function that ensures that the friction cannot reverse the direction of the speed (it is an artifact of the discrete-time formulation). If, within a single time step, the friction force would change the sign of the velocity, c sets v_{t+1} to 0. Otherwise, c does not modify the friction force in any way. The specific b(p) used is shown in Figure 3, and the sampling time is set to ∆t = 0.1. The task is to drive the system to the origin, [p, v] = [0, 0].

2) Experimental Details: The goal of model learning in this experiment is to learn the residual between the true model (34) and the base model f̃ given by (33). A Uniformly Accurate model is learned using (2), along with a Sufficiently Accurate model using the problem defined in Example 1. In the scenario defined by (6), I(s) is active in the region {(p, v) ∈ R^2 : ‖[p, v]‖_∞ ≤ 0.5} and c = 0.035. The constraint, therefore, enforces a high accuracy in the state space near the origin. The neural network φ_θ used to approximate the residual dynamics has two hidden layers. The first hidden layer has four neurons, while the second has two. Both hidden layers use a parametric rectified linear (PReLU) activation [37]. The input into the network is a concatenated vector [p_t, v_t, u_t]. The output layer's weights are initially set to zero, so that before learning the residual error the network outputs zero.
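The following is a minimal sketch of the two models just described. It assumes the friction enters the velocity update as −sign(v)·b(p)·∆t and that the clamp c prevents the friction from reversing the sign of the velocity within a step; the placeholder b(p) is an assumption, since the actual profile is only given graphically in Figure 3.

```python
import numpy as np

DT = 0.1  # sampling time used in the experiment

def base_model(p, v, u, dt=DT):
    """Frictionless double integrator, the base model f_tilde of (33)."""
    return p + v * dt, v + u * dt

def b(p):
    """Position-varying kinetic friction; placeholder constant, the real b(p) is shown in Figure 3."""
    return 0.05

def true_model(p, v, u, dt=DT):
    """True dynamics (34): base model plus kinetic friction opposing the motion.
    The friction form and the zero-crossing clamp are assumptions consistent with the text."""
    p_next = p + v * dt
    v_free = v + u * dt                        # velocity update without friction
    friction = min(b(p) * dt, abs(v_free))     # clamp: friction cannot reverse the motion direction
    v_next = v_free - np.sign(v_free) * friction
    return p_next, v_next

def residual_target(p, v, u):
    """Residual that phi_theta is trained to predict: true next state minus base prediction."""
    pt, vt = true_model(p, v, u)
    pb, vb = base_model(p, v, u)
    return np.array([pt - pb, vt - vb])
```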
The dataset used to train both the sufficiently and uniformly accurate models is generated by uniformly sampling 15,000 initial states and control inputs; (34) is then used to obtain the true next state. Instead of simple gradient descent/ascent, ADAM [36] is used as the update rule with α_θ = 1 × 10^−3 and α_λ = 1 × 10^−4. Both models were trained for 200 epochs. The models are then evaluated on how well they perform within an MPC controller defined in (36). This controller seeks to drive the system to the origin while obeying control constraints.

3) Results: The sufficiently accurate formulation utilizes the prior knowledge that the model should be more accurate near the goal in order to stop efficiently. While the system is far from the origin, the control is simple, regardless of the friction; the controller only needs to know what direction to push in. A plot of the accuracy of both models is shown in Figure 2 and summarized in Table II. It is noticeable that the Sufficiently Accurate model has low average error near the origin, but suffers from higher average error outside of the region defined by I(s). This is the expected behavior. The performance of the controllers is summarized in Table I. Even though the Sufficiently Accurate model has higher error outside of the constraint region and lower error within, it leads to lower costs when controlling the double integrator. The reason is shown in Figure 5, where the sufficiently accurate model may reach steady state a bit more slowly but is able to control the overshoot better and does not oscillate near the origin. This is because the model is purposefully more accurate near the origin, as this is more important for the task.

4) Convergence Experiments: The double integrator is a simple system. This enables running more comprehensive tests to experimentally show some aspects of Theorem 1. For this particular system, we run one more experiment in which 4 different neural network architectures are used. Each network has two hidden layers with PReLU activation, where the only difference is in the number of neurons in each layer. For each dataset size N, 15 random datasets are sampled, and each neural network is trained with each dataset using the Sufficiently Accurate objective described in Section VI-A2. There is one minor difference in how the data is collected: a zero-mean Gaussian noise with σ = 0.2 is added to v_{t+1}. With noisy observations of velocity, the optimal model that can be learned for (4) will have an objective value of P = 0.04. The results of training each neural network model with each random dataset are shown in Figure 4. Each boxplot in the figure shows the distribution of the final value of the Lagrangian, L_N, at the end of training. This is an approximation of D_N. The primal-dual algorithm may not be able to solve for the optimal D_N, but the expectation is that, for a simple problem like the double integrator, the solution is somewhat close. In fact, Figure 4 shows that, with increasing model sizes and larger N, the distribution of the solutions appears to be converging to P. Note that the figure shows the value of the Lagrangian on the training data. Thus, for small N, networks can overfit and have a near-zero Lagrangian value. When increasing N, the networks have less of a chance to overfit to the training data.

B. Ball Bouncing

1) Introduction: This experiment involves bouncing a ball on a paddle as in Figure 6. The ball has the state space x = [p_ball, v_ball], where p_ball is the three-dimensional position of the ball and v_ball is the three-dimensional velocity of the ball.
The control input is u = [v_paddle, n], where v_paddle is the velocity of the paddle at the moment of impact with the ball and n is the normal vector of the paddle, representing its orientation. This control input is a high-level action and is realized by lower-level controllers that attempt to match the velocity and orientation desired for the paddle. A basic model of how the velocity of the ball changes during the collision is given in (37), where the superscript − refers to quantities before the paddle-ball collision and the superscript + refers to quantities after the paddle-ball collision (the paddle velocity and orientation are assumed to be unchanged during and directly after the collision). α_r is the coefficient of restitution. In this experiment, a neural network is tasked to learn the model of how the ball velocity changes, i.e., (37).

2) Experimental Details: First, a neural network is trained without knowledge of any base model of how the ball bounces. This will be denoted as learning a full model, as opposed to a residual model. This network is trained two ways: with the Uniformly Accurate problem (2) as well as with the Sufficiently Accurate problem realized in Example 2. The constants used in Example 2 are defined here as c = 0.1 and δ_c = 0.1.

(Figure caption: The (x, y, z) trajectory of the ball is plotted for both the base model (with wrong parameters) and a learned model using the sufficiently accurate objective. This plot shows that the base model is not sufficient by itself to bounce the ball at a desired location.)

A second neural network is trained for both the uniformly and sufficiently accurate formulations that utilizes the base model f̃(x, u) given in (37) to learn a residual error. In the base model, the coefficient of restitution α_r is wrong and the control n has a constant bias: a rotation of 0.2 radians is applied about the y-axis. This is to simulate a robot arm picking up the paddle and not observing the rotation from the hand to the paddle correctly. The neural network used for all models has 2 hidden layers with 128 neurons each, using the PReLU activation. The input into the network is the state of the ball and the control input at the time of collision, [x^−, u], and it outputs the ball velocity after the collision, v^+_ball. The network was trained using the ADAM optimizer with an initial learning rate of 10^−3 for both the primal and dual variables. The data used for all model training was gathered by simulating random ball bounces in MuJoCo for the equivalent of 42 minutes in real life. All learned models are then evaluated by how well a controller utilizes them. The controller attempts to bounce the ball at a specific xy location. This is represented through an optimization problem that the controller solves, in which loc(·) is a function that maps the velocity of the ball to the xy location it will be in when it falls back to its current height. The roll and pitch are both derived from the paddle normal n. [loc_desired, roll_min, roll_max, pitch_min, pitch_max, v_min, v_max] are parameters of the controller that can be chosen. The system and controller are then simulated in MuJoCo [39] using libraries from the DeepMind Control Suite [40]. Each model is evaluated 500 different times for varying controller parameters. loc_desired is uniformly distributed in the region {(x, y) | −1 m ≤ x ≤ 1 m, −1 m ≤ y ≤ 1 m}, v_min is uniformly sampled from the interval [3 m/s, 4 m/s), and v_max is selected to be above v_min by between 1 m/s and 2 m/s.
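Before turning to the results, the following is a minimal sketch of how such a controller could use a learned bounce model to pick a paddle action. The random-shooting solver, the paddle-velocity and tilt parameterization, and the ballistic implementation of loc(·) (which here uses the ball position as well as its velocity) are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def landing_xy(p_ball, v_plus):
    """xy location where the ball returns to its current height after the bounce:
    a ballistic arc spends 2*v_z/g seconds before coming back to the same height."""
    t_flight = 2.0 * max(v_plus[2], 0.0) / G
    return p_ball[:2] + v_plus[:2] * t_flight

def choose_paddle_action(phi, x_minus, loc_desired, v_min, v_max, n_candidates=1000, rng=None):
    """Random-shooting approximation of the controller's optimization: sample candidate paddle
    velocities and orientations within bounds, predict the post-collision ball velocity with the
    learned model phi, and keep the candidate whose predicted landing point is closest to
    loc_desired."""
    rng = rng or np.random.default_rng()
    p_ball = x_minus[:3]
    best_u, best_err = None, np.inf
    for _ in range(n_candidates):
        speed = rng.uniform(v_min, v_max)
        v_paddle = np.array([0.0, 0.0, speed])       # paddle moving upward at impact
        tilt = rng.uniform(-0.2, 0.2, size=2)        # small roll/pitch within assumed bounds
        n = np.array([np.sin(tilt[0]), np.sin(tilt[1]), 1.0])
        n /= np.linalg.norm(n)
        u = np.concatenate([v_paddle, n])
        v_plus = phi(x_minus, u)                     # learned post-collision ball velocity
        err = np.linalg.norm(landing_xy(p_ball, v_plus) - loc_desired)
        if err < best_err:
            best_u, best_err = u, err
    return best_u
```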
3) Results: A plot of the model errors is shown in Figure 7. While the uniformly accurate model has errors that are distributed more or less uniformly across all magnitudes of ball velocity, the sufficiently accurate model has a clear linear relationship. This is expected from the normalized objective that is used, which penalizes errors relative to how large the velocity of the ball is. Therefore, larger velocities can have larger absolute errors yet incur the same penalty as smaller velocities with smaller errors. The results of running each model with the controller 500 times are shown in Table III. The error characteristics of the Sufficiently Accurate model (Figure 7) allow it to outperform its Uniformly Accurate counterpart with both a full model and a residual model. For the full model, the uniformly accurate problem yields a failure rate of over 20%, while the sufficiently accurate problem yields a failure rate of under 1%. Here, failure means the paddle fails to keep the ball bouncing. For the residual model, neither model failed, because the base model provides a decent guess (though the base model by itself is not good enough for control; see Figure 8). The sufficiently accurate model still provided better mean errors. We hypothesize that the large errors spread randomly across the Uniformly Accurate model lead to high-variance estimates of the output given small changes in the input. For optimizers that use gradient information, this leads to a poor estimate of the gradient. For optimizers that are gradient-free, this still causes problems due to the high variance of the values themselves. C. Quadrotor with ground effects 1) Introduction: The last experiment deals with landing a quadrotor while it undergoes disturbances from ground effect. This disturbance occurs when a quadrotor operates near surfaces, which can change the airflow [23]. The quadrotor model has 12 degrees of freedom, with state x = [p, v, q, ω], where p ∈ R3 is the position of the center of mass, v ∈ R3 is the center-of-mass velocity, q is a unit quaternion that represents the orientation of the quadrotor, and ω ∈ R3 is the angular velocity expressed in the body frame. The control input is u = [u(1), u(2), u(3), u(4)], where u(i) is the force from the i-th motor. The base model of the quadrotor, f̄(x, u), is the rigid-body model in which m is the total mass of the quadrotor (set to 1 kg for all experiments) and I is the inertia matrix about the principal axes (set to the identity for all experiments). The × symbol represents the cross product, and ⊗ represents quaternion multiplication. When using ⊗ between a vector and a quaternion, the vector components are treated as the imaginary components of a quaternion with a real component of 0. The discrete model normalizes the quaternion for each state update so that it remains a unit quaternion. The body frame of the quadrotor is such that the x axis aligns with one of the quadrotor arms, and the z axis points "up."
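The base model described above can be sketched as a discrete rigid-body update. The state layout, the scalar-first quaternion convention, and the motor-to-torque mapping (arm length, yaw coefficient, "+" configuration) below are assumptions made for illustration; the paper does not specify them.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def rotate(q, v):
    """Rotate vector v from the body frame to the world frame by unit quaternion q."""
    qv = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(q, qv), q * np.array([1, -1, -1, -1]))[1:]

def base_model(x, u, dt=0.02, m=1.0, I=np.eye(3), arm=0.2, k_yaw=0.01):
    """Discrete rigid-body step for a quadrotor base model of the kind f̄(x, u) describes.

    State layout [p, v, q, ω] with a scalar-first quaternion; arm and k_yaw are
    illustrative assumptions for the motor-to-torque mapping.
    """
    p, v, q, w = x[0:3], x[3:6], x[6:10], x[10:13]
    thrust_body = np.array([0.0, 0.0, np.sum(u)])            # all motors push along body z
    acc = rotate(q, thrust_body) / m + np.array([0.0, 0.0, -9.81])
    tau = np.array([arm * (u[1] - u[3]),                      # roll torque
                    arm * (u[2] - u[0]),                      # pitch torque
                    k_yaw * (u[0] - u[1] + u[2] - u[3])])     # yaw torque
    w_dot = np.linalg.solve(I, tau - np.cross(w, I @ w))      # Euler's rotation equation
    q_next = q + 0.5 * dt * quat_mul(q, np.concatenate([[0.0], w]))
    q_next /= np.linalg.norm(q_next)                          # keep it a unit quaternion
    return np.concatenate([p + dt * v, v + dt * acc, q_next, w + dt * w_dot])
```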
The true model used in simulation adds disturbances to the force on each propeller but is otherwise the same as the base model, where h : Rn × Rp → Rp is the ground effect model. In this experiment we provide a simplified model of ground effects in which each motor has an independent disturbance. The i-th output of the ground effect model, h i (x, u), is determined by the following quantities: h prop is the height of the propeller above the ground (not the height of the center of mass); h max is a constant that determines the height at which the ground effect is no longer in effect; θ ground is the angle between the unit vector aligned with the negative z axis of the quadrotor and the unit vector [0, 0, −1]; and α is a number in the interval [0, 1] that represents the maximum fraction of the propeller's generated force that can be added as a result of ground effect. As a reminder, the [·]+ operator projects its arguments onto the positive orthant. A visualization of K ground is shown in Figure 10. In the experiments, h max = 1.5 and α = 0.5. 2) Experimental Details: The Sufficiently Accurate model is trained using the problem presented in Example 1, where c = 0.001 and the indicator I(s) is active when the height of the quadrotor is less than 1.5. A Uniformly Accurate and a Sufficiently Accurate model are trained to learn the residual error between the true model f (x, u) and the base model f̄(x, u). Both models use a neural network with 2 hidden layers of 16 and 8 neurons each with PReLU activation. The update for the primal and dual variables used ADAM with αθ = 1 × 10−3 and αλ = 1 × 10−4, and both models were trained for 3,000 epochs. The training data consist of 10,000 randomly sampled quadrotor states; the (x, y) positions of the quadrotor were uniformly sampled. Both models are tested by sampling a random starting location and asking the quadrotor to land at the origin. The controller used for landing is an MPC controller that repeatedly solves a problem in which p is the position of the quadrotor, h t,com is the height of the center of mass, and q w is the real component of the quaternion at the last time step. This problem encourages reaching a target, subject to control and dynamics constraints. It also has a constraint on the height of the quadrotor so that it is always above a certain small altitude, and an orientation constraint on the last time step so that it is mostly upright when it lands. 3) Results: The results of running this controller over several different starting locations are shown in Table IV. Similar to previous experiments, the Sufficiently Accurate model has a higher loss overall but better accuracy in the constrained area, which is more important to the task. This allows the controller to utilize the higher accuracy to land the quadrotor precisely. An example of one of the landing trajectories is shown in Figure 9. It can be seen that the Sufficiently Accurate model can more precisely land at the origin (0, 0). It is also able to reach the ground faster, as it can more accurately compensate for the extra force caused by the ground surface. The ground effect can also disturb roll and pitch maneuvers, which can offset the center of mass as well. VII. CONCLUSION This paper presents Sufficiently Accurate model learning as a way to incorporate prior information about a system in the form of constraints. In addition, it proves that this constrained learning formulation can have an arbitrarily small duality gap. This means that existing methods such as primal-dual algorithms can find decent solutions to the problem. With good constraints, the model and learning method can focus on important aspects of the problem and improve the performance of the system even if the overall accuracy of the model is lower. These constraints can come from robust control formulations or from knowledge of sensor noise. An important question to consider when using this method is how to choose good constraints. For some systems and tasks, this can be simple, while for others it can be quite difficult.
This objective is not useful for all tasks and systems but rather for a subset of tasks and systems where prior knowledge is more easily expressed as a constraint. Using the reverse triangle inequality, we get By, the equivalence of norms, there exists L such that Since this is true for any s, it is also true in expectation. The loss function l(s, φ) = φ(xt,ut)−xt+1 2 xt+1 2 I A (s) is also expectation-wise Lipschitz-continuous in φ. Following the same logic as for the euclidean norm, we get This reduces to when considering the case where s ∈ A. The largest that can be is 1 over the smallest value of x t+1 2 . If s ∈ A, both sides reduce to 0, as the indicator variable is 0. This leads to APPENDIX B EQUIVALENT PROBLEM FORMULATION Using the notation in this paper, the problem defined in [30] is where f 0 , and f 2 are concave functions. f 1 is not necessarily convex with respect to φ. X is a convex set and Φ is compact. Note that f 0 , f 1 , f 2 , x, and X are not directly present in the Sufficiently Accurate problem. To translate the problem, let us assume that x is K + 1 dimensional, where K is the number of constraints in (4). Let the last element of f 1 (s, φ) be equal to −l(s, φ)I 0 (s), the negative of the objective function in the Sufficiently Accurate problem. Let the first k th element of f 1 (s, φ) be equal to −g k (s, φ)I k (s), the negative of a constraint function in (4). Set the objective function to be f 0 (x) = x K+1 , where x K+1 is the (K + 1) th element of x. f 2 can be ignored by setting it to be the zero function. Under these assumptions, (49) is equivalent to the following s.t. x k ≤ −E[g k (s, φ)I k (s)], k = 1, . . . , K x K+1 ≤ −E[l(s, φ)I 0 (s)] x ∈ X , φ ∈ Φ. (50) Now, define the set X = {(x 1 , x 2 , . . . , x K+1 ) : x k = 0, k = 1, . . . , K, x K+1 ∈ X K+1 }, where X K+1 is an arbitrary compact set in one dimension. This set of vectors, X , is a set that is 0 in the first K components, and is compact in the last component. This, will further simplify (50) to the following This completes the translation of problem (49) to (4). APPENDIX C PROOF OF THEOREM 3 This proof follows some of the steps of the proof for [31,Theorem 1]. Let (φ , λ ) be the primal and dual variables that attains the solution value of P = D in problems (4) and (23). Similarly, let (θ , λ θ ) be the primal and dual variables that attain the solution value D θ in problem (19). φ θ is the function that θ induces. Note that the optimal dual variables for (23), λ , are not necessarily the same as the optimal dual variables for (19), λ θ . A. Lower Bound We first show the lower bound for D θ . Writing out the dual problem (23), we obtain Since λ is the optimal dual variable that achieves the maximal value for the maximization and minimization for the Lagrangian, it is true that Thus for any φ, We now look at the parameterized dual problem (19). This simply redefines L θ in terms of L as the only difference is that L θ is only defined for a subset of the primal variables that L is defined for. By definition, λ θ maximizes the minimization of L θ over θ. That is to say for the dual solution (θ , λ θ ), λ θ minimizes L(φ θ , ·) λ θ = arg max Thus, for all λ, it is the case that Putting together (55) and (58), we obtain B. Upper Bound Next, we show the upper bound for D θ . We begin by writing the Lagrangian (18) as previously written in (56). 
By adding and subtracting L(φ, λ), we obtain D θ = max where the last line comes from the fact that the absolute value of an expression is always at least as large as the original expression, i.e. x ≤ |x|. Looking just at the quantity L(φ θ , λ) − L(φ, λ) , we can expand it as L(φ θ , λ) − L(φ, λ) = |E[ 0 (s, φ θ ) + λ g(s, φ θ )]− E[ 0 (s, φ) + λ g(s, φ)]| (62) Using the triangle inequality, this is upper bounded as Using Hölder's inequality, we can create a further upper bound where the infinity norm of the scalar value E[ 0 (s, φ θ ) − 0 (s, φ)] is the same as its absolute value. Using the fact that the infinity norm is convex and Jensen's inequality, we can move the norm inside of the expectation. By expectation-wise Lipschitz-continuity of both the loss and constraint functions, Combining (61) with (66), we obtain [L(φ, λ) + ( λ 1 + 1)L min θ∈Θ E φ θ − φ ∞ ] (67) Since, Φ θ is an -universal approximation for Φ, we can write min θ∈Θ E φ θ − φ ∞ ≤ . This further reduces (67) to Note that (68) is true for all φ. In particular it must be also true for the λ that minimizes the inner value, i.e. (69) The second half of (69) is actually the solution to the dual problem (14). The primal problem is reproduced here for reference, P L = min That is to say, D θ ≤ L + D L . The primal problem (13) is a perturbed version of (4), where all the constraints are tighter by L . There exists a relationship between the solution of (13) and (4) from [25,Eq. 5.57]. Treating (4) as the perturbed version of (13) (that tightens the constraints by −L ), the relationship between the two solutions is Since both (4) and (13) have zero duality gap by Theorem 2, this is the same as
2021-02-12T02:15:47.419Z
2021-02-11T00:00:00.000
{ "year": 2021, "sha1": "dd7541c790943cfcf88373dfc46b2a20413f36c5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "dd7541c790943cfcf88373dfc46b2a20413f36c5", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
96098488
pes2o/s2orc
v3-fos-license
EXTRACTION/FRACTIONATION AND DEACIDIFICATION OF WHEAT GERM OIL USING SUPERCRITICAL CARBON DIOXIDE - Wheat germ oil was obtained by mechanical pressing using a small-scale screw press and by supercritical extraction in a pilot plant. With this last method, different pressures and temperatures were tested and the tocopherol concentration in the extract was monitored during extraction. Then supercritical extracted oil as well as commercial pressed oil were deacidified in a countercurrent column using supercritical carbon dioxide as solvent under different operating conditions. Samples of extract, refined oil and feed oil were analyzed for free fatty acids (FFA) and tocopherol contents. The results show that oil with a higher tocopherol content can be obtained by supercritical extraction-fractionation and that FFA can be effectively removed by countercurrent rectification while the tocopherol content is only slightly reduced. INTRODUCTION Supercritical CO 2 extraction of vegetable oils from plant materials is an alternative process to solvent extraction. This technique has several well-known advantages: CO 2 is a nontoxic, nonflammable, nonexplosive and low-cost gas. It is also easily removed from the solute by reducing the pressure and has a relatively low critical pressure and temperature (King and Bott, 1993). This last property allows extraction of heat-sensitive material, such as flavor and aroma compounds. Another very important advantage of CO 2 is that its solvent power or selectivity can be modified by adjusting the temperature and pressure. This interesting feature has been used to extract volatile oils from solid matrices with minimum coextraction of triglycerides (Reverchon, 1997) and to concentrate important nutritional compounds, such as tocopherols and carotenoids (King et al. 1996;Ambrogi et al. 2002). Tocopherols are a group of monophenolic antioxidants found in many plant materials. Antioxidants eliminate free radicals, providing in this way primary defense to our body. They accomplish the same task in vegetable oils, preventing the formation of hydroperoxides. Wheat germ oil has the highest tocopherol content of all vegetable oils, up to about 2500 mg/kg (Shuler, 1990), and also the highest content of α-tocopherol, which represents around 60% of the total content. Also, wheat germ oil is highly valued due to its high content of unsaturated fatty acids: it has about 80%, mostly linoleic (18:2) and linolenic (18:3) (Wang and Johnson, 2001). These two fatty acids are of great importance in human metabolism and cannot be synthesized by the organism. They are precursors of a group of hormones called prostaglandins, which play an important role in muscle contractions and in the proper healing of inflammatory processes (Coultate, 1989). Furthermore, linoleic acid helps to eliminate cholesterol and is a precursor of cell membrane phospholipids (Salinas, 1993). To increase stability and improve appearance, most oils are refined after extraction. One of the key objectives of this process is the removal of free fatty acids (FFA). FFA content in crude wheat germ is usually high (5-25 %) (Wang and Johnson, 2001). High FFA content affects stability and is responsible for bitter and soapy flavors (Mistry and Min, 1987). Also, during refining, together with the FFA and other unwanted substances, many important and valuable minor compounds, such as tocopherols, are either removed or destroyed. 
In this work, several experiments regarding the processing of wheat germ oil with supercritical CO 2 are presented. Solid extraction kinetics was evaluated under different extraction conditions and tocopherol concentration in oil was monitored during extraction with the aim of obtaining tocopherolenriched oil. Also, rectification experiments were performed with the objective of removing FFA while retaining the tocopherols. MATERIALS AND METHODS Supercritical CO 2 extraction experiments were carried out in two pilot plants. Pilot plant P1 is located at the Universidad Nacional de Río Cuarto (UNRC). It has a 2.3 liter extractor, single stage separation and solvent recycle capabilities. It can be operated at pressures up to 50 MPa and at flow rates up to 20 kg CO 2 /h. A detailed description of this plant can be found in Ambrogi et al. (2002). The other plant, P2, located at the Technische Universität Hamburg-Harburg (TUHH), can be used for both extraction and rectification experiments. For extraction (1.3 l extractor), the plant can be operated up to 100 MPa and up to 20 kg/h with a piston pump. The rectification column has a height of 7.5 m and an inner diameter of 40 mm. The column can operate at up to 50 MPa, 100°C and at flow rates up to 20 kg CO 2 /h at 50 MPa (one pump) or up to 45 kg CO 2 /h at 30 MPa (2 pumps in parallel) (Jaeger, 2001). The extraction experiments were carried out at 40 and 60 °C at 20 MPa and 40 MPa. About 450 g and 220 g of solids were used at P1 and P2, respectively. The flow rates used were 8 kg CO 2 /h for P1 and 6 kg CO 2 /h for P2 and the separation conditions were 6 MPa and 50 °C. For some extractions, five different samples of oil were taken at different times. These samples were later analyzed for tocopherol content. The wheat germs used at the UNRC and at the TUHH had been processed using the same milling procedures. This product was in the form of small flakes and had a 10.9% moisture content and a 10.3% oil content (dry basis) for the material used at the UNRC and a 11.9% moisture content and a 10.6% oil content for the one used at the TUHH. Oil content was determined using "Soxhlet" extraction equipment with petrol ether as extracting solvent. To decrease the moisture content of the samples that were to be SCO 2 extracted, the wheat germs were placed in a thermostatic oven at 85°C until levels between 1.9 and 2.3 % were attained. With the objective of comparing the characteristics of oils obtained by different methods, wheat germs were pressed using a laboratory screw press (Komet S 87G, IBG Monforts, Germany). The oil obtained by pressing, commercial pressed wheat germ oil, solvent extracted and supercritical CO 2 extracted oil were analyzed for free fatty acids, phosphorous and tocopherol contents. For the deacidification experiments two different methods were tested. With the first method, called countercurrent supercritical spray extraction, the packing was removed and the oil sprayed in at the top of the column by forcing it through a capillar tube (0.25 mm id). With the second method, the column was filled with Sulzer BX packing and the oil was also fed in at the top. These experiments were performed with both supercritical extracted oil (40 MPa -40 °C) and commercial pressed wheat germ oil. For each experiment between 1.7 and 2 kg of oil were used. Samples of the extract, raffinate and feed material were also analyzed for free fatty acids and tocopherol contents. The separation conditions were 6 MPa and 50 °C. 
Tocopherol content analyses were carried out by HPLC. The apparatus is equipped with a Merck Superspher 4 Si 60, 125 × 4 mm column and a Shimadzu RF 530 fluorescent detector. The mobile phase used was isooctane:ethyl acetate (90%:6%), the flow rate was 0.8 ml/min and the detector was set at 294 nm/ 330 nm (excitation/emission wavelengths). RESULTS AND DISCUSSION The difference in extraction rates at different temperatures and pressures is shown in Figure 1. As expected, at higher pressures, faster extraction rates were achieved. At 40 MPa, the initial slope of the curves is similar, indicating that the so-called crossover point (equal solubilities at different temperatures) should be situated nearby (Eggers et al. 1985). In this figure, an extraction curve obtained using pilot plant 2 (P2) is also shown. As can be seen, the results agree with those obtained in P1, indicating that experiments can be reproduced using different plants and feed materials. Figures 2 and 3 show the experimental results of the tocopherol fractionation study for the solid extraction experiments. From both graphs it can be seen that there is a variation in concentration of both α and βtocopherol in the extracted oil during extraction. At the lower pressure higher concentrations of both components were achieved, indicating that this condition is more favorable for tocopherol fractionation. In Table 1 an analysis of the samples obtained by solvent extraction, supercritical extraction and pressing as well as an analysis of a sample of commercial wheat germ oil are presented. The parameters evaluated were free fatty acids, phosphorous and tocopherol contents. As expected, the phosphorous content in supercritical extracted oil was below the detection limit because phosphatides are practically insoluble in carbon dioxide under the experimental conditions evaluated (Eggers and Sievers, 1989). In the case of FFA content, all the results obtained are in the same range, with the pressed oil showing a higher value. These values are high but similar to values found in the literature (Wang and Johnson, 2001). Tocopherol contents showed the highest values for solvent-extracted oil and the lowest for pressed oil. Similar results are also available in the literature (Shuler, 1990;Wang and Johnson, 2001;Formo et al. 1979). The experimental parameters used for the deacidification experiments are shown in Table 2. Figures 4 and 5 show the extract/feed and raffinate/feed ratio for each experiment and the amount of FFA in each stream, respectively. In these graphs it can be observed that, although the higher pressure condition yielded a lower FFA content in the refined oil, the extract/feed ratio is too high. This ratio should be as close as possible to the amount of FFA in the feed oil to minimize refining losses. Bondioli et al. (1992) and Dunford and King (2000) reported similar results for the deacidification of other vegetable oils. Relevant work in the field has been carried out by Ziegler and Liaw (1993) and Simoes et al. (1994) among others. For the operating conditions and the column length tested, the results for the two methods show no significant difference, with the spray results slightly better than those obtained with packing. In Figure 6 the level of α-tocopherol of each stream for the three spray experiments can be observed. Here, lower pressure seems to be more favorable due to the lower levels of these important antioxidants removed from the feed oil. 
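The trade-off noted above between the extract-to-feed ratio and refining losses can be illustrated with a simple solvent-free mass balance. The sketch below uses made-up numbers rather than measurements from this work and assumes that the feed splits entirely into extract and raffinate with no other losses.

```python
def refining_loss(feed_kg, extract_kg, ffa_feed_frac, ffa_raffinate_frac):
    """Neutral-oil loss during countercurrent deacidification (illustrative mass balance).

    Assumes feed = raffinate + extract on a solvent-free basis.
    """
    raffinate_kg = feed_kg - extract_kg
    ffa_removed_kg = ffa_feed_frac * feed_kg - ffa_raffinate_frac * raffinate_kg
    neutral_oil_lost_kg = extract_kg - ffa_removed_kg   # extract that is not FFA
    return neutral_oil_lost_kg / feed_kg                 # refining loss as a fraction of feed

# Hypothetical run: 2 kg feed oil with 8% FFA, 0.4 kg extract, raffinate at 2% FFA.
print(f"refining loss: {refining_loss(2.0, 0.4, 0.08, 0.02):.1%}")
```

With these illustrative figures the extract/feed ratio (20%) is far above the FFA content of the feed, and most of the extract is neutral oil, which is why the text recommends keeping the ratio close to the FFA content.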
CONCLUSIONS The extraction kinetics for wheat germ oil was studied at different temperatures and pressures. Experiments could be reproduced in two different pilot plants and with raw material from different sources. Also, tocopherol-enriched fractions could be obtained by extraction-fractionation. Finally, wheat germ oil was deacidified using countercurrent supercritical rectification techniques. The results showed that the deacidification effect improved with increasing operating pressure; however, as pressure increased, the amount of coextracted oil also increased, causing the refining-loss levels to rise.
2019-04-05T03:38:16.303Z
2006-03-01T00:00:00.000
{ "year": 2006, "sha1": "27e311238d7f2ed9fac88525b897bca57787bd17", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/bjce/v23n1/29902.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "091162d035c539fff4c283f2e7e8d0c5dd0511f7", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
247146960
pes2o/s2orc
v3-fos-license
Influence of Fluoride-Resistant Streptococcus mutans Within Antagonistic Dual-Species Biofilms Under Fluoride In Vitro The widespread application of fluoride, an extremely effective caries prevention agent, induces the generation of fluoride-resistant strains of opportunistic cariogenic bacteria such as fluoride-resistant Streptococcus mutans (S. mutans). However, the influence of this fluoride-resistant strain on oral microecological homeostasis under fluoride remains unknown. In this study, an antagonistic dual-species biofilm model composed of S. mutans and Streptococcus sanguinis (S. sanguinis) was used to investigate the influence of fluoride-resistant S. mutans on dual-species biofilm formation and pre-formed biofilms under fluoride to further elucidate whether fluoride-resistant strains would influence the anti-caries effect of fluoride from the point of biofilm control. The ratio of bacteria within dual-species biofilms was investigated using quantitative real-time PCR and fluorescence in situ hybridization. Cristal violet staining, scanning electron microscopy imaging, and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide assay were used to evaluate biofilm biomass, biofilm structure, and metabolic activity, respectively. Biofilm acidogenicity was determined using lactic acid and pH measurements. The anthrone method and exopolysaccharide (EPS) staining were used to study the EPS production of biofilms. We found that, in biofilm formation, fluoride-resistant S. mutans occupied an overwhelming advantage in dual-species biofilms under fluoride, thus showing more biofilm biomass, more robust biofilm structure, and stronger metabolic activity (except for 0.275 g/L sodium fluoride [NaF]), EPS production, and acidogenicity within dual-species biofilms. However, in pre-formed biofilms, the advantage of fluoride-resistant S. mutans could not be fully highlighted for biofilm formation. Therefore, fluoride-resistant S. mutans could influence the anti-caries effect of fluoride on antagonistic dual-species biofilm formation while being heavily discounted in pre-formed biofilms from the perspective of biofilm control. INTRODUCTION Caries is a disease of chronic and progressive destruction of the hard tissue of teeth caused by multiple factors, including bacteria (Mathur and Dhillon, 2018). In the United States, for example, caries prevalence indicated that about 45.8% of children aged 2-19 years old had experienced dental caries in primary and permanent dentitions, while 57% of adults had experienced dental caries (Fleming and Afful, 2018;Brandfass et al., 2019). With the growing sugar intake becoming a global issue, the incidence of dental caries has increased rapidly and has a profound impact on the general health of individuals (Van Loveren, 2019). Over 1,000 different microbial species, also known as oral biofilms, have been identified within the dental plaque (Dewhirst et al., 2010). The caries ecological hypothesis proposed by Phil D. Marsh suggested that there was a balance in the microecology of oral plaque (Marsh et al., 2015). Dental caries result from the disruption of homeostasis in this microecology. Excessive sugar intake and reduced salivary production contribute to the decrease in pH in oral biofilms. Consecutively, acid-tolerant and acid-producing bacteria would survive and strengthen acid production, thus causing the occurrence of demineralization and the development of caries (Marsh et al., 2015). 
Fluoride, such as sodium fluoride (NaF), acidulated fluorophosphates (APF), and stannous fluoride (SnF 2 ), is commonly used to prevent the development of caries. They are used in clinical anti-caries applications, including toothpaste, mouthwash, gel, varnishes, and tooth-filling materials that release ionic fluoride (Anusavice et al., 2005;Pitts et al., 2017). Fluoride enhances the acid resistance of teeth by inhibiting demineralization and enhancing remineralization and also inhibits the growth and metabolism of bacteria by suppressing the activities of enzymes such as enolase and ATPase (Ten Cate, 2004;Oh et al., 2017). When the pH of the extracellular environment decreases, protons (H + ) and fluoride (F + ) from fluoride materials diffuse into bacterial cells and exist as hydrogen fluoride (HF) in the cytoplasm (Marquis et al., 2003). This influx of HF both directly and indirectly affects the growth and cariogenicity of bacteria . The protective effect of fluoride on the enamel was observed at 0.02 mg/L fluoride, and fluoride significantly reduced the number of Streptococcus mutans (S. mutans) at a concentration starting from 0.25 g/L. S. mutans is one of the main opportunistic cariogenic bacteria, and its virulence factors are acid production, acid tolerance, and adhesion (Liu et al., 2015). However, the widespread application of fluoride induces the generation of fluoride-resistant strains, including opportunistic cariogenic bacteria such as fluoride-resistant S. mutans. As early as 1980, transient fluoride-resistant S. mutans strains were isolated from the plaque of radiation-induced xerostomia patients who were treated daily with preventive NaF gel (Streckfuss et al., 1980). Laboratory-induced and characteristically stable (at least 50 generations) fluoride-resistant S. mutans were used to study their phenotypes and fluoride-resistant mechanisms . This type of fluoride-resistant S. mutans is generally able to withstand fluoride concentrations three times higher than its wild strains . Some studies have reported the phenotypic characteristics of fluoride-resistant strains, such as the stability of fluoride resistance, fitness, growth, acidogenicity, and cariogenicity. These results showed that there are many significant differences in these phenotypic characteristics between the fluoride-resistant strains and their wild strains, such as higher fluoride resistance, higher acid tolerance, lower growth, and some controversial cariogenitic characteristics (Van Loveren et al., 1991;Zhu et al., 2012;Liao et al., 2015;Cai et al., 2017;Liao et al., 2017;Liao et al., 2018;Lee et al., 2021). These genetic stability differences could be caused by genetic mutations, as revealed by gene sequencing (Liao et al., 2015;Liao et al., 2018, Lee et al., 2021. Although the complete mechanism of S. mutans fluoride resistance needs to be further studied, some genes or genetic loci have been found to be responsible for the fluoride resistance of S. mutans (Liao et al., 2016;Men et al., 2016;Murata and Hanada, 2016;Tang et al., 2019;Lu et al., 2020;Yu et al., 2020). However, the ecological effects of fluoride-resistant S. mutans remain unknown. Accumulating evidence indicates that there is a competitive and antagonistic relationship between S. mutans and S. sanguinis (Marsh and Zaura, 2017). A significant association has been reported between the S. mutans and S. sanguinis ratio and severe early childhood caries in dental plaque (Mitrakul et al., 2016). The presence of S. 
sanguinis had a negative relationship with the occurrence of dental caries. Kreth et al. reported that S. sanguinis could produce hydrogen peroxide to inhibit S. mutans (Kreth et al., 2008). S. mutans can suppress the adhesion of S. sanguinis through mutacin production (Valdebenito et al., 2018). The balance between these two strains represents the equilibrium of dental plaque to some extent (Sun et al., 2019;Du et al., 2021). However, there have been no studies on the influence of fluorideresistant S. mutans on oral microecological homeostasis under fluoride. The present study used an antagonistic dual-species biofilm model composed of S. mutans and S. sanguinis to investigate the influence of fluoride-resistant S. mutans on microbial flora under fluoride. We hypothesized that, under the screening effect of fluoride, fluoride-resistant S. mutans might gain a survival advantage within antagonistic dualspecies biofilms, which destroys the ecological balance of oral biofilms and leads to the occurrence and development of dental caries. Eventually, it would influence the anti-caries effect of fluoride. This in vitro study was designed to verify this hypothesis. Bacterial Strains and Growth Conditions S. mutans UA159 and S. sanguinis ATCC 10556 were obtained from the School and Hospital of Stomatology, Wenzhou Medical University. Fluoride-resistant S. mutans was induced in vitro as previously described, with modifications (Zhu et al., 2012). Briefly, an overnight bacterial suspension was inoculated on a brain heart infusion (BHI, Oxoid, Basingstoke, UK) agar plate containing 0.5 g/L NaF for 48 h growth, where a single colony of S. mutans was picked and passaged on BHI agar without NaF for 50 generations. The fluoride-resistant characteristics of S. mutans were confirmed on BHI solid medium with 0.5 g/L NaF. BHI medium was used for bacterial amplification, and BHI with 1% sucrose (BHIS) was used for biofilm formation. The growth conditions were 37°C and 5% CO 2 . Biofilm Culture In this study, we characterized fluoride-resistant S. mutans in biofilm formation and in pre-formed biofilms under fluoride. For biofilm formation, overnight bacterial suspensions of one or two species were diluted 50-fold into BHIS containing 0, 0.275, and 1.25 g/L NaF (0.275 and 1.25 g/L NaF were the fluoride content in regular and prescription toothpaste, respectively, after 3-fold dilution) and incubated for 24 h (Nassar and Gregory, 2017). For the pre-formed biofilm assay, after 24 h of biofilm formation without fluoride, the culture medium was replaced with fresh BHIS with different concentrations of NaF and incubated for another 24 h. The groups in this experiment were divided into singlespecies biofilms of S. mutans wild-type strain (S.m WT), single-species biofilms of fluoride-resistant S. mutans (S.m FR), single-species biofilms of S. sanguinis (S.s), dual-species biofilms of the wild type of S. mutans strain and S. sanguinis (S.m WT + S.s), and dual-species biofilms of fluoride-resistant S. mutans and S. sanguinis (S.m FR + S.s). Each group was treated with different fluoride concentrations, which included control, low (0.275 g/L), and high concentrations (1.25 g/L). Crystal Violet Staining The biomass of the biofilm was determined using crystal violet (CV) staining (Zhu et al., 2021). Biofilms in 96-well plates were washed with phosphate-buffered saline (PBS) and fixed with methanol for 15 min. Air-dried biofilms were stained with 100 µl of 0.1% crystal violet solution for 30 min and washed with PBS. 
Images of the stained biofilms were captured using a stereo microscope (Nikon SMZ800, Nikon Corporation, Japan). Next, they were dissolved in 200 µl of 33% acetic acid with shaking for 15 min, and the absorbance was measured at 590 nm using a microplate reader (SpectraMax M5, Molecular Devices, USA). Metabolic Activity For metabolic activity assessment, biofilms growing on round glass wafers were washed with PBS to remove planktonic bacteria and stained with 1 ml 0.5% 3-(4,5-dimethylthiazol-2-yl)-2,5diphenyl-2H-tetrazolium bromide (MTT) solution (dissolved in PBS) for 1 h. Subsequently, the wafers were transferred to a new plate with dimethyl sulfoxide (1 ml per well). Thereafter, the plate was shaken for 30 min to completely dissolve the crystals. A 200-µl aliquot of the solution was measured at 540 nm using a microplate reader (SpectraMax M5, Molecular Devices, USA). Lactic Acid and pH Measurement Lactic acid and pH measurements were conducted to monitor acid production (Sun et al., 2019). Biofilms in wafers were first washed with cysteine peptone water and then cultured in buffered peptone water (BPW) containing 0.2% sucrose (1 ml/ well) for 3 h to allow acid production. Lactic dehydrogenase was used to quantify lactate concentrations in the BPW solution. The absorbance was read at 340 nm, and standard curves were generated using a lactic acid standard. For pH measurement, the supernatant of the biofilms was measured using a pH meter (Mettler Toledo Instruments Co., Ltd., Shanghai, China). Scanning Electron Microscopy Imaging SEM imaging was performed to observe the morphology and structure of the biofilms . Biofilms were fixed with 2.5% glutaraldehyde and dehydrated using an ethanol gradient (50, 60, 70, 80, 90, and 95% and absolute ethyl alcohol) for 30 min at each concentration. Dry biofilms were sputter-coated with gold-palladium for observation using SEM at ×2,000 magnification (Hitachi, Tokyo, Japan). Water-Insoluble Exopolysaccharide Measurement The water-insoluble EPS of biofilms was measured using the anthrone method (Sun et al., 2021). Briefly, the biofilms were collected, washed twice with sterile water, and resuspended in 0.4 M NaOH. After centrifugation, 200 µl of the suspension was mixed with 600 µl of anthrone reagent and incubated at 95°C for 6 min. The absorbance was monitored at 625 nm using a microplate reader (SpectraMax M5, Molecular Devices, USA). Standard curves were prepared using dextran standard. Confocal Laser Scanning Microscopy Assay To observe the EPS production in biofilms, fluorescence staining was conducted . Alexa Fluor-647 dextran conjugate (Molecular Probes, Invitrogen Corp., Carlsbad, CA, USA) was added to the culture medium at the beginning of biofilm formation to label the EPSs. At the end of biofilm formation, the biofilms were stained with SYTO 9 (Molecular Probes, Invitrogen Corp., Carlsbad, CA, USA) for total bacteria measurement. Random fields were selected, and images were captured using a ×60 oil immersion lens with a confocal laser scanning microscope (Nikon Corporation, Tokyo, Japan). Quantitative Real-time PCR Assay To determine the ratio of S. mutans and S. sanguinis in dualspecies biofilms, TB Green Premix Ex Taq ™ II kit (Takara Bio Inc., Otsu, Japan) was used for qRT-PCR analysis. The total DNA of dual-species biofilms was extracted using Rapid Bacterial Genomic DNA Isolation Kit (Sangon Biotech, Shanghai, China). The primers used in this study were the same as those previously described (Huang et al., 2015) (Supplementary Table 1). 
A total of 20 µl of reaction mixture contained 10.0 µl 2× TB Green Premix Ex Taq II, 0.8 µl forward primer, 0.8 µl reverse primer, 2.0 µl cDNA, and 6.4 µl sterilized distilled water. We used a LightCycler 96 instrument (Roche Diagnostics, Basel, Switzerland) and programmed the system for 30 s of pre-denaturation at 95°C, followed by 40 cycles of 5-s denaturation at 95°C, 30 s annealing at 55°C, and 30-s extension at 72°C. The standard curves of S. mutans and S. sanguinis were generated based on the known quantities of bacteria by CFU count. Fluorescence In Situ Hybridization FISH was used to observe the proportion of bacterial components in the dual-species biofilms. Briefly, after washing with PBS twice, the biofilms on the wafers were fixed with 4% paraformaldehyde for 6 h. Lysozyme was used to lyse the cell wall. The biofilms were then dehydrated with gradient ethanol and dried at 46°C for 10 min. Specific fluorescent probes (Supplementary Table 2) were used to stain S. mutans and S. sanguinis within dual-species biofilms (Zheng et al., 2013). A confocal laser scanning microscope (Nikon Corporation, Tokyo, Japan) was used to capture the FISH results using a ×60 oil immersion lens. Statistical Analysis All experiments were repeated independently at least thrice. One-way analysis of variance was performed, and statistical significance was set at p <0.05 using SPSS software (version 24.0; SPSS Inc., Chicago, IL, USA). Fluoride-Resistant S. mutans Obtained Remarkable Competitive Advantage Within Dual-Species Biofilms During Biofilm Formation While Not in Pre-Formed Biofilms Under NaF The ratio of S.m and S.s in dual-species biofilms was analyzed using FISH and qRT-PCR ( Figure 1). We found that S.s had an advantage (more than 50%) in competition with S.m WT and FR without fluoride in biofilm formation. In the fluoride-free group, S.m FR accounted for 11.72% of dual-species biofilms, while S.m WT accounted for 31.93%. However, with the addition of NaF, the proportion of S.m FR (over 90%) was much higher than that of S.s, occupying a dominant position in dual-species biofilms. However, the ratio of S.m WT (less than 50%) maintained a previous trend with the effect of NaF. Surprisingly, in the preformed biofilm, S.m FR did not gain advantage over S.s under NaF-like biofilm formation, and the proportion of S.m WT was higher than that of S.s in all experimental groups in the preformed biofilm. NaF Had Different Effects on Biofilm Formation and Pre-formed Biofilm Even in WT Strains Using CV staining, we compared the biomass of different biofilms under NaF treatment (Figure 2). During the biofilm formation of single-species biofilms, S.m FR showed a stronger biofilm-forming ability than both S.m WT and S.s under NaF and even formed a robust biofilm at 1.25 g/L NaF (Figure 2A). For the biofilm formation of dual-species, S.m FR + S.s also showed observably improved biofilm formation capability under NaF ( Figure 2B). However, NaF had a little anti-biofilm effect on both S.m FR and S.m WT in pre-formed biofilms ( Figure 2C). The pre-formed S.s biofilms did not show strong resistance as two S.m strains under NaF and its biofilm biomass were reduced significantly under 1.25 g/L NaF ( Figure 2C). In the pre-formed dual-species biofilm, both types of dual-species biofilms withstood NaF stress and only decreased by 19.77 and 36.93% for S.m WT + S.s and 5.57 and 24.13% for S.m FR + S.s under 0.275 and 1.25 g/L NaF, respectively ( Figure 2D). 
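As an illustration of how the biofilm composition reported above is obtained from qRT-PCR data, the sketch below converts Ct values to CFU through species-specific standard curves and computes the S. mutans fraction of a dual-species biofilm. The slopes, intercepts, and Ct values are hypothetical and are not the calibration data of this study.

```python
def cfu_from_ct(ct, slope, intercept):
    """Convert a Ct value to CFU using a species-specific standard curve.

    Standard curves generated from known CFU counts are assumed to be linear
    in log10(CFU): Ct = slope * log10(CFU) + intercept. The values used below
    are illustrative only.
    """
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curves and Ct values for one dual-species biofilm sample.
sm_cfu = cfu_from_ct(ct=21.3, slope=-3.4, intercept=38.0)   # S. mutans
ss_cfu = cfu_from_ct(ct=24.8, slope=-3.3, intercept=37.5)   # S. sanguinis
sm_fraction = sm_cfu / (sm_cfu + ss_cfu)
print(f"S. mutans fraction of the dual-species biofilm: {sm_fraction:.1%}")
```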
The SEM results also showed a similar tendency in which FR strain-related biofilms acquired survival advantage at 1.25 g/L NaF during biofilm formation, thus forming a more robust biofilm than that of the WT biofilms ( Figure 3). However, the fluoride resistance advantage of S.m FR was not highlighted in the pre-formed biofilms (Figure 3). Biofilm Formation and Pre-formed Biofilm Showed Different Susceptibilities to NaF in Metabolic Activity Even When S.ms Was Compared to its Fluoride-Resistant Strain-Related Biofilms The MTT assay was used to detect the metabolic activity of the biofilms (Figure 4). In general, NaF suppressed the metabolic activity of biofilms, while this inhibitory effect was different between biofilm formation and pre-formed biofilms. At 0.275 g/L NaF, all S.m and its containing groups showed greater metabolic activity than the fluoride-resistant and S.s strains. During biofilm formation, either S.m FR or S.m FR + S.s involving dual-species biofilm showed a higher metabolic activity than the other groups under NaF at 1.25 g/L ( Figures 4A, B). Similar to the CV results in pre-formed biofilms, although NaF showed a suppressive effect on metabolic activity in a strain-, species-, and dose-dependent manner, all pre-formed biofilms could still sustain biofilm even at 1.25 g/L NaF ( Figures 4C, D). There were no obvious differences between S.m WT and S.m FR as well as their involved dual-species biofilms at 1.25 g/L NaF ( Figures 4C, D). Fluoride-Resistant S.m-Related Biofilms Produce More EPS Than WT Strain Under Fluoride EPS staining ( Figure 5) and the anthrone method ( Figure 6) were used to measure biofilm EPS production. The biofilm formation results showed that fluoride-resistant strain-related biofilms produced less EPS than WT strains without fluoride, while there was more EPS under fluoride ( Figures 5A and 6A, B). In pre-formed biofilms, the results were disparate ( Figures 5B and 6C, D). Although fluoride-resistant S.m-related biofilms synthetized more EPS at 0.275 mg/L, there were no significant differences between fluoride-resistant S.m-related biofilms and their WT biofilms at 1.25 g/L ( Figures 6C, D). Analogous results of EPS production were also confirmed by the SEM images ( Figure 3). Fluoride-Resistant S.m-Related Biofilms Had Lower Supernatant pH Than Wild-Type Strains in All NaF-Containing Groups Except in Pre-Formed Biofilms at High NaF In total, NaF treatment resulted in a higher pH of the biofilm supernatant ( Figure 7). During biofilm formation, S.m WT and S.m WT + S.s had a lower pH than S.m FR and S.m FR + S.s without NaF (Figures 7A, B). However, the opposite was observed with the addition of NaF, as S.m FR and S.m FR + S.s had a lower pH ( Figures 7A, B). In pre-formed biofilms, S.m FR and S.m FR + S.s had a lower pH than S.m WT and S.m WT + S.s only at 0.275 mg/L ( Figures 7C, D). Fluoride-Resistant Strain-Related Biofilms Showed Stronger Lactic Acid Production Under High Fluoride Concentration in Biofilm Formation While Not in Pre-Formed Biofilms Lactic acid production in the two models of biofilms, biofilm formation and pre-formed biofilms, was also detected. The lactic acid measurements showed that the production of lactic acid by fluoride resistance was inhibited by 0.275 g/L ( Figures 8A, B). At 1.25 g/L, the lactic acid production of S.m FR and S.m FR + S.s was much higher than that of S.m WT and S.m WT + S.s in biofilm formation ( Figures 8A, B). 
Surprisingly, in the preformed biofilm, S.m FR and S.m FR + S.s produced less lactic acid than S.m WT and S.m WT + S.s under NaF ( Figures 8C, D). DISCUSSION The present study investigated whether fluoride-resistant S. mutans would influence oral microecological homeostasis under fluoride in an antagonistic dual-species biofilm model to further investigate whether fluoride-resistant strains would influence the anti-caries effect of fluoride. A dual-species biofilm composed of S. mutans and S. sanguinis was chosen, as homeostasis of this dual-species biofilm used in our study could also represent dental plaque balance to a certain degree. Both dual-species biofilm formation and pre-formed biofilm were monitored, and our results showed that fluoride-resistant S. mutans influenced the composition, biomass, structure, metabolic activity, acid production, and EPS production of dual-species biofilms when compared with wild-type biofilms under NaF. Fluoride-resistant S. mutans had a survival advantage and stronger cariogenic potency in dual-species biofilm formation under NaF but could not highlight its fluoride- resistant superiority thoroughly in pre-formed dual-species biofilms under NaF. The lower ratio of the fluoride-resistant strain within the dual-species biofilm without NaF compared to its wild strain might be partly attributed to the slow growth rate of the fluorideresistant strain in our study (data not shown). The slow growth rate of the fluoride-resistant strain was consistent with previous reports, which might have resulted from bacterial-deficient carbohydrate uptake (Liao et al., 2015;Lee et al., 2021). The ratio of either S. mutans or its fluoride-resistant dual-species was raised without NaF with the development of biofilm when compared between 24 and 48 h, which was confirmed by FISH and qRT-PCR. This tendency was similar to a previous study on the association between S. mutans and S. sanguinis. It has been reported that the level of S. mutans is lower than that of S. sanguinis in the initial biofilm and higher in the mature biofilm (Mitrakul et al., 2016). However, in line with our expectations, the fluoride-resistant S. mutans was at an advantage in competition with S. sanguinis with the addition of NaF in biofilm formation owing to its fluoride-resistant properties, which could be supported by the biomass and metabolic activity of single-species biofilm formation. Surprisingly, in the pre-formed biofilm, the trend was completely different, as fluoride-resistant S. mutans did not achieve a competitive advantage within dual-species biofilms. The different outcomes of fluoride-resistant S. mutans within dual-species biofilms between biofilm formation and pre-formed biofilms under NaF might be explained as follows: NaF was added at the beginning of biofilm formation, and its screening effect on bacterioplankton was effective immediately, resulting in a survival advantage of the fluoride-resistant strain. Once dominant in biofilms, S. mutans produces more acid and creates an environment conducive to its own growth, thus taking an advantage over S. sanguinis (Takahashi and Nyvad, 2011). Nevertheless, in preformed biofilms, NaF was added after 24-h formed biofilms. The biofilms are more resistant to drugs than planktonic bacteria as reported previously (Stewart and Costerton, 2001). Owing to the resistance of mature biofilms to drugs, the survival advantage of the fluoride-resistant strain under NaF was almost entirely covered in the pre-formed biofilm. 
Fluoride can influence the adherence of S. mutans, a dominant cariogenic virulence (Shani et al., 2000). It has been reported that fluoride-resistant strains retain more adherence ability under fluoride (Men et al., 2016). EPS plays an important role in the stability of biofilms and adhesion to tooth surfaces (Flemming and Wingender, 2010). In biofilm formation, the EPS production of dual-species biofilms of fluoride-resistant S. mutans was lower than that of the wild strain without NaF. However, under fluoride, the EPS production of dual-species biofilms containing fluoride-resistant S. mutans was significantly higher than that of the wild strain, which was the same as that of the single-species biofilms. In pre-formed biofilms, fluorideresistant S. mutans-related biofilms only produced more EPS at 0.275 g/L NaF. EPS is known to help bacteria increase the resistance of biofilms to escape antibiotic drugs and immune responses (Ðapa et al., 2013). More EPS accumulation may enhance the resistance and adherence of biofilms and even cause higher cariogenicity. This could partly explain why preformed biofilms were more resistant to NaF. As a major factor in cariogenicity, glucosyltransferases (Gtfs) play a critical role in EPS formation (Bowen and Koo, 2011). However, previous studies found that there was no inhibition of Gtfs activity of the S. mutans wild strain by fluoride (Pandit et al., 2011;Guo et al., 2014). Whether the Gtfs activity of our fluoride-resistant S. mutans was suppressed by NaF needs to be further investigated. We were unaware whether there was a direct relationship between fluoride resistance acquisition and EPS production as shown in single-species biofilm results without NaF. The survival advantage of fluoride-resistant S. mutans made a significant contribution to the EPS production of its dual-species biofilms during biofilm formation. Acid production is also an important factor in cariogenic virulence. In general, NaF had an inhibitory effect on acid production in biofilms, whether in single or dual species, according to our data, which was consistent with a previous report . At present, there is no unified conclusion about the variation in acid production ability of wildtype S. mutans compared with fluoride-resistant S. mutans. Some studies found that fluoride-resistant S. mutans had a weaker acid production ability, while others found it to be stronger when compared to its related wild strain, and further studies found that there was no significant difference between these two strains (Eisenberg et al., 1985;Van Loveren et al., 1991;Hoelscher and Hudson, 1996;Cai et al., 2017;Lee et al., 2021). This diversity might be derived from different culture conditions and bacterial strains and induced by fluoride-resistant strains. In our study, fluorideresistant S. mutans-related biofilms had a lower supernatant pH than the wild-type strains in all NaF-containing groups except in pre-formed biofilms at high NaF, indicating a greater cariogenic potential. This result may partly result from the suppression effect of NaF on acid production. During the lactic acid production process without NaF, fluoride-resistant S. mutans-related biofilms showed stronger lactic acid production under a high fluoride concentration in biofilm formation, but not in pre-formed biofilms. Lactic acid production was derived from carbohydrate metabolism (Krzysćiak et al., 2014). 
The lactic acid result may be partly attributed to the stronger resistance to NaF in pre-formed biofilm, including the wild-type ones, resulting in an entirely different trend of biofilm metabolic activity when compared to that of biofilm formation, which would further influence lactic acid production. In addition, biofilm composition also contributed to the observed difference, as S. sanguinis produced less acid than S. mutans and its fluoride-resistant strain within the biofilm. The inconformity between pH and lactic acid results originated from the methods used. For lactic acid measurement, after 24 h of biofilm formation or treatment of pre-formed biofilm with NaF for another 24 h, the resulting biofilms were used for lactic acid production without fluoride. For pH, the culture medium contained fluoride for 24 h in both the biofilm formation and pre-formed biofilms. We hypothesize that the acidogenicity of the fluoride-resistant strain in our study was higher than that of wild strains under fluoride in biofilm formation, which is consistent with previous studies of single-species biofilms (Van Loveren et al., 1991;Hoelscher and Hudson, 1996;Sheng and Liu, 2000). However, this preponderance could not be observed in the preformed biofilms. Although the balance between S. mutans and S. sanguinis could represent the dental plaque equilibrate to some extent, dental plaque is intricate (Sun et al., 2019). Further studies need to be conducted using saliva or in vivo biofilms to evaluate the impact of fluoride-resistant strains on the micro-ecology of dental plaque. In addition, studies on the use of more clinically isolated fluoride-resistant strains, including S. mutans, were encouraged, as lab-induced fluoride-resistant strains might provide different results from clinical isolates. There is no doubt that a comprehensive understanding of the fluorideresistant mechanism would inspire more methods to control these fluoride-resistant opportunistic cariogenic bacteria. Drug resistance is a worldwide crisis, especially in the post-antibiotic era. To inhibit biofilm formation containing drug-resistant bacteria, controlling drug-resistant strains should be considered; otherwise, it would occupy an absolute ecological advantage, as in our study. The pre-formed biofilms were more resistant than biofilms during formation in our study, just as reported before (Angelopoulou et al., 2020). Dispersal molecules might be a route to consider, which could trigger biofilm degradation and disperse pre-formed biofilms to the bacterioplankton state and thus could control it by inhibiting biofilm formation (Fleming and Rumbaugh, 2017). Moreover, although fluoride had less impact on controlling fluorideresistant S. mutans biofilms, especially in biofilm formation, whether fluoride-resistant S. mutans would finally disrupt homeostasis between demineralization and remineralization remains to be further studied. Fluoride could inhibit demineralization and enhance remineralization, with the exception of the antibacterial effect. In summary, this study investigated the effect of fluorideresistant S. mutans on microecological homeostasis using an antagonistic dual-species biofilm model under fluoride. Under the screening effect of fluoride, fluoride-resistant S. mutans gained a survival advantage within antagonistic dual-species biofilms during biofilm formation, thus disrupting the ecological balance. Fluoride-resistant S. 
mutans also exhibited stronger cariogenic virulence, including acidogenicity and EPS production, which might further influence the anti-caries effect of fluoride from the perspective of biofilm control. However, in pre-formed biofilms, even wild-type S. mutans containing dualspecies biofilms showed strong resistance, and the advantage of fluoride-resistant S. mutans could not be fully highlighted for biofilm formation. However, this does not mean that fluoride is invalid for fluoride-resistant strains. Inhibition of biofilm biomass, metabolism, acidogenicity, and EPS production was found within biofilms under NaF. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
2022-02-28T14:10:32.673Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "6aa6128803fa165aff69296ad046dc7d399467bd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "6aa6128803fa165aff69296ad046dc7d399467bd", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
26095741
pes2o/s2orc
v3-fos-license
Forebrain-specific Expression of Monoamine Oxidase A Reduces Neurotransmitter Levels, Restores the Brain Structure, and Rescues Aggressive Behavior in Monoamine Oxidase A-deficient Mice* Previous studies have established that abrogation of monoamine oxidase (MAO) A expression leads to a neurochemical, morphological, and behavioral specific phenotype with increased levels of serotonin (5-HT), norepinephrine, and dopamine, loss of barrel field structure in mouse somatosensory cortex, and an association with increased aggression in adults. Forebrain-specific MAO A transgenic mice were generated from MAO A knock-out (KO) mice by using the promoter of calcium-dependent kinase IIα (CaMKIIα). The presence of human MAO A transgene and its expression were verified by PCR of genomic DNA and reverse transcription-PCR of mRNA and Western blot, respectively. Significant MAO A catalytic activity, autoradiographic labeling of 5-HT, and immunocytochemistry of MAO A were found in the frontal cortex, striatum, and hippocampus but not in the cerebellum of the forebrain transgenic mice. Also, compared with MAO A KO mice, lower levels of 5-HT, norepinephrine, and DA and higher levels of MAO A metabolite 5-hydroxyindoleacetic acid were found in the forebrain regions but not in the cerebellum of the transgenic mice. These results suggest that MAO A is specifically expressed in the forebrain regions of transgenic mice. This forebrain-specific differential expression resulted in abrogation of the aggressive phenotype. Furthermore, the disorganization of the somatosensory cortex barrel field structure associated with MAO A KO mice was restored and became morphologically similar to wild type. Thus, the lack of MAO A in the forebrain of MAO A KO mice may underlie their phenotypes. This study shows that a forebrain-specific expression of monoamine oxidase (MAO) 3 A is able to restore the disorganized somatosensory cortex barrel field structure and an aggressive behavior seen in MAO A KO mice. MAO A and B are outer membrane mitochondrial enzymes responsible for the metabolic degradation of biogenic amines in humans (1,2). MAO A prefers serotonin (5-HT) and norepinephrine (NE) as substrates, whereas MAO B prefers phenylethylamine (PEA) and benzylamine (1)(2)(3)(4)(5). Dopamine (DA), tyramine, and tryptamine are common substrates for both forms. MAO A and B are encoded by different genes (6,7) closely linked on the X chromosome (8). Both isoenzymes are widely present in most brain regions (9). MAO A gene deficiency in a Dutch family shows borderline mental retardation and impulsive aggression (10,11). A promoter region was associated with high or low MAO A promoter activity (12). Individuals with low MAO A promoter activity who are maltreated in early childhood have increased risk for antisocial behavior as adults (13,14). These results indicated that the interaction of the gene and the environment of early childhood predispose the adult behavior. Functional magnetic resonance imaging in healthy human volunteers shows that the low expression variant, associated with increased risk of violent behavior, correlated with pronounced limbic volume reductions, hyper-responsive amygdala during emotional arousal, and diminished reactivity of regulatory prefrontal regions as compared with the high expression allele (15,16). MAO A knock-out (KO) mice showed elevated brain levels of MAO A substrates 5-HT, DA, and NE in all brain regions (17,18). 
In contrast to MAO A deficiency, only an increase in PEA level was found in the brain of MAO B-deficient mice and humans (19,20). They do not show aggressive behavior. These studies suggest that the changes in brain monoamines such as 5-HT, NE, and DA may be related to aggression in MAO A/B double KO mice. Brain levels of 5-HT, NE, DA, and PEA all increased to a much greater degree than in either MAO A or B single KO mice (21). They show chase/escape and anxiety-like behavior different from MAO A or B single KO, suggesting that varying monoamine levels result in a unique behavioral phenotype. MAO A KO mice show complete absence of barrels in the somatosensory cortex and aggressive behavior (17). Neonatal administration of the tryptophan hydroxylase inhibitor parachlorophenylalanine reduced 5-HT and partly restored the capacity to form cortical barrels (17). Restoration of the barrel field can be accomplished before postnatal day 7 (22,23). These studies suggest the neuronal developmental stage is critically important for the aggressive behavior in adulthood in both human and mice, prompting us to create a transgenic mouse that would restore the somatosensory cortex structure in the early developmental period and alleviate the aggressive behavior in the adult male mouse. To do this, we used the mouse calcium-calmodulin-dependent kinase II␣ (CaMKII␣) promoter to drive expression of a human MAO A transgene in MAO A KO mice. CaMKII␣ is an abundant protein specifically expressed in forebrain synapses from postnatal day 1 to adult life. This promoter has been well characterized with regard to specific forebrain expression, including neocortex, striatum, and hippocampus (24 -27). This is the first study showing that the forebrain of transgenic mice with the mouse CaMKII␣ promoter/human MAO A construct in MAO A KO mice is able to modify the phenotypes observed in MAO A KO mice to a phenotype more like wild type. This study further confirms the role of MAO A itself in somatosensory cortex organization and aggressive behavior. EXPERIMENTAL PROCEDURES Construction of pCaMKII␣-MAO A cDNA Vector-The 20-kb human MAO A cDNA was excised by EcoRI digestion from a pECE vector. The insert was ligated into the EcoRV site of pNN265. This vector contained a 5Ј intron and a 3Ј intron plus a poly(A) signal from SV40 (28). The insert containing human MAO A cDNA was digested by NotI and was ligated into the unique NotI site of pMM403 that contains the 8.6-kb CaMKII␣ promoter (24). The orientation of the insert was determined by digestion with HindIII. The linearized plasmid DNA ( Fig. 1) was then microinjected into fertilized eggs from MAO A KO mice. Identification of Forebrain Transgenic Mice by PCR-The presence of MAO A cDNA in forebrain MAO A transgenic mice was shown by PCR using a pair of primers that are complementary to parts of human MAO A exon 3 (E3F, 5Ј-GAT-TACGTAGATGTTGGTGGAGC-3Ј; 254 -276) and exon 10 (E10R 5Ј-GATGGCAGGCAGTGACCCATCAG-3Ј; 1120 to 1098). A 0.87-kb PCR product is expected when human MAO A cDNA is expressed. The PCR conditions were as follows: 94°C for 1 min, 60°C for 1 min, and 72°C for 2 min, 30 cycles (Fig. 1B). Anchor PCR for Determination of Transgene Insertion Site-Genomic DNA isolated from the transgenic MAO A KO mouse tail was partially digested by Sau3AI enzyme to create sticky ends for ligation into the BamHI site of the pUC19 vector. 
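A quick arithmetic check of the genotyping design described above: the expected ~0.87-kb product follows directly from the exon 3 and exon 10 primer coordinates. The sketch below (Python) is illustrative and not the authors' code; the primer sequences and coordinates are taken from the text (the mid-sequence hyphen in E3F is treated as a line-break artifact and removed), and everything else is an assumption for the example.

# Illustrative check of the MAO A genotyping PCR described above (not part of the original study).
e3f = "GATTACGTAGATGTTGGTGGAGC"   # E3F, reported to span cDNA positions 254-276
e10r = "GATGGCAGGCAGTGACCCATCAG"  # E10R, reported to span cDNA positions 1120-1098 (antisense strand)

e3f_span = (254, 276)
e10r_span = (1098, 1120)  # written 5'->3' on the antisense strand in the text

# Primer lengths should match their inclusive coordinate spans.
assert len(e3f) == e3f_span[1] - e3f_span[0] + 1 == 23
assert len(e10r) == e10r_span[1] - e10r_span[0] + 1 == 23

# Expected amplicon: from the first base of E3F to the last base of E10R, inclusive.
amplicon_bp = e10r_span[1] - e3f_span[0] + 1
print(amplicon_bp)                           # 867
print(round(amplicon_bp / 1000, 2), "kb")    # ~0.87 kb, matching the product size quoted in the text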
The primer was designed according to the most 5′ end of the mouse CaMKIIα promoter sequence in the antisense direction (5′-AGA AGG GTG CGG ACT ACA TCG-3′). The anchor primer was from pUC19, flanking the BamHI site (5′-cgg ctc gta tgtt gtg tgg-3′). PCR was run under the following conditions: 94°C 3 min; 94°C 30 s, 58°C 30 s, 72°C 2 min, for 32 cycles; the last extension at 72°C was for 10 min. The PCR product was checked on a 1% agarose gel to ensure a product larger than 1 kb was present. The PCR product was cloned with a TA cloning kit (Invitrogen), and single colonies were isolated. Clones containing inserts were sequenced. Sequences were subjected to National Center for Biotechnology Information VecScreen and BLAT analysis using the University of California, Santa Cruz, Genome Bioinformatics server (mouse genome assembly Feb. 2006). Demonstration of Forebrain-specific Expression of Human MAO A Protein and RNA by Western Blot and RT-PCR-Western blot was done by using anti-human MAO A antibody. Tissue homogenates were isolated from various tissues of WT, MAO A KO, and forebrain transgenic mice, separated in a 7% SDS-polyacrylamide gel, and transferred to a polyvinylidene difluoride membrane. The membrane was immunoblotted with 1:1,000 diluted rabbit anti-human MAO A antibody for 1 h at room temperature, washed three times, then incubated with 1:10,000 diluted goat anti-rabbit antibody for another 30 min, and washed three times. The membrane was incubated with ECL for 1 min and exposed to x-ray film. Total RNAs were isolated from WT, MAO A KO, and forebrain transgenic mice and reverse-transcribed with random primers. The specific primers were designed from the human MAO A gene. Determination of MAO A Activity in Forebrain Transgenic Mice-Adult male wild type (C3H strain), MAO A KO, and forebrain transgenic mice aged 1-4 months were used in all studies. The mice were housed individually and were allowed free access to food and water. They were housed in an air-conditioned unit with controlled temperature (20-22°C) and humidity (50-60%). Lighting was maintained on a 12-h light-dark cycle (lights on 0600-1800). The mice were sacrificed, and their whole brains were rapidly removed and placed in a brain matrix (ASI Instruments) embedded in ice. 2-mm thick sections were cut, and samples from frontal cortex (3 per side from 3.2 to 1.2 mm), striatum (4 per side from 0.2 to −1.8 mm), hippocampus (3 per side from −1.8 to −3.8 mm), and cerebellum (4 per side from −5.8 to −7.8 mm) were punched out from the sections with a blunt 18-gauge needle. Punched samples were homogenized in 50 volumes of 50 mM sodium phosphate buffer, pH 7.4, and used for MAO A activity determination as described previously (17). In some cases, mitochondria were isolated from different brain regions, and MAO A activity was determined in the mitochondrial pellets. Autoradiography-The mice were sacrificed, and their brains were removed and frozen in isopentane. The brains were stored at −70°C for no longer than 1 month before sectioning. For autoradiographic mapping, 12-μm frozen coronal sections were cut in duplicate (500 μm apart) in a cryostat at −20°C, thaw-mounted onto precleaned Superfrost/Plus ice-cold microscope slides, and dried using anhydrous CaSO4 for 2 h at 4°C followed by 1 week at −20°C. For the determination of total and nonspecific binding with [3H]Ro 41-1049, adjacent sections were cut from 3.20-mm bregma, which was identified according to the mouse brain atlas of Franklin and Paxinos (29).
For autoradiographic visualization of MAO A-binding sites, incubations were carried out in parallel for sections from wild type, MAO A KO, and forebrain transgenic mice with 15 nM [3H]Ro 41-1049. Prior to binding, slides were thawed for 30 min at room temperature. Slides for the determination of total binding were incubated in 50 mM Tris-HCl containing 120 mM NaCl, 1 mM MgCl2, 5 mM KCl, and 0.5 mM EDTA, pH 7.4 (1 ml for each slide), for 60 min at 37°C. Nonspecific binding was determined by simultaneously treating a parallel set of slides, under identical incubation conditions, with the addition of 1 μM clorgyline. The detailed procedure was published previously (30). Quantitative analysis of MAO A binding was carried out by video-based computerized densitometry using a Xenix image analyzer. Tissue equivalents (nCi per mg of tissue) for MAO A labeling were derived from 3H microscale standard-based calibrations laid down with each film after subtraction of nonspecific binding images. Quantified measures were taken from both the left and right sides of sections from the whole of the frontal cortex (3.2 to −1.8 mm), striatum (1.7 to −1.3 mm), hippocampus (−1.3 to −3.8 mm), and cerebellum (−5.8 to −7.3 mm) in successive sections 500 μm apart. Immunocytochemistry-Animal procedures were conducted in strict compliance with approved institutional protocols and in accordance with the provisions for animal care and use described in the European Communities Council Directive of 24 November 1986 (86/609/EEC). Experiments were carried out on P7 and adult mice. The day of birth was counted as P0. HPLC Determination of the Levels of NE, DA, Dihydroxyphenylacetic Acid, 5-HT, and 5-Hydroxyindoleacetic Acid in Forebrain Regions of Forebrain Transgenic Mice-Mice were sacrificed, and brain samples were quickly removed and immediately frozen in isopentane. Brain samples were homogenized in 100 μl of a solution containing 0.1 M trichloroacetic acid, 10 mM sodium acetate, and 0.1 mM EDTA, pH 2.0. The homogenates were sonicated using a Fisher sonic dismembrator (model 550) with the probe sonicator at setting 2 in 10 volumes of buffer at 4°C and centrifuged for 10 min at 12,000 × g. The protein concentrations of the pellets were determined using the BCA kit (Pierce). The supernatant was centrifuged and stored at −70°C until HPLC analysis. HPLC analysis was described previously (18). Resident-Intruder Confrontations of Forebrain Transgenic Mice-All mice were housed individually for at least 1 week in transparent Makrolon cages measuring 29 × 13 × 13 cm. Confrontations were between mice of the same strain and of similar ages and weights. They were organized in a Latin Square arrangement, and all sessions were at least 2 days apart to prevent animal fatigue. An intruder mouse was placed in the cage of the resident mouse. The mice were allowed to interact for 10 min after the first attack. The interactions of the mice were videotaped for later analysis. The analysis procedure was described previously (31). The following five behaviors were assessed: (i) nonsocial (absence of exploration of other mouse); (ii) investigative (subject actively investigates cage, mainly by sniffing); (iii) defensive (subject actively defends or shields itself from opponent, e.g. standing on hind legs with forearms in the air); (iv) aggressive (subject engages in physical fight with opponent, e.g. biting or kicking attack); and (v) locomotive (subject is rapidly roaming cage).
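The quantitative autoradiography step described above (film densitometry calibrated against 3H microscale standards, with nonspecific binding subtracted) can be summarized with a short numerical sketch. All standard values and optical densities below are invented placeholders; only the workflow (calibrate optical density against standards, then specific = total − nonspecific) reflects the procedure in the text.

import numpy as np

# Hypothetical 3H microscale calibration: optical density vs. tissue-equivalent activity (nCi/mg).
std_od     = np.array([0.05, 0.12, 0.25, 0.48, 0.80])
std_nci_mg = np.array([0.5,  2.0,  5.0, 12.0, 25.0])

def od_to_nci(od):
    # Piecewise-linear interpolation within the standard curve (illustrative only).
    return np.interp(od, std_od, std_nci_mg)

# Hypothetical mean optical densities for one region (e.g., frontal cortex).
total_od       = 0.42   # section incubated with [3H]Ro 41-1049 alone
nonspecific_od = 0.09   # parallel section incubated with excess clorgyline

specific_binding = od_to_nci(total_od) - od_to_nci(nonspecific_od)
print(round(float(specific_binding), 2), "nCi/mg tissue equivalent (specific MAO A labeling)")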
Generation of Forebrain-specific MAO A Transgenic Mice from MAO A KO Mice- To produce forebrain-specific MAO A transgenic mice, a full-length human MAO A cDNA clone was ligated to the CaMKII␣ promoter to confer expression in specific brain regions where the CaMKII␣ promoter is normally active. The orientation of the insert was determined by digestion with HindIII. Fig. 1A illustrates the construction schema and final construct used to create the transgenic mice by microinjection of the linearized plasmid DNA into fertilized eggs of MAO A KO mice. After birth, the introduction of human MAO A was assayed by PCR and confirmed. A specific 0.87-kb PCR product was detected in genomic DNA prepared from the tails of forebrain transgenic mice but not in the wild type or in the MAO A KO mice (Fig. 1B). This confirms that we were able to successfully integrate the human MAO A cDNA into the mice genomes. Sequences derived from anchor PCR using a CaMKII␣ promoter indicated insertion took place in chromosome seven (7qE3 region) and in chromosome four (4qC7 region). In each case chromosomal sequences contiguous with CaMKII␣ promoter-based PCR primer corresponding to genomic sequences flanking the 5Ј end of the transgenic insertion were identified by BLAT analysis for both forward and reverse CaMKII␣ primers with these primer sequences aligning with chromosome 18 in the upstream promoter region of the CaMKII␣ gene. The chromosome seven anchor PCR-derived sequences spanned 763 bases with 99.5% identity from 105031154 to 105031916 (same results with forward and reverse primers) and included partial overlap with an intronless olfactory receptor gene, Olfr684. Chromosome four aligned with derived anchor PCR sequences over a span of 290 bases from 106111874 to 106112163 with 99% identity. This integration site fell in to the fourth intron of the uncharacterized BC055111 gene. Forebrain-specific expression of the introduced human MAO A transgene was evaluated by reverse transcription PCR and Western blotting using anti-human MAO A antibody. Both evaluations indicated a brain region-specific expression was achieved in the transgenic mice. Fig. 1C illustrates Western blot results and RT-PCR results for the human MAO A transcript and protein, respectively. Transcript and protein are observed in specific brain regions or tissue. Frontal cortex and hippocampus both expressed MAO A, but cerebellum and liver did not. Negative controls were wild type mouse, which do not express human MAO A and the MAO A KO strain used to create this transgenic mouse strain. As expected there was no detectable human MAO A mRNA or protein present in MAO A KO mice (Fig. 1C). This line of transgenic mice was viable and fertile. They were intercrossed to obtain the needed number of animals for behavioral and biochemical studies. Healthy forebrain transgenic mice were obtained at the expected frequency and weight. No changes in overall brain structure or appearance compared with MAO A KO and wild type mice could be detected. MAO A Catalytic Activity and Autoradiography Show the Expression of MAO A Specifically in the Forebrain of Transgenic Mice-Catalytic activity of MAO A using 5-HT as substrate was measured in the whole homogenates or mitochondria of the frontal cortex, striatum, hippocampus, and cerebellum of wild type, MAO A KO, and the forebrain transgenic mice (Table 1). No activity was detectable in MAO A KO mice as expected. 
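As a small consistency check on the insertion-site mapping reported above, the stated alignment span lengths can be recomputed from the genomic coordinates as inclusive intervals. This is a trivial illustration added here, not part of the original analysis.

# Recompute the reported anchor-PCR alignment spans from their genomic coordinates.
insertion_sites = {
    "chr7 (7qE3, partial overlap with Olfr684)": (105031154, 105031916),  # reported span: 763 bases
    "chr4 (4qC7, intron 4 of BC055111)":         (106111874, 106112163),  # reported span: 290 bases
}

for label, (start, end) in insertion_sites.items():
    span = end - start + 1   # inclusive interval length
    print(f"{label}: {span} bases")
# chr7: 763 bases; chr4: 290 bases -- both match the values quoted in the text.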
Significant MAO A activity was detected in the frontal cortex, striatum, and hippocampus but not in the cerebellum of the transgenic mice. This suggests that MAO A is indeed expressed in the specific forebrain regions under the control of the CaMKII␣ promoter. Furthermore, MAO A activity was found to be associated with the mitochondria, and this result indicates that the expressed MAO A are located on mitochondria, as is wild type for MAO. This result indicates that the human C-terminal sequence retains this targeting ability (32)(33)(34) when expressed in mice. Thus, MAO A is expressed in a forebrain-specific manner, with mitochondrial localization; however, the MAO A activity is only 2-5% of the wild type. The MAO A KO mice into which the human MAO A was introduced have essentially zero MAO A activity. The low MAO A activity in these transgenic mice points toward low CaMKII␣ promoter activity compared with the endogenous MAO A promoter. Ro 41-1049 is a specific inhibitor of MAO A (K d value in low nanomolar range), which can be used as a radioligand for MAO A. Autoradiographic visualization of MAO A-binding sites in the brain sections using [ 3 H]Ro 41-1049 was done to assess regional specific MAO A expression in the brain of the forebrain transgenic mice compared with MAO A KO and wild type mice. Results are shown in Fig. 2 Fig. 2) suggest that MAO A was specifically expressed in the frontal cortex, striatum, and hippocampus but not in cerebellum of forebrain transgenic mice, which is consistent with the expression of CaMKII␣ (17)(18)(19)(20) and is the expected outcome of expressing MAO A under the control of this promoter (24 -27). The direct visualization of MAO A protein in the forebrain transgenic mice was achieved by using polyclonal antibodies raised against purified human MAO A. Immunocytochemistry was carried out using the anti-human MAO A antibodies. The specificity and efficacy of staining was first assessed using wild type and MAO A KO mice. In normal mice MAO A is expressed abundantly in the intralaminar nuclei of the thalamus, noradrenergic neurons of the locus coeruleus, and adrenergic neurons (A1-A3) of the brainstem. Fig. 3, A and B, illustrates specific staining using these antibodies to the locus coeruleus and brainstem, respectively, whereas Fig. 3C shows the absence of MAO A reactivity in the A3 region of the brainstem in MAO A KO mice. These experiments on control mice verified the specificity of the antibody, and staining of the forebrain transgenic mice was then assessed. Fig. 3, D-H, illustrates the regional specificities of expression obtained by using the CaMKII␣ promoter to drive human MAO A expression in this transgenic system. Similar to MAO A KO mice, MAO A expression was absent in noradrenergic, tyrosine hydroxylase neurons of the locus coeruleus (Fig. 3, D and E), and serotoninergic neurons of the raphe nucleus (Fig. 3F) of forebrain transgenic mice. Similar to the WT mice, the somatosensory cortex showed strong MAO A expression (Fig. 3G) in the supragranular layers II-IV and expression in the infragranular layers V-VI, albeit of lower intensity than in layers II-IV. These results suggest that MAO A was specifically expressed in the forebrain. Individual neuron staining in the supragranular layers can be visible at higher magnification (arrows in Fig. 3H). The staining pattern observed in Fig. 3H is consistent with MAO A mitochondrial expression. Fig. 3, I-L, presents immunocytochemistry using anti-MAO B antibodies to label MAO B in the transgenic mice. 
MAO B immunoreactivity is present and consistent with that of normal wild type mice in serotoninergic neurons (Fig. 3I), histaminergic neurons (J), and cortical astrocytes (K and L). Taken together, these data ( Table 1, Fig. 2, and Fig. 3) suggest that MAO A was specifically expressed in the somatosensory cortex, not in norepinephrine neurons in locus coeruleus or cerebellum of forebrain transgenic mice, and this is consistent with the expression of CaMKII␣ (24 -27). MAO A Substrates 5-HT, NE, and DA Decreased and MAO A Metabolite 5-HIAA in Forebrain of Transgenic Mice Increased Compared with MAO A KO Mice-To understand the consequences of the presence of MAO A on the neurotransmission in forebrain regions, steady-state levels of monoamines were determined in the frontal cortex, hippocampus, striatum, and cerebellum of transgenic, MAO A KO, and wild type mice (Fig. 4). MAO A KO mice have about a 2-fold increase in levels of MAO A substrates NE and 5-HT in all brain regions studied (frontal cortex, hippocampus, striatum, and cerebellum) than wild types in accordance with Kim et al. (18). The levels of 5-HIAA were concomitantly and significantly lower in all regions, which reflects the absence of oxidation of 5-HT in the brain of MAO A KO mice. The levels of DA in MAO A KO mice were significantly higher than wild types in the striatum and were slightly higher in the frontal cortex, the hippocampus, and the cerebellum. The levels of dihydroxyphenylacetic acid were concomitantly lower in all regions. The levels of NE and 5-HT in the frontal cortex (ϳ32%), striatum (52%), and hippocampus (35%) of forebrain transgenic mice were lower than MAO A KO and were higher than wild types (frontal cortex, ϳ25%; striatum 56%; hippocampus 22%). These changes in monoamines suggest that MAO A was expressed in these regions of forebrain transgenic mice. In contrast, as a control, the levels of NE, DA, 5-HT, and 5-HIAA in the cerebellum of forebrain transgenic mice were similar to MAO A KO suggesting that MAO A was not expressed in the cerebellum of forebrain transgenic mice. DA levels in the frontal cortex and hippocampus of forebrain transgenic mice were lower than MAO A KOs, suggesting DA is oxidized by MAO A. In the striatum, the levels of DA in forebrain transgenic mice were similar to MAO A KO and were significantly higher than wild types. This suggests that expressed MAO A in the striatum (about 3% by activity compared with wild type) is inadequate to oxidize the exceptionally high amount of DA in this brain region. Somatosensory Cortex Organization in Forebrain Transgenic Mice- Fig. 5 illustrates 5-HT immunolabeling in wild type, MAO A KO, and forebrain transgenic mice in substantia nigra (Fig. 5, A-C, respectively) and in the thalamus (Fig. 5, D-F, respectively). Darker regions of 5-HT immunostaining because of higher 5-HT levels are clearly visible in the MAO A KO compared with wild type and the forebrain transgenic mice where MAO A activity is present. Comparisons of the labeled substantia nigra and median geniculate nucleus in the substantia nigra region (Fig. 5, A-C) and somatosensory and visual thalamus regions (Fig. 5, D-F) illustrate well the decreases in 5-HT compared with MAO A KO. These axons target the forebrain; thus, this is another indicator that the forebrain transgenic mice express active MAO A. 
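The percentage changes quoted above (for example, roughly 32% lower 5-HT in the frontal cortex of forebrain transgenic mice relative to MAO A KO mice) are simple relative differences between group means. The sketch below shows the computation on made-up group means; the values are placeholders, since the underlying measurements are in Fig. 4 and are not reproduced here.

# Illustrative computation of the relative differences discussed above (placeholder group means).
group_means_5ht = {        # hypothetical frontal-cortex 5-HT levels, arbitrary units
    "wild type":  4.0,
    "MAO A KO":   8.0,
    "transgenic": 5.4,
}

ko, tg, wt = (group_means_5ht[k] for k in ("MAO A KO", "transgenic", "wild type"))

pct_below_ko = 100 * (ko - tg) / ko   # how far the transgenic mean falls below the KO mean
pct_above_wt = 100 * (tg - wt) / wt   # how far the transgenic mean remains above wild type
print(f"{pct_below_ko:.0f}% below KO, {pct_above_wt:.0f}% above wild type")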
MAO A KO mice display permanent alterations in layer IV of the primary somatosensory cortex (S1); the thalamocortical axons and the granular neurons form a homogeneous band instead of being clustered into barrels (23). Restoration of MAO A function in the forebrain transgenic mice should therefore result in a restoration or similarity of cortical structure between wild and forebrain transgenic mice, compared with MAO A KO. Three experiments addressed this issue as follows: (i) analysis of thalamocortical axon segregation using 5-HT immunoreactivity (Fig. 6, A-C); (ii) dendritic differentiation using immunoreactivity of the metabotropic glutamate receptor 5 (mGluR5); and (iii) differentiation of cytoarchitecture via Nissl substance staining. Fig. 6, A-C, shows thalamocortical axon segregation visualized with serotonin immunoreactivity. Whisker-related patches of S1 layer IV are found in wild type mice, abrogated in MAO A KO mice, but are observed here in the forebrain transgenic mice. Fig. 6, D-F, clearly indicates the barrel formation resulting from mGluR labeling is abrogated in MAO A KO mice but is observed in the transgenic mice with very similar patterning as wild type. Finally, Nissl body staining is illustrated in Fig. 6, G-I, showing the granular neuron barrels of layer IV of wild type mice are also present in the forebrain transgenic mice but are absent in the MAO A KO. Aggression Reduced in Transgenic Mice-Behavioral characterization (nonsocial, investigative, defensive, aggressive, and locomotive behavior) of forebrain transgenic, MAO A KO, and wild type mice is presented in Table 2. MAO A KO mice showed higher levels of aggressive behavior than wild types in accordance with Cases et al. (17) ( Table 2). The duration of aggressive behavior of forebrain transgenic mice was reduced to wild type levels, and they spent more time in investigative behavior ( Table 2). They have similar locomotive behavior to MAO A KOs. No significant difference in nonsocial behavior was found in all mice. DISCUSSION One line of forebrain-specific MAO A transgenic mice has been generated from MAO A KO mice by using the promoter of CaMKII␣ (Fig. 1A). The presence of the human MAO A transgene (Fig. 1B) was shown by PCR of the genomic DNA, and its expression was verified by RT-PCR and Western blots using various regions of the tissues (Fig. 1C). Radioautography using the MAO A-specific radioligand [ 3 H]Ro 41-1049 (Fig. 2), MAO A antibody (Fig. 3), or 5-HT immunostain (Fig. 5) provides evidence for the forebrain-specific expression of the human MAO A. Compared with MAO A KO mice, lower levels of 5-HT, NE, and DA and higher levels of the MAO A metabolite 5-HIAA were found in the forebrain regions but not in the cerebellum of the forebrain transgenic mice (Fig. 4). Taken together, these results suggest that the human MAO was indeed specifically expressed in the forebrain of forebrain transgenic mice. It is intriguing that in vitro assay showed only about 5% of the wild type MAO A is expressed in forebrain transgenic mice. However, it is capable of producing significant changes in the steady-state levels of monoamines (22-56%). These results suggest that there may be an abundant excess of MAO A in vivo. This result is consistent with the previous finding that MAO inhibitors are only effective as antidepressants when inhibiting MAO activity by 80 -90% (34). 
It is also possible that the newly expressed human MAO A may have a different microenvironment on the mitochondria, thus the catalytic activity may have been underestimated. Nevertheless, these results confirm a direct role of MAO A in aggression rather than a secondary effect related to the MAO A KO. These results suggest that the increased levels of NE, 5-HT, and possibly DA in the forebrain of MAO A KO mice may underlie their aggressive behavior. Our analysis of 5-HT immunolabeling correlates with the HPLC study (Fig. 4). MAO rescue in Fb transgenic mice of 5-HT and NE metabolism in postsynaptic regions such as cortex and striatum implies 5-HT is transported onto these cells. It has been shown that there are atypical locations of 5-HT containing neurons in cortical, hippocampal, or amygdaloid areas of MAO A KO mice and that the thalamic neurons transiently express the serotonin transporter on axonal terminals in early postnatal development (35). Similarly, we have found 5-HT immunolabeling in the substantia nigra region (Fig. 5, A-C), also in somatosensory and visual thalamus regions (Fig. 5, D-F). The dopamine transporter may play a role in this. Previous work has shown that in substantia nigra neurons the dopamine transporter acts to cause uptake of 5-HT in mice that are doubly deficient in MAO and the 5-HT transporter (36). Importantly, assessment of the rescue of 5-HT metabolism in Fb transgenic mice was consistent as assessed by autoradiography and by biochemical means; both the intensity of 5-HT immunolabeling (Fig. 5) and the levels of 5-HT determined by HPLC (Fig. 4) were highest in MAO A KO mice, whereas forebrain transgenic mice were reduced to WT type levels. MAO A KO mice display permanent alterations of the somatosensory cortex, as shown by 5-HT immunolabeling (Fig. 6), Interestingly, axonal, cellular, and dendritic patterning are restored in forebrain transgenic mice (Fig. 6). We also analyzed two general differentiation markers of the cortex, calretenin and calbindin, and found no alterations in their distribution (data not shown). In summary, this study demonstrates that the somatosensory cortex organization, including thalamocortical axon segregation, barrel field structure, and cytoarchitecture, reverted to a phenotype with similarity to wild type mice and easily distinguishable for the MAO A KO mice that were transformed by introduction of the CaMKII␣ promoter-driven expression of human MAO and consequent regional specific expression of human MAO A. Aggressive behavior of MAO A KO mice decreased with this forebrain-specific MAO A expression in the MAO A KO mice. Our results suggest that the increased levels of NE, 5-HT, and possibly DA in the forebrain of MAO A KO mice may trigger a complex chain of events in the brain, which lead to aggressive behavior. It is also possible that an increased 5-HT level in forebrain during the development of MAO A KO pups permanently disrupts the distribution of layer IV neurons in the somatosensory cortex (37), and thus resulted in behavioral changes in MAO A-deficient mice. Although the aggressive behavior seen in MAO A-deficient mice is consistent with the impulsive aggression in man with MAO A deficiency (10,11), considerable caution has to be taken in extrapolating results obtained in mice to explain complex aggressive behavior in humans. 
Nevertheless, a mouse "knock-out" and "knock-in" model for the MAO A gene, as we have demonstrated in this study, provides a valuable basis for understanding the mode of its function in the brain. The region-specific alterations of brain neurotransmitters because of MAO A expression, as shown in forebrain transgenic mice, may contribute to the understanding and perhaps to the treatment of aggression. Our findings that aggressive behavior was reduced in MAO A forebrain-specific expressed mice are consistent with literature reports in that the aggressive behavior in adults is because of early neuronal developmental effects on the brain structures (17). A functional polymorphism in humans with maltreatment in childhood increased the risk of antisocial and criminal behavior. Brain structures are affected by the polymorphism in the MAO A promoter (15). These studies and our current study demonstrated that MAO A may be an important enzyme that regulates the brain structure at a critical neuronal developmental stage, which is important for the development of brain structures and behaviors in adults. In addition, we have found a new function of MAO A and its novel repressor R1 (RAM2/CDCA7L/JPO2) in apoptosis and the c-Myc-induced cell cycle signaling pathway (38). The various MAO KO animal models generated, including MAO A KO, MAO B KO, MAO A/B double KO, and the forebrain-specific human MAO A transgenic MAO A KO generated here, would be valuable for further studying the role of MAO A and the neurotransmitters 5-HT, NE, and DA in neuronal development, brain structure, and behavior. [Table 2 legend: The duration of aggressive behavior of the more aggressive mouse from each pair is reported; values are expressed in seconds and represent the mean ± S.E.; n represents the number of mice. Multiway comparisons (one-way ANOVA and Tukey-Kramer test) are shown for wild type versus MAO A KO mice (**, p < 0.01), wild type versus forebrain transgenic mice (##, p < 0.01), and MAO A KO versus forebrain transgenic mice (++, p < 0.001). The animal care was in accordance with institutional guidelines. Columns: Genotype; Duration of behavior (s/10 min).]
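The statistical comparison named in the Table 2 legend (one-way ANOVA followed by a Tukey-Kramer post hoc test across the three genotypes) can be reproduced in outline with standard libraries. The durations below are fabricated placeholders, not the study's data; only the analysis pattern is illustrated.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical aggression durations (s per 10 min), for illustration only.
wt = np.array([12,   8, 15,  10,   9])
ko = np.array([95, 120, 80, 110, 100])
tg = np.array([14,  11, 18,   9,  13])

# One-way ANOVA across genotypes.
f_stat, p_val = f_oneway(wt, ko, tg)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.2g}")

# Tukey HSD (Tukey-Kramer when group sizes are unequal) pairwise comparisons.
values = np.concatenate([wt, ko, tg])
groups = ["WT"] * len(wt) + ["MAO A KO"] * len(ko) + ["Tg"] * len(tg)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))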
2018-04-03T02:40:47.403Z
2007-01-05T00:00:00.000
{ "year": 2007, "sha1": "5472fe5f94d16189da8dbad72ccba2f671504ffc", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/282/1/115.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "59d32cb193b41f34e47a4edba1fff99f2163158c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
232775442
pes2o/s2orc
v3-fos-license
Genetic Causes of Oculocutaneous Albinism in Pakistani Population Melanin pigment helps protect our body from broad wavelength solar radiation and skin cancer. Among other pigmentation disorders in humans, albinism is reported to manifest in both syndromic and nonsyndromic forms as well as with varying inheritance patterns. Oculocutaneous albinism (OCA), an autosomal recessive nonsyndromic form of albinism, presents as partial to complete loss of melanin in the skin, hair, and iris. OCA has been known to be caused by pathogenic variants in seven different genes, so far, according to all the currently published population studies. However, the detection rate of alleles causing OCA varies from 50% to 90%. One of the significant challenges of uncovering the pathological variant underlying disease etiology is inter- and intra-familial locus heterogeneity. This problem is especially pertinent in highly inbred populations. As examples of such familial locus heterogeneity, we present nine consanguineous Pakistani families with segregating OCA due to variants in one or two different known albinism-associated genes. All of the identified variants are predicted to be pathogenic, which was corroborated by several in silico algorithms and association with diverse clinical phenotypes. We report an individual affected with OCA carries heterozygous, likely pathogenic variants in TYR and OCA2, raising the question of a possible digenic inheritance. Altogether, our study highlights the significance of exome sequencing for the complete genetic diagnosis of inbred families and provides the ramifications of potential genetic interaction and digenic inheritance of variants in the TYR and OCA2 genes. Introduction Melanosomes are the cellular organelles (~500 nm in diameter) that are involved in the synthesis, storage, and transportation of melanin pigment in various tissues. This includes but is not limited to the skin, retinal pigment epithelium cells (RPE), and stria vascularis of the inner ear in mammals [1]. The multi-step melanocyte development process is comprised of fate specification, migration, and differentiation in a highly controlled temporospatial manner [2,3]. Melanocytes operate under the control of multiple gene regulatory networks for the sake of optimal functionality [4]. Significant aberrations at any stage of melanocyte, melanosome, or melanin synthesis and their inter-and intracellular transport can lead to heterogeneous pigmentation disorders in humans. Insufficient or lack of pigmentation makes the affected individuals more vulnerable to ultraviolet-mediated skin abrasions and prone to developing life-threatening conditions, e.g., melanoma and skin carcinoma [5,6]. Oculocutaneous albinism (OCA) is a pigmentation disorder that presents a lack of pigment in the skin, eyes, and hair follicles [7]. Worldwide, albinism affects approximately every 1 in 17,000 individuals, though the prevalence of OCA subtypes varies among different populations [8,9]. Additionally, in humans, OCA can manifest as part of a multi-organ syndrome or an isolated (non-syndromic; nsOCA) clinical entity. Clinical features of OCA include nystagmus, photophobia, strabismus, foveal hypoplasia, visual deficits, and misrouting of the optic nerve at the chiasm [10]. Among the known genetic causes of nsOCA, variants in TYR and OCA2 are the most prevalent worldwide [11][12][13]. 
TYR encodes a transmembrane glycoprotein tyrosinase that resides in the melanosome membrane and plays a vital role in catalyzing the initial and ratelimiting steps of melanin synthesis [14]. In contrast, the OCA2-encoded transmembrane protein is involved in the maintenance of melanosome pH and activity of the chloride-ion channels [15,16]. In recent years, advances in massively parallel sequencing approaches have expedited the process of gaining insight into the genetic basis of Mendelian disorders. These massive genetic profiling projects have also brought to light the significance and severity of several crucial issues. These issues include the variability in disease onset and progression rate, incomplete penetrance, and high inter-and intra-familial genetic heterogeneity for Mendelian disorders, including OCA [17,18]. Recently, a rhesus macaque model of albinism revealed biallelic variants in both TYR and OCA2 that have been used to carry out foveal development studies and preclinical trials of new therapies for OCA [19]. The inheritance of pathogenic variants at different loci, which triggers the disease commencement [20], could also be suggestive of some level of genetic association or functional corroboration between these loci [21]. The current study strives to find the single, double, or multiple disease-associated variants in known OCA genes in inbred Pakistani families with diverse ethnicities with the goals of providing molecular diagnosis and identifying potential genetic interactions between known OCA genes in humans. Ethics Statement After receiving study approval by the Institutional Review Boards and Ethics committees (HP-00061036, approved on 20 January 2020) at participating institutes (Universities of Maryland, Baltimore, MD, USA, Liaquat University of Medical and Health Sciences, Jamshoro, Bahauddin Zakariya University, Multan, and Mirpur University of Science and Technology, Mirpur, Azad Jammu and Kashmir, Pakistan), families that were segregating OCA were identified and ascertained from the Sindh, Kashmir, and Punjab provinces of Pakistan. All the protocols used to carry out this study ensued the Declaration of Helsinki. Written informed consent was also obtained from all participants before enrollment. Peripheral venous blood samples were collected from all the participating individuals for the genomic DNA extraction. Clinical Examination We recorded a detailed clinical history by interviewing subjects at the time of enrollment. Photographs were taken to document the pigmentation phenotype of the skin, eyes, and hair. Ophthalmic evaluations consisting of a visual acuity test, slit lamp microscopy, fundoscopy, and optical coherence tomography were performed on the available subjects by clinicians. Sanger Sequencing of Known OCA Genes For the genetic screening, we amplified both the coding and exon-intron junction regions of all the exons of known nonsyndromic OCA genes through PCR using Econotaq DNA Polymerase (Bioresearch Technologies, Radnor, PA, USA). The samples were then subjected to Sanger sequencing as previously described [22]. Allele-specific PCR was also used to confirm results for a few variants [23]. Bioinformatic Analysis We used Varsome [24] for classification of the identified variants in accordance with the American College of Medical Genetics and Genomics (ACMG) guidelines [25]. 
We also used several other in silico algorithms, including DANN (which presents a score based upon deep neural networks) [26], REVEL (which predicts pathogenicity using 13 independent programs: MutPred, FATHMM v2.3, VEST v3.0, Polyphen-2, SIFT, PROVEAN, MutationAssessor, MutationTaster, LRT, GERP++, SiPhy, phyloP, and phastCons) [27], MetaSVM (which shows the combinatory result of nine pathogenicity prediction programs and 1KG allele frequency database) [28], and DEOGEN2 (which integrates information related to amino acid, protein structure, domain function, and molecular pathway) [29] to evaluate the impact of identified variants on the encoded proteins. Finally, Clustal Omega was used to show protein conservation across several species, and protein 3D structures were generated and visualized by Phyre2 and Chimera, respectively. Clinical Manifestation We enrolled nine consanguineous families segregating OCA (Figures 1 and 2) from different regions of Pakistan, including the Sindh, Kashmir, and Punjab provinces. Affected individuals from all of the recruited families presented with cardinal features of OCA symptoms that included hypopigmentation of the skin, white to yellow-white hair color, lightly pigmented eyes, reduced vision, iris transillumination, nystagmus, and photophobia (Table 1). Representative fundus and optical coherence tomography (OCT) images of the affected (V:6) and unaffected individuals of family LUAB08 are shown in Figure 3. As can be seen in contrast to the well-developed fovea/macula with normal pigmentation in the unaffected individual (V:2; aged 45 years), the fundus images of the affected individual (V:6, aged 47 years) show foveal hypoplasia (arrowhead) with prominent choroidal vasculature (arrow) and variable levels of pale-pigmented retinal epithelial layer (particularly outside the vascular arcs) ( Figure 3A). Similarly, OCT of the unaffected individual (V:2) show a normally structured fovea, foveal pit, and all retinal layers ( Figure 3B). Conversely, the OCT image of the affected individual (V:6) revealed a lack of outer nuclear layer widening at the fovea and an absence of the foveal pit ( Figure 3B). Furthermore, the mean of the macular thickness (shown by the macular thickness map using 1, 3, and 6 mm ETDRS circles describing inner fovea, inner, and outer macula, respectively), was reduced in the affected individual (V:6) as compared to unaffected sibling ( Figure 3B). Slit lamp microscopy in the affected individuals (IV:1 and IV:2) of family PKAB107 showed iris transillumination and albinotic fundus ( Figure 3C) that is consistent with the albinism phenotype. Intriguingly, the affected individual, III:1, of family LUAB17 is heterozygous for both TYR and OCA2 variants. This raises the question of digenic inheritance of the OCA phenotype and genetic interaction between these two known OCA genes.
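The filtering and annotation strategy described in the Methods (a gnomAD minor allele frequency cutoff, plus a panel of in silico predictors such as DANN, REVEL, MetaSVM, and DEOGEN2) can be summarized as a small triage sketch. The variant records and the score cutoffs below are illustrative assumptions for the example; they are not values from this study, and the thresholds are not official tool recommendations.

# Illustrative triage of candidate variants (hypothetical records and thresholds only).
candidates = [
    {"variant": "rare_missense_1", "gnomad_af": 0.00004,
     "dann": 0.998, "revel": 0.93, "metasvm": "D", "deogen2": "D"},
    {"variant": "rare_missense_2", "gnomad_af": 0.00043,
     "dann": 0.97,  "revel": 0.31, "metasvm": "T", "deogen2": "T"},
    {"variant": "common_snp",      "gnomad_af": 0.12,
     "dann": 0.40,  "revel": 0.05, "metasvm": "T", "deogen2": "T"},
]

MAX_AF = 0.001   # rarity filter of the kind used in the study

def damaging_votes(rec):
    # Count predictors calling the variant damaging; cutoffs are placeholders for the example.
    votes = 0
    votes += rec["dann"]  >= 0.96
    votes += rec["revel"] >= 0.5
    votes += rec["metasvm"]  == "D"
    votes += rec["deogen2"]  == "D"
    return votes

for rec in candidates:
    if rec["gnomad_af"] >= MAX_AF:
        continue   # too common to be taken forward for segregation analysis
    print(rec["variant"], "-> damaging calls:", damaging_votes(rec), "of 4")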
Identification of Pathogenic Variants in OCA-Affected Families Next, to determine the genetic causes of OCA segregating in these nine families, Sanger sequencing of coding and non-coding exons of all six known OCA genes (TYR (OCA1), OCA2 (OCA2), TYRP1 (OCA3), SLC45A2 (OCA4), SLC24A5 (OCA6) and C10ORF11 (OCA7)) was performed for the proband of each family. Both homozygous and compound heterozygous variants were identified in these genes. All variants with a minor allele frequency of <0.001 in the gnomAD database were considered for segregation analysis in all the participating family members. Using this approach, we were able to resolve the locus heterogeneity in all families. In five families, the variants in TYR were associated with the disease phenotype, while compound heterozygous variants in OCA2 are responsible for OCA in one family ( Figure 1; Table 2). Four previously reported variants, c.832C > T (p.(Arg278*)), c.1255G > A (p.(Gly419Arg)), c.649C > T (p.(Arg217Trp)), and c.1037G > T (p.(Gly346Val)) in TYR were found segregating with the OCA phenotype in the homozygous or compound heterozygous (family LUAB33) state in five families (Figure 1). Two novel variants, c.827T > A (p.(Val276Glu)) and c.877G > C (p.(Glu293Gln)), of OCA2 were found in family 6 ( Figure 1). Intriguingly, in family LUAB17, the affected individuals harbor rare variants both in TYR and OCA2 genes that are predicted to be deleterious, while the unaffected family member is a carrier of the identified TYR variant (Figure 2; Table 1). In family LUAB17, we identified the segregation of two previously reported pathogenic missense variants (c.649C>T: p.(Arg217Trp); c.1456G>T: p.(Asp486Tyr)) of TYR and OCA2, respectively, in multiple genotype states (Figure 2). Importantly, the affected individual III:1 (white skin and hair and brown iris color) was found to be heterozygous for both TYR and OCA2 variants, and thus poses the question of digenic inheritance of an OCA phenotype and genetic interaction between these two known OCA genes.
Although screening of both coding and exon-intron splicing regions did not reveal any additional pathogenic variant either in TYR or OCA2 in all the affected individuals of family LUAB17, we cannot rule out the possibility of deep intronic variants of either gene acting in trans with the identified variants. Finally, all the affected individuals of families PKAB107 and LUAB08 were found to be homozygous for the known variants (c.1255G>A: p.(Gly419Arg); c.832C>T (p.(Arg278*)) of TYR, respectively (Figure 2). Some of the participating members of these families were also heterozygous for a rare variant (c.954G>A; p.(Met318Ile)) of OCA2 (Figure 2). Although the p.(Met318Ile) does not have an evolutionarily conserved residue ( Figure 4A), it has high Combined Annotation Dependent Depletion (CADD) scores and was predicted pathogenic by few in silico algorithms (Table 2). However, the individual III:4 of family PKAB107 and individuals V:1; VI:3 of family LUAB08 that are heterozygous for TYR, and the p.(Met318Ile) OCA2 (Figure 2), have no pigmentation problems. On the other hand, in the exome data of 141,334 individuals listed in the gnomAD database, only one homozygote (minor allele frequency: 4.28 × 10 −4 ) was found. With the current evidence that lacks a detailed pigmentation phenotype and takes into account the description of the p.(Met318Ile) homozygote without functional studies, we cannot conclude if p.(Met318Ile) would be pathogenic or not in the homozygous state. However, we included this rare variant in our in silico 3-dimensional molecular modeling to assess the potential impact on the OCA2 protein along with other identified variants (Figure 4). Protein Modeling of TYR and OCA2 Variants Collectively, we have identified six potential missense variants of TYR and OCA2 in our OCA families ( Table 2). All these variants are either absent or have very low frequencies in the gnomAD database, are predicted by several in silico algorithms to be damaging (Table 2), and also most of them have high conservation across multiple species ( Figure 4A). To assess the predicted impact of the identified OCA1-associated variants on the encoded tyrosinase enzyme secondary structures, we performed 3D molecular modeling with Phyre-2 software. The p.(Arg217Trp) missense variant of TYR is predicted to be present in the copperbinding domain, which is vital for the oxidoreductase activity of the encoded tyrosinase enzyme. The WT arginine residue at position 217 is located on the protein surface and is predicted to form hydrogen bonds with p.Glu221 and p.Leu213 residues as well as a salt bridge with p.Glu221. Due to differences in the structure and properties, the p.(Arg217Trp) replacement is predicted to induce a loss of ionic interactions with other residues ( Figure 4B). Similarly, the p.(Gly346Val) variant found in family LUAB33, is also located within the tyrosinase copper-binding domain. Replacement of glycine at position 346 with valine is predicted to introduce new hydrogen bonds and force the local protein backbone into an improper conformation (due to size and charge differences between the WT and mutated residues) ( Figure 4B). Finally, the glycine residue at position 419 is buried in the lumenal melanosome residues of the repeat stretch of the tyrosinase enzyme. Replacement of the glycine with a bigger and positively charged arginine residue at position 419 is predicted to disrupt the protein folding and secondary structure as well as introduce aberrant ionic interactions ( Figure 4B). 
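A back-of-envelope Hardy-Weinberg calculation puts the single gnomAD homozygote quoted above for p.(Met318Ile) in context. Under random mating, the expected number of homozygotes among N individuals is roughly q^2 * N; real populations (especially with consanguinity or substructure) carry more homozygotes than this, so the figure below is only a rough yardstick, not evidence for or against pathogenicity.

# Rough Hardy-Weinberg expectation for p.(Met318Ile) homozygotes in gnomAD exomes.
q = 4.28e-4           # minor allele frequency quoted in the text
n_individuals = 141_334

expected_hom = q**2 * n_individuals
approx_allele_count = round(2 * n_individuals * q)

print(f"expected homozygotes under HWE: {expected_hom:.3f}")   # ~0.026
print(f"approximate allele count: {approx_allele_count}")       # ~121 alleles
# One observed homozygote therefore exceeds the random-mating expectation,
# which is unsurprising for populations with elevated autozygosity.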
We also modeled the OCA2-associated missense variants. The p.Val276Glu variant of OCA2 is predicted to replace the neutral (valine) residue with a negatively charged (glutamic acid) residue, which may cause repulsion of ligands or other residues of the same charge. Furthermore, differences in the size and hydrophobicity of valine and glutamic acid are also predicted to result in the loss of hydrophobic interactions ( Figure 4B). Similarly, the p.Glu293Gln variant is predicted to result in a loss of charge and associated interactions with other residues in the core of the encoded protein ( Figure 4B). The p.Met318Ile variant of uncertain significance found in families, PKAB107 and LUAB08 (Figure 2), is located in the alpha-helix loop. Replacing the residue isoleucine (p.Met318Ile) would cause the protein to resist the alpha-helices secondary structure and would likely cause spacing in the secondary structure due to the small size ( Figure 4B). Finally, the p.Asp486Tyr change is predicted to negatively impact the protein folding and ionic interactions due to the differences in size and charge among amino acids ( Figure 4B). Discussion OCA is a clinically and genetically heterogeneous disorder that segregates in an autosomal recessive pattern in humans. Unlike any other genetic disorders caused by single-gene pathogenic variants (e.g., cystic fibrosis), non-syndromic presentations of OCA are already linked with eight distinct autosomal genetic loci. Among these genetic links, variants in the TYR (OCA1) and OCA2 (OCA2) genes account for a majority of the OCA cases worldwide, including those in Pakistan [22,23]. Besides the hundreds of homozygous variants, there are many paragons illustrating the inheritance of the compound heterozygous variants of TYR and OCA2 and their linkage to the OCA phenotype. Further, challenges in uncovering the pathological variant underlying disease etiology are imposed by the inter-and intra-familial locus heterogeneity. Our study illustrates nine examples of familial locus heterogeneity for nonsyndromic OCA. We describe four OCA families (LUAB27, LUAB30, LUAB32, and Family 5) that harbor two known TYR variants (p.(Arg278*); p.(Gly419Arg)) that segregate in the homozygous state. The affected individuals of family LUAB33 inherited two heterozygous variants (p.(Arg217Trp); p.(Gly346Val)) of TYR in trans configuration from their parents. Similarly, two novel compound heterozygous variants (p.(Val276Glu); p.(Glu293Gln)) of OCA2 were found in the affected individuals of Family 6. Besides these cases of single-gene variants, we also found variants of both TYR and OCA2 in different zygosity combinations within family LUAB17.
The obtained combination of OCA variants might interact in a novel manner to generate the observed OCA phenotypes (e.g., individual III:1 family LUAB17), accentuating the significance of genetic interactions towards OCA etiology. Possible digenic inheritance of OCA variants has also previously been proposed in other populations. For instance, five cases of OCA harboring distinct allelic combinations of TYR, OCA2, and SLC45A2 have been reported in the Chinese population [18]. This only serves to emphasize the importance of considering the implications of genetic interaction between multiple known OCA genes during embryonic development. Although digenic or oligogenic inheritance has not been proven for albinism, it has been reported for other Mendelian disorders, e.g., familial microscopic hematuria [32] and digenic familial exudative vitreoretinopathy [33]. For example, Bardet-Biedl syndrome is a well-studied vision disorder with oligogenic inheritance, genetic interactions, and phenotype modifications [34,35]. Currently, the sample size of OCA cases with oligogenic variants is not large enough for a meaningful evaluation of phenotype modifications. However, our study contributes useful genetic information towards such an endeavor. Conclusions In conclusion, our study expands the genetic spectrum of OCA in the Pakistani population, aids in the complete genetic testing and counseling of families inheriting variants of OCA genes, and raises the question of whether a potential genetic interaction and digenic inheritance of variants in TYR and OCA2 genes can exist.
2021-04-04T06:16:22.854Z
2021-03-28T00:00:00.000
{ "year": 2021, "sha1": "b975437bf9ef752f7e90be4c4571ae16b3582f97", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/12/4/492/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0cf3303b5927b3fe33bf26b1c82bf9c6cace1e6b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119559935
pes2o/s2orc
v3-fos-license
Complex orthogonal geometric structures of dimension three A complex orthogonal (geometric) structure on a complex manifold is a geometric structure locally modelled on a non-degenerate quadric. One of the first examples of such a structure on a compact manifold of dimension three was constructed by Guillot. In this paper, we show that the same manifold carries a family of uniformizable complex orthogonal (geometric) structures which includes Guillot's structure; here, a structure is said to be uniformizable if it is a quotient of an invariant open set of a quadric by a Kleinian group. We also construct a family of uniformizable complex (geometric) projective structures on a related compact complex manifold of dimension three. Introduction A (classical) Kleinian group Γ is a discrete subgroup of the group of Möbius transformations which acts properly discontinuously on some non-empty invariant open set of the Riemann sphere. It is wellknown that every classical Kleinian group Γ splits the Riemann sphere into two sets: the limit set and the discontinuity region; the dynamics of the group Γ is concentrated on the limit set, while the geometry lives in the discontinuity region. In fact, if the group acts freely on the discontinuity region Ω, then the quotient Γ \ Ω inherits the local structure of the Riemann sphere: it is a Riemann surface such that the projection Ω → Γ \ Ω is a local biholomorphism; thus, one may say that, in the classical setting, there is a strong relationship between the geometry and the dynamics of a Kleinian group. The Möbius transformations can be characterized either as the conformal automorphisms of the Riemann sphere which preserve the orientation or as the biholomorphisms of the complex projective space of dimension one or, finally, as the projective transformations of the complex projective plane which preserve a onedimensional non-degenerate quadric (conic). Accordingly, there are, at least, three natural generalizations of the classical Kleinian groups to higher dimensions: • A conformal Kleinian group is a discrete subgroup of the group Conf + (S n ) of conformal orientationpreserving automorphisms of the n-dimensional sphere S n that acts properly discontinuously on a non-empty invariant open set of S n . • A complex Kleinian group is a discrete subgroup of the group PSL(n + 1, C) of projective transformations of the n-dimensional complex proyective space CP n that acts properly discontinuously on a non-empty invariant open set of CP n . • A complex orthogonal Kleinian group is a discrete subgroup of the group PO(n + 1, C) of projective transformations which preserve the n-dimensional non-degenerate quadric Q n that acts properly discontinuously on a non-empty invariant open set of this quadric. The geometric structure (see Goldman [6]) determined by the quotient of a conformal Kleinian group, a complex Kleinian group or a complex orthogonal Kleinian group is called a uniformizable conformal structure, a uniformizable complex projective structure or a uniformizable complex orthogonal structure, respectively. Of these three kinds of groups, conformal Kleinian groups are the best-understood so far; a complete survey can be found in [11]. Much work has also been done on higher-dimensional complex Kleinian groups; some of the first examples were given by Kato [13], Larusson [16], Nori [19] and Seade and Verjovsky [20]. 
Complex orthogonal Kleinian groups are the least studied at the moment; one of the first examples on dimension three was constructed by Guillot in [9, p. 224, 225]. The first result of this paper is a family of uniformizable complex Kleinian groups which includes Guillot's example. consider the (unique) extension of the action of SL(2, C) × SL(2, C) on SL(2, C), which sends (g, h), x to gxh −1 , to Q 3 . Then, Q 3 −SL(2, C) is biholomorphic to CP 1 ×CP 1 and, for every group homomorphism u : Γ → SL(2, C), such that (1) Γ u := γ, u(γ) : γ ∈ Γ acts properly discontinuously on SL(2, C); then, Γ u acts properly discontinuously on (2) U Γ := SL(2, C) ∪ Ω × CP 1 . While this paper was in preparation, examples similar to those of this Theorem were obtained, independently using other techniques, by Guéritaud The examples constructed by Guillot correspond to the quotient Γ I \ U Γ of this Theorem, where I is the constant morphism and Γ is a convex-cocompact Kleinian group. The geometric study of the complex and the complex orthogonal Kleinian groups is complicated by the fact that there is no good way to define an analogue of the discontinuity region in these cases. This makes the examples of the corresponding uniformizable structures all the more valuable. One of the first examples of a compact manifold with a uniformizable complex orthogonal structure of dimension three was given by Guillot in [9, p. 224, 225] as the quotient of his example of complex Kleinian groups of dimension three. The main result of the present paper says that Guillot's example is part of a family of uniformizable complex orthogonal structures on the same manifold: Theorem 1.2. Let Γ ⊂ SL(2, C) be a torsion-free, convex-cocompact, (classical) Kleinian group with domain of discontinuity Ω in CP 1 . Consider the action of SL(2, C)×SL(2, C) on the three-dimensional non-degenerate quadric Q 3 , defined in Theorem 1.1, the open set U Γ ⊂ Q 3 , defined in (2), and for each group morphism u : Γ → SL(2, C), the group Γ u , defined in (1). Then, for each group morphism u : Γ → SL(2, C), sufficiently close to the constant morphism, U Γ is a maximal open set where Γ u acts properly discontinuously. Also, for all homomorphisms u, the quotients Γ u \ U Γ are compact and diffeomorphic to each other. The examples constructed by A. Guillot correspond to the quotient of Γ I \ U Γ of this Theorem, where I is the constant morphism. We call the Guillot manifold, the quotient manifold (both, differentiable and complex) and the Guillot structure, the complex orthogonal structure determined by it. We will also construct uniformizable complex projective structures on a related complex manifold of dimension three. Theorem 1.3. Let Γ ⊂ SL(2, C) be a torsion free, convex-cocompact, classical Kleinian group with domain of discontinuity Ω in CP 1 . Consider CP 3 as the projectivization of the space of 2 × 2 complex matrices and the action of SL(2, C) × SL(2, C) on it that sends (g, h), [x] to gxh −1 . Then, there exists an open set V Γ ⊂ CP 3 , such that, for each group homomorphism u : Γ → SL(2, C) sufficiently close to the constant morphism, V Γ is a maximal open set where Γ u , defined in (1), acts properly discontinuously. Also, for all u, the quotients Γ u \ V Γ are compact and diffeomorphic to each other. The complex manifold of this Theorem and the uniformizable complex projective structure induced by Γ I \ U Γ , where I is the constant morphism, were also found by Guillot. 
The spaces of homomorphisms from Γ to SL(2, C) × SL(2, C) of Theorems 1.2 and 1.3 are considered with the compact-open topology. As we will see, the groups Γ u of these theorems are embedded as subgroups into PO(5, C) and PO(4, C). If u is close to the constant morphism, the homomorphism γ → (γ, u(γ)) is close to γ → (γ, I). Then, each group Γ u of Theorem 1.2 determines a uniformizable complex orthogonal structure on the Guillot manifold which is close to the Guillot structure. If u and v are close to the constant morphism, the geometric structures determined by Γ u and Γ v coincide if and only if u and v are conjugate. The same phenomenon occurs in the context of Theorem 1.3.

The proofs of Theorems 1.1 and 1.2 go as follows. First, we will consider a torsion-free, finitely generated, (classical) Kleinian group Γ ⊂ SL(2, C) with domain of discontinuity Ω in CP 1 and u : Γ → SL(2, C) a group morphism. Then, we will recall that if we consider the intersection of Q 3 with the projectivization of the hyperplane in C 5 defined by z 3 = 0, we get the two-dimensional quadric Q 2 . We will also recall that there exists a SL(2, C) × SL(2, C) -equivariant biholomorphism from Q 2 to CP 1 × CP 1 . Next, we will consider the action of Γ u on CP 1 × CP 1 induced by this biholomorphism, where (γ, u(γ)) ∈ Γ u acts on (x, y) ∈ CP 1 × CP 1 . Since this action is properly discontinuous in the first coordinate of Ω × CP 1 , it follows that Γ u acts properly discontinuously and uniformly on Ω × CP 1 (by the uniformity of the action, we mean that, for every compact set, there is a bound on the number of Γ u -translates of this compact set that intersect it, and the bound is independent of u). Then, we will develop some of the ideas and techniques of Frances in [3] in order to study the dynamics of the compact sets of Q 3 for divergent sequences of Γ u . In particular, we will prove that if Γ u , defined in (1), acts properly discontinuously on SL(2, C), then it acts properly discontinuously on U Γ , defined in (2). Moreover, if Γ \ Ω is compact, then U Γ is maximal. This way, Theorem 1.1 will be proved. In order to prove Theorem 1.2, we will consider the group Γ to be convex-cocompact. Then, we will generalize Lemma 2.1 of Ghys [4, p. 119] to prove that Γ u acts properly discontinuously and uniformly on SL(2, C), for all u sufficiently close to the constant morphism. So, the hypotheses of Theorem 1.1 are satisfied and then, for all u sufficiently close to the constant morphism, Γ u acts properly discontinuously on U Γ . Then, we will continue developing the ideas and techniques of Frances in order to prove that Γ u acts uniformly on U Γ . So, there exists an open neighborhood V of the constant morphism I such that Γ acts properly discontinuously on V × U Γ . If V is a manifold, then this means that the map Γ \ (V × U Γ ) → V, [(ν, x)] → ν, where ν ∈ V, x ∈ U Γ , is a locally trivial fibration; this proves the theorem. If V is not a manifold, we will consider a resolution of singularities r : X → V of a neighborhood V of the constant morphism to construct a locally trivial fibration over X whose fibers are the quotients Γ u \ U Γ , and the Theorem will be proved in the general case. In order to prove Theorem 1.3, we will show that Γ u is a subgroup of PO(4, C) and that there exists a SL(2, C) × SL(2, C) -equivariant, continuous, proper and open map from Q 3 to CP 3 . We will push forward the set U Γ of Theorem 1.2 to get the set V Γ of Theorem 1.3. Section 2 is dedicated to the geometry of Q 3 .
In Section 3, we consider the action of Γ u on Q 2 and, on SL(2, C), for the homomorphisms u : Γ → SL(2, C) sufficiently close to the constant morphism. In Section 4, we study the dynamics of the accumulation points for the orbits of compact sets of Q 3 for divergent sequences of Γ u ; we prove that if Γ u acts properly discontinuously on SL(2, C), then it acts properly discontinuously on U Γ and if Γ \ Ω is compact, then U Γ is maximal. In Section 5, we prove that for u sufficiently close to the constant morphism, all the quotients Γ u \ U Γ are compact and diffeomorphic to each other. Finally, we construct the SL(2, C) × SL(2, C) -equivariant, continuous, open and proper map from Q 3 to CP 3 and push forward the complex orthogonal Kleinian group Γ u to get a complex Kleinian group. The author would like to thank Adolfo Guillot for all his help and support. The geometry of the quadric In this Section, we study the geometry of the non-degenerate quadric Q 3 of dimension three and its group PO(5, C) of transformations. We will consider the non-degenerate quadric Q 2 of dimension two obtained by intersecting Q 3 with the projectivization of the hyperplane z 3 = 0. We will recall that the orthogonal group O(4, C) is a subgroup of PO(5, C) which preserves Q 2 and that the group SO(4, C) of orthogonal matrices of determinant one is isomorphic to SL(2, C) × SL(2, C) / (I, I), (−I, −I) . We will also define two important kinds of subsets of Q 3 and study their geometry; namely, light geodesics and light cones. In Section 4, we will see that these sets appear naturally as the sets of accumulation points of the orbits of compact sets of Q 3 under discrete subgroups of SO(4, C). 2.1. The quadric and its automorphism group. The reader can consult Guillot [9] and Méndez [17,18] for further discussion of this Section. The non-degenerate quadratic form on C 5 defines the non-degenerate quadratic form on C 4 . The groups O(4, C) and O(5, C) consist of the matrices which preserve q * and q, respectively; the group SO(4, C) is the subgroup of O(4, C) which contains the matrices of determinant one. We say that two matrices in O(n, C) (n = 4, 5) are equivalent if one of them is a nonzero C * -multiple of the other and PO(n, C) is the set of equivalence classes. Consider the quadric in C 5 and the non-degenerate quadric in CP 4 . Then, PO(5, C) is the group of projective transformations which preserves Q 3 . Let H be the hyperplane in C 5 given by z 3 = 0; denote by π the projection π : and let Q 2 := Q 3 ∩ π(H). Since the composition of this embedding and the projection of O(5, C) onto PO(5, C) defines a holomorphic monomorphism φ from O(4, C) to PO(5, C). We also use the notation O(4, C) for its image; similarly, we write SO(4, C) for the image of this group in PO(5, C). The group O(4, C) is isomorphic to the subgroup of PO(5, C) that preserves the projection π(H) of the hyperplane H and the projection π(e 3 ) of the vector e 3 . Recall the embedding (1) considered in the Introduction; as the group O(4, C) preserves Q 2 , therefore, is biholomorphic to SL(2, C) and the action of SL(2, C) × SL(2, C) on SL(2, C) given by where f, g, x ∈ SL(2, C), defines a holomorphic action of SL(2, C) × SL(2, C) on Θ. This action extends in a unique way to a (non-faithful) action on Q 3 , so it defines an holomorphic homomorphism ψ from SL(2, C) × SL(2, C) to PO(5, C) whose image is contained in SO(4, C) and whose kernel is (I, I), (−I, −I) . 
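Purely as an illustration, and since only the qualitative properties matter here, the sketch below uses the representative non-degenerate form q(z) = z 1 z 5 + z 2 z 4 + z 3 ^2 on C 5 , chosen to be compatible with the expression z 2 z 4 + z 3 ^2 = 0 used for Q 1 in Section 2.2; the paper's own coordinates may differ by a linear change of variables, so this is an assumption, not the paper's formula. It verifies that the plane spanned by e 1 and e 2 is isotropic and exhibits a point of the quadric Q 3 :

```python
import numpy as np

def q(z):
    """Assumed representative quadratic form on C^5 whose zero locus defines Q^3."""
    z1, z2, z3, z4, z5 = z
    return z1 * z5 + z2 * z4 + z3 ** 2

def b(v, w):
    """The symmetric bilinear form associated with q, obtained by polarization."""
    return (q(v + w) - q(v) - q(w)) / 2

e = np.eye(5, dtype=complex)            # e[0], ..., e[4] play the role of e_1, ..., e_5

# The plane spanned by e_1 and e_2 is isotropic: q vanishes identically on it, so its
# projectivization is a light geodesic in the terminology of Section 2.2.
for a, c in [(1, 0), (0, 1), (1, 1), (2, -3j)]:
    print(q(a * e[0] + c * e[1]))       # all 0

print(b(e[0], e[1]))                    # 0: an isotropic plane satisfies W inside W-perp

# A point of the affine cone over Q^3 outside the hyperplane z_3 = 0, i.e. a point of
# Theta = Q^3 - Q^2 once projectivized.
z = np.array([1, 1, 1, 1, -2], dtype=complex)
print(q(z))                             # 0, so [1 : 1 : 1 : 1 : -2] lies on Q^3
```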
As the image of this homomorphism is a connected subgroup of the connected group SO(4, C) (see [5, p. 82]) and both of them are of the same dimension, this homomorphism is surjective. Therefore, ψ induces a biholomorphic isomorphism from SL(2, C) × SL(2, C) / (I, I), (−I, −I) to SO(4, C). The quadric Q 2 is biholomorphic to CP 1 × CP 1 . The function is a biholomorphism which is SO(4, C)-equivariant with respect to the restriction of the action of SO(4, C) on Q 3 and the action 2.2. Light geodesics and light cones. The results of this Section are analogous to those of the real case given by Frances in [3]. Consider the bilinear form b associated to the non-degenerate quadratic form q defined in (3). For each subspace W of C 5 , W ⊥ is the set of vectors v ∈ C 5 such that b(v, w) = 0 for all w ∈ W . A vector subspace W of C 5 is called isotropic if q(w) = 0 for all w ∈ W . There exist isotropic C-planes, for example e 1 , e 2 , where e 1 , . . . , e 5 is the canonical base of C 5 . For every subspace W of C 5 , dim(W ) + dim(W ⊥ ) = dim(C 5 ). If W is isotropic, by the polarization identity, we know that W ⊂ W ⊥ ; thus, there are no isotropic subspaces of C 5 of dimension three or four. The projectivization of an isotropic C-plane is called a light geodesic. The group PO(5, C) sends light geodesics to light geodesics. If p ∈ Q 3 , the union of all the light geodesics which contain p is called the light cone of p and is denoted by C(p). We have that if p is any point in C 5 such that π( p) = p, then where C 4 was defined in (4). Let us consider the following equivalence relation in C(p) − {p}: we say that x, y ∈ C(p) − {p} are equivalent if they belong to the same light geodesic which contain p. Let us denote by C(p) the space of all light geodesics which contain p, that is, the space of all equivalence classes of C(p) − {p}. The group PO(5, C) sends light cones to light cones. Proof. Recall that the non-degenerate quadric Q 1 := [0 : z 2 : z 3 : z 4 : 0] ∈ CP 2 : z 2 z 4 + z 2 3 = 0 is biholomorphic to CP 1 . In a similar fashion, for all g ∈ PO(5, C), the map p g : C π g(e 1 ) − π(g(e 1 )) = gives the space of light geodesics which contain π g(e 1 ) a structure of a complex manifold biholomorphic to CP 1 . Also, g induces a biholomorphismḡ from C π(e 1 ) to C π g(e 1 ) . Let us define the map There are two natural foliations on CP 1 × CP 1 , namely the vertical and the horizontal foliations, whose leaves are the sets of the form {z} × CP 1 , z ∈ CP 1 and CP 1 × {w}, w ∈ CP 1 , respectively. As the action of SO(4, C) is transitive on CP 1 × CP 1 and preserves the space of light geodesics, then, the leaves of these foliations are all the light geodesics contained in CP 1 × CP 1 . We will call these leaves vertical and horizontal light geodesics, respectively. If two light geodesics are both horizontal (or vertical), then we say that they are parallel. Some deformations of (classical) Kleinian groups In the last Section, we saw that there is an holomorphic epimorphism ψ : SL(2, C) × SL(2, C) → SO(4, C). In this Section, we will recall that (classical) Kleinian groups of SL(2, C) ∼ = SL(2, C) × {I} inject in SO(4, C), via this epimorphism. We will also recall a result of Guillot in [9, p. 224, 225] which says that these groups are in fact, orthogonal Kleinian groups. We will also study some deformations of them inside SO(4, C) and their geometry in the quadric Q 3 and in Θ = Q 3 − Q 2 . Consider the projection P : Consider a torsion-free (classical) Kleinian group Γ of SL(2, C) (I. 
Kra in [15] proved that they exist). It follows that Γ acts on CP 1 Also, if Γ is a lift of the (classical) Kleinian group Γ ⊂ PSL(2, C) and Ω is the domain of discontinuity of Γ in CP 1 , then Γ acts properly discontinuously on Ω and all the properties of Γ are still valid for Γ. By definition P(−I) = Id, where I is the identity matrix and Id the identity Möbius transformation. So, if there exists an element a ∈ Γ such that j(a) = −I, as P • j is the identity, then a = Id; however, this is a contradiction, since j is an isomorphism and, hence, j(Id) = I. Then −I / ∈ Γ. By similar arguments it follows that, if A ∈ Γ, then −A / ∈ Γ. The composition of the inclusion of SL(2, C) into SL(2, C)×SL(2, C), defined by g → (g, I), and ψ, defined in the Subsection 2.1, defines a holomorphic monomorphism from SL(2, C) to PO(5, C); hence, SL(2, C) can be considered as a subgroup of PO(5, C). Consider a (classical) Kleinian group Γ ⊂ SL(2, C) and a group morphism u : Γ → SL(2, C). Recall the definition of the group Γ u , given in (1); the kernel of the homomorphism ψ, given in Section 2.1, and that −I / ∈ Γ. Then ψ, restricted to Γ u , is also a monomorphism, and so, Γ u can be considered as a subgroup of PO(4, C). That is, the group Γ u is a deformation of the (classical) Kleinian group Γ inside SO(4, C). In fact, Γ u is torsion free. One of the first examples of orthogonal Kleinian groups of dimension three was given by A. Guillot in [9, pp. 224, 225]. Guillot proved that, if Γ is a torsion free, (classical) Kleinian group of SL(2, C), then Γ is a complex orthogonal Kleinian group of dimension three (by means of the homomorphism ψ, defined in Section 2.1). In particular, if Ω denotes the domain of discontinuity of Γ in CP 1 , then Γ acts properly discontinuously on Θ ∪ Ω × CP 1 (see Section 2). Guillot also proved that if, in addition, Γ is a convex-cocompact classical Kleinian group of SL(2, C), then Γ acts cocompactly on Θ ∪ Ω × CP 1 . As we discussed in the Introduction of this paper, we call this quotient the Guillot manifold and the Guillot structure the complex orthogonal structure determined by it. Recall that a (classical) Kleinian group Γ with domain of discontinuity Ω in CP 1 is convex-cocompact if Γ \ (H 3 ∪ Ω) is compact. Classical Fuchsian groups which define compact surfaces of genus g for g ≥ 1 and (classical) Schottky groups are examples of convex-cocompact groups. A good reference for convexcocompact Kleinian groups is [2]. is a group morphism sufficiently close to the constant morphism, then, Γ u , which was defined in (1), acts properly discontinuously and uniformly on SL(2, C). Remark 1: In Theorem 1.3 of [12], Kassel proved that Γ u acts properly discontinuously on SL(2, C). Remark 2: The proof of this Proposition is based on a modification of Lemma 1.2 of Ghys in [4]. Proof. We will show that there exists an open neighborhood V of the constant morphism such that is properly discontinuous. In Lemma 1.2 of [4], Ghys showed the same statement but for discrete and cocompact groups, that is, he proved that if Γ is discrete and cocompact, then, Γ u acts properly discontinuously and uniformly on SL(2, C). In his proof, Ghys considered three different topologies on Γ: The word metric (we denote by l(γ) the lenght of γ), the restriction of the metric induced by any right-invariant Riemmannian metric d on SL(2, C) and the restriction of any Euclidean norm || · || on C 4 . 
Then, he used the fact that Γ \ SL(2, C) is compact to apply the Švarc-Milnor Lemma and concluded that l(·) is bounded by a function which depends linearly on d(I, ·). On the other hand, it also happens that, in SL(2, C), d(I, ·) is always bounded by a function which depends logarithmically on || · ||. Therefore, l(·) is bounded by a function which depends logarithmically on || · ||. Ghys considered a compact set K ⊂ SL(2, C), an element γ in Γ and a group morphism u : Γ → SL(2, C), such that the (γ, u(γ))-translate of K intersects K. By routine analysis, it follows that ||γ|| is bounded by a constant which depends exponentially on l(γ). By this and by the last paragraph, l(γ) is bounded by a constant plus a term which depends linearly on l(γ). It happens that, if the group morphism u is sufficiently close to the constant morphism, then we can get a constant upper bound on l(γ), and this constant does not depend on the homomorphism u. So, if u is sufficiently close to the constant morphism, only a finite number of Γ u -translates of K intersect K and the set of these translates does not depend on u.

We will present a modification of Ghys' Lemma. We will consider a convex-cocompact (classical) Kleinian group Γ of SL(2, C) and first prove that SL(2, C) and H 3 are quasi-isometric with respect to any left-invariant Riemannian metric on SL(2, C) and any word metric on Γ. Then, we will use that Γ is convex-cocompact; in particular, there exists a Γ-invariant convex set of H 3 whose orbit space is compact, in order to apply the Švarc-Milnor Lemma and prove that a similar upper bound of l(·) (as a linear function of d(I, ·)) is valid. We will first construct a specific left-invariant Riemannian metric on SL(2, C) and prove that SL(2, C) and H 3 are quasi-isometric with respect to this specific left-invariant Riemannian metric on SL(2, C) and any word metric on Γ. As any two left-invariant Riemannian metrics on SL(2, C) are quasi-isometric, this will imply that SL(2, C) and H 3 are quasi-isometric with respect to any left-invariant Riemannian metric on SL(2, C) and any word metric on Γ. It can be proved that the natural projection ρ : SL(2, C) → H 3 is a trivial fibration with fiber SU(2) (for further discussion see [18, pp. 37-38]) and, hence, T I SL(2, C) = T I H 3 ⊕ T I SU(2). Let us denote by g 1 the hyperbolic metric of H 3 and by g 1 I the inner product in T I H 3 determined by g 1 at the identity. Consider an arbitrary inner product g 2 I in T I SU(2). Then the direct sum g I := g 1 I ⊕ g 2 I is an inner product in T I SL(2, C). Let us consider the corresponding left-invariant Riemannian metric in SL(2, C), g s (v, w) := g I (D s L s −1 (v), D s L s −1 (w)), where L s −1 : G → G is the left multiplication by s −1 and D s denotes the differential of a function at the point s. After some computation, we prove that ρ is a quasi-isometry with respect to the Riemannian metric g I in SL(2, C) and the hyperbolic metric g 1 in H 3 (for further discussion see [18, pp. 42-44]). Let us consider the convex hull C(Γ) of the limit set Λ of Γ. Then C(Γ) ∩ H 3 is a closed, convex and Γ-invariant subset of H 3 . By definition of a convex-cocompact Kleinian group, we know that the convex core C(M ) of M := Γ \ H 3 is compact. Without loss of generality, we can suppose that (0, 0, 1) ∈ C(M ) and so, by the Švarc-Milnor Lemma, Γ → C(M ), γ → γ(0, 0, 1), is a quasi-isometry. Then, by the latter and, as ρ is a quasi-isometry, we get that there exist constants B, C > 0 such that l(γ) ≤ B d(I, γ) + C for all γ ∈ Γ. We can now follow the same arguments as E. Ghys to prove that (11) is properly discontinuous.
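The interplay between the word length l(·) and the matrix norm || · || used in the argument above can be illustrated numerically. In the following sketch the two loxodromic matrices are an illustrative choice which we assume generates a Schottky-like free subgroup (this is not verified here), and the helper names are ours; for random reduced words, the logarithm of the norm grows roughly linearly with the word length, in line with the logarithmic control of l(·) discussed above:

```python
import numpy as np

A = np.array([[10.0, 0.0], [0.0, 0.1]])          # loxodromic, fixed points 0 and infinity
T = np.array([[1.0, 1.0], [1.0, 2.0]])           # det T = 1
B = T @ A @ np.linalg.inv(T)                     # loxodromic with different fixed points

gens = {1: A, -1: np.linalg.inv(A), 2: B, -2: np.linalg.inv(B)}
rng = np.random.default_rng(1)

def random_reduced_word(n):
    """A random reduced word of length n in the letters 1 (=A), -1, 2 (=B), -2."""
    word = [int(rng.choice([1, -1, 2, -2]))]
    while len(word) < n:
        s = int(rng.choice([1, -1, 2, -2]))
        if s != -word[-1]:                       # keep the word reduced
            word.append(s)
    return word

for n in (2, 4, 8, 16):
    w = random_reduced_word(n)
    M = np.eye(2)
    for s in w:
        M = M @ gens[s]
    # Word length versus log of the Frobenius norm of the product matrix.
    print(n, round(float(np.log(np.linalg.norm(M))), 1))
```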
is a classical Kleinian group of SL(2, C) with domain of discontinuity Ω in CP 1 . Then Γ u acts properly discontinuously and uniformly on Ω × CP 1 , for all u ∈ Hom Γ, SL(2, C) . Proof. We will show that is properly discontinuous. As the projection is continuous and Γ-equivariant and Γ acts properly discontinuously on Ω, it follows that Γ acts properly discontinuously on Ω × CP 1 . As Γ acts properly discontinuously on the second factor of (13), then, we get the Proposition. Divergent sequences and limit sets We begin this Section giving a formula to compute the accumulation points of the orbits of compact sets in CP m for divergent sequences of GL(m + 1, C). This formula gives a qualitative expression to the idea of Frances (see Propositions 3, 4 and 5 of [3]) that the dynamics of the compact sets, of a divergent sequence and of a compact permutation of this divergent sequence, are equal. Let us consider a continuous action G × X → X of a topological group G on a locally compact metric space X. We say that x ∈ X is dynamically related to y ∈ X if there exists a convergent sequence (x n ) of X and a divergent sequence (g n ) of G, such that, x = lim x n , (g n x n ) is convergent and y = lim g n x n . This relation is not necessarily reflexive nor transitive, but it is symmetric, and this allows us to say, without any ambiguity, that two points are dynamically related. If (g n ) is divergent, let us denote by D (gn) (x) the set of points y ∈ X, such that there exists a sequence (x k ) of X which converges to x, together with a subsequence (g n k ) of (g n ), such that (g n k x k ) is convergent and y = lim g n k x k . Let us define where the union is taken over all divergent sequences of G and U is an open set of X. That is, D G (x) is the set of all points of X which are dynamically related to x. We say that y ∈ X is an accumulation point of the orbit of the compact set K ⊂ X if every neighborhood of y intersects infinitely many G-translates of K. If U ⊂ X is an open set and G is a discrete group, the set D G (U ) equals the set of all the accumulation points of the orbits of G of all the compact sets of U . Proposition 4.1. Let m ∈ N, suppose that u, u ∈ GL(m + 1, C), for all n ∈ N, g n , u n , u n also belong to GL(m + 1, C), u n → u, u n → u, (g n ) is a divergent sequence of GL(m + 1, C) and U is an open set of CP m , then, Proof. We will prove this Proposition for m = 4, the general proof is similar. It is well-known that, if a sequence of 4 × 4 matrices converges in the usual topology, then, it converges locally uniformly on CP 4 ; then, u n → u and u n → u, locally uniformly on CP 4 . We will first prove that Suppose that z ∈ D (gnun) u −1 (U ) ; then, we can assume that there exists a convergent sequence (y n ) of u −1 (U ) such that (g n u n (y n )) is convergent and z = lim g n u n (y n ). Let y := lim y n ∈ u −1 (U ); in order to prove that z ∈ D (gn) (U ), we will show that u n (y n ) → u(y). Consider any ǫ > 0. Since u n → u locally uniformly, y n → y and u is continuous, there exists M ∈ N, such that for all n ≥ M , Then, u n (y n ) → u(y). Now, suppose that w ∈ D (gn) (U ). We can then assume that there exists a convergent sequence (z n ) of U , such that, (g n z n ) is convergent and w = lim g n z n . Let z := lim z n ∈ U ; in order to prove that w ∈ D (gnun) u −1 (U ) , we will show that u −1 n (z n ) → u −1 (z). Consider any ǫ > 0. Since u −1 n → u −1 locally uniformly, z n → z and u −1 is continuous, there exists N ∈ N, such that, for all n ≥ N , . This proves (14). 
Now, we will prove Consider z ∈ D ( ungn) (U ). We can assume that there exists a convergent sequence (x n ) of U , such that, u n g n (x n ) is convergent and z = lim u n g n (x n ); if not, take a subsequence. In order to show that z ∈ u D (gn) (U ) , we will show that there exists y ∈ D (gn) (U ), such that, z = u(y). Consider any ǫ > 0 and let z n := u n g n (x n ). As u −1 n → u −1 locally uniformly in X, u −1 is continuous and z n → z, there exists N ∈ N, such that, for all n ≥ N , Now, suppose that z ∈ u D (gn) (U ) , then we can assume that there exists a convergence sequence (x n ) of U , such that (g n (x n )) is convergent and if y := lim g n (x n ); then z = u(y). In order to prove that z ∈ D ( ungn) (U ), we will show that z = lim u n g n (x n ). Consider any ǫ > 0; since u is continuous, u n → u locally uniformly and g n (x n ) → y, there exists N ∈ N, such that, for all n ≥ N d z, u n g n (x n ) ≤ d z, u(g n x n ) + d u(g n x n ), u n (g n x n ) < ǫ. Then u n g n (x n ) → z. This proves (15). Finally, (14) and (15) prove the Proposition. Now, let us consider a torsion-free, finitely generated, (classical) Kleinian group Γ and a group morphism u : Γ → SL(2, C) sufficiently close to the constant morphism. Recall the definition of the group Γ u given in (1). Recall also that, in Section 3, we saw that Γ u is a subgroup of PO(5, C) and a deformation of Γ inside SO(4, C). Also, recall that, in Section 3, we studied the geometry of Γ u on the quadric Q 2 and on Θ = Q 3 − Q 2 (see Propositions 3.1 and 3.2). In this Section, we use these results to study the geometry of Γ u on Q 3 . Now, we will use the last Proposition and the Cartan decomposition for SO(4, C) (see [14, p. 397 Let (g n ) be a sequence of SO(4, C) and g n = u n a n u n , be the Cartan decomposition of g n , where u n , u n ∈ K, a n ∈ A + . We claim that there exists i ∈ O(4, C), such that, where, if (λ n ) or (µ n ) converges to ∞, then it converges to +∞. Also, i is the identity matrix, or it is an element of O(4, C) − SO(4, C), such that, if restricted to Q 2 , can be represented in coordinates as where f, g ∈ SL(2, C). In particular, in this last case, i interchanges the direction of the light geodesics contained in Q 2 ; the reader can consult [18, pp. 76, 77] for an example of this. We say that (g n ) tends simply to infinity if: • The sequences (u n ) and ( u n ) converge, • The sequences (λ n ), (µ n ) and (λ n − µ n ), converge in R. Following Frances in [3, p. 8], if (g n ) is a sequence of O(4, C) which tends simply to infinity, we say that (g n ) is of balanced distortion if (λ n ) and (µ n ) converge to +∞ and (λ n − µ n ) converges to a point in R. We say that (g n ) is of bounded distortion if one of the sequences (λ n ) and (µ n ) converge to +∞ and the other converges to a point in R. We say that (g n ) is of mixed distortion if (λ n ) and (µ n ) converge to +∞ and (λ n − µ n ) converge to ∞. It is clear that every sequence of SO(4, C) which tends simply to infinity is of one of these kinds. Now, we will show that this classification corresponds to the different dynamics of the orbits of compact sets in Q 3 . Also, from the definitions, it follows that, in order to compute the accumulation points of the orbits of the compact sets in Q 3 for discrete subgroups of O(4, C), it is enough to consider only the sequences which tend simply to infinity. If p = q, then l q ∩ l p = ∅. Also, the collection {l q } q∈∆ + foliates Q 2 − ∆ − . 
(3) If the sequence (g n ) is of the form (17); then, the function that assigns to each q ∈ ∆ + its corresponding l q , constructed in part 2 of this Proposition, is a Möbius transformation. Proof. We will first consider sequences of the form (17); in other words, let a n := where (λ n ) and (µ n ) are sequences of R which converge to +∞ and (λ n − µ n ) converges in R. In order to prove part 1 of this Proposition, we will prove that, for all y ∈ Q 3 − ∆ − , there exists a point p ∈ ∆ + , such that, D (gn) (y) = p and, for all p ∈ ∆ + , there exists a point y ∈ Q 3 − ∆ − , such that, D (gn) (y) = p. Let ∇ + and ∇ − be the light geodesics π e 1 , e 2 and π e 4 , e 5 , respectively, and δ := lim n→∞ (λ n − µ n ). Also, by the latter, it is clear that, there exist parametrizations of ∇ + and of the space of horizontal light geodesics, such that, the function that assigns to each q ∈ ∆ + its corresponding l q is of the form: This proves part 3 of this Proposition. For all n ∈ N, let g n = u n ia n i −1 u n be the Cartan decomposition of g n , where • (a n ) is of the form (17), • (λ n ) and (µ n ) are sequences that converge to +∞, • (λ n − µ n ) is a sequence that converges to a point in R, • ( u n ) and (u n ) are convergent sequences of SO(4, C) • i is the identity matrix or it is an element of O(4, C) − SO(4, C) of the form (18). Let us define u := lim u n , u := lim u n , ∆ + := u i(∇ + and ∆ − := u −1 i(∇ − ) . As Q 2 is O(4, C)invariant, then ∆ + and ∆ − are light geodesics contained in Q 2 . As SO(4, C) fixes the direction of the light geodesics contained in Q 2 , by the properties of i, we have that ∆ + and ∆ − are parallel light geodesics. Let p ∈ ∆ + , define q := i −1 u −1 (p) ∈ ∇ + . By part 2 of this Proposition for sequences of the form (17), there exists a light geodesic l q ⊂ Q 3 that is transversal to ∇ − , such that, l q − ∇ − ⊂ D (an) (q). Also, the collection of all light geodesics l q , constructed this way, is a folliation of Q 3 − ∇ − . By Proposition 4.1 and, as O(4, C) sends light geodesics to light geodesics, l p := u −1 i(l q ) is a light geodesic that is transversal to ∆ − , such that, l p − ∆ − ⊂ D (gn) (p) and the collection of all light geodesics l q , constructed this way, is a folliation of Q 2 − ∆ − . This proves part 2 of the Proposition for all sequences. Recall that if p ∈ Q 3 , in Section 2 we defined C(p) as the union of all light geodesics in Q 3 which contain p, C(p) as the space of all the light geodesics which contain p and we showed that C(p) is a complex manifold biholomorphic to the Riemann sphere. Let us also recall that, by definition (see Subection 2.1), Θ is a manifold SL(2, C) × SL(2, C) -equivariantly biholomorphic to SL(2, C). Now, we will study the dynamics of the second kind of sequence which diverges simply to infinity. As we will see later in this Section, the group Γ u , defined in (1), for u sufficiently close to the constant map (see Section 3), does not admit sequences of this kind. Consider any sequence (y n ) in Q 3 which converges to q, then, for n sufficiently large, y n = [y (1) n : 1 : y (3) n : y (4) n : y (5) n ] and y (1) n → 0, y (3) n → z 3 , y (4) n → z 4 , y (5) n → z 5 . We have that a n y n = [e λn y (1) n : e µn : y (3) n : y (4) n e −µn : y (5) n e −λn ]. Suppose that (a n y n ) is convergent; then, e λn y (1) n → b, for some b ∈ C, or e λn y n := be −λn z 5 − z 2 3 . Then, a n y n → b : e µ∞ z 2 : z 3 : e −µ∞ z 4 : 0 , so we get . This proves part 2 of this Proposition for sequences of the form (17). 
Also, if q ∈ Θ, that is z 3 = 0, then D (an) (q) intersects to Θ; then, we get part 3 also for sequences of the form (17). For all n ∈ N, let g n = u n ia n i −1 u n be the Cartan decomposition of g n , where • a n is of the form (17), • (λ n ) is a sequence which converges to +∞, • (µ n ) is a sequence that converges in R, • ( u n ) and (u n ) are convergent sequences in SO(4, C) • i is either the identity matrix, or an element of O(4, C) − SO(4, C) of the form (18). Let us define p − := u −1 (q − ), p + := u(q + ), u := lim u n , u := lim u n and g ∞ := u i h ∞ i −1 u. Observe that g ∞ is an element of O(4, C) and induces a biholomorphismḡ ∞ from C(p − ) onto C(p + ). Then, since O(4, C) sends light geodesics to light geodesics and light cones to light cones, Proposition 4.1 implies part 2 in the general case. Also, as ui, i −1 u ∈ O(4, C), C(p + ) and C(p − ) are not contained in Q 2 and Θ is O(4, C)-invariant, by part 3 for sequences of the form (17), we get part 3 for all sequences. Proposition 4.5. Suppose (g n ) is a sequence of mixed distortion of SO(4, C). Then, there exist in Q 2 two points p + and p − and two parallel light geodesics ∆ + and ∆ − which contain p + and p − , respectively, such that Proof. Part 1 of this Proposition follows as in Proposition 4.4. Remark. By Proposition 3.1, if Γ ⊂ SL(2, C) is a torsion free, convex-cocompact, (classical) Kleinian group and u : Γ → SL(2, C) is a group morphism sufficiently close to the constant morphism, then Γ u acts properly discontinuously on Θ, and so, the hypothesis of the Proposition are valid. Proof. By Proposition 4.4, every sequence of bounded distortion has points dynamically related in Θ, it follows that Γ u does not contain such sequences. It follows that every sequence of Γ u which diverges simply to infinity is of balanced or mixed distortion. By Propositions 4.3 and 4.5, every sequence of balanced or mixed distortion has two limit light geodesics associated to it (contained in Q 2 ), one attractor and the other repeller, such that the accumulation points of the orbits (associated with this sequence and with the sequence formed by the inverses) of the compact sets in Q 3 , which do not intersect these two light geodesics, lie in these light geodesics. Then, Γ u acts properly discontinuously on the complement in Q 3 of Λ F . Suppose that Γ u acts properly discontinuously on Θ. By Proposition 4.6, all the sequences of Γ u which tend simply to infinity, are of balanced or mixed distortion and Γ u acts properly discontinuously on Q 3 − Λ F , where Λ F ⊂ Q 2 was defined in the same Proposition. We will prove that U Γ ⊂ Q 3 − Λ F , or equivalently, that First, we will prove that the light geodesics of Λ F are vertical, as in the case of A. Guillot in [9, pp. 224, 225], where u is the constant morphism. Supose that ∆ + and ∆ − are the limit light geodesics which correspond to the divergent sequence g n , u(g n ) and that ∆ + and ∆ − are horizontal. We have two cases: (1) Suppose that g n , u(g n ) is of mixed distortion and C + and C − are their attractor and repellor limit light cones. Then, by Proposition 4.5, any point x of Ω × CP 1 ∩ ∆ − , different from p − , is dynamically related to any point in C + ∩ Ω × CP 1 . In particular, as ∆ + ⊂ C + , then x is dynamically related to any point in ∆ + ∩ Ω × CP 1 . (2) Suppose that g n , u(g n ) is of balanced distorsion. 
Then, by the Proposition 4.3, for each point q in ∆ + ∩ Ω × CP 1 , there exists a vertical light geodesic l q ⊂ Q 2 , such that q is dynamically related to any point in l q − ∆ − . We will prove that there exists q ∈ ∆ + ∩ Ω × CP 1 , such that, l q ⊂ Ω × CP 1 . For all n ∈ N, let u n ia n i −1 u n be the Cartan decomposition of g n , u(g n ) , where • a n is of the form (17), • ( u n ) and (u n ) are convergent sequences in SO(4, C) • i is an element of O(4, C) which, restricted to Q 2 , can be represented in coordinates as (18). Now, we will translate what we want to prove for (g n ) to the same problem, but for the sequence (a n ). If q ∈ ∆ + , recall the construction of l q (see Proposition 4.3). Let u := lim u n and u := lim u n . Then, there exist y 0 ,ȳ 0 ∈ CP 1 such that By the aforementioned properties of i and, since the restriction of u −1 and u to Q 2 can be represented as (10), we get that the function i −1 u −1 can be represented in coordinates as: where v 1 , v 2 ∈ SL(2, C) and the function i −1 u can be represented in coordinates as: where w 1 , w 2 ∈ SL(2, C). Then, by the formula given by Proposition 4.1, it is enough to prove that there exists a point q in ∇ + ∩ CP 1 × v 2 (Ω) , such that the l q that corresponds to the sequence (a n ), is contained in CP 1 × w 2 (Ω). Then, by part 3 of the Proposition 4.3, this is equivalent to show that there exists a point in g v 2 (Ω) ∩ g w 2 (Ω), where g is a Möbius transformation. But, by Ahlfors Thereom (see [1]), Λ has Lebesgue measure zero and as the Möbius transformations preserve the sets of Lebesgue measure zero, this is true; it follows that has Lebesgue measure zero. In any case, there exist two points in Ω × CP 1 that are dynamically related to each other and correspond to the sequence g n , u(g n ) . On the other hand, by Proposition 3.2, we know that Γ u acts properly discontinuously and uniformly on Ω × CP 1 ; then, there are no points dynamically related to each other in Ω × CP 1 ; this, however, contradicts the previous paragraph. Therefore, all the limit light geodesics of Γ u are vertical. Assume now that (21) is not true, so there exists a point As Ω is open in CP 1 , there exists an attractor limit light geodesic ∆ + which corresponds to a sequence g n , u(g n ) of balanced or mixed distortion of Γ u , such that, ∆ + ∩ Ω×CP 1 = ∅. As the limit light geodesics of Γ u are vertical, then ∆ + ⊂ Ω × CP 1 . We will show that there exist two points in Ω × CP 1 which are dynamically related to each other, and this will be a contradiction because we know that the action is properly discontinuous on Ω × CP 1 . There are two cases: (1) If g n , u(g n ) is a mixed distortion sequence, then by Proposition 4.5, there exist a repeller limit light geodesic ∆ − and two limit light cones C − , C + of g n , u(g n ) , such that, ∆ − ⊂ C − and ∆ + ⊂ C + and if y ∈ C − − ∆ − , then D gn,u(gn) (y) = ∆ + . By Proposition 2.1, we know that is a light geodesic of the form CP 1 × {z} (minus one point), for some z ∈ CP 1 . This light geodesic (minus one point) intersects Ω × CP 1 , and any point of this intersection is dynamically related to any point of ∆ + ⊂ Ω × CP 1 . (2) If g n , u(g n ) is a balanced distortion sequence, then, by Proposition 4.3, any point of Ω×CP 1 −∆ − is dynamically related to a point of ∆ + ⊂ Ω × CP 1 . This proves (21). Now, suppose that Γ \ Ω is compact. We will prove that U Γ ⊂ Q 3 − Λ F , or equivalently, that (21) is not only an inclusion, but an equality. 
Recall the SO(4, C)-equivariant biholomorphism (9) between Q 2 and CP 1 × CP 1 . By Proposition 3.2, The projection (x, y) → x, x ∈ Ω, y ∈ CP 1 , is well defined in the quotient In fact, it is a locally trivial fibration with compact fiber, so it is a proper map. As Γ \ Ω is compact, then Γ u \ (Ω × CP 1 ) is compact. Suppose that there exists an invariant open set U , which contains U Γ , where Γ u acts properly discontinuously. Then, Γ u \ Ω × CP 1 is a compact subset of the Hausdorff space Γ u \ U ∩ Q 2 , so Γ u \ Ω × CP 1 is closed, but its complement Γ u \ U ∩ Q 2 − Γ u \ Ω × CP 1 has empty interior (because the classical limit set Λ of Γ has empty interior, see [1]), so this is a contradiction. Then, U Γ is a maximal open set where Γ u acts properly discontinuously; in particular, U Γ = Q 3 − Λ F . . We just found a family of complex orthogonal Kleinian groups, now we will see that some of these groups are a generalization of the (classical) Schottky groups. We say that a finitely generated discrete subgroup Γ of PO(5, C) is a complex orthogonal Schottky group of genus g if there exists a collection {C 1 , D 1 , . . . , C g , D g } of open sets of Q 3 , with disjoint closures, and a finite set of generators {s 1 , . . . , s g } of Γ, such that, for all i = 1, . . . , g, If the group Γ ⊂ SL(2, C), of Theorem 1.2, is a (clasical) Schottky group, then Γ acts, by means of the homomorphism ψ, defined in (2.1), as a complex orthogonal Schottky group. The proof of this goes as follows: Consider the action of {I} × SU(2) ⊂ SL(2, C) × SL(2, C) in Q 3 , by means of the homomorphism ψ, defined in (2.1), the quotient space {I} × SU(2) \ Q 3 and the quotient map Then, δ is continuous, open and SL(2, C)-equivariant and can be represented by where the function ρ was defined in (12). Finally, we pull back the (classical) Schottky group to get a complex ortogonal Schottky group of dimension three, that is, if Let us denote by R 2,n the space R n+2 endowed with the quadratic form q 2,n = −x 2 1 − x 2 2 + x 2 3 + · · · + x 2 n+2 . The isotropic cone of q 2,n is the subset of R 2,n on which q 2,n vanishes. We call C 2,n this isotropic cone, with the origin removed. Let's denote by π the projection from R 2,n , minus the origin, on RP n+1 . The set π(C 2,n ) is a smooth hypersurface Σ of RP n+1 . Recall that this hypersurface turns out to be endowed with a natural Lorentzian conformal structure such that its group of conformal transformations is PO(2, n) (see [3, p. 886]). We call the Einstein universe this hypersurface Σ, together with this canonical conformal structure, and we denote it by Ein n . Let us suppose that Γ ⊂ SL(2, R) is a torsion free, finitely generated (classical) Fuchsian group, with Ω as discontinuity domain in S 1 and u : Γ → SL(2, R) is a group homomorphism. We have that is a subgroup of PO (2,2). If Γ u acts properly discontinuously in AdS 3 := Ein 3 − Ein 2 , then Γ u acts properly discontinuouly on (23) Even more, if Γ \ Ω is compact, then W Γ is maximal. As the limit set of Γ in S 1 has Lebesgue measure zero (see [21]), the proof of this assertion is essentially the same as the proof of Theorem 1.1. Also, if Γ ⊂ SL(2, R) is a a (classical) Shottky group, then Γ is a Lorentzian Shottky group of dimension three (as those of Frances in [3, p.23]), every small perturbation of Γ inside PO(5, R) is also a Lorentzian Schottky group of dimension three and all the corresponding quotients spaces are diffeomorphic to each other. 
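The Lorentzian objects recalled in the previous paragraph can be sketched in coordinates. The following example (the sample vector and the transformation are illustrative choices of our own) evaluates the quadratic form q 2,n , exhibits a point of the isotropic cone C 2,n whose projectivization lies in the Einstein universe Ein n , and checks that a hyperbolic rotation preserves both:

```python
import numpy as np

n = 3
Q = np.diag([-1.0, -1.0] + [1.0] * n)   # matrix of q_{2,n} = -x1^2 - x2^2 + x3^2 + ... + x_{n+2}^2

def q(x):
    return x @ Q @ x

v = np.array([1.0, 0.0, 1.0, 0.0, 0.0])  # q(v) = 0: a point of the isotropic cone C^{2,n}
print(q(v))                               # 0.0

# A hyperbolic rotation mixing x1 and x3 lies in O(2, n): it preserves q_{2,n},
# hence descends to a conformal transformation of Ein^n.
t = 0.7
g = np.eye(n + 2)
g[0, 0] = g[2, 2] = np.cosh(t)
g[0, 2] = g[2, 0] = np.sinh(t)
print(np.allclose(g.T @ Q @ g, Q))        # True: g preserves the quadratic form
print(np.isclose(q(g @ v), 0.0))          # True: the isotropic cone is preserved
```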
Quotients Let Γ be a torsion free, convex-cocompact, (classical) Kleinian group and u : Γ → SL(2, C) a group morphism sufficiently close to the constant morphism. Recall the definition of the group Γ u given in Theorem 1.2. Recall also that, in Section 3, we saw that Γ u is a torsion free subgroup of PO(5, C). By Propositions 3.1 and 1.1, if u : Γ → SL(2, C) is a group morphism sufficiently close to the constant morphism I, then the quotient M (u, Γ) of the action of Γ u on U Γ is a complex manifold, where U Γ was defined in Proposition 1.1. The manifold Γ I \ U I is the Guillot manifold (see Section 3 and [9,p. 224,225]). In this Section, we will prove that, for every group morphism u : Γ → SL(2, C) close enough to the constant morphism, the manifold M (u, Γ) is indeed compact and diffeomorphic to the Guillot manifold; so, it defines an uniformizable complex orthogonal structure of dimension three, on the Guillot manifold, that is close to the Guillot structure. We will also use this result to construct some close uniformizable complex projective structures on other compact complex manifold of dimension three. Proof of Theorem 1.2: Let us consider a torsion-free, convex-cocompact, (classical) Kleinian subgroup Γ of SL(2, C). By Propositions 3.1 and 1.1, we know that there exists an open neighborhood V of the constant morphism, such that, if u ∈ V, then Γ u acts properly discontinuously on U Γ . We will first prove that for u ∈ V, Γ u acts uniformly on U Γ ; that is, the action is properly discontinuous. Then, we will consider a resolution r : X → V of singularities of V, and define an action of Γ on X × U Γ , such that, the action on {x} × U Γ coincides with the action on {r(x)} × U Γ . Finally, we will see that Γ acts properly discontinuously on X × U Γ and the quotient of the projection of X × U Γ onto its first coordinate is a locally trivial fibration whose fibers are the manifolds M (u, Γ). Let us consider the restriction to SL(2, C) of any Euclidian norm in C 4 and let us denote it by || · ||. As by definition Θ is biholomorphic to SL(2, C), || · || defines a norm in Θ; we will also denote it by || · ||. Recall that Θ is biholomorphic Let us consider a finite set of generators G of Γ, any norm || · || in Θ = Q 3 − Q 2 and for all δ > 0, the neighborhood V δ (I) := g ∈ Hom Γ, SL(2, C) : ∀s ∈ G, ||g(s) − I|| ≤ δ , of the constant morphism, with respect to the compact-open topology, where I is the identity matrix. We will prove that there exists ǫ > 0, such that, for every convergent sequence (u n , z n ) in V ǫ (I)×U Γ and for every divergent sequence (g n ) of Γ, such that (g n , u n (g n )) diverges simply to infinity, then (g n , u n (g n ))z n does not converge in U Γ . This will imply that, if V := V ǫ (I), then (24) is properly discontinuous. Note that, by Proposition 1.1, this is true for constant sequences (u n ). Recall the SO(4, C)-equivariant biholomorphism between Θ and SL(2, C). By Proposition 3.1, there exists ǫ > 0, such that, for all convergent sequences u n , z n in V ǫ (I) × SL(2, C), and for all divergent sequences (g n ), the sequence g n , u n (g n ) z n is not convergent in Θ. Suppose (u n ) is in V ǫ (I), (g n ) is divergent in Γ and g n , u n (g n ) diverges simply to infinity in PO(5, C). Then, by the last paragraph, there are no points in Θ dynamically related to each other which correspond to the sequence g n , u n (g n ) . 
As sequences of bounded distortion have points dynamically related in Θ (see Proposition 4.4) then, g n , u n (g n ) is of balanced or mixed distortion and, by Propositions 4.3 and 4.5, it has associated two limit light geodesics contained in Q 2 , one attractor ∆ + and other repeller ∆ − . We claim that ∆ + and ∆ − are vertical and contained in Λ × CP 1 . The proof of this is essentially the same (see Proposition 1.1) as the proof that the limit light geodesics of Γ u are vertical and contained in Λ × CP 1 (it is only necessary to change the sequence g n , u(g n ) by the sequence g n , u n (g n ) and use Proposition 3.2). Then, for each convergent sequence (z n ) of U Γ , the sequence g n , u n (g n ) z n does not converge in U Γ . Then, (24) is properly discontinuous. Now, note that the projection of V × U Γ onto its first factor, defines the continuous function [(u, y)] → u. If the algebraic variety V has no singularities in a neighborhood of the identity, i.e., it is a manifold at the identity (as, for example, when Γ is a classical Schottky group), then, by the Ehresmann Fibration Lemma, t is a locally trivial fibration and so all M (u, Γ) are diffeomorphic to each other, for u sufficiently close to the constant morphism. However, V can have singularities arbitrarily close to the identity (see, for example, the Fuchsian groups of [7, p. 567]); if this is the case, consider a resolution of the singularities (see [10]) of V, that is, let r : X → V be an holomorphic surjective proper map, where X is a complex manifold. Let us consider X × U Γ as a differential manifold and define the action and the map which is Γ-equivariant: r × I γ, (x, y) = r(x), γ, r(x)(γ) (y) = γ, r(x), y = γ, (r × I)(x, y) . As Γ acts properly discontinuously on V × U Γ , then it acts properly discontinuously on X × U Γ . As the projection of X × U Γ onto its first factor is Γ-invariant, then it defines a submersion Let us consider any y ∈ r −1 (I), then as s −1 (y) was defined to be the Guillot manifold t −1 (I), then it is compact. So, by the Ehreshmann Fibration Lemma, s is trivial bundle in a neighborhood U of y. As every fiber s −1 (u), of s over u ∈ U, was defined to be as the fiber t −1 r(u) , of t over r(u), and as r is open, then r(U) is an open neighborhood of I such that for all v ∈ r(U), t −1 (v) is compact and diffeomorphic to the Guillot manifold. Let us consider the geometry Q 3 , PO(5, C) , the intermediate covering U Γ of the Guillot manifold Γ \ U Γ and the developing map It should be pointed out that there are examples of (classical) Kleinian groups for which there is a family of such mutually non-conjugate group morphisms from Γ to SL(2, C): If Γ is a (classical) Schottky group of genus g, then the space Hom(Γ, SL(2, C)) of group morphisms from Γ to SL(2, C) is a complex manifold, of dimension 3g, biholomorphic to SL(2, C) g . Since we have two free 3-dimensional parameters and the conjugation is by a family of dimension three, there exists, even up to conjugation, an infinite family of homomorphisms arbitrarily close to the constant morphism. If Γ is a (classical) Fuchsian group, then, as any matrix commute with itself, for every pair of matrices A and C in SL(2, C), the equation [A, A][C, C] = I is satisfyed; then, by [7, p. 567], Γ is a Fuchsian group. Since we have, at least, two free 3-dimensional parameters and the conjugation is by a family of dimension three, there exists, even up to conjugation, an infinite family of homomorphisms arbitrarily close to the constant morphism. 
We would like to mention that, in the real case, there is a weaker version of the last Theorem; in particular, if Γ ⊂ SL(2, R) is a torsion free, convex-cocompact (classical) Kleinian group with domain of discontinuity Ω in S 1 ; then, for all group morphisms u : Γ → SL(2, R) sufficiently close to the identity morphism, then Γ r u , which was defined in (22), acts properly discontinuously on W Γ , defined in (23). Now, we will consider some quotients of the manifolds M (u, Γ) of Theorem 1.2 to construct uniformizable complex projective structures locally modeled on (CP 4 , PGL(4, C)) on a related compact complex manifold of dimension three. Proof of Theorem 1.3: Let Γ ⊂ SL(2, C) be a torsion free, convex-cocompact, (classical) Kleinian group with domain of discontinuity Ω in CP 1 . We will prove that there exists a Γ u -equivariant continuous, open and proper function f from Q 3 onto CP 3 . By Theorem 1.2, this will imply that if u is a group morphism sufficiently close to the constant morphism, then Γ u acts properly discontinuously on V Γ := f (U Γ ) and the quotient is compact. As I × f is proper, then Γ acts properly discontinuously on V × V Γ , where this action is the action of Γ u on V Γ in the fibers {u} × V Γ . Finally, we will consider a resolution of of singularities r : X → V and follow the same reasoning of Theorem 1.2 to prove that, for all u ∈ V, all the quotients Γ u \V Γ are diffeomorphic to each other. Let us consider the involution j : Q 3 → Q 3 , is a continuous and proper function f , where |z| denotes the equivalence class of z. Note that f restricted to Θ is the usual 2 to 1 covering of PSL(2, C) by SL(2, C) and restricted to Q 3 is the identity. The action of SL(2, C)×SL(2, C) on Q 3 induces, in a unique way, an action of SL(2, C)×SL(2, C) on CP 3 such that f is SL(2, C)×SL(2, C) -equivariant. Even though this action is not faithful, it defines an holomorphic homomorphism τ from SL(2, C) × SL(2, C) to PO(4, C), whose image is the projectivization PSO(4, C) of the orthogonal matrices of determinant one, and whose kernel is (I, I), (−I, I), (I, −I), (−I, −I) . As −I / ∈ Γ, where I is the identity matrix, then the restriction of τ to Γ u is a monomorphism. As usual, we identify the domain and the image. As Γ u acts properly discontinuously on U Γ , it also acts properly discontinuously on V Γ := f (U Γ ). So, Γ u \ V Γ is a complex manifold and f defines a continuous function on the quotient In fact, the quotient Γ u \ f (U Γ ) is a quotient of the manifold Γ u \ U Γ : For all morphism u : Γ → SL(2, C), the involution j commutes with the action of Γ u in Q 3 and then, it is well defined in the quotient Γ u \ U Γ , that is, defines a transformation j : Γ u \ Q 3 → Γ u \ Q 3 , |[z 1 : z 2 : z 3 : z 4 : z 5 ]| → |[−z 1 : −z 2 : z 3 : −z 4 : −z 5 ]|. The group Σ generated by j acts freely and properly discontinuously on Γ u \ U Γ and, as the actions of Σ and Γ u comute, then, the quotients Σ \ (Γ u \ U Γ ) and Γ u \ f (U Γ ) are biholomorphic. It also happens that V Γ is maximal; otherwise, we could pull back an invariant open set V ⊃ V Γ , where Γ u acts properly discontinuously and contradict the maximality of U Γ . By Theorem 1.2, there exists an open neighborhood V of the constant morphism, such that, if the homomorphism u : Γ → SL(2, C) belongs to it, then Γ u \ U Γ is compact, so Γ u \ V Γ is also compact. Now, let us define the following action As I × f is proper, the last action is properly discontinuous. 
Finally, let us consider a resolution of singularities r : X → V, where X is a complex manifold and r is holomorphic; then, by the same reasoning as in Theorem 1.2, for all u ∈ V, all the quotients Γ u \ V Γ are diffeomorphic to each other.

Let us suppose that the hypotheses of the last Theorem are satisfied. By the same arguments of Section 4, but applied to CP 3 instead of Q 3 , it follows that the vertical light geodesics of Λ × CP 1 are also attractor and repeller limit light geodesics for the action of Γ u on V Γ . Let us consider the geometry (CP 3 , PGL(4, C)). We denote by M the differentiable manifold defined by the quotients Γ u \ V Γ of the last Theorem. This manifold was discovered by Guillot, who also found the uniformizable complex projective structure induced by Γ I \ V Γ . As we said in the Introduction, we call the quotient manifold (both differentiable and complex) the G manifold and the complex projective structure determined by it the G structure. Let us consider the intermediate covering V Γ of M and the corresponding developing map. It should be pointed out that, in this case, we also have that Schottky and Fuchsian groups are examples of (classical) Kleinian groups for which there is a family of such mutually non-conjugate homomorphisms from Γ to SL(2, C).
Magnetic Phase Diagram of CuO

High resolution ultrasonic velocity measurements have been used to determine the temperature -- magnetic-field phase diagram of the monoclinic multiferroic CuO. A new transition at TN3 = 230 K, corresponding to an intermediate state between the antiferromagnetic non-collinear spiral phase observed below TN2 = 229.3 K and the paramagnetic phase, is revealed. Anomalies associated with a first order transition to the commensurate collinear phase are also observed at TN1 = 213 K. For fields with B along the b axis, a spin-flop transition is detected between 11 T and 13 T at lower temperatures. Moreover, our analysis using a Landau-type free energy clearly reveals the necessity for an incommensurate collinear phase between the spiral and the paramagnetic phase. This model is also relevant to the phase diagrams of other monoclinic multiferroic systems.

Multiferroic phenomena have been a subject of intense interest in recent decades arising from opportunities to explore new fundamental physics as well as possible technological applications [1][2][3]. Coupling between different ferroic orders has been proven to be driven by several different types of mechanisms. In particular, multiferroics with a spiral spin-order-induced ferroelectricity have revealed high spontaneous polarization and strong magnetoelectric coupling [4,5]. Cupric oxide (CuO), the subject of this letter, was characterized as a magnetoelectric multiferroic four years ago when it was shown that its ferroelectric order is induced by the onset of a spiral antiferromagnetic (AFM) order at an unusually high temperature of 230 K [3]. Thus far, two AFM states have been reported, a low temperature (T N 1 ∼ 213 K) AF1 commensurate collinear state with the magnetic moments along the monoclinic b axis and an AF2 incommensurate spiral state with half of the magnetic moments in the ac plane (T N 2 ∼ 230 K) [3,6,7]. However, the authors of the neutron diffraction measurements [6] questioned the possibility of having a direct condensation from a paramagnetic (PM) phase to a spiral magnetic phase. Despite this remark, a recent Landau theory [8], as well as several Monte-Carlo simulations [9,10], appear to support this sequence of magnetic orderings. Encouraged by recent experiments on other multiferroic systems using ultrasonic measurements [11], we measured the temperature and field dependence of the velocity of transverse modes in order to determine the magnetic phase diagram of CuO. A new transition is detected at T N 3 = 230 K just above the AF2 spiral phase observed at T N 2 = 229.3 K, while the first order transition is observed at T N 1 = 213 K.
Furthermore, dielectric constant measurements confirm that only the spiral phase (between T N 1 and T N 2 ) supports a spontaneous electric polarization. In addition, we report on a spin-flop transition in the low temperature AF1 collinear phase when B ∥ b. Thus, based on these findings, a new magnetic-field vs temperature phase diagram is proposed for CuO. In order to elucidate the possible nature of the AFM states observed in CuO, a non-local Landau-type free energy is also developed for CuO and similar monoclinic multiferroics. This approach has been very successful in explaining the magnetic phase diagrams of other multiferroic systems [12][13][14]. In contrast with the conclusions of Refs. [8][9][10], our analysis based on rigorous symmetry arguments indicates that there must be a collinear intermediate phase (AF3) between the paramagnetic and spiral AF2 states. Such a phase has been shown, both theoretically and experimentally, to occur in other geometrically frustrated antiferromagnets where symmetry allows for uniaxial anisotropy at second order [14,15]. Finally, we compare the model predictions to the B-T phase diagram of CuO obtained using ultrasonic velocity data. Similarities with other multiferroic systems such as MnWO 4 , AMSi 2 O 6 , RMnO 3 , RMn 2 O 5 , and Ni 3 V 2 O 8 are also noted. For the purpose of this study, a CuO sample was grown using a floating zone technique as described in Ref. [3]. A single crystal was cut with faces perpendicular to the monoclinic axes a * , b * = b, and c * (4 × 4 × 3 mm 3 ). The sample was then polished to obtain parallel faces. For velocity measurements, plane acoustic waves were generated using 30 MHz LiNbO 3 piezoelectric transducers bonded to opposite faces. Using an ultrasonic interferometer, which measures the phase shift and the amplitude of the first elastic transmitted pulse, high-resolution relative velocity variations (∆V /V ∼ 1 ppm) were achieved. Experimental data presented here were all obtained using the velocity of transverse waves V a * [c * ] propagating along the a * axis and polarized along c * , with the magnetic field applied along the easy magnetic axis of CuO (b axis). Simultaneous capacitance measurements were carried out using an AH 2550A Ultra Precision 1 kHz Capacitance Bridge to identify which of these phases are ferroelectric. For that purpose, electrodes were mounted on faces perpendicular to the b axis in order to determine the dielectric constant ε b . Fig. 1 shows the temperature dependence of the relative sound velocity variations (∆V /V ) for B ∥ b. At zero field, the anomaly observed at T N 1 = 213 K (see inset of Fig. 1) corresponds to the first order transition into the commensurate collinear phase, while the anomalies at T N 2 = 229.3 K and T N 3 = 230 K are associated with the formation of a spiral order previously determined by neutron diffraction and susceptibility measurements [3,6], which were thought to occur at a single transition. At higher fields, the amplitude of the step-like variation observed at 229.3 K, as well as the temperature difference between T N 2 and T N 3 , increases, confirming the existence of a new intermediate magnetic order AF3. This finding is supported by dielectric measurements also shown in Fig. 1. Notice that, as the stability range of the intermediate phase is small (∆T ∼ 0.7 K), velocity and dielectric data have been collected simultaneously to avoid any ambiguity regarding the actual critical temperatures. Thus, as shown in Fig. 1 (for B = 0 and 7 T), the anomaly observed on the dielectric constant ε b coincides very well with T N 2 determined using velocity data, while no variation is noticeable at T N 3 .
These results also indicate that the new phase AF3 is not ferroelectric, while magnetoelectric coupling exists for the AF2 phase. We present in Fig. 2 the magnetic phase diagram of CuO determined up to 16 T using ultrasonic velocity measurements for B ∥ b. The inset of Fig. 2 shows the field dependence of the velocity which displays a minimum around 11 T for T = 125 K. As the magnetic moments are known to be parallel to the field in the AF1 commensurate collinear state [3,6], we attribute this anomaly to a spin-flop transition [16]. In summary, while the critical temperatures T_N1, T_N2, and T_N3 are weakly field dependent, the spin-flop critical field H_SF increases with temperature. At 10 K, H_SF = 11 T and increases slowly up to 13.5 T at T_N1, in good agreement with magnetic susceptibility measurements performed on powder samples [17]. Since no neutron scattering data exists for the HF1 and AF3 states, we develop a Landau-type model in order to elucidate the nature of these new magnetic orders [14,15]. The integral form of the free energy is expanded in powers of the nonlocal spin density s(r) defined in terms of a uniform field-induced magnetization m and a spin polarization vector S modulated by a single wave vector Q describing the long-range magnetic order (Eq. (6) of Ref. [14]). Within the present model, the value of Q can be determined by simply considering the isotropic quadratic contribution F_2I = (1/2V^2) ∫ dr_1 dr_2 A(r_1 − r_2) s(r_1) · s(r_2), (1) which leads to F_2I = (1/2)[Ã m^2 + A_Q S^2], where A_Q = aT + J_Q, with J_Q being the Fourier transform of the exchange integral J(R). Considering the C-type monoclinic cell with four Cu^2+ magnetic ions, we obtain f_1(Q) = cos(πq_a − πq_c), f_2(Q) = cos(πq_a + πq_c), (2) where J_1 and J_2 represent the nearest-neighbor (NN) exchange interactions along the AFM chain (sites 2-3) and the coupling between chains (sites 1-4) on the same plane normal to b, respectively, and J_3 and J_4 represent the exchange interactions along a (sites 1-2) and c (sites 1-3) between ions on different planes (see Fig. 4). The value of Q is then obtained by finding the extrema of J_Q (Eq. 2) as a function of the exchange interactions. Results of our numerical algorithm are summarized in the J_2 − J_3 phase diagram shown in Fig. 3. More interestingly, with J_3 = J_4 = 0 we obtain the expected commensurate wave vector Q_CM = [1/2, 0, −1/2] for J_2 ≤ 0 (dashed line in Fig. 3). Moreover, an ICM state with a modulation vector comparable to the experimental value Q_ICM = [0.506, 0, −0.483] is stabilized whenever J_3 and/or J_4 are non-zero but small relative to J_1 (for example, J_2/J_1 = −0.3, J_3/J_1 = 0.017, and J_4/J_1 = 0, leading to J_Q/J_1 = −2.6). These relative values are also in good agreement with estimates obtained by density functional theory [9,18,19] and are consistent with the quasi-1D magnetic character of CuO. In addition to the usual isotropic second order exchange term, we also consider anisotropic contributions. Considering the symmetry of monoclinic crystals (C2/c), we identified three invariants, written in single-ion form as D_y(r) s_y(r) s_y(r), D_z(r) s_z(r) s_z(r), and D_xz(r) s_x(r) s_z(r). (Note to Table I: when no direction is specified, as in AF3 and HF3, spins on these sites are not ordered.) While D_y can be used to set the magnetic easy axis along b, the other terms are necessary in order to define the direction of the moments in the ac plane.
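As an illustration of the wave-vector determination described above, the following sketch scans J_Q over (q_a, q_c) and locates its minimum numerically. The functional form of J_Q used here is an assumption made only for illustration (built from f_1, f_2 and simple interplane cosines chosen to reproduce the commensurate limit quoted in the text); the exact expression should be taken from Eq. (2), and the coupling ratios are the example values quoted above.

# Sketch: locate the ordering wave vector Q = [q_a, 0, q_c] by minimizing J_Q.
# The form of J_Q below is an illustrative assumption, not the exact Eq. (2).
import numpy as np

def J_Q(qa, qc, J1=1.0, J2=-0.3, J3=0.017, J4=0.0):
    f1 = np.cos(np.pi * (qa - qc))   # intra-chain term (sites 2-3)
    f2 = np.cos(np.pi * (qa + qc))   # inter-chain term (sites 1-4)
    g3 = np.cos(np.pi * qa)          # assumed interplane term along a
    g4 = np.cos(np.pi * qc)          # assumed interplane term along c
    return 2.0 * (J1 * f1 + J2 * f2 + J3 * g3 + J4 * g4)

# Grid search over the zone; the step (0.001) is fine enough to resolve a small
# incommensurate shift away from the commensurate point [1/2 0 -1/2].
qa = np.linspace(-1.0, 1.0, 2001)
qc = np.linspace(-1.0, 1.0, 2001)
QA, QC = np.meshgrid(qa, qc, indexing="ij")
J = J_Q(QA, QC)
i, j = np.unravel_index(np.argmin(J), J.shape)
print("Q_min = [%.4f 0 %.4f], J_Q/J1 = %.3f" % (qa[i], qc[j], J[i, j]))
# With J3 = J4 = 0 the minimum sits exactly at the commensurate [1/2 0 -1/2];
# whether and how far it moves off that point for small J3, J4 depends on the
# exact interplane terms of Eq. (2), which are only sketched here.

Running a scan of this kind over a range of (J_2, J_3) values is essentially how a J_2 − J_3 phase diagram such as Fig. 3 would be assembled.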
Furthermore, to account for non-collinear spin configurations, we define S = S_1 + i S_2, with S_2 = S sin β[cos θ ρ̂_1 + sin θ(cos γ ŷ + sin γ ρ̂_2)], where ρ̂_1 and ρ̂_2 are two orthogonal unit vectors normal to the easy axis ŷ ∥ b. Thus, the direction of the moments in the ac plane is accounted for by defining the unit vectors ρ̂_1 and ρ̂_2 relative to the lattice vectors, ρ̂_1 = cos α x̂ + sin α ẑ and ρ̂_2 = −sin α x̂ + cos α ẑ. As shown in Fig. 4, the parameter α represents the angle between the ac-plane component of S and the monoclinic axis a ∥ x̂. After integration, all second-order contributions for m ∥ H ∥ ŷ reduce to [...]. Adopting the same approach for the fourth-order isotropic term, we obtain [...]. Note the umklapp term ∆_4Q,G, arising directly from the lattice periodicity [12]. This term is crucial in order to account for the first order phase transition observed at T_N1 in CuO where a commensurate collinear state is stabilized. The free energy is then numerically minimized. As in Ref. [15], most coefficients are set using analytical solutions associated with the phase boundaries of second order transitions. For example, setting T_Q = 1.18, D_yQ = 0.02, B_1 = 0.103, and B_2 = 0.011 yields reasonable values for the critical temperatures at zero field (T_N3 = 1.2 and T_N2 = 1.12). We also set D_zQ = 0.01 as we must have D_zQ < D_yQ, while the direction of the spins in the ac plane (α_exp ∼ 70°) [1] is used to determine the ratio D_xzQ/D_zQ = −0.42. The last coefficients are determined using the temperature of the multicritical point (where the T_N2 and T_N3 boundaries meet) and the maximum field at T = 0 K. From this exercise, we find B_3 = 0.063 and B_4 = 0.013, while B_5 = 0.1 was set arbitrarily. Finally, B_U = 0.035 is used to obtain T_N1 = 0.77. Fig. 5 shows the magnetic phase diagram obtained from minimization of the free energy. For comparison, we also present results obtained without the anisotropic terms D_zQ and D_xzQ (dotted lines). Depending on the scenario considered, we obtain 5 or 6 magnetic phases illustrated in Fig. 4, described by the order parameters listed in Table I. At zero field, both models (with and without D_z and D_xz) predict the same phase sequence, consistent with our experimental observations shown in Fig. 2. At low temperatures, a collinear phase AF1 with the moments along b is predicted (see Fig. 4) while the AF2 phase corresponds to a spiral configuration in agreement with neutron scattering data [6]. According to our numerical calculation, the new intermediate phase AF3 is associated with a collinear phase where only half of the moments order with S ∥ b. As the field is applied, two spin-flop transitions (AF1 → HF1 and AF2 → HF2) are found. The comparison of both phase diagrams indicates that the role of the anisotropic terms D_zQ and D_xzQ is to reduce the critical field of the AF1 → HF1 transition, decrease the stability range of the intermediate phase AF3, and lead to a new magnetic order HF3 in which half the moments align into the ac plane. These findings could account for the fact that no spin-flop phase transition has been observed experimentally up to 16 T for the spiral phase AF2. Our principal conclusion is that a new collinear phase (AF3), occurring between the paramagnetic and the previously identified spiral phase, has been detected by high resolution ultrasonic velocity measurements. The magnetic-field vs temperature phase diagram for B ∥ b has also been determined, revealing the existence of a new spin-flop phase (HF1).
Complementary dielectric measurements also confirm that magnetoelectric effects only exist in the non-collinear phase. Verification that the new AF3 phase must exist is achieved by a Landau-type model based on rigorous symmetry arguments. Furthermore, the occurrence of such a collinear state, just above a non-collinear state, is confirmed in the well-studied frustrated RMnO3 and RMn2O5 systems [20][21][22], and the kagomé compound Ni3V2O8 [23]. Finally, the proposed model accounts for the experimental phase diagram of CuO determined in this work and is potentially useful for the description of other monoclinic multiferroic systems, in particular MnWO4 [24] and AMSi2O6 [25].
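For readers who want a feel for how a B-T phase diagram of this kind is traced by numerical minimization, the sketch below uses a deliberately reduced two-parameter toy functional (one amplitude Sb for the collinear component along b and one amplitude Sp for the transverse/spiral component) rather than the full free energy and order parameters of Table I; all coefficients are placeholders chosen only so that the toy reproduces the qualitative sequence paramagnetic → collinear → spiral on cooling and a field-induced suppression of the collinear component.

# Toy Landau free energy F(Sb, Sp; T, B); placeholder coefficients only.
import numpy as np
from scipy.optimize import minimize

T1, T2 = 1.20, 1.12          # toy zero-field ordering temperatures (Sb first, then Sp)
b1, b2, c = 1.0, 1.0, 1.5    # quartic coefficients; c makes the two amplitudes compete
d = 0.4                      # a field along b suppresses the collinear amplitude Sb

def F(x, T, B):
    Sb, Sp = x
    return ((T - T1) + d * B**2) * Sb**2 + (T - T2) * Sp**2 \
           + b1 * Sb**4 + b2 * Sp**4 + c * Sb**2 * Sp**2

def phase(T, B, eps=1e-3):
    best = None
    for x0 in ([0.5, 0.0], [0.0, 0.5], [0.4, 0.4]):   # several starting points
        r = minimize(F, x0, args=(T, B), bounds=[(0, None), (0, None)])
        if best is None or r.fun < best.fun:
            best = r
    Sb, Sp = best.x
    if Sb < eps and Sp < eps:
        return "paramagnetic"
    if Sb >= eps and Sp < eps:
        return "collinear (AF3/AF1-like)"
    if Sb < eps and Sp >= eps:
        return "transverse (HF-like)"
    return "mixed/spiral (AF2-like)"

for T in (1.25, 1.15, 1.0, 0.6):
    for B in (0.0, 1.0):
        print(f"T={T:.2f} B={B:.1f} -> {phase(T, B)}")

Scanning (T, B) on a fine grid and recording where the classified phase changes is, in spirit, how a diagram such as Fig. 5 is assembled; the actual calculation of course uses the full anisotropic free energy and umklapp term described in the text.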
2012-05-23T17:24:05.000Z
2012-05-23T00:00:00.000
{ "year": 2012, "sha1": "b2082380d7667589597d4a0c387fd4081c6346b4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1205.5229", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3f2089952ac785cbe675f967c7267cda689a3cf4", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine", "Physics" ] }
22497094
pes2o/s2orc
v3-fos-license
Real-time use of the iPad by third-year medical students for clinical decision support and learning: a mixed methods study Purpose Despite widespread use of mobile technology in medical education, medical students’ use of mobile technology for clinical decision support and learning is not well understood. Three key questions were explored in this extensive mixed methods study: 1) how medical students used mobile technology in the care of patients, 2) the mobile applications (apps) used and 3) how expertise and time spent changed over time. Methods This year-long (July 2012–June 2013) mixed methods study explored the use of the iPad, using four data collection instruments: 1) beginning and end-of-year questionnaires, 2) iPad usage logs, 3) weekly rounding observations, and 4) weekly medical student interviews. Descriptive statistics were generated for the questionnaires and apps reported in the usage logs. The iPad usage logs, observation logs, and weekly interviews were analyzed via inductive thematic analysis. Results Students predominantly used mobile technology to obtain real-time patient data via the electronic health record (EHR), to access medical knowledge resources for learning, and to inform patient care. The top four apps used were Epocrates®, PDF Expert®, VisualDx®, and Micromedex®. The majority of students indicated that their use (71%) and expertise (75%) using mobile technology grew over time. Conclusions This mixed methods study provides substantial evidence that medical students used mobile technology for clinical decision support and learning. Integrating its use into the medical student's daily workflow was essential for achieving these outcomes. Developing expertise in using mobile technology and various apps was critical for effective and efficient support of real-time clinical decisions. Introduction Over the past 10 years, small handheld Personal Digital Assistants (PDAs) were replaced first by the popular smartphone and now by devices with a much larger landscape – tablet computers, such as the Apple iPad® and Samsung Galaxy Tablet®. Mobile technology has rapidly become the norm in the preclinical years of medical education because it affords flexible, wireless, and mobile access to endless amounts of medical content and knowledge (5). Since 2009, a growing number of medical schools have implemented mobile device medical curricula (6). Many of these medical schools purchase devices for students and pre-load them with electronic textbooks and the medical school curriculum. Other medical schools simply require or recommend mobile devices to students for purchase (7). The ways in which mobile technologies are being used in clinical practice are also expanding. At least 75% of US physicians are currently using iOS mobile devices and technology (8). Healthcare organizations are now integrating their electronic health records (EHRs) with mobile devices, such as the iPad. The use of iPads has been embraced by Chief Medical Information Officers (CMIOs) in many hospitals across the country and has created a more efficient workflow for practitioners (9). This expansion of access has afforded ready availability of the EHR at the bedside for patient education, electronic prescription writing, and laboratory and x-ray ordering, as well as access to evidence-based practice tools for patient care (10). With thousands of medical apps available at the tap of a finger on the screen, the iPad has enabled even more opportunities for evidence-based decision support in real-time at the point of care.
Research related to the use of mobile technology for patient care is not new (11). While initial reports of iPad use by medical professionals are promising (12), there are no in-depth research studies (i.e., beyond questionnaires) that explore the use of the iPad in clerkships by third-year medical students. The purpose of this year-long mixed methods study was to further understand medical students' use of the iPad during their Internal Medicine clerkship. Specifically, we sought to answer the following questions: 1. In what ways did the students use mobile technology for learning and clinical decision support? 2. What apps did the students use in the care of patients? 3. Did the amount of time spent and the students' expertise in using mobile technology grow over time? Participants and iPads Following approval by the University of Georgia Institutional Review Board, we invited 37 third-year medical students from the partnership medical school campus (GRU/UGA Medical Partnership) who rotated to a local community hospital (St. Mary's Hospital) to participate in the study. All students were issued a Wi-Fi-only, third-generation iPad with 64 Gb of storage. Management of the iPads was enabled by a mobile device management (MDM) program that required passcodes and provided lost device tracking and remote device erasure capabilities. All students were required to sign an iPad Agreement attesting to the terms and conditions for using the iPad in a HIPAA-compliant manner. Students were free to install additional applications on their iPads purchased with their own funds. Each device was pre-loaded with a variety of applications (e.g., PDF Expert®, VisualDx®, Penultimate®), as well as bookmark links to PubMed®, the medical school library, and MedlinePlus®. All students were required to complete a minimum of three hours of iPad training. Training by members of the research team involved extensive review of the technology as well as case-based scenarios to practice use of the apps. Broad topics covered during the training included: 1) basic use of the iPad, 2) utilization of productivity apps, and 3) how to access appropriate medical knowledge resources and apps for clinical decision support. Research design, procedures, and analysis This year-long mixed methods study involved quantitative and qualitative data collection (13) from 37 third-year medical students. Four data collection instruments were used: 1) beginning and end-of-year questionnaires, 2) iPad usage logs, 3) weekly rounding observations, and 4) weekly semi-structured medical student interviews. The team of five researchers collected data over a 12-month period (July 2012–June 2013). We administered a baseline questionnaire prior to beginning the Internal Medicine clerkship which included demographic questions and eight additional questions assessing the students' past and present use of mobile technology and Apple® computers. A second questionnaire was administered at the completion of the 48 weeks of clerkships and included demographic questions and additional questions that related to their experience with the iPad over the past year. Students completed weekly iPad usage logs during their 8-week Internal Medicine clerkship. Each week, students logged the types of medical resources and apps used on the iPad in the care of patients and the amount of time spent on them. The time used for the apps ranged from 1 min to a few hours depending on the application. Logs were submitted weekly to the research team.
We conducted weekly, 1-hour observations at the hospital as students rounded with their preceptors. Researchers were able to observe directly how the iPad was being utilized in real-time and recorded notes on a structured observation log. Weekly semi-structured, one-on-one interviews (face-to-face, phone, or FaceTime®) were conducted with each medical student during the Internal Medicine clerkship. The end-of-week interviews with students ranged from 10 to 30 min. Interviews were recorded and transcribed confidentially. Descriptive statistics were generated for the questionnaires as well as for the apps and resources reported in the iPad usage and observation logs. Inductive analysis (14) was used by a member of the research team to code the open-ended responses on the iPad usage and observation logs, as well as the weekly medical student interviews. Following a robust data analysis protocol using Microsoft Word® (15), a member of the research team identified patterns and themes across participants and used them to inform the results. Students' use of mobile technology in patient care All of the students indicated they used the iPad in a variety of ways to assist in the care of patients. These activities are intimately linked; therefore, the results are reported together in two major areas: 1) clinical decision support and 2) student learning and productivity. Clinical decision support The students reported using the iPad at all stages of patient care: before, during, and after patient encounters. Two primary uses were indicated in the data: obtaining real-time patient data via the EHR and finding additional information for clinical decision support. Most students, as reported in all four sources of data, indicated daily use of the EHR to obtain real-time patient data. This use was well summarized in quotes from weekly interviews about the use of the iPad on rounds. These quotes, as well as the other data, indicate the value of the iPad for enabling real-time access to the EHR to assist with clinical decision support. Identifying medical knowledge resources for clinical decision support was the second most frequent way that students reported using the iPad. Various medical knowledge resources were used, including library resources and a multitude of iPad apps. Library resources were the fifth most used resource, as reported by 65% of the students on the end-of-year questionnaire. As a student detailed in an interview: I mainly used it to get access to the journals and I didn't need to get it through the school [physically]. And so what was very helpful from the library was that by logging on, I could get the pdf [of the article]. More details about use of apps on the iPad are reported later in the article; the data from the interviews indicated that easy and ready access to information was an important use of the iPad by the students in their Internal Medicine clerkship. Student learning and productivity While patient care accounted for the vast majority of the use of the iPad, students also indicated using the iPad for personal learning and productivity throughout the day. The iPad usage and observation logs and end-of-year questionnaire data indicated that the highest productivity uses of the iPad were email, note-taking, and word processing (e.g., QuickOffice®, Notepad®). The high value of easy access to productivity tools was also explained during weekly interviews, for example: I had been using it primarily as a study tool with my PDFs on it . . .
I was still doing research through First Consult®, DynaMed® and Epocrates® but now . . . I'm taking patient histories on my wireless keyboard on the iPad and I'm presenting the patient histories off of the iPad to the preceptor, as well as just like doing research on the go rather than sitting at home using it. The students also provided evidence about using the iPad for learning. The most commonly reported resources included question banks (e.g., USMLE, Kaplan), medical knowledge resources, and documents related to the medical school curriculum. As students explained during weekly interviews and in the end-of-year questionnaires: I use my iPad as an eBook reader for my textbooks. The iPad was VERY useful for doing practice questions and reading while studying. I use it for studying, I read my textbooks on there and I do review questions, because I can carry it around and have an infinite number of review questions, and my textbooks, anywhere in the hospital. As the data indicated, the students reported multiple uses of their iPads for personal learning and productivity. Resources and apps used The third-year medical students used a multitude of apps for clinical decision support in the care of patients. For example, the top apps reported in the end-of-year questionnaires are summarized in Fig. 1. The top three apps reported as widely used by the students on the end-of-year questionnaire included Epocrates®, PDF Expert®, and VisualDx®. These data are corroborated by the usage and observation logs. Top apps recorded in the iPad usage logs included Micromedex®, DynaMed®, and Epocrates®. Twenty-three of the 37 students also reported loading additional apps, with the top three being: 1) First Consult®, 2) DrawMD®, and 3) USMLE World Q Bank. Student-loaded apps reported by more than one student are listed in Table 1. The overall usefulness of the apps and iPad for clinical decision support is best conveyed in open-ended responses to the end-of-year questionnaire as well as in the weekly interviews. As described by these quotes from students: Initially I was using Micromedex® a lot to review mechanisms or look up side effects of drugs . . . I used VisualDx a lot when looking at rashes; I used it a couple of times with patients to see if the images I had looked similar to what their rash initially looked like . . .. As time went by I learned that Epocrates was a great tool for looking up recommended treatments and quick facts about diseases; even helped my preceptors look up recommended dose of medications or alternative treatments for certain situations. The iPad made it easy to look up a patient's – or physician's – question without leaving the room, or within minutes of leaving. The accessibility of information meant we were all more informed more quickly, and that our knowledge of labs, cultures, and studies could always be up to date at the time of rounding . . . As illustrated in these quotes, not only were students using the iPad to assist their own learning, they also used it to assist with patient education and to provide real-time access to information for their preceptors. Integrating the iPad into the daily workflow In the end-of-year questionnaire, the majority of the third-year medical students (71%; n = 20) reported that the amount of time they spent using the iPad for clinical decision support grew over time (see Fig. 2). Of those who indicated strongly agree or agree, the majority (58%; n = 11) reported using their iPad many times a day.
The 18% of students who disagreed provided insights into why their use did not grow, ranging from the size and weight of the iPad to the use of other electronic devices (e.g., desktop computers, iPhone®). Students also reported that the fast pace of rounds or lack of access to a stable and consistent wireless network impacted their use. The majority of the third-year medical students (75%; n = 21) also reported that their expertise in using the iPad for clinical decision support grew over time (see Fig. 2). All students (n = 28) indicated that their level of expertise in using the iPad at the end of the year was expert or intermediate. Those who indicated strongly agree or agree provided insights as to why their expertise grew over time. One major theme was that the students' comfort level with use and navigation increased over time, contributing to the growth in expertise using the iPad. As stated by one student: 'I did become much more comfortable using a number of medical apps; I also felt more comfortable using my iPad with my preceptor around'. Secondly, students indicated that they learned how to use specific apps to meet specific needs. As described by a student: I learned to use the apps that were appropriate for each case more effectively. Initially I would be just searching for information on Google® or different apps. As time went by I knew which apps would have the information that I was looking for. Expertise in using the iPad allowed students to use the iPad in various clinical settings beyond Internal Medicine. As one student reported: 'By the end of the year I was able to use the iPad in any clinical setting. Initially started as a personal reference tool, then a presentation tool with peers and preceptors, then a patient education tool'. The students not only became more comfortable with use, some also started thinking about the use of the iPad as just what you do: 'Everyday use of the iPad is a habit now, and I do not have to think about where to find information and how to use it'. The 11% (n = 3) who disagreed that their expertise grew over time stated they did not choose to use the iPad (e.g., worried about losing it, preferred paper and books, did not have time to learn how to use it). While it is important to recognize that not all students grew in use and/or expertise with the iPad, it is noteworthy that the majority of the students did grow in both areas. This is promising for continued use of mobile technologies for real-time clinical decision support. Discussion and conclusions This year-long, mixed methods study provides substantial evidence that medical students used mobile technology to support their clinical decision making and learning in the care of patients. Although numerous studies have been conducted using surveys as their primary methodology, this study significantly extends the data collected by studying the use of the iPad using weekly usage and observation logs, interviews, and questionnaires. This study supports the consistent conclusions of multiple studies (1–4, 16) that tablet computers are being used to enhance patient care and learning in clinical contexts for students and residents. This study finds that students' primary uses were obtaining real-time patient data via the EHR during rounds and accessing other medical knowledge resources (e.g., apps, library databases, e-Textbooks) to support clinical decisions and personal learning.
In addition, students used mobile technology to access productivity apps for note taking during and after rounds, e-mail, and studying. These uses were generally consistent with the findings of a national survey of students (16), which found the highest uses were for medical reference apps (47%), USMLE preparation (49%), and e-Books (43%), but relatively lower use for EHR access (23%). In contrast, yet consistent with Ellaway's (17) finding that student choices are shaped by the learning context, evidence from this study indicates that EHR use was much higher because the study was conducted in a hospital setting and preceptors were also given iPads. One supposition is that the tablet computer's ability to display large, high-quality images enables a clearer reading of the EHR, radiographic images, articles, and e-textbooks than do other mobile devices (2,18), providing the necessary affordances to use mobile technology for real-time clinical decision support. The most used apps were ones for clinical decision support, learning, and productivity. This study found that the primary applications that students used for clinical decision support were Epocrates®, PDF Expert®, Micromedex®, DynaMed®, and VisualDx®, in addition to more traditional library resources. The students indicated that the use of the apps enabled easy access to information for clinical decision support and their learning by connecting knowledge to real-time needs. As found in the Boruff and Storie study (2), Epocrates was a favorite resource because it provided quick facts on diseases, drugs, and treatments. Integrating mobile technology into the medical student's daily workflow was essential for learning and clinical decision support. Consistent with Boruff and Storie's (2) finding that 70% of third- and fourth-year medical students used mobile devices at least once a day, the majority of the students in this study reported using their iPad many times a day as it became a seamless part of their work. The primary reasons indicated for the students' increased usage were that practice using the iPad shortened access time, increased their comfort level using it in patient care, and improved their knowledge of useful resources and apps. The importance of practice using the iPad is supported by Lombardo and Honisett's (19) results from a pediatric clerkship, which found that 70% of students agreed that the iPad was useful in achieving the learning objectives of the clerkship and 84% agreed the technology skills acquired by using the iPad would be useful in future medical careers. In our study, the students who did not report an increase in use and expertise indicated alternative ways to access information. This study has several limitations: 1) the 37 students were the first class to rotate in one community hospital during a third-year Internal Medicine clerkship, 2) the wireless network in the community hospital that served all users (including patients and visitors) was unstable and lacked the robustness to support consistent mobile technology use, and 3) the iPad usage logs were self-reported and could have been more accurate with automatic tracking of use. The current study indicates that developing expertise in using mobile technology and various apps was critical for effective and efficient support of real-time clinical decisions. Although this study used iOS devices, certainly other mobile technology devices (e.g., Windows, Android) may have produced similar results.
However, due to the limited number of medical-related apps on other platforms, healthcare providers are choosing iOS devices (8). Several recommendations result from the study and include: 1) encouraging medical students to use mobile technology to access medical knowledge resources including the EHR; 2) providing data service capability and mid-level storage capacity on each device; 3) integrating quarterly app training to increase effectiveness in clinical decision support; and 4) providing tablets that fit into a white coat pocket. Future research could include the impact of mobile technology on 1) patient outcomes and length of stay, 2) establishing patient/doctor rapport, 3) student performance on shelf tests and USMLEs, and 4) the preceptors' experience with students' use of mobile technology. Previous presentations Initial results of this study were presented at the following events:
2016-05-14T15:17:49.145Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "c8c1109cd197e05029bc5e32dbd64772d616f43d", "oa_license": "CCBYNCND", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/jchimp.v4.25184?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8c1109cd197e05029bc5e32dbd64772d616f43d", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
244255886
pes2o/s2orc
v3-fos-license
A REVIEW ON THE IMPACT OF THE COVID-19 PANDEMIC ON THE HEALTH CARE SECTOR The global crisis of the present era, the COVID-19 pandemic, has brought new normal ways of working to many sectors. The present review highlights the impact, problems, and challenges faced by major areas of the health care sector due to the pandemic and also addresses some aspects of upcoming approaches. The healthcare sector is the one sector that has been in demand since this COVID-19 pandemic arose. During the initial period, there was disruption of various services provided by the health care sector due to supply chain management issues and reduction in demand by consumers, quarantine, and the lockdown period. The healthcare workers also confronted a huge challenge due to the increased number of cases and shortage of amenities and safety measures. This significantly affected even COVID-19 patients and the general public suffering from other diseases. To fight this issue, research and development (R&D) teams in pharmaceutical industries made great efforts to explore molecules and save many lives. Gradually, innovative ways to strengthen systems and combat the pandemic started emerging. Numerous ways and rules were adopted to prevent, diagnose, and cure the disease. Artificial intelligence technology has emerged as one of the boons to address many of the unresolved or time-consuming mysteries. All the divisions of the health care sector have started working more efficiently, adopting new strategies to face future challenges. INTRODUCTION The pandemic of SARS-CoV-2, which took birth in China, has turned into a struggle of historic proportions. As George Bernard Shaw put it, "History repeats itself" [1], and each crisis leaves behind permanent structural changes [2], but man seems incapable of learning from history. On December 31st, 2019, the World Health Organization (WHO) was informed of cases of pneumonia of unknown cause in Wuhan City, China [3]. The first batch of people infected had shared exposure to the Huanan seafood wholesale market, where not only farm animals and seafood but also small animals like bats and snakes were sold [4]. By 7th January 2020, a previously unknown Betacoronavirus was reported by the Chinese authorities as the causative agent of this novel infection [5]; it was isolated from human airway epithelial cells [7] and named "SARS-CoV-2" by the International Committee on Taxonomy of Viruses [6]. Infected patients were observed with common symptoms like fever, cough, headache, throat pain, tastelessness, and fatigue, followed by difficulty in breathing. Some patients were also found to have sepsis, septic shock, pulmonary edema, severe pneumonia, and acute respiratory distress syndrome [8]. This deadly virus infects the respiratory, gastrointestinal, hepatic, and central nervous system tracts of humans as well as cattle, birds, bats [6], and other wild animals. This novel virus possesses a ball-like structure along with some other cell surface proteins. It also possesses spike-like structures on the surface, made up of glycoprotein, which give coronaviruses their name [9]. It is this crown-like structure that binds to host cells. The researchers found that the SARS-CoV-2 spike-like glycoproteins bind to a receptor on the human cell surface called angiotensin-converting enzyme 2 (ACE2), and their binding affinity was found to be 10 to 20 times higher than that of the 2002 SARS virus [10].
Though there are a few structural and sequence similarities between the 2002 SARS virus and SARS-CoV-2, the unique difference of SARS-CoV-2 is that it cannot be bound by three different antibodies [11], enabling inter-human transmission and having a high impact on public health [6]. Reverse transcription-polymerase chain reaction (RT-PCR) is the confirmatory and final test for individuals for diagnosis and confirmation of COVID-19. This test is done by taking swabs from the nasal and oral tracts of the suspects, extracting the viral RNA in a printer-like machine, and amplifying it to detect SARS-CoV-2. The rapid antibody test is one more test to detect COVID-19 positive patients; it checks the level of antibodies developed against a certain infection. ELISA is another antibody test approved for serological surveys to detect the infection in containment zones or populations exposed to COVID-19 patients. The drawback of the above two tests is that they may detect antibodies produced against some other infection and show that the sample is positive for COVID-19 [12]. Although the deadly virus was isolated from a throat swab, specimens of blood and stool [13] could also be used to detect it. But the virus shedding pattern was not well understood [6], knowledge of which would help in collecting the optimum specimen. In contrast to SARS-CoV or the Middle East respiratory syndrome virus MERS-CoV, this novel SARS-CoV-2 grows better in primary human airway epithelial cells than in standard in vitro tissue-culture cell techniques. By early March 2020, the World Health Organization declared the COVID-19 infection to be a pandemic. By then, it had spread over 200 countries and territories across the globe. In the first week of June, more than 1.4 million confirmed cases had been detected worldwide. As of 1:00 pm on 4th August 2020, the number of confirmed virus cases in India was 1,858,689, with 39,002 deaths [14]. In this current emergent situation, when drugs and vaccines are still under clinical trials, experts in India have begun treatment as per the guidelines given by the Ministry of Health and Family Welfare Directorate General of Health Services. As per the above-stated protocol, patients with mild conditions are administered antipyretics and are also advised to take adequate nutrition and hydration. Hydroxychloroquine, an old antimalarial drug, is also administered to patients with moderate symptoms and those having high-risk features for severe disease [15]. Hydroxychloroquine is proved to be safer than chloroquine, with fewer drug-drug interactions [16,17]. On May 1, 2020, the drug remdesivir gained emergency use authorization (EUA) from the FDA [18] and was repurposed to treat severe, hospitalized COVID-19 patients on oxygen supplementation therapy [19]. Another treatment that proved effective in previous pandemics, with a significant reduction in the relative risk of mortality, is to obtain convalescent plasma or antibodies from recovered individuals and use them against severe acute respiratory infections [20]. Most importantly, the current guidelines in The Lancet emphasize that systemic corticosteroids should not be given routinely for the treatment of COVID-19 [21], as they can result in delayed clearance of viral RNA (from previous reports on SARS-CoV and MERS-CoV) and other complications such as psychosis [22].
Even though there is no specific drug or vaccine developed for fighting against COVID-19, preparing a route map of people who have come in contact with COVID-19 patients and collecting their appropriate specimens for testing is a priority for clinical management and outbreak control [6]. Apart from this, restricting the movement of COVID-19 patients, either by admitting them to hospitals or by properly making arrangements for their quarantine, and treating them until they get back to normal health is one more important aspect. This review discusses issues concerning the healthcare sector and suggests that artificial intelligence and telehealth will play a key role in future perspectives. The review article was compiled utilizing search engines including PubMed, Scopus, ScienceDirect, Google Scholar, etc. The articles related to the review were compiled from the last 10 years, from 2010 to 2020. COVID-19 impact on hospital and medical services India is one of the worst-hit countries [23], with coronavirus cases shooting up day by day. This pandemic is pushing the limits of healthcare systems [24], and hospital administrators are forced to start preparing for worst-case scenarios now [25]. The novel COVID-19 pandemic has affected the health care sector in multiple ways. In the initial response, there was a serious shortage of healthcare facilities, equipment, pharmaceuticals, and skilled personnel [26] in hospitals, while in contrast, demand for medical masks, hand sanitizers, and gloves significantly increased [27]. Loss of life because of an inadequate inventory of drugs or ventilators would be tragic [27]. Suspension of most surgical procedures, a focus on reducing excess inpatient capacity, and postponement of non-COVID-19 health-related issues meant walking a precarious line [28]; OPD (outpatient department) counters remained closed for too long, which sparked fears of an impending economic crisis and recession in hospitals [27]. Cancer patients, who are already immunocompromised, face a high risk because of the need for chemotherapy and must weigh the risks of delaying treatment [29]. But when compared with patients suffering from other types of cancer, lung cancer patients did not have a higher probability of severe complications [30]. Hospitals have a responsibility to minimize foreseeable risks to their patients; in consideration of that, both doctors and nurses face a dilemma in making decisions on patients suffering from paranoid delusions [31,32]. Challenges and problems faced by healthcare workers Health workers (HWs) are at the forefront of the COVID-19 outbreak response, are responsible for daily patient care, and are directly exposed to hazards that put them at high risk of infection [33]. This new reality in the healthcare sector comes with a lack of adequate personnel and resources. Physicians and nurses are asked to work for an extended period of a day or two [26]. Patients in the ICU need to be shifted to other units to cater to the increased number of COVID-19 cases. Later on, as the number of cases started increasing, the Indian Council of Medical Research (ICMR) approved 176 government and 78 private hospitals [34] to undertake coronavirus testing [35]. However, even after the recent introduction of testing kiosks and drive-through testing, shortages of testing kits and handling gear and high test costs have been some of the problems faced by private hospitals after scaling up coronavirus testing [20].
Healthcare workers approaching the patients are instructed to wrap themselves in a PPE (personal protective equipment) kit; hence, the availability of gowns must be monitored by stores and inventory control. It is reported that currently half of the healthcare workforce (which includes nurses, lab workers, and housekeeping) is self-reporting fear of work, inadequate facilities, and mental depression, which altogether has forced them to resign. In the year 2020, the World Health Organization (WHO) identified that there is a shortage of 9 million nurses and midwives around the world [36]. Fields R and fellow researchers surveyed healthcare workers for their opinion on whether hospital management engages in improving hospital operations by implementing novel technology; the survey reports that healthcare professionals felt they had insufficient access to novel technology, that inadequate training may leave employees frustrated and confused, that the lack of time with patients may be exacerbated by an overload of administrative tasks or non-clinical assignments, and that too few staff result in high work pressure, all of which would result in poor handling of the pandemic situation in some hospitals [37]. COVID-19 impact on pharmaceutical industries This unprecedented time brought a negative impact on the pharmaceutical industry for a short period. Some pharma industries remained closed for a certain period as workers tested positive for COVID-19 [38]. To prevent this from lasting long, a special COVID-19 HR team was appointed in pharma industries to monitor the spread of infection among the employees working within the plants. This team includes a panel of doctors to provide virtual guidance to employees on health issues and conduct periodic medical checkups, including thermal scanning at the entry point for all employees, and it also prepared a protocol for staff working at plants [39]. Unfortunately, Indian pharmaceutical industries faced other problems too. Some industries in India rely heavily on China for APIs, starting materials, and intermediates for the manufacture of generic drugs. During this pandemic period, some generic drugs played a crucial role in the fight against COVID-19. A few Indian pharma industries manufacturing these generic drugs include IPCA Labs, Zydus Cadila, Mangalam Drugs and Organics, and Wallace Pharma. Canceled or reduced cargo flights and an erratic supply chain rendered it impossible for these Indian pharma companies to purchase products from China [38]. This resulted in slow production, lower availability, and higher costs for drugs like vitamins and penicillin, and the cost of paracetamol also hiked up from Rs 250-300 per kg to Rs 400-450 per kg [40]. Indian pharma industries can sell drugs in the US market only after undergoing inspection and getting approval from the US FDA. With the ban on international travel, inspection became out of the question, rendering it impossible for Indian drug companies to sell in the US and other overseas markets [38]. Challenges and problems faced by pharmaceutical industries The challenges faced by the pharma sector are hoped to be temporary. In the present critical situation, pharmaceutical companies across the globe are striving very hard to fulfill the huge demand for much-needed medicines, vaccines, and medical devices.
In the global healthcare infrastructure, the Indian pharma sector is considered an important component and an instrument in saving millions of lives every year, manufacturing almost 60 percent of the vaccines used globally and almost 50 percent of the US's generic drug requirements. It is a matter of pride [41] that India has the third-largest pharma sector in the world [42]. Medicine choice for COVID-19 often works on a trial-and-error basis [43]; drugs currently under investigation and yet to be fully approved by the FDA include hydroxychloroquine, lopinavir and ritonavir, tocilizumab, and sarilumab [44]. Chloroquine tablets manufactured by Bayer Pharmaceuticals are considered a drug for emergency use under the US government's emergency use authorization for treating COVID-19 patients. Even though Bayer took a great hit in revenue during the pandemic, the repurposing of chloroquine helped them balance their business. While Bayer would sell 3 million worth of drugs to the public, Novartis is engaged in giving back to the population 130 million doses of hydroxychloroquine tablets to aid during the global pandemic [45]. This drug is known to have similar benefits as chloroquine with greater tolerability [46]. An unexpected jump in sales was observed for Pfizer's Prevnar 13 vaccine, which could be used in treating pneumonia in the course of the COVID-19 pandemic. The pharmaceutical company AbbVie is engaged in a joint venture with health authorities and several institutions involved in clinical studies and research on the antiviral drugs lopinavir and ritonavir. The company offered lopinavir and ritonavir as investigational drugs for COVID-19 treatment in various countries [47]. Bill Gates, while commenting on the strength of the Indian pharmaceutical industries, highlighted the contribution of Bharat Biotech in developing a vaccine that could immunize the whole world to fight against pandemics [48]. COVAXIN, an inactivated vaccine manufactured and marketed by Bharat Biotech jointly with the ICMR-National Institute of Virology, received DCGI approval and is now under Phase II clinical trials [49]. The antiviral drug remdesivir from Gilead showed promising results in animal studies and is considered a top-ranked drug to help battle the coronavirus crisis. This marked increase in the financial status of pharmaceutical companies has raised the question of whether this pandemic outbreak has been beneficial to them [45]. The above statement can be justified based on the sales lift of 400 million at Novartis and profits of 280 million at other companies like Eli Lilly and Sanofi during the COVID period. Protecting patients and health care workers Strategic planning in the healthcare sector has been opened up. Many health care facilities need to improve and upgrade their infrastructure, which could help in streamlining operations and making their facilities more conducive to the health and safety of patients, staff, and health care professionals. Based on their own experience and inputs received from experts, hospital administrations would emphasize employing effective measures for the future management of this kind of pandemic. Eli Perencevich, a doctor and epidemiologist at the University of Iowa, noted that ninety-nine percent alcohol hand rub placed at the entrance could help in the maintenance of hygiene among patients and healthcare workers of the hospital.
There is some evidence that higher air humidity can reduce the viability and airborne transmission of certain kinds of viruses, including coronaviruses, so hospitals can plan to bump up their ventilation rates and bring air in from outside, says Kevin Van Den. Handrails in hospitals are installed on staircases to serve as support for those climbing steps, especially the sick, the elderly, visitors, etc. But these handrails are silent vehicles for transmitting pathogenic bacteria. Transmission of pathogens from toilet lock handles to office lock handles was also identified by Amala Smart Enoch in his studies. Therefore, to prevent these potential pathogens, adequate sanitary surveillance should be ensured through the provision of soap and water for hand-washing by visitors and healthcare workers, and subsequent application of disinfectant where necessary [50]. Hospitals in the future must be planned with independent rooms for hospitalized patients to reduce hospital-acquired infections. The infectious-disease building opened in 2010 at Sweden's Skane University Hospital could give an idea for constructing a hospital: outpatients or suspected corona patients can enter several private isolation rooms directly and bypass the communal waiting areas, while on the upper floors the inpatient rooms have doors that open onto balconies that wrap around the circular building. Patients can be admitted to rooms via the outdoor pathways. Torsten Holmdahl, a doctor from the Swedish hospital, says the entrance of COVID patients to the hospital under emergency conditions is resolved by providing separate entrances and waiting areas. Scientists have also found the virus in stool samples and toilet bowls, so flushing an uncovered toilet can spray aerosolized droplets of water and waste around the room [51]. Toner and Waldhorn suggested, as a preventive measure, that physicians, nurses, respiratory therapists, pharmacists, environmental services staff, and supply chain managers will require a multidisciplinary effort and expertise in their fields [25] to prepare effectively and respond to crises [52]. According to an article published in the American Journal of Infection Control, there was a 30 percent reduction in antibiotic prescriptions and prescriptions related to common respiratory infections in a group that used hand sanitizers [53]. Hospitals and healthcare professionals have been recognized as vital not only to the safety and well-being of their local communities but also to the security and economic health of the nation. To ensure the safety of patients, outpatient department (OPD) and in-patient services should be resumed to treat all non-COVID-related cases. To make it more convenient, morning and evening shifts for consulting doctors can be planned to avoid overcrowding, or a prior appointment system can also be taken into consideration. Healthcare staff involved in treating suspected COVID-19 patients should be provided with PPE, and rotation duty and periodic days off must be planned to relieve the mental stress felt by the staff. On completion of their shift duty, all staff must be provided with facilities to quarantine in the hospital by management to avoid carrying the infection over to their families. Optimizing the telehealth system for recovery from COVID-19 Almost overnight, this pandemic has made healthcare sectors start switching towards immediate and universal secure telehealth. It is now considered a lifesaving tool that saves time and money.
In the management of COVID-19 patients, telemedicine has included an additional informational page for guidance about prevention and treatment, training, and communication, to assist remote consulting for community residents and medical staff. This enabled preliminary screenings to be conducted through remote consultation, which avoided the risk of cross-infection in hospitals. Additionally, it helped the medical staff to communicate with their colleagues, listen to lectures, and apply for consultations [54]. This enabled care providers at smaller health care facilities situated in rural areas to connect with specialists in large hospitals over video calls and be updated on the latest treatments, and demonstrations can also be made of certain therapeutic aspects, including surgery [55]. In this regard, the Indian Space Research Organization (ISRO) began a pilot project linking Apollo Hospital in Chennai with Apollo Rural Hospital at Aragonda Village in Andhra Pradesh [56]. It benefits not only rural people but also people in urban areas, especially [55] at the time of a pandemic outbreak; people under quarantine were instructed on quarantine processes at home, applications for personal protection, and seeking medical attention. Song X, from his own experience with the telehealth platform, suggests that with this innovation the health care sector has gained great benefits and has also become capable of sharing the necessary information and supporting healthcare providers. Telehealth is scaling up in this era because it achieves high-quality outcomes regardless of geography at lower costs, and the burden of traveling can also be reduced [55]. The Centers for Disease Control and Prevention (CDC) in 2002 developed the Crisis and Emergency Risk Communication training module, which is also called a communication model "to communicate information that the public wants or needs to know to reduce the incidence of illness and death" in an emergency. In this model, the communication is done by a "spokesperson" from among public health officials or hospital physicians at the top official level in front of television [57]. Finally, during COVID-19 lockdown 2.0 in India, long-pending telemedicine guidelines were issued by the Ministry of Health and Family Welfare (MoHFW), in collaboration with NITI Aayog and the Board of Governors (BoG) of the Medical Council of India (MCI); these resulted in a surge in teleconsultation programs [58]. Application of artificial intelligence in tackling COVID-19 Infectious disease surveillance, in particular the timely detection and early warning of disease outbreaks, is indeed a function of the strength and capacity of the health system. The new, booming field of artificial intelligence (AI) has proven to be a very good weapon to fight back [59] and analyze many issues. In the present situation, AI has played a major role, from the outbreak of the virus and virus mutation to its forecast, and is useful in controlling this infection in real time [60,61]. It has helped hospitals to prepare a strategic plan regarding their requirements. Biosensors have helped very efficiently in detecting viral pathogens in air, water, soil, surfaces, and human and animal tissues, and can also detect symptoms even before people realize they are infected [62]. In the future, this seems likely to become an important technology to fight against other epidemics and pandemics as well [63].
While the world waits for a vaccine for COVID-19, artificial intelligence (AI) accelerates the process by reasoning across all available biomedical data and information in a systematic search for existing approved medicines [64,65]. The White House and a coalition of leading research groups, in response to the pandemic, have started a free open research dataset challenge, which contains around 200,000 resources and 93,000 full-text articles on SARS-CoV-2 and related matters [65]. These resources are provided to the global research community to apply artificial intelligence, extract relevant medical information, and guide practitioners towards effective treatment. Artificial intelligence also has a promising role, from the prevention of disease to predicting the probable sites of infection, the influx of the virus, the need for beds, and necessary information/guidance for healthcare professionals during this crisis [60]. BlueDot is a Canadian start-up and Amazon Web Services (AWS) customer that uses AI machine learning algorithms to detect disease outbreaks, and it was among the first to raise the alarm about an outbreak of a respiratory illness in Wuhan, China [6]. In silico methods using AI have designed six new molecules that could limit the ability of the COVID-19 virus to replicate in cells [67]. A neural network is a kind of machine learning method that can extract the visual features of this disease, and this would help in proper monitoring and treatment of the affected individuals [69]. BenevolentAI and Imperial College London used algorithms to find potential drug targets, which were found to be promising because the software pointed to the enzyme adaptor-associated protein kinase (AAK1) as a possible target for the disease [67]. For instance, experts from the pharmaceutical industries often seek help through artificial intelligence to understand the depth of health crises in society. CONCLUSION The novel COVID-19 pandemic has affected almost all sectors very badly; the pharmaceutical and health care sectors also took a major hit in the initial lockdown period. Currently, the detection of the corona infection is being carried out by taking swabs from the nasal and oral tracts of suspects; however, since the presence of the virus has also been reported in the stool samples of patients, examining fecal samples of suspects for confirmation of the virus can alternatively be thought of. The usage of face masks, hand gloves, and hand sanitizers by health workers and also by the general public has greatly increased. Although there was a shortage of ventilators and PPE kits initially, they are gradually being adequately supplied now. Doctors, nurses, and allied health workers are forced to work many extra hours to treat the rush of COVID-19 patients, which is causing them stress and exposing them to great health risk. In this situation, some governments came forward to announce life insurance policies for corona warriors, which is a welcome move; in addition, hospital management should also think of compensating the health care professionals involved in treating corona patients with good incentives or other benefits. During this COVID-19 pandemic, telemedicine was tried out very well by most doctors to reach out to their patients and provide consultation, so in future this can be of major help in case of any such outbreaks.
Artificial intelligence has much to offer in the future; just as biosensors helped initially in detecting this pandemic, in the future they can be adopted as an effective way to detect outbreaks. This pandemic has also prompted thinking about constructing hospital wards in a special way, not only to provide better care for patients with such rapidly spreading infections but also to keep other outpatients, and in-patients admitted with other complications, away from such isolation wards. Initially, symptomatic treatment was offered to patients with drugs such as hydroxychloroquine, lopinavir, ritonavir, tocilizumab, and sarilumab. Pharmaceutical companies were therefore asked to manufacture these drugs in bulk quantities to cater to the world's needs, and in the first quarter of this year there was a great surge in the manufacture and distribution of these drugs, especially hydroxychloroquine, by renowned pharmaceutical companies worldwide. India and China played a major role in supplying these drugs to many countries around the world. As this pandemic hit the world's economy very badly, after lockdowns of more than two months many countries slowly started to unlock the movement of transport, the public, and other processes, to normalize life and help the common public lead their normal lives. As the process of unlocking began, the number of COVID-19 cases also increased drastically, as it is sometimes hard to strictly follow the guidelines suggested by the health authorities. This situation necessitates strong immunization of the people, such as with vaccines or interferons, which can effectively save the lives of patients against such infections. Inactivated COVID-19 vaccines have been tried in laboratory animals and also in humans on a small scale by some pharmaceutical companies, but it will still take time to complete the clinical trials and obtain final approval by the authorities before they reach the people; until then, it is advised to work while maintaining social distancing, protecting oneself with a mask, employing hand sanitization, and avoiding overcrowding. Certainly, new potential therapeutic regimens will emerge from the research going on across the globe at a rapid pace against this rapidly spreading COVID-19 virus. Let us be optimistic and work safely to save ourselves and serve the community. AUTHORS' CONTRIBUTIONS All authors contributed equally by providing ideas in writing this review. CONFLICTS OF INTEREST All the authors hereby declare that they have no conflict of interest.
2021-10-14T13:09:01.745Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "0867b9bd9d46407eda7d0323f9ea8d52b9640eee", "oa_license": "CCBY", "oa_url": "https://innovareacademics.in/journals/index.php/ijpps/article/download/42566/25503", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "9608389e4765f1e471b80d9ccb5723ce7dd54266", "s2fieldsofstudy": [ "Medicine", "Business" ], "extfieldsofstudy": [ "Business" ] }
234705433
pes2o/s2orc
v3-fos-license
Clinical, laboratory, and radiological features indicative of novel coronavirus disease (COVID-19) in emergency departments – a multicentre case-control study in Hong Kong (1) Background: It is unclear whether the reported presenting clinical features of coronavirus disease 2019 (COVID-19) are useful in identifying high-risk patients for early testing and isolation in the emergency department (ED). We aimed to compare the exposure history, clinical, laboratory, and radiographic features of ED patients who tested positive and negative for COVID-19; (2) Methods: We conducted a case-control study in seven EDs during the first five weeks of the COVID-19 outbreak in Hong Kong. Thirty-seven laboratory-confirmed COVID-19 patients were compared with 111 age- and gender-matched controls; (3) Results: There were no significant differences in patient characteristics and reported symptoms between the groups, except patient-reported fever. A positive travel history or contact history was the most significant predictor for COVID-19 infection. After adjustment for age and presumed location of acquiring the infection in Wuhan/Hubei, patient-reported fever (OR 2.6, 95% CI 1.1 to 6.3), delayed presentation (OR 5.0, 95% CI 2.0 to 12.5), having medical consultation before ED presentation (OR 7.4, 95% CI 2.9 to 19.1), thrombocytopenia (OR 4.0, 95% CI 1.6 to 9.7), raised lactate dehydrogenase (OR 5.9, 95% CI 1.9 to 18.5), haziness, consolidation or ground-glass opacity on chest radiography (OR 5.6, 95% CI 2.0 to 16.0), and bilateral changes on chest radiography (OR 13.2, 95% CI 4.7 to 37.4) were each associated with higher odds of COVID-19, while neutrophilia was associated with lower odds (OR 0.3, 95% CI 0.1-0.8); and (4) Conclusions: This study highlights several features that may be useful in identifying high-risk patients for early testing and isolation while waiting for test results. Further studies are warranted to verify the findings. Introduction On 11 March 2020, the World Health Organization (WHO) declared a pandemic for the coronavirus disease 2019 (COVID-19) outbreak [1]. Within 3 months of its first emergence in Wuhan [2,3], severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has rapidly spread all over the world, having infected more than 4.3 million people [4] and caused many more deaths than severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) combined [5]. WHO advises to test every suspected case and isolate those who test positive [1,6]. Health systems worldwide now need to respond to the rising demand for rapid diagnosis and isolation of patients suspected to have COVID-19. Emergency departments (ED) are patients' first contact point with the health care system in most countries and they play a critical role in diagnosis and decision-making on isolation. However, access to real-time reverse transcriptase-polymerase-chain-reaction (RT-PCR) testing is limited, especially in the initial phases of the outbreak and in resource-poor settings. In Hong Kong, even though the laboratory testing capacity has been expanded to include all inpatients with pneumonia and ED patients with fever or respiratory symptoms as the pandemic evolves [7], the turnaround time of RT-PCR is long, frequently up to several hours even in university hospitals [8].
Given the limited number of negative pressure rooms, emergency physicians still rely on travel or contact history, presenting signs and symptoms, routine laboratory tests, and imaging when deciding where to place suspected cases while waiting for test results. Despite this, little is known about the value of routine clinical assessment in identifying patients with COVID-19 in the ED setting. In this study in Hong Kong, we compared the clinical characteristics on ED presentation, including exposure history, symptoms and signs, laboratory and radiological findings, of ED patients with COVID-19 with those who were not infected but who were tested for suspected infection. We further determined the risk of COVID-19 for each clinical characteristic that was significantly different between the groups. Methods We conducted a case-control study in the emergency departments of seven public hospitals managed by the Hospital Authority (HA) of Hong Kong. The HA is a statutory body that manages all public hospitals in Hong Kong, which are organized into seven hospital clusters based on geographical locations [21]. At present, all confirmed and suspected cases of COVID-19 in Hong Kong are treated in public hospitals. The seven study sites, one in each of the seven hospital clusters, included two university-affiliated hospitals and five acute regional hospitals. The study was approved by the institutional review boards of all study hospitals (HKU/HKWC IRB UW 20-087, HKECREC-2020-017, NTEC-2020-0092, REC (KC/KE)-20-0049/ER-2, REC (KC/KE)-20-0051/ER-2, KWC-2020-0032, NTWC-2020-0026). Written consent was waived in light of the retrospective study design and anonymized use of data. Immediately following the official announcement of a cluster of patients with pneumonia of unknown etiology in Wuhan by the National Health Commission of the People's Republic of China, the Hong Kong Department of Health (DoH), through the HA, implemented a bundle of measures to facilitate early recognition, isolation, notification, and molecular testing for all suspected cases [8]. Active surveillance, based on a set of clinical and epidemiological criteria that has evolved as the epidemic further spread beyond Wuhan, is performed upon patient presentation to any healthcare facility in Hong Kong. All ED patients are screened by staff for possible COVID-19 infection with epidemiological and clinical criteria, and all suspected cases are hospitalized and isolated in negative pressure rooms for RT-PCR testing after admission. At this stage of the outbreak in Hong Kong, the local health authority still recommends hospital admission for all suspected cases. Some patients are hospitalized and isolated directly from sources other than EDs under the direction of the Centre for Health Protection (CHP), part of the DoH. On 14 January 2020, the CHP further expanded laboratory surveillance to cover all inpatient community-acquired pneumonias irrespective of travel history. On 19 February 2020, deep throat saliva testing, a non-invasive method of specimen collection with reasonable sensitivity [22], was introduced in EDs and government outpatient clinics. Low-risk patients with mild febrile illnesses are instructed to produce an early morning sample of saliva at home on the following day for RT-PCR testing. We set our study period from 20 January 2020 to 29 February 2020, with the intention to gather information from the initial phase of the epidemic to inform clinical decision making.
During that period of time, Hong Kong witnessed the first imported case from Wuhan (22 January 2020), followed by intermittent presentations of local cases. We recruited cases who were admitted to hospital from the study EDs as inpatients with laboratory confirmation of infection with COVID-19, irrespective of clinical signs and symptoms, as defined in the WHO interim guidance [6]. RT-PCR tests were performed in the respective hospitals and/or the government public health laboratory in accordance with prevailing local practice [8]. Controls were patients admitted through the study EDs during the same period for RT-PCR testing for COVID-19 but who tested negative. Patients of all age groups were included. We excluded patients who were (1) admitted to the study hospitals under CHP isolation orders from sources other than the ED, such as fever clinics or quarantine camps, because these patients might have different risk profiles and clinical presentations compared with ED patients; or (2) transferred to the study hospitals from other hospitals, because interventions received before transfer may have altered the clinical characteristics. Eligible patients were identified by searching the Hospital Authority Clinical Data Analysis and Reporting System (CDARS), which is a centralized repository of electronic medical records in the HA that contains data on patient characteristics, dates of various clinical activities, diagnoses, laboratory tests, procedures, and drug prescriptions for audit and research purposes. The system has a high accuracy in coding and has been used in many population-based research studies [23]. We used laboratory test orders for COVID-19 RT-PCR as the search criterion. During the study period, 14,595 RT-PCR tests were ordered for 10,845 patients in the study hospitals, of whom 37 confirmed cases and 9,283 negative cases were admitted through the ED (Figure 1). We matched each confirmed case with three negative control cases, who were randomly selected from the same hospital within five days of presentation, and were the same gender and of similar age (± five years). The controls were selected by a biostatistician using the statistics software R version 3.6.2 (R Foundation for Statistical Computing, Vienna, Austria) with no knowledge of their clinical presentations. They represented the population at risk of COVID-19 infection [24]. We reviewed electronic medical records and extracted demographic, epidemiological, clinical, laboratory, radiological, treatment and outcome data using a standardized data collection form. Each patient record was reviewed by the study lead investigator and a local co-investigator independently to ensure accuracy of data entry. Both of them were aware of the COVID-19 test results. Any disagreement was resolved by discussion between the case reviewers. We defined the date of symptom onset as the date when the first symptom was reported. Since the exposure history might not be available or reliable at ED presentation, we cross-checked the exposure history of each confirmed case with the official account released by the CHP. For travel history outside Wuhan/Hubei, we defined a place with active community transmission of COVID-19 based on the prevailing CHP criteria for disease notification, which are determined according to the situation of local outbreaks in different countries by public health officials.
For controls, such an official account was not available because no epidemiological investigation is normally conducted by the CHP if a patient tests negative for COVID-19. Starting from 6 February 2020, ED staff had access to patients' cross-border travel records to mainland China or other countries within 30 days of ED registration; these data were provided by the Immigration Department. We believe the travel history recorded in the clinical notes should be accurate for control cases after that date. Since the RT-PCR tests were new tests in the study hospitals, the sensitivity and specificity had not been reported. To avoid misclassifying false negative cases as controls, we reviewed all re-attendance records of the controls after hospital discharge up to 16 March 2020 and did not find any re-attendance due to COVID-19. We extracted symptoms as reported in the medical record. We defined fever as a patient-reported symptom in the clinical notes without specifying the temperature threshold because most clinicians did not record the temperature reported by the patients and how it was measured. We reviewed the consultation history of each patient and considered any visit to any health care provider for the same physical complaints as prior medical consultation before their ED presentation. For patients with multiple ED attendances within the study period, only clinical characteristics recorded on the episode that led to hospital admission and COVID-19 testing were extracted. As for laboratory tests, we reviewed the results of complete blood count, coagulation profile, erythrocyte sedimentation rate (ESR), liver and renal function, lactate dehydrogenase, C-reactive protein, procalcitonin, creatine kinase, d-dimer, lactate, bacterial cultures, and RT-PCR for other viruses, collected within 48 hours of ED presentation. Beyond that, the laboratory results are likely to have been affected by medical interventions after admission and possible nosocomial infection. Since CT thorax was seldom ordered for ED patients who tested negative for COVID-19 in public hospitals, we only compared the chest x-ray findings. As for radiographic findings, we adopted the interpretation of the reporting radiologist or treating clinicians, wherever available. All laboratory tests and radiological studies were ordered by the attending clinicians based on clinical need and local hospital practice. We followed up the clinical outcome of all patients up to 16 March 2020. Statistical analysis Missing values were not imputed. We used descriptive statistics to analyze the data, with categorical variables reported as proportions and continuous variables as mean ± standard deviation or median with interquartile range (IQR), as appropriate. We used the Chi-square test or Fisher's exact test for comparison of categorical variables between groups, and the Student's t-test or Mann-Whitney U test for continuous variables, as appropriate. We conducted a univariate analysis to study the association of individual variables with laboratory-confirmed COVID-19 infection. We then determined the odds ratio (OR) with 95% confidence interval (CI) of COVID-19 infection for variables with a significant association in univariate analysis.
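As an aside, the unadjusted odds ratio and its Wald 95% confidence interval for any of these binary features can be obtained from the 2x2 exposure-by-outcome table with the standard log-odds-ratio formula. The sketch below is only illustrative: the counts are back-calculated from the reported percentages for patient-reported fever (73.0% of 37 cases, 48.6% of 111 controls) and rounded, so they approximate rather than reproduce the study data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
                exposed   unexposed
    cases          a          b
    controls       c          d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Approximate counts implied by the reported fever percentages (illustration only)
print(odds_ratio_ci(a=27, b=10, c=54, d=57))  # roughly OR 2.9, 95% CI 1.3-6.5
```

Adding age (<65 or >=65 years) and travel history to Wuhan/Hubei as covariates in a logistic regression model, as described next, is what yields the adjusted odds ratios reported later in Table 4.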
Because of the association of age with a poor outcome [14-16,25] and apparently milder infections outside Wuhan as shown in previous studies [16,17], we used multivariable logistic regression to adjust the odds ratios for the patient's age (<65 or ≥65 years) and presumed location of acquiring the infection in Wuhan/Hubei (we used travel history to Wuhan/Hubei as a surrogate) [26]. The Statistical Package for the Social Sciences for Windows version 23.0 (IBM Corp., Armonk, NY, USA) was used for data analysis, with a two-sided p-value of <0.05 considered to be statistically significant. During the study period, there were 37 confirmed cases, including 8 imported cases and 29 local cases, who were matched to 111 controls admitted through the EDs of the study hospitals. The median age of the confirmed cases was 63.0 years (IQR 55.5-71.0) and there was no gender preponderance. There were no significant differences between the cases and controls regarding age, gender, smoking history, and co-morbidities, except that a history of cancer was more common in the control group (Table 1). A significantly higher proportion of cases had a travel history to Wuhan/Hubei (21.6% vs 0.9%, p<0.001) or a place with active community transmission of COVID-19 according to the prevailing local health authority advice (21.6% vs 2.7%, p=0.001), and a contact history with a person with confirmed COVID-19 infection (48.6% vs 2.7%, p<0.001). The proportion of patients with a history of visiting healthcare facilities in mainland China and contact with a person from Wuhan/Hubei with no known COVID-19 infection did not differ significantly between the groups, but the number was too small for a meaningful comparison. Compared with the controls, COVID-19-confirmed cases presented to the ED later after symptom onset (median 7.0 days vs 2.0 days, p=0.007) and a higher proportion of them had had prior medical consultation before their ED presentation (62.2% vs 29.7%, p<0.001). The clinical features of cases and controls are summarized in Table 2. Overall, the symptoms and triage vital signs did not differ significantly between the groups, except that a higher proportion of COVID-19 cases reported fever (73.0% vs 48.6%, p=0.01) and these patients had a higher temperature at ED triage than controls (37.4°C vs 36.9°C, p=0.004), though the difference is not clinically significant. Of note, only 86.5% of the confirmed cases were directly admitted to an isolation or surveillance ward from the ED. Five confirmed cases were admitted to general wards; all were local cases and one had close contact with another confirmed case that was only discovered after admission. During the course of hospitalization, seven (18.9%) confirmed cases, two of whom had a travel history to Wuhan/Hubei, and seven control patients required admission to intensive care. Two confirmed COVID-19 cases and five controls died. Table 3 shows the laboratory and radiographic characteristics of the confirmed cases and controls. Compared with controls, COVID-19-confirmed cases had a lower total white blood cell count (median 4.9 × 10⁹/L vs 8.6 × 10⁹/L, p<0.001), and lower neutrophil (median 3.4 × 10⁹/L vs 6.9 × 10⁹/L, p=0.001) and platelet counts (median 171.0 × 10⁹/L vs 232.5 × 10⁹/L, p<0.001). A significantly higher proportion of cases had a lymphocyte count <1.0 × 10⁹/L (62.2% vs 40.4%, p=0.025).
Conversely, neutrophilia (defined as a neutrophil count >8.0 × 10⁹/L) was more common in the control group (10.8% vs 37.2%, p=0.003). Confirmed cases had significantly higher serum lactate dehydrogenase levels than controls (median 280.0 U/L vs 194.0 U/L, p=0.001). Other laboratory parameters, such as prothrombin time, activated partial thromboplastin time, albumin, alanine aminotransferase, creatinine, creatine kinase, and C-reactive protein, did not differ significantly between the groups. Only a few patients had procalcitonin tested, and the proportion of patients with a level ≥0.5 ng/mL was significantly higher in the control group (17.4% vs 80.0%, p=0.015). The numbers of cases and controls with d-dimer, ESR, and lactate were 5, 18, and 17, respectively, which were too small to allow a meaningful comparison between the groups. SARS-CoV-2 was detected in upper respiratory specimens (including nasopharyngeal aspirate, nasopharyngeal swab, and throat swab or their combination), lower respiratory specimens (sputum, tracheal aspirate, bronchoalveolar lavage), and stool in 33, 19, and 7 confirmed cases, respectively. In three confirmed cases, other human coronaviruses, including human coronavirus OC43 in two cases and human coronavirus 229E in one case, were also detected with RT-PCR in nasopharyngeal swab specimens. As for the controls, six had influenza A virus H1, four had adenovirus, one had parainfluenza virus 3, one had enterovirus/rhinovirus, one had human metapneumovirus, and one had cytomegalovirus detected by RT-PCR in their nasopharyngeal specimens, and 25 had a positive bacterial culture. The odds ratios of having COVID-19 infection in patients with a positive travel or contact history, fever, delayed presentation, prior consultation, leukopenia, lymphopenia, neutrophilia, thrombocytopenia, raised lactate dehydrogenase, and abnormalities on chest radiographs are shown in Table 4. After adjusting for age, the odds ratios were almost unchanged, indicating that age may not be an important factor in confounding clinical presentations. However, when the presumed location of acquiring the infection in Wuhan/Hubei was adjusted for, the association between COVID-19 and leukopenia, and with lymphopenia, was not statistically significant. Discussion Differentiating COVID-19 from influenza and other respiratory illnesses in the ED during flu season in the Northern Hemisphere is challenging. While the RT-PCR test remains the most important diagnostic tool, this study shows that a number of features in exposure history, clinical presentation, laboratory and radiological findings may be useful for clinicians in identifying patients with COVID-19 for early testing and isolation while waiting for test results. Patients with a travel history to the epicentre of COVID-19, Wuhan/Hubei in the initial phase of the epidemic and other countries with active local transmission later in the course of its global spread, and a contact history with a person with confirmed COVID-19 infection, have much higher odds of having COVID-19 infection. This finding highlights the time-honoured value of a proper travel and contact history assessment at the point of patient entry to hospital. Patients who are screened positive should be directed to a separate area, an isolation room if available, and separated from other suspected cases by at least 1 meter [6].
It is noteworthy that as local outbreaks expand in other countries, travel history may become less important, and a contact history may not be apparent at presentation, but only after contact tracing is completed by the local public health authority. Our study shows that despite heightened awareness among healthcare staff from the outset of the epidemic, as local transmission progressed a small proportion of local cases were still admitted to general wards initially, highlighting the importance of infection control measures even in the general ward setting. Compared with the cases, a significantly lower proportion of the controls were admitted to an isolation or surveillance ward directly from the ED, though they were offered COVID-19 testing. This discrepancy may reflect the limited capacity of isolation facilities in the study hospitals or liberal use of testing even for those who were perceived to be less likely to have the infection. Compared with those reported in the published case series, the confirmed cases in our study were older patients with more comorbid conditions [27]. Symptoms, predominantly fever and lower respiratory symptoms such as cough and dyspnoea, were similar to those reported elsewhere, except a higher proportion of patients had sputum production in our study [9,12,13,14,16,20]. Contrary to the observation that cases outside Wuhan/Hubei might be relatively milder [17-19], we did not observe such a pattern here in Hong Kong. The intensive care unit (ICU) admission rate (18.9%) was higher in our cohort compared with mainland China (5%) [13] and only two cases admitted to the ICU had a travel history to Wuhan/Hubei [3]. This can be explained by the older age of our patients, which has been associated with a poorer outcome [14,15], differences in ICU admission policy, better accessibility to ICU beds in the initial phase of the outbreak when the number of confirmed cases was still small, and our sampling strategy of recruiting ED patients only. Overall, we found that symptoms, except patient-reported fever, were not useful in identifying patients with COVID-19 in the ED. However, given the retrospective study design, we could not define the temperature range and method of measurement for patient-reported fever. Also, a 0.5°C difference in triage temperature, though statistically significant, might not be clinically useful as an indicator. In general, we found that patients with COVID-19 presented to ED later than controls. It is also more likely that they had consulted other doctors before ED presentation. Delayed ED presentation could be explained by the non-specific initial symptoms of COVID-19. Multiple consultations might reflect failure to respond to treatment offered by other clinicians, which often targeted other pathogens, or disease deterioration along its clinical course. Huang and colleagues showed that the median time from symptom onset to first hospital admission was 7 days [9], a time interval that was also observed in our confirmed cases. Delayed presentation with prior medical consultation before ED presentation should be a red flag of a possible novel infection that is not responding to usual treatment. In the absence of tell-tale clinical features, similarities in certain abnormalities in laboratory tests between betacoronaviruses, COVID-19, SARS, and MERS, may offer some clues for diagnosis. Lymphopenia is the most widely reported characteristic of COVID-19 [9, 11, 12-15, 18, 19].
In a group of critically-ill COVID-19 patients, the lymphocyte count fell to the lowest point 7 days after symptom onset in survivors and remained low till death in non-survivors [14]. Likewise, lymphopenia is also observed in other betacoronavirus infections, with evidence of lymphocyte infection in SARS [28] and virus-induced T lymphocyte apoptosis in MERS [29]. Chen and colleagues suggested using lymphopenia as a reference index for diagnosis of COVID-19 in clinics [11]. However, we found that neither lymphopenia nor leukopenia, which is less commonly reported [9,13,18], was useful in identifying COVID-19 after adjusting for age and location of acquiring the infection. Other viral infections, such as influenza [30,31], can also cause lymphopenia, making it less discriminatory during the flu season. Neutrophilia, on the other hand, has been reported in one-third of cases with COVID-19 [11] and non-survivors appeared to have a higher neutrophil count than survivors [12]. Interestingly, we found that those with neutrophilia had lower odds of COVID-19 even after adjustment, indicating that a high neutrophil count at ED presentation may suggest infection by pathogens other than SARS-CoV-2. Further studies are required to investigate the diagnostic role of neutrophilia at different stages of the disease in light of these contradictory findings. Thrombocytopenia, a feature also reported in SARS and MERS [32,33], remained significant after adjustment in our study. It occurred in up to one third of cases in a large case series in China [11], but it is not consistently reported across different studies. In our study, less than half of the confirmed cases had a platelet count lower than 150 × 10⁹/L. Using that threshold would have missed more than half of the cases. As for biochemical tests, elevated lactate dehydrogenase is frequently reported in COVID-19 [9,11-14] and its discriminatory value has been demonstrated in differentiating SARS from other causes of community-acquired pneumonia [34]. In our study, 21 out of 31 cases who were offered testing had an elevated lactate dehydrogenase level in serum, but its role in diagnosis requires further evaluation. It only has value if its turnaround time is shorter than that of RT-PCR testing. C-reactive protein has been shown to be elevated in COVID-19 cases [11,13,16], but we found it unhelpful in differentiating COVID-19 from other infections. Previous studies showed that most COVID-19 patients had a normal serum procalcitonin level on admission [9,11,13,14,18]. Despite our small numbers, our findings support that an elevated procalcitonin level might indicate infection by pathogens other than SARS-CoV-2. As for other sepsis biomarkers, such as lactate, d-dimer, and ESR, the number of patients tested was too small in our cohort to allow a meaningful comparison between groups. Early reports suggest patchy shadows, ground-glass opacities, subsegmental areas of consolidation, especially bilateral distributions involving the peripheral lung, are typical radiological abnormalities of COVID-19 found on CT thorax [9,11,12,18,35,36], with their appearance correlating with the stage of disease [16] and the number of lung segments involved increasing with time [37]. Tao and colleagues demonstrated the high sensitivity of CT thorax in diagnosing COVID-19 and its good correlation with disease progression [38].
However, CT thorax is not readily available in most EDs in Hong Kong except for major trauma or life-threatening chest emergencies, such as acute aortic dissection. Despite the frequent ordering of chest x-rays in the ED, their value in diagnosing COVID-19 has not been fully explored in the literature. Our study shows that patients with any haziness, ground-glass opacity or consolidation on the presenting chest radiograph, though non-specific, are at a higher risk of having COVID-19, especially when both lungs are involved. However, it is noteworthy that even with CT thorax, a significant proportion of cases had no radiographic or CT abnormality, especially among those with mild infection [13,36]. Limitations Our study has several limitations. First, we could only compare a small number of confirmed cases with selected controls. The sample size was small but is necessarily limited by the time frame of the early phase of the outbreak in Hong Kong. On one hand, we do not have enough statistical power to detect the differences in certain variables between the groups, neither could we adjust the odds ratios for more confounding variables. Lymphopenia, in particular, almost reaches statistical significance and we cannot totally exclude its role in early case identification. On the other hand, a significant p value seen in comparison only indicates statistical significance between the cases and the selected controls and it does not imply clinical significance. For instance, we believe that the significant difference in triage temperature between the groups is not clinically useful. Second, the selection of controls based on RT-PCR results irrespective of symptoms and signs might introduce selection bias since a number of controls might not have infection at all. That might inflate the odds ratios of some variables in comparison. Yet, we think the liberal strategy of control selection reflects the current context better because a well-defined set of clinical criteria for a 'COVID-19-like illness' simply does not exist. This discounting of symptoms and signs in the selection of cases and controls is consistent with the current WHO definition of the disease. Third, information bias still existed. Compared with the cases, controls might have less detailed travel or contact history. Access to the Immigration Department cross-border travel record by medical staff has already reduced the chance of omitting important travel information. However, controls received less attention from the health authority in contact tracing once they tested negative. Extracting data without blinding to the test result might also introduce information bias. However, many clinical parameters, such as triage temperature, laboratory and reported radiological findings were objective data and we believe the risk of introducing such a bias is low. Fourth, clinical practice and the quality of documentation naturally varied considerably across the study hospitals, over which we had no control. A notable example is that not all controls had two negative RT-PCR tests that were spaced 24 hours apart before hospital discharge. We reviewed the re-attendance records of all controls and found that none were re-admitted subsequently for COVID-19 infection up to 16 March 2020. Finally, this study is an account of the early outbreak of COVID-19 in Hong Kong. The current findings were based on ED patients who presented within the first five weeks of the epidemic in Hong Kong. 
The observed strength and magnitude of associations may change as the pandemic further evolves in Hong Kong and elsewhere. The sample is also not representative of patients admitted to hospital from sources other than the ED. Our findings should be interpreted with caution as they are exploratory. Further studies, preferably prospective studies in other affected areas, are warranted to verify our findings. We are also aware that the insights found in this study will likely be superseded by findings from emerging larger series elsewhere. Taken together, a positive travel or contact history still remains the most significant predictor for COVID-19 infection among ED patients with undifferentiated presentations in the early phase of the outbreak. As the outbreak progresses with more local transmission, delayed ED presentation and prior consultation before ED presentation should alert clinicians to a possible novel infection. Radiological abnormalities, including haziness, consolidation and ground-glass opacity on chest radiography, especially if bilateral, may indicate a higher risk of COVID-19 infection. Patient-reported fever appears to be more likely in COVID-19 patients. Neutrophilia may suggest infection by pathogens other than SARS-CoV-2. The discriminatory value of lymphopenia and thrombocytopenia appears to be modest. Elevated lactate dehydrogenase may be useful only when its turnaround time is shorter than that of the RT-PCR test.
2020-05-21T09:16:42.959Z
2020-05-17T00:00:00.000
{ "year": 2020, "sha1": "bf5ae5ecbccd49d03d340c6bd55ad0d9b367c88b", "oa_license": "CCBY", "oa_url": "https://www.preprints.org/manuscript/202005.0285/v1/download", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "4448cb07f753a06c914b5934fa0f4505cfe1b15f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11719756
pes2o/s2orc
v3-fos-license
miR-30a radiosensitizes non-small cell lung cancer by targeting ATF1 that is involved in the phosphorylation of ATM Increasing numbers of studies report that microRNAs play important roles in radiosensitization. miR-30a has been shown to perform many functions in the development and treatment of cancer, and it is downregulated in non-small cell lung cancer (NSCLC) tissues and cells. This study was conducted to determine whether miR-30a plays a role in the radiosensitivity of NSCLC cells. Radiosensitivity was examined by colony survival assay and tumor volume changes in vitro and in vivo, respectively. Bioinformatic analysis and luciferase reporter assays were used to identify the candidate target of miR-30a. qRT-PCR and western blotting were carried out to detect the relative expression of mRNAs and proteins. Cell cycle and cell apoptosis were determined by flow cytometry. Our results illustrated that miR-30a could increase the radiosensitivity of NSCLC, especially in the A549 cell line. The in vivo experiment also showed the radiosensitizing potential of miR-30a. Further exploration validated that miR-30a directly targets activating transcription factor 1 (ATF1). In studying the ataxia-telangiectasia mutated (ATM)-associated effects on cell radiosensitivity, we found that miR-30a could reduce radiation-induced G2/M cell cycle arrest and may also affect radiation-induced apoptosis. Together, our results demonstrate that miR-30a may modulate the radiosensitivity of NSCLC through reducing the function of ATF1 in the phosphorylation of ATM and may have potential therapeutic value. MicroRNAs have been found to modulate tumor radiosensitivity by modulating a variety of pathways and molecules (17,18). The primary ways in which miRNAs modulate radiosensitivity are DNA damage repair, apoptosis, cell cycle checkpoints and the tumor microenvironment (19). miR-124, miR-200c, miR-302 and miR-142 were found to affect the radiosensitivity of colorectal cancer (20), NSCLC (21), breast cancer (22) and malignant pediatric brain tumors (23), respectively. Moreover, a recent study first found that miR-30a could increase the radiosensitivity of prostate cancer cells (24). We did not find other studies concerning miR-30a and radiosensitivity. Therefore, we investigated whether miR-30a could function as a radiosensitizer in NSCLC and explored its mechanism. In this study, the effects of miR-30a on the radiosensitivity of NSCLC were studied in vitro and in vivo. Bioinformatic analysis and luciferase reporter assay illustrated that activating transcription factor 1 (ATF1) was a predicted target of miR-30a at its 3' untranslated region (3'UTR). Overexpression of miR-30a in NSCLC cell lines enhanced the radiosensitivity of NSCLC, especially in A549 cells. Further investigation found that ionizing radiation (IR)-induced G2/M cell cycle arrest was blocked by miR-30a, and miR-30a may also affect IR-induced apoptosis. These changes may occur partly through binding to the 3'UTR of ATF1 mRNA, thereby affecting the function of ATF1 in the ataxia-telangiectasia mutated (ATM) phosphorylation process. Overall, our study indicated that miR-30a may be an important factor influencing the radiosensitivity of NSCLC. More studies are needed to explore the precise molecular mechanism of miR-30a in regulating radiosensitivity in NSCLC. Materials and methods Cells and animal culture. A549 and H460 cells were obtained from the Center for Translational Medicine, Department of Xi'an Jiaotong University (Shaanxi, China).
Cells were maintained in RPMI-1640 medium (Gibco Life Technologies, Carlsbad, CA, USA) containing 10% fetal bovine serum (FBS; ExCell) and cultured at 37˚C in a humidified incubator with a 5% CO2 atmosphere. Five-week-old nude mice were used for the tumor xenograft model and housed in the Center of Laboratory Animals of Xi'an Jiaotong University. The mice were all caged under specific pathogen-free conditions with constant temperature and humidity. Animals were randomly grouped to receive subcutaneous injection of A549 cells stably expressing lenti-GFP, lenti-miR-30a-5p, or lenti-inhibitor. Animal experiments were authorized by the Institutional Animal Care and Use Committee of Xi'an Jiaotong University. Animal care abided by the rules of the Institutional Animal Care and Use Committee of Xi'an Jiaotong University. RNA extraction and qRT-PCR analysis. An RNA extraction kit (Takara Bio, Inc., Shiga, Japan) was used to isolate total RNA and TRIzol (Invitrogen Life Technologies) to extract microRNA, following the manufacturers' instructions. Prime Script RT Master Mix and the Mir-X miRNA First-Strand Synthesis kit (both from Takara Bio, Inc.) were used to synthesize reverse-transcribed complementary DNA, respectively. SYBR Premix Ex Taq II and the Mir-X miRNA qRT-PCR SYBR kit were used to perform qRT-PCR. U6 was the internal control. Primer sequences (5'-3') were as follows: hsa-miR-30a-5p, GTGTAAACATCCTCGACTGGAAG; hsa-ATF1 forward, TTCTGGAGTTTCTGCTGCTGT and reverse, CCATCTGTGCCTGGACTTG. All of the primers were synthesized by Sangon Biotech Co., Ltd. (Shanghai, China). Dual-luciferase reporter assay. Fragments of the ATF1 mRNA 3'UTR containing either the miR-30a-5p binding site sequence or its complementary bases were cloned into a dual-luciferase reporter vector, yielding pmirGLO-ATF1-wild and pmirGLO-ATF1-mutant recombinant plasmids (GenePharma). The two recombinant plasmids and the pmirGLO negative control were then transfected into A549 cells with miR-30a-5p agomir (50 nM). Thirty-six hours after transfection, the Dual-Luciferase Reporter assay system (Promega, Madison, WI, USA) was used to measure luciferase activity. IR. A linear accelerator (Siemens, Munich, Germany) was used for irradiation. Cells were irradiated at a dose rate of 200 cGy/min at room temperature to reach the required total dose. The IR group contained fifteen mice, consisting of five nude mice randomly selected from each miR-30a-5p expression group (10 mice/group). Tumor-bearing mice in the IR group were treated with 2.0 Gy irradiation for 5 consecutive days from day 21 to 25, achieving a total dose of 10.0 Gy. Colony formation assays. After 0, 2, 4, 6 and 8 Gy irradiation, cells were incubated for 10 to 14 days. Cell clones were fixed with 4% formaldehyde and then stained with 1% crystal violet. Colonies of ≥50 cells were counted and fitted to a single-target model using GraphPad Prism 5 (GraphPad Software, Inc., La Jolla, CA, USA). Cell cycle and apoptosis analysis. For the cell cycle assay, harvested cells were fixed in 70% ethanol and placed at -20˚C overnight, then incubated for 10 min in 50 µg/ml propidium iodide (PI) for analysis. The Annexin V-PE/7-AAD apoptosis detection kit was used to test cell apoptosis, according to the manufacturer's instructions. Cell cycle and apoptosis of the prepared cells were detected by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA). Statistical analysis. Data from at least three independent experiments were analyzed with GraphPad Prism 5. Results are presented as means ± SEM. Statistical significance between two groups was tested by Student's t-test. A P-value <0.05 was considered statistically significant.
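The colony counts described above were fitted to a single-target survival model in GraphPad Prism. A common parameterization of the single-hit, single-target model is SF(D) = exp(-D/D0), where D0 is the dose that reduces survival to 1/e; the multi-target variant SF(D) = 1 - (1 - exp(-D/D0))^N is also widely used. Purely as an illustration, the sketch below fits the single-target form with SciPy; the surviving fractions are made-up numbers, not the study's data, and the exact model settings used in Prism are not stated in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-hit, single-target survival model: SF(D) = exp(-D / D0)
def single_target(dose, d0):
    return np.exp(-dose / d0)

# Hypothetical surviving fractions at the doses used in the study (0-8 Gy).
# In practice these come from colony counts / (cells seeded x plating efficiency).
dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
surviving_fraction = np.array([1.00, 0.55, 0.30, 0.15, 0.08])

(d0_hat,), _ = curve_fit(single_target, dose, surviving_fraction, p0=[2.0])
print(f"Estimated D0 = {d0_hat:.2f} Gy")
```

Under this kind of fit, a lower fitted D0 in the miR-30a agomir groups than in their controls would correspond to the increased radiosensitivity reported in the Results below.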
miR-30a enhances radiosensitivity of A549 and H460 cells. Colony survival assays were performed to estimate the radiosensitivity of A549 and H460 cells. The two cell lines were treated with 0, 2, 4, 6 and 8 Gy radiation after being transfected with miR-30a agomir (50 nM), miR-30a antagomir (100 nM), or their negative controls (50 and 100 nM) for 36 h, respectively. In addition, cell proliferation was evaluated using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay after A549 cells were transfected with miR-30a agomir, miR-30a antagomir, or their negative controls. No statistical differences were found between the groups at 24, 48 and 72 h (data not shown). miR-30a expression was examined by qRT-PCR, which confirmed that the agomir and antagomir were transfected successfully (P<0.01) (Fig. 1A and B). The miR-30a agomir groups of A549 cells showed a decreased colony formation rate after radiation exposure compared to the controls, especially after 6 Gy (P=0.0408) or 8 Gy (P=0.0258) irradiation (Fig. 1C and E). Conversely, the colony formation rate was higher in the miR-30a antagomir A549 cell groups than in the antagomir NC groups, with the differences at 6 Gy (P=0.0103) and 8 Gy (P=0.0451) also showing statistical significance (Fig. 1C and E). Results for the four groups in the H460 cell line were in accordance with those in the A549 cell line, but no statistical significance was found (Fig. 1D and F). ATF1 is a target of miR-30a. In order to investigate the underlying mechanism by which miR-30a affects the radiosensitivity of NSCLC, we conducted bioinformatic analysis to predict the potential targets of miR-30a by searching PicTar, TargetScan and miRDB. We found that ATF1, which may also be associated with tumor radiosensitivity (25), was a predicted target of miR-30a (Fig. 2A). A schematic diagram of miR-30a targeting the 3'UTR of ATF1 is shown in Fig. 2B. Furthermore, qRT-PCR and western blotting were performed to examine whether miR-30a could regulate the expression of ATF1 in the A549 cell line. We found that ATF1 mRNA and protein were decreased in the miR-30a agomir group compared to the control group (Fig. 2D-F). Conversely, ATF1 expression increased in the miR-30a antagomir group (Fig. 2D-F). These results further demonstrated that ATF1 is inversely regulated by miR-30a in A549 cells. miR-30a may enhance radiosensitivity of A549 cells through the ATM pathway. Lentivirus systems were used to further explore the mechanism by which miR-30a sensitizes cells to radiation. A549 cells with stable overexpression and knockdown of miR-30a were designated lenti-miR-30a and lenti-inhibitor, respectively. A549 cells with stable expression of GFP were used as a control and named lenti-GFP. Infection efficiency assessed at 48 h by fluorescence microscopy showed a bright GFP signal from the lentiviruses (Fig. 3A). Relative miR-30a expression measured by qRT-PCR showed that miR-30a was significantly increased by lenti-miR-30a (P=0.0108) (Fig. 3B) and decreased by lenti-inhibitor (P=0.0014) (Fig. 3C). Western blotting results showed that ATF1 expression was downregulated in lenti-miR-30a cells and upregulated in lenti-inhibitor cells, compared with lenti-GFP cells (Fig. 3D and E). Given that ATM is an important and the first responder to DNA double-strand breaks (DSBs) (26) and, through phosphorylation, is involved in many IR-induced cellular processes (27), we then detected ATM and p-ATM (S1981) expression.
The results indicated that ATM protein expression and phosphorylation of ATM at S1981 corresponded with ATF1 (Fig. 3D, F and G). The expression of ATF1 and ATM showed no difference with or without 8 Gy irradiation. Phosphorylation of ATM at S1981 was very low without irradiation, and significantly increased after 8 Gy irradiation (Fig. 3G). miR-30a enhances radiosensitivity by blocking the radiation-induced G2/M checkpoint arrest in the A549 cell line. To examine the impact of miR-30a on cell cycle progression, the cell cycle distribution of lenti-miR-30a, lenti-inhibitor, and lenti-GFP A549 cells was measured by flow cytometry. In addition, western blotting results showed that miR-30a negatively influenced IR-induced p53 expression, and the expression of p21 was suppressed (Fig. 4C). Taken together, the above results demonstrated that miR-30a may sensitize A549 cells to irradiation, at least in part by blocking the radiation-induced G2/M checkpoint arrest. miR-30a may enhance irradiation-induced apoptosis of A549 cells. Furthermore, we examined the effects of miR-30a on apoptosis of A549 cells after irradiation by using flow cytometry. We found that miR-30a could not induce apoptosis without IR. The apoptosis rate was significantly increased after 8 Gy irradiation in all three groups (Fig. 5A and B). Moreover, the percentage of apoptosis in lenti-miR-30a cells was higher than in lenti-GFP cells after 8 Gy irradiation (27.93±2.00 vs. 18.63±1.59%, P=0.0026) (Fig. 5B). On the contrary, the apoptosis rate in lenti-inhibitor cells was modestly decreased, but not statistically significantly (14.1±1.73 vs. 18.63±1.59%, P=0.1409) (Fig. 5B). Since cell apoptosis after 8 Gy irradiation was not consistent with p53 expression, we detected Bcl-2 and Bax protein expression by western blotting. After IR, the Bcl-2/Bax ratio in the lenti-miR-30a group was decreased compared with the lenti-GFP group, but without statistical significance, and the lenti-inhibitor group did not exhibit the expected trend (Fig. 5C and D). This indicates that the increase in the apoptotic rate in the miR-30a-upregulated group after IR may partly involve the mitochondrial apoptotic pathway, but not the p53 apoptotic pathway. These data illustrate that increasing radiation-induced apoptosis may be another way in which miR-30a enhances the radiosensitivity of A549 cells. The precise regulatory mechanism is complicated and needs further research. miR-30a may enhance the sensitivity of the A549 murine xenograft model to irradiation. To explore the radiosensitization potential of miR-30a in vivo, lenti-miR-30a and lenti-GFP cells were injected subcutaneously on the back of nude mice. Tumor growth was evaluated from the 7th day after injection until the mice were sacrificed. Our results illustrated that tumor growth slowed down after IR (Fig. 6A and B). Tumor volumes in the lenti-miR-30a group were smaller than those derived from the lenti-GFP group (Fig. 6C), and tumors in the lenti-inhibitor group showed the opposite trend, but without statistical significance. Treatment dose and schedule may be the two main influencing factors. Moreover, when irradiating the tumor, the nude mice were completely exposed to X-rays. Nude mice in the IR group gradually showed a series of symptoms, the most obvious being weight loss, which may also have influenced tumor size. Discussion MicroRNAs are important regulators of radiosensitivity, interacting with the key molecules involved in radiosensitivity (19), and miR-30a has been found to act as a radiosensitizer in prostate cancer cells by targeting TP53INP1 and modulating autophagy (24).
Thus, the potential therapeutic value of miR-30a attracted our interest in determining its function in NSCLC radiosensitization. Here, we found that miR-30a functions as a sensitizer to irradiation in NSCLC cells, especially in A549 cells, and may enhance the effect of radiation on tumors in nude mice. [Fig. 5 legend: Bcl-2/Bax ratios showed no statistical differences compared with the lenti-GFP group (1.04±0.14) after 0 Gy irradiation. After 8 Gy irradiation, the Bcl-2/Bax ratio was decreased in the lenti-miR-30a group (0.59±0.23) and increased in the lenti-inhibitor group (1.09±0.14) compared with the lenti-GFP group (0.95±0.13), but not statistically significantly. IR, ionizing radiation.] Furthermore, our data provide evidence for the potential role of miR-30a in suppressing IR-induced G2/M cell cycle arrest and increasing IR-induced cell apoptosis. The main target of IR is cellular DNA, and ATM has a key role in the study of IR-caused DNA damage (28). In response to DNA damage, phosphorylation of ATM at S1981 activates a series of downstream molecules that mediate cell cycle arrest and apoptosis (29) and initiate DNA repair (26). Shanware et al (25) reported that downregulation of ATF1 could inhibit ATM expression synergistically. Interestingly, by using three public prediction databases we identified ATF1 as a potential target gene of miR-30a. The dual-luciferase reporter assay, qRT-PCR and western blotting also proved that ATF1 is a direct target of miR-30a at its 3'UTR. Consistent with a previous study (25), we found that IR exposure affected the expression of neither ATM nor ATF1, but downregulation of ATF1 could reduce ATM expression and suppress IR-induced ATM S1981 phosphorylation. These data suggest that, by targeting ATF1, miR-30a could enhance the radiosensitivity of A549 cells through inhibiting the effect of ATF1 on IR-induced ATM S1981 phosphorylation. Since cell cycle arrest, DNA repair and apoptosis are the main ways in which cancer cells react to IR through ATM (30), we further investigated the effect of miR-30a on these aspects after IR. Our results indicated that miR-30a did not alter the cell cycle or apoptosis rate in non-irradiated A549 cells, whereas miR-30a expression increased IR-induced apoptosis and decreased IR-induced G2/M cell cycle arrest after 8 Gy IR. In response to IR-induced DNA damage, phosphorylation of ATM can increase p53, inducing DNA repair, cell cycle arrest (31), or apoptosis, thereby maintaining genomic stability (32); this may also reduce therapeutic effectiveness (33). In irradiated p53 wild-type cell lines with ATM downregulated, p53 cannot be properly activated, leading to cell cycle checkpoint deficiency (1). In line with these documented studies, we noted that in p53 wild-type A549 cells, p53 expression was consistent with the activation of ATM after IR. With p53 downregulation, the cell cycle checkpoint was shortened and damaged cells could not be eliminated in time; in this way, DNA repair ability can be decreased, and thus radiosensitivity was enhanced. Moreover, with the accumulation of unrepaired, misrepaired and mutated DNA, apoptosis can subsequently increase, which may also partly account for the enhanced radiosensitivity. However, in human cancer, one individual miRNA can participate in the whole course of cancer, from initiation and progression to the terminal stage, by targeting hundreds of genes (34). miRNAs are involved in multiple pathways and can not only restrain but also accelerate cancer development (35).
In our study, we were surprised to find that, unlike A549 cells, H460 cells showed only a modest decrease in colony survival when combined with miR-30a, with no statistical difference from the control group. This may be associated with the modest fold-change in miR-30a expression after miR-30a transfection compared with A549 cells (Fig. 1A and B). The in vivo study showed that miR-30a can result in tumor volume regression, but still without statistical significance. Possibly this is because the IR started too late, ceased too early, or the dose was insufficient. The relationship between miR-30a expression and the time and dose of IR needs further investigation to reveal the accurate role and profound underlying mechanism of miR-30a. In conclusion, our study indicated the importance of miR-30a in enhancing the radiosensitivity of the A549 cell line by targeting ATF1, in association with downregulation of the ATM pathway, and suggests that miR-30a may be a potential therapeutic factor for radiosensitization.
2018-04-03T02:05:31.982Z
2017-02-14T00:00:00.000
{ "year": 2017, "sha1": "13febe6b2cc6b2978080b9147ce64fd44dfaf247", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/or.2017.5448/download", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "13febe6b2cc6b2978080b9147ce64fd44dfaf247", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235675615
pes2o/s2orc
v3-fos-license
Lenvatinib treatment for thyroid cancer in COVID era: safety in a patient with lung metastases and SARS-CoV-2 infection During the coronavirus disease 2019 (COVID-19) pandemic, clinicians are required to manage patient care for pre-existing conditions. Currently, there are no clear indications regarding the management of patients treated with lenvatinib for radioiodine-refractory thyroid cancer who have severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. A 74-year-old male patient had been treated with lenvatinib since March 2019, with disease recurrence in the thyroid bed and bilateral multiple lung metastases. The patient partially responded to treatment, with a reduction in lung metastases. In September 2020, the patient tested positive for SARS-CoV-2 and isolated at home. Initially asymptomatic, the patient developed mild symptoms. Lenvatinib treatment continued with daily monitoring of vital signs. On telemedicine consultation of the patient's clinical condition, the severity of symptoms was judged to be low. He tested negative for SARS-CoV-2 21 days after testing positive. The patient received the full course of lenvatinib treatment. This is the first reported case of a lenvatinib-treated patient who developed COVID-19 and could continue treatment. Despite concerns over COVID-19, clinicians should not overlook treatment of pre-existing diseases or discontinue treatment, particularly for cancer. Clinicians should evaluate a patient's history and clinical presentation, monitoring the patient to reduce the development of complications in high-risk settings and avoiding treatment discontinuation. Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease 2019 (COVID-19) have emerged as a global pandemic. Patients with either a history of or active cancer may be at an increased risk of contracting the virus and developing complications [1]. Patients on oral chemotherapy with a history of lung disease are at particular risk, due to intercurrent disease potentially worsening respiratory function. It is paramount that cancer treatment continues whilst protecting patients against the virus. The aim of this case report is to determine the safety profile of continuing lenvatinib treatment for patients with radioiodine-refractory (RR) advanced differentiated thyroid cancer (DTC), given that COVID-19 may worsen respiratory function. Advanced DTC is treated by total or near-total thyroidectomy, followed, where necessary, by radioiodine (131I) and thyroid hormone suppressive therapy [2]. However, some patients are resistant to 131I, and cytotoxic chemotherapy is not very effective in patients with metastatic RR-DTC. Alternatively, lenvatinib, an oral multikinase inhibitor, is a novel therapy to manage DTC [3]. Case presentation We present the case of a 74-year-old male patient treated with lenvatinib for advanced RR-DTC since March 2019, with disease recurrence in the thyroid bed and bilateral multiple lung metastases (maximum diameter 7.4 × 5.7 cm). He started treatment at 24 mg/day lenvatinib; however, following weight loss and nausea in September 2019, the dose was reduced to 18 mg/day. He showed a partial response to treatment, with a progressive reduction of lung metastases (current maximum diameter 6.5 × 3.2 cm). Whilst on treatment in September 2020, the patient tested positive for SARS-CoV-2 via a nasopharyngeal swab. The patient isolated at home and was initially asymptomatic.
After a few days, he developed mild symptoms (cough, diarrhea, and worsening asthenia), but never experienced anosmia or fever. We decided not to discontinue lenvatinib treatment and daily monitoring of vital signs was performed, including blood pressure, body temperature, and oxygen saturation. The evaluation of adverse events (AEs) and the patient's clinical condition was carried out by telemedicine. Due to the low severity of symptoms, chest imaging was not performed. The patient performed a new nasopharyngeal swab 21 days after the detection of SARS-CoV-2 and tested negative. Despite our concerns, we observed no severe respiratory, gastrointestinal, or hematopoietic complications, and the patient needed neither specific therapy for COVID-19 nor lenvatinib interruption or discontinuation for the full course of the intercurrent disease. Discussion Patients with cancer are typically older with more comorbidities and may be immunocompromised by treatment or through the nature of cancer [1]. Patients with cancer have an increased risk for COVID-19 related morbidity and mortality, regardless of whether they have active cancer or are being treated [1]. Careful monitoring of both COVID-19 symptoms and anti-cancer treatment associated AEs are important for assessing treatment continuation. To date, this is the first reported case of a lenvatinib-treated patient who developed COVID-19, with the patient able to continue treatment without experiencing any additional AEs. As data are limited, this report is an important indicator of the safety of continuing lenvatinib treatment during the COVID-19 pandemic and could be more widely generalized to patients with COVID-19 infections for other cancer types receiving anticancer treatments. Despite the complications of COVID-19 and the increased risk of mortality and morbidity for patients with cancer and SARS-CoV-2, continuing lenvatinib treatment should be favored over treatment discontinuation, given the treatment benefits. In a phase 3 trial, lenvatinib treatment significantly improved progression-free survival (PFS) for patients with RR-DTC, with a 14.6-month longer median PFS vs. patients receiving placebo (P < 0.001) (5). Lenvatinib significantly improved response rate (64.7% in the lenvatinib group vs. 1.5% in the placebo group, P < 0.001) [5]. In a post hoc analysis performed on patients enrolled in the SELECT study, higher rates of dose interruption or dose reduction had a negative impact on PFS [6]. In the present case, we could hypothesize that effective monitoring of a patient's AEs and clinical presentation is important in deciding if lenvatinib treatment should continue for patients testing positive for SARS-CoV-2. The continuation of treatment for patients with cancer is crucial for disease management. Careful monitoring of patients by clinicians could ensure lenvatinib treatment continues whilst managing the complications in high-risk settings due to the COVID-19 pandemic [7]. In summary, this case report demonstrates that lenvatinib treatment can continue if patients are carefully monitored for COVID-19-associated complications. Studies with larger samples and longer follow-up periods are required to determine the safety of continuing cancer treatment for patients with cancer and COVID-19.
2021-06-30T06:17:07.705Z
2021-06-25T00:00:00.000
{ "year": 2021, "sha1": "8bbb936aefa8a72d863b02f0cd0ee98adc7a34ab", "oa_license": "CCBYNCND", "oa_url": "https://journals.lww.com/anti-cancerdrugs/Abstract/9000/Levatinib_treatment_for_thyroid_cancer_in_COVID.98397.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9fbf2f0b75394ee7e1b9b542189d2e108577b929", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169297417
pes2o/s2orc
v3-fos-license
Research on Green Certificate Trading Mechanism of Renewable Energy The green certificate trading system is based on a mandatory quota for renewable energy. On the one hand, it can promote the development of renewable energy; on the other hand, it helps to respond to the national policy of energy conservation and emission reduction. Firstly, this paper reviews the development status of renewable energy green certificate trading and designs a framework for a renewable energy green certificate trading mechanism in China. On this basis, the green certificate price is calculated under two premises: first, that with the help of certificate income the weighted grid price of newly built onshore wind power reaches parity in 2020; and second, that the renewable energy surcharge level remains unchanged and the full amount of funds collected is used to subsidize the price of renewable energy and meet the demand. Finally, according to the results obtained under the different calculation conditions, the changes in the interests of the different parties after the implementation of green certificates are analyzed, which can provide a reference for the implementation of a green certificate system in China. Introduction The Green Power Certificate (hereinafter referred to as "Green Certificate") is a system to promote the development of renewable energy power through a market mechanism on the basis of a compulsory quota for renewable energy power. A mandatory market quota combined with a green certificate trading mechanism is a mature international system for promoting the development of renewable energy and electricity. The main idea is to issue a green certificate for a certain amount of renewable energy power generation (usually 1 MWh) and, at the same time, to require that renewable energy generation or electricity sales account for a specified proportion of the total power generation or electricity sales of power generation or power grid enterprises. Enterprises that do not reach the required proportion need to buy green certificates in the market. Renewable energy power generation enterprises can obtain corresponding benefits through green certificate trading. The notice on the Voluntary Subscription of Green Power Certificates for Renewable Energy clearly states that a voluntary subscription system for green power certificates for renewable energy should be established, and that the formal subscription of green power certificates for renewable energy should be launched from July 1, 2017. This notice encourages government organs, enterprises, institutions, social institutions and individuals at all levels to voluntarily subscribe for green power certificates as proof of consumption of green power. According to the market subscription situation, renewable energy quota assessment and compulsory, binding green power certificate trading will be started in 2018 [3]. The purpose of establishing the target guidance system for renewable energy development and utilization is to establish a comprehensive statistical index and assessment system, to guide local governments in scientifically formulating targets for renewable energy development and utilization, to formulate and implement the energy development plan, and to clarify the responsibilities of local governments, power grid enterprises and power generation enterprises in the development of renewable energy. 
On this basis, a corresponding monitoring and evaluation system is established to achieve more effective ex-post supervision and to promote the transformation of the energy system in a green and low-carbon direction. Implementing renewable energy quota assessment and compulsory, binding green power certificate trading is a concrete measure for implementing the goal guidance system and establishing the monitoring and evaluation system, and it is also an important lever for mobilizing and urging all relevant parties in the energy field to promote energy transformation. It should be taken as a guide to gradually establish a comprehensive and replicable renewable energy index management and assessment system [4,5] that truly implements the requirements for energy transformation and for the share of non-fossil energy put forward by the state. The goal guidance system for renewable energy development and utilization is the foundation for establishing the green certificate mechanism. Framework Design of Green Certificate Trading Mechanism for Renewable Energy International experience shows that the compulsory market share or quota system is a mature mechanism, implemented in dozens of countries and regions such as the United States, the United Kingdom, Sweden, Australia, Japan and South Korea. It is an important market mechanism for promoting the development of renewable energy in these countries. In some cases power grid enterprises (as in most states of the United States and in the United Kingdom) and in others power generation enterprises (as in South Korea and Taiwan) are responsible for the mandatory quota. Therefore, the party bearing the quota obligation can be selected from among power generation enterprises and power distribution (grid) enterprises. At present, making China's electricity sales (power grid) enterprises the main bearers of the quota obligation can reduce the demand for subsidy funds, at least alleviate the increasingly severe problem of renewable energy power consumption and, through appropriate quota index requirements, provide stable and sustainable growth space for the future development of renewable energy, ensuring the achievement of the national non-fossil energy development goals for 2020 and 2030. Boundary Conditions for Green Certificate Price Estimation According to the target of 15% non-fossil energy in 2020, and considering the 13th Five-Year Plan targets for energy and renewable energy development under discussion, and according to the scale and power consumption of wind power, solar power, biomass power generation and the whole society in Table 4-1, coal-fired power generation in 2020 is taken as 5 trillion kWh. By 2020, the ratio of total non-water renewable energy generation to total coal-fired power generation is estimated to be 15.4%. According to the preliminary results of the Energy Research Institute of the National Development and Reform Commission and the National Renewable Energy Center in the Study of the Roadmap of Wind and Solar Power Grid Parity, it is conservatively anticipated that the weighted on-grid price of new wind power will fall to 0.45 yuan/kWh in 2020, the weighted on-grid price of photovoltaic power will fall to 0.6 yuan/kWh, and the on-grid price of coal power will be 0.36 yuan per kilowatt-hour (without considering the impact of green certificates). 
Every megawatt-hour of non-water renewable energy power production is recorded as a green certificate, and 755 million green certificates will be issued nationwide by 2020. The following boundary conditions are taken into account in the preliminary calculation: (1) The proportion of non-water renewable energy generation relative to the coal-fired power generation of power generation enterprises in 2020 should reach the national average (15%), which means that by 2020 a power generation enterprise needs one green certificate for roughly every 6.5 MWh of coal-fired power it produces; if it does not hold enough certificates, it must purchase green certificates to meet the proportion requirement; (2) Regardless of the type of renewable energy power, green certificates are issued at the same rate, i.e. one green certificate for every megawatt-hour of electricity. 4.2 Green Certificate Price Estimation Scheme and Results Based on the above basic assumptions and boundary conditions, two schemes are designed to calculate the price of green certificates. The two schemes consider different preconditions. First, with the help of certificate income, the weighted grid-connected price of new onshore wind power reaches parity in 2020. Second, the renewable energy surcharge level remains unchanged, and the full amount of funds collected is used to subsidize the price of renewable energy and meet the demand. (1) Scheme 1: the green certificate price is calculated on the premise that, with the benefit of certificate income, wind power reaches parity with coal-fired power by 2020. For all renewable energy projects built in or before 2020, there is no difference in the way the various renewable energy technologies obtain green certificate income through market behavior. Differences in the cost of renewable energy projects built with different technologies and in different periods can be moderately balanced by different electricity price standards or electricity subsidy standards. According to the above boundary conditions, the average price of each green certificate (1 MWh of non-water renewable energy power) is about 78 yuan, and the additional cost to power generation enterprises is 12 yuan for each MWh of coal-fired power. Considering that the proportion requirement for green certificates is adjusted year by year according to the development of renewable energy and electricity demand, if the proportion requirement is set appropriately, the certificate price is only weakly affected by changes in renewable energy power generation. If renewable energy power generation increases by 100 billion kWh and the total power generation of the whole society is assumed to remain unchanged, the number of quota certificates required for the same coal-fired power output will increase, raising the additional cost of coal-fired power to about 13.5 yuan per MWh; at this point the price of green certificates will decrease to 76 yuan. If renewable energy power generation is instead reduced by 100 billion kilowatt-hours, the price of green certificates will rise to 80 yuan, but the additional cost of coal-fired power will fall to 10 yuan per megawatt-hour. Based on the existing policy conditions, the total demand for renewable energy price subsidies in China was estimated to be about 65 billion yuan in 2015. If the renewable energy price surcharge receivable were collected in full, the income of the renewable energy development fund would be about 57 billion yuan, leaving a shortfall of nearly 10 billion yuan in that year; in addition, collection from self-owned power plants is incomplete. 
The actual income fund is about 51.5 billion yuan, and the gap is 13.5 billion yuan. In 2020, if we take into account the decline in the cost and price demand of the aforementioned scenic spots and the absence of green certificate income, the total demand for renewable energy price subsidy funds will be about 180 billion yuan. In the case of additional receivable of renewable energy price, the revenue will be about 90 billion yuan, and there will be a gap of about 90 billion yuan in that year. According to the above green certificate price level, the demand for renewable energy price subsidy fund can be reduced by 70 billion yuan in 2020, and the income and expenditure of renewable energy additional fund are slightly insufficient, but the accumulated surplus of previous years can be used to supplement the gap. However, there may still be a gap of about 20 billion yuan if the actual levy ratio is used. (2) Scheme 2: The additional level of renewable energy remains unchanged, and the full amount of funds is used to calculate the green certificate price when the renewable energy price subsidy can meet the demand. The calculation of green certificate price is still at the level of 2020. Considering that the current 1.9 cents per kilowatt-hour additional subsidy for energy and electricity price is no longer adjusted, and all the funds collected are used for renewable energy and electricity price subsidy, then: 1)In the case of exhaustive receivables, the average price of each green certificate (1 MW-hour non-water renewable energy capacity) is estimated to be about 84 yuan; in the case of power generation enterprises, the additional cost for each MW-hour coal-fired power supply is 12.9 yuan. 2) According to the current levy rate, the average price of each green certificate (1 MW-hour nonwater renewable energy capacity) is estimated to be about 102 yuan, and the additional cost for each MW-hour coal-fired power generation enterprise is 15.7 yuan. Conclusion (1) The implementation of green certificate transaction will affect the demand for renewable energy price or electricity subsidy. The coordination between these policies is easier to achieve. With the implementation of green certificate transaction, benchmark price standard and electricity subsidy standard can be adjusted in time. Therefore, for renewable energy power generation enterprises, there is an additional revenue channel, but in theory, the revenue level remains unchanged, and the revenue risk increases. (2) According to the electric power system reform plan and the supporting documents, the increased cost will be directly reflected in the terminal price. If the above green certificate price calculation parameters are still used, the terminal price will be increased by 0.7 cents per kilowatt-hour in the first scenario of 2020 and 0.9 cents per kilowatt-hour in the second scenario.
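To make the arithmetic behind the two schemes explicit, the short calculation below reproduces the relation between the quota ratio, the certificate price and the extra cost per MWh of coal-fired power quoted above. It is a back-of-the-envelope check added here for illustration, not part of the original study; the input figures are the rounded values given in the text.

```python
# Back-of-the-envelope check of the figures quoted above (not part of the
# original study): with one certificate per MWh of non-water renewable power
# and the quota expressed as a share of coal-fired output, the extra cost per
# MWh of coal-fired power is simply quota_ratio * certificate_price.
quota_ratio = 0.154        # non-water renewables vs. coal power in 2020, as quoted above
coal_twh = 5000.0          # coal-fired generation in 2020, TWh (5 trillion kWh)
certificates_mln = quota_ratio * coal_twh   # ~770 million, close to the 755 million quoted

print(f"coal-fired power per certificate: {1 / quota_ratio:.1f} MWh")   # ~6.5 MWh
for price_yuan in (78, 84, 102):            # scheme 1 and the two scheme 2 cases
    extra_cost = quota_ratio * price_yuan   # yuan per MWh of coal-fired power
    print(f"certificate at {price_yuan} yuan -> extra coal cost {extra_cost:.1f} yuan/MWh")
# Prints roughly 12.0, 12.9 and 15.7 yuan/MWh, matching the values in the text.
```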
2019-05-30T23:47:20.840Z
2019-03-30T00:00:00.000
{ "year": 2019, "sha1": "6b631bd86a5c848fc3bff8112f24cec962037580", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/242/2/022036", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "fe1739025a943da192b3a1e60d00e9ffef44b713", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Business" ] }
239876428
pes2o/s2orc
v3-fos-license
Voice controlled shopping trolley navigation with RFID scanner and live billing using IoT The current process of shopping in a mall involves figuring out and navigating to the required sections and either carrying a shopping basket or pushing around a trolley. Although it might not feel so, this is a time-consuming and, for some people, a physically demanding task. The other main disadvantage is having to wait in queues at the cashier and have the items billed in one by one. The proposed system is a Raspberry Pi based device integrated into traditional shopping trolleys. The desired section of the mall is given to it as voice input. The necessary signals are provided to a driver IC which drives the wheels of the trolley through motors towards the required section. The moment an item is dropped into the trolley, it is scanned by an RFID reader module. The item details and total bill amount are provided to the customer, and a live update is provided at the checkout through IoT. Introduction With all the technological advancements getting updated and becoming outdated by the hour, there is a need for constant innovation in every aspect of life. Shopping is a process which has not seen any major technological advancement in quite some time. The entire process of traditional shopping at malls is a very time-consuming and physically engaging one [4][16]. Having to figure out the layout of a mall and locate the required sections of it can not only be time consuming [13][18][20] but can also be misleading to the customer, as the customer is usually made to navigate past items that are expensive, attractive and potentially not needed to reach the items that are actually needed [6][7][23]. At times the customers forget what was required by them in the first place and end up wasting too much time and money on items that are not needed [5][24]. As time passes and the trolley keeps getting filled, it naturally becomes heavier and harder to push [19]. After procuring the required items, customers have to wait in usually long queues to get their items billed [3][12]. Since the cashier has to scan every single item of every single customer [1][14], it can prove to be a very demanding and error-prone task [2]. Technology is required in many instances [8] during the entire shopping process to benefit both the customers and the cashiers [25]. Literature survey In shopping malls, RFID technology is also used for billing, mostly at the point of purchase, and IoT is used for bill management through the ESP module. The payment details are sent to the server, through which the central billing unit effects the customer's payment. The ESP module operates as a short-distance Wi-Fi chip for wireless communication. However, this approach has drawbacks, including constraints such as distance and interference. The server can become overloaded when the number of customers is high, and internet connectivity must be stable for the process to complete [7]. Sainath (2014) developed an automated supermarket trolley for a supermarket billing system that uses barcode technology to scan goods. The bill is forwarded to the central billing system, where the customer can pay by showing a unique id. The limitation of barcode scanning is that it needs a line of sight and the barcode must be positioned within the scanner's range [9]. Budic (2014) developed a cash register line optimization system using RFID technology. The system was developed for smart shopping using RFID. 
The RFID is employed for scanning products, and the information is kept in a database; the bill can then be paid online or at a central billing counter. It also uses a web application to maintain the entire shopping details, which requires the upkeep of a web application server. No provision is made for merchandise that is accidentally dropped into the trolley by the customer [10]. In the paper on a smart shopping trolley using RFID, the authors implemented a smart shopping trolley with RFID and Zigbee in which the bill is generated by scanning products with the reader and transmitted to the central billing department, so the bill still has to be paid at the counter, which remains a major inconvenience for the customer [11]. Prateek Aryan (2014) presented a smart shopping cart with automatic billing and Bluetooth, in which billing is done in the trolley and then uploaded to the user's Android device through Bluetooth. Not every customer can be expected to possess a smartphone, Bluetooth can have connectivity issues, and its range is small [15]. In the paper on a smart RFID-based interactive kiosk cart using a wireless sensor node, the authors used RFID technology for smart automated shopping. They used a dedicated website for billing maintenance and for user interaction. Each user with a unique id accesses the web server for bill payment and invoice information. Internet service is mandatory for this kind of service, so the method could fail due to internet instability, and server errors may also occur under high load [17]. Vinutha (2014) built an automatic billing system with a server end employing RFID technology for shopping and automatic billing. Products are scanned by radio frequency identification, the bill is generated at the server end, and it is then communicated to the customer. This needs server maintenance and internet connectivity for both the customer and the shopkeeper [21]. The smart handcart with customer-oriented service by Hsin-Han Chiang (2016) established the concept of an automatic billing system and a programmed shopping trolley using face recognition for customer authentication. This is not a straightforward method, as face recognition of shoppers during shopping hours is neither easy nor reliable because malls are often crowded, and many errors are possible when using recognition for authentication [22]. 3. Methodology The proposed system primarily aims to automate the existing shopping process as much as possible. A smart device is fitted to a traditional trolley. The device takes the customer's required section of the store or mall as voice input. As per the input, the trolley automatically moves towards the desired section, eliminating the need to push the trolley while searching for the needed section. Whenever an item is dropped into the trolley, it is scanned and updated in the trolley's audio output and also at the designated billing system at the cashier. When an item is removed from the trolley, it is removed from the live bill both in the trolley and in the billing computer. The many advantages of this system, like automation of trolley movement, voice input, automatic billing and live updating, make this a very practical and efficient way to shop. Implementation The various technologies, software and hardware used in the implementation of the prototype include a Raspberry Pi, a Wi-Fi module, a Bluetooth module, an RFID reader and tags, a driver IC, motors with a battery power supply, IoT, a headset for audio output, and Python to code and program the trolley. 
The very first step in this approach to shopping is to give the trolley the desired section of the store as a voice input. This is done through an app on the phone, as it is much more convenient than bending down to the trolley and speaking out the command. This voice command is sent to the device's brain, the Raspberry Pi, through Bluetooth. Once the command is received by the Raspberry Pi, it gives out an audio acknowledgement and then sends the instructions to the driver IC to power the motors attached to the trolley's wheels and navigate to the desired section. The directions are pre-programmed into the Raspberry Pi in Python, based on the layout of the store. Once the trolley reaches the destination, a prompt is given to the user to point them to the precise shelf where the required item is kept. The entire process is based on the fact that every item in the store is RFID tagged. When the customer drops an item into the trolley, the RFID reader scans the item's RFID tag and gives out a voice prompt informing the customer of the details of the item dropped in the trolley and the total bill amount. In case of removal of any item, the action is recorded and the item is removed from the live bill, which also produces a voice prompt for the same action. Simultaneously, the same details are updated online at the designated billing lane's billing computer, thereby applying IoT to make shopping better. On completion of shopping, the customer can check out at the counter designated for the trolley after making the payment for the bill, which was created in real time during the entire duration of the purchase. There are numerous advantages to the proposed system: ease of shopping; a convenient way to find the required items; automation of the trolley, thereby minimizing physical involvement to only dropping items inside the trolley; live billing; a time- and energy-saving approach; practicality; and ease of installation and maintenance. The Bluetooth module uses GFSK modulation and its sensitivity is ≤ -84 dBm at 0.1% BER. It is used to get the input (which product the customer wants) from the mobile phone. The NodeMCU supports SDIO 2.0, SPI and UART, comes in a 32-pin QFN package and has integrated MAC/baseband processors. It is used for billing the products: it gets the input from the Raspberry Pi (which products the customer selected) and gives the output to the billing counter. Figure 11. Live bill. With this output the products can be billed automatically, which is a time-saving process, and customers no longer need to wait at the billing counter. Figure 12. Simulation output. Conclusion The proposed system doesn't just focus on current difficulties in shopping like the physical requirement of pushing a trolley or long queues at the billing section, but also aims to make innovative advances using technology to make the entire shopping experience a short but efficient one. The system is also very practical and is as helpful for physically challenged people as it is for any other customer. In the growing age of technology, there isn't a place in the world that lacks the need to be updated and technologically improved. By using the various technologies like IoT, Cloud and RFID, the project aims to technologically advance the process of shopping as far as possible. 
The result of the project is not just to help the customer save time and shop efficiently but also to cover all aspects of shopping, such as eliminating long queues at the billing section, reducing the cashier's workload, and increasing the number of customers the mall can serve. Furthermore, this alternative to traditional shopping is not harmful to the environment and does not contribute to any type of pollution. In fact, the energy source used is electric batteries, which can be recharged and reused several times. Considering the very negligible cons of the project (the need to recharge the batteries), the many advantages make this a perfect solution to the problems of the traditional method of shopping.
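As an illustration of the live-billing flow described in the Implementation section above, the following sketch gives a simplified, self-contained version of the logic. It is not the authors' code: the RFID read and the IoT push to the billing counter are stubbed out as hypothetical helpers (read_tag, push_to_counter), the catalogue entries and tag ids are placeholders, and the toggle-on-rescan treatment of item removal is a simplification of the behaviour described in the paper.

```python
# Simplified sketch of the live-billing logic described above (illustrative only).
# read_tag, push_to_counter, the catalogue and the tag ids are hypothetical
# placeholders; the real hardware and endpoint details are not given in the paper.

# Item catalogue keyed by RFID tag id (placeholder data)
CATALOGUE = {
    "tag-001": ("Rice 1 kg", 60.0),
    "tag-002": ("Milk 1 L", 25.0),
    "tag-003": ("Soap", 35.0),
}

def read_tag():
    """Stand-in for the RFID reader; this is where the real reader would plug in."""
    raise NotImplementedError

def push_to_counter(bill):
    """Stand-in for the IoT update sent to the billing-lane computer."""
    print("pushed to counter:", bill)

class LiveBill:
    def __init__(self):
        self.items = {}          # tag id -> (name, price)

    def toggle(self, tag_id):
        """Add the item on first scan; remove it if the same tag is scanned again
        (i.e. the item was taken back out of the trolley)."""
        if tag_id in self.items:
            name, price = self.items.pop(tag_id)
            print(f"removed {name} (-{price:.2f})")
        else:
            name, price = CATALOGUE[tag_id]
            self.items[tag_id] = (name, price)
            print(f"added {name} (+{price:.2f})")
        print(f"total: {self.total():.2f}")
        push_to_counter({"items": list(self.items.values()), "total": self.total()})

    def total(self):
        return sum(price for _, price in self.items.values())

if __name__ == "__main__":
    bill = LiveBill()
    for tag in ("tag-001", "tag-002", "tag-001"):   # simulated scans
        bill.toggle(tag)
```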
2021-10-26T20:07:10.667Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "574eedbf1aa36cede228df9c49b687a0eea60fa5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2040/1/012035", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "574eedbf1aa36cede228df9c49b687a0eea60fa5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
219642777
pes2o/s2orc
v3-fos-license
PREDICTION OF FATIGUE CRACK GROWTH DIAGRAMS BY METHODS OF MACHINE LEARNING UNDER CONSTANT AMPLITUDE LOADING Important structural elements are often under the action of constant amplitude loading. Increasing their lifetime is a relevant task of great economic importance. To evaluate the lifetime of structural elements, it is necessary to be able to predict the fatigue crack growth (FCG) rate. This task can be effectively solved by methods of machine learning, in particular by neural networks, boosted trees, support-vector machines, and k-nearest neighbors. The aim of the present work was to build the fatigue crack growth diagrams of 0.45% C steel subjected to constant amplitude loading at stress ratios R = 0 and R = –1 by the methods of machine learning. The obtained results are in good agreement with the experimental data. INTRODUCTION Methods of strength and durability evaluation of critical structural elements often require complicated calculations. Therefore, it is important to learn how to solve the problems of fracture mechanics by methods of machine learning, in particular neural networks (NN), support-vector machines (SVM), k-nearest neighbors and boosted trees, which allow high accuracy of solutions to be achieved [1–4]. Structural elements often fail by fatigue, gradually accumulating damage. It is possible to observe a small crack which grows under loading. The fatigue crack is formed mainly at a stress concentrator, that is, the place of damage which weakens the cross-section of the material. The crack grows as long as the material is able to withstand the loading. Therefore, the basic factors that influence the strength of structural elements are the surface defects of the parts, the temperature and the environment during operation, the nature of loading and the loading conditions [5]. It is known that the basic parameters characterizing the fatigue crack growth (FCG) rate da/dN are the stress intensity factor (SIF) range ΔK and the stress ratio R [6–9]. The fatigue crack growth diagram is usually built in double logarithmic coordinates lg da/dN – lg ΔK. It has the form of an S-shaped curve limited on the left by the threshold SIF range ΔKth and on the right by the critical SIF ΔKfc (cyclic fracture toughness). The threshold SIF ΔKth is determined experimentally. It is an important characteristic of material resistance to fatigue fracture. The diagram consists of three regions: region I corresponds approximately to rates da/dN ≈ 10^-10…10^-8 m/cycle, in which the FCG rate increases significantly with a slight change of ΔK. Region II has the form of a straight line. The rate in this region is in the range of 10^-8…10^-6 m/cycle. In particular, it is considered that here the crack grows evenly for each loading cycle. Region III is characterized by accelerated FCG and corresponds to values of da/dN > 10^-6 m/cycle [5]. At high SIF values, the rate of crack growth is extremely high. P. Paris and F. Erdogan found that the FCG rate for metallic materials can be determined from the SIF [10]. In particular, the formula obtained by them describes only the second region of the fatigue fracture diagram and does not take into account the influence of the stress ratio R on the FCG rate [11]. It is known that with increasing R the FCG rate increases [12][13]. Therefore, the Walker equation [2,14] is used to describe the FCG rate taking into account the stress ratio R. However, these models do not take into account the different regions of the FCG diagram. 
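For reference, the standard forms of the two relations just mentioned, as usually quoted in the general fatigue literature (they are not reproduced from the cited works and are added here only for the reader's convenience), are: the Paris–Erdogan law da/dN = C(ΔK)^m, and the Walker equation da/dN = C[ΔK/(1 – R)^(1–γ)]^m, where C and m are material constants fitted to the region II data and the exponent γ describes the sensitivity of the growth rate to the stress ratio.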
The Forman equation [15][16] additionally takes into account the stress ratio and the accelerated crack growth of region III as ΔK approaches the cyclic fracture toughness. Machine learning: Background and Modeling The progress of modern technology, in particular the high demands for accuracy and efficiency, has led to the creation of methods that solve a number of important tasks. Neural networks, support-vector machines, k-nearest neighbors and boosted trees are powerful supervised learning algorithms which can be used to predict FCG. An NN consists of a very large, though finite, number of elements that form the input layer, one or more hidden layers of computational neurons, and one output layer. The input signal is transmitted over the network in the direction from layer to layer [18]. Such networks are usually called multilayer perceptrons and solve different tasks quite accurately. The NN determines the connection coefficients between neurons, and the computational power of a multilayer perceptron lies in its ability to learn from its own experience via the backpropagation algorithm. The idea of this algorithm is based on error correction. The basic parameters of an NN are its topology, the training algorithm and the activation functions of the neurons. In the current study, the sum of squares (SOS) error function was chosen and the training method was Broyden-Fletcher-Goldfarb-Shanno (BFGS) [19][20][21][22]. The stopping criterion for network training was the number of epochs, which in this study was equal to 1000. The boosted trees algorithm reflects the natural way humans think while making a decision [23]. The results obtained by building and using boosted trees are logical and easy to visualize and interpret. The algorithm for building the boosted trees structure consists of tree creation and pruning stages. In creating trees, one chooses the splitting criteria and the criteria for terminating learning, whereas during pruning some branches are removed. The boosted trees method is used when the results of one decision influence the next, in particular for making sequential decisions. The ideas behind the support-vector machines and k-nearest neighbors methods are the simplest [24]. In the first method, the data are presented as points in space. The training data are split into two categories. The training algorithm creates a model attributing new data to a certain category. Geometrically, it looks as if we are trying to draw a straight line centrally between two sets. The points nearest to this straight line are the support vectors. The support-vector machines method, like any method of machine learning, has many parameters. In this case, the basic ones are the regularization parameter, the loss function, which treats as errors only predicted values deviating from the actual values by a distance greater than ε, and the kernel parameter γ. As the kernel function, the radial basis function (RBF) is used. The method of k-nearest neighbors assigns a new object to the class that is the most common among its k nearest neighbors in the training sample. The distance between nearest neighbors is usually chosen as Euclidean. The aim of the learning process is to minimize the loss function, which should decrease. In the current study, the loss function was chosen as the mean squared error (MSE). Experimental material and methods During operation, a railway axle undergoes static and cyclic loading, including random loading and bending, as well as the corrosive action of the environment and climate temperatures. 
The FCG rate in axle steel was predicted by methods of machine learning using the experimental FCG data obtained for 0.45% C steel at stress ratios R = 0 and −1 [25]. The sample consisted of 200 elements, 70% of which were chosen randomly for the training sample and 30% were left for estimating the quality of the predictions. The input parameters were the SIF range ΔK and the stress ratio R. The FCG rate da/dN under constant amplitude loading at the stress ratios R = 0 and −1 was chosen as the output parameter. The input and output parameters were normalized using the decimal logarithm to decrease the prediction error. RESULTS AND DISCUSSION The dependences of the experimental FCG rates da/dNexp on the predicted FCG values da/dNpred for R = 0 and −1 are shown in Fig. 1. The experimental and predicted dependences of the FCG rate da/dN on the SIF range ΔK for R = 0 and −1 were built using the methods of machine learning (Fig. 2). The parameters of the constructed neural networks, support-vector machines, k-nearest neighbors and boosted trees are summarized in Tables 1−3. The error of the NN method on the test sample is 4.5%, that of the support-vector machines is 5.5%, the k-nearest neighbors 5.5% and the boosted trees 6.7%. CONCLUSION The predicted FCG rate data are in good agreement with the experimental ones. In the present study, the NN prediction accuracy is 95.5%, which is the best among the applied methods. Support-vector machines, k-nearest neighbors, and boosted trees also show good results in terms of accuracy. The methods of machine learning are powerful and efficient tools which allow the FCG behavior to be evaluated.
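As an illustration of the workflow described above, the following sketch shows how such a model could be set up with scikit-learn. It is not the authors' code: the 0.45% C steel data set from [25] is not reproduced here, so synthetic Walker-type data stand in for the measurements, scikit-learn's lbfgs-trained MLPRegressor stands in for the BFGS-trained network, and all numerical parameter values are placeholders.

```python
# Illustrative sketch only: synthetic Paris/Walker-type data stand in for the
# measured 0.45% C steel data, and an lbfgs-trained MLP stands in for the
# BFGS-trained network described above. All constants are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Placeholder data following a Walker-type law: da/dN = C * (dK / (1-R)**(1-gamma))**m
n = 200
dK = rng.uniform(8.0, 40.0, n)          # SIF range, MPa*sqrt(m)
R = rng.choice([0.0, -1.0], n)          # stress ratio
C, m, gamma = 1e-11, 3.0, 0.5
dadN = C * (dK / (1.0 - R) ** (1.0 - gamma)) ** m   # m/cycle

# Inputs and output are taken in decimal logarithm, as in the paper
X = np.column_stack([np.log10(dK), R])
y = np.log10(dadN)

# 70 % of the sample for training, 30 % for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)

mse = mean_squared_error(y_te, model.predict(X_te))
print(f"test MSE in log10(da/dN) units: {mse:.4f}")
```

Swapping MLPRegressor for SVR(kernel="rbf"), KNeighborsRegressor or GradientBoostingRegressor would reproduce the comparison between the four method families discussed in the paper, again only as a schematic stand-in for the actual implementations used by the authors.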
2020-05-28T09:18:42.969Z
2020-03-19T00:00:00.000
{ "year": 2020, "sha1": "1c7a53b8c3c60ddc12ee85c03ba748b0dbc00d53", "oa_license": "CCBY", "oa_url": "https://journals.scicell.org/index.php/AMS/article/download/346/441", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ddbe8a9cb2bebdc53f828d757c3bd2e0f50c75d2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Mathematics" ] }
139203276
pes2o/s2orc
v3-fos-license
A Review on Functionally Gradient Materials (FGMs) and Their Applications Functionally gradient materials (FGMs) are innovative materials in which the final properties vary gradually with position. They are a recent development of traditional composite materials which retains their strengths and eliminates their weaknesses. An FGM can be formed by varying the chemical composition, microstructure or design attributes from one end to the other as per the requirement. This feature allows an FGM to have the best material properties, in the required amounts, only where they are needed. Though there are several methods available for manufacturing FGMs, additive metal deposition technologies (by laser, electron beam, plasma etc.) are attracting particular interest owing to their recent developments. This paper presents the evolution, current status and challenges of functionally gradient materials (FGMs). Various manufacturing processes for the different types of FGMs are also presented. In addition, applications of FGMs in various fields including the aerospace, defence, mining, power and tool manufacturing sectors are discussed in detail. Introduction Materials have continuously developed from iron and pure metals to the composite materials which are in use today. The continuous development of materials from the Bronze Age to the present, together with the future scenario, is presented in figure 1 [1]. Pure metals have very limited use, since an actual application may have contradictory property requirements which cannot be met by a single metal. Compared with pure metals, alloys can be stronger and more versatile. Bronze, which is an alloy of copper and tin, was the first alloy, developed around 4000 BC (the Bronze Age). Since then, different mixtures of metals and non-metals have been tried to combine the strengths of multiple materials as per the functional requirement of the application. Heterogeneity, anisotropy, symmetry and hierarchy are the main characteristics of composite materials attracting particular interest for various applications. A high strength-to-stiffness ratio, greater resistance to fatigue, wear and corrosion, high reliability etc. are the advantages of composites over pure or alloyed metals. In spite of all these advantages, composite materials are subject to a sharp transition of properties at the interface, which can result in component failure (by delamination) under extreme working conditions. This drawback of conventional composites is eliminated by a modified form of composites called functionally gradient materials (FGMs). These materials replace the sharp interface with a gradient interface, which results in a smooth transition of properties from one material to the other. These advanced materials, with engineered gradients of composition, structure and specific properties in the preferred direction, are superior to homogeneous materials composed of similar constituents [1]. Mechanical properties such as the Young's modulus of elasticity, Poisson's ratio, shear modulus of elasticity, material density and coefficient of thermal expansion vary smoothly and continuously in the preferred directions in FGMs. Bone, teeth, skin and the bamboo tree are some examples of naturally occurring functionally gradient materials. The concept of functionally gradient materials (FGMs) was first developed by researchers from Japan in 1984. They designed a functionally gradient thermal barrier for an outside temperature of 2000 K and an inside temperature of 1000 K across a 10 mm thickness. 
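As a minimal illustration of how such a smooth property variation is usually idealized, the sketch below uses a power-law volume-fraction gradation with a simple rule of mixtures. This is a common idealization in the FGM literature rather than a relation taken from this review, and the modulus values, thickness and exponents used are illustrative placeholders only.

```python
# Minimal sketch (not from this review): power-law volume-fraction gradation and
# a simple rule of mixtures, commonly used in the FGM literature to idealize a
# smooth ceramic-to-metal property variation. All values are placeholders.
import numpy as np

def graded_modulus(z, h, E_metal, E_ceramic, n=1.0):
    """Young's modulus at depth z (0 = metal face, h = ceramic face).

    V_ceramic(z) = (z / h) ** n                      -- power-law volume fraction
    E(z) = E_metal + (E_ceramic - E_metal) * V_c(z)  -- rule of mixtures
    """
    v_c = (z / h) ** n
    return E_metal + (E_ceramic - E_metal) * v_c

# Example: a 10 mm thick graded layer, metal face ~200 GPa, ceramic face ~380 GPa
h = 10.0e-3                       # thickness, m
z = np.linspace(0.0, h, 5)        # positions through the thickness
for n in (0.5, 1.0, 2.0):         # gradation exponent controls the profile shape
    E = graded_modulus(z, h, E_metal=200e9, E_ceramic=380e9, n=n)
    print(f"n = {n}: E(z)/GPa =", np.round(E / 1e9, 1))
```

The exponent n simply biases the profile: n < 1 gives a ceramic-rich gradient, n > 1 a metal-rich one, and n = 1 a linear transition between the two faces.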
Since then, the use of functionally gradient materials has increased in various fields including aerospace, mining, power and medicine. FGMs have numerous advantages that make them suitable for these applications. These include high fracture toughness, reduction of in-plane and through-the-thickness transverse stresses, enhanced performance of thermal barrier systems, etc. Due to this prominence of FGMs, considerable effort has been made to improve FGM manufacturing processes and the properties of FGMs. Several manufacturing methods are available for FGMs, depending on the type of FGM required. These include powder metallurgy, vapour deposition, the centrifugal method and solid freeform techniques. Out of all these methods, solid freeform techniques using a laser, plasma or electron beam as the energy source have become very popular in recent years. This paper describes functionally gradient materials (FGMs), their types and their methods of manufacture. Solid freeform manufacturing (SFF) techniques for FGMs are also discussed in detail. Finally, a few applications of FGMs, along with recent research work and the challenges of FGMs, are presented. Figure 2. Classification of functionally gradient materials [2]. Two different criteria are used to classify functionally graded materials: one is based on the structure of the material and the other on the size of the functionally graded material. As shown in figure 2, functionally graded materials can be divided into two major groups based on the structure of the material: continuously structured and discontinuously structured FGMs. In a continuous FGM, there is a continuous gradient from one material to the other. However, in the case of a discontinuous FGM, the material gradient is provided in a layered (stepwise) fashion. Based on size, FGMs are classified as thin FGMs (coatings and films) and bulk FGMs. Manufacturing processes like physical vapor deposition (PVD), chemical vapor deposition (CVD) and the self-propagating high-temperature synthesis (SHS) method are used to manufacture thin FGMs, whereas bulk FGMs are manufactured by using methods such as powder metallurgy, centrifugal casting and solid freeform/additive manufacturing techniques. Methods of manufacturing FGMs Several techniques are available to produce functionally graded materials (FGMs). A few of them are described below in detail. Vapour deposition technique Vapor deposition techniques describe a variety of vacuum deposition methods which can be used to produce thin films on base materials. All these techniques can be used to produce thin FGMs only. Different types of vapour deposition techniques include physical vapour deposition (PVD) and chemical vapour deposition (CVD). These are energy intensive and produce poisonous gases as by-products [3]. Other deposition-based techniques which can deposit thin functionally gradient coatings are electron beam deposition (EBD), ion beam deposition (IBD) and self-propagating high-temperature synthesis (SHS) [4]. All the above-mentioned methods are uneconomical for producing bulk-type FGMs. Powder metallurgy The powder metallurgy based technique can be used to produce bulk-type FGMs with a discontinuous (stepwise) structure. The process is carried out in steps including weighing and mixing of the powders according to the pre-designed spatial distribution as per the functional requirement, stacking and ramming of the premixed powders, and finally sintering [5]. Centrifugal method The centrifugal method is capable of producing continuously structured bulk-type FGMs. 
It uses force of gravity through spinning of mould to produce functionally graded materials [6]. Difference in material densities and spinning of mould produces FGMs. There are two disadvantages of this method are this method can produce only cylindrical shaped FGMs and there is limit to which type of gradient can be produced. Solid free form fabrication/additive manufacturing (AM) techniques Solid freeform fabrication (SFF)/Additive manufacturing (AM), also known as 3D printing, is a process of joining materials to make objects from 3D model data, usually layer upon layer, as opposed to subtractive manufacturing technology [7]. This tool-less manufacturing method can produce fully dense metallic parts in short time, with high precision. Metal AM processes can be broadly classified into two major groups, -Powder Bed Fusion based technologies (PBF) and Directed Energy Deposition (DED) based technologies. Both of these technologies can be further classified based on the type of energy source used. In PBF based technologies, thermal energy selectively fuses regions of powder bed. Selective laser sintering/melting (SLS/SLM) and electron beam melting (EBM) are main representative processes of PBF based technologies. In DED based technologies focused thermal energy is used to fuse materials (powder or wire form) by melting as they are being deposited. Laser Engineered Net Shaping (LENS)/Direct Metal Deposition (DMD), Electron Beam Free Form Fabrication (EBFFF) and arc based AM are some of the popular DED based technologies. Most of above mentioned SFF/AM methods are capable to produce functionally gradient materials (FGMs) from thick coatings to complicated FGM bulk parts. Advantages offered by AM techniques like higher material utilization, speed of production, design freedom, capability to produce complicated parts and less energy intensiveness are garnering particular interest in manufacturing FGMs for different applications. Powder bed fusion (PBF) based AM technologies like Selective laser melting (SLM) and Electron beam melting (EBM) are very popular methods for producing complicated parts owing to their high accuracy and surface finish as compared to directed energy deposition (DED) based technologies. However, PBF based technologies are less flexible than DED based technologies as far as functionally gradient material manufacturing are concerned. It is due to fact that material gradient by varying chemical composition of powder is not possible. However these methods can produce bulk FGMs by controlling porosity or by introducing different types of lattice structures in parts to be manufactured. Directed energy deposition (DED) based AM techniques are most convenient methods to produce FGMs since these methods can produce FGM from thick coatings to bulk parts having continuous or discontinuous gradient. These methods can produce FGMs with better adhesion and mechanical properties than powder bed technologies. Laser metal deposition (LMD) and Electron beam free form deposition (EBFFF)/ Electron beam additive manufacturing (EBAM) are popular methods based on DED based AM systems which can be used to manufacture different kinds of FGMs. Laser metal deposition (LMD). Laser engineered net shaping ( LENS ) and direct metal deposition ( DMD ) are main processes based on DED technology which uses laser beam as power source and raw material in the form of powder. 
LENS process was originally developed by Sandia national laboratories in 1997 and then licensed to Optomec (USA), whereas DMD process was jointly developed by POM group and University of Michigan [8,9]. In these process, high power laser beam is used to create a molten pool on base material and then powder material is injected into the molten pool by using nozzles. Delivered powder at laser beam spot is absorbed into the melt pool and creates deposit. As shown in figure 3, the work table can move in x -y direction to obtain desired cross section of sliced model and then subsequent layers can be deposited by incrementing deposition head in z direction to complete the object. Deposition of layers is repeated until the desired threedimensional component has been additively formed. Metal powder is delivered through nozzles and distributed around the circumference of deposition head either by gravity, or by using inert carrier gas. The entire process is conducted under controlled argon atmosphere where oxygen levels are maintained below 10 ppm. Laser based Directed energy deposition (DED) technique of metal AM is the most suitable technology to produce FGMs. All types of FGMs including continuous/discontinuous structured and thin /bulk type can be easily manufactured by using laser metal deposition (LMD). Pre-alloyed powders can be used to produce discontinuous type FGM. Whereas elemental powders can be delivered in precise amounts to the melt zone using separate feeders to generate various alloys and composite materials in continuously graded fashion. With the adoption of this technique, number of FGMs can be fabricated into complex shapes as the rate of elemental powder deposition can be controlled for each feeder during the fabrication for each layer and the final product can be achieved within hours [11]. Electron beam direct manufacturing. Electron Beam Direct Deposition (EBDM) is another technology based on directed energy deposition (DED) which uses electron beam as power source and raw material in the form of wire. This technology was developed by Sciaky (Chicago, USA) and also known as Electron beam additive manufacturing (EBAM). This process can produce medium to large sized near net shaped components inside vacuum chamber directly from digital model. After manufacturing, component requires finishing operations such as heat treatment and machining. Maximum size of component to be manufactured by EBAM is restricted by vacuums chamber size of the machine. Commercially available welding wires are used as the deposition material. The standard electron beam system is a Sciaky 60 kW / 60 kV welder. The electron beam is electronically focusable and the output power is scalable over a very wide range. This enables a very wide range of deposition rates to be achieved using the same system. Typical deposition rates of EBAM systems are from 3 to 9 Kgs/hrs depending on the material used and part complexity. Additionally, the EBAM system has closed loop control system in which melt pool size is continuously monitored and parameters are adjusted to keep the size constant. This ensures consistent part geometry, uniform microstructure and mechanical properties. [12] EBAM technology can also produce various types of functionally graded materials (FGMs) by using multiple wire feed nozzles as shown in figure 4 to single EB gun. Two or more wires of different metal alloys can be independently controlled and simultaneously feed to single molten pool to form graded materials. 
Both coating and bulk type of FGMs can be formed in continuous or discontinuous manner. FGMs by Arc deposition technologies. Wide ranges of arc based additive manufacturing processes are available where arc (plasma, TIG, MIG) is used as power source and material is used in the form of powder or wire. Plasma transferred arc (PTA) and plasma arc welding (PAW) are free form AM processes which uses plasma arc as power source and raw material in the form of powder and wire respectively. Shaped metal deposition (SMD) is another AM technique which uses tungsten inert gas (TIG) or Metal inert gas (MIG) welding with material in the form of wires for free form fabrications. Since most of such systems are wire feed type, these are also known as Wire assisted additive manufacturing (WAAM) systems. Large number of system configurations can be achieved by integrating conventional welding systems with robots, manipulators or gantries for automation. All of these processes with proper inert gas shielding have strong potential to produce near net shaped medium to large sized parts at much lower cost as compared to laser and electron beam based processes. Few welding based AM systems has been developed which can deposit functionally gradient materials. In this case, two filler wires are controlled separately and supplied to the arc (TIG or MIG) for deposition. Several studies have been carried out to demonstrate effectiveness of arc based AM configurations to produce FGMs. Sajan Kapil et. al [13] successfully fabricated Al-Si alloy having gradient in thermal conductivity. It was fabricated by using Hybrid layered manufacturing machine (HLM) which combines 3 axis CNC and gas metal arc welding (GMAW) deposition system. S. Suryakumar et al. [14] demonstrated two different ways to fabricate functionally gradient materials by using weld deposition. FGMs can be produced by varying process parameters or by using double wire feeder which can be guided and controlled separately. Applications In the recent years, there has been growing interest in the use of functionally gradient materials (FGMs) due to their numerous advantages over composite materials. Owing to graded variation in the composition, the properties of FGMs changes significantly and continuously from one surface to another, thus eliminating interface problems like stress concentrations and poor adhesion [15]. Use of FGMs is increasing in aerospace, defense, nuclear industry, biomedical and electronics sectors. FGMs are mainly used in those applications where combinations of two extreme properties are required in single component for example hardness and toughness [16]. For example, in case of turbine blade, thermal resistance and anti-oxidation properties are required at high temperature side and mechanical strength and toughness are required at low temperature side. To tackle these requirements turbine blades were used to manufacture by using metal-ceramic composites [17]. However, the property difference between two materials created residual stresses and adhesion issues at interface which may lead to failure. Turbine blade made by using FGM possesses smooth property change from ceramic to metal and diminishes interface problems [18]. Figure 5 shows turbine blade made by using FGM where properties like thermal conductivity and mechanical strength are continuously graded from metal to ceramic region. The applications like cutting tools and machine parts require heat, wear, mechanical shock and corrosion resistance. 
Reliability and cost/performance ratio plays major role in these applications [16]. Figure 6 shows composite and FGM cutting tools with metal shank and ceramic tip. Composite tool suffers from sharp transition of properties from metal to ceramic which may result in residual stresses and tool failure. However FGM cutting tool where FGM material used in between metal and ceramic increases thermal strength and tool expected to have long life. [20] Metal-ceramic FGM is also used in armors, [21,22] Where hard ceramic front surface blunts the projectile, whereas metallic back surface catches fragments and prevents penetration [23]. Similar FGMs also finds applications as heat resistant valves of internal combustion engines [17]. Functionally graded thermal barrier coatings (FTBC) by using various spraying techniques are popular methods of producing such FGMs [24,25] Another emerging area for FGM application is biomedical sector where functionally graded prosthesis joint can increase adhesive strength and reduce pain [26]. Figure 7. Prosthesis joint by using FGM [27] Performance of metal prosthesis joint can be enhanced by using functionally gradient material having high biocompatibility at surface. As shown in figure 7, biocompatibility property increases and mechanical strength decreases as we move from metal to bone. Number of diffusion and deposition based technologies are used to enhance surface properties of components. However, recently it is observed that their performance can be further increased by combining diffusion with deposition processes. Combination of diffusion processes like nitriding, nitro-carburizing etc. with deposition processes like PVD,CVD type coatings of hard material provides functionally gradient effect which improves properties [28]. These treatments are well known as duplex surface treatments where diffusion process (nitriding, boriding etc) are combined with coating processes (like PVD ,CVD or TRD-thermo reactive diffusion). Deposition technologies can produce hard, wear resistant layer on metal surface. However, thickness of this coating is very small and there is sudden change in properties between coating and the substrate material. It can result in premature failure of coating during service conditions by delamination of coating. Thermo chemical diffusion treatment like nitro-carburizing prior to deposition of hard coating can form graded structure from surface to substrate and provides tough and supportive sub-surface for hard coating [29]. It is also observed that such a graded structure can shift failure mechanism from Thus, Duplex surface treatment involving nitro-carburizing and thermo reactive deposition can retain beneficial effects of both treatments and eliminates drawbacks of them by forming graded structure from surface to base. Along with all these applications, FGMs are also used in piezoelectric actuators [30], functionally graded thermal protection system for hypersonic and supersonic planes [pap-modelling and analysis-206] and functionally graded heated floor systems [31]. Recent developments and challenges of FGMs In case of most of the FGMs, a material property varies in thickness direction [32]. However, modern applications may demand FG materials in which material properties in both thickness and axial directions [33]. Recently, a gradient material in which properties varies in both directions are also developed and extensively studied [34,35].Such smart materials are known as bidirectional functionally gradient (BDFGMs) materials. 
Laser metal deposition based AM techniques are the most suitable for producing such BDFGMs [11]. Although substantial technological advancement has been made in the field of FGMs, a few critical issues still need to be addressed. A proper database of FGMs in terms of process parameters and testing is still not available. Conventional testing and measurement methods may not be suitable for evaluating the performance of modern FGMs, so the development of advanced testing methods is required [36]. Most FGM processing techniques are very costly, so a low-cost processing technique that can mass-produce large, complex-shaped FGMs remains a challenge. The selection of the proper material for the intended application is the immediate and direct challenge for future technology development in the FGM research field.
Calibration of a generalized plasticity model for compacted silty sand under constant-suction shearing tests

The stress-strain response of compacted silty sand with an over-consolidated stress history often exhibits a distinct peak stress before reaching the critical stress when subjected to suction-controlled triaxial shearing. Such heavily consolidated soil also tends to simultaneously manifest initial compression which transitions into a dilational volumetric response. Modelling such strain-softening response, especially emulating the smooth transition from peak to critical state, is a challenge. In this paper, a previously developed generalized plasticity constitutive model, called MPZ (Modified Pastor-Zienkiewicz), is fine-tuned and calibrated using a set of suction-controlled consolidated drained triaxial tests conducted on compacted silty sand specimens. Firstly, the saturated and unsaturated silty sand characteristics and the experimental test program are briefly introduced. Secondly, the calibration of each component of the constitutive model, namely critical state, dilatancy, peak state, loading direction, water retention curve and bounding function, is briefly explained. Furthermore, the material parameters are estimated, the model performance is displayed and, finally, discussed. Preliminary simulations show that the MPZ model is able to mimic the overall suction-controlled triaxial test response of compacted silty sand decently well by taking into account the changes in density, pressure and suction. However, the peak states are not accurately modelled for low to high suction levels, which calls for further modification of the proposed model.

Introduction
Typically, compacted soils remain in an unsaturated condition almost throughout the year and hence suction plays a key role in the unsaturated soil response. Experimental characterization of the compacted soil over a wide range of soil suction and external confining stress assists in the assessment of our current unsaturated soil shear-strength evaluation protocols, expands our database of strength and volume-change response for various soils, and helps us understand, to the furthest extent, the macro-level interaction between solid grains and air-water menisci. Although the experimental results form a necessary basis to validate the predicted response from a particular soil model, there is a further need to reinforce the same from more than one constitutive modelling approach. For more than three decades now, several researchers have spent considerable effort either developing new constitutive models or modifying existing models to predict the unsaturated soil response, most notable among them being the Barcelona Basic Model, BBM [1], and bounding surface plasticity theory-based models [2,3,4,5,6]. BBM has been widely used in simulating the response of normally consolidated and slightly over-consolidated unsaturated soils, but it fails to capture the post-peak softening and stress-induced dilatancy response which is typical of over-consolidated soils. Compacted silty sand specimens with an over-consolidated stress history exhibit a different response from that of the same soil with a normally consolidated stress history at varying suction [7]. Normally consolidated soils show a continuous strain-hardening stress-strain response followed by a compression-type volumetric response.
On the other hand, over-consolidated soils tend to show a post-peak softening stress-strain response accompanied by initial compression followed by a dilation-type volumetric response. Accommodating these two distinct types of stress-strain and volume-change response in soil models is a challenge for researchers. Pastor, Zienkiewicz and Chan [8] proposed a generalized plasticity model for saturated soils, which was later modified by Manzanal et al. [9] and subsequently extended to unsaturated soils [10]. In this paper, a generalized plasticity model named MPZ (Modified Pastor-Zienkiewicz) [8,10], capable of reproducing unsaturated states, is used to reproduce the response of the suction-controlled consolidated drained triaxial tests conducted by [7]. The formulation of the MPZ model and the physical properties of the test soil, including saturated and unsaturated properties, along with the test procedure, are briefly explained. Calibration of the important features of the MPZ model in light of the recently obtained experimental test results is then outlined. A discussion of the model performance is then presented. Preliminary model simulations from the present study demonstrate the ability of the proposed MPZ model to account for the influence of capillary forces and external confining stress on the stress-strain and volume-change response, and therefore validate its ability to reproduce the macro-mechanical behaviour of the unsaturated test materials.

Soil properties
The test soil is classified as silty sand (SM) according to the Unified Soil Classification System (USCS). The physical and unsaturated properties of the compacted silty sand are listed in Table 1. Each soil sample was prepared by statically compacting it in a steel mold in nine equal layers (stress-based compaction) at a moisture content of optimum +2% and at its maximum Proctor dry density. Care was taken to produce almost identical soil samples with similar dimensions and initial compacted properties for each test. More details can be obtained from Patil et al. [10]. For unsaturated testing, after preparing the soil sample, it was carefully transferred to the triaxial cell and the target suction equalization was achieved within the triaxial cell via the axis-translation technique. After the pore-fluid equalization stage, isotropic consolidation under constant matric suction, ending at a net confining stress (p = σ_3 − u_a) of 100, 200 or 300 kPa, was accomplished. As the suction is kept constant, the cell pressure (σ_3) is increased to attain the target net stress. Once the sample has fully dissipated the excess pore-air and pore-water pressures, the suction-controlled shearing phase is carried out. Both matric suction and net stress are kept constant throughout the strain-controlled shearing stage. Table 2 includes the void ratio and water content at the end of the compaction (initial) and shearing (final) phases. It is an extract from Table 1 of [7], omitting the equalization (II) and consolidation (III) stages. In this paper, the notation adopted for the test values is "p"-"s", where p is the net stress and s is the suction. Compacted specimens were tested in unsaturated condition at four matric suction values (s = 50, 250, 500, and 750 kPa) and at three values of net confining stress (σ_3 − u_a = 100, 200 and 300 kPa). Both the sample preparation and the experimental program, including the pore-fluid equalization and preconditioning stages, are thoroughly described in [7].
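As a small illustration of the test matrix and the "p"-"s" labelling convention just described, the sketch below builds the grid of net-stress/suction combinations and shows how net stress would be recovered from the cell pressure and pore-air pressure. The variable names and the helper function are placeholders chosen here, not quantities defined in the paper.

```python
# Illustrative sketch of the experimental grid and the "p"-"s" notation
# used in this paper; names and example pressures are placeholders.

NET_STRESSES_KPA = (100, 200, 300)   # p = sigma_3 - u_a (net confining stress)
SUCTIONS_KPA = (50, 250, 500, 750)   # s = u_a - u_w (matric suction)

def net_stress(cell_pressure_kpa, pore_air_pressure_kpa):
    """Net confining stress p = sigma_3 - u_a, as defined in the text."""
    return cell_pressure_kpa - pore_air_pressure_kpa

def test_labels():
    """Enumerate all constant-suction shearing tests as 'p-s' labels."""
    return [f"{p}-{s}" for p in NET_STRESSES_KPA for s in SUCTIONS_KPA]

if __name__ == "__main__":
    print(test_labels())                  # '100-50', '100-250', ..., '300-750'
    print(net_stress(400.0, 100.0))       # placeholder pressures -> p = 300 kPa
```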
Constitutive model framework
Generalized plasticity (GP) constitutive models are characterised by introducing the phenomenological features in a hierarchical manner, i.e., progressing from simple to complex models and acquiring generality as they are further developed. In this context, the first generalized plasticity models were used in the late 1980s and 1990s [8]. They were further improved to account for anisotropy [11], stress history [12], critical state [9] and unsaturated states [10].

Formulation
The present constitutive model is based on a formulation that reproduces the main features of unsaturated soil behaviour. Both these features and their corresponding equations are summarized below, beginning with the saturated framework and continuing with the extension to unsaturated soils. References for the formulae can be traced back to [9] and [10]. The critical state line (CSL) depends on both pressure and density level, as shown in Eq. (1), where e is the void ratio, p' is the effective mean pressure, λ and ζ are material parameters, subindex c denotes the critical state and subindex a denotes atmospheric values. In the triaxial stress space, the critical state imposes a limit on the q/p' ratio, as shown in Eq. (2), where M_g is a material parameter. Dilatancy, d, is also density-dependent (through the void ratio) and stress-dependent, as shown in Eq. (3), where d_0 and m are material parameters, η is the stress ratio (q/p') and ψ_s is the state parameter (Eq. 4), which measures, in terms of void ratio, the gap between the current and the critical (c) state. The elastic shear and bulk moduli are given by Eqs. (5) and (6), where G_es0 and K_ev0 are material parameters and p_0' is the initial mean effective pressure. The Poisson modulus, ν, can be obtained from G_es0 and K_ev0 with Eq. (7). Compacted soils exhibit a peak state, which in the model is related to the void ratio by Eq. (8), where subindex p denotes the peak state and β_v is a material parameter. The loading direction is defined through Eq. (9), where h_1 and h_2 are material parameters, M_f is the slope of the yield surface in the p'-q plane and ψ_q is a variation of the state parameter (Eq. 10), β being a material parameter. The plastic modulus H_L is a function (Eq. 11) of different plastic moduli (H_DM, H_f, H_v, H_s) whose formulation is presented in [2] and references therein. On top of the previous formulation, the following equations are needed to adapt the GP model to unsaturated states. The effective stress, σ', in unsaturated states is of the form

σ' = (σ − p_a) + s·S_r    (12)

where s = p_a − p_w is the matric suction, p_a is the pore-air pressure, p_w is the pore-water pressure and S_r is the degree of saturation. Bold face indicates tensorial quantities. In order to express the influence of suction, a bonding or cementation parameter, ξ, is introduced according to [15], where ξ is the bonding parameter [15,16] and a and b are material parameters [19]. The bonding function (Eq. 15) relates the values of the critical effective stress, p'_cs, at s = 0 (saturation) and at a given suction (Eq. 16), as introduced in [ ].

Model calibration
As explained in section 3.1.1, the constitutive formulation of the model requires adjusting the material parameters. In the following, the procedures to calibrate each component of the model are outlined, namely: critical state, dilatancy, elasticity, peak state, loading direction and plastic modulus, along with the WRC and the bonding function.

Critical state
The critical state requires the estimation of four parameters: e_a, λ, ζ and M_g. The first three are related in Eq. (1) and M_g in Eq. (2). To adjust the parameters of Eq. (1), the void ratio (e) is plotted against the normalized pressure (p'/p_a)^ζ, both at critical state, distinguishing between suction levels. Figure 2 shows the experimental results and the estimated values of the three parameters involved; the resulting values are listed in Table 3. Two tests (50-300 and 250-300) have not been considered because they deviate from the overall tendency; they are shown in Figure 2 as outliers. Stress points at critical state in the triaxial space (p' vs q) allow the estimation of M_g, where p' is calculated with Eq. (12) using the residual saturation degree S_r = 0.05. The M_g value is reported in Table 3 and is obtained from the saturated state.
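A minimal sketch of this kind of parameter adjustment is given below. It assumes the commonly used Li-Wang form of the critical state line, e_c = e_a − λ (p'_c/p_a)^ζ, which is consistent with the linearization of e against (p'/p_a)^ζ described above but is not spelled out explicitly in the text; the data arrays and starting values are placeholders, not experimental values from this study.

```python
# Sketch of fitting critical-state parameters e_a, lambda, zeta, assuming
# the CSL form e_c = e_a - lam * (p_c / p_atm)**zeta (an assumed form here).
import numpy as np
from scipy.optimize import curve_fit

P_ATM = 101.3  # kPa, atmospheric pressure used for normalization

def csl(p_c, e_a, lam, zeta):
    """Void ratio at critical state as a function of mean effective stress."""
    return e_a - lam * (p_c / P_ATM) ** zeta

# Placeholder critical-state points (p' in kPa, void ratio) standing in for
# the experimental points plotted in Figure 2.
p_crit = np.array([120.0, 240.0, 360.0, 180.0, 300.0])
e_crit = np.array([0.52, 0.49, 0.47, 0.50, 0.48])

params, _ = curve_fit(csl, p_crit, e_crit, p0=[0.6, 0.05, 0.7])
e_a_fit, lam_fit, zeta_fit = params
print(f"e_a = {e_a_fit:.3f}, lambda = {lam_fit:.3f}, zeta = {zeta_fit:.3f}")
```

M_g would then be obtained separately as the slope of the critical-state points in the p'-q plane, as the text describes.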
Dilatancy
It was observed that the specimens did not dilate during the saturated tests. However, all the unsaturated specimens exhibited suction-induced dilatancy. In addition, dilatancy increased with increasing suction and was suppressed with increasing confinement at constant suction. Hence, dilatancy parameters need to be included in the proposed model. The dilatancy parameters contained in Eq. (3) are d_0 and m. Their estimation is done in two steps: first, m is determined analytically at the phase transformation point (TF), when d = 0, as the soil state changes from contractive to dilative (Table 4). Only some tests are included in Table 4. Then, d_0 is determined to fit the experimental curve of "d" versus "ψ_s,p" (Table 4). It is evaluated at the peak state and not at the TF point, because at the latter the dilatancy is zero. As the values of m and d_0 vary with suction, an average of these values is chosen, accepting the error associated with this assumption. Thus, m = 2.3 and d_0 = 1.6.

Peak state
As the soil response shows peak states, Eq. (8) requires adjusting the parameter β_v. This is done by plotting the exponential experimental curve of "η_p" versus "ψ_s,p". Figure 3 depicts both the experimental and calculated response for all suction values. Interestingly, in this research the saturated tests showed a strain-hardening stress-strain response with no distinct peak, i.e., the peak stress equals the critical stress. Table 5 indicates the parameters M_g and β_v for all suctions except s = 0 (saturated case, no peak). During unsaturated testing, however, the stress-strain curve showed a distinct peak before reaching the critical stress. As the suction increases, so does the peak stress, and hence a unified value has been considered, namely β_v = 2.0 (M_g = 2.0).

Elasticity
There are three elastic parameters involved in Eqns. (5), (6) and (7): G_es0, K_ev0 and ν. Any combination of two of them is a possible choice. The model performance with these two parameters should fit the initial slope of the experimental curve of "q" versus "ε_s". For the sake of brevity this fitting is not included. The final parameters are G_es0 = 80 and ν = 0.2.

Loading direction
Calibration of the loading direction constants, i.e. h_1 and h_2, is based on the hypothesis, following [17], that M_f/M_g ≈ D_r, where D_r is the relative density. Thus, adjusting "M_f/M_g" versus "ψ_q" from Eq. (9) allows us to determine both h_1 and h_2. A value of 1.8 has been assumed for β (Table 6).

Bonding function
The bonding function contains two material parameters, a and b, according to Eq. (15). Calibration is achieved by experimental adjustment of Eq. (16), plotting the bonding parameter "ξ" against the ratio of the effective stresses at critical state for unsaturated and saturated states. Table 6 shows the values obtained for a and b.
Plastic modulus
As explained in [18], calibration of the plastic modulus H_L from Eq. (11) can be accomplished by fitting the experimental (left-hand) and calculated (right-hand) terms of Eq. (17). The vectors n = (n_v, n_s) and n_g = (n_g,v, n_g,s) can be calculated from Eqns. (12) and (13) of [9].

dq/dε_s ≈ H_L / [n_g,s (n_v/3 + n_s)]    (17)

Once H_L is estimated, the parameter H_0' can be determined from Eq. (11) and reference [9]. For the sake of brevity this fitting is not included. The plastic modulus chosen is H_0' = 80.

Water retention curve
The water retention curve (WRC), or soil water characteristic curve (SWCC), as expressed in Eq. (13), contains the following material parameters: Ω, a_w, n, m and S_r0. To fit this WRC, an initial value of e = 0.44 is assumed. Table 7 shows the corresponding parameters obtained via adjustment of the experimental SWCC.

Model performance
The MPZ constitutive model based on generalized plasticity (GP) is run with a driver capable of testing saturated and unsaturated states. The experimental data available [6] consist of constant-suction drained consolidated triaxial tests with three net stress levels (100, 200 and 300 kPa) and five suction levels (0, 50, 250, 500 and 750 kPa), as explained in section 2. Figures 4 and 5 present the deviatoric stress-strain response (a) and volumetric strains (b) for s = 0 and 500 kPa, respectively, and include both the experimental and predicted (MPZ) curves. Clearly, the MPZ model simulates the saturated response very well. The MPZ constitutive model also captures reasonably well the tendencies due to different net pressures and suctions, with a unique set of constitutive parameters, for the s = 500 kPa tests. The transition from a compressive response at s = 0 to initial compression followed by a dilatant volumetric response at s = 500 kPa is smoothly captured by the proposed MPZ model. However, some departures from the experimental results should be noted. The peak states for a suction of 500 kPa are not reproduced; this is noticeable especially for the higher suction and net pressure values. Regarding the volumetric behaviour, the constitutive model shows a slight tendency to overestimate the dilatant response in comparison with the experimental volumetric strains.

Conclusions
A procedure for calibrating the parameters of the MPZ constitutive model, a type of generalized plasticity constitutive model, is presented. This calibration is done for a set of consolidated drained triaxial tests with three different net stress levels and five different suction levels, including the saturated case. In some cases, differences between saturated and unsaturated states arise, as in the critical-state parameter e_a and in the dilatancy and peak-state response, and a compromise solution has been adopted: the criterion has been to maintain a unique set of parameters for all states of density, pressure and suction. However, in future research, the effect of suction on peak and dilatancy must be further clarified, and a quantification of the error induced by the calibration should be introduced in order to measure its quality. The constitutive model adopted performs reasonably well, given the different net pressures and suctions that the set of experimental tests comprises. However, the peak states associated with high suctions are not accurately modelled. This requires further research and improvement of the model.
Clinical use and applications of histone deacetylase inhibitors in multiple myeloma

The incorporation of various novel therapies has resulted in a significant survival benefit in newly diagnosed and relapsed patients with multiple myeloma (MM) over the past decade. Despite these advances, resistance to therapy leads to eventual relapse and fatal outcomes in the vast majority of patients. Hence, there is an unmet need for new safe and efficacious therapies for continued improvement in outcomes. Given the role of epigenetic aberrations in the pathogenesis and progression of MM and the success of histone deacetylase inhibitors (HDACi) in other malignancies, many HDACi have been tried in MM. Various preclinical studies have helped us to understand the antimyeloma activity of different HDACi in MM as single agents or in combination with conventional, novel, and immune therapies. The early clinical trials of HDACi depicted only modest single-agent activity, but recent studies have revealed encouraging clinical response rates in combination with other antimyeloma agents, especially proteasome inhibitors. This led to the approval by the US Food and Drug Administration of the combination of panobinostat and bortezomib for the treatment of relapsed/refractory MM patients who have received two prior lines of treatment. However, it remains to be defined how HDACi can be incorporated into current therapeutic paradigms for MM so as to achieve longer disease control and significant survival benefits. In addition, isoform-selective and/or class-selective HDAC inhibition to reduce unfavorable side effects needs further evaluation.

Introduction
Multiple myeloma (MM) is a plasma cell malignancy characterized by an accumulation of high levels of monoclonal immunoglobulins or paraproteins in blood and/or urine and end organ damage, including anemia, renal failure, hypercalcemia, and bony lesions. 1 MM is the second most commonly diagnosed hematologic malignancy, representing 1.6% of all new cancer cases in the US. The outcomes of these patients have not been satisfactory, and the 5-year survival is 46.6% according to Surveillance, Epidemiology, and End Results analysis. 2 Over the last two decades, the treatment paradigm for MM has changed with the use of autologous stem cell transplantation (ASCT) and novel therapeutic options including proteasome inhibitors (PIs) and immunomodulatory drugs (IMiDs). 3 The incorporation of novel drugs, particularly thalidomide, lenalidomide (Len), and bortezomib (Btz), has resulted in a significant prolongation of overall survival (OS) in newly diagnosed and relapsed patients. 4 Despite these advances, acquired or intrinsic resistance to therapy leads to eventual relapse and fatal outcomes in the vast majority of patients. In an analysis of 286 patients with relapsed MM who were refractory to Btz and had relapsed following, were refractory to, or were ineligible to receive an IMiD, OS and event-free survival were 9 and 5 months, respectively. These findings indicate the poor outcome of patients once they become refractory to current modalities and the unmet need for safe and efficacious novel therapies. 5 MM is a biologically complex disease with great heterogeneity in terms of genetic alterations, thereby giving rise to individual differences in overall response and survival of patients receiving the same treatment.
In addition to genetic alterations such as point mutations, deletions, or translocations, epigenetic alterations and abnormal microRNA expression also contribute to the pathogenesis of MM. [6][7][8][9] Epigenetic aberrations are heritable changes in gene expression that occur independently of changes in the primary DNA sequence. Most of the epigenetic mechanisms occur at the level of chromatin. Chromatin is built up by nucleosomes that contain ∼146 bp of DNA wrapped around an octamer consisting of four core histones (an H3-H4 tetramer and two H2A-H2B dimers). The modifications of these nucleosomes play an important role in the transition of chromatin between open and closed states. 10 The N-terminal tail regions of histones undergo a wide variety of enzyme modifications, including acetylation, methylation, sumoylation, phosphorylation, and ubiquitination, which are crucial in modulating gene expression. This phenomenon is referred to as the "histone code" and has a significant effect on gene expression and chromatin structure. 11 Histone acetylation is controlled by histone acetyl transferases, which transfer acetyl groups to the side chains of lysine residues on the N-terminal tails, and histone deacetylases (HDACs), which deacetylate and counterbalance the activity of histone acetyl transferases. 12,13 The HDAC family of enzymes regulates the acetylation level of histones in chromatin and various nonhistone substrates, including many proteins involved in tumor progression, cell cycle control, apoptosis, angiogenesis, and cell invasion. 14 The HDAC family consists of 18 genes that are grouped into classes I, II, III, and IV based on their homology to the respective yeast orthologs Rpd3, HdaI, and Sir2. 15 The various classes of HDAC and their localization and function are shown in Table 1. HDAC inhibitors (HDACi) typically target the "classical" classes I, II, and IV HDACs (which contain a Zn2+ catalytic ion in their active site) and not class III HDACs, which use NAD+ as the essential cofactor. 14,16 This review focuses on the antimyeloma activity of different HDACi in preclinical and clinical settings.

Epigenetic changes in MM
Epigenetic aberrations play an important role in the initiation and progression of most malignancies, including MM, and this is largely attributed to alterations in the expression of histone-modifying enzymes. 17,18 In cancer, global DNA hypomethylation of repetitive sequences (such as long interspersed nuclear element 1 [LINE-1] and Alu repeats), gene bodies, and intergenic regions has been observed. This contributes to genomic instability, transposon activation, proto-oncogene activation, and loss of normal imprinting patterns. In addition, site-specific CpG island hypermethylation of gene promoters such as those of tumor suppressor genes results in gene silencing. 19 Even in MM, there is increased global hypomethylation of the LINE-1 and Alu repetitive elements compared to normal control subjects. 20 This seems to be an early event in the pathogenesis, and the global methylation levels of repetitive elements decrease as the disease progresses from monoclonal gammopathy of undetermined significance to MM. 20,21 Epigenetic alterations also increase the vulnerability to genomic instability. LINE-1 hypomethylation is found to be associated with translocations of chromosome 14 and deletion of chromosome 13q. 21 Also, the t(4;14) translocation showed more frequent hypermethylation, which may underlie the poor prognosis associated with its presence. 22
The most characteristic documentation of aberrations of histone modifications is in t(4;14) MM, which leads to overexpression of multiple myeloma SET domain containing protein (MMSET) (NSD-2), a histone methyltransferase. MMSET regulates genes involved in the p53 pathway, nuclear factor kappa B pathway, apoptosis, cell cycle regulation, DNA repair, and adhesion, and its upregulation enhances survival and adhesion of MM cells. 23,24 In addition, epigenetic alterations may result in dysregulation of critical oncogenic pathways such as the cyclin dependent kinase/retinoblastoma (CDK/Rb), Wnt/β-catenin, Janus kinase/signal transducer and activator of transcription protein (JAK/STAT), and death associated protein kinase-1/p14-ARF/p53 (DAPK-1/p14 ARF/p53) pathways, which contribute to the pathogenesis of MM. 8

HDACs in MM
Overexpression of HDAC proteins, especially class I HDACs, has been observed in both solid and hematological malignancies. [25][26][27][28][29] In the majority of tumors, HDAC expression is associated with a poor prognosis. [30][31][32] However, HDAC expression is correlated with a better prognosis in breast cancer, acute lymphoblastic leukemia, and chronic lymphocytic leukemia. [33][34][35] Patients with MM with high transcript levels of HDACs 1, 2, 4, 6, and 11 show a shorter progression-free survival (PFS) than those expressing lower levels. However, when HDAC protein levels were examined, it was found that only increased HDAC1 expression correlated with poor PFS and OS. 36

HDACi in MM
Butyrate and trichostatin A were among the initial molecules identified as HDACi. Since then, various natural and synthetic HDACi have been developed and evaluated as anticancer agents in preclinical and clinical settings. Major HDACi can be divided into five categories on the basis of their chemical structure (Table 2). The direct impact of HDAC inhibition on chromatin is hyperacetylation of histone proteins, which alters the chromatin structure and results in up- or downregulation of genes involved in cell cycle regulation, apoptosis, cytokine signaling, adhesion and migration, proteasomal degradation, drug resistance, and DNA damage. [37][38][39]

Preclinical activity of HDACi in MM

As a single agent
Microarray analysis has shown that HDACi induce transcriptional modulation of 7%-10% of the genes in myeloma and human lymphoid cell lines by acetylation of histones and nonhistone proteins. 38,40 The pattern of gene alteration is quite similar across different HDACi in the same cell line. 41,42 HDACi such as valproate, FK228, and ITF2357 affect the viability of interleukin (IL)-6-dependent and -independent MM cell lines, indicating that the antimyeloma activity of HDACi is not influenced by IL-6. [43][44][45] Moreover, coculturing MM cells with bone marrow stromal cells (BMSCs) does not protect them from death induced by LAQ824, ITF2357, LBH589, or KD5170, suggesting that HDACi could overcome the protective effect of the BMSCs. [45][46][47][48] The various possible mechanisms of the antimyeloma activity of HDACi are described below.

Cell cycle arrest
Almost all HDACi induce G0/G1 arrest due to an increase in histone acetylation and upregulation of the cyclin-dependent kinase (CDK) inhibitor CDKN1A via p53-dependent and -independent pathways, as observed in HDACi-treated MM cell lines.

DNA damage and oxidative stress
HDACi interfere with the function of DNA-repair proteins such as Ku70, RAD51, RAD50, DNA-PKcs, BRCA1, and BRCA2, thus inducing double-stranded breaks in DNA.
55,56 The HDACi PDX-101 and KD-5170 phosphorylate H2AX on Ser139 and induce DNA damage. 57 Another HDACi, SDNX-275, could enhance the DNA damage response induced by the alkylating agent melphalan in MM cell lines. 58 Moreover, HDACi-induced chromatin hyperacetylation makes DNA more sensitive to drugs, radiation, and reactive oxygen species. 59 The production of reactive oxygen species observed after HDAC inhibition seems crucial, as evidenced by the upregulation of several antioxidant genes such as glutathione S-transferase, glutathione reductase, and superoxide dismutase 1 and 2 on treatment of U937 leukemic cells with vorinostat. 60

Ubiquitin-proteasome system
HDACi decrease the activity of the 20S proteasome and downregulate genes encoding the 26S proteasome and ubiquitin-conjugating enzymes in MM cells. 38,46 Also, tubacin and pan-HDACi such as SAHA or LBH589 hyperacetylate α-tubulin and cause accumulation of polyubiquitinated proteins, subsequently leading to apoptosis. 48 HDAC inhibition enhances the cytotoxic effects of Btz both in vitro and in vivo, 38,47,48,61,62 which will be discussed later.

BMSC interaction
Multiple cytokines such as IL-6, IL-1, insulin-like growth factor-1 (IGF-1), tumor necrosis factor-α, vascular endothelial growth factor (VEGF), Dickkopf-related protein 1, and secreted frizzled-related protein are secreted at high levels by either the malignant plasma cells or the BMSCs. This then causes activation of signaling pathways in the MM cells and further promotes their interaction with cells in the tumor microenvironment such as BMSCs, endothelial cells, osteoblasts, and osteoclasts. The net result of such interaction is increased tumor growth, angiogenesis, bone disease, and drug resistance. 63,64 HDACi downregulate the expression of genes involved in cytokine signaling such as IGF-1, IGF-1 receptor, and IL-6 receptor. 38 Mitsiades et al 38,50 showed that vorinostat not only suppresses the expression of receptor genes involved in MM cell proliferation, survival, and/or migration, such as IGF-1R, IL-6R, TNF-R, CD138 (syndecan-1), and CXCR-4, 55 but also reduces the autocrine IGF-1 and paracrine IL-6 secretion of BMSCs.

In combination with PIs
HDACi have been tried in combination with a variety of agents for MM, but the most synergistic effects are seen with Btz. The precise mechanisms causing this synergy are not yet completely defined. The best understood mechanism is dual inhibition of the proteasomal and aggresomal protein degradation pathways, targeted by Btz and HDACi, respectively. Btz inhibits the proteasome and causes accumulation of polyubiquitinated proteins that form an aggresome by a process dependent on the interaction of HDAC6 with tubulin and the dynein complex. HDAC6 inhibition leads to increased hyperacetylation of tubulin and upregulation of polyubiquitinated proteins, resulting in apoptosis. 70,71 In accordance with the above-mentioned dual inhibition phenomenon, non-selective HDACi such as vorinostat as well as selective HDAC6 inhibitors such as tubacin and ACY-1215 have been found to inhibit aggresome formation and induce caspase-mediated apoptosis in MM when combined with Btz. [72][73][74],38 In addition, HDAC1 overexpression causes resistance to Btz both in vitro and in vivo, which is reversed by the class I HDACi romidepsin. Moreover, Btz downregulates the expression of class I HDACs and enhances HDACi cytotoxicity. 75 Taken together, the Btz and HDACi combination appears to be a promising therapeutic strategy that can overcome drug resistance.
In combination with other agents
Preclinical studies have shown that addition of vorinostat or panobinostat to MM cell lines and tumor cells derived from patients resistant to conventional therapies increases their susceptibility to IMiDs (such as pomalidomide or Len) and dexamethasone. 38,76 Moreover, treatment of MM cells with vorinostat increases their sensitivity to DNA-damaging agents, such as doxorubicin or melphalan. 38,77 Treatment of an MM cell line with sodium butyrate in combination with the DNA methyltransferase inhibitor decitabine resulted in increased expression of the p16 gene and G1 arrest, a phenomenon not seen with either agent alone. 78 Furthermore, the mTORC1 inhibitor RAD001 caused potent G0/G1 arrest, while LBH589 induced pronounced apoptosis, both of which were enhanced when the drugs were used in combination. 79 In addition, additive effects of HDACi have been seen in conjunction with the RSK2 (Ser227) inhibitor BI-D1870 and the heat shock protein-90 (alpha/beta) inhibitor NVP-AUY922 in preclinical studies. 80,81 Also, HDACi-inducible Bim is primarily neutralized by Bcl-2 and Bcl-xL, thus providing a mechanistic framework by which Bcl-2 antagonists potentiate the lethality of HDACi. 82 Also, SAHA and trichostatin A induce G1 arrest by upregulating p21 and p27 and inhibiting E2F transcriptional activity. The tumor necrosis factor-related apoptosis-inducing ligand effect can be enhanced after HDACi pretreatment and is found to be consistent with the upregulation of proapoptotic Bim, Bak, Bax, Noxa, and p53 upregulated modulator of apoptosis (PUMA) and downregulation of antiapoptotic Bcl-2 and Bcl-xL. 83

In combination with immune therapies
In addition to all the above-mentioned combination therapies, HDACi enhance MHC classes I and II expression and tumor-associated antigens on tumor cells, inducing cell death mediated by natural killer cells and cytotoxic T-cells. [84][85][86][87][88][89] Also, vorinostat induces the secretion of adenosine triphosphate and high mobility group box 1 protein (HMGB-1) and the expression of calreticulin on the tumor cell surface, which are important mediators of recognition and phagocytosis by dendritic cells. [88][89][90] However, the effect of HDACi on immune cells is far from clear and has been reviewed in detail elsewhere. Overall, it appears that HDACi could promote or inhibit the functions of regulatory T-cells, myeloid-derived suppressor cells, and tumor-associated macrophages. 91 HDACi have shown favorable responses in combination with immune therapies in preclinical settings. Christiansen et al 88 observed synergistic responses when vorinostat or panobinostat was used in combination with anti-CD40 and anti-CD137 antibodies in solid tumors. They also noted an important role for CD8+ cytotoxic T-cells and natural killer cells in the synergy observed. 88 In another study, LAQ824 induced synergistic cell death in combination with adoptive transfer of tumor-specific T-cells in melanoma. 92 However, the effect of this combination remains unexplored in MM. One preclinical study showed that LBH589 impairs the phenotype and function of dendritic cells by downregulating dendritic cell maturation, antigen presentation, and T-cell costimulation markers on immature and mature dendritic cells. 93 Thus, it is important to examine the immune status of patients with MM before and after HDACi treatment.
Such studies will help us not only to better understand the effects of HDACi on immune cells but also to identify potential combinations of HDACi with immune therapies.

Clinical trials using HDACi in MM
HDACs represent a very interesting clinical target for the development of novel antimyeloma therapy. The early clinical trials of different HDACi revealed only modest single-agent activity, but encouraging clinical response rates have been reported in combination with other antimyeloma agents such as PIs, IMiDs, dexamethasone, and conventional cytotoxic therapy.

Vorinostat
Vorinostat (SAHA) is a potent nonselective HDACi with a hydroxamic acid moiety, which causes reversible inhibition of classes I and II HDACs. It was the first epigenetic agent used therapeutically in malignancy and was approved by the US Food and Drug Administration (FDA) for the treatment of cutaneous T-cell lymphoma in 2006. 94 In the initial dose-escalating Phase I trial of vorinostat in relapsed/refractory MM (RRMM), 13 patients with a median of three prior lines of therapy were included. The most common drug-related adverse effects (AEs) included fatigue, anorexia, dehydration, diarrhea, and nausea and were mostly grade ≤2. Among the ten evaluable patients, one had a minimal response and nine had stable disease (SD). 95 Based on the synergy with PIs depicted in preclinical studies, a Phase I trial evaluated vorinostat in combination with Btz in patients with RRMM. The 23 patients enrolled in the study had received a median of seven prior regimens, with 20 patients post ASCT and 19 patients with prior Btz (nine of whom were Btz refractory). The dose-limiting toxicity was a prolonged QT interval, seen in two patients. The most common toxicities were myelosuppression, diarrhea, and fatigue. The overall response rate (ORR) was 42%, with two patients having very good partial response (VGPR) and seven patients having PR, including three patients who were Btz refractory. 96 VANTAGE 095 was a multicenter, open-label Phase IIB study in which 143 patients with RRMM (Btz refractory) received vorinostat in combination with Btz until progressive disease, unacceptable toxicities, or patient withdrawal. The ORR was 11%, while 47% of patients had SD. The median duration of response (DOR) was 7.0 months, and the median OS was 11.1 months. However, serious AEs were reported in 65% of patients, resulting in treatment discontinuations in 11% of patients. 97 On the basis of these encouraging responses, a multicenter, randomized, double-blind Phase III study, the VANTAGE 088 trial, was conducted. It enrolled 637 patients with RRMM who had progressive disease after one to three prior antimyeloma treatments (but were Btz sensitive) and randomized them to receive Btz with vorinostat or placebo. The addition of vorinostat to Btz significantly improved the ORR (56% vs 41%) and clinical benefit rates (CBRs) (71% vs 53%). The median PFS also increased from 6.83 to 7.63 months, but the median OS was not significantly different between the two groups. More patients in the vorinostat group developed high-grade AEs, especially fatigue, myelosuppression, and gastrointestinal disorders, compared to the placebo group. The authors concluded that although the study achieved the primary end point of prolonging the PFS, the clinical value of adding vorinostat to Btz needed further evaluation with regard to optimizing the dose of vorinostat to minimize toxicity.
98 Vorinostat has also been used in combination with carfilzomib in a compassionate use setting for patients with RRMM and was well tolerated. 99 A Phase I dose-escalation trial of vorinostat with Len/dexamethasone in RRMM demonstrated an ORR of 47%. Serious AEs were reported in 45% of the patients and were considered to be study drug related in 22%. 100 Hence, this combination seems to be effective and needs further evaluation.

Panobinostat
Panobinostat (LBH589) is a cinnamic hydroxamic acid analog that exhibits tenfold higher inhibitory activity against classes I, II, and IV HDACs than vorinostat. A Phase II multicenter study of oral panobinostat in 38 heavily pretreated patients with RRMM showed that it was well tolerated, and the most common AEs were nausea and fatigue. However, the ORR was lower than what was seen in the preclinical studies, with VGPR in one patient, mixed response (MR) in one patient, and SD in three patients. 101 In view of the poor results with its use as monotherapy and preclinical data depicting synergy with Btz, a Phase Ib trial studied the use of panobinostat in combination with Btz in RRMM. Among the 47 patients enrolled in the dose-escalation phase, 76% of patients had ≥MR, with responses seen in ten of 15 Btz-refractory patients. Of the 12 evaluable patients enrolled in the dose-expansion phase, MR was seen in 75% of patients. 102 PANORAMA 2 is a Phase II trial of panobinostat in combination with Btz and dexamethasone in patients with relapsed and Btz-refractory MM with at least two prior lines of therapy. Fifty-five heavily pretreated patients with a median of four prior regimens were enrolled. The ORR was 34.5%, and the CBR was 52.7%. Median PFS was 5.4 months, and the median DOR was 6.0 months. Common grade 3/4 AEs included thrombocytopenia (63.6%), fatigue (20.0%), and diarrhea (20.0%). 103 PANORAMA 1 is a multicenter double-blind Phase III trial of patients with RRMM after one to three previous treatment regimens. Approximately 768 eligible patients were randomized to receive Btz and dexamethasone with panobinostat or placebo. It was demonstrated that although the ORR (60.7% vs 54.6%) was similar, the proportion of patients achieving complete response (CR) or near CR (27.6% vs 16.7%) was significantly higher with panobinostat compared to placebo. The addition of panobinostat prolonged the median DOR (13.14 vs 10.87 months), median PFS (11.99 vs 8.08 months), and median OS (33.6 vs 30.4 months). Serious AEs were more frequent in the panobinostat group (60% vs 42%). Common grade 3-4 AEs were thrombocytopenia, lymphopenia, diarrhea, asthenia, and peripheral neuropathy. 104 A recent subgroup analysis of the PANORAMA 1 trial demonstrated a clear PFS benefit of 7.8 months for panobinostat-Btz-Dex among patients who had received two or more prior regimens, including Btz and an IMiD, a population with poorer prognosis and limited treatment options. 105 Collectively, the results of PANORAMA 1 and 2 show that the combination of panobinostat and Btz appears promising, and it has recently been approved by the FDA for the treatment of RRMM in patients with two prior treatments, including Btz and IMiDs.

Romidepsin
Romidepsin (FR901228 or FK228) is a depsipeptide derived from the bacterium Chromobacterium violaceum with activity mainly against class I HDACs. It was approved by the FDA for the treatment of relapsed cutaneous T-cell lymphoma in 2009.
106 A Phase II study evaluated the activity of romidepsin in heavily pretreated patients with MM who were refractory to therapies including ASCT, Btz, and IMiDs. Although no objective responses were achieved, ∼30% of patients exhibited stabilization of M-protein, resolution of hypercalcemia, and improvement in bone pain. The most common AEs were grade 1/2 and included nausea, fatigue, taste alteration, and clinically insignificant electrocardiographic abnormalities. 107 A Phase II trial used romidepsin with Btz and dexamethasone based on preclinical synergy. The incidence of grade 3 anemia and neutropenia was similar to that reported in previous trials using Btz-dexamethasone. PR was seen in 52% (VGPR in 28%) and CR was seen in 8% of the 25 patients enrolled. The median time to progression was 7.2 months, and the median OS was >36 months. 108 A Phase I/II trial is evaluating the combination of romidepsin and Len in patients with relapsed/refractory lymphoma and myeloma. The study is ongoing, but the Phase I results suggest that the combination is well tolerated up to standard single-agent doses of each drug. 109

ACY-1215
ACY-1215 is an oral small molecule targeted against HDAC6. In view of responses seen in xenograft severe combined immunodeficiency mouse models, 60 a Phase I trial is evaluating ACY-1215 alone (part 1, Phase Ia) and in combination with Btz (part 2, Phase Ib) in patients with RRMM after at least two lines of treatment. In Phase Ia, no maximum tolerated dose was identified, and the AEs reported were elevated creatinine, fatigue, hypercalcemia, and upper respiratory tract infection (not attributed to ACY-1215). In Phase Ib, grade 3 or 4 gastrointestinal AEs were rare and hematologic AEs were manageable. The ORR was 25%, and the CBR was 60% in this heavily pretreated patient population. 110 Another ongoing trial is exploring the combination of ACY-1215 with Len/dexamethasone. ACY-1215 has been found to be well tolerated, and no dose-limiting toxicity has been observed so far. The most common AEs, mainly grade 1/2, were fatigue, upper respiratory tract infections, and neutropenia. At the interim analysis, the ORR was 81%, including one CR and three VGPR. 111

Belinostat
Belinostat (PXD101) is a nonselective HDACi of the hydroxamic acid class. A Phase II study enrolled 24 patients with RRMM who received belinostat as monotherapy and in combination with high-dose dexamethasone. This treatment was well tolerated, with minimal side effects, yielding one MR and five SD. 112

Givinostat
Givinostat (ITF2357) is an orally active HDACi. In a Phase II trial, givinostat (alone or combined with dexamethasone) proved tolerable but showed only a modest clinical benefit. Only five of the 19 patients with advanced MM achieved SD. All patients experienced grade 3/4 thrombocytopenia, three had grade 3/4 gastrointestinal toxicity, and three had transient electrocardiographic abnormalities. 113

Conclusion
Epigenetic aberrations have now been recognized to contribute to the development and progression of various types of cancer, including MM. HDACi regulate the acetylation status of various histone and nonhistone proteins required for cellular processes, including gene expression, protein recycling, cell proliferation, and apoptosis, that are important for myeloma cell growth and survival. Preclinical evidence from studies of HDACi, alone or in combination with other antimyeloma agents, provides a strong scientific rationale for the evaluation of these regimens in the clinical setting.
Results from early-stage clinical trials demonstrate that although HDACi show only modest activity as single agents, their use in combination with other anti-MM agents, especially Btz, produces significant clinical responses. It must be noted that most of these trials were performed in patients who had relapsed on or were refractory to Btz, and perhaps their utilization earlier in therapy, likely in combination with Btz, would be more effective. Hence, their precise role in the armamentarium of therapy for MM is yet to be defined. In addition, isoform-selective and/or class-selective HDAC inhibition needs further evaluation to reduce unfavorable side effects.

Disclosure
SKK has received research support from Novartis for clinical trials. The authors report no other conflicts of interest in this work.
Treatment of livestock with systemic insecticides for control of Anopheles arabiensis in western Kenya

Background
Despite the implementation of vector control strategies, including insecticide-treated bed nets (ITNs) and indoor residual spraying (IRS), in western Kenya, this area still experiences high levels of malaria transmission. Novel vector control tools are required which target vector species, such as Anopheles arabiensis, that feed outdoors and have minimal contact with ITNs and IRS.

Methods
To address this need, ivermectin, eprinomectin, and fipronil were evaluated in Zebu cattle under semi-field conditions to evaluate the potential of these compounds to reduce the survival of blood-feeding An. arabiensis. Over the course of four experiments, lactating cattle received doses of oral ivermectin at 0.1 or 0.2 mg/kg, oral eprinomectin at 0.2 or 0.5 mg/kg, topical eprinomectin at 0.5, 0.75, or 1.5 mg/kg, or oral fipronil at 0.25, 0.5, 1.0, or 1.5 mg/kg. On days 1, 3, 5, 7, 14, and 21 post-treatment, cattle were exposed to An. arabiensis, and mosquito mortality post blood feeding was monitored. For the analysis of survival data, the Kaplan–Meier estimator and Mantel–Haenszel test were used to contrast the treatment and control survival functions.

Results
All three compounds significantly reduced the survival time of An. arabiensis. Twenty-one days post-treatment, mortality of mosquitoes fed on cattle dosed orally with 0.2 or 0.5 mg/kg eprinomectin, topically with eprinomectin at 0.5 mg/kg, or orally with either 1.0 or 1.5 mg/kg fipronil was still significantly higher than control mortality.

Conclusions
These data demonstrate the effectiveness of three insecticidal compounds administered systemically to cattle for controlling the cattle-feeding mosquito An. arabiensis. Eprinomectin and fipronil provided the longest-lasting control. Such endectocidal treatments in cattle are a promising new strategy for control of residual, outdoor malaria transmission and could effectively augment current interventions, which target more endophilic vector species.

Electronic supplementary material
The online version of this article (doi:10.1186/s12936-015-0883-0) contains supplementary material, which is available to authorized users.

An. arabiensis readily feeds on non-human vertebrates, particularly cattle [7][8][9]. Anthropophily in An. arabiensis also varies significantly, ranging from a high preference for human blood in West Africa to almost exclusive zoophily in Madagascar [7,[10][11][12]. In western Kenya, over half of the blood meals identified from An. arabiensis came from cattle, but a small proportion of An. gambiae s.s. also fed on cattle [13]. In areas where An. arabiensis is more anthropophagic, blood feeding still occurs predominately outdoors [14]. These behaviour traits make An. arabiensis less likely to encounter control strategies which target endophagic and endophilic mosquitoes. Zhou et al. [15] documented a resurgence of malaria parasite prevalence and malaria vectors in western Kenya despite increased usage of ITNs, which could be attributed to insecticide resistance and poor ITN coverage or usage. However, over the last 10 years, An. gambiae s.s. and An. arabiensis have also undergone changes in their relative abundance, likely influenced by the implementation of IRS and ITNs [13,16]. While these strategies have led to the reduction of An. gambiae s.s. and An. funestus s.s., an unintentional consequence has been a proportionate increase in An. arabiensis [13].
Therefore, novel control strategies are needed for use in integrated malaria management programmes that target outdoor-feeding vectors not effectively controlled by ITNs and IRS. One such approach is the use of "endectocides", i.e., treatment of a vertebrate host with a systemic insecticide to which haematophagous arthropod vectors become exposed upon blood feeding. This host-targeted insecticide strategy for vector control has already been demonstrated to be effective in reducing sand fly vectors of visceral and cutaneous leishmaniasis [17][18][19][20] and flea vectors of plague [21,22]. Targeting cattle, a frequent blood host of An. arabiensis [7-9, 12, 13], with a systemic insecticide may be an efficient approach to control this vector species. Foy et al. [23] discussed the application and potential impact of ivermectin and other endectocides on malaria control. Community-directed ivermectin treatment of humans is already the main strategy for control of onchocerciasis [24], and has been successfully used in humans for malaria control as well [23,25]. Many studies have demonstrated the lethal effect of ivermectin on mosquitoes after imbibing ivermectin-treated blood [26][27][28][29][30]. Eprinomectin is commercially used for control of endoparasites of livestock [31] and was demonstrated to be as effective as ivermectin at killing blood-feeding An. gambiae s.s. in the laboratory [30]. However, further investigation is needed to determine whether efficacy against mosquitoes is maintained in an in vivo system, and to ascertain the duration of effectiveness. Fipronil is a broad-spectrum insecticide which blocks the GABA-gated ion channels in the central nervous system [32]. Fipronil has been used to control ectoparasites on domestic animals [33], and as a pour-on or dip for cattle to control ticks [34,35]. Mosquitoes are highly susceptible to fipronil during all life stages and by different routes of exposure [36][37][38][39][40]. However, field tests of fipronil as a systemic insecticide for mosquito control are currently lacking. The long-term goal of this research is to create a product that can be utilized in an integrated malaria management programme, particularly to augment current control methodologies aimed at endophilic vectors by targeting more exophilic vectors with broader host utilization, such as An. arabiensis. To that end, this study examined the efficacy of ivermectin, eprinomectin and fipronil on the survivorship of adult An. arabiensis. The specific aim of this study was to determine the percent mortality of adult female An. arabiensis fed on cattle treated with different doses of ivermectin, eprinomectin, and fipronil, and to determine the duration of this lethal effect post-treatment.

Study area
The study site was located 10 km west of Kisumu in the village of Kisian, Kenya (latitude −0.073220° and longitude 34.662974°).

Cattle breed selection and cattle maintenance
All animal activities were reviewed and approved by the Institutional Animal Care and Use Committees at Genesis Laboratories, Inc. and the Kenya Medical Research Institute (KEMRI). Lactating Zebu cattle (Bos indicus) were leased or purchased from markets or from private individuals. Cattle were transported to the study cattle shed located on the grounds of the US Centers for Disease Control and Prevention and the Kenya Medical Research Institute (KEMRI), Kisian, Kenya. Transportation permits were provided by the department of veterinary services nearest to each purchase location.
Test subjects were housed in individual stalls (1.5 × 3 m) within a covered cattle shed and were allowed periodic grazing in an outdoor pen during the 12-day acclimation period. Upon arrival at the test facility, each cow received an ear tag with a unique identification number and was inspected for general health. All test subjects were provided with clean tap water ad libitum and clean feed consisting of 8 kg of chopped Napier grass (Pennisetum purpureum) and 1.3 kg of dairy meal per day, as directed by project veterinarians. Cattle (test subjects) were maintained in a semi-controlled environment with adequate ventilation and natural light. Each test subject's general health, and the daily temperature and relative humidity of the animal facility, were documented by staff during the acclimation period and the test.

Treatment randomization
A blocked randomization scheme by body weight was used to eliminate possible bias. Randomization was carried out using a random number generator service [41]. Each of the test subjects was assigned to either a control or a treatment group. For each experiment, the treatment groups which received doses of insecticide (test substance) consisted of three lactating Zebu cattle each, and the control group was allocated two lactating Zebu cattle. Precautions were taken to avoid animals contacting or grooming each other. The animals were housed individually in separate pens with a minimum distance to avoid contact between animals within and between treatment groups. Control animals were separated from the treatment animals.

Administration of the test substance
Four experiments were conducted in order to evaluate multiple doses each of ivermectin, fipronil, and eprinomectin (Table 1). In these experiments, cows were randomized into three cows per treatment group and two cows per control group, for a total of 11 cows per experiment. Test substance quantity was calculated using weights recorded no more than 3 days prior to dosing. Topical and oral application methods of administering eprinomectin were chosen to assess efficacy and explore differences between application routes in their effect on mosquito survivorship.

Experiment 1
Cattle in treatment group one (T1) received an eprinomectin dose of 0.2 mg/kg orally, subjects in T2 received 0.5 mg/kg orally, and subjects in T3 received 0.5 mg/kg topically. Because eprinomectin is not commercially available in oral formulations, crystalline eprinomectin was weighed in the laboratory and placed in a capsule for oral application. For T3, eprinomectin was applied topically using liquid Eprinex© (Merial Ltd., New Zealand), which was applied according to the manufacturer's application directions. The manufacturer-recommended application for the Eprinex© pour-on commercial product is 1 ml/10 kg, which achieves a dosage of 0.5 mg eprinomectin/kg body weight.

Experiment 2
Treatment group T1 received 0.1 mg/kg ivermectin orally, subjects in T2 received 0.2 mg/kg ivermectin orally, and subjects in T3 received 0.75 mg/kg eprinomectin topically. Ivermectin was administered orally using boluses; ivermectin tablets were weighed in the laboratory and placed in a capsule for oral application. Eprinomectin was applied topically using liquid Eprinex© applied according to the manufacturer's application directions, but at a higher dose.

Experiment 3
Treatment group T1 received 1.5 mg/kg eprinomectin topically, subjects in T2 received 1.0 mg/kg fipronil orally, and subjects in T3 received 1.5 mg/kg fipronil orally. Fipronil was administered orally using capsules.
Technical grade fipronil was weighed in the laboratory and placed in a capsule for oral application. Eprinomectin was applied topically using liquid Eprinex©, applied as described above, but with a higher dose. Experiment 4 Treatment group T1 received a fipronil dose of 0.5 mg/kg orally while subjects in T2 received 0.25 mg/kg orally. Fipronil was weighed in a laboratory and placed in a capsule for oral application. For each experiment, the control group (T0) was left untreated. Animal subject performance Clinical observations of test subjects were recorded daily by project staff during acclimation and experimentation phases of the study. In addition, a veterinarian conducted weekly health checks to more thoroughly examine test subject health. During application and experimentation periods, feed was weighed daily to assess the effects of test substances on the animals' appetite. When spillage occurred, feed was returned to the appropriate container and weighed to the nearest 0.5 gram. Cattle weights were recorded on the final day of acclimation and weekly throughout the course of the study. Differences in appetite and body mass were compared by evaluating test subject weight means and standard deviations before and after treatment. Mosquito bioassays All An. arabiensis used in this study were reared at the KEMRI/CDC, Kisian station, Kenya. Efficacy of each treatment was assessed by comparing survivorship of fully blood fed An. arabiensis at 1, 3, 5, 7, 14 and 21 days post treatment in experiments 1 and 2. While in experiment 3 mosquitoes were exposed at days 1, 7, 10, 14 and 21, in experiment 4 we exposed mosquitoes in days 1, 3, 5, 7, 14 and 21. Prior to bioassays approximately 600 An. arabiensis adults were separated into an experimental cage and starved for 12 h. The day of application, 11-12 plastic capsules were filled with approximately 50 3-4 day-old female mosquitoes. Containers were modified round paper cartons that were 9.5 cm deep and 8.5 cm in diameter, covered with nylon netting material on one end to facilitate blood feeding. Containers with mosquitoes were transported in a cooler to and from the cattle shed. The day before application all cows had a circular patch approximately 6 inches in diameter shaved on the ventral portion of the abdomen to expose skin and facilitate feeding. One container with An. arabiensis was applied to the shaved location of each test subject and secured by wrapping an ace bandage around the torso. One test subject in the control group received one capsules to ensure that the number of cartons applied to each group was equal. Containers were attached to test subjects for 30 min, and then carefully removed, and blood-fed females were counted. Unfed females were removed from the study. Data were only analyzed for fully-engorged female mosquitoes. Blood fed females were placed into cages, provided with a 10 % sugar source ad libitum. For each group of mosquitoes in experiment 1 and 2, mortality was monitored at 3, 6 and 24 h post feeding and then daily for approximately 12 days thereafter. In experiment 3 and 4 we followed the same scheme but mortality was recorded daily for 9 days after the first 24 h. Statistical analysis The statistical analysis of the survival data obtained from the control and treatment groups was conducted using the "survival" package [42] for the software R [43]. The package implements the Kaplan-Meier estimator, which is used to calculate the survival function of a random variable in time. 
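Before the Kaplan-Meier estimator just mentioned can be applied, the mortality checks described above (3, 6 and 24 h post-feeding, then daily) have to be converted into per-mosquito survival times with right-censoring at the end of follow-up. The Python sketch below shows one way to do that bookkeeping; the check times, counts and function names are illustrative assumptions, not the study's actual records or code.

```python
# Minimal sketch (not the study's code) of turning the mortality checks described
# above into (time, event) pairs for survival analysis. Numbers are illustrative.

def to_survival_records(check_hours, newly_dead, n_bloodfed, followup_end_h):
    """check_hours: times (h post-feed) of each check; newly_dead: deaths found at each check.
    Returns parallel lists of durations (h) and event indicators (1 = died, 0 = censored)."""
    durations, events = [], []
    for t, d in zip(check_hours, newly_dead):
        durations += [t] * d
        events += [1] * d
    survivors = n_bloodfed - sum(newly_dead)
    durations += [followup_end_h] * survivors   # still alive at the end of follow-up
    events += [0] * survivors                   # right-censored
    return durations, events

# e.g. one container of 42 engorged females followed for 12 days (288 h)
hours = [3, 6, 24, 48, 72, 96]
deaths = [0, 1, 5, 9, 7, 4]
dur, ev = to_survival_records(hours, deaths, n_bloodfed=42, followup_end_h=288)
```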
A survival curve is the plot of the survival function representing the survivorship of the target population. The statistical difference between the control and the treatment was assessed using the Mantel-Haenszel test as implemented in the survival package. P values smaller than 0.05 were taken to represent a significant difference between the control and treatment groups. The resulting survival functions were used to estimate the median survival time and 95 % confidence intervals for the estimate and the size of the effect of the active ingredient (Table 2). To compare the effect of time on the effectiveness of the test substance we performed a post hoc analysis for the same concentration and delivery method (a single row in a table). For this, the significance level was adjusted using a Bonferroni correction (α/n, where α is the significance level set at 0.05 and n is the number of comparisons). Cattle observations, health, and performance No adverse health effects arose in association with the treatments. For experiments 1-3, the test subjects' mean daily feed consumption did not differ between the acclimation and test periods. For experiment 4, test subjects' mean daily feed consumption increased slightly from the acclimation (µ = 7.5 kg of hay/day, σ = 0.37) to the test period (µ = 7.8 kg of hay/day, σ = 0.02); t (15) = 2.13, p = 0.049. None of the cattle in any of the experiments experienced any large changes in their body mass over the course of the study. Mosquito bioassays The data are presented in table form, with survival curves for all treatment groups and time points available in Additional file 1. Table 2 shows the median survival time (and 95 % confidence intervals) with experiments separated by horizontal lines; each row corresponds to a treatment (or control) in an experiment and each column is a time point when mosquitoes were challenged. Tables 3, 4, 5 and 6 each correspond to one experiment, and each row shows the result for the active ingredient, concentration and delivery method. The columns represent the day post-exposure when mosquitoes were challenged against the test substance. The values shown are the p values of the comparisons between a particular treatment and the control on a given day. Experiment 1 Mortality of mosquitoes fed on cattle dosed orally with 0.2 mg/kg eprinomectin was delayed during the days immediately following treatment. Survivorship of mosquitoes in this group was not significantly different from the controls until 5 days post-treatment, but then remained significant out to 21 days post-treatment (Table 3) with the exception of day 7. In contrast, mortality of mosquitoes fed on cattle dosed orally or topically with 0.5 mg/kg eprinomectin was significantly different from controls by 1 day post-treatment (Table 3). The 7-day time point post-treatment was an anomaly: a control replicate had large mortality by 24 h. Experiment 2 At the lowest dose of ivermectin, 0.1 mg/kg, mosquito survivorship was marginally significantly different from the control at 1 day post-treatment, but then not significant (Table 4). For mosquitoes fed on cattle dosed with 0.2 mg/kg ivermectin, survivorship was significantly different from the controls at 1, 5 and 7 days, but not at 3, 14 and 21 days. For mosquitoes fed on cattle dosed topically with 0.75 mg/kg eprinomectin, survivorship was significantly different from the controls from 1 to 7 days, but not at 14 or 21 days (Table 4).
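The comparisons summarised in Tables 2, 3 and 4 come from the Kaplan-Meier and Mantel-Haenszel (log-rank) workflow described under "Statistical analysis" above, which the study ran with the R "survival" package. The sketch below is only an illustrative Python equivalent of that workflow, assuming the lifelines library is available; the function names and the Bonferroni helper are mine, not the paper's.

```python
# Rough Python equivalent of the R survival-package workflow described above,
# assuming the lifelines library is installed. Not the study's actual code.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_to_control(ctrl_dur, ctrl_ev, trt_dur, trt_ev):
    """Kaplan-Meier median survival per arm plus a log-rank (Mantel-Haenszel-type)
    comparison of the control and treatment survival curves."""
    km_ctrl, km_trt = KaplanMeierFitter(), KaplanMeierFitter()
    km_ctrl.fit(ctrl_dur, event_observed=ctrl_ev, label="control")
    km_trt.fit(trt_dur, event_observed=trt_ev, label="treated")
    res = logrank_test(ctrl_dur, trt_dur,
                       event_observed_A=ctrl_ev, event_observed_B=trt_ev)
    return km_ctrl.median_survival_time_, km_trt.median_survival_time_, res.p_value

def bonferroni_alpha(alpha: float = 0.05, n_comparisons: int = 6) -> float:
    """Adjusted significance threshold for the within-row post hoc comparisons."""
    return alpha / n_comparisons
```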
Experiment 3 For mosquitoes fed on cattle dosed topically with 1.5 mg/kg eprinomectin, survivorship was significantly different from the control out to 10 days, but not at 14 or 21 days (Table 5). For both doses of fipronil, mosquito survivorship was significantly different from the control at all time points out to 21 days (Table 5). Table 2 Median survival time and 95 % confidence interval per experiment. Values in square brackets represent the 95 % confidence intervals for the estimated median lethal time; n/a, time point not tested; n/a 1 , estimate not applicable because the survival function did not reach 0.5, so there is no median estimate; ∞, infinity, the survival function did not reach the value corresponding to the 95 % limit. h hours, d days, Exp. experiment, OE oral eprinomectin, TE topical eprinomectin, OI oral ivermectin, OF oral fipronil Discussion This study evaluated the endectocidal activity of three compounds in cattle against An. arabiensis mosquitoes in a semi-field environment. Positive results were achieved with each test substance, but with varying degrees of efficacy depending on dose and route of administration. Ivermectin Ivermectin mass drug administration (MDA) to humans for onchocerciasis control has been demonstrated to also reduce malaria parasite transmission by affecting mosquito survivorship, vector competence, re-feeding rates, and parity [25,29,44,45]. When administered to humans during MDA campaigns, the standard oral dose of ivermectin is 150 µg/kg. While the aforementioned studies and others have well characterized the use of ivermectin as a human endectocide for malaria vector control, this study was one of the first to evaluate the use of ivermectin in cattle for control of An. arabiensis. Fritz et al. [26] evaluated a commercially available injectable formulation of ivermectin in cattle, and found that most (90 %) of the An. gambiae s.s. that fed on the ivermectin-treated cattle within two weeks of treatment failed to survive more than 10 days post-blood meal. Further, no eggs were deposited by An. gambiae s.s. that fed on ivermectin-treated cattle within 10 days of treatment [26]. These results are promising; however, injectable formulations are difficult to administer and require veterinary expertise. In that light, the current study evaluated two oral doses of ivermectin. Of these oral formulations, the higher of the two doses (0.2 mg ivermectin/kg) achieved significant results out to 7 days (168 h) post-treatment. This result is also consistent with the described pharmacokinetics of this compound. Ivermectin has an elimination half-life of 32-178 h when administered intravenously, depending on species [46]. Day 3 in the ivermectin experiment had a control replicate with large mortality (38 %) by 24 h. If the control replicate is removed, the median survival time in the control increases to 96 h (95 % confidence interval 48, 120 h) (Table 2). The 0.1 mg/kg ivermectin treatment remains non-significant, while the 0.2 mg/kg ivermectin treatment becomes significant (p = 0.001). A significant effect on mosquito survivorship for approximately 1 week also corroborates the results obtained by Alout et al. [25], whereby a 33.9 % reduction in survivorship of An. gambiae s.s. was observed for 7 days following an MDA in humans. While this effect on mosquito survival was brief, a significant reduction in mosquito parity rates was observed for more than 2 weeks after the MDA [25].
Additionally, sporozoite rates were reduced by 77.5 % for 15 days [25]. Kobylinski et al. [45] similarly also observed a 79 % reduction in sporozoitepositive An. gambiae s.s. for over 2 weeks following MDA. Therefore, ivermectin treatments in cattle may similarly impact the vectorial capacity of An. arabiensis in a field situation, and warrant further field investigation. Table 4 Kaplan-Meier curve comparisons (p values) for oral doses of ivermectin and topical eprinomectin (experiment 2) * Comparison statistically different from the control at an adjusted α of 0.003 Eprinomectin Eprinomectin has established utility in the agricultural industry as an effective means to control endoparasite loads in cattle [31], with the additional health benefits of increasing cattle weight gain and milk production [31,47]. However, eprinomectin has not been widely used for public health purposes. Butters et al. [30] evaluated eprinomectin alongside several other active ingredients in the laboratory for control of An. gambiae s.s. and found it had a similar LC 50 to ivermectin. Fritz et al. [48] also evaluated eprinomectin and ivermectin in the laboratory against An. arabiensis and found both compounds to be effective at killing mosquitoes at concentrations under 10 parts per billion. However, no studies to date have evaluated eprinomectin under field conditions for control of anopheline malaria vectors. In this study two oral (0.2 and 0.5 mg/kg) and three topical (0.5, 0.75, and 1.5 mg/kg) doses of eprinomectin were evaluated. Of the oral formulations, the lower dose demonstrated a delayed effectiveness, with a significant effect on mosquito mortality at time points from 5 to 21 days post-treatment, but not at 7 days. In contrast, the 0.5 mg/ kg dose had a significant effect on mosquito mortality up to 5 days post-treatment and again at 21 days, but not at 7 or 14 days. Of the topical (pour-on) formulations, significant effects were observed immediately for all three doses (1 day post-treatment) (Tables 3, 4, 5), however, the lowest treatment (0.5 mg/kg) resulted in significant mosquito mortality for 21 days with the exception of day 7 (Table 3), however as previously mentioned, day 7 of experiment 1 had large mortality in a control replicate (12/17 dead by 24 h). Removing this replicate increases the median survival time in the control from 96 h (95 % confidence interval 48, 116 h) to 192 h (95 % confidence interval 120, 264 h). As a result the topical 0.5 mg/kg eprinomectin treatment becomes statistically significant (p = 5.9 × 10 −4 ). For reasons unclear, the eprinomectin higher doses were effective for shorter periods of time (7 days for 0.75 mg/kg for and 10 days for 1.5 mg/kg) than the lower doses (21 days for 0.5 and 0.2 mg/Kg). The long-lasting low-dose effect of eprinomectin are unexpected and despite a large sample size (Table 1) and low variability ( Table 2) further experimentation will be necessary to confirm these results. Oral formulations of eprinomectin are currently not commercially available, but should be further developed for study due to the potential for low concentrations to have a significant killing effect on mosquitoes ( Table 3). The same dose and route of administration was assessed for eprinomectin and ivermectin, although in separate experiments. 
When comparing 0.2 mg/kg oral ivermectin (Table 4) and 0.2 mg/kg oral eprinomectin (Table 3), ivermectin was immediately effective with significant mosquito mortality out to 7 days post-treatment, whereas the effectiveness of eprinomectin was delayed, but lasted out to 21 days. Laboratory studies comparing eprinomectin and ivermectin have also demonstrated comparable effectiveness of both compounds but with slightly different pharmacokinetics. Butters et al. [30] reported significant knockdown of An. gambiae s.s. with both ivermectin and eprinomectin, however, the knockdown effect of eprinomectin was within the first hour following the blood meal whereas the knockdown effect for ivermectin was not apparent until 24 h after the blood meal [30]. The discrepancy between our results and those of Butters et al. [30] may relate to the difference in pharmacokinetics of these compounds under laboratory and in vivo conditions. In the laboratory where mosquitoes were exposed to blood spiked with the active ingredient, the results obtained would related directly to the activity of the compound itself in the absence of any metabolites or conditions associated with feeding on treated cattle. Ivermectin and eprinomectin have similar plasma kinetics and mean residence time (the amount of time one molecular stays in the organism) when administered to mice intravenously and orally, however with some variation in the rate and mechanism of drug elimination [49]. With the information available, it is also difficult to compare the concentration of active ingredient mosquitoes would have been exposed to at corresponding time points between these publications, or to know the relative contribution made by mosquito genetics, since Butters et al. [30] utilized An. gambiae s.s. G3 strain, and this study used An. arabiensis sourced in Kenya. Further study is warranted to ascertain the nature of the delayed knockdown effect observed in An. arabiensis when exposed to eprinomectin circulating in cattle treated orally. More work is also needed to assess the complementary uses of these compounds in the field. Since eprinomectin is not approved for human use as is the case ivermectin, endectocidal treatments in cattle with eprinomectin may be a complementary approach to the use of ivermectin in people when both An. gambiae s.s. and An. arabiensis are present. Fipronil Cattle were dosed orally with four different doses of fipronil over the course of two experiments. Fipronil dosing significantly reduced mosquito survivorship for at least 21 days when cattle were administered either 1.0 or 1.5 mg/kg, and for at least 7 days at the lower doses of 0.25 and 0.5 mg/kg. A significant effect may have occurred at the 14-day time point for the 0.25 and 0.5 mg/kg concentrations; however these data could not be analysed due to unexplained mortality in the control groups. Poché et al. [19] also tested fipronil as an endectocide in cattle, although for control of sand fly vectors of visceral leishmaniasis in India. In that study, four oral dose levels were evaluated: 0.5, 1.0, 2.0, and 4.0 mg/kg. Between 20 % (0.5 mg/kg) and 100 % (4.0 mg/kg) mortality was observed in adult Phlebotomus argentipes sand flies fed on treated cattle 21 days post-treatment with fipronil [19]. At the 1.0 mg/kg dose level in both studies, control of adult sand flies [19] and mosquitoes (this study) was significantly different from the controls at 21 days post treatment. 
This long-lasting efficacy makes fipronil a strong candidate for future malaria control field studies. The use of fipronil as a public health endectocide has already been extensively evaluated for control of adult and larval sand fly vectors of leishmaniasis. In India, fipronil treatment of two rodent species resulted in 100 % mortality of P. argentipes larvae following consumption of treated feces [18]. In that same study, 100 % mortality of blood-feeding adult P. argentipes was also achieved when sand flies were allowed to feed on rodents up to 20 days post-treatment [18]. In Tunisia, Derbali et al. [20] reported that fipronil -treated baits consumed by the desert's jird (Meriones shawi) had a systemic effect on the survival of Phlebotomus papatasi after blood meal acquisition, as well as a feed-through effect on the survival of larval P. papatasi after consumption of feces. And as mentioned above, treatment of cattle with fipronil also successfully controlled adult and larval P. argentipes for 21 days [19]. Lopes et al. [35] also used fipronil treatment of cattle to control ivermectin-resistant cattle ticks, Rhipicephalus (Boophilus) microplus. The topically administered fipronil formulation (1 mg/kg) achieved efficacy values greater than 95 % from 3 to 28 days after treatment. On 35, 42 and 49 days post-treatment, efficacy values were 94, 78 and 61 %, respectively [35]. The application of fipronil as a cattle endectocide for malaria control is a natural extension of these studies. Conclusions Ivermectin, eprinomectin, and fipronil each show promising potential as endectocides administered to cattle for lowering the survival rate of An. arabiensis mosquitoes, and hence reducing malaria transmission rates. Mosquito mortality was significantly higher than control mortality as long as 21 days post-treatment after mosquitoes fed on cattle dosed orally with 0.2 or 0.5 mg/kg eprinomectin, topically with eprinomectin at 0.5 mg/kg, or orally with either 1.0 or 1.5 mg/kg fipronil. Other components of vectorial capacity were not evaluated, and would be valuable to incorporate into future studies. Endectocidal treatments in cattle are a promising new strategy for control of residual, outdoor malaria transmission driven by vectors that feed on cattle, and could effectively augment current interventions which target more endophilic vector species.
2017-06-26T12:38:59.870Z
2015-09-17T00:00:00.000
{ "year": 2015, "sha1": "1254307324294134f5d8623d85605d6b9675c85d", "oa_license": "CCBY", "oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/s12936-015-0883-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "09dc53839a99d923fbba14f245f019706b42724b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
128309218
pes2o/s2orc
v3-fos-license
The Role of the OCP and Syllable Structure in Makkan Arabic Hypocoristics This paper shows that the failure of names related to glide-medial and glide-final roots to form Pattern I C1aC2C2uuC3 hypocoristics is due to the effects of the OCP and syllable structure constraints, respectively. Names related to roots with medial [w] fail to form Pattern I hypocoristics since the sequence -wuuviolates the OCP. Names related to glide final roots [y] or [w] form C1aC2C2u hypocoristics where the deletion of the final glide avoids violation of syllable structure constraints in the language. The Optimality-Theoretic account in this paper demonstrates that there is only one native pattern of hypocoristic formation in Makkan Arabic with two manifestations, C1aC2C2uuC3 and C1aC2C2u. Syllable structure constraints and the OCP account for the apparent differences between these two forms. Data from Makkan Arabic hypocoristic formation show that native speakers can factor out the root consonants from the actual name and use them in hypocoristic formation. Introduction In several Arabic dialects including Makkan Arabic, (MA, henceforth) the main pattern of Hypocoristic Formation, (HF, henceforth) is disyllabic C 1 aC 2 C 2 uuC 3. The vowels are invariably /a/ and /uu/, while the consonants coincide with those of the actual name. The first syllable in the pattern is a CVC syllable with germination of the second consonant of the root. Stress falls on the second syllable of the hypocoristic in accordance with the general rule of stressing final superheavy syllables in the majority of the Arabic dialects. This pattern is primarily used for hypocoristics related to triconsonantal roots with no glides. For instance, ħassuun is the acknowledged nickname for names such as ħasan, muħsin, ħusen, all of which are related to the root √ħsn. However, names related to triconsonantal roots with medial or final glides fail to form C 1 aC 2 C 2 uuC 3 hypocoristics. First, glide-medial roots exhibit dual behavior with respect to HF of Pattern I. If the medial glide of the root is [w], the name fails to take a hypocoristic of Pattern I, for example, the name nawaal comes from the root √nwl 'achievement' fails to take Pattern I *nawuula as a nickname. If on the other hand, the medial glide is [y], the name does form a hypocoristic of Pattern I. The name ʕaayʃa from the root /ʕyʃ/ 'living' forms ʕayyuuʃ as its hypocoristic. Second, names related to glide-final roots fail to form Pattern I, whether the final glide is [w] or [y]. Thus, the names ʃaadyah and zakiyyah fail to form Pattern I hypocoeistics *ʃadduuw and *zakkuuy, respectively. The two names are related to the roots √ʃdw 'chanting' and √zky 'righteous', respectively. Instead, these two names form Pattern two hypocoristics ʃaddu and zakku, respectively. The templatic shape of Pattern II hypocoristics is C 1 aC 2 C 2 u. The final glide never appears in the hypocoristic. The present Optimality-Theoretic analysis provides a straightforward account for the failure of glide-medial and glide-final roots to form C 1 aC 2 C 2 uuC 3 hypocoristics. The first follows from the OCP effects while the second is enforced by constraints on MA syllable structure. These constraints are well motivated and independently needed in the language. The other major issue in the paper is the claim that the two most frequently used patterns of HF, C 1 aC 2 C 2 uuC 3, and C 1 aC 2 C 2 u are structurally related and represent two variants of one pattern. 
There is, however, a division of labor in the use of these patterns: the C 1 aC 2 C 2 u pattern is followed by exactly those roots that fail to follow Pattern I, C 1 aC 2 C 2 uuC 3, i.e. glide-final roots. Abu-Mansour [1,2,3] describes these forms of HF in MA as two separate patterns, and falls short of uncovering the underlying basic similarities of the two forms as well as their differences. In the present analysis, evidence for the claim that we are dealing with two manifestations of one pattern comes from the explanation of the same problems that led earlier analyses to challenge the primacy of the lexical root and to assume an output root for HF [4]. In this paper, the failure of glide-medial and glide-final roots to form C 1 aC 2 C 2 uuC 3 hypocoristics is shown to result from the domination of the faithfulness constraints by the OCP and by constraints on syllable codas in MA, respectively. This in itself leads to a very welcome result of the analysis: what has been referred to as Pattern II C 1 aC 2 C 2 u in Abu-Mansour [1,2,3] and in Davis and Zawaydeh [4] will in fact provide the only possible way of forming hypocoristics for names related to glide-final roots. In OT terms, this will be the result of only two of the constraints in the grammar of MA hypocoristics occupying different places in the constraint hierarchy. The remainder of this article is organized as follows. Section 2 introduces the two patterns of HF in MA, C 1 aC 2 C 2 uuC 3 and C 1 aC 2 C 2 u, as well as exceptions to Pattern I. Section 3 provides a detailed analysis of hypocoristics related to triconsonantal roots and supports the fact that only the root consonants are referenced in the hypocoristic. It establishes the role of the OCP and syllable structure constraints in explaining the failure of glide-medial and glide-final roots to form C 1 aC 2 C 2 uuC 3 hypocoristics. Section 4 summarizes the similarities as well as the differences between the two patterns in terms of constraint ranking, capitalizing on the idea that C 1 aC 2 C 2 u is, in fact, a variant of C 1 aC 2 C 2 uuC 3 selected by glide-final roots. Section 5 summarizes the main points of the paper. Basic Patterns of HF in Makkan Arabic Makkan Arabic utilizes two patterns of hypocoristics, C 1 aC 2 C 2 uuC 3 and C 1 aC 2 C 2 u. They are referred to as Pattern I and Pattern II, respectively. Examples of the two patterns with their CV-Templates are given in (1). Among the two patterns, C 1 aC 2 C 2 uuC 3 is the most common and frequently used by all speakers of MA and several other dialects of Arabic. C 1 aC 2 C 2 uuC 3 hypocoristics have been the focus of several studies [1,2,3], [5,4], and [6]. As for Pattern II, it is less common than Pattern I, and is only used for certain names. A detailed discussion of Pattern II in MA is found in Abu-Mansour [1, 2, 3]. Pattern I: C 1 aC 2 C 2 uuC 3 Hypocoristics Pattern I hypocoristics are primarily used for names that are associated with sound triconsonantal roots that include no glides. The three consonants always appear in the actual name as well as in the corresponding hypocoristic. Representative examples appear in (2). Since only root consonants are realized in the hypocoristic, several names may share the same nickname as in (2b) and (2c). Finally, the suffix [a] optionally follows the hypocoristic, and is not decided by the gender associated with the name.
In addition, names related to quadriliteral and biconsonantal roots form Pattern I hypocoristics. For instance, the names Ɂibraahiim and ħanaan come from the roots √brhm 'steadfast' and √ħn 'affection', respectively. Their corresponding nicknames are barhuum and ħannuun(a). However, the analysis in this paper will include only Pattern I hypocoristics of names related to triconsonantal sound roots as well as the exceptions to this pattern, i.e., glide-medial and glide-final roots. Glide-medial Roots Names related to glide-medial roots exhibit dual behavior with respect to HF of Pattern I. If the medial glide of the root is [w], the name fails to take a hypocoristic of Pattern I as in (3). Pattern II: C 1 aC 2 C 2 u<u> Hypocoristics Pattern II of hypocoristics is mainly used for names related to glide-final roots, both [w] and [y]. This is exactly the same group of names that fail to form Pattern I hypocoristics because of the structure of the roots with which they are associated (cf. the examples in (5)). Examples of this category along with their Pattern II hypocoristics are given in (6). Analysis This section presents an Optimality-Theoretic analysis of Pattern I and Pattern II hypocoristics. The analysis will focus on the following points. First, the similarities between the two patterns as well as the differences will be stated in OT terms. Second, it explains the role of the OCP and syllable structure in accounting for Pattern II hypocoristics and in explaining the exceptions to Pattern I. Third, the analysis will demonstrate that what has been referred to as a separate pattern, Pattern II C 1 aC 2 C 2 u ([1,2,3] and [5]), is in fact a variant of Pattern I. The structure of C 1 aC 2 C 2 u (Pattern II) is minimally different from C 1 aC 2 C 2 uuC 3 (Pattern I). In OT terms, this difference will be the result of only two of the constraints in the grammar of Arabic hypocoristics occupying different places in the constraint hierarchy. Pattern II provides the only possible way of forming hypocoristics for names related to glide-final roots. Analysis of Pattern I C 1 aC 2 C 2 uuC 3 Hypocoristics Hypocoristic formation in MA is a process that references the consonantal root. It involves considerable abstraction from the structure of the actual name. This is clear from hypocoristics of names in (1), (2), and (4) where only the root consonants are abstracted from the actual name and mapped into the hypocoristic pattern. No other structural property of the name survives in the hypocoristic. The full name provides the base for HF. Native speakers have the ability to factor out the root consonants and leave behind non-root material, such as the vowels and affixes. In addition to the root consonants, the vowels of the pattern [aa] and [uu] form the other part of the input. The first property of this pattern is that all hypocoristics of this type surface with two syllables and each syllable is bimoraic. The specific realization of the two heavy syllables in the output of hypocoristics, whether CVC or CVV, will be decided by the interaction of two of the markedness constraints. Second, the hypocoristic must begin with the first root consonant and must end with the final consonant of the root (not the actual name). Third, the second consonant of the root is always geminated. The final important property of Pattern I hypocoristics is that only root consonants appear in the hypocoristic; affixal consonants are left out.
These properties will be accounted for using three types of constraints, Correspondence Constraints [7] and [8], Alignment Constraints [9], and Markedness Constraints [10] and [11]. The first two types of constraints are undominated in the grammar of Pattern I hypocoristics in MA. The right edge (last consonant) of the root must be aligned with the right edge of the hypocoristic. The faithfulness constraint in (8) requires that all root consonants are realized in the hypocoristic, while (9) ensures that only root consonants appear in the hypocoristic. The constraint in (10) requires that the four moras of the input be realized in the hypocoristic [7]. The long vowel of the first syllable in the input surfaces as short, however, the syllable maintains its weight and surfaces as a heavy CVC syllable. Since MA allows long vowels both in basic and derived structures, the occurrence of the CVC syllable in this case must be the effect of a restriction other than the one that bans long vowels in adjacent syllables [12]. It is, in fact, the ranking *VV >> *GEM that insures a CVC syllable instead of CVV in this position. Markedness Constraints The other two alignment constraints in (12) and (13) require that a hypocoristic form of this type starts with the first consonant of the root and ends with the last consonant, respectively. The faithfulness constraints in (8), (9), (10), and (11) as well as the alignment constraints in (12) and (13), are all undominated constraints in the phonology of this pattern of hypocorisctics. The markedness constraints in (14) and (15) are also undominated in the language. They require syllable well-formedness in the output. Candidates with complex onsets or codas, and syllables that lack onsets are all ruled out in the language. The following is a short OT account of the basic features of Pattern I. For example, ħassuun is the nickname for several personal names such as ħasan, ħuseen, muħsin, Ɂiħsaan, all of which are associated with the root √ħsn. The input to all hypocoristics of Pattern I includes the root consonants and the two bimoraic syllables of the pattern. The ranking in (18) establishes the role of the faithfulness and alignment constraints (8)(9)(10)(11)(12)(13) in deciding the template shape of this hypocoristic form. Table 1 establishes the role of the faithfulness constraints in the grammar of this pattern. The actual hypocoristic is candidate (g). It obeys all constraints except the markedness constraint *GEM, while each of the other candidates violates one or two of the faithfulness constraints. The full name muħsin has a non-root consonant, the prefix m-. Both candidates (a) and (b) are excluded for including this consonant. In addition, candidate (a) does that at the expense of deleting a root consonant [ħ]. Another crucial aspect of the grammar of this pattern is the inalterability of the vowels in the pattern: [a] in the first syllable and [uu] in the second. The identity constraint IDENT-IO(V) warrants faithful mapping of the vowels of the input in both syllables. Both candidates (e) and (f) are not optimal: candidate (f) changes the identity of one vowel of the input while (e) incurs two violations of IDENT-IO(V) by reversing the order of the vowels of the input. This allows candidate (g) to surface as the actual hypocoristic by obeying all constraints including IDENT-IO(V). The alignment constraints in (12) and (13) are obeyed by Pattern I hypocoristics. 
This explains why candidates (b) and (d) are not optimal despite their satisfaction of the rest of the constraints. The ranking of the markedness constraints *VV and *GEM in (19) and their interaction in Table 2 account for the realization of the first syllable of the pattern as a CVC. To summarize, this section has established two salient characteristics of the C 1 aC 2 C 2 uuC 3 Pattern used for triconsonantal roots. First, only the root consonants are referenced in the hypocoristic. Second, the specific template of this pattern results from the satisfaction of all of the constraints relevant to the pattern. Apart from *VV >> *GEM, the rest of the constraints are unranked with respect to each other. Thus, the optimal form must satisfy all of them. The Role of the OCP: Glide-medial Roots This section shows that the failure of the medial glide [w] to appear in Pattern I hypocoristics is an effect of the OCP. Examples (3) and (4) in Section 2.2.1 illustrate this dual behavior. The failure of [w] to appear in hypocoristics for glide-medial names will be a consequence of a well-motivated constraint in the language, the OCP in (21): (21) Obligatory Contour Principle (OCP) ([13], [14,15], [16]) Adjacent identical elements are prohibited. The role of the OCP in Pattern I hypocoristics is that it specifies what constitutes possible onsets for the second vowel of the hypocoristic pattern. Simply stated, *wuu in glide-medial hypocoristics is not a well-formed sequence in the syllable structure of MA. In OT terms, these facts establish that the markedness constraints, together with constraints on what constitutes permissible codas in MA, dominate the faithfulness constraints. In addition to the OCP, all the undominated constraints introduced for Pattern I apply here. A hypocoristic has to obey all of the undominated constraints (8)-(17). However, when there is a clash with the OCP, the form that does not violate the markedness constraint (OCP) is the one chosen. The ranking established in (22) and the interaction in Table 3 explain why names related to w-medial roots fail to form Pattern I hypocoristics. The markedness constraint outranks both the faithfulness and the alignment constraints. Candidate (a) in Table 3 is faithful to the underlying glide [w]; however, it loses since it violates the high-ranking OCP that prohibits adjacent identical segments. Candidate (b) satisfies the OCP at the expense of violating the identity constraint of the template vowel [uu]. Candidate (c) satisfies the high-ranking OCP by changing the underlying glide to [y] but does not win since fayyuuz is not the acknowledged hypocoristic for fawziyyah in MA. The constraint ranking in (22) explains the failure of the underlying glide to surface in the hypocoristic and thus the total failure of names related to glide-medial roots where the glide is [w] to have Pattern I hypocoristics. When the medial glide is [y], by contrast, it serves as the onset for the vowel [uu] of the hypocoristic pattern, but since it is not homorganic with the vowel of the pattern, the structure is well formed and these roots are treated as if they were regular triconsonantal roots. The same ranking in (18) holds here. This is shown in Table 4. It is evident from Table 4 that given the input and the undominated constraints ((8)-(17)), the actual form is the only candidate that satisfies all of the constraints. The other candidates are excluded because of different fatal violations that have been discussed for sound triconsonantal roots. The OCP has no role in deciding hypocoristics of y-medial roots, just as in sound triconsonantal roots.
No new ranking is established in this case. The upshot of the discussion is that glide-medial roots behave just like sound triliteral roots, except when the medial glide is [w], in which case the name fails to form Pattern I hypocoristics because of violation of the OCP. The Role of Syllable Structure: Pattern II Hypocoristics The analysis provided here is along the same lines as that in Abu-Mansour [3], which offers the first formal analysis of this pattern. However, Abu-Mansour [3] treats Pattern I and Pattern II as two unrelated patterns, and fails to uncover the underlying similarities in their structure. The analysis also misses the fact that there is a division of labor between the two patterns. Generally, Pattern II is followed mainly by those names that fail to follow Pattern I, i.e. glide-final roots whether the glide is [y] or [w] (cf. Section 2.3). These names represent the majority of names that follow this pattern. The same input proposed for Pattern I hypocoristics is assumed for Pattern II. The actual name provides the base from which native speakers factor out the consonantal root. In addition to the root consonants, the vowels of the pattern [aa] and [uu] form the other part of the input, exactly the same as the input to Pattern I. Two characteristics differentiate this pattern from Pattern I, and need to be accounted for in terms of constraints. First, this pattern ends in an open syllable, and second, this syllable is always light. The first of these differences will be the result of one of the faithfulness constraints ranking low in the grammar of this pattern. As for the short vowel in the final syllable of the pattern, it follows from an independent characteristic of Arabic including MA, where vowels in final position are realized short unless they represent suffixes [17]. Therefore, no special ranking is required to account for the short vowel in the second syllable of Pattern II hypocoristics, and it will be left out of all tables. In addition, we adopt a constraint proposed by Rosenthall [18] to account for the distribution of vowels and glides in Standard Arabic. This is given in (24): (24) *ADJHIVOC ([18]: 411) No two adjacent high vocoids in the same syllable. The constraint in (24) is specific to Arabic syllable structure; it prohibits two adjacent high vocoids in the same syllable, as discussed in Rosenthall [18]. Rosenthall notes that sequences of high vocoids such as [iu] are prohibited in the same syllable: 'These sequences are marked because adjacent syllable positions have a sonority plateau' ([18]: 411). Rosenthall extends this constraint to include not only tautosyllabic sequences but also any sequence of high vocoids. The constraint in (24) will figure prominently in the account of hypocoristics related to glide-final roots. However, our use of (24) in accounting for the hypocoristic data will depart slightly from Rosenthall's use of the constraint. In MA, the restriction on adjacent high vocoids is restricted to tautosyllabic sequences that are also tautomorphemic. Thus, the constraint in (24) will not rule out sequences like [nis.yu] < /nisy-u/ 'they forgot' and [ram.yi] < /ramy-i/ 'my throwing' where [u] and [i] are tautosyllabic with the glide, but represent independent morphemes, a subject and a possessive pronoun, respectively. We express the modified constraint in (25): (25) *ADJHIVOC (Tautomorphemic) (Based on [18]: 411) No two adjacent tautomorphemic high vocoids in the same syllable. We start with the roots that end in [y], (5c and d).
The input to Pattern II forms includes the consonantal root and the two heavy syllables. All the constraints that constitute part of the grammar of Pattern I hypocoristics will be shown to be active in the derivation of Pattern II. First, the ranking in (26) and the interaction in Table 5 establish the fact that Pattern II hypocoristics end in an open syllable. In Table 5, candidate (a) violates the restriction on two high vocoids by being faithful to the input glide [y]; however, it obeys the constraint at the right edge of the hypocoristic by parsing [y]. Candidate (b) avoids this violation by deleting the glide. The long vowel in final position will not surface in accordance with the general restriction on final long vowels in the language. Candidate (b) emerges as optimal despite deletion of the underlying glide and misaligning the last consonant [y] of the root with the right edge of the hypocoristic. Pattern II is then the result of the high ranking of the markedness constraint in (25) demoting two of the faithfulness constraints, namely, MAX-Rt Hypo (C) and Align (Rt, R, Hypo, R), to a rank that is lower than the one they occupy in the grammar of Pattern I. This violation is an inherent property of Pattern II. It minimally distinguishes it from Pattern I. (26) *ADJHIVOC (tautomorphemic) >> MAX-Rt Hypo (C), Align (Rt, R, Hypo, R). As mentioned above, in MA, the restriction on adjacent high vocoids is restricted to tautosyllabic sequences that are also tautomorphemic. This additional restriction is borne out by the two candidates considered below, namely, *zak.kuy and *zak.yu<u>. Both fail to emerge as optimal because of their morphological structure. The same ranking in (26) holds here. In Table 6 below, faithful parsing of the root glide [y] as a coda or an onset when combined with the vowel of the hypocoristic pattern [uu] creates a sequence of two high vocoids that are both tautosyllabic and tautomorphemic. Thus, both candidates (a) and (b) lose, allowing (c) to emerge as the winning candidate despite its violation of the faithfulness constraint MAX-Rt Hypo (C). This further confirms the role of *ADJHIVOC (tautomorphemic) as a syllable structure constraint in the grammar of Pattern II hypocoristics. The same ranking obtained in (26) accounts for glide-final roots where the glide is [w]. This is illustrated in Table 7 below.
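The constraint-ranking logic behind these tableaux is algorithmic: candidates are filtered constraint by constraint, in ranked order, and the candidate whose worst violation is on the lowest-ranked constraint wins. The Python sketch below is only a toy illustration of that evaluation for the glide-final case just discussed; the abbreviated constraint names and the violation counts are mine, loosely following the discussion of Tables 5 and 6, and do not reproduce the paper's full constraint set.

```python
# Toy OT evaluation (not from the paper): ranked-constraint filtering selects the
# Pattern II form zakku over candidates that keep the root-final glide.
RANKING = ["*ADJHIVOC(tautomorph)", "MAX-Rt(C)", "ALIGN-Rt-R"]

CANDIDATES = {  # illustrative violation counts per constraint
    "zak.kuuy": {"*ADJHIVOC(tautomorph)": 1, "MAX-Rt(C)": 0, "ALIGN-Rt-R": 0},
    "zak.yu":   {"*ADJHIVOC(tautomorph)": 1, "MAX-Rt(C)": 0, "ALIGN-Rt-R": 1},
    "zak.ku":   {"*ADJHIVOC(tautomorph)": 0, "MAX-Rt(C)": 1, "ALIGN-Rt-R": 1},
}

def optimal(candidates, ranking):
    """Standard OT evaluation: for each constraint in ranked order, keep only the
    candidates with the fewest violations of that constraint."""
    survivors = dict(candidates)
    for c in ranking:
        best = min(v[c] for v in survivors.values())
        survivors = {k: v for k, v in survivors.items() if v[c] == best}
    return list(survivors)

print(optimal(CANDIDATES, RANKING))   # ['zak.ku'] -- the attested hypocoristic
```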
In Table 8 below, the actual hypocoristic is candidate (e); it obeys all undominated constraints; it only violates *GEM which ranks low in the grammar of Pattern I hypocoristics. Note that candidate (d), violates the alignment constraint at the right edge of the hypocoristic and is thus excluded as optimal for Pattern I hypocoristic; however, it has the exact C 1 aC 2 C 2 u template of Pattern II. This is exactly the candidate that will emerge as optimal in Pattern II hypocoristics. (27) ONSET, MAX-IO (µ), Align (Rt, R, Hypo, R), *VV >> *GEM Table 9 is an illustration of the constraint ranking in (28) that accounts for Pattern II hypocoristics of names related to biconsonantal geminate roots. The low ranking of the alignment constraint in the grammar of Pattern II is borne out by the optimal candidate (d); it minimally violates the low ranking *GEM and the constraint that aligns the last root consonant with the right edge of the hypocoristic. However, the similarities between the two patterns outnumber the differences. Just like Pattern I, Pattern II realizes the first syllable in the hypocoristic as a CVC through the ranking *VV >> *GEM. Satisfaction of the faithfulness constraints Max-Rt Hypo C, Dep-Rt Hypo C, and MAX-IO (µ) and the markedness constraints ONSET and *VV is among the shared properties of the two patterns. Conclusions The paper offers an explanation for the failure of names related to glide-medial and glide-final roots to form Pattern I hypocoristics without recourse to the idea of an output root. This failure is attributed to the effects of two constraints, the OCP and the constraint against having two adjacent high vocoids in the same syllable. Both constraints are well motivated and form a basic part in the phonology of Arabic in general. The present analysis has the advantage of relating Pattern I and Pattern II of hypocoristics showing their minimal differences in terms of constraint ranking. Both patterns have the same constraints ranking with the exception of the alignment constraint at the right edge of Pattern II hypocoristics and the lack of parsing the root final glide because of syllable structure constraints. The analysis provides evidence for the importance of the template in MA morphology. Satisfaction of the template was crucial in the grammar of both patterns. However, the template is not stipulated, rather it results from the interaction of the various families of constraints.
2019-04-23T13:23:07.867Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "df76de2b20be3459f41366de32a8a596cde95008", "oa_license": "CCBY", "oa_url": "https://www.hrpub.org/download/20190130/LLS4-19312532.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "594c0e243a6fbb1c3f6d4c436d3f4935bf17b897", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Mathematics" ] }
107897254
pes2o/s2orc
v3-fos-license
Effect of Interaction Between Ag Nanoparticles and Salinity on Germination Stages of Lathyrus sativus L. The aim of the study was to evaluate the effect of the interaction between Ag nanoparticles and salinity on germination stages of Lathyrus sativus L. Treatments consisted of three levels of salinity (0 as control, 8 and 16 dS/m NaCl) and four levels of silver nanoparticles (0, 5, 10 and 15 ppm), tested on grass pea seed. An experiment was conducted to evaluate the effects of silver nanoparticles (AgNPs) on the seed germination factors, root and shoot length (RL and SL), and proline content of grass pea under different salinity levels. Results showed a significant reduction in growth and development indices due to the salinity stress. The salt stress impaired the germination factors of grass pea seedlings. The application of Ag improved the germination percentage, shoot and root length, seedling fresh weight, and seedling dry weight of grass pea seedlings under stressed conditions. The results suggest that Ag nanoparticle enhancement may be important for osmotic adjustment in grass pea under salinity stress, and that application of Ag mitigated the adverse and toxic effects of salinity stress on grass pea seedlings. Introduction High salinity is a common abiotic stress factor that causes a significant reduction in growth. Germination and seedling growth are reduced in saline soils with varying responses for species and cultivars [1]. Soil salinity may impact the germination of seeds either by creating an osmotic potential external to the seed that prevents water uptake, or through the toxic effects of Na+ and Cl− ions on the germinating seed [2]. Salt and osmotic stresses are responsible for inhibited or delayed seed germination and seedling establishment [3]. The majority of our present-day crops are adversely affected by salinity stress [4]. NaCl causes extensive oxidative damage in different legumes, resulting in significant reduction of different growth parameters, seed nutritional quality, and nodulation [5,6]. To mitigate and repair damage triggered by oxidative stress, plants have evolved both enzymatic and non-enzymatic antioxidant defense mechanisms. Ascorbate and carotenoids are two important non-enzymatic defenses against salinity, whereas proline is the most debated osmoregulatory substance under stress [7]. Lathyrus sativus L. (grass pea) is an annual pulse crop belonging to the Fabaceae family and Vicieae tribe [8]. Grass pea has a long history in agriculture. The crop is an excellent fodder with its reliable yield and high protein content. This plant is also commonly grown for animal feed and as forage. The grass pea is endowed with many properties that combine to make it an attractive food crop in drought-stricken, rain-fed areas where soil quality is poor and extreme environmental conditions prevail [9]. Despite its tolerance to drought, it is not affected by excessive rainfall and can be grown on land subject to flooding [10,11]. Compared to other legumes, it is also resistant to many insect pests [12][13][14][15]. Nanoparticles (NPs) are a wide class of materials that includes particulate substances with at least one dimension less than 100 nm [16]. The importance of these materials was realized when researchers found that size can influence the physicochemical properties of a substance, e.g. its optical properties [17].
NPs of different composition, size, concentration, and physical/chemical properties have been reported to influence the growth and development of various plant species, with both positive and negative effects [18]. Silver nanoparticles have been applied in agriculture for improving crops. There are many reports indicating that appropriate concentrations of AgNPs play an important role in plant growth [19,20]. The application of nano-silver during the germination process may enhance germination traits, plant growth and resistance to salinity conditions in basil seedlings [21]. The use of silver nanoparticles on fenugreek seed germination under salinity levels has also been studied recently [22]. Nanomaterials have also been used for various fundamental and practical applications [23]. Although the potential of AgNPs in improving salinity resistance has been reported in several plant species [24,25], their role in the alleviation of salinity effects and the related mechanisms are still unknown. Therefore, the main objective of this work was to study the effect of silver nanoparticles on salt tolerance in Lathyrus sativus L. Material and Methods Seeds were washed with deionized water, sterilized in a 5% sodium hypochlorite solution for 10 minutes [26], and rinsed with deionized water several times. Germination was conducted on a water-moistened porous paper support in Petri dishes (25 seeds per dish) at a controlled temperature of 25 ± 1°C. After labeling the Petri dishes, seeds were placed between two Whatman No. 2 papers in the Petri dishes. Silver nanoparticle suspensions at different concentrations (0, 5, 10 and 15 ppm) were prepared directly in deionized water and dispersed by ultrasonic vibration for one hour. Each concentration was prepared in three replicates. Every other day, 0.5 ml of silver nanoparticle suspension was supplied per test plantlet for 21 days, along with the control. Germination counts were recorded at 2-day intervals for 21 days after sowing and the seedlings were allowed to grow. The germination percentages of the seeds were finally determined for each of the treatments. After 21 days of growth, the shoot and root lengths were long enough to measure using a ruler. The control sets for germination were also carried out at the same time as the treated seeds (Figure 2). A. Germination Stages Total germination percentage (GT) was calculated as GT = (n/N × 100), where n = total number of germinated seeds (normal and abnormal) at the end of the experiment and N = total number of seeds used for the germination test. B. Germination Speed Index (GSI) This was conducted concomitantly with the germination test, with a daily count of the number of seeds that presented protrusion of the primary root with a length ≥2 millimeters, recorded continuously at the same time each day. Seedling Vigour Index The seedling vigor index was determined by using the formula given by Abdul-Baki and Anderson [28]. Fresh and Dry Mass The fresh mass was quantified by weighing on a precision scale, and the dry mass was determined by weighing on a precision scale after keeping the material in an oven with forced air circulation at a temperature of 70°C until constant weight. At the end of the experiment, radicle and plumule length and fresh weight were measured. Plants were placed in the oven at 70°C for 48 h and weighed with a sensitive scale.
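Only the total germination percentage formula is given explicitly above; the germination speed index and seedling vigor index are cited rather than spelled out. The Python sketch below computes all three, but the GSI and vigor-index expressions (Maguire's index, and Abdul-Baki & Anderson's germination % × seedling length) are common conventions assumed here, not taken from this paper.

```python
# Sketch of the germination metrics described above. Only GT is explicit in the text;
# the GSI and vigour-index formulas are assumptions (standard conventions).

def germination_percentage(n_germinated: int, n_total: int = 25) -> float:
    """GT = (n / N) * 100, with N = 25 seeds per Petri dish."""
    return n_germinated / n_total * 100

def germination_speed_index(daily_new_germinations, days):
    """Assumed GSI (Maguire): sum of newly germinated seeds on day i divided by day i."""
    return sum(n / d for n, d in zip(daily_new_germinations, days))

def seedling_vigour_index(gt_percent: float, shoot_cm: float, root_cm: float) -> float:
    """Assumed vigour index: germination % x total seedling length (cm)."""
    return gt_percent * (shoot_cm + root_cm)

# Example: 18 of 25 seeds germinated, counted every 2 days
gt = germination_percentage(18)                              # 72.0 %
gsi = germination_speed_index([5, 7, 4, 2], [3, 5, 7, 9])
vigour = seedling_vigour_index(gt, shoot_cm=6.2, root_cm=4.8)
```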
Proline Contents Proline was determined spectrophotometrically following the ninhydrin method, using L-proline as a standard [29]. Approximately 300 mg of dry tissue was homogenized in 10 ml of 3% (w/v) aqueous sulphosalicylic acid and filtered. To 2 ml of the filtrate, 2 ml of acid ninhydrin was added, followed by the addition of 2 ml of glacial acetic acid and boiling for 60 min. The mixture was extracted with toluene, and the free proline was quantified spectrophotometrically at 520 nm from the organic phase. Statistical analysis Each treatment was conducted in three replicates, and the results were presented as mean ± SD (standard deviation). The results were analyzed by one-way ANOVA using Minitab Version 16. Results and Discussion The present study showed clearly that salinity had a negative effect on the yield and yield components of grass pea. It is well known that seed germination provides a suitable foundation for plant growth, development, and yield [30]. Increased salt concentration caused a decrease in germination percentage (Table 1). Seed germination decreased as the doses increased. A strong reduction in germination (−47%) was observed mainly at the highest level of salt concentration compared with the control treatment. Delayed germination causes increased irrigation cost and irregular and weak seedling growth in the establishment of legume crops. Relevant results were reported by Gunjaca and Sarcevic [31] and Almansouri et al. [32]. They reported that increasing osmotic potential decreased water uptake and slowed down germination (Tables 1 & 2). The results showed that the impact of Ag NPs on germination percentage was significant at P ≤ 0. The results of this study showed that Ag can be involved in the metabolic or physiological activity in higher plants exposed to abiotic stresses.
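The proline assay above reads absorbance at 520 nm against an L-proline standard; quantification therefore rests on a standard curve and a back-calculation to the amount of tissue extracted. The Python sketch below shows one plausible version of that calculation using the volumes stated above (10 ml extract from ~0.3 g dry tissue); the standard concentrations, absorbances, and dilution handling are assumptions, and the exact factors used in [29] may differ.

```python
# Rough sketch (not the authors' calculation) of proline quantification from the
# 520 nm readings: fit a linear standard curve, then convert a sample absorbance to
# umol proline per g dry tissue. All calibration numbers below are illustrative.
import numpy as np

std_conc_ug_ml = np.array([0, 5, 10, 20, 40])        # L-proline standards (ug/ml), assumed
std_abs_520 = np.array([0.00, 0.11, 0.22, 0.45, 0.89])

slope, intercept = np.polyfit(std_abs_520, std_conc_ug_ml, 1)   # inverse calibration fit

def proline_umol_per_g(abs_520, extract_ml=10.0, tissue_g=0.3):
    ug_per_ml = slope * abs_520 + intercept   # proline concentration read off the curve
    ug_total = ug_per_ml * extract_ml         # scale to the whole 10 ml extract
    return ug_total / 115.13 / tissue_g       # 115.13 g/mol = molar mass of proline

print(round(proline_umol_per_g(0.35), 2))
```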
2019-04-11T13:07:41.497Z
2019-02-04T00:00:00.000
{ "year": 2019, "sha1": "8b54add01883208c2e7eeb2bcab3cb8ad516d17c", "oa_license": "CCBY", "oa_url": "https://lupinepublishers.com/environmental-soil-science-journal/pdf/OAJESS.MS.ID.000132.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1eab70d37b47a70be8fadbf938bcf2f8f611b8e8", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
251622886
pes2o/s2orc
v3-fos-license
Cross-calibration of two dual-energy X-ray absorptiometry devices for the measurement of body composition in young children This study aimed to cross-calibrate body composition measures from the GE Lunar Prodigy and GE Lunar iDXA in a cohort of young children. 28 children (mean age 3.4 years) were measured on the iDXA followed by the Prodigy. Prodigy scans were subsequently reanalysed using enCORE v17 enhanced analysis ("Prodigy enhanced"). Body composition parameters were compared across three evaluation methods (Prodigy, Prodigy enhanced, iDXA), and adjustment equations were developed. There were differences among the three evaluation methods for all body composition parameters. Body fat percentage (%BF) from the iDXA was approximately 1.5-fold greater than the Prodigy, whereas bone mineral density (BMD) was approximately 20% lower. Reanalysis of Prodigy scans with enhanced software attenuated these differences (%BF: − 5.2% [95% CI − 3.5, − 6.8]; and BMD: 1.0% [95% CI 0.0, 1.9]), although significant differences remained for all parameters except total body less head (TBLH) total mass and TBLH BMD, and some regional estimates. There were large differences between the Prodigy and iDXA, with these differences related both to scan resolution and software. Reanalysis of Prodigy scans with enhanced analysis resulted in body composition values much closer to those obtained on the iDXA, although differences remained. As manufacturers update models and software, researchers and clinicians need to be aware of the impact this may have on the longitudinal assessment of body composition, as results may not be comparable across devices and software versions. This is particularly relevant in longitudinal studies, in which devices may be upgraded over the course of the study, or in multi-centre studies where different DXA devices may be available at each site. GE Lunar is one of two manufacturers of DXA devices, with the iDXA being their most advanced model (introduced in 2005). The iDXA has improved image resolution due to an X-ray source with a higher voltage (100 kV), greater pixel density, and a greater number of detectors 8 . The enhanced algorithms from the iDXA have been modified to enable old scans from the GE Lunar Prodigy to be reanalysed with GE Lunar's "enhanced" analysis option, introduced with the version 14 release of their enCORE analysis software in 2012. While differences between the iDXA and Prodigy have previously been reported in adults [9][10][11][12][13] , this has not been evaluated in young children. Furthermore, it is unclear how much of the difference observed between old and new scans is related to the software versus differences in scan resolution between the models. Therefore, among a cohort of young children, we aimed first to determine if body composition values are the same when obtained with a GE Lunar Prodigy and with a GE Lunar iDXA; and second, to determine if body composition values from a Prodigy, reanalysed with the enhanced analysis software, are comparable to those obtained using an iDXA. Subjects and methods A sample of children aged 3.4 years (n = 29) was selected from the Auckland site of the Nutritional Intervention Preconception and During Pregnancy to Maintain Healthy Glucose Metabolism and Offspring Health (NiPPeR) study 14 . Children were selected based on good compliance with the DXA protocol (i.e., producing a DXA scan without movement artefact).
The NiPPeR trial was registered on 16 July 2015 with ClinicalTrials.gov (NCT02509988, Universal Trial Number U1111-1171-8056); ethics approval was granted by the Northern A Health and Disability Ethics Committee (15/NTA/21/AM20). Written informed consent was obtained from the parents/guardians of the study subjects. All procedures in this study were conducted according to the ethical principles and guidelines laid down in the Declaration of Helsinki. Children were scanned on a GE Lunar iDXA (enCORE v17, paediatric mode) immediately followed by a scan on a GE Lunar Prodigy (enCORE v17, paediatric mode). It has previously been reported that the effective radiation dose of the iDXA scanner for an infant phantom was 8.9 μSv 15 , and in adults, 4.7 μSv 16 . In comparison, the global average for daily natural background radiation exposure is 6.6 μSv 17 . Therefore, the risk associated with repeat DXA scanning is low. Before measurement with the DXA machines, standing height was measured three times to the nearest 0.1 cm using a calibrated SECA 213 portable stadiometer (SECA, Hamburg, Germany), and weight was measured once to the nearest 100 g using calibrated SECA 899 scales. Median height, weight, and date of birth were input into the DXA machines prior to measurement. Both DXA machines were calibrated daily with a manufacturer-specific calibration block phantom and with a spine phantom at regular intervals. Children were measured while lightly clothed, in clothing without metal, lying supine on the measurement bed within the scan limit borders. Feet were rotated inwards slightly, and a Velcro strap was used to hold feet in place. If necessary, the child was swaddled lightly with a thin blanket, ensuring arms and legs remained separated. Each scan was graded according to the degree of movement, with significant movement artefact being excluded from the main analyses (n = 1). Images with minor movement were flagged and sensitivity analyses were run excluding these participants (n = 11). The results of the sensitivity analyses were little changed, so results are reported for the main analyses only. Three sets of body composition values were obtained: iDXA scan analysed with enCORE v17 and Prodigy scan analysed with enCORE v17 basic and with enCORE v17 enhanced analysis. Total body less head (TBLH) 18 and regional estimates of body composition are reported for FM, LM, BMC, and bone area, as well as %BF (FM ÷ total mass × 100) and BMD (BMC ÷ bone area). Statistical analyses. Subject characteristics and body composition values are reported as means ± SD for continuous variables and n (%) for categorical variables. Differences in body composition values between the three evaluation methods (iDXA, Prodigy basic, and Prodigy enhanced) were assessed using within-subjects ANOVA with Bonferroni post-hoc testing. Differences between the Prodigy and iDXA scans are reported as percentage differences and 95% confidence intervals. To assess differences between the devices and software versions across a range of body sizes, Bland-Altman analyses were conducted to compare the Prodigy (basic and enhanced) to the iDXA (reference), with results reported as biases (i.e., mean differences) and 95% limits of agreement (LOA). Finally, equations were developed using linear regression to allow measurements made on the Prodigy to be adjusted to be comparable to those made on the iDXA. Prediction equations were developed using leave-one-out cross-validation for FM, LM, BMC, and bone area. 
Adjusted body composition values were then compared to the reference (iDXA) using paired-samples t-tests and Bland-Altman analyses. All tests were two-tailed and were performed within R (R Foundation for Statistical Computing, Vienna, Austria), with p values less than 0.05 being considered statistically significant. Results Population characteristics. Twenty-nine children were measured on the two DXA devices. Following exclusion of scans with movement artefact (n = 1), the sample comprised 28 children, described in Table 1. The excluded child was similar in height, weight, BMI, and age (all p > 0.05). Comparison of the Prodigy and iDXA. The mean body composition values for each measurement condition are summarised in Table 2 and Supplementary Table S1. Within-subjects ANOVA indicated differences between the three scan conditions (p < 0.001 for all body composition values). Post-hoc testing revealed differences between the iDXA and Prodigy basic for all body composition parameters. Following reanalysis of Prodigy scans using enhanced analysis, there remained differences between the iDXA and the Prodigy, except for TBLH BMD (− 0.004 g/cm 2 [95% CI − 0.009, 0.001], p = 0.131), and some regional estimates (Supplementary Table S1). When expressed as percentage differences, Prodigy basic TBLH values were up to 37% different from those obtained on the iDXA (Table 2). Differences were largest for fat mass (kg and %) and bone parameters (BMC, bone area, and BMD), as well as for regional estimates, which were up to 65% different (Table 2 and Supplementary Table S1). When Prodigy scans were reanalysed using enhanced analysis, the percentage differences reduced to < 6.5% for TBLH values and < 15.5% for regional estimates (Table 2 and Supplementary Table S1). The Bland-Altman analyses are reported in Table 3, Supplementary Table S1, and Supplementary Fig. S1. Compared to the iDXA, Prodigy basic LM was higher by ~ 800 g and FM lower by ~ 1.3 kg, resulting in a difference in total mass of − 550 g and a difference in %BF of − 9.7%. Both bone area and BMC were reduced (− 255 cm 2 [95% LOA − 329, − 181] and − 59 g [95% LOA − 86, − 32], respectively), although bone area to a greater extent, resulting in greater estimates of BMD (+ 0.11 g/cm 2 [95% LOA 0.09, 0.13]). A systematic bias for %BF was observed, with differences being greater among those with low %BF. When the Prodigy scans were reanalysed using enhanced analysis, the bias for TBLH LM reduced to less than 250 g, while the bias for FM reduced to almost a tenth of the original value (+ 167 g [95% LOA − 133, 466]). Meanwhile, the bias for BMD was reduced to 0 g/cm 2 . Regional analyses paralleled the TBLH results, with the Prodigy basic having higher LM but lower FM and bone area. Reanalysis of Prodigy files with enhanced analysis attenuated these differences. Adjustment equations. Prediction equations were developed to enable adjustment of Prodigy (basic) measurements (Table 4 and Fig. 1). Prediction equations for enhanced measurements are contained within the supplementary file (Supplementary Table 2 and Supplementary Fig. S2). When the equations were validated, the adjusted values aligned more closely with iDXA estimates than the reanalysed Prodigy scans (i.e., enhanced Prodigy) did. For example, in comparison to iDXA estimates, Prodigy enhanced LM was 2.7% lower, with a bias of ~ 250 g, whereas adjusted Prodigy LM was almost identical (0.0% [95% CI − 1.8, 1.7]), with a bias of less than 10 g (Table 5).
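The workflow above was run in R; purely as an illustration, and not the authors' code, the following Python sketch reproduces the main steps on hypothetical values: deriving %BF and BMD from their definitions, computing a Bland-Altman bias with 95% limits of agreement, and fitting a leave-one-out cross-validated linear adjustment of Prodigy values to the iDXA reference. All numbers and variable names are invented for the example.

```python
import numpy as np

def derived_parameters(fm_g, lm_g, bmc_g, bone_area_cm2):
    """%BF = FM / total mass * 100 and BMD = BMC / bone area, as defined in the text."""
    total_mass = fm_g + lm_g + bmc_g
    return fm_g / total_mass * 100.0, bmc_g / bone_area_cm2

def bland_altman(prodigy, idxa):
    """Bias (mean Prodigy - iDXA difference) and 95% limits of agreement."""
    diff = np.asarray(prodigy, float) - np.asarray(idxa, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def loocv_adjustment(prodigy, idxa):
    """Leave-one-out cross-validated simple linear regression predicting
    iDXA values from Prodigy values; returns the cross-validated
    predictions and the final adjustment equation coefficients."""
    x, y = np.asarray(prodigy, float), np.asarray(idxa, float)
    n = len(x)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i              # drop one child per fold
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        preds[i] = intercept + slope * x[i]
    slope, intercept = np.polyfit(x, y, 1)    # final equation on all data
    return preds, intercept, slope

# Hypothetical fat-mass values (kg) for a few children, for illustration only.
prodigy_fm = [2.8, 3.1, 2.5, 3.6, 2.9, 3.3]
idxa_fm    = [4.0, 4.3, 3.7, 4.9, 4.1, 4.6]

bias, loa = bland_altman(prodigy_fm, idxa_fm)
cv_pred, b0, b1 = loocv_adjustment(prodigy_fm, idxa_fm)
print(f"bias = {bias:.2f} kg, 95% LOA = ({loa[0]:.2f}, {loa[1]:.2f})")
print(f"adjustment: iDXA_FM ~ {b0:.2f} + {b1:.2f} x Prodigy_FM")
```

In practice one such intercept and slope would be reported per parameter (FM, LM, BMC, and bone area), which is the form the study's prediction equations take.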
Discussion Previous studies have identified differences between DXA models and software versions; however, few have evaluated differences in young children. The International Society for Clinical Densitometry (ISCD) recommends in vitro cross-calibration when comparing devices of the same model but in vivo cross-calibration when comparing devices from different manufacturers 19 . A study comparing two models by the same manufacturer found that spine phantom cross-calibration can be inaccurate compared to in vivo calibration 20 . This is further complicated in body composition studies, as there is a lack of a suitable phantom for cross-calibration of fat and lean masses. Therefore, in our study, we cross-calibrated two GE Lunar DXA systems (Prodigy and iDXA) in vivo among 28 young children and found significant differences between the two devices, even after Prodigy scans were reanalysed with enhanced analysis. To our knowledge, no previous study has cross-calibrated the Prodigy and iDXA in a cohort of young children (< 5 years). DXA cross-calibration studies in young children are limited; however, a previous study (3-19 years, n = 126) found that FM from the iDXA (v16) was approximately 15% higher in girls and 31% higher in boys in comparison to the GE Lunar DPX-Pro (v9.3). LM was also reduced when measured with the iDXA compared to the DPX-Pro; however, this was only significant in boys 21 . Other studies have compared single Hologic scans reanalysed with updated software and found differences in FM, FFM, and %BF, but no differences in total mass 4,6 . In young children, there are clear differences between device types and software versions; however, the contribution of scan versus software has not previously been evaluated. We observed differences between the two devices in all parameters, with iDXA %BF being approximately 1.5-fold greater than Prodigy measurements, whereas BMD was ~ 20% lower (Table 2). When we reanalysed the Prodigy scans with enhanced analysis, although differences remained in all estimates except for TBLH BMD, as well as some regional estimates, the percentage differences and biases were substantially reduced (Table 2 and Table S1). In our study, differences between devices were most substantial among children with low %BF. Shypailo et al. 6 reanalysed a large number of paediatric scans (n = 1384) obtained with a Hologic QDR-4500 (v11.2) with updated software (v12.1) and observed greater differences in FM and %BF among younger, smaller subjects, and in girls; although, these results may not be relevant to GE Lunar devices given the differences in technology used in the two scanner types 22 . A pilot study in 13 women (20-46 years) found that differences between the iDXA and Prodigy were most substantial among women who were least adipose (< 20 kg FM and < 30% BF) 16 . DXA estimates body composition according to the attenuation of X-ray beams at high and low energy. A limitation of the technology is that DXA can only differentiate between two tissue types simultaneously (i.e., bone vs nonbone, fat vs lean) 1 . In an adult DXA scan, 40 to 45% of pixels will contain bone, fat, and lean tissue, whereas in children, this percentage is increased 4 . Table 3. Bland-Altman analysis comparing total body less head (TBLH) body composition parameters of young children measured by dual-energy X-ray absorptiometry (DXA) on the GE Lunar Prodigy, analysed using basic and enhanced analysis, in reference to scans obtained on the GE Lunar iDXA.
Therefore, improvements to the estimation of body composition in bone-containing tissue will have a greater impact in younger, smaller children. This may also explain why in some cross-validation studies, only regional estimates were affected 9,10 . Although comparison has not been made between the iDXA and the Prodigy in a cohort of young children, previous studies in adults have found only small differences between the Prodigy and the iDXA, which have not been consistent across body composition parameters and regions, nor in the direction of the difference 9-13 . The variations in software used may partially explain these conflicting results. The studies used Prodigy scanners with enCORE software versions ranging from 6.10 to 16, while the iDXA scanners used enCORE software versions 12.3-17 9-13 . Watson et al. 13,23 evaluated differences between the iDXA and Prodigy following reanalysis of Prodigy files with enhanced analysis in both adults (20-65 years, n = 69) and school-aged children (6-16 years, n = 124). Among their cohort of children, differences were apparent in all parameters except whole-body, leg, and trunk BMC. Similar to our findings, differences were most pronounced for total FM and LM, which were 0.71 kg (6%) higher and 1.07 kg (3.5%) lower with the Prodigy than the iDXA 23 . Although they did not compare basic and enhanced analysis in their study of children, among adults, Watson et al. 13 noted no differences in whole-body FM and LM when Prodigy scans were analysed with basic compared to enhanced analysis. However, the authors observed differences in total BMC and bone area and regional FM and LM (arm FM and leg LM). This contrasts with our study, where substantial differences were noted between Prodigy scans analysed with the two software versions for all parameters. In line with our results, Crabtree et al. 22 found differences between basic and enhanced analysis when data were pooled from DXA studies involving children aged 4-20 years.
Furthermore, air displacement plethysmography body volume measurements (as required for computation of body composition using a 4C model) are currently not optimised for use at this age 28 . Nonetheless, a previous study in adults found that the iDXA aligned more closely with a 4C model than results from the Prodigy, although there was a systematic bias, with FM being overestimated among those with greater FM 13 . This systematic bias in FM was not observed when iDXA measurements were validated against a 4C model in school-aged children, although mean FM was overestimated by 2 kg 23 . The authors also found iDXA to underestimate FFM by 1.3 kg, with this increasing as total FFM increased 23 . Correction of iDXA FFM according to individually measured TBW (i.e., correcting for FFM hydration) resulted in a reduction in limits of agreement and removal of the systematic bias. However, a mean bias of approximately 2 kg remained 23 . In addition to determining which DXA device is more accurate, we acknowledge the need to replicate the adjustment equations in an independent group of children. In summary, we have conducted the first cross-calibration study of the GE Lunar Prodigy and iDXA in a cohort of young children. There were substantial differences between the iDXA and the Prodigy, which were attenuated following reanalysis of the Prodigy scans with enhanced software. Thus, the same child scanned by the two devices will yield different results in part due to differences in scan resolution but also due to software differences. However, it is difficult to disentangle these differences and to determine which is a more accurate reflection of true body composition. This highlights a key challenge researchers and clinicians face when collecting longitudinal body composition data in children. As manufacturers upgrade devices and software over the duration of a study or clinical observation, it becomes difficult to determine the true trajectory of body composition. Therefore, researchers and clinicians need to consider the manufacturer, model, and software version when conducting DXA scans as results may not be comparable. Data availability The datasets generated and/or analysed during the current study are not publicly available, as the participants did not consent to open-access data sharing and this is an ongoing longitudinal study in which further analyses will be conducted; the datasets are, however, available from the corresponding author on reasonable request.
2022-08-18T06:17:20.487Z
2022-08-16T00:00:00.000
{ "year": 2022, "sha1": "7c2894e63532ab55319c77cb14024ebe772ab74e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-17711-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de69b91bd3f3679315ebd7f4b808a95111ef3fdf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18702475
pes2o/s2orc
v3-fos-license
Epidural Anesthesia Complicated by Subdural Hygromas and a Subdural Hematoma Inadvertent dural puncture during epidural anesthesia leads to intracranial hypotension, which if left unnoticed can cause life-threatening subdural hematomas or cerebellar tonsillar herniation. The highly variable presentation of intracranial hypotension hinders timely diagnosis and treatment. We present the case of a young laboring adult female, who developed subdural hygromas and a subdural hematoma following unintentional dural puncture during initiation of epidural anesthesia. Introduction Inadvertent dural puncture during epidural anaesthesia leads to intracranial hypotension, which if left unnoticed can cause life-threatening complications such as subdural hematomas and cerebellar tonsillar herniation [1,2]. The highly variable presentation of intracranial hypotension hinders timely diagnosis and treatment. Case Presentation A twenty-seven-year-old otherwise healthy nulliparous patient requested epidural anesthesia for pain relief during spontaneous labor. Following informed consent and using an aseptic technique, an 18 G Tuohy needle was inserted into the L3-4 epidural space, guided by a loss of resistance to normal saline. Unfortunately, the thecal sac was breached and the needle was immediately withdrawn. A second attempt, through the L2-L3 interspinous space, resulted in the successful placement of an epidural catheter and this was confirmed with a test dose of 10 mL of 0.2% ropivacaine. Further analgesia was provided via patient controlled epidural analgesia (PCEA) using 5 mL of 0.125% bupivacaine with a lockout of 15 minutes, as per the institution's protocol. There was no evidence of a high block. Six hours after the initiation of epidural analgesia, the patient required instrumental delivery with Kielland's rotational forceps. The patient developed a mild, intermittent, nonpostural headache on day one following delivery but was able to continue caring for her newborn child. Her neurological examination and vital signs were normal. The symptoms were not indicative of a Postdural Puncture Headache (PDPH) and she was treated with intravenous hydration and oral analgesia. On day two, the patient's headache became persistent and postural, and she developed nausea and vomiting. This was attributed to PDPH and she was informed of the potential treatments including autologous blood patching. She declined the blood patch and wished to continue with conservative management of paracetamol, ibuprofen, metoclopramide, and ondansetron with reasonable control. On day three, the Medical Emergency Team urgently attended the patient's bedside due to the onset of bradycardia (heart rate of forty beats per minute) in the setting of severe headache and vomiting. The patient was promptly investigated with Computed Tomography (CT). Brain CT demonstrated bilateral cerebral convexity subdural hygromas and a small right frontal subdural hematoma (Figure 1), while a head CT venogram was unremarkable. The patient also underwent a brain MRI. On day four, an epidural blood patch was performed without complication using 25 mL of autologous blood, resulting in rapid relief of the patient's headache. A follow-up brain MRI was performed one month later, which demonstrated complete resolution of the subdural hygromas (Figure 2). The patient was symptom-free.
Discussion Postpartum headache is extremely common, reportedly occurring in up to 80% of patients [4]. The commonest causes are tension headache and migraine, which in combination are twenty times more common than PDPH, let alone the rarer complications of subdural hygromas and hematomas [5]. Subdural hygromas are composed of xanthochromic fluid and result from intracranial hypotension [6]. The prevailing theory is that cerebrospinal fluid (CSF) leaks into the epidural space via the dural defect, leading to compensatory vasodilatation of the pachymeningeal blood vessels (Monro-Kellie doctrine), which subsequently become leaky [3,[7][8][9][10]. Some investigators have proposed that arachnoid granulation rupture may be a contributing factor [10]. Subdural hygromas occur in 10-69% of patients with intracranial hypotension and can occur as early as five hours or as late as five months after dural puncture [11][12][13][14]. If a dural tear is left untreated, continued spinal CSF leakage can lead to caudal sagging of the intracranial contents (occurring after ≥250 mL of CSF is lost) [15]. Traction-related tearing of subdural veins is the likely mechanism by which hygromas are complicated by hematomas, which may be unilateral or bilateral [14]. The risk of subdural hygroma and hematoma formation increases proportionally with the degree of intracranial hypotension and the number of dural punctures, as well as with coexistent cerebral atrophy, cerebral aneurysm, vascular malformation, pregnancy, dehydration, and use of anticoagulants. The true incidence of subdural hematoma following dural puncture remains elusive as most patients are managed without imaging investigation. Studies have reported that, of the patients who develop subdural hygromas, 47% go on to develop subdural hematomas [16][17][18]. The cardinal feature of intracranial hypotension is an orthostatic headache, which is of variable quality, typically most severe within the first twenty-four hours and usually resolving within ten days [19,20]. Altered conscious state, meningism, nausea, vomiting, dizziness, cranial nerve palsies, visual disturbance, photophobia, and rarely seizures have also been described [21]. Bradycardia has also been described and is thought to occur due to rostral migration of the brain with subsequent compression of the hypothalamus. Mass effect on the hypothalamus can cause alterations in autonomic outflow [22,23]. If the headache persists, loses its postural nature, returns following initial resolution, or is associated with haemodynamic changes, neuroradiological investigation is advocated to assess sequelae of intracranial hypotension, as a delay in diagnosis can be catastrophic [14]. Studies have demonstrated that dural puncture complicated by subdural hematoma carries a mortality rate of between 17 and 29% [14,24]. Subdural fluid collections (hematomas or hygromas) can be managed safely with conservative methods, such as bed rest, hydration, and caffeine. If the patient is still symptomatic despite these measures, an epidural blood patch (EBP) should be performed [17]. Craniotomy or burr hole evacuation is rarely required even if the subdural fluid collection is large and exerts significant mass effect; however, such collections may take up to three months to resolve [13,25]. Anaesthetists need to be cognisant of the possibility of subdural hematomas in the setting of PDPH, especially in parturients experiencing persistent headache with neurological or haemodynamic disturbance.
Early radiological investigation is encouraged, as a delay in diagnosis can be fatal. Consent Informed written consent has been obtained from the patient prior to submitting this article for publication.
2018-04-03T05:07:35.349Z
2016-08-29T00:00:00.000
{ "year": 2016, "sha1": "06d66d335c0210aa54d6b7d55531ca930e6c766e", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/cria/2016/5789504.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da489ddb6bb24fe6c7ad08d52a4ee163b287911e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59250326
pes2o/s2orc
v3-fos-license
Negative regulation of glial Tim‐3 inhibits the secretion of inflammatory factors and modulates microglia to antiinflammatory phenotype after experimental intracerebral hemorrhage in rats Summary Aims To investigate the critical role of Tim‐3 in the polarization of microglia in intracerebral hemorrhage (ICH)‐induced secondary brain injury (SBI). Methods An in vivo ICH model was established by autologous whole blood injection into the right basal ganglia in rats. The primary cultured microglia were treated with oxygen‐hemoglobin (OxyHb) to mimic ICH in vitro. In this experiment, specific siRNA for Tim‐3 and recombinant human TIM‐3 were exploited both in vivo and in vitro. Results Tim‐3 was increased in the brain after ICH and was mainly distributed in microglia, but not in neurons or astrocytes. However, the blockade of Tim‐3 by siRNA markedly reduced secretion of inflammatory factors, neuronal degeneration, neuronal cell death, and brain edema. Meanwhile, downregulation of Tim‐3 promoted the transformation of microglia phenotype from M1 to M2 after ICH. Furthermore, upregulation of Tim‐3 can increase the interaction between Tim‐3 and Galectin‐9 (Gal‐9) and activate the Toll‐like receptor 4 (TLR‐4) pathway after ICH. Increasing the expression of Tim‐3 may be related to the activation of HIF‐1α. Conclusion Tim‐3 may be an important link between neuroinflammation and microglia polarization through the Tim‐3/Gal‐9 and TLR‐4 signaling pathways, which induce SBI after ICH. | BACKGROUND Intracerebral hemorrhage (ICH) is an acute central nervous system (CNS) disease with high mortality and disability, accounting for ~15% of all patients with stroke. 1 Although a large number of studies have been performed to investigate the mechanisms of brain injury after ICH, there is still no effective drug treatment that can significantly improve the prognosis of patients. 2 ICH triggers a series of complex physiological and pathological events that ultimately lead to brain damage, especially in the tissues around the hematoma. 3,4 These events include the hematoma mass effect and the potential hematoma expansion, oxidative stress, inflammatory cell infiltration, cell necrosis and apoptosis. 5,6 Previous studies reported that inflammation is closely related to brain injury after ICH and suggested that inflammation may be an effective indicator of the outcome and prognosis of ICH. 3,4 Recent studies are focusing on the effects of inflammation in brain injury after ICH. However, the mechanism is still not fully elucidated. The T-cell immunoglobulin- and mucin-domain-containing molecule family (Tims family) was initially identified in a clonal mouse asthma model. 8 The Tims family includes eight members, and Tim-1, Tim-3, and Tim-4 are predominantly expressed in the human body. Tim-3 is an important member of the Tims family which is expressed specifically in CD4 + Th1 cells and other related immune cells, but not in Th2 cells. 9 In addition, Tim-3 is expressed in tissues and organs which are involved in innate and specific immune responses and is closely related to the progression and outcome of various diseases. 9,10 Tim-3 is mainly expressed in microglia in the brain and is involved in the inflammatory response in CNS diseases, including ischemic stroke, multiple sclerosis, and cerebral parasitic disease. 10,11 Recent studies showed that microglia/macrophages are the primary immune defenders of the CNS.
4,14,15 After ICH, a large number of endogenous activated microglia and exogenous macrophages rapidly gathered in brain tissues surrounding the hematoma and released inflammatory factors. 16,17 Previous studies suggested that microglia participated in brain injury through transforming into different phenotypes and releasing a large number of cytokines in the local microenvironment after ICH. 15,17 Microglia/macrophages are divided into two phenotypes, the pro-inflammatory phenotype (M1) and the antiinflammatory phenotype (M2), after ICH. M1 phenotype microglia promote inflammation, while M2 phenotype microglia inhibit the inflammatory response and participate in tissue reparations after ICH. 17 The activation and polarization of microglia in the brain after ICH have been reported, 18,19 but whether Tim-3 is involved in this remains unclear. Galectin-9 (Gal-9) is one of the ligands of Tim-3, which regulates the inflammatory response in diverse diseases. 10,20,21 Interaction of Tim-3 with Gal-9 is essential in the induction of autoimmune diseases by regulating secretion of inflammatory factors. 9 Toll-like receptors (TLRs) are the classical family of molecules which are involved in innate immunity. 22 TLR-4 is an essential member of the TLRs family, and it is closely related to the inflammatory response of CNS diseases. 23 Meanwhile, the interaction of TLR-4 and Tim-3 plays an essential role in the regulation of inflammation. These studies suggested that the Gal-9/Tim-3 and TLR-4 signaling pathways may be potential mechanisms of neuroinflammation. However, the mechanism of Tim-3 in the inflammation of SBI after ICH is still unclear. In this study, we explored the relationship between Tim-3 and ICH, particularly the function of Tim-3 in the activation or polarization of microglia and the effect of Tim-3 in brain injury after ICH. Thus, we investigated the expression level and potential effects of Tim-3 in inflammation of SBI after ICH. | Ethics and experimental animals Sprague-Dawley (SD) rats weighing 250-300 g and about 8 weeks old were provided by the Shanghai Experimental Animal Center of the Chinese Academy of Sciences. All animals were fed ad libitum and housed in a quiet environment (indoor temperature about 18-22°C). Additionally, we strove to minimize the number of animals used and their suffering. | Establishment of ICH model in vivo and in vitro The in vivo ICH model was established in SD rats as described in a previous study. 24 A schematic illustration of the coronal section of the brain is shown in Figure 1A. In vitro, primary microglia-enriched cultures were prepared from the brain tissues of 1-day-old pups (from pregnant SD rats) according to our previous study. 25 A detailed description of this method is provided in the Data Supplement. | Experimental grouping The experiments were divided into three parts (experiments I-III); for details, please see the online-only Data Supplement. | Cell nuclear protein and cytoplasm protein extraction Cell nuclear protein and cytoplasm protein extraction of brain tissues were performed by using a cell nuclear protein and cytoplasm protein extraction kit (BeyoTime Institute of Biotechnology, Nantong, China). All reagent preparation and experimental steps were carried out according to the manufacturer's instructions. | Western blot analysis Western blot analysis was performed as described previously. 26 For details, please see the online-only Data Supplement.
| ELISA The concentrations of IL-1β and IL-17 in brain tissue were determined by ELISA using the rat IL-1β and IL-17 kits (Cloud Clone Corp; SEA563Ra, SEA063Ra, Wuhan, China). This assay was performed according to the manufacturer's instructions, and the data were expressed relative to a standard curve prepared for IL-1β and IL-17. | Short-term and long-term neurological functions In experiment II, we tested the neuro-functional impairment of rats with a previously published scoring system which monitors their activity, appetite, and neurological deficits at 72 hours after ICH. 27 Assessments of sensorimotor deficits were performed before ICH and at days 1, 3, 5, 7, 14, 21, and 28 after ICH with the adhesive removal and foot-fault tests, following standard methods. 28,29 Figure 1 Intracerebral hemorrhage (ICH) model and levels of Tim-3 in brain tissues, which were mainly located in microglia, and the levels of inflammatory factors after ICH. A, Whole brain and the largest coronal section of hematoma. B, C, Western blot analysis showed the protein levels of Tim-3 at various time points in brain tissues after ICH. D, E, ELISA assay was used to detect the brain tissue levels of IL-1β and IL-17. F, Double immunofluorescence analysis was performed with Tim-3 (green) and astrocytic marker GFAP, microglia marker CD11b, or neuronal marker NeuN (red) in brain sections. Nuclei were fluorescently labeled with DAPI (blue). Scale bar = 50 μm. G, The relative Tim-3 fluorescent intensity in different brain cells is shown. Data are mean ± SD. For all panels except (G): *P < 0.05 vs sham group; **P < 0.01 vs sham group; ***P < 0.001 vs sham group, n = 6; †P < 0.05 vs 12 h ICH group; ††P < 0.01 vs 12 h ICH group, n = 6; ‡P < 0.05 vs ICH (1 d) group; ‡‡P < 0.01 vs ICH (1 d) group, n = 6; for (G): ***P < 0.001 vs astrocyte group, NS, no significant differences, †††P < 0.001 vs microglia group, n = 6 | Immunofluorescence analysis Immunofluorescence analysis was performed as described previously. 30 For details, please see the online-only Data Supplement. An observer who was blind to the experimental group performed the quantitative analysis. Staining images were auto-thresholded using the ImageJ program (NIH, Bethesda, MD, USA) to subtract background staining. Relative Tim-3 fluorescence intensity in different brain cells at 1 day after ICH was measured as the ratio of the fluorescence intensity of each group to the fluorescence intensity of the astrocytes. The fluorescence intensity in each cell area was calculated. ROIs were selected within the ipsilateral hemisphere around the hematoma. | Transfection of rhTIM-3 and siRNA in vivo and in vitro The drilling site of the intracerebroventricular region for rats was determined as described in previous studies. 31 | Fluoro-Jade B staining and TUNEL staining Fluoro-Jade B (FJB) and TUNEL staining were performed as described previously. 7 For details, please see the Supplementary Material. | Immunoprecipitation analysis Immunoprecipitation tests were performed as reported previously. 7 For details, please see the online-only Data Supplement. | Statistical analysis All data were presented as mean ± SD. GraphPad Prism 7.0 was used for all statistical analyses. Data groups (two groups) with normal distribution were compared using the two-sided unpaired Student's t test. Differences in means among multiple groups were analyzed using one-way ANOVA or two-way ANOVA followed by the Bonferroni/Dunn post hoc test. Behavioral tests were analyzed using two-way repeated-measures ANOVA. P < 0.05 was considered statistically significant.
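The comparisons described in this Statistical analysis section were run in GraphPad Prism 7.0; the short Python sketch below only illustrates the same logic (a two-sided unpaired t test for two groups, then a one-way ANOVA followed by Bonferroni-corrected pairwise tests for multiple groups). The group names and values are invented for the example and are not the study's data.

```python
from itertools import combinations
from scipy import stats

# Hypothetical relative Tim-3 levels (arbitrary units), n = 6 per group.
groups = {
    "sham":        [1.0, 1.1, 0.9, 1.0, 1.2, 0.8],
    "ICH":         [2.1, 2.4, 1.9, 2.3, 2.0, 2.2],
    "ICH+rhTIM-3": [2.9, 3.1, 2.7, 3.3, 3.0, 2.8],
    "ICH+siTim-3": [1.4, 1.5, 1.2, 1.6, 1.3, 1.5],
}

# Two-group comparison: two-sided unpaired Student's t test.
t, p = stats.ttest_ind(groups["sham"], groups["ICH"])
print(f"sham vs ICH: t = {t:.2f}, p = {p:.4f}")

# Multiple groups: one-way ANOVA, then Bonferroni-corrected pairwise t tests.
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p_raw * len(pairs), 1.0)   # Bonferroni adjustment
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```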
| General observation The body temperature, mean arterial pressure, and body weight of rats in each experimental ICH group did not change significantly (data not shown). The mortality rate of rats in the normal and sham groups was 0% (0/24 rats), and it was 5.9% (6/102 rats) in all experimental ICH groups. | Tim-3 was increased mainly in microglia after ICH To detect the expression level of Tim-3 in brain tissues after ICH, we tested the protein samples from brain tissues around the hematoma by Western blot analysis. Compared with the sham group, the level of Tim-3 was increased significantly from 12 hours after ICH, reached the peak at 24 hours, and then reduced gradually after that (Figure 1B,C). Also, ELISA results showed that the levels of IL-1β and IL-17 in brain tissues were significantly increased at 24 hours after ICH and then decreased gradually to the level of the sham group at 7 days (Figure 1D,E). To further clarify the cell type which expressed Tim-3, we detected the expression levels of Tim-3 in astrocytes, microglia, and neurons, respectively. Double immunofluorescence staining was performed on brain sections which were incubated with Tim-3 and GFAP (astrocytic marker), CD11b (microglia marker), or NeuN (neuronal marker), respectively. We observed a marked increment of Tim-3 in microglia, but not in neurons and astrocytes, at 1 day after ICH (Figure 1F,E; Figure S4A,B). | The secretion of IL-1β and IL-17 was blocked by inhibiting Tim-3 in the brain after ICH To further define the role of Tim-3 in the secretion of IL-1β and IL-17 in brain tissues after ICH, rhTIM-3 and Tim-3 siRNA were applied to regulate the levels of Tim-3 in this study. Consistent with the results above, Western blot analysis showed that the Tim-3 expression level was higher at 1 day after ICH than that in the sham group, while it was significantly increased by rhTIM-3 treatment and decreased by Tim-3 siRNA treatment (Figure 2A,B). In the ELISA analysis, we found that the levels of IL-1β and IL-17 in ICH rat brain tissue were elevated at 1 day after ICH, and they were significantly decreased by Tim-3 siRNA treatment and increased by rhTIM-3 treatment (Figure 2C,D). | Inhibition of Tim-3 improved short-term and long-term neurological functions after ICH To identify the impact of Tim-3 on neurological behavior at day 3 after ICH, behavioral assessment was performed. Neurological behavior was severely impaired in the rhTIM-3 group compared with the ICH group, and this impairment was dramatically attenuated in the Tim-3 siRNA group (Figure 2E). To investigate the effect of Tim-3 on long-term neurological outcomes, two independent behavior tests, the adhesive removal test and the foot-fault test, were performed until day 28 to measure neurological function. After ICH, the vehicle-treated group showed spontaneous recovery, rhTIM-3-treated rats had a dramatically worse performance in the adhesive test than the vehicle group, and siTim-3 rats showed better performance in the adhesive test compared with the Ctr-siRNA group (Figure 2F). In addition, siTim-3 rats performed significantly better throughout day 28 compared with the Ctr-siRNA group, as shown in the foot-fault test (Figure 2G). | Tim-3 knockdown by siRNA mitigated brain injury after ICH To detect the effects of Tim-3 in ICH-induced SBI, FJB and TUNEL staining were performed to test neuronal degeneration and death in the brain in all groups.
Compared with the sham group, the number of FJB-positive cells was increased in brain tissues in the ICH group, and this increase was significantly aggravated by rhTIM-3 treatment and attenuated by Tim-3 siRNA treatment (Figure 3A,B). TUNEL-positive cells exhibited similar results as FJB-positive cells (Figure 3C,D). In addition, brain water content was higher in the ICH group than that in the sham group. The brain water content was higher in ICH rats with rhTIM-3 treatment than that in the ICH control group, while the brain water content was lower in ICH rats with Tim-3 siRNA treatment than that in the ICH group (Figure 3E). These data indicated that inhibiting Tim-3 with siRNA could reduce ICH-induced SBI. | Tim-3 induced polarization of microglia in the brain after ICH Next, we observed the effects of Tim-3 on microglia polarization in brain tissue after ICH by using rhTIM-3 and Tim-3 siRNA (Figure 4A-D). The results of double immunofluorescence staining showed that ICH induced microglia polarization to a pro-inflammatory phenotype, as defined by CD16/CD11b positivity, and also to an antiinflammatory phenotype, as defined by CD206/CD11b positivity (Figure 4A-D). Compared with the sham group, CD16/CD11b-positive cells were increased in the ICH group, while it was significantly aggravated by rhTIM-3 and attenuated by Tim-3 siRNA treatment (Figure 4A,B). Compared with the sham group, CD206/CD11b-positive cells were decreased in the ICH group, while it was significantly attenuated by rhTIM-3 and aggravated by Tim-3 siRNA treatment (Figure 4C,D). Figure 2 Effects of rhTIM-3 and Tim-3 siRNA treatments on the protein level of Tim-3 and the brain tissue levels of IL-1β and IL-17 under ICH conditions. A and B, Western blot analysis showed the protein level of Tim-3 in brain tissues in various groups. C and D, ELISA assay was used to detect the brain tissue content of IL-1β and IL-17 at 1 d after ICH. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01; ***P < 0.001, n = 6. E, Neurological behavior scores. F, Adhesive removal test. G, Foot-fault test. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01; ***P < 0.001, n = 6. rhTIM-3, recombinant human TIM-3 In addition, as previously reported, 33 defining M1 and M2 macrophages by single phenotypic markers (CD206 and CD16) is not sufficient to identify microglia polarization. Inflammation-associated molecules, including the pro-inflammatory factors TNF-α, IL-1β, and iNOS and the antiinflammatory factors arginase1, IL-4, and IL-10, were also tested to provide more information on the biological state of microglia after ICH (Figure 4E-H). The results showed that the protein levels of all inflammation-associated molecules in brain tissues were significantly increased by ICH treatment. However, the results showed that the protein levels of pro-inflammation molecules were significantly aggravated by rhTIM-3 and attenuated by Tim-3 siRNA treatment, whereas the protein levels of antiinflammation molecules were significantly attenuated by rhTIM-3 and aggravated by Tim-3 siRNA treatment. | Tim-3 knockdown inhibited the ICH-induced interaction between Tim-3 and Gal-9 and promoted the interaction between TLR-4 and Gal-9 It has been reported that Tim-3 interacts with Gal-9 on the cell membrane and promotes inflammation in CNS diseases. 21,34 However, it has not been reported that TLR-4 interacts with Gal-9 on the cell membrane after ICH.
To further investigate the mechanism by which Tim-3 siRNA could rescue SBI in ICH rats, we analyzed the interaction between Tim-3/Gal-9 and TLR-4/Gal-9 by immunoprecipitation in brain tissues following ICH. The results of immunoprecipitation showed that, compared with the sham group, the Tim-3 and Gal-9 interaction was increased in the ICH group, and it was significantly aggravated by rhTIM-3 and attenuated by Tim-3 siRNA treatment (Figure 5A,B). The results also showed that, compared with the sham group, the TLR-4 and Gal-9 interaction increased in the ICH group, while it was significantly attenuated by rhTIM-3 and aggravated by Tim-3 siRNA treatment (Figure 5C,D). In addition, compared with the normal group, the yellow areas indicated by arrows showed that the overlap of Tim-3 and Gal-9 was significantly increased in the OxyHb group, especially the distribution of Tim-3 alongside the cell edge, which was significantly aggravated by rhTIM-3 and attenuated by Tim-3 siRNA treatment (Figure 5E). Compared with the normal group, the yellow areas indicated by arrows showed that the overlap of TLR-4 and Gal-9 was not significantly increased in the OxyHb group, but it was significantly attenuated by rhTIM-3 and aggravated by Tim-3 siRNA treatment (Figure 5E). Figure 3 Effects of Tim-3 siRNA on neuronal degeneration and cell death, and brain water content under ICH conditions. A, Fluoro-Jade B (FJB) staining (green) shows neuronal degeneration in the cerebral cortex. Scale bar = 100 μm. B, Degeneration of neuronal cells in brain tissues is shown. C, Percentage of apoptotic neurons is shown. D, Double immunofluorescence for NeuN (red) and TUNEL (green) counterstained with DAPI (blue) was performed. Arrows point to NeuN/TUNEL-positive cells. Scale bar = 100 μm. E, Bar graphs showing the effects of rhTIM-3 and Tim-3 siRNA on brain water content. Cont: contralateral; Ipsi: ipsilateral; CX: cortex; BG: basal ganglia; Cerebel: cerebellum. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01, n = 6. rhTIM-3, recombinant human TIM-3 Figure 4 Effects of Tim-3 on ICH-induced microglia polarization and changes of phenotype. Sections were stained for CD16/CD11b (pro-inflammatory microglia marker) or CD206/CD11b (antiinflammatory microglia marker). Representative images are shown in (A) and (C) and the percentages of CD16-positive cells or CD206-positive cells are shown in (B) and (D). Scale bar = 50 μm. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01; ***P < 0.001, n = 6. E and G, The immunoblots showed TNF-α, IL-1β, iNOS, arginase1, IL-4, and IL-10 produced by microglia under the indicated treatments. F and H, The quantitative analyses of TNF-α, IL-1β, iNOS, arginase1, IL-4, and IL-10 in the immunoblots. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01; ***P < 0.001, n = 6 Figure 5 Effects of Tim-3 on Gal-9/Tim-3 and Gal-9/TLR-4 interactions after ICH. A and B, Gal-9/Tim-3 and Gal-9/TLR-4 interactions in brain tissues were determined using immunoprecipitation (IP). C and D, Quantitative analysis was performed. Data are mean ± SD. *P < 0.05; **P < 0.01; ***P < 0.001, n = 6. E, Double immunofluorescence for Tim-3 or TLR-4 (red) and Gal-9 (green) counterstained with DAPI (blue) was performed. Arrows indicate the overlap of Gal-9 and Tim-3 or TLR-4 around the cell edge. Scale bar = 10 μm | The expression level of HIF-1α protein in cytoplasm and nucleus is closely related to the regulation of Tim-3 To detect the expression level of HIF-1α protein in the cytoplasm and nucleus, Western blot analysis was first performed to test HIF-1α in the subcellular space.
Figure 6 The expression level of HIF-1α protein in cytoplasm and nucleus is closely related to the regulation of Tim-3 after ICH. A-D, Western blot analysis and quantification of the protein level of HIF-1α in the cytoplasm and the nucleus. Data are mean ± SD. NS: no significant differences; *P < 0.05; **P < 0.01; ***P < 0.001, n = 6. Hypothesis of potential mechanisms of Tim-3 in the inflammatory signaling pathway under ICH conditions: green colored arrows indicate the ICH-induced Tim-3 actions, and red colored arrows indicate the effects of rhTIM-3 and Tim-3 siRNA. HIF-1α translocates from the cytoplasm to the nucleus, and Tim-3 expression increases after ICH. A large amount of Tim-3 interacts with Gal-9, activating the Tim-3/Gal-9 signaling pathway, which promotes the production of inflammatory factors. In addition, a large amount of TLR-4 is exposed, activating the TLR-4 signaling pathway, which drives microglia toward the pro-inflammatory state. Two inflammatory pathways are activated, leading to SBI after ICH. rhTIM-3, recombinant human TIM-3 Compared with the sham group, cytoplasmic HIF-1α protein was increased in the ICH group, which was significantly attenuated by rhTIM-3 and aggravated by Tim-3 siRNA treatment (Figure 6A,B). Compared with the sham group, nuclear HIF-1α protein was increased in the ICH group, which was significantly aggravated by rhTIM-3 and attenuated by Tim-3 siRNA treatment (Figure 6A,C). Compared with the sham group, total HIF-1α protein was increased in the ICH group, which was not significantly aggravated by rhTIM-3, but it was significantly attenuated by Tim-3 siRNA treatment (Figure 6A,D). | DISCUSSION As previously reported, experimental and clinical results indicated that inflammation was critically involved in the pathogenesis of SBI after ICH. 3,4 The expression level of Tim-3 is elevated in CNS diseases, and it activates microglia, where it plays a vital role in the inflammatory processes associated with neutrophil infiltration. 14,15,35 Gal-9, as a ligand of Tim-3, can be upregulated by Tim-3 and is involved in CNS diseases associated with inflammation. 10 Recent studies suggested that Tim-3 was constitutively expressed on cells of the innate immune system in both humans and mice, and it can synergize with TLRs. 10 Therefore, a complete understanding of the relationship between Tim-3, Gal-9, and TLRs may better explain the pathophysiological mechanisms of SBI after ICH. Our study showed the effects of Tim-3 on the pathogenesis of SBI in a rat ICH model. In experiment I, the results showed a higher level of Tim-3 expression in microglia in brain tissues in SBI after ICH, and the most significant time point was 1 day after ICH. In experiment II, firstly, we found that the use of Tim-3 siRNA eliminated ICH-induced Tim-3 upregulation, while rhTIM-3 showed the opposite effect. Secondly, Tim-3 siRNA treatment decreased the secretion of inflammatory factors in the brain after ICH, improved long-term neurological outcomes as well as neuronal degeneration and death, and ameliorated brain edema after ICH. Finally, Tim-3 siRNA treatment induced microglia polarization to an antiinflammatory phenotype (M2) and increased the levels of antiinflammatory factors, while rhTIM-3 showed the opposite effect.
In experiment III, the results were as follows: Tim-3 siRNA treatment reduced the ICH-induced interaction between Tim-3 and Gal-9 and promoted the interaction between TLR-4 and Gal-9. Besides, the increased protein level of Tim-3 induced by ICH may be closely related to the translocation of HIF-1α from the cytoplasm to the nucleus (Figure 6). Based on these results, we hypothesized that highly expressed Tim-3 binds closely to Gal-9, and the activation of the Tim-3/Gal-9 signaling pathway promotes the production of pro-inflammatory factors. At the same time, TLR-4 lost its association with Gal-9, thus causing more TLR-4 exposure and the activation of the TLR-4 signaling pathway, which promoted microglia transformation to the pro-inflammatory state (M1 phenotype). Therefore, we can conclude that there are two pathways promoting inflammation involved in SBI after ICH. The mechanisms of this study are shown in Figure 6E. Gal-9 is thought to be essential for regulating cell homeostasis and inflammation. Recent studies demonstrated that Gal-9 induced various biological reactions, such as cell aggregation, adhesion, activation, and apoptosis. 36,37 Gal-9 showed immunomodulatory properties by inducing Th1 cell (not Th2 cell) death through interaction with Tim-3. 38 In addition, Tim-3 and Gal-9 were also expressed in brain tissues and involved in inflammation in the rat stroke model. 21 High-level expression of Tim-3 in glial cells triggers and activates the Tim-3/Gal-9 signaling pathway and promotes the release of pro-inflammatory factors, but its role in ICH is unknown. Our study demonstrated that the expression level of Gal-9 was upregulated together with the increase of Tim-3 in microglia after ICH. Additionally, we used rhTIM-3 to validate the effect of the Tim-3/Gal-9 signaling pathway and Tim-3 siRNA to disrupt the effect of the Tim-3/Gal-9 signaling pathway on SBI after ICH. These results suggested that blocking the Tim-3/Gal-9 signaling pathway may rescue SBI and reduce the release of inflammatory factors in brain tissues after ICH. TLR-4, a major member of the Toll-like receptor family, is involved in SBI and neurobehavioral dysfunction through the TLR-4-mediated inflammatory pathway after experimental subarachnoid hemorrhage. 23 In addition, the TLR-4 signaling pathway has been widely studied in macrophage polarization, and its activation is closely related to the production of M1 macrophages. 39,40 For various neurological diseases, such as traumatic brain injury, multiple sclerosis, or ischemic stroke, M1 macrophages/microglia are generally thought to exacerbate neuronal necrosis and inflammation, whereas M2 macrophages/microglia inhibit inflammation and are beneficial for tissue reparations. 4,41 Our results confirmed that the high-level expression of Tim-3 after ICH caused TLR-4 to separate from Gal-9; thus, TLR-4 was exposed. The exposure of TLR-4 may promote microglia transformation to the M1 phenotype and the release of more pro-inflammatory factors, while TLR-4 binding closely to Gal-9 may promote microglia to the M2 phenotype and the release of more antiinflammatory factors. These results suggested that blocking the TLR-4 signaling pathway may reduce SBI and induce microglia transformation to the antiinflammatory phenotype in the brain after ICH. In this study, we investigated the mechanism of SBI after ICH through the Tim-3/Gal-9 signaling pathway and TLR-4-mediated inflammatory responses, which were closely related to the increased level of Tim-3. However, the reason for the high-level expression of Tim-3 after ICH is unclear.
Previous studies have shown that HIF-1α may recruit inflammatory cells into hypoxic brain tissues by regulating the level of glial Tim-3, and HIF-1α genes are associated with metabolic responses. 35 We detected the distribution of HIF-1α in the cytoplasm, the nucleus, and the whole cell after ICH, and the results showed that the level of HIF-1α increased in the cytoplasm and nucleus after ICH; the level of HIF-1α decreased in the cytoplasm and increased in the nucleus after rhTIM-3 treatment; and the level of HIF-1α in the nucleus and cytoplasm decreased after Tim-3 siRNA treatment. Therefore, we speculate that the elevated level of Tim-3 after ICH may be closely related to the different expression of HIF-1α in the cytoplasm and nucleus. However, the above results do not demonstrate that elevated levels of Tim-3 in brain tissue are closely related to the transport of HIF-1α from the cytoplasm to the nucleus. In order to confirm the relationship between Tim-3 and HIF-1α, further research is needed in the future. There are several limitations to this study. In the in vitro experiments, we showed that OxyHb might increase the expression of Tim-3. However, other components of the hematoma were not studied, so this study cannot explain the full mechanism underlying the high level of Tim-3 expression post-ICH. Lastly, we did not determine whether it was the Tim-3/Gal-9 signaling pathway or the TLR-4-mediated inflammation pathway that played the dominant role. | CONCLUSIONS The present study demonstrated that the level of Tim-3 was increased after ICH and was mainly distributed in microglia. The regulation of Tim-3 expression was performed with rhTIM-3 and Tim-3 siRNA, which aggravated or relieved SBI after ICH, respectively. High-level expression of Tim-3 induced the activation of two inflammatory pathways that both aggravated SBI after ICH. The activation of the Tim-3/Gal-9 signaling pathway promoted the release of inflammatory factors; on the other hand, the activation of the TLR-4 signaling pathway is closely related to microglia transformation to the M1 phenotype. Tim-3 may be an important link between neuroinflammation and the polarization of microglia, and negative regulation of the glial Tim-3 signal may be a novel therapeutic target for ICH. ACKNOWLEDGMENTS None. CONFLICT OF INTEREST The authors declare no conflict of interest.
2019-01-26T14:02:47.894Z
2019-01-24T00:00:00.000
{ "year": 2019, "sha1": "35518d2e15ef9bf10adbff81247bf78a90461ea2", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cns.13100", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "35518d2e15ef9bf10adbff81247bf78a90461ea2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
267522172
pes2o/s2orc
v3-fos-license
Developing public health surveillance dashboards: a scoping review on the design principles Background Public Health Dashboards (PHDs) facilitate the monitoring and prediction of disease outbreaks by continuously monitoring the health status of the community. This study aimed to identify design principles and determinants for developing public health surveillance dashboards. Methodology This scoping review is based on Arksey and O'Malley's framework as included in JBI guidance. Four databases were used to review and present the proposed principles of designing PHDs: IEEE, PubMed, Web of Science, and Scopus. We considered articles published between January 1, 2010 and November 30, 2022. The final search of articles was done on November 30, 2022. Only articles in the English language were included. Qualitative synthesis and trend analysis were conducted. Results Findings from sixty-seven articles out of 543 retrieved articles, which were eligible for analysis, indicate that most of the dashboards designed from 2020 onwards were at the national level for managing and monitoring COVID-19. Design principles for the public health dashboard were presented in five groups, i.e., considering aim and target users, appropriate content, interface, data analysis and presentation types, and infrastructure. Conclusion Effective and efficient use of dashboards in public health surveillance requires implementing design principles to improve the functionality of these systems in monitoring and decision-making. Considering user requirements, developing a robust infrastructure for improving data accessibility, developing and applying Key Performance Indicators (KPIs) for data processing and reporting purposes, and designing interactive and intuitive interfaces are key for successful design and development. Introduction Public health surveillance is the continuous and systematic collection, analysis, and interpretation of health-related data essential for planning, implementing, and evaluating public health performance. It is a tool for estimating the health and behavior of a society, enabling the determination of health status and identification of interventions and their effects. Health monitoring empowers decision-makers for effective management based on valuable and up-to-date evidence [1]. Interpreting the obtained results and sharing the information helps the stakeholders take quick and appropriate measures to reduce morbidity and mortality and improve the welfare of society [2]. This process requires the cooperation of many stakeholders, from the community level to the senior management of the health system, who should work systematically and complementarily to promote public health security [3]. Public health goals include preventing epidemics, protecting against environmental hazards, encouraging and promoting healthy behaviors, managing natural disasters, assisting in community recovery, and ensuring quality and access to health services [4]. One of the essential services public health organizations provide is monitoring health status and identifying community health problems [4,5].
History of surveillance systems and challenges For this purpose, the public health monitoring system employs continuous monitoring systems to assess the health status of the community and utilizes the data for planning, implementation, and evaluation [4]. Initially, telephone reporting was used for public health monitoring. However, this method faced numerous challenges in analyzing and extracting valuable information for timely decision-making due to the production of large and complex data sets. Evidence shows that the generation of substantial amounts of data led to information overload at high organizational levels. Consequently, these data sets were rarely utilized for decision-making in practice in an effective way. The process of reporting, collecting, and analyzing the data often extended over several weeks, impeding a targeted and timely response [5,6]. With the advent and popularization of the Internet, a suitable platform was provided for the swift collection of society's health-related data from a wide range of available electronic data sources. The first initiative of this kind, the Program for Monitoring Emerging Diseases communication system (ProMED-mail), was launched in 1994. Subsequently, the World Health Organization (WHO) established an effectively organized infrastructure called the Global Outbreak Alert and Response Network (GOARN) [7]. Today, public health monitoring systems can swiftly collect necessary data from different parts of society, including remote areas, to obtain essential information for identifying events early and preparing for them [7,8]. Studies demonstrate that, despite the clear advantages these systems offer compared to traditional surveillance systems, they still face unresolved limitations. The key limitation of other surveillance systems, in contrast to dashboards, is their inability to analyze and extract valuable information for timely decision-making, together with the lack of integration and collection of information from different sources. Given the large volume of data and the unstructured nature of data sources, methods are required to extract, process, and analyze the data, presenting the interpreted information most effectively to users [9]. Dashboards in public health surveillance Considering the extensive data sources and the diversity of potential users in public health monitoring systems, dashboards can serve as a suitable tool to facilitate the production and provision of information to managers and policymakers in this field. In recent years, with the increase in the global spread of infectious diseases that have the potential to become epidemics and pandemics [10,11], the importance of utilizing Public Health Dashboards (PHDs) in continuously monitoring the health status of communities, timely diagnosis, and proper management of these diseases has significantly increased. The advent of the COVID-19 pandemic further emphasized the importance of using real-time data to manage and control this disease at the societal level, making the role of PHDs more prominent [12,13].
Dashboards serve as decision support tools, presenting essential business information in graphic and visual form. They can retrieve and analyze a large amount of data by interacting with various data sources, extracting information from databases, and delivering results based on Key Performance Indicators (KPIs). As a result, dashboard users can quickly gain insight into the current situation and progress of the business. When designing dashboards, it is necessary to choose KPIs that align with users' needs. Appropriate KPIs should be selected and organized based on the dashboard's objectives and its users. The effectiveness of KPIs is maximized when the dashboard displays indicators that resonate with users' understanding and knowledge. Furthermore, careful consideration of the number of KPIs selected for monitoring by the dashboard is essential [14][15][16]. PHDs aim to facilitate the continuous monitoring of the health status of the community and the monitoring and prediction of disease outbreaks by collecting and integrating real-time data from various data sources. They assist in managing and controlling diseases by displaying KPIs in a well-designed user interface [17]. Therefore, considering the volume of data and the need for real-time monitoring and response in public health situations, attention to dashboard design principles for public health surveillance is essential [18]. Studies on PHD design principles primarily focus on the content and user interface of these systems. The design principles suggested in these studies include a customizable, actionable "launch pad" [19,20], support for correct data interpretation [20,21], information aggregation [22], minimalist aesthetics [21], reduction of user workload [21], a GIS interface [23], minimal cognitive processing, and the use of temporal trend analysis techniques [24]. In other words, the design principles suggested in the literature primarily address the content and user interface of these systems. Additionally, the results section of our study highlights other features that should be considered in the design of public health dashboards. This study was conducted to identify the design principles of PHDs, not only focusing on the content and user interface aspects but also presenting a comprehensive view of all key design principles of PHDs. The aim is to provide insight for public health policymakers to facilitate and accelerate decision-making in epidemics and medical crises by extracting data from various systems and sources and providing timely reports.
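To make the idea of a KPI concrete, the sketch below shows how one indicator might be derived from raw surveillance records before it is placed on a dashboard. It is an illustrative example only: the column names, the choice of a 7-day rolling incidence per 100,000 population, and the pandas implementation are assumptions, not features of any dashboard reviewed in this study.

```python
# Hypothetical sketch: deriving a single dashboard KPI (7-day rolling incidence
# per 100,000 population) from line-listed surveillance records.
import pandas as pd

def weekly_incidence_per_100k(records: pd.DataFrame) -> pd.DataFrame:
    """Aggregate daily case reports into a rolling 7-day incidence indicator."""
    daily = (records
             .groupby(["region", "report_date"], as_index=False)
             .agg(cases=("cases", "sum"), population=("population", "first")))
    daily = daily.sort_values("report_date")
    # Rolling 7-day sum of cases within each region, scaled to per 100,000 people.
    daily["cases_7d"] = (daily.groupby("region")["cases"]
                              .transform(lambda s: s.rolling(7, min_periods=1).sum()))
    daily["incidence_7d_per_100k"] = 1e5 * daily["cases_7d"] / daily["population"]
    return daily[["region", "report_date", "incidence_7d_per_100k"]]
```

A dashboard would typically recompute such an indicator on a schedule and render the resulting series as a chart or map.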
Based on the present RQ, Arksey and O'Malley's framework (2005), as an influential framework suggested by the JBI guidance, was applied to conduct this scoping review [25].Six following stages are recommended based on this framework; the first five are compulsory for the robustness and trustworthiness of the review, while the last stage is indicated as an optional one. Identifying the research question The question should incorporate the population (or participants) /concept /context (PCC) elements per the guideline.This study included all the published papers about PHDs.The context refers to all the principles and determinants that impact designing such dashboards, and it also refers to applying PHDs in decision-making and monitoring the health status.Accordingly, the main research question is: "What are the key design principles of a public health dashboard?". Identification of relevant studies Searches were conducted in PubMed, Web of Science, IEEE, and Scopus.A combination of MeSH terms and related keywords was used for the search strategy.The search strategy was carried out with the following keywords. (("Surveillance"[Title/Abstract] OR "Public Health Surveillance"[Mesh] OR "public health"[Mesh] OR "public health"[Title/Abstract]) AND) dashboard [Title/ Abstract] OR "Web-based surveillance system" [Title/ Abstract])( The search was carried out for articles published between January 1, 2010 and November 30, 2022.The final search of articles was conducted on November 30, 2022.EndNote version 20.2.1 was applied to manage the articles` inclusion and screening process. Study selection For this purpose, first, the retrieved articles were screened based on their title and abstract.Two authors reviewed all these titles and abstracts independently, and the senior author (RR) finalized the cases of disagreement.After the approval of the remaining articles by the senior author, the articles` full text was independently reviewed by two authors based on the inclusion and exclusion criteria of the study (Table 1).Any disagreement regarding the selection of articles was discussed with the senior author.Preferred Reporting Items for Systematic Reviews and Meta-Analyzes Extension for Scoping Review (PRISMA-ScR) guideline [27] was used to manage the eligible articles at this stage. Charting data The descriptive data extracted from the articles, including the year of publication, public health category, study setting, and dashboard implementation level, was inserted into Microsoft Excel Version 16 (Microsoft Corporation, Redmont, WA) for combination and analysis. In this step, two data analysis methods, quantitative descriptive analysis and qualitative content analysis, were applied.Excel software (version 16) was used to summarize the distribution and frequency of the included articles based on year of publication, public health category, setting of the study, place of conducting the study, and dashboard implementation level (level of implementation of the dashboard at the global, national, or local levels).Then, the design principles of the PHDs were extracted by reviewing the content of the articles (Table 2). For qualitative thematic analysis, the findings of the studies were examined line by line, and the primary codes were extracted for formulating the research question.After extracting the initial codes and reviewing these, the final codes were emerged and subsequently categorized to create subsidiary principles that ultimately led to a higher conceptual level. 
Microsoft Office 365 was used to categorize the design principles of dashboards. This scoping review also utilized trend analysis to illustrate the trends of publications in each of the public health categories. The number of articles published in different years was plotted using Microsoft Excel (version 16). Results A total of 543 articles were retrieved after searching the databases. The PRISMA flow diagram illustrates that 67 articles were eligible for analysis based on the inclusion and exclusion criteria after eliminating duplicates and screening the articles (Fig. 1). Characteristics of included studies The geographical distribution of the designed dashboards showed that most of the selected studies were conducted in North America (N = 29, 43%), followed by Europe, Asia, and Africa (Fig. 2). Among the studies conducted on PHDs, there was an increasing trend in the number of published articles from 2020 to 2022. Regarding implementation scale, the designed dashboards were mainly reported at the national level (58%) (regional 27%, local 11%, and global 4%). In addition, 30% of the dashboards (N = 23) were designed to monitor and control COVID-19, followed by dashboards developed for maternal and newborn health (N = 8, 12%) and AIDS (N = 6, 9%) (Fig. 3). Principles of designing PHDs Considering the objective and target users First, the purpose of designing a dashboard and the target users should be considered. The dashboard's design, visualization tools, content, and how the information is represented vary based on the dashboard users. In the study by Véronique et al. in the Netherlands, which investigated the development and actionability of the COVID-19 dashboard, specifying the purpose and users of the dashboard was found to be important in its design [28]. In a review of 158 dashboards from 53 countries, Ivanković et al. identified seven common features among them; "know their audience and information needs" is mentioned as the first feature in the principles of designing PHDs [29]. Therefore, the compatibility between the content and information displayed by the dashboard and the tasks and needs of users can affect the use of the dashboard [28][29][30]. Appropriate content Véronique et al. [28] introduced content and data, and Ivanković et al. [29] presented managing the type, volume, and flow of displayed information, as public health dashboard design features. In the reviewed dashboards, KPIs were placed on the dashboard's main page, allowing for timely monitoring and display of the current situation at a glance. The placement and display of KPIs in the dashboard is top-down, so that macro indicators (global, national) (for example, the number of deaths due to COVID-19 globally or by country) are placed on the main screen; KPIs and global indicators can be compared at this level. Mezzo indicators (urban, regional) (for example, the number of deaths due to COVID-19 by region or city) are at the next level, which allows comparison between cities and regions. Micro indicators (for example, the number of deaths due to COVID-19 at hospitals) are at the third level; these are performance indicators at the level of institutions. Managing the amount of information displayed on the dashboard is also essential [28,29,[31][32][33][34][35][36][37][38][39][40][41].
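A minimal sketch of the top-down macro/mezzo/micro hierarchy described above is given below: the same death counts are rolled up at national, regional, and facility levels. The column names, the toy data, and the use of pandas are assumptions for illustration only; they are not drawn from any reviewed dashboard.

```python
# Hypothetical illustration of the macro (national) / mezzo (regional) /
# micro (facility) KPI levels as successive aggregations of one table.
import pandas as pd

deaths = pd.DataFrame({
    "country":      ["A", "A", "A", "B"],
    "region":       ["North", "North", "South", "East"],
    "hospital":     ["H1", "H2", "H3", "H4"],
    "covid_deaths": [12, 7, 30, 5],
})

macro = deaths.groupby("country")["covid_deaths"].sum()                       # main screen
mezzo = deaths.groupby(["country", "region"])["covid_deaths"].sum()           # second level
micro = deaths.set_index(["country", "region", "hospital"])["covid_deaths"]   # third level

print(macro, mezzo, micro, sep="\n\n")
```

The drill-down and drill-up features discussed in the next section amount to moving between these three aggregation levels interactively.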
Inclusion criteria • Studies focused on PHDs used for geographic monitoring and tracking of public health or disease surveillance • Studies that focused on the development and implementation of dashboards at the global, national, provincial, or local levels • Studies on web-based surveillance systems equipped with a dashboard Exclusion criteria • Peer-reviewed articles focused on the development, implementation, and/or evaluation of a dashboard used in healthcare settings, including clinics, hospitals, health systems, or any other settings where medical care is provided (based on the aim of this study to review PHDs, we excluded dashboards targeted at medical care centers, because the nature, capabilities, and features of public health dashboards differ from those of dashboards used in medical care centers) • Non-English publications Interface The dashboard user interface consists of two parts: interactive tools and visual tools. Interactive tools In the reviewed dashboards, the summary view feature was first used to monitor macro indicators at a glance, and unnecessary details were not displayed. This feature helps summarize data and reduce complexity. The details of the indicators can be accessed using the drill-up and drill-down features if needed. The pan-and-zoom feature can be used to magnify or reduce the details. The customizable feature enables users to customize the display of information based on indicators according to their needs. If real-time monitoring is needed, the reports based on the determined KPIs are displayed in real time [4, 28, 31, 34, 36-38, 40, 42-49]. Considering the types of data analysis and presentation Data analysis helps users understand the relationships between data and trends in the dashboard [29,34]. Various types of analysis were used in the reviewed dashboards, including analysis at different geographic levels, comparing global and local KPIs, comparing indicators with standard values, and presenting data or reports in the format required by the users, such as Word or PDF [29,34,36,38,45,53,55,[60][61][62]. In the study by Cheng et al., the following features for efficient data presentation are suggested: (1) provision of the information that viewers need quickly and clearly, (2) organization of information to support meaning and usability, (3) minimization of distractions, clichés, and unnecessary embellishments that could create confusion, (4) creation of an aesthetically pleasing viewing experience, and (5) consistency of design for easy data comparison [45]. Artificial intelligence and data mining techniques can be used to predict trends and patterns in data over time [53,55,[60][61][62].
Infrastructure The infrastructure and implementation of the data warehouse are vital in designing dashboards and facilitating the collection and management of data from different sources. Data warehouses are central repositories of integrated data from one or more disparate sources. A dashboard pulls the data from the data warehouse and transforms it into a series of charts, graphs, and other visualizations that update in real time. The data warehouse is used to collect and manage data from various sources, and it can be used for reporting, reviewing, and analyzing data if equipped with a dashboard [45,46,51,63]. High-quality data is essential for an effective data warehouse. It is crucial to have a standard for data transfer and to check the data quality before storing it in the data warehouse. Data quality aspects in the examined dashboards included data completeness (e.g., missing data), correctness (e.g., accuracy), currency (e.g., timeliness), and provenance (e.g., reliability of the source). The standards included content, transmission, structural, and security standards [28,30,35,37,40,42,52,55,56,59]. Transferring data between systems and creating interactions between data sources requires attention to security and data access. As a security measure, all users are assigned a level based on their role and duties in the authorization system. Three levels of data security were implemented in the reviewed dashboards, i.e., the client level, the data transfer level, and the server level. At the client level, user authentication is checked every 10 min to prevent cyber-attacks and interference in database queries through SQL injection [40,42,52]. At the data transfer level, the client and server data were encrypted through NoSSL open-source software [52,55]. Given that the web server is open to public access, a backup computer in the middle (intermediary computer) is needed to filter access to the database [28,30,40] to ensure proper security standards and protect the central database. This means all requests pass through the web server to the intermediary computer, then to the central database, and vice versa. The dashboard design should provide easy access via phone, tablet, and laptop for real-time monitoring and checking KPIs at a glance [55,56,59].
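Two of the client-level safeguards mentioned above, role-based access levels and protection of database queries against SQL injection, are illustrated in the minimal sketch below. The table layout, role names, and use of SQLite are assumptions made only for this example; the reviewed dashboards do not specify these implementation details.

```python
# Minimal sketch: role-based access check plus a parameterized query,
# the standard way to keep user input out of the SQL text (SQL injection defence).
import sqlite3

ROLE_ALLOWED_LEVELS = {"viewer": 1, "analyst": 2, "admin": 3}

def fetch_kpi(conn: sqlite3.Connection, role: str, region: str):
    """Return incidence KPI rows for one region, if the role is authorized."""
    if ROLE_ALLOWED_LEVELS.get(role, 0) < 1:
        raise PermissionError("role not authorized to read KPIs")
    cur = conn.execute(
        # The ? placeholder is bound by the driver, never interpolated into the SQL string.
        "SELECT report_date, incidence FROM kpi_incidence WHERE region = ?",
        (region,),
    )
    return cur.fetchall()
```

In a layered deployment such as the one described, this kind of check would sit behind the intermediary computer rather than on the public web server itself.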
Main findings This scoping review study aimed to determine the design principles of PHDs. The included articles explained the details of the design and development of PHDs and their design criteria. The study findings revealed that the production rate of PHDs has been increasing in the past few years. The emergence of COVID-19 and the efforts to manage and control the outbreak/pandemic have significantly impacted this increasing trend. Several institutions worldwide have designed and developed COVID-19 dashboards to report epidemiologic statistics on a county, state, or national scale. Almost all states and most major cities in the USA had deployed a COVID-19 dashboard by the end of 2020. By 2021, all dashboards designed for this purpose had been updated to include information on vaccination, or separate dashboards had been created to track COVID-19 vaccination [13]. Due to the massive amount of data and the need for real-time monitoring and response in public health situations, it is essential to pay attention to dashboard design principles to support the goals of public health surveillance [18]. After examining the indicators presented in the reviewed studies, dashboard design objectives and target users, dashboard content, dashboard user interface, data analysis and display, and infrastructure were identified as five general and essential principles in designing PHDs. Studies have also discussed the requirements and design principles of PHDs. Identifying users and their needs, using narrative information in addition to quantitative information in the dashboard, using a geographic map to better display location data, and stating the source of the data reported by the dashboard are criteria mentioned for designing a dashboard [66]. Likewise, the necessary components to support and facilitate implementing dashboards in public health organizations have been mentioned, including storage and management of data and information from different sources, coordination of data from different sources, support for standards, analysis, defining and identifying KPIs, and information visualization [13]. Rasmussen et al. suggested four general principles for designing dashboards: presentation format, integration, interface design, and development and implementation [67]. These researchers remarked that inadequate attention to these principles could result in challenges for PHDs [67]. Furthermore, Ghazi Saeedi et al. mentioned KPI development, data sources, data generation, integration of dashboards with source systems, and information presentation issues as the challenges of implementing PHDs [68]. Purpose and users The purpose of designing a dashboard is to provide a suitable tool for exploring a data set and finding the information the user needs. Therefore, paying attention to the user's needs and designing the appropriate dashboard is particularly important. Considering that a variety of users use dashboards, it is impossible to design a dashboard that fits the personality and ability of each user. However, identifying the primary goal of designing a dashboard and its target user group is the first step in choosing the correct and accurate KPIs, defining appropriate interactive and visual tools, and considering related data analysis methods. Marshal et al. have also emphasized the importance of this principle in designing PHDs in two separate studies [69].
Content KPIs are the main content component of a health dashboard. Therefore, choosing the type and number of indicators the dashboard should monitor and display is essential in designing and developing dashboards [32,70,71]. Every organization must measure the indicators that fit its objectives [72]. After identifying the main objective and target users, it is necessary to determine the appropriate measurement indicators. Determining a specific and adequate number of indicators emphasizes the available information, and users can review all the indicators at a glance. These findings are consistent with Peters et al.'s study, which indicated that moderate use of indicators can display information in various ways and effectively guide the user's visual flow by creating a particular order [73,74]. Serb et al. also suggested the importance of organizing indicators in the dashboard according to the level of use (macro, mezzo, micro level). Their study showed that at least 15 to 25 indicators are required for monitoring purposes in dashboards [75]. Interface In user interface design, attention to the principles of information visualization and interaction with the user interface is essential [76,77]. Uniform techniques were not used to visualize functional indicators in the reviewed studies. Uniform visualization techniques are ineffective in dashboard design, since it is necessary to consider users' preferences, abilities, knowledge, and skills in visualizing dashboards. Besides, Steichen and Mawad pointed out in separate studies that creating adaptive and personalized visualization systems tailored to users' cognitive and individual abilities can lead to a better understanding of the displayed information [78]. The nature of the data and human factors such as experience, skill, cognitive styles, and user preferences are also influential in selecting visualization and interactive techniques [79,80]. In Shneiderman's study, interactive techniques included "overview, zoom, filter, details-on-demand, relate, history, and extract" [81]. Khan et al. indicated that interactive techniques included "zoom and pan, overview and detail, and filtering" [82]. In Dal et al.'s study, interactive techniques for the dashboard included controlling the level of detail, filtering, searching, and customizing the display [83]. Yi et al. similarly implied that interactive features included "select, explore, reconfigure, encode, abstract/elaborate, filter, and connect" [76]. Types of analysis and data presentation The main application of dashboards is data analysis to provide appropriate insight into the regional distribution of disease burden and help allocate resources correctly. This analysis can help policymakers and healthcare providers make appropriate decisions. In most studies, timely data reporting and a suitable time trend in data analysis have been proposed as essential indicators in dashboard design. These findings align with the results of Curriero et al., emphasizing the importance of providing up-to-date data reports [57]. Another critical indicator in dashboard design is the ability to analyze data based on geographic location, age, gender, social status, ethnicity, and race. By collecting, registering, and using data related to meaningful subgroups of the population, these critical (and changeable) differences might be noticed. Brehaut et al.
also showed that, as far as infrastructure limitations and legal barriers allow, these indicators are vital and should be considered in designing a dashboard. Finally, some studies used descriptive approaches, machine learning prediction models, and simulations to predict future situations [84]. This indicator can help control diseases, especially pandemics [85]. This issue was also raised in Brehaut's research as one of the indicators that can help increase the efficiency of these dashboards [84]. Infrastructure Infrastructure is the backbone of every system, and the successful adoption of any eHealth system depends on the infrastructural arrangements [86]. The findings of this study revealed that a high percentage of studies had mentioned data warehousing and appropriate web service architecture as necessary infrastructure for dashboard design [67,87]. Given the diversity of systems and data in different formats, the main challenge for dashboard infrastructure is data integration, and creating data warehouses is an appropriate solution to this challenge [88,89]. Access to appropriate software and hardware, use of modern technology, sharing reliable and up-to-date data, and the need for a capable workforce to create and maintain dashboards are other identified components related to dashboard infrastructure [90]. In addition, the necessary infrastructure for creating a dashboard includes access to modern IT software and hardware, continuous and reliable data sharing, and the need for a capable workforce to create and maintain dashboards [13]. Among the challenges associated with PHDs are data quality, big data, information architecture, privacy, and security [91]. The quality of stored data is also one of the critical issues in dashboard infrastructure. Given the importance of data in decision-making at the public health level, the quality of stored data is an essential prerequisite for dashboard infrastructure. Fadahunsi et al. also considered data quality an essential dashboard infrastructure component in two separate studies [92]. Informativeness (accuracy, completeness, interpretability, plausibility, provenance, and relevance), availability (accessibility, portability, security, and timeliness), and usability (conformance, consistency, and maintainability) are key features indicated in these two studies [92,93]. Transparency about data sources and how indicators are calculated is critical for the overall quality, credibility, and reliability of reports. Identifying the sources used and how indicators are calculated in PHDs is essential for transparency about data collection and would help users understand the logic behind the reports [73,94]. Regarding infrastructure, information security was also one of the issues mentioned in a considerable number of sources. Given the integration of various systems at the organizational level and their connection to the dashboard, using data exchange standards for system interaction is an issue that should be considered [95]. These findings are in line with a study by Li Y-CJ et al., who considered electronic data exchange in standard data formats essential for improving data accessibility [96]. Moreover, this study showed that these standards preserve data security, reduce resource waste, and improve the quality of care [96]. Based on the importance and quality of the disclosed information, access control should exist at multiple levels of security/privacy [97].
Implications for policy, practice, and future research This study extracts the design criteria of public health dashboards and proposes some design principles based on the available knowledge in the area. Given the enormous volume of data and the need for quick response in public health situations, this study is a potentially vital source for helping policymakers, developers, public healthcare organizations, and managers to design and develop PHDs as a prerequisite for early response, particularly during a probable pandemic. As pandemic response requires early and robust verification, identifying this potential of dashboards in data management can be helpful. The lesson learned from the COVID-19 pandemic indicates that public health organizations must equip themselves with dashboards for emerging pandemics and many other vital activities for public health promotion. In other words, investing in dashboard software tools and systems, processes, and people who support PHDs could be a tailored practice and intervention for public health policymakers. Exchanging information between healthcare providers and public health organizations and developing an appropriate infrastructure for data exchange are critical for more effective monitoring of epidemic diseases. Clinical information systems should exchange information in real time at a national level to effectively use dashboards at the public health level for monitoring and managing epidemic diseases and taking timely actions. Therefore, it is suggested that governments examine the technical infrastructure (data architectures, structural and content standards, data exchange, security, and data resources) for appropriate data exchange between various clinical systems and the dashboard. Strengths and limitations The present study addresses the principles of designing PHDs and provides a comprehensive view of designing dashboards. In addition, this study investigated all aspects of PHD design, including purposes, content, user interface, types of analysis, and infrastructure, and proposed sub-criteria for each criterion. However, the full text of some articles could not be accessed, and the search was also restricted to articles published in English.
Although scoping reviews are mainly designed to help policymakers figure out the key concepts underpinning a research area and to provide clear working definitions and/or the conceptual boundaries of a topic, the results of this study need to be customized and tailored based on the local public health priorities of countries through Focus Group Discussions (FGDs) and feasibility assessment panels before being applied in the implementation phases. It is also suggested to conduct a study regarding the design and implementation of PHDs according to the income level of countries. The results of this scoping review can open a new window for conducting future systematic reviews to address the feasibility, appropriateness, meaningfulness, or effectiveness of public health surveillance dashboards. Finally, as the descriptive results present a geographical distribution of PHD implementation to create a general understanding and to illustrate a map for policymakers, stakeholders, and researchers to identify the concentration hotspots and the healthcare system's attention to the topic, it is important to interpret the results conservatively to avoid any kind of misinterpretation about the place or type of the included studies. The same limitation applies because the present results were not broken down by country income level (low, middle, and high income), so the findings should be generalized conservatively to the setting of low-income countries, as most of the included studies were conducted in high-income countries. Conclusion Monitoring health, managing epidemics, and taking timely action require real-time information exchange between clinical information systems and PHDs. Therefore, given the volume of data, the need for real-time monitoring and response in public health situations, and disease surveillance during epidemics, it is necessary to pay attention to dashboard design principles to achieve public health surveillance goals. The findings of the current study indicated that design principles for PHDs could be presented in five groups, i.e., considering aim and target users, appropriate content, interface, data analysis and presentation types, and infrastructure. Abbreviations: AIDS, Acquired Immunodeficiency Syndrome; SOA, Service-Oriented Architecture; FGDs, Focus Group Discussions. Fig. 1: Flow diagram of conducting searches, filtering, and paper selection. Fig. 3: A) Public health category, B) Number of articles published per year, C) Level of implementation of PHDs. Table 2: Principles for designing public health dashboards.
2023-07-21T13:06:28.328Z
2024-02-06T00:00:00.000
{ "year": 2024, "sha1": "db335102ceb1411187a2d4b4d745563a60cdccad", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/counter/pdf/10.1186/s12889-024-17841-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38fc3331bc6abd8cdbe1e6fd3febb67ea40d819c", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
259666340
pes2o/s2orc
v3-fos-license
Cambrian lobopodians shed light on the origin of the tardigrade body plan Significance Panarthropoda, the most speciose animal group, consists of three phyla (Euarthropoda, Onychophora, and Tardigrada), all of which are considered to have originated from Cambrian lobopodians. Numerous investigations of the evolutionary origin of euarthropods and onychophorans have been conducted, but the origin of tardigrades (water bears) remains largely underexplored. Here, we present an integrative morphological comparison between tardigrades and lobopodians with a phylogeny of panarthropods including lobopodians and major tardigrade lineages. The results provide insights into how tardigrades evolved their current morphology from the Cambrian lobopodian body plan. This supporting information includes Figures S1 to S4, legends for Datasets S1 to S2, and SI References; other supporting materials for this manuscript include the following. Notes on character coding The phylogenetic data matrix of this study is based on a previous panarthropod character matrix (1), with several references (2)(3)(4)(5)(6)(7), and additional characters are added for tardigrade taxa. Because Miraluolishania has been considered a junior synonym of Luolishania, the matrix for the phylogenetic analysis of this study does not include Miraluolishania. However, Liu and Dunlop (8) presented several morphological differences between Miraluolishania and Luolishania, i.e., the presence of an antenniform structure, the tubercle number per segment, and the number of limb pairs. Pillar structure in the epicuticle (0) absent (1) present The pillar structure in the epicuticle is an important feature of heterotardigrades (9). This structure has been considered a plesiomorphic character, and the presence of the pillar structures at the ventral cuticle of the Orsten stem-group tardigrade fossils supports this idea (10). Some eutardigrades also show the pillar structures in their epicuticle, particularly two Dactylobiotus species, i.e., D. parthenogeneticus and D. selenicus (11). Therefore, due to the possibility of the presence of this structure, D. ovimutans is coded as uncertain. Anteriormost part of body (head) distinguished from posterior trunk part by numerous spines (0) By numerous spines (introvert) (1) Not by numerous spines Modified from character 1 in Shi et al. (7). The body of priapulids and palaeoscolecids is typically divided into an introvert region with scalids, followed by an annulated trunk region (9,10). Strong modification of head shape, excluding proboscides and appendages (0) Absent (1) Present (-) Inapplicable: head distinguished by numerous spines (introvert) (Character 5) Character 8 in Caron & Aria (1). Although Caron & Aria (1) coded Microdictyon as present due to the presence of the very elongated neck, the head part of Microdictyon does not seem to be strongly modified. Additionally, the head of Microdictyon is similar to that of Paucipodia. Therefore, we code Microdictyon as absent. Non-appendicular dorsolateral paired structures on the mid-head Heterotardigrades have filamentous sensory organs called cirri A, which are non-appendicular paired structures located on the mid to posterior part of the head segment. In contrast, eutardigrades do not have filamentous cirri on the head, but they have a sensory field in the same region where cirri A occur in heterotardigrades, which is considered to be a remnant of cirri A. Additionally, luolishaniids also have filamentous sensory structures on the mid-head.
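To make the coding conventions used throughout these notes easier to follow, the sketch below shows how verbal codings translate into matrix cells: 0 and 1 are character states, "-" marks an inapplicable character, and "?" marks an uncertain one. The taxa, the three-character excerpt, and the Python representation are invented for illustration and are not the study's actual data matrix.

```python
# Hypothetical excerpt of a morphological character matrix, showing how
# "present", "absent", "uncertain", and "inapplicable" codings are stored.
characters = [
    "pillar_structure_in_epicuticle",        # 0 = absent, 1 = present
    "head_distinguished_by_introvert",       # 0 = by numerous spines, 1 = not
    "strong_modification_of_head_shape",     # inapplicable if head has introvert
]

matrix_excerpt = {
    "Echiniscus":              ["1", "1", "0"],
    "Dactylobiotus_ovimutans": ["?", "1", "0"],  # pillar structure coded as uncertain
    "Priapulus":               ["0", "0", "-"],  # head-shape character inapplicable
}

for taxon, states in matrix_excerpt.items():
    print(taxon, "".join(states))
```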
Position of dorsolateral paired structures (0) Mid-head (1) Posterior part of the head (2) The first trunk segment (-) Inapplicable: non-appendicular dorsolateral paired structures on the mid-head (Character 15) absent Cirri A typically occur on the head segment in most arthrotardigrades, and on the posterior part of the head in echiniscoideans. Neoarctus displays the cirri A on the first trunk segment. Filamentous sensory structures on the mid-head are present in luolishaniids as dorsolateral paired structures. Club or dome-shaped chemosensory organ on the head (0) absent (1) present Heterotardigrades have two kinds of head sensory organs: three pairs of cirri (internal, external, and cirrus A (lateral cirrus), plus an unpaired median cirrus in arthrotardigrades) and two or three pairs of clavae (primary, secondary (cephalic papillae), and tertiary). While a cirrus is a filament-shaped sensory organ which has been considered a mechanoreceptor (and a possible chemoreceptor), a clava is a club-, rod-, or dome-shaped structure treated as a chemoreceptor with a terminal pore (11). The presence of cirri and clavae on the head is the characteristic feature of heterotardigrades. 18. Unpaired dorsal sensory organ on the middle part of the head (0) absent (1) present One of the important sensory organs on the tardigrade head is an unpaired median cirrus on the dorsomedial region of the head. Although this mechanoreceptor is a key character of arthrotardigrades (12), the other three orders have also been considered to possess a rudiment of this organ (13)(14)(15)(16). Therefore, the last common ancestor of tardigrades likely had the median cirrus (11). Considering the position and function, the sensory organ in tardigrades may be comparable to the frontal organ of the upper stem-group to early crown-group euarthropods Helmetia, Odaraia, Fortiforceps, and Jianfengia (17,18). The lower stem-group euarthropod Kerygmachela shows anterior neural projections which may be related to the frontal organs (19). However, the median cirrus of tardigrades invariably occurs as an unpaired single organ without sclerites, whereas the frontal organs in euarthropods are paired and covered by an anterior sclerite. Therefore, only arthrotardigrades are coded as present. Anterior paired projection (0) absent (1) present Character 58 in Zeng et al. (5). Anterior paired projections are non-appendicular structures on the anterior part of the head, which are found in total-group euarthropods, including Kerygmachela, Pambdelurion, Canadaspis, Cambropycnogon, and Tanazios (20). However, because the possibility of homology between the rostral spines of Kerygmachela and the stylets of tardigrades has been suggested in this study, Kerygmachela is coded as absent. The frontal processes of onychophorans and the frontal filaments of some crustaceans might be homologous to anterior paired projections. The internal cirri and external cirri of heterotardigrades are innervated by the brain, and they are located on the anteriormost part of the head (1). The filamentous sensory structures on the mid-head in luolishaniids are homologous to the cirri A of heterotardigrades. Therefore, luolishaniids were coded as absent. The hypothesis that the tardigrade stylet represents internalized appendages is doubtful, and the homology between the stylets of tardigrades and the rostral spines of Kerygmachela has been raised. Therefore, tardigrades are also coded as absent.
Differentiation of lobopodous trunk limbs into two types (0) absent (1) present (-) Inapplicable: paired limbs (Character 1) absent Character 4 in Caron & Aria (1). Hallucigenia sparsa has three pairs of tentacle-like structures on the "neck" region (between the head and the trunk) (2). Unlike the anterior two pairs, the last pair of tentacle-like structures matches the first dorsal trunk spine, similar to lobopodous trunk limbs. Therefore, due to the last pair of tentacle-like structures, H. sparsa is coded as present. In several lobopodian cases, the posteriormost limbs are relatively shorter, and some show smaller claws. In particular, although luolishaniids have two types of trunk lobopodous limbs, the posteriormost limbs of several luolishaniids, like Luolishania and Ovatiovermis, are shorter than the other "posterior" batch limbs (1). On the contrary, the posteriormost limbs of heterotardigrades have a similar length (or are longer than the anterior limbs) (see Fig. 1B, SI Appendix, Fig. S1D and S1E). Eutardigrades appear to possess much shorter limbs due to partial fusion; however, the total length, including the fused region, is similar to that of the anterior limbs. Length of anterior batch of differentiated lobopodous trunk limbs (0) Longer than the posterior limbs (1) Similar to the posterior limbs (-) Inapplicable: tentacle-like anterior batch of differentiated trunk limbs (Character 31) While the anterior batch of differentiated lobopodous trunk limbs of luolishaniids is longer than the posterior batch limbs, the lengths are similar in tardigrades. Number of adornments of anterior batch of differentiated lobopodous trunk limbs (0) Numerous (1) One (-) Inapplicable: adornments on anterior batch of differentiated lobopodous trunk limbs (Character 35) absent While luolishaniids possess numerous spinules on the anterior batch of differentiated lobopodous trunk limbs, heterotardigrades have a spine or a papilla-like sensory organ on the anterior limbs. Eutardigrades do not have sensory organs on the anterior limbs. Telescopic lobopodous limbs (0) absent (1) present (-) Inapplicable: lobopodous limbs (Character 41) absent Heterotardigrades, especially many arthrotardigrades, possess limbs with partitioning, which are called "telescopic legs". The telescopic legs have been compared to the segmented limbs of arthropods, and thus telescopic leg parts are referred to as coxa, femur, tibia, and tarsus following arthropod terminology (12). Unlike bendable arthropod jointed limbs, the telescopic legs of heterotardigrades are only retractable. A double claw is a characteristic of eutardigrades, which is made up of a large primary branch, a small secondary branch, and a basal track (25). The symmetry of fused double claws is determined by the arrangement of the claw branches on each limb pair: 1 designates the primary branch, and 2 represents the secondary branch. If the claw is marked as 2121, the sequence of claws on a limb is external claw secondary branch (2), external claw primary branch (1), internal claw secondary branch (2), and internal claw primary branch (1). In some tardigrades, such as hypsibiids and ramazzottiids, the primary branch is articulated to the secondary branch with a flexible hinge-like link (26). (7). Digits are the finger-like elements at the tips of the limbs (12). The claws or discs of some arthrotardigrades, such as Archechiniscus, Batillipes, and Dipodarctus, are on the tips of the digits. The claws of onychophorans are on the tip of the 'foot' structure.
Structures on the digit tip (0) Claws (1) Discs (-) Inapplicable: digit (Character 57) absent Batillipes has a disc on the distal tip of each digit. Numerous tooth-shaped structures in the mouth opening (0) Absent (1) Present Parachelans and Ovatiovermis have numerous tooth-shaped structures in the mouth opening. However, the homology between them is doubtful. Therefore, Ovatiovermis is coded as uncertain. Cuticular sensory structures surrounding the mouth (0) Absent (1) Present Tardigrades have a circumoral sensory field (COS) surrounding the mouth, which is different from the circumoral structure, the peribuccal lamellae. Whereas the COS is a sensory structure broadly surrounding the mouth, the peribuccal lamella is at the distal end of the mouth and is a part of the foregut. The bulbous part of the Ovatiovermis mouth, the buccal papillae or spine-like structures of Aysheaia, and the ovate plates of Pambdelurion share several characters with the COS of tardigrades: they are sensory organs surrounding the mouth, not a foregut structure. The scalids of priapulids are also sensory organs surrounding the mouth opening; however, the evidence of homology between the COS and the scalids is insufficient. (2). Due to their uniform distribution on the pharynx, the microspines of the cockroach Supella have been treated as pharyngeal teeth (2). The microspines on Supella's pharynx show a similar morphology to the pharyngeal teeth of Jianshanopodia, Pambdelurion, and Omnidens, i.e., several spines with a base. However, the microspines also occur at other parts of the foregut and even the hindgut (27). In particular, the microspines in the buccal cavity have an identical shape to those in the pharynx (28). The other cockroach relatives possess hairs (setae) with a base, not spines, on the pharyngeal cuticle. Therefore, the homology between the pharyngeal teeth and the microspines is doubtful. A pair of rostral spines (stylet) (0) Absent (1) Present The stylet is a feeding apparatus of the foregut, which has a pair of spines. It is located near the mouth. Apophysis for the insertion of the stylet muscle (AISM) (0) Absent (1) Present (-) Inapplicable: stylet (Character 73) absent; buccal tube absent The AISM is a hook, ridge, or combined structure (25) located on the anterior part of the buccal tube in parachelans. The buccal tube connects the mouth opening and the pharyngeal bulb in tardigrades. Similarly, the lobopodians Onychodictyon ferox (29) and Cardiodictyon catenulum (30) also possess a buccal tube connecting the mouth opening and the pharynx (pharyngeal bulb), which appears to be homologous to that of tardigrades. However, no other lobopodians have been reported to have a bulbous pharynx so far, making it unclear whether the buccal tube is a synapomorphic character of panarthropods or a convergent evolutionary structure. Although Kerygmachela has a pair of rostral spines, due to the absence of the buccal tube, Kerygmachela is coded as inapplicable. In priapulid studies, the buccal tube is described as a narrow cylinder and is considered part of the digestive tube (particularly the foregut) (31)(32)(33), but the possibility of convergent evolution between priapulids and tardigrades (or lobopodians) cannot be ruled out. Some tardigrades exhibit a buccal tube with a flexible posterior part that displays annulations. Kristensen (34) has discussed the morphological similarity of the flexible buccal tubes with annulations between tardigrades and loriciferans, but the homology between them remains uncertain.
Stylet support (0) Absent (1) Present (-) Inapplicable: stylet (Character 73) absent; buccal tube absent The stylet support is a structure that anchors the posterior part of the stylet to the buccal tube. Although Kerygmachela has a pair of rostral spines, due to the absence of the buccal tube, Kerygmachela is coded as inapplicable. Although lateral diverticula structures have been reported from some heterotardigrades (11), the lateral diverticula of heterotardigrades refer to a midgut with sinuous sides, rather than an independent gland on both sides of the gut. Therefore, they are different from the diverticula of euarthropods, and thus heterotardigrades were coded as absent. Gonopore - Anus Eutardigrades have a combined opening of the gonopore and the anus, i.e., a cloaca. External pouch as a seminal receptacle (0) Absent (1) Present A seminal receptacle is a cuticular pocket that stores sperm. In some heterotardigrades, the duct of the receptacle extends to the external part of the body. Basal connection between dorsal spines of the same somite (-) Inapplicable: annulation or segmentation absent Character 2 in Caron & Aria (1). (-) Inapplicable: claws on the digit tip (Character 58) absent Some arthrotardigrades, such as Archechiniscus, Raiarctus, and Styraconyx, have claws with several points on the distal tips of the digits. Yang et al. (3). Fig. S2: Onychodictyon claws with the branch-like base. (A) JS0009; the white box designates (B). (B) Enlarged image of the claw in the white box of (A). (C)-(D) Enlarged images of Fig. 2F and 2G in the main text, respectively.
2023-07-05T06:17:07.901Z
2023-07-03T00:00:00.000
{ "year": 2023, "sha1": "037aeb20bb20a9798c1ccce2879b7dcc61bfcf06", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1073/pnas.2211251120", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "67e251a08b6cff5a88ccbc8050e427dba7d52cd6", "s2fieldsofstudy": [ "Biology", "Geology" ], "extfieldsofstudy": [ "Medicine" ] }
196903831
pes2o/s2orc
v3-fos-license
Aqueous reactions of organic triplet excited states with atmospheric alkenes Triplet excited states of organic matter are formed when colored organic matter (i.e., brown carbon) absorbs light. While these "triplets" can be important photooxidants in atmospheric drops and particles (e.g., they rapidly oxidize phenols), very little is known about their reactivity toward many classes of organic compounds in the atmosphere. Here we measure the bimolecular rate constants of the triplet excited state of benzophenone (3BP*), a model species, with 17 water-soluble C3–C6 alkenes that have either been found in the atmosphere or are reasonable surrogates for identified species. Measured rate constants (kALK+3BP*) vary by a factor of 30 and are in the range of (0.24–7.5) × 10⁹ M⁻¹ s⁻¹. Biogenic alkenes found in the atmosphere – e.g., cis-3-hexen-1-ol, cis-3-hexenyl acetate, and methyl jasmonate – react rapidly, with rate constants above 1 × 10⁹ M⁻¹ s⁻¹. Rate constants depend on alkene characteristics such as the location of the double bond, stereochemistry, and alkyl substitution on the double bond. There is a reasonable correlation between kALK+3BP* and the calculated one-electron oxidation potential (OP) of the alkenes (R² = 0.58); in contrast, rate constants are not correlated with bond dissociation enthalpies, bond dissociation free energies, or computed energy barriers for hydrogen abstraction.
Using the OP relationship, we estimate aqueous rate constants for a number of unsaturated isoprene and limonene oxidation products with 3BP*: values are in the range of (0.080–1.7) × 10⁹ M⁻¹ s⁻¹, with generally faster values for limonene products. Rate constants with less reactive triplets, which are probably more environmentally relevant, are likely roughly 25 times slower. Using our predicted rate constants, along with values for other reactions from the literature, we conclude that triplets are probably minor oxidants for isoprene- and limonene-related compounds in cloudy or foggy atmospheres, except in cases in which the triplets are very reactive. Recent studies have shown that aqueous triplets can be the dominant oxidants for phenols emitted during biomass combustion, with phenol lifetimes on the order of a few hours in fog drops and aqueous particle extracts. There is also evidence that triplets can oxidize some unsaturated aliphatic compounds. Richards-Henderson et al. (2014) measured rate constants for five unsaturated biogenic volatile organic compounds (BVOCs) with the model triplets 3,4-dimethoxybenzaldehyde and 3′-methoxyacetophenone, and they found that rate constants ranged between 10⁷ and 10⁹ M⁻¹ s⁻¹. Other laboratory studies have shown that triplet states of photosensitizers such as imidazole-2-carboxaldehyde and 4-benzoylbenzoic acid can oxidize gaseous aliphatic BVOCs, e.g., isoprene and limonene, and model aliphatic compounds, e.g., 1-octanol, at the air-water interface to form low-volatility products that increase particle mass (Fu et al., 2015; Rossignol et al., 2014; Li et al., 2016; Laskin et al., 2015). However, the atmospheric importance of these types of processes is unclear (Tsui et al., 2017). Additionally, we recently reported that natural triplets in illuminated fog waters and particle extracts are significant oxidants for methyl jasmonate, an unsaturated aliphatic BVOC, accounting for 30 %-80 % of its aqueous loss during illumination. Abundant BVOCs such as isoprene and limonene are rapidly oxidized in the gas phase to form unsaturated C3–C6 oxygenated volatile organic compounds (OVOCs) that include isoprene hydroxyhydroperoxides, isoprene hydroxynitrates, and isoprene and limonene aldehydes (Surratt et al., 2006; Paulot et al., 2009a, b; Crounse et al., 2011; Ng et al., 2008; Walser et al., 2008). Several of these first-generation products have high Henry's law constants, above 10⁴ M atm⁻¹ (Marais et al., 2016), and partition significantly into cloud and fog drops and, to a smaller extent, into aerosol liquid water. There, they can undergo further oxidation by aqueous photooxidants, including •OH, ozone (Wolfe et al., 2012; St. Clair et al., 2015; Khamaganov and Hites, 2001; Schöne and Herrmann, 2014; Lee et al., 2014), and possibly triplets. Our past measurements have shown that steady-state concentrations of 3C* are orders of magnitude higher than •OH in fog waters and aqueous particles, and thus they might contribute significantly to the loss of OVOCs derived from isoprene and other precursors. However, testing this hypothesis requires rate constants for the reactions of triplets with alkenes, which are scarce. To address this gap, we studied the reactions of 17 C3–C6 unsaturated compounds with the triplet state of the model compound benzophenone (Fig. 1). While our 17 unsaturated compounds include alcohols, esters, and chlorinated compounds, for simplicity we refer to them all as "alkenes".
The tested alkenes include BVOCs emitted into the atmosphere as well as surrogates for some of the small unsaturated gas-phase products formed as secondary OVOCs. The goals of this study are to (1) measure rate constants for reactions of the alkenes with the triplet excited state of benzophenone, (2) explore quantitative structure-activity relationships (QSARs) between the measured rate constants and calculated alkene properties (e.g., the one-electron oxidation potential), and (3) use a suitable QSAR to estimate rate constants for triplets with some unsaturated isoprene and limonene oxidation products to predict whether or not triplets are significant oxidants for these species in cloud and fog drops. Chemicals All chemicals were purchased from Sigma-Aldrich with purities of 95 % and above and were used as received; the compound numbers, compound names, and abbreviated names are listed in Table 1. All chemical solutions were prepared using purified water (Milli-Q water) from a Milli-Q Plus system (Millipore; ≥ 18.2 MΩ cm) with an upstream Barnstead activated carbon cartridge. To mimic fog drop acidity (Kaur and Anastasio, 2017), the pH of each reaction solution was adjusted to 5.5 (±0.2) using a 1.0 mM phosphate buffer. Kinetic experiments Bimolecular rate constants of the alkenes with the triplet state of benzophenone (3BP*) were measured using a relative rate technique, as described in the literature (Richards-Henderson et al., 2014; Finlayson-Pitts and Pitts Jr., 1999). [Footnotes to Table 1: (a) One-electron oxidation potential calculated using the CBS-QB3 compound method. (b, c) Lowest transition-state energy barrier for H abstraction by triplet benzophenone, calculated using uB3LYP/6-31+G(d,p). (d) Measured bimolecular rate constant for the alkene reacting with 3BP*, with uncertainties of ±1 standard deviation, determined from triplicate measurements (Table S1 in the Supplement). (e) Listed uncertainty is ±1 standard error; n = 1. (f) The oxidation potential and energy barriers could not be computed for MeJA (17); because the CBS-QB3 method scales as N⁷ (where N is the number of atoms), the larger compound required more computational power than available. (g) Predicted bimolecular rate constant for select isoprene- and limonene-derived OVOCs reacting with 3BP*, determined from the correlation between OP and kALK+3BP*; listed uncertainties are ±1 standard error propagated from the error of the slope of the quantitative structure-activity relationship between oxidation potential and kALK+3BP* (Fig. 3).] The technique involves illuminating a solution containing the triplet precursor (BP), a reference compound with a known second-order rate constant with 3BP*, and one test alkene for which the rate constant is unknown. The reference compound for each alkene was chosen so that the triplet-induced loss rates for the test alkene and reference compound were similar. Buffered, air-saturated solutions containing 50 µM each of the reference and test compounds and 100 µM of BP were prepared, and then 10 mL of this solution was illuminated in a stirred 2 cm, airtight quartz cuvette (Spectrocell) at 25 °C. Samples were illuminated with a 1000 W xenon arc lamp filtered with an AM 1.0 air mass filter (AM1D-3L, Sciencetech) and a 295 nm long-pass filter (20CGA-295, Thorlabs) to mimic tropospheric solar light (Fig. S1 in the Supplement).
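A minimal sketch of the relative-rate relationship underlying this technique is given below, assuming both the test alkene (ALK) and the reference compound (Ref) are lost solely by reaction with 3BP*; the precise form and axis convention of the paper's own Eq. (1) are assumptions here, and only the slope's interpretation as the ratio of the two rate constants follows directly from the kinetics.

```latex
% Standard relative-rate form (a sketch): integrating first-order loss of both
% compounds by the same transient oxidant and taking the ratio eliminates the
% (unknown) triplet concentration.
\[
  \ln\!\left(\frac{[\mathrm{ALK}]_0}{[\mathrm{ALK}]_t}\right)
  \;=\;
  \frac{k_{\mathrm{ALK}+{}^{3}\mathrm{BP}^{*}}}{k_{\mathrm{Ref}+{}^{3}\mathrm{BP}^{*}}}\,
  \ln\!\left(\frac{[\mathrm{Ref}]_0}{[\mathrm{Ref}]_t}\right)
\]
% A linear fit through the origin gives the ratio of rate constants as the slope,
% from which the unknown alkene rate constant follows using the known reference value.
```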
At various intervals, aliquots of illuminated sample were removed and analyzed for the concentration of the reference compound and test alkene using HPLC (Shimadzu LC-10AT pump, Thermo Scientific BetaBasic-18 C18 column (250 × 33 mm, 5 µm bead), and Shimadzu-10AT UV-Vis detector). For each alkene, illumination experiments were performed in triplicate (Table S1) using total illumination times typically between 60 and 150 min. Parallel dark controls were employed with every experiment using an aluminum-foil-wrapped cuvette containing the same solution and analyzed in the same manner as the illuminated solutions. The dark cuvette was placed in a corner of the sample chamber, out of the path of the light beam. As a direct photodegradation control, each alkene was also illuminated (separately) in solution without benzophenone; there was no loss for any of the compounds.

In every case, loss of test and reference compounds followed first-order kinetics. Plotting the change in concentration of the test alkene against that of the reference compound yields a linear plot, represented by Eq. (1); a linear fit with the y intercept fixed at the origin gives a slope equal to the ratio of the bimolecular rate constants, and dividing k_Reference+³BP* by the slope gives k_ALK+³BP*. The measurement technique is illustrated in Fig. S2. While ³BP* makes singlet molecular oxygen (¹O₂*), the latter is an insignificant oxidant of alkenes in our solutions: the concentrations of the two oxidants are similar (McNeill and Canonica, 2016), but our measured rate constants of alkenes with ³BP* are approximately 2500 times faster than the corresponding rate constants with ¹O₂* (Richards-Henderson et al., 2014).

To determine the predicted BDFEs, the neutral (AH_g, AH_aq) and radical species (A•_g, A•_aq) of each alkene and the H radical (H•_g, H•_aq) were optimized in the gas and solvent phases and their differences taken to give ΔG°_solv,AH, ΔG°_solv,A•, and ΔG°_solv,H•, respectively. Based on the thermodynamic cycle shown (Scheme 1), these values were used in Eqs. (3) and (4) to calculate the BDFEs of C-H and O-H bonds. To predict OPs, the neutral (A_g, A_aq) and radical cation (A•⁺_g, A•⁺_aq) forms of each alkene were optimized in the gas and solvent phase, their difference giving ΔG°_solv,A and ΔG°_solv,A•⁺. Based on the thermodynamic cycle shown below (Scheme 2), these values were used in Eqs. (5)-(7) to calculate the OP (i.e., E_ox) of each alkene. Here, n is the number of electrons, F is Faraday's constant (96485.3365 C mol⁻¹), and SHE is the potential of the standard hydrogen electrode (4.28 V) (Tripkovic et al., 2011).
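Because Scheme 2 and Eqs. (5)-(7) are not reproduced in this extracted text, the sketch below only illustrates the general form of such a thermodynamic cycle for the one-electron oxidation potential: the gas-phase ionization free energy is closed with the two solvation free energies, converted to a potential, and referenced to the SHE. The function and all numerical inputs are placeholders, not values from the CBS-QB3 calculations in this work.

```python
# Sketch: one-electron oxidation potential from a thermodynamic cycle.
# Assumes gas-phase free energies (hartree) and solvation free energies
# (kcal/mol) are available from prior electronic-structure runs; the
# numerical inputs below are placeholders, not values from Table 1.

HARTREE_TO_JMOL = 2625499.6      # J mol-1 per hartree
KCAL_TO_JMOL = 4184.0            # J mol-1 per kcal mol-1
F = 96485.3365                   # Faraday constant, C mol-1
SHE_ABS = 4.28                   # absolute potential of the SHE, V

def oxidation_potential(g_gas_neutral, g_gas_cation,
                        dG_solv_neutral, dG_solv_cation, n=1):
    """One-electron oxidation potential (V vs. SHE) for A -> A+ + e-."""
    # Gas-phase ionization free energy
    dG_gas = (g_gas_cation - g_gas_neutral) * HARTREE_TO_JMOL
    # Close the cycle with the solvation free energies of A and A+
    dG_aq = dG_gas + (dG_solv_cation - dG_solv_neutral) * KCAL_TO_JMOL
    # Convert to a potential and reference it to the SHE
    return dG_aq / (n * F) - SHE_ABS

# Placeholder inputs (hartree for gas-phase G, kcal/mol for solvation G)
print(round(oxidation_potential(-310.4012, -310.0897, -5.2, -48.9), 2), "V")
```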
3 Results and discussion

3.1 Alkene-triplet bimolecular rate constants (k_ALK+³BP*)

Figure 1 shows the chemical structures for all 17 alkenes and the triplet precursor benzophenone. The alkenes have molecular weights ranging between 58 and 220 g mol⁻¹ and include 13 alcohols, three esters, and one chlorinated compound. The model triplet precursor benzophenone (BP) has been previously employed in surface water studies, and its triplet state rapidly reacts with aromatics such as substituted phenols and phenyl urea herbicides with rate constants faster than 10⁹ M⁻¹ s⁻¹ (Canonica et al., 2000, 2006). The bimolecular rate constants for the alkenes with the excited triplet state of BP (k_ALK+³BP*) vary by a factor of 30, spanning the range of (0.24-7.5) × 10⁹ M⁻¹ s⁻¹. Values are shown in Tables 1 and S1 in the Supplement and in Fig. S3, in which the alkenes are numbered in ascending order of their reactivity towards ³BP*. Based on their rate constants, the alkenes appear to be broadly split into two groups: the slower alkenes (1-9), whose rate constants lie below 1 × 10⁹ M⁻¹ s⁻¹ and span a range of only a factor of 2.5, and the faster alkenes (10-17), which vary by a factor of 5. Notably, three of the four BVOCs identified in emissions to the atmosphere - 3MBO (12), cHxO (15), cHxAc (16), and MeJA (17) - react rapidly with ³BP*, with rate constants greater than 1 × 10⁹ M⁻¹ s⁻¹.

Three alkene characteristics appear to increase reactivity: internal (rather than terminal) double bonds, methyl substitution on the double bond, and alkene stereochemistry. To more specifically examine the impact of these variables, we compare the rate constants for three sets of alkenes (Fig. 2). The lowest free energy and enthalpy barriers for the abstraction of a hydrogen atom are also shown in Fig. 2 (and in Table 1); while overall these computed barriers are not well correlated with rate constants (discussed below), lower barriers generally correspond to higher rate constants for the sets of alkenes in Fig. 2. The first two sets of compounds in Fig. 2 indicate that internal alkenes react faster with ³BP* than do terminal isomers: cHxAc (16), an internal hexenyl acetate, has a reaction rate constant 11 times faster than its terminal isomer 5HxAc (9). The corresponding alcohols also exhibit the same trend: the internal alkenes cHxO (15) and tHxO (10) react 27 and 5.8 times faster, respectively, than the terminal isomer 5HxO (1). This dependence of reactivity on double bond location has implications for isoprene hydroxyhydroperoxides (ISOPOOHs) and hydroxynitrate (ISONO₂), which have both terminal (β-) and nonterminal (δ-) isomers formed from gas-phase oxidation (Marais et al., 2016; Paulot et al., 2009a, b). Based on our results we expect the δ-isomers to react more quickly with organic triplets than the β-isomers.

Figure 2. Comparison of three sets of alkenes to illustrate how rate constants with the benzophenone triplet state vary with double bond location, stereochemistry, and methyl substitution. The teal numbers on each alkene represent the lowest free energy (ΔG‡) and enthalpy (ΔH‡) transition-state barriers in kcal mol⁻¹ for H abstraction by the triplet benzophenone; these were calculated at the uB3LYP/6-31+G(d,p) level of theory. Though computed barriers (Table 1) are not correlated with the overall rates measured, they broadly match the rate trends within a given set of alkenes in this figure.

Alkene stereochemistry also affects the triplet-alkene reaction rate constant. The data in the middle of Fig. 2 show that cis-HxO (15) reacts nearly 5 times more quickly with ³BP* than does trans-HxO (10), consistent with the lower predicted energy barrier for hydrogen atom abstraction from the cis-isomer. Finally, the addition of electron-donating substituents (methyl groups) on an unsaturated carbon atom also increases the rate constant. This is evident from comparing 2B1O (8) and its methyl-substituted analog 3MBO (12): k_ALK+³BP* is 3.7 times faster with the methyl group (Fig. 2).
Mechanistically, triplet-induced oxidation can proceed via either hydrogen atom transfer or a proton-coupled electron transfer (Warren et al., 2010; Erickson et al., 2015), and the presence of an electron-donating substituent on the double bond likely selectively stabilizes the intermediates (e.g., radical or radical cation) formed from these two processes, as well as the transition-state structures for their formation.

Relationship between k and one-electron oxidation potential

Our next goal was to develop a quantitative structure-activity relationship (QSAR) so that we can predict rate constants for alkene-triplet reactions. To use as predictor variables in the QSARs we computed several properties of the alkenes: bond dissociation enthalpy and free energy for various hydrogen atoms (Fig. S4), free energy and enthalpy barriers for hydrogen atom abstraction (Table 1), and one-electron oxidation potentials (Table 1). Apart from the oxidation potential, none of the other properties correlate well with the measured rate constants (Figs. S5 and S6). While there is no correlation between the rate constants and predicted energy barriers, alkenes with lower predicted free energy barriers (ΔG‡) are predicted to be fast-reacting, with rate constants above 5 × 10⁸ M⁻¹ s⁻¹ (Fig. S6). As shown in Fig. S6, computed barriers predict much larger variation in the rate than observed experimentally, suggesting that the breaking of the C-H or O-H bond does not occur in the rate-determining step for all alkenes.

Of all the properties examined, the one-electron oxidation potential of the alkenes best correlates with the (log of) measured rate constants, with rate constants generally increasing as the alkenes are more easily oxidized, i.e., at lower OP values (R² = 0.58) (Fig. 3). Measured rate constants for 13 of the 16 alkenes lie within (or very near) the 95 % confidence interval (blue lines) of the regression fit, but there are three notable outliers: hexen-1,3-diol (3, HDO), cis-3-hexen-1-ol (15, cHxO), and cis-3-hexenyl acetate (16, cHxAc). The measured HDO rate constant is 3.3 times lower than that predicted by the regression line, while measured rate constants for cHxO and cHxAc are 3.9 and 4.9 times higher, respectively, than predicted. To try to assess why these compounds differ from the others, we calculated the highest occupied molecular orbital of the alkene and the singly occupied molecular orbital of the alkene radical cation (i.e., after oxidation) (Fig. 4). Depending on the system, oxidation is predicted to occur by removing an electron either from the π system of the C-C double bond or from a lone pair on the O atom. This is illustrated in Fig. 4, which shows the HOMO and SOMO structures for HDO (3), wherein the electron is removed from the C-C double bond, and 3B1O (5), wherein the electron is removed from the oxygen atom. However, the three outliers in the correlation do not all fall into just one of these categories: for cHxAc (16) the electron is most likely abstracted from the oxygen, while for HDO (3) and cHxO (15) the electron is likely removed from the π system (Tables S2 and S3). This suggests that the location of electron removal does not control the rate constants.
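Stepping back to the regression itself, a minimal sketch of how such a QSAR can be fit is shown below: log10 of the rate constant is regressed on the one-electron oxidation potential by ordinary least squares. The (OP, k) pairs are invented placeholders of roughly the right magnitude, not the Table 1 data.

```python
# Sketch of the OP-based QSAR: ordinary least squares of log10(k) on the
# one-electron oxidation potential. The (OP, k) pairs below are invented
# placeholders with roughly the right magnitudes, not the Table 1 data.
import numpy as np

op = np.array([2.10, 2.18, 2.25, 2.33, 2.40, 2.48, 2.55, 2.62])          # V vs. SHE
k  = np.array([6.0e9, 3.8e9, 2.6e9, 1.5e9, 9.0e8, 6.5e8, 4.0e8, 2.8e8])  # M-1 s-1

x, y = op, np.log10(k)
slope, intercept = np.polyfit(x, y, 1)

# Goodness of fit and standard error of the slope
resid = y - (slope * x + intercept)
s2 = np.sum(resid**2) / (len(x) - 2)
se_slope = np.sqrt(s2 / np.sum((x - x.mean())**2))
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

print(f"log10(k) = {slope:.2f} * OP + {intercept:.2f}  (R^2 = {r2:.2f})")
print(f"slope standard error: {se_slope:.2f}")
```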
We also examined if the rate of loss of cHxO might be enhanced due to oligomerization, whereby an initially formed cHxO radical leads to additional cHxO loss. Since the pseudo-first-order rate constant of oligomerization should increase with initial cHxO concentration, we measured the rate constant for cHxO loss over a range of initial concentrations (2-50 µM). However, as shown in Fig. S8, the rate constant for cHxO loss does not depend on its concentration, suggesting that oligomerization is an unimportant loss process for cHxO in our experiments. Thus, it is not clear why these three compounds do not fall closer to the regression line in Fig. 3. However, except for 16, all of the alkenes fall within a factor of 4 of the correlation line (gray lines). Finally, even though there is a good correlation between rate constant and OP in Fig. 3, it does not indicate whether these reactions proceed via pure electron transfer, proton-coupled electron transfer, or hydrogen transfer. As discussed earlier, since the predicted energy barriers for hydrogen abstraction do not correlate with measured rate constants (Fig. S6) and appear to split into two groups, uncertainty remains about the mechanism of triplet-induced oxidation of the alkenes.

Figure 3. Correlation between measured bimolecular rate constants for alkenes with the triplet excited state of benzophenone (k_ALK+³BP*) and the computed one-electron oxidation potentials of the alkenes. Numbers on each point represent the alkene numbers in Table 1. Blue lines represent 95 % confidence intervals of the regression prediction. The gray lines bound the region that is within a factor of 4 of the regression prediction; all but one of the alkene values fall within this. Methyl jasmonate (17) is not included in this figure due to computational challenges in calculating its OP (see Table 1).

Figure 4. Diagrams of the highest occupied molecular orbitals (HOMOs) of the alkenes before oxidation, the singly occupied molecular orbitals (SOMOs) after the removal of one electron from the alkenes, and the lowest-energy transition-state structures (‡) of alkenes 3 and 5. Bond dissociation enthalpy (italicized) and free energy (in parentheses) for various hydrogen atoms (in kcal mol⁻¹) for each alkene are shown in the boxes. Numbers in green are the lowest values and thus represent the most labile hydrogen in each alkene. (a) The electron removed during H abstraction of HDO is predicted to come from the π system, but this results in delocalization due to hyperconjugation. (b) The electron removed from 3B1O during H abstraction is predicted to come from the oxygen. See Tables S2 and S3 for HOMO-SOMO structures and Fig. S4 for the bond dissociation enthalpies and free energies for other alkenes.

Predicted triplet-OVOC bimolecular rate constants

We next use the relationship in Fig. 3, along with calculated oxidation potentials, to predict second-order rate constants for ³BP* with a set of unsaturated oxygenated VOCs formed by the oxidation of isoprene and limonene. As shown in Fig. 5, we predict that limonene products generally react faster with ³BP* than do isoprene products. For the five isoprene-derived OVOCs that we considered, rate constants vary by a factor of 17 and range (0.080-1.4) × 10⁹ M⁻¹ s⁻¹ (Table 1, Fig. 5). The δ-isomers of ISOPOOH and ISONO₂, which contain internal double bonds, have lower computed one-electron oxidation potentials and thus higher predicted rate constants compared to the terminal β-isomers. This is similar to the trend observed with the other alkenes (Fig. 2).
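Applying such a QSAR to a new compound, and propagating the standard error of the regression slope as described in the Table 1 footnotes, might look like the short sketch below; the fit parameters and the example oxidation potential are placeholders, not the values behind Table 1 or Fig. 5.

```python
# Sketch: apply the OP-based QSAR to an OVOC whose oxidation potential has
# been computed, propagating the 1-SE uncertainty of the regression slope.
# The fit parameters and the example OP below are placeholders.
import math

slope, se_slope = -2.5, 0.05     # placeholder QSAR slope and its 1 SE, per V
intercept = 15.0                 # placeholder QSAR intercept
op_ovoc = 2.30                   # placeholder computed OP of an OVOC, V vs. SHE

log_k = slope * op_ovoc + intercept
k = 10 ** log_k                                   # M-1 s-1
# d(k)/d(slope) = k * ln(10) * OP, so the slope-only 1-SE uncertainty is:
sigma_k = k * math.log(10) * op_ovoc * se_slope

print(f"k(OVOC + 3BP*) ~ {k:.1e} +/- {sigma_k:.1e} M-1 s-1 (slope error only)")
```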
In the case of isoprene hydroperoxyaldehydes, we were able to determine the oxidation potential for only HPALD2 (22), and its predicted reaction rate constant (±1 SE) of 4.0 (±0.9) × 10⁸ M⁻¹ s⁻¹ is among the lowest of the isoprene-derived alkenes (Fig. 5). We calculated OP values and triplet rate constants for three limonene-derived OVOCs: limonene aldehyde (LMNALD) and two dihydroxy-limonene aldehydes (2,5OH-LMNALD and 4,7OH-LMNALD). Compared to the isoprene-derived alkenes, the rate constants for all three limonene products are high and range (0.89-1.7) × 10⁹ M⁻¹ s⁻¹. All of the limonene aldehydes (as well as the isoprene products) can have several isomers whose calculated oxidation potentials can vary, which affects the predicted rate constant. For example, for 4,7OH-LMNALD (25) the computed oxidation potentials for five of its isomers vary between 2.17 and 2.48 V (Table S4), which leads to a relative standard deviation of 40 % in the predicted rate constants for the various isomers. For each OVOC, the predicted rate constants in Table 1 are for the lowest-energy isomers, whose structures are shown in Fig. S9.

Figure 5. Predicted bimolecular rate constants for a range of limonene and isoprene oxidation products (OVOCs) with the triplet state of BP. Rate constants are estimated from the QSAR with one-electron oxidation potentials (OPs) (Fig. 3). Oxidation potentials used to predict the rate constants here (and in Table 1) are for the lowest-energy isomers of the OVOCs, which are the structures shown here. The structures of some of the other higher-energy isomers are shown in Table S4.

Role of triplets in the fate of isoprene- and limonene-derived OVOCs

Next, we use our estimated rate constants, along with previously published estimated values for rates of other loss processes (Table S5), to understand the importance of triplets as sinks for isoprene- and limonene-derived OVOCs in a foggy-cloudy atmosphere. For our simple calculations we use a liquid water content of 1 × 10⁻⁶ L_aq / L_gas, a temperature of 25 °C, and calculated Henry's law constants from EPISuite (US EPA, Estimation Programs Interface Suite™ for Microsoft® Windows v4.1, 2016) (Table S6). From these inputs, we estimate that between 10 % and 97 % of the OVOCs will be partitioned into the aqueous phase under our conditions (Table S6). The OVOC sinks we consider are photolysis and reactions with the hydroxyl radical (•OH) and ozone (O₃) in the gas phase as well as hydrolysis and reactions with •OH, O₃, and triplets in the aqueous phase (Table S5). Based on typical oxidant concentrations in both phases and available rate constants with sinks, the overall pseudo-first-order rate constants for initial OVOC losses are estimated to be in the range of (0.27-3.0) × 10⁻⁴ s⁻¹, corresponding to overall lifetimes of 0.93 to 10 h (Table S7). The only exception is δ-ISONO₂, which is expected to undergo rapid hydrolysis to form its corresponding diol (Jacobs et al., 2014) with a lifetime of just 0.078 h (280 s). Figure 6 shows the overall loss rate constants and the contribution from each pathway for four of these OVOCs: δ4-ISOPOOH (19), β-ISONO₂ (20), HPALD2 (22), and 4,7-OH-LMNALD (25). Overall, aqueous-phase processes dominate the fate of these OVOCs, accounting for the bulk of their loss, but the contribution of aqueous triplets to OVOC loss depends strongly on the triplet reactivity.
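The partitioning and triplet-sink arithmetic behind these estimates can be sketched as follows, assuming equilibrium Henry's law partitioning at the stated liquid water content; the Henry's law constant, triplet concentration, and rate constant are placeholders of roughly the right order of magnitude, not the Table S5-S7 inputs.

```python
# Sketch of the simple fate estimate: gas-aqueous partitioning from Henry's
# law and the contribution of aqueous triplets to OVOC loss.

R = 0.08206          # L atm mol-1 K-1
T = 298.0            # K
LWC = 1e-6           # liquid water content, L_aq per L_gas

H = 1e5              # placeholder Henry's law constant, M atm-1
k_trip = 1e9         # placeholder triplet + OVOC rate constant, M-1 s-1
c_trip = 1e-13       # placeholder steady-state triplet concentration in drops, M

# Fraction of the OVOC residing in the aqueous phase at equilibrium
f_aq = H * R * T * LWC / (1.0 + H * R * T * LWC)

# Pseudo-first-order loss of the total (gas + aqueous) OVOC via triplets
k_loss_triplet = f_aq * k_trip * c_trip        # s-1
lifetime_h = 1.0 / k_loss_triplet / 3600.0

print(f"f_aq = {f_aq:.2f}, k'_triplet = {k_loss_triplet:.1e} s-1, "
      f"tau = {lifetime_h:.1f} h")
```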
Panel (a) of Fig. 6 shows OVOC loss when we assume that the aqueous triplets are highly reactive, i.e., using rate constants estimated for ³BP* (Fig. 5). Since our recent measurements indicate that, on average, ambient triplets are not this reactive, this scenario likely represents an upper bound for the triplet contribution. In this case highly reactive triplets are the dominant sinks for δ4-ISOPOOH and 4,7-OH-LMNALD, accounting for 74 % and 47 % of their total losses, respectively (Fig. 6a). For β-ISONO₂ and HPALD2, triplets are not dominant but still significant, accounting for 19 % and 24 % of loss, respectively, while other sinks dominate. For the OVOCs for which we calculated rate constants with ³BP* (Fig. 5) but that are not shown in Fig. 6, the triplet contribution varies widely, from less than 1 % for δ-ISONO₂ (21), for which hydrolysis dominates, to 59 % for 2,5-OH-LMNALD (24) (Table S7).

While ³BP* likely represents an upper bound of triplet reactivity in atmospheric waters, our recent measurements indicate that the triplets in fog waters and particles have an average reactivity that is typically similar to 3′-methoxyacetophenone (3MAP) and 3,4-dimethoxybenzaldehyde (DMB). A comparison of our ³BP* rate constants (Table 1) with the average values for the 3MAP and DMB triplets for a subset of the alkenes (Richards-Henderson et al., 2014) indicates that the average 3MAP-DMB triplet rate constants are 1 %-18 % of the corresponding ³BP* values. Thus, to scale alkene-triplet rate constants from ³BP* to the 3MAP and DMB triplets we take the median value of 4 %, which is derived from the MeJA rate constants (Table S8). Figure 6b shows the calculated fates of the OVOCs in the case in which we consider "typical-reactivity" triplets; i.e., we multiply the ³BP* + OVOC rate constants (Fig. 5) by a factor of 0.04. Under these conditions, triplets are minor oxidants (Fig. 6b), accounting for 9 % and 3 % of the loss of δ4-ISOPOOH and 4,7-OH-LMNALD, respectively, and approximately 1 % for the other two OVOCs. This suggests that aqueous triplets are generally minor sinks for OVOCs derived from isoprene and limonene, in contrast to the case for phenols, for which triplets appear to be the major sink (Yu et al., 2014). However, there are several important uncertainties in our determination that triplets are likely minor sinks for oxygenated alkenes. First, the factor we used to adjust from ³BP* rate constants to triplet 3MAP-DMB rate constants (i.e., a factor of 0.04) is quite uncertain: values for the three BVOCs examined range from 0.01 to 0.18 (Table S8). Additionally, there are very few measurements of triplets in atmospheric drops or particles and only from two sites, so it is possible that we are underestimating the average reactivity and/or concentrations of triplets in atmospheric drops and particles.

Figure 6. Estimated pseudo-first-order loss rate constants and corresponding lifetimes (in parentheses) for representative isoprene- and limonene-derived oxidation products in a foggy atmosphere (Tables S5-S7). Colors and data labels indicate the percentage of OVOC lost via each gas and aqueous pathway, including direct photoreaction (hν) and hydrolysis (Hyd); pathways contributing less than 4 % are not labeled. Panel (a) is a likely upper bound for the triplet contributions to OVOC loss in which we assume that all fog triplets are highly reactive, like benzophenone.
Panel (b) shows the more likely contribution from triplets, assuming moderately reactive triplets that are more representative of the average measured in fog waters and aqueous particle extracts (Tables S5-S7). Conclusions To explore whether triplet excited states of organic matter might be important sinks for unsaturated organic compounds in atmospheric drops, we measured rate constants for 17 C 3 -C 6 alkenes with the triplet excited state of benzophenone ( 3 BP * ). The resulting bimolecular rate constants span the range of (0.24-7.5) ×10 9 M −1 s −1 . Notably, the rate constants are high (above 10 9 M −1 s −1 ) for some important green-leaf volatiles emitted from plants: 3MBO, cHxO, cHxAc, and MeJA. Rate constants appear to be enhanced by alkene characteristics such as an internal double bond, cisstereochemistry, and alkyl substitution on the double bond. To be able to predict rate constants for other alkenes, we examined QSARs between our measured rate constants and a variety of calculated properties for the alkenes and 3 BP *alkene transition states. Rate constants are not correlated with bond dissociation enthalpies, free energies, or predicted energy barriers for the removal of various hydrogen atoms, but they are reasonably correlated with the one-electron oxidation potential of the alkenes (R 2 = 0.58). Based on the relationship between rate constants and oxidation potential, we predict that highly reactive triplets will react with first-generation isoprene and limonene oxidation products with rate constants on the order of 10 8 -10 9 M −1 s −1 , with higher values for the δ-isomers compared to terminal β-isomer products. Using these rate constants in a simple model of OVOC chemistry in a foggy-cloudy atmosphere suggests that highly reactive aqueous triplets could be significant oxidants for some isoprene hydroxyhydroperoxides and limonene aldehydes. However, for our current best estimate of typical reactivities, triplets are a minor sink for isopreneand limonene-derived OVOCs. To more specifically quantify the contributions of triplet excited states towards the loss of alkenes in particles and drops requires more insight into both the reactivities and concentrations of atmospheric triplet species. In addition, assessing whether triplets might be important sinks for other organic species requires more measurements of reaction rate constants with atmospherically relevant organics. Data availability. Data are available upon request. Author contributions. CA and RK conceptualized the research goals and designed the experiments. RK and JD performed the laboratory work, while BH and DT planned and performed the computational calculations. RK analyzed the experimental data and prepared the paper with contributions from all coauthors, particularly BH, who wrote the sections on computational calculations and prepared the corresponding figures. CA reviewed and edited the paper. CA and DT provided oversight during the entire process.
2019-04-10T13:12:39.684Z
2018-12-04T00:00:00.000
{ "year": 2019, "sha1": "e154884943f00a96af59a99bea23d8e360af07f1", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/19/5021/2019/acp-19-5021-2019.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "417fd3e8da5cddad9b90923ac72697198f04893c", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
237574076
pes2o/s2orc
v3-fos-license
Late recanalization after complete occlusion of patent ductus arteriosus in a Pembroke Welsh Corgi with von Willebrand disease

Abstract. A 36-month-old, 12.6 kg female Pembroke Welsh Corgi with a cardiac murmur was referred to the Matsubara Animal Hospital cardiology service. Echocardiography revealed a patent ductus arteriosus. The dog underwent ductus arteriosus closure using an Amplatz Canine Duct Occluder. After the operation, we suspected a coagulation and platelet disorder because of the slightly increased haemorrhage during the operation, postoperative purpura around the surgical wound inside of the thigh, and the dog's breed, which is known to be commonly affected with von Willebrand disease (vWD). Subsequently, type 1 vWD was confirmed. Complete occlusion was achieved 1 month after the operation; however, 2 months after the operation, recanalization appeared. Recanalization progressed gradually; cardiac redilation was not detected 6 years after the operation. The late recanalization was most likely associated with vWD. In canine breeds predisposed to developing vWD, pre-operative testing may be indicated prior to patent ductus arteriosus occlusion, though the prevalence of vWD is rare.

vessels were enlarged (Figure 1a,b). Echocardiographic examination revealed that the left ventricle was enlarged but the left atrium was normal (normalized diastolic left ventricular internal diameter of 2.03 and left atrial-to-aortic ratio of 1.25). There was continuous flow in the main pulmonary artery originating from a concurrent left-to-right shunt through a patent ductus arteriosus (PDA) (Figure 1c,d). We diagnosed isolated PDA based on the mentioned findings. One month later, a transarterial embolization of the PDA was performed by implanting an Amplatz Canine Duct Occluder (ACDO) under general anaesthesia according to a previously described protocol (Nguyenba & Tobias, 2007). A surgical cutdown was used to access the right femoral artery. Although the haemorrhage slightly increased, the catheter was inserted without complications. On the angiography images, the pulmonary ostium of the PDA measured 3.75 mm and the ductal ampulla was 9.45 mm. An ACDO with a waist diameter of 6 mm was implanted according to the manufacturer's recommendations. Immediate complete occlusion of the PDA was confirmed by intraoperative angiography (Figure 2).

Figure 1. Thoracic radiography and right parasternal long axis transthoracic echocardiographic images on the first examination. (a and b) Thoracic radiography revealed cardiomegaly with enlarged pulmonary vessels. Continuous colour (c) and spectral (d) Doppler flow.

Figure 2. Intraoperative angiography after implanting an Amplatz Canine Duct Occluder. A complete occlusion of the ductus arteriosus was immediately reached. ACDO, Amplatz Canine Duct Occluder; Ao, aorta; DA, ductus arteriosus.

After the operation, we suspected a coagulation and/or platelet disorder because of the slightly increased haemorrhage during the operation and post-operative purpura around the surgical wound inside of the thigh. Additionally, the dog belonged to a breed that is predisposed to von Willebrand disease (vWD). Subsequently, type 1 vWD was confirmed by genetic testing (Kahotechno, Co., Ltd, 680-41, Iizuka, Fukuoka, 820-0067, Japan).
DISCUSSION Patent ductus arteriosus, a common congenital heart disease in dogs (Schrope, 2015), represents the persistence of the arterial canal that carries blood from the pulmonary artery to the aorta during fetal life and that normally closes within hours after birth in response to hemodynamic and neurohormonal processes (Clyman, 2006). Patent The complications of utilizing an ACDO in dogs are rare but reported to include bacterial endocarditis (Fine & Tobias, 2007), acute embolization (Gordon et al., 2010), and delayed embolization (Carlson et al., 2013). Late complications such as recanalization and development of residual flow are very rare (Broaddus & Tillson, 2010). In dogs, most cases showed no residual ductal flow after ACDO implantation (Nguyenba & Tobias, 2008;Sisson, 2003;Wesselowski et al., 2019), and most of the delayed occlusions were observed in the first 3 months (Nguyenba & Tobias, 2008;Sisson, 2003;Stauthammer et al., 2015). Although there were several studies reporting residual PDA shunting, to the best of our knowledge, there have been no reports regarding late recanalization after complete occlusion in dogs with PDA. In the present case, complete occlusion was confirmed, but 2 months after the operation, recanalization appeared and deteriorated gradually. Von Willebrand disease is a common bleeding disorder. However, we did not suspect vWD before the operation because the general blood examination results showed values within the reference ranges, and the dog had no clinical signs associated with vWD. After placement of an intravascular/intracardiac implant, a series of events take place in which the function of von Willebrand factor is very important (Sigler et al., 2000). First, thrombotic material, consisting of fibrin and blood cells, develops and seals the surface of the implant. This process begins immediately after implantation and usually ends within 1-2 days. Subsequently, fibromuscular cells begin to proliferate, which continues for 2-3 weeks. In the final phase, granulation tissue containing extracellular matrix and fibroblasts and new blood vessels forms (Foth et al., 2009). Von Willebrand factor is a protein that acts as a molecular bridge between platelets and subendothelium as well as a carrier for factor VIII (Denis, 2003;Wagner, 1990), which is important for coagulation. The dog in the present study was discovered to have type 1 vWD in which there is a quantitative deficiency of von Willebrand factor in the circulation. The partial embolization might have caused the temporary complete occlusion; however, late recanalization occurred. In addition, we considered slippage of the device. Carlson et al. (2013) reported a delayed embolization immediately after unrestricted exercise. In our case, the ACDO was sized appropriately based on published recommendations (minimal ductal diameter/waist diameter of the ACDO: 1.6) (Nguyenba & Tobias, 2007), locomotory activity of the dog was severely restricted for a month, and radiographic and echocardiographic examinations showed no change in the location of ACDO. Furthermore, the residual flow appeared through, rather than around, the ACDO. Therefore, slippage of the device was considered very unlikely. Despite the development of residual PDA flow, progressive cardiac enlargement was not noted during the follow-up period, suggesting that the degree of residual flow remained mild and clinically inconsequential. This is consistent with an adequate decrease in shunt flow despite recanalization. 
In conclusion, we report a case of a dog with late recanalization after ACDO placement that was most likely associated with vWD. Recurrent cardiac enlargement was not detected, consistent with a sustained decrease in shunt flow despite recanalization. Von Willebrand disease in dogs has been reported in more than 50 breeds (Littlewood et al., 1987). Dogs with known heritable risk for both PDA and vWD include Pembroke Welsh Corgi, Doberman Pinscher, and German Shepherd. (Fox et al., 1998;Harvey, 2012). In these breeds, screening for vWD may be considered before PDA occlusion, despite the rare prevalence of this disease. CONFLICT OF INTEREST The authors declare no conflict of interest ETHICS STATEMENT The authors confirm that the ethical policies of the journal, as noted on the journal's author guidelines page, have been adhered to. No ethics approval was required as no experimentation was conducted on the treated dog and the consultation was conducted normally. AUTHOR CONTRIBUTIONS Conceptualization ( PEER REVIEW The peer review history for this article is available at https://publons. com/publon/10.1002/vms3.634
2021-09-21T06:22:48.293Z
2021-09-19T00:00:00.000
{ "year": 2021, "sha1": "6a22e3fe6c22505542ca0665d83d4ef36c865a07", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/vms3.634", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73f8346e56b7b3e8bd1d92ed23a4c1181effec34", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232269738
pes2o/s2orc
v3-fos-license
Human-AI Symbiosis: A Survey of Current Approaches

In this paper, we aim at providing a comprehensive outline of the different threads of work in human-AI collaboration. By highlighting various aspects of works on the human-AI team such as the flow of complementing, task horizon, model representation, knowledge level, and teaming goal, we make a taxonomy of recent works according to these dimensions. We hope that the survey will provide a clearer connection between the works on human-AI teams and guidance to new researchers in this area.

Introduction

As AI systems become a part of our day-to-day lives, there is growing interest in having humans and AI form teams in which they can improve the shortcomings and limitations of one another compared with working alone. Although humans and AI can empower each other's abilities and reach better outcomes when they team up, new challenges emerge that are not present with AI systems or humans alone. The primary challenge of an AI agent that is functioning alone is how effectively and flawlessly it achieves its goal. However, in a team of humans and AI assisting each other toward a team goal, the challenges are not limited to the goal itself: the AI system should also have the ability to reason not only about the human's actions but also about their mental models. A variety of works in this area try to have human and AI work together as a team for better outcomes. While the works in this area focus on different challenges, the lack of coherency between them makes it hard to see a clear connection between these works. In this paper, we categorize research efforts along different dimensions to build a taxonomy of related works in this area. To this end, we introduce various aspects of works in human-AI teams and then elaborate on how the recent works in this area match those aspects. This effort can hopefully provide a clear connection between research efforts on human-AI teams from different perspectives and guidance for future research.

Survey Scope and Outline

In this survey, we highlight how different works in the area of human-AI teams can be viewed and organized along different dimensions. First, we emphasize how the complementing flows between the human and the AI, then investigate task horizon and model representation in different works. Also, we organize different works in this area based on their knowledge and capability levels and their teaming goal perspectives. Then, we highlight how recent works can be categorized along these dimensions.

Dimensions of Human-AI Teams

In this section, we highlight the dimensions along which the recent works in the area of human-AI teams can be organized. In particular, we focus on the dimensions that make the works in this area distinct from each other (shown in Figure 1). Thus, we first provide an overview of the different dimensions associated with this direction of research and then delve into a survey of the existing works through those dimensions.

Complementing Flow

When we have a human-AI team, it is really important to see who is complementing whom. In a team in which the members try to compensate for each other's weaknesses, it might be the human who complements the AI agent, the AI agent who complements the human, or a peer-to-peer arrangement where both entities complement each other. Different challenges arise depending on which category is researched.
Therefore, (1) when the human complements the AI system, human inputs should be used to improve the AI system's performance in which case in addition to the challenges of using human inputs to complement, which itself is associated with its own costs, constraints, and quality, and availability issues, the AI system also needs to have some kind of reasoning capability to know how and when to use the human inputs [21]. However, (2) when the AI is complementing the human, not only the AI should help, it's important that the human also recognizes the help. So, since it is really important that the human understands the AI, in addition to the challenges that stem from the task itself and the optimality and effectiveness of the outcome, the interpretability of the AI behavior plays an important role. (3) For a peerto-peer teaming, the AI agent is still helping the human, but the bidirectional communication and feedback between the two entities will help achieve a more effective teaming. Task Horizon and Model Representation Task Horizon is another aspect that separates the works in this area. Generally, the tasks can be categorized as (1) single tasks such as classification and prediction or (2) sequential tasks which are sequential decision-making problems such as planning and scheduling. In one category, we have AI systems that can complement humans for perceptual, diagnostic, and reasoning tasks. These AI systems are usually Machine learning (ML) models trained to complement the strength of the human for predicting the answer to a given task [3]. Examples of these predictive models are a medical decision support system used by a doctor [50; 3], a recidivism predictor that advises a judge [50; 45; 29], or a classification system that helps scientists to understand the distribution of galaxies and their evolution [50; 21]. On the other hand, we have tasks that are sequential decisionmaking problems. This can be a proactive decision-making system that helps the user in constructing a plan [39; 38], a robot which is making a plan to help the human in doing a task [47; 14; 8], or a scheduling system that helps to allocate multiple users to different tasks [31; 52]. Model Representation is another variation in this area. While any background knowledge of the human or the AI agent is considered as the model, since it is hardly possible to have a model that captures all the background knowledge, the abstraction of this is shown in different forms. Thus, this abstraction of the model can be the beliefs or state information of the agent such as its goals and intentions, its capabilities or initial conditions or the reward function [8], the error model [3], features, and decision logic rules of a classifier [29]. It can also include the observation model and the computational capability of the observer [26]. Relationship between Tasks and Models Although for the works mentioned in this survey, we investigate the task horizon and the model representation separately, there is a clear relationship between them. Since the works with the single task such as classification, prediction, etc. are using machine learning models, they usually consider the model as error function, features and probability distribu-tion over output data. However, in sequential tasks, we usually have Markov Decision Process (MDP) models along with other factors such as initial conditions, goals, and observation models. Moreover, the new direction of works in this area considers human trust as another element of the model [11; 51]. 
Knowledge and Capability Level A valid reason for having human-AI teams is to achieve a complementing performance that is better than either one of them on their own. However, this is only possible if the appropriate tools are leveraged. The capability level and knowledge of either is very important to achieve a real complementary performance. For instance, in the case of having the AI agent assisting the human, appropriate reliance is crucial to improve performance in a team [5]. So, over-reliance on a human with limited computational capability or an AI agent with limited knowledge not only cannot improve team performance but can hurt it. The literature in this area considers various scenarios regarding the capability and knowledge levels of the AI agent and the human, and they base their work on them. Therefore, the level of knowledge and capabilities of the AI agent compared to the human can affect the types of problems that can be solved. Works in this area are usually categorized into (1) The AI agent knows more and so its model is the right one (2) The human knows more (3) Both have the same capability and knowledge level and (4) Their knowledge and capability level are not comparable or are unspecified. Teaming Goal A human-AI team can face different challenges depending on its goal. Whether the purpose of the team is to improve the overall performance, the human performance, or that of the AI agent will result in different challenges. Another aspect is the interaction state of the team as the goal of the human-AI team can be affected differently with single interaction versus multiple interactions between the human and the AI. Scope of the Survey Integrating the abilities of humans and AI in a team offers great promise for the development of practical applications. This is a growing field of research with many challenges. The recent works in this area try to address many of the existing challenges, but there are significant differences between the direction of the works in this area such that this makes them seem independent and separated. The four dimensions we proposed in the paper can be a standpoint to see a clear connection as well as the differences between the existing works. While all four mentioned dimensions are significantly important and discriminative, the first dimension alone can bring up distinct challenges for each category. Thus, we describe the recent works through the challenges that arise with the category of the first dimension as the central one, then elaborate other dimensions through them to make a cluster of mutual works. Table 1 is the summary of results from three dimensions. Human Complements AI It is acknowledged that to overcome AI mistakes and limitations the human involvement is necessary. However, using human inputs to improve the AI systems' performance has many challenges. These challenges include factors associated with costs, constraints, quality, and availability when using human inputs to complement, and the need for the AI system to have some kind of reasoning capability to know how and when to use the human inputs [21]. In this section, we discuss the different ways that the human can complement an AI system and the existing challenges and we investigate other introduced dimensions through them. Solving the Task One way to complement an AI with human input is to infuse human intelligence (e.g. inputs from crowd-sourced workers) to improve the accuracy of the given task. 
In such a setting, the challenges would be reasoning about when and where those inputs can be used to reach a better efficacy. Since the human complements the AI system, the works in this line mostly assume that the human (which are crowdsourced workers) has more knowledge and capability in solv-ing the task. Moreover, the teaming goal in this area is how to improve the AI performance while optimizing the cost of collecting information from the human, which result in improving the efficacy of the large-scale crowdsource. Thus, the task horizon is separated into two parts (1) the learning part which is the prediction of the answer, and (2) the inference part for reasoning and planning about the hiring and routing of workers. With this, the task horizon for the first part is a single task classification task given the input data, however, for the second part, it is a sequential decision-making problem using different planning methods. For example, the Crowdsynth effort describes a general system that combines machine learning and decision-theoretic planning to guide the allocation of human efforts in consensus tasks [23]. By collecting multiple assessments from human workers, their goal is to identify the true answer to each task such that the AI agents learn about the task and capability of the workers to make decisions about how to guide and fuse different contributions. Furthermore, they extend Crowdsynth for solving hierarchical consensus tasks (HCT) to find a true answer to a hierarchy of subtasks [22]. They described a general system that uses hierarchical classification to combine evidence from humans in various subtasks with machine perception for predicting the correct answer. They used Monte Carlo planning to reason about the cumulative value of workers for the decision on hiring a worker, and customized it for HCT to constraint the policy space. CrowdExplorer is another extension for the adaptive control of consensus tasks when an accurate model of the world is not available and needs to be learned [24]. CrowdExplorer is using a set of linear predictive models and a novel Monte Carlo planning algorithm to continuously learn about the dynamics of the world and simultaneously optimize decisions about hiring workers and reasoning about the uncertainty over models and task progress in a life-long learning setting. Moreover, other than consensus tasks, one of the important challenges is the ability to make a balance between value and costs of collecting information prior to taking an action. This is the reasoning behind whether to stop or continue collecting information (human inputs). Since the individual observation is weak evidence, the computation of the value of informa-tion where there is a large sequence of evidence is challenging. Monte Carlo value of information (MC-VOI) performs a large look-ahead to explore multiple observation and action sequence with a single sample [22]. Furthermore, unlike the standard approaches which construct a machine learning model to predict the answer to a given task and take the predictive model as fixed and then build a policy for deciding when to use human inputs, the authors in [50] jointly optimize the predictive model and query policy with a combined loss function that puts into account the relative strength of the human and machine. 
Although in the aforementioned works, the model is represented through features, the value of information, and the probability distribution over different answers to the tasks, there are researches in which the crowd-sourced inputs help the planner to build a domain of the model which includes state information, goal and initial state to solve a planning problem [17]. In such works, the challenge is how to exploit the knowledge to address the noisy inputs from the crowds. Troubleshooting To reach a better competency, AI systems should be able to identify and troubleshoot their failures. Using human inputs can help to effectively identify the failures and try to address them accordingly. This can be done either (1) by investigating and identifying the differences between the human and the AI agent in doing the task when both the human and the AI agent may have their own shortcomings. This means the knowledge and capability level of both the human and the AI agent are the same or incomparable, or (2) by getting feedback and assessments from humans to know and address the failures when the human is the expert which categorizes as the human having a higher knowledge and capability level. For instance, the effort on analyzing how human and machine decisions differ and how they make errors on the problem of Recidivism prediction may yield improvements in the Recidivism prediction [45]. So, they used a widely used commercial risk assessment system for the Recidivism -COMPAS, and characterized the agreement and disagreement between the human and the COMPAS by clustering and decision trees, then investigate how combining the differences can reduce the failures. The systematic errors result from the difference between the simulated world and the real world -blind spots-can be addressed by human inputs, because the agent may never encounter some aspects of the real world [34]. In this work, they applied imitation learning to demonstrate data from the human to identify important features that the human is using but the agent is missing and then they used the noisy labels extracted from action mismatches between the agent and the human across simulation and demonstration data to train blind spot models. Regarding the knowledge and capability level and teaming goal, both of these works have an incomparable level of expertise between the human and the AI, and both try to improve team performance as the teaming goal. However, their model representation and task horizons are completely different. [45] represents the model as the error with a single task horizon that is prediction, and [34] has sequential decision-making tasks with features, actions, and rewards represented as the model. Using human intellect, when the human is considered more expert, assessing the system can result in the troubleshooting of the system failures with the goal of improving AI system's performance. The effort by [33], simulates potential component fixes through human computation tasks and measures the expected improvements in the system. The system is first evaluated by crowd-sourced workers then when the workers apply their fixes for the components, the fixed output is integrated into the system, and the improved system is evaluated again by crowd-workers so that the fixes of earlier components are reflected on the inputs of later components. Acting in Unknown Environments One of the characteristics of AI systems that would act naturally is their ability to deal with new environments and tasks. 
For agents to act in new environments, one way is to learn how to act in such environments, so having human inputs like advice or instruction would help significantly for more effective learning. Thus, in such works, the human is considered as an expert with higher knowledge and capability level, and the teaming goal is to improve the AI agent performance. Therefore, the human can be like a teacher that gives instructions to the AI system (student) by suggesting actions that the AI agent can take while learning [46]. The authors proposed a different set of teaching algorithms such as early advising, importance advising, mistake correction, and predictive advising that a human teacher can take to show how they affect the learning speed of an RL agent which is learning how to act. However, their proposed method required the human to continuously monitor the AI agent to know when and where to give advice, so an interactive teaching strategy in which the teacher and the student jointly identify the advising opportunities will address this issue [1]. When the human teacher and AI student interactively train such that the RL agent decides when to ask for attention and the human teacher who is asked for the attention decides what advice to give, it can speed up AI agent learning without the need for constant attention. Moreover, for the AI agent to use human instructions to understand the different aspects of an unknown environment like tasks, goals, subtasks and other unknowns, needs a mechanism for understanding human instructions in natural language [44]. AI Complements Human With the advances in Artificial Intelligence, there is the pervasive use of AI systems to integrate their capabilities with human users. As a result of ubiquitous AI systems which are helping humans in their tasks, the first challenge is regarding the optimality and efficacy of AI systems in doing different tasks and decision making to achieve the desired complementing objectives. However, unlike the works in which the human complements the AI where there was significant attention to the helper's costs and constraints, here the focus shifts more to the understandability of the help. Indeed, the help is considered efficient if AI systems behavior conforms to the human's expectation, and human trust. Although there are an ever-expanding line of works that are investigating the AI agent physical and algorithmic capabilities so that they will be able to participate in a variety of complementing tasks and interactions autonomously [37], in this section, we just talk about different aspects that AI systems can take into account for being interpretable and trustable to be an effective complement toward the human along with discussing how works in this area are different in regard to the introduced dimensions. The main challenge in having more interpretable and trustable systems is how to account for the human mental model. A behavior might be uninterpretable to the human if it's not comprehensible with respect to the human's expectations [8]. This mental model like the AI model can be regarded as the beliefs, state information, goals, intentions, capabilities, reward function, features, and errors, but it might be different from the AI model. For instance, when the human interacts with an AI agent, they make a mental model of the AI agent's error boundary which affects the human's decision as to decide when and where to trust and use the AI agent's complement [3]. 
Thus, the AI agent can account for this mental model to optimize for team performance instead of mere accuracy [2; 4]. Therefore, given the human mental model, the AI agent can behave according to the human's expectations or communicate to change the expectations. The teaming goal for the works in this area is to improve team performance. Although in most of the works the AI agent is the only actor in the team, interpretability will affect the team performance in longitudinal interactions. Moreover, while most of the works in this area try to improve the team performance, and the AI agent is considered to have more capability and knowledge than the human, their task horizon and the model representation are different depending on various interpretable communications. Interpretable Through Behavior The interpretable behavior can be concerned with the plan or the goal. The AI agent in a sequential decision-making task horizon can behave to be understandable and predictable to the human by showing an explicable plan or predictable plan. Where explicability is concerned with the association between human-interpreted tasks and agent actions, predictability is concerned with how predictable the completion of the task is regarding the current action [54]. So, one way to generate explicable behavior is for the AI agent to use plan distance between the expected and agent action [27]. While the works here usually represent the model as state information, goal and initial condition, the human observational model will be added into the model representation if the AI agent tries to express its intentions with legible behavior which enables the human collaborators to infer the goal [14]. In [12], they proposed a gradient optimization technique to autonomously generate legible motion, and used a trust level constraint to control the unpredictability of that motion. Transparent planning is also committing in communicating goal, while the AI agent can communicate their goals through action as efficient as possible regardless of how far this might remove it from the goal [30]. Even though it is proven that legible and predictable behaviors affect the collaboration fluency [13], the AI agent should be able to obfuscate the plan to protect privacy in the case of having adversarial entities [26]. Thus, it is very important that the AI agent synthesizes a single behavior that is simultaneously legible to friendly entities and obfuscatory to adversarial ones [25]. In addition to communicating intentions through behavior, the AI agent can communicate its incapabilities through showing what and why it is unable to accomplish [28]. The mentioned works in this line mostly consider sequential task horizon, however, there are another set of works that account for the human mental model to improve team performance when there is a single prediction task with error boundary as the model [2; 4]. Interpretable Through Explanation It is necessary for the AI agent to be able to provide explanations to its human collaborator to increase the interpretability of its behavior. For a single classification task, the explanation can involve explaining the correctness or rationale of the decisions such as providing faithful and customized explanations of a black box classifier that accounts for the fidelity to the original model as well as user interest [29], or the explanation can concern with analyzing and explaining the details of failure and error [32], which is good for debugging and troubleshooting. 
Moreover, the explanation can be called upon to explain the AI agent's incomprehensible behaviors or plans which are categorized as sequential tasks. This explanation can solve the root cause of inexplicable behavior, in which the AI agent provides explanations to reconcile the human model to its model till the behavior becomes explicable to the human [10; 40]. However, the comprehensible behavior might be infeasible, in which case the explanation can be in the form of expressing incapability. This can come in the form of explaining the unsynthesizable cores of a specified behavior [36; 35; 6], or the absence of a solution to a planning task (unsolvability). Explaining the unsolvability of a planning problem can be in the form of providing a certificate of unsolvability [15; 16], or a more compact and understandable reason through the use of hierarchical abstraction to generate a reason for unsolvability [43]. Furthermore, an AI agent can provide a novel behavior that makes a trade-off between explanation and explicable behavior to combine the strength of each [9; 41]. Bidirectional Complementing Instead of having one of the AI agent or the human be responsible for the task and the other acting as an advisor or observer, here we have cases in which both the AI agent and the human are responsible for the task and each of them helps the other in the task or decision making. So, both the AI agent and the human may alternatively enter the land of one another in performing a task, decision making, or coordination. This is considered as bidirectional complementing because either the communications or the actions are bidirectional. Therefore, we can categorize it into bidirectional communication or behavioral coordination which results in more effective team performance. As a result, the teaming goal for works in this category is all to improve the team performance. Also, most of the works in this category include planning and scheduling tasks which makes the task horizon as sequential tasks. Bidirectional Communication One of the important challenges is when and what to communicate during human-AI collaboration. For example, [48] proposed a CommPlan framework which enables jointly reasoning about the robot's action and communication at its policy in a shared workspace task where the robot has multiple communication options and need to reason in a short time. Also, [19] investigated robot reasoning on how to interact to localize the human model by generating the right questions to refine the robot understandings of the teammate. Other challenges arise when the AI agent and the human participate in an interactive dialogue such as contrastive explanation in the framework of counterfactual reasoning. In a planning problem [42] or a scheduling problem [52], this contrastive explanation would help the understanding of the user who is confused by the agent's behavior or offers and presents an alternative behavior that they would expect. Also, this can be used for model refinement, like the RADAR-X framework that uses the foil raised by the user as evidence for unspecified user preferences and to refine plan suggestions [49]. Behavioral Coordination Behavioral coordination includes both team level and task level coordination. 
In task-level coordination, an AI agent can coordinate its behavior for a serendipitous plan in a cohabitation scenario [7], or it can employ human motion prediction in conjunction with a complete, time-optimal path planner to execute efficient and safe motion in the shared environment [47], and by detecting blind spot of both the human and the AI agent, they can coordinate for safe joint execution by handing off the task to the most capable agent [34]. Also, the AI agent can use nonverbal cues and feedback to signal how it expects the human to act next to enable the human to demonstrate their preferences more effectively [20]. Furthermore, in a team level coordination, the AI agent coordinates at a team level. For example, Mobi is a single interface that enables crowd participants to tackle tasks with global constraints. Mobi allows the users to specify their desires and needs, and produce as output an itinerary that satisfies the mission [53]. AI-mix is an interface that improves the effectiveness of human crowds. It aims at planning and scheduling tasks for crowds by facilitating roles such as steering and interpretation [31]. RADAR, a proactive decision-making system, improves the decision-making experience of the human by providing suggestions that aid in constructing a plan for single [39; 18] and multiple [38] humans. Goal of Teaming It is very important to see what the purpose of the human and the AI agent is in forming a team, so depending on the main purpose of teaming, other factors may significantly change. In other words, depending on the performance they seek and the interaction state, the team goal will be different. Performance goal Despite the commonly accepted assumption that performance goals of individual entities in the team will result in better overall team performance, it is shown that in some cases better individual performance cannot cause better team performance due to incompatibility between them [4]. Thus, when we have a team of human and AI, if the team is formed to improve the team performance, it is very necessary to take into account the whole team performance instead of individual performance. Interaction State is a significant factor that affects the teaming goal. If the human and the AI form a team for a short or single interaction it will affect the setting and general teaming goal differently than longitudinal interactions. For instance, the interpretability concepts are all meaningful when there is longitudinal interactions between the human and the AI. Besides, other concepts such as trust emerge which might not be important in a single interaction. While in a single interaction teaming, both the human and the AI seek immediate rewards, in longitudinal interactions rewards over a longer horizon are important, which influence the teaming strategy significantly. Conclusion This survey provides an overview of the many different directions of works in human-AI symbiosis and the current trends in this area. Generally, human-AI symbiosis is a growing field of research with variety of challenging problems. Since the recent works in this area explored the existing challenges and potential solutions from different perspectives, the lack of clear connection makes them seem independent. This issue limits the use and expansion of one method from one direction to another. 
Therefore, in this paper, we highlighted the various dimensions along which research in this area branches and which, as a result, keep researchers from finding a clear connection between their works. We emphasized some of the important angles in this area: (1) complementing flow, (2) task horizon and model representation, (3) knowledge and capability level, and (4) teaming goal. We noted that when each work in this area is matched to one category of each dimension, we obtain collections of works that agree along all the mentioned directions. With this clustering of works based on their common dimensions, not only are works of a similar nature easily identifiable, but it also creates the potential for works that fall in different clusters to find common ground. We hope that this survey sets a direction for future research and provides a clear connection between the works in the field, such that methods and solutions can be carried across the different dimensions.
2021-03-19T01:15:29.023Z
2021-03-18T00:00:00.000
{ "year": 2021, "sha1": "fbdad96c3c40cbc1f2bf1a8ec15b6ce382c3d593", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fbdad96c3c40cbc1f2bf1a8ec15b6ce382c3d593", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247097105
pes2o/s2orc
v3-fos-license
Short communication: Experimental factors affecting fission-track counts in apatite The tools for interpreting fission-track data are evolving apace, but, even so, the outcomes cannot be better than the data. Recent studies showed that track etching and observation affect confined-track length measurements. We investigated the effects of grain orientation, polishing, etching and observation on fission-track counts in apatite. Our findings throw light on the phenomena that affect the track counts and hence the sample ages, whilst raising the question: what counts as an etched surface track? This is pertinent to manual and automatic track counts and to designing training strategies for neural networks. Counting prism faces and using the ζ calibration for age calculation are assumed to deal with most etchingand counting-related factors. However, prism faces are not unproblematic for counting, and other surface orientations are not unusable. Our results suggest that a reinvestigation of the etching properties of different apatite faces could increase the range useful for dating and lift a significant restriction for provenance studies. Introduction Fission-track dating and temperature-time path modeling are much used thermochronological tools for geological research. The fission-track method rests on counting and measuring the lattice damage trails caused by uranium fission. Latent fission tracks in apatite are ∼ 20 µm long (Bhandari et al., 1971;Jonckheere, 2003) and ∼ 10 nm wide (Paul and Fitzgerald, 1992;Paul, 1993;Li et al., 2011Li et al., , 2012Li et al., , 2014, too thin to observe with an optical microscope. The polished grain mounts are therefore etched to make them visible. It is often taken for granted that factors related to etching and counting are inconsequential, e.g., that counting losses are negligible in slow-etching surfaces such as apatite prism faces. It is also assumed that systematic errors on the track counts cancel out if the sought ages are calibrated against the reference ages of standards (ζ calibration; Hurford, 1990). We believe that, from lack of investigation, there persist certain misconceptions concerning these issues, which lead researchers to overestimate the accuracy of fission-track ages but also to impose undue practical restrictions, such as excluding apatite grains not polished parallel to their c axes from track counts and confined-track length measurements. We report two experiments aimed at a better understanding of fission-track counts and measurements in apatite. Because there is a subjective aspect to the counts (Enkelmann et al., 2005;Jonckheere et al., 2015) and measurements (Ketcham et al., 2015;Tamer et al., 2019), our numerical results must not be generalized. They nevertheless reveal significant trends, which we interpret in the context of a recent etching model and relate to practical dating issues. Experiments and results We cut plane sections from an unannealed and unirradiated Durango apatite at 0 • (prism face; sample P00), 30 • (B60) and 90 • (basal face; B00) to their c axes and mounted them in resin. We ground and polished the sections with 6, 3 and 1 µm diamond suspensions and a 0.04 µm silica suspension and etched them in 10 s steps for 10, 20 and 30 s in 5.5 M HNO 3 at 21 • C to reveal the fossil tracks. Reference points on the mounts allowed us to record the position of each investigated area and return to it after each step. 
At each step, we counted the tracks in transmitted light and measured the track openings in reflected light with a Zeiss Z2m motorized microscope and Märzhäuser stage controlled from a desktop computer running the Autoscan program. Supplement file S1 shows representative images of the different sections at different etch times. Figure 1 and Table 1 compare the track counts at different etch times. Because the same areas were recounted after each step, deviations from the 1 : 1 line reflect actual loss or gain of tracks. The individual deviations are random: a track is lost in one area while one is added in a different area of the same sample. In general, track loss dominates in the basal face (B00), while tracks are gained in P00 and B60. The differences between 10 and 30 s amount to ∼ 10 % of the initial counts. They are smaller from 20 to 30 s etching than from 10 to 20 s but consistent with the initial trend. We interpret this as an indication of a diminishing surface etch rate, linked to decreasing polishing damage with increasing depth below the surface (Kumar et al., 2013; Hicks et al., 2019). The corresponding track counts at 10, 20 and 30 s are little affected by random variation and thus robust; the surface etch rate is therefore a factor meriting further attention. Table 1 lists the intercepts and slopes of geometric mean regression lines fitted to the graphs in Fig. 1. For B00, the intercepts remain low while the slopes decrease with etch time. The implication that the track loss is proportional to the track count is not obvious because higher track counts are not associated with higher uranium concentrations but are due to random Poisson variation. We propose that the track loss is due to the growth and fusion of surface etch pits, which consume the shorter track channels causing losses proportional to the initial number of tracks in each field. Figure 2 illustrates track gain and track loss in a basal face of the Durango apatite. [Figure 2 caption, partial: ... (Stübner et al., 2008), losing contrast until their images dissolve or are no longer recognizable. (c) Track added due to surface etching, characterized by a long, thin etch channel and an undersized etch pit. (d) Two etch pits formed at shallow tracks with a stepped appearance due to intermittent etching at the track extremities (Wauschkuhn et al., 2015b). (e) Measurement of etch pit size.] For P00 and B60, the slopes of the regression lines remain constant at ∼ 1 while the intercepts increase with etch time. A uniform increase, independent of the initial track count, suggests that on average tracks are added due to surface etching. Jonckheere et al. (2019) compared the conventional etch model (e.g., Tagami and O'Sullivan, 2005) with a competing one (Jonckheere and Van den haute, 1999) concerning their implications for the track counts. The first predicts increasing track counts, whereas the latter predicts constant counts.
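As an aside on the geometric mean regression fits reported above (Table 1), the sketch below shows one common way such a fit can be computed (also known as reduced major axis regression). It is a generic illustration with made-up count data, not the authors' code; the function name and the example values are assumptions for the sketch.

```python
import math

def geometric_mean_regression(x, y):
    """Reduced major axis (geometric mean) regression: slope = sign(r) * sd(y)/sd(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)                      # Pearson correlation
    slope = math.copysign(math.sqrt(syy / sxx), r)      # geometric mean slope carries the sign of r
    intercept = my - slope * mx
    return slope, intercept, r

if __name__ == "__main__":
    # hypothetical track counts per field after 10 s (x) and 20 s (y) etching
    counts_10s = [34, 41, 27, 52, 46, 38, 30, 44]
    counts_20s = [37, 43, 30, 55, 50, 40, 31, 47]
    slope, intercept, r = geometric_mean_regression(counts_10s, counts_20s)
    print(f"slope={slope:.3f}, intercept={intercept:.2f}, r={r:.3f}")
```

A slope near 1 with a positive intercept in such a fit is what the text interprets as a uniform gain of tracks independent of the initial count.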
Around that time, the slow-etching faces (cd and de) terminating the channel come to intersect the surface, making the intersections a-c-d and d-e-g convex. This modifies their etching behavior and increases their etch rates, allowing them to stay ahead of the advancing surface for a time (Fig. 3). A residual etch figure can thus persist after the surface has overtaken the latent track. This phenomenon is more pronounced at low etchant concentrations (Jonckheere and van den Haute, 1996). This reconciles our current observations with the latter etch model. It also accounts for the observation that the net rate of addition is not much greater for B60 than for P00 despite its etch rate being more than twice as high (Aslanian et al., 2021). Figure 3. Mechanism of the continuation of an etch pit past the termination of a latent track. Before the advancing surface (a-b-f-g) overtakes the latent track at t, the faces (c-d-e) terminating the track channel have moved ahead, creating a feature that, depending on the etch rates of c-d-e and a-b-f-g, can persist for a time. The duration is extended when, upon intersecting the surface, the intersections a-c-d and d-e-g become convex allowing faster etching orientations to develop (white arrows). This mechanism accounts for the observed increases in the track counts in P00 and B60 within the etch model of Jonckheere et al. (2019, 2022) and Aslanian et al. (2021). Figure 4a-c compares the sizes (long axes) of the track openings in B00, B60 and P00 at 10, 20 and 30 s. Those in the basal (B00) and prism face (P00) have a uniform size, reflected in a narrow distribution. With increasing etch time, the distributions shift to greater values and become left-skewed. We interpret the latter as being due to tracks added by surface etching. Insofar as the distributions are diagnostic, tracks are added at a decreasing rate, suggesting a declining surface etch rate. The track openings in the intermediate face (B60) have a limited size range at 10 s but broader distributions at longer etch times. In contrast to basal and prism faces, the track openings in B60 do not have uniform shapes or orientations (Supplement file S1; Jonckheere et al., 2020). Their long axes therefore increase at different, orientation-dependent rates, stretching their size distribution. Figure 5 shows the envelopes of the etch rate vectors (Aslanian et al., 2021), scaled to show the displacement of a plane surface perpendicular to each vector after 10, 20 and 30 s etching. The vectors radiate from the intersection of a prism plane P-P and a basal plane B-B, both perpendicular to the drawing plane. The elongate diamond shapes are the etch figures formed by the fastest etching faces after 10, 20 and 30 s etching, constructed using the model of Jonckheere et al. (2019, 2022). The intersections of the elongate diamond shapes with the etched prism planes and basal planes give the sizes of the track openings in these surfaces. It thus follows that there exists a definite relationship between the etch pit sizes in a basal plane (D_BAS) and the track openings (D_PAR) in a prism plane. The predicted ratio at 10 s etching is D_PAR/D_BAS = 1.02 µm/4.01 µm ≈ 1/4 (Fig. 5; solid line); the measured ratio, in contrast, is 1.01 µm/3.08 µm ≈ 1/3 (Table 1). The difference is attributed to an initial stage during which polishing damage is etched at a greater rate.
Assuming that this stage lasts < 10 s and removes an equal thickness from the basal and prism surfaces, resulting in D_PAR/D_BAS = 1/3, then a calculation shows that the thickness removed after 10 s amounts to ∼ 0.15 µm. On this assumption, the predicted ratio is D_PAR/D_BAS = 0.96 µm/2.82 µm ≈ 1/3 (Fig. 5; dotted line). The theoretical values of D_PAR and D_BAS for 20 and 30 s etching, calculated from this point on, are also listed in Table 1, for comparison with the measurements. The predicted D_PAR and D_BAS at 10 s, and not just their ratio, are in reasonable agreement with the measured values. At longer etch times, the latter fall behind the predicted values (Fig. 4d) because they include a growing fraction of tracks added after the start of etching (Fig. 4a-b). This has less influence on the D_PAR/D_BAS ratios because both surfaces are affected. For 20 and 30 s etching, the predicted and measured D_PAR/D_BAS ratios are 0.290 (predicted) vs. 0.290 (measured) at 20 s and 0.276 (predicted) vs. 0.273 (measured) at 30 s. The decreasing D_PAR/D_BAS ratios with increasing etch time are a singular consequence of the accelerated etching of the damaged surface. The result is measurable at practical etch times and useful for investigating the effects of polishing on fission-track etching. [Figure caption fragment: ... Table 1. As discussed in the text, their calculation involves a correction for an initial stage of accelerated surface etching due to polishing damage. We assume that in both faces a total thickness of 0.15 µm was removed after the first 10 s etch step, which is 2-3 times the etch rate of the undamaged surfaces. The additional thickness removed during this step is shown by the elongated shaded areas bordered by the dotted lines. The reason for this correction is discussed in the text.] For the second experiment, we cut 14 prism sections from a crystal of Durango apatite. We annealed seven at 450 °C for 24 h to erase the fossil tracks; the other seven retained their full complement of fossil tracks. The annealed sections were irradiated with thermal neutrons in channel Y4 of the BR1 reactor of the Belgian Nuclear Research Center (SCK·CEN; φ_TH ≈ 10^16 cm^-2) to produce induced fission tracks. A section with fossil tracks was paired with one containing induced tracks and annealed for 24 h at temperatures of 183, 231, 271, 291, 304 and 313 °C; the remaining sections were not annealed. The samples were mounted in resin, ground to expose internal surfaces, and polished with 6, 3 and 1 µm diamond suspensions and 0.04 µm silica suspension to the highest standard attainable with our equipment and expertise. Each mount was equipped with reference points and etched for 20 s in 5.5 M HNO3 at 21 °C. Our samples also included four prismatic sections of Durango apatite from an inter-lab experiment. The pre-annealing, neutron-irradiation and partial-annealing conditions are given in Ketcham et al. (2015). These apatite sections were also mounted, ground and polished as described. We carried out track counts in transmitted light and reflected light on the same areas, with a Zeiss Z2m microscope with a Märzhäuser motorized stage connected to a desktop computer. The Autoscan software was used for stage control and for recording the positions of the counted areas but the track counts were done at the microscope at an overall magnification of 800×. Figure 6 shows reflected-light (RL) and transmitted-light (TL) images of the same areas in one unannealed section and five with different degrees of partial annealing.
The RL images show numerous near-identical features (RL features). In the samples annealed at ≤ 271 • C, most -but not all -RL features correspond to the openings of unmistakable fissiontrack channels in TL (TL tracks). To our knowledge, the RL features that do not correspond to TL tracks have not been reported before, and it is reasonable to question if they are actually fission tracks. Then again, the shallow RL features would not be distinguishable in less well polished surfaces. Apatite samples are often not polished to the standard of our present samples, i.e., a nano-polish with 0.04 µm silica suspension, until no scratches are visible with RL Nomarski differential interference contrast, although faint polishing scratches reappeared after etching (Fig. 6). A second reason for shallow RL features not to have been reported is that they cannot be counted in transmitted light, whereas track counts in reflected light are uncommon. Moreover, an operator observing them may be inclined to dismiss them, either as not being tracks or as being uncertain or impossible to count. The RL features are shallower than an etch pit in a prism face after 20 s etching (Fig. 4) and lack the distinctive track channel, which affords them their uniform appearance. However, none of this is reason enough for concluding that they are not tracks. Our reflected-light counts were performed on the assumption that each distinct RL-feature corresponds to the surface intersection of a continuous track or of a section of a segmented track (Gleadow et al., 1983;Green et al., 1986). The TL counts of the most annealed apatite sections required some judgment but presented no more difficulties than routine counts of unproblematic geological samples. Table 2 summarizes the RL and TL counts; Fig. 7 plots the normalized RL counts (r RL = ρ RL /ρ RL,0 ) against the normalized TL counts (r TL = ρ TL /ρ RL,0 ). The TL counts are normalized to the RL counts of the unannealed samples for the purpose of comparing TL and RL. It is significant that, with few exceptions, the RL and TL densities have standard deviations close to those of a Poisson distribution (σ/σ P ≈ 1), irrespective of the ρ TL /ρ RL ratio, as expected for products of a radioactive process. It is most improbable that defect swarms possess statistical properties indistinguishable from those of the actual fission tracks in the same samples. Jonckheere and Van den haute (2002) calculated from their projected-length distributions which fraction of the tracks intersecting internal and external apatite prism faces and mica external detectors is counted (counting efficiencies; ηq factors). The results showed that ηq ≈ 0.90 for an internal surface, ηq ≈ 1.00 for an external surface and ηq ≈ 0.90 for an external detector. The authors concluded that fission-track counts are in much greater measure governed by an observation threshold than by the etching properties of the tracks and the mineral (v T , v B ). They proposed that the observation threshold corresponds to a critical depth, z. Shallow tracks lack the shape and contrast to be identified as fission tracks in transmitted light, as Figs. 2 and 6 show. The fact that shallow surface tracks are the most abundant in an internal surface and external detector but almost absent in an external surface explains their relative counting efficiencies (Dakowski, 1978;Iwano and Danhara, 1998;Jonckheere and Van den haute, 1998;Soares et al., 2013). The relationship between r RL and r TL presents two distinct trends ( Fig. 7a and b). 
In the interval 0.65 ≤ r_TL ≤ 1.00 (Fig. 7a), there is a strong correlation between r_RL and r_TL, with r_RL values 5 %-10 % higher than r_TL values. This applies to fossil and to induced tracks over a wide range of track densities (Table 2: ρ_TL = 0.127-2.923 × 10^6 cm^-2; ρ_RL = 0.134-3.016 × 10^6 cm^-2). A geometric mean regression line (r_RL = 0.931 r_TL + 0.125; r = 0.956) has a slope < 1 and positive intercept at some distance from the data. [Figure 7 caption, partial: ... Table 2, polished to a final high finish with 0.04 µm silica suspension and etched for 20 s in 5.5 M HNO3 at 21 °C. The induced track densities are normalized to those of the unannealed samples and those of the fossil tracks to 0.89× that of the unannealed sample, to account for natural annealing. A-A: theoretical relationship for an observation threshold z = 0 µm; B-B: theoretical relationship for z = 1 µm; C-C: geometric mean regression line to the data before the break in slope (a; 0.65 ≤ r_TL ≤ 1); D-D: mean r_RL value of the data past the break in slope (b; r_TL ≤ 0.65).] We interpret the offset between r_RL and r_TL as reflecting the fact that shallow tracks, not observed in TL, were counted in RL. The η_q factor for an internal surface is given by η_q ≈ 1 - 2(z/l) + (z/l)^2 ≈ 1 - 2(z/l), wherein z is the critical depth and l the mean track length (Jonckheere and van den Haute, 1999). In the case that all the tracks are counted in RL: r_TL/r_RL = η_q ≈ 1 - 2(z/l) (Eq. 1). Equation (1) implies that an observation threshold, in the form of a minimum depth for counting a track in TL, accounts for the offset between r_RL and r_TL and for their correlation. For fixed z, the difference between r_RL and r_TL increases a little with decreasing l. On average, our results indicate that z ≈ 0.60 µm (0.89 ≤ η_q ≤ 0.93 for 10.5 µm ≤ l ≤ 16.5 µm), which is less than the depth or width of an etch pit in a prism face after 20 s etching (Figs. 3 and 4). Accounting for 10 % track loss by etching (η = 1 - (v_B/v_T)^2; e.g., Hurford, 2019) requires a critical angle θ_C = arcsin(v_B/v_T) > 15° and tracks with cone angles > 30°, instead of the 1-8° angles measured by Aslanian et al. (2021). The TL count collapses in the interval 0 ≤ r_TL ≤ 0.65, while the RL count exhibits little change (Fig. 7b). The change from correlated to uncorrelated TL and RL counts occurs at the point at which tracks at high angles to the c axis break up in a string of etchable segments separated by unetchable gaps (r_TL ≈ 0.65; Watt et al., 1984; Green et al., 1986; Green, 1988) or undergo accelerated length reduction (Donelick et al., 1999; Ketcham, 2003). Figure 8 illustrates how the segmentation, combined with an observation threshold, accounts for the break in slope and for the ρ_RL trend. Before break-up (Fig. 8a), all tracks intersecting the surface are counted in RL, but only those extending below the threshold depth z are also counted in TL. This accounts for the correlation as well as the offset between ρ_TL and ρ_RL. Following break-up (Fig. 8b), a fraction of the tracks that extend below z can no longer be etched from the surface over their entire lengths, causing ρ_TL to plummet without affecting ρ_RL, terminating their correlation. At advanced annealing stages, the etchable sections are further shortened causing a rapid decrease in those exceeding the TL threshold (ρ_TL) while having little effect on ρ_RL (Fig. 8c). We emphasize that the proposed mechanism is conceptual. The assumption that all surface tracks are counted in reflected light, irrespective of the extent of annealing, may be too radical.
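The counting-efficiency relation quoted above, η_q ≈ 1 - 2(z/l) + (z/l)^2, lends itself to a quick numerical check. The sketch below, which is only an illustration of that relation and not part of the study's workflow, evaluates η_q for a given observation threshold z and mean track length l, and inverts it to estimate z from an assumed TL/RL count ratio; the input values are illustrative.

```python
def eta_q(z, l):
    """Counting efficiency for an observation threshold z and mean track length l (same units)."""
    return 1.0 - 2.0 * (z / l) + (z / l) ** 2      # algebraically equal to (1 - z/l)**2

def critical_depth(eta, l):
    """Invert eta_q = (1 - z/l)**2 for z, given an observed TL/RL count ratio eta."""
    return l * (1.0 - eta ** 0.5)

if __name__ == "__main__":
    z = 0.60                      # µm, critical depth quoted in the text
    for l in (10.5, 16.5):        # µm, mean track lengths spanning the range quoted in the text
        print(f"l = {l:4.1f} µm -> eta_q = {eta_q(z, l):.3f}")
    # back-calculate z from a hypothetical ratio r_TL / r_RL = 0.91 at l = 14.5 µm
    print(f"z = {critical_depth(0.91, 14.5):.2f} µm")
```

Running the first loop reproduces the 0.89-0.93 range of η_q stated above for z ≈ 0.60 µm, which is how a measured r_RL-r_TL offset translates into an estimate of the observation threshold.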
On the other hand, the sudden breakdown of the TL track densities due to the combination of segmentation and an observation threshold also provides an explanation for the break in slope in plots of reduced mean confined-track lengths against normalized TL track densities (Watt et al., 1984; Watt and Durrani, 1985; Green, 1988; Ketcham, 2003). At the same time, it underlines the tenuous character of empirical fits and their dependence on observation criteria and in part explains the disagreement between different solutions (see Table 5 and Fig. 5 of Wauschkuhn et al., 2015a). Figure 8. The effect of break-up of fission tracks due to the appearance of unetchable gaps in the course of progressive annealing on the number of tracks counted using transmitted (TL) and reflected (RL) light. We assume for the purpose of illustration that all the tracks that intersect the surface are counted in reflected light but only those that reach at least a depth z below the surface have enough optical contrast to be counted in transmitted light. (a) Continuous tracks: the RL count is proportional to the mean track length l (ignoring anisotropy), while the TL count is low by a fraction proportional to (z/l), which varies little with l. (b) Initial break-up: there is no loss of tracks in RL; (1) short tracks invisible in TL remain invisible, but (2) some long tracks can no longer be etched over a sufficient distance from the surface to be visible in TL; (3) if their longer central segments intersect the surface they continue to be counted in TL; the corresponding sections in the grain interior contribute most to the mean confined-track length. (c) Advanced annealing and break-up: the shortest sections remain countable in RL, but none reaches far enough below the surface to be counted in TL. The sketch does not aim to depict the actual dimensions or proportions of etched fission tracks. Discussion and conclusion We submit this contribution from a concern that, while the tools for interpreting fission-track data are evolving, the calculated ages, age components and thermal histories are only as good as the track counts and the measured track lengths. Measuring and counting fission tracks requires etching to make them accessible for microscopic examination. Track etching is often regarded as an inconsequential sample preparation step. However, recent studies that have taken up the twin issues of etching and observation confirm that both have an effect on confined-track lengths (Jonckheere et al., 2007, 2017; Tamer et al., 2019; Tamer and Ketcham, 2020; Aslanian et al., 2021; Ketcham and Tamer, 2021). Our results show that etching and observation also have consequences for the track counts, which we cannot be confident of evading by selecting apatite prism faces and adopting the ζ calibration for age calculations. Besides being inadequate for the purpose, both measures have drawbacks. Selecting prism (scratched) faces for dating often implies that a large fraction of the grains in a mount is ignored. This can lead to reduced grain counts, which is a particular problem for distinguishing age components in a mixture. Grain selection based on shape can also cause an age component to be missed. The drawbacks of the ζ calibration are of a different nature (Hurford, 1998; Enkelmann et al., 2005; Soares et al., 2013; Jonckheere et al., 2015; Iwano et al., 2018, 2019); ζ is an efficient workaround for the calibration problem, but it is just that: it circumvents difficulties without addressing them.
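For context on the ζ calibration discussed here, the sketch below shows the commonly used form of the ζ age equation for the external-detector method. It is a minimal illustration only: the density and ζ values are placeholders rather than data from this study, and the decay constant is the value commonly adopted for 238U.

```python
import math

LAMBDA_D = 1.55125e-10   # commonly adopted total decay constant of 238U, per year

def zeta_age(rho_s, rho_i, rho_d, zeta, g=0.5):
    """Zeta age in years: t = (1/lambda_D) * ln(1 + lambda_D * zeta * g * rho_d * rho_s / rho_i)."""
    return math.log(1.0 + LAMBDA_D * zeta * g * rho_d * rho_s / rho_i) / LAMBDA_D

if __name__ == "__main__":
    # placeholder inputs: spontaneous and induced track densities (cm^-2),
    # dosimeter track density (cm^-2), and a zeta factor (a cm^2)
    t = zeta_age(rho_s=1.2e6, rho_i=2.4e6, rho_d=8.0e5, zeta=350.0)
    print(f"apparent age ~ {t / 1e6:.1f} Ma")
```

The point made in the text is that ρ_s and ρ_i enter this calculation directly, so any etching- or observation-related bias in the counts propagates straight into the age, whether or not ζ was determined against age standards.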
It must be taken on trust that it deals with etching-and counting-related factors under all circumstances. Our findings provide no solution. It is doubtful that there is a single solution for all polishing, etching and counting protocols or for all samples. Our results do illustrate how simple experiments throw light on the factors affecting the track counts and, hence, the sample ages. This is relevant to the advantages and disadvantages of manual and automatic track counts (Gleadow et al., 2009Enkelmann et al., 2012) and to designing training strategies for neural networks (Nachtergaele and De Grave, 2021). It is, in general, useful for evaluating the input, and thus the output, of modeling programs. Grain orientation, polishing finish, etching conditions (time) and observation method are all shown to influence the fission-track counts in apatite. Prism faces are not unproblematic for counting tracks and other orientations are not per se useless. Faster-etching surfaces, in which etch pits do not form at the track-surface intersections (Jonckheere et al., 2020(Jonckheere et al., , 2022 can indeed present practical advantages, in addition to the numerical advantage of including them. Their fission-track properties are the subject of ongoing studies. Our results also support the fact that fossil and induced fission tracks are discontinuous towards their tips and that individual segments remain etchable after annealing and break-up. Data availability. All raw data are available in Supplement file S2. Author contributions. RJ conceived the experiments, and CA and BW carried them out. They interpreted the results together. CA prepared the figures and the tables. RJ wrote the first draft of the paper. CA, BW and LR discussed and revised it. CA handled the reviews and made the corrections.
2022-02-26T00:23:31.965Z
2022-02-23T00:00:00.000
{ "year": 2022, "sha1": "04b8a790c8d158f399bc9f208915ab8e95a02e91", "oa_license": "CCBY", "oa_url": "https://gchron.copernicus.org/articles/4/109/2022/gchron-4-109-2022.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ac544378a136f8d0f4f4a49a81b0188827485537", "s2fieldsofstudy": [ "Geology", "Materials Science" ], "extfieldsofstudy": [] }
237314315
pes2o/s2orc
v3-fos-license
Peri-Implantitis Regenerative Therapy: A Review Simple Summary Regenerative therapies are one of the options to treat peri-implantitis diseases that cause peri-implant bone loss. This review reports classic and current literature to describe the available knowledge on regenerative peri-implant techniques. Abstract The surgical techniques available to clinicians to treat peri-implant diseases can be divided into resective and regenerative. Peri-implant diseases are inflammatory conditions affecting the soft and hard tissues around dental implants. Despite the large number of investigations aimed at identifying the best approach to treat these conditions, there is still no universally recognized protocol to solve these complications successfully and predictably. This review will focus on the regenerative treatment of peri-implant osseous defects in order to provide some evidence that can aid clinicians in the approach to peri-implant disease treatment. Introduction Due to the increasing number of dental implants placed every day in clinical practice, the biological complications related to these treatments are increasing. These complications range from inflammation and bleeding upon probing (BOP) to severe peri-implant bone resorption and implant failure [1]. Despite many investigations aimed at identifying the best approach to treat these conditions, there is still no universally recognized protocol to solve these complications successfully and predictably. The techniques available for the clinicians can be divided into non-surgical and surgical. Among the surgical options, the two main approaches are resective and regenerative treatments. Several methods have been explored to determine the most predictable and successful treatment protocol for arresting or reversing the loss of peri-implant bone. These methods included non-surgical, resective, and regenerative treatments along with various methods of adjunctive surface decontamination [2]. Generally speaking, if the peri-implant defect does not have bone walls or has a supra-bony component, the resective approach is usually preferred. The focus of this review will be the regenerative treatment of peri-implant osseous defects. The goals of this article will be to answer questions to assist clinicians pursuing peri-implant disease treatment: is any product superior to the other? Should a membrane be added to the graft? Is any method of decontamination superior? The authors reviewed recent studies on peri-implant regeneration and their outcomes, with background studies that led to the current knowledge of materials and techniques. Although this paper does not represent a systematic review, an effort was made to include as many scientific references on peri-implantitis surgical regenerative treatments. An electronic search in the database PubMed (National Library of Medicine) was performed. English was the only language included and the search was concluded in February 2021. The list of references was screened in order to exclude papers that did not include regenerative treatment of peri-implantitis. The following search terms were employed: ((peri-implant disease OR periimplant disease OR peri-implantitis OR peri implantitis) AND (guided bone regeneration OR regeneration OR re-osseointegration)) and (re-osseointegration OR reosseoitegration). The articles obtained from the electronic searches (857) were included only if they mentioned regenerative techniques on the surgical treatment of peri-implantitis. 
First, titles, and abstracts were assessed and those fulfilling the eligibility criteria were included. Secondly, the full texts were obtained and evaluated by the authors. References of systematic reviews was also obtained and reviewed, in case they were not present in the initial search. Osseointegration and Re-Osseointegration One important factor in evaluating implant placement success is osseointegration [3][4][5]. Implant osseointegration was defined as the "direct structural and functional connection between ordered living bone and the surface of an implant, without intervening fibrous tissue" [6]. Like osteogenesis, osseointegration is also comprised of bone formation and remodeling. In a rat study, Guglielmotti et al. found de novo woven bone around the implant six days after placement. Lamellar bone becomes present at 12-13 days, which signifies bone maturation. With additional bone formation occurring, the osseointegration process was still observed over two months post-implantation [7]. Implant osseointegration is affected by several factors. The characteristics of implant/tissue interface are considered as one of the critical local factors for osseointegration. Besides the implant surface treatment, the utilization of graft materials, platelet-rich plasma (PRP), and collagen materials can be considered local factors that may improve osseointegration [8]. Systemically, there are factors that might impair osseointegration (anemia, liver alteration, diabetes), as well as some systemic drug administration, may impair osseointegration (radiation therapy) or improve the osteogenic response (dexamethasone application) [9][10][11][12]. Systemic factors like blood cholesterol, glucose, and vitamin D levels may also contribute to bone healing around the implants [13]. The classic method for verifying peri-implant osseointegration is light microscopy analysis of undecalcified sections of the bone to implant connection [14]. This method involves a qualitative and quantitative analysis. The qualitative analysis focuses on the identification and description of different tissues, specifically mineralized and unmineralized fibrous connective tissues. The quantitative analysis is defined by histomorphometry as it describes the characteristics at the bone-implant junction and in the surrounding peri-implant bone [15]. The standard parameters analyzed are the bone area fraction occupancy, bone-to-implant contact (BIC), and the mineral apposition rates. These parameter outcomes are related to the quality of the histological specimens and to the recognition of artifacts that can falsify the true nature of the bone-implant interface [16]. BIC has always been measured either histomorphometrically or radiographically. If the disease affects peri-implant bone, causing its resorption, the regenerative therapy is aimed at repairing and restoring the missing peri-implant structures through reosseointegration. Some authors defined this term as the establishment of de novo osteogenesis and osseointegration [17], especially after peri-implant bone loss and over a previously bacterial contaminated implant surface [2]. 
Previous studies have shown several factors affecting the treatment of peri-implantitis and re-osseointegration outcomes; the most important were the efficacy of biofilm removal, quality of implant surface decontamination and conditioning, successful defect site correction for adequate oral hygiene maintenance, effective plaque control, and use of grafts with growth factors for tissue regeneration [18]. In general, achieving re-osseointegration and long-term stability are considered the ultimate goals of peri-implantitis treatment [19]. Peri-Implant Diseases Implant status ranges from clinical health to implant failure and loss. In particular, implants are classified as healthy, with peri-implant mucositis, or with peri-implantitis. The review by Araujo and Lindhe in the 2017 World Workshop [20] described the healthy peri-implant mucosa as comprised of either a keratinized (masticatory mucosa) or non-keratinized epithelium (lining mucosa) with underlying connective tissue. The clinical criteria available to assess and diagnose implant conditions are the same used to assess and diagnose periodontal conditions around teeth: probing depth (PD), bleeding on probing (BOP), suppuration (SUP), and radiographs. These can identify an inflammatory status and periodontal/peri-implant bone loss. The same authors stated that peri-implant health requires the absence of clinical signs of inflammation such as erythema, swelling, and bleeding on probing (BOP). Radiographic evidence of crestal bone changes around implants is important when differentiating peri-implant health from disease. When inflammatory signs appear (BOP, erythema, soft tissue swelling), a diagnosis of peri-implant mucositis (PIM) can be made. When PIM is associated with the progressive loss of supporting peri-implant bone, the diagnosis of peri-implantitis (PI) is established [1]. It is generally accepted that 0.5 to 2 mm of crestal bone loss during healing is considered physiological bone remodeling following implant installation and initial loading [21]. However, any additional radiographic evidence of bone loss of more than 2 mm after the placement of the prosthetic supra-structure would suggest PI [21]. Although the conversion from an inflammatory process identified as PIM to PI is not well understood, it is generally agreed that both diseases share the same infectious etiology through the development of biofilm [1]. If the lesion is left untreated, the inflammatory process can lead to progressive peri-implant bone loss and implant failure. In the absence of documentation from the time of implant placement to the time of disease manifestation, a radiographic bone level ≥ 3 mm in combination with BOP and PD ≥ 6 mm is indicative of peri-implantitis [21] (Figure 1).
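The case definition summarized above (BOP/suppuration, PD, and radiographic bone level when baseline records are unavailable) can be restated as a simple decision rule. The sketch below is only a schematic restatement of those thresholds for illustration; it is not a clinical algorithm, and the function name and return labels are assumptions made for the example.

```python
def classify_implant(bop_or_suppuration, probing_depth_mm, radiographic_bone_level_mm):
    """Simplified triage following the thresholds quoted in the text (no baseline radiographs available)."""
    if not bop_or_suppuration:
        return "peri-implant health"
    if probing_depth_mm >= 6 and radiographic_bone_level_mm >= 3:
        return "peri-implantitis"
    return "peri-implant mucositis"

if __name__ == "__main__":
    print(classify_implant(True, 7, 4))    # -> peri-implantitis
    print(classify_implant(True, 4, 1))    # -> peri-implant mucositis
    print(classify_implant(False, 3, 0))   # -> peri-implant health
```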
Figure 1. Representation of peri-implant clinical parameters that, associated, can lead to a diagnosis of peri-implantitis. In the sequence above, BOP/Suppuration (a), radiographic bone level ≥ 3 mm (b) in combination with PD ≥ 6 mm (c). Peri-Implantitis Treatment Factors The goal of peri-implantitis treatments is re-osseointegration or bone fill of the osseous defect to provide support to peri-implant soft tissue and thereby improve esthetic outcomes [2,22]; surgical regenerative approaches are indicated to achieve this goal. Nevertheless, these techniques are not always applicable due to varying defect morphologies and progressively advancing stages of the disease. Re-osseointegration or regeneration, by definition, is an event that can only be assessed histologically in experimental models [2,22]. The efficacy of peri-implantitis treatment can vary depending on the outcome variables [22]. In fact, clinical protocols are limited to bone level assessments by radiographs and clinical variables (BOP, PD, REC), while experimental protocols also include histological evaluations regarding inflammation resolution and osseous defect repair [22]. Clinicians cannot truly assess re-osseointegration on their patients unless a histological specimen is harvested and analyzed.
Radiographic investigations are commonly used as the other non-invasive tool to assess bone changes after therapy [23], although questions regarding their reliability have been reported [16,24] (Figure 2). The most accurate solution to identify defect configuration is by direct access during the surgical intervention [25]. Peri-Implant Defect Configuration One of the most important factors related to the success of peri-implantitis regenerative procedures is the peri-implant bone defect configuration. These procedures are not aimed at addressing disease resolution but are an attempt to fill the defect created by the disease. The feasibility of this goal has been shown to be closely associated with the configuration of the peri-implant defect and the number of walls surrounding the lesion (Figure 3). Studies have described a relationship between the peri-implant defect morphology and the clinical success of peri-implantitis therapies [23,25,26]. Therefore, careful consideration of the defect morphology must be made when selecting a peri-implantitis intervention. In 2007, Schwarz et al. [27] proposed a classification of peri-implantitis defects verified through intra-surgical findings in humans and animals. The classification was based on intrabony features and horizontal bone loss, describing the absence of buccal and/or lingual walls, circumferential, and supra- or sub-crestal patterns (Figure 4). In 2019, Monje et al. [23] updated this classification by adding combined defects (Figure 5A).
Prevalence data for the defect configurations seem to vary among studies. Schwarz et al. found circumferential defects to be the most prevalent (55.3%) [27]. In contrast, Garcia-Garcia et al. [28] found about 30% of defects to exhibit a circumferential configuration, while 25% included a buccal dehiscence combined with a circumferential defect. Aghazadeh et al. [25], on the other hand, found two-wall defects to be the most prevalent. Roccuzzo et al. [29] also introduced dissimilar data when exhibiting that one-third of cases (35.7%) had a semi-circumferential component combined with a buccal dehiscence. In a more recent study, Monje et al. [23] described three-wall defects as the most prevalent, followed by buccal dehiscence defects (Figure 5B).
[29] also introduced dissimilar data when exhibiting that one-third of cases (35.7%) had a semi-circumferential component combined with a buccal dehiscence. In a more recent study, Monje et al. [23] described three-wall defects as the most prevalent, followed by buccal dehiscence defects ( Figure 5B). It has been noted that the inconsistencies in these findings may be attributed to anatomical variations at implant placement. Schwarz et al. [27] described that the alveolar ridge width played a role in the number of bony walls formed in the future peri-implantitis defect. Moreover, the studies varied considerably in their anatomical location. For instance, most of the implants analyzed by Schwarz et al. were placed in the posterior region while other studies evaluated implants placed in the anterior and premolar regions [25]. Nonetheless, identifying the defect morphology yielded the opportunity to evaluate the predictability of various peri-implantitis treatments. It has been noted that the inconsistencies in these findings may be attributed to anatomical variations at implant placement. Schwarz et al. [27] described that the alveolar ridge width played a role in the number of bony walls formed in the future peri-implantitis defect. Moreover, the studies varied considerably in their anatomical location. For instance, most of the implants analyzed by Schwarz et al. were placed in the posterior region while other studies evaluated implants placed in the anterior and premolar regions [25]. Nonetheless, identifying the defect morphology yielded the opportunity to evaluate the predictability of various peri-implantitis treatments. Understanding the peri-implant defect morphology is important because of its potential for determining the likelihood of regeneration therapy success [25]. For situations in which regenerative procedures are unlikely to produce favorable results, resective surgeries may offer more clinical benefits. Of all the defect configurations, circumferential defects (Ie) achieved the highest reduction in PD and clinical attachment level (CAL) [26], while class Ib and Ic defects showed the poorest. Class Ib defects were the most prevalent in many of the studies. Therefore, it is more common to see peri-implantitis defect morphologies that are poorly responsive to reconstructive therapies ( Figure 6). Understanding the peri-implant defect morphology is important because of it tential for determining the likelihood of regeneration therapy success [25]. For situa in which regenerative procedures are unlikely to produce favorable results, resective geries may offer more clinical benefits. Of all the defect configurations, circumfere defects (Ie) achieved the highest reduction in PD and clinical attachment level (CAL) while class Ib and Ic defects showed the poorest. Class Ib defects were the most prev in many of the studies. Therefore, it is more common to see peri-implantitis defect phologies that are poorly responsive to reconstructive therapies ( Figure 6). Surface Decontamination The implant surface holds great importance on the success and speed of osseo gration. Different implant brands are characterized by varying treatment surfaces roughness [30]. The question on implant surface characteristics and healing following gical therapy to treat peri-implantitis is not new. It is demonstrated that, as part o regenerative procedure, implant decontamination is essential to obtain positive outc [31]. 
Surface Decontamination The implant surface holds great importance on the success and speed of osseointegration. Different implant brands are characterized by varying treatment surfaces and roughness [30]. The question on implant surface characteristics and healing following surgical therapy to treat peri-implantitis is not new. It is demonstrated that, as part of the regenerative procedure, implant decontamination is essential to obtain positive outcomes [31]. The consensus report from the 8th European Workshop on Periodontology [32] stated that implant surface decontamination is a critical component of surgical treatment. Implant decontamination is aimed at removing bacterial biofilm and resolving infection and inflammation, rendering the surface biocompatible and conducive to bone regeneration and possible re-osseointegration, or at least minimizing bacterial adhesion [2]. Various techniques have been proposed for implant surface decontamination after surgical exposure: mechanical, chemical, laser or photodynamic, or a combination of these [33] (Figure 7). The decontamination process presents multiple challenges. Besides the attempts to solve the infectious process, the implant threads and rough surfaces pose a significant obstacle to the mechanical cleansing that, if not optimal, can lead to the reestablishment of pathogenic microflora and persistence of pathology [33]. In advanced peri-implant defect lesions, surface decontamination alone will not adequately achieve bone regeneration. In these cases, filling the defect with graft materials and growth factors yields better outcomes. Investigations on surgical treatment for peri-implantitis range from in-vitro and animal studies to human clinical trials. Each of these fields provided important insight on outcomes and healing processes. Due to the limitations of each model, study design, length of treatment, materials, outcome measures, and the heterogeneity of data, it is difficult to compare outcome measurements [31,[33][34][35][36].
Pre-Clinical Studies

Pre-clinical experimental studies used a variety of methods to assess re-osseointegration following surgical therapy of peri-implantitis affecting implants with various surface characteristics [37-39]. Wetzel et al. [37] used reference points to indicate the most apical area of the peri-implant defect during surgery, while Persson et al. [38] utilized a fluorochrome marker following surgical therapy. In both studies, the amount of re-osseointegration on rough implants (sand-blasted, acid-etched) was superior to that on smooth, polished surfaces. Similar results were obtained by Namgoong et al. [39], who reported larger amounts of re-osseointegration at implants with sand-blasted, acid-etched/hydroxyapatite-coated surfaces than at implants with a turned/machined surface. The results presented by these authors are in contrast with data reported by Almohandes et al. [22], who showed that re-osseointegration was significantly more frequent at smooth compared to rough surface implants (96% vs. 54%). The odds ratio for smooth implants to achieve re-osseointegration was ~25 compared to rough implants. The authors demonstrated that smooth surfaces showed a significantly higher radiographic bone level gain, enhanced resolution of peri-implantitis lesions, and a larger frequency of re-osseointegration sites.

Human Clinical Studies

Implant surface characteristics were also identified to influence the results of peri-implantitis surgical therapies in clinical human studies. Roccuzzo et al. [29,40] prospectively evaluated a regenerative surgical treatment with a bovine-derived graft for peri-implantitis lesions on two different implant surfaces. A one-year follow-up resulted in clinical and radiographic improvements. The authors concluded that surface characteristics may have an impact on the clinical outcome after regeneration techniques and that complete defect fill was not always predictable. The authors reported that healing was superior around sandblasted, large grit, acid-etched (SLA) test surface implants than around those with a rough titanium-plasma spray (TPS) control surface.
Aghazadeh et al. [25] treated two groups of subjects with autogenous bone or bovine-derived xenograft in conjunction with a collagen membrane. The decontamination protocol consisted of mechanical debridement (titanium instruments) and hydrogen peroxide (H2O2). After 1 year, the bovine-derived xenograft showed significantly better results for bone levels, BOP, plaque index (PI), and suppuration. Isehed et al. [41], in an RCT, demonstrated that surface treatment with Emdogain® (EMD) combined with sodium chlorohydrate could shift the microbiota toward Gram-positive aerobic bacteria and lead to an increase in bone levels. Jepsen et al. [42] utilized titanium brushes and H2O2 prior to placing titanium granules within the defects. It was not possible to demonstrate significant clinical benefits, only marginal bone gain. Roccuzzo et al. [43] chemically treated the diseased implant surfaces with 24% EDTA and 1% chlorhexidine before grafting the sites with DBBM + 10% collagen. Both authors reported PD reduction, inflammation resolution, and radiographic bone fill. In a 4-year follow-up study, Schwarz et al. [44,45] compared the 48- and 84-month regenerative outcomes of two decontamination techniques: Er:YAG laser versus plastic curettes with cotton pellets and sterile saline. They did not find a statistically significant difference between the two treatment modalities. In a case series, Nart et al. [46] performed implantoplasty and filled the defects with allografts mixed with tobramycin and vancomycin (50%-50%) and membranes. The results showed positive outcomes in terms of radiographic bone fill, PD reduction, and CAL gain after a 12-month period. In these last studies, the implants were treated with implantoplasty, which removed the surface properties of the fixtures (Table 1). It is debatable whether this procedure should be performed, and conflicting views exist in the literature [47,48] (Figure 8).
It is noticeable how difficult it is to completely remove titanium debris from the tissues, how it is not possible to completely reach the implant surface in narrow defects, and how much implant structure may have to be removed to accomplish smoothness. Narrower implants may risk fracture due to structural modification. Current evidence in the literature regarding the different clinical decontamination protocols has shown that complete implant surface decontamination (mechanical and chemical) could not even be achieved in vitro; there is a large variation in the effectiveness of the various approaches depending on the type of implant surface [33,48]. Despite the positive results of these studies, decontamination techniques still lack standardization and comprehensive evaluation of their true efficacy [33]. There is no clinical, radiographic, or microbiological evidence to favor one specific decontamination method over another. Further clinical investigations are needed to determine the superiority of a decontamination method, if possible [33].

Regeneration Techniques & Materials

The validation of reconstructive surgical therapy in peri-implantitis should be investigated in pre-clinical trials before applying it to human studies, since evidence of re-osseointegration can only be confirmed histologically [2]. Due to ethical considerations in humans, animal studies have been utilized to demonstrate successful bone regeneration around a previously affected implant [22,51-54]. Many parameters need to be controlled, such as bone substitute materials, membranes, implant surface characteristics, and their decontamination, to understand their different roles in re-osseointegration. The scientific literature still lacks strong consensus demonstrating the absolute benefit of bone substitute materials in successfully repairing and/or regenerating peri-implant bone defects [22]. Schwarz et al. (2007) [27] demonstrated a similarity between naturally occurring peri-implantitis in humans and ligature-induced peri-implantitis in beagle dogs. The results showed comparable configurations and dimensions of the defects, leading the authors to conclude that the dog model could be a valuable representation of human defects. In a systematic review on re-osseointegration after surgical treatment of peri-implantitis, Madi et al. [19] and Renvert et al. [2] investigated the success rate of different protocols.
After the surgical treatment of ligature-induced peri-implantitis, numerous methods to promote re-osseointegration were studied, such as regeneration with or without membranes and with or without bone grafts, laser treatment, and growth factors. Favorable results were observed in the studies that used a combination of bone grafts in guided bone regeneration therapy [19]. Re-osseointegration was achieved in some reports, but it was highly variable. No methods were found to predictably resolve the peri-implant defects [2]. For instance, Schou et al. [51-53] performed a series of experiments on monkeys with surgical treatment of ligature-induced peri-implantitis. Non-resorbable membranes (expanded polytetrafluoroethylene, ePTFE) with autogenous bone graft (ABG) or xenograft (Bio-Oss) were employed; at the histologic analysis, both combinations resulted in re-osseointegration (36% in the Bio-Oss group and 45% in the autogenous group). Almohandes et al. (2019) [22] investigated the effect of bone substitute materials on hard and soft tissue healing in regenerative surgical therapy of dog ligature-induced peri-implantitis affecting implants with rough and smooth surfaces. The mean radiographic bone level (RBL) gain was significantly larger for the smooth implants (1.32 ± 0.69 vs. 0.27 ± 1.76 mm), showing that outcomes were more favorable in relation to the surface of the implant than to the treatment rendered. In fact, in the rough implant group, the best radiographic result was achieved by controls, where no grafting material was added. Moreover, the additional use of a collagen membrane did not seem to impart additional benefits on the outcomes. The histologic analysis confirmed that smooth implants consistently showed better results with higher bone levels regardless of the regeneration technique. This trend was almost halved at the moderately rough sites. The issue seemed to depend on the quality and extent of implant decontamination, which is more challenging on rough surfaces [22]. These findings are consistent with results presented in a dog study by Ramos et al. [54], who reported that the use of bone filler material did not improve results regarding re-osseointegration and bone level gain.

Adjunctive Therapies

Some authors have investigated the use of growth factors or lasers to understand their potential additive effect on re-osseointegration. You et al. [55] used ABG with or without platelet-rich fibrin (PRF) after decontaminating the diseased implants with alternating gauze soaked in 0.1% chlorhexidine and saline. Re-osseointegration (~50%) was identified in the treatment group featuring ABG + PRF without membranes. Park et al. [56] applied three treatment modalities after surgical defect exposure: hydroxyapatite (HA) particles and collagen gel (control), HA with collagen gel containing autologous periodontal ligament stem cells (PDLSCs), and HA particles with collagen gel containing BMP-2-expressing autologous PDLSCs. Despite no significant difference between groups regarding BIC, the histological specimens showed a significantly higher amount of re-osseointegration for the BMP-2/PDLSC group (2.1 mm), with 61% of the defect regenerated. Shibli et al. [57] tested a photosensitization technique, combining a low-level diode laser with toluidine blue O, on 4 different implant surfaces without the use of biomaterials but covered with ePTFE membranes. The authors concluded that photosensitization could provide significant bone fill with re-osseointegration.
Machtei et al. [58] evaluated the effect of a bone substitute material, beta-tricalcium phosphate (β-TCP), with or without endothelial progenitor cells (EPC). It was reported that the combination of β-TCP and EPC enhanced bone formation after surgical therapy, while differences between the β-TCP group and controls were small. The benefits derived from using bone grafts and membranes were not always significant. In other words, the use of bone substitute materials was not essential for bone level gain in radiographs, resolution of peri-implantitis lesions, or occurrence of re-osseointegration. Thus, experimental studies investigating the benefit of bone substitute materials in the management of peri-implantitis-associated osseous defects are few and do not provide evidence to support the use of those graft materials. Bone substitute materials do not provide obvious advantages in achieving bone fill or re-osseointegration, even though interpretation must be made with care, considering the specific nature of the experimental model [2,22]. The osseous defect that occurs in the dog mandible following experimental peri-implantitis often demonstrates a contained, symmetric morphology with well-preserved bone walls. Bone healing that occurs after surgical therapy is favored by the contained morphology of the bone defect more than by the potential benefit of placing a bone substitute material [22].

Studies on Humans

Regenerative procedures around teeth imply reconstitution of the lost attachment apparatus composed of bone, cementum, connective tissue, and periodontal ligament [59]. Translated to the dental implant realm, bone regeneration and re-osseointegration are the sole objective therapeutic goals in specific peri-implant bony defects on functioning implants [2,3]. Daugela et al. [60] performed a systematic review of the literature to identify the most effective and predictable therapeutic option for the surgical regenerative treatment of peri-implantitis. The review revealed that the weighted mean radiographic bone level (RBL) fill was close to 2 mm, PD reduction was 2.78 mm, and BOP was reduced by more than 50%. Defect fill in studies using and not using barrier membranes for graft coverage was 1.86 mm and 2.12 mm, respectively. High heterogeneity among the studies regarding defect morphology, surgical protocols, and selection of biomaterials was detected. The results showed an improvement of clinical scenarios after the surgical regenerative approach to peri-implantitis; however, the authors could not find scientific evidence regarding the superiority of the regenerative versus non-regenerative surgical treatment. They also concluded that the use of a barrier membrane or submerged healing did not seem to be mandatory for a successful outcome. A contrasting conclusion was drawn in a systematic review by the AAP Task Force in 2014, stating that although the evidence was limited, the use of grafting material and barrier membranes may contribute to a better reduction of PD and defect fill [61]. A consensus report from Khoury et al. [31] acknowledged that surgical regenerative peri-implantitis therapy improved clinical and radiographic treatment outcomes compared with baseline with up to 10 years of follow-up. However, the authors did not find evidence to support the superiority of a specific material, product, or membrane in terms of long-term clinical benefits.
The surgical approach to treat peri-implantitis may be justified when the non-surgical approach has failed, as evidenced by the persistence of BOP and suppuration. Ramanauskaite et al. [34] reviewed the literature to identify the difference in clinical and radiographic outcomes between regenerative and non-augmentative surgical therapies. Regenerative peri-implantitis therapy demonstrated significant improvements in BOP and PD values compared to baseline. In particular, the mean BOP reduction ranged from 25.9% to ~90% and 91% over a 1- to 7-year period, and the mean PD reduction ranged from 0.74 to 5.4 mm. The mean radiographic bone fill ranged between 57% and 93.3%. Furthermore, the radiographic reduction of the intrabony defect height varied from 0.2 to 2.8 mm and up to 3.70 and 3.77 mm. A variety of bone grafting materials were applied (autogenous bone, alloplasts, xenograft, and titanium granules), with and without resorbable or non-resorbable membranes [34]. Xenografts demonstrated significantly higher radiographic bone level gain compared to ABG in the short term. The reason could be attributed to the greater radio-opacity of these materials compared with ABG. In a recent systematic review by Aljohani et al. [62], the regenerative treatments were compared according to PD, BOP, and RBL. The materials evaluated included autogenous bone compared to bovine-derived xenografts with a resorbable collagen membrane, porous titanium granules without membranes, and autogenous bone mineral with a collagen membrane. Lastly, the detoxification methods included in the review were 3% H2O2 and saline, 24% EDTA gel and saline, implantoplasty and Er:YAG laser, and sterile saline only. All the interventions demonstrated a significant decrease in PD when compared to the baseline (pre-intervention) PD measurements. However, the difference in mean PD among all the studies was not statistically significant. Aghazadeh et al. [25] achieved the highest mean reduction in PD (3.1 mm) with the use of bovine-derived xenograft and a collagen membrane. The lowest reduction in PD (1.2 mm) was observed in patients treated with implantoplasty and a saline rinse; this decontamination method reduced BOP by 85.2%. When evaluating the RBL, most studies showed an increase compared to baseline. However, this parameter also failed to reach statistical significance across groups. By using porous titanium granules, Jepsen et al. [42] reported the greatest mean defect fill when compared to the other interventions. Overall, the five studies included demonstrated improvements in clinical conditions when compared to baseline. Nevertheless, there was no statistically significant difference in PD, BOP, or RBL when comparing the studies to each other. The authors believe that the reduction in BOP was the outcome of the normal healing process after the surgical treatment rather than of the materials or decontamination methods used. The authors further suggested that porous titanium granules may be the best bone substitute to achieve the greatest RBL, followed by xenograft and then autogenous bone. A limitation of evaluating RBL is that it does not indicate complete re-osseointegration. Moreover, because autogenous bone is less radiopaque than titanium granules, it may not appear to have comparable radiographic bone levels. Two examples of regeneration using bone grafts are shown in Figures 9 and 10; note the partial fill of the intra-bony defect with a residual supra-crestal fixture.
Success of Regenerative Therapy

The success of regenerative therapy varies from study to study. The use of composite outcomes for treatment success is not standardized and varies among different studies [34,35]. Various authors reported different treatment outcome goals when defining treatment success (Table 2). A recent review [34] showed that, depending on the criteria used, treatment success varied between 11% and 38.5% (implant level, in a 1-year period), and between 14.3% and 66.7% (implant level) and 60% (patient level) at a 5-7-year follow-up. A systematic review by Sahrmann [63] investigated the available literature on regenerative treatment of peri-implantitis using bone grafts and membranes. Qualitative measures showed ~10% complete radiographic bone fill, 85.5% incomplete defect resolution, and no bone fill in 4% of the cases. High heterogeneity among disinfection protocols and biologic materials was noted, a limitation that made a meta-analysis impossible. The clinical outcomes of surgical regenerative therapy were reported to be influenced by the implant surface characteristics [29] as well as by the peri-implant defect configuration. For instance, moderately rough or smooth surface implants seemed to outperform rough surface implants in terms of clinical treatment outcomes; furthermore, circumferential-type defects responded better to therapy compared to dehiscence-type defects [2,24,27,36].

Table 2. The success of regenerative treatment reported by the listed authors. As noted, the criteria vary among studies, making the concept of treatment success difficult to standardize. PD (Probing Depth); BL (Bone Loss); BOP (Bleeding on Probing); SUP (Suppuration); RF (Radiographic Fill); DF (Defect Fill). Adapted from Ramanauskaite et al. [34].

Time Stability of Therapy

A systematic review from Heitz-Mayfield [35] investigated the treatment of peri-implantitis, defining therapy as successful (implant survival) if PD < 5 mm and no progressive bone loss were observed 12 months after treatment.
The studies varied in their decontamination methods, grafting materials, membrane usage, and implant surfaces. In a 12-month follow-up study, Roccuzzo et al. [40] found a mean radiographic bone gain of 1.7 mm and an incomplete defect fill in 75% of implants. Two implants were lost after developing suppuration. Wiltfang et al. [65], in a 12-month follow-up study, employed implantoplasty and a mixture of autogenous bone and demineralized xenogenic bone graft with growth factors and systemic antibiotics. The investigators reported an average PD reduction of 4 mm, a reduction in BOP from 61% of implants to 25%, a mean gain in radiographic bone height of 3.5 mm, and an increase in recession of 2 mm. Overall, one implant was lost due to mobility. Froum et al. [66] investigated the use of enamel matrix derivatives, bone graft mixed with PDGF, and a collagen membrane or subepithelial connective tissue graft. With a 3-7.5-year follow-up, the researchers reported a mean PD reduction of 5.3 mm and a radiographic bone gain of 3.4 mm. No implants exhibited recession, and no implants were lost. Haas et al. [67] investigated the use of penicillin and photodynamic therapy with autogenous bone and ePTFE membranes. The researchers reported a mean radiographic bone gain of 2 mm and the loss of two implants. Roos-Jansaker et al. [50] investigated the submerged approach using bone substitutes and resorbable membranes. At the 12-month follow-up, implants exhibited a radiographic bone fill of 2.3 mm, a mean PD reduction of 4.2 mm, and a BOP reduction from 75% to 13%. Schwarz et al. [26] studied the regenerative potential of non-submerged healing using xenografts with a collagen membrane. The 12-month follow-up demonstrated a PD reduction from 6.9 mm to 2.0 mm. In a 3-year follow-up study, Behneke et al. [68] used air-powder abrasives, autografts, and systemic metronidazole to treat peri-implantitis. The researchers reported no implant losses, a mean PD reduction of 3.1 mm, and a median marginal bone gain of 4 mm. These favorable outcomes were maintained for up to 3 years (Table 3). In a systematic review evaluating regenerative treatments by Heitz-Mayfield et al. [35], it was not possible to advocate one specific treatment as being more successful in achieving regeneration than the others, due to the great heterogeneity among studies. Regardless, certain aspects of therapy seem to influence treatment outcomes positively. The beneficial factors found in the pretreatment phase include oral hygiene instruction, smoking cessation, prosthesis adjustments, and nonsurgical debridement. In terms of surgical access, the use of full-thickness mucoperiosteal flaps and the use of bone grafts or substitutes seem to be associated with improved treatment outcomes. Postoperative protocols seemed to positively influence treatment outcomes when antibiotics were used and when a chlorhexidine rinse was included in the postoperative management. Lastly, maintenance care seemed to improve treatment outcomes when a 3- to 6-month interval was utilized.

Table 3. Studies that reported the number of patients (Pt) treated, % therapy success, mean probing depth (PD) change, % of bleeding and/or suppuration on probing, and radiographic bone levels at 12 months after treatment. Successful outcomes are defined as implant survival with a mean PD < 5 mm and no progressive bone loss 12 months after treatment. AB: Autogenous Bone; T0: Baseline; m: month; wk: week; Pt: patient; im: implant. Adapted from Ramanauskaite et al. [34].
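The composite success definition above lends itself to a simple illustration. The sketch below is only an illustrative example, not code or data from any cited study: the record fields, threshold values, and the made-up cohort are assumptions. It classifies per-implant 12-month follow-up records against a Heitz-Mayfield-style criterion (implant survival, mean PD < 5 mm, no progressive bone loss).

```python
from dataclasses import dataclass

@dataclass
class ImplantFollowUp:
    """Hypothetical 12-month follow-up record for one treated implant."""
    survived: bool        # implant still in function at 12 months
    mean_pd_mm: float     # mean probing depth at 12 months (mm)
    bone_loss_mm: float   # radiographic bone loss since treatment (mm; <= 0 means gain)

def is_successful(rec, pd_threshold_mm=5.0, progressive_loss_mm=0.0):
    """Composite success: survival, mean PD below threshold, no progressive bone loss."""
    return (rec.survived
            and rec.mean_pd_mm < pd_threshold_mm
            and rec.bone_loss_mm <= progressive_loss_mm)

def success_rate(records):
    """Implant-level success rate (%) for a cohort of follow-up records."""
    if not records:
        return 0.0
    return 100.0 * sum(is_successful(r) for r in records) / len(records)

# Invented example records (not data from any cited study):
cohort = [
    ImplantFollowUp(survived=True, mean_pd_mm=3.2, bone_loss_mm=-1.1),  # bone gain -> success
    ImplantFollowUp(survived=True, mean_pd_mm=4.8, bone_loss_mm=0.0),   # stable -> success
    ImplantFollowUp(survived=True, mean_pd_mm=5.6, bone_loss_mm=0.4),   # deep pockets -> failure
]
print(f"Implant-level success: {success_rate(cohort):.1f}%")  # -> 66.7%
```

Because the published criteria differ (Table 2), the thresholds are kept as parameters rather than constants; this is one reason reported success rates range so widely (from 11% to 66.7% at the implant level, depending on the definition applied).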
Discussion

Animal research has demonstrated that it is possible to regenerate bone to achieve re-osseointegration on a previously infected implant surface [22,37-39,55-57,69-71]. These studies, which include histological evidence, covered different methods of implant surface decontamination, such as mechanical, chemical, and adjunctive (i.e., lasers). These pre-clinical studies tested regenerative procedures using membranes, bone grafting materials, and biologic factors under different surgical protocols. Multiple studies also investigated the effect of different implant surface decontamination techniques prior to regenerative procedures on the potential for re-osseointegration. Promising results were observed in the studies that used a combination of bone grafts and membranes, even if others did not confirm the same outcomes. The multitude of techniques and related outcome measures differed among authors. Presently, peri-implantitis treatment is not predictable, and no single protocol can be identified as the most effective. Heterogeneity was high due to the different protocol designs and animal models used, the defect size and shape, the number, location, and kind of implants placed, the implant surface, the time of healing, peri-implantitis induction, treatment length, and timing of the animal sacrifices. Also, many different protocols were used regarding pre-surgical plaque reduction, oral hygiene measures, and definitions for outcome measurements. Some studies failed to report some of the aforementioned information. As a result, it is difficult to reach an overall definitive conclusion. Many studies agree that surgical treatment seems to be the most predictable treatment option when adequate chemical and mechanical implant surface decontamination is achieved. In summary, based on animal studies, re-osseointegration can be achieved on a previously infected and contaminated implant surface. Re-osseointegration was highly variable among and between studies and was unpredictable. It may be seen in pre-clinical induced peri-implantitis defects following regeneration and may also be influenced by the implant surface properties. Human studies do not allow any definitive conclusions. According to the current evidence, it seems possible to achieve some defect fill and disease resolution by means of regenerative techniques using different bone substitute grafts (autogenous, xenograft, allograft, and titanium granules), with or without the adjunctive use of barrier membranes [19,31,34-36]. These regenerative techniques should be considered in areas of high esthetic demand and when the defect morphology is suitable for a predictable outcome [23,26]. The initial intraosseous defect fill could be maintained over time if low plaque and bleeding scores can be controlled by effective oral hygiene and frequent maintenance [49]. As with every disease, prevention is the best form of treatment, and peri-implantitis is no exception. The scarcity of human histological evidence still makes it difficult to form generalizations about the efficiency of regenerative procedures and their potential for re-osseointegration [36]. The defect configuration is a determining factor for a predictable outcome following regenerative treatment. Yet, several experimental studies demonstrated that even in circumferential defects the amount of regeneration achieved is limited. It is important to inform the patient clearly about the possibility of recession and subsequent exposure of the body of the implant.
Unlike natural teeth, peri-implant lesions do not seem to respond predictably to either non-surgical or surgical treatments [36]. The main common postoperative complication of regenerative therapies seems to be membrane exposure, when membranes are used [35,58]. The recent consensus on peri-implant diseases identified strong evidence that lack of oral hygiene, history of periodontitis, and smoking are risk indicators for peri-implantitis [1]. It is therefore advised that, following implant placement, patients should be included in a strict and regular maintenance schedule [72]. If peri-implantitis is already established, the proposed strategies and recommendations for its treatment are still considered empirical [36]. From the existing evidence, it seems that non-surgical therapy is not completely effective, at least not in advanced cases [36]. The advantage of surgical techniques is the ability to achieve adequate access to effectively degranulate the inflamed tissues, to modify implant surfaces, to decontaminate the implant, to reduce PD, and, when indicated, to attempt regeneration.

Conclusions

Based on the current evidence in reconstructive peri-implant therapy, regenerative surgical techniques demonstrated improvement of peri-implant clinical and radiographic parameters. Yet, there is not enough evidence to identify a specific grafting material or membrane that would grant long-term clinical treatment benefits over the others. No specific surface decontamination treatment can be considered superior in terms of influencing the clinical outcomes of regenerative peri-implantitis therapy. One of the most important treatment factors is the peri-implant bone defect morphology, which has demonstrated its influence on the final therapy outcomes. The initial intraosseous defect fill could be maintained over time if low plaque and bleeding scores are controlled by effective oral hygiene and frequent maintenance. Regenerative therapies should be applied in specific and selected clinical scenarios.
Arctic cut-off high drives the poleward shift of a new Greenland melting record

Large-scale atmospheric circulation controls the mass and energy balance of the Greenland ice sheet through its impact on radiative budget, runoff and accumulation. Here, using reanalysis data and the outputs of a regional climate model, we show that the persistence of an exceptional atmospheric ridge, centred over the Arctic Ocean, was responsible for a poleward shift of runoff, albedo and surface temperature records over Greenland during the summer of 2015. New records of monthly mean zonal winds at 500 hPa and of the maximum latitude of ridge peaks of the 5,700 ± 50 m isohypse over the Arctic were associated with the formation and persistence of a cutoff high. The unprecedented (1948-2015) and sustained atmospheric conditions promoted enhanced runoff, increased the surface temperatures and decreased the albedo in northern Greenland, while inhibiting melting in the south, where new melting records were set over the past decade.

Atmospheric circulation affects the energy and mass budgets of the Greenland ice sheet (refs 1-5) by controlling cloud coverage and optical depth (ref. 6), and by driving the spatial distribution and amount of surface melting and accumulation (refs 7,8). Improving our understanding of the impact of atmospheric circulation on Greenland's surface mass balance is, therefore, crucial for the refinement of climate and ice-sheet models, and will ultimately enable improved estimates of current and future contributions to sea level by the largest ice body in the Northern Hemisphere. Here we show that a poleward shift of the melting record over the Greenland ice sheet in 2015 was driven by exceptional atmospheric conditions characterized by new records in mean zonal winds and jet stream wave amplitude associated with the formation and evolution of an Arctic cutoff high.

Results

Atmospheric conditions and indicators. Our analysis of the geopotential height at 500 hPa (Methods) shows that during July 2015 a persistent atmospheric ridge was centred over the Arctic Ocean (Lincoln Sea, north of Greenland), with geopotential height anomalies up to 3.7 standard deviations (σ, ~150 m) above the 1981-2010 long-term mean (Fig. 1a). The North Atlantic Oscillation (NAO; Methods) and the Greenland Blocking Index (GBI, defined as the 500 hPa geopotential height area-averaged between 60-80°N and 20-80°W (ref. 9); Methods) have been associated with extreme melting events over Greenland (refs 9-12). The summer average (June-July-August) NAO value in 2015 of −1.61 was close to the summer value in 2012 of −1.59. Differently from 2015, however, the atmospheric ridge in 2012 was centred over the Greenland ice sheet (refs 2,10; Supplementary Figs 1 and 2). The July monthly averaged NAO value set a new record low of −1.23 (since 1899), being 3.2σ below the 1981-2010 mean (Fig. 1c). Concurrently, the GBI also set a new record for the month of July (Fig. 1c; Supplementary Fig. 3b), being 2.8σ above the 1981-2010 mean. The June and August conditions in 2015 were not as exceptional as in 2012, with mean June and August NAO values in 2015 being higher than the same quantities in 2012 (Supplementary Fig. 3a).

Arctic cutoff high and new surface Greenland records. The July 2015 high-pressure ridge over Greenland evolved from a cutoff high that formed along the eastern coast of Greenland at the end of June (white circle in Fig. 2b).
Over the same period, the jet stream, here characterized through the 5,700 ± 500 m 500 hPa isoheights (ref. 13), broke into three positive heights around the Northern Hemisphere (Fig. 2c). These conditions reinforced the atmospheric ridge over the Greenland ice sheet, which moved westward and persisted until mid-July (Fig. 2b-e). This promoted new records for meltwater production, runoff, albedo and surface temperature over northwest Greenland (Fig. 3; Supplementary Fig. 4), as simulated by the Modèle Atmosphérique Régionale (MAR; Methods) (refs 1,3,7). The monthly averaged record-setting values for simulated albedo and surface temperature in northwest Greenland for July 2015 were, respectively, ~2.5σ below and above the 1981-2010 mean, while runoff was up to ~3σ above the mean (Fig. 3). The spatial distribution of the 2015 surface albedo anomaly simulated by MAR (Supplementary Fig. 5a-c) indicates that the July 2015 negative anomaly was driven by an albedo decrease at relatively high elevations, promoted by the reduced summer snowfall (associated with anticyclonic conditions) and by increased surface melting and runoff. The same atmospheric conditions that promoted these new records over northern Greenland also inhibited melting in the south, where enhanced melting and new records have been occurring over recent years (refs 1,10). This had implications for the surface mass balance of Greenland at both regional and ice-sheet scales (Supplementary Fig. 6). The summer exposure of bare ice and the presence of surface impurities have been suggested to be driving the enhanced melting observed over the past ~20 years (ref. 1). However, the 2015 summer atmospheric conditions promoted the flow of cold air from the Arctic Ocean (Fig. 1b), favouring the accumulation of fresh new snow with a high albedo along western Greenland, hence offsetting the effects of bare ice exposure (Supplementary Figs 5-7).

Mean zonal winds and jet stream wave amplitude records. The westward shift of the cutoff high between the end of June and the beginning of July 2015 is associated with new records for both the mean zonal winds at 500 hPa (Fig. 1d) and the maximum latitude of ridging (Fig. 2g; Supplementary Fig. 8).

Discussion

The mechanisms that created and maintained the 2015 observed ridge may be linked to forcing from very strong extratropical cyclones (ref. 14), to forcings from southern regions (ref. 15) or to latent heat release (ref. 16). Another possibility is local forcing related to Arctic amplification (refs 13,17). Although recent melt records over Greenland have been linked to exceptional mid-tropospheric atmospheric conditions, with episodes of atmospheric blocking ridges being associated with Greenland's melting extremes (refs 9,12), little or no attention has been given to the impact of the anticipated effects of Arctic amplification on the surface mass balance of the Greenland ice sheet. In this regard, the 2015 records for both the 500 hPa zonal winds and the maximum ridging latitude are consistent with the proposed effects on upper-level atmosphere characteristics associated with Arctic amplification (refs 13,17). The poleward shift of the surface melting record in 2015 clearly indicates that improving our understanding of the impact of exceptional atmospheric conditions on the spatial distribution of extreme melting is crucial.
Besides modulating the contribution of Greenland to sea level through the volume of meltwater production, the location of enhanced melting can influence ocean/ice interaction processes and ocean circulation (ref. 18), as well as bio-productivity, by altering salinity and temperature profiles of the surrounding ocean. Furthermore, the evolution of surface melting strongly impacts Greenland's hydrological system, with implications for the englacial and subglacial systems, as well as ice discharge and dynamics (ref. 19). Currently, several general circulation global climate models and Earth System models do not properly capture summer Arctic atmospheric forcing (ref. 8), limiting our capability to properly project the evolution of the surface mass balance and melting under future warming scenarios. Our work presented here demonstrates a strong need to identify the mechanisms that create and maintain strong cutoff highs. The new atmospheric records, and the trends of mean zonal winds and wave amplitude of the jet stream, are consistent with the suggested effects of Arctic amplification (refs 13,17). Recent studies provide theoretical arguments that slowing zonal winds might be associated with larger planetary wave amplitudes (ref. 20) and that Arctic amplification and/or sea-ice loss do intensify existing ridges, thereby contributing to their persistence (refs 21,22). In the event studied here, however, the exceptional melting followed the ridging rather than preceding it, in alignment with other studies indicating that observations and model results do not support the above-mentioned expected effects of Arctic amplification (refs 23-27). Be that as it may, understanding the impact of cutoff highs on Greenland's surface mass balance, and studying the mechanisms driving the trends and extremes of the anticipated effects of Arctic amplification, are crucial tasks in view of the potential regional and global impacts of long-term enhanced melting over Greenland.

Methods

Here we use daily outputs obtained from the average of 120 s outputs produced by the model. The temporal configuration for the runs is from 1948 to the present. The sea surface temperature and the sea-ice cover are also prescribed every 6 h in the model, using NCEP/NCAR reanalysis data (ref. 29). No nudging or interactive nesting is used in any of the experiments, with the atmospheric fields over the Greenland ice sheet computed by the atmospheric module of MAR. The atmospheric model, in turn, interacts with the CROCUS model, which provides the state of the snowpack and the associated surface mass balance and energy balance quantities (for example, albedo and runoff).

Code availability. MAR is an open-source code available to the scientific community. The source code for the MAR version used in this study is available at ftp://tedesco-dell.ldeo.columbia.edu/cryoftp/MARv3.5.2src_2015-03-18.tgz. The codes used for analysing the reanalysis data are available upon request from the authors.
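As a concrete illustration of the indices used above, the sketch below computes the Greenland Blocking Index (the 500 hPa geopotential height area-averaged over 60-80°N and 20-80°W, as defined in the text) and its standardized July anomaly relative to the 1981-2010 climatology from NCEP/NCAR reanalysis. This is a minimal sketch rather than the authors' analysis code: the file name, the assumption that the data are already subset to 500 hPa in a variable named "hgt", the north-to-south latitude ordering, and the 0-360°E longitude convention are all assumptions made for the example.

```python
import numpy as np
import xarray as xr

# Assumed input: monthly NCEP/NCAR reanalysis geopotential height at 500 hPa,
# variable "hgt" with dims (time, lat, lon); lat ordered 90 -> -90, lon in 0-360E.
ds = xr.open_dataset("hgt.mon.mean.500hPa.nc")   # hypothetical file name
z500 = ds["hgt"]

# Greenland Blocking Index: area-averaged Z500 over 60-80N and 20-80W (280-340E),
# weighted by cos(latitude) so high-latitude grid cells are not over-counted.
box = z500.sel(lat=slice(80, 60), lon=slice(280, 340))
weights = np.cos(np.deg2rad(box["lat"]))
gbi = box.weighted(weights).mean(dim=("lat", "lon"))

# Standardized anomaly of the July GBI against the 1981-2010 climatology,
# matching how departures are reported in the text (units of sigma).
july = gbi[gbi.time.dt.month == 7]
base = july.sel(time=slice("1981", "2010"))
z_score = (july - base.mean("time")) / base.std("time")
print(z_score.sel(time="2015-07").item())  # ~ +2.8 sigma for July 2015 per the text
```

The same standardization against the 1981-2010 base period is what the text refers to when quoting departures such as 2.8σ for the July GBI or 3.2σ for the July NAO.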